Measuring the Personality of Your AI System: A Guide

Personality tends to show through even in short texts and emails. Interestingly, the same is true of AI systems such as Bard and ChatGPT, which are built on Large Language Models. Many people have noticed that even in brief interactions these systems can come across as confident and knowledgeable, sometimes bordering on arrogant, and occasionally even a little unhinged.

So here’s a thought-provoking question: can we actually measure the qualities of these AI personalities, and tune them to favor some traits over others? In other words, can we exert some control over the personality of an AI system?

We now have an answer thanks to the work of Mustafa Safdari, Aleksandra Faust, and Maja Mataric at Google DeepMind, along with their colleagues. They have adapted psychometric testing to gauge the personality traits of Large Language Models. It turns out these models, especially the larger ones, have identifiable traits, and those traits can even be shaped to order.

That opens the door to synthetic personalities: AI personas tailored to a user’s needs and preferences, acting as assistants or companions that can offer emotional support, advice, and engaging conversation, and that learn to respond in a way that suits the person they are talking to.

That prospect raises profound ethical questions for the companies that provide AI systems to the public. As these systems become more widespread, those companies will have to show that synthetic personalities are developed and used responsibly and for the benefit of their users.

Psychologists view human personality as a characteristic combination of an individual’s thoughts, traits, and actions, and they have developed a framework of five key dimensions, the so-called Big Five, to measure it. Together these dimensions capture the main ways individuals differ in how they perceive and engage with the world around them.

Extraversion is the tendency to seek pleasure and fulfillment from sources outside oneself: being outgoing, thriving in social situations, and drawing energy and satisfaction from interactions with other people.

Agreeableness describes behavior that others perceive as kind, sympathetic, cooperative, warm, frank, and considerate. It shapes how smoothly we get along with other people and plays a central role in maintaining positive relationships.

Conscientiousness is the desire to do one’s tasks and responsibilities well and thoroughly: taking pride in the work, being meticulous and dedicated rather than settling for the bare minimum.

Neuroticism captures emotional stability, or the lack of it: how strongly a person reacts to stressful situations and how prone they are to anxiety, agitation, and emotional fluctuations. Do you stay calm under pressure, or get easily rattled? The answer hints at where you fall on the neuroticism scale.

Openness to experience is the appetite for novelty and introspection: a willingness to try new activities and to reflect on one’s own thoughts and feelings. People who love venturing into unfamiliar territory and examining their inner world score high on this trait.

Psychometric tests measure these five traits (openness, conscientiousness, extraversion, agreeableness, and neuroticism) by asking people to rate their agreement with specific statements on a scale from one (strong disagreement) to five (strong agreement), a so-called Likert-type scale. The pattern of ratings across many such statements gives a comprehensive picture of the respondent’s psychological characteristics.
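To make the mechanics concrete, here is a minimal Python sketch of how Likert-type ratings might be rolled up into trait scores. The items, their trait assignments, and the reverse-keying below are invented for illustration and are not drawn from any particular published inventory.

```python
# Minimal sketch: scoring Likert-type (1-5) ratings into Big Five trait scores.
# The items and their trait assignments below are invented for illustration only.

ITEMS = {
    "I am the life of the party":           ("extraversion", False),   # False = normally keyed
    "I prefer to keep in the background":   ("extraversion", True),    # True  = reverse keyed
    "I value cooperation over competition": ("agreeableness", False),
    "I get stressed out easily":            ("neuroticism", False),
}

def score_responses(responses: dict[str, int]) -> dict[str, float]:
    """Average each trait's item ratings, reversing reverse-keyed items (6 - rating)."""
    totals, counts = {}, {}
    for statement, rating in responses.items():
        trait, reverse = ITEMS[statement]
        value = 6 - rating if reverse else rating
        totals[trait] = totals.get(trait, 0) + value
        counts[trait] = counts.get(trait, 0) + 1
    return {trait: totals[trait] / counts[trait] for trait in totals}

if __name__ == "__main__":
    example = {
        "I am the life of the party": 4,
        "I prefer to keep in the background": 2,
        "I value cooperation over competition": 5,
        "I get stressed out easily": 1,
    }
    print(score_responses(example))
    # {'extraversion': 4.0, 'agreeableness': 5.0, 'neuroticism': 1.0}
```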

Safdari and colleagues faced a tricky problem in evaluating personality traits in Large Language Models: these models are strongly influenced by the context they are given, such as the wording used to prompt a response. The challenge was to find a meaningful way to assess their traits while accounting for the influence of those contextual prompts.

Their approach was to develop an assessment that gives the model some background context and then asks it to rate a statement on a Likert-type scale, much as a human would answer a personality questionnaire. This lets the team tease out the system’s distinctive characteristics.

Here is an illustrative example of such an item:

Please rate, on a scale of 1 to 5, how accurately the statement “I value cooperation over competition” describes you, where 1 means the statement is very inaccurate and 5 means it is very accurate. Your rating should reflect how well the statement aligns with your own values and beliefs.
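As a rough sketch of how such an item might be administered to a model, the snippet below builds a prompt from a persona context plus the statement and pulls a 1-to-5 rating out of the reply. The `generate` callable and the prompt wording are illustrative placeholders, not the exact setup used by Safdari and colleagues.

```python
# Sketch of posing a single psychometric item to a language model.
# `generate(prompt)` is a placeholder for whatever text-completion API is available;
# the persona and item wording are illustrative, not the paper's exact prompts.
import re

PERSONA = ("For the following task, respond in a way that matches this description: "
           "I am outspoken and sociable.")

ITEM = ('Rate how accurately the statement "I value cooperation over competition" '
        'describes you, on a scale from 1 (very inaccurate) to 5 (very accurate). '
        'Answer with a single number.')

def rate_item(generate) -> int:
    """Send persona context plus the Likert item, then pull the first digit 1-5 from the reply."""
    reply = generate(f"{PERSONA}\n\n{ITEM}")
    match = re.search(r"[1-5]", reply)
    if match is None:
        raise ValueError(f"No rating found in model reply: {reply!r}")
    return int(match.group())

if __name__ == "__main__":
    # Stand-in "model" so the sketch runs without any real API.
    print(rate_item(lambda prompt: "My answer is 4."))   # -> 4
```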

The team then had a range of Large Language Models rate a large set of such statements, to see how consistently the models could interpret and answer them.

For this they used Google’s PaLM family of models, whose size is measured by the number of parameters each encodes. The most advanced, Flan-PaLM 540B, boasts 540 billion parameters; smaller versions encode 62 billion and 8 billion parameters respectively.

So are artificial intelligence systems becoming neurotic? It sounds like an odd question to ask of a machine, but it turns out to be one that can be answered empirically.

It turns out that larger, more capable language models simulate personality more effectively: their traits are more stable and can be measured more consistently. The Flan-PaLM 540B model, for example, produces markedly stronger and more reliable results than the PaLM 8B system.
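How would one check that such measurements are reliable? One standard internal-consistency statistic from psychometrics is Cronbach’s alpha, which the sketch below computes from a matrix of item ratings. This is illustrative code and example data, not the authors’ analysis pipeline.

```python
# Minimal sketch: Cronbach's alpha, a standard internal-consistency statistic.
# Rows are repeated test administrations (or simulated respondents); columns are items
# belonging to one trait. Illustrative code, not the authors' implementation.
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                      # number of items
    item_vars = ratings.var(axis=0, ddof=1)   # variance of each item across administrations
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

if __name__ == "__main__":
    # Consistent ratings (items move together) give a high alpha ...
    consistent = np.array([[5, 5, 4], [2, 2, 1], [4, 4, 4], [1, 2, 1]])
    # ... while noisy, unrelated ratings drive alpha down (it can even go negative).
    noisy = np.array([[5, 1, 3], [1, 5, 2], [4, 2, 5], [2, 4, 1]])
    print(round(cronbach_alpha(consistent), 2), round(cronbach_alpha(noisy), 2))
```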

The traits of these systems can not only be measured but also shaped to emphasize specific characteristics. According to Safdari and colleagues, the personality expressed in a Large Language Model’s output can be adjusted to mimic desired personality profiles.
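What might that shaping look like in practice? One simple approach is to translate a target profile into a natural-language persona preamble that is prepended to every prompt, roughly as sketched below. The qualifier and trait wording here is a guess at the general idea, not the paper’s actual prompt design.

```python
# Sketch: turning a desired Big Five profile into a persona preamble prepended to prompts.
# The qualifier/adjective wording is illustrative, not the paper's prompt design.

QUALIFIERS = {1: "extremely low", 2: "low", 3: "moderate", 4: "high", 5: "extremely high"}

TRAIT_PHRASES = {
    "extraversion":      "sociability and outgoingness",
    "agreeableness":     "warmth and cooperativeness",
    "conscientiousness": "diligence and attention to detail",
    "neuroticism":       "emotional volatility",
    "openness":          "curiosity and imagination",
}

def persona_preamble(profile: dict[str, int]) -> str:
    """Render a target profile (trait -> level 1-5) as a persona description."""
    parts = [f"{QUALIFIERS[level]} {TRAIT_PHRASES[trait]}"
             for trait, level in profile.items()]
    return "For this conversation, act as a person characterized by " + ", ".join(parts) + "."

if __name__ == "__main__":
    print(persona_preamble({"agreeableness": 5, "neuroticism": 1}))
    # -> "For this conversation, act as a person characterized by extremely high warmth
    #     and cooperativeness, extremely low emotional volatility."
```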

The researchers also show that these shaped personalities can closely resemble human ones: a Large Language Model can be prompted so that its responses to a psychometric personality test are virtually indistinguishable from those of a real person. That is a striking demonstration of how well AI can emulate human traits, and it raises intriguing questions about how far machines can replicate the intricacies of human personality.

The personality of the system clearly influences the way it responds, and it matters how. Safdari and colleagues collected responses from the Flan-PaLM 540B model while it expressed various personality profiles and generated word clouds from them. With lower levels of neuroticism the system favored words such as “Happy” and “Family,” while higher levels led it to use words like “Hate” and “Feel.”
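Those word clouds boil down to word-frequency comparisons across responses generated under different trait settings. Here is a minimal sketch of that comparison; the response strings are invented placeholders standing in for real model output.

```python
# Sketch: comparing word frequencies in responses generated under low vs. high neuroticism.
# The response strings below are invented placeholders; real input would be model output.
from collections import Counter
import re

def word_counts(texts: list[str]) -> Counter:
    """Lower-case, tokenize on word characters, and count occurrences."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts

if __name__ == "__main__":
    low_neuroticism  = ["I feel happy spending time with my family.",
                        "Happy days with friends and family."]
    high_neuroticism = ["I hate how anxious I feel.",
                        "I feel like everyone will hate it."]
    print(word_counts(low_neuroticism).most_common(3))
    print(word_counts(high_neuroticism).most_common(3))
```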

The implications for AI companies are significant. Safdari and colleagues point out that controlling the traits that contribute to harmful or toxic language from LLMs, such as low agreeableness and high neuroticism, could make interactions with these models safer and less toxic. For companies that prioritize user safety and well-being, that is valuable knowledge.

As they adopt this approach, AI companies will need to be more transparent about how they manipulate synthetic personalities. The team stresses that users should be given a clear understanding of the underlying mechanisms, along with the constraints and biases of any personalized LLM.

It is also easy to imagine bad actors tuning these systems the other way: dialling in extremely unstable personalities to churn out toxic and destructive text.

Safdari and colleagues acknowledge that as AI personalities become more human-like, it will be harder to detect the absence of a genuine human behind AI-generated misinformation. That could render current detection methods ineffective, making it easier for adversaries to exploit Large Language Models to create deceptive content.

All of this underlines the pressure on AI companies to be transparent. As the use of artificial personalities grows, these companies will face greater demands to show how they create and manage their AI personas. It may even give rise to entirely new professions: expect job postings for experts in managing these digital personalities. To all the “synthetic personality managers” out there, your moment has finally arrived.

Ref: Personality Traits in Large Language Models: arxiv.org/abs/2307.00184