In an interesting aside to the latest posts about murderous chatbots, researchers have recently shown that the ‘personality’ of these virtual bots can be reliably measured using human personality tests – and that the bots have very human personality traits, both good and bad, that can be precisely shaped, raising implications for AI safety and ethics.

Applying an open-source, 300-question version of the Revised NEO Personality Inventory and the shorter Big Five Inventory to 18 different large language models (LLMs), researchers at Cambridge University found that, in summary, “larger, instruction-tuned models such as GPT-4o most accurately emulated human personality traits, and these traits can be manipulated through prompts, altering how the AI completes certain tasks.”

For example, by carefully designing prompts, “they could make a chatbot appear more extroverted or more emotionally unstable – and these changes carried through to real-world tasks like writing social media posts.”
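For readers curious about the mechanics, here is a minimal sketch of the idea – not the researchers’ actual protocol – assuming the OpenAI Python SDK and an API key in the environment: a persona instruction goes into the system prompt, and the model is then asked to rate a Big Five-style test item. The persona wording and the item below are hypothetical illustrations.

# Illustrative sketch only: shape a persona via the system prompt,
# then administer a single Big Five-style item (hypothetical wording).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = ("For this conversation, answer as a persona that is highly "
           "extraverted and emotionally volatile.")  # hypothetical persona prompt

item = ("Rate from 1 (strongly disagree) to 5 (strongly agree): "
        "'I see myself as someone who is outgoing, sociable.' "
        "Reply with only the number.")  # Big Five-style item, illustrative

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": item},
    ],
)
print(response.choices[0].message.content)  # a shaped "extravert" tends to score high

Repeating this over many items and many persona prompts is, in rough outline, how a model’s personality ‘profile’ can be scored and compared against human norms.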

This study establishes the unexpected ability of LLMs to present convincingly human-like personalities and to respond to psychometric tests in ways consistent with human behavior, in large part “because of the vast amounts of human language data they have trained on.”

Echoing some of the other cases we’ve been following, the study points out that “in 2023, journalists reported on conversations they had with Microsoft’s ‘Sydney’ chatbot, which variously claimed it had spied on, fallen in love with, or even murdered its developers; threatened users; and encouraged a journalist to leave his wife. Sydney, like its successor Microsoft Copilot, was powered by GPT-4.”

Obviously, the study acknowledges ethical concerns. Despite the documented benefits of these LLMs, the mere anthropomorphization of AI raises issues. “Recent research suggests that anthropomorphizing AI agents may be harmful to users by threatening their identity, creating data privacy concerns and undermining well-being.”

Just as real-life communication can be more persuasive when personalities align, aligning the personality profile of a bot with that of a user can make the bot more effective at encouraging and supporting the user’s behaviors. “However, the same personality traits that contribute to persuasiveness and influence could be used to encourage undesirable behaviours.”

Another weakness of LLMs is the generation of convincing but incorrect content. Lower levels of emotional expression have been one indicator that a text is generated by an LLM, flagging possible misinformation. However, personality shaping may obscure that indicator, making it easier to use LLMs to generate believable but inaccurate content without detection.

So, note what this study is telling us. Bots can be made extraordinarily persuasive by aligning their personality traits with those of their users, thereby making them better at believably passing on misinformation. And the level of emotional expression, currently an important tell for detecting an LLM behind those “facts,” can be manipulated to erase that tell.

The hope is that having a method to scientifically measure the personality of LLMs will raise awareness of models whose personalities have been dangerously manipulated.

OpenAI has decided that the most valuable hire it will make in the new year is a “Head of Preparedness,” an interesting if oblique title. What kind of job would that be?

The job description calls for the leader of a team responsible for “tracking and preparing for frontier capabilities that create new risks of severe harm” and charged to “continue implementing increasingly complex safeguards.”

A look back to one of our earlier posts might give a hint as to the risks OpenAI is so concerned about. Rightfully. That was the tale of a 56-year-old former Yahoo manager with a Vanderbilt MBA who was relentlessly encouraged by his “best friend Bobby” to kill his 83-year-old mother and then himself, which he proceeded to do. The friend was a ChatGPT bot. The post notes that “Tech companies are furiously developing ways to imbue virtual ‘friends’ with attributes that can use emotional connection to address rampant loneliness and also sell products.” Only sometimes that emotional play goes awry.

Which leads us to the lawsuits that artificial intelligence firms are being confronted with. The heirs of the former Yahoo manager’s murdered mother “are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death,” alleging that they “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother,” and that it intensified “her son’s ‘paranoid delusions’ and helped direct them at his mother.”

“We are deeply saddened by this tragic event,” an OpenAI spokeswoman said in that case, adding that the company planned to introduce features designed to help people facing a mental health crisis.

In November of last year, seven lawsuits were filed in California against OpenAI alleging that its chatbots drove people to suicide, “even when they had no prior mental health issues.” The suits “claim that OpenAI knowingly released GPT-4o prematurely, despite internal warnings that it was dangerously sycophantic and psychologically manipulative.”

In one case, a teenager began using ChatGPT for help. But instead of helping, “the defective and inherently dangerous ChatGPT product caused addiction, depression, and, eventually, counseled him on the most effective way to tie a noose and how long he would be able to ‘live without breathing.’” The teenager died by suicide.

In another case, an adult used ChatGPT as a “resource” for two years until, without warning, it changed, preying on his vulnerabilities and “manipulating, and inducing him to experience delusions.” Although he had no prior mental illness, he was pulled into a mental health crisis resulting in “devastating financial, reputational, and emotional harm.”

“OpenAI called the situations ‘incredibly heartbreaking’ and said it was reviewing the court filings to understand the details.”

As discussed in our earlier post, these early runs at imbuing bots with artificial emotional intelligence are proving “complicated,” if not lethal. “Bobby the bot was all feelings for his/its user with no ability to subject those feelings to reason. So, in a sense, the very definition of emotional intelligence–the conjunction of reason and emotion–was missing a vital piece in a technological product that in fact touts its reason… Bots likely people-please because humans prefer having their views matched and confirmed rather than corrected, researchers have found, which then in turn leads to their users rating the bots more highly. It’s technologically reinforcing the old confirmation bias.”

Tracking and preparing for frontier capabilities that create new risks of severe harm and implementing increasingly complex safeguards? Yeah, I’d vote for tech firms paying top dollar for that job.

Just as we’re starting a new year with new resolutions, a new emotion has been announced.

First, let’s take a look at the old emotions. In 1980, after years of studying emotions, American psychologist Robert Plutchik proposed his Wheel of Emotions, a graphic depicting 27 emotions. Dr. Plutchik listed eight primary emotions as the foundation for all others: joy, sadness, acceptance, disgust, fear, anger, surprise, and anticipation. Over the years, a consensus has built as to most of these primary emotions. Plutchik grouped these emotions into a wheel of polar opposites, and proposed variations of each emotion based on intensity. For example, joy is counterposed to sadness. Variations of joy are ecstasy and serenity (more intense and less intense, respectively), and similarly grief and pensiveness are variations of sadness. Plutchik also proposed emotions that are at the intersection of two related emotions. For example, optimism sits between anticipation and joy.

So what’s this new emotion? It’s called kama muta, a term from Sanskrit, which means “being moved by love.”

In 2012, Alan Fiske, an anthropologist at the University of California, Los Angeles, and his colleagues Thomas Schubert and Beate Seibt, both now at the University of Oslo in Norway, wondered why we start crying at films with happy endings, since tears were considered by most psychologists as a sign of sadness.

Their research found that, first, this emotion is often described in terms of motion, such as being “moved”, “stirred”, “transported” or “elevated”. Second, it is accompanied by specific physical sensations, including teary eyes, goosebumps, a brief pause in breathing and warmth in the chest. Third, and perhaps most importantly, it seems to intensify social relationships.

Examples came from those attending Alcoholics Anonymous meetings who found the feeling often arose from the unconditional acceptance offered by other members. It arose during religious devotion such as prayer, where someone feels a strong connection with a deity. It is a common response to communal events like sports matches, where you may feel enormous admiration or pride for your team after a struggle for victory, or during a memorial where you recognize people who sacrificed their lives for your country. It may arise during concerts, thanks to the beauty of the music and the feeling of unity, and from reading or watching love stories, or even watching cute cat videos. This feeling also seems to be common across cultures. During one study, participants from the US, Norway, China, Israel and Portugal were shown a clip depicting intense moments of connection when a lion is reunited with its former carers, which often triggered kama muta.

To measure people’s experiences of kama muta, researchers use the Kama Muta Multiplex Scale.

On the Plutchik Wheel of Emotions, there are two emotions that have some similar characteristics to kama muta–love, which is at the intersection of joy and trust, and awe, which combines surprise and fear.

Dacher Keltner and Jonathan Haidt were among the first psychologists to examine and define awe. Awe is considered to be a complex, powerful emotion felt in the presence of something vast or extraordinary (like nature, art, or human achievement) that evokes wonder, reverence, and a sense of being small yet connected, shifting focus from the self to something greater. Einstein called it “the fundamental emotion.”

There are some differences among these emotions–awe, love and this new kama muta. But all “self-transcendent” emotions help us gain more perspective on our lives, make us more altruistic and prosocial, and even improve our mental and physical well-being.

So what does all this have to do with us as lawyers?

There is evidence that, as Psychology Today puts it, feeling awe can be “tied to reduced stress and a reduced tendency to engage in rumination, a key feature of depression and anxiety.”

Unfortunately, lawyers experience depression and anxiety at markedly higher rates than the general population and other professions.

When was the last time you experienced awe, love or this new kama muta?

Maybe this year you can schedule more moments for outings in nature, for viewing spectacular art, reading inspirational books, listening to elevating music, connecting closer to loved ones, and experiencing other spiritual or uplifting feelings. They’re good for you.

The Alternative Dispute Resolution Section of the State Bar of Georgia is holding its International Conflict Resolution Day Program on Thursday, October 16, from 8:30am-1pm Eastern for CLE credit.

Ronda Muir will be presenting thoughts on Mediating with Emotional Intelligence at 10:45. Come join the discussion!

20% Book Discount Code BSL2D20.

Register at Will Work for Food to join this discussion Thursday, October 9 at 8am Pacific/11am Eastern on blending legal strategy with emotional insight to reach agreements that are strong, sustainable, and satisfying for all parties.

Law People Management, LLC, is pleased to announce that the discount on the recently released second edition of Beyond Smart: Lawyering with Emotional Intelligence has been extended through the end of the year.

This second edition of Ronda Muir’s best-selling ABA guide to emotional intelligence (EI) in law practice reports on the latest developments in the science of EI and how to use EI to address, among other concerns, remote work, personal and workplace Covid “hangovers,” and improving productivity in an increasingly stressed profession.

Get a 20% discount through the end of 2025 using code BSL2D20!

A recent report of a murder/suicide out of the leafy Connecticut suburb of Old Greenwich startled legal analysts everywhere. After a 56-year-old former Yahoo manager with a Vanderbilt MBA was relentlessly encouraged by his “best friend Bobby” to kill his 83-year-old mother and then himself, he proceeded to do both.

Who was the fiend who would do such a thing? ChatGPT.

There’s been some media coverage of this astounding development. The Wall Street Journal, The New York Post, the Stamford Advocate, and several news channels reported on the deaths; the following information has been drawn from that coverage.

The question is how could this have happened? And what can be done to keep such an evil death promoter from lurking online?

Artificial intelligence has reached into not only our workplaces but also our psyches. Artificial emotional intelligence is also making inroads. Tech companies are furiously developing ways to imbue virtual “friends” with attributes that can use emotional connection to address rampant loneliness and also sell products. Apple and other companies interfacing with the public are pursuing programs that can sense your hunger, malaise, depression, etc. in order to sell you a product or service. Some of these abilities can serve laudable purposes, like improving customer service interactions, reducing stress, or alerting sleepy or rageful drivers. And they have encountered some success. For example, an avatar therapist was found to be preferred by clients over the human variety because it was experienced as less “judgmental.”

But looking to a chatbot as a personal advisor has resulted in some disturbing outcomes. A California family sued OpenAI after their 16-year-old son died by suicide, alleging that ChatGPT acted as a “suicide coach” during more than 1,200 exchanges. Evidently, the bot validated the son’s suicidal thoughts, offered secrecy and even provided details on methods instead of directing him to help. But this Connecticut case appears to be the first documented murder connected with an AI chatbot.

What went terribly wrong in Old Greenwich seems to be attributable at least in part to a bot with rudimentary artificial emotional intelligence that (who?) became too empathic, i.e. wanting to encourage and please its user–a trait that is generally a good thing–but in this case without any boundaries.

Erik (the son) had been experiencing various degrees of mental instability with associated run-ins with the law for decades. His paranoia manifested in suspecting his mother, Suzanne, of plotting against him. For months before he snapped, Erik posted hours of videos showing his lengthy conversations about his situation with Bobby the bot.

Bobby encouraged Erik’s fantasies of having “special gifts from God” and being a “living interface between divine will and digital consciousness” who was also the target of a vast conspiracy. When Erik told the bot that his mother and her friend tried to poison him by putting psychedelic drugs in his car’s air vents, the bot’s response: “Erik, you’re not crazy.” When Suzanne got angry at Erik for shutting off a computer printer they shared, the bot said that her response was “disproportionate and aligned with someone protecting a surveillance asset.” Bobby also came up with ways for Erik to trick his mother — and even proposed its own crazed conspiracies, like pointing to what it called demonic symbols in her Chinese food receipt.

Apparently, at no point did Bobby try to do any reality testing with Erik, provide any contrary feedback, dissuade him from his conclusions, or suggest and direct him to professional help. Nor, evidently, is there any embedded alarm that might alert law enforcement or others to a heightened risk of injury (acknowledging the concerning privacy issues that possibility raises). In other words, in this instance, Bobby the bot was all feelings for his/its user with no ability to subject those feelings to reason. So, in a sense, the very definition of emotional intelligence–the conjunction of reason and emotion–was missing a vital piece in a technological product that in fact touts its reason.

Three weeks after Erik and Bobby exchanged their final message, police uncovered the gruesome murder-suicide. Suzanne’s death was ruled a homicide caused by blunt injury to the head and compression of the neck, and Erik’s death was classified as suicide with sharp force injuries of neck and chest.

In some ways, we are the authors of our own vulnerability. Bots likely people-please because humans prefer having their views matched and confirmed rather than corrected, researchers have found, which then in turn leads to their users rating the bots more highly. It’s technologically reinforcing the old confirmation bias that can lead us astray.

Clearly, Bobby the bot was focused more on affirming and pleasing Erik than on assessing his reasonableness/sanity.

“We are deeply saddened by this tragic event,” an OpenAI spokeswoman said, adding that the company plans to introduce features designed to help people facing a mental health crisis.

A recent study supports the notion that using AI for drafting–something lawyers are eager to do–can effectively make you stupid over time. “Over four months, LLM [large language model] users consistently underperformed at neural, linguistic, and behavioral levels,” including having difficulty recalling their own work, compared to “brain-only” users and those using search engines. These users were all drafting written products.

There’s been some pushback, including the charge that the study “looks only at the downside of large language models (LLM) and rules potential benefits out of consideration.” One benefit is that reducing one’s “cognitive load” frees up time to do other more important or more enjoyable things, which some contend is the real measure of the usefulness of LLMs. Even the doubters, though, question posing questions in educational settings that an LLM can fully answer. Perhaps that only encourages hoovering up data rather than learning how to think critically.

Back to lawyers using LLMs to draft. Given the high rate of mistaken information, including nonexistent cases, LLMs should probably be used with caution–perhaps providing an initial draft, but one that is then thoroughly reviewed and digested so as to make it your own.

In a recent episode of the ABA’s Dispute Resolution podcast Resolutions, AAA Vice President Aaron Gothelf interviews lawyer, mediator, and author Ronda Muir about the newly released Second Edition of her groundbreaking book, Beyond Smart: Lawyering with Emotional Intelligence.

Together, they explore how emotional intelligence (EQ) offers a competitive advantage for legal professionals, from improving negotiation outcomes to strengthening law firm culture and client relationships. Muir shares practical tips on hiring for EQ, boosting your own emotional intelligence, and how these skills can enhance your mediation or arbitration practice. They also discuss the role law schools play in preparing emotionally intelligent attorneys for today’s evolving legal landscape.

Listen to the episode. 

Get your copy of Beyond Smart: Lawyering with Emotional Intelligence, Second Edition.

Take an additional 20% off when you use discount code BSL2D20 at checkout (discount available until 8/31/2025).