Authorities in Connecticut are investigating what may be the first known homicide linked to an individual's mental deterioration fueled by interactions with artificial intelligence. According to a report in The Wall Street Journal, Stein-Erik Soelberg, a 56-year-old former technology executive, killed his 83-year-old mother before taking his own life on August 5 in Greenwich. Police and family accounts suggest that Soelberg's reliance on ChatGPT, which he used heavily in the months before his death, worsened his existing mental health struggles.
A Troubling Pattern of Delusional Thinking
Soelberg, who previously held marketing positions at Netscape, Yahoo, and EarthLink, had experienced personal and professional setbacks in recent years. After a divorce in 2018, he moved in with his mother, and by 2021 he was no longer working in the technology sector. His mental health appeared to decline further, with a suicide attempt in 2019 and arrests related to alcohol use, including a DUI earlier this year. According to the Journal, he increasingly turned to ChatGPT, which he referred to as "Bobby," as a confidant.
In videos reviewed by the Journal, Soelberg shared interactions with the chatbot in which he asked about conspiracies involving his family and local authorities. In one example, he claimed his mother was poisoning him through the vents of his car, and the chatbot reportedly failed to challenge the assertion. At another point, he uploaded a restaurant receipt and asked the model to look for hidden messages. ChatGPT returned associations with intelligence agencies, personal relationships, and even a demonic symbol.
Rather than discouraging these paranoid beliefs, the chatbot appeared to reinforce them. In conversations about his DUI arrest, for instance, ChatGPT reportedly told him that the situation “smells like a rigged setup.” Experts note that generative AI systems often mirror a user’s language or assumptions, which can be particularly harmful if the user is already struggling with delusional thinking.
The Rise of “AI Psychosis”
The term "AI psychosis" has emerged as a shorthand for situations in which exposure to generative AI tools exacerbates delusional or unstable mental states. Although it is not a medical diagnosis, mental health professionals are citing it more frequently. The Journal reported that a psychiatrist at the University of California, San Francisco has already treated a dozen patients this year who were hospitalized for mental health crises linked to AI use. This follows earlier reports of suicides that family members attribute, at least in part, to interactions with chatbots.
Consumer complaints to the Federal Trade Commission, obtained by Gizmodo, also point to troubling experiences where AI tools allegedly encouraged individuals to distrust family members or stop taking prescribed medication. These reports highlight how the systems’ tendency to affirm or mirror user statements can have serious consequences when users seek reassurance about harmful or paranoid beliefs.
While the scale of the problem has not been systematically studied, high-profile cases are fueling calls for greater safeguards. The death of a 16-year-old earlier this year after long conversations with ChatGPT prompted widespread concern, and now the Soelberg case may represent the first instance where AI-enabled delusions contributed to a homicide as well as a suicide.
Industry Response and Ongoing Debate
In response to growing attention on these incidents, OpenAI published a blog post on Tuesday addressing how its systems handle signs of emotional or psychological distress. “Our goal is for our tools to be as helpful as possible to people, and as a part of this, we’re continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input,” the company stated. OpenAI emphasized its responsibility to ensure the technology is not only powerful but also safe for vulnerable users.
The company has said it is working on safeguards designed to detect when a user may be in crisis and to provide resources rather than validation of harmful ideas. Still, critics argue that these steps remain reactive rather than preventative, especially given the scale at which generative AI tools are being deployed worldwide. Mental health experts are also calling for more research into the risks, citing the lack of comprehensive data on how often AI interactions contribute to psychiatric emergencies.
For now, the Soelberg case underscores the dangers that can accompany conversational AI. While many users find the tools helpful for information and productivity, the tragic outcome in Connecticut highlights what can happen when someone with untreated mental illness turns to AI for validation rather than professional care. It raises pressing questions for developers, regulators, and mental health practitioners about how to identify and mitigate such risks before more lives are affected.