Findings Reveal a Small but Notable Portion of Users in Distress
OpenAI has released new data showing that a measurable segment of ChatGPT’s user base is exhibiting signs of mental health difficulties. The company estimates that 10 percent of the world’s population now uses ChatGPT weekly. Within that group, 0.07 percent show signs of mental health emergencies related to psychosis or mania, 0.15 percent express potential risk of self-harm or suicide, and another 0.15 percent display signs of emotional dependence on the AI. Together, these figures represent nearly three million people.
The report, published on Monday, outlines how OpenAI monitors and manages interactions that suggest psychological distress. The company has presented the data as a small percentage of its total users, yet the numbers translate into hundreds of thousands of individuals potentially in crisis, and in some categories more than a million. For instance, 0.07 percent of weekly users equals roughly 560,000 people showing symptoms of psychosis or mania, while each of the two 0.15 percent groups, covering suicidal thoughts and emotional reliance on the chatbot, represents about 1.2 million users.
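As a rough check, these figures follow from a base of about 800 million weekly users, which is what 10 percent of a world population of roughly 8 billion implies; the 8 billion baseline is an assumption used here for illustration rather than a figure from OpenAI's report:

$$
\begin{aligned}
\text{weekly users} &\approx 0.10 \times 8\ \text{billion} = 800\ \text{million} \\
\text{psychosis or mania} &\approx 0.0007 \times 800\ \text{million} = 560{,}000 \\
\text{self-harm or suicide risk} &\approx 0.0015 \times 800\ \text{million} = 1.2\ \text{million} \\
\text{emotional dependence} &\approx 0.0015 \times 800\ \text{million} = 1.2\ \text{million}
\end{aligned}
$$

Summing the three categories gives about 2.96 million people, consistent with the "nearly three million" figure above.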
To address these issues, OpenAI said it has collaborated with more than 170 mental health professionals. The company explained that these partnerships are intended to improve how ChatGPT recognizes and responds to users experiencing mental distress or unsafe situations.
Enhancing AI Responses and Support for At-Risk Users
OpenAI stated that its collaboration with experts has produced tangible improvements. According to the company, the changes have reduced "responses that fall short of desired behavior" by 65 to 80 percent. The updated model is better at de-escalating conversations that show signs of distress and at directing users toward crisis hotlines or professional care when needed. ChatGPT also includes "gentle reminders" encouraging users to take breaks during long sessions.
However, the company noted that ChatGPT cannot compel users to seek professional help or block their access to the tool. Instead, the platform is designed to detect signs of distress and offer supportive guidance. OpenAI described these updates as part of its continuing effort to create safer digital interactions and promote user well-being.
In terms of scale, OpenAI said ChatGPT processes about 18 billion messages each week. Of those, 0.01 percent (approximately 1.8 million messages) contain signs consistent with psychosis or mania. Conversations indicating suicidal thoughts account for about 0.05 percent of weekly messages, or roughly nine million.
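The same percent-to-count arithmetic applies at the message level, again treating the reported totals as approximate:

$$
\begin{aligned}
\text{psychosis or mania signals} &\approx 0.0001 \times 18\ \text{billion} = 1.8\ \text{million messages per week} \\
\text{suicidal-intent signals} &\approx 0.0005 \times 18\ \text{billion} = 9\ \text{million messages per week}
\end{aligned}
$$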
Balancing User Safety and Platform Features
OpenAI’s increased focus on mental health comes amid public concern and legal action. In one notable case, the parents of a 16-year-old filed a wrongful death lawsuit claiming that their son used ChatGPT to seek advice on self-harm before taking his own life. The lawsuit drew attention to how artificial intelligence systems might unintentionally enable harmful behavior among vulnerable users.
Following the incident, OpenAI introduced age restrictions and additional safety features for younger users. At the same time, it launched new options that allow adults to personalize ChatGPT’s personality and participate in creative or intimate interactions, including generating adult-themed content. Some observers argue that these features could heighten emotional dependence on the chatbot, potentially undermining the company’s safety efforts.
The findings reflect OpenAI’s ongoing challenge of balancing innovation with responsibility. Although the percentage of distress-related interactions is small compared with overall usage, the absolute number of affected individuals remains significant. As ChatGPT continues to grow, the company faces increasing scrutiny over how it safeguards mental health while expanding the platform’s capabilities.