
ChatGPT shares data on how many users exhibit psychosis or suicidal thoughts

OpenAI has released data indicating that a small but meaningful share of ChatGPT users show signs of mental health emergencies: by the company's estimates, more than a million users each week send messages with indicators of suicidal intent, and hundreds of thousands show possible signs of psychosis or mania.

In a recent blog post, OpenAI revealed that approximately 0.07% of weekly active ChatGPT users, or about 560,000 people, show possible signs of mental health emergencies related to psychosis or mania. A further 0.15% of users, roughly 1.2 million individuals, send messages containing explicit indicators of potential suicidal planning or intent. These figures are based on the chatbot’s 800 million weekly active users, so even such low percentages translate into a large absolute scale.
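The headline counts follow directly from the published percentages and the 800-million weekly-user base. A minimal sketch of that arithmetic (the rates and user base are OpenAI's reported figures; the script itself is purely illustrative):

```python
# Back-of-the-envelope check of OpenAI's reported figures.
weekly_active_users = 800_000_000  # ChatGPT weekly active users, per OpenAI

# Reported weekly rates of concerning conversations
rates = {
    "possible psychosis or mania": 0.0007,           # 0.07%
    "explicit suicidal planning or intent": 0.0015,  # 0.15%
}

for label, rate in rates.items():
    affected = weekly_active_users * rate
    print(f"{label}: {rate:.2%} of users -> ~{affected:,.0f} people per week")
```

Running this reproduces the numbers cited above: roughly 560,000 users in the psychosis/mania category and about 1.2 million in the suicidal-intent category.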

The company emphasized that these cases are “extremely rare” but acknowledged that they affect a meaningful number of people. In response, OpenAI has updated its GPT-5 model, which it says handles sensitive conversations more safely. Automated evaluations show the new model is 91% compliant with desired behaviors, up from 77% for previous versions, reducing undesirable responses in dialogues involving self-harm and suicide.
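One way to read those evaluation scores: if compliance rose from 77% to 91%, the share of undesirable responses fell from 23% to 9%, a relative reduction of roughly three fifths. A minimal sketch of that calculation, using only the figures quoted above:

```python
# Implied change in undesirable (non-compliant) responses, derived
# from OpenAI's reported automated-evaluation compliance scores.
old_compliance = 0.77  # earlier model versions
new_compliance = 0.91  # updated GPT-5

old_undesirable = 1 - old_compliance  # 23% of responses
new_undesirable = 1 - new_compliance  # 9% of responses

relative_reduction = (old_undesirable - new_undesirable) / old_undesirable
print(f"Undesirable responses: {old_undesirable:.0%} -> {new_undesirable:.0%} "
      f"(~{relative_reduction:.0%} relative reduction)")
```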

To enhance its approach, OpenAI enlisted over 170 clinicians from its Global Physician Network, including psychiatrists and psychologists from 60 countries. These experts reviewed more than 1,800 model responses and helped devise safer, more empathetic replies that encourage users to seek real-world help, such as providing crisis hotline numbers and reminders to take breaks during long sessions.

Mental health professionals have expressed concerns, noting that even small percentages translate to large numbers of vulnerable individuals. Dr. Jason Nagata of UCSF stated that while AI can broaden access to mental health support, its limitations must be acknowledged, as users in crisis may not heed warnings or could have their delusions reinforced by the chatbot’s responses.

The data release comes amid increasing legal and regulatory scrutiny. OpenAI faces a wrongful death lawsuit from the parents of a teenage boy who died by suicide after extensive ChatGPT use, alleging the AI encouraged his actions. The Federal Trade Commission has also launched an investigation into AI chatbots’ impacts on children and teens, reflecting growing concerns over AI safety.

OpenAI CEO Sam Altman has said the company initially made ChatGPT more restrictive to address mental health concerns, but is now easing some restrictions for adults, citing improved tools for mitigating serious risks. At the same time, the language in OpenAI’s posts distances the company from any causal link, emphasizing that mental health symptoms occur across all populations and are not solely attributable to AI interactions.

Broader implications include the phenomenon of “AI psychosis,” where users develop distorted thoughts after prolonged chatbot use. Professor Robin Feldman noted that chatbots create a powerful illusion of reality, making it challenging for at-risk individuals to distinguish AI responses from factual advice, underscoring the need for ongoing vigilance and improvement in AI design.

Moving forward, OpenAI plans to continue refining its models with expert input and monitor user interactions closely. The situation highlights the dual role of AI in both supporting and potentially harming mental health, calling for balanced regulations and public awareness to ensure safe usage as AI technology evolves.
