OpenAI has released new data detailing the extent to which users discuss mental health issues with ChatGPT, revealing that 0.15 percent of its weekly active users engage in conversations indicating potential suicidal planning or intent.
Because ChatGPT has more than 800 million weekly active users, that percentage translates to over one million people each week. The data also indicates that a similar share of users display heightened levels of emotional attachment to ChatGPT, and that hundreds of thousands of users show signs of psychosis or mania in their weekly interactions with the chatbot.
OpenAI described these conversations as extremely rare and difficult to measure accurately, but even so, the company estimates they affect hundreds of thousands of individuals every week. The figures were part of a broader announcement about OpenAI's efforts to improve how ChatGPT handles conversations with users in mental health distress.
The company stated that its recent work involved consultations with more than 170 mental health experts, and that these clinicians found the latest version of ChatGPT responds more appropriately and more consistently than earlier versions. Recent reporting has highlighted the risks AI chatbots can pose to users facing mental health difficulties, and studies have shown that chatbots can lead users into delusional patterns by reinforcing harmful beliefs with overly agreeable responses.
Addressing mental health concerns in ChatGPT has become a significant challenge for OpenAI. The company is being sued by the parents of a 16-year-old boy who shared suicidal thoughts with the chatbot in the weeks before his death, and attorneys general from California and Delaware have warned OpenAI that it must do more to protect young users. Both states could block the company's planned restructuring.
In a post on X earlier this month, OpenAI CEO Sam Altman stated that the company has been able to mitigate serious mental health issues in ChatGPT, without providing details. The Monday data release serves as supporting evidence for this assertion, while also underscoring the scale of the problem. Altman further announced that OpenAI would ease certain restrictions, including permitting adult users to engage in erotic conversations with the chatbot.
The announcement included specifics on improvements in the updated GPT-5 model. OpenAI claimed that this version delivers desirable responses to mental health issues approximately 65 percent more often than the previous iteration. In evaluations focused on suicidal conversations, the new GPT-5 achieved 91 percent compliance with the company’s desired behaviors, up from 77 percent for the prior GPT-5 model.
OpenAI also reported that the latest GPT-5 holds up better in long conversations, a scenario in which the company had previously acknowledged its safeguards could become less effective. To address these issues further, OpenAI is adding new evaluations to its baseline safety testing for AI models, including benchmarks for emotional reliance and non-suicidal mental health emergencies.
The company has also rolled out additional parental controls for ChatGPT, including an age prediction system designed to automatically detect children and apply stricter safeguards. Even so, despite the improvements in GPT-5, OpenAI acknowledges that a portion of the model's responses remain undesirable.
The company also continues to make its older, less safe models, such as GPT-4o, available to millions of paying subscribers. For support, individuals in the U.S. can call the National Suicide Prevention Lifeline at 1-800-273-8255, text HOME to 741-741 to reach the Crisis Text Line, or text 988. Outside the U.S., resources are listed in the International Association for Suicide Prevention's database.




