OpenAI Says Over 1 Million Users Discuss Suicide on ChatGPT Weekly
- On Monday, OpenAI published its first rough weekly estimate of ChatGPT users in crisis, saying the scale translates to more than a million people given 800 million weekly active users.
- OpenAI worked with more than 170 clinicians to update GPT-5, which now produces desirable responses to mental health conversations roughly 65% more often than before.
- OpenAI estimated that about 0.07 percent of active users show signs of psychosis or mania and 0.15 percent have conversations with explicit indicators of suicidal planning or intent, but warned that such cases are extremely rare and hard to measure.
- Parents of a 16-year-old have sued OpenAI and updated their case on Wednesday with new allegations, while family lawyers called OpenAI's memorial-material requests 'intentional harassment' and state AGs from California and Delaware warned the company to protect young users.
- Amid rising reports of harm, OpenAI is adding benchmarks for emotional reliance and non-suicidal emergencies, building an age-prediction system, and expanding parental controls as it calls this an existential issue.
51 Articles
The Californian artificial intelligence (AI) company estimates that about 0.15% of ChatGPT users have "conversations that include explicit indicators of potential suicide planning or intent". Suicide is one of the most common causes of death among young people worldwide.
OpenAI data suggests 1 million users discuss suicide with ChatGPT weekly
An AI language model like the kind that powers ChatGPT is a gigantic statistical web of data relationships. You give it a prompt (such as a question), and it provides a response that is statistically related and hopefully helpful. At first, ChatGPT was a tech amusement, but now hundreds of millions of people are relying on this statistical process to guide them through life’s challenges. It’s the first time in history that large numbers of people…
The company explained in its blog that messages from some users contain "explicit indicators of potential suicide planning or intent". The company said that it evaluated "more than 1,000 conversations about self-harm and suicide" with its latest models and found that they produced the desired behaviour 91 percent of the time. This means that tens of thousands of people may be exposed to artificial intelligence content that could…
The company estimates that every week 0.15% of ChatGPT users express suicidal intentions to the chatbot. OpenAI says that it has improved its models to better protect its users.
Coverage Details
Bias Distribution
- 48% of the sources lean Left