Ex-OpenAI Researcher Slams ChatGPT’s Mental Health Risks
8 Articles
In the rapidly evolving world of artificial intelligence, concerns about user safety have taken center stage, particularly at OpenAI, the company behind ChatGPT. A former safety researcher, Steven Adler, has publicly criticized the organization for what he sees as inadequate measures to address severe mental health crises among its users. Adler, who recently left the company, argues that OpenAI’s current safeguards fall short in preventing or mi…
OpenAI Study Finds ChatGPT Detects Signs of Mental Distress Among Users
OpenAI has unveiled results from a first-of-its-kind internal study examining how recent safety upgrades to ChatGPT are performing in detecting and responding to signs of mental or emotional distress among users. The move follows growing concern from mental health experts about people increasingly turning to AI chatbots for therapy or emotional support. According to the
Millions of users in psychological distress exchange messages with ChatGPT every day. After a high-profile tragedy, the American giant is trying to respond: it has just announced that it has updated its algorithm to detect worrying behavior more effectively.
More than a million ChatGPT users have expressed suicidal thoughts in conversations with the artificial intelligence assistant, according to estimates by OpenAI. The revelation comes amid controversy sparked by the death of an American teenager and a lawsuit filed against the Californian developer, which says it has implemented new safety measures to protect vulnerable users.
Around a million users of the ChatGPT chatbot have confided in the artificial intelligence (AI) about their suicidal thoughts, the French daily Le Figaro reported, citing OpenAI. According to the Californian AI company, more than one percent of its ChatGPT users, or around a million people, have conversations with the app that clearly indicate suicidal intent.
Sacramento (USA) - Around a million users of the ChatGPT chatbot have confided their suicidal thoughts to the artificial intelligence (AI), the French daily Le Figaro wrote, citing the company OpenAI....