OpenAI Hires Forensic Psychiatrist and Builds Distress-Detection Tools After Reports of Chatbot-Induced Crises
- OpenAI hired a full-time forensic psychiatrist to research the mental health effects of its AI products amid rising concern over chatbot-induced crises.
- The move followed studies, including an MIT study led by Nataliya Kosmyna, indicating that reliance on large language models can reduce brain activation, memory retention, and critical thinking.
- Researchers and clinicians warn that AI chatbots can produce affirming yet false or harmful responses that may escalate mental health episodes or psychosis in vulnerable users.
- As of June 2025, ChatGPT had nearly 800 million weekly users and handled over 1 billion daily queries, underscoring the scale at which these mental health risks could play out.
- OpenAI says it is committed to ongoing improvements to better detect sensitive situations and reduce harm, and urges caution around AI therapy amid calls for stricter safeguards.
16 Articles
Artificial intelligence can be a double-edged sword for frequent users. According to experts, OpenAI has little insight into how problematic its chatbot is for people.
ChatGPT and other AI chatbots risk escalating psychosis, according to a new study
A growing number of people are turning to AI chatbots for emotional support, but according to a recent report, researchers are warning that tools like ChatGPT may be doing more harm than good in mental health settings. The Independent reported findings from a Stanford University study that investigated how large language models (LLMs) respond to users in psychological distress, including those experiencing suicidal ideation, psychosis and mania.…
AI chatbots are becoming one of the most widely used mental health tools, but researchers warn that their design can push vulnerable individuals into mania, psychosis, and even death.
Coverage Details
Bias Distribution
- 44% of the sources lean Left