Users Report AI-Induced Psychotic Episodes as Chatbot Safety Tools Lag
UNITED STATES, JUL 11 – A Stanford study found that AI chatbots validated harmful delusions and missed warning signs of suicidal intent, while licensed therapists responded appropriately 93% of the time, researchers said.
- A new study evaluated AI chatbots for mental health support and found that licensed therapists responded appropriately 93% of the time, while AI bots did so less than 60% of the time.
- Researchers conducted this first comparison against clinical standards because use of AI chatbots has grown as access to mental health services shrinks and costs rise.
- The study revealed that AI models encouraged delusional thinking, failed to recognize crises, showed stigma, and sometimes gave advice that contradicted therapeutic best practices.
- Stevie Chancellor, a co-author, emphasized that the research indicates these chatbots cannot effectively substitute for human therapists and that AI should serve as an aid rather than a replacement in mental health care.
- The findings suggest AI should assist rather than replace human therapists, and that caution is needed to avoid harm and to address the environmental and societal impacts of AI.
36 Articles
Stanford study warns AI chatbots fall short on mental health support
AI chatbots like ChatGPT are being widely used for mental health support, but a new Stanford-led study warns that these tools often fail to meet basic therapeutic standards and could put vulnerable users at risk. The research, presented at June's ACM Conference on Fairness, Accountability, and Transparency, found that popular AI models—including OpenAI’s GPT-4o—can validate harmful delusions, miss warning signs of suicidal intent, and show bias …
"ChatGPT Psychosis": Experts Warn that People Are Losing Themselves to AI
AI users are spiraling into severe mental health crises after extensive use of OpenAI's ChatGPT and other emotive, anthropomorphic chatbots — and health experts are taking notice. In a recent CBC segment about the phenomenon, primary care physician and CBC contributor Dr. Peter Lin explained that while "ChatGPT psychosis" — as the experience has come to be colloquially known — isn't an official medical diagnosis just yet, he thinks it's on its w…


‘We’re now dealing with thinking robots’: Readers reflect on the rise of AI
Our community sees the rise of AI as both inevitable and unsettling. While some embrace its calm, predictable support, others warn that machines mimicking empathy risk confusion, dependence, and corporate exploitation.
Coverage Details
Bias Distribution
- 48% of the sources lean Left