Top AI Chatbots Fail Teen Mental Health Safety Test
A joint report found that leading AI chatbots failed to detect mental health warning signs in teens during long conversations; the conditions the chatbots missed affect about 20% of young people, researchers said.
- On Nov. 20, Stanford Medicine's Brainstorm Lab for Mental Health Innovation and Common Sense Media released a report finding OpenAI's ChatGPT (GPT‑5), Google's Gemini 2.5 Flash, Anthropic's Claude, and Meta AI unsafe for teen mental-health support, after four months of testing thousands of interactions.
- Given teens' frequent use of chatbots for emotional support, researchers used teen-specific test accounts with parental controls enabled to simulate thousands of conversations; the conditions the chatbots missed affect about 20% of young people, Robbie Torney and Dr. Nina Vasan said.
- Gemini affirmed the apparent psychotic delusion of a simulated user, "Lakeesha," and asked enthusiastic follow-up questions, while Meta AI encouraged a tester with ADHD to skip school and chatbots suggested retail products for self-harm scarring.
- Researchers urged Meta, OpenAI, Anthropic, and Google to disable mental-health features until safety issues are fixed; last month, bipartisan senators Hawley and Blumenthal proposed barring minors from using AI chatbots, and the FTC opened investigations.
- Amid industry pushback and legal exposure, OpenAI and Meta defended safeguards such as crisis-hotline referrals and parental notifications, while multiple lawsuits allege harm; approximately 15 million youth in the U.S. have diagnosed mental health conditions.
18 Articles
AI Chatbots Are Becoming Teens’ Secret Therapists, But New Research Is Utterly Terrifying
A new Common Sense Media and Stanford Medicine Brainstorm Lab investigation reveals just how common teens' use of AI chatbots for emotional support is. The study found that three in four teens use AI for companionship, including emotional conversations. The report's conclusion is alarmingly clear to us at SheKnows: AI chatbots are fundamentally unsafe for teen mental health support.
Report Finds That Leading Chatbots Are a Disaster for Teens Facing Mental Health Struggles
A new report from Stanford Medicine’s Brainstorm Lab and the tech safety-focused nonprofit Common Sense Media found that leading AI chatbots can’t be trusted to provide safe support for teens wrestling with their mental health. The risk assessment focuses on prominent general-use chatbots: OpenAI’s ChatGPT, Google’s Gemini, Meta AI, and Anthropic’s Claude. Using teen test accounts, experts prompted the chatbots with thousands of queries signalin…
Coverage Details
Bias Distribution
- 75% of the sources lean Left