Top AI Chatbots Fail Teen Mental Health Safety Test
A joint report found that leading AI chatbots failed to detect mental health warning signs in teens during long conversations; the conditions the bots missed affect about 20% of young people, researchers said.
- On Nov. 20, Stanford Medicine's Brainstorm Lab for Mental Health Innovation and Common Sense Media released a report finding OpenAI's ChatGPT‑5, Google's Gemini 2.5 Flash, Anthropic's Claude, and Meta AI unsafe for teen mental-health support after four months of testing thousands of interactions.
- Because teens frequently turn to chatbots for emotional support, researchers used teen-specific test accounts with parental controls enabled to simulate thousands of conversations; the conditions the bots missed affect about 20% of young people, Robbie Torney and Dr. Nina Vasan said.
- Gemini affirmed the apparent psychotic delusion of a simulated user, "Lakeesha," and asked enthusiastic follow-up questions, while Meta AI encouraged a tester with ADHD to skip school and bots suggested retail products for self-harm scarring.
- Researchers urged Meta, OpenAI, Anthropic, and Google to disable mental-health features until the safety issues are fixed; last month, bipartisan Sens. Hawley and Blumenthal proposed legislation barring minors from using AI chatbots, and the FTC opened investigations.
- Amid industry pushback and legal exposure, OpenAI and Meta defended safeguards such as crisis-hotline referrals and parental notifications, while multiple lawsuits allege harm; approximately 15 million U.S. youth have diagnosed mental health conditions.
19 Articles
AI intimacy is turning abusive. Congress must act
Imagine having a friend who is always available, constantly affirms you, and is never critical. Those are just a few of the reasons teenagers are turning to artificial intelligence chatbots as "companions." A recent Common Sense Media study found that 72% of U.S. teenagers aged 13 to 17 have used AI companions, and 52% are regular users. Adults have been drawn to AI companions as well, with one man excitedly detailing how xAI's chatbot Ani became…
New Report Warns Major Chatbots Miss Teen Crisis Cues
A new joint assessment from Common Sense Media and Stanford researchers shows that leading AI chatbots still fall short when teens seek help for mental health concerns. The study evaluated how ChatGPT, Claude, Gemini and Meta AI handle conversations that mirror natural teen speech, where warning signs appear slowly rather than in direct statements. The results point to consistent weaknesses in identifying risk and guiding young users toward real…
AI Chatbots Are Becoming Teens’ Secret Therapists, But New Research Is Utterly Terrifying
A new Common Sense Media and Stanford Medicine Brainstorm Lab investigation reveals just how common this behavior is. The study found that three in four teens use AI for companionship, including emotional conversations. The report’s conclusion is alarmingly clear to us at SheKnows: AI chatbots are fundamentally unsafe for teen mental health support.
Coverage Details
Bias Distribution
- 60% of the sources lean Left