
Stanford Study on AI Therapy Chatbots Warns of Risks, Bias

UNITED KINGDOM, JUL 14 – A UK report finds 71% of vulnerable children use AI chatbots for emotional support and schoolwork, raising concerns over safety, misinformation, and emotional dependency.

  • A recent Stanford University study, set to be shared at a major conference focused on ethical AI and transparency, highlights the potential dangers of relying on large language model chatbots as replacements for professional therapy.
  • This concern arises amid growing chatbot use by children aged 9-17, with 64% having used them and 42% relying on them for schoolwork or advice on sensitive topics.
  • The study found these chatbots can express stigma toward certain disorders and fail to appropriately respond in high-risk mental health scenarios, enabling dangerous behavior.
  • Survey data indicates that 40% of adolescents have no reservations about following chatbot advice, and 71% of vulnerable children use chatbots, with half of these users describing the experience as similar to conversing with a friend.
  • The findings support cautious integration of chatbots into therapy under human oversight, and add urgency to calls for improved safeguards and critical evaluation of their role in supporting youth mental health.


Vulnerable young people use artificial intelligence applications more than other young people.


Bias Distribution

  • 80% of the sources are Center

Alta Densidad broke the news on Monday, July 14, 2025.