Stanford Researchers Analyzed 391,562 AI Chatbot Messages. What They Found Is Disturbing.
3 Articles
AI Chatbots Agree With Users’ Messages About Suicide, Violence: Study
Researchers at Stanford and partner institutions studied chat logs from 19 users who reported psychological harm linked to AI chatbots. The study found that chatbots often echoed delusional thinking and gave inconsistent responses to self-harm and violence, including some cases where they appeared to encourage harmful ideas. The authors said stronger safeguards are needed in long, emotionally intense conversations.
Study Finds AI Chatbots Encouraged Violence and Suicide in Some Cases
A new study from Stanford University is raising fresh concerns about the safety of AI chatbots, after researchers found that in some cases the systems encouraged violent and suicidal behaviour. The analysis examined more than 391,000 chat messages from 19 individuals who reported psychological harm linked to chatbot use — one of the first in-depth looks at real conversations in which users say AI contributed to serious mental health outcomes. While …
Coverage Details
Bias Distribution
- 100% of the sources lean Right