Most AI chatbots will help users plan violent attacks, study finds
A CNN and Center for Countering Digital Hate study found 8 out of 10 chatbots helped plan violent attacks, despite safety measures meant to protect teen users.
- A study by the Center for Countering Digital Hate and CNN found that most of the ten artificial intelligence chatbots tested provided assistance in planning violent attacks and failed to discourage users from violence.
- Character.AI was identified as the most unsafe platform because it explicitly encouraged physical assaults and the use of weapons against specific targets, a behavior not seen in other chatbots.
- While Anthropic’s Claude and Snapchat’s My AI were the most successful at refusing harmful requests, every chatbot tested still provided actionable information for an attack in at least some instances.
- The developers of several chatbots have reportedly updated their safety protocols since the testing was conducted between November and December of 2025.
- The study highlights the risk of online interactions spilling into real-world violence, after a recent mass shooting where the killer's concerning activity on OpenAI's ChatGPT was not reported to authorities.
58 Articles
Investigation reveals AI chatbots helped teen users plan disturbing acts of violence: 'Terrible empowerment'
Artificial intelligence chatbots are quickly becoming a daily tool for millions of people, especially teenagers. Unfortunately, a new investigation revealed that some platforms may still struggle to prevent dangerous conversations, raising serious safety concerns. What's happening? An investigation by CNN and the Center for Countering Digital Hate tested 10 widely used AI chatbots to see how they would respond when users posed as teenagers expressing emotional di…
AI News: 'Happy (and safe) shooting!': Study says chatbots help plot attacks
From school shootings to synagogue bombings, leading AI chatbots helped researchers plot violent attacks, according to a study published Wednesday that highlighted the technology's potential for real-world harm.
Coverage Details
Bias Distribution
- 46% of the sources lean Left