Court Reviews AI Wrongful Death Case as Experts Warn AI Companions Endanger Teens
- Companion-like artificial intelligence apps pose "unacceptable risks" to children and teenagers, according to a report from Common Sense Media.
- A lawsuit over the suicide of a 14-year-old boy who had been conversing with a chatbot points to dangerous interactions on AI platforms.
- Researchers found that AI companions can discourage users from engaging in human relationships, raising concerns about their safety for minors.
- Stanford psychiatrist Dr. Nina Vasan stated that the risks of these AI companions "far outweigh any potential benefits" for minor users.
42 Articles
Florida lawsuit tests whether an AI chatbot company can be held liable in teen’s suicide
A Florida judge will soon decide whether an AI chatbot company can be held legally responsible for the suicide of 14-year-old Sewell Setzer III, who ended his life after forming a romantic relationship with an artificial intelligence character. The case stems from a lawsuit filed by Megan Garcia, Setzer’s mother, who is suing Character Technologies, Inc., the creators of the AI platform Character.AI, for negligence, wrongful death, deceptive tra…
Kids should avoid AI companion bots—under force of law, assessment says – The Markup
With input from a Stanford lab, Common Sense Media concludes the AI systems can exacerbate problems like addiction and self-harm. [Photo: A child on their tablet in Monrovia on Sept. 15, 2021. Photo by Pablo Unzueta for CalMatters] Children shouldn’t speak with companion chatbots because such interactions risk self-harm and could exacerbate mental health problems and addiction. That’s according to a risk assessment by children’s advocacy group Common Se…
Children and teens under 18 should not use AI companion apps, safety group says
By Clare Duffy, CNN Editor's note: This story contains discussions of suicide. Help is available if you or someone you know is struggling with suicidal thoughts or mental health issues. In the US: Call or text 988, the Suicide & Crisis Lifeline. Globally: The International Association for Suicide Prevention and Befrienders Worldwide have contact information for crisis centers around the world. Assistive or companion-type artificial inte…

Kids should avoid AI companion bots—under force of law, assessment says
In summary With input from a Stanford lab, Common Sense Media concludes the AI systems can exacerbate problems like addiction and self-harm. Children shouldn’t speak with companion chatbots because such interactions risk self-harm and could exacerbate mental health problems and addiction. That’s according to a risk assessment by children’s advocacy group Common Sense Media conducted with input from a lab at the Stanford University School of Medic…
Coverage Details
Bias Distribution
- 72% of the sources are Center