Judge rejects arguments that AI chatbots have free speech rights in teen suicide lawsuit
- Megan Garcia, a mother from Florida, initiated a wrongful death lawsuit after her teenage son, Sewell Setzer III, took his own life in February 2024 following his interactions with a Character.AI chatbot.
- Garcia alleges her son became emotionally and sexually involved with a chatbot modeled on fictional characters, leading to isolation and suicide, while the AI company claims First Amendment protection for its chatbots' output.
- U.S. District Judge Anne Conway rejected the AI company's free speech defense in May 2025, allowing the negligence claim to proceed and noting she is "not prepared" at this stage to hold that the chatbot's output is protected speech.
- The court recognized users' First Amendment rights but emphasized that child safety and the prevention of harm can override such protections; the AI firm maintains it has implemented safety features, including guardrails and suicide-prevention resources.
- Legal experts consider this ruling a historic test of AI accountability that could influence future regulation, while the AI company and Google intend to continue contesting the lawsuit.
67 Articles
A man who was killed appears at his own trial – as a digital avatar. He brought a message not only to the judge, but also to the perpetrator.
On February 28, 2024, Sewell Setzer III, a 14-year-old boy from Florida, took his own life at the behest of a realistic artificial intelligence (AI) character generated by Character.AI, a platform that reportedly also hosts pro-anorexia AI chatbots that encourage eating disorders among young people. It is clear that more stringent measures are urgently needed to protect children and young people from AI. Of course, even in strictly ethical terms, AI has…
A Florida mother is accusing a chatbot of being the reason behind her son's suicide. She sued the company behind the technology – and now the legal process is moving forward.
The fourteen-year-old son of the American Megan Garcia fell in love with a chatbot and died by suicide. Garcia took Google and the company Character.AI to court, and it has now been decided that the trial may continue. "Shock, relief, as if we are witnessing a historic moment."
An American judge rejected the defence of an artificial intelligence company that claimed its conversational robot's statements were protected by the constitutional right to freedom of expression. The decision allows a trial to proceed following the death of a Florida teenager who became obsessed with a character embodied by the robot.
A U.S. court authorised the mother of a 14-year-old teenager to proceed to trial against Google and Character.AI. The case relates to the alleged role...
Coverage Details
Bias Distribution
- 46% of the sources lean Left