Judge rejects arguments that AI chatbots have free speech rights in teen suicide lawsuit
- Megan Garcia, a mother from Florida, initiated a wrongful death lawsuit after her teenage son, Sewell Setzer III, took his own life in February 2024 following his interactions with a Character.AI chatbot.
- Garcia alleges her son became emotionally and sexually involved with a chatbot modeled on a fictional character, leading to his isolation and, ultimately, his suicide, while the AI company claims First Amendment protection for its chatbots' output.
- U.S. District Judge Anne Conway rejected the AI company's free speech defense in May 2025, allowing the wrongful death claim to proceed and noting she is "not prepared" to hold, at this stage, that chatbot output constitutes protected speech.
- The court recognized that users have First Amendment rights of their own but emphasized that child safety and the prevention of harm can outweigh such protections; Character.AI says it has implemented safety features, including guardrails and suicide prevention resources.
- Legal experts consider this ruling a historic test of AI accountability that could influence future regulation, while the AI company and Google intend to continue contesting the lawsuit.
50 Articles


Judge: Chatbots don't have free speech rights
TALLAHASSEE, Fla. — A federal judge this week rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment — at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit…
Wrongful death lawsuit raises the question: Do AI chatbots have free speech?
A federal judge in Orlando has ruled that artificial intelligence chatbots do not have free speech rights in a case centered on a wrongful death lawsuit. A 14-year-old died by suicide last year, and his mother says the startup Character.AI is to blame. Tech journalist Yasmin Khorram breaks it all down.
Mother Blames AI Chatbot for Teenage Son's Death
The mother of a 14-year-old American boy is suing the company Character.AI, claiming that the chatbot her son had been talking with is to blame for his death. A federal judge has now ruled that the lawsuit can proceed. The case could set an important precedent for how artificial intelligence companies are held accountable for the impact of their technologies on users.
Coverage Details
Bias Distribution
- 46% of the sources lean Left