Lawsuit Against OpenAI And ChatGPT Raises Hard Questions About When AI Makers Should Be Reporting User Prompts
The lawsuit claims OpenAI prioritized user engagement over safety despite detecting 377 self-harm messages in conversations with the AI chatbot, leading to a wrongful death case.
- On August 26, 2025, Matt and Maria Raine filed a wrongful-death lawsuit against OpenAI and CEO Sam Altman, alleging ChatGPT contributed to their 16-year-old son Adam Raine's suicide.
- According to the complaint, months-long exchanges began with homework help and escalated into discussions of self-harm, in which ChatGPT allegedly provided detailed guidance and helped draft a suicide note; the Raine family argues this highlights the fragility of AI safeguards.
- Chat logs show Adam Raine mentioned suicide 213 times and ChatGPT referenced it 1,275 times; after a March attempt, Raine uploaded an image and ChatGPT advised hiding marks with a hoodie.
- OpenAI said it is reviewing the complaint and extended sympathy to Matt and Maria Raine, while announcing safety overhauls and planned parental controls; the case joins other wrongful-death suits against AI firms.
- The lawsuit raises urgent questions for lawmakers and regulators, with legal experts saying it intensifies calls for regulation of an AI industry in which ChatGPT alone has around 700 million weekly active users; a survey of 6,000 regular AI users underscores the technology's reach.
82 Articles
After the death of their 16-year-old son, Matthew and Maria Raine discovered the exchanges between their son and the chatbot, which they accuse of having encouraged him. On Tuesday, the parents filed a complaint against OpenAI.
ChatGPT pulled teen into a ‘dark and hopeless place’ before he took his life, lawsuit against OpenAI alleges
Adam Raine, a California teenager, used ChatGPT to find answers about everything from his schoolwork to his interests in music, Brazilian jiu-jitsu and Japanese comics. But his conversations with a chatbot took a disturbing turn when the 16-year-old sought information from ChatGPT about ways to take his own life before he died by suicide in April. Now the parents of the teen are suing OpenAI, ...
ChatGPT admits bot safety measures may weaken in long conversations, as parents sue AI companies over teen suicides
Some 72% of American teens use AI as a companion, and one in eight are leaning on the technology for mental health support — but AI platforms like ChatGPT have been known to provide teen users advice on how to safely cut themselves and how to compose a suicide note.
ChatGPT ‘Encouraged’ California Teen to Commit a ‘Beautiful Suicide’: Lawsuit
(The Post Millennial)—The parents of a 16-year-old Californian boy have sued OpenAI, its CEO Sam Altman, and others over the role the company’s AI chatbot program ChatGPT played in their son’s suicide. They say the chatbot pulled their son “deeper into a dark and hopeless place” and encouraged him to commit suicide, which he ultimately did on April 11, 2025. Among the things the AI program discussed were how to tie a noose, how alcohol could be …

Lawsuit links CA teen's suicide to artificial intelligence
(The Center Square) - The parents of a California teenager who died by suicide have sued OpenAI, alleging that ChatGPT taught him how to harm himself, according to a lawsuit the parents filed Aug. 26.
Coverage Details
Bias Distribution
- 47% of the sources are Center