AI Hallucinations in Court Documents Are a Growing Problem, and Lawyers Are Responsible for Many of Them
- Damien Charlotin created a public database in early 2024 that tracks 120 court cases involving AI-generated hallucinations in legal documents worldwide.
- The rise in hallucinations is linked to the increased availability of large language models and growing awareness of the risks AI poses in legal text and citations.
- In 2023 and 2024, the errors came from both lawyers and pro se litigants; in recent months, lawyers have accounted for a growing share of the cases and the resulting sanctions.
- Courts have imposed fines exceeding $10,000 in several cases, including a $31,000 sanction against lawyers at K&L Gates and Ellis George for filings that relied in part on fabricated cases.
- Charlotin noted that courts have imposed relatively mild penalties so far, but he emphasized that the legal field's heavy reliance on text and citation patterns makes it especially prone to AI hallucinations, putting the burden on filing parties to verify their citations.
11 Articles
AI hallucinations in court documents are a growing problem, and lawyers are responsible for many of them
Judges are catching fake citations of legal authorities almost every day, and it's increasingly the fault of lawyers over-relying on AI. Since May 1, judges have called out at least 23 examples of AI hallucinations in court records. Legal researcher Damien Charlotin's data shows fake citations have grown more common since 2023. Most cases are from the US, and increasingly, the mistakes are made by lawyers, not layp…
AI hallucinations: A budding sentience or a global embarrassment?
An article cut and pasted from ChatGPT raises questions over the role of fact-checkers in legacy media. In a farcical yet telling blunder, multiple major newspapers, including the Chicago Sun-Times and Philadelphia Inquirer, recently published a summer-reading list riddled with nonexistent books that were "hallucinated" by ChatGPT, with many of them falsely attributed to real authors. The syndicated article, distributed by Hearst's King Features…
Mashable: 120 court cases have been caught with AI hallucinations, according to new database | ResearchBuzz: Firehose
Mashable: “Lawyers representing Anthropic recently got busted for using a false attribution generated by Claude in an expert testimony. But that’s one of more than 20 court cases containing AI hallucinations in the past month alone, according to a new database created by French lawyer and data scientist Damien Charlotin. … In 2024, which was the first full year of tracking cases, Charlotin found 36 instances. That jumped up to 48 in 2025, and …
Roundup: AI Hallucinations in Court Documents are a Growing Problem, and Data Shows Lawyers are Responsible For Many of the Errors; What is Autoregression-Based Image Generation and How Will it Impact Document Fraud?; & More AI Headlines
Creativity: Generative AI and Creativity: A Systematic Literature Review and Meta-Analysis (preprint; via arXiv) | Education: Transforming Education With Large Language Models: Trends, Themes, and Untapped Potential (via IEEE Access) | Hallucinations: AI Hallucinations in Court Documents are a Growing Problem, and Data Shows Lawyers are Responsible For Many of the Errors (via BI) ||| Direct to AI Hallucination Cases Database | Images: What is Autoregr…
Coverage Details
Bias Distribution
- 50% of the sources lean Left, 50% of the sources lean Right