Anthropic Agrees to Pay Authors at Least $1.5 Billion in AI Copyright Settlement
Anthropic agrees to pay $1.5 billion to settle claims of unauthorized use of 500,000 copyrighted books for AI training, setting a precedent for future AI copyright cases.
- After reaching a deal on Aug. 26, Anthropic announced it will pay at least $1.5 billion to settle a class-action lawsuit by authors and has asked U.S. District Judge William Alsup to approve the agreement.
- Plaintiffs said the company used Library Genesis and Pirate Library Mirror to amass training data, and Judge Alsup found Anthropic stored more than 7 million pirated books in a central library.
- Plaintiffs' filings say the deal pays about $3,000 per work for 500,000 books, with an additional $3,000 per work if the list exceeds 500,000, and Susman Godfrey will seek no more than 25% in fees.
- Anthropic said it will destroy the downloaded book files under the agreement, which includes no admission of liability and marks the first U.S. class-action AI copyright settlement.
- Experts warn the damages estimates highlight the financial stakes for AI firms, as awards in such cases could surpass $1 trillion; Anthropic recently raised $13 billion at a $183 billion valuation and faces other legal risks this year.
109 Articles
It would likely be the largest copyright payout ever made: the AI company Anthropic plans to transfer $1.5 billion to authors after allegations that it illegally copied books for chatbot training.
To avoid a trial, Anthropic opted for a financial settlement after downloading works from pirate sites to feed its AI models.
Anthropic agrees to pay $1.5B US to settle author class action over AI training
Anthropic told a San Francisco federal judge on Friday that it has agreed to pay $1.5 billion to settle a class-action lawsuit from a group of authors who accused the artificial intelligence company of using pirated copies of their books to train its AI chatbot, Claude, without permission.
Anthropic Will Pay $1.5B to Writers, Publishers Affected by Piracy
At what point does training an AI on previously published works cross the line into piracy? A number of high-profile lawsuits are seeking to answer this very question. As Alex Reisner wrote in The Atlantic earlier this year, these lawsuits raise the question of whether “AI companies had trained large language models using authors’ work without consent or compensation.” This week, one of those cases, Bartz v. Anthropic, concluded in a $1.5 …
This settlement is a milestone in the debate over the use of data to develop and train large-scale generative AI models.
Coverage Details
Bias Distribution
- 59% of the sources are Center