Anthropic wins key US ruling on AI training in authors' copyright lawsuit
- A US federal judge ruled that Anthropic's conversion and use of copyrighted books to train its AI falls under fair use, but declined to dismiss the case, ordering a trial over the pirated copies.
- The ruling arose from a lawsuit filed last year by three authors who accused Anthropic of using pirated copies of their books to build a large digital library for AI training.
- Judge Alsup described Anthropic's use as 'exceedingly transformative,' stating AI models use works to create something different rather than replicate or supplant them.
- The company amassed a central library containing more than seven million illegally obtained books and could face statutory damages of up to $150,000 per copyrighted work, with the trial to determine its liability.
- The decision sets a precedent favoring AI training as fair use, but it also signals continuing legal challenges for the industry and heightened scrutiny of content creators' rights.
293 Articles
In a first-of-its-kind decision, an AI company wins a copyright infringement lawsuit brought by authors
U.S. District Judge William Alsup's ruling this week, in a case brought by authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson last year, opens a potential pathway for AI companies to train their large language models on copyrighted works without authors' consent — but only if copies of the works were obtained legally.
Training an AI model on unauthorised books is not illegal, a U.S. judge decides. A key ruling, poised to set precedent in the standoff between artists and artificial intelligence.


However, further piracy claims could follow, against both Anthropic and other AI companies.
The judge upheld a key legal argument made by AI companies but is still allowing the case against the startup to proceed over its use of "pirated" books.
Coverage Details
Bias Distribution
- 55% of the sources are Center