AI assistants make widespread errors about the news, new research shows
A study of 3,000 AI responses across 18 countries found 45% had significant issues, including 31% with serious sourcing errors and 20% with major accuracy problems.
- On October 22, 2025, the European Broadcasting Union and BBC released a study showing 45% of AI answers from ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity had significant issues.
- The study enlisted 22 public service media organisations across 18 countries, working in 14 languages, which posed the same 30 news-related questions between late May and early June 2025, during a two-week window in which participating publishers granted the assistants access to their content.
- Major accuracy failures, including hallucinations and outdated facts, affected 20% of responses, while sourcing errors affected 31%; Google's Gemini performed worst, with significant issues in 76% of its replies.
- The broadcasters and media organisations behind the study are calling for national governments and AI companies to act, launching the 'Facts In: Facts Out' campaign and the News Integrity in AI Assistants Toolkit.
- Only 7% of online news consumers use AI chatbots for news, rising to 15% among under-25s. The EBU's Jean Philip De Tender warned: "This research conclusively shows that these failings are not isolated incidents."
101 Articles
45% of AI-generated news is wrong, new study warns — here’s what happened when I tested it myself
A new study from the European Broadcasting Union found that nearly half of AI-generated news responses contain serious errors. I put the top bots to the test, and the results were even more surprising than expected.
AI assistants, which have become an everyday tool for millions of people, routinely distort news content regardless of language, territory, or platform, according to a joint study by the European Broadcasting Union and the BBC.
Artificial Intelligence Distorts the News in 45% of Its Responses: "They Are Not Isolated Incidents"
A large-scale study across 18 countries and 14 languages finds that ChatGPT, Gemini, Copilot and Perplexity make false attributions to media outlets, introduce value judgements when commenting on current affairs, or contain inaccuracies "which may mislead the reader"
The main artificial intelligence (AI) assistants distort the content of the news in almost half of their responses, according to a new study published on Wednesday by the European Broadcasting Union (EBU) and the BBC. The international research examined 3,000 answers to questions about the news from the leading AI assistants, software tools that use AI to understand natural-language requests and complete tasks for a user. The accurac…
AI chatbots make mistakes with news content nearly half of the time, says study (Business)
A new report from a global alliance of public broadcasters says AI chatbots make mistakes with news content nearly half of the time. The study, which looked at how AI chatbots answer questions about news and current affairs, involved 22 public media organizations in 18 countries, including CBC/Radio...
Coverage Details
Bias Distribution
- 56% of the sources lean Left