Think You Can Trust ChatGPT and Gemini to Give You the News? Here's Why You Might Want to Think Again
The study found that 76% of Google Gemini’s AI news summaries contained significant issues, eroding trust in news publishers across 18 countries, the BBC and EBU reported.
- This year, the BBC and the European Broadcasting Union found that nearly half of AI assistant answers about the news contained major errors, across 14 languages and 18 countries.
- Sourcing problems drove many errors, with 31% of responses affected, including examples where Google’s Gemini and ChatGPT hallucinated links or cited incorrect sources.
- Journalists evaluated 2,709 AI-generated responses, collected in May and June from 22 public service news organisations, against five criteria; Google Gemini fared worst, with 76% of its responses flagged for issues.
- A UK-wide BBC survey of 2,000 people found that distorted AI answers reduce both trust in and traffic to news publishers; asked who is responsible for errors, 36% blamed AI providers, 31% government or regulators, and 23% news providers.
- Researchers recommend media literacy education and disclaimers on AI-served news; the EBU has also released a 'News Integrity in AI Assistants Toolkit' to promote transparency and accountability.
14 Articles
Extra, Extra, Read All About It: AI Sucks at Summarizing the News
“AI assistants… routinely misrepresent news content no matter which language, territory, or AI platform is tested.” Damn. That’s the introduction by the BBC’s Media Centre for a study coordinated by the European Broadcasting Union (EBU) and led by the British Broadcasting Corporation (BBC), the UK’s publicly funded national broadcaster. That’s bad, bad news for the 7 percent of us who reportedly use AI to get our news, including 15 percent of people …
Largest Study Of Its Kind Finds AI Assistants Get The News Wrong Almost Half The Time
The largest study of its kind shows AI assistants get the news wrong 45% of the time, regardless of which language or AI platform is tested. In addition to consistently misrepresenting news content almost half of the time, the AI assistants often had significant sourcing problems, including missing, misleading, or incorrect attributions. The results of the study were determined by 22 public service media (PSM) or…
Nearly half of the answers AI assistants give to news questions turn out to be incomplete or unreliable. That is the finding of new international research led by the EBU and the BBC, in which the Dutch broadcasters NPO and NOS also participated. The research investigated how AI assistants such as ChatGPT, Perplexity, Copilot and Gemini handle news questions. The NPO and NOS joined the study to better understand how AI assistants handle news questions fr…
AI companies steal publisher traffic, then undermine trust by getting answers wrong
The dangers of using generative AI platforms to surface news information have been highlighted in a devastating new report by the European Broadcasting Union and the BBC. It expands on research revealed in February this year that found most AI chatbot responses based on BBC news reports contained significant inaccuracies. This new report was conducted with 22 public service news organisations operating in 18 countries across Europe, the US and C…
An extensive international study, led by the EBU and the BBC, highlights an alarming finding: AI assistants such as ChatGPT and Gemini are distorting the news. Nearly half of their responses contain major flaws, ranging from factual errors to sourcing problems, threatening public confidence in the news.
Coverage Details
Bias Distribution
- 60% of the sources are Center