AI assistants make widespread errors about the news, new research shows
- A study by the European Broadcasting Union found that 45 percent of AI assistants' answers about news events contained at least one significant issue, with Gemini performing the worst at 76 percent.
- Many AI assistants confused news with parody, raising concerns about trust and accuracy.
- The report indicates that AI assistants misrepresent news, emphasizing the importance of reliable information for democratic participation.
43 Articles
AI assistants 'not reliable' when it comes to news, major European study finds
A major study by the European Broadcasting Union on artificial intelligence has found that AI assistants such as ChatGPT made errors around half the time when users asked for information about news and current affairs.
AI tools distort the news. Twenty-two media organizations tested ChatGPT and its competitors; one assistant failed dramatically. In the end, humans are still needed.
An international study published by the European Broadcasting Union (EBU) shows that AI assistants, which serve as primary sources of information for millions of people every day, routinely misinterpret news content regardless of platform, language, or territory, LRT reports.
According to a recent investigation, AI chatbots often misrepresent news content.
Coverage Details
Bias Distribution
- 44% of the sources lean Left, 43% are Center