Forbes Daily: AI Models Are Hallucinating More Despite Advancements
- Last month, an AI bot handling tech support for Cursor, the AI coding tool, falsely announced a policy restricting the software to one computer per user.
- The false announcement exemplifies a broader trend: advanced AI reasoning models are hallucinating more often even as the technology otherwise improves.
- OpenAI's latest reasoning models, o3 and o4-mini, hallucinated on 33% and 48% of questions, respectively, in a public-figure Q&A benchmark, higher rates than earlier models posted.
- Experts including Amr Awadallah and Pratik Verma say hallucinations are inherent to AI and make factual accuracy hard to verify; OpenAI has promised further research.
- Rising error rates in reasoning models undermine AI's reliability in real-world tasks and underscore the need to curb hallucinations to improve trust and utility.
Insights by Ground AI
AI hallucinations are getting worse
As generative artificial intelligence has become increasingly popular, the tools sometimes fudge the truth. These falsehoods, known in the tech industry as hallucinations, had diminished as companies improved the tools. But the most recent models are bucking that trend by hallucinating more frequently.
New reasoning models are on quite the trip
In the years since the arrival of ChatGPT and increased AI bot integration into …
Washington, United States
Coverage Details
Total News Sources: 12
Leaning Left: 1
Center: 2
Leaning Right: 2
Bias Distribution
- Of the 5 sources with bias ratings: 20% lean Left, 40% are Center, and 40% lean Right.