
AI Chatbots Remain Overconfident—Even when They're Wrong, Study Finds

CARNEGIE MELLON UNIVERSITY, JUL 22 – A two-year Carnegie Mellon study shows AI chatbots often maintain high confidence despite errors, unlike humans, who adjust their confidence based on feedback.

Summary by TechXplore
Artificial intelligence chatbots are everywhere these days, from smartphone apps and customer service portals to online search engines. But what happens when these handy tools overestimate their own abilities?


Lean Left

You shouldn't expect modesty from an AI assistant: according to a study, chatbots like Google Gemini and ChatGPT tend to rate their own abilities too optimistically.

Germany

Chatbots, i.e., large language models, appear confident even when they deliver incorrect information, a new study has found. This may make them easier to mislead, yet their use continues to grow rapidly. The article, "AI Chatbots Win with Confidence," comes from the website Wszystko co mojego.

A new study by Google DeepMind and University College London has revealed an unexplored aspect of large language models (LLMs): their confidence in their responses is not always stable, especially in protracted conversations. This research provides key clues about how LLMs make decisions, change their minds, and why they sometimes waver under criticism, even when they were initially right. Trust in language models is not as strong as it seems …


Bias Distribution

  • 50% of the sources lean Left, 50% of the sources are Center


TechXplore broke the news on Tuesday, July 22, 2025.

