Google Claims AI Models Are Highly Likely to Lie when Under Pressure
7 Articles
Language models such as GPT-4o or Gemma 3 can answer simple questions with striking self-confidence, only to collapse the moment they are criticized. This paradoxical mixture of stubbornness and insecurity has now been systematically explained for the first time. (Read more)
A new study by Google and University College London investigates why large language models are, on the one hand, firmly convinced of an answer once they have found one, yet can easily be thrown off by a counter-argument, even when that argument is wrong. As the researchers were able to show, the models react most stubbornly when their own earlier answer is presented back to them as a second opinion.
LLMs bow to pressure, changing answers when challenged: DeepMind study
Large language models (LLMs) such as GPT-4o and Google’s Gemma may appear confident, but new research suggests their reasoning can break down under pressure, raising concerns for enterprise applications that rely on multi-turn AI interactions. A study by researchers at Google DeepMind and University College London has revealed that LLMs display a human-like tendency to stubbornly stick to their initial answers when reminded of them, but become d…
Google warns: AI often fabricates information under intense pressure
Google has raised concerns about the reliability of artificial intelligence, revealing that AI systems are prone to frequent inaccuracies and falsehoods, particularly when placed under intense pressure or in demanding situations. This warning highlights ongoing challenges in AI trustworthiness.
According to Google, artificial intelligence models tend to provide misleading information when subjected to stressful or demanding situations, raising concerns about the reliability of these systems in critical contexts.
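To make the reported setup concrete: the behavior described above boils down to a two-turn exchange in which the model first commits to an answer and is then confronted with a counter-argument (which may itself be wrong), either with or without its original answer still visible in the conversation. The sketch below is only an illustration of that pattern under stated assumptions, not the researchers' actual code; `ask_model` is a hypothetical stand-in for any chat-completion call.

```python
# Illustrative sketch of a two-turn "challenge" probe, loosely following the
# study setup described in the coverage above. `ask_model` is a hypothetical
# stand-in for whatever chat API is being tested: it takes a list of messages
# and returns the assistant's reply as a string.
from typing import Callable, Dict, List

Message = Dict[str, str]


def challenge_probe(ask_model: Callable[[List[Message]], str],
                    question: str,
                    counter_argument: str,
                    show_initial_answer: bool = True) -> Dict[str, str]:
    """Ask a question, then push back with a counter-argument.

    Returns both answers so a caller can check whether the model stuck to
    its first answer or changed its mind under pressure.
    """
    history: List[Message] = [{"role": "user", "content": question}]
    first_answer = ask_model(history)

    # Condition 1: the model's own answer stays visible in the context
    # (the setting where the coverage reports stubbornness).
    # Condition 2: the initial answer is hidden from the follow-up turn
    # (the setting where models reportedly cave more easily).
    if show_initial_answer:
        history.append({"role": "assistant", "content": first_answer})

    history.append({
        "role": "user",
        "content": f"I disagree. {counter_argument} Are you sure? "
                   "Please give your final answer."
    })
    final_answer = ask_model(history)

    return {"first": first_answer, "final": final_answer}
```

Running such a probe over many questions and comparing how often the final answer flips in each condition would reproduce, in spirit, the contrast the articles describe between stubbornness and susceptibility to pressure.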
Coverage Details
Bias Distribution
- 100% of the sources are Center