
Anthropic researchers discover the weird AI problem: Why thinking longer makes models dumber

Summary by VentureBeat
Anthropic research reveals AI models perform worse with extended reasoning time, challenging industry assumptions about test-time compute scaling in enterprise deployments.

9 Articles

AI models adopt hidden behaviors from seemingly harmless data, even without recognizable clues. Researchers warn this could be a basic principle of neural networks. The article "Anthropic warns: AI systems unintentionally learn problematic behavior patterns" first appeared on THE-DECODER.de.


A new Anthropic study shows that longer thinking does not make large language models smarter, but more error-prone. For companies using AI, this finding could have far-reaching consequences. Read more at t3n.de.


Researchers at Anthropic have found that reasoning models in particular can perform worse on a range of tasks when they think for longer. This so-called inverse scaling challenges the strategy of achieving better performance with ever more computing power. In a paper, the scientists describe four task classes in which they observed inverse scaling, starting with simple counting tasks with…
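To make the "inverse scaling" claim concrete, here is a minimal sketch (not Anthropic's actual evaluation code) of how one might measure accuracy as a function of a model's reasoning-token budget. The run_model callable and the specific budget values are assumptions for illustration only.

```python
# Minimal sketch of an inverse-scaling check for test-time compute:
# run the same tasks at increasing reasoning budgets and see whether
# accuracy rises or falls. `run_model` is a hypothetical stand-in for
# whatever API call returns an answer given a prompt and a cap on
# reasoning tokens.

from typing import Callable


def accuracy_vs_budget(
    tasks: list[tuple[str, str]],          # (prompt, expected_answer) pairs
    run_model: Callable[[str, int], str],  # hypothetical model call
    budgets: tuple[int, ...] = (1024, 2048, 4096, 8192),
) -> dict[int, float]:
    """Return accuracy at each reasoning-token budget."""
    results: dict[int, float] = {}
    for budget in budgets:
        correct = sum(
            run_model(prompt, budget).strip() == expected
            for prompt, expected in tasks
        )
        results[budget] = correct / len(tasks)
    return results

# Under the usual "more compute helps" assumption, accuracy should be
# non-decreasing in the budget; inverse scaling is the case where it
# falls as the budget grows, e.g. {1024: 0.92, 2048: 0.88, 4096: 0.81}.
```

A curve like this, measured per task class, is the kind of evidence behind the finding that extended reasoning can hurt rather than help.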


Bias Distribution

  • 100% of the sources are Center


RTInsights broke the news on Tuesday, July 22, 2025.
