Anthropic researchers discover the weird AI problem: Why thinking longer makes models dumber
9 Articles
Anthropic research reveals AI models perform worse with extended reasoning time, challenging industry assumptions about test-time compute scaling in enterprise deployments.
AI models can adopt hidden behaviors from seemingly harmless training data, even without recognizable clues. The researchers warn that this could be a basic principle of neural networks. (THE-DECODER.de)
A new Anthropic study shows that longer thinking does not make large language models smarter, but more error-prone. For companies using AI, this finding could have far-reaching consequences. (t3n.de)
AI Models Perform Worse with Extended Reasoning Time, Anthropic Researchers Find
Anthropic research finds that AI models exhibit decreased performance with prolonged reasoning time, contradicting industry beliefs about test-time compute scaling in business applications. The discovery challenges conventional understanding of AI model optimization and deployment strategies. (nextbigwhat)
Researchers at Anthropic have found that longer thinking can make models, especially reasoning models, perform worse on a range of tasks. This so-called inverse scaling challenges the strategy of achieving better performance by applying ever more computing power. In a paper, the scientists describe four task classes in which they observed inverse scaling, starting with simple counting tasks with distractors.
Coverage Details
Bias Distribution
- 100% of the sources are Center