
Study: Social Media 'Junk' Data Causes AI Cognitive Decline

Researchers found that viral social media posts degrade large language models' reasoning, memory, and ethics, with effects persisting despite retraining, confirming the 'LLM brain rot' hypothesis.

  • This month, a pre-print from researchers at UT Austin, Texas A&M and Purdue, led by Junyuan Hong, found that large language models can develop "brain rot" when trained on junk social-media text.
  • The researchers fed open-source models such as Meta's Llama and Alibaba's Qwen short, high-engagement tweets and clickbait posts drawn from a tweet corpus hosted on HuggingFace to simulate social feeds; a minimal sketch of this kind of engagement-based screening follows the list.
  • Benchmarks revealed increased "thought-skipping" and lower ethical-alignment scores, with Llama 3 and Qwen showing marked drops in accuracy after exposure to the junk data.
  • The team urged AI companies to prioritise data quality and avoid click-driven training sets, recommending routine cognitive health checks and a three-step evaluation process to detect decline.
  • With AI itself now churning out social posts, the cycle risks worsening unless training data is curated, since AI-generated social content deepens the degradation; Oxford's 2024 Word of the Year, "brain rot", underscores how timely the issue is.
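
The engagement-based screening described above can be sketched in a few lines. The snippet below is a minimal illustration, not the study's actual pipeline: the Post fields, word-count cutoff and engagement threshold are assumptions chosen for the example, standing in for however the authors actually define "junk" posts.

```python
# Minimal sketch (assumed thresholds and fields, not the paper's real criteria):
# flag very short, very popular posts as likely "junk" training text and split
# a corpus into junk and control sets before continual pre-training.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    likes: int
    retweets: int


def is_junk(post: Post, max_words: int = 30, min_engagement: int = 500) -> bool:
    """Treat short, high-engagement posts as likely junk data."""
    word_count = len(post.text.split())
    engagement = post.likes + post.retweets
    return word_count <= max_words and engagement >= min_engagement


def split_corpus(posts: list[Post]) -> tuple[list[Post], list[Post]]:
    """Separate a corpus into junk and control subsets."""
    junk = [p for p in posts if is_junk(p)]
    control = [p for p in posts if not is_junk(p)]
    return junk, control


if __name__ == "__main__":
    sample = [
        Post("you won't BELIEVE what this AI just did", likes=12_000, retweets=4_000),
        Post(
            "A long-form thread walking through the training setup, datasets and "
            "caveats in detail, with links to the pre-print and evaluation code.",
            likes=40,
            retweets=3,
        ),
    ]
    junk, control = split_corpus(sample)
    print(f"junk posts: {len(junk)}, control posts: {len(control)}")
```

In this toy run the clickbait post lands in the junk set and the long-form post in the control set; a real pipeline would tune these cutoffs and add semantic-quality filters rather than rely on length and engagement alone.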

25 Articles

Study: decline if you train with junk content from social media (ANSA) · Italy · Center

Bias Distribution

  • 50% of the sources lean Left

Time Magazine broke the news in the United States on Tuesday, June 17, 2025.