Study: Social Media 'Junk' Data Causes AI Cognitive Decline
Researchers found that viral social-media posts degrade large language models' reasoning, memory, and ethical alignment, with the effects persisting even after retraining, lending support to the 'LLM brain rot' hypothesis.
- This month, researchers led by Junyuan Hong at UT Austin, Texas A&M, and Purdue released a pre-print showing that large language models can develop 'AI brain rot' when trained on junk social-media text.
- The team fed open-source models such as Meta's Llama and Alibaba's Qwen short, high-engagement tweets and clickbait posts drawn from a HuggingFace tweet corpus to simulate social feeds (a toy version of this selection step is sketched after this list).
- Benchmarks revealed increased 'thought-skipping' and worse ethical-alignment scores, with Llama 3 and Qwen showing marked drops in accuracy after junk exposure.
- The team urged AI companies to prioritise data quality and avoid click-driven training sets, recommending routine cognitive health checks and a three-step evaluation process to detect decline.
- With AI itself now churning out social posts, the cycle risks compounding unless training data is curated: AI-generated content deepens the very degradation it feeds on. Oxford's choice of 'brain rot' as its 2024 Word of the Year underscores how timely the issue is.
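To make the selection step concrete, the sketch below partitions a tweet corpus into a "junk" set (short, highly viral posts) and a control set using a crude virality score. It is a minimal illustration, not the paper's actual pipeline: the Tweet fields, the engagement weighting, and both thresholds are assumptions invented for the example.

```python
# Hypothetical engagement-based "junk" filter: short, viral posts go into
# the junk training set, longer low-engagement posts into the control set.
# Field names, weights, and thresholds are illustrative assumptions, not
# the pre-print's exact recipe.

from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    likes: int
    retweets: int

def engagement_score(t: Tweet) -> float:
    # Crude virality proxy: weighted interactions per token of text.
    tokens = max(len(t.text.split()), 1)
    return (t.likes + 2 * t.retweets) / tokens

def split_corpus(tweets, junk_threshold=50.0, max_junk_tokens=30):
    """Partition a tweet corpus into 'junk' (short + viral) and 'control'."""
    junk, control = [], []
    for t in tweets:
        short = len(t.text.split()) <= max_junk_tokens
        viral = engagement_score(t) >= junk_threshold
        (junk if short and viral else control).append(t)
    return junk, control

if __name__ == "__main__":
    corpus = [
        Tweet("you won't BELIEVE what happened next", 90_000, 40_000),
        Tweet("Notes on continual pre-training and catastrophic forgetting "
              "in mid-sized language models, thread below.", 120, 30),
    ]
    junk, control = split_corpus(corpus)
    print(f"junk={len(junk)} control={len(control)}")
```

On real data the thresholds would presumably be tuned against the corpus's engagement distribution rather than hard-coded.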
25 Articles
Study: AI declines if you train it with junk content from social media (ANSA)
Training AI on "Brain Rot" Content Causes Lasting Cognitive Damage, New Paper Finds
If you’ve spent any time around kids lately, you’ve probably heard about “brain rot.” Named Oxford Word of the Year in 2024, it’s defined as the “supposed deterioration of a person’s mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging.” As it turns out, it’s not just human minds getting rotted by low-effort memes like “6-7” and “s…
Constant Scrolling Is Giving AI Brain Rot
In the race to make artificial intelligence smarter, we might’ve accidentally taught it how to be just as stupid as us. A new study out of the University of Texas at Austin, Texas A&M, and Purdue University suggests that large language models can develop a kind of “AI brain rot” when trained on the same junk polluting your social media feed. Researchers, led by Junyuan Hong (now at the National University of Singapore), wanted to know what happe…
Researchers show that training on “junk data” can lead to LLM “brain rot”
On the surface, it seems obvious that training an LLM with “high quality” data will lead to better performance than feeding it any old “low quality” junk you can find. Now, a group of researchers is attempting to quantify just how much this kind of low quality data can cause an LLM to experience effects akin to human “brain rot.” For a pre-print paper published this month, the researchers from Texas A&M, the University of Texas, and Purdue Unive…
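The "routine cognitive health checks" the researchers recommend amount to re-running a fixed benchmark across training phases and alarming on regressions. Here is a minimal sketch of that idea, assuming a generic `evaluate(checkpoint) -> accuracy` function; the checkpoint names and the 5-point alert threshold are invented for illustration, not taken from the paper.

```python
# Hypothetical "cognitive health check": score a model on a fixed reasoning
# benchmark before and after a training phase and flag significant drops.
# evaluate() stands in for any real benchmark harness; the alert threshold
# is an illustrative assumption.

from typing import Callable

def health_check(
    evaluate: Callable[[str], float],  # checkpoint path -> accuracy in [0, 100]
    baseline_ckpt: str,
    current_ckpt: str,
    alert_drop: float = 5.0,
) -> bool:
    """Return True if accuracy fell by more than alert_drop points."""
    before = evaluate(baseline_ckpt)
    after = evaluate(current_ckpt)
    print(f"accuracy: {before:.1f} -> {after:.1f} (delta {after - before:+.1f})")
    return (before - after) > alert_drop

if __name__ == "__main__":
    # Toy evaluator with hard-coded scores, standing in for a real benchmark run.
    scores = {"base.ckpt": 74.9, "after_junk.ckpt": 57.2}
    if health_check(scores.get, "base.ckpt", "after_junk.ckpt"):
        print("alert: possible brain-rot-style degradation detected")
```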
Coverage Details
Bias Distribution
- 50% of the sources lean Left