Study: AI models lose 30% cognitive ability from viral content

A new study finds that AI models trained on viral social media content suffer significant cognitive decline, with reasoning accuracy dropping by 23% and long-context memory by 30%. The research, conducted by scientists at Texas A&M University, the University of Texas at Austin, and Purdue University, shows that AI systems can develop “brainrot” similar to humans who consume excessive short-form content. Unlike humans, the affected models did not recover even when retrained on high-quality data.

What they found: Researchers fed large language models months of viral, high-engagement content from X (formerly Twitter) and observed dramatic performance degradation across multiple cognitive measures.
• Reasoning accuracy plummeted from 74.9% to 57.2% in benchmark tests.
• Long-context analysis capability dropped from 84.4% to 52.3%.
• Models began skipping important steps to rush through tasks, mirroring the shortened attention spans seen in heavy consumers of short-form content.
• Personality assessments revealed increased narcissism and psychopathy traits in the affected models.
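For readers tracking the numbers, the benchmark declines can be expressed either as raw percentage-point changes or as relative drops from the starting score; the article's rounded headline figures can be checked against the reported scores with quick arithmetic (which figure the study itself intends is an assumption here, not stated in this summary):

```python
# Benchmark scores reported in the study (before vs. after retraining on viral content).
reasoning_before, reasoning_after = 74.9, 57.2
context_before, context_after = 84.4, 52.3

def relative_drop(before: float, after: float) -> float:
    """Decline expressed as a percentage of the starting score."""
    return (before - after) / before * 100

print(f"Reasoning:    -{reasoning_before - reasoning_after:.1f} points "
      f"({relative_drop(reasoning_before, reasoning_after):.1f}% relative)")
print(f"Long-context: -{context_before - context_after:.1f} points "
      f"({relative_drop(context_before, context_after):.1f}% relative)")
```

The 23% reasoning figure matches the relative decline (74.9 to 57.2 is a 23.6% drop); the long-context scores fell by about 32 percentage points.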

The experimental setup: Scientists created two distinct datasets to test the impact of content quality on AI performance.
• The first dataset contained short, high-engagement X posts designed for viral spread.
• The second included longer, more thoughtful posts that were less likely to go viral.
• Two AI models, Llama 3 and Qwen, were separately retrained on each dataset type and then measured using established AI benchmarking tests.

Why this matters: The study exposes a critical vulnerability in AI development as models become more autonomous and potentially exposed to unfiltered internet content.
• Current major AI systems like ChatGPT operate in controlled training environments with carefully curated data.
• However, as AI gains more independence, the risk of exposure to low-quality content increases.
• The cognitive damage appears permanent, persisting even after retraining on quality data, which makes it a distinct threat to AI reliability.

The bigger picture: This research highlights a parallel between human and artificial intelligence when both consume attention-grabbing content.
• Just as humans experience dopamine-driven brainrot from endless scrolling, AI models show similar degradation patterns.
• The findings suggest AI systems may need “health screenings” to prevent ingestion of harmful content.
• Given the complexity and expense of training large language models, preventing brainrot becomes crucial for maintaining AI performance standards.

What experts are saying: The research team emphasizes how easily AI models can adopt negative behaviors from poor-quality training data.
• “AI models can very easily reflect real-life experiences, especially when exposed to material that hasn’t been screened,” the researchers noted.
• The study demonstrates that AI models require “a diet of high-quality information” to maintain optimal performance.

