Anthropic Wants to Use Your Chats With Claude for AI Training: Here's How to Opt Out

Anthropic is extending data retention to five years and giving users until September 28 to opt out of having their chats used for AI training, following an industry trend of improving models with real-world interactions.

  • Anthropic is requiring Claude users to decide by September 28, 2025, whether their conversations can be used to train its AI models.
  • The change marks a shift from Anthropic's previous policy of not using consumer chat data for training unless users opted in or reported the material.
  • Under the updated policy, users who choose to participate will have their conversations stored for up to five years to support AI training, while those who opt out will have their data kept for just 30 days, except for flagged chats, which may be preserved for as long as seven years.
  • A popup titled 'Updates to Consumer Terms and Policies' defaults users into training, with a smaller, less prominent opt-out toggle, raising concerns that users may accept without fully understanding the choice.
  • Anthropic states this updated policy will enable better AI models and improved safeguards, but the move has caused user confusion and drawn scrutiny from regulators.

24 Articles


Bias Distribution

  • 75% of the sources are Center


MacRumors broke the news in the United States on Thursday, August 28, 2025.