
Anthropic Gives Claude AI Power to End Conversations as Part of 'Model Welfare' Push

Anthropic’s Claude Opus 4 and 4.1 can autonomously end chats after repeated harmful user interactions, reflecting findings from over 700,000 analyzed conversations, the company said.

  • Anthropic has equipped its Claude Opus 4 and 4.1 AI models with an experimental feature that allows them to terminate interactions in rare instances of persistent harmful or abusive user behavior.
  • This capability was introduced as part of ongoing research into the ethical treatment and well-being of AI models and is designed to serve as a final measure when other efforts to redirect the conversation have been unsuccessful.
  • Anthropic's pre-deployment testing included model welfare assessments, which found that Claude consistently refused harmful requests and showed signs of apparent distress under repeated abuse.
  • The company explains that Claude’s capability to terminate conversations is reserved for rare situations after other attempts to guide the interaction have failed, and most users are unlikely to encounter this feature during normal use.
  • Anthropic views this function as a trial and is actively working to improve it, while maintaining uncertainty about Claude's moral status and the possibility of AI welfare.
Insights by Ground AI

15 Articles

Anthropic gives its artificial intelligence models a new ability: to end extreme conversations, out of concern for the "well-being" of the model itself

The world of artificial intelligence never ceases to surprise us. Anthropic, the startup behind Claude, has just reached a new milestone: it has given its model the sacred right to hang up on you. Yes, you read that right. In certain circumstances deemed "extreme," Claude Opus 4 and 4.1 can now decide that the conversation is over, thank you, goodbye. And mind you, this isn't to protect poor humans from the horrors they might read…


Bias Distribution

  • 67% of the sources are Center


anthropic.com broke the news on Friday, August 15, 2025.