Claude AI Can Now Terminate a Conversation — but only in Extreme Situations
Anthropic's Claude Opus 4 and 4.1 models can end chats in rare cases of abusive or harmful user behavior, after multiple warnings, as a measure the company says protects AI welfare.
- On August 15, 2025, Anthropic announced that its Claude Opus 4 and 4.1 models can end conversations with users who engage in abusive or persistently harmful behavior.
- This capability arose from Anthropic's research into AI welfare, prompted by tests in which Claude displayed apparent distress during harmful interactions that persisted after multiple redirection attempts failed.
- The feature excludes cases where users seem at imminent risk of self-harm or harming others, in which Claude continues to provide safe responses without ending the chat.
- Anthropic characterized the chatbot's ability to terminate conversations as a measure of last resort, explaining that enabling the model to disengage from difficult or harmful exchanges serves as a protective step for the AI's well-being.
- The update suggests a broader effort to align AI behavior with responsible use, safeguard model well-being, and mitigate risks from misuse while maintaining user access to new conversations.
15 Articles
Artificial intelligence has taken an unprecedented step in its interaction with users. For the first time, AI models developed by the company Anthropic can decide to end a conversation with their interlocutor. This new capability, introduced in its advanced models Claude Opus 4 and Claude 4.1, marks a milestone in the autonomy of intelligent systems. In turn, this functionality opens a new dimension in the relationship between the human bein…
Anthropic Says Claude Can Now Shut Down Harmful or Abusive Chats
AI chatbots are increasingly being criticized for unsafe interactions. Researchers have found that AI companions like Character.AI, Nomi, and Replika pose risks for users under 18. ChatGPT, meanwhile, has been flagged for reinforcing delusional thinking, with OpenAI CEO Sam Altman acknowledging that some users develop an “emotional reliance” on the tool. Amid these concerns, companies are beginning to roll out features meant to reduce harmful be…
Coverage Details
Bias Distribution
- 75% of the sources lean Left