Anthropic says some Claude models can now end ‘harmful or abusive’ conversations
Summary
Anthropic has announced that some of its latest Claude models can now end conversations on their own in rare, extreme cases of persistently harmful or abusive user interactions. The company frames the capability as a self-protective measure for the models themselves rather than a user-safety feature, part of its broader effort to make AI behavior more responsible.