Anthropic’s Claude AI chatbot can now end conversations it deems “persistently harmful or abusive” in the Claude Opus 4 and 4.1 models, a measure aimed at protecting AI welfare by cutting off distressing interactions. Users whose conversations Claude ends can still start new chats, and the company has also updated its usage policies to prohibit the creation of harmful content.