News

Some legal experts are embracing AI, despite the technology's ongoing hallucination problem. Here's why that matters.
Anthropic has given Claude, its AI chatbot, the ability to end potentially harmful or dangerous conversations with users.
AI models are no longer just glitching: they're scheming, lying, and going rogue. From blackmail threats and fake contracts ...
The Claude AI models Opus 4 and 4.1 will end harmful conversations only in "rare, extreme cases of persistently harmful or ...
Several recent cases illustrate the discovery concerns associated with AI use. In some instances, AI-related ESI (electronically stored information) has been sought in discovery, challenging both privilege and work product protections.
Anthropic's Claude Sonnet 4 supports a 1 million token context window, enabling AI to process entire codebases and documents in ...
Anthropic scanned and discarded millions of books to train its Claude AI assistant. It also used pirated content. Legal rulings are both a win and a warning.
Anthropic's Claude AI gets a safety upgrade: it can now end harmful or abusive conversations and sets new standards for ...