News

AI models are no longer just glitching: they're scheming, lying, and going rogue. From blackmail threats and fake contracts ...
Some legal experts are embracing AI, despite the technology's ongoing hallucination problem. Here's why that matters.
Several recent cases illustrate the discovery concerns associated with AI use. In some instances, AI-related ESI (electronically stored information) has been sought in discovery, challenging both privilege and work product protections.
Anthropic’s Claude AI gets a safety upgrade: it can now end harmful or abusive conversations and sets new standards for ...
Anthropic's Claude Sonnet 4 supports a 1 million-token context window, enabling AI to process entire codebases and documents in ...
Anthropic has given Claude, its AI chatbot, the ability to end potentially harmful or dangerous conversations with users.
The Claude AI models Opus 4 and 4.1 will end harmful conversations only in “rare, extreme cases of persistently harmful or ...
The propensity for AI systems to make mistakes and for humans to miss those mistakes has been on full display in the US legal ...
Anthropic scanned and discarded millions of books to train its Claude AI assistant. It also used pirated content. Legal rulings are both a win and a warning.