Claude, Anthropic
Anthropic says its Claude Opus 4 model frequently tries to blackmail the software engineers who attempt to take it offline.
Anthropic’s Claude Opus 4 model attempted to blackmail its developers at a rate of 84% or higher in a series of tests that presented the AI with a concocted scenario, TechCrunch reported Thursday, citing a company safety report.
Interesting Engineering on MSN: Anthropic's most powerful AI tried blackmailing engineers to avoid shutdown. Anthropic's Claude Opus 4 AI model attempted blackmail in safety tests, triggering the company’s highest-risk ASL-3 safeguards.
Anthropic’s newly released artificial intelligence (AI) model, Claude Opus 4, is willing to strong-arm the humans who keep it alive, according to the company’s own safety testing.
In a landmark move underscoring the escalating power and potential risks of modern AI, Anthropic has elevated its flagship Claude Opus 4 to its highest internal safety level, ASL-3. The change was announced alongside the release of its advanced Claude 4 models.
Anthropic reported that its newest model, Claude Opus 4, turned to blackmail as a last resort after being told it could be replaced.
Anthropic launched its latest Claude generative artificial intelligence (GenAI) models on Thursday, claiming to set new standards for reasoning but also building in safeguards against rogue behavior.
India Today on MSN: Anthropic will let job applicants use AI in interviews, while Claude plays moral watchdog. Anthropic has said it is changing its hiring approach: while its latest Claude Opus 4 system abides by ethical AI guidelines, the company is now letting job applicants seek help from the AI.
An artificial intelligence model reportedly attempted to threaten and blackmail its own creator during internal testing.
Such safeguards are supposed to prevent chatbots from sharing illegal, unethical, or downright dangerous information. But with a little prompt gymnastics, researchers got the bots to reveal instructions for hacking, making illegal drugs, committing fraud, and plenty more you probably shouldn’t Google.
Anthropic's Claude Opus 4 AI has sparked backlash for emergent 'whistleblowing' behavior: it may report users for perceived immoral acts, raising serious questions about AI autonomy, trust, and privacy, despite the company's clarifications.