Security Reckoning: New Claude Model Kept From Public, Used to Hunt Critical Bugs
Anthropic said the model can scan for zero-day flaws and found thousands of bugs, so it will stay limited to partners for testing.
- On Tuesday, Anthropic released a system card for Claude Mythos Preview, deciding against general release due to the model's significant capabilities. Access is limited to select partners for security testing.
- Anthropic designed Claude to scan code for vulnerabilities, but the model's ability to craft exploits during security research prompted safety concerns. The company claims it has identified "thousands" of bugs in major software.
- The model identified a 27-year-old bug in OpenBSD and a long-standing flaw in video software that automated tools had scanned five million times without detecting. Logan Graham, head of the Anthropic team testing new models, called it "the starting point for what we think will be an industry change point."
- Claude Mythos Preview will be accessible only to partners including Amazon Web Services, Apple, Google, Microsoft, and NVIDIA, who will use the model to locate security vulnerabilities and design patches.
- CrowdStrike chief technology officer Elia Zaitsev noted the model "demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities." Experts question whether global critical software requires immediate patching or rewriting.
19 Articles
The U.S. company Anthropic has found its new model "Claude Mythos Preview" so dangerous that it does not want to make it accessible to the general public. In a 244-page system card it has published, this time, the safety record of its worrying model.
Claude Mythos is such an advanced AI model that Anthropic itself doesn't think it's a good idea to make it available to the general public.
Excitement among hacking experts: a new AI model is said to have found thousands of security flaws in operating systems and browsers, some of which had gone unrecognized for decades. However, manufacturer Anthropic does not want to make the model accessible to everyone.
The model is so powerful at detecting security flaws that Anthropic is enlisting Amazon, Microsoft, Apple, Google, Palo Alto and other technology giants to secure the most critical software.
Coverage Details
Bias Distribution
- 50% of the sources lean Left