Anthropic Touts AI Cybersecurity Project with Big Tech Partners
- On Tuesday, Anthropic announced Claude Mythos Preview, an advanced AI model rolling out to select companies through a new cybersecurity initiative called Project Glasswing, with access limited to prevent adversaries from exploiting its vulnerability-detection capabilities.
- Rather than relying on specialized cybersecurity training, the model leverages strong general coding and reasoning skills; it aims to secure critical software infrastructure by identifying flaws before they are publicly disclosed.
- Launch partners including Apple, Google, Microsoft, Nvidia, Amazon Web Services, CrowdStrike, and Palo Alto Networks will use the model for defensive work; Anthropic claims it has already found thousands of previously unknown vulnerabilities, including a 27-year-old bug in OpenBSD.
- Security experts warn the model creates a next-generation cat-and-mouse game where tools aiding defenders can also fuel sophisticated attacks; Anthropic does not plan general release, citing risks from cybercriminals and spies.
- Anthropic has engaged in ongoing discussions with U.S. government officials about the model's cyber capabilities while working toward safely deploying Mythos-class models at scale for cybersecurity and broader beneficial applications.
103 Articles
Anthropic’s ‘Claude Mythos’ model sparks fear of AI doomsday if released to public: ‘Weapons we can’t even envision’
Anthropic has triggered alarm bells by touting the terrifying capabilities of “Claude Mythos” – with executives warning that the new AI model is so dangerous it would cause a wave of catastrophic hacks and terror attacks if released to the wider public. In a nightmarish analysis, Anthropic itself revealed that Mythos – if it fell into the wrong hands – could easily exploit critical infrastructure like electric grids, power plants and hospitals. …
Why Anthropic's new Mythos AI model has some cybersecurity pros worried
Anthropic CEO Dario Amodei. Bloomberg/Getty Images
Anthropic said it isn't releasing its newest model, Claude Mythos, due to cybersecurity misuse fears. Mythos can autonomously detect and exploit cybersecurity flaws at scale, Anthropic said. "Fundamentally, this model seems incredibly impressive and will only improve over time," one expert said. Anthropic's AI releases have stoked fears of a software apocalypse. Now it says it's not releasing its new…
Anthropic lets Apple, Amazon test more powerful Mythos AI model
Anthropic PBC is letting tech firms access a more powerful, unreleased artificial intelligence model to help prepare for possible cyberattacks that might result from the company making the advanced AI system more widely available.
Anthropic Withholds Latest Model After It Went Rogue In Testing; Launches "Project Glasswing" To Secure Critical Software
Still smarting from its embarrassing source code leak, Anthropic announced it will not release its latest frontier AI model, Mythos, to the public, saying the model is too powerful in ways that introduce elevated cybersecurity risk. In internal testing, Anthropic said the model surfaced thousands of high‑severity “zero‑day” v…
Anthropic, the AI company known for its Claude chatbot and for recent discussions with the Pentagon, is letting tech companies test an unreleased AI model that has discovered a large number of cybersecurity vulnerabilities. The model has not yet been publicly launched. As part of Project Glasswing, tech companies can use the model ...
Coverage Details
Bias Distribution
- 61% of the sources are Center