
AI's "Too Dangerous to Release" Claims Double as Marketing

Anthropic says the model found thousands of security flaws and is limiting access to selected partners while it tests defenses against misuse.

  • Leading AI developers OpenAI and Anthropic have restricted public access to new models GPT-Rosalind and Claude Mythos, reserving them for "qualified customers" through a "trusted access program."
  • Anthropic warned that the "fallout" for national security "could be severe," a narrative positioning firms as the only entities capable of managing such dangerous AI capabilities.
  • Heidy Khlaaf, chief AI scientist at the AI Now Institute, questioned the absence of false-positive rates in Anthropic's claims, stating, "This is not some unknown metric."
  • The White House held a "productive and constructive" meeting with Anthropic CEO Dario Amodei last week. Shannon Vallor warns that portraying AI as "supernatural" makes the public feel "powerless."
  • Anthropic recently partnered with more than 40 companies in an "urgent attempt" to patch vulnerabilities, having previously blocked access to a Chinese state-sponsored group using its paywalled models.
Insights by Ground AI

25 Articles


Bias Distribution

  • 69% of the sources are Center



Entrepreneur broke the news in San Francisco, United States on Friday, April 10, 2026.
