Google And OpenAI Staff Unite Behind Anthropic As It Sues Pentagon
Anthropic is challenging a government ban on military use of its AI, alleging a First Amendment violation over its refusal to weaken safety rules designed to prevent weaponization and surveillance.
- On May 22, 2025, Anthropic filed two lawsuits against the Department of Defense, including a First Amendment claim, in response to last week's government ban on Claude for agencies and military partners.
- The Trump administration pressed Anthropic to loosen safety safeguards on Claude, repeatedly threatening consequences if it refused; Anthropic says the safeguards prevent Claude's use in autonomous weapons and mass surveillance.
- Legal filings challenge the novel 'supply chain' designation, issued after months of threats, a blacklisting that bars government agencies and entities working with the U.S. military from using Claude.
- Anthropic's complaint asserts a First Amendment violation, alleging the company was punished for refusing to change Claude at the cost of its government contracts; some observers call the claim 'a slam dunk', citing National Rifle Association v. Vullo.
- Observers say the case will test whether courts are willing to enforce constitutional limits against national-security claims when the target is a domestic AI company.
11 Articles
Pentagon-Anthropic Dispute Further Exposes Government Push for Autonomous Weapons and AI Surveillance
Critics warn military AI technology could enable monitoring of civilian populations at an unprecedented scale, resembling the Chinese model.
Anthropic sues US government over military use of Claude AI
AI company Anthropic sued the US government on Monday after the Pentagon used its Claude AI software for military purposes the company had explicitly banned. An attack on Iran in which the AI technology was deployed killed at least 165 people, mostly minors. The Washington Post reported last week that Claude AI technology was used…
How the Anthropic-Pentagon dispute over AI safeguards escalated
A dispute erupted after Anthropic refused to loosen AI safety safeguards for the Pentagon. The U.S. Defense Department labelled the Claude maker a “supply-chain risk”, threatening government contracts. Anthropic plans a court challenge, warning the move could cut billions from 2026 revenue, while industry groups and companies urge de-escalation.
Coverage Details
Bias Distribution
- 50% of the sources lean Left