US to Safety Test New AI Models From Google, Microsoft, xAI
Federal scientists will test the models for cyberattack and military misuse risks before deployment, as CAISI says it has completed more than 40 evaluations.
- On Tuesday, Microsoft, Google, and xAI agreed to provide the United States government with early access to new artificial intelligence models for national security testing through the Commerce Department's Center for AI Standards and Innovation.
- The agreements expand on 2024 partnerships established during the Biden administration, renegotiated to align with President Donald Trump's "AI Action Plan," which aims to "remove red tape and onerous regulation" around AI development.
- CAISI has already completed more than 40 evaluations of cutting-edge models not yet public; Microsoft will work with government scientists to test AI systems "in ways that probe unexpected behaviors," the company said.
- By securing early access, officials aim to identify threats ranging from cyberattacks to military misuse before deployment. CAISI Director Chris Fall said these collaborations help "scale our work in the public interest at a critical moment."
- Notably absent from the new partnerships is Anthropic, which remains in a legal dispute with the Defense Department over the company's refusal to remove safety guardrails for government use of its models.
45 Articles
Government assessing security of AI models
Microsoft, Google and Elon Musk's xAI agreed to give the U.S. government early access to new artificial intelligence models for national security testing, as officials grow alarmed by the hacking capabilities of Anthropic's newly unveiled Mythos.
Trump had scrapped the AI oversight measures of his predecessor, Biden. Now, according to media reports, a reversal is underway: Microsoft, Google and xAI have agreed to have their models reviewed.
Commerce AI center will evaluate Google DeepMind, Microsoft and xAI models
The Center for Artificial Intelligence Standards and Innovation will be conducting testing on leading AI models from Google DeepMind, Microsoft and xAI to evaluate their security prior to deployment, the Commerce Department announced Tuesday. CAISI, housed within the National Institute of Standards and Technology, will oversee the testing as well as best practices development related to commercial AI systems. The models will be tested in classif…
Microsoft, Google and xAI will let the government test their AI models before launch
Google, Microsoft and xAI will share unreleased versions of their AI models with the government to curb cybersecurity threats, the National Institute of Standards and Technology announced on Tuesday.
Google, Microsoft, xAI Will Allow Government to Vet New AI Models for Security Risks
Artificial intelligence (AI) giants Google DeepMind, Microsoft, and xAI have signed agreements with the Department of Commerce to evaluate their models for potential security risks. The Commerce Department’s Center for AI Standards and Innovation (CAISI) announced the partnerships on May 5. The agency will “conduct pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security,” ac…
Coverage Details
Bias Distribution
- 58% of the sources are Center