Study: AI Models Used Nuclear Weapons 95% of Time in War Simulations
- Last week, Kenneth Payne, professor of strategy at King's College London, ran simulations where OpenAI's GPT-5.2, Anthropic's Claude Sonnet 4, and Google's Gemini 3 Flash deployed tactical nukes in 95% of games.
- Tong Zhao, a Princeton visiting scholar, warned AI may not perceive human stakes and compressed decision timelines could push military planners to rely on AI more in crises.
- In the simulations, unintended escalations occurred in 86% of conflicts, opposing AIs de‑escalated only 18% of the time, and the eight de‑escalatory options went entirely unused.
- Experts cautioned that the results underscore nuclear risk as militaries experiment with AI; while officials stress that no one is handing launch authority to machines now, they warn that compressed timelines could make commanders lean on AI.
- Payne concluded that the maturing technology heightens the need for more modeling as AI systems in military roles already affect deterrence and decision timelines.
24 Articles
AIs are happy to launch nukes in simulated combat scenarios
Claude, ChatGPT, and Gemini all had different personalities and reasoning tactics, but the endgame was the same
Today's hottest bots have yet to learn that, when it comes to global thermonuclear war, the only way to win is not to play. So please don't hand them the codes. …
Bloodthirsty AI models more willing to start nuclear war than human counterparts, harrowing new study shows
High-tech AI systems are more willing to resort to nuclear weapons during escalating conflicts than their human counterparts, a new "unsettling" study suggests.
In 95% of War Games, AI Models Go Nuclear
An unsettling theme emerged from a set of AI-run war games: the bots were unusually eager to go nuclear. In simulations run by Kenneth Payne of King's College London, three advanced language models—OpenAI's GPT-5.2, Anthropic's Claude Sonnet 4, and Google's Gemini 3 Flash—played out 21 high-stakes geopolitical...
An investigation warns that the chatbots of technology companies such as OpenAI, Anthropic and Google advocate the use of nuclear weapons in 95% of simulations of international war scenarios
It is unlikely that models will be allowed to make such important decisions.
Coverage Details
Bias Distribution
- 45% of the sources are Center