AI, a Dead Student, and US Airstrikes: The Civilian Caught up in a New Age of Warfare
The Pentagon's ongoing review follows claims that AI system Claude may have influenced a strike on an Iranian school, raising concerns about military AI accountability.
8 Articles
Twelve days after the bombing of a school in Minab, Iran, the clues point to the United States. American Tomahawk missiles were reportedly fired at the school on the advice of an artificial intelligence...
AI war isn't a story about smarter weapons. It's about what disappears between perception and destruction: doubt, reconsideration, friction.
The Middle East conflict also raises the question of artificial intelligence: was it behind this terrible blunder, when a school in Iran was bombed on the first day of the war, killing at least 150 people, most of them girls? Donald Trump has just admitted that an investigation is under way, having until now denied all responsibility and even accused Iran. If the strike was American, why have the intelligence not…
AI, a dead student, and US airstrikes: The civilian caught up in a new age of warfare
As debate grows over the role of AI in the military strikes that bombed Iran, scrutiny has turned to civilians caught up in the destruction. An investigation by The Independent and conflict monitoring group Airwars explores the death of a 20-year-old killed in a US strike in Iraq in 2024 - the first known victim of an airstrike in which the use of AI-assisted targeting was acknowledged. Namir Shabibi and Alex Croft report
Pentagon Investigates Claude AI After Viral Video Claims System May Have Led Targeting Of School In Iran
A viral TikTok claims Claude AI may have played a role in targeting a school in Iran. The Pentagon has launched an investigation into military AI use and whether human oversight rules were followed.
An American airstrike killed more than 165 students at a primary school in Minab, in southern Iran. According to several concordant sources, Anthropic's Claude system, used by the Pentagon to plan its military operations, is said to have designated the target on the basis of outdated information. The case reveals the deadly risks of an AI deployed in the military chain of command without sufficient safeguards, and has triggered an unprecedented crisis …
Coverage Details
Bias Distribution
- 60% of the sources lean Left