Published 3 days ago • Updated 1 day ago
How AI can lead to false arrests and wrongful convictions
Misidentifications and probabilistic alerts from surveillance and facial-recognition systems triggered police action against a 17-year-old and a Tennessee grandmother.
In Baltimore on Oct. 20, 2025, police handcuffed Taki Allen at gunpoint after an AI surveillance system falsely identified a Doritos bag as a gun; Angela Lipps spent five months in jail after facial recognition software incorrectly linked her to crimes in North Dakota.
These systems use confidence thresholds to decide when to trigger alerts; engineers set a cutoff, for example 95% confidence, to balance false positives against missed dangers, though these algorithmic settings remain largely invisible to the public.
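A minimal sketch of how such a threshold works, assuming a 0.95 cutoff and invented detection scores (this is not any vendor's actual code): the model emits a probability, and a single hard-coded number decides whether police get an alert.

```python
# Illustrative sketch: a confidence threshold turns a model's
# probabilistic score into a yes/no alert. The 0.95 cutoff and the
# detection scores below are assumptions for demonstration only.

ALERT_THRESHOLD = 0.95  # tuned to trade false positives against missed threats

def should_alert(confidence: float) -> bool:
    """Fire an alert only when the model's confidence meets the cutoff."""
    return confidence >= ALERT_THRESHOLD

# Two hypothetical detections from the same video frame
detections = [
    {"label": "gun", "confidence": 0.97},  # above cutoff: alert fires
    {"label": "gun", "confidence": 0.80},  # below cutoff: suppressed
]

alerts = [d for d in detections if should_alert(d["confidence"])]
print(len(alerts))  # only the high-confidence detection triggers an alert
```

Note that the choice of 0.95 rather than, say, 0.80 is a policy decision baked into code: raising it suppresses more false alarms but also more real threats, and nothing in the alert itself tells an officer how close the score was to the line.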
When systems signal a threat, police often treat statistical probabilities as certainties; this misplaced faith led officers to confront Allen, only to discover a crumpled bag of Doritos in his pocket.
Courts apply formal standards of proof like probable cause to weigh evidence, yet AI systems do not distinguish between "maybe" and "definitely," creating dangerous gaps in how law enforcement validates suspicious alerts.
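The gap between an alert and a certainty can be made concrete with a base-rate calculation. The numbers below are assumptions chosen for illustration, not measurements of any deployed system: even a detector that is right 95% of the time produces mostly false alerts when the thing it looks for is rare.

```python
# Illustrative Bayes' rule calculation (all rates are assumed values):
# how often an alert actually corresponds to a real weapon.

true_positive_rate = 0.95   # P(alert | real weapon)  -- assumed
false_positive_rate = 0.05  # P(alert | no weapon)    -- assumed
prevalence = 0.001          # P(real weapon) for a flagged object -- assumed

# Total probability of an alert, from either a real weapon or a false hit
p_alert = (true_positive_rate * prevalence
           + false_positive_rate * (1 - prevalence))

# Bayes' rule: probability a given alert reflects a real weapon
p_weapon_given_alert = true_positive_rate * prevalence / p_alert

print(round(p_weapon_given_alert, 3))  # ~0.019: roughly 98% of alerts are false
```

Under these assumed numbers, an officer responding to an alert as if it were near-certain is acting on roughly a 2% chance, which is why treating a probabilistic signal as settled fact is so hazardous.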
When asked "Who invented the light bulb?" generative models like ChatGPT predict the most likely answer, "Thomas Edison," without fact-checking, highlighting why relying on probabilistic outputs for critical decisions demands caution.