
AI's Blind Spot: Tools Fail to Detect Their Own Fakes

AI chatbots often misidentify AI-made images as real, undermining fact-checking amid reduced human oversight and increasing user reliance, AFP said.

  • This year, AFP found AI chatbots failed to spot fabricated photos, including a fake image of Elizaldy Co and a manufactured protest photo from Pakistan-administered Kashmir.
  • Experts note the root cause: large models mimic patterns without specialised visual-forensics capabilities, and the problem has grown since Meta ended third-party fact-checking earlier this year.
  • Columbia University's Tow Center for Digital Journalism tested seven AI chatbots on 10 images earlier this year, and all failed to identify their provenance; AFP traced the Co image, made by a Philippine web developer, to Google's Gemini and Nano Banana.
  • Internet users are increasingly turning to chatbots to verify images, yet the fabricated Elizaldy Co photo garnered over a million views, illustrating how misinformation spreads in practice.
  • Experts warn AI verification modes can assist but not replace trained human fact-checkers, with Rossine Fallorina stressing: "We can't rely on AI tools to combat AI in the long run."
Insights by Ground AI

17 Articles


Bias Distribution

  • 56% of the sources lean Right


Daily Post-Athenian broke the news on Thursday, November 20, 2025.
