Skip to main content
Black Friday Sale - Get 40% off Vantage
Published loading...Updated

AI Can Models Creata Backdoors, Research Says

Scraping the internet for AI training data has limitations. Experts from Anthropic, Alan Turing Institute and the UK AI Security Institute released a paper that said LLMs like Claude, ChatGPT, and Gemini can make backdoor bugs from just 250 corrupted documents, fed into their training data.  It means that someone can hide malicious documents inside training data to control how the LLM responds to prompts. About the research  It trained AI LLMs r…
DisclaimerThis story is only covered by news sources that have yet to be evaluated by the independent media monitoring agencies we use to assess the quality and reliability of news outlets on our platform. Learn more here.Cross Cancel Icon

Bias Distribution

  • There is no tracked Bias information for the sources covering this story.

Factuality Info Icon

To view factuality data please Upgrade to Premium

Ownership

Info Icon

To view ownership data please Upgrade to Vantage

IT Security News - cybersecurity, infosecurity news broke the news in on Thursday, October 16, 2025.
Too Big Arrow Icon
Sources are mostly out of (0)
News
Feed Dots Icon
For You
Search Icon
Search
Blindspot LogoBlindspotLocal