Published • loading... • Updated
AI Can Models Creata Backdoors, Research Says
Scraping the internet for AI training data has limitations. Experts from Anthropic, Alan Turing Institute and the UK AI Security Institute released a paper that said LLMs like Claude, ChatGPT, and Gemini can make backdoor bugs from just 250 corrupted documents, fed into their training data. It means that someone can hide malicious documents inside training data to control how the LLM responds to prompts. About the research It trained AI LLMs r…
2 Articles
2 Articles
AI Can Models Creata Backdoors, Research Says
Scraping the internet for AI training data has limitations. Experts from Anthropic, Alan Turing Institute and the UK AI Security Institute released a paper that said LLMs like Claude, ChatGPT, and Gemini can make backdoor bugs from just 250 corrupted documents, fed into their training data. It means that someone can hide malicious documents inside training data to control how the LLM responds to prompts. About the research It trained AI LLMs r…
Coverage Details
Total News Sources2
Leaning Left0Leaning Right0Center0Last UpdatedBias DistributionNo sources with tracked biases.
Bias Distribution
- There is no tracked Bias information for the sources covering this story.
Factuality
To view factuality data please Upgrade to Premium

