Researchers found that "flipping" only one bit in memory can sabotage deep learning models

Summary by Cryptopolitan
Researchers at George Mason University found that “flipping” only one bit in memory can sabotage deep learning models used in sensitive applications such as self-driving cars and medical AI. According to the researchers, a hacker does not need to retrain the model, rewrite its code, or make it less accurate; they only need to plant a microscopic backdoor that nobody notices. Computers store everything as 1s and 0s, and an AI model is no different. At…
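
To make the mechanism concrete, here is a minimal sketch (not the researchers' actual attack, which the summary does not detail): because a model's weights are stored as binary floating-point numbers, flipping a single bit in the IEEE-754 encoding of one float32 weight can change its value by many orders of magnitude. The helper name and example values below are purely illustrative.

```python
# Illustrative sketch only: show how one bit flip changes a float32 weight.
import struct

def flip_bit(value: float, bit: int) -> float:
    """Return the float32 obtained by flipping one bit (0 = least significant)."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))      # reinterpret float bits as uint32
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

weight = 0.5                     # a plausible trained weight, chosen for illustration
print(flip_bit(weight, 30))      # flipping a high exponent bit -> about 1.7e+38
print(flip_bit(weight, 0))       # flipping the lowest mantissa bit -> about 0.50000006
```

Flipping a high exponent bit turns a small weight into an astronomically large one, while flipping a low mantissa bit is nearly invisible; a carefully chosen single flip can therefore alter a network's behavior without obviously degrading its accuracy.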
decrypt.co broke the news in New York, United States on Monday, August 25, 2025.