DeepSeek AI Generates More Security Flaws on Sensitive Topics

CrowdStrike found DeepSeek-R1’s code security dropped by 45% with geopolitical triggers like Falun Gong, revealing potential pro-CCP bias and an intrinsic model kill switch.

Summary
CrowdStrike research reveals that DeepSeek-R1, a Chinese AI coding assistant, generates up to 50% more security vulnerabilities when prompted with politically sensitive topics such as Tibet, Uyghurs, or Falun Gong, with the rate of vulnerable code jumping from a 19% baseline to 27.2% in some cases.
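As a quick sanity check on the figures in the summary, the jump from a 19% baseline to 27.2% can be expressed as a relative increase. This is a minimal illustrative sketch, not CrowdStrike's methodology; the constants are the percentages reported above, and the exact increase CrowdStrike measured varies by trigger topic.

```python
# Back-of-the-envelope check of the reported vulnerability rates.
# Figures come from the article summary; the function is illustrative only.

def relative_increase(baseline: float, triggered: float) -> float:
    """Percentage increase of `triggered` over `baseline`."""
    return (triggered - baseline) / baseline * 100.0

BASELINE_RATE = 19.0    # % of responses with serious flaws (neutral prompts)
TRIGGERED_RATE = 27.2   # % with politically sensitive trigger words

print(f"Relative increase: {relative_increase(BASELINE_RATE, TRIGGERED_RATE):.0f}%")
```

The result is roughly a 43% relative increase, consistent with the "dropped by 45%" and "up to 50% more" framings above, which reflect different trigger topics in the underlying report.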

24 Articles

Lean Left

When trigger words such as "Tibet", "Uighurs", or "Falun Gong" are used, the probability of serious security vulnerabilities increases significantly.

·Vienna, Austria
Read Full Article
Lean Left

The Chinese AI DeepSeek is censored. But there is more: if you ask China-critical questions, the probability increases that it will output code with security gaps.

·Germany
Read Full Article

According to an analysis by CrowdStrike, the Chinese government's censorship requirements affect the code quality of the AI DeepSeek. The model generates significantly more program code with security gaps when prompted with politically sensitive terms.


Bias Distribution

  • 75% of the sources lean Left


Forbes broke the news in United States on Tuesday, January 28, 2025.
