OpenAI Says China's DeepSeek Trained Its AI by Distilling US Models, Memo Shows
OpenAI alleges DeepSeek used advanced, concealed techniques to extract US AI model outputs for training, risking competitive imbalance and accelerating AI progress in China, the memo states.
- On February 12, 2026, OpenAI's memo to the US House Select Committee on China said DeepSeek used so-called distillation in ongoing efforts to free ride on OpenAI and other US labs, adding that distillation has persisted despite attempts to block it.
- Distillation, in which one model trains on another model's outputs, is cited in OpenAI's memo; OpenAI says it raised concerns shortly after DeepSeek's R1 model release in 2025, and Bloomberg has reported only minor upgrades to the model since.
- OpenAI's review found DeepSeek employees used programmatic access, third‑party routers and unauthorised resellers to mask the source of their queries; the memo also notes that the DeepSeek‑V3 base model reportedly needed only 2.8 million H800 GPU hours to train.
- Authorities opened an export‑control probe shortly after R1's release, and recently obtained records show Nvidia provided technical support; OpenAI says free Chinese models could undercut paid US services.
- Nvidia processors could briefly be sold to China in 2023 until an export rule halted sales; at the end of 2025, US President Donald Trump eased restrictions to allow sales of Nvidia H200 processors. OpenAI declined to comment and DeepSeek did not respond.
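Distillation, as the memo uses the term, means training one model on another model's outputs rather than on raw labeled data. A minimal sketch of the core idea, using a hypothetical teacher's logits and a toy student whose logits are fit directly by gradient descent (all names, numbers and temperatures here are illustrative assumptions, not anything from DeepSeek or OpenAI):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T gives softer distributions."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

T = 2.0  # distillation temperature (illustrative choice)

# Hypothetical teacher logits for a single input over 3 classes.
teacher_logits = np.array([4.0, 1.0, 0.5])
soft_targets = softmax(teacher_logits, T)  # the teacher's "soft labels"

# Toy student: its logits are free parameters, trained by gradient
# descent on the cross-entropy against the teacher's soft targets.
student_logits = np.zeros(3)
lr = 0.5
for _ in range(1000):
    p = softmax(student_logits, T)
    grad = p - soft_targets  # cross-entropy gradient w.r.t. logits
    student_logits -= lr * grad

# After training, the student's distribution closely matches the
# teacher's soft targets, without ever seeing the original labels.
student_dist = softmax(student_logits, T)
```

In a real distillation pipeline the student is a full network and the soft targets come from querying the teacher over many inputs; the optimization step above is the same in spirit.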
OpenAI has warned U.S. lawmakers that Chinese artificial intelligence startup DeepSeek is targeting the ChatGPT maker and the nation's leading AI companies to replicate models and use them for its own training, a memo seen by Reuters showed.
Coverage Details
Total News Sources: 25
Leaning Left: 3 · Center: 6 · Leaning Right: 2
Bias Distribution: Left 27% · Center 55% · Right 18%