Anthropic Accuses Chinese AI Firms of Illicitly Extracting Claude Capabilities in Large-Scale Data Theft
- On Monday, Anthropic said it had uncovered campaigns by three Chinese AI firms to illicitly extract capabilities from Claude, describing the activity as industrial-scale intellectual property theft.
- Anthropic said the firms used a technique called distillation, leveraging outputs from a more powerful system to rapidly boost smaller models' performance, which can circumvent export controls and raise national-security concerns.
- Anthropic said the campaigns created around 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, focusing on coding, agentic reasoning and tool use, with MiniMax running the largest campaign.
- Anthropic urged industry, cloud providers and lawmakers to act, calling for restricted chip access to limit model training and warning that distilled models may lack safety guardrails and pose national-security threats.
- Earlier this month Anthropic's archrival OpenAI made similar accusations, and last week accused DeepSeek of free-riding on US labs by routing traffic through proxy services to bypass China's commercial ban.
34 Articles
United States threatens to ban Anthropic for refusing to allow AI to spy and create autonomous weapons
Anthropic claims 3 Chinese companies ripped it off, using its AI tools to train their models: ‘How the turn tables’
Anthropic, which settled a copyright claim by thousands of authors for $1.5 billion months ago, claims the Chinese firms used “distillation” techniques.
The U.S. company Anthropic is accusing three Chinese companies of extracting information from its artificial-intelligence chatbot, Claude. This afternoon, Anthropic announced that the Chinese companies DeepSeek, Moonshot AI...
Anthropic accuses DeepSeek, Moonshot AI, and MiniMax of coordinated ‘distillation attack’ on Claude - Tech Startups
Anthropic says it has uncovered what it describes as a coordinated effort by three Chinese AI companies to siphon knowledge from its Claude model, escalating tensions between U.S. and Chinese labs at a moment when the race for AI dominance […]
Anthropic accuses Chinese firms of distillation attacks
Anthropic accused Chinese firms of “industrial-scale distillation attacks” on its AI models. Distillation involves training less capable models on more advanced ones’ output, and can be used illicitly to acquire powerful capabilities cheaply. The AI startup accused China’s DeepSeek, MiniMax, and Moonshot of generating “over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts,” and said that the models thus trained w…
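For readers unfamiliar with the technique at issue, the distillation described above can be illustrated with a minimal toy sketch. This is a hypothetical illustration, not Anthropic's or the accused firms' actual method: a small "student" model is trained to mimic the soft output distribution of a fixed "teacher" function, which is the core idea behind training a weaker model on a stronger model's outputs. All names and numbers here are invented for the example.

```python
import math

def softmax(logits, temperature=1.0):
    # Convert logits to a probability distribution; higher temperature
    # produces softer (more informative) targets for distillation.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical "teacher": a fixed function standing in for a powerful model.
def teacher_logits(x):
    return [2.0 * x, 1.0 - x, -x]

# "Student": a tiny linear model with one learnable weight per class.
weights = [[0.0], [0.0], [0.0]]

def student_logits(x):
    return [w[0] * x for w in weights]

def kl(p, q):
    # KL divergence between teacher (p) and student (q) distributions.
    return sum(pi * math.log(pi / max(qi, 1e-12))
               for pi, qi in zip(p, q) if pi > 0)

# Distillation loop: query the teacher, then fit the student to its
# soft labels by gradient descent on the KL divergence.
lr, T = 0.1, 2.0
data = [0.5, 1.0, 1.5, 2.0]
for _ in range(500):
    for x in data:
        p = softmax(teacher_logits(x), T)   # teacher's soft distribution
        q = softmax(student_logits(x), T)   # student's current distribution
        # Gradient of KL(p||q) w.r.t. student logits is (q - p)/T;
        # the chain rule through the linear model multiplies by x.
        for i in range(3):
            weights[i][0] -= lr * (q[i] - p[i]) / T * x
```

At scale, the "teacher queries" are the millions of API exchanges described in the coverage, and the student is a full neural network; the mechanics are otherwise the same, which is why distillation lets a less capable model absorb a stronger one's behavior cheaply.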
Coverage Details
Bias Distribution
- 40% of the sources lean Left, 40% of the sources are Center