Anthropic Accuses Chinese AI Labs of Illicitly Training Models on Claude

The labs allegedly used 24,000 fake accounts for a technique known as 'distillation'.

2/23/2026
Bassam Lahnaoui

Anthropic has publicly accused three Chinese artificial intelligence laboratories (DeepSeek, Moonshot AI, and MiniMax) of conducting industrial-scale campaigns to illicitly extract capabilities from its Claude AI model. The U.S.-based company alleges these firms used over 24,000 fraudulent accounts to generate more than 16 million interactions, aiming to improve their own models. The revelation highlights a growing trend of sophisticated data theft in the AI sector and raises significant national security concerns.


The Distillation Dilemma

The Chinese labs allegedly employed a technique known as "distillation," where a less capable model is trained on the outputs of a more powerful one. While distillation is a legitimate and common training method, Anthropic asserts it was used here to illicitly acquire advanced capabilities at a fraction of the cost and time. This method essentially allows competitors to bypass years of independent research and development by copying the work of frontier labs.
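For readers unfamiliar with the mechanics, the legitimate form of distillation typically trains a small "student" model to match the output distribution of a larger "teacher." A minimal numerical sketch of the classic soft-target objective, using temperature-softened probabilities and a KL-divergence loss (the function names and values here are illustrative, not drawn from any lab's actual pipeline):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, softened by temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's softened output distribution
    and the student's: the quantity a student minimizes during distillation."""
    p = softmax(teacher_logits, temperature)  # teacher's "soft targets"
    q = softmax(student_logits, temperature)  # student's current prediction
    return float(np.sum(p * (np.log(p) - np.log(q))))

# When the student already matches the teacher, the loss is zero;
# training pushes the student's distribution toward the teacher's.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))        # ~0.0
print(distillation_loss([0.1, 1.0, 2.0], teacher))  # positive
```

Applied at API scale, the same idea amounts to harvesting a frontier model's responses as training targets, which is why Anthropic treats the volume and pattern of queries, rather than the technique itself, as the evidence of abuse.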

A Targeted Campaign for Advanced Capabilities

The campaigns were highly targeted, focusing on Claude’s most advanced features, including agentic reasoning, complex tool use, and coding. Anthropic identified the operations by analyzing usage patterns, IP addresses, and request metadata, which in some cases linked directly to senior staff at the accused firms. The sheer volume and repetitive nature of the prompts were clear indicators of a coordinated capability extraction effort rather than legitimate use.

Each lab had a distinct focus, with MiniMax reportedly generating 13 million exchanges targeting agentic coding and Moonshot AI focusing on tool use and computer vision. DeepSeek's operation was observed generating censorship-safe responses to politically sensitive queries, likely to train its models to avoid restricted topics. Anthropic noted that when it released a new model, MiniMax pivoted its attack within 24 hours to capture the latest capabilities.

National Security and Economic Implications

These actions pose serious national security risks, as models built through illicit distillation are unlikely to incorporate the safeguards developed by U.S. companies. Anthropic warns that such unprotected capabilities could be deployed by authoritarian governments for offensive cyber operations, disinformation, and mass surveillance. The proliferation of these powerful but unsafe models, especially if open-sourced, could multiply these threats globally.

The incident intensifies the debate surrounding U.S. export controls on advanced AI chips to China. Anthropic argues that these distillation attacks reinforce the rationale for such controls, as executing them at scale requires access to significant computing power. This sentiment is echoed by experts like Dmitri Alperovitch, who stated that this confirms long-held suspicions about the theft of U.S. model capabilities.

Anthropic's Response and a Call to Action

In response, Anthropic is bolstering its defenses by investing in advanced detection systems, including behavioral fingerprinting to identify attack patterns. The company is also strengthening verification for new accounts and sharing technical intelligence with other AI labs and cloud providers. These measures are designed to make future distillation attacks more difficult to execute and easier to identify across the industry.


Anthropic's findings expose a critical vulnerability in the competitive AI landscape, framing the issue as a form of digital industrial espionage that transcends corporate rivalry. The accusations against DeepSeek, Moonshot, and MiniMax serve as a stark warning about the methods being used to close the AI gap. This situation will undoubtedly fuel further policy discussions on how to protect proprietary technology while navigating the complex geopolitics of AI development.