Anthropic has accused three Chinese AI companies—DeepSeek, Moonshot, and MiniMax—of conducting “industrial-scale campaigns” to misuse its Claude chatbot, alleging they used the platform to improve their own AI models.
According to Anthropic, the accused companies used approximately 24,000 fraudulent accounts to generate more than 16 million exchanges with Claude. This process, known as distillation, involves using the outputs of a powerful AI model to train a less capable one. While the practice is common, Anthropic claims these specific actions were intended to bypass its safeguards and accelerate the companies' own model development.
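To make the distillation mechanism concrete, here is a minimal, hypothetical sketch of the core training objective: the "teacher" model's output probabilities serve as soft targets, and the "student" model is adjusted to minimize the KL divergence between its predictions and those targets. The logits and update rule below are illustrative toy values, not anything from Anthropic's or the accused companies' systems.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperatures yield softer
    # distributions, which is standard practice in distillation.
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's soft targets and the
    # student's predictions -- the core distillation objective.
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([4.0, 1.0, 0.5])  # hypothetical "powerful model" logits
student = np.array([1.0, 1.0, 1.0])  # untrained student starts uniform

loss_before = distillation_loss(teacher, student)
# One illustrative update: nudge the student's logits toward the teacher's.
student = student + 0.5 * (teacher - student)
loss_after = distillation_loss(teacher, student)
```

Repeated over millions of teacher responses, updates like this transfer much of the larger model's behavior to the smaller one, which is why providers treat bulk automated querying as a distillation signal.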
Anthropic stated it linked the attacks to the specific companies with “high confidence.” The identification relied on IP address correlation, request metadata, infrastructure indicators, and corroboration from others in the AI industry. In response, Anthropic announced plans to upgrade its systems to make distillation attacks harder to execute and easier to detect.
These accusations mirror actions taken by OpenAI early last year, when the company made similar claims regarding rival firms distilling its models and subsequently banned suspected accounts. Separately, Anthropic is currently facing a lawsuit from music publishers who allege the company used illegal copies of songs to train Claude.




