China under fire: DeepSeek and MiniMax accused of mass distillation

The American company Anthropic announced that three leading Chinese entities developing advanced AI models had used its Claude system on a large scale to improve their own solutions. According to the company, DeepSeek, Moonshot, and MiniMax jointly carried out approximately 16 million queries using close to 24,000 fake accounts.

OpenAI has already drawn attention to the same problem

This is the so-called distillation: a technique in which a smaller model is trained on the responses of a stronger one rather than on raw data. The method itself is legal and widely used, but in this case Anthropic says it involved a violation of licensing terms and US export regulations. Anthropic warns that this could lead to the stripping of safety guardrails and the use of AI in military systems.
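The core idea of distillation can be shown in a few lines. The sketch below is a generic, minimal illustration of the technique described in the article, not Anthropic's or any lab's actual code: the student model is trained to minimize the KL divergence between its output distribution and the teacher's temperature-softened "soft labels". All function names here are hypothetical.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, optionally softened."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened outputs and the student's.

    The student minimizes this, imitating the teacher's response
    distribution instead of learning from raw training data.
    """
    p = softmax(teacher_logits, temperature)  # teacher "soft labels"
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher exactly incurs zero loss;
# any mismatch produces a positive loss to train against.
teacher = [3.0, 1.0, 0.2]
print(distillation_loss(teacher, teacher))              # 0.0
print(distillation_loss(teacher, [1.0, 1.0, 1.0]) > 0)  # True
```

At the scale described in the article, the "teacher logits" would be replaced by millions of API responses from the stronger model, which is why the traffic volume itself becomes the telltale sign.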

According to the information provided, the Chinese entities allegedly used intermediaries and built extensive networks of accounts that dispersed traffic across the API and cloud services. In one case, a single proxy infrastructure allegedly controlled over 20,000 accounts. The data-acquisition traffic was mixed in with normal use but stood out for its very high volume, repetitiveness, and focus on specific functions, which indicated it was being used for model training.

DeepSeek is said to have generated over 150,000 queries focused on tasks requiring advanced reasoning, on evaluating responses for reward models, and on processing politically sensitive content. Moonshot, the maker of the Kimi models, allegedly accounted for more than 3.4 million queries, focusing on agent-based reasoning, programming, and data analysis.

The largest share of the activity was attributed to MiniMax, which allegedly conducted over 13 million queries related to agentic coding and task orchestration. Notably, after the release of a new Claude version, a significant portion of the traffic was quickly redirected to the latest model.

Americans say enough is enough and build a wall against the Chinese

In response, Anthropic has announced enhanced security measures. The company has implemented so-called behavioral fingerprinting systems intended to identify the characteristic patterns of mass distillation. It also declares cooperation with other AI labs, cloud providers, and government authorities, as well as stricter verification of educational, research, and startup accounts, which are often used to obtain unauthorized access.
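The article does not describe how Anthropic's fingerprinting actually works, but the traffic traits it cites (high volume, repetition, narrow task focus) suggest what such a detector might score. The sketch below is a purely illustrative assumption; the thresholds, field names, and heuristics are hypothetical, not Anthropic's.

```python
from collections import Counter

def fingerprint(queries):
    """Crude behavioral fingerprint for one account's query stream.

    Each query is a dict like {"text": ..., "task": ...}. Returns the
    three traits the article mentions: volume, repetitiveness, and
    concentration on a single function.
    """
    total = len(queries)
    distinct = len({q["text"] for q in queries})
    task_counts = Counter(q["task"] for q in queries)
    top_task_share = task_counts.most_common(1)[0][1] / total
    return {
        "volume": total,
        "repetition": 1 - distinct / total,  # near 1.0 => templated queries
        "task_focus": top_task_share,        # near 1.0 => single function
    }

def looks_like_distillation(fp, min_volume=1000):
    """Hypothetical thresholds: high-volume, repetitive, narrowly focused."""
    return (fp["volume"] >= min_volume
            and fp["repetition"] > 0.5
            and fp["task_focus"] > 0.8)

# Synthetic example: 2,000 near-identical reasoning queries from one account.
stream = [{"text": f"solve step {i % 10}", "task": "reasoning"}
          for i in range(2000)]
print(looks_like_distillation(fingerprint(stream)))  # True
```

A real system would also have to catch the dispersal tactic described above, correlating many low-volume accounts behind a shared proxy rather than scoring each account in isolation.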

At the same time, Anthropic admits that the scale of such operations shows that an effective defense requires not only technical solutions but also coordinated industry action and regulatory support. The conflict over model distillation may become another front in the technological rivalry between the US and China in artificial intelligence.
