NEW YORK (EFE).— The U.S. company Anthropic has denounced a “distillation attack campaign” against its Claude model, accusing the Chinese labs DeepSeek, Moonshot AI and MiniMax of using fraudulent techniques to replicate its capabilities.
According to the firm, it was a coordinated scheme to extract knowledge from its AI system.
“We identified industrial-scale campaigns by DeepSeek, Moonshot AI, and MiniMax to illicitly use Claude’s capabilities to improve their models,” Anthropic said in a press release. The company says account terminations and regional access restrictions have disrupted these operations.
According to the San Francisco-based technology company, the three labs made more than 16 million interactions with Claude through some 24,000 fraudulent accounts.
These actions represent what the company has described as “distillation attacks,” a practice that consists of training a less powerful model on the outputs generated by a more advanced one.
Distillation is a legitimate and widely used method in the AI industry to optimize systems. However, Anthropic said that in this case it was “illicit” because the labs deployed “fake accounts and proxy services” that hid users’ true location in order to gain large-scale access without being detected.
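In broad strokes, the distillation described above amounts to harvesting a stronger model’s responses and using them as supervised training data for a smaller “student” model. The following minimal Python sketch illustrates only that data-collection step; `teacher_generate` and the dataset format are hypothetical placeholders, not any real API named in the article.

```python
def teacher_generate(prompt: str) -> str:
    # Hypothetical stand-in for querying a stronger "teacher" model's API.
    return f"teacher answer to: {prompt}"

def build_distillation_dataset(prompts):
    # Each (prompt, completion) pair from the teacher becomes a
    # supervised fine-tuning example for the smaller student model.
    return [{"prompt": p, "completion": teacher_generate(p)}
            for p in prompts]

dataset = build_distillation_dataset(["What is 2+2?", "Summarize this text."])
```

At scale, millions of such pairs are collected, which is why the interaction volumes cited in the article matter: the more teacher outputs harvested, the more of the teacher’s behavior the student can imitate.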
“The scale, structure, and focus of the prompts were different from those typical of ordinary users, reflecting intentional extraction of capabilities rather than legitimate use,” the company argued.
The report also suggested that DeepSeek probed Claude’s internal deliberations to create “safe alternatives to censorship for politically sensitive queries.”
Anthropic warns that illicitly distilled models “lack the necessary safeguards” to prevent “potential biological weapons attacks or malicious cyber activity” that would pose a “significant threat to national security.”
The company says it has further strengthened its protection mechanisms against this type of industrial-scale practice.