Themata.AI
Tags: #claude #anthropic #ai-safety #distillation-attacks

Anthropic presents evidence of distillation at scale by MiniMax, DeepSeek, and Moonshot

Anthropic on X: "We've identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models."

twitter.com

February 23, 2026

1 min read

Summary

DeepSeek, Moonshot AI, and MiniMax conducted industrial-scale distillation attacks on Anthropic's models, creating over 24,000 fraudulent accounts. These labs generated over 16 million exchanges with Claude to extract its capabilities for training and improving their own models.

Key Takeaways

  • DeepSeek, Moonshot AI, and MiniMax conducted industrial-scale distillation attacks on Anthropic's models.
  • These labs created over 24,000 fraudulent accounts to generate more than 16 million exchanges with Claude.
  • The exchanges were used to extract capabilities from Claude to train and improve their own models.

Community Sentiment

Mixed

Positives

  • The release of models by MiniMax, DeepSeek, and Moonshot for free democratizes access to AI technology, fostering competition and innovation in the market.
  • The ability of Chinese labs to make advanced models accessible for fine-tuning and private hosting enhances opportunities for smaller businesses and research labs.
  • The feasibility of distilling models with around 16 million sessions suggests that AI-to-AI learning could significantly reduce the resource burden compared to traditional human learning.

Concerns

  • Anthropic's ongoing legal battles over unauthorized use of proprietary content raise serious ethical concerns about its own practices in model training.
  • The perceived hypocrisy of major AI labs claiming to build AI for everyone while releasing expensive, closed-source models undermines trust in their intentions.
  • The use of fraudulent accounts to extract capabilities from Claude highlights the questionable methods employed by these labs, potentially eroding the integrity of AI development.

