Themata.AI

Popular tags:

#developer-tools #ai-agents #llms #claude #code-generation #ai-ethics #openai #ai-safety #anthropic #open-source


Filtering by tag: qwen

News

Something is afoot in the land of Qwen

qwen · llms · alibaba · ai-research

Alibaba's Qwen team has released the Qwen 3.5 family of open-weight models. Junyang Lin, the lead researcher, announced his departure from the team via Twitter.

simonwillison.net

🔥🔥🔥🔥🔥

4 min

3/4/2026

Did Alibaba just kneecap its powerful Qwen AI team?

Key figures from Alibaba's Qwen AI team, known for their extensive contributions to open-source generative models, have departed following the release of the Qwen3.5 small model series. The release was publicly praised by Elon Musk for its notable intelligence density.

venturebeat.com

🔥🔥🔥🔥🔥

5 min

3/4/2026

Alibaba releases Qwen3-Coder-Next to rival OpenAI, Anthropic

Qwen3-Coder-Next is an open-weight language model designed for coding agents and local development, built on the Qwen3-Next-80B-A3B backbone. It features a sparse Mixture-of-Experts (MoE) architecture with 80 billion total parameters, activating only 3 billion parameters per token to optimize performance and reduce inference costs.
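For readers unfamiliar with sparse MoE routing, the sketch below illustrates the general idea in plain NumPy: a router scores a set of expert weight matrices for each token and only the top-k are evaluated, so only a small fraction of the total parameters is active per token. The expert count, hidden size, and top-k here are toy values chosen only to mirror the 80B-total / 3B-active ratio described above; this is not the actual Qwen3-Coder-Next implementation.

```python
# Illustrative sketch of sparse Mixture-of-Experts (MoE) routing.
# Hypothetical toy sizes; not the Qwen3-Coder-Next architecture.
import numpy as np

rng = np.random.default_rng(0)

d_model     = 64      # toy hidden size
num_experts = 32      # toy expert count
top_k       = 2       # experts activated per token (sparse routing)

# Each expert is a small feed-forward weight matrix; the router scores them.
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(num_experts)]
router  = rng.standard_normal((d_model, num_experts)) * 0.02

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector through only its top-k experts."""
    logits = x @ router                     # router score per expert
    top = np.argsort(logits)[-top_k:]       # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                # softmax over the selected experts
    # Only top_k of num_experts expert matrices are touched for this token,
    # which is where the "3B active out of 80B total" style saving comes from.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape, f"active fraction ~ {top_k / num_experts:.1%}")
```

In a full model each expert would be a feed-forward block combined with shared dense layers, but the saving mechanism is the same: most experts are simply never evaluated for a given token, so inference cost tracks the active parameters rather than the total.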

marktechpost.com

🔥🔥🔥🔥🔥

5 min

2/4/2026
