Duplicating a block of seven middle layers in Qwen2-72B without weight changes or training produced a top model on the HuggingFace Open LLM Leaderboard. Since mid-2024, several strong open-source models have emerged, including Qwen3.5, MiniMax, and GLM-4.
dnhkng.github.io
20 min
5d ago
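The technique in this entry amounts to a "self-merge": repeating a contiguous block of middle layers with no weight changes. A minimal sketch of the idea, using plain lists as stand-ins for transformer blocks (the indices below are illustrative assumptions, not the exact block used for Qwen2-72B):

```python
# Hedged sketch of layer duplication ("self-merge"): repeat a contiguous
# block of middle layers in place, leaving every weight untouched.
# The start/end indices are illustrative, not the ones used in the article.

def duplicate_middle_block(layers, start, end):
    """Return a new layer list with layers[start:end] appearing twice."""
    return layers[:end] + layers[start:end] + layers[end:]

# Toy example: integers stand in for transformer blocks in a 12-layer model.
layers = list(range(12))
merged = duplicate_middle_block(layers, start=4, end=8)
print(len(merged))  # 16 layers: the block 4..7 now appears twice
```

In practice the same list surgery would be applied to a model's `nn.ModuleList` of decoder blocks (or expressed as a mergekit passthrough config); since the duplicated layers share weights with the originals, no retraining is required.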
Sarvam 30B and Sarvam 105B are open-source reasoning models trained from scratch on large-scale, high-quality datasets. The training was conducted in India under the IndiaAI mission, with optimizations spanning tokenization, model architecture, and execution kernels.
sarvam.ai
30 min
3/7/2026