Qwen3.5 is a family of multimodal hybrid reasoning LLMs from Alibaba, featuring models such as Qwen3.5-35B-A3B, 27B, 122B-A10B, and 397B-A17B, as well as smaller versions like Qwen3.5-0.8B, 2B, 4B, and 9B. These models support a 256K-token context window, cover 201 languages, and deliver strong performance for their sizes.
unsloth.ai
13 min
3/7/2026
Qwen3.5 can be fine-tuned locally with Unsloth, which supports both vision and text fine-tuning for model sizes from 0.8B to 122B. Unsloth trains Qwen3.5 1.5× faster with 50% less VRAM than FlashAttention 2 (FA2) setups, and documents the bf16 LoRA VRAM requirements for each model size.
unsloth.ai
4 min
3/4/2026
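As a rough way to reason about the bf16 LoRA VRAM figures mentioned above: base weights in bf16 cost about 2 bytes per parameter, and LoRA adds a comparatively small overhead for adapter weights, optimizer state, and activations. The sketch below is an illustrative back-of-the-envelope estimator, not Unsloth's published numbers; the 1.2 overhead factor is an assumption for illustration.

```python
def estimate_bf16_lora_vram_gb(params_billion: float, overhead: float = 1.2) -> float:
    """Rough lower-bound VRAM estimate for bf16 LoRA fine-tuning.

    bf16 base weights take ~2 bytes per parameter; `overhead` is an
    illustrative multiplier (assumed, not measured) covering LoRA
    adapters, their optimizer state, and activation memory.
    """
    base_weights_gb = params_billion * 2.0  # 2 bytes per bf16 parameter
    return base_weights_gb * overhead

if __name__ == "__main__":
    # Dense model sizes from the Qwen3.5 family discussed above.
    for size in (0.8, 2, 4, 9, 27):
        print(f"~{size}B params -> >= {estimate_bf16_lora_vram_gb(size):.1f} GB VRAM")
```

Note that this heuristic only applies cleanly to dense models; for the MoE variants (e.g. 35B-A3B), total parameters, not active parameters, determine the weight footprint.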