Comprehensive side-by-side LLM comparison
Qwen3.5-397B-A17B leads with a 3.0% higher average benchmark score and additionally supports multimodal inputs. Both models have strengths depending on your specific coding needs.
Xiaomi
MiMo-V2-Flash, released by Xiaomi on December 16, 2025, is a Mixture-of-Experts large language model with 309 billion total parameters, of which 15 billion are active per inference step, designed for high-speed reasoning and agentic workflows. It features a 256K-token context window, generates up to 150 tokens per second, and was trained on 27 trillion tokens. Released under the MIT license, MiMo-V2-Flash targets open-source deployments that need fast, capable coding and reasoning with an efficient inference footprint.
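Because MiMo-V2-Flash ships as open weights under MIT, it can be self-hosted with a standard inference engine. Below is a minimal sketch using vLLM; the Hugging Face repo id and the GPU layout are assumptions for illustration, not values from this page. Note that even though only 15B parameters are active per token, all 309B weights must be resident in memory, so a multi-GPU node is assumed.

```python
# Minimal self-hosting sketch for MiMo-V2-Flash using vLLM.
# Assumptions: the repo id "XiaomiMiMo/MiMo-V2-Flash" and the
# 8-GPU tensor-parallel layout are illustrative placeholders.
from vllm import LLM, SamplingParams

llm = LLM(
    model="XiaomiMiMo/MiMo-V2-Flash",  # assumed repo id
    tensor_parallel_size=8,            # assumed hardware layout
    max_model_len=262_144,             # the 256K-token context window
    trust_remote_code=True,
)

sampling = SamplingParams(temperature=0.6, max_tokens=512)
outputs = llm.generate(
    ["Write a Python function that merges two sorted lists."],
    sampling,
)
print(outputs[0].outputs[0].text)
```

The MoE design is what makes the quoted 150 tokens/s plausible at this scale: each token routes through only the 15B active parameters, while the remaining experts sit idle in memory.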
Alibaba / Qwen
Qwen3.5-397B-A17B is a 397-billion-parameter mixture-of-experts model from Alibaba's Qwen team, released in February 2026 as the open-weight flagship of the Qwen3.5 series. It activates 17 billion parameters per forward pass through a hybrid linear-attention and sparse-MoE architecture based on Gated Delta Networks. Co-trained on text, images, and video with early fusion, the model is natively multimodal across a 262K-token context window, and its sparse computation design yields significantly higher inference throughput than comparable dense models. At release it was among the most capable open-weight models publicly available, offered under Apache 2.0 and accessible through Alibaba's DashScope API as the Qwen3.5-Plus endpoint.
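For hosted access, the page notes availability through Alibaba's DashScope API as the Qwen3.5-Plus endpoint. The following is a minimal sketch using DashScope's OpenAI-compatible mode; the exact model identifier ("qwen3.5-plus") is an assumption inferred from the endpoint name, so verify it against the DashScope model list before use.

```python
# Minimal sketch of calling the Qwen3.5-Plus endpoint through
# DashScope's OpenAI-compatible API. The model identifier
# "qwen3.5-plus" is assumed from the endpoint name above.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

response = client.chat.completions.create(
    model="qwen3.5-plus",  # assumed identifier; verify in DashScope docs
    messages=[
        {
            "role": "user",
            "content": "Explain why a sparse-MoE model activates only "
                       "a fraction of its parameters per token.",
        }
    ],
)
print(response.choices[0].message.content)
```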
Model | Developer | Release date
MiMo-V2-Flash | Xiaomi | 2025-12-16
Qwen3.5-397B-A17B | Alibaba / Qwen | 2026-02-16 (2 months newer)
[Chart: average performance of MiMo-V2-Flash and Qwen3.5-397B-A17B across their one common benchmark]
[Chart: performance comparison of MiMo-V2-Flash and Qwen3.5-397B-A17B across key benchmark categories]
[Table: available providers and their performance metrics for MiMo-V2-Flash and Qwen3.5-397B-A17B]