Comprehensive side-by-side LLM comparison
Qwen2 72B Instruct is a dense, text-only model, while Qwen3.5-397B-A17B supports multimodal inputs. Both models have their strengths depending on your specific coding needs.
Qwen2 72B Instruct (Alibaba / Qwen)
Qwen2-72B-Instruct is a 72-billion-parameter instruction-tuned language model released by Alibaba's Qwen team in June 2024 as the flagship of the Qwen2 generation and a major step in open-weight multilingual modeling. Trained on data spanning 30+ languages with strong coverage of code and structured reasoning, it was among the first openly released 70B-class models to post competitive results across a broad range of benchmarks. It established the architecture and training methodology that the Qwen2.5 series later extended.
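Because the weights are published on Hugging Face, the model can be run locally with the transformers library. Below is a minimal sketch of chat-style inference; the checkpoint id Qwen/Qwen2-72B-Instruct is the published one, while the hardware setup (sharding a 72B model across multiple GPUs via device_map="auto") is an illustrative assumption.

```python
# Minimal sketch: chat inference with Qwen2-72B-Instruct via Hugging Face
# transformers. Assumes a multi-GPU host with enough memory for the 72B
# weights; device_map="auto" shards the model across available devices.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-72B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # shard layers across available GPUs
)

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that merges two sorted lists."},
]
# apply_chat_template wraps the messages in Qwen's chat markup
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```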
Qwen3.5-397B-A17B (Alibaba / Qwen)
Qwen3.5-397B-A17B is a 397-billion-parameter mixture-of-experts model from Alibaba's Qwen team, released in February 2026 as the open-weight flagship of the Qwen3.5 series. It activates 17 billion parameters per forward pass through a hybrid architecture that combines linear attention (based on Gated Delta Networks) with sparse MoE layers. The model was co-trained on text, images, and video using early fusion, making it natively multimodal across a 262K-token context window, and its sparse computation design yields significantly higher inference throughput than comparable dense models. At release it was among the most capable open-weight models publicly available, offered under Apache 2.0 and accessible through Alibaba's DashScope API as the Qwen3.5-Plus endpoint.
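DashScope exposes an OpenAI-compatible endpoint, so hosted access can be sketched with the openai Python SDK. In the example below, the base URL follows DashScope's documented compatible mode, but the model name "qwen3.5-plus" and the exact multimodal message shape are assumptions inferred from the description above, not confirmed identifiers.

```python
# Hedged sketch: calling the Qwen3.5-Plus endpoint through DashScope's
# OpenAI-compatible API. The model name "qwen3.5-plus" and the image-input
# format are assumptions based on the model description, not verified values.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

response = client.chat.completions.create(
    model="qwen3.5-plus",  # assumed endpoint name for Qwen3.5-397B-A17B
    messages=[
        {
            "role": "user",
            "content": [
                # Native multimodality: an image and text in a single turn
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/diagram.png"}},
                {"type": "text",
                 "text": "Explain what this architecture diagram shows."},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```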
Model                 Provider         Release date
Qwen2 72B Instruct    Alibaba / Qwen   2024-06-07
Qwen3.5-397B-A17B     Alibaba / Qwen   2026-02-16 (about 20 months newer)
Available providers and their performance metrics