Comprehensive side-by-side LLM comparison
Qwen3-235B-A22B leads with a 6.7% higher average benchmark score, while Mistral Small 3.1 24B Instruct supports multimodal (text and image) inputs. Overall, Qwen3-235B-A22B is the stronger choice for coding tasks.
Mistral AI
Mistral Small 3.1 is a 24-billion-parameter multimodal model from Mistral AI, released in March 2025 as an update to Mistral Small 3 that added vision understanding and expanded the context window from 32K to 128K tokens. The model accepts both text and image inputs, broadening its applicability to document analysis, image-grounded reasoning, and mixed-media workflows without requiring an increase in parameter count. Released under Apache 2.0, it continued Mistral's pattern of incremental capability gains delivered in compact, practically deployable open-weight packages.
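Because Mistral Small 3.1 accepts images alongside text, a single request can mix both modalities. The sketch below shows one plausible way to send such a request through an OpenAI-compatible gateway such as OpenRouter (listed as a provider later in this comparison); the model slug, endpoint, and image URL are illustrative assumptions, not verified values.

```python
# Hedged sketch: a text+image prompt to Mistral Small 3.1 24B Instruct via an
# OpenAI-compatible chat completions endpoint. The model slug and base_url are
# assumptions; confirm them against the provider's catalog before use.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumed OpenAI-compatible gateway
    api_key="YOUR_OPENROUTER_API_KEY",
)

response = client.chat.completions.create(
    model="mistralai/mistral-small-3.1-24b-instruct",  # assumed slug
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize the key figures in this chart."},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```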
Alibaba / Qwen
Qwen3-235B-A22B, released by Alibaba's Qwen team on April 28, 2025, is a Mixture-of-Experts large language model with 235 billion total parameters and 22 billion active parameters per inference. It features a 256K token context window, hybrid thinking capabilities (both reasoning and direct generation modes), and was trained on 36 trillion tokens across 119 languages. Qwen3-235B targets complex reasoning, multilingual tasks, and open-source deployments under the Apache 2.0 license.
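Qwen3's hybrid thinking design means the same checkpoint can either emit an explicit reasoning trace or answer directly. With the Hugging Face transformers chat template this is commonly toggled through an enable_thinking flag, following the pattern in the Qwen3 model card; the sketch below assumes that interface and locally hosted weights, so treat it as an illustration rather than a verified recipe.

```python
# Hedged sketch: toggling Qwen3's thinking vs. direct-generation mode via the
# Hugging Face chat template. In practice the 235B MoE checkpoint needs a large
# multi-GPU host or a hosted endpoint; a smaller Qwen3 variant works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-235B-A22B"  # 235B total / 22B active parameters
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Prove that the sum of two even numbers is even."}]

# Direct mode: skip the reasoning trace for lower latency.
# Set enable_thinking=True to get an explicit reasoning segment before the answer.
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```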
Mistral Small 3.1 24B Instruct: Mistral AI, released 2025-03-17
Qwen3-235B-A22B: Alibaba / Qwen, released 2025-04-28 (about one month newer)
Context window and performance specifications
Average performance across one common benchmark (chart comparing Mistral Small 3.1 24B Instruct and Qwen3-235B-A22B).
Performance comparison across key benchmark categories (chart comparing Mistral Small 3.1 24B Instruct and Qwen3-235B-A22B).
Available providers and their performance metrics
OpenRouter: listed as a provider for both Mistral Small 3.1 24B Instruct and Qwen3-235B-A22B.
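Since both models are reachable through OpenRouter's OpenAI-compatible API, a quick way to reproduce a side-by-side check is to send the same prompt to each model and compare outputs and latency. The sketch below assumes the endpoint and model slugs shown; they are illustrative, not verified identifiers.

```python
# Hedged sketch: sending the same coding prompt to both models via OpenRouter's
# OpenAI-compatible API and timing each response. Model slugs are assumptions;
# check the provider catalog for the exact identifiers.
import time
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_OPENROUTER_API_KEY")

MODELS = [
    "mistralai/mistral-small-3.1-24b-instruct",  # assumed slug
    "qwen/qwen3-235b-a22b",                      # assumed slug
]

prompt = "Write a Python function that merges two sorted lists in O(n) time."

for model in MODELS:
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=400,
    )
    elapsed = time.perf_counter() - start
    print(f"--- {model} ({elapsed:.1f}s) ---")
    print(resp.choices[0].message.content)
```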