Comprehensive side-by-side LLM comparison
Qwen2.5 32B Instruct leads with a 9.3% higher average benchmark score and is the stronger choice for coding tasks overall. Mistral Small 3.1 24B Instruct is the only one of the two to support multimodal (text and image) inputs.
Mistral AI
Mistral Small 3.1 is a 24-billion-parameter multimodal model from Mistral AI, released in March 2025 as an update to Mistral Small 3 that added vision understanding and expanded the context window from 32K to 128K tokens. The model accepts both text and image inputs, broadening its applicability to document analysis, image-grounded reasoning, and mixed-media workflows without requiring an increase in parameter count. Released under Apache 2.0, it continued Mistral's pattern of incremental capability gains delivered in compact, practically deployable open-weight packages.
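As a minimal sketch of the multimodal use case, the snippet below sends a mixed text-and-image request to the model, assuming it is served behind an OpenAI-compatible endpoint (for example, a local vLLM server). The base URL, API key, image URL, and exact model identifier are illustrative assumptions; adjust them to your deployment.

```python
# Sketch: text + image request to Mistral Small 3.1 through an
# OpenAI-compatible endpoint. Base URL, API key, image URL, and the
# model identifier are placeholders, not verified values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="mistralai/Mistral-Small-3.1-24B-Instruct-2503",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize the table shown in this scan."},
            {"type": "image_url", "image_url": {"url": "https://example.com/invoice.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```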
Alibaba / Qwen
Qwen2.5-32B-Instruct is a 32-billion-parameter open-weight model from Alibaba's Qwen team, released in September 2024 as part of the Qwen2.5 series trained on 18 trillion tokens. The model is positioned as a high-capability option for developers with access to multi-GPU setups or high-VRAM hardware, offering strong performance on coding, structured reasoning, and multilingual tasks while remaining fully open under Apache 2.0. Its 128K context window and support for structured output generation made it a popular choice for document processing and agentic workflows in the open-source community.
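To make the structured-output use case concrete, here is a sketch following the standard Hugging Face transformers chat pattern for Qwen2.5. The extraction prompt and JSON keys are illustrative assumptions, and JSON validity here relies on instruction-following rather than constrained decoding.

```python
# Sketch: prompting Qwen2.5-32B-Instruct for JSON output via the standard
# transformers chat workflow. Prompt content and keys are hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-32B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# device_map="auto" shards the 32B weights across available GPUs,
# matching the multi-GPU / high-VRAM deployments described above.
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "system", "content": 'Reply with valid JSON only, using the keys '
                                  '"invoice_number" and "total".'},
    {"role": "user", "content": "Invoice #A-1042, amount due: $318.50."},
]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens and decode only the generated continuation.
response = tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True
)
print(response)
```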
Model                            Developer        Release date
Qwen2.5 32B Instruct             Alibaba / Qwen   2024-09-19
Mistral Small 3.1 24B Instruct   Mistral AI       2025-03-17

Mistral Small 3.1 24B Instruct is roughly 5 months newer.
[Chart: average performance across one common benchmark, Mistral Small 3.1 24B Instruct vs. Qwen2.5 32B Instruct]
[Chart: performance comparison across key benchmark categories, Mistral Small 3.1 24B Instruct vs. Qwen2.5 32B Instruct]
[Table: available providers and their performance metrics for both models]