Comprehensive side-by-side LLM comparison
DeepSeek R1 Zero leads with a 5.1% higher average benchmark score, while Magistral Medium supports multimodal inputs. Overall, DeepSeek R1 Zero is the stronger choice for coding tasks.
DeepSeek
DeepSeek-R1-Zero was introduced as an experimental variant trained with minimal human supervision, designed to develop reasoning patterns through self-guided reinforcement learning. Built to explore how models can discover analytical strategies independently, it represents research into autonomous reasoning capability development.
Mistral AI
Magistral Medium was introduced as a specialized model for advanced reasoning and analytical tasks, designed to handle complex problem-solving scenarios. Built to serve enterprise and professional applications requiring sophisticated reasoning, it represents Mistral's focus on high-quality analytical capabilities.
DeepSeek R1 Zero (DeepSeek): released 2025-01-20
Magistral Medium (Mistral AI): released 2025-06-10 (about 4 months newer)
Average performance across 3 common benchmarks: chart comparing DeepSeek R1 Zero and Magistral Medium (scores not reproduced here).
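The 5.1% headline figure presumably comes from averaging each model's scores across the three benchmarks and then comparing those averages; whether it is a relative difference or a percentage-point gap is not stated here. The sketch below shows the relative-difference reading, using hypothetical placeholder benchmark names and scores rather than the actual values behind this comparison.

```python
# Minimal sketch of how a "5.1% higher average benchmark score" style figure
# is typically computed. All benchmark names and scores below are hypothetical
# placeholders, not the actual values behind this comparison.

scores = {
    "DeepSeek R1 Zero": {"benchmark_1": 0.86, "benchmark_2": 0.71, "benchmark_3": 0.64},
    "Magistral Medium": {"benchmark_1": 0.82, "benchmark_2": 0.68, "benchmark_3": 0.60},
}

def average(per_benchmark: dict) -> float:
    """Unweighted mean across the listed benchmarks."""
    return sum(per_benchmark.values()) / len(per_benchmark)

avg_deepseek = average(scores["DeepSeek R1 Zero"])
avg_magistral = average(scores["Magistral Medium"])

# Relative lead of the higher-scoring model, expressed in percent.
lead_pct = (avg_deepseek - avg_magistral) / avg_magistral * 100
print(f"DeepSeek R1 Zero average: {avg_deepseek:.3f}")
print(f"Magistral Medium average: {avg_magistral:.3f}")
print(f"Relative lead: {lead_pct:+.1f}%")
```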
Available providers and their performance metrics: provider table for DeepSeek R1 Zero and Magistral Medium (data not reproduced here).