Comprehensive side-by-side LLM comparison
DeepSeek-R1-0528 leads with a 17.6% higher average benchmark score and offers a context window 122.9K tokens larger than Qwen2.5 72B Instruct's. Qwen2.5 72B Instruct is $1.90 cheaper per million tokens. Overall, DeepSeek-R1-0528 is the stronger choice for coding tasks.
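Per-million-token price comparisons are usually a blend of separate input and output token rates. The sketch below shows one common way such a blended figure is computed; the prices and the 3:1 input-to-output traffic mix are hypothetical placeholders, not the rates behind the $1.90 figure above.

```python
def blended_cost_per_million(input_price: float, output_price: float,
                             input_share: float = 0.75) -> float:
    """Blended USD cost per million tokens, weighting the input and output
    per-million-token prices by an assumed traffic mix (default 3:1)."""
    return input_price * input_share + output_price * (1.0 - input_share)

# Hypothetical example rates (USD per million tokens) for illustration only.
model_a = blended_cost_per_million(input_price=0.50, output_price=2.15)
model_b = blended_cost_per_million(input_price=0.35, output_price=0.40)
print(f"blended difference: ${model_a - model_b:.2f} per million tokens")
```

Real comparisons should also account for cached-input discounts and provider-specific pricing, which can shift the blended figure substantially.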
DeepSeek
DeepSeek-R1-0528 is a release iteration of the DeepSeek-R1 model that incorporates refinements from ongoing training. Built to provide enhanced reasoning capabilities based on accumulated insights, it continues the evolution of DeepSeek's reasoning-focused architecture.
Alibaba Cloud / Qwen Team
Qwen2.5 72B Instruct was developed as the flagship text model of the Qwen 2.5 series, providing advanced language capabilities with 72 billion parameters. Built to compete with frontier models in reasoning, coding, and general language tasks, it is Qwen's most capable instruction-following model in this generation.
Release information (DeepSeek-R1-0528 is 8 months newer)

Model                  Developer                    Release date
Qwen2.5 72B Instruct   Alibaba Cloud / Qwen Team    2024-09-19
DeepSeek-R1-0528       DeepSeek                     2025-05-28
[Chart: Cost per million tokens (USD) for DeepSeek-R1-0528 and Qwen2.5 72B Instruct]
Context window and performance specifications

[Chart: Average performance across 4 common benchmarks for DeepSeek-R1-0528 and Qwen2.5 72B Instruct]
Available providers and their performance metrics

DeepSeek-R1-0528: DeepInfra, DeepSeek, Novita
Qwen2.5 72B Instruct: DeepInfra, Fireworks, Hyperbolic, Together