DeepSeek-R1-0528 vs o1-mini: comprehensive side-by-side LLM comparison
DeepSeek-R1-0528 leads with a 21.0% higher average benchmark score, offers a context window 68.6K tokens larger than o1-mini's, and costs $12.35 less per million tokens. Overall, DeepSeek-R1-0528 is the stronger choice for coding tasks.
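The headline figures above are simple differences between each model's published specs: average benchmark score, context window size, and blended price per million tokens. The sketch below shows one way such deltas can be computed; the spec values in it are placeholder assumptions chosen only so the arithmetic lands near the headline numbers, not the exact data behind this page.

```python
# Minimal sketch: reproducing the headline deltas from per-model spec values.
# All numbers below are illustrative placeholders, NOT this page's underlying data.

specs = {
    "DeepSeek-R1-0528": {
        "avg_benchmark_pct": 65.0,    # assumed average score on shared benchmarks
        "context_tokens": 196_608,    # assumed context window size
        "price_per_mtok_usd": 1.14,   # assumed blended price per million tokens
    },
    "o1-mini": {
        "avg_benchmark_pct": 44.0,
        "context_tokens": 128_000,
        "price_per_mtok_usd": 13.49,
    },
}

a = specs["DeepSeek-R1-0528"]
b = specs["o1-mini"]

# Benchmark gap, read here as a difference in score points.
score_delta = a["avg_benchmark_pct"] - b["avg_benchmark_pct"]

# Context window gap, reported in thousands of tokens (the "68.6K" figure).
context_delta_k = (a["context_tokens"] - b["context_tokens"]) / 1_000

# Price gap per million tokens; positive means DeepSeek-R1-0528 is cheaper.
cost_delta = b["price_per_mtok_usd"] - a["price_per_mtok_usd"]

print(f"Benchmark score: DeepSeek-R1-0528 leads by {score_delta:.1f} points")
print(f"Context window:  {context_delta_k:.1f}K more tokens")
print(f"Cost:            ${cost_delta:.2f} cheaper per million tokens")
```

Note that the "21.0% higher" score is interpreted here as a difference in score points; a relative difference would instead divide by the lower score.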
DeepSeek
DeepSeek-R1-0528 is a release iteration of the DeepSeek-R1 model that incorporates refinements from continued training. Built to sharpen the model's reasoning capabilities, it continues the evolution of DeepSeek's reasoning-focused architecture.
OpenAI
o1-mini was created as a faster, more cost-effective reasoning model, designed to bring extended thinking to applications with tighter latency and budget constraints. Built to excel in coding and STEM reasoning while remaining affordable, it offers a more accessible entry point to reasoning-enhanced AI assistance.
DeepSeek-R1-0528 is roughly 8 months newer than o1-mini.

o1-mini (OpenAI): released 2024-09-12
DeepSeek-R1-0528 (DeepSeek): released 2025-05-28
Cost per million tokens (USD)
[Pricing chart comparing DeepSeek-R1-0528 and o1-mini]
Context window and performance specifications
Average performance across 1 common benchmark
[Benchmark chart comparing DeepSeek-R1-0528 and o1-mini]
Available providers and their performance metrics

DeepSeek-R1-0528: DeepInfra, DeepSeek, Novita

o1-mini: Azure, OpenAI