Comprehensive side-by-side LLM comparison
DeepSeek-V3 0324 leads with a 6.2% higher average benchmark score, offers a 134.1K-token larger context window than o1-mini, and costs $13.58 less per million tokens. Overall, DeepSeek-V3 0324 is the stronger choice for coding tasks.
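The headline figures come from blending per-token pricing and averaging the shared benchmarks. The sketch below shows one way to reproduce that arithmetic; the per-model prices and benchmark scores in it are illustrative placeholders, not the values behind this page's numbers.

```python
# Sketch of the arithmetic behind the headline comparison figures.
# Prices and scores are placeholders, not this page's actual data.

def blended_cost_per_million(input_price: float, output_price: float) -> float:
    """Simple blended $/1M tokens, averaging input and output pricing."""
    return (input_price + output_price) / 2

deepseek_cost = blended_cost_per_million(input_price=0.27, output_price=1.10)  # placeholder
o1_mini_cost = blended_cost_per_million(input_price=3.00, output_price=12.00)  # placeholder
print(f"Cost gap: ${o1_mini_cost - deepseek_cost:.2f} per million tokens (blended)")

# Average score over the two shared benchmarks (placeholder scores).
deepseek_scores = [82.0, 90.0]
o1_mini_scores = [76.0, 84.0]
deepseek_avg = sum(deepseek_scores) / len(deepseek_scores)
o1_mini_avg = sum(o1_mini_scores) / len(o1_mini_scores)
print(f"Benchmark lead: {(deepseek_avg - o1_mini_avg) / o1_mini_avg * 100:.1f}%")
```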
DeepSeek
DeepSeek-V3-0324 is a specific release iteration of DeepSeek-V3 that incorporates ongoing improvements and refinements. Built for better stability and performance based on deployment learnings, it continues the iterative evolution of the DeepSeek-V3 architecture.
OpenAI
o1-mini was created as a faster, more cost-effective reasoning model, designed to bring extended thinking capabilities to applications with tighter latency and budget constraints. Built to excel particularly in coding and STEM reasoning while maintaining affordability, it provides a more accessible entry point to reasoning-enhanced AI assistance.
Release dates (DeepSeek-V3 0324 is roughly 6 months newer)
o1-mini (OpenAI): 2024-09-12
DeepSeek-V3 0324 (DeepSeek): 2025-03-25
Cost per million tokens (USD): chart comparing DeepSeek-V3 0324 and o1-mini.
Context window and performance specifications
Average performance across 2 common benchmarks: chart comparing DeepSeek-V3 0324 and o1-mini.
Available providers and their performance metrics
DeepSeek-V3 0324: Novita
o1-mini: Azure, OpenAI
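The providers above all expose OpenAI-compatible chat completion endpoints, so switching between the two models is largely a matter of changing the base URL and model identifier. The sketch below assumes the openai Python SDK; the Novita base URL and DeepSeek model id shown are assumptions and should be checked against the provider's documentation.

```python
# Sketch: calling each model through an OpenAI-compatible endpoint.
# The Novita base URL and DeepSeek model id are assumptions; verify
# them against the provider's documentation before use.
import os
from openai import OpenAI

def ask(client: OpenAI, model: str, prompt: str) -> str:
    """Send a single user message and return the model's reply text."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

prompt = "Write a Python function that reverses a string."

# o1-mini via OpenAI's API.
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
print(ask(openai_client, "o1-mini", prompt))

# DeepSeek-V3 0324 via Novita's OpenAI-compatible API (assumed URL and model id).
novita_client = OpenAI(
    api_key=os.environ["NOVITA_API_KEY"],
    base_url="https://api.novita.ai/v3/openai",
)
print(ask(novita_client, "deepseek/deepseek-v3-0324", prompt))
```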