Comprehensive side-by-side LLM comparison
o3-mini leads with a 30.0% higher average benchmark score and is available from 2 providers. Overall, o3-mini is the stronger choice for coding tasks.
o3-mini (OpenAI)
o3-mini was created as an efficient variant of the o3 reasoning model, designed to provide advanced thinking capabilities with reduced computational requirements. Built to make next-generation reasoning accessible to a broader range of applications, it balances analytical depth with practical speed and cost considerations.

Phi-3.5-MoE-instruct (Microsoft)
Phi-3.5 MoE was created using a mixture-of-experts architecture, designed to provide enhanced capabilities while maintaining efficiency through sparse activation. Built to combine the benefits of larger models with practical computational requirements, it represents Microsoft's exploration of efficient scaling techniques.
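As a rough illustration of what "efficiency through sparse activation" means in a mixture-of-experts layer, the sketch below routes each token to only its top-scoring experts, so the remaining experts do no work for that token. The layer width, expert count, and top-k value are illustrative placeholders, not Phi-3.5-MoE's actual configuration.

    # Minimal sketch of sparse mixture-of-experts routing (top-2 gating) in numpy.
    # Sizes are illustrative, not Phi-3.5-MoE's real configuration.
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_experts, top_k = 8, 4, 2

    # Toy "experts": one weight matrix each; a router scores every token against every expert.
    experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
    router_w = rng.standard_normal((d_model, n_experts))

    def moe_layer(x):
        """x: (tokens, d_model). Each token is processed by only its top_k experts."""
        logits = x @ router_w                              # (tokens, n_experts) routing scores
        top = np.argsort(logits, axis=-1)[:, -top_k:]      # indices of the top_k experts per token
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            sel = top[t]
            weights = np.exp(logits[t, sel])
            weights /= weights.sum()                       # softmax over the selected experts only
            for w, e in zip(weights, sel):
                out[t] += w * (x[t] @ experts[e])          # sparse activation: unselected experts stay idle
        return out

    tokens = rng.standard_normal((3, d_model))
    print(moe_layer(tokens).shape)                         # (3, 8)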
Phi-3.5-MoE-instruct: Microsoft, released 2024-08-23
o3-mini: OpenAI, released 2025-01-30 (about 5 months newer)
Context window and performance specifications
Average performance across 4 common benchmarks: o3-mini vs. Phi-3.5-MoE-instruct
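To make the headline figure concrete, here is a minimal sketch of how a "30.0% higher average benchmark score" is typically derived: average each model's scores over the same 4 benchmarks, then take the relative difference of the two averages. The per-benchmark values below are illustrative placeholders chosen only to reproduce the 30.0% figure; they are not the actual scores behind this comparison.

    # Illustrative placeholders only: not the real benchmark results behind this page.
    o3_mini_scores = [0.85, 0.78, 0.90, 0.72]   # hypothetical scores on 4 common benchmarks
    phi_moe_scores = [0.60, 0.62, 0.70, 0.58]   # hypothetical scores on the same benchmarks

    avg_o3_mini = sum(o3_mini_scores) / len(o3_mini_scores)   # 0.8125
    avg_phi_moe = sum(phi_moe_scores) / len(phi_moe_scores)   # 0.6250

    # "X% higher" reads as the relative difference between the two averages.
    pct_higher = (avg_o3_mini - avg_phi_moe) / avg_phi_moe * 100
    print(f"o3-mini avg {avg_o3_mini:.3f} vs Phi-3.5-MoE avg {avg_phi_moe:.3f} "
          f"({pct_higher:.1f}% higher)")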
Available providers and their performance metrics
o3-mini is available from 2 providers: Azure and OpenAI.
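For reference, a minimal sketch of what calling o3-mini through the OpenAI provider can look like, assuming the official openai Python SDK (v1.x) with an OPENAI_API_KEY set in the environment; the prompt and the reasoning_effort value are illustrative, not taken from this page.

    # Sketch: querying o3-mini via the OpenAI Python SDK (v1.x); assumes OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="o3-mini",
        reasoning_effort="medium",  # effort control for o-series reasoning models: "low", "medium", or "high"
        messages=[
            {"role": "user", "content": "Write a Python function that reverses a linked list."},
        ],
    )

    print(response.choices[0].message.content)

The same model can also be reached through Azure OpenAI, where the client is configured with an Azure endpoint and a deployment name instead of the default OpenAI endpoint.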

