Comprehensive side-by-side LLM comparison
Grok-3 Mini leads with a 47.2% higher average benchmark score and supports multimodal inputs. Overall, Grok-3 Mini is the stronger choice for coding tasks.
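As a rough illustration of where a figure like "47.2% higher" comes from, the sketch below computes the relative difference between two average benchmark scores. The score values are placeholders chosen only to reproduce the headline number, not the benchmarks behind this comparison.

```python
# Illustrative sketch: relative improvement between two average benchmark scores.
# The inputs are placeholder values, not the actual scores from this page.
def relative_improvement(avg_a: float, avg_b: float) -> float:
    """Percentage by which avg_a exceeds avg_b."""
    return (avg_a - avg_b) / avg_b * 100

print(f"{relative_improvement(0.72, 0.489):.1f}% higher")  # ~47.2% with these placeholder averages
```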
Grok 3 Mini (xAI)
Grok 3 Mini was developed as an efficient version of Grok 3, designed to bring next-generation capabilities to resource-conscious deployments. Built to provide strong performance with practical efficiency, it extends Grok 3's innovations to broader application scenarios.
Phi-3.5 MoE (Microsoft)
Phi-3.5 MoE was created using a mixture-of-experts architecture, designed to provide enhanced capabilities while maintaining efficiency through sparse activation. Built to combine the benefits of larger models with practical computational requirements, it represents Microsoft's exploration of efficient scaling techniques.
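To make "sparse activation" concrete, here is a minimal, illustrative top-2 routing sketch. It is not Microsoft's implementation, and the expert count, layer sizes, and top-2 choice are arbitrary assumptions for the example: for each token, only the selected experts run, so compute per token stays far below the model's total parameter count.

```python
# Illustrative mixture-of-experts sketch (toy sizes, not Phi-3.5 MoE's configuration).
import numpy as np

rng = np.random.default_rng(0)

n_experts, d_model, top_k = 8, 16, 2  # arbitrary toy dimensions
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_layer(x):
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router                          # router score for each expert
    top = np.argsort(logits)[-top_k:]            # indices of the k highest-scoring experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over selected experts only
    # Only the selected experts run; the rest stay idle for this token,
    # which is what "sparse activation" buys in compute.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)  # (16,)
```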
Grok-3 Mini is about 5 months newer than Phi-3.5-MoE-instruct.

Model                  Developer   Release date
Phi-3.5-MoE-instruct   Microsoft   2024-08-23
Grok-3 Mini            xAI         2025-02-17
Context window and performance specifications
Average performance across 1 common benchmark (chart comparing Grok-3 Mini and Phi-3.5-MoE-instruct; the underlying scores are not reproduced here).
Available providers and their performance metrics

Grok-3 Mini is available through xAI; detailed per-provider performance metrics for the two models are omitted here.
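For reference, hosted versions of models like these are commonly accessed through OpenAI-compatible chat-completions endpoints. The sketch below assumes xAI's hosted Grok-3 Mini; the base URL and model identifier follow xAI's published conventions but should be verified against the provider's current documentation before use.

```python
# Hedged sketch of calling a hosted model via an OpenAI-compatible endpoint.
# Base URL and model id are assumptions to verify against xAI's docs.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.x.ai/v1",   # assumed xAI endpoint
    api_key="YOUR_XAI_API_KEY",
)

response = client.chat.completions.create(
    model="grok-3-mini",              # assumed model identifier
    messages=[{"role": "user", "content": "Write a function that reverses a linked list."}],
)
print(response.choices[0].message.content)
```

The same client pattern applies to other providers that expose Phi-3.5-MoE-instruct behind an OpenAI-compatible endpoint; only the base URL, key, and model identifier change.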