Comprehensive side-by-side LLM comparison
Grok-2 mini leads with a 15.3% higher average benchmark score and supports multimodal inputs. Overall, it is the stronger choice for coding tasks.
xAI
Grok 2 Mini was created as a more efficient variant of Grok 2, designed to provide strong capabilities with reduced computational requirements. Built to make Grok 2's advancements accessible to applications with tighter resource constraints, it balances performance with practical deployment needs.
Microsoft
Phi-3.5 MoE was created using a mixture-of-experts architecture, designed to provide enhanced capabilities while maintaining efficiency through sparse activation. Built to combine the benefits of larger models with practical computational requirements, it represents Microsoft's exploration of efficient scaling techniques.
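Phi-3.5 MoE's internal routing is not detailed here, but the sparse activation the paragraph describes generally works by scoring all experts and running only the top-k of them per input. The sketch below is illustrative only (dimensions, the gating weights, and the linear "experts" are all made up, not Microsoft's implementation):

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route input x to the top-k experts by gate score and combine
    their outputs, weighted by a softmax over those k scores."""
    scores = x @ gate_w                    # one gating score per expert
    top = np.argsort(scores)[-k:]          # indices of the k highest-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over the selected experts only
    # Only the chosen experts run; the rest stay idle (sparse activation),
    # so compute cost scales with k, not with the total expert count.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
gate_w = rng.normal(size=(d, n_experts))
# Each toy "expert" is just an independent linear layer
expert_ws = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, w=w: x @ w for w in expert_ws]

x = rng.normal(size=d)
y = moe_forward(x, gate_w, experts, k=2)
print(y.shape)  # same dimensionality as the input
```

This is why an MoE model can hold far more parameters than it spends compute on: per token, only k of the n_experts sub-networks are evaluated.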
Grok-2 mini (xAI): released 2024-08-13
Phi-3.5-MoE-instruct (Microsoft): released 2024-08-23, 10 days newer
[Chart: average performance across 5 common benchmarks for Grok-2 mini and Phi-3.5-MoE-instruct]
[Table: available providers and their performance metrics for Grok-2 mini and Phi-3.5-MoE-instruct]