Comprehensive side-by-side LLM comparison
QvQ-72B-Preview supports multimodal (vision) inputs, while Phi-3.5-MoE-instruct is text-only. Both models have their strengths depending on your specific coding needs.
Microsoft
Phi-3.5 MoE was created using a mixture-of-experts architecture, designed to provide enhanced capabilities while maintaining efficiency through sparse activation. Built to combine the benefits of larger models with practical computational requirements, it represents Microsoft's exploration of efficient scaling techniques.
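Sparse activation means that for each token only a few experts (small sub-networks) run, selected by a learned gating function, so compute stays far below what the total parameter count suggests. A minimal toy sketch of top-k expert routing in NumPy (illustrative only, not Microsoft's implementation; the 16-expert / top-2 configuration mirrors Phi-3.5-MoE's published setup):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(x, gate_w, experts, k=2):
    """Route input x to the top-k experts by gate score and mix
    their outputs, weighted by renormalized gate probabilities."""
    scores = softmax(gate_w @ x)           # one probability per expert
    top = np.argsort(scores)[-k:]          # indices of the k best experts
    weights = scores[top] / scores[top].sum()
    # Only the selected experts run -- this is the "sparse activation"
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
# toy experts: independent linear layers
mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, W=W: W @ x for W in mats]
gate_w = rng.normal(size=(n_experts, d))

x = rng.normal(size=d)
y = moe_forward(x, gate_w, experts, k=2)
print(y.shape)  # (8,)
```

Per forward pass only 2 of the 16 expert matrices are multiplied, which is why an MoE model can hold many more total parameters than it activates per token.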
Alibaba Cloud / Qwen Team
QVQ-72B-Preview was introduced as an experimental visual question answering model, designed to combine vision and language understanding for complex reasoning tasks. Built to demonstrate advanced multimodal reasoning capabilities, it represents Qwen's exploration into models that can analyze and reason about visual information.
QvQ-72B-Preview is 4 months newer than Phi-3.5-MoE-instruct.

Phi-3.5-MoE-instruct (Microsoft), released 2024-08-23
QvQ-72B-Preview (Alibaba Cloud / Qwen Team), released 2024-12-25
Available providers and their performance metrics
