Comprehensive side-by-side LLM comparison
QvQ-72B-Preview leads with a 12.1% higher average benchmark score, making it the stronger overall choice for coding tasks.
Microsoft
Phi-4-multimodal-instruct was created to handle multiple input modalities, including text, images, and audio. Built to extend Phi-4's efficiency into multimodal applications, it demonstrates that compact models can successfully integrate diverse information types.
Alibaba Cloud / Qwen Team
QvQ-72B-Preview was introduced as an experimental visual question answering model, designed to combine vision and language understanding for complex reasoning tasks. Built to demonstrate advanced multimodal reasoning capabilities, it represents the Qwen team's exploration of models that can analyze and reason about visual information.
Phi-4-multimodal-instruct is roughly 1 month newer than QvQ-72B-Preview.

Model                       Developer                   Release date
QvQ-72B-Preview             Alibaba Cloud / Qwen Team   2024-12-25
Phi-4-multimodal-instruct   Microsoft                   2025-02-01
Context window and performance specifications
Average performance across 2 common benchmarks: QvQ-72B-Preview averages 12.1% higher.

Model                       Training data cutoff
Phi-4-multimodal-instruct   2024-06-01
QvQ-72B-Preview             not listed
Available providers and their performance metrics
Model                       Providers
Phi-4-multimodal-instruct   DeepInfra
QvQ-72B-Preview             none listed
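For readers who want to try Phi-4-multimodal-instruct through the DeepInfra listing above, the minimal Python sketch below shows one way to call it via an OpenAI-compatible client. The base URL https://api.deepinfra.com/v1/openai, the model id microsoft/Phi-4-multimodal-instruct, and the placeholder API key are assumptions, not details confirmed by this comparison; verify the exact endpoint and model name against DeepInfra's documentation before use.

# Minimal sketch: querying Phi-4-multimodal-instruct through DeepInfra's
# OpenAI-compatible endpoint. The base_url and model id are assumptions;
# confirm both in DeepInfra's documentation.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepinfra.com/v1/openai",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_DEEPINFRA_API_KEY",                # replace with a real key
)

response = client.chat.completions.create(
    model="microsoft/Phi-4-multimodal-instruct",     # assumed model id on DeepInfra
    messages=[
        {"role": "user", "content": "Summarize the difference between a context window and a training data cutoff."},
    ],
    max_tokens=200,
)

print(response.choices[0].message.content)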