MMMU-Pro
Multimodal
About
MMMU-Pro is a hardened multimodal benchmark testing AI models on college-level academic questions across 30 disciplines, with 10-option multiple choice and vision-only question formats that eliminate shortcuts exploitable by text-only models.
Evaluation Stats
Total Models: 6
Organizations: 3
Verified Results: 0
Self-Reported: 6
Benchmark Details
Max Score: 100
Sub-benchmarks: 1
Performance Overview
Score distribution and top performers
Score Distribution (6 models)
Top Score: 81.0%
Average Score: 73.8%
High Performers (80%+): 1
Top Organizations
#1 Google DeepMind (1 model): 81.0%
#2 OpenAI (1 model): 79.5%
#3 Anthropic (4 models): 70.6%
Leaderboard
6 models ranked by performance on MMMU-Pro
| Release Date | License | Score |
|---|---|---|
| Nov 18, 2025 | Proprietary | 81.0% |
| Dec 11, 2025 | Proprietary | 79.5% |
| Feb 17, 2026 | Proprietary | 74.5% |
| Feb 1, 2026 | Proprietary | 73.9% |
| Nov 1, 2025 | Proprietary | 70.6% |
| Sep 29, 2025 | Proprietary | 63.4% |