Global-MMLU
Multilingual · text
About
Global-MMLU extends the Massive Multitask Language Understanding (MMLU) benchmark beyond English, evaluating models' knowledge and reasoning across many languages while accounting for cultural context and regional variation in the questions. It provides a broader measure of multilingual and multicultural understanding than the original English-only benchmark.
Evaluation Stats
Total Models: 4
Organizations: 1
Verified Results: 0
Self-Reported: 4
Benchmark Details
Max Score: 1
Language: en
Performance Overview
Score distribution and top performers
Score Distribution: 4 models
Top Score: 60.3%
Average Score: 57.7%
High Performers (80%+): 0

Top Organizations
#1 Google: 4 models, 57.7% average
Leaderboard
4 models ranked by performance on Global-MMLU
Release Date | License | Score
---|---|---
Jun 26, 2025 | Proprietary | 60.3%
May 20, 2025 | Gemma | 60.3%
Jun 26, 2025 | Proprietary | 55.1%
May 20, 2025 | Gemma | 55.1%
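The summary statistics in the overview above (top score, average, high-performer count) follow directly from the four leaderboard scores. A minimal sketch checking that arithmetic, using only the scores listed on this page:

```python
# Self-reported Global-MMLU scores from the leaderboard above (percent).
scores = [60.3, 60.3, 55.1, 55.1]

top = max(scores)                                   # best single result
average = sum(scores) / len(scores)                 # mean across models
high_performers = sum(1 for s in scores if s >= 80.0)  # models at 80%+

print(f"Top score: {top}%")                         # 60.3%
print(f"Average: {average:.1f}%")                   # 57.7%
print(f"High performers (80%+): {high_performers}") # 0
```

This reproduces the reported top score of 60.3%, the 57.7% average, and zero models at or above the 80% threshold.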