Multilingual MMLU
Multilingual
text
About
Multilingual MMLU extends the Massive Multitask Language Understanding (MMLU) benchmark to multiple languages, evaluating language models' knowledge and reasoning across diverse linguistic contexts. The adaptation tests whether models can understand and apply academic knowledge beyond English, assessing both subject-matter expertise and cross-lingual knowledge transfer.
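For context, the reported scores are accuracy: the fraction of multiple-choice items answered correctly (max score 1, shown as a percentage). Below is a minimal sketch of that computation, assuming a simple item format; the `Item` dataclass and `predict` stub are hypothetical illustrations, not the evaluation harness behind these results.

```python
# Minimal accuracy sketch for MMLU-style multiple-choice items.
# Item and predict() are hypothetical stand-ins, not the actual harness.
from dataclasses import dataclass

@dataclass
class Item:
    question: str        # question text, in any target language
    choices: list[str]   # answer options (typically four, A-D)
    answer: int          # index of the correct option

def predict(item: Item) -> int:
    """Placeholder for a model call that returns the chosen option index."""
    return 0  # stub: always picks the first option

def accuracy(items: list[Item]) -> float:
    """Fraction of items answered correctly (max score 1, reported as a percentage)."""
    correct = sum(predict(item) == item.answer for item in items)
    return correct / len(items)

if __name__ == "__main__":
    demo = [
        Item("¿Cuál es la capital de Francia?", ["París", "Roma", "Berlín", "Madrid"], 0),
        Item("2 + 2 = ?", ["3", "4", "5", "6"], 1),
    ]
    print(f"accuracy: {accuracy(demo):.1%}")
```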
Evaluation Stats
Total Models: 2
Organizations: 2
Verified Results: 0
Self-Reported: 2
Benchmark Details
Max Score: 1
Language: en
Performance Overview
Score distribution and top performers
Score Distribution: 2 models
Top Score: 80.7%
Average Score: 65.0%
High Performers (80%+): 1

Top Organizations
#1 OpenAI (1 model): 80.7%
#2 Microsoft (1 model): 49.3%
Leaderboard
2 models ranked by performance on Multilingual MMLU
Rank | Organization | Release Date | License | Score
---|---|---|---|---
1 | OpenAI | Jan 30, 2025 | Proprietary | 80.7%
2 | Microsoft | Feb 1, 2025 | MIT | 49.3%