MEGA MLQA

Multilingual · text
About

MEGA-MLQA is the multilingual question-answering component of the MEGA benchmark suite, which evaluates generative AI models across multiple languages. The benchmark tests cross-lingual reading comprehension and question answering, measuring how accurately models can understand and answer questions posed across diverse languages.
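
Extractive QA benchmarks in the MLQA family are commonly scored with SQuAD-style token-overlap F1. The sketch below is a minimal illustration of that metric, not the benchmark's official scorer: the function name squad_f1 is ours, and it omits the fuller answer normalization (punctuation and article stripping) that reference implementations apply.

from collections import Counter

def squad_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted answer and a gold answer span."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # If either answer is empty, F1 is 1 only when both are empty.
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    # Multiset intersection counts tokens shared by prediction and gold.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Partial overlap with the gold answer still earns partial credit.
print(round(squad_f1("the Eiffel Tower in Paris", "Eiffel Tower"), 3))  # 0.571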

Evaluation Stats
Total Models: 2
Organizations: 1
Verified Results: 0
Self-Reported: 2
Benchmark Details
Max Score: 1
Language: en
Performance Overview
Score distribution and top performers

Score Distribution (2 models)
Top Score: 65.3%
Average Score: 63.5%
High Performers (80%+): 0

Top Organizations

#1 Microsoft: 2 models, 63.5%
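
The 63.5% average reported above is simply the mean of the two self-reported scores; a one-line check:

scores = [65.3, 61.7]            # the two self-reported MEGA MLQA scores
print(f"{sum(scores) / len(scores):.1f}%")  # 63.5%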
Leaderboard
2 models ranked by performance on MEGA MLQA
#1: 65.3% (MIT, Aug 23, 2024)
#2: 61.7% (MIT, Aug 23, 2024)
Resources