ARC-E
Modality: text
About
ARC-E (ARC Easy) is the easy split of the AI2 Reasoning Challenge (ARC), a multiple-choice benchmark of grade-school science questions that tests fundamental knowledge and basic reasoning skills. It serves as an entry-level evaluation: a check that a model has the essential scientific understanding expected before it is measured on harder assessments such as the ARC Challenge split. The benchmark focuses on simple inference and basic scientific comprehension that most competent AI systems should master.
Evaluation Stats
Total Models: 6
Organizations: 1
Verified Results: 0
Self-Reported: 6
Benchmark Details
Max Score: 1
Language: en
Performance Overview
Score distribution and top performers

Score Distribution: 6 models
Top Score: 88.6%
Average Score: 81.9%
High Performers (80%+): 4

Top Organizations
#1 Google: 6 models, 81.9% average
Leaderboard
6 models ranked by performance on ARC-E
Date | License | Score
---|---|---
Jun 27, 2024 | Gemma | 88.6%
Jun 27, 2024 | Gemma | 88.0%
Jun 26, 2025 | Proprietary | 81.6%
May 20, 2025 | Gemma | 81.6%
Jun 26, 2025 | Proprietary | 75.8%
May 20, 2025 | Gemma | 75.8%
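The summary statistics above follow directly from the six leaderboard scores. A minimal sketch, using only the score values shown on this page (the variable names are illustrative, not part of any benchmark API):

```python
# Recompute the Performance Overview stats from the leaderboard scores above.
scores = [88.6, 88.0, 81.6, 81.6, 75.8, 75.8]

top_score = max(scores)                            # highest reported score
average = round(sum(scores) / len(scores), 1)      # mean across all 6 models
high_performers = sum(s >= 80.0 for s in scores)   # models scoring 80% or more

print(top_score, average, high_performers)  # → 88.6 81.9 4
```

This reproduces the Top Score (88.6%), Average Score (81.9%), and High Performers count (4) listed in the Performance Overview.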