InfiniteBench/En.MC
About
InfiniteBench En.MC is a long-context evaluation benchmark of English multiple-choice questions designed to test AI models' ability to process and reason over extremely long contexts exceeding 100,000 tokens. It evaluates sustained attention, long-range reasoning, and comprehension when handling extensive textual information.
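Multiple-choice benchmarks like En.MC are typically scored as plain accuracy: the fraction of questions whose predicted option matches the gold answer. A minimal sketch (the function name and the example prediction/answer lists are hypothetical, not from the benchmark's released data):

```python
def mc_accuracy(predictions, answers):
    """Fraction of multiple-choice questions answered correctly."""
    if not answers:
        return 0.0
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Hypothetical illustration: 19 of 30 correct gives roughly the 63.3%
# figure reported on this page.
preds = ["A"] * 19 + ["B"] * 11
gold = ["A"] * 19 + ["C"] * 11
print(round(mc_accuracy(preds, gold) * 100, 1))  # → 63.3
```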
Evaluation Stats
Total Models: 1
Organizations: 1
Verified Results: 0
Self-Reported: 1
Benchmark Details
Max Score: 1
Language: en
Performance Overview
Score distribution and top performers
Score Distribution: 1 model
Top Score: 63.3%
Average Score: 63.3%
High Performers (80%+): 0

Top Organizations
#1 Meta: 1 model, 63.3%
Leaderboard
1 model ranked by performance on InfiniteBench/En.MC
| Release Date | License | Score |
|---|---|---|
| Sep 25, 2024 | Llama 3.2 Community License | 63.3% |