LongFact Concepts
About
LongFact-Concepts is a long-form factuality benchmark that evaluates AI models' ability to generate accurate, detailed responses about conceptual topics. It measures the factual accuracy and reliability of extended explanations of abstract concepts across diverse domains that require comprehensive, factually correct long-form answers.
Evaluation Stats
Total Models: 1
Organizations: 1
Verified Results: 0
Self-Reported: 1
Benchmark Details
Max Score: 1
Language: en
Performance Overview
Score distribution and top performers
Score Distribution: 1 model
Top Score: 0.7%
Average Score: 0.7%
High Performers (80%+): 0

Top Organizations
#1 OpenAI: 1 model, 0.7%
Leaderboard
1 model ranked by performance on LongFact Concepts