LongFact Objects

About

LongFact-Objects is a long-form factuality benchmark that evaluates AI models' ability to generate accurate, detailed responses about concrete objects and entities. It tests whether a model can produce truthful, comprehensive descriptions of real-world objects while maintaining factual consistency throughout a lengthy response.
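In practice the benchmark is run by prompting a model with each object-centric question, collecting its long-form answer, and then fact-checking that answer. Below is a minimal sketch of the response-collection step, assuming the prompts are stored one JSON object per line with a "prompt" field; the file names and the generate_response() stub are illustrative assumptions, not the benchmark's official tooling.

```python
# Minimal sketch: collect long-form responses for LongFact-Objects-style prompts
# so they can be fact-checked afterwards. File names, the "prompt" field, and
# generate_response() are assumptions for illustration only.
import json


def load_prompts(path: str) -> list[str]:
    """Read one JSON object per line and pull out its prompt text."""
    prompts = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                prompts.append(json.loads(line)["prompt"])
    return prompts


def generate_response(prompt: str) -> str:
    """Stub for the model under evaluation; replace with a real API call."""
    return f"[long-form response to: {prompt}]"


def collect_responses(prompt_path: str, out_path: str) -> None:
    """Generate a long-form answer per prompt and save prompt/response pairs."""
    records = [
        {"prompt": p, "response": generate_response(p)}
        for p in load_prompts(prompt_path)
    ]
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2)


if __name__ == "__main__":
    # Hypothetical file names for illustration.
    collect_responses("longfact_objects_prompts.jsonl",
                      "longfact_objects_responses.json")
```

The saved prompt/response pairs can then be passed to whatever fact-checking procedure is used to produce the scores reported below.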

Evaluation Stats
Total Models: 1
Organizations: 1
Verified Results: 0
Self-Reported: 1
Benchmark Details
Max Score: 1
Language: en
Performance Overview
Score distribution and top performers

Score Distribution: 1 model
Top Score: 0.8%
Average Score: 0.8%
High Performers (80%+): 0

Top Organizations
#1 OpenAI: 1 model, 0.8%
Leaderboard
1 model ranked by performance on LongFact Objects

#1 (OpenAI)
Score: 0.8%
Date: Aug 7, 2025
License: Proprietary
Resources