DS-FIM-Eval
About
DS-FIM-Eval is a specialized benchmark that evaluates AI models' Fill-in-the-Middle (FIM) code completion capabilities. Developed by DeepSeek, the benchmark tests a model's ability to complete a missing code segment when given the surrounding prefix and suffix context, simulating real-world IDE coding scenarios. DS-FIM-Eval measures how well a model understands the surrounding code, how accurate its completions are, and how useful it is as a practical coding assistant.
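FIM completion is typically driven by a prompt that interleaves the code before and after the gap with sentinel tokens, and the model is asked to generate the missing middle. The sketch below is a minimal illustration of that idea; the <PRE>/<SUF>/<MID> tokens and the build_fim_prompt helper are illustrative assumptions, not DS-FIM-Eval's actual prompt format, which this page does not publish.

```python
# Illustrative only: generic sentinel tokens are assumed here; DS-FIM-Eval's
# real prompt template and special tokens are not described on this page.

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt from the code surrounding the gap."""
    return f"<PRE>{prefix}<SUF>{suffix}<MID>"

# Example: ask the model to fill in the body of a function a user is editing.
prefix = "def mean(values):\n    total = sum(values)\n    "
suffix = "\n    return result\n"
prompt = build_fim_prompt(prefix, suffix)
# A plausible correct completion for the gap: "result = total / len(values)"
print(prompt)
```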
Evaluation Stats
Total Models: 1
Organizations: 1
Verified Results: 0
Self-Reported: 1
Benchmark Details
Max Score: 1
Language: en
Performance Overview
Score distribution and top performers
Score Distribution: 1 model
Top Score: 78.3%
Average Score: 78.3%
High Performers (80%+): 0
Top Organizations: #1 DeepSeek (1 model, 78.3%)
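The page reports scores on a 0 to 1 scale (shown as percentages) but does not state how a score is computed. Assuming, purely for illustration, that the score is the fraction of completions that exactly match the reference middle segment, a minimal aggregation sketch might look like the following; exact_match_score and the toy data are hypothetical.

```python
# Hypothetical scoring sketch: assumes the benchmark score is exact-match
# accuracy over (prediction, reference) pairs, normalized to a max score of 1.

def exact_match_score(predictions: list[str], references: list[str]) -> float:
    """Fraction of predicted middle segments that exactly match the reference."""
    assert len(predictions) == len(references)
    matches = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return matches / len(references)

# Toy example: 3 of 4 completions match, giving 0.75 (reported as 75.0%).
preds = ["return a + b", "x += 1", "pass", "print(x)"]
refs = ["return a + b", "x += 1", "continue", "print(x)"]
print(f"{exact_match_score(preds, refs):.1%}")
```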
Leaderboard
1 model ranked by performance on DS-FIM-Eval
Model | Date | Organization | Score | License | Links
---|---|---|---|---|---
 | May 8, 2024 | deepseek | 78.3% | |