DeepSeek R1 Distill Qwen 7B
by DeepSeek
About
DeepSeek-R1-Distill-Qwen-7B was developed as a compact distilled model, designed to provide reasoning capabilities in an efficient 7B parameter package. Built to extend analytical AI to a broad range of applications, it balances capability with the practical benefits of a smaller model size.
Timeline
Announced: Jan 20, 2025
Released: Jan 20, 2025
Specifications
Training Tokens: 14.8T
License & Family
License: MIT
Performance Overview
Performance metrics and category breakdown
Overall Performance (4 benchmarks)
Average Score: 65.7%
Best Score: 92.8%
High Performers (80%+): 2
All Benchmark Results for DeepSeek R1 Distill Qwen 7B
Complete list of benchmark scores with detailed information.

| Benchmark | Modality | Score | Percentage | Source |
|---|---|---|---|---|
| MATH-500 | text | 0.93 | 92.8% | Self-reported |
| AIME 2024 | text | 0.83 | 83.3% | Self-reported |
| GPQA | text | 0.49 | 49.1% | Self-reported |
| LiveCodeBench | text | 0.38 | 37.6% | Self-reported |
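The summary figures in the Performance Overview can be reproduced directly from the four self-reported benchmark percentages. A minimal sketch (variable names are illustrative, not part of any published tooling):

```python
# Self-reported benchmark percentages from the table above.
scores = {
    "MATH-500": 92.8,
    "AIME 2024": 83.3,
    "GPQA": 49.1,
    "LiveCodeBench": 37.6,
}

# Average over the 4 benchmarks.
average = sum(scores.values()) / len(scores)

# Best single benchmark score.
best = max(scores.values())

# Count of benchmarks at or above the 80% threshold.
high_performers = sum(1 for s in scores.values() if s >= 80.0)

print(f"Average: {average:.1f}%")          # 65.7%
print(f"Best: {best:.1f}%")                # 92.8%
print(f"High performers: {high_performers}")  # 2
```

This confirms the reported 65.7% average, 92.8% best score, and the two benchmarks (MATH-500 and AIME 2024) clearing the 80% bar.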