DeepSeek-V3 0324
Zero-eval
by DeepSeek
About
DeepSeek-V3-0324 is a release iteration of DeepSeek-V3 that incorporates ongoing improvements and refinements. Built for improved stability and performance based on deployment learnings, it continues the evolution of the DeepSeek-V3 architecture.
Pricing Range
Input (per 1M): $0.28 - $0.28
Output (per 1M): $1.14 - $1.14
Providers: 1
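
As a rough illustration of what these per-token rates imply in practice, the sketch below estimates the cost of a single request from the listed prices; the token counts in the example are hypothetical.

```python
# Hypothetical cost estimate based on the listed DeepSeek-V3 0324 rates.
INPUT_PRICE_PER_M = 0.28   # USD per 1M input tokens (from the pricing range above)
OUTPUT_PRICE_PER_M = 1.14  # USD per 1M output tokens (from the pricing range above)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 4,000-token prompt with a 1,000-token completion (hypothetical sizes).
print(f"${estimate_cost(4_000, 1_000):.6f}")  # ~$0.002260
```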
Timeline
Announced: Mar 25, 2025
Released: Mar 25, 2025
Specifications
Training Tokens: 14.8T
License & Family
License: MIT + Model License (commercial use allowed)
Performance Overview
Performance metrics and category breakdown
Overall Performance (5 benchmarks)
Average Score: 70.4%
Best Score: 94.0%
High Performers (80%+): 2

Performance Metrics
Max Context Window: 327.7K
All Benchmark Results for DeepSeek-V3 0324
Complete list of benchmark scores with detailed information
| Benchmark | Modality | Score | Percentage | Source |
|---|---|---|---|---|
| MATH-500 | text | 0.94 | 94.0% | Self-reported |
| MMLU-Pro | text | 0.81 | 81.2% | Self-reported |
| GPQA | text | 0.68 | 68.4% | Self-reported |
| AIME 2024 | text | 0.59 | 59.4% | Self-reported |
| LiveCodeBench | text | 0.49 | 49.2% | Self-reported |
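
For reference, the summary figures in the Performance Overview can be reproduced from the five self-reported scores in this table; the snippet below is a minimal check using those values.

```python
# Recompute the summary figures from the five self-reported benchmark scores above.
scores = {
    "MATH-500": 94.0,
    "MMLU-Pro": 81.2,
    "GPQA": 68.4,
    "AIME 2024": 59.4,
    "LiveCodeBench": 49.2,
}

average = sum(scores.values()) / len(scores)
best = max(scores.values())
high_performers = [name for name, s in scores.items() if s >= 80.0]

print(f"Average Score: {average:.1f}%")                    # 70.4%
print(f"Best Score: {best:.1f}%")                          # 94.0%
print(f"High Performers (80%+): {len(high_performers)}")   # 2
```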