Comprehensive side-by-side LLM comparison
DeepSeek R1 Zero leads with a 29.7% higher average benchmark score. Overall, DeepSeek R1 Zero is the stronger choice for reasoning tasks, given its lead on the math and science benchmarks below.
DeepSeek R1 Distill Qwen 1.5B is a language model developed by DeepSeek. The model shows competitive results across 4 benchmarks, notably MATH-500 (83.9%), AIME 2024 (52.7%), and GPQA (33.8%). It is licensed for commercial use, making it suitable for enterprise applications. Released in 2025, it represents DeepSeek's latest advancement in AI technology.
DeepSeek R1 Zero is a language model developed by DeepSeek. It achieves strong performance with an average score of 76.5% across 4 benchmarks, excelling particularly in MATH-500 (95.9%), AIME 2024 (86.7%), and GPQA (73.3%). It is likewise licensed for commercial use and was released in 2025.
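The headline gap can be sanity-checked with a little arithmetic. A minimal sketch, assuming the stated 29.7% is a relative difference between the two models' average benchmark scores (an assumption on our part; the page could also mean percentage points):

```python
# Sketch: what the distilled model's average would have to be if
# DeepSeek R1 Zero's 76.5% average is 29.7% higher in relative terms.
# The 1.297 factor is an assumption, not a figure from the page.

r1_zero_avg = 76.5  # stated average across 4 benchmarks

# Solve r1_zero_avg = distill_avg * 1.297 for distill_avg.
distill_avg = r1_zero_avg / 1.297

# Recompute the relative gap to confirm the round trip.
relative_gap = (r1_zero_avg - distill_avg) / distill_avg * 100

print(f"Implied distilled-model average: {distill_avg:.1f}%")  # ~59.0%
print(f"Relative gap: {relative_gap:.1f}%")                    # 29.7%
```

Under a percentage-point reading the implied average would instead be 76.5 - 29.7 = 46.8%, so the two interpretations differ materially; the page does not say which it means.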
Launched on the same date:
DeepSeek R1 Distill Qwen 1.5B (DeepSeek): 2025-01-20
DeepSeek R1 Zero (DeepSeek): 2025-01-20
[Chart: average performance across 4 common benchmarks, comparing DeepSeek R1 Distill Qwen 1.5B and DeepSeek R1 Zero]
[Table: available providers and their performance metrics for DeepSeek R1 Distill Qwen 1.5B and DeepSeek R1 Zero]