Comprehensive side-by-side LLM comparison
DeepSeek R1 Distill Qwen 7B leads with an 18.9% higher average benchmark score, making it the stronger choice overall for coding tasks.
DeepSeek-R1-Distill-Qwen-1.5B was created through distillation into an ultra-compact Qwen architecture, designed to enable reasoning capabilities on resource-constrained devices. Built with just 1.5 billion parameters, it brings advanced analytical techniques to edge computing and mobile scenarios.
DeepSeek-R1-Distill-Qwen-7B was developed as a compact distilled model, designed to provide reasoning capabilities in an efficient 7B parameter package. Built to extend analytical AI to a broad range of applications, it balances capability with the practical benefits of a smaller model size.
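For a concrete sense of how either distilled model might be run locally, here is a minimal sketch using the Hugging Face transformers library and the publicly listed model ID deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B (the 7B variant uses an analogous ID). The sampling settings and hardware assumptions are illustrative, not the vendor's recommended configuration.

```python
# Minimal sketch: running a distilled DeepSeek R1 model locally with Hugging Face transformers.
# Assumes the model ID below is available on the Hugging Face Hub and that a GPU (or enough
# RAM for CPU inference) is present; generation parameters are illustrative, not official defaults.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # 7B variant: DeepSeek-R1-Distill-Qwen-7B

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, temperature=0.6, do_sample=True)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same script works for the 7B model by swapping the model ID, at the cost of roughly four to five times the memory footprint.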
Launched on the same date: both DeepSeek R1 Distill Qwen 1.5B and DeepSeek R1 Distill Qwen 7B were released by DeepSeek on 2025-01-20.
Average performance across 4 common benchmarks (chart comparing DeepSeek R1 Distill Qwen 1.5B and DeepSeek R1 Distill Qwen 7B).
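To make the headline figure above concrete, a lead of this kind is typically computed from each model's mean score across the listed benchmarks. The per-benchmark numbers from the chart are not reproduced here, so the sketch below uses made-up placeholder scores, and it assumes the 18.9% is a relative difference rather than an absolute gap in percentage points.

```python
# Sketch of how an "X% higher average benchmark score" headline is typically derived.
# The scores below are placeholder values for illustration only, NOT the actual
# benchmark results behind the 18.9% figure quoted above.
scores_1_5b = [30.0, 45.0, 50.0, 55.0]  # hypothetical per-benchmark scores, 1.5B model
scores_7b = [40.0, 55.0, 58.0, 61.0]    # hypothetical per-benchmark scores, 7B model

avg_1_5b = sum(scores_1_5b) / len(scores_1_5b)
avg_7b = sum(scores_7b) / len(scores_7b)

# Percent by which the 7B average exceeds the 1.5B average (relative difference).
relative_lead = (avg_7b - avg_1_5b) / avg_1_5b * 100
print(f"1.5B average: {avg_1_5b:.1f}, 7B average: {avg_7b:.1f}, lead: {relative_lead:.1f}%")
```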
Available providers and their performance metrics (table listing hosting providers and their metrics for DeepSeek R1 Distill Qwen 1.5B and DeepSeek R1 Distill Qwen 7B).