Comprehensive side-by-side LLM comparison
DeepSeek R1 Distill Qwen 1.5B leads with a 6.4% higher average benchmark score, while Gemma 3n E2B Instructed LiteRT (Preview) supports multimodal inputs. Overall, DeepSeek R1 Distill Qwen 1.5B is the stronger choice for coding tasks.
DeepSeek-R1-Distill-Qwen-1.5B was created through distillation into an ultra-compact Qwen architecture, designed to enable reasoning capabilities on resource-constrained devices. Built with just 1.5 billion parameters, it brings advanced analytical techniques to edge computing and mobile scenarios.
Gemma 3n E2B Instructed LiteRT (Preview) was introduced as an experimental version optimized for LiteRT deployment, designed to push the boundaries of on-device AI. Built to demonstrate the potential of running instruction-tuned models on mobile and edge devices, it represents ongoing efforts to make AI more accessible across hardware platforms.
Release dates
DeepSeek R1 Distill Qwen 1.5B (DeepSeek): 2025-01-20
Gemma 3n E2B Instructed LiteRT (Preview): 2025-05-20 (4 months newer)
Average performance across 2 common benchmarks
[Benchmark chart comparing DeepSeek R1 Distill Qwen 1.5B and Gemma 3n E2B Instructed LiteRT (Preview); per-model scores not preserved in this export]
Gemma 3n E2B Instructed LiteRT (Preview): 2024-06-01
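For clarity on how the headline figure is derived: an "average benchmark score" is simply the mean of a model's scores across the benchmarks compared, and the lead is the difference between the two means. The sketch below uses hypothetical placeholder scores (not the actual benchmark results behind this comparison) chosen only to illustrate the arithmetic.

```python
def average(scores):
    """Mean of a collection of benchmark scores."""
    scores = list(scores)
    return sum(scores) / len(scores)

# Hypothetical scores for illustration only -- benchmark names and
# values are assumptions, not data from this comparison.
deepseek_scores = {"Benchmark A": 50.0, "Benchmark B": 70.0}
gemma_scores = {"Benchmark A": 47.2, "Benchmark B": 60.0}

deepseek_avg = average(deepseek_scores.values())  # 60.0
gemma_avg = average(gemma_scores.values())        # 53.6

# Lead expressed in percentage points, as in the summary above.
lead = deepseek_avg - gemma_avg
print(f"Average lead: {lead:.1f} points")
```

With these placeholder numbers the lead works out to 6.4 points; the real comparison averages the two models' actual scores on the two shared benchmarks in the same way.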
Available providers and their performance metrics
[Provider table for DeepSeek R1 Distill Qwen 1.5B and Gemma 3n E2B Instructed LiteRT (Preview); per-provider metrics not preserved in this export]