Comprehensive side-by-side LLM comparison
Gemini 2.0 Flash-Lite leads with a 23.0% higher average benchmark score. Overall, Gemini 2.0 Flash-Lite is the stronger choice for coding tasks.
Gemini 2.0 Flash Lite was created as an even more efficient variant of Gemini 2.0 Flash, designed for applications where minimal latency and maximum cost-effectiveness are essential. Built to bring next-generation multimodal capabilities to resource-constrained deployments, it optimizes for speed and affordability.
Gemma 3N E2B IT LiteRT Preview was introduced as an experimental version optimized for LiteRT deployment, designed to push the boundaries of on-device AI. Built to demonstrate the potential of running instruction-tuned models on mobile and edge devices, it represents ongoing efforts to make AI more accessible across hardware platforms.
Release dates

Gemini 2.0 Flash-Lite: 2025-02-05
Gemma 3n E2B Instructed LiteRT (Preview): 2025-05-20 (3 months newer)
Context window and performance specifications

[Chart: average performance across 5 common benchmarks, Gemini 2.0 Flash-Lite vs. Gemma 3n E2B Instructed LiteRT (Preview)]
Knowledge cutoff

Gemini 2.0 Flash-Lite: 2024-06-01
Gemma 3n E2B Instructed LiteRT (Preview): 2024-06-01
Available providers and their performance metrics

Gemini 2.0 Flash-Lite

Gemma 3n E2B Instructed LiteRT (Preview)