Comprehensive side-by-side LLM comparison
Gemini 2.0 Flash Thinking leads with a 49.4% higher average benchmark score than Gemma 3n E2B Instructed LiteRT (Preview). Overall, Gemini 2.0 Flash Thinking is the stronger choice for coding tasks.
Gemini 2.0 Flash Thinking was developed to bring extended reasoning capabilities to the Flash family, combining quick response times with deeper analytical processing. Built for tasks that require both speed and thoughtful problem-solving, it bridges the gap between fast inference and reasoning-enhanced models.
Gemma 3n E2B Instructed LiteRT (Preview) was introduced as an experimental release optimized for LiteRT deployment, designed to push the boundaries of on-device AI. Built to demonstrate the potential of running instruction-tuned models on mobile and edge devices, it represents ongoing efforts to make AI more accessible across hardware platforms.
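For a sense of what LiteRT deployment looks like in practice, the Kotlin sketch below loads a Gemma task bundle on Android through Google's MediaPipe LLM Inference API, one common way to run such bundles on-device. The file name and device path are placeholders, and the exact packaging of the E2B preview may differ; treat this as a minimal illustration under those assumptions rather than official setup instructions.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Minimal on-device inference sketch. Assumptions: the MediaPipe LLM Inference
// dependency is on the classpath and a Gemma .task bundle has already been
// pushed to the device at the hypothetical path below.
fun runGemmaOnDevice(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma-3n-e2b-it.task") // placeholder path
        .setMaxTokens(512) // combined budget for prompt + response tokens
        .build()

    val llm = LlmInference.createFromOptions(context, options)
    return try {
        llm.generateResponse(prompt) // single synchronous generation
    } finally {
        llm.close() // release native resources when done
    }
}
```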
Release dates: Gemini 2.0 Flash Thinking was released on 2025-01-21, while Gemma 3n E2B Instructed LiteRT (Preview) arrived on 2025-05-20, making it 3 months newer.
Across the 1 common benchmark, Gemini 2.0 Flash Thinking posts the higher average score, consistent with the 49.4% lead noted above.

Knowledge cutoffs are 2024-06-01 for Gemma 3n E2B Instructed LiteRT (Preview) and 2024-08-01 for Gemini 2.0 Flash Thinking.
Available providers and their performance metrics