
Gemma 3n E2B Instructed LiteRT (Preview)
Multimodal
Zero-eval
by Google
About
Gemma 3n E2B Instructed LiteRT (Preview) is a multimodal language model developed by Google. The model shows competitive results across 28 benchmarks; notable strengths include PIQA (78.9%), BoolQ (76.4%), and ARC-E (75.8%). As a multimodal model, it can process and understand text, images, and other input formats. It is licensed for commercial use, making it suitable for enterprise applications. Released in 2025, it represents Google's latest advancement in on-device AI.
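Since the preview ships in LiteRT, Google's on-device inference format (formerly TensorFlow Lite), a natural way to try it is through the MediaPipe LLM Inference API on Android. The sketch below is a minimal, hypothetical example: the model file name, device path, and option values are assumptions rather than details from this page, and only text prompting is shown.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Minimal sketch, assuming the MediaPipe LLM Inference API and a locally
// pushed Gemma 3n E2B LiteRT bundle. The file name and option values are
// illustrative, not taken from this page.
fun runGemma3n(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma-3n-e2b-it.task") // assumed path
        .setMaxTokens(512)                                        // assumed limit
        .build()

    // Create the inference engine and run a single synchronous generation.
    val llm = LlmInference.createFromOptions(context, options)
    return llm.generateResponse(prompt)
}
```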
Timeline
Announced: May 20, 2025
Released: May 20, 2025
Knowledge Cutoff: Jun 1, 2024
Specifications
Capabilities: Multimodal
License & Family
License: Gemma
Performance Overview
Performance metrics and category breakdown
Overall Performance (28 benchmarks)
Average Score: 43.9%
Best Score: 78.9%
High Performers (80%+): 0
All Benchmark Results for Gemma 3n E2B Instructed LiteRT (Preview)
Complete list of benchmark scores with detailed information
Benchmark | Modality | Score | Score (%) | Source
PIQA | text | 0.79 | 78.9% | Self-reported
BoolQ | text | 0.76 | 76.4% | Self-reported
ARC-E | text | 0.76 | 75.8% | Self-reported
HellaSwag | text | 0.72 | 72.2% | Self-reported
Winogrande | text | 0.67 | 66.8% | Self-reported
HumanEval | text | 0.67 | 66.5% | Self-reported
TriviaQA | text | 0.61 | 60.8% | Self-reported
MMLU | text | 0.60 | 60.1% | Self-reported
Global-MMLU-Lite | text | 0.59 | 59.0% | Self-reported
MBPP | text | 0.57 | 56.6% | Self-reported
Showing the top 10 of 28 benchmark results.
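The summary figures in the Performance Overview are simple aggregates over the per-benchmark scores. A minimal sketch of that aggregation, using only the 10 scores listed above (so the computed average will not match the 43.9% reported over all 28 benchmarks):

```kotlin
// Sketch of the Performance Overview aggregates: average score, best score,
// and the count of benchmarks at 80% or above. Only the 10 benchmarks shown
// on this page are included here.
data class Benchmark(val name: String, val scorePercent: Double)

fun main() {
    val shown = listOf(
        Benchmark("PIQA", 78.9),
        Benchmark("BoolQ", 76.4),
        Benchmark("ARC-E", 75.8),
        Benchmark("HellaSwag", 72.2),
        Benchmark("Winogrande", 66.8),
        Benchmark("HumanEval", 66.5),
        Benchmark("TriviaQA", 60.8),
        Benchmark("MMLU", 60.1),
        Benchmark("Global-MMLU-Lite", 59.0),
        Benchmark("MBPP", 56.6),
    )

    val average = shown.map { it.scorePercent }.average()
    val best = shown.maxOf { it.scorePercent }
    val highPerformers = shown.count { it.scorePercent >= 80.0 }

    println("Average score: %.1f%%".format(average))
    println("Best score: %.1f%%".format(best))
    println("High performers (80%+): $highPerformers")
}
```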