Gemma 3n E4B Instructed LiteRT Preview
by Google

Tags: Multimodal, Zero-eval
Rankings: #2 Global-MMLU, #2 Codegolf v2.2, #3 WMT24++

About

Gemma 3n E4B Instructed LiteRT Preview is a multimodal language model developed by Google and released in May 2025. It posts competitive results across 28 benchmarks, scoring highest on BoolQ (81.6%), ARC-E (81.6%), and PIQA (81.0%). As a multimodal model, it can process text, images, and other input formats. The LiteRT Preview packaging targets on-device inference with Google's LiteRT (formerly TensorFlow Lite) runtime, and the Gemma license permits commercial use, making the model suitable for enterprise applications.
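
As a rough illustration of that on-device path, the sketch below loads a LiteRT-format Gemma bundle through the MediaPipe LLM Inference API on Android and runs a single text prompt. The model path, file name, and token limit are placeholder assumptions, not values taken from this page.

    // Minimal sketch, assuming a Gemma 3n E4B LiteRT bundle has already been
    // pushed to the device. Path and options are illustrative, not official.
    import android.content.Context
    import com.google.mediapipe.tasks.genai.llminference.LlmInference

    fun runGemmaOnDevice(context: Context, prompt: String): String {
        val options = LlmInference.LlmInferenceOptions.builder()
            .setModelPath("/data/local/tmp/llm/gemma-3n-e4b-it.task") // hypothetical location
            .setMaxTokens(512)                                        // assumed output budget
            .build()

        // Build the on-device inference engine and generate a response for one prompt.
        val llm = LlmInference.createFromOptions(context, options)
        return llm.generateResponse(prompt)
    }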

Timeline
Announced: May 20, 2025
Released: May 20, 2025
Knowledge Cutoff: Jun 1, 2024
Specifications
Capabilities: Multimodal
License & Family
License: Gemma
Performance Overview
Performance metrics and category breakdown

Overall Performance (28 benchmarks)
Average Score: 50.3%
Best Score: 81.6%
High Performers (80%+): 3
All Benchmark Results for Gemma 3n E4B Instructed LiteRT Preview
Complete list of benchmark scores with detailed information
Benchmark           Type   Score   Percent   Source
BoolQ               text   0.82    81.6%     Self-reported
ARC-E               text   0.82    81.6%     Self-reported
PIQA                text   0.81    81.0%     Self-reported
HellaSwag           text   0.79    78.6%     Self-reported
HumanEval           text   0.75    75.0%     Self-reported
Winogrande          text   0.72    71.7%     Self-reported
TriviaQA            text   0.70    70.2%     Self-reported
MMLU                text   0.65    64.9%     Self-reported
Global-MMLU-Lite    text   0.65    64.5%     Self-reported
MBPP                text   0.64    63.6%     Self-reported
Showing 10 of 28 benchmarks.
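
To make the arithmetic behind the Overall Performance figures concrete, the sketch below recomputes an average, a best score, and the 80%+ count from per-benchmark percentages. It uses only the 10 scores listed above rather than the full 28, so its average (73.3%) is higher than the reported 50.3%; because the table is sorted by score, the best score and the 80%+ count still come out the same as the summary.

    // Minimal sketch of the summary arithmetic, using only the 10 benchmark
    // percentages listed on this page (the remaining 18 are not shown here).
    fun main() {
        val scores = mapOf(
            "BoolQ" to 81.6, "ARC-E" to 81.6, "PIQA" to 81.0, "HellaSwag" to 78.6,
            "HumanEval" to 75.0, "Winogrande" to 71.7, "TriviaQA" to 70.2,
            "MMLU" to 64.9, "Global-MMLU-Lite" to 64.5, "MBPP" to 63.6
        )

        val average = scores.values.average()                    // 73.3% over these 10, not 50.3%
        val best = scores.values.maxOrNull() ?: 0.0               // 81.6%
        val highPerformers = scores.values.count { it >= 80.0 }   // 3 benchmarks at 80% or above

        println("Average: ${"%.1f".format(average)}%")
        println("Best: $best%")
        println("High performers (80%+): $highPerformers")
    }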