Gemma 3 1B
by Google
About

Gemma 3 1B is a language model developed by Google. It shows competitive results across 18 benchmarks, performing strongest on IFEval (80.2%), GSM8k (62.8%), and Natural2Code (56.0%). The license permits commercial use, making it suitable for enterprise applications. Released in March 2025, it is part of Google's Gemma 3 family.
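As an illustration of how the model might be run locally, the snippet below is a minimal sketch using the Hugging Face transformers text-generation pipeline. The model ID google/gemma-3-1b-it (the instruction-tuned variant) and the prompt are assumptions, and gated access requires accepting the Gemma license on Hugging Face.

# Minimal sketch: running Gemma 3 1B with the Hugging Face transformers pipeline.
# The model ID "google/gemma-3-1b-it" is an assumption (instruction-tuned variant).
from transformers import pipeline

generator = pipeline("text-generation", model="google/gemma-3-1b-it")
result = generator("Summarize the Gemma license in one sentence.", max_new_tokens=64)
print(result[0]["generated_text"])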

Timeline
Announced: Mar 12, 2025
Released: Mar 12, 2025
Specifications
Training Tokens: 2.0T
License & Family
License: Gemma
Performance Overview
Performance metrics and category breakdown

Overall Performance

Benchmarks: 18
Average Score: 29.9%
Best Score: 80.2%
High Performers (80%+): 1
All Benchmark Results for Gemma 3 1B
Complete list of benchmark scores with detailed information
Benchmark           Modality  Score   Source
IFEval              text      80.2%   Self-reported
GSM8k               text      62.8%   Self-reported
Natural2Code        text      56.0%   Self-reported
MATH                text      48.0%   Self-reported
HumanEval           text      41.5%   Self-reported
BIG-Bench Hard      text      39.1%   Self-reported
FACTS Grounding     text      36.4%   Self-reported
WMT24++             text      35.9%   Self-reported
MBPP                text      35.2%   Self-reported
Global-MMLU-Lite    text      34.2%   Self-reported
Showing 1 to 10 of 18 benchmarks
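The overview figures above (average, best score, high-performer count) are simple aggregates over the per-benchmark scores. The sketch below is for illustration only and uses just the ten scores listed on this page, so its average will not match the 18-benchmark value of 29.9%.

# Sketch: computing the overview statistics from per-benchmark scores.
# Only the ten scores listed above are included, so the average here
# differs from the page's 18-benchmark figure.
scores = {
    "IFEval": 80.2, "GSM8k": 62.8, "Natural2Code": 56.0, "MATH": 48.0,
    "HumanEval": 41.5, "BIG-Bench Hard": 39.1, "FACTS Grounding": 36.4,
    "WMT24++": 35.9, "MBPP": 35.2, "Global-MMLU-Lite": 34.2,
}
average = sum(scores.values()) / len(scores)
best = max(scores.values())
high_performers = sum(1 for s in scores.values() if s >= 80.0)
print(f"Average: {average:.1f}%  Best: {best:.1f}%  High performers (80%+): {high_performers}")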