Gemini 1.0 Pro
by Google

Zero-eval
#1 BIG-Bench

About

Gemini 1.0 Pro is a proprietary language model developed by Google. Across the 9 benchmarks listed below it averages 48.4%, with its strongest results on BIG-Bench (75.0%), MMLU (71.8%), and WMT23 (71.7%). Released in February 2024, it is available through 1 API provider.
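
Access is through the provider's API. The snippet below is a minimal sketch of one request, assuming the google-generativeai Python SDK and the "gemini-1.0-pro" model id; both should be checked against the provider's current documentation.

import google.generativeai as genai

# Assumes an API key issued by the provider; the value here is a placeholder.
genai.configure(api_key="YOUR_API_KEY")

# "gemini-1.0-pro" is assumed to be the model id exposed by the API.
model = genai.GenerativeModel("gemini-1.0-pro")
response = model.generate_content("Summarize this model card in one sentence.")
print(response.text)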

Pricing Range
Input (per 1M tokens): $0.50 - $0.50
Output (per 1M tokens): $1.50 - $1.50
Providers: 1
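
At these rates, per-request cost is a simple linear function of token counts. The short Python sketch below shows the arithmetic; the token counts in the example are hypothetical.

def estimate_cost_usd(input_tokens, output_tokens):
    # Rates from the pricing range above: $0.50 per 1M input tokens,
    # $1.50 per 1M output tokens.
    return input_tokens / 1_000_000 * 0.50 + output_tokens / 1_000_000 * 1.50

# Hypothetical request: 2,000 input tokens and 500 output tokens.
print(f"${estimate_cost_usd(2_000, 500):.5f}")  # $0.00175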
Timeline
Announced: Feb 15, 2024
Released: Feb 15, 2024
Knowledge Cutoff: Feb 1, 2024
License & Family
License: Proprietary
Performance Overview
Performance metrics and category breakdown

Overall Performance (9 benchmarks)
Average Score: 48.4%
Best Score: 75.0%
High Performers (80%+): 0

Performance Metrics
Max Context Window: 41.0K tokens
Avg Throughput: 120.0 tok/s
Avg Latency: 0 ms
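
Given the 120.0 tok/s average throughput above, a rough lower bound on generation time is output tokens divided by throughput. The sketch below shows the calculation; the 600-token completion is a hypothetical example, and real requests add network and time-to-first-token overhead.

def estimate_generation_seconds(output_tokens, tokens_per_second=120.0):
    # 120.0 tok/s is the average throughput listed above; treat the result
    # as a floor, since it ignores network and time-to-first-token overhead.
    return output_tokens / tokens_per_second

print(estimate_generation_seconds(600))  # 5.0 seconds for a 600-token completion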
All Benchmark Results for Gemini 1.0 Pro
Complete list of benchmark scores with detailed information
Benchmark    Modality     Score   Percentage   Verification
BIG-Bench    text         0.75    75.0%        Unverified
MMLU         text         0.72    71.8%        Self-reported
WMT23        text         0.72    71.7%        Unverified
EgoSchema    video        0.56    55.7%        Self-reported
MMMU         multimodal   0.48    47.9%        Unverified
MathVista    multimodal   0.47    46.6%        Unverified
MATH         text         0.33    32.6%        Unverified
GPQA         text         0.28    27.9%        Unverified
FLEURS       audio        0.06    6.4%         Unverified