
Gemini 2.0 Flash
Multimodal
Zero-eval
#1 Natural2Code
#1 HiddenMath
#1 CoVoST2
+2 more
by Google
About
Gemini 2.0 Flash is a multimodal language model developed by Google. It achieves strong performance, with an average score of 66.7% across 13 benchmarks, and does particularly well on Natural2Code (92.9%), MATH (89.7%), and FACTS Grounding (83.6%). With a 1.1M-token context window, it can handle extensive documents and complex multi-turn conversations. The model is available through one API provider. As a multimodal model, it can process and understand text, images, and other input formats. Released in December 2024, it represents one of Google's latest advancements in AI technology.
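As a rough illustration of how the model is typically accessed, here is a minimal text-only call sketched with the google-generativeai Python SDK. The model identifier "gemini-2.0-flash" and the environment-variable name are assumptions, not taken from this page; check the provider's documentation for current usage.

```python
# Minimal sketch of a text-only call to Gemini 2.0 Flash.
# Assumptions (not taken from this page): the google-generativeai SDK,
# the model ID "gemini-2.0-flash", and the GOOGLE_API_KEY env variable.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-2.0-flash")

response = model.generate_content(
    "Summarize the trade-offs between context window size and cost in two sentences."
)
print(response.text)
```

The same generate_content call also accepts mixed inputs (for example an image plus a text instruction), which is how the multimodal capability is exercised in practice.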
Pricing Range
Input (per 1M tokens): $0.10 - $0.10
Output (per 1M tokens): $0.40 - $0.40
Providers: 1
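At these rates, per-request cost is simple arithmetic; the sketch below shows the calculation with invented token counts.

```python
# Cost estimate at the listed rates: $0.10 per 1M input tokens,
# $0.40 per 1M output tokens. Token counts below are invented examples.
INPUT_USD_PER_M = 0.10
OUTPUT_USD_PER_M = 0.40

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed per-token rates."""
    return (input_tokens * INPUT_USD_PER_M + output_tokens * OUTPUT_USD_PER_M) / 1_000_000

# A 50,000-token prompt with a 2,000-token response:
print(f"${request_cost(50_000, 2_000):.4f}")  # -> $0.0058
```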
Timeline
Announced: Dec 1, 2024
Released: Dec 1, 2024
Knowledge Cutoff: Aug 1, 2024
Specifications
Capabilities
Multimodal
License & Family
License
Proprietary
Performance Overview
Performance metrics and category breakdown
Overall Performance
Benchmarks: 13
Average Score: 66.7%
Best Score: 92.9%
High Performers (80%+): 3

Performance Metrics
Max Context Window: 1.1M tokens
Avg Throughput: 183.0 tok/s
Avg Latency: 0 ms
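The throughput figure translates into a rough generation-time estimate, as sketched below. This is a back-of-the-envelope calculation that ignores network overhead, prompt processing, and queueing, so treat the result as an order-of-magnitude figure only.

```python
# Rough generation-time estimate from the listed average throughput.
# Ignores network overhead and prompt processing; order-of-magnitude only.
AVG_THROUGHPUT_TOK_PER_S = 183.0

def est_generation_seconds(output_tokens: int) -> float:
    return output_tokens / AVG_THROUGHPUT_TOK_PER_S

print(f"{est_generation_seconds(1_000):.1f} s")  # -> 5.5 s for 1,000 output tokens
```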
All Benchmark Results for Gemini 2.0 Flash
Complete list of benchmark scores with detailed information
Benchmark | Modality | Score | Percentage | Source
Natural2Code | text | 0.93 | 92.9% | Self-reported
MATH | text | 0.90 | 89.7% | Self-reported
FACTS Grounding | text | 0.84 | 83.6% | Self-reported
MMLU-Pro | text | 0.76 | 76.4% | Self-reported
EgoSchema | video | 0.71 | 71.5% | Self-reported
MMMU | multimodal | 0.71 | 70.7% | Self-reported
MRCR | text | 0.69 | 69.2% | Self-reported
HiddenMath | text | 0.63 | 63.0% | Self-reported
GPQA | text | 0.62 | 62.1% | Self-reported
Bird-SQL (dev) | text | 0.57 | 56.9% | Self-reported
Showing 1 to 10 of 13 benchmarks
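For reference, the headline average is presumably an unweighted mean over all 13 benchmark percentages. The sketch below computes the mean of just the 10 rows shown, so it will not match the 66.7% figure.

```python
# Unweighted mean of the 10 benchmark percentages listed above.
# Three benchmarks are not shown here, so this does not reproduce
# the 66.7% headline average, which covers all 13.
shown = [92.9, 89.7, 83.6, 76.4, 71.5, 70.7, 69.2, 63.0, 62.1, 56.9]
print(f"{sum(shown) / len(shown):.1f}%")  # -> 73.6%
```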