
Gemini 2.5 Flash
Multimodal
Zero-eval
#2 FACTS Grounding
#2 LiveCodeBench v5
#3 Global-MMLU-Lite
+1 more
by Google
About
Gemini 2.5 Flash is a multimodal language model developed by Google. It achieves strong performance with an average score of 62.5% across 14 benchmarks, and it excels particularly in Global-MMLU-Lite (88.4%), AIME 2024 (88.0%), and FACTS Grounding (85.3%). With a 1.1M-token context window, it can handle extensive documents and complex multi-turn conversations. The model is available through 2 API providers. As a multimodal model, it can process and understand text, images, and other input formats. Released in 2025, it represents Google's latest advancement in AI technology.
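The model is reached through standard text/chat-generation APIs. As a rough illustration (not taken from this page), the sketch below shows how a single text prompt might be sent using Google's google-genai Python SDK; the model identifier string, client setup, and environment variable are assumptions, so check your provider's documentation before relying on them.

```python
# Illustrative sketch only: assumes the google-genai Python SDK
# (`pip install google-genai`) and an API key available in the environment.
# The model identifier "gemini-2.5-flash" is an assumption based on the
# model name shown on this page.
from google import genai

client = genai.Client()  # picks up the API key from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarize the trade-offs between latency and accuracy in LLM serving.",
)

print(response.text)  # generated text; images and files can also be passed in `contents`
```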
Pricing Range
Input (per 1M tokens): $0.30 - $0.30
Output (per 1M tokens): $2.50 - $2.50
Providers: 2
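At these rates, per-request cost is simple arithmetic on token counts. The sketch below is only an illustration using the prices listed above; actual billing depends on the provider (token counting, caching, and rounding rules are not modeled).

```python
# Rough cost estimate from the per-1M-token rates listed above
# ($0.30 input, $2.50 output). Illustrative only.
INPUT_USD_PER_M = 0.30
OUTPUT_USD_PER_M = 2.50

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimated request cost in USD from input/output token counts."""
    return (input_tokens * INPUT_USD_PER_M + output_tokens * OUTPUT_USD_PER_M) / 1_000_000

# Example: a 10k-token prompt with a 1k-token answer
print(f"${estimate_cost_usd(10_000, 1_000):.4f}")  # -> $0.0055
```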
Timeline
Announced: May 20, 2025
Released: May 20, 2025
Knowledge Cutoff: Jan 31, 2025
Specifications
Capabilities: Multimodal
License & Family
License: Proprietary
Performance Overview
Performance metrics and category breakdown
Overall Performance (14 benchmarks)
Average Score: 62.5%
Best Score: 88.4%
High Performers (80%+): 4

Performance Metrics
Max Context Window: 1.1M tokens
Avg Throughput: 85.0 tok/s
Avg Latency: 1 ms
All Benchmark Results for Gemini 2.5 Flash
Complete list of benchmark scores with detailed information
Benchmark | Modality | Score | Score (%) | Source
Global-MMLU-Lite | text | 0.88 | 88.4% | Self-reported
AIME 2024 | text | 0.88 | 88.0% | Self-reported
FACTS Grounding | text | 0.85 | 85.3% | Self-reported
GPQA | text | 0.83 | 82.8% | Self-reported
MMMU | multimodal | 0.80 | 79.7% | Self-reported
AIME 2025 | text | 0.72 | 72.0% | Self-reported
Vibe-Eval | multimodal | 0.65 | 65.4% | Self-reported
LiveCodeBench v5 | text | 0.64 | 63.9% | Self-reported
Aider-Polyglot | text | 0.62 | 61.9% | Self-reported
SWE-Bench Verified | text | 0.60 | 60.4% | Self-reported
Showing 1 to 10 of 14 benchmarks