
Gemini 1.5 Flash 8B
Multimodal · Zero-eval
#1 FLEURS · #3 XSTest · #3 WMT23
by Google
About
Gemini 1.5 Flash 8B is a multimodal language model developed by Google. Across 13 benchmarks it achieves an average score of 60.5%, performing best on XSTest (92.6%), FLEURS (86.4%), and Natural2Code (75.5%). With a 1.1M-token context window, it can handle extensive documents and complex multi-turn conversations, and as a multimodal model it processes text, image, audio, and video inputs. It is currently available through one API provider. Released in 2024, it represents one of Google's recent advances in AI technology.
Pricing Range
Input (per 1M tokens): $0.07
Output (per 1M tokens): $0.30
Providers: 1
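Given these rates, per-request cost is straightforward to estimate. A minimal sketch (prices taken from the table above; the token counts in the example are hypothetical):

```python
# Per-1M-token prices for Gemini 1.5 Flash 8B, from the pricing table above.
INPUT_PRICE_PER_M = 0.07
OUTPUT_PRICE_PER_M = 0.30

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 10,000-token prompt with a 1,000-token response.
cost = estimate_cost(10_000, 1_000)
print(f"${cost:.6f}")  # → $0.001000
```

At these prices, even a full 1.1M-token input costs well under ten cents, which is the main appeal of the 8B tier.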
Timeline
Announced: Mar 15, 2024
Released: Mar 15, 2024
Knowledge Cutoff: Oct 1, 2024
Specifications
Capabilities: Multimodal
License & Family
License: Proprietary
Performance Overview
Performance metrics and category breakdown
Overall Performance (13 benchmarks)
Average Score: 60.5%
Best Score: 92.6%
High Performers (80%+): 2

Performance Metrics
Max Context Window: 1.1M tokens
Avg Throughput: 150.0 tok/s
Avg Latency: 0ms
All Benchmark Results for Gemini 1.5 Flash 8B
Complete list of benchmark scores with detailed information
Benchmark | Modality | Score | Percentage | Source
XSTest | text | 0.93 | 92.6% | Self-reported
FLEURS | audio | 0.86 | 86.4% | Self-reported
Natural2Code | text | 0.76 | 75.5% | Self-reported
WMT23 | text | 0.73 | 72.6% | Self-reported
Video-MME | multimodal | 0.66 | 66.2% | Self-reported
MMLU-Pro | text | 0.59 | 58.7% | Self-reported
MATH | text | 0.59 | 58.7% | Self-reported
MRCR | text | 0.55 | 54.7% | Self-reported
MathVista | multimodal | 0.55 | 54.7% | Self-reported
MMMU | multimodal | 0.54 | 53.7% | Self-reported
Showing 1 to 10 of 13 benchmarks
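As a quick sanity check on the headline numbers, the mean of the ten scores shown above can be computed directly; combined with the reported 60.5% average over all 13 benchmarks, it also implies what the three unlisted scores must average:

```python
# Percentage scores for the 10 benchmarks listed in the table above.
shown = [92.6, 86.4, 75.5, 72.6, 66.2, 58.7, 58.7, 54.7, 54.7, 53.7]

mean_shown = sum(shown) / len(shown)
print(f"Mean of shown scores: {mean_shown:.1f}%")  # → 67.4%

# The page reports a 60.5% average over all 13 benchmarks, so the three
# unlisted benchmark scores must sum to 13 * 60.5 - sum(shown).
implied_mean_unlisted = (13 * 60.5 - sum(shown)) / 3
print(f"Implied mean of unlisted scores: {implied_mean_unlisted:.1f}%")  # → 37.6%
```

So the three benchmarks not shown on this page score substantially below the visible ones, which explains the gap between the 67.4% mean of the table and the 60.5% overall average.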