OpenAI

GPT-4o mini

Multimodal
Zero-eval

by OpenAI

About

GPT-4o mini was created as a smaller, more efficient variant of GPT-4o, designed to bring multimodal capabilities to applications that require faster response times and lower costs. Built to broaden access to advanced vision and text understanding, it lets developers build sophisticated applications with reduced resource requirements.

Pricing Range
Input (per 1M tokens): $0.15
Output (per 1M tokens): $0.60
Providers: 1
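Given the listed per-1M-token prices, the cost of a single request can be estimated as a simple weighted sum. A minimal sketch (the token counts in the example call are illustrative placeholders, not figures from this page):

```python
# Per-1M-token prices listed above for GPT-4o mini.
INPUT_PRICE_PER_M = 0.15   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.60  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. 10K input tokens and 2K output tokens:
print(f"${estimate_cost(10_000, 2_000):.4f}")  # prints "$0.0027"
```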
Timeline
Announced: Jul 18, 2024
Released: Jul 18, 2024
Knowledge Cutoff: Oct 1, 2023
Specifications
Capabilities
Multimodal
License & Family
License: Proprietary
Performance Overview
Performance metrics and category breakdown

Overall Performance (9 benchmarks)

Average Score: 63.5%
Best Score: 87.2%
High Performers (80%+): 3

Performance Metrics

Max Context Window: 144.4K
Avg Throughput: 92.0 tok/s
Avg Latency: 1ms
All Benchmark Results for GPT-4o mini
Complete list of benchmark scores with detailed information
Benchmark | Category | Score | Source
HumanEval | text | 87.2% | Self-reported
MGSM | text | 87.0% | Self-reported
MMLU | text | 82.0% | Self-reported
DROP | text | 79.7% | Self-reported
MATH | text | 70.2% | Self-reported
MMMU | multimodal | 59.4% | Self-reported
MathVista | multimodal | 56.7% | Self-reported
GPQA | text | 40.2% | Self-reported
SWE-Bench Verified | text | 8.7% | Self-reported
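The summary statistics in the Performance Overview follow directly from these nine scores. A minimal sketch that reproduces them:

```python
# Per-benchmark scores (in %) from the list above.
scores = {
    "HumanEval": 87.2, "MGSM": 87.0, "MMLU": 82.0, "DROP": 79.7,
    "MATH": 70.2, "MMMU": 59.4, "MathVista": 56.7, "GPQA": 40.2,
    "SWE-Bench Verified": 8.7,
}

average = sum(scores.values()) / len(scores)           # average score
best = max(scores.values())                            # best score
high_performers = sum(1 for s in scores.values() if s >= 80)  # 80%+ count

print(f"Average: {average:.1f}%  Best: {best:.1f}%  80%+: {high_performers}")
# prints "Average: 63.5%  Best: 87.2%  80%+: 3"
```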