o4 mini
by OpenAI
Multimodal
About

OpenAI o4-mini, released in April 2025, was the first OpenAI reasoning model to combine multimodal understanding with an efficient RL-trained chain of thought, allowing it to reason over images, diagrams, and code simultaneously. Because it can process visual context during reasoning rather than only before it, it is particularly useful for coding tasks involving UI screenshots, charts, or visual specifications.
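As a sketch of what such a multimodal request might look like, assuming the OpenAI Chat Completions message format (the prompt text and image URL here are placeholders, not from this page):

```python
# Sketch of a multimodal chat request payload for o4-mini, assuming the
# OpenAI Chat Completions message format. URL and prompt are placeholders.
payload = {
    "model": "o4-mini",
    "messages": [
        {
            "role": "user",
            "content": [
                # Text and image parts travel in the same message, so the
                # model can reason over both during its chain of thought.
                {"type": "text",
                 "text": "This screenshot shows a misaligned button. "
                         "Suggest a CSS fix."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/screenshot.png"}},
            ],
        }
    ],
}
```

The key point is that the image is part of the conversation content itself, rather than a separate preprocessing input.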

Pricing Range
Input (per 1M tokens): $1.10
Output (per 1M tokens): $4.40
Providers: 1
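At these rates, a request's cost follows directly from its token counts; a minimal sketch (the example token counts are illustrative, not from this page):

```python
INPUT_PER_M = 1.10   # USD per 1M input tokens (from the pricing table)
OUTPUT_PER_M = 4.40  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD at o4-mini's listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PER_M

# e.g. 100k input tokens and 20k output tokens:
print(round(estimate_cost(100_000, 20_000), 3))  # → 0.198
```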
Timeline
Released: Apr 16, 2025
Specifications
Capabilities: Multimodal
License & Family
License: Proprietary
Performance Overview
Performance metrics and category breakdown

Overall Performance (5 benchmarks)
Average Score: 19.8%
Best Score: 56.9%
High Performers (80%+): 0

Performance Metrics
Max Context Window: 300.0K tokens

Top Categories
Agents: 28.5%
Reasoning: 14.3%
Multimodal: -1.0%
All Benchmark Results for o4 mini
Complete list of benchmark scores with detailed information

Benchmark              Category     Score    Status
τ-bench                Agents       56.90%   Unverified
GDPVal                 Agents       25.30%   Unverified
Humanity's Last Exam   Reasoning    14.28%   Unverified
BrowseComp             Agents        3.30%   Unverified
MMMU                   Multimodal   -1.00%   Unverified
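The 19.8% overall average is simply the unweighted mean of these five scores; a quick check:

```python
# Benchmark scores (percent) as listed on this page.
scores = {
    "τ-bench": 56.90,
    "GDPVal": 25.30,
    "Humanity's Last Exam": 14.28,
    "BrowseComp": 3.30,
    "MMMU": -1.00,
}

# Unweighted mean across all five benchmarks.
average = sum(scores.values()) / len(scores)
print(round(average, 1))  # → 19.8
```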
Resources