
o1-mini

Zero-eval
#1 SuperGLUE
#2 Cybersecurity CTFs

by OpenAI

About

o1-mini was created as a faster, more cost-effective reasoning model, designed to bring extended thinking capabilities to applications with tighter latency and budget constraints. It is built to excel in coding and STEM reasoning while remaining affordable, offering a more accessible entry point to reasoning-enhanced AI assistance.
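
For a concrete sense of how an application might call this model, here is a minimal sketch using OpenAI's official Python SDK. The prompt and the max_completion_tokens value are illustrative assumptions, not details from this listing.

```python
# Minimal sketch (not from this listing): calling o1-mini through the OpenAI Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-mini",
    messages=[
        # o1-mini accepts plain user messages; this prompt is a made-up example.
        {"role": "user", "content": "Write a Python function that returns the nth Fibonacci number."}
    ],
    max_completion_tokens=1024,  # o1-series models use max_completion_tokens rather than max_tokens
)

print(response.choices[0].message.content)
```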

Pricing Range
Input (per 1M tokens): $3.00 - $3.30
Output (per 1M tokens): $12.00 - $13.20
Providers: 2
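
To make the per-million-token rates easier to interpret, the sketch below estimates the cost of a single request at the low end of the listed range. The token counts used are illustrative assumptions, not data from this page.

```python
# Rough cost estimate at the low end of the listed range ($3.00 in / $12.00 out per 1M tokens).
# The 2,000 input / 800 output token counts are illustrative assumptions.
INPUT_PER_M = 3.00
OUTPUT_PER_M = 12.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the assumed per-million-token rates."""
    return input_tokens / 1_000_000 * INPUT_PER_M + output_tokens / 1_000_000 * OUTPUT_PER_M

print(f"${request_cost(2_000, 800):.4f}")  # -> $0.0156
```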
Timeline
Announced: Sep 12, 2024
Released: Sep 12, 2024
License & Family
License: Proprietary
Performance Overview
Performance metrics and category breakdown

Overall Performance (6 benchmarks)
Average Score: 71.9%
Best Score: 92.4%
High Performers (80%+): 3

Performance Metrics
Max Context Window: 193.5K tokens
Avg Throughput: 107.5 tok/s
Avg Latency: 3 ms
All Benchmark Results for o1-mini
Complete list of benchmark scores with detailed information
Benchmark            Modality   Score   Source
HumanEval            text       92.4%   Self-reported
MATH-500             text       90.0%   Self-reported
MMLU                 text       85.2%   Self-reported
SuperGLUE            text       75.0%   Self-reported
GPQA                 text       60.0%   Self-reported
Cybersecurity CTFs   text       28.7%   Self-reported
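
For reference, the 71.9% average reported under Overall Performance appears to be the unweighted mean of the six self-reported scores above, and the "High Performers (80%+)" count follows the same list. A quick check:

```python
# Unweighted mean of the six benchmark scores listed above.
scores = {
    "HumanEval": 92.4,
    "MATH-500": 90.0,
    "MMLU": 85.2,
    "SuperGLUE": 75.0,
    "GPQA": 60.0,
    "Cybersecurity CTFs": 28.7,
}

average = sum(scores.values()) / len(scores)
high_performers = sum(1 for s in scores.values() if s >= 80.0)

print(f"Average score: {average:.1f}%")              # -> 71.9%
print(f"High performers (80%+): {high_performers}")  # -> 3
```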