
o1-mini
Zero-eval
#1 SuperGLUE
#1 Cybersecurity CTFs
by OpenAI
About
o1-mini is a language model developed by OpenAI. It achieves strong performance, with an average score of 71.9% across 6 benchmarks, and does particularly well on HumanEval (92.4%), MATH-500 (90.0%), and MMLU (85.2%). It supports a 193.5K-token context window for handling large documents and is available through 2 API providers. Released in September 2024, it represented OpenAI's latest advancement at the time.
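For reference, here is a minimal sketch of querying o1-mini through OpenAI's official Python SDK, one plausible access route (this page does not name the two providers). It assumes the `openai` package is installed and `OPENAI_API_KEY` is set in the environment.

```python
# Minimal sketch: one chat completion against o1-mini via the OpenAI SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-mini",
    # o1-series models spend part of the completion budget on internal
    # reasoning tokens, so the output cap is max_completion_tokens,
    # not the older max_tokens parameter.
    max_completion_tokens=2048,
    messages=[
        {"role": "user", "content": "Prove that sqrt(2) is irrational."}
    ],
)
print(response.choices[0].message.content)
```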
Pricing Range
Input (per 1M tokens): $3.00 - $3.30
Output (per 1M tokens): $12.00 - $13.20
Providers: 2
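Concretely, a small sketch (with hypothetical token counts) showing how the per-1M-token range above translates into per-request cost across the cheapest and priciest of the two providers:

```python
# Hypothetical cost estimator for one o1-mini request, using the
# per-1M-token price range listed above. Token counts are illustrative.
LOW = {"input": 3.00, "output": 12.00}    # $ per 1M tokens, cheapest provider
HIGH = {"input": 3.30, "output": 13.20}   # $ per 1M tokens, priciest provider

def cost(input_tokens: int, output_tokens: int, rates: dict) -> float:
    """Dollar cost of a single request at the given per-1M-token rates."""
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# Example: a 10K-token prompt with a 2K-token completion.
print(f"${cost(10_000, 2_000, LOW):.4f} - ${cost(10_000, 2_000, HIGH):.4f}")
# -> $0.0540 - $0.0594
```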
Timeline
Announced: Sep 12, 2024
Released: Sep 12, 2024
License & Family
License: Proprietary
Performance Overview
Performance metrics and category breakdown
Overall Performance (6 benchmarks)
Average Score: 71.9%
Best Score: 92.4%
High Performers (80%+): 3

Performance Metrics
Max Context Window: 193.5K tokens
Avg Throughput: 107.5 tok/s
Avg Latency: 3 ms
All Benchmark Results for o1-mini
Complete list of benchmark scores with detailed information
Benchmark | Modality | Normalized Score | Percentage | Source
HumanEval | text | 0.92 | 92.4% | Self-reported
MATH-500 | text | 0.90 | 90.0% | Self-reported
MMLU | text | 0.85 | 85.2% | Self-reported
SuperGLUE | text | 0.75 | 75.0% | Self-reported
GPQA | text | 0.60 | 60.0% | Self-reported
Cybersecurity CTFs | text | 0.29 | 28.7% | Self-reported
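As a sanity check, the 71.9% average quoted above is the plain (unweighted) mean of the six self-reported scores in this table:

```python
# Recompute the headline average from the six benchmark scores above.
scores = {
    "HumanEval": 92.4,
    "MATH-500": 90.0,
    "MMLU": 85.2,
    "SuperGLUE": 75.0,
    "GPQA": 60.0,
    "Cybersecurity CTFs": 28.7,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.1f}%")  # -> 71.9%
```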