
o1-preview

#3 on MMLU

by OpenAI

About

o1-preview was introduced as an early version of OpenAI's reasoning model series, designed to demonstrate the potential of AI models that engage in extended thinking before responding. Built to solve complex problems through deliberate reasoning processes, it represented an initial step toward more thoughtful and analytical AI systems.

Pricing Range
Input (per 1M tokens): $15.00 - $16.50
Output (per 1M tokens): $60.00 - $66.00
Providers: 2
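
As a rough illustration (not taken from the page itself), the following minimal Python sketch shows how the per-1M-token price range above translates into the cost of a single request. The prices are the range listed above; the token counts in the example are hypothetical.

# Rough cost estimate for one o1-preview request, using the per-1M-token
# price range listed above. Token counts below are hypothetical examples.
INPUT_PRICE_PER_M = (15.00, 16.50)    # USD per 1M input tokens (cheapest / priciest provider)
OUTPUT_PRICE_PER_M = (60.00, 66.00)   # USD per 1M output tokens

def request_cost_usd(input_tokens: int, output_tokens: int) -> tuple[float, float]:
    """Return the (low, high) cost in USD across the listed price range."""
    low = input_tokens / 1e6 * INPUT_PRICE_PER_M[0] + output_tokens / 1e6 * OUTPUT_PRICE_PER_M[0]
    high = input_tokens / 1e6 * INPUT_PRICE_PER_M[1] + output_tokens / 1e6 * OUTPUT_PRICE_PER_M[1]
    return low, high

# Example: 2,000 prompt tokens and 10,000 completion tokens (reasoning models
# tend to produce long outputs, billed at the output rate).
print(request_cost_usd(2_000, 10_000))  # (0.63, 0.693)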
Timeline
Announced: Sep 12, 2024
Released: Sep 12, 2024
License & Family
License: Proprietary
Performance Overview
Performance metrics and category breakdown

Overall Performance

8 benchmarks
Average Score: 64.8%
Best Score: 90.8%
High Performers (80%+): 3

Performance Metrics

Max Context Window: 160.8K tokens
Avg Throughput: 41.0 tok/s
Avg Latency: 8 ms
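
The throughput figure lends itself to a quick back-of-the-envelope estimate of generation time. The Python sketch below simply divides output length by the average rate listed above; the 1,000-token output length is a hypothetical example, and queueing and first-token latency are ignored.

# Back-of-the-envelope generation time from the average throughput above.
# The 1,000-token output length is a hypothetical example value.
AVG_THROUGHPUT_TOK_S = 41.0

def est_generation_seconds(output_tokens: int) -> float:
    """Approximate seconds to stream output_tokens at the average rate."""
    return output_tokens / AVG_THROUGHPUT_TOK_S

print(round(est_generation_seconds(1_000), 1))  # ~24.4 s for 1,000 output tokens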
All Benchmark Results for o1-preview
Complete list of benchmark scores with detailed information
MGSM (text): 90.8% (0.91), Self-reported
MMLU (text): 90.8% (0.91), Self-reported
MATH (text): 85.5% (0.85), Self-reported
GPQA (text): 73.3% (0.73), Self-reported
LiveBench (text): 52.3% (0.52), Self-reported
SimpleQA (text): 42.4% (0.42), Self-reported
AIME 2024 (text): 42.0% (0.42), Self-reported
SWE-Bench Verified (text): 41.3% (0.41), Self-reported
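
The summary figures in the Performance Overview follow directly from this list. A quick Python check, with the scores copied from the table above, reproduces the 64.8% average, the 90.8% best score, and the three benchmarks at or above 80%.

# Self-reported scores copied from the list above; the aggregates match the
# Performance Overview (Average 64.8%, Best 90.8%, three scores >= 80%).
scores = {
    "MGSM": 90.8,
    "MMLU": 90.8,
    "MATH": 85.5,
    "GPQA": 73.3,
    "LiveBench": 52.3,
    "SimpleQA": 42.4,
    "AIME 2024": 42.0,
    "SWE-Bench Verified": 41.3,
}

average = sum(scores.values()) / len(scores)               # 64.8
best = max(scores.values())                                # 90.8
high_performers = sum(s >= 80.0 for s in scores.values())  # 3

print(f"Average {average:.1f}%, Best {best:.1f}%, 80%+ count {high_performers}")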