o1-preview

by OpenAI

Ranked #3 on MMLU

About

o1-preview is a language model developed by OpenAI. It achieves strong performance, with an average score of 64.8% across 8 benchmarks, and scores highest on MGSM (90.8%), MMLU (90.8%), and MATH (85.5%). It supports a context window of roughly 161K tokens for handling large documents and is available through 2 API providers. It was announced and released on September 12, 2024.
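
The listing does not name the 2 API providers. Assuming one of them is OpenAI's own API, a minimal request with the official openai Python SDK might look like the sketch below; the model name o1-preview comes from this listing, and everything else is an assumption about that SDK:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="o1-preview",
        # At release, o1-preview accepted only user/assistant messages (no
        # system role) and used max_completion_tokens, which also budgets
        # the model's hidden reasoning tokens.
        messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
        max_completion_tokens=2000,
    )

    print(response.choices[0].message.content)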

Pricing Range
Input (per 1M tokens): $15.00 - $16.50
Output (per 1M tokens): $60.00 - $66.00
Providers: 2
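
At the low end of the listed range ($15.00 input / $60.00 output per 1M tokens), per-request cost is simple arithmetic. The token counts below are made-up example values; note that o1-style models also bill their hidden reasoning tokens as output, so real costs run higher than naive estimates like this one:

    # Back-of-the-envelope cost for one request at the cheapest listed rates.
    input_price = 15.00   # $ per 1M input tokens (lower bound above)
    output_price = 60.00  # $ per 1M output tokens (lower bound above)

    input_tokens = 2_000   # example values, not from the listing
    output_tokens = 1_000

    cost = (input_tokens * input_price + output_tokens * output_price) / 1_000_000
    print(f"${cost:.4f}")  # $0.0900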
Timeline
Announced: Sep 12, 2024
Released: Sep 12, 2024
License & Family
License: Proprietary
Performance Overview
Performance metrics and category breakdown

Overall Performance (8 benchmarks)
Average Score: 64.8%
Best Score: 90.8%
High Performers (80%+): 3

Performance Metrics

Max Context Window: 160.8K tokens
Avg Throughput: 41.0 tok/s
Avg Latency: 8 ms
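
Taken together, these figures give a rough end-to-end estimate for a long response. The sketch below assumes the listed latency is time-to-first-token (the listing does not define it, and 8 ms would be unusually low for a reasoning model) and that throughput holds at the listed average:

    # Rough end-to-end time for a 1,000-token completion.
    latency_s = 0.008            # 8 ms, as listed above
    throughput_tok_per_s = 41.0  # as listed above
    output_tokens = 1_000        # example value, not from the listing

    total_s = latency_s + output_tokens / throughput_tok_per_s
    print(f"{total_s:.1f} s")  # ~24.4 s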
All Benchmark Results for o1-preview
Complete list of benchmark scores with detailed information
Benchmark            Type   Score   Source
MGSM                 text   90.8%   Self-reported
MMLU                 text   90.8%   Self-reported
MATH                 text   85.5%   Self-reported
GPQA                 text   73.3%   Self-reported
LiveBench            text   52.3%   Self-reported
SimpleQA             text   42.4%   Self-reported
AIME 2024            text   42.0%   Self-reported
SWE-Bench Verified   text   41.3%   Self-reported
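
The Performance Overview numbers can be reproduced directly from this table. A quick sanity check, with the scores transcribed from the rows above:

    # Reproduce the overview stats from the benchmark table.
    scores = {
        "MGSM": 90.8, "MMLU": 90.8, "MATH": 85.5, "GPQA": 73.3,
        "LiveBench": 52.3, "SimpleQA": 42.4, "AIME 2024": 42.0,
        "SWE-Bench Verified": 41.3,
    }

    average = sum(scores.values()) / len(scores)
    best = max(scores.values())
    high = sum(1 for s in scores.values() if s >= 80.0)

    print(f"Average Score: {average:.1f}%")   # 64.8%
    print(f"Best Score: {best:.1f}%")         # 90.8%
    print(f"High Performers (80%+): {high}")  # 3

All three results match the Average Score, Best Score, and High Performers figures in the overview above.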