Comprehensive side-by-side LLM comparison
o1 leads with a 21.6% higher average benchmark score. Claude 3 Sonnet offers a context window 100.0K tokens larger than o1's, is $57.00 cheaper per million tokens, and supports multimodal inputs. Overall, o1 is the stronger choice for coding tasks.
Anthropic
Claude 3 Sonnet was introduced as the balanced middle tier of the Claude 3 family, offering a combination of strong capabilities and operational speed. Designed to provide an optimal tradeoff between intelligence and performance for everyday tasks, it served as a versatile solution for a wide range of enterprise and consumer applications.
OpenAI
o1 was developed as part of OpenAI's reasoning-focused model series, designed to spend more time thinking before responding. Built to excel at complex reasoning tasks in science, coding, and mathematics, it employs extended internal reasoning processes to solve harder problems than traditional language models through careful step-by-step analysis.
Release dates
Claude 3 Sonnet (Anthropic): 2024-02-29
o1 (OpenAI): 2024-12-17
o1 is 9 months newer than Claude 3 Sonnet.
Cost per million tokens (USD)
[Pricing chart: Claude 3 Sonnet vs. o1]
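As a rough sanity check, the $57.00 gap quoted above is consistent with comparing the sum of input and output list prices per million tokens. The prices in the sketch below are assumptions based on commonly published list pricing, not values taken from this page.

```python
# Hedged sketch: reproducing the "$57.00 cheaper per million tokens" figure
# by summing input and output list prices. The prices below are assumptions
# (commonly cited list pricing), not values stated on this page.
PRICES_PER_MTOK_USD = {
    "claude-3-sonnet": {"input": 3.00, "output": 15.00},   # assumed
    "o1":              {"input": 15.00, "output": 60.00},  # assumed
}

def combined_price(model: str) -> float:
    """Input price + output price per million tokens, in USD."""
    p = PRICES_PER_MTOK_USD[model]
    return p["input"] + p["output"]

savings = combined_price("o1") - combined_price("claude-3-sonnet")
print(f"Claude 3 Sonnet is ${savings:.2f} cheaper per million tokens")
# -> Claude 3 Sonnet is $57.00 cheaper per million tokens
```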
Context window and performance specifications
Average performance across 6 common benchmarks
[Benchmark chart: Claude 3 Sonnet vs. o1]
Available providers and their performance metrics
Claude 3 Sonnet: Anthropic, Bedrock
o1: OpenAI, Azure