Claude 3.5 Sonnet vs. o1: comprehensive side-by-side LLM comparison
o1 leads with a 3.1% higher average benchmark score. Claude 3.5 Sonnet counters with a context window 100.0K tokens larger than o1's, pricing $57.00 cheaper per million tokens, and support for multimodal inputs. Both models have strengths depending on your specific coding needs.
Claude 3.5 Sonnet (Anthropic)
This upgraded version of Claude 3.5 Sonnet was released with significant improvements in coding and agentic tool use. Built to deliver enhanced performance in software engineering tasks, it brought substantial gains in reasoning and problem-solving while introducing the groundbreaking computer use capability in public beta, allowing it to interact with computer interfaces like a human.
o1 (OpenAI)
o1 was developed as part of OpenAI's reasoning-focused model series, designed to spend more time thinking before responding. Built to excel at complex reasoning tasks in science, coding, and mathematics, it employs extended internal reasoning processes to solve harder problems than traditional language models through careful step-by-step analysis.
Release dates
Claude 3.5 Sonnet (Anthropic): 2024-10-22
o1 (OpenAI): 2024-12-17 (newer by just under two months)
Cost per million tokens (USD)
[Pricing chart not reproduced; the summary above notes a $57.00-per-million-token difference in Claude 3.5 Sonnet's favor.]
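The blended price gap can be sanity-checked with simple arithmetic. A minimal sketch, assuming the commonly cited list prices at the time ($3/$15 per million input/output tokens for Claude 3.5 Sonnet, $15/$60 for o1; these figures are assumptions, not taken from this page):

```python
# Sanity-check the blended price gap with simple arithmetic.
# Prices are assumed list prices in USD per 1M tokens, not figures from this page.
PRICES = {
    "claude-3.5-sonnet": {"input": 3.00, "output": 15.00},
    "o1": {"input": 15.00, "output": 60.00},
}

def combined_cost(model: str) -> float:
    """Input price plus output price, per million tokens."""
    p = PRICES[model]
    return p["input"] + p["output"]

gap = combined_cost("o1") - combined_cost("claude-3.5-sonnet")
print(f"Combined price gap: ${gap:.2f} per million tokens")
# prints: Combined price gap: $57.00 per million tokens
```

Under these assumed prices, the combined input+output gap reproduces the $57.00 figure quoted in the summary, which suggests the page is comparing summed input and output rates rather than a single blended rate.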
Context window and performance specifications
[Specification table not reproduced.]
Average performance across 11 common benchmarks
[Benchmark chart not reproduced.]
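Context-window comparisons like the one above are easier to reason about with a quick fit check. A minimal sketch using the rough ~4-characters-per-token heuristic for English text (an approximation I am assuming here, not either vendor's actual tokenizer):

```python
# Rough context-window fit check using the common ~4 characters per token
# heuristic for English text. This is an approximation, not either vendor's
# actual tokenizer, so real counts can differ noticeably.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: about one token per 4 characters."""
    return max(1, len(text) // 4)

def fits_in_window(text: str, window_tokens: int) -> bool:
    """True if the estimated token count fits inside the given window."""
    return estimate_tokens(text) <= window_tokens

# Example: a ~500K-character document against a 100K-token window.
doc = "x" * 500_000
print(fits_in_window(doc, 100_000))  # prints: False
```

For real usage you would count tokens with the model's own tokenizer, since the per-token character ratio varies by language and content.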
Performance comparison across key benchmark categories
[Category-level benchmark chart not reproduced.]
Available providers and their performance metrics
Claude 3.5 Sonnet: Anthropic, Bedrock
o1: OpenAI, Azure
[Per-provider performance metrics not reproduced.]