Comprehensive side-by-side LLM comparison
Claude 3.5 Sonnet leads with a 3.9% higher average benchmark score, and its 200K-token context window is 72K tokens larger than GPT-4o's 128K. GPT-4o is $5.50 cheaper per million tokens (input and output prices combined). Both models have strengths depending on your specific coding needs.
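The headline deltas above can be sketched as a quick calculation. The list prices and context sizes below are assumptions based on each vendor's published figures at launch ($3/$15 per million tokens for Claude 3.5 Sonnet, $2.50/$10 for GPT-4o 2024-08-06, 200K vs. 128K context), not values taken from this page; check current pricing pages before relying on them.

```python
# Assumed published specs: USD per 1M tokens (input, output) and context size.
specs = {
    "claude-3.5-sonnet": {"input": 3.00, "output": 15.00, "context": 200_000},
    "gpt-4o":            {"input": 2.50, "output": 10.00, "context": 128_000},
}

def combined_price(model: str) -> float:
    """Input + output price per 1M tokens, the metric the summary uses."""
    s = specs[model]
    return s["input"] + s["output"]

cost_delta = combined_price("claude-3.5-sonnet") - combined_price("gpt-4o")
context_delta = specs["claude-3.5-sonnet"]["context"] - specs["gpt-4o"]["context"]
print(cost_delta)     # 5.5 -> GPT-4o is $5.50 cheaper, input + output combined
print(context_delta)  # 72000 extra tokens of context for Claude 3.5 Sonnet
```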
Claude 3.5 Sonnet (Anthropic)
This upgraded version of Claude 3.5 Sonnet was released with significant improvements in coding and agentic tool use. Built to deliver enhanced performance in software engineering tasks, it brought substantial gains in reasoning and problem-solving while introducing the groundbreaking computer use capability in public beta, allowing it to interact with computer interfaces like a human.
GPT-4o (OpenAI)
This updated version of GPT-4o was released with refinements to its multimodal capabilities and improved performance across text, vision, and audio tasks. Built to incorporate learnings from the initial GPT-4o deployment, it enhanced reliability and accuracy while maintaining the seamless cross-modal reasoning that defines the GPT-4o family.
Release dates
GPT-4o (OpenAI): 2024-08-06
Claude 3.5 Sonnet (Anthropic): 2024-10-22
Claude 3.5 Sonnet is roughly two and a half months newer.
Cost per million tokens (USD)
Claude 3.5 Sonnet: $3.00 input / $15.00 output
GPT-4o: $2.50 input / $10.00 output
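Since the two models price input and output tokens differently, the cheaper option depends on your workload mix. Below is a minimal cost estimator, assuming the list prices shown above are still current (they change; confirm on each vendor's pricing page). The example workload numbers are hypothetical.

```python
# Assumed USD list prices per 1M tokens: (input, output).
PRICES = {
    "claude-3.5-sonnet": (3.00, 15.00),
    "gpt-4o": (2.50, 10.00),
}

def monthly_cost(model: str, requests: int, in_tokens: int, out_tokens: int) -> float:
    """Estimated monthly spend for `requests` calls of the given average size."""
    price_in, price_out = PRICES[model]
    return requests * (in_tokens * price_in + out_tokens * price_out) / 1_000_000

# Hypothetical workload: 100K requests/month, 1,500 input + 500 output tokens each.
for model in PRICES:
    print(model, monthly_cost(model, 100_000, 1_500, 500))
# claude-3.5-sonnet 1200.0
# gpt-4o 875.0
```

Output-heavy workloads widen the gap, since Claude 3.5 Sonnet's output price is $5 higher per million tokens.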
Context window and performance specifications
Claude 3.5 Sonnet: 200K-token context window
GPT-4o: 128K-token context window

Average performance across 11 common benchmarks
Claude 3.5 Sonnet leads GPT-4o by 3.9% on average.
Performance comparison across key benchmark categories
[chart comparing Claude 3.5 Sonnet and GPT-4o across benchmark categories]
Available providers and their performance metrics
Claude 3.5 Sonnet: Anthropic, Amazon Bedrock
GPT-4o: OpenAI, Azure
[per-provider performance metrics shown as charts on the original page]