Claude 3 Opus vs. GPT-4: a comprehensive side-by-side LLM comparison
Claude 3 Opus leads with a 9.9% higher average benchmark score and offers a context window 334.5K tokens larger than GPT-4's. The two models are similarly priced. Overall, Claude 3 Opus is the stronger choice for coding tasks.
Claude 3 Opus (Anthropic)
Claude 3 Opus was developed as the most capable model in the Claude 3 family, designed to set new industry benchmarks across a wide range of cognitive tasks. Built to handle complex analysis and extended tasks requiring deep reasoning, it balanced frontier intelligence with careful safety considerations, representing the flagship tier of the Claude 3 generation.
GPT-4 (OpenAI)
GPT-4 was created as a large multimodal model capable of accepting image and text inputs while producing text outputs. Developed to exhibit human-level performance on various professional and academic benchmarks, it marked a significant advancement in reliability, creativity, and handling of nuanced instructions compared to its predecessors.
Release dates
GPT-4 (OpenAI): 2023-06-13
Claude 3 Opus (Anthropic): 2024-02-29
Claude 3 Opus is about 8 months newer.
Cost per million tokens (USD)
[Chart: pricing comparison between Claude 3 Opus and GPT-4]
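Since both providers bill per million input and output tokens, a request's cost is just a weighted sum of its token counts. A minimal sketch of that arithmetic is below; the rates in the table are illustrative placeholders, not the models' actual published prices, so substitute the current figures from each provider's pricing page.

```python
# Per-million-token rates in USD. These values are illustrative
# placeholders, not actual published prices.
RATES = {
    "claude-3-opus": {"input": 15.00, "output": 75.00},
    "gpt-4": {"input": 30.00, "output": 60.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request from per-million-token rates."""
    r = RATES[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# Example: a request with 10K input tokens and 1K output tokens.
for model in RATES:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.2f}")
```

Note that output tokens typically cost several times more than input tokens, so response length, not prompt length, often dominates the bill.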
Context window and performance specifications
[Chart: context window sizes and average performance across 7 common benchmarks for Claude 3 Opus and GPT-4]
Available providers and their performance metrics
Claude 3 Opus: Anthropic, Bedrock
GPT-4: OpenAI, Azure