Comprehensive side-by-side LLM comparison
Claude Opus 4.1 leads with a 14.9% higher average benchmark score. GPT-4.1 offers an 848.3K-token larger context window and is $80.00 cheaper per million tokens (input and output combined). Claude Opus 4.1 is available through 4 providers. Overall, Claude Opus 4.1 is the stronger choice for coding tasks.
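To show where these headline figures come from, here is a minimal Python sketch. The context sizes and prices in it are assumptions taken from the vendors' public documentation (200K tokens and $15/$75 per million input/output tokens for Claude Opus 4.1; roughly 1.05M tokens and $2/$8 for GPT-4.1), not values stated on this page.

```python
# Minimal sketch: reproducing the headline deltas from assumed published specs.
SPECS = {
    # context window (tokens), input price, output price (USD per million tokens)
    "Claude Opus 4.1": {"context": 200_000, "in": 15.00, "out": 75.00},
    "GPT-4.1": {"context": 1_047_576, "in": 2.00, "out": 8.00},
}

opus, gpt = SPECS["Claude Opus 4.1"], SPECS["GPT-4.1"]

# Context-window gap: ~847.6K tokens with these assumed sizes,
# within rounding of the 848.3K quoted above.
context_gap = gpt["context"] - opus["context"]
print(f"Context window gap: {context_gap / 1000:.1f}K tokens")

# Combined input+output price gap: ($15 + $75) - ($2 + $8) = $80.00.
price_gap = (opus["in"] + opus["out"]) - (gpt["in"] + gpt["out"])
print(f"Price gap: ${price_gap:.2f} per million tokens")
```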
Anthropic
Claude Opus 4.1 represents an iteration within the Claude 4 Opus line, built to deliver refined performance in complex reasoning and analysis tasks. Developed as part of Anthropic's flagship tier, it incorporates improvements to the foundational capabilities that define the Opus family of models.
OpenAI
GPT-4.1 represents an iterative improvement in the GPT-4 series, developed to refine the foundational capabilities established by GPT-4. Built to incorporate learnings and optimizations from the deployment of previous versions, it continues the evolution of OpenAI's flagship model line with enhanced reliability and performance.
Release dates
Claude Opus 4.1 (Anthropic): 2025-08-05
GPT-4.1 (OpenAI): 2025-04-14
Claude Opus 4.1 is 3 months newer.
Cost per million tokens (USD)
[Chart: input and output token pricing for Claude Opus 4.1 vs GPT-4.1]
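As a practical complement to the chart, here is a small sketch estimating what a single request costs under an assumed usage profile of 10K input and 1K output tokens; the price pairs are the same assumed published rates as above.

```python
# Small sketch: per-request cost under an assumed 10K-in / 1K-out profile.
PRICES = {  # USD per million tokens: (input, output), assumed published rates
    "Claude Opus 4.1": (15.00, 75.00),
    "GPT-4.1": (2.00, 8.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one request against the assumed price table."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.4f} per request")
```

With these assumptions the gap is roughly 8x per request ($0.2250 vs $0.0280), which is the practical face of the $80.00 headline figure.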
Context window and performance specifications
Average performance across 6 common benchmarks
[Chart: average benchmark score for Claude Opus 4.1 vs GPT-4.1]
Performance comparison across key benchmark categories
[Chart: per-category benchmark scores for Claude Opus 4.1 vs GPT-4.1]
Available providers and their performance metrics
Claude Opus 4.1: Anthropic, Bedrock, ZeroEval
GPT-4.1: OpenAI
[Charts: provider-level performance metrics for each model]
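As a practical note on access, both models are reachable through their first-party Python SDKs. Below is a minimal sketch, assuming the `anthropic` and `openai` packages and API keys in the standard environment variables (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`); the model IDs shown are the vendors' published identifiers.

```python
# Minimal sketch: sending the same prompt to both models via the
# first-party SDKs (pip install anthropic openai). API keys are read
# from ANTHROPIC_API_KEY and OPENAI_API_KEY.
import anthropic
from openai import OpenAI

PROMPT = "Summarize the trade-offs between a larger context window and a higher benchmark score."

# Claude Opus 4.1 via the Anthropic API
claude = anthropic.Anthropic()
claude_reply = claude.messages.create(
    model="claude-opus-4-1",  # Anthropic's published alias for Opus 4.1
    max_tokens=1024,
    messages=[{"role": "user", "content": PROMPT}],
)
print("Claude Opus 4.1:", claude_reply.content[0].text)

# GPT-4.1 via the OpenAI API
openai_client = OpenAI()
gpt_reply = openai_client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": PROMPT}],
)
print("GPT-4.1:", gpt_reply.choices[0].message.content)
```

Accessing Claude Opus 4.1 through Bedrock instead uses AWS's own SDK and different model identifiers, so the Anthropic-direct call above does not carry over unchanged.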