Comprehensive side-by-side LLM comparison
Claude 3.7 Sonnet leads with an 8.8% higher average benchmark score, while GPT-4.1 mini offers a context window 768.8K tokens larger and costs $16.00 less per million tokens (input plus output combined). Claude 3.7 Sonnet is available on 3 providers. Overall, Claude 3.7 Sonnet is the stronger choice for coding tasks.
Anthropic
Claude 3.7 Sonnet, released by Anthropic in February 2025, is a large language model from the Claude 3 family featuring hybrid reasoning with configurable extended thinking. It supports a 200K-token context window, 64K maximum output tokens (128K in beta), and native image understanding. It targets complex coding, mathematics, and scientific reasoning tasks where extended chain-of-thought processing yields meaningful improvements in output quality.
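The configurable extended thinking described above is exposed in Anthropic's Messages API as a `thinking` parameter carrying a token budget. A minimal sketch of the request body follows; the model ID matches Anthropic's published identifier, but the specific budget values are illustrative assumptions, not recommendations:

```python
# Sketch of an Anthropic Messages API request body with extended thinking
# enabled. Budget values are illustrative; tune them to your workload.

def build_thinking_request(prompt: str, thinking_budget: int = 16_000,
                           max_tokens: int = 64_000) -> dict:
    """Build a request body with a configurable extended-thinking budget.

    The thinking budget must be smaller than max_tokens, because thinking
    tokens count against the overall output limit.
    """
    if thinking_budget >= max_tokens:
        raise ValueError("thinking budget must be below max_tokens")
    return {
        "model": "claude-3-7-sonnet-20250219",
        "max_tokens": max_tokens,
        "thinking": {"type": "enabled", "budget_tokens": thinking_budget},
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_thinking_request("Prove that sqrt(2) is irrational.")
print(request["thinking"])
```

Because thinking tokens are billed as output, raising the budget trades cost and latency for answer quality on hard problems.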
OpenAI
GPT-4.1 mini, released by OpenAI in April 2025, is a smaller variant from the GPT-4.1 family designed for efficient, cost-effective deployments that require long-context understanding. It features a 1M-token context window and native image understanding, while maintaining strong coding and instruction-following capabilities for its size. GPT-4.1 mini targets applications that need a balance of response speed, cost, and capability, such as production APIs with high request volumes.
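To make the 1M-token window concrete, here is a rough fit check for multi-document prompts. It uses the common ~4-characters-per-token heuristic; the window constants and the heuristic itself are approximations, and real budgeting should use the provider's tokenizer:

```python
# Rough check of whether a set of documents fits in a long context window.
# The ~4 chars/token ratio is a heuristic, not a tokenizer; window sizes
# are approximate round numbers from this comparison.

GPT41_MINI_CONTEXT = 1_000_000   # ~1M tokens
CLAUDE_37_CONTEXT = 200_000      # 200K tokens

def estimate_tokens(text: str) -> int:
    """Crude token estimate: about one token per four characters."""
    return max(1, len(text) // 4)

def fits_in_context(docs: list[str], window: int,
                    reserve_output: int = 8_000) -> bool:
    """True if the combined prompt leaves room for reserve_output tokens."""
    total = sum(estimate_tokens(d) for d in docs)
    return total + reserve_output <= window

docs = ["x" * 400_000] * 8   # eight ~100K-token documents
print(fits_in_context(docs, GPT41_MINI_CONTEXT))  # fits in ~1M
print(fits_in_context(docs, CLAUDE_37_CONTEXT))   # exceeds 200K
```

The same eight-document corpus fits comfortably in GPT-4.1 mini's window but would need chunking or retrieval to run against a 200K-token model.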
GPT-4.1 mini is the newer model, released about seven weeks after Claude 3.7 Sonnet.

Claude 3.7 Sonnet (Anthropic), released 2025-02-24
GPT-4.1 mini (OpenAI), released 2025-04-14
Cost per million tokens (USD)
[Pricing chart comparing Claude 3.7 Sonnet and GPT-4.1 mini]
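Since the two models' per-token prices differ by roughly an order of magnitude, a small blended-cost helper makes the tradeoff concrete. The prices below are the launch-era figures commonly quoted for these models ($3/$15 and $0.40/$1.60 per million input/output tokens, consistent with the $16.00 combined difference above), but treat them as assumptions and verify against current pricing pages:

```python
# Blended USD cost per request, given per-million-token input/output prices.
# Prices used below are launch-era figures; verify current provider pricing.

def request_cost(input_tokens: int, output_tokens: int,
                 price_in_per_m: float, price_out_per_m: float) -> float:
    """Cost in USD for one request at the given per-million-token prices."""
    return (input_tokens * price_in_per_m
            + output_tokens * price_out_per_m) / 1_000_000

# Example: 10K input tokens, 1K output tokens per request.
sonnet = request_cost(10_000, 1_000, price_in_per_m=3.00, price_out_per_m=15.00)
mini = request_cost(10_000, 1_000, price_in_per_m=0.40, price_out_per_m=1.60)
print(f"Claude 3.7 Sonnet: ${sonnet:.4f}  GPT-4.1 mini: ${mini:.4f}")
```

At this input-heavy shape, the roughly 8x per-request cost gap is what makes the smaller model attractive for high-volume production APIs.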
Context window and performance specifications
Average performance across the models' one common benchmark
[Chart: Claude 3.7 Sonnet vs GPT-4.1 mini]
Performance comparison across key benchmark categories
[Chart: Claude 3.7 Sonnet vs GPT-4.1 mini]
Claude 3.7 Sonnet: training data cutoff 2024-10
Available providers and their performance metrics
Claude 3.7 Sonnet: Anthropic, AWS Bedrock, Google Cloud Vertex AI
GPT-4.1 mini: OpenAI