Comprehensive side-by-side LLM comparison
Claude Sonnet 4 leads with a 26.8% higher average benchmark score. Gemini 2.5 Pro offers a context window 744.2K tokens larger than Claude Sonnet 4's and is $6.75 cheaper per million tokens. Overall, Claude Sonnet 4 is the stronger choice for coding tasks.
Anthropic
Claude Sonnet 4, released by Anthropic in May 2025, is a large language model from the Claude 4 family that delivers a balance of performance and efficiency for coding, reasoning, and analytical tasks. It features a 200K token context window (extendable to 1M tokens in beta), 64K maximum output tokens, native image understanding, and extended thinking support. Sonnet 4 targets development workflows, document analysis, and applications that benefit from the performance characteristics of the Claude 4 generation.
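The practical meaning of the 200K vs. 1M window can be sketched with a rough fit check. This is a heuristic only: the ~4 characters/token ratio is a common rule of thumb for English text, not an exact tokenizer, and the 64K reserved-output figure comes from Sonnet 4's stated maximum output.

```python
# Rough context-window fit check. The ~4 characters/token ratio is a
# common rule of thumb, not an exact tokenizer count.
CLAUDE_SONNET_4_CONTEXT = 200_000      # standard window (1M available in beta)
GEMINI_2_5_PRO_CONTEXT = 1_000_000

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return len(text) // 4

def fits(text: str, window: int, reserved_output: int = 64_000) -> bool:
    """True if the prompt plus a reserved output budget fits the window."""
    return estimate_tokens(text) + reserved_output <= window

doc = "x" * 1_200_000   # a document of roughly 300K tokens
print(fits(doc, CLAUDE_SONNET_4_CONTEXT))  # False: exceeds the 200K window
print(fits(doc, GEMINI_2_5_PRO_CONTEXT))   # True: fits comfortably in 1M
```

Under this heuristic, a ~300K-token document overflows Sonnet 4's standard window but fits Gemini 2.5 Pro's with room to spare.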
Google DeepMind
Gemini 2.5 Pro, released by Google in May 2025, is a large language model from the Gemini 2.5 family designed for complex reasoning, coding, and long-context analysis tasks. It features a 1M token context window, native support for text, image, video, and audio input, and integrated thinking capabilities for multi-step problem solving. Gemini 2.5 Pro targets advanced coding workflows, scientific reasoning, and applications requiring deep understanding across large, mixed-modality contexts.
Release dates
Claude Sonnet 4 (Anthropic): 2025-05-14
Gemini 2.5 Pro (Google DeepMind): 2025-05-20 (6 days newer)
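The "6 days newer" gap follows directly from the two release dates:

```python
from datetime import date

claude_release = date(2025, 5, 14)   # Claude Sonnet 4
gemini_release = date(2025, 5, 20)   # Gemini 2.5 Pro

delta_days = (gemini_release - claude_release).days
print(delta_days)  # 6
```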
Cost per million tokens (USD): chart comparing Claude Sonnet 4 and Gemini 2.5 Pro.
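The $6.75 figure in the summary matches the difference in combined (input + output) list prices, assuming the commonly cited rates below. Treat these prices as illustrative, not authoritative: vendor pricing changes over time, and Gemini's rate shown applies to prompts up to 200K tokens.

```python
# Assumed list prices in USD per million tokens (not authoritative).
claude_sonnet_4 = {"input": 3.00, "output": 15.00}
gemini_2_5_pro = {"input": 1.25, "output": 10.00}   # prompts <= 200K tokens

diff = sum(claude_sonnet_4.values()) - sum(gemini_2_5_pro.values())
print(f"${diff:.2f}")  # $6.75
```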
Context window and performance specifications
Average performance across 1 common benchmark: chart comparing Claude Sonnet 4 and Gemini 2.5 Pro.
Performance comparison across key benchmark categories: chart comparing Claude Sonnet 4 and Gemini 2.5 Pro.
Available providers and their performance metrics
Claude Sonnet 4: Anthropic, AWS Bedrock, Google Cloud Vertex AI
Gemini 2.5 Pro: Google Cloud Vertex AI