Comprehensive side-by-side LLM comparison
Grok-4 leads with a 19.8% higher average benchmark score, while Claude 3.7 Sonnet offers a context window 64.0K tokens larger than Grok-4's. Both models are similarly priced, and Claude 3.7 Sonnet is available from 4 providers. Overall, Grok-4 is the stronger choice for coding tasks.
Anthropic
Claude 3.7 Sonnet is Anthropic's first hybrid reasoning model, capable of producing either near-instant responses or extended step-by-step thinking that is visible to users. It brings particularly strong improvements in coding and front-end web development, lets users control thinking budgets, and balances real-world task performance with reasoning capability for enterprise applications.
xAI
Grok 4 is the fourth generation of xAI's language models, built to advance the frontier of AI reasoning and knowledge and to handle increasingly complex tasks with enhanced reliability.
Release dates
Claude 3.7 Sonnet (Anthropic): 2025-02-24
Grok-4 (xAI): 2025-07-09 (4 months newer)
Cost per million tokens (USD)
[Pricing chart comparing Claude 3.7 Sonnet and Grok-4; per-token prices not captured]
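Per-million-token pricing translates into a per-request cost by scaling input and output token counts separately. The sketch below shows the arithmetic; the prices used are placeholders for illustration, not either model's actual rates.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Return the USD cost of one request, given per-million-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Example with hypothetical rates of $3 (input) and $15 (output) per million:
# 10,000 input tokens and 2,000 output tokens.
cost = request_cost(10_000, 2_000, 3.0, 15.0)
print(f"${cost:.2f}")  # (10_000*3 + 2_000*15) / 1e6 = $0.06
```

Swap in either model's published rates to compare what a typical workload would actually cost on each.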
Context window and performance specifications
[Chart: average performance across 2 common benchmarks, Claude 3.7 Sonnet vs Grok-4; scores not captured]
Grok-4: 2024-12-31
Available providers and their performance metrics
Claude 3.7 Sonnet: Anthropic, Bedrock (ZeroEval)
Grok-4: xAI (ZeroEval)
[Per-provider performance metrics not captured]