Claude 3.5 Sonnet vs. Grok-4: comprehensive side-by-side LLM comparison
Grok-4 leads with a 20.3% higher average benchmark score, while Claude 3.5 Sonnet offers a context window 136.0K tokens larger than Grok-4's. Both models have similar pricing. Overall, Grok-4 is the stronger choice for coding tasks.
Claude 3.5 Sonnet (Anthropic)
This upgraded version of Claude 3.5 Sonnet was released with significant improvements in coding and agentic tool use. Built to deliver enhanced performance in software engineering tasks, it brought substantial gains in reasoning and problem-solving while introducing the groundbreaking computer use capability in public beta, allowing it to interact with computer interfaces like a human.
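Since the upgrade is pitched around agentic tool use, a minimal sketch of invoking Claude 3.5 Sonnet with a tool definition through the Anthropic Python SDK may help make that concrete; the model id, tool name, and schema below are illustrative assumptions rather than details taken from this comparison.

```python
# Minimal sketch: Claude 3.5 Sonnet with a single (hypothetical) tool.
# Assumes the `anthropic` Python SDK and an ANTHROPIC_API_KEY environment variable.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",   # assumed model id for the 2024-10-22 release
    max_tokens=1024,
    tools=[{
        "name": "get_weather",            # hypothetical tool, for illustration only
        "description": "Return the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
)

# If the model decides to call the tool, the response contains a tool_use block.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```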
Grok 4 (xAI)
Grok 4 represents the fourth generation of xAI's language models, developed to continue advancing the frontier of AI reasoning and knowledge. Built to handle increasingly complex tasks with enhanced reliability, it demonstrates xAI's commitment to pushing AI capabilities forward.
Release dates: Claude 3.5 Sonnet (Anthropic) was released on 2024-10-22, and Grok-4 (xAI) on 2025-07-09, making Grok-4 roughly 8 months newer.
Cost per million tokens (USD): compared for Claude 3.5 Sonnet and Grok-4.
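To see how per-million-token pricing translates into the cost of a single request, here is a small worked sketch; the prices used are hypothetical placeholders, not the actual rates for either model.

```python
# Worked example: cost of one request under per-million-token pricing.
# The prices below are hypothetical placeholders, not the actual rates
# for Claude 3.5 Sonnet or Grok-4.
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Return the USD cost of a single request."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# e.g. a 3,000-token prompt with an 800-token reply at $3 / $15 per million tokens
print(f"${request_cost(3_000, 800, 3.00, 15.00):.4f}")  # -> $0.0210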
Context window and performance specifications
Average performance across 1 common benchmark is compared for Claude 3.5 Sonnet and Grok-4; Grok-4 is additionally listed with the date 2024-12-31.
Available providers and their performance metrics
Claude 3.5 Sonnet is available through Anthropic and Bedrock; Grok-4 is available through xAI and ZeroEval.
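For completeness, a minimal sketch of querying Grok-4 through xAI is shown below; it assumes xAI's OpenAI-compatible endpoint, and the base URL, environment variable name, and model id are assumptions to verify against the provider's documentation.

```python
# Minimal sketch: querying Grok-4 via xAI's OpenAI-compatible API.
# The base URL and model id below are assumptions; verify against xAI's docs.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["XAI_API_KEY"],   # assumed environment variable name
    base_url="https://api.x.ai/v1",      # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="grok-4",                      # assumed model id
    messages=[{"role": "user",
               "content": "Summarize the tradeoffs of a larger context window."}],
)
print(response.choices[0].message.content)
```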