Comprehensive side-by-side LLM comparison
Claude Haiku 4.5 leads with a 29.6% higher average benchmark score. GPT-4.1 mini offers a context window 680.3K tokens larger than Claude Haiku 4.5's, and it is $4.00 cheaper per million tokens. Overall, Claude Haiku 4.5 is the stronger choice for coding tasks.
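The per-million-token pricing gap above translates into per-request costs with simple arithmetic. The sketch below shows how; the specific prices in the table are assumptions for illustration only, not quotes from either provider's pricing page.

```python
# Hypothetical USD prices per million tokens (input, output).
# These are illustrative assumptions -- check each provider's
# pricing page for current rates.
PRICES = {
    "claude-haiku-4.5": {"input": 1.00, "output": 5.00},
    "gpt-4.1-mini": {"input": 0.40, "output": 1.60},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from per-million-token prices."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 10K input tokens and 1K output tokens.
gpt_cost = request_cost("gpt-4.1-mini", 10_000, 1_000)
haiku_cost = request_cost("claude-haiku-4.5", 10_000, 1_000)
```

Note that "cheaper per million tokens" compares combined input-plus-output list prices; the actual savings for a given workload depend on its input/output token ratio.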
Anthropic
Claude Haiku 4.5 continues Anthropic's tradition of fast, efficient models in the fourth generation Claude family. Designed to maintain the hallmark speed and cost-effectiveness of the Haiku line while incorporating advancements from the Claude 4 series, it serves applications requiring rapid processing and quick turnaround times.
OpenAI
GPT-4.1 mini was created as a smaller, more efficient variant of GPT-4.1, designed to provide strong capabilities with reduced computational requirements. Built to serve applications where speed and cost are priorities while maintaining solid performance, it extends GPT-4.1's capabilities to resource-conscious deployments.
Release dates (Claude Haiku 4.5 is 6 months newer)
GPT-4.1 mini (OpenAI): 2025-04-14
Claude Haiku 4.5 (Anthropic): 2025-10-15
Cost per million tokens (USD)
[Chart comparing Claude Haiku 4.5 and GPT-4.1 mini]
Context window and performance specifications
Average performance across 4 common benchmarks
[Chart comparing Claude Haiku 4.5 and GPT-4.1 mini]
Performance comparison across key benchmark categories
[Chart comparing Claude Haiku 4.5 and GPT-4.1 mini]
Knowledge cutoff
GPT-4.1 mini: 2024-05-31
Claude Haiku 4.5: 2025-02-01
Available providers and their performance metrics
Claude Haiku 4.5: Anthropic
GPT-4.1 mini: OpenAI
Benchmark data source: ZeroEval