Claude 3.5 Haiku vs. GPT-4o mini: comprehensive side-by-side LLM comparison
Claude 3.5 Haiku leads with a 5.9% higher average benchmark score and offers a larger context window than GPT-4o mini (200K vs. 128K tokens). GPT-4o mini is $4.05 cheaper per million tokens (combined input and output pricing) and supports multimodal inputs, while Claude 3.5 Haiku is available on 3 providers. Overall, Claude 3.5 Haiku is the stronger choice for coding tasks.
Anthropic
Claude 3.5 Haiku was developed as the next generation of Anthropic's fastest model, offering similar speed to Claude 3 Haiku while surpassing Claude 3 Opus on many intelligence benchmarks. Built with low latency, improved instruction following, and accurate tool use, it serves user-facing products, specialized sub-agent tasks, and data-heavy applications.
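To illustrate the low-latency, tool-use-oriented positioning described above, here is a minimal sketch of calling Claude 3.5 Haiku through the Anthropic Python SDK; the model identifier and the example prompt are assumptions, not details taken from this comparison.

```python
# Minimal sketch: calling Claude 3.5 Haiku via the Anthropic Python SDK.
# Assumes ANTHROPIC_API_KEY is set in the environment and that the model ID below is current.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-haiku-20241022",  # assumed model identifier
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Summarize this support ticket in two sentences: ..."},
    ],
)
print(response.content[0].text)
```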
OpenAI
GPT-4o Mini was created as a smaller, more efficient variant of GPT-4o, designed to bring multimodal capabilities to applications requiring faster response times and lower costs. Built to democratize access to advanced vision and text understanding, it enables developers to build sophisticated applications with reduced resource requirements.
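To illustrate the multimodal capability mentioned above, here is a minimal sketch of sending text plus an image URL to GPT-4o mini through the OpenAI Python SDK; the prompt and image URL are placeholders, not content from this page.

```python
# Minimal sketch: text + image input to GPT-4o mini via the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set; the image URL is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this chart."},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```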
Release dates (Claude 3.5 Haiku is about 3 months newer):
GPT-4o mini (OpenAI): 2024-07-18
Claude 3.5 Haiku (Anthropic): 2024-10-22
Cost per million tokens (USD):
Claude 3.5 Haiku: $0.80 input / $4.00 output
GPT-4o mini: $0.15 input / $0.60 output
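As a worked check of the pricing above, the sketch below computes each model's combined input-plus-output cost per million tokens and the gap between them; the hard-coded prices are assumptions based on public API pricing, and the helper name is illustrative.

```python
# Sketch: combined input + output cost per million tokens, and the gap between models.
# Prices (USD per 1M tokens) are assumptions based on public API pricing.
PRICES = {
    "claude-3.5-haiku": {"input": 0.80, "output": 4.00},
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}

def combined_cost(model: str) -> float:
    """Sum of input and output price per million tokens."""
    p = PRICES[model]
    return p["input"] + p["output"]

gap = combined_cost("claude-3.5-haiku") - combined_cost("gpt-4o-mini")
print(f"GPT-4o mini is ${gap:.2f} cheaper per million tokens (combined)")  # -> $4.05
```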
Context window and performance specifications:
Claude 3.5 Haiku: 200K-token context window, 8,192 max output tokens
GPT-4o mini: 128K-token context window, 16,384 max output tokens
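Because the two models differ mainly in context window size, a quick way to decide which one a prompt fits is to count tokens before sending it. The sketch below uses tiktoken's o200k_base encoding (used by GPT-4o-family models) as an approximation; Anthropic uses a different tokenizer, so the Claude figure is only a rough estimate, and the limits are the ones listed above.

```python
# Sketch: check whether a prompt fits within each model's context window.
# o200k_base matches GPT-4o-family tokenization; the Claude count is an approximation.
import tiktoken

CONTEXT_LIMITS = {"gpt-4o-mini": 128_000, "claude-3.5-haiku": 200_000}

def fits(prompt: str, reserved_output: int = 8_192) -> dict[str, bool]:
    """Return, per model, whether prompt tokens plus reserved output fit the window."""
    enc = tiktoken.get_encoding("o200k_base")
    n_tokens = len(enc.encode(prompt))
    return {model: n_tokens + reserved_output <= limit for model, limit in CONTEXT_LIMITS.items()}

print(fits("example prompt " * 1000))  # e.g. {'gpt-4o-mini': True, 'claude-3.5-haiku': True}
```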
Average performance across 6 common benchmarks (chart): Claude 3.5 Haiku vs. GPT-4o mini, with Claude 3.5 Haiku averaging 5.9% higher.
Performance comparison across key benchmark categories (chart): Claude 3.5 Haiku vs. GPT-4o mini.
GPT-4o mini training data cutoff: 2023-10-01
Available providers:
Claude 3.5 Haiku: Anthropic, Amazon Bedrock, Google Vertex AI
GPT-4o mini: OpenAI, Microsoft Azure
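Since Claude 3.5 Haiku is listed on multiple providers, here is a minimal sketch of invoking it through Amazon Bedrock's Converse API with boto3; the Bedrock model ID and region shown are assumptions and should be checked against the Bedrock model catalog for your account.

```python
# Sketch: calling Claude 3.5 Haiku through Amazon Bedrock's Converse API.
# The model ID and region are assumptions; confirm them in the Bedrock console.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-haiku-20241022-v1:0",  # assumed Bedrock model ID
    messages=[
        {"role": "user", "content": [{"text": "Classify this log line as INFO, WARN, or ERROR: ..."}]},
    ],
    inferenceConfig={"maxTokens": 256},
)
print(response["output"]["message"]["content"][0]["text"])
```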