Comprehensive side-by-side LLM comparison
o1 leads with a 38.7% higher average benchmark score, while Claude 3.5 Haiku offers a context window 100K tokens larger than o1's and is $70.20 cheaper per million tokens. Overall, o1 is the stronger choice for coding tasks.
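The per-million-token cost difference above can be sketched as a simple calculation. The input and output prices below are illustrative assumptions (typical published list prices), not figures taken from this page:

```python
# Assumed list prices in USD per million tokens (illustrative, not from this page).
HAIKU_PRICING = {"input": 0.80, "output": 4.00}   # assumed Claude 3.5 Haiku pricing
O1_PRICING = {"input": 15.00, "output": 60.00}    # assumed o1 pricing

def combined_cost(pricing: dict) -> float:
    """Combined input + output cost per million tokens."""
    return pricing["input"] + pricing["output"]

difference = combined_cost(O1_PRICING) - combined_cost(HAIKU_PRICING)
print(f"${difference:.2f} cheaper per million tokens")  # $70.20 cheaper per million tokens
```

Under these assumed prices, the combined input + output rates ($4.80 vs. $75.00) differ by exactly the $70.20 quoted in the summary.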
Anthropic
Claude 3.5 Haiku is a language model developed by Anthropic. It achieves strong performance, with an average score of 60.8% across 9 benchmarks, and excels particularly in HumanEval (88.1%), MGSM (85.6%), and DROP (83.1%). It supports a 400K-token context window for handling large documents and is available through 3 API providers. Released in 2024, it is Anthropic's latest Haiku-class model.
OpenAI
o1 is a language model developed by OpenAI. It achieves strong performance, with an average score of 71.6% across 19 benchmarks, and excels particularly in GSM8k (97.1%), MATH (96.4%), and GPQA Physics (92.8%). It supports a 300K-token context window for handling large documents and is available through 2 API providers. Released in 2024, it is OpenAI's latest reasoning model.
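An "average benchmark score" like the 71.6% above is just the mean of the per-benchmark scores. The sketch below uses only the three o1 scores quoted in this section, so its result necessarily differs from the average over all 19 benchmarks:

```python
# Mean of per-benchmark scores; only the three scores quoted above are used here.
scores = {"GSM8k": 97.1, "MATH": 96.4, "GPQA Physics": 92.8}
average = sum(scores.values()) / len(scores)
print(f"{average:.1f}%")  # 95.4%
```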
Release dates
Claude 3.5 Haiku (Anthropic): 2024-10-22
o1 (OpenAI): 2024-12-17
o1 is the newer model, released 56 days (roughly two months) after Claude 3.5 Haiku.
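The gap between the two release dates listed above can be checked directly with Python's standard `datetime` module:

```python
from datetime import date

# Release dates as listed in this comparison.
haiku_release = date(2024, 10, 22)
o1_release = date(2024, 12, 17)

gap_days = (o1_release - haiku_release).days
print(gap_days)  # 56 days, i.e. roughly two months
```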
Cost per million tokens (USD)
(Pricing chart comparing Claude 3.5 Haiku and o1.)
Context window and performance specifications
Claude 3.5 Haiku: 400K-token context window, 60.8% average benchmark score
o1: 300K-token context window, 71.6% average benchmark score
(Chart: average performance across 21 common benchmarks.)
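A practical use of the context-window figures is checking whether a document fits before sending it to a model. The sketch below uses the common rough heuristic of ~4 characters per token for English text; the heuristic is an assumption, and real tokenizers vary:

```python
# Rough fit check against a model's context window.
# The ~4 chars/token ratio is a heuristic assumption, not an exact tokenizer count.
def fits_in_context(text: str, context_window_tokens: int,
                    chars_per_token: float = 4.0) -> bool:
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= context_window_tokens

doc = "x" * 1_000_000   # ~250K estimated tokens
big = "x" * 2_000_000   # ~500K estimated tokens
print(fits_in_context(doc, 400_000))  # True: fits a 400K window
print(fits_in_context(big, 400_000))  # False: exceeds even the larger window
```

For real workloads a proper tokenizer should replace the character heuristic, but the comparison logic stays the same.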
Available providers
Claude 3.5 Haiku: Anthropic, Bedrock
o1: Azure, OpenAI