Claude 3 Haiku
Multimodal
Zero-eval
by Anthropic
About
Claude 3 Haiku was created as the fastest and most affordable model in its intelligence class, processing 21K tokens per second for prompts under 32K tokens. Built for enterprise workloads requiring quick analysis of large datasets, it combines state-of-the-art vision capabilities with strong performance on industry benchmarks while prioritizing speed, affordability, and enterprise-grade security.
Pricing Range
Input (per 1M): $0.25
Output (per 1M): $1.25
Providers: 3
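At the listed rates, per-request cost is a straight linear function of token counts. A minimal sketch of that arithmetic (rates taken from the table above; the example token counts are hypothetical):

```python
# Cost estimate for Claude 3 Haiku at the listed rates:
# $0.25 per 1M input tokens, $1.25 per 1M output tokens.
INPUT_RATE_PER_M = 0.25
OUTPUT_RATE_PER_M = 1.25

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# Example: a 10K-token prompt with a 1K-token reply.
cost = estimate_cost(10_000, 1_000)  # 0.0025 + 0.00125 = 0.00375 USD
```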
Timeline
Announced: Mar 13, 2024
Released: Mar 13, 2024
Specifications
Capabilities
Multimodal
License & Family
License
Proprietary
Performance Overview
Performance metrics and category breakdown
Overall Performance
10 benchmarks
Average Score: 71.5%
Best Score: 89.2%
High Performers (80%+): 3

Performance Metrics
Max Context Window: 400.0K
Avg Throughput: 82.0 tok/s
Avg Latency: 0ms
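At the averaged 82.0 tok/s across providers, expected generation time scales linearly with output length. A rough sketch of that estimate (ignores per-request latency, which is listed as 0ms here and likely reflects missing data):

```python
# Averaged provider throughput from the metrics above.
AVG_THROUGHPUT_TOK_S = 82.0

def estimate_generation_seconds(output_tokens: int) -> float:
    """Rough wall-clock estimate for generating output_tokens."""
    return output_tokens / AVG_THROUGHPUT_TOK_S

# Example: a 1,000-token completion takes roughly 12.2 seconds.
t = estimate_generation_seconds(1_000)
```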
All Benchmark Results for Claude 3 Haiku
Complete list of benchmark scores with detailed information
| Benchmark | Modality | Score | Percent | Source |
|---|---|---|---|---|
| ARC-C | text | 0.89 | 89.2% | Self-reported |
| GSM8k | text | 0.89 | 88.9% | Self-reported |
| HellaSwag | text | 0.86 | 85.9% | Self-reported |
| DROP | text | 0.78 | 78.4% | Self-reported |
| HumanEval | text | 0.76 | 75.9% | Self-reported |
| MMLU | text | 0.75 | 75.2% | Self-reported |
| MGSM | text | 0.75 | 75.1% | Self-reported |
| BIG-Bench Hard | text | 0.74 | 73.7% | Self-reported |
| MATH | text | 0.39 | 38.9% | Self-reported |
| GPQA | text | 0.33 | 33.3% | Self-reported |
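The summary figures in the Performance Overview follow directly from this table; recomputing them is a quick sanity check (percentage scores transcribed from the rows above):

```python
# Percentage scores from the benchmark table above.
scores = {
    "ARC-C": 89.2, "GSM8k": 88.9, "HellaSwag": 85.9, "DROP": 78.4,
    "HumanEval": 75.9, "MMLU": 75.2, "MGSM": 75.1,
    "BIG-Bench Hard": 73.7, "MATH": 38.9, "GPQA": 33.3,
}

average = sum(scores.values()) / len(scores)  # 71.45, reported as 71.5%
best = max(scores.values())                   # 89.2
high_performers = sum(1 for s in scores.values() if s >= 80.0)  # 3
```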