Claude 3 Sonnet
Multimodal · Zero-eval · by Anthropic
About
Claude 3 Sonnet was introduced as the balanced middle tier of the Claude 3 family, combining strong capabilities with operational speed. Designed to strike a practical balance between intelligence and speed for everyday tasks, it serves a wide range of enterprise and consumer applications.
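For orientation, below is a minimal sketch of calling this model through the Anthropic Python SDK. The snapshot identifier `claude-3-sonnet-20240229` is the published model id for this release; the prompt is hypothetical.

```python
# Minimal sketch: calling Claude 3 Sonnet via the Anthropic Python SDK.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-sonnet-20240229",  # snapshot id matching the Feb 29, 2024 release
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize the Claude 3 model family."}],
)
print(response.content[0].text)
```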
Pricing Range
Input (per 1M tokens): $3.00
Output (per 1M tokens): $15.00
Providers: 3
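Since pricing is quoted per million tokens, the cost of a single request follows from a simple calculation. A minimal sketch using the rates listed above (the token counts in the example are hypothetical):

```python
# Sketch: estimating per-request cost from the per-1M-token rates listed above.
INPUT_PRICE_PER_M = 3.00    # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 15.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Hypothetical example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost(2_000, 500):.4f}")  # $0.0135
```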
Timeline
Announced: Feb 29, 2024
Released: Feb 29, 2024
Specifications
Capabilities: Multimodal
License & Family
License
Proprietary
Performance Overview
Performance metrics and category breakdown
Overall Performance (11 benchmarks)
Average Score: 73.8%
Best Score: 93.2%
High Performers (80%+): 5

Performance Metrics
Max Context Window: 400.0K tokens
Avg Throughput: 87.3 tok/s
Avg Latency: 0 ms
All Benchmark Results for Claude 3 Sonnet
Complete list of benchmark scores with detailed information
| Benchmark | Modality | Score | Score (%) | Source |
|---|---|---|---|---|
| ARC-C | text | 0.93 | 93.2% | Self-reported |
| GSM8k | text | 0.92 | 92.3% | Self-reported |
| HellaSwag | text | 0.89 | 89.0% | Self-reported |
| MGSM | text | 0.83 | 83.5% | Self-reported |
| BIG-Bench Hard | text | 0.83 | 82.9% | Self-reported |
| MMLU | text | 0.79 | 79.0% | Self-reported |
| DROP | text | 0.79 | 78.9% | Self-reported |
| HumanEval | text | 0.73 | 73.0% | Self-reported |
| MMLU-Pro | text | 0.57 | 56.8% | Self-reported |
| MATH | text | 0.43 | 43.1% | Self-reported |
Showing 10 of 11 benchmarks.
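The summary figures above follow directly from these per-benchmark percentages. A minimal sketch in Python, using the ten scores listed here (the eleventh benchmark is not shown on this page, which is why the recomputed average differs slightly from the reported 73.8%):

```python
# Sketch: recomputing the summary statistics from the ten benchmark scores shown.
# The page reports 11 benchmarks; one is not listed, so the average differs slightly.
scores = {
    "ARC-C": 93.2, "GSM8k": 92.3, "HellaSwag": 89.0, "MGSM": 83.5,
    "BIG-Bench Hard": 82.9, "MMLU": 79.0, "DROP": 78.9,
    "HumanEval": 73.0, "MMLU-Pro": 56.8, "MATH": 43.1,
}

average = sum(scores.values()) / len(scores)
best = max(scores.values())
high_performers = sum(1 for s in scores.values() if s >= 80.0)

print(f"Average: {average:.1f}%  Best: {best:.1f}%  "
      f"High performers (80%+): {high_performers}")
# Best (93.2%) and the high-performer count (5) match the reported figures.
```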