
Claude 3.7 Sonnet

Multimodal
Zero-eval
#2 on IFEval

by Anthropic

About

Claude 3.7 Sonnet is Anthropic's first hybrid reasoning model: it can produce near-instant responses or engage in extended, step-by-step thinking that is visible to the user. The model shows particularly strong improvements in coding and front-end web development, lets users control how many tokens it spends thinking, and balances real-world task performance with reasoning capability for enterprise applications.
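As an illustration of the thinking-budget control, the sketch below shows how extended thinking might be requested through Anthropic's Python SDK; the model ID and token figures are assumptions chosen for the example, not values taken from this page.

```python
# Minimal sketch: requesting extended thinking with an explicit budget via
# the Anthropic Python SDK. Model ID and token counts are assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # assumed Claude 3.7 Sonnet model ID
    max_tokens=4096,                     # overall cap, must exceed the thinking budget
    thinking={
        "type": "enabled",
        "budget_tokens": 2048,           # tokens the model may spend on visible thinking
    },
    messages=[{"role": "user", "content": "Refactor this React component for readability."}],
)

# Thinking blocks (when returned) arrive separately from the final text blocks.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking)
    elif block.type == "text":
        print(block.text)
```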

Pricing Range
Input (per 1M tokens): $3.00 - $3.00
Output (per 1M tokens): $15.00 - $15.00
Providers: 4
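For a quick sense of scale, a back-of-the-envelope cost estimate at the listed rates might look like the following sketch; the request sizes are purely illustrative assumptions.

```python
# Rough cost estimate at the listed per-1M-token rates.
# The token counts below are illustrative, not measured usage.
INPUT_PER_MTOK = 3.00    # USD per 1M input tokens
OUTPUT_PER_MTOK = 15.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens / 1_000_000) * INPUT_PER_MTOK + \
           (output_tokens / 1_000_000) * OUTPUT_PER_MTOK

# e.g. a 20K-token prompt with a 2K-token answer
print(f"${estimate_cost(20_000, 2_000):.4f}")  # -> $0.0900
```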
Timeline
Announced: Feb 24, 2025
Released: Feb 24, 2025
Specifications
Capabilities: Multimodal

License & Family
License: Proprietary
Performance Overview
Performance metrics and category breakdown

Overall Performance (11 benchmarks)
Average Score: 74.1%
Best Score: 96.2%
High Performers (80%+): 6

Performance Metrics
Max Context Window: 328.0K
Avg Throughput: 56.8 tok/s
Avg Latency: 0 ms
All Benchmark Results for Claude 3.7 Sonnet
Benchmark scores with detailed information

Benchmark            Modality     Score    Source
MATH-500             text         96.2%    Self-reported
IFEval               text         93.2%    Self-reported
MMMLU                text         86.1%    Self-reported
GPQA                 text         84.8%    Self-reported
TAU-bench Retail     text         81.2%    Self-reported
AIME 2024            text         80.0%    Self-reported
MMMU                 multimodal   75.0%    Self-reported
SWE-Bench Verified   text         70.3%    Self-reported
TAU-bench Airline    text         58.4%    Self-reported
AIME 2025            text         54.8%    Self-reported

Showing 1 to 10 of 11 benchmarks
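As a small arithmetic check, the sketch below recomputes the mean of the ten scores listed above; because the 11th benchmark is not shown on this page, this figure differs from the reported 74.1% overall average.

```python
# Mean of the ten benchmark scores listed above (the 11th benchmark is not
# shown here, so this differs from the reported 74.1% overall average).
scores = {
    "MATH-500": 96.2, "IFEval": 93.2, "MMMLU": 86.1, "GPQA": 84.8,
    "TAU-bench Retail": 81.2, "AIME 2024": 80.0, "MMMU": 75.0,
    "SWE-Bench Verified": 70.3, "TAU-bench Airline": 58.4, "AIME 2025": 54.8,
}
print(f"Mean of listed scores: {sum(scores.values()) / len(scores):.1f}%")  # 78.0%
```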