
GPT-3.5 Turbo
by OpenAI
About
GPT-3.5 Turbo is a language model developed by OpenAI. It shows competitive results across 8 benchmarks; notable strengths include DROP (70.2%), MMLU (69.8%), and HumanEval (68.0%). The model is available through 2 API providers.
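For reference, the model can be queried through OpenAI's Chat Completions API. Below is a minimal sketch using the official Python SDK; the prompt, system message, and max_tokens value are illustrative assumptions, not part of this listing.

```python
# Minimal sketch: querying gpt-3.5-turbo via the OpenAI Python SDK (v1.x).
# The prompt and max_tokens here are illustrative, not listing data.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the DROP benchmark in one sentence."},
    ],
    max_tokens=100,
)
print(response.choices[0].message.content)
```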
Pricing Range
Input (per 1M tokens): $0.50 - $0.50
Output (per 1M tokens): $1.50 - $1.50
Providers: 2
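To put the listed rates in concrete terms, the sketch below estimates the cost of a single request at $0.50 per 1M input tokens and $1.50 per 1M output tokens; the token counts in the example are hypothetical.

```python
# Rough per-request cost at the listed rates. Token counts are hypothetical.
INPUT_PRICE_PER_M = 0.50   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.50  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the listed per-1M-token rates."""
    return (
        input_tokens / 1_000_000 * INPUT_PRICE_PER_M
        + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M
    )

# Example: a 2,000-token prompt with a 500-token completion
print(f"${request_cost(2_000, 500):.6f}")  # $0.001750
```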
Timeline
Announced: Mar 21, 2023
Released: Mar 21, 2023
Knowledge Cutoff: Sep 30, 2021
License & Family
License: Proprietary
Performance Overview
Performance metrics and category breakdown
Overall Performance (8 benchmarks)
Average Score: 42.3%
Best Score: 70.2%
High Performers (80%+): 0

Performance Metrics
Max Context Window: 20.5K tokens
Avg Throughput: 95.0 tok/s
Avg Latency: 1 ms
All Benchmark Results for GPT-3.5 Turbo
Complete list of benchmark scores with detailed information
Benchmark | Modality | Score | Score (%) | Status
DROP | text | 0.70 | 70.2% | Unverified
MMLU | text | 0.70 | 69.8% | Unverified
HumanEval | text | 0.68 | 68.0% | Unverified
MGSM | text | 0.56 | 56.3% | Unverified
MATH | text | 0.43 | 43.1% | Unverified
GPQA | text | 0.31 | 30.8% | Unverified
MMMU | multimodal | 0.00 | 0.0% | Unverified
MathVista | multimodal | 0.00 | 0.0% | Unverified
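The summary figures in the Performance Overview can be reproduced from this table. A minimal sketch, using the scores exactly as listed (the rounding to 42.3% is the listing's own):

```python
# Recomputing the summary figures from the benchmark rows above.
scores = {
    "DROP": 70.2, "MMLU": 69.8, "HumanEval": 68.0, "MGSM": 56.3,
    "MATH": 43.1, "GPQA": 30.8, "MMMU": 0.0, "MathVista": 0.0,
}

average = sum(scores.values()) / len(scores)   # 338.2 / 8 = 42.275
best = max(scores.values())                    # 70.2
high_performers = [b for b, s in scores.items() if s >= 80.0]  # none

print(f"Average Score: {average:.3f}% (reported as 42.3%)")
print(f"Best Score: {best}%")
print(f"High Performers (80%+): {len(high_performers)}")
```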