GPT-4
Multimodal
Zero-eval
#1 AI2 Reasoning Challenge (ARC)
#1 Uniform Bar Exam
#1 SAT Math
+3 more
by OpenAI
About
GPT-4 is a large multimodal model that accepts image and text inputs and produces text outputs. It was developed to exhibit human-level performance on various professional and academic benchmarks, and it marked a significant advancement over its predecessors in reliability, creativity, and handling of nuanced instructions.
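As a concrete illustration of the text-in, text-out interface described above, here is a minimal sketch using the official `openai` Python client. The model identifier, prompt, and parameter values are assumptions for illustration, not taken from this page; image inputs additionally require a vision-enabled variant of the model.

```python
# Minimal sketch of calling GPT-4 via the OpenAI chat completions API.
# Model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the Uniform Bar Exam in one sentence."},
    ],
)
print(response.choices[0].message.content)
```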
Pricing Range
Input (per 1M tokens): $30.00 (same across both providers)
Output (per 1M tokens): $60.00 (same across both providers)
Providers: 2
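Since pricing is quoted per million tokens, the cost of a single request follows from simple arithmetic. The sketch below assumes the rates listed above; the token counts in the example are hypothetical.

```python
# Cost estimate at the listed rates: $30.00 per 1M input tokens,
# $60.00 per 1M output tokens. Token counts below are hypothetical.
INPUT_PRICE_PER_M = 30.00
OUTPUT_PRICE_PER_M = 60.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost(2_000, 500):.4f}")  # $0.0900
```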
Timeline
Announced: Jun 13, 2023
Released: Jun 13, 2023
Knowledge Cutoff: Dec 31, 2022
Specifications
Capabilities
Multimodal
License & Family
License: Proprietary
Performance Overview
Performance metrics and category breakdown
Overall Performance (12 benchmarks)
Average Score: 77.7%
Best Score: 96.3%
High Performers (80%+): 8

Performance Metrics
Max Context Window: 65.5K tokens
Avg Throughput: 102.0 tok/s
Avg Latency: 0ms
All Benchmark Results for GPT-4
Benchmark scores with detailed information (showing 10 of 12 benchmarks)

| Benchmark | Modality | Score | Percentage | Source |
|---|---|---|---|---|
| AI2 Reasoning Challenge (ARC) | text | 0.96 | 96.3% | Self-reported |
| HellaSwag | text | 0.95 | 95.3% | Self-reported |
| Uniform Bar Exam | text | 0.90 | 90.0% | Self-reported |
| SAT Math | text | 0.89 | 89.0% | Self-reported |
| LSAT | text | 0.88 | 88.0% | Self-reported |
| Winogrande | text | 0.88 | 87.5% | Self-reported |
| MMLU | text | 0.86 | 86.4% | Self-reported |
| DROP | text | 0.81 | 80.9% | Self-reported |
| MGSM | text | 0.74 | 74.5% | Self-reported |
| HumanEval | text | 0.67 | 67.0% | Self-reported |
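As a sanity check on the summary statistics above, the card's figures can be recomputed from the table. The sketch below uses only the ten rows shown here, so its mean will differ from the 77.7% average reported over all twelve benchmarks; the high-performer count, however, matches.

```python
# Recompute summary statistics from the ten benchmark rows shown above.
# Two of the twelve benchmarks are not listed, so the mean here will not
# match the reported 77.7% average over all twelve.
scores = {
    "AI2 Reasoning Challenge (ARC)": 96.3,
    "HellaSwag": 95.3,
    "Uniform Bar Exam": 90.0,
    "SAT Math": 89.0,
    "LSAT": 88.0,
    "Winogrande": 87.5,
    "MMLU": 86.4,
    "DROP": 80.9,
    "MGSM": 74.5,
    "HumanEval": 67.0,
}

average = sum(scores.values()) / len(scores)
best = max(scores.values())
high_performers = sum(1 for s in scores.values() if s >= 80.0)

print(f"Average (10 of 12 shown): {average:.1f}%")    # 85.5%
print(f"Best score: {best:.1f}%")                     # 96.3%
print(f"High performers (80%+): {high_performers}")   # 8
```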