GLM-4.6
Multimodal
Zero-eval
#1 LiveCodeBench v6
#1 HLE
#3 AIME 2025
by Zhipu AI
About
GLM-4.6 is a multimodal language model developed by Zhipu AI. It achieves strong performance, with an average score of 61.2% across 7 benchmarks, and excels particularly in AIME 2025 (93.9%), LiveCodeBench v6 (82.8%), and GPQA (81.0%). It supports a 197K-token context window for handling large documents and is available through 2 API providers. As a multimodal model, it can process and understand text, images, and other input formats. It is licensed for commercial use, making it suitable for enterprise applications. Released in 2025, it represents Zhipu AI's latest advancement in AI technology.
Pricing Range
Input (per 1M): $0.60 - $0.60
Output (per 1M): $2.00 - $2.00
Providers: 2
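The listed rates translate directly into per-request costs. The sketch below is a hypothetical cost estimator (the function name and example token counts are illustrative, not from any provider SDK), using the card's rates of $0.60 per 1M input tokens and $2.00 per 1M output tokens:

```python
# Rates taken from the pricing range above (both providers list the same price).
INPUT_PRICE_PER_M = 0.60   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 2.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# e.g. a 100K-token prompt producing a 2K-token completion
print(f"${estimate_cost(100_000, 2_000):.4f}")  # $0.0640
```

At these rates, even a prompt near the full context window costs well under a quarter of a dollar in input tokens.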
Timeline
Announced: Sep 30, 2025
Released: Sep 30, 2025
Specifications
Capabilities
Multimodal
License & Family
License: MIT
Performance Overview
Performance metrics and category breakdown
Overall Performance
7 benchmarks
Average Score: 61.2%
Best Score: 93.9%
High Performers (80%+): 3
Performance Metrics
Max Context Window: 196.6K
Avg Throughput: 85.0 tok/s
Avg Latency: 1ms
All Benchmark Results for GLM-4.6
Complete list of benchmark scores with detailed information
Benchmark | Modality | Score | Percent | Source
AIME 2025 | text | 0.94 | 93.9% | Self-reported
LiveCodeBench v6 | text | 0.83 | 82.8% | Self-reported
GPQA | text | 0.81 | 81.0% | Self-reported
SWE-Bench Verified | text | 0.68 | 68.0% | Self-reported
BrowseComp | text | 0.45 | 45.1% | Self-reported
Terminal-Bench | text | 0.41 | 40.5% | Self-reported
HLE | multimodal | 0.17 | 17.2% | Self-reported
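The summary figures above can be reproduced from the individual results. A minimal check, using the seven self-reported percentages from the table:

```python
# Benchmark percentages as listed in the table above.
scores = {
    "AIME 2025": 93.9,
    "LiveCodeBench v6": 82.8,
    "GPQA": 81.0,
    "SWE-Bench Verified": 68.0,
    "BrowseComp": 45.1,
    "Terminal-Bench": 40.5,
    "HLE": 17.2,
}

# Average across all 7 benchmarks.
average = sum(scores.values()) / len(scores)
print(f"{average:.1f}%")  # 61.2%

# Benchmarks scoring 80% or above ("High Performers").
high_performers = [name for name, s in scores.items() if s >= 80.0]
print(len(high_performers))  # 3
```

Both values match the Performance Overview: a 61.2% average and 3 benchmarks at or above 80%.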