LLM Leaderboard for Agentic Coders
Total Models: 74 (AI models tracked)
Organizations: 17 (companies & labs)
Providers: 17 (API providers)
Benchmarks: 20 (evaluation metrics)
Top Models by SWE-rebench
[Table: top four models ranked by SWE-rebench score, released between Feb 5 and Feb 17, 2026, with per-benchmark results; #3 is GLM 5 (released Feb 11, 2026).]
SWE-bench Dominance Timeline
Models that achieved the highest SWE-bench score at the time of their release
Models on the timeline (Aug 2025 to Feb 2026): Claude Opus 4.5, GPT-5.1 Thinking, GPT-5, Claude Opus 4.1
Organizations: Anthropic (44.9%), OpenAI (55.1%)
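The timeline can be read as a running maximum over release dates: a model joins it only if its SWE-bench score exceeds that of every earlier release. Below is a minimal sketch of that computation in Python; the `Model` record is hypothetical and the dates and scores are illustrative, not the leaderboard's actual values.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Model:
    name: str
    org: str
    released: date
    swe_bench: float  # SWE-bench score in percent

def dominance_timeline(models: list[Model]) -> list[Model]:
    """Return the models that held the highest SWE-bench score at their release."""
    leaders: list[Model] = []
    best = float("-inf")
    for m in sorted(models, key=lambda m: m.released):
        if m.swe_bench > best:  # strictly better than every earlier model
            leaders.append(m)
            best = m.swe_bench
    return leaders

# Illustrative scores only; see the leaderboard for the actual numbers.
models = [
    Model("Claude Opus 4.1", "Anthropic", date(2025, 8, 5), 74.5),
    Model("GPT-5", "OpenAI", date(2025, 8, 7), 74.9),
    Model("GPT-5.1 Thinking", "OpenAI", date(2025, 11, 12), 76.3),
    Model("Claude Opus 4.5", "Anthropic", date(2025, 11, 24), 80.9),
]

for m in dominance_timeline(models):
    print(f"{m.released} {m.name} ({m.org}): {m.swe_bench:.1f}%")
```

With these illustrative scores, all four models appear, each as the new leader at its release date, which is how the chart above orders them.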
Coding Categories Performance
Model performance across different coding domains and specializations
Note: These rankings reflect performance on available benchmarks for each model. Rankings do not necessarily indicate absolute superiority in a category, as most models have not been evaluated on all benchmarks.
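One plausible reading of these category scores, consistent with the note above, is an average over whichever category benchmarks a model has results for; the leaderboard's exact aggregation is not specified here. A hedged sketch with hypothetical model and benchmark data:

```python
from statistics import mean

# Hypothetical results (percent); None means the model was not evaluated on that benchmark.
results: dict[str, dict[str, float | None]] = {
    "model-a": {"swe-bench-verified": 81.0, "terminal-bench": None},
    "model-b": {"swe-bench-verified": 80.0, "terminal-bench": 55.0},
}

def category_score(scores: dict[str, float | None]) -> tuple[float, int]:
    """Average only the benchmarks a model was actually evaluated on."""
    available = [s for s in scores.values() if s is not None]
    return mean(available), len(available)

for name, scores in results.items():
    score, n = category_score(scores)
    print(f"{name}: {score:.0f}% across {n} benchmark(s)")
```

Under this kind of aggregation, a model evaluated on a single benchmark can outrank one with broader coverage, which is exactly the caveat the note raises.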
Agentic Coding
Assesses autonomous agents for code editing, issue resolution, and tool-using workflows.
#1 Claude Opus 4.5 (Anthropic): 81% (1 benchmark)
#2 Claude Opus 4.6 (Anthropic): 81% (1 benchmark)
#3 Minimax M 2.5 (MiniMax): 80% (1 benchmark)
#4 GPT-5.2 (OpenAI): 80% (1 benchmark)
#5 Claude Sonnet 4.6 (Anthropic): 80% (1 benchmark)
Repository-Level Coding
Involves understanding and modifying code in full repositories.
#1 Claude Opus 4.5 (Anthropic): 81% (1 benchmark)
#2 Claude Opus 4.6 (Anthropic): 81% (1 benchmark)
#3 Minimax M 2.5 (MiniMax): 80% (1 benchmark)
#4 GPT-5.2 (OpenAI): 80% (1 benchmark)
#5 Claude Sonnet 4.6 (Anthropic): 80% (1 benchmark)