Comprehensive side-by-side LLM comparison
GPT-5.2 Codex leads with a 7.5% higher average benchmark score, making it the stronger choice for coding tasks overall.
Mistral AI
Devstral 2, released by Mistral AI on December 9, 2025, is a 123-billion-parameter dense transformer model designed specifically for software engineering. It features a 256K-token context window and scored 72.2% on SWE-bench Verified at release, making it a competitive open-weight option for code generation, multi-file software engineering, and agentic development workflows. It is distributed under a modified MIT license.
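A 256K-token context window still has to be budgeted when stuffing repositories into a prompt. The sketch below is a rough fit-check using the common 4-characters-per-token heuristic; it is not Devstral 2's actual tokenizer, and real budgeting should count tokens with the model's own tokenizer.

```python
# Rough context-budget check against Devstral 2's reported 256K-token window.
# CHARS_PER_TOKEN = 4 is a crude heuristic for English/code text, not the
# model's real tokenizer ratio.

CONTEXT_WINDOW = 256_000   # tokens, as reported for Devstral 2
CHARS_PER_TOKEN = 4        # rough heuristic

def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_context(prompt: str, reserved_for_output: int = 8_000) -> bool:
    """True if the prompt likely fits, leaving room for the model's reply."""
    return estimate_tokens(prompt) + reserved_for_output <= CONTEXT_WINDOW

print(fits_context("def add(a, b):\n    return a + b\n"))  # small snippet fits
```

A whole large monorepo (millions of characters) would fail this check and need chunking or retrieval before being sent to the model.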
OpenAI
GPT-5.2 Codex is a coding-specialized variant from the GPT-5.2 family, designed for software engineering workflows including automated code generation, multi-file editing, and agentic development. It builds on GPT-5.2's improved instruction following and long-context capabilities, with optimizations specifically targeting programming tasks and agentic software workflows.
Model            Developer   Release date
Devstral-2-123B  Mistral AI  2025-12-09
GPT-5.2 Codex    OpenAI      2026-01-14

GPT-5.2 Codex is about one month newer.
Context window and performance specifications
[Chart: average performance across one common benchmark, Devstral-2-123B vs GPT-5.2 Codex]
[Chart: performance comparison across key benchmark categories, Devstral-2-123B vs GPT-5.2 Codex]
Available providers and their performance metrics:
Devstral-2-123B: OpenRouter
GPT-5.2 Codex: no providers listed
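Since Devstral-2-123B is listed on OpenRouter, it can be queried through OpenRouter's OpenAI-compatible chat completions endpoint. The sketch below only builds the request; it does not send it (that requires an API key), and the model slug `mistralai/devstral-2` is a guess for illustration — check OpenRouter's model list for the real identifier.

```python
import json

# Build (but do not send) an OpenAI-compatible chat request for Devstral 2
# via OpenRouter. MODEL_SLUG is hypothetical; verify it against OpenRouter's
# published model list before use.

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL_SLUG = "mistralai/devstral-2"  # hypothetical slug

def build_request(prompt: str, api_key: str) -> tuple[dict, dict]:
    """Return (headers, payload) for a single-turn chat completion."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": MODEL_SLUG,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, payload

headers, payload = build_request("Write a unit test for a stack.", "sk-...")
print(json.dumps(payload, indent=2))
```

Posting `payload` with those headers to `OPENROUTER_URL` (e.g. with `requests.post`) would return the completion in the standard OpenAI response shape.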