Comprehensive side-by-side LLM comparison
MiniMax M2.5 leads with a 6.8% higher average benchmark score. Overall, MiniMax M2.5 is the stronger choice for coding tasks.
Xiaomi
MiMo-V2-Flash, released by Xiaomi on December 16, 2025, is a Mixture-of-Experts large language model with 309 billion total parameters, 15 billion of which are active per inference pass, designed for high-speed reasoning and agentic workflows. It offers a 256K-token context window, generates up to 150 tokens per second, and was trained on 27 trillion tokens. Released under the MIT license, MiMo-V2-Flash targets open-source deployments that need fast, capable coding and reasoning with an efficient inference footprint.
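The published specs lend themselves to some quick arithmetic. The sketch below is a back-of-the-envelope check using only the figures quoted above; the int8 weight-storage assumption is ours, not a deployment configuration Xiaomi has published:

```python
# Back-of-the-envelope numbers from MiMo-V2-Flash's published specs.
# The 1-byte-per-parameter (int8) memory estimate is an assumption
# for illustration, not a published serving configuration.

TOTAL_PARAMS = 309e9      # total MoE parameters
ACTIVE_PARAMS = 15e9      # parameters active per inference pass
CONTEXT_TOKENS = 256_000  # context window
DECODE_TPS = 150          # quoted generation speed, tokens/second

# Fraction of weights touched per token: the efficiency lever of MoE.
active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"Active per token: {active_fraction:.1%} of total weights")

# Rough weight-storage footprint at 1 byte per parameter (int8).
print(f"Weights at int8: ~{TOTAL_PARAMS / 1e9:.0f} GB")

# Time to generate a full context window's worth of tokens at quoted speed.
print(f"256K tokens at {DECODE_TPS} tok/s: ~{CONTEXT_TOKENS / DECODE_TPS / 60:.0f} min")
```

The takeaway is that only about 5% of the weights are exercised per token, which is what lets a 309B-parameter model sustain the quoted 150 tokens per second.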
MiniMax
MiniMax M2.5 is a large language model from MiniMax extensively trained with reinforcement learning across hundreds of thousands of complex real-world environments. It targets agentic tool use, coding automation, and office productivity tasks, with strong results on software engineering and web browsing benchmarks. M2.5 represents the next generation of MiniMax's M-series models optimized for production agentic workloads.
Model           Developer   Release date
MiMo-V2-Flash   Xiaomi      2025-12-16
MiniMax M2.5    MiniMax     2026-02-13

MiniMax M2.5 is the newer model, released about two months after MiMo-V2-Flash.
Context window and performance specifications
[Chart: average performance of MiMo-V2-Flash and MiniMax M2.5 across 1 common benchmark; values not recoverable]
[Chart: performance of MiMo-V2-Flash and MiniMax M2.5 across key benchmark categories; values not recoverable]
Available providers and their performance metrics
[Table: provider listings for MiMo-V2-Flash and MiniMax M2.5, with MiniMax listed as a provider; metrics not recoverable]