Comprehensive side-by-side LLM comparison: GPT-OSS-120B vs. MiniMax M2.5
MiniMax M2.5 offers roughly 72K more tokens of context window than GPT-OSS-120B. A per-million-token price comparison is less clear-cut, since GPT-OSS-120B is open weight and its effective cost depends on how and where it is hosted. Both models have their strengths depending on your specific coding needs.
OpenAI
GPT-OSS-120B, released by OpenAI in August 2025, is an open-weight large language model with 120 billion parameters, distributed under the Apache 2.0 license. It marks OpenAI's entry into the open-weight model space, letting developers self-host and fine-tune a model from the same generation as GPT-5. GPT-OSS-120B targets research applications, on-premises deployments, and custom fine-tuning workflows that require a large open-weight base model.
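Because the weights are openly distributed, a common starting point is to load the checkpoint from the Hugging Face Hub (listed below as a provider) and run it locally. The following is a minimal sketch using the transformers library; the repository id, precision, and device settings are illustrative assumptions, and a 120B-parameter model needs substantial GPU memory or quantization in practice.

```python
# Minimal sketch: load an open-weight checkpoint from the Hugging Face Hub
# and run a single generation. The repo id "openai/gpt-oss-120b" and the
# hardware settings are assumptions for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-120b"  # assumed Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard layers across available GPUs
    torch_dtype="auto",  # use the checkpoint's native precision
)

prompt = "Explain the difference between a process and a thread."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```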
MiniMax
MiniMax M2.5 is a large language model from MiniMax extensively trained with reinforcement learning across hundreds of thousands of complex real-world environments. It targets agentic tool use, coding automation, and office productivity tasks, with strong results on software engineering and web browsing benchmarks. M2.5 represents the next generation of MiniMax's M-series models optimized for production agentic workloads.
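Given M2.5's focus on agentic tool use, a typical integration pattern is a chat-completions call that passes tool definitions and inspects any tool calls the model returns. The sketch below uses the OpenAI Python client against an OpenAI-compatible endpoint; the base URL, model identifier, environment variable, and run_shell tool are assumptions for illustration, not documented values.

```python
# Illustrative sketch of calling an agentic model through an OpenAI-compatible
# chat-completions endpoint with a tool definition. The base URL, model name,
# and environment variable are assumptions, not documented values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.minimax.io/v1",   # assumed OpenAI-compatible endpoint
    api_key=os.environ["MINIMAX_API_KEY"],  # assumed credential location
)

tools = [{
    "type": "function",
    "function": {
        "name": "run_shell",  # hypothetical tool exposed to the model
        "description": "Run a shell command in the project workspace.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]

response = client.chat.completions.create(
    model="MiniMax-M2.5",  # assumed model identifier
    messages=[{"role": "user", "content": "List the failing tests in this repo."}],
    tools=tools,
)

# If the model decides to call a tool, the call arrives as structured arguments.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```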
Model           Developer   Release date
GPT-OSS-120B    OpenAI      2025-08
MiniMax M2.5    MiniMax     2026-02-13 (6 months newer)
Cost per million tokens (USD): pricing comparison chart for GPT-OSS-120B and MiniMax M2.5.
Context window and performance specifications

Available providers:
GPT-OSS-120B: Hugging Face
MiniMax M2.5: MiniMax