Comprehensive side-by-side LLM comparison
Jamba 1.5 Mini leads with a 13.4% higher average benchmark score. It also offers roughly 256K more tokens of context window than Mistral Small 3.1 24B Base (512K vs. 256K). The two models are similarly priced. Mistral Small 3.1 24B Base supports multimodal input. Overall, Jamba 1.5 Mini is the stronger choice for coding tasks.
AI21 Labs
Jamba 1.5 Mini is a language model developed by AI21 Labs. It shows competitive results across 8 benchmarks, excelling particularly in ARC-C (85.7%), GSM8k (75.8%), and MMLU (69.7%). Its 512K-token context window makes it well suited to large-document workloads, and the model is available through 2 API providers. Released in 2024, it represents AI21 Labs' latest advancement in AI technology.
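To make the API-provider point concrete, here is a minimal sketch of querying Jamba 1.5 Mini through AI21's own Python SDK. The model name "jamba-1.5-mini" and the client calls follow AI21's published SDK as an assumption, not something stated on this page; verify against current AI21 documentation.

```python
# Minimal sketch: querying Jamba 1.5 Mini via the AI21 Python SDK (pip install ai21).
# Assumption: the "jamba-1.5-mini" model name and AI21Client interface match
# AI21's published SDK; check the current docs before relying on this.
import os

from ai21 import AI21Client
from ai21.models.chat import ChatMessage

client = AI21Client(api_key=os.environ["AI21_API_KEY"])

response = client.chat.completions.create(
    model="jamba-1.5-mini",
    messages=[ChatMessage(role="user", content="Summarize this contract clause: ...")],
    max_tokens=200,
)
print(response.choices[0].message.content)
```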
Mistral AI
Mistral Small 3.1 24B Base is a multimodal language model developed by Mistral AI. It achieves strong performance with an average score of 62.9% across 5 benchmarks, excelling particularly in MMLU (81.0%), TriviaQA (80.5%), and MMMU (59.3%). It supports a 256K-token context window for handling large documents and is available through 1 API provider. As a multimodal model, it can process text and images in a single request. It is licensed for commercial use, making it suitable for enterprise applications. Released in 2025, it represents Mistral AI's latest advancement in AI technology.
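A multimodal request through Mistral's Python SDK might look like the sketch below. The model identifier "mistral-small-2503" is an assumption, and a base (non-instruct) checkpoint may not be exposed on the chat endpoint at all, so treat this purely as an illustration of the text-plus-image message format.

```python
# Minimal sketch: a multimodal (text + image) request via Mistral's Python SDK
# (pip install mistralai). Assumptions: the "mistral-small-2503" identifier and
# the availability of this checkpoint on the chat endpoint; confirm against
# Mistral's current model list.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="mistral-small-2503",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this chart in one sentence."},
                {"type": "image_url", "image_url": "https://example.com/chart.png"},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```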
Release dates: Mistral Small 3.1 24B Base is the newer model by roughly seven months.
Jamba 1.5 Mini (AI21 Labs): released 2024-08-22
Mistral Small 3.1 24B Base (Mistral AI): released 2025-03-17
Cost per million tokens (USD): pricing chart for Jamba 1.5 Mini and Mistral Small 3.1 24B Base; as noted above, the two models are similarly priced.
Context window and performance specifications
Average performance across 10 common benchmarks: chart comparing Jamba 1.5 Mini and Mistral Small 3.1 24B Base.
Knowledge cutoff: Jamba 1.5 Mini, 2024-03-05.
Available providers and their performance metrics
Jamba 1.5 Mini: available via Bedrock
Mistral Small 3.1 24B Base: available via Mistral AI
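Since Bedrock is the listed provider for Jamba 1.5 Mini, here is a minimal sketch using the AWS Bedrock Converse API via boto3. The model ID "ai21.jamba-1-5-mini-v1:0" is an assumption based on Bedrock's AI21 catalog naming; confirm it in the Bedrock console for your region.

```python
# Minimal sketch: invoking Jamba 1.5 Mini on AWS Bedrock with boto3's Converse API.
# Assumption: the model ID "ai21.jamba-1-5-mini-v1:0" matches Bedrock's AI21
# catalog entry and is enabled for your account/region.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="ai21.jamba-1-5-mini-v1:0",
    messages=[
        {"role": "user", "content": [{"text": "List three uses for a 512K context window."}]}
    ],
    inferenceConfig={"maxTokens": 200},
)
print(response["output"]["message"]["content"][0]["text"])
```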