
Mistral Small 3.1 24B Base
Multimodal
Zero-eval
by Mistral AI
About
Mistral Small 3.1 24B Base is a multimodal language model developed by Mistral AI. Across 5 benchmarks it averages 62.9%, with its strongest results on MMLU (81.0%), TriviaQA (80.5%), and MMMU (59.3%). It supports a 256K-token context window for handling large documents and is available through 1 API provider. As a multimodal model, it accepts both text and image inputs. Its Apache 2.0 license permits commercial use, making it suitable for enterprise applications. Released in March 2025, it is among Mistral AI's most recent models.
Pricing Range
Input (per 1M tokens): $0.10
Output (per 1M tokens): $0.30
Providers: 1
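Using the listed rates ($0.10 per 1M input tokens, $0.30 per 1M output tokens), per-request cost can be estimated as a simple weighted sum. A minimal sketch; the token counts in the example are illustrative, not from the listing:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float = 0.10, output_rate: float = 0.30) -> float:
    """Estimate cost in USD. Rates are per 1M tokens, taken from the listing."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Illustrative request: 500K input tokens, 200K output tokens
print(f"${estimate_cost(500_000, 200_000):.2f}")  # prints $0.11
```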
Timeline
Announced: Mar 17, 2025
Released: Mar 17, 2025
Specifications
Capabilities
Multimodal
License & Family
License
Apache 2.0
Performance Overview
Performance metrics and category breakdown
Overall Performance: 5 benchmarks
Average Score: 62.9%
Best Score: 81.0%
High Performers (80%+): 2
Performance Metrics
Max Context Window: 256.0K
Avg Throughput: 137.1 tok/s
Avg Latency: 0ms
All Benchmark Results for Mistral Small 3.1 24B Base
Complete list of benchmark scores with detailed information
Benchmark | Modality | Score | Percentage | Source
MMLU | text | 0.81 | 81.0% | Self-reported
TriviaQA | text | 0.81 | 80.5% | Self-reported
MMMU | multimodal | 0.59 | 59.3% | Self-reported
MMLU-Pro | text | 0.56 | 56.0% | Self-reported
GPQA | text | 0.38 | 37.5% | Self-reported
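The 62.9% average reported above can be reproduced directly from the five listed scores:

```python
# Benchmark percentages as listed on this page
scores = {
    "MMLU": 81.0,
    "TriviaQA": 80.5,
    "MMMU": 59.3,
    "MMLU-Pro": 56.0,
    "GPQA": 37.5,
}

average = sum(scores.values()) / len(scores)
print(f"{average:.1f}%")  # prints 62.9%
```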