Mistral Small 3.1 24B Base
Multimodal
Zero-eval
by Mistral AI
About
Mistral Small 3.1 24B Base is an updated iteration of Mistral's 24B foundation model, with architectural refinements and improved training. Built to provide a stronger base for fine-tuning, it incorporates learnings from previous versions for better downstream performance.
Pricing Range
Input (per 1M tokens): $0.10
Output (per 1M tokens): $0.30
Providers: 1
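Since prices are quoted per one million tokens, the cost of a single request is a linear combination of its input and output token counts. A minimal sketch of that arithmetic, using the listed rates (the token counts in the example are made-up illustration values, not measurements):

```python
# Sketch: estimate request cost from the per-1M-token prices listed above.
INPUT_PRICE_PER_M = 0.10   # USD per 1M input tokens (from the pricing table)
OUTPUT_PRICE_PER_M = 0.30  # USD per 1M output tokens (from the pricing table)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Hypothetical request: 2,000 prompt tokens, 500 completion tokens
print(f"${estimate_cost(2_000, 500):.6f}")  # -> $0.000350
```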
Timeline
Announced: Mar 17, 2025
Released: Mar 17, 2025
Specifications
Capabilities
Multimodal
License & Family
License
Apache 2.0
Performance Overview
Performance metrics and category breakdown
Overall Performance (5 benchmarks)
Average Score: 62.9%
Best Score: 81.0%
High Performers (80%+): 2

Performance Metrics
Max Context Window: 256.0K
Avg Throughput: 137.1 tok/s
Avg Latency: 0ms
All Benchmark Results for Mistral Small 3.1 24B Base
Complete list of benchmark scores with detailed information
| Benchmark | Type | Score | Source |
| --- | --- | --- | --- |
| MMLU | text | 81.0% | Self-reported |
| TriviaQA | text | 80.5% | Self-reported |
| MMMU | multimodal | 59.3% | Self-reported |
| MMLU-Pro | text | 56.0% | Self-reported |
| GPQA | text | 37.5% | Self-reported |
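The 62.9% Average Score shown above appears to be a plain unweighted mean of these five self-reported scores; a quick consistency check (assuming equal weighting, which the page does not state explicitly):

```python
# Consistency check: unweighted mean of the five benchmark scores listed above.
scores = {
    "MMLU": 81.0,
    "TriviaQA": 80.5,
    "MMMU": 59.3,
    "MMLU-Pro": 56.0,
    "GPQA": 37.5,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.1f}%")  # -> 62.9%, matching the Average Score reported above
```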