Mistral Small 3 24B Instruct
Zero-eval
#2 on Wild Bench
by Mistral AI
About
Mistral Small 3 24B Instruct is the instruction-tuned version of the 24B base model, designed to follow user instructions reliably. Built for general-purpose applications that require moderate capability, it balances performance with deployment practicality.
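As an instruction-tuned chat model, it is typically driven with a chat-formatted prompt. The sketch below shows one way to do that with Hugging Face transformers; the repository id, generation settings, and example messages are assumptions for illustration, not details taken from this page.

```python
# Minimal chat-style inference sketch (assumes the Hugging Face repo id below,
# transformers + accelerate installed, and a GPU with enough memory).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Small-24B-Instruct-2501"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."},
]

# The tokenizer's chat template formats the messages the way the model expects.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```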
Pricing Range
Input (per 1M tokens): $0.07 – $0.10
Output (per 1M tokens): $0.14 – $0.30
Providers: 2
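For rough budgeting, the listed per-1M-token prices translate into per-request costs as in the sketch below; the token counts are hypothetical, and the low end of each price range is used.

```python
# Back-of-the-envelope request cost from the listed per-1M-token prices.
INPUT_PER_M = 0.07   # USD per 1M input tokens (low end of the listed range)
OUTPUT_PER_M = 0.14  # USD per 1M output tokens (low end of the listed range)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request at the low-end prices."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# Example: a 2,000-token prompt with a 500-token reply (hypothetical numbers).
print(f"${request_cost(2_000, 500):.6f}")  # ≈ $0.000210
```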
Timeline
Announced: Jan 30, 2025
Released: Jan 30, 2025
Knowledge Cutoff: Oct 1, 2023
License & Family
License: Apache 2.0
Performance Overview
Performance metrics and category breakdown
Overall Performance (8 benchmarks)
Average Score: 71.7%
Best Score: 87.6%
High Performers (80%+): 4

Performance Metrics
Max Context Window: 64.0K
Avg Throughput: 91.5 tok/s
Avg Latency: 0 ms
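The listed average throughput also gives a rough wall-clock estimate for a completion, as sketched below with a hypothetical 1,000-token output.

```python
# Rough generation-time estimate from the listed average throughput.
AVG_THROUGHPUT_TOK_S = 91.5  # tok/s, as listed above

def generation_seconds(output_tokens: int) -> float:
    """Approximate seconds to stream `output_tokens` at the listed throughput."""
    return output_tokens / AVG_THROUGHPUT_TOK_S

print(f"{generation_seconds(1_000):.1f} s")  # ≈ 10.9 s for a 1,000-token completion
```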
All Benchmark Results for Mistral Small 3 24B Instruct
Complete list of benchmark scores with detailed information
| Benchmark | Modality | Score (0–1) | Score (%) | Source |
|---|---|---|---|---|
| Arena Hard | text | 0.88 | 87.6% | Self-reported |
| HumanEval | text | 0.85 | 84.8% | Self-reported |
| MT-Bench | text | 0.83 | 83.5% | Self-reported |
| IFEval | text | 0.83 | 82.9% | Self-reported |
| MATH | text | 0.71 | 70.6% | Self-reported |
| MMLU-Pro | text | 0.66 | 66.3% | Self-reported |
| Wild Bench | text | 0.52 | 52.2% | Self-reported |
| GPQA | text | 0.45 | 45.3% | Self-reported |
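The 71.7% Average Score above is the unweighted mean of these eight self-reported percentages, which can be checked directly:

```python
# Unweighted mean of the eight self-reported benchmark scores listed above.
scores = {
    "Arena Hard": 87.6,
    "HumanEval": 84.8,
    "MT-Bench": 83.5,
    "IFEval": 82.9,
    "MATH": 70.6,
    "MMLU-Pro": 66.3,
    "Wild Bench": 52.2,
    "GPQA": 45.3,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.2f}%")  # 71.65%, i.e. the 71.7% Average Score above after rounding
```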