Mistral NeMo Instruct

Zero-eval rankings: #1 CommonSenseQA, #2 Natural Questions

by Mistral AI

About

Mistral NeMo was developed as a mid-sized instruction-tuned model designed to balance capability with efficiency for practical deployments. Built to serve as a versatile foundation for a range of applications, it provides reliable performance across general language understanding and generation tasks.
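
For orientation, the sketch below shows one way to query the instruction-tuned checkpoint locally with the Hugging Face transformers chat pipeline. The checkpoint id mistralai/Mistral-Nemo-Instruct-2407 is an assumption (it is not listed on this page); substitute whatever id your provider exposes.

```python
# Minimal sketch: chat with Mistral NeMo Instruct via the transformers pipeline.
# The model id below is an assumption, not taken from this page.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="mistralai/Mistral-Nemo-Instruct-2407",
    device_map="auto",   # place weights on available GPU(s)/CPU
    torch_dtype="auto",  # use the checkpoint's native precision
)

messages = [
    {"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."}
]

# The pipeline applies the model's chat template to the message list and
# returns the conversation with the new assistant turn appended.
out = chat(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])
```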

Pricing Range
Input (per 1M tokens): $0.15 - $0.15
Output (per 1M tokens): $0.15 - $0.15
Providers: 2
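
Because both input and output are priced per million tokens, the cost of a single request is simple arithmetic. The sketch below uses the $0.15/1M figures listed above; the token counts are illustrative values, not measurements.

```python
# Hypothetical per-request cost, using the per-1M-token prices listed above.
INPUT_PRICE_PER_M = 0.15   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.15  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost(2_000, 500):.6f}")  # -> $0.000375
```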
Timeline
Announced: Jul 18, 2024
Released: Jul 18, 2024
License & Family
License: Apache 2.0
Performance Overview
Performance metrics and category breakdown

Overall Performance (8 benchmarks)
Average Score: 64.3%
Best Score: 83.5%
High Performers (80%+): 1

Performance Metrics
Max Context Window: 256.0K
Avg Throughput: 21.1 tok/s
Avg Latency: 0ms
All Benchmark Results for Mistral NeMo Instruct
Complete list of benchmark scores with detailed information

Benchmark           Modality   Score   Percentage   Source
HellaSwag           text       0.83    83.5%        Self-reported
Winogrande          text       0.77    76.8%        Self-reported
TriviaQA            text       0.74    73.8%        Self-reported
CommonSenseQA       text       0.70    70.4%        Self-reported
MMLU                text       0.68    68.0%        Self-reported
OpenBookQA          text       0.61    60.6%        Self-reported
TruthfulQA          text       0.50    50.3%        Self-reported
Natural Questions   text       0.31    31.2%        Self-reported
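
The 64.3% average reported above is consistent with an unweighted mean of the eight self-reported percentages; a quick check, assuming a simple arithmetic mean:

```python
# Verify the summary figures against the benchmark table above.
scores = {
    "HellaSwag": 83.5,
    "Winogrande": 76.8,
    "TriviaQA": 73.8,
    "CommonSenseQA": 70.4,
    "MMLU": 68.0,
    "OpenBookQA": 60.6,
    "TruthfulQA": 50.3,
    "Natural Questions": 31.2,
}

average = sum(scores.values()) / len(scores)
print(f"Average score: {average:.1f}%")            # -> 64.3%
print(f"Best score: {max(scores.values()):.1f}%")  # -> 83.5%
print(f"High performers (80%+): {sum(s >= 80 for s in scores.values())}")  # -> 1
```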