Mistral NeMo Instruct

Zero-eval
#1 CommonSenseQA
#2 Natural Questions

by Mistral AI

About

Mistral NeMo Instruct is a language model developed by Mistral AI. It achieves an average score of 64.3% across 8 benchmarks, with its strongest results on HellaSwag (83.5%), Winogrande (76.8%), and TriviaQA (73.8%). It supports a 256K-token context window for handling large documents and is available through 2 API providers. Released in July 2024 under the Apache 2.0 license, it is suitable for commercial and enterprise use.
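
As a rough illustration of what the 256K-token context window allows, the sketch below estimates whether a document is likely to fit. The ~4 characters-per-token ratio is a heuristic assumption, not the model's actual tokenizer; the function name and reserved-output figure are hypothetical.

```python
# Minimal sketch: will a document likely fit in the advertised 256K-token context window?
CONTEXT_WINDOW_TOKENS = 256_000   # from the model card above
CHARS_PER_TOKEN_ESTIMATE = 4      # assumption; use the real tokenizer for exact counts

def likely_fits(text: str, reserved_output_tokens: int = 1_000) -> bool:
    """Rough check that the prompt plus reserved output stays under the context window."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN_ESTIMATE
    return estimated_tokens + reserved_output_tokens <= CONTEXT_WINDOW_TOKENS

print(likely_fits("word " * 50_000))  # ~250k chars -> ~62.5k tokens -> True
```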

Pricing Range
Input (per 1M tokens): $0.15 - $0.15
Output (per 1M tokens): $0.15 - $0.15
Providers: 2
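
At the listed rate of $0.15 per 1M tokens for both input and output, per-request cost is simple arithmetic. The sketch below is a minimal illustration; the function name and example token counts are hypothetical.

```python
# Estimate request cost at the listed provider pricing for Mistral NeMo Instruct.
INPUT_PRICE_PER_M = 0.15   # USD per 1M input tokens (from the pricing table)
OUTPUT_PRICE_PER_M = 0.15  # USD per 1M output tokens (from the pricing table)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost(2_000, 500):.6f}")  # -> $0.000375
```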
Timeline
Announced: Jul 18, 2024
Released: Jul 18, 2024
License & Family
License: Apache 2.0
Performance Overview
Performance metrics and category breakdown

Overall Performance (8 benchmarks)
Average Score: 64.3%
Best Score: 83.5%
High Performers (80%+): 1

Performance Metrics
Max Context Window: 256.0K tokens
Avg Throughput: 21.1 tok/s
Avg Latency: 0 ms
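
Given the reported average throughput of 21.1 tok/s, a back-of-the-envelope estimate of how long a completion takes to stream looks like the sketch below. This is a rough assumption-based calculation; real latency and throughput vary by provider and load.

```python
# Rough generation-time estimate from the listed average throughput.
AVG_THROUGHPUT_TOK_S = 21.1  # tokens per second, from the metrics above

def generation_time_seconds(output_tokens: int) -> float:
    """Approximate wall-clock time to stream `output_tokens` at the average throughput."""
    return output_tokens / AVG_THROUGHPUT_TOK_S

print(f"{generation_time_seconds(500):.1f} s")  # 500 output tokens -> ~23.7 s
```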
All Benchmark Results for Mistral NeMo Instruct
Complete list of benchmark scores with detailed information

Benchmark          Modality  Score  Percentage  Source
HellaSwag          text      0.83   83.5%       Self-reported
Winogrande         text      0.77   76.8%       Self-reported
TriviaQA           text      0.74   73.8%       Self-reported
CommonSenseQA      text      0.70   70.4%       Self-reported
MMLU               text      0.68   68.0%       Self-reported
OpenBookQA         text      0.61   60.6%       Self-reported
TruthfulQA         text      0.50   50.3%       Self-reported
Natural Questions  text      0.31   31.2%       Self-reported