Phi-3.5-MoE-instruct

Zero-eval rankings: #1 OpenBookQA · #1 PIQA · #1 RULER · +14 more

by Microsoft

About

Phi-3.5-MoE-instruct is built on a mixture-of-experts architecture: only a sparse subset of experts is activated for each input, which provides enhanced capability while keeping compute costs practical. It represents Microsoft's exploration of efficient scaling techniques, combining the benefits of larger models with modest computational requirements.
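To illustrate the idea of sparse activation, here is a minimal sketch of top-k expert routing in NumPy. This is not Microsoft's implementation; the dimensions, gating scheme, and expert count are illustrative assumptions, showing only the core mechanism: a gate scores all experts per token, but only the top-k selected experts actually run.

```python
import numpy as np

def moe_layer(x, expert_weights, gate_weights, top_k=2):
    """Toy sparse mixture-of-experts layer (illustrative only).

    Each token is routed to its top-k experts; their outputs are
    mixed by softmax probabilities computed over the selected
    experts. Unselected experts are never evaluated, which is the
    source of the compute savings.
    """
    logits = x @ gate_weights                       # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]   # indices of top-k experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = top[t]
        probs = np.exp(logits[t, sel] - logits[t, sel].max())
        probs /= probs.sum()                        # softmax over selected experts only
        for p, e in zip(probs, sel):
            out[t] += p * (x[t] @ expert_weights[e])  # only k experts run per token
    return out

# Hypothetical sizes for demonstration.
rng = np.random.default_rng(0)
d, n_experts, tokens = 8, 16, 4
x = rng.standard_normal((tokens, d))
experts = rng.standard_normal((n_experts, d, d))    # one weight matrix per expert
gate = rng.standard_normal((d, n_experts))
y = moe_layer(x, experts, gate)
print(y.shape)  # (4, 8)
```

With top_k=2 of 16 experts, each token pays for only 2 expert matrix multiplies while the model as a whole holds 16 experts' worth of parameters.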

Timeline
Announced: Aug 23, 2024
Released: Aug 23, 2024
Specifications
Training Tokens: 4.9T
License & Family
License: MIT
Performance Overview
Performance metrics and category breakdown

Overall Performance (31 benchmarks)

Average Score: 65.6%
Best Score: 91.0%
High Performers (80%+): 11
All Benchmark Results for Phi-3.5-MoE-instruct
Complete list of benchmark scores with detailed information
Benchmark           Modality   Score   Percentage   Source
ARC-C               text       0.91    91.0%        Self-reported
OpenBookQA          text       0.90    89.6%        Self-reported
GSM8k               text       0.89    88.7%        Self-reported
PIQA                text       0.89    88.6%        Self-reported
RULER               text       0.87    87.1%        Self-reported
RepoQA              text       0.85    85.0%        Self-reported
BoolQ               text       0.85    84.6%        Self-reported
HellaSwag           text       0.84    83.8%        Self-reported
MEGA XStoryCloze    text       0.83    82.8%        Self-reported
Winogrande          text       0.81    81.3%        Self-reported
Showing 1 to 10 of 31 benchmarks