Llama 3.2 3B Instruct
Zero-eval
#1 NIH/Multi-needle
#1 InfiniteBench/En.MC
#1 Open-rewrite
(+2 more)
by Meta
About
Llama 3.2 3B was created as an ultra-compact open-source model, designed to enable on-device and edge deployment scenarios. Built with just 3 billion parameters while retaining instruction-following abilities, it brings Meta's language technology to mobile devices, IoT applications, and resource-constrained environments.
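As a rough illustration of the on-device angle, below is a minimal sketch for running the model locally with Hugging Face transformers. The checkpoint ID, dtype, and generation settings are assumptions for illustration and are not stated on this page.

```python
# Minimal sketch: running Llama 3.2 3B Instruct locally with Hugging Face
# transformers. Assumes access to the meta-llama/Llama-3.2-3B-Instruct
# checkpoint on the Hub; dtype and generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 3B model's memory footprint small
    device_map="auto",
)

# Chat-style prompt using the model's built-in chat template.
messages = [{"role": "user", "content": "Summarize the benefits of on-device LLMs."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```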
Pricing Range
Input (per 1M tokens): $0.01
Output (per 1M tokens): $0.02
Providers: 1
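At the listed rates, per-request cost is simple arithmetic; a small sketch with hypothetical token counts:

```python
# Back-of-the-envelope cost estimate at the listed rates:
# $0.01 per 1M input tokens, $0.02 per 1M output tokens.
# The request sizes below are hypothetical examples.
INPUT_PRICE_PER_M = 0.01
OUTPUT_PRICE_PER_M = 0.02

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# e.g. a 2,000-token prompt with a 500-token reply:
print(f"${request_cost(2_000, 500):.6f}")  # $0.000030
```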
Timeline
Announced: Sep 25, 2024
Released: Sep 25, 2024
Specifications
Training Tokens: 9.0T
License & Family
License: Llama 3.2 Community License
Performance Overview
Performance metrics and category breakdown
Overall Performance (15 benchmarks)
Average Score: 55.6%
Best Score: 84.7%
High Performers (80%+): 1

Performance Metrics
Max Context Window: 256.0K
Avg Throughput: 171.5 tok/s
Avg Latency: 0ms
All Benchmark Results for Llama 3.2 3B Instruct
Complete list of benchmark scores with detailed information
| Benchmark | Modality | Score | Percentage | Source |
|---|---|---|---|---|
| NIH/Multi-needle | text | 0.85 | 84.7% | Self-reported |
| ARC-C | text | 0.79 | 78.6% | Self-reported |
| GSM8k | text | 0.78 | 77.7% | Self-reported |
| IFEval | text | 0.77 | 77.4% | Self-reported |
| HellaSwag | text | 0.70 | 69.8% | Self-reported |
| BFCL v2 | text | 0.67 | 67.0% | Self-reported |
| MMLU | text | 0.63 | 63.4% | Self-reported |
| InfiniteBench/En.MC | text | 0.63 | 63.3% | Self-reported |
| MGSM | text | 0.58 | 58.2% | Self-reported |
| MATH | text | 0.48 | 48.0% | Self-reported |
Showing 1 to 10 of 15 benchmarks