Platform Stats
Total Models: 8
Organizations: 2
Verified Benchmarks: 0
Multimodal Models: 3

Pricing Overview
Avg Input (per 1M tokens): $0.25
Avg Output (per 1M tokens): $1.37
Cheapest Model: $0.05 per 1M input tokens
Premium Model: $0.59 per 1M input tokens
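
These per-1M-token rates convert to a per-request cost as tokens divided by 1,000,000 times the price. A minimal sketch of that arithmetic in Python, using the average rates above and a hypothetical request size:

def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost in USD for one request, given prices per 1M tokens."""
    return (input_tokens / 1_000_000) * input_price_per_m + \
           (output_tokens / 1_000_000) * output_price_per_m

# Hypothetical 2,000-token prompt with a 500-token reply at the average
# rates listed above ($0.25 input, $1.37 output per 1M tokens).
print(f"${request_cost(2_000, 500, 0.25, 1.37):.6f}")  # ~$0.001185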

Supported Features
Number of models supporting each feature
Web Search: 2
Function Calling: 8
Structured Output: 8
Code Execution: 2
Batch Inference: 8
Fine-tuning: 0
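
Here, "function calling" and "structured output" refer to the usual OpenAI-style chat-completions request parameters (tools and response_format). A minimal sketch of those request shapes, with a purely illustrative tool definition; exact support varies by model, as the counts above indicate:

# Function calling: declare a tool the model may invoke instead of replying
# in plain text. The function name and schema below are illustrative only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_model_price",
        "description": "Look up a model's input price per 1M tokens.",
        "parameters": {
            "type": "object",
            "properties": {"model": {"type": "string"}},
            "required": ["model"],
        },
    },
}]

# Structured output: JSON mode constrains the response to valid JSON.
response_format = {"type": "json_object"}

print(tools, response_format)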

Input Modalities
Models supporting different input types
Text: 8 (100%)
Image: 3 (38%)
Audio: 0 (0%)
Video: 0 (0%)
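
The three image-capable models take OpenAI-style multipart message content, mixing text and image parts in a single user turn; audio and video inputs are not offered here. A sketch of that message shape, with a placeholder image URL:

# One user message combining a text part and an image part.
messages = [{
    "role": "user",
    "content": [
        {"type": "text", "text": "What does this chart show?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
    ],
}]
print(messages)
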
Models Overview
Top performers and pricing distribution

Pricing Distribution
Input pricing per 1M tokens
$0-1: 8 models

Top Performing Models
By benchmark average
#1 Llama 3.3 70B Instruct: 79.9%
#2 Llama 3.1 70B Instruct: 74.7%
#3 Llama 4 Maverick: 71.8%
#4 Llama 4 Scout: 67.3%
#5 Llama 3.2 11B Instruct: 63.6%

Most Affordable Models
Llama 3.1 8B Instruct: $0.05/1M
GPT OSS 20B: $0.10/1M
Llama 4 Scout: $0.11/1M

Available Models

8 models available through Groq

#01 Llama 3.1 8B Instruct (Meta)
Llama 3.1 8B Instruct is a multilingual large language model optimized for dialogue use cases. It features a 128K context length, state-of-the-art tool use, and strong reasoning capabilities.
Released: Jul 23, 2024
License: Llama 3.1 Community License
Benchmarks: - / - / 72.6% / - / -

#02 GPT OSS 20B (OpenAI)
The gpt-oss-20b model delivers similar results to OpenAI o3-mini on common benchmarks and can run on edge devices with just 16 GB of memory, making it ideal for on-device use cases, local inference, or rapid iteration without costly infrastructure. The gpt-oss models also perform strongly on tool use, few-shot function calling, CoT reasoning (as seen in results on the Tau-Bench agentic evaluation suite), and HealthBench (even outperforming proprietary models such as OpenAI o1 and GPT-4o). Note: while referred to as '20b' for simplicity, the model technically has 20.9B parameters.
Released: Aug 5, 2025
License: Apache 2.0
Benchmarks: not reported

#03 Llama 4 Scout (Meta)
Llama 4 Scout is a natively multimodal model capable of processing both text and images. It features a 17 billion activated parameter (109B total) mixture-of-experts (MoE) architecture with 16 experts, supporting a wide range of multimodal tasks such as conversational interaction, image analysis, and code generation. The model includes a 10 million token context window.
Released: Apr 5, 2025
License: Llama 4 Community License Agreement
Benchmarks: - / - / - / 32.8% / 67.8%

#04 GPT OSS 120B (OpenAI)
GPT-OSS-120B is an open-weight, 116.8B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation. It achieves near-parity with OpenAI o4-mini on core reasoning benchmarks. Note: while referred to as '120b' for simplicity, it technically has 116.8B parameters.
Released: Aug 5, 2025
License: Apache 2.0
Benchmarks: not reported

#05 Llama 3.2 11B Instruct (Meta)
Llama 3.2 11B Vision Instruct is an instruction-tuned multimodal large language model optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. It accepts text and images as input and generates text as output.
Released: Sep 25, 2024
License: Llama 3.2 Community License
Benchmarks: not reported

#06 Llama 4 Maverick (Meta)
Llama 4 Maverick is a natively multimodal model capable of processing both text and images. It features a 17 billion active parameter mixture-of-experts (MoE) architecture with 128 experts, supporting a wide range of multimodal tasks such as conversational interaction, image analysis, and code generation. The model includes a 1 million token context window.
Released: Apr 5, 2025
License: Llama 4 Community License Agreement
Benchmarks: - / - / - / 43.4% / 77.6%

#07 Llama 3.3 70B Instruct (Meta)
Llama 3.3 is a multilingual large language model optimized for dialogue use cases across multiple languages. It is a pretrained and instruction-tuned generative model with 70 billion parameters, outperforming many open-source and closed chat models on common industry benchmarks. Llama 3.3 supports a context length of 128,000 tokens and is designed for commercial and research use in multiple languages.
Released: Dec 6, 2024
License: Llama 3.3 Community License Agreement
Benchmarks: - / - / 88.4% / - / -

#08 Llama 3.1 70B Instruct (Meta)
Llama 3.1 70B Instruct is a large language model optimized for multilingual dialogue use cases. It outperforms many available open source and closed chat models on common industry benchmarks.
Released: Jul 23, 2024
License: Llama 3.1 Community License
Benchmarks: - / - / 80.5% / - / -
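
All of the models above are served through Groq's API, which follows the OpenAI chat-completions format. A minimal access sketch, assuming the openai Python package and a GROQ_API_KEY environment variable; the model ID shown is assumed to be Groq's identifier for Llama 3.3 70B Instruct and should be checked against the provider's current model list:

import os
from openai import OpenAI  # any OpenAI-compatible client works here

# Point the client at Groq's OpenAI-compatible endpoint.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],
)

resp = client.chat.completions.create(
    model="llama-3.3-70b-versatile",  # assumed Groq ID for Llama 3.3 70B Instruct
    messages=[{"role": "user", "content": "In one sentence, what is Llama 3.3 70B?"}],
    max_tokens=120,
)
print(resp.choices[0].message.content)
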
Resources