GitHub Models
Meta Llama 3 70B Instruct (meta-llama-3-70b-instruct)
Type: chat
Pricing (per 1M tokens)
Input: —
Cached input: —
Output: —
Cache write: —
Modalities
Input: text
Output: text
Features
Streaming
Function calling
Vision
Reasoning
JSON mode
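Since this is a chat model, a request to it takes the usual chat-completions shape. The sketch below builds such a request body for this model id; the field names follow the common OpenAI-compatible chat-completions convention, and the helper name and system prompt are illustrative assumptions, not an official GitHub Models client.

```python
import json

# Model id as listed on this page.
MODEL_ID = "meta-llama-3-70b-instruct"

def build_chat_request(user_message: str, stream: bool = False) -> dict:
    """Assemble a chat-completions request payload for the model above.

    Assumes the OpenAI-compatible request shape (model, messages, stream);
    sending it to an actual endpoint is left out of this sketch.
    """
    return {
        "model": MODEL_ID,
        "messages": [
            # Hypothetical system prompt for illustration.
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        # The Features list above includes streaming; enable it here if desired.
        "stream": stream,
    }

payload = build_chat_request("Summarize Llama 3 in one sentence.")
print(json.dumps(payload, indent=2))
```

The same payload pattern extends to the other listed features, e.g. adding a `tools` array for function calling or a `response_format` field for JSON mode, where the serving endpoint supports them.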
Category: Text Generation