Provider Slug: fireworks-ai
Portkey SDK Integration with Fireworks Models
Portkey provides a consistent API to interact with models from various providers. To integrate Fireworks with Portkey:

1. Install the Portkey SDK
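A minimal install, assuming you are using the Python SDK (the Node SDK is installed analogously with npm):

```sh
pip install portkey-ai
```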
2. Initialize Portkey with the Virtual Key
To use Fireworks with Portkey, get your API key from here, then add it to Portkey to create the virtual key.
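A minimal sketch with the Python SDK; the placeholder strings are assumptions you should replace with your own Portkey API key and the Fireworks virtual key created above:

```python
from portkey_ai import Portkey

# Initialize the Portkey client with your Portkey API key
# and the virtual key that points to your Fireworks account
portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="FIREWORKS_VIRTUAL_KEY"
)
```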
3. Invoke Chat Completions with Fireworks

You can now use the Portkey instance to send requests to the Fireworks API.
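For example, a chat completion call might look like the sketch below; the model ID is illustrative, so substitute any chat model hosted on Fireworks:

```python
completion = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="accounts/fireworks/models/llama-v3p1-8b-instruct"  # illustrative model ID
)

print(completion.choices[0].message.content)
```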
Using Embeddings Models

Call any embedding model hosted on Fireworks with the familiar OpenAI embeddings signature:
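A sketch of the embeddings call, reusing the portkey client from above; the embedding model ID is an example, not a requirement:

```python
embeddings = portkey.embeddings.create(
    input="Create an embedding for this sentence",
    model="nomic-ai/nomic-embed-text-v1.5"  # example embedding model hosted on Fireworks
)

print(embeddings.data[0].embedding)
```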
Using Vision Models

Portkey natively supports vision models hosted on Fireworks:
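A sketch of a vision request using the OpenAI-style multimodal message format; both the model ID and image URL are placeholders:

```python
response = portkey.chat.completions.create(
    model="accounts/fireworks/models/firellava-13b",  # placeholder vision model ID
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```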
Using Image Generation Models

Portkey also supports calling image generation models hosted on Fireworks in the familiar OpenAI signature:
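A sketch assuming an SDXL-style image model on Fireworks; the model ID is illustrative:

```python
image = portkey.images.generate(
    model="accounts/fireworks/models/stable-diffusion-xl-1024-v1-0",  # illustrative model ID
    prompt="An orange firework in the night sky"
)

print(image.data)
```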
Fireworks Grammar Mode

Fireworks lets you define formal grammars to constrain model outputs. You can use it to force the model to generate valid JSON, speak only in emojis, or anything else (the grammar format was originally created by GGML). Grammar mode is set with the response_format param: just pass your grammar definition as {"type": "grammar", "grammar": grammar_definition}.
Let’s say you want to classify patient requests into 3 pre-defined classes:
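A sketch under assumptions: the three class names below are hypothetical, and the grammar uses GBNF-style syntax to constrain the model to output exactly one of them:

```python
# Hypothetical GBNF grammar: the model may only emit one of three class labels
patient_request_grammar = """
root ::= "appointment" | "prescription-refill" | "billing"
"""

completion = portkey.chat.completions.create(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",  # illustrative model ID
    messages=[
        {"role": "system", "content": "Classify the patient's request."},
        {"role": "user", "content": "I need a refill of my blood pressure medication."},
    ],
    response_format={"type": "grammar", "grammar": patient_request_grammar},
)

print(completion.choices[0].message.content)  # e.g. "prescription-refill"
```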
NOTE: Fireworks Grammar Mode is not supported in the Portkey prompts playground.
Fireworks JSON Mode
With Fireworks’ JSON mode, you can force the model to return (1) arbitrary JSON, or (2) JSON that follows a given schema.
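A sketch assuming Fireworks’ json_object response format with an optional JSON schema; the schema and model ID here are illustrative:

```python
completion = portkey.chat.completions.create(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",  # illustrative model ID
    messages=[
        {"role": "system", "content": "Extract the person's name and age as JSON."},
        {"role": "user", "content": "Alice just turned 30."},
    ],
    response_format={
        "type": "json_object",
        # Optional: constrain the output to a JSON schema
        "schema": {
            "type": "object",
            "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
            "required": ["name", "age"],
        },
    },
)

print(completion.choices[0].message.content)  # e.g. {"name": "Alice", "age": 30}
```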
Fireworks Function Calling

Portkey also supports function calling mode on Fireworks. Explore this cookbook for a deep dive and examples.
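As a quick sketch, a function calling request uses OpenAI-style tools; the tool definition and model ID below are hypothetical:

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = portkey.chat.completions.create(
    model="accounts/fireworks/models/firefunction-v2",  # illustrative function-calling model
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
)

print(response.choices[0].message.tool_calls)
```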
Managing Fireworks Prompts

You can manage all Fireworks prompts in the Prompt Library. All 49+ language models currently available on Fireworks are supported, and you can easily start testing different prompts. Once you’re ready with your prompt, use the portkey.prompts.completions.create interface to use the prompt in your application.
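For example, a prompt created in the library can be invoked like the sketch below; the prompt ID and variable name are placeholders:

```python
prompt_completion = portkey.prompts.completions.create(
    prompt_id="YOUR_PROMPT_ID",          # placeholder prompt ID from the Prompt Library
    variables={"user_input": "Hello!"}   # placeholder variables defined in your prompt
)

print(prompt_completion)
```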