Portkey provides a robust and secure gateway to facilitate the integration of various models into your apps, including chat, vision, image generation, and embedding models hosted on the Fireworks platform.
With Portkey, you can take advantage of features like fast AI gateway access, observability, prompt management, and more, all while ensuring the secure management of your LLM API keys through a virtual key system.
Provider Slug: fireworks-ai
Portkey SDK Integration with Fireworks Models
Portkey provides a consistent API to interact with models from various providers. To integrate Fireworks with Portkey:
1. Install the Portkey SDK
npm install --save portkey-ai  # NodeJS SDK
pip install portkey-ai         # Python SDK
2. Initialize Portkey with the Virtual Key
To use Fireworks with Portkey, get your API key from the Fireworks dashboard, then add it to Portkey to create the virtual key.
import Portkey from 'portkey-ai'
const portkey = new Portkey({
apiKey: "PORTKEY_API_KEY", // Defaults to process.env["PORTKEY_API_KEY"]
virtualKey: "FIREWORKS_VIRTUAL_KEY" // Your Virtual Key
})
from portkey_ai import Portkey
portkey = Portkey(
api_key="PORTKEY_API_KEY", # Defaults to os.env("PORTKEY_API_KEY")
virtual_key="FIREWORKS_VIRTUAL_KEY" # Your Virtual Key
)
3. Invoke Chat Completions with Fireworks
You can now use the Portkey instance to send requests to the Fireworks API.
const chatCompletion = await portkey.chat.completions.create({
messages: [{ role: 'user', content: 'Say this is a test' }],
model: 'accounts/fireworks/models/llama-v3-70b-instruct',
});
console.log(chatCompletion.choices);
completion = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="accounts/fireworks/models/llama-v3-70b-instruct"
)
print(completion)
Now, let’s explore how you can use Portkey to call other models (vision, embedding, image) on the Fireworks API:
Using Embeddings Models
Call any embedding model hosted on Fireworks with the familiar OpenAI embeddings signature:
const embeddings = await portkey.embeddings.create({
    input: "create vector representation of this sentence",
    model: "thenlper/gte-large",
});
console.log(embeddings);
embeddings = portkey.embeddings.create(
    input="create vector representation of this sentence",
    model="thenlper/gte-large"
)
print(embeddings)
Using Vision Models
Portkey natively supports vision models hosted on Fireworks:
const completion = await portkey.chat.completions.create({
    messages: [
        { "role": "user", "content": [
            { "type": "text", "text": "Can you describe this image?" },
            { "type": "image_url", "image_url": { "url": "https://images.unsplash.com/photo-1582538885592-e70a5d7ab3d3?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=1770&q=80" } }
        ]}
    ],
    model: 'accounts/fireworks/models/firellava-13b'
});
console.log(completion);
completion = portkey.chat.completions.create(
    messages=[
        { "role": "user", "content": [
            { "type": "text", "text": "Can you describe this image?" },
            { "type": "image_url", "image_url": { "url": "https://images.unsplash.com/photo-1582538885592-e70a5d7ab3d3?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=1770&q=80" } }
        ]}
    ],
    model="accounts/fireworks/models/firellava-13b"
)
print(completion)
Using Image Generation Models
Portkey also supports calling image generation models hosted on Fireworks in the familiar OpenAI signature:
import Portkey from 'portkey-ai';
import fs from 'fs';
const portkey = new Portkey({
apiKey: "PORTKEY_API_KEY",
virtualKey: "FIREWORKS_VIRTUAL_KEY"
});
async function main(){
const image = await portkey.images.generate({
model: "accounts/fireworks/models/stable-diffusion-xl-1024-v1-0",
prompt: "An orange elephant in a purple pond"
});
const imageData = image.data[0].b64_json as string;
fs.writeFileSync("fireworks-image-gen.png", Buffer.from(imageData, 'base64'));
}
main()
from portkey_ai import Portkey
import base64
from io import BytesIO
from PIL import Image
portkey = Portkey(
api_key="PORTKEY_API_KEY",
virtual_key="FIREWORKS_VIRTUAL_KEY"
)
image = portkey.images.generate(
model="accounts/fireworks/models/stable-diffusion-xl-1024-v1-0",
prompt="An orange elephant in a purple pond"
)
Image.open(BytesIO(base64.b64decode(image.data[0].b64_json))).save("fireworks-image-gen.png")
Fireworks Grammar Mode
Fireworks lets you define formal grammars to constrain model outputs. You can use it to force the model to generate valid JSON, respond only in emojis, or follow any other format you define. (The grammar format was originally created by the GGML project.)
Grammar mode is set with the response_format param: just pass your grammar definition as {"type": "grammar", "grammar": grammar_definition}.
Let’s say you want to classify patient requests into 3 pre-defined classes:
from portkey_ai import Portkey
portkey = Portkey(
api_key="PORTKEY_API_KEY", # Defaults to os.env("PORTKEY_API_KEY")
virtual_key="FIREWORKS_VIRTUAL_KEY" # Your Virtual Key
)
patient_classification = """
root ::= diagnosis
diagnosis ::= "flu" | "dengue" | "malaria"
"""
completion = portkey.chat.completions.create(
    # A sample patient request to classify against the grammar
    messages=[{"role": "user", "content": "I have a high fever, chills, and body ache. What is it?"}],
    response_format={"type": "grammar", "grammar": patient_classification},
    model="accounts/fireworks/models/llama-v3-70b-instruct"
)
print(completion)
import Portkey from 'portkey-ai'
const portkey = new Portkey({
apiKey: "PORTKEY_API_KEY", // Defaults to process.env["PORTKEY_API_KEY"]
virtualKey: "FIREWORKS_VIRTUAL_KEY" // Your Virtual Key
})
const patient_classification = `
root ::= diagnosis
diagnosis ::= "flu" | "dengue" | "malaria"
`;
const chatCompletion = await portkey.chat.completions.create({
    // A sample patient request to classify against the grammar
    messages: [{ role: 'user', content: 'I have a high fever, chills, and body ache. What is it?' }],
    response_format: { "type": "grammar", "grammar": patient_classification },
    model: 'accounts/fireworks/models/llama-v3-70b-instruct',
});
console.log(chatCompletion.choices);
NOTE: Fireworks Grammar Mode is not supported on the Portkey prompts playground.
Explore the Fireworks guide for more examples and a deeper dive on Grammar mode.
Fireworks JSON Mode
With Fireworks’ JSON mode, you can force the model to return (1) arbitrary JSON, or (2) JSON that conforms to a given schema.
from portkey_ai import Portkey
from pydantic import BaseModel
from typing import List
portkey = Portkey(
api_key="PORTKEY_API_KEY", # Defaults to os.env("PORTKEY_API_KEY")
virtual_key="FIREWORKS_VIRTUAL_KEY" # Your Virtual Key
)
class Recipe(BaseModel):
    title: str
    description: str
    steps: List[str]
json_response = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Give me a recipe for making Ramen, in JSON format"}],
    model="accounts/fireworks/models/llama-v3-70b-instruct",
    response_format={
        "type": "json_object",
        "schema": Recipe.schema_json()
    }
)
print(json_response.choices[0].message.content)
import Portkey from 'portkey-ai'
const portkey = new Portkey({
apiKey: "PORTKEY_API_KEY", // Defaults to process.env["PORTKEY_API_KEY"]
virtualKey: "FIREWORKS_VIRTUAL_KEY" // Your Virtual Key
})
async function main() {
    const json_response = await portkey.chat.completions.create({
        messages: [{ role: "user", content: `Give me a recipe for making Ramen, in JSON format` }],
        model: "accounts/fireworks/models/llama-v3-70b-instruct",
        response_format: {
            type: "json_object",
            schema: {
                type: "object",
                properties: {
                    title: { type: "string" },
                    description: { type: "string" },
                    steps: { type: "array" }
                }
            }
        }
    });
    console.log(json_response.choices[0].message.content);
}
main()
Explore the Fireworks docs on JSON mode for more examples.
Fireworks Function Calling
Portkey also supports function calling mode on Fireworks. Explore this cookbook for a deep dive and examples.
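As an illustration, here is a minimal sketch of a function calling request sent through Portkey, using the firefunction-v1 model hosted on Fireworks; the get_weather tool below is a hypothetical definition for illustration only:
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="FIREWORKS_VIRTUAL_KEY"
)

# Hypothetical tool definition, in the OpenAI tools format
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"]
        }
    }
}]

response = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "What is the weather in Tokyo?"}],
    model="accounts/fireworks/models/firefunction-v1",
    tools=tools
)

# When the model decides a function should be invoked, the response
# carries a tool call instead of plain text
print(response.choices[0].message)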
Managing Fireworks Prompts
You can manage all Fireworks prompts in the Prompt Library. All 49+ language models currently available on Fireworks are supported, and you can easily start testing different prompts.
Once you’re ready with your prompt, you can use the portkey.prompts.completions.create interface to use the prompt in your application.
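For example, here is a minimal sketch of calling a saved prompt; the prompt ID and the topic variable below are hypothetical placeholders for a prompt you have created in the Prompt Library:
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY")

# "YOUR_PROMPT_ID" and the "topic" variable are placeholders; use the ID
# and variables of a prompt saved in your Prompt Library
prompt_completion = portkey.prompts.completions.create(
    prompt_id="YOUR_PROMPT_ID",
    variables={"topic": "ramen"}
)
print(prompt_completion)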
Next Steps
The complete list of features supported in the SDK is available at the link below.
You’ll find more information in the relevant sections:
- Add metadata to your requests
- Add gateway configs to your requests
- Trace requests
- Set up a fallback from OpenAI to Fireworks APIs (see the sketch below)
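As a starting point for the fallback setup, here is a minimal sketch of a gateway config, assuming you have virtual keys saved for both providers; the virtual key names and models below are placeholders:
from portkey_ai import Portkey

# Placeholder virtual keys and models; replace with your own
config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "OPENAI_VIRTUAL_KEY", "override_params": {"model": "gpt-4o"}},
        {"virtual_key": "FIREWORKS_VIRTUAL_KEY", "override_params": {"model": "accounts/fireworks/models/llama-v3-70b-instruct"}}
    ]
}

portkey = Portkey(api_key="PORTKEY_API_KEY", config=config)

# If the OpenAI request fails, the gateway retries on Fireworks
completion = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}]
)
print(completion)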