Learn to integrate OpenAI with Portkey, enabling seamless completions, prompt management, and advanced functionality like streaming, function calling, and fine-tuning.
Portkey has native integrations with the OpenAI SDKs for Node.js and Python, as well as its REST APIs. For OpenAI integration through other frameworks, explore our integrations with Langchain, LlamaIndex, and others.
Using the Portkey Gateway
To integrate the Portkey gateway with OpenAI:

- Set the `baseURL` to the Portkey Gateway URL
- Include Portkey-specific headers such as `provider`, `apiKey`, and others.
Here's how to apply it to a chat completion request:
Install the Portkey SDK in your application
```sh
npm i --save portkey-ai
```
Next, insert the Portkey-specific code as shown in the highlighted lines into your OpenAI completion calls. `PORTKEY_GATEWAY_URL` is Portkey's gateway URL for routing your requests, and `createHeaders` is a convenience function that generates the headers object. (All supported params/headers)
```js
import OpenAI from 'openai'; // We're using the v4 SDK
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai'

const openai = new OpenAI({
  apiKey: 'OPENAI_API_KEY', // defaults to process.env["OPENAI_API_KEY"]
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    provider: "openai",
    apiKey: "PORTKEY_API_KEY" // defaults to process.env["PORTKEY_API_KEY"]
  })
});

async function main() {
  const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-4-turbo',
  });

  console.log(chatCompletion.choices);
}

main();
```
Install the Portkey SDK in your application
```sh
pip install portkey-ai
```
Next, insert the Portkey-specific code as shown in the highlighted lines into your OpenAI function calls. `PORTKEY_GATEWAY_URL` is Portkey's gateway URL for routing your requests, and `createHeaders` is a convenience function that generates the headers object. (All supported params/headers)
```python
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

client = OpenAI(
    api_key="OPENAI_API_KEY",  # defaults to os.environ.get("OPENAI_API_KEY")
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        provider="openai",
        api_key="PORTKEY_API_KEY"  # defaults to os.environ.get("PORTKEY_API_KEY")
    )
)

chat_complete = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Say this is a test"}],
)

print(chat_complete.choices[0].message.content)
```
This request will be automatically logged by Portkey. You can view this in your logs dashboard. Portkey logs the tokens utilized, execution time, and cost for each request. Additionally, you can delve into the details to review the precise request and response data.
Portkey also supports creating and managing prompt templates in the prompt library. This enables the collaborative development of prompts directly through the user interface.
1. Create a prompt template with variables and set the hyperparameters.
2. Use this prompt in your codebase using the Portkey SDK.
```js
import Portkey from 'portkey-ai'

const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY",
})

// Make the prompt creation call with the variables
const promptCompletion = await portkey.prompts.completions.create({
  promptID: "Your Prompt ID",
  variables: {
    // The variables specified in the prompt
  }
})
```
```js
// We can also override the hyperparameters
const promptCompletion = await portkey.prompts.completions.create({
  promptID: "Your Prompt ID",
  variables: {
    // The variables specified in the prompt
  },
  max_tokens: 250,
  presence_penalty: 0.2
})
```
```python
from portkey_ai import Portkey

client = Portkey(
    api_key="PORTKEY_API_KEY",  # defaults to os.environ.get("PORTKEY_API_KEY")
)

prompt_completion = client.prompts.completions.create(
    prompt_id="Your Prompt ID",
    variables={
        # The variables specified in the prompt
    }
)

print(prompt_completion)

# We can also override the hyperparameters
prompt_completion = client.prompts.completions.create(
    prompt_id="Your Prompt ID",
    variables={
        # The variables specified in the prompt
    },
    max_tokens=250,
    presence_penalty=0.2
)

print(prompt_completion)
```
```sh
curl -X POST "https://api.portkey.ai/v1/prompts/:PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {},
    "max_tokens": 250,
    "presence_penalty": 0.2
  }'
```

Pass the variables specified in the prompt inside `variables`; `max_tokens` and `presence_penalty` are optional overrides.
Notice how this improves your code's readability and lets you update prompts from the UI without changing the codebase.
Advanced Use Cases
Streaming Responses
Portkey supports streaming responses using Server-Sent Events (SSE).
```js
import OpenAI from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai'

const openai = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    provider: "openai",
    apiKey: "PORTKEY_API_KEY" // defaults to process.env["PORTKEY_API_KEY"]
  })
});

async function main() {
  const stream = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Say this is a test' }],
    stream: true,
  });

  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }
}

main();
```
```python
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

client = OpenAI(
    api_key="OPENAI_API_KEY",  # defaults to os.environ.get("OPENAI_API_KEY")
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        provider="openai",
        api_key="PORTKEY_API_KEY"  # defaults to os.environ.get("PORTKEY_API_KEY")
    )
)

chat_complete = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test"}],
    stream=True
)

for chunk in chat_complete:
    print(chunk.choices[0].delta.content, end="", flush=True)
```
Using Vision Models
Portkey's multimodal Gateway fully supports OpenAI vision models as well. See this guide for more information.
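As a quick sketch, a vision request through the gateway uses the standard OpenAI multimodal message format; the image URL and question below are placeholders, and `client` is assumed to be configured as in the chat-completion examples above.

```python
# Build a multimodal message in OpenAI's vision format. Pass the result as
# `messages` to client.chat.completions.create with a vision-capable model.
def build_vision_messages(image_url, question):
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }]

messages = build_vision_messages(
    "https://example.com/photo.jpg",  # placeholder image URL
    "What is in this image?",
)

# response = client.chat.completions.create(model="gpt-4-turbo", messages=messages)
```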
Function Calling
Function calls within your OpenAI or Portkey SDK operations work as usual. These logs will appear in Portkey, highlighting the functions used and their outputs.
You can also define functions within your prompts and invoke the portkey.prompts.completions.create method as shown above.
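For illustration, a tool definition in OpenAI's `tools` format looks like the following; `get_weather` is a hypothetical function, and the `tools` parameter is passed to the gateway exactly as it would be passed to OpenAI directly.

```python
# A minimal function (tool) definition in OpenAI's tools format.
# get_weather is a hypothetical example function.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}]

# response = client.chat.completions.create(
#     model="gpt-4-turbo", messages=messages, tools=tools
# )
```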
Fine-Tuning
Please refer to our fine-tuning guides to take advantage of Portkey's advanced continuous fine-tuning capabilities.
Image Generation
Portkey supports multiple modalities for OpenAI; you can make image generation requests through Portkey's AI Gateway the same way as completion calls.
```js
// Define the OpenAI client as shown above

const image = await openai.images.generate({
  model: "dall-e-3",
  prompt: "Lucy in the sky with diamonds",
  size: "1024x1024"
})
```
```python
# Define the OpenAI client as shown above

image = client.images.generate(
    model="dall-e-3",
    prompt="Lucy in the sky with diamonds",
    size="1024x1024"
)
```
Portkey's fast AI gateway captures information about each request on your Portkey dashboard. On the logs screen, you can view this request along with its full request and response data.
More information on image generation is available in the API Reference.
Audio - Transcription, Translation, and Text-to-Speech
Portkey's multimodal Gateway also supports the audio methods on the OpenAI API. The tts-1, tts-1-hd, and whisper-1 models are supported.
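As a sketch of the request shapes, the audio endpoints follow the standard OpenAI SDK signatures; `client` is assumed to be configured as in the chat examples above, and the file names are placeholders. The live calls are shown commented out since they need real credentials and audio files.

```python
# Parameter builders matching the OpenAI SDK's audio methods.
def tts_params(text, model="tts-1", voice="alloy"):
    """Arguments for client.audio.speech.create (text-to-speech)."""
    return {"model": model, "voice": voice, "input": text}

def transcription_params(audio_file, model="whisper-1"):
    """Arguments for client.audio.transcriptions.create (speech-to-text)."""
    return {"model": model, "file": audio_file}

speech_args = tts_params("Portkey routes audio calls too.")
# speech = client.audio.speech.create(**speech_args)

# with open("meeting.mp3", "rb") as f:  # placeholder file name
#     transcript = client.audio.transcriptions.create(**transcription_params(f))
```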
If you belong to multiple organizations or access your projects through a legacy user API key, you can specify which organization and project to use for an API request.
In Portkey, you can attach this as a header, as part of the config, or within the OpenAI virtual key.
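For the header route, one option is to rely on the OpenAI SDK itself: it accepts `organization` and `project` on the client and sends them as the `OpenAI-Organization` and `OpenAI-Project` request headers, which the gateway forwards. The IDs below are placeholders.

```python
# The OpenAI SDK turns its organization/project client arguments into
# OpenAI-Organization / OpenAI-Project request headers. Placeholder IDs:
org_id = "org-xxxxxxxx"
project_id = "proj_xxxxxxxx"

request_headers = {
    "OpenAI-Organization": org_id,
    "OpenAI-Project": project_id,
}

# client = OpenAI(
#     organization=org_id,
#     project=project_id,
#     base_url=PORTKEY_GATEWAY_URL,
#     default_headers=createHeaders(provider="openai", api_key="PORTKEY_API_KEY"),
# )
```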
OpenAI Virtual Keys
You can specify OpenAI's organization and project IDs while defining a Virtual Key.
In the Gateway Config
You can also specify the organization and project details in the config root or within a target.