Learn to integrate OpenAI with Portkey, enabling seamless completions, prompt management, and advanced functionalities like streaming, function calling and fine-tuning.
Portkey has native integrations with the OpenAI SDKs for Node.js and Python, as well as its REST APIs. For OpenAI integration with other frameworks, explore our partner integrations, including Langchain and LlamaIndex, among others.
Provider Slug: `openai`
Using the Portkey Gateway
To integrate the Portkey gateway with OpenAI:
- Set the `baseURL` to the Portkey Gateway URL
- Include Portkey-specific headers such as `provider`, `apiKey`, `virtualKey`, and others
Here’s how to apply it to a chat completion request:
You can install the Portkey SDK with npm or pip, or use the OpenAI SDKs (npm or pip) directly through the gateway.
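As a sketch of what such a request looks like, here is the request body and the Portkey-specific headers for a chat completion routed through the gateway (the model, prompt, and key values are placeholders; header names follow Portkey's conventions):

```python
import os

# Chat completions endpoint on the Portkey gateway
url = "https://api.portkey.ai/v1/chat/completions"

# Portkey-specific headers; values are placeholders
headers = {
    "Content-Type": "application/json",
    "x-portkey-api-key": os.environ.get("PORTKEY_API_KEY", "<your-portkey-api-key>"),
    "x-portkey-virtual-key": os.environ.get("OPENAI_VIRTUAL_KEY", "<your-virtual-key>"),
}

# Standard OpenAI chat completion body; model is illustrative
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Say this is a test"}],
}

# To actually send the request (requires valid keys):
# import requests
# response = requests.post(url, json=payload, headers=headers)
# print(response.json()["choices"][0]["message"]["content"])
```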
This request will be automatically logged by Portkey. You can view this in your logs dashboard. Portkey logs the tokens utilized, execution time, and cost for each request. Additionally, you can delve into the details to review the precise request and response data.
Portkey supports OpenAI's new `developer` role in chat completions. With o1 models and newer, the `developer` role replaces the previous `system` role.
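A minimal sketch of a message list using the `developer` role (model name is illustrative):

```python
# With o1 and newer models, use "developer" where you previously used "system"
payload = {
    "model": "o1",
    "messages": [
        {"role": "developer", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize Server-Sent Events in one sentence."},
    ],
}

roles = [m["role"] for m in payload["messages"]]
```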
Using the Responses API
OpenAI has released a new Responses API that combines the best of both Chat Completions and Assistants APIs. Portkey fully supports this new API, enabling you to use it with both the Portkey SDK and OpenAI SDK.
The Responses API provides a more flexible foundation for building agentic applications with built-in tools that execute automatically.
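A minimal Responses API request body, routed through the Portkey gateway URL (model and input are illustrative):

```python
# Responses API endpoint on the Portkey gateway
url = "https://api.portkey.ai/v1/responses"

# The Responses API takes a single "input" instead of a messages list
payload = {
    "model": "gpt-4o",
    "input": "Tell me a three sentence bedtime story about a unicorn.",
}
```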
Track End-User IDs
Portkey allows you to track user IDs passed with the `user` parameter in OpenAI requests, enabling you to monitor user-level costs, requests, and more. When you include the `user` parameter in your requests, Portkey logs will display the associated user ID.
In addition to the `user` parameter, Portkey allows you to send arbitrary custom metadata with your requests. This lets you associate additional context or information with each request, which can be useful for analysis, debugging, or other custom use cases.
Metadata
- The same integration approach applies to APIs for completions, embeddings, vision, moderation, transcription, translation, speech and files.
- If you are looking for a way to add your Org ID & Project ID to the requests, head over to this section.
Using the Prompts API
Portkey also supports creating and managing prompt templates in the prompt library. This enables the collaborative development of prompts directly through the user interface.
- Create a prompt template with variables and set the hyperparameters.
- Use this prompt in your codebase using the Portkey SDK.
Observe how this streamlines your code readability and simplifies prompt updates via the UI without altering the codebase.
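A sketch of invoking a saved prompt template from code, assuming a hypothetical prompt ID and variables (the endpoint shape follows Portkey's prompt completions API):

```python
# Hypothetical template ID from the Portkey prompt library
prompt_id = "pp-my-prompt-template"

# Prompt completions endpoint for that template
url = f"https://api.portkey.ai/v1/prompts/{prompt_id}/completions"

# Only the variables are supplied here; the prompt text and
# hyperparameters live in the template managed via the UI
payload = {
    "variables": {"customer_name": "Ada", "tone": "friendly"},
}
```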
Advanced Use Cases
Realtime API
Portkey supports OpenAI’s Realtime API with a seamless integration. This allows you to use Portkey’s logging, cost tracking, and guardrail features while using the Realtime API.
Realtime API
Streaming Responses
Portkey supports streaming responses using Server Sent Events (SSE).
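To enable streaming, set `stream` to true in the request body; the gateway then relays Server-Sent Events, where each `data:` line carries a JSON chunk (model and prompt are illustrative):

```python
# Same chat completion body as before, with streaming enabled
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Count to five."}],
    "stream": True,
}

# With a streaming HTTP client you would then iterate over SSE lines:
# for line in response.iter_lines():
#     if line.startswith(b"data: ") and line != b"data: [DONE]":
#         chunk = json.loads(line[len(b"data: "):])
#         print(chunk["choices"][0]["delta"].get("content", ""), end="")
```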
Streaming with the Responses API
You can also stream responses from the Responses API:
Using Vision Models
Portkey’s multimodal Gateway fully supports OpenAI vision models as well. See this guide for more info:
Vision with the Responses API
You can also use the Responses API to process images alongside text:
Function Calling
Function calls within your OpenAI or Portkey SDK operations remain standard. These logs will appear in Portkey, highlighting the utilized functions and their outputs.
Additionally, you can define functions within your prompts and invoke the `portkey.prompts.completions.create` method as above.
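A sketch of a function definition in the standard OpenAI tools format, which Portkey logs unchanged (the function itself is hypothetical):

```python
# A tool definition in OpenAI's chat completions "tools" format
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]
```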
Function Calling with the Responses API
The Responses API also supports function calling with the same powerful capabilities:
Fine-Tuning
Please refer to our fine-tuning guides to take advantage of Portkey’s advanced continuous fine-tuning capabilities.
Image Generation
Portkey supports multiple modalities for OpenAI and you can make image generation requests through Portkey’s AI Gateway the same way as making completion calls.
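As a sketch, an image generation request body sent to the gateway's images endpoint, the same way as a completion call (model and prompt are illustrative):

```python
# Image generation endpoint on the Portkey gateway
url = "https://api.portkey.ai/v1/images/generations"

payload = {
    "model": "dall-e-3",
    "prompt": "A watercolor lighthouse at dawn",
    "size": "1024x1024",
    "n": 1,
}
```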
Portkey's fast AI gateway captures information about each request on your Portkey Dashboard. On the logs screen, you can view the request along with its full request and response data.
Log view for an image generation request on OpenAI
More information on image generation is available in the API Reference.
Audio - Transcription, Translation, and Text-to-Speech
Portkey's multimodal Gateway also supports the `audio` methods on the OpenAI API. Check out the guides below for more info:
Integrated Tools with the Responses API
Web Search Tool
Web search delivers accurate and clearly-cited answers from the web, using the same tool as search in ChatGPT:
Options for `search_context_size`:
- `high`: Most comprehensive context, higher cost, slower response
- `medium`: Balanced context, cost, and latency (default)
- `low`: Minimal context, lowest cost, fastest response
Responses include citations for URLs found in search results, with clickable references.
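A sketch of a Responses API body with the web search tool enabled (model and input are illustrative):

```python
# Responses API request with OpenAI's web search tool
payload = {
    "model": "gpt-4o",
    "input": "What was a positive news story from today?",
    "tools": [
        {"type": "web_search_preview", "search_context_size": "medium"},
    ],
}
```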
File Search Tool
File search enables quick retrieval from your knowledge base across multiple file types:
This tool requires you to first create a vector store and upload files to it. Supports various file formats including PDFs, DOCXs, TXT, and more. Results include file citations in the response.
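A sketch of a file search request, assuming a vector store has already been created (the vector store ID is a placeholder):

```python
# Responses API request with OpenAI's file search tool;
# the vector store must be created and populated beforehand
payload = {
    "model": "gpt-4o",
    "input": "What does our refund policy say about digital goods?",
    "tools": [
        {"type": "file_search", "vector_store_ids": ["<your-vector-store-id>"]},
    ],
}
```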
Enhanced Reasoning
Control the depth of model reasoning for more comprehensive analysis:
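As a sketch, reasoning depth is controlled with the `reasoning.effort` parameter on reasoning-capable models (model and input are illustrative):

```python
# Responses API request asking for deeper reasoning;
# "effort" accepts "low", "medium", or "high"
payload = {
    "model": "o3-mini",
    "input": "Compare the trade-offs of consistent hashing vs. rendezvous hashing.",
    "reasoning": {"effort": "high"},
}
```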
Computer Use Assistant
Portkey also supports the Computer Use Assistant (CUA) tool, which helps agents control computers or virtual machines through screenshots and actions. This feature is available for select developers as a research preview on premium tiers.
Learn more about the Computer Use tool here
Managing OpenAI Projects & Organizations in Portkey
When integrating OpenAI with Portkey, you can specify your OpenAI organization and project IDs along with your API key. This is particularly useful if you belong to multiple organizations or are accessing projects through a legacy user API key.
Specifying the organization and project IDs helps you maintain better control over your access rules, usage, and costs.
In Portkey, you can add your Org & Project details by:
- Creating your Virtual Key
- Defining a Gateway Config
- Passing Details in a Request
Let’s explore each method in more detail.
Using Virtual Keys
When selecting OpenAI from the dropdown menu while creating a virtual key, Portkey automatically displays optional fields for the organization ID and project ID alongside the API key field.
Get your OpenAI API key from here, then add it to Portkey to create the virtual key that can be used throughout Portkey.
Portkey takes budget management a step further than OpenAI. While OpenAI allows setting budget limits per project, Portkey enables you to set budget limits for each virtual key you create. For more information on budget limits, refer to this documentation:
Using The Gateway Config
You can also specify the organization and project details in the gateway config, either at the root level or within a specific target.
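A sketch of what such a config entry might look like; the field names here are an assumption about Portkey's config schema, and all values are placeholders:

```json
{
  "provider": "openai",
  "api_key": "<your-openai-api-key>",
  "openai_organization": "<your-org-id>",
  "openai_project": "<your-project-id>"
}
```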
While Making a Request
You can also pass your organization and project details directly when making a request using curl, the OpenAI SDK, or the Portkey SDK.
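As a sketch, the organization and project IDs travel as standard OpenAI headers alongside Portkey's own headers (all values are placeholders; the assumption is that the gateway forwards them to OpenAI):

```python
import os

# Portkey headers plus OpenAI's organization/project headers
headers = {
    "x-portkey-api-key": os.environ.get("PORTKEY_API_KEY", "<your-portkey-api-key>"),
    "x-portkey-provider": "openai",
    "Authorization": "Bearer <your-openai-api-key>",
    "OpenAI-Organization": "<your-org-id>",
    "OpenAI-Project": "<your-project-id>",
}
```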
Portkey Features
Portkey supports its full range of functionality via the OpenAI SDK, so you don't need to migrate away from it.
Please find more information in the relevant sections: