OpenAI
Integrate OpenAI with Portkey to get production metrics for your requests and to make chat completion, audio, image generation, structured output, function calling, fine-tuning, batch, and other requests.
Provider Slug: openai
Overview
Portkey integrates with OpenAI’s APIs to help you create production-grade AI apps with enhanced reliability, observability, and governance features.
Getting Started
Obtain your OpenAI API Key
Visit the OpenAI dashboard to generate your API key.
Create a Virtual Key in Portkey
Portkey’s virtual key vault simplifies your interaction with OpenAI. Virtual keys act as secure aliases for your actual API keys, offering enhanced security and easier management, including budget limits to control your API usage.
Use the Portkey app to create a virtual key associated with your OpenAI API key.
Initialize the Portkey Client
Now that you have your virtual key, set up the Portkey client:
Portkey Hosted App
Use the Portkey API key and the OpenAI virtual key to initialize the client in your preferred programming language.
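A minimal sketch using the Portkey Python SDK (`pip install portkey-ai`); the placeholder key values are assumptions you should replace with your own:

```python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",         # your Portkey API key
    virtual_key="OPENAI_VIRTUAL_KEY",  # the OpenAI virtual key created above
)
```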
Open Source Use
Alternatively, use Portkey’s Open Source AI Gateway to enhance your app’s reliability with minimal code:
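One way to do this, sketched with the standard OpenAI SDK pointed at a locally running gateway (started with `npx @portkey-ai/gateway`); the localhost URL and port assume the gateway's defaults:

```python
from openai import OpenAI

client = OpenAI(
    api_key="OPENAI_API_KEY",              # your actual OpenAI key
    base_url="http://localhost:8787/v1",   # self-hosted gateway endpoint
    default_headers={"x-portkey-provider": "openai"},
)
```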
🔥 That’s it! You’ve integrated Portkey into your application with just a few lines of code. Now let’s explore making requests using the Portkey client.
Supported Models
OpenAI Supported Features
Chat Completions
Generate chat completions using OpenAI models through Portkey:
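A sketch assuming the `portkey` client from Getting Started; the model name is illustrative:

```python
completion = portkey.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say this is a test"}],
)
print(completion.choices[0].message.content)
```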
Streaming
Stream responses for real-time output in your applications:
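Set `stream=True` and iterate over the chunks; a sketch assuming the `portkey` client from above:

```python
stream = portkey.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a short story"}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental delta of the response
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```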
Function Calling
Leverage OpenAI’s function calling capabilities through Portkey:
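A sketch assuming the `portkey` client from above; `get_weather` is a hypothetical function used only for illustration:

```python
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical function for illustration
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = portkey.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
)
# The model responds with tool calls instead of plain text
print(response.choices[0].message.tool_calls)
```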
Vision
Process images alongside text using OpenAI’s vision capabilities:
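Pass image URLs in the message content; a sketch assuming the `portkey` client from above, with a placeholder image URL:

```python
response = portkey.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/image.jpg"}},  # placeholder URL
        ],
    }],
)
print(response.choices[0].message.content)
```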
Embeddings
Generate embeddings for text using OpenAI’s embedding models:
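A sketch assuming the `portkey` client from above; the embedding model name is illustrative:

```python
embedding = portkey.embeddings.create(
    model="text-embedding-3-small",
    input="Portkey makes AI apps production-ready",
)
# Each item in `data` holds a vector for the corresponding input
print(embedding.data[0].embedding[:5])
```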
Transcription and Translation
Portkey supports both Transcription and Translation methods for STT models:
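A sketch assuming the `portkey` client from above; `speech.mp3` is a placeholder file name. Transcription returns text in the audio's original language, while translation returns English text:

```python
# Transcription: speech-to-text in the source language
with open("speech.mp3", "rb") as audio_file:  # placeholder file
    transcription = portkey.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )
print(transcription.text)

# Translation: speech-to-text, translated into English
with open("speech.mp3", "rb") as audio_file:
    translation = portkey.audio.translations.create(
        model="whisper-1",
        file=audio_file,
    )
print(translation.text)
```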
Text to Speech
Convert text to speech using OpenAI’s TTS models:
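A sketch assuming the `portkey` client from above; writing the raw response bytes to a file is an assumption about the response shape:

```python
speech = portkey.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Hello from Portkey!",
)
# Assumes the response exposes the audio as raw bytes via `.content`
with open("output.mp3", "wb") as f:
    f.write(speech.content)
```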
Prompt Caching
Implement prompt caching to improve performance and reduce costs:
Prompt Caching Guide
Learn how to implement prompt caching for OpenAI models with Portkey.
Structured Output
Use structured outputs for more consistent and parseable responses:
Structured Outputs Guide
Discover how to use structured outputs with OpenAI models in Portkey.
Supported Endpoints and Parameters
| Endpoint | Supported Parameters |
| --- | --- |
| `complete` | model, prompt, max_tokens, temperature, top_p, n, stream, logprobs, echo, stop, presence_penalty, frequency_penalty, best_of, logit_bias, user, seed, suffix |
| `embed` | model, input, encoding_format, dimensions, user |
| `chatComplete` | model, messages, functions, function_call, max_tokens, temperature, top_p, n, stream, stop, presence_penalty, frequency_penalty, logit_bias, user, seed, tools, tool_choice, response_format, logprobs, top_logprobs, stream_options, service_tier, parallel_tool_calls, max_completion_tokens |
| `imageGenerate` | prompt, model, n, quality, response_format, size, style, user |
| `createSpeech` | model, input, voice, response_format, speed |
| `createTranscription` | All parameters supported |
| `createTranslation` | All parameters supported |
Portkey’s Advanced Features
Track End-User IDs
Portkey allows you to track user IDs passed with the user parameter in OpenAI requests, enabling you to monitor user-level costs, requests, and more:
When you include the user parameter in your requests, Portkey logs will display the associated user ID, as shown in the image below:
In addition to the user parameter, Portkey allows you to send arbitrary custom metadata with your requests. This powerful feature enables you to associate additional context or information with each request, which can be useful for analysis, debugging, or other custom use cases.
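A sketch assuming the `portkey` client from Getting Started; the user ID and metadata keys are illustrative placeholders:

```python
response = portkey.with_options(
    metadata={"environment": "production"}  # arbitrary custom metadata
).chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    user="user_12345",  # end-user ID surfaced in Portkey logs
)
```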
Using The Gateway Config
Here’s a simplified version of how to use Portkey’s Gateway Configuration:
Create a Gateway Configuration
You can create a Gateway configuration using the Portkey Config Dashboard or by writing a JSON configuration in your code. In this example, requests are routed based on the user’s subscription plan (paid or free).
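A sketch of such a conditional-routing config expressed as a Python dict; the metadata key, target names, and virtual key slug are placeholders:

```python
# Route paid-plan users and free-plan users to different targets,
# falling back to the free target when no condition matches.
config = {
    "strategy": {
        "mode": "conditional",
        "conditions": [
            {"query": {"metadata.user_plan": {"$eq": "paid"}}, "then": "paid-model"},
            {"query": {"metadata.user_plan": {"$eq": "free"}}, "then": "free-model"},
        ],
        "default": "free-model",
    },
    "targets": [
        {"name": "paid-model", "virtual_key": "openai-virtual-key"},
        {"name": "free-model", "virtual_key": "openai-virtual-key"},
    ],
}
```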
Process Requests
When a user makes a request, it will pass through Portkey’s AI Gateway. Based on the configuration, the Gateway routes the request according to the user’s metadata.
Set Up the Portkey Client
Pass the Gateway configuration to your Portkey client. You can either use the config object or the Config ID from Portkey’s hosted version.
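A sketch of both approaches; the Config ID shown is a placeholder:

```python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    config="pc-xxxxxx",  # a saved Config ID from the dashboard,
                         # or an inline config dict as shown earlier
)
```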
That’s it! Portkey seamlessly makes your AI app more robust with built-in gateway features. Learn more about advanced gateway features:
Load Balancing
Distribute requests across multiple targets based on defined weights.
Fallbacks
Automatically switch to backup targets if the primary target fails.
Conditional Routing
Route requests to different targets based on specified conditions.
Caching
Enable caching of responses to improve performance and reduce costs.
Guardrails
Portkey’s AI gateway enables you to enforce input/output checks on requests by applying custom hooks before and after processing. Protect your users’ and your company’s data with PII guardrails and the many other checks available on Portkey Guardrails:
Learn More About Guardrails
Explore Portkey’s guardrail features to enhance the security and reliability of your AI applications.
Next Steps
The complete list of features supported in the SDK is available in our comprehensive documentation:
Portkey SDK Documentation
Explore the full capabilities of the Portkey SDK and how to leverage them in your projects.
Limitations
Portkey does not support the following OpenAI features:
- Streaming for audio endpoints
- Chat completions feedback API
- File management endpoints
For the most up-to-date information on supported features and endpoints, please refer to our API Reference.