OpenAI

Learn to integrate OpenAI with Portkey, enabling seamless completions, prompt management, and advanced functionalities like streaming, function calling and fine-tuning.

Portkey has native integrations with OpenAI SDKs for Node.js, Python, and its REST APIs. For OpenAI integration with other frameworks, explore our partnerships with Langchain, LlamaIndex, and others.

Using the Portkey Gateway

To integrate the Portkey gateway with OpenAI,

  • Set the baseURL to the Portkey Gateway URL

  • Include Portkey-specific headers such as provider, apiKey, and others.

Here's how to apply it to a chat completion request:

  1. Install the Portkey SDK in your application

npm i --save portkey-ai
  2. Next, add the Portkey-specific code shown below to your OpenAI completion calls. PORTKEY_GATEWAY_URL is Portkey's gateway URL to route your requests, and createHeaders is a convenience function that generates the headers object. (All supported params/headers)

import OpenAI from 'openai'; // We're using the v4 SDK
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai'

const openai = new OpenAI({
  apiKey: 'OPENAI_API_KEY', // defaults to process.env["OPENAI_API_KEY"],
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    provider: "openai",
    apiKey: "PORTKEY_API_KEY" // defaults to process.env["PORTKEY_API_KEY"]
  })
});

async function main() {
  const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-4-turbo',
  });

  console.log(chatCompletion.choices);
}

main();

This request will be automatically logged by Portkey. You can view this in your logs dashboard. Portkey logs the tokens utilized, execution time, and cost for each request. Additionally, you can delve into the details to review the precise request and response data.

The same integration approach applies to the APIs for completions, embeddings, vision, moderation, transcription, translation, speech, and files.
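For example, an embeddings request through the same Portkey-configured client might look like this (a minimal sketch; the model name and input text are illustrative):

// Reusing the Portkey-configured OpenAI client from above
const embedding = await openai.embeddings.create({
  model: 'text-embedding-3-small', // any OpenAI embeddings model
  input: 'The food was delicious and the service was excellent.',
});

console.log(embedding.data[0].embedding.length);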

Using the Prompts API

Portkey also supports creating and managing prompt templates in the prompt library. This enables the collaborative development of prompts directly through the user interface.

  1. Create a prompt template with variables and set the hyperparameters.

  2. Use this prompt in your codebase using the Portkey SDK.

import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY",
})

// Make the prompt creation call with the variables
const promptCompletion = await portkey.prompts.completions.create({
    promptID: "Your Prompt ID",
    variables: {
       // The variables specified in the prompt
    }
})

// We can also override the hyperparameters for a given call
const promptCompletionWithOverrides = await portkey.prompts.completions.create({
    promptID: "Your Prompt ID",
    variables: {
       // The variables specified in the prompt
    },
    max_tokens: 250,
    presence_penalty: 0.2
})

Notice how this improves code readability and lets you update prompts from the UI without changing the codebase.

Advanced Use Cases

Streaming Responses

Portkey supports streaming responses using Server-Sent Events (SSE).

import OpenAI from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai'

const openai = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    provider: "openai",
    apiKey: "PORTKEY_API_KEY" // defaults to process.env["PORTKEY_API_KEY"]
  })
});

async function main() {
  const stream = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Say this is a test' }],
    stream: true,
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }
}

main();

Using Vision Models

Portkey's multimodal Gateway fully supports OpenAI vision models as well. See this guide for more info:

Vision
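As a rough sketch, a vision request through the same Portkey-configured client follows the standard OpenAI message format (the model choice and image URL below are placeholders):

// Reusing the Portkey-configured OpenAI client from above
const visionCompletion = await openai.chat.completions.create({
  model: 'gpt-4o', // any vision-capable OpenAI model
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is in this image?' },
        { type: 'image_url', image_url: { url: 'https://example.com/image.jpg' } } // placeholder URL
      ],
    },
  ],
});

console.log(visionCompletion.choices[0].message.content);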

Function Calling

Function calls within your OpenAI or Portkey SDK operations work as standard. The corresponding logs appear in Portkey, highlighting the functions invoked and their outputs.

Additionally, you can define functions within your prompts and invoke the portkey.prompts.completions.create method as above.
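For illustration, a standard OpenAI tool-calling request works unchanged through the Portkey-configured client (the getWeather tool below is a hypothetical example):

// Reusing the Portkey-configured OpenAI client from above
const tools = [
  {
    type: 'function',
    function: {
      name: 'getWeather', // hypothetical function, shown for illustration only
      description: 'Get the current weather for a city',
      parameters: {
        type: 'object',
        properties: {
          city: { type: 'string', description: 'City name, e.g. Paris' }
        },
        required: ['city']
      }
    }
  }
];

const response = await openai.chat.completions.create({
  model: 'gpt-4-turbo',
  messages: [{ role: 'user', content: "What's the weather in Paris?" }],
  tools,
});

// The tool calls requested by the model appear in the Portkey logs
// alongside the rest of the request and response
console.log(response.choices[0].message.tool_calls);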

Fine-Tuning

Please refer to our fine-tuning guides to take advantage of Portkey's advanced continuous fine-tuning capabilities.
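As a minimal sketch, a standard OpenAI fine-tuning job can be created through the same Portkey-configured client (the training file path and model below are placeholders; see the guides for Portkey-specific options):

import fs from 'fs';

// Reusing the Portkey-configured OpenAI client from above

// Upload a JSONL training file (the path is a placeholder)
const file = await openai.files.create({
  file: fs.createReadStream('training_data.jsonl'),
  purpose: 'fine-tune',
});

// Start a fine-tuning job on the uploaded file
const job = await openai.fineTuning.jobs.create({
  training_file: file.id,
  model: 'gpt-4o-mini-2024-07-18', // any fine-tunable OpenAI model
});

console.log(job.id, job.status);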

Image Generation

Portkey supports multiple modalities for OpenAI, and you can make image generation requests through Portkey's AI Gateway the same way as completion calls.

// Define the OpenAI client as shown above

const image = await openai.images.generate({
  model: "dall-e-3",
  prompt: "Lucy in the sky with diamonds",
  size: "1024x1024"
})

Portkey's fast AI Gateway captures the details of the request on your Portkey dashboard. On the logs screen, you can see this call along with its full request and response data.

More information on image generation is available in the API Reference.

Audio - Transcription, Translation, and Text-to-Speech

Portkey's multimodal Gateway also supports the audio methods on the OpenAI API. The tts-1, tts-1-hd, and whisper-1 models are supported.

Check out the below guides for more info:

Text-to-Speech
Speech-to-Text
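Illustratively, both directions work through the same Portkey-configured client (the file paths below are placeholders, and the voice is one of OpenAI's preset voices):

import fs from 'fs';

// Reusing the Portkey-configured OpenAI client from above

// Speech-to-text: transcribe an audio file (path is a placeholder)
const transcription = await openai.audio.transcriptions.create({
  file: fs.createReadStream('speech.mp3'),
  model: 'whisper-1',
});
console.log(transcription.text);

// Text-to-speech: synthesize audio and write it to disk
const speech = await openai.audio.speech.create({
  model: 'tts-1',
  voice: 'alloy',
  input: 'Today is a wonderful day!',
});
fs.writeFileSync('speech_out.mp3', Buffer.from(await speech.arrayBuffer()));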

Managing Organizations and Projects

If you belong to multiple organizations or access your projects through a legacy user API key, you can specify which organization and project are used for an API request.

In Portkey, you can attach this as a header, as part of the config, or within the OpenAI virtual key.

OpenAI Virtual Keys

You can specify OpenAI's organization and project IDs while defining a Virtual Key.

In the Gateway Config

You can also specify the organization and project details in the config root or within a target.

{
	"virtual_key": "open-ai-key-66a67d",
	"openai_organization": "org-MoQxcZmsvbzVKibXlRMuAHXm",
	"openai_project": "$PROJECT_ID"
}
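To attach these details as headers instead, one option is the OpenAI SDK's own organization and project client options alongside the Portkey headers (a sketch assuming the gateway passes through the standard OpenAI-Organization and OpenAI-Project headers the SDK sets):

import OpenAI from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai'

const openai = new OpenAI({
  apiKey: 'OPENAI_API_KEY',
  organization: 'org-MoQxcZmsvbzVKibXlRMuAHXm', // sent as the OpenAI-Organization header
  project: '$PROJECT_ID', // sent as the OpenAI-Project header
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    provider: "openai",
    apiKey: "PORTKEY_API_KEY"
  })
});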

Portkey Features

Portkey supports its complete set of features via the OpenAI SDK, so you don't need to migrate away from it.

Please find more information in the relevant sections of the documentation.
