With Portkey, you can take advantage of features like fast AI gateway access, observability, prompt management, and more, all while ensuring the secure management of your LLM API keys through an integration system.
Provider Slug: azure-openai

Portkey SDK Integration with Azure OpenAI

Portkey provides a consistent API to interact with models from various providers. To integrate Azure OpenAI with Portkey:

Creating Your Azure OpenAI Integration

This integration is for all OpenAI models deployed on either Azure OpenAI or Azure AI Foundry.
Integrate Azure OpenAI models with Portkey to centrally manage your AI models and deployments. This guide walks you through setting up the integration using API key authentication.

Prerequisites

Before creating your integration, you’ll need:
  • An active Azure account
  • Access to your Azure portal
  • A model deployment on Azure (e.g., GPT-4, GPT-4o-mini)

Step 1: Start Creating Your Integration

Navigate to the Integrations page in your Portkey dashboard and select Azure OpenAI as your provider.
Creating Azure OpenAI Integration

Step 2: Configure Integration Details

Fill in the basic information for your integration:
  • Name: A descriptive name for this integration (e.g., “Azure OpenAI Production”)
  • Short Description: Optional context about this integration’s purpose
  • Slug: A unique identifier used in API calls (e.g., “azure-openai-prod”)

Step 3: Set Up Authentication

Portkey supports three authentication methods for Azure OpenAI. For most use cases, we recommend using the Default (API Key) method.
Complete Integration Form

Gather Your Azure Credentials

From your Azure portal, you’ll need to collect:
Azure Portal Overview

Enter Credentials in Portkey

  1. Navigate to your model deployment in Azure
  2. Click on the deployment to view details
  3. Copy the API Key from the authentication section
We recommend importing your Azure details (resource name, deployment details, API version) directly from your Target URI. Simply copy the target URL and import it.
Import from Target URI
  1. Azure Resource Name: Get your resource name from Azure
  2. API Version: Note the API version and enter it in the given field
  3. Alias Name: A Portkey-specific field for accessing the model - name it as you prefer
  4. Foundation Model: Select a foundation model from the list that matches your deployment. This helps Portkey track costs and metrics. If your model isn’t listed, choose a similar model type to begin with.

Adding Multiple Models to Your Azure OpenAI Integration

You can deploy multiple models through a single Azure OpenAI integration by adding multiple deployments under the same integration.
Add Multiple Models
Follow the same steps as above for each additional model deployment.

1. Install the Portkey SDK

Add the Portkey SDK to your application to interact with Azure OpenAI’s API through Portkey’s gateway.
npm install --save portkey-ai

2. Initialize Portkey with the Azure Provider

Set up Portkey with your Azure Integration as part of the initialization configuration. You can create a provider for Azure in the Portkey UI.
import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
    provider:"@AZURE_PROVIDER" // Your Azure Provider Slug
})

3. Invoke Chat Completions with Azure OpenAI

Use the Portkey instance to send requests to your Azure deployments. You can also override the provider slug directly in the API call if needed.
const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt4', // This would be your deployment or model name
});

console.log(chatCompletion.choices);

Managing Azure OpenAI Prompts

You can manage all prompts to Azure OpenAI in the Prompt Library. All the current models of OpenAI are supported and you can easily start testing different prompts. Once you’re ready with your prompt, you can use the portkey.prompts.completions.create interface to use the prompt in your application.

Image Generation

Portkey supports multiple modalities for Azure OpenAI and you can make image generation requests through Portkey’s AI Gateway the same way as making completion calls.
import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY",
    provider:"@DALL-E_PROVIDER" // Referencing a Dall-E Azure deployment with Provider Slug
})

const image = await portkey.images.generate({
  prompt:"Lucy in the sky with diamonds",
  size:"1024x1024"
})
Portkey’s fast AI gateway captures information about each request on your Portkey Dashboard. On your logs screen, you’d be able to see this request along with its request and response details.
Log view for an image generation request on Azure OpenAI
More information on image generation is available in the API Reference.

Making Requests Without Model Catalog

Here’s how you can pass your Azure OpenAI details & secrets directly without using the Model Catalog feature.

Key Mapping

In a typical Azure OpenAI request,
curl https://{YOUR_RESOURCE_NAME}.openai.azure.com/openai/deployments/{YOUR_DEPLOYMENT_NAME}/chat/completions?api-version={API_VERSION} \
  -H "Content-Type: application/json" \
  -H "api-key: {YOUR_API_KEY}" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant"
      },
      {
        "role": "user",
        "content": "what is a portkey?"
      }
    ]
}'
Parameter               Node SDK                      Python SDK                    REST Headers
AZURE RESOURCE NAME     azureResourceName             azure_resource_name           x-portkey-azure-resource-name
AZURE DEPLOYMENT NAME   azureDeploymentId             azure_deployment_id           x-portkey-azure-deployment-id
API VERSION             azureApiVersion               azure_api_version             x-portkey-azure-api-version
AZURE API KEY           Authorization ("Bearer" + key)  Authorization ("Bearer" + key)  Authorization
AZURE MODEL NAME        azureModelName                azure_model_name              x-portkey-azure-model-name

Example

import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY",
    provider: "azure-openai",
    azureResourceName: "AZURE_RESOURCE_NAME",
    azureDeploymentId: "AZURE_DEPLOYMENT_NAME",
    azureApiVersion: "AZURE_API_VERSION",
    azureModelName: "AZURE_MODEL_NAME"
    Authorization: "Bearer API_KEY"
})

How to Pass JWT (JSON Web Tokens)

If you have configured fine-grained access for Azure OpenAI and need to use JSON web token (JWT) in the Authorization header instead of the regular API Key, you can use the forwardHeaders parameter to do this.
import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY",
    provider: "azure-openai",
    azureResourceName: "AZURE_RESOURCE_NAME",
    azureDeploymentId: "AZURE_DEPLOYMENT_NAME",
    azureApiVersion: "AZURE_API_VERSION",
    azureModelName: "AZURE_MODEL_NAME",
    Authorization: "Bearer JWT_KEY", // Pass your JWT here
    forwardHeaders: [ "Authorization" ]
})
For further questions on custom Azure deployments or fine-grained access tokens, reach out to us at [email protected]

Next Steps

The complete list of features supported in the SDK is available at the link below.

SDK

You’ll find more information in the relevant sections:
  1. Add metadata to your requests
  2. Add gateway configs to your Azure OpenAI requests
  3. Tracing Azure OpenAI requests
  4. Set up a fallback from OpenAI to Azure OpenAI APIs