Azure AI Foundry provides a unified platform for enterprise AI operations, model building, and application development. With Portkey, you can seamlessly integrate with various models available on Azure AI Foundry and take advantage of features like observability, prompt management, fallbacks, and more.

Quick Start

# 1. Install: pip install portkey-ai
# 2. Add an @azure-foundry provider in Model Catalog
# 3. Use it:

from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    provider="@azure-foundry"
)

response = portkey.chat.completions.create(
    model="DeepSeek-V3-0324",  # Your deployed model name
    messages=[{"role": "user", "content": "Tell me about cloud computing"}]
)

print(response.choices[0].message.content)

Add Provider in Model Catalog

To integrate Azure AI Foundry with Portkey, you’ll create a provider in the Model Catalog. This securely stores your Azure AI Foundry credentials, allowing you to use a simple identifier in your code instead of handling sensitive authentication details directly.

Understanding Azure AI Foundry Deployments

Azure AI Foundry offers three different ways to deploy models, each with unique endpoints and configurations:
  1. AI Services: Azure-managed models accessed through Azure AI Services endpoints
  2. Managed: User-managed deployments running on dedicated Azure compute resources
  3. Serverless: Seamless, scalable deployment without managing infrastructure
You can learn more about Azure AI Foundry deployment options in Microsoft's documentation.

Creating Your Azure AI Foundry Provider

Integrate Azure AI Foundry with Portkey to centrally manage your AI models and deployments. This guide walks you through setting up the provider using API key authentication.

Prerequisites

Before creating your provider, you’ll need:
  • An active Azure AI Foundry account
  • Access to your Azure AI Foundry portal
  • A deployed model on Azure AI Foundry

Step 1: Navigate to Model Catalog

Go to Model Catalog → Add Provider and select Azure AI Foundry as your provider.

Step 2: Configure Provider Details

Fill in the basic information for your provider:
  • Name: A descriptive name for this provider (e.g., “Azure AI Production”)
  • Short Description: Optional context about this provider’s purpose
  • Slug: A unique identifier used in API calls (e.g., “@azure-ai-prod”)
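
The slug is what you reference in code: pass it as the provider when constructing the client, exactly as in the Quick Start. For example, with the "@azure-ai-prod" slug above:

from portkey_ai import Portkey

# The provider slug resolves to the Azure credentials stored in Model Catalog
portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    provider="@azure-ai-prod"  # the slug configured in this step
)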

Step 3: Set Up Authentication

Portkey supports three authentication methods for Azure AI Foundry. For most use cases, we recommend using the Default (API Key) method.

Gather Your Azure Credentials

In your Azure AI Foundry portal:
  1. Navigate to your model deployment in Azure AI Foundry
  2. Click on the deployment to view details
  3. Copy the API Key from the authentication section
  4. Copy the Target URI - this is your endpoint URL
  5. Note the API Version from your deployment URL
  6. Note the Azure Deployment Name (optional) - only required for Managed deployments

Enter Credentials in Portkey

Paste the API Key, Target URI, and API Version you collected into the corresponding fields of the provider form, then save.

Adding Multiple Models to Your Azure AI Foundry Provider

You can deploy multiple models through a single Azure AI Foundry provider by using Portkey’s custom models feature.

Steps to Add Additional Models

  1. Navigate to your Azure AI Foundry provider in Model Catalog
  2. Select the Model Provisioning step
  3. Click Add Model in the top-right corner

Configure Your Model

Enter the following details for your Azure deployment:
  • Model Slug: Use your Azure model deployment name exactly as it appears in Azure AI Foundry
  • Short Description: Optional description for team reference
  • Model Type: Select “Custom model”
  • Base Model: Choose the model that matches your deployment’s API structure (e.g., select gpt-4 for GPT-4 deployments). This is for reference only; if you can’t find the exact model, choose a similar one.
  • Custom Pricing: Enable to track costs with your negotiated rates
Once configured, this model will be available alongside others in your provider, allowing you to manage multiple Azure deployments through a single set of credentials.
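
With several models provisioned, the same client can target any of them by model slug. A minimal sketch, where both slugs are hypothetical deployment names:

# Both requests reuse the credentials of the same Azure AI Foundry provider
summary = portkey.chat.completions.create(
    model="my-gpt4-deployment",  # hypothetical Model Slug
    messages=[{"role": "user", "content": "Summarize the benefits of managed deployments"}]
)

code = portkey.chat.completions.create(
    model="my-deepseek-deployment",  # hypothetical Model Slug
    messages=[{"role": "user", "content": "Write a binary search in Python"}]
)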

Using Anthropic Models on Azure AI Foundry

Azure AI Foundry supports Anthropic models (Claude) through a slightly different configuration process. Follow these steps to integrate Anthropic models with Portkey.

Step 1: Create an Azure Foundry Provider

When creating the provider for Anthropic models, you’ll need to configure the following:
  1. Azure API Key: Copy this from your Azure Foundry console
  2. Azure Target URI: From your Foundry console, you’ll get a URL like:
    https://resource-name-swedencentral.services.ai.azure.com/anthropic/v1/messages
    
    Trim the URL so it ends at /anthropic:
    https://resource-name-swedencentral.services.ai.azure.com/anthropic
    
For Anthropic models on Azure Foundry, you don’t need to provide the Azure API Version or Deployment Name fields.

Step 2: Configure Workspace Provisioning

After setting up the provider credentials, proceed with the workspace provisioning step as usual.

Step 3: Add Your Anthropic Model

In the Model Provisioning step:
  1. Click the + Add Model button at the top
  2. Configure the model with these details:
    • Model Slug: Enter your deployment name from the Azure Foundry console
    • Base Model: Search for and select your Anthropic model (e.g., claude-opus-4-5-20251101, claude-sonnet-4-5-20250929, claude-haiku-4-5-20251001, claude-opus-4-1-20250805)
  3. Save the configuration

Making Requests to Anthropic Models

Once configured, you can call your Anthropic model using the Model Slug you saved:
import Portkey from 'portkey-ai';

const client = new Portkey({
  apiKey: 'PORTKEY_API_KEY',
  provider: '@AZURE_FOUNDRY_ANTHROPIC_PROVIDER'
});

const response = await client.chat.completions.create({
  messages: [{ role: "user", content: "Hello, Claude!" }],
  model: "your-azure-deployment-name", // Use the Model Slug you configured
});

console.log(response.choices[0].message.content);

Using the /messages Route with Azure Foundry Anthropic Models

Access Anthropic models on Azure AI Foundry through Anthropic’s native /messages endpoint using Portkey’s SDK or Anthropic’s SDK.
The /messages route provides access to Anthropic-native features like extended thinking, prompt caching, and native streaming formats when using Claude models on Azure AI Foundry.
curl --location 'https://api.portkey.ai/v1/messages' \
--header 'Content-Type: application/json' \
--header 'x-portkey-api-key: YOUR_PORTKEY_API_KEY' \
--data-raw '{
    "model": "@your-azure-foundry-anthropic-provider/your-azure-deployment-name",
    "max_tokens": 1024,
    "messages": [
      {
        "role": "user",
        "content": "Hello, Claude!"
      }
    ]
  }'
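
Since the route is Anthropic-native, Anthropic's own Python SDK can also be pointed at Portkey's gateway. A minimal sketch, assuming the SDK's own api_key goes unused once the x-portkey-api-key header is set:

from anthropic import Anthropic

client = Anthropic(
    base_url="https://api.portkey.ai",
    api_key="placeholder",  # assumption: unused, auth happens via the Portkey header
    default_headers={"x-portkey-api-key": "YOUR_PORTKEY_API_KEY"}
)

response = client.messages.create(
    model="@your-azure-foundry-anthropic-provider/your-azure-deployment-name",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello, Claude!"}]
)

print(response.content[0].text)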

Sample Request

Once you’ve created your provider, you can start making requests to Azure AI Foundry models through Portkey.
Install the Portkey SDK with npm
npm install portkey-ai
import Portkey from 'portkey-ai';

const client = new Portkey({
  apiKey: 'PORTKEY_API_KEY',
  provider:'@AZURE_FOUNDRY_PROVIDER'
});

async function main() {
  const response = await client.chat.completions.create({
    messages: [{ role: "user", content: "Tell me about cloud computing" }],
    model: "DeepSeek-V3-0324", // Replace with your deployed model name
  });

  console.log(response.choices[0].message.content);
}

main();

Advanced Features

Function Calling

Azure AI Foundry supports function calling (tool calling) for compatible models. Here’s how to implement it with Portkey:
tools = [{
    "type": "function",
    "function": {
        "name": "getWeather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location"]
        }
    }
}]

response = portkey.chat.completions.create(
    model="DeepSeek-V3-0324",  # Use a model that supports function calling
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather like in Delhi?"}
    ],
    tools=tools,
    tool_choice="auto"
)

print(response.choices[0])
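
When the model elects to call a tool, the reply carries an OpenAI-style tool_calls array instead of text content. A short sketch of unpacking it, assuming the model chose to call getWeather:

import json

# Read the first tool call and parse its JSON-encoded arguments
tool_call = response.choices[0].message.tool_calls[0]
arguments = json.loads(tool_call.function.arguments)
print(tool_call.function.name, arguments["location"])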

Vision Capabilities

Process images alongside text using Azure AI Foundry’s vision capabilities:
response = portkey.chat.completions.create(
    model="Llama-4-Scout-17B-16E",  # Use a model that supports vision
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
                    }
                }
            ]
        }
    ],
    max_tokens=500
)

print(response.choices[0].message.content)

Structured Outputs

Get consistent, parseable responses in specific formats:
import json

response = portkey.chat.completions.create(
    model="cohere-command-a",  # Use a model that supports response formats
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "List the top 3 cloud providers with their main services"}
    ],
    response_format={"type": "json_object"},
    temperature=0
)

print(json.loads(response.choices[0].message.content))

Portkey Gateway Features

Portkey provides advanced gateway features for Azure AI Foundry deployments:

Fallbacks

Create fallback configurations to ensure reliability when working with Azure AI Foundry models:
{
  "strategy": {
    "mode": "fallback"
  },
  "targets": [
    {
      "provider":"@azure-foundry-virtual-key",
      "override_params": {
        "model": "DeepSeek-V3-0324"
      }
    },
    {
      "provider":"@openai-virtual-key",
      "override_params": {
        "model": "gpt-4o"
      }
    }
  ]
}
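
A config like this can be attached when constructing the client (or saved in the Portkey dashboard and referenced by its config ID). A minimal sketch with the fallback config inlined as a dict:

from portkey_ai import Portkey

fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"provider": "@azure-foundry-virtual-key", "override_params": {"model": "DeepSeek-V3-0324"}},
        {"provider": "@openai-virtual-key", "override_params": {"model": "gpt-4o"}}
    ]
}

# Requests try Azure AI Foundry first and fall back to OpenAI on failure
portkey = Portkey(api_key="PORTKEY_API_KEY", config=fallback_config)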

Load Balancing

Distribute requests across multiple models for optimal performance:
{
  "strategy": {
    "mode": "loadbalance"
  },
  "targets": [
    {
      "provider":"@azure-foundry-virtual-key-1",
      "override_params": {
        "model": "DeepSeek-V3-0324"
      },
      "weight": 0.7
    },
    {
      "provider":"@azure-foundry-virtual-key-2",
      "override_params": {
        "model": "cohere-command-a"
      },
      "weight": 0.3
    }
  ]
}

Conditional Routing

Route requests based on specific conditions like user type or content requirements:
{
  "strategy": {
    "mode": "conditional",
    "conditions": [
      {
        "query": { "metadata.user_type": { "$eq": "premium" } },
        "then": "high-performance-model"
      },
      {
        "query": { "metadata.content_type": { "$eq": "code" } },
        "then": "code-specialized-model"
      }
    ],
    "default": "standard-model"
  },
  "targets": [
    {
      "name": "high-performance-model",
      "provider":"@azure-foundry-virtual-key-1",
      "override_params": {
        "model": "Llama-4-Scout-17B-16E"
      }
    },
    {
      "name": "code-specialized-model",
      "provider":"@azure-foundry-virtual-key-2",
      "override_params": {
        "model": "DeepSeek-V3-0324"
      }
    },
    {
      "name": "standard-model",
      "provider":"@azure-foundry-virtual-key-3",
      "override_params": {
        "model": "cohere-command-a"
      }
    }
  ]
}
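
Since the conditions query request metadata, each call must send the fields being matched. A sketch using the SDK's with_options helper to attach metadata per request, assuming the client was constructed with the routing config above:

# Premium users match the first condition and route to high-performance-model
response = portkey.with_options(
    metadata={"user_type": "premium"}
).chat.completions.create(
    messages=[{"role": "user", "content": "Tell me about cloud computing"}],
    model="Llama-4-Scout-17B-16E"  # the matched target's override_params takes precedence
)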

Managing Prompts

You can manage all prompts to Azure AI Foundry in the Prompt Library. Once you’ve created and tested a prompt in the library, use the portkey.prompts.completions.create interface to use the prompt in your application.
prompt_completion = portkey.prompts.completions.create(
    prompt_id="Your Prompt ID",
    variables={
       # The variables specified in the prompt
    }
)

Next Steps