Learn how to integrate Azure AI Foundry with Portkey to access a wide range of AI models with enhanced observability and reliability features.
Azure AI Foundry provides a unified platform for enterprise AI operations, model building, and application development. With Portkey, you can seamlessly integrate with various models available on Azure AI Foundry and take advantage of features like observability, prompt management, fallbacks, and more.
To integrate Azure AI Foundry with Portkey, you’ll first create an integration. Integrations securely store your Azure AI Foundry credentials in Portkey’s vault, so you can use a simple identifier in your code instead of handling sensitive authentication details directly. Navigate to the Integrations section in Portkey and select “Azure AI Foundry” as your provider.
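Once the integration is created, a request only needs that identifier. Here is a minimal preview (assuming a Python client and a placeholder provider identifier; the full setup is covered below):

```python
from portkey_ai import Portkey

# "AZURE_FOUNDRY_PROVIDER" is a placeholder for the identifier Portkey
# assigns to your Azure AI Foundry integration.
client = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="AZURE_FOUNDRY_PROVIDER"
)

# No Azure credentials appear in application code.
response = client.chat.completions.create(
    model="your-azure-deployment-name",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```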
Integrate Azure AI Foundry with Portkey to centrally manage your AI models and deployments. This guide walks you through setting up the integration and the supported authentication methods.
For managed Azure deployments, the required parameters are:
- Azure Managed Client ID: your managed identity’s client ID
- Azure Foundry URL: the base endpoint URL for your deployment, formatted according to your deployment type:
  - AI Services: https://your-resource-name.services.ai.azure.com/models
  - Managed: https://your-model-name.region.inference.ml.azure.com/score
  - Serverless: https://your-model-name.region.models.ai.azure.com
- Azure API Version: the API version to use (e.g., 2024-05-01-preview). This is required if your deployment URL includes an api-version query parameter. For example, if your URL is https://mycompany-ai.westus2.services.ai.azure.com/models?api-version=2024-05-01-preview, the API version is 2024-05-01-preview (see the sketch after this list).
- Azure Deployment Name: optional; required only when a single resource contains multiple deployments.
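If you’re unsure how to split a full deployment URL into those two fields, here is a minimal sketch using only the Python standard library (the URL is a placeholder):

```python
from urllib.parse import urlparse, parse_qs

# Placeholder deployment URL copied from the Azure portal
url = "https://mycompany-ai.westus2.services.ai.azure.com/models?api-version=2024-05-01-preview"

parsed = urlparse(url)
base_url = f"{parsed.scheme}://{parsed.netloc}{parsed.path}"        # -> Azure Foundry URL field
api_version = parse_qs(parsed.query).get("api-version", [None])[0]  # -> Azure API Version field

print(base_url)     # https://mycompany-ai.westus2.services.ai.azure.com/models
print(api_version)  # 2024-05-01-preview
```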
Azure Entra ID provides enterprise-level authentication. To use it, your Azure application needs the Cognitive Services User role. The required parameters are:
- Azure Entra Client ID: your Azure Entra client ID
- Azure Entra Secret: your client secret
- Azure Entra Tenant ID: your tenant ID
- Azure Foundry URL: the base endpoint URL for your deployment, formatted according to your deployment type:
  - AI Services: https://your-resource-name.services.ai.azure.com/models
  - Managed: https://your-model-name.region.inference.ml.azure.com/score
  - Serverless: https://your-model-name.region.models.ai.azure.com
- Azure API Version: the API version to use (e.g., 2024-05-01-preview). As above, this is required if your deployment URL includes an api-version query parameter (e.g., the API version in https://mycompany-ai.westus2.services.ai.azure.com/models?api-version=2024-05-01-preview is 2024-05-01-preview).
- Azure Deployment Name: optional; required only when a single resource contains multiple deployments. Common in Managed deployments.
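These three Entra values are exchanged for a bearer token scoped to Cognitive Services, which is why the application needs the Cognitive Services User role. Portkey performs this exchange for you; the sketch below (assuming the azure-identity package and placeholder values) only illustrates what each parameter is for:

```python
from azure.identity import ClientSecretCredential

# Placeholder values corresponding to the three Entra parameters above
credential = ClientSecretCredential(
    tenant_id="AZURE_ENTRA_TENANT_ID",
    client_id="AZURE_ENTRA_CLIENT_ID",
    client_secret="AZURE_ENTRA_SECRET",
)

# A token with this scope authorizes requests to the Azure AI endpoint
token = credential.get_token("https://cognitiveservices.azure.com/.default")
print(token.token[:20], "...")
```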
Enter the following details for your Azure deployment:
- Model Slug: use your Azure model deployment name exactly as it appears in Azure AI Foundry.
- Short Description: an optional description for team reference.
- Model Type: select “Custom model”.
- Base Model: choose the model that matches your deployment’s API structure (e.g., select gpt-4 for GPT-4 deployments). This is for reference only; if you can’t find the exact model, choose a similar one.
- Custom Pricing: enable this to track costs at your negotiated rates.

Once configured, this model will be available alongside others in your integration, allowing you to manage multiple Azure deployments through a single set of credentials, as shown in the sketch below.
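For example, two deployments configured under the same integration can be called with one client just by switching the model slug (the slugs here are hypothetical):

```python
from portkey_ai import Portkey

client = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="AZURE_FOUNDRY_PROVIDER"  # one set of Azure credentials
)

# Route to different Azure deployments by model slug alone
for model in ["gpt-4-deployment", "deepseek-v3-deployment"]:  # hypothetical slugs
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hello."}]
    )
    print(model, "->", response.choices[0].message.content)
```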
Azure AI Foundry supports Anthropic models (Claude) through a slightly different configuration process. Follow these steps to integrate Anthropic models with Portkey.
Once configured, you can call your Anthropic model using the Model Slug you saved:
NodeJS
```javascript
import Portkey from 'portkey-ai';

const client = new Portkey({
  apiKey: 'PORTKEY_API_KEY',
  provider: '@AZURE_FOUNDRY_ANTHROPIC_PROVIDER'
});

const response = await client.chat.completions.create({
  messages: [{ role: "user", content: "Hello, Claude!" }],
  model: "your-azure-deployment-name", // Use the Model Slug you configured
});

console.log(response.choices[0].message.content);
```
Python
```python
from portkey_ai import Portkey

client = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="AZURE_FOUNDRY_ANTHROPIC_PROVIDER"
)

response = client.chat.completions.create(
    model="your-azure-deployment-name",  # Use the Model Slug you configured
    messages=[
        {"role": "user", "content": "Hello, Claude!"}
    ]
)

print(response.choices[0].message.content)
```
Once you’ve created your integration, you can start making requests to Azure AI Foundry models through Portkey.
NodeJS
Install the Portkey SDK with npm
```bash
npm install portkey-ai
```
```javascript
import Portkey from 'portkey-ai';

const client = new Portkey({
  apiKey: 'PORTKEY_API_KEY',
  provider: '@AZURE_FOUNDRY_PROVIDER'
});

async function main() {
  const response = await client.chat.completions.create({
    messages: [{ role: "user", content: "Tell me about cloud computing" }],
    model: "DeepSeek-V3-0324", // Replace with your deployed model name
  });
  console.log(response.choices[0].message.content);
}

main();
```
Python
Install the Portkey SDK with pip
```bash
pip install portkey-ai
```
```python
from portkey_ai import Portkey

client = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="AZURE_FOUNDRY_PROVIDER"
)

response = client.chat.completions.create(
    model="DeepSeek-V3-0324",  # Replace with your deployed model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me about cloud computing"}
    ]
)

print(response.choices[0].message.content)
```
Get consistent, parseable responses in specific formats:
Node.js
```javascript
const response = await portkey.chat.completions.create({
  model: "cohere-command-a", // Use a model that supports response formats
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "List the top 3 cloud providers with their main services" }
  ],
  response_format: { type: "json_object" },
  temperature: 0
});

console.log(JSON.parse(response.choices[0].message.content));
```
Python
```python
import json

response = portkey.chat.completions.create(
    model="cohere-command-a",  # Use a model that supports response formats
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "List the top 3 cloud providers with their main services"}
    ],
    response_format={"type": "json_object"},
    temperature=0
)

print(json.loads(response.choices[0].message.content))
```
You can manage all prompts to Azure AI Foundry in the Prompt Library. Once you’ve created and tested a prompt in the library, use the `portkey.prompts.completions.create` interface to use the prompt in your application.
NodeJS
```javascript
const promptCompletion = await portkey.prompts.completions.create({
  promptID: "Your Prompt ID",
  variables: {
    // The variables specified in the prompt
  }
})
```
Python
```python
prompt_completion = portkey.prompts.completions.create(
    prompt_id="Your Prompt ID",
    variables={
        # The variables specified in the prompt
    }
)
```