We are thrilled to introduce the Model Catalog, a powerful new feature designed to give your organization end-to-end control, governance, and visibility over your AI models.

The Model Catalog is the evolution of our Virtual Keys feature, built to handle the complexities of enterprise-scale AI operations.

This guide will walk you through what the Model Catalog is, how it enhances your current workflow, and what the experience looks like for every user in your organization.

Why Upgrade to the Model Catalog?

First, What is the Model Catalog?

The Model Catalog fundamentally upgrades the Virtual Key experience by introducing a centralized, organization-level management layer. It addresses key governance pain points you have faced, allowing for:

  1. Centralized Provider Management: Create a single (or multiple) provider integration (e.g., for Azure, OpenAI, Bedrock) at the organization level, and securely provision it to multiple workspaces without re-entering credentials.
  2. Workspace Provisioning: Easily control which workspaces have access to which provider integrations, and set distinct budgets and rate limits for each workspace from a single screen.
  3. Granular Model Provisioning: For any given provider integration, you can specify exactly which models (e.g., gpt-4o, claude-4-sonnet) are available, preventing users from accessing unapproved, expensive, or deprecated models.
  4. Simplified Model Calling: Your developers can now switch between providers and models directly in their API call using a simple model = "@provider/model_name" format, without needing to manage different virtual keys or complex configs for simple requests.

How It Enhances the Virtual Key Experience

The Model Catalog is designed to be a seamless upgrade. Here’s a quick comparison:

| Feature | Old Way (Virtual Keys) | New Way (Model Catalog) |
| --- | --- | --- |
| Creation | Manually create a virtual key in each workspace, even with the same credentials. | Create one Integration at the Org level and provision it to many workspaces. |
| Budgeting | Budgets are set per virtual key; managing shared provider costs across teams is difficult. | Assign specific budgets and rate limits per workspace, directly at the org level. |
| Model Access | A virtual key grants access to all models available under that provider key; control requires complex configs or guardrails. | Define an explicit allow-list of models for each Integration; workspaces only see what you’ve enabled. |
| Making Calls | Use the virtual_key header, or bind a virtual key to a config and that config to an API key. | Simply pass model: "@provider_slug/model_slug" in the request body. The old way still works perfectly. |
| Visibility | Org Admins have no central view of the provider credentials used across workspaces. | Org Admins get a central Integrations dashboard showing all connected providers, including those created by workspaces. |

Is It Backwards Compatible? (The Important Question)

Yes, the Model Catalog is 100% backwards compatible.

  • Your existing Virtual Keys are automatically migrated and appear as “AI Providers” in the Model Catalog, scoped to their original workspace.
  • No code will break. All your existing applications and scripts that use the virtual_key header in requests or configs will continue to work exactly as they do today.

You can adopt the new features at your own pace without any disruption to your production workloads.

So, What Has Changed?

While the Model Catalog is an important upgrade, the transition is designed to be completely seamless. Here’s the most important thing you need to know:

  • Your existing Virtual Keys are automatically migrated. They are now called “AI Providers” and keep their original slugs. All your existing code, configs, and prompts will continue to work without any changes.

With that key point in mind, let’s explore the new capabilities this unlocks for every role in your organization and how your daily workflows are enhanced.

The New Experience by Role

For Organization Admins: Central Command & Control

As an Org Admin, you will see a new Integrations tab in your organization settings. This is your central hub for managing all LLM providers.

  1. Connect an Integration: Instead of creating a virtual key in a workspace, you now “Connect” a provider here. The process of adding credentials is the same. You can create multiple integrations for the same provider (e.g., “Azure-Prod” and “Azure-Dev”).
  2. Provision to Workspaces: In the “Workspace Provisioning” step, you’ll see a list of all your workspaces. You can toggle access for each one and set specific budgets and rate limits. You can also choose to automatically provision this integration for any new workspaces created in the future.
  3. Provision Models: In the “Model Provisioning” step, you can specify exactly which models are accessible through this integration. You can choose to enable all models or select them individually.

There is also a “Workspace-Created” tab to view and manage any integrations created by Workspace Admins themselves, giving you full visibility.

For Workspace Admins: Inherit and Customize

As a Workspace Admin, you will now see Model Catalog in your sidebar. The old “Virtual Keys” section will be marked as deprecated.

  • Inherited Providers: In the “AI Providers” tab, you will automatically see all the integrations your Org Admin has provisioned for your workspace.
  • Creating New Providers: You still have flexibility:
    1. Inherit from Org: You can create a new provider that inherits from an Org-level integration. This is useful for creating multiple providers with more restrictive budgets than the one assigned to your workspace.
    2. Create a New Integration (The Old Way): If enabled by your Org Admin, you can still create a brand new integration exclusively for your workspace, just like you created Virtual Keys before.

For Workspace Members (Developers): A Simpler, Better DX

As a developer, your experience is significantly streamlined.

  • The Model Garden: A new “Models” tab provides a complete gallery of every single model you have access to across all providers in your workspace. You can click on any model to get ready-to-use code snippets.

  • Simplified API Calls: You no longer need to worry about which virtual key to use for which model. With a single Portkey API key, you can call any model you have access to by specifying the provider and model in the model parameter:

    from portkey_ai import Portkey

    # A single Portkey API key can call any provider you're provisioned for
    client = Portkey(api_key="PORTKEY_API_KEY")

    # Switch models and providers on the fly!
    client.chat.completions.create(
        model="@openai-prod/gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}]
    )

    client.chat.completions.create(
        model="@bedrock-us/claude-3-sonnet-v1",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    

How Your Workflows Change: A Practical Guide

The Model Catalog enhances every area where you previously used Virtual Keys. Let’s break down how each of your existing workflows translates to the new, more powerful system.

1. From “Creating Virtual Keys” to “Creating AI Providers”

The Old Way

You created a Virtual Key inside a specific workspace. This had to be repeated for every workspace needing access to the same provider credentials.

The New Way

You now create a central Integration at the Org level and provision it to many workspaces. You can also still create workspace-only integrations, or “inherit” from an Org integration to create providers with stricter limits.

What’s Better? This “create once, provision many” model saves significant time, reduces the risk of configuration errors, and gives you a single place to manage provider credentials and model access for your entire organization.


2. From “Sending Virtual Keys in Requests” to “Using the Model Parameter”

The Old Way

You sent the Virtual Key slug in the virtual_key header of your request.
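
For reference, here is a minimal sketch of that pattern with the Python SDK, using the same placeholder slug ("my-azure-prod") as the example further below:

from portkey_ai import Portkey

# The old way: bind a provider by passing its slug as virtual_key,
# which the SDK sends as the virtual_key header on every request
client = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="my-azure-prod"  # placeholder slug
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)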

The New Way

The virtual_key header is fully backward compatible and will continue to work. However, the recommended and more powerful method is to specify the provider and model directly in the model parameter of your request body.

We’ve renamed “Virtual Keys” to AI Providers at the workspace level. To reference one, simply prefix its slug with an @ symbol in the model parameter of your inference requests.

What’s Better? This method is more explicit and keeps all model-related information in one place—the model parameter. It eliminates the need for a separate header and makes switching between models and providers incredibly simple.

from portkey_ai import Portkey

# Note: Your AI Provider slug is "my-azure-prod"
# Note: The model you want to use is "gpt-4o"
client = Portkey(api_key="PORTKEY_API_KEY")

response = client.chat.completions.create(
    model="@my-azure-prod/gpt-4o",  # The new, recommended format
    messages=[{"role": "user", "content": "Hello!"}]
)

3. From “Using Virtual Keys in Configs” to “Using Models in Configs”

The Old Way

You added a virtual_key field to your Portkey Config, either at the root level or inside a strategy target (like fallback or load balance).

The New Way

This continues to work perfectly. For new configs, you can now specify the model directly in a target’s override_params. This unlocks powerful, multi-provider strategies.

What’s Better? You are no longer limited to falling back between virtual keys of the same provider. You can now create a fallback strategy from an Azure OpenAI deployment to a direct OpenAI integration, or from Bedrock to Google AI, all within a single, elegant config.

Example: Multi-Provider Fallback Config

Here’s a config that tries a model on Azure first. If that fails, it automatically falls back to the same model on AWS Bedrock.

{
  "strategy": {
    "mode": "fallback",
    "on_status_codes": [429, 500, 503]
  },
  "targets": [
    {
      "override_params": {
        "model": "@azure-us-east/gpt-4o"
      }
    },
    {
      "override_params": {
        "model": "@bedrock-main/anthropic.claude-3-sonnet-20240229-v1:0"
      }
    }
  ]
}

When you make a request using this config, you don’t need to specify a model at all. Portkey handles the complex routing for you.
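
As a sketch, assuming the config above has been saved in Portkey and is attached by its ID (the ID "pc-fallback-xyz" here is hypothetical):

from portkey_ai import Portkey

# Attach the saved fallback config; the gateway picks the target and model
client = Portkey(
    api_key="PORTKEY_API_KEY",
    config="pc-fallback-xyz"  # hypothetical saved-config ID
)

# No model parameter needed; each target's override_params supplies it
response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello!"}]
)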

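The same approach extends to the other routing modes mentioned above. For instance, a sketch of a load-balance config that splits traffic between two providers (the slugs and weights are illustrative) could look like:

{
  "strategy": {
    "mode": "loadbalance"
  },
  "targets": [
    {
      "override_params": { "model": "@openai-prod/gpt-4o" },
      "weight": 0.7
    },
    {
      "override_params": { "model": "@bedrock-main/anthropic.claude-3-sonnet-20240229-v1:0" },
      "weight": 0.3
    }
  ]
}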

4. From “Using Virtual Keys in Prompts” to… Nothing!

The Old Way

When creating a prompt template, you selected a Virtual Key from a dropdown to power the prompt.

The New Way

No action is required from you. We’ve handled this automatically.

What’s Better? A seamless, zero-effort migration. Your existing prompt templates have been automatically updated behind the scenes to use the new AI Provider system. They will continue to work exactly as before without any changes on your end. When creating new prompts, you’ll now select from a list of AI Providers.


How to Get Started

The Model Catalog is currently available as a feature-flagged release.

  1. Contact Us: Reach out to the Portkey team, and we will enable the Model Catalog for your organization.
  2. Test in a Dev Org: We recommend enabling it for a non-production or sandbox organization first. This allows you to explore the new workflows, set up your first Org-level integrations, and prepare your teams.
  3. Plan Your Rollout: Once you’re comfortable, we can enable it for your production organization. You can then begin to migrate your workspace-level virtual keys to centrally managed integrations.

We are incredibly excited for you to experience the next level of control and efficiency with the Portkey Model Catalog.