Learn how to upgrade to the Model Catalog, the replacement for Virtual Keys
We are thrilled to introduce the Model Catalog, a powerful new feature designed to give your organization end-to-end control, governance, and visibility over your AI models. The Model Catalog is the evolution of our Virtual Keys feature, built to handle the complexities of enterprise-scale AI operations. This guide will walk you through what the Model Catalog is, how it enhances your current workflow, and what the experience looks like for every user in your organization.
With the Model Catalog, Org Admins control exactly which models (e.g. `gpt-4o`, `claude-4-sonnet`) are available, preventing users from accessing unapproved, expensive, or deprecated models. Developers can then call any provisioned model using the simple `model = "@provider/model_name"` format, without needing to manage different virtual keys or complex configs for simple requests.

| Feature | Old Way (Virtual Keys) | New Way (Model Catalog) |
|---|---|---|
| Creation | Manually create a virtual key in each workspace, even with the same credentials. | Create one Integration at the Org level and provision it to many workspaces. |
| Budgeting | Budgets are set per virtual key; managing shared provider costs across teams is difficult. | Assign specific budgets and limits per workspace, directly at the org level. |
| Model Access | A virtual key grants access to all models available under that provider key; control requires complex configs or guardrails. | Define an explicit allow-list of models for each Integration. Workspaces only see what you've enabled. |
| Making Calls | Use the `virtual_key` header, or bind a virtual key to a config and that config to an API key. | Simply pass `model: "@provider_slug/model_slug"` in the request body. The old way still works perfectly. |
| Visibility | Org Admins have no central view of the provider credentials in use across workspaces. | Org Admins get a central Integrations dashboard showing all connected providers, including those created by workspaces. |
Your existing `virtual_key` header in requests or configs will continue to work exactly as it does today.

Previously, you passed a virtual key in the `virtual_key` header of your request. That header is fully backward compatible and will continue to work. However, the recommended and more powerful method is to specify the provider and model directly in the `model` parameter of your request body, using the `@` symbol in your inference requests.
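As a sketch of the difference, the two request styles can be compared side by side. The header name, key values, and provider slugs below are illustrative placeholders, not values from this guide:

```python
# Old way: the provider is resolved from a separate virtual key header.
# (Header name and key value are placeholders for illustration.)
old_headers = {"x-portkey-virtual-key": "openai-vk-xxxxx"}
old_body = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
}

# New way: the provider slug travels inside the model parameter itself,
# prefixed with "@" -- no extra header needed.
new_body = {
    "model": "@openai-prod/gpt-4o",  # "@provider_slug/model_slug"
    "messages": [{"role": "user", "content": "Hello"}],
}

def split_model(model: str):
    """Split an '@provider/model' string; plain model names have no provider."""
    if model.startswith("@"):
        provider, _, name = model[1:].partition("/")
        return provider, name
    return None, model

print(split_model(new_body["model"]))  # ('openai-prod', 'gpt-4o')
print(split_model(old_body["model"]))  # (None, 'gpt-4o')
```

Switching providers becomes a one-string change: replacing `"@openai-prod/gpt-4o"` with `"@anthropic-prod/claude-4-sonnet"` retargets the request with no header or config edits.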
What’s Better?
This method is more explicit and keeps all model-related information in one place: the `model` parameter. It eliminates the need for a separate header and makes switching between models and providers incredibly simple.
You can add a `virtual_key` field to your Portkey Config, either at the root level or inside a strategy target (like fallback or load balance). Alternatively, specify the `model` directly in a target's `override_params`. This unlocks powerful, multi-provider strategies.
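As a minimal sketch of such a config, the fallback strategy below pins its first target with a `virtual_key` and its second with a catalog-style model in `override_params`. The key value and provider slugs are placeholders, not real credentials:

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    {
      "virtual_key": "openai-vk-xxxxx"
    },
    {
      "override_params": { "model": "@anthropic-prod/claude-4-sonnet" }
    }
  ]
}
```

If the first target fails, Portkey falls back to the second, which resolves its provider entirely from the `@provider_slug/model_slug` string.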