The Model Catalog is a centralized hub for viewing and managing all AI providers and models within your organization. It serves as the evolution of Virtual Keys, providing a more powerful and streamlined way to control your AI resources. It abstracts raw API keys and scattered environment variables into governed Provider Integrations and Models.
Upgrading from Virtual Keys
The Model Catalog upgrades the Virtual Key experience by introducing a centralized, organization-level management layer, offering advantages like:
  • Centralized provider and model management - no more duplicate configs across workspaces.
  • Fine-grained control: budgets, rate limits, and model allow-lists at both org and workspace level.
  • Inline usage: just pass model="@provider/model_slug"
Need help? See our Migration Guide ➜
Model Catalog - Provider and Models

AI Providers

AI Providers represent connections to AI services. Each AI Provider has:
  • ✔️ A unique slug (e.g., @openai-prod)
  • ✔️ Securely stored credentials
  • ✔️ Budget and rate limits
  • ✔️ Access to specific models

AI Models

The Models section is a gallery of all AI models available. Each Model entry includes:
  • ✔️ Model slug (@openai-prod/gpt-4o)
  • ✔️ Ready-to-use code snippets
  • ✔️ Input/output token limits
  • ✔️ Pricing information (where available)
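Model slugs combine the provider slug and the model name. A small helper to split one into its parts might look like this (a hypothetical utility for illustration, not part of the Portkey SDK):

```typescript
// Hypothetical helper: split a Model Catalog slug like "@openai-prod/gpt-4o"
// into its provider and model parts. Not part of the Portkey SDK.
function parseModelSlug(slug: string): { provider: string; model: string } {
  if (!slug.startsWith('@')) {
    throw new Error(`Expected slug to start with '@', got: ${slug}`);
  }
  const sep = slug.indexOf('/');
  if (sep === -1) {
    throw new Error(`Expected '@<provider>/<model>' in: ${slug}`);
  }
  return {
    provider: slug.slice(1, sep), // "openai-prod"
    model: slug.slice(sep + 1),   // "gpt-4o"
  };
}

console.log(parseModelSlug('@openai-prod/gpt-4o'));
// → { provider: 'openai-prod', model: 'gpt-4o' }
```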

Adding an AI Provider

You can add providers via the **UI** (follow the steps below) or the API.
1. Go to AI Providers → Add Provider.
2. Select the AI service to integrate: choose from the list (OpenAI, Anthropic, etc.) or Self-hosted / Custom.
3. Enter credentials: choose existing credentials or create new ones.
4. Enter provider details and save: choose the name and slug for this provider. The slug cannot be changed later and will be used to reference the AI models.

Using Provider Models

Once you have AI Providers set up, you can use their models in your applications in several ways. In Portkey, model strings follow this format: @provider_slug/model_name. For example: @openai-prod/gpt-4o, @anthropic/claude-sonnet-3.7, @bedrock-us/claude-3-sonnet-v1.

1. Using the model string

import { Portkey } from 'portkey-ai';
const client = new Portkey({ apiKey: "PORTKEY_API_KEY" });

const resp = await client.chat.completions.create({
  model: '@openai-prod/gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }]
});

2. Using the provider header

You can also specify the provider in a header instead of in the model string, similar to the earlier Virtual Keys approach. Remember to add the @ before your provider slug.
import { Portkey } from 'portkey-ai';
const client = new Portkey({ 
	apiKey: "PORTKEY_API_KEY",
	provider: "@openai-prod"
});

const resp = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }]
});

3. Specify provider in the config

Specifying the provider in a config applies it to all requests that use that config. You can also set the full model string in override_params.
// Specify provider in the config
{
	"provider": "@openai-prod"
}

// and/or specify the model string in "override_params"
{
	"strategy": { "mode": "fallback" },
	"targets": [{
		"override_params": { "model": "@openai-prod/gpt-4o" }
	}, {
		"override_params": { "model": "@anthropic/claude-sonnet-3.7" }
	}]
}
Ordering: the config (if provided) defines the base; override_params merges on top (last write wins for scalars; deep merge for objects like metadata).
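The merge order described above can be sketched as follows (an illustrative implementation of the stated rules, not Portkey's actual code):

```typescript
// Illustrative sketch of the merge rules described above: scalars from
// override_params win, objects (e.g. metadata) deep-merge.
// Not Portkey's actual implementation.
type Params = { [key: string]: unknown };

function isPlainObject(v: unknown): v is Params {
  return typeof v === 'object' && v !== null && !Array.isArray(v);
}

function mergeParams(base: Params, override: Params): Params {
  const out: Params = { ...base };
  for (const [key, value] of Object.entries(override)) {
    if (isPlainObject(value) && isPlainObject(out[key])) {
      out[key] = mergeParams(out[key] as Params, value); // deep merge objects
    } else {
      out[key] = value; // last write wins for scalars
    }
  }
  return out;
}

const base = { model: 'gpt-4o', metadata: { team: 'search' } };
const override = { model: '@openai-prod/gpt-4o', metadata: { env: 'prod' } };
console.log(mergeParams(base, override));
// → { model: '@openai-prod/gpt-4o', metadata: { team: 'search', env: 'prod' } }
```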

Integrations and Workspaces

The Model Catalog enables seamless integration across your organization’s structure:
  • Organization-Level: Create and manage integrations centrally
  • Workspace-Level: Provision specific integrations to workspaces
  • Developer-Level: Use provisioned models through simple API calls
This hierarchical approach provides central governance while giving workspaces the flexibility they need.

Learn More: Integrations

Admins can manage AI service credentials across workspaces through Integrations. Click to learn more.

Budgets & Limits

Portkey allows you to set and manage budget limits at various levels:
  • Workspace-Level: Set specific budgets for each workspace
  • Provider-Level: Set budgets for individual AI Providers
Budget limits can be:
  • Cost-Based: Set a maximum spend in USD
  • Token-Based: Set a maximum number of tokens that can be consumed
  • Rate-Based: Set maximum requests per minute/hour/day
You can also configure periodic resets (weekly or monthly) for these limits, which is perfect for managing recurring team budgets. Learn more about Budgets and Limits here.

Model Management

Custom Models

You can manage your own custom models in Model Catalog, including fine-tuned models, custom-hosted models and private models. Click to see how to create custom models.

Custom Pricing

For models with custom pricing arrangements, you can configure input and output token pricing at the integration level. Click to see how to add custom pricing for models.

Self-hosted AI Providers

TBD