Portkey seamlessly integrates with the Vercel AI SDK, enabling you to build production-ready AI applications with enterprise-grade reliability, observability, and governance. Simply point the OpenAI provider to Portkey’s gateway and unlock powerful features:
Full-stack observability - Complete tracing and analytics for every request
250+ LLMs - Switch between models from OpenAI, Anthropic, Google, AWS Bedrock, and more
Enterprise reliability - Fallbacks, load balancing, automatic retries, and circuit breakers
Smart caching - Reduce costs up to 80% with semantic and simple caching
Production guardrails - 50+ built-in checks for safety and quality
Prompt management - Version, test, and deploy prompts from Portkey’s studio
Migrated from @portkey-ai/vercel-provider? We’ve updated our integration to use Vercel’s standard OpenAI provider for better compatibility with the rapidly evolving Vercel AI SDK. This new approach gives you access to all Vercel features while maintaining full Portkey functionality.
Quick Start
1. Installation
npm install ai @ai-sdk/openai
2. Get Your Portkey API Key
Sign up for Portkey and copy your API key from the dashboard. You’ll use this to authenticate with Portkey’s gateway.
3. Configure the Provider
Point the OpenAI provider to Portkey's gateway:
import { createOpenAI } from '@ai-sdk/openai';

const openai = createOpenAI({
  baseURL: 'https://api.portkey.ai/v1',
  apiKey: 'YOUR_PORTKEY_API_KEY', // Your Portkey API key
});
4. Use Any Model
Use models from your AI Providers with the @provider-slug/model-name format:
import { generateText } from 'ai';

const { text } = await generateText({
  model: openai('@openai-prod/gpt-4o'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

console.log(text);
That’s it! Your Vercel AI SDK app now has full Portkey observability and reliability features.
Core Features
Text Generation
Generate text with any model using generateText:
OpenAI Models
Anthropic Models
Google Models
import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

const openai = createOpenAI({
  baseURL: 'https://api.portkey.ai/v1',
  apiKey: 'YOUR_PORTKEY_API_KEY',
});

const { text } = await generateText({
  model: openai('@openai-prod/gpt-4o'),
  prompt: 'Write a haiku about coding.',
});
Use openai() for OpenAI’s completion models and openai.chat() for chat completion models from any provider (Anthropic, Google, AWS Bedrock, etc.).
Streaming Text
Stream responses in real-time with streamText:
import { streamText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

const openai = createOpenAI({
  baseURL: 'https://api.portkey.ai/v1',
  apiKey: 'YOUR_PORTKEY_API_KEY',
});

const result = streamText({
  model: openai('@openai-prod/gpt-4o'),
  prompt: 'Tell me about the Mission burrito debate in San Francisco.',
});

for await (const chunk of result.fullStream) {
  if (chunk.type === 'text-delta') {
    process.stdout.write(chunk.text);
  }
}
Structured Data Generation
Generate validated, structured outputs with generateObject:
With Schema Name & Description
Standard Format
import { generateObject } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { z } from 'zod';

const openai = createOpenAI({
  baseURL: 'https://api.portkey.ai/v1',
  apiKey: 'YOUR_PORTKEY_API_KEY',
});

const result = await generateObject({
  model: openai.chat('@openai-prod/gpt-4o'),
  schemaName: 'Recipe',
  schemaDescription: 'A recipe for a dish.',
  schema: z.object({
    name: z.string(),
    ingredients: z.array(
      z.object({
        name: z.string(),
        amount: z.string(),
      })
    ),
    steps: z.array(z.string()),
  }),
  prompt: 'Generate a lasagna recipe.',
});

console.log(result.object);
Tool Calling
Enable your model to call tools and functions:
import { generateText, tool } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { z } from 'zod';

const openai = createOpenAI({
  baseURL: 'https://api.portkey.ai/v1',
  apiKey: 'YOUR_PORTKEY_API_KEY',
});

const result = await generateText({
  model: openai('@openai-prod/gpt-4o'),
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
  prompt: 'What is the weather in San Francisco?',
});

console.log(result.text);
console.log('Tool Calls:', result.toolCalls);
Image Generation
Generate images with DALL-E or other image models:
import { experimental_generateImage as generateImage } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

const openai = createOpenAI({
  baseURL: 'https://api.portkey.ai/v1',
  apiKey: 'YOUR_PORTKEY_API_KEY',
});

const { image } = await generateImage({
  model: openai.image('@openai-prod/dall-e-3'),
  prompt: 'Santa Claus driving a Cadillac',
  size: '1024x1024',
  providerOptions: {
    openai: { quality: 'standard' },
  },
});

console.log('Image (base64):', image.base64);
AI Agents
Build autonomous agents with tool usage and reasoning:
import { Experimental_Agent as Agent, tool, stepCountIs } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { z } from 'zod';

const openai = createOpenAI({
  baseURL: 'https://api.portkey.ai/v1',
  apiKey: 'YOUR_PORTKEY_API_KEY',
});

const weatherAgent = new Agent({
  model: openai('@openai-prod/gpt-4o'),
  tools: {
    weather: tool({
      description: 'Get the weather in a location (in Fahrenheit)',
      inputSchema: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
    convertFahrenheitToCelsius: tool({
      description: 'Convert temperature from Fahrenheit to Celsius',
      inputSchema: z.object({
        temperature: z.number().describe('Temperature in Fahrenheit'),
      }),
      execute: async ({ temperature }) => ({
        celsius: Math.round((temperature - 32) * (5 / 9)),
      }),
    }),
  },
  stopWhen: stepCountIs(20),
});

const result = await weatherAgent.generate({
  prompt: 'What is the weather in San Francisco in celsius?',
});

console.log("Agent's answer:", result.text);
console.log('Steps taken:', result.steps.length);
Custom Parameters
Fine-tune model behavior with temperature, tokens, and retries:
import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

const openai = createOpenAI({
  baseURL: 'https://api.portkey.ai/v1',
  apiKey: 'YOUR_PORTKEY_API_KEY',
});

const result = await generateText({
  model: openai('@openai-prod/gpt-4o'),
  maxOutputTokens: 512,
  temperature: 0.3,
  maxRetries: 5,
  prompt: 'Invent a new holiday and describe its traditions.',
});

console.log(result.text);
Enhance your requests with Portkey’s powerful headers:
Trace ID
Track and debug requests with custom trace IDs:
const openai = createOpenAI({
  baseURL: 'https://api.portkey.ai/v1',
  apiKey: 'YOUR_PORTKEY_API_KEY',
  headers: {
    'x-portkey-trace-id': 'user-123-session-456',
  },
});
Metadata
Add custom metadata for filtering and analytics:
const openai = createOpenAI({
  baseURL: 'https://api.portkey.ai/v1',
  apiKey: 'YOUR_PORTKEY_API_KEY',
  headers: {
    'x-portkey-metadata': JSON.stringify({
      environment: 'production',
      user: 'user-123',
      feature: 'chat',
    }),
  },
});
Configs
Apply Portkey configs using a config ID or inline JSON:
const openai = createOpenAI({
  baseURL: 'https://api.portkey.ai/v1',
  apiKey: 'YOUR_PORTKEY_API_KEY',
  headers: {
    'x-portkey-config': 'pp-config-xxx', // Your config ID
  },
});
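The three headers above can also be assembled in one place. A minimal helper sketch (the `portkeyHeaders` function and its option names are illustrative, not part of any SDK):

```typescript
// Illustrative helper that assembles Portkey gateway headers.
// portkeyHeaders and its option names are hypothetical, not an SDK API.
function portkeyHeaders(opts: {
  traceId?: string;
  metadata?: Record<string, string>;
  configId?: string;
}): Record<string, string> {
  const headers: Record<string, string> = {};
  if (opts.traceId) headers['x-portkey-trace-id'] = opts.traceId;
  if (opts.metadata) headers['x-portkey-metadata'] = JSON.stringify(opts.metadata);
  if (opts.configId) headers['x-portkey-config'] = opts.configId;
  return headers;
}

// Headers for a traced production request
const traced = portkeyHeaders({
  traceId: 'user-123-session-456',
  metadata: { environment: 'production', user: 'user-123' },
});
```

Pass the result as `headers` when calling `createOpenAI`; the AI SDK's `generateText` and `streamText` calls also accept a per-call `headers` option if you need request-level values.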
Learn more about Portkey Headers
Using AI Providers & Model Catalog
Portkey’s Model Catalog lets you manage all your AI providers and models from a centralized dashboard with governance, budget limits, and access controls.
Setting Up Providers
Go to the Model Catalog in Portkey dashboard
Click “Add Provider” and choose your AI service (OpenAI, Anthropic, etc.)
Add your API credentials
Give your provider a unique slug (e.g., @openai-prod)
Using Provider Models
Reference models using the @provider-slug/model-name format:
// Use your configured providers
const { text } = await generateText({
  model: openai('@openai-prod/gpt-4o'), // OpenAI
  prompt: 'Hello!',
});

const { text: claudeText } = await generateText({
  model: openai.chat('@anthropic-prod/claude-3-5-sonnet'), // Anthropic
  prompt: 'Hello!',
});

const { text: geminiText } = await generateText({
  model: openai.chat('@google-prod/gemini-2.0-flash'), // Google
  prompt: 'Hello!',
});
Portkey Configs
Portkey Configs enable advanced routing, reliability, and governance for your AI requests. You can apply configs in two ways:
Method 1: Inline in Model Name
Reference a saved prompt or config directly in the model name:
import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

const openai = createOpenAI({
  baseURL: 'https://api.portkey.ai/v1',
  apiKey: 'YOUR_PORTKEY_API_KEY',
});

// Use a saved prompt with config
const { text } = await generateText({
  model: openai('@my-prompt-config/gpt-4o'),
  prompt: 'User query here',
});
Method 2: Via Headers
Pass a config ID or inline config via headers:
const openai = createOpenAI({
  baseURL: 'https://api.portkey.ai/v1',
  apiKey: 'YOUR_PORTKEY_API_KEY',
  headers: {
    'x-portkey-config': 'pp-config-xxx', // Your saved config ID
  },
});
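An inline config can be serialized into the same header. A sketch of a fallback config (field names follow Portkey's config schema, so verify against the current docs; the provider slugs are the examples used in this guide):

```typescript
// Inline Portkey config: try OpenAI first, fall back to Anthropic.
// Field names follow Portkey's config schema; check current docs before relying on them.
const inlineConfig = {
  strategy: { mode: 'fallback' },
  targets: [
    { override_params: { model: '@openai-prod/gpt-4o' } },
    { override_params: { model: '@anthropic-prod/claude-3-5-sonnet' } },
  ],
};

// Serialize into the same header slot a saved config ID would occupy.
const headers = { 'x-portkey-config': JSON.stringify(inlineConfig) };
```

Pass `headers` to `createOpenAI` exactly as in the snippet above.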
Enterprise Features
Observability & Analytics
Get complete visibility into your AI operations with automatic request logging, performance metrics, and cost tracking:
Every request through Portkey is automatically logged with:
Request/response payloads - Full tracing of inputs and outputs
Performance metrics - Latency, tokens, and throughput
Cost tracking - Real-time spend across all providers
Error monitoring - Automatic error detection and alerts
Explore Portkey's full observability suite in the Observability docs.
AI Gateway Features
Portkey’s AI Gateway makes your AI applications production-ready with enterprise reliability:
View all gateway features and capabilities in the AI Gateway docs.
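As one concrete example, automatic retries and response caching can be combined in a single config and attached via the config header (a sketch; the field names follow Portkey's config schema, so check the current docs before relying on them):

```typescript
// Sketch: combine automatic retries with simple response caching.
// Field names (retry.attempts, cache.mode) follow Portkey's config schema.
const gatewayConfig = {
  retry: { attempts: 3 },    // retry failed requests up to 3 times
  cache: { mode: 'simple' }, // serve identical requests from cache
};

const headers = { 'x-portkey-config': JSON.stringify(gatewayConfig) };
```

Attach `headers` when creating the provider with `createOpenAI`, as shown in the Configs section above.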
Guardrails
Enforce safety, quality, and compliance with real-time guardrails:
{
  "before_request_hooks": [{
    "id": "input-guardrail-xxx"
  }],
  "after_request_hooks": [{
    "id": "output-guardrail-xxx"
  }]
}
Portkey offers 50+ built-in guardrails including:
PII detection and redaction
Toxic content filtering
Prompt injection protection
Custom regex and keyword filters
Sensitive data detection
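The hooks config shown above can also be built programmatically and sent as an inline config header (the guardrail IDs here are the same placeholders; use real IDs from your Portkey dashboard):

```typescript
// Attach guardrail hooks via an inline config header.
// The guardrail IDs are placeholders; substitute IDs from your dashboard.
const guardrailConfig = {
  before_request_hooks: [{ id: 'input-guardrail-xxx' }],
  after_request_hooks: [{ id: 'output-guardrail-xxx' }],
};

const headers = { 'x-portkey-config': JSON.stringify(guardrailConfig) };
```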
Set up real-time safety and compliance checks in the Guardrails docs.
Prompt Management
Manage, version, and deploy prompts from Portkey's Prompt Studio.
Migration Guide
From @portkey-ai/vercel-provider
If you’re migrating from the old Portkey Vercel provider package:
Before:
import { createPortkey } from '@portkey-ai/vercel-provider';

const portkey = createPortkey({
  apiKey: 'YOUR_PORTKEY_API_KEY',
  config: { /* ... */ },
});

const { text } = await generateText({
  model: portkey.chatModel('gpt-4o'),
  prompt: 'Hello',
});
After:
import { createOpenAI } from '@ai-sdk/openai';

const openai = createOpenAI({
  baseURL: 'https://api.portkey.ai/v1',
  apiKey: 'YOUR_PORTKEY_API_KEY',
});

const { text } = await generateText({
  model: openai('@openai-prod/gpt-4o'),
  prompt: 'Hello',
});
Key Changes:
Use @ai-sdk/openai instead of @portkey-ai/vercel-provider
Set baseURL to Portkey’s gateway
Reference models using @provider-slug/model-name format
Pass configs via headers or inline in model names
Benefits:
✅ Better compatibility with Vercel AI SDK updates
✅ Access to all Vercel AI SDK features immediately
✅ Simpler setup with standard OpenAI provider
✅ More flexible config management
Support & Resources