OpenAI Agents SDK (TypeScript)
Use Portkey with OpenAI Agents SDK to take your AI Agents to production
Introduction
OpenAI Agents SDK enables the development of complex AI agents with tools, planning, and memory capabilities. Portkey enhances OpenAI Agents with observability, reliability, and production-readiness features.
Portkey turns your experimental OpenAI Agents into production-ready systems by providing:
- Complete observability of every agent step, tool use, and interaction
- Built-in reliability with fallbacks, retries, and load balancing
- Cost tracking and optimization to manage your AI spend
- Access to 1600+ LLMs through a single integration
- Guardrails to keep agent behavior safe and compliant
- Version-controlled prompts for consistent agent performance
OpenAI Agents SDK Official Documentation
Learn more about OpenAI Agents SDK’s core concepts
Installation & Setup
Install the required packages
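A typical installation looks like this (exact package names may vary with your setup; `@openai/agents` is the TypeScript Agents SDK and `portkey-ai` is Portkey's SDK):

```bash
npm install @openai/agents openai zod portkey-ai
```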
Generate API Key
Create a Portkey API key with optional budget/rate limits and attach your Config
Connect to OpenAI Agents
There are 3 ways to integrate Portkey with OpenAI Agents:
- Set a client that applies to all agents in your application
- Use a custom provider for selective Portkey integration
- Configure each agent individually
See the Integration Approaches section for more details.
Configure Portkey Client
For a simple setup, we’ll use the global client approach:
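Here's a minimal sketch of the global setup. It assumes the SDK's `setDefaultOpenAIClient`/`setOpenAIAPI` helpers and Portkey's standard gateway headers; the virtual key value is a placeholder:

```typescript
import OpenAI from 'openai';
import { setDefaultOpenAIClient, setOpenAIAPI } from '@openai/agents';

// An OpenAI client pointed at Portkey's gateway
const portkeyClient = new OpenAI({
  apiKey: process.env.PORTKEY_API_KEY, // your Portkey API key
  baseURL: 'https://api.portkey.ai/v1',
  defaultHeaders: {
    'x-portkey-virtual-key': 'YOUR_VIRTUAL_KEY', // placeholder: your provider's virtual key
  },
});

setDefaultOpenAIClient(portkeyClient); // every agent now routes through Portkey
setOpenAIAPI('chat_completions');      // use the Chat Completions API on the gateway
```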
What are Virtual Keys? Virtual keys in Portkey securely store your LLM provider API keys (OpenAI, Anthropic, etc.) in an encrypted vault. They allow for easier key rotation and budget management. Learn more about virtual keys here.
Getting Started
Let’s create a simple question-answering agent with OpenAI Agents SDK and Portkey. This agent will respond directly to user messages using a language model:
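A minimal sketch (the model name is an example):

```typescript
import { Agent, run } from '@openai/agents';

// A single-step agent: instructions plus a model, no tools
const agent = new Agent({
  name: 'Assistant',
  instructions: 'You are a helpful assistant. Answer questions concisely.',
  model: 'gpt-4o',
});

const result = await run(agent, 'What is the capital of France?');
console.log(result.finalOutput);
```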
In this example:
- We set up Portkey as the global client for OpenAI Agents SDK
- We create a simple agent with instructions and a model
- We run the agent with a user query
- We print the final output
Visit your Portkey dashboard to see detailed logs of this agent’s execution!
End-to-End Example
Research Agent with Tools: Here’s a more comprehensive agent that can use tools to perform tasks.
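A sketch of such an agent; the `web_search` tool and its `searchWeb` backend are hypothetical stand-ins for your own tools:

```typescript
import { Agent, run, tool } from '@openai/agents';
import { z } from 'zod';

// Hypothetical search backend; replace with a real implementation
async function searchWeb(query: string): Promise<string> {
  return `Top results for "${query}" ...`;
}

const webSearch = tool({
  name: 'web_search',
  description: 'Search the web for up-to-date information',
  parameters: z.object({ query: z.string() }),
  execute: async ({ query }) => searchWeb(query),
});

const researcher = new Agent({
  name: 'Researcher',
  instructions: 'Research the topic, then summarize your findings with sources.',
  model: 'gpt-4o',
  tools: [webSearch],
});

const result = await run(researcher, 'Summarize recent advances in battery technology.');
console.log(result.finalOutput);
```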
Visit your Portkey dashboard to see the complete execution flow visualized!
Production Features
1. Enhanced Observability
Portkey provides comprehensive observability for your OpenAI Agents, helping you understand exactly what’s happening during each execution.

Traces provide a hierarchical view of your agent’s execution, showing the sequence of LLM calls, tool invocations, and state transitions.


Portkey logs every interaction with LLMs, including:
- Complete request and response payloads
- Latency and token usage metrics
- Cost calculations
- Tool calls and function executions
All logs can be filtered by metadata, trace IDs, models, and more, making it easy to debug specific agent runs.

Portkey provides built-in dashboards that help you:
- Track cost and token usage across all agent runs
- Analyze performance metrics like latency and success rates
- Identify bottlenecks in your agent workflows
- Compare different agent configurations and LLMs
You can filter and segment all metrics by custom metadata to analyze specific agent types, user groups, or use cases.

Add custom metadata to your OpenAI agent calls to enable powerful filtering and segmentation:
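One way to attach metadata is Portkey's `x-portkey-metadata` header when creating the client (the field values below are illustrative):

```typescript
import OpenAI from 'openai';

const portkeyClient = new OpenAI({
  apiKey: process.env.PORTKEY_API_KEY,
  baseURL: 'https://api.portkey.ai/v1',
  defaultHeaders: {
    'x-portkey-virtual-key': 'YOUR_VIRTUAL_KEY',
    // Metadata travels as a JSON string; keys here are examples
    'x-portkey-metadata': JSON.stringify({
      agent_name: 'research_agent',
      environment: 'production',
      _user: 'user_123', // special field for per-user analytics
    }),
  },
});
```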
This metadata can be used to filter logs, traces, and metrics on the Portkey dashboard, allowing you to analyze specific agent runs, users, or environments.
2. Reliability - Keep Your Agents Running Smoothly
When running agents in production, things can go wrong - API rate limits, network issues, or provider outages. Portkey’s reliability features ensure your agents keep running smoothly even when problems occur.
It’s this simple to enable fallback in your OpenAI Agents:
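A sketch of such a config (virtual key names are placeholders); attach it to your Portkey API key or pass it in the `x-portkey-config` header:

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "virtual_key": "openai-virtual-key", "override_params": { "model": "gpt-4o" } },
    { "virtual_key": "anthropic-virtual-key", "override_params": { "model": "claude-3-5-sonnet-20240620" } }
  ]
}
```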
This configuration will automatically try Claude if the GPT-4o request fails, ensuring your agent can continue operating.
Automatic Retries
Handles temporary failures automatically. If an LLM call fails, Portkey will retry the same request for the specified number of times - perfect for rate limits or network blips.
Request Timeouts
Prevent your agents from hanging. Set timeouts to ensure you get responses (or can fail gracefully) within your required timeframes.
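Retries and timeouts, for example, are plain config fields. A minimal sketch, assuming `request_timeout` in milliseconds:

```json
{
  "retry": { "attempts": 3 },
  "request_timeout": 10000
}
```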
Conditional Routing
Send different requests to different providers. Route complex reasoning to GPT-4, creative tasks to Claude, and quick responses to Gemini based on your needs.
Fallbacks
Keep running even if your primary provider fails. Automatically switch to backup providers to maintain availability.
Load Balancing
Spread requests across multiple API keys or providers. Great for high-volume agent operations and staying within rate limits.
3. Prompting in OpenAI Agents
Portkey’s Prompt Engineering Studio helps you create, manage, and optimize the prompts used in your OpenAI Agents. Instead of hardcoding prompts or instructions, use Portkey’s prompt rendering API to dynamically fetch and apply your versioned prompts.
Manage prompts in Portkey's Prompt Library
Prompt Playground is a place to compare, test and deploy perfect prompts for your AI application. It’s where you experiment with different models, test variables, compare outputs, and refine your prompt engineering strategy before deploying to production. It allows you to:
- Iteratively develop prompts before using them in your agents
- Test prompts with different variables and models
- Compare outputs between different prompt versions
- Collaborate with team members on prompt development
This visual environment makes it easier to craft effective prompts for each step in your OpenAI Agents workflow.
The Prompt Render API retrieves your prompt templates with all parameters configured:
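A sketch using the `portkey-ai` SDK's prompt render method (the prompt ID, variables, and exact response shape are illustrative; check the Prompt API reference for your version):

```typescript
import Portkey from 'portkey-ai';

const portkey = new Portkey({ apiKey: process.env.PORTKEY_API_KEY });

// Fetch the fully rendered prompt from Portkey
const rendered = await portkey.prompts.render({
  promptID: 'pp-agent-instructions', // illustrative prompt ID
  variables: { topic: 'quantum computing' },
});

// Use the rendered content as the agent's instructions
const instructions = rendered.data.messages[0].content;
```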
You can:
- Create multiple versions of the same prompt
- Compare performance between versions
- Roll back to previous versions if needed
- Specify which version to use in your code:
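For example, assuming the `promptID@version` convention:

```typescript
// Pin the render to a specific prompt version
const rendered = await portkey.prompts.render({
  promptID: 'pp-agent-instructions@12', // version 12 of this prompt
  variables: { topic: 'quantum computing' },
});
```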
Portkey prompts use Mustache-style templating for easy variable substitution:
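For example, a template might look like this:

```
You are a helpful {{role}}. Answer questions about {{topic}} concisely.
```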
When rendering, simply pass the variables:
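Continuing the sketch above:

```typescript
const rendered = await portkey.prompts.render({
  promptID: 'pp-agent-instructions',
  variables: { role: 'research assistant', topic: 'quantum computing' },
});
```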
Prompt Engineering Studio
Learn more about Portkey’s prompt management features
4. Guardrails for Safe Agents
Guardrails ensure your OpenAI Agents operate safely and respond appropriately in all situations.
Why Use Guardrails?
OpenAI Agents can experience various failure modes:
- Generating harmful or inappropriate content
- Leaking sensitive information like PII
- Hallucinating incorrect information
- Generating outputs in incorrect formats
Portkey’s guardrails protect against these issues by validating both inputs and outputs.
Implementing Guardrails
Portkey’s guardrails can:
- Detect and redact PII in both inputs and outputs
- Filter harmful or inappropriate content
- Validate response formats against schemas
- Check for hallucinations against ground truth
- Apply custom business logic and rules
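Guardrails are typically attached through your Portkey config. A minimal sketch, assuming guardrail IDs created in the Portkey app:

```json
{
  "virtual_key": "openai-virtual-key",
  "input_guardrails": ["your-input-guardrail-id"],
  "output_guardrails": ["your-output-guardrail-id"]
}
```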
Learn More About Guardrails
Explore Portkey’s guardrail features to enhance agent safety
5. User Tracking with Metadata
Track individual users through your OpenAI Agents using Portkey’s metadata system.
What is Metadata in Portkey?
Metadata allows you to associate custom data with each request, enabling filtering, segmentation, and analytics. The special `_user` field is specifically designed for user tracking.
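A minimal sketch (the user ID is illustrative):

```typescript
import OpenAI from 'openai';

// Tag every request from this client with the current user
const client = new OpenAI({
  apiKey: process.env.PORTKEY_API_KEY,
  baseURL: 'https://api.portkey.ai/v1',
  defaultHeaders: {
    'x-portkey-virtual-key': 'YOUR_VIRTUAL_KEY',
    'x-portkey-metadata': JSON.stringify({ _user: 'user_123' }),
  },
});
```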
Filter Analytics by User
With metadata in place, you can filter analytics by user and analyze performance metrics on a per-user basis:

Filter analytics by user
This enables:
- Per-user cost tracking and budgeting
- Personalized user analytics
- Team or organization-level metrics
- Environment-specific monitoring (staging vs. production)
Learn More About Metadata
Explore how to use custom metadata to enhance your analytics
6. Caching for Efficient Agents
Implement caching to make your OpenAI Agents more efficient and cost-effective:
Simple caching performs exact matches on input prompts, caching identical requests to avoid redundant model executions.
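A sketch of the config fragment:

```json
{ "cache": { "mode": "simple" } }
```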
Semantic caching considers the contextual similarity between input requests, caching responses for semantically similar inputs.
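A sketch, assuming `max_age` in seconds:

```json
{ "cache": { "mode": "semantic", "max_age": 3600 } }
```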
7. Model Interoperability
With Portkey, you can easily switch between different LLMs in your OpenAI Agents without changing your core agent logic.
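For example, pointing the client at an Anthropic virtual key is enough to run the same agent on Claude (the virtual key and model name are placeholders):

```typescript
import OpenAI from 'openai';
import { Agent, setDefaultOpenAIClient, setOpenAIAPI } from '@openai/agents';

// Same agent logic, different provider behind the gateway
const client = new OpenAI({
  apiKey: process.env.PORTKEY_API_KEY,
  baseURL: 'https://api.portkey.ai/v1',
  defaultHeaders: { 'x-portkey-virtual-key': 'ANTHROPIC_VIRTUAL_KEY' },
});
setDefaultOpenAIClient(client);
setOpenAIAPI('chat_completions');

const agent = new Agent({
  name: 'Assistant',
  instructions: 'You are a helpful assistant.',
  model: 'claude-3-5-sonnet-20240620', // passed through to Anthropic by Portkey
});
```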
Portkey provides access to 1600+ LLMs through a unified interface, including:
- OpenAI (GPT-4o, GPT-4 Turbo, etc.)
- Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
- Mistral AI (Mistral Large, Mistral Medium, etc.)
- Google Vertex AI (Gemini 1.5 Pro, etc.)
- Cohere (Command, Command-R, etc.)
- AWS Bedrock (Claude, Titan, etc.)
- Local/Private Models
Supported Providers
See the full list of LLM providers supported by Portkey
Set Up Enterprise Governance for OpenAI Agents
Why Enterprise Governance? If you are using OpenAI Agents inside your organization, you need to consider several governance aspects:
- Cost Management: Controlling and tracking AI spending across teams
- Access Control: Managing which teams can use specific models
- Usage Analytics: Understanding how AI is being used across the organization
- Security & Compliance: Maintaining enterprise security standards
- Reliability: Ensuring consistent service across all users
Portkey adds a comprehensive governance layer to address these enterprise needs. Let’s implement these controls step by step.
Enterprise Implementation Guide
Portkey allows you to use 1600+ LLMs with your OpenAI Agents setup, with minimal configuration required. Let’s set up the core components in Portkey that you’ll need for integration.
Create Virtual Key
Virtual Keys are Portkey’s secure way to manage your LLM provider API keys. Think of them like disposable credit cards for your LLM API keys, providing essential controls like:
- Budget limits for API usage
- Rate limiting capabilities
- Secure API key storage
To create a virtual key, go to Virtual Keys in the Portkey app, then save and copy the virtual key ID.

Save your virtual key ID - you’ll need it for the next step.
Create Default Config
Configs in Portkey are JSON objects that define how your requests are routed. They help with implementing features like advanced routing, fallbacks, and retries.
We need to create a default config to route our requests to the virtual key created in Step 1.
To create your config:
- Go to Configs in Portkey dashboard
- Create new config with:
- Save and note the Config name for the next step

This basic config connects to your virtual key. You can add more advanced Portkey features later.
Configure Portkey API Key
Now create a Portkey API key and attach the config you created in Step 2:
- Go to API Keys in Portkey and Create new API key
- Select your config from Step 2
- Generate and save your API key

Save your API key securely - you’ll need it for OpenAI Agents integration.
Once you have created your API Key after attaching default config, you can directly pass the API key + base URL in the OpenAI client. Here’s how:
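A minimal sketch; because the config is attached to the API key, no extra headers are needed:

```typescript
import OpenAI from 'openai';
import { setDefaultOpenAIClient, setOpenAIAPI } from '@openai/agents';

const client = new OpenAI({
  apiKey: process.env.PORTKEY_API_KEY, // the key created above, with config attached
  baseURL: 'https://api.portkey.ai/v1',
});

setDefaultOpenAIClient(client);
setOpenAIAPI('chat_completions');
```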
Step 1: Implement Budget Controls & Rate Limits
Virtual Keys enable granular control over LLM access at the team/department level. This helps you:
- Set up budget limits
- Prevent unexpected usage spikes using Rate limits
- Track departmental spending
Setting Up Department-Specific Controls:
- Navigate to Virtual Keys in Portkey dashboard
- Create new Virtual Key for each department with budget limits and rate limits
- Configure department-specific limits

Step 2: Define Model Access Rules
As your AI usage scales, controlling which teams can access specific models becomes crucial. Portkey Configs provide this control layer with features like:
Access Control Features:
- Model Restrictions: Limit access to specific models
- Data Protection: Implement guardrails for sensitive data
- Reliability Controls: Add fallbacks and retry logic
Example Configuration:
Here’s a basic configuration to route requests to OpenAI, specifically using GPT-4o:
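A sketch (the virtual key is a placeholder):

```json
{
  "virtual_key": "YOUR_OPENAI_VIRTUAL_KEY",
  "override_params": { "model": "gpt-4o" }
}
```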
Create your config on the Configs page in your Portkey dashboard. You'll need the config ID for connecting to your OpenAI Agents setup.
Configs can be updated anytime to adjust controls without affecting running applications.
Step 3: Implement Access Controls
Create User-specific API keys that automatically:
- Track usage per user/team with the help of virtual keys
- Apply appropriate configs to route requests
- Collect relevant metadata to filter logs
- Enforce access permissions
Create API keys through:
Example using TypeScript SDK:
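A rough sketch, assuming the `portkey-ai` SDK exposes an admin surface for API keys; the field names and values below are illustrative, so check the API Keys reference before use:

```typescript
import Portkey from 'portkey-ai';

// Requires an admin-scoped Portkey API key
const portkey = new Portkey({ apiKey: process.env.PORTKEY_ADMIN_API_KEY });

const apiKey = await portkey.apiKeys.create({
  name: 'engineering-agents',
  type: 'organisation',
  sub_type: 'service',
  workspace_id: 'YOUR_WORKSPACE_ID',
  defaults: {
    config_id: 'YOUR_CONFIG_ID', // routes this key's traffic through your config
    metadata: { department: 'engineering', environment: 'production' },
  },
});

console.log(apiKey.key); // distribute securely to the team member
```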
For detailed key management instructions, see our API Keys documentation.
Step 4: Deploy & Monitor
After distributing API keys to your team members, your enterprise-ready OpenAI Agents setup is ready to go. Each team member can now use their designated API key with appropriate access levels and budget controls. Apply your governance setup using the integration steps from earlier sections, then monitor usage in the Portkey dashboard:
- Cost tracking by department
- Model usage patterns
- Request volumes
- Error rates
Enterprise Features Now Available
Your OpenAI Agents setup now has:
- Departmental budget controls
- Model access governance
- Usage tracking & attribution
- Security guardrails
- Reliability features
Frequently Asked Questions
How does Portkey enhance OpenAI Agents?
Portkey adds production-readiness to OpenAI Agents through comprehensive observability (traces, logs, metrics), reliability features (fallbacks, retries, caching), and access to 1600+ LLMs through a unified interface. This makes it easier to debug, optimize, and scale your agent applications.
Can I use Portkey with existing OpenAI Agents?
Yes! Portkey integrates seamlessly with existing OpenAI Agents. You only need to replace your client initialization code with the Portkey-enabled version. The rest of your agent code remains unchanged.
Does Portkey work with all OpenAI Agents features?
Portkey supports all OpenAI Agents SDK features, including tool use, memory, planning, and more. It adds observability and reliability without limiting any of the SDK’s functionality.
How does Portkey handle streaming in OpenAI Agents?
Portkey fully supports streaming responses in OpenAI Agents. You can enable streaming by using the appropriate methods in the OpenAI Agents SDK, and Portkey will properly track and log the streaming interactions.
How do I filter logs and traces for specific agent runs?
Portkey allows you to add custom metadata to your agent runs, which you can then use for filtering. Add fields like `agent_name`, `agent_type`, or `session_id` to easily find and analyze specific agent executions.
Can I use my own API keys with Portkey?
Yes! Portkey uses your own API keys for the various LLM providers. It securely stores them as virtual keys, allowing you to easily manage and rotate keys without changing your code.