OpenAI Agents SDK
Use any LLM provider with OpenAI Agents while gaining advanced observability, reliability, and governance capabilities.
Portkey’s OpenAI Agents SDK integration enables you to:
- Use any Portkey-supported LLM (AWS Bedrock, Vertex AI, Gemini, Mistral, etc.) with OpenAI Agents
- Monitor agent interactions with comprehensive observability tools
- Optimize cost and performance across your agent fleet
- Build reliable agents with production-grade fallbacks, load balancing, and routing
Observability Layer
Monitor every aspect of your agent interactions:
- Track request/response details, tokens, costs, and latency
- Visualize agent execution paths with trace tracking
- Receive alerts on anomalies and performance issues
Reliability Layer
Make your agents enterprise-grade reliable:
- Implement fallbacks between models when primary options fail
- Balance load across multiple API keys and instances
- Retry failed requests automatically with intelligent backoff
- Enforce request timeouts to prevent hanging requests
Governance Layer
Control and secure your agent operations:
- Set budget limits at organization/team/project levels
- Implement guardrails to validate inputs and outputs
- Define access permissions with role-based controls
- Enforce compliance with model and usage policies
Integration
The Portkey x OpenAI Agents integration requires minimal setup:
Configure Provider
Create a Virtual Key in the Portkey dashboard with your provider credentials
Create Config
Build a Config in Portkey UI with your Virtual Key and model parameters like this:
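A minimal Config might look like the following sketch (the keys follow Portkey's Config schema; the Virtual Key ID and model parameters are placeholders to replace with your own):

```json
{
  "virtual_key": "YOUR_VIRTUAL_KEY_ID",
  "override_params": {
    "model": "gpt-4o",
    "temperature": 0.7
  }
}
```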
Generate API Key
Create a Portkey API key with optional budget/rate limits and attach your Config
Connect to OpenAI Agents
There are 3 ways to integrate Portkey with OpenAI Agents:
- Set a client that applies to all agents in your application
- Use a custom provider for selective Portkey integration
- Configure each agent individually
See the Quick Start Guide for more details.
First, install the dependencies
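Assuming a Python environment, the required packages are the Agents SDK and the OpenAI client; the `portkey-ai` package is optional but provides gateway helpers:

```shell
pip install openai-agents openai portkey-ai
```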
Minimal Working Example
Integration Approaches
You can integrate Portkey with OpenAI Agents using three officially supported approaches:
Set a global client that affects all agents in your application:
Best for: Whole application migration to Portkey with minimal code changes
Use a custom ModelProvider to control which runs use Portkey:
Best for: A/B testing, staged rollouts, or toggling between providers at runtime
Attach a specific Model object to each Agent:
Best for: Mixed agent environments where different agents need different providers or configurations
Comparing the 3 approaches
| Strategy | Code Touchpoints | Best For |
|---|---|---|
| Global client via `set_default_openai_client` | One-time setup; agents need only model names | Whole app uses Portkey; simplest migration |
| `ModelProvider` in `RunConfig` | Add a provider and pass `run_config` | Toggle Portkey per run; A/B tests, staged rollouts |
| Explicit `Model` per agent | Specify `OpenAIChatCompletionsModel` in the agent | Mixed fleet: each agent can talk to a different provider |
Production Features
Portkey transforms your OpenAI Agents into enterprise-grade AI applications with these key capabilities:
1. Multi-Provider Support
Access 2,000+ LLMs through a single interface. Switch models by changing only your Portkey configuration:
Switch between any supported model by updating your Portkey config and using the appropriate model name - no code changes required.
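For example, moving an agent from OpenAI to AWS Bedrock could be just a Config swap (the Virtual Key ID and Bedrock model ID are placeholders):

```json
{
  "virtual_key": "bedrock-virtual-key",
  "override_params": {
    "model": "anthropic.claude-3-sonnet-20240229-v1:0"
  }
}
```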
2. Reliability
Make your agents resilient against failures with:
- Fallbacks: Automatic switching between models if your primary provider fails
- Load Balancing: Distribute requests across multiple provider keys
- Retries: Automatically retry failed requests with configurable backoff
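These behaviors live in the Config rather than in agent code. A hedged sketch of a fallback-plus-retry Config (keys follow Portkey's Config schema; Virtual Key IDs and the backup model are placeholders):

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "virtual_key": "openai-primary" },
    {
      "virtual_key": "anthropic-backup",
      "override_params": { "model": "claude-3-5-sonnet-20240620" }
    }
  ],
  "retry": { "attempts": 3, "on_status_codes": [429, 500, 502, 503] }
}
```

Setting `strategy.mode` to `loadbalance` instead distributes traffic across the same `targets` list.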
3. Observability
Gain comprehensive insights into your agent operations:
- Metrics: Track costs, tokens, latency, and success rates
- Logs: View detailed records of every agent interaction
- Traces: Visualize complex agent execution paths
Implement agent-specific analytics with trace IDs, or attach custom metadata directly to your API key:
4. Governance
Implement governance controls:
- Budget Limits: Set spending caps on API keys
- Access Control: Fine-grained permissions for team members
- Guardrails: Validate inputs and outputs with customizable rules
Complete Example: Multi-Tool Agent
Here’s a practical example of an agent with tools that leverages Portkey’s features:
Key Benefits
Multi-Provider Support
Use any supported LLM provider with OpenAI Agents without code changes
Intelligent Caching
Reduce costs by up to 70% and improve response times with semantic caching
Enhanced Reliability
Ensure uptime with automatic fallbacks, retries, and load balancing
Observability
Monitor costs, performance, and usage with detailed analytics