Portkey’s OpenAI Agents SDK integration enables you to:

  • Use LLMs from any Portkey-supported provider (AWS Bedrock, Vertex AI, Gemini, Mistral, and more) with OpenAI Agents
  • Monitor agent interactions with comprehensive observability tools
  • Optimize cost and performance across your agent fleet
  • Build reliable agents with production-grade fallbacks, load balancing, and routing

Observability Layer

Monitor every aspect of your agent interactions:

  • Track request/response details, tokens, costs, and latency
  • Visualize agent execution paths with trace tracking
  • Receive alerts on anomalies and performance issues

Reliability Layer

Give your agents enterprise-grade reliability:

  • Implement fallbacks between models when primary options fail
  • Balance load across multiple API keys and instances
  • Retry failed requests automatically with intelligent backoff
  • Set request timeouts to prevent hanging requests
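
These behaviors are set declaratively in a Portkey Config rather than in agent code. As an illustrative sketch (the virtual keys and weights are placeholders), a Config that load-balances two keys, retries up to three times, and times out after 10 seconds might look like:

{
  "strategy": { "mode": "loadbalance" },
  "retry": { "attempts": 3 },
  "request_timeout": 10000,
  "targets": [
    { "virtual_key": "openai-key-1", "weight": 0.7 },
    { "virtual_key": "openai-key-2", "weight": 0.3 }
  ]
}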

Governance Layer

Control and secure your agent operations:

  • Set budget limits at organization/team/project levels
  • Implement guardrails to validate inputs and outputs
  • Define access permissions with role-based controls
  • Enforce compliance with model and usage policies

Integration

The Portkey x OpenAI Agents integration requires minimal setup:

Configure Provider

Create a Virtual Key in the Portkey dashboard with your provider credentials

Create Config

Build a Config in Portkey UI with your Virtual Key and model parameters like this:

{
  "virtual_key": "anthropic-123abc",
  "override_params": {
    "model": "claude-3-7-sonnet-latest",
    "max_tokens": 4096
  }
}
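
Here override_params pins the model and max_tokens for every request routed through this Config, so your agent code only needs to reference a model name.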

Generate API Key

Create a Portkey API key with optional budget/rate limits and attach your Config

Connect to OpenAI Agents

There are 3 ways to integrate Portkey with OpenAI Agents:

  1. Set a client that applies to all agents in your application
  2. Use a custom provider for selective Portkey integration
  3. Configure each agent individually

See the Quick Start Guide for more details.

First, install the dependencies:

pip install -U openai-agents portkey-ai

Minimal Working Example

from agents import (
    set_default_openai_client,
    set_default_openai_api,
    Agent, Runner
)
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL
import os

# Set up Portkey as the global client
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
)

# Register as the SDK-wide default
set_default_openai_client(portkey, use_for_tracing=False)
set_default_openai_api("chat_completions")  # Responses API → Chat

# Create agent with any supported model
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="claude-3-7-sonnet-latest"
)

# Run the agent
result = Runner.run_sync(agent, "Tell me about quantum computing.")
print(result.final_output)

Integration Approaches

You can integrate Portkey with OpenAI Agents using three officially supported approaches:

Approach 1: Global Client. Set a default client that applies to all agents in your application:

from agents import (
    set_default_openai_client,
    set_default_openai_api,
    set_tracing_disabled,
    Agent, Runner
)
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL
import os

# Build a Portkey-backed client
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
)

# Register it as the SDK-wide default
set_default_openai_client(portkey, use_for_tracing=False)   # skip OpenAI tracing
set_default_openai_api("chat_completions")                  # Responses API → Chat
set_tracing_disabled(True)                                  # optional

# Regular agent code—just a model name
agent = Agent(
    name="Haiku Writer",
    instructions="Respond only in haikus.",
    model="claude-3-7-sonnet-latest"
)

print(Runner.run_sync(agent, "Write a haiku on recursion.").final_output)

Best for: Whole application migration to Portkey with minimal code changes
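
Approach 2: ModelProvider in RunConfig. Route individual runs through Portkey by passing a custom provider in run_config. Here is a minimal sketch built on the SDK's ModelProvider and OpenAIChatCompletionsModel interfaces; the fallback model name is an assumption:

from agents import (
    Agent, Runner, RunConfig,
    Model, ModelProvider, OpenAIChatCompletionsModel
)
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL
import os

portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
)

class PortkeyModelProvider(ModelProvider):
    # Resolve every requested model against the Portkey-backed client
    def get_model(self, model_name: str | None) -> Model:
        return OpenAIChatCompletionsModel(
            model=model_name or "claude-3-7-sonnet-latest",  # assumed default
            openai_client=portkey,
        )

agent = Agent(
    name="Haiku Writer",
    instructions="Respond only in haikus.",
)

# Only runs that receive this run_config go through Portkey
result = Runner.run_sync(
    agent,
    "Write a haiku on recursion.",
    run_config=RunConfig(model_provider=PortkeyModelProvider()),
)
print(result.final_output)

Best for: Toggling Portkey per run, e.g. A/B tests and staged rollouts

Approach 3: Explicit Model per Agent. Attach a Portkey-backed model to a single agent while the rest of your fleet keeps its defaults. A sketch:

from agents import Agent, Runner, OpenAIChatCompletionsModel
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL
import os

portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
)

# Only this agent talks to Portkey; other agents are unaffected
agent = Agent(
    name="Haiku Writer",
    instructions="Respond only in haikus.",
    model=OpenAIChatCompletionsModel(
        model="claude-3-7-sonnet-latest",
        openai_client=portkey,
    ),
)

print(Runner.run_sync(agent, "Write a haiku on recursion.").final_output)

Best for: Mixed fleets where each agent talks to a different provider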

Comparing the 3 approaches

Strategy                                      Code Touchpoints                                   Best For
Global Client via set_default_openai_client   One-time setup; agents need only model names       Whole app uses Portkey; simplest migration
ModelProvider in RunConfig                    Add a provider + pass run_config                   Toggle Portkey per run; A/B tests, staged rollouts
Explicit Model per Agent                      Specify OpenAIChatCompletionsModel in the agent    Mixed fleet: each agent can talk to a different provider

Production Features

Portkey transforms your OpenAI Agents into enterprise-grade AI applications with these key capabilities:

1. Multi-Provider Support

Access 2,000+ LLMs through a single interface. Switch models by changing only your Portkey configuration:

# Your code stays the same
from agents import (
    Agent, Runner,
    set_default_openai_client,
    set_default_openai_api,
)
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL
import os

portkey = AsyncOpenAI(
    api_key=os.environ.get("PORTKEY_API_KEY"),  # provider routing lives in your attached Config
    base_url=PORTKEY_GATEWAY_URL
)
set_default_openai_client(portkey)
set_default_openai_api("chat_completions")  # needed for non-OpenAI models

# Just change the model name to switch providers
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="claude-3-opus-20240229"  # Or "gemini-1.5-pro" or "anthropic.claude-v2"
)

Switch between any supported model by updating your Portkey config and using the appropriate model name - no code changes required.
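
For instance, repointing the earlier Config at another provider is a Config-only change (the virtual key below is a placeholder):

{
  "virtual_key": "vertex-456def",
  "override_params": {
    "model": "gemini-1.5-pro"
  }
}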

2. Reliability

Make your agents resilient against failures with:

  • Fallbacks: Automatic switching between models if your primary provider fails
  • Load Balancing: Distribute requests across multiple provider keys
  • Retries: Automatically retry failed requests with configurable backoff
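
For example, this Config retries up to three times and falls back from a primary Anthropic key to a Mistral backup:
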
{
  "retry": { "attempts": 3 },
  "strategy": { "mode": "fallback" },
  "targets": [
    { "virtual_key": "anthropic-primary" },
    { "virtual_key": "mistral-backup" }
  ]
}
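
Save this Config in the Portkey UI and attach it to your API key, or pass it per client. A sketch of the per-client route (the Config ID is a placeholder):

from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

# Every request through this client uses the saved fallback Config
portkey = AsyncOpenAI(
    api_key=os.environ["PORTKEY_API_KEY"],
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(config="pc-fallback-xyz"),
)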

3. Observability

Gain comprehensive insights into your agent operations:

  • Metrics: Track costs, tokens, latency, and success rates
  • Logs: View detailed records of every agent interaction
  • Traces: Visualize complex agent execution paths

Implement agent-specific analytics by attaching trace IDs and custom metadata to your requests:

from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

portkey = AsyncOpenAI(
    api_key=os.environ["PORTKEY_API_KEY"],
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        trace_id="support-agent-42",
        metadata={"user_type": "enterprise"}
    )
)

4. Governance

Implement governance controls by attaching budget and rate limits to your Portkey API keys, enforcing guardrails on inputs and outputs, and scoping access with role-based permissions, as outlined in the Governance Layer above.

Complete Example: Multi-Tool Agent

Here’s a practical example of an agent with tools that leverages Portkey’s features:

from agents import (
    Agent, Runner, function_tool,
    set_default_openai_client, set_default_openai_api,
)
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL
import os

# Configure Portkey client
portkey = AsyncOpenAI(
    api_key=os.environ.get("PORTKEY_API_KEY"),
    base_url=PORTKEY_GATEWAY_URL
)
set_default_openai_client(portkey)
set_default_openai_api("chat_completions")  # needed for non-OpenAI models

# Define agent tools; function_tool derives each tool's schema
# from the function signature and docstring
@function_tool
def get_weather(location: str) -> str:
    """Get current weather for a location (city and state, e.g. San Francisco, CA)."""
    return f"Sunny and 75°F in {location}"

@function_tool
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Found results for: {query}"

# Create agent with tools
agent = Agent(
    name="Research Assistant",
    instructions="Search for information and check weather.",
    model="claude-3-7-sonnet-latest",  # served by the configured provider
    tools=[get_weather, search_web]
)

# Run the agent
result = Runner.run_sync(
    agent, 
    "What's the weather in San Francisco and find information about Golden Gate Bridge?"
)
print(result.final_output)
