Introduction

The OpenAI Agents SDK enables the development of complex AI agents with tools, planning, and memory capabilities. Portkey enhances OpenAI Agents with observability, reliability, and production-readiness features.

Portkey turns your experimental OpenAI Agents into production-ready systems by providing:

  • Complete observability of every agent step, tool use, and interaction
  • Built-in reliability with fallbacks, retries, and load balancing
  • Cost tracking and optimization to manage your AI spend
  • Access to 200+ LLMs through a single integration
  • Guardrails to keep agent behavior safe and compliant
  • Version-controlled prompts for consistent agent performance

OpenAI Agents SDK Official Documentation

Learn more about OpenAI Agents SDK’s core concepts

Installation & Setup

1. Install the required packages

pip install -U openai-agents portkey-ai
2. Generate API Key

Create a Portkey API key with optional budget/rate limits and attach your Config

3. Connect to OpenAI Agents

There are three ways to integrate Portkey with OpenAI Agents:

  1. Set a client that applies to all agents in your application
  2. Use a custom provider for selective Portkey integration
  3. Configure each agent individually

See the Integration Approaches section for more details.

4. Configure Portkey Client

For a simple setup, we’ll use the global client approach:

from agents import (
    set_default_openai_client,
    set_default_openai_api,
    Agent, Runner
)
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

# Set up Portkey as the global client
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY"
    )
)

# Register as the SDK-wide default
set_default_openai_client(portkey, use_for_tracing=False)
set_default_openai_api("chat_completions")  # Responses API → Chat

What are Virtual Keys? Virtual keys in Portkey securely store your LLM provider API keys (OpenAI, Anthropic, etc.) in an encrypted vault. They allow for easier key rotation and budget management. Learn more about virtual keys here.

Getting Started

Let’s create a simple question-answering agent with OpenAI Agents SDK and Portkey. This agent will respond directly to user messages using a language model:

from agents import (
    set_default_openai_client,
    set_default_openai_api,
    Agent, Runner
)
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

# Set up Portkey as the global client
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY"
    )
)

# Register as the SDK-wide default
set_default_openai_client(portkey, use_for_tracing=False)
set_default_openai_api("chat_completions")  # Responses API → Chat

# Create agent with any supported model
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="gpt-4o"  # Using Anthropic Claude through Portkey
)

# Run the agent
result = Runner.run_sync(agent, "Tell me about quantum computing.")
print(result.final_output)

In this example:

  1. We set up Portkey as the global client for OpenAI Agents SDK
  2. We create a simple agent with instructions and a model
  3. We run the agent synchronously with a user query
  4. We print the final output

Visit your Portkey dashboard to see detailed logs of this agent’s execution!

Integration Approaches

There are three ways to integrate Portkey with OpenAI Agents SDK, each suited for different scenarios:

The simplest approach is to set a global client that affects all agents in your application:

from agents import (
    set_default_openai_client,
    set_default_openai_api,
    set_tracing_disabled,
    Agent, Runner
)
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

# Set up Portkey as the global client
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY"
    )
)

# Register it as the SDK-wide default
set_default_openai_client(portkey, use_for_tracing=False)   # skip OpenAI tracing
set_default_openai_api("chat_completions")                  # Responses API → Chat
set_tracing_disabled(True)                                  # optional

# Regular agent code—just a model name
agent = Agent(
    name="Haiku Writer",
    instructions="Respond only in haikus.",
    model="claude-3-7-sonnet-latest"
)

print(Runner.run_sync(agent, "Write a haiku on recursion.").final_output)

Best for: Whole application migration to Portkey with minimal code changes

Comparing the three approaches

Strategy | Code Touchpoints | Best For
Global Client via set_default_openai_client | One-time setup; agents need only model names | Whole app uses Portkey; simplest migration
ModelProvider in RunConfig | Add a provider + pass run_config | Toggle Portkey per run; A/B tests, staged rollouts
Explicit Model per Agent | Specify OpenAIChatCompletionsModel in the agent | Mixed fleet: each agent can talk to a different provider
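
For reference, the second and third strategies look roughly like the sketches below. They assume the SDK's exported ModelProvider, OpenAIChatCompletionsModel, and RunConfig types and a Portkey-backed AsyncOpenAI client as in the global-client example above; treat them as starting points rather than canonical implementations.

from agents import (
    Agent, Runner, RunConfig,
    Model, ModelProvider, OpenAIChatCompletionsModel
)
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(virtual_key="YOUR_OPENAI_VIRTUAL_KEY")
)

# Strategy 2: a custom ModelProvider routes individual runs through Portkey
class PortkeyProvider(ModelProvider):
    def get_model(self, model_name: str | None) -> Model:
        return OpenAIChatCompletionsModel(
            model=model_name or "gpt-4o", openai_client=portkey
        )

agent = Agent(name="Assistant", instructions="You are a helpful assistant.")

# Only this run goes through Portkey; other runs keep their default client
result = Runner.run_sync(
    agent,
    "Write a haiku on recursion.",
    run_config=RunConfig(model_provider=PortkeyProvider()),
)
print(result.final_output)

The third strategy attaches a Portkey-backed model object to a single agent, leaving the rest of your fleet untouched:

# Strategy 3: explicit model per agent
claude_agent = Agent(
    name="Haiku Writer",
    instructions="Respond only in haikus.",
    model=OpenAIChatCompletionsModel(
        model="claude-3-7-sonnet-latest",
        openai_client=portkey,  # this agent talks to Portkey; others need not
    ),
)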

End-to-End Example

Research Agent with Tools: Here’s a more comprehensive agent that can use tools to perform tasks.

from agents import Agent, Runner, function_tool, set_default_openai_client
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

# Configure Portkey client
portkey = AsyncOpenAI(
    api_key=os.environ.get("PORTKEY_API_KEY"),
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY"
    )
)
set_default_openai_client(portkey)

# Define agent tools with the function_tool decorator; the SDK derives
# each tool's schema from the function signature and docstring
@function_tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"It's 72°F and sunny in {location}."

@function_tool
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Found information about: {query}"

# Create agent with tools
agent = Agent(
    name="Research Assistant",
    instructions="You are a helpful assistant that can search for information and check the weather.",
    model="gpt-4o",
    tools=[get_weather, search_web]
)

# Run the agent
result = Runner.run_sync(
    agent,
    "What's the weather in San Francisco and find information about Golden Gate Bridge?"
)
print(result.final_output)

Visit your Portkey dashboard to see the complete execution flow visualized!


Production Features

1. Enhanced Observability

Portkey provides comprehensive observability for your OpenAI Agents, helping you understand exactly what’s happening during each execution.

Traces provide a hierarchical view of your agent’s execution, showing the sequence of LLM calls, tool invocations, and state transitions.

# Add tracing to your OpenAI Agents
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(
        trace_id="unique_execution_trace_id",  # Add a unique trace ID
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY"
    )
)
set_default_openai_client(portkey)

2. Reliability - Keep Your Agents Running Smoothly

When running agents in production, things can go wrong - API rate limits, network issues, or provider outages. Portkey’s reliability features ensure your agents keep running smoothly even when problems occur.

It’s this simple to enable fallback in your OpenAI Agents:

from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
from openai import AsyncOpenAI
from agents import set_default_openai_client
import os

# Create a config with fallbacks. It's recommended to create this config in the Portkey app rather than hard-coding the JSON.
config = {
  "strategy": {
    "mode": "fallback"
  },
  "targets": [
    {
      "provider": "openai",
      "override_params": {"model": "gpt-4o"}
    },
    {
      "provider": "anthropic",
      "override_params": {"model": "claude-3-opus-20240229"}
    }
  ]
}

# Configure Portkey client with fallback config
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(config=config)
)
set_default_openai_client(portkey)

This configuration will automatically try Claude if the GPT-4o request fails, ensuring your agent can continue operating.
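
If you have saved this config in the Portkey app instead (as recommended in the comment above), you can reference it by ID rather than inlining the JSON; YOUR_CONFIG_ID below is a placeholder for the ID shown in your dashboard:

portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(config="YOUR_CONFIG_ID")  # saved fallback config
)
set_default_openai_client(portkey)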

3. Prompting in OpenAI Agents

Portkey’s Prompt Engineering Studio helps you create, manage, and optimize the prompts used in your OpenAI Agents. Instead of hardcoding prompts or instructions, use Portkey’s prompt rendering API to dynamically fetch and apply your versioned prompts.
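
As a minimal sketch, here is one way to fetch a rendered prompt and feed it to an agent. It assumes a prompt saved in Portkey with the hypothetical ID YOUR_PROMPT_ID whose first message is the system instruction, and that the render response exposes a messages list of dicts:

import os
from portkey_ai import Portkey
from agents import Agent, Runner

# Separate Portkey client for the Prompts API (distinct from the gateway client)
portkey_client = Portkey(api_key=os.environ["PORTKEY_API_KEY"])

# Render a versioned prompt with runtime variables (hypothetical prompt ID)
rendered = portkey_client.prompts.render(
    prompt_id="YOUR_PROMPT_ID",
    variables={"topic": "quantum computing"}
).data

agent = Agent(
    name="Assistant",
    instructions=rendered.messages[0]["content"],  # assumes the first message is the system prompt
    model="gpt-4o",
)
print(Runner.run_sync(agent, "Give me a quick overview.").final_output)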

Manage prompts in Portkey's Prompt Library

Prompt Playground is a place to compare, test and deploy perfect prompts for your AI application. It’s where you experiment with different models, test variables, compare outputs, and refine your prompt engineering strategy before deploying to production. It allows you to:

  1. Iteratively develop prompts before using them in your agents
  2. Test prompts with different variables and models
  3. Compare outputs between different prompt versions
  4. Collaborate with team members on prompt development

This visual environment makes it easier to craft effective prompts for each step in your OpenAI Agents workflow.

Prompt Engineering Studio

Learn more about Portkey’s prompt management features

4. Guardrails for Safe Agents

Guardrails ensure your OpenAI Agents operate safely and respond appropriately in all situations.

Why Use Guardrails?

OpenAI Agents can experience various failure modes:

  • Generating harmful or inappropriate content
  • Leaking sensitive information like PII
  • Hallucinating incorrect information
  • Generating outputs in incorrect formats

Portkey’s guardrails protect against these issues by validating both inputs and outputs.

Implementing Guardrails

from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
from openai import AsyncOpenAI
from agents import set_default_openai_client
import os

# Create a config with input and output guardrails. It's recommended to create the config in the Portkey app and pass its ID in the client.
config = {
    "virtual_key": "openai-xxx",
    "input_guardrails": ["guardrails-id-xxx", "guardrails-id-yyy"],
    "output_guardrails": ["guardrails-id-xxx"]
}

# Configure OpenAI client with guardrails
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(config=config)  # the virtual key is set inside the config
)
set_default_openai_client(portkey)

Portkey’s guardrails can:

  • Detect and redact PII in both inputs and outputs
  • Filter harmful or inappropriate content
  • Validate response formats against schemas
  • Check for hallucinations against ground truth
  • Apply custom business logic and rules

Learn More About Guardrails

Explore Portkey’s guardrail features to enhance agent safety

5. User Tracking with Metadata

Track individual users through your OpenAI Agents using Portkey’s metadata system.

What is Metadata in Portkey?

Metadata allows you to associate custom data with each request, enabling filtering, segmentation, and analytics. The special _user field is specifically designed for user tracking.

from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
from openai import AsyncOpenAI
from agents import set_default_openai_client
import os

# Configure client with user tracking
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(
        virtual_key="YOUR_LLM_PROVIDER_VIRTUAL_KEY",
        metadata={
            "_user": "user_123", # Special _user field for user analytics
            "user_name": "John Doe",
            "user_tier": "premium",
            "user_company": "Acme Corp"
        }
    )
)
set_default_openai_client(portkey)

Filter Analytics by User

With metadata in place, you can filter analytics by user and analyze performance metrics on a per-user basis:

Filter analytics by user

This enables:

  • Per-user cost tracking and budgeting
  • Personalized user analytics
  • Team or organization-level metrics
  • Environment-specific monitoring (staging vs. production)

Learn More About Metadata

Explore how to use custom metadata to enhance your analytics

6. Caching for Efficient Agents

Implement caching to make your OpenAI Agents more efficient and cost-effective:

from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
from openai import AsyncOpenAI
from agents import set_default_openai_client
import os

portkey_config = {
  "cache": {
    "mode": "simple"
  },
  "virtual_key": "YOUR_LLM_PROVIDER_VIRTUAL_KEY"
}

# Configure OpenAI client with the caching config
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(config=portkey_config)
)
set_default_openai_client(portkey)

Simple caching performs exact matches on input prompts, caching identical requests to avoid redundant model executions.
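
As a quick illustration (assuming the cached client above is registered as the default), a repeated identical request should be served from cache instead of triggering a second model call:

from agents import Agent, Runner

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="gpt-4o",
)

# First call hits the LLM; the identical second call is an exact-match cache hit
print(Runner.run_sync(agent, "Define memoization in one sentence.").final_output)
print(Runner.run_sync(agent, "Define memoization in one sentence.").final_output)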

7. Model Interoperability

With Portkey, you can easily switch between different LLMs in your OpenAI Agents without changing your core agent logic.

# Configure Portkey with different LLM providers
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
from openai import AsyncOpenAI
from agents import set_default_openai_client
import os

# Using OpenAI
openai_config = {
    "provider": "openai",
    "api_key": "YOUR_OPENAI_API_KEY",
    "override_params": {
        "model": "gpt-4o"
    }
}

# Using Anthropic
anthropic_config = {
    "provider": "anthropic",
    "api_key": "YOUR_ANTHROPIC_API_KEY",
    "override_params": {
        "model": "claude-3-opus-20240229"
    }
}

# Choose which config to use
active_config = openai_config  # or anthropic_config

# Configure OpenAI client with chosen provider
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(config=active_config)
)
set_default_openai_client(portkey)

# Create and run agent - no changes needed in agent code
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    # override_params in the active config determines the model actually used;
    # the value here is only a reference
    model="gpt-4o"
)

result = Runner.run_sync(agent, "Tell me about quantum computing.")
print(result.final_output)

Portkey provides access to over 200 LLMs through a unified interface, including:

  • OpenAI (GPT-4o, GPT-4 Turbo, etc.)
  • Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
  • Mistral AI (Mistral Large, Mistral Medium, etc.)
  • Google Vertex AI (Gemini 1.5 Pro, etc.)
  • Cohere (Command, Command-R, etc.)
  • AWS Bedrock (Claude, Titan, etc.)
  • Local/Private Models

Supported Providers

See the full list of LLM providers supported by Portkey

Tool Use in OpenAI Agents

OpenAI Agents SDK natively supports tools that enable your agents to interact with external systems and APIs. Portkey provides full observability for tool usage in your agents:

from agents import Agent, Runner, function_tool, set_default_openai_client
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

# Configure Portkey client with tracing
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY",
        trace_id="tools_example",
        metadata={"agent_type": "research"}
    )
)
set_default_openai_client(portkey)

# Define tools with the function_tool decorator; the SDK derives
# each tool's schema from the function signature and docstring
@function_tool
def get_weather(location: str, unit: str = "fahrenheit") -> str:
    """Get the current weather in a given location."""
    return f"The weather in {location} is 72 degrees {unit}"

@function_tool
def get_population(city: str, country: str) -> str:
    """Get the population of a city."""
    return f"The population of {city}, {country} is 1,000,000"

# Create agent with tools
agent = Agent(
    name="Research Assistant",
    instructions="You are a helpful assistant that can look up weather and population information.",
    model="gpt-4o",
    tools=[get_weather, get_population]
)

# Run the agent
result = Runner.run_sync(
    agent,
    "What's the weather in San Francisco and what's the population of Tokyo, Japan?"
)
print(result.final_output)

Set Up Enterprise Governance for OpenAI Agents

Why Enterprise Governance? If you are using OpenAI Agents inside your organization, you need to consider several governance aspects:

  • Cost Management: Controlling and tracking AI spending across teams
  • Access Control: Managing which teams can use specific models
  • Usage Analytics: Understanding how AI is being used across the organization
  • Security & Compliance: Maintaining enterprise security standards
  • Reliability: Ensuring consistent service across all users

Portkey adds a comprehensive governance layer to address these enterprise needs. Let’s implement these controls step by step.

Enterprise Implementation Guide

Portkey allows you to use 200+ LLMs with your OpenAI Agents setup, with minimal configuration required. Let's set up the core components in Portkey that you'll need for integration.

1. Create Virtual Key

Virtual Keys are Portkey’s secure way to manage your LLM provider API keys. Think of them like disposable credit cards for your LLM API keys, providing essential controls like:

  • Budget limits for API usage
  • Rate limiting capabilities
  • Secure API key storage

To create a virtual key, go to Virtual Keys in the Portkey app, create a new key for your LLM provider, then save and copy the virtual key ID.

Save your virtual key ID - you’ll need it for the next step.

2. Create Default Config

Configs in Portkey are JSON objects that define how your requests are routed. They help with implementing features like advanced routing, fallbacks, and retries.

We need to create a default config to route our requests to the virtual key created in Step 1.

To create your config:

  1. Go to Configs in Portkey dashboard
  2. Create a new config with:

    {
        "virtual_key": "YOUR_VIRTUAL_KEY_FROM_STEP1",
        "override_params": {
            "model": "gpt-4o" // Your preferred model name
        }
    }
  3. Save and note the Config name for the next step

This basic config connects to your virtual key. You can add more advanced Portkey features later.

3. Configure Portkey API Key

Now create a Portkey API key and attach the config you created in Step 2:

  1. Go to API Keys in Portkey and create a new API key
  2. Select your config from Step 2
  3. Generate and save your API key

Save your API key securely - you’ll need it for OpenAI Agents integration.

4. Connect to OpenAI Agents

Once you have created your API key with the default config attached, you can pass the API key and base URL directly to the AsyncOpenAI client. Here's how:

from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL

client = AsyncOpenAI(
    api_key="YOUR_PORTKEY_API_KEY",  # Your Portkey API key from Step 3
    base_url=PORTKEY_GATEWAY_URL     # use the constant, not a string literal
)

# The rest of your code remains the same
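
For example, you can then register this client SDK-wide and run agents exactly as before; the attached default config supplies the provider and model (a sketch assuming the Step 2 config routes to gpt-4o):

from agents import set_default_openai_client, set_default_openai_api, Agent, Runner

set_default_openai_client(client, use_for_tracing=False)
set_default_openai_api("chat_completions")

agent = Agent(name="Assistant", instructions="You are a helpful assistant.", model="gpt-4o")
print(Runner.run_sync(agent, "Hello!").final_output)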

Enterprise Features Now Available

Your OpenAI Agents setup now has:

  • Departmental budget controls
  • Model access governance
  • Usage tracking & attribution
  • Security guardrails
  • Reliability features
