The OpenAI Agents SDK enables the development of complex AI agents with tools, planning, and memory capabilities. Portkey enhances OpenAI Agents with observability, reliability, and production-readiness features.
Portkey turns your experimental OpenAI Agents into production-ready systems by providing:
Complete observability of every agent step, tool use, and interaction
Built-in reliability with fallbacks, retries, and load balancing
Cost tracking and optimization to manage your AI spend
Access to 200+ LLMs through a single integration
Guardrails to keep agent behavior safe and compliant
Version-controlled prompts for consistent agent performance
For a simple setup, we’ll use the global client approach:
```python
from agents import (
    set_default_openai_client,
    set_default_openai_api,
    Agent,
    Runner
)
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

# Set up Portkey as the global client
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY"
    )
)

# Register as the SDK-wide default
set_default_openai_client(portkey, use_for_tracing=False)
set_default_openai_api("chat_completions")  # Responses API → Chat Completions
```
What are Virtual Keys? Virtual keys in Portkey securely store your LLM provider API keys (OpenAI, Anthropic, etc.) in an encrypted vault. They allow for easier key rotation and budget management. Learn more about virtual keys here.
Let’s create a simple question-answering agent with OpenAI Agents SDK and Portkey. This agent will respond directly to user messages using a language model:
```python
from agents import (
    set_default_openai_client,
    set_default_openai_api,
    Agent,
    Runner
)
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

# Set up Portkey as the global client
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY"
    )
)

# Register as the SDK-wide default
set_default_openai_client(portkey, use_for_tracing=False)
set_default_openai_api("chat_completions")  # Responses API → Chat Completions

# Create an agent with any supported model
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="gpt-4o"  # Swap in any Portkey-supported model
)

# Run the agent
result = Runner.run_sync(agent, "Tell me about quantum computing.")
print(result.final_output)
```
In this example:
We set up Portkey as the global client for OpenAI Agents SDK
We create a simple agent with instructions and a model
We run the agent synchronously with a user query
We print the final output
Visit your Portkey dashboard to see detailed logs of this agent’s execution!
There are a few ways to integrate Portkey with the OpenAI Agents SDK, each suited to a different scenario:
Set a global client that affects all agents in your application:
```python
from agents import (
    set_default_openai_client,
    set_default_openai_api,
    set_tracing_disabled,
    Agent,
    Runner
)
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

# Set up Portkey as the global client
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(
        virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY"  # matches the Claude model below
    )
)

# Register it as the SDK-wide default
set_default_openai_client(portkey, use_for_tracing=False)  # skip OpenAI tracing
set_default_openai_api("chat_completions")  # Responses API → Chat Completions
set_tracing_disabled(True)  # optional

# Regular agent code - just a model name
agent = Agent(
    name="Haiku Writer",
    instructions="Respond only in haikus.",
    model="claude-3-7-sonnet-latest"
)

print(Runner.run_sync(agent, "Write a haiku on recursion.").final_output)
```
Best for: Whole application migration to Portkey with minimal code changes
Use a custom ModelProvider to control which runs use Portkey:
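Below is a minimal sketch of that pattern, based on the SDK's `ModelProvider`, `OpenAIChatCompletionsModel`, and `RunConfig` interfaces; `PortkeyModelProvider` is an illustrative name, and the virtual key is a placeholder:

```python
from agents import (
    Agent,
    Model,
    ModelProvider,
    OpenAIChatCompletionsModel,
    RunConfig,
    Runner
)
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

# Portkey-backed client used only by this provider
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(virtual_key="YOUR_OPENAI_VIRTUAL_KEY")
)

class PortkeyModelProvider(ModelProvider):
    """Resolves model names to Chat Completions models served via Portkey."""
    def get_model(self, model_name: str | None) -> Model:
        return OpenAIChatCompletionsModel(
            model=model_name or "gpt-4o",
            openai_client=portkey
        )

agent = Agent(name="Assistant", instructions="You are a helpful assistant.")

# Only runs that receive this RunConfig go through Portkey;
# all other runs keep the SDK's default client
result = Runner.run_sync(
    agent,
    "Tell me about quantum computing.",
    run_config=RunConfig(model_provider=PortkeyModelProvider())
)
print(result.final_output)
```

Best for: Routing only selected runs through Portkey while the rest of your application keeps its default client.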
Research Agent with Tools: Here's a more comprehensive agent that can use tools to perform tasks:
```python
from agents import (
    Agent,
    Runner,
    function_tool,
    set_default_openai_client,
    set_default_openai_api
)
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

# Configure Portkey client
portkey = AsyncOpenAI(
    api_key=os.environ.get("PORTKEY_API_KEY"),
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY"  # matches the Claude model below
    )
)
set_default_openai_client(portkey, use_for_tracing=False)
set_default_openai_api("chat_completions")

# Define agent tools; the function_tool decorator derives each tool's
# schema from the function signature and docstring
@function_tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"It's 72°F and sunny in {location}."

@function_tool
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Found information about: {query}"

# Create agent with tools
agent = Agent(
    name="Research Assistant",
    instructions="You are a helpful assistant that can search for information and check the weather.",
    model="claude-3-opus-20240229",
    tools=[get_weather, search_web]
)

# Run the agent
result = Runner.run_sync(
    agent,
    "What's the weather in San Francisco? Also find information about the Golden Gate Bridge."
)
print(result.final_output)
```
Visit your Portkey dashboard to see the complete execution flow visualized!
Attaching a trace ID and custom metadata to your requests (see the sketch below) lets you filter logs, traces, and metrics on the Portkey dashboard, so you can analyze specific agent runs, users, or environments.
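Here's a minimal sketch of attaching tracing fields when building the Portkey headers; the trace ID and metadata values are illustrative:

```python
from portkey_ai import createHeaders

# trace_id groups related calls into a single trace; metadata
# fields become filters in the Portkey dashboard
headers = createHeaders(
    virtual_key="YOUR_OPENAI_VIRTUAL_KEY",
    trace_id="research_agent_run",      # illustrative trace name
    metadata={
        "agent_type": "research",       # custom filter field
        "environment": "production",    # custom filter field
        "_user": "user_123"             # special field for user analytics
    }
)
```

Pass these headers as `default_headers` to the `AsyncOpenAI` client exactly as in the earlier examples.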
2. Reliability - Keep Your Agents Running Smoothly
When running agents in production, things can go wrong - API rate limits, network issues, or provider outages. Portkey’s reliability features ensure your agents keep running smoothly even when problems occur.
It's this simple to enable fallbacks in your OpenAI Agents:
```python
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
from openai import AsyncOpenAI
from agents import set_default_openai_client
import os

# Create a config with fallbacks. It's recommended to create the config in
# the Portkey app and reference its ID, rather than hard-coding the JSON here.
config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"provider": "openai", "override_params": {"model": "gpt-4o"}},
        {"provider": "anthropic", "override_params": {"model": "claude-3-opus-20240229"}}
    ]
}

# Configure Portkey client with fallback config
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(config=config)
)
set_default_openai_client(portkey)
```
This configuration will automatically try Claude if the GPT-4o request fails, ensuring your agent can continue operating.
Portkey’s Prompt Engineering Studio helps you create, manage, and optimize the prompts used in your OpenAI Agents. Instead of hardcoding prompts or instructions, use Portkey’s prompt rendering API to dynamically fetch and apply your versioned prompts.
Manage prompts in Portkey's Prompt Library
Prompt Playground is a place to compare, test and deploy perfect prompts for your AI application. It’s where you experiment with different models, test variables, compare outputs, and refine your prompt engineering strategy before deploying to production. It allows you to:
Iteratively develop prompts before using them in your agents
Test prompts with different variables and models
Compare outputs between different prompt versions
Collaborate with team members on prompt development
This visual environment makes it easier to craft effective prompts for each step in your OpenAI Agents workflow.
The Prompt Render API retrieves your prompt templates with all parameters configured:
```python
from portkey_ai import Portkey, PORTKEY_GATEWAY_URL, createHeaders
from openai import AsyncOpenAI
from agents import Agent, Runner, set_default_openai_client

# Initialize the Portkey SDK client
portkey_client = Portkey(api_key="YOUR_PORTKEY_API_KEY")

# Retrieve the prompt using the render API
prompt_data = portkey_client.prompts.render(
    prompt_id="YOUR_PROMPT_ID",
    variables={
        "user_input": "Tell me about artificial intelligence"
    }
)

# Configure OpenAI client with Portkey
openai_client = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key="YOUR_PORTKEY_API_KEY",
    default_headers=createHeaders(
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY"
    )
)
set_default_openai_client(openai_client)

# Use the rendered prompt as the agent's instructions
agent = Agent(
    name="Assistant",
    instructions=prompt_data.data.messages[0]["content"],
    model="gpt-4o"
)

result = Runner.run_sync(agent, "Tell me about artificial intelligence")
print(result.final_output)
```
You can:
Create multiple versions of the same prompt
Compare performance between versions
Roll back to previous versions if needed
Specify which version to use in your code:
```python
# Use a specific prompt version
prompt_data = portkey_client.prompts.render(
    prompt_id="YOUR_PROMPT_ID@version_number",
    variables={
        "user_input": "Tell me about quantum computing"
    }
)
```
Portkey prompts use Mustache-style templating for easy variable substitution:
```
You are an AI assistant helping with {{task_type}}.

User question: {{user_input}}

Please respond in a {{tone}} tone and include {{required_elements}}.
```
When rendering, simply pass the variables:
```python
prompt_data = portkey_client.prompts.render(
    prompt_id="YOUR_PROMPT_ID",
    variables={
        "task_type": "research",
        "user_input": "Tell me about quantum computing",
        "tone": "professional",
        "required_elements": "recent academic references"
    }
)
```
Guardrails ensure your OpenAI Agents operate safely and respond appropriately in all situations.
Why Use Guardrails?
OpenAI Agents can experience various failure modes:
Generating harmful or inappropriate content
Leaking sensitive information like PII
Hallucinating incorrect information
Generating outputs in incorrect formats
Portkey’s guardrails protect against these issues by validating both inputs and outputs.
Implementing Guardrails
```python
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
from openai import AsyncOpenAI
from agents import set_default_openai_client
import os

# Create a config with input and output guardrails. It's recommended to
# create the config in the Portkey app and pass its ID in the client instead.
config = {
    "virtual_key": "openai-xxx",
    "input_guardrails": ["guardrails-id-xxx", "guardrails-id-yyy"],
    "output_guardrails": ["guardrails-id-xxx"]
}

# Configure OpenAI client with guardrails
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(
        config=config,
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY"
    )
)
set_default_openai_client(portkey)
```
Track individual users through your OpenAI Agents using Portkey’s metadata system.
What is Metadata in Portkey?
Metadata allows you to associate custom data with each request, enabling filtering, segmentation, and analytics. The special _user field is specifically designed for user tracking.
```python
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
from openai import AsyncOpenAI
from agents import set_default_openai_client
import os

# Configure client with user tracking
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(
        virtual_key="YOUR_LLM_PROVIDER_VIRTUAL_KEY",
        metadata={
            "_user": "user_123",  # Special _user field for user analytics
            "user_name": "John Doe",
            "user_tier": "premium",
            "user_company": "Acme Corp"
        }
    )
)
set_default_openai_client(portkey)
```
Filter Analytics by User
With metadata in place, you can filter analytics by user and analyze performance metrics on a per-user basis.
This enables:
Per-user cost tracking and budgeting
Personalized user analytics
Team or organization-level metrics
Environment-specific monitoring (staging vs. production)
With Portkey, you can easily switch between different LLMs in your OpenAI Agents without changing your core agent logic.
```python
# Configure Portkey with different LLM providers
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
from openai import AsyncOpenAI
from agents import Agent, Runner, set_default_openai_client
import os

# Using OpenAI
openai_config = {
    "provider": "openai",
    "api_key": "YOUR_OPENAI_API_KEY",
    "override_params": {"model": "gpt-4o"}
}

# Using Anthropic
anthropic_config = {
    "provider": "anthropic",
    "api_key": "YOUR_ANTHROPIC_API_KEY",
    "override_params": {"model": "claude-3-opus-20240229"}
}

# Choose which config to use
active_config = openai_config  # or anthropic_config

# Configure OpenAI client with the chosen provider
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(config=active_config)
)
set_default_openai_client(portkey)

# Create and run the agent - no changes needed in agent code.
# The model named here is only a reference; the model actually used
# is determined by active_config's override_params.
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="gpt-4o"
)

result = Runner.run_sync(agent, "Tell me about quantum computing.")
print(result.final_output)
```
Portkey provides access to over 200 LLMs through a unified interface, including:
OpenAI (GPT-4o, GPT-4 Turbo, etc.)
Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
OpenAI Agents SDK natively supports tools that enable your agents to interact with external systems and APIs. Portkey provides full observability for tool usage in your agents:
```python
from agents import (
    Agent,
    Runner,
    function_tool,
    set_default_openai_client,
    set_default_openai_api
)
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
import os

# Configure Portkey client with tracing
portkey = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(
        virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY",  # matches the Claude model below
        trace_id="tools_example",
        metadata={"agent_type": "research"}
    )
)
set_default_openai_client(portkey, use_for_tracing=False)
set_default_openai_api("chat_completions")

# Define tools; schemas are derived from signatures and docstrings
@function_tool
def get_weather(location: str, unit: str = "fahrenheit") -> str:
    """Get the current weather in a given location."""
    return f"The weather in {location} is 72 degrees {unit}"

@function_tool
def get_population(city: str, country: str) -> str:
    """Get the population of a city."""
    return f"The population of {city}, {country} is 1,000,000"

# Create agent with tools
agent = Agent(
    name="Research Assistant",
    instructions="You are a helpful assistant that can look up weather and population information.",
    model="claude-3-opus-20240229",
    tools=[get_weather, get_population]
)

# Run the agent
result = Runner.run_sync(
    agent,
    "What's the weather in San Francisco, and what's the population of Tokyo, Japan?"
)
print(result.final_output)
```
Running agents at enterprise scale brings additional requirements: reliability (consistent service across all users), cost tracking and budgeting, and control over which teams can access which models. Portkey adds a comprehensive governance layer to address these enterprise needs. Let's implement these controls step by step.
Enterprise Implementation Guide
Portkey allows you to use 200+ LLMs with your OpenAI Agents setup, with minimal configuration required. Let's set up the core components in Portkey that you'll need for integration.
Step 1: Create Virtual Key
Virtual Keys are Portkey’s secure way to manage your LLM provider API keys. Think of them like disposable credit cards for your LLM API keys, providing essential controls like:
Budget limits for API usage
Rate limiting capabilities
Secure API key storage
To create a virtual key:
Go to Virtual Keys in the Portkey app and create a new key for your LLM provider.
Save your virtual key ID - you’ll need it for the next step.
Step 2: Create Default Config
Configs in Portkey are JSON objects that define how your requests are routed. They help with implementing features like advanced routing, fallbacks, and retries.
We need to create a default config to route our requests to the virtual key created in Step 1.
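For example, a minimal default config simply routes every request to the virtual key from Step 1 (the key ID below is a placeholder):

```python
# Minimal default config: route all requests to the Step 1 virtual key
config = {
    "virtual_key": "YOUR_VIRTUAL_KEY_ID"  # placeholder from Step 1
}
```

Save this config in the Portkey app and note its config ID; richer configs add fallbacks, retries, and routing rules on top of this.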
Step 3: Create Portkey API Key

Create a Portkey API key in the Portkey app and attach the default config from Step 2 to it. Save your API key securely - you'll need it for OpenAI Agents integration.
Step 4: Connect to OpenAI Agents

Once you have created your Portkey API key with the default config attached, you can pass the API key and base URL directly to the AsyncOpenAI client. Here's how:
```python
from portkey_ai import PORTKEY_GATEWAY_URL
from openai import AsyncOpenAI

client = AsyncOpenAI(
    api_key="YOUR_PORTKEY_API_KEY",  # Your Portkey API key from Step 3
    base_url=PORTKEY_GATEWAY_URL     # Pass the constant, not a string literal
)

# The rest of your code remains the same
```
As your AI usage scales, controlling which teams can access specific models becomes crucial. Portkey Configs provide this control layer, letting you restrict model access per team and layer on routing rules like fallbacks and retries.
After distributing API keys to your team members, your enterprise-ready OpenAI Agents setup is ready to go. Each team member can now use their designated API keys with appropriate access levels and budget controls.
Apply your governance setup using the integration steps from earlier sections
Monitor usage in the Portkey dashboard.
Portkey adds production-readiness to OpenAI Agents through comprehensive observability (traces, logs, metrics), reliability features (fallbacks, retries, caching), and access to 200+ LLMs through a unified interface. This makes it easier to debug, optimize, and scale your agent applications.
Yes! Portkey integrates seamlessly with existing OpenAI Agents. You only need to replace your client initialization code with the Portkey-enabled version. The rest of your agent code remains unchanged.
Portkey supports all OpenAI Agents SDK features, including tool use, memory, planning, and more. It adds observability and reliability without limiting any of the SDK’s functionality.
Portkey fully supports streaming responses in OpenAI Agents. You can enable streaming by using the appropriate methods in the OpenAI Agents SDK, and Portkey will properly track and log the streaming interactions.
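For reference, here's a minimal streaming sketch using the SDK's `Runner.run_streamed`; it assumes the global Portkey client from the quick-start has already been registered:

```python
import asyncio
from agents import Agent, Runner
from openai.types.responses import ResponseTextDeltaEvent

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model="gpt-4o"
)

async def main():
    # run_streamed starts the run and returns immediately;
    # events arrive as the model generates tokens
    result = Runner.run_streamed(agent, "Tell me about quantum computing.")
    async for event in result.stream_events():
        # Print raw token deltas as they stream through Portkey
        if event.type == "raw_response_event" and isinstance(
            event.data, ResponseTextDeltaEvent
        ):
            print(event.data.delta, end="", flush=True)

asyncio.run(main())
```

Each streamed interaction is still logged in Portkey with full request and response details.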
Portkey allows you to add custom metadata to your agent runs, which you can then use for filtering. Add fields like agent_name, agent_type, or session_id to easily find and analyze specific agent executions.
Yes! Portkey uses your own API keys for the various LLM providers. It securely stores them as virtual keys, allowing you to easily manage and rotate keys without changing your code.