Strands Agents is a simple-to-use agent framework built by AWS. Portkey enhances Strands Agents with production-grade observability, reliability, and multi-provider support, all through a single integration that requires no changes to your existing agent logic.

What you get with this integration:
  • Complete observability of every agent step, tool use, and LLM interaction
  • Built-in reliability with automatic fallbacks, retries, and load balancing
  • 1,600+ LLMs accessible through the same OpenAI-compatible interface
  • Production monitoring with traces, logs, and real-time metrics
  • Zero code changes to your existing Strands agent implementations

Strands Agents Documentation

Learn more about Strands Agents’ core concepts and features

Quick Start

1. Install Dependencies

pip install -U strands-agents strands-agents-tools openai portkey-ai
2. Replace Your Model Initialization

Instead of initializing your OpenAI model directly:
# Before: Direct OpenAI
from strands.models.openai import OpenAIModel

model = OpenAIModel(
    client_args={"api_key": "sk-..."},
    model_id="gpt-4o",
    params={"temperature": 0.7}
)
Initialize it through Portkey’s gateway:
# After: Through Portkey
from strands.models.openai import OpenAIModel
from portkey_ai import PORTKEY_GATEWAY_URL

model = OpenAIModel(
    client_args={
        "api_key": "YOUR_PORTKEY_API_KEY",  # Your Portkey API key
        "base_url": PORTKEY_GATEWAY_URL     # Routes through Portkey
    },
    model_id="gpt-4o",
    params={"temperature": 0.7}
)
3. Use Your Agent Normally

from strands import Agent
from strands_tools import calculator

agent = Agent(model=model, tools=[calculator])
response = agent("What is 2+2?")
print(response)
Your agent works exactly the same way, but now all interactions are automatically logged, traced, and monitored in your Portkey dashboard.

How the Integration Works

The integration leverages Strands’ flexible client_args parameter, which passes any arguments directly to the OpenAI client constructor. By setting base_url to Portkey’s gateway, all requests route through Portkey while maintaining full compatibility with the OpenAI API.
# This is what happens under the hood in Strands:
client_args = client_args or {}
self.client = openai.OpenAI(**client_args)  # Your Portkey config gets passed here
This means you get all of Portkey’s features without any changes to your agent logic, tool usage, or response handling.

Setting Up Portkey

Before using the integration, you need to configure your AI providers and create a Portkey API key.
1. Add Your AI Provider Keys

Go to Virtual Keys in the Portkey dashboard and add your actual AI provider keys (OpenAI, Anthropic, etc.). Each provider key gets a virtual key ID that you’ll reference in configs.
2. Create a Configuration

Go to Configs to define how requests should be routed. A basic config looks like:
{
  "virtual_key": "openai-key-abc123"
}
For production setups, you can add fallbacks, load balancing, and conditional routing here.
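For example, a production config with a fallback strategy might look like this (a sketch; the virtual key IDs are placeholders for the ones generated in your dashboard):
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "virtual_key": "openai-key-abc123" },
    { "virtual_key": "anthropic-key-def456" }
  ]
}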
3. Generate Your Portkey API Key

Go to API Keys to create a new API key. Attach your config as the default routing config, and you’ll get an API key that routes to your configured providers.
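You can also skip attaching a default config and reference a saved config at request time instead. A minimal sketch, where "YOUR_CONFIG_ID" is a placeholder for the ID of a config saved in your dashboard:
from strands.models.openai import OpenAIModel
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

model = OpenAIModel(
    client_args={
        "api_key": "YOUR_PORTKEY_API_KEY",
        "base_url": PORTKEY_GATEWAY_URL,
        # "YOUR_CONFIG_ID" is a placeholder for a saved config's ID
        "default_headers": createHeaders(config="YOUR_CONFIG_ID")
    },
    model_id="gpt-4o",
    params={"temperature": 0.7}
)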

Complete Integration Example

Here’s a full example showing how to set up a Strands agent with Portkey integration:
from strands import Agent
from strands.models.openai import OpenAIModel
from strands_tools import calculator, web_search
from portkey_ai import PORTKEY_GATEWAY_URL

# Initialize model through Portkey
model = OpenAIModel(
    client_args={
        "api_key": "YOUR_PORTKEY_API_KEY",
        "base_url": PORTKEY_GATEWAY_URL
    },
    model_id="gpt-4o",
    params={
        "max_tokens": 1000,
        "temperature": 0.7,
    }
)

# Create agent with tools (unchanged from standard Strands usage)
agent = Agent(
    model=model,
    tools=[calculator, web_search]
)

# Use the agent (unchanged from standard Strands usage)
response = agent("Calculate the compound interest on $10,000 at 5% for 10 years, then search for current inflation rates")
print(response)
The agent will automatically use both tools as needed, and every step will be logged in your Portkey dashboard with full request/response details, timing, and token usage.

Production Features

1. Enhanced Observability

Portkey provides comprehensive visibility into your agent’s behavior without requiring any code changes.
Track the complete execution flow of your agents with hierarchical traces that show:
  • LLM calls: Every request to language models with full payloads
  • Tool invocations: Which tools were called, with what parameters, and their responses
  • Decision points: How the agent chose between different tools or approaches
  • Performance metrics: Latency, token usage, and cost for each step
from strands import Agent
from strands.models.openai import OpenAIModel
from strands_tools import calculator
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

model = OpenAIModel(
    client_args={
        "api_key": "YOUR_PORTKEY_API_KEY",
        "base_url": PORTKEY_GATEWAY_URL,
        # Add trace ID to group related requests
        "default_headers": createHeaders(trace_id="user_session_123")
    },
    model_id="gpt-4o",
    params={"temperature": 0.7}
)

agent = Agent(model=model, tools=[calculator])
response = agent("What's 15% of 2,847?")
All requests from this agent will be grouped under the same trace, making it easy to analyze the complete interaction flow.
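You can also attach custom metadata alongside the trace ID and filter on it later in the dashboard. A minimal sketch; the metadata field names here are illustrative, not required keys:
from portkey_ai import createHeaders

# Pass these via "default_headers" exactly as in the example above
headers = createHeaders(
    trace_id="user_session_123",
    metadata={"agent_name": "math_helper", "environment": "staging"}  # illustrative fields
)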

2. Reliability & Fallbacks

When running agents in production, things can go wrong: API rate limits, network issues, or provider outages. Portkey’s reliability features keep your agents running smoothly even when problems occur. To enable fallbacks for your Strands agents, attach a Portkey Config either at runtime or directly to your Portkey API key. Here’s an example of attaching a config at runtime that configures multiple providers, so your agents keep working even when one fails:
from strands.models.openai import OpenAIModel
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

model = OpenAIModel(
    client_args={
        "api_key": "YOUR_PORTKEY_API_KEY",
        "base_url": PORTKEY_GATEWAY_URL,
        "default_headers": createHeaders(
            config={
                "strategy": {
                    "mode": "fallback",
                    "on_status_codes": [429, 503, 502]  # Rate limits and server errors
                },
                "targets": [
                    { "virtual_key": "openai-key-primary" },   # Try OpenAI first
                    { "virtual_key": "anthropic-key-backup" }  # Fall back to Claude
                ]
            }
        )
    },
    model_id="gpt-4o",  # Will map to equivalent models on each provider
    params={"temperature": 0.7}
)
If OpenAI returns a rate limit error (429), Portkey automatically retries the request against the next target, in this case Anthropic’s Claude.
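Note that the fallback target receives the same request parameters by default, including the model ID. To pin each target to a provider-appropriate model, Portkey configs support per-target override_params; a sketch, with placeholder virtual keys:
from portkey_ai import createHeaders

headers = createHeaders(
    config={
        "strategy": {"mode": "fallback", "on_status_codes": [429, 503, 502]},
        "targets": [
            {"virtual_key": "openai-key-primary"},
            {
                "virtual_key": "anthropic-key-backup",
                # Pin the fallback target to a concrete Claude model
                "override_params": {"model": "claude-3-5-sonnet-20241022"}
            }
        ]
    }
)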

3. LLM Interoperability

Access 1,600+ models through the same Strands interface by changing just the provider configuration:
from strands import Agent
from strands.models.openai import OpenAIModel
from strands_tools import calculator
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# Use Claude instead of GPT-4
model = OpenAIModel(
    client_args={
        "api_key": "YOUR_PORTKEY_API_KEY",
        "base_url": PORTKEY_GATEWAY_URL,
        "default_headers": createHeaders(
            provider="anthropic",
            api_key="YOUR_ANTHROPIC_KEY"  # Can also use virtual keys
        )
    },
    model_id="claude-3-7-sonnet-latest",  # Claude model ID
    params={"max_tokens": 1000, "temperature": 0.7}
)

# Agent code remains exactly the same
agent = Agent(model=model, tools=[calculator])
response = agent("Explain quantum computing in simple terms")
Portkey provides access to LLMs from providers including:
  • OpenAI (GPT-4o, GPT-4 Turbo, etc.)
  • Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
  • Mistral AI (Mistral Large, Mistral Medium, etc.)
  • Google Vertex AI (Gemini 1.5 Pro, etc.)
  • Cohere (Command, Command-R, etc.)
  • AWS Bedrock (Claude, Titan, etc.)
  • Local/Private Models

Supported Providers

See the full list of LLM providers supported by Portkey

4. Guardrails for Safe Agents

Guardrails ensure your Strands agents operate safely and respond appropriately in all situations.

Why Use Guardrails?

Strands agents can experience various failure modes:
  • Generating harmful or inappropriate content
  • Leaking sensitive information like PII
  • Hallucinating incorrect information
  • Generating outputs in incorrect formats
Portkey’s guardrails can address these (see the config sketch after this list):
  • Detect and redact PII in both inputs and outputs
  • Filter harmful or inappropriate content
  • Validate response formats against schemas
  • Check for hallucinations against ground truth
  • Apply custom business logic and rules
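Guardrails are attached through your Portkey config rather than your agent code. A minimal sketch, assuming the guardrail IDs were created in the Portkey dashboard beforehand:
from portkey_ai import createHeaders

headers = createHeaders(
    config={
        "virtual_key": "openai-key-abc123",           # placeholder virtual key
        "input_guardrails": ["guardrail-id-pii"],     # placeholder guardrail IDs
        "output_guardrails": ["guardrail-id-safety"]
    }
)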

Learn More About Guardrails

Explore Portkey’s guardrail features to enhance agent safety

Advanced Configuration

Configure different behavior for development, staging, and production:
import os
from strands import Agent
from strands.models.openai import OpenAIModel
from strands_tools import calculator
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

def create_model(environment="production"):
    if environment == "development":
        # Use faster, cheaper models for development
        headers = createHeaders(
            config={"targets": [{"virtual_key": "openai-dev-key"}]},
            metadata={"environment": "dev"}
        )
        model_id = "gpt-4o-mini"
        params = {"temperature": 0.5, "max_tokens": 500}

    else:
        # Default (production): high-performance models with fallbacks
        headers = createHeaders(
            config={
                "strategy": {"mode": "fallback"},
                "targets": [
                    {"virtual_key": "openai-prod-key"},
                    {"virtual_key": "anthropic-prod-key"}
                ]
            },
            metadata={"environment": "prod"}
        )
        model_id = "gpt-4o"
        params = {"temperature": 0.7, "max_tokens": 2000}

    return OpenAIModel(
        client_args={
            "api_key": "YOUR_PORTKEY_API_KEY",
            "base_url": PORTKEY_GATEWAY_URL,
            "default_headers": headers
        },
        model_id=model_id,
        params=params
    )

# Use environment-specific configuration
model = create_model(os.getenv("APP_ENV", "production"))
agent = Agent(model=model, tools=[calculator])

Enterprise Governance

If you are using Strands inside your organization, you need to consider several governance aspects:
  • Cost Management: Controlling and tracking AI spending across teams
  • Access Control: Managing which teams can use specific models
  • Usage Analytics: Understanding how AI is being used across the organization
  • Security & Compliance: Maintaining enterprise security standards
  • Reliability: Ensuring consistent service across all users

Centralized Key Management

Instead of distributing raw API keys to developers, use Portkey API keys that you can control centrally:
# Developers use Portkey API keys (not raw provider keys)
model = OpenAIModel(
    client_args={
        "api_key": "pk-team-frontend-xyz123",  # Team-specific Portkey key
        "base_url": PORTKEY_GATEWAY_URL
    },
    model_id="gpt-4o",
    params={"temperature": 0.7}
)
You can:
  • Rotate provider keys without updating any code
  • Set spending limits per team or API key
  • Control model access (which teams can use which models)
  • Monitor usage across all teams and projects
  • Revoke access instantly if needed

Usage Analytics & Budgets

Track and control AI spending across your organization:
  • Per-team budgets: Set monthly spending limits for different teams
  • Model usage analytics: See which teams are using which models most
  • Cost attribution: Understand costs by project, team, or user
  • Usage alerts: Get notified when teams approach their limits
All of this works automatically with your existing Strands agents—no code changes required.
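Cost attribution relies on the same metadata mechanism described earlier. A sketch, with illustrative team and project fields (the _user key is a Portkey metadata convention for per-user attribution):
from portkey_ai import createHeaders

headers = createHeaders(
    metadata={
        "team": "frontend",        # illustrative field
        "project": "support-bot",  # illustrative field
        "_user": "user_42"         # per-user cost attribution
    }
)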

Troubleshooting

Problem: ModuleNotFoundError when importing Portkey components
Solution: Ensure all dependencies are installed:
pip install -U strands-agents strands-agents-tools openai portkey-ai
Verify versions are compatible:
import strands
import portkey_ai
print(f"Strands: {strands.__version__}")
print(f"Portkey: {portkey_ai.__version__}")
Problem: AuthenticationError when making requests
Solution: Verify your Portkey API key and provider setup. Check for common issues such as a wrong API key format, misconfigured provider virtual keys, and a missing config attachment, then test your Portkey API key directly:
# Test your Portkey API key directly
from portkey_ai import Portkey

portkey = Portkey(api_key="YOUR_PORTKEY_API_KEY")
response = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "test"}],
    model="gpt-4o"
)
print(response)
Problem: Hitting rate limits despite having fallbacks configured
Solution: Check your fallback configuration:
# Ensure fallbacks are configured for rate limit status codes
headers = createHeaders(
    config={
        "strategy": {
            "mode": "fallback",
            "on_status_codes": [429, 503, 502, 504]
        },
        "targets": [
            {"virtual_key": "primary-key"},
            {"virtual_key": "backup-key"}
        ]
    }
)
Also verify that your backup providers have sufficient quota.
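You can also layer automatic retries on top of fallbacks, so transient errors are retried against the same target before the request fails over. A sketch using Portkey’s retry config block:
headers = createHeaders(
    config={
        "retry": {"attempts": 3, "on_status_codes": [429, 503]},
        "strategy": {"mode": "fallback"},
        "targets": [
            {"virtual_key": "primary-key"},
            {"virtual_key": "backup-key"}
        ]
    }
)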
Problem: Model not found or unsupported model errors
Solution: Check that your model ID is correct for the provider:
# OpenAI models
model_id = "gpt-4o"  # Correct
model_id = "gpt-4-turbo"  # Correct

# Anthropic models
model_id = "claude-3-5-sonnet-20241022"  # Correct
model_id = "claude-3-sonnet"  # Wrong format

# Use provider-specific model IDs, not generic names
Problem: Not seeing traces or logs in Portkey dashboard
Solution: Verify your requests are going through Portkey:
# Check that base_url is set correctly
print(model.client.base_url)  # Should be https://api.portkey.ai/v1

# Add trace IDs for easier debugging
headers = createHeaders(
    trace_id="debug-session-123",
    metadata={"debug": "true"}
)
Also check the Logs section in your Portkey dashboard and filter by your metadata.

Frequently Asked Questions

Q: What does Portkey add to Strands Agents?
Portkey adds production-readiness to Strands Agents through comprehensive observability (traces, logs, metrics), reliability features (fallbacks, retries, caching), and access to 1,600+ LLMs through a unified interface. This makes it easier to debug, optimize, and scale your agent applications, all while preserving Strands Agents’ strong type safety.

Q: Can I use Portkey with an existing Strands Agents application?
Yes! Portkey integrates seamlessly with existing Strands Agents applications. You just need to replace your client initialization code with the Portkey-enabled version. The rest of your agent code remains unchanged and continues to benefit from Strands Agents’ strong typing.

Q: Does Portkey support all Strands Agents features?
Portkey supports all Strands Agents features, including tool use, multi-agent systems, and more. It adds observability and reliability without limiting any of the framework’s functionality.

Q: Can I trace a workflow that spans multiple agents?
Yes, Portkey allows you to use a consistent trace_id across multiple agents and requests to track the entire workflow. This is especially useful for multi-agent systems where you want to understand the full execution path.

Q: How do I filter logs and traces for specific agents?
Portkey allows you to add custom metadata to your agent runs, which you can then use for filtering. Add fields like agent_name, agent_type, or session_id to easily find and analyze specific agent executions.

Q: Do I keep using my own provider API keys?
Yes! Portkey uses your own API keys for the various LLM providers. It securely stores them as virtual keys, allowing you to easily manage and rotate keys without changing your code.

Next Steps

Now that you have Portkey integrated with your Strands agents:
  1. Monitor your agents in the Portkey dashboard to understand their behavior
  2. Set up fallbacks for critical production agents using multiple providers
  3. Add custom metadata to track different agent types or user segments
  4. Configure budgets and alerts if you’re deploying multiple agents
  5. Explore advanced routing to optimize for cost, latency, or quality