Strands Agents is a simple-to-use agent framework built by AWS.

Portkey enhances Strands Agents with production-readiness features, turning your experimental agents into robust systems by providing:

  • Complete observability of every agent step, tool use, and interaction
  • Built-in reliability with fallbacks, retries, and load balancing
  • Cost tracking and optimization to manage your AI spend
  • Access to 200+ LLMs through a single integration
  • Guardrails to keep agent behavior safe and compliant
  • Version-controlled prompts for consistent agent performance

Strands Agents Documentation

Learn more about Strands Agents’ core concepts and features

Quickstart: Install

pip install -U strands-agents strands-agents-tools openai portkey-ai
Quickstart: Configure

Instantiate your Strands OpenAIModel with Portkey:

from strands.models.openai import OpenAIModel
from portkey_ai import PORTKEY_GATEWAY_URL

model = OpenAIModel(
  client_args={"api_key": "YOUR_PORTKEY_API_KEY", "base_url": PORTKEY_GATEWAY_URL},
  model_id="gpt-4o",
  params={"temperature": 0.7}
)
Quickstart: Run

from strands import Agent
from strands_tools import calculator

agent = Agent(model=model, tools=[calculator])
response = agent("What is 2+2?")
print(response)

Integration

Portkey works out of the box with Strands Agents: because Portkey offers end-to-end support for the OpenAI API, you can import the OpenAI model class inside Strands, point its base URL at the Portkey Gateway, and unlock every Portkey feature. Here’s how:

Portkey Setup

First, let’s set up your provider keys and settings on Portkey, which you can later use in Strands with your Portkey API key.

Create Provider

Go to Virtual Keys in the Portkey App to add your AI provider key and copy the virtual key ID.

Create Config

Go to Configs in the Portkey App, create a new config that uses your virtual key, then save the Config ID.

Create API Key

Go to API Keys in the Portkey App to generate a new API key and attach your Config as the default routing.

That’s it! Your Portkey provider, Config, and API key are ready; next, point your Strands Agents at them.

Strands Setup

Now, let’s set up Strands Agents to use the Portkey API key we just created.

Install Packages

pip install -U strands-agents strands-agents-tools openai portkey-ai
Configure Portkey Client

When you instantiate the OpenAIModel, set base_url to Portkey’s Gateway URL and pass your Portkey API key as the client’s api_key.

from strands.models.openai import OpenAIModel
from portkey_ai import PORTKEY_GATEWAY_URL

portkey_model = OpenAIModel(
    client_args={
        "api_key": "YOUR_PORTKEY_API_KEY",
        "base_url": PORTKEY_GATEWAY_URL
    },
    model_id="gpt-4o",
    params={
        "temperature": 0.7
    }
)

That’s it! Every request from your Strands agent now routes through Portkey, unlocking all of its features.

View the Log

Portkey logs all of your Strands requests in the Logs dashboard.
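To make those logs easier to filter in the dashboard, you can tag each request with custom metadata via createHeaders. A minimal sketch; the keys and values below are illustrative placeholders (Portkey treats _user as a special key for user-level analytics):

```python
# Illustrative metadata for filtering logs in the Portkey dashboard.
# "_user" is Portkey's special key for user-level analytics; the other
# keys are arbitrary placeholders you can define yourself.
log_metadata = {
    "_user": "user-123",
    "environment": "production",
    "agent": "strands-calculator",
}

# Pass it when building headers, e.g.:
#   createHeaders(api_key="YOUR_PORTKEY_API_KEY", metadata=log_metadata)
# and supply the result as client_args["default_headers"].
```

Every log entry tagged this way can then be sliced by those metadata fields in the Logs dashboard.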

End-to-end Example

from strands import Agent
from strands.models.openai import OpenAIModel
from strands_tools import calculator
from portkey_ai import PORTKEY_GATEWAY_URL

model = OpenAIModel(
    client_args={
        "api_key": "YOUR_PORTKEY_API_KEY",
        "base_url": PORTKEY_GATEWAY_URL
    },
    model_id="gpt-4o",
    params={
        "max_tokens": 1000,
        "temperature": 0.7,
    }
)

agent = Agent(model=model, tools=[calculator])
response = agent("What is 2+2?")
print(response)

We’ve demonstrated a simple working integration between Portkey & Strands. Check below for all the advanced functionalities Portkey offers for your Strands Agents.

Production Features

1. Enhanced Observability

Portkey provides comprehensive observability for your Strands agents, helping you understand exactly what’s happening during each execution.

Traces provide a hierarchical view of your agent’s execution, showing the sequence of LLM calls, tool invocations, and state transitions.

from strands import Agent
from strands.models.openai import OpenAIModel
from strands_tools import calculator
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

model = OpenAIModel(
    client_args={
        "api_key": "YOUR_PORTKEY_API_KEY",
        "base_url": PORTKEY_GATEWAY_URL,
        "default_headers": createHeaders(trace_id="strands")
    },
    model_id="gpt-4o",
    params={
        "max_tokens": 1000,
        "temperature": 0.7,
    }
)

agent = Agent(model=model, tools=[calculator])
response = agent("What is 2+2?")
print(response)

2. Reliability - Keep Your Agents Running Smoothly

When running agents in production, things can go wrong: API rate limits, network issues, or provider outages. Portkey’s reliability features keep your agents running smoothly even when problems occur.

It’s simple to enable fallbacks in your Strands Agents: use a Portkey Config, attached either at runtime or directly to your Portkey API key. Here’s an example of attaching a Config at runtime:

from strands import Agent
from strands.models.openai import OpenAIModel
from strands_tools import calculator
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

model = OpenAIModel(
    client_args={
        "api_key": "YOUR_PORTKEY_API_KEY",
        "base_url": PORTKEY_GATEWAY_URL,
        "default_headers": createHeaders(
            config={
                "strategy": {
                    "mode": "fallback",
                    "on_status_codes": [429]
                },
                "targets": [
                    { "virtual_key": "azure-81fddb" },
                    { "virtual_key": "open-ai-key-66a67d" }
                ]
            }
        )
    },
    model_id="gpt-4o",
    params={
        "max_tokens": 1000,
        "temperature": 0.7,
    }
)

agent = Agent(model=model, tools=[calculator])
response = agent("What is 2+2?")
print(response)

With this configuration, requests go to the Azure deployment first; if it returns a 429 (rate limit), Portkey automatically retries the request against the OpenAI key, so your agent keeps operating.

3. Guardrails for Safe Agents

Guardrails ensure your Strands agents operate safely and respond appropriately in all situations.

Why Use Guardrails?

Strands agents can experience various failure modes:

  • Generating harmful or inappropriate content
  • Leaking sensitive information like PII
  • Hallucinating incorrect information
  • Generating outputs in incorrect formats

Portkey’s guardrails can:

  • Detect and redact PII in both inputs and outputs
  • Filter harmful or inappropriate content
  • Validate response formats against schemas
  • Check for hallucinations against ground truth
  • Apply custom business logic and rules
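Guardrails attach through the same Config mechanism used for routing. A sketch, assuming you have already created guardrails in the Portkey App (the IDs below are placeholders):

```python
# Config that runs guardrail checks on both the request and the response.
# The guardrail IDs are placeholders for ones created in the Portkey App.
guardrails_config = {
    "input_guardrails": ["guardrails-id-xxx"],
    "output_guardrails": ["guardrails-id-yyy"],
}

# Pass via createHeaders(config=guardrails_config)
# in client_args["default_headers"], as in the earlier examples.
```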

Learn More About Guardrails

Explore Portkey’s guardrail features to enhance agent safety

4. Model Interoperability

Strands supports multiple LLM providers, and Portkey extends this capability by providing access to over 200 LLMs through a unified interface. You can easily switch between different models without changing your core agent logic:

from strands import Agent
from strands.models.openai import OpenAIModel
from strands_tools import calculator
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

model = OpenAIModel(
    client_args={
        "api_key": "YOUR_PORTKEY_API_KEY",
        "base_url": PORTKEY_GATEWAY_URL,
        "default_headers": createHeaders(
            provider="anthropic",
            api_key="ANTHROPIC_API_KEY"
        )
    },
    model_id="claude-3-7-sonnet-latest",
    params={
        "max_tokens": 1000,
        "temperature": 0.7,
    }
)

agent = Agent(model=model, tools=[calculator])
response = agent("What is 2+2?")
print(response)

Portkey provides access to LLMs from providers including:

  • OpenAI (GPT-4o, GPT-4 Turbo, etc.)
  • Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
  • Mistral AI (Mistral Large, Mistral Medium, etc.)
  • Google Vertex AI (Gemini 1.5 Pro, etc.)
  • Cohere (Command, Command-R, etc.)
  • AWS Bedrock (Claude, Titan, etc.)
  • Local/Private Models

Supported Providers

See the full list of LLM providers supported by Portkey

Enterprise Governance

Why Enterprise Governance? If you are using Strands inside your organization, you need to consider several governance aspects:

  • Cost Management: Controlling and tracking AI spending across teams
  • Access Control: Managing which teams can use specific models
  • Usage Analytics: Understanding how AI is being used across the organization
  • Security & Compliance: Maintaining enterprise security standards
  • Reliability: Ensuring consistent service across all users

1. Create a Virtual Key

Define budget and rate limits with a Virtual Key in the Portkey App.

For SSO/SCIM setup, see @[product/enterprise-offering/org-management/sso.mdx] and @[product/enterprise-offering/org-management/scim/scim.mdx].

2. Create a Config

Configure routing, fallbacks, and overrides.

3. Create an API Key

Assign scopes and metadata defaults.

4. Deploy & Monitor

Distribute keys and track usage in the Portkey dashboard.

View audit logs: @[product/enterprise-offering/audit-logs.mdx].

Troubleshooting

Contact & Support

Frequently Asked Questions

Resources