Strands Agents
Use Portkey with AWS’s Strands Agents to take your AI Agents to production
Strands Agents is a simple-to-use agent framework built by AWS.
Portkey enhances Strands Agents with production-readiness features, turning your experimental agents into robust systems by providing:
- Complete observability of every agent step, tool use, and interaction
- Built-in reliability with fallbacks, retries, and load balancing
- Cost tracking and optimization to manage your AI spend
- Access to 200+ LLMs through a single integration
- Guardrails to keep agent behavior safe and compliant
- Version-controlled prompts for consistent agent performance
Strands Agents Documentation
Learn more about Strands Agents’ core concepts and features
Quickstart: Install
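Assuming the standard PyPI package names, Strands ships its OpenAI-compatible model provider as an extra:

```bash
pip install -U 'strands-agents[openai]'
```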
Quickstart: Configure
Instantiate your Strands `OpenAIModel` with Portkey:
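A minimal sketch, assuming your Portkey API key is stored in the `PORTKEY_API_KEY` environment variable; Strands passes `client_args` through to the underlying OpenAI client:

```python
import os

from strands.models.openai import OpenAIModel

# Point the OpenAI-compatible client at Portkey's gateway and authenticate
# with your Portkey API key instead of a raw provider key.
model = OpenAIModel(
    client_args={
        "api_key": os.environ["PORTKEY_API_KEY"],
        "base_url": "https://api.portkey.ai/v1",
    },
    model_id="gpt-4o",  # resolved by the provider attached to your Portkey key
    params={"temperature": 0.7},
)
```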
Quickstart: Run
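Then hand the model to a Strands `Agent` and invoke it; every call now routes through Portkey:

```python
from strands import Agent

agent = Agent(model=model)
response = agent("Tell me a fun fact about AI gateways.")
print(response)
```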
Integration
Portkey works out of the box with Strands Agents and supports the full set of Portkey features, thanks to our end-to-end support for the OpenAI API. You can directly import the OpenAI model class inside Strands, set the base URL to the Portkey Gateway URL, and unlock all of Portkey’s functionality. Here’s how:
Portkey Setup
First, let’s set up your provider keys and settings on Portkey, which you can later use in Strands via your Portkey API key.
Create Provider
Go to Virtual Keys in the Portkey App to add your AI provider key and copy the virtual key ID.
Create Config
Go to Configs in the Portkey App, create a new config that uses your virtual key, then save the Config ID.
Create API Key
Go to API Keys in the Portkey App to generate a new API key and attach your Config as the default routing.
That’s it! Your Portkey API key now carries your provider credentials and default routing, ready to use from Strands.
Strands Setup
Now, let’s set up Strands Agents to use the Portkey API key we just created.
Install Packages
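If you followed the quickstart, these are already installed; otherwise:

```bash
pip install -U 'strands-agents[openai]'
```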
Configure Portkey Client
When you instantiate the `OpenAIModel`, set the `base_url` to Portkey’s Gateway URL and pass your Portkey API key directly as the main API key.
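A sketch of that setup; the `x-portkey-config` header is optional if a Config is already attached to your API key, and `pc-strands-xxxxx` is a placeholder Config ID:

```python
import os

from strands.models.openai import OpenAIModel

model = OpenAIModel(
    client_args={
        "api_key": os.environ["PORTKEY_API_KEY"],  # Portkey API key, not a provider key
        "base_url": "https://api.portkey.ai/v1",   # Portkey Gateway URL
        "default_headers": {
            # Optional: override the Config attached to your API key.
            "x-portkey-config": "pc-strands-xxxxx",  # placeholder Config ID
        },
    },
    model_id="gpt-4o",
)
```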
That’s it! With this, you unlock all Portkey functionality for use with your Strands Agents!
View the Log
Portkey logs all of your Strands requests in the Logs dashboard.
End-to-end Example
We’ve demonstrated a simple working integration between Portkey & Strands; the sketch below pulls it together end to end. The sections that follow cover the advanced functionality Portkey offers for your Strands Agents.
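A complete, runnable sketch for reference; the weather tool is a toy stand-in for a real data source, and the key and model IDs follow the setup above:

```python
import os

from strands import Agent, tool
from strands.models.openai import OpenAIModel

@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""
    return f"It is sunny and 22°C in {city}."

model = OpenAIModel(
    client_args={
        "api_key": os.environ["PORTKEY_API_KEY"],
        "base_url": "https://api.portkey.ai/v1",
    },
    model_id="gpt-4o",
)

agent = Agent(model=model, tools=[get_weather])
print(agent("What's the weather like in Paris right now?"))
```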
Production Features
1. Enhanced Observability
Portkey provides comprehensive observability for your Strands agents, helping you understand exactly what’s happening during each execution.
Traces provide a hierarchical view of your agent’s execution, showing the sequence of LLM calls, tool invocations, and state transitions.
Portkey logs every interaction with LLMs, including:
- Complete request and response payloads
- Latency and token usage metrics
- Cost calculations
- Tool calls and function executions
All logs can be filtered by metadata, trace IDs, models, and more, making it easy to debug specific agent runs.
Portkey provides built-in dashboards that help you:
- Track cost and token usage across all agent runs
- Analyze performance metrics like latency and success rates
- Identify bottlenecks in your agent workflows
- Compare different agent configurations and LLMs
You can filter and segment all metrics by custom metadata to analyze specific agent types, user groups, or use cases.
Add custom metadata to your Strands calls to enable powerful filtering and segmentation:
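One way to attach it is via Portkey’s `x-portkey-metadata` header, which takes a JSON string; apart from the reserved `_user` key, the keys below are example dimensions you can choose yourself:

```python
import json
import os

from strands.models.openai import OpenAIModel

model = OpenAIModel(
    client_args={
        "api_key": os.environ["PORTKEY_API_KEY"],
        "base_url": "https://api.portkey.ai/v1",
        "default_headers": {
            "x-portkey-trace-id": "strands-support-agent",  # groups related logs into one trace
            "x-portkey-metadata": json.dumps({
                "_user": "user_123",          # reserved key for per-user analytics
                "agent_type": "support",      # example custom dimension
                "environment": "production",  # example custom dimension
            }),
        },
    },
    model_id="gpt-4o",
)
```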
This metadata can be used to filter logs, traces, and metrics on the Portkey dashboard, allowing you to analyze specific agent runs, users, or environments.
2. Reliability - Keep Your Agents Running Smoothly
When running agents in production, things can go wrong - API rate limits, network issues, or provider outages. Portkey’s reliability features ensure your agents keep running smoothly even when problems occur.
It’s simple to enable fallback in your Strands Agents by using a Portkey Config that you can attach at runtime or directly to your Portkey API key. Here’s an example of attaching a Config at runtime:
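A sketch of an inline fallback Config; `azure-openai-vk` and `openai-vk` are placeholders for your own virtual keys:

```python
import json
import os

from strands.models.openai import OpenAIModel

# Try the Azure deployment first; on failure, retry the request on OpenAI's gpt-4o.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "azure-openai-vk"},                                    # primary
        {"virtual_key": "openai-vk", "override_params": {"model": "gpt-4o"}},  # backup
    ],
}

model = OpenAIModel(
    client_args={
        "api_key": os.environ["PORTKEY_API_KEY"],
        "base_url": "https://api.portkey.ai/v1",
        "default_headers": {"x-portkey-config": json.dumps(fallback_config)},
    },
    model_id="gpt-4o",
)
```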
This configuration will automatically try GPT-4o on OpenAI if the Azure deployment fails, ensuring your agent can continue operating.
Automatic Retries
Handles temporary failures automatically. If an LLM call fails, Portkey retries the same request up to the specified number of times - perfect for rate limits or network blips.
Request Timeouts
Prevent your agents from hanging. Set timeouts to ensure you get responses (or can fail gracefully) within your required timeframes.
Conditional Routing
Send different requests to different providers. Route complex reasoning to GPT-4, creative tasks to Claude, and quick responses to Gemini based on your needs.
Fallbacks
Keep running even if your primary provider fails. Automatically switch to backup providers to maintain availability.
Load Balancing
Spread requests across multiple API keys or providers. Great for high-volume agent operations and staying within rate limits; see the combined Config sketch below.
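These knobs all live in the same Config object. A sketch combining retries, a request timeout, and weighted load balancing (the virtual keys are placeholders):

```python
# Retry up to 3 times, cap each request at 10 seconds, and split traffic
# 70/30 across two keys to stay under per-key rate limits.
reliability_config = {
    "retry": {"attempts": 3},
    "request_timeout": 10000,  # milliseconds
    "strategy": {"mode": "loadbalance"},
    "targets": [
        {"virtual_key": "openai-key-1-vk", "weight": 0.7},
        {"virtual_key": "openai-key-2-vk", "weight": 0.3},
    ],
}
```

Attach it exactly like the fallback Config above: inline via the `x-portkey-config` header, or saved as a Config in the Portkey app and referenced by its ID.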
3. Guardrails for Safe Agents
Guardrails ensure your Strands agents operate safely and respond appropriately in all situations.
Why Use Guardrails?
Strands agents can experience various failure modes:
- Generating harmful or inappropriate content
- Leaking sensitive information like PII
- Hallucinating incorrect information
- Generating outputs in incorrect formats
Portkey’s guardrails can (see the Config sketch after this list):
- Detect and redact PII in both inputs and outputs
- Filter harmful or inappropriate content
- Validate response formats against schemas
- Check for hallucinations against ground truth
- Apply custom business logic and rules
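Guardrails attach through the same Config mechanism; the guardrail IDs below are placeholders for checks you create in the Portkey app:

```python
# Guardrail IDs come from the Guardrails page in the Portkey app.
guarded_config = {
    "input_guardrails": ["pii-detector-id"],     # checks run on the request
    "output_guardrails": ["content-filter-id"],  # checks run on the response
}
```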
Learn More About Guardrails
Explore Portkey’s guardrail features to enhance agent safety
4. Model Interoperability
Strands supports multiple LLM providers, and Portkey extends this capability by providing access to over 200 LLMs through a unified interface. You can easily switch between different models without changing your core agent logic:
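For instance, a sketch of pointing the same agent at Claude through a Portkey virtual key (the key ID and model name are illustrative):

```python
import os

from strands import Agent
from strands.models.openai import OpenAIModel

# Same OpenAI-compatible interface; only the virtual key and model_id change.
claude_model = OpenAIModel(
    client_args={
        "api_key": os.environ["PORTKEY_API_KEY"],
        "base_url": "https://api.portkey.ai/v1",
        "default_headers": {"x-portkey-virtual-key": "anthropic-vk"},  # placeholder
    },
    model_id="claude-3-5-sonnet-20241022",
)

agent = Agent(model=claude_model)  # agent logic is unchanged
```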
Portkey provides access to LLMs from providers including:
- OpenAI (GPT-4o, GPT-4 Turbo, etc.)
- Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
- Mistral AI (Mistral Large, Mistral Medium, etc.)
- Google Vertex AI (Gemini 1.5 Pro, etc.)
- Cohere (Command, Command-R, etc.)
- AWS Bedrock (Claude, Titan, etc.)
- Local/Private Models
Supported Providers
See the full list of LLM providers supported by Portkey
Enterprise Governance
Why Enterprise Governance? If you are using Strands inside your organization, you need to consider several governance aspects:
- Cost Management: Controlling and tracking AI spending across teams
- Access Control: Managing which teams can use specific models
- Usage Analytics: Understanding how AI is being used across the organization
- Security & Compliance: Maintaining enterprise security standards
- Reliability: Ensuring consistent service across all users
1. Create a Virtual Key
Define budget and rate limits with a Virtual Key in the Portkey App.
For SSO/SCIM setup, see the SSO (product/enterprise-offering/org-management/sso) and SCIM (product/enterprise-offering/org-management/scim/scim) docs.
2. Create a Config
Configure routing, fallbacks, and overrides.
3. Create an API Key
Assign scopes and metadata defaults.
4. Deploy & Monitor
Distribute keys and track usage in the Portkey dashboard.
For an audit trail of governance actions, see the Audit Logs docs (product/enterprise-offering/audit-logs).
Contact & Support
Enterprise SLAs & Support
Get dedicated SLA-backed support.
Portkey Community
Join our forums and Slack channel.