Strands Agents
Use Portkey with AWS’s Strands Agents to take your AI Agents to production
Strands Agents is a simple-to-use agent framework built by AWS. Portkey enhances Strands Agents with production-grade observability, reliability, and multi-provider support—all through a single integration that requires no changes to your existing agent logic.
What you get with this integration:
- Complete observability of every agent step, tool use, and LLM interaction
- Built-in reliability with automatic fallbacks, retries, and load balancing
- 1,600+ LLMs accessible through the same OpenAI-compatible interface
- Production monitoring with traces, logs, and real-time metrics
- Zero code changes to your existing Strands agent implementations
Strands Agents Documentation
Learn more about Strands Agents’ core concepts and features
Quick Start
Install Dependencies
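Install the Strands SDK with its OpenAI-compatible model provider (the `[openai]` extra is assumed here to pull in that provider's dependencies):

```bash
pip install 'strands-agents[openai]'
```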
Replace Your Model Initialization
Instead of initializing your OpenAI model directly:
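For reference, a typical direct setup looks something like this; the API key and model name are illustrative:

```python
from strands.models.openai import OpenAIModel

model = OpenAIModel(
    client_args={
        "api_key": "sk-...",  # your raw OpenAI key
    },
    model_id="gpt-4o",
    params={"temperature": 0.7},
)
```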
Initialize it through Portkey’s gateway:
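To route through Portkey, point `base_url` at Portkey's gateway and pass your Portkey API key instead of a provider key; everything else stays the same:

```python
from strands.models.openai import OpenAIModel

model = OpenAIModel(
    client_args={
        "api_key": "YOUR_PORTKEY_API_KEY",        # Portkey API key, not a provider key
        "base_url": "https://api.portkey.ai/v1",  # Portkey's OpenAI-compatible gateway
    },
    model_id="gpt-4o",
    params={"temperature": 0.7},
)
```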
Use Your Agent Normally
Your agent works exactly the same way, but now all interactions are automatically logged, traced, and monitored in your Portkey dashboard.
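A minimal sketch of invoking the agent, reusing the `model` from the previous step:

```python
from strands import Agent

agent = Agent(model=model)
response = agent("Explain what an agentic workflow is in two sentences.")
print(response)
```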
How the Integration Works
The integration leverages Strands' flexible `client_args` parameter, which passes any arguments directly to the OpenAI client constructor. By setting `base_url` to Portkey's gateway, all requests route through Portkey while maintaining full compatibility with the OpenAI API.
This means you get all of Portkey’s features without any changes to your agent logic, tool usage, or response handling.
Setting Up Portkey
Before using the integration, you need to configure your AI providers and create a Portkey API key.
Add Your AI Provider Keys
Go to Virtual Keys in the Portkey dashboard and add your actual AI provider keys (OpenAI, Anthropic, etc.). Each provider key gets a virtual key ID that you’ll reference in configs.
Create a Configuration
Go to Configs to define how requests should be routed. A basic config looks like:
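A minimal sketch of such a config, shown here as a Python dict (you would save the equivalent JSON in the Configs UI; the virtual key ID is a placeholder):

```python
# Route all requests to one provider via its virtual key, with basic retries.
basic_config = {
    "virtual_key": "openai-virtual-key-xxx",  # placeholder virtual key ID
    "retry": {"attempts": 3},
}
```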
For production setups, you can add fallbacks, load balancing, and conditional routing here.
Generate Your Portkey API Key
Go to API Keys to create a new API key. Attach your config as the default routing config, and you’ll get an API key that routes to your configured providers.
Complete Integration Example
Here’s a full example showing how to set up a Strands agent with Portkey integration:
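The tools below are illustrative stubs, and the Portkey API key is read from the environment:

```python
import os

from strands import Agent, tool
from strands.models.openai import OpenAIModel


@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city (stubbed for this example)."""
    return f"It's sunny and 22°C in {city}."


@tool
def add_numbers(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b


# Route the agent's LLM traffic through Portkey's gateway.
model = OpenAIModel(
    client_args={
        "api_key": os.environ["PORTKEY_API_KEY"],
        "base_url": "https://api.portkey.ai/v1",
    },
    model_id="gpt-4o",
    params={"temperature": 0.7},
)

agent = Agent(
    model=model,
    tools=[get_weather, add_numbers],
    system_prompt="You are a helpful assistant. Use tools when appropriate.",
)

response = agent("What's the weather in Paris? Also, what is 21 + 21?")
print(response)
```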
The agent will automatically use both tools as needed, and every step will be logged in your Portkey dashboard with full request/response details, timing, and token usage.
Production Features
1. Enhanced Observability
Portkey provides comprehensive visibility into your agent’s behavior without requiring any code changes.
Track the complete execution flow of your agents with hierarchical traces that show:
- LLM calls: Every request to language models with full payloads
- Tool invocations: Which tools were called, with what parameters, and their responses
- Decision points: How the agent chose between different tools or approaches
- Performance metrics: Latency, token usage, and cost for each step
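For example, a shared trace ID can be attached to every request the agent makes via Portkey's `x-portkey-trace-id` header (the ID below is a placeholder):

```python
from strands import Agent
from strands.models.openai import OpenAIModel

model = OpenAIModel(
    client_args={
        "api_key": "YOUR_PORTKEY_API_KEY",
        "base_url": "https://api.portkey.ai/v1",
        "default_headers": {
            "x-portkey-trace-id": "support-agent-session-42",  # placeholder trace ID
        },
    },
    model_id="gpt-4o",
)

agent = Agent(model=model)
```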
All requests from this agent will be grouped under the same trace, making it easy to analyze the complete interaction flow.
Add business context to your agent runs for better filtering and analysis:
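One way to do this is through the `x-portkey-metadata` header; the keys below are illustrative, and `_user` is Portkey's reserved field for user attribution:

```python
import json

from strands.models.openai import OpenAIModel

model = OpenAIModel(
    client_args={
        "api_key": "YOUR_PORTKEY_API_KEY",
        "base_url": "https://api.portkey.ai/v1",
        "default_headers": {
            # Header values must be strings, so the metadata dict is JSON-encoded.
            "x-portkey-metadata": json.dumps({
                "_user": "user-123",            # Portkey's reserved user field
                "agent_type": "billing",        # illustrative custom dimension
                "environment": "production",
            }),
        },
    },
    model_id="gpt-4o",
)
```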
This metadata appears in your Portkey dashboard, allowing you to filter logs and analyze performance by user type, session, or any custom dimension.
Monitor your agents in production with built-in dashboards that track:
- Success rates: Percentage of successful agent completions
- Average latency: Response times across different agent types
- Token usage: Track consumption and costs across models
- Error patterns: Common failure modes and their frequency
All metrics can be segmented by the metadata you provide, giving you insights like “premium user agents have 15% higher success rates” or “billing department queries take 2x longer on average.”
2. Reliability & Fallbacks
When running agents in production, things can go wrong: API rate limits, network issues, or provider outages. Portkey's reliability features ensure your agents keep running smoothly even when problems occur.
It’s simple to enable fallback in your Strands Agents by using a Portkey Config that you can attach at runtime or directly to your Portkey API key. Here’s an example of attaching a Config at runtime:
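A sketch of attaching a config at runtime via the `x-portkey-config` header, which accepts a saved config ID (placeholder below) or inline config JSON:

```python
from strands.models.openai import OpenAIModel

model = OpenAIModel(
    client_args={
        "api_key": "YOUR_PORTKEY_API_KEY",
        "base_url": "https://api.portkey.ai/v1",
        # Reference a config saved in the dashboard by its ID (placeholder);
        # a JSON-encoded inline config works here as well.
        "default_headers": {"x-portkey-config": "pc-my-config-xxx"},
    },
    model_id="gpt-4o",
)
```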
Configure multiple providers so your agents keep working even when one provider fails:
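A sketch of such a fallback config, with placeholder virtual key IDs and an illustrative Claude model name:

```python
# Fall back from OpenAI to Anthropic on rate-limit errors.
fallback_config = {
    "strategy": {"mode": "fallback", "on_status_codes": [429]},
    "targets": [
        {"virtual_key": "openai-virtual-key-xxx"},
        {
            "virtual_key": "anthropic-virtual-key-xxx",
            # Rewrite the model name for the fallback provider.
            "override_params": {"model": "claude-3-5-sonnet-20240620"},
        },
    ],
}
```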
If OpenAI returns a rate limit error (429), Portkey automatically retries the request with Anthropic's Claude, using the model you map in `override_params`.
Distribute requests across multiple API keys to stay within rate limits:
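A sketch of a load-balancing config with two OpenAI virtual keys (placeholder IDs):

```python
# Split traffic 70/30 across two OpenAI virtual keys.
loadbalance_config = {
    "strategy": {"mode": "loadbalance"},
    "targets": [
        {"virtual_key": "openai-key-1-xxx", "weight": 0.7},
        {"virtual_key": "openai-key-2-xxx", "weight": 0.3},
    ],
}
```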
Requests will be distributed 70/30 across your two OpenAI keys, helping you maximize throughput without hitting individual key limits.
Route requests to different providers/models based on custom logic (like metadata, input content, or user attributes) using Portkey’s Conditional Routing feature.
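A hedged sketch of what a conditional config can look like, keying off request metadata (target names and virtual key IDs are placeholders):

```python
# Route paid users to GPT-4o and everyone else to a cheaper model,
# based on the metadata sent with each request.
conditional_config = {
    "strategy": {
        "mode": "conditional",
        "conditions": [
            {"query": {"metadata.user_plan": {"$eq": "paid"}}, "then": "strong-model"},
        ],
        "default": "cheap-model",
    },
    "targets": [
        {"name": "strong-model", "virtual_key": "openai-virtual-key-xxx"},
        {
            "name": "cheap-model",
            "virtual_key": "openai-virtual-key-xxx",
            "override_params": {"model": "gpt-4o-mini"},
        },
    ],
}
```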
See the Conditional Routing documentation for full guidance and advanced examples.
3. LLM Interoperability
Access 1,600+ models through the same Strands interface by changing just the provider configuration:
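For example, pointing the same `OpenAIModel` at an Anthropic virtual key (placeholder ID, illustrative model name):

```python
from strands.models.openai import OpenAIModel

# Same Strands code path; only the virtual key and model ID change.
claude_model = OpenAIModel(
    client_args={
        "api_key": "YOUR_PORTKEY_API_KEY",
        "base_url": "https://api.portkey.ai/v1",
        "default_headers": {"x-portkey-virtual-key": "anthropic-virtual-key-xxx"},
    },
    model_id="claude-3-5-sonnet-20240620",
)
```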
Portkey provides access to LLMs from providers including:
- OpenAI (GPT-4o, GPT-4 Turbo, etc.)
- Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
- Mistral AI (Mistral Large, Mistral Medium, etc.)
- Google Vertex AI (Gemini 1.5 Pro, etc.)
- Cohere (Command, Command-R, etc.)
- AWS Bedrock (Claude, Titan, etc.)
- Local/Private Models
Supported Providers
See the full list of LLM providers supported by Portkey
4. Guardrails for Safe Agents
Guardrails ensure your Strands agents operate safely and respond appropriately in all situations.
Why Use Guardrails?
Strands agents can experience various failure modes:
- Generating harmful or inappropriate content
- Leaking sensitive information like PII
- Hallucinating incorrect information
- Generating outputs in incorrect formats
Portkey’s guardrails can:
- Detect and redact PII in both inputs and outputs
- Filter harmful or inappropriate content
- Validate response formats against schemas
- Check for hallucinations against ground truth
- Apply custom business logic and rules
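As a sketch, guardrails created in the Portkey UI can be referenced from a routing config by ID; the IDs below are placeholders, and the exact fields are documented in Portkey's guardrails guide:

```python
# Attach input and output guardrails to a routing config.
guarded_config = {
    "virtual_key": "openai-virtual-key-xxx",
    "input_guardrails": ["pii-redaction-guardrail-xxx"],    # placeholder ID
    "output_guardrails": ["content-filter-guardrail-xxx"],  # placeholder ID
}
```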
Learn More About Guardrails
Explore Portkey’s guardrail features to enhance agent safety
Advanced Configuration
Configure different behavior for development, staging, and production:
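One pattern is to keep one saved config per environment and select it at startup; the config IDs below are placeholders:

```python
import os

from strands.models.openai import OpenAIModel

# Map each environment to a saved Portkey config ID (placeholders).
PORTKEY_CONFIGS = {
    "development": "pc-dev-xxx",
    "staging": "pc-staging-xxx",
    "production": "pc-prod-xxx",
}

env = os.environ.get("APP_ENV", "development")

model = OpenAIModel(
    client_args={
        "api_key": os.environ["PORTKEY_API_KEY"],
        "base_url": "https://api.portkey.ai/v1",
        "default_headers": {"x-portkey-config": PORTKEY_CONFIGS[env]},
    },
    model_id="gpt-4o",
)
```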
Override configuration for specific requests without changing the model:
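Because Strands binds client settings when the model is constructed, a simple way to do this is a second model (and agent) carrying its own config header; the config ID is a placeholder:

```python
import os

from strands import Agent
from strands.models.openai import OpenAIModel

# A dedicated model for high-priority requests, routed by its own config.
priority_model = OpenAIModel(
    client_args={
        "api_key": os.environ["PORTKEY_API_KEY"],
        "base_url": "https://api.portkey.ai/v1",
        "default_headers": {"x-portkey-config": "pc-priority-xxx"},  # placeholder
    },
    model_id="gpt-4o",
)

priority_agent = Agent(model=priority_model)
```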
Enterprise Governance
If you are using Strands inside your organization, you need to consider several governance aspects:
- Cost Management: Controlling and tracking AI spending across teams
- Access Control: Managing which teams can use specific models
- Usage Analytics: Understanding how AI is being used across the organization
- Security & Compliance: Maintaining enterprise security standards
- Reliability: Ensuring consistent service across all users
Centralized Key Management
Instead of distributing raw API keys to developers, use Portkey API keys that you can control centrally:
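In this setup the only credential in application code is the Portkey API key; provider keys stay in Portkey's vault. A minimal sketch:

```python
import os

from strands.models.openai import OpenAIModel

# Developers only ever see a Portkey API key. Provider keys can be
# rotated, scoped, or revoked centrally without touching this code.
model = OpenAIModel(
    client_args={
        "api_key": os.environ["PORTKEY_API_KEY"],
        "base_url": "https://api.portkey.ai/v1",
    },
    model_id="gpt-4o",
)
```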
You can:
- Rotate provider keys without updating any code
- Set spending limits per team or API key
- Control model access (which teams can use which models)
- Monitor usage across all teams and projects
- Revoke access instantly if needed
Usage Analytics & Budgets
Track and control AI spending across your organization:
- Per-team budgets: Set monthly spending limits for different teams
- Model usage analytics: See which teams are using which models most
- Cost attribution: Understand costs by project, team, or user
- Usage alerts: Get notified when teams approach their limits
All of this works automatically with your existing Strands agents—no code changes required.
Contact & Support
Enterprise SLAs & Support
Get dedicated SLA-backed support.
Portkey Community
Join our forums and Slack channel.
Resources
Troubleshooting
Frequently Asked Questions
Next Steps
Now that you have Portkey integrated with your Strands agents:
- Monitor your agents in the Portkey dashboard to understand their behavior
- Set up fallbacks for critical production agents using multiple providers
- Add custom metadata to track different agent types or user segments
- Configure budgets and alerts if you’re deploying multiple agents
- Explore advanced routing to optimize for cost, latency, or quality