- Unified gateway for 1600+ models while keeping AgentCore’s runtime, gateway, and memory services intact
- Production telemetry with traces, logs, and metrics for every AgentCore invocation via Portkey headers and metadata
- Reliability controls (fallbacks, load balancing, timeouts) that shield your agents from provider failures
- Centralized governance over provider keys, spend, and access policies using Portkey API keys across AgentCore environments
AgentCore Developer Guide: Review AWS's toolkit for packaging and deploying runtimes, gateway tools, and memory services.
Supported Agent Frameworks
Bedrock AgentCore supports any OpenAI-compatible agent framework, and Portkey seamlessly integrates with all of them, allowing you to add production-grade observability, reliability, and multi-provider routing to your AgentCore deployments:
- Strands Agents: AWS's simple-to-use agent framework with built-in tools and memory
- LangGraph: Build stateful, multi-actor agent workflows with directed graphs
- OpenAI Agents SDK (Python): Develop complex AI agents with tools, planning, and memory in Python
- OpenAI Agents SDK (TypeScript): Build production-ready agents with OpenAI's TypeScript SDK
Each framework integration guide shows you exactly how to configure Portkey. The steps below demonstrate a generic setup that works with any of these frameworks.
Quick start
1. Create your agent with AgentCore
Start by creating your agent using the AgentCore CLI. The creation command will prompt you to choose an agent framework:
- openai-agents: OpenAI Agents SDK (Python or TypeScript)
- strands-agents: AWS Strands Agents
- langgraph: LangGraph workflows
- google-adk: Google Agent Development Kit
The generated project includes bedrock_agentcore.runtime helpers for local testing and deployment. After creation, add Portkey's SDK to enable multi-provider routing, as in the sketch below.
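To sanity-check the credentials before touching any agent code, here is a minimal sketch, assuming `pip install portkey-ai` and the PORTKEY_API_KEY you create in step 2; the model slug is a placeholder for one from your own Model Catalog:

```python
# Minimal smoke test: route one chat completion through Portkey.
import os

from portkey_ai import Portkey

portkey = Portkey(api_key=os.environ["PORTKEY_API_KEY"])

reply = portkey.chat.completions.create(
    model="@openai-prod/gpt-4o",  # placeholder provider slug + model
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)
```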
2. Set up Portkey credentials
Create your Portkey API key with a routing configuration. For production setups, add fallbacks, load balancing, and conditional routing:
- Add your AI provider keys: Go to Model Catalog → Keys in the Portkey dashboard and add your actual AI provider keys (OpenAI, Anthropic, AWS Bedrock, etc.). Each provider key gets a unique slug that you'll reference in configs.
- Create a routing configuration: Go to Configs to define how requests should be routed. A basic config looks like the sketch shown after this list.
- Generate your Portkey API key: Go to API Keys to create a new API key and attach your config as the default routing config, so the key automatically routes to your configured providers. Store it in your environment as PORTKEY_API_KEY.
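As a reference point, here is a minimal fallback config sketched as a Python dict; paste the printed JSON into the Configs editor. The provider slugs are placeholders for slugs from your own Model Catalog:

```python
# Sketch of a basic routing config: try OpenAI first, fall back to Anthropic.
import json

routing_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"override_params": {"model": "@openai-prod/gpt-4o"}},
        {"override_params": {"model": "@anthropic-prod/claude-sonnet-4"}},
    ],
    "retry": {"attempts": 3},  # retry a failing target before falling through
}
print(json.dumps(routing_config, indent=2))
```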
3. Wire Portkey into your agent
Wrap your agent runnable with BedrockAgentCoreApp and point the underlying OpenAI-compatible client at Portkey. The sketch after the following list uses the OpenAI Agents SDK, but the same pattern works with Strands, LangGraph, and other frameworks. For framework-specific examples, see our detailed integration guides:
- Strands Agents - AWS’s simple agent framework
- LangGraph - Stateful agent workflows
- OpenAI Agents (Python) - Complex agents with tools
- OpenAI Agents (TypeScript) - TypeScript agent development
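Here is a minimal sketch of that wiring with the OpenAI Agents SDK, assuming `pip install openai-agents bedrock-agentcore` and a placeholder model slug from your Model Catalog:

```python
# Sketch: an OpenAI Agents SDK agent inside an AgentCore runtime, with all
# LLM calls routed through Portkey's OpenAI-compatible gateway.
import os

from agents import Agent, Runner, set_default_openai_api, set_default_openai_client
from bedrock_agentcore.runtime import BedrockAgentCoreApp
from openai import AsyncOpenAI

# Point the Agents SDK at Portkey instead of api.openai.com.
portkey_client = AsyncOpenAI(
    base_url="https://api.portkey.ai/v1",
    api_key=os.environ["PORTKEY_API_KEY"],
)
set_default_openai_client(portkey_client, use_for_tracing=False)
set_default_openai_api("chat_completions")  # Portkey speaks Chat Completions

agent = Agent(
    name="assistant",
    instructions="You are a helpful assistant.",
    model="@openai-prod/gpt-4o",  # placeholder provider slug + model
)

app = BedrockAgentCoreApp()

@app.entrypoint
def invoke(payload):
    # AgentCore delivers the invocation payload as a dict.
    result = Runner.run_sync(agent, payload.get("prompt", "Hello"))
    return {"result": result.final_output}

if __name__ == "__main__":
    app.run()
```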
4. Deploy to AgentCore Runtime
Deploy your agent to Amazon Bedrock AgentCore Runtime. The deploy command will:
- Consolidate all your code into a zip file
- Deploy your agent to AgentCore Runtime
- Configure CloudWatch logging
- Set up environment variables (including PORTKEY_API_KEY)
If you don’t already have the required permissions, refer to IAM Permissions for AgentCore.
5. Invoke your deployed agent
Test your deployed agent with a prompt, as in the sketch below. All LLM traffic from your agent now flows through Portkey, giving you observability, reliability, and multi-provider routing. Check the Portkey dashboard to see traces, costs, and performance metrics.
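For example, a sketch that invokes the runtime directly through boto3's bedrock-agentcore client; the ARN below is a placeholder, so use the one printed by your deployment:

```python
# Sketch: call the deployed agent via the InvokeAgentRuntime API.
import json
import uuid

import boto3

client = boto3.client("bedrock-agentcore", region_name="us-east-1")

response = client.invoke_agent_runtime(
    agentRuntimeArn="arn:aws:bedrock-agentcore:us-east-1:111122223333:runtime/my_agent-id",
    runtimeSessionId=uuid.uuid4().hex + uuid.uuid4().hex,  # must be 33+ characters
    payload=json.dumps({"prompt": "What can you do?"}),
)
print(response["response"].read().decode())
```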
AgentCore bundles tools, memory, and runtime services. Portkey replaces only the LLM transport, so you can keep using AgentCore Gateway, Memory, and Identity features while benefiting from Portkey's routing and analytics.
Integration patterns
| Scenario | Recommended approach | Notes |
|---|---|---|
| Entire AgentCore app should use Portkey | Register a global Portkey client (as shown above) so every LLM call flows through Portkey | Works with all frameworks—see Strands, LangGraph, OpenAI Agents |
| Some requests should use native Bedrock models | Keep the global client pointing at Bedrock and wrap specific runs with a custom Portkey-backed model provider | Best for hybrid deployments mixing Bedrock and other providers |
| Different agents inside the runtime need different providers | Instantiate per-agent model objects with bespoke Portkey headers/configs (see the sketch below) | Useful for multi-tenant AgentCore applications |
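For the last scenario, here is a sketch that gives each agent its own Portkey-backed client; the model slugs and metadata values are placeholders:

```python
# Sketch: two agents in one AgentCore runtime, each with bespoke Portkey headers.
import os

from agents import Agent, OpenAIChatCompletionsModel
from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

def portkey_client(**header_kwargs) -> AsyncOpenAI:
    # One OpenAI-compatible client per agent, with its own Portkey headers.
    return AsyncOpenAI(
        base_url=PORTKEY_GATEWAY_URL,
        api_key=os.environ["PORTKEY_API_KEY"],
        default_headers=createHeaders(**header_kwargs),
    )

researcher = Agent(
    name="researcher",
    instructions="Gather background information.",
    model=OpenAIChatCompletionsModel(
        model="@anthropic-prod/claude-sonnet-4",
        openai_client=portkey_client(metadata={"agent": "researcher"}),
    ),
)

writer = Agent(
    name="writer",
    instructions="Draft the final answer.",
    model=OpenAIChatCompletionsModel(
        model="@openai-prod/gpt-4o",
        openai_client=portkey_client(metadata={"agent": "writer"}),
    ),
)
```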
Production features to enable
Observability
Attach trace IDs and metadata directly from your AgentCore entrypoint so Portkey groups every tool call, LLM exchange, and retry under a single execution record.
Reliability
Apply Portkey Configs for fallbacks, retries, load balancing, or conditional routing to keep AgentCore agents resilient to provider hiccups. You can attach the config globally via the API key or per-request via createHeaders, as the sketch below shows.
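A sketch covering both patterns, building on the quick-start client; the trace ID, metadata values, and config slug are placeholders:

```python
# Sketch: per-request trace ID, metadata, and a routing-config override.
import os

from openai import AsyncOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

client = AsyncOpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers=createHeaders(
        trace_id="agentcore-session-123",  # groups every call from one run
        metadata={"agent": "support-bot", "env": "prod"},  # filterable in logs
        config="pc-fallback-prod",  # placeholder config slug for per-request routing
    ),
)
```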
Model interoperability
Switch providers without touching your AgentCore business logic by swapping the Portkey config or provider slug (@openai-prod, @anthropic-prod, @gemini-fast, etc.). The agent definition stays unchanged.
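For instance, a sketch where the slug comes from configuration rather than code (MODEL_SLUG is an assumed environment variable):

```python
# Sketch: the agent definition never changes; only the slug does.
import os

from agents import Agent

agent = Agent(
    name="assistant",
    instructions="You are a helpful assistant.",
    model=os.environ.get("MODEL_SLUG", "@openai-prod/gpt-4o"),
)
```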
Governance & access control
Distribute Portkey API keys (not raw provider keys) to AgentCore teams, enforce spend budgets, and audit usage across every invocation emitted by the runtime.

---

Compatibility checklist
- ✅ Agent frameworks: Strands, OpenAI Agents (Python/TypeScript), LangGraph, CrewAI, Pydantic AI, Google ADK—anything that can target an OpenAI-compatible client
- ✅ AgentCore services: Runtime, Gateway, Memory, Identity all continue to work; Portkey only handles LLM transport
- ✅ MCP / A2A tools: Tool invocations remain unchanged; Portkey runs alongside AgentCore Gateway tool definitions
- ✅ Foundation models: Route to Amazon Bedrock, OpenAI, Anthropic, Google Gemini, Mistral, Cohere, or on-prem models by updating your Portkey config—no redeploy required
For best performance, deploy your Portkey gateway in the same AWS Region as your AgentCore runtime (for example, use customHost pointing at a private Portkey data plane) to minimize cross-region latency.

Next steps
- Monitor test invocations in the Portkey dashboard to validate tracing, metadata, and costs
- Attach Portkey guardrails (PII redaction, schema validation, content filters) if your AgentCore agents need compliance controls
- Expand beyond a single model by adding fallbacks or conditional routing rules in Portkey Configs
- Coordinate with AWS AgentCore Gateway to expose Portkey-observed tools for deeper analytics across both platforms

