LangGraph
Use Portkey with LangGraph to take your AI agent workflows to production
Introduction
LangGraph is a library for building stateful, multi-actor applications with LLMs, designed to make developing complex agent workflows easier. It provides a flexible framework to create directed graphs where nodes process information and edges define the flow between them.
Portkey enhances LangGraph with production-readiness features, turning your experimental agent workflows into robust systems by providing:
- Complete observability of every agent step, tool use, and state transition
- Built-in reliability with fallbacks, retries, and load balancing
- Cost tracking and optimization to manage your AI spend
- Access to 200+ LLMs through a single integration
- Guardrails to keep agent behavior safe and compliant
- Version-controlled prompts for consistent agent performance
LangGraph Official Documentation
Learn more about LangGraph’s core concepts and features
Installation & Setup
Install the required packages
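The exact package set depends on your stack; a typical starting point, assuming the OpenAI-compatible LangChain client and the Portkey SDK, is:

```bash
pip install -U langgraph langchain-openai portkey-ai
```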
Depending on your use case, you may also need additional packages:
- For search capabilities: `pip install langchain_community`
- For memory functionality: `pip install langgraph[checkpoint]`
Generate API Key
Create a Portkey API key with optional budget/rate limits from the Portkey dashboard. You can attach configurations for reliability, caching, and more to this key.
Configure LangChain with Portkey
For a simple setup, configure a LangChain ChatOpenAI instance to use Portkey:
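A minimal sketch, assuming you have a Portkey API key and a virtual key for your LLM provider (the placeholder values below are illustrative):

```python
from langchain_openai import ChatOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

llm = ChatOpenAI(
    api_key="dummy",  # placeholder; authentication happens via the Portkey headers
    base_url=PORTKEY_GATEWAY_URL,  # route all requests through Portkey's gateway
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_LLM_VIRTUAL_KEY",
    ),
    model="gpt-4o",
)
```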
What are Virtual Keys? Virtual keys in Portkey securely store your LLM provider API keys (OpenAI, Anthropic, etc.) in an encrypted vault. They allow for easier key rotation and budget management. Learn more about virtual keys here.
Basic Agent Implementation
Let’s create a simple LangGraph chatbot using Portkey. This example shows how to set up a basic conversational agent:
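A minimal sketch following the LangGraph quickstart pattern, with the LLM routed through Portkey (key values are placeholders):

```python
from typing import Annotated

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
from typing_extensions import TypedDict


class State(TypedDict):
    # message history; the add_messages reducer appends rather than overwrites
    messages: Annotated[list, add_messages]


llm = ChatOpenAI(
    api_key="dummy",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_LLM_VIRTUAL_KEY",
    ),
)


def chatbot(state: State):
    # run the LLM over the accumulated conversation
    return {"messages": [llm.invoke(state["messages"])]}


graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)
graph = graph_builder.compile()

# stream the assistant's replies for a user message
for event in graph.stream({"messages": [("user", "Hi there!")]}):
    for value in event.values():
        print(value["messages"][-1].content)
```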
This basic implementation:
- Creates a state graph with a message history
- Configures a ChatOpenAI model with Portkey
- Defines a simple chatbot node that processes messages with the LLM
- Compiles the graph and provides a streaming interface for chat
Advanced Features
1. Adding Tools to Your Agent
You can enhance your LangGraph agents with tools that let them perform actions. Here’s how to add the Tavily search tool:
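A sketch building on the `State` class and Portkey-configured `llm` from the basic example (requires `langchain_community` and a Tavily key in the environment):

```python
from langchain_community.tools.tavily_search import TavilySearchResults
from langgraph.graph import StateGraph, START
from langgraph.prebuilt import ToolNode, tools_condition

# a web-search tool the agent can invoke (reads TAVILY_API_KEY from the environment)
search_tool = TavilySearchResults(max_results=2)
tools = [search_tool]

# let the model decide when to emit tool calls
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", ToolNode(tools=tools))

# route to the tool node when a tool call is present, otherwise end the turn
graph_builder.add_conditional_edges("chatbot", tools_condition)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile()
```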
This example requires a Tavily API key for the search functionality. You can sign up for one at Tavily’s website.
2. Creating Custom Tools
You can create custom tools for your agents using the `@tool` decorator. Here’s how to create a simple multiplication tool:
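A sketch, reusing the graph-building pattern from the search example above:

```python
from langchain_core.tools import tool
from langgraph.prebuilt import ToolNode
from pydantic import BaseModel, Field


class MultiplyInput(BaseModel):
    # schema the LLM fills in when it calls the tool
    a: int = Field(description="First number to multiply")
    b: int = Field(description="Second number to multiply")


@tool("multiply", args_schema=MultiplyInput)
def multiply(a: int, b: int) -> int:
    """Multiply two numbers together."""
    return a * b


# bind the tool to the model and register it as a tool node, as before
llm_with_tools = llm.bind_tools([multiply])
graph_builder.add_node("tools", ToolNode(tools=[multiply]))
```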
This example:
- Defines a Pydantic model for the tool’s input schema
- Creates a custom multiplication tool with the `@tool` decorator
- Integrates it into LangGraph with a tool node
3. Adding Memory to Your Agent
For persistent conversations, you can add memory to your LangGraph agents:
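A sketch using LangGraph’s in-memory checkpointer (swap in a persistent backend for production):

```python
from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)

# each thread_id keeps its own persisted conversation state
config = {"configurable": {"thread_id": "user-123"}}
graph.invoke({"messages": [("user", "Remember that my name is Alex.")]}, config)
```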
The `thread_id` in the config allows you to maintain separate conversation threads for different users or contexts.
Production Features
1. Enhanced Observability
Portkey provides comprehensive observability for your LangGraph agents, helping you understand exactly what’s happening during each execution.
Traces provide a hierarchical view of your agent’s execution, showing the sequence of LLM calls, tool invocations, and state transitions.
LangGraph also offers its own tracing via LangSmith, which can be used alongside Portkey for even more detailed workflow insights.
Portkey logs every interaction with LLMs, including:
- Complete request and response payloads
- Latency and token usage metrics
- Cost calculations
- Tool calls and function executions
All logs can be filtered by metadata, trace IDs, models, and more, making it easy to debug specific agent runs.
Portkey provides built-in dashboards that help you:
- Track cost and token usage across all agent runs
- Analyze performance metrics like latency and success rates
- Identify bottlenecks in your agent workflows
- Compare different agent configurations and LLMs
You can filter and segment all metrics by custom metadata to analyze specific agent types, user groups, or use cases.
Add custom metadata to your LangGraph agent calls to enable powerful filtering and segmentation:
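For example (the metadata fields and trace ID below are illustrative):

```python
llm = ChatOpenAI(
    api_key="dummy",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_LLM_VIRTUAL_KEY",
        trace_id="research-agent-run-001",  # groups all calls in one agent run
        metadata={
            "agent_type": "research_agent",
            "environment": "production",
            "_user": "user-123",  # Portkey's reserved field for user tracking
        },
    ),
)
```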
This metadata can be used to filter logs, traces, and metrics on the Portkey dashboard, allowing you to analyze specific agent runs, users, or environments.
2. Reliability - Keep Your Agents Running Smoothly
When running agents in production, things can go wrong - API rate limits, network issues, or provider outages. Portkey’s reliability features ensure your agents keep running smoothly even when problems occur.
Enable fallback in your LangGraph agents by using a Portkey Config:
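A sketch of a fallback config passed through `createHeaders` (the virtual key names are placeholders):

```python
from langchain_openai import ChatOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

portkey_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        # tried in order: GPT-4o first, Claude as the backup
        {"virtual_key": "openai-virtual-key",
         "override_params": {"model": "gpt-4o"}},
        {"virtual_key": "anthropic-virtual-key",
         "override_params": {"model": "claude-3-5-sonnet-20240620"}},
    ],
}

llm = ChatOpenAI(
    api_key="dummy",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        config=portkey_config,
    ),
)
```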
This configuration will automatically try Claude if the GPT-4o request fails, ensuring your agent can continue operating.
Automatic Retries
Handles temporary failures automatically. If an LLM call fails, Portkey will retry the same request up to the specified number of times - perfect for rate limits or network blips.
Request Timeouts
Prevent your agents from hanging. Set timeouts to ensure you get responses (or can fail gracefully) within your required timeframes.
Conditional Routing
Send different requests to different providers. Route complex reasoning to GPT-4, creative tasks to Claude, and quick responses to Gemini based on your needs.
Fallbacks
Keep running even if your primary provider fails. Automatically switch to backup providers to maintain availability.
Load Balancing
Spread requests across multiple API keys or providers. Great for high-volume agent operations and staying within rate limits.
3. Prompting in LangGraph
Portkey’s Prompt Engineering Studio helps you create, manage, and optimize the prompts used in your LangGraph agents. Instead of hardcoding prompts or instructions, use Portkey’s prompt rendering API to dynamically fetch and apply your versioned prompts.
Manage prompts in Portkey's Prompt Library
Prompt Playground is a place to compare, test, and deploy prompts for your AI application. It’s where you experiment with different models, test variables, compare outputs, and refine your prompt engineering strategy before deploying to production. It allows you to:
- Iteratively develop prompts before using them in your agents
- Test prompts with different variables and models
- Compare outputs between different prompt versions
- Collaborate with team members on prompt development
This visual environment makes it easier to craft effective prompts for each step in your LangGraph agent’s workflow.
The Prompt Render API retrieves your prompt templates with all parameters configured:
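A sketch using the Portkey SDK’s render endpoint; the prompt ID and variable names are placeholders, and the response shape may differ depending on your template:

```python
from portkey_ai import Portkey

portkey = Portkey(api_key="YOUR_PORTKEY_API_KEY")

# fetch the versioned prompt with variables substituted
render = portkey.prompts.render(
    prompt_id="YOUR_PROMPT_ID",
    variables={"user_input": "What is LangGraph?"},
)
messages = render.data.messages  # ready to pass to your LLM node
```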
You can:
- Create multiple versions of the same prompt
- Compare performance between versions
- Roll back to previous versions if needed
- Specify which version to use in your code:
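For example, pinning version 12 with the `@` suffix (assuming that convention applies to your prompt):

```python
# the @ suffix pins a specific prompt version ("@latest" is the default)
render = portkey.prompts.render(
    prompt_id="YOUR_PROMPT_ID@12",
    variables={"user_input": "What is LangGraph?"},
)
```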
Portkey prompts use Mustache-style templating for easy variable substitution:
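For example, a template with two illustrative variables might look like:

```
You are a helpful assistant specialized in {{domain}}.
Answer the user's question: {{user_input}}
```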
When rendering, simply pass the variables:
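Continuing the sketch above, with illustrative values:

```python
render = portkey.prompts.render(
    prompt_id="YOUR_PROMPT_ID",
    variables={
        "domain": "AI agent frameworks",
        "user_input": "How do LangGraph checkpoints work?",
    },
)
```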
Prompt Engineering Studio
Learn more about Portkey’s prompt management features
4. Guardrails for Safe Agents
Guardrails ensure your LangGraph agents operate safely and respond appropriately in all situations.
Why Use Guardrails?
LangGraph agents can experience various failure modes:
- Generating harmful or inappropriate content
- Leaking sensitive information like PII
- Hallucinating incorrect information
- Generating outputs in incorrect formats
Portkey’s guardrails add protection for both inputs and outputs.
Implementing Guardrails
Portkey’s guardrails can:
- Detect and redact PII in both inputs and outputs
- Filter harmful or inappropriate content
- Validate response formats against schemas
- Check for hallucinations against ground truth
- Apply custom business logic and rules
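A sketch of attaching guardrails through a Portkey config; the guardrail IDs come from checks you create in the Portkey dashboard (placeholders here):

```python
guarded_config = {
    "virtual_key": "openai-virtual-key",
    "input_guardrails": ["guardrails-id-xxx"],   # checks applied to requests
    "output_guardrails": ["guardrails-id-yyy"],  # checks applied to responses
}

llm = ChatOpenAI(
    api_key="dummy",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        config=guarded_config,
    ),
)
```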
Learn More About Guardrails
Explore Portkey’s guardrail features to enhance agent safety
5. User Tracking with Metadata
Track individual users through your LangGraph agents using Portkey’s metadata system.
What is Metadata in Portkey?
Metadata allows you to associate custom data with each request, enabling filtering, segmentation, and analytics. The special `_user` field is specifically designed for user tracking.
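A minimal example (the user ID is illustrative):

```python
headers = createHeaders(
    api_key="YOUR_PORTKEY_API_KEY",
    virtual_key="YOUR_LLM_VIRTUAL_KEY",
    metadata={"_user": "user-123"},  # reserved field for per-user analytics
)
```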
Filter Analytics by User
With metadata in place, you can filter analytics by user and analyze performance metrics on a per-user basis:
Filter analytics by user
This enables:
- Per-user cost tracking and budgeting
- Personalized user analytics
- Team or organization-level metrics
- Environment-specific monitoring (staging vs. production)
Learn More About Metadata
Explore how to use custom metadata to enhance your analytics
6. Caching for Efficient Agents
Implement caching to make your LangGraph agents more efficient and cost-effective:
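Both modes are enabled through a Portkey config; a sketch (cache parameters are illustrative):

```python
# exact-match caching
simple_cache_config = {
    "cache": {"mode": "simple"},
    "virtual_key": "openai-virtual-key",
}

# semantic caching matches contextually similar requests
semantic_cache_config = {
    "cache": {"mode": "semantic", "max_age": 3600},  # optional TTL in seconds
    "virtual_key": "openai-virtual-key",
}
```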
Simple caching performs exact matches on input prompts, caching identical requests to avoid redundant model executions.
Semantic caching considers the contextual similarity between input requests, caching responses for semantically similar inputs.
7. Model Interoperability
LangGraph works with multiple LLM providers, and Portkey extends this capability by providing access to over 200 LLMs through a unified interface. You can easily switch between different models without changing your core agent logic:
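For example, pointing the same agent at Claude only requires a different virtual key and model name (placeholders shown):

```python
anthropic_llm = ChatOpenAI(
    api_key="dummy",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="anthropic-virtual-key",
    ),
    model="claude-3-5-sonnet-20240620",
)
```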
Portkey provides access to LLMs from providers including:
- OpenAI (GPT-4o, GPT-4 Turbo, etc.)
- Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
- Mistral AI (Mistral Large, Mistral Medium, etc.)
- Google Vertex AI (Gemini 1.5 Pro, etc.)
- Cohere (Command, Command-R, etc.)
- AWS Bedrock (Claude, Titan, etc.)
- Local/Private Models
Supported Providers
See the full list of LLM providers supported by Portkey
Set Up Enterprise Governance for LangGraph
Why Enterprise Governance? If you are using LangGraph inside your organization, you need to consider several governance aspects:
- Cost Management: Controlling and tracking AI spending across teams
- Access Control: Managing which teams can use specific models
- Usage Analytics: Understanding how AI is being used across the organization
- Security & Compliance: Maintaining enterprise security standards
- Reliability: Ensuring consistent service across all users
Portkey adds a comprehensive governance layer to address these enterprise needs. Let’s implement these controls step by step.
Create Virtual Key
Virtual Keys are Portkey’s secure way to manage your LLM provider API keys. They provide essential controls like:
- Budget limits for API usage
- Rate limiting capabilities
- Secure API key storage
To create a virtual key, go to Virtual Keys in the Portkey app, add your provider API key, and save. Copy the virtual key ID - you’ll need it for the next step.
Create Default Config
Configs in Portkey define how your requests are routed, with features like advanced routing, fallbacks, and retries.
To create your config:
- Go to Configs in the Portkey dashboard
- Create a new config with your routing settings (a sample is shown below)
- Save and note the config name for the next step
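A sample config that routes requests to the virtual key from Step 1 (values are placeholders):

```json
{
  "virtual_key": "YOUR_VIRTUAL_KEY_FROM_STEP1",
  "override_params": {
    "model": "gpt-4o"
  }
}
```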
Configure Portkey API Key
Now create a Portkey API key and attach the config you created in Step 2:
- Go to API Keys in Portkey and create a new API key
- Select your config from Step 2
- Generate and save your API key
Connect to LangGraph
After setting up your Portkey API key with the attached config, connect it to your LangGraph agents:
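Since the config is attached to the API key, the client setup only needs the key itself (a sketch):

```python
from langchain_openai import ChatOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

llm = ChatOpenAI(
    api_key="dummy",  # placeholder; the Portkey API key carries the config
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",  # the key created in the previous step
    ),
)
```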
Enterprise Features Now Available
Your LangGraph integration now has:
- Departmental budget controls
- Model access governance
- Usage tracking & attribution
- Security guardrails
- Reliability features