Introduction

LangGraph is a library for building stateful, multi-actor applications with LLMs, designed to make developing complex agent workflows easier. It provides a flexible framework to create directed graphs where nodes process information and edges define the flow between them.

Portkey enhances LangGraph with production-readiness features, turning your experimental agent workflows into robust systems by providing:

  • Complete observability of every agent step, tool use, and state transition
  • Built-in reliability with fallbacks, retries, and load balancing
  • Cost tracking and optimization to manage your AI spend
  • Access to 200+ LLMs through a single integration
  • Guardrails to keep agent behavior safe and compliant
  • Version-controlled prompts for consistent agent performance

LangGraph Official Documentation

Learn more about LangGraph’s core concepts and features

Installation & Setup

1

Install the required packages

pip install -U langgraph langchain langchain_openai portkey-ai

Depending on your use case, you may also need additional packages:

  • For search capabilities: pip install langchain_community
  • For memory functionality: pip install langgraph[checkpoint]

2

Generate API Key

Create a Portkey API key with optional budget/rate limits from the Portkey dashboard. You can attach configurations for reliability, caching, and more to this key.

3

Configure LangChain with Portkey

For a simple setup, configure a LangChain ChatOpenAI instance to use Portkey:

from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

# Set up LangChain model with Portkey
llm = ChatOpenAI(
    api_key="dummy",  # Placeholder; Portkey authenticates via the headers below
    base_url=PORTKEY_GATEWAY_URL,  # Resolves to https://api.portkey.ai/v1
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_PROVIDER_VIRTUAL_KEY",
        trace_id="unique-trace-id",               # Optional, for request tracing
        metadata={                                # Optional, for request segmentation
            "app_env": "production",
            "_user": "user_123"                   # Special _user field for user analytics
        }
    )
)

What are Virtual Keys? Virtual keys in Portkey securely store your LLM provider API keys (OpenAI, Anthropic, etc.) in an encrypted vault. They allow for easier key rotation and budget management. Learn more about virtual keys here.

Basic Agent Implementation

Let’s create a simple LangGraph chatbot using Portkey. This example shows how to set up a basic conversational agent:

from typing import Annotated
from langchain_openai import ChatOpenAI
from typing_extensions import TypedDict
from portkey_ai import createHeaders
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages

# Define state structure with message history
class State(TypedDict):
    messages: Annotated[list, add_messages]

# Initialize graph builder
graph_builder = StateGraph(State)

# Set up LLM with Portkey
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_PROVIDER_VIRTUAL_KEY",
        trace_id="chat-session-123"  # Optional
    )
)

# Define chatbot node function
def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

# Add node to graph and set entry/exit points
graph_builder.add_node("chatbot", chatbot)
graph_builder.set_entry_point("chatbot")
graph_builder.set_finish_point("chatbot")
graph = graph_builder.compile()

# Function to handle streaming updates
def stream_graph_updates(user_input: str):
    for event in graph.stream({"messages": [{"role": "user", "content": user_input}]}):
        for value in event.values():
            print("Assistant:", value["messages"][-1].content)

# Interactive chat loop
while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break
        stream_graph_updates(user_input)
    except Exception as e:
        print(f"Error: {e}")
        break

This basic implementation:

  1. Creates a state graph with a message history
  2. Configures a ChatOpenAI model with Portkey
  3. Defines a simple chatbot node that processes messages with the LLM
  4. Compiles the graph and provides a streaming interface for chat

Advanced Features

1. Adding Tools to Your Agent

LangGraph can be enhanced with tools to allow your agent to perform actions. Here’s how to add the Tavily search tool:

from langgraph.prebuilt import ToolNode, tools_condition
from typing import Annotated
from langchain_openai import ChatOpenAI
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from portkey_ai import createHeaders
from langchain_community.tools.tavily_search import TavilySearchResults

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

# Initialize the Tavily search tool
# Note: requires the TAVILY_API_KEY environment variable (or pass the key as an argument)
tool = TavilySearchResults(max_results=2)
tools = [tool]

# Set up LLM with Portkey
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_PROVIDER_VIRTUAL_KEY",
        trace_id="search-agent-session"  # Optional
    )
)
# Bind tools to the LLM
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)

# Add tool node to handle tool execution
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

# Add conditional routing based on tool usage
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
# Return to chatbot after tool execution
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
graph = graph_builder.compile()

This example requires a Tavily API key for the search functionality. You can sign up for one at Tavily’s website.
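
If you keep the key in an environment variable, set it before the tool is constructed; a minimal sketch (the key value is a placeholder):

import os

# Make the Tavily key available before TavilySearchResults is instantiated
os.environ["TAVILY_API_KEY"] = "YOUR_TAVILY_API_KEY"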

2. Creating Custom Tools

You can create custom tools for your agents using the @tool decorator. Here’s how to create a simple multiplication tool:

from langchain_core.tools import tool
from pydantic import BaseModel, Field
from langgraph.prebuilt import ToolNode, tools_condition
from typing import Annotated
from langchain_openai import ChatOpenAI
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from portkey_ai import createHeaders

class State(TypedDict):
    messages: Annotated[list, add_messages]

# Define input schema for the tool
class MultiplyInputSchema(BaseModel):
    """Multiply two numbers"""
    a: int = Field(description="First operand")
    b: int = Field(description="Second operand")

# Create the tool
@tool("multiply_tool", args_schema=MultiplyInputSchema)
def multiply(a: int, b: int) -> int:
    return a * b

graph_builder = StateGraph(State)

tools = [multiply]

# Set up LLM with Portkey
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_PROVIDER_VIRTUAL_KEY",
    )
)
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=tools)
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
graph = graph_builder.compile()

This example:

  1. Defines a Pydantic model for the tool’s input schema
  2. Creates a custom multiplication tool with the @tool decorator
  3. Integrates it into LangGraph with a tool node
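
To try the tool end to end, invoke the compiled graph with a question that calls for multiplication; a minimal sketch (the question wording is just illustrative):

# The chatbot node decides to call multiply_tool, the tools node executes it,
# and control returns to the chatbot to phrase the final answer
result = graph.invoke(
    {"messages": [{"role": "user", "content": "What is 12 multiplied by 8?"}]}
)
print(result["messages"][-1].content)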

3. Adding Memory to Your Agent

For persistent conversations, you can add memory to your LangGraph agents:

from typing import Annotated
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
from typing_extensions import TypedDict
from portkey_ai import createHeaders
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

# Set up tools and LLM with Portkey
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_PROVIDER_VIRTUAL_KEY",
        trace_id="memory-agent-session"  # Optional
    )
)
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")

# Initialize memory saver for persistent state
memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)

# Configuration for memory thread id
config = {"configurable": {"thread_id": "1"}}

# Function to handle streaming updates with memory
def stream_graph_updates(user_input: str):
    events = graph.stream(
        {"messages": [{"role": "user", "content": user_input}]},
        config,
        stream_mode="values",
    )
    for event in events:
        if "messages" in event:
            # Use pretty_print method for better formatting
            event["messages"][-1].pretty_print()

The thread_id in the config allows you to maintain separate conversation threads for different users or contexts.
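
For example, giving each user a dedicated thread_id keeps their histories isolated; a minimal sketch (the user IDs are placeholders):

# Each thread_id gets its own checkpointed message history
config_alice = {"configurable": {"thread_id": "user-alice"}}
config_bob = {"configurable": {"thread_id": "user-bob"}}

# Alice's messages accumulate in her thread
graph.invoke({"messages": [{"role": "user", "content": "Hi, I'm Alice."}]}, config_alice)
reply = graph.invoke({"messages": [{"role": "user", "content": "What's my name?"}]}, config_alice)
print(reply["messages"][-1].content)  # the model can recall "Alice" from this thread

# Bob's thread starts empty and has no knowledge of Alice's conversation
reply = graph.invoke({"messages": [{"role": "user", "content": "What's my name?"}]}, config_bob)
print(reply["messages"][-1].content)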

Production Features

1. Enhanced Observability

Portkey provides comprehensive observability for your LangGraph agents, helping you understand exactly what’s happening during each execution.

Traces provide a hierarchical view of your agent’s execution, showing the sequence of LLM calls, tool invocations, and state transitions.

from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

# Add trace_id to enable hierarchical tracing in Portkey
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_LLM_PROVIDER_VIRTUAL_KEY",
        trace_id="unique-session-id",  # Add unique trace ID
        metadata={"request_type": "user_query"}
    )
)

LangGraph also offers its own tracing via LangSmith, which can be used alongside Portkey for even more detailed workflow insights.
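
If you use LangSmith, you can turn on its tracing through the standard environment variables without changing the Portkey setup; a minimal sketch (the project name is illustrative):

import os

# Enable LangSmith tracing alongside Portkey observability
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "YOUR_LANGSMITH_API_KEY"
os.environ["LANGCHAIN_PROJECT"] = "langgraph-portkey-agents"  # optional project name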

2. Reliability - Keep Your Agents Running Smoothly

When running agents in production, things can go wrong: API rate limits, network issues, or provider outages. Portkey’s reliability features keep your agents running even when problems occur.

Enable fallbacks in your LangGraph agents with a Portkey config:

from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

# Create LLM with fallback configuration
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        config={
            "strategy": {
                "mode": "fallback"
            },
            "targets": [
                {
                    "virtual_key": "YOUR_OPENAI_VIRTUAL_API_KEY",
                    "override_params": {"model": "gpt-4o"}
                },
                {
                    "virtual_key": "YOUR_ANTHROPIC_VIRTUAL_API_KEY",
                    "override_params": {"model": "claude-3-opus-20240229"}
                }
            ]
        }
    )
)

This configuration will automatically try Claude if the GPT-4o request fails, ensuring your agent can continue operating.
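
Retries can be layered in the same way. A minimal sketch assuming Portkey’s standard retry settings (tune the attempt count and status codes for your workload):

from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

# Automatically retry transient failures before surfacing an error
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_PROVIDER_VIRTUAL_KEY",
        config={
            "retry": {
                "attempts": 3,                                # retry up to 3 times
                "on_status_codes": [429, 500, 502, 503, 504]  # rate limits and server errors
            }
        }
    )
)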

3. Prompting in LangGraph

Portkey’s Prompt Engineering Studio helps you create, manage, and optimize the prompts used in your LangGraph agents. Instead of hardcoding prompts or instructions, use Portkey’s prompt rendering API to dynamically fetch and apply your versioned prompts.

Manage prompts in Portkey's Prompt Library
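
For example, you can render a saved prompt at runtime and feed the resulting messages into a LangGraph node. This is a minimal sketch, assuming a chat prompt template with a user_query variable whose rendered messages come back as role/content dicts; the prompt ID is a placeholder, and llm and State are defined as in the earlier examples:

from portkey_ai import Portkey

# Portkey client used only for rendering versioned prompts
portkey = Portkey(api_key="YOUR_PORTKEY_API_KEY")

def chatbot(state: State):
    # Fetch the latest version of the saved prompt and substitute variables
    rendered = portkey.prompts.render(
        prompt_id="YOUR_PROMPT_ID",                               # placeholder prompt ID
        variables={"user_query": state["messages"][-1].content}
    )
    # rendered.data carries the resolved messages from the template
    return {"messages": [llm.invoke(rendered.data.messages)]}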

Prompt Playground is a place to compare, test and deploy perfect prompts for your AI application. It’s where you experiment with different models, test variables, compare outputs, and refine your prompt engineering strategy before deploying to production. It allows you to:

  1. Iteratively develop prompts before using them in your agents
  2. Test prompts with different variables and models
  3. Compare outputs between different prompt versions
  4. Collaborate with team members on prompt development

This visual environment makes it easier to craft effective prompts for each step in your LangGraph agent’s workflow.

Prompt Engineering Studio

Learn more about Portkey’s prompt management features

4. Guardrails for Safe Agents

Guardrails ensure your LangGraph agents operate safely and respond appropriately in all situations.

Why Use Guardrails?

LangGraph agents can experience various failure modes:

  • Generating harmful or inappropriate content
  • Leaking sensitive information like PII
  • Hallucinating incorrect information
  • Generating outputs in incorrect formats

Portkey’s guardrails add protection for both inputs and outputs.

Implementing Guardrails

from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

# Create LLM with guardrails
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY",
        config={
            "input_guardrails": ["guardrails-id-xxx", "guardrails-id-yyy"],
            "output_guardrails": ["guardrails-id-zzz"]
        }
    )
)

Portkey’s guardrails can:

  • Detect and redact PII in both inputs and outputs
  • Filter harmful or inappropriate content
  • Validate response formats against schemas
  • Check for hallucinations against ground truth
  • Apply custom business logic and rules

Learn More About Guardrails

Explore Portkey’s guardrail features to enhance agent safety

5. User Tracking with Metadata

Track individual users through your LangGraph agents using Portkey’s metadata system.

What is Metadata in Portkey?

Metadata allows you to associate custom data with each request, enabling filtering, segmentation, and analytics. The special _user field is specifically designed for user tracking.

from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

# Configure LLM with user tracking
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY",
        metadata={
            "_user": "user_123",  # Special _user field for user analytics
            "user_tier": "premium",
            "user_company": "Acme Corp",
            "session_id": "abc-123",
            "graph_id": "search_workflow"
        }
    )
)

Filter Analytics by User

With metadata in place, you can filter analytics by user and analyze performance metrics on a per-user basis:

Filter analytics by user

This enables:

  • Per-user cost tracking and budgeting
  • Personalized user analytics
  • Team or organization-level metrics
  • Environment-specific monitoring (staging vs. production)

Learn More About Metadata

Explore how to use custom metadata to enhance your analytics

6. Caching for Efficient Agents

Implement caching to make your LangGraph agents more efficient and cost-effective:

from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

# Configure LLM with simple caching
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY",
        config={
            "cache": {
                "mode": "simple"
            }
        }
    )
)

Simple caching performs exact matches on input prompts, caching identical requests to avoid redundant model executions.
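
If your agents often see similar but not identical inputs, Portkey also supports semantic caching, which matches on meaning rather than exact text; a minimal sketch of the config:

from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

# Semantic caching serves cached responses for semantically similar requests
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY",
        config={
            "cache": {
                "mode": "semantic",
                "max_age": 3600  # optional cache TTL
            }
        }
    )
)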

7. Model Interoperability

LangGraph works with multiple LLM providers, and Portkey extends this capability by providing access to over 200 LLMs through a unified interface. You can easily switch between different models without changing your core agent logic:

from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

# OpenAI configuration
openai_llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY"
    )
)

# Anthropic configuration
anthropic_llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY"
    )
)

# Choose which LLM to use based on your needs
active_llm = openai_llm  # or anthropic_llm

# Use in your LangGraph nodes
def chatbot(state):
    return {"messages": [active_llm.invoke(state["messages"])]}

Portkey provides access to LLMs from providers including:

  • OpenAI (GPT-4o, GPT-4 Turbo, etc.)
  • Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
  • Mistral AI (Mistral Large, Mistral Medium, etc.)
  • Google Vertex AI (Gemini 1.5 Pro, etc.)
  • Cohere (Command, Command-R, etc.)
  • AWS Bedrock (Claude, Titan, etc.)
  • Local/Private Models

Supported Providers

See the full list of LLM providers supported by Portkey

Set Up Enterprise Governance for LangGraph

Why Enterprise Governance? If you are using LangGraph inside your organization, you need to consider several governance aspects:

  • Cost Management: Controlling and tracking AI spending across teams
  • Access Control: Managing which teams can use specific models
  • Usage Analytics: Understanding how AI is being used across the organization
  • Security & Compliance: Maintaining enterprise security standards
  • Reliability: Ensuring consistent service across all users

Portkey adds a comprehensive governance layer to address these enterprise needs. Let’s implement these controls step by step.

1

Create Virtual Key

Virtual Keys are Portkey’s secure way to manage your LLM provider API keys. They provide essential controls like:

  • Budget limits for API usage
  • Rate limiting capabilities
  • Secure API key storage

To create a virtual key, go to Virtual Keys in the Portkey app, add your provider’s API key, then save and copy the virtual key ID.

Save your virtual key ID - you’ll need it for the next step.

2

Create Default Config

Configs in Portkey define how your requests are routed, with features like advanced routing, fallbacks, and retries.

To create your config:

  1. Go to Configs in Portkey dashboard
  2. Create new config with:
    {
      "virtual_key": "YOUR_VIRTUAL_KEY_FROM_STEP1",
      "override_params": {
        "model": "gpt-4o"  // Your preferred model name
      }
    }
    
  3. Save and note the Config name for the next step
3

Configure Portkey API Key

Now create a Portkey API key and attach the config you created in Step 2:

  1. Go to API Keys in Portkey and create a new API key
  2. Select your config from Step 2
  3. Generate and save your API key
4

Connect to LangGraph

After setting up your Portkey API key with the attached config, connect it to your LangGraph agents:

from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders

# Configure LLM with your Portkey API key
llm = ChatOpenAI(
    api_key="dummy",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY"  # The API key with attached config from step 3
    )
)

Enterprise Features Now Available

Your LangGraph integration now has:

  • Departmental budget controls
  • Model access governance
  • Usage tracking & attribution
  • Security guardrails
  • Reliability features

Frequently Asked Questions

Resources