CrewAI is a cutting-edge framework for orchestrating autonomous AI agents. When integrated with Portkey, it enables production-ready features like observability, reliability, and seamless multi-provider support. This integration helps you build robust, scalable agent systems while maintaining full control over their execution.

Getting Started

1. Install the Required Packages

pip install -qU crewai portkey-ai

2. Configure the LLM Client

To build CrewAI Agents with Portkey, you’ll need two keys:

  • Portkey API Key: Sign up on the Portkey app and copy your API key.
  • Virtual Key: Virtual Keys store your LLM provider API keys securely in Portkey’s vault and let you manage them from one place.
from crewai import LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

gpt_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy", # We are using Virtual key
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY", # Enter your OpenAI Virtual key from Portkey
        config="YOUR_PORTKEY_CONFIG_ID", # All your model parameters and routing strategy
        trace_id="llm1"
    )
)

3. Create and Run Agents

Here’s an example of creating agents with different LLMs using Portkey integration:

from crewai import Agent, Task, Crew, Process

# Define your agents with roles and goals
coder = Agent(
    role='Software Developer',
    goal='Write clear, concise code on demand',
    backstory='An expert coder with a keen eye for software trends.',
    llm=gpt_llm
)


# Create tasks for your agents
task1 = Task(
    description="Define the HTML for making a simple website with heading- Hello World! Portkey is working! .",
    expected_output="A clear and concise HTML code",
    agent=coder
)
# Instantiate your crew with a sequential process
crew = Crew(
    agents=[coder],
    tasks=[task1],
)
# Get your crew to work!
result = crew.kickoff()
print("######################")
print(result)

E2E Example with Multiple LLMs in CrewAI

Here’s a complete example showing multi-agent interaction with different LLMs:

from crewai import LLM, Agent, Task, Crew
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

# Configure LLMs with different providers
gpt_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_OPENAI_VIRTUAL_KEY",
        trace_id="pm_agent"
    )
)

anthropic_llm = LLM(
    model="claude-3-5-sonnet-latest",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY",
        trace_id="dev_agent"
    )
)

# Create agents with different LLMs
product_manager = Agent(
    role='Product Manager',
    goal='Define software requirements',
    backstory="Experienced PM skilled in requirement definition",
    llm=gpt_llm
)

developer = Agent(
    role='Software Developer',
    goal='Implement requirements',
    backstory="Senior developer with full-stack experience",
    llm=anthropic_llm
)

# Define tasks
planning_task = Task(
    description="Define the key requirements and features for a classic ping pong game. Be specific and concise.",
    expected_output="A clear and concise list of requirements for the ping pong game",
    agent=product_manager
)

implementation_task = Task(
    description="Based on the provided requirements, develop the code for the classic ping pong game. Focus on gameplay mechanics and a simple user interface.",
    expected_output="Complete code for the ping pong game",
    agent=developer
)

# Create and run crew
crew = Crew(
    agents=[product_manager, developer],
    tasks=[planning_task, implementation_task],
    verbose=True
)

result = crew.kickoff()

Enabling Portkey Features

By routing your CrewAI requests through Portkey, you get access to the following production-grade features:

1. Interoperability - Using Different LLMs

When building with CrewAI, you might want to experiment with different LLMs or use specific providers for different agent tasks. Portkey makes this seamless - you can switch between OpenAI, Anthropic, Gemini, Mistral, or cloud providers without changing your agent code.

Instead of managing multiple API keys and provider-specific configurations, Portkey’s Virtual Keys give you a single point of control. Here’s how you can use different LLMs with your CrewAI agents:

anthropic_llm = LLM(
    model="claude-3-5-sonnet-latest",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy", # We are using Virtual keys
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY"
    )
)

2. Caching - Speed Up Agent Responses

Agent operations often involve repetitive queries or similar tasks. Every time your agent makes an LLM call, you’re paying for tokens and waiting for responses. Portkey’s caching system can significantly reduce both costs and latency.

Portkey offers two powerful caching modes:

Simple Cache: Perfect for exact matches - when your agents make identical requests. Ideal for deterministic operations like function calling or FAQ-type queries.

Semantic Cache: Uses embedding-based matching to identify similar queries. Great for natural language interactions where users might ask the same thing in different ways.

config = {
    "cache": {
        "mode": "semantic",  # or "simple" for exact matching
    }
}

llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY",
        config=config
    )
)

3. Reliability - Keep Your Agents Running Smoothly

When running agents in production, things can go wrong - API rate limits, network issues, or provider outages. Portkey’s reliability features ensure your agents keep running smoothly even when problems occur.
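
As an illustration, retries and fallbacks are driven by a Portkey Config. Below is a minimal sketch, assuming placeholder virtual key names you would replace with your own: transient failures are retried up to three times, and requests fall back to a second provider if the first one fails.

from crewai import LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

# Sketch of a reliability-focused Config: retry transient failures up to
# 3 times, and fall back from the first target to the second on errors.
# The virtual key values are placeholders.
reliability_config = {
    "strategy": {"mode": "fallback"},
    "retry": {"attempts": 3},
    "targets": [
        {"virtual_key": "YOUR_OPENAI_VIRTUAL_KEY"},
        {"virtual_key": "YOUR_ANTHROPIC_VIRTUAL_KEY"}
    ]
}

resilient_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        config=reliability_config
    )
)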

4. Observability - Understand Your Agents

Building agents is the first step - but how do you know they’re working effectively? Portkey provides comprehensive visibility into your agent operations through multiple lenses:

Metrics Dashboard: Track 40+ key performance indicators like:

  • Cost per agent interaction
  • Response times and latency
  • Token usage and efficiency
  • Success/failure rates
  • Cache hit rates

Send Custom Metadata with your requests

Attach metadata, such as the agent name and environment, to filter and analyze specific workflows:

gpt_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy", # We are using a Virtual Key
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY", # Enter your OpenAI Virtual Key from Portkey
        metadata={
            "agent": "weather_agent",
            "environment": "production"
        }
    )
)

5. Logs and Traces

Logs are essential for understanding agent behavior, diagnosing issues, and improving performance. They provide a detailed record of agent activities and tool use, which is crucial for debugging and optimizing processes.

Access a dedicated section to view records of agent executions, including parameters, outcomes, function calls, and errors. Filter logs based on multiple parameters such as trace ID, model, tokens used, and metadata.
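
For example, you can tag each crew run with its own trace ID so that every LLM call from that run shares one trace and can be filtered together in the logs view. A minimal sketch (the uuid-based ID is just one way to generate a unique value):

import uuid
from crewai import LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

# Generate a unique trace ID per crew run so all of its LLM calls
# can be filtered together in Portkey's logs view.
run_trace_id = f"crew-run-{uuid.uuid4()}"

llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY",
        trace_id=run_trace_id
    )
)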

6. Security & Compliance - Enterprise-Ready Controls

When deploying agents in production, security is crucial. Portkey provides enterprise-grade security features:

Budget Controls

Set and monitor spending limits per Virtual Key. Get alerts before costs exceed thresholds.

Access Management

Control who can access what. Assign roles and permissions for your team members.

Audit Logging

Track all changes and access. Know who modified agent settings and when.

Data Privacy

Configure data retention and processing policies to meet your compliance needs.

Configure these settings in the Portkey Dashboard or programmatically through the API.

7. Continuous Improvement

Now that you know how to trace & log your CrewAI requests to Portkey, you can also start capturing user feedback to improve your app!

You can append qualitative as well as quantitative feedback to any trace ID with the portkey.feedback.create method:

Adding Feedback
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="YOUR_OPENAI_VIRTUAL_KEY"
)

feedback = portkey.feedback.create(
    trace_id="YOUR_CrewAI_Agent_TRACE_ID",
    value=5,  # Integer between -10 and 10
    weight=1,  # Optional
    metadata={
        # Pass any additional context here like comments, _user and more
    }
)

print(feedback)

Portkey Config

Many of these features are driven by Portkey’s Config architecture. The Portkey app simplifies creating, managing, and versioning your Configs.
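
As a reference, here is a sketch of an inline Config that load-balances traffic between two providers; the virtual key names and weights are illustrative placeholders. Once saved in the Portkey app, you can pass the Config's versioned ID via the config header (as in the earlier examples) instead of the raw object.

# Illustrative Config: route 70% of traffic to one provider and 30%
# to another. Virtual key names and weights are placeholders.
loadbalance_config = {
    "strategy": {"mode": "loadbalance"},
    "targets": [
        {"virtual_key": "YOUR_OPENAI_VIRTUAL_KEY", "weight": 0.7},
        {"virtual_key": "YOUR_ANTHROPIC_VIRTUAL_KEY", "weight": 0.3}
    ]
}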

For more information on using these features and setting up your Config, please refer to the Portkey documentation.