MLflow Tracing is a feature that enhances LLM observability in your Generative AI (GenAI) applications by capturing detailed information about the execution of your application’s services. Tracing provides a way to record the inputs, outputs, and metadata associated with each intermediate step of a request, enabling you to easily pinpoint the source of bugs and unexpected behaviors.

MLflow offers automatic, no-code-added integrations with over 20 popular GenAI libraries, providing immediate observability with just a single line of code. Combined with Portkey’s intelligent gateway, you get comprehensive tracing enriched with routing decisions and performance optimizations.

Why MLflow + Portkey?

  • No-Code Integrations: Automatic instrumentation for 20+ GenAI libraries with one line of code
  • Detailed Execution Traces: Capture inputs, outputs, and metadata for every step
  • Gateway Intelligence: Portkey adds routing context, fallback decisions, and cache performance
  • Debug with Confidence: Easily pinpoint issues with comprehensive trace data

Quick Start

Prerequisites

  • Python
  • Portkey account with API key
  • OpenAI API key (or use Portkey’s virtual keys)

Step 1: Install Dependencies

Install the required packages for MLflow and Portkey integration:

pip install mlflow openai opentelemetry-exporter-otlp-proto-http portkey-ai
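
To confirm everything installed correctly before wiring things up, a quick sanity check (a minimal sketch; your version numbers will vary):

import mlflow
import openai
import portkey_ai  # imported only to confirm the Portkey SDK resolves

print("mlflow:", mlflow.__version__)
print("openai:", openai.__version__)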

Step 2: Configure OpenTelemetry Export

Set up the environment variables to send traces to Portkey’s OpenTelemetry endpoint:

import os

# Configure Portkey endpoint and authentication
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.portkey.ai/v1/logs/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "x-portkey-api-key=YOUR_PORTKEY_API_KEY"
os.environ["OTEL_EXPORTER_OTLP_TRACES_PROTOCOL"] = "http/protobuf"

Step 3: Enable MLflow Instrumentation

Enable automatic tracing for OpenAI with just one line:

import mlflow

# Enable the MLflow instrumentation for tracing OpenAI
mlflow.openai.autolog()
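
The same one-line pattern applies to the other libraries MLflow instruments. A sketch, noting that the availability of each hook depends on your MLflow version:

import mlflow

mlflow.langchain.autolog()    # LangChain
mlflow.llama_index.autolog()  # LlamaIndex
mlflow.anthropic.autolog()    # Anthropic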

Step 4: Configure Portkey Gateway

Set up the OpenAI client to use Portkey’s intelligent gateway:

from openai import OpenAI
from portkey_ai import createHeaders

# Use Portkey's gateway for intelligent routing
client = OpenAI(
    api_key="YOUR_OPENAI_API_KEY",  # Or use a dummy value with virtual keys
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY"  # Optional: Use Portkey's secure key management
    )
)
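
Beyond key management, createHeaders can also attach a Portkey config to every call made through this client, which is how gateway features like caching and fallbacks are enabled. A sketch, where the config ID is a placeholder for one saved in your Portkey dashboard:

# Same client setup, but routed through a saved Portkey config
client = OpenAI(
    api_key="YOUR_OPENAI_API_KEY",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        config="YOUR_CONFIG_ID"  # Placeholder: a config created in the Portkey dashboard
    )
)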

Step 5: Make Instrumented LLM Calls

Now your LLM calls are automatically traced by MLflow and enhanced by Portkey:

# Make calls through Portkey's gateway
# MLflow instruments the call, Portkey adds gateway intelligence
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Explain the importance of tracing in LLM applications"
        }
    ],
    temperature=0.7
)

print(response.choices[0].message.content)

# You now get:
# 1. Automatic tracing from MLflow
# 2. Gateway features from Portkey (caching, fallbacks, routing)
# 3. Combined insights in Portkey's dashboard
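
To correlate several related calls under a single trace on the Portkey side, you can pass a custom trace ID per request. A sketch, using the OpenAI SDK's per-request extra_headers hook and an example trace ID of your choosing:

# Group related requests under one Portkey trace
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Summarize the previous answer in one line"}
    ],
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        trace_id="my-session-123"  # Example ID: groups calls in Portkey's dashboard
    )
)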

Complete Example

Here’s a full example bringing everything together:

import os
import mlflow
from openai import OpenAI
from portkey_ai import createHeaders

# Step 1: Configure Portkey endpoint
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.portkey.ai/v1/logs/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "x-portkey-api-key=YOUR_PORTKEY_API_KEY"
os.environ["OTEL_EXPORTER_OTLP_TRACES_PROTOCOL"] = "http/protobuf"

# Step 2: Enable MLflow instrumentation
mlflow.openai.autolog()

# Step 3: Configure Portkey Gateway
client = OpenAI(
    api_key="YOUR_OPENAI_API_KEY",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY"
    )
)

# Step 4: Make instrumented calls
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "What are the benefits of observability in production AI systems?"}
    ]
)

print(response.choices[0].message.content)
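
You can also inspect the most recent trace locally before checking the dashboard. A sketch for MLflow 2.x; newer releases deprecate this call in favor of mlflow.get_last_active_trace_id():

# Fetch the trace MLflow just recorded for the call above
trace = mlflow.get_last_active_trace()
print(trace.info.request_id)  # or inspect trace.data.spans for step-level detail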

Supported Integrations

MLflow automatically instruments many popular GenAI libraries:

LLM Providers

  • OpenAI
  • Anthropic
  • Cohere
  • Google Generative AI
  • Azure OpenAI

Vector Databases

  • Pinecone
  • ChromaDB
  • Weaviate
  • Qdrant

Frameworks

  • LangChain
  • LlamaIndex
  • Haystack
  • And 10+ more!

Next Steps

See Your Traces in Action

Once configured, navigate to the Portkey dashboard to see your MLflow traces combined with Portkey's gateway intelligence.