Portkey is a production-grade AI Gateway and Observability platform for AI applications. It offers built-in observability, reliability features, and 40+ key LLM metrics. For teams standardizing observability on Arize Phoenix, Portkey also supports seamless integration.

Portkey provides comprehensive observability out-of-the-box. This integration is for teams who want to consolidate their ML observability in Arize Phoenix alongside Portkey’s AI Gateway capabilities.

Why Portkey + Arize Phoenix?

Arize Phoenix brings observability to LLM workflows with tracing, prompt debugging, and performance monitoring.

Thanks to Phoenix’s OpenInference instrumentation, Portkey calls emit structured traces automatically once the instrumentor is enabled — no manual span management needed. This gives you clear visibility into every LLM call, making it easier to debug and improve your app.

AI Gateway Features

  • 1600+ LLM Providers: Single API for OpenAI, Anthropic, AWS Bedrock, and more
  • Advanced Routing: Fallbacks, load balancing, and conditional routing (see the fallback sketch after this list)
  • Cost Optimization: Semantic caching
  • Security: PII detection, content filtering, compliance controls
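
Routing strategies are expressed as a gateway config passed to the client. Here is a minimal fallback sketch; the virtual key names are placeholders for keys you create in Portkey:

from portkey_ai import Portkey

# Try OpenAI first; fall back to Anthropic if the request fails.
# "openai-vk" and "anthropic-vk" are placeholder virtual keys.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "openai-vk"},
        {
            "virtual_key": "anthropic-vk",
            "override_params": {"model": "claude-3-opus-20240229"},
        },
    ],
}

portkey = Portkey(api_key="your-portkey-api-key", config=fallback_config)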

Built-in Observability

  • 40+ Key Metrics: Cost, latency, tokens, error rates
  • Detailed Logs & Traces: Request/response bodies and custom tracing
  • Custom Metadata: Attach custom metadata to your requests (sketched after this list)
  • Custom Alerts: Real-time monitoring and notifications
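
Metadata attached to a request shows up as filterable fields in Portkey’s logs and analytics. A minimal sketch using the Python SDK’s with_options; the metadata keys here are arbitrary examples:

from portkey_ai import Portkey

portkey = Portkey(
    api_key="your-portkey-api-key",
    virtual_key="your-openai-virtual-key"
)

# Attach metadata to a single request; these fields become
# filterable dimensions in Portkey's logs and analytics.
response = portkey.with_options(
    metadata={"user_id": "user-123", "env": "staging"}
).chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}]
)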

With this integration, you can route LLM traffic through Portkey and gain deep observability in Arize Phoenix—bringing together the best of gateway orchestration and ML observability.

Getting Started

Installation

Install the required packages to enable Arize Phoenix integration with your Portkey deployment:

pip install portkey-ai openinference-instrumentation-portkey arize-otel

Setting up the Integration

Step 1: Configure Arize Phoenix

First, set up the Arize OpenTelemetry configuration:

from arize.otel import register

# Configure Arize as your telemetry backend
tracer_provider = register(
    space_id="your-space-id",      # Found in Arize app settings
    api_key="your-api-key",        # Your Arize API key
    project_name="portkey-gateway" # Name your project
)
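
If you’re running open-source Phoenix rather than Arize, the analogous setup (assuming a locally running Phoenix server and the arize-phoenix-otel package) is:

from phoenix.otel import register

# Send traces to a self-hosted Phoenix server instead of Arize;
# by default this targets a local Phoenix instance.
tracer_provider = register(project_name="portkey-gateway")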

Step 2: Enable Portkey Instrumentation

Initialize the Portkey instrumentor to format traces for Arize:

from openinference.instrumentation.portkey import PortkeyInstrumentor

# Enable instrumentation
PortkeyInstrumentor().instrument(tracer_provider=tracer_provider)
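
Instrumentation is process-wide; if you need to turn it off again (for example, between tests), the same instrumentor can remove it:

# Stop emitting Portkey spans for this process.
PortkeyInstrumentor().uninstrument()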

Step 3: Configure Portkey AI Gateway

Set up the Portkey client and make a test request:

from portkey_ai import Portkey

# Initialize Portkey client
portkey = Portkey(
    api_key="your-portkey-api-key",  # Optional for self-hosted
    virtual_key="your-openai-virtual-key"  # Or use provider-specific virtual keys
)

response = portkey.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain machine learning"}]
)

print(response.choices[0].message.content)
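
Streaming requests go through the same client; a minimal sketch:

# Stream tokens as they arrive.
stream = portkey.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain machine learning"}],
    stream=True
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")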

Complete Integration Example

Here’s a complete working example that connects Portkey’s AI Gateway with Arize Phoenix for centralized monitoring:

import os
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
from arize.otel import register  # OR from phoenix.otel import register

from openinference.instrumentation.portkey import PortkeyInstrumentor

# Step 1: Configure Arize Phoenix
tracer_provider = register(
    space_id="your-space-id",
    api_key="your-arize-api-key",
    project_name="portkey-production"
)

# Step 2: Enable Portkey instrumentation
PortkeyInstrumentor().instrument(tracer_provider=tracer_provider)

# Step 3: Configure Portkey's Advanced AI Gateway
advanced_config = {
    "strategy": {
        "mode": "loadbalance"  # Distribute load across providers
    },
    "targets": [
        {
            "virtual_key": "openai-vk",
            "weight": 0.7,
            "override_params": {"model": "gpt-4o"}
        },
        {
            "virtual_key": "anthropic-vk",
            "weight": 0.3,
            "override_params": {"model": "claude-3-opus-20240229"}
        }
    ],
    "cache": {
        "mode": "semantic",  # Intelligent caching
        "max_age": 3600
    },
    "retry": {
        "attempts": 3,
        "on_status_codes": [429, 500, 502, 503, 504]
    },
    "request_timeout": 30000
}

# Initialize Portkey-powered client
client = OpenAI(
    api_key="not-needed",  # Virtual keys handle auth
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key=os.environ.get("PORTKEY_API_KEY"),
        config=advanced_config,
        metadata={
            "user_id": "user-123",
            "session_id": "session-456",
            "feature": "chat-assistant"
        }
    )
)

# Make requests through Portkey's AI Gateway
response = client.chat.completions.create(
    model="gpt-4o",  # Portkey handles provider routing
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
    temperature=0.7
)

print(response.choices[0].message.content)
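
To correlate a specific request with its gateway logs, you can also set an explicit trace ID per request via createHeaders. A sketch; the ID here is generated client-side:

import uuid

# Tag one request with a trace ID so it can be looked up in Portkey's logs.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize LLM gateways in one line."}],
    extra_headers=createHeaders(trace_id=str(uuid.uuid4()))
)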

Portkey AI Gateway Features

While Arize Phoenix provides observability, Portkey delivers a complete AI infrastructure platform. Here’s everything you get with Portkey:

  • 🚀 Core Gateway Capabilities
  • 🛡️ Reliability & Performance
  • 💰 Cost Optimization
  • 📊 Built-in Observability
  • 🔒 Security & Compliance
  • 🏢 Enterprise Features

Next Steps

Need help? Join our Discord community