Extend Portkey’s powerful AI Gateway with Arize Phoenix for unified LLM observability, tracing, and analytics across your ML stack.
Portkey is a production-grade AI Gateway and Observability platform for AI applications. It offers built-in observability, reliability features, and 40+ key LLM metrics. For teams standardizing observability on Arize Phoenix, Portkey also supports seamless integration.
Portkey provides comprehensive observability out-of-the-box. This integration is for teams who want to consolidate their ML observability in Arize Phoenix alongside Portkey’s AI Gateway capabilities.
Arize Phoenix brings observability to LLM workflows with tracing, prompt debugging, and performance monitoring. Thanks to Phoenix’s OpenInference instrumentation, Portkey can emit structured traces automatically, with no extra setup needed. This gives you clear visibility into every LLM call, making it easier to debug and improve your app.
AI Gateway Features
1600+ LLM Providers: Single API for OpenAI, Anthropic, AWS Bedrock, and more
Detailed Logs & Traces: Request/response bodies and custom tracing
Custom Metadata: Attach custom metadata to your requests
Custom Alerts: Real-time monitoring and notifications
With this integration, you can route LLM traffic through Portkey and gain deep observability in Arize Phoenix—bringing together the best of gateway orchestration and ML observability.
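To make the gateway side concrete, here is a minimal sketch of how a request routed through Portkey carries custom metadata and a trace ID. The `x-portkey-*` headers are Portkey's documented request headers; the `portkey_headers` helper itself is illustrative, not part of any SDK.

```python
import json

def portkey_headers(api_key, provider, trace_id=None, metadata=None):
    """Illustrative helper: build headers for a request routed through
    the Portkey gateway. Not part of the Portkey SDK."""
    headers = {
        "Content-Type": "application/json",
        "x-portkey-api-key": api_key,   # your Portkey API key
        "x-portkey-provider": provider, # e.g. "openai", "anthropic"
    }
    if trace_id:
        # Requests sharing a trace ID are grouped into one trace
        headers["x-portkey-trace-id"] = trace_id
    if metadata:
        # Custom metadata is sent as a JSON-encoded header
        headers["x-portkey-metadata"] = json.dumps(metadata)
    return headers

headers = portkey_headers(
    "your-portkey-api-key",
    "openai",
    trace_id="checkout-flow-42",
    metadata={"user": "alice", "env": "staging"},
)
```

Any OpenAI-compatible request body sent with these headers is routed, logged, and traced by the gateway, so the metadata you attach here is what later shows up alongside the spans in Phoenix.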
First, set up the Arize OpenTelemetry configuration:
```python
from arize.otel import register

# Configure Arize as your telemetry backend
tracer_provider = register(
    space_id="your-space-id",        # Found in Arize app settings
    api_key="your-api-key",          # Your Arize API key
    project_name="portkey-gateway",  # Name your project
)
```
Enable Portkey Instrumentation
Initialize the Portkey instrumentor to format traces for Arize:
```python
from openinference.instrumentation.portkey import PortkeyInstrumentor

# Enable instrumentation
PortkeyInstrumentor().instrument(tracer_provider=tracer_provider)
```