Arize Phoenix
Extend Portkey’s powerful AI Gateway with Arize Phoenix for unified LLM observability, tracing, and analytics across your ML stack.
Portkey is a production-grade AI Gateway and Observability platform for AI applications. It offers built-in observability, reliability features, and 40+ key LLM metrics. For teams standardizing observability on Arize Phoenix, Portkey also supports seamless integration.
Portkey provides comprehensive observability out-of-the-box. This integration is for teams who want to consolidate their ML observability in Arize Phoenix alongside Portkey’s AI Gateway capabilities.
Why Portkey + Arize Phoenix?
Arize Phoenix brings observability to LLM workflows with tracing, prompt debugging, and performance monitoring.
Thanks to Phoenix’s OpenInference instrumentation, Portkey emits structured traces automatically once the instrumentor is enabled. This gives you clear visibility into every LLM call, making it easier to debug and improve your app.
AI Gateway Features
- 1600+ LLMs: Single API for OpenAI, Anthropic, AWS Bedrock, and more
- Advanced Routing: Fallbacks, load balancing, conditional routing
- Cost Optimization: Semantic caching, budget limits, and cost analytics
- Security: PII detection, content filtering, compliance controls
Built-in Observability
- 40+ Key Metrics: Cost, latency, tokens, error rates
- Detailed Logs & Traces: Request/response bodies and custom tracing
- Custom Metadata: Attach custom metadata to your requests
- Custom Alerts: Real-time monitoring and notifications
With this integration, you can route LLM traffic through Portkey and gain deep observability in Arize Phoenix—bringing together the best of gateway orchestration and ML observability.
Getting Started
Installation
Install the required packages to enable Arize Phoenix integration with your Portkey deployment:
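For example, with pip (the package names below assume Phoenix’s OpenTelemetry helper and the OpenInference Portkey instrumentor):

```bash
pip install portkey-ai arize-phoenix-otel openinference-instrumentation-portkey
```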
Setting up the Integration
Configure Arize Phoenix
First, set up the Arize OpenTelemetry configuration:
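A minimal sketch, assuming a Phoenix instance running locally on its default port; the project name is illustrative:

```python
from phoenix.otel import register

# Register a tracer provider that exports spans to Phoenix.
# Replace the endpoint (or set PHOENIX_COLLECTOR_ENDPOINT) if your
# Phoenix instance is not running locally on the default port.
tracer_provider = register(
    project_name="portkey-gateway",  # illustrative project name
    endpoint="http://localhost:6006/v1/traces",
)
```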
Enable Portkey Instrumentation
Initialize the Portkey instrumentor to format traces for Arize:
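With the tracer provider from the previous step in scope:

```python
from openinference.instrumentation.portkey import PortkeyInstrumentor

# Instrument the Portkey SDK so every gateway call is exported as an
# OpenInference span to the Phoenix tracer provider registered above.
PortkeyInstrumentor().instrument(tracer_provider=tracer_provider)
```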
Configure Portkey AI Gateway
Set up Portkey with all its powerful features:
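A basic client setup; the virtual key name is a placeholder for one you create in the Portkey dashboard:

```python
from portkey_ai import Portkey

# Virtual keys keep raw provider API keys out of your code.
portkey = Portkey(
    api_key="YOUR_PORTKEY_API_KEY",
    virtual_key="openai-virtual-key",  # placeholder virtual key
)

response = portkey.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```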
Complete Integration Example
Here’s a complete working example that connects Portkey’s AI Gateway with Arize Phoenix for centralized monitoring:
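The sketch below strings the three steps together; endpoint, project name, and virtual key are placeholders as before:

```python
from phoenix.otel import register
from openinference.instrumentation.portkey import PortkeyInstrumentor
from portkey_ai import Portkey

# 1. Configure Phoenix (assumes a local Phoenix instance).
tracer_provider = register(
    project_name="portkey-gateway",  # illustrative project name
    endpoint="http://localhost:6006/v1/traces",
)

# 2. Enable the Portkey instrumentor.
PortkeyInstrumentor().instrument(tracer_provider=tracer_provider)

# 3. Route traffic through Portkey as usual; each call now shows up
#    in Phoenix as a trace with latency, token, and model details.
portkey = Portkey(
    api_key="YOUR_PORTKEY_API_KEY",
    virtual_key="openai-virtual-key",  # placeholder virtual key
)

response = portkey.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is LLM observability?"}],
)
print(response.choices[0].message.content)
```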
Portkey AI Gateway Features
While Arize Phoenix provides observability, Portkey delivers a complete AI infrastructure platform. Here’s everything you get with Portkey:
🚀 Core Gateway Capabilities
1600+ LLMs
Access 1600+ models from OpenAI, Anthropic, Google, Cohere, Mistral, Meta, and more through a single unified API. No more managing different SDKs or endpoints.
Universal API
Use the same code to call any LLM provider. Switch between models and providers without changing your application code.
Virtual Keys
Secure vault for API keys with budget limits, rate limiting, and access controls. Never expose raw API keys in your code.
Advanced Configs
Define routing strategies, model parameters, and reliability settings in reusable configurations. Version control your AI infrastructure.
🛡️ Reliability & Performance
Smart Fallbacks
Automatically switch to backup providers when the primary fails. Define fallback chains across multiple providers.
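As a sketch, a fallback config passed to the client; the config shape follows Portkey’s fallback mode, and virtual key names and models are placeholders:

```python
from portkey_ai import Portkey

# Try OpenAI first; if the call fails, retry the same request
# against Anthropic.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {
            "virtual_key": "openai-virtual-key",     # primary
            "override_params": {"model": "gpt-4o"},
        },
        {
            "virtual_key": "anthropic-virtual-key",  # backup
            "override_params": {"model": "claude-3-5-sonnet-latest"},
        },
    ],
}

portkey = Portkey(api_key="YOUR_PORTKEY_API_KEY", config=fallback_config)
```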
Load Balancing
Distribute requests across multiple API keys or providers based on custom weights and strategies.
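A sketch of a weighted split, with placeholder virtual keys:

```python
# Split traffic 70/30 across two keys. Setting the weights to
# something like 0.95/0.05 gives the percentage-based canary
# rollout described below.
loadbalance_config = {
    "strategy": {"mode": "loadbalance"},
    "targets": [
        {"virtual_key": "openai-key-a", "weight": 0.7},
        {"virtual_key": "openai-key-b", "weight": 0.3},
    ],
}
```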
Automatic Retries
Configurable retry logic with exponential backoff for transient failures and rate limits.
Request Timeouts
Set custom timeouts to prevent hanging requests and improve application responsiveness.
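A sketch combining retries and a timeout in one config; the status codes and timeout value are illustrative:

```python
# Retry transient failures up to 3 times and abort any request
# that runs longer than 10 seconds.
reliability_config = {
    "retry": {"attempts": 3, "on_status_codes": [429, 502, 503]},
    "request_timeout": 10000,  # milliseconds
}
```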
Conditional Routing
Route requests to different models based on content, metadata, or custom conditions.
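A sketch of a conditional router keyed on request metadata; the metadata field, target names, and virtual keys are illustrative:

```python
# Route "pro" plan users to a premium model and everyone else to a
# cheaper default, based on metadata sent with the request.
conditional_config = {
    "strategy": {
        "mode": "conditional",
        "conditions": [
            {
                "query": {"metadata.user_plan": {"$eq": "pro"}},
                "then": "premium",
            }
        ],
        "default": "standard",
    },
    "targets": [
        {"name": "premium", "virtual_key": "openai-virtual-key"},
        {"name": "standard", "virtual_key": "mistral-virtual-key"},
    ],
}
```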
Canary Testing
Gradually roll out new models or providers with percentage-based traffic splitting.
💰 Cost Optimization
Semantic Caching
Intelligent caching that understands semantic similarity. Reduce costs by up to 90% on repeated queries.
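A sketch of enabling semantic caching in a config; the TTL and virtual key are illustrative:

```python
# Serve semantically similar queries from cache for up to an hour.
cache_config = {
    "cache": {"mode": "semantic", "max_age": 3600},
    "virtual_key": "openai-virtual-key",
}
```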
Budget Limits
Set spending limits per API key, team, or project. Get alerts before hitting limits.
Rate Limits
Cap request rates per API key, team, or project to protect upstream quotas and prevent abuse.
Cost Analytics
Real-time cost tracking across all providers with detailed breakdowns by model, user, and feature.
📊 Built-in Observability
Comprehensive Metrics
Track 40+ metrics including latency, tokens, costs, cache hits, error rates, and more in real-time.
Detailed Logs
Full request/response logging with advanced filtering, search, and export capabilities.
Distributed Tracing
Trace requests across your entire AI pipeline with correlation IDs and custom metadata.
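A sketch of per-request tracing with the Python SDK; the trace ID and metadata keys are illustrative:

```python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="YOUR_PORTKEY_API_KEY",
    virtual_key="openai-virtual-key",  # placeholder virtual key
)

# Attach a correlation ID and custom metadata to a single request;
# both appear on the logged trace and can be filtered on later.
response = portkey.with_options(
    trace_id="checkout-flow-1234",
    metadata={"_user": "user-42", "env": "production"},
).chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize my cart"}],
)
```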
Custom Alerts
Set up alerts on any metric with webhook, email, or Slack notifications.
🔒 Security & Compliance
PII Detection
Automatically detect and redact sensitive information like SSN, credit cards, and personal data.
Content Filtering
Block harmful, toxic, or inappropriate content in real-time based on custom policies.
Access Controls
Fine-grained RBAC with team management, user permissions, and audit logs.
SOC2 Compliance
Enterprise-grade security with SOC2 Type II certification and GDPR compliance.
Audit Logs
Complete audit trail of all API usage, configuration changes, and user actions.
Data Privacy
Zero data retention options and deployment in your own VPC for maximum privacy.
🏢 Enterprise Features
SSO Integration
SAML 2.0 support for Okta, Azure AD, Google Workspace, and custom IdPs.
Organization Management
Multi-workspace support with hierarchical teams and department-level controls.
SLA Guarantees
99.9% uptime SLA with dedicated support and custom deployment options.
Private Deployments
Deploy Portkey in your own AWS, Azure, or GCP environment with full control.
Next Steps
Explore Portkey Features
Discover all AI Gateway capabilities beyond observability
Virtual Keys Setup
Secure your API keys and set budgets
Advanced Routing
Configure fallbacks, load balancing, and more
Built-in Analytics
Use Portkey’s native observability features
Need help? Join our Discord community