- Unified AI Gateway - Single interface for 250+ LLMs with API key management
- Centralized AI Observability - Real-time usage tracking for 40+ key metrics and logs for every request
- Governance - Real-time spend tracking, budget limits, and RBAC for your LiveKit agents
- Security Guardrails - PII detection, content filtering, and compliance controls
1. Setting up Portkey
Portkey allows you to use 250+ LLMs with your LiveKit agents, with minimal configuration required. Let’s set up the core components in Portkey that you’ll need for integration.
Connect your LLM
- Go to Integrations in the Portkey App
- Click “Add Integration” and select OpenAI
- Provision workspaces and budget/rate limits if required
Create Default Config
- Go to Configs in Portkey dashboard
- Create a new config with your OpenAI provider (a minimal example is sketched after this list)
- Save and note the Config ID for the next step
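A minimal sketch of such a default config, assuming a provider slug `@openai-dev` for the OpenAI integration created above; the slug is a placeholder, and the exact schema (for example, older workspaces use `virtual_key` instead of `provider`) should be verified against Portkey’s Configs documentation:

```json
{
  "provider": "@openai-dev",
  "override_params": {
    "model": "gpt-4o"
  }
}
```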

Configure Portkey API Key
- Go to API Keys in Portkey
- Create new API key
- Select your config from Step 2
- Generate and save your API key

2. Integrate Portkey with LiveKit
Now that you have your Portkey components set up, let’s integrate them with LiveKit agents.
Installation
Install the required packages:
Configuration
Configure the LiveKit OpenAI plugin to send its LLM requests through Portkey’s gateway using the Portkey API key you created earlier.
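A minimal sketch of both steps, assuming the `livekit-agents` and `livekit-plugins-openai` packages, Portkey’s OpenAI-compatible endpoint at `https://api.portkey.ai/v1`, and a `PORTKEY_API_KEY` environment variable; the package set and constructor parameters are assumptions to check against your installed versions:

```python
# Assumed install command:
#   pip install livekit-agents livekit-plugins-openai livekit-plugins-silero

import os

from livekit.plugins import openai

# Route the LiveKit OpenAI plugin through Portkey's gateway.
# The Portkey API key already carries the default config created above,
# so no OpenAI key is passed here.
portkey_llm = openai.LLM(
    model="gpt-4o",
    base_url="https://api.portkey.ai/v1",
    api_key=os.environ["PORTKEY_API_KEY"],
)
```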
End-to-End Example using Portkey and LiveKit
Build a simple voice assistant with Python in less than 10 minutes.
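A minimal sketch of such an assistant, assuming the LiveKit Agents 1.x `AgentSession` API, the Silero VAD plugin, and the Portkey routing shown above; class names, parameters, and model choices are assumptions to verify against the current LiveKit documentation:

```python
import os

from livekit import agents
from livekit.agents import Agent, AgentSession
from livekit.plugins import openai, silero

PORTKEY_BASE_URL = "https://api.portkey.ai/v1"


async def entrypoint(ctx: agents.JobContext):
    await ctx.connect()

    session = AgentSession(
        vad=silero.VAD.load(),
        stt=openai.STT(),  # speech-to-text (uses OPENAI_API_KEY directly)
        tts=openai.TTS(),  # text-to-speech (uses OPENAI_API_KEY directly)
        # LLM traffic is routed through Portkey for observability and governance.
        llm=openai.LLM(
            model="gpt-4o",
            base_url=PORTKEY_BASE_URL,
            api_key=os.environ["PORTKEY_API_KEY"],
        ),
    )

    await session.start(
        room=ctx.room,
        agent=Agent(instructions="You are a helpful voice assistant."),
    )
    await session.generate_reply(instructions="Greet the user and offer your assistance.")


if __name__ == "__main__":
    agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))
```

During development this would typically be started with the agents CLI (for example, `python agent.py dev`), with your LiveKit project credentials and `PORTKEY_API_KEY` set in the environment.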
3. Set Up Enterprise Governance for LiveKit
Why Enterprise Governance? If you are using LiveKit inside your organization, you need to consider several governance aspects:
- Cost Management: Controlling and tracking AI spending across teams
- Access Control: Managing which teams can use specific models
- Usage Analytics: Understanding how AI is being used across the organization
- Security & Compliance: Maintaining enterprise security standards
- Reliability: Ensuring consistent service across all users
Step 1: Implement Budget Controls & Rate Limits
Integrations enable granular control over LLM access at the team/department level. This helps you:
- Set up budget limits
- Prevent unexpected usage spikes using Rate limits
- Track departmental spending
Setting Up Department-Specific Controls:
- Navigate to Integrations in Portkey dashboard and create a new Integration
- Provision the integration to relevant workspaces with their own budgets
Step 2: Define Model Access Rules
As your AI usage scales, controlling which teams can access specific models becomes crucial. Portkey Configs provide this control layer.
Access Control Features:
- Model Restrictions: Limit access to specific models
- Data Protection: Implement guardrails for sensitive data
- Reliability Controls: Add fallbacks and retry logic
Example Configuration:
Here’s a basic configuration to route requests to OpenAI, specifically using GPT-4o:
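A sketch of that config, assuming a provider slug `@your-openai-provider` (a placeholder for the integration created in Step 1); verify the exact schema against Portkey’s Configs documentation:

```json
{
  "provider": "@your-openai-provider",
  "override_params": {
    "model": "gpt-4o"
  }
}
```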
Step 3: Implement Access Controls
Create user-specific API keys that automatically:
- Track usage per user/team with the help of metadata
- Apply appropriate configs to route requests
- Collect relevant metadata to filter logs
- Enforce access permissions
Step 4: Deploy & Monitor
Step 4: Deploy & Monitor
Step 4: Deploy & Monitor
After distributing API keys to your team members, your enterprise-ready LiveKit setup is ready to go. Each team member can now use their designated API key with the appropriate access levels and budget controls. Apply your governance setup using the integration steps from the earlier sections.
Monitor usage in the Portkey dashboard:
- Cost tracking by department
- Model usage patterns
- Request volumes
- Error rates
Enterprise Features Now Available
Your LiveKit setup now has:
- Departmental budget controls
- Model access governance
- Usage tracking & attribution
- Security guardrails
- Reliability features
Portkey Features
Now that you have an enterprise-grade LiveKit setup, let’s explore the comprehensive features Portkey provides to ensure secure, efficient, and cost-effective AI operations.
1. Comprehensive Metrics
Using Portkey you can track 40+ key metrics including cost, token usage, response time, and performance across all your LLM providers in real time. You can also filter these metrics based on custom metadata that you can set in your configs. Learn more about metadata here.
2. Advanced Logs
Portkey’s logging dashboard provides detailed logs for every request made to your LLMs. These logs include:
- Complete request and response tracking
- Metadata tags for filtering
- Cost attribution and much more…

3. Unified Access to 1600+ LLMs
You can easily switch between 1600+ LLMs. Call various LLMs such as Anthropic, Gemini, Mistral, Azure OpenAI, Google Vertex AI, AWS Bedrock, and many more by simply changing the provider in your default config object.
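For example, a sketch of pointing the same default config at Anthropic instead, assuming a placeholder provider slug `@your-anthropic-provider` and an illustrative model name:

```json
{
  "provider": "@your-anthropic-provider",
  "override_params": {
    "model": "claude-3-7-sonnet-latest"
  }
}
```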
4. Advanced Metadata Tracking
Using Portkey, you can add custom metadata to your LLM requests for detailed tracking and analytics. Use metadata tags to filter logs, track usage, and attribute costs across departments and teams.
Custom Metadata
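One way to attach metadata from a LiveKit agent is sketched below; it assumes the LiveKit OpenAI plugin accepts a preconfigured `AsyncOpenAI` client and that your gateway reads Portkey’s `x-portkey-metadata` header (both worth verifying for your versions), and the tag values are only examples:

```python
import json
import os

from openai import AsyncOpenAI
from livekit.plugins import openai as lk_openai

# A client that sends every request through Portkey with metadata attached,
# so logs and costs can be filtered per agent, department, or customer.
portkey_client = AsyncOpenAI(
    base_url="https://api.portkey.ai/v1",
    api_key=os.environ["PORTKEY_API_KEY"],
    default_headers={
        "x-portkey-metadata": json.dumps({
            "agent_type": "voice_assistant",
            "department": "support",
            "customer_id": "acme-123",
        })
    },
)

llm = lk_openai.LLM(model="gpt-4o", client=portkey_client)
```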
5. Enterprise Access Management
Budget Controls
Single Sign-On (SSO)
Organization Management
Access Rules & Audit Logs
6. Reliability Features
Fallbacks
Conditional Routing
Load Balancing
Caching
Smart Retries
Budget Limits
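As an example of the Fallbacks feature listed above, a config along these lines routes failed requests to a second provider; the provider slugs are placeholders, and the exact schema should be checked against Portkey’s reliability documentation:

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "provider": "@your-openai-provider" },
    { "provider": "@your-anthropic-provider" }
  ]
}
```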
7. Advanced Guardrails
Protect your project’s data and enhance reliability with real-time checks on LLM inputs and outputs. Leverage guardrails to:
- Prevent sensitive data leaks
- Enforce compliance with organizational policies
- PII detection and masking
- Content filtering
- Custom security rules
- Data compliance checks
Guardrails
FAQs
Can I use models other than OpenAI with LiveKit through Portkey?
Yes. Portkey’s gateway supports 1600+ LLMs, including Anthropic, Gemini, Mistral, and AWS Bedrock models. Create the corresponding integration in Portkey and reference it in your config; no LiveKit code changes are needed.
How do I track costs for different voice agents?
- Add `agent_type`, `department`, or `customer_id` tags
- View costs filtered by these tags in the Portkey dashboard
- Set up separate providers with budget limits for each use case
What happens if my primary LLM provider goes down?
Portkey’s reliability features cover this: configure fallbacks (and optionally retries or load balancing) in your config, and failed requests are automatically routed to a backup provider, as in the fallback config sketched earlier.
Can I implement custom business logic for my agents?
Yes. Portkey’s configs, guardrails, and conditional routing let you:
- Filter sensitive information
- Add custom headers or modify requests
- Implement business-specific validation
- Route requests based on custom logic
How do I migrate existing LiveKit agents to use Portkey?
- Create providers and configs in Portkey
- Update the OpenAI client initialization to use Portkey’s base URL
- Add Portkey headers with your API key and config
- No other code changes needed!
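A sketch of that change, assuming the LiveKit OpenAI plugin and the endpoint and environment variable used earlier in this guide:

```python
import os

from livekit.plugins import openai

# Before: the plugin talks to OpenAI directly using OPENAI_API_KEY.
# llm = openai.LLM(model="gpt-4o")

# After: the same plugin routed through Portkey; the config attached to the
# Portkey API key handles provider credentials and routing.
llm = openai.LLM(
    model="gpt-4o",
    base_url="https://api.portkey.ai/v1",
    api_key=os.environ["PORTKEY_API_KEY"],
)
```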
Next Steps
Ready to build production voice AI?
- Join our Discord for support and updates
- Explore more integrations
- Read about advanced configs
- Learn about guardrails

