Conductor is a macOS AI coding app built on top of Claude Code. It manages workspaces (git worktrees), sessions, and parallel development environments. Conductor configures providers through environment variables in its Settings → Providers panel. Add Portkey to get:
- 1600+ LLMs through one interface — switch providers by updating environment variables
- Observability — track costs, tokens, and latency for every request
- Reliability — automatic fallbacks, retries, and caching
- Governance — budget limits, usage tracking, and team access controls
1. Setup
- Add Provider: In the Portkey dashboard, go to Model Catalog and add the provider you want to use (for example, Anthropic).
- Configure Credentials: Enter the provider's API key and give the provider a slug, such as anthropic-prod.
- Get Portkey API Key: Create a Portkey API key from the dashboard; Conductor will authenticate with this key.
2. Configure Portkey in Conductor
Conductor sets provider configuration through environment variables in its Settings → Providers panel. Since Conductor uses Claude Code under the hood, it accepts the same ANTHROPIC_* environment variables.
Set the following variables in Settings → Providers under Claude Code.
The configuration pattern is the same for all providers; only the provider slug and model names change:
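Here is a representative set of variables, assuming a Model Catalog provider with the slug anthropic-prod from the setup step; the API key placeholder and model ID are illustrative:

```bash
# Route Claude Code traffic through Portkey's gateway (note: no /v1 suffix)
ANTHROPIC_BASE_URL="https://api.portkey.ai"
# Authenticate with your Portkey API key instead of a provider key
ANTHROPIC_AUTH_TOKEN="YOUR_PORTKEY_API_KEY"
# Must be an empty string so Claude Code doesn't send a conflicting key
ANTHROPIC_API_KEY=""
# Tell Portkey which Model Catalog provider to route to
ANTHROPIC_CUSTOM_HEADERS="x-portkey-provider: @anthropic-prod"
# Example model ID; use any model the provider exposes
ANTHROPIC_MODEL="claude-sonnet-4-20250514"
```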
3. Using Conductor with 1600+ models
With Portkey, you can route Conductor requests to any of 1600+ models. Change the x-portkey-provider header to your desired provider slug from Model Catalog.
Universal provider (any model)
Use Portkey's unified gateway to route to any supported model: change @your-provider to any provider slug (e.g., @openai-prod, @xai-prod, @moonshot-prod) and set ANTHROPIC_MODEL to your target model ID.
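For example, to point an existing session at OpenAI instead of Anthropic (the slug and model ID below are illustrative):

```bash
# Swap the provider slug; the rest of the configuration is unchanged
ANTHROPIC_CUSTOM_HEADERS="x-portkey-provider: @openai-prod"
# Set a model ID that the new provider exposes
ANTHROPIC_MODEL="gpt-4o"
```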
Why use Portkey with Conductor?
Cross-provider fallbacks
Never lose a coding session due to provider outages. Configure automatic failover:
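A minimal fallback config sketch, assuming two Model Catalog providers named @anthropic-prod and @openai-prod; attach it to your Portkey API key or pass it as a config:

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "provider": "@anthropic-prod" },
    { "provider": "@openai-prod" }
  ]
}
```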
Budget controls for agentic coding
Conductor sessions can run expensive agentic loops. Set hard limits:
- Cost limits: Maximum spend per day/week/month
- Token limits: Maximum tokens consumed
- Rate limits: Requests per minute/hour
Full session observability
Track every request in your coding session:
- Request/response logs with full context
- Token usage and cost breakdowns
- Latency metrics
- Trace IDs for grouping related requests (see the sketch below)
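One way to group a whole Conductor session under a single trace is Portkey's x-portkey-trace-id header; a sketch, with an arbitrary trace ID value:

```bash
# Groups this session's requests under one trace in Portkey's logs
# (keep your x-portkey-provider header alongside this one)
ANTHROPIC_CUSTOM_HEADERS="x-portkey-trace-id: conductor-session-42"
```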
Caching
Reduce costs and latency for repeated queries (common in iterative coding):
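A simple cache config sketch; mode can be simple (exact match) or semantic (similarity based), and max_age is in seconds (the value here is illustrative):

```json
{
  "cache": { "mode": "semantic", "max_age": 3600 }
}
```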
Common configuration
Forward headers (required)
Some Claude Code features need the anthropic-beta header forwarded. Add this to your Portkey Config:
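A config snippet using Portkey's forward_headers option to pass the header through to the upstream provider:

```json
{
  "forward_headers": ["anthropic-beta"]
}
```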
Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| API Error: 500 fetch failed | Wrong base URL | Use https://api.portkey.ai (no /v1) |
| Lightning symbol (⚡) in logs | Passthrough request | Check provider slug and API key |
| Requests not appearing | Authentication issue | Ensure ANTHROPIC_API_KEY is set to an empty string |
4. Set Up Enterprise Governance
Why Enterprise Governance?
- Cost Management: Controlling and tracking AI spending across teams
- Access Control: Managing team access and workspaces
- Usage Analytics: Understanding how AI is being used across the organization
- Security & Compliance: Maintaining enterprise security standards
- Reliability: Ensuring consistent service across all users
- Model Management: Managing what models are being used in your setup
Step 1: Implement Budget Controls & Rate Limits
Model Catalog gives you granular control over LLM access at the team/department level. This helps you:
- Set up budget limits
- Prevent unexpected usage spikes using Rate limits
- Track departmental spending
Setting Up Department-Specific Controls:
- Navigate to Model Catalog in the Portkey dashboard
- Create a new Provider for each engineering team, with its own budget and rate limits
- Configure department-specific limits

Step 2: Define Model Access Rules
As your AI usage scales, controlling which teams can access specific models becomes crucial. You can manage AI models across your organization by provisioning them at the integration level.
Step 3: Set Routing Configuration
Portkey Configs let you define how requests are routed and protected:
- Data Protection: Implement guardrails for sensitive code and data
- Reliability Controls: Add fallbacks, load balancing, retries, and smart conditional routing logic
- Caching: Implement simple and semantic caching, and more…
Example Configuration:
Here's a basic configuration to load-balance requests between OpenAI and Anthropic:
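A sketch, again assuming Model Catalog slugs @openai-prod and @anthropic-prod (the weights are illustrative):

```json
{
  "strategy": { "mode": "loadbalance" },
  "targets": [
    { "provider": "@openai-prod", "weight": 0.5 },
    { "provider": "@anthropic-prod", "weight": 0.5 }
  ]
}
```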
Step 4: Implement Access Controls
Create user-specific API keys that automatically:
- Track usage per developer/team with the help of metadata
- Apply appropriate configs to route requests
- Collect relevant metadata to filter logs
- Enforce access permissions
Step 5: Deploy & Monitor
After distributing API keys to your engineering teams, your enterprise-ready setup is complete. Each developer can now use their designated API key with the appropriate access levels and budget controls. Apply your governance setup using the integration steps from the earlier sections, then monitor usage in the Portkey dashboard:
- Cost tracking by engineering team
- Model usage patterns for AI agent tasks
- Request volumes
- Error rates and debugging logs
Enterprise Features Now Available
You now have:
- Departmental budget controls
- Model access governance
- Usage tracking & attribution
- Security guardrails
- Reliability features
Portkey Features
Now that you have an enterprise-grade setup, let's explore the comprehensive features Portkey provides to ensure secure, efficient, and cost-effective AI operations.
1. Comprehensive Metrics
Using Portkey you can track 40+ key metrics including cost, token usage, response time, and performance across all your LLM providers in real time. You can also filter these metrics based on custom metadata that you can set in your configs. Learn more about metadata here.
2. Advanced Logs
Portkey's logging dashboard provides detailed logs for every request made to your LLMs. These logs include:
- Complete request and response tracking
- Metadata tags for filtering
- Cost attribution and much more…

3. Unified Access to 1600+ LLMs
You can easily switch between 1600+ LLMs. Call various LLMs such as Anthropic, Gemini, Mistral, Azure OpenAI, Google Vertex AI, AWS Bedrock, and many more by simply changing the provider slug in your default config object.
4. Advanced Metadata Tracking
Using Portkey, you can add custom metadata to your LLM requests for detailed tracking and analytics. Use metadata tags to filter logs, track usage, and attribute costs across departments and teams.
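From Conductor, one way to attach metadata is Portkey's x-portkey-metadata header via Claude Code's custom headers; the keys and values below are illustrative (_user is Portkey's reserved user key):

```bash
# JSON metadata attached to every request; use it to filter logs
# and attribute costs (keep your other custom headers alongside)
ANTHROPIC_CUSTOM_HEADERS='x-portkey-metadata: {"_user": "alice", "team": "platform"}'
```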
5. Enterprise Access Management
- Budget Controls
- Single Sign-On (SSO)
- Organization Management
- Access Rules & Audit Logs
6. Reliability Features
- Fallbacks
- Conditional Routing
- Load Balancing
- Caching
- Smart Retries
- Budget Limits
7. Advanced Guardrails
Protect your project's data and enhance reliability with real-time checks on LLM inputs and outputs. Leverage guardrails to:
- Prevent sensitive data leaks
- Enforce compliance with organizational policies
- Detect and mask PII
- Filter content
- Apply custom security rules
- Run data compliance checks (see the config sketch below)
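A config sketch showing how guardrails can attach to routing, assuming guardrails already created in the Portkey dashboard (the IDs are placeholders):

```json
{
  "input_guardrails": ["pii-guardrail-id"],
  "output_guardrails": ["content-filter-guardrail-id"]
}
```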
FAQs
How do I update my AI Provider limits after creation?
Edit the Provider in Model Catalog; its budget and rate limits can be adjusted at any time.
Can I use multiple LLM providers with the same API key?
Yes. A single Portkey API key can route to any provider in your Model Catalog; switch by changing the provider slug.
How do I track costs for different teams?
- Create separate AI Providers for each team
- Use metadata tags in your configs
- Set up team-specific API keys
- Monitor usage in the analytics dashboard
What happens if a team exceeds their budget limit?
- Further requests will be blocked
- Team admins receive notifications
- Usage statistics remain available in dashboard
- Limits can be adjusted if needed

