Add usage tracking, cost controls, and security guardrails to your Jan deployment
Create Virtual Key
Create Default Config
Configure Portkey API Key
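These three pieces fit together as follows: the virtual key stores the provider credentials, the default config routes requests to that virtual key, and the config is attached to the Portkey API key you will give to Jan. Below is a minimal sketch of such a config, assuming a placeholder virtual key slug of `openai-prod-vk`; the config itself is normally created in the Portkey dashboard.

```python
# A minimal sketch of a Portkey default config, assuming a virtual key
# with the placeholder slug "openai-prod-vk" already exists in your
# Portkey dashboard. You would typically build this config in the
# Portkey UI and attach it to the Portkey API key that Jan will use.
import json

default_config = {
    # Send every request to a single target.
    "strategy": {"mode": "single"},
    "targets": [
        {
            # The virtual key holds the real provider credentials and
            # any budget / rate limits you set on it.
            "virtual_key": "openai-prod-vk",
        }
    ],
}

print(json.dumps(default_config, indent=2))
```

Once the config is attached to your Portkey API key, Jan only ever sees that one key; provider credentials and routing stay on the Portkey side.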
Step 2: Configure Jan to Use Portkey
In Jan, open Settings > Model Providers > OpenAI and set the API endpoint to:
https://api.portkey.ai/v1/chat/completions
Then paste the Portkey API key you configured above into the provider's API key field.
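Before switching Jan over, you can sanity-check the endpoint with a request shaped like the ones Jan will send. This is only a sketch: the `PORTKEY_API_KEY` environment variable and the model name are placeholders, and the model must be one your config or virtual key allows.

```python
# Sanity-check the gateway endpoint with a standard OpenAI-style
# chat completion request. Placeholders: the PORTKEY_API_KEY env var
# and the model name.
import os
import requests

resp = requests.post(
    "https://api.portkey.ai/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "x-portkey-api-key": os.environ["PORTKEY_API_KEY"],
    },
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```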
Step 1: Implement Budget Controls & Rate Limits
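Budget and rate limits are set on each team's virtual key in the Portkey dashboard rather than in code. The sketch below is purely illustrative: the field names are placeholders describing the kind of per-team limits you might record, not an actual Portkey API payload.

```python
# Illustrative only: these field names are placeholders for the limits
# you would configure on each team's virtual key in the Portkey
# dashboard; this is not a real Portkey API payload.
team_limits = {
    "engineering-vk": {"monthly_budget_usd": 500, "requests_per_minute": 60},
    "support-vk":     {"monthly_budget_usd": 100, "requests_per_minute": 20},
}

for virtual_key, limits in team_limits.items():
    print(f"{virtual_key}: ${limits['monthly_budget_usd']}/month, "
          f"{limits['requests_per_minute']} req/min")
```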
Step 2: Define Model Access Rules
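One way to express an access rule is in the default config itself. The sketch below pins all Jan traffic to a single approved model via `override_params`; the virtual key slug and model name are placeholders.

```python
# A sketch of a config that pins Jan traffic to one approved model.
# "override_params" is merged into each outgoing request, so the
# gateway forwards the pinned model regardless of what the client set.
# Placeholders: the virtual key slug and the model name.
import json

restricted_config = {
    "strategy": {"mode": "single"},
    "targets": [
        {
            "virtual_key": "openai-prod-vk",
            "override_params": {"model": "gpt-4o-mini"},
        }
    ],
}

print(json.dumps(restricted_config, indent=2))
```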
Step 3: Implement Access Controls
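In practice, access control usually means giving each team its own Portkey API key, attached to its own config and virtual key. For programmatic traffic that runs alongside Jan, requests can additionally be tagged with metadata so usage is attributable per user. The sketch below uses Portkey's metadata header; the environment variable, user name, and team label are placeholders.

```python
# A sketch of per-team attribution via request metadata, assuming each
# team has its own Portkey API key. The x-portkey-metadata header
# carries a JSON object that appears alongside each logged request.
# Placeholders: the env var, user name, and team label.
import json
import os
import requests

resp = requests.post(
    "https://api.portkey.ai/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "x-portkey-api-key": os.environ["PORTKEY_API_KEY_ENGINEERING"],
        "x-portkey-metadata": json.dumps({"_user": "alice", "team": "engineering"}),
    },
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "hello"}],
    },
    timeout=30,
)
print(resp.status_code)
```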
Step 4: Deploy & Monitor
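During rollout it can help to tag smoke-test requests with a trace ID so they are easy to find in Portkey's logs. The sketch below uses Portkey's trace-ID header; the ID prefix, environment variable, and model name are placeholders.

```python
# A sketch of a rollout smoke test that is easy to locate in Portkey's
# logs via a trace ID. Placeholders: the env var, trace-ID prefix, and
# model name.
import os
import uuid
import requests

trace_id = f"jan-rollout-{uuid.uuid4()}"

resp = requests.post(
    "https://api.portkey.ai/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "x-portkey-api-key": os.environ["PORTKEY_API_KEY"],
        "x-portkey-trace-id": trace_id,
    },
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "smoke test"}],
    },
    timeout=30,
)
print(trace_id, resp.status_code)
```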
Budget and rate limits live on the virtual key, while model access rules are defined in your default config object.
How do I update my Virtual Key limits after creation?
Can I use multiple LLM providers with the same API key?
How do I track costs for different teams?
What happens if a team exceeds their budget limit?