Add usage tracking, cost controls, and security guardrails to your AnythingLLM deployment with Portkey.
Step 1: Set Up Portkey

- Create Virtual Key: In the Portkey dashboard, add your LLM provider's API key (OpenAI, Anthropic, etc.) as a virtual key. Budget and rate limits can be attached to the virtual key when you create it.
- Create Default Config: Create a config that references the virtual key; routing rules such as retries, fallbacks, and caching are also defined here (see the sketch after this list).
- Configure Portkey API Key: Generate a Portkey API key and attach the default config to it. This is the only key you will enter into AnythingLLM.
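For reference, a default config is a small JSON object. The sketch below expresses one as a Python dict; the virtual key slug "openai-prod" and the pinned model are placeholders, and `override_params` is optional.

```python
# Minimal sketch of a Portkey default config, written as a Python dict.
# "openai-prod" is a placeholder virtual key slug; override_params is optional
# and can be used to pin the model that requests are routed to.
default_config = {
    "virtual_key": "openai-prod",
    "override_params": {"model": "gpt-4"},
}
```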
Step 2: Configure AnythingLLM

In AnythingLLM, open Settings > AI Providers > LLM and set:
- LLM Provider: Generic OpenAI
- Base URL: https://api.portkey.ai/v1
- API Key: your Portkey API key from Step 1
- Chat Model Name: the model you want to use (e.g. gpt-4 or claude-2, matching what your Portkey config routes to)
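To confirm the credentials AnythingLLM will use actually work, you can send a test request through Portkey's OpenAI-compatible endpoint. This is a minimal sketch using the OpenAI Python SDK; `PORTKEY_API_KEY` is a placeholder for the key created in Step 1.

```python
# Sketch: verify the Portkey API key and base URL that AnythingLLM will use.
# Assumes the `openai` Python package (v1+) is installed; PORTKEY_API_KEY is
# a placeholder for the Portkey API key with your default config attached.
from openai import OpenAI

client = OpenAI(
    api_key="PORTKEY_API_KEY",             # Portkey API key, not a provider key
    base_url="https://api.portkey.ai/v1",  # same base URL entered in AnythingLLM
)

reply = client.chat.completions.create(
    model="gpt-4",  # should match the Chat Model Name set in AnythingLLM
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)
```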
Set up enterprise governance for your deployment:

Step 1: Implement Budget Controls & Rate Limits
Attach budget limits and rate limits to each team's virtual key in the Portkey dashboard so spend and request volume are capped per team.
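Once a budget or rate limit is exhausted, Portkey stops forwarding that key's requests. The sketch below shows one way to surface such a rejection in client code, assuming the OpenAI Python SDK and a placeholder Portkey API key; the exact HTTP status returned is not assumed, so the generic status error is caught.

```python
# Sketch: surface a budget/rate-limit rejection from Portkey in client code.
# Assumes the `openai` package (v1+); PORTKEY_API_KEY is a placeholder.
import openai
from openai import OpenAI

client = OpenAI(api_key="PORTKEY_API_KEY", base_url="https://api.portkey.ai/v1")

try:
    client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "ping"}],
    )
except openai.APIStatusError as err:
    # Alert the team that its Portkey limit was hit.
    print(f"Request rejected (HTTP {err.status_code}): {err}")
```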
Step 2: Define Model Access Rules
Use Portkey configs to control which models each team can call, for example by pinning a model per team, as sketched below.
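Model access rules live in the config attached to each team's Portkey API key. A hedged sketch of two such configs as Python dicts; the virtual key slug, model names, and team names are placeholders.

```python
# Sketch: per-team configs that pin which model each team's requests use.
# Attach each config to that team's Portkey API key so the restriction is
# enforced at the gateway rather than in AnythingLLM.
engineering_config = {
    "virtual_key": "openai-prod",
    "override_params": {"model": "gpt-4"},          # full-capability model
}

support_config = {
    "virtual_key": "openai-prod",
    "override_params": {"model": "gpt-3.5-turbo"},  # cheaper model for high volume
}
```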
Step 3: Implement Access Controls
Create a separate Portkey API key for each team, attach that team's config to it, and tag requests with metadata for per-user attribution, as in the sketch below.
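A sketch of a per-team client that sends attribution metadata via Portkey's `x-portkey-metadata` header; the API key, team name, and user id are placeholders.

```python
# Sketch: per-team client that tags every request with metadata for attribution.
# The x-portkey-metadata header carries a JSON string; the keys shown here
# (_user, team) and the key value are illustrative placeholders.
import json
from openai import OpenAI

team_client = OpenAI(
    api_key="TEAM_PORTKEY_API_KEY",        # this team's own Portkey API key
    base_url="https://api.portkey.ai/v1",
    default_headers={
        "x-portkey-metadata": json.dumps(
            {"_user": "alice@example.com", "team": "support"}
        ),
    },
)
```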
Step 4: Deploy & Monitor
Roll out the team-specific keys and track usage, costs, and latency from the Portkey dashboard, where every AnythingLLM request routed through the gateway is logged.
Because every request flows through the virtual key referenced in your default config object, any limit or access change you make in Portkey takes effect immediately, with no changes needed inside AnythingLLM.
FAQs:
- How do I update my Virtual Key limits after creation?
- Can I use multiple LLM providers with the same API key?
- How do I track costs for different teams?
- What happens if a team exceeds their budget limit?