AI Gateway
The world’s fastest AI Gateway with advanced routing & integrated Guardrails.
Features
Universal API
Use any of the supported models with a single universal API (REST and SDKs); see the example after this list
Cache (Simple & Semantic)
Save costs and reduce latency by caching responses
Fallbacks
Fall back across providers and models for resilience
Conditional Routing
Route to different targets based on custom conditional checks
Multimodality
Use vision, audio, image-generation, and other multimodal models
Automatic Retries
Set up automatic retry strategies
Load Balancing
Load balance across multiple API keys to mitigate rate limits
Canary Testing
Canary test new models in production
Vault
Manage AI provider keys in a secure vault
Request Timeout
Easily handle unresponsive LLM requests
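For example, here is a minimal sketch of the Universal API, assuming the Python SDK (`portkey_ai`) and placeholder API and virtual keys; the same request shape works across supported providers:

```python
from portkey_ai import Portkey

# Placeholder credentials: a Portkey API key plus a virtual key that maps
# to a provider key stored in the Vault.
portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="PROVIDER_VIRTUAL_KEY",
)

# OpenAI-compatible chat completion routed through the gateway.
completion = portkey.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(completion.choices[0].message.content)
```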
Using the Gateway
The gateway strategies listed above are implemented through Gateway configs. You can read more about configs below.
Configs
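As a rough illustration of how strategies compose, here is a hedged sketch of a config that combines retries, caching, and a provider fallback; the exact keys and virtual-key names below are assumptions, so treat the Configs reference as authoritative:

```python
from portkey_ai import Portkey

# Illustrative config only: the key names follow the gateway's routing-config
# style, and the virtual keys are hypothetical placeholders.
config = {
    "retry": {"attempts": 3},            # Automatic Retries
    "cache": {"mode": "semantic"},       # Cache (Simple & Semantic)
    "strategy": {"mode": "fallback"},    # Fallbacks
    "targets": [
        {"virtual_key": "openai-primary"},      # hypothetical primary target
        {"virtual_key": "anthropic-fallback"},  # hypothetical fallback target
    ],
}

# The config is attached to the client, so every request through it
# inherits the retry, cache, and fallback behavior.
portkey = Portkey(api_key="PORTKEY_API_KEY", config=config)

response = portkey.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Give me a one-line summary of RAG."}],
)
```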
Open Source
We’ve open-sourced our battle-tested AI gateway for the community. You can run it locally with a single command:
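For example, assuming Node.js is installed, the gateway can be started from its published npm package:

```sh
npx @portkey-ai/gateway
```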
While you’re here, why not give us a star? It helps us a lot!
You can also self-host the gateway and then connect it to Portkey. Please reach out on [email protected] and we’ll help you set this up!