# Bring Your Own LLM
Portkey provides a robust and secure platform to observe, integrate, and manage your locally or privately hosted custom models.
## Integrating Custom Models with Portkey SDK
You can integrate any custom LLM with Portkey as long as its API is compliant with any of the 15+ providers Portkey already supports.
### 1. Install the Portkey SDK
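The SDK is available from the standard package registries. For example (the `portkey-ai` package name is the one Portkey publishes for both ecosystems):

```shell
# Python SDK
pip install portkey-ai

# or the Node.js SDK
npm install portkey-ai
```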
### 2. Initialize Portkey with your Custom URL
Instead of using a **provider** + authorization pair or a **virtualKey** referring to the provider, you can specify a **provider** + **custom_host** pair while instantiating the Portkey client. **custom_host** here refers to the URL where your custom model is hosted, including the API version identifier. More on **custom_host** here.
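As a minimal sketch, assuming a self-hosted model that speaks the OpenAI API format and is served locally on port 8080 (the URL, API key, and port are placeholders):

```python
from portkey_ai import Portkey

# Route requests to a self-hosted model instead of a managed provider:
# `provider` names the API spec your model implements, and `custom_host`
# points at your model's base URL, including the version identifier.
portkey = Portkey(
    api_key="PORTKEY_API_KEY",               # your Portkey API key (placeholder)
    provider="openai",                       # API format your model is compliant with
    custom_host="http://localhost:8080/v1",  # where your model is hosted (placeholder)
)
```

With `custom_host` set, Portkey routes the request to your URL rather than to the named provider's own endpoint; the `provider` value only tells the gateway which request/response schema to expect.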
### 3. Invoke Chat Completions
Use the Portkey SDK to invoke chat completions from your model, just as you would with any other provider.
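A sketch of such a call, assuming the client was initialized with a `custom_host` as in step 2 (the model name is a placeholder for whatever name your server exposes, and the call requires a reachable endpoint):

```python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",               # placeholder
    provider="openai",
    custom_host="http://localhost:8080/v1",  # placeholder: your model's base URL
)

# The call signature mirrors OpenAI-style chat completions.
completion = portkey.chat.completions.create(
    model="my-local-model",  # placeholder: the model name your server expects
    messages=[{"role": "user", "content": "Say hello"}],
)
print(completion.choices[0].message.content)
```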
## Forward Sensitive Headers Securely
When integrating custom LLMs with Portkey, you may have sensitive information in your request headers that you don’t want Portkey to track or log. Portkey provides a secure way to forward specific headers directly to your model’s API without any processing.
Just specify an array of header names using the **forward_headers** property when initializing the Portkey client. Portkey will then forward these headers directly to your custom host URL without logging or tracking them.
Here’s an example:
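For instance, a sketch with two illustrative header names (exact casing conventions for header names can differ between the SDKs, so check the SDK reference for your language):

```python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",               # placeholder
    provider="openai",
    custom_host="http://localhost:8080/v1",  # placeholder
    # Headers named here are passed through to the custom host as-is,
    # without being logged or tracked by Portkey (names are illustrative).
    forward_headers=["X-Api-Key", "X-Org-Id"],
)
```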
### Forward Headers in the Config Object
You can also define **forward_headers** in your Config object and then pass the headers directly while making a request.
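A sketch of what such a Config might look like, with the same fields used when instantiating the client (values are placeholders):

```json
{
  "provider": "openai",
  "custom_host": "http://localhost:8080/v1",
  "forward_headers": ["X-Api-Key", "X-Org-Id"]
}
```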
## Next Steps
Explore the complete list of features supported in the SDK:
SDK