Portkey provides a robust and secure platform to observe, integrate, and manage your locally or privately hosted custom models.

Integrating Custom Models with Portkey SDK

You can integrate any custom LLM with Portkey as long as its API is compliant with one of the 15+ providers Portkey already supports.

1. Install the Portkey SDK

npm install --save portkey-ai

2. Initialize Portkey with your Custom URL

Instead of using a provider + authorization pair or a virtualKey referring to the provider, you can specify a **provider** + **custom_host** pair while instantiating the Portkey client.

custom_host here refers to the URL where your custom model is hosted, including the API version identifier.

import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY",
    provider: "PROVIDER_NAME", // Any supported provider whose API your model follows, e.g. mistral-ai or openai
    customHost: "http://MODEL_URL/v1/", // Your custom URL with version identifier
    authorization: "AUTH_KEY", // If you need to pass auth
})
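
These client options map onto Portkey's gateway headers, so you can make the same call over plain HTTP without the SDK. Here's a sketch using fetch; it assumes the x-portkey-* header names Portkey's gateway uses, and MODEL_URL and the keys remain placeholders:

// Equivalent request over plain HTTP (sketch; x-portkey-* header names
// follow Portkey's gateway conventions, MODEL_URL is a placeholder)
const response = await fetch("https://api.portkey.ai/v1/chat/completions", {
    method: "POST",
    headers: {
        "Content-Type": "application/json",
        "x-portkey-api-key": "PORTKEY_API_KEY",
        "x-portkey-provider": "PROVIDER_NAME",
        "x-portkey-custom-host": "http://MODEL_URL/v1",
        "Authorization": "AUTH_KEY" // If your model needs auth
    },
    body: JSON.stringify({
        messages: [{ role: "user", content: "Say this is a test" }]
    })
});

console.log(await response.json());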

More on custom_host here.

3. Invoke Chat Completions

Use the Portkey SDK to invoke chat completions from your model, just as you would with any other provider.

const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }]
});

console.log(chatCompletion.choices);
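
Because the interface is OpenAI-compatible, streaming works the same way. A minimal sketch, assuming your hosted model supports streamed responses:

// Stream tokens as they arrive (sketch; requires a model that supports streaming)
const stream = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    stream: true
});

for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
}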

Forward Sensitive Headers Securely

When integrating custom LLMs with Portkey, you may have sensitive information in your request headers that you don’t want Portkey to track or log. Portkey provides a secure way to forward specific headers directly to your model’s API without any processing.

Just specify an array of header names using the **forwardHeaders** property when initializing the Portkey client (written as **forward_headers** in Configs and the REST API). Portkey will then forward these headers directly to your custom host URL without logging or tracking them.

Here’s an example:

import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY",
    provider: "PROVIDER_NAME", // Any supported provider whose API your model follows, e.g. mistral-ai or openai
    customHost: "http://MODEL_URL/v1/", // Your custom URL with version identifier
    authorization: "AUTH_KEY", // If you need to pass auth
    forwardHeaders: [ "authorization" ]
})

Forward Headers in the Config Object

You can also define forward_headers in your Config object and then pass the headers directly while making a request.

{
    "strategy": {
        "mode": "loadbalance"
    },
    "targets": [
        {
            "provider": "openai",
            "api_key": ""
        },
        {
            "strategy": {
                "mode": "fallback"
            },
            "targets": [
                {
                    "provider": "azure-openai",
                    "custom_host": "http://MODEL_URL/v1",
                    "forward_headers": ["my-auth-header-1", "my-auth-header-2"]
                },
                {
                    "provider": "openai",
                    "api_key": "sk-***"
                }
            ]
        }
    ]
}
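
To use this Config, attach it when instantiating the client and supply the header values on the request itself. The sketch below assumes the SDK accepts OpenAI-style per-request options as a second argument; the Config ID and header values are placeholders:

import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY",
    config: "CONFIG_ID" // A saved Config ID, or the Config JSON above passed inline
})

// Per-request headers are a sketch: this assumes the SDK supports
// OpenAI-style request options; header values are placeholders
const chatCompletion = await portkey.chat.completions.create(
    { messages: [{ role: 'user', content: 'Say this is a test' }] },
    { headers: { "my-auth-header-1": "VALUE_1", "my-auth-header-2": "VALUE_2" } }
);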

Next Steps

Explore the complete list of features supported in the SDK.
You’ll find more information in the relevant sections:

  1. Add metadata to your requests
  2. Add gateway configs to your requests
  3. Trace your requests
  4. Set up a fallback from OpenAI to your local LLM