Portkey provides a robust and secure gateway to seamlessly integrate open-source and fine-tuned LLMs from Predibase into your applications. With Portkey, you can leverage powerful features like a fast AI gateway, caching, observability, prompt management, and more, while securely managing your LLM API keys through a virtual key system.

Provider Slug: predibase

Portkey SDK Integration with Predibase

Using Portkey, you can call your Predibase models with the familiar OpenAI spec and try out your existing pipelines on Predibase fine-tuned models with a 2-line code change.

1. Install the Portkey SDK

Install the Portkey SDK in your project using npm or pip:

npm install --save portkey-ai
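
Or, if you're using the Python SDK, install it with pip:

pip install portkey-ai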

2. Initialize Portkey with the Virtual Key

To use Predibase with Portkey, get your API key from the Predibase dashboard, then add it to Portkey to create the virtual key.

import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
    virtualKey: "VIRTUAL_KEY" // Your Predibase Virtual Key
})

3. Invoke Chat Completions on Predibase Serverless Endpoints

Predibase offers LLMs like Llama 3, Mistral, Gemma, etc. on its serverless infra that you can query instantly.

Sending Your Predibase Tenant ID

Predibase expects your account tenant ID along with the API key in each request. With Portkey, you can send your Tenant ID with the **user** param while making your request.

const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'llama-3-8b',
    user: 'PREDIBASE_TENANT_ID'
});

console.log(chatCompletion.choices);
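
Because Portkey follows the OpenAI spec, you can also stream responses from these endpoints. Here is a minimal sketch, assuming the serverless endpoint supports streaming: set stream: true and iterate over the chunks.

// Stream tokens as they arrive instead of waiting for the full response
const stream = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'llama-3-8b',
    user: 'PREDIBASE_TENANT_ID',
    stream: true
});

for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
}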

4. Invoke Predibase Fine-Tuned Models

With Portkey, you can send your fine-tuned model & adapter details directly in the model param while making a request.

The format is:

model = <base_model>:<adapter-repo-name/adapter-version-number>

For example, if your base model is llama-3-8b and the adapter repo name is sentiment-analysis, you can make a request like this:

const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'llama-3-8b:sentiment-analysis/1',
    user: 'PREDIBASE_TENANT_ID'
});

console.log(chatCompletion.choices);

Routing to Dedicated Deployments

Using Portkey, you can easily route to your dedicated deployments as well. Just pass the dedicated deployment name in the model param:

model = "my-dedicated-mistral-deployment-name"

JSON Schema Mode

You can enforce a JSON schema for all Predibase models: set the response_format to json_object and pass the relevant schema while making your request. Portkey logs will show your JSON output separately.

const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'llama-3-8b',
    user: 'PREDIBASE_TENANT_ID',
    response_format: {
        "type": "json_object",
        "schema": {
            "properties": {
                "name": {"maxLength": 10, "title": "Name", "type": "string"},
                "age": {"title": "Age", "type": "integer"},
                "strength": {"title": "Strength", "type": "integer"}
            },
            "required": ["name", "age", "strength"],
            "title": "Character",
            "type": "object"
        }
    }
});

console.log(chatCompletion.choices);
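
Since the message content comes back as a JSON string, you can parse it directly. A minimal sketch, assuming the model returns output conforming to the schema above:

// Parse the schema-constrained output into a plain object
const character = JSON.parse(chatCompletion.choices[0].message.content);
console.log(character.name, character.age, character.strength);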

Next Steps

The complete list of features supported in the SDK is available at the link below.

SDK

You’ll find more information in the relevant sections:

  1. Add metadata to your requests
  2. Add gateway configs to your Predibase requests
  3. Tracing Predibase requests
  4. Setup a fallback from OpenAI to Predibase