Portkey provides a robust and secure gateway for integrating various Large Language Models (LLMs) into your applications, including the models hosted on the Deepinfra API.

Provider Slug: deepinfra

Portkey SDK Integration with Deepinfra Models

Portkey provides a consistent API to interact with models from various providers. To integrate Deepinfra with Portkey:

1. Install the Portkey SDK

Add the Portkey SDK to your application to interact with Deepinfra's API through Portkey's gateway.

npm install --save portkey-ai

2. Initialize Portkey with the Virtual Key

To use Deepinfra with a virtual key, get your Deepinfra API key, then add it to Portkey to create the virtual key.

import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
    virtualKey: "VIRTUAL_KEY" // Your Deepinfra Virtual Key
})

3. Invoke Chat Completions

const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'nvidia/Nemotron-4-340B-Instruct',
});

console.log(chatCompletion.choices);
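Because Portkey exposes an OpenAI-compatible interface, the same call can also stream tokens as they are generated by passing `stream: true`. This is a minimal sketch assuming the same virtual-key setup as above; the model name is the one used in the earlier example and is illustrative.

```javascript
import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY",   // defaults to process.env["PORTKEY_API_KEY"]
    virtualKey: "VIRTUAL_KEY"    // Your Deepinfra Virtual Key
});

// stream: true switches the response to an async iterable of chunks
const stream = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'nvidia/Nemotron-4-340B-Instruct',
    stream: true,
});

for await (const chunk of stream) {
    // Each chunk carries an incremental delta rather than a full message
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```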

Supported Models

Here’s the list of all the Deepinfra models you can route to using Portkey:

Next Steps

The complete list of features supported in the SDK is available at the link below.

SDK

You’ll find more information in the relevant sections:

  1. Add metadata to your requests
  2. Add gateway configs to your Deepinfra requests
  3. Tracing Deepinfra requests
  4. Set up a fallback from OpenAI to Deepinfra
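As a sketch of the fallback item above, a Portkey gateway config can declare an ordered list of targets that the gateway tries in turn. The virtual key names below are placeholders for keys you would create in your Portkey account.

```javascript
// Gateway config sketch: fall back from OpenAI to Deepinfra.
// Targets are tried in order; the second is used if the first errors.
const fallbackConfig = {
    strategy: { mode: "fallback" },
    targets: [
        { virtual_key: "openai-virtual-key" },    // placeholder: your OpenAI virtual key
        { virtual_key: "deepinfra-virtual-key" }  // placeholder: your Deepinfra virtual key
    ]
};

// The config is then passed when constructing the client, e.g.:
// const portkey = new Portkey({ apiKey: "PORTKEY_API_KEY", config: fallbackConfig });
```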