Provider slug: inference-net

Portkey SDK Integration with Inference.net

Portkey provides a consistent API to interact with models from various providers. To integrate Inference.net with Portkey:

1. Install the Portkey SDK

npm install --save portkey-ai

2. Initialize Portkey with Inference.net Authorization

  • Set the provider name to inference-net
  • Pass your Inference.net API key via the Authorization header

import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
    provider: "inference-net",
    Authorization: "Bearer INFERENCE_NET_API_KEY"
})

3. Invoke Chat Completions

const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'llama3',
});
console.log(chatCompletion.choices);
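The SDK follows the OpenAI-compatible interface, so the same create call should also accept stream: true for token-by-token output. A minimal sketch, assuming the same placeholder keys and model ID as the steps above:

```typescript
import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY",
    provider: "inference-net",
    Authorization: "Bearer INFERENCE_NET_API_KEY"
})

// Request a streamed completion instead of waiting for the full response
const stream = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'llama3',
    stream: true,
});

for await (const chunk of stream) {
    // Each chunk carries an incremental delta in the OpenAI-compatible shape
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```

This prints the reply as it is generated rather than after the whole completion arrives.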

Supported Models

You can find more information about the models supported by Inference.net here:

Inference.net

Next Steps

The complete list of features supported in the SDK is available at the link below.

SDK

You’ll find more information in the relevant sections:

  1. Add metadata to your requests
  2. Add gateway configs to your Inference.net requests
  3. Trace Inference.net requests
  4. Set up a fallback from OpenAI to Inference.net
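As an illustration of the fallback item above, a gateway config can be passed to the client at initialization. This is a sketch, not a definitive implementation: the strategy/targets shape follows Portkey's standard gateway config format, and the key names are placeholders.

```typescript
import Portkey from 'portkey-ai'

// Hedged sketch: try OpenAI first, fall back to Inference.net on failure.
const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
    config: {
        strategy: { mode: "fallback" },
        targets: [
            { provider: "openai", api_key: "OPENAI_API_KEY" },
            { provider: "inference-net", api_key: "INFERENCE_NET_API_KEY" }
        ]
    }
})
```

Requests made through this client are routed to OpenAI by default and retried against Inference.net when the primary target fails.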