Portkey provides a robust and secure gateway to facilitate the integration of various Large Language Models (LLMs) into your applications, including your locally hosted models through Ollama.

Provider Slug: ollama

Portkey SDK Integration with Ollama Models

Portkey provides a consistent API to interact with models from various providers.

If you are running the open source Portkey Gateway, refer to this guide on how to connect Portkey with Ollama.

1. Expose your Ollama API

Expose your Ollama API using a tunneling service such as ngrok, or any other method you prefer.

You can skip this step if you’re self-hosting the Gateway.

For using Ollama with ngrok, here’s a useful guide

ngrok http 11434 --host-header="localhost:11434"

2. Install the Portkey SDK

Install the Portkey SDK in your application to interact with your Ollama API through Portkey.

npm install --save portkey-ai

3. Initialize Portkey with Ollama URL

Instantiate the Portkey client by passing your publicly exposed Ollama URL to the customHost property.

import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
    provider: "ollama",
    customHost: "https://7cc4-3-235-157-146.ngrok-free.app" // Your Ollama ngrok URL
})

For the Ollama integration, you only need to pass the base URL to customHost without the version identifier (such as /v1) - Portkey takes care of the rest!

4. Invoke Chat Completions with Ollama

Use the Portkey SDK to invoke chat completions from your Ollama model, just as you would with any other provider.

const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'llama3',
});

console.log(chatCompletion.choices);
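
Streaming works through the same OpenAI-compatible signature. Below is a minimal sketch, assuming your Ollama model supports streaming and reusing the portkey client initialized above; the prompt and model name are placeholders:

const stream = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Write a haiku about local LLMs' }],
    model: 'llama3',
    stream: true, // ask the Gateway to stream tokens as they are generated
});

for await (const chunk of stream) {
    // each chunk carries a partial delta, mirroring the OpenAI streaming format
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
}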

Local Setup (npm or Docker)

First, install the Gateway locally:

npx @portkey-ai/gateway
Your Gateway is running on http://localhost:8080/v1 🚀

Then, simply point the baseURL to the Gateway URL, set customHost to the Ollama URL, and make requests as usual (see the sketch after the Docker note below).

If you are running Portkey inside a Docker container but Ollama is running natively on your machine (i.e. not in Docker), refer to Ollama as http://host.docker.internal:11434 so the Gateway can reach it.
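
As a rough sketch, assuming the Gateway is running on its default port 8080 and Ollama is installed natively, the client initialization then looks like this:

import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY",
    baseURL: "http://localhost:8080/v1", // your locally running Gateway
    provider: "ollama",
    // use http://host.docker.internal:11434 instead if the Gateway runs in Docker
    customHost: "http://localhost:11434" // your local Ollama URL
})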

Next Steps

Explore the complete list of features supported in the SDK:

SDK


You’ll find more information in the relevant sections:

  1. Add metadata to your requests
  2. Add gateway configs to your Ollama requests
  3. Trace Ollama requests
  4. Set up a fallback from OpenAI to Ollama APIs (sketch below)
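
For item 4, a gateway config with a fallback strategy can be attached directly to the client. The sketch below is illustrative only; the OpenAI virtual key and the Ollama URL are placeholders you would replace with your own values:

import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY",
    config: {
        strategy: { mode: "fallback" }, // try targets in order until one succeeds
        targets: [
            { virtual_key: "OPENAI_VIRTUAL_KEY" },                          // primary: OpenAI
            { provider: "ollama", custom_host: "https://your-ollama-url" }  // fallback: Ollama
        ]
    }
})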