Ollama
Portkey provides a robust and secure gateway to facilitate the integration of various Large Language Models (LLMs) into your applications, including models you host locally with Ollama.
Provider Slug: `ollama`
Portkey SDK Integration with Ollama Models
Portkey provides a consistent API to interact with models from various providers.
If you are running the open source Portkey Gateway, refer to this guide on how to connect Portkey with Ollama.
1. Expose your Ollama API
Expose your Ollama API using a tunneling service like ngrok or any other method you prefer (for example, `ngrok http 11434` forwards Ollama’s default port). You can skip this step if you’re self-hosting the Gateway.
For using Ollama with ngrok, here’s a useful guide.
2. Install the Portkey SDK
Install the Portkey SDK in your application to interact with your Ollama API through Portkey, e.g. `npm install portkey-ai` for Node.js or `pip install portkey-ai` for Python.
3. Initialize Portkey with Ollama URL
Instantiate the Portkey client by adding your publicly exposed Ollama URL to the `customHost` property.
For the Ollama integration, you only need to pass the base URL to `customHost` without the version identifier (such as `/v1`); Portkey takes care of the rest!
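A minimal sketch in Node.js, assuming a hypothetical ngrok tunnel URL and that your Portkey API key is exported as `PORTKEY_API_KEY`:

```ts
import Portkey from 'portkey-ai';

// customHost points at the publicly exposed Ollama server.
// Note: no trailing /v1 here; Portkey appends the version path itself.
const portkey = new Portkey({
  apiKey: process.env.PORTKEY_API_KEY,
  provider: 'ollama',
  customHost: 'https://1234-01-234-56-789.ngrok-free.app', // hypothetical tunnel URL
});
```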
4. Invoke Chat Completions with Ollama
Use the Portkey SDK to invoke chat completions from your Ollama model, just as you would with any other provider.
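For example, continuing the sketch above (the model name `llama3` is an assumption; substitute any model you have pulled into your Ollama instance):

```ts
// Chat completion against the locally hosted model, routed through Portkey
const response = await portkey.chat.completions.create({
  model: 'llama3', // assumption: any model pulled via `ollama pull`
  messages: [{ role: 'user', content: 'Say this is a test' }],
});

console.log(response.choices[0].message.content);
```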
Local Setup (npm or docker)
First, install and run the Gateway locally (for example, with `npx @portkey-ai/gateway`):

Your Gateway is running on http://localhost:8080/v1 🚀
Then, just change the `baseURL` to the Gateway URL, set `customHost` to the Ollama URL, and make requests as usual; see the sketch after the note below.
If you are running Portkey inside a Docker container, but Ollama is running natively on your machine (i.e. not in Docker), you will have to refer to Ollama using `http://host.docker.internal:11434` for the Gateway to be able to call it.
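Putting it together, a sketch of the local setup (again assuming the `portkey-ai` Node.js SDK; the commented-out line covers the Docker case described above):

```ts
import Portkey from 'portkey-ai';

const portkey = new Portkey({
  apiKey: process.env.PORTKEY_API_KEY,
  provider: 'ollama',
  baseURL: 'http://localhost:8080/v1',  // the locally running Gateway
  customHost: 'http://localhost:11434', // Ollama's default local address
  // Gateway in Docker, Ollama running natively? Use instead:
  // customHost: 'http://host.docker.internal:11434',
});

const response = await portkey.chat.completions.create({
  model: 'llama3', // assumption: any model available in your Ollama instance
  messages: [{ role: 'user', content: 'Hello from the local Gateway!' }],
});

console.log(response.choices[0].message.content);
```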
Next Steps
Explore the complete list of features supported in the Portkey SDK. You’ll find more information in the relevant sections of the documentation.