Open Source

We have open sourced our battle-tested AI Gateway to the community. It connects to 200+ LLMs through a unified interface and a single endpoint, and lets you effortlessly set up fallbacks, load balancing, retries, and more.

This gateway runs in production at Portkey, processing billions of tokens every day.
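Once the gateway is running locally (it can be started with `npx @portkey-ai/gateway`, which serves on port 8787 by default), a fallback config can be attached as a JSON header on an OpenAI-compatible request. Below is a minimal sketch; the provider names, model, and key placeholders are illustrative, so check the gateway docs for the full config schema:

```python
import json

# A fallback config: try the first target, fall back to the next on failure.
# Placeholder API keys — substitute your own before sending real requests.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"provider": "openai", "api_key": "OPENAI_API_KEY"},
        {"provider": "anthropic", "api_key": "ANTHROPIC_API_KEY"},
    ],
}

# The config travels as a JSON string in the x-portkey-config header.
headers = {
    "Content-Type": "application/json",
    "x-portkey-config": json.dumps(fallback_config),
}

# Standard OpenAI-style chat completion body.
body = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# To send it, POST `body` with `headers` to the locally running gateway, e.g.:
# requests.post("http://localhost:8787/v1/chat/completions",
#               headers=headers, json=body)
# (the network call is left out so this sketch stays self-contained)
```

With this shape, the same request body works unchanged while the gateway handles provider selection, fallbacks, and retries behind the single endpoint.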

A community resource for AI builders to find GPU credits, grants, AI accelerators, and investment opportunities, all in a single place. It is continuously updated and sometimes features exclusive deals.

We collaborate with the community to dive deep into how LLMs and their inference providers perform at scale, and publish gateway reports. We track latencies, uptime, and cost changes, along with fluctuations across dimensions like time of day, region, and token length.

Please reach out on Discord to collaborate on our next report!


Portkey extends various open source projects with additional production capabilities through custom integrations, and the list is always growing:

  • Langchain - Monitor and trace your Langchain queries

  • LlamaIndex - Monitor & trace your requests, and also set up automated fallbacks & load balancing

  • GPT Prompt Engineer - Log all your prompt engineering runs and debug issues easily

  • Instructor - Extract structured outputs from LLMs and get full-stack observability over everything

  • Promptfoo - Use Portkey prompts with Promptfoo to run evals while managing and versioning your prompt templates

  • Route to OSS LLMs using Ollama or LocalAI - Connect Portkey to your locally hosted models

  • Autogen - Bring LLM interoperability and Portkey's reliability to your Autogen agents
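For the locally hosted route above, the gateway can forward OpenAI-compatible requests to an Ollama server running on its default port. A short sketch, assuming the gateway's documented `x-portkey-*` header convention (confirm the exact header names and the model name against the docs and your local setup):

```python
# Route through the gateway to a local Ollama server.
# Header names follow the gateway's x-portkey-* convention; the custom-host
# value is Ollama's default local address.
headers = {
    "Content-Type": "application/json",
    "x-portkey-provider": "ollama",
    "x-portkey-custom-host": "http://localhost:11434",
}

# The model must already be pulled into Ollama (e.g. `ollama pull llama3`).
body = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hi from the gateway"}],
}

# To send, POST `body` with `headers` to the gateway's chat completions route:
# requests.post("http://localhost:8787/v1/chat/completions",
#               headers=headers, json=body)
```

Because the request shape stays OpenAI-compatible, switching between a hosted provider and a local model is just a change of headers.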
