Langchain (Python)
Portkey adds core production capabilities to any Langchain app.
This guide covers the integration for the Python flavour of Langchain. Docs for the JS Langchain integration are here.
LangChain is a framework for developing applications powered by language models. It enables applications that:
- Are context-aware: connect a language model to sources of context (prompt instructions, few-shot examples, content to ground its response in, etc.)
- Reason: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)
You can find more information about it here.
When using Langchain, Portkey can help take it to production by adding a fast AI gateway, observability, prompt management and more to your Langchain app.
Quick Start Integration
Install the Portkey and Langchain Python SDKs to get started.
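Assuming pip, the install might look like this (we also pull in `langchain-openai`, which provides the `ChatOpenAI` interface used throughout this guide):

```shell
pip install -U langchain-core portkey-ai langchain-openai
```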
We installed `langchain-core` to skip the optional dependencies. You can also install `langchain` if you prefer.
Since Portkey is fully compatible with the OpenAI signature, you can connect to the Portkey AI Gateway through the ChatOpenAI interface.
- Set the `base_url` as `PORTKEY_GATEWAY_URL`
- Add `default_headers` to consume the headers needed by Portkey using the `createHeaders` helper method.
We can now initialise the model so that it routes requests through Portkey's AI Gateway.
The call and the corresponding prompt will also be visible on the Portkey logs tab.
Using Virtual Keys for Multiple Models
Portkey supports Virtual Keys, which are an easy way to store and manage API keys in a secure vault. Let's try using a Virtual Key to make LLM calls.
1. Create a Virtual Key in your Portkey account and copy its id
For example, you can create a new Virtual Key for Mistral from the Portkey dashboard.
2. Use Virtual Keys in the Portkey Headers
The `virtual_key` parameter sets the authentication and provider for the AI provider being used. In our case we're using the Mistral Virtual Key. Notice that the `api_key` can be left blank, as that authentication won't be used.
The Portkey AI gateway will authenticate the API request to Mistral and get the response back in the OpenAI format for you to consume.
The AI gateway extends Langchain's `ChatOpenAI` class, making it a single interface to call any provider and any model.
Embeddings
Embeddings in Langchain through Portkey work the same way as the Chat Models, using the `OpenAIEmbeddings` class. Let's try to create an embedding using OpenAI's embedding model.
Only OpenAI is supported as an embedding provider for Langchain because internally, Langchain converts the texts into tokens which are then sent as input to the API. This method of embedding tokens instead of strings via the API is ONLY supported by OpenAI.
If you plan to use any other embedding model, we recommend using the Portkey SDK directly to make embedding calls.
Chains & Prompts
Chains let you compose multiple Langchain components into a single runnable pipeline, and Langchain supports Prompt Templates to construct inputs for language models. Let's see how this works with Portkey.
We’d be able to view the exact prompt that was used to make the call to OpenAI in the Portkey logs dashboards.
Using Portkey Prompt Templates with Langchain
Portkey features an advanced Prompts platform tailor-made for better prompt engineering. With Portkey, you can:
- Store Prompts with Access Control and Version Control: Keep all your prompts organized in a centralized location, easily track changes over time, and manage edit/view permissions for your team.
- Parameterize Prompts: Define variables and mustache-approved tags within your prompts, allowing for dynamic value insertion when calling LLMs. This enables greater flexibility and reusability of your prompts.
- Experiment in a Sandbox Environment: Quickly iterate on different LLMs and parameters to find the optimal combination for your use case, without modifying your Langchain code.
Here’s how you can leverage Portkey’s Prompt Management in your Langchain app:
- Save your provider keys in the Portkey vault to get the associated virtual keys
- Create your prompt template on the Portkey app, and save it to get an associated Prompt ID
- Before making a Langchain request, render the prompt template using the Portkey SDK
- Transform the retrieved prompt to be compatible with Langchain and send the request!
Example: Using a Portkey Prompt Template in Langchain
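The steps above might be sketched as follows; the prompt id, variable name, and virtual key are placeholders, and the shape of the rendered payload (`rendered.data.messages` as a list of role/content dicts) is our assumption:

```python
from langchain_openai import ChatOpenAI
from portkey_ai import Portkey, PORTKEY_GATEWAY_URL, createHeaders

portkey = Portkey(api_key="PORTKEY_API_KEY")  # placeholder key

# Render the saved Portkey prompt template with its variables
rendered = portkey.prompts.render(
    prompt_id="YOUR_PROMPT_ID",          # placeholder Prompt ID
    variables={"topic": "vacation"},     # placeholder variable
)

# Transform the rendered messages into Langchain's (role, content) tuples
messages = [(m["role"], m["content"]) for m in rendered.data.messages]

llm = ChatOpenAI(
    api_key="X",  # dummy; the virtual key handles auth
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="PORTKEY_API_KEY",
        virtual_key="OPENAI_VIRTUAL_KEY",  # placeholder
    ),
)

llm.invoke(messages)
</imports>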
Using Advanced Routing
The Portkey AI Gateway brings capabilities like load-balancing, fallbacks, experimentation and canary testing to Langchain through a configuration-first approach.
Let’s take an example where we might want to split traffic between gpt-4 and claude-opus 50:50 to test the two large models. The gateway configuration for this would look like the following:
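A sketch of such a load-balance config, expressed as a Python dict (the virtual key ids are placeholders):

```python
# 50:50 load-balance between gpt-4 (OpenAI) and claude-3-opus (Anthropic)
config = {
    "strategy": {"mode": "loadbalance"},
    "targets": [
        {
            "virtual_key": "openai-virtual-key",  # placeholder
            "override_params": {"model": "gpt-4"},
            "weight": 0.5,
        },
        {
            "virtual_key": "anthropic-virtual-key",  # placeholder
            "override_params": {"model": "claude-3-opus-20240229"},
            "weight": 0.5,
        },
    ],
}
```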
We can then use this config in our requests made from Langchain.
When the LLM is invoked, Portkey will distribute the requests to `gpt-4` and `claude-3-opus-20240229` in the ratio of the defined weights.
You can find more config examples here.
Agents & Tracing
A powerful capability of Langchain is creating Agents. The challenge with agentic workflows is that prompts are often abstracted away, so it's hard to get visibility into what the agent is doing, which also makes debugging harder.
Portkey’s Langchain integration gives you full visibility into the running of an agent. Let’s take an example of a popular agentic workflow.
Running this would yield the following logs in Portkey.
This is extremely powerful since you gain control and visibility over the agent flows so you can identify problems and make updates as needed.