Run Portkey on Prompts from Langchain Hub
Writing the right prompt to get a quality LLM response is often hard. You want the prompt to be specialized and exhaustive enough for your problem. Chances are someone else has already stumbled across a similar situation and written the prompt you've been working on all this while.
Langchain's Prompts Hub is like GitHub, but for prompts. You can pull a prompt and use it to make API calls to your favorite Large Language Models (LLMs) from providers such as OpenAI, Anthropic, Google, and more. Portkey provides a unified API interface (following the OpenAI signature) to make those API calls through its SDK.
Learn more about Langchain Hub and Portkey.
In this cookbook, we will pick up a prompt that directs the model to generate precise, step-by-step instructions toward a user-desired goal. This requires us to grab a prompt by browsing the Prompts Hub and integrate it with Portkey to make a chat completions API call.
Let’s get started.
1. Import Langchain Hub and Portkey Libraries
Why not explore the prompts listed on the Prompts Hub?
Meanwhile, let's boot up the NodeJS environment and start importing the libraries: `langchain` and `portkey-ai`.
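A minimal setup sketch (package names as published on npm; ESM syntax assumed):

```js
// Install the SDKs first:
//   npm install langchain portkey-ai

import * as hub from "langchain/hub"; // Langchain Hub client
import Portkey from "portkey-ai"; // Portkey SDK (OpenAI-compatible interface)
```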
You can access the Langchain Hub through the SDK in read-only mode without a LangSmith API key.
Since we will use Portkey to make the API calls, let's instantiate the client and authenticate it with the API keys. You can get the Portkey API key from the dashboard, and saving your OpenAI API key in the Portkey Vault gives you a Virtual Key.
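A rough sketch of the client setup; the environment variable names below are placeholders of our choosing:

```js
// Authenticate the Portkey client.
// The API key comes from the Portkey dashboard; the Virtual Key is created by
// saving your OpenAI API key in the Portkey Vault.
const portkey = new Portkey({
  apiKey: process.env.PORTKEY_API_KEY,
  virtualKey: process.env.OPENAI_VIRTUAL_KEY,
});
```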
Did you find an interesting prompt to use? I found one at ohkgi/superb_system_instruction_prompt.
This prompt directs the model to generate precise, step-by-step instructions, which is exactly what we are looking for.
2. Pull a Prompt from Langchain Hub
Next up, let's get the prompt details using the `hub` API. Pass the label of the repository as an argument to the `pull` method as follows:
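A sketch of the pull call, using the repo label we found above:

```js
// Pull the prompt template from the Langchain Hub (read-only, no LangSmith key required).
const prompt = await hub.pull("ohkgi/superb_system_instruction_prompt");

console.log(prompt);
```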
This should log the pulled prompt object to the console.
Good going! It’s time to pipe the prompt to make the API call.
3. Make the API Call using Portkey
The model we will request is OpenAI's GPT-4. Since `gpt-4` accepts system and user roles, let's prepare the messages.
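Here is a sketch of one way to prepare them; the `format({})` call and the example user goal are assumptions, not part of the pulled prompt:

```js
// Use the pulled template as the system message and supply our own goal as the
// user message. format({}) assumes the template needs no input variables; pass
// the expected values if it does.
const systemInstruction = await prompt.format({});

const messages = [
  { role: "system", content: systemInstruction },
  { role: "user", content: "Teach me how to brew pour-over coffee at home." },
];
```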
Pass `messages` as an argument to the chat completions call to get the response.
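A minimal sketch of the call, using the client and messages from the earlier steps:

```js
// Send the messages to gpt-4 through Portkey's unified chat completions API.
const response = await portkey.chat.completions.create({
  model: "gpt-4",
  messages,
});

console.log(response.choices[0].message.content);
```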
4. Explore the Logs
The prompt we used consisted of approximately 1,300 tokens and cost around 5.5 cents. This information can be found on Portkey's Logs page, which provides valuable data such as request latency, timestamps, and a snapshot of the request headers and body.
Read about all the observability features you get in the docs.
Congratulations! You now have the skills to programmatically pull a prompt from the Langchain Hub and use it to make an API request to GPT-4. Try a quick experiment: tweak your prompt from the Langchain Hub and run it against the Claude 2.1 model. You'll be amazed at what you can achieve!