The Portkey Prompts API allows you to integrate your saved prompts directly into your applications. This lets you separate prompt engineering from application code, making both easier to maintain while providing consistent, optimized prompts across your AI applications.
With the Prompt API, you can:
Use versioned prompts in production applications
Dynamically populate prompts with variables at runtime
Override prompt parameters as needed without modifying the original templates
Retrieve prompt details for use with provider-specific SDKs
The Completions endpoint is the simplest way to use your saved prompts in production. It handles the entire process: retrieving the prompt, applying variables, sending it to the appropriate model, and returning the completion.
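For example, here is a minimal sketch of calling a saved prompt with the Portkey Python SDK. It assumes the SDK's prompts.completions.create method and a chat-style response; PROMPT_ID and the movie variable are placeholders for your own prompt.

from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY")

# Run the saved prompt directly: Portkey fetches the template, fills in
# the variables, calls the configured model, and returns the completion
completion = portkey.prompts.completions.create(
    prompt_id="PROMPT_ID",            # your saved prompt's ID (placeholder)
    variables={"movie": "Dune 2"}     # runtime values for template variables
)

print(completion.choices[0].message.content)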
You can retrieve your saved prompts on Portkey using the /prompts/$PROMPT_ID/render endpoint. Portkey returns a JSON object containing your prompt or messages body along with all the saved parameters, which you can use directly in any request.
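As a quick sketch with the Portkey Python SDK (the prompt ID and variable name are placeholders; the exact fields returned in data depend on what you saved with the prompt):

from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY")

# Retrieve the saved prompt with the variables substituted in
render_response = portkey.prompts.render(
    prompt_id="PROMPT_ID",
    variables={"movie": "Dune 2"}
)

# render_response.data contains the messages body plus the saved model
# parameters (model, temperature, etc.), ready to reuse in a request
print(render_response.data)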
Updating Prompt Params While Retrieving the Prompt
If you want to change any model params (like temperature, the messages body, etc.) while retrieving your prompt from Portkey, you can send the override params in your render payload.
Portkey will send back your prompt with overridden params, without making any changes to the saved prompt on Portkey.
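For instance, here is a sketch of an override with the Python SDK, assuming extra keyword arguments (here temperature and max_tokens) are passed through as override params in the render payload:

from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY")

# Override saved model params for this render only; the prompt template
# saved on Portkey is left unchanged
render_response = portkey.prompts.render(
    prompt_id="PROMPT_ID",
    variables={"movie": "Dune 2"},
    temperature=0.7,     # assumed override: replaces the saved temperature
    max_tokens=250       # assumed override: replaces the saved max_tokens
)

print(render_response.data)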
Here's how you can take the output from the render API and use it to make a call. We'll take the example of the OpenAI SDK, but you can do the same with any other provider SDK as well.
import Portkey from 'portkey-ai';
import OpenAI from 'openai';

// Retrieving the Prompt from Portkey
const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY"
})

async function getPromptTemplate() {
  const render_response = await portkey.prompts.render({
    promptID: "PROMPT_ID",
    variables: { "movie": "Dune 2" }
  })
  return render_response.data;
}

// Making a Call to OpenAI with the Retrieved Prompt
const openai = new OpenAI({
  apiKey: 'OPENAI_API_KEY',
  baseURL: 'https://api.portkey.ai/v1',
  defaultHeaders: {
    'x-portkey-provider': 'openai',
    'x-portkey-api-key': 'PORTKEY_API_KEY',
    'Content-Type': 'application/json',
  }
});

async function main() {
  const PROMPT_TEMPLATE = await getPromptTemplate();
  const chatCompletion = await openai.chat.completions.create(PROMPT_TEMPLATE);
  console.log(chatCompletion.choices[0]);
}

main();
from portkey_ai import Portkey
from openai import OpenAI

# Retrieving the Prompt from Portkey
portkey = Portkey(
    api_key="PORTKEY_API_KEY"
)

render_response = portkey.prompts.render(
    prompt_id="PROMPT_ID",
    variables={"movie": "Dune 2"}
)

PROMPT_TEMPLATE = render_response.data

# Making a Call to OpenAI with the Retrieved Prompt
openai = OpenAI(
    api_key="OPENAI_API_KEY",
    base_url="https://api.portkey.ai/v1",
    default_headers={
        'x-portkey-provider': 'openai',
        'x-portkey-api-key': 'PORTKEY_API_KEY',
        'Content-Type': 'application/json',
    }
)

chat_complete = openai.chat.completions.create(**PROMPT_TEMPLATE)
print(chat_complete.choices[0].message.content)