Learn how to integrate Portkey’s prompt templates directly into your applications using the Prompt API.

The Prompt API exposes two endpoints:

- Prompt Completions (/prompts/{promptId}/completions) - Execute your saved prompt templates directly, receiving model completions
- Prompt Render (/prompts/{promptId}/render) - Retrieve your prompt template with variables populated, without executing it

Retrieving Prompts with the Render API

Retrieve your saved prompt by calling the /prompts/$PROMPT_ID/render
endpoint. Portkey returns a JSON containing your prompt or messages body along with all the saved parameters that you can directly use in any request.
This is helpful if you are required to use provider SDKs and cannot use the Portkey SDK in production (for example, using Portkey prompt templates with the OpenAI SDK).
Render API Reference

- Endpoint/Method: POST https://api.portkey.ai/v1/prompts/$PROMPT_ID/render (replace $PROMPT_ID with your prompt ID)
- Header: pass your Portkey API key as x-portkey-api-key
- Body: { "variables": { "VARIABLE_NAME": "VARIABLE_VALUE" } }
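Putting the endpoint, header, and body together, a minimal render call can be sketched with only the standard library (the helper name and the example variable are illustrative; the endpoint and header come from the reference above):

```python
import json
import urllib.request

PORTKEY_BASE_URL = "https://api.portkey.ai/v1"

def build_render_request(prompt_id: str, api_key: str, variables: dict) -> urllib.request.Request:
    """Construct the POST request for /prompts/{promptId}/render."""
    body = json.dumps({"variables": variables}).encode("utf-8")
    return urllib.request.Request(
        url=f"{PORTKEY_BASE_URL}/prompts/{prompt_id}/render",
        data=body,
        headers={
            "Content-Type": "application/json",
            "x-portkey-api-key": api_key,  # Portkey auth header
        },
        method="POST",
    )

# Sending the request requires a real prompt ID and API key:
# req = build_render_request("YOUR_PROMPT_ID", "YOUR_PORTKEY_API_KEY", {"user_name": "Alice"})
# with urllib.request.urlopen(req) as resp:
#     rendered = json.load(resp)
```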
Overriding Prompt Params While Retrieving the Prompt

If you want to change any model params (such as temperature or the messages body) while retrieving your prompt from Portkey, you can send the override params in your render payload.
Portkey will send back your prompt with overridden params, without making any changes to the saved prompt on Portkey.
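As a sketch, overriding params means sending them alongside variables in the render body; my reading of the docs is that overrides sit at the top level of the payload, and the specific values below are illustrative:

```python
import json

# Render payload with per-request overrides.
# The saved prompt on Portkey is not modified by this call.
render_payload = {
    "variables": {"user_name": "Alice"},
    # Override params (assumed to sit at the top level of the body):
    # these replace the saved values in the prompt Portkey returns.
    "model": "gpt-4o",
    "temperature": 0.2,
}

print(json.dumps(render_payload, indent=2))
```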
For example, if you pass model and temperature in the render payload, those params in the retrieved prompt will be overridden with the newly passed values, and the returned output will reflect them.

Using the Render Output in a New Request

Here’s how you can take the output from the render API and use it to make a call. We’ll use the OpenAI SDK as an example, but you can do the same with any other provider SDK.
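For instance, assuming the render response has the shape {"success": ..., "data": {...}}, with the messages body and model params inside data (verify against your actual response), the data object can be passed straight into the OpenAI SDK as chat-completion arguments:

```python
# Sketch: feed a render response into the OpenAI SDK.
# The response shape and the example values below are assumptions
# for illustration, not a guaranteed contract.

def to_chat_kwargs(render_response: dict) -> dict:
    """Extract the completion-ready params from a render response."""
    return dict(render_response["data"])

# Example render response (illustrative values):
rendered = {
    "success": True,
    "data": {
        "model": "gpt-4o",
        "temperature": 0.7,
        "messages": [{"role": "user", "content": "Hello, Alice!"}],
    },
}

kwargs = to_chat_kwargs(rendered)

# With the OpenAI SDK installed and OPENAI_API_KEY set:
# from openai import OpenAI
# client = OpenAI()
# completion = client.chat.completions.create(**kwargs)
```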