Prompt API
Learn how to integrate Portkey’s prompt templates directly into your applications using the Prompt API
This feature is available on all Portkey plans.
The Portkey Prompts API allows you to seamlessly integrate your saved prompts directly into your applications. This powerful feature lets you separate prompt engineering from application code, making both easier to maintain while providing consistent, optimized prompts across your AI applications.
With the Prompt API, you can:
- Use versioned prompts in production applications
- Dynamically populate prompts with variables at runtime
- Override prompt parameters as needed without modifying the original templates
- Retrieve prompt details for use with provider-specific SDKs
API Endpoints
Portkey offers two primary endpoints for working with saved prompts:
- **Prompt Completions** (`/prompts/{promptId}/completions`) - Execute your saved prompt templates directly, receiving model completions
- **Prompt Render** (`/prompts/{promptId}/render`) - Retrieve your prompt template with variables populated, without executing it
Prompt Completions
The Completions endpoint is the simplest way to use your saved prompts in production. It handles the entire process: retrieving the prompt, applying variables, sending it to the appropriate model, and returning the completion.
Making a Completion Request
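Below is a minimal sketch using the Portkey Python SDK; the prompt ID and the `customer_query` variable are placeholders for your own saved prompt and its template variables:

```python
# Minimal sketch using the Portkey Python SDK (pip install portkey-ai).
# "YOUR_PROMPT_ID" and "customer_query" are placeholders for your own
# saved prompt and its template variables.
from portkey_ai import Portkey

client = Portkey(api_key="PORTKEY_API_KEY")

completion = client.prompts.completions.create(
    prompt_id="YOUR_PROMPT_ID",
    variables={"customer_query": "How do I reset my password?"},
)

# The response mirrors the familiar chat-completion shape
print(completion.choices[0].message.content)
```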
Streaming Support
The completions endpoint also supports streaming responses for real-time interactions:
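Here's a sketch of the same call with streaming enabled (again with a placeholder prompt ID and variables); each chunk mirrors the familiar streaming delta format:

```python
from portkey_ai import Portkey

client = Portkey(api_key="PORTKEY_API_KEY")

# Setting stream=True returns an iterator of incremental chunks
stream = client.prompts.completions.create(
    prompt_id="YOUR_PROMPT_ID",
    variables={"customer_query": "How do I reset my password?"},
    stream=True,
)

for chunk in stream:
    # Print tokens as they arrive
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```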
Prompt Render
You can retrieve your saved prompts on Portkey using the `/prompts/$PROMPT_ID/render` endpoint. Portkey returns a JSON object containing your prompt or messages body along with all the saved parameters, which you can use directly in any request.
This is helpful if you are required to use provider SDKs and cannot use the Portkey SDK in production (see the example of using Portkey prompt templates with the OpenAI SDK below).
Using the `render` Endpoint/Method
- Make a request to `https://api.portkey.ai/v1/prompts/$PROMPT_ID/render` with your prompt ID
- Pass your Portkey API key with the `x-portkey-api-key` header
- Send the variables in your payload as `{ "variables": { "VARIABLE_NAME": "VARIABLE_VALUE" } }`
That’s it! See it in action:
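A minimal sketch with Python's `requests`, using a placeholder prompt ID and variable name:

```python
import requests

PROMPT_ID = "YOUR_PROMPT_ID"

response = requests.post(
    f"https://api.portkey.ai/v1/prompts/{PROMPT_ID}/render",
    headers={
        "Content-Type": "application/json",
        "x-portkey-api-key": "PORTKEY_API_KEY",
    },
    # Variables are sent exactly as described above
    json={"variables": {"customer_query": "How do I reset my password?"}},
)

print(response.json())
```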
The Output:
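The exact fields depend on what you saved with the prompt; illustratively, the response looks something like this:

```json
{
  "success": true,
  "data": {
    "model": "gpt-4o",
    "temperature": 0.7,
    "max_tokens": 256,
    "messages": [
      { "role": "system", "content": "You are a helpful support assistant." },
      { "role": "user", "content": "How do I reset my password?" }
    ]
  }
}
```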
Updating Prompt Params While Retrieving the Prompt
If you want to change any model params (like `temperature`, the `messages` body, etc.) while retrieving your prompt from Portkey, you can send the override params in your `render` payload. Portkey will send back your prompt with the overridden params, without making any changes to the saved prompt on Portkey.
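For instance, here's a sketch that overrides `model` and `temperature` in the render payload (placeholder values throughout):

```python
import requests

PROMPT_ID = "YOUR_PROMPT_ID"

response = requests.post(
    f"https://api.portkey.ai/v1/prompts/{PROMPT_ID}/render",
    headers={
        "Content-Type": "application/json",
        "x-portkey-api-key": "PORTKEY_API_KEY",
    },
    json={
        "variables": {"customer_query": "How do I reset my password?"},
        # Override params sit alongside the variables; the saved
        # prompt on Portkey itself is left unchanged.
        "model": "gpt-4o-mini",
        "temperature": 0.2,
    },
)

print(response.json())
```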
Based on the above snippet, the `model` and `temperature` params in the retrieved prompt will be overridden with the newly passed values.
The New Output:
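Continuing the illustrative response from above, the overridden fields come back with the new values:

```json
{
  "success": true,
  "data": {
    "model": "gpt-4o-mini",
    "temperature": 0.2,
    "max_tokens": 256,
    "messages": [
      { "role": "system", "content": "You are a helpful support assistant." },
      { "role": "user", "content": "How do I reset my password?" }
    ]
  }
}
```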
Using the `render` Output in a New Request
Here's how you can take the output from the `render` API and use it to make a call. We'll take the OpenAI SDK as an example, but you can use it similarly with any other provider SDK as well.
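A minimal sketch, assuming the render response nests the prompt body under a `data` key (adjust to the response you actually receive):

```python
import requests
from openai import OpenAI

PROMPT_ID = "YOUR_PROMPT_ID"

# 1. Render the saved prompt with variables filled in
render = requests.post(
    f"https://api.portkey.ai/v1/prompts/{PROMPT_ID}/render",
    headers={
        "Content-Type": "application/json",
        "x-portkey-api-key": "PORTKEY_API_KEY",
    },
    json={"variables": {"customer_query": "How do I reset my password?"}},
)
prompt_body = render.json()["data"]

# 2. Pass the rendered messages and params straight to the OpenAI SDK
openai_client = OpenAI(api_key="OPENAI_API_KEY")
completion = openai_client.chat.completions.create(**prompt_body)

print(completion.choices[0].message.content)
```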
CRUD: coming soon 🚀
API Reference
For complete API details, including all available parameters and response formats, refer to the API reference documentation.
Next Steps
Now that you understand how to integrate prompts into your applications, explore these related features:
- Prompt Playground - Create and test prompts in an interactive environment
- Prompt Versioning - Track changes to your prompts over time
- Prompt Observability - Monitor prompt performance in production