Prompt Render
Renders a prompt template with its variable values filled in
Given a prompt ID, variable values, and optionally any hyperparameters, this API returns a JSON object containing the prompt template with the variable values filled in.
Note: Unlike inference requests, Prompt Render API calls are processed through Portkey’s Control Plane services.
Example: Using Prompt Render output in a new request
Here’s how you can take the output from the Render API and use it to make a separate LLM call. We’ll use the OpenAI SDK as an example, but you can apply the same pattern with other frameworks such as LangChain.
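A minimal sketch, assuming the portkey-ai and openai Python packages; the prompt ID, variable names, and API keys below are placeholders, and the exact shape of the render response object may vary by SDK version:

```python
from portkey_ai import Portkey
from openai import OpenAI

# Step 1: Render the prompt template with its variables filled in
portkey = Portkey(api_key="PORTKEY_API_KEY")

render_response = portkey.prompts.render(
    prompt_id="YOUR_PROMPT_ID",            # placeholder prompt ID
    variables={"customer_name": "Alice"},  # placeholder variable
)

# The data field carries the rendered messages, model, and any hyperparameters
prompt_data = render_response.data

# Step 2: Use the rendered output in a direct OpenAI call
openai_client = OpenAI(api_key="OPENAI_API_KEY")

completion = openai_client.chat.completions.create(
    model=prompt_data.model,
    # Depending on SDK version you may need to convert messages to plain dicts
    messages=prompt_data.messages,
)

print(completion.choices[0].message.content)
```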
Authorizations
Path Parameters
The unique identifier of the prompt template to render
Body
Note: Although hyperparameters (messages, max_tokens, temperature, etc.) are shown grouped here, they should be passed at the root level of the request body, alongside variables and stream, as in the example below.
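For instance, a request body with hyperparameters at the root level might look like this (variable names and values are illustrative):

```json
{
  "variables": {
    "customer_name": "Alice"
  },
  "stream": false,
  "temperature": 0.7,
  "max_tokens": 256
}
```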
Response
Successful rendered prompt
The response is of type object.
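As an illustration, a successful response is shaped roughly like the following; the fields shown are representative, not exhaustive, and the exact payload depends on your prompt template:

```json
{
  "success": true,
  "data": {
    "model": "gpt-4o",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "Hello, Alice!" }
    ],
    "temperature": 0.7,
    "max_tokens": 256
  }
}
```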