Note: Unlike inference requests, Prompt Render API calls are processed through Portkey’s Control Plane services.
Example: Using Prompt Render output in a new request
Here’s how you can take the output from the render API and use it to make a separate LLM call. We’ll use the OpenAI SDK as an example, but you can do the same with other frameworks such as LangChain.

Authorizations
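A minimal sketch of this flow in Python, using only the standard library. The endpoint path, header name, and response fields (`data`, `messages`) are assumptions based on typical render output; check them against your Portkey workspace before use.

```python
# Sketch: feed the output of the Prompt Render API into a separate
# OpenAI-style chat completion request.
import json
import urllib.request

PORTKEY_API_KEY = "YOUR_PORTKEY_API_KEY"  # placeholder
PROMPT_ID = "YOUR_PROMPT_ID"              # placeholder


def render_prompt(prompt_id: str, variables: dict) -> dict:
    """POST to the (assumed) Prompt Render endpoint and return the JSON body."""
    req = urllib.request.Request(
        f"https://api.portkey.ai/v1/prompts/{prompt_id}/render",
        data=json.dumps({"variables": variables}).encode(),
        headers={
            "Content-Type": "application/json",
            "x-portkey-api-key": PORTKEY_API_KEY,  # header name assumed
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def to_chat_request(render_response: dict, **overrides) -> dict:
    """Build an OpenAI-style chat.completions payload from a render response.

    Root-level fields in the rendered output (messages, model, temperature,
    etc.) are carried over as-is; explicit keyword overrides win.
    """
    payload = dict(render_response.get("data", {}))
    payload.update(overrides)
    return payload


# Offline example with a mocked render response (no network call):
mock = {
    "success": True,
    "data": {
        "messages": [{"role": "user", "content": "Hello Alice"}],
        "model": "gpt-4o",
        "temperature": 0.7,
    },
}
payload = to_chat_request(mock, max_tokens=100)
# payload can now be passed to openai.chat.completions.create(**payload)
```

In a live setup you would replace `mock` with `render_prompt(PROMPT_ID, {"name": "Alice"})` and hand `payload` to whichever SDK you use for the completion call.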
Path Parameters
The unique identifier of the prompt template to render
Body
application/json
Note: Although hyperparameters are shown grouped here (like messages, max_tokens, temperature, etc.), they should only be passed at the root level, alongside 'variables' and 'stream'.
Variables to substitute in the prompt template
Note: All hyperparameters are optional. Pass them at the root level, not nested under hyperparameters. Their grouping here is for educational purposes only.
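To make the root-level rule concrete, here is a sketch of a valid request body. The variable name `customer_name` is hypothetical; the point is that `max_tokens`, `temperature`, and `stream` sit beside `variables`, never inside a `hyperparameters` object.

```python
# Sketch of a render request body: hyperparameters go at the root level,
# alongside "variables" and "stream" — not nested under "hyperparameters".
body = {
    "variables": {"customer_name": "Alice"},  # hypothetical template variable
    "max_tokens": 256,
    "temperature": 0.7,
    "stream": False,
}
```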