Renders a prompt template with its variable values filled in

{
  "success": true,
  "data": {
    "messages": [
      {
        "content": "<string>",
        "role": "system",
        "name": "<string>"
      }
    ],
    "model": "gpt-5",
    "frequency_penalty": 0,
    "logit_bias": null,
    "logprobs": false,
    "top_logprobs": 10,
    "max_tokens": 123,
    "max_completion_tokens": 123,
    "n": 1,
    "presence_penalty": 0,
    "response_format": {
      "type": "text"
    },
    "seed": 0,
    "stop": "<string>",
    "stream": false,
    "stream_options": null,
    "thinking": {
      "type": "enabled",
      "budget_tokens": 2030
    },
    "temperature": 1,
    "top_p": 1,
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "<string>",
          "description": "<string>",
          "parameters": {},
          "strict": false
        }
      }
    ],
    "tool_choice": "none",
    "parallel_tool_calls": true,
    "user": "user-1234",
    "function_call": "none",
    "functions": [
      {
        "name": "<string>",
        "description": "<string>",
        "parameters": {}
      }
    ]
  }
}
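Because the keys in `data` mirror the chat-completions request schema, the object can be unpacked directly as keyword arguments for a completion call. Below is a minimal sketch of that unpacking, using placeholder values modeled on the example response; the helper function is ours for illustration, not part of any SDK:

```python
# Placeholder render output modeled on the example response above.
render_data = {
    "messages": [{"role": "system", "content": "You are a helpful assistant."}],
    "model": "gpt-5",
    "temperature": 1,
    "max_completion_tokens": 123,
    "logit_bias": None,
    "stream_options": None,
}

def to_completion_kwargs(data: dict) -> dict:
    """Drop null fields so the provider SDK falls back to its own defaults."""
    return {k: v for k, v in data.items() if v is not None}

kwargs = to_completion_kwargs(render_data)
# kwargs can now be splatted into a call such as:
# client.chat.completions.create(**kwargs)
print(sorted(kwargs))  # ['max_completion_tokens', 'messages', 'model', 'temperature']
```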
Given a prompt ID, variable values, and optionally any hyperparameters, this API returns a JSON object containing the raw prompt template.
Example: Using Prompt Render output in a new request

Here, we retrieve the prompt template from the render API and use it to make a separate LLM call. We'll use the OpenAI SDK in this example, but you can apply the same pattern with other frameworks such as LangChain.

from portkey_ai import Portkey
from openai import OpenAI

# Retrieve the prompt from Portkey
portkey = Portkey(
    api_key="PORTKEY_API_KEY"
)

render_response = portkey.prompts.render(
    prompt_id="PROMPT_ID",
    variables={"movie": "Dune 2"}
)

PROMPT_TEMPLATE = render_response.data

# Make a call to OpenAI with the retrieved prompt
openai = OpenAI(
    api_key="OPENAI_API_KEY",
    base_url="https://api.portkey.ai/v1",
    default_headers={
        "x-portkey-provider": "openai",
        "x-portkey-api-key": "PORTKEY_API_KEY",
        "Content-Type": "application/json",
    }
)

chat_complete = openai.chat.completions.create(**PROMPT_TEMPLATE)
print(chat_complete.choices[0].message.content)
prompt_id
The unique identifier of the prompt template to render.

variables
Variables to substitute in the prompt template.

Note: All hyperparameters (messages, max_completion_tokens, temperature, etc.) are optional. Although they are shown grouped here for reference, pass them at the root level of the request, alongside variables and stream, not nested under a hyperparameters object. The max_tokens parameter is deprecated; use max_completion_tokens instead.
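To illustrate the note above, here is a hedged sketch of a render request with overrides at the root level. The prompt ID and values are placeholders, and the commented call assumes the portkey_ai SDK shown in the example earlier:

```python
# Hypothetical request body for a render call; all values are placeholders.
payload = {
    "variables": {"movie": "Dune 2"},  # substituted into the template
    "temperature": 0.7,                # root-level override of the stored value
    "max_completion_tokens": 256,      # preferred over the deprecated max_tokens
}

# With the SDK from the example above, this would be passed as:
# portkey.prompts.render(prompt_id="PROMPT_ID", **payload)

# The overrides sit at the root, never under a nested "hyperparameters" key.
assert "hyperparameters" not in payload
print(sorted(payload))  # ['max_completion_tokens', 'temperature', 'variables']
```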