curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "user_input": "Hello world"
    },
    "max_completion_tokens": 250,
    "presence_penalty": 0.2
  }'

Response:

{
  "status": "<string>",
  "headers": {},
  "body": {
    "id": "<string>",
    "choices": [
      {
        "finish_reason": "stop",
        "index": 123,
        "message": {
          "content": "<string>",
          "role": "assistant",
          "tool_calls": [
            {
              "id": "<string>",
              "type": "function",
              "function": {
                "name": "<string>",
                "arguments": "<string>"
              }
            }
          ],
          "function_call": {
            "arguments": "<string>",
            "name": "<string>"
          },
          "content_blocks": [
            {
              "type": "text",
              "text": "<string>"
            }
          ]
        },
        "logprobs": {
          "content": [
            {
              "token": "<string>",
              "logprob": 123,
              "bytes": [123],
              "top_logprobs": [
                {
                  "token": "<string>",
                  "logprob": 123,
                  "bytes": [123]
                }
              ]
            }
          ]
        }
      }
    ],
    "created": 123,
    "model": "<string>",
    "object": "chat.completion",
    "system_fingerprint": "<string>",
    "usage": {
      "completion_tokens": 123,
      "prompt_tokens": 123,
      "total_tokens": 123,
      "completion_tokens_details": {
        "reasoning_tokens": 123,
        "accepted_prediction_tokens": 123,
        "rejected_prediction_tokens": 123
      },
      "prompt_tokens_details": {
        "cached_tokens": 123
      }
    }
  }
}

Execute your saved prompt templates on Portkey
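Given the response schema above, the generated text lives at choices[0].message.content. A minimal Python sketch of pulling it out; the sample body below is illustrative, not a real API response:

```python
# Illustrative response body, shaped like the schema above (not real data).
sample_body = {
    "id": "chatcmpl-123",
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I help?"},
        }
    ],
    "model": "gpt-4o",
    "object": "chat.completion",
    "usage": {"completion_tokens": 7, "prompt_tokens": 12, "total_tokens": 19},
}

def extract_reply(body: dict) -> str:
    # The assistant's text sits at choices[0].message.content.
    return body["choices"][0]["message"]["content"]

print(extract_reply(sample_body))
```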
Send Variables
curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "joke_topic": "elections",
      "humor_level": "10"
    }
  }'
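The same request can be assembled programmatically. A sketch using a hypothetical build_request helper (not part of any Portkey SDK) that returns the URL, headers, and serialized body for the call above:

```python
import json

# Hypothetical helper (not part of any Portkey SDK): assembles the URL,
# headers, and JSON body for the prompt completions call shown above.
def build_request(prompt_id, api_key, variables, **params):
    url = f"https://api.portkey.ai/v1/prompts/{prompt_id}/completions"
    headers = {
        "Content-Type": "application/json",
        "x-portkey-api-key": api_key,
    }
    # Hyperparameter overrides sit at the root level, next to "variables".
    body = {"variables": variables, **params}
    return url, headers, json.dumps(body)

url, headers, payload = build_request(
    "YOUR_PROMPT_ID",
    "PORTKEY_API_KEY",
    {"joke_topic": "elections", "humor_level": "10"},
)
```

Pass the returned url, headers, and payload to any HTTP client to make the actual call.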
To pass an object or array as a variable, stringify the value before sending.

curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "user_data": "{\"name\":\"John\",\"preferences\":{\"topic\":\"AI\",\"format\":\"brief\"}}"
    }
  }'
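The escaped user_data string in the curl example above is just a JSON-encoded object. In Python, json.dumps with compact separators produces that exact form:

```python
import json

# A nested object must be sent as a string; json.dumps with compact
# separators reproduces the escaped value in the curl example above.
user_data = {"name": "John", "preferences": {"topic": "AI", "format": "brief"}}
variables = {"user_data": json.dumps(user_data, separators=(",", ":"))}
```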
Override Prompt Settings
curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "user_input": "Hello world"
    },
    "temperature": 0.7,
    "max_tokens": 250,
    "presence_penalty": 0.2
  }'
Call Specific Prompt Version
{promptId} always calls the Published version of your prompt. But you can also call a specific template version by appending its version number, like {promptId@12}.

Version Tags:
@latest: Calls the latest version
@{NUMBER} (like @12): Calls the specified version number
No suffix: Portkey defaults to the Published version

curl -X POST "https://api.portkey.ai/v1/prompts/PROMPT_ID@12/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "user_input": "Hello world"
    }
  }'
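The version-suffix convention is easy to encode in a small helper. A sketch (prompt_url is a hypothetical name, not an SDK function) that builds the endpoint URL for a given prompt version:

```python
# Hypothetical helper: builds the completions URL for a given prompt
# version, following the {promptId@version} convention described above.
def prompt_url(prompt_id, version=None):
    suffix = "" if version is None else f"@{version}"
    return f"https://api.portkey.ai/v1/prompts/{prompt_id}{suffix}/completions"

prompt_url("PROMPT_ID")            # Published version (no suffix)
prompt_url("PROMPT_ID", 12)        # pinned to version 12
prompt_url("PROMPT_ID", "latest")  # latest version
```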
Streaming
Set stream: true explicitly in your request to enable streaming.

curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "user_input": "Hello world"
    },
    "stream": true,
    "max_tokens": 250,
    "presence_penalty": 0.2
  }'
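With "stream": true the response arrives as server-sent events, each data: line carrying a JSON chunk, with a final data: [DONE] sentinel. A sketch of collecting the streamed text, assuming OpenAI-style chat-completion chunks (delta objects inside choices); the fake_stream list stands in for a real HTTP response body:

```python
import json

# Fake SSE lines standing in for a real streamed HTTP response body.
fake_stream = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]

def collect_text(lines) -> str:
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        # Each chunk carries an incremental delta rather than a full message.
        parts.append(chunk["choices"][0]["delta"].get("content", ""))
    return "".join(parts)

print(collect_text(fake_stream))  # "Hel" + "lo" -> "Hello"
```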
The unique identifier of the prompt template to use
Note: Although hyperparameters are shown grouped here (like messages, max_completion_tokens, temperature, etc.), they should only be passed at the root level, alongside 'variables' and 'stream'. The max_tokens parameter is deprecated; use max_completion_tokens instead.
Variables to substitute in the prompt template
Default: false. Set to true if you want to stream the response
Note: All hyperparameters are optional. Pass them at the root level, and not nested under hyperparameters. Their grouping here is for educational purposes only.