Prompt Completions
Execute your saved prompt templates on Portkey
curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "user_input": "Hello world"
    },
    "max_tokens": 250,
    "presence_penalty": 0.2
  }'
{
  "status": "<string>",
  "headers": {},
  "body": {
    "id": "<string>",
    "choices": [
      {
        "finish_reason": "stop",
        "index": 123,
        "message": {
          "content": "<string>",
          "tool_calls": [
            {
              "id": "<string>",
              "type": "function",
              "function": {
                "name": "<string>",
                "arguments": "<string>"
              }
            }
          ],
          "role": "assistant",
          "function_call": {
            "arguments": "<string>",
            "name": "<string>"
          },
          "content_blocks": [
            {
              "type": "text",
              "text": "<string>"
            }
          ]
        },
        "logprobs": {
          "content": [
            {
              "token": "<string>",
              "logprob": 123,
              "bytes": [
                123
              ],
              "top_logprobs": [
                {
                  "token": "<string>",
                  "logprob": 123,
                  "bytes": [
                    123
                  ]
                }
              ]
            }
          ]
        }
      }
    ],
    "created": 123,
    "model": "<string>",
    "system_fingerprint": "<string>",
    "object": "chat.completion",
    "usage": {
      "completion_tokens": 123,
      "prompt_tokens": 123,
      "total_tokens": 123
    }
  }
}
The Portkey Prompts API completely follows the OpenAI schema for both requests and responses, making it a drop-in replacement for your existing Chat or Completions calls.
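Because the completion follows the OpenAI schema, the assistant text can be pulled out the same way as from a Chat Completions response. A minimal sketch, assuming the standard completion object is returned as the HTTP response body:
curl -s -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{"variables": {"user_input": "Hello world"}}' \
  | jq -r '.choices[0].message.content'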
Features
Create your Prompt Template on the Portkey UI, define variables, and pass them with this API:
curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "joke_topic": "elections",
      "humor_level": "10"
    }
  }'
When passing JSON data with variables, stringify the value before sending.
curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "user_data": "{\"name\":\"John\",\"preferences\":{\"topic\":\"AI\",\"format\":\"brief\"}}"
    }
  }'
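If the JSON lives in a file, one way to produce the stringified value is with jq's tojson filter. A rough sketch (user_data.json is a hypothetical local file, not part of the API):
# Convert the file's JSON into a single JSON-encoded string, then embed it as the variable value
USER_DATA=$(jq 'tojson' user_data.json)
curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d "{\"variables\": {\"user_data\": $USER_DATA}}"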
You can override any model hyperparameter saved in the prompt template by sending its new value at the time of making a request:
curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "user_input": "Hello world"
    },
    "temperature": 0.7,
    "max_tokens": 250,
    "presence_penalty": 0.2
  }'
Passing the {promptId} alone always calls the Published version of your prompt. You can also call a specific template version by appending its version number, like {promptId@12}.
Version Tags:
- @latest: Calls the latest version of your prompt template
- @{NUMBER} (like @12): Calls the specified version number
- No suffix: Portkey defaults to the Published version
curl -X POST "https://api.portkey.ai/v1/prompts/PROMPT_ID@12/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "user_input": "Hello world"
    }
  }'
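Similarly, appending the @latest tag calls the latest version of the template (a sketch analogous to the request above; PROMPT_ID is a placeholder):
curl -X POST "https://api.portkey.ai/v1/prompts/PROMPT_ID@latest/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "user_input": "Hello world"
    }
  }'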
The Prompts API also supports streaming responses and completely follows the OpenAI schema.
- Set stream: true explicitly in your request to enable streaming
curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "user_input": "Hello world"
    },
    "stream": true,
    "max_tokens": 250,
    "presence_penalty": 0.2
  }'
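The chunks arrive as server-sent events. A rough sketch of printing only the streamed text, assuming OpenAI-style chat.completion.chunk payloads (a "data: " prefix per event, a choices[0].delta.content field, and a final "data: [DONE]" line):
curl -sN -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{"variables": {"user_input": "Hello world"}, "stream": true}' \
  | sed 's/^data: //' \
  | grep -v '^\[DONE\]' \
  | jq -rj '.choices[0].delta.content // empty'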
Authorizations
x-portkey-api-key (header, required): Your Portkey API key.
Path Parameters
promptId (string, required): The unique identifier of the prompt template to use.
Body
Note: Although hyperparameters are shown grouped here (like messages, max_tokens, temperature, etc.), they should only be passed at the root level, alongside 'variables' and 'stream'.
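For example, a request that overrides hyperparameters keeps them at the root of the JSON body, next to variables (a minimal sketch; the values are illustrative):
curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": { "user_input": "Hello world" },
    "temperature": 0.7,
    "max_tokens": 250
  }'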
Response
The response is of type object.