POST /prompts/{promptId}/completions
curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "user_input": "Hello world"
    },
    "max_tokens": 250,
    "presence_penalty": 0.2
  }'
{
  "status": "<string>",
  "headers": {},
  "body": {
    "id": "<string>",
    "choices": [
      {
        "finish_reason": "stop",
        "index": 123,
        "message": {
          "content": "<string>",
          "tool_calls": [
            {
              "id": "<string>",
              "type": "function",
              "function": {
                "name": "<string>",
                "arguments": "<string>"
              }
            }
          ],
          "role": "assistant",
          "function_call": {
            "arguments": "<string>",
            "name": "<string>"
          }
        },
        "logprobs": {
          "content": [
            {
              "token": "<string>",
              "logprob": 123,
              "bytes": [
                123
              ],
              "top_logprobs": [
                {
                  "token": "<string>",
                  "logprob": 123,
                  "bytes": [
                    123
                  ]
                }
              ]
            }
          ]
        }
      }
    ],
    "created": 123,
    "model": "<string>",
    "system_fingerprint": "<string>",
    "object": "chat.completion",
    "usage": {
      "completion_tokens": 123,
      "prompt_tokens": 123,
      "total_tokens": 123
    }
  }
}

The Portkey Prompts API follows the Chat Completions format completely for both requests and responses, making it a drop-in replacement for your existing Chat or Completions calls.
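As a sketch of what that migration looks like (assuming your saved prompt template already contains the messages you previously sent inline; prompt IDs and values below are illustrative), an existing Chat Completions call reduces to a prompt ID plus variables:

# Sketch: the saved prompt template is assumed to hold the messages that used to
# be sent inline; only the per-request variables and hyperparameters remain here.
#
# Before (Chat Completions style): body carried "messages": [...] plus hyperparameters.
# After: reference the saved template by ID and pass variables at the root level.
curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": { "user_input": "Hello world" },
    "max_tokens": 250
  }'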

Authorizations

x-portkey-api-key
string
header
required

Path Parameters

promptId
string
required

The unique identifier of the prompt template to use

Body

application/json

Note: Although hyperparameters are shown grouped here (like messages, max_tokens, temperature, etc.), they should only be passed at the root level, alongside 'variables' and 'stream'.

variables
object
required

Variables to substitute in the prompt template
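For example, assuming the saved template uses mustache-style {{...}} placeholders (the variable name below is illustrative), each key in variables supplies the value for the placeholder of the same name:

# Sketch: if the saved prompt template contains a placeholder such as
# {{customer_question}}, the matching key in "variables" fills it at request time.
curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": {
      "customer_question": "How do I reset my password?"
    }
  }'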

stream
boolean
default:
false

Set to true if you want to stream the response
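A minimal streaming sketch (assuming the endpoint emits OpenAI-style server-sent events, consistent with the chat.completion response shape shown above):

# Sketch: with "stream": true the response is expected to arrive as incremental
# server-sent events ("data: {...}" lines) rather than a single JSON body.
# -N disables curl's output buffering so chunks print as they arrive.
curl -N -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": { "user_input": "Hello world" },
    "stream": true
  }'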

hyperparameters
object

Note: All hyperparameters are optional. Pass them at the root level, not nested under a hyperparameters object; they are grouped here for documentation purposes only.
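For example (parameter values are illustrative), temperature and max_tokens sit next to variables at the root of the body rather than inside a nested object:

# Sketch: hyperparameters such as temperature and max_tokens go at the root of
# the request body, alongside "variables" -- not wrapped in "hyperparameters".
curl -X POST "https://api.portkey.ai/v1/prompts/YOUR_PROMPT_ID/completions" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "variables": { "user_input": "Hello world" },
    "temperature": 0.7,
    "max_tokens": 250
  }'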

Response

200 - application/json
Successful completion response
status
string

Response status

headers
object

Response headers

body
object

Represents a chat completion response returned by the model, based on the provided input.