Prompt Completions
Execute your saved prompt templates on Portkey
The Portkey Prompts API follows the OpenAI schema completely for both requests and responses, making it a drop-in replacement for your existing Chat or Completions calls.
Features
Send Variables
Create your Prompt Template on the Portkey UI, define variables, and pass them with this API:
When passing JSON data with variables, stringify the value before sending.
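As an illustration, a request body with variables can be built like the sketch below. The prompt variable names and values are hypothetical; only the stringify rule comes from the docs:

```python
import json

# A JSON value destined for a template variable (illustrative data).
user_profile = {"name": "Ada", "plan": "pro"}

payload = {
    "variables": {
        "customer_name": "Ada",                    # plain string: pass as-is
        "profile_json": json.dumps(user_profile),  # JSON value: stringify first
    }
}
```

The stringified value round-trips cleanly, so the template receives valid JSON text rather than a nested object.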
Override Prompt Settings
You can override any model hyperparameter saved in the prompt template by sending its new value at the time of making a request:
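For instance, a saved temperature or max_tokens can be overridden by sending the new value at the root of the request body. The values below are illustrative:

```python
payload = {
    "variables": {"customer_name": "Ada"},
    # Root-level values override whatever is saved in the prompt
    # template, for this request only.
    "temperature": 0.2,
    "max_tokens": 256,
}
```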
Call Specific Prompt Version
Passing the {promptId} alone always calls the Published version of your prompt. But you can also call a specific template version by appending its version number, like {promptId@12}:
Version Tags:
- @latest: Calls the latest version
- @{NUMBER} (like @12): Calls the specified version number
- No suffix: Portkey defaults to the Published version
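The suffix rules above can be captured in a small helper. The function name and sample prompt ID are ours; only the @-suffix format comes from the API:

```python
def prompt_path(prompt_id, version=None, latest=False):
    """Append a version suffix to a prompt ID.

    No suffix -> Published version (the default)
    @latest   -> latest version
    @{NUMBER} -> that specific version number
    """
    if latest:
        return f"{prompt_id}@latest"
    if version is not None:
        return f"{prompt_id}@{version}"
    return prompt_id
```

For example, `prompt_path("pp-greeting", version=12)` yields `"pp-greeting@12"`, while `prompt_path("pp-greeting")` leaves the ID untouched and so resolves to the Published version.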
Streaming
The Prompts API also supports streaming responses and completely follows the OpenAI schema.
- Set stream: true explicitly in your request to enable streaming
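Because streamed chunks follow the OpenAI schema, they arrive as server-sent-event lines. A minimal parser sketch for one such line (the helper name is ours; the chunk shape is the standard OpenAI streaming format):

```python
import json

def parse_sse_chunk(line: bytes):
    """Extract the text delta from one SSE line of a streamed
    completion (OpenAI schema). Returns None for non-data lines
    and for the terminating [DONE] marker."""
    if not line.startswith(b"data: "):
        return None
    data = line[len(b"data: "):]
    if data == b"[DONE]":
        return None
    chunk = json.loads(data)
    return chunk["choices"][0]["delta"].get("content")
```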
Authorizations
Path Parameters
The unique identifier of the prompt template to use
Body
Note: Although hyperparameters are shown grouped here (like messages, max_tokens, temperature, etc.), they should only be passed at the root level, alongside 'variables' and 'stream'.
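A sketch of a well-formed body under that rule (all values illustrative): hyperparameters such as messages and max_tokens sit at the root, next to variables and stream, not inside a nested group:

```python
body = {
    "variables": {"customer_name": "Ada"},
    "stream": False,
    # Hyperparameters go at the root level, not under a sub-object:
    "messages": [{"role": "user", "content": "Say hello"}],
    "max_tokens": 64,
    "temperature": 0.7,
}
```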
Response
Successful completion response
The response is of type object.