Portkeys Prompt Playground allows you to test and tinker with various hyperparameters without any external dependencies and deploy them to production seamlessly. Moreover, all team members can use the same prompt template, ensuring that everyone works from the same source of truth.
Let's create a prompt template on `gpt4` that tells a story about any user-desired topic. To do this, configure the playground as follows:
| System | You are a very good storyteller who covers various topics for the kids. You narrate them in very intriguing and interesting ways. You tell the story in less than 3 paragraphs. |
| --- | --- |
| User | Tell me a story about {{topic}} |
| Max Tokens | 512 |
| Temperature | 0.9 |
| Frequency Penalty | -0.2 |

Notice `{{topic}}` in the user message. Portkey treats text wrapped in double curly braces as a dynamic variable, so a string can be passed to this prompt at runtime. This makes the prompt much more useful, since it can generate stories about any topic.
Once you are happy with the Prompt Template, hit Save Prompt. The Prompts page displays saved prompt templates and their corresponding prompt ID, serving as a reference point in our code.
Next up, let’s see how to use the created prompt template to generate chat completions through OpenAI SDK.
Install `axios`. This will allow you to POST to Portkey's render endpoint and retrieve prompt details that can be used with the OpenAI SDK.
We will use `axios` to make a `POST` call to the `/prompts/${PROMPT_ID}/render` endpoint, along with headers (including the Portkey API key) and a body that includes the prompt variables required by the prompt template.
For more information about the Render API, refer to the docs.
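Putting this together, here is a minimal sketch of the render call. The endpoint path and the `x-portkey-api-key` header follow Portkey's Render API docs at the time of writing; the environment-variable names, the placeholder prompt ID, and the response handling are assumptions for illustration:

```javascript
// Sketch of calling Portkey's Render API with axios.
// Assumptions: PORTKEY_API_KEY and PROMPT_ID environment variables are
// set; PROMPT_ID is the ID shown on the Prompts page.
const PROMPT_ID = process.env.PROMPT_ID || "your-prompt-id";

// Build the axios request config separately so it is easy to inspect.
function buildRenderRequest(promptId, variables, apiKey) {
  return {
    method: "post",
    url: `https://api.portkey.ai/v1/prompts/${promptId}/render`,
    headers: {
      "Content-Type": "application/json",
      "x-portkey-api-key": apiKey,
    },
    data: { variables },
  };
}

async function renderPrompt(promptId, variables) {
  const axios = require("axios"); // lazy-loaded so the helper above stays dependency-free
  const res = await axios(
    buildRenderRequest(promptId, variables, process.env.PORTKEY_API_KEY)
  );
  // The rendered prompt details (messages, model, parameters) are
  // nested under `data` in the response, per the Render API docs.
  return res.data.data;
}
```

You would call it as `await renderPrompt(PROMPT_ID, { topic: "dragons" })`, where the keys of the `variables` object match the `{{...}}` placeholders in the template.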
Install `portkey-ai` to use its utilities to change the base URL and the default headers on the OpenAI client. If you are wondering what virtual keys are, refer to the Portkey Vault documentation.
The prompt details we retrieved are passed as an argument to the chat completions creation method.
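Here is a sketch of that step. `createHeaders` and `PORTKEY_GATEWAY_URL` come from the `portkey-ai` package; the virtual-key environment-variable name and the exact shape of the rendered prompt details are assumptions:

```javascript
// Sketch: feed the rendered prompt details into the OpenAI SDK,
// routed through Portkey's gateway via a virtual key.

// Merge the rendered details (model, messages, parameters) with any
// runtime overrides into the chat-completions payload.
function toChatPayload(renderedPrompt, overrides = {}) {
  return { ...renderedPrompt, ...overrides };
}

async function storyCompletion(renderedPrompt) {
  const OpenAI = require("openai");
  const { createHeaders, PORTKEY_GATEWAY_URL } = require("portkey-ai");

  const openai = new OpenAI({
    apiKey: "dummy", // the real provider key is resolved via the virtual key
    baseURL: PORTKEY_GATEWAY_URL,
    defaultHeaders: createHeaders({
      apiKey: process.env.PORTKEY_API_KEY,
      virtualKey: process.env.OPENAI_VIRTUAL_KEY, // placeholder name
    }),
  });

  const completion = await openai.chat.completions.create(
    toChatPayload(renderedPrompt)
  );
  return completion.choices[0].message.content;
}
```

Because the rendered details already carry the model and sampling parameters saved in the template, the payload needs no hand-written settings; `toChatPayload` only exists so you can override them at call time if needed.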
Alternatively, the Portkey SDK can run the saved prompt template directly through its prompt completions method, which accepts `promptID` and `variables` parameters.
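A minimal sketch of running the saved prompt with just the `promptID` and `variables` parameters, assuming the Portkey SDK's `portkey.prompts.completions.create` method (placeholder environment-variable names are assumptions):

```javascript
// Sketch: run the saved prompt template directly with the Portkey SDK,
// skipping the manual render call and OpenAI client setup.

// Build the argument object: just the promptID and variables parameters.
function makePromptRequest(promptID, topic) {
  return { promptID, variables: { topic } };
}

async function tellStory(topic) {
  const { Portkey } = require("portkey-ai"); // lazy-loaded for the sketch
  const portkey = new Portkey({ apiKey: process.env.PORTKEY_API_KEY });

  const completion = await portkey.prompts.completions.create(
    makePromptRequest(process.env.PROMPT_ID, topic) // ID from the Prompts page
  );
  return completion.choices[0].message.content;
}
```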
Show me the entire code
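As one end-to-end sketch combining the render call and the gateway-routed completion (all placeholder IDs and environment-variable names are assumptions; swap in your own values):

```javascript
// End-to-end sketch: render the saved prompt, then generate the story
// through the OpenAI SDK via Portkey's gateway.
async function generateStory(topic) {
  const axios = require("axios");
  const OpenAI = require("openai");
  const { createHeaders, PORTKEY_GATEWAY_URL } = require("portkey-ai");

  const PROMPT_ID = process.env.PROMPT_ID;
  const PORTKEY_API_KEY = process.env.PORTKEY_API_KEY;

  // 1. Render the prompt template with the runtime variable.
  const render = await axios.post(
    `https://api.portkey.ai/v1/prompts/${PROMPT_ID}/render`,
    { variables: { topic } },
    { headers: { "x-portkey-api-key": PORTKEY_API_KEY } }
  );
  const promptDetails = render.data.data; // model, messages, parameters

  // 2. Pass the rendered details to the chat completions create method.
  const openai = new OpenAI({
    apiKey: "dummy", // resolved via the virtual key below
    baseURL: PORTKEY_GATEWAY_URL,
    defaultHeaders: createHeaders({
      apiKey: PORTKEY_API_KEY,
      virtualKey: process.env.OPENAI_VIRTUAL_KEY, // placeholder name
    }),
  });
  const completion = await openai.chat.completions.create(promptDetails);
  return completion.choices[0].message.content;
}
```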