Prompt Security provides advanced protection for your AI applications against security threats such as prompt injection and sensitive data exposure, helping ensure safe interactions with LLMs. To get started with Prompt Security, visit their website.
Get Started with Prompt Security
Using Prompt Security with Portkey
1. Add Prompt Security Credentials to Portkey
- Click on the **Admin Settings** button on the sidebar
- Navigate to the **Plugins** tab under Organisation Settings
- Click on the edit button for the Prompt Security integration
- Add your Prompt Security API Key and API Domain (obtain these from your Prompt Security account)
2. Add Prompt Security’s Guardrail Check
- Navigate to the **Guardrails** page and click the **Create** button
- Search for either "Protect Prompt" or "Protect Response" depending on your needs and click **Add**
- Set any **actions** you want on your check, and create the Guardrail!
Guardrail Actions allow you to orchestrate your guardrails logic. You can learn more about them here
| Check Name | Description | Parameters | Supported Hooks |
|---|---|---|---|
| Protect Prompt | Protect a user prompt before it is sent to the LLM | None | beforeRequestHook |
| Protect Response | Protect an LLM response before it is sent to the user | None | afterRequestHook |
3. Add Guardrail ID to a Config and Make Your Request
- When you save a Guardrail, you'll get an associated Guardrail ID. Add this ID to the `before_request_hooks` or `after_request_hooks` params in your Portkey Config.
- Create these Configs in the Portkey UI, save them, and get an associated Config ID to attach to your requests. More here.
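The hook parameters above come together in the Config object itself. A minimal sketch in Python, assuming hypothetical Guardrail IDs (`grd_protect_prompt` and `grd_protect_response` are placeholders for the IDs Portkey shows you after saving each check):

```python
import json

# Hypothetical Guardrail IDs; replace with the IDs shown in the
# Portkey UI after saving your "Protect Prompt" / "Protect Response" checks.
config = {
    "before_request_hooks": [{"id": "grd_protect_prompt"}],   # runs before the request reaches the LLM
    "after_request_hooks": [{"id": "grd_protect_response"}],  # runs on the LLM's response
}

print(json.dumps(config, indent=2))
```

Either hook list can be omitted if you only use one of the two checks.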
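Once the Config is saved, its ID is attached to each request; over Portkey's REST API this is done via the `x-portkey-config` header. A minimal sketch using only Python's standard library, with a hypothetical API key and Config ID (the request is constructed but not sent):

```python
import json
import urllib.request

# Hypothetical values; substitute your own from the Portkey dashboard.
PORTKEY_API_KEY = "your-portkey-api-key"
CONFIG_ID = "pc-promptsec-123"  # the Config that references your Guardrail

req = urllib.request.Request(
    "https://api.portkey.ai/v1/chat/completions",
    data=json.dumps({
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Hello!"}],
    }).encode(),
    headers={
        "Content-Type": "application/json",
        "x-portkey-api-key": PORTKEY_API_KEY,
        "x-portkey-config": CONFIG_ID,  # attaches the guardrail Config
    },
)
# urllib.request.urlopen(req) would send the guarded request.
```

The Portkey SDKs accept the same Config ID as a `config` parameter at client creation, so the guardrail applies to every request made through that client.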