Zscaler AI Guard provides AI-powered security for LLM applications. It enforces Detections Policies to perform security checks such as Data Loss Prevention (DLP) and prompt injection protection on both inbound prompts and outbound model responses.
Get Started with Zscaler AI Guard
Using Zscaler AI Guard with Portkey
1. Add Zscaler Credentials to Portkey
- Navigate to the Plugins page under Admin Settings
- Click the edit button for the Zscaler AI Guard integration
- Add your Zscaler AI Guard API Key (the DAS Application Key generated from your Zscaler AI Guard tenant)
2. Add Zscaler AI Guard Check
- Navigate to the Guardrails page and click the Create button
- Search for “Zscaler AI Guard” and click Add
- Configure your guardrail settings:
  - Policy ID: The ID of the Zscaler Detections Policy to execute (Required)
  - Timeout: The timeout in milliseconds for the scan (Default: 10000)
- Set any actions you want on your check, and create the Guardrail!
Guardrail Actions let you orchestrate your guardrail logic. You can learn more about them here.
| Check Name | Description | Parameters | Supported Hooks |
|---|---|---|---|
| Zscaler AI Guard Check | Scans prompts and responses against a Zscaler Detections Policy. Returns ALLOW or BLOCK based on the policy evaluation. | Policy ID (string), Timeout (number) | beforeRequestHook, afterRequestHook |
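Since the check resolves to an ALLOW or BLOCK verdict, your application logic on top of Portkey may want to branch on it. Here is a minimal, purely illustrative sketch; `is_request_allowed` is a hypothetical helper, not part of the Portkey or Zscaler SDKs.

```python
def is_request_allowed(verdict: str) -> bool:
    """Map a Zscaler Detections Policy verdict to a pass/fail decision.

    Illustrative only: assumes the verdict arrives as the string
    "ALLOW" or "BLOCK", as described in the check table above.
    """
    normalized = verdict.strip().upper()
    if normalized == "ALLOW":
        return True
    if normalized == "BLOCK":
        return False
    raise ValueError(f"Unexpected verdict: {verdict!r}")
```

In practice, Portkey applies the guardrail action you configured (e.g. deny the request) automatically; a helper like this is only needed if you inspect verdicts yourself.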
3. Add Guardrail ID to a Config and Make Your Request
- When you save a Guardrail, you’ll get an associated Guardrail ID - add this ID to the input_guardrails or output_guardrails params in your Portkey Config
- Create these Configs in Portkey UI, save them, and get an associated Config ID to attach to your requests. More here.
Here’s an example config:

```json
{
  "input_guardrails": ["guardrails-id-xxx", "guardrails-id-yyy"],
  "output_guardrails": ["guardrails-id-xxx", "guardrails-id-yyy"]
}
```
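Because the SDK accepts a config object as well as a config ID, the same shape can also be assembled in code. A minimal sketch, assuming placeholder guardrail IDs (substitute the IDs shown in the Portkey UI):

```python
import json

# Placeholder guardrail IDs from the Portkey UI (assumed values).
guardrail_ids = ["guardrails-id-xxx", "guardrails-id-yyy"]

config = {
    "input_guardrails": guardrail_ids,   # checks run on inbound prompts
    "output_guardrails": guardrail_ids,  # checks run on model responses
}

# Serialize to the same JSON shape shown above.
print(json.dumps(config, indent=2))
```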
**NodeJS**

```js
import Portkey from 'portkey-ai';

const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY",
  config: "pc-***" // Supports a string config id or a config object
});
```

**Python**

```python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    config="pc-***"  # Supports a string config id or a config object
)
```

**OpenAI NodeJS**

```js
import OpenAI from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai';

const openai = new OpenAI({
  apiKey: 'OPENAI_API_KEY',
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    apiKey: "PORTKEY_API_KEY",
    config: "CONFIG_ID"
  })
});
```

**OpenAI Python**

```python
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

client = OpenAI(
    api_key="OPENAI_API_KEY",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        provider="openai",
        api_key="PORTKEY_API_KEY",
        config="CONFIG_ID"
    )
)
```

**cURL**

```sh
curl https://api.portkey.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -H "x-portkey-config: $CONFIG_ID" \
  -d '{
    "model": "gpt-5.1-nano",
    "messages": [{
      "role": "user",
      "content": "Hello!"
    }]
  }'
```
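As the cURL example shows, the gateway picks up your config from the `x-portkey-config` header. A minimal sketch of assembling those same headers and body in Python, without sending the request (the key values are placeholders):

```python
import json
import os

# Placeholder credentials; in practice these come from your environment.
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', 'OPENAI_API_KEY')}",
    "x-portkey-api-key": os.environ.get("PORTKEY_API_KEY", "PORTKEY_API_KEY"),
    "x-portkey-config": os.environ.get("CONFIG_ID", "CONFIG_ID"),
}

# Same request body as the cURL example above.
body = json.dumps({
    "model": "gpt-5.1-nano",
    "messages": [{"role": "user", "content": "Hello!"}],
})
```

Any HTTP client (e.g. `requests.post("https://api.portkey.ai/v1/chat/completions", headers=headers, data=body)`) can then send the request through the gateway.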
For more, refer to the Config documentation.
Your requests are now guarded by Zscaler AI Guard, and you can see the verdict and any actions taken directly in your Portkey logs!
Get Support
If you face any issues with the Zscaler AI Guard integration, join the Portkey community forum for assistance.

Last modified on February 25, 2026