List of Guardrail Checks
Each Guardrail Check has a specific purpose, its own parameters, supported hooks, and sources.
Partner Guardrails

Acuvity
- Scan prompts and responses for PII, toxicity, prompt injection detection, and more.

Aporia
- Validate Aporia policies
- Define your Aporia policies on your Aporia dashboard and just pass the project ID in the Portkey Guardrail check.

Azure
- Detect and Redact PII
- Apply Azure’s Content Safety checks

AWS Bedrock
- Analyze and redact PII to avoid model manipulation
- Bring your AWS Guardrails directly inside Portkey and more!

Lasso Security
- Scan content: Lasso Security's Deputies analyze content for various security risks, including jailbreak attempts, custom policy violations, and more.

Mistral
- Detect and filter harmful content across multiple dimensions

Pangea
- Text Guard for scanning LLM inputs and outputs
- Analyze and redact text to avoid model manipulation
- Detect malicious content and undesirable data transfers

Patronus
- Hallucination detection
- Check for conciseness, helpfulness, politeness
- Check for gender, racial bias
- and more!

Pillar
- Scan prompts and responses for PII, toxicity, prompt injection detection, and more.

Prompt Security
- Scan Prompts
- Scan Responses
The logic for all of the Guardrail Checks (including Partner Guardrails) is open source.
Bring Your Own Guardrail
We have built Guardrails in a very modular way, and support bringing your own Guardrail using a custom webhook! Learn more here.
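As a rough sketch of what a custom webhook guardrail can look like, the handler below receives a payload from Portkey, runs its own check, and answers with a boolean verdict. The endpoint path, the payload field name, and the exact response shape (a `verdict` boolean) are assumptions here; confirm them against the Bring Your Own Guardrail documentation.

```python
# Minimal sketch of a custom webhook guardrail endpoint.
# Assumptions: the payload carries the text to check under a "text" field
# and Portkey expects a boolean "verdict" in the response -- verify both
# against the Webhook guardrail docs before relying on this.
from fastapi import FastAPI, Request

app = FastAPI()

BLOCKED_TERMS = {"internal-project-codename", "api_key"}

@app.post("/guardrail")
async def check(request: Request) -> dict:
    payload = await request.json()
    # Hypothetical field name; inspect the actual payload Portkey sends.
    text = str(payload.get("text", ""))
    passed = not any(term in text.lower() for term in BLOCKED_TERMS)
    return {"verdict": passed}
```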
Portkey’s Guardrails
Along with the partner Guardrails, Portkey also natively supports deterministic as well as LLM-based Guardrails.
BASIC Guardrails are available on all Portkey plans.
PRO Guardrails are available on Portkey Pro & Enterprise plans.
BASIC — Deterministic Guardrails
Regex Match
Checks if the request or response text matches a regex pattern.
Parameters: rule: string
Supported On: input_guardrails, output_guardrails
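For orientation, here is a hypothetical sketch of how a check such as Regex Match and its parameters could sit inside a Portkey config as a raw guardrail. The key names ("checks", "id", "parameters", "deny") and the check id "default.regexMatch" are assumptions, so copy the exact shape from the Guardrails docs or the dashboard rather than from this sketch.

```python
# Hypothetical sketch of a raw guardrail entry in a Portkey config.
# Key names and the check id below are assumptions -- verify against
# the Portkey Guardrails documentation.
portkey_config = {
    "input_guardrails": [
        {
            "checks": [
                {
                    "id": "default.regexMatch",            # Regex Match check (assumed id)
                    "parameters": {"rule": r"\b\d{16}\b"},  # e.g. flag 16-digit card numbers
                }
            ],
            "deny": True,  # assumed flag: fail the request if the check does not pass
        }
    ]
}
```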
Sentence Count
Checks if the content contains a certain number of sentences. Ranges allowed.
Parameters: minSentences: number, maxSentences: number
Supported On: input_guardrails, output_guardrails
Word Count
Checks if the content contains a certain number of words. Ranges allowed.
Parameters: minWords: number, maxWords: number
Supported On: input_guardrails, output_guardrails
Character Count
Checks if the content contains a certain number of characters. Ranges allowed.
Parameters: minCharacters: number, maxCharacters: number
Supported On: input_guardrails, output_guardrails
JSON Schema
Check if the response JSON matches a JSON schema.
Parameters: schema: json
Supported On: output_guardrails only
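To make the schema parameter concrete, the value below is an ordinary JSON Schema that would require the response to be an object with a mandatory string field; the field names are invented for illustration.

```python
# Example value for the "schema" parameter of the JSON Schema check.
# The schema itself is standard JSON Schema; the field names below
# ("answer", "confidence") are illustrative only.
json_schema_parameter = {
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
    },
    "required": ["answer"],
}
```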
JSON Keys
Check if the response JSON contains any, all or none of the mentioned keys.
Parameters: keys: array, operator: string
Supported On: output_guardrails only
Contains
Checks if the content contains any, all or none of the words or phrases.
Parameters: words: array, operator: string
Supported On: output_guardrails only
Valid URLs
Checks if all the URLs mentioned in the content are valid.
Parameters: onlyDNS: boolean
Supported On: output_guardrails only
Contains Code
Checks if the content contains code of format SQL, Python, TypeScript, etc.
Parameters: format: string
Supported On: output_guardrails only
Lowercase Detection
Check if the given string is lowercase or not.
Parameters: format: string
Supported On: input_guardrails, output_guardrails
Ends With
Check if the content ends with a specified string.
Parameters: Suffix: string
Supported On: input_guardrails, output_guardrails
Webhook
Makes a webhook request for custom guardrails.
Parameters: webhookURL: string, headers: json
Supported On: input_guardrails, output_guardrails
JWT Token Validator
Check if the JWT token is valid.
Parameters: JWKS URI: string, Header Key: string, Cache Max Age: number, Clock Tolerance: number, Max Token Age: number (in seconds)
Supported On: input_guardrails
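For readers unfamiliar with these parameters, the sketch below maps them onto standard JWT validation steps using the PyJWT library. It is an illustration of the underlying checks only, not Portkey's implementation; the function name and defaults are made up for the example.

```python
# Illustration of what the JWT Token Validator parameters correspond to,
# using PyJWT. Not Portkey's internal code.
import time
import jwt  # PyJWT
from jwt import PyJWKClient

def validate_token(token: str, jwks_uri: str,
                   clock_tolerance: int = 5, max_token_age: int = 3600) -> dict:
    # JWKS URI: where the signing keys are fetched from.
    # (Cache Max Age would control how long fetched keys are reused;
    #  Header Key names the request header that carries the token.)
    signing_key = PyJWKClient(jwks_uri).get_signing_key_from_jwt(token)
    # Clock Tolerance: leeway (seconds) when checking exp/nbf claims.
    claims = jwt.decode(
        token, signing_key.key, algorithms=["RS256"],
        leeway=clock_tolerance,
        options={"verify_aud": False},  # audience check omitted in this sketch
    )
    # Max Token Age: reject tokens issued more than N seconds ago.
    if "iat" in claims and time.time() - claims["iat"] > max_token_age:
        raise jwt.InvalidTokenError("token older than max_token_age")
    return claims
```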
Model Whitelist
Check if the inference model to be used is in the whitelist.
Parameters: Models: array, Inverse: boolean
Supported On: input_guardrails
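A rough functional equivalent of this check is shown below, assuming the Inverse flag simply flips the result and turns the whitelist into a blocklist; this is a plain-Python illustration, not Portkey's code.

```python
# Plain-Python illustration of the assumed Model Whitelist semantics.
def model_allowed(model: str, models: list[str], inverse: bool = False) -> bool:
    in_list = model in models
    # inverse=False: pass only if the model is in the list (whitelist).
    # inverse=True:  pass only if the model is NOT in the list (blocklist).
    return not in_list if inverse else in_list

# Example: allow only these models for requests through the gateway.
assert model_allowed("gpt-4o", ["gpt-4o", "gpt-4o-mini"]) is True
assert model_allowed("gpt-3.5-turbo", ["gpt-4o", "gpt-4o-mini"]) is False
```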
PRO — LLM Guardrails
Moderate Content
Checks if the content passes the mentioned content moderation checks.
Parameters: categories: array
Supported On: input_guardrails only
Check Language
Checks if the content is in the mentioned language.
Parameters: language: string
Supported On: input_guardrails only
Detect PII
Detects Personally Identifiable Information (PII) in the content.
Parameters: categories: array
Supported On: input_guardrails, output_guardrails
Detect Gibberish
Detects if the content is gibberish.
Parameters: boolean
Supported On: input_guardrails, output_guardrails
You can now have configurable timeouts for Partner & Pro Guardrails!
Contribute Your Guardrail
Integrate your Guardrail platform with Portkey Gateway and reach our growing user base. Check out some existing integrations to get started.