A Guardrail Check is a specific rule or validation applied to your AI traffic. You can apply checks to the Input (to sanitize user prompts before they reach the LLM) or the Output (to validate the LLM’s response before sending it back to your user). You can combine multiple checks to build a robust Guardrail.
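The flow above can be sketched as a pipeline of checks, where a guardrail passes only when every check returns a true verdict. All names here (`run_checks`, the individual check functions) are illustrative, not part of Portkey's SDK:

```python
# Hypothetical sketch of chaining guardrail checks on input or output text.

def check_word_count(text, min_words=1, max_words=500):
    """Verdict is True when the word count falls within the range."""
    n = len(text.split())
    return min_words <= n <= max_words

def check_no_banned_words(text, banned=("password",)):
    """Verdict is True when no banned term appears in the text."""
    return not any(w in text.lower() for w in banned)

def run_checks(text, checks):
    """A guardrail passes only when every configured check returns True."""
    return all(check(text) for check in checks)

input_guardrail = [check_word_count, check_no_banned_words]
print(run_checks("Summarize this article for me.", input_guardrail))  # True
```

The same pattern applies on the output side: run the checks against the model's response before returning it to the user.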
You can now have configurable timeouts for Partner & Pro Guardrails!
Partner Guardrails
Each Guardrail Check has a specific purpose, its own parameters, supported hooks, and sources.

Acuvity
- Scan prompts and responses for security threats
- Detect PII, toxicity, and prompt injections
- Real-time content analysis and filtering

Aporia
- Validate custom Aporia policies via project ID
- Define policies on your Aporia dashboard
- Seamless integration with Portkey checks

AWS Bedrock
- Analyze and redact PII to prevent manipulation
- Integrate AWS Guardrails directly in Portkey
- Advanced security policy enforcement

Azure
- Detect and redact sensitive PII data
- Apply Azure’s comprehensive content safety checks
- Detect jailbreaks and prompt injections with Shield Prompt
- Identify copyrighted content with Protected Material detection

Javelin (Highflame)
- Secure your entire AI journey with visibility at every step
- Protection against emerging AI threats across GenAI, Agents & MCPs
- The confidence to innovate at scale

Lasso Security
- Analyze content for security risks and jailbreaks
- Enforce custom policy violations detection
- AI-powered threat detection and prevention

Mistral
- Detect and filter harmful content automatically
- Multi-dimensional content safety analysis
- Real-time moderation capabilities

Pangea
- Guard LLM inputs and outputs with Text Guard
- Detect malicious content and data transfers
- Prevent model manipulation attempts

Palo Alto Networks Prisma AIRS
- Real-time threat detection across all OSI layers (1-7)
- Block prompt injections, data leakage, and model DoS attacks

Patronus
- Detect hallucinations and factual errors
- Assess quality: conciseness, helpfulness, tone
- Identify gender and racial bias in outputs

Pillar
- Scan prompts and responses comprehensively
- Detect PII, toxicity, and injection attacks
- Enterprise security and compliance features

Prompt Security
- Scan prompts for security vulnerabilities
- Analyze responses for policy violations
- Advanced threat detection and mitigation

Qualifire
- Best-in-class evaluation and guardrails for agents, RAG, and chatbots
- Detect and mitigate hallucinations, grounding issues, and custom policy violations

CrowdStrike AIDR
- Pass LLM input and output to guard chat completions
- Block or sanitize text depending on configured rules

Exa
- Provides business-grade search and crawling for any web data

F5 Guardrails
- Advanced content moderation and PII detection capabilities for LLM inputs and outputs

Walled AI
- Ensure the safety and compliance of your LLM inputs
- Generic safety checks, greetings, PII, and compliance checks

Zscaler
- Integrates Zscaler AI Guard to perform security checks on both inbound prompts and outbound LLM responses
Portkey’s Guardrails
Alongside the partner Guardrails, Portkey natively supports both deterministic and LLM-based Guardrails.
BASIC Guardrails are available on all Portkey plans.
PRO Guardrails are available on Portkey Pro & Enterprise plans.
Understanding Supported Hooks
input_guardrails: Runs on the user’s prompt before calling the model.
output_guardrails: Runs on the model’s generated text after the call.
BASIC — Deterministic Guardrails
Basic deterministic guardrails are ideal for quick, hard-coded validations that require zero LLM overhead. Implement these when you need to enforce strict data structures (like JSON schema), exact regex matches, or basic text limits (like word counts) with absolute certainty and minimal latency.
Text & Format Validation
| Guardrail Check | Description | Parameters | Supported On |
|---|---|---|---|
| Regex Match | Checks if the request or response text matches a regex pattern. | rule: string | Input, Output |
| Sentence Count | Checks if the content contains a certain number of sentences. Ranges allowed. | minSentences: number, maxSentences: number | Input, Output |
| Word Count | Checks if the content contains a certain number of words. Ranges allowed. | minWords: number, maxWords: number | Input, Output |
| Character Count | Checks if the content contains a certain number of characters. Ranges allowed. | minCharacters: number, maxCharacters: number | Input, Output |
| Uppercase Check | Checks if the content is entirely uppercase. | not: boolean | Input, Output |
| Lowercase Detection | Checks if the content is entirely lowercase. | format: string | Input, Output |
| Ends With | Checks if the content ends with a specified string. | suffix: string | Input, Output |
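The text and format checks above are deterministic, so their behavior can be mirrored locally. A minimal sketch of the Regex Match and Sentence Count logic (the function names are illustrative, and the sentence-splitting heuristic may differ from Portkey's exact rules):

```python
import re

def regex_match(text, rule):
    """Mirrors the Regex Match check: verdict is True when the pattern matches."""
    return re.search(rule, text) is not None

def sentence_count(text, min_sentences, max_sentences):
    """Rough sentence split on ., ! and ? terminators."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return min_sentences <= len(sentences) <= max_sentences

print(regex_match("Order #4521 confirmed", r"#\d+"))   # True
print(sentence_count("Hi there. How are you?", 1, 3))  # True
```

Because these checks are pure string operations, they add effectively zero latency compared with LLM-based validations.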
Data & Structure
| Guardrail Check | Description | Parameters | Supported On |
|---|---|---|---|
| JSON Schema | Checks if the response JSON matches a JSON schema. | schema: json | Output only |
| JSON Keys | Checks if the response JSON contains any, all, or none of the specified keys. | keys: array, operator: string | Output only |
| Valid URLs | Checks if all the URLs mentioned in the content are valid. | onlyDNS: boolean | Output only |
| Contains Code | Checks if the content contains code of format SQL, Python, TypeScript, etc. | format: string | Output only |
| Not Null | Checks if the response content is not null, undefined, or empty. | not: boolean | Output only |
| Contains | Checks if the content contains any, all or none of the words or phrases. | words: array, operator: string | Output only |
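To illustrate the `any` / `all` / `none` operator used by the JSON Keys and Contains checks, here is a hedged local sketch (the function name and failure behavior on invalid JSON are assumptions, not Portkey's implementation):

```python
import json

def json_keys_check(response_text, keys, operator="all"):
    """Does the response JSON contain any / all / none of the given keys?"""
    try:
        data = json.loads(response_text)
    except json.JSONDecodeError:
        return False  # non-JSON output fails the check in this sketch
    present = [k in data for k in keys]
    if operator == "all":
        return all(present)
    if operator == "any":
        return any(present)
    if operator == "none":
        return not any(present)
    raise ValueError(f"unknown operator: {operator}")

print(json_keys_check('{"name": "Ada", "age": 36}', ["name", "age"], "all"))  # True
print(json_keys_check('{"name": "Ada"}', ["ssn"], "none"))                    # True
```

The `none` operator is useful as a negative guard, e.g. asserting that a structured response never includes a sensitive field.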
Security & Auth
| Guardrail Check | Description | Parameters | Supported On |
|---|---|---|---|
| JWT Token Validator | Validate JWT tokens with signature verification (JWKS), token introspection, claim validation, and extract claims. | Multiple | Input only |
| Request Parameters Check | Control which AI tools and request parameters can be used. | tools: object, params: object | Input only |
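To show what claim validation involves, here is a stdlib-only sketch. Note the real JWT Token Validator verifies signatures against a JWKS; this sketch deliberately skips signature verification and only demonstrates decoding the payload and checking `iss` and `exp` claims (all function names are hypothetical):

```python
import base64
import json
import time

def decode_jwt_payload(token):
    """Decode a JWT payload WITHOUT verifying the signature (demo only)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def validate_claims(token, required_issuer, now=None):
    """Verdict is True when the issuer matches and the token is unexpired."""
    claims = decode_jwt_payload(token)
    now = time.time() if now is None else now
    return claims.get("iss") == required_issuer and claims.get("exp", 0) > now

def make_unsigned_token(claims):
    """Build an unsigned demo token (never do this in production)."""
    enc = lambda obj: base64.urlsafe_b64encode(
        json.dumps(obj).encode()).decode().rstrip("=")
    return enc({"alg": "none"}) + "." + enc(claims) + "."

tok = make_unsigned_token({"iss": "https://auth.example.com", "exp": 9999999999})
print(validate_claims(tok, "https://auth.example.com"))  # True
```

In production, always verify the signature before trusting any claim; an unverified payload can be forged trivially, which is exactly what signature verification via JWKS prevents.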
Routing & Control
| Guardrail Check | Description | Parameters | Supported On |
|---|---|---|---|
| Model Whitelist | Checks if the inference model to be used is in the whitelist. | models: array, inverse: boolean | Input only |
| Model Rules | Allow requests based on metadata-driven rules mapping to allowed models. | rules: object, not: boolean | Input only |
| Allowed Request Types | Control which request types (endpoints) can be processed using an allowlist or blocklist. | allowedTypes: array, blockedTypes: array | Input only |
| Required Metadata Keys | Checks if the metadata contains all the required keys. | metadataKeys: array, operator: string | Input only |
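A hedged sketch of the whitelist and metadata-key logic (function names are illustrative; the `inverse` flag is shown turning the whitelist into a blocklist, which is an assumption about its semantics):

```python
def model_whitelist(model, models, inverse=False):
    """Verdict is True when the model is allowed; inverse=True flips
    the list into a blocklist."""
    allowed = model in models
    return not allowed if inverse else allowed

def required_metadata_keys(metadata, metadata_keys, operator="all"):
    """Verdict is True when all (or any) required keys are present."""
    present = [k in metadata for k in metadata_keys]
    return all(present) if operator == "all" else any(present)

print(model_whitelist("gpt-4o", ["gpt-4o", "claude-3-5-sonnet"]))       # True
print(required_metadata_keys({"team": "ml", "env": "prod"}, ["team"]))  # True
```

Combined with metadata-driven Model Rules, checks like these let you restrict which teams can route traffic to which models before a request ever reaches a provider.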
Extensibility
| Guardrail Check | Description | Parameters | Supported On |
|---|---|---|---|
| Webhook | Makes a webhook request for custom guardrails. | webhookURL: string, headers: json | Input, Output |
| Log | Makes a request to a log URL and always gives true as the verdict. | logURL: string, headers: json | Output only |
PRO — LLM Guardrails
Pro guardrails leverage LLMs to perform nuanced, semantic validations. Implement these when you need to detect complex concepts like PII, toxicity, or gibberish that cannot be reliably caught with simple regex or deterministic rules.
| Guardrail Check | Description | Parameters | Supported On |
|---|---|---|---|
| Moderate Content | Checks if the content passes the mentioned content moderation checks. | categories: array | Input only |
| Check Language | Checks if the response content is in the mentioned language. | language: string | Input only |
| Detect PII | Detects Personally Identifiable Information (PII) in the content. | categories: array | Input, Output |
| Detect Gibberish | Detects if the content is gibberish. | boolean | Input, Output |
Bring Your Own Guardrail
Guardrails are built to be modular, so you can bring your own Guardrail using a custom webhook. Learn more here.
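A minimal sketch of what a custom webhook handler might look like. The request and response shapes here (`{"text": ...}` in, `{"verdict": ...}` out) are assumptions for illustration; consult the webhook documentation for the exact payload contract:

```python
import json

def handle_guardrail_webhook(request_body):
    """Hypothetical webhook handler: return a pass/fail verdict for the text."""
    payload = json.loads(request_body)
    text = payload.get("text", "")
    # Example policy: block anything mentioning an internal codename
    # ("project-hermes" is a made-up example).
    blocked_terms = ["project-hermes"]
    verdict = not any(term in text.lower() for term in blocked_terms)
    return json.dumps({"verdict": verdict})

print(handle_guardrail_webhook('{"text": "Tell me about Project-Hermes"}'))
# {"verdict": false}
```

In a real deployment this function would sit behind an HTTP endpoint that Portkey calls with the request or response text, and the returned verdict would decide whether the traffic proceeds.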
Contribute Your Guardrail
Integrate your Guardrail platform with Portkey Gateway and reach our growing user base.
Check out some existing integrations to get started.