A Guardrail Check is a specific rule or validation applied to your AI traffic. You can apply checks to the Input (to sanitize user prompts before they reach the LLM) or the Output (to validate the LLM’s response before sending it back to your user). You can combine multiple checks to build a robust Guardrail.
Configurable timeouts are now available for Partner & Pro Guardrails!

Partner Guardrails

Each Guardrail Check has a specific purpose, its own parameters, supported hooks, and sources.
Acuvity

  • Scan prompts and responses for security threats
  • Detect PII, toxicity, and prompt injections
  • Real-time content analysis and filtering
Aporia

  • Validate custom Aporia policies via project ID
  • Define policies on your Aporia dashboard
  • Seamless integration with Portkey checks
AWS Bedrock

  • Analyze and redact PII to prevent manipulation
  • Integrate AWS Guardrails directly in Portkey
  • Advanced security policy enforcement
Azure

  • Detect and redact sensitive PII data
  • Apply Azure’s comprehensive content safety checks
  • Detect jailbreaks and prompt injections with Shield Prompt
  • Identify copyrighted content with Protected Material detection
Javelin (Highflame)

  • Secure your entire AI journey with visibility at every step
  • Protect against emerging AI threats across GenAI, Agents & MCPs
  • Innovate at scale with confidence
Lasso Security

  • Analyze content for security risks and jailbreaks
  • Enforce custom policy violations detection
  • AI-powered threat detection and prevention
Mistral

  • Detect and filter harmful content automatically
  • Multi-dimensional content safety analysis
  • Real-time moderation capabilities
Pangea

  • Guard LLM inputs and outputs with Text Guard
  • Detect malicious content and data transfers
  • Prevent model manipulation attempts
Palo Alto Networks Prisma AIRS

  • Real-time threat detection across all OSI layers (1-7)
  • Block prompt injections, data leakage, and model DoS attacks
Patronus

  • Detect hallucinations and factual errors
  • Assess quality: conciseness, helpfulness, tone
  • Identify gender and racial bias in outputs
Pillar

  • Scan prompts and responses comprehensively
  • Detect PII, toxicity, and injection attacks
  • Enterprise security and compliance features
Prompt Security

  • Scan prompts for security vulnerabilities
  • Analyze responses for policy violations
  • Advanced threat detection and mitigation
Qualifire

  • Best-in-class evaluation and guardrails for agents, RAG, and chatbots
  • Detect and mitigate hallucinations, grounding issues, and custom policy violations
CrowdStrike AIDR

  • Pass LLM Input and Output to guard chat completions
  • Block or sanitize text depending on configured rules
Exa

  • Provides business-grade search and crawling for any web data
F5 Guardrails

  • Advanced content moderation and PII detection capabilities for LLM inputs and outputs
Walled AI

  • Ensure the safety and compliance of your LLM inputs
  • Generic safety checks, greetings, PII, and compliance checks
Zscaler

  • Integrates Zscaler AI Guard to perform security checks on both inbound prompts and outbound LLM responses

Portkey’s Guardrails

Alongside the Partner Guardrails, Portkey natively supports both deterministic and LLM-based Guardrails.
  • BASIC Guardrails are available on all Portkey plans.
  • PRO Guardrails are available on Portkey Pro & Enterprise plans.

Understanding Supported Hooks

  • input_guardrails: Runs on the user’s prompt before calling the model.
  • output_guardrails: Runs on the model’s generated text after the call.
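As an illustration of how these hooks fit into a gateway config, here is a minimal sketch. The key names (`input_guardrails`, `output_guardrails`) and the guardrail IDs are assumptions for illustration; check the Configs documentation for the authoritative schema.

```python
import json

# Illustrative only: attach one guardrail to each hook.
# "my-input-guardrail" / "my-output-guardrail" are placeholder IDs for
# guardrails created in the Portkey UI; the surrounding keys are an
# assumed shape, not the authoritative config schema.
config = {
    # Runs on the user's prompt before the model is called
    "input_guardrails": ["my-input-guardrail"],
    # Runs on the model's generated text after the call
    "output_guardrails": ["my-output-guardrail"],
}

print(json.dumps(config, indent=2))
```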

BASIC — Deterministic Guardrails

Basic deterministic guardrails are ideal for quick, hard-coded validations that require zero LLM overhead. Implement these when you need to enforce strict data structures (like JSON schema), exact regex matches, or basic text limits (like word counts) with absolute certainty and minimal latency.

Text & Format Validation

| Guardrail Check | Description | Parameters | Supported On |
| --- | --- | --- | --- |
| Regex Match | Checks if the request or response text matches a regex pattern. | rule: string | Input, Output |
| Sentence Count | Checks if the content contains a certain number of sentences. Ranges allowed. | minSentences: number, maxSentences: number | Input, Output |
| Word Count | Checks if the content contains a certain number of words. Ranges allowed. | minWords: number, maxWords: number | Input, Output |
| Character Count | Checks if the content contains a certain number of characters. Ranges allowed. | minCharacters: number, maxCharacters: number | Input, Output |
| Uppercase Check | Checks if the content is all uppercase. | not: boolean | Input, Output |
| Lowercase Detection | Checks if the content is all lowercase. | format: string | Input, Output |
| Ends With | Checks if the content ends with a specified string. | Suffix: string | Input, Output |
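To make the semantics concrete, here is a standalone sketch of how checks like Regex Match, Word Count, and Ends With behave. These are local Python approximations for illustration, not Portkey's implementation; the function parameters mirror the table above.

```python
import re

def regex_match(text: str, rule: str) -> bool:
    # Passes when the text matches the given regex pattern
    return re.search(rule, text) is not None

def word_count(text: str, min_words: int, max_words: int) -> bool:
    # Passes when the word count falls inside the inclusive range
    n = len(text.split())
    return min_words <= n <= max_words

def ends_with(text: str, suffix: str) -> bool:
    # Passes when the content ends with the given string
    return text.strip().endswith(suffix)

prompt = "Summarize this report in three sentences."
print(regex_match(prompt, r"\breport\b"))  # True
print(word_count(prompt, 3, 20))           # True (6 words)
print(ends_with(prompt, "."))              # True
```

Because these checks are pure string operations, they run with near-zero latency and give the same verdict every time, which is what makes them "deterministic".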

Data & Structure

| Guardrail Check | Description | Parameters | Supported On |
| --- | --- | --- | --- |
| JSON Schema | Checks if the response JSON matches a JSON schema. | schema: json | Output only |
| JSON Keys | Checks if the response JSON contains any, all, or none of the specified keys. | keys: array, operator: string | Output only |
| Valid URLs | Checks if all the URLs mentioned in the content are valid. | onlyDNS: boolean | Output only |
| Contains Code | Checks if the content contains code in a given format (SQL, Python, TypeScript, etc.). | format: string | Output only |
| Not Null | Checks if the response content is not null, undefined, or empty. | not: boolean | Output only |
| Contains | Checks if the content contains any, all, or none of the specified words or phrases. | words: array, operator: string | Output only |
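The any/all/none operator used by JSON Keys (and Contains) can be sketched as follows. This is a local illustration of the operator semantics, not Portkey's implementation.

```python
def json_keys_check(payload: dict, keys: list, operator: str) -> bool:
    # operator is "any", "all", or "none" — mirrors the JSON Keys check
    present = [k in payload for k in keys]
    if operator == "any":
        return any(present)
    if operator == "all":
        return all(present)
    if operator == "none":
        return not any(present)
    raise ValueError(f"unknown operator: {operator}")

response = {"name": "Ada", "role": "admin"}
print(json_keys_check(response, ["name", "email"], "any"))  # True
print(json_keys_check(response, ["name", "role"], "all"))   # True
print(json_keys_check(response, ["password"], "none"))      # True
```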

Security & Auth

| Guardrail Check | Description | Parameters | Supported On |
| --- | --- | --- | --- |
| JWT Token Validator | Validate JWT tokens with signature verification (JWKS), token introspection, claim validation, and extract claims. | Multiple | Input only |
| Request Parameters Check | Control which AI tools and request parameters can be used. | tools: object, params: object | Input only |

Routing & Control

| Guardrail Check | Description | Parameters | Supported On |
| --- | --- | --- | --- |
| Model Whitelist | Checks if the inference model to be used is in the whitelist. | Models: array, Inverse: boolean | Input only |
| Model Rules | Allows requests based on metadata-driven rules mapping to allowed models. | rules: object, not: boolean | Input only |
| Allowed Request Types | Controls which request types (endpoints) can be processed, using an allowlist or blocklist. | allowedTypes: array, blockedTypes: array | Input only |
| Required Metadata Keys | Checks if the metadata contains all the required keys. | metadataKeys: array, operator: string | Input only |
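As a local illustration of the Model Whitelist and Required Metadata Keys semantics (again a sketch, not Portkey's implementation; the model names and metadata keys below are placeholders):

```python
def model_whitelist(model: str, models: list, inverse: bool = False) -> bool:
    # Passes when the model is in the whitelist, or NOT in it when inverse=True
    allowed = model in models
    return not allowed if inverse else allowed

def required_metadata_keys(metadata: dict, metadata_keys: list) -> bool:
    # Passes only when every required key is present in the request metadata
    return all(k in metadata for k in metadata_keys)

# Placeholder model names and metadata for illustration
print(model_whitelist("gpt-4o", ["gpt-4o", "claude-sonnet-4"]))            # True
print(model_whitelist("gpt-4o", ["gpt-4o"], inverse=True))                 # False
print(required_metadata_keys({"team": "ml", "env": "prod"}, ["team", "env"]))  # True
```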

Extensibility

| Guardrail Check | Description | Parameters | Supported On |
| --- | --- | --- | --- |
| Webhook | Makes a webhook request for custom guardrails. | webhookURL: string, headers: json | Input, Output |
| Log | Makes a request to a log URL and always gives true as the verdict. | logURL: string, headers: json | Output only |
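A custom webhook guardrail is simply an endpoint that inspects the text and returns a verdict. The sketch below assumes a request body with a `text` field and a response body with a boolean `verdict` field; these field names are assumptions for illustration, so verify the exact request/response contract against the webhook guardrail documentation.

```python
import json

def webhook_handler(request_body: str) -> str:
    # Sketch of a custom guardrail webhook handler. Field names ("text",
    # "verdict") are assumed, not the authoritative Portkey contract.
    payload = json.loads(request_body)
    text = payload.get("text", "")
    # Example custom rule: block anything mentioning an internal codename
    verdict = "project-zeus" not in text.lower()
    return json.dumps({"verdict": verdict})

print(webhook_handler(json.dumps({"text": "Tell me about Project-Zeus"})))
# prints {"verdict": false}
```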

PRO — LLM Guardrails

Pro guardrails leverage LLMs to perform nuanced, semantic validations. Implement these when you need to detect complex concepts like PII, toxicity, or gibberish that cannot be reliably caught with simple regex or deterministic rules.
| Guardrail Check | Description | Parameters | Supported On |
| --- | --- | --- | --- |
| Moderate Content | Checks if the content passes the specified content moderation checks. | categories: array | Input only |
| Check Language | Checks if the content is in the specified language. | language: string | Input only |
| Detect PII | Detects Personally Identifiable Information (PII) in the content. | categories: array | Input, Output |
| Detect Gibberish | Detects if the content is gibberish. | boolean | Input, Output |

Bring Your Own Guardrail

Guardrails are built in a modular way, so you can bring your own Guardrail using a custom webhook. Learn more here.

Contribute Your Guardrail

Integrate your Guardrail platform with Portkey Gateway and reach our growing user base. Check out some existing integrations to get started.
Last modified on March 11, 2026