OpenAI Guardrails


Decide whether to block, log, or proceed with requests based on OpenAI Guardrail verdicts. Configure verdict actions like async checks, request denial, and feedback logging—tailored to your enforcement needs.


World-Class Guardrail Partners

Integrate top guardrail platforms with Portkey to run your custom policies seamlessly.

  • Mistral
  • Prompt Security
  • Patronus
  • Pillar
  • Lasso
  • Pangea
  • Bedrock
  • Azure
  • Promptfoo
  • Aporia
  • Acuvity
  • Exa

OpenAI Guardrails

OpenAI powers some of the most advanced language models in the world, but running them in production requires more than just raw capability. With Portkey, you can apply customizable guardrails to every OpenAI request, ensuring safety, compliance, and control without changing your application code.

OpenAI is a leading AI provider offering models like o3, GPT-4, Whisper, and more. These models are widely used for building applications across chat, summarization, document analysis, code generation, and multimodal tasks.

Portkey supports all OpenAI models via its gateway, making it easy to standardize guardrails, monitor usage, and route calls intelligently.

Guardrail checks you can apply to OpenAI


Portkey offers both deterministic and LLM-powered guardrails that work seamlessly with OpenAI’s APIs. You can apply these checks to inputs, outputs, or both.

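To make "inputs, outputs, or both" concrete, the sketch below shows a gateway config that attaches one set of guardrail checks to prompts and another to responses. The key names (`input_guardrails`, `output_guardrails`) follow Portkey's config format as documented; the IDs are placeholders for guardrails you create in the Portkey app.

```python
import json

# Hedged sketch of a Portkey gateway config: one guardrail list runs on the
# prompt (input), another on the model's response (output). The IDs below
# are placeholders for guardrails created in the Portkey dashboard.
config = {
    "input_guardrails": ["pii-check-id", "regex-check-id"],
    "output_guardrails": ["json-schema-check-id"],
}

# The config travels with each request as JSON (e.g. via a config header).
config_header = json.dumps(config)
print(config_header)
```

Because the config rides alongside the request, the same checks apply no matter which OpenAI model the call targets.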

Input guardrails


Regex Match

Basic

Enforce patterns on input prompts

Sentence / Word / Character Count

Basic

Control verbosity

Lowercase Detection

Basic

Detect all-lowercase input

Ends With

Basic

Validate specific prompt endings

Webhook

Basic

Enforce custom business logic

JWT Token Validator

Basic

Verify token authenticity

Model Whitelist

Basic

Allow only approved models per route


Moderate Content

Pro

Block unsafe or harmful prompts

Check Language

Pro

Enforce language constraints

Detect PII

Pro

Prevent sensitive info in prompts

Detect Gibberish

Pro

Block incoherent or low-quality input
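As an illustration of how the "Basic" deterministic checks above behave, here is a small sketch (not Portkey's actual implementation) of evaluating a Regex Match plus a word-count limit against a prompt before it is forwarded:

```python
import re

# Illustrative sketch of two deterministic input checks: a regex match and a
# word-count limit. The gateway computes verdicts like these before the
# prompt ever reaches OpenAI.
def check_input(prompt: str, pattern: str, max_words: int) -> dict:
    regex_ok = re.search(pattern, prompt) is not None
    count_ok = len(prompt.split()) <= max_words
    return {
        "verdict": regex_ok and count_ok,  # overall pass/fail
        "checks": {"regex_match": regex_ok, "word_count": count_ok},
    }

result = check_input("Summarize ticket #4521 in two lines", r"#\d+", max_words=50)
print(result["verdict"])  # True: pattern found and under the word limit
```

Because these checks are deterministic, they add negligible latency compared with the LLM-powered "Pro" checks.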

Output guardrails


Regex / Sentence / Word / Character Count

Basic

Enforce patterns and length limits on responses

JSON Schema / JSON Keys

Basic

Validate structured output against a schema or required keys

Contains

Basic

Ensure required words or phrases

Valid URLs

Basic

Validate links in responses

Contains Code

Basic

Detect code in specific formats

Lowercase Detection / Ends With

Basic

Check casing and required endings in responses

Webhook

Basic

Post-process or validate output

Detect PII / Detect Gibberish

Basic

Catch sensitive data or incoherent text in responses
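The "JSON Schema / JSON Keys" output check is worth a quick sketch: verify that a model response parses as JSON and contains the keys your application expects. This is an illustration, not Portkey's implementation; the key names are examples.

```python
import json

# Illustrative output check: does the response parse as JSON, and does it
# contain every required key? Free-text responses fail the check.
def check_output_json(response_text: str, required_keys: list[str]) -> bool:
    try:
        data = json.loads(response_text)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and all(k in data for k in required_keys)

good = check_output_json('{"summary": "ok", "score": 3}', ["summary", "score"])
bad = check_output_json("Sure! Here is the summary...", ["summary"])
print(good, bad)  # True False
```

Running a check like this on outputs lets you retry or deny malformed responses before they reach downstream code.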

Partner Guardrails
Acuvity

Acuvity is a model-agnostic GenAI security solution, built to secure existing and future GenAI models, apps, services, tools, plugins, and more.

✓ Scan Content: Comprehensive content safety and security checks.


Aporia

Aporia provides state-of-the-art Guardrails for any AI workload. With Aporia, you can set up powerful, multimodal AI Guardrails.

✓ Validate Project: Check all the policies within an Aporia project


Lasso Security

Lasso Security protects your GenAI apps from data leaks, prompt injections, and other potential risks, keeping your systems safe and secure.

✓ Analyse Content - Lasso Security's Deputies analyze content for various security risks, including jailbreak attempts, custom policy violations, sexual content, hate speech, illegal content, and more.


Mistral

Mistral moderation service helps detect and filter harmful content across multiple policy dimensions to secure your AI applications.

✓ Moderate Content: Checks if content passes selected content moderation checks


Pangea

Pangea AI Guard helps analyze and redact text to prevent model manipulation and malicious content.

✓ AI Guard - Analyze and redact text to avoid manipulation of the model and malicious content.


Azure Guardrails

Microsoft Azure offers robust content moderation and PII redaction services that can now be seamlessly integrated with Portkey’s Guardrails ecosystem.

✓ Azure Content Safety: A comprehensive content moderation service that detects harmful content, including hate speech, violence, sexual content, and self-harm references in text.


AWS Bedrock Guardrail

AWS Bedrock provides a comprehensive solution for securing your LLM applications, including content filtering, PII detection, redaction, and more.

✓ Add contextual grounding check - Validate that model responses are grounded in the reference source and relevant to the user's query, filtering out hallucinations.


Patronus AI

Patronus excels in industry-specific guardrails for RAG workflows.

✓ Retrieval Answer Relevance: Checks whether the answer is on-topic to the input question. Does not measure correctness.


Pillar Security

Pillar Security is an all-in-one platform that empowers organizations to monitor, assess risks, and secure their AI activities.

✓ Scan Prompt: Analyses your inputs for prompt injection, PII, secrets, toxic language, and invisible characters


Prompt Security

Prompt Security detects and protects against prompt injection, sensitive data exposure, and other AI security threats.

✓ Protect Prompt: Protect a user prompt before it is sent to the LLM


How to add guardrails to OpenAI with Portkey

Putting Portkey Guardrails in production is just a 4-step process:

Step 1
Create Guardrail Checks
Step 2
Create Guardrail Actions
Step 3
Enable Guardrail through Configs
Step 4
Attach the Config to a Request
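Steps 3 and 4 reduce to a small amount of configuration. Below is a hedged, stdlib-only sketch: a config that enables a guardrail, serialized into the headers a request through Portkey's gateway would carry. The guardrail ID and API key are placeholders, and the header names follow Portkey's documented conventions as understood here.

```python
import json

# Step 3: a config that enables a guardrail created in steps 1-2.
# "my-guardrail-id" is a placeholder for the ID from the Portkey app.
config = {"input_guardrails": ["my-guardrail-id"]}

# Step 4: attach the config to a request via Portkey's gateway headers.
# The API key is a placeholder; no request is actually sent here.
headers = {
    "x-portkey-api-key": "YOUR_PORTKEY_API_KEY",
    "x-portkey-provider": "openai",
    "x-portkey-config": json.dumps(config),
}
print(headers["x-portkey-config"])
```

Any OpenAI-compatible client pointed at Portkey's gateway with these headers gets the guardrail applied on every call, with no change to application code.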

Guardrail action settings


Async (TRUE)

Run guardrails in parallel to the request.

→ No added latency. Best for logging-only scenarios.


Async (FALSE)

Run guardrails inline, before the request is forwarded or the response is returned.

→ Adds latency. Use when the guardrail result should influence the flow.


Deny Request (TRUE)

Block the request or response if any guardrail fails.

→ Use when violations must stop execution.


Deny Request (FALSE)

Allow the request even if the guardrail fails (returns 246 status).

→ Good for observing without blocking.


Send Feedback on Success/Failure

Attach metadata based on guardrail results.

→ Recommended for tracking and evaluation.
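Putting the Deny settings together on the client side: with Deny Request set to FALSE, the gateway returns the 246 status noted above when a guardrail fails but the request is still served. A hedged sketch of how an application might branch on that:

```python
# Sketch of client-side handling of guardrail verdicts. Status 246
# ("allowed, but a guardrail failed") comes from the settings above;
# any other non-2xx status is treated here as a denied request.
def handle_response(status: int) -> str:
    if status == 200:
        return "ok"
    if status == 246:
        # Guardrail failed but Deny Request was FALSE: serve, but log it.
        return "ok-with-guardrail-failure"
    return "denied"

print(handle_response(246))  # ok-with-guardrail-failure
```

Branching on 246 lets you ship in observe-only mode first, then flip Deny Request to TRUE once you trust the guardrail's verdicts.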

