Pangea AI Guard helps analyze and redact text to prevent model manipulation and malicious content.
To get started, enable the Pangea plugin from the Plugins page in the sidebar. Then navigate to the Guardrails page, click the Create button, and add the check you want to use.
Check Name | Description | Parameters | Supported Hooks
---|---|---|---
AI Guard | Analyze and redact text to prevent model manipulation and malicious content | `recipe` (string), `debug` (boolean) | `beforeRequestHook`, `afterRequestHook`
Add the guardrail check to the `input_guardrails` or `output_guardrails` params in your Portkey Config.
Guardrail Name | ID | Description | Parameters |
---|---|---|---|
Pangea AI Guard | pangea.textGuard | Scans LLM inputs/outputs for malicious content, harmful patterns, etc. | recipe (string), debug (boolean) |
Pangea PII Guard | pangea.pii | Detects and optionally redacts personally identifiable information | redact (boolean) |
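For example, a Portkey Config can reference guardrails by ID in these params. The sketch below is illustrative only; the guardrail IDs are placeholders for IDs you define when creating your guardrails:

```json
{
  "input_guardrails": ["my-pangea-input-guardrail"],
  "output_guardrails": ["my-pangea-output-guardrail"]
}
```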
Key Configuration Properties
- `type`: Always set to `"guardrail"` for guardrail checks
- `id`: A unique identifier for your guardrail
- `credentials`: Authentication details for Pangea
  - `api_key`: Your Pangea API key
  - `domain`: Your Pangea domain (e.g., `aws.us-east-1.pangea.cloud`)
- `checks`: Array of guardrail checks to run
  - `id`: The specific guardrail ID from the table above
  - `parameters`: Configuration options specific to each guardrail
- `deny`: Whether to block the request if the guardrail fails (true/false)
- `async`: Whether to run the guardrail asynchronously (true/false)
- `on_success`/`on_fail`: Optional callbacks for success/failure scenarios
  - `feedback`: Data for logging and analytics
    - `weight`: Importance of this feedback (0-1)
    - `value`: Feedback score (-10 to 10)
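Putting these properties together, a guardrail definition might look like the following sketch. The `id` value and the `recipe` name are placeholders (consult your Pangea project for real recipe names), and the credential values must be replaced with your own:

```json
{
  "type": "guardrail",
  "id": "my-pangea-text-guard",
  "credentials": {
    "api_key": "your_pangea_api_key",
    "domain": "aws.us-east-1.pangea.cloud"
  },
  "checks": [
    {
      "id": "pangea.textGuard",
      "parameters": {
        "recipe": "your_recipe_name",
        "debug": false
      }
    }
  ],
  "deny": true,
  "async": false,
  "on_fail": {
    "feedback": {
      "weight": 1,
      "value": -5
    }
  }
}
```

Here `deny: true` blocks the request outright when the check fails, while the `on_fail` feedback records a negative score for logging and analytics.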