Pangea
Pangea's AI Guard service scans LLM inputs and outputs to analyze and redact text, preventing model manipulation, malicious content, and other undesirable data transfers.
To get started with Pangea, see their documentation: Get Started with Pangea AI Guard.
Using Pangea with Portkey
1. Add Pangea Credentials to Portkey
- Navigate to the Integrations page under Settings
- Click on the edit button for the Pangea integration
- Add your Pangea token and domain information
2. Add Pangea’s Guardrail Check
- Navigate to the “Guardrails” page
- Search for Pangea's AI Guard and click Add
- Configure your recipe and debug settings
- Set any actions you want on your check, and create the Guardrail!
| Check Name | Description | Parameters | Supported Hooks |
|---|---|---|---|
| AI Guard | Analyze and redact text to prevent model manipulation and malicious content | `recipe` (string), `debug` (boolean) | `beforeRequestHook`, `afterRequestHook` |
3. Add Guardrail ID to a Config and Make Your Request
- When you save a Guardrail, you'll get an associated Guardrail ID. Add this ID to the `before_request_hooks` or `after_request_hooks` params in your Portkey Config.
- Save this Config and pass it along with any Portkey request you're making!
Here’s an example configuration:
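A minimal Config sketch, assuming a placeholder guardrail ID (`your-guardrail-id` stands in for the ID Portkey shows when you save the guardrail):

```json
{
  "before_request_hooks": [{ "id": "your-guardrail-id" }],
  "after_request_hooks": [{ "id": "your-guardrail-id" }]
}
```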
And here’s how to use it in your code:
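As one illustrative sketch (not the only way to integrate), the Config above can be attached to a chat completion request against Portkey's REST API. The header names follow Portkey's `x-portkey-*` convention; the guardrail ID, API key, and model name are placeholders:

```python
# Sketch: send a chat completion through Portkey with a guardrail-bearing
# Config attached. Uses only the Python standard library.
import json
import urllib.request

PORTKEY_URL = "https://api.portkey.ai/v1/chat/completions"

# Config referencing the saved guardrail in both hook params
# ("your-guardrail-id" is a placeholder for the real Guardrail ID).
config = {
    "before_request_hooks": [{"id": "your-guardrail-id"}],
    "after_request_hooks": [{"id": "your-guardrail-id"}],
}

def guarded_request(api_key: str, prompt: str) -> bytes:
    """POST a chat completion; the Config in the header applies the guardrail."""
    req = urllib.request.Request(
        PORTKEY_URL,
        data=json.dumps({
            "model": "gpt-4o",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={
            "Content-Type": "application/json",
            "x-portkey-api-key": api_key,
            "x-portkey-config": json.dumps(config),
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

If you use the Portkey SDK instead, the same Config (or its saved Config ID) can be passed when constructing the client, and the guardrail runs on every request made through it.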
Your requests are now guarded by Pangea AI Guard, and you can see the verdict and any actions taken directly in your Portkey logs. More detailed logs for your requests are also available on your Pangea dashboard.
Get Support
If you face any issues with the Pangea integration, just ping the @pangea team on the community forum.