Azure Guardrails
Integrate Microsoft Azure’s powerful content moderation services & PII guardrails with Portkey
Microsoft Azure offers robust content moderation and PII redaction services that can now be seamlessly integrated with Portkey's guardrails ecosystem. This integration supports two powerful Azure services:
Azure Content Safety
A comprehensive content moderation service that detects harmful content including hate speech, violence, sexual content, and self-harm references in text.
Azure PII Detection
Advanced detection of personally identifiable information (PII) and protected health information (PHI) to safeguard sensitive data.
Setting Up Azure Guardrails
Follow these steps to integrate Azure’s content moderation services with Portkey:
1. Configure Azure Authentication
Navigate to the Integrations page under Settings to set up your Azure credentials. You can authenticate using three different methods:
- API Key - Uses a simple API key for authentication
- Entra (formerly Azure AD) - Uses Azure Active Directory authentication
- Managed - Uses managed identity authentication within Azure
Each authentication method requires specific credentials from your Azure account:

API Key
- Resource Name: Your Azure resource name
- API Key: Your Azure API key

Entra
- Resource Name: Your Azure resource name
- Client ID: Your Azure client ID
- Client Secret: Your Azure client secret
- Tenant ID: Your Azure tenant ID

Managed
- Resource Name: Your Azure resource name
- Client ID: Your Azure client ID (for managed identity)
2. Create Azure Guardrail Checks
Once authentication is set up, you can add Azure guardrail checks to your Portkey workflow:
- Navigate to the Guardrails page
- Search for either Azure Content Safety or Azure PII Detection
- Click Add to configure your chosen guardrail
- Configure the specific settings for your guardrail
- Save your configuration and create the guardrail
Guardrail Actions allow you to orchestrate your guardrail logic. You can learn more about them here.
Azure Content Safety
Azure Content Safety analyzes text for harmful content across several categories.
Configuration Options
| Parameter | Description | Values |
|---|---|---|
| Blocklist Names | Custom blocklist names from your Azure setup | `blocklist-1`, `blocklist-2`, `blocklist-3` |
| API Version | Azure Content Safety API version | Default: `2024-09-01` |
| Severity | Minimum severity threshold for flagging content | `2`, `4`, `6`, or `8` |
| Categories | Content categories to monitor | Hate, SelfHarm, Sexual, Violence |
| Timeout | Maximum time in milliseconds for the check | Default: `5000` |
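The Severity and Categories parameters work together: content is flagged when a monitored category comes back at or above the configured threshold. The sketch below is illustrative only (it is not the Azure SDK); the category names and 0-7 severity scale follow the table above, while the function and variable names are assumptions for demonstration.

```python
# Illustrative sketch of the threshold logic, not an Azure API call.
# A category result trips the guardrail when its detected severity
# meets or exceeds the configured minimum.

MONITORED_CATEGORIES = {"Hate", "SelfHarm", "Sexual", "Violence"}  # from the guardrail config
SEVERITY_THRESHOLD = 4  # one of 2, 4, 6, or 8

def is_flagged(analysis: dict) -> bool:
    """analysis maps category name -> detected severity."""
    return any(
        severity >= SEVERITY_THRESHOLD
        for category, severity in analysis.items()
        if category in MONITORED_CATEGORIES
    )

# Violence at severity 6 trips a threshold of 4; Hate at 2 alone does not.
print(is_flagged({"Violence": 6, "Hate": 0}))  # True
print(is_flagged({"Hate": 2}))                 # False
```

Raising the threshold to `6` or `8` makes the check more permissive, flagging only higher-severity content.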
Using Blocklists
Blocklists allow you to define custom terms or patterns to be flagged. You’ll need to create Content Safety blocklists in your Azure account first, then reference them in the Blocklist Names field.
For more information on Azure Content Safety blocklists, visit the official documentation.
Azure PII Detection
Azure PII Detection identifies and can help protect personal and health-related information in your content.
Configuration Options
| Parameter | Description | Values |
|---|---|---|
| Domain | The type of sensitive information to detect | `none` (both PII and PHI) or `phi` (only PHI) |
| API Version | Azure PII Detection API version | Default: `2024-11-01` |
| Model Version | Version of the detection model to use | Default: `latest` |
| Redact | Option to redact detected information | `true` or `false` |
| Timeout | Maximum time in milliseconds for the check | Default: `5000` |
Add Guardrail ID to a Config and Make Your Request
- When you save a Guardrail, you'll get an associated Guardrail ID. Add this ID to the `input_guardrails` or `output_guardrails` params in your Portkey Config.
- Create these Configs in the Portkey UI, save them, and get an associated Config ID to attach to your requests. More here.
Here’s an example configuration:
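A minimal sketch of such a Config (the guardrail IDs below are placeholders; substitute the IDs shown when you saved your guardrails):

```json
{
  "input_guardrails": ["your-input-guardrail-id"],
  "output_guardrails": ["your-output-guardrail-id"]
}
```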
For more, refer to the Config documentation.
Monitoring and Logs
All guardrail actions and verdicts are visible in your Portkey logs, allowing you to:
- Track which content has been flagged
- See guardrail verdicts and actions
- Monitor the performance of your content moderation pipeline
Using Azure Guardrails - Scenarios
After setting up your guardrails, there are different ways to use them depending on your security requirements:
Detect and Monitor Only
To simply detect but not block content:
- Configure your guardrail actions without enabling “Deny”
- Monitor the guardrail results in your Portkey logs
- If any issues are detected, the response will include a `hook_results` object with details
Redact PII Automatically
To automatically remove sensitive information:
- Enable the `Redact` option for Azure PII Detection
- When PII is detected, it will be automatically redacted and replaced with standardized identifiers
- The response will include a `transformed` flag set to `true` in the results
Block Harmful Content
To completely block requests that violate your policies:
- Enable the `Deny` option in the guardrails action tab
- If harmful content is detected, the request will fail with an appropriate status code
- You can customize denial messages to provide guidance to users
Need Support?
If you encounter any issues with Azure Guardrails, please reach out to our support team through the Portkey community forum.