Microsoft Azure offers robust content moderation and PII redaction services that can now be seamlessly integrated with Portkey’s guardrails ecosystem. This integration supports two powerful Azure services:

Azure Content Safety

A comprehensive content moderation service that detects harmful content including hate speech, violence, sexual content, and self-harm references in text.

Azure PII Detection

Advanced detection of personally identifiable information (PII) and protected health information (PHI) to safeguard sensitive data.

Setting Up Azure Guardrails

Follow these steps to integrate Azure’s content moderation services with Portkey:

1. Configure Azure Authentication

Navigate to the Integrations page under Settings to set up your Azure credentials. You can authenticate using three different methods:

  • API Key - Uses a simple API key for authentication
  • Entra (formerly Azure AD) - Uses Azure Active Directory authentication
  • Managed - Uses managed identity authentication within Azure

Each authentication method requires specific credentials from your Azure account:

  • Resource Name: Your Azure resource name
  • API Key: Your Azure API key

2. Create Azure Guardrail Checks

Once authentication is set up, you can add Azure guardrail checks to your Portkey workflow:

  1. Navigate to the Guardrails page
  2. Search for either Azure Content Safety or Azure PII Detection
  3. Click Add to configure your chosen guardrail
  4. Configure the specific settings for your guardrail
  5. Save your configuration and create the guardrail

Guardrail Actions allow you to orchestrate your guardrail logic. You can learn more about them here.

Azure Content Safety

Azure Content Safety analyzes text for harmful content across several categories.

Configuration Options

| Parameter | Description | Values |
|---|---|---|
| Blocklist Names | Custom blocklist names from your Azure setup | blocklist-1, blocklist-2, blocklist-3 |
| API Version | Azure Content Safety API version | Default: 2024-09-01 |
| Severity | Minimum severity threshold for flagging content | 2, 4, 6, or 8 |
| Categories | Content categories to monitor | Hate, SelfHarm, Sexual, Violence |
| Timeout | Maximum time in milliseconds for the check | Default: 5000 |
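As an illustration, a Content Safety check configured with these options might look like the following. The field names here are illustrative of the options above, not the exact schema Portkey stores:

```json
{
  "blocklistNames": ["blocklist-1"],
  "apiVersion": "2024-09-01",
  "severity": 4,
  "categories": ["Hate", "Violence"],
  "timeout": 5000
}
```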

Using Blocklists

Blocklists allow you to define custom terms or patterns to be flagged. You’ll need to create Content Safety blocklists in your Azure account first, then reference them in the Blocklist Names field.

For more information on Azure Content Safety blocklists, visit the official documentation.

Azure PII Detection

Azure PII Detection identifies and can help protect personal and health-related information in your content.

Configuration Options

| Parameter | Description | Values |
|---|---|---|
| Domain | The type of sensitive information to detect | none (both PII and PHI) or phi (only PHI) |
| API Version | Azure PII Detection API version | Default: 2024-11-01 |
| Model Version | Version of the detection model to use | Default: latest |
| Redact | Option to redact detected information | true or false |
| Timeout | Maximum time in milliseconds for the check | Default: 5000 |
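For example, a PII Detection check that scans for both PII and PHI and redacts matches might be configured like this (field names are illustrative, not the exact schema Portkey stores):

```json
{
  "domain": "none",
  "apiVersion": "2024-11-01",
  "modelVersion": "latest",
  "redact": true,
  "timeout": 5000
}
```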

Add Guardrail ID to a Config and Make Your Request

  • When you save a Guardrail, you’ll get an associated Guardrail ID - add this ID to the input_guardrails or output_guardrails params in your Portkey Config
  • Create these Configs in Portkey UI, save them, and get an associated Config ID to attach to your requests. More here.

Here’s an example configuration:

```json
{
  "input_guardrails": ["guardrails-id-xxx", "guardrails-id-yyy"],
  "output_guardrails": ["guardrails-id-xxx", "guardrails-id-yyy"]
}
```

Then reference the saved Config ID when initializing your client:

```javascript
const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY",
    config: "pc-***" // Supports a string config id or a config object
});
```

For more, refer to the Config documentation.
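If you assemble Configs programmatically rather than in the Portkey UI, a small helper along these lines can attach guardrail IDs to an existing config object. Note that `withGuardrails` is our own sketch, not a Portkey SDK function:

```javascript
// Sketch: merge guardrail IDs into a Portkey config object.
// `withGuardrails` is a hypothetical helper, not part of the Portkey SDK.
function withGuardrails(config, inputIds = [], outputIds = []) {
  return {
    ...config,
    input_guardrails: [...(config.input_guardrails || []), ...inputIds],
    output_guardrails: [...(config.output_guardrails || []), ...outputIds],
  };
}

// Usage: start from any config and add the guardrail IDs from the UI.
const config = withGuardrails(
  { retry: { attempts: 3 } },
  ["guardrails-id-xxx"],
  ["guardrails-id-yyy"]
);
console.log(JSON.stringify(config));
```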

Monitoring and Logs

All guardrail actions and verdicts are visible in your Portkey logs, allowing you to:

  • Track which content has been flagged
  • See guardrail verdicts and actions
  • Monitor the performance of your content moderation pipeline

Using Azure Guardrails - Scenarios

After setting up your guardrails, there are different ways to use them depending on your security requirements:

Detect and Monitor Only

To detect content without blocking it:

  • Configure your guardrail actions without enabling “Deny”
  • Monitor the guardrail results in your Portkey logs
  • If any issues are detected, the response will include a hook_results object with details
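In monitoring mode you can inspect the `hook_results` object on each response yourself. The exact shape of `hook_results` may differ from what is assumed below (we assume before/after hook arrays whose entries carry a boolean `verdict`), so treat this as a sketch:

```javascript
// Sketch: collect failed guardrail checks from a response's hook_results.
// The before_request_hooks / after_request_hooks keys and the `verdict`
// field are assumptions about the shape of hook_results.
function failedChecks(hookResults) {
  const all = [
    ...(hookResults.before_request_hooks || []),
    ...(hookResults.after_request_hooks || []),
  ];
  return all.filter((result) => result.verdict === false);
}

// Example with a mocked response payload:
const mock = {
  before_request_hooks: [{ id: "guardrails-id-xxx", verdict: false }],
  after_request_hooks: [{ id: "guardrails-id-yyy", verdict: true }],
};
console.log(failedChecks(mock).map((r) => r.id)); // [ 'guardrails-id-xxx' ]
```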

Redact PII Automatically

To automatically remove sensitive information:

  • Enable the Redact option for Azure PII Detection
  • When PII is detected, it will be automatically redacted and replaced with standardized identifiers
  • The response will include a transformed flag set to true in the results

Block Harmful Content

To completely block requests that violate your policies:

  • Enable the Deny option in the guardrails action tab
  • If harmful content is detected, the request will fail with an appropriate status code
  • You can customize denial messages to provide guidance to users
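In application code you will typically wrap the request and surface the denial to the user. The sketch below assumes the denial arrives as an error object carrying an HTTP `status` field; check your Portkey logs for the actual status code your setup returns:

```javascript
// Sketch: detect and handle a guardrail denial from the gateway.
// The `status` field and its value are assumptions; confirm the real
// status code for denied requests in your Portkey logs.
function isGuardrailDenial(error) {
  return Boolean(error) && error.status === 446;
}

try {
  // In real code: const response = await portkey.chat.completions.create({ ... });
  // Here we simulate a denied request for illustration.
  throw Object.assign(new Error("Content blocked by guardrail"), { status: 446 });
} catch (err) {
  if (isGuardrailDenial(err)) {
    console.log("Request denied by a guardrail:", err.message);
  } else {
    throw err;
  }
}
```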

Need Support?

If you encounter any issues with Azure Guardrails, please reach out to our support team through the Portkey community forum.