Mistral's moderation service helps detect and filter harmful content across multiple policy dimensions to secure your AI applications.
To get started, set up the Guardrail in the Portkey app:

1. Navigate to the Integrations page under Settings and add your Mistral API key.
2. Navigate to the Guardrails page and click the Create button.
3. Add the Moderate Content check and select the moderation checks you want to run.
4. Add the actions you want on your check, and create the Guardrail!

| Check Name | Description | Parameters | Supported Hooks |
|---|---|---|---|
| Moderate Content | Checks if content passes the selected content moderation checks | Moderation Checks (array), Timeout (number) | `beforeRequestHook`, `afterRequestHook` |
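As a rough sketch of how the table's parameters line up, the check created above could be represented as below. The check id `mistral.moderateContent`, the field names, and the timeout unit are assumptions for illustration; in practice the Guardrail is configured in the Portkey UI and referenced by its ID. The category names are taken from Mistral's moderation policy dimensions.

```python
# Hypothetical inline representation of the "Moderate Content" check.
# Check id and parameter field names are assumptions for illustration;
# the actual Guardrail is created and configured in the Portkey UI.
moderate_content_check = {
    "id": "mistral.moderateContent",  # assumed check identifier
    "parameters": {
        "categories": [  # the "Moderation Checks" array from the table above
            "hate_and_discrimination",
            "violence_and_threats",
        ],
        "timeout": 5000,  # the "Timeout" number (assumed to be milliseconds)
    },
}
```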
Once saved, the Guardrail gets an ID; add it to the `input_guardrails` or `output_guardrails` params in your Portkey Config to enable the check on your requests.
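For instance, here is a minimal sketch using the Portkey Python SDK, assuming a Guardrail ID of `gr-mistral-mod-xxx` created in the steps above; the ID, virtual key, and model name are placeholders for your own values.

```python
from portkey_ai import Portkey

# Attach the Guardrail to both the request and the response.
# "gr-mistral-mod-xxx" and "mistral-virtual-key" are placeholders.
config = {
    "virtual_key": "mistral-virtual-key",
    "input_guardrails": ["gr-mistral-mod-xxx"],   # runs via beforeRequestHook
    "output_guardrails": ["gr-mistral-mod-xxx"],  # runs via afterRequestHook
}

portkey = Portkey(api_key="PORTKEY_API_KEY", config=config)

response = portkey.chat.completions.create(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Checks listed under `input_guardrails` run on the incoming request (the `beforeRequestHook` from the table above), while `output_guardrails` run on the model's response (`afterRequestHook`), so you can moderate user input, model output, or both.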