Set the provider and API key in the ChatOpenAI object.
Add a trace-id to the request headers for each agent.
LLMs can fail not only with 400/500 errors, but also in their core behavior: a response can come back with a 200 status code and still break your app’s pipeline because the output doesn’t match what you expect. With Portkey’s Guardrails, we now help you enforce LLM behavior in real time with our Guardrails on the Gateway pattern.
Using Portkey’s Guardrail platform, you can now verify that your LLM inputs AND outputs adhere to your specified checks. And since Guardrails are built on top of our Gateway, you can orchestrate your request exactly the way you want, with actions ranging from denying the request, logging the guardrail result, or creating an evals dataset, to falling back to another LLM or prompt, retrying the request, and more.
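As a rough illustration of the idea (not Portkey’s implementation), an output guardrail can be any predicate over the response body, and orchestration is just a mapping from the verdict to an action. The function names and the JSON-shape check below are hypothetical:

```python
import json

def json_shape_guardrail(output: str, required_keys: set[str]) -> bool:
    """Pass only if the LLM output parses as a JSON object with the required keys."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and required_keys <= data.keys()

def route(output: str, required_keys: set[str]) -> str:
    """Map the guardrail verdict to an orchestration action (illustrative names)."""
    if json_shape_guardrail(output, required_keys):
        return "pass"      # forward the response unchanged
    return "fallback"      # or: deny, retry, log, switch to another LLM/prompt

# A 200 response whose body doesn't match the expected shape still fails the check:
ok = route('{"answer": 42}', {"answer"})                       # → "pass"
bad = route("Sure! Here is your answer: 42", {"answer"})       # → "fallback"
```

The point of running checks like this on the gateway, rather than inside your app, is that the verdict is available before the response reaches your pipeline, so the fallback or retry can happen transparently.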