Transform offline LLMs into online models with real-time internet search capabilities.
To set it up, navigate to **Settings → Plugins** in the sidebar and enable the Web Search plugin. Then open the **Guardrails** page, click **Create**, and add the Web Search check to your Guardrail.
The search results are added to your request inside a `<web_search_context>` tag. Set the actions you want on your check, and create the Guardrail!
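For illustration, here is what an augmented request might look like after the guardrail runs. The message structure and the search-result formatting below are assumptions for the sketch; only the `<web_search_context>` tag name comes from the text above:

```json
{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "user",
      "content": "<web_search_context>\n1. Example Domain (https://example.com): snippet of the retrieved page...\n</web_search_context>\n\nWhat happened in the news today?"
    }
  ]
}
```

Because the context is prepended before the request reaches the provider, the LLM sees the search results as ordinary prompt text and needs no tool-calling support of its own.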
Add the Guardrail's ID to the `input_guardrails` parameter in your Portkey Config. Web Search is an **input guardrail**: it adds web search results to your request before it reaches the LLM, which is why only `before_request_hooks` are supported (and not `after_request_hooks`) on Portkey's gateway. Learn more about guardrails here.

FAQs

Does this work with all LLM providers?
How does this affect token usage?
How fresh are the search results?
How does this differ from using OpenAI's web search capability?
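As noted above, the Guardrail is attached through the `input_guardrails` parameter of a Portkey Config. A minimal sketch, assuming a created Guardrail — the ID `"gr-xxx"` and the `virtual_key` value are placeholders for your own:

```json
{
  "virtual_key": "openai-virtual-key-xxx",
  "input_guardrails": ["gr-xxx"]
}
```

Any request routed through this Config will have the web search check run on its input before the call is forwarded to the provider.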