The Portkey x LlamaIndex integration brings advanced AI gateway capabilities, full-stack observability, and prompt management to apps built on LlamaIndex.
Use the `OpenAI` class in LlamaIndex as you normally would, along with Portkey's helper functions `createHeaders` and `PORTKEY_GATEWAY_URL`.
Portkey works with both the `complete` and `chat` methods, with streaming on and off.
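As a sketch (placeholder keys; the pirate chat call produces output like the assistant reply below):

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

llm = OpenAI(
    api_base=PORTKEY_GATEWAY_URL,
    api_key="YOUR_OPENAI_API_KEY",  # placeholder
    default_headers=createHeaders(api_key="YOUR_PORTKEY_API_KEY"),  # placeholder
)

# `complete` with streaming off
print(llm.complete("Paul Graham is "))

# `chat` with streaming off
messages = [
    ChatMessage(role="system", content="You are a pirate with a colorful personality"),
    ChatMessage(role="user", content="What is your name?"),
]
print(llm.chat(messages))

# Streaming variants of both methods
for chunk in llm.stream_complete("Paul Graham is "):
    print(chunk.delta, end="")

for chunk in llm.stream_chat(messages):
    print(chunk.delta, end="")
```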
assistant: Arrr, matey! They call me Captain Barnacle Bill, the most colorful pirate to ever sail the seven seas! With a parrot on me shoulder and a treasure map in me hand, I’m always ready for adventure! What be yer name, landlubber?
- **Interoperability:** Call various LLMs like Anthropic, Gemini, Mistral, Azure OpenAI, Google Vertex AI, and AWS Bedrock with minimal code changes.
- **Caching:** Speed up your requests and save money on LLM calls by storing past responses in the Portkey cache. Choose between Simple and Semantic cache modes.
- **Reliability:** Set up fallbacks between different LLMs or providers, load-balance your requests across multiple instances or API keys, and configure automatic retries and request timeouts.
- **Observability:** Portkey automatically logs all the key details about your requests, including cost, tokens used, response time, request and response bodies, and more. Send custom metadata and trace IDs for better analytics and debugging.
- **Prompt Management:** Use Portkey as a centralized hub to store, version, and experiment with prompts across multiple LLMs, and seamlessly retrieve them in your LlamaIndex app for easy integration.
- **Feedback:** Improve your LlamaIndex app by capturing qualitative and quantitative user feedback on your requests.
- **Security & Governance:** Set budget limits on provider API keys and implement fine-grained user roles and permissions for both the app and the Portkey APIs.
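The reliability features above are driven by a Portkey Config. A minimal sketch of a fallback setup with retries (the virtual key slugs are placeholders):

```python
# Hypothetical Config: try OpenAI first, fall back to Anthropic,
# retrying each target up to 3 times on failure.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "retry": {"attempts": 3},
    "targets": [
        {"virtual_key": "openai-virtual-key"},     # placeholder slug
        {"virtual_key": "anthropic-virtual-key"},  # placeholder slug
    ],
}
```

Swapping `"fallback"` for `"loadbalance"` in the strategy would spread requests across the targets instead.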
You can also override the `model` parameter of a saved Config at runtime. Define a helper, `get_customized_config`, that takes a `config_slug` and a model as parameters: it fetches the Config identified by the `config_slug`, extracts the `config` object from the API response, and replaces the `model` parameter in the `override_params` section of the Config with the provided `custom_model`. This way, the `model` override is applied on top of the saved Config.
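A minimal sketch of this helper, with the API fetch stubbed out; the real implementation would call the Portkey Configs API, and both `fetch_config` and the sample payload here are hypothetical:

```python
import copy

def fetch_config(config_slug: str) -> dict:
    """Stub for retrieving a saved Config by slug from the Portkey API."""
    # Hypothetical response shape: the saved Config sits under "config".
    return {
        "slug": config_slug,
        "config": {
            "virtual_key": "openai-virtual-key",
            "override_params": {"model": "gpt-4o", "temperature": 0.7},
        },
    }

def get_customized_config(config_slug: str, custom_model: str) -> dict:
    api_response = fetch_config(config_slug)
    # Extract the `config` object from the API response.
    config = copy.deepcopy(api_response["config"])
    # Replace the model in `override_params` with the provided custom model.
    config.setdefault("override_params", {})["model"] = custom_model
    return config
```

The returned dict can then be passed wherever a Config is accepted, so the model override rides on top of the saved Config.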
For more details on working with Configs in Portkey, refer to the Config documentation.
To switch providers, just pass the provider name, API key, and model name.
To enable caching, add the `cache` params to your config object.
Then pass that config along with your Portkey headers in `default_headers`.
You can retrieve a saved prompt template using its Prompt ID.
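A sketch of retrieving a rendered prompt by its Prompt ID with the Portkey client (the API key, prompt ID, and variables are placeholders):

```python
from portkey_ai import Portkey

portkey = Portkey(api_key="YOUR_PORTKEY_API_KEY")  # placeholder

# Render the saved prompt template, filling in its variables.
render_response = portkey.prompts.render(
    prompt_id="YOUR_PROMPT_ID",  # placeholder
    variables={"user_input": "Tell me about LlamaIndex"},
)

# The rendered prompt can then be passed on to your LlamaIndex LLM calls.
print(render_response)
```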
You can attach feedback to a request's trace ID with the `portkey.feedback.create` method:
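For instance (the API key and trace ID are placeholders):

```python
from portkey_ai import Portkey

portkey = Portkey(api_key="YOUR_PORTKEY_API_KEY")  # placeholder

feedback = portkey.feedback.create(
    trace_id="your-trace-id",  # the trace ID sent with the original request
    value=1,                   # feedback score, e.g. a thumbs-up
    weight=1.0,                # optional weighting for the score
)
print(feedback)
```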