Integrate DSPy with Portkey for production-ready LLM pipelines
Portkey plugs into DSPy's `LM` interface, allowing you to use 250+ LLMs with detailed cost insights. Simply configure the LM with Portkey's gateway URL and your Portkey API key (see the configuration sketch below).
Model names use the format `openai/@PROVIDER_SLUG/MODEL_NAME`, where:
- `@PROVIDER_SLUG` is your provider's slug in Portkey (found in the Model Catalog)
- `MODEL_NAME` is the specific model you want to use

Examples:
- `openai/@openai-provider-slug/gpt-4o`
- `openai/@anthropic-provider-slug/claude-3-sonnet-20240320`
- `openai/@aws-bedrock-slug/anthropic.claude-3-sonnet-20240229-v1:0`
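A minimal configuration sketch, assuming Portkey's gateway URL is `https://api.portkey.ai/v1` and your key lives in a `PORTKEY_API_KEY` environment variable (both are assumptions; adjust to your setup and swap in a provider slug from your Model Catalog):

```python
import os
import dspy

# Route DSPy's LM calls through the Portkey gateway.
# The base URL, env var name, and provider slug below are placeholders.
lm = dspy.LM(
    "openai/@openai-provider-slug/gpt-4o",
    api_base="https://api.portkey.ai/v1",
    api_key=os.environ["PORTKEY_API_KEY"],
)
dspy.configure(lm=lm)
```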
For each request routed through Portkey, the dashboard shows:
- Request Details: Information about the specific request, including the model used, input, and output.
- Metrics: Performance metrics such as latency, token usage, and cost.
- Logs: Detailed logs of the request, including any errors or warnings.
- Traces: A visual representation of the request flow, especially useful for complex DSPy modules.
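As a quick check that these show up, the sketch below issues a single prediction through the Portkey-routed LM configured above; the signature and question are illustrative.

```python
import dspy

# Assumes dspy.configure(lm=...) has already been called with the
# Portkey-routed LM from the configuration sketch above.
qa = dspy.Predict("question -> answer")
result = qa(question="What does Portkey's gateway add to a DSPy pipeline?")
print(result.answer)
# The call should now appear in the Portkey dashboard with its request
# details, latency/token/cost metrics, logs, and trace entry.
```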
Finding Provider Slugs

Provider slugs are listed in the Model Catalog of your Portkey dashboard.

Model Name Format

- OpenAI: `gpt-4o`, `gpt-3.5-turbo`
- Anthropic: `claude-3-opus-20240229`, `claude-3-sonnet-20240320`
- AWS Bedrock: `anthropic.claude-3-sonnet-20240229-v1:0`
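Because the provider is encoded in the model string, switching backends only means changing that string. A small helper sketch (the helper name, gateway URL, env var, and slugs are assumptions):

```python
import os
import dspy

def portkey_lm(provider_slug: str, model_name: str) -> dspy.LM:
    """Build a DSPy LM routed through Portkey for the given provider slug
    (without the leading '@') and model name. URL and env var are placeholders."""
    return dspy.LM(
        f"openai/@{provider_slug}/{model_name}",
        api_base="https://api.portkey.ai/v1",
        api_key=os.environ["PORTKEY_API_KEY"],
    )

# The rest of the DSPy program stays the same; only the model string changes.
openai_lm = portkey_lm("openai-provider-slug", "gpt-4o")
bedrock_lm = portkey_lm("aws-bedrock-slug", "anthropic.claude-3-sonnet-20240229-v1:0")
```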
Missing LLM Calls in Traces
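One thing worth checking when calls don't appear where expected is whether related DSPy requests share a Portkey trace ID. A hedged sketch, assuming `dspy.LM` forwards `extra_headers` to the gateway and that Portkey's `x-portkey-trace-id` header groups requests into a single trace:

```python
import os
import uuid
import dspy

# Assumption: extra kwargs such as extra_headers are passed through by
# dspy.LM to the underlying OpenAI-compatible request, so Portkey can
# group every call from this run under one trace.
trace_id = f"dspy-run-{uuid.uuid4().hex[:8]}"

lm = dspy.LM(
    "openai/@openai-provider-slug/gpt-4o",
    api_base="https://api.portkey.ai/v1",
    api_key=os.environ["PORTKEY_API_KEY"],
    extra_headers={"x-portkey-trace-id": trace_id},
)
dspy.configure(lm=lm)
print(f"Look for trace {trace_id} in the Portkey dashboard.")
```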