Integrate DSPy with Portkey for production-ready LLM pipelines
Portkey extends the OpenAI client in DSPy, making it work with 250+ LLMs and giving you detailed cost insights. Just change the api_base and add Portkey-related headers in the default_headers param.
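The setup above can be sketched as follows. This is a minimal illustration, not verbatim Portkey or DSPy API: the gateway URL, the `x-portkey-*` header names, and the model choice are assumptions to verify against your Portkey dashboard and your installed DSPy version.

```python
# Sketch: routing DSPy's OpenAI client through the Portkey gateway.
# The gateway URL and x-portkey-* header names follow Portkey's naming
# convention; treat them as assumptions and confirm in your dashboard.
PORTKEY_GATEWAY_URL = "https://api.portkey.ai/v1"

def portkey_headers(portkey_api_key: str, provider: str = "openai") -> dict:
    """Headers Portkey uses to authenticate and route the request."""
    return {
        "x-portkey-api-key": portkey_api_key,
        "x-portkey-provider": provider,
    }

def configure_dspy(portkey_api_key: str, provider_api_key: str):
    import dspy  # requires the dspy-ai package

    lm = dspy.OpenAI(
        model="gpt-4o-mini",            # hypothetical model choice
        api_key=provider_api_key,        # provider key, forwarded by Portkey
        api_base=PORTKEY_GATEWAY_URL,    # point the client at Portkey
        default_headers=portkey_headers(portkey_api_key),
    )
    dspy.settings.configure(lm=lm)
    return lm
```

Once configured this way, every DSPy call flows through the gateway, which is what enables the observability features described below.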
For each request routed through the gateway, Portkey records:

- Request Details: information about the specific request, including the model used, input, and output.
- Metrics: performance metrics such as latency, token usage, and cost.
- Logs: detailed logs of the request, including any errors or warnings.
- Traces: a visual representation of the request flow, especially useful for complex DSPy modules.

You can also attach a saved Portkey config by passing the config key in the default_headers param:
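A sketch of what that might look like. The config ID below is a placeholder, and `x-portkey-config` is assumed (based on Portkey's header naming convention) to be the key under which a saved config is referenced; check the exact key name in Portkey's documentation.

```python
# Attach a saved Portkey config (e.g. one defining fallbacks, retries,
# or caching rules) by referencing its ID in the request headers.
# "x-portkey-config" is an assumed header key; the ID is a placeholder.
def portkey_headers_with_config(portkey_api_key: str, config_id: str) -> dict:
    return {
        "x-portkey-api-key": portkey_api_key,
        "x-portkey-config": config_id,
    }

headers = portkey_headers_with_config("YOUR_PORTKEY_API_KEY", "pc-your-config-id")
# Pass `headers` as default_headers when constructing the DSPy client.
```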
Missing LLM Calls in Traces