⭐️ Analyze your LLM calls - 2.0

Portkey's analytics 2.0 gives you complete visibility into your LLM calls across requests, users, errors, cache, and feedback.

We've revamped all the dashboards, with metrics across Requests, Users, Errors, Cache, and Feedback, and filters across models, cost, tokens, status codes, and other metadata.

If you're working with an LLM API like OpenAI's (or any other), getting visibility across all your requests can be a BIG pain. How do you measure the cost, latency, and accuracy of your requests?

Portkey's observability features give you complete control over ALL your requests, and our new analytics dashboards deliver the insights you're looking for. Fast.


21 Metrics Supported

As of Aug 7th, Portkey supports 21 metrics that you can use to analyse your LLM app. Let's dive in to learn about the various dashboards.

1. 🗺️ Overview

The 30,000-foot view of your app. You can view:

  • The cost across your requests in USD
  • The total tokens used across the prompt and the completion
  • The mean latency across your requests
  • The total successful and failed requests
  • Your total unique users
  • The top models used and the top users making requests

2. 🙋🏻‍♂️ Users

Get insights into user behaviour across your requests. You can view:

  • The total unique users who're using your app (through the `user` parameter in OpenAI calls, or the `_user` custom metadata parameter for other LLMs; see the sketch after this list)
  • The top users by the number of requests being made
  • The average requests per user, which shows whether your users are engaging with your app more or less over time
  • The average user feedback across all the feedback collected in Portkey
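
For example, tagging requests with a user identifier through Portkey's OpenAI proxy could look like the sketch below. This assumes the proxy-style integration with the pre-1.0 OpenAI Python SDK; the proxy URL and the `x-portkey-*` header names are assumptions, so check the docs for your exact setup.

```python
import openai

# Route OpenAI traffic through Portkey's proxy so every request is logged.
# NOTE: the proxy URL and header names below are assumptions -- verify them
# against the Portkey docs for your integration.
openai.api_key = "<OPENAI_API_KEY>"
openai.api_base = "https://api.portkey.ai/v1/proxy"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
    user="user_1234",  # OpenAI's `user` param; this powers the Users dashboard
    headers={
        "x-portkey-api-key": "<PORTKEY_API_KEY>",
        "x-portkey-mode": "proxy openai",
    },
)
print(response.choices[0].message.content)
```

For non-OpenAI providers, you'd send the same identifier as `_user` in the metadata header instead (see the metadata sketch at the end of this post).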

3. ⚠️ Errors

View your failed requests and the reasons behind them, so you can improve reliability. You can view:

  • The error rate percentage (total failed requests / total requests)
  • The count of all the errors segmented by the error type
  • A pie chart of all the different error codes you've received in the time frame
  • A trend of all the rescued calls: requests that succeeded thanks to Portkey's automatic retries and fallbacks (see the sketch after this list)
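
Rescued calls come from gateway features, like automatic retries, that you enable on your requests. As a minimal sketch, turning retries on for a proxied call might look like this; the `x-portkey-retry-count` header and the endpoint path are assumptions, so treat this as illustrative rather than the definitive API:

```python
import requests

# Illustrative only: ask Portkey's proxy to retry a failed call.
# Header names and the endpoint path are assumptions.
response = requests.post(
    "https://api.portkey.ai/v1/proxy/chat/completions",
    headers={
        "Authorization": "Bearer <OPENAI_API_KEY>",
        "Content-Type": "application/json",
        "x-portkey-api-key": "<PORTKEY_API_KEY>",
        "x-portkey-mode": "proxy openai",
        "x-portkey-retry-count": "3",  # retry up to 3 times before failing
    },
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.status_code)
```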

4. 🗄️ Cache

Visualise the true value of simple and semantic caching (when enabled with Portkey; see the sketch after this list). You can view:

  • The number of cache hits segmented by simple and semantic hits
  • The cache hit rate (cache hits / total requests with cache enabled)
  • The cache speedup, which shows how much time you're saving your users
  • The cost savings from enabling the cache
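
Enabling the cache is a per-request setting. Here's a rough sketch, assuming it's controlled by an `x-portkey-cache` header (verify the exact header and values in the caching docs):

```python
# Add these to the headers of a proxied request (sketch; values assumed):
cache_headers = {
    "x-portkey-api-key": "<PORTKEY_API_KEY>",
    "x-portkey-cache": "semantic",  # or "simple" for exact-match caching
}
```

Semantic caching serves a stored response for prompts that are similar rather than strictly identical, which is where the bigger speedups and savings tend to show up.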

5. 💬 Feedback

Portkey allows you to collect feedback from your users in a flexible manner, and the feedback dashboard gives you insights into it (see the sketch after this list). You can view:

  • The count of feedback events logged in Portkey
  • A histogram of the scores coming in
  • A weighted average feedback trend
  • The number of engaged users who're giving feedback
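
To ground this, here's a sketch of logging feedback against a traced request. The endpoint and field names are assumptions (a trace id to tie feedback to a request, a numeric score, and an optional weight that would feed the weighted average trend); check the feedback docs for the real schema:

```python
import requests

# Attach feedback to a request you traced earlier (e.g. via an
# x-portkey-trace-id header). Endpoint and field names are assumptions.
requests.post(
    "https://api.portkey.ai/v1/feedback",
    headers={"x-portkey-api-key": "<PORTKEY_API_KEY>"},
    json={
        "trace_id": "<TRACE_ID>",  # the request this feedback belongs to
        "value": 1,                # the score, e.g. a thumbs-up
        "weight": 0.5,             # optional; used in the weighted average
    },
)
```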

Filtering the Dashboards

You can filter the data underlying these dashboards through the UI. Portkey provides out-of-the-box filters for:

  1. Date: Choose the time frame for the analysis
  2. Model: Only view the specific models you want to analyse (`gpt-3.5-turbo`, `claude-1`)
  3. Cost: Choose the range for the cost of the LLM request
  4. Tokens: Only show requests within a particular token range (prompt & completion)
  5. Status: Filter requests that returned a certain status code (200 for successful calls, 429 for all rate-limited calls)
  6. Meta: Filter by user, organisation, prompt name or environment

You can also filter by any custom metadata property you send in your request (currently in beta); see the sketch below.
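
As a sketch, sending metadata along with a request might look like the snippet below. The `x-portkey-metadata` header and the reserved `_`-prefixed keys are assumptions modelled on the filters above, and `customer_tier` is a purely hypothetical custom property:

```python
import json

# Metadata that would power the Meta filters (key names assumed):
metadata = {
    "_user": "user_1234",          # user filter
    "_organisation": "acme-inc",   # organisation filter
    "_prompt": "summarise-v2",     # prompt name filter
    "_environment": "production",  # environment filter
    "customer_tier": "pro",        # hypothetical custom property (beta filter)
}

# Pass it as a header on the proxied request:
headers = {"x-portkey-metadata": json.dumps(metadata)}
```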

As always, you can switch to the Logs tab to view granular details for every request. The same filters are available there as well.

If you need any help with the dashboards, or have suggestions - please do write in to [email protected] and we'd be super happy to chat with you.