Comparing Top10 LMSYS Models with Portkey
The LMSYS Chatbot Arena, with over 1,000,000 human comparisons, is the gold standard for evaluating LLM performance.
But testing multiple LLMs is a pain: you have to juggle APIs that all behave differently, each with its own authentication scheme and dependencies.
Enter Portkey: a unified, open-source API for accessing over 200 LLMs. Portkey makes it a breeze to call the models on the LMSYS leaderboard with minimal setup.
In this notebook, you’ll see how Portkey streamlines LLM evaluation for the Top 10 LMSYS Models, giving you valuable insights into cost, performance, and accuracy metrics.
Let’s dive in!
Video Guide
The notebook comes with a video guide that you can follow along with.
Setting up Portkey
To get started, install the necessary packages:
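The original install cell isn't shown here; since the notebook calls models through the Portkey Gateway with the OpenAI SDK, a typical setup cell would be (package names assume the standard PyPI distributions):

```shell
pip install -qU portkey-ai openai
```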
Next, sign up at https://app.portkey.ai/ to get a Portkey API key. Navigate to “Settings” -> “API Keys” and create an API key with the appropriate scope.
Defining the Top 10 LMSYS Models
Let’s define the list of Top 10 LMSYS models and their corresponding providers.
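A sketch of that list follows. The leaderboard changes frequently, so treat these model names as illustrative placeholders and refresh them from the live LMSYS leaderboard before running; the provider slugs match the table below.

```python
# Illustrative (model, provider) pairs for the Top 10 LMSYS models.
# The leaderboard shifts over time -- update these entries from the
# live LMSYS Chatbot Arena leaderboard before running the notebook.
top_10_models = [
    ("gpt-4o", "openai"),
    ("gpt-4-turbo", "openai"),
    ("claude-3-opus-20240229", "anthropic"),
    ("claude-3-sonnet-20240229", "anthropic"),
    ("gemini-1.5-pro", "google"),
    ("gemini-1.5-flash", "google"),
    ("command-r-plus", "cohere"),
    ("meta-llama/Llama-3-70b-chat-hf", "together-ai"),
    ("reka-core", "reka-ai"),
    ("glm-4", "zhipu"),
]
```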
Add Provider API Keys to Portkey Vault
All of the providers above are integrated with Portkey, which means you can add their API keys to the Portkey vault, get a corresponding Virtual Key for each, and streamline API key management.
| Provider | Link to get API Key | Payment Mode |
|---|---|---|
| openai | https://platform.openai.com/ | Wallet Top Up |
| anthropic | https://console.anthropic.com/ | Wallet Top Up |
| google | https://aistudio.google.com/ | Free to Use |
| cohere | https://dashboard.cohere.com/ | Free Credits |
| together-ai | https://api.together.ai/ | Free Credits |
| reka-ai | https://platform.reka.ai/ | Wallet Top Up |
| zhipu | https://open.bigmodel.cn/ | Free to Use |
Running the Models with Portkey
Now, let’s create a function to run the Top 10 LMSYS models using OpenAI SDK with Portkey Gateway:
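A minimal sketch of such a function is below. The gateway URL and the `x-portkey-api-key` / `x-portkey-virtual-key` header names follow Portkey's documented conventions, but check the current Portkey docs before relying on them; `run_model` itself is a hypothetical helper name, not part of either SDK.

```python
PORTKEY_GATEWAY_URL = "https://api.portkey.ai/v1"

def portkey_headers(portkey_api_key: str, virtual_key: str) -> dict:
    """Build the headers the Portkey Gateway uses for auth and provider routing."""
    return {
        "x-portkey-api-key": portkey_api_key,
        "x-portkey-virtual-key": virtual_key,  # Virtual Key from the Portkey vault
    }

def run_model(model: str, virtual_key: str, prompt: str, portkey_api_key: str) -> str:
    """Call `model` through the Portkey Gateway with the OpenAI SDK."""
    from openai import OpenAI  # installed in the setup step above

    client = OpenAI(
        api_key="dummy",  # placeholder; real auth happens via the Portkey headers
        base_url=PORTKEY_GATEWAY_URL,
        default_headers=portkey_headers(portkey_api_key, virtual_key),
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=256,
    )
    return response.choices[0].message.content
```

Because the Gateway speaks the OpenAI API shape for every provider, the same `chat.completions.create` call works whether the Virtual Key routes to Anthropic, Google, or Cohere.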
Comparing Model Outputs
To display the model outputs in a tabular format for easy comparison, we define the print_model_outputs function:
Example: Evaluating LLMs for a Specific Task
Let’s run the notebook with a specific prompt to showcase the differences in responses from various LLMs:
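The driver cell might look like the sketch below. `top_10_models`, `run_model`, `print_model_outputs`, `virtual_keys`, and `PORTKEY_API_KEY` stand in for the objects defined earlier in the notebook and for your own credentials; the live calls are behind a flag so the cell is safe to run before keys are configured.

```python
prompt = "Explain the CAP theorem in two sentences."

# Flip to True once your Portkey API key and Virtual Keys are set up.
RUN_LIVE = False

outputs = {}
if RUN_LIVE:
    for model, provider in top_10_models:
        # run_model and virtual_keys are the hypothetical helpers sketched above.
        outputs[model] = run_model(
            model=model,
            virtual_key=virtual_keys[provider],
            prompt=prompt,
            portkey_api_key=PORTKEY_API_KEY,
        )

for model, answer in outputs.items():
    print(f"--- {model} ---\n{answer}\n")
```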
On Portkey, you will be able to see the logs for all models:
Conclusion
With minimal setup and code modifications, Portkey enables you to streamline your LLM evaluation process and easily call 200+ LLMs to find the best model for your specific use case.
Explore Portkey further and integrate it into your own projects. Visit the Portkey documentation at https://docs.portkey.ai/ for more information on how to leverage Portkey’s capabilities in your workflow.