Anyscale
Integrate Anyscale endpoints with Portkey seamlessly and make your OSS models production-ready
Portkey’s suite of features (AI gateway, observability, prompt management, and continuous fine-tuning) is enabled for the OSS models (Llama2, Mistral, Zephyr, and more) available on Anyscale endpoints.
Provider slug: `anyscale`
Portkey SDK Integration with Anyscale
1. Install the Portkey SDK
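For example, with the package manager for your language (`portkey-ai` is the published package name on both PyPI and npm):

```shell
# Python
pip install portkey-ai

# Node.js
npm install portkey-ai
```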
2. Initialize Portkey with Anyscale Virtual Key
To use Anyscale with Portkey, get your Anyscale API key from your Anyscale account, then add it to Portkey to create the virtual key.
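A minimal sketch in Python, assuming your credentials live in the environment variables `PORTKEY_API_KEY` and `ANYSCALE_VIRTUAL_KEY` (the variable names are illustrative, not required by the SDK):

```python
import os

# Illustrative environment variable names for your credentials.
PORTKEY_API_KEY = os.getenv("PORTKEY_API_KEY", "")
ANYSCALE_VIRTUAL_KEY = os.getenv("ANYSCALE_VIRTUAL_KEY", "")


def make_client():
    # Imported lazily so the snippet can be read without the SDK installed.
    from portkey_ai import Portkey  # pip install portkey-ai

    return Portkey(
        api_key=PORTKEY_API_KEY,           # your Portkey API key
        virtual_key=ANYSCALE_VIRTUAL_KEY,  # the Anyscale virtual key created above
    )


if __name__ == "__main__" and PORTKEY_API_KEY:
    client = make_client()
    print("Portkey client ready")
```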
3. Invoke Chat Completions with Anyscale
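A sketch of a chat completion through the Portkey SDK; the model id is an assumption, so substitute any chat model served on your Anyscale endpoint:

```python
import os

# Assumption: use any chat model available on your Anyscale endpoint.
MODEL = "meta-llama/Llama-2-70b-chat-hf"
messages = [{"role": "user", "content": "Who are you?"}]


def run_chat() -> str:
    from portkey_ai import Portkey  # pip install portkey-ai

    client = Portkey(
        api_key=os.environ["PORTKEY_API_KEY"],
        virtual_key=os.environ["ANYSCALE_VIRTUAL_KEY"],
    )
    completion = client.chat.completions.create(messages=messages, model=MODEL)
    return completion.choices[0].message.content


# The network call only runs when credentials are actually set.
if __name__ == "__main__" and os.getenv("PORTKEY_API_KEY"):
    print(run_chat())
```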
Directly Using Portkey’s REST API
Alternatively, you can call Anyscale models directly through Portkey’s REST API. It works exactly like the OpenAI API, with two differences:
- You send your requests to Portkey’s complete Gateway URL
https://api.portkey.ai/v1/chat/completions
- You have to add Portkey-specific headers:
  - `x-portkey-api-key` for sending your Portkey API key
  - `x-portkey-virtual-key` for sending your provider’s virtual key (alternatively, if you are not using virtual keys, you can send your provider’s auth header and pass the `x-portkey-provider` header along with it)
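The two points above can be sketched with Python’s standard library; the model id is an assumption, and the request is only sent when credentials are set:

```python
import json
import os
import urllib.request

# Difference 1: requests go to Portkey's complete Gateway URL.
PORTKEY_URL = "https://api.portkey.ai/v1/chat/completions"

payload = {
    "model": "meta-llama/Llama-2-70b-chat-hf",  # assumption: any Anyscale chat model
    "messages": [{"role": "user", "content": "Say this is a test"}],
}

# Difference 2: the Portkey-specific headers described above.
headers = {
    "Content-Type": "application/json",
    "x-portkey-api-key": os.getenv("PORTKEY_API_KEY", ""),
    "x-portkey-virtual-key": os.getenv("ANYSCALE_VIRTUAL_KEY", ""),
}

request = urllib.request.Request(
    PORTKEY_URL, data=json.dumps(payload).encode("utf-8"), headers=headers
)

if __name__ == "__main__" and headers["x-portkey-api-key"]:
    with urllib.request.urlopen(request) as response:
        print(json.load(response)["choices"][0]["message"]["content"])
```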
Using the OpenAI Python or Node SDKs for Anyscale
You can also use the `baseURL` param in the standard OpenAI SDKs and make calls to Portkey + Anyscale directly from there. As in the REST API example, you only need to change the `baseURL` and add `defaultHeaders` to your instance. You can use the Portkey SDK to make this simpler:
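A sketch with the OpenAI Python SDK pointed at Portkey’s gateway, using the Portkey SDK’s `createHeaders` helper to build the `x-portkey-*` headers (the model id is an assumption):

```python
import os

PORTKEY_GATEWAY_URL = "https://api.portkey.ai/v1"


def make_openai_client():
    from openai import OpenAI             # pip install openai
    from portkey_ai import createHeaders  # pip install portkey-ai

    return OpenAI(
        api_key="X",  # ignored; auth happens via the Portkey headers below
        base_url=PORTKEY_GATEWAY_URL,
        default_headers=createHeaders(
            api_key=os.environ["PORTKEY_API_KEY"],
            virtual_key=os.environ["ANYSCALE_VIRTUAL_KEY"],
        ),
    )


if __name__ == "__main__" and os.getenv("PORTKEY_API_KEY"):
    client = make_openai_client()
    chat = client.chat.completions.create(
        model="mistralai/Mistral-7B-Instruct-v0.1",  # assumption: any Anyscale model
        messages=[{"role": "user", "content": "Say this is a test"}],
    )
    print(chat.choices[0].message.content)
```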
This request will be automatically logged by Portkey. You can view this in your logs dashboard. Portkey logs the tokens utilized, execution time, and cost for each request. Additionally, you can delve into the details to review the precise request and response data.
Managing Anyscale Prompts
You can manage all prompts for Anyscale’s OSS models in the Prompt Library. All the current models of Anyscale are supported.
Creating Prompts
Use the Portkey prompt playground to set variables and try out various model params to get the right output.
Using Prompts
Deploy the prompts using the Portkey SDK or REST API
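A minimal sketch with the Portkey SDK; `YOUR_PROMPT_ID` and the variable name are placeholders for a prompt you created in the Prompt Library:

```python
import os

PROMPT_ID = "YOUR_PROMPT_ID"            # placeholder: copy from the Prompt Library
variables = {"customer_name": "Alice"}  # hypothetical variables defined in the prompt


def run_prompt():
    from portkey_ai import Portkey  # pip install portkey-ai

    client = Portkey(api_key=os.environ["PORTKEY_API_KEY"])
    return client.prompts.completions.create(
        prompt_id=PROMPT_ID,
        variables=variables,
    )


if __name__ == "__main__" and os.getenv("PORTKEY_API_KEY"):
    print(run_prompt())
```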
We can also override the hyperparameters:
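For instance, parameters passed alongside the prompt call override the values saved with the prompt; the parameter names follow the OpenAI convention, and the specific values here are illustrative:

```python
import os

# Overrides applied on top of the saved prompt's parameters.
overrides = {"max_tokens": 250, "temperature": 0.1}


def run_prompt_with_overrides():
    from portkey_ai import Portkey  # pip install portkey-ai

    client = Portkey(api_key=os.environ["PORTKEY_API_KEY"])
    return client.prompts.completions.create(
        prompt_id="YOUR_PROMPT_ID",             # placeholder
        variables={"customer_name": "Alice"},   # hypothetical prompt variable
        **overrides,
    )


if __name__ == "__main__" and os.getenv("PORTKEY_API_KEY"):
    print(run_prompt_with_overrides())
```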
Observe how this streamlines your code readability and simplifies prompt updates via the UI without altering the codebase.
Advanced Use Cases
Streaming Responses
Portkey supports streaming responses using Server Sent Events (SSE).
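A sketch of streaming with the Portkey SDK: passing `stream=True` yields incremental chunks as they arrive, following the OpenAI delta format (the model id is an assumption):

```python
import os


def stream_chat() -> None:
    from portkey_ai import Portkey  # pip install portkey-ai

    client = Portkey(
        api_key=os.environ["PORTKEY_API_KEY"],
        virtual_key=os.environ["ANYSCALE_VIRTUAL_KEY"],
    )
    stream = client.chat.completions.create(
        model="meta-llama/Llama-2-70b-chat-hf",  # assumption: any Anyscale chat model
        messages=[{"role": "user", "content": "Count to five."}],
        stream=True,  # ask the gateway for Server-Sent Events
    )
    for chunk in stream:
        # Each SSE chunk carries an incremental delta, as in the OpenAI API.
        delta = chunk.choices[0].delta
        if delta and delta.content:
            print(delta.content, end="", flush=True)


if __name__ == "__main__" and os.getenv("PORTKEY_API_KEY"):
    stream_chat()
```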
Fine-tuning
Please refer to our fine-tuning guides to take advantage of Portkey’s advanced continuous fine-tuning capabilities.
Portkey Features
Portkey supports its complete set of features via the OpenAI SDK, so you don’t need to migrate away from it.
Please find more information in the relevant sections: