Integrate Anyscale endpoints with Portkey seamlessly and make your OSS models production-ready
Portkey’s suite of features (AI gateway, observability, prompt management, and continuous fine-tuning) is enabled for the OSS models available on Anyscale endpoints, including Llama2, Mistral, and Zephyr. To get started, initialize the Portkey client with your Portkey API key and your Anyscale virtual key, shown below for the Node.js and Python SDKs:
```js
import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
    virtualKey: "ANYSCALE_VIRTUAL_KEY" // Your Anyscale Virtual Key
})
```
```python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",  # Replace with your Portkey API key
    virtual_key="ANYSCALE_VIRTUAL_KEY"  # Replace with your virtual key for Anyscale
)
```
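With the client initialized, chat completions against Anyscale-hosted models use the same OpenAI-style interface. Here is a minimal sketch using the Python client from above; the model and prompt are illustrative placeholders.

```python
# Minimal sketch: chat completion through the Portkey client initialized above.
# The model and message are illustrative placeholders.
completion = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="mistralai/Mistral-7B-Instruct-v0.1"
)

print(completion.choices[0].message.content)
```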
Alternatively, you can call Anyscale models directly through Portkey’s REST API. It works exactly like the OpenAI API, with two differences (a sample request follows the list):
1. You send your requests to Portkey’s complete Gateway URL: `https://api.portkey.ai/v1/chat/completions`
2. You add Portkey-specific headers:
   - `x-portkey-api-key` for sending your Portkey API Key
   - `x-portkey-virtual-key` for sending your provider’s virtual key (alternatively, if you are not using Virtual Keys, you can send your provider’s Auth header and pass the `x-portkey-provider` header along with it)
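As a sketch of how those two differences come together, the request below uses Python's requests library with placeholder keys; the model and message are illustrative, and the response is assumed to follow the standard OpenAI chat completion schema.

```python
# Sketch: calling an Anyscale model through Portkey's gateway over REST.
# Keys, model, and message below are placeholders.
import requests

response = requests.post(
    "https://api.portkey.ai/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "x-portkey-api-key": "PORTKEY_API_KEY",          # your Portkey API key
        "x-portkey-virtual-key": "ANYSCALE_VIRTUAL_KEY", # your Anyscale virtual key
    },
    json={
        "model": "mistralai/Mistral-7B-Instruct-v0.1",
        "messages": [{"role": "user", "content": "Say this is a test"}],
    },
)

# The response body mirrors the OpenAI chat completion schema
print(response.json()["choices"][0]["message"]["content"])
```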
You can also set the baseURL param in the standard OpenAI SDKs and make calls to Portkey + Anyscale directly from there. As in the REST API example, you only need to change the baseURL and add defaultHeaders to your client instance. The Portkey SDK's createHeaders helper makes this simpler:
```js
import OpenAI from 'openai'; // We're using the v4 SDK
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai'

const anyscale = new OpenAI({
  apiKey: 'ANYSCALE_API_KEY',
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    provider: "anyscale",
    apiKey: "PORTKEY_API_KEY" // defaults to process.env["PORTKEY_API_KEY"]
  })
});

async function main() {
  const chatCompletion = await anyscale.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'mistralai/Mistral-7B-Instruct-v0.1',
  });

  console.log(chatCompletion.choices);
}

main();
```
```python
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

anyscale = OpenAI(
    api_key="ANYSCALE_API_KEY",  # defaults to os.environ.get("OPENAI_API_KEY")
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        provider="anyscale",
        api_key="PORTKEY_API_KEY"  # defaults to os.environ.get("PORTKEY_API_KEY")
    )
)

chat_complete = anyscale.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.1",
    messages=[{"role": "user", "content": "Say this is a test"}],
)

print(chat_complete.choices[0].message.content)
```
This request is automatically logged by Portkey, and you can view it in your logs dashboard. Portkey logs the tokens utilized, execution time, and cost for each request, and you can drill into any log to review the exact request and response data.
Deploy the prompts using the Portkey SDK or REST API
```js
import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
})

// Make the prompt creation call with the variables
const promptCompletion = await portkey.prompts.completions.create({
    promptID: "YOUR_PROMPT_ID",
    variables: {
        // Required variables for prompt
    }
})
```
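The same prompt completion can be made from the Python SDK. The sketch below assumes the Python client's snake_case parameter names; the prompt ID and variables are placeholders for your own.

```python
# Sketch: running a saved prompt via the Python SDK.
# prompt_id and variables are placeholders; assumes snake_case params in the Python client.
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY")  # defaults to os.environ.get("PORTKEY_API_KEY")

prompt_completion = portkey.prompts.completions.create(
    prompt_id="YOUR_PROMPT_ID",
    variables={
        # Required variables for the prompt
    }
)

print(prompt_completion)
```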