Portkey provides a robust and secure gateway for integrating various Large Language Models (LLMs) into your applications, including all the text generation models served through Huggingface’s Inference Endpoints.
With Portkey, you get features like fast AI gateway access, observability, prompt management, and more, while your LLM API keys stay secure behind a virtual key system.
Initialize the Portkey client with your Portkey API key and your Huggingface virtual key.

Node.js:

import Portkey from 'portkey-ai'

const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
  virtualKey: "VIRTUAL_KEY", // your Huggingface virtual key
  huggingfaceBaseUrl: "HUGGINGFACE_DEDICATED_URL" // optional: use this if you have a dedicated server hosted on Huggingface
})
Python:

from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",  # replace with your Portkey API key
    virtual_key="VIRTUAL_KEY",  # replace with your virtual key for Huggingface
    huggingface_base_url="HUGGINGFACE_DEDICATED_URL"  # optional: use this if you have a dedicated server hosted on Huggingface
)

Use the Portkey instance to send requests to Huggingface. You can also override the virtual key directly in the API call if needed, as shown in the sketch after the examples below.
Node.js:

const chatCompletion = await portkey.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'meta-llama/Meta-Llama-3.1-8B-Instruct', // make sure your model is hot
});

console.log(chatCompletion.choices[0].message.content);
Python:

chat_completion = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # make sure your model is hot
)

print(chat_completion.choices[0].message.content)
You can send the same requests through the OpenAI SDKs pointed at the Portkey gateway; in the snippets below, client is an OpenAI SDK client configured with Portkey’s base URL and default headers.

Python (OpenAI SDK):

chat_completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # make sure your model is hot
)

print(chat_completion.choices[0].message.content)
Node.js (OpenAI SDK):

async function main() {
  const chatCompletion = await client.chat.completions.create({
    model: "meta-llama/Meta-Llama-3.1-8B-Instruct", // make sure your model is hot
    messages: [{ role: "user", content: "How many points to Gryffindor?" }],
  });
  console.log(chatCompletion.choices[0].message.content);
}

main();
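To override the virtual key for a single request, the Portkey SDKs accept per-request options. A minimal sketch in Node.js, assuming the second argument to create carries request-level options (the OVERRIDING_VIRTUAL_KEY placeholder is illustrative, not a value from this page):

const chatCompletion = await portkey.chat.completions.create(
  {
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'meta-llama/Meta-Llama-3.1-8B-Instruct',
  },
  { virtualKey: "OVERRIDING_VIRTUAL_KEY" } // assumed per-request override; replace with the virtual key to use for this call only
);

console.log(chatCompletion.choices[0].message.content);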
Virtual Keys serve as Portkey’s unified authentication system for all LLM interactions, simplifying the use of multiple providers and Portkey features within your application. For self-hosted LLMs, you can configure custom authentication requirements including authorization keys, bearer tokens, or any other headers needed to access your model:
1. Navigate to Virtual Keys in your Portkey dashboard.
2. Click “Add Key” and enable the “Local/Privately hosted provider” toggle.
3. Configure your deployment:
   - Select the matching provider API specification (typically OpenAI).
   - Enter your model’s base URL in the Custom Host field.
   - Add the required authentication headers and their values.
4. Click “Create” to generate your virtual key.
You can now use this virtual key in your requests:
const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY",
  virtualKey: "YOUR_SELF_HOSTED_LLM_VIRTUAL_KEY"
});

async function main() {
  const response = await portkey.chat.completions.create({
    messages: [{ role: "user", content: "Bob the builder.." }],
    model: "your-self-hosted-model-name",
  });
  console.log(response.choices[0].message.content);
}

main();
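If you prefer to keep the endpoint configuration in code rather than in the dashboard, the Portkey SDK also exposes provider and customHost options that mirror the dashboard fields above. A minimal sketch, assuming your deployment speaks the OpenAI API; the URL is a placeholder, and authentication headers remain simplest to manage through the virtual key UI:

import Portkey from 'portkey-ai'

// Sketch only: provider mirrors the “provider API specification” choice and
// customHost mirrors the “Custom Host” field from the dashboard flow above.
const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY",
  provider: "openai", // assumed: the API spec your self-hosted model follows
  customHost: "https://your-model-host.example.com/v1", // placeholder base URL for your deployment
});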