Portkey provides a robust and secure gateway to facilitate the integration of various Large Language Models (LLMs) into your applications, including the models hosted on Deepinfra API.
Portkey SDK Integration with Deepinfra Models
Portkey provides a consistent API to interact with models from various providers. To integrate Deepinfra with Portkey:
1. Install the Portkey SDK
Add the Portkey SDK to your application to interact with Deepinfra's API through Portkey's gateway.
npm install --save portkey-ai
2. Initialize Portkey with the Virtual Key
To use Deepinfra with a virtual key, get your Deepinfra API key from here, then add it to Portkey to create the virtual key.
import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
    virtualKey: "VIRTUAL_KEY" // Your Deepinfra Virtual Key
})
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",  # Replace with your Portkey API key
    virtual_key="DEEPINFRA_VIRTUAL_KEY"  # Your Deepinfra Virtual Key
)
3. Invoke Chat Completions
const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'nvidia/Nemotron-4-340B-Instruct',
});

console.log(chatCompletion.choices);
completion = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="nvidia/Nemotron-4-340B-Instruct"
)

print(completion)
Supported Models
Here’s the list of all the Deepinfra models you can route to using Portkey -
Next Steps
The complete list of features supported in the SDK is available at the link below.
You’ll find more information in the relevant sections:
- Add metadata to your requests
- Add gateway configs to your Deepinfra requests
- Tracing Deepinfra requests
- Set up a fallback from OpenAI to Deepinfra
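As a sketch of the last item, a fallback is expressed as a gateway config attached to your Portkey client or request. The virtual key names below are placeholders; check Portkey's config documentation for the full schema:

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "virtual_key": "openai-virtual-key" },
    { "virtual_key": "deepinfra-virtual-key" }
  ]
}
```

With this config, requests are first routed to the OpenAI target and automatically retried against Deepinfra if the first target fails.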