It takes two minutes to integrate, and once it's in place, Portkey monitors all of your LLM requests while making your app more resilient, secure, performant, and accurate.

Here’s a product walkthrough (3 mins):

Integrate in 3 Lines of Code

from portkey_ai import Portkey

portkey = Portkey(
    api_key="YOUR_PORTKEY_API_KEY",
    virtual_key="YOUR_VIRTUAL_KEY"
)

chat_complete = portkey.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)
print(chat_complete.choices[0].message.content)
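The resilience features mentioned above (fallbacks, retries, etc.) are driven by a routing config you pass when creating the client. Here's a minimal sketch of a fallback config, assuming the shape used by Portkey's gateway configs; the virtual key values are placeholders for keys you create in the Portkey dashboard:

```python
# A gateway config that falls back to a second provider if the first fails.
# Virtual key values below are hypothetical placeholders.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "openai-virtual-key"},
        {"virtual_key": "anthropic-virtual-key"},
    ],
}
```

Pass this as `config=fallback_config` when constructing the `Portkey` client, and requests that fail on the first target are retried against the second.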

While you’re here, why not give us a star? It helps us a lot!

FAQs