The official Python SDK makes it easy to integrate Portkey into any Python application. Enjoy unified access to 250+ LLMs, advanced observability, routing, governance, and enterprise features with just a few lines of code.

Installation

Install the Portkey SDK from PyPI:

pip install portkey-ai

API Key Setup

  1. Create a Portkey API key in your dashboard.
  2. Store your API key securely as an environment variable:
export PORTKEY_API_KEY="your_api_key_here"
The SDK automatically detects your API key from the environment.
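For example, with PORTKEY_API_KEY set you can omit the api_key argument when constructing the client. A minimal sketch (the virtual key below is a placeholder):

from portkey_ai import Portkey

# api_key is read from the PORTKEY_API_KEY environment variable
client = Portkey(virtual_key="your_virtual_key_here")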

Quickstart

Here’s a minimal example to get you started:

from portkey_ai import Portkey

client = Portkey(
    api_key="your_api_key_here",  # Or use the env var PORTKEY_API_KEY
    virtual_key="your_virtual_key_here"  # Or use config="cf-***"
)

response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello, world!"}],
    model="gpt-4o"  # Example provider/model
)
You can use either a Virtual Key or a Config object to select your AI provider. Find more info on different authentication mechanisms here.

Authentication & Configuration

The SDK requires:

  • Portkey API Key: Your Portkey API key (env var PORTKEY_API_KEY recommended)
  • Provider Authentication:
    • Virtual Key: The Virtual Key of your chosen AI provider
    • Config: The Config object or config slug for advanced routing
    • Provider Slug + Auth Headers: Useful if you do not want to store your provider API keys with Portkey and prefer to pass them directly on each request.
# With Virtual Key
portkey = Portkey(api_key="...", virtual_key="...")

# With Config
portkey = Portkey(api_key="...", config="cf-***")

# With Provider Slug + Auth Headers
portkey = Portkey(api_key="...", provider="openai", Authorization="Bearer OPENAI_API_KEY")

Async Usage

Portkey supports async usage: just use the AsyncPortkey client instead of Portkey and await your calls:

import asyncio
from portkey_ai import AsyncPortkey

portkey = AsyncPortkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="VIRTUAL_KEY"
)

async def main():
    chat_completion = await portkey.chat.completions.create(
        messages=[{'role': 'user', 'content': 'Say this is a test'}],
        model='gpt-4'
    )

    print(chat_completion)

asyncio.run(main())

Using a Custom httpx Client

If you need to customize HTTP networking—for example, to disable SSL verification due to VPNs like Zscaler or to use custom proxies—you can pass your own httpx.Client to the Portkey SDK.

Disabling SSL certificate verification is insecure and should only be used for debugging or in trusted internal environments. Never use this in production.

Example: Disable SSL Verification

import httpx
from portkey_ai import Portkey

# Create an httpx client with SSL verification disabled
custom_client = httpx.Client(verify=False)

portkey = Portkey(
    api_key="your_api_key_here",
    virtual_key="your_virtual_key_here",
    http_client=custom_client
)

response = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Hello!"}],
    model="gpt-4o"
)
print(response)
  • You can use any httpx.Client options (e.g., for proxies, timeouts, custom headers).
  • For async usage, pass an httpx.AsyncClient to AsyncPortkey (see the sketch below).
  • See OpenAI Python SDK: Configuring the HTTP client for more examples and best practices.
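A rough sketch of the async variant, assuming AsyncPortkey accepts the same http_client parameter and using a custom timeout (the values shown are placeholders):

import httpx
from portkey_ai import AsyncPortkey

# Custom async HTTP client with a 30-second timeout (placeholder value)
async_http_client = httpx.AsyncClient(timeout=httpx.Timeout(30.0))

portkey = AsyncPortkey(
    api_key="your_api_key_here",
    virtual_key="your_virtual_key_here",
    http_client=async_http_client
)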

Adding Trace ID or Metadata

You can also override the client configuration on individual requests and send a trace ID or metadata along with each request.

completion = portkey.with_options(
    trace_id="TRACE_ID",
    metadata={"_user": "USER_IDENTIFIER"}
).chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-4o"
)
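If the same trace ID or metadata should apply to every request from a client, they can also be supplied when the client is constructed. A sketch, assuming the constructor accepts the same trace_id and metadata options as with_options:

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="VIRTUAL_KEY",
    trace_id="TRACE_ID",                   # applied to every request from this client
    metadata={"_user": "USER_IDENTIFIER"}  # assumption: same option as in with_options
)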

Parameters

List of All Headers

View the complete list of headers that can be used with Portkey API requests, including authentication, configuration, and custom headers.

Here’s how you can use these headers with the Python SDK:

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="VIRTUAL_KEY",
    # Add any other headers from the reference
)

# Or at runtime
completion = portkey.with_options(
    trace_id="your_trace_id",
    metadata={"_user": "user_id"},
    # Add any other headers as needed
).chat.completions.create(
    messages=[{"role": "user", "content": "Hello!"}],
    model="gpt-4o"
)

Troubleshooting & Support

FAQs