This feature is available on all Portkey plans.

OpenAI’s Realtime API is the fastest way to build multi-modal generation into your app, but it brings its own set of challenges around logging, cost tracking, and guardrails.

Portkey’s AI Gateway solves these problems with a seamless integration. Portkey’s logging is unique in that it captures the entire request and response, including the model’s output, cost, and any guardrail violations.

Here’s how to get started:

import asyncio

from portkey_ai import AsyncPortkey as Portkey, PORTKEY_GATEWAY_URL


async def main():
    # Route all requests through Portkey's AI Gateway
    client = Portkey(
        api_key="PORTKEY_API_KEY",
        virtual_key="VIRTUAL_KEY",
        base_url=PORTKEY_GATEWAY_URL,
    )

    # Replace with the Realtime model you want to use
    async with client.beta.realtime.connect(model="gpt-4o-realtime-preview-2024-10-01") as connection:
        # Limit the session to text output for this example
        await connection.session.update(session={'modalities': ['text']})

        # Add a user message to the conversation
        await connection.conversation.item.create(
            item={
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Say hello!"}],
            }
        )
        # Ask the model to generate a response
        await connection.response.create()

        # Stream response events as they arrive
        async for event in connection:
            if event.type == 'response.text.delta':
                print(event.delta, flush=True, end="")

            elif event.type == 'response.text.done':
                print()

            elif event.type == "response.done":
                break


asyncio.run(main())

For advanced use cases, you can use Configs (https://portkey.ai/docs/product/ai-gateway/configs#configs) as shown below.
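A minimal sketch of attaching a saved Config when creating the client (the config ID below is a placeholder; use one from your Portkey dashboard, and check the SDK reference for your version):

from portkey_ai import AsyncPortkey as Portkey, PORTKEY_GATEWAY_URL

# "pc-realtime-xxx" is a hypothetical config ID -- replace with your own
client = Portkey(
    api_key="PORTKEY_API_KEY",
    config="pc-realtime-xxx",  # saved Config with your routing/guardrail rules
    base_url=PORTKEY_GATEWAY_URL,
)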

If you prefer not to store your API keys with Portkey, you can pass your OpenAI API key in the Authorization header.
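A rough sketch of this approach, assuming the provider is selected with the `provider` parameter and the key is forwarded via `Authorization` (verify the exact parameter names against the Portkey SDK reference for your version):

from portkey_ai import AsyncPortkey as Portkey, PORTKEY_GATEWAY_URL

# Pass your own OpenAI key instead of a Portkey virtual key
client = Portkey(
    api_key="PORTKEY_API_KEY",
    provider="openai",
    Authorization="Bearer OPENAI_API_KEY",
    base_url=PORTKEY_GATEWAY_URL,
)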

Fire Away!

You can see your logs in real time, with neatly visualized traces and cost tracking.

Realtime API Traces

Next Steps