Using OpenAI AgentKit with Anthropic, Gemini and other providers

Learn how to connect OpenAI AgentKit workflows with multiple LLM providers and get observability, guardrails, and reliability controls.

AgentKit is OpenAI’s new framework for building and running AI agents.
It includes three parts:

  • Agent Builder: a visual canvas for creating and versioning multi-agent workflows
  • Connector Registry: a central place for admins to manage how data and tools connect across OpenAI products
  • ChatKit: a toolkit for embedding customizable chat-based agent experiences in your product

AgentKit handles orchestration, tool-calling, and state management for you. Instead of wiring every API manually, you describe what the agent should do and how the tools connect, and the SDK takes care of the execution plan.
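
For instance, a minimal agent defined and run with the Agents SDK looks roughly like this (the agent name, instructions, and input are illustrative, not part of an exported workflow):

import { Agent, run } from '@openai/agents';

// A single agent with one job; the SDK handles the execution loop
const triageAgent = new Agent({
  name: 'Triage agent',
  instructions: 'Classify the user request and answer briefly.',
});

const result = await run(triageAgent, 'My invoice looks wrong. Who should I contact?');
console.log(result.finalOutput);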

AgentKit limitations

While AgentKit provides a powerful and fast way to prototype agent workflows, it comes with constraints you should be aware of, especially if you plan to push your agent into production, evolve it, or extend it across providers: out of the box, workflows run on OpenAI models only, and production concerns such as cross-provider routing, fallbacks, unified observability, and guardrails fall outside its scope.

In the sections below, we'll walk through how to integrate Portkey to take AgentKit beyond these limitations.

How to enable multiple providers

Required setup

  • Node.js 18 or later — AgentKit’s SDK is built in TypeScript and depends on recent Node features.
  • OpenAI account with access to the AgentKit beta.
  • Portkey account — you’ll need your PORTKEY_API_KEY to authenticate requests.

How to integrate Portkey with AgentKit

Once your workflow is ready, click Code in the top navigation and select Agents SDK to get the TypeScript implementation of your workflow.

Next, install the required packages:

npm install @openai/agents openai portkey-ai

Replace the default OpenAI client initialization in your exported code with a Portkey-backed client, as shown below:

// Original Agent Builder code
import { Agent, run } from '@openai/agents';

// Add Portkey imports
import { OpenAI } from 'openai';
import { setDefaultOpenAIClient } from '@openai/agents';

const PORTKEY_GATEWAY_URL = "https://api.portkey.ai/v1";

// Configure Portkey client
const portkey = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  apiKey: process.env.PORTKEY_API_KEY,
  defaultHeaders: {
    "x-portkey-provider": "@your-openai-provider-slug",
  },
});

// Set as default client for all agents
setDefaultOpenAIClient(portkey);

// Your Agent Builder workflow code continues as exported...

Using other providers: in the model field of your OpenAI Agents SDK code, enter the Portkey model slug from the Model Catalog, for example @openai-provider/gpt-5 or @anthropic-provider/claude-sonnet-latest.
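
For example, pointing an agent at an Anthropic model is just a matter of swapping the slug (the agent definition below is illustrative):

import { Agent } from '@openai/agents';

// The model field takes a Portkey Model Catalog slug
const researchAgent = new Agent({
  name: 'Research agent',
  instructions: 'Summarize the provided document.',
  model: '@anthropic-provider/claude-sonnet-latest',
});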

How to use multiple providers with AgentKit

Portkey integrates with 1,600+ LLMs across providers and modalities, from OpenAI and Anthropic to Gemini, Mistral, and beyond.

Once you connect AgentKit to Portkey, you can use routing strategies to decide which provider to call based on custom rules such as cost, latency, model capability, region, or request metadata.
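
As a sketch, a conditional routing config might send premium traffic to a stronger model (this assumes Portkey's documented conditional-routing schema; the metadata key and target names are placeholders):

const config = {
  "strategy": {
    "mode": "conditional",
    "conditions": [
      {
        "query": { "metadata.user_plan": { "$eq": "premium" } },
        "then": "strong-model"
      }
    ],
    "default": "economy-model"
  },
  "targets": [
    { "name": "strong-model", "provider": "openai", "override_params": { "model": "gpt-4o" } },
    { "name": "economy-model", "provider": "openai", "override_params": { "model": "gpt-4o-mini" } }
  ]
};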

You can also define fallback strategies that automatically retry or reroute requests to alternate providers.

For example, if your OpenAI call fails, Portkey's AI gateway can retry with Anthropic, without changing your AgentKit workflow.

import { createHeaders, PORTKEY_GATEWAY_URL } from 'portkey-ai';
import { OpenAI } from 'openai';
import { setDefaultOpenAIClient } from '@openai/agents';

// Fallback config. It's recommended to create the config in the Portkey app
// rather than hard-coding the JSON here.
const config = {
  "strategy": {
    "mode": "fallback"
  },
  "targets": [
    {
      "provider": "openai",
      "override_params": {"model": "gpt-4o"}
    },
    {
      "provider": "anthropic",
      "override_params": {"model": "claude-3-opus-20240229"}
    }
  ]
};

// Configure Portkey client with fallback config
const portkey = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  apiKey: process.env.PORTKEY_API_KEY!,
  defaultHeaders: createHeaders({
    config: config
  })
});
setDefaultOpenAIClient(portkey);

Enhance AgentKit capabilities with Portkey

Once your AgentKit agents run through Portkey, you gain access to additional layers built for production environments. These don't change your workflow logic; they enhance how your agents operate and how you monitor them.

1. Observability

Every agent interaction is logged with full context — inputs, outputs, latency, and token usage. You can trace how an agent makes decisions, which tools it calls, and how each provider performs across runs.

// Add tracing to your OpenAI Agents
const portkey = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  apiKey: process.env.PORTKEY_API_KEY!,
  defaultHeaders: createHeaders({
    traceId: "unique_execution_trace_id", // Add a unique trace ID per run
    provider: "@YOUR_OPENAI_PROVIDER"
  })
});
setDefaultOpenAIClient(portkey);

2. Guardrails

You can define policies to review or filter actions before execution.
Guardrails can:

  • Enforce safety or compliance rules,
  • Sanitize inputs and outputs,
  • Block tool calls or messages that violate defined policies.

This gives teams fine-grained control over what their agents can access or execute.
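
As a minimal sketch, guardrails created in the Portkey app can be attached to the same client through the config (the guardrail IDs below are placeholders):

const config = {
  "input_guardrails": ["your-input-guardrail-id"],   // e.g. PII redaction before the model sees the prompt
  "output_guardrails": ["your-output-guardrail-id"]  // e.g. content policy check on the response
};

const portkey = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  apiKey: process.env.PORTKEY_API_KEY!,
  defaultHeaders: createHeaders({ config })
});
setDefaultOpenAIClient(portkey);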

3. Cost tracking

All provider usage is consolidated in one place. You can monitor spend per agent, per team, or per provider, compare token efficiency, and identify high-cost patterns early.
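
One practical way to get per-agent and per-team breakdowns is to tag requests with metadata when creating the client (the labels below are illustrative):

const portkey = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  apiKey: process.env.PORTKEY_API_KEY!,
  defaultHeaders: createHeaders({
    provider: "@your-openai-provider-slug",
    metadata: {
      agent: "support-workflow", // filter spend by agent in Portkey analytics
      team: "customer-success"
    }
  })
});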

4. Reliability features

Beyond fallback strategies, Portkey includes:

  • Retries and circuit breakers to handle transient errors automatically.
  • Conditional routing to direct requests to different providers based on metadata or request type.
  • Canary testing for introducing new models safely before moving them into production.
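
For instance, retries can be layered onto the fallback config from earlier (this assumes Portkey's documented retry schema):

const config = {
  "retry": {
    "attempts": 3,
    "on_status_codes": [429, 500, 502, 503, 504] // retry transient errors before falling back
  },
  "strategy": { "mode": "fallback" },
  "targets": [
    { "provider": "openai", "override_params": { "model": "gpt-4o" } },
    { "provider": "anthropic", "override_params": { "model": "claude-3-opus-20240229" } }
  ]
};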

5. Governance and data controls

For enterprise environments, Portkey offers:

  • Audit logs for every agent interaction,
  • Access controls by user or workspace,
  • Regional deployments and data residency options to meet compliance needs.

Bringing it all together

AgentKit simplifies how developers design and deploy agents. Portkey takes it a step further by making those agents production-ready — multi-provider, observable, and reliable.

With just a small configuration change, you can:

  • Run your AgentKit agents on 1,600+ LLMs across providers.
  • Add routing, fallbacks, and guardrails without rewriting logic.
  • Track performance, cost, and behavior in one unified dashboard.

If you’re building with AgentKit and want to bring your agents into production environments confidently, start here.