We raised our $15M Series A this month! 🎉 Led by Elevation Capital with participation from Lightspeed, this funding helps us double down on our mission: building AI infrastructure that never breaks. We’ve also made improvements across the gateway, guardrails, and provider ecosystem this month. See what’s new!

Summary

Area                 | Key highlights
Platform             | Model Catalog migration for all orgs
Gateway              | Use Responses API and Messages API with any provider
Guardrails           | Protect your apps with Zscaler AI Guard
Models and providers | Route to Databricks, use Claude 4.6 features, run Together AI reasoning models, stream TTS via SSE
Community & Events   | Agent Harness Salon: BLR (Sat 22 Feb), RSA Conference SF (March)

Highlights

We’re thrilled to announce our $15M Series A! This is a huge milestone for Portkey and a testament to the incredible trust our customers and community have placed in us. With this funding, we’re doubling down on our mission: building the unified control plane for production AI that never breaks. Here’s what we’re going to focus on:
  • Expanding go-to-market — Meeting the growing enterprise demand across finance, pharma, technology, and beyond
  • Governance for agentic AI — Building the controls organizations need as agents take autonomous action: permissions, identity, access boundaries, and budget guardrails
  • Platform infrastructure at scale — Higher-volume workloads, real-time use cases, and day-0 support for new models and pricing changes
Read the full announcement here

Platform

All Organizations Now on Model Catalog

We’ve upgraded all organizations to Model Catalog. It gives you a unified way to discover, configure, and route to models across providers.
  • Browse all available models across 40+ providers in one place
  • Configure model-specific settings without touching code
  • Switch providers for the same model with a single change
Explore Model Catalog
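To make the "single change" bullet concrete: catalog models are addressed as @provider-slug/model-name, so switching providers means swapping only the prefix of the model string. A minimal sketch (the provider and model slugs below are hypothetical; use the ones from your own catalog):

```python
def with_provider(model_slug: str, provider_slug: str) -> str:
    """Swap the @provider prefix on a Model Catalog slug.

    Slugs follow the '@provider-slug/model-name' convention used
    in the code samples in this post.
    """
    _, _, model_name = model_slug.partition("/")
    return f"@{provider_slug}/{model_name}"

# Hypothetical slugs for illustration:
slug = "@openai-provider/gpt-4o"
print(with_provider(slug, "azure-openai-provider"))
# -> @azure-openai-provider/gpt-4o
```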

Gateway

Use the Responses API with Any Provider

You can now use OpenAI’s Responses API (/v1/responses) across providers!
  • Keep a single API format while switching between Anthropic, Google, Bedrock, and others
  • Use prompt caching and thinking parameters across providers
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY")

response = portkey.responses.create(
    model="@anthropic-provider/claude-sonnet-4-5-20250514",
    input="Explain quantum computing in simple terms"
)

print(response.output_text)
Get started with Responses API

Use the Messages API with Any Provider

You can now use Anthropic’s Messages API (/v1/messages) with any provider through a universal adapter — not just Anthropic, Bedrock, and Vertex AI.
import anthropic

client = anthropic.Anthropic(
    api_key="PORTKEY_API_KEY",
    base_url="https://api.portkey.ai"
)

message = client.messages.create(
    model="@google-provider/gemini-2.5-flash",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Explain quantum computing in simple terms"}]
)

print(message.content[0].text)
  • Keep your existing Messages API code while routing to OpenAI, Google, and more
  • Let the gateway handle format conversion automatically
See how to use the Messages endpoint

Guardrails

Protect Your Apps with Zscaler AI Guard

You can now connect Zscaler AI Guard to scan prompts and responses for security threats.
  • Enforce Detection Policies for security checks
  • Block or flag data loss risks with DLP protection
  • Catch prompt injection attempts on both inputs and outputs
Connect Zscaler AI Guard
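For orientation, guardrails like this are typically attached through a gateway config rather than in application code. A hedged sketch, assuming the standard input_guardrails/output_guardrails config keys and a placeholder guardrail ID (your actual ID comes from the Guardrails dashboard):

```json
{
  "input_guardrails": ["zscaler-ai-guard-xxxxxx"],
  "output_guardrails": ["zscaler-ai-guard-xxxxxx"]
}
```

Attach a config like this to your requests (e.g. via the x-portkey-config header) so prompts are scanned on the way in and responses on the way out.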

Models and providers

  • Databricks: You can now route requests to Databricks Model Serving for chat completions, completions, and embeddings. Set up Databricks.
  • Claude 4.6: Use Claude 4.6 features across Anthropic, Bedrock, and other providers — including Adaptive Thinking with reasoning_effort, structured outputs via output_config, and new stop reasons like refusal.
  • Together AI reasoning: Run reasoning/thinking models on Together AI with the reasoning_effort parameter and get structured content_blocks in responses. Try Together AI reasoning.
  • Bedrock Anthropic citations: Access Anthropic’s citations feature on Bedrock through the chat completions API.
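To make the Claude 4.6 bullet concrete, here is a request-body sketch using the parameters named above (the model slug is a placeholder, and the output_config shape is omitted since it depends on your schema), plus a small guard for the new refusal stop reason:

```python
# Hypothetical request body illustrating the Claude 4.6 parameters
# mentioned above; the model slug is a placeholder from your catalog.
request_body = {
    "model": "@anthropic-provider/claude-4-6",
    "messages": [{"role": "user", "content": "Summarize this contract."}],
    "reasoning_effort": "low",  # adaptive thinking budget
}

# Responses can now finish with new stop reasons such as "refusal",
# so downstream code should branch on them rather than assume "stop".
def is_refusal(finish_reason: str) -> bool:
    return finish_reason == "refusal"

print(is_refusal("refusal"))  # True
```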

Enhancements

  • OpenAI & Azure OpenAI TTS streaming: Stream text-to-speech audio via Server-Sent Events by setting stream_format: "sse". Set up SSE streaming.
  • ZhipuAI: Generate images with ZhipuAI’s CogView models (e.g., cogview-4-250304). See ZhipuAI docs.
  • Vertex AI: Control image and video input resolution with media_resolution, skip PTU cost attribution with vertex_skip_ptu_cost_attribution, and configure workload identity auth via x-portkey-vertex-auth-type.
  • Batch pricing: Get accurate cost attribution for batch requests with dedicated batch pricing. When batch-specific pricing isn’t available, costs default to 50% of standard pricing.
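The batch-pricing fallback in the last bullet is simple enough to sketch (an illustrative helper, not Portkey's actual implementation):

```python
from typing import Optional

def batch_cost(standard_cost: float,
               batch_price: Optional[float] = None) -> float:
    """Cost attribution rule described above: use dedicated batch
    pricing when it exists, otherwise fall back to 50% of standard."""
    if batch_price is not None:
        return batch_price
    return standard_cost * 0.5

print(batch_cost(1.00))        # no batch price -> 50% of standard
print(batch_cost(1.00, 0.40))  # dedicated batch price wins
```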

Community & Events

Agent Harness Salon: Bangalore

We hosted another Agent Harness Salon in Bangalore on Saturday, February 22nd! Thanks to everyone who joined for the demos, discussions, and drinks.

Meet us at RSA!

The Portkey team will be in SF for RSA Conference this month! If you’re attending and want to chat about AI security, governance, or infrastructure, we’d love to connect. Book a slot here

Last modified on March 4, 2026