Summary
| Area | Key highlights |
|---|---|
| Platform | Model Catalog migration for all orgs |
| Gateway | Use Responses API and Messages API with any provider |
| Guardrails | Protect your apps with Zscaler AI Guard |
| Models and providers | Route to Databricks, use Claude 4.6 features, run Together AI reasoning models, stream TTS via SSE |
| Community & Events | Agent Harness Salon: BLR (Sat 22 Feb), RSA Conference SF (March) |
Highlights
We’re thrilled to announce our $15M Series A! This is a huge milestone for Portkey and a testament to the incredible trust our customers and community have placed in us. With this funding, we’re doubling down on our mission: building the unified control plane for production AI that never breaks.
Here’s what we’re going to focus on:
- Expanding go-to-market — Meeting the growing enterprise demand across finance, pharma, technology, and beyond
- Governance for agentic AI — Building the controls organizations need as agents take autonomous action: permissions, identity, access boundaries, and budget guardrails
- Platform infrastructure at scale — Higher-volume workloads, real-time use cases, and day-0 support for new models and pricing changes
Platform
All Organizations Now on Model Catalog
We’ve upgraded all organizations to Model Catalog. It gives you a unified way to discover, configure, and route to models across providers.
- Browse all available models across 40+ providers in one place
- Configure model-specific settings without touching code
- Switch providers for the same model with a single change
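In practice, the provider switch can be as small as changing the model slug in the request body. A minimal sketch, assuming an illustrative `@provider/model` slug style (check your catalog for the exact identifiers):

```python
# Sketch: the same chat payload routed to two providers. Only the model
# slug changes; the "@provider/model" format here is illustrative.

def build_chat_request(model_slug: str, user_message: str) -> dict:
    """Build a provider-agnostic chat completion request body."""
    return {
        "model": model_slug,
        "messages": [{"role": "user", "content": user_message}],
    }

req_openai = build_chat_request("@openai/gpt-4o", "Hello!")
req_bedrock = build_chat_request("@bedrock/claude-sonnet", "Hello!")
# The message body is identical; only the routing target differs.
```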
Gateway
Use the Responses API with Any Provider
You can now use OpenAI’s Responses API (/v1/responses) across providers!
- Keep a single API format while switching between Anthropic, Google, Bedrock, and others
- Use prompt caching and thinking parameters across providers
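As a rough sketch, one Responses API request body can be reused unchanged across providers. The field names below follow OpenAI’s Responses format; which backends honor the reasoning block depends on the provider:

```python
# Sketch: a single /v1/responses body reused across providers. Field
# names follow OpenAI's Responses API; provider support for the
# reasoning/thinking parameter varies.

def responses_payload(model: str, prompt: str, effort: str = "medium") -> dict:
    """Build a Responses API request body with a reasoning-effort hint."""
    return {
        "model": model,
        "input": prompt,
        "reasoning": {"effort": effort},
    }

# The same body shape works whether the gateway routes the request to
# Anthropic, Google, Bedrock, or another backend.
payload = responses_payload("claude-sonnet-4", "Summarize this changelog.")
```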
Use the Messages API with Any Provider
You can now use Anthropic’s Messages API (/v1/messages) with any provider through a universal adapter — not just Anthropic, Bedrock, and Vertex AI.
- Keep your existing Messages API code while routing to OpenAI, Google, and more
- Let the gateway handle format conversion automatically
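A minimal sketch of the idea: keep building Anthropic-style Messages bodies and only change the routing target, with the gateway converting the format for non-Anthropic backends:

```python
# Sketch: an Anthropic Messages API (/v1/messages) request body.
# max_tokens is required by the Messages format; the gateway translates
# this shape when the target provider is OpenAI, Google, etc.

def messages_payload(model: str, system: str, user: str) -> dict:
    """Build a Messages API request body."""
    return {
        "model": model,
        "max_tokens": 512,
        "system": system,
        "messages": [{"role": "user", "content": user}],
    }

payload = messages_payload("gpt-4o", "You are terse.", "Ping?")
```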
Guardrails
Protect Your Apps with Zscaler AI Guard
You can now connect Zscaler AI Guard to scan prompts and responses for security threats.
- Enforce Detection Policies for security checks
- Block or flag data loss risks with DLP protection
- Catch prompt injection attempts on both inputs and outputs
Models and providers
- Databricks: You can now route requests to Databricks Model Serving for chat completions, completions, and embeddings. Set up Databricks.
- Claude 4.6: Use Claude 4.6 features across Anthropic, Bedrock, and other providers — including Adaptive Thinking with `reasoning_effort`, structured outputs via `output_config`, and new stop reasons like `refusal`.
- Together AI reasoning: Run reasoning/thinking models on Together AI with the `reasoning_effort` parameter and get structured `content_blocks` in responses. Try Together AI reasoning.
- Bedrock Anthropic citations: Access Anthropic’s citations feature on Bedrock through the chat completions API.
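The new reasoning parameter can be combined with an ordinary chat request. A hypothetical sketch (the model id and accepted values are illustrative; see each provider’s docs for the exact schema):

```python
# Sketch: chat completion body using the reasoning_effort parameter
# mentioned above. Model id and value names are illustrative only.
payload = {
    "model": "claude-sonnet-4-6",
    "messages": [{"role": "user", "content": "Is 197 prime?"}],
    "reasoning_effort": "high",  # Adaptive Thinking / reasoning control
}
```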
Enhancements
- OpenAI & Azure OpenAI TTS streaming: Stream text-to-speech audio via Server-Sent Events by setting `stream_format: "sse"`. Set up SSE streaming.
- ZhipuAI: Generate images with ZhipuAI’s CogView models (e.g., `cogview-4-250304`). See ZhipuAI docs.
- Vertex AI: Control image and video input resolution with `media_resolution`, skip PTU cost attribution with `vertex_skip_ptu_cost_attribution`, and configure workload identity auth via `x-portkey-vertex-auth-type`.
- Batch pricing: Get accurate cost attribution for batch requests with dedicated batch pricing. When batch-specific pricing isn’t available, costs default to 50% of standard pricing.
Community & Events
Agent Harness Salon: Bangalore
We hosted another Agent Harness Salon in Bangalore on Saturday, February 22nd! Thanks to everyone who joined for the demos, discussions, and drinks.
Meet us at RSA!
The Portkey team will be in SF for the RSA Conference this month! If you’re attending and want to chat about AI security, governance, or infrastructure, we’d love to connect. Book a slot here.
Resources
- Blog: LLM Deployment Pipeline Explained Step by Step
- Blog: How to host an AI Hackathon without losing control of your keys or budget
- Blog: Securing the MCP Gateway: Lasso Partners with Portkey to Deliver Enterprise-Grade Agentic AI Protection
- Blog: The best approach to compare LLM outputs

