Taking Enterprise AI to New Heights! 🚀

February brings major enhancements to Portkey’s platform with unified APIs, advanced security features, and powerful integrations. We’re particularly excited about our unified fine-tuning, files, and batches API that works across all major providers—making multi-provider deployments simpler than ever.

We’ve also launched advanced PII redaction, auto instrumentation for popular frameworks, and expanded our model support with the latest releases from OpenAI, Google, Anthropic, and more.

Plus, we had an amazing time at the AI Engineering Summit in NYC and connected with the community through the Latent Space podcast!

Let’s explore what’s new:

Summary

Platform
  • Unified Fine-Tuning, Files & Batches API across all major providers
  • Advanced PII Redaction with standardized identifiers
  • Auto instrumentation for CrewAI & LangGraph
  • Custom webhooks with request/response body mutation

Gateway
  • Support for the reasoning_effort param in OpenAI
  • Conditional routing with parameter value modification
  • Google Search Tool support
  • Multimodal ‘webm’ support on Vertex AI
  • Improved streaming responses across providers

Security
  • Default Configs & Metadata on API Keys
  • Multiple owner support in organizations
  • Improved role management in the UI
  • Updated cache implementation for enterprise performance

New Models
  • OpenAI o3 models
  • Gemini 2 Flash Thinking
  • Claude 3.7 Sonnet
  • OpenAI GPT-4.5

Integrations
  • Acuvity guardrail
  • AnythingLLM & JanHQ integrations
  • Azure Marketplace availability
  • Zed integration for secure LLM interactions

Community
  • AI Engineering Summit in NYC
  • Latent Space podcast appearance
  • New community contributors

Platform

Unified Fine-tuning, Files & Batches API

Managing AI assets across multiple providers just got dramatically simpler. Our unified API now offers:

  • Consistent interface for fine-tuning across OpenAI, Azure OpenAI, Google Vertex AI, AWS Bedrock, and Fireworks AI
  • Standardized file upload endpoints for all supported providers
  • Provider-agnostic batch processing that works with any model Portkey supports

This means you can:

  • Develop once, deploy everywhere
  • Easily A/B test fine-tuned models across providers
  • Simplify your codebase with a single interface for all providers
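To see why this simplifies a codebase: with a unified API the request body stays identical across providers, and only the provider routing header changes. A minimal sketch of that idea (the header and body field names here follow common Portkey/OpenAI conventions but are illustrative assumptions, not the exact schema):

```python
# Sketch: the same batch-create payload works for any provider;
# only the provider routing header differs.
def build_batch_request(provider: str, file_id: str) -> dict:
    return {
        "headers": {
            "x-portkey-provider": provider,  # assumed provider routing header
            "Content-Type": "application/json",
        },
        "body": {
            "input_file_id": file_id,            # OpenAI-style batch fields
            "endpoint": "/v1/chat/completions",
            "completion_window": "24h",
        },
    }

openai_req = build_batch_request("openai", "file-abc")
bedrock_req = build_batch_request("bedrock", "file-abc")
# The body is provider-agnostic; switching providers touches one header.
assert openai_req["body"] == bedrock_req["body"]
```

Swapping providers for an A/B test then becomes a one-argument change rather than a rewrite against a different SDK.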

Advanced PII Redaction

We’ve significantly enhanced our security capabilities with sophisticated PII redaction:

  • Automatically detect and redact sensitive information (emails, phone numbers, SSNs) before it reaches any LLM
  • Replace sensitive data with standardized identifiers for consistent handling
  • Seamless integration with our entire guardrails ecosystem
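As a rough illustration of the mechanism (a toy regex-based sketch, not Portkey’s actual detection engine), redaction replaces each match with a standardized identifier so downstream handling stays consistent:

```python
import re

# Toy PII redaction sketch: each detected entity is replaced with a
# standardized identifier like {{EMAIL}} or {{SSN}}.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub("{{" + label + "}}", text)
    return text

print(redact("Contact jane@example.com or 555-123-4567"))
# → Contact {{EMAIL}} or {{PHONE}}
```

In the real pipeline this runs as a guardrail before the request leaves for the provider, so the model only ever sees the placeholders.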

Auto Instrumentation for Agent Frameworks

Building AI agents is now even easier with automatic instrumentation for popular frameworks:

  • Full support for CrewAI and LangGraph with zero configuration changes
  • Retain all Portkey features: interoperability, metering, governance, routing, and more
  • Simplified monitoring and management of complex agent systems

Gateway Enhancements

Custom Webhooks with Body Mutation

A game-changing feature for request and response transformation:

  • Mutate request/response bodies directly from your webhooks
  • Simply return a transformedData object along with your verdict
  • Automatically override existing request/response bodies based on your transformations
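A minimal sketch of such a webhook handler: only the `verdict` and `transformedData` keys come from the description above; the payload shape and the mutation itself are illustrative assumptions.

```python
# Sketch of a guardrail webhook that mutates the outgoing request body.
def guardrail_webhook(payload: dict) -> dict:
    body = payload.get("request", {}).get("json", {})
    # Example mutation: clamp temperature to a safer ceiling.
    mutated = {**body, "temperature": min(body.get("temperature", 1.0), 0.7)}
    return {
        "verdict": True,                 # allow the request to proceed
        "transformedData": {             # overrides the existing request body
            "request": {"json": mutated}
        },
    }

resp = guardrail_webhook({"request": {"json": {"model": "gpt-4o", "temperature": 1.5}}})
print(resp["transformedData"]["request"]["json"]["temperature"])  # 0.7
```

The same pattern works on the response side: return a `transformedData.response` object and the gateway substitutes it before the caller sees the result.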

Improved Provider Integrations

  • Configurable Timeouts: All Partner & Pro Guardrails now have configurable timeouts
  • Better Streaming: Fixed Azure OpenAI streaming to include usage data in final chunks
  • Tool Handling: Improved handling of tool_calls in Gemini responses with mixed text and tool calls
  • Vertex Caching: The gateway now automatically caches your Vertex-generated tokens
  • Google Search Tool: Added support for google_search as a separate tool from google_search_retrieval
  • Conditional Routing: You can now conditionally route based on any request params and modify param values
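As a sketch, a conditional routing config might look like the following. The `strategy`/`conditions`/`then` shape follows Portkey’s conditional router, while the metadata key, target names, and the `override_params` values here are hypothetical:

```json
{
  "strategy": {
    "mode": "conditional",
    "conditions": [
      {
        "query": { "metadata.user_plan": { "$eq": "pro" } },
        "then": "reasoning-target"
      }
    ],
    "default": "base-target"
  },
  "targets": [
    {
      "name": "reasoning-target",
      "virtual_key": "openai-key",
      "override_params": { "temperature": 0.2 }
    },
    {
      "name": "base-target",
      "virtual_key": "openai-key"
    }
  ]
}
```

With this release, conditions can match on request parameters as well as metadata, and the selected target can rewrite parameter values via overrides before the request is forwarded.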

Enterprise

February continues the momentum with powerful enterprise features:

  • Azure Marketplace: Portkey is now available on Azure Marketplace for simplified enterprise procurement
  • Multiple Owners: Organizations can now have multiple owner accounts for improved management
  • Enhanced Role Management: Change member roles directly from the UI
  • User Key Creation: Create user-specific keys directly from the UI interface
  • Default Configs: Attach default configurations and metadata to any API key you create
  • Performance Optimization: Updated cache implementation to avoid redundant Redis calls
  • Browser SDK Support: Run our SDK directly in the browser with Cross-Origin access support

New Models & Integrations

OpenAI o3 Models

Latest o3 models now available through Portkey

Gemini 2 Flash Thinking

Access Google’s Gemini 2 Flash with thinking capabilities

Claude 3.7 Sonnet

Anthropic’s newest Sonnet model with enhanced reasoning

OpenAI GPT-4.5

Latest OpenAI model with improved capabilities

We’ve expanded our integration ecosystem with powerful new additions:

  • Acuvity Guardrail: Enhanced security with specialized content filtering
  • Zed Integration: Secure, observe, and govern your LLM interactions for entire teams
  • AnythingLLM & JanHQ: New integrations for expanded ecosystem compatibility
  • Grounding for Vertex & Gemini: Improved factual accuracy for Google’s AI models

Scale & Impact

January was a record-breaking month for Portkey. We saw unprecedented enterprise adoption, closing more enterprise deals in January alone than in the final months of 2024 combined.

What’s thrilling is the scale of AI adoption we’re enabling:

  • Processed ~250M LLM calls in just the past week
  • ~60% of calls have fallbacks configured
  • ~39% of calls have load balancing or A/B Testing enabled
  • A large percentage have at least one runtime guardrail check

Community

Community Contributors

A special thanks to our community contributors this month:

“Describing Portkey as merely useful would be an understatement; it’s a must-have.” - @AManInTech

Our Stories

The State of AI FinOps 2025: Key Insights from FinOps Foundation's Latest Report

Documentation

We’ve significantly improved our documentation this month.
