December
Ending the year with tools, intelligence, and enterprise controls! 🛠️
This month we announced our MCP (Model Context Protocol) product, enabling LLMs to leverage 800+ tools through a unified interface. We’ve also added Gemini grounding for fact-checking against Google Search, PDF support for Anthropic models, and the entire HuggingFace model garden on Vertex AI.
For enterprises, we’re introducing comprehensive SSO/SCIM support, enhanced usage controls, and more.
Let’s explore what’s new!
Summary
| Area | Key Updates |
|---|---|
| Platform | • Announced Portkey MCP Client with support for 800+ tools • Set usage & budget limits for keys • New strict OpenAI compliance mode |
| Integrations | • Support for o1 and Llama 3.3 • Full HuggingFace model garden on Vertex AI • Support for Amazon Nova models • Enhanced Groq & Ollama tools support • Gemini grounding mode for search-backed responses • Anthropic’s new PDF input capabilities • Microsoft Semantic Kernel integration • Realtime API support |
| Enterprise | • Flexible SSO/SCIM for any OIDC/SAML provider • New workspace management APIs |
| Guardrails | • New guardrail integrations with Pangea & Promptfoo • Enhanced regex guardrail capabilities |
Model Context Protocol
Portkey’s Model Context Protocol client enables your AI agents to seamlessly interact with hundreds of tools while maintaining enterprise-grade observability and control.
Join the MCP waitlist →
Platform
Gemini Grounding
Ground LLM responses with real-world data through Google search integration
Anthropic PDF
Native support for PDF processing in Anthropic models, using OpenAI’s image_url field
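If you send requests through Portkey’s OpenAI-compatible interface, a document can ride along in that image_url field. Below is a minimal sketch with the Python SDK, assuming a hypothetical Anthropic virtual key and a data: URL carrying the base64-encoded PDF; check Portkey’s docs for the exact payload shape.

```python
import base64
from portkey_ai import Portkey

# Hypothetical virtual key; replace with your own Anthropic virtual key.
client = Portkey(api_key="PORTKEY_API_KEY", virtual_key="anthropic-virtual-key")

# Read and base64-encode the PDF so it can travel as a data URL.
with open("report.pdf", "rb") as f:
    pdf_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize the attached PDF."},
            # The PDF is passed through the OpenAI-style image_url field;
            # the data URL prefix here is an assumption.
            {"type": "image_url",
             "image_url": {"url": f"data:application/pdf;base64,{pdf_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```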
Realtime API
Portkey logs the entire request and response for the OpenAI Realtime API, including cost and any guardrail violations.
Flag for Strict OpenAI Compliance
Set the flag to FALSE to use provider-specific features while maintaining OpenAI API compatibility
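Here is a minimal sketch of toggling the flag from the Python SDK. The strict_open_ai_compliance parameter name and the Gemini virtual key below are assumptions, so confirm the exact spelling for your SDK version.

```python
from portkey_ai import Portkey

# Assumption: the SDK exposes the compliance flag as `strict_open_ai_compliance`;
# confirm the exact parameter/header name in Portkey's docs.
client = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="gemini-virtual-key",   # hypothetical virtual key
    strict_open_ai_compliance=False,    # FALSE lets provider-specific fields pass through
)

# With strict compliance off, provider-specific extras (e.g. Gemini grounding
# metadata) can appear alongside the OpenAI-compatible response shape.
response = client.chat.completions.create(
    model="gemini-1.5-pro",
    messages=[{"role": "user", "content": "Who won the 2024 Nobel Prize in Physics?"}],
)
print(response.choices[0].message.content)
```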
- Bulk Actions on Prompts: Move & delete multiple prompt templates easily
- Language Detection in Logs: Portkey now renders the coding language in the logs view
- Local Gateway Console: On the open-source Gateway, we now log all of your requests along with their key stats
- Set Dynamic Usage Limits: Create budget- or token-limited API & Virtual keys with automatic expiry or reset (see the sketch below)
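For illustration only, this is roughly what a budget-limited key expresses. Every field name in this sketch is hypothetical, so treat it as a mental model rather than Portkey’s actual key-creation schema.

```python
# Hypothetical sketch only: field names do not reflect Portkey's actual schema.
budget_limited_key = {
    "name": "marketing-team-openai",
    "provider": "openai",
    "usage_limits": {
        "type": "cost",               # or "tokens" for a token-based cap
        "limit": 100,                 # e.g. 100 USD per period
        "periodic_reset": "monthly",  # automatic reset instead of a hard expiry
    },
}
```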
Enterprise
Authentication & Access
- Universal SSO: Support for any provider using OIDC/SAML standards
- SCIM Integration: Automated user provisioning and management
- Workspace Control: New APIs for workspace deletion and user invites
- Private Deployment: Updated documentation for fully private Portkey installations (Docs)
Azure Enhancements
- Virtual Keys: New configuration options for Azure deployments
- Enhanced Integration: Improved Azure-specific features and controls
Integrations
New Providers
HuggingFace on Vertex
Access the complete HuggingFace model garden through Vertex AI
Self-deployed models on Vertex
You can now call your self-deployed models on Vertex AI through Portkey
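Calling either kind of Vertex AI deployment should look like any other Portkey request. A minimal sketch with the Python SDK follows; the virtual key and the model identifier are assumptions, so use whatever name your HuggingFace or self-deployed endpoint is registered under.

```python
from portkey_ai import Portkey

# Hypothetical Vertex AI virtual key; swap in your own.
client = Portkey(api_key="PORTKEY_API_KEY", virtual_key="vertex-ai-virtual-key")

# The model identifier is an assumption — use the name your HuggingFace or
# self-deployed Vertex AI endpoint is registered under.
response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "Give me a one-line summary of Vertex AI."}],
)
print(response.choices[0].message.content)
```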
Amazon Nova
Support for Nova models in the prompt playground
Azure AI Inference
Full integration with the Azure AI Inference platform
Qdrant
You can now also route your Qdrant vector DB queries through Portkey.
More
Nebius AI, Lambda Labs, Lemonfox, Inference.net, Voyage AI, Recraft AI
New Models
OpenAI o1
Integrated support for OpenAI’s latest o1 model across OpenAI & Azure OpenAI
Llama 3.3
Added support for Meta’s latest Llama 3.3 model across multiple providers
Microsoft Semantic Kernel
We also integrated Microsoft’s Semantic Kernel library with Portkey! (and did it first for C#)
Semantic Kernel Docs
Guardrails
All Guardrail responses now include an explanation property so you can understand why checks passed or failed.
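As a rough illustration of how the property can be used, the sketch below assumes a simplified result shape; the exact field names in Portkey’s guardrail output may differ.

```python
# Simplified, assumed shape of a guardrail check result; exact field names
# in Portkey's guardrail output may differ.
guardrail_result = {
    "check": "regex_match",
    "verdict": False,  # the check failed
    "explanation": "Input matched a blocked pattern, so the request was denied.",
}

if not guardrail_result["verdict"]:
    # Surface the explanation so callers know why the request was blocked.
    print(f"Guardrail failed: {guardrail_result['explanation']}")
```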
Pangea
Pangea’s enterprise-grade security guardrails are now available on Portkey
Mistral Content Moderation
Run content moderation on LLM inputs/outputs with Mistral’s latest model
Resources
Essential reading for your AI infrastructure:
- Prompt Injection Attacks: Understanding and preventing security risks
- Real-time vs Batch Evaluation: Choosing the right guardrail strategy
Improvements
Provider Enhancements
- Fixed Cohere streaming on Bedrock
- Improved media support in moderations API
- Enhanced regex guardrail functionality
SDK Updates
- Resolved Pydantic compatibility issues
- Fixed httpx-related concerns in the Python SDK