Open Sourcing Guardrails on the Gateway Framework We are solving the *biggest missing component* in taking AI apps to prod: enforce LLM behavior and route requests with precision, in one go.
LLMs in Prod Comes to Bangalore Portkey's LLMs in Prod series hit Bangalore, bringing together AI practitioners to tackle real-world challenges in productionizing AI apps. From AI gateways to agentic workflows to DSPy at scale, here's what's shaping the future of AI in production.
Anyscale's OSS Models + Portkey's Ops Stack The landscape of AI development is rapidly evolving, and open-source Large Language Models (LLMs) have emerged as a key foundation for building AI applications. Anyscale has been a game-changer here with their fast, cheap APIs for Llama 2, Mistral, and other OSS models. But to harness the full potential of…
Building Production-Ready RAG Apps 💡This is Portkey's first collaboration with the Hasura Team. Hasura helps you build robust RAG data pipelines by unifying multiple private data sources (relational DB, vector DB, etc.) and letting you query the data securely with production-grade controls. LLMs have been around for some time now and have shown that…
⭐ Reducing LLM Costs & Latency with Semantic Cache Implementing semantic cache from scratch for production use cases.