Portkey Blog

Semantic Caching

OpenAI’s Prompt Caching: A Deep Dive


This update is welcome news for developers who have been grappling with the challenges of managing API costs and response times. OpenAI's Prompt Caching introduces a mechanism to reuse recently seen input tokens, potentially slashing costs by up to 50% and dramatically reducing latency for repetitive tasks. In this post…
Kavya MD Oct 20, 2024
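Since prompt caching matches on the longest shared prefix of recent requests, the practical takeaway from the teaser above is to keep static content (system prompt, instructions, examples) at the start of every request and variable content at the end. A minimal sketch of that structure, assuming the standard `openai` Python client and a hypothetical `STATIC_SYSTEM_PROMPT` (all content below is illustrative, not from the article):

```python
# Sketch: structure requests so repeated calls share a long, stable prefix,
# which is what OpenAI's automatic prompt caching matches on.
# STATIC_SYSTEM_PROMPT is a hypothetical stand-in for a long, unchanging
# block of instructions; caching only kicks in for sufficiently long prompts.

STATIC_SYSTEM_PROMPT = (
    "You are a support assistant for ExampleCo. "
    "Follow the policy excerpts below when answering.\n"
    "...long, unchanging policy text would go here..."
)

def build_messages(user_question: str) -> list[dict]:
    """Put static content first and variable content last, so
    consecutive requests share the longest possible prompt prefix."""
    return [
        {"role": "system", "content": STATIC_SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

# With the official client, a call would look like (not executed here):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=build_messages("How do I reset my password?"),
# )
```

The key design choice is simply ordering: anything that changes per request (the user question, retrieved context) goes after the stable block, so it never breaks the cached prefix.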
Unpacking Semantic Caching at Walmart


Last month, the LLMs in the Prod community had the pleasure of hosting Rohit Chatter, Chief Software Architect at Walmart Tech Global, for a fireside chat on Gen AI and semantic caching in retail. This conversation spanned a wide range of topics, from Rohit's personal journey in the tech industry…
Vrushank Vyas Feb 5, 2024

Portkey Blog © 2025. Powered by Ghost