
Semantic Cache

⭐️ Implementing FrugalGPT: Reducing LLM Costs & Improving Performance

FrugalGPT is a framework proposed by Lingjiao Chen, Matei Zaharia, and James Zou from Stanford University in their 2023 paper "FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance". The paper outlines strategies for more cost-effective and performant usage of large language model (LLM) APIs.
Rohit Agarwal, Ayush Garg Apr 22, 2024
Transforming E-Commerce Search with Semantic Cache: Insights from Walmart's Journey

We recently spoke with Rohit Chatter, Chief Software Architect at Walmart, who offered insights into how Walmart is leveraging semantic caching and the impact it has had on its e-commerce search functions.
Rohit Agarwal Feb 9, 2024

⭐ Reducing LLM Costs & Latency with Semantic Cache

Implementing semantic cache from scratch for production use cases.
Vrushank Vyas Jul 11, 2023
