Portkey Blog

Cache hit rate

⭐️ Implementing FrugalGPT: Reducing LLM Costs & Improving Performance

FrugalGPT is a framework proposed by Lingjiao Chen, Matei Zaharia, and James Zou from Stanford University in their 2023 paper "FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance". The paper outlines strategies for more cost-effective and performant usage of large language model (LLM) APIs.
Rohit Agarwal, Ayush Garg Apr 22, 2024
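To give a flavour of the paper's cascade strategy, here is a minimal sketch: query cheaper models first and only escalate to a more expensive one when a scorer is not confident in the answer. `call_model`, `score_answer`, and the 0.8 threshold are hypothetical stand-ins for your own provider call and quality scorer, not FrugalGPT or Portkey APIs.

```python
from typing import Callable

def cascade(prompt: str,
            models: list[str],
            call_model: Callable[[str, str], str],
            score_answer: Callable[[str, str], float],
            threshold: float = 0.8) -> str:
    """Try models ordered cheapest -> most expensive; stop at the first acceptable answer."""
    answer = ""
    for model in models:
        answer = call_model(model, prompt)
        if score_answer(prompt, answer) >= threshold:
            return answer          # good enough: skip the pricier models and save cost
    return answer                  # fall back to the last (strongest) model's answer
```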

⭐ Reducing LLM Costs & Latency with Semantic Cache

Implementing semantic cache from scratch for production use cases.
Vrushank Vyas Jul 11, 2023
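For a rough idea of what a semantic cache does, here is a minimal sketch that stores (embedding, response) pairs and serves a cached response when a new prompt's embedding is close enough. The `embed` placeholder, the 64-dimension vectors, and the 0.9 cosine-similarity threshold are illustrative assumptions, not the implementation from the post.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: hash-seeded random unit vector, only to keep the sketch runnable.
    # In practice this would be a real embeddings model/API call.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

class SemanticCache:
    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.entries: list[tuple[np.ndarray, str]] = []  # (prompt embedding, cached response)

    def get(self, prompt: str) -> str | None:
        q = embed(prompt)
        for vec, response in self.entries:
            # Cosine similarity reduces to a dot product on unit vectors.
            if float(np.dot(q, vec)) >= self.threshold:
                return response    # cache hit: skip the LLM call entirely
        return None                # cache miss: caller should query the LLM and put() the result

    def put(self, prompt: str, response: str) -> None:
        self.entries.append((embed(prompt), response))
```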
