⭐ Building Reliable LLM Apps: 5 Things To Know In this blog post, we explore a roadmap for building reliable large language model applications. Let’s get started!
⭐ Semantic Cache for Large Language Models Learn how semantic caching for large language models reduces cost, improves latency, and stabilizes high-volume AI applications by reusing responses based on intent, not just text.
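The core idea, reusing a cached response when a new query has the same intent rather than the same text, can be sketched in a few lines. This is a minimal illustration, not any particular library's API: `embed` is a placeholder for whatever embedding model you use, and the 0.9 threshold is an arbitrary example value you would tune.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class SemanticCache:
    """Cache keyed by embedding similarity instead of exact text match."""

    def __init__(self, embed, threshold=0.9):
        self.embed = embed          # placeholder: text -> vector function
        self.threshold = threshold  # minimum similarity to count as a hit
        self.entries = []           # list of (embedding, response) pairs

    def get(self, query):
        """Return the cached response for the most similar past query,
        or None if nothing is similar enough."""
        q = self.embed(query)
        best_resp, best_sim = None, -1.0
        for emb, resp in self.entries:
            sim = cosine(q, emb)
            if sim > best_sim:
                best_resp, best_sim = resp, sim
        return best_resp if best_sim >= self.threshold else None

    def put(self, query, response):
        self.entries.append((self.embed(query), response))
```

On a cache hit, the LLM call is skipped entirely, which is where the cost and latency savings come from; a real deployment would also bound the cache size and store embeddings in a vector index rather than a flat list.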
FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance - Summary The paper discusses the cost associated with querying large language models (LLMs) and proposes FrugalGPT, a framework that uses LLM APIs to process natural language queries within a budget constraint. The framework uses prompt adaptation, LLM approximation, and LLM cascade to reduce the inference cost.
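Of the three strategies, the LLM cascade is the easiest to picture: try models cheapest-first and only escalate when the answer looks unreliable. The sketch below assumes a hypothetical `scorer` that rates answer quality; the paper learns such a scorer, whereas here it is just a black-box callable, and the 0.8 threshold is an illustrative value.

```python
def cascade(query, models, scorer, threshold=0.8):
    """Query models in order of increasing cost. Accept the first answer
    the scorer rates at or above the threshold; if none qualifies, fall
    back to the last (most capable) model's answer."""
    answer = None
    for model in models:
        answer = model(query)
        if scorer(query, answer) >= threshold:
            return answer
    return answer
```

Because easy queries never reach the expensive models, average cost drops while hard queries still get the strongest model's answer.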
⭐️ Decoding OpenAI Evals Learn how to use the eval framework to evaluate models & prompts to optimise LLM systems for the best outputs.
We're Afraid Language Models Aren't Modeling Ambiguity - Summary The paper discusses the importance of managing ambiguity in natural language understanding and evaluates the ability of language models (LMs) to recognize and disentangle possible meanings. The authors present AMBIENT, a linguist-annotated benchmark of 1,645 examples with diverse kinds of ambiguity.
Sparks of Artificial General Intelligence: Early experiments with GPT-4 - Summary The paper reports on the investigation of an early version of GPT-4, which is part of a new cohort of LLMs that exhibit more general intelligence than previous AI models. The paper demonstrates that GPT-4 can solve novel and difficult tasks spanning mathematics, coding, vision, medicine, law, psychology, and more.
Eight Things to Know about Large Language Models - Summary The paper discusses eight potentially surprising claims about large language models (LLMs), including their predictable increase in capability with increasing investment, the unpredictability of specific behaviors, and the lack of reliable techniques for steering their behavior.