
LLaMA2
⭐️ Getting Started with Llama 2
Llama 2 is an open-source large language model (LLM) developed by Meta. Learn about Llama 2's capabilities, how it compares to other models, and how to run Llama 2 locally using Python.
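As a taste of what the post covers, here is a minimal sketch of running a Llama 2 chat model locally with Hugging Face transformers. The model ID and loading options are assumptions (the gated checkpoint requires accepting Meta's license on the Hugging Face Hub), and the post itself may use a different stack:

```python
# Minimal sketch: running Llama 2 locally via Hugging Face transformers.
# Assumes the Meta license for the gated checkpoint has been accepted and
# that `accelerate` is installed for device_map="auto".
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed checkpoint; any Llama 2 variant works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "What can Llama 2 be used for?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```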
Production Guides
In this blog post, we explore a roadmap for building reliable large language model applications.
Production Guides
Implementing semantic cache from scratch for production use cases.
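The core idea is easy to sketch: embed each incoming query, and if a previously answered query is close enough in embedding space, return the cached answer instead of calling the LLM again. The embedding function and similarity threshold below are illustrative placeholders, not the post's actual implementation:

```python
# Sketch of a semantic cache: reuse an LLM answer when a new query is
# semantically close to one already answered. embed() and the 0.9
# threshold are hypothetical stand-ins.
from typing import Optional
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding function; swap in any sentence-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

class SemanticCache:
    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, cached answer) pairs

    def get(self, query: str) -> Optional[str]:
        q = embed(query)
        for vec, answer in self.entries:
            # Cosine similarity; vectors are unit-norm, so a dot product suffices.
            if float(np.dot(q, vec)) >= self.threshold:
                return answer
        return None

    def put(self, query: str, answer: str) -> None:
        self.entries.append((embed(query), answer))

cache = SemanticCache()
cache.put("What is Llama 2?", "An open LLM from Meta.")
print(cache.get("Tell me what Llama 2 is"))  # hit only if similarity >= threshold
```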
paper summaries
The paper discusses the cost of querying large language models (LLMs) and proposes FrugalGPT, a framework that combines LLM APIs to answer natural language queries within a budget constraint. The framework relies on prompt adaptation, LLM approximation, and LLM cascade to reduce the inference cost.
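The cascade idea from the summary can be sketched as: try cheap models first, and only escalate to more expensive ones when a scoring function is not confident in the answer. The model ordering, scorer, and threshold below are illustrative stand-ins, not FrugalGPT's actual components:

```python
# Sketch of an LLM cascade in the spirit of FrugalGPT: query models from
# cheapest to most expensive, stopping once a scorer accepts the answer.
# query_model() and score() are hypothetical stand-ins.
from typing import Callable

def llm_cascade(
    prompt: str,
    models: list,                            # model names, cheapest -> most expensive
    query_model: Callable[[str, str], str],  # (model_name, prompt) -> answer
    score: Callable[[str, str], float],      # (prompt, answer) -> confidence in [0, 1]
    threshold: float = 0.8,
) -> str:
    answer = ""
    for model in models:
        answer = query_model(model, prompt)
        if score(prompt, answer) >= threshold:
            return answer  # confident enough; skip the more expensive models
    return answer  # fall back to the last (most expensive) model's answer
```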
Production Guides
Learn how to use the eval framework to evaluate models & prompts to optimise LLM systems for the best outputs.
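In rough outline, an eval harness runs each prompt through the system and scores the output against a reference. The sketch below uses an exact-match scorer and a stubbed model call, both assumptions rather than the framework's real API:

```python
# Sketch of a minimal eval loop: run (input, expected) cases through a
# model function and report the fraction scored as correct. run_model()
# is a hypothetical stand-in for the actual LLM call.
from typing import Callable

def evaluate(
    cases: list,                       # (input prompt, expected output) pairs
    run_model: Callable[[str], str],   # hypothetical LLM call
) -> float:
    correct = 0
    for prompt, expected in cases:
        output = run_model(prompt)
        # Exact match is the simplest scorer; real evals often use
        # model-graded or fuzzy scoring instead.
        if output.strip().lower() == expected.strip().lower():
            correct += 1
    return correct / len(cases) if cases else 0.0
```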
paper summaries
The paper discusses the importance of managing ambiguity in natural language understanding and evaluates the ability of language models (LMs) to recognize and disentangle possible meanings. The authors present AMBIENT, a linguist-annotated benchmark of 1,645 examples with diverse kinds of ambiguity.
paper summaries
The paper reports on the investigation of an early version of GPT-4, which is part of a new cohort of LLMs that exhibit more general intelligence than previous AI models. It demonstrates that GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology, and more.
paper summaries
The paper discusses eight potentially surprising claims about large language models (LLMs), including their predictable increase in capability with increasing investment, the unpredictability of specific behaviors, and the lack of reliable techniques for steering their behavior.
paper summaries
The paper presents the first attempt to use GPT-4 to generate instruction-following data for finetuning large language models (LLMs). The 52K English and Chinese instruction-following examples generated by GPT-4 lead to superior zero-shot performance on new tasks compared to instruction-following data generated by previous state-of-the-art models.
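As a rough illustration of the recipe (not the paper's actual pipeline), one can prompt GPT-4 through the OpenAI API to answer seed instructions and collect the pairs in an Alpaca-style format. The seed instructions, model name, and output schema here are assumptions:

```python
# Sketch of generating instruction-following data with GPT-4, loosely
# following the paper's recipe. Requires OPENAI_API_KEY in the environment;
# the seed instructions and JSON schema are illustrative.
import json
from openai import OpenAI

client = OpenAI()
seed_instructions = ["Explain what a semantic cache is in one paragraph."]

records = []
for instruction in seed_instructions:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": instruction}],
    )
    records.append({
        "instruction": instruction,
        "input": "",
        "output": resp.choices[0].message.content,
    })

# Save in an Alpaca-style JSON file for downstream finetuning.
with open("gpt4_instruction_data.json", "w") as f:
    json.dump(records, f, indent=2)
```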