In this blog post, we explore a roadmap for building reliable large language model applications. Let’s get started!
First, we implement a semantic cache from scratch for production use cases.
Next, we use an eval framework to evaluate models and prompts, optimising the LLM system for the best outputs.
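Whatever framework you use, the core of an eval is the same loop: run a set of (prompt, expected answer) cases through the model and score the outputs. A minimal sketch with hypothetical names (`run_eval`, `model` as any callable that maps a prompt to a string), using exact-match scoring; real evals often swap in fuzzier graders:

```python
from collections.abc import Callable

def run_eval(model: Callable[[str], str], cases: list[tuple[str, str]]) -> float:
    """Run each (prompt, expected) case through the model and return the
    fraction of outputs that exactly match, ignoring case and whitespace."""
    correct = sum(
        1 for prompt, expected in cases
        if model(prompt).strip().lower() == expected.strip().lower()
    )
    return correct / len(cases)

# Usage: compare two prompts (or two models) on the same cases and keep
# whichever scores higher.
```

Because the score is a single number per (model, prompt) pair, you can grid over prompt variants and models and pick the combination with the best eval score rather than eyeballing outputs.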
Finally, choosing an LLM from the 20+ models available today is hard. We explore Elo ratings as a method to objectively rank candidates and pick the best performer for our use case.
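Elo ratings, borrowed from chess, turn pairwise comparisons (e.g. a judge preferring one model's output over another's) into a global ranking. Each model gets a rating; after every "battle" the winner gains points and the loser drops points, scaled by how surprising the result was. A minimal sketch of the standard update rule (K-factor of 32 is a conventional choice, not anything specific to LLMs):

```python
def expected_score(r_a: float, r_b: float) -> float:
    # Probability that A beats B under the Elo model: a 400-point rating
    # gap corresponds to ~10:1 odds.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0) -> tuple[float, float]:
    # score_a: 1.0 if A won the comparison, 0.5 for a tie, 0.0 if A lost.
    # Winner gains, loser loses, proportional to how unexpected the outcome was.
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# Usage: start every model at the same rating (say 1000), replay the
# pairwise preference results, and read the final ratings as the ranking.
```

Because updates only ever need the two participants' ratings, you can rank 20+ models without running every model on every prompt, which is what makes Elo attractive for model selection.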