⭐️ Analyze your LLM calls - 2.0 Portkey's Analytics 2.0 gives our users complete visibility into their LLM calls across requests, users, errors, cache, and feedback.
⭐ Building Reliable LLM Apps: 5 Things To Know In this blog post, we explore a roadmap for building reliable large language model applications. Let’s get started!
⭐ Semantic Cache for Large Language Models Learn how semantic caching for large language models reduces cost, improves latency, and stabilizes high-volume AI applications by reusing responses based on intent, not just text.
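The core idea behind a semantic cache, reusing a stored response when a new prompt means the same thing, can be sketched in a few lines. This is a toy illustration, not Portkey's implementation: the `embed` function here is a simple bag-of-words counter standing in for a real sentence-embedding model, and the 0.8 similarity threshold is an arbitrary example value.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy "embedding": a bag-of-words count vector. A production cache
    # would use a real sentence-embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold=0.8):
        self.threshold = threshold  # minimum similarity to count as a hit
        self.entries = []           # list of (embedding, response) pairs

    def get(self, prompt):
        # Return the cached response for the most similar stored prompt,
        # or None if nothing clears the threshold (a cache miss).
        e = embed(prompt)
        best = max(self.entries, key=lambda x: cosine(e, x[0]), default=None)
        if best and cosine(e, best[0]) >= self.threshold:
            return best[1]
        return None

    def put(self, prompt, response):
        self.entries.append((embed(prompt), response))

cache = SemanticCache()
cache.put("what is the capital of france", "Paris")
print(cache.get("What is the capital of france"))  # hit despite different casing
print(cache.get("tell me a joke"))                 # miss: unrelated intent
```

Because the lookup matches on similarity rather than exact text, paraphrased or re-cased prompts hit the cache, which is what saves cost and latency at high volume.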
A Deep Dive into LLMOps Rohit from Portkey is joined by Weaviate Research Scientist Connor for a deep dive into the differences between MLOps and LLMOps, building RAG systems, and what lies ahead for building production-grade LLM-based apps. This and much more in this podcast!
The Confidence Checklist for LLMs in Production Portkey CEO Rohit Agarwal shares practical tips from his own experience on crafting production-grade & reliable LLM systems. Read more LLM reliability tips here.
Towards Reasoning in Large Language Models: A Survey - Summary This paper provides a comprehensive overview of the current state of knowledge on reasoning in Large Language Models (LLMs), including techniques for improving and eliciting reasoning in these models, methods and benchmarks for evaluating reasoning abilities, and the findings and implications of previous research.
Are We Really Making Much Progress in Text Classification? A Comparative Review - Summary This paper reviews and compares methods for single-label and multi-label text classification, categorizing them into bag-of-words, sequence-based, graph-based, and hierarchical methods. The findings reveal that pre-trained language models outperform all recently proposed graph-based and hierarchy-based methods.