Latency and cost are significant hurdles for developers building on top of language models like GPT-4. High latency degrades the user experience, and rising costs limit scalability.
We've released a new feature - Semantic Cache, which efficiently addresses these challenges. Early tests reveal a promising ~20% cache hit rate at 99% accuracy for Q&A (or RAG) use cases.
Picture this: You've built a popular AI app, and you're processing a lot of API calls daily. You notice that users often ask similar questions - questions you've already answered before. However, with each call to the LLM, you incur costs for all those redundant tokens, and your users have a sub-par experience.
This is where Semantic Cache comes into play. It allows you to serve a cached response for semantically repeated queries instead of resorting to your AI provider, saving you costs and reducing latency.
To illustrate the range of semantic similarity covered and the time saved, we ran a simple query through GPT-3.5, rephrasing it slightly each time and recording the response time.
Despite variations in the question, Portkey's semantic cache identified the core query each time and provided the answer in a fraction of the initial time.
We also ran the same test on a set of queries that may look similar but require different responses.
While this is a promising start, we are committed to enhancing cache performance further and expanding the breadth of semantic similarity we can cover.
In Q&A scenarios like enterprise search & customer support, we discovered that over 20% of the queries were semantically similar. With Semantic Cache, these requests can be served without the typical inference latency & token cost, offering a potential speed boost of at least 20x at zero cost to you.
📈 Impact of Caching
Let's look at the numbers. Despite recent improvements in GPT-3.5's response times through Portkey, cached responses served from our edge infrastructure are considerably quicker.
In RAG use cases, we observed a cache hit rate ranging from 18% to as high as 60%, delivering accurate results 99% of the time.
If you believe your use case can benefit from semantic cache, consider joining the Portkey Beta.
🧮 Evaluating Cache Effectiveness
Semantic Cache's effectiveness can be evaluated by checking how many similar queries you get over a specified period - that's your expected cache hit rate.
You can use a model-graded eval to test the accuracy of semantic matches at various similarity thresholds. We recommend starting with a 95% confidence threshold and then tuning it until the accuracy is high enough to justify serving the cached response.
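As a rough sketch of what that threshold check looks like in practice (the vectors below are toy stand-ins, not real model embeddings, and the cosine-similarity-at-95% rule is an illustrative simplification):

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_cache_hit(query_vec, cached_vec, threshold=0.95):
    # Only serve the cached response above the confidence threshold.
    return cosine_similarity(query_vec, cached_vec) >= threshold

# Toy vectors standing in for embeddings of two phrasings of one question.
rephrased = [0.90, 0.10, 0.00]
original = [0.88, 0.12, 0.02]
print(is_cache_hit(rephrased, original))   # → True (near-identical)

unrelated = [0.0, 0.0, 1.0]
print(is_cache_hit(rephrased, unrelated))  # → False (orthogonal)
```

Lowering the threshold raises the hit rate but risks serving wrong answers; the model-graded eval tells you where that trade-off sits for your data.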
Here's a simple tool that calculates the money saved on GPT4 calls based on the expected cache hit rate.
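The underlying arithmetic is simple enough to sketch in a few lines of Python. The request volume, token count, and per-token price below are illustrative assumptions, not Portkey or OpenAI figures:

```python
def estimated_savings(monthly_requests, avg_tokens_per_request,
                      price_per_1k_tokens, cache_hit_rate):
    """Tokens that never reach the LLM are pure savings."""
    cached_requests = monthly_requests * cache_hit_rate
    saved_tokens = cached_requests * avg_tokens_per_request
    return saved_tokens / 1000 * price_per_1k_tokens

# 1M requests/month, 500 tokens each, $0.03 per 1K tokens (illustrative),
# at the ~20% hit rate observed in Q&A use cases.
print(round(estimated_savings(1_000_000, 500, 0.03, 0.20), 2))  # → 3000.0
```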
🎯 Our Approach To Semantic Cache
First, we check for exact repeat queries, leveraging Cloudflare’s key-value store for quick and efficient responses.
For unique queries, we apply a vector search on Pinecone, enhanced with a hybrid search that utilizes meta properties inferred from inputs. We also preprocess inputs to remove redundant content, which improves the accuracy of the vector search.
To ensure top-notch results, we run backtests to adjust the vector confidence individually for each customer.
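The two-tier flow above can be sketched roughly as follows. Everything here is a simplified stand-in: a dict replaces Cloudflare's key-value store, a list replaces Pinecone, and the character-histogram `embed` is a placeholder for a real embedding model:

```python
import hashlib
import math

exact_store = {}    # stands in for Cloudflare's KV store: query hash -> response
vector_store = []   # stands in for Pinecone: (embedding, response) pairs

def embed(text):
    # Placeholder embedding: a character histogram. A real system would
    # call an embedding model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def store(query, response):
    key = hashlib.sha256(query.strip().lower().encode()).hexdigest()
    exact_store[key] = response
    vector_store.append((embed(query), response))

def lookup(query, threshold=0.95):
    # Tier 1: exact repeat -> key-value hit, no vector search needed.
    key = hashlib.sha256(query.strip().lower().encode()).hexdigest()
    if key in exact_store:
        return exact_store[key]
    # Tier 2: semantic match via vector similarity above the threshold.
    qv = embed(query)
    best = max(vector_store, key=lambda item: cosine(qv, item[0]), default=None)
    if best and cosine(qv, best[0]) >= threshold:
        return best[1]
    return None  # cache miss: forward the request to the LLM

store("What is semantic caching?", "Serving cached answers for similar queries.")
print(lookup("What is semantic caching?"))   # tier-1 exact hit
print(lookup("what is Semantic Caching ?"))  # tier-2 semantic hit
```

The real pipeline adds the preprocessing and metadata-based hybrid search described above before the vector lookup, which is what keeps accuracy high.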
🧩 Challenges of Using Semantic Cache
Semantic Cache, while efficient, does have some challenges.
For instance, while a regular cache operates at 100% accuracy, a semantic cache can sometimes be incorrect. Embedding the entire prompt for the cache can also sometimes result in lower accuracy.
- We built our cache solution on top of OpenAI embeddings and Pinecone's vector search. We combine it with advanced text manipulation and a hybrid search over metadata inferred from inputs to remove irrelevant parts of the prompt.
- We only return a cache hit if there is >95% confidence in similarity.
- In our tests, users rated the semantic cache accuracy at 99%.
Additionally, improper management could lead to data and prompt leakage.
- We hash the whole request with SHA-256 and run our cache system on top of the hash, so raw prompts are never used as cache keys.
- Each organisation's data, prompts, and responses are encrypted and secured under different namespaces to ensure security.
- Additional metadata stored as part of the vector also mitigates leakage risks.
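A minimal sketch of what per-organisation namespacing on top of SHA-256 hashing looks like (illustrative only; this is not Portkey's internal key scheme):

```python
import hashlib

def cache_key(org_id: str, request_body: str) -> str:
    # Prefix the hash input with the organisation's namespace so that
    # identical requests from different tenants can never collide.
    return hashlib.sha256(f"{org_id}:{request_body}".encode()).hexdigest()

k1 = cache_key("org-a", '{"prompt": "hi"}')
k2 = cache_key("org-b", '{"prompt": "hi"}')
print(k1 != k2)  # → True: same request, different orgs, distinct keys
```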
💡 How to Use Semantic Cache on Portkey
Using Semantic Cache with Portkey is straightforward. When sending a request to your AI provider (for example, OpenAI), just include the header `x-portkey-cache: semantic` in your request. With this single adjustment, the semantic cache will be active for that query.
To refresh the previously stored cache, use the `x-portkey-cache-force-refresh` header and set it to true. This will invalidate the cache and store a new value.
You can also set an age limit for each cache, after which it will be refreshed. Specify the `max-age` in seconds in the `Cache-Control` header. Once the `max-age` is reached, the cache will refresh automatically.
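Putting the three headers together, a request might look like this. Only the header names come from this post; the gateway endpoint, payload shape, and auth details are placeholders, and in practice you would set force-refresh or `max-age` as needed rather than on every call:

```python
import json

# Illustrative header set for a Portkey-proxied completion request.
headers = {
    "Content-Type": "application/json",
    "x-portkey-cache": "semantic",            # enable Semantic Cache
    "x-portkey-cache-force-refresh": "true",  # optional: invalidate & re-store
    "Cache-Control": "max-age=3600",          # optional: expire after 1 hour
}

payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "What is semantic caching?"}],
}

# e.g. requests.post("https://<your-portkey-gateway>/v1/chat/completions",
#                    headers=headers, data=json.dumps(payload))
print(headers["x-portkey-cache"])  # → semantic
```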
In conclusion, the introduction of Semantic Cache not only cuts down latency and costs but also elevates the overall user experience.