Prompt Caching
Prompt caching on Anthropic lets you cache individual messages in your request for repeated use. Caching frees up tokens so you can include more context in your prompt, while delivering responses that are significantly faster and cheaper.
You can use this feature on our OpenAI-compliant universal API as well as with our prompt templates.
API Support
Just pass Anthropic’s anthropic-beta header in your request, and set the cache_control param on the relevant message body:
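Here is a minimal sketch of what that request can look like, using the OpenAI Python SDK pointed at Portkey’s OpenAI-compliant gateway. The base URL, provider header, and placeholder keys are assumptions for illustration; check your Portkey dashboard for the exact values to use.

```python
# Minimal sketch, assuming the OpenAI Python SDK is routed through Portkey's
# OpenAI-compliant gateway. Base URL, provider header, and key names below
# are illustrative assumptions, not exact Portkey values.
from openai import OpenAI

client = OpenAI(
    api_key="PORTKEY_API_KEY",              # hypothetical placeholder
    base_url="https://api.portkey.ai/v1",   # assumed Portkey gateway URL
    default_headers={
        "anthropic-beta": "prompt-caching-2024-07-31",  # Anthropic's caching beta header
        "x-portkey-provider": "anthropic",               # assumed provider header
    },
)

response = client.chat.completions.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[
        {
            "role": "system",
            # The long, reusable context is marked with cache_control so it is
            # written to the cache on the first call and read back on later
            # calls within the cache TTL.
            "content": [
                {
                    "type": "text",
                    "text": "<your long system prompt or reference document>",
                    "cache_control": {"type": "ephemeral"},
                }
            ],
        },
        {"role": "user", "content": "Summarize the key points."},
    ],
)
print(response.choices[0].message.content)
```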
Prompt Templates Support
Set any message in your prompt template to be cached by toggling the Cache Control setting in the UI:
Anthropic currently places some restrictions on prompt caching:
- The cache TTL is fixed at 5 minutes and cannot be changed
- The message you are caching must meet a minimum length to enable this feature:
  - 1024 tokens for Claude 3.5 Sonnet and Claude 3 Opus
  - 2048 tokens for Claude 3 Haiku
For more, refer to Anthropic’s prompt caching documentation here.
Seeing Cache Results in Portkey
Portkey automatically calculates the correct pricing for your prompt caching requests and responses based on Anthropic’s calculations here:
In the individual log for any request, you can also check whether your request created a new cache entry or was served from the cache, via two usage parameters:
- cache_creation_input_tokens: Number of tokens written to the cache when creating a new entry.
- cache_read_input_tokens: Number of tokens retrieved from the cache for this request.
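If you inspect the response programmatically, the same fields may appear in the usage block. A hedged sketch continuing the example above; whether these fields are surfaced on the SDK’s usage object in your setup is an assumption, so the code falls back gracefully when they are absent.

```python
# Continuing the request above: inspect the usage block to see whether the
# prompt was written to or served from the cache. Field availability on the
# SDK's usage object is an assumption here.
usage = response.usage.model_dump() if response.usage else {}

created = usage.get("cache_creation_input_tokens", 0)  # tokens written to cache
read = usage.get("cache_read_input_tokens", 0)         # tokens served from cache

if created:
    print(f"New cache entry created: {created} tokens written")
elif read:
    print(f"Served from cache: {read} tokens read")
else:
    print("No cache activity reported for this request")
```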