# LLM Caching
AutoGen supports caching API requests so that they can be reused when the same request is issued. This is useful for reproducibility and cost savings when repeating or continuing experiments.
Since version 0.2.8, a configurable context manager allows you to easily configure the LLM cache, using either `DiskCache`, `RedisCache`, or Cosmos DB Cache. All agents inside the context manager will use the same cache.
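For example, a minimal sketch of the context manager with a typical two-agent setup; the `assistant`, `user_proxy`, and `coding_task` names are assumed to be defined elsewhere:

```python
from autogen import Cache

# Use Redis as the cache backend; all agents inside the block share it.
with Cache.redis(redis_url="redis://localhost:6379/0") as cache:
    user_proxy.initiate_chat(assistant, message=coding_task, cache=cache)

# Use DiskCache as the cache backend.
with Cache.disk() as cache:
    user_proxy.initiate_chat(assistant, message=coding_task, cache=cache)
```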
The cache can also be passed directly to the model client's `create` call.
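For instance, a sketch at the client level, assuming a `config_list` you have loaded yourself:

```python
from autogen import Cache, OpenAIWrapper

client = OpenAIWrapper(config_list=config_list)  # config_list is assumed to be defined

with Cache.disk() as cache:
    response = client.create(
        messages=[{"role": "user", "content": "Hello"}],
        cache=cache,  # reuse cached responses for identical requests
    )
```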
## Controlling the seed
You can vary the `cache_seed` parameter to get different LLM output while still using the cache.
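For example, a sketch reusing the assumed agents from above; a non-default `cache_seed` selects a separate cache, so previously cached replies are not reused:

```python
from autogen import Cache

# cache_seed=1 uses a different cache from the default (41),
# so you may see different output for the same prompt.
with Cache.disk(cache_seed=1) as cache:
    user_proxy.initiate_chat(assistant, message=coding_task, cache=cache)
```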
## Cache path
By default, `DiskCache` uses `.cache` for storage. To change the cache directory, set `cache_path_root`.
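A sketch, again with the assumed agents and an illustrative path:

```python
from autogen import Cache

# Store the disk cache under /tmp/autogen_cache instead of the default .cache.
with Cache.disk(cache_path_root="/tmp/autogen_cache") as cache:
    user_proxy.initiate_chat(assistant, message=coding_task, cache=cache)
```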
## Disabling cache
For backward compatibility, `DiskCache` is on by default with `cache_seed` set to 41. To disable caching completely, set `cache_seed` to `None` in the `llm_config` of the agent.
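A sketch, assuming a `config_list` loaded elsewhere (e.g. via `config_list_from_json`):

```python
from autogen import AssistantAgent

assistant = AssistantAgent(
    "coding_agent",
    llm_config={
        "cache_seed": None,  # disable AutoGen's request cache
        "config_list": config_list,  # assumed to be defined elsewhere
    },
)
```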
## Difference between `cache_seed` and OpenAI's `seed` parameter
OpenAI v1.1 introduced a new parameter, `seed`. The difference between AutoGen's `cache_seed` and OpenAI's `seed` is that AutoGen uses an explicit request cache, which guarantees that exactly the same output is produced for the same input; when the cache is hit, no OpenAI API call is made at all. OpenAI's `seed`, in contrast, enables best-effort deterministic sampling with no guarantee of determinism. When using OpenAI's `seed` with `cache_seed` set to `None`, an OpenAI API call is made for every request, even for identical inputs, and there is no guarantee of getting exactly the same output.
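If you want OpenAI's best-effort determinism without AutoGen's request cache, one option is a sketch like the following; it assumes your model supports the `seed` parameter and that extra `llm_config` keys are forwarded to the API:

```python
from autogen import AssistantAgent

assistant = AssistantAgent(
    "coding_agent",
    llm_config={
        "cache_seed": None,  # no request cache: every call hits the API
        "seed": 42,  # OpenAI's best-effort deterministic sampling
        "config_list": config_list,  # assumed to be defined elsewhere
    },
)
```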