Migrating to 0.2
openai v1 is a total rewrite of the library with many breaking changes. For example, inference now requires instantiating a client instead of using a global class method. Therefore, some changes are required for users of `pyautogen<0.2`:
- `api_base` -> `base_url`, `request_timeout` -> `timeout` in `llm_config` and `config_list`. `max_retry_period` and `retry_wait_time` are deprecated; `max_retries` can be set for each client.
- MathChat is unsupported until it is tested in a future release.
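As a sketch, an entry in `config_list` would change along these lines (the model name, key, and URL below are placeholders, not values from this guide):

```python
# Hypothetical config entries illustrating the renames.
old_entry = {  # pyautogen<0.2
    "model": "gpt-4",
    "api_key": "<your key>",                  # placeholder
    "api_base": "https://api.openai.com/v1",
    "request_timeout": 60,
}

new_entry = {  # pyautogen>=0.2
    "model": "gpt-4",
    "api_key": "<your key>",                  # placeholder
    "base_url": "https://api.openai.com/v1",  # renamed from api_base
    "timeout": 60,                            # renamed from request_timeout
    "max_retries": 5,  # per-client; replaces max_retry_period/retry_wait_time
}
```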
- `autogen.Completion` and `autogen.ChatCompletion` are deprecated. The essential functionalities are moved to `autogen.OpenAIWrapper`.
  - Inference parameter tuning and inference logging features are updated: check out the Logging documentation and the Logging example notebook to learn more. Inference parameter tuning can be done via `flaml.tune`.
- `seed` in autogen is renamed to `cache_seed` to accommodate the newly added `seed` param in the openai chat completion api. `use_cache` is removed as a kwarg in `OpenAIWrapper.create()`; caching is now decided automatically by `cache_seed: int | None`. The difference between autogen's `cache_seed` and openai's `seed` is that:
  - autogen uses a local disk cache to guarantee that exactly the same output is produced for the same input; when the cache is hit, no openai api call is made.
  - openai's `seed` is best-effort deterministic sampling with no guarantee of determinism. When using openai's `seed` with `cache_seed` set to None, an openai api call is made even for the same input, and there is no guarantee of getting exactly the same output.