AssistantAgent is an LLM-based agent that can write Python code (in a
Python coding block) for a user to execute for a given task.
UserProxyAgent is an agent that serves as a proxy for a human user to
execute the code written by AssistantAgent. We create multiple
UserProxyAgent instances to represent different human users.
Requirements
AG2 requires Python>=3.9. To run this notebook example, please
install:
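A minimal sketch of the install step, assuming the package is published on PyPI as ag2 (check the AG2 documentation for the exact package name and extras your setup needs):

```shell
pip install ag2
```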
Set your API Endpoint
The config_list_from_json
function loads a list of configurations from an environment variable or
a JSON file.
It first looks for an environment variable with the specified name
(“OAI_CONFIG_LIST” in this example), whose value needs to be a valid
JSON string. If that variable is not found, it looks for a JSON file
with the same name. It then filters the configs by model (you can
filter by other keys as well).
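The lookup order just described can be sketched as follows. This is an illustrative reimplementation, not the real config_list_from_json, and the helper name load_config_list is made up for this sketch:

```python
import json
import os


def load_config_list(env_or_file, filter_models=None):
    # Illustrative sketch of the lookup order described above:
    # try an environment variable first, then a JSON file with
    # the same name, then filter the entries by model.
    raw = os.environ.get(env_or_file)
    if raw is None:
        with open(env_or_file) as f:
            raw = f.read()
    configs = json.loads(raw)
    if filter_models is not None:
        configs = [c for c in configs if c.get("model") in filter_models]
    return configs


# Example: store the list in an environment variable, then filter by model.
os.environ["OAI_CONFIG_LIST"] = json.dumps(
    [
        {"model": "gpt-4", "api_key": "<key>"},
        {"model": "gpt-3.5-turbo", "api_key": "<key>"},
    ]
)
configs = load_config_list("OAI_CONFIG_LIST", filter_models=["gpt-4"])
```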
The json looks like the following:
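A sketch of the expected shape, with placeholder model names and keys:

```json
[
    {
        "model": "gpt-4",
        "api_key": "<your OpenAI API key here>"
    },
    {
        "model": "gpt-3.5-turbo",
        "api_key": "<your OpenAI API key here>"
    }
]
```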
Construct Agents
We define an ask_expert function to start a conversation between two
agents and return a summary of the result. We construct an assistant
agent named “assistant_for_expert” and a user proxy agent named
“expert”. We specify human_input_mode as “ALWAYS” in the user proxy
agent, so that it always asks for feedback from the expert user.
For the student side, we specify human_input_mode as
“TERMINATE” in the user proxy agent, which will ask for feedback only when it
receives a “TERMINATE” signal from the assistant agent. We set the
functions in AssistantAgent and function_map in UserProxyAgent
so that the assistant can suggest, and the proxy can execute, the created ask_expert function.
For simplicity, the ask_expert function is defined to run locally. For
real applications, the function should run remotely to interact with an
expert user.
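The wiring between functions and function_map described above can be sketched as plain data structures. This is a minimal illustration: the schema fields follow the OpenAI function-calling format, and the ask_expert body here is a stand-in for the real helper, which starts a nested chat with the expert's agents:

```python
def ask_expert(message):
    # Placeholder for the real ask_expert helper, which would start a
    # conversation with the expert's agents and return a summary.
    return f"expert's answer to: {message}"


# Function schema advertised to the AssistantAgent (via its llm_config),
# following the OpenAI function-calling format.
llm_config = {
    "functions": [
        {
            "name": "ask_expert",
            "description": "Ask an expert when the problem cannot be solved satisfactorily.",
            "parameters": {
                "type": "object",
                "properties": {
                    "message": {
                        "type": "string",
                        "description": "The question to ask the expert.",
                    }
                },
                "required": ["message"],
            },
        }
    ],
}

# function_map given to the UserProxyAgent, so it can execute the call
# the assistant suggests by name.
function_map = {"ask_expert": ask_expert}
```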
Perform a task
We invoke the initiate_chat() method of the student proxy agent to
start the conversation. When you run the cell below, you will be
prompted to provide feedback after the assistant agent sends a
“TERMINATE” signal at the end of its message. The conversation will
finish if you don’t provide any feedback (by pressing Enter directly).
Before the “TERMINATE” signal, the student proxy agent will try to
execute the code suggested by the assistant agent on behalf of the user.
If the assistant agent decides it needs the expert's help, it will suggest
a call to ask_expert. When this happens, a line like the following will
be displayed:
***** Suggested function Call: ask_expert *****