Auto Generated Agent Chat: Collaborative Task Solving with Multiple Agents and Human Users
Involve multiple human users via function calls and nested chat.
AG2 offers conversable agents powered by LLMs, tools, or humans, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature here.
In this notebook, we demonstrate an application in which multiple agents and human users work together to accomplish a task. AssistantAgent is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. UserProxyAgent is an agent that serves as a proxy for a user to execute the code written by AssistantAgent. We create multiple UserProxyAgent instances that can represent different human users.
Requirements
AG2 requires Python>=3.9. To run this notebook example, please install:
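A minimal install sketch (assuming the ag2 package from PyPI; the openai extra name is an assumption and pulls in the OpenAI client):

```
pip install ag2[openai]
```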
Set your API Endpoint
The config_list_from_json function loads a list of configurations from an environment variable or a json file.
It first looks for an environment variable of a specified name (“OAI_CONFIG_LIST” in this example), which needs to be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by models (you can filter by other keys as well).
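A minimal sketch of loading the configs (the model names in filter_dict are illustrative placeholders):

```python
import autogen

# Load configurations from the OAI_CONFIG_LIST environment variable or json
# file, keeping only the entries whose model matches the filter.
config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model": ["gpt-4", "gpt-4-32k"]},
)
```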
The json looks like the following:
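For example (the api_key values are placeholders, and the second entry shows optional Azure OpenAI fields you may not need):

```json
[
    {
        "model": "gpt-4",
        "api_key": "<your OpenAI API key here>"
    },
    {
        "model": "gpt-4",
        "api_key": "<your Azure OpenAI API key here>",
        "base_url": "<your Azure OpenAI endpoint here>",
        "api_type": "azure",
        "api_version": "2024-02-01"
    }
]
```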
You can set the value of config_list in any way you prefer. Please refer to this User Guide for full code examples of the different methods.
Construct Agents
We define the ask_expert function to start a conversation between two agents and return a summary of the result. We construct an assistant agent named “assistant_for_expert” and a user proxy agent named “expert”. We specify human_input_mode as “ALWAYS” in the user proxy agent, so that it always asks for feedback from the expert user.
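A minimal sketch of this setup, assuming config_list has been loaded as shown earlier (the llm_config and code_execution_config values are illustrative):

```python
import autogen

def ask_expert(message):
    # Assistant on the expert side that drafts answers for the expert to review.
    assistant_for_expert = autogen.AssistantAgent(
        name="assistant_for_expert",
        llm_config={"config_list": config_list},
    )
    # Expert user proxy: human_input_mode="ALWAYS" means every turn asks the
    # human expert for feedback.
    expert = autogen.UserProxyAgent(
        name="expert",
        human_input_mode="ALWAYS",
        code_execution_config={"work_dir": "expert", "use_docker": False},
    )

    expert.initiate_chat(assistant_for_expert, message=message)
    # Return the last message on the expert side as a summary of the result.
    return expert.last_message()["content"]
```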
We construct another assistant agent named “assistant_for_student” and a user proxy agent named “student”. We specify human_input_mode as “TERMINATE” in the user proxy agent, which will ask for feedback when it receives a “TERMINATE” signal from the assistant agent. We set the functions in AssistantAgent and function_map in UserProxyAgent to use the created ask_expert function.
For simplicity, the ask_expert function is defined to run locally. For real applications, the function should run remotely to interact with an expert user.
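Under the same assumptions, the student-side agents can be wired up roughly as follows; the function schema in llm_config lets the assistant suggest ask_expert calls, and function_map lets the student proxy execute them (the schema wording and auto-reply limit are illustrative):

```python
import autogen

# Student-side assistant: the function schema tells it that it may call
# ask_expert when it cannot solve the problem satisfactorily on its own.
assistant_for_student = autogen.AssistantAgent(
    name="assistant_for_student",
    llm_config={
        "config_list": config_list,
        "functions": [
            {
                "name": "ask_expert",
                "description": "Ask an expert when you cannot solve the problem satisfactorily.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "message": {
                            "type": "string",
                            "description": "Question to ask the expert, with enough context.",
                        }
                    },
                    "required": ["message"],
                },
            }
        ],
    },
)

# Student user proxy: executes suggested code and the ask_expert function on
# behalf of the student; "TERMINATE" mode only asks the human for input when
# the assistant signals it is done.
student = autogen.UserProxyAgent(
    name="student",
    human_input_mode="TERMINATE",
    max_consecutive_auto_reply=10,
    code_execution_config={"work_dir": "student", "use_docker": False},
    function_map={"ask_expert": ask_expert},
)
```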
Perform a task
We invoke the initiate_chat() method of the student proxy agent to start the conversation. When you run the cell below, you will be prompted to provide feedback after the assistant agent sends a “TERMINATE” signal at the end of the message. The conversation will finish if you don’t provide any feedback (by pressing Enter directly). Before the “TERMINATE” signal, the student proxy agent will try to execute the code suggested by the assistant agent on behalf of the user.
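A minimal sketch of starting the task (the message is a placeholder for your own problem statement):

```python
# Start the conversation between the student proxy and its assistant.
student.initiate_chat(
    assistant_for_student,
    message="<describe the task you want the agents to solve here>",
)
```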
When the assistant needs to consult the expert, it suggests a function call to ask_expert. When this happens, a line like the following will be displayed:
***** Suggested function Call: ask_expert *****