Auto Generated Agent Chat: Task Solving with Code Generation, Execution, Debugging & Human Feedback
AutoGen offers conversable agents powered by LLMs, tools, or humans, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature here.
In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to solve a challenging math problem with human feedback. Here `AssistantAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent that serves as a proxy for a user to execute the code written by `AssistantAgent`. By setting `human_input_mode` properly, the `UserProxyAgent` can also prompt the user for feedback to `AssistantAgent`. For example, when `human_input_mode` is set to “ALWAYS”, the `UserProxyAgent` will always prompt the user for feedback. When user feedback is provided, the `UserProxyAgent` passes the feedback directly to `AssistantAgent`. When no user feedback is provided, the `UserProxyAgent` instead executes the code written by `AssistantAgent` and returns the execution results (success or failure and corresponding outputs) to `AssistantAgent`.
Requirements
AutoGen requires `Python>=3.9`. To run this notebook example, please install:
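A typical install command for this example, assuming the AutoGen 0.2 package published on PyPI as `pyautogen`:

```
pip install pyautogen
```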
Set your API Endpoint
The `config_list_from_json` function loads a list of configurations from an environment variable or a JSON file.
It first looks for the environment variable “OAI_CONFIG_LIST”, which needs to be a valid JSON string. If that variable is not found, it then looks for a JSON file named “OAI_CONFIG_LIST”. It filters the configs by model (you can filter by other keys as well); only the models with matching names are kept in the list, based on the filter condition.
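A minimal sketch of loading and filtering the config list; the model name in the filter is illustrative:

```python
import autogen

# Load configs from the OAI_CONFIG_LIST environment variable or file,
# keeping only entries whose model name matches the filter.
config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model": ["gpt-4"]},
)
```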
The config list looks like the following:
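For example (the model names are illustrative and the API keys are placeholders):

```json
[
    {
        "model": "gpt-4",
        "api_key": "<your OpenAI API key here>"
    },
    {
        "model": "gpt-3.5-turbo",
        "api_key": "<your OpenAI API key here>"
    }
]
```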
You can set the value of `config_list` in any way you prefer. Please refer to this notebook for full code examples of the different methods.
Construct Agents
We construct the assistant agent and the user proxy agent.
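A minimal sketch of the two agents; the termination check and code-execution settings shown here are common choices, not necessarily the notebook's exact values:

```python
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="ALWAYS",  # always prompt the human for feedback first
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "coding",  # directory where generated code is executed
        "use_docker": False,   # set to True to sandbox execution in Docker
    },
)
```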
Perform a task
We invoke the `initiate_chat()` method of the user proxy agent to start the conversation. When you run the cell below, you will be prompted to provide feedback after receiving a message from the assistant agent. If you don’t provide any feedback (by pressing Enter directly), the user proxy agent will try to execute the code suggested by the assistant agent on your behalf, or terminate if the assistant agent sends a “TERMINATE” signal at the end of the message.
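A sketch of starting the conversation; the problem statement below is a stand-in for the notebook's own math problem:

```python
# The message content is illustrative; substitute your own task.
math_problem = (
    "Find all real x that satisfy x**3 - 4*x**2 + 6*x - 24 = 0, "
    "and verify each solution numerically."
)
user_proxy.initiate_chat(assistant, message=math_problem)
```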
Analyze the conversation
The human user can provide feedback at each step. Whenever the human user skipped feedback, the code was executed; the execution results and error messages were returned to the assistant, which was able to modify the code based on that feedback. In the end, the task was completed and the assistant sent a “TERMINATE” signal. The user skipped feedback one final time, and the conversation finished.
After the conversation is finished, we can save the conversation between the two agents. The conversation can be accessed from `user_proxy.chat_messages`.
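For example, using the standard `json` module (the output filename is arbitrary):

```python
import json

# user_proxy.chat_messages is a dict mapping each peer agent to the
# list of messages exchanged with it; save the thread with the assistant.
with open("conversations.json", "w") as f:
    json.dump(user_proxy.chat_messages[assistant], f, indent=2)
```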