Task Solving with Code Generation, Execution and Debugging
Use conversable large language model (LLM) agents to solve tasks and provide automatic feedback through a comprehensive example of writing, executing, and debugging Python code to compare stock price changes.
In this notebook, we demonstrate how to use AssistantAgent and UserProxyAgent to write code and execute it. Here, AssistantAgent is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. UserProxyAgent is an agent that serves as a proxy for the human user to execute the code written by AssistantAgent, or to execute the code automatically. Depending on the settings of human_input_mode and max_consecutive_auto_reply, the UserProxyAgent either solicits feedback from the human user or returns auto-feedback based on the result of code execution (success or failure and the corresponding outputs) to AssistantAgent. AssistantAgent will debug the code and suggest new code if the result contains an error. The two agents keep communicating with each other until the task is done.
Install the following packages before running the code below:
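The package names below are assumptions based on this example (pyautogen for the agents; matplotlib and yfinance for the stock-price code the assistant generates); adjust them to your environment:

```python
# Assumed package names; pyautogen provides AssistantAgent and UserProxyAgent,
# while matplotlib and yfinance are used by the generated stock-price code.
%pip install pyautogen matplotlib yfinance
```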
For more information, please refer to the installation guide.
Example Task: Check Stock Price Change
In the example below, let’s see how to use the agents in AutoGen to write a Python script and execute it. This process involves constructing an AssistantAgent to serve as the assistant, along with a UserProxyAgent that acts as a proxy for the human user. When constructing the UserProxyAgent in this example, we set human_input_mode to “NEVER”. This means that the UserProxyAgent will not solicit feedback from the human user. It stops replying when the limit defined by max_consecutive_auto_reply is reached, or when is_termination_msg() returns true for the received message.
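A minimal sketch of this setup is shown below; the LLM configuration, termination check, and task message are assumptions rather than the exact code of this notebook:

```python
import autogen
from autogen.coding import LocalCommandLineCodeExecutor

# Assumed LLM configuration; replace with your own model/API-key settings.
config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

# The assistant writes Python code in code blocks for the proxy to run.
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

# The user proxy runs the code automatically and never asks the human for input.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda msg: (msg.get("content") or "").rstrip().endswith("TERMINATE"),
    code_execution_config={"executor": LocalCommandLineCodeExecutor(work_dir="coding")},
)

# The agents keep exchanging messages (code, results, fixes) until the task is done.
chat_res = user_proxy.initiate_chat(
    assistant,
    message="What date is today? Compare the year-to-date gain for META and TESLA.",
)
```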
The example above involves code execution. In AutoGen, code execution is triggered automatically by the UserProxyAgent when it detects an executable code block in a received message and no human user input is provided. Users have the option to specify a different working directory by setting the work_dir argument when constructing a new instance of the LocalCommandLineCodeExecutor. For Docker-based or Jupyter kernel-based code execution, please refer to the Code Executors Tutorial for more information.
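For instance, a minimal sketch of pointing the local executor at a custom working directory (the directory name here is illustrative):

```python
from autogen.coding import LocalCommandLineCodeExecutor

# All generated scripts and their outputs (figures, CSV files) land in work_dir.
executor = LocalCommandLineCodeExecutor(work_dir="coding")
```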
Check chat results
The initiate_chat method returns a ChatResult object, which is a dataclass storing information about the chat. Currently, it includes the following attributes:

- chat_history: a list of the chat history.
- summary: a string summary of the chat. A summary is only available if a summary_method is provided when initiating the chat.
- cost: a tuple of (total_cost, total_actual_cost), where total_cost is a dictionary of cost information, and total_actual_cost is a dictionary of information on the actual incurred cost with cache.
- human_input: a list of strings of human inputs solicited during the chat. (Note that since we are setting human_input_mode to NEVER in this notebook, this list is always empty.)
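For example, assuming the chat above was stored as chat_res, these attributes can be inspected directly:

```python
# chat_res is the ChatResult returned by initiate_chat.
print(chat_res.chat_history)  # list of exchanged messages
print(chat_res.summary)       # available when a summary_method is provided
print(chat_res.cost)          # cost with and without cache
print(chat_res.human_input)   # empty here because human_input_mode="NEVER"
```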
Example Task: Plot Chart
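A sketch of a chat that asks the assistant to produce and save the chart; the prompt wording and file names are assumptions:

```python
# Ask the assistant to fetch the data, plot it, and save both the data and the figure.
chat_res = user_proxy.initiate_chat(
    assistant,
    message=(
        "Plot a chart of META and TESLA stock price change YTD. "
        "Save the data to stock_price_ytd.csv and the plot to stock_price_ytd.png."
    ),
)
```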
Let’s display the generated figure.
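A minimal sketch, assuming the generated script saved the plot under the file name requested above, inside the executor’s working directory:

```python
from IPython.display import Image

# File name and location are assumptions; match them to what the generated code saved.
Image(filename="coding/stock_price_ytd.png")
```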
Let’s display the raw data collected and saved from the previous chat as well.
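Likewise, the saved CSV can be loaded and printed; the path is an assumption:

```python
import pandas as pd

# Path is an assumption; match it to the file the generated code wrote.
print(pd.read_csv("coding/stock_price_ytd.csv"))
```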
Example Task: Use a User-Defined Message Function to Let Agents Analyze Collected Data
Let’s create a user-defined message function to let the agents analyze the raw data and write a blog post. The function is supposed to take sender, recipient, and context as inputs and output a string message. **kwargs from initiate_chat will be used as context. Take the following code as an example: the context includes a field file_name as provided in initiate_chat. In the user-defined message function my_message_generator, we read data from the file specified by this filename.
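A sketch of such a message function and of how it is passed to initiate_chat; the CSV path and the summary prompt are assumptions:

```python
def my_message_generator(sender, recipient, context):
    # The file name arrives via context, i.e. the **kwargs passed to initiate_chat.
    file_name = context.get("file_name")
    try:
        with open(file_name, mode="r", encoding="utf-8") as f:
            file_content = f.read()
    except FileNotFoundError:
        file_content = "No data found."
    return "Analyze the data and write a brief but engaging blog post.\nData:\n" + file_content


# file_name is forwarded to my_message_generator through context.
chat_res = user_proxy.initiate_chat(
    assistant,
    message=my_message_generator,
    file_name="coding/stock_price_ytd.csv",  # assumed path to the data saved earlier
    summary_method="reflection_with_llm",
    summary_args={"summary_prompt": "Return the blog post in Markdown format."},
)
```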
Let’s check the summary of the chat.
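For instance (chat_res is the result of the initiate_chat call above):

```python
print(chat_res.summary)
```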
This is the blog post that the agents generated.
A Comparative Analysis of META and TESLA Stocks in Early 2024
In the first quarter of 2024, the stock market saw some interesting movements in the tech sector. Two companies that stood out during this period were META and TESLA.
META, the social media giant, had an average stock price of 403.53 during this period. The highest it reached was 519.83, while the lowest was 344.47. The standard deviation, a measure of how spread out the prices were, was 100.72.
On the other hand, TESLA, the electric vehicle and clean energy company, had an average stock price of 219.54. The stock reached a high of 248.42 and a low of 171.76. The standard deviation for TESLA was 41.68.
These figures show that both META and TESLA had their ups and downs during this period. However, the higher standard deviation for META indicates that its stock price fluctuated more compared to TESLA.
As we move further into 2024, it will be interesting to see how these trends evolve. Will META and TESLA continue on their current trajectories, or will we see a shift in the market dynamics? Only time will tell.
Let’s check how much the above chat cost.
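Again using the ChatResult object:

```python
print(chat_res.cost)
```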