`summary_method` argument is specified.
Learn more about the various ways to configure LLM endpoints here.
Example Tasks
Below are four example tasks, with each task being a string of text describing the request. The completion of later tasks requires or benefits from the results of prerequisite tasks.

Scenario 1: Solve the tasks with a series of chats
The `initiate_chats` interface can take a list of dictionaries as input. Each dictionary supports the following fields:

- `message`: a string of text (typically a message containing the task);
- `recipient`: a conversable agent dedicated to the task;
- `summary_method`: a string specifying the method used to get a summary from the chat. Currently supported choices include `last_msg`, which takes the last message from the chat history as the summary, and `reflection_with_llm`, which uses an LLM call to reflect on the chat history and summarize a takeaway;
- `summary_prompt`: a string specifying how to instruct an LLM-backed agent (either the recipient or the sender in the chat) to reflect on the chat history and derive a summary. If not otherwise specified, a default prompt is used when `summary_method` is `reflection_with_llm`: "Summarize the takeaway from the conversation. Do not add any introductory phrases. If the intended request is NOT properly addressed, please point it out.";
- `carryover`: a string or a list of strings specifying additional context to be used in the chat. With `initiate_chats`, summaries from previous chats are added as carryover; they are appended after the carryover provided by the user.
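The fields above can be illustrated with a plain list of dictionaries. This is a minimal sketch: the task strings and carryover text are hypothetical, and `recipient` is left as a placeholder comment because constructing a real agent requires the `autogen` package and an LLM endpoint.

```python
# Hypothetical task strings for the scenario (illustrative only).
financial_tasks = [
    "What are the current stock prices of NVDA and TSLA?",
    "Investigate possible reasons for the recent stock performance.",
]

# Each dictionary follows the fields described above.
chat_list = [
    {
        # "recipient" would be a ConversableAgent in real usage.
        "message": financial_tasks[0],
        "summary_method": "last_msg",
    },
    {
        "message": financial_tasks[1],
        "summary_method": "reflection_with_llm",
        # A custom summary_prompt (the default shown above is used otherwise).
        "summary_prompt": "Summarize the takeaway from the conversation. "
                          "Do not add any introductory phrases.",
        # Extra user-provided context; summaries of earlier chats are
        # appended automatically after this carryover.
        "carryover": "Assume today is the first trading day of the month.",
    },
]

# In real usage: user_proxy.initiate_chats(chat_list)
```

Note that fields like `summary_prompt` and `carryover` are optional; later dictionaries in the list automatically receive the summaries of earlier chats as additional carryover.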
Check chat results
The `initiate_chat` method returns a `ChatResult` object, which is a dataclass storing information about the chat. Currently, it includes the following attributes:

- `chat_history`: a list of the messages in the chat history.
- `summary`: a string summarizing the chat. A summary is only available if a `summary_method` is provided when initiating the chat.
- `cost`: a tuple of (total_cost, total_actual_cost), where total_cost is a dictionary of cost information, and total_actual_cost is a dictionary of information on the actual cost incurred with cache.
- `human_input`: a list of strings of human inputs solicited during the chat. (Note that since we set `human_input_mode` to `NEVER` in this notebook, this list is always empty.)
Scenario 2: With human inputs revising tasks in the middle
Since AutoGen agents support soliciting human inputs during a chat if `human_input_mode` is specified properly, the actual task might be revised in the middle of a chat.
The example below showcases that even if a task is revised in the middle (for the first task, the human user requests to get Microsoft's stock price information as well, in addition to NVDA and TSLA), the `reflection_with_llm` summary method can still capture it, as it reflects on the whole conversation instead of just the original request.
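Why reflecting on the whole conversation matters can be shown with a toy sketch. The message contents below are hypothetical, and the two functions only mimic what each summary method looks at; the real `reflection_with_llm` additionally makes an LLM call over the history it sees.

```python
# Toy chat history containing a mid-chat revision (hypothetical content).
history = [
    {"role": "user", "content": "Get the stock prices of NVDA and TSLA."},
    {"role": "user", "content": "Please also include Microsoft's stock price."},
    {"role": "assistant", "content": "Here are the prices for NVDA, TSLA and MSFT: ..."},
]

def last_msg_summary(chat):
    # Mimics 'last_msg': the summary is just the final message's content.
    return chat[-1]["content"]

def reflection_llm_input(chat):
    # 'reflection_with_llm' feeds the WHOLE history to an LLM, so the
    # revised request (Microsoft) is visible to the summarizer.
    return "\n".join(m["content"] for m in chat)
```

Because the reflection input contains every message, the mid-chat revision asking for Microsoft's price is available to the summarizing LLM even though it was not part of the original task string.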