Using Transform Messages during Speaker Selection
When using “auto” mode for speaker selection in group chats, a nested-chat is used to determine the next speaker. This nested-chat includes all of the group chat’s messages, which can give the LLM a lot of content to process when determining the next speaker. As conversations progress, it can be challenging to keep the context length within the LLM’s workable window. Furthermore, reducing the overall number of tokens improves inference time and reduces token costs.
Using Transform Messages, you gain control over which messages are used for speaker selection, as well as the context length within each message and overall.
All the transforms available for Transform Messages can be applied to the speaker selection nested-chat, such as the MessageHistoryLimiter, MessageTokenLimiter, and TextMessageCompressor.
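For orientation, here is a minimal sketch of importing and instantiating two of these transforms. The import path follows AutoGen's contrib capabilities package; verify it against your installed version.

```python
# A minimal sketch, assuming the transforms live in autogen's contrib
# capabilities package (verify the import path against your version).
from autogen.agentchat.contrib.capabilities import transforms

# Keep only the most recent messages when selecting the next speaker.
history_limiter = transforms.MessageHistoryLimiter(max_messages=4)

# Cap the token count, both per message and across all messages.
token_limiter = transforms.MessageTokenLimiter(max_tokens=3000, max_tokens_per_message=500)
```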
How do I apply them?
When instantiating your GroupChat object, all you need to do is assign a TransformMessages object to the select_speaker_transform_messages parameter, and the transforms within it will be applied to the nested speaker selection chats. Because you are passing in a TransformMessages object, multiple transforms can be applied to that nested chat.
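The sketch below illustrates this wiring; the agent names and LLM configuration are hypothetical placeholders, and the transform limits are arbitrary.

```python
import os

import autogen
from autogen.agentchat.contrib.capabilities import transform_messages, transforms

# Hypothetical LLM configuration and agents, for illustration only.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": os.environ.get("OPENAI_API_KEY")}]}

planner = autogen.ConversableAgent(name="planner", llm_config=llm_config)
coder = autogen.ConversableAgent(name="coder", llm_config=llm_config)
reviewer = autogen.ConversableAgent(name="reviewer", llm_config=llm_config)

# Group the transforms that should run on the nested speaker selection chat.
select_speaker_transforms = transform_messages.TransformMessages(
    transforms=[
        transforms.MessageHistoryLimiter(max_messages=10),
        transforms.MessageTokenLimiter(max_tokens=3000, max_tokens_per_message=500),
    ]
)

# Assign the TransformMessages object to select_speaker_transform_messages.
groupchat = autogen.GroupChat(
    agents=[planner, coder, reviewer],
    messages=[],
    max_round=8,
    speaker_selection_method="auto",
    select_speaker_transform_messages=select_speaker_transforms,
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)
```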
As part of the nested-chat, an agent called ‘checking_agent’ is used to direct the LLM on selecting the next speaker. It is preferable to avoid compressing or truncating the content from this agent. How this is done is shown in the second-to-last example.
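As a quick sketch of the idea, a transform’s filter arguments can be used to leave messages from ‘checking_agent’ untouched; treat the exact argument names (filter_dict, exclude_filter) as assumptions to confirm against your installed version.

```python
from autogen.agentchat.contrib.capabilities import transforms

# Sketch: exclude the checking_agent's guidance message from token truncation
# by matching on the message's name field and excluding the matches
# (assumes the filter_dict / exclude_filter arguments on the transform).
token_limiter = transforms.MessageTokenLimiter(
    max_tokens=3000,
    max_tokens_per_message=500,
    filter_dict={"name": ["checking_agent"]},
    exclude_filter=True,
)
```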
Creating transforms for speaker selection in a GroupChat
We will progressively create a TransformMessages object to show how you can build up transforms for speaker selection.
Each iteration will replace the previous one, enabling you to use the code in each cell as is.
Importantly, transforms are applied in the order in which they appear in the transforms list.
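As a small illustration of that ordering (the specific limits here are arbitrary):

```python
from autogen.agentchat.contrib.capabilities import transform_messages, transforms

# Order matters: the history limiter runs first, keeping only the last 10
# messages; the token limiter then trims whatever survives that first pass.
ordered_transforms = transform_messages.TransformMessages(
    transforms=[
        transforms.MessageHistoryLimiter(max_messages=10),  # applied first
        transforms.MessageTokenLimiter(max_tokens=3000),    # applied second
    ]
)
```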