Introduction
This notebook illustrates how to use `TransformMessages` to give any `ConversableAgent` the ability to handle long contexts, sensitive data, and more.
Learn more about configuring LLMs for agents here.
Handling Long Contexts
Imagine a scenario where the LLM generates an extensive amount of text, surpassing the token limit imposed by your API provider. To address this issue, you can leverage `TransformMessages` along with its constituent transformations, `MessageHistoryLimiter` and `MessageTokenLimiter`.
- `MessageHistoryLimiter`: restricts the total number of messages considered as context history. This transform is particularly useful when you want to limit the conversational context to a specific number of recent messages, ensuring efficient processing and response generation.
- `MessageTokenLimiter`: caps the total number of tokens, either on a per-message basis, across the entire context history, or both. This transformation is invaluable when you need to adhere to strict token limits imposed by your API provider, preventing unnecessary costs or errors caused by exceeding the allowed token count. Additionally, a `min_tokens` threshold can be set, so the transformation is applied only once the messages contain at least that many tokens.
Example 1: Limiting number of messages
Let’s take a look at how these transformations affect the messages. Below, we see that applying `MessageHistoryLimiter` limits the context history to the 3 most recent messages.
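The effect of a history limiter with `max_messages=3` can be sketched in plain Python. This is only an illustration of the behavior described above, not autogen's implementation (the real `MessageHistoryLimiter` is constructed from `autogen.agentchat.contrib.capabilities` and applied via the `TransformMessages` capability):

```python
def limit_history(messages, max_messages=3):
    """Keep only the most recent `max_messages` messages
    (a sketch of MessageHistoryLimiter's behavior)."""
    return messages[-max_messages:]

messages = [
    {"role": "user", "content": "hello"},
    {"role": "assistant", "content": "there"},
    {"role": "user", "content": "how"},
    {"role": "assistant", "content": "are you"},
    {"role": "user", "content": "doing?"},
]

limited = limit_history(messages, max_messages=3)
print(limited)  # only the 3 most recent messages survive
```

Note that the transform drops the oldest messages wholesale; it does not summarize or truncate them.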
Example 2: Limiting number of tokens
Now let’s test limiting the number of tokens in messages. Here we limit each message to 3 tokens, which is equivalent to 3 words in this instance. The `min_tokens` threshold is set to 10, meaning the transformation is not applied if the messages contain fewer than 10 tokens in total. This is especially beneficial when the transformation should only kick in after a certain number of tokens has accumulated, such as when approaching the model's context window. An example is provided below.
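The behavior can be sketched as follows. Whitespace-split words stand in for real tokenizer tokens here (an assumption for readability; the actual `MessageTokenLimiter` counts tokens with a model tokenizer), and this is an illustration, not autogen's implementation:

```python
def limit_tokens(messages, max_tokens_per_message=3, min_tokens=10):
    """Sketch of MessageTokenLimiter: truncate each message to
    `max_tokens_per_message` tokens, but only when the total token
    count meets the `min_tokens` threshold."""
    total = sum(len(m["content"].split()) for m in messages)
    if total < min_tokens:
        return messages  # below threshold: leave messages untouched
    return [
        {**m, "content": " ".join(m["content"].split()[:max_tokens_per_message])}
        for m in messages
    ]

short_msgs = [{"role": "user", "content": "hello there"}]
long_msgs = [
    {"role": "user", "content": "this is a fairly long user message"},
    {"role": "assistant", "content": "and an equally long assistant reply"},
]

print(limit_tokens(short_msgs))  # total of 2 tokens < min_tokens=10: unchanged
print(limit_tokens(long_msgs))   # 13 tokens total: each message cut to 3 words
```

The `min_tokens` gate is what prevents the transform from mangling short conversations that fit comfortably within the provider's limit.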
Example 3: Combining transformations
Let’s test these transforms with agents (the upcoming test is replicated from the agentchat_capability_long_context_handling notebook). We will see that the agent without the capability to handle long context runs into an error, while the agent with that capability has no issues.

Handling Sensitive Data
You can use the `MessageTransform` protocol to create custom message
protocol to create custom message
transformations that redact sensitive data from the chat history. This
is particularly useful when you want to ensure that sensitive
information, such as API keys, passwords, or personal data, is not
exposed in the chat history or logs.
Now, we will create a custom message transform to detect any OpenAI API
key and redact it.
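A minimal sketch of such a transform is below. The `sk-` regex is an illustrative assumption about OpenAI key formats, not an official specification, and the class follows the shape of autogen's `MessageTransform` protocol (an `apply_transform(messages)` method, plus a `get_logs(...)` hook in recent versions) without importing autogen, so it runs standalone:

```python
import copy
import re


class APIKeyRedactor:
    """Sketch of a custom MessageTransform that redacts OpenAI-style
    API keys from message contents."""

    # Assumed pattern: "sk-" followed by 20+ alphanumeric characters.
    _KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9]{20,}")

    def apply_transform(self, messages):
        # Work on a copy so the caller's message history is untouched.
        redacted = copy.deepcopy(messages)
        for message in redacted:
            if isinstance(message.get("content"), str):
                message["content"] = self._KEY_PATTERN.sub(
                    "REDACTED", message["content"]
                )
        return redacted

    def get_logs(self, pre_transform_messages, post_transform_messages):
        # Report whether anything changed, per the protocol's logging hook.
        changed = pre_transform_messages != post_transform_messages
        log = "Redacted one or more API keys." if changed else "No keys found."
        return log, changed


messages = [{"role": "user", "content": "my key is sk-" + "a" * 24}]
cleaned = APIKeyRedactor().apply_transform(messages)
print(cleaned[0]["content"])  # "my key is REDACTED"
```

An instance of a transform like this would then be passed to `TransformMessages` alongside the built-in limiters, so redaction happens on every message before it reaches the LLM.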