Learn how to persist memories across chat sessions using the Teachability capability
Teachability addresses these limitations by persisting user teachings across chat boundaries in long-term memory (a vector database). Memories (called memos) are created and saved to disk throughout a conversation, then loaded from disk later. Instead of copying all the memos into the context window, which would eat up valuable space, individual memos are retrieved into context only as needed. This allows the user to teach many facts, preferences, and skills to the teachable agent just once, and to have it remember them in later chats.
In making decisions about memo storage and retrieval, Teachability
calls an instance of TextAnalyzerAgent
to analyze pieces of text in
several different ways. These extra LLM calls involve a relatively small number of tokens, but they can add a few seconds to the time a user waits for a response.
This notebook demonstrates how Teachability
can be added to an agent
so that it can learn facts, preferences, and skills from users. To chat
with a teachable agent yourself, run
chat_with_teachable_agent.py.
The config_list_from_json function loads a list of LLM configurations from an environment variable or a JSON file.
We start a fresh chat by passing clear_history=True to initiate_chat. At this point, a common LLM-based assistant would
forget everything from the last chat. But a teachable agent can retrieve
memories from its vector DB as needed, allowing it to recall and reason
over things that the user taught it in earlier conversations.