Teach OpenAI assistants.
Teachability
addresses these
limitations by persisting user teachings across chat boundaries in
long-term memory (a vector database). Memories (called memos) are
created and saved to disk throughout a conversation, then loaded from
disk later. Instead of copying all the memos into the context window,
which would eat up valuable space, individual memos are retrieved into
context only as needed. This allows the user to teach many facts,
preferences and skills to the teachable agent just once, and have it
remember them in later chats.
In making decisions about memo storage and retrieval, Teachability
calls an instance of TextAnalyzerAgent
to analyze pieces of text in
several different ways. This adds extra LLM calls involving a relatively
small number of tokens. These calls can add a few seconds to the time a
user waits for a response.
This notebook demonstrates how Teachability
can be added to instances
of GPTAssistantAgent
so that they can learn facts, preferences, and
skills from users. As explained
here,
each instance of GPTAssistantAgent
wraps an OpenAI Assistant that can
be given a set of tools including functions, code interpreter, and
retrieval. Assistants with these tools are demonstrated in separate
standalone sections below, which can be run independently.
This notebook requires Python>=3.9. To run this example, please install AutoGen with the [teachable] option.
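Assuming the package is published as `pyautogen` (as in AutoGen's installation docs), the install step looks like:

```shell
pip install "pyautogen[teachable]"
```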
The config_list_from_json function loads a list of configurations from an environment variable or a json file.
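As an illustration of what this lookup does, here is a simplified sketch (not autogen's actual implementation): if the argument names an environment variable holding JSON, parse it; otherwise treat it as a path to a JSON file.

```python
import json
import os

def load_config_list(env_or_file):
    """Simplified sketch of config_list_from_json's behavior:
    prefer an environment variable of that name, else read the
    argument as a path to a JSON file."""
    raw = os.environ.get(env_or_file)
    if raw is None:
        with open(env_or_file) as f:
            raw = f.read()
    return json.loads(raw)
```

With the real library, the call would be along the lines of `autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST")`, where each entry in the list supplies at least a model name and an API key.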
First, create the GPTAssistantAgent instance. Then make this ConversableAgent teachable by adding a Teachability object to it.
First, create the GPTAssistantAgent instance. Then make this ConversableAgent teachable by adding a Teachability object to it. Add the id of the uploaded file to the file_ids list in the code below.
First, create the GPTAssistantAgent instance. Then make this ConversableAgent teachable by adding a Teachability object to it.