Use AutoGen in Databricks with DBRX
In March 2024, Databricks released DBRX, a general-purpose LLM that sets a new standard for open LLMs. While it is available as an open-source model on Hugging Face (databricks/dbrx-instruct and databricks/dbrx-base), Databricks customers can also tap into the Foundation Model APIs, which make DBRX available through an OpenAI-compatible, autoscaling REST API.
AutoGen is becoming a popular standard for agent creation. Built to support any "LLM as a service" that implements the OpenAI SDK, it can easily be extended to integrate with powerful open-source models.
This notebook will demonstrate a few basic examples of AutoGen with DBRX, including the use of `AssistantAgent`, `UserProxyAgent`, and `ConversableAgent`. These demos are not intended to be exhaustive - feel free to use them as a base to build upon!
Requirements
AutoGen must be installed on your Databricks cluster, and requires Python>=3.9. This example includes the `%pip` magic command to install: `%pip install pyautogen`, as well as other necessary libraries.
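The install cell might look like the following (pin a version if you need reproducibility):

```python
# Install AutoGen and its dependencies on the cluster
%pip install pyautogen
```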
This code has been tested on:
- Serverless Notebooks (in public preview as of Apr 18, 2024)
- Databricks Runtime 14.3 LTS ML (docs)
This code can run in any Databricks workspace in a region where DBRX is available via pay-per-token APIs (or provisioned throughput). To check whether your region is supported, see Foundation Model Region Availability. The workspace must also be enabled by an admin for Foundation Model APIs (docs).
Tips
- This notebook can be imported from GitHub to a Databricks workspace and run directly. Use sparse checkout mode with git to import only this notebook or the examples directory.
- Databricks recommends using Secrets instead of storing tokens in plain text.
Contributor
tj@databricks.com (GitHub: tj-cycyota)
It is recommended to restart the Python kernel after installs - uncomment and run the below:
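A sketch of that cell; `dbutils` is available by default in Databricks notebooks:

```python
# Uncomment and run after the %pip install so the newly installed
# packages are picked up by a fresh Python process:
# dbutils.library.restartPython()
```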
Set up DBRX config list
See AutoGen docs for more information on the use of `config_list`: LLM Configuration.
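A minimal sketch of such a `config_list`, assuming your workspace URL and personal access token are in the hypothetical environment variables `DATABRICKS_HOST` and `DATABRICKS_TOKEN` (in practice, pull the token from Secrets as recommended above):

```python
import os

# DBRX is served behind an OpenAI-compatible REST API, so AutoGen only
# needs a model name, API key, and base URL pointing at the workspace's
# serving endpoints.
config_list = [
    {
        "model": "databricks-dbrx-instruct",
        "api_key": os.environ["DATABRICKS_TOKEN"],
        "base_url": os.environ["DATABRICKS_HOST"] + "/serving-endpoints",
    }
]
```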
Hello World Example
Our first example will be with a simple `UserProxyAgent` asking a question to an `AssistantAgent`. This is based on the tutorial demo here.

After sending the question and seeing a response, you can type `exit` to end the chat or continue to converse.
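A minimal sketch of this two-agent exchange, reusing the `config_list` defined above (the question is just a placeholder):

```python
import autogen

# The assistant calls DBRX through the config above
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

# The user proxy relays your typed input; no code execution in this example
user_proxy = autogen.UserProxyAgent(
    name="user",
    code_execution_config=False,
    human_input_mode="ALWAYS",  # you are prompted each turn; type "exit" to stop
)

user_proxy.initiate_chat(assistant, message="What is Apache Spark?")
```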
Simple Coding Agent
In this example, we will implement a "coding agent" that can execute code. You will see how this code runs alongside your notebook in your current workspace, taking advantage of the performance benefits of Databricks clusters. This is based on the demo here.
First, set up a directory:
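For example, a sketch using AutoGen's local command-line executor (available in recent pyautogen releases; the directory name `coding` is arbitrary):

```python
from pathlib import Path
from autogen.coding import LocalCommandLineCodeExecutor

# Scratch directory where generated scripts are written and executed
workdir = Path("coding")
workdir.mkdir(exist_ok=True)

code_executor = LocalCommandLineCodeExecutor(work_dir=workdir)
```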
Next, set up our agents and initiate a coding problem. Notice how the `UserProxyAgent` will take advantage of our `code_executor`; after the code is shown on screen, type Return/Enter in the chatbox to have it execute locally on your cluster via the bot's auto-reply.
Note: with generative AI coding assistants, you should always manually read and review the code before executing it yourself, as LLM results are non-deterministic and may lead to unintended consequences.
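A sketch of the setup, reusing `config_list` and `code_executor` from above; the task itself is just an illustrative placeholder:

```python
from autogen import AssistantAgent, UserProxyAgent

# DBRX writes the code...
code_writer = AssistantAgent(
    name="code_writer",
    llm_config={"config_list": config_list},
)

# ...and the user proxy runs it in the working directory. With
# human_input_mode="ALWAYS" you can review each code block first;
# an empty reply (Return/Enter) triggers the auto-reply that executes it.
user_proxy = UserProxyAgent(
    name="user",
    code_execution_config={"executor": code_executor},
    human_input_mode="ALWAYS",
)

user_proxy.initiate_chat(
    code_writer,
    message="Write Python code to count the prime numbers below 100.",
)
```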
We can see the python file that was created in our working directory:
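For example (the exact filename is whatever the agent chose to write):

```python
import os

# List the scripts the executor saved to the scratch directory
print(os.listdir("coding"))
```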
Conversable Bots
We can also implement the two-agent chat pattern using DBRX to “talk to itself” in a teacher/student exchange:
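A sketch of that pattern, again assuming `config_list` from above; the system messages and opening question are illustrative:

```python
from autogen import ConversableAgent

teacher = ConversableAgent(
    name="teacher",
    system_message="You are a patient math teacher. Explain concepts simply.",
    llm_config={"config_list": config_list},
    human_input_mode="NEVER",  # both sides are DBRX; no human in the loop
)

student = ConversableAgent(
    name="student",
    system_message="You are a curious student. Ask short follow-up questions.",
    llm_config={"config_list": config_list},
    human_input_mode="NEVER",
)

# max_turns bounds the back-and-forth so the chat terminates on its own
student.initiate_chat(
    teacher,
    message="How does compound interest work?",
    max_turns=2,
)
```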
Implement Logging Display
It can be useful to display chat logs to the notebook for debugging, and then persist those logs to a Delta table. The following section demonstrates how to extend the default AutoGen logging libraries.
First, we will implement a Python class that extends the capabilities of `autogen.runtime_logging` (docs):
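A minimal sketch of such a class, assuming the SQLite backend of `autogen.runtime_logging` (the `chat_completions` table below is its default schema, but verify against your installed version; the class name is our own):

```python
import sqlite3

import pandas as pd
import autogen.runtime_logging


class DatabricksAutoGenLogger:
    """Wraps autogen.runtime_logging and reads the SQLite log back
    into a pandas DataFrame for display and persistence."""

    def __init__(self, dbname: str = "logs.db"):
        self.dbname = dbname
        self.session_id = None

    def start(self):
        # start() returns a unique session id we can filter on later
        self.session_id = autogen.runtime_logging.start(
            logger_type="sqlite", config={"dbname": self.dbname}
        )

    def stop(self):
        autogen.runtime_logging.stop()

    def to_pandas(self) -> pd.DataFrame:
        # Pull only this session's request/response records
        with sqlite3.connect(self.dbname) as con:
            return pd.read_sql_query(
                "SELECT * FROM chat_completions WHERE session_id = ?",
                con,
                params=[self.session_id],
            )
```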
Let's use the class above on our simplest example. Note the addition of logging `.start()` and `.stop()`, as well as try/except for error handling.
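For example, wrapping the Hello World chat from earlier in the hypothetical wrapper class:

```python
logger = DatabricksAutoGenLogger()
logger.start()

try:
    user_proxy.initiate_chat(assistant, message="What is MLflow?", max_turns=1)
except Exception as e:
    print(f"Stopping logging due to error: {e}")
finally:
    logger.stop()

# display() renders the DataFrame in the notebook output
display(logger.to_pandas())
```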
With this, we have a simple framework to review and persist logs from our chats! Notice that in the `request` field above, we can also see the system prompt for the LLM - this can be useful for prompt engineering as well as debugging.
Note that when you deploy this to Databricks Model Serving, model responses are auto-logged using Lakehouse Monitoring, but the above approach provides a simple mechanism to log chats from the client side.
Let’s now persist these results to a Delta table in Unity Catalog:
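A sketch, assuming you have write access to a Unity Catalog schema (replace the hypothetical three-level name `main.default.autogen_chat_logs` with your own):

```python
# Convert the pandas log DataFrame to Spark and append it to a Delta table
spark_df = spark.createDataFrame(logger.to_pandas())

(
    spark_df.write.format("delta")
    .mode("append")
    .saveAsTable("main.default.autogen_chat_logs")
)
```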
Closing Thoughts
This notebook provides a few basic examples of using AutoGen with DBRX, and we're excited to see how you can use this framework alongside leading open-source LLMs!
Limitations
- Databricks Foundation Model API supports other open-source LLMs (Mixtral, Llama2, etc.), but the above code has not been tested on those.
- As of April 2024, DBRX does not yet support tool/function calling abilities. To discuss this capability further, please reach out to your Databricks representative.