LiteLLM is an open-source, locally run proxy server that provides an OpenAI-compatible API. It interfaces with a large number of providers that perform the inference. Ollama is a popular open-source inference engine that can handle that inference locally.

As not all proxy servers support OpenAI's Function Calling (usable with AutoGen), LiteLLM together with Ollama enables this useful feature.

Running this stack requires the installation of:

  1. AutoGen (installation instructions)
  2. LiteLLM
  3. Ollama

Note: We recommend using a virtual environment for your stack, see this article for guidance.

Installing LiteLLM

Install LiteLLM with the proxy server functionality:

pip install 'litellm[proxy]'

Note: If using Windows, run LiteLLM and Ollama within WSL2.

:::tip
For custom LiteLLM installation instructions, see their [GitHub repository](https://github.com/BerriAI/litellm).
:::

Installing Ollama

For Mac and Windows, download Ollama from https://ollama.com.

For Linux:

curl -fsSL https://ollama.com/install.sh | sh

Downloading models

Ollama has a library of models to choose from; see the list at https://ollama.com/library.

Before you can use a model, you need to download it (using the name of the model from the library):

ollama pull llama3:instruct

To view the models you have downloaded and can use:

ollama list

:::tip
Ollama enables the use of GGUF model files, which are readily available on Hugging Face. See Ollama's [GitHub repository](https://github.com/ollama/ollama) for examples.
:::

Running LiteLLM proxy server

To run LiteLLM with the model you have downloaded, enter the following in your terminal:

litellm --model ollama/llama3:instruct
INFO:     Started server process [19040]
INFO:     Waiting for application startup.

#------------------------------------------------------------#
#                                                            #
#       'This feature doesn't meet my needs because...'       #
#        https://github.com/BerriAI/litellm/issues/new        #
#                                                            #
#------------------------------------------------------------#

 Thank you for using LiteLLM! - Krrish & Ishaan



Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new


INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:4000 (Press CTRL+C to quit)

This runs the proxy server, which will be available at http://0.0.0.0:4000/.
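
To confirm the proxy is reachable, you can send it a test request with the OpenAI Python client. This is a minimal sketch; it assumes the openai package is installed, and the model and api_key values are placeholders because the model was fixed when LiteLLM was started.

from openai import OpenAI

# Point the standard OpenAI client at the local LiteLLM proxy
client = OpenAI(base_url="http://0.0.0.0:4000", api_key="NotRequired")

response = client.chat.completions.create(
    model="NotRequired",  # The model was set when running the LiteLLM command
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response.choices[0].message.content)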

Using LiteLLM+Ollama with AutoGen

Now that we have the URL for the LiteLLM proxy server, you can use it within AutoGen in the same way as OpenAI or other cloud-based endpoints.

As you are running this proxy server locally, no API key is required. Additionally, as the model is set when running the LiteLLM command, no model name needs to be configured in AutoGen. However, model and api_key are mandatory fields for configurations within AutoGen, so we put dummy values in them, as in the example below.

An additional setting for the configuration is price, which can be used to set the pricing of tokens. As we're running locally, we'll set the cost to zero. Using this setting also avoids a prompt being shown when the price can't be determined.

from autogen import ConversableAgent, UserProxyAgent

local_llm_config = {
    "config_list": [
        {
            "model": "NotRequired",  # Loaded with LiteLLM command
            "api_key": "NotRequired",  # Not needed
            "base_url": "http://0.0.0.0:4000",  # Your LiteLLM URL
            "price": [0, 0],  # Put in price per 1K tokens [prompt, response] as free!
        }
    ],
    "cache_seed": None,  # Turns off caching, useful for testing different models
}

# Create the agent that uses the LLM.
assistant = ConversableAgent("agent", llm_config=local_llm_config)

# Create the agent that represents the user in the conversation.
user_proxy = UserProxyAgent("user", code_execution_config=False)

# Let the assistant start the conversation.  It will end when the user types exit.
res = assistant.initiate_chat(user_proxy, message="How can I help you today?")

print(assistant)
agent (to user):

How can I help you today?

--------------------------------------------------------------------------------
user (to agent):

Why is the sky blue?

--------------------------------------------------------------------------------

>>>>>>>> USING AUTO REPLY...
agent (to user):

A classic question!

The sky appears blue because of a phenomenon called scattering. When sunlight enters Earth's atmosphere, it encounters tiny molecules of gases such as nitrogen (N2) and oxygen (O2). These molecules scatter the light in all directions, but they scatter shorter (blue) wavelengths more than longer (red) wavelengths.

This is known as Rayleigh scattering, named after the British physicist Lord Rayleigh, who first described the phenomenon in the late 19th century. As a result of this scattering, the blue light is distributed throughout the atmosphere, giving the sky its blue appearance.

Additionally, when sunlight passes through more dense atmospheric particles like water vapor, pollutants, and dust, it can also be scattered or absorbed, which affects the color we see. For example, during sunrise and sunset, the light has to travel longer distances through the atmosphere, which scatters the shorter wavelengths even more, making the sky appear more red.

So, there you have it! The blue sky is a result of the combination of sunlight, atmospheric gases, and the scattering of light.

How's that? Do you have any other questions or would you like to explore more topics?

--------------------------------------------------------------------------------
user (to agent):

Why is it sometimes red, then?

--------------------------------------------------------------------------------

>>>>>>>> USING AUTO REPLY...
agent (to user):

Excellent follow-up question!

As I mentioned earlier, the color we see in the sky can be affected by the amount and type of particles in the atmosphere. When the sunlight has to travel longer distances through the air, like during sunrise and sunset, it encounters more atmospheric particles that scatter the shorter blue wavelengths even more than the longer red wavelengths.

This is known as Mie scattering, named after the German physicist Gustav Mie. The larger particles, such as water droplets, pollen, and dust, are responsible for this type of scattering. They scatter the shorter blue wavelengths more efficiently than the longer red wavelengths, which is why we often see more red or orange hues during these times.

Additionally, during sunrise and sunset, the sun's rays have to travel through a thicker layer of atmosphere, which contains more particles like water vapor, pollutants, and aerosols. These particles can absorb or scatter certain wavelengths of light, making them appear redder or more orange.

The combination of Mie scattering and absorption by atmospheric particles can create the warm, golden hues we often see during sunrise and sunset. It's a beautiful reminder that the color of our sky is not just a result of the sun itself but also the complex interactions between sunlight, atmosphere, and particles!

Would you like to explore more about the Earth's atmosphere or perhaps learn about other fascinating topics?

--------------------------------------------------------------------------------
<autogen.agentchat.conversable_agent.ConversableAgent object at 0x7fe35da88dd0>

Example with Function Calling

Function calling (aka Tool calling) is a feature of OpenAI’s API that AutoGen, LiteLLM, and Ollama support.

Below is an example of using function calling with LiteLLM and Ollama, based on this currency conversion notebook.

LiteLLM is loaded in the same way as in the previous example, and we'll continue to use Meta's Llama3 model as it is good at constructing the required function-calling messages.

Note: LiteLLM version 1.41.27, or later, is required (to support function calling natively using Ollama).
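
If you are unsure which version is installed, a quick check from Python using the standard library's importlib.metadata looks like this (a small sketch; an equivalent check such as pip show litellm works too):

from importlib.metadata import version

# Should print 1.41.27 or later for native Ollama function calling support
print(version("litellm"))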

In your terminal:

litellm --model ollama/llama3

Then we run our program with function calling.

from typing import Literal

from typing_extensions import Annotated

import autogen

local_llm_config = {
    "config_list": [
        {
            "model": "NotRequired",  # Loaded with LiteLLM command
            "api_key": "NotRequired",  # Not needed
            "base_url": "http://0.0.0.0:4000",  # Your LiteLLM URL
            "price": [0, 0],  # Put in price per 1K tokens [prompt, response] as free!
        }
    ],
    "cache_seed": None,  # Turns off caching, useful for testing different models
}

# Create the agent and include examples of the function calling JSON in the prompt
# to help guide the model
chatbot = autogen.AssistantAgent(
    name="chatbot",
    system_message="""For currency exchange tasks,
        only use the functions you have been provided with.
        If the function has been called previously,
        return only the word 'TERMINATE'.""",
    llm_config=local_llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    is_termination_msg=lambda x: x.get("content", "") and "TERMINATE" in x.get("content", ""),
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
    code_execution_config={"work_dir": "code", "use_docker": False},
)

CurrencySymbol = Literal["USD", "EUR"]

# Define our function that we expect to call


def exchange_rate(base_currency: CurrencySymbol, quote_currency: CurrencySymbol) -> float:
    if base_currency == quote_currency:
        return 1.0
    elif base_currency == "USD" and quote_currency == "EUR":
        return 1 / 1.1
    elif base_currency == "EUR" and quote_currency == "USD":
        return 1.1
    else:
        raise ValueError(f"Unknown currencies {base_currency}, {quote_currency}")


# Register the function with the agent
@user_proxy.register_for_execution()
@chatbot.register_for_llm(description="Currency exchange calculator.")
def currency_calculator(
    base_amount: Annotated[float, "Amount of currency in base_currency"],
    base_currency: Annotated[CurrencySymbol, "Base currency"] = "USD",
    quote_currency: Annotated[CurrencySymbol, "Quote currency"] = "EUR",
) -> str:
    quote_amount = exchange_rate(base_currency, quote_currency) * base_amount
    return f"{format(quote_amount, '.2f')} {quote_currency}"

# Start the conversation
res = user_proxy.initiate_chat(
    chatbot,
    message="How much is 123.45 EUR in USD?",
    summary_method="reflection_with_llm",
)
user_proxy (to chatbot):

How much is 123.45 EUR in USD?

--------------------------------------------------------------------------------
chatbot (to user_proxy):

***** Suggested tool call (call_d9584223-9af0-4526-ad09-856b03487fd5): currency_calculator *****
Arguments: 
{"base_amount": 123.45, "base_currency": "EUR", "quote_currency": "USD"}
************************************************************************************************

--------------------------------------------------------------------------------

>>>>>>>> EXECUTING FUNCTION currency_calculator...
user_proxy (to chatbot):

user_proxy (to chatbot):

***** Response from calling tool (call_d9584223-9af0-4526-ad09-856b03487fd5) *****
135.80 USD
**********************************************************************************

--------------------------------------------------------------------------------
chatbot (to user_proxy):

***** Suggested tool call (call_17b07b4d-629f-4314-8a04-97b1537fa486): currency_calculator *****
Arguments: 
{"base_amount": 123.45, "base_currency": "EUR", "quote_currency": "USD"}
************************************************************************************************

--------------------------------------------------------------------------------

We can see that the currency conversion function was called with the correct values and a result was generated.
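
The returned ChatResult can also be inspected programmatically, for example (assuming the summary and chat_history attributes available in recent AutoGen releases):

# res is the ChatResult returned by initiate_chat above
print(res.summary)       # Summary generated via summary_method="reflection_with_llm"
print(res.chat_history)  # Full message history, including the tool call and its result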

:::tip
Once functions are included in the conversation, it is possible, when using LiteLLM and Ollama, that the model will continue to recommend tool calls (as shown above). This is an area of active development and a native Ollama client for AutoGen is planned for a future release.
:::