vLLM
vLLM is a locally run proxy and inference server that provides an OpenAI-compatible API. As it performs both roles, you don't need to install a separate inference server.
Note: vLLM does not currently support OpenAI's function calling (which AutoGen can use). However, support is in development and may be available by the time you read this.
Running this stack requires the installation of:
- AutoGen (installation instructions)
- vLLM
Note: We recommend using a virtual environment for your stack, see this article for guidance.
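For example, a setup along these lines could be used (the pyautogen package name is an assumption; follow the AutoGen installation instructions for your version):

```bash
# Create and activate a virtual environment, then install AutoGen.
python3 -m venv autogen-vllm
source autogen-vllm/bin/activate
pip install pyautogen   # package name assumed; check the AutoGen install docs
```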
Installing vLLM
In your terminal:
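```bash
# Install vLLM (the default build targets CUDA GPUs; see the vLLM docs
# for CPU or other backends).
pip install vllm
```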
Choosing models
vLLM will download new models when you run the server.
The models are sourced from Hugging Face; a filtered list of Text Generation models is available here, and vLLM maintains a list of commonly used models.
Use the full model name, e.g. mistralai/Mistral-7B-Instruct-v0.2.
Chat Template
vLLM uses a pre-defined chat template, unless the model has a chat template defined in its config file on Hugging Face.
This can cause an issue if the chat template doesn't allow 'role': 'system' messages, as used in AutoGen.
Therefore, we will create a chat template for the Mistral AI Mistral 7B model we are using that allows roles of 'user', 'assistant', and 'system'.
Create a file named autogenmistraltemplate.jinja with the following content:
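A template along these lines should work; it treats 'system' messages the same way as 'user' messages in Mistral's [INST] prompt format (this is a sketch, so adjust it if your model expects a different format):

```jinja
{# Chat template that accepts system, user and assistant roles.      #}
{# System messages are wrapped in [INST] ... [/INST] just like user  #}
{# messages, matching Mistral's instruction format.                  #}
{{ bos_token }}
{% for message in messages %}
    {% if (message['role'] == 'user' or message['role'] == 'system') %}
        {{ '[INST] ' + message['content'] + ' [/INST]' }}
    {% elif message['role'] == 'assistant' %}
        {{ message['content'] + eos_token }}
    {% else %}
        {{ raise_exception('Only system, user and assistant roles are supported.') }}
    {% endif %}
{% endfor %}
```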
Running vLLM proxy server
To run vLLM with the chosen model and our chat template, run the following in your terminal:
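```bash
# Serve the model through vLLM's OpenAI-compatible API server, using our
# custom chat template. Flag names match recent vLLM releases; run with
# --help to confirm for your version.
python -m vllm.entrypoints.openai.api_server \
    --model mistralai/Mistral-7B-Instruct-v0.2 \
    --chat-template autogenmistraltemplate.jinja
```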
By default, vLLM will run on ‘http://0.0.0.0:8000’.
Using vLLM with AutoGen
Now that we have the URL for the vLLM proxy server, you can use it within AutoGen in the same way as OpenAI or cloud-based proxy servers.
As you are running this proxy server locally, no API key is required. However, as api_key is a mandatory field in AutoGen configurations, we put a dummy value in it, as per the example below.
Although we specify the model when running the vLLM command, we must still put it in the model field of the configuration so it matches what vLLM is serving.
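A minimal sketch, assuming a recent pyautogen release where base_url is the endpoint field (older releases used api_base):

```python
from autogen import ConversableAgent, UserProxyAgent

local_llm_config = {
    "config_list": [
        {
            # Must match the model passed to vLLM on the command line.
            "model": "mistralai/Mistral-7B-Instruct-v0.2",
            # No key is needed for a local server, but the field is mandatory.
            "api_key": "NotRequired",
            # vLLM's OpenAI-compatible endpoint, with '/v1' appended.
            "base_url": "http://0.0.0.0:8000/v1",
        }
    ],
    "cache_seed": None,  # Disable caching while experimenting with models.
}

# Agent that talks to the local model via vLLM.
assistant = ConversableAgent("agent", llm_config=local_llm_config)

# Agent that represents the human user; type 'exit' to end the chat.
user_proxy = UserProxyAgent("user", code_execution_config=False)

# Start the conversation.
assistant.initiate_chat(user_proxy, message="How can I help you today?")
```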
Output: