LLM providers offer functionality for defining the structure of the messages generated by LLMs. AG2 enables this functionality by propagating a `response_format`, set in the LLM configuration for your agents, to the underlying client.
Structured outputs are available for a number of model providers; see the Supported model providers section below. In this example we will use OpenAI as the model provider.
To get started, install AG2 with the OpenAI extra: `pip install -U ag2[openai]`. For more information, please refer to the installation guide.

Note: If you have been using `autogen` or `ag2`, all you need to do is upgrade it using `pip install -U autogen` or `pip install -U ag2`, as `autogen` and `ag2` are aliases for the same PyPI package.
Supported model providers:

- OpenAI (`openai`)
- Anthropic (`anthropic`)
- Google (`google`)
- Ollama (`ollama`)
The `LLMConfig.from_json` function loads a list of configurations from an environment variable or a JSON file. Note that OpenAI's structured outputs are supported starting with gpt-4o-mini-2024-07-18 and gpt-4o-2024-08-06.
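As a minimal sketch, assuming an `OAI_CONFIG_LIST` JSON file and the current `LLMConfig` API, loading the configuration could look like this:

```python
from autogen import LLMConfig

# Load a list of model configurations from a JSON file; from_json can
# also read the same JSON from an environment variable via env="...".
llm_config = LLMConfig.from_json(path="OAI_CONFIG_LIST")
```

The `MathReasoning` response format used in the rest of this example is a Pydantic model. Mirroring the math-reasoning example from OpenAI's structured outputs guide, it could be defined as:

```python
from pydantic import BaseModel

class Step(BaseModel):
    explanation: str
    output: str

class MathReasoning(BaseModel):
    steps: list[Step]
    final_answer: str
```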
We will use a `UserProxyAgent` to input the math problem and an `AssistantAgent` to solve it. The `AssistantAgent` will be constrained to solving the math problem step-by-step by using the `MathReasoning` response format we defined above.
The response_format
is added to the LLM configuration and then this
configuration is applied to the agent.
MathReasoning
model.
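A sketch of the full setup, assuming the `MathReasoning` model above and that `response_format` can be passed through `LLMConfig.from_json` (the agent names and the sample equation are illustrative):

```python
from autogen import AssistantAgent, LLMConfig, UserProxyAgent

# Attach the response format to the loaded configuration.
llm_config = LLMConfig.from_json(path="OAI_CONFIG_LIST", response_format=MathReasoning)

user_proxy = UserProxyAgent(
    name="User_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
)
assistant = AssistantAgent(name="Math_solver", llm_config=llm_config)

# The assistant's reply is the JSON representation of a MathReasoning instance.
user_proxy.initiate_chat(assistant, message="how can I solve 8x + 7 = -23", max_turns=1)
```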
When defining a `response_format`, you have the flexibility to customize how the output is parsed and presented, making it more user-friendly. To demonstrate this, we'll add a `format` method to our `MathReasoning` model. This method will define the logic for transforming the raw JSON response into a more human-readable and accessible format.
Let's redefine the `MathReasoning` model to include a `format` method. This method will allow the underlying client to parse the return value from the LLM into a more human-readable format. If the `format` method is not defined, the client will default to returning the model's JSON representation, as demonstrated in the previous example.
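A sketch of the redefined model; the exact layout produced by `format` is a presentation choice, not mandated by the client:

```python
from pydantic import BaseModel

class Step(BaseModel):
    explanation: str
    output: str

class MathReasoning(BaseModel):
    steps: list[Step]
    final_answer: str

    def format(self) -> str:
        # Transform the raw structured response into human-readable text.
        steps_output = "\n".join(
            f"Step {i + 1}: {step.explanation}\n  Output: {step.output}"
            for i, step in enumerate(self.steps)
        )
        return f"{steps_output}\n\nFinal answer: {self.final_answer}"
```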
Let's now repeat the chat from above. This time, the format of the response is defined by the `MathReasoning.format` method.
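Rebuilding the agent with the redefined model and repeating the chat (reusing the names from the sketch above) could look like:

```python
# Pick up the redefined MathReasoning model.
llm_config = LLMConfig.from_json(path="OAI_CONFIG_LIST", response_format=MathReasoning)
assistant = AssistantAgent(name="Math_solver", llm_config=llm_config)

# The reply is now rendered by MathReasoning.format instead of raw JSON.
user_proxy.initiate_chat(assistant, message="how can I solve 8x + 7 = -23", max_turns=1)
```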