OpenAI offers functionality for defining the structure of the messages generated by LLMs. AutoGen enables this functionality by propagating the `response_format`, set in the LLM configuration for your agents, to the underlying client.
You can define the JSON structure of the output in the `response_format` field of the LLM configuration. To assist in determining the JSON structure, you can generate a valid schema using `.model_json_schema()` on a predefined Pydantic model (see the Pydantic documentation for more info). Your schema should be OpenAPI specification compliant and have a `title` field defined for the root model, which will be loaded as the `response_format` for the agent.
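For example, here is a minimal sketch of a `MathReasoning` model, used throughout the rest of this page, together with its generated schema (the field names follow OpenAI's structured-outputs math example):

```python
import json

from pydantic import BaseModel


class Step(BaseModel):
    explanation: str
    output: str


class MathReasoning(BaseModel):
    steps: list[Step]
    final_answer: str


# The schema's "title" is the class name of the root model ("MathReasoning"),
# which is what gets loaded as the response_format.
print(json.dumps(MathReasoning.model_json_schema(), indent=2))
```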
For more info on structured outputs, see our
documentation.
Note: If you have been using `autogen` or `ag2`, all you need to do is upgrade it using `pip install -U autogen` or `pip install -U ag2`, as `autogen` and `ag2` are aliases for the same PyPI package. For more information, please refer to the installation guide.
Structured outputs are currently supported for the following model providers:

- OpenAI (`openai`)
- Anthropic (`anthropic`)
- Google Gemini (`google`)
- Ollama (`ollama`)
The `LLMConfig.from_json` method loads a list of configurations from an environment variable or a JSON file.
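As a quick sketch (assuming the `env` and `path` keyword names match the loading behavior described above):

```python
import autogen

# Load the configuration list from a JSON file...
llm_config = autogen.LLMConfig.from_json(path="OAI_CONFIG_LIST")

# ...or from an environment variable containing the same JSON content.
llm_config = autogen.LLMConfig.from_json(env="OAI_CONFIG_LIST")
```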
Here is an example of a configuration using the `gpt-4o-mini` model that will use a `MathReasoning` response format. To use it, paste it into your `OAI_CONFIG_LIST` file and set the `api_key` to your OpenAI API key.
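A sketch of what this can look like; the file contents below follow the standard config-list shape, and passing `response_format` through `from_json` is an assumption based on the description above:

```python
# OAI_CONFIG_LIST file contents (JSON) -- set api_key to your OpenAI API key:
# [
#     {
#         "api_type": "openai",
#         "model": "gpt-4o-mini",
#         "api_key": "<your OpenAI API key>"
#     }
# ]

import autogen

# Load the configuration and attach the MathReasoning Pydantic model
# (defined earlier) as the response format for the agent.
llm_config = autogen.LLMConfig.from_json(
    path="OAI_CONFIG_LIST",
    response_format=MathReasoning,
)
```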
We will use a `UserProxyAgent` to input the math problem and an `AssistantAgent` to solve it. The `AssistantAgent` will be constrained to solving the math problem step-by-step using the `MathReasoning` response format we defined above.
The `response_format` is added to the LLM configuration, and this configuration is then applied to the agent.
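A minimal sketch of the two agents (the agent names and the sample problem are illustrative):

```python
import autogen

# The user proxy only relays the math problem; it needs neither human
# input nor code execution for this demo.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
)

# The llm_config carries the MathReasoning response format, so the
# assistant's replies are constrained to that structure.
assistant = autogen.AssistantAgent(
    name="math_solver",
    llm_config=llm_config,
)

result = user_proxy.initiate_chat(
    assistant,
    message="How can I solve the equation 8x + 7 = -23?",
    max_turns=1,
)
```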
Once the chat completes, the agent's reply will be a JSON string that conforms to the `MathReasoning` model.
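Since the reply is a JSON string matching the schema, it can be validated back into the Pydantic model, for example (assuming the default `last_msg` summary, so that `result.summary` holds the assistant's final message):

```python
# Parse the structured reply back into the MathReasoning model.
reasoning = MathReasoning.model_validate_json(result.summary)

for step in reasoning.steps:
    print(f"{step.explanation} -> {step.output}")
print("Final answer:", reasoning.final_answer)
```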