OpenAI Assistants in AutoGen
This notebook shows a very basic example of the `GPTAssistantAgent`, an experimental AutoGen agent class that leverages the OpenAI Assistant API for conversational capabilities, working with the `UserProxyAgent` in AutoGen.
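The setup below loads the model configuration with `config_list_from_json` from a file (or environment variable) named `OAI_CONFIG_LIST`. If you prefer, the same configuration can be defined inline as a plain Python list; a minimal sketch with placeholder values (the model name matches the one reported in the output further down):

```python
# Equivalent inline configuration (placeholder values; normally kept in OAI_CONFIG_LIST)
config_list = [
    {
        "model": "gpt-4-turbo-preview",
        "api_key": "sk-...",  # your OpenAI API key
    }
]
llm_config = {"config_list": config_list}
```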
```python
import logging
import os

from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent

logger = logging.getLogger(__name__)
logger.setLevel(logging.WARNING)

assistant_id = os.environ.get("ASSISTANT_ID", None)
config_list = config_list_from_json("OAI_CONFIG_LIST")
llm_config = {"config_list": config_list}
assistant_config = {"assistant_id": assistant_id}

gpt_assistant = GPTAssistantAgent(
    name="assistant",
    instructions=AssistantAgent.DEFAULT_SYSTEM_MESSAGE,
    llm_config=llm_config,
    assistant_config=assistant_config,
)

user_proxy = UserProxyAgent(
    name="user_proxy",
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False,  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
    },
    is_termination_msg=lambda msg: "TERMINATE" in msg["content"],
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
)
```
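The `code_execution_config` above runs generated code directly on the host. If Docker is available, it is safer to let the executor run inside a container; a minimal variant of the same setting (assuming a local Docker daemon is running):

```python
# Safer variant: execute generated code inside a Docker container
code_execution_config = {
    "work_dir": "coding",
    "use_docker": True,  # requires a running Docker daemon
}
```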
user_proxy.initiate_chat(gpt_assistant, message="Print hello world")
OpenAI client config of GPTAssistantAgent(assistant) - model: gpt-4-turbo-preview
GPT Assistant only supports one OpenAI client. Using the first client in the list.
No matching assistant found, creating a new assistant
user_proxy (to assistant):
Print hello world
--------------------------------------------------------------------------------
assistant (to user_proxy):
```python
print("Hello, world!")
```
--------------------------------------------------------------------------------
>>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...
user_proxy (to assistant):
exitcode: 0 (execution succeeded)
Code output:
Hello, world!
--------------------------------------------------------------------------------
assistant (to user_proxy):
TERMINATE
--------------------------------------------------------------------------------
ChatResult(chat_id=None, chat_history=[{'content': 'Print hello world', 'role': 'assistant'}, {'content': '```python\nprint("Hello, world!")\n```\n', 'role': 'user'}, {'content': 'exitcode: 0 (execution succeeded)\nCode output: \nHello, world!\n', 'role': 'assistant'}, {'content': 'TERMINATE\n', 'role': 'user'}], summary='\n', cost=({'total_cost': 0}, {'total_cost': 0}), human_input=[])
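`initiate_chat` returns the `ChatResult` shown above. A minimal sketch of inspecting it programmatically (field names taken from the printed result; assign the return value to a variable first):

```python
chat_result = user_proxy.initiate_chat(gpt_assistant, message="Print hello world")

print(chat_result.summary)  # summary string of the conversation
print(chat_result.cost)     # cost information, e.g. ({'total_cost': 0}, {'total_cost': 0})
for message in chat_result.chat_history:
    # each entry is a dict with 'content' and 'role' keys, as shown above
    print(f"{message['role']}: {message['content']!r}")
```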
user_proxy.initiate_chat(gpt_assistant, message="Write py code to eval 2 + 2", clear_history=True)
user_proxy (to assistant):
Write py code to eval 2 + 2
--------------------------------------------------------------------------------
assistant (to user_proxy):
```python
# Calculate 2+2 and print the result
result = 2 + 2
print(result)
```
--------------------------------------------------------------------------------
>>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...
user_proxy (to assistant):
exitcode: 0 (execution succeeded)
Code output:
4
--------------------------------------------------------------------------------
assistant (to user_proxy):
The Python code successfully calculated \(2 + 2\) and printed the result, which is \(4\).
TERMINATE
--------------------------------------------------------------------------------
ChatResult(chat_id=None, chat_history=[{'content': 'Write py code to eval 2 + 2', 'role': 'assistant'}, {'content': '```python\n# Calculate 2+2 and print the result\nresult = 2 + 2\nprint(result)\n```\n', 'role': 'user'}, {'content': 'exitcode: 0 (execution succeeded)\nCode output: \n4\n', 'role': 'assistant'}, {'content': 'The Python code successfully calculated \\(2 + 2\\) and printed the result, which is \\(4\\).\n\nTERMINATE\n', 'role': 'user'}], summary='The Python code successfully calculated \\(2 + 2\\) and printed the result, which is \\(4\\).\n\n\n', cost=({'total_cost': 0}, {'total_cost': 0}), human_input=[])
```python
gpt_assistant.delete_assistant()
```
Permanently deleting assistant...
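`delete_assistant` permanently removes the assistant on the OpenAI side. If you would rather reuse the same assistant across runs, skip the deletion and supply its id through the `ASSISTANT_ID` environment variable read at the top of this notebook; otherwise a new assistant is created each time, as the "No matching assistant found" log line above shows. A minimal sketch with a placeholder id:

```python
import os

# Reuse an assistant created in an earlier run instead of creating a new one.
# The id below is a placeholder; replace it with the id of your existing assistant.
os.environ["ASSISTANT_ID"] = "asst_xxxxxxxxxxxxxxxx"

assistant_id = os.environ.get("ASSISTANT_ID", None)  # picked up by assistant_config above
```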