Runtime Logging with AG2
AG2 offers utilities to log runtime data for debugging and performance analysis. This notebook demonstrates how to use them.
Data can be logged in two modes:
- SQLite database
- File
In general, users can initiate logging by calling `autogen.runtime_logging.start()` and stop logging by calling `autogen.runtime_logging.stop()`.
import json
import pandas as pd
import autogen
from autogen import AssistantAgent, LLMConfig, UserProxyAgent
# Setup API key. Add your own API key to config file or environment variable
llm_config = LLMConfig.from_json(path="OAI_CONFIG_LIST", temperature=0.9)
# Start logging
logging_session_id = autogen.runtime_logging.start(config={"dbname": "logs.db"})
print("Logging session ID: " + str(logging_session_id))
# Create an agent workflow and run it
assistant = AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
name="user_proxy",
code_execution_config=False,
human_input_mode="NEVER",
is_termination_msg=lambda msg: "TERMINATE" in msg["content"],
)
user_proxy.initiate_chat(
assistant, message="What is the height of the Eiffel Tower? Only respond with the answer and terminate"
)
autogen.runtime_logging.stop()
Getting Data from the SQLite Database
A `logs.db` file should be generated; by default, logging uses an SQLite database. You can view the data with a GUI tool like sqlitebrowser, with the SQLite command-line shell, or with a Python script:
def get_log(dbname="logs.db", table="chat_completions"):
import sqlite3
con = sqlite3.connect(dbname)
query = f"SELECT * from {table}"
cursor = con.execute(query)
rows = cursor.fetchall()
column_names = [description[0] for description in cursor.description]
data = [dict(zip(column_names, row)) for row in rows]
con.close()
return data
def str_to_dict(s):
return json.loads(s)
log_data = get_log()
log_data_df = pd.DataFrame(log_data)
log_data_df["total_tokens"] = log_data_df.apply(
lambda row: str_to_dict(row["response"])["usage"]["total_tokens"], axis=1
)
log_data_df["request"] = log_data_df.apply(lambda row: str_to_dict(row["request"])["messages"][0]["content"], axis=1)
log_data_df["response"] = log_data_df.apply(
lambda row: str_to_dict(row["response"])["choices"][0]["message"]["content"], axis=1
)
log_data_df
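Besides `chat_completions`, the SQLite logger records other tables as well (for example agent and client metadata; the exact table names may vary across AG2 versions). A quick way to see everything that was logged is to list the tables, as in this minimal sketch using only the standard library:

```python
import sqlite3


def list_tables(dbname="logs.db"):
    """Return the names of all tables in the logging database."""
    con = sqlite3.connect(dbname)
    try:
        cursor = con.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
        return [row[0] for row in cursor.fetchall()]
    finally:
        con.close()


print(list_tables())
```

Any table returned here can be passed as the `table` argument of `get_log()` above.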
Computing Cost
One use case of logging data is to compute the cost of a session.
# Sum total tokens for all sessions
total_tokens = log_data_df["total_tokens"].sum()
# Sum total cost for all sessions
total_cost = log_data_df["cost"].sum()
# Total tokens for specific session
session_tokens = log_data_df[log_data_df["session_id"] == logging_session_id]["total_tokens"].sum()
session_cost = log_data_df[log_data_df["session_id"] == logging_session_id]["cost"].sum()
print("Total tokens for all sessions: " + str(total_tokens) + ", total cost: " + str(round(total_cost, 4)))
print(
"Total tokens for session "
+ str(logging_session_id)
+ ": "
+ str(session_tokens)
+ ", cost: "
+ str(round(session_cost, 4))
)
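The same accounting can also be done without pandas. Below is a minimal sketch that groups token and cost totals per session directly from row dictionaries; the field names (`session_id`, `total_tokens`, `cost`) assume the `chat_completions` schema used above, and the sample rows are made up for illustration:

```python
from collections import defaultdict


def summarize_sessions(rows):
    """Aggregate total_tokens and cost per session_id from logged rows."""
    totals = defaultdict(lambda: {"total_tokens": 0, "cost": 0.0})
    for row in rows:
        session = totals[row["session_id"]]
        session["total_tokens"] += row["total_tokens"]
        session["cost"] += row["cost"]
    return dict(totals)


# Hypothetical sample rows in the shape produced by get_log()
rows = [
    {"session_id": "a", "total_tokens": 120, "cost": 0.003},
    {"session_id": "a", "total_tokens": 80, "cost": 0.002},
    {"session_id": "b", "total_tokens": 50, "cost": 0.001},
]
print(summarize_sessions(rows))
```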
Log data in File mode
By default, the log type is set to `sqlite` as shown above. Passing `logger_type="file"` to `autogen.runtime_logging.start()` will log data in File mode instead.
import pandas as pd
import autogen
from autogen import AssistantAgent, LLMConfig, UserProxyAgent
# Setup API key. Add your own API key to config file or environment variable
llm_config = LLMConfig.from_json(path="OAI_CONFIG_LIST", temperature=0.9)
# Start logging with logger_type and the filename to log to
logging_session_id = autogen.runtime_logging.start(logger_type="file", config={"filename": "runtime.log"})
print("Logging session ID: " + str(logging_session_id))
# Create an agent workflow and run it
assistant = AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
name="user_proxy",
code_execution_config=False,
human_input_mode="NEVER",
is_termination_msg=lambda msg: "TERMINATE" in msg["content"],
)
user_proxy.initiate_chat(
assistant, message="What is the height of the Eiffel Tower? Only respond with the answer and terminate"
)
autogen.runtime_logging.stop()
This should create a `runtime.log` file in your current directory.
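To inspect the file log, you can simply read it back; the logger appends one line per logged event (the exact line format depends on the AG2 version). A minimal sketch that tails the file:

```python
def tail_log(filename="runtime.log", n=5):
    """Return the last n lines of the log file, or [] if it doesn't exist yet."""
    try:
        with open(filename) as f:
            return [line.rstrip("\n") for line in f][-n:]
    except FileNotFoundError:
        return []


for line in tail_log():
    print(line)
```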