In this notebook, we build a workflow to extract accurate tabular
data from a PDF file.
The notebook's highlights:
- Parse the PDF file and extract tables into images (optional).
- A single RAG agent fails to get accurate information from
tabular data.
- An agentic workflow using a groupchat is able to extract information
accurately:
- the agentic workflow uses a RAG agent to extract document
metadata (e.g. the image of a data table using just the table
name)
- the table image is converted to Markdown through a multi-modal
agent
- finally, an assistant agent answers the original question with
an LLM
Unstructured-IO is a dependency for this notebook to parse the PDF. Please install AG2 (with the neo4j extra) and the dependencies:
Note: If you have been using autogen or ag2, all you need to do is upgrade it using:
pip install -U autogen[openai,neo4j]
or
pip install -U ag2[openai,neo4j]
as autogen and ag2 are aliases for the same PyPI package.
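The PDF-parsing step below also needs Unstructured's PDF support. A typical install looks like the following (the exact extras name may vary by Unstructured version, and hi_res parsing additionally requires system packages such as poppler and tesseract):

```shell
pip install "unstructured[pdf]"
```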
Set Configuration and OpenAI API Key
import os
import autogen
config_list = autogen.config_list_from_json(
"OAI_CONFIG_LIST",
filter_dict={
"model": ["gpt-4o"],
},
)
os.environ["OPENAI_API_KEY"] = config_list[0]["api_key"]
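The OAI_CONFIG_LIST file read above is a JSON list of model configurations; the filter_dict keeps only entries whose model is gpt-4o. A minimal example file (the api_key value is a placeholder) looks like:

```json
[
    {
        "model": "gpt-4o",
        "api_key": "sk-..."
    }
]
```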
Parse PDF file
You can skip this step and use the already-parsed files to run the rest
of the notebook. This step is expensive and time-consuming; skip it if
you don't need to regenerate the full data set. The estimated cost is
$10 to $15 to parse the PDF file and build the knowledge graph from the
entire parsed output.
For the notebook, we use a common financial document, Nvidia 2024
10-K
as an example (file download
link).
We use Unstructured-IO to parse the PDF; the tables and images in the
PDF are extracted as .jpg files.
All parsed output is saved in a JSON file.
from unstructured.partition.pdf import partition_pdf
from unstructured.staging.base import elements_to_json
file_elements = partition_pdf(
filename="./input_files/nvidia_10k_2024.pdf",
strategy="hi_res",
languages=["eng"],
infer_table_structure=True,
extract_images_in_pdf=True,
extract_image_block_output_dir="./parsed_pdf_info",
extract_image_block_types=["Image", "Table"],
extract_forms=False,
form_extraction_skip_tables=False,
)
elements_to_json(elements=file_elements, filename="parsed_elements.json", encoding="utf-8")
Create sample dataset
import json
output_elements = []
keys_to_extract = ["element_id", "text", "type"]
metadata_keys = ["page_number", "parent_id", "image_path"]
text_types = {"Text", "UncategorizedText", "NarrativeText"}
element_length = len(file_elements)
for idx in range(element_length):
    data = file_elements[idx].to_dict()
    new_data = {key: data[key] for key in keys_to_extract}
    metadata = data["metadata"]
    for key in metadata_keys:
        if key in metadata:
            new_data[key] = metadata[key]
    if data["type"] == "Table":
        # Merge adjacent narrative text into the table element to preserve its context
        if idx > 0:
            pre_data = file_elements[idx - 1].to_dict()
            if pre_data["type"] in text_types:
                new_data["text"] = pre_data["text"] + new_data["text"]
        if idx < element_length - 1:
            post_data = file_elements[idx + 1].to_dict()
            if post_data["type"] in text_types:
                new_data["text"] = new_data["text"] + post_data["text"]
    output_elements.append(new_data)
with open("processed_elements.json", "w", encoding="utf-8") as file:
json.dump(output_elements, file, indent=4)
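The context-merging step above can be sketched on mock data. The element dicts below are hypothetical stand-ins shaped like Unstructured's to_dict() output, not real parsed results:

```python
# Mock parsed elements (hypothetical), shaped like Unstructured's to_dict() output.
elements = [
    {"element_id": "e1", "type": "NarrativeText", "text": "Goodwill table: "},
    {"element_id": "e2", "type": "Table", "text": "Goodwill | 4,430"},
    {"element_id": "e3", "type": "Text", "text": " (in millions)"},
]
text_types = {"Text", "UncategorizedText", "NarrativeText"}

merged = []
for idx, data in enumerate(elements):
    new_data = {k: data[k] for k in ("element_id", "text", "type")}
    if data["type"] == "Table":
        # Prepend/append neighboring narrative text so the table keeps its context.
        if idx > 0 and elements[idx - 1]["type"] in text_types:
            new_data["text"] = elements[idx - 1]["text"] + new_data["text"]
        if idx < len(elements) - 1 and elements[idx + 1]["type"] in text_types:
            new_data["text"] = new_data["text"] + elements[idx + 1]["text"]
    merged.append(new_data)

print(merged[1]["text"])  # -> "Goodwill table: Goodwill | 4,430 (in millions)"
```

Attaching the surrounding prose to each table element gives the retriever the table's title and caption, which is what later lets the RAG agent look up a table image by name.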
Imports
If you want to skip the parsing of the PDF file, you can start here.
# This is needed to allow nested asyncio calls for Neo4j in Jupyter
import nest_asyncio
nest_asyncio.apply()
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from autogen import AssistantAgent, ConversableAgent, UserProxyAgent
# load documents
from autogen.agentchat.contrib.graph_rag.document import Document, DocumentType
from autogen.agentchat.contrib.graph_rag.neo4j_graph_query_engine import Neo4jGraphQueryEngine
from autogen.agentchat.contrib.graph_rag.neo4j_graph_rag_capability import Neo4jGraphCapability
from autogen.agentchat.contrib.multimodal_conversable_agent import MultimodalConversableAgent
Create a knowledge graph with sample data
To save time and cost, we use a small subset of the data for the
notebook.
This does not change the outcome: the native RAG agent solution still
fails to provide the correct answer.
input_path = "./agentchat_pdf_rag/sample_elements.json"
input_documents = [
Document(doctype=DocumentType.JSON, path_or_url=input_path),
]
query_engine = Neo4jGraphQueryEngine(
username="neo4j", # Change if you reset username
password="password", # Change if you reset password
host="bolt://172.17.0.3", # Change
port=7687, # if needed
llm=OpenAI(model="gpt-4o", temperature=0.0), # Default, no need to specify
embedding=OpenAIEmbedding(model_name="text-embedding-3-small"), # unless you want to use a different model
database="neo4j", # Change if you want to store the graph in your custom database
)
# query_engine._clear()
# Ingest data and create a new property graph
query_engine.init_db(input_doc=input_documents)
Connect to knowledge graph if it is built
query_engine = Neo4jGraphQueryEngine(
username="neo4j",
password="password",
host="bolt://172.17.0.3",
port=7687,
database="neo4j",
)
query_engine.connect_db()
Native RAG Agent Solution
The following shows that when we use a native RAG agent on the parsed data,
the agent fails to get the right information (5,282 instead of 4,430).
Our best guess is that the RAG agent fails to understand the table structure
from text.
rag_agent = ConversableAgent(
name="nvidia_rag",
human_input_mode="NEVER",
)
# Associate the capability with the agent
graph_rag_capability = Neo4jGraphCapability(query_engine)
graph_rag_capability.add_to_agent(rag_agent)
# Create a user proxy agent to converse with our RAG agent
user_proxy = UserProxyAgent(
name="user_proxy",
human_input_mode="ALWAYS",
)
user_proxy.initiate_chat(rag_agent, message="Could you list all tables from the document and their image_path?")
Agentic RAG workflow for tabular data
In the above example, when asked for the goodwill asset (in millions) from
the table NVIDIA Corporation and Subsidiaries Consolidated Balance
Sheets, the answer was wrong. The correct figure from the table is
$4,430 million, not $4,400 million. To improve RAG
performance on tabular data, we introduce an enhanced workflow.
The workflow consists of a group of agents coordinated through a groupchat.
It breaks the RAG process into three main steps: 1. find the parsed image of
the corresponding table; 2. convert the image to a table in structured
Markdown format; 3. with the table in Markdown, answer the original
question with the correct data.
llm_config = {
"cache_seed": 42, # change the cache_seed for different trials
"temperature": 1,
"config_list": config_list,
"timeout": 120,
}
user_proxy = UserProxyAgent(
name="User_proxy",
system_message="A human admin.",
human_input_mode="ALWAYS", # Try between ALWAYS or NEVER
code_execution_config=False,
)
table_assistant = AssistantAgent(
name="table_assistant",
system_message="""You are a helpful assistant.
You will extract the table name from the message and reply with "Find image_path for Table: {table_name}".
For example, when you get the message "What is column data in table XYZ?",
you will reply "Find image_path for Table: XYZ"
""",
llm_config=llm_config,
human_input_mode="NEVER", # Never ask for human input.
)
rag_agent = ConversableAgent(
name="nvidia_rag",
human_input_mode="NEVER",
)
# Associate the capability with the agent
graph_rag_capability = Neo4jGraphCapability(query_engine)
graph_rag_capability.add_to_agent(rag_agent)
img_folder = "/workspaces/ag2/notebook/agentchat_pdf_rag/parsed_pdf_info"
img_request_format = ConversableAgent(
name="img_request_format",
system_message=f"""You are a helpful assistant.
You will extract the table_file_name from the message and reply with "Please extract table from the following image and convert it to Markdown.
<img {img_folder}/table_file_name>.".
For example, when you get the message "The image path for the table titled XYZ is "./parsed_pdf_info/abcde".",
you will reply "Please extract table from the following image and convert it to Markdown.
<img {img_folder}/abcde>."
""",
llm_config=llm_config,
human_input_mode="NEVER",
)
image2table_convertor = MultimodalConversableAgent(
name="image2table_convertor",
system_message="""
You are an image to table converter. You will process an image of one or multiple consecutive tables.
You need to follow the following steps in sequence,
1. extract the complete table contents and structure.
2. Make sure the structure is complete and no information is left out. Otherwise, start from step 1 again.
3. Correct typos in the text fields.
4. In the end, output the table(s) in Markdown.
""",
llm_config={"config_list": config_list, "max_tokens": 300},
human_input_mode="NEVER",
max_consecutive_auto_reply=1,
)
conclusion = AssistantAgent(
name="conclusion",
system_message="""You are a helpful assistant.
Based on the history of the groupchat, answer the original question from User_proxy.
""",
llm_config=llm_config,
human_input_mode="NEVER", # Never ask for human input.
)
groupchat = autogen.GroupChat(
agents=[
user_proxy,
table_assistant,
rag_agent,
img_request_format,
image2table_convertor,
conclusion,
],
messages=[],
speaker_selection_method="round_robin",
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)
chat_result = user_proxy.initiate_chat(
manager,
message="What is goodwill asset (in millions) for 2024 in table NVIDIA Corporation and Subsidiaries Consolidated Balance Sheets?",
)