This notebook demonstrates the LlamaIndexQueryEngine for
retrieval-augmented question answering over documents. It shows how to
set up the engine with Docling-parsed Markdown files and execute
natural language queries against the indexed data.
The LlamaIndexQueryEngine provides an efficient way to query vector
databases using any LlamaIndex vector store.
We use some Markdown (.md) files as input; feel free to try your own
text or Markdown documents.
You can create this LlamaIndexQueryEngine and add it to DocAgent to use.
Load LLM configuration
This demonstration requires an OPENAI_API_KEY to be set in your
environment variables. See our documentation for guidance.
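As a minimal sketch, the key can be read from the environment and placed in an AG2-style configuration. The model name below is illustrative, not prescribed by this notebook:

```python
import os

# The notebook assumes OPENAI_API_KEY is already set in the environment.
api_key = os.environ.get("OPENAI_API_KEY", "")
if not api_key:
    print("Warning: OPENAI_API_KEY is not set; the queries below will fail.")

# A minimal AG2-style LLM configuration (the model name is illustrative).
llm_config = {"config_list": [{"model": "gpt-4o-mini", "api_key": api_key}]}
```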
ChromaDB
In the first example, we build a LlamaIndexQueryEngine instance using ChromaDB.
Refer to this link for running ChromaDB in a Docker container. We then use the
chroma_vector_store to create our AG2 LlamaIndexQueryEngine instance.
Pinecone
In the second example, we build a similar LlamaIndexQueryEngine instance, but on top of Pinecone.
Refer to https://docs.llamaindex.ai/en/stable/examples/vector_stores/PineconeIndexDemo/ for more details on how to set up Pinecone and PineconeVectorStore. Please put your Pinecone API key in an environment variable called PINECONE_API_KEY.