LlamaIndexQueryEngine

LlamaIndexQueryEngine(
    vector_store: BasePydanticVectorStore,
    llm: ForwardRef('LLM') | None = None,
    file_reader_class: type['SimpleDirectoryReader'] | None = None
)

This engine leverages LlamaIndex's VectorStoreIndex to efficiently index and retrieve documents, and to generate answers to natural language queries. It can use any LlamaIndex vector store.
By default the engine uses OpenAI's GPT-4o model; pass the llm parameter to change that.
Initializes the LlamaIndexQueryEngine with the given vector store.

Parameters:

- vector_store: The LlamaIndex vector store to index documents into and retrieve from. Type: BasePydanticVectorStore
- llm: The LLM used to generate answers; defaults to OpenAI's GPT-4o. Type: LLM | None. Default: None
- file_reader_class: The reader class used to load input documents. Type: type[SimpleDirectoryReader] | None. Default: None

Instance Methods

add_docs

add_docs(
    self,
    new_doc_dir: Path | str | None = None,
    new_doc_paths_or_urls: Sequence[Path | str] | None = None,
    *args: Any,
    **kwargs: Any
) -> None

Add new documents to the underlying database and insert them into the index.

Parameters:

- new_doc_dir: A directory of input documents used to create the records in the database. Type: pathlib.Path | str | None. Default: None
- new_doc_paths_or_urls: A sequence of input documents used to create the records in the database. A document can be a Path to a file or a URL. Type: Sequence[pathlib.Path | str] | None. Default: None
- *args: Any additional arguments. Type: Any
- **kwargs: Any additional keyword arguments. Type: Any

connect_db

connect_db(
    self,
    *args: Any,
    **kwargs: Any
) -> bool

Connect to the database.
It sets up the LlamaIndex storage and creates an index from the existing vector store.

Parameters:

- *args: Any additional arguments. Type: Any
- **kwargs: Any additional keyword arguments. Type: Any

Returns:

- bool: True if the connection is successful

init_db

init_db(
    self,
    new_doc_dir: Path | str | None = None,
    new_doc_paths_or_urls: Sequence[Path | str] | None = None,
    *args: Any,
    **kwargs: Any
) -> bool

Initialize the database with the input documents or records.
It takes the following steps:
1. Set up the LlamaIndex storage context.
2. Insert the documents and build an index over them.

Parameters:

- new_doc_dir: A directory of input documents used to create the records in the database. Type: pathlib.Path | str | None. Default: None
- new_doc_paths_or_urls: A sequence of input documents used to create the records in the database. A document can be a Path to a file or a URL. Type: Sequence[pathlib.Path | str] | None. Default: None
- *args: Any additional arguments. Type: Any
- **kwargs: Any additional keyword arguments. Type: Any

Returns:

- bool: True if initialization is successful

query

query(self, question: str) -> str

Retrieve information from indexed documents by processing a query using the engine’s LLM.

Parameters:

- question: A natural language query string used to search the indexed documents. Type: str

Returns:

- str: A string containing the response generated by the LLM