query_vector_db

```python
query_vector_db(
    query_texts: list[str],
    n_results: int = 10,
    client: API = None,
    db_path: str = 'tmp/chromadb.db',
    collection_name: str = 'all-my-documents',
    search_string: str = '',
    embedding_model: str = 'all-MiniLM-L6-v2',
    embedding_function: Callable = None
) -> QueryResult
```

Query a vector db. Chromadb-compatible APIs are supported; chromadb itself is not required if you have prepared your own vector db and query function.

Parameters:

- `query_texts` (list[str]): the list of strings which will be used to query the vector db.
- `n_results` (int, default: 10): the number of results to return.
- `client` (API, default: None): the chromadb-compatible client. If None, a chromadb client will be used.
- `db_path` (str, default: 'tmp/chromadb.db'): the path to the vector db. The default was '/tmp/chromadb.db' for version <=0.2.24.
- `collection_name` (str, default: 'all-my-documents'): the name of the collection.
- `search_string` (str, default: ''): the search string. Only docs that contain an exact match of this string will be retrieved.
- `embedding_model` (str, default: 'all-MiniLM-L6-v2'): the embedding model to use. Ignored if `embedding_function` is not None.
- `embedding_function` (Callable, default: None): the embedding function to use. If None, SentenceTransformer with the given `embedding_model` will be used. To use OpenAI, Cohere, HuggingFace or other embedding functions, pass one here, following the examples at https://docs.trychroma.com/embeddings.
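The precedence between `embedding_model` and `embedding_function` can be sketched as follows. `resolve_embedder` is a hypothetical helper, not part of the library; the real function would construct a SentenceTransformer from `embedding_model` when no function is supplied, while the placeholder here keeps the sketch self-contained.

```python
from typing import Callable, List, Optional

def resolve_embedder(
    embedding_model: str = "all-MiniLM-L6-v2",
    embedding_function: Optional[Callable] = None,
) -> Callable:
    """Hypothetical sketch: an explicit embedding_function wins over embedding_model."""
    if embedding_function is not None:
        # The caller-supplied function is used as-is; embedding_model is ignored.
        return embedding_function
    # The real implementation would build SentenceTransformer(embedding_model) here;
    # a length-based placeholder stands in so this sketch runs without dependencies.
    return lambda texts: [[float(len(t))] for t in texts]

custom: Callable[[List[str]], List[List[float]]] = lambda texts: [[0.0] for _ in texts]
assert resolve_embedder(embedding_function=custom) is custom  # custom wins
assert resolve_embedder()(["ab"]) == [[2.0]]  # placeholder default
```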
Returns:

`QueryResult` - the query result. The format is:

```python
class QueryResult(TypedDict):
    ids: List[IDs]
    embeddings: Optional[List[List[Embedding]]]
    documents: Optional[List[List[Document]]]
    metadatas: Optional[List[List[Metadata]]]
    distances: Optional[List[List[float]]]
```
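Because `query_texts` is a list, each field of the result is nested per query text. A minimal sketch of reading a `QueryResult`-shaped dict, with fabricated values (no real retrieval is performed):

```python
# Fabricated QueryResult-shaped dict for a single query text;
# real values would come from the vector db.
result = {
    "ids": [["doc-a", "doc-b"]],
    "embeddings": None,  # Optional fields may be None
    "documents": [["First matching doc.", "Second matching doc."]],
    "metadatas": [[{"source": "a.md"}, {"source": "b.md"}]],
    "distances": [[0.12, 0.37]],
}

# Index 0 selects the results for the first (here, only) query text.
for doc_id, doc, dist in zip(
    result["ids"][0], result["documents"][0], result["distances"][0]
):
    print(f"{doc_id}: {dist:.2f} -> {doc}")
```

Lower distances indicate closer matches; results are typically returned sorted by ascending distance.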