Qdrant Vector Store

This guide shows how to use Qdrant as a vector store backend in Synaptic. Qdrant is a high-performance vector database purpose-built for similarity search.

Setup

Add the qdrant feature to your Cargo.toml:

[dependencies]
synaptic = { version = "0.3", features = ["openai", "qdrant"] }

Start a Qdrant instance (e.g. via Docker):

docker run -p 6333:6333 -p 6334:6334 qdrant/qdrant

Port 6333 is the REST API; port 6334 is the gRPC endpoint used by the Rust client.
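
To confirm the instance is reachable before wiring it into Synaptic, you can ping it with the underlying qdrant_client crate (a minimal sketch; health_check() comes from qdrant_client itself, not from Synaptic):

use qdrant_client::Qdrant;

// Connect over gRPC and ask the server for its health status.
let client = Qdrant::from_url("http://localhost:6334").build()?;
let health = client.health_check().await?;
println!("Qdrant is reachable: {health:?}");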

Configuration

Create a QdrantConfig with the connection URL, collection name, and vector dimensionality:

use synaptic::qdrant::{QdrantConfig, QdrantVectorStore};

let config = QdrantConfig::new("http://localhost:6334", "my_collection", 1536);
let store = QdrantVectorStore::new(config)?;

API key authentication

For Qdrant Cloud or secured deployments, attach an API key:

let config = QdrantConfig::new("https://my-cluster.cloud.qdrant.io:6334", "docs", 1536)
    .with_api_key("your-api-key-here");

let store = QdrantVectorStore::new(config)?;

Distance metric

The default distance metric is cosine similarity. You can change it with with_distance():

use qdrant_client::qdrant::Distance;

let config = QdrantConfig::new("http://localhost:6334", "my_collection", 1536)
    .with_distance(Distance::Euclid);

Available options: Distance::Cosine (default), Distance::Euclid, Distance::Dot, Distance::Manhattan.

Creating the collection

Call ensure_collection() to create the collection if it does not already exist. This is idempotent and safe to call on every startup:

store.ensure_collection().await?;

The collection is created with the vector size and distance metric from your config.

Adding documents

QdrantVectorStore implements the VectorStore trait. Pass an embeddings provider to compute vectors:

use synaptic::qdrant::VectorStore;
use synaptic::retrieval::Document;
use synaptic::openai::OpenAiEmbeddings;

let embeddings = OpenAiEmbeddings::new("text-embedding-3-small");

let docs = vec![
    Document::new("1", "Rust is a systems programming language"),
    Document::new("2", "Python is great for data science"),
    Document::new("3", "Go is designed for concurrency"),
];

let ids = store.add_documents(docs, &embeddings).await?;

Document IDs are mapped to Qdrant point UUIDs. If a document ID is already a valid UUID, it is used directly. Otherwise, a deterministic UUID v5 is generated from the ID string.
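
For reference, here is roughly how that mapping can look with the uuid crate (a sketch of the idea only; the namespace used here is an assumption, not necessarily the one Synaptic uses internally):

use uuid::Uuid;

// "1" is not a valid UUID, so it is hashed into a stable UUID v5;
// the same input string always produces the same point ID.
let point_id = Uuid::new_v5(&Uuid::NAMESPACE_OID, "1".as_bytes());

// An ID that already parses as a UUID would be used directly.
let direct = Uuid::parse_str("550e8400-e29b-41d4-a716-446655440000")?;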

Similarity search

Find the k most similar documents to a text query:

let results = store.similarity_search("fast systems language", 3, &embeddings).await?;
for doc in &results {
    println!("{}: {}", doc.id, doc.content);
}

Search with scores

Get similarity scores alongside results:

let scored = store.similarity_search_with_score("concurrency", 3, &embeddings).await?;
for (doc, score) in &scored {
    println!("{} (score: {:.3}): {}", doc.id, score, doc.content);
}

Search by vector

Search using a pre-computed embedding vector:

use synaptic::embeddings::Embeddings;

let query_vec = embeddings.embed_query("systems programming").await?;
let results = store.similarity_search_by_vector(&query_vec, 3).await?;

Deleting documents

Remove documents by their IDs:

store.delete(&["1", "3"]).await?;

Using with a retriever

Wrap the store in a VectorStoreRetriever to use it with the rest of Synaptic's retrieval infrastructure:

use std::sync::Arc;
use synaptic::vectorstores::VectorStoreRetriever;
use synaptic::retrieval::Retriever;

let embeddings = Arc::new(OpenAiEmbeddings::new("text-embedding-3-small"));
let store = Arc::new(store);

let retriever = VectorStoreRetriever::new(store, embeddings, 5);
let results = retriever.retrieve("fast language", 5).await?;

Using an existing client

If you already have a configured qdrant_client::Qdrant instance, you can pass it directly:

use qdrant_client::Qdrant;
use synaptic::qdrant::{QdrantConfig, QdrantVectorStore};

let client = Qdrant::from_url("http://localhost:6334").build()?;
let config = QdrantConfig::new("http://localhost:6334", "my_collection", 1536);

let store = QdrantVectorStore::from_client(client, config);

RAG pipeline example

This example walks through a complete RAG pipeline: load documents, split them into chunks, embed and store them in Qdrant, then retrieve relevant context and generate an answer.

use synaptic::core::{ChatModel, ChatRequest, Message, Embeddings};
use synaptic::openai::{OpenAiChatModel, OpenAiEmbeddings};
use synaptic::qdrant::{QdrantConfig, QdrantVectorStore};
use synaptic::splitters::RecursiveCharacterTextSplitter;
use synaptic::loaders::TextLoader;
use synaptic::vectorstores::VectorStoreRetriever;
use synaptic::models::HttpBackend;
use std::sync::Arc;

let backend = Arc::new(HttpBackend::new());
let embeddings = Arc::new(OpenAiEmbeddings::new(
    OpenAiEmbeddings::config("text-embedding-3-small"),
    backend.clone(),
));

// 1. Load and split
let loader = TextLoader::new("docs/knowledge-base.txt");
let docs = loader.load().await?;
let splitter = RecursiveCharacterTextSplitter::new(500, 50);
let chunks = splitter.split_documents(&docs)?;

// 2. Store in Qdrant
let config = QdrantConfig::new("http://localhost:6334", "my_collection", 1536);
let store = QdrantVectorStore::new(config)?;
store.ensure_collection().await?;
store.add_documents(chunks, embeddings.as_ref()).await?;

// 3. Retrieve and answer
let store = Arc::new(store);
let retriever = VectorStoreRetriever::new(store, embeddings.clone(), 5);
let relevant = retriever.retrieve("What is Synaptic?", 5).await?;
let context = relevant.iter().map(|d| d.content.as_str()).collect::<Vec<_>>().join("\n\n");

let model = OpenAiChatModel::new(/* config */);
let request = ChatRequest::new(vec![
    Message::system(&format!("Answer based on context:\n{context}")),
    Message::human("What is Synaptic?"),
]);
let response = model.chat(&request).await?;

Using with an agent

Wrap the retriever as a tool so a ReAct agent can decide when to search the vector store during multi-step reasoning:

use synaptic::graph::create_react_agent;
use synaptic::qdrant::{QdrantConfig, QdrantVectorStore};
use synaptic::vectorstores::VectorStoreRetriever;
use synaptic::openai::{OpenAiChatModel, OpenAiEmbeddings};
use std::sync::Arc;

// Build the retriever (as shown above)
let config = QdrantConfig::new("http://localhost:6334", "knowledge", 1536);
let store = Arc::new(QdrantVectorStore::new(config)?);
store.ensure_collection().await?;
let embeddings = Arc::new(OpenAiEmbeddings::new(/* config */));
let retriever = VectorStoreRetriever::new(store, embeddings, 5);

// Register the retriever as a tool and create a ReAct agent
// that can autonomously decide when to search
let model = OpenAiChatModel::new(/* config */);
let agent = create_react_agent(model, vec![/* retriever tool */]).compile();

The agent will invoke the retriever tool whenever it determines that external knowledge is needed to answer the user's question.

Configuration reference

Field             Type              Default    Description
url               String            required   Qdrant gRPC URL (e.g. http://localhost:6334)
collection_name   String            required   Name of the Qdrant collection
vector_size       u64               required   Dimensionality of the embedding vectors
api_key           Option<String>    None       API key for authenticated access
distance          Distance          Cosine     Distance metric for similarity search
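
Putting the reference together, a fully specified config chains the builder methods shown above (the URL, collection name, and key below are placeholders):

use qdrant_client::qdrant::Distance;
use synaptic::qdrant::QdrantConfig;

let config = QdrantConfig::new("https://my-cluster.cloud.qdrant.io:6334", "docs", 1536)
    .with_api_key("your-api-key-here")
    .with_distance(Distance::Cosine);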