Integrations

Synaptic provides optional integration crates that connect to external services. Each integration is gated behind a Cargo feature flag and adds no overhead when not enabled.

Available Integrations

| Integration | Feature | Purpose |
|---|---|---|
| OpenAI-Compatible Providers | `openai` | Groq, DeepSeek, Fireworks, Together, xAI, MistralAI, HuggingFace, Cohere, OpenRouter |
| Azure OpenAI | `openai` | Azure-hosted OpenAI models (chat + embeddings) |
| Anthropic | `anthropic` | Anthropic Claude models (chat + streaming + tool calling) |
| Google Gemini | `gemini` | Google Gemini models via the Generative Language API |
| Ollama | `ollama` | Local LLM inference with Ollama (chat + embeddings) |
| AWS Bedrock | `bedrock` | AWS Bedrock foundation models (Claude, Llama, Mistral, etc.) |
| Cohere Reranker | `cohere` | Document reranking for improved retrieval quality |
| Qdrant | `qdrant` | Vector store backed by the Qdrant vector database |
| PgVector | `pgvector` | Vector store backed by PostgreSQL with the pgvector extension |
| Pinecone | `pinecone` | Managed vector store backed by Pinecone |
| Chroma | `chroma` | Open-source vector store backed by Chroma |
| MongoDB Atlas | `mongodb` | Vector search backed by MongoDB Atlas |
| Elasticsearch | `elasticsearch` | Vector store backed by Elasticsearch kNN |
| Redis | `redis` | Key-value store and LLM response cache backed by Redis |
| SQLite Cache | `sqlite` | Persistent LLM response cache backed by SQLite |
| PDF Loader | `pdf` | Document loader for PDF files |
| Tavily Search | `tavily` | Web search tool for agents |

Enabling integrations

Add the desired feature flags to your Cargo.toml:

```toml
[dependencies]
synaptic = { version = "0.3", features = ["openai", "qdrant", "redis"] }
```

You can combine any number of feature flags. Each integration pulls in only the dependencies it needs.
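This "pay only for what you enable" behavior comes from Cargo's standard feature mechanism: each feature flag turns on an optional dependency. The library side might look roughly like the sketch below (a hedged illustration of the pattern, not Synaptic's actual manifest; dependency names and versions are placeholders):

```toml
# Hypothetical library-side Cargo.toml: each integration feature
# activates only the optional dependencies that integration needs.
[features]
openai = ["dep:reqwest"]
qdrant = ["dep:qdrant-client"]
redis  = ["dep:redis"]

[dependencies]
reqwest       = { version = "0.12", optional = true }
qdrant-client = { version = "1",    optional = true }
redis         = { version = "0.25", optional = true }
```

With this layout, a downstream crate that enables only `openai` never compiles the Qdrant or Redis client code.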

Trait compatibility

Every integration implements a core Synaptic trait, so it plugs directly into the existing framework:

  • OpenAI-Compatible, Azure OpenAI, and Bedrock implement ChatModel -- use them anywhere a model is accepted.
  • OpenAI-Compatible (MistralAI, HuggingFace, Cohere) and Azure OpenAI also implement Embeddings.
  • Cohere Reranker implements DocumentCompressor -- use it with ContextualCompressionRetriever for two-stage retrieval.
  • Qdrant, PgVector, Pinecone, Chroma, MongoDB Atlas, and Elasticsearch implement VectorStore -- use them with VectorStoreRetriever or any component that accepts &dyn VectorStore.
  • Redis Store implements Store -- use it anywhere InMemoryStore is used, including agent ToolRuntime injection.
  • Redis Cache and SQLite Cache implement LlmCache -- wrap any ChatModel with CachedChatModel for persistent response caching.
  • PDF Loader implements Loader -- use it in RAG pipelines alongside TextSplitter, Embeddings, and VectorStore.
  • Tavily Search implements Tool -- register it with an agent for web search capabilities.
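Because each backend sits behind a shared trait, call sites depend on the trait object rather than any concrete integration. The self-contained sketch below illustrates that pattern with toy stand-ins; the trait and type names here are illustrative, not Synaptic's actual API:

```rust
// Toy stand-in for a core trait such as VectorStore (names are
// illustrative, not Synaptic's real API).
trait VectorStore {
    fn name(&self) -> &'static str;
    fn search(&self, query: &str, k: usize) -> Vec<String>;
}

// Two toy backends standing in for, e.g., Qdrant and PgVector.
struct QdrantLike;
struct PgVectorLike;

impl VectorStore for QdrantLike {
    fn name(&self) -> &'static str { "qdrant" }
    fn search(&self, query: &str, k: usize) -> Vec<String> {
        (0..k).map(|i| format!("qdrant hit {i} for {query}")).collect()
    }
}

impl VectorStore for PgVectorLike {
    fn name(&self) -> &'static str { "pgvector" }
    fn search(&self, query: &str, k: usize) -> Vec<String> {
        (0..k).map(|i| format!("pgvector hit {i} for {query}")).collect()
    }
}

// A retriever that accepts any backend behind `&dyn VectorStore`,
// so swapping Qdrant for PgVector changes no call sites.
fn retrieve(store: &dyn VectorStore, query: &str) -> Vec<String> {
    store.search(query, 2)
}

fn main() {
    for store in [&QdrantLike as &dyn VectorStore, &PgVectorLike] {
        println!("{}: {:?}", store.name(), retrieve(store, "rust"));
    }
}
```

The same shape applies to the other traits listed above: a `CachedChatModel` would wrap any `ChatModel`, and a `VectorStoreRetriever` would hold any `VectorStore`, without either knowing which integration is behind it.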

Guides

  • LLM Providers
  • Reranking
  • Vector Stores
  • Storage & Caching
  • Loaders & Tools