# Integrations
Synaptic provides optional integration crates that connect to external services. Each integration is gated behind a Cargo feature flag and adds no overhead when not enabled.
## Available Integrations
| Integration | Feature | Purpose |
|---|---|---|
| OpenAI-Compatible Providers | `openai` | Chat (and, for some providers, embeddings) with Groq, DeepSeek, Fireworks, Together, xAI, MistralAI, HuggingFace, Cohere, OpenRouter |
| Azure OpenAI | `openai` | Azure-hosted OpenAI models (chat + embeddings) |
| Anthropic | `anthropic` | Anthropic Claude models (chat + streaming + tool calling) |
| Google Gemini | `gemini` | Google Gemini models via the Generative Language API |
| Ollama | `ollama` | Local LLM inference with Ollama (chat + embeddings) |
| AWS Bedrock | `bedrock` | AWS Bedrock foundation models (Claude, Llama, Mistral, etc.) |
| Cohere Reranker | `cohere` | Document reranking for improved retrieval quality |
| Qdrant | `qdrant` | Vector store backed by the Qdrant vector database |
| PgVector | `pgvector` | Vector store backed by PostgreSQL with the pgvector extension |
| Pinecone | `pinecone` | Managed vector store backed by Pinecone |
| Chroma | `chroma` | Open-source vector store backed by Chroma |
| MongoDB Atlas | `mongodb` | Vector search backed by MongoDB Atlas |
| Elasticsearch | `elasticsearch` | Vector store backed by Elasticsearch kNN |
| Redis | `redis` | Key-value store and LLM response cache backed by Redis |
| SQLite Cache | `sqlite` | Persistent LLM response cache backed by SQLite |
| PDF Loader | `pdf` | Document loader for PDF files |
| Tavily Search | `tavily` | Web search tool for agents |
## Enabling Integrations
Add the desired feature flags to your `Cargo.toml`:
```toml
[dependencies]
synaptic = { version = "0.3", features = ["openai", "qdrant", "redis"] }
```
You can combine any number of feature flags. Each integration pulls in only the dependencies it needs.
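With a feature enabled, the corresponding integration types become available from the crate. As an illustrative sketch only (the module path, the `OpenAiChatModel` type, and the builder methods below are assumptions for this example, not Synaptic's confirmed API), targeting an OpenAI-compatible provider typically comes down to pointing the client at that provider's base URL:

```rust
// Illustrative sketch -- the module path, type name, and method signatures
// here are assumed, not taken from Synaptic's documented API.
use synaptic::openai::OpenAiChatModel; // compiled in only with the `openai` feature

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // OpenAI-compatible providers differ mainly in base URL and API key;
    // Groq's OpenAI-compatible endpoint is used here as an example.
    let model = OpenAiChatModel::new(std::env::var("GROQ_API_KEY")?)
        .with_base_url("https://api.groq.com/openai/v1")
        .with_model("llama-3.1-8b-instant");

    let reply = model.chat("In one sentence, what is a Cargo feature?").await?;
    println!("{reply}");
    Ok(())
}
```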
## Trait Compatibility
Every integration implements a core Synaptic trait, so it plugs directly into the existing framework:
- OpenAI-Compatible, Azure OpenAI, and Bedrock implement `ChatModel` -- use them anywhere a model is accepted.
- OpenAI-Compatible (MistralAI, HuggingFace, Cohere) and Azure OpenAI also implement `Embeddings`.
- Cohere Reranker implements `DocumentCompressor` -- use it with `ContextualCompressionRetriever` for two-stage retrieval.
- Qdrant, PgVector, Pinecone, Chroma, MongoDB Atlas, and Elasticsearch implement `VectorStore` -- use them with `VectorStoreRetriever` or any component that accepts `&dyn VectorStore`.
- Redis Store implements `Store` -- use it anywhere `InMemoryStore` is used, including agent `ToolRuntime` injection.
- Redis Cache and SQLite Cache implement `LlmCache` -- wrap any `ChatModel` with `CachedChatModel` for persistent response caching.
- PDF Loader implements `Loader` -- use it in RAG pipelines alongside `TextSplitter`, `Embeddings`, and `VectorStore`.
- Tavily Search implements `Tool` -- register it with an agent for web search capabilities.
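To make the plug-in model concrete, here is a minimal sketch of two of these seams: a `VectorStore` behind a `VectorStoreRetriever`, and a `ChatModel` wrapped in `CachedChatModel` with an `LlmCache`. The trait and type names come from the list above; the constructors, method signatures, and module paths are assumptions for illustration.

```rust
// Hypothetical sketch: the trait names (VectorStore, ChatModel, LlmCache)
// and wrapper types (CachedChatModel, VectorStoreRetriever) appear in the
// docs above; every constructor and method signature below is assumed.
use synaptic::openai::OpenAiChatModel;   // `openai` feature
use synaptic::qdrant::QdrantVectorStore; // `qdrant` feature
use synaptic::sqlite::SqliteCache;       // `sqlite` feature
use synaptic::{CachedChatModel, VectorStoreRetriever};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Any `VectorStore` implementation slots into `VectorStoreRetriever`.
    let store = QdrantVectorStore::connect("http://localhost:6334", "docs").await?;
    let retriever = VectorStoreRetriever::new(store);
    let hits = retriever.retrieve("what is a cargo feature?", 4).await?;

    // Any `ChatModel` can be wrapped with `CachedChatModel` and an `LlmCache`,
    // so repeated prompts are answered from the cache instead of the API.
    let cache = SqliteCache::open("llm-cache.db")?;
    let model = CachedChatModel::new(
        OpenAiChatModel::new(std::env::var("OPENAI_API_KEY")?),
        cache,
    );

    let answer = model.chat(&format!("Answer using: {hits:?}")).await?;
    println!("{answer}");
    Ok(())
}
```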
## Guides
### LLM Providers
- OpenAI-Compatible Providers -- Groq, DeepSeek, Fireworks, Together, xAI, MistralAI, HuggingFace, Cohere, OpenRouter
- Azure OpenAI -- Azure-hosted OpenAI models
- Anthropic -- Anthropic Claude models
- Google Gemini -- Google Gemini models
- Ollama -- Local LLM inference (chat + embeddings)
- AWS Bedrock -- AWS Bedrock foundation models
### Reranking
- Cohere Reranker -- document reranking for improved retrieval
### Vector Stores
- Qdrant Vector Store -- store and search embeddings with Qdrant
- PgVector -- store and search embeddings with PostgreSQL + pgvector
- Pinecone Vector Store -- managed vector store with Pinecone
- Chroma Vector Store -- open-source embedding database
- MongoDB Atlas Vector Search -- vector search with MongoDB Atlas
- Elasticsearch Vector Store -- vector search with Elasticsearch kNN
### Storage & Caching
- Redis Store & Cache -- persistent key-value storage and LLM caching with Redis
- SQLite Cache -- local LLM response caching with SQLite
### Loaders & Tools
- PDF Loader -- load documents from PDF files
- Tavily Search Tool -- web search tool for agents