# Integrations
Synaptic provides optional integration crates that connect to external services. Each integration is gated behind a Cargo feature flag and adds no overhead when not enabled.
## Available Integrations
| Integration | Feature | Purpose |
|---|---|---|
| OpenAI-Compatible Providers | openai | Groq, DeepSeek, Fireworks, Together, xAI, MistralAI, HuggingFace, Cohere, OpenRouter |
| Azure OpenAI | openai | Azure-hosted OpenAI models (chat + embeddings) |
| Anthropic | anthropic | Anthropic Claude models (chat + streaming + tool calling) |
| Google Gemini | gemini | Google Gemini models via Generative Language API |
| Ollama | ollama | Local LLM inference with Ollama (chat + embeddings) |
| AWS Bedrock | bedrock | AWS Bedrock foundation models (Claude, Llama, Mistral, etc.) |
| Cohere Reranker | cohere | Document reranking for improved retrieval quality |
| Qdrant | qdrant | Vector store backed by the Qdrant vector database |
| PostgreSQL | postgres | Store, cache, vector store, and graph checkpointer backed by PostgreSQL |
| Pinecone | pinecone | Managed vector store backed by Pinecone |
| Chroma | chroma | Open-source vector store backed by Chroma |
| MongoDB Atlas | mongodb | Vector search backed by MongoDB Atlas |
| Elasticsearch | elasticsearch | Vector store backed by Elasticsearch kNN |
| Redis | redis | Key-value store and LLM response cache backed by Redis |
| SQLite Cache | sqlite | Persistent LLM response cache backed by SQLite |
| PDF Loader | pdf | Document loader for PDF files |
| Tavily Search | tavily | Web search tool for agents |
| Together AI | together | Serverless open-source models (Llama, DeepSeek, Qwen, Mixtral) |
| Fireworks AI | fireworks | Fastest open-source model inference (sub-100ms TTFT) |
| xAI Grok | xai | xAI Grok models with real-time reasoning |
| Perplexity AI | perplexity | Search-augmented LLMs with cited sources |
## Enabling integrations
Add the desired feature flags to your `Cargo.toml`:

```toml
[dependencies]
synaptic = { version = "0.4", features = ["openai", "qdrant", "redis"] }
```
You can combine any number of feature flags. Each integration pulls in only the dependencies it needs.
## Trait compatibility
Every integration implements a core Synaptic trait, so it plugs directly into the existing framework:
- OpenAI-Compatible, Azure OpenAI, and Bedrock implement `ChatModel` -- use them anywhere a model is accepted.
- OpenAI-Compatible (MistralAI, HuggingFace, Cohere) and Azure OpenAI also implement `Embeddings`.
- Cohere Reranker implements `DocumentCompressor` -- use it with `ContextualCompressionRetriever` for two-stage retrieval.
- Qdrant, PostgreSQL (`PgVectorStore`), Pinecone, Chroma, MongoDB Atlas, and Elasticsearch implement `VectorStore` -- use them with `VectorStoreRetriever` or any component that accepts `&dyn VectorStore`.
- Redis Store and PostgreSQL (`PgStore`) implement `Store` -- use them anywhere `InMemoryStore` is used, including agent `ToolRuntime` injection.
- Redis Cache, PostgreSQL (`PgCache`), and SQLite Cache implement `LlmCache` -- wrap any `ChatModel` with `CachedChatModel` for persistent response caching.
- PDF Loader implements `Loader` -- use it in RAG pipelines alongside `TextSplitter`, `Embeddings`, and `VectorStore`.
- Tavily Search implements `Tool` -- register it with an agent for web search capabilities.
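The value of this design is that components depend on a trait, not a backend, so stores can be swapped without touching calling code. The standalone sketch below illustrates the pattern with a toy `VectorStore` trait and in-memory backend; Synaptic's actual traits are richer (and async), and the method names here are illustrative assumptions, not the crate's real API.

```rust
// Toy sketch of the trait-object plug-in pattern described above.
// NOTE: this `VectorStore` trait is a simplified stand-in, not Synaptic's
// real (async) trait; the method names are assumptions for illustration.
trait VectorStore {
    fn add(&mut self, id: String, embedding: Vec<f32>);
    fn search(&self, query: &[f32], k: usize) -> Vec<String>;
}

// A trivial in-memory backend. A Qdrant- or Pinecone-backed type would
// implement the same trait and be interchangeable behind `&dyn VectorStore`.
struct InMemoryStore {
    entries: Vec<(String, Vec<f32>)>,
}

impl VectorStore for InMemoryStore {
    fn add(&mut self, id: String, embedding: Vec<f32>) {
        self.entries.push((id, embedding));
    }

    fn search(&self, query: &[f32], k: usize) -> Vec<String> {
        // Rank entries by dot-product similarity (toy scoring).
        let mut scored: Vec<(f32, &String)> = self
            .entries
            .iter()
            .map(|(id, e)| (e.iter().zip(query).map(|(a, b)| a * b).sum(), id))
            .collect();
        scored.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
        scored.into_iter().take(k).map(|(_, id)| id.clone()).collect()
    }
}

// A retriever-style component that only depends on the trait, so any
// backend can be substituted without changing this function.
fn retrieve(store: &dyn VectorStore, query: &[f32]) -> Vec<String> {
    store.search(query, 2)
}

fn main() {
    let mut store = InMemoryStore { entries: Vec::new() };
    store.add("doc-a".to_string(), vec![1.0, 0.0]);
    store.add("doc-b".to_string(), vec![0.0, 1.0]);
    store.add("doc-c".to_string(), vec![0.9, 0.1]);
    let hits = retrieve(&store, &[1.0, 0.0]);
    println!("{:?}", hits); // ["doc-a", "doc-c"]
}
```

The same substitution logic applies to `ChatModel`, `Store`, and `LlmCache`: code written against the trait works unchanged when the backing integration changes.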
## Guides
### LLM Providers
- OpenAI-Compatible Providers -- Groq, DeepSeek, Fireworks, Together, xAI, MistralAI, HuggingFace, Cohere, OpenRouter
- Azure OpenAI -- Azure-hosted OpenAI models
- Anthropic -- Anthropic Claude models
- Google Gemini -- Google Gemini models
- Ollama -- Local LLM inference (chat + embeddings)
- AWS Bedrock -- AWS Bedrock foundation models
- Together AI -- Serverless open-source models (Llama, DeepSeek, Qwen, Mixtral)
- Fireworks AI -- Fastest open-source model inference
- xAI Grok -- xAI Grok models with real-time reasoning
- Perplexity AI -- Search-augmented LLMs with cited sources
### Reranking
- Cohere Reranker -- document reranking for improved retrieval
### Vector Stores
- Qdrant Vector Store -- store and search embeddings with Qdrant
- PostgreSQL -- store, cache, vector store, and graph checkpointer with PostgreSQL
- Pinecone Vector Store -- managed vector store with Pinecone
- Chroma Vector Store -- open-source embedding database
- MongoDB Atlas Vector Search -- vector search with MongoDB Atlas
- Elasticsearch Vector Store -- vector search with Elasticsearch kNN
### Storage & Caching
- Redis Store & Cache -- persistent key-value storage and LLM caching with Redis
- SQLite Cache -- local LLM response caching with SQLite
### Loaders & Tools
- PDF Loader -- load documents from PDF files
- Tavily Search Tool -- web search tool for agents