Installation

Requirements

  • Rust edition: 2021
  • Minimum supported Rust version (MSRV): 1.88
  • Runtime: Tokio (async runtime)

Adding Synaptic to Your Project

The synaptic facade crate re-exports all sub-crates. Use feature flags to control which modules are compiled.

Feature Flags

Synaptic provides fine-grained feature flags, similar to tokio:

[dependencies]
# Full — everything enabled
synaptic = { version = "0.4", features = ["full"] }

# Agent development (tools + graph + memory + middleware + your chosen provider)
synaptic = { version = "0.4", features = ["openai", "agent"] }

# RAG applications (retrieval + loaders + splitters + embeddings + vectorstores)
synaptic = { version = "0.4", features = ["openai", "rag"] }

# Agent + RAG
synaptic = { version = "0.4", features = ["agent", "rag"] }

# Just OpenAI model calls
synaptic = { version = "0.4", features = ["openai"] }

# All providers
synaptic = { version = "0.4", features = ["models"] }

# Fine-grained: one provider + specific modules
synaptic = { version = "0.4", features = ["anthropic", "graph", "middleware"] }

Composite features:

| Feature | Description |
| ------- | ----------- |
| default | runnables, prompts, parsers, tools, callbacks |
| agent | default + openai, graph, memory, middleware, store, condenser, secrets, config, session |
| rag | default + openai, embeddings, retrieval, loaders, splitters, vectorstores |
| models | All providers: openai + anthropic + gemini + ollama + bedrock + cohere |
| full | All features enabled (all providers, integrations, otel, langfuse, store-filesystem, deep-config) |

Provider features (each enables one provider within synaptic-models):

| Feature | Description |
| ------- | ----------- |
| openai | OpenAI (OpenAiChatModel, OpenAiEmbeddings) |
| anthropic | Anthropic (AnthropicChatModel) |
| gemini | Google Gemini (GeminiChatModel) |
| ollama | Ollama (OllamaChatModel, OllamaEmbeddings) |
| bedrock | AWS Bedrock (BedrockChatModel) |
| cohere | Cohere (CohereReranker) |

OpenAI-compatible providers (Groq, DeepSeek, Mistral, Together, Fireworks, xAI, Perplexity) are enabled via their respective feature flags: groq, deepseek, mistral, together, fireworks, xai, perplexity.
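These flags combine with the composite and module features the same way as the first-party providers. A sketch (which feature combination you need depends on your application):

```toml
[dependencies]
# Groq via its OpenAI-compatible feature flag, together with the agent modules
synaptic = { version = "0.4", features = ["groq", "agent"] }
```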

Module features:

| Feature | Description |
| ------- | ----------- |
| graph | Graph orchestration (StateGraph, create_react_agent, InterceptorChain) |
| middleware | Interceptor chain (tool call limits, HITL, summarization, SSRF guard, circuit breaker) |
| memory | Memory strategies (buffer, window, summary, token buffer) |
| store | Persistence backends (postgres, redis, sqlite, mongodb) |
| mcp | Model Context Protocol client (Stdio/SSE/HTTP transports) |
| macros | Proc macros (#[tool], #[chain], #[entrypoint], #[traceable]) |
| deep | Deep Agent harness (ACP protocol, built-in tools, sub-agents, skills) |
| events | EventBus with 29 event kinds and 5 dispatch modes |
| config | Agent config loading + secrets masking + plugin system |

Integration features:

| Feature | Description |
| ------- | ----------- |
| qdrant | Qdrant vector store (via synaptic-rag) |
| postgres | PostgreSQL store, cache, vector store, checkpointer (via synaptic-store) |
| redis | Redis store + cache (via synaptic-store) |
| sqlite | SQLite store (via synaptic-store) |
| mongodb | MongoDB store (via synaptic-store) |
| pinecone | Pinecone vector store (via synaptic-rag) |
| chroma | Chroma vector store (via synaptic-rag) |
| elasticsearch | Elasticsearch vector store (via synaptic-rag) |
| opensearch | OpenSearch vector store (via synaptic-rag) |
| milvus | Milvus vector store (via synaptic-rag) |
| lancedb | LanceDB vector store (via synaptic-rag) |
| weaviate | Weaviate vector store (via synaptic-rag) |
| pdf | PDF document loader (via synaptic-tools) |
| tavily | Tavily search tool (via synaptic-integrations) |
| confluence | Confluence integration (via synaptic-integrations) |
| slack | Slack integration (via synaptic-integrations) |
| lark | Lark/Feishu bot framework (via synaptic-lark) |
| otel | OpenTelemetry tracing |
| langfuse | Langfuse observability |

The core module (traits and types) is always available regardless of feature selection.

Quick Start Example

[dependencies]
synaptic = { version = "0.4", features = ["openai", "agent"] }
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
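With those dependencies in place, a minimal chat call looks roughly like the following. This is an illustrative sketch only: the constructor and request-builder signatures (OpenAiChatModel::new, ChatRequest::new, Message::user, response.text()) and the model name are assumptions based on the type names in this chapter, not verified API.

```rust
use synaptic::core::{ChatModel, ChatRequest, Message};
use synaptic::models::OpenAiChatModel; // requires the "openai" feature

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Reads OPENAI_API_KEY from the environment (see "Provider API Keys" below).
    let model = OpenAiChatModel::new("gpt-4o-mini"); // model name is illustrative

    let request = ChatRequest::new(vec![Message::user("Hello, Synaptic!")]);
    let response = model.chat(request).await?;

    println!("{}", response.text());
    Ok(())
}
```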

Using the Facade

The facade crate provides namespaced re-exports for all sub-crates:

use synaptic::core::{ChatModel, ChatRequest, ChatResponse, Message, SynapticError};
use synaptic::models::{OpenAiChatModel, AnthropicChatModel};  // requires provider features
use synaptic::graph::{StateGraph, create_react_agent};
use synaptic::rag::{Retriever, InMemoryVectorStore, RecursiveCharacterTextSplitter};
use synaptic::middleware::{Interceptor, InterceptorChain};

Alternatively, you can depend on individual crates directly if you want to minimize compile times:

[dependencies]
synaptic-core = "0.4"
synaptic-models = { version = "0.4", features = ["openai"] }
synaptic-graph = "0.4"

Provider API Keys

Synaptic reads API keys from environment variables. Set the ones you need for your chosen provider:

| Provider | Environment Variable |
| -------- | -------------------- |
| OpenAI | OPENAI_API_KEY |
| Anthropic | ANTHROPIC_API_KEY |
| Google Gemini | GOOGLE_API_KEY |
| Ollama | No key required (runs locally) |

For example, on a Unix shell:

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="AI..."

You do not need any API keys to run the Quickstart example, which uses the ScriptedChatModel test double.

Building and Testing

From the workspace root:

# Build all crates
cargo build --workspace

# Run all tests
cargo test --workspace

# Test a single crate
cargo test -p synaptic-models

# Run a specific test by name
cargo test -p synaptic-core -- trim_messages

# Check formatting
cargo fmt --all -- --check

# Run lints
cargo clippy --workspace

Workspace Dependencies

Synaptic uses Cargo workspace-level dependency management. Key shared dependencies include:

  • async-trait -- async trait methods
  • serde / serde_json -- serialization
  • thiserror 2.0 -- error derive
  • tokio -- async runtime (macros, rt-multi-thread, sync, time)
  • reqwest -- HTTP client (json, stream features)
  • futures / async-stream -- stream utilities
  • tracing / tracing-subscriber -- structured logging
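
Workspace-level dependency management means each member crate inherits its versions from the root Cargo.toml rather than pinning them independently. A sketch of the mechanism (the exact versions and feature lists here are illustrative, not copied from the workspace):

```toml
# Root Cargo.toml (illustrative)
[workspace.dependencies]
async-trait = "0.1"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
thiserror = "2.0"
tokio = { version = "1", features = ["macros", "rt-multi-thread", "sync", "time"] }
reqwest = { version = "0.12", features = ["json", "stream"] }

# A member crate then opts in with `workspace = true`:
# [dependencies]
# tokio = { workspace = true }
```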