Architecture

Synaptic is organized as a workspace of focused Rust crates. Each crate owns exactly one concern, and they compose together through shared traits defined in a single core crate. This page explains the layered design, the principles behind it, and how the crates depend on each other.

Design Principles

Async-first. Every trait in Synaptic is async via #[async_trait], and the runtime is tokio. This is not an afterthought bolted onto a synchronous API -- async is the foundation. LLM calls, tool execution, memory access, and embedding queries are all naturally asynchronous operations, and Synaptic models them as such from the start.
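
As an illustration of that shape, here is a minimal async trait sketch. The exact ChatModel signature in synaptic-core may differ; the stand-in types exist only to keep the snippet self-contained:

use async_trait::async_trait;

// Stand-in types; the real ones live in synaptic-core.
pub struct ChatRequest;
pub struct ChatResponse;
pub struct SynapticError;

// An async-first trait: implementors (a provider client, a test double, a
// caching wrapper) all return futures, so callers can await, join, or race
// calls under tokio.
#[async_trait]
pub trait ChatModel: Send + Sync {
    async fn chat(&self, request: ChatRequest) -> Result<ChatResponse, SynapticError>;
}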

One crate, one concern. Each provider has its own crate: synaptic-openai, synaptic-anthropic, synaptic-gemini, synaptic-ollama. The synaptic-tools crate knows how to register and execute tools. The synaptic-memory crate knows how to store and retrieve conversation history. No crate does two jobs. This keeps compile times manageable, makes it possible to use only what you need, and ensures that changes to one subsystem do not cascade across the codebase.

Shared traits in core. The synaptic-core crate defines every trait and type that crosses crate boundaries: ChatModel, Tool, MemoryStore, CallbackHandler, Message, ChatRequest, ChatResponse, ToolCall, SynapticError, RunnableConfig, and more. Implementation crates depend on core, never on each other (unless composition requires it).

Concurrency-safe by default. Shared registries use Arc<RwLock<_>> (standard library RwLock for low-contention read-heavy data like tool registries). Mutable state that requires async access -- callbacks, memory stores, checkpointers -- uses Arc<tokio::sync::Mutex<_>> or Arc<tokio::sync::RwLock<_>>. All core traits require Send + Sync.
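
A small sketch of the two locking patterns using only std and tokio; the HashMap and Vec below are placeholders, not Synaptic's concrete types:

use std::collections::HashMap;
use std::sync::{Arc, RwLock};

#[tokio::main]
async fn main() {
    // Read-heavy, low-contention data (e.g. a tool registry): std RwLock.
    let registry: Arc<RwLock<HashMap<String, &'static str>>> =
        Arc::new(RwLock::new(HashMap::new()));
    registry.write().unwrap().insert("search".to_string(), "a web search tool");

    // State that is held across .await points (e.g. a memory store): tokio Mutex.
    let history = Arc::new(tokio::sync::Mutex::new(Vec::<String>::new()));
    history.lock().await.push("Hello".to_string());

    let tools = registry.read().unwrap().len();
    let messages = history.lock().await.len();
    println!("{tools} tools, {messages} messages");
}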

Session isolation. Memory, agent runs, and graph checkpoints are keyed by a session or thread identifier. Two concurrent conversations never interfere with each other, even when they share the same model and tool instances.
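
As an illustration only (Synaptic's actual MemoryStore trait is richer), session isolation boils down to keying every piece of state by the session identifier:

use std::collections::HashMap;

// Each session id owns its own history; two concurrent conversations that
// share the same model and tools still never see each other's messages.
#[derive(Default)]
struct SessionMemory {
    histories: HashMap<String, Vec<String>>,
}

impl SessionMemory {
    fn append(&mut self, session_id: &str, message: String) {
        self.histories.entry(session_id.to_string()).or_default().push(message);
    }

    fn history(&self, session_id: &str) -> &[String] {
        self.histories.get(session_id).map(Vec::as_slice).unwrap_or(&[])
    }
}

fn main() {
    let mut memory = SessionMemory::default();
    memory.append("session-a", "Hi from A".to_string());
    memory.append("session-b", "Hi from B".to_string());
    assert_eq!(memory.history("session-a").len(), 1); // B's message is invisible here
}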

Event-driven observability. The RunEvent enum captures every significant lifecycle event (run started, LLM called, tool called, run finished, run failed). Callback handlers receive these events asynchronously, enabling logging, tracing, recording, and custom side effects without modifying application code.
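
A sketch of the pattern; the method name on_event and the variant fields below are assumptions made to keep the example self-contained, not the exact synaptic-core definitions:

use async_trait::async_trait;

// Illustrative stand-in for the RunEvent lifecycle variants.
#[derive(Debug)]
pub enum RunEvent {
    RunStarted { run_id: String },
    LlmCalled { model: String },
    ToolCalled { name: String },
    RunFinished { run_id: String },
    RunFailed { run_id: String, error: String },
}

#[async_trait]
pub trait CallbackHandler: Send + Sync {
    async fn on_event(&self, event: &RunEvent);
}

// A handler that only logs; a real handler might record traces or metrics.
pub struct LoggingCallback;

#[async_trait]
impl CallbackHandler for LoggingCallback {
    async fn on_event(&self, event: &RunEvent) {
        println!("run event: {event:?}");
    }
}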

The Four Layers

Synaptic's crates fall into four layers, each building on the ones below it.

Layer 1: Core

synaptic-core is the foundation. It defines:

  • Traits: ChatModel, Tool, MemoryStore, CallbackHandler
  • Message types: The Message enum (System, Human, AI, Tool, Chat, Remove), AIMessageChunk for streaming, ToolCall, ToolDefinition, ToolChoice
  • Request/response: ChatRequest, ChatResponse, TokenUsage
  • Streaming: The ChatStream type alias (Pin<Box<dyn Stream<Item = Result<AIMessageChunk, SynapticError>> + Send>>)
  • Configuration: RunnableConfig (tags, metadata, concurrency limits, run IDs)
  • Events: RunEvent enum with six lifecycle variants
  • Errors: SynapticError enum with 19 variants spanning all subsystems

Every other crate in the workspace depends on synaptic-core, and synaptic-core itself depends on no other crate in the workspace; it is the single shared foundation through which the rest of the crates interoperate.
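
Because ChatStream is an ordinary pinned, boxed Stream, it can be consumed with futures::StreamExt. A sketch with stand-in chunk and error types so it compiles on its own:

use std::pin::Pin;
use futures::{stream, Stream, StreamExt};

// Stand-ins for the synaptic-core types, shown only to make the alias concrete.
pub struct AIMessageChunk { pub content: String }
#[derive(Debug)]
pub struct SynapticError;

pub type ChatStream = Pin<Box<dyn Stream<Item = Result<AIMessageChunk, SynapticError>> + Send>>;

#[tokio::main]
async fn main() {
    // A fake two-chunk stream standing in for a model's streaming response.
    let mut s: ChatStream = Box::pin(stream::iter(vec![
        Ok(AIMessageChunk { content: "Hel".to_string() }),
        Ok(AIMessageChunk { content: "lo".to_string() }),
    ]));

    let mut full = String::new();
    while let Some(chunk) = s.next().await {
        match chunk {
            Ok(c) => full.push_str(&c.content),
            Err(e) => eprintln!("stream error: {e:?}"),
        }
    }
    println!("{full}"); // "Hello"
}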

Layer 2: Implementation Crates

Each crate implements one core concern:

Crate                   Purpose
synaptic-models         ProviderBackend abstraction, test doubles (ScriptedChatModel), wrappers (RetryChatModel, RateLimitedChatModel, StructuredOutputChatModel<T>, BoundToolsChatModel)
synaptic-openai         OpenAiChatModel + OpenAiEmbeddings
synaptic-anthropic      AnthropicChatModel
synaptic-gemini         GeminiChatModel
synaptic-ollama         OllamaChatModel + OllamaEmbeddings
synaptic-tools          ToolRegistry, SerialToolExecutor, ParallelToolExecutor, HandleErrorTool, ReturnDirectTool
synaptic-memory         InMemoryStore and strategy types: Buffer, Window, Summary, TokenBuffer, SummaryBuffer, RunnableWithMessageHistory, FileChatMessageHistory
synaptic-callbacks      RecordingCallback, TracingCallback, CompositeCallback
synaptic-prompts        PromptTemplate, ChatPromptTemplate, FewShotChatMessagePromptTemplate, ExampleSelector
synaptic-parsers        Output parsers: StrOutputParser, JsonOutputParser, StructuredOutputParser<T>, ListOutputParser, EnumOutputParser, BooleanOutputParser, MarkdownListOutputParser, NumberedListOutputParser, XmlOutputParser, RetryOutputParser, FixingOutputParser
synaptic-cache          InMemoryCache, SemanticCache, CachedChatModel
synaptic-eval           Evaluators (ExactMatch, JsonValidity, RegexMatch, EmbeddingDistance, LLMJudge), Dataset, batch evaluation pipeline
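
Wrappers such as RetryChatModel and CachedChatModel in the table above follow the decorator pattern: the wrapper implements the same trait as the model it wraps. A simplified stand-in (the trait and types here are placeholders, not Synaptic's real signatures):

use async_trait::async_trait;

#[async_trait]
trait Model: Send + Sync {
    async fn chat(&self, prompt: String) -> Result<String, String>;
}

// Wraps any Model and retries failed calls; callers see just another Model.
struct Retrying<M> {
    inner: M,
    max_attempts: u32,
}

#[async_trait]
impl<M: Model> Model for Retrying<M> {
    async fn chat(&self, prompt: String) -> Result<String, String> {
        let mut last_err = String::new();
        for _ in 0..self.max_attempts {
            match self.inner.chat(prompt.clone()).await {
                Ok(reply) => return Ok(reply),
                Err(e) => last_err = e, // try again on failure
            }
        }
        Err(last_err)
    }
}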

Layer 3: Composition and Retrieval

These crates combine the implementation crates into higher-level abstractions:

Crate                     Purpose
synaptic-runnables        The LCEL system: Runnable trait, BoxRunnable with pipe operator, RunnableSequence, RunnableParallel, RunnableBranch, RunnableWithFallbacks, RunnableAssign, RunnablePick, RunnableEach, RunnableRetry, RunnableGenerator
synaptic-graph            LangGraph-style state machines: StateGraph builder, CompiledGraph, Node trait, ToolNode, create_react_agent(), checkpointing, streaming, visualization
synaptic-loaders          Document loaders: TextLoader, JsonLoader, CsvLoader, DirectoryLoader, FileLoader, MarkdownLoader, WebLoader
synaptic-splitters        Text splitters: CharacterTextSplitter, RecursiveCharacterTextSplitter, MarkdownHeaderTextSplitter, HtmlHeaderTextSplitter, LanguageTextSplitter, TokenTextSplitter
synaptic-embeddings       Embeddings trait, FakeEmbeddings, CacheBackedEmbeddings
synaptic-vectorstores     VectorStore trait, InMemoryVectorStore, MultiVectorRetriever
synaptic-retrieval        Retriever trait and seven implementations: InMemory, BM25, MultiQuery, Ensemble, ContextualCompression, SelfQuery, ParentDocument
synaptic-qdrant           QdrantVectorStore (Qdrant integration)
synaptic-pgvector         PgVectorStore (PostgreSQL pgvector integration)
synaptic-redis            RedisStore + RedisCache (Redis integration)
synaptic-pdf              PdfLoader (PDF document loading)

Layer 4: Facade

The synaptic crate re-exports everything from all sub-crates under a unified namespace. Application code can use a single dependency:

[dependencies]
synaptic = "0.2"
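
Provider crates sit behind Cargo features (as the comments in the import example below note), so a typical manifest enables only the providers it needs:

[dependencies]
synaptic = { version = "0.2", features = ["openai", "anthropic"] }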

And then import from organized modules:

use synaptic::core::{Message, ChatRequest};
use synaptic::openai::OpenAiChatModel;       // requires "openai" feature
use synaptic::anthropic::AnthropicChatModel; // requires "anthropic" feature
use synaptic::graph::{create_react_agent, MessageState};
use synaptic::runnables::{BoxRunnable, Runnable};

Crate Dependency Diagram

                        synaptic (facade)
                              |
         +--------------------+--------------------+
         |                    |                    |
   synaptic-graph      synaptic-runnables     synaptic-eval
         |                    |                    |
   synaptic-tools             |            synaptic-embeddings
         |                    |                    |
         +--------------------+--------------------+
                              |
                        synaptic-core

   Implementation crates (each depends directly on synaptic-core):

   synaptic-models, synaptic-memory, synaptic-callbacks, synaptic-prompts,
   synaptic-parsers, and the provider crates synaptic-openai,
   synaptic-anthropic, synaptic-gemini, synaptic-ollama

   Retrieval pipeline (all depend on synaptic-core):

   synaptic-loaders --> synaptic-splitters --> synaptic-embeddings
                                                   |
                                            synaptic-vectorstores
                                                   |
                                            synaptic-retrieval

   Integration crates (each depends on synaptic-core):

   synaptic-qdrant, synaptic-pgvector, synaptic-redis, synaptic-pdf

In these diagrams, edges point toward dependencies, and every crate ultimately depends on synaptic-core. The composition crates (synaptic-graph, synaptic-runnables) additionally depend on the implementation crates they orchestrate.

Provider Abstraction

Each LLM provider lives in its own crate (synaptic-openai, synaptic-anthropic, synaptic-gemini, synaptic-ollama). They all use the ProviderBackend trait from synaptic-models to separate HTTP concerns from protocol mapping. HttpBackend makes real HTTP requests; FakeBackend returns scripted responses for testing. This means you can test any code that uses ChatModel without network access and without mocking at the HTTP level. You only compile the providers you actually use.
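
A simplified stand-in for that split (not the real ProviderBackend signature): the backend only moves bytes, while the provider crate owns the mapping between Synaptic types and the provider's wire format.

use async_trait::async_trait;

// The backend knows nothing about any particular provider's protocol.
#[async_trait]
trait Backend: Send + Sync {
    async fn post_json(&self, body: String) -> Result<String, String>;
}

// A scripted backend for tests: no network and no HTTP-level mocking.
struct FakeBackend {
    canned_response: String,
}

#[async_trait]
impl Backend for FakeBackend {
    async fn post_json(&self, _body: String) -> Result<String, String> {
        Ok(self.canned_response.clone())
    }
}

// A provider model would hold a Box<dyn Backend>, serialize its ChatRequest
// into the provider's JSON, and parse the JSON reply into a ChatResponse.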

The Runnable Abstraction

The Runnable<I, O> trait in synaptic-runnables is the universal composition primitive. Prompt templates, output parsers, chat models, and entire graphs can all be treated as runnables. They compose via the | pipe operator into chains that can be invoked, batched, or streamed. See Runnables & LCEL for details.
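
The idea can be sketched without Synaptic's actual types: each stage is an async input-to-output mapping, and a sequence of stages is itself a stage. The trait below is a stand-in, not the real Runnable<I, O>:

use async_trait::async_trait;

// Stand-in for the Runnable idea, specialized to String -> String stages.
#[async_trait]
trait StringRunnable: Send + Sync {
    async fn invoke(&self, input: String) -> String;
}

struct Uppercase;
#[async_trait]
impl StringRunnable for Uppercase {
    async fn invoke(&self, input: String) -> String { input.to_uppercase() }
}

struct Exclaim;
#[async_trait]
impl StringRunnable for Exclaim {
    async fn invoke(&self, input: String) -> String { format!("{input}!") }
}

// A two-stage sequence: the hand-rolled equivalent of chaining with `|`.
struct Sequence(Box<dyn StringRunnable>, Box<dyn StringRunnable>);
#[async_trait]
impl StringRunnable for Sequence {
    async fn invoke(&self, input: String) -> String {
        self.1.invoke(self.0.invoke(input).await).await
    }
}

#[tokio::main]
async fn main() {
    let chain = Sequence(Box::new(Uppercase), Box::new(Exclaim));
    println!("{}", chain.invoke("hello".to_string()).await); // HELLO!
}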

The Graph Abstraction

The StateGraph builder in synaptic-graph provides a higher-level orchestration model for complex workflows. Where LCEL chains are linear pipelines (with branching), graphs support cycles, conditional routing, checkpointing, human-in-the-loop interrupts, and dynamic control flow via GraphCommand. See Graph for details.
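
To make the contrast with linear chains concrete, here is a stand-in state machine that uses no Synaptic types: a node mutates shared state and a routing function chooses the next node, which is how cycles and conditional edges arise.

// Illustrative only; StateGraph builds this kind of loop declaratively.
#[derive(Debug)]
struct State {
    question: String,
    attempts: u32,
    answer: Option<String>,
}

fn agent_node(state: &mut State) {
    state.attempts += 1;
    if state.attempts >= 2 {
        state.answer = Some(format!("answer to '{}'", state.question));
    }
}

// Conditional edge: loop back to the agent node until an answer exists.
fn route(state: &State) -> &'static str {
    if state.answer.is_some() { "end" } else { "agent" }
}

fn main() {
    let mut state = State { question: "what is 2 + 2?".to_string(), attempts: 0, answer: None };
    let mut node = "agent";
    while node != "end" {
        agent_node(&mut state); // run the current node
        node = route(&state);   // pick the next node (may cycle)
    }
    println!("{state:?}");
}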

See Also