Quickstart
This guide walks you through a minimal Synaptic program that sends a chat request and prints the response. It uses ScriptedChatModel, a test double that returns pre-configured responses, so you do not need any API keys to run it.
The Complete Example
use synaptic::core::{ChatModel, ChatRequest, ChatResponse, Message, SynapticError};
use synaptic::models::ScriptedChatModel;

#[tokio::main]
async fn main() -> Result<(), SynapticError> {
    // 1. Create a scripted model with a predefined response.
    // ScriptedChatModel returns responses in order, one per chat() call.
    let model = ScriptedChatModel::new(vec![
        ChatResponse {
            message: Message::ai("Hello! I'm a Synaptic assistant. How can I help you today?"),
            usage: None,
        },
    ]);

    // 2. Build a chat request with a system prompt and a user message.
    let request = ChatRequest::new(vec![
        Message::system("You are a helpful assistant built with Synaptic."),
        Message::human("Hello! What are you?"),
    ]);

    // 3. Send the request and get a response.
    let response = model.chat(request).await?;

    // 4. Print the assistant's reply.
    println!("Assistant: {}", response.message.content());

    Ok(())
}
Running this program prints:
Assistant: Hello! I'm a Synaptic assistant. How can I help you today?
What Is Happening
- ScriptedChatModel::new(vec![...]) creates a chat model that returns the given ChatResponse values in sequence, one per chat() call (see the sketch after this list). This is useful for testing and examples without requiring a live API. In production, you would replace it with OpenAiChatModel (from synaptic::openai), AnthropicChatModel (from synaptic::anthropic), or another provider adapter.
- ChatRequest::new(messages) constructs a chat request from a vector of messages. Messages are created with factory methods: Message::system() for system prompts, Message::human() for user input, and Message::ai() for assistant responses.
- model.chat(request).await? sends the request asynchronously and returns a ChatResponse containing the model's message and optional token usage information.
- response.message.content() extracts the text content from the response message.
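To make the ordering concrete, here is a minimal sketch that queues two responses and drains them with two chat() calls. It reuses only the types from the example above and assumes chat() takes the model by shared reference, so the same instance can be called repeatedly:

use synaptic::core::{ChatModel, ChatRequest, ChatResponse, Message, SynapticError};
use synaptic::models::ScriptedChatModel;

#[tokio::main]
async fn main() -> Result<(), SynapticError> {
    // Queue two responses: the first chat() call returns the first,
    // the second call returns the second.
    let model = ScriptedChatModel::new(vec![
        ChatResponse { message: Message::ai("First reply."), usage: None },
        ChatResponse { message: Message::ai("Second reply."), usage: None },
    ]);

    let first = model.chat(ChatRequest::new(vec![Message::human("Hi")])).await?;
    let second = model.chat(ChatRequest::new(vec![Message::human("Hi again")])).await?;

    println!("{}", first.message.content());  // First reply.
    println!("{}", second.message.content()); // Second reply.
    Ok(())
}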
Using a Real Provider
To use OpenAI instead of the scripted model, replace the model creation:
use synaptic::openai::OpenAiChatModel;
// Reads OPENAI_API_KEY from the environment automatically.
let model = OpenAiChatModel::new("gpt-4o");
You will also need the "openai" feature enabled in your Cargo.toml:
[dependencies]
synaptic = { version = "0.2", features = ["openai"] }
The rest of the code stays the same -- ChatModel::chat() has the same signature regardless of provider.
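Because every provider implements the ChatModel trait, you can write request-handling code once and reuse it with any provider. The sketch below is illustrative rather than part of the library: the helper name ask is hypothetical, and it assumes ChatModel is an async trait that works as a generic bound with chat() taken by shared reference:

use synaptic::core::{ChatModel, ChatRequest, Message, SynapticError};

// Hypothetical helper: accepts ScriptedChatModel, OpenAiChatModel, or any
// other ChatModel implementation, since chat() has the same signature everywhere.
async fn ask<M: ChatModel>(model: &M, prompt: &str) -> Result<String, SynapticError> {
    let request = ChatRequest::new(vec![Message::human(prompt)]);
    let response = model.chat(request).await?;
    Ok(response.message.content().to_string())
}

With a helper like this, swapping the scripted model for a real provider in production requires no changes to the calling code.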
Next Steps
- Build a Simple LLM Application -- Chain prompts with output parsers
- Build a Chatbot with Memory -- Add conversation history
- Build a ReAct Agent -- Give your model tools to call
- Build a RAG Application -- Retrieve documents for context
- Architecture Overview -- Understand the crate structure