Build a ReAct Agent

This tutorial walks you through building a ReAct (Reasoning + Acting) agent that can decide when to call tools and when to respond to the user. You will define a custom tool, wire it into a prebuilt agent graph, and watch the agent loop through reasoning and tool execution.

What is a ReAct Agent?

A ReAct agent follows a loop:

  1. Reason -- The LLM looks at the conversation so far and decides what to do next.
  2. Act -- If the LLM determines it needs information, it emits one or more tool calls.
  3. Observe -- The tool results are added to the conversation as Tool messages.
  4. Repeat -- The LLM reviews the tool output and either calls more tools or produces a final answer.

Synaptic provides create_react_agent(model, tools), which builds a compiled StateGraph that implements this loop automatically.
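Before reaching for the prebuilt graph, it can help to see the loop in miniature. The sketch below is library-independent: every type and function in it is an illustrative stand-in, not Synaptic's API.

```rust
// A library-independent sketch of the ReAct loop. Every type and
// function here is an illustrative stand-in, not Synaptic's API.

enum Msg {
    Human(String),
    Ai { text: String, tool_calls: Vec<(i64, i64)> }, // arguments for "add"
    Tool(String),
}

// A scripted "model": requests a tool call on the first turn,
// answers once a tool result is present in the conversation.
fn model(messages: &[Msg]) -> Msg {
    let has_tool_output = messages.iter().any(|m| matches!(m, Msg::Tool(_)));
    if !has_tool_output {
        // Reason -> Act: emit a tool call
        Msg::Ai { text: "calling add".into(), tool_calls: vec![(7, 5)] }
    } else {
        // Tool output observed: produce the final answer
        Msg::Ai { text: "The result is 12.".into(), tool_calls: vec![] }
    }
}

fn run_react_loop(question: &str) -> String {
    let mut messages = vec![Msg::Human(question.into())];
    loop {
        let reply = model(&messages);
        let (text, calls) = match &reply {
            Msg::Ai { text, tool_calls } => (text.clone(), tool_calls.clone()),
            _ => (String::new(), vec![]),
        };
        messages.push(reply);
        if calls.is_empty() {
            return text; // no tool calls: the loop terminates
        }
        for (a, b) in calls {
            // Observe: append the tool result to the conversation
            messages.push(Msg::Tool(format!("{{\"value\": {}}}", a + b)));
        }
    }
}

fn main() {
    println!("{}", run_react_loop("What is 7 + 5?"));
}
```

`create_react_agent` encapsulates exactly this control flow, with the tool dispatch and message bookkeeping handled by the graph's nodes and edges.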

Prerequisites

Add the required crates to your Cargo.toml:

[dependencies]
synaptic = { version = "0.2", features = ["agent", "macros"] }
async-trait = "0.1"
serde_json = "1"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }

Step 1: Define a Custom Tool

The easiest way to define a tool in Synaptic is with the #[tool] macro. Write an async function, add a doc comment (this becomes the description the LLM sees), and the macro generates the struct, Tool trait implementation, and a factory function automatically.

use serde_json::json;
use synaptic::core::SynapticError;
use synaptic::macros::tool;

/// Adds two numbers.
#[tool]
async fn add(
    /// The first number
    a: i64,
    /// The second number
    b: i64,
) -> Result<serde_json::Value, SynapticError> {
    Ok(json!({ "value": a + b }))
}

The function parameters are automatically mapped to a JSON Schema that tells the LLM what arguments to provide. Parameter doc comments become "description" fields in the schema. In production, you can use Option<T> for optional parameters and #[default = value] for defaults. See Procedural Macros for the full reference.
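The macro does not print the schema it generates, but most LLM tool-calling APIs consume something shaped like the JSON below. The exact field layout `#[tool]` emits may differ; treat this as an illustration of what the model sees for `add`:

```json
{
  "name": "add",
  "description": "Adds two numbers.",
  "parameters": {
    "type": "object",
    "properties": {
      "a": { "type": "integer", "description": "The first number" },
      "b": { "type": "integer", "description": "The second number" }
    },
    "required": ["a", "b"]
  }
}
```

Note how the function doc comment became the tool-level `description` and the parameter doc comments became per-property descriptions.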

Step 2: Create a Chat Model

For this tutorial we build a simple demo model that simulates the ReAct loop. On the first call (when there is no tool output in the conversation yet), it returns a tool call. On the second call (after tool output has been added), it returns a final text answer.

use async_trait::async_trait;
use serde_json::json;
use synaptic::core::{ChatModel, ChatRequest, ChatResponse, Message, SynapticError, ToolCall};

struct DemoModel;

#[async_trait]
impl ChatModel for DemoModel {
    async fn chat(&self, request: ChatRequest) -> Result<ChatResponse, SynapticError> {
        let has_tool_output = request.messages.iter().any(|m| m.is_tool());

        if !has_tool_output {
            // First turn: ask to call the "add" tool
            Ok(ChatResponse {
                message: Message::ai_with_tool_calls(
                    "I will use a tool to calculate this.",
                    vec![ToolCall {
                        id: "call-1".to_string(),
                        name: "add".to_string(),
                        arguments: json!({ "a": 7, "b": 5 }),
                    }],
                ),
                usage: None,
            })
        } else {
            // Second turn: the tool result is in, produce the final answer
            Ok(ChatResponse {
                message: Message::ai("The result is 12."),
                usage: None,
            })
        }
    }
}

In a real application you would use one of the provider adapters (OpenAiChatModel from synaptic::openai, AnthropicChatModel from synaptic::anthropic, etc.) instead of a scripted model.

Step 3: Build the Agent Graph

create_react_agent takes a model and a vector of tools, and returns a CompiledGraph<MessageState>. Under the hood, it creates two nodes:

  • "agent" -- calls the ChatModel with the current messages and tool definitions.
  • "tools" -- executes any tool calls from the agent's response using a ToolNode.

A conditional edge routes from "agent" to "tools" if the response contains tool calls, or to END if it does not. An unconditional edge routes from "tools" back to "agent" so the model can review the results.
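The routing rule itself amounts to a one-line check. Here is a library-independent sketch; the `AiMessage` type is an illustrative stand-in, not Synaptic's `Message`:

```rust
// Illustrative stand-in for the message the conditional edge inspects.
struct AiMessage {
    tool_calls: Vec<String>, // names of requested tools; empty if none
}

// Route to "tools" when the last AI message requests tool calls,
// otherwise terminate the graph at END.
fn route(last: &AiMessage) -> &'static str {
    if last.tool_calls.is_empty() {
        "END"
    } else {
        "tools"
    }
}

fn main() {
    let asking = AiMessage { tool_calls: vec!["add".into()] };
    let done = AiMessage { tool_calls: vec![] };
    println!("{} {}", route(&asking), route(&done)); // tools END
}
```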

use std::sync::Arc;
use synaptic::core::Tool;
use synaptic::graph::create_react_agent;

let model = Arc::new(DemoModel);
let tools: Vec<Arc<dyn Tool>> = vec![add()];

let graph = create_react_agent(model, tools).unwrap();

The add() factory function (generated by #[tool]) returns Arc<dyn Tool>, so it can be used directly in the tools vector. The model is wrapped in Arc because the graph needs shared ownership -- nodes may be invoked concurrently in more complex workflows.
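The shared-ownership point is plain Rust rather than anything Synaptic-specific. This minimal sketch (with a stand-in `Model` type) shows why `Arc` is the right wrapper when several workers need the same model concurrently:

```rust
use std::sync::Arc;
use std::thread;

// A stand-in for a chat model: immutable shared state.
struct Model {
    name: &'static str,
}

fn main() {
    let model = Arc::new(Model { name: "demo" });

    // Each "node" clones the Arc handle (cheap: bumps a refcount),
    // not the model itself, and can use it from its own thread.
    let handles: Vec<_> = (0..2)
        .map(|i| {
            let m = Arc::clone(&model);
            thread::spawn(move || format!("node {} uses {}", i, m.name))
        })
        .collect();

    for h in handles {
        println!("{}", h.join().unwrap());
    }
}
```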

Step 4: Run the Agent

Create an initial MessageState with the user's question and invoke the graph:

use synaptic::core::Message;
use synaptic::graph::MessageState;

let initial_state = MessageState {
    messages: vec![Message::human("What is 7 + 5?")],
};

let result = graph.invoke(initial_state).await.unwrap();

let last = result.last_message().unwrap();
println!("agent answer: {}", last.content());
// Output: agent answer: The result is 12.

MessageState is the built-in state type for conversational agents. It holds a Vec<Message> that grows as the agent loop progresses. After invocation, last_message() returns the final message in the conversation -- typically the agent's answer.

Full Working Example

Here is the complete program that ties all the pieces together:

use std::sync::Arc;
use async_trait::async_trait;
use serde_json::json;
use synaptic::core::{ChatModel, ChatRequest, ChatResponse, Message, SynapticError, Tool, ToolCall};
use synaptic::graph::{create_react_agent, MessageState};
use synaptic::macros::tool;

// --- Model ---

struct DemoModel;

#[async_trait]
impl ChatModel for DemoModel {
    async fn chat(&self, request: ChatRequest) -> Result<ChatResponse, SynapticError> {
        let has_tool_output = request.messages.iter().any(|m| m.is_tool());
        if !has_tool_output {
            Ok(ChatResponse {
                message: Message::ai_with_tool_calls(
                    "I will use a tool to calculate this.",
                    vec![ToolCall {
                        id: "call-1".to_string(),
                        name: "add".to_string(),
                        arguments: json!({ "a": 7, "b": 5 }),
                    }],
                ),
                usage: None,
            })
        } else {
            Ok(ChatResponse {
                message: Message::ai("The result is 12."),
                usage: None,
            })
        }
    }
}

// --- Tool ---

/// Adds two numbers.
#[tool]
async fn add(
    /// The first number
    a: i64,
    /// The second number
    b: i64,
) -> Result<serde_json::Value, SynapticError> {
    Ok(json!({ "value": a + b }))
}

// --- Main ---

#[tokio::main]
async fn main() -> Result<(), SynapticError> {
    let model = Arc::new(DemoModel);
    let tools: Vec<Arc<dyn Tool>> = vec![add()];

    let graph = create_react_agent(model, tools)?;

    let initial_state = MessageState {
        messages: vec![Message::human("What is 7 + 5?")],
    };

    let result = graph.invoke(initial_state).await?;
    let last = result.last_message().unwrap();
    println!("agent answer: {}", last.content());
    Ok(())
}

How the Loop Executes

Here is the sequence of events when you run this example:

| Step | Node | What happens |
|------|------|--------------|
| 1 | agent | Receives [Human("What is 7 + 5?")]. Returns an AI message with a ToolCall for add(a=7, b=5). |
| 2 | routing | The conditional edge sees tool calls in the last message and routes to tools. |
| 3 | tools | ToolNode looks up "add" in the registry, calls the add tool's call method, and appends a Tool message with {"value": 12}. |
| 4 | edge | The unconditional edge routes from tools back to agent. |
| 5 | agent | Receives the full conversation including the tool result. Returns AI("The result is 12.") with no tool calls. |
| 6 | routing | No tool calls in the last message, so the conditional edge routes to END. |

The graph terminates and returns the final MessageState.

Next Steps