xAI Grok

xAI develops the Grok family of LLMs, known for real-time reasoning and integration with X (Twitter) data. The Grok API is OpenAI-compatible.

xAI is available as a compatibility submodule inside synaptic-models. No separate crate is needed.

Setup

```toml
[dependencies]
synaptic = { version = "0.4", features = ["openai"] }
```

Sign up at x.ai to obtain an API key.
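Rather than hardcoding the key, you can read it from the environment. A minimal sketch; the `XAI_API_KEY` variable name and the `api_key` helper are illustrative conventions, not part of synaptic:

```rust
use std::env;

// Read the xAI API key from the environment instead of hardcoding it.
// XAI_API_KEY is an assumed variable name, not mandated by synaptic.
fn api_key() -> String {
    env::var("XAI_API_KEY").unwrap_or_else(|_| "xai-your-api-key".to_string())
}
```

The fallback keeps examples runnable locally; in production you would likely fail fast instead (e.g. `expect("XAI_API_KEY must be set")`).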

Configuration

```rust
use synaptic::openai::compat::xai::{self, XaiModel};
use synaptic::models::HttpBackend;
use std::sync::Arc;

let model = xai::chat_model(
    "xai-your-api-key",
    XaiModel::Grok2Latest.to_string(),
    Arc::new(HttpBackend::new()),
);
```

Builder methods

Use OpenAiConfig builder methods for customization:

```rust
use synaptic::openai::compat::xai::{self, XaiModel};
use synaptic::openai::OpenAiChatModel;
use synaptic::models::HttpBackend;
use std::sync::Arc;

let config = xai::config("xai-your-api-key", XaiModel::Grok2Latest.to_string())
    .with_temperature(0.7)
    .with_max_tokens(8192);

let model = OpenAiChatModel::new(config, Arc::new(HttpBackend::new()));
```

Available Models

| Enum Variant | API Model ID | Best For |
|---|---|---|
| `Grok2Latest` | `grok-2-latest` | General purpose (recommended) |
| `Grok2Mini` | `grok-2-mini` | Fast, cost-effective |
| `GrokBeta` | `grok-beta` | Legacy compatibility |
| `Custom(String)` | (any) | Preview models |
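The `Custom(String)` variant targets model IDs that have no named variant yet. A sketch, assuming `Custom` stringifies to the wrapped ID; the model ID used here is purely illustrative:

```rust
use synaptic::openai::compat::xai::{self, XaiModel};
use synaptic::models::HttpBackend;
use std::sync::Arc;

// Target a preview model by its raw API ID via Custom(String).
// "grok-vision-beta" is an illustrative placeholder, not a verified ID.
let model = xai::chat_model(
    "xai-your-api-key",
    XaiModel::Custom("grok-vision-beta".to_string()).to_string(),
    Arc::new(HttpBackend::new()),
);
```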

Usage

```rust
use synaptic::openai::compat::xai::{self, XaiModel};
use synaptic::core::{ChatModel, ChatRequest, Message};
use synaptic::models::HttpBackend;
use std::sync::Arc;

let model = xai::chat_model(
    "xai-your-api-key",
    XaiModel::Grok2Latest.to_string(),
    Arc::new(HttpBackend::new()),
);

let request = ChatRequest::new(vec![
    Message::system("You are Grok, a helpful AI with wit and humor."),
    Message::human("What's happening in AI today?"),
]);

let response = model.chat(request).await?;
println!("{}", response.message.content());
```

Streaming

```rust
use futures::StreamExt;

let request = ChatRequest::new(vec![
    Message::human("Give me a quick summary of today's AI trends."),
]);

let mut stream = model.stream_chat(request);
while let Some(chunk) = stream.next().await {
    print!("{}", chunk?.content);
}
println!();
```

Configuration Reference

All configuration is done through OpenAiConfig builder methods. See the OpenAI-Compatible Providers page for the full reference.

| Method | Description |
|---|---|
| `.with_temperature(f64)` | Sampling temperature (0.0–2.0) |
| `.with_max_tokens(u32)` | Maximum tokens to generate |
| `.with_top_p(f64)` | Nucleus sampling threshold |
| `.with_stop(Vec<String>)` | Stop sequences |
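These methods chain on the same builder, so they can be combined freely. A sketch using only the methods listed above; the specific values are illustrative, not recommendations:

```rust
use synaptic::openai::compat::xai::{self, XaiModel};
use synaptic::openai::OpenAiChatModel;
use synaptic::models::HttpBackend;
use std::sync::Arc;

// Combine several builder methods from the reference table.
// Low temperature plus a stop sequence suits short, deterministic outputs.
let config = xai::config("xai-your-api-key", XaiModel::Grok2Mini.to_string())
    .with_temperature(0.2)
    .with_top_p(0.9)
    .with_max_tokens(256)
    .with_stop(vec!["\n\n".to_string()]);

let model = OpenAiChatModel::new(config, Arc::new(HttpBackend::new()));
```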