Perplexity AI

Perplexity AI provides online search-augmented language models through its Sonar model family. Unlike traditional LLMs, Sonar models access real-time web information and return cited sources, making them ideal for factual queries and research tasks.

Perplexity AI is available as a compatibility submodule inside synaptic-models. No separate crate is needed.

Setup

[dependencies]
synaptic = { version = "0.4", features = ["openai"] }

Sign up at perplexity.ai to obtain an API key (prefixed with pplx-).
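Rather than hardcoding the key in source, you can read it from an environment variable at startup. A minimal sketch, assuming the key is exported as `PERPLEXITY_API_KEY` (that variable name is a convention of this example, not something synaptic requires):

```rust
use std::env;

fn main() {
    // Read the key from the environment; fall back to a placeholder for this sketch.
    let key = env::var("PERPLEXITY_API_KEY").unwrap_or_else(|_| "pplx-example-key".to_string());

    // Perplexity keys are documented to start with "pplx-"; catch obvious misconfiguration early.
    assert!(key.starts_with("pplx-"), "expected an API key prefixed with pplx-");
    println!("key loaded ({} chars)", key.len());
}
```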

Configuration

use synaptic::openai::compat::perplexity::{self, PerplexityModel};
use synaptic::models::HttpBackend;
use std::sync::Arc;

let model = perplexity::chat_model(
    "pplx-your-api-key",
    PerplexityModel::SonarLarge.to_string(),
    Arc::new(HttpBackend::new()),
);

Builder Methods

Use OpenAiConfig builder methods for customization:

use synaptic::openai::compat::perplexity::{self, PerplexityModel};
use synaptic::openai::OpenAiChatModel;
use synaptic::models::HttpBackend;
use std::sync::Arc;

let config = perplexity::config("pplx-your-api-key", PerplexityModel::SonarLarge.to_string())
    .with_temperature(0.2)
    .with_max_tokens(1024);

let model = OpenAiChatModel::new(config, Arc::new(HttpBackend::new()));

Available Models

| Enum Variant | API Model ID | Best For |
|---|---|---|
| SonarLarge | sonar-large-online | General web search (recommended) |
| SonarSmall | sonar-small-online | Fast, cost-effective web search |
| SonarHuge | sonar-huge-online | Maximum quality web search |
| SonarReasoningPro | sonar-reasoning-pro | Complex reasoning with citations |
| Custom(String) | (any) | Preview models |
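The table's variant-to-ID mapping can be illustrated with a minimal stand-in enum (the real type is `synaptic::openai::compat::perplexity::PerplexityModel`; this local copy only sketches the `to_string()` behavior the table implies, and `"sonar-preview"` is a hypothetical model ID):

```rust
use std::fmt;

// Local stand-in mirroring the table above, for illustration only.
enum PerplexityModel {
    SonarLarge,
    SonarSmall,
    SonarHuge,
    SonarReasoningPro,
    Custom(String),
}

impl fmt::Display for PerplexityModel {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            PerplexityModel::SonarLarge => write!(f, "sonar-large-online"),
            PerplexityModel::SonarSmall => write!(f, "sonar-small-online"),
            PerplexityModel::SonarHuge => write!(f, "sonar-huge-online"),
            PerplexityModel::SonarReasoningPro => write!(f, "sonar-reasoning-pro"),
            // Custom passes an arbitrary ID through untouched, e.g. for preview models.
            PerplexityModel::Custom(id) => write!(f, "{}", id),
        }
    }
}

fn main() {
    assert_eq!(PerplexityModel::SonarLarge.to_string(), "sonar-large-online");
    assert_eq!(
        PerplexityModel::Custom("sonar-preview".into()).to_string(),
        "sonar-preview"
    );
    println!("ok");
}
```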

Usage

use synaptic::openai::compat::perplexity::{self, PerplexityModel};
use synaptic::core::{ChatModel, ChatRequest, Message};
use synaptic::models::HttpBackend;
use std::sync::Arc;

let model = perplexity::chat_model(
    "pplx-your-api-key",
    PerplexityModel::SonarLarge.to_string(),
    Arc::new(HttpBackend::new()),
);

let request = ChatRequest::new(vec![
    Message::system("Be precise and concise. Cite your sources."),
    Message::human("What is the current state of Rust adoption in systems programming?"),
]);

let response = model.chat(request).await?;
println!("{}", response.message.content());

Streaming

use futures::StreamExt;

let request = ChatRequest::new(vec![
    Message::human("What are the latest developments in LLM research?"),
]);

let mut stream = model.stream_chat(request);
while let Some(chunk) = stream.next().await {
    print!("{}", chunk?.content);
}
println!();

Error Handling

use synaptic::core::SynapticError;

match model.chat(request).await {
    Ok(response) => println!("{}", response.message.content()),
    Err(SynapticError::RateLimit(msg)) => eprintln!("Rate limited: {}", msg),
    Err(e) => return Err(e.into()),
}
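A common response to a `RateLimit` error is to retry with exponential backoff. A minimal sketch of the delay schedule only (the 250 ms base and 4-attempt cap are arbitrary choices for this example, not synaptic defaults):

```rust
// Compute the delay in milliseconds before retry number `attempt` (0-based),
// doubling from `base_ms` each time.
fn backoff_ms(attempt: u32, base_ms: u64) -> u64 {
    base_ms * 2u64.pow(attempt)
}

fn main() {
    let delays: Vec<u64> = (0..4).map(|a| backoff_ms(a, 250)).collect();
    assert_eq!(delays, vec![250, 500, 1000, 2000]);
    println!("{:?}", delays);
}
```

In a real retry loop you would sleep for `backoff_ms(attempt, base)` after each `RateLimit` error and give up once the attempt cap is reached.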

Configuration Reference

All configuration is done through OpenAiConfig builder methods. See the OpenAI-Compatible Providers page for the full reference.

| Method | Description |
|---|---|
| .with_temperature(f64) | Sampling temperature (0.0-2.0) |
| .with_max_tokens(u32) | Maximum tokens to generate |
| .with_top_p(f64) | Nucleus sampling threshold |
| .with_stop(Vec&lt;String&gt;) | Stop sequences |