# SQ (SeedQueue)

A fast, durable message queue. Written in Rust.

- Custom WAL with configurable fsync for durability
- Cap'n Proto data plane for high-throughput publish/subscribe (~2M msg/s)
- gRPC control plane for topic management, cluster ops, and health checks
- Simple quorum clustering with gossip-based membership
- S3 tiered storage for long-term segment offloading
- OpenTelemetry traces and metrics out of the box
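To make the first bullet concrete, here is a minimal sketch of a length-prefixed append-only log with a configurable fsync point. All names (`Wal`, `SyncPolicy`, `replay`) are illustrative, not the actual `sq-storage` API:

```rust
use std::fs::{File, OpenOptions};
use std::io::{Read, Write};
use std::path::Path;

/// How often the log is flushed to stable storage (illustrative only).
pub enum SyncPolicy {
    EveryBatch, // fsync after every append
    Never,      // leave flushing to the OS
}

/// Toy append-only log: each record is a u32 length prefix plus payload.
pub struct Wal {
    file: File,
    policy: SyncPolicy,
}

impl Wal {
    pub fn open(path: &Path, policy: SyncPolicy) -> std::io::Result<Self> {
        let file = OpenOptions::new().create(true).append(true).open(path)?;
        Ok(Self { file, policy })
    }

    pub fn append(&mut self, payload: &[u8]) -> std::io::Result<()> {
        self.file.write_all(&(payload.len() as u32).to_le_bytes())?;
        self.file.write_all(payload)?;
        if matches!(self.policy, SyncPolicy::EveryBatch) {
            self.file.sync_data()?; // the durability point
        }
        Ok(())
    }
}

/// Replay all records from the start of the log.
pub fn replay(path: &Path) -> std::io::Result<Vec<Vec<u8>>> {
    let mut bytes = Vec::new();
    File::open(path)?.read_to_end(&mut bytes)?;
    let mut records = Vec::new();
    let mut pos = 0;
    while pos + 4 <= bytes.len() {
        let len = u32::from_le_bytes(bytes[pos..pos + 4].try_into().unwrap()) as usize;
        pos += 4;
        if pos + len > bytes.len() {
            break; // torn tail write: stop replay at the last complete record
        }
        records.push(bytes[pos..pos + len].to_vec());
        pos += len;
    }
    Ok(records)
}
```

The trade-off is the usual one: `EveryBatch` pays an fsync per append for crash durability, while `Never` trusts the OS page cache and risks losing the tail on power loss.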

## Quickstart

```sh
# Build
cargo build --release -p sq-server

# Run a single node
./target/release/sq-server serve

# Default ports:
#   6064  Cap'n Proto data plane (producer/consumer)
#   6060  gRPC control plane
#   6062  HTTP (health, metrics)
```

Or with mise:

```sh
mise run develop    # cargo run -p sq-server -- serve
```

## SDK

Add the dependency:

```toml
[dependencies]
sq-sdk = { path = "crates/sq-sdk" }
```

Publish:

```rust
use sq_sdk::{Producer, ProducerConfig, ProducerMessage};

let mut producer = Producer::connect(ProducerConfig::default()).await?;

producer.send("orders", None, b"hello").await?;

// Or batch:
let batch = vec![
    ProducerMessage::new("orders", b"msg-1"),
    ProducerMessage::new("orders", b"msg-2"),
];
producer.send_batch(batch).await?;
```

Subscribe:

```rust
use sq_sdk::{Consumer, ConsumerConfig};

let mut consumer = Consumer::connect(ConsumerConfig {
    topic: "orders".into(),
    consumer_group: "my-group".into(),
    auto_commit: true,
    ..Default::default()
}).await?;

loop {
    let messages = consumer.poll().await?;
    for msg in &messages {
        println!("[{}] offset={}", msg.topic, msg.offset);
    }
}
```

A `BatchProducer` is available for high-throughput fire-and-forget patterns with automatic batching.

The gRPC transport is still accessible via `GrpcProducer`, `GrpcConsumer`, etc.
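The automatic-batching idea can be sketched in a few lines: buffer messages and flush whenever the batch fills. This is a hedged illustration with our own names (`Batcher`, and a `flushed` vector standing in for the network send); the real `BatchProducer` also flushes on a timer and sends over the data plane:

```rust
/// Illustrative automatic batcher, not the sq-sdk API.
pub struct Batcher {
    buf: Vec<Vec<u8>>,
    max_batch: usize,
    /// Stand-in for the network send: each entry is one flushed batch.
    pub flushed: Vec<Vec<Vec<u8>>>,
}

impl Batcher {
    pub fn new(max_batch: usize) -> Self {
        Self { buf: Vec::new(), max_batch, flushed: Vec::new() }
    }

    /// Fire-and-forget: enqueue and flush automatically once the batch fills.
    pub fn send(&mut self, payload: &[u8]) {
        self.buf.push(payload.to_vec());
        if self.buf.len() >= self.max_batch {
            self.flush();
        }
    }

    /// Flush any pending messages as one batch.
    pub fn flush(&mut self) {
        if !self.buf.is_empty() {
            self.flushed.push(std::mem::take(&mut self.buf));
        }
    }
}
```

Batching amortizes per-request overhead (framing, syscalls, fsync) across many messages, which is where the throughput numbers above come from.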

## Configuration

All flags can also be set via environment variables.

| Flag | Env | Default | Description |
| --- | --- | --- | --- |
| `--grpc-host` | `SQ_GRPC_HOST` | `127.0.0.1:6060` | gRPC listen address |
| `--capnp-host` | `SQ_CAPNP_HOST` | `127.0.0.1:6064` | Cap'n Proto listen address |
| `--http-host` | `SQ_HTTP_HOST` | `127.0.0.1:6062` | HTTP listen address |
| `--node-id` | `SQ_NODE_ID` | `node-1` | Unique node identifier |
| `--data-dir` | `SQ_DATA_DIR` | `./data` | WAL storage directory |
| `--seeds` | `SQ_SEEDS` | | Comma-separated seed node addresses |
| `--cluster-id` | `SQ_CLUSTER_ID` | `default` | Cluster identifier |
| `--sync-policy` | `SQ_SYNC_POLICY` | `every-batch` | `every-batch`, `none`, or an interval in ms |
| `--s3-bucket` | `SQ_S3_BUCKET` | | S3 bucket for tiered storage |
| `--s3-endpoint` | `SQ_S3_ENDPOINT` | | S3 endpoint (for MinIO, etc.) |
| `--s3-region` | `SQ_S3_REGION` | | S3 region |
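The sync-policy values could be parsed along these lines. This is a sketch with our own names (`SyncPolicy`), and it assumes the interval form is a bare millisecond count, which the flag description suggests but does not spell out; the actual flag parsing lives in `sq-server`:

```rust
use std::str::FromStr;

/// Parsed form of --sync-policy (illustrative only).
#[derive(Debug, PartialEq)]
pub enum SyncPolicy {
    EveryBatch,
    None,
    IntervalMs(u64),
}

impl FromStr for SyncPolicy {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "every-batch" => Ok(SyncPolicy::EveryBatch),
            "none" => Ok(SyncPolicy::None),
            // Anything else is assumed to be an interval in milliseconds.
            other => other
                .parse::<u64>()
                .map(SyncPolicy::IntervalMs)
                .map_err(|_| format!("invalid sync policy: {other}")),
        }
    }
}
```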

Observability is activated by setting `OTEL_EXPORTER_OTLP_ENDPOINT` (e.g., `http://jaeger:4317`).

## Running a cluster

```sh
# Node 1
sq-server --node-id node-1 --grpc-host 0.0.0.0:6060 --capnp-host 0.0.0.0:6064 serve

# Node 2
sq-server --node-id node-2 --grpc-host 0.0.0.0:6070 --capnp-host 0.0.0.0:6074 \
  --seeds 10.0.0.1:6060 serve

# Node 3
sq-server --node-id node-3 --grpc-host 0.0.0.0:6080 --capnp-host 0.0.0.0:6084 \
  --seeds 10.0.0.1:6060,10.0.0.2:6070 serve
```
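The `--seeds` value is a plain comma-separated list of `host:port` pairs. A minimal sketch of parsing it (the function name is ours, not part of `sq-server`, and it assumes numeric addresses rather than hostnames):

```rust
use std::net::SocketAddr;

/// Parse a comma-separated seed list like the --seeds flag above.
/// Empty segments (e.g. a trailing comma) are ignored.
pub fn parse_seeds(s: &str) -> Result<Vec<SocketAddr>, std::net::AddrParseError> {
    s.split(',')
        .map(str::trim)
        .filter(|part| !part.is_empty())
        .map(|part| part.parse())
        .collect()
}
```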

## Docker Compose

A full stack is provided in `templates/docker-compose.yaml`:

```sh
docker compose -f templates/docker-compose.yaml up -d
```

This starts a 3-node SQ cluster with Jaeger (tracing), Prometheus (metrics), Grafana (dashboards), and MinIO (S3-compatible object storage).

## Development

Requires the Rust 2024 edition (stable 1.85 or newer).

```sh
mise run check       # cargo check --workspace
mise run test        # cargo nextest run --workspace
mise run clippy      # cargo clippy --workspace
mise run build       # cargo build --workspace
mise run local:up    # start docker-compose services
mise run local:down  # stop docker-compose services
mise run develop     # run server in dev mode
```

Proto codegen (requires `buf`):

```sh
mise run generate:proto
```

## CI

CI runs via Dagger (containerized pipelines, no YAML). Requires the Dagger CLI.

```sh
mise run ci:pr       # PR pipeline: check + test + build
mise run ci:main     # Main branch pipeline
```

The pipeline runs `cargo check` and `cargo test`, then builds a release Docker image, all inside containers with dependency caching.

## Project structure

```text
crates/
  sq-server          Server binary + library (gRPC, Cap'n Proto, HTTP)
  sq-sdk             Client SDK (Producer, Consumer, BatchProducer)
  sq-storage         WAL, storage engine, S3 object store
  sq-cluster         Membership and replication
  sq-grpc-interface  Protobuf/tonic generated types
  sq-capnp-interface Cap'n Proto schema and codec
  sq-models          Shared domain types
  sq-sim             Deterministic I/O simulation for testing
ci/                  Dagger CI pipeline
examples/
  publish_subscribe  Example publisher and subscriber
templates/
  docker-compose.yaml  Full observability stack
  sq-server.Dockerfile Multi-stage release build
```

## Testing

```sh
# Unit tests (~100 tests, instant)
cargo test --workspace --lib

# Integration tests (cluster, data plane)
cargo test -p sq-server --test cluster_test
cargo test -p sq-server --test data_plane_test

# Stress tests (100K+ messages, benchmarks)
cargo test -p sq-server --test stress_test -- --nocapture
cargo test -p sq-server --test capnp_stress_test -- --nocapture

# Storage benchmarks
cargo bench -p sq-storage
```