RustQ — Distributed Job Queue System

RustQ is a high-performance, distributed background job queue written in Rust. It’s inspired by systems like Hangfire, Celery, and BullMQ, but designed for speed, reliability, and type safety. RustQ allows services to enqueue jobs, process them asynchronously, and scale horizontally across worker nodes.


Overview

RustQ is built to handle high-throughput job execution with resilience and minimal latency. It enables developers to offload time-consuming or IO-heavy tasks from the main application thread to distributed workers, ensuring scalability and responsiveness.

Core Features:

  • Asynchronous Job Execution with tokio
  • Pluggable Storage Layer (in-memory, PostgreSQL, Redis, or RocksDB)
  • Retry & Backoff Policies
  • Worker Registration & Health Monitoring
  • Metrics Dashboard (optional) built using axum
  • Idempotency & Deduplication Support
  • Cluster Coordination via Raft (future milestone)
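
For example, deduplication can key jobs by queue name plus a hash of the payload, so the broker can drop a duplicate enqueue. A sketch using only the standard library (`dedup_key` is a hypothetical helper, not part of RustQ's API):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Derive a deduplication key from the queue name and payload.
/// Two enqueue calls with the same queue and payload map to the
/// same key, so the broker can detect and drop the duplicate.
fn dedup_key(queue: &str, payload: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    queue.hash(&mut hasher);
    payload.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let a = dedup_key("send_email", r#"{"to":"user@example.com"}"#);
    let b = dedup_key("send_email", r#"{"to":"user@example.com"}"#);
    let c = dedup_key("send_email", r#"{"to":"other@example.com"}"#);
    assert_eq!(a, b); // identical jobs collapse to one key
    assert_ne!(a, c); // different payloads stay distinct
}
```

A real backend would also need to bound how long a key is remembered (e.g. a TTL), otherwise a legitimately repeated job would be dropped forever.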

Architecture

+------------------+        +------------------+        +------------------+
|   API Gateway    |  -->   |      Broker      |  -->   |     Workers      |
+------------------+        +------------------+        +------------------+
         |                           |                           |
         | REST/gRPC APIs            | Queue Management          | Job Execution
         |                           |                           |
         v                           v                           v
Clients submit jobs         Broker manages queue       Workers pull & execute

Each worker connects to the broker to fetch jobs from a queue. Jobs are serialized, persisted, and processed asynchronously. The broker ensures fair distribution, job persistence, and retries in case of worker failure.
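
The job lifecycle described above can be sketched as a small state machine (the state names and `next_state` function here are illustrative, not RustQ's actual types):

```rust
/// Lifecycle states a job moves through inside the broker.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum JobState {
    Queued,
    Running,
    Succeeded,
    Failed,
    Retrying,
}

/// Decide the next state after a worker reports a result.
/// `max_retries` is a hypothetical per-queue setting.
fn next_state(current: JobState, ok: bool, attempts: u32, max_retries: u32) -> JobState {
    match (current, ok) {
        (JobState::Running, true) => JobState::Succeeded,
        (JobState::Running, false) if attempts < max_retries => JobState::Retrying,
        (JobState::Running, false) => JobState::Failed,
        // Reports for jobs that are not running are ignored.
        (state, _) => state,
    }
}

fn main() {
    // A job that fails before exhausting its retries gets requeued.
    assert_eq!(next_state(JobState::Running, false, 1, 3), JobState::Retrying);
}
```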

Components

Broker Service

  • Manages queues, job states, and worker registration.
  • Handles retries, timeouts, and error tracking.
  • Publishes events when jobs succeed or fail.
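
The retry handling could follow a standard exponential-backoff policy. A minimal sketch (`backoff_delay` and its parameters are assumptions, not RustQ's actual API):

```rust
use std::time::Duration;

/// Exponential backoff with a ceiling: delay = base * 2^attempt,
/// capped at `max_ms`. Attempt numbering starts at 0.
fn backoff_delay(attempt: u32, base_ms: u64, max_ms: u64) -> Duration {
    // Clamp the shift so the multiplier cannot overflow u64.
    let delay = base_ms.saturating_mul(1u64 << attempt.min(20));
    Duration::from_millis(delay.min(max_ms))
}

fn main() {
    for attempt in 0..6 {
        println!("attempt {attempt}: retry in {:?}", backoff_delay(attempt, 500, 60_000));
    }
}
```

Production systems typically add random jitter on top of this so that a burst of failures does not produce synchronized retry storms.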

Worker Nodes

  • Connect to the broker and pull jobs.
  • Execute jobs asynchronously with concurrency control.
  • Report job completion/failure back to the broker.
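
Bounded concurrency on a worker can be sketched with plain threads and a shared channel; this is a stand-in for the async internals a real worker would use, and `run_workers` is a hypothetical helper:

```rust
use std::sync::mpsc;
use std::sync::{Arc, Mutex};
use std::thread;

/// Process jobs from a shared queue with a fixed number of worker
/// threads, illustrating concurrency control: at most `concurrency`
/// jobs are in flight at once.
fn run_workers(jobs: Vec<String>, concurrency: usize) -> Vec<String> {
    let (tx, rx) = mpsc::channel::<String>();
    let rx = Arc::new(Mutex::new(rx));
    let (done_tx, done_rx) = mpsc::channel::<String>();

    let total = jobs.len();
    for job in jobs {
        tx.send(job).unwrap();
    }
    drop(tx); // close the queue so workers stop when it drains

    let mut handles = Vec::new();
    for _ in 0..concurrency {
        let rx = Arc::clone(&rx);
        let done_tx = done_tx.clone();
        handles.push(thread::spawn(move || loop {
            // Take the next job; the lock guard is dropped at the
            // end of this statement, before the job is "executed".
            let job = match rx.lock().unwrap().recv() {
                Ok(job) => job,
                Err(_) => break, // queue closed and empty
            };
            done_tx.send(format!("done: {job}")).unwrap();
        }));
    }
    drop(done_tx);

    for handle in handles {
        handle.join().unwrap();
    }
    done_rx.iter().take(total).collect()
}

fn main() {
    let out = run_workers(vec!["a".into(), "b".into(), "c".into()], 2);
    println!("processed {} jobs", out.len());
}
```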

Client SDK

  • Provides APIs to enqueue jobs and query their status.
  • Offers both async (REST/gRPC) and in-process client modes.

Quickstart (Local Development)

Prerequisites:

  • Rust toolchain (stable) — install via rustup
  • cargo (comes with Rust)
  • Optional: Docker and Docker Compose (for running Redis/Postgres locally)
  1. Clone the repository and enter the project directory:

     git clone git@github.com:sam-baraka/RustQueue.git
     cd RustQueue

  2. Run unit tests:

     cargo test

  3. Run the broker (default configuration):

     cargo run --bin rustq-broker

  4. Run a worker (example queue send_email):

     cargo run --bin rustq-worker -- --queue send_email

  5. Submit a job using the example producer:

     cargo run --example enqueue_job

If you prefer to run the Redis/Postgres dependencies in Docker, use the docker-compose.yml file in the repository (if present).


Configuration

RustQ reads configuration from environment variables and (optionally) a config file. Common variables used during development:

  • RUSTQ_BIND_ADDR - HTTP bind address for the broker (default: 0.0.0.0:8080)
  • RUSTQ_STORAGE - Storage backend: memory, redis, postgres, rocksdb (default: memory)
  • REDIS_URL - Redis connection string (if using redis backend)
  • DATABASE_URL - Postgres connection string (if using postgres backend)

Example (zsh / bash):

export RUSTQ_BIND_ADDR=127.0.0.1:8080
export RUSTQ_STORAGE=redis
export REDIS_URL=redis://localhost:6379
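
Inside Rust, these variables can be read with std::env, falling back to the defaults listed above; a minimal sketch (`env_or` is a hypothetical helper, not part of RustQ):

```rust
use std::env;

/// Read a setting from the environment, falling back to a default.
fn env_or(key: &str, default: &str) -> String {
    env::var(key).unwrap_or_else(|_| default.to_string())
}

fn main() {
    // Defaults match the documented development values.
    let bind_addr = env_or("RUSTQ_BIND_ADDR", "0.0.0.0:8080");
    let storage = env_or("RUSTQ_STORAGE", "memory");
    println!("broker listening on {bind_addr}, storage = {storage}");
}
```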

Storage Backends

RustQ supports multiple pluggable backends. During development you can use the in-memory backend for fast iteration. For production, use Redis, Postgres, or RocksDB.

Short notes:

  • Redis — good for speed and easy to operate; useful for ephemeral queues and fast retries.
  • Postgres — provides strong persistence and is useful if you already use Postgres for other data.
  • RocksDB — embedded key-value store, useful for single-node high-throughput scenarios.
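
The pluggable-backend idea can be sketched as a trait with an in-memory implementation (`Storage` and `MemoryStorage` here are illustrative, not RustQ's real types; a Redis, Postgres, or RocksDB backend would implement the same trait over its own primitives):

```rust
use std::collections::{HashMap, VecDeque};

/// Minimal shape of a pluggable storage backend: a named set of
/// FIFO queues holding serialized job payloads.
trait Storage {
    fn push(&mut self, queue: &str, payload: String);
    fn pop(&mut self, queue: &str) -> Option<String>;
}

/// In-memory backend, suitable for local development and tests.
#[derive(Default)]
struct MemoryStorage {
    queues: HashMap<String, VecDeque<String>>,
}

impl Storage for MemoryStorage {
    fn push(&mut self, queue: &str, payload: String) {
        self.queues.entry(queue.to_string()).or_default().push_back(payload);
    }

    fn pop(&mut self, queue: &str) -> Option<String> {
        self.queues.get_mut(queue)?.pop_front()
    }
}

fn main() {
    let mut store = MemoryStorage::default();
    store.push("send_email", "job-1".to_string());
    store.push("send_email", "job-2".to_string());
    // FIFO order: the first job enqueued comes out first.
    assert_eq!(store.pop("send_email").as_deref(), Some("job-1"));
}
```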

Examples (Client & Worker)

Client example (enqueue a job):

use rustq::client::RustQClient;

#[tokio::main]
async fn main() {
    // Connect to a broker running locally.
    let client = RustQClient::new("http://localhost:8080");

    // Enqueue a job on the "send_email" queue with a JSON payload.
    client.enqueue("send_email", serde_json::json!({
        "to": "user@example.com",
        "subject": "Welcome!"
    })).await.unwrap();
}

Worker example (register handler & run):

use rustq::worker::{Worker, JobHandler, Job, JobError};

struct SendEmailHandler;

#[async_trait::async_trait]
impl JobHandler for SendEmailHandler {
    // Called once for each job pulled from the queue.
    async fn handle(&self, job: Job) -> Result<(), JobError> {
        println!("Sending email to {:?}", job.data.get("to"));
        Ok(())
    }
}

#[tokio::main]
async fn main() {
    // Subscribe to the send_email queue and register the handler.
    let mut worker = Worker::new("send_email");
    worker.register_handler(SendEmailHandler);
    worker.run().await;
}

Development Workflow

  • Create a branch per feature/fix: git checkout -b feat/your-feature
  • Write unit tests and run cargo test
  • Keep changes small and focused; open a PR with a clear description and reference issues

PR checklist:

  • Tests added/updated
  • Lints pass (run cargo clippy)
  • Change log entry if behavior changes

Product Roadmap (Expanded)

This roadmap breaks the high-level milestones into concrete short-, mid-, and long-term items with success criteria.

Short-term (v0.1 — v0.2, 0–3 months)

  • v0.1 — Core Queue & Workers

    • Features: enqueue, dequeue, worker processing, retries, basic in-memory persistence
    • Success criteria: basic end-to-end example runs locally; unit tests cover job lifecycle
    • Priority: High
  • v0.2 — Storage & Observability

    • Features: Redis and Postgres backends, basic metrics endpoint, logging/tracing hooks
    • Success criteria: chosen backend(s) can run in CI; metrics exposed for Prometheus
    • Priority: High

Mid-term (v0.3 — v0.4, 3–9 months)

  • v0.3 — Dashboard & UI

    • Features: simple web dashboard (axum), job list, retry/inspect controls, worker health view
    • Success criteria: web UI shows live job state and allows manual retry/inspect operations
    • Priority: Medium
  • v0.4 — Cluster Coordination

    • Features: optional Raft-based leader election for broker HA, distributed lease management
    • Success criteria: broker leader failover test passes in an integration test cluster
    • Priority: Medium

Long-term (v0.5+, 9+ months)

  • v0.5 — Multi-language SDKs & Cloud
    • Features: SDKs for Python and Go, hosted/cloud deployment guides, Helm charts
    • Success criteria: client libraries publishable and used in sample apps
    • Priority: Low

Stretch & Nice-to-have

  • Advanced deduplication strategies
  • Job dependency graphs & DAG scheduling
  • Exactly-once semantics with transactional sinks

Contributing

Pull requests are welcome! Please open an issue before submitting major changes. Follow these steps for contributions:

  1. Fork the repo
  2. Create a feature branch (git checkout -b feature/my-feature)
  3. Run tests and linters locally (cargo test, cargo clippy)
  4. Commit changes and open a PR with a clear description and tests

If you're new, look for issues labeled "good first issue" (if present).


License

MIT License © 2025 Samuel Baraka
