RustQ is a high-performance, distributed background job queue written in Rust. It’s inspired by systems like Hangfire, Celery, and BullMQ, but designed for speed, reliability, and type safety. RustQ allows services to enqueue jobs, process them asynchronously, and scale horizontally across worker nodes.
RustQ is built to handle high-throughput job execution with resilience and minimal latency. It enables developers to offload time-consuming or IO-heavy tasks from the main application thread to distributed workers, ensuring scalability and responsiveness.
Core Features:

- Asynchronous Job Execution with `tokio`
- Pluggable Storage Layer (in-memory, PostgreSQL, Redis, or RocksDB)
- Retry & Backoff Policies
- Worker Registration & Health Monitoring
- Metrics Dashboard (optional) built using `axum`
- Idempotency & Deduplication Support
- Cluster Coordination via Raft (future milestone)
```
+------------------+       +------------------+       +------------------+
|   API Gateway    |  -->  |      Broker      |  -->  |     Workers      |
+------------------+       +------------------+       +------------------+
         |                          |                          |
   REST/gRPC APIs           Queue Management             Job Execution
         |                          |                          |
         v                          v                          v
Clients submit jobs      Broker manages queue     Workers pull & execute
```
Each worker connects to the broker to fetch jobs from a queue. Jobs are serialized, persisted, and processed asynchronously. The broker ensures fair distribution, job persistence, and retries in case of worker failure.
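The job lifecycle the broker manages (queued, running, retried on failure) can be sketched as a small state machine. The states and transition rules below are an illustration, not RustQ's internal model:

```rust
/// Illustrative job states: queued, leased by a worker, finished,
/// or waiting for a retry after a failure.
#[derive(Debug, Clone, Copy, PartialEq)]
enum JobState {
    Queued,
    Running,
    Succeeded,
    Retrying { attempt: u32 },
    Failed,
}

/// Transition applied when a worker reports the outcome of one
/// execution. Jobs retry up to `max_retries` times, then fail.
fn on_result(state: JobState, ok: bool, max_retries: u32) -> JobState {
    match (state, ok) {
        (JobState::Running, true) => JobState::Succeeded,
        (JobState::Running, false) => JobState::Retrying { attempt: 1 },
        (JobState::Retrying { attempt }, false) if attempt >= max_retries => JobState::Failed,
        (JobState::Retrying { attempt }, false) => JobState::Retrying { attempt: attempt + 1 },
        (JobState::Retrying { .. }, true) => JobState::Succeeded,
        (s, _) => s, // terminal states (and Queued) are left unchanged here
    }
}

fn main() {
    // Simulate a job that fails twice, then succeeds on the third try.
    let mut s = JobState::Running;
    for ok in [false, false, true] {
        s = on_result(s, ok, 3);
        println!("{s:?}");
    }
}
```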
Broker:

- Manages queues, job states, and worker registration.
- Handles retries, timeouts, and error tracking.
- Publishes events when jobs succeed or fail.

Workers:

- Connect to the broker and pull jobs.
- Execute jobs asynchronously with concurrency control.
- Report job completion/failure back to the broker.

API Gateway / Client:

- Provides APIs to enqueue jobs and query their status.
- Offers both async (REST/gRPC) and in-process client modes.
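The broker's retry handling is typically paired with a backoff policy. Here is a minimal sketch of capped exponential backoff; the `BackoffPolicy` type and its fields are illustrative, not RustQ's actual API:

```rust
use std::time::Duration;

/// Illustrative exponential-backoff policy: the delay doubles on each
/// attempt, starting at `base` and capped at `max`.
struct BackoffPolicy {
    base: Duration,
    max: Duration,
}

impl BackoffPolicy {
    /// Delay before the given retry attempt (attempt 0 = first retry).
    fn delay(&self, attempt: u32) -> Duration {
        let factor = 2u32.saturating_pow(attempt);
        let d = self.base.saturating_mul(factor);
        if d > self.max { self.max } else { d }
    }
}

fn main() {
    let policy = BackoffPolicy {
        base: Duration::from_millis(100),
        max: Duration::from_secs(30),
    };
    // 100ms, 200ms, 400ms, 800ms, 1.6s, ... capped at 30s.
    for attempt in 0..5 {
        println!("retry {attempt}: wait {:?}", policy.delay(attempt));
    }
}
```

A production policy would usually add random jitter so that many failing jobs do not retry in lockstep.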
Prerequisites:

- Rust toolchain (stable): install via `rustup`
- `cargo` (comes with Rust)
- Optional: Docker and Docker Compose (for running Redis/Postgres locally)

Quick start:

- Clone the repository and enter the project directory: `git clone git@github.com:sam-baraka/RustQueue.git`, then `cd RustQueue`
- Run unit tests: `cargo test`
- Run the broker (default configuration): `cargo run --bin rustq-broker`
- Run a worker (example queue `send_email`): `cargo run --bin rustq-worker -- --queue send_email`
- Submit a job using the example producer: `cargo run --example enqueue_job`

If you prefer to use Docker for dependencies, see the docker-compose.yml example below (if present in the repo).
RustQ reads configuration from environment variables and (optionally) a config file. Common variables used during development:

- `RUSTQ_BIND_ADDR` - HTTP bind address for the broker (default: `0.0.0.0:8080`)
- `RUSTQ_STORAGE` - Storage backend: `memory`, `redis`, `postgres`, or `rocksdb` (default: `memory`)
- `REDIS_URL` - Redis connection string (if using the `redis` backend)
- `DATABASE_URL` - Postgres connection string (if using the `postgres` backend)

Example (zsh / bash):

```sh
export RUSTQ_BIND_ADDR=127.0.0.1:8080
export RUSTQ_STORAGE=redis
export REDIS_URL=redis://localhost:6379
```

RustQ supports multiple pluggable backends. During development you can use the in-memory backend for fast iteration. For production, use Redis, Postgres, or RocksDB.
Short notes:
- Redis — good for speed and easy to operate; useful for ephemeral queues and fast retries.
- Postgres — provides strong persistence and is useful if you already use Postgres for other data.
- RocksDB — embedded key-value store, useful for single-node high-throughput scenarios.
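The pluggable storage layer can be pictured as a small trait with one implementation per backend, selected from `RUSTQ_STORAGE`. This is a minimal sketch under assumed names (`Storage`, `MemoryStorage`, `storage_from_env` are all illustrative; RustQ's real trait and types will differ), with only the in-memory backend implemented:

```rust
use std::collections::VecDeque;
use std::env;

/// Illustrative storage abstraction: a backend only needs to push jobs
/// and pop them in FIFO order for this sketch.
trait Storage {
    fn push(&mut self, job: String);
    fn pop(&mut self) -> Option<String>;
}

/// The in-memory backend: a simple FIFO queue, suitable for dev/test.
#[derive(Default)]
struct MemoryStorage {
    queue: VecDeque<String>,
}

impl Storage for MemoryStorage {
    fn push(&mut self, job: String) {
        self.queue.push_back(job);
    }
    fn pop(&mut self) -> Option<String> {
        self.queue.pop_front()
    }
}

/// Pick a backend from RUSTQ_STORAGE, defaulting to "memory".
/// Redis/Postgres/RocksDB implementations would slot in here.
fn storage_from_env() -> Box<dyn Storage> {
    match env::var("RUSTQ_STORAGE").as_deref() {
        Ok("memory") | Err(_) => Box::new(MemoryStorage::default()),
        Ok(other) => panic!("backend {other:?} not implemented in this sketch"),
    }
}

fn main() {
    let mut store = storage_from_env();
    store.push("send_email:1".to_string());
    println!("popped: {:?}", store.pop());
}
```

Keeping the trait narrow like this is what lets the broker treat all backends uniformly, with durability guarantees differing per implementation.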
Client example (enqueue a job):

```rust
use rustq::client::RustQClient;

#[tokio::main]
async fn main() {
    let client = RustQClient::new("http://localhost:8080");
    client.enqueue("send_email", serde_json::json!({
        "to": "user@example.com",
        "subject": "Welcome!",
    })).await.unwrap();
}
```

Worker example (register handler & run):

```rust
use rustq::worker::{Worker, JobHandler, Job, JobError};

struct SendEmailHandler;

#[async_trait::async_trait]
impl JobHandler for SendEmailHandler {
    async fn handle(&self, job: Job) -> Result<(), JobError> {
        println!("Sending email to {:?}", job.data.get("to"));
        Ok(())
    }
}

#[tokio::main]
async fn main() {
    let mut worker = Worker::new("send_email");
    worker.register_handler(SendEmailHandler);
    worker.run().await;
}
```

Development workflow:

- Create a branch per feature/fix: `git checkout -b feat/your-feature`
- Write unit tests and run `cargo test`
- Keep changes small and focused; open a PR with a clear description and reference issues
PR checklist:
- Tests added/updated
- Lints pass (run `cargo clippy`)
- Change log entry if behavior changes
This roadmap breaks the high-level milestones into concrete short-, mid-, and long-term items with success criteria.
Short-term (v0.1–v0.2, 0–3 months)

- v0.1 - Core Queue & Workers
  - Features: enqueue, dequeue, worker processing, retries, basic in-memory persistence
  - Success criteria: basic end-to-end example runs locally; unit tests cover job lifecycle
  - Priority: High
- v0.2 - Storage & Observability
  - Features: Redis and Postgres backends, basic metrics endpoint, logging/tracing hooks
  - Success criteria: chosen backend(s) can run in CI; metrics exposed for Prometheus
  - Priority: High

Mid-term (v0.3–v0.4, 3–9 months)

- v0.3 - Dashboard & UI
  - Features: simple web dashboard (axum), job list, retry/inspect controls, worker health view
  - Success criteria: web UI shows live job state and allows manual retry/inspect operations
  - Priority: Medium
- v0.4 - Cluster Coordination
  - Features: optional Raft-based leader election for broker HA, distributed lease management
  - Success criteria: broker leader failover test passes in an integration test cluster
  - Priority: Medium

Long-term (v0.5+, 9+ months)

- v0.5 - Multi-language SDKs & Cloud
  - Features: SDKs for Python and Go, hosted/cloud deployment guides, Helm charts
  - Success criteria: client libraries publishable and used in sample apps
  - Priority: Low
Stretch & Nice-to-have
- Advanced deduplication strategies
- Job dependency graphs & DAG scheduling
- Exactly-once semantics with transactional sinks
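The idempotency and deduplication support listed among the core features can start from something as small as an idempotency-key guard. A sketch, where `DedupGuard` is an illustrative name and a production version would persist keys and expire them after a TTL:

```rust
use std::collections::HashSet;

/// Illustrative dedup guard: a job is accepted only the first time its
/// idempotency key is seen.
#[derive(Default)]
struct DedupGuard {
    seen: HashSet<String>,
}

impl DedupGuard {
    /// Returns true if the job should be enqueued, false if it is a
    /// duplicate. `HashSet::insert` returns false for repeat keys.
    fn try_accept(&mut self, key: &str) -> bool {
        self.seen.insert(key.to_string())
    }
}

fn main() {
    let mut guard = DedupGuard::default();
    let key = "email:user@example.com:welcome";
    println!("first enqueue accepted: {}", guard.try_accept(key));
    println!("second enqueue accepted: {}", guard.try_accept(key));
}
```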
Pull requests are welcome! Please open an issue before submitting major changes. Follow these steps for contributions:

- Fork the repo
- Create a feature branch (`git checkout -b feature/my-feature`)
- Run tests and linters locally (`cargo test`, `cargo clippy`)
- Commit changes and open a PR with a clear description and tests

If you're new, check the `good first issue` label on issues (if present).
MIT License © 2025 Samuel Baraka