Tracking issue for adding first‑class worker‑process orchestration to Peanar—from lightweight dev ergonomics to production‑ready health & scaling helpers—without turning the library into a full‑blown process supervisor like pm2.
## 🎯 Goal
Provide opinionated, opt‑in tooling and guidelines so Peanar users can run workers safely and efficiently (local dev → single‑VM prod → container/Kubernetes) while keeping Peanar itself lean and broker‑centric.
## 📝 Scope

| Layer | What we deliver | Status |
| --- | --- | --- |
| L0 Docs | Deployment recipes, systemd & k8s snippets, best-practice guide | ☐ |
| L1 Lifecycle Helpers | `@peanar/worker-lifecycle` utilities • signal capture • graceful drain • health probe stub | ☐ |
| L2 Dev Launcher | CLI: `peanar run -w <n> ./worker.js` (forks via `cluster`/`child_process`) | ☐ |
| L3 Metrics & Autoscale Hints | Prom/OTLP gauges for lag & in-flight; pure-function `recommendedReplicas()` | ☐ |
| L4 Mini-Supervisor (opt-in) | `peanar supervise` wrapper with simple restart/back-off | ☐ |
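To make the L3 "autoscale hints" idea concrete, here is a hedged sketch of what a pure `recommendedReplicas()` could look like. The `QueueSnapshot` shape, the `perWorkerRate`/`targetDrainSeconds` inputs, and the clamping bounds are all illustrative assumptions, not a committed API:

```typescript
// Hypothetical sketch only — field names and signature are assumptions,
// not the shipped Peanar API.
export interface QueueSnapshot {
  lag: number;                // messages waiting in the queue
  inFlight: number;           // messages currently being processed
  perWorkerRate: number;      // msgs/sec a single worker can sustain
  targetDrainSeconds: number; // how quickly we want the backlog cleared
}

export function recommendedReplicas(
  s: QueueSnapshot,
  min = 1,
  max = 32,
): number {
  // Total work = backlog plus what is already in flight.
  const pending = s.lag + s.inFlight;
  // Replicas needed to drain `pending` within the target window.
  const needed = Math.ceil(
    pending / (s.perWorkerRate * s.targetDrainSeconds),
  );
  // Clamp to sane bounds; never recommend zero so probes stay alive.
  return Math.min(max, Math.max(min, needed));
}
```

Keeping this a pure function (snapshot in, integer out) is what lets it stay a "hint": an HPA, a cron job, or a human can consume the number without Peanar owning any scaling loop.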
## 🚫 Non-Goals
- Replicating pm2/systemd/k8s features (log rotation, CPU pinning, remote dashboards, etc.).
- Shipping opinionated infrastructure code (Helm charts, Terraform).
- Implementing an autoscaler—only hints.
## 📐 Design Tenets
- **Stateless CLI, stateful workers** – no cross-process coordination.
- **Graceful by default** – shutdown contract & drain API.
- **Opt-in everything** – existing setups remain untouched.
- **Metrics first-class** – expose, don't store/visualise.
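The "graceful by default" tenet can be sketched as a small drain contract. The `DrainableWorker` shape and the `drain`/`captureSignals` names below are assumptions for illustration; the real `@peanar/worker-lifecycle` package may expose this differently:

```typescript
// Illustrative shape a worker would implement to participate in draining.
type DrainableWorker = {
  pause(): Promise<void>; // stop consuming new messages
  inFlight(): number;     // jobs still running
  close(): Promise<void>; // tear down channels/connections
};

// Pause, wait for in-flight jobs (up to a grace window), then close.
// Returns true if the drain completed cleanly.
export async function drain(
  worker: DrainableWorker,
  graceMs = 30_000,
): Promise<boolean> {
  await worker.pause();
  const deadline = Date.now() + graceMs;
  while (worker.inFlight() > 0 && Date.now() < deadline) {
    await new Promise((r) => setTimeout(r, 50));
  }
  await worker.close();
  return worker.inFlight() === 0;
}

// Wire the drain to SIGTERM/SIGINT; exit 0 on a clean drain, 1 otherwise.
export function captureSignals(worker: DrainableWorker, graceMs = 30_000): void {
  for (const sig of ["SIGTERM", "SIGINT"] as const) {
    process.once(sig, () => {
      void drain(worker, graceMs).then((clean) => process.exit(clean ? 0 : 1));
    });
  }
}
```

Separating the pure `drain` step from the signal wiring keeps the drain logic unit-testable without sending real signals.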
## ✅ Acceptance Criteria
- A dev can run `peanar run -w $(nproc) ./worker.js` and see all child PIDs, live reload on SIGINT, and clean drain on SIGTERM.
- Health probe returns JSON: `{ "status": "ok", "lag": <int>, "inFlight": <int> }` and is consumable by a Kubernetes `readinessProbe`.
- Metrics exported in Prometheus format behind a toggle.
- Docs include copy-paste examples for systemd, Docker Compose, and a k8s Deployment + HPA.
- All new packages covered by unit tests; L2–L4 paths have integration tests against real RabbitMQ.
- No breaking changes to `peanar@core`.
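The health-probe criterion above could be satisfied by a stub along these lines. The `getLag`/`getInFlight` callbacks and `startHealthProbe` name are assumptions sketching the contract, not a finalized interface:

```typescript
import { createServer, type Server } from "node:http";

// Build the JSON body required by the acceptance criteria:
// { "status": "ok", "lag": <int>, "inFlight": <int> }
export function buildHealthPayload(lag: number, inFlight: number) {
  return { status: "ok", lag, inFlight };
}

// Minimal HTTP server a k8s readinessProbe (or curl) can hit.
// Lag and in-flight counts come from caller-supplied callbacks so the
// probe stays decoupled from any particular broker client.
export function startHealthProbe(
  getLag: () => number,
  getInFlight: () => number,
  port = 8080,
): Server {
  const server = createServer((_req, res) => {
    const body = JSON.stringify(buildHealthPayload(getLag(), getInFlight()));
    res.writeHead(200, { "content-type": "application/json" });
    res.end(body);
  });
  server.listen(port);
  return server; // caller closes it as part of graceful drain
}
```

Returning the `Server` lets the lifecycle helpers close the probe last, so Kubernetes keeps routing readiness checks correctly during a drain.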
## 📋 Task Breakdown

**General**
- [ ] `@peanar/cli` workspace package

**Layer 0 – Documentation**
- [ ] Deployment recipes under `/examples`

**Layer 1 – Lifecycle Helpers**
- [ ] `captureSignals()` util (signal-capture hook)
- [ ] `startHealthProbe()` server

**Layer 2 – Dev Launcher**
- [ ] CLI argument parsing (`commander`)
- [ ] Fork via `cluster` / fallback to `child_process`

**Layer 3 – Metrics & Autoscale Hints**
- [ ] Prometheus exporter (`prom-client`)
- [ ] `autoscale.recommendedReplicas()` function

**Layer 4 – Mini-Supervisor (optional)**
- [ ] `--max-restarts`, `--restart-window` flags

**DX / Tooling**
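The Layer 4 `--max-restarts` / `--restart-window` pair suggests a sliding-window restart policy. Here is a hedged sketch of that logic in isolation (class name, defaults, and the back-off curve are illustrative; the actual process forking would live in `peanar supervise`):

```typescript
// Allow at most `maxRestarts` restarts within a sliding `restartWindowMs`
// window, mirroring the proposed --max-restarts / --restart-window flags.
export class RestartPolicy {
  private restarts: number[] = [];

  constructor(
    private maxRestarts = 5,
    private restartWindowMs = 60_000,
  ) {}

  // True if another restart is permitted at time `now`; records it if so.
  shouldRestart(now = Date.now()): boolean {
    // Drop restart timestamps that fell outside the sliding window.
    this.restarts = this.restarts.filter(
      (t) => now - t < this.restartWindowMs,
    );
    if (this.restarts.length >= this.maxRestarts) return false;
    this.restarts.push(now);
    return true;
  }

  // Exponential back-off before the next spawn, capped at 30 s.
  backoffMs(): number {
    return Math.min(30_000, 500 * 2 ** Math.max(0, this.restarts.length - 1));
  }
}
```

Keeping the policy pure (timestamps in, decision out) means the supervisor wrapper can stay a thin loop around `child_process.fork`, in line with the non-goal of not rebuilding pm2.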
## ❓ Open Questions
- **Default bundle** – ship the CLI inside `peanar` or as a separate package?
- **Metrics back-ends** – Prometheus only, or add StatsD / OTLP hooks?
- **Target Node versions** – keep the Node 18 LTS baseline?
_Use this epic to coordinate PRs; link each Layer's implementation issue/PR here and tick boxes as they merge._