
fixes #73: autometic tor deployment #82

Open
keraliss wants to merge 6 commits into main from tor_deployment

Conversation

@keraliss
Collaborator

@keraliss keraliss commented May 5, 2026

Summary by CodeRabbit

  • New Features

    • App displays Tor connection status on startup via a dismissible toast; backend exposes a Tor status endpoint
    • Automatic local Tor detection and management (uses system, host binary, or bundled Tor)
  • UI Improvements

    • New toast notification component
    • Added slide-in animation for UI elements
  • Documentation

    • Docker instructions clarified with explicit RPC/ZMQ endpoints and RPC credentials; Tor auth note removed
  • Chores

    • Docker Tor control authentication behavior adjusted
    • Runtime dependency features and dependency configuration updated

Review Change Stack

@coderabbitai

coderabbitai Bot commented May 5, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough

Adds a Tor manager that detects or starts a local Tor SOCKS proxy (system, host, or downloaded bundle), integrates it into MakerManager and the API (/api/tor/status), surfaces Tor status to the frontend via a one-time toast, and updates graceful shutdown, docs, container config, CI, and dependencies.
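The detection step described above (probe local ports before deciding whether to start Tor) can be sketched with a plain stdlib TCP probe. This is an illustrative sketch, not the actual `src/tor_manager.rs` code; the helper name and timeout are assumptions:

```rust
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

/// Hypothetical helper mirroring the detection idea: a port that accepts a
/// TCP connection within the timeout is treated as an already-running listener.
fn port_in_use(port: u16) -> bool {
    let addr: SocketAddr = ([127, 0, 0, 1], port).into();
    TcpStream::connect_timeout(&addr, Duration::from_millis(300)).is_ok()
}

fn main() {
    // Bind an ephemeral port so the probe has something to find.
    let listener = std::net::TcpListener::bind("127.0.0.1:0").unwrap();
    let port = listener.local_addr().unwrap().port();
    println!("port {port} in use: {}", port_in_use(port));
}
```

A real implementation would probe the Tor SOCKS port (typically 9050) and fall through to starting a managed instance when nothing answers.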

Changes

Tor connectivity & UI integration

Layer / File(s) Summary
Dependency
Cargo.toml
Enables tokio signal feature and moves ureq from dev-dependencies to dependencies.
Container config & docs
docker/docker-compose.yml, README.md
Tor container torrc no longer writes a hashed control password (uses CookieAuthentication 0); README maker instructions updated to list RPC/ZMQ endpoints and RPC credentials and remove the prior Tor auth note.
Core Tor manager (data/behavior)
src/tor_manager.rs
New TorManager with detection and startup: probes ports, can spawn host tor (writes torrc, disables cookie auth), or use a cached/downloaded bundle; includes noop(), source_label(), readiness polling, checksum/verify, extraction, macOS signing, and Drop cleanup.
Integration into backend
src/lib.rs, src/main.rs, src/maker_manager/mod.rs, tests/api/*
Adds tor_manager module; MakerManager now holds a TorManager, initializes via detect_or_start() with fallback to noop() (adds new_for_testing()), and exposes tor_source(); tests updated to use testing constructor.
Server control flow
src/server.rs
Registers graceful shutdown with .with_graceful_shutdown(shutdown_signal()) and adds shutdown_signal() awaiting Ctrl+C and SIGTERM.
API layer
src/api/dto.rs, src/api/monitoring.rs, src/api/mod.rs
Adds TorStatusInfo DTO, new GET /api/tor/status handler returning tor source/managed flag, and registers the path/schema in OpenAPI.
Frontend API surface
frontend/app/api.ts
Adds TorSource, TorStatusInfo, and monitoring.getTorStatus() to fetch /tor/status.
Frontend UI & styling
frontend/app/components/Toast.tsx, frontend/app/app.css, frontend/app/main.tsx, frontend/tsconfig.tsbuildinfo
Adds Toast component (auto-hide + onDismiss), slide-in-right keyframes and utility, app wiring to show a one-time toast when Tor is managed, and updates buildinfo.
Tests / CI
tests/api/mod.rs, .github/workflows/docker-build.yml
Tests use MakerManager::new_for_testing() to avoid starting Tor; workflow changes test container run/cleanup (removed --rm, uses docker rm -f).
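The API-layer row above adds a `TorStatusInfo` DTO. As a rough, dependency-free sketch of its shape (the real `src/api/dto.rs` presumably derives `Serialize`/`ToSchema` instead of hand-rolling JSON; the `to_json` helper is illustrative only):

```rust
/// Illustrative stand-in for the TorStatusInfo DTO returned by
/// GET /api/tor/status: which source provided Tor, and whether the
/// dashboard manages (started) the process itself.
struct TorStatusInfo {
    source: &'static str, // e.g. "system", "host", "downloaded"
    managed: bool,        // true when the dashboard spawned Tor
}

impl TorStatusInfo {
    fn to_json(&self) -> String {
        format!(
            "{{\"source\":\"{}\",\"managed\":{}}}",
            self.source, self.managed
        )
    }
}

fn main() {
    let info = TorStatusInfo { source: "host", managed: true };
    println!("{}", info.to_json());
}
```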

Sequence Diagram

sequenceDiagram
    participant Browser as Browser / React App
    participant API as Backend API
    participant State as AppState
    participant TorMgr as TorManager
    participant Toast as Toast Component

    Browser->>API: GET /api/tor/status
    API->>State: lock() -> tor_source()
    State->>TorMgr: source_label()
    TorMgr-->>State: "system" / "host" / "downloaded"
    State-->>API: TorStatusInfo { source, managed }
    API-->>Browser: 200 OK + TorStatusInfo
    Browser->>Browser: set torToast if managed & not shown
    Browser->>Toast: render with message
    Toast-->>Browser: shown, auto-hide after duration -> onDismiss

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

Suggested reviewers

  • mojoX911

Poem

🐰 I sniffed the ports and found a way,

Docker, host, or system—Tor can play.
A toast hops in with a cheerful chime,
Server sighs softly at shutdown time.
Hooray for tunnels, tidy and spry!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 51.72%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'fixes #73: autometic tor deployment' accurately summarizes the main change (implementing automatic Tor deployment), though it contains a typo ('autometic' instead of 'automatic').
  • Linked Issues Check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check: ✅ Passed. Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch tor_deployment


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/maker_manager/mod.rs (1)

114-126: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Keep MakerManager::new free of external process startup.

Constructing a manager now probes/spawns Tor before it even loads persisted state. That makes config-only code paths and the unit test later in this file (assign_available_maker_ports_skips_taken_local_ports) fail on machines without Tor or Docker, and it can block startup for up to the Docker/Tor timeouts. This belongs in a lazy/init step or an injected dependency, not the constructor.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/maker_manager/mod.rs` around lines 114 - 126, MakerManager::new currently
calls TorManager::detect_or_start (spawning/probing external processes) during
construction; move that startup out of the constructor so config-only and
unit-test paths don't require Tor/Docker. Change new (in mod.rs) to only
construct the struct (initialize tor_manager as None or accept a TorManager
dependency) and load persisted state (saved_configs/persistence) without calling
detect_or_start; add a separate init/start method (e.g.,
MakerManager::init_tor_or_start or inject_tor_manager) that runs
TorManager::detect_or_start and assigns tor_manager and related fields when the
runtime actually needs external services; update tests to call the new init
method or inject a mock TorManager so they no longer spawn external processes.
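The refactor this comment asks for (a side-effect-free constructor plus an explicit init step) might look like the following sketch. All names other than `MakerManager` are hypothetical stand-ins, not the project's actual API:

```rust
/// Stand-in for a handle to a detected/started Tor instance.
struct TorHandle(&'static str);

struct MakerManager {
    tor: Option<TorHandle>,
}

impl MakerManager {
    /// Constructor does no port probing and spawns no processes,
    /// so config-only paths and unit tests stay cheap and hermetic.
    fn new() -> Self {
        MakerManager { tor: None }
    }

    /// Explicit init step standing in for TorManager::detect_or_start();
    /// called only when the runtime actually needs external services.
    fn init_tor(&mut self) {
        self.tor = Some(TorHandle("system"));
    }

    fn tor_source(&self) -> &'static str {
        self.tor.as_ref().map(|t| t.0).unwrap_or("uninitialized")
    }
}

fn main() {
    let mut m = MakerManager::new();
    println!("before init: {}", m.tor_source());
    m.init_tor();
    println!("after init: {}", m.tor_source());
}
```

Tests can then construct the manager without ever touching Tor or Docker, and inject a mock in place of `init_tor` where needed.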
docker/docker-compose.yml (1)

34-47: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Don't expose an unauthenticated Tor ControlPort.

CookieAuthentication 0 is only safe if the control port stays private. Here the same service binds ControlPort on 0.0.0.0 and publishes 19051:9051, so anyone who can reach the Docker host can issue Tor control commands.

Suggested hardening
         SocksPort 0.0.0.0:9050
-        ControlPort 0.0.0.0:9051
+        ControlPort 127.0.0.1:9051
         DataDirectory /var/lib/tor
         CookieAuthentication 0
         EOF
@@
       - "19050:9050"
-      - "19051:9051"
       - "${DASHBOARD_PORT:-3000}:${DASHBOARD_PORT:-3000}"
       - "${BITCOIN_RPC_PORT:-38332}:${BITCOIN_RPC_PORT:-38332}"
       - "${BITCOIN_ZMQ_PORT:-28332}:${BITCOIN_ZMQ_PORT:-28332}"
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@docker/docker-compose.yml` around lines 34 - 47, The Tor service currently
sets ControlPort 0.0.0.0 and CookieAuthentication 0 in /tmp/torrc and publishes
port 19051:9051, which exposes an unauthenticated control port; fix by binding
ControlPort to localhost (e.g., ControlPort 127.0.0.1:9051) and re-enable
authentication (set CookieAuthentication 1 or configure HashedControlPassword)
in the torrc generation, and remove or avoid publishing the control port (remove
the "19051:9051" ports mapping) so only the container or a secured sidecar can
access ControlPort; update references in the Docker Compose snippet that write
/tmp/torrc and the ports section accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@frontend/app/api.ts`:
- Around line 396-407: Move the standalone Tor types and getTorStatus into the
existing monitoring namespace so the API surface is consistent: relocate the
TorSource and TorStatusInfo type declarations next to other domain types and
export them from the monitoring object, then change getTorStatus to be a method
on the monitoring object (e.g., monitoring.getTorStatus) that returns
Promise<TorStatusInfo> and delegates to get<TorStatusInfo>("/tor/status");
finally update any call sites/imports (for example the usage in main.tsx) to use
monitoring.getTorStatus and ensure references/imports are adjusted accordingly.

In `@frontend/app/app.css`:
- Around line 50-59: The keyframes declaration slideInRight violates the
keyframes-name-pattern lint rule; rename the keyframe to kebab-case (e.g.,
slide-in-right) and update every usage (animation, animation-name, and shorthand
animation properties) to match the new name; specifically change the `@keyframes`
block named slideInRight to slide-in-right and update any CSS rules that
reference slideInRight (including shorthand usages like animation: slideInRight
...) so names are consistent.

In `@frontend/app/components/Toast.tsx`:
- Around line 12-19: The useEffect currently depends on the onDismiss prop which
is changing identity on each parent render; wrap the onDismiss callback with
React's useEffectEvent (e.g., const onDismissEvent = useEffectEvent(() =>
onDismiss())) and call onDismissEvent from the dismiss timeout instead of
onDismiss, then remove onDismiss from the effect dependency array so timers only
depend on durationMs; update references in Toast.tsx where the dismiss timeout
invokes onDismiss to use the stable onDismissEvent.

In `@frontend/app/main.tsx`:
- Around line 13-44: Extract the App component into its own module and export it
so React Fast Refresh can track updates: create a new file that defines and
exports the App function (keeping its useState, useEffect logic that calls
getTorStatus and renders BrowserRouter/Routes and Toast) and update main.tsx to
import { App } from that new file and render <App />; ensure the exported symbol
name remains App and that getTorStatus, Toast, and route components (Home,
MakerDetails, AddMaker, MakerSetup) are imported in the new App file as they are
now.

In `@src/tor_manager.rs`:
- Around line 134-155: The Tor process is being bound to 0.0.0.0 (open to the
network) in spawn_host_process (and the other managed-startup path around lines
160-179); change the bind addresses so both SocksPort and ControlPort listen
only on loopback (e.g. 127.0.0.1 or localhost) instead of 0.0.0.0, and update
any matching string literals or torrc generation code that build
"0.0.0.0:{SOCKS_PORT}" / "0.0.0.0:{CONTROL_PORT}" so they produce
"127.0.0.1:{SOCKS_PORT}" / "127.0.0.1:{CONTROL_PORT}" (or equivalent), ensuring
the DataDirectory and other settings remain unchanged.
- Around line 25-36: The detect_or_start branch currently accepts any listener
on SOCKS_PORT as a usable system Tor instance but doesn't verify the control
port required later by maker initialization; update TorManager::detect_or_start
to also check the Tor control port before returning a System TorManager (e.g.,
call a control_port_reachable or attempt a ControlPort connect/check using the
CONTROL_PORT value or configured control_port). If the control port is not
reachable, do not return TorSource::System (log a warning and fall through to
start an internal Tor or return an error), so maker_manager's code that expects
a working control port (the consumer that passes control_port through) won't
fail later. Ensure logging includes which port failed and keep the existing
behavior when both SOCKS and control ports are reachable.

---

Outside diff comments:
In `@docker/docker-compose.yml`:
- Around line 34-47: The Tor service currently sets ControlPort 0.0.0.0 and
CookieAuthentication 0 in /tmp/torrc and publishes port 19051:9051, which
exposes an unauthenticated control port; fix by binding ControlPort to localhost
(e.g., ControlPort 127.0.0.1:9051) and re-enable authentication (set
CookieAuthentication 1 or configure HashedControlPassword) in the torrc
generation, and remove or avoid publishing the control port (remove the
"19051:9051" ports mapping) so only the container or a secured sidecar can
access ControlPort; update references in the Docker Compose snippet that write
/tmp/torrc and the ports section accordingly.

In `@src/maker_manager/mod.rs`:
- Around line 114-126: MakerManager::new currently calls
TorManager::detect_or_start (spawning/probing external processes) during
construction; move that startup out of the constructor so config-only and
unit-test paths don't require Tor/Docker. Change new (in mod.rs) to only
construct the struct (initialize tor_manager as None or accept a TorManager
dependency) and load persisted state (saved_configs/persistence) without calling
detect_or_start; add a separate init/start method (e.g.,
MakerManager::init_tor_or_start or inject_tor_manager) that runs
TorManager::detect_or_start and assigns tor_manager and related fields when the
runtime actually needs external services; update tests to call the new init
method or inject a mock TorManager so they no longer spawn external processes.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: aa03fb9b-6fc0-41b9-919e-72f5e2892e46

📥 Commits

Reviewing files that changed from the base of the PR and between 1e97a00 and 16293e3.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (16)
  • Cargo.toml
  • README.md
  • docker/docker-compose.yml
  • frontend/app/api.ts
  • frontend/app/app.css
  • frontend/app/components/Toast.tsx
  • frontend/app/main.tsx
  • frontend/tsconfig.tsbuildinfo
  • src/api/dto.rs
  • src/api/mod.rs
  • src/api/monitoring.rs
  • src/lib.rs
  • src/main.rs
  • src/maker_manager/mod.rs
  • src/server.rs
  • src/tor_manager.rs
💤 Files with no reviewable changes (1)
  • README.md

Comment thread frontend/app/main.tsx
Comment on lines +13 to +44
function App() {
  const [torToast, setTorToast] = useState<string | null>(null);

  useEffect(() => {
    if (sessionStorage.getItem("tor-toast-shown")) return;
    getTorStatus()
      .then(({ managed, source }) => {
        if (managed) {
          const label = source === "docker" ? "Docker" : "host binary";
          setTorToast(`Tor started via ${label}`);
          sessionStorage.setItem("tor-toast-shown", "1");
        }
      })
      .catch(() => {});
  }, []);

  return (
    <>
      <BrowserRouter>
        <Routes>
          <Route path="/" element={<Home />} />
          <Route path="/makerDetails/:makerId" element={<MakerDetails />} />
          <Route path="/addMaker" element={<AddMaker />} />
          <Route path="/makers/:makerId/setup" element={<MakerSetup />} />
        </Routes>
      </BrowserRouter>
      {torToast && (
        <Toast message={torToast} onDismiss={() => setTorToast(null)} />
      )}
    </>
  );
}


🧹 Nitpick | 🔵 Trivial | ⚡ Quick win

Move App to its own file to restore React Fast Refresh

The static analysis tool flags this correctly: React Fast Refresh only works when every component in a file is exported from that file. main.tsx is a pure entry point with no exports, so the App component defined here will never benefit from fast refresh during development.

♻️ Proposed fix — extract App

Create frontend/app/App.tsx:

+import { useEffect, useState } from "react";
+import { BrowserRouter, Routes, Route } from "react-router-dom";
+import { Toast } from "./components/Toast";
+import { getTorStatus } from "./api";
+import Home from "./routes/home";
+import MakerDetails from "./routes/makerDetails";
+import AddMaker from "./routes/addMaker";
+import MakerSetup from "./routes/makersetup";
+
+export default function App() {
+  const [torToast, setTorToast] = useState<string | null>(null);
+  // ... same body as current App
+}

Then simplify main.tsx:

 import { createRoot } from "react-dom/client";
-import { BrowserRouter, Routes, Route } from "react-router-dom";
 import { StrictMode } from "react";
 import "@/app.css";
-import Home from "./routes/home";
-// ... other imports
-import { Toast } from "./components/Toast";
-import { getTorStatus } from "./api";
-
-function App() {
-  // ...
-}
+import App from "./App";

 createRoot(document.getElementById("root")!).render(
   <StrictMode>
     <App />
   </StrictMode>,
 );
🧰 Tools
🪛 GitHub Check: frontend

[warning] 13-13:
Fast refresh only works when a file has exports. Move your component(s) to a separate file

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@frontend/app/main.tsx` around lines 13 - 44, Extract the App component into
its own module and export it so React Fast Refresh can track updates: create a
new file that defines and exports the App function (keeping its useState,
useEffect logic that calls getTorStatus and renders BrowserRouter/Routes and
Toast) and update main.tsx to import { App } from that new file and render <App
/>; ensure the exported symbol name remains App and that getTorStatus, Toast,
and route components (Home, MakerDetails, AddMaker, MakerSetup) are imported in
the new App file as they are now.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@src/maker_manager/mod.rs`:
- Around line 119-122: The public function new_for_testing is flagged dead_code
because it's only referenced from the integration test crate; add the attribute
#[allow(dead_code)] above the pub fn new_for_testing(config_dir: PathBuf) ->
Result<Self> to silence the lib-level dead_code lint while keeping the function
public and callable from tests — no other logic changes are required (this
function forwards to Self::new_with_tor and uses TorManager::noop()).
- Around line 114-117: MakerManager::new (which calls
TorManager::detect_or_start and may call ensure_docker_daemon) performs long
blocking operations and is invoked by Server::new inside the Tokio runtime,
blocking the executor; fix by moving Server::new/MakerManager::new invocation
outside the async context (construct Server before entering #[tokio::main]) or
by calling Server::new within tokio::task::spawn_blocking at the call site so
the blocking TorManager::detect_or_start and ensure_docker_daemon calls run on a
blocking thread and do not block the Tokio worker.

In `@src/tor_manager.rs`:
- Around line 27-33: Add a dead-code allowance to the TorManager::noop function
so it no longer triggers clippy warnings: annotate the pub fn noop() -> Self
definition in the TorManager impl with #[allow(dead_code)] (or alternatively
gate it with #[cfg(test)] if you prefer test-only visibility) so the lint error
is resolved while keeping the function signature intact.
- Around line 175-198: The Docker invocation currently uses the unpinned image
string "osminogin/tor-simple" in the std::process::Command call (the args array
that includes "osminogin/tor-simple"); change this to a pinned image reference
(for example "osminogin/tor-simple:0.4.9.5@sha256:<digest>") or make the image
name a configurable constant (e.g., TOR_IMAGE) so the digest/tag can be set and
reviewed, then update the args entry that currently contains
"osminogin/tor-simple" to use that pinned value and ensure any tests or docs
reflect the new constant/config option.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: fa82914a-620d-4a2e-aea4-5da0941e22c7

📥 Commits

Reviewing files that changed from the base of the PR and between 16293e3 and c851a8a.

📒 Files selected for processing (3)
  • src/maker_manager/mod.rs
  • src/tor_manager.rs
  • tests/api/mod.rs

Comment thread src/maker_manager/mod.rs
Comment on lines 114 to +117
pub fn new(config_dir: PathBuf) -> Result<Self> {
    let tor_manager = TorManager::detect_or_start(&config_dir)?;
    Self::new_with_tor(config_dir, tor_manager)
}


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Locate every call site of MakerManager::new to determine async vs sync context.
rg -nP -C5 '\bMakerManager::new\s*\('

Repository: citadel-tech/maker-dashboard

Length of output: 705


🏁 Script executed:

#!/bin/bash
# Check the Server::new call site in src/server.rs and surrounding context
head -n 100 src/server.rs | tail -n 50

Repository: citadel-tech/maker-dashboard

Length of output: 1918


🏁 Script executed:

#!/bin/bash
# Find main.rs or startup code to see how Server is instantiated
find . -type f -name "main.rs" -o -name "lib.rs" | head -5

Repository: citadel-tech/maker-dashboard

Length of output: 99


🏁 Script executed:

#!/bin/bash
# Double-check for all MakerManager::new call sites including async contexts
rg -nA10 'MakerManager::new' --type rust

Repository: citadel-tech/maker-dashboard

Length of output: 2157


🏁 Script executed:

#!/bin/bash
# Read main.rs to understand server initialization
cat src/main.rs

Repository: citadel-tech/maker-dashboard

Length of output: 1578


MakerManager::new is called inside the async runtime, blocking the Tokio worker for ~90s.

In src/main.rs, Server::new is invoked directly within the #[tokio::main] async context. Since the Server::new → MakerManager::new → TorManager::detect_or_start chain performs synchronous port probes, spawns a host Tor and waits up to 30s, and may call ensure_docker_daemon (60s timeout), this blocks the Tokio executor thread during initialization. While the current code has no concurrent tasks at startup, this violates async best practices. Either:

  • Move Server::new outside the async runtime by calling it before #[tokio::main] yields, or
  • Wrap Server::new in tokio::task::spawn_blocking at the call site.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/maker_manager/mod.rs` around lines 114 - 117, MakerManager::new (which
calls TorManager::detect_or_start and may call ensure_docker_daemon) performs
long blocking operations and is invoked by Server::new inside the Tokio runtime,
blocking the executor; fix by moving Server::new/MakerManager::new invocation
outside the async context (construct Server before entering #[tokio::main]) or
by calling Server::new within tokio::task::spawn_blocking at the call site so
the blocking TorManager::detect_or_start and ensure_docker_daemon calls run on a
blocking thread and do not block the Tokio worker.
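The pattern the fix asks for, running slow blocking setup off the executor thread, can be illustrated with a stdlib-only analogue of `tokio::task::spawn_blocking` (a dedicated thread plus a channel). The function names and the simulated delay are assumptions for the sketch:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Stand-in for TorManager::detect_or_start: slow, synchronous work
/// (port probes, process spawning, Docker daemon waits).
fn slow_detect() -> &'static str {
    thread::sleep(Duration::from_millis(50));
    "system"
}

/// Run the blocking detection on its own thread and hand the result
/// back over a channel, so the caller's (async) executor thread is
/// never parked inside the detection itself.
fn detect_off_thread() -> &'static str {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || tx.send(slow_detect()).unwrap());
    rx.recv().unwrap()
}

fn main() {
    println!("tor source: {}", detect_off_thread());
}
```

Inside an actual Tokio program, `tokio::task::spawn_blocking(|| MakerManager::new(dir)).await` achieves the same effect without a hand-rolled channel.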


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
.github/workflows/docker-build.yml (1)

45-65: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Always tear down test-container on step failure.

Because this step exits on command failure, the cleanup on Line 65 is skipped if the final curl | jq check fails. That leaves the container running for the rest of the job and makes the success/failure paths inconsistent. Use an EXIT trap so cleanup always runs.

Suggested fix
       - name: Test container starts and responds
         run: |
+          trap 'docker rm -f test-container >/dev/null 2>&1 || true' EXIT
+
           docker run -d --name test-container -p 3000:3000 \
             -e DASHBOARD_HOST=0.0.0.0 -e DASHBOARD_ALLOW_REMOTE=true \
             maker-dashboard:test
@@
           done
           # Verify API responds
           curl -sf http://localhost:3000/api/makers | jq .
-          docker rm -f test-container || true
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In @.github/workflows/docker-build.yml around lines 45 - 65, The workflow step
that runs the test container ("test-container") doesn't guarantee cleanup if any
command fails (the final curl|jq can cause early exit and skip docker rm); add a
cleanup routine and register it with a trap on EXIT so the container is always
removed. Specifically, define a cleanup function in the run block that does
docker logs (on failure) and docker rm -f test-container (or the captured
container ID), then set trap cleanup EXIT before running any commands; ensure
the trap runs regardless of success/failure and that the container name/ID
(test-container) referenced in the cleanup matches the one created.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@frontend/app/components/Toast.tsx`:
- Around line 24-30: The Toast component (Toast.tsx) renders transient status UI
but lacks ARIA live region attributes; update the outer container rendered by
the Toast component to include role="status" and aria-live="polite" (and
optionally aria-atomic="true") so screen readers announce the message; modify
the div with className "fixed bottom-6 right-6 z-50 animate-slide-in-right" to
include these attributes in the JSX.

In `@src/maker_manager/mod.rs`:
- Around line 115-121: The current fallback in MakerManager where
TorManager::detect_or_start(...) maps failures to TorManager::noop() loses the
error state (since noop() reports source System); instead preserve and expose
the failure by introducing an explicit unavailable/error state and returning
that here: add a TorSource::Unavailable (or TorManager::error variant) and set
it when detect_or_start fails, or return a TorManager instance that carries the
error state rather than using TorManager::noop(); update uses of tor_source(),
/api/tor/status, and any code that inspects TorManager (e.g., tor_source(),
TorManager::noop(), and places that match on TorSource) to handle the new
Unavailable/Error variant so the UI can surface “Tor unavailable” without
waiting for maker startup to fail.

In `@src/tor_manager.rs`:
- Around line 57-87: Both host and docker branches currently only call
wait_for_port(SOCKS_PORT) causing later maker initialization to fail if
CONTROL_PORT isn't ready and leaking resources; update both flows to wait for
both ports (e.g. implement wait_for_ports(&[SOCKS_PORT, CONTROL_PORT]) and call
it instead of wait_for_port), and on failure ensure proper cleanup: if
spawn_host_process(...) returned a Child, kill/terminate the child before
returning Err, and if spawn_docker() started a container, stop the container (as
already attempted) before returning Err; update references in this change
include find_binary, spawn_host_process, wait_for_port, SOCKS_PORT,
CONTROL_PORT, TorManager, and spawn_docker.
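The suggested `wait_for_ports` helper could look like the following stdlib-only sketch, which polls every port until all accept a TCP connection or an overall deadline passes. Names, intervals, and timeouts are illustrative assumptions, not the project's code:

```rust
use std::net::{SocketAddr, TcpStream};
use std::time::{Duration, Instant};

/// Poll each port until all of them accept a connection, or give up
/// at the deadline. On `false`, the caller is expected to clean up
/// (kill the spawned child or stop the container) before returning Err.
fn wait_for_ports(ports: &[u16], timeout: Duration) -> bool {
    let deadline = Instant::now() + timeout;
    'outer: loop {
        for &port in ports {
            let addr: SocketAddr = ([127, 0, 0, 1], port).into();
            if TcpStream::connect_timeout(&addr, Duration::from_millis(200)).is_err() {
                if Instant::now() >= deadline {
                    return false;
                }
                std::thread::sleep(Duration::from_millis(100));
                continue 'outer; // re-check all ports from the start
            }
        }
        return true; // every port answered
    }
}

fn main() {
    // Bind two ephemeral listeners standing in for SOCKS_PORT and CONTROL_PORT.
    let a = std::net::TcpListener::bind("127.0.0.1:0").unwrap();
    let b = std::net::TcpListener::bind("127.0.0.1:0").unwrap();
    let ports = [a.local_addr().unwrap().port(), b.local_addr().unwrap().port()];
    println!("ready: {}", wait_for_ports(&ports, Duration::from_secs(2)));
}
```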

---

Outside diff comments:
In @.github/workflows/docker-build.yml:
- Around line 45-65: The workflow step that runs the test container
("test-container") doesn't guarantee cleanup if any command fails (the final
curl|jq can cause early exit and skip docker rm); add a cleanup routine and
register it with a trap on EXIT so the container is always removed.
Specifically, define a cleanup function in the run block that does docker logs
(on failure) and docker rm -f test-container (or the captured container ID),
then set trap cleanup EXIT before running any commands; ensure the trap runs
regardless of success/failure and that the container name/ID (test-container)
referenced in the cleanup matches the one created.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 3f2d81aa-7b04-4015-af3b-dce4fa872d0c

📥 Commits

Reviewing files that changed from the base of the PR and between c851a8a and cf1047f.

📒 Files selected for processing (7)
  • .github/workflows/docker-build.yml
  • frontend/app/api.ts
  • frontend/app/app.css
  • frontend/app/components/Toast.tsx
  • frontend/app/main.tsx
  • src/maker_manager/mod.rs
  • src/tor_manager.rs

Comment on lines +24 to +30
return (
<div className="fixed bottom-6 right-6 z-50 animate-slide-in-right">
<div className="flex items-center gap-3 rounded-lg border border-green-700 bg-gray-950 px-4 py-3 text-sm text-green-400 shadow-lg">
<span className="text-green-500">✓</span>
{message}
</div>
</div>

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Expose the toast as a live region.

This message is transient status UI, but it has no role/aria-live, so assistive tech may never announce it.

Suggested fix
-    <div className="fixed bottom-6 right-6 z-50 animate-slide-in-right">
+    <div
+      className="fixed bottom-6 right-6 z-50 animate-slide-in-right"
+      role="status"
+      aria-live="polite"
+      aria-atomic="true"
+    >
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
return (
<div className="fixed bottom-6 right-6 z-50 animate-slide-in-right">
<div className="flex items-center gap-3 rounded-lg border border-green-700 bg-gray-950 px-4 py-3 text-sm text-green-400 shadow-lg">
<span className="text-green-500">✓</span>
{message}
</div>
</div>
return (
<div
className="fixed bottom-6 right-6 z-50 animate-slide-in-right"
role="status"
aria-live="polite"
aria-atomic="true"
>
<div className="flex items-center gap-3 rounded-lg border border-green-700 bg-gray-950 px-4 py-3 text-sm text-green-400 shadow-lg">
<span className="text-green-500">✓</span>
{message}
</div>
</div>
)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@frontend/app/components/Toast.tsx` around lines 24 - 30, The Toast component
(Toast.tsx) renders transient status UI but lacks ARIA live region attributes;
update the outer container rendered by the Toast component to include
role="status" and aria-live="polite" (and optionally aria-atomic="true") so
screen readers announce the message; modify the div with className "fixed
bottom-6 right-6 z-50 animate-slide-in-right" to include these attributes in the
JSX.

Comment thread src/maker_manager/mod.rs
Comment on lines +115 to +121
let tor_manager = TorManager::detect_or_start(&config_dir).unwrap_or_else(|e| {
tracing::warn!(
"Tor could not be started: {}. Tor-dependent makers will fail to start.",
e
);
TorManager::noop()
});

🛠️ Refactor suggestion | 🟠 Major | 🏗️ Heavy lift

Don’t collapse Tor bootstrap failure into the system state.

If detect_or_start fails, this substitutes TorManager::noop(), whose source is still System. That makes tor_source() and /api/tor/status indistinguishable from a real externally managed Tor instance, so the UI gets no way to surface “Tor unavailable” until maker startup fails later.

Please preserve an explicit unavailable/error state here instead of coercing the failure into system Tor. That can be a new TorSource variant or separate status field, but the current fallback loses important state.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/maker_manager/mod.rs` around lines 115 - 121, The current fallback in
MakerManager where TorManager::detect_or_start(...) maps failures to
TorManager::noop() loses the error state (since noop() reports source System);
instead preserve and expose the failure by introducing an explicit
unavailable/error state and returning that here: add a TorSource::Unavailable
(or TorManager::error variant) and set it when detect_or_start fails, or return
a TorManager instance that carries the error state rather than using
TorManager::noop(); update uses of tor_source(), /api/tor/status, and any code
that inspects TorManager (e.g., tor_source(), TorManager::noop(), and places
that match on TorSource) to handle the new Unavailable/Error variant so the UI
can surface “Tor unavailable” without waiting for maker startup to fail.
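A minimal Rust sketch of the explicit error state suggested above. The `Unavailable` variant, the `unavailable()` constructor, and the `status_label()` helper are hypothetical names layered onto the PR's `TorSource`/`TorManager` types, shown only to illustrate how the failure could be preserved instead of coerced into `System`:

```rust
// Hypothetical sketch: carry the detect_or_start failure in the type so
// /api/tor/status can report "unavailable" instead of masquerading as
// system-managed Tor. Variant and helper names are assumptions.
#[derive(Debug, Clone, PartialEq)]
enum TorSource {
    System,
    HostProcess,
    Docker,
    Unavailable(String), // holds the detect_or_start error message
}

struct TorManager {
    source: TorSource,
}

impl TorManager {
    // Replacement for the lossy noop() fallback: preserve the error.
    fn unavailable(err: impl Into<String>) -> Self {
        TorManager {
            source: TorSource::Unavailable(err.into()),
        }
    }

    // What a /api/tor/status handler might serialize.
    fn status_label(&self) -> String {
        match &self.source {
            TorSource::System => "system".into(),
            TorSource::HostProcess => "host-process".into(),
            TorSource::Docker => "docker".into(),
            TorSource::Unavailable(e) => format!("unavailable: {e}"),
        }
    }
}

fn main() {
    let m = TorManager::unavailable("bootstrap timed out");
    assert_eq!(m.status_label(), "unavailable: bootstrap timed out");
}
```

With this shape, the frontend toast can branch on the `unavailable:` prefix (or a dedicated JSON field) rather than inferring failure from a later maker startup error.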

Comment thread src/tor_manager.rs Outdated
Comment on lines +57 to +87
if let Some(binary) = find_binary("tor") {
tracing::info!(
"Found tor binary at {}, spawning host process",
binary.display()
);
let child = spawn_host_process(&binary, config_dir)?;
wait_for_port(SOCKS_PORT)?;
return Ok(TorManager {
source: TorSource::HostProcess,
process: Some(child),
container_id: None,
});
}

if find_binary("docker").is_some() {
tracing::info!("Found docker, ensuring daemon is running...");
ensure_docker_daemon()?;
tracing::info!("Docker daemon ready, spawning tor container");
let container_id = spawn_docker()
.map_err(|e| anyhow::anyhow!("Failed to start tor Docker container: {}", e))?;
if let Err(e) = wait_for_port(SOCKS_PORT) {
let _ = std::process::Command::new("docker")
.args(["stop", &container_id])
.output();
return Err(e);
}
return Ok(TorManager {
source: TorSource::Docker,
process: None,
container_id: Some(container_id),
});

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Wait for both Tor ports before returning a managed instance.

These branches only wait for SOCKS_PORT, but maker initialization later needs CONTROL_PORT too. If Tor comes up with SOCKS only, startup succeeds here and the first maker fails later. On the host-process path, that failure also leaks the spawned tor child because the error return drops Child without killing it.

Suggested fix
-            let child = spawn_host_process(&binary, config_dir)?;
-            wait_for_port(SOCKS_PORT)?;
+            let mut child = spawn_host_process(&binary, config_dir)?;
+            if let Err(e) = wait_for_ports(&[SOCKS_PORT, CONTROL_PORT]) {
+                let _ = child.kill();
+                let _ = child.wait();
+                return Err(e);
+            }
             return Ok(TorManager {
                 source: TorSource::HostProcess,
                 process: Some(child),
                 container_id: None,
             });
@@
-            if let Err(e) = wait_for_port(SOCKS_PORT) {
+            if let Err(e) = wait_for_ports(&[SOCKS_PORT, CONTROL_PORT]) {
                 let _ = std::process::Command::new("docker")
                     .args(["stop", &container_id])
                     .output();
                 return Err(e);
             }

A small helper such as wait_for_ports(&[u16]) keeps the two paths consistent.
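A dependency-free sketch of that helper, using `String` errors in place of `anyhow` and a shortened startup timeout for illustration (the real values come from the PR's constants):

```rust
use std::net::{SocketAddr, TcpListener, TcpStream};
use std::time::{Duration, Instant};

// Shortened values for the sketch; the PR defines its own constants.
const CONNECT_TIMEOUT: Duration = Duration::from_millis(300);
const POLL_INTERVAL: Duration = Duration::from_millis(100);
const STARTUP_TIMEOUT: Duration = Duration::from_secs(5);

// A loopback port is reachable if a TCP connect succeeds quickly.
fn port_reachable(port: u16) -> bool {
    let addr: SocketAddr = ([127, 0, 0, 1], port).into();
    TcpStream::connect_timeout(&addr, CONNECT_TIMEOUT).is_ok()
}

// Wait until *every* port accepts a connection, or give up after the
// deadline — mirroring the wait_for_ports(&[SOCKS_PORT, CONTROL_PORT])
// shape proposed in the review.
fn wait_for_ports(ports: &[u16]) -> Result<(), String> {
    let deadline = Instant::now() + STARTUP_TIMEOUT;
    while Instant::now() < deadline {
        if ports.iter().all(|p| port_reachable(*p)) {
            return Ok(());
        }
        std::thread::sleep(POLL_INTERVAL);
    }
    Err(format!(
        "ports {:?} not reachable within {:?}",
        ports, STARTUP_TIMEOUT
    ))
}

fn main() {
    // Demo: bind a listener on an ephemeral port and wait for it.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let port = listener.local_addr().unwrap().port();
    assert!(wait_for_ports(&[port]).is_ok());
}
```

Keeping both ports behind one call also makes the kill-on-failure cleanup a single `if let Err(e) = wait_for_ports(...)` block in each branch.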

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/tor_manager.rs` around lines 57 - 87, Both host and docker branches
currently only call wait_for_port(SOCKS_PORT) causing later maker initialization
to fail if CONTROL_PORT isn't ready and leaking resources; update both flows to
wait for both ports (e.g. implement wait_for_ports(&[SOCKS_PORT, CONTROL_PORT])
and call it instead of wait_for_port), and on failure ensure proper cleanup: if
spawn_host_process(...) returned a Child, kill/terminate the child before
returning Err, and if spawn_docker() started a container, stop the container (as
already attempted) before returning Err. References touched by this change
include find_binary, spawn_host_process, wait_for_port, SOCKS_PORT,
CONTROL_PORT, TorManager, and spawn_docker.

@keraliss keraliss linked an issue May 5, 2026 that may be closed by this pull request

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 7

♻️ Duplicate comments (1)
src/tor_manager.rs (1)

66-108: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Wait on both ports and avoid leaking the spawned Tor child on startup failure.

All three managed branches still call wait_for_port(SOCKS_PORT)? only, and if it returns Err, the just-spawned Child is dropped without kill() — leaving an orphaned tor running on the user's box. The maker layer also depends on CONTROL_PORT, so SOCKS-only readiness is not sufficient.

Suggested fix (host-process branch shown; mirror for cached + downloaded paths)
-        if let Some(binary) = find_binary("tor") {
-            tracing::info!("Found system tor at {}", binary.display());
-            let child = spawn_host_process(&binary, None)?;
-            wait_for_port(SOCKS_PORT)?;
-            return Ok(TorManager {
-                source: TorSource::HostProcess,
-                process: Some(child),
-            });
-        }
+        if let Some(binary) = find_binary("tor") {
+            tracing::info!("Found system tor at {}", binary.display());
+            let mut child = spawn_host_process(&binary, None)?;
+            if let Err(e) = wait_for_ports(&[SOCKS_PORT, CONTROL_PORT]) {
+                let _ = child.kill();
+                let _ = child.wait();
+                return Err(e);
+            }
+            return Ok(TorManager {
+                source: TorSource::HostProcess,
+                process: Some(child),
+            });
+        }

Add a small helper:

fn wait_for_ports(ports: &[u16]) -> anyhow::Result<()> {
    for &p in ports { wait_for_port(p)?; }
    Ok(())
}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/tor_manager.rs` around lines 66 - 108, The three managed-branch flows
spawn a Tor child (via spawn_host_process) but only call
wait_for_port(SOCKS_PORT) and will drop the Child on wait error, leaking a
running tor; change each branch that spawns a child (the HostProcess branch, the
cached-bundle branch using tor_binary_in_bundle, and the downloaded-bundle
branch using download_tor_bundle) to wait for both SOCKS_PORT and CONTROL_PORT
before returning and to kill the spawned Child if port waiting fails. Add a tiny
helper (e.g., wait_for_ports or similar) to wait on &[SOCKS_PORT, CONTROL_PORT],
call it after spawning and before returning, and on Err from that helper call
child.kill() (and wait on it) then return the error instead of dropping the
Child.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@Cargo.toml`:
- Line 29: Update the ureq dependency in Cargo.toml from 2 to a 3.x version and
change the two HTTP download sites (functions download_tor_bundle and
verify_checksum_from_manifest) to use ureq 3's timeout and bounded-read APIs:
replace bare ureq::get().call() usage with calls that set timeout_connect(...)
and timeout_recv_body(...), and when reading response bodies use
BodyWithConfig::limit(...) (or the corresponding response-with-config helper) to
enforce a maximum read size; ensure imports and any result handling are adjusted
for the 3.x API changes.

In `@src/tor_manager.rs`:
- Around line 325-365: The extract_archive function currently shells out to
external tools (tar/powershell) which is brittle; replace that implementation
with a pure-Rust solution using flate2+tar for .tar.gz and the zip crate for
.zip files, update Cargo.toml to include flate2="1", tar="0.4", and zip="0.6",
detect archive type from the archive Path (e.g., .tar.gz/.tgz vs .zip), open the
file with std::fs::File, stream-decompress .tar.gz with flate2::read::GzDecoder
into tar::Archive and unpack to dest, and for .zip use zip::ZipArchive to
extract entries while preserving permissions and returning any I/O errors
(include context in anyhow errors) instead of spawning external commands; keep
the function signature extract_archive(archive: &Path, dest: &Path) ->
anyhow::Result<()> and ensure all errors propagate with context rather than
swallowing stdout/stderr.
- Around line 300-323: Replace the OS-specific shell-out in sha256_of_file with
a pure-Rust SHA-256 implementation: add sha2 = "0.10" (and hex = "0.4" or use
sha2's output formatting) to Cargo.toml, open the file with std::fs::File in
sha256_of_file, stream its contents into a sha2::Sha256 hasher (e.g., via
std::io::copy to the hasher or reading in chunks), finalize the hasher and
return the hex-encoded digest as the function result; remove all
platform-specific Command blocks and the Windows-specific stdout parsing.
- Around line 167-180: spawn_host_process currently ignores the provided config
directory and always writes tor state into coinswap_tor_dir(); change its
signature to accept a config_dir or tor_runtime_dir (e.g., &Path) and use
config_dir.join("tor") (or a derived tor_runtime_dir) instead of
coinswap_tor_dir() for both DataDirectory and torrc_path (and
create_dir_all/write to that path); update every call site (notably
detect_or_start) to pass config_dir.join("tor") and keep coinswap_tor_dir() only
as a fallback when no config_dir is supplied.
- Around line 241-248: The code currently treats verify_checksum_from_manifest
failures as non-fatal; change that so any Err from
verify_checksum_from_manifest(&checksum_url, &filename, &archive_path) causes
immediate cleanup and error propagation instead of proceeding to
extract_archive. Specifically, on Err(e) remove the partial archive
(std::fs::remove_file(&archive_path)), return or propagate the error to the
caller (don’t call extract_archive), and only allow bypassing this fatal
behavior via an explicit opt‑in flag if such a configuration flag is present;
update the logic around
checksum_url/filename/archive_path/verify_checksum_from_manifest accordingly.
- Around line 232-239: The download currently uses ureq::get(&url).call() with
no timeouts and std::io::copy on response.into_reader(), so a slow/malicious
server can hang or produce an oversized file; fix by creating a ureq::Agent with
explicit connect and read timeouts (e.g., via
ureq::AgentBuilder::new().timeout_connect(...).timeout_read(...).build()), use
that agent to perform the GET for the archive (the code that produces response
and writes to archive_path), and cap the bytes written by wrapping the response
reader in an std::io::Take to enforce a max size before calling std::io::copy;
apply the same agent+timeouts and response-size cap to the manifest fetch in
verify_checksum_from_manifest as well.
- Around line 11-13: The download extension variable ext in src/tor_manager.rs
is currently set conditionally to "zip" for Windows which causes 404s; change
the ext assignment so ext is always "tar.gz" (remove the if os == "windows"
branch) where ext is defined (used with TOR_ARCHIVE_BASE and TOR_VERSION to
build the expert bundle URL), so all platforms request .tar.gz expert bundles;
preserve existing extraction logic that handles .tar.gz on Windows.
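The always-`tar.gz` naming rule from the first bullet can be sketched as follows. `TOR_VERSION` and the filename pattern are illustrative placeholders, not the PR's actual constants:

```rust
// Hypothetical version string; the PR defines its own TOR_VERSION.
const TOR_VERSION: &str = "13.5.7";

// Build the expert-bundle filename. Previously ext was "zip" on
// Windows, which 404s: the expert bundles are published as .tar.gz
// for every platform, so the extension is now unconditional.
fn bundle_filename(os: &str, arch: &str) -> String {
    let ext = "tar.gz"; // no per-OS branch
    format!("tor-expert-bundle-{os}-{arch}-{TOR_VERSION}.{ext}")
}

fn main() {
    for os in ["linux", "macos", "windows"] {
        let name = bundle_filename(os, "x86_64");
        assert!(name.ends_with(".tar.gz"));
    }
}
```

The extraction side stays as described: the .tar.gz handling path already works on Windows, so only the download URL construction changes.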

---

Duplicate comments:
In `@src/tor_manager.rs`:
- Around line 66-108: The three managed-branch flows spawn a Tor child (via
spawn_host_process) but only call wait_for_port(SOCKS_PORT) and will drop the
Child on wait error, leaking a running tor; change each branch that spawns a
child (the HostProcess branch, the cached-bundle branch using
tor_binary_in_bundle, and the downloaded-bundle branch using
download_tor_bundle) to wait for both SOCKS_PORT and CONTROL_PORT before
returning and to kill the spawned Child if port waiting fails. Add a tiny helper
(e.g., wait_for_ports or similar) to wait on &[SOCKS_PORT, CONTROL_PORT], call
it after spawning and before returning, and on Err from that helper call
child.kill() (and wait on it) then return the error instead of dropping the
Child.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 57ee7255-f24f-4189-9ee3-a6c0eefc7e61

📥 Commits

Reviewing files that changed from the base of the PR and between cf1047f and 9ff4e0a.

📒 Files selected for processing (2)
  • Cargo.toml
  • src/tor_manager.rs

Comment thread Cargo.toml Outdated
Comment thread src/tor_manager.rs Outdated
Comment thread src/tor_manager.rs Outdated
Comment thread src/tor_manager.rs Outdated
Comment thread src/tor_manager.rs Outdated
Comment thread src/tor_manager.rs Outdated
Comment thread src/tor_manager.rs Outdated

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

♻️ Duplicate comments (1)
src/tor_manager.rs (1)

70-114: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Wait for both SOCKS and CONTROL ports, and clean up the spawned child on failure.

All three managed-startup branches (lines 73-78, 89-95, 106-114) only wait_for_port(SOCKS_PORT)?. The maker code passes a control_port to coinswap, so if Tor comes up SOCKS-only within STARTUP_TIMEOUT, this returns Ok and the first maker fails later trying to reach 9051.

Additionally, when wait_for_port returns Err, the just-spawned child is moved into ? and dropped without kill()/wait(), leaking the Tor process (which is still holding the data dir lock). On the next startup attempt the new Tor will fail because the old one still owns the lock.

♻️ Suggested fix
-fn wait_for_port(port: u16) -> anyhow::Result<()> {
+fn wait_for_ports(ports: &[u16]) -> anyhow::Result<()> {
     let deadline = Instant::now() + STARTUP_TIMEOUT;
     while Instant::now() < deadline {
-        if port_reachable(port) {
+        if ports.iter().all(|p| port_reachable(*p)) {
             return Ok(());
         }
         std::thread::sleep(POLL_INTERVAL);
     }
     Err(anyhow::anyhow!(
-        "Tor did not become reachable on port {} within {:?}",
-        port,
+        "Tor did not become reachable on ports {:?} within {:?}",
+        ports,
         STARTUP_TIMEOUT
     ))
 }

…then at each call site:

-            let child = spawn_host_process(&binary, None, &config_dir.join("tor"))?;
-            wait_for_port(SOCKS_PORT)?;
+            let mut child = spawn_host_process(&binary, None, &config_dir.join("tor"))?;
+            if let Err(e) = wait_for_ports(&[SOCKS_PORT, CONTROL_PORT]) {
+                let _ = child.kill();
+                let _ = child.wait();
+                return Err(e);
+            }

Apply the same kill-on-failure pattern to the cached-bundle and downloaded-bundle branches.

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@src/tor_manager.rs`:
- Line 8: The CONNECT_TIMEOUT constant is too long (2s) for loopback probes and
slows both the initial "is Tor running?" checks and the wait_for_port polling;
reduce CONNECT_TIMEOUT to a shorter value (e.g. Duration::from_millis(300) or
anywhere in the 200–500 ms range) so failed connect attempts return quickly,
keeping wait_for_port's 500 ms sleep/poll cadence effective; update the
CONNECT_TIMEOUT constant and ensure functions that rely on it (the initial Tor
probe logic and the wait_for_port routine) use the new shorter timeout.
- Around line 298-303: The current lookup that builds expected_hash uses
manifest.lines().find(|l| l.contains(filename)) which can match substrings;
update the logic around expected_hash so you parse each line into
whitespace-separated tokens and require the second token exactly equals filename
(i.e., split the line, check tokens.get(1) == Some(&filename) or equivalent)
before taking the first token as the hash; ensure you return the same error via
ok_or_else if no exact match is found.
- Around line 349-368: The Windows-specific extraction fallback incorrectly
calls PowerShell's Expand-Archive on a tar.gz (archive_str) which it can't
handle; update the Windows block (the std::process::Command::new("tar") ->
.args(["-xf", archive_str, "-C", dest_str]) path) to remove the PowerShell
fallback and instead return a clear error if tar is missing or fails (e.g.,
Err(anyhow::anyhow!("tar not found or failed to extract '{}' — please install
tar (bsdtar) or ensure it's on PATH", archive_str))). Keep archive_str and
dest_str usage and only change the fallback behavior so we don't attempt
Expand-Archive on tar.gz.
- Around line 240-247: The download path using io::copy with
response.into_reader().take(MAX_ARCHIVE_BYTES) can silently succeed when the
server is still streaming exactly MAX_ARCHIVE_BYTES; detect truncation by
reading one extra byte and explicitly failing before checksum: change the copy
call to read up to MAX_ARCHIVE_BYTES + 1 (use .take(MAX_ARCHIVE_BYTES + 1) or
equivalent), then after the copy check if bytes > MAX_ARCHIVE_BYTES (or bytes ==
MAX_ARCHIVE_BYTES+1) and if so remove the partial file at archive_path and
return an error like "archive exceeded the MAX_ARCHIVE_BYTES cap" so checksum
verification is not attempted; ensure this logic is applied around the block
that creates the file and calls io::copy (referencing MAX_ARCHIVE_BYTES,
response.into_reader(), archive_path, and the surrounding download/checksum
flow).
- Around line 146-164: The find_binary function currently only uses the Unix
"which" and Unix-only prefixes; update it to be cross-platform by adding
Windows-specific logic: on Windows, call "where" (or use the pathsearch crate)
and try name + each PATHEXT extension, and check common Windows install prefixes
(ProgramFiles, ProgramFiles(x86), LocalAppData) for name.exe; on non-Windows
keep the existing "which" lookup and Unix prefixes. Ensure you still return
Some(PathBuf) when a valid executable is found and None otherwise; reference the
find_binary function to locate the code to modify.
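The exact-token manifest lookup suggested above (the second whitespace-separated field must equal the filename, rather than a substring `contains` match) can be sketched as:

```rust
// Look up the expected hash for `filename` in a "HASH  NAME" manifest.
// An exact equality check on the second token avoids false matches
// such as a ".asc" sidecar line that merely contains the filename.
fn expected_hash<'a>(manifest: &'a str, filename: &str) -> Option<&'a str> {
    manifest.lines().find_map(|line| {
        let mut tokens = line.split_whitespace();
        let hash = tokens.next()?;
        let name = tokens.next()?;
        (name == filename).then_some(hash)
    })
}

fn main() {
    let manifest = "\
abc123  tor-expert-bundle-linux-x86_64-13.0.tar.gz
def456  tor-expert-bundle-windows-x86_64-13.0.tar.gz";
    assert_eq!(
        expected_hash(manifest, "tor-expert-bundle-windows-x86_64-13.0.tar.gz"),
        Some("def456")
    );
    // A substring match would also hit longer names; exact match does not.
    assert_eq!(expected_hash(manifest, "13.0.tar.gz"), None);
}
```

The caller can keep the existing `ok_or_else` conversion to an error when `None` is returned.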

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 542f45ea-3d0a-44de-ab43-57ed08d5b1bf

📥 Commits

Reviewing files that changed from the base of the PR and between 9ff4e0a and b5a2d62.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (2)
  • Cargo.toml
  • src/tor_manager.rs

Comment thread src/tor_manager.rs

const SOCKS_PORT: u16 = 9050;
const CONTROL_PORT: u16 = 9051;
const CONNECT_TIMEOUT: Duration = Duration::from_secs(2);

🧹 Nitpick | 🔵 Trivial | 💤 Low value

2-second connect timeout for loopback is excessive.

CONNECT_TIMEOUT = 2s is used both for the initial "is Tor already running?" probes (lines 47-48) and for the wait_for_port polling (line 406, 500 ms poll interval). On loopback a connect either succeeds immediately or fails immediately; a 2 s timeout mostly matters when something is actively blackholing 127.0.0.1, which is rare. More importantly, during wait_for_port each failed probe can stall up to 2 s — interleaved with a 500 ms sleep — making the effective polling cadence much slower than intended.

Consider 200–500 ms for loopback probes.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/tor_manager.rs` at line 8, The CONNECT_TIMEOUT constant is too long (2s)
for loopback probes and slows both the initial "is Tor running?" checks and the
wait_for_port polling; reduce CONNECT_TIMEOUT to a shorter value (e.g.
Duration::from_millis(300) or anywhere in the 200–500 ms range) so failed
connect attempts return quickly, keeping wait_for_port's 500 ms sleep/poll
cadence effective; update the CONNECT_TIMEOUT constant and ensure functions that
rely on it (the initial Tor probe logic and the wait_for_port routine) use the
new shorter timeout.

Comment thread src/tor_manager.rs
Comment thread src/tor_manager.rs Outdated
Comment thread src/tor_manager.rs Outdated
Comment thread src/tor_manager.rs Outdated
Collaborator

@hulxv hulxv left a comment


isn't it better to use libtor instead of downloading the binary?

https://docs.rs/libtor/latest/libtor/

Comment thread Cargo.toml Outdated
futures = "0.3.32"
clap = { version = "4.5.60", features = ["derive", "env"] }
dirs = "6.0.0"
ureq = { version = "2", features = ["json"] }
Collaborator


use v3; it was updated in the latest commits

@mojoX911 mojoX911 added this to the v0.2.2 milestone May 14, 2026
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

Status: No status

Development

Successfully merging this pull request may close these issues.

Feat: Automatic Tor deployment in maker dashboard

3 participants