fixes #73: automatic tor deployment #82
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Adds a Tor manager that detects or starts a local Tor SOCKS proxy (system, host, or downloaded bundle), integrates it into MakerManager and the API (/api/tor/status), surfaces Tor status to the frontend via a one-time toast, and updates graceful shutdown, docs, container config, CI, and dependencies.

Changes: Tor connectivity & UI integration
Sequence Diagram

```mermaid
sequenceDiagram
    participant Browser as Browser / React App
    participant API as Backend API
    participant State as AppState
    participant TorMgr as TorManager
    participant Toast as Toast Component
    Browser->>API: GET /api/tor/status
    API->>State: lock() -> tor_source()
    State->>TorMgr: source_label()
    TorMgr-->>State: "system" / "host" / "downloaded"
    State-->>API: TorStatusInfo { source, managed }
    API-->>Browser: 200 OK + TorStatusInfo
    Browser->>Browser: set torToast if managed & not shown
    Browser->>Toast: render with message
    Toast-->>Browser: shown, auto-hide after duration -> onDismiss
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/maker_manager/mod.rs (1)
Lines 114-126: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Keep MakerManager::new free of external process startup.

Constructing a manager now probes/spawns Tor before it even loads persisted state. That makes config-only code paths and the unit test later in this file (assign_available_maker_ports_skips_taken_local_ports) fail on machines without Tor or Docker, and it can block startup for up to the Docker/Tor timeouts. This belongs in a lazy/init step or an injected dependency, not the constructor.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/maker_manager/mod.rs` around lines 114 - 126, MakerManager::new currently calls TorManager::detect_or_start (spawning/probing external processes) during construction; move that startup out of the constructor so config-only and unit-test paths don't require Tor/Docker. Change new (in mod.rs) to only construct the struct (initialize tor_manager as None or accept a TorManager dependency) and load persisted state (saved_configs/persistence) without calling detect_or_start; add a separate init/start method (e.g., MakerManager::init_tor_or_start or inject_tor_manager) that runs TorManager::detect_or_start and assigns tor_manager and related fields when the runtime actually needs external services; update tests to call the new init method or inject a mock TorManager so they no longer spawn external processes.

docker/docker-compose.yml (1)
Lines 34-47: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Don't expose an unauthenticated Tor ControlPort.

CookieAuthentication 0 is only safe if the control port stays private. Here the same service binds ControlPort on 0.0.0.0 and publishes 19051:9051, so anyone who can reach the Docker host can issue Tor control commands.

Suggested hardening:

```diff
     SocksPort 0.0.0.0:9050
-    ControlPort 0.0.0.0:9051
+    ControlPort 127.0.0.1:9051
     DataDirectory /var/lib/tor
     CookieAuthentication 0
     EOF
@@
       - "19050:9050"
-      - "19051:9051"
       - "${DASHBOARD_PORT:-3000}:${DASHBOARD_PORT:-3000}"
       - "${BITCOIN_RPC_PORT:-38332}:${BITCOIN_RPC_PORT:-38332}"
       - "${BITCOIN_ZMQ_PORT:-28332}:${BITCOIN_ZMQ_PORT:-28332}"
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@docker/docker-compose.yml` around lines 34 - 47, The Tor service currently sets ControlPort 0.0.0.0 and CookieAuthentication 0 in /tmp/torrc and publishes port 19051:9051, which exposes an unauthenticated control port; fix by binding ControlPort to localhost (e.g., ControlPort 127.0.0.1:9051) and re-enable authentication (set CookieAuthentication 1 or configure HashedControlPassword) in the torrc generation, and remove or avoid publishing the control port (remove the "19051:9051" ports mapping) so only the container or a secured sidecar can access ControlPort; update references in the Docker Compose snippet that write /tmp/torrc and the ports section accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@frontend/app/api.ts`:
- Around line 396-407: Move the standalone Tor types and getTorStatus into the
existing monitoring namespace so the API surface is consistent: relocate the
TorSource and TorStatusInfo type declarations next to other domain types and
export them from the monitoring object, then change getTorStatus to be a method
on the monitoring object (e.g., monitoring.getTorStatus) that returns
Promise<TorStatusInfo> and delegates to get<TorStatusInfo>("/tor/status");
finally update any call sites/imports (for example the usage in main.tsx) to use
monitoring.getTorStatus and ensure references/imports are adjusted accordingly.
In `@frontend/app/app.css`:
- Around line 50-59: The keyframes declaration slideInRight violates the
keyframes-name-pattern lint rule; rename the keyframe to kebab-case (e.g.,
slide-in-right) and update every usage (animation, animation-name, and shorthand
animation properties) to match the new name; specifically change the `@keyframes`
block named slideInRight to slide-in-right and update any CSS rules that
reference slideInRight (including shorthand usages like animation: slideInRight
...) so names are consistent.
In `@frontend/app/components/Toast.tsx`:
- Around line 12-19: The useEffect currently depends on the onDismiss prop which
is changing identity on each parent render; wrap the onDismiss callback with
React's useEffectEvent (e.g., const onDismissEvent = useEffectEvent(() =>
onDismiss())) and call onDismissEvent from the dismiss timeout instead of
onDismiss, then remove onDismiss from the effect dependency array so timers only
depend on durationMs; update references in Toast.tsx where the dismiss timeout
invokes onDismiss to use the stable onDismissEvent.
In `@frontend/app/main.tsx`:
- Around line 13-44: Extract the App component into its own module and export it
so React Fast Refresh can track updates: create a new file that defines and
exports the App function (keeping its useState, useEffect logic that calls
getTorStatus and renders BrowserRouter/Routes and Toast) and update main.tsx to
import { App } from that new file and render <App />; ensure the exported symbol
name remains App and that getTorStatus, Toast, and route components (Home,
MakerDetails, AddMaker, MakerSetup) are imported in the new App file as they are
now.
In `@src/tor_manager.rs`:
- Around line 134-155: The Tor process is being bound to 0.0.0.0 (open to the
network) in spawn_host_process (and the other managed-startup path around lines
160-179); change the bind addresses so both SocksPort and ControlPort listen
only on loopback (e.g. 127.0.0.1 or localhost) instead of 0.0.0.0, and update
any matching string literals or torrc generation code that build
"0.0.0.0:{SOCKS_PORT}" / "0.0.0.0:{CONTROL_PORT}" so they produce
"127.0.0.1:{SOCKS_PORT}" / "127.0.0.1:{CONTROL_PORT}" (or equivalent), ensuring
the DataDirectory and other settings remain unchanged.
- Around line 25-36: The detect_or_start branch currently accepts any listener
on SOCKS_PORT as a usable system Tor instance but doesn't verify the control
port required later by maker initialization; update TorManager::detect_or_start
to also check the Tor control port before returning a System TorManager (e.g.,
call a control_port_reachable or attempt a ControlPort connect/check using the
CONTROL_PORT value or configured control_port). If the control port is not
reachable, do not return TorSource::System (log a warning and fall through to
start an internal Tor or return an error), so maker_manager's code that expects
a working control port (the consumer that passes control_port through) won't
fail later. Ensure logging includes which port failed and keep the existing
behavior when both SOCKS and control ports are reachable.
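The control-port probe suggested above can be sketched with nothing but the standard library. This is a hedged illustration: `port_reachable` and the hard-coded `CONTROL_PORT` value are assumptions for this sketch, not the project's actual tor_manager.rs API.

```rust
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

// Assumed value; the real constant lives in src/tor_manager.rs.
const CONTROL_PORT: u16 = 9051;

/// True if something accepts TCP connections on 127.0.0.1:port.
/// Sketch of the `control_port_reachable` check the review asks for.
fn port_reachable(port: u16) -> bool {
    let addr = SocketAddr::from(([127, 0, 0, 1], port));
    TcpStream::connect_timeout(&addr, Duration::from_millis(500)).is_ok()
}

fn main() {
    if port_reachable(CONTROL_PORT) {
        println!("control port reachable; system Tor is usable");
    } else {
        println!("control port not reachable; fall back to a managed Tor");
    }
}
```

Under this sketch, detect_or_start would run such a check in addition to the SOCKS probe before returning TorSource::System.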
---
Outside diff comments:
In `@docker/docker-compose.yml`:
- Around line 34-47: The Tor service currently sets ControlPort 0.0.0.0 and
CookieAuthentication 0 in /tmp/torrc and publishes port 19051:9051, which
exposes an unauthenticated control port; fix by binding ControlPort to localhost
(e.g., ControlPort 127.0.0.1:9051) and re-enable authentication (set
CookieAuthentication 1 or configure HashedControlPassword) in the torrc
generation, and remove or avoid publishing the control port (remove the
"19051:9051" ports mapping) so only the container or a secured sidecar can
access ControlPort; update references in the Docker Compose snippet that write
/tmp/torrc and the ports section accordingly.
In `@src/maker_manager/mod.rs`:
- Around line 114-126: MakerManager::new currently calls
TorManager::detect_or_start (spawning/probing external processes) during
construction; move that startup out of the constructor so config-only and
unit-test paths don't require Tor/Docker. Change new (in mod.rs) to only
construct the struct (initialize tor_manager as None or accept a TorManager
dependency) and load persisted state (saved_configs/persistence) without calling
detect_or_start; add a separate init/start method (e.g.,
MakerManager::init_tor_or_start or inject_tor_manager) that runs
TorManager::detect_or_start and assigns tor_manager and related fields when the
runtime actually needs external services; update tests to call the new init
method or inject a mock TorManager so they no longer spawn external processes.
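The constructor-without-side-effects shape described above can be sketched as follows. All names here (Manager, TorHandle, init_tor) are illustrative stand-ins, not the real MakerManager API.

```rust
use std::path::PathBuf;

// Illustrative stand-in for a detected Tor instance.
struct TorHandle {
    source: &'static str,
}

struct Manager {
    #[allow(dead_code)]
    config_dir: PathBuf,
    tor: Option<TorHandle>, // populated lazily, never in the constructor
}

impl Manager {
    /// Pure construction: record config, touch no external processes.
    fn new(config_dir: PathBuf) -> Self {
        Manager { config_dir, tor: None }
    }

    /// Separate init step; tests can inject a stub instead of probing Tor/Docker.
    fn init_tor(&mut self, handle: TorHandle) {
        self.tor = Some(handle);
    }

    fn tor_source(&self) -> Option<&'static str> {
        self.tor.as_ref().map(|t| t.source)
    }
}

fn main() {
    let mut m = Manager::new(PathBuf::from("/tmp/cfg"));
    println!("after new: {:?}", m.tor_source()); // config-only path: no Tor yet
    m.init_tor(TorHandle { source: "host" });
    println!("after init: {:?}", m.tor_source());
}
```

The design point is that unit tests exercise `new` plus an injected handle, so they never touch external processes.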
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: aa03fb9b-6fc0-41b9-919e-72f5e2892e46
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (16)
Cargo.toml, README.md, docker/docker-compose.yml, frontend/app/api.ts, frontend/app/app.css, frontend/app/components/Toast.tsx, frontend/app/main.tsx, frontend/tsconfig.tsbuildinfo, src/api/dto.rs, src/api/mod.rs, src/api/monitoring.rs, src/lib.rs, src/main.rs, src/maker_manager/mod.rs, src/server.rs, src/tor_manager.rs
💤 Files with no reviewable changes (1)
- README.md
```tsx
function App() {
  const [torToast, setTorToast] = useState<string | null>(null);

  useEffect(() => {
    if (sessionStorage.getItem("tor-toast-shown")) return;
    getTorStatus()
      .then(({ managed, source }) => {
        if (managed) {
          const label = source === "docker" ? "Docker" : "host binary";
          setTorToast(`Tor started via ${label}`);
          sessionStorage.setItem("tor-toast-shown", "1");
        }
      })
      .catch(() => {});
  }, []);

  return (
    <>
      <BrowserRouter>
        <Routes>
          <Route path="/" element={<Home />} />
          <Route path="/makerDetails/:makerId" element={<MakerDetails />} />
          <Route path="/addMaker" element={<AddMaker />} />
          <Route path="/makers/:makerId/setup" element={<MakerSetup />} />
        </Routes>
      </BrowserRouter>
      {torToast && (
        <Toast message={torToast} onDismiss={() => setTorToast(null)} />
      )}
    </>
  );
}
```
🧹 Nitpick | 🔵 Trivial | ⚡ Quick win
Move App to its own file to restore React Fast Refresh
The static analysis tool flags this correctly: React Fast Refresh only works when every component in a file is exported. main.tsx is a pure entry point (no exports), so the App component defined here will never benefit from fast refresh during development.
♻️ Proposed fix — extract App

Create frontend/app/App.tsx:

```diff
+import { useEffect, useState } from "react";
+import { BrowserRouter, Routes, Route } from "react-router-dom";
+import { Toast } from "./components/Toast";
+import { getTorStatus } from "./api";
+import Home from "./routes/home";
+import MakerDetails from "./routes/makerDetails";
+import AddMaker from "./routes/addMaker";
+import MakerSetup from "./routes/makersetup";
+
+export default function App() {
+  const [torToast, setTorToast] = useState<string | null>(null);
+  // ... same body as current App
+}
```

Then simplify main.tsx:

```diff
 import { createRoot } from "react-dom/client";
-import { BrowserRouter, Routes, Route } from "react-router-dom";
 import { StrictMode } from "react";
 import "@/app.css";
-import Home from "./routes/home";
-// ... other imports
-import { Toast } from "./components/Toast";
-import { getTorStatus } from "./api";
-
-function App() {
-  // ...
-}
+import App from "./App";

 createRoot(document.getElementById("root")!).render(
   <StrictMode>
     <App />
   </StrictMode>,
 );
```

🧰 Tools
🪛 GitHub Check: frontend
[warning] 13-13:
Fast refresh only works when a file has exports. Move your component(s) to a separate file
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@frontend/app/main.tsx` around lines 13 - 44, Extract the App component into
its own module and export it so React Fast Refresh can track updates: create a
new file that defines and exports the App function (keeping its useState,
useEffect logic that calls getTorStatus and renders BrowserRouter/Routes and
Toast) and update main.tsx to import { App } from that new file and render <App
/>; ensure the exported symbol name remains App and that getTorStatus, Toast,
and route components (Home, MakerDetails, AddMaker, MakerSetup) are imported in
the new App file as they are now.
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/maker_manager/mod.rs`:
- Around line 119-122: The public function new_for_testing is flagged dead_code
because it's only referenced from the integration test crate; add the attribute
#[allow(dead_code)] above the pub fn new_for_testing(config_dir: PathBuf) ->
Result<Self> to silence the lib-level dead_code lint while keeping the function
public and callable from tests — no other logic changes are required (this
function forwards to Self::new_with_tor and uses TorManager::noop()).
- Around line 114-117: MakerManager::new (which calls
TorManager::detect_or_start and may call ensure_docker_daemon) performs long
blocking operations and is invoked by Server::new inside the Tokio runtime,
blocking the executor; fix by moving Server::new/MakerManager::new invocation
outside the async context (construct Server before entering #[tokio::main]) or
by calling Server::new within tokio::task::spawn_blocking at the call site so
the blocking TorManager::detect_or_start and ensure_docker_daemon calls run on a
blocking thread and do not block the Tokio worker.
In `@src/tor_manager.rs`:
- Around line 27-33: Add a dead-code allowance to the TorManager::noop function
so it no longer triggers clippy warnings: annotate the pub fn noop() -> Self
definition in the TorManager impl with #[allow(dead_code)] (or alternatively
gate it with #[cfg(test)] if you prefer test-only visibility) so the lint error
is resolved while keeping the function signature intact.
- Around line 175-198: The Docker invocation currently uses the unpinned image
string "osminogin/tor-simple" in the std::process::Command call (the args array
that includes "osminogin/tor-simple"); change this to a pinned image reference
(for example "osminogin/tor-simple:0.4.9.5@sha256:<digest>") or make the image
name a configurable constant (e.g., TOR_IMAGE) so the digest/tag can be set and
reviewed, then update the args entry that currently contains
"osminogin/tor-simple" to use that pinned value and ensure any tests or docs
reflect the new constant/config option.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: fa82914a-620d-4a2e-aea4-5da0941e22c7
📒 Files selected for processing (3)
src/maker_manager/mod.rs, src/tor_manager.rs, tests/api/mod.rs
```rust
pub fn new(config_dir: PathBuf) -> Result<Self> {
    let tor_manager = TorManager::detect_or_start(&config_dir)?;
    Self::new_with_tor(config_dir, tor_manager)
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Locate every call site of MakerManager::new to determine async vs sync context.
rg -nP -C5 '\bMakerManager::new\s*\('
```

Repository: citadel-tech/maker-dashboard
Length of output: 705

🏁 Script executed:

```shell
#!/bin/bash
# Check the Server::new call site in src/server.rs and surrounding context
head -n 100 src/server.rs | tail -n 50
```

Repository: citadel-tech/maker-dashboard
Length of output: 1918

🏁 Script executed:

```shell
#!/bin/bash
# Find main.rs or startup code to see how Server is instantiated
find . -type f -name "main.rs" -o -name "lib.rs" | head -5
```

Repository: citadel-tech/maker-dashboard
Length of output: 99

🏁 Script executed:

```shell
#!/bin/bash
# Double-check for all MakerManager::new call sites including async contexts
rg -nA10 'MakerManager::new' --type rust
```

Repository: citadel-tech/maker-dashboard
Length of output: 2157

🏁 Script executed:

```shell
#!/bin/bash
# Read main.rs to understand server initialization
cat src/main.rs
```

Repository: citadel-tech/maker-dashboard
Length of output: 1578
MakerManager::new is called inside the async runtime, blocking the Tokio worker for ~90s.
In src/main.rs, Server::new is invoked directly within the #[tokio::main] async context. Since Server::new → MakerManager::new → TorManager::detect_or_start performs synchronous port probes, spawns a host Tor and waits up to 30s, and may call ensure_docker_daemon (60s timeout), this blocks the Tokio executor thread during initialization. While the current code has no concurrent tasks at startup, this violates async best practices. Either:
- Move Server::new outside the async runtime by calling it before #[tokio::main] yields, or
- Wrap Server::new in tokio::task::spawn_blocking at the call site.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/maker_manager/mod.rs` around lines 114 - 117, MakerManager::new (which
calls TorManager::detect_or_start and may call ensure_docker_daemon) performs
long blocking operations and is invoked by Server::new inside the Tokio runtime,
blocking the executor; fix by moving Server::new/MakerManager::new invocation
outside the async context (construct Server before entering #[tokio::main]) or
by calling Server::new within tokio::task::spawn_blocking at the call site so
the blocking TorManager::detect_or_start and ensure_docker_daemon calls run on a
blocking thread and do not block the Tokio worker.
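The first option (construct before the runtime starts) can be sketched with the standard library alone. Here `bootstrap` is a hypothetical stand-in for the blocking Server::new / TorManager::detect_or_start chain, not the project's real code:

```rust
use std::time::Duration;

// Hypothetical stand-in for Server::new: port probes, process spawning, etc.
fn bootstrap() -> String {
    std::thread::sleep(Duration::from_millis(50)); // simulate slow, blocking work
    "server ready".to_string()
}

fn main() {
    // 1. Do the blocking setup on the plain main thread, before any executor exists.
    let server = bootstrap();
    // 2. Only then enter the async runtime with the already-constructed value,
    //    e.g. tokio::runtime::Runtime::new()?.block_on(run(server)) in the real app.
    println!("{}", server);
}
```

Either option keeps the Tokio workers free; the spawn_blocking variant differs only in doing the same work on a dedicated blocking thread after the runtime is up.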
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/docker-build.yml (1)
Lines 45-65: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Always tear down test-container on step failure.

Because this step exits on command failure, the cleanup on line 65 is skipped if the final curl | jq check fails. That leaves the container running for the rest of the job and makes the success/failure paths inconsistent. Use an EXIT trap so cleanup always runs.

Suggested fix:

```diff
       - name: Test container starts and responds
         run: |
+          trap 'docker rm -f test-container >/dev/null 2>&1 || true' EXIT
+
           docker run -d --name test-container -p 3000:3000 \
             -e DASHBOARD_HOST=0.0.0.0 -e DASHBOARD_ALLOW_REMOTE=true \
             maker-dashboard:test
@@
           done

           # Verify API responds
           curl -sf http://localhost:3000/api/makers | jq .
-          docker rm -f test-container || true
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In @.github/workflows/docker-build.yml around lines 45 - 65, The workflow step that runs the test container ("test-container") doesn't guarantee cleanup if any command fails (the final curl|jq can cause early exit and skip docker rm); add a cleanup routine and register it with a trap on EXIT so the container is always removed. Specifically, define a cleanup function in the run block that does docker logs (on failure) and docker rm -f test-container (or the captured container ID), then set trap cleanup EXIT before running any commands; ensure the trap runs regardless of success/failure and that the container name/ID (test-container) referenced in the cleanup matches the one created.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@frontend/app/components/Toast.tsx`:
- Around line 24-30: The Toast component (Toast.tsx) renders transient status UI
but lacks ARIA live region attributes; update the outer container rendered by
the Toast component to include role="status" and aria-live="polite" (and
optionally aria-atomic="true") so screen readers announce the message; modify
the div with className "fixed bottom-6 right-6 z-50 animate-slide-in-right" to
include these attributes in the JSX.
In `@src/maker_manager/mod.rs`:
- Around line 115-121: The current fallback in MakerManager where
TorManager::detect_or_start(...) maps failures to TorManager::noop() loses the
error state (since noop() reports source System); instead preserve and expose
the failure by introducing an explicit unavailable/error state and returning
that here: add a TorSource::Unavailable (or TorManager::error variant) and set
it when detect_or_start fails, or return a TorManager instance that carries the
error state rather than using TorManager::noop(); update uses of tor_source(),
/api/tor/status, and any code that inspects TorManager (e.g., tor_source(),
TorManager::noop(), and places that match on TorSource) to handle the new
Unavailable/Error variant so the UI can surface “Tor unavailable” without
waiting for maker startup to fail.
In `@src/tor_manager.rs`:
- Around line 57-87: Both host and docker branches currently only call
wait_for_port(SOCKS_PORT) causing later maker initialization to fail if
CONTROL_PORT isn't ready and leaking resources; update both flows to wait for
both ports (e.g. implement wait_for_ports(&[SOCKS_PORT, CONTROL_PORT]) and call
it instead of wait_for_port), and on failure ensure proper cleanup: if
spawn_host_process(...) returned a Child, kill/terminate the child before
returning Err, and if spawn_docker() started a container, stop the container (as
already attempted) before returning Err; update references in this change
include find_binary, spawn_host_process, wait_for_port, SOCKS_PORT,
CONTROL_PORT, TorManager, and spawn_docker.
---
Outside diff comments:
In @.github/workflows/docker-build.yml:
- Around line 45-65: The workflow step that runs the test container
("test-container") doesn't guarantee cleanup if any command fails (the final
curl|jq can cause early exit and skip docker rm); add a cleanup routine and
register it with a trap on EXIT so the container is always removed.
Specifically, define a cleanup function in the run block that does docker logs
(on failure) and docker rm -f test-container (or the captured container ID),
then set trap cleanup EXIT before running any commands; ensure the trap runs
regardless of success/failure and that the container name/ID (test-container)
referenced in the cleanup matches the one created.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 3f2d81aa-7b04-4015-af3b-dce4fa872d0c
📒 Files selected for processing (7)
.github/workflows/docker-build.yml, frontend/app/api.ts, frontend/app/app.css, frontend/app/components/Toast.tsx, frontend/app/main.tsx, src/maker_manager/mod.rs, src/tor_manager.rs
```tsx
  return (
    <div className="fixed bottom-6 right-6 z-50 animate-slide-in-right">
      <div className="flex items-center gap-3 rounded-lg border border-green-700 bg-gray-950 px-4 py-3 text-sm text-green-400 shadow-lg">
        <span className="text-green-500">✓</span>
        {message}
      </div>
    </div>
```
Expose the toast as a live region.
This message is transient status UI, but it has no role/aria-live, so assistive tech may never announce it.
Suggested fix
```diff
-    <div className="fixed bottom-6 right-6 z-50 animate-slide-in-right">
+    <div
+      className="fixed bottom-6 right-6 z-50 animate-slide-in-right"
+      role="status"
+      aria-live="polite"
+      aria-atomic="true"
+    >
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
  return (
    <div
      className="fixed bottom-6 right-6 z-50 animate-slide-in-right"
      role="status"
      aria-live="polite"
      aria-atomic="true"
    >
      <div className="flex items-center gap-3 rounded-lg border border-green-700 bg-gray-950 px-4 py-3 text-sm text-green-400 shadow-lg">
        <span className="text-green-500">✓</span>
        {message}
      </div>
    </div>
  )
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@frontend/app/components/Toast.tsx` around lines 24 - 30, The Toast component
(Toast.tsx) renders transient status UI but lacks ARIA live region attributes;
update the outer container rendered by the Toast component to include
role="status" and aria-live="polite" (and optionally aria-atomic="true") so
screen readers announce the message; modify the div with className "fixed
bottom-6 right-6 z-50 animate-slide-in-right" to include these attributes in the
JSX.
```rust
let tor_manager = TorManager::detect_or_start(&config_dir).unwrap_or_else(|e| {
    tracing::warn!(
        "Tor could not be started: {}. Tor-dependent makers will fail to start.",
        e
    );
    TorManager::noop()
});
```
🛠️ Refactor suggestion | 🟠 Major | 🏗️ Heavy lift
Don’t collapse Tor bootstrap failure into the system state.
If detect_or_start fails, this substitutes TorManager::noop(), whose source is still System. That makes tor_source() and /api/tor/status indistinguishable from a real externally managed Tor instance, so the UI gets no way to surface “Tor unavailable” until maker startup fails later.
Please preserve an explicit unavailable/error state here instead of coercing the failure into system Tor. That can be a new TorSource variant or separate status field, but the current fallback loses important state.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/maker_manager/mod.rs` around lines 115 - 121, The current fallback in
MakerManager where TorManager::detect_or_start(...) maps failures to
TorManager::noop() loses the error state (since noop() reports source System);
instead preserve and expose the failure by introducing an explicit
unavailable/error state and returning that here: add a TorSource::Unavailable
(or TorManager::error variant) and set it when detect_or_start fails, or return
a TorManager instance that carries the error state rather than using
TorManager::noop(); update uses of tor_source(), /api/tor/status, and any code
that inspects TorManager (e.g., tor_source(), TorManager::noop(), and places
that match on TorSource) to handle the new Unavailable/Error variant so the UI
can surface “Tor unavailable” without waiting for maker startup to fail.
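One way to realize the suggested state, sketched with illustrative names (the real TorSource enum in src/tor_manager.rs will differ in detail): an Unavailable variant that carries the bootstrap error so /api/tor/status can report it instead of masquerading as system Tor.

```rust
// Illustrative enum; the variant set mirrors the labels mentioned in this PR,
// plus the proposed Unavailable state carrying the bootstrap error.
enum TorSource {
    System,
    HostProcess,
    Docker,
    Unavailable(String),
}

fn source_label(s: &TorSource) -> &'static str {
    match s {
        TorSource::System => "system",
        TorSource::HostProcess => "host",
        TorSource::Docker => "docker",
        TorSource::Unavailable(_) => "unavailable",
    }
}

fn main() {
    let states = [
        TorSource::System,
        TorSource::HostProcess,
        TorSource::Docker,
        TorSource::Unavailable("connection refused".to_string()),
    ];
    for s in &states {
        println!("{}", source_label(s));
    }
}
```

With this shape, the fallback in MakerManager would store Unavailable(err) rather than calling noop(), and the UI could surface the failure immediately.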
```rust
if let Some(binary) = find_binary("tor") {
    tracing::info!(
        "Found tor binary at {}, spawning host process",
        binary.display()
    );
    let child = spawn_host_process(&binary, config_dir)?;
    wait_for_port(SOCKS_PORT)?;
    return Ok(TorManager {
        source: TorSource::HostProcess,
        process: Some(child),
        container_id: None,
    });
}

if find_binary("docker").is_some() {
    tracing::info!("Found docker, ensuring daemon is running...");
    ensure_docker_daemon()?;
    tracing::info!("Docker daemon ready, spawning tor container");
    let container_id = spawn_docker()
        .map_err(|e| anyhow::anyhow!("Failed to start tor Docker container: {}", e))?;
    if let Err(e) = wait_for_port(SOCKS_PORT) {
        let _ = std::process::Command::new("docker")
            .args(["stop", &container_id])
            .output();
        return Err(e);
    }
    return Ok(TorManager {
        source: TorSource::Docker,
        process: None,
        container_id: Some(container_id),
    });
```
Wait for both Tor ports before returning a managed instance.
These branches only wait for SOCKS_PORT, but maker initialization later needs CONTROL_PORT too. If Tor comes up with SOCKS only, startup succeeds here and the first maker fails later. On the host-process path, that failure also leaks the spawned tor child because the error return drops Child without killing it.
Suggested fix:

```diff
-        let child = spawn_host_process(&binary, config_dir)?;
-        wait_for_port(SOCKS_PORT)?;
+        let mut child = spawn_host_process(&binary, config_dir)?;
+        if let Err(e) = wait_for_ports(&[SOCKS_PORT, CONTROL_PORT]) {
+            let _ = child.kill();
+            let _ = child.wait();
+            return Err(e);
+        }
         return Ok(TorManager {
             source: TorSource::HostProcess,
             process: Some(child),
             container_id: None,
         });
@@
-        if let Err(e) = wait_for_port(SOCKS_PORT) {
+        if let Err(e) = wait_for_ports(&[SOCKS_PORT, CONTROL_PORT]) {
             let _ = std::process::Command::new("docker")
                 .args(["stop", &container_id])
                 .output();
             return Err(e);
         }
```

A small helper such as `wait_for_ports(&[u16])` keeps the two paths consistent.
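Such a helper can be sketched std-only as follows. The timeout values here are illustrative, and a plain `String` error stands in for the module's `anyhow::Result`; the real `tor_manager.rs` would use its own `STARTUP_TIMEOUT`, `POLL_INTERVAL`, and `port_reachable` definitions:

```rust
use std::net::{SocketAddr, TcpStream};
use std::time::{Duration, Instant};

const STARTUP_TIMEOUT: Duration = Duration::from_secs(30);
const POLL_INTERVAL: Duration = Duration::from_millis(500);
const CONNECT_TIMEOUT: Duration = Duration::from_millis(300);

/// True if a TCP connect to 127.0.0.1:port succeeds quickly.
fn port_reachable(port: u16) -> bool {
    let addr = SocketAddr::from(([127, 0, 0, 1], port));
    TcpStream::connect_timeout(&addr, CONNECT_TIMEOUT).is_ok()
}

/// Poll until every port in `ports` accepts connections, or time out.
fn wait_for_ports(ports: &[u16]) -> Result<(), String> {
    let deadline = Instant::now() + STARTUP_TIMEOUT;
    while Instant::now() < deadline {
        if ports.iter().all(|&p| port_reachable(p)) {
            return Ok(());
        }
        std::thread::sleep(POLL_INTERVAL);
    }
    Err(format!(
        "ports {:?} not reachable within {:?}",
        ports, STARTUP_TIMEOUT
    ))
}

fn main() {
    // Bind an ephemeral listener so the probe has something to hit.
    let listener = std::net::TcpListener::bind("127.0.0.1:0").unwrap();
    let port = listener.local_addr().unwrap().port();
    assert!(port_reachable(port));
    assert!(wait_for_ports(&[port]).is_ok());
    println!("ok");
}
```

Checking all ports inside one deadline (rather than chaining per-port waits) keeps the overall startup budget bounded.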
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/tor_manager.rs` around lines 57 - 87, Both host and docker branches
currently only call wait_for_port(SOCKS_PORT) causing later maker initialization
to fail if CONTROL_PORT isn't ready and leaking resources; update both flows to
wait for both ports (e.g. implement wait_for_ports(&[SOCKS_PORT, CONTROL_PORT])
and call it instead of wait_for_port), and on failure ensure proper cleanup: if
spawn_host_process(...) returned a Child, kill/terminate the child before
returning Err, and if spawn_docker() started a container, stop the container (as
already attempted) before returning Err; update references in this change
include find_binary, spawn_host_process, wait_for_port, SOCKS_PORT,
CONTROL_PORT, TorManager, and spawn_docker.
Actionable comments posted: 7
♻️ Duplicate comments (1)
src/tor_manager.rs (1)
66-108: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Wait on both ports and avoid leaking the spawned Tor child on startup failure.

All three managed branches still call `wait_for_port(SOCKS_PORT)?` only, and if it returns `Err`, the just-spawned `Child` is dropped without `kill()`, leaving an orphaned `tor` running on the user's box. The maker layer also depends on `CONTROL_PORT`, so SOCKS-only readiness is not sufficient.

Suggested fix (host-process branch shown; mirror for cached + downloaded paths):

```diff
-    if let Some(binary) = find_binary("tor") {
-        tracing::info!("Found system tor at {}", binary.display());
-        let child = spawn_host_process(&binary, None)?;
-        wait_for_port(SOCKS_PORT)?;
-        return Ok(TorManager {
-            source: TorSource::HostProcess,
-            process: Some(child),
-        });
-    }
+    if let Some(binary) = find_binary("tor") {
+        tracing::info!("Found system tor at {}", binary.display());
+        let mut child = spawn_host_process(&binary, None)?;
+        if let Err(e) = wait_for_ports(&[SOCKS_PORT, CONTROL_PORT]) {
+            let _ = child.kill();
+            let _ = child.wait();
+            return Err(e);
+        }
+        return Ok(TorManager {
+            source: TorSource::HostProcess,
+            process: Some(child),
+        });
+    }
```

Add a small helper:

```rust
fn wait_for_ports(ports: &[u16]) -> anyhow::Result<()> {
    for &p in ports {
        wait_for_port(p)?;
    }
    Ok(())
}
```

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/tor_manager.rs` around lines 66 - 108, the three managed-branch flows spawn a Tor child (via spawn_host_process) but only call wait_for_port(SOCKS_PORT) and will drop the Child on wait error, leaking a running tor; change each branch that spawns a child (the HostProcess branch, the cached-bundle branch using tor_binary_in_bundle, and the downloaded-bundle branch using download_tor_bundle) to wait for both SOCKS_PORT and CONTROL_PORT before returning and to kill the spawned Child if port waiting fails. Add a tiny helper (e.g., wait_for_ports or similar) to wait on &[SOCKS_PORT, CONTROL_PORT], call it after spawning and before returning, and on Err from that helper call child.kill() (and wait on it) then return the error instead of dropping the Child.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@Cargo.toml`:
- Line 29: Update the ureq dependency in Cargo.toml from 2 to a 3.x version and
change the two HTTP download sites (functions download_tor_bundle and
verify_checksum_from_manifest) to use ureq 3's timeout and bounded-read APIs:
replace bare ureq::get().call() usage with calls that set timeout_connect(...)
and timeout_recv_body(...), and when reading response bodies use
BodyWithConfig::limit(...) (or the corresponding response-with-config helper) to
enforce a maximum read size; ensure imports and any result handling are adjusted
for the 3.x API changes.
In `@src/tor_manager.rs`:
- Around line 325-365: The extract_archive function currently shells out to
external tools (tar/powershell) which is brittle; replace that implementation
with a pure-Rust solution using flate2+tar for .tar.gz and the zip crate for
.zip files, update Cargo.toml to include flate2="1", tar="0.4", and zip="0.6",
detect archive type from the archive Path (e.g., .tar.gz/.tgz vs .zip), open the
file with std::fs::File, stream-decompress .tar.gz with flate2::read::GzDecoder
into tar::Archive and unpack to dest, and for .zip use zip::ZipArchive to
extract entries while preserving permissions and returning any I/O errors
(include context in anyhow errors) instead of spawning external commands; keep
the function signature extract_archive(archive: &Path, dest: &Path) ->
anyhow::Result<()> and ensure all errors propagate with context rather than
swallowing stdout/stderr.
- Around line 300-323: Replace the OS-specific shell-out in sha256_of_file with
a pure-Rust SHA-256 implementation: add sha2 = "0.10" (and hex = "0.4" or use
sha2's output formatting) to Cargo.toml, open the file with std::fs::File in
sha256_of_file, stream its contents into a sha2::Sha256 hasher (e.g., via
std::io::copy to the hasher or reading in chunks), finalize the hasher and
return the hex-encoded digest as the function result; remove all
platform-specific Command blocks and the Windows-specific stdout parsing.
- Around line 167-180: spawn_host_process currently ignores the provided config
directory and always writes tor state into coinswap_tor_dir(); change its
signature to accept a config_dir or tor_runtime_dir (e.g., &Path) and use
config_dir.join("tor") (or a derived tor_runtime_dir) instead of
coinswap_tor_dir() for both DataDirectory and torrc_path (and
create_dir_all/write to that path); update every call site (notably
detect_or_start) to pass config_dir.join("tor") and keep coinswap_tor_dir() only
as a fallback when no config_dir is supplied.
- Around line 241-248: The code currently treats verify_checksum_from_manifest
failures as non-fatal; change that so any Err from
verify_checksum_from_manifest(&checksum_url, &filename, &archive_path) causes
immediate cleanup and error propagation instead of proceeding to
extract_archive. Specifically, on Err(e) remove the partial archive
(std::fs::remove_file(&archive_path)), return or propagate the error to the
caller (don’t call extract_archive), and only allow bypassing this fatal
behavior via an explicit opt‑in flag if such a configuration flag is present;
update the logic around
checksum_url/filename/archive_path/verify_checksum_from_manifest accordingly.
- Around line 232-239: The download currently uses ureq::get(&url).call() with
no timeouts and std::io::copy on response.into_reader(), so a slow/malicious
server can hang or produce an oversized file; fix by creating a ureq::Agent with
explicit connect and read timeouts (e.g., via
ureq::AgentBuilder::new().timeout_connect(...).timeout_read(...).build()), use
that agent to perform the GET for the archive (the code that produces response
and writes to archive_path), and cap the bytes written by wrapping the response
reader in an std::io::Take to enforce a max size before calling std::io::copy;
apply the same agent+timeouts and response-size cap to the manifest fetch in
verify_checksum_from_manifest as well.

- Around line 11-13: The download extension variable ext in src/tor_manager.rs
is currently set conditionally to "zip" for Windows which causes 404s; change
the ext assignment so ext is always "tar.gz" (remove the if os == "windows"
branch) where ext is defined (used with TOR_ARCHIVE_BASE and TOR_VERSION to
build the expert bundle URL), so all platforms request .tar.gz expert bundles;
preserve existing extraction logic that handles .tar.gz on Windows.
---
Duplicate comments:
In `@src/tor_manager.rs`:
- Around line 66-108: The three managed-branch flows spawn a Tor child (via
spawn_host_process) but only call wait_for_port(SOCKS_PORT) and will drop the
Child on wait error, leaking a running tor; change each branch that spawns a
child (the HostProcess branch, the cached-bundle branch using
tor_binary_in_bundle, and the downloaded-bundle branch using
download_tor_bundle) to wait for both SOCKS_PORT and CONTROL_PORT before
returning and to kill the spawned Child if port waiting fails. Add a tiny helper
(e.g., wait_for_ports or similar) to wait on &[SOCKS_PORT, CONTROL_PORT], call
it after spawning and before returning, and on Err from that helper call
child.kill() (and wait on it) then return the error instead of dropping the
Child.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 57ee7255-f24f-4189-9ee3-a6c0eefc7e61
📒 Files selected for processing (2)
Cargo.toml, src/tor_manager.rs
Actionable comments posted: 5
♻️ Duplicate comments (1)
src/tor_manager.rs (1)
70-114: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Wait for both SOCKS and CONTROL ports, and clean up the spawned child on failure.

All three managed-startup branches (lines 73-78, 89-95, 106-114) only `wait_for_port(SOCKS_PORT)?`. The maker code passes a `control_port` to coinswap, so if Tor comes up SOCKS-only within `STARTUP_TIMEOUT`, this returns `Ok` and the first maker fails later trying to reach 9051.

Additionally, when `wait_for_port` returns `Err`, the `?` propagates it and the just-spawned `child` is dropped without `kill()`/`wait()`, leaking the Tor process (which is still holding the data dir lock). On the next startup attempt the new Tor will fail because the old one still owns the lock.

♻️ Suggested fix

```diff
-fn wait_for_port(port: u16) -> anyhow::Result<()> {
+fn wait_for_ports(ports: &[u16]) -> anyhow::Result<()> {
     let deadline = Instant::now() + STARTUP_TIMEOUT;
     while Instant::now() < deadline {
-        if port_reachable(port) {
+        if ports.iter().all(|p| port_reachable(*p)) {
             return Ok(());
         }
         std::thread::sleep(POLL_INTERVAL);
     }
     Err(anyhow::anyhow!(
-        "Tor did not become reachable on port {} within {:?}",
-        port,
+        "Tor did not become reachable on ports {:?} within {:?}",
+        ports,
         STARTUP_TIMEOUT
     ))
 }
```

…then at each call site:

```diff
-    let child = spawn_host_process(&binary, None, &config_dir.join("tor"))?;
-    wait_for_port(SOCKS_PORT)?;
+    let mut child = spawn_host_process(&binary, None, &config_dir.join("tor"))?;
+    if let Err(e) = wait_for_ports(&[SOCKS_PORT, CONTROL_PORT]) {
+        let _ = child.kill();
+        let _ = child.wait();
+        return Err(e);
+    }
```
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/tor_manager.rs`:
- Line 8: The CONNECT_TIMEOUT constant is too long (2s) for loopback probes and
slows both the initial "is Tor running?" checks and the wait_for_port polling;
reduce CONNECT_TIMEOUT to a shorter value (e.g. Duration::from_millis(300) or
anywhere in the 200–500 ms range) so failed connect attempts return quickly,
keeping wait_for_port's 500 ms sleep/poll cadence effective; update the
CONNECT_TIMEOUT constant and ensure functions that rely on it (the initial Tor
probe logic and the wait_for_port routine) use the new shorter timeout.
- Around line 298-303: The current lookup that builds expected_hash uses
manifest.lines().find(|l| l.contains(filename)) which can match substrings;
update the logic around expected_hash so you parse each line into
whitespace-separated tokens and require the second token exactly equals filename
(i.e., split the line, check tokens.get(1) == Some(&filename) or equivalent)
before taking the first token as the hash; ensure you return the same error via
ok_or_else if no exact match is found.
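A token-based lookup along those lines is pure std and easy to test in isolation. This sketch returns `Option` for brevity where the real code would use `ok_or_else`; note that some `sha256sums` manifests prefix binary-mode filenames with `*`, which this sketch does not handle:

```rust
/// Find the SHA-256 for `filename` in a sha256sums-style manifest,
/// matching the filename token exactly rather than by substring.
fn expected_hash<'a>(manifest: &'a str, filename: &str) -> Option<&'a str> {
    manifest.lines().find_map(|line| {
        let mut tokens = line.split_whitespace();
        let hash = tokens.next()?;
        let name = tokens.next()?;
        (name == filename).then_some(hash)
    })
}

fn main() {
    // With the .asc line first, `line.contains(filename)` would wrongly
    // return "bbbb" for the plain .tar.gz; exact matching does not.
    let manifest = "\
bbbb  tor-expert-bundle.tar.gz.asc
aaaa  tor-expert-bundle.tar.gz";
    assert_eq!(expected_hash(manifest, "tor-expert-bundle.tar.gz"), Some("aaaa"));
    assert_eq!(expected_hash(manifest, "tor-expert-bundle.tar.gz.asc"), Some("bbbb"));
    assert_eq!(expected_hash(manifest, "missing.tar.gz"), None);
    println!("ok");
}
```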
- Around line 349-368: The Windows-specific extraction fallback incorrectly
calls PowerShell's Expand-Archive on a tar.gz (archive_str) which it can't
handle; update the Windows block (the std::process::Command::new("tar") ->
.args(["-xf", archive_str, "-C", dest_str]) path) to remove the PowerShell
fallback and instead return a clear error if tar is missing or fails (e.g.,
Err(anyhow::anyhow!("tar not found or failed to extract '{}' — please install
tar (bsdtar) or ensure it's on PATH", archive_str))). Keep archive_str and
dest_str usage and only change the fallback behavior so we don't attempt
Expand-Archive on tar.gz.
- Around line 240-247: The download path using io::copy with
response.into_reader().take(MAX_ARCHIVE_BYTES) can silently succeed when the
server is still streaming exactly MAX_ARCHIVE_BYTES; detect truncation by
reading one extra byte and explicitly failing before checksum: change the copy
call to read up to MAX_ARCHIVE_BYTES + 1 (use .take(MAX_ARCHIVE_BYTES + 1) or
equivalent), then after the copy check if bytes > MAX_ARCHIVE_BYTES (or bytes ==
MAX_ARCHIVE_BYTES+1) and if so remove the partial file at archive_path and
return an error like "archive exceeded the MAX_ARCHIVE_BYTES cap" so checksum
verification is not attempted; ensure this logic is applied around the block
that creates the file and calls io::copy (referencing MAX_ARCHIVE_BYTES,
response.into_reader(), archive_path, and the surrounding download/checksum
flow).
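The one-extra-byte trick can be isolated into a small std-only helper and tested against in-memory readers. `MAX_ARCHIVE_BYTES` is shrunk to a toy value here purely for demonstration:

```rust
use std::io::{self, Read};

const MAX_ARCHIVE_BYTES: u64 = 32; // illustrative; the real cap is much larger

/// Copy at most MAX_ARCHIVE_BYTES from `reader` into `out`, failing loudly
/// if the stream holds more data instead of silently truncating it.
fn copy_capped<R: Read, W: io::Write>(reader: R, out: &mut W) -> io::Result<u64> {
    // Read one extra byte so a stream that is exactly at the cap is
    // distinguishable from one that was cut off mid-download.
    let mut limited = reader.take(MAX_ARCHIVE_BYTES + 1);
    let bytes = io::copy(&mut limited, out)?;
    if bytes > MAX_ARCHIVE_BYTES {
        return Err(io::Error::new(
            io::ErrorKind::InvalidData,
            format!("download exceeded {} byte cap", MAX_ARCHIVE_BYTES),
        ));
    }
    Ok(bytes)
}

fn main() {
    let mut sink = Vec::new();
    // Under the cap: succeeds and reports the byte count.
    assert_eq!(copy_capped(&[0u8; 16][..], &mut sink).unwrap(), 16);
    // Over the cap: errors before any checksum would run.
    assert!(copy_capped(&[0u8; 64][..], &mut Vec::new()).is_err());
    println!("ok");
}
```

In the real download path the caller would delete the partial `archive_path` when this helper errors, so checksum verification is never attempted on a truncated file.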
- Around line 146-164: The find_binary function currently only uses the Unix
"which" and Unix-only prefixes; update it to be cross-platform by adding
Windows-specific logic: on Windows, call "where" (or use the pathsearch crate)
and try name + each PATHEXT extension, and check common Windows install prefixes
(ProgramFiles, ProgramFiles(x86), LocalAppData) for name.exe; on non-Windows
keep the existing "which" lookup and Unix prefixes. Ensure you still return
Some(PathBuf) when a valid executable is found and None otherwise; reference the
find_binary function to locate the code to modify.
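One possible std-only shape for that cross-platform lookup is sketched below. The fallback prefixes are illustrative assumptions, and a crate such as `which` or `pathsearch` would be more robust in practice:

```rust
use std::path::PathBuf;
use std::process::Command;

/// Locate an executable by name via the platform lookup tool
/// ("where" on Windows, "which" elsewhere), then fall back to a
/// few common install prefixes.
fn find_binary(name: &str) -> Option<PathBuf> {
    let lookup = if cfg!(windows) { "where" } else { "which" };
    if let Ok(output) = Command::new(lookup).arg(name).output() {
        if output.status.success() {
            let first = String::from_utf8_lossy(&output.stdout)
                .lines()
                .next()
                .map(str::trim)
                .map(PathBuf::from);
            if let Some(path) = first.filter(|p| p.is_file()) {
                return Some(path);
            }
        }
    }
    // Fallback prefixes (hypothetical defaults, not exhaustive).
    let candidates: Vec<PathBuf> = if cfg!(windows) {
        ["ProgramFiles", "ProgramFiles(x86)", "LOCALAPPDATA"]
            .iter()
            .filter_map(|var| std::env::var_os(var))
            .map(|base| PathBuf::from(base).join(format!("{name}.exe")))
            .collect()
    } else {
        ["/usr/bin", "/usr/local/bin", "/opt/homebrew/bin"]
            .iter()
            .map(|prefix| PathBuf::from(*prefix).join(name))
            .collect()
    };
    candidates.into_iter().find(|p| p.is_file())
}

fn main() {
    assert!(find_binary("definitely-not-a-real-binary-xyz").is_none());
    println!("ok");
}
```

A fuller Windows version would also iterate `PATHEXT` extensions rather than assuming `.exe`.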
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 542f45ea-3d0a-44de-ab43-57ed08d5b1bf
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (2)
Cargo.toml, src/tor_manager.rs
```rust
const SOCKS_PORT: u16 = 9050;
const CONTROL_PORT: u16 = 9051;
const CONNECT_TIMEOUT: Duration = Duration::from_secs(2);
```
🧹 Nitpick | 🔵 Trivial | 💤 Low value
2-second connect timeout for loopback is excessive.
CONNECT_TIMEOUT = 2s is used both for the initial "is Tor already running?" probes (lines 47-48) and for the wait_for_port polling (line 406, 500 ms poll interval). On loopback a connect either succeeds immediately or fails immediately; a 2 s timeout mostly matters when something is actively blackholing 127.0.0.1, which is rare. More importantly, during wait_for_port each failed probe can stall up to 2 s — interleaved with a 500 ms sleep — making the effective polling cadence much slower than intended.
Consider 200–500 ms for loopback probes.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/tor_manager.rs` at line 8, The CONNECT_TIMEOUT constant is too long (2s)
for loopback probes and slows both the initial "is Tor running?" checks and the
wait_for_port polling; reduce CONNECT_TIMEOUT to a shorter value (e.g.
Duration::from_millis(300) or anywhere in the 200–500 ms range) so failed
connect attempts return quickly, keeping wait_for_port's 500 ms sleep/poll
cadence effective; update the CONNECT_TIMEOUT constant and ensure functions that
rely on it (the initial Tor probe logic and the wait_for_port routine) use the
new shorter timeout.
hulxv
left a comment
isn't it better to use libtor instead of downloading the binary?
```toml
futures = "0.3.32"
clap = { version = "4.5.60", features = ["derive", "env"] }
dirs = "6.0.0"
ureq = { version = "2", features = ["json"] }
```
use v3, it was updated in the latest commits
Summary by CodeRabbit
New Features
UI Improvements
Documentation
Chores