
Introduce the new Architecture #6 (Open)

hulxv wants to merge 1 commit into citadel-tech:main from hulxv:rearch

Conversation


@hulxv hulxv commented May 5, 2026

This PR introduces the new architecture of marketd. It turns the previously unusable tool into a clean, production-ready HTTP server that serves market data publicly and can be hosted on users' own machines. This makes it possible to display the market's offers on our website for everyone.

Related to citadel-tech/website#3

CC: @mojoX911 @stark-3k

Summary by CodeRabbit

  • New Features

    • HTTP API endpoints for fetching offers and checking system health
    • Enhanced UI with real-time offer polling, fidelity bond details, and additional offer parameters
  • Infrastructure

    • Docker-based deployment with Bitcoin, Tor, and marketd containers
    • Production configuration tool with setup wizard
    • CI/CD workflows for testing, linting, and Docker builds
  • Documentation

    • Added architecture documentation


coderabbitai Bot commented May 5, 2026

📝 Walkthrough

This PR restructures the system from a separate daemon that periodically fetches and writes offer data to a single integrated HTTP service. A new Rust marketd binary (built from the root workspace) implements offer synchronization as a background thread, exposes offers via JSON REST endpoints, and serves a bundled React frontend. The system is now containerized with multi-stage Docker builds and orchestrated via Docker Compose alongside Bitcoin and Tor services. Configuration is managed through environment variables and a persistent .docker-config file, and a rewritten run.sh provides dev and production operational modes.
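To illustrate the environment-variable-backed configuration described above, here is a minimal stdlib-only sketch. The real marketd Config is a clap-derive struct with more settings; the field names, `MARKETD_LISTEN_ADDR`, and `MARKETD_SYNC_INTERVAL` are assumptions for illustration (only `MARKETD_BITCOIN_RPC_URL` appears in this PR's compose file):

```rust
use std::env;

// Hypothetical field and variable names, inferred from the compose file;
// the real marketd Config is clap-based with more settings.
#[derive(Debug)]
struct Config {
    bitcoin_rpc_url: String,
    listen_addr: String,
    sync_interval_secs: u64,
}

impl Config {
    // Mirrors clap's `#[arg(env = "...", default_value = "...")]` fallback:
    // take the environment variable when set, otherwise the default.
    fn from_env() -> Self {
        let get = |key: &str, default: &str| env::var(key).unwrap_or_else(|_| default.to_string());
        Config {
            bitcoin_rpc_url: get("MARKETD_BITCOIN_RPC_URL", "127.0.0.1:38332"),
            listen_addr: get("MARKETD_LISTEN_ADDR", "0.0.0.0:8080"),
            sync_interval_secs: get("MARKETD_SYNC_INTERVAL", "600")
                .parse()
                .expect("MARKETD_SYNC_INTERVAL must be an integer"),
        }
    }
}

fn main() {
    let cfg = Config::from_env();
    println!("{cfg:?}");
}
```

Compared with hand-rolled flag parsing, this env-with-default pattern is what lets the same binary run unchanged under docker-compose (env block) and on bare metal (exported variables).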


Changes

Architectural Refactor: Daemon → Integrated HTTP Service

Layer / File(s) Summary
Configuration & Entrypoint
src/config.rs, src/main.rs
Config struct uses Clap to parse CLI and environment-variable-backed settings for Bitcoin RPC, Tor, sync timing, listen address, and logging. main initializes tracing, spawns a background sync thread, and starts an Axum HTTP server.
Offer Synchronization Loop
src/main.rs (lines 66–150)
sync_loop waits for Bitcoin RPC and Tor readiness, initializes a Taker (retrying on failure), then repeatedly syncs the offerbook, fetches and filters offers, maps them to ApiOffer with timestamp, and writes to shared store; errors are logged and retried.
API Types & In-Memory Store
src/state.rs
ApiOffer, ApiFidelityBond, and ApiOutpoint expose coinswap offer data, with fidelity bond fields extracted via JSON serialization. OfferStore and SharedStore (an Arc<RwLock<OfferStore>>) hold offers and last-sync metadata for thread-safe access.
HTTP Server & Endpoints
src/server.rs
Axum router with GET /api/offers (returns cached offers as JSON), GET /api/health (reports status/count/sync time), permissive CORS, and SPA-style static fallback serving frontend assets from configured directory.
Frontend Polling & UI
web/src/App.jsx, web/offer_data.json
Replaced static JSON load with polling fetch to /api/offers on an interval, added inline error handling, and extended table columns to display time, bond amount with explorer link, min/max size, and status messages. Deleted static offer_data.json.
Docker Containerization
Dockerfile, docker-compose.yml
Multi-stage Dockerfile builds frontend (Node), Rust binary (with Cargo layer caching), and minimal runtime image; docker-compose.yml orchestrates bitcoind (RPC/ZMQ/healthcheck), tor (SOCKS/control/healthcheck), and marketd (with depends-on conditions and env config).
Operational Scripting & Config Persistence
run.sh, .gitignore
Rewrote run.sh from simple dev launcher to Docker Compose management tool with .docker-config persistence, dev / prod / configure / start / stop / logs / status commands, production wizard (network/mode/credential prompts), and service health polling.
Removed Daemon Artifacts
daemon/src/main.rs, daemon/Cargo.toml
Deleted the previous Tor-over-SOCKS daemon that queried a tracker, performed a CBOR handshake, and periodically wrote JSON; emptied daemon cargo manifest.
Documentation & CI
docs/ARCH.md, .github/workflows/*.yml
Added architecture reference documenting system responsibilities, boundaries, two-thread model, API surface, and non-goals. Added four GitHub Actions workflows for frontend/backend linting, Rust formatting/testing, and Docker build validation.
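The two-thread model in the table above (a background sync writer and HTTP readers sharing one store) can be sketched with the standard library alone. This is a stand-in, not marketd's code: ApiOffer is reduced to a String and the Taker's Tor-side offerbook fetch is replaced by a hardcoded list.

```rust
use std::sync::{Arc, RwLock};
use std::thread;
use std::time::{SystemTime, UNIX_EPOCH};

// Simplified stand-in for marketd's OfferStore.
#[derive(Default)]
struct OfferStore {
    offers: Vec<String>,
    last_sync: Option<u64>,
}

type SharedStore = Arc<RwLock<OfferStore>>;

// One sync cycle: fetch offers, then swap them into the store under a
// short-lived write lock. marketd repeats this in a loop with a sleep.
fn sync_once(store: &SharedStore, fetched: Vec<String>) {
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    let mut s = store.write().unwrap();
    s.offers = fetched;
    s.last_sync = Some(now);
}

fn main() {
    let store: SharedStore = Arc::new(RwLock::new(OfferStore::default()));

    // Writer side: a dedicated background thread, as in the sync loop.
    let writer = Arc::clone(&store);
    thread::spawn(move || {
        sync_once(&writer, vec!["maker1.onion".into(), "maker2.onion".into()]);
    })
    .join()
    .unwrap();

    // Reader side: what GET /api/offers does, i.e. take a read lock and serialize.
    let s = store.read().unwrap();
    println!("offers={} last_sync_set={}", s.offers.len(), s.last_sync.is_some());
}
```

Because the writer holds the lock only long enough to swap in the new Vec, request handlers are never blocked for the duration of a Tor round-trip.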

Sequence Diagram

sequenceDiagram
    participant Browser
    participant Marketd HTTP Server
    participant Sync Loop (Background)
    participant Bitcoin RPC
    participant Tor Control
    participant Maker Nodes

    Browser->>Marketd HTTP Server: GET /api/offers (poll interval)
    activate Marketd HTTP Server
    Marketd HTTP Server->>Marketd HTTP Server: Read SharedStore
    Marketd HTTP Server-->>Browser: JSON offers + last_sync
    deactivate Marketd HTTP Server

    par Background Sync
        Sync Loop (Background)->>Bitcoin RPC: Wait for connectivity
        Bitcoin RPC-->>Sync Loop (Background): RPC ready
        Sync Loop (Background)->>Tor Control: Wait for control port
        Tor Control-->>Sync Loop (Background): Port open
        Sync Loop (Background)->>Sync Loop (Background): Initialize Taker (wallet + RPC config)
        Sync Loop (Background)->>Maker Nodes: Sync offerbook (via Tor)
        Maker Nodes-->>Sync Loop (Background): Maker list
        Sync Loop (Background)->>Maker Nodes: Fetch offers from Good makers
        Maker Nodes-->>Sync Loop (Background): Offer data
        Sync Loop (Background)->>Sync Loop (Background): Map to ApiOffer + timestamp
        Sync Loop (Background)->>Sync Loop (Background): Write offers + last_sync to SharedStore
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~55 minutes


Poem

🐰 A daemon once ran on its own lonely thread,
But now it integrates—HTTP instead!
Offers flow live from a shared memory store,
Docker brings harmony; orchestrate four.
The rabbit hops forward to faster frontends! 🚀

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 20.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title check — ❓ Inconclusive: the title 'Introduce the new Architecture' is vague and generic; it does not clearly specify what architectural change was made. Resolution: replace with a more specific title such as 'Convert marketd to HTTP server with API-driven offer aggregation' or 'Refactor marketd architecture: from daemon to REST API service'.
✅ Passed checks (3 passed)
  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Linked Issues check — ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed: check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.






@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 17

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In @.github/workflows/ci.yml:
- Around line 20-35: The CI is still operating on the retired daemon/ crate;
update the workflow steps that reference the daemon workspace so they operate on
the repository root instead. Replace the Rust cache workspaces value (currently
"daemon") to the repository root workspace (e.g., "." or the new crate name),
and remove or change the working-directory fields on the "Format check",
"Clippy", and "Build" steps so they run from the repo root (or the new package
directory), ensuring the actions named "Rust cache", "Format check", "Clippy",
and "Build" reference the correct crate location.

In @.gitignore:
- Line 2: The .gitignore currently excludes Cargo.lock which prevents committing
the lockfile; remove the "Cargo.lock" entry from .gitignore (or uncomment/delete
that line) and add/commit the repository's Cargo.lock so Dockerfile/CI builds
are reproducible for this binary service; ensure the committed file is tracked
in git after updating .gitignore.

In `@Cargo.toml`:
- Line 7: Replace the mutable branch pin of the coinswap dependency in
Cargo.toml with an immutable revision: remove branch =
"feat/make-fetch-offer-public" and instead specify rev = "<commit-sha>" (or
switch to a released tag) for the coinswap git dependency so builds are
reproducible; update the coinswap entry to use rev = "<sha>" while keeping the
same git URL.

In `@docker-compose.yml`:
- Around line 69-75: The docker-compose env block is overriding any
externally-exported MARKETD_BITCOIN_RPC_URL and omitting the Tor SOCKS port;
update the env entries so MARKETD_BITCOIN_RPC_URL honors an externally-exported
value (e.g. use
MARKETD_BITCOIN_RPC_URL=${MARKETD_BITCOIN_RPC_URL:-127.0.0.1:${BITCOIN_RPC_PORT:-38332}})
instead of always hardcoding 127.0.0.1, and add an env forward for the Tor SOCKS
port such as MARKETD_TOR_SOCKS_PORT=${TOR_SOCKS_PORT:-9050} so non-default Tor
SOCKS configurations are preserved for marketd.
- Around line 10-16: The docker-compose service is exposing bitcoind RPC and ZMQ
on all interfaces via -rpcbind=0.0.0.0 and -rpcallowip=0.0.0.0/0 and the ZMQ
endpoints, so change those flags to restrict to loopback: replace
-rpcbind=0.0.0.0 with -rpcbind=127.0.0.1, change -rpcallowip=0.0.0.0/0 to
-rpcallowip=127.0.0.1 (or remove it if unnecessary), and bind ZMQ endpoints
(-zmqpubrawtx and -zmqpubrawblock) to 127.0.0.1:${BITCOIN_ZMQ_PORT:-28332} while
keeping the existing env vars like ${BITCOIN_RPC_PORT:-38332} and
${BITCOIN_ZMQ_PORT:-28332}; this ensures only local processes (e.g., marketd)
can access RPC/ZMQ.
- Around line 44-55: The Tor control plane is exposed because SocksPort and
ControlPort are bound to 0.0.0.0 and a default TOR_AUTH_PASSWORD is provided;
update the torrc generation to bind SocksPort and ControlPort to loopback (use
127.0.0.1:$${TOR_SOCKS_PORT:-9050} and 127.0.0.1:$${TOR_CONTROL_PORT:-9051}
instead of 0.0.0.0), remove the fallback default for TOR_AUTH_PASSWORD in the
docker-compose environment (do not set
TOR_AUTH_PASSWORD=${TOR_AUTH_PASSWORD:-coinswap}), and add a startup check
(before exec tor -f /tmp/torrc) that fails fast if TOR_AUTH_PASSWORD is empty so
HashedControlPassword is only written/used when a password is explicitly
provided; adjust any code that sets HashedControlPassword to derive it from the
required TOR_AUTH_PASSWORD.

In `@Dockerfile`:
- Around line 26-38: Create a non-root user and switch to it in the final Docker
stage: add steps in the final image to create a dedicated user/group (e.g.,
marketd user), change ownership of the binary (/usr/local/bin/marketd) and
static directory (/usr/share/marketd/web) to that user, and add a USER directive
before ENTRYPOINT so the container runs as the unprivileged user instead of
root; ensure any needed directories exist and are writable by that user (refer
to ENTRYPOINT, MARKETD_STATIC_DIR, and the copied paths for locating the
resources).

In `@docs/ARCH.md`:
- Around line 54-62: The fenced code blocks in the ARCH.md snippet are unlabeled
which triggers MD040; update each triple-backtick fence around the
CLI/server/sync step list (the block containing "1. Parse CLI / env config
(clap)" through "d. enter sync cycle (see below)") to include a language tag
(use "text"), and do the same for the other unlabeled block mentioned (the block
at the later occurrence around lines 106-110) so both blocks read ```text at the
start of the fence; no other content changes are needed.

In `@run.sh`:
- Around line 32-35: The current load_config() unsafely sources CONFIG_FILE (and
save_config() writes raw user input), allowing injection via
quotes/backticks/$(...). Replace sourcing with a safe parser: modify
save_config() to write each setting as KEY='escaped_value' (escape single quotes
in values) and modify load_config() to stop using source and instead read
CONFIG_FILE line-by-line, validate each line matches a strict key pattern (e.g.,
^[A-Z_][A-Z0-9_]*=) and reject any value containing backticks, $(), or unescaped
shell metacharacters, then assign variables using a safe assignment (e.g., use
read -r key val and builtin export/declare to set the variable) so no shell
expansion or command execution can occur when loading CONFIG_FILE.

In `@src/main.rs`:
- Around line 24-28: The sync loop (sync_loop) is an infinite/long-running
synchronous workload and should not use tokio::task::spawn_blocking (which
consumes a limited blocking-pool thread); replace the
tokio::task::spawn_blocking(move || sync_loop(sync_cfg, sync_store)) call with
std::thread::spawn(move || sync_loop(sync_cfg, sync_store)) so the
never-returning work uses a dedicated OS thread; keep using the cloned store
(sync_store) and cfg (sync_cfg) in the move closure and ensure the thread is
detached (no join) since the loop never returns.
- Around line 98-104: The current loop uses taker.run_offer_sync_now() and then
polls taker.is_offerbook_syncing(), which has a TOCTOU race because
run_offer_sync_now() only sets run_now=true and not is_syncing; replace this
pattern by either upgrading the coinswap dependency to a release that includes
PR `#792` and calling taker.sync_offerbook_and_wait() (preferred), or if using the
feat/make-fetch-offer-public branch, call the exported sync_offerbook_and_wait()
method instead of run_offer_sync_now()/is_offerbook_syncing() to get the
guaranteed sequencing; update the call site in main.rs to invoke
sync_offerbook_and_wait() and remove the polling loop.

In `@src/server.rs`:
- Around line 28-35: The get_health handler currently calls
store.read().unwrap() which will panic on RwLock poisoning; change get_health to
mirror get_offers by returning a Result<Json<Value>, StatusCode> (or
IntoResponse) and map a poisoned read error to an INTERNAL_SERVER_ERROR instead
of unwraping — use store.read().map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?
and return Err(...) when poisoned, then build and return the Json body on
success; reference the get_health function and align its error handling with
get_offers.
- Around line 11-17: The SPA fallback currently sits at the root and swallows
unknown /api/* requests (ServeDir/ServeFile returns index.html 200), so refactor
to build an api_router that contains the API routes and state/layers
(Router::new() with .route("/offers", get(get_offers)), .route("/health",
get(get_health)), .with_state(store), .layer(CorsLayer::permissive())) and then
create the root Router that .nest("/api", api_router) and applies
.fallback_service(spa) at the root; this ensures unknown /api/* paths return
proper 404s from the API router instead of serving the SPA index.html.

In `@src/state.rs`:
- Around line 40-46: The current extraction of FidelityBond::outpoint via
serde_json::to_value in from_coinswap silently falls back to txid = "" and vout
= 0 on any schema change or serialization error; update from_coinswap to
validate the serde lookup (check serde_json::to_value result and that
bond_json["outpoint"]["txid"] and ["vout"] exist/types are correct), log a
warning via the module logger when the structure is missing or parsing fails,
and change from_coinswap to return a Result so callers can drop malformed offers
instead of publishing them; reference FidelityBond::outpoint,
serde_json::to_value, from_coinswap, txid, and vout in your changes.

In `@web/src/App.jsx`:
- Around line 11-37: The fetchOffers polling currently allows overlapping
in-flight requests and assumes the response is a well-formed array; fix by
adding an AbortController per fetch (created inside fetchOffers) and store its
signal so you can abort the previous request before starting a new one (stop
overlaps and abort on unmount in the useEffect cleanup), check
res.headers.get('content-type') and validate Array.isArray(data) before calling
setOffers, setLastUpdated or mapping timestamps (also validate each item has a
numeric timestamp), and only call setState when the request wasn't aborted;
update references to fetchOffers, POLL_INTERVAL_MS, useEffect, setOffers,
setLastUpdated, setError, and setLoading accordingly.
- Line 74: The Tailwind v3 gradient class used in the JSX <tr> element needs
updating: locate the JSX element that sets className on the table row (the <tr>
with className="bg-gradient-to-r from-orange-600 to-orange-500") and replace the
deprecated bg-gradient-to-r with the v4 name bg-linear-to-r, keeping the
existing color stops (from-orange-600 and to-orange-500) intact; also scan the
file for any other occurrences of bg-gradient-to-* and rename them to
bg-linear-to-* to ensure consistency with Tailwind v4.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 1d711963-6c77-40f7-8815-e737b533b52a

📥 Commits

Reviewing files that changed from the base of the PR and between 594bde0 and 180f54f.

⛔ Files ignored due to path filters (2)
  • docs/marketd-arch.png is excluded by !**/*.png
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (16)
  • .github/workflows/ci.yml
  • .gitignore
  • Cargo.toml
  • Dockerfile
  • daemon/Cargo.toml
  • daemon/src/main.rs
  • docker-compose.yml
  • docs/ARCH.md
  • run.sh
  • src/config.rs
  • src/main.rs
  • src/server.rs
  • src/state.rs
  • web/offer_data.json
  • web/src/App.jsx
  • web/vite.config.js
💤 Files with no reviewable changes (3)
  • web/offer_data.json
  • daemon/Cargo.toml
  • daemon/src/main.rs

Comment thread .github/workflows/ci.yml Outdated
Comment on lines +20 to +35
      - name: Rust cache
        uses: Swatinem/rust-cache@v2
        with:
          workspaces: daemon

      - name: Format check
        working-directory: daemon
        run: cargo fmt --check

      - name: Clippy
        working-directory: daemon
        run: cargo clippy -- -D warnings

      - name: Build
        working-directory: daemon
        run: cargo build


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

CI is still building the retired daemon/ crate.

This PR moves the Rust package to the repository root, but the workflow still caches, formats, lints, and builds daemon/. That means this job will either fail outright or stop validating the code that is actually being shipped now.

Suggested fix
       - name: Rust cache
         uses: Swatinem/rust-cache@v2
-        with:
-          workspaces: daemon

       - name: Format check
-        working-directory: daemon
         run: cargo fmt --check

       - name: Clippy
-        working-directory: daemon
         run: cargo clippy -- -D warnings

       - name: Build
-        working-directory: daemon
         run: cargo build
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:

      - name: Rust cache
        uses: Swatinem/rust-cache@v2
        with:
          workspaces: daemon
      - name: Format check
        working-directory: daemon
        run: cargo fmt --check
      - name: Clippy
        working-directory: daemon
        run: cargo clippy -- -D warnings
      - name: Build
        working-directory: daemon
        run: cargo build

After:

      - name: Rust cache
        uses: Swatinem/rust-cache@v2
      - name: Format check
        run: cargo fmt --check
      - name: Clippy
        run: cargo clippy -- -D warnings
      - name: Build
        run: cargo build

Comment thread .gitignore Outdated
Comment thread Cargo.toml Outdated
Comment thread docker-compose.yml
Comment thread docker-compose.yml
Comment thread src/server.rs
Comment on lines +11 to +17
    Router::new()
        .route("/api/offers", get(get_offers))
        .route("/api/health", get(get_health))
        .with_state(store)
        .layer(CorsLayer::permissive())
        .fallback_service(spa)
}

🧹 Nitpick | 🔵 Trivial | ⚡ Quick win

🧩 Analysis chain

🌐 Web query:

In axum 0.7, what is the recommended pattern for combining a JSON API router with a static-file SPA fallback so that unknown /api/* paths return a JSON 404 instead of the SPA HTML?

💡 Result:

The recommended pattern in axum 0.7 for combining a JSON API router (e.g., under /api/*) with a static-file SPA fallback, ensuring unknown /api/* paths return JSON 404 instead of SPA HTML, is to nest the API router first and set the SPA ServeDir as the root-level fallback.[1][2][3]

Create your API router:

    let api_router = Router::new()
        .route("/users", get(get_users))
        // ... other API routes
        .fallback(api_404); // Optional: custom JSON 404 for the API

Define a JSON 404 handler (used if no fallback on root):

    async fn api_404() -> impl IntoResponse {
        (StatusCode::NOT_FOUND, Json(serde_json::json!({"error": "Not Found"})))
    }

Set up the SPA fallback using tower_http::services::{ServeDir, ServeFile} (add tower-http = { version = "0.5", features = ["fs"] }):

    let spa_dir = Path::new("spa/dist"); // Path to SPA build dir with index.html
    let fallback = ServeDir::new(spa_dir)
        .not_found_service(ServeFile::new(spa_dir.join("index.html")));

Combine:

    let app: Router = Router::new()
        .nest("/api", api_router)
        .fallback_service(fallback);

How it works:
  • /api/* matches the nested router first. Known paths hit handlers; unknown /api/* paths hit api_404 (JSON 404).[3]
  • Non-API paths (e.g., /, /foo) fall through to the fallback: ServeDir serves static files if they exist, else index.html for SPA routing.[4][2]
  • The fallback order ensures the API takes precedence; no SPA HTML is served for API paths.[1]

Use .nest_service("/api", api_router.with_state(state)) if mounting a service rather than a Router.[5] The official axum static-file-server example demonstrates the ServeDir SPA fallback.[2]


🏁 Script executed:

cat -n src/server.rs | head -50

Repository: citadel-tech/marketd

Length of output: 1422


SPA fallback swallows unknown /api/* 404s and serves index.html.

Because fallback_service(spa) is the catch-all, a request to e.g. /api/typo will not match the explicit routes, will fall through to ServeDir, miss on disk, and then not_found_service(ServeFile::new(index)) will return the SPA index.html with 200 OK. API consumers (and the frontend's res.ok check on the fetch response) will not see a proper 404 and will instead try to JSON.parse HTML.

Scope the SPA fallback so it does not match API routes by nesting the API router first, then applying the SPA fallback at the root level:

let api_router = Router::new()
    .route("/offers", get(get_offers))
    .route("/health", get(get_health))
    .with_state(store)
    .layer(CorsLayer::permissive());

Router::new()
    .nest("/api", api_router)
    .fallback_service(spa)

This ensures unknown /api/* paths are not matched by the SPA fallback.


Comment thread src/server.rs
Comment on lines +28 to +35
async fn get_health(State(store): State<SharedStore>) -> Json<Value> {
    let s = store.read().unwrap();
    Json(json!({
        "status": "ok",
        "offer_count": s.offers.len(),
        "last_sync": s.last_sync,
    }))
}

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

get_health panics on a poisoned RwLock; behavior is also inconsistent with get_offers.

store.read().unwrap() will tear down the request task (and emit a 500 from axum's panic handler) if any writer panicked while holding the lock. get_offers correctly maps the same error to INTERNAL_SERVER_ERROR — health should do the same so that a degraded/poisoned state is reported, not panicked through.

🔧 Proposed fix
-async fn get_health(State(store): State<SharedStore>) -> Json<Value> {
-    let s = store.read().unwrap();
-    Json(json!({
-        "status": "ok",
-        "offer_count": s.offers.len(),
-        "last_sync": s.last_sync,
-    }))
-}
+async fn get_health(State(store): State<SharedStore>) -> Result<Json<Value>, StatusCode> {
+    let s = store.read().map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
+    Ok(Json(json!({
+        "status": "ok",
+        "offer_count": s.offers.len(),
+        "last_sync": s.last_sync,
+    })))
+}
📝 Committable suggestion

Suggested change

Before:

async fn get_health(State(store): State<SharedStore>) -> Json<Value> {
    let s = store.read().unwrap();
    Json(json!({
        "status": "ok",
        "offer_count": s.offers.len(),
        "last_sync": s.last_sync,
    }))
}

After:

async fn get_health(State(store): State<SharedStore>) -> Result<Json<Value>, StatusCode> {
    let s = store.read().map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
    Ok(Json(json!({
        "status": "ok",
        "offer_count": s.offers.len(),
        "last_sync": s.last_sync,
    })))
}
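For background on why unwrap() is risky here, the poisoning behavior itself can be demonstrated with the standard library alone. This is generic std::sync::RwLock behavior, not marketd code:

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Build a lock that has been poisoned by a writer panicking mid-update.
fn poisoned_lock() -> Arc<RwLock<u32>> {
    let lock = Arc::new(RwLock::new(0u32));
    let l2 = Arc::clone(&lock);
    // Panicking while the write guard is held marks the lock as poisoned.
    let _ = thread::spawn(move || {
        let _guard = l2.write().unwrap();
        panic!("writer died mid-update");
    })
    .join();
    lock
}

fn main() {
    let lock = poisoned_lock();
    // read() now returns Err(PoisonError); calling unwrap() on it panics,
    // which is exactly what the review flags in get_health.
    assert!(lock.read().is_err());
    // A handler can instead map the error to a 500, or explicitly recover:
    let value = lock.read().map(|g| *g).unwrap_or_else(|e| *e.into_inner());
    println!("recovered value: {value}");
}
```

Mapping the PoisonError to INTERNAL_SERVER_ERROR, as get_offers already does, turns a poisoned store into a reportable degraded state instead of a panic inside the request task.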

Comment thread src/state.rs Outdated
Comment on lines +40 to +46
// FidelityBond::outpoint is pub(crate); extract via serde until it is made pub.
let bond_json = serde_json::to_value(bond).unwrap_or(Value::Null);
let txid = bond_json["outpoint"]["txid"]
    .as_str()
    .unwrap_or("")
    .to_string();
let vout = bond_json["outpoint"]["vout"].as_u64().unwrap_or(0) as u32;

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Brittle outpoint extraction — silent fallback to "" / 0 on any upstream schema change.

If coinswap's serde representation of FidelityBond ever changes (renamed field, nested differently, or to_value errors), this code silently produces txid = "" and vout = 0 for every offer. The frontend will then render every offer's bond link as …/tx/ (a broken explorer link) with no log trace pointing at the cause.

At minimum, log a warning when the structural lookup fails, and consider returning a Result from from_coinswap so the sync loop can drop the malformed offer instead of publishing a confidently-wrong one. Also consider opening an upstream PR to make outpoint pub so the workaround can be removed.

🔧 Proposed defensive change
-        // FidelityBond::outpoint is pub(crate); extract via serde until it is made pub.
-        let bond_json = serde_json::to_value(bond).unwrap_or(Value::Null);
-        let txid = bond_json["outpoint"]["txid"]
-            .as_str()
-            .unwrap_or("")
-            .to_string();
-        let vout = bond_json["outpoint"]["vout"].as_u64().unwrap_or(0) as u32;
+        // FidelityBond::outpoint is pub(crate); extract via serde until it is made pub.
+        // TODO: upstream a `pub` accessor in coinswap and remove this round-trip.
+        let bond_json = serde_json::to_value(bond).unwrap_or(Value::Null);
+        let txid = bond_json["outpoint"]["txid"].as_str().unwrap_or_else(|| {
+            tracing::warn!(?bond_json, "fidelity bond outpoint.txid missing in serde repr");
+            ""
+        }).to_string();
+        let vout = bond_json["outpoint"]["vout"].as_u64().unwrap_or_else(|| {
+            tracing::warn!(?bond_json, "fidelity bond outpoint.vout missing in serde repr");
+            0
+        }) as u32;

Comment thread web/src/App.jsx
Comment thread web/src/App.jsx Outdated

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 13

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@docker-compose.yml`:
- Line 36: Update the floating Docker image tags to fixed versions: replace the
tor image "osminogin/tor-simple" with a concrete tag or digest (e.g.,
"osminogin/tor-simple:0.4.9.5" or a sha256 digest) and stop using a floating
major version for bitcoind — change the bitcoin image reference that uses the
BITCOIN_VERSION default (e.g., "bitcoin/bitcoin:${BITCOIN_VERSION:-28}") to a
specific patched release like "bitcoin/bitcoin:28.1" (or pin via digest); adjust
any related environment default for BITCOIN_VERSION to the explicit patch if you
need to keep the variable.

In `@Dockerfile`:
- Around line 11-13: Update both RUN apt-get install invocations in the
Dockerfile to add the --no-install-recommends flag (e.g., change "apt-get
install -y pkg-config libssl-dev ..." to "apt-get install -y
--no-install-recommends ...") so unnecessary recommended packages are not pulled
into the build; keep the surrounding chain (apt-get update && ... && rm -rf
/var/lib/apt/lists/*) intact and apply the same change to the other install
block referenced (the one around lines 27-29).
- Around line 19-20: The current Dockerfile RUN step creates a dummy src/main.rs
and runs cargo build but swallows errors and always removes src due to the
semicolon and stderr redirect; update the RUN step that creates src/main.rs and
runs cargo build so it does not redirect stderr (remove the `2>/dev/null`) and
chain operations with `&&` instead of `;` so the layer fails loudly if the dummy
build fails (and thus actually warms cache only on successful builds); also
consider not creating a dummy src when the project may declare custom [[bin]]
paths or workspace members so the cache step is meaningful.
- Around line 26-38: The Dockerfile runtime image lacks a HEALTHCHECK, so
orchestrators mark the container healthy as soon as marketd starts even if the
HTTP listener or sync loops aren't working; add a HEALTHCHECK instruction after
EXPOSE 3000 that probes the running service (use MARKETD_LISTEN_ADDR and the
HTTP health endpoint your app exposes, e.g., /health or /api/health) and return
success only on a 200 response; ensure the runtime image provides a probe tool
(install curl/wget in the image or copy a small health-binary from the builder
stage) and reference ENTRYPOINT ["marketd"]/ENV MARKETD_LISTEN_ADDR to construct
the probe URL.

In `@run.sh`:
- Around line 113-115: The compose_down function currently references "$@" but
no callers pass arguments (cmd_stop calls compose_down with no args), so remove
the unused parameter passthrough to avoid dead code and ShellCheck warnings:
edit the compose_down definition (function name compose_down) to stop using "$@"
in the docker compose invocation; alternatively, if you intend to forward script
args, change cmd_stop to call compose_down "$@" instead—pick one approach and
make the corresponding update to either compose_down or cmd_stop so the intent
is explicit.
- Around line 117-138: The wait_bitcoind_healthy function can hang forever and
parses JSON fragily; change it to use a robust JSON extractor (e.g., pipe
bitcoin-cli getblockchaininfo into jq -r .initialblockdownload or use
bitcoin-cli -getinfo if available) instead of grep|cut|tr, remove the trailing
"|| true" so command failures surface, and add a max-attempts / timeout guard
(e.g., attempt N times with the existing sleep interval) that cleans up the
background logs process (logs_pid) and returns a non-zero exit code on failure
so callers (like cmd_prod) can abort; keep references to wait_bitcoind_healthy,
logs_pid, and the bitcoin-cli getblockchaininfo invocation when making the
changes.
- Line 29: The separator in header() is empty because printf '%.0s' {1..50}
prints empty strings; replace it with a real repeat and avoid unnecessary echo
subshells: e.g. use printf '%.0s-' {1..50} to produce a 50-char rule (or printf
'%*s\n' 50 '' | tr ' ' '-' ), and update the header() implementation (function
header()) to print that line directly instead of echo "$(printf …)".
- Around line 167-169: BITCOIND is being passed an invalid -mainnet flag because
BITCOIN_NETWORK is set to "mainnet"; change the network token to the accepted
"main" and update all usages so the daemon/cli receive -chain=main (or the
single-purpose flags) instead of -mainnet: set BITCOIN_NETWORK="main" where it's
assigned, update the bitcoind service command and the bitcoin-cli healthcheck
entries in docker-compose.yml to use -chain=main, and update the
wait_bitcoind_healthy() bitcoin-cli invocation in run.sh to use -chain=main (or
remove the flag and rely on BITCOIN_RPC_PORT), ensuring all three places
(BITCOIN_NETWORK variable, docker-compose bitcoind command/healthcheck, and
wait_bitcoind_healthy()) are consistent.

In `@src/config.rs`:
- Around line 27-28: The default listen address for the CLI/config arg is set to
"127.0.0.1:3000" which keeps the HTTP server private; update the arg attribute
for the listen_addr field (pub listen_addr: String) to use "0.0.0.0:3000" as the
default_value so the server binds publicly by default and matches the documented
behavior.

In `@src/main.rs`:
- Around line 137-142: The write lock on the shared store is using unwrap()
which will panic on a poisoned lock and kill the sync loop; replace the direct
unwrap with proper handling of the PoisonError (e.g., match store.write() {
Ok(mut s) => { s.offers = offers; s.last_sync = Some(timestamp); }, Err(err) =>
{ recover via err.into_inner() or log the poison and use into_inner() to
continue, or log and skip updating this cycle }) so the thread doesn't die
silently—handle both Ok and Err branches, log the error (using your app logger),
and ensure updates to s.offers and s.last_sync only occur on the recovered
guard.
- Line 22: The error is caused by referencing a nonexistent variable `args` in
the tracing setup; change the call that constructs the env filter from
tracing_subscriber::EnvFilter::new(&args.log_filter) to use the parsed config
variable `cfg` (e.g., tracing_subscriber::EnvFilter::new(&cfg.log_filter)) so it
matches the earlier Config::parse() assignment and the `cfg` binding in main;
ensure any other uses of `args` are similarly replaced or the variable is
renamed consistently.

In `@web/src/App.jsx`:
- Around line 141-149: The anchor in App.jsx currently hardcodes a .onion
explorer URL for offer.fidelity_bond.outpoint.txid which is unreachable for most
users; change this to use a configurable explorer base (e.g. EXPLORER_BASE
injected from the API or a build-time env var) and construct the href as
`${EXPLORER_BASE}/tx/${offer.fidelity_bond.outpoint.txid}`, falling back to a
clearnet default (mempool.space for mainnet or mempool.space/signet for signet)
when the config is absent, and ensure the UI still renders the txid (or as plain
selectable text if you prefer to drop links) while preserving title and
truncation behavior.
- Around line 102-153: Replace the unstable key={index} in the offers.map row
with a stable unique identifier from each offer; e.g., use key={offer.address}
or a composite key like
key={`${offer.fidelity_bond.outpoint.txid}:${offer.fidelity_bond.outpoint.vout}`}
in the <tr> to ensure React can reconcile list items correctly — pick the field
that is guaranteed unique for each offer and update the key prop on the <tr>
accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: a22b1687-f2c6-4299-8f05-4d3688b56185

📥 Commits

Reviewing files that changed from the base of the PR and between 180f54f and ff4073d.

⛔ Files ignored due to path filters (2)
  • docs/marketd-arch.png is excluded by !**/*.png
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (16)
  • .github/workflows/ci.yml
  • .gitignore
  • Cargo.toml
  • Dockerfile
  • daemon/Cargo.toml
  • daemon/src/main.rs
  • docker-compose.yml
  • docs/ARCH.md
  • run.sh
  • src/config.rs
  • src/main.rs
  • src/server.rs
  • src/state.rs
  • web/offer_data.json
  • web/src/App.jsx
  • web/vite.config.js
💤 Files with no reviewable changes (3)
  • web/offer_data.json
  • daemon/src/main.rs
  • daemon/Cargo.toml

Comment thread docker-compose.yml
restart: unless-stopped

tor:
image: osminogin/tor-simple


🧹 Nitpick | 🔵 Trivial | ⚡ Quick win

🧩 Analysis chain

🌐 Web query:

osminogin/tor-simple Docker Hub available tags

💡 Result:

The Docker Hub repository is https://hub.docker.com/r/osminogin/tor-simple/tags and it currently shows these tags (examples explicitly listed on the page):

  • latest [1]
  • 0.4.9.5 [1]
  • 0.4.8.22 [1]
  • 0.4.8.21 [1]
  • 0.4.8.20 [1]
  • 0.4.8.19 [1]
  • 0.4.8.18 [1]
  • 0.4.8.17 [1]
  • 0.4.8.16 [1]
  • 0.4.8.15 [1]

If you need the full list of tags, use the Docker Hub tags page above (it's the authoritative source for the repository's current available tags). [1]

Citations:


🏁 Script executed:

cat -n docker-compose.yml | head -50

Repository: citadel-tech/marketd

Length of output: 1860


Pin the tor and bitcoind images to explicit versions.

The tor image has no tag specified, defaulting to :latest, which yields non-reproducible builds and allows silent behavioral changes or malicious updates. Pin to a concrete release like osminogin/tor-simple:0.4.9.5 (or a digest). The bitcoind image similarly uses a floating major-version tag (bitcoin/bitcoin:${BITCOIN_VERSION:-28}), resolving to the latest patch within that release. Explicitly specify the patch version (e.g., 28.1) for full reproducibility.
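A pinned sketch of the relevant compose entries, using the 0.4.9.5 and 28.1 versions cited in this thread (verify the current releases before adopting; service names follow this repo's docker-compose.yml):

```yaml
services:
  tor:
    # Concrete release instead of the implicit :latest
    image: osminogin/tor-simple:0.4.9.5

  bitcoind:
    # Explicit patch release instead of the floating major tag
    image: bitcoin/bitcoin:${BITCOIN_VERSION:-28.1}
```

Pinning by sha256 digest is stricter still, at the cost of manual bumps for every upgrade.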

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@docker-compose.yml` at line 36, Update the floating Docker image tags to
fixed versions: replace the tor image "osminogin/tor-simple" with a concrete tag
or digest (e.g., "osminogin/tor-simple:0.4.9.5" or a sha256 digest) and stop
using a floating major version for bitcoind — change the bitcoin image reference
that uses the BITCOIN_VERSION default (e.g.,
"bitcoin/bitcoin:${BITCOIN_VERSION:-28}") to a specific patched release like
"bitcoin/bitcoin:28.1" (or pin via digest); adjust any related environment
default for BITCOIN_VERSION to the explicit patch if you need to keep the
variable.

Comment thread Dockerfile
Comment thread Dockerfile
Comment thread Dockerfile
Comment on lines +26 to +38
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y \
libssl3 libzmq5 ca-certificates \
&& rm -rf /var/lib/apt/lists/*

COPY --from=builder /app/target/release/marketd /usr/local/bin/marketd
COPY --from=frontend /app/dist /usr/share/marketd/web

ENV MARKETD_LISTEN_ADDR=0.0.0.0:3000
ENV MARKETD_STATIC_DIR=/usr/share/marketd/web
EXPOSE 3000

ENTRYPOINT ["marketd"]

🧹 Nitpick | 🔵 Trivial | ⚡ Quick win

Add a HEALTHCHECK for the runtime image.

docker-compose.yml health-gates other services with condition: service_healthy, but the marketd service itself has no healthcheck declared anywhere — neither here nor in the compose file. Containers will be marked healthy as soon as the process starts, even if the HTTP listener bind failed or the sync loop is stuck waiting for Bitcoin/Tor. A simple HTTP probe against the API gives the orchestrator a real signal and matches the pattern already used for bitcoind/tor.

🔧 Example healthcheck (assumes `curl` available; can use `wget` from busybox or a tiny Rust health binary instead)
+RUN apt-get update && apt-get install -y --no-install-recommends \
+    libssl3 libzmq5 ca-certificates curl \
+    && rm -rf /var/lib/apt/lists/*
@@
 EXPOSE 3000
+
+HEALTHCHECK --interval=30s --timeout=5s --start-period=30s --retries=3 \
+  CMD curl -fsS http://127.0.0.1:3000/api/health || exit 1

 ENTRYPOINT ["marketd"]
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y \
libssl3 libzmq5 ca-certificates \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/marketd /usr/local/bin/marketd
COPY --from=frontend /app/dist /usr/share/marketd/web
ENV MARKETD_LISTEN_ADDR=0.0.0.0:3000
ENV MARKETD_STATIC_DIR=/usr/share/marketd/web
EXPOSE 3000
ENTRYPOINT ["marketd"]
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
libssl3 libzmq5 ca-certificates curl \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/marketd /usr/local/bin/marketd
COPY --from=frontend /app/dist /usr/share/marketd/web
ENV MARKETD_LISTEN_ADDR=0.0.0.0:3000
ENV MARKETD_STATIC_DIR=/usr/share/marketd/web
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=5s --start-period=30s --retries=3 \
CMD curl -fsS http://127.0.0.1:3000/api/health || exit 1
ENTRYPOINT ["marketd"]
🧰 Tools
🪛 Checkov (3.2.526)

[low] 1-38: Ensure that HEALTHCHECK instructions have been added to container images

(CKV_DOCKER_2)


[low] 1-38: Ensure that a user for the container has been created

(CKV_DOCKER_3)

🪛 Hadolint (2.14.0)

[info] 27-27: Avoid additional packages by specifying --no-install-recommends

(DL3015)


[warning] 27-27: Pin versions in apt get install. Instead of apt-get install <package> use apt-get install <package>=<version>

(DL3008)

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@Dockerfile` around lines 26 - 38, The Dockerfile runtime image lacks a
HEALTHCHECK, so orchestrators mark the container healthy as soon as marketd
starts even if the HTTP listener or sync loops aren't working; add a HEALTHCHECK
instruction after EXPOSE 3000 that probes the running service (use
MARKETD_LISTEN_ADDR and the HTTP health endpoint your app exposes, e.g., /health
or /api/health) and return success only on a 200 response; ensure the runtime
image provides a probe tool (install curl/wget in the image or copy a small
health-binary from the builder stage) and reference ENTRYPOINT ["marketd"]/ENV
MARKETD_LISTEN_ADDR to construct the probe URL.

Comment thread run.sh Outdated
Comment thread src/config.rs
Comment on lines +27 to +28
#[arg(long, env = "MARKETD_LISTEN_ADDR", default_value = "127.0.0.1:3000")]
pub listen_addr: String,

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Bind publicly by default, or the default deployment stays private.

Line 27 defaults listen_addr to 127.0.0.1:3000, which makes the new HTTP server unreachable from other machines unless the operator overrides it. That conflicts with the PR goal of public hosting and with docs/ARCH.md, which documents 0.0.0.0:3000 as the listener.

Suggested fix
-    #[arg(long, env = "MARKETD_LISTEN_ADDR", default_value = "127.0.0.1:3000")]
+    #[arg(long, env = "MARKETD_LISTEN_ADDR", default_value = "0.0.0.0:3000")]
     pub listen_addr: String,
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
#[arg(long, env = "MARKETD_LISTEN_ADDR", default_value = "127.0.0.1:3000")]
pub listen_addr: String,
#[arg(long, env = "MARKETD_LISTEN_ADDR", default_value = "0.0.0.0:3000")]
pub listen_addr: String,
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/config.rs` around lines 27 - 28, The default listen address for the
CLI/config arg is set to "127.0.0.1:3000" which keeps the HTTP server private;
update the arg attribute for the listen_addr field (pub listen_addr: String) to
use "0.0.0.0:3000" as the default_value so the server binds publicly by default
and matches the documented behavior.

Comment thread src/main.rs Outdated
Comment thread src/main.rs
Comment on lines +137 to +142
let count = offers.len();
{
let mut s = store.write().unwrap();
s.offers = offers;
s.last_sync = Some(timestamp);
}

🧹 Nitpick | 🔵 Trivial | ⚡ Quick win

Mutex poisoning on the offer store will silently kill the sync loop.

store.write().unwrap() panics if any reader/writer ever panicked while holding the lock; the panic propagates and the sync loop thread dies, leaving the HTTP server returning the last cached offers indefinitely with no log signal. Since this thread is the only thing keeping the API fresh, prefer match store.write() { ... } or recover via into_inner() so a transient panic doesn't permanently freeze the data plane.
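A minimal sketch of the recovery pattern described above. `OfferStore` and `sync_update` are illustrative names, not the actual marketd types; the point is that the sync path recovers the guard via `into_inner()` instead of propagating the poison:

```rust
use std::sync::{Arc, RwLock};
use std::thread;

#[derive(Default)]
struct OfferStore {
    offers: Vec<String>,
    last_sync: Option<u64>,
}

/// Update the store, recovering the guard if a previous writer panicked
/// while holding the lock (a "poisoned" lock). Returns the new offer count.
fn sync_update(store: &Arc<RwLock<OfferStore>>, offers: Vec<String>, ts: u64) -> usize {
    let mut guard = match store.write() {
        Ok(g) => g,
        Err(poisoned) => {
            // A bare unwrap() here would propagate the panic and kill the
            // sync-loop thread; recover the guard and keep going instead.
            eprintln!("offer store lock poisoned; recovering");
            poisoned.into_inner()
        }
    };
    guard.offers = offers;
    guard.last_sync = Some(ts);
    guard.offers.len()
}

fn main() {
    let store = Arc::new(RwLock::new(OfferStore::default()));

    // Simulate the failure mode: a writer panics while holding the lock.
    let poisoner = Arc::clone(&store);
    let _ = thread::spawn(move || {
        let _g = poisoner.write().unwrap();
        panic!("simulated panic while holding the write lock");
    })
    .join();

    // The next sync cycle still succeeds despite the poison.
    let count = sync_update(&store, vec!["maker-offer-1".to_string()], 1_700_000_000);
    println!("offers={count}");
}
```

Because the store holds plain data, `into_inner()` is safe here: the worst case is overwriting a half-updated snapshot with the fresh one, which is exactly what the sync cycle does anyway.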

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/main.rs` around lines 137 - 142, The write lock on the shared store is
using unwrap() which will panic on a poisoned lock and kill the sync loop;
replace the direct unwrap with proper handling of the PoisonError (e.g., match
store.write() { Ok(mut s) => { s.offers = offers; s.last_sync = Some(timestamp);
}, Err(err) => { recover via err.into_inner() or log the poison and use
into_inner() to continue, or log and skip updating this cycle }) so the thread
doesn't die silently—handle both Ok and Err branches, log the error (using your
app logger), and ensure updates to s.offers and s.last_sync only occur on the
recovered guard.

Comment thread web/src/App.jsx
Comment thread web/src/App.jsx
Collaborator

@keraliss keraliss left a comment


here are some points to consider:

  • the UI is not showing any loader or any indication that it is trying to load the market, so maybe we should add that?
  • there are some config mismatches: the RPC URL defaults to regtest, but Docker is set to signet. we should change it to signet, as we will be doing all the operations there only.
  • network_mode: host only works on Linux, not on macOS, so localhost is not loading for me. please look into it.
  • the CI will fail, as you moved stuff out of daemon, so we need to update that.

also, CodeRabbit has some good points. would be great if we can resolve them as well.
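For the macOS point above, one portable alternative is an explicit port mapping instead of host networking. A sketch, assuming the service is named marketd and listens on port 3000 as documented in this PR:

```yaml
services:
  marketd:
    # network_mode: host   # Linux-only; replaced by a portable port mapping
    ports:
      - "3000:3000"
```

Host networking is ignored by Docker Desktop's VM-based backends, so an explicit mapping is the only option that behaves the same on Linux and macOS.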

Comment thread src/main.rs Outdated
Comment thread run.sh
local net_choice
read -rp " Choice [1]: " net_choice
case "${net_choice:-1}" in
2) BITCOIN_NETWORK="mainnet"; BITCOIN_RPC_PORT="8332" ;;
Collaborator


this mainnet flag will fail.

@keraliss
Collaborator

keraliss commented May 9, 2026

also one thing i noticed: why are we not using any FFI? this could have been a very good use case for it. we could just call the fetch_offers API directly, get the response, and show the offers in the UI. is there any specific reason?

@hulxv
Author

hulxv commented May 11, 2026

also one thing i noticed: why are we not using any FFI? this could have been a very good use case for it. we could just call the fetch_offers API directly, get the response, and show the offers in the UI. is there any specific reason?

Why would we use FFI in Rust? We are using the coinswap core lib directly; we don't need FFI.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

♻️ Duplicate comments (4)
src/server.rs (2)

28-35: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

get_health panics on poisoned lock — inconsistent with get_offers error handling.

The past review correctly identified that get_health uses .unwrap() on the RwLock, which will panic if the lock is poisoned (when a writer panicked while holding it). This is inconsistent with get_offers (lines 19-26), which properly maps the error to INTERNAL_SERVER_ERROR.

🔒 Consistent error handling
-async fn get_health(State(store): State<SharedStore>) -> Json<Value> {
-    let s = store.read().unwrap();
-    Json(json!({
+async fn get_health(State(store): State<SharedStore>) -> Result<Json<Value>, StatusCode> {
+    let s = store.read().map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
+    Ok(Json(json!({
         "status": "ok",
         "offer_count": s.offers.len(),
         "last_sync": s.last_sync,
-    }))
+    })))
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/server.rs` around lines 28 - 35, get_health currently calls
store.read().unwrap() which will panic on a poisoned RwLock; mirror get_offers'
behavior instead: change get_health to return Result<Json<Value>, StatusCode>,
replace unwrap() with a match or map_err on store.read() and return
Err(StatusCode::INTERNAL_SERVER_ERROR) when the lock is poisoned, then on Ok(s)
build and return Json(json!({ "status":"ok","offer_count":
s.offers.len(),"last_sync": s.last_sync }))—use the same error mapping/logging
style as get_offers to keep consistent handling of SharedStore read failures.

11-16: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

SPA fallback serves HTML for unknown /api/* routes instead of proper 404.

The past review correctly identified that the current structure will serve index.html with 200 OK for unknown API routes like /api/typo, breaking API consumers who expect JSON 404 responses. The API routes should be nested under /api prefix, then the SPA fallback applied at root level.

🔧 Recommended restructure
 pub fn router(store: SharedStore, static_dir: String) -> Router {
     let index = format!("{static_dir}/index.html");
     let spa = ServeDir::new(&static_dir).not_found_service(ServeFile::new(index));
 
+    let api_router = Router::new()
+        .route("/offers", get(get_offers))
+        .route("/health", get(get_health))
+        .with_state(store)
+        .layer(CorsLayer::permissive());
+
     Router::new()
-        .route("/api/offers", get(get_offers))
-        .route("/api/health", get(get_health))
-        .with_state(store)
-        .layer(CorsLayer::permissive())
+        .nest("/api", api_router)
         .fallback_service(spa)
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/server.rs` around lines 11 - 16, The SPA fallback is currently attached
to the same Router that defines API routes (Router::new() with
.route("/api/offers", ...), .route("/api/health", ...), .fallback_service(spa)),
causing unknown /api/* paths to return the SPA HTML; instead, create a dedicated
api_router (e.g., Router::new() with .route("/offers", get(get_offers)) and
.route("/health", get(get_health)) and any CORS/middleware) and then mount it
under the API prefix using .nest("/api", api_router) (or .route("/api",
api_router)), leaving the top-level Router to apply .fallback_service(spa) so
unknown /api/* yield the router 404 while non-API paths fall back to the SPA.
src/state.rs (1)

40-46: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Silent fallback on fidelity bond outpoint extraction produces broken data.

The past review correctly identified that if the coinswap library's FidelityBond serialization format changes, this code will silently produce txid = "" and vout = 0, resulting in broken explorer links in the frontend with no diagnostic logs. The workaround using serde_json::to_value should at minimum log warnings when extraction fails.

🔍 Defensive improvement
-        // TODO: FidelityBond::outpoint is pub(crate); extract via serde until it is made pub.
-        let bond_json = serde_json::to_value(bond).unwrap_or(Value::Null);
-        let txid = bond_json["outpoint"]["txid"]
-            .as_str()
-            .unwrap_or("")
-            .to_string();
-        let vout = bond_json["outpoint"]["vout"].as_u64().unwrap_or(0) as u32;
+        // TODO: FidelityBond::outpoint is pub(crate); extract via serde until it is made pub.
+        let bond_json = serde_json::to_value(bond).unwrap_or_else(|e| {
+            tracing::warn!(?e, "failed to serialize FidelityBond for outpoint extraction");
+            Value::Null
+        });
+        let txid = bond_json["outpoint"]["txid"].as_str().unwrap_or_else(|| {
+            tracing::warn!(?bond_json, "fidelity bond outpoint.txid missing or invalid");
+            ""
+        }).to_string();
+        let vout = bond_json["outpoint"]["vout"].as_u64().unwrap_or_else(|| {
+            tracing::warn!(?bond_json, "fidelity bond outpoint.vout missing or invalid");
+            0
+        }) as u32;
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/state.rs` around lines 40 - 46, The current extraction of FidelityBond
outpoint via serde_json::to_value silently falls back to txid = "" and vout = 0
when the serialized shape changes; update the block around bond_json / txid /
vout to detect missing or invalid fields and emit a diagnostic warning (e.g.,
using tracing::warn! or log::warn!) that includes the fidelity bond debug info
(the bond_json or bond identifier) and the fact extraction failed, and prefer
returning an error or Option upstream if appropriate; specifically modify the
code handling FidelityBond::outpoint (the bond_json variable and the txid/vout
extraction) to log the failure with context rather than silently using empty
values.
src/config.rs (1)

35-36: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Default listen address restricts server to localhost only.

The default listen_addr of 127.0.0.1:3000 prevents external connections, which conflicts with the PR objective of serving market data publicly and hosting on users' machines. The past review correctly identified this should default to 0.0.0.0:3000 to bind on all interfaces.

🌐 Proposed fix
-    #[arg(long, env = "MARKETD_LISTEN_ADDR", default_value = "127.0.0.1:3000")]
+    #[arg(long, env = "MARKETD_LISTEN_ADDR", default_value = "0.0.0.0:3000")]
     pub listen_addr: String,
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/config.rs` around lines 35 - 36, The default listen address currently set
on the listen_addr config field (#[arg(long, env = "MARKETD_LISTEN_ADDR",
default_value = "127.0.0.1:3000")]) restricts binding to localhost; update the
default_value to "0.0.0.0:3000" so the server binds all interfaces and can
accept external connections, keeping the same arg/env attribute and field name
listen_addr.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In @.github/workflows/docker-build.yml:
- Line 40: Replace the incorrect image tag string used in the Docker build step:
find the line containing the literal "tags: maker-dashboard:test" and change it
to use the correct project name (e.g. "tags: marketd:test") so the built image
is tagged for the marketd project; ensure any other occurrences of
"maker-dashboard" in that workflow are similarly updated to "marketd".
- Around line 6-19: Replace the incorrect frontend path filter "frontend/**"
with the actual directory "web/**" in the workflow's push and pull_request
'paths' blocks so changes to the real frontend trigger the job; specifically
update both occurrences of the string "frontend/**" under the 'paths' arrays in
the Docker build workflow to "web/**".
- Around line 44-68: The workflow is using the wrong env vars and endpoint:
replace the DASHBOARD_* env vars with the env var the service actually reads
(MARKETD_LISTEN_ADDR) when running the container (reference MARKETD_LISTEN_ADDR)
and update the runtime assertions to call the API endpoints that marketd exposes
(/api/health and /api/offers) instead of /api/auth/status; change the health
check to verify /api/health responds successfully (HTTP 200 / expected health
JSON) and change the initialization assertion to call /api/offers and assert the
expected empty/uninitialized response shape for offers.

In @.github/workflows/test.yml:
- Around line 48-58: Remove the redundant --no-default-features flags from the
"Run tests" and "Check documentation" steps: update the commands run by those
steps so they call `cargo test --verbose` and `cargo doc --no-deps`
(respectively) instead of `cargo test --verbose --no-default-features` and
`cargo doc --no-deps --no-default-features`, since the manifest defines no
features; keep the step names ("Run tests", "Check documentation") and shells
unchanged.

In `@docker-compose.yml`:
- Line 54: The compose file defines TOR_AUTH_PASSWORD twice inconsistently: one
place uses a fallback default (coinswap) and another uses the required parameter
expansion (:${TOR_AUTH_PASSWORD:?…}) so the required check is never hit; to fix,
either make the password truly required by removing the fallback default where
TOR_AUTH_PASSWORD is set (so it uses plain ${TOR_AUTH_PASSWORD}) or, if a
default is acceptable, remove the required check by changing the usage that
currently uses :? to use :- (or plain ${TOR_AUTH_PASSWORD}) so both locations
use the same semantics; update the occurrences referencing TOR_AUTH_PASSWORD and
the marketd environment/command that uses the :? expansion to match your chosen
behavior.

---

Duplicate comments:
In `@src/config.rs`:
- Around line 35-36: The default listen address currently set on the listen_addr
config field (#[arg(long, env = "MARKETD_LISTEN_ADDR", default_value =
"127.0.0.1:3000")]) restricts binding to localhost; update the default_value to
"0.0.0.0:3000" so the server binds all interfaces and can accept external
connections, keeping the same arg/env attribute and field name listen_addr.

In `@src/server.rs`:
- Around line 28-35: get_health currently calls store.read().unwrap() which will
panic on a poisoned RwLock; mirror get_offers' behavior instead: change
get_health to return Result<Json<Value>, StatusCode>, replace unwrap() with a
match or map_err on store.read() and return
Err(StatusCode::INTERNAL_SERVER_ERROR) when the lock is poisoned, then on Ok(s)
build and return Json(json!({ "status":"ok","offer_count":
s.offers.len(),"last_sync": s.last_sync }))—use the same error mapping/logging
style as get_offers to keep consistent handling of SharedStore read failures.
- Around line 11-16: The SPA fallback is currently attached to the same Router
that defines API routes (Router::new() with .route("/api/offers", ...),
.route("/api/health", ...), .fallback_service(spa)), causing unknown /api/*
paths to return the SPA HTML; instead, create a dedicated api_router (e.g.,
Router::new() with .route("/offers", get(get_offers)) and .route("/health",
get(get_health)) and any CORS/middleware) and then mount it under the API prefix
using .nest("/api", api_router) (or .route("/api", api_router)), leaving the
top-level Router to apply .fallback_service(spa) so unknown /api/* yield the
router 404 while non-API paths fall back to the SPA.
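The poisoned-lock handling described for get_health can be sketched in std-only Rust; `Store` below is a hypothetical stand-in for marketd's shared state, with no axum types involved:

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Hypothetical stand-in for marketd's SharedStore.
#[derive(Default)]
struct Store {
    offers: Vec<String>,
}

// Mirror the suggested shape: map a poisoned lock to an error value
// instead of panicking via unwrap().
fn health(store: &Arc<RwLock<Store>>) -> Result<usize, &'static str> {
    match store.read() {
        Ok(s) => Ok(s.offers.len()),
        Err(_) => Err("store lock poisoned"),
    }
}

fn main() {
    let store = Arc::new(RwLock::new(Store::default()));
    store.write().unwrap().offers.push("offer-1".into());
    assert_eq!(health(&store), Ok(1));

    // Poison the lock: panic while holding the write guard.
    let poisoner = Arc::clone(&store);
    let _ = thread::spawn(move || {
        let _guard = poisoner.write().unwrap();
        panic!("poison the lock");
    })
    .join();

    // The handler now degrades to an error instead of panicking.
    assert_eq!(health(&store), Err("store lock poisoned"));
    println!("ok");
}
```

In an axum handler the `Err` arm would map to `StatusCode::INTERNAL_SERVER_ERROR`, matching the error handling in get_offers.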

In `@src/state.rs`:
- Around line 40-46: The current extraction of FidelityBond outpoint via
serde_json::to_value silently falls back to txid = "" and vout = 0 when the
serialized shape changes; update the block around bond_json / txid / vout to
detect missing or invalid fields and emit a diagnostic warning (e.g., using
tracing::warn! or log::warn!) that includes the fidelity bond debug info (the
bond_json or bond identifier) and the fact extraction failed, and prefer
returning an error or Option upstream if appropriate; specifically modify the
code handling FidelityBond::outpoint (the bond_json variable and the txid/vout
extraction) to log the failure with context rather than silently using empty
values.
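The warn-instead-of-silent-default pattern for the outpoint extraction can be sketched in std-only Rust; here a `HashMap` stands in for the serialized bond JSON, and `eprintln!` stands in for `tracing::warn!`:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the serde_json value of a FidelityBond.
type BondJson = HashMap<String, String>;

// Instead of silently falling back to txid = "" and vout = 0, log the
// failure with the bond's debug info and return None so the caller can
// skip the malformed bond.
fn extract_outpoint(bond: &BondJson) -> Option<(String, u32)> {
    let txid = bond.get("txid");
    let vout = bond.get("vout").and_then(|v| v.parse::<u32>().ok());
    match (txid, vout) {
        (Some(t), Some(v)) => Some((t.clone(), v)),
        _ => {
            eprintln!("warning: failed to extract outpoint from bond: {bond:?}");
            None
        }
    }
}

fn main() {
    let mut good = BondJson::new();
    good.insert("txid".into(), "abc123".into());
    good.insert("vout".into(), "1".into());
    assert_eq!(extract_outpoint(&good), Some(("abc123".into(), 1)));

    // Missing fields: a warning is emitted and the bond is skipped.
    assert_eq!(extract_outpoint(&BondJson::new()), None);
    println!("ok");
}
```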

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 2bdc77ba-562a-45a7-9608-298d1880cddb

📥 Commits

Reviewing files that changed from the base of the PR and between ff4073d and b993ea2.

⛔ Files ignored due to path filters (3)
  • Cargo.lock is excluded by !**/*.lock
  • docs/marketd-arch.png is excluded by !**/*.png
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (19)
  • .github/workflows/docker-build.yml
  • .github/workflows/lint.yml
  • .github/workflows/rustfmt.yml
  • .github/workflows/test.yml
  • .gitignore
  • Cargo.toml
  • Dockerfile
  • daemon/Cargo.toml
  • daemon/src/main.rs
  • docker-compose.yml
  • docs/ARCH.md
  • run.sh
  • src/config.rs
  • src/main.rs
  • src/server.rs
  • src/state.rs
  • web/offer_data.json
  • web/src/App.jsx
  • web/vite.config.js
💤 Files with no reviewable changes (3)
  • web/offer_data.json
  • daemon/Cargo.toml
  • daemon/src/main.rs

Comment on lines +6 to +19
```yaml
    paths:
      - "Dockerfile"
      - "src/**"
      - "frontend/**"
      - "Cargo.toml"
      - "Cargo.lock"
  pull_request:
    branches: ["main"]
    paths:
      - "Dockerfile"
      - "src/**"
      - "frontend/**"
      - "Cargo.toml"
      - "Cargo.lock"
```

⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Paths filter references non-existent frontend/ directory.

Lines 9 and 17 trigger the workflow on changes to frontend/**, but the actual frontend directory in this repository is web/ (as evidenced by the Dockerfile, package.json location, and other workflow files). This workflow will never run when the frontend changes.

🔧 Proposed fix
```diff
     paths:
       - "Dockerfile"
       - "src/**"
-      - "frontend/**"
+      - "web/**"
       - "Cargo.toml"
       - "Cargo.lock"
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In @.github/workflows/docker-build.yml around lines 6 - 19, Replace the
incorrect frontend path filter "frontend/**" with the actual directory "web/**"
in the workflow's push and pull_request 'paths' blocks so changes to the real
frontend trigger the job; specifically update both occurrences of the string
"frontend/**" under the 'paths' arrays in the Docker build workflow to "web/**".

```yaml
          context: .
          push: false
          load: true
          tags: maker-dashboard:test
```

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Image tagged with wrong project name.

The image is tagged as maker-dashboard:test but this is the marketd project. This will cause confusion if anyone inspects local Docker images or references this tag in other scripts.

🔧 Proposed fix
```diff
-          tags: maker-dashboard:test
+          tags: marketd:test
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
-          tags: maker-dashboard:test
+          tags: marketd:test
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In @.github/workflows/docker-build.yml at line 40, Replace the incorrect image
tag string used in the Docker build step: find the line containing the literal
"tags: maker-dashboard:test" and change it to use the correct project name (e.g.
"tags: marketd:test") so the built image is tagged for the marketd project;
ensure any other occurrences of "maker-dashboard" in that workflow are similarly
updated to "marketd".

Comment on lines +44 to +68
```yaml
      - name: Test container starts and responds
        run: |
          docker run -d --rm --name test-container -p 3000:3000 \
            -e DASHBOARD_HOST=0.0.0.0 -e DASHBOARD_ALLOW_REMOTE=true \
            maker-dashboard:test
          # Wait for the server to be ready
          for i in $(seq 1 30); do
            if curl -sf http://localhost:3000/ > /dev/null 2>&1; then
              echo "Server is up"
              break
            fi
            if [ "$i" -eq 30 ]; then
              echo "Server failed to start"
              docker logs test-container
              exit 1
            fi
            sleep 1
          done
          # Verify API responds and that a fresh container is uninitialized.
          if ! curl -sf http://localhost:3000/api/auth/status \
            | jq -e '.data.initialized == false and .data.authenticated == false'; then
            echo "Auth status assertion failed"
            docker logs test-container
            exit 1
          fi
```

⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Test container uses wrong environment variables and API endpoint.

Two mismatches indicate this workflow was copied from a different project:

  1. Line 47: Sets DASHBOARD_HOST and DASHBOARD_ALLOW_REMOTE, but marketd uses MARKETD_LISTEN_ADDR (see src/config.rs and docker-compose.yml)
  2. Lines 63-64: Tests /api/auth/status with initialized and authenticated fields, but marketd only exposes /api/offers and /api/health (see src/server.rs)

The test will fail because the container won't honor the env vars and the API endpoint doesn't exist.

🔧 Proposed fix
```diff
      - name: Test container starts and responds
        run: |
          docker run -d --rm --name test-container -p 3000:3000 \
-            -e DASHBOARD_HOST=0.0.0.0 -e DASHBOARD_ALLOW_REMOTE=true \
-            maker-dashboard:test
+            marketd:test
          # Wait for the server to be ready
          for i in $(seq 1 30); do
-            if curl -sf http://localhost:3000/ > /dev/null 2>&1; then
+            if curl -sf http://localhost:3000/api/health > /dev/null 2>&1; then
              echo "Server is up"
              break
            fi
            if [ "$i" -eq 30 ]; then
              echo "Server failed to start"
              docker logs test-container
              exit 1
            fi
            sleep 1
          done
-          # Verify API responds and that a fresh container is uninitialized.
-          if ! curl -sf http://localhost:3000/api/auth/status \
-            | jq -e '.data.initialized == false and .data.authenticated == false'; then
-            echo "Auth status assertion failed"
+          # Verify API responds
+          if ! curl -sf http://localhost:3000/api/health | jq -e '.status'; then
+            echo "Health check failed"
            docker logs test-container
            exit 1
          fi
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In @.github/workflows/docker-build.yml around lines 44 - 68, The workflow is
using the wrong env vars and endpoint: replace the DASHBOARD_* env vars with the
env var the service actually reads (MARKETD_LISTEN_ADDR) when running the
container (reference MARKETD_LISTEN_ADDR) and update the runtime assertions to
call the API endpoints that marketd exposes (/api/health and /api/offers)
instead of /api/auth/status; change the health check to verify /api/health
responds successfully (HTTP 200 / expected health JSON) and change the
initialization assertion to call /api/offers and assert the expected
empty/uninitialized response shape for offers.

Comment on lines +48 to +58
```yaml
          cargo build --verbose --no-default-features

      - name: Run tests
        shell: bash
        run: |
          cargo test --verbose --no-default-features

      - name: Check documentation
        shell: bash
        run: |
          cargo doc --no-deps --no-default-features
```

🧹 Nitpick | 🔵 Trivial | 💤 Low value

--no-default-features flag is redundant but harmless.

The Cargo.toml manifest defines no [features] section, so --no-default-features has no effect. While this doesn't cause issues, it may confuse future maintainers who expect feature flags to exist.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In @.github/workflows/test.yml around lines 48 - 58, Remove the redundant
--no-default-features flags from the "Run tests" and "Check documentation"
steps: update the commands run by those steps so they call `cargo test
--verbose` and `cargo doc --no-deps` (respectively) instead of `cargo test
--verbose --no-default-features` and `cargo doc --no-deps
--no-default-features`, since the manifest defines no features; keep the step
names ("Run tests", "Check documentation") and shells unchanged.

Comment thread: docker-compose.yml

```yaml
    volumes:
      - tor-data:/var/lib/tor
    environment:
      - TOR_AUTH_PASSWORD=${TOR_AUTH_PASSWORD:-coinswap}
```

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Inconsistent TOR_AUTH_PASSWORD handling creates a misleading "required" check.

Line 54 sets a fallback default (coinswap), but line 75 uses :? to mark it as required. The :? check will never trigger because docker-compose will always provide the fallback from line 54 before marketd starts. If the password should truly be required, remove the default; if a default is acceptable, remove the :?.

🔧 Proposed fix (make password truly required)
```diff
     environment:
-      - TOR_AUTH_PASSWORD=${TOR_AUTH_PASSWORD:-coinswap}
+      - TOR_AUTH_PASSWORD=${TOR_AUTH_PASSWORD:?TOR_AUTH_PASSWORD is required}
       - TOR_SOCKS_PORT=${TOR_SOCKS_PORT:-9050}
       - TOR_CONTROL_PORT=${TOR_CONTROL_PORT:-9051}
```

Also applies to: 75-75

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@docker-compose.yml` at line 54, The compose file defines TOR_AUTH_PASSWORD
twice inconsistently: one place uses a fallback default (coinswap) and another
uses the required parameter expansion (:${TOR_AUTH_PASSWORD:?…}) so the required
check is never hit; to fix, either make the password truly required by removing
the fallback default where TOR_AUTH_PASSWORD is set (so it uses plain
${TOR_AUTH_PASSWORD}) or, if a default is acceptable, remove the required check
by changing the usage that currently uses :? to use :- (or plain
${TOR_AUTH_PASSWORD}) so both locations use the same semantics; update the
occurrences referencing TOR_AUTH_PASSWORD and the marketd environment/command
that uses the :? expansion to match your chosen behavior.
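The difference between the two expansions is easy to check in a POSIX shell: `:-` substitutes a default when the variable is unset or empty, while `:?` aborts the expansion with an error.

```shell
#!/bin/sh
# Compare the two parameter expansions used in docker-compose.yml.
unset TOR_AUTH_PASSWORD

# ":-" falls back to a default when the variable is unset or empty.
echo "with fallback: ${TOR_AUTH_PASSWORD:-coinswap}"

# ":?" makes the variable required; expansion fails with an error.
# Run it in a subshell so this demo script keeps going.
if ! ( : "${TOR_AUTH_PASSWORD:?TOR_AUTH_PASSWORD is required}" ) 2>/dev/null; then
  echo "required check failed as expected"
fi
```

docker-compose applies the same POSIX-style expansion rules when interpolating the compose file, so whichever semantics you pick should be used consistently in both places.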

