minor fixes around modules #863
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the review settings.
📝 Walkthrough

Centralized confirmation waiting in the wallet with a multi-tx, threshold-based waiter (with interruption/abort hooks), updated call sites (maker/taker/tests), increased a taker non-test confirmation timeout (600s→900s), refined websocket read error handling, and added a 5ms idle sleep in the watcher loop.

Changes: Reliability and Performance Improvements
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller as Taker/Maker
    participant Wallet
    participant Network
    Caller->>Wallet: wait_for_tx_confirmation(&[txid], required_confirms, shutdown, abort_check)
    Wallet->>Network: get_raw_transaction_info(txid) (loop)
    Network-->>Wallet: txinfo (confirmations, height)
    Wallet-->>Caller: confirmation_height or WalletError::Interrupted / other error
```
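For orientation, a minimal sketch of what such a multi-tx, threshold-based waiter can look like. The signature, polling cadence, error type, and the `confirms_of` lookup closure below are illustrative assumptions, not the crate's actual API:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::Duration;

/// Stand-in for the crate's real error type (src/wallet/error.rs).
#[derive(Debug)]
enum WalletError {
    Interrupted(String),
}

/// Illustrative waiter: blocks until every txid reaches `required_confirms`
/// confirmations, returning the highest confirmation block height seen.
/// `confirms_of` stands in for the RPC lookup (get_raw_transaction_info in
/// the diagram) and returns (confirmations, block_height) when the tx is seen.
fn wait_for_tx_confirmation(
    txids: &[&str],
    required_confirms: u32,
    shutdown: Option<&AtomicBool>,
    confirms_of: impl Fn(&str) -> Option<(u32, u32)>,
) -> Result<u32, WalletError> {
    loop {
        // Interruption hook: bail out between polls if a shutdown was requested.
        if let Some(flag) = shutdown {
            if flag.load(Ordering::SeqCst) {
                return Err(WalletError::Interrupted("shutdown requested".into()));
            }
        }
        let mut max_height = 0u32;
        let mut all_confirmed = true;
        for txid in txids {
            match confirms_of(*txid) {
                Some((confirms, height)) if confirms >= required_confirms => {
                    max_height = max_height.max(height);
                }
                _ => all_confirmed = false, // unconfirmed or not yet seen
            }
        }
        if all_confirmed {
            return Ok(max_height);
        }
        std::thread::sleep(Duration::from_secs(1));
    }
}
```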
Estimated Code Review Effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (inconclusive)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/watch_tower/nostr_discovery.rs`:
- Around line 205-207: The match arm that converts
tungstenite::Error::ConnectionClosed and tungstenite::Error::AlreadyClosed into
Ok(()) causes run_nostr_session_for_relay to treat relay-initiated disconnects
as graceful shutdowns and stop reconnecting; change the behavior to return an
Err (or a distinct error) from that arm instead of Ok(()) so the outer retry
loop in run_nostr_session_for_relay will log and retry, or alternatively check
the shared shutdown flag inside that arm and only return Ok(()) when shutdown is
set while returning Err otherwise; locate the match handling those tungstenite
errors and update it accordingly.
📒 Files selected for processing (3)
src/taker/api.rs, src/watch_tower/nostr_discovery.rs, src/watch_tower/watcher.rs
Pull request overview
This PR makes small operational/performance tweaks to the watch tower and taker by reducing busy-looping in the watcher, adjusting Nostr relay read error handling to avoid unnecessary reconnects, and increasing the taker confirmation wait timeout.
Changes:
- Add a small idle sleep in the watch tower watcher loop when there are no commands and no ZMQ events.
- Treat `WouldBlock`/`TimedOut`/`Interrupted` socket read errors as non-fatal in Nostr discovery to prevent reconnect spam (see the sketch after this list).
- Increase the taker confirmation wait timeout from 600s to 900s outside integration tests.
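For illustration, a sketch of the error classification this implies, assuming tungstenite's `Error::Io` variant (which wraps the underlying `std::io::Error`); the helper name is hypothetical:

```rust
use std::io::ErrorKind;

/// Hypothetical helper: returns true for read errors that just mean
/// "nothing to read right now" on a socket with a read timeout, so the
/// caller can `continue` the loop instead of tearing down and reconnecting.
fn is_transient(err: &tungstenite::Error) -> bool {
    match err {
        tungstenite::Error::Io(io_err) => matches!(
            io_err.kind(),
            ErrorKind::WouldBlock | ErrorKind::TimedOut | ErrorKind::Interrupted
        ),
        _ => false, // protocol errors, closed connections, etc. stay fatal
    }
}
```

In the read loop, an arm like `Err(e) if is_transient(&e) => continue` keeps the session alive through idle periods instead of triggering a fresh reconnect on every timeout.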
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| src/watch_tower/watcher.rs | Adds a short sleep when both the command queue and ZMQ backend are idle to prevent CPU busy-looping. |
| src/watch_tower/nostr_discovery.rs | Adjusts websocket read error handling to continue on transient I/O errors rather than reconnecting. |
| src/taker/api.rs | Increases the non-test confirmation wait timeout to better match typical block intervals. |
Codecov Report: ❌ Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master     #863      +/-   ##
==========================================
+ Coverage   68.87%   78.47%   +9.60%
==========================================
  Files          35       57      +22
  Lines        4932    15762   +10830
==========================================
+ Hits         3397    12370    +8973
- Misses       1535     3392    +1857
```

☔ View full report in Codecov by Sentry.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/watch_tower/watcher.rs (1)
179-188: 🧹 Nitpick | 🔵 Trivial | 💤 Low value

Sleep fires after every command when no ZMQ event is pending, not only when fully idle.

After `handle_command` returns (any command other than `Shutdown`), execution falls through to `self.backend.poll()`; if `poll()` returns `None` we sleep 5 ms before the next `try_recv()`. The comment says "no commands and no ZMQ events" but the sleep is conditioned only on the ZMQ side. A burst of incoming commands (e.g. multiple `RegisterWatchRequest` at startup) would each incur an extra 5 ms round-trip.

♻️ Proposed fix — track whether work was done this tick

```diff
 loop {
+    let mut did_work = false;
     match self.rx_requests.try_recv() {
         Ok(cmd) => {
+            did_work = true;
             if !self.handle_command(cmd, &rest_backend_2) {
                 discovery_clone.store(true, Ordering::SeqCst);
                 break;
             }
         }
         Err(TryRecvError::Disconnected) => break,
         Err(TryRecvError::Empty) => {}
     }
     if let Some(event) = self.backend.poll() {
+        did_work = true;
         self.handle_event(event);
-    } else {
-        // Avoid busy-looping when there are no commands and no ZMQ events.
-        std::thread::sleep(Duration::from_millis(5));
+    }
+    if !did_work {
+        // Avoid busy-looping when there are no commands and no ZMQ events.
+        std::thread::sleep(Duration::from_millis(5));
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/watch_tower/watcher.rs` around lines 179 - 188, The loop currently sleeps whenever backend.poll() returns None even if a command was just processed; add a local boolean (e.g., work_done) at the start of the loop, set it true whenever a command is successfully handled (from try_recv/handle_command) or when self.backend.poll() yields an event handled by handle_event, then only call std::thread::sleep(Duration::from_millis(5)) when work_done is false (i.e., no command and no event this tick); update references to try_recv(), handle_command/handle_event, and backend.poll() in watcher.rs to mark work_done appropriately and preserve existing shutdown/control flow.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/watch_tower/nostr_discovery.rs`:
- Around line 205-210: The match arm handling
tungstenite::Error::ConnectionClosed | tungstenite::Error::AlreadyClosed should
return the actual matched error when shutdown.load(Ordering::SeqCst) is false so
reconnection logic in run_nostr_session_for_relay triggers; split that arm into
two patterns or bind the error (e.g., err) so you can return err.into() for both
ConnectionClosed and AlreadyClosed, and only return Ok(()) when shutdown is
true, preserving the original error value for logging/fidelity.
---
Outside diff comments:
In `@src/watch_tower/watcher.rs`:
- Around line 179-188: The loop currently sleeps whenever backend.poll() returns
None even if a command was just processed; add a local boolean (e.g., work_done)
at the start of the loop, set it true whenever a command is successfully handled
(from try_recv/handle_command) or when self.backend.poll() yields an event
handled by handle_event, then only call
std::thread::sleep(Duration::from_millis(5)) when work_done is false (i.e., no
command and no event this tick); update references to try_recv(),
handle_command/handle_event, and backend.poll() in watcher.rs to mark work_done
appropriately and preserve existing shutdown/control flow.
📒 Files selected for processing (2)
src/watch_tower/nostr_discovery.rs, src/watch_tower/watcher.rs
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/watch_tower/watcher.rs (1)
171-187: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Idle backoff is applied even when a command was just processed.

`std::thread::sleep(Duration::from_millis(5))` currently runs whenever `backend.poll()` is `None`, including iterations where `try_recv()` returned `Ok(cmd)`. That adds avoidable command latency and reduces throughput.

Proposed fix

```diff
 loop {
+    let mut idle = true;
     match self.rx_requests.try_recv() {
         Ok(cmd) => {
+            idle = false;
             if !self.handle_command(cmd, &rest_backend_2) {
                 discovery_clone.store(true, Ordering::SeqCst);
                 break;
             }
         }
         Err(TryRecvError::Disconnected) => break,
         Err(TryRecvError::Empty) => {}
     }
     if let Some(event) = self.backend.poll() {
+        idle = false;
         self.handle_event(event);
-    } else {
+    }
+
+    if idle {
         // Avoid busy-looping when there are no commands and no ZMQ events.
         std::thread::sleep(Duration::from_millis(5));
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/watch_tower/watcher.rs` around lines 171 - 187, The loop currently always sleeps when backend.poll() returns None, even if rx_requests.try_recv() returned Ok(cmd) and was handled, causing added latency; change the loop to track whether any work was done this iteration (e.g., introduce a local bool like did_work), set it to true when try_recv() yields Ok and handle_command(...) runs (or when handle_event(...) runs), and only call std::thread::sleep(Duration::from_millis(5)) when did_work is false; update the block that matches on rx_requests.try_recv(), the call to handle_command, and the backend.poll() branch to set that flag accordingly so sleep is skipped after processing a command or event.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Outside diff comments:
In `@src/watch_tower/watcher.rs`:
- Around line 171-187: The loop currently always sleeps when backend.poll()
returns None, even if rx_requests.try_recv() returned Ok(cmd) and was handled,
causing added latency; change the loop to track whether any work was done this
iteration (e.g., introduce a local bool like did_work), set it to true when
try_recv() yields Ok and handle_command(...) runs (or when handle_event(...)
runs), and only call std::thread::sleep(Duration::from_millis(5)) when did_work
is false; update the block that matches on rx_requests.try_recv(), the call to
handle_command, and the backend.poll() branch to set that flag accordingly so
sleep is skipped after processing a command or event.
📒 Files selected for processing (4)
src/taker/api.rs, src/taker/error.rs, src/watch_tower/nostr_discovery.rs, src/watch_tower/watcher.rs
💤 Files with no reviewable changes (2)
- src/taker/error.rs
- src/taker/api.rs
Actionable comments posted: 4
♻️ Duplicate comments (1)
src/watch_tower/nostr_discovery.rs (1)
205-210: 🧹 Nitpick | 🔵 Trivial | ⚡ Quick win

Preserve the matched error variant for better diagnostics.

The past review comment remains valid: when `shutdown` is false, line 209 always returns `tungstenite::Error::ConnectionClosed.into()`, discarding the distinction between `ConnectionClosed` and `AlreadyClosed`. Binding the matched error preserves logging fidelity.

♻️ Bind the error to preserve the variant

```diff
-Err(tungstenite::Error::ConnectionClosed | tungstenite::Error::AlreadyClosed) => {
-    if shutdown.load(Ordering::SeqCst) {
-        return Ok(());
-    }
-    return Err(tungstenite::Error::ConnectionClosed.into());
-}
+Err(e @ (tungstenite::Error::ConnectionClosed | tungstenite::Error::AlreadyClosed)) => {
+    if shutdown.load(Ordering::SeqCst) {
+        return Ok(());
+    }
+    return Err(e.into());
+}
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/watch_tower/nostr_discovery.rs` around lines 205 - 210, The match arm that currently patterns Err(tungstenite::Error::ConnectionClosed | tungstenite::Error::AlreadyClosed) should bind the matched error (e.g., using a variable like `e`) so that when shutdown.load(...) is false you return the original variant via `e.into()` instead of always returning `ConnectionClosed`; update the arm in the function handling the websocket read/error (the arm referencing shutdown and tungstenite::Error::ConnectionClosed | tungstenite::Error::AlreadyClosed) to bind the error and return that bound error when not shutting down to preserve diagnostic fidelity.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/taker/legacy_swap.rs`:
- Around line 313-322: The confirmation wait currently uses the `?` operator on
`wallet.wait_for_tx_confirmation(...)` which propagates interruption directly;
change this to mirror the maker path by matching the result and mapping an
interruption error to `TakerError::ContractsBroadcasted(...)` (include the
funding txids as in the maker path) while otherwise returning the original error
or success. Locate the block around `prev_confirm_height = { let wallet =
self.read_wallet()?; wallet.wait_for_tx_confirmation(&funding_txids,
required_confirms, None, None,)? }` in legacy_swap.rs and replace the `?`
propagation with a match that detects the wallet interruption error and returns
`Err(TakerError::ContractsBroadcasted(funding_txids.clone()))`, returning the
normal Ok height for success and propagating other errors unchanged.
In `@src/taker/taproot_swap.rs`:
- Around line 566-570: The current wait_for_tx_confirmation call may propagate
WalletError::Interrupted via ? and lose the recovery-specific
ContractsBroadcasted control flow; change the block that calls
wallet.wait_for_tx_confirmation(&contract_txids, required_confirms, None, None)
to explicitly match its Result and convert an Err(WalletError::Interrupted(_))
into the taker-side recovery error (e.g., return
Err(TakerError::ContractsBroadcasted) or the equivalent variant used in this
module) while still propagating other wallet errors; reference the symbols
wallet.read_wallet(), wait_for_tx_confirmation, WalletError::Interrupted,
contract_txids, required_confirms, and the ContractsBroadcasted taker error when
implementing the match/convert.
In `@src/wallet/api.rs`:
- Around line 2702-2714: The linear backoff computed as total_sleep =
sleep_increment_secs.saturating_mul(attempt) can grow unbounded; change it to
cap growth by introducing a max backoff (e.g., max_backoff_secs) and compute
total_sleep = std::cmp::min(sleep_increment_secs.saturating_mul(attempt),
max_backoff_secs); keep the 1-second loop and shutdown/abort checks as-is and
update the log message to reflect the capped value; reference the variables
total_sleep, sleep_increment_secs, attempt and add a nearby constant like
MAX_BACKOFF_SECS or max_backoff_secs.
- Around line 2671-2690: Replace the wallet-scoped use of getrawtransaction by
calling the wallet RPC gettransaction instead: change the call site using
self.rpc.get_raw_transaction_info(txid, ...) to use the wallet method (e.g.
self.rpc.get_transaction(txid) or the crate's equivalent) and extract
confirmations from the returned wallet transaction struct (e.g.
tx.confirmations.unwrap_or(0)) just like before; keep the existing error
handling/loop logic but rely on the wallet RPC so nodes without txindex won't
permanently fail to find confirmed wallet txs. Ensure you update any type
matches/usages around the returned value to the wallet-transaction type and keep
the same variables (txid, confirms, required_confirms, max_confirm_height,
all_confirmed) referenced in the surrounding code.
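For the capped-backoff suggestion above, a minimal sketch of the idea; the constant name and values are illustrative, not what src/wallet/api.rs actually uses:

```rust
/// Illustrative cap on the linear backoff, per the suggestion above.
const MAX_BACKOFF_SECS: u64 = 60;

/// Linear backoff that grows with the attempt count but never exceeds the cap.
fn total_sleep(sleep_increment_secs: u64, attempt: u64) -> u64 {
    std::cmp::min(sleep_increment_secs.saturating_mul(attempt), MAX_BACKOFF_SECS)
}

fn main() {
    // With a 5s increment: 5, 10, 15, ..., then capped at 60 from attempt 12 on.
    for attempt in [1u64, 2, 11, 12, 100] {
        println!("attempt {attempt}: sleep {}s", total_sleep(5, attempt));
    }
}
```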
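And for the `gettransaction` suggestion, a sketch of the wallet-RPC variant, assuming the bitcoincore_rpc crate (whose `RpcApi::get_transaction` maps to the `gettransaction` wallet RPC); the helper name and simplified error handling are illustrative:

```rust
use bitcoincore_rpc::bitcoin::Txid;
use bitcoincore_rpc::{Client, RpcApi};

/// Hypothetical helper: confirmation count via the `gettransaction` wallet
/// RPC, which works without -txindex because it only consults transactions
/// the wallet already tracks (unlike `getrawtransaction`).
fn wallet_tx_confirmations(rpc: &Client, txid: &Txid) -> Result<i32, bitcoincore_rpc::Error> {
    let tx = rpc.get_transaction(txid, None)?;
    // Negative for conflicted transactions, 0 for mempool-only.
    Ok(tx.info.confirmations)
}
```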
---
Duplicate comments:
In `@src/watch_tower/nostr_discovery.rs`:
- Around line 205-210: The match arm that currently patterns
Err(tungstenite::Error::ConnectionClosed | tungstenite::Error::AlreadyClosed)
should bind the matched error (e.g., using a variable like `e`) so that when
shutdown.load(...) is false you return the original variant via `e.into()`
instead of always returning `ConnectionClosed`; update the arm in the function
handling the websocket read/error (the arm referencing shutdown and
tungstenite::Error::ConnectionClosed | tungstenite::Error::AlreadyClosed) to
bind the error and return that bound error when not shutting down to preserve
diagnostic fidelity.
📒 Files selected for processing (10)
src/maker/api.rs, src/taker/api.rs, src/taker/error.rs, src/taker/legacy_swap.rs, src/taker/taproot_swap.rs, src/wallet/api.rs, src/wallet/error.rs, src/watch_tower/nostr_discovery.rs, src/watch_tower/watcher.rs, tests/integration/fidelity.rs
💤 Files with no reviewable changes (1)
- src/taker/error.rs
Looks good to me. Feel free to take any of the other feedback given by CodeRabbit.
Actionable comments posted: 1
♻️ Duplicate comments (2)
src/taker/taproot_swap.rs (1)
571-575: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Map interruption to a recovery error after broadcast.

At line 574, `?` can propagate `WalletError::Interrupted(_)` as a generic error even though contract txs are already broadcast (point of no return). Handle interruption explicitly and return `TakerError::ContractsBroadcasted(...)`.

Proposed fix

```diff
 let required_confirms = self.swap_state()?.params.required_confirms;
 {
     let wallet = self.read_wallet()?;
-    wallet.wait_for_tx_confirmation(&contract_txids, required_confirms, None, None)?;
+    match wallet.wait_for_tx_confirmation(&contract_txids, required_confirms, None, None) {
+        Ok(_) => {}
+        Err(crate::wallet::WalletError::Interrupted(_)) => {
+            return Err(TakerError::ContractsBroadcasted(contract_txids.clone()));
+        }
+        Err(e) => return Err(e.into()),
+    }
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/taker/taproot_swap.rs` around lines 571 - 575, The call to wallet.wait_for_tx_confirmation(...) can return WalletError::Interrupted which currently bubbles up via `?` even after contract_txids were broadcast; update the error handling around the block that reads the wallet and calls `wait_for_tx_confirmation(&contract_txids, required_confirms, None, None)` to catch an interrupted/wallet-interruption error and map it to `TakerError::ContractsBroadcasted(contract_txids.clone())` (or appropriate variant) instead of letting the generic error propagate; keep all other errors propagated as before and ensure you reference `self.swap_state()?` to obtain `required_confirms` and use `self.read_wallet()?` as before.

src/taker/legacy_swap.rs (1)
314-323: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Preserve the `ContractsBroadcasted` flow for taker funding confirmation interruption.

At line 317, `?` still hides `WalletError::Interrupted(_)` after funding is already broadcast. Mirror the explicit mapping used elsewhere and return `TakerError::ContractsBroadcasted(funding_txids.clone())`.

Proposed fix

```diff
 let required_confirms = self.swap_state()?.params.required_confirms;
 prev_confirm_height = {
     let wallet = self.read_wallet()?;
-    wallet.wait_for_tx_confirmation(
-        &funding_txids,
-        required_confirms,
-        None,
-        None,
-    )?
+    match wallet.wait_for_tx_confirmation(
+        &funding_txids,
+        required_confirms,
+        None,
+        None,
+    ) {
+        Ok(h) => h,
+        Err(crate::wallet::WalletError::Interrupted(_)) => {
+            return Err(TakerError::ContractsBroadcasted(funding_txids.clone()));
+        }
+        Err(e) => return Err(e.into()),
+    }
 };
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/taker/legacy_swap.rs` around lines 314 - 323, The current call to wallet.wait_for_tx_confirmation in legacy_swap.rs uses the ? operator which will propagate WalletError::Interrupted(_) instead of preserving the ContractsBroadcasted flow; change the error handling around wait_for_tx_confirmation (called via self.read_wallet()?) to explicitly map WalletError::Interrupted(_) to return Err(TakerError::ContractsBroadcasted(funding_txids.clone())) (e.g., match or map_err on the result of wait_for_tx_confirmation) while letting other errors propagate as before; locate usages of swap_state(), read_wallet(), wait_for_tx_confirmation and replace the `?` on that call with explicit mapping to TakerError::ContractsBroadcasted(funding_txids.clone()) when Interrupted is encountered.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/watch_tower/watcher.rs`:
- Around line 186-189: Add a short rationale to the idle backoff comment next to
the sleep in watcher.rs (the else branch that calls
std::thread::sleep(Duration::from_millis(5))) explaining why 5ms was
chosen—e.g., it gives ~200Hz polling for reasonable responsiveness while
eliminating busy-loop CPU use—so future readers see the tradeoff without
changing behavior.
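For instance, the requested comment might read something like the following; this is a sketch of the idle branch only, and the ~200 iterations/sec framing follows the prompt's own rationale:

```rust
use std::time::Duration;

/// Sketch of the idle branch with the requested rationale comment attached.
fn idle_backoff() {
    // No pending commands and no ZMQ events this tick. Sleeping 5 ms bounds
    // the loop at roughly 200 iterations/sec, which stays responsive for
    // watch-tower events while eliminating busy-loop CPU usage.
    std::thread::sleep(Duration::from_millis(5));
}

fn main() {
    idle_backoff(); // demo call; the real code runs this inside the watcher loop
}
```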
---
Duplicate comments:
In `@src/taker/legacy_swap.rs`:
- Around line 314-323: The current call to wallet.wait_for_tx_confirmation in
legacy_swap.rs uses the ? operator which will propagate
WalletError::Interrupted(_) instead of preserving the ContractsBroadcasted flow;
change the error handling around wait_for_tx_confirmation (called via
self.read_wallet()?) to explicitly map WalletError::Interrupted(_) to return
Err(TakerError::ContractsBroadcasted(funding_txids.clone())) (e.g., match or
map_err on the result of wait_for_tx_confirmation) while letting other errors
propagate as before; locate usages of swap_state(), read_wallet(),
wait_for_tx_confirmation and replace the `?` on that call with explicit mapping
to TakerError::ContractsBroadcasted(funding_txids.clone()) when Interrupted is
encountered.
In `@src/taker/taproot_swap.rs`:
- Around line 571-575: The call to wallet.wait_for_tx_confirmation(...) can
return WalletError::Interrupted which currently bubbles up via `?` even after
contract_txids were broadcast; update the error handling around the block that
reads the wallet and calls `wait_for_tx_confirmation(&contract_txids,
required_confirms, None, None)` to catch an interrupted/wallet-interruption
error and map it to `TakerError::ContractsBroadcasted(contract_txids.clone())`
(or appropriate variant) instead of letting the generic error propagate; keep
all other errors propagated as before and ensure you reference
`self.swap_state()?` to obtain `required_confirms` and use `self.read_wallet()?`
as before.
📒 Files selected for processing (10)
src/maker/api.rs, src/taker/api.rs, src/taker/error.rs, src/taker/legacy_swap.rs, src/taker/taproot_swap.rs, src/wallet/api.rs, src/wallet/error.rs, src/watch_tower/nostr_discovery.rs, src/watch_tower/watcher.rs, tests/integration/fidelity.rs
💤 Files with no reviewable changes (1)
- src/taker/error.rs
Pull Request
Description
Fixes small bugs:
1. Removed the `timeout` from the `net_wait_for_confirmation` API: blocks are generally not mined within 10 minutes, so we now wait until the txn is confirmed instead of timing out.
2. While testing on signet I was getting a lot of `WouldBlock` spam from the nostr relay response. In `read_event_loop`, `WouldBlock` / `TimedOut` / `Interrupted` are now treated as non-fatal (the loop continues) to avoid reconnections.
3. A typical swap makes a lot of calls to `zmq::poll` and `zmq::recv_event`, so I added a small idle backoff in the watcher loop.

Changes in profiling:
Related Issue(s)
Closes #
Type of Change
Protocol Version(s) Affected
- (messages.rs, contract.rs, handlers.rs)
- (messages2.rs, contract2.rs, handlers2.rs)

Affected Component(s)
- Documentation (docs/)

Checklist
Code Quality
- `cargo +nightly fmt --all` run and the result committed
- `cargo +stable clippy --all-features --lib --bins --tests -- -D warnings` with zero warnings
- `cargo +stable clippy --examples -- -D warnings` with zero warnings
- `RUSTDOCFLAGS="-D warnings" cargo +nightly doc --all-features --document-private-items --no-deps` with zero warnings
- Pre-commit hook installed (`ln -s ../../git_hooks/pre-commit .git/hooks/pre-commit` if not already set)
cargo test)cargo test --features integration-test)#[cfg(feature = "integration-test")]Documentation
docs/folder were updatedSecurity & Privacy (Critical)
integration-testfeature flag is not reachable in production code pathsSummary by CodeRabbit
Bug Fixes
Reliability / UX
Tests