📝 Walkthrough

Adds structured tracing and timing across the order pipeline: quote batching, order fetching and inventory logs, candidate construction with per-quote stats, price-cap filtering and simulations, orderbook selection, preflight iteration with failing-order removal, and approval checks.

Changes: Order Pipeline Instrumentation
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 3
🧹 Nitpick comments (4)
crates/common/src/take_orders/candidates.rs (3)
106-186: 💤 Low value

Skip field formatting when INFO is disabled for rejections.

For the rejection branch you already cap `info_rejection_logs`, but the per-decision field formatting (`format_float`, `truncate_error`, `order_hash` inside the macro call) is still performed unconditionally even when no INFO subscriber is attached. On hot paths with many quote rejections this is wasted work. Consider gating with `enabled!(Level::INFO)` analogously to how the accepted branch gates on `Level::DEBUG`.

♻️ Proposed sketch

```diff
-        } else if stats.info_rejection_logs >= MAX_INFO_CANDIDATE_DECISION_LOGS {
-            stats.omitted_info_rejection_logs += 1;
-            return;
+        } else {
+            if stats.info_rejection_logs >= MAX_INFO_CANDIDATE_DECISION_LOGS {
+                stats.omitted_info_rejection_logs += 1;
+                return;
+            }
+            if !enabled!(Level::INFO) {
+                // Still bump the counter so the cap remains meaningful, but skip
+                // the field formatting cost if no subscriber will record it.
+                stats.info_rejection_logs += 1;
+                return;
+            }
         }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@crates/common/src/take_orders/candidates.rs` around lines 106 - 186, The rejection branch in log_candidate_decision still computes expensive fields (format_float, truncate_error, order_hash, etc.) unconditionally; wrap the info! logging and the precomputed values behind a runtime check enabled!(Level::INFO) (similar to the existing DEBUG check for accepted decisions) so you only compute quoted_max_output, quoted_ratio, quote_error, and call order_hash when INFO is enabled; update the branch that checks stats.info_rejection_logs / MAX_INFO_CANDIDATE_DECISION_LOGS to perform that enabled!(Level::INFO) check before doing the formatting and invoking info!, and ensure stats.info_rejection_logs is incremented only after a logged info call.
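The gating pattern above can be illustrated with a stdlib-only sketch: the expensive formatting runs only once the level check passes, while the rejection counter still advances so the log cap stays meaningful. Here `is_info_enabled` stands in for tracing's `enabled!(Level::INFO)`; the function and field names are illustrative, not the crate's actual API.

```rust
const MAX_INFO_CANDIDATE_DECISION_LOGS: usize = 100;

// Stands in for the format_float / truncate_error / order_hash work.
fn expensive_format(value: f64) -> String {
    format!("{value:.6}")
}

/// Returns the formatted field only when it would actually be recorded.
fn rejection_fields(
    is_info_enabled: bool,
    info_rejection_logs: &mut usize,
    value: f64,
) -> Option<String> {
    if *info_rejection_logs >= MAX_INFO_CANDIDATE_DECISION_LOGS {
        return None; // capped: counted as omitted elsewhere
    }
    *info_rejection_logs += 1;
    if !is_info_enabled {
        // counter still advances, but the formatting cost is skipped
        return None;
    }
    Some(expensive_format(value))
}

fn main() {
    let mut logged = 0usize;
    // no INFO subscriber: counter bumps, formatting skipped
    assert_eq!(rejection_fields(false, &mut logged, 1.0), None);
    assert_eq!(logged, 1);
    // INFO enabled: fields are formatted
    assert_eq!(rejection_fields(true, &mut logged, 1.5), Some("1.500000".into()));
    assert_eq!(logged, 2);
}
```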
274-374: ⚡ Quick win

Redundant debug logs after `log_candidate_decision`.

Each rejection path now both calls `log_candidate_decision(...)` (which already emits a structured `info!`/`debug!` event with full decision context) and immediately follows with a second `debug!(...)` carrying a strict subset of that information. The trailing `debug!` calls are now duplicate noise and can be dropped to keep one source of truth per decision.

♻️ Proposed cleanup

```diff
             reason: "quote_failed",
         });
-        debug!(
-            orderbook = %orderbook,
-            input_index = quote.pair.input_index,
-            output_index = quote.pair.output_index,
-            error = ?quote.error,
-            "skipping failed quote"
-        );
         return Ok(None);
```

(apply the same removal to the `missing_quote_data`, `out_of_bounds_io_indices`, `wrong_direction`, and `zero_or_non_positive_capacity` branches)

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@crates/common/src/take_orders/candidates.rs` around lines 274 - 374, Remove the redundant debug! calls that immediately follow log_candidate_decision(...) in the rejection branches for quote_failed, missing_quote_data, out_of_bounds_io_indices, wrong_direction, and zero_or_non_positive_capacity: keep the log_candidate_decision(CandidateDecisionLog { ... }) calls (and stats increments) and delete the subsequent debug!(...) statements (the ones that log orderbook/input_io_index/output_io_index and a short message) so each decision is logged only via log_candidate_decision; ensure no other logic is changed and that references like quote.pair.input_index/output_index, input_io_index/output_io_index, stats, orderbook, order, quote, input_token, and output_token remain intact.
142-186: ⚡ Quick win

Collapse the duplicated accept/reject log bodies.

The `if is_accepted { debug!(...) } else { info!(...) }` arms emit byte-for-byte the same field set with only the macro level differing. Consider using a single block plus a helper level, e.g. via `tracing::event!(level, ...)` with the level chosen up front, to remove ~40 lines of duplication and keep the field list in one place going forward.

♻️ Sketch

```rust
let level = if is_accepted { Level::DEBUG } else { Level::INFO };
tracing::event!(
    level,
    orderbook = %orderbook,
    order_hash = %order_hash(order),
    // ... single field list ...
    decision,
    reason,
    "take-order candidate decision"
);
```
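The "pick the level first, emit once" shape can be seen without the tracing crate at all. The sketch below uses a hypothetical `Level` enum and `emit` function as stand-ins (tracing's `event!` does accept a runtime level expression in recent versions); only the control flow mirrors the suggestion.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Level {
    Debug,
    Info,
}

// One call site, one field list, instead of two diverging macro arms.
fn emit(level: Level, decision: &str, reason: &str) -> String {
    format!("{level:?} decision={decision} reason={reason}")
}

fn log_decision(is_accepted: bool, reason: &str) -> String {
    let (level, decision) = if is_accepted {
        (Level::Debug, "accepted")
    } else {
        (Level::Info, "rejected")
    };
    emit(level, decision, reason)
}

fn main() {
    assert_eq!(log_decision(true, "ok"), "Debug decision=accepted reason=ok");
    assert_eq!(
        log_decision(false, "quote_failed"),
        "Info decision=rejected reason=quote_failed"
    );
}
```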
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@crates/common/src/take_orders/candidates.rs` around lines 142 - 186, The two logging branches (the is_accepted condition that calls debug!(...) and the else that calls info!(...)) duplicate the exact same fields; replace them with a single tracing::event! call where the level is chosen first (e.g. let level = if is_accepted { Level::DEBUG } else { Level::INFO }) and then emit the shared field list (including order_hash(order), quote.pair.*, input.map(|io| io.token).unwrap_or(Address::ZERO), quoted_max_output, quoted_ratio, quote.signed_context.len(), decision, reason, etc.) once; ensure you preserve the stats.info_rejection_logs += 1 side effect only when level == Level::INFO (i.e. when !is_accepted) so behavior remains identical.

crates/common/src/raindex_client/take_orders/mod.rs (1)
32-122: ⚡ Quick win

De-duplicate logging helpers across `take_orders` modules.

`truncate_error`, `format_float_for_log`, and `order_hash_for_leg` here are byte-equivalent to `truncate_error`, `format_float`, and `order_hash` in crates/common/src/take_orders/candidates.rs (and per the AI summary, simulation.rs/preflight.rs grow similar helpers). The `MAX_LOGGED_ERROR_CHARS`/`MAX_LEN` constants are also duplicated (both 512). Consider extracting these into a small `take_orders::log_helpers` (or similar) module so the format/truncate semantics stay consistent if they evolve.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@crates/common/src/raindex_client/take_orders/mod.rs` around lines 32 - 122, Duplicate logging helpers (truncate_error, format_float_for_log, order_hash_for_leg and the MAX_LEN constant) should be extracted into a single take_orders::log_helpers module and consumed from there; create log_helpers with the shared const (use the canonical name MAX_LOGGED_ERROR_CHARS or keep MAX_LEN but consolidate), move/rename the functions to that module (truncate_error, format_float, order_hash) and update callers in this file (emit_selected_leg!, format_float_for_log usage, order_hash_for_leg call sites, and any other take_orders modules like candidates.rs/simulation.rs/preflight.rs) to use the centralized functions via use take_orders::log_helpers::{truncate_error, format_float, order_hash} (or re-exported names) so all duplicate implementations are removed and semantics remain consistent.
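A minimal sketch of the shared `log_helpers` module proposed above, assuming the duplicated constant value of 512 mentioned in the comment. Truncation here iterates chars so multi-byte UTF-8 error strings never split mid-character; the exact signatures and suffix text in the codebase may differ.

```rust
mod log_helpers {
    pub const MAX_LOGGED_ERROR_CHARS: usize = 512;

    /// Truncate on char boundaries so multi-byte UTF-8 never panics.
    pub fn truncate_error(err: &str) -> String {
        if err.chars().count() <= MAX_LOGGED_ERROR_CHARS {
            err.to_string()
        } else {
            let head: String = err.chars().take(MAX_LOGGED_ERROR_CHARS).collect();
            format!("{head}… (truncated)")
        }
    }
}

fn main() {
    use log_helpers::truncate_error;
    // short strings pass through unchanged
    assert_eq!(truncate_error("short"), "short");
    // long strings are capped at the shared limit plus a marker
    let long = "x".repeat(600);
    let out = truncate_error(&long);
    assert!(out.starts_with(&"x".repeat(512)));
    assert!(out.ends_with("… (truncated)"));
}
```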
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: fe362291-3ba0-4e6d-88a9-1856b1c2e341
📒 Files selected for processing (8)
- crates/common/src/raindex_client/order_quotes.rs
- crates/common/src/raindex_client/orders.rs
- crates/common/src/raindex_client/take_orders/approval.rs
- crates/common/src/raindex_client/take_orders/mod.rs
- crates/common/src/raindex_client/take_orders/selection.rs
- crates/common/src/take_orders/candidates.rs
- crates/common/src/take_orders/preflight.rs
- crates/common/src/take_orders/simulation.rs
```rust
fn log_order_inventory_for_pair(
    chain_id: u32,
    sell_token: Address,
    buy_token: Address,
    orders: &[RaindexOrder],
) {
    for (order_index, order) in orders
        .iter()
        .take(MAX_INFO_ORDER_INVENTORY_LOGS)
        .enumerate()
    {
        info!(
            chain_id,
            sell_token = %sell_token,
            buy_token = %buy_token,
            order_index,
            order_hash = %order.order_hash(),
            orderbook = %order.orderbook(),
            input_vaults = ?format_order_vaults(&order.inputs),
            output_vaults = ?format_order_vaults(&order.outputs),
            "order considered for take-orders pair"
        );
    }

    let omitted = orders.len().saturating_sub(MAX_INFO_ORDER_INVENTORY_LOGS);
    if omitted > 0 {
        info!(
            chain_id,
            sell_token = %sell_token,
            buy_token = %buy_token,
            logged_order_count = MAX_INFO_ORDER_INVENTORY_LOGS,
            omitted_order_count = omitted,
            "omitted additional order inventory logs"
        );
    }
}
```
Per-order inventory logging is too expensive at info on a hot path.
This path can emit up to 50 detailed inventory events per request and formats vault-level values eagerly, which can materially increase latency and log ingestion cost under load. Move detailed inventory logs to debug and guard the function by log-level enablement.
💡 Suggested change

```diff
-use tracing::{info, warn};
+use tracing::{debug, info, warn, Level};

 fn log_order_inventory_for_pair(
     chain_id: u32,
     sell_token: Address,
     buy_token: Address,
     orders: &[RaindexOrder],
 ) {
+    if !tracing::enabled!(Level::DEBUG) {
+        return;
+    }
+
     for (order_index, order) in orders
         .iter()
         .take(MAX_INFO_ORDER_INVENTORY_LOGS)
         .enumerate()
     {
-        info!(
+        debug!(
             chain_id,
             sell_token = %sell_token,
             buy_token = %buy_token,
             order_index,
             order_hash = %order.order_hash(),
             orderbook = %order.orderbook(),
             input_vaults = ?format_order_vaults(&order.inputs),
             output_vaults = ?format_order_vaults(&order.outputs),
             "order considered for take-orders pair"
         );
     }

     let omitted = orders.len().saturating_sub(MAX_INFO_ORDER_INVENTORY_LOGS);
     if omitted > 0 {
-        info!(
+        debug!(
             chain_id,
             sell_token = %sell_token,
             buy_token = %buy_token,
             logged_order_count = MAX_INFO_ORDER_INVENTORY_LOGS,
             omitted_order_count = omitted,
             "omitted additional order inventory logs"
         );
     }
 }
```

Also applies to: 1469
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@crates/common/src/raindex_client/orders.rs` around lines 211 - 246, The
per-order inventory logs in log_order_inventory_for_pair are too heavy for info
level; change the per-order info! calls to debug! and wrap the per-order loop
(including calls to format_order_vaults(&order.inputs) and
format_order_vaults(&order.outputs)) in a log-level guard so formatting is only
done when debug is enabled (e.g. if
tracing::level_enabled!(tracing::Level::DEBUG) { ... }). Leave the aggregate
omitted_order_count summary as-is (or keep at info) but ensure any expensive
formatting only runs inside the debug guard; update the code in
log_order_inventory_for_pair to use debug! and the tracing level check.
| macro_rules! emit_selected_leg { | ||
| ($level:ident, $leg_index:expr, $leg:expr, $orderbook:expr, $block_number:expr, $message:expr) => {{ | ||
| let order = &$leg.candidate.order; | ||
| let input_index = $leg.candidate.input_io_index as usize; | ||
| let output_index = $leg.candidate.output_io_index as usize; | ||
| let input = &order.validInputs[input_index]; | ||
| let output = &order.validOutputs[output_index]; | ||
|
|
||
| $level!( | ||
| orderbook = %$orderbook, | ||
| block_number = $block_number, | ||
| leg_index = $leg_index, | ||
| order_hash = %order_hash_for_leg($leg), | ||
| input_io_index = $leg.candidate.input_io_index, | ||
| output_io_index = $leg.candidate.output_io_index, | ||
| input_token = %input.token, | ||
| output_token = %output.token, | ||
| input_vault_id = %input.vaultId, | ||
| output_vault_id = %output.vaultId, | ||
| selected_input = %format_float_for_log($leg.input), | ||
| selected_output = %format_float_for_log($leg.output), | ||
| ratio = %format_float_for_log($leg.candidate.ratio), | ||
| event = $message, | ||
| "take-order leg" | ||
| ); | ||
| }}; | ||
| } |
Defensive indexing in the leg-logging macro.
emit_selected_leg! indexes order.validInputs[input_index] and order.validOutputs[output_index] directly. In practice these indices come from a TakeOrderCandidate that already passed indices_in_bounds, but a panic in logging code (e.g. if a future caller constructs a SelectedTakeOrderLeg from a different source, or if the underlying order is mutated upstream) is an unpleasant failure mode for an instrumentation path. Prefer .get(...) with a sentinel like the candidate decision logger already does.
🛡️ Proposed fix

```diff
-        let input_index = $leg.candidate.input_io_index as usize;
-        let output_index = $leg.candidate.output_io_index as usize;
-        let input = &order.validInputs[input_index];
-        let output = &order.validOutputs[output_index];
+        let input_index = $leg.candidate.input_io_index as usize;
+        let output_index = $leg.candidate.output_io_index as usize;
+        let input = order.validInputs.get(input_index);
+        let output = order.validOutputs.get(output_index);
@@
-            input_token = %input.token,
-            output_token = %output.token,
-            input_vault_id = %input.vaultId,
-            output_vault_id = %output.vaultId,
+            input_token = %input.map(|io| io.token).unwrap_or(alloy::primitives::Address::ZERO),
+            output_token = %output.map(|io| io.token).unwrap_or(alloy::primitives::Address::ZERO),
+            input_vault_id = %input.map(|io| io.vaultId).unwrap_or_default(),
+            output_vault_id = %output.map(|io| io.vaultId).unwrap_or_default(),
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@crates/common/src/raindex_client/take_orders/mod.rs` around lines 34 - 60,
The macro emit_selected_leg! currently indexes order.validInputs[input_index]
and order.validOutputs[output_index] unsafely; change those to use
.get(input_index) and .get(output_index) and fall back to sentinel/default
values when None so the logging path never panics. Specifically, within
emit_selected_leg! replace direct access to input and output with safe lookups
and set input_token/output_token to a sentinel like "<missing>" (or the existing
candidate decision logger sentinel), input_vault_id/output_vault_id to a safe
default, and selected_input/selected_output to a formatted sentinel when the
entry is absent, while keeping other fields (order_hash_for_leg,
format_float_for_log($leg.input) etc.) intact so logs remain informative but
non-panicking.
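The non-panicking lookup the comment recommends reduces to `.get(...)` plus a sentinel, so a bad index degrades a log field rather than panicking the instrumentation path. In this sketch, `Io` and the `"<missing>"` sentinel are illustrative stand-ins for the order's validInputs/validOutputs entries.

```rust
struct Io {
    token: String,
}

// Safe lookup for logging: never indexes out of bounds.
fn token_for_log(ios: &[Io], index: usize) -> String {
    ios.get(index)
        .map(|io| io.token.clone())
        .unwrap_or_else(|| "<missing>".to_string())
}

fn main() {
    let ios = vec![Io { token: "0xabc".to_string() }];
    assert_eq!(token_for_log(&ios, 0), "0xabc");
    // out-of-bounds index yields the sentinel instead of a panic
    assert_eq!(token_for_log(&ios, 5), "<missing>");
}
```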
```rust
        } else if logged_rejections < MAX_INFO_PRICE_CAP_REJECTION_LOGS {
            info!(
                orderbook = %candidate.orderbook,
                order_hash = %order_hash(&candidate),
                input_io_index = candidate.input_io_index,
                output_io_index = candidate.output_io_index,
                max_output = %format_float(candidate.max_output),
                ratio = %format_float(candidate.ratio),
                price_cap = %format_float(price_cap),
                decision = "rejected",
                reason = "above_price_cap",
                "take-order candidate decision"
            );
            logged_rejections += 1;
        } else {
            omitted_rejections += 1;
        }
    }

    if omitted_rejections > 0 {
        info!(
            logged_rejections,
            omitted_rejections, "omitted additional price-cap candidate rejection logs"
        );
```
Price-cap rejection events should not be emitted at info for each candidate.
This can generate up to 100 structured events per simulation call and adds avoidable CPU/logging overhead in a frequently executed path. Emit these as debug (with an enablement guard) and keep only aggregate counts at info.
💡 Suggested change

```diff
-use tracing::info;
+use tracing::{debug, info, Level};

 fn filter_candidates_by_price_cap(
     candidates: Vec<TakeOrderCandidate>,
     price_cap: Float,
 ) -> Result<Vec<TakeOrderCandidate>, RaindexError> {
+    let log_rejections = tracing::enabled!(Level::DEBUG);
     let mut filtered = Vec::new();
     let mut logged_rejections = 0usize;
     let mut omitted_rejections = 0usize;

     for candidate in candidates {
         let ratio = candidate.ratio;
         if ratio.lte(price_cap)? {
             filtered.push(candidate);
-        } else if logged_rejections < MAX_INFO_PRICE_CAP_REJECTION_LOGS {
-            info!(
+        } else if log_rejections && logged_rejections < MAX_INFO_PRICE_CAP_REJECTION_LOGS {
+            debug!(
                 orderbook = %candidate.orderbook,
                 order_hash = %order_hash(&candidate),
                 input_io_index = candidate.input_io_index,
                 output_io_index = candidate.output_io_index,
                 max_output = %format_float(candidate.max_output),
                 ratio = %format_float(candidate.ratio),
                 price_cap = %format_float(price_cap),
                 decision = "rejected",
                 reason = "above_price_cap",
                 "take-order candidate decision"
             );
             logged_rejections += 1;
         } else {
             omitted_rejections += 1;
         }
     }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@crates/common/src/take_orders/simulation.rs` around lines 170 - 193, The
per-candidate price-cap rejection events currently emitted with info! (inside
the branch that increments logged_rejections) are too noisy; change that info!
to debug! and guard it with a debug-level enablement check (e.g.
tracing::level_enabled!(tracing::Level::DEBUG) or equivalent) so the structured
fields (orderbook, order_hash(&candidate), input_io_index, output_io_index,
max_output, ratio, price_cap, decision, reason) are only evaluated when debug
logging is enabled; keep the aggregate info! at the end that logs
logged_rejections and omitted_rejections unchanged and continue to increment
logged_rejections/omitted_rejections using the existing
MAX_INFO_PRICE_CAP_REJECTION_LOGS logic.
Force-pushed a1a5dc6 to 866d259, then 866d259 to 191a997.

Motivation
Production take-orders failures are hard to diagnose because the existing flow does not expose enough structured context to explain where liquidity disappeared, which orders/vaults were considered, why candidates were rejected, or which order failed preflight simulation.
Solution
Add bounded structured tracing across the take-orders path:
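The bounded-logging shape this change applies throughout can be reduced to its core: log up to N events in full, count the rest, and emit one aggregate summary. The cap and names below are illustrative, not the code's actual constants.

```rust
// Split a total event count into (logged, omitted) under a per-request cap.
fn split_logged_omitted(total: usize, max_logs: usize) -> (usize, usize) {
    let logged = total.min(max_logs);
    let omitted = total.saturating_sub(max_logs);
    (logged, omitted)
}

fn main() {
    // 150 rejections under a cap of 100: 100 logged in full, 50 summarized
    assert_eq!(split_logged_omitted(150, 100), (100, 50));
    // under the cap: everything is logged, nothing omitted
    assert_eq!(split_logged_omitted(3, 100), (3, 0));
}
```

The `saturating_sub` keeps the summary branch safe when the total is below the cap, which is why the omitted-count summaries in the diffs above only fire when `omitted > 0`.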
Checks
By submitting this for review, I'm confirming I've done the following:
Verified locally:
- `cargo fmt --all -- --check`
- `nix develop -c cargo check -p rain_orderbook_common`
- `nix develop -c cargo clippy -p rain_orderbook_common --all-targets -- -D warnings`

Summary by CodeRabbit