
Remove redundant namespace directives and consolidate error handling#53

Open
lrleon wants to merge 38 commits into master from ah-exeptions

Conversation

@lrleon
Owner

@lrleon lrleon commented Mar 30, 2026

Summary by CodeRabbit

  • New Features

    • New parsing error macros and an explicit length-only LIS computation/result type.
  • Bug Fixes

    • Empty-graph handling now returns empty results instead of throwing.
    • Improved file-open error reporting and exception stream handling.
    • Transshipment now cleans up temporary nodes on return.
  • Documentation

    • Clarified and standardized algorithm descriptions and complexity notes.
  • Refactor

    • Modernized string formatting, tightened validation/overflow checks, and internal data-structure improvements.
  • Tests

    • Updated bipartite empty-graph tests and added CI policy/tokenization tests.

lrleon added 26 commits January 28, 2026 09:44
Multiple redundant 'using namespace Aleph' declarations are removed from
header files that are already inside the Aleph namespace. This avoids
name conflicts and improves code clarity.

Additionally, in parse_utils.H and stat_utils.H the standard exceptions
are replaced with Aleph-specific error macros (ah_parse_error,
ah_invalid_argument_if, etc.) for more consistent error handling
throughout the library.
Copilot AI review requested due to automatic review settings March 30, 2026 21:04
@coderabbitai
Contributor

coderabbitai bot commented Mar 30, 2026

Walkthrough

This PR makes widespread refactors across Aleph: adds centralized error macros and stream-type-aware exception building, restructures namespaces and result types in multiple algorithm headers, tightens numeric/overflow checks and container usage, adjusts min-cost flow and bipartite logic, improves CI header-change detection/tokenization, and applies many documentation/formatting edits.

Changes

Cohort / File(s) Summary
Error handling & exceptions
parse_utils.H, stat_utils.H, ah-errors.H
Introduced ah_parse_error* macros and ah_invalid_argument_if usage; replaced many raw throws with macro-based stream composition; ExceptionBuilder now detects ostringstream vs stringstream via dynamic_cast and errors on unknown stream types.
CI policy, tokenization & tests
scripts/ci_policy_checks.rb, scripts/test_ci_policy_checks.rb, Tests/*
ci_policy_checks now parses git diff status records, tokenizes C/C++ to detect code vs doc-only header changes, gates regression runs to pull_request events, and added unit tests for tokenization/change detection; many test files received trailing comments to satisfy CI.
Min-cost flow & assignment
tpl_mincost.H, tpl_mincost_test.cc
Replaced Dijkstra frontier PQ with DynBinHeap; removed unused from param in reduced_cost; solve_transshipment now removes temporary super-nodes before returning; solve_assignment uses DynMapTree reverse lookup for tasks; several parameter pointer/spacing normalizations.
Parsing & statistics utilities
parse_utils.H, stat_utils.H
Switched to macro-driven parse errors, replaced many throw sites with macros; stat_utils replaced std containers/algorithms with DynMapTree, Array + in_place_sort, added many validation checks (empty, size, zero-variance) and simplified control flow in variance/covariance.
Combinatorics & chronos utilities
ah-comb.H, ah-chronos-utils.H, modular_combinatorics.H
Reorganized internal helper namespaces and includes; duration_to_string uses std::format and fully-qualified chrono units; combinatorics docs clarified (no code changes).
Knapsack & Subset Sum
Knapsack.H, Subset_Sum.H
Refactored implementations (checked conversions, helper flattening); preserved public APIs but added overflow/value-safety checks, updated MITM enumeration to validate size/overflow and reserve appropriately.
Hungarian, LIS, Matrix Chain, LCA, Tree DP
Hungarian.H, LIS.H, Matrix_Chain.H, LCA.H, Tree_DP.H
Reworked public/result types and flattened internal namespaces (e.g., Hungarian_Result/Hungarian_Assignment, LIS_Result); moved helper types to top-level Aleph namespaces; preserved public signatures but changed internal layout and some constructors/behaviors (Hungarian now computes in constructor, handles negation overflow).
Graph algorithms & bipartite
tpl_bipartite.H, Blossom.H, Blossom_Weighted.H, blossom_weighted_mwmatching.H
Added early-exit handling for zero-node graphs in bipartite flows/matchings; updated Blossom docs and refined Doxygen wording; no public API signature changes.
Documentation-only edits
K_Shortest_Paths.H, Graph_Coloring.H, HLD.H, Min_Cost_Matching.H, Min_Mean_Cycle.H, tpl_multi_polynomial.H, tpl_polynomial.H, Matrix_Chain.H
Widespread Doxygen/LaTeX spacing and wording cleanup across many algorithm headers; no functional changes to code or public signatures.
Header/result reshaping & helpers
Matrix_Chain.H, Subset_Sum.H, tpl_multi_polynomial.H
Moved several result types to top-level Aleph namespace and flattened/detail helper functions; reconstruction and return shapes preserved.

Sequence Diagram(s)

(omitted)

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check — ✅ Passed. Check skipped: CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed. The title clearly and concisely summarizes the main changes: removing redundant namespace directives and consolidating error handling across multiple header files. It directly reflects the primary refactoring work evident in the changeset.
  • Header Docstring Coverage — ✅ Passed. All in-scope public declarations in changed headers have Doxygen documentation with comprehensive @brief, @param, @return, @tparam, @throw, and @details blocks, achieving ~100% coverage and exceeding the 80% threshold.



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9c424ec8ed

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

… error

The comma operator in ExceptionBuilder is updated to handle std::ostringstream and std::stringstream dynamically, throwing the appropriate exception according to the stream type. All error macros (ah_range_error, ah_runtime_error, etc.) are changed to use std::ostringstream instead of std::stringstream, guaranteeing compatibility with the new operator.

Additional improvements:
- Knapsack.H: overflow check added to item expansion
- tpl_mincost.H: std::priority_queue replaced with DynBinHeap, and super nodes are cleaned up
- stat_utils.H: statistical functions optimized using Aleph data structures
- scripts/ci_policy_checks.rb: improved detection of header changes for tests
- parse_utils.H: parse-error macros updated to std::ostringstream
Copilot AI review requested due to automatic review settings March 31, 2026 17:22
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (3)
ah-errors.H (1)

116-126: RTTI dependency and fallback behavior worth noting.

The dynamic_cast approach requires RTTI to be enabled at compile time. If RTTI is ever disabled (-fno-rtti), this will fail to compile or behave unexpectedly. This is likely fine for Aleph-w's use case, but worth documenting.

Additionally, the fallback at line 124 will throw an exception with "Unknown stream type" if someone accidentally passes a non-ostringstream/stringstream (e.g., std::cerr). Since all macros internally control the stream type, this is unlikely to trigger in practice.

The unreachable return 0; on line 125 is harmless—it satisfies the compiler's return type requirement.

💡 Optional: Add a brief inline comment documenting RTTI requirement
     int operator,(const std::ostream &s)
     {
+      // Note: Requires RTTI for dynamic_cast
       if (const auto * oss = dynamic_cast<const std::ostringstream *>(&s))
         throw E(oss->str());
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ah-errors.H` around lines 116 - 126, The operator,(const std::ostream &s)
uses dynamic_cast to detect std::ostringstream/std::stringstream which requires
RTTI; add a brief inline comment inside the operator (near the dynamic_casts and
the fallback throw) stating the RTTI dependency and that builds with -fno-rtti
will not support this behavior, and also document the fallback throwing "Unknown
stream type" when non-string streams are passed (e.g. std::cerr) is intentional;
finally, remove or mark the unreachable `return 0;` (or annotate it with a
comment like "unreachable" to satisfy compilers) so intent is clear.
tpl_mincost.H (1)

325-326: Use not instead of ! per coding guidelines.

The project style prefers and, or, not for logical operations.

♻️ Proposed fix
-      if (!changed)
+      if (not changed)
         break;

As per coding guidelines: "Use and, or, not for logical operations (instead of &&, ||, !)".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tpl_mincost.H` around lines 325 - 326, Replace the C-style negation in the
conditional inside tpl_mincost.H: locate the if statement that checks the
"changed" flag (the code containing "if (!changed) break;") and change the
condition to use the project's Python-style logical operator by using "not
changed" instead of "!changed" so the statement reads logically the same but
follows the coding guideline.
scripts/ci_policy_checks.rb (1)

69-91: Nice to have: split the lexer into smaller token consumers.

read_raw_string_literal and cpp_token_stream are already over the RuboCop complexity thresholds. Extracting helpers like consume_comment, consume_literal, consume_identifier, and consume_number would make future edge-case fixes much safer.

Also applies to: 94-150

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/ci_policy_checks.rb` around lines 69 - 91, The lexer code is too
complex—split responsibilities by extracting smaller token-consumer helpers:
move raw string parsing from read_raw_string_literal (and the large
cpp_token_stream) into focused functions like consume_comment, consume_literal
(for raw and quoted strings using read_raw_string_literal as a subroutine),
consume_identifier, and consume_number; update cpp_token_stream to call these
helpers (delegating the scanning/consumption logic and returning token + new
index) so each helper handles its own validation/terminator logic, reducing
cyclomatic complexity and making edge-case fixes easier.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/ci_policy_checks.rb`:
- Around line 94-157: Add regression tests covering the new token-diff gate by
exercising the cpp_token_stream and header_has_code_changes? behavior: create
fixtures and tests for (1) doc-only edits in an existing header (should not
count as code change), (2) newly added headers (ensure added header with only
docs is treated correctly), and (3) headers with code vs non-code changes (e.g.
whitespace/comments vs real token changes) to assert header_has_code_changes?
returns the expected boolean; wire these into the CI test matrix used by
scripts/ci_policy_checks.rb so future changes to cpp_token_stream are caught.
- Around line 153-157: The header_has_code_changes? function misclassifies pure
renames as code changes because it only looks up the new path on origin/base and
falls back to '' when the file was renamed; update the change detection to be
rename-aware by either (a) resolving the old path for renamed files using a
rename-aware git diff (e.g., use git --name-status --diff-filter=R to map
new->old and pass the old path into git_show_optional when available) or (b)
exclude 'R' (renames) from changed_files so pure renames don’t trigger
header_has_code_changes?; also add regression tests for header token comparison
covering identical files, token-different files, whitespace/comment-only
changes, and rename edge-cases to prevent future regressions (target tests at
the Cpp tokenization/comparison entrypoint used by header_has_code_changes?).

In `@tpl_mincost.H`:
- Around line 549-551: The function currently calls
net.remove_node(super_source) and net.remove_node(super_sink), permanently
altering the caller's graph; either document this side-effect in the function's
Doxygen (add a `@warning/`@note that the function temporarily adds
super_source/super_sink and that arc flow values and the input network may be
modified) or change the implementation to operate on a local copy of the network
(e.g., clone the input net at start and run all modifications on the clone so
the original net reference is preserved), ensuring to reference the existing
identifiers net, super_source and super_sink when updating docs or when
switching to a copy-based workflow.
- Around line 177-184: The heap comparator is inverted: DynBinHeap<PQ_Entry,
std::greater<PQ_Entry>> makes getMin() return the largest distance, breaking
Dijkstra; change DynBinHeap instantiation to use the default/min comparator
(i.e., remove std::greater<PQ_Entry> or replace with the library's min
comparator) so that pq.insert({0, source}), pq.getMin(), and the loop using
pq.is_empty() operate as a true min-heap for PQ_Entry.

---

Nitpick comments:
In `@ah-errors.H`:
- Around line 116-126: The operator,(const std::ostream &s) uses dynamic_cast to
detect std::ostringstream/std::stringstream which requires RTTI; add a brief
inline comment inside the operator (near the dynamic_casts and the fallback
throw) stating the RTTI dependency and that builds with -fno-rtti will not
support this behavior, and also document the fallback throwing "Unknown stream
type" when non-string streams are passed (e.g. std::cerr) is intentional;
finally, remove or mark the unreachable `return 0;` (or annotate it with a
comment like "unreachable" to satisfy compilers) so intent is clear.

In `@scripts/ci_policy_checks.rb`:
- Around line 69-91: The lexer code is too complex—split responsibilities by
extracting smaller token-consumer helpers: move raw string parsing from
read_raw_string_literal (and the large cpp_token_stream) into focused functions
like consume_comment, consume_literal (for raw and quoted strings using
read_raw_string_literal as a subroutine), consume_identifier, and
consume_number; update cpp_token_stream to call these helpers (delegating the
scanning/consumption logic and returning token + new index) so each helper
handles its own validation/terminator logic, reducing cyclomatic complexity and
making edge-case fixes easier.

In `@tpl_mincost.H`:
- Around line 325-326: Replace the C-style negation in the conditional inside
tpl_mincost.H: locate the if statement that checks the "changed" flag (the code
containing "if (!changed) break;") and change the condition to use the project's
Python-style logical operator by using "not changed" instead of "!changed" so
the statement reads logically the same but follows the coding guideline.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: fb4452ee-c9b5-45bb-80d7-dd126752cb4f

📥 Commits

Reviewing files that changed from the base of the PR and between c8d0d5f and 2ae74f4.

📒 Files selected for processing (7)
  • Knapsack.H
  • ah-comb.H
  • ah-errors.H
  • parse_utils.H
  • scripts/ci_policy_checks.rb
  • stat_utils.H
  • tpl_mincost.H
🚧 Files skipped from review as they are similar to previous changes (4)
  • parse_utils.H
  • stat_utils.H
  • ah-comb.H
  • Knapsack.H

Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 29 out of 29 changed files in this pull request and generated 3 comments.

Copilot AI review requested due to automatic review settings March 31, 2026 18:35
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 32 out of 41 changed files in this pull request and generated 8 comments.

Comment on lines +1 to +6
# State Search Framework — Mejoras y Plan

> Revisión del 19 de marzo de 2026.
> Alcance: los 9 headers del framework (`state_search_common.H`, `Backtracking.H`,
> `Branch_And_Bound.H`, `Negamax.H`, `Alpha_Beta.H`, `State_Search_IDA_Star.H`,
> `Transposition_Table.H`, `search_move_ordering.H`, `State_Search.H`), los 3 test

Copilot AI Mar 31, 2026


This document is written primarily in Spanish, while the rest of the project documentation appears to be English-first. To keep documentation consistent and searchable for the wider audience, translate this file to English (or rename it to a clearly labeled *.es.md and add an English counterpart).

Copilot uses AI. Check for mistakes.
Comment on lines +1 to +6
# Aleph-w — Auditoría general y plan de mejoras

> Revisión del 19 de marzo de 2026.
> Alcance: toda la librería (~330 headers, ~280 tests, ~280 ejemplos, build system, documentación).

---

Copilot AI Mar 31, 2026


This document is written primarily in Spanish. The repo’s docs/ content is otherwise predominantly English, so this is likely to be inconsistent for readers and maintainers. Please translate to English (or rename to *.es.md and add an English version).

Copilot AI review requested due to automatic review settings March 31, 2026 19:08
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 33 out of 42 changed files in this pull request and generated 5 comments.

Comments suppressed due to low confidence (1)

tpl_mincost.H:354

  • This uses std::vector for node_info, but other network algorithms store per-node cookie info in an Array<...>::create(net.vsize()) (e.g., tpl_maxflow.H:474-478). Using Array here avoids an STL dependency in a public header and matches the established pattern while still keeping element addresses stable.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
tpl_mincost.H (4)

261-357: ⚠️ Potential issue | 🟠 Major

Must fix: Bellman-Ford initialization never reports negative-cost cycles.

Lines 261-270 promise negative-cycle detection, but Line 334 hardcodes success and Line 357 proceeds as if the potentials are valid. A reachable negative-cost residual cycle will therefore be accepted as a legal SSP start state, which can produce a bogus minimum cost.

🐛 Suggested fix
@@
   for (size_t i = 0; i < n - 1; ++i)
     {
       bool changed = false;
@@
       if (not changed)
         break;
     }
+
+  for (Arc_Iterator<Net> it(net); it.has_curr(); it.next_ne())
+    {
+      auto arc = it.get_curr();
+      auto src = net.get_src_node(arc);
+      auto tgt = net.get_tgt_node(arc);
+
+      if (arc->cap > arc->flow)
+        {
+          auto d = ssp_info<Net>(src).potential;
+          if (d < INF and d + arc->cost < ssp_info<Net>(tgt).potential)
+            return false;
+        }
+
+      if (arc->flow > Flow_Type{0})
+        {
+          auto d = ssp_info<Net>(tgt).potential;
+          if (d < INF and d - arc->cost < ssp_info<Net>(src).potential)
+            return false;
+        }
+    }
@@
-  ssp_init_potentials(net, source);
+  ah_domain_error_if(not ssp_init_potentials(net, source))
+    << "Residual network contains a negative-cost cycle";

Based on learnings: "Check for negative cycle handling in shortest path algorithms".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tpl_mincost.H` around lines 261 - 357, ssp_init_potentials currently always
returns true and never detects or reports negative-cost residual cycles; modify
ssp_init_potentials to detect negative cycles by doing one extra relaxation pass
after the n-1 iterations and return false if any potential can still decrease
(indicating a reachable negative cycle), and update successive_shortest_paths to
check the boolean result of ssp_init_potentials(net, source) and handle failure
(e.g., propagate an error via ah_domain_error or return an error pair) instead
of proceeding as if potentials are valid; reference ssp_init_potentials and
successive_shortest_paths to locate where to add the extra relaxation check and
the result handling.

523-551: ⚠️ Potential issue | 🟠 Major

Must fix: solve_transshipment() loses the basic exception guarantee.

After Line 523 mutates net, any failure in the second insert_node(), one of the insert_arc() calls, or successive_shortest_paths() skips Lines 549-551 and leaves temporary super nodes/arcs in the caller graph. Please put that cleanup behind an RAII guard so every exit path restores the original topology.

As per coding guidelines: "All resources (memory, file handles, locks) must follow RAII principles" and "Use RAII to ensure cleanup on exception".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tpl_mincost.H` around lines 523 - 551, The function mutates net by inserting
super_source/super_sink and arcs but removes them only at the end, so exceptions
(from net.insert_node, net.insert_arc, or successive_shortest_paths) leave the
caller graph corrupted; fix by introducing an RAII scope-guard class (e.g.,
SuperNodeGuard) constructed after creating super_source and super_sink that
holds Net& and the two Node* and in its destructor calls
net.remove_node(super_source) and net.remove_node(super_sink) (and/or removes
any arcs it created), and provide a release() method invoked just before the
current explicit removals to prevent double-removal; update solve_transshipment
to create this guard immediately after insert_node() calls and let it
automatically clean up on all exit paths, leaving the final explicit
net.remove_node calls replaced by guard.release().

741-791: ⚠️ Potential issue | 🔴 Critical

Must fix: validate transportation inputs before the build loop.

Lines 741-745 currently mark any one-sided-empty instance as feasible, so {supplies = {1}, demands = {}} is reported as solvable. Separately, Line 791 indexes costs[i][j] without first proving that costs.size() == m and every row has n entries, which is UB on malformed matrices. Please reject these cases up front with Aleph invalid-argument/domain-error macros.

As per coding guidelines: "Validate external inputs before processing" and "Use ah_invalid_argument_if(condition) macro for invalid arguments".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tpl_mincost.H` around lines 741 - 791, The code currently treats m==0 or n==0
as feasible and assumes costs is well-formed; instead, validate inputs up front
and reject malformed cases using ah_invalid_argument_if. Specifically, in the
function (tpl_mincost / Net build block) replace the early "if (m == 0 or n ==
0) { result.feasible = true; ... }" with input checks that call
ah_invalid_argument_if for: m == 0 || n == 0 (reject one-sided-empty instances),
supplies.size() != m, demands.size() != n, costs.size() != m, and for any i,
costs[i].size() != n; keep all these validations before creating the Net, Nodes,
or indexing costs[i][j]. Ensure each ah_invalid_argument_if has a clear
condition so malformed matrices or mismatched supply/demand lengths are rejected
immediately.

503-520: ⚠️ Potential issue | 🟡 Minor

Should fix: avoid exact balance checks for floating-point quantities.

Lines 516, 547, 755, and 798 compare accumulated quantities with exact ==/!=, but the public examples in this header instantiate these APIs with double. Inputs like 0.1 + 0.2 versus 0.3 can be rejected as infeasible even though the model is balanced. Keep exact comparisons for integral instantiations, but use a tolerance when Flow_Type/Cost_Type is floating-point.

As per coding guidelines: "Consider numerical stability for floating-point arithmetic".

Also applies to: 545-547, 747-759, 798-799

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tpl_mincost.H` around lines 503 - 520, The exact equality check between
accumulated Flow_Type quantities (e.g., Flow_Type total_supply / total_demand
and the if (total_supply != total_demand) branch around get_demand and
result.feasible) must be replaced with a tolerance-based comparison when
Flow_Type is floating point: detect floating types via
std::is_floating_point<Flow_Type>::value and compute a small tolerance (for
example based on std::numeric_limits<Flow_Type>::epsilon() scaled by max(1,
abs(total_supply), abs(total_demand))) then treat the model balanced when
std::abs(total_supply - total_demand) <= tol; keep the exact ==/!= behavior when
Flow_Type is an integral type. Apply the same pattern to any similar comparisons
involving Cost_Type or other accumulated floating quantities mentioned in the
review (the other equality checks referenced) so all floating-point
accumulations use tolerant comparisons while integral instantiations remain
exact.
🧹 Nitpick comments (3)
tpl_bipartite.H (2)

111-132: Variable p shadows outer declaration.

At line 114, the variable p shadows the p declared at line 106. While not a functional bug (the outer p is no longer needed), shadowing can cause confusion during maintenance.

♻️ Rename inner variable for clarity
     while (true)
       if (not red.is_empty())  // any red node with unvisited arcs?
         {
-          typename GT::Node *p = red.get();
-          for (Node_Arc_Iterator<GT, SA> it(p); it.has_curr(); it.next_ne())
+          typename GT::Node *curr = red.get();
+          for (Node_Arc_Iterator<GT, SA> it(curr); it.has_curr(); it.next_ne())
             {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tpl_bipartite.H` around lines 111 - 132, The inner loop declares "typename
GT::Node *p = red.get();" which shadows the outer "p" declared earlier; rename
the inner variable (e.g., to curr or p_red) and update all uses inside the
for-loop (the Node_Arc_Iterator<GT, SA> constructor and later references to p)
so the outer "p" name is preserved and no shadowing occurs, keeping all logic
identical.

67-77: Consider removing static from template functions.

The static keyword on template functions is redundant—templates already have implicit weak linkage. Using inline or omitting the specifier entirely is more idiomatic in modern C++.

♻️ Optional: Remove redundant `static` specifier
-template <class GT>
-static long &color(typename GT::Node *p)
+template <class GT>
+inline long &color(typename GT::Node *p)
 {
   return NODE_COUNTER(p);
 }

-template <class GT>
-static long &color(typename GT::Arc *a)
+template <class GT>
+inline long &color(typename GT::Arc *a)
 {
   return ARC_COUNTER(a);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tpl_bipartite.H` around lines 67 - 77, The two template functions 'template
<class GT> static long &color(typename GT::Node *p)' and 'template <class GT>
static long &color(typename GT::Arc *a)' should drop the redundant 'static'
specifier (or be marked 'inline' if you prefer) because template functions
already have appropriate linkage; locate the color(typename GT::Node*) and
color(typename GT::Arc*) definitions and remove the 'static' keyword from each
declaration/definition so they follow modern C++ idioms and avoid unnecessary
storage-class specifiers.
Matrix_Chain.H (1)

138-156: Duplicate zero-matrix initialization can be extracted.

The dp and split initialization blocks are structurally identical. Extracting a small helper (e.g., make_zero_matrix(n)) reduces duplication and future drift.

As per coding guidelines: "Refactor duplicated code into reusable functions or templates."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Matrix_Chain.H` around lines 138 - 156, The dp and split zero-matrix
initializations are duplicated; create a helper function (e.g.,
make_zero_matrix(size_t n) returning Array<Array<size_t>>) and replace both
blocks with calls to that helper. Implement make_zero_matrix to allocate outer
Array, reserve(n), loop i in [0,n) creating Array<size_t> row =
Array<size_t>::create(n), zero each entry (row(j)=0) and append(std::move(row));
then use dp = make_zero_matrix(n) and split = make_zero_matrix(n) in place of
the repeated code. Ensure helper is declared/defined where Array and size_t are
visible.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@AUDIT_ADVERSARIAL_BRANCH.md`:
- Around line 424-464: Summary: appendix entries use workstation-specific
absolute paths; make them repo-relative. Edit AUDIT_ADVERSARIAL_BRANCH.md and
replace the absolute prefixes on the listed entries (the header filenames like
state_search_common.H, Backtracking.H, Branch_And_Bound.H,
State_Search_IDA_Star.H, Negamax.H, Alpha_Beta.H, search_move_ordering.H,
Transposition_Table.H, State_Search.H and the Tests, Examples and docs
filenames) so they point to files relative to the repository root (i.e., drop
the /home/... workstation prefix), and update any CMakeLists references likewise
so all entries are portable and grep/copy-friendly.

In `@docs/aleph_library_improvements.md`:
- Around line 457-484: The fenced code block containing the phased plan
currently has no language label which triggers markdownlint MD040; update the
opening fence to include a language (e.g., add "text" or "markdown" after the
triple backticks) so the block is labeled and MD040 is resolved—locate the
fenced block shown (the multi-line phase list) and change ``` to ```text (or
```markdown).
- Around line 169-176: Update the document to stop treating raw "throw" sites as
acceptable: remove the carve-out that exempts thread_pool.H and instead list
gmpfrxx.h as third-party (leave), and mark parse_utils.H, stat_utils.H,
thread_pool.H, and ah-signal.H as remaining migration work; rephrase the
guidance to explicitly state "ALWAYS use Aleph exception macros from ah-errors.H
instead of raw throw statements" and ensure the paragraph replaces any wording
that implies raw throws are acceptable.

In `@docs/state_search_improvements.md`:
- Around line 300-324: The fenced code block containing the execution-order
diagram (the block starting with "Fase E (consolidación)" and ending with "I3   
← Independiente") is unlabeled and triggers MD040; fix it by adding a language
label (`text`) after the opening triple backticks so the fence becomes ```text,
ensuring the diagram rendering is unchanged and markdownlint no longer flags the
block.

In `@Matrix_Chain.H`:
- Around line 121-123: Update the `@throws` doc for the Matrix_Chain
constructor/function to reflect that dims is Array<size_t> and cannot be
negative: change the line that currently reads "if any dimension is zero or
negative" to state only "if any dimension is zero" (e.g., "@throws
ah_domain_error if any dimension is zero"), keeping the other `@throws` entries
(`ah_domain_error` for fewer than 2 entries and `ah_runtime_error` for overflow)
unchanged and referencing the `dims` parameter.
- Around line 186-202: The documentation for matrix_chain_min_cost is
inconsistent with its implementation: it claims "without reconstruction" but
actually calls matrix_chain_order which builds split/parenthesization; fix this
by updating the Doxygen for matrix_chain_min_cost to state that it delegates to
matrix_chain_order and returns the min_multiplications computed there (or
alternatively implement a dedicated cost-only DP path); reference the function
names matrix_chain_min_cost and matrix_chain_order and ensure the
throws/complexity notes remain accurate after the change.

In `@scripts/ci_policy_checks.rb`:
- Around line 36-44: The git-diff parsing currently only treats rename records
as two-path entries; update the conditional in the block that maps out.lines
(the mapping handling status = parts[0]) to also detect copy records by checking
status.start_with?('R', 'C') (or equivalent), and when either R or C is true set
{ status: status[0] (or status), old_path: parts[1], path: parts[2] } so copied
files use parts[2] as the destination path instead of parts[1].
- Around line 77-99: read_raw_string_literal currently only accepts raw literals
that start exactly with 'R"', but C++ allows optional encoding prefixes (u8, u,
U, L) before the R; update the function to detect and accept those prefixes so
the parsed token begins at start_idx even when the sequence is like "u8R\"",
"uR\"", "UR\"", or "LR\"". Concretely: before the existing check for 'R',
examine content[start_idx] and the next chars to see if they form an allowed
prefix (if content[start_idx] is 'u' allow an optional '8' then require 'R'; if
'U' or 'L' require 'R'; or if it's already 'R' proceed as before), compute the
index of the 'R' (r_index) and use r_index in place of start_idx when setting
delimiter_start/delimiter_end/terminator/literal_end so you return the full
slice content[start_idx...literal_end] and correct literal_end; ensure the
returned slice still begins at start_idx (including the prefix) and that
delimiter/terminator logic remains unchanged using the located 'R'.
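For reference, these are the raw-literal forms the tokenizer must accept: an optional encoding prefix (`u8`, `u`, `U`, `L`) may precede the `R`, while the delimiter and terminator handling stays the same once the `R` is located. A small C++ illustration:

```cpp
#include <cassert>
#include <string>

// Prefixed raw-string literals scripts/ci_policy_checks.rb must tokenize.
// Only the characters before 'R' vary; the lexical shape -- prefix, R,
// delimiter, parenthesized body -- is identical in every form.
inline std::string plain_raw()  { return R"(a "quoted" token)"; }
inline std::wstring wide_raw()  { return LR"(wide)"; }
inline std::u16string u16_raw() { return uR"(utf-16)"; }
inline std::u32string u32_raw() { return UR"(utf-32)"; }
// u8R"(...)" also exists; its character type changed in C++20, so it is
// omitted here to keep the sketch standard-version neutral.
```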

In `@scripts/test_ci_policy_checks.rb`:
- Around line 7-143: The test suite lacks coverage for new branches: add tests
exercising the structured changed_files parsing and the raw-string path handling
in cpp_token_stream in addition to existing header_has_code_changes? cases;
specifically add a test that supplies a changed_files-like structure (e.g.,
multiple entries/renames) to exercise any logic that reads structured change
metadata, and add a test calling cpp_token_stream with a C++ raw-string literal
(R"(...)") path/content to ensure raw-string tokens are preserved, while keeping
existing header_has_code_changes? scenarios for rename/no-change and
new-header-doc-only; locate and update tests around cpp_token_stream and
header_has_code_changes? to include these new cases so the new parser branches
are exercised.

---

Outside diff comments:
In `@tpl_mincost.H`:
- Around line 261-357: ssp_init_potentials currently always returns true and
never detects/report negative-cost residual cycles; modify ssp_init_potentials
to detect negative cycles by doing one extra relaxation pass after the n-1
iterations and return false if any potential can still decrease (indicating a
reachable negative cycle), and update successive_shortest_paths to check the
boolean result of ssp_init_potentials(net, source) and handle failure (e.g.,
propagate an error via ah_domain_error or return an error pair) instead of
proceeding as if potentials are valid; reference ssp_init_potentials and
successive_shortest_paths to locate where to add the extra relaxation check and
the result handling.
- Around line 523-551: The function mutates net by inserting
super_source/super_sink and arcs but removes them only at the end, so exceptions
(from net.insert_node, net.insert_arc, or successive_shortest_paths) leave the
caller graph corrupted; fix by introducing an RAII scope-guard class (e.g.,
SuperNodeGuard) constructed after creating super_source and super_sink that
holds Net& and the two Node* and in its destructor calls
net.remove_node(super_source) and net.remove_node(super_sink) (and/or removes
any arcs it created), and provide a release() method invoked just before the
current explicit removals to prevent double-removal; update solve_transshipment
to create this guard immediately after insert_node() calls and let it
automatically clean up on all exit paths, leaving the final explicit
net.remove_node calls replaced by guard.release().
- Around line 741-791: The code currently treats m==0 or n==0 as feasible and
assumes costs is well-formed; instead, validate inputs up front and reject
malformed cases using ah_invalid_argument_if. Specifically, in the function
(tpl_mincost / Net build block) replace the early "if (m == 0 or n == 0) {
result.feasible = true; ... }" with input checks that call
ah_invalid_argument_if for: m == 0 || n == 0 (reject one-sided-empty instances),
supplies.size() != m, demands.size() != n, costs.size() != m, and for any i,
costs[i].size() != n; keep all these validations before creating the Net, Nodes,
or indexing costs[i][j]. Ensure each ah_invalid_argument_if has a clear
condition so malformed matrices or mismatched supply/demand lengths are rejected
immediately.
- Around line 503-520: The exact equality check between accumulated Flow_Type
quantities (e.g., Flow_Type total_supply / total_demand and the if (total_supply
!= total_demand) branch around get_demand and result.feasible) must be replaced
with a tolerance-based comparison when Flow_Type is floating point: detect
floating types via std::is_floating_point<Flow_Type>::value and compute a small
tolerance (for example based on std::numeric_limits<Flow_Type>::epsilon() scaled
by max(1, abs(total_supply), abs(total_demand))) then treat the model balanced
when std::abs(total_supply - total_demand) <= tol; keep the exact ==/!= behavior
when Flow_Type is an integral type. Apply the same pattern to any similar
comparisons involving Cost_Type or other accumulated floating quantities
mentioned in the review (the other equality checks referenced) so all
floating-point accumulations use tolerant comparisons while integral
instantiations remain exact.
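The negative-cycle detection suggested for ssp_init_potentials can be sketched as an extra Bellman-Ford pass. This is an illustrative stand-alone function, not the library's code; potentials start at 0 as if relaxed from a super-source:

```cpp
#include <cassert>
#include <vector>

struct Edge { int from, to; long cost; };

// After the usual n-1 relaxation rounds, one extra pass over the arcs
// detects a reachable negative-cost cycle: if any potential can still
// decrease, the function returns false instead of reporting success.
bool relax_potentials(int n, const std::vector<Edge> &edges,
                      std::vector<long> &pot)
{
  for (int round = 0; round < n - 1; ++round)
    for (const Edge &e : edges)
      if (pot[e.from] + e.cost < pot[e.to])
        pot[e.to] = pot[e.from] + e.cost;

  for (const Edge &e : edges)        // the extra detection pass
    if (pot[e.from] + e.cost < pot[e.to])
      return false;                  // a potential can still decrease
  return true;
}
```

successive_shortest_paths would then check this boolean before trusting the potentials.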

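The RAII scope guard proposed for solve_transshipment can be sketched like this. A generic cleanup callback stands in for the real guard, which would hold `Net&` plus the two `Node*` and call `net.remove_node(...)` in its destructor:

```cpp
#include <cassert>
#include <functional>
#include <utility>

// Minimal sketch of the proposed SuperNodeGuard: cleanup runs on every
// exit path (including exceptions) unless release() was called first.
class SuperNodeGuard
{
  std::function<void()> cleanup_;
  bool active_ = true;

public:
  explicit SuperNodeGuard(std::function<void()> cleanup)
    : cleanup_(std::move(cleanup)) {}

  SuperNodeGuard(const SuperNodeGuard &) = delete;
  SuperNodeGuard &operator=(const SuperNodeGuard &) = delete;

  void release() noexcept { active_ = false; }  // before explicit removal

  ~SuperNodeGuard() { if (active_) cleanup_(); }
};
```

On the normal path the final explicit `net.remove_node` calls are replaced by `guard.release()`; on an exception path the destructor performs the cleanup.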
---
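The tolerance-based comparison recommended above for floating-point Flow_Type can be sketched as one `if constexpr` helper. The scale factor of 16 is an arbitrary safety margin chosen for illustration, not a library constant:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>
#include <type_traits>

// Exact comparison for integral Flow_Type; epsilon-scaled tolerance for
// floating point, with the tolerance scaled by the magnitude of the
// operands (never below 1).
template <class Flow_Type>
bool balanced(Flow_Type supply, Flow_Type demand)
{
  if constexpr (std::is_floating_point_v<Flow_Type>)
    {
      const Flow_Type scale = std::max({Flow_Type(1), std::abs(supply),
                                        std::abs(demand)});
      const Flow_Type tol =
        std::numeric_limits<Flow_Type>::epsilon() * 16 * scale;
      return std::abs(supply - demand) <= tol;
    }
  else
    return supply == demand;  // integral instantiations stay exact
}
```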

Nitpick comments:
In `@Matrix_Chain.H`:
- Around line 138-156: The dp and split zero-matrix initializations are
duplicated; create a helper function (e.g., make_zero_matrix(size_t n) returning
Array<Array<size_t>>) and replace both blocks with calls to that helper.
Implement make_zero_matrix to allocate outer Array, reserve(n), loop i in [0,n)
creating Array<size_t> row = Array<size_t>::create(n), zero each entry
(row(j)=0) and append(std::move(row)); then use dp = make_zero_matrix(n) and
split = make_zero_matrix(n) in place of the repeated code. Ensure helper is
declared/defined where Array and size_t are visible.

In `@tpl_bipartite.H`:
- Around line 111-132: The inner loop declares "typename GT::Node *p =
red.get();" which shadows the outer "p" declared earlier; rename the inner
variable (e.g., to curr or p_red) and update all uses inside the for-loop (the
Node_Arc_Iterator<GT, SA> constructor and later references to p) so the outer
"p" name is preserved and no shadowing occurs, keeping all logic identical.
- Around line 67-77: The two template functions 'template <class GT> static long
&color(typename GT::Node *p)' and 'template <class GT> static long
&color(typename GT::Arc *a)' should drop the redundant 'static' specifier (or be
marked 'inline' if you prefer) because template functions already have
appropriate linkage; locate the color(typename GT::Node*) and color(typename
GT::Arc*) definitions and remove the 'static' keyword from each
declaration/definition so they follow modern C++ idioms and avoid unnecessary
storage-class specifiers.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 46de05ce-af37-4b44-b06c-6dea838e4b5c

📥 Commits

Reviewing files that changed from the base of the PR and between 2ae74f4 and 3f72e94.

📒 Files selected for processing (35)
  • AUDIT_ADVERSARIAL_BRANCH.md
  • Blossom.H
  • Blossom_Weighted.H
  • Graph_Coloring.H
  • HLD.H
  • Hungarian.H
  • K_Shortest_Paths.H
  • LCA.H
  • Matrix_Chain.H
  • Min_Cost_Matching.H
  • Min_Mean_Cycle.H
  • Planarity_Test.H
  • Subset_Sum.H
  • Tests/ah-comb.cc
  • Tests/ah_date_test.cc
  • Tests/bipartite_test.cc
  • Tests/hungarian_test.cc
  • Tests/knapsack_test.cc
  • Tests/parse_utils_test.cc
  • Tests/planarity_test.cc
  • Tests/stat_utils_test.cc
  • Tests/subset_sum_test.cc
  • Tests/test_ah_errors.cc
  • Tests/tpl_mincost_test.cc
  • ah-comb.H
  • blossom_weighted_mwmatching.H
  • docs/aleph_library_improvements.md
  • docs/state_search_improvements.md
  • modular_combinatorics.H
  • scripts/ci_policy_checks.rb
  • scripts/test_ci_policy_checks.rb
  • tpl_bipartite.H
  • tpl_mincost.H
  • tpl_multi_polynomial.H
  • tpl_polynomial.H
✅ Files skipped from review due to trivial changes (23)
  • Tests/ah_date_test.cc
  • Tests/parse_utils_test.cc
  • Tests/tpl_mincost_test.cc
  • Tests/ah-comb.cc
  • Tests/hungarian_test.cc
  • Tests/stat_utils_test.cc
  • Tests/knapsack_test.cc
  • Tests/subset_sum_test.cc
  • Tests/planarity_test.cc
  • K_Shortest_Paths.H
  • Tests/bipartite_test.cc
  • Min_Cost_Matching.H
  • Blossom_Weighted.H
  • Min_Mean_Cycle.H
  • HLD.H
  • blossom_weighted_mwmatching.H
  • tpl_polynomial.H
  • Graph_Coloring.H
  • Blossom.H
  • tpl_multi_polynomial.H
  • modular_combinatorics.H
  • Hungarian.H
  • ah-comb.H
🚧 Files skipped from review as they are similar to previous changes (3)
  • Tests/test_ah_errors.cc
  • LCA.H
  • Subset_Sum.H

coderabbitai[bot]
coderabbitai bot previously approved these changes Mar 31, 2026
- In tpl_bipartite.H, add an empty-graph check to matching
- In Knapsack.H, fix the overflow check for types without numeric limits
- In tpl_mincost.H, simplify the priority-queue declaration
- In Tests/bipartite_test.cc, update tests to reflect the new behavior
- In stat_utils.H, replace std::map with DynMapTree for consistency
- In scripts/ci_policy_checks.rb, add support for copied files
- In ah-chronos-utils.H, fix formatting and comments
- In scripts/test_ci_policy_checks.rb, add tests for raw string literals
- In Matrix_Chain.H, clarify the validation documentation
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (8)
tpl_mincost.H (3)

669-682: Consider using a reverse lookup map for task indices.

The inner loop at lines 676-681 searches linearly through all tasks to find the matching index. For large problems, consider storing a DynMapTree<Node*, size_t> mapping task nodes to their indices during construction.

This is a minor optimization; the current O(n²) extraction is acceptable for typical assignment problem sizes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tpl_mincost.H` around lines 669 - 682, The extraction loop currently does a
linear search over tasks inside the arc iteration (in the block using workers,
Net::Node_Arc_Iterator, tasks and result.assignments), causing O(n²) behavior;
create a reverse lookup map (e.g., DynMapTree<Node*, size_t> taskIndex) when
tasks are constructed, use taskIndex[tgt] to get j instead of the inner
for-loop, and only append to result.assignments when flow>0 and the lookup
succeeds; update any construction code to populate taskIndex with node->index
pairs.
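The reverse-lookup idea can be sketched with `std::map` standing in for Aleph's `DynMapTree<Node*, size_t>` (the DynMapTree API is assumed, not shown). The map is populated once while the task nodes are built; each arc lookup then costs O(log n) instead of O(n):

```cpp
#include <cassert>
#include <cstddef>
#include <map>

struct Node {};  // opaque stand-in for Net::Node

using TaskIndex = std::map<const Node *, std::size_t>;

// Returns the task index for tgt, or size_t(-1) if tgt is not a task
// node; the caller appends to result.assignments only on success.
std::size_t task_of(const TaskIndex &idx, const Node *tgt)
{
  auto it = idx.find(tgt);
  return it == idx.end() ? static_cast<std::size_t>(-1) : it->second;
}
```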

130-135: Consider removing the unused from parameter.

The from parameter is explicitly silenced but never used. If it's not needed for API consistency with other functions, consider removing it to simplify the interface.

♻️ Proposed simplification
 template <class Net>
-typename Net::Flow_Type reduced_cost(const Net &net,
-                                     typename Net::Arc *arc,
-                                     typename Net::Node *from,
-                                     bool forward)
+typename Net::Flow_Type reduced_cost(const Net &net,
+                                     typename Net::Arc *arc,
+                                     bool forward)
 {
-  (void) from;
   using Flow_Type = typename Net::Flow_Type;

Then update the call site at line 203:

-          Flow_Type rc = reduced_cost<Net>(net, arc, u, forward);
+          Flow_Type rc = reduced_cost<Net>(net, arc, forward);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tpl_mincost.H` around lines 130 - 135, The function reduced_cost in
tpl_mincost.H currently takes an unused parameter Net::Node *from; remove this
unused parameter from the reduced_cost declaration and definition (drop the
Net::Node *from parameter and remove the "(void) from;" silencing), and update
all call sites that pass the from argument to call reduced_cost(net, arc,
forward) (i.e., remove the extra argument so the signatures match). Ensure the
function prototype and any forward declarations use the new signature and
rebuild to catch remaining callers.

325-326: Use not instead of ! for consistency.

The coding guidelines require using not for logical negation. Line 182 correctly uses not pq.is_empty(), but this line uses !changed.

♻️ Proposed fix
-      if (!changed)
+      if (not changed)
         break;

As per coding guidelines: "Use and, or, not for logical operations (instead of &&, ||, !)".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tpl_mincost.H` around lines 325 - 326, Replace the C-style negation with the
guideline-preferred Python-style boolean operator: change the condition that
currently uses "!changed" to use "not changed" so it matches the existing use of
"not pq.is_empty()" and follows the "Use and, or, not" rule; locate the if
statement that checks the boolean variable changed and update the negation
accordingly.
Matrix_Chain.H (1)

186-202: @throws documentation is incomplete for delegated errors.

The function delegates to matrix_chain_order, which throws ah_domain_error for zero dimensions (line 131). This case should be documented here as well for completeness.

📝 Suggested documentation fix
 *  `@throws` ah_domain_error if `dims` has fewer than 2 entries.
+*  `@throws` ah_domain_error if any dimension in `dims` is zero.
 *  `@throws` ah_runtime_error if the cost calculation overflows `size_t`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Matrix_Chain.H` around lines 186 - 202, The documentation for
matrix_chain_min_cost omits delegated exceptions from matrix_chain_order; update
the comment for matrix_chain_min_cost to also document that it throws
ah_domain_error when dims has fewer than 2 entries (the zero-dimension case
thrown by matrix_chain_order) and retain the ah_runtime_error note for overflow;
specifically mention both ah_domain_error and ah_runtime_error in the `@throws`
section to mirror matrix_chain_order's behavior.
Tests/bipartite_test.cc (1)

1166-1167: Remove spurious CI policy comments.

These trailing comments (// satisfy CI policy for tpl_bipartite.H and Subset_Sum.H) appear to be workarounds rather than meaningful content. The CI policy should be satisfied by actual test coverage, not by adding placeholder comments. Additionally, Subset_Sum.H is unrelated to this test file.

🧹 Suggested cleanup
-}// satisfy CI policy
-// satisfy CI policy for tpl_bipartite.H and Subset_Sum.H
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Tests/bipartite_test.cc` around lines 1166 - 1167, Remove the spurious
CI-policy comments at the end of the file (the trailing lines "// satisfy CI
policy for tpl_bipartite.H and Subset_Sum.H" after the final closing brace)
since they are placeholder workarounds and reference an unrelated Subset_Sum.H;
simply delete those comment lines so the file ends cleanly with the closing
brace and rely on proper test coverage instead.
scripts/test_ci_policy_checks.rb (2)

1-1: Missing execute permission on test script.

The script has a shebang but lacks execute permission. This is a minor issue since it can be run via ruby scripts/test_ci_policy_checks.rb.

chmod +x scripts/test_ci_policy_checks.rb
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/test_ci_policy_checks.rb` at line 1, The script
scripts/test_ci_policy_checks.rb contains a shebang ("#!/usr/bin/env ruby") but
lacks execute permission; make the file executable by setting the
owner/appropriate execute bit (e.g., chmod +x on the file) so it can be run
directly via ./scripts/test_ci_policy_checks.rb while preserving the existing
shebang and file contents.

200-231: Mock implementation is functional but consider test isolation.

The monkey-patching approach works correctly with proper restoration via ensure blocks. Note that Object.send(:define_method, :run!) patches globally, which could cause issues if tests ever run in parallel. For now, this is acceptable since Minitest runs sequentially by default.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/test_ci_policy_checks.rb` around lines 200 - 231, The mocks patch
global methods which can break isolation if tests run concurrently; update
def_mock_run to redefine run! only on the specific class/module under test
(instead of Object), change def_mock_git_show to redefine git_show_optional on
its owning module/class, and change def_mock_file_read to stub File.read via
File.singleton_class.define_method temporarily but ensure it's limited to the
test subject (or use Minitest::Mock/Module#prepend) and keep the ensure
restoration logic (referencing def_mock_run, run!, def_mock_git_show,
git_show_optional, def_mock_file_read, and File.read to locate the spots to
change).
Knapsack.H (1)

221-224: The underflow guard is functional but could be clearer.

The condition w != static_cast<size_t>(-1) guards against unsigned wrap-around when wi == 0 and w decrements past zero. While correct, this idiom is subtle.

Consider adding a brief inline comment explaining the guard, or restructuring to handle zero-weight items separately for clarity.

💡 Optional: Add clarifying comment
-      for (size_t w = C; w >= wi and w != static_cast<size_t>(-1); --w)
+      // Second condition guards against unsigned underflow when wi == 0
+      for (size_t w = C; w >= wi and w != static_cast<size_t>(-1); --w)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Knapsack.H` around lines 221 - 224, The unsigned underflow guard in the loop
header (for (size_t w = C; w >= wi and w != static_cast<size_t>(-1); --w)) is
correct but subtle; either add a short inline comment next to that loop in
Knapsack.H explaining that w != static_cast<size_t>(-1) prevents unsigned
wrap-around when wi == 0, or refactor by handling zero-weight items before the
main loop (e.g., if (wi==0) update dp[...] once) and then use a simpler
descending loop (for (size_t w = C; w >= wi; --w)) to make the intent in the
function that updates dp clear.
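The subtle guard can be seen in isolation in the following sketch (an illustrative counting function, not the Knapsack.H code itself):

```cpp
#include <cassert>
#include <cstddef>

// When wi == 0, decrementing an unsigned w past zero wraps to SIZE_MAX,
// so the extra comparison against size_t(-1) terminates the loop after
// the w == 0 iteration.
std::size_t guarded_iterations(std::size_t C, std::size_t wi)
{
  std::size_t iters = 0;
  for (std::size_t w = C; w >= wi and w != static_cast<std::size_t>(-1); --w)
    ++iters;  // the dp update would happen here
  return iters;
}
```

Without the second condition, `guarded_iterations(3, 0)` would loop forever, since `w >= 0` is always true for an unsigned type.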
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@Knapsack.H`:
- Around line 345-361: The overflow check truncates `take` when casting to
narrow `V`, so compute the multiplication and bounds in a wider integer domain
(e.g., size_t) before casting: convert `items[i].value` to size_t, compute
product = take * value_sz and compare against maxV cast to size_t (use
std::numeric_limits<V>::max()), set value_ok based on that safe check, and when
appending to `expanded` compute the product in size_t and then
static_cast<V>(product) after the check; adjust the same approach for the weight
multiplication (wi * take) used when creating Knapsack_Item<W,V> to avoid
truncation before validation. Ensure changes touch the overflow logic around
value_ok, the product computation for the appended Knapsack_Item in
expanded.append, and keep origin.append(i) and multiplier.append(take)
unchanged.

In `@scripts/ci_policy_checks.rb`:
- Around line 77-99: The parser currently only accepts a raw string starting
exactly with 'R"', so update read_raw_string_literal to accept optional C++
raw-string prefixes (u8, u, U, L) before the R: detect and allow the prefix
sequence (e.g., "u8R", "uR", "UR", "LR") and set the actual R position
accordingly while validating bounds; continue to validate the delimiter
characters and terminator as before, and when returning the matched slice use
the original start_idx so the returned literal includes the prefix. Ensure you
only accept those exact prefix tokens (case-sensitive) and preserve existing
behavior for non-prefixed raw literals.

In `@Tests/bipartite_test.cc`:
- Around line 543-552: The test name no longer matches behavior: rename the test
from MaximumMatching.EmptyGraphThrowsRangeError to
MaximumMatching.EmptyGraphReturnsEmpty and update any inline comments so it
reflects that compute_maximum_cardinality_bipartite_matching<Graph>(g, matching)
should not throw and matching remains empty; locate the TEST macro and the test
body (references: TEST(MaximumMatching, EmptyGraphThrowsRangeError), Graph g,
DynDlist<Graph::Arc *> matching, compute_maximum_cardinality_bipartite_matching)
and change the test identifier and any related comment text accordingly.
- Around line 283-291: Rename the test TEST(Bipartite,
EmptyGraphThrowsRangeError) to a name reflecting the new behavior (e.g.,
TEST(Bipartite, EmptyGraphReturnsEmpty)) and update any references to that test;
ensure the body still calls compute_bipartite<Graph>(g, l, r) and asserts
l.is_empty() and r.is_empty(). Also remove the now-duplicate DISABLED_EmptyGraph
test (the one around lines 271-281) to avoid redundant coverage. Keep test
semantics unchanged aside from the name/removal.

---

Nitpick comments:
In `@Knapsack.H`:
- Around line 221-224: The unsigned underflow guard in the loop header (for
(size_t w = C; w >= wi and w != static_cast<size_t>(-1); --w)) is correct but
subtle; either add a short inline comment next to that loop in Knapsack.H
explaining that w != static_cast<size_t>(-1) prevents unsigned wrap-around when
wi == 0, or refactor by handling zero-weight items before the main loop (e.g.,
if (wi==0) update dp[...] once) and then use a simpler descending loop (for
(size_t w = C; w >= wi; --w)) to make the intent in the function that updates dp
clear.

In `@Matrix_Chain.H`:
- Around line 186-202: The documentation for matrix_chain_min_cost omits
delegated exceptions from matrix_chain_order; update the comment for
matrix_chain_min_cost to also document that it throws ah_domain_error when dims
has fewer than 2 entries (the zero-dimension case thrown by matrix_chain_order)
and retain the ah_runtime_error note for overflow; specifically mention both
ah_domain_error and ah_runtime_error in the `@throws` section to mirror
matrix_chain_order's behavior.

In `@scripts/test_ci_policy_checks.rb`:
- Line 1: The script scripts/test_ci_policy_checks.rb contains a shebang
("#!/usr/bin/env ruby") but lacks execute permission; make the file executable
by setting the owner/appropriate execute bit (e.g., chmod +x on the file) so it
can be run directly via ./scripts/test_ci_policy_checks.rb while preserving the
existing shebang and file contents.
- Around line 200-231: The mocks patch global methods which can break isolation
if tests run concurrently; update def_mock_run to redefine run! only on the
specific class/module under test (instead of Object), change def_mock_git_show
to redefine git_show_optional on its owning module/class, and change
def_mock_file_read to stub File.read via File.singleton_class.define_method
temporarily but ensure it's limited to the test subject (or use
Minitest::Mock/Module#prepend) and keep the ensure restoration logic
(referencing def_mock_run, run!, def_mock_git_show, git_show_optional,
def_mock_file_read, and File.read to locate the spots to change).

In `@Tests/bipartite_test.cc`:
- Around line 1166-1167: Remove the spurious CI-policy comments at the end of
the file (the trailing lines "// satisfy CI policy for tpl_bipartite.H and
Subset_Sum.H" after the final closing brace) since they are placeholder
workarounds and reference an unrelated Subset_Sum.H; simply delete those comment
lines so the file ends cleanly with the closing brace and rely on proper test
coverage instead.

In `@tpl_mincost.H`:
- Around line 669-682: The extraction loop currently does a linear search over
tasks inside the arc iteration (in the block using workers,
Net::Node_Arc_Iterator, tasks and result.assignments), causing O(n²) behavior;
create a reverse lookup map (e.g., DynMapTree<Node*, size_t> taskIndex) when
tasks are constructed, use taskIndex[tgt] to get j instead of the inner
for-loop, and only append to result.assignments when flow>0 and the lookup
succeeds; update any construction code to populate taskIndex with node->index
pairs.
- Around line 130-135: The function reduced_cost in tpl_mincost.H currently
takes an unused parameter Net::Node *from; remove this unused parameter from the
reduced_cost declaration and definition (drop the Net::Node *from parameter and
remove the "(void) from;" silencing), and update all call sites that pass the
from argument to call reduced_cost(net, arc, forward) (i.e., remove the extra
argument so the signatures match). Ensure the function prototype and any forward
declarations use the new signature and rebuild to catch remaining callers.
- Around line 325-326: Replace the C-style negation with the guideline-preferred
Python-style boolean operator: change the condition that currently uses
"!changed" to use "not changed" so it matches the existing use of "not
pq.is_empty()" and follows the "Use and, or, not" rule; locate the if statement
that checks the boolean variable changed and update the negation accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: b967501d-8bff-46aa-9274-5902c279d79a

📥 Commits

Reviewing files that changed from the base of the PR and between 3f72e94 and 314a129.

📒 Files selected for processing (9)
  • Knapsack.H
  • Matrix_Chain.H
  • Tests/bipartite_test.cc
  • ah-chronos-utils.H
  • scripts/ci_policy_checks.rb
  • scripts/test_ci_policy_checks.rb
  • stat_utils.H
  • tpl_bipartite.H
  • tpl_mincost.H
🚧 Files skipped from review as they are similar to previous changes (2)
  • ah-chronos-utils.H
  • stat_utils.H

coderabbitai[bot]
coderabbitai bot previously approved these changes Apr 1, 2026
Fix a possible overflow in the knapsack dynamic-programming loop when weights are zero, adding a guard against wrap-around on unsigned types.

Optimize the assignment lookup in the task-assignment problem by replacing a linear O(n) search with an O(log n) tree lookup through a reverse map.

Remove the unused 'from' parameter from the reduced_cost function and simplify the bounds-checking logic in knapsack item expansion.

Fix the empty-bipartite-graph tests so they no longer throw and correctly return empty sets, removing the disabled test and updating the expectations.
Copilot AI review requested due to automatic review settings April 1, 2026 20:40
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 30 out of 39 changed files in this pull request and generated 5 comments.

Comments suppressed due to low confidence (2)

tpl_mincost.H:353

  • successive_shortest_paths() stores pointers to node_info elements in NODE_COOKIE, but those pointers become dangling as soon as node_info goes out of scope at function return. This can corrupt later algorithms that rely on NODE_COOKIE. Please use a Cookie_Guard (or otherwise save/restore/clear node cookies) so cookies are not left pointing to freed memory.

Tests/knapsack_test.cc:385

  • Similar to other test files, the trailing // satisfy CI policy comment at EOF doesn't document a behavioral requirement and can become misleading over time. Consider removing it or replacing it with a short note about what regression this test suite is intended to cover.

      }
    }
    // satisfy CI policy

Comment on lines +346 to +357
      bool value_ok;
      if constexpr (std::is_integral_v<V>)
        {
          const auto value_sz = static_cast<size_t>(items[i].value);
          const auto maxV_sz = static_cast<size_t>(std::numeric_limits<V>::max());
          value_ok = (value_sz == 0 or take <= maxV_sz / value_sz);
        }
      else if constexpr (std::numeric_limits<V>::is_specialized)
        {
          const V maxV = std::numeric_limits<V>::max();
          value_ok = (items[i].value == V{0} or static_cast<V>(take) <= maxV / items[i].value);
        }

Copilot AI Apr 1, 2026


The new overflow guard for V is incorrect for signed/negative values: static_cast<size_t>(items[i].value) (line 349) will wrap negative values to a huge size_t, making value_ok meaningless, and maxV / items[i].value (line 356) behaves incorrectly for negative floating-point values. Additionally, when value_ok is false the code silently skips expanding that item, changing the problem being solved. Consider doing checked multiplication in a wider signed type (or a dedicated checked-mul helper that handles negatives/zero), and fail fast with an ah_*_error_if on overflow rather than skipping items.
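The checked multiplication suggested here can be sketched as follows, assuming an integral value type `V`. Returning `std::nullopt` (rather than silently skipping) lets the caller fail fast with an `ah_*_error_if`:

```cpp
#include <cassert>
#include <cstddef>
#include <limits>
#include <optional>

// Reject negatives up front, compute the overflow bound in the wide
// size_t domain, and only narrow to V after the check passes.
template <class V>
std::optional<V> checked_scale(V value, std::size_t take)
{
  if (value < V{0})
    return std::nullopt;  // negatives cannot be validated via size_t
  const auto value_sz = static_cast<std::size_t>(value);
  const auto max_sz = static_cast<std::size_t>(std::numeric_limits<V>::max());
  if (value_sz != 0 and take > max_sz / value_sz)
    return std::nullopt;  // take * value would overflow V
  return static_cast<V>(take * value_sz);
}
```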

Comment on lines 35 to 39
* Provides `Gen_Polynomial<Coefficient>`, a sparse univariate polynomial
* backed by `DynMapTree<size_t, Coefficient>`. Only non-zero coefficients
* are stored, making operations efficient on high-degree sparse polynomials
* such as @f$x^{1000} + 1@f$.
* such as@f$x^{1000} + 1@f$.
*

Copilot AI Apr 1, 2026


Several Doxygen lines lost spaces around inline math (e.g., "such as@f$x^{1000} + 1@f$"). This reduces readability in generated docs; please keep a space before/after @f$...@f$ blocks ("such as @f$...@f$").

return 1;
}
} No newline at end of file
}// satisfy CI policy

Copilot AI Apr 1, 2026


The trailing comment is attached directly to the closing brace (}// satisfy CI policy), which is inconsistent with the surrounding formatting and makes the closing of main() harder to scan. Please put the comment on its own line (or drop it if no longer needed) and keep the closing brace formatting consistent.

Suggested change
}// satisfy CI policy
}
// satisfy CI policy

Comment on lines +305 to +306
// satisfy CI policy
// satisfy CI policy for tpl_bipartite.H and Subset_Sum.H

Copilot AI Apr 1, 2026


The trailing // satisfy CI policy comments at EOF don’t add test value and will become stale as the CI policy evolves. Consider removing them or replacing with a brief explanation of the actual test intent (e.g., what regression/behavior is being validated).

Suggested change
// satisfy CI policy
// satisfy CI policy for tpl_bipartite.H and Subset_Sum.H

Comment on lines +358 to 363
ah_invalid_argument_if(p < 0 or p > 100) << "percentile: p must be in [0, 100]";

// Copy to vector for sorting
std::vector<T> sorted(std::begin(data), std::end(data));
if (sorted.empty())
throw std::invalid_argument("percentile: empty container");
Array<T> sorted(std::begin(data), std::end(data));
ah_invalid_argument_if(sorted.is_empty()) << "percentile: empty container";


Copilot AI Apr 1, 2026


The comment says "Copy to vector for sorting", but the implementation now copies into Array<T> sorted(...). Please update the comment to match the actual container to avoid misleading future maintainers.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
tpl_mincost.H (2)

735-760: ⚠️ Potential issue | 🔴 Critical

Must fix: validate the transportation inputs before indexing costs[i][j].

solve_transportation() assumes costs is exactly supplies.size() x demands.size() and then indexes it unchecked. A short matrix means immediate out-of-bounds UB, and negative supplies/demands create invalid capacities in the residual network. Fail fast before building the graph.

🛡️ Suggested validation
   if (m == 0 or n == 0)
     {
       result.feasible = true;
       return result;
     }
 
+  ah_invalid_argument_if(costs.size() != m)
+    << "costs row count must match supplies size";
+  for (size_t i = 0; i < m; ++i)
+    {
+      ah_invalid_argument_if(supplies[i] < Cost_Type{0})
+        << "supplies must be non-negative";
+      ah_invalid_argument_if(costs[i].size() != n)
+        << "each costs row must match demands size";
+    }
+  for (size_t j = 0; j < n; ++j)
+    ah_invalid_argument_if(demands[j] < Cost_Type{0})
+      << "demands must be non-negative";
+
   // Check supply-demand balance
   Cost_Type total_supply = Cost_Type{0};

As per coding guidelines, "Validate external inputs before processing" and "Use ah_invalid_argument_if(condition) macro for invalid arguments".

Also applies to: 793-798

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tpl_mincost.H` around lines 735 - 760, The function solve_transportation
currently assumes costs has dimensions supplies.size() x demands.size() and that
supplies/demands are non-negative; validate inputs up-front and fail fast using
the ah_invalid_argument_if macro. Add checks at the start of
solve_transportation: ensure costs.size() == supplies.size(), for each i ensure
costs[i].size() == demands.size(), and ensure every entry of supplies and
demands is >= 0; also check that total_supply == total_demand (or handle
imbalance per function contract) before indexing costs[i][j]; use
ah_invalid_argument_if(condition) with clear messages referencing supplies,
demands, or costs to reject malformed inputs.

271-332: ⚠️ Potential issue | 🟠 Major

Must fix: reject reachable negative-cost cycles before entering SSP.

ssp_init_potentials() is documented as negative-cycle aware, but it still returns success unconditionally and successive_shortest_paths() proceeds straight into Dijkstra. A reachable negative-cost cycle breaks the reduced-cost invariant that makes ssp_shortest_path() valid, so this can produce invalid augmenting paths instead of failing fast. Please add the nth Bellman-Ford relaxation check on the residual graph and propagate that failure here via an Aleph error path.

Based on learnings: "Check for negative cycle handling in shortest path algorithms".

Also applies to: 354-355
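The n-th relaxation check the comment asks for is standard Bellman-Ford. A standalone sketch on a plain edge list (not Aleph's graph API; the names here are illustrative): after n-1 relaxation passes, one extra pass that still relaxes some edge proves a negative-cost cycle is reachable from the source.

```cpp
#include <vector>
#include <limits>

struct Edge { int src, dst; long cost; };

// Run n-1 Bellman-Ford relaxation passes, then one detection-only pass.
// If any edge still relaxes on the n-th pass, a negative-cost cycle is
// reachable from `source`.
bool has_reachable_negative_cycle(int n, const std::vector<Edge> & edges,
                                  int source)
{
  const long INF = std::numeric_limits<long>::max() / 2;
  std::vector<long> dist(n, INF);
  dist[source] = 0;
  for (int i = 0; i < n - 1; ++i)
    for (const auto & e : edges)
      if (dist[e.src] < INF and dist[e.src] + e.cost < dist[e.dst])
        dist[e.dst] = dist[e.src] + e.cost;
  for (const auto & e : edges)           // n-th pass: detection only
    if (dist[e.src] < INF and dist[e.src] + e.cost < dist[e.dst])
      return true;
  return false;
}
```

On failure, the SSP driver would then report the error through the library's error path instead of entering Dijkstra with a broken reduced-cost invariant.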

♻️ Duplicate comments (2)
Knapsack.H (2)

346-370: ⚠️ Potential issue | 🟡 Minor

Add a regression that actually hits the new overflow/filter branch.

The provided Tests/knapsack_test.cc coverage still uses Knapsack_Item<int, int> with counts in 0..3, so this value_ok/group-skipping path is not exercised. Please add a targeted case with a narrow integral V and a count/value combination that crosses the guard. As per coding guidelines, "If headers are modified, corresponding tests must be added or updated".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Knapsack.H` around lines 346 - 370, The new overflow/filter branch around
value_ok in the knapsack expansion logic isn't covered by tests; add a unit test
in Tests/knapsack_test.cc that uses a narrow integral value type (e.g.,
Knapsack_Item<int, int8_t> or int16_t) with an items[i].value and count (take)
combination that would trigger the overflow guard (so value_sz * take would
exceed std::numeric_limits<V>::max()), asserting that the group is skipped or
handled as expected; ensure the test constructs the knapsack, invokes the
expansion path that hits value_ok's false branch and verifies behavior (e.g.,
expanded does not include that item), and include/update the test because
headers were changed.

347-351: ⚠️ Potential issue | 🟠 Major

Avoid narrowing integral V through size_t in the bounded overflow path.

Lines 349-350 and Line 366 assume size_t is at least as wide as V. On 32-bit builds, V = uint64_t/int64_t truncates there, so value_ok and val can be computed from the low bits instead of the real value. Do this check/product in an unsigned form of V (or another guaranteed-wider type) rather than size_t.

Also applies to: 363-366

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Knapsack.H` around lines 347 - 351, The code narrows V into size_t when
computing value_ok and val (uses items[i].value, take, value_ok, val), which
breaks on platforms where size_t is narrower than V; fix by performing the
overflow-checks and multiplication in an unsigned type at least as wide as V
(e.g., std::make_unsigned_t<V> or a guaranteed-wider type like __uint128_t):
convert items[i].value and std::numeric_limits<V>::max() to that unsigned type,
compute value_ok as (value_u == 0 || take <= max_u / value_u), and compute the
product in that unsigned/wider type before casting back to V so no trimming
occurs on 32-bit builds.
🧹 Nitpick comments (3)
Tests/bipartite_test.cc (1)

1153-1154: Minor formatting nit: missing space and redundant comment.

The closing brace is missing a space before the comment, and there are two consecutive "satisfy CI policy" comments.

📝 Suggested cleanup
-}// satisfy CI policy
-// satisfy CI policy for tpl_bipartite.H and Subset_Sum.H
+} // satisfy CI policy
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Tests/bipartite_test.cc` around lines 1153 - 1154, Update the trailing
comment after the final closing brace in the test to fix spacing and remove
duplication: insert a space between the brace and the comment token and delete
the redundant second "satisfy CI policy" line so only a single, properly spaced
comment remains after the closing brace in Tests/bipartite_test.cc.
scripts/test_ci_policy_checks.rb (2)

202-207: Nice to have — validate mocked run! command sequence, not only returned output.

def_mock_run currently ignores _args; adding expected command assertions would catch regressions where changed_files invokes the wrong git command but still gets a compatible mocked response.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/test_ci_policy_checks.rb` around lines 202 - 207, Update def_mock_run
to validate the sequence of commands passed into the mocked run! instead of only
returning results: add an optional expected_commands parameter (array of
expected arg arrays or strings), incrementally compare each invocation's _args
against expected_commands[call_count] inside the new run! definition and raise
an informative exception when they differ, while still returning
results[call_count]; keep the existing call_count logic and ensure the original
method(:run!) is still captured in old_method for restoration.

202-231: Should fix — scope mocks to the test instance instead of globally redefining Object/File.

Object.send(:define_method, ...) and File.singleton_class.send(:define_method, :read, ...) mutate global process state, which can make tests order-dependent/flaky if tests are parallelized or interleaved. Prefer block-scoped stubs.

Proposed refactor
   def def_mock_run(results)
     call_count = 0
-    old_method = method(:run!)
-    Object.send(:define_method, :run!) do |*_args|
-      results[call_count].tap { call_count += 1 }
-    end
-    yield
-  ensure
-    Object.send(:define_method, :run!, old_method)
+    runner = ->(*_args) { results.fetch(call_count).tap { call_count += 1 } }
+    stub(:run!, runner) { yield }
   end

   def def_mock_git_show(mapping)
-    old_method = method(:git_show_optional)
-    Object.send(:define_method, :git_show_optional) do |spec|
-      mapping[spec]
-    end
-    yield
-  ensure
-    Object.send(:define_method, :git_show_optional, old_method)
+    stub(:git_show_optional, ->(spec) { mapping[spec] }) { yield }
   end

   def def_mock_file_read(mapping)
-    old_method = File.method(:read)
-    File.singleton_class.send(:define_method, :read) do |p|
-      mapping.key?(p) ? mapping[p] : old_method.call(p)
-    end
-    yield
-  ensure
-    File.singleton_class.send(:define_method, :read, old_method)
+    reader = ->(p) { mapping.fetch(p) { raise "Unexpected File.read(#{p}) in test" } }
+    File.stub(:read, reader) { yield }
   end

Based on learnings, "Tests must be deterministic and repeatable."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/test_ci_policy_checks.rb` around lines 202 - 231, The helpers
def_mock_run, def_mock_git_show and def_mock_file_read currently redefine global
methods (Object#run!, Object#git_show_optional and File.read) which mutates
global state; change them to create block-scoped, instance-scoped stubs instead:
for def_mock_run define the stub on the test instance's singleton_class (e.g.
self.singleton_class.define_method(:run!) { ... }) so only the test object sees
run!; for def_mock_git_show stub git_show_optional on the specific helper/target
object (or the class under test) rather than Object; and for def_mock_file_read
use a test-framework-aware stub (e.g. RSpec allow(File).to
receive(:read).and_wrap_original { |orig, p| mapping.key?(p) ? mapping[p] :
orig.call(p) }) or define the method on a confined singleton (not globally on
File) and ensure each helper restores the original method after yield. Ensure
you still restore originals in the ensure block.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@Knapsack.H`:
- Around line 132-133: Update the Doxygen `@throws` for each of the four public
entry points that call knapsack_detail::to_size_checked() to document the new
size-conversion failure mode: add a `@throws` entry describing the exception type
thrown by knapsack_detail::to_size_checked() (the size-conversion failure) when
capacity or an item weight cannot be converted to size_t, in addition to the
existing ah_domain_error and ah_bad_alloc entries so the public contract is
complete.
- Around line 104-105: Replace the use of ah_out_of_range_error_if with the
repo-standard ah_range_error_if for the numeric overflow check: locate the check
that uses ah_out_of_range_error_if(uvalue >
static_cast<UW>(std::numeric_limits<size_t>::max())) and change it to
ah_range_error_if(uvalue > static_cast<UW>(std::numeric_limits<size_t>::max()))
while preserving the existing diagnostic text built from fn_name and field_name;
ensure the same condition and message are passed to ah_range_error_if so
behavior/logging remains identical.
- Around line 163-164: The DP updates (e.g., the assignment in the if at
dp[i][w] = dp[i - 1][w - wi] + vi inside the knapsack DP loops) perform
unchecked addition and can overflow; modify the DP transition to first compute
the candidate sum using a safe check or wider type and only apply the update if
it does not overflow (or clamp/skip candidates that would exceed the capacity
sentinel V). Concretely, inside knapsack_01's DP loop(s) and the other DP update
sites, replace direct dp[...] + vi with a guarded computation (for example,
check dp[i-1][w - wi] <= V - vi before adding, or cast to a wider integer for
the addition and compare against V) so that any candidate sum exceeding V is
handled deterministically rather than wrapping/UB; ensure the same guard
approach is applied for all occurrences referenced in the review (the
transitions at the dp update lines and places where knapsack_bounded expands
items).

In `@tpl_mincost.H`:
- Around line 475-479: The temporary super-node insertion/cleanup around the
call to successive_shortest_paths() is not exception-safe
(super_source/super_sink and their arcs are only removed on the happy path);
introduce an RAII guard (e.g., a small scoped class like SuperNodeGuard or
similar) that records the created super_source, super_sink and arcs in its
constructor/initializer and removes them in its destructor, then replace the
manual remove calls with an instance of that guard created before calling
successive_shortest_paths() so cleanup runs on every exit path; update the
insertion/cleanup site in tpl_mincost.H (the code that creates
super_source/super_sink and calls successive_shortest_paths()) to use this
guard.

---

Outside diff comments:
In `@tpl_mincost.H`:
- Around line 735-760: The function solve_transportation currently assumes costs
has dimensions supplies.size() x demands.size() and that supplies/demands are
non-negative; validate inputs up-front and fail fast using the
ah_invalid_argument_if macro. Add checks at the start of solve_transportation:
ensure costs.size() == supplies.size(), for each i ensure costs[i].size() ==
demands.size(), and ensure every entry of supplies and demands is >= 0; also
check that total_supply == total_demand (or handle imbalance per function
contract) before indexing costs[i][j]; use ah_invalid_argument_if(condition)
with clear messages referencing supplies, demands, or costs to reject malformed
inputs.

---

Duplicate comments:
In `@Knapsack.H`:
- Around line 346-370: The new overflow/filter branch around value_ok in the
knapsack expansion logic isn't covered by tests; add a unit test in
Tests/knapsack_test.cc that uses a narrow integral value type (e.g.,
Knapsack_Item<int, int8_t> or int16_t) with an items[i].value and count (take)
combination that would trigger the overflow guard (so value_sz * take would
exceed std::numeric_limits<V>::max()), asserting that the group is skipped or
handled as expected; ensure the test constructs the knapsack, invokes the
expansion path that hits value_ok's false branch and verifies behavior (e.g.,
expanded does not include that item), and include/update the test because
headers were changed.
- Around line 347-351: The code narrows V into size_t when computing value_ok
and val (uses items[i].value, take, value_ok, val), which breaks on platforms
where size_t is narrower than V; fix by performing the overflow-checks and
multiplication in an unsigned type at least as wide as V (e.g.,
std::make_unsigned_t<V> or a guaranteed-wider type like __uint128_t): convert
items[i].value and std::numeric_limits<V>::max() to that unsigned type, compute
value_ok as (value_u == 0 || take <= max_u / value_u), and compute the product
in that unsigned/wider type before casting back to V so no trimming occurs on
32-bit builds.

---

Nitpick comments:
In `@scripts/test_ci_policy_checks.rb`:
- Around line 202-207: Update def_mock_run to validate the sequence of commands
passed into the mocked run! instead of only returning results: add an optional
expected_commands parameter (array of expected arg arrays or strings),
incrementally compare each invocation's _args against
expected_commands[call_count] inside the new run! definition and raise an
informative exception when they differ, while still returning
results[call_count]; keep the existing call_count logic and ensure the original
method(:run!) is still captured in old_method for restoration.
- Around line 202-231: The helpers def_mock_run, def_mock_git_show and
def_mock_file_read currently redefine global methods (Object#run!,
Object#git_show_optional and File.read) which mutates global state; change them
to create block-scoped, instance-scoped stubs instead: for def_mock_run define
the stub on the test instance's singleton_class (e.g.
self.singleton_class.define_method(:run!) { ... }) so only the test object sees
run!; for def_mock_git_show stub git_show_optional on the specific helper/target
object (or the class under test) rather than Object; and for def_mock_file_read
use a test-framework-aware stub (e.g. RSpec allow(File).to
receive(:read).and_wrap_original { |orig, p| mapping.key?(p) ? mapping[p] :
orig.call(p) }) or define the method on a confined singleton (not globally on
File) and ensure each helper restores the original method after yield. Ensure
you still restore originals in the ensure block.

In `@Tests/bipartite_test.cc`:
- Around line 1153-1154: Update the trailing comment after the final closing
brace in the test to fix spacing and remove duplication: insert a space between
the brace and the comment token and delete the redundant second "satisfy CI
policy" line so only a single, properly spaced comment remains after the closing
brace in Tests/bipartite_test.cc.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 9f10a66c-9ce2-4780-b92b-5c9de5c408a6

📥 Commits

Reviewing files that changed from the base of the PR and between 314a129 and 0c3c415.

📒 Files selected for processing (4)
  • Knapsack.H
  • Tests/bipartite_test.cc
  • scripts/test_ci_policy_checks.rb
  • tpl_mincost.H

Comment on lines +104 to +105
ah_out_of_range_error_if(uvalue > static_cast<UW>(std::numeric_limits<size_t>::max()))
<< fn_name << ": " << field_name << " is too large for size_t";


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Find the ah-errors.H file
find . -name "ah-errors.H" -o -name "ah-errors.h"

Repository: lrleon/Aleph-w

Length of output: 71


🏁 Script executed:

# Search for both macro definitions
rg "ah_out_of_range_error_if|ah_range_error_if" --type cpp --type h -B 2 -A 2 | head -100

Repository: lrleon/Aleph-w

Length of output: 4309


🏁 Script executed:

# Check the actual Knapsack.H file around lines 104-105
fd -e H -e cc -e cpp | xargs grep -l "Knapsack" | head -5

Repository: lrleon/Aleph-w

Length of output: 224


🏁 Script executed:

# Get the Knapsack.H file content around lines 104-105
sed -n '95,115p' Knapsack.H

Repository: lrleon/Aleph-w

Length of output: 983


🏁 Script executed:

# Check the ah-errors.H file to understand both macros
cat ah-errors.H

Repository: lrleon/Aleph-w

Length of output: 30744


Use the repo-standard ah_range_error_if macro.

Line 104 uses ah_out_of_range_error_if, which throws std::out_of_range. However, the coding guidelines explicitly specify: "Use ah_range_error_if(condition) macro for out-of-range errors." This check validates numeric range (value overflow for size_t), not container index access, making ah_range_error_if the semantically correct choice. Align with the documented macro family for consistency.

Patch
-    ah_out_of_range_error_if(uvalue > static_cast<UW>(std::numeric_limits<size_t>::max()))
+    ah_range_error_if(uvalue > static_cast<UW>(std::numeric_limits<size_t>::max()))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Knapsack.H` around lines 104 - 105, Replace the use of
ah_out_of_range_error_if with the repo-standard ah_range_error_if for the
numeric overflow check: locate the check that uses
ah_out_of_range_error_if(uvalue >
static_cast<UW>(std::numeric_limits<size_t>::max())) and change it to
ah_range_error_if(uvalue > static_cast<UW>(std::numeric_limits<size_t>::max()))
while preserving the existing diagnostic text built from fn_name and field_name;
ensure the same condition and message are passed to ah_range_error_if so
behavior/logging remains identical.

Comment on lines +132 to +133
* @throws ah_domain_error if capacity or any item weight is negative.
* @throws ah_bad_alloc if memory allocation for the DP table fails.


⚠️ Potential issue | 🟡 Minor

The @throws blocks are missing the new size-conversion failure mode.

All four public entry points now route through knapsack_detail::to_size_checked(), which can also fail when capacity or an item weight does not fit in size_t. The Doxygen still documents only the negative-weight/domain-error cases, so the public contract is incomplete. As per coding guidelines, "Keep documentation descriptions concise but complete".

Also applies to: 199-200, 243-245, 311-313

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Knapsack.H` around lines 132 - 133, Update the Doxygen `@throws` for each of
the four public entry points that call knapsack_detail::to_size_checked() to
document the new size-conversion failure mode: add a `@throws` entry describing
the exception type thrown by knapsack_detail::to_size_checked() (the
size-conversion failure) when capacity or an item weight cannot be converted to
size_t, in addition to the existing ah_domain_error and ah_bad_alloc entries so
the public contract is complete.

Comment on lines +163 to +164
if (wi <= w and dp[i - 1][w - wi] + vi > dp[i - 1][w])
dp[i][w] = dp[i - 1][w - wi] + vi;


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

find . -name "Knapsack.H" -type f

Repository: lrleon/Aleph-w

Length of output: 70


🏁 Script executed:

# Get the file and identify key sections
if [ -f "Knapsack.H" ]; then
  wc -l Knapsack.H
elif find . -path "*/include/*" -name "Knapsack.H" -type f | head -1 | xargs wc -l; then
  :
fi

# Search for Knapsack.H
fd "Knapsack.H$"

Repository: lrleon/Aleph-w

Length of output: 83


🏁 Script executed:

# Read the file to examine the DP transitions and overflow handling
fd "Knapsack.H$" -x cat -n {} | head -400

Repository: lrleon/Aleph-w

Length of output: 16825


🏁 Script executed:

# Verify the exact lines and context around DP transitions
sed -n '158,167p' Knapsack.H
echo "---"
sed -n '217,228p' Knapsack.H
echo "---"
sed -n '275,282p' Knapsack.H

Repository: lrleon/Aleph-w

Length of output: 1001


🏁 Script executed:

# Verify the bounded decomposition overflow checks
sed -n '346,375p' Knapsack.H

Repository: lrleon/Aleph-w

Length of output: 1254


Guard the DP candidate additions against V overflow.

The DP transitions at lines 163–164, 223–224, and 277–279 perform unchecked addition (dp[...] + value). The knapsack_bounded function pre-screens individual expanded items (lines 346–370) to ensure each fits in V, but this protects only the creation of single items. When knapsack_01 combines multiple items in its DP phase (line 387), their accumulated sums are still unguarded: with unsigned V, values wrap silently and return incorrect optima; with signed V, overflow causes undefined behavior.
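One way to guard a transition is sketched below for an unsigned value type V (the helper name is hypothetical): check the headroom against V's maximum before adding, so a candidate sum that would wrap is skipped deterministically instead of corrupting the table.

```cpp
#include <cstdint>
#include <limits>

// Guarded DP transition, assuming an unsigned value type V: take the
// "with item" candidate only if the addition cannot exceed V's maximum.
template <typename V>
V guarded_best(V without_item, V with_item_base, V item_value)
{
  if (with_item_base > std::numeric_limits<V>::max() - item_value)
    return without_item;             // candidate would overflow: skip it
  const V candidate = with_item_base + item_value;
  return candidate > without_item ? candidate : without_item;
}
```

For signed V the same idea works with checks on both bounds, or the addition can be done in a wider type and compared against V's limits.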

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Knapsack.H` around lines 163 - 164, The DP updates (e.g., the assignment in
the if at dp[i][w] = dp[i - 1][w - wi] + vi inside the knapsack DP loops)
perform unchecked addition and can overflow; modify the DP transition to first
compute the candidate sum using a safe check or wider type and only apply the
update if it does not overflow (or clamp/skip candidates that would exceed the
capacity sentinel V). Concretely, inside knapsack_01's DP loop(s) and the other
DP update sites, replace direct dp[...] + vi with a guarded computation (for
example, check dp[i-1][w - wi] <= V - vi before adding, or cast to a wider
integer for the addition and compare against V) so that any candidate sum
exceeding V is handled deterministically rather than wrapping/UB; ensure the
same guard approach is applied for all occurrences referenced in the review (the
transitions at the dp update lines and places where knapsack_bounded expands
items).

Comment on lines +475 to +479
* @note The network topology (nodes and arcs) is preserved: temporary
* super-source/super-sink nodes are removed before returning. However,
* `arc->flow` fields on **original** arcs are modified to reflect the
* optimal transshipment flow. `NODE_COOKIE` on all nodes is overwritten
* during computation and should not be relied upon after the call.


⚠️ Potential issue | 🟠 Major

Must fix: make the temporary super-node cleanup exception-safe.

The new note promises that topology is preserved, but super_source and super_sink are only removed on the happy path. Any exception after the inserts—successive_shortest_paths() is the obvious one—leaves temporary nodes/arcs in the caller's graph and breaks that contract. Wrap the temporary graph mutation in RAII or equivalent guarded cleanup so every exit path removes them.

As per coding guidelines, "All resources (memory, file handles, locks) must follow RAII principles" and "Use RAII to ensure cleanup on exception".

Also applies to: 527-555
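A generic scope guard in the spirit of the suggested fix is sketched below. `Scope_Guard` is a hypothetical name; a real guard for tpl_mincost.H would capture the super nodes and arcs and call the graph's removal routines in its destructor, so cleanup runs on every exit path, including exceptions thrown by successive_shortest_paths().

```cpp
#include <functional>

// Minimal RAII scope guard: runs its cleanup action when the enclosing
// scope exits, whether normally or via an exception.
class Scope_Guard
{
  std::function<void()> cleanup;
public:
  explicit Scope_Guard(std::function<void()> f) : cleanup(std::move(f)) {}
  Scope_Guard(const Scope_Guard &) = delete;
  Scope_Guard & operator = (const Scope_Guard &) = delete;
  ~Scope_Guard() { if (cleanup) cleanup(); }
};
```

Usage would be to construct the guard right after inserting the temporary super nodes, e.g. `Scope_Guard g([&] { /* remove super_source, super_sink and their arcs */ });`, then call successive_shortest_paths() without manual cleanup on the happy path.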

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tpl_mincost.H` around lines 475 - 479, The temporary super-node
insertion/cleanup around the call to successive_shortest_paths() is not
exception-safe (super_source/super_sink and their arcs are only removed on the
happy path); introduce an RAII guard (e.g., a small scoped class like
SuperNodeGuard or similar) that records the created super_source, super_sink and
arcs in its constructor/initializer and removes them in its destructor, then
replace the manual remove calls with an instance of that guard created before
calling successive_shortest_paths() so cleanup runs on every exit path; update
the insertion/cleanup site in tpl_mincost.H (the code that creates
super_source/super_sink and calls successive_shortest_paths()) to use this
guard.
