
chore: distribute data provider rewards#1331

Merged
MicBun merged 2 commits into main from lpRewards on Mar 9, 2026

Conversation

MicBun (Member) commented on Mar 9, 2026

resolves: https://github.com/truflation/website/issues/3419

Summary by CodeRabbit

  • Bug Fixes

    • Improved liquidity detection for LP rewards, refining two-sided market checks and midpoint price calculation.
  • Chores

    • Adjusted fee assignment timing in order settlement to streamline processing.
  • Tests

    • Updated tests to compute LP shares from sampled rewards instead of fixed hard-coded percentages.

@MicBun MicBun requested a review from pr-time-tracker March 9, 2026 20:13
@MicBun MicBun self-assigned this Mar 9, 2026
holdex bot commented on Mar 9, 2026

Time Submission Status

  • MicBun: ✅ Submitted, 4h 30min (last updated Mar 9, 2026, 9:22 PM)

You can submit time with a command, for example:

@holdex pr submit-time 15m

See available commands to help comply with our Guidelines.

coderabbitai bot commented on Mar 9, 2026

📝 Walkthrough


Two SQL migrations were updated: settlement fee accumulator assignments were moved to immediately follow unlock operations; LP reward sampling logic was reworked to explicitly initialize and transform bid/ask values and adjust midpoint/liquidity checks. Tests were updated to compute LP expectations from sampled rewards rather than fixed percentages.

Changes

  • Settlement Fee Timing (`internal/migrations/033-order-book-settlement.sql`): Move the assignments of `$actual_dp_fees` and `$actual_validator_fees` to immediately after each unlock call and remove redundant assignments inside subsequent blocks; control flow and final values are unchanged.
  • Liquidity & Midpoint Detection (`internal/migrations/034-order-book-rewards.sql`): Refactor `sample_lp_rewards`: initialize `best_bid`/`best_ask`, use absolute/transformed price updates for bid and ask, tighten the two-sided liquidity check, and compute the midpoint as `(best_ask + best_bid) / 2`.
  • Tests: dynamic LP shares (`tests/streams/order_book/fee_distribution_test.go`): Replace hard-coded LP share expectations with dynamic calculations derived from sampled rewards data, adjusting assertions to the new reward-driven distribution logic.
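The liquidity and midpoint change is the subtlest of the three. Below is a minimal Go sketch of the two-sided gate and midpoint calculation described in the walkthrough, assuming prices on a 0-100 YES scale; the `100 - price` projection for NO-side orders is an illustrative assumption, not the migration's exact encoding:

```go
package main

import "fmt"

// midpoint sketches the two-sided-liquidity gate and midpoint calculation
// from sample_lp_rewards. Prices are assumed to live on a 0-100 YES scale;
// NO-side quotes are projected onto that scale as 100 - price (assumed).
func midpoint(yesBids, yesAsks, noBids, noAsks []float64) (float64, bool) {
	bestBid, bestAsk := 0.0, 100.0 // sentinel values meaning "no liquidity"

	for _, p := range yesBids {
		if p > bestBid {
			bestBid = p
		}
	}
	for _, p := range yesAsks {
		if p < bestAsk {
			bestAsk = p
		}
	}
	// A NO-side bid is liquidity on the YES ask side, and vice versa.
	for _, p := range noBids {
		if yes := 100 - p; yes < bestAsk {
			bestAsk = yes
		}
	}
	for _, p := range noAsks {
		if yes := 100 - p; yes > bestBid {
			bestBid = yes
		}
	}

	// Two-sided check: both sides must hold a real quote, otherwise the
	// sentinels are still in place and no midpoint exists.
	if bestBid == 0 || bestAsk == 100 {
		return 0, false
	}
	return (bestAsk + bestBid) / 2, true
}

func main() {
	mid, ok := midpoint([]float64{40}, []float64{60}, nil, nil)
	fmt.Println(mid, ok) // 50 true

	// Best ask supplied only by a NO-side bid at 45 (projected to YES 55).
	mid, ok = midpoint([]float64{40}, nil, []float64{45}, nil)
	fmt.Println(mid, ok) // 47.5 true
}
```

The second call illustrates the case the review comment below worries about: a side of the book populated only by NO-side orders must still pass the gate.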

Sequence Diagram(s)

(omitted)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • pr-time-tracker

Poem

🐰✨ I hopped where unlocks softly land,
Took care to count each fee by hand,
Bids and asks I nudged just right,
Rewards now dance in sampled light,
A rabbit cheers: the ledgers sing tonight! 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Title check: ⚠️ Warning. The PR title "chore: distribute data provider rewards" is vague and doesn't accurately reflect the main changes, which involve refactoring fee assignment timing and liquidity detection logic across migration files and tests. Resolution: use a more specific title that captures the primary technical changes, such as "refactor: restructure fee assignment and liquidity detection in order book logic", or clarify which specific reward distribution changes are being made.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.


coderabbitai bot left a comment

🧹 Nitpick comments (1)
internal/migrations/034-order-book-rewards.sql (1)

117-153: Add a regression test for complementary-side midpoint cases.

This rewrite changes how outcome = FALSE orders are projected onto the YES price scale before the two-sided-liquidity gate and midpoint calculation. The current reward tests cover empty/incomplete books and a happy path, but they do not pin cases where the best bid or ask comes only from NO-side orders. A targeted test here would make future refactors much safer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/migrations/034-order-book-rewards.sql` around lines 117 - 153, Add a
regression test that covers complementary-side midpoint cases where the best bid
or ask comes only from NO-side orders (price projection onto YES scale) to
exercise the two-sided-liquidity gate and midpoint calculation in the logic that
reads ob_positions by query_id and outcome; specifically, create tests that
insert NO-side buys (price < 0) and NO-side sells (price > 0) and verify they
correctly update $best_bid and $best_ask and produce the expected $x_mid, as
well as tests where only NO-side provides the best bid/ask to ensure the code
path that returns early when $best_bid = 0 OR $best_ask = 100 is not incorrectly
triggered. Ensure test names reference the midpoint behavior (e.g.,
"complementary_side_midpoint_no_side_only") and assert the computed midpoint
equals the manual projection ((best_ask + best_bid) / 2) after the NO->YES
projections are applied.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 11e95c4b-114f-44cc-a24f-40ae2b4e9709

📥 Commits

Reviewing files that changed from the base of the PR and between 6e9b251 and 98dc72d.

📒 Files selected for processing (2)
  • internal/migrations/033-order-book-settlement.sql
  • internal/migrations/034-order-book-rewards.sql

coderabbitai bot left a comment

🧹 Nitpick comments (2)
tests/streams/order_book/fee_distribution_test.go (2)

198-207: Potential precision loss when converting float64 percentage to int64.

The conversion int64(p1Reward*100) truncates toward zero. If p1Reward is intended to be 64.33 but is represented in float64 as 64.32999999…, then multiplying by 100 gives 6432.999…, which truncates to 6432 instead of 6433, causing an off-by-one error in the expected values.

🔧 Suggested fix using math.Round
+import "math"
+
 if p1Reward, ok := rewards[1]; ok {
     // Convert float64 percentage to big.Int with precision
     // reward_percent is already 0-100
-    p1Wei := new(big.Int).Mul(lpShareTotal, big.NewInt(int64(p1Reward*100)))
+    p1Wei := new(big.Int).Mul(lpShareTotal, big.NewInt(int64(math.Round(p1Reward*100))))
     expectedLP1 = new(big.Int).Div(p1Wei, big.NewInt(10000))
 }
 if p2Reward, ok := rewards[2]; ok {
-    p2Wei := new(big.Int).Mul(lpShareTotal, big.NewInt(int64(p2Reward*100)))
+    p2Wei := new(big.Int).Mul(lpShareTotal, big.NewInt(int64(math.Round(p2Reward*100))))
     expectedLP2 = new(big.Int).Div(p2Wei, big.NewInt(10000))
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/streams/order_book/fee_distribution_test.go` around lines 198 - 207,
The test is truncating float percentages when computing expectedLP1/expectedLP2
(conversion int64(p1Reward*100) and int64(p2Reward*100)), which can cause
off-by-one errors; update the conversion to round the scaled float before
casting (e.g., use math.Round(p1Reward*100) and math.Round(p2Reward*100)) so the
calculations that use lpShareTotal produce correct big.Int results for
expectedLP1 and expectedLP2.

388-401: Direct map access and float64 precision concerns.

Two issues:

  1. Direct map access: Unlike the 1-block test which uses if _, ok := rewards[1]; ok, this code directly accesses rewards1000[1] etc. If a participant ID doesn't exist, Go returns 0.0 silently, which could mask test setup issues or lead to false positives.

  2. Precision loss: Same truncation issue with int64(totalP1Reward*100) — use math.Round for safety.

🔧 Suggested fix for safety and precision
 // Calculate average LP share across all blocks
+// Verify expected participant IDs exist
+require.Contains(t, rewards1000, 1, "Missing participant 1 at block 1000")
+require.Contains(t, rewards1000, 2, "Missing participant 2 at block 1000")
+require.Contains(t, rewards2000, 1, "Missing participant 1 at block 2000")
+require.Contains(t, rewards2000, 2, "Missing participant 2 at block 2000")
+require.Contains(t, rewards3000, 1, "Missing participant 1 at block 3000")
+require.Contains(t, rewards3000, 2, "Missing participant 2 at block 3000")
+
 totalP1Reward := rewards1000[1] + rewards2000[1] + rewards3000[1]
 totalP2Reward := rewards1000[2] + rewards2000[2] + rewards3000[2]

 // User1 LP: (lpShareTotal * totalP1Reward) / (100 * 3)
 // User2 LP: (lpShareTotal * totalP2Reward) / (100 * 3)
 expectedLP1 := new(big.Int).Div(
-    new(big.Int).Mul(lpShareTotal, big.NewInt(int64(totalP1Reward*100))),
+    new(big.Int).Mul(lpShareTotal, big.NewInt(int64(math.Round(totalP1Reward*100)))),
     big.NewInt(30000),
 )
 expectedLP2 := new(big.Int).Div(
-    new(big.Int).Mul(lpShareTotal, big.NewInt(int64(totalP2Reward*100))),
+    new(big.Int).Mul(lpShareTotal, big.NewInt(int64(math.Round(totalP2Reward*100)))),
     big.NewInt(30000),
 )
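The silent zero value the comment warns about is standard Go map behavior; a minimal demonstration (the map literal here is illustrative, not the test's data):

```go
package main

import "fmt"

func main() {
	rewards := map[int]float64{1: 64.33}

	// Direct access: a missing key silently yields the zero value.
	fmt.Println(rewards[2]) // 0

	// Comma-ok access makes the missing key detectable, so a test can
	// fail loudly instead of asserting against a spurious 0% share.
	if _, ok := rewards[2]; !ok {
		fmt.Println("participant 2 missing") // participant 2 missing
	}
}
```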
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/streams/order_book/fee_distribution_test.go` around lines 388 - 401,
The test directly indexes rewards1000/rewards2000/rewards3000 (used to compute
totalP1Reward/totalP2Reward) and converts float64 to int64 via
int64(totalP1Reward*100) which can silently mask missing keys and cause
precision truncation; update the code that computes totalP1Reward/totalP2Reward
to first check key presence (e.g., using the comma-ok pattern) for each
participant ID in rewards1000/rewards2000/rewards3000 and fail the test or
handle missing entries explicitly, and when converting the scaled float to
integer for expectedLP1/expectedLP2 use math.Round on the product (e.g.,
math.Round(totalP1Reward*100)) before casting to int64 to avoid truncation; keep
references to lpShareTotal, expectedLP1 and expectedLP2 to replace the current
int64(...) conversions accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ca1638ee-2beb-4353-b637-c069d521e76e

📥 Commits

Reviewing files that changed from the base of the PR and between 98dc72d and 845c4fd.

📒 Files selected for processing (1)
  • tests/streams/order_book/fee_distribution_test.go

@MicBun MicBun merged commit 1e51772 into main Mar 9, 2026
8 checks passed
@MicBun MicBun deleted the lpRewards branch March 9, 2026 22:35