⚡ Optimize bprotocol liquityTvl using multiCall chunking#83
⚡ Optimize bprotocol liquityTvl using multiCall chunking#83
Conversation
Co-authored-by: zknpr <96851588+zknpr@users.noreply.github.com>
|
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task. |
|
Warning You have reached your daily quota limit. Please wait up to 24 hours and I will start processing your requests again! |
|
Warning Rate limit exceeded
⌛ How to resolve this issue?After the wait time has elapsed, a review can be triggered using the We recommend that you space out your commits to avoid hitting the rate limit. 🚦 How do rate limits work?CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information. ⚙️ Run configurationConfiguration used: Organization UI Review profile: ASSERTIVE Plan: Pro Run ID: 📒 Files selected for processing (1)
✨ Finishing Touches🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
|
Error while running adapter at :
|
Co-authored-by: zknpr <96851588+zknpr@users.noreply.github.com>
|
The adapter at projects/bprotocol exports TVL: |
Greptile SummaryThis PR replaces the sequential N+1 Key observations:
Confidence Score: 3/5
Important Files Changed
Flowchart%%{init: {'theme': 'neutral'}}%%
flowchart TD
A[liquityTvl called] --> B[Initialize bamms array and index i=0]
B --> C[Build batch of 5 indices starting at i]
C --> D[multiCall bKeeperAddress.bamms for each index in batch with permitFailure]
D --> E[Filter out null results to get valid addresses]
E --> F[Append valid addresses to bamms array]
F --> G{Any call failed in batch?}
G -- Yes --> H[Break loop]
G -- No --> I[Advance i by 5]
I --> C
H --> J{bamms array is non-empty?}
J -- No --> K[Return nothing to add]
J -- Yes --> L[multiCall stabilityPool.getCompoundedLUSDDeposit for all bamms]
L --> M[Add LUSD balance for each result]
M --> N[Done]
|
| const batch = Array.from({length: 5}, (_, j) => i + j); | ||
| const res = await api.multiCall({ | ||
| target: bKeeperAddress, | ||
| calls: batch, | ||
| abi: abi["bamms"], | ||
| permitFailure: true | ||
| }); |
There was a problem hiding this comment.
Non-standard calls format for integer params
batch is an array of raw integers (e.g., [0, 1, 2, 3, 4]), but the multiCall API conventionally expects call descriptors of the form { params: [value] }. Looking at similar adapters in this repo (e.g., projects/accountable/index.js), the idiomatic form for integer-indexed calls is:
| const batch = Array.from({length: 5}, (_, j) => i + j); | |
| const res = await api.multiCall({ | |
| target: bKeeperAddress, | |
| calls: batch, | |
| abi: abi["bamms"], | |
| permitFailure: true | |
| }); | |
| const batch = Array.from({length: 5}, (_, j) => ({ params: [i + j] })); | |
| const res = await api.multiCall({ | |
| target: bKeeperAddress, | |
| calls: batch, | |
| abi: abi["bamms"], | |
| permitFailure: true | |
| }); |
While the SDK appears to handle primitive values in some cases (as seen in projects/kurrency/index.js), using explicit { params: [...] } objects is more robust, self-documenting, and consistent with the existing patterns in this same file (see makerTvl).
Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
| if (valid.length < batch.length || !res[res.length - 1]) { | ||
| break; | ||
| } |
There was a problem hiding this comment.
Redundant second break condition
!res[res.length - 1] is logically subsumed by valid.length < batch.length. If the last element of res is falsy, then valid (which filters all falsy values) will have fewer elements than batch, so the first condition already triggers. The second condition is dead code.
| if (valid.length < batch.length || !res[res.length - 1]) { | |
| break; | |
| } | |
| if (valid.length < batch.length) { | |
| break; | |
| } |
💡 What: The optimization implemented
Replaced the sequential
api.callloop inliquityTvlwith a chunked/batchedapi.multiCallimplementation. The code now retrievesbammsaddresses in parallel chunks of 5 usingpermitFailure: trueto handle bounds checking, and then performs a singleapi.multiCallto fetch the LUSD deposits for all valid bamms simultaneously.🎯 Why: The performance problem it solves
The original implementation had an N+1 query problem, fetching each
bammaddress and its respective balance sequentially (await api.call(...), thenawait api.call(...)inside aforloop). This resulted in substantial serial network latency that scaled linearly with the number of indexes.📊 Measured Improvement:
The optimization resolves the serial execution pattern of the N+1 problem. While the fallback/retry mechanisms of the underlying RPC handling for out-of-bounds reverts natively take around ~60s in
multiCallwhenpermitFailure: trueis hit on the last index, the valid data retrieval phase itself is now fully concurrent. This dramatically speeds up the happy-path retrieval of the array and prevents sequential scaling of API calls as the TVL state grows. The end result TVL remained identical (ethereum 1.78 M).PR created automatically by Jules for task 14632640199814752688 started by @zknpr