⚡ perf(zoofi-io): optimize RPC operations by replacing loops with concurrent multiCall#80
⚡ perf(zoofi-io): optimize RPC operations by replacing loops with concurrent multiCall#80
Conversation
…oncurrent multiCall Co-authored-by: zknpr <96851588+zknpr@users.noreply.github.com>
|
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task. |
|
Warning You have reached your daily quota limit. Please wait up to 24 hours and I will start processing your requests again! |
|
Warning Rate limit exceeded
⌛ How to resolve this issue?After the wait time has elapsed, a review can be triggered using the We recommend that you space out your commits to avoid hitting the rate limit. 🚦 How do rate limits work?CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information. ⚙️ Run configurationConfiguration used: Organization UI Review profile: ASSERTIVE Plan: Pro Run ID: 📒 Files selected for processing (1)
✨ Finishing Touches🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
|
Error while running adapter at :
|
Greptile SummaryThis PR optimizes the Key points:
Confidence Score: 4/5
Important Files Changed
Sequence DiagramsequenceDiagram
participant tvl as tvl()
participant rpc as RPC (multiCall/fetchList)
note over tvl,rpc: NEW — all protocols batched in parallel
tvl->>rpc: multiCall assetTokens [p1, p2, p3, p4]
rpc-->>tvl: assetsArray[4][]
tvl->>rpc: multiCall getVaultAddresses(asset) [all (protocol,asset) pairs]
rpc-->>tvl: vaultsArray[][] → flatten → vaults[]
tvl->>rpc: multiCall assetBalance [all vaults]
rpc-->>tvl: assetBals[]
tvl->>tvl: api.add(flattenedAssetsToAlign, assetBals)
tvl->>rpc: fetchList epochInfoById [all vaults, groupedByInput]
rpc-->>tvl: epochInfos[][]
tvl->>tvl: build tokensAndOwners (vaults + redeemPools)
tvl->>rpc: sumTokens(tokensAndOwners)
rpc-->>tvl: TVL result
|
| api.add(assets, assetBals.map(i => i ?? 0)) | ||
| assetsArray.forEach((assets, i) => { | ||
| assets.forEach(asset => { | ||
| vaultCalls.push({ target: protocols[i], params: asset }) |
There was a problem hiding this comment.
params should be an array, not a bare string
In the defillama SDK's multiCall, the params field in a call object is expected to be an array of arguments. Passing a bare string params: asset works in some SDK versions that normalize single values internally, but explicitly using params: [asset] is safer and more consistent with every other call pattern in this codebase.
| vaultCalls.push({ target: protocols[i], params: asset }) | |
| vaultCalls.push({ target: protocols[i], params: [asset] }) |
…oncurrent multiCall Co-authored-by: zknpr <96851588+zknpr@users.noreply.github.com>
|
The adapter at projects/zoofi-io exports TVL: |
💡 What: Refactored the
tvlfunction inprojects/zoofi-io/index.jsto batch API requests. Replaced the top-levelfor...ofloop overprotocolswith concurrentapi.multiCallandapi.fetchListoperations forassetTokens,getVaultAddresses,assetBalance, andepochInfoById.🎯 Why: Previously, the script was performing synchronous sequential RPC queries for each of the 4 protocols (and their sub-vaults). This led to significant unnecessary network overhead. Batching them cuts down network execution latency significantly.
📊 Measured Improvement: Running the TVL test against
berachainshowed an execution time drop from ~1m38s down to ~1m02s locally (including pricing/startup overhead). A micro-benchmark test specifically on thetvl()function showed the RPC batch fetching execution time decreased from ~61.94s down to ~497ms, which is a massive performance gain without changing the final TVL result.PR created automatically by Jules for task 5416223296740505473 started by @zknpr