Conversation
Add metrics hashrate endpoint that queries miner hashrate data via tailLogCustomRangeAggr RPC and returns daily hashrate with summary.
Add metrics consumption endpoint that queries powermeter data via tailLogCustomRangeAggr RPC and returns daily power/consumption with summary.
Add metrics efficiency endpoint that queries miner efficiency (W/TH) data via tailLogCustomRangeAggr RPC. Adds AGGR_FIELDS.EFFICIENCY constant.
Add miner-status endpoint that queries miner status data via tailLog RPC with aggregated offline/sleep/maintenance counts per day.
…erature endpoints
Add three new metrics endpoints with interval-based aggregation, per-miner power mode categorization with status overrides, timeline segmentation, and per-container temperature breakdown with site-wide aggregates.
…ltering
Curl testing revealed three bugs:
- tailLog with groupRange returns ts as a "start-end" string (e.g. "1770854400000-1771459199999"), not a number, so getStartOfDay(NaN) silently dropped all entries. Added a parseEntryTs() helper.
- Container names are not valid RPC tags (the RPC returns an empty []). Changed to always use the tag 't-miner' and post-filter by container in the handler.
- Pre-existing bug in getMinerStatus: the same range-string ts issue with groupRange '1D' caused empty logs. Now fixed.
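The range-string timestamp fix can be sketched roughly as follows. parseEntryTs() is named in this PR, but the body below is an illustrative reconstruction, not the actual diff:

```js
// Illustrative sketch of the parseEntryTs() helper described above.
// With groupRange, tailLog returns ts as a "start-end" string,
// e.g. "1770854400000-1771459199999"; otherwise it is a plain number.
// Number("1770854400000-1771459199999") would be NaN, which is what
// made getStartOfDay(NaN) silently drop every entry.
function parseEntryTs (ts) {
  if (typeof ts === 'number') return Number.isFinite(ts) ? ts : null
  if (typeof ts === 'string') {
    // Use the start of the range as the entry timestamp
    const start = Number(ts.split('-')[0])
    return Number.isFinite(start) ? start : null
  }
  return null
}

console.log(parseEntryTs('1770854400000-1771459199999')) // 1770854400000
console.log(parseEntryTs(1700000000000)) // 1700000000000
console.log(parseEntryTs(undefined)) // null
```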
total_cnt does not exist in stat-3h RPC entries. The online count was always 0 because the total was defaulting to 0. Use type_cnt (object keyed by miner type) with sumObjectValues() to derive the correct total, enabling accurate online = total - offline - sleep - maintenance.
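A minimal sketch of that fix, assuming type_cnt is an object keyed by miner type as described above (sumObjectValues is named in the PR, but this body and the sample entry are illustrative):

```js
// Illustrative sketch: derive the total miner count from type_cnt,
// since total_cnt does not exist in stat-3h RPC entries.
function sumObjectValues (obj) {
  return Object.values(obj || {}).reduce((sum, v) => sum + (Number(v) || 0), 0)
}

// Hypothetical stat-3h entry shape for demonstration
const entry = {
  type_cnt: { s19: 120, s21: 80 },
  offline_cnt: 5,
  sleep_cnt: 10,
  maintenance_cnt: 2
}

const total = sumObjectValues(entry.type_cnt)
const online = total - entry.offline_cnt - entry.sleep_cnt - entry.maintenance_cnt
console.log(total, online) // 200 183
```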
- Extract validateStartEnd(), iterateRpcEntries(), forEachRangeAggrItem() shared helpers to eliminate duplicated validation, result unpacking, and range-aggr processing across 7 endpoints
- Add TYPE_CNT, OFFLINE_CNT, SLEEP_CNT, MAINTENANCE_CNT to the AGGR_FIELDS constants, replacing hardcoded strings in the miner-status handler
- Use a proper weighted running average for temperature merging across orks
- Extract a DEFAULT_TIMELINE_LIMIT constant and extractContainerFromMinerId()
- Unify entry.error checks and parseEntryTs usage across all endpoints
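The "proper weighted running average" for temperature merging can be sketched like this. The function and accumulator names are assumptions for illustration, not the PR's actual code:

```js
// Hypothetical sketch: merge per-ork temperature averages with a
// weighted running average instead of a naive mean of means, so an
// ork reporting more miners contributes proportionally more.
function mergeTemp (acc, avgTemp, count) {
  const total = acc.count + count
  return {
    // new avg = (old avg * old count + incoming avg * incoming count) / total
    avg: total === 0 ? 0 : (acc.avg * acc.count + avgTemp * count) / total,
    count: total
  }
}

let acc = { avg: 0, count: 0 }
acc = mergeTemp(acc, 60, 100) // ork A: 100 miners averaging 60°C
acc = mergeTemp(acc, 70, 50)  // ork B: 50 miners averaging 70°C
console.log(acc.avg) // ~63.33, not the naive (60 + 70) / 2 = 65
```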
```js
    default:
      return { key: 'stat-3h', groupRange: '1D', divisorMs: 24 * 60 * 60 * 1000 }
  }
}
```
please move util methods to a different file
```js
  if (powerMode === 'low') return 'low'
  if (powerMode === 'high') return 'high'
  if (powerMode === 'sleep') return 'sleep'
  return 'normal'
```
please move values to constants file
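A minimal sketch of what the suggested constants file could look like. The module path and constant names are assumptions, not part of the PR:

```js
// Hypothetical constants module (e.g. lib/constants/power-modes.js)
// replacing the hardcoded 'low'/'high'/'sleep'/'normal' strings above.
const POWER_MODES = Object.freeze({
  LOW: 'low',
  HIGH: 'high',
  SLEEP: 'sleep',
  NORMAL: 'normal'
})

module.exports = { POWER_MODES }
```

categorizeMiner could then compare against POWER_MODES.LOW etc. instead of repeating string literals in each handler.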
```js
const interval = resolveInterval(start, end, req.query.interval)
const config = getIntervalConfig(interval)
const limit = Math.ceil((end - start) / config.divisorMs)
```
should start/end be included in the rpcPayload instead of calculating the limit here?
```js
 * e.g. "bitdeer-9a-miner1" -> "bitdeer-9a"
 * NOTE: Unverified against real power_mode_group_aggr data.
 */
function extractContainerFromMinerId (minerId) {
```
@mukama I don't think we have this relationship stored in any miners.
Also, minerId seems misleading as a name: MinerIds are autogenerated IDs.
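For reference, the strip-last-segment parsing under discussion amounts to something like this. This is illustrative only, and as the review notes, the id-to-container relationship is unverified for real autogenerated miner IDs:

```js
// Illustrative sketch: derive a container name by dropping the final
// "-segment" of the id, e.g. "bitdeer-9a-miner1" -> "bitdeer-9a".
// This only works if ids really embed the container prefix, which the
// review questions for autogenerated ids.
function extractContainerFromMinerId (minerId) {
  if (typeof minerId !== 'string') return null
  const idx = minerId.lastIndexOf('-')
  return idx > 0 ? minerId.slice(0, idx) : null
}

console.log(extractContainerFromMinerId('bitdeer-9a-miner1')) // 'bitdeer-9a'
console.log(extractContainerFromMinerId('nodash')) // null
```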
```js
  safeDiv
} = require('../../utils')

const TWO_DAYS_MS = 2 * 24 * 60 * 60 * 1000
```
```js
  return null
}

function validateStartEnd (req) {
```
All the functions below seem to be separate from the handlers; they can be moved to utils.
```js
  return { log, summary }
}

function categorizeMiner (powerMode, status) {
```
We are treating miners as normal power mode whenever the power mode is not in the list. I don't think this is correct behaviour. We should explicitly send back the actual power mode rather than defaulting to normal.
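The suggested behaviour could look roughly like this: a sketch assuming an explicit 'unknown' fallback rather than defaulting to 'normal' (the status-override handling from the real categorizeMiner is omitted here):

```js
// Hypothetical alternative to categorizeMiner: pass known power modes
// through unchanged and surface anything unrecognized explicitly,
// instead of silently reporting it as 'normal'.
const KNOWN_POWER_MODES = ['low', 'high', 'sleep', 'normal']

function categorizePowerMode (powerMode) {
  return KNOWN_POWER_MODES.includes(powerMode) ? powerMode : 'unknown'
}

console.log(categorizePowerMode('low'))   // 'low'
console.log(categorizePowerMode('turbo')) // 'unknown', not 'normal'
```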
```js
    if (!ts) continue
    callback(ts, item.val || item)
  }
} else if (typeof items === 'object') {
```
```js
return {
  ts: Number(dayTs),
  powerW,
  consumptionMWh: (powerW * 24) / 1000000
```
Why are we multiplying by 24? If it's consumption, shouldn't it be the instantaneous value or the average over the day?
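For the units question above: the * 24 conversion is only valid if powerW is the average power over the whole day. A worked sketch under that assumption:

```js
// Worked example of the MWh conversion, assuming powerW is the
// average power (in watts) over the whole day:
//   energy in Wh  = powerW (W) * 24 (h)
//   energy in MWh = Wh / 1,000,000
const powerW = 2500000 // 2.5 MW average over the day (hypothetical)
const consumptionMWh = (powerW * 24) / 1000000
console.log(consumptionMWh) // 60

// If powerW were instead a single instantaneous sample, multiplying
// by 24 would misstate consumption; that is the ambiguity the review
// comment flags.
```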
```js
  [AGGR_FIELDS.POWER_MODE_GROUP]: 1,
  [AGGR_FIELDS.STATUS_GROUP]: 1
},
shouldCalculateAvg: true,
```
Are we sure about this? Power mode does not need aggregation, IMO.
Summary
- API v2 GET /auth/metrics/* endpoints ported from the legacy dashboard, replacing frontend tail-log / tail-log/range-aggr calls with clean REST APIs
- { log: [...], summary: {...} } response format consistent with finance endpoints
- Interval selection (1h/1d/1w based on date range)
- RPC aggregation via groupRange: '1D'/'1W'
- Miner totals derived from the type_cnt field

Endpoints

- GET /auth/metrics/hashrate — daily hashrate (MH/s) with summary averages via tailLogCustomRangeAggr RPC
  Example: GET /auth/metrics/hashrate?start=1700000000000&end=1700100000000&overwriteCache=true
- GET /auth/metrics/consumption — daily power consumption (W + MWh) via tailLogCustomRangeAggr RPC
  Example: GET /auth/metrics/consumption?start=1700000000000&end=1700100000000&overwriteCache=true
- GET /auth/metrics/efficiency — daily efficiency (W/TH/s) via tailLogCustomRangeAggr RPC
  Example: GET /auth/metrics/efficiency?start=1700000000000&end=1700100000000&overwriteCache=true
- GET /auth/metrics/miner-status — daily miner status breakdown (online/offline/sleep/maintenance) via tailLog RPC
  Example: GET /auth/metrics/miner-status?start=1700000000000&end=1700100000000&overwriteCache=true
- GET /auth/metrics/power-mode — miner power mode distribution over time via tailLog RPC
  Example: GET /auth/metrics/power-mode?start=1700000000000&end=1700100000000&interval=1d&overwriteCache=true
  Optional: interval ("1h" | "1d" | "1w"), overwriteCache
- GET /auth/metrics/power-mode/timeline — per-miner power mode timeline with merged segments via tailLog RPC
  Example: GET /auth/metrics/power-mode/timeline?start=1700000000000&end=1700100000000&container=bitdeer-9a&limit=10080&overwriteCache=true
  Optional: start, end, container, limit, overwriteCache
- GET /auth/metrics/temperature — per-container temperature breakdown with site-wide aggregates via tailLog RPC
  Example: GET /auth/metrics/temperature?start=1700000000000&end=1700100000000&interval=1d&container=bitdeer-9a&overwriteCache=true
  Optional: interval ("1h" | "1d" | "1w"), container, overwriteCache
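The shared { log, summary } response shape can be illustrated with a hashrate example. The per-entry and summary field names below are assumptions based on the endpoint descriptions, not the real payload:

```js
// Hypothetical GET /auth/metrics/hashrate response, showing only the
// shared { log, summary } envelope the endpoints have in common.
const exampleResponse = {
  log: [
    { ts: 1700006400000, hashrateMhs: 123456789 },
    { ts: 1700092800000, hashrateMhs: 124000000 }
  ],
  summary: {
    avgHashrateMhs: 123728394.5 // (123456789 + 124000000) / 2
  }
}

console.log(JSON.stringify(exampleResponse.summary))
```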
Known Issues
- Hourly resolution is limited by the underlying stat-3h entries. Selecting the 1h interval returns raw 3h data points, not true 1-hour resolution.