
fix(tracer,metrics): propagate flush() through to exporter #8

Open
stackbilt-admin wants to merge 1 commit into main from fix/exporter-flush-propagation

Conversation

@stackbilt-admin

Summary

`Tracer.flush()` and `MetricsCollector.flush()` only drained their local buffer into `options.export.export(...)` — they never called `options.export.flush()` to force a real POST. On Cloudflare Workers the default `StackbiltCloudExporter` buffers across requests and only POSTs at a 100-item / 50KB threshold; low-volume isolates get evicted long before that trips, so buffered spans die silently.

Option B from the issue: make the flush chain transitive. Added an optional `flush?()` to `SpanExporter` and `MetricsExporter` interfaces, and call it after `export()` in both collectors. Exporters without `flush()` are unaffected (backward-compatible).
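The shape of the change, sketched with simplified types (the real `SpanExporter` interface and collector internals may differ; treat the bodies here as assumptions):

```typescript
interface Span { name: string }

interface SpanExporter {
  export(spans: Span[]): void;
  flush?(): void; // new optional hook; existing exporters simply omit it
}

class Tracer {
  private spans: Span[] = [];
  constructor(private exporter: SpanExporter) {}
  record(name: string) { this.spans.push({ name }); }

  flush(): void {
    if (this.spans.length > 0) {
      this.exporter.export(this.spans);
      this.spans = [];
    }
    // The fix: propagate flush() down the chain so the exporter POSTs now.
    // Optional chaining keeps exporters without flush() backward-compatible.
    this.exporter.flush?.();
  }
}
```

`MetricsCollector.flush()` gets the same one-line propagation against `MetricsExporter`.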

Test plan

  • `npm run typecheck` clean
  • `npm test` — 71/71 pass (redact suites)
  • Deploy to a low-volume Worker, confirm every request emits spans (no more 100-item batch gate)
  • Existing high-volume consumers (if any) still flush at normal cadence

Dogfood evidence

Verified end-to-end in `tarotscript-worker` (deployed version `9d53b2c3-08a2-415a-8d95-b252e6e6f610`): after applying the same flush-chain pattern to that worker's consumer-side shim, traces landed on every request instead of only at the batch boundary. This PR lifts that workaround into the library.

Closes #7

🤖 Generated with Claude Code



Development

Successfully merging this pull request may close these issues.

bug: StackbiltCloudExporter silently drops spans in low-volume Workers (double-buffering + ephemeral isolates)