feat: add configurable limit on concurrent bulk dispatch goroutines #6751
Merged
ycombinator merged 7 commits into elastic:main on Apr 8, 2026
Conversation
Contributor
This pull request does not have a backport label. Could you fix it @ycombinator? 🙏
michel-laterman (Contributor) requested changes on Apr 6, 2026:
We really should be consistent with the type of maxPendingBulkDispatches: either make it an int64 in both the implementation and the config, or just an int.
Also, the new tests don't follow our existing test structure, which uses the require test package.
michel-laterman (Contributor) previously approved these changes on Apr 7, 2026:
lgtm, should we fix the linter warnings?
Force-pushed db3c667 to 487b09a
michel-laterman approved these changes on Apr 8, 2026
swiatekm approved these changes on Apr 8, 2026
When agent count exceeds what the bulk engine can process, goroutines pile up in dispatch() waiting to send on the 32-slot channel. Each blocked goroutine holds its stack plus the bulkT object. With 30k+ agents under upgrade/policy storms, this grows unbounded until OOM.

This adds an optional cap (max_pending_dispatches) on concurrent dispatch goroutines. When the limit is reached, new dispatches are rejected immediately with ErrTooManyDispatches, which maps to HTTP 429. Agents retry on their next check-in interval, spreading load over time. The default is 0 (no limit), so existing deployments are unaffected.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
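A minimal sketch of that mechanism, with illustrative names throughout; only ErrTooManyDispatches, the 32-slot channel, and the 0-means-unlimited default come from the description above:

```go
package bulk

import (
	"context"
	"errors"
	"sync/atomic"
)

// ErrTooManyDispatches is the rejection error named in the PR; the
// rest of this sketch (type and field names) is illustrative.
var ErrTooManyDispatches = errors.New("too many bulk dispatches")

type bulkT struct{} // stand-in for the real queued-request type

type Bulker struct {
	queue                chan bulkT // capacity 32 in the real engine
	maxPendingDispatches int64      // 0 disables the limit
	pendingDispatches    atomic.Int64
}

func (b *Bulker) dispatch(ctx context.Context, blk bulkT) error {
	// Check the cap before blocking on the channel, so a rejected
	// request never parks a goroutine stack plus a bulkT in memory.
	if b.maxPendingDispatches > 0 {
		if b.pendingDispatches.Add(1) > b.maxPendingDispatches {
			b.pendingDispatches.Add(-1)
			return ErrTooManyDispatches
		}
		defer b.pendingDispatches.Add(-1)
	}

	select {
	case b.queue <- blk:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}
```

The counter is incremented first and rolled back on rejection, so two racing dispatches cannot both slip under the cap.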
Fixes errcheck linter warnings by capturing the error from bytes.Buffer.WriteString and asserting via require.NoError. The WriteString calls inside the worker goroutines are hoisted to the test goroutine so require can be used safely.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
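A hedged sketch of that pattern (the test name and payload are illustrative): require may only fail a test from the goroutine running it, so the WriteString error is captured and asserted up front and the workers receive the pre-built payload:

```go
package bulk

import (
	"bytes"
	"sync"
	"testing"

	"github.com/stretchr/testify/require"
)

func TestDispatchPayload(t *testing.T) {
	// Build the payload on the test goroutine, where require is safe,
	// and capture the error that errcheck flagged as ignored.
	var buf bytes.Buffer
	_, err := buf.WriteString(`{"index":{}}`)
	require.NoError(t, err)
	payload := buf.Bytes()

	// Workers only consume the pre-built payload; no require calls here.
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = payload
		}()
	}
	wg.Wait()
}
```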
Force-pushed 487b09a to 3285fa6
What is the problem this PR solves?
During a 30k Serverless scale test, 22 of 39 fleet-server pods were OOMKilled. Analysis of the captured pod logs showed:
- Goroutines blocked in dispatch(), waiting to enqueue onto the bulk engine's channel (capacity 32).
- Each blocked goroutine holding its stack plus the bulkT object.
- With no upper bound on concurrent dispatches, goroutines piled up until pods hit their memory limit (~154 Mi) and were killed.
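A toy reproduction of the failure mode (not fleet-server code; the sizes and counts are arbitrary). Each blocked sender holds a goroutine stack plus its payload, so memory grows with the number of in-flight requests rather than with the channel capacity:

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	// Stand-in for the 32-slot bulk channel.
	queue := make(chan [512]byte, 32)

	// Slow consumer: the "bulk engine" cannot keep up with arrivals.
	go func() {
		for range queue {
			time.Sleep(10 * time.Millisecond)
		}
	}()

	// One goroutine per incoming request, each blocking on the full
	// channel and holding its stack and payload until consumed.
	for i := 0; i < 10000; i++ {
		go func() { queue <- [512]byte{} }()
	}

	time.Sleep(500 * time.Millisecond)
	fmt.Println("goroutines:", runtime.NumGoroutine()) // ~10k still parked
}
```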
How does this PR solve the problem?
This adds an optional cap on concurrent dispatch goroutines to bound memory usage.
The limit check runs at the top of dispatch(), before blocking on the channel (fleet-server/internal/pkg/bulk/engine.go, lines 608 to 622 at 1f456fb).
When the limit is reached, the dispatch is rejected immediately with ErrTooManyDispatches, which maps to HTTP 429 so agents retry on their next check-in interval (fleet-server/internal/pkg/api/error.go, lines 181 to 189 at 1f456fb).
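A sketch of what such a mapping could look like; the HTTPErrResp shape and function name follow a common error-translation style but are assumptions here, since only ErrTooManyDispatches and the 429 status are confirmed above:

```go
package api

import (
	"errors"
	"net/http"
)

var ErrTooManyDispatches = errors.New("too many bulk dispatches")

// HTTPErrResp is an assumed response shape, not the PR's actual type.
type HTTPErrResp struct {
	StatusCode int
	Error      string
	Message    string
}

func NewHTTPErrResp(err error) HTTPErrResp {
	switch {
	case errors.Is(err, ErrTooManyDispatches):
		// 429 tells the agent to back off; it retries on its next
		// check-in, spreading load over time.
		return HTTPErrResp{
			StatusCode: http.StatusTooManyRequests,
			Error:      "TooManyDispatches",
			Message:    "server is overloaded, try again later",
		}
	default:
		return HTTPErrResp{
			StatusCode: http.StatusInternalServerError,
			Error:      "InternalServerError",
			Message:    err.Error(),
		}
	}
}
```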
The limit is configurable via max_pending_dispatches in the bulk config (fleet-server/internal/pkg/config/input.go, line 52 at 1f456fb).
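A sketch of the new knob as a go-ucfg-style struct field; the struct name is illustrative, and the int64 type follows the reviewer's suggestion above (the merged type may differ):

```go
package config

// Bulk is an illustrative holder for the bulk-engine settings; only
// the max_pending_dispatches key and its default come from the PR.
type Bulk struct {
	// MaxPendingDispatches caps concurrent dispatch goroutines.
	// 0 (the default) means no limit, preserving current behavior.
	MaxPendingDispatches int64 `config:"max_pending_dispatches"`
}
```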
The default is 0 (no limit) so existing deployments are unaffected. Operators opt in by setting a value appropriate for their deployment size.
How to test this PR locally
Design Checklist
Checklist
Related issues
🤖 Generated with Claude Code