
feat: implement batch processing for database writes#274

Open
detecti1 wants to merge 2 commits into freol35241:master from detecti1:feat/batch-processing

Conversation


@detecti1 detecti1 commented Jul 1, 2025

Description

This PR enhances LTSS with configurable batch processing controls to optimize database write performance. The following user-configurable settings have been introduced and documented in README.md:

- `batch_size`: Number of events to group before writing to the database.
- `batch_timeout_ms`: Maximum time (ms) to wait before writing a batch, ensuring timely writes even during periods of low event rates.
- `poll_interval_ms`: Internal polling interval for event collection.

With these options, LTSS collects events in memory and writes them in batches, instead of sending every event to the database right away.
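For illustration, the new options would sit alongside the existing LTSS settings in Home Assistant's `configuration.yaml`. The values below are illustrative only, not the defaults:

```yaml
ltss:
  db_url: postgresql://user:password@host:5432/homeassistant
  batch_size: 100          # flush once 100 events have accumulated
  batch_timeout_ms: 5000   # ...or after 5 s, whichever comes first
  poll_interval_ms: 100    # how often the worker checks for new events
```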

Rationale

In my setup, I use a cloud-hosted TimescaleDB instance as the LTSS backend. I noticed that writes sometimes became very slow when the network had high latency and/or events arrived quickly. Since the original LTSS wrote every single event to the database in one thread, high network latency caused memory usage and write delays to grow over time, as more and more events waited in memory.

Batch processing mitigates this by reducing the number of database round trips, sending several events in a single write. This makes writes faster and more predictable, even on slower or unstable networks.
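The core idea can be sketched as a small batching worker: events go into a queue, and a flush happens when either the batch fills up or the timeout elapses. This is an illustrative sketch only; the class and method names are my own, not the actual LTSS implementation in this PR:

```python
import time
from queue import Queue, Empty


class BatchWriter:
    """Collect events in memory and flush them to the database in batches.

    Illustrative sketch: `write_fn` stands in for the real database
    write (e.g. a single multi-row INSERT for the whole batch).
    """

    def __init__(self, write_fn, batch_size=100, batch_timeout_ms=5000,
                 poll_interval_ms=100):
        self._write_fn = write_fn                       # callable taking a list of events
        self._batch_size = batch_size
        self._batch_timeout = batch_timeout_ms / 1000.0
        self._poll_interval = poll_interval_ms / 1000.0
        self._queue = Queue()
        self._buffer = []
        self._last_flush = time.monotonic()

    def submit(self, event):
        """Called from the event handler; never blocks on the database."""
        self._queue.put(event)

    def run_once(self):
        """One iteration of the worker loop: poll for an event, flush if due."""
        try:
            self._buffer.append(self._queue.get(timeout=self._poll_interval))
        except Empty:
            pass
        timed_out = time.monotonic() - self._last_flush >= self._batch_timeout
        if self._buffer and (len(self._buffer) >= self._batch_size or timed_out):
            self._write_fn(self._buffer)                # one write per batch
            self._buffer = []
            self._last_flush = time.monotonic()
```

With `batch_size=3`, submitting three events and running the loop three times produces a single flush containing all three, rather than three separate writes.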

Testing

The updated batch processing feature has been running in my home production setup for more than two weeks. During this time, both memory usage and write throughput have been stable, with no problems observed. I have also added unit tests.

Let me know if you need anything else or want more changes!

