Reduce transactions scope #110
Conversation
* Rename the `transaction` module to `sync` in the `records-lib` crate.
* Rename the `within` function in this module to `transaction_within`.
* Add a new `lock_reading_within` function in this new module, similar to `transaction_within`, that wraps the execution of a provided closure with table locks in read mode.
* Make the `/overview` endpoint use this function instead of `[transaction_]within`, and only for the parts that need it (when actually building the leaderboard array).

Refs: #105.
* Replace the table locks with a simple read-only transaction that wraps the same logic, so the new transaction is smaller than the previous one.
* Rename the `transaction_within` function to just `transaction`, base it on a new `transaction_with_config` function that allows specifying the access mode and isolation level, and use it for the `/overview` endpoint (see the sketch below).

Refs: #105.
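To make the shape of this concrete, here is a minimal sketch of what such a `transaction_with_config` helper could look like. It is not the crate's actual code: sqlx with MySQL is assumed, and the `AccessMode`/`IsolationLevel` enums, closure shape, and signature are illustrative.

```rust
use std::future::Future;
use std::pin::Pin;

use sqlx::MySqlConnection;

pub enum AccessMode {
    ReadOnly,
    ReadWrite,
}

pub enum IsolationLevel {
    ReadCommitted,
    RepeatableRead,
    Serializable,
}

/// Runs `f` inside a transaction started with the given access mode and
/// isolation level, committing on success and rolling back on error.
pub async fn transaction_with_config<T, E, F>(
    conn: &mut MySqlConnection,
    access: AccessMode,
    isolation: IsolationLevel,
    f: F,
) -> Result<T, E>
where
    E: From<sqlx::Error>,
    F: for<'c> FnOnce(
        &'c mut MySqlConnection,
    ) -> Pin<Box<dyn Future<Output = Result<T, E>> + Send + 'c>>,
{
    let isolation = match isolation {
        IsolationLevel::ReadCommitted => "READ COMMITTED",
        IsolationLevel::RepeatableRead => "REPEATABLE READ",
        IsolationLevel::Serializable => "SERIALIZABLE",
    };
    // Applies to the next transaction only.
    sqlx::query(&format!("SET TRANSACTION ISOLATION LEVEL {isolation}"))
        .execute(&mut *conn)
        .await?;

    let begin = match access {
        AccessMode::ReadOnly => "START TRANSACTION READ ONLY",
        AccessMode::ReadWrite => "START TRANSACTION READ WRITE",
    };
    sqlx::query(begin).execute(&mut *conn).await?;

    // Run the caller's queries, then commit or roll back.
    match f(conn).await {
        Ok(out) => {
            sqlx::query("COMMIT").execute(&mut *conn).await?;
            Ok(out)
        }
        Err(err) => {
            let _ = sqlx::query("ROLLBACK").execute(&mut *conn).await;
            Err(err)
        }
    }
}
```

The `/overview` handler would then call such a helper with `AccessMode::ReadOnly`, wrapping only the queries that actually build the leaderboard array.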
Also remove unused import. Refs: #105.
* Reduce the scope of Redis connections in various places, which amounts to
changing the type of the arguments of functions requiring a Redis
connection from a multiplexed connection to the Redis pool. A
connection is then acquired later, in the bodies of the various functions.
This ensures that when running a Redis transaction, the function
executing it has exclusive ownership of the Redis connection (a
multiplexed connection could otherwise be shared between many
operations).
* This was mainly done for the new implementation of the
`ranks::get_rank` function. The new implementation allows getting the
rank of a time in a leaderboard even if the time isn't present in it.
On top of that, it keeps the leaderboard unchanged and is
concurrency-safe (two rank retrievals can run on the same leaderboard
at the same time without overlapping); see the first sketch after this
commit message. This should fix a lot of bugs and make the API more
robust against race conditions.
* The SQL insert of the `event_edition_records` row is now part of the
transaction with the insert of the `records` and `checkpoint_times`
rows. This is reflected for the `/event/_/_/player/finished` endpoint.
* Transaction scopes were reduced so they no longer include instructions
unrelated to SQL queries (ban checking, map retrieval, etc.). For the
record-saving handler, the transaction is now limited to the inserts
into the `records`, `checkpoint_times`, and `event_edition_records`
tables.
* Fix a race condition when multiple saves by the same player on the
same map could happen at the same time, which could lead the related
Redis ZSET to keep the wrong version of the leaderboard. The fix is to
lock the rows of the records related to the map within the transaction.
The retrieval of the old record is also done in the same transaction,
to make sure it is the last record committed in the database (see the
second sketch after this commit message).
* Fix a race condition when updating the rank of a player in Redis. When
saving a record, the time in Redis was updated before the transaction
was committed, so any other operation acting on the ZSET in Redis could
overwrite the new time with an outdated version, since it couldn't see
the latest one while the transaction was still uncommitted. This fix is
applied (the second sketch after this commit message illustrates the ordering):
* In the `/event/_/_/player/finished` endpoint handler, when saving
the same record to the original map leaderboard.
* In the `/player/finished` endpoint handler, when saving the same
record to the original map, when the record is done on an event map.
* In the `player_finished` module (which is used by both endpoints
above).
Other changes:
* Increase the size of the range of generated IDs for tests, to avoid
interference.
* Make the saving of mappacks in Redis atomic (using transactions). This
is reflected in the mappack node in the GraphQL API, and in the
socc and player_ranking packages.
Refs: #105.
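Below is a hedged sketch (not the crate's actual code) of the WATCH/MULTI/EXEC pattern the new `ranks::get_rank` relies on, assuming the `redis` crate; the key layout, the probe member encoding, and the signature are assumptions. The probed time is inserted and removed inside one atomic block, so other clients never observe a modified leaderboard; the WATCH retries the whole lookup if the ZSET changed concurrently; and the UNWATCH on the error path corresponds to the follow-up commit further down.

```rust
/// Rank (1-based) that `time` would have in the `key` leaderboard, without
/// the ZSET being observably modified by other clients.
async fn get_rank(
    conn: &mut redis::aio::MultiplexedConnection,
    key: &str,
    player_id: u32,
    time: i64,
) -> redis::RedisResult<i64> {
    loop {
        // Watch the leaderboard: if it changes before EXEC, the transaction
        // is aborted and we retry with a fresh view of the ZSET.
        let _: () = redis::cmd("WATCH").arg(key).query_async(&mut *conn).await?;

        // Temporary member, only visible inside the atomic block.
        let probe = format!("probe:{player_id}");

        // MULTI ... EXEC: insert the probe, read its rank, remove it. The
        // block is applied atomically, so the leaderboard is left unchanged
        // from the point of view of every other client.
        let exec: redis::RedisResult<Option<(i64, i64, i64)>> = redis::pipe()
            .atomic()
            .cmd("ZADD").arg(key).arg(time).arg(&probe)
            .cmd("ZRANK").arg(key).arg(&probe)
            .cmd("ZREM").arg(key).arg(&probe)
            .query_async(&mut *conn)
            .await;

        match exec {
            Err(err) => {
                // Release the WATCH before returning the error, so the
                // connection is not left watching the key.
                let _: () = redis::cmd("UNWATCH").query_async(&mut *conn).await?;
                return Err(err);
            }
            // EXEC was aborted because the watched key changed: retry.
            Ok(None) => continue,
            // Redis ranks are 0-based; expose a 1-based rank.
            Ok(Some((_added, rank, _removed))) => return Ok(rank + 1),
        }
    }
}
```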
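A second hedged sketch covers the record-saving order described above, assuming sqlx/MySQL and the `redis` crate; the table, column, and key names are made up for the illustration. The player's existing rows for the map are locked with `FOR UPDATE`, the old record is read inside the same transaction, and the Redis ZSET is only touched after the COMMIT, so a concurrent reader can no longer re-publish a stale leaderboard over the newly committed time.

```rust
use sqlx::MySqlPool;

async fn save_record(
    db: &MySqlPool,
    redis: &mut redis::aio::MultiplexedConnection,
    map_id: u32,
    player_id: u32,
    time: i64,
) -> anyhow::Result<()> {
    let mut tx = db.begin().await?;

    // Lock this player's rows for the map, so two concurrent saves of the
    // same player on the same map are serialized, and read the old record
    // inside the transaction so it is the last one committed.
    let old_time: Option<i64> = sqlx::query_scalar(
        "SELECT time FROM records WHERE map_id = ? AND record_player_id = ? \
         ORDER BY time ASC LIMIT 1 FOR UPDATE",
    )
    .bind(map_id)
    .bind(player_id)
    .fetch_optional(&mut *tx)
    .await?;

    // The inserts into `records`, `checkpoint_times`, and
    // `event_edition_records` all belong to this same transaction.
    sqlx::query("INSERT INTO records (map_id, record_player_id, time) VALUES (?, ?, ?)")
        .bind(map_id)
        .bind(player_id)
        .bind(time)
        .execute(&mut *tx)
        .await?;

    tx.commit().await?;

    // Only after COMMIT: publish the new time to the Redis leaderboard, so
    // any concurrent operation acting on the ZSET can no longer overwrite it
    // with a version older than what is now committed.
    if old_time.map_or(true, |old| time < old) {
        let _: () = redis::cmd("ZADD")
            .arg(format!("lb:{map_id}"))
            .arg(time)
            .arg(player_id)
            .query_async(&mut *redis)
            .await?;
    }

    Ok(())
}
```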
* Add a new `ranks::get_rank_in_session` function along with a new `RankingSession` type. The latter is just a wrapper around a Redis connection retrieved from the pool.
* It is designed this way so that the same connection can be reused between rank retrievals (avoiding creating a new connection each time), while not exposing the Redis connection outside the body.

Refs: #105.
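As a rough illustration of that design (not the crate's real code: a deadpool-redis pool is assumed, and the method name, signature, and 1-based rank convention are guesses), the session owns a single pooled connection and only exposes rank lookups, so callers never handle the connection themselves:

```rust
/// Wraps one Redis connection taken from the pool for a batch of rank
/// retrievals, without exposing the connection itself.
pub struct RankingSession {
    conn: deadpool_redis::Connection,
}

impl RankingSession {
    /// Acquire a single connection from the pool for the whole session.
    pub async fn new(pool: &deadpool_redis::Pool) -> Result<Self, deadpool_redis::PoolError> {
        Ok(Self {
            conn: pool.get().await?,
        })
    }

    /// Rank (1-based) of `player_id` in the `key` leaderboard, reusing the
    /// session's connection for every call.
    pub async fn get_rank_in_session(
        &mut self,
        key: &str,
        player_id: u32,
    ) -> redis::RedisResult<Option<i64>> {
        let rank: Option<i64> = redis::cmd("ZRANK")
            .arg(key)
            .arg(player_id)
            .query_async(&mut self.conn)
            .await?;
        Ok(rank.map(|rank| rank + 1))
    }
}
```

A caller building a leaderboard page can then create one `RankingSession` from the pool and call `get_rank_in_session` in a loop, instead of acquiring a fresh connection per lookup.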
Pull request overview
This PR significantly refactors transaction management and Redis synchronization patterns to reduce transaction scope, fix race conditions, and improve concurrency safety in leaderboard operations. The changes address issue #105 by implementing smaller, more focused transactions and ensuring proper ordering of SQL and Redis updates.
Key changes:
- Renamed and enhanced `transaction.rs` to `sync.rs` with a new `transaction_with_config` function supporting custom isolation levels and access modes
- Introduced a `RankingSession` type for efficient batch rank retrieval, reusing Redis connections
- Reimplemented `get_rank` to be concurrency-safe using the Redis WATCH/MULTI/EXEC pattern, leaving leaderboards unchanged
Reviewed changes
Copilot reviewed 24 out of 24 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| `crates/records_lib/src/sync.rs` | New module replacing `transaction.rs` with enhanced transaction configuration support |
| `crates/records_lib/src/transaction.rs` | Removed in favor of `sync.rs` |
| `crates/records_lib/src/ranks.rs` | Added `RankingSession` and reimplemented `get_rank` with WATCH/MULTI/EXEC for concurrency safety |
| `crates/records_lib/src/leaderboard.rs` | Changed to use `RedisPool` and `RankingSession` for batch operations |
| `crates/records_lib/src/mappack.rs` | Refactored to use `RedisPool` with atomic pipelines, added read-only transaction optimization |
| `crates/game_api/src/http/player_finished.rs` | Added row locking, moved Redis updates after transaction commits, consolidated `event_edition_records` insertion |
| `crates/game_api/src/http/player.rs` | Added transaction wrapping and post-commit Redis updates for original map records |
| `crates/game_api/src/http/event.rs` | Removed transaction wrapper, moved `event_edition_records` logic into `player_finished` |
| `crates/game_api/src/http/overview.rs` | Refactored to use read-only transactions and update the leaderboard before queries |
| `crates/graphql-api/src/objects/*.rs` | Updated to use the `RedisPool` and `RankingSession` pattern consistently |
| `crates/socc/src/*.rs` | Updated to use `RedisPool` with scoped connections and atomic pipelines |
| `crates/admin/src/*.rs` | Updated to use the `sync` module and `RedisPool` |
| `crates/game_api/tests/*.rs` | Increased map ID range from 1-100 to 1-100000 to reduce test collision probability |
| `crates/game_api/tests/base.rs` | Improved error message for failed tests |
Also add an unwatch in case an error is raised before exiting the loop. Refs: #105.
This PR reduces the scope of transactions in the API (except those in the admin crate, which will be handled later since they are not urgent).
It also fixes a lot of race conditions between operations that could read and update the same leaderboard at the same time, which could cause the related ZSET in Redis to keep an outdated version.
The `ranks::get_rank` function from the records-lib package now leaves the related ZSET in Redis unchanged and performs the whole operation atomically. See the commit details for more information.
Closes: #105.