forked from valkey-io/valkey
defrag: playground with several defrag configs #2
Open
zvi-code wants to merge 15 commits into unstable from defrag_playground
Conversation
Force-pushed from 9d85775 to cf5b79e
Signed-off-by: Nitai Caro <caronita@amazon.com>
Co-authored-by: Nitai Caro <caronita@amazon.com>
This PR is 1 of 3 PRs intended to achieve the goal of 1 million requests per second, as detailed by [dan touitou](https://github.com/touitou-dan) in valkey-io#22. This PR modifies the IO threads to be fully asynchronous, which is a first and necessary step to allow more work offloading and better utilization of the IO threads.

### Current IO threads state:

Valkey IO threads were introduced in Redis 6.0 to allow better utilization of multi-core machines. Before this, Redis was single-threaded and could only use one CPU core for network and command processing. The introduction of IO threads helps in offloading the IO operations to multiple threads.

**Current IO Threads flow:**

1. Initialization: When Redis starts, it initializes a specified number of IO threads. These threads are in addition to the main thread. Each thread starts with an empty list; the main thread populates that list in each event loop with pending-read clients or pending-write clients.
2. Read Phase: The main thread accepts incoming connections and reads requests from clients. The reading of requests is offloaded to IO threads: the main thread puts the ready-to-read clients in a list and sets the global io_threads_op to IO_THREADS_OP_READ; the IO threads pick the clients up, perform the read operation and parse the first incoming command.
3. Command Processing: After reading the requests, command processing is still single-threaded and handled by the main thread.
4. Write Phase: Similar to the read phase, the write phase is also offloaded to IO threads. The main thread prepares the response in the clients' output buffers, puts the clients in the list, and sets the global io_threads_op to IO_THREADS_OP_WRITE. The IO threads then pick the clients up and perform the write operation to send the responses back to clients.
5. Synchronization: The main thread communicates with the IO threads about how many jobs are left per thread via an atomic counter. The main thread doesn't access the clients while they are being handled by the IO threads.

**Issues with current implementation:**

* Underutilized Cores: The current implementation of IO threads leads to the underutilization of CPU cores.
  * The main thread remains responsible for a significant portion of IO-related tasks that could be offloaded to IO threads.
  * When the main thread is processing clients' commands, the IO threads are idle for a considerable amount of time.
  * Notably, the main thread's performance during the IO-related tasks is constrained by the speed of the slowest IO thread.
* Limited Offloading: Currently, since the main thread waits synchronously for the IO threads, the threads perform only read-parse and write operations, with parsing done only for the first command. If the threads could work asynchronously, we could offload more work to them, reducing the load on the main thread.
* TLS: Currently, we don't support IO threads with TLS (where offloading IO would be more beneficial), since TLS read/write operations are not thread-safe with the current implementation.

### Suggested change

Non-blocking main thread - The main thread and IO threads will operate in parallel to maximize efficiency. The main thread will not be blocked by IO operations. It will continue to process commands independently of the IO threads' activities.

**Implementation details**

**Inter-thread communication.**

* We use a static, lock-free ring buffer of fixed size (2048 jobs) for the main thread to send jobs and for the IO threads to receive them. If the ring buffer fills up, the main thread will handle the task itself, acting as back pressure (in case IO operations are more expensive than command processing). A static ring buffer is a better candidate than a dynamic job queue as it eliminates the need for allocation/freeing per job.
* An IO job will be in the format `[void *function-call-back | void *data]`, where data is typically a client to read from / write to and the function pointer is the function to be called with the data, for example readQueryFromClient. Using this format, we can later offload other types of work to the IO threads (see the sketch after this list).
* The ring buffer is one way, from the main thread to the IO threads. Upon a read/write event, the main thread sends a read/write job; then, in beforeSleep, it iterates over the pending read/write clients, checking for each one whether the IO thread has already finished handling it. The IO thread signals that it has finished handling a client read/write by toggling an atomic flag read_state / write_state on the client struct.
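A minimal sketch of this job format and ring buffer, assuming illustrative names (`iojob`, `iojobqueue`) rather than the actual io_threads code:

```c
/* Hedged sketch of a fixed-size, single-producer/single-consumer job ring:
 * callback+data pairs, no per-job allocation. Names are illustrative. */
#include <stdatomic.h>
#include <stddef.h>

#define IO_JOB_QUEUE_SIZE 2048 /* static size, as described above */

typedef struct {
    void (*callback)(void *data); /* e.g. readQueryFromClient */
    void *data;                   /* usually a client pointer */
} iojob;

typedef struct {
    iojob jobs[IO_JOB_QUEUE_SIZE];
    _Atomic size_t head; /* advanced by the consumer (IO thread) */
    _Atomic size_t tail; /* advanced by the producer (main thread) */
} iojobqueue;

/* Main thread: returns 0 when the ring is full, in which case the caller
 * performs the job inline, acting as back pressure. */
static int iojobqueue_push(iojobqueue *q, void (*cb)(void *), void *data) {
    size_t tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&q->head, memory_order_acquire);
    if (tail - head == IO_JOB_QUEUE_SIZE) return 0; /* full: run inline */
    q->jobs[tail % IO_JOB_QUEUE_SIZE] = (iojob){cb, data};
    atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
    return 1;
}
```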
**Thread Safety**

As suggested in this solution, the IO threads read from and write to the clients' buffers while the main thread may access those clients. We must ensure no race conditions or unsafe access occurs while keeping the Valkey code simple and lock free.

**Minimal Action in the IO Threads**

The main change is to limit the IO thread operations to the bare minimum. The IO thread will access only the client's struct and only the necessary fields in this struct. The IO threads will be responsible for the following:

* Read Operation: The IO thread will only read and parse a single command. It will not update the server stats or handle read or parsing errors. These tasks will be taken care of by the main thread.
* Write Operation: The IO thread will only write the available data. It will not free the client's replies, handle write errors, or update the server statistics.

To achieve this without code duplication, the read/write code has been refactored into smaller, independent components:

* Functions that perform only the read/parse/write calls.
* Functions that handle the read/parse/write results.

This refactor accounts for the majority of the modifications in this PR.

**Client Struct Safe Access**

As we ensure that the IO threads access memory only within the client struct, we need to ensure thread safety only for the shared fields of the client struct.

* Query Buffer
  * Command parsing - The main thread will not try to parse a command from the query buffer when a client is offloaded to the IO thread.
  * Client's memory checks in client-cron - The main thread will not access the client query buffer if the client is offloaded, and will handle the querybuf grow/shrink when the client is back.
  * CLIENT LIST command - The main thread will busy-wait for the IO thread to finish handling the client, falling back to the current behavior where the main thread waits for the IO thread to finish its processing.
* Output Buffer
  * The IO thread will not change the client's bufpos and won't free the client's reply lists. These actions will be done by the main thread on the client's return from the IO thread.
  * bufpos / `block->used`: As the main thread may change the bufpos, the `reply-block->used`, or add/delete blocks to the reply list while the IO thread writes, we add two fields to the client struct: io_last_bufpos and io_last_reply_block. The IO thread will write until io_last_bufpos, which was set by the main thread before sending the client to the IO thread (see the sketch after this list). If more data has been added to the cob in between, it will be written in the next write job. In addition, the main thread will not trim or merge reply blocks while the client is offloaded.
* Parsing Fields
  * The client's cmd, argc, argv, reqtype, etc., are set during parsing.
  * The main thread will indicate to the IO thread not to parse a cmd if the client is not reset. In this case, the IO thread will only read from the network and won't attempt to parse a new command.
  * As stated before, the main thread won't access `c->cmd`/`c->argv` in the CLIENT LIST command; it will busy-wait for the IO threads.
* Client Flags
  * `c->flags`, which may be changed by the main thread in multiple places, won't be accessed by the IO thread. Instead, the main thread will set `c->io_flags` with the information necessary for the IO thread to know the client's state.
* Client Close
  * On freeClient, the main thread will busy-wait for the IO thread to finish processing the client's read/write before proceeding to free the client.
* Client's Memory Limits
  * The IO thread won't handle the qb/cob limits. In case a client crosses the qb limit, the IO thread will stop reading for it, letting the main thread know that the client crossed the limit.
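A hedged sketch of the io_last_bufpos handshake from the list above (field and function names are illustrative; the real client struct and write path differ, and partial-write handling is elided):

```c
#include <stdatomic.h>
#include <unistd.h>

typedef struct client {
    int fd;
    char buf[16 * 1024];
    size_t bufpos;           /* may keep growing on the main thread */
    size_t io_last_bufpos;   /* snapshot taken before offloading */
    _Atomic int write_state; /* toggled by the IO thread when done */
} client;

/* Main thread, before handing the client to an IO thread. */
static void prepareWriteOffload(client *c) {
    c->io_last_bufpos = c->bufpos; /* IO thread writes only up to here */
    atomic_store_explicit(&c->write_state, 0, memory_order_release);
}

/* IO thread: write only the snapshotted range; anything appended in the
 * meantime is picked up by the next write job. */
static void ioThreadWrite(client *c) {
    ssize_t nwritten = write(c->fd, c->buf, c->io_last_bufpos);
    (void)nwritten; /* error handling stays on the main thread */
    atomic_store_explicit(&c->write_state, 1, memory_order_release);
}
```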
**TLS**

TLS is currently not supported with IO threads for the following reasons:

1. Pending reads - If SSL has pending data that has already been read from the socket, there is a risk of not calling the read handler again. To handle this, a list is used to hold the pending clients. With IO threads, multiple threads can access the list concurrently.
2. Event loop modification - Currently, the TLS code registers/unregisters the file descriptor from the event loop depending on the read/write results. With IO threads, multiple threads can modify the event loop struct simultaneously.
3. The same client can be sent to 2 different threads concurrently (redis/redis#12540).

Those issues were handled in the current PR:

1. The IO thread only performs the read operation. The main thread will check for pending reads after the client returns from the IO thread and will be the only one to access the pending list.
2. The registering/unregistering of events will be similarly postponed and handled by the main thread only.
3. Each client is sent to the same dedicated thread (`c->id % num_of_threads`).

**Sending Replies Immediately with IO threads.**

Currently, after processing a command, we add the client to the pending_writes_list. Only after processing all the clients do we send all the replies. Since the IO threads now work asynchronously, we can send the reply immediately after processing the client's request, reducing the command latency. However, if we are using AOF=always, we must wait for the AOF buffer to be written, in which case we revert to the current behavior.

**IO threads dynamic adjustment**

Currently, we use an all-or-nothing approach when activating the IO threads. The current logic is as follows: if the number of pending write clients is greater than twice the number of threads (including the main thread), we enable all threads; otherwise, we enable none. For example, if 8 IO threads are defined, we enable all 8 threads if there are 16 pending clients; else, we enable none. It makes more sense to enable partial activation of the IO threads: if we have 10 pending clients, we will enable 5 threads, and so on. This approach allows for a more granular and efficient allocation of resources based on the current workload.
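A minimal sketch of the partial activation rule described above, assuming a simple pending-clients/2 heuristic (illustrative, not the actual server code):

```c
/* One IO thread per two pending clients, capped at the configured count. */
static int ioThreadsToActivate(int pending_clients, int io_threads_num) {
    int threads = pending_clients / 2; /* e.g. 10 pending -> 5 threads */
    if (threads > io_threads_num) threads = io_threads_num;
    return threads;
}
```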
In addition, the user will now be able to change the number of IO threads at runtime. For example, when decreasing the number of threads from 4 to 2, threads 3 and 4 will be closed after flushing their job queues.

**Tests**

Currently, we run the io-threads tests with 4 IO threads (https://github.com/valkey-io/valkey/blob/443d80f1686377ad42cbf92d98ecc6d240325ee1/.github/workflows/daily.yml#L353). This means that we will not activate the IO threads unless there are 8 (threads * 2) pending write clients per single loop, which is unlikely to happen in most tests, meaning the IO threads are not currently being tested.

To force the main thread to offload work to the IO threads regardless of the number of pending events, we add an events-per-io-thread configuration with a default value of 2. When set to 0, this configuration will force the main thread to always offload work to the IO threads.

When we offload every single read/write operation to the IO threads, the IO threads run at 100% CPU. When running multiple tests concurrently, some tests fail as a result of larger-than-expected command latencies. To address this issue, we had to add some `after` or `wait_for` calls to some of the tests to ensure they pass with IO threads as well.

Signed-off-by: Uri Yagelnik <uriy@amazon.com>
…DelEvent (valkey-io#725)

for kqueue: EV_DELETE fails if the specified fd is not associated with the kqfd. If EVFILT_WRITE is associated but EVFILT_READ is not, then calling aeApiDelEvent with mask = -1 or `(AE_READABLE|AE_WRITABLE)` will cause the kevent() call to fail with errno = 2 (No such file or directory) and EVFILT_WRITE will not be dissociated. So we need to calculate the actual mask to be removed, instead of passing in whatever the user provides.

for evport: The comment clearly states that aeApiDelEvent "rely on the fact that our caller has already updated the mask in the eventLoop".

for epoll: There's no need to calculate the "actual mask" twice, once in `aeDeleteFileEvent` and again in `aeApiDelEvent`; let's just use the mask recorded in the eventLoop.

Fixes valkey-io#715

Signed-off-by: wkgcass <wkgcass@hotmail.com>
Co-authored-by: Andy Pan <i@andypan.me>
Co-authored-by: Binbin <binloveplay1314@qq.com>
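A hedged sketch of the mask-intersection idea for the kqueue backend (an illustrative helper, not the actual ae_kqueue.c patch; the AE_* values mirror ae.h):

```c
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>

#define AE_READABLE 1 /* as in ae.h */
#define AE_WRITABLE 2

/* Only EV_DELETE filters that are actually associated with the kqfd:
 * intersect the caller's mask (possibly -1) with the registered mask. */
static void delKqueueFilters(int kqfd, int fd, int registered_mask, int delmask) {
    struct kevent ke;
    int realmask = registered_mask & delmask;
    if (realmask & AE_READABLE) {
        EV_SET(&ke, fd, EVFILT_READ, EV_DELETE, 0, 0, NULL);
        kevent(kqfd, &ke, 1, NULL, 0, NULL);
    }
    if (realmask & AE_WRITABLE) {
        EV_SET(&ke, fd, EVFILT_WRITE, EV_DELETE, 0, 0, NULL);
        kevent(kqfd, &ke, 1, NULL, 0, NULL);
    }
}
```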
Test case "EVAL - Scripts do not block on waitaof" observed to fail in e.g. https://github.com/valkey-io/valkey/actions/runs/9860131487/job/27233756421?pr=688 It can happen that the local AOF has been written and 1 is returned here where 0 is expected. Writing a key inside the EVAL script makes sure there's no time to write the AOF. Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
I noticed in valkey-io#738 that we don't properly check ULong config boundaries and made the change there. I'm pulling that particular commit out into this PR, since we don't know yet whether we want to merge the configurable cluster blacklist TTL.

---------

Signed-off-by: Brennan Cathcart <brennancathcart@gmail.com>
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
New configs:
* `cluster-announce-client-ipv4`
* `cluster-announce-client-ipv6`

New module API function:
* `ValkeyModule_GetClusterNodeInfoForClient`, takes a client id and is otherwise just like its non-ForClient cousin.

If configured, one of these IP addresses is reported to each client in CLUSTER SLOTS, CLUSTER SHARDS, CLUSTER NODES and redirects, replacing the IP (`cluster-announce-ip` or the auto-detected IP) of each node. Which one is reported to the client depends on whether the client is connected over IPv4 or IPv6.

Benefits:
* This allows clients using IPv4 to get the IPv4 addresses of all cluster nodes, and IPv6 clients to get the IPv6 addresses.
* This allows the IPs visible to clients to be different from the IPs used between the cluster nodes due to NAT'ing.

The information is propagated in the cluster bus using new Ping extensions. (Old nodes without this feature ignore unknown Ping extensions.)

This adds another dimension to the CLUSTER SLOTS reply. It now depends on the client's use of TLS, the IP address family and the RESP version.

Refactoring: The cached connection type definition is moved from connection.h (it actually has nothing to do with the connection abstraction) to server.h and is changed to a bitmap, with one bit for each of TLS, IPv6 and RESP3, as sketched below.

Fixes valkey-io#337

---------

Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
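A hedged sketch of such a bitmap (the actual flag names in server.h may differ):

```c
/* One bit per property that selects a cached CLUSTER SLOTS reply. */
#define CACHED_CONN_TYPE_TLS   (1 << 0)
#define CACHED_CONN_TYPE_IPV6  (1 << 1)
#define CACHED_CONN_TYPE_RESP3 (1 << 2)

/* Eight possible cached replies, one per bit combination. */
static int cachedReplyIndex(int is_tls, int is_ipv6, int is_resp3) {
    return (is_tls ? CACHED_CONN_TYPE_TLS : 0) |
           (is_ipv6 ? CACHED_CONN_TYPE_IPV6 : 0) |
           (is_resp3 ? CACHED_CONN_TYPE_RESP3 : 0);
}
```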
See valkey-io#770 for details about the failure. We want to prevent the test failures.

Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
…y-io#775) The test fails because, in external mode, another test may have enabled appendonly, causing acklocal to return 1. We could add a CONFIG SET to disable appendonly, but this is not safe either unless we use MULTI. The test does not actually rely on appendonly, so we can just `*` it.

Fixes valkey-io#770.

Signed-off-by: Binbin <binloveplay1314@qq.com>
Inputting a negative number is equivalent to --loop, and inputting a number greater than or equal to 0 will cause the tests to be run one more time.

Signed-off-by: Binbin <binloveplay1314@qq.com>
Allows cluster admins to configure the blacklist TTL as needed, to allow sufficient time for `CLUSTER FORGET` to be executed on every node in the cluster. Config name `cluster-blacklist-ttl`; unit: seconds; default: 60.

---------

Signed-off-by: Brennan Cathcart <brennancathcart@gmail.com>
Force-pushed from cf5b79e to 54e13dd
zvi-code pushed a commit that referenced this pull request on Sep 6, 2024
We are updating this variable in the main thread, and the
child threads can print logs at the same time. This
generates a warning under SANITIZER=thread:
```
WARNING: ThreadSanitizer: data race (pid=74208)
Read of size 4 at 0x000102875c10 by thread T3:
#0 serverLogRaw <null>:52173615 (valkey-server:x86_64+0x10003c556)
#1 _serverLog <null>:52173615 (valkey-server:x86_64+0x10003ca89)
#2 bioProcessBackgroundJobs <null>:52173615 (valkey-server:x86_64+0x1001402c9)
Previous write of size 4 at 0x000102875c10 by main thread (mutexes: write M0):
#0 afterSleep <null>:52173615 (valkey-server:x86_64+0x10004989b)
#1 aeProcessEvents <null>:52173615 (valkey-server:x86_64+0x100031e52)
#2 main <null>:52173615 (valkey-server:x86_64+0x100064a3c)
#3 start <null>:52173615 (dyld:x86_64+0xfffffffffff5c365)
#4 start <null>:52173615 (dyld:x86_64+0xfffffffffff5c365)
```
The refresh of daylight_active is not real time; we update
it in afterSleep, so we don't need strong synchronization
and can use memory_order_relaxed. Note also that we only do
load/store operations on daylight_active, which is an
aligned 32-bit integer, so using memory_order_relaxed will
not provide more consistency than what we have today.
So this is just a cleanup to clear the warning.
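A hedged sketch of the relaxed-atomics pattern described above (not the exact server.c diff):

```c
#include <stdatomic.h>
#include <time.h>

static _Atomic int daylight_active;

/* Main thread, e.g. in afterSleep(): refresh the cached value. */
void updateDaylightActive(void) {
    struct tm tm;
    time_t t = time(NULL);
    localtime_r(&t, &tm);
    atomic_store_explicit(&daylight_active, tm.tm_isdst, memory_order_relaxed);
}

/* Any thread, e.g. the bio threads while formatting log timestamps. */
int getDaylightActive(void) {
    return atomic_load_explicit(&daylight_active, memory_order_relaxed);
}
```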
Signed-off-by: Binbin <binloveplay1314@qq.com>
Signed-off-by: Zvi Schneider <ezvisch@amazon.com>
…dd option to indicate field address range len + create multiple HASH/SET keys according to addr range + add `del` and `delh` commands to delete objects of string, hash, set type + add various scripts used mostly to test defrag

Signed-off-by: Zvi Schneider <ezvisch@amazon.com>
Force-pushed from 54e13dd to 719dbfc
zvi-code pushed a commit that referenced this pull request on Aug 1, 2025
Fix new unittest networking use-after-free error
```
==96611==ERROR: AddressSanitizer: heap-use-after-free on address 0x503000075e00 at pc 0x55e52cbe1495 bp 0x7ffd9e1fc690 sp 0x7ffd9e1fc688
READ of size 8 at 0x503000075e00 thread T0
#0 0x55e52cbe1494 in freeReplicaReferencedReplBuffer /home/runner/work/valkey/valkey/src/replication.c:401:27
#1 0x55e52cbe7abf in freeClientReplicationData /home/runner/work/valkey/valkey/src/replication.c:1261:5
#2 0x55e52cb17a44 in test_writeToReplica /home/runner/work/valkey/valkey/src/unit/test_networking.c:188:5
#3 0x55e52cac976b in runTestSuite /home/runner/work/valkey/valkey/src/unit/test_main.c:26:28
#4 0x55e52cac9bae in main /home/runner/work/valkey/valkey/src/unit/test_main.c:61:14
#5 0x7fded4c2a1c9 (/lib/x86_64-linux-gnu/libc.so.6+0x2a1c9) (BuildId: 6d64b17fbac799e68da7ebd9985ddf9b5cb375e6)
#6 0x7fded4c2a28a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2a28a) (BuildId: 6d64b17fbac799e68da7ebd9985ddf9b5cb375e6)
#7 0x55e52c9b5ec4 in _start (/home/runner/work/valkey/valkey/src/valkey-unit-tests+0x177ec4) (BuildId: 587aaf0e86abaf104cbb714f290b1436f8ddf614)
0x503000075e00 is located 16 bytes inside of 24-byte region [0x503000075df0,0x503000075e08)
freed by thread T0 here:
#0 0x55e52ca50a7a in free (/home/runner/work/valkey/valkey/src/valkey-unit-tests+0x212a7a) (BuildId: 587aaf0e86abaf104cbb714f290b1436f8ddf614)
#1 0x55e52cb905ba in listEmpty /home/runner/work/valkey/valkey/src/adlist.c:64:9
#2 0x55e52cb179e5 in test_writeToReplica /home/runner/work/valkey/valkey/src/unit/test_networking.c:179:9
#3 0x55e52cac976b in runTestSuite /home/runner/work/valkey/valkey/src/unit/test_main.c:26:28
#4 0x55e52cac9bae in main /home/runner/work/valkey/valkey/src/unit/test_main.c:61:14
#5 0x7fded4c2a1c9 (/lib/x86_64-linux-gnu/libc.so.6+0x2a1c9) (BuildId: 6d64b17fbac799e68da7ebd9985ddf9b5cb375e6)
#6 0x7fded4c2a28a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2a28a) (BuildId: 6d64b17fbac799e68da7ebd9985ddf9b5cb375e6)
#7 0x55e52c9b5ec4 in _start (/home/runner/work/valkey/valkey/src/valkey-unit-tests+0x177ec4) (BuildId: 587aaf0e86abaf104cbb714f290b1436f8ddf614)
previously allocated by thread T0 here:
#0 0x55e52ca50d13 in malloc (/home/runner/work/valkey/valkey/src/valkey-unit-tests+0x212d13) (BuildId: 587aaf0e86abaf104cbb714f290b1436f8ddf614)
#1 0x55e52cbb844f in ztrymalloc_usable_internal /home/runner/work/valkey/valkey/src/zmalloc.c:155:17
#2 0x55e52cbb844f in valkey_malloc /home/runner/work/valkey/valkey/src/zmalloc.c:184:17
#3 0x55e52cb90be6 in listAddNodeTail /home/runner/work/valkey/valkey/src/adlist.c:126:17
#4 0x55e52cb17873 in test_writeToReplica /home/runner/work/valkey/valkey/src/unit/test_networking.c:167:9
#5 0x55e52cac976b in runTestSuite /home/runner/work/valkey/valkey/src/unit/test_main.c:26:28
#6 0x55e52cac9bae in main /home/runner/work/valkey/valkey/src/unit/test_main.c:61:14
#7 0x7fded4c2a1c9 (/lib/x86_64-linux-gnu/libc.so.6+0x2a1c9) (BuildId: 6d64b17fbac799e68da7ebd9985ddf9b5cb375e6)
#8 0x7fded4c2a28a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2a28a) (BuildId: 6d64b17fbac799e68da7ebd9985ddf9b5cb375e6)
#9 0x55e52c9b5ec4 in _start (/home/runner/work/valkey/valkey/src/valkey-unit-tests+0x177ec4) (BuildId: 587aaf0e86abaf104cbb714f290b1436f8ddf614)
```
https://github.com/valkey-io/valkey/actions/runs/13230922385/job/36927929457
Signed-off-by: Uri Yagelnik <uriy@amazon.com>
zvi-code pushed a commit that referenced this pull request on Aug 1, 2025
When calling the command `EVAL error{} 0`, Valkey crashes with the
following stack trace. This patch ensures we never leave the
`err_info.msg` field null when we fail to extract a proper error
message.
```
=== VALKEY BUG REPORT START: Cut & paste starting from here ===
2595901:M 18 Jun 2025 01:20:12.917 # valkey 8.1.2 crashed by signal: 11, si_code: 1
2595901:M 18 Jun 2025 01:20:12.917 # Accessing address: (nil)
2595901:M 18 Jun 2025 01:20:12.917 # Crashed running the instruction at: 0x726f8e57ed1d
------ STACK TRACE ------
EIP:
/usr/lib/libc.so.6(+0x16ed1d) [0x726f8e57ed1d]
2595905 bio_aof
/usr/lib/libc.so.6(+0x9de22) [0x726f8e4ade22]
/usr/lib/libc.so.6(+0x91fda) [0x726f8e4a1fda]
/usr/lib/libc.so.6(+0x9264c) [0x726f8e4a264c]
/usr/lib/libc.so.6(pthread_cond_wait+0x14e) [0x726f8e4a4d1e]
valkey-server *:6379(bioProcessBackgroundJobs+0x1b4) [0x6530abb46db4]
/usr/lib/libc.so.6(+0x957eb) [0x726f8e4a57eb]
/usr/lib/libc.so.6(+0x11918c) [0x726f8e52918c]
2595904 bio_close_file
/usr/lib/libc.so.6(+0x9de22) [0x726f8e4ade22]
/usr/lib/libc.so.6(+0x91fda) [0x726f8e4a1fda]
/usr/lib/libc.so.6(+0x9264c) [0x726f8e4a264c]
/usr/lib/libc.so.6(pthread_cond_wait+0x14e) [0x726f8e4a4d1e]
valkey-server *:6379(bioProcessBackgroundJobs+0x1b4) [0x6530abb46db4]
/usr/lib/libc.so.6(+0x957eb) [0x726f8e4a57eb]
/usr/lib/libc.so.6(+0x11918c) [0x726f8e52918c]
2595901 valkey-server *
/usr/lib/libc.so.6(+0x3def0) [0x726f8e44def0]
/usr/lib/libc.so.6(+0x16ed1d) [0x726f8e57ed1d]
valkey-server *:6379(sdscatfmt+0x894) [0x6530abaa24a4]
valkey-server *:6379(luaCallFunction+0x39a) [0x6530abbc66ea]
valkey-server *:6379(+0x1a0992) [0x6530abbc6992]
valkey-server *:6379(scriptingEngineCallFunction+0x98) [0x6530abbc1298]
valkey-server *:6379(+0x11ff55) [0x6530abb45f55]
valkey-server *:6379(call+0x174) [0x6530aba94454]
valkey-server *:6379(processCommand+0x93d) [0x6530aba958dd]
valkey-server *:6379(processCommandAndResetClient+0x21) [0x6530abaa9d11]
valkey-server *:6379(processInputBuffer+0xe3) [0x6530abaaee83]
valkey-server *:6379(readQueryFromClient+0x65) [0x6530abaaef55]
valkey-server *:6379(+0x18e31a) [0x6530abbb431a]
valkey-server *:6379(aeProcessEvents+0x24a) [0x6530aba790ca]
valkey-server *:6379(aeMain+0x2d) [0x6530aba7938d]
valkey-server *:6379(main+0x3f6) [0x6530aba6e7b6]
/usr/lib/libc.so.6(+0x276b5) [0x726f8e4376b5]
/usr/lib/libc.so.6(__libc_start_main+0x89) [0x726f8e437769]
valkey-server *:6379(_start+0x25) [0x6530aba70235]
2595906 bio_lazy_free
/usr/lib/libc.so.6(+0x9de22) [0x726f8e4ade22]
/usr/lib/libc.so.6(+0x91fda) [0x726f8e4a1fda]
/usr/lib/libc.so.6(+0x9264c) [0x726f8e4a264c]
/usr/lib/libc.so.6(pthread_cond_wait+0x14e) [0x726f8e4a4d1e]
valkey-server *:6379(bioProcessBackgroundJobs+0x1b4) [0x6530abb46db4]
/usr/lib/libc.so.6(+0x957eb) [0x726f8e4a57eb]
/usr/lib/libc.so.6(+0x11918c) [0x726f8e52918c]
4/4 expected stacktraces.
------ STACK TRACE DONE ------
------ REGISTERS ------
2595901:M 18 Jun 2025 01:20:12.920 #
RAX:0000000000000000 RBX:0000726f8dd35663
RCX:0000000000000000 RDX:0000000000000000
RDI:0000000000000000 RSI:0000000000000010
RBP:00007ffc2b821a80 RSP:00007ffc2b821938
R8 :000000000000000c R9 :00006530abc111b8
R10:0000000000000001 R11:0000000000000003
R12:00006530abc49adc R13:00006530abc111b7
R14:0000000000000001 R15:0000000000000001
RIP:0000726f8e57ed1d EFL:0000000000010283
CSGSFS:002b000000000033
2595901:M 18 Jun 2025 01:20:12.921 * hide-user-data-from-log is on, skip logging stack content to avoid spilling user data.
------ INFO OUTPUT ------
# Server
redis_version:7.2.4
server_name:valkey
valkey_version:8.1.2
valkey_release_stage:ga
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:38d65aa7b4148d2c
server_mode:standalone
os:Linux 6.14.6-arch1-1 x86_64
arch_bits:64
monotonic_clock:POSIX clock_gettime
multiplexing_api:epoll
gcc_version:15.1.1
process_id:2595901
process_supervised:no
run_id:a0b75f67a217a81142f17553028c010e86c1ee80
tcp_port:6379
server_time_usec:1750209612917634
uptime_in_seconds:16
uptime_in_days:0
hz:10
configured_hz:10
clients_hz:10
lru_clock:5379148
executable:/home/fusl/valkey-server
config_file:
io_threads_active:0
availability_zone:
listener0:name=tcp,bind=*,bind=-::*,port=6379
# Clients
connected_clients:1
cluster_connections:0
maxclients:10000
client_recent_max_input_buffer:0
client_recent_max_output_buffer:0
blocked_clients:0
tracking_clients:0
pubsub_clients:0
watching_clients:0
clients_in_timeout_table:0
total_watched_keys:0
total_blocking_keys:0
total_blocking_keys_on_nokey:0
paused_reason:none
paused_actions:none
paused_timeout_milliseconds:0
# Memory
used_memory:911824
used_memory_human:890.45K
used_memory_rss:15323136
used_memory_rss_human:14.61M
used_memory_peak:911824
used_memory_peak_human:890.45K
used_memory_peak_perc:100.29%
used_memory_overhead:892232
used_memory_startup:891824
used_memory_dataset:19592
used_memory_dataset_perc:97.96%
allocator_allocated:1845952
allocator_active:1986560
allocator_resident:6672384
allocator_muzzy:0
total_system_memory:67323842560
total_system_memory_human:62.70G
used_memory_lua:34816
used_memory_vm_eval:34816
used_memory_lua_human:34.00K
used_memory_scripts_eval:184
number_of_cached_scripts:1
number_of_functions:0
number_of_libraries:0
used_memory_vm_functions:33792
used_memory_vm_total:68608
used_memory_vm_total_human:67.00K
used_memory_functions:224
used_memory_scripts:408
used_memory_scripts_human:408B
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:1.00
allocator_frag_bytes:0
allocator_rss_ratio:3.36
allocator_rss_bytes:4685824
rss_overhead_ratio:2.30
rss_overhead_bytes:8650752
mem_fragmentation_ratio:17.18
mem_fragmentation_bytes:14431168
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_total_replication_buffers:0
mem_clients_slaves:0
mem_clients_normal:0
mem_cluster_links:0
mem_aof_buffer:0
mem_allocator:jemalloc-5.3.0
mem_overhead_db_hashtable_rehashing:0
active_defrag_running:0
lazyfree_pending_objects:0
lazyfreed_objects:0
# Persistence
loading:0
async_loading:0
current_cow_peak:0
current_cow_size:0
current_cow_size_age:0
current_fork_perc:0.00
current_save_keys_processed:0
current_save_keys_total:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1750209596
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
rdb_saves:0
rdb_last_cow_size:0
rdb_last_load_keys_expired:0
rdb_last_load_keys_loaded:0
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_rewrites:0
aof_rewrites_consecutive_failures:0
aof_last_write_status:ok
aof_last_cow_size:0
module_fork_in_progress:0
module_fork_last_cow_size:0
# Stats
total_connections_received:1
total_commands_processed:0
instantaneous_ops_per_sec:0
total_net_input_bytes:34
total_net_output_bytes:0
total_net_repl_input_bytes:0
total_net_repl_output_bytes:0
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
instantaneous_input_repl_kbps:0.00
instantaneous_output_repl_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
expire_cycle_cpu_milliseconds:0
evicted_keys:0
evicted_clients:0
evicted_scripts:0
total_eviction_exceeded_time:0
current_eviction_exceeded_time:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
pubsubshard_channels:0
latest_fork_usec:0
total_forks:0
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
total_active_defrag_time:0
current_active_defrag_time:0
tracking_total_keys:0
tracking_total_items:0
tracking_total_prefixes:0
unexpected_error_replies:0
total_error_replies:0
dump_payload_sanitizations:0
total_reads_processed:1
total_writes_processed:0
io_threaded_reads_processed:0
io_threaded_writes_processed:0
io_threaded_freed_objects:0
io_threaded_accept_processed:0
io_threaded_poll_processed:0
io_threaded_total_prefetch_batches:0
io_threaded_total_prefetch_entries:0
client_query_buffer_limit_disconnections:0
client_output_buffer_limit_disconnections:0
reply_buffer_shrinks:0
reply_buffer_expands:0
eventloop_cycles:170
eventloop_duration_sum:17739
eventloop_duration_cmd_sum:0
instantaneous_eventloop_cycles_per_sec:9
instantaneous_eventloop_duration_usec:99
acl_access_denied_auth:0
acl_access_denied_cmd:0
acl_access_denied_key:0
acl_access_denied_channel:0
# Replication
role:master
connected_slaves:0
replicas_waiting_psync:0
master_failover_state:no-failover
master_replid:d35a0bb7979f490a60174bb363524431d7eb2428
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:10485760
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:0.012543
used_cpu_user:0.016853
used_cpu_sys_children:0.000000
used_cpu_user_children:0.000000
used_cpu_sys_main_thread:0.012440
used_cpu_user_main_thread:0.016714
# Modules
# Commandstats
# Errorstats
# Latencystats
# Cluster
cluster_enabled:0
# Keyspace
------ CLIENT LIST OUTPUT ------
id=2 addr=127.0.0.1:41372 laddr=127.0.0.1:6379 fd=10 name=*redacted* age=0 idle=0 flags=N capa= db=0 sub=0 psub=0 ssub=0 multi=-1 watch=0 qbuf=0 qbuf-free=0 argv-mem=12 multi-mem=0 rbs=16384 rbp=16384 obl=0 oll=0 omem=0 tot-mem=17060 events=r cmd=eval user=*redacted* redir=-1 resp=2 lib-name= lib-ver= tot-net-in=34 tot-net-out=0 tot-cmds=0
------ CURRENT CLIENT INFO ------
id=2 addr=127.0.0.1:41372 laddr=127.0.0.1:6379 fd=10 name=*redacted* age=0 idle=0 flags=N capa= db=0 sub=0 psub=0 ssub=0 multi=-1 watch=0 qbuf=0 qbuf-free=0 argv-mem=12 multi-mem=0 rbs=16384 rbp=16384 obl=0 oll=0 omem=0 tot-mem=17060 events=r cmd=eval user=*redacted* redir=-1 resp=2 lib-name= lib-ver= tot-net-in=34 tot-net-out=0 tot-cmds=0
argc: 3
argv[0]: "eval"
argv[1]: 7 bytes
argv[2]: 1 bytes
------ EXECUTING CLIENT INFO ------
id=2 addr=127.0.0.1:41372 laddr=127.0.0.1:6379 fd=10 name=*redacted* age=0 idle=0 flags=N capa= db=0 sub=0 psub=0 ssub=0 multi=-1 watch=0 qbuf=0 qbuf-free=0 argv-mem=12 multi-mem=0 rbs=16384 rbp=16384 obl=0 oll=0 omem=0 tot-mem=17060 events=r cmd=eval user=*redacted* redir=-1 resp=2 lib-name= lib-ver= tot-net-in=34 tot-net-out=0 tot-cmds=0
argc: 3
argv[0]: "eval"
argv[1]: 7 bytes
argv[2]: 1 bytes
------ MODULES INFO OUTPUT ------
------ CONFIG DEBUG OUTPUT ------
repl-diskless-load disabled
debug-context ""
sanitize-dump-payload no
lazyfree-lazy-user-del yes
lazyfree-lazy-server-del yes
import-mode no
lazyfree-lazy-user-flush yes
list-compress-depth 0
dual-channel-replication-enabled no
repl-diskless-sync yes
activedefrag no
lazyfree-lazy-expire yes
io-threads 1
replica-read-only yes
client-query-buffer-limit 1gb
slave-read-only yes
lazyfree-lazy-eviction yes
proto-max-bulk-len 512mb
------ FAST MEMORY TEST ------
2595901:M 18 Jun 2025 01:20:12.921 # Bio worker thread #0 terminated
2595901:M 18 Jun 2025 01:20:12.921 # Bio worker thread #1 terminated
2595901:M 18 Jun 2025 01:20:12.921 # Bio worker thread #2 terminated
*** Preparing to test memory region 6530abce2000 (212992 bytes)
*** Preparing to test memory region 726f8af7f000 (2621440 bytes)
*** Preparing to test memory region 726f8b200000 (8388608 bytes)
*** Preparing to test memory region 726f8ba00000 (4194304 bytes)
*** Preparing to test memory region 726f8bffe000 (8388608 bytes)
*** Preparing to test memory region 726f8c7ff000 (8388608 bytes)
*** Preparing to test memory region 726f8d000000 (8388608 bytes)
*** Preparing to test memory region 726f8dc00000 (4194304 bytes)
*** Preparing to test memory region 726f8e290000 (16384 bytes)
*** Preparing to test memory region 726f8e3d2000 (20480 bytes)
*** Preparing to test memory region 726f8e5f8000 (32768 bytes)
*** Preparing to test memory region 726f8eb58000 (12288 bytes)
*** Preparing to test memory region 726f8eb5c000 (16384 bytes)
*** Preparing to test memory region 726f8ed63000 (4096 bytes)
*** Preparing to test memory region 726f8eef2000 (397312 bytes)
*** Preparing to test memory region 726f8efc7000 (4096 bytes)
.O.O.O.O.O.O.O.O.O.O.O.O.O.O.O.O
Fast memory test PASSED, however your memory can still be broken. Please run a memory test for several hours if possible.
------ DUMPING CODE AROUND EIP ------
Symbol: (null) (base: (nil))
Module: /usr/lib/libc.so.6 (base 0x726f8e410000)
$ xxd -r -p /tmp/dump.hex /tmp/dump.bin
$ objdump --adjust-vma=(nil) -D -b binary -m i386:x86-64 /tmp/dump.bin
------
=== VALKEY BUG REPORT END. Make sure to include from START to END. ===
```
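A minimal sketch of the guard, assuming an sds-typed `err_info.msg` as in the scripting code (the fallback text here is hypothetical, not the exact patch):

```c
/* If no proper error message could be extracted from the Lua error value,
 * fall back to a non-NULL sds so later formatting (e.g. sdscatfmt with %s)
 * never dereferences NULL. */
if (err_info.msg == NULL)
    err_info.msg = sdsnew("Unknown scripting error");
```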
---------
Signed-off-by: Fusl <fusl@meo.ws>
Signed-off-by: Binbin <binloveplay1314@qq.com>
Co-authored-by: Binbin <binloveplay1314@qq.com>
zvi-code pushed a commit that referenced this pull request on Aug 1, 2025
…y-io#2257)

**Current state**

During `hashtableScanDefrag`, rehashing is paused to prevent entries from moving, but the scan callback can still delete entries, which triggers `hashtableShrinkIfNeeded`. For example, the `expireScanCallback` can delete expired entries.

**Issue**

This can cause the table to be resized and the old memory to be freed while the scan is still accessing it, resulting in the following memory access violation:

```
[err]: Sanitizer error:
=================================================================
==46774==ERROR: AddressSanitizer: heap-use-after-free on address 0x611000003100 at pc 0x0000004704d3 bp 0x7fffcb062000 sp 0x7fffcb061ff0
READ of size 1 at 0x611000003100 thread T0
#0 0x4704d2 in isPositionFilled /home/gusakovy/Projects/valkey/src/hashtable.c:422
#1 0x478b45 in hashtableScanDefrag /home/gusakovy/Projects/valkey/src/hashtable.c:1768
#2 0x4789c2 in hashtableScan /home/gusakovy/Projects/valkey/src/hashtable.c:1729
#3 0x47e3ca in kvstoreScan /home/gusakovy/Projects/valkey/src/kvstore.c:402
#4 0x6d9040 in activeExpireCycle /home/gusakovy/Projects/valkey/src/expire.c:297
#5 0x4859d2 in databasesCron /home/gusakovy/Projects/valkey/src/server.c:1269
#6 0x486e92 in serverCron /home/gusakovy/Projects/valkey/src/server.c:1577
#7 0x4637dd in processTimeEvents /home/gusakovy/Projects/valkey/src/ae.c:370
#8 0x4643e3 in aeProcessEvents /home/gusakovy/Projects/valkey/src/ae.c:513
#9 0x4647ea in aeMain /home/gusakovy/Projects/valkey/src/ae.c:543
#10 0x4a61fc in main /home/gusakovy/Projects/valkey/src/server.c:7291
#11 0x7f471957c139 in __libc_start_main (/lib64/libc.so.6+0x21139)
#12 0x452e39 in _start (/local/home/gusakovy/Projects/valkey/src/valkey-server+0x452e39)

0x611000003100 is located 0 bytes inside of 256-byte region [0x611000003100,0x611000003200)
freed by thread T0 here:
#0 0x7f471a34a1e5 in __interceptor_free (/lib64/libasan.so.4+0xd81e5)
#1 0x4aefbc in zfree_internal /home/gusakovy/Projects/valkey/src/zmalloc.c:400
#2 0x4aeff5 in valkey_free /home/gusakovy/Projects/valkey/src/zmalloc.c:415
#3 0x4707d2 in rehashingCompleted /home/gusakovy/Projects/valkey/src/hashtable.c:456
#4 0x471b5b in resize /home/gusakovy/Projects/valkey/src/hashtable.c:656
#5 0x475bff in hashtableShrinkIfNeeded /home/gusakovy/Projects/valkey/src/hashtable.c:1272
#6 0x47704b in hashtablePop /home/gusakovy/Projects/valkey/src/hashtable.c:1448
#7 0x47716f in hashtableDelete /home/gusakovy/Projects/valkey/src/hashtable.c:1459
#8 0x480038 in kvstoreHashtableDelete /home/gusakovy/Projects/valkey/src/kvstore.c:847
#9 0x50c12c in dbGenericDeleteWithDictIndex /home/gusakovy/Projects/valkey/src/db.c:490
#10 0x515f28 in deleteExpiredKeyAndPropagateWithDictIndex /home/gusakovy/Projects/valkey/src/db.c:1831
#11 0x516103 in deleteExpiredKeyAndPropagate /home/gusakovy/Projects/valkey/src/db.c:1844
#12 0x6d8642 in activeExpireCycleTryExpire /home/gusakovy/Projects/valkey/src/expire.c:70
#13 0x6d8706 in expireScanCallback /home/gusakovy/Projects/valkey/src/expire.c:139
#14 0x478bd8 in hashtableScanDefrag /home/gusakovy/Projects/valkey/src/hashtable.c:1770
#15 0x4789c2 in hashtableScan /home/gusakovy/Projects/valkey/src/hashtable.c:1729
#16 0x47e3ca in kvstoreScan /home/gusakovy/Projects/valkey/src/kvstore.c:402
#17 0x6d9040 in activeExpireCycle /home/gusakovy/Projects/valkey/src/expire.c:297
#18 0x4859d2 in databasesCron /home/gusakovy/Projects/valkey/src/server.c:1269
#19 0x486e92 in serverCron /home/gusakovy/Projects/valkey/src/server.c:1577
#20 0x4637dd in processTimeEvents /home/gusakovy/Projects/valkey/src/ae.c:370
#21 0x4643e3 in aeProcessEvents /home/gusakovy/Projects/valkey/src/ae.c:513
#22 0x4647ea in aeMain /home/gusakovy/Projects/valkey/src/ae.c:543
#23 0x4a61fc in main /home/gusakovy/Projects/valkey/src/server.c:7291
#24 0x7f471957c139 in __libc_start_main (/lib64/libc.so.6+0x21139)

previously allocated by thread T0 here:
#0 0x7f471a34a753 in __interceptor_calloc (/lib64/libasan.so.4+0xd8753)
#1 0x4ae48c in ztrycalloc_usable_internal /home/gusakovy/Projects/valkey/src/zmalloc.c:214
#2 0x4ae757 in valkey_calloc /home/gusakovy/Projects/valkey/src/zmalloc.c:257
#3 0x4718fc in resize /home/gusakovy/Projects/valkey/src/hashtable.c:645
#4 0x475bff in hashtableShrinkIfNeeded /home/gusakovy/Projects/valkey/src/hashtable.c:1272
#5 0x47704b in hashtablePop /home/gusakovy/Projects/valkey/src/hashtable.c:1448
#6 0x47716f in hashtableDelete /home/gusakovy/Projects/valkey/src/hashtable.c:1459
#7 0x480038 in kvstoreHashtableDelete /home/gusakovy/Projects/valkey/src/kvstore.c:847
#8 0x50c12c in dbGenericDeleteWithDictIndex /home/gusakovy/Projects/valkey/src/db.c:490
#9 0x515f28 in deleteExpiredKeyAndPropagateWithDictIndex /home/gusakovy/Projects/valkey/src/db.c:1831
#10 0x516103 in deleteExpiredKeyAndPropagate /home/gusakovy/Projects/valkey/src/db.c:1844
#11 0x6d8642 in activeExpireCycleTryExpire /home/gusakovy/Projects/valkey/src/expire.c:70
#12 0x6d8706 in expireScanCallback /home/gusakovy/Projects/valkey/src/expire.c:139
#13 0x478bd8 in hashtableScanDefrag /home/gusakovy/Projects/valkey/src/hashtable.c:1770
#14 0x4789c2 in hashtableScan /home/gusakovy/Projects/valkey/src/hashtable.c:1729
#15 0x47e3ca in kvstoreScan /home/gusakovy/Projects/valkey/src/kvstore.c:402
#16 0x6d9040 in activeExpireCycle /home/gusakovy/Projects/valkey/src/expire.c:297
#17 0x4859d2 in databasesCron /home/gusakovy/Projects/valkey/src/server.c:1269
#18 0x486e92 in serverCron /home/gusakovy/Projects/valkey/src/server.c:1577
#19 0x4637dd in processTimeEvents /home/gusakovy/Projects/valkey/src/ae.c:370
#20 0x4643e3 in aeProcessEvents /home/gusakovy/Projects/valkey/src/ae.c:513
#21 0x4647ea in aeMain /home/gusakovy/Projects/valkey/src/ae.c:543
#22 0x4a61fc in main /home/gusakovy/Projects/valkey/src/server.c:7291
#23 0x7f471957c139 in __libc_start_main (/lib64/libc.so.6+0x21139)

SUMMARY: AddressSanitizer: heap-use-after-free /home/gusakovy/Projects/valkey/src/hashtable.c:422 in isPositionFilled
Shadow bytes around the buggy address:
  0x0c227fff85d0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c227fff85e0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c227fff85f0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x0c227fff8600: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c227fff8610: fd fd fd fd fd fd fd fd fa fa fa fa fa fa fa fa
=>0x0c227fff8620:[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c227fff8630: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c227fff8640: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c227fff8650: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c227fff8660: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c227fff8670: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==46774==ABORTING
```

**Solution**

The suggested solution is to also pause auto shrinking during `hashtableScanDefrag`. I noticed that there was already a `hashtablePauseAutoShrink` method and a `pause_auto_shrink` counter, but the counter wasn't actually checked in `hashtableShrinkIfNeeded`, so I fixed that.

**Testing**

I created a simple tcl test that (most of the time) triggers this error, but it's a little clunky so I didn't add it as part of the PR:

```
start_server {tags {"expire hashtable defrag"}} {
    test {hashtable scan defrag on expiry} {
        r config set hz 100
        set num_keys 20
        for {set i 0} {$i < $num_keys} {incr i} {
            r set "key_$i" "value_$i"
        }
        for {set j 0} {$j < 50} {incr j} {
            set expire_keys 100
            for {set i 0} {$i < $expire_keys} {incr i} {
                # Short expiry time to ensure they expire quickly
                r psetex "expire_key_${i}_${j}" 100 "expire_value_${i}_${j}"
            }
            # Verify keys are set
            set initial_size [r dbsize]
            assert_equal $initial_size [expr $num_keys + $expire_keys]
            after 150
            for {set i 0} {$i < 10} {incr i} {
                r get "expire_key_${i}_${j}"
                after 10
            }
        }
        set remaining_keys [r dbsize]
        assert_equal $remaining_keys $num_keys
        # Verify server is still responsive
        assert_equal [r ping] {PONG}
    } {}
}
```

Compiling with ASAN using `make noopt SANITIZER=address valkey-server` and running the test causes the error above. Applying the fix resolves the issue.

Signed-off-by: Yakov Gusakov <yaakov0015@gmail.com>
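A self-contained sketch of the pause-auto-shrink idea (assumed names and a toy fill-factor rule; not the actual hashtable.c code):

```c
#include <stdio.h>

typedef struct {
    int used;
    int buckets;
    int pause_auto_shrink; /* the counter that is now honored */
} hashtable;

static void shrinkIfNeeded(hashtable *ht) {
    if (ht->pause_auto_shrink) return; /* the essence of the fix */
    if (ht->buckets > 8 && ht->used * 4 < ht->buckets)
        ht->buckets /= 2; /* stand-in for resize() freeing old buckets */
}

static void pauseAutoShrink(hashtable *ht) { ht->pause_auto_shrink++; }
static void resumeAutoShrink(hashtable *ht) {
    ht->pause_auto_shrink--;
    shrinkIfNeeded(ht); /* deferred shrink happens after the scan */
}

static void deleteEntry(hashtable *ht) { ht->used--; shrinkIfNeeded(ht); }

int main(void) {
    hashtable ht = {.used = 100, .buckets = 128, .pause_auto_shrink = 0};
    pauseAutoShrink(&ht); /* scan begins: callbacks may delete entries */
    for (int i = 0; i < 90; i++) deleteEntry(&ht);
    printf("during scan: %d buckets\n", ht.buckets); /* still 128 */
    resumeAutoShrink(&ht); /* scan done: one shrink step runs now */
    printf("after scan: %d buckets\n", ht.buckets);
    return 0;
}
```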
zvi-code pushed a commit that referenced this pull request on Oct 18, 2025
With valkey-io#1401, we introduced additional filters to the CLIENT LIST/KILL subcommands. The intended behavior was to pick the last value of each filter. However, we introduced a memory leak for all the preceding filter values.

Before this change:

```
> CLIENT LIST IP 127.0.0.1 IP 127.0.0.1
id=4 addr=127.0.0.1:37866 laddr=127.0.0.1:6379 fd=10 name= age=0 idle=0 flags=N capa= db=0 sub=0 psub=0 ssub=0 multi=-1 watch=0 qbuf=0 qbuf-free=0 argv-mem=21 multi-mem=0 rbs=16384 rbp=16384 obl=0 oll=0 omem=0 tot-mem=16989 events=r cmd=client|list user=default redir=-1 resp=2 lib-name= lib-ver= tot-net-in=49 tot-net-out=0 tot-cmds=0
```

Leak:

```
Direct leak of 11 byte(s) in 1 object(s) allocated from:
#0 0x7f2901aa557d in malloc (/lib64/libasan.so.4+0xd857d)
#1 0x76db76 in ztrymalloc_usable_internal /workplace/harkrisp/valkey/src/zmalloc.c:156
#2 0x76db76 in zmalloc_usable /workplace/harkrisp/valkey/src/zmalloc.c:200
#3 0x4c4121 in _sdsnewlen.constprop.230 /workplace/harkrisp/valkey/src/sds.c:113
#4 0x4dc456 in parseClientFiltersOrReply.constprop.63 /workplace/harkrisp/valkey/src/networking.c:4264
#5 0x4bb9f7 in clientListCommand /workplace/harkrisp/valkey/src/networking.c:4600
#6 0x641159 in call /workplace/harkrisp/valkey/src/server.c:3772
#7 0x6431a6 in processCommand /workplace/harkrisp/valkey/src/server.c:4434
#8 0x4bfa9b in processCommandAndResetClient /workplace/harkrisp/valkey/src/networking.c:3571
#9 0x4bfa9b in processInputBuffer /workplace/harkrisp/valkey/src/networking.c:3702
#10 0x4bffa3 in readQueryFromClient /workplace/harkrisp/valkey/src/networking.c:3812
#11 0x481015 in callHandler /workplace/harkrisp/valkey/src/connhelpers.h:79
#12 0x481015 in connSocketEventHandler.lto_priv.394 /workplace/harkrisp/valkey/src/socket.c:301
#13 0x7d3fb3 in aeProcessEvents /workplace/harkrisp/valkey/src/ae.c:486
#14 0x7d4d44 in aeMain /workplace/harkrisp/valkey/src/ae.c:543
#15 0x453925 in main /workplace/harkrisp/valkey/src/server.c:7319
#16 0x7f2900cd7139 in __libc_start_main (/lib64/libc.so.6+0x21139)
```

Note: for the ID / NOT-ID filters we group all the options and perform filtering, whereas for the remaining filters we only pick the last option.

---------

Signed-off-by: Harkrishn Patro <harkrisp@amazon.com>
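A self-contained illustration of the fix, using plain C strings in place of sds (hypothetical names, not the networking.c diff):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* When a filter is given twice, release the earlier value before storing
 * the later one, so only the last occurrence survives and nothing leaks. */
static void setFilterIp(char **filter_ip, const char *arg) {
    free(*filter_ip);         /* free(NULL) is a no-op */
    *filter_ip = strdup(arg); /* keep only the last occurrence */
}

int main(void) {
    char *ip = NULL;
    setFilterIp(&ip, "127.0.0.1"); /* first IP filter */
    setFilterIp(&ip, "127.0.0.1"); /* repeated filter: old value freed */
    printf("ip filter: %s\n", ip);
    free(ip);
    return 0;
}
```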