Conversation

@ynachi (Owner) commented Jan 9, 2026

No description provided.

@ynachi ynachi requested a review from Copilot January 9, 2026 21:28

Copilot AI left a comment


Pull request overview

This PR introduces a "Bitcask server" implementation by adding RESP (Redis Serialization Protocol) writer functionality and refactoring the core async runtime to improve safety, performance, and maintainability.

Changes:

  • Added a RESP protocol writer (RespWriter) with comprehensive unit and integration tests (a minimal encoding sketch follows this list)
  • Refactored core async runtime to use reference-counted completion states for safe coroutine cancellation
  • Renamed functions/types across the codebase to follow consistent PascalCase naming conventions
  • Improved adaptive heartbeat mechanism in worker event loop for better CPU utilization
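
For context on the RESP side, the writer's job is mostly to prepend a one-character type marker (plus a length for bulk types) and terminate each frame with CRLF. Below is a minimal, illustrative sketch of that encoding; the free-function names and signatures are placeholders and do not necessarily match the RespWriter API added in this PR.

// Illustrative RESP2 encoders; names and signatures are assumptions,
// not the actual kio RespWriter interface.
#include <cstdint>
#include <string>
#include <string_view>

// Simple string: "+<payload>\r\n"
inline void AppendSimpleString(std::string& out, std::string_view payload)
{
    out.push_back('+');
    out.append(payload);
    out.append("\r\n");
}

// Bulk string: "$<length>\r\n<payload>\r\n"
inline void AppendBulkString(std::string& out, std::string_view payload)
{
    out.push_back('$');
    out.append(std::to_string(payload.size()));
    out.append("\r\n");
    out.append(payload);
    out.append("\r\n");
}

// Integer: ":<value>\r\n"
inline void AppendInteger(std::string& out, std::int64_t value)
{
    out.push_back(':');
    out.append(std::to_string(value));
    out.append("\r\n");
}

For example, AppendBulkString(out, "PONG") yields "$4\r\nPONG\r\n".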

Reviewed changes

Copilot reviewed 100 out of 123 changed files in this pull request and generated 2 comments.

Summary per file:

  • kio/resp/writer.h: New RESP protocol writer implementation for serializing RESP frames
  • kio/resp/parser.h/cpp: Updated RESP parser with improved error handling and bounds checking
  • kio/core/worker.h/cpp: Major refactor to use SafeIoCompletion and adaptive event loop timing
  • kio/core/safe_completion.h: New reference-counted completion state for cancellation safety (see the sketch after this list)
  • kio/core/chunked_buffer.h/cpp: New scatter-gather buffer implementation for efficient I/O
  • kio/sync/baton.h: Rewritten synchronization primitive supporting multiple waiters and timeouts
  • kio/tests/resp/*: Comprehensive test coverage for the RESP writer, including integration tests
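
The safe_completion.h entry above refers to the reference-counted completion state mentioned in the overview. A rough sketch of the idea follows; the type and function names are stand-ins, not the actual SafeIoCompletion API. The awaiter and the I/O backend share ownership of a small state object, so a cancelled or destroyed coroutine never leaves the backend holding a dangling pointer.

// Sketch only: stand-in names, single-threaded worker assumed; not the kio API.
#include <atomic>
#include <coroutine>
#include <memory>

struct IoCompletionState
{
    std::atomic<bool> cancelled{false};
    std::atomic<int> result{0};
    std::coroutine_handle<> waiter{};
};

// Cancellation path: the coroutine drops its interest before being destroyed.
inline void CancelWait(const std::shared_ptr<IoCompletionState>& state)
{
    state->cancelled.store(true, std::memory_order_release);
    // The backend still holds its own shared_ptr, so the state outlives the coroutine.
}

// Completion path, run by the I/O backend when the kernel reports completion.
inline void OnIoCompleted(const std::shared_ptr<IoCompletionState>& state, int res)
{
    state->result.store(res, std::memory_order_release);
    if (!state->cancelled.load(std::memory_order_acquire) && state->waiter)
    {
        state->waiter.resume();  // only resume a coroutine that is still waiting
    }
}

The key design point is that the state's lifetime is tied to the shared_ptr reference count rather than to the coroutine frame, so a late completion touches valid memory even after cancellation.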

ASSERT_TRUE(kRes2.has_value());
EXPECT_EQ(kRes2->type, FrameType::kDouble);
EXPECT_EQ(GetSimplePayload(*kRes2), "-42");
parser_.Consume(*kRes2);

// Fird integer

Copilot AI Jan 9, 2026


Corrected spelling of 'Fird' to 'Third'

Suggested change:
- // Fird integer
+ // Third integer

// ---------------------

/**
* @brief Append data to the buffer. The data remains uncommited

Copilot AI Jan 9, 2026


Corrected spelling of 'uncommited' to 'uncommitted'

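The quoted doc comment describes an append-then-commit buffer: appended bytes are staged and only become visible to the I/O path once committed. A rough illustration of that pattern is sketched below; the class and method names are hypothetical and are not the kio ChunkedBuffer interface.

// Hypothetical append-then-commit buffer; illustrates the pattern only.
#include <cstddef>
#include <string>
#include <string_view>
#include <vector>

class StagedBuffer
{
public:
    // Staged data is not yet visible to the I/O path until Commit() is called.
    void Append(std::string_view data) { staged_.append(data); }

    // Makes everything appended so far visible as a committed chunk.
    void Commit()
    {
        if (!staged_.empty())
        {
            committed_.push_back(std::move(staged_));
            staged_.clear();
        }
    }

    std::size_t CommittedChunks() const { return committed_.size(); }

private:
    std::string staged_;
    std::vector<std::string> committed_;  // chunks suitable for scatter-gather I/O
};
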
@ynachi (Owner, Author) commented Jan 9, 2026

@codex review

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: b1cd5fefbe


Comment on lines +495 to +499
if (!result.has_value())
{
    keydir_.erase(entry.key);
    entries_skipped++;
    continue;
}
if (result.error().value == kIoNeedMoreData)
{
    break; // Truncated hint file
}


P1: Treat truncated hint files as invalid

The recovery loop treats kIoNeedMoreData as a soft stop and returns success, which means a partially written or truncated hint file is accepted as authoritative. Because TryRecoverFromHint only checks has_value(), any crash during hint-file generation will cause recovery to skip the data file entirely and permanently drop keys that weren’t recorded in the truncated hint. Consider returning an error on kIoNeedMoreData (or explicitly marking the hint as invalid) so recovery falls back to scanning the data file in that scenario.
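
One possible shape for that fix, sketched here with simplified stand-in types rather than the actual kio code, is to propagate kIoNeedMoreData as an error so the caller falls back to scanning the data file:

// Sketch only: HintEntry, ReadNextHintEntry, and the error codes are stand-ins.
#include <cstddef>
#include <expected>
#include <optional>

inline constexpr int kIoNeedMoreData = -1;

struct HintEntry { /* key, file id, offset, value size, ... */ };

// Stand-in hint reader: an entry, std::nullopt at clean end-of-file,
// or an error code such as kIoNeedMoreData on truncation.
std::expected<std::optional<HintEntry>, int> ReadNextHintEntry();

std::expected<std::size_t, int> TryRecoverFromHint()
{
    std::size_t entries_loaded = 0;
    for (;;)
    {
        auto next = ReadNextHintEntry();
        if (!next.has_value())
        {
            // Truncated or otherwise unreadable hint: do not accept it as
            // authoritative; the error tells the caller to rescan the data file.
            return std::unexpected(next.error());
        }
        if (!next->has_value())
        {
            break;  // clean end of the hint file
        }
        // ... insert **next into the key directory ...
        ++entries_loaded;
    }
    return entries_loaded;
}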


Comment on lines +227 to +231
// Update destination file stats
ctx.stats.data_files[ctx.dst_file_id] = PartitionStats::FileStats{
    .total_bytes = actual_size,
    .live_bytes = actual_size,  // All data in new file is live
    .live_entries = total_entries_copied
};


P2: Avoid marking all compaction output as live

Compaction can interleave with concurrent updates: ProcessBufferEntries re-checks the KeyDir and may skip updating it if a key changed during compaction, leaving a stale entry in the new file. This is expected, but the final stats still set live_bytes to actual_size and live_entries to total_entries_copied, which counts those stale entries as live. That under-reports fragmentation, so the new file may never be re-compacted even though it already contains dead data, leading to ongoing disk bloat. Consider tracking live bytes/entries based on successful KeyDir updates (or recomputing from KeyDir) before setting stats.
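
One way to implement that suggestion, sketched here with simplified stand-in types rather than the actual kio structures, is to count only records whose KeyDir update succeeded toward the live stats:

// Sketch only: CopiedEntry and FileStats are simplified stand-ins.
#include <cstdint>
#include <vector>

struct CopiedEntry
{
    std::uint64_t record_size = 0;
    bool keydir_updated = false;  // false if the key changed during compaction
};

struct FileStats
{
    std::uint64_t total_bytes = 0;
    std::uint64_t live_bytes = 0;
    std::uint64_t live_entries = 0;
};

FileStats ComputeDestinationStats(const std::vector<CopiedEntry>& copied,
                                  std::uint64_t actual_size)
{
    FileStats stats;
    stats.total_bytes = actual_size;  // everything written, stale copies included
    for (const auto& entry : copied)
    {
        if (entry.keydir_updated)  // skipped updates mean the copy is already dead
        {
            stats.live_bytes += entry.record_size;
            ++stats.live_entries;
        }
    }
    return stats;
}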

