
perf: rewrite gh CLI subprocesses to use native fetch and SQLite caching #4

Open
OctavianTocan wants to merge 1 commit into main from fixing-the-lag

Conversation


@OctavianTocan OctavianTocan commented Mar 18, 2026

Summary

  • Replaced 140+ gh CLI subprocess calls with native fetch via GitHub API
  • Added in-memory OAuth token caching via gh auth token
  • Added local SQLite persistence using bun:sqlite for zero-latency PR lists
  • Optimized React rendering cascade with useDeferredValue and React.memo
  • Implemented PanelSkeleton for smooth, tab-aware loading transitions
  • Batched GraphQL PR details queries into chunks to prevent rate limits
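The token-caching bullet above can be sketched as follows. This is a minimal illustration, not the PR's code: `makeTokenGetter` and the injected fetcher are hypothetical names; only the `gh auth token` source and the cache-once behavior come from the summary.

```typescript
// Minimal sketch: memoize the OAuth token so `gh auth token` is spawned at
// most once per process. The fetcher is injected so the caching logic stays
// testable without shelling out.
type TokenFetcher = () => Promise<string>;

export function makeTokenGetter(fetchToken: TokenFetcher): TokenFetcher {
  let cached: string | null = null;
  let inflight: Promise<string> | null = null;

  return async () => {
    if (cached !== null) return cached; // fast path: token already resolved
    if (inflight) return inflight;      // dedupe concurrent first callers
    inflight = fetchToken().then((raw) => {
      cached = raw.trim();              // gh prints the token with a trailing newline
      return cached;
    });
    return inflight;
  };
}

// In the real module the fetcher would spawn the CLI once, e.g. (Bun):
//   const getGithubToken = makeTokenGetter(() =>
//     new Response(Bun.spawn(["gh", "auth", "token"]).stdout).text());
```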

Summary by Sourcery

Replace GitHub CLI subprocess usage with direct GitHub API calls backed by token-based auth, add persistent SQLite-backed caching for PR data, improve PR list and preview panel responsiveness, and add split-branch introspection.

New Features:

  • Introduce a SQLite-backed local cache for pull requests, PR details, and panel data for faster subsequent loads.
  • Add a new split command and supporting split-state model to inspect and visualize split-branch topology and status.
  • Provide a tab-specific PanelSkeleton component to show structured loading placeholders in the preview panel.

Bug Fixes:

  • Align PR parsing and tests with GitHub API response shapes rather than gh CLI JSON output.
  • Harden error handling and selection logic in the PR list when data loads asynchronously or fetches fail.

Enhancements:

  • Refactor GitHub integration to use native fetch and GitHub REST/GraphQL APIs instead of shelling out to the gh CLI for most operations.
  • Cache GitHub OAuth tokens in-memory via the gh auth token command to avoid repeated invocations.
  • Batch PR details fetching via GitHub GraphQL to reduce API calls and improve performance under load.
  • Optimize PR list rendering with deferred search input, memoized row components, and derived sorting/grouping maps.
  • Improve PR panel state management types and internal caching to integrate with the new database-backed cache.
  • Refine CLI entrypoint to include the new split command in parsing and help output.
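The batching enhancement above can be sketched as a chunk helper plus one aliased GraphQL sub-query per PR. Everything here is illustrative (field selection, chunk size, helper names); the PR's actual query likely differs.

```typescript
// Sketch of batching PR detail lookups into one GraphQL request per chunk,
// so N PRs cost ceil(N / size) requests instead of N.
interface PRIdentifier { repo: string; number: number }

export function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

// One aliased sub-query per PR lets a single request return many PRs.
export function buildBatchQuery(prs: PRIdentifier[]): string {
  const parts = prs.map((pr, i) => {
    const [owner, name] = pr.repo.split("/");
    // NOTE: interpolating owner/name into the query string is exactly what
    // the review below flags; GraphQL variables would be safer. Shown here
    // only to mirror the described approach.
    return `pr${i}: repository(owner: "${owner}", name: "${name}") {
      pullRequest(number: ${pr.number}) { additions deletions headRefName }
    }`;
  });
  return `query {\n${parts.join("\n")}\n}`;
}
```

Each chunk's response can then be keyed by the `pr0`, `pr1`, … aliases to rebuild a URL-to-details map.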

Documentation:

  • Update CLI help text to document the new split command and its purpose.

Tests:

  • Update GitHub-related unit and integration tests to reflect the new fetch-based implementation and repository fixtures.
  • Extend cache tests to cover the updated PRCache semantics and uniqueness of cache keys.

Summary by CodeRabbit

New Features

  • Added Split command to view split workflow topology and status
  • Implemented PR caching with automatic sync to live data
  • Added multiple sort options for PR lists: age, attention, repo, number, title, and status

Improvements

  • Enhanced loading state with skeleton placeholders
  • Improved error handling when cached data is available
  • Optimized PR table rendering performance with memoization
  • Debounced search filtering for smoother interactions
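The sort options listed above could be driven by a comparator factory along these lines. This is a hypothetical sketch covering only the modes that need no derived data; `attention` and `status` would require urgency/CI inputs not shown here.

```typescript
// Illustrative comparator factory for simple PR list sort modes.
type SortMode = "age" | "repo" | "number" | "title";

interface PR { title: string; number: number; repo: string; createdAt: string }

export function compareBy(mode: SortMode): (a: PR, b: PR) => number {
  switch (mode) {
    case "age":    // oldest PR first
      return (a, b) => Date.parse(a.createdAt) - Date.parse(b.createdAt);
    case "repo":   // group alphabetically by repository
      return (a, b) => a.repo.localeCompare(b.repo);
    case "number": // ascending PR number
      return (a, b) => a.number - b.number;
    case "title":  // alphabetical by title
      return (a, b) => a.title.localeCompare(b.title);
  }
}
```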

Copilot AI review requested due to automatic review settings March 18, 2026 17:11

sourcery-ai bot commented Mar 18, 2026

Reviewer's Guide

Replaces gh CLI subprocess usage with native GitHub REST/GraphQL fetch helpers and introduces SQLite-backed caching and React rendering optimizations for faster, smoother PR listing and previewing, plus a new split-branch visualization command.

Sequence diagram for ls command PR loading with SQLite and GitHub API

sequenceDiagram
  actor User
  participant RaftCLI
  participant LsCommand
  participant DB as DBModule
  participant Github as GithubAPI
  participant Auth as GithubAuth
  participant GhCLI
  participant GitHubAPI

  User->>RaftCLI: run raft ls
  RaftCLI->>LsCommand: render LsCommand

  rect rgb(240,240,240)
    LsCommand->>DB: getCachedPRs()
    DB-->>LsCommand: PullRequest[] cachedPRs
    alt cachedPRs not empty
      LsCommand->>LsCommand: setAllPRs(cachedPRs)
      LsCommand->>LsCommand: setLoading(false)
    end
  end

  LsCommand->>LsCommand: load() async effect
  LsCommand->>Github: fetchOpenPRs(author, onProgress)

  Github->>Github: fetchGh("search/issues...")
  Github->>Auth: getGithubToken()
  alt token not cached
    Auth->>GhCLI: safeSpawn([gh auth token])
    GhCLI-->>Auth: stdout token
    Auth-->>Github: oauth token
  else token cached
    Auth-->>Github: cached token
  end

  Github->>GitHubAPI: HTTPS request search/issues
  GitHubAPI-->>Github: JSON search results
  Github-->>LsCommand: PullRequest[] results

  LsCommand->>LsCommand: sort results
  LsCommand->>DB: cachePRs(results)
  DB-->>LsCommand: ok
  LsCommand->>LsCommand: setAllPRs(results)
  LsCommand->>LsCommand: setLoading(false)

  loop for visible PRs
    LsCommand->>Github: batchFetchPRDetails(pr slice)
    Github->>GitHubAPI: GraphQL batch query
    GitHubAPI-->>Github: PRDetails map
    Github-->>LsCommand: Map url to PRDetails
    LsCommand->>DB: cachePRDetails(url, details)
  end

ER diagram for new SQLite PR caching schema

erDiagram
  PULL_REQUESTS {
    string url PK
    json   data
    datetime updated_at
  }

  PR_DETAILS {
    string url PK
    json   data
    datetime updated_at
  }

  PR_PANEL_DATA {
    string url PK
    json   data
    datetime updated_at
  }

  PULL_REQUESTS ||--|| PR_DETAILS : same_pr_url
  PULL_REQUESTS ||--|| PR_PANEL_DATA : same_pr_url
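Read as bun:sqlite DDL, the schema above might look like the following sketch. Table and column names come from the ER diagram; the upsert statement and file location are assumptions, not the PR's actual code.

```typescript
// Schema sketch matching the ER diagram: one row per PR URL, JSON payload,
// last-update timestamp. With bun:sqlite you would run each statement once:
//   import { Database } from "bun:sqlite";
//   const db = new Database(join(homedir(), ".config/raft/raft.sqlite"));
//   for (const stmt of SCHEMA.split(";")) if (stmt.trim()) db.run(stmt);
export const SCHEMA = `
CREATE TABLE IF NOT EXISTS pull_requests (
  url        TEXT PRIMARY KEY,
  data       TEXT NOT NULL,          -- JSON-serialized PullRequest
  updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE TABLE IF NOT EXISTS pr_details (
  url        TEXT PRIMARY KEY,
  data       TEXT NOT NULL,
  updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE TABLE IF NOT EXISTS pr_panel_data (
  url        TEXT PRIMARY KEY,
  data       TEXT NOT NULL,
  updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);`;

// Upsert keeps one row per PR URL, matching the one-to-one relations above.
export const UPSERT_PR =
  `INSERT INTO pull_requests (url, data, updated_at)
   VALUES (?, ?, datetime('now'))
   ON CONFLICT(url) DO UPDATE SET data = excluded.data, updated_at = excluded.updated_at`;
```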

Class diagram for PRCache, DB helpers, and split state types

classDiagram
  class PRCache {
    - Map~string, PRDetails~ details
    - Map~string, PRPanelData~ panelData
    + PRCache()
    + getDetails(url string) PRDetails
    + setDetails(url string, data PRDetails) void
    + hasDetails(url string) boolean
    + getPanelData(url string) PRPanelData
    + setPanelData(url string, data PRPanelData) void
    + hasPanelData(url string) boolean
  }

  class DBModule {
    + db Database
    + getCachedPRs() PullRequest[]
    + cachePRs(prs PullRequest[]) void
    + getCachedPRDetails(url string) PRDetails
    + cachePRDetails(url string, details PRDetails) void
    + getCachedPRPanelData(url string) PRPanelData
    + cachePRPanelData(url string, panelData PRPanelData) void
  }

  class GithubAuth {
    - cachedToken string
    + getGithubToken() Promise~string~
  }

  class GithubAPI {
    + fetchGh(endpoint string, options RequestInit) Promise~any~
    + fetchGhGraphql(query string, variables any) Promise~any~
    + parseSearchResults(items any[]) PullRequest[]
    + fetchAllAccountPRs(onProgress function) Promise~PullRequest[]~
    + fetchOpenPRs(author string, onProgress function) Promise~PullRequest[]~
    + fetchRepoPRs(repo string) Promise~PullRequest[]~
    + updatePRTitle(repo string, prNumber number, title string) Promise~void~
    + findStackComment(repo string, prNumber number) Promise~number~
    + upsertStackComment(repo string, prNumber number, body string) Promise~void~
    + getCurrentRepo() Promise~string~
    + batchFetchPRDetails(prs PRIdentifier[]) Promise~Map~string, PRDetails~~
    + fetchPRDetails(repo string, prNumber number) Promise~PRDetails~
    + fetchPRPanelData(repo string, prNumber number) Promise~PRPanelData~
    + submitPRReview(repo string, prNumber number, event ReviewEvent, body string) Promise~void~
    + replyToReviewComment(repo string, prNumber number, commentId number, body string) Promise~void~
    + postPRComment(repo string, prNumber number, body string) Promise~void~
    + fetchReviewThreads(repo string, prNumber number) Promise~ReviewThread[]~
    + resolveReviewThread(threadId string) Promise~void~
    + fetchCIStatus(repo string, ref string) Promise~CIReturnStatus~
    + fetchHasConflicts(repo string, prNumber number) Promise~boolean~
  }

  class PRIdentifier {
    + repo string
    + number number
    + url string
  }

  class SplitEntry {
    + number number
    + name string
    + branch string
    + files string[]
    + lines number
    + dependsOn number[]
    + prNumber number
    + prUrl string
    + status SplitEntryStatus
  }

  class SplitState {
    + version number
    + originalBranch string
    + targetBranch string
    + strategy string
    + createdAt string
    + status SplitPhase
    + topology string
    + splits SplitEntry[]
  }

  class SplitStateModule {
    + readSplitState(repoRoot string) Promise~SplitState~
    + writeSplitState(repoRoot string, state SplitState) void
    + formatSplitTopology(state SplitState) string[]
  }

  class SplitCommand {
    + repo string
    + SplitCommand(props SplitCommandProps)
  }

  class SplitCommandProps {
    + repo string
  }

  PRCache --> DBModule : uses
  GithubAPI --> GithubAuth : uses
  GithubAPI --> PRIdentifier : uses
  SplitStateModule --> SplitState : manages
  SplitState --> SplitEntry : contains
  SplitCommand --> SplitStateModule : reads
  SplitCommand --> SplitState : renders

File-Level Changes

Change | Details | Files
Replace gh CLI subprocess interactions with first-class GitHub REST/GraphQL fetch helpers and token management.
  • Introduce fetchGh and fetchGhGraphql helpers using a cached OAuth token from getGithubToken
  • Refactor PR search/list/update/comment/review operations to use GitHub REST API endpoints instead of gh command invocations
  • Refactor review-thread and CI-status fetching to use GraphQL/REST fetch calls and simplify multi-account handling
  • Change getCurrentRepo to derive owner/repo from git remote.origin.url rather than gh repo view
src/lib/github.ts
src/lib/auth.ts
Add persistent SQLite caching for PR lists and metadata, integrated with in-memory cache.
  • Create db.ts with bun:sqlite database in ~/.config/raft containing pull_requests, pr_details, and pr_panel_data tables
  • Add getCachedPRs/cachePRs for PR list caching and wire them into ls command load path
  • Extend PRCache to consult and update SQLite for PRDetails and PRPanelData while keeping an in-memory map for fast reads
  • Adjust cache tests to avoid collisions and align with new behavior
src/lib/db.ts
src/lib/cache.ts
src/lib/__tests__/cache.test.ts
src/commands/ls.tsx
Optimize PR list rendering and lifecycle detail fetching to reduce render thrash and network calls.
  • Use useDeferredValue for search query to debounce expensive filtering and keep typing responsive
  • Separate filteredPRs vs sortedPRs, memoizing urgency calculations and timestamps for attention/age sort modes
  • Batch PRDetails fetching via batchFetchPRDetails GraphQL query to reduce per-PR API calls and hydrate cache in bulk
  • Stabilize selection and scrolling with refs for scrollOffset and selectedIndex, and reuse cached detail maps when possible
  • Wrap PRRow in React.memo to avoid unnecessary row rerenders in large tables
src/commands/ls.tsx
src/lib/github.ts
src/components/pr-table.tsx
Improve PR preview loading UX with a tab-aware skeleton and more precise loading conditions.
  • Add PanelSkeleton component that renders different skeleton layouts depending on the active tab (body, comments, code, files)
  • Update PreviewPanel to show spinner plus PanelSkeleton only when loading and no panelData is yet available
  • Expose stricter React.Dispatch typings for setSplitRatio and setPanelFullscreen in usePanel hook
src/components/skeleton.tsx
src/components/preview-panel.tsx
src/hooks/usePanel.ts
Introduce a new split command and split-state utilities for visualizing split-branch topology and status.
  • Add SplitCommand React UI with keyboard navigation, selection, and PR opening/copying behavior driven by split-state JSON
  • Implement split-state.ts to read/write .raft-split.json and format split topologies into an ASCII tree representation
  • Wire split into CLI command parsing, help text, and main command switcher
src/commands/split.tsx
src/lib/split-state.ts
src/index.tsx
Adapt tests to new GitHub API shapes and behavior.
  • Update github tests to call parseSearchResults with already-parsed API items instead of raw gh JSON strings
  • Switch integration tests from a personal repo to cli/cli and adjust expectations to match REST-based URLs and fields
src/lib/__tests__/github.test.ts
src/__tests__/integration.test.ts
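One change above derives owner/repo from `git remote.origin.url` instead of `gh repo view`. A hedged sketch of that parsing follows; the function name and regex are illustrative, not the PR's code.

```typescript
// Extract "owner/repo" from a git remote URL, handling both common forms:
//   git@github.com:owner/repo.git
//   https://github.com/owner/repo(.git)
export function parseRemote(url: string): string | null {
  const m = url.match(/github\.com[:/]([^/]+)\/([^/]+?)(?:\.git)?\/?$/);
  return m ? `${m[1]}/${m[2]}` : null;
}
```

The caller would feed this the output of `git config --get remote.origin.url` and fall back to an error when the remote is not a GitHub URL.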



coderabbitai bot commented Mar 18, 2026

Walkthrough

This PR introduces a new split workflow viewer command, replaces GitHub CLI-based operations with direct HTTP/GraphQL API requests, adds a SQLite-backed caching layer for PR data and details, and enhances the PR list command with deferred filtering, multi-mode sorting, cache-backed loading, and batch detail fetching. The GitHub token retrieval is now delegated to a dedicated auth module.

Changes

Cohort / File(s) Summary
Test Updates
src/__tests__/integration.test.ts, src/lib/__tests__/cache.test.ts, src/lib/__tests__/github.test.ts
Updated integration tests to target the cli/cli repository; refactored test mocks to align with new GitHub REST/GraphQL API payload shapes (html_url, draft, repository_url, created_at); added unique key generation in cache tests to avoid collisions.
New Split Command
src/commands/split.tsx, src/lib/split-state.ts
Introduced SplitCommand TUI viewer with topology tree, entry list, detail panel, and keyboard navigation (q/j/k/enter/c). Exports SplitState, SplitPhase, SplitEntry types and functions readSplitState, writeSplitState, formatSplitTopology for managing .raft-split.json split workflow state.
PR List Enhancements
src/commands/ls.tsx
Added cache-backed loading (getCachedPRs/cachePRs preload state), deferred filtering via useDeferredValue, multi-mode sorting (age/attention/repo/number/title/status), batch detail fetching (batchFetchPRDetails), and per-PR urgency detection. Made repoFilter optional in public LsCommandProps interface.
GitHub API Abstraction
src/lib/github.ts, src/lib/auth.ts
Replaced gh CLI invocations with direct HTTP (fetchGh) and GraphQL (fetchGhGraphql) calls authenticated via getGithubToken(). Added batchFetchPRDetails for efficient detail fetching; updated parseSearchResults signature to accept array instead of JSON string; simplified account handling.
Caching Infrastructure
src/lib/db.ts, src/lib/cache.ts
Introduced SQLite-backed database (db.ts) with tables for pull_requests, pr_details, pr_panel_data. The cache layer (cache.ts) checks its in-memory maps first and falls back to the database for getDetails, getPanelData, hasDetails, and hasPanelData, persisting writes automatically.
UI Components & State
src/components/pr-table.tsx, src/components/preview-panel.tsx, src/components/skeleton.tsx, src/hooks/usePanel.ts
Memoized PRRow for optimization; added PanelSkeleton component with tab-specific layouts for loading states; updated preview-panel to render skeleton during load. Updated PanelState setters to standard React.Dispatch signatures (setSplitRatio, setPanelFullscreen).
Integration
src/index.tsx
Extended Command union type to include "split"; added SplitCommand import and routing for "split" command invocation.
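For the split command's topology view, a simplified, hypothetical stand-in for the described `formatSplitTopology` could walk `dependsOn` edges like this; the real implementation and its output format may well differ.

```typescript
// Render split entries as an ASCII tree by following dependsOn edges.
// Only the fields needed for formatting are modeled here.
interface SplitEntry { number: number; name: string; dependsOn: number[] }

export function formatTopology(splits: SplitEntry[]): string[] {
  const children = new Map<number, SplitEntry[]>();
  const roots: SplitEntry[] = [];
  for (const s of splits) {
    if (s.dependsOn.length === 0) roots.push(s); // no deps: top of the tree
    for (const dep of s.dependsOn) {
      const list = children.get(dep) ?? [];
      list.push(s);
      children.set(dep, list);
    }
  }
  const lines: string[] = [];
  const walk = (entry: SplitEntry, depth: number) => {
    lines.push(`${"  ".repeat(depth)}${depth > 0 ? "└─ " : ""}#${entry.number} ${entry.name}`);
    for (const child of children.get(entry.number) ?? []) walk(child, depth + 1);
  };
  for (const root of roots) walk(root, 0);
  return lines;
}
```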

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant UI as LS Command UI
    participant Cache as In-Memory Cache
    participant DB as SQLite DB
    participant GitHub as GitHub API
    
    User->>UI: Load PR list
    UI->>Cache: getCachedPRs()
    alt Cache hit
        Cache-->>UI: Return cached PRs
        UI->>UI: Sort, filter, display
    else Cache miss
        UI->>DB: Query pull_requests table
        alt DB has data
            DB-->>UI: Return PR data
            UI->>UI: Populate state, display
        else DB empty
            UI->>GitHub: fetchGh (PR search)
            GitHub-->>UI: PR results
        end
        UI->>Cache: cachePRs(results)
        Cache->>DB: INSERT/REPLACE rows
    end
    
    User->>UI: Select PR for details
    UI->>Cache: getDetails(prUrl)
    alt Details in cache
        Cache-->>UI: Return PRDetails
    else Not cached
        UI->>DB: Query pr_details table
        alt Found in DB
            DB-->>UI: Return PRDetails
            Cache->>Cache: Store in memory
        else Not in DB
            UI->>GitHub: batchFetchPRDetails(prs)
            GitHub-->>UI: Return details map
            UI->>Cache: setDetails(url, details)
            Cache->>DB: INSERT/REPLACE row
        end
    end
    UI->>UI: Render detail panel with deferred search

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes


Suggested reviewers

  • Copilot

Poem

🐰 Hops with glee through split topology trees,
Caching PRs from API with such ease!
No more shell commands, just GraphQL delight,
Sorted and filtered—the UX shines bright!
A split viewer born, and the code runs so fleet,
Making raft's review workflow complete!

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 warning

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 6.25%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check (✅ Passed): check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the title directly and clearly describes the main change, replacing CLI subprocess calls with native fetch and SQLite caching, which is the core focus of this performance PR.



@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 3 issues and left some high-level feedback:

  • In batchFetchPRDetails the GraphQL query is constructed via string interpolation of owner/name directly into the query string; consider switching to variables (as done in fetchReviewThreads) to avoid issues if repo names ever contain characters that need escaping and to keep the query safer and easier to maintain.
  • In SplitCommand the repo prop is accepted but never used, which can be confusing; either use it to constrain or label the split state, or remove it from the props to keep the API minimal.
  • The SplitCommand uses Bun.spawn(["open", ...]) to launch PR URLs, which is macOS-specific; consider using a cross-platform opener (xdg-open/start or a small helper that picks the right command per platform) so the command works consistently on Linux and Windows.
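For the third point, a cross-platform opener could select the launcher per platform. This is a sketch following common conventions (`open`/`xdg-open`/`start`), not code from the PR.

```typescript
// Pick the OS-appropriate URL opener instead of hard-coding macOS's `open`.
export function openerCommand(platform: string, url: string): string[] {
  switch (platform) {
    case "darwin": return ["open", url];
    case "win32":  return ["cmd", "/c", "start", "", url]; // "" is the window title
    default:       return ["xdg-open", url];               // Linux / BSD desktops
  }
}

// Hypothetical call site in SplitCommand (Bun):
//   Bun.spawn(openerCommand(process.platform, pr.url));
```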

## Individual Comments

### Comment 1
<location path="src/commands/ls.tsx" line_range="118" />
<code_context>

   const [authorFilter, setAuthorFilter] = useState<string | null>(null)

+  const deferredSearchQuery = useDeferredValue(searchQuery)
+
   const filteredPRs = useMemo(() => {
</code_context>
<issue_to_address>
**issue (bug_risk):** `useDeferredValue` is used but not imported in this module.

This hook isn’t imported in this file, so `deferredSearchQuery` will fail at runtime. Please add `useDeferredValue` to the React import (e.g. `import React, { ..., useDeferredValue } from "react"`).
</issue_to_address>

### Comment 2
<location path="src/lib/cache.ts" line_range="24-25" />
<code_context>
+    cachePRDetails(url, data);
   }

   hasDetails(url: string): boolean {
-    return this.details.has(url)
+    return this.details.has(url) || getCachedPRDetails(url) !== null;
   }

</code_context>
<issue_to_address>
**suggestion (performance):** `hasDetails` and `hasPanelData` now hit SQLite on every cache miss, which may be more expensive than intended.

Because the in-memory map is checked first, any cache miss will now trigger a SQLite lookup. If these methods run in hot render paths, that extra I/O could be significant. Depending on your access pattern, you could either rely solely on `this.details`/`this.panelData` (accepting possible stale false negatives) or add a simple negative-cache indicator so you don’t repeatedly hit SQLite for known-missing URLs.

Suggested implementation:

```typescript
export class PRCache {
  // In-memory backing for immediate synchronous reads during renders
  private details = new Map<string, PRDetails>()
  private panelData = new Map<string, PRPanelData>()
  // Negative cache to avoid repeated SQLite lookups for known-missing entries
  private missingDetails = new Set<string>()
  private missingPanelData = new Set<string>()

```

```typescript
  getDetails(url: string): PRDetails | undefined {
    // Fast in-memory hit
    if (this.details.has(url)) return this.details.get(url);

    // If we've already confirmed this URL is missing in the backing cache,
    // don't hit SQLite again.
    if (this.missingDetails.has(url)) return undefined;

    const fromDb = getCachedPRDetails(url);
    if (fromDb) {
      // Populate in-memory cache and clear any negative-cache marker
      this.details.set(url, fromDb);
      this.missingDetails.delete(url);
      return fromDb;
    }

    // Record a negative cache entry to avoid repeated SQLite lookups
    this.missingDetails.add(url);
    return undefined;

```

```typescript
  hasDetails(url: string): boolean {
    // Only consult the in-memory map; avoid backing-store I/O on cache miss.
    // This keeps hasDetails safe to call from hot render paths.
    return this.details.has(url);
  }

```

To fully align with the suggestion, similar negative-caching logic should be added for the panel data path:

1. Update `getPanelData(url: string)` to:
   - Return immediately if `this.panelData.has(url)`.
   - Return `undefined` if `this.missingPanelData.has(url)`.
   - On SQLite miss, add `url` to `this.missingPanelData`.
   - On SQLite hit, populate `this.panelData` and `delete` from `this.missingPanelData`.

2. Revert `hasPanelData(url: string)` to only check `this.panelData.has(url)` and not query SQLite, mirroring the updated `hasDetails` behavior.

You should apply the same patterns as shown for `getDetails`/`hasDetails`, using `panelData`, `missingPanelData`, and the corresponding panel-data SQLite accessors.
</issue_to_address>

### Comment 3
<location path="src/lib/__tests__/cache.test.ts" line_range="38-41" />
<code_context>
-    expect(cache.hasDetails("x")).toBe(false)
-    cache.setDetails("x", { additions: 0, deletions: 0, commentCount: 0, reviews: [], headRefName: "" })
-    expect(cache.hasDetails("x")).toBe(true)
+    const uniqueKey = "test-unique-key-" + Date.now();
+    expect(cache.hasDetails(uniqueKey)).toBe(false)
+    cache.setDetails(uniqueKey, { additions: 0, deletions: 0, commentCount: 0, reviews: [], headRefName: "" })
+    expect(cache.hasDetails(uniqueKey)).toBe(true)
   })
 })
</code_context>
<issue_to_address>
**suggestion (testing):** Isolate PRCache tests from the SQLite backend and add coverage for DB-backed reads/writes.

Because `PRCache` now delegates to `getCachedPRDetails`/`cachePRDetails` (and the panel-data variants), these tests will hit the real SQLite DB in `~/.config/raft/raft.sqlite`, introducing hidden I/O, potential flakiness, and leaving DB interactions unvalidated.

To improve this:
1. Mock `getCachedPRDetails`, `cachePRDetails`, `getCachedPRPanelData`, and `cachePRPanelData` in this file so tests stay fast, deterministic, and let you assert calls (URL, payload, etc.).
2. Add a test for a cache miss in memory but a hit in the DB layer (e.g. `details` map empty while `getCachedPRDetails` returns data) to verify that `getDetails` repopulates the in-memory map from persistent storage.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.






Copilot AI left a comment


Pull request overview

This PR replaces most gh CLI subprocess usage with direct GitHub REST/GraphQL fetch calls, adds token + SQLite-backed caching to speed up PR list/panel loads, and introduces several UI performance/loading improvements (plus a new split command).

Changes:

  • Introduce fetch-based GitHub REST/GraphQL helpers (including batched GraphQL PR details).
  • Add SQLite persistence + integrate it into the in-memory PR cache and ls flow.
  • Improve TUI rendering/loading (deferred search, memoized rows, tab-aware panel skeleton) and add raft split.

Reviewed changes

Copilot reviewed 15 out of 15 changed files in this pull request and generated 9 comments.

Summary per file:

  • src/lib/split-state.ts: New split state schema + read/write + topology formatting.
  • src/lib/github.ts: Replaces gh calls with REST/GraphQL fetch, adds batching and new helpers.
  • src/lib/db.ts: Adds SQLite persistence for PR lists/details/panel data.
  • src/lib/cache.ts: PRCache now reads/writes through SQLite for persistence.
  • src/lib/auth.ts: Adds cached OAuth token retrieval via gh auth token.
  • src/lib/__tests__/github.test.ts: Updates parseSearchResults tests for new input shape.
  • src/lib/__tests__/cache.test.ts: Adjusts cache test keying to avoid collisions with persisted cache.
  • src/index.tsx: Registers new split command and CLI help entry.
  • src/hooks/usePanel.ts: Tightens setter types for panel state setters.
  • src/components/skeleton.tsx: Adds PanelSkeleton for tab-specific loading placeholders.
  • src/components/preview-panel.tsx: Uses PanelSkeleton during panel loading transitions.
  • src/components/pr-table.tsx: Memoizes PRRow to reduce re-render churn.
  • src/commands/split.tsx: New TUI for viewing .raft-split.json state and opening/copying PR URLs.
  • src/commands/ls.tsx: Adds SQLite warm start, deferred search, new sorting path, and batched details fetch.
  • src/__tests__/integration.test.ts: Updates real-network repo target for integration tests.
Comments suppressed due to low confidence (1)

src/commands/ls.tsx:247

  • LsCommand now derives selectedPR from sortedPRs, but passes filteredPRs + selectedIndex into usePanel. usePanel assumes selectedIndex indexes into the allPRs array for neighbor prefetching; passing a differently-ordered array will prefetch the wrong neighbors (and can fetch unrelated PRs). Pass sortedPRs (or whichever array selectedIndex refers to) to usePanel to keep indices consistent.
  const selectedPR = sortedPRs[selectedIndex] ?? null

  // Panel state management (shared hook handles data fetching + caching + prefetching)
  const panel = usePanel(selectedPR, filteredPRs, selectedIndex)
  const { panelOpen, panelTab, splitRatio, panelFullscreen, panelData, panelLoading,


Comment on lines 84 to +106
export async function fetchAllAccountPRs(
onProgress?: (status: string) => void,
): Promise<PullRequest[]> {
onProgress?.("Discovering accounts...")
const accounts = await getGhAccounts()
const originalAccount = await getActiveAccount()
const allPRs: PullRequest[] = []
const seen = new Set<string>()

for (let i = 0; i < accounts.length; i++) {
const account = accounts[i]
onProgress?.(`Fetching PRs for ${account} (${i + 1}/${accounts.length})...`)
if (accounts.length > 1) {
try { await switchAccount(account) } catch { continue }
}
try {
const json = await runGh([
"search", "prs",
"--author=@me",
"--state=open",
"--limit=100",
"--json", "number,title,url,body,state,repository,isDraft,createdAt",
])
if (json) {
const parsed = parseSearchResults(json)
for (const pr of parsed) {
if (!seen.has(pr.url)) {
seen.add(pr.url)
allPRs.push(pr)
}
}
onProgress?.(`Found ${allPRs.length} PRs so far...`)
}
} catch { /* skip account if query fails */ }
}

onProgress?.(`Loaded ${allPRs.length} PRs across ${accounts.length} accounts`)

// Restore original account
if (accounts.length > 1 && originalAccount) {
try { await switchAccount(originalAccount) } catch { /* ignore */ }
}

return allPRs
onProgress?.("Fetching PRs...");
const json = await fetchGh("search/issues?q=is:pr+is:open+author:@me&per_page=100");
return parseSearchResults(json.items);
}

/**
* Fetch open pull requests for a specific author or all accessible repositories.
*
* - If `author` is undefined: fetches PRs across all authenticated accounts via `@me`.
* - If `author` is empty string: fetches all open PRs accessible to the current account.
* - If `author` is provided: fetches PRs by that specific author.
*
* @param author - GitHub username or empty string for all repos, undefined for @me across accounts
* @param onProgress - Optional callback for progress status messages
* @returns Array of open pull requests
*/
export async function fetchOpenPRs(
author?: string,
onProgress?: (status: string) => void,
): Promise<PullRequest[]> {
if (author === "") {
// Empty string means fetch all PRs across all repos the user has access to
onProgress?.("Fetching all open PRs...")
const json = await runGh([
"search", "prs",
"--state=open",
"--limit=1000",
"--json", "number,title,url,body,state,repository,isDraft,createdAt,author",
])
if (!json) return []
return parseSearchResults(json)
onProgress?.("Fetching all open PRs...");
const json = await fetchGh("search/issues?q=is:pr+is:open&per_page=100");
return parseSearchResults(json.items);
}
if (author) {
onProgress?.(`Fetching PRs for ${author}...`)
const json = await runGh([
"search", "prs",
`--author=${author}`,
"--state=open",
"--limit=100",
"--json", "number,title,url,body,state,repository,isDraft,createdAt,author",
])
if (!json) return []
return parseSearchResults(json)
onProgress?.(`Fetching PRs for ${author}...`);
const json = await fetchGh(`search/issues?q=is:pr+is:open+author:${author}&per_page=100`);
return parseSearchResults(json.items);
}
return fetchAllAccountPRs(onProgress)
return fetchAllAccountPRs(onProgress);
Comment on lines +272 to +274
const map = await batchFetchPRDetails([{repo, number: prNumber, url: `https://github.com/${repo}/pull/${prNumber}`}]);
const details = map.get(`https://github.com/${repo}/pull/${prNumber}`);
if (!details) throw new Error("Failed to fetch PR details");
Comment on lines +39 to +41
export function writeSplitState(repoRoot: string, state: SplitState): void {
const path = `${repoRoot}/.raft-split.json`
Bun.write(path, JSON.stringify(state, null, 2) + "\n")
Comment on lines +7 to +13
// Initialize database in ~/.config/raft/raft.sqlite
const configDir = join(homedir(), ".config", "raft");
mkdirSync(configDir, { recursive: true });

const dbPath = join(configDir, "raft.sqlite");
export const db = new Database(dbPath);

Comment on lines +7 to +19
interface SplitCommandProps {
repo?: string
}

async function getRepoRoot(): Promise<string | null> {
const proc = Bun.spawn(["git", "rev-parse", "--show-toplevel"], {
stdout: "pipe",
stderr: "pipe",
})
const stdout = await new Response(proc.stdout).text()
const code = await proc.exited
return code === 0 ? stdout.trim() : null
}
Comment on lines +11 to +18
async function getRepoRoot(): Promise<string | null> {
const proc = Bun.spawn(["git", "rev-parse", "--show-toplevel"], {
stdout: "pipe",
stderr: "pipe",
})
const stdout = await new Response(proc.stdout).text()
const code = await proc.exited
return code === 0 ? stdout.trim() : null
Comment on lines +54 to +66
export function parseSearchResults(items: any[]): PullRequest[] {
return items.map((pr) => {
const firstLine = (pr.body ?? "").split("\n")[0] ?? "";
const repoUrlParts = pr.repository_url.split("/");
const repo = `${repoUrlParts[repoUrlParts.length - 2]}/${repoUrlParts[repoUrlParts.length - 1]}`;
return {
number: pr.number,
title: pr.title,
url: pr.url,
url: pr.html_url,
body: firstLine.slice(0, 80),
state: pr.state,
isDraft: pr.isDraft,
repo: pr.repository.nameWithOwner,
isDraft: pr.draft || false,
repo,
Comment on lines +53 to +62

for (const split of splits) {
if (split.dependsOn.length === 0) {
roots.push(split)
} else {
for (const dep of split.dependsOn) {
const existing = childrenOf.get(dep) ?? []
existing.push(split)
childrenOf.set(dep, existing)
}
Comment on lines 24 to 26
  hasDetails(url: string): boolean {
-   return this.details.has(url)
+   return this.details.has(url) || getCachedPRDetails(url) !== null;
  }

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 543495e0c7


const CHUNK_SIZE = 20;
for (let i = 0; i < prs.length; i += CHUNK_SIZE) {
const chunk = prs.slice(i, i + CHUNK_SIZE);
let query = "query {\\n";

P1: Replace escaped newline in GraphQL query header

The batch query is initialized with "query {\\n", which inserts a literal backslash-n into the GraphQL document instead of whitespace. GraphQL treats \ outside a string as invalid syntax, so fetchGhGraphql will reject these batched detail requests; batchFetchPRDetails then logs and drops the chunk, leaving PR lifecycle/CI/review data missing and causing fetchPRDetails callers to fail when no entry is returned.
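A minimal sketch of the fix: build the batched document from lines joined with a real `"\n"` rather than a string containing the two characters `\` and `n`. The alias scheme and field selection below are illustrative, not the PR's exact query.

```typescript
interface PRRef { repo: string; number: number }

// Builds one GraphQL document per chunk, aliasing each repository lookup
// as pr_0, pr_1, ... so results can be mapped back to their PR URLs.
function buildBatchQuery(chunk: PRRef[]): string {
  const lines = ["query {"]; // joined with "\n" below — no literal backslash-n
  chunk.forEach((pr, i) => {
    const [owner, name] = pr.repo.split("/");
    lines.push(
      `  pr_${i}: repository(owner: ${JSON.stringify(owner)}, name: ${JSON.stringify(name)}) {`,
      `    pullRequest(number: ${pr.number}) { additions deletions }`,
      `  }`,
    );
  });
  lines.push("}");
  return lines.join("\n");
}
```

Joining at the end also avoids repeatedly concatenating onto one growing string inside the loop.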


fetchGh(`repos/${repo}/issues/${prNumber}/comments`),
fetchGh(`repos/${repo}/pulls/${prNumber}/comments`),
fetchGh(`repos/${repo}/pulls/${prNumber}/files?per_page=100`),
fetchReviewThreads(repo, prNumber),

P2: Catch review-thread fetch failures in panel loading

Including fetchReviewThreads(repo, prNumber) directly in Promise.all makes the entire panel fetch fail if the GraphQL review-thread call errors (for example due to transient GraphQL/rate-limit issues), even when body/comments/files REST calls succeeded. This is a regression from the previous behavior where thread metadata failures degraded to an empty thread list instead of blanking the preview panel.
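The previous degraded behavior can be restored by catching only the thread fetch inside the `Promise.all`. Both fetchers below are stand-ins for the real REST/GraphQL helpers; the shape is a sketch, not the PR's actual panel loader.

```typescript
async function loadPanelSafe(
  fetchBody: () => Promise<string>,
  fetchReviewThreads: () => Promise<string[]>,
): Promise<{ body: string; threads: string[] }> {
  const [body, threads] = await Promise.all([
    fetchBody(),
    // Degrade thread failures to [] so body/comments/files still render.
    fetchReviewThreads().catch(() => [] as string[]),
  ]);
  return { body, threads };
}
```

A transient GraphQL or rate-limit error then yields an empty thread list instead of blanking the whole preview panel.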


Comment on lines +88 to +89
const json = await fetchGh("search/issues?q=is:pr+is:open+author:@me&per_page=100");
return parseSearchResults(json.items);

P2: Reintroduce account iteration in all-account PR fetch

fetchAllAccountPRs now executes a single author:@me search and returns immediately, so it no longer aggregates PRs from multiple authenticated GitHub CLI accounts. As a result, flows that rely on this default path to scan "across all accounts" will silently omit PRs owned under non-active accounts.
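Restoring the multi-account path would mean running the `author:@me` search once per authenticated account (switching accounts to mint each token) and merging the results. The fetch-per-account part depends on the PR's helpers, but the dedup half of the removed loop is pure and can be sketched as:

```typescript
interface SearchPR { url: string; title: string }

// Re-creates the `seen`-set aggregation from the removed loop:
// per-account results are merged, first sighting of a URL wins.
function mergeAccountResults(perAccount: SearchPR[][]): SearchPR[] {
  const seen = new Set<string>();
  const merged: SearchPR[] = [];
  for (const accountPRs of perAccount) {
    for (const pr of accountPRs) {
      if (seen.has(pr.url)) continue; // same PR visible from two accounts
      seen.add(pr.url);
      merged.push(pr);
    }
  }
  return merged;
}
```

The per-account fetches could then run sequentially (as before) or concurrently, feeding their arrays into this merge.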



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🧹 Nitpick comments (6)
src/components/skeleton.tsx (2)

58-58: The height prop is declared but unused.

The component accepts a height parameter that isn't referenced in any of the returned layouts. Either use it to constrain the skeleton height or remove it from the props interface.


58-58: Consider using PanelTab type for better type safety.

The tab parameter is typed as string, but a PanelTab type exists in src/lib/types.ts (used by preview-panel.tsx). Using the union type would provide compile-time safety against typos and make the valid values explicit.

🔧 Proposed fix
+import type { PanelTab } from "../lib/types"
+
-export function PanelSkeleton({ tab, width, height }: { tab: string; width: number; height: number }) {
+export function PanelSkeleton({ tab, width, height }: { tab: PanelTab; width: number; height: number }) {
src/lib/split-state.ts (1)

28-37: Consider validating the parsed JSON structure.

The as SplitState cast assumes the JSON matches the expected shape. If the file contains malformed or outdated data (e.g., missing splits array), callers may encounter runtime errors when accessing properties.

A minimal check (e.g., verifying splits is an array) would make the function more robust.
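A minimal sketch of that validation, assuming only that `SplitState` has a `splits` array (the rest of the shape isn't shown in this hunk):

```typescript
// Returns null for malformed JSON or a missing/invalid `splits` array,
// so callers treat bad state files the same as absent ones.
function parseSplitState(text: string): { splits: unknown[] } | null {
  let parsed: unknown;
  try {
    parsed = JSON.parse(text);
  } catch {
    return null; // malformed JSON behaves like a missing state file
  }
  if (typeof parsed !== "object" || parsed === null) return null;
  if (!Array.isArray((parsed as { splits?: unknown }).splits)) return null;
  return parsed as { splits: unknown[] };
}
```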

src/lib/auth.ts (1)

19-19: Redundant trim() call.

The safeSpawn function already trims stdout by default (the trim option defaults to true per src/lib/process.ts:45). This extra .trim() is harmless but unnecessary.

🔧 Suggested simplification
-  cachedToken = stdout.trim();
+  cachedToken = stdout;
src/lib/github.ts (1)

423-433: CI status returns "ready" when no check runs exist.

When there are no check runs (line 427), the function returns "ready". This may be misleading for repos that don't use GitHub Actions/checks — "unknown" might be more accurate since we can't confirm the code is actually ready.

💡 Consider alternative
-    if (checkRuns.length === 0) return "ready";
+    if (checkRuns.length === 0) return "unknown";

Or keep "ready" but document that "no checks" is treated as implicitly passing.

src/commands/ls.tsx (1)

343-346: Inconsistent array reference for panel navigation bounds.

Lines 343-344 use filteredPRs.length for up/down navigation bounds, but the component elsewhere uses sortedPRs (e.g., line 492 uses sortedPRs.length). Since sortedPRs is derived from filteredPRs with the same length, this is functionally correct but inconsistent.

♻️ For consistency
       if (key.name === "down") {
-        setSelectedIndex((i) => Math.min(filteredPRs.length - 1, i + 1))
+        setSelectedIndex((i) => Math.min(sortedPRs.length - 1, i + 1))
       } else if (key.name === "up") {
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/commands/split.tsx`:
- Around line 11-19: getRepoRoot currently spawns a Bun process but never cleans
it up, risking leaked fds; update getRepoRoot to ensure the child process is
always terminated/unreferenced (call proc.kill() and/or proc.unref() in a
finally block) after awaiting proc.exited and reading stdout, or replace the
spawn with the existing safeSpawn helper from ../lib/process to get consistent
cleanup behavior; reference the getRepoRoot function and safeSpawn so reviewers
can locate and apply the change.
- Around line 98-99: The code uses the macOS-only command "open" (e.g.,
Bun.spawn(["open", splits[selectedIndex].prUrl!])) which fails silently on other
platforms; create a cross-platform helper (e.g., openUrl(url: string)) that
checks process.platform and calls Bun.spawn with ["open", url] on darwin,
["xdg-open", url] on linux, and ["cmd", "/c", "start", "", url] on win32 (or the
equivalent start invocation), await the spawn, and surface errors instead of
silencing stdout/stderr; replace direct Bun.spawn calls in split.tsx
(splits[selectedIndex].prUrl), stack.tsx, ls.tsx, and log.tsx to use openUrl and
use showFlash to report success or failure.

In `@src/lib/cache.ts`:
- Around line 24-26: hasDetails() and hasPanelData() currently call
getCachedPRDetails()/getCachedPanelData() to check DB existence then discard the
parsed result, causing a second DB read when getDetails()/getPanelData() is
called; modify hasDetails(url) and hasPanelData(key) to, when the in-memory Map
(this.details / this.panels) misses and the respective getCached... call returns
non-null, parse/assign that returned value into the in-memory cache
(this.details.set(url, parsed) or this.panels.set(key, parsed)) and then return
true; this ensures the subsequent getDetails() / getPanelData() use the cached
value and avoid duplicate DB queries.

In `@src/lib/db.ts`:
- Around line 8-12: Wrap the filesystem and DB creation in a try-catch so module
initialization cannot throw: catch failures from mkdirSync and new Database, log
the error, and fall back to an in-memory database (use dbPath fallback
":memory:") so the exported symbol db always exists; ensure you reference and
protect the existing symbols configDir, dbPath, mkdirSync, and Database and
export a usable db even when the on-disk initialization fails.

In `@src/lib/github.ts`:
- Around line 96-104: The search URLs built in the branch handling author use
the raw author string which can break the query if it contains spaces or special
characters; update the code that calls fetchGh (the two cases using
fetchGh("search/issues?q=...") and
fetchGh(`search/issues?q=...author:${author}...`)) to pass an encoded author
term using encodeURIComponent(author) when author is used in the query, and
ensure the non-author branch remains unchanged; keep
parseSearchResults(json.items) as the return.
- Around line 190-226: The GraphQL query built in the loop (variable query
inside the chunk-processing code) directly interpolates repo owner/name (pr.repo
split into owner and name) and is vulnerable to injection or breaking on special
characters; fix by switching to a variables-based request (use fetchGhGraphql
which supports variables) or by validating/escaping owner and name before
interpolation: restructure the batching to send one parameterized GraphQL query
that uses aliases (e.g., pr_0, pr_1) mapped to variables for owner/name, or fall
back to querying repos individually via fetchGhGraphql with safe variables
instead of string interpolation.

In `@src/lib/split-state.ts`:
- Around line 39-42: Change writeSplitState to be asynchronous and await the
Bun.write() call: update the function signature writeSplitState(repoRoot:
string, state: SplitState) to return Promise<void> (make it async) and await
Bun.write(path, JSON.stringify(state, null, 2) + "\n") so the write completes
before the function resolves; propagate or let exceptions surface so callers can
handle write failures.

---

Nitpick comments:
In `@src/commands/ls.tsx`:
- Around line 343-346: The navigation bounds use filteredPRs.length while the
rest of the component uses sortedPRs, so update the up/down handlers to use
sortedPRs.length for consistency: modify the setSelectedIndex calls in the key
handling (the block that checks key.name === "down" / "up") to reference
sortedPRs.length instead of filteredPRs.length (keep using setSelectedIndex and
the same Math.min/Math.max logic).

In `@src/components/skeleton.tsx`:
- Line 58: PanelSkeleton currently declares a height prop that is never used;
either remove height from the props signature or apply it to the rendered
skeleton container. Update the PanelSkeleton({ tab, width, height }) props (and
any callers) to drop height if unused, or use height to set the skeleton
container style/inline style/class (e.g., constrain the outer div or <Skeleton>
component) so the prop actually controls vertical size; ensure the prop type is
kept in sync with the function signature and any consuming components.
- Line 58: The PanelSkeleton function currently types the tab parameter as
string; change it to use the existing union type PanelTab for stronger type
safety by importing PanelTab from src/lib/types.ts and updating the signature
export function PanelSkeleton({ tab, width, height }: { tab: PanelTab; width:
number; height: number }); ensure any callers pass a value assignable to
PanelTab (or cast/update them) so the code compiles.

In `@src/lib/auth.ts`:
- Line 19: The assignment to cachedToken unnecessarily calls .trim() because
safeSpawn already returns trimmed stdout; update the code in the token-fetching
logic to assign stdout directly to cachedToken (remove the redundant .trim()
call). Locate the use of cachedToken and the call to safeSpawn in auth-related
functions (reference symbol: cachedToken and the call site that invokes
safeSpawn) and replace cachedToken = stdout.trim() with cachedToken = stdout to
simplify the code without changing behavior.

In `@src/lib/github.ts`:
- Around line 423-433: fetchCIStatus currently returns "ready" when there are no
check runs which is misleading for repos without CI; change the branch in
fetchCIStatus so that when checkRuns.length === 0 it returns "unknown" instead
of "ready" (i.e., update the early return in the fetchCIStatus function), and
adjust any tests or callers that assumed no-checks == "ready" to expect
"unknown" or document the behavior accordingly.

In `@src/lib/split-state.ts`:
- Around line 28-37: readSplitState currently casts parsed JSON to SplitState
unconditionally, which can cause runtime errors if the file is malformed; update
readSplitState to validate the parsed object before returning it: after
JSON.parse(text) check that the result is an object and that required properties
like splits exist and are of expected types (e.g.,
Array.isArray(parsed.splits)), optionally validate elements if needed, and
return null (or throw) when validation fails so callers never receive an invalid
SplitState; reference the readSplitState function and the SplitState type and
specifically validate the "splits" array.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: b6681bb2-c8ca-443f-869f-f14ba05de9a3

📥 Commits

Reviewing files that changed from the base of the PR and between a2a9d59 and 543495e.

📒 Files selected for processing (15)
  • src/__tests__/integration.test.ts
  • src/commands/ls.tsx
  • src/commands/split.tsx
  • src/components/pr-table.tsx
  • src/components/preview-panel.tsx
  • src/components/skeleton.tsx
  • src/hooks/usePanel.ts
  • src/index.tsx
  • src/lib/__tests__/cache.test.ts
  • src/lib/__tests__/github.test.ts
  • src/lib/auth.ts
  • src/lib/cache.ts
  • src/lib/db.ts
  • src/lib/github.ts
  • src/lib/split-state.ts

Comment on lines +11 to +19
async function getRepoRoot(): Promise<string | null> {
const proc = Bun.spawn(["git", "rev-parse", "--show-toplevel"], {
stdout: "pipe",
stderr: "pipe",
})
const stdout = await new Response(proc.stdout).text()
const code = await proc.exited
return code === 0 ? stdout.trim() : null
}

⚠️ Potential issue | 🟡 Minor

Process cleanup missing — potential file descriptor leak.

Unlike safeSpawn in src/lib/process.ts, this function doesn't call proc.kill() or proc.unref() after awaiting the result. If getRepoRoot is called repeatedly (e.g., during component remounts), this could accumulate file descriptors.

🛡️ Suggested fix using try/finally cleanup
 async function getRepoRoot(): Promise<string | null> {
   const proc = Bun.spawn(["git", "rev-parse", "--show-toplevel"], {
     stdout: "pipe",
     stderr: "pipe",
   })
-  const stdout = await new Response(proc.stdout).text()
-  const code = await proc.exited
-  return code === 0 ? stdout.trim() : null
+  try {
+    const stdout = await new Response(proc.stdout).text()
+    const code = await proc.exited
+    return code === 0 ? stdout.trim() : null
+  } finally {
+    try { proc.kill() } catch {}
+    proc.unref()
+  }
 }

Alternatively, consider reusing safeSpawn from ../lib/process for consistent cleanup handling.


Comment on lines +98 to +99
Bun.spawn(["open", splits[selectedIndex].prUrl!], { stdout: "ignore", stderr: "ignore" })
showFlash("Opening PR...")

⚠️ Potential issue | 🟠 Major

open command is macOS-specific and affects multiple commands.

The open command only works on macOS. On Linux, xdg-open is used, and on Windows, start or cmd /c start. This will fail silently on non-macOS systems due to stdout: "ignore", stderr: "ignore".

This pattern appears in 6 locations across 4 files:

  • src/commands/split.tsx:98
  • src/commands/stack.tsx:125, 147
  • src/commands/ls.tsx:377, 497
  • src/commands/log.tsx:143

Implement a cross-platform solution to handle URL opening on macOS, Linux, and Windows.
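A sketch of the platform-to-argv mapping suggested here; the caller would hand the result to `Bun.spawn`, await the exit code, and report failure via `showFlash` instead of silencing stdout/stderr.

```typescript
// Picks the OS launcher for a URL. The platform parameter is injectable
// so the mapping can be tested without changing process.platform.
function openUrlCommand(url: string, platform: string = process.platform): string[] {
  switch (platform) {
    case "darwin":
      return ["open", url];
    case "win32":
      // The empty "" is the window title `start` would otherwise
      // consume, guarding URLs that contain characters like "&".
      return ["cmd", "/c", "start", "", url];
    default:
      return ["xdg-open", url]; // linux and other unixes
  }
}
```

Centralizing this in one helper also makes the six call sites listed above a mechanical find-and-replace.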


Comment on lines 24 to 26
  hasDetails(url: string): boolean {
-   return this.details.has(url)
+   return this.details.has(url) || getCachedPRDetails(url) !== null;
  }

⚠️ Potential issue | 🟠 Major

hasDetails() and hasPanelData() cause redundant DB queries.

When data exists in the database but not in memory, hasDetails() fetches and parses the data via getCachedPRDetails(), then discards it. A subsequent getDetails() call re-queries the same data. The usage pattern in src/commands/ls.tsx:212-226 shows hasDetails() filtering followed by getDetails() on the same URLs, doubling DB I/O.

🐛 Proposed fix: Cache the result when checking existence
   hasDetails(url: string): boolean {
-    return this.details.has(url) || getCachedPRDetails(url) !== null;
+    if (this.details.has(url)) return true;
+    const fromDb = getCachedPRDetails(url);
+    if (fromDb) {
+      this.details.set(url, fromDb);
+      return true;
+    }
+    return false;
   }
   hasPanelData(url: string): boolean {
-    return this.panelData.has(url) || getCachedPRPanelData(url) !== null;
+    if (this.panelData.has(url)) return true;
+    const fromDb = getCachedPRPanelData(url);
+    if (fromDb) {
+      this.panelData.set(url, fromDb);
+      return true;
+    }
+    return false;
   }

Also applies to: 43-45

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/cache.ts` around lines 24 - 26, hasDetails() and hasPanelData()
currently call getCachedPRDetails()/getCachedPRPanelData() to check DB existence
and then discard the parsed result, causing a second DB read when
getDetails()/getPanelData() is called; modify hasDetails(url) and
hasPanelData(url) so that, when the in-memory Map (this.details / this.panelData)
misses and the respective getCached... call returns non-null, they store the
returned value in the in-memory cache (this.details.set(url, fromDb) or
this.panelData.set(url, fromDb)) and then return true; this ensures the
subsequent getDetails() / getPanelData() calls use the cached value and avoid
duplicate DB queries.

Comment on lines +8 to +12
const configDir = join(homedir(), ".config", "raft");
mkdirSync(configDir, { recursive: true });

const dbPath = join(configDir, "raft.sqlite");
export const db = new Database(dbPath);

⚠️ Potential issue | 🟡 Minor

Module initialization can throw with no recovery path.

If mkdirSync fails (e.g., permission denied) or new Database() fails (e.g., disk full, locked), the entire module import will throw, crashing any consumer. Consider wrapping initialization in a try-catch with a fallback (e.g., in-memory DB or graceful degradation).

🛡️ Suggested defensive initialization
+let db: Database;
+try {
   const configDir = join(homedir(), ".config", "raft");
   mkdirSync(configDir, { recursive: true });
-
   const dbPath = join(configDir, "raft.sqlite");
-export const db = new Database(dbPath);
+  db = new Database(dbPath);
+} catch (e) {
+  // Fallback to in-memory database if disk access fails
+  console.error("Failed to initialize persistent cache, using in-memory:", e);
+  db = new Database(":memory:");
+}
+export { db };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/db.ts` around lines 8 - 12, Wrap the filesystem and DB creation in a
try-catch so module initialization cannot throw: catch failures from mkdirSync
and new Database, log the error, and fall back to an in-memory database (use
dbPath fallback ":memory:") so the exported symbol db always exists; ensure you
reference and protect the existing symbols configDir, dbPath, mkdirSync, and
Database and export a usable db even when the on-disk initialization fails.

Comment on lines 96 to +104
  if (author === "") {
    // Empty string means fetch all PRs across all repos the user has access to
-   onProgress?.("Fetching all open PRs...")
-   const json = await runGh([
-     "search", "prs",
-     "--state=open",
-     "--limit=1000",
-     "--json", "number,title,url,body,state,repository,isDraft,createdAt,author",
-   ])
-   if (!json) return []
-   return parseSearchResults(json)
+   onProgress?.("Fetching all open PRs...");
+   const json = await fetchGh("search/issues?q=is:pr+is:open&per_page=100");
+   return parseSearchResults(json.items);
  }
  if (author) {
-   onProgress?.(`Fetching PRs for ${author}...`)
-   const json = await runGh([
-     "search", "prs",
-     `--author=${author}`,
-     "--state=open",
-     "--limit=100",
-     "--json", "number,title,url,body,state,repository,isDraft,createdAt,author",
-   ])
-   if (!json) return []
-   return parseSearchResults(json)
+   onProgress?.(`Fetching PRs for ${author}...`);
+   const json = await fetchGh(`search/issues?q=is:pr+is:open+author:${author}&per_page=100`);
+   return parseSearchResults(json.items);

⚠️ Potential issue | 🟡 Minor

URL parameters not encoded — special characters may break queries.

If author contains special characters (e.g., +, &, spaces, or Unicode), the URL will be malformed. Use encodeURIComponent for user-provided values.

🛡️ Suggested fix
   if (author === "") {
     onProgress?.("Fetching all open PRs...");
     const json = await fetchGh("search/issues?q=is:pr+is:open&per_page=100");
     return parseSearchResults(json.items);
   }
   if (author) {
     onProgress?.(`Fetching PRs for ${author}...`);
-    const json = await fetchGh(`search/issues?q=is:pr+is:open+author:${author}&per_page=100`);
+    const json = await fetchGh(`search/issues?q=is:pr+is:open+author:${encodeURIComponent(author)}&per_page=100`);
     return parseSearchResults(json.items);
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/github.ts` around lines 96 - 104, the search URLs built in the
author-handling branch use the raw author string, which can break the query if
it contains spaces or special characters; update the code that calls fetchGh (the
two cases using fetchGh("search/issues?q=...") and
fetchGh(`search/issues?q=...author:${author}...`)) to pass an encoded author
term using encodeURIComponent(author) when author is used in the query, and
ensure the non-author branch remains unchanged; keep
parseSearchResults(json.items) as the return.

Comment on lines +190 to 226
  let query = "query {\\n";
  for (let j = 0; j < chunk.length; j++) {
    const pr = chunk[j];
    const [owner, name] = pr.repo.split("/");
    query += `
      pr_${j}: repository(owner: "${owner}", name: "${name}") {
        pullRequest(number: ${pr.number}) {
          additions
          deletions
          comments { totalCount }
          headRefName
          headRefOid
          mergeable
          reviews(first: 100) {
            nodes {
              author { login }
              state
            }
          }
          reviewThreads(first: 100) {
            nodes {
              isResolved
            }
          }
          commits(last: 1) {
            nodes {
              commit {
                statusCheckRollup {
                  state
                }
              }
            }
          }
        }
      }
-     return result
-   } catch {
-     continue
-   }
    `;
  }

⚠️ Potential issue | 🟠 Major

GraphQL query construction is vulnerable to injection.

Repository owner/name values are interpolated directly into the query string without escaping. If a repo name contains " or other special characters, the query will break or behave unexpectedly. Use GraphQL variables instead of string interpolation.

🛡️ Suggested fix using variables

The current approach builds the query dynamically with string interpolation. A safer approach would be to use a single query with variables, though this requires restructuring the batching logic. At minimum, validate/escape the owner and name:

       const [owner, name] = pr.repo.split("/");
+      // Basic validation - repo names shouldn't contain quotes
+      if (owner.includes('"') || name.includes('"')) {
+        console.error(`Invalid repo name: ${pr.repo}`);
+        continue;
+      }
       query += `
         pr_${j}: repository(owner: "${owner}", name: "${name}") {

For a more robust solution, consider using a parameterized approach with aliases that map to variables, or process repos individually with the existing fetchGhGraphql which properly handles variables.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/github.ts` around lines 190 - 226, The GraphQL query built in the
loop (variable query inside the chunk-processing code) directly interpolates
repo owner/name (pr.repo split into owner and name) and is vulnerable to
injection or breaking on special characters; fix by switching to a
variables-based request (use fetchGhGraphql which supports variables) or by
validating/escaping owner and name before interpolation: restructure the
batching to send one parameterized GraphQL query that uses aliases (e.g., pr_0,
pr_1) mapped to variables for owner/name, or fall back to querying repos
individually via fetchGhGraphql with safe variables instead of string
interpolation.
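
One shape the variables-based batching could take is sketched below; buildBatchedQuery is a hypothetical helper name and the field selection is trimmed to a couple of fields, so treat this as an illustration of the alias-plus-variables pattern rather than the project's actual code:

```typescript
type PrRef = { repo: string; number: number };

// Builds one aliased GraphQL query per chunk, passing owner/name/number as
// variables so repo names are never interpolated into the query text.
function buildBatchedQuery(chunk: PrRef[]): {
  query: string;
  variables: Record<string, string | number>;
} {
  const decls: string[] = [];
  const bodies: string[] = [];
  const variables: Record<string, string | number> = {};
  chunk.forEach((pr, j) => {
    const [owner, name] = pr.repo.split("/");
    variables[`owner_${j}`] = owner;
    variables[`name_${j}`] = name;
    variables[`num_${j}`] = pr.number;
    decls.push(`$owner_${j}: String!`, `$name_${j}: String!`, `$num_${j}: Int!`);
    bodies.push(
      `pr_${j}: repository(owner: $owner_${j}, name: $name_${j}) {
        pullRequest(number: $num_${j}) { additions deletions }
      }`,
    );
  });
  return {
    query: `query (${decls.join(", ")}) {\n${bodies.join("\n")}\n}`,
    variables,
  };
}
```

The resulting { query, variables } pair would then go to whatever GraphQL caller the project already has (the review mentions fetchGhGraphql supports variables); because values travel as variables, a repo name containing quotes can no longer break or alter the query.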

Comment on lines +39 to +42
export function writeSplitState(repoRoot: string, state: SplitState): void {
  const path = `${repoRoot}/.raft-split.json`
  Bun.write(path, JSON.stringify(state, null, 2) + "\n")
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Does Bun.write return a Promise?

💡 Result:

Yes, Bun.write returns a Promise that resolves with the number of bytes written.


Make writeSplitState async and await Bun.write().

Bun.write() returns a Promise<number>, but the current code calls it without awaiting in a synchronous function. The write operation is fire-and-forget, meaning the function returns before the file is written, potentially causing data loss or race conditions.

Proposed fix
-export function writeSplitState(repoRoot: string, state: SplitState): void {
+export async function writeSplitState(repoRoot: string, state: SplitState): Promise<void> {
   const path = `${repoRoot}/.raft-split.json`
-  Bun.write(path, JSON.stringify(state, null, 2) + "\n")
+  await Bun.write(path, JSON.stringify(state, null, 2) + "\n")
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/split-state.ts` around lines 39 - 42, Change writeSplitState to be
asynchronous and await the Bun.write() call: update the function signature
writeSplitState(repoRoot: string, state: SplitState) to return Promise<void>
(make it async) and await Bun.write(path, JSON.stringify(state, null, 2) + "\n")
so the write completes before the function resolves; propagate or let exceptions
surface so callers can handle write failures.
