
feat: implement intent-sliced semantic diff grouping#5

Open
OctavianTocan wants to merge 2 commits into main from intent-sliced-diffs

Conversation


@OctavianTocan OctavianTocan commented Mar 18, 2026

Summary

  • Replaced alphabetical file sorting with Semantic Grouping by Intent (Source Code, Tests, Config, Boilerplate).
  • Annihilated diff noise by aggressively folding Boilerplate files (yarn.lock, generated types, etc.) by default.
  • Added Shift+B keybind to expand hidden boilerplate files.
  • Added automatic pagination for PR files API to handle PRs with >100 files seamlessly.
  • Removed dead collapseDiffRegions code which was incompatible with OpenTUI's diff parser.
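The intent classifier described above can be sketched as a small pattern-matching function. This is an illustrative sketch only: the real `getFileIntent` in `src/lib/diff-utils.ts` may use different patterns and category orderings.

```typescript
// Illustrative sketch; the real getFileIntent may differ.
type FileIntent = "Source Code" | "Tests" | "Config" | "Boilerplate";

// Known generated/lockfile patterns that get folded by default.
const BOILERPLATE_PATTERNS: RegExp[] = [
  /(^|\/)yarn\.lock$/,
  /(^|\/)package-lock\.json$/,
  /\.generated\.[jt]sx?$/,
  /(^|\/)dist\//,
];

function getFileIntent(filename: string): FileIntent {
  const path = filename.toLowerCase();
  if (BOILERPLATE_PATTERNS.some((p) => p.test(path))) return "Boilerplate";
  if (/(^|\/)__tests__\//.test(path) || /\.(test|spec)\.[jt]sx?$/.test(path)) return "Tests";
  if (/\.(json|ya?ml|toml|ini)$/.test(path) || /(^|\/)(dockerfile|makefile)$/.test(path)) return "Config";
  return "Source Code";
}
```

Check order matters here: boilerplate and tests are matched before config, so a generated lockfile like `package-lock.json` never lands in the Config group.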

Summary by Sourcery

Replace gh CLI–based GitHub integration with token-authenticated REST/GraphQL APIs, add semantic intent-based diff grouping with boilerplate folding, introduce persistent PR caching and split-branch inspection, and refine list/panel UX and performance.

New Features:

  • Group PR file diffs by semantic intent (Source Code, Tests, Config, Boilerplate) with boilerplate sections folded by default and toggleable via Shift+B.
  • Add SQLite-backed caching for pull requests, PR details, and panel data to persist state across runs and speed up loading.
  • Introduce a split view command to inspect split-branch state and topology from a .raft-split.json file, including an interactive TUI with navigation and status badges.
  • Add panel skeleton placeholders for each tab to improve perceived loading while PR panel data is fetched.
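The SQLite-backed caching above amounts to a two-tier (memory plus disk) write-through cache. Below is a minimal, runnable sketch of that shape; the PR uses `bun:sqlite` for the persistent tier, while here it is abstracted behind an interface (a plain `Map` stands in), and the `TwoTierCache`/`DetailsStore` names are assumptions, not the PR's actual API.

```typescript
// Persistent tier abstracted away; the PR presumably backs this with SQLite.
interface DetailsStore<T> {
  get(url: string): T | null;
  set(url: string, value: T): void;
}

class TwoTierCache<T> {
  private mem = new Map<string, T>();
  constructor(private store: DetailsStore<T>) {}

  getDetails(url: string): T | undefined {
    const hit = this.mem.get(url);
    if (hit !== undefined) return hit;
    const persisted = this.store.get(url); // hydrate from the persistent tier
    if (persisted !== null) {
      this.mem.set(url, persisted);
      return persisted;
    }
    return undefined;
  }

  setDetails(url: string, value: T): void {
    this.mem.set(url, value);   // fast in-memory tier
    this.store.set(url, value); // write through to the persistent tier
  }
}
```

This write-through layout is what lets PR lists survive restarts while keeping reads on the hot path in memory.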

Bug Fixes:

  • Prevent state updates after unmount when loading PRs and stabilize selection/scroll behavior by decoupling filtered vs sorted lists and using refs for scroll offset.

Enhancements:

  • Refactor GitHub access to use a shared OAuth token from the gh CLI with direct REST and GraphQL calls, simplifying multi-account handling and improving reliability.
  • Batch-fetch PR details via GraphQL and memoize urgency calculations and counts to improve performance of attention-based sorting and list rendering.
  • Paginate PR files API requests to support large pull requests with more than 100 changed files.
  • Enhance diff panel usability by summarizing and collapsing boilerplate and generated files while keeping source, test, and config changes prominent.
  • Optimize the PR table by memoizing row rendering and deferring search input to avoid unnecessary recomputation during typing.
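The pagination enhancement can be sketched as a loop that keeps requesting pages until a short page signals the end. The `fetchAllPRFiles` name and the injected `fetchPage` callback are illustrative, not the PR's actual API; in the real code the page fetch would presumably go through the authenticated GitHub helper with `per_page=100`.

```typescript
// Fetch every page of a paginated endpoint. fetchPage is injected so the
// loop is testable offline; a page shorter than perPage marks the end.
async function fetchAllPRFiles<T>(
  fetchPage: (page: number) => Promise<T[]>,
  perPage = 100,
): Promise<T[]> {
  const all: T[] = [];
  for (let page = 1; ; page++) {
    const batch = await fetchPage(page);
    all.push(...batch);
    if (batch.length < perPage) break; // short page: no more results
  }
  return all;
}
```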

Tests:

  • Update GitHub parsing and integration tests to align with the new REST/GraphQL data shapes and switch integration scenarios to use the cli/cli repository.
  • Harden PR cache tests to avoid key collisions and validate the new cache semantics.

Summary by CodeRabbit

Release Notes

  • New Features

    • Added "split" command for managing raft splits with interactive UI and keyboard navigation
    • Implemented persistent database caching for faster PR loading and offline availability
    • Added grouped file view with boilerplate visibility toggle for cleaner diffs
  • Improvements

    • Enhanced PR sorting by urgency and age for better prioritization
    • Optimized batch fetching for PR details to improve performance
    • Improved loading states with skeleton screens for better UX
    • Deferred search queries to reduce unnecessary re-renders

- Replaced 140+ `gh` CLI subprocess calls with native `fetch` via GitHub API
- Added in-memory OAuth token caching via `gh auth token`
- Added local SQLite persistence using `bun:sqlite` for zero-latency PR lists
- Optimized React rendering cascade with `useDeferredValue` and `React.memo`
- Implemented `PanelSkeleton` for smooth, tab-aware loading transitions
- Batched GraphQL PR details queries into chunks to prevent rate limits
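A token-injecting fetch helper in the spirit of the bullets above might look like the following. `fetchGh` and `getGithubToken` exist in the PR, but these signatures are assumptions; the token getter and fetch implementation are injected here so the error-normalization path runs offline.

```typescript
// Hedged sketch of a token-authenticated GitHub REST helper; the real
// fetchGh in src/lib/github.ts may have a different signature.
type FetchLike = (
  url: string,
  init?: { headers?: Record<string, string> },
) => Promise<Response>;

async function fetchGh<T>(
  path: string,
  getToken: () => Promise<string>, // e.g. cached output of `gh auth token`
  fetchImpl: FetchLike = fetch,
): Promise<T> {
  const token = await getToken();
  const res = await fetchImpl(`https://api.github.com/${path}`, {
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: "application/vnd.github+json",
    },
  });
  if (!res.ok) {
    // Normalize every HTTP failure into a single error shape.
    throw new Error(`GitHub API ${res.status} for ${path}`);
  }
  return res.json() as Promise<T>;
}
```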
Copilot AI review requested due to automatic review settings March 18, 2026 17:20

sourcery-ai bot commented Mar 18, 2026

Reviewer's Guide

Refactors GitHub integration to use the REST/GraphQL APIs with a shared auth helper, adds intent-based diff grouping with boilerplate folding in the files panel, and brings caching, performance, and UX improvements to PR listing and preview, plus a new split-branch inspection command.

Sequence diagram for loading PR list with caching and GitHub API

sequenceDiagram
  actor User
  participant LsCommand
  participant PRCache
  participant DB as DbModule
  participant Github as GithubApi
  participant GitHubAPI

  User->>LsCommand: run raft ls
  activate LsCommand
  LsCommand->>DB: getCachedPRs()
  DB-->>LsCommand: cachedPullRequests[]
  alt hasCached
    LsCommand->>LsCommand: sort cachedPullRequests
    LsCommand-->>User: initial PR list
  else noCached
    LsCommand->>LsCommand: show loading state
  end

  LsCommand->>Github: fetchOpenPRs(author,onProgress)
  activate Github
  Github->>Github: fetchGh("search/issues?...&per_page=100")
  Github->>GitHubAPI: HTTPS request (REST)
  GitHubAPI-->>Github: JSON items[]
  Github->>Github: parseSearchResults(items)
  Github-->>LsCommand: pullRequests[]
  deactivate Github

  LsCommand->>LsCommand: sort pullRequests
  LsCommand->>DB: cachePRs(pullRequests)
  LsCommand-->>User: updated PR list

  loop for each visible PR needing details
    LsCommand->>PRCache: hasDetails(pr.url)?
    alt notCached
      LsCommand->>Github: batchFetchPRDetails(prsToFetch)
      activate Github
      Github->>Github: build GraphQL query
      Github->>GitHubAPI: HTTPS request (GraphQL)
      GitHubAPI-->>Github: data for PR batch
      Github-->>LsCommand: Map<url,PRDetails>
      deactivate Github
      LsCommand->>PRCache: setDetails(url,details)
      PRCache->>DB: cachePRDetails(url,details)
    else cached
      PRCache-->>LsCommand: PRDetails
    end
  end

  LsCommand->>LsCommand: compute urgencyMap, sortedPRs
  LsCommand-->>User: attention-sorted PR list
  deactivate LsCommand

Sequence diagram for semantic file grouping and boilerplate toggle

sequenceDiagram
  actor User
  participant PanelFiles
  participant DiffUtils

  User->>PanelFiles: open Files tab with files[]
  activate PanelFiles
  PanelFiles->>PanelFiles: useMemo group files by intent
  loop for each file
    PanelFiles->>DiffUtils: getFileIntent(file.filename)
    DiffUtils-->>PanelFiles: intent (Source_Code|Tests|Config|Boilerplate)
    PanelFiles->>PanelFiles: add file to groups[intent]
  end

  PanelFiles->>User: render groups in intentOrder
  Note over PanelFiles: Boilerplate group is folded by default

  User->>PanelFiles: press Shift+B
  PanelFiles->>PanelFiles: toggle showBoilerplate

  alt showBoilerplate is true
    PanelFiles->>User: render Boilerplate header and all boilerplate file diffs
  else showBoilerplate is false
    PanelFiles->>User: render collapsed Boilerplate summary row
  end

  deactivate PanelFiles

ER diagram for new SQLite PR cache schema

erDiagram
  pull_requests {
    TEXT url PK
    JSON data
    DATETIME updated_at
  }

  pr_details {
    TEXT url PK
    JSON data
    DATETIME updated_at
  }

  pr_panel_data {
    TEXT url PK
    JSON data
    DATETIME updated_at
  }

  pull_requests ||--|| pr_details : same_pr_url
  pull_requests ||--|| pr_panel_data : same_pr_url

File-Level Changes

Change Details Files
Replace gh CLI-based GitHub access with direct REST/GraphQL API helpers backed by a gh-derived token and add batching for PR metadata.
  • Introduce fetchGh and fetchGhGraphql helpers that inject an OAuth token from a new getGithubToken helper and normalize error handling.
  • Rewrite PR search, repo PR listing, PR title/comment/review operations, review-thread fetching, CI status, conflicts, and panel data fetching to call GitHub REST/GraphQL APIs instead of shelling out to gh.
  • Add batchFetchPRDetails to aggregate PRDetails via a single GraphQL query per chunk and adapt fetchPRDetails to use the batched path.
  • Simplify account handling by assuming the active gh auth context and removing multi-account discovery/switching logic and runGh helpers.
  • Update tests and integration tests to reflect the new parseSearchResults signature and fetchRepoPRs behavior.
src/lib/github.ts
src/lib/auth.ts
src/lib/__tests__/github.test.ts
src/__tests__/integration.test.ts
Implement intent-sliced diff grouping in the files panel, folding boilerplate by default and exposing a keybinding to toggle it.
  • Add FileIntent type and getFileIntent classifier to categorize files into Source Code, Tests, Config, and Boilerplate based on paths/extensions and known patterns.
  • Refactor PanelFiles to group incoming FileDiffs by intent, render groups in a fixed order, and reuse a shared renderFile helper.
  • Collapse Boilerplate files into a summarized header showing file count and total churn, with instructions and state to expand via a Shift+B keybinding.
  • Remove the unused collapseDiffRegions logic and placeholder marker from diff-utils as it is incompatible with the current diff parser.
src/components/panel-files.tsx
src/lib/diff-utils.ts
Add persistent caching, better sorting, and performance-focused filtering to the ls command and PR cache.
  • Introduce a SQLite-backed db module that stores pull requests, PRDetails, and PRPanelData under ~/.config/raft/raft.sqlite with simple CRUD helpers.
  • Extend PRCache to transparently hydrate from and write through to the SQLite cache for details and panel data.
  • Modify the ls command to load cached PRs on startup, cache freshly fetched lists, and avoid clobbering the UI on network errors when cache exists.
  • Split filtering and sorting into separate memoized stages, use useDeferredValue for search input, precompute urgency and timestamp maps, and consistently base grouping and selection on the sorted PR list.
  • Batch-load PRDetails for visible PRs via batchFetchPRDetails, track which URLs have been materialized to avoid unnecessary recomputation, and memoize counts and row rendering (via React.memo) to reduce re-renders.
src/commands/ls.tsx
src/lib/cache.ts
src/lib/db.ts
src/components/pr-table.tsx
src/hooks/usePanel.ts
Improve preview panel loading UX with skeleton placeholders tuned per tab.
  • Add a PanelSkeleton component that renders tab-specific skeleton layouts for body, comments, code, and files views.
  • Update the preview panel to show a spinner plus an appropriate skeleton when a panel tab is loading and no data has been fetched yet.
src/components/skeleton.tsx
src/components/preview-panel.tsx
Introduce a split command and supporting split-state utilities to inspect split-branch topology and status.
  • Add a SplitCommand that reads split state from the current repo, renders an ASCII topology, and presents a navigable list of splits with status badges and details.
  • Wire the new split command into CLI argument parsing and the main command switch, including usage text.
  • Define SplitState and SplitEntry types, JSON read/write helpers for .raft-split.json, and a formatSplitTopology helper that renders a tree-like view of the split graph.
src/commands/split.tsx
src/lib/split-state.ts
src/index.tsx
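The `formatSplitTopology` helper described above presumably renders an ASCII tree along these lines; the `SplitEntry` shape and field names here are simplified assumptions about `src/lib/split-state.ts`, not its actual types.

```typescript
// Simplified assumption about the split-state model.
interface SplitEntry {
  branch: string;
  phase: string;
}

// Render the base branch followed by one tree row per split,
// using └─ for the last entry and ├─ for the rest.
function formatSplitTopology(base: string, splits: SplitEntry[]): string {
  const lines = [base];
  splits.forEach((s, i) => {
    const connector = i === splits.length - 1 ? "└─" : "├─";
    lines.push(`${connector} ${s.branch} [${s.phase}]`);
  });
  return lines.join("\n");
}
```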



coderabbitai bot commented Mar 18, 2026

Walkthrough

This PR introduces persistent SQLite-based caching for PRs and PR details, batch fetching of PR information from GitHub's API, a new split management command with an interactive TUI, and UI enhancements including file grouping by intent (Source Code, Tests, Config, Boilerplate) with toggleable boilerplate visibility. The core github.ts module shifts from gh CLI-based operations to direct fetch-based API calls with token authentication.

Changes

Cohort / File(s) Summary
GitHub API & Authentication
src/lib/github.ts, src/lib/auth.ts
Replaced gh CLI-based GitHub operations with fetch-based REST/GraphQL calls authenticated via token. Introduced batchFetchPRDetails for batched PR detail retrieval, refactored search result parsing, and streamlined PR fetching workflows.
Database & Caching Layer
src/lib/db.ts, src/lib/cache.ts
Added new SQLite database module with CRUD helpers for caching PR lists, details, and panel data. Enhanced in-memory cache layer to consult and persist to database, implementing a two-tier caching strategy.
PR List Command
src/commands/ls.tsx
Integrated caching and batch fetching: loads cached PRs on mount, introduces deferred search values, computes urgency-based sorting, replaces per-PR detail fetches with batch operations, and updates UI to render sorted/prioritized PR lists.
Split Management Command
src/commands/split.tsx
New interactive TUI command for managing PR splits; reads split state from .raft-split.json, renders topology tree and split list with keyboard navigation (j/k/arrow), URL opening (enter), and copying (c). Integrates with StatusBadge and existing split-state utilities.
Split State Model & Utilities
src/lib/split-state.ts
Defines SplitPhase, SplitEntry, and SplitState types for modeling PR splits; provides readSplitState/writeSplitState for file I/O and formatSplitTopology for ASCII tree rendering of split topology.
File Grouping & Intent Classification
src/components/panel-files.tsx, src/lib/diff-utils.ts
Added FileIntent type and getFileIntent classifier mapping files to Source Code/Tests/Config/Boilerplate; refactored panel-files to render grouped diffs by intent with a keyboard-driven boilerplate visibility toggle (Shift+B) and a summary for hidden boilerplate.
UI Components & Layout
src/components/pr-table.tsx, src/components/preview-panel.tsx, src/components/skeleton.tsx, src/hooks/usePanel.ts
Memoized PRRow for rendering optimization; enhanced preview-panel loading state with skeleton layout; introduced PanelSkeleton component for tab-specific skeleton rendering; tightened React.Dispatch type signatures for panel state setters.
App Entry & Routing
src/index.tsx
Extended Command type to include "split" and added parseArgs/rendering logic to route to SplitCommand component.
Tests & Type Alignment
src/__tests__/integration.test.ts, src/lib/__tests__/cache.test.ts, src/lib/__tests__/github.test.ts
Updated test fixtures: integration tests use new repo reference (cli/cli), cache tests adopt time-dependent keys, and github tests align with updated GitHub API field names (html_url, draft, repository_url, created_at).

Sequence Diagram(s)

sequenceDiagram
    participant UI as ls.tsx UI
    participant Cache as Cache Layer
    participant DB as SQLite DB
    participant GitHub as GitHub API
    participant Batch as Batch Fetcher

    UI->>Cache: On mount: load cached PRs
    Cache->>DB: getCachedPRs()
    DB-->>Cache: [PR list] or empty
    Cache-->>UI: Populate sorted list immediately

    UI->>Batch: Need PR details (batch)
    Batch->>GitHub: batchFetchPRDetails (batched GraphQL)
    GitHub-->>Batch: [PRDetails map]
    Batch->>Cache: Update details cache
    Cache->>DB: cachePRDetails(url, details)
    DB-->>Cache: ✓ Stored
    Cache-->>UI: Render with details

    Note over UI,DB: New PRs from search/filter
    UI->>GitHub: fetchRepoPRs()
    GitHub-->>UI: [PRs]
    UI->>Cache: cachePRs(newList)
    Cache->>DB: Clear & insert PRs
    DB-->>Cache: ✓ Transaction complete
sequenceDiagram
    participant User as User Input
    participant Split as SplitCommand
    participant FS as File System
    participant State as split-state lib

    User->>Split: Launch split command
    Split->>FS: getRepoRoot()
    FS-->>Split: repo path
    Split->>FS: Read .raft-split.json
    FS-->>Split: file content
    Split->>State: readSplitState()
    State-->>Split: SplitState
    Split->>Split: formatSplitTopology()
    Split-->>User: Render topology tree + splits list

    User->>Split: Press 'j'/'k' navigate
    Split-->>User: Update selection highlight
    User->>Split: Press 'enter' on split
    Split->>FS: Open PR URL (system command)
    FS-->>User: Browser opens URL

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

  • Add lifecycle-driven PR review workflows to raft #2 — Shares core changes across src/commands/ls.tsx, src/lib/github.ts, and src/hooks/usePanel.ts; implements complementary lifecycle-driven sorting, batch PR detail fetching, panel caching, and review thread awareness.

Poem

🐰 A rabbit dug deep, SQLite's cache so neat,
Batching PR details to make fetches complete,
Split topology trees now bloom on the screen,
Files grouped by intent—a tidy machine,
With deferred values and sorted priorities,
The warren hops faster through all these deliveries! 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 11.54%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The title 'feat: implement intent-sliced semantic diff grouping' accurately describes the main feature introduced in this PR: semantic grouping of diffs by intent categories.



@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 2 issues and left some high-level feedback:

  • In SplitCommand, the repo prop is accepted but never used, and the PR-opening logic relies on Bun.spawn(["open", ...]), which will only work on macOS; consider either wiring the repo parameter into the command or removing it, and using a cross‑platform opener (e.g., xdg-open/start) to avoid platform-specific behavior.
  • The SQLite cache in lib/db.ts is initialized and opened at module import time, which can make tests and tooling harder to control; consider lazily creating the database/connection on first use so environments that don't need persistence (or have restricted filesystems) can avoid side effects.
  • The PanelFiles boilerplate grouping currently treats all .yml/.yaml/.toml/.ini/dockerfile/makefile as Config, which may sweep some application-level resources into config; if you expect mixed usage, you might want to tighten these heuristics (e.g., by path prefixes like config/, .github/, etc.) to avoid misclassification.
Prompt for AI Agents
Please address the comments from this code review:


## Individual Comments

### Comment 1
<location path="src/lib/cache.ts" line_range="24-25" />
<code_context>
+    cachePRDetails(url, data);
   }

   hasDetails(url: string): boolean {
-    return this.details.has(url)
+    return this.details.has(url) || getCachedPRDetails(url) !== null;
   }

</code_context>
<issue_to_address>
**suggestion (performance):** Avoid hitting SQLite on every `hasDetails`/`hasPanelData` call to prevent unnecessary synchronous I/O in hot paths.

Because `hasDetails`/`hasPanelData` now call `getCachedPR*` on cache misses, uncached URLs incur a synchronous DB read on every check, which is risky in render-driven paths. Consider either (a) maintaining a `knownMissing`/negative cache, or (b) restricting DB access to the `get*` methods and having `has*` read only the in-memory maps, with callers responsible for triggering loads when `get*` returns `undefined`.

Suggested implementation:

```typescript
  hasDetails(url: string): boolean {
    // Only consult the in-memory map; DB access is restricted to getDetails.
    return this.details.has(url);
  }

```

```typescript
  hasPanelData(url: string): boolean {
    // Only consult the in-memory map; DB access is restricted to getPanelData.
    return this.panelData.has(url);
  }

```

1. Ensure that `getDetails` and `getPanelData` both:
   - Call their respective `getCachedPR*` functions on cache miss.
   - Populate the in-memory maps (`this.details` / `this.panelData`) with any data loaded from SQLite before returning it.
2. Call sites that previously relied on `hasDetails`/`hasPanelData` triggering disk reads should now either:
   - Call `getDetails`/`getPanelData` directly and branch on `undefined`, or
   - Explicitly trigger a load via the appropriate `get*` before using `has*` if they need to ensure the cache is hydrated.
</issue_to_address>

### Comment 2
<location path="src/lib/__tests__/cache.test.ts" line_range="38-41" />
<code_context>
-    expect(cache.hasDetails("x")).toBe(false)
-    cache.setDetails("x", { additions: 0, deletions: 0, commentCount: 0, reviews: [], headRefName: "" })
-    expect(cache.hasDetails("x")).toBe(true)
+    const uniqueKey = "test-unique-key-" + Date.now();
+    expect(cache.hasDetails(uniqueKey)).toBe(false)
+    cache.setDetails(uniqueKey, { additions: 0, deletions: 0, commentCount: 0, reviews: [], headRefName: "" })
+    expect(cache.hasDetails(uniqueKey)).toBe(true)
   })
 })
</code_context>
<issue_to_address>
**suggestion (testing):** Extend PRCache tests to cover DB-backed behavior and avoid real DB side effects

Because `PRCache` now delegates to `db.ts`, this test only covers the in-memory path. We should also verify DB-backed behavior by mocking `./db` so that:
- `getCachedPRDetails` returns a known object when the in-memory `details` map is empty, and `getDetails` surfaces and caches it.
- `setDetails` calls `cachePRDetails` with the expected arguments.

Also, as written, `hasDetails`/`getDetails` in tests will hit the real DB under `~/.config/raft/raft.sqlite`. Please mock `./db` to an in-memory fake in this suite so we avoid mutating user state and keep tests deterministic.

Suggested implementation:

```typescript
  test("has() returns correct boolean", () => {
    const cache = new PRCache()
    const key = "test-has-details-key"
    expect(cache.hasDetails(key)).toBe(false)
    cache.setDetails(key, { additions: 0, deletions: 0, commentCount: 0, reviews: [], headRefName: "" })
    expect(cache.hasDetails(key)).toBe(true)
  })

  test("getDetails returns DB-backed value and caches it", () => {
    const cache = new PRCache()
    const key = "test-db-backed-key"

    // Ensure no in-memory details before DB lookup
    expect(cache.hasDetails(key)).toBe(false)

    // Arrange DB to return a known object
    const db = jest.requireMock("../db") as {
      getCachedPRDetails: jest.Mock
      cachePRDetails: jest.Mock
    }
    const dbDetails = {
      additions: 5,
      deletions: 3,
      commentCount: 2,
      reviews: [{ state: "APPROVED" }],
      headRefName: "db-backed-branch",
    }

    db.getCachedPRDetails.mockReturnValue(dbDetails)

    const result = cache.getDetails(key)

    // Surfaces DB value
    expect(result).toEqual(dbDetails)

    // Caches result in memory so subsequent lookups don't need DB
    expect(cache.hasDetails(key)).toBe(true)
    expect(db.getCachedPRDetails).toHaveBeenCalledWith(key)
  })

  test("setDetails persists to DB via cachePRDetails", () => {
    const cache = new PRCache()
    const key = "test-set-details-db-persist"
    const details = {
      additions: 1,
      deletions: 1,
      commentCount: 0,
      reviews: [],
      headRefName: "persisted-branch",
    }

    const db = jest.requireMock("../db") as {
      getCachedPRDetails: jest.Mock
      cachePRDetails: jest.Mock
    }

    cache.setDetails(key, details)

    // Persists to DB
    expect(db.cachePRDetails).toHaveBeenCalledWith(key, details)

    // Still behaves correctly in-memory
    expect(cache.hasDetails(key)).toBe(true)
    expect(cache.getDetails(key)).toEqual(details)
  })
})

```

To fully implement the suggestion and avoid real DB side effects, you should also:
1. Add a Jest module mock for `../db` at the top of this test file (before importing `PRCache`), for example:
   ```ts
   jest.mock("../db", () => {
     return {
       getCachedPRDetails: jest.fn(),
       cachePRDetails: jest.fn(),
     }
   })
   ```
2. Ensure the mock path (`"../db"`) matches the actual import path used inside the `PRCache` implementation. If `PRCache` imports from `"./db"` or another relative path, adjust both the `jest.mock` call and `jest.requireMock` calls accordingly.
3. Optionally, add a `beforeEach(() => jest.clearAllMocks())` in this test suite to keep DB mock expectations isolated between tests.
</issue_to_address>


Comment on lines 24 to +25
hasDetails(url: string): boolean {
return this.details.has(url)
return this.details.has(url) || getCachedPRDetails(url) !== null;
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

suggestion (performance): Avoid hitting SQLite on every hasDetails/hasPanelData call to prevent unnecessary synchronous I/O in hot paths.

Because hasDetails/hasPanelData now call getCachedPR* on cache misses, uncached URLs incur a synchronous DB read on every check, which is risky in render-driven paths. Consider either (a) maintaining a knownMissing/negative cache, or (b) restricting DB access to the get* methods and having has* read only the in-memory maps, with callers responsible for triggering loads when get* returns undefined.

Suggested implementation:

  hasDetails(url: string): boolean {
    // Only consult the in-memory map; DB access is restricted to getDetails.
    return this.details.has(url);
  }
  hasPanelData(url: string): boolean {
    // Only consult the in-memory map; DB access is restricted to getPanelData.
    return this.panelData.has(url);
  }
  1. Ensure that getDetails and getPanelData both:
    • Call their respective getCachedPR* functions on cache miss.
    • Populate the in-memory maps (this.details / this.panelData) with any data loaded from SQLite before returning it.
  2. Call sites that previously relied on hasDetails/hasPanelData triggering disk reads should now either:
    • Call getDetails/getPanelData directly and branch on undefined, or
    • Explicitly trigger a load via the appropriate get* before using has* if they need to ensure the cache is hydrated.

Comment on lines +38 to +41
const uniqueKey = "test-unique-key-" + Date.now();
expect(cache.hasDetails(uniqueKey)).toBe(false)
cache.setDetails(uniqueKey, { additions: 0, deletions: 0, commentCount: 0, reviews: [], headRefName: "" })
expect(cache.hasDetails(uniqueKey)).toBe(true)
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

suggestion (testing): Extend PRCache tests to cover DB-backed behavior and avoid real DB side effects

Because PRCache now delegates to db.ts, this test only covers the in-memory path. We should also verify DB-backed behavior by mocking ./db so that:

  • getCachedPRDetails returns a known object when the in-memory details map is empty, and getDetails surfaces and caches it.
  • setDetails calls cachePRDetails with the expected arguments.

Also, as written, hasDetails/getDetails in tests will hit the real DB under ~/.config/raft/raft.sqlite. Please mock ./db to an in-memory fake in this suite so we avoid mutating user state and keep tests deterministic.

Suggested implementation:

  test("has() returns correct boolean", () => {
    const cache = new PRCache()
    const key = "test-has-details-key"
    expect(cache.hasDetails(key)).toBe(false)
    cache.setDetails(key, { additions: 0, deletions: 0, commentCount: 0, reviews: [], headRefName: "" })
    expect(cache.hasDetails(key)).toBe(true)
  })

  test("getDetails returns DB-backed value and caches it", () => {
    const cache = new PRCache()
    const key = "test-db-backed-key"

    // Ensure no in-memory details before DB lookup
    expect(cache.hasDetails(key)).toBe(false)

    // Arrange DB to return a known object
    const db = jest.requireMock("../db") as {
      getCachedPRDetails: jest.Mock
      cachePRDetails: jest.Mock
    }
    const dbDetails = {
      additions: 5,
      deletions: 3,
      commentCount: 2,
      reviews: [{ state: "APPROVED" }],
      headRefName: "db-backed-branch",
    }

    db.getCachedPRDetails.mockReturnValue(dbDetails)

    const result = cache.getDetails(key)

    // Surfaces DB value
    expect(result).toEqual(dbDetails)

    // Caches result in memory so subsequent lookups don't need DB
    expect(cache.hasDetails(key)).toBe(true)
    expect(db.getCachedPRDetails).toHaveBeenCalledWith(key)
  })

  test("setDetails persists to DB via cachePRDetails", () => {
    const cache = new PRCache()
    const key = "test-set-details-db-persist"
    const details = {
      additions: 1,
      deletions: 1,
      commentCount: 0,
      reviews: [],
      headRefName: "persisted-branch",
    }

    const db = jest.requireMock("../db") as {
      getCachedPRDetails: jest.Mock
      cachePRDetails: jest.Mock
    }

    cache.setDetails(key, details)

    // Persists to DB
    expect(db.cachePRDetails).toHaveBeenCalledWith(key, details)

    // Still behaves correctly in-memory
    expect(cache.hasDetails(key)).toBe(true)
    expect(cache.getDetails(key)).toEqual(details)
  })
})

To fully implement the suggestion and avoid real DB side effects, you should also:

  1. Add a Jest module mock for ../db at the top of this test file (before importing PRCache), for example:
    jest.mock("../db", () => {
      return {
        getCachedPRDetails: jest.fn(),
        cachePRDetails: jest.fn(),
      }
    })
  2. Ensure the mock path ("../db") matches the actual import path used inside the PRCache implementation. If PRCache imports from "./db" or another relative path, adjust both the jest.mock call and jest.requireMock calls accordingly.
  3. Optionally, add a beforeEach(() => jest.clearAllMocks()) in this test suite to keep DB mock expectations isolated between tests.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 93279851c7

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

const CHUNK_SIZE = 20;
for (let i = 0; i < prs.length; i += CHUNK_SIZE) {
  const chunk = prs.slice(i, i + CHUNK_SIZE);
  let query = "query {\\n";

P1: Use a real newline in the batched GraphQL query header

The batch query is initialized with "query {\\n", which embeds a literal backslash token before the first field. GraphQL does not accept \ outside string literals, so this causes fetchGhGraphql to reject and every batch to fall into the catch path, leaving PR details empty and breaking lifecycle/attention metadata loading.

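A minimal runnable sketch of the fix described above (identifiers are illustrative, not the PR's exact code): the batch header must contain a real newline character, not an escaped backslash.

```typescript
// Build a batched GraphQL query with one aliased field per PR.
// The header uses "\n" (a real newline); the buggy version used "query {\\n",
// which put a literal backslash into the query text and broke parsing.
function buildBatchQuery(prNumbers: number[], owner: string, name: string): string {
  let query = "query {\n"; // real newline, not a literal backslash
  prNumbers.forEach((num, j) => {
    query += `  pr_${j}: repository(owner: "${owner}", name: "${name}") {\n`;
    query += `    pullRequest(number: ${num}) { title state }\n`;
    query += `  }\n`;
  });
  return query + "}";
}
```

The resulting string contains no backslash characters at all, so a GraphQL parser never sees an illegal token outside a string literal.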

if (!json) return []
return parseSearchResults(json)
onProgress?.("Fetching all open PRs...");
const json = await fetchGh("search/issues?q=is:pr+is:open&per_page=100");

P1: Paginate the open-PR search instead of fetching only page 1

This search call requests per_page=100 but never iterates page, so users with more than 100 matching PRs only get a partial list. Because the app replaces cached PR rows with each refresh, the missing pages effectively disappear from the UI/state after sync, which is a functional regression from the previous higher-limit behavior.

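A hedged sketch of the pagination this comment asks for, assuming a `fetchGh` helper that returns parsed JSON for a REST path (the real signature in the PR may differ):

```typescript
// Loop pages until GitHub returns a short page; the search endpoint caps
// per_page at 100, so a page with fewer than 100 items is the last one.
async function fetchAllOpenPRs(
  fetchGh: (path: string) => Promise<{ items: unknown[] }>,
): Promise<unknown[]> {
  const all: unknown[] = [];
  for (let page = 1; ; page++) {
    const json = await fetchGh(`search/issues?q=is:pr+is:open&per_page=100&page=${page}`);
    all.push(...json.items);
    if (json.items.length < 100) break; // short page: reached the end
  }
  return all;
}
```

Note that the GitHub search API serves at most 1000 results per query, so this loop terminates after ten full pages at the latest.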


hasDetails(url: string): boolean {
return this.details.has(url)
return this.details.has(url) || getCachedPRDetails(url) !== null;

P2: Revalidate persisted PR details before skipping fetches

Treating any DB hit as a permanent cache hit means details are never refreshed once stored. In ls, entries with hasDetails(url) are excluded from network fetches, so CI status, approvals, unresolved threads, and conflicts can remain stale across sessions and misdrive attention sorting/lifecycle actions.

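One way to sketch the suggested revalidation, under the assumption that the SQLite layer stores an `updatedAt` timestamp alongside each row (the names and TTL value here are hypothetical, not the PR's actual schema):

```typescript
// Treat a persisted row as a cache hit only while it is younger than a TTL,
// so details stored in SQLite are eventually refetched instead of pinned forever.
const DETAILS_TTL_MS = 10 * 60 * 1000; // 10 minutes, chosen arbitrarily

interface CachedRow<T> {
  value: T;
  updatedAt: number; // epoch millis written at cache time
}

function freshOrNull<T>(row: CachedRow<T> | null, now: number = Date.now()): T | null {
  if (!row) return null;
  return now - row.updatedAt < DETAILS_TTL_MS ? row.value : null;
}
```

`hasDetails` would then report a hit only when `freshOrNull(getCachedPRDetails(url))` is non-null, letting stale CI status and review state refresh on the next sync.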


Copilot AI left a comment


Pull request overview

This PR adds persistent caching and a new split command while refactoring GitHub integration away from heavy gh CLI usage toward direct REST/GraphQL calls authenticated via a token obtained from gh auth token.

Changes:

  • Introduces a SQLite-backed cache layer for PR lists/details/panel data and wires it into ls.
  • Refactors src/lib/github.ts to use direct GitHub REST/GraphQL requests (token sourced from gh).
  • Adds split-state persistence + a new raft split UI command, plus UI improvements (panel skeletons, file intent grouping, memoized PR rows).

Reviewed changes

Copilot reviewed 17 out of 17 changed files in this pull request and generated 10 comments.

Show a summary per file
  • src/lib/split-state.ts: New split state model + read/write + topology formatting.
  • src/lib/github.ts: Replaces many gh CLI calls with REST/GraphQL fetch helpers; adds batched details fetching.
  • src/lib/diff-utils.ts: Adds file “intent” classification used for grouping in the files panel.
  • src/lib/db.ts: New SQLite persistence for PR caching (PR list/details/panel data).
  • src/lib/cache.ts: Extends in-memory cache with SQLite read-through/write-through behavior.
  • src/lib/auth.ts: Adds getGithubToken() using gh auth token via safeSpawn/buildCleanEnv.
  • src/lib/__tests__/github.test.ts: Updates parseSearchResults tests to match REST search schema.
  • src/lib/__tests__/cache.test.ts: Tweaks cache test keys to avoid collisions with persistent storage.
  • src/index.tsx: Adds split command routing and help text.
  • src/hooks/usePanel.ts: Tightens types for state setters.
  • src/components/skeleton.tsx: Adds a per-tab panel skeleton.
  • src/components/preview-panel.tsx: Displays skeleton while loading panel content.
  • src/components/pr-table.tsx: Memoizes PR row rendering.
  • src/components/panel-files.tsx: Groups files by intent and adds boilerplate toggling via keyboard.
  • src/commands/split.tsx: New command to visualize split topology and open/copy PR URLs.
  • src/commands/ls.tsx: Uses cached PRs, deferred search, batched details fetch, and revised sorting/grouping.
  • src/__tests__/integration.test.ts: Updates target repo in integration tests.
Comments suppressed due to low confidence (1)

src/commands/ls.tsx:247

  • selectedPR is derived from sortedPRs, but usePanel() is given filteredPRs along with selectedIndex. usePanel assumes selectedIndex is an index into the provided allPRs array (used for neighbor prefetching), so this will prefetch the wrong PRs once sorting is applied. Pass sortedPRs instead so indices align.
  const selectedPR = sortedPRs[selectedIndex] ?? null

  // Panel state management (shared hook handles data fetching + caching + prefetching)
  const panel = usePanel(selectedPR, filteredPRs, selectedIndex)
  const { panelOpen, panelTab, splitRatio, panelFullscreen, panelData, panelLoading,


Comment on lines +39 to +41
export function writeSplitState(repoRoot: string, state: SplitState): void {
  const path = `${repoRoot}/.raft-split.json`
  Bun.write(path, JSON.stringify(state, null, 2) + "\n")
Comment on lines 24 to 26
hasDetails(url: string): boolean {
return this.details.has(url)
return this.details.has(url) || getCachedPRDetails(url) !== null;
}
});
}
} catch (e) {
console.error("GraphQL batch failed:", e);
Comment on lines +6 to +12

interface SplitCommandProps {
repo?: string
}

async function getRepoRoot(): Promise<string | null> {
const proc = Bun.spawn(["git", "rev-parse", "--show-toplevel"], {
Comment on lines +97 to +100
} else if ((key.name === "enter" || key.name === "return") && splits[selectedIndex]?.prUrl) {
  Bun.spawn(["open", splits[selectedIndex].prUrl!], { stdout: "ignore", stderr: "ignore" })
  showFlash("Opening PR...")
} else if (key.name === "c" && splits[selectedIndex]?.prUrl) {
Comment on lines +5 to +8
import { homedir } from "node:os";

// Initialize database in ~/.config/raft/raft.sqlite
const configDir = join(homedir(), ".config", "raft");
Comment on lines +7 to +13
// Initialize database in ~/.config/raft/raft.sqlite
const configDir = join(homedir(), ".config", "raft");
mkdirSync(configDir, { recursive: true });

const dbPath = join(configDir, "raft.sqlite");
export const db = new Database(dbPath);

Comment on lines 96 to 100
if (author === "") {
// Empty string means fetch all PRs across all repos the user has access to
onProgress?.("Fetching all open PRs...")
const json = await runGh([
"search", "prs",
"--state=open",
"--limit=1000",
"--json", "number,title,url,body,state,repository,isDraft,createdAt,author",
])
if (!json) return []
return parseSearchResults(json)
onProgress?.("Fetching all open PRs...");
const json = await fetchGh("search/issues?q=is:pr+is:open&per_page=100");
return parseSearchResults(json.items);
}
Comment on lines +194 to +196
query += `
pr_${j}: repository(owner: "${owner}", name: "${name}") {
pullRequest(number: ${pr.number}) {
Comment on lines +88 to +89
const json = await fetchGh("search/issues?q=is:pr+is:open+author:@me&per_page=100");
return parseSearchResults(json.items);

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

🧹 Nitpick comments (3)
src/components/skeleton.tsx (1)

58-60: Use PanelTab instead of string to keep tab handling type-safe and exhaustive.

PreviewPanel already passes a PanelTab; keeping this prop as string allows invalid values and falls back to silent null rendering.

♻️ Proposed refactor
 import { useState, useEffect } from "react"
+import type { PanelTab } from "../lib/types"

-export function PanelSkeleton({ tab, width, height }: { tab: string; width: number; height: number }) {
+export function PanelSkeleton({ tab, width, height }: { tab: PanelTab; width: number; height: number }) {

Also applies to: 119-120

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/skeleton.tsx` around lines 58 - 60, PanelSkeleton currently
types its tab prop as string which permits invalid values; change the prop type
to the PanelTab union (use PanelTab instead of string) and update any other
skeleton components that accept tab (the other occurrence around the component
at the end of the file) so callers like PreviewPanel can only pass valid tabs;
also make the tab switch/exhaustive handling within PanelSkeleton use a switch
over PanelTab (or a discriminated mapping) so TypeScript will catch missing
cases rather than silently rendering null.
src/lib/split-state.ts (1)

28-42: File I/O uses Bun APIs correctly per coding guidelines.

Bun.file and Bun.write are used appropriately. The readSplitState function gracefully handles errors by returning null.

One consideration: writeSplitState is synchronous but Bun.write returns a Promise. This means write errors won't be caught by callers.

🔧 Consider making writeSplitState async to handle errors
-export function writeSplitState(repoRoot: string, state: SplitState): void {
+export async function writeSplitState(repoRoot: string, state: SplitState): Promise<void> {
   const path = `${repoRoot}/.raft-split.json`
-  Bun.write(path, JSON.stringify(state, null, 2) + "\n")
+  await Bun.write(path, JSON.stringify(state, null, 2) + "\n")
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/split-state.ts` around lines 28 - 42, The writeSplitState function is
currently synchronous but calls Bun.write (which returns a Promise), so failures
aren't propagated; change writeSplitState to be async, await Bun.write(...) and
either let errors propagate (remove silent failures) or catch and rethrow/log as
appropriate so callers can handle I/O errors; update the function signature
writeSplitState(repoRoot: string, state: SplitState): Promise<void> and ensure
callers are updated to await the call.
src/lib/diff-utils.ts (1)

37-84: File intent classification is well-structured with a few edge case considerations.

The classification logic covers common patterns effectively. A few observations:

  1. Line 50: lower.includes("generated") may be overly broad—it would match files like user-generated-content.ts. Consider using path segment matching like lower.includes("/generated/") or lower.startsWith("generated/").

  2. Lines 77-78: The checks lower === "dockerfile" and lower === "makefile" only match if the entire lowercased filename equals these strings. This works for root-level files but won't match paths like docker/Dockerfile. Consider using lower.endsWith("/dockerfile") or checking just the basename.

These are edge cases that may not affect typical usage, so flagging as optional refinement.

🔧 Suggested refinement for edge cases
-    lower.includes("generated")
+    lower.includes("/generated/") || lower.includes("/generated.") || lower.startsWith("generated/") || lower.startsWith("generated.")
   ) {
     return "Boilerplate"
   }
-    lower === "dockerfile" ||
-    lower === "makefile"
+    lower.endsWith("/dockerfile") || lower === "dockerfile" ||
+    lower.endsWith("/makefile") || lower === "makefile"
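The refinement above can be sketched as a standalone function (a simplified stand-in for the real `getFileIntent`, covering only the two flagged branches):

```typescript
// Classify a path by segment/basename rather than raw substring matching,
// so "docker/Dockerfile" is Config and "user-generated-content.ts" is not
// misfiled as Boilerplate. The return union is a subset of the PR's intents.
function classifyIntent(path: string): "Boilerplate" | "Config" | "Other" {
  const lower = path.toLowerCase();
  const base = lower.split("/").pop() ?? lower;
  if (lower.includes("/generated/") || lower.startsWith("generated/")) {
    return "Boilerplate"; // match "generated" only as a path segment
  }
  if (base === "dockerfile" || base === "makefile") {
    return "Config"; // basename check catches nested Dockerfiles/Makefiles
  }
  return "Other";
}
```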
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/diff-utils.ts` around lines 37 - 84, getFileIntent's current checks
are too broad for "generated" and too strict for Dockerfile/Makefile; update the
logic in getFileIntent so the "generated" test matches path segments (e.g., look
for "/generated/" or filenames starting with "generated/") instead of any
substring via lower.includes("generated"), and change the docker/make checks
(currently lower === "dockerfile" and lower === "makefile") to detect basenames
or path endings (e.g., use a basename check or lower.endsWith("/dockerfile") /
lower.endsWith("/makefile")) so files like "docker/Dockerfile" are classified as
"Config".
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/components/skeleton.tsx`:
- Around line 58-68: Computed skeleton widths/heights in PanelSkeleton and
related SkeletonBlock usages can be negative (e.g., Math.floor(width * ratio)
and repeat(Math.min(width - 2, 40))), causing runtime RangeError; clamp all
computed sizes with Math.max(0, ...) (or Math.min/Math.max to enforce upper
bounds) before using them in repeat() or as width/height props so values never
go below 0 (apply to PanelSkeleton, SkeletonBlock calls, and any
repeat(Math.min(width - 2, 40)) sites).

In `@src/hooks/usePanel.ts`:
- Around line 27-28: The file uses React.Dispatch and React.SetStateAction for
the setSplitRatio and setPanelFullscreen types but React is not imported; update
the top of usePanel.ts to import the types directly (import { Dispatch,
SetStateAction } from 'react') and change the declarations to use
Dispatch<SetStateAction<number>> for setSplitRatio and
Dispatch<SetStateAction<boolean>> for setPanelFullscreen so the types resolve
correctly.

In `@src/index.tsx`:
- Around line 214-216: The SplitCommand is ignoring the repo prop passed from
the CLI (config.repoFilter) because it always calls getRepoRoot() internally;
update SplitCommand to accept and use the repo prop when provided (e.g., prefer
the incoming prop over calling getRepoRoot()), or if you intend to disallow CLI
repo selection, remove repoFilter from the split command wiring in src/index.tsx
and update docs; specifically modify the SplitCommand component (and any
functions inside it that call getRepoRoot()) to use the passed-in repo prop when
non-empty, or remove the prop from the caller and related config so there is no
unused prop.

In `@src/lib/auth.ts`:
- Around line 3-20: The getGithubToken function races on cold start because
cachedToken is only set after safeSpawn completes; change it to guard concurrent
callers by adding a shared in-flight promise (e.g., pendingTokenPromise)
alongside cachedToken so the first caller kicks off safeSpawn and stores the
promise, subsequent callers await that promise instead of spawning new
processes, and once resolved set cachedToken and clear pendingTokenPromise;
update getGithubToken to return cachedToken immediately if present, otherwise
await the pending promise or create it by calling safeSpawn with
buildCleanEnv(), handle errors by rejecting/clearing the pending promise so
future calls can retry, and reference cachedToken and getGithubToken (and the
safeSpawn call) when making the change.

In `@src/lib/db.ts`:
- Around line 38-41: getCachedPRs currently returns all rows regardless of age;
change its SQL to filter out entries whose updated_at is older than the
configured TTL (same predicate used in getCachedPRPanelData). Update the
db.query call in getCachedPRs to include a WHERE clause comparing updated_at
against the TTL cutoff (using the same TTL constant/function used by
getCachedPRPanelData), then parse the remaining rows as before; repeat the same
TTL filtering approach for getCachedPRPanelData to ensure cached panel payloads
expire correctly.
- Around line 7-12: The module currently opens ~/.config/raft/raft.sqlite on
import (configDir, dbPath, db) which leaks state into tests; change it to avoid
immediate file-backed DB creation by making DB initialization configurable and
lazy: export a function like initDb(path?: string) or getDb() that creates and
returns the Database instance instead of constructing db at module top-level,
and prefer an in-memory DB for tests by checking process.env.NODE_ENV === 'test'
or a configurable env var (e.g. RAFT_DB_PATH) so tests can pass ':memory:' or a
temp path; keep the existing exported symbol name (db) behavior by setting it
when initDb is called to preserve call sites.

In `@src/lib/github.ts`:
- Around line 84-127: fetchAllAccountPRs, fetchOpenPRs and fetchRepoPRs
currently only request a single page (per_page=100) and thus miss additional
PRs; change each to paginate like fetchPRPanelData: add a page counter, loop
requests appending results from each page (use &page=${page}&per_page=100 or add
page param to the query), stop when the returned items length is less than 100
(or zero), and return the concatenated list; for fetchRepoPRs map each page's
items into the PullRequest shape and concatenate before returning, preserving
the existing error handling.
- Around line 137-144: findStackComment and the comment-fetching in
fetchPRPanelData currently call fetchGh without pagination and therefore only
get the first page of comments; update both to paginate (like the existing files
pagination) by looping page numbers, calling fetchGh with query params page and
per_page, accumulating results for issueComments and codeComments until an empty
page is returned, then search the accumulated comments for STACK_COMMENT_MARKER
in findStackComment (returning c.id) and use the full accumulated lists in
fetchPRPanelData so no comments are missed on busy PRs.
- Around line 7-40: fetchGh and fetchGhGraphql currently call fetch() with no
timeout and can hang; add a 30s timeout using AbortController: create an
AbortController, set a setTimeout to call controller.abort() after 30000ms, pass
controller.signal into the fetch call (for fetchGh include it in the options
spread and for fetchGhGraphql add signal to the request init), ensure the
timeout is cleared (clearTimeout) after fetch resolves or errors, and throw or
let the abort error propagate so callers can handle timeouts; reference
functions fetchGh and fetchGhGraphql and preserve existing header/response
handling and getGithubToken usage.
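The timeout suggestion for `fetchGh`/`fetchGhGraphql` can be sketched generically; `withTimeout` below is a hypothetical helper, not code from the PR:

```typescript
// Run any abortable operation with a deadline. The timer is cleared in
// finally, whether the work resolved or threw, so no timer leaks remain.
async function withTimeout<T>(
  work: (signal: AbortSignal) => Promise<T>,
  ms = 30_000,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    return await work(controller.signal); // e.g. fetch(url, { signal })
  } finally {
    clearTimeout(timer);
  }
}
```

A caller would wrap each request as `withTimeout((signal) => fetch(url, { ...init, signal }))` so a hung connection rejects with an abort error after 30 seconds instead of blocking forever.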


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 0f18be0e-5b2d-48af-8755-d422f18944ae

📥 Commits

Reviewing files that changed from the base of the PR and between a2a9d59 and 9327985.

📒 Files selected for processing (17)
  • src/__tests__/integration.test.ts
  • src/commands/ls.tsx
  • src/commands/split.tsx
  • src/components/panel-files.tsx
  • src/components/pr-table.tsx
  • src/components/preview-panel.tsx
  • src/components/skeleton.tsx
  • src/hooks/usePanel.ts
  • src/index.tsx
  • src/lib/__tests__/cache.test.ts
  • src/lib/__tests__/github.test.ts
  • src/lib/auth.ts
  • src/lib/cache.ts
  • src/lib/db.ts
  • src/lib/diff-utils.ts
  • src/lib/github.ts
  • src/lib/split-state.ts

Comment on lines +58 to +68
export function PanelSkeleton({ tab, width, height }: { tab: string; width: number; height: number }) {
  if (tab === "body") {
    return (
      <box flexDirection="column" width={width} height={height} paddingX={1}>
        <box height={1} marginBottom={1}><SkeletonBlock width={Math.floor(width * 0.4)} color="#292e42" /></box>
        <box height={1}><SkeletonBlock width={Math.floor(width * 0.9)} color="#1f2335" /></box>
        <box height={1}><SkeletonBlock width={Math.floor(width * 0.85)} color="#1f2335" /></box>
        <box height={1} marginBottom={1}><SkeletonBlock width={Math.floor(width * 0.6)} color="#1f2335" /></box>
        <box height={1}><SkeletonBlock width={Math.floor(width * 0.95)} color="#1f2335" /></box>
        <box height={1}><SkeletonBlock width={Math.floor(width * 0.7)} color="#1f2335" /></box>
      </box>


⚠️ Potential issue | 🔴 Critical

Clamp computed skeleton widths/heights to avoid runtime repeat() crashes.

At Line 108 and Line 112, repeat(Math.min(width - 2, 40)) can become negative when width is narrow, causing RangeError. Same risk applies to block widths derived from Math.floor(width * ratio) when width is negative.

🐛 Proposed fix
 export function PanelSkeleton({ tab, width, height }: { tab: string; width: number; height: number }) {
+  const safeWidth = Math.max(0, Math.floor(width))
+  const safeHeight = Math.max(1, Math.floor(height))
+  const blockWidth = (ratio: number) => Math.max(0, Math.floor(safeWidth * ratio))
+  const frameWidth = Math.max(0, Math.min(safeWidth - 2, 40))
+
   if (tab === "body") {
     return (
-      <box flexDirection="column" width={width} height={height} paddingX={1}>
-        <box height={1} marginBottom={1}><SkeletonBlock width={Math.floor(width * 0.4)} color="#292e42" /></box>
+      <box flexDirection="column" width={safeWidth} height={safeHeight} paddingX={1}>
+        <box height={1} marginBottom={1}><SkeletonBlock width={blockWidth(0.4)} color="#292e42" /></box>
         ...
       </box>
     )
   }
   ...
-            <box height={1}><text fg="#292e42">{"\u250C" + "\u2500".repeat(Math.min(width - 2, 40))}</text></box>
+            <box height={1}><text fg="#292e42">{"\u250C" + "\u2500".repeat(frameWidth)}</text></box>
             ...
-            <box height={1}><text fg="#292e42">{"\u2514" + "\u2500".repeat(Math.min(width - 2, 40))}</text></box>
+            <box height={1}><text fg="#292e42">{"\u2514" + "\u2500".repeat(frameWidth)}</text></box>

Also applies to: 74-84, 89-99, 104-116
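The clamping fix reduces to two small helpers (hypothetical names, mirroring the proposed `frameWidth`):

```typescript
// Clamp a repeat count to zero so narrow panels never trigger a RangeError
// from String.prototype.repeat, which throws on negative counts.
function clampedRepeat(ch: string, count: number): string {
  return ch.repeat(Math.max(0, count));
}

// Frame rule width: at most 40 columns, never negative even when width < 2.
function frameWidth(width: number): number {
  return Math.max(0, Math.min(width - 2, 40));
}
```

At a terminal width of 1, `frameWidth` returns 0 and the skeleton renders an empty rule instead of crashing.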

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/skeleton.tsx` around lines 58 - 68, Computed skeleton
widths/heights in PanelSkeleton and related SkeletonBlock usages can be negative
(e.g., Math.floor(width * ratio) and repeat(Math.min(width - 2, 40))), causing
runtime RangeError; clamp all computed sizes with Math.max(0, ...) (or
Math.min/Math.max to enforce upper bounds) before using them in repeat() or as
width/height props so values never go below 0 (apply to PanelSkeleton,
SkeletonBlock calls, and any repeat(Math.min(width - 2, 40)) sites).

Comment on lines +27 to +28
setSplitRatio: React.Dispatch<React.SetStateAction<number>>
setPanelFullscreen: React.Dispatch<React.SetStateAction<boolean>>


⚠️ Potential issue | 🔴 Critical



Import the React state types before using React.Dispatch.

React is not in scope in this file, so these members will not type-check. Use named type imports (Dispatch, SetStateAction) from React.

💡 Suggested fix
-import { useState, useEffect, useRef, type MutableRefObject } from "react"
+import {
+  useState,
+  useEffect,
+  useRef,
+  type Dispatch,
+  type MutableRefObject,
+  type SetStateAction,
+} from "react"
@@
-  setSplitRatio: React.Dispatch<React.SetStateAction<number>>
-  setPanelFullscreen: React.Dispatch<React.SetStateAction<boolean>>
+  setSplitRatio: Dispatch<SetStateAction<number>>
+  setPanelFullscreen: Dispatch<SetStateAction<boolean>>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/hooks/usePanel.ts` around lines 27 - 28, The file uses React.Dispatch and
React.SetStateAction for the setSplitRatio and setPanelFullscreen types but
React is not imported; update the top of usePanel.ts to import the types
directly (import { Dispatch, SetStateAction } from 'react') and change the
declarations to use Dispatch<SetStateAction<number>> for setSplitRatio and
Dispatch<SetStateAction<boolean>> for setPanelFullscreen so the types resolve
correctly.

Comment on lines +214 to +216
case "split":
root.render(<SplitCommand repo={config.repoFilter} />)
break


⚠️ Potential issue | 🟡 Minor

The repo prop passed to SplitCommand is unused.

Per the context snippets from src/commands/split.tsx:77-117 and src/commands/split.tsx:7-9, the SplitCommand component ignores the repo prop and instead calls getRepoRoot() internally to discover the repository path. This means the --repo= CLI argument (parsed at line 69) has no effect for the split command.

Consider either:

  1. Using the repo prop in SplitCommand when provided, or
  2. Removing repoFilter from the split command config and documenting that --repo is not supported for this command.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/index.tsx` around lines 214 - 216, The SplitCommand is ignoring the repo
prop passed from the CLI (config.repoFilter) because it always calls
getRepoRoot() internally; update SplitCommand to accept and use the repo prop
when provided (e.g., prefer the incoming prop over calling getRepoRoot()), or if
you intend to disallow CLI repo selection, remove repoFilter from the split
command wiring in src/index.tsx and update docs; specifically modify the
SplitCommand component (and any functions inside it that call getRepoRoot()) to
use the passed-in repo prop when non-empty, or remove the prop from the caller
and related config so there is no unused prop.

Comment on lines +3 to +20
let cachedToken: string | null = null;

export async function getGithubToken(): Promise<string> {
if (cachedToken) {
return cachedToken;
}

// Use the gh CLI to get the OAuth token, avoiding any .env leaks via buildCleanEnv
const { stdout, exitCode, stderr } = await safeSpawn(["gh", "auth", "token"], {
env: buildCleanEnv(),
});

if (exitCode !== 0) {
throw new Error(`Failed to get GitHub token from gh CLI. Ensure you are logged in via 'gh auth login'. Error: ${stderr}`);
}

cachedToken = stdout.trim();
return cachedToken;


⚠️ Potential issue | 🟠 Major

Deduplicate the first token lookup across concurrent callers.

cachedToken is only populated after safeSpawn() finishes, so the initial burst of fetchGh()/fetchGhGraphql() calls will each run gh auth token. fetchPRPanelData() alone fans out four authenticated requests, so cold starts still pay multiple subprocess launches.

💡 Suggested fix
 let cachedToken: string | null = null;
+let cachedTokenPromise: Promise<string> | null = null;
 
 export async function getGithubToken(): Promise<string> {
-  if (cachedToken) {
-    return cachedToken;
-  }
+  if (cachedToken) return cachedToken;
+  if (cachedTokenPromise) return cachedTokenPromise;
 
-  // Use the gh CLI to get the OAuth token, avoiding any .env leaks via buildCleanEnv
-  const { stdout, exitCode, stderr } = await safeSpawn(["gh", "auth", "token"], {
-    env: buildCleanEnv(),
-  });
-
-  if (exitCode !== 0) {
-    throw new Error(`Failed to get GitHub token from gh CLI. Ensure you are logged in via 'gh auth login'. Error: ${stderr}`);
-  }
-
-  cachedToken = stdout.trim();
-  return cachedToken;
+  cachedTokenPromise = (async () => {
+    const { stdout, exitCode, stderr } = await safeSpawn(["gh", "auth", "token"], {
+      env: buildCleanEnv(),
+    });
+
+    if (exitCode !== 0) {
+      throw new Error(`Failed to get GitHub token from gh CLI. Ensure you are logged in via 'gh auth login'. Error: ${stderr}`);
+    }
+
+    cachedToken = stdout.trim();
+    return cachedToken;
+  })();
+
+  try {
+    return await cachedTokenPromise;
+  } finally {
+    cachedTokenPromise = null;
+  }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/auth.ts` around lines 3 - 20, The getGithubToken function races on
cold start because cachedToken is only set after safeSpawn completes; change it
to guard concurrent callers by adding a shared in-flight promise (e.g.,
pendingTokenPromise) alongside cachedToken so the first caller kicks off
safeSpawn and stores the promise, subsequent callers await that promise instead
of spawning new processes, and once resolved set cachedToken and clear
pendingTokenPromise; update getGithubToken to return cachedToken immediately if
present, otherwise await the pending promise or create it by calling safeSpawn
with buildCleanEnv(), handle errors by rejecting/clearing the pending promise so
future calls can retry, and reference cachedToken and getGithubToken (and the
safeSpawn call) when making the change.
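The in-flight-promise pattern from the fix can be exercised in isolation. A minimal, runnable sketch, where `slowLookup`, `getToken`, and `spawnCount` are illustrative stand-ins (not names from the PR) for the `gh auth token` spawn and its caller:

```typescript
// Illustrative stand-in: `slowLookup` plays the role of the `gh auth token` subprocess.
let spawnCount = 0;
let cached: string | null = null;
let inflight: Promise<string> | null = null;

async function slowLookup(): Promise<string> {
  spawnCount++; // each call here represents one subprocess launch
  await new Promise((resolve) => setTimeout(resolve, 10));
  return "ghp_example";
}

async function getToken(): Promise<string> {
  if (cached) return cached;
  if (inflight) return inflight; // concurrent callers share the first lookup
  inflight = slowLookup().then((token) => (cached = token));
  try {
    return await inflight;
  } finally {
    inflight = null; // clear so a failed lookup can be retried
  }
}
```

A cold-start burst like `Promise.all([getToken(), getToken(), getToken()])` then triggers exactly one `slowLookup` call, which is the behavior the fix is after.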

Comment on lines +7 to +12
// Initialize database in ~/.config/raft/raft.sqlite
const configDir = join(homedir(), ".config", "raft");
mkdirSync(configDir, { recursive: true });

const dbPath = join(configDir, "raft.sqlite");
export const db = new Database(dbPath);

⚠️ Potential issue | 🟠 Major

Don't bind tests to the user's real cache DB.

This module opens ~/.config/raft/raft.sqlite on import, so test runs share and mutate the same persistent store. The timestamp key in src/lib/__tests__/cache.test.ts is already compensating for leaked rows instead of getting a clean database.

💡 Suggested fix
-import { join } from "node:path";
+import { dirname, join } from "node:path";
@@
-const configDir = join(homedir(), ".config", "raft");
-mkdirSync(configDir, { recursive: true });
-
-const dbPath = join(configDir, "raft.sqlite");
+const dbPath = process.env.RAFT_DB_PATH ?? join(homedir(), ".config", "raft", "raft.sqlite");
+mkdirSync(dirname(dbPath), { recursive: true });
 export const db = new Database(dbPath);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/db.ts` around lines 7 - 12, The module currently opens
~/.config/raft/raft.sqlite on import (configDir, dbPath, db) which leaks state
into tests; change it to avoid immediate file-backed DB creation by making DB
initialization configurable and lazy: export a function like initDb(path?:
string) or getDb() that creates and returns the Database instance instead of
constructing db at module top-level, and prefer an in-memory DB for tests by
checking process.env.NODE_ENV === 'test' or a configurable env var (e.g.
RAFT_DB_PATH) so tests can pass ':memory:' or a temp path; keep the existing
exported symbol name (db) behavior by setting it when initDb is called to
preserve call sites.
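The path-resolution half of the fix is easy to factor out and test. A sketch assuming the `RAFT_DB_PATH` override the review proposes (the variable name is the review's suggestion, not existing code):

```typescript
import { join } from "node:path";
import { homedir } from "node:os";

// Resolve the SQLite path, letting tests point at a temp file or ":memory:".
function resolveDbPath(env: Record<string, string | undefined> = process.env): string {
  return env.RAFT_DB_PATH ?? join(homedir(), ".config", "raft", "raft.sqlite");
}
```

A test run can then export `RAFT_DB_PATH` to a temp file so it never touches the user's real cache.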

Comment on lines +38 to +41
export function getCachedPRs(): PullRequest[] {
  const query = db.query(`SELECT data FROM pull_requests`);
  return query.all().map((row: any) => JSON.parse(row.data));
}

⚠️ Potential issue | 🟠 Major

Use updated_at to expire persisted cache entries.

All three getters return rows regardless of age, so PR details and panel payloads can survive across restarts indefinitely. Because usePanel() short-circuits on any panel-data hit, stale comments/files can look fresh forever.

💡 Suggested fix
+const CACHE_TTL_MINUTES = 15;
+
 export function getCachedPRDetails(url: string): PRDetails | null {
-  const query = db.query(`SELECT data FROM pr_details WHERE url = ?`);
+  const query = db.query(`
+    SELECT data
+    FROM pr_details
+    WHERE url = ?
+      AND updated_at >= datetime('now', '-${CACHE_TTL_MINUTES} minutes')
+  `);
   const row = query.get(url) as any;
   if (!row) return null;
   return JSON.parse(row.data);
 }

Apply the same TTL predicate to getCachedPRs() and getCachedPRPanelData().

Also applies to: 59-78

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/db.ts` around lines 38 - 41, getCachedPRs currently returns all rows
regardless of age; change its SQL to filter out entries whose updated_at is
older than the configured TTL (same predicate used in getCachedPRPanelData).
Update the db.query call in getCachedPRs to include a WHERE clause comparing
updated_at against the TTL cutoff (using the same TTL constant/function used by
getCachedPRPanelData), then parse the remaining rows as before; repeat the same
TTL filtering approach for getCachedPRPanelData to ensure cached panel payloads
expire correctly.
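The same freshness check can also be expressed in application code rather than SQL. A sketch assuming `updated_at` is stored as an ISO timestamp and a 15-minute TTL (both are assumptions, not values from the PR):

```typescript
const CACHE_TTL_MINUTES = 15; // assumed TTL, mirroring the suggested SQL predicate

// Returns true when a cached row's updated_at falls within the TTL window.
function isFresh(updatedAt: string, now: Date = new Date()): boolean {
  const ageMs = now.getTime() - new Date(updatedAt).getTime();
  return ageMs >= 0 && ageMs <= CACHE_TTL_MINUTES * 60_000;
}
```

Keeping the predicate in SQL (as the fix does) avoids deserializing stale rows at all; the in-code version is mainly useful for unit-testing the cutoff logic.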

Comment on lines +7 to +40
export async function fetchGh(endpoint: string, options: RequestInit = {}): Promise<any> {
  const token = await getGithubToken();
  const url = endpoint.startsWith("http") ? endpoint : `https://api.github.com/${endpoint.replace(/^\//, "")}`;

  const headers = new Headers(options.headers);
  headers.set("Authorization", `Bearer ${token}`);
  if (!headers.has("Accept")) {
    headers.set("Accept", "application/vnd.github.v3+json");
  }

  const response = await fetch(url, { ...options, headers });

  if (!response.ok) {
    const text = await response.text();
    throw new Error(`GitHub API error ${response.status}: ${text}`);
  }

  if (response.status === 204) {
    return {};
  }

  return response.json();
}

export async function fetchGhGraphql(query: string, variables: any = {}): Promise<any> {
  const token = await getGithubToken();
  const response = await fetch("https://api.github.com/graphql", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query, variables })
  });

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check if safeSpawn in process.ts has a 30s timeout
head -100 src/lib/process.ts | tail -60

Repository: OctavianTocan/raft

Length of output: 2003


🏁 Script executed:

# Check the full github.ts file to see actual implementation
cat -n src/lib/github.ts

Repository: OctavianTocan/raft

Length of output: 17129


🏁 Script executed:

# Search for any existing timeout handling in github.ts
rg -n "timeout|abort|AbortController" src/lib/github.ts

Repository: OctavianTocan/raft

Length of output: 44


🏁 Script executed:

# Find all callers of fetchGh and fetchGhGraphql
rg -n "fetchGh|fetchGhGraphql" --type ts --type tsx -B 2 -A 2

Repository: OctavianTocan/raft

Length of output: 89


🏁 Script executed:

rg "fetchGh|fetchGhGraphql" --type ts -l

Repository: OctavianTocan/raft

Length of output: 79


🏁 Script executed:

# Search across all files for callers
rg "fetchGh\(|fetchGhGraphql\(" -B 1 -A 1 | head -50

Repository: OctavianTocan/raft

Length of output: 2662


🏁 Script executed:

# Check if there are any other timeout mechanisms or patterns in the codebase
rg "AbortController|timeout|signal.*abort" --type ts

Repository: OctavianTocan/raft

Length of output: 601


Add timeout protection to GitHub API fetch helpers.

safeSpawn() enforces a 30s timeout to prevent hung subprocesses from blocking the CLI. The new fetchGh() and fetchGhGraphql() wrappers lack any timeout mechanism—raw fetch() calls will hang indefinitely if the GitHub API becomes unresponsive. Since all REST and GraphQL operations funnel through these shared wrappers, a stalled connection will block the entire CLI.

Suggested fix
+const GITHUB_REQUEST_TIMEOUT_MS = 30_000;
+
+async function fetchWithTimeout(url: string, options: RequestInit): Promise<Response> {
+  const controller = new AbortController();
+  const timeout = setTimeout(() => controller.abort(), GITHUB_REQUEST_TIMEOUT_MS);
+  try {
+    return await fetch(url, { ...options, signal: options.signal ?? controller.signal });
+  } finally {
+    clearTimeout(timeout);
+  }
+}
+
 export async function fetchGh(endpoint: string, options: RequestInit = {}): Promise<any> {
   const token = await getGithubToken();
   const url = endpoint.startsWith("http") ? endpoint : `https://api.github.com/${endpoint.replace(/^\//, "")}`;
@@
-  const response = await fetch(url, { ...options, headers });
+  const response = await fetchWithTimeout(url, { ...options, headers });
@@
 export async function fetchGhGraphql(query: string, variables: any = {}): Promise<any> {
   const token = await getGithubToken();
-  const response = await fetch("https://api.github.com/graphql", {
+  const response = await fetchWithTimeout("https://api.github.com/graphql", {
     method: "POST",
     headers: {
       "Authorization": `Bearer ${token}`,
       "Content-Type": "application/json",
     },
     body: JSON.stringify({ query, variables })
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/github.ts` around lines 7 - 40, fetchGh and fetchGhGraphql currently
call fetch() with no timeout and can hang; add a 30s timeout using
AbortController: create an AbortController, set a setTimeout to call
controller.abort() after 30000ms, pass controller.signal into the fetch call
(for fetchGh include it in the options spread and for fetchGhGraphql add signal
to the request init), ensure the timeout is cleared (clearTimeout) after fetch
resolves or errors, and throw or let the abort error propagate so callers can
handle timeouts; reference functions fetchGh and fetchGhGraphql and preserve
existing header/response handling and getGithubToken usage.
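The AbortController pattern generalizes so it can be exercised without the network. A sketch with illustrative names (`withTimeout`, `work` are not from the PR):

```typescript
// Runs `work`, aborting its signal after `ms` milliseconds.
async function withTimeout<T>(
  work: (signal: AbortSignal) => Promise<T>,
  ms: number,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    return await work(controller.signal);
  } finally {
    clearTimeout(timer); // always reclaim the timer, success or failure
  }
}
```

In the real wrappers, `work` would be the `fetch(url, { ...options, signal })` call, so a stalled GitHub response rejects after the deadline instead of hanging the CLI.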

Comment on lines 84 to 127
 export async function fetchAllAccountPRs(
   onProgress?: (status: string) => void,
 ): Promise<PullRequest[]> {
-  onProgress?.("Discovering accounts...")
-  const accounts = await getGhAccounts()
-  const originalAccount = await getActiveAccount()
-  const allPRs: PullRequest[] = []
-  const seen = new Set<string>()
-
-  for (let i = 0; i < accounts.length; i++) {
-    const account = accounts[i]
-    onProgress?.(`Fetching PRs for ${account} (${i + 1}/${accounts.length})...`)
-    if (accounts.length > 1) {
-      try { await switchAccount(account) } catch { continue }
-    }
-    try {
-      const json = await runGh([
-        "search", "prs",
-        "--author=@me",
-        "--state=open",
-        "--limit=100",
-        "--json", "number,title,url,body,state,repository,isDraft,createdAt",
-      ])
-      if (json) {
-        const parsed = parseSearchResults(json)
-        for (const pr of parsed) {
-          if (!seen.has(pr.url)) {
-            seen.add(pr.url)
-            allPRs.push(pr)
-          }
-        }
-        onProgress?.(`Found ${allPRs.length} PRs so far...`)
-      }
-    } catch { /* skip account if query fails */ }
-  }
-
-  onProgress?.(`Loaded ${allPRs.length} PRs across ${accounts.length} accounts`)
-
-  // Restore original account
-  if (accounts.length > 1 && originalAccount) {
-    try { await switchAccount(originalAccount) } catch { /* ignore */ }
-  }
-
-  return allPRs
+  onProgress?.("Fetching PRs...");
+  const json = await fetchGh("search/issues?q=is:pr+is:open+author:@me&per_page=100");
+  return parseSearchResults(json.items);
 }
 
 /**
  * Fetch open pull requests for a specific author or all accessible repositories.
  *
  * - If `author` is undefined: fetches PRs across all authenticated accounts via `@me`.
  * - If `author` is empty string: fetches all open PRs accessible to the current account.
  * - If `author` is provided: fetches PRs by that specific author.
  *
  * @param author - GitHub username or empty string for all repos, undefined for @me across accounts
  * @param onProgress - Optional callback for progress status messages
  * @returns Array of open pull requests
  */
 export async function fetchOpenPRs(
   author?: string,
   onProgress?: (status: string) => void,
 ): Promise<PullRequest[]> {
   if (author === "") {
     // Empty string means fetch all PRs across all repos the user has access to
-    onProgress?.("Fetching all open PRs...")
-    const json = await runGh([
-      "search", "prs",
-      "--state=open",
-      "--limit=1000",
-      "--json", "number,title,url,body,state,repository,isDraft,createdAt,author",
-    ])
-    if (!json) return []
-    return parseSearchResults(json)
+    onProgress?.("Fetching all open PRs...");
+    const json = await fetchGh("search/issues?q=is:pr+is:open&per_page=100");
+    return parseSearchResults(json.items);
   }
   if (author) {
-    onProgress?.(`Fetching PRs for ${author}...`)
-    const json = await runGh([
-      "search", "prs",
-      `--author=${author}`,
-      "--state=open",
-      "--limit=100",
-      "--json", "number,title,url,body,state,repository,isDraft,createdAt,author",
-    ])
-    if (!json) return []
-    return parseSearchResults(json)
+    onProgress?.(`Fetching PRs for ${author}...`);
+    const json = await fetchGh(`search/issues?q=is:pr+is:open+author:${author}&per_page=100`);
+    return parseSearchResults(json.items);
   }
-  return fetchAllAccountPRs(onProgress)
+  return fetchAllAccountPRs(onProgress);
 }
 
 /** Try fetching repo PRs, attempting each account if needed. */
 export async function fetchRepoPRs(repo: string): Promise<PullRequest[]> {
-  const accounts = await getGhAccounts()
-  const originalAccount = await getActiveAccount()
-
-  for (const account of accounts) {
-    if (accounts.length > 1) {
-      try { await switchAccount(account) } catch { continue }
-    }
-    try {
-      const json = await runGh([
-        "pr", "list",
-        "--repo", repo,
-        "--state=open",
-        "--limit=100",
-        "--json", "number,title,url,body,state,isDraft,headRefName,baseRefName,createdAt",
-      ])
-      // Restore original account before returning
-      if (accounts.length > 1 && originalAccount) {
-        try { await switchAccount(originalAccount) } catch { /* ignore */ }
-      }
-      if (!json) return []
-      const raw = JSON.parse(json) as Array<{
-        number: number
-        title: string
-        url: string
-        body: string
-        state: string
-        isDraft: boolean
-        headRefName: string
-        baseRefName: string
-        createdAt: string
-      }>
-      return raw.map((pr) => ({
-        number: pr.number,
-        title: pr.title,
-        url: `https://github.com/${repo}/pull/${pr.number}`,
-        body: (pr.body ?? "").split("\n")[0]?.slice(0, 80) ?? "",
-        state: pr.state,
-        isDraft: pr.isDraft,
-        repo,
-        headRefName: pr.headRefName,
-        baseRefName: pr.baseRefName,
-        createdAt: pr.createdAt,
-      }))
-    } catch {
-      // This account can't access the repo, try next
-      continue
-    }
-  }
-
-  // Restore original account
-  if (accounts.length > 1 && originalAccount) {
-    try { await switchAccount(originalAccount) } catch { /* ignore */ }
-  }
-
-  return []
+  try {
+    const prs = await fetchGh(`repos/${repo}/pulls?state=open&per_page=100`);
+    return prs.map((pr: any) => ({
+      number: pr.number,
+      title: pr.title,
+      url: pr.html_url,
+      body: (pr.body ?? "").split("\n")[0]?.slice(0, 80) ?? "",
+      state: pr.state,
+      isDraft: pr.draft || false,
+      repo,
+      headRefName: pr.head.ref,
+      baseRefName: pr.base.ref,
+      createdAt: pr.created_at,
+    }));
+  } catch {
+    return [];
+  }
 }

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's see the full file to understand the context
wc -l src/lib/github.ts

Repository: OctavianTocan/raft

Length of output: 83


🏁 Script executed:

# Read the entire github.ts file with line numbers
cat -n src/lib/github.ts

Repository: OctavianTocan/raft

Length of output: 17129


🏁 Script executed:

# Check if there's any pagination logic or Link header handling
rg -i "pagination|link.*header|page|per_page" src/lib/github.ts -A 2 -B 2

Repository: OctavianTocan/raft

Length of output: 1258


🏁 Script executed:

# Search for the fetchGh function definition to see how it handles responses
ast-grep --pattern 'function fetchGh($$$) { $$$ }'

Repository: OctavianTocan/raft

Length of output: 44


🏁 Script executed:

# Also look for async function fetchGh
ast-grep --pattern 'async function fetchGh($$$) { $$$ }'

Repository: OctavianTocan/raft

Length of output: 44


Implement pagination for PR fetches to handle repos with >100 open PRs.

The fetchAllAccountPRs, fetchOpenPRs, and fetchRepoPRs functions only fetch the first page (100 results) and stop, silently missing PRs in active repositories. Implement pagination following the pattern used in fetchPRPanelData (lines 286–296): loop with page parameter, concatenate results, and break when the response has fewer than 100 items.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/github.ts` around lines 84 - 127, fetchAllAccountPRs, fetchOpenPRs
and fetchRepoPRs currently only request a single page (per_page=100) and thus
miss additional PRs; change each to paginate like fetchPRPanelData: add a page
counter, loop requests appending results from each page (use
&page=${page}&per_page=100 or add page param to the query), stop when the
returned items length is less than 100 (or zero), and return the concatenated
list; for fetchRepoPRs map each page's items into the PullRequest shape and
concatenate before returning, preserving the existing error handling.
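The loop the review describes can be sketched generically. `fetchPage` here stands in for a `fetchGh` call carrying `page`/`per_page` query params; none of these names are from the PR:

```typescript
// Accumulate pages until a short page signals the end of results.
async function fetchAllPages<T>(
  fetchPage: (page: number) => Promise<T[]>,
  perPage = 100,
): Promise<T[]> {
  const all: T[] = [];
  for (let page = 1; ; page++) {
    const items = await fetchPage(page);
    all.push(...items);
    if (items.length < perPage) break; // fewer than perPage items: last page
  }
  return all;
}
```

Each of the three fetchers could then pass a closure like `(page) => fetchGh(\`repos/${repo}/pulls?state=open&per_page=100&page=${page}\`)` and return the concatenated list.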

Comment on lines 137 to +144
 export async function findStackComment(repo: string, prNumber: number): Promise<number | null> {
-  const json = await runGh([
-    "api",
-    `repos/${repo}/issues/${prNumber}/comments`,
-    "--jq", `.[] | select(.body | contains("${STACK_COMMENT_MARKER}")) | .id`,
-  ])
-  if (!json) return null
-  const id = parseInt(json.split("\n")[0], 10)
-  return isNaN(id) ? null : id
+  const comments = await fetchGh(`repos/${repo}/issues/${prNumber}/comments`);
+  for (const c of comments) {
+    if (c.body && c.body.includes(STACK_COMMENT_MARKER)) {
+      return c.id;
+    }
+  }
+  return null;

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n src/lib/github.ts | head -150

Repository: OctavianTocan/raft

Length of output: 5743


🏁 Script executed:

sed -n '275,290p' src/lib/github.ts

Repository: OctavianTocan/raft

Length of output: 668


🏁 Script executed:

sed -n '279,330p' src/lib/github.ts

Repository: OctavianTocan/raft

Length of output: 1672


🏁 Script executed:

rg -n "per_page|page=" src/lib/github.ts

Repository: OctavianTocan/raft

Length of output: 521


Paginate comment reads to avoid duplicates and incomplete data on busy PRs.

findStackComment() (line 138) and fetchPRPanelData() (lines 281-282) fetch issue and pull request comments without pagination parameters. Since GitHub's default page size is 30 items, these functions will only retrieve the first page of comments. On busy PRs with many comments, this leads to:

  • Missed stack comments (allowing duplicate stack comments to be created)
  • Incomplete panel discussions (review comments beyond the first page are lost)

Note that fetchPRPanelData() already implements pagination for files (lines 290-295); apply the same pattern to issueComments and codeComments fetches.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lib/github.ts` around lines 137 - 144, findStackComment and the
comment-fetching in fetchPRPanelData currently call fetchGh without pagination
and therefore only get the first page of comments; update both to paginate (like
the existing files pagination) by looping page numbers, calling fetchGh with
query params page and per_page, accumulating results for issueComments and
codeComments until an empty page is returned, then search the accumulated
comments for STACK_COMMENT_MARKER in findStackComment (returning c.id) and use
the full accumulated lists in fetchPRPanelData so no comments are missed on busy
PRs.
