feat: implement intent-sliced semantic diff grouping #5
OctavianTocan wants to merge 2 commits into main
Conversation
- Replaced 140+ `gh` CLI subprocess calls with native `fetch` via the GitHub API
- Added in-memory OAuth token caching via `gh auth token`
- Added local SQLite persistence using `bun:sqlite` for zero-latency PR lists
- Optimized the React rendering cascade with `useDeferredValue` and `React.memo`
- Implemented `PanelSkeleton` for smooth, tab-aware loading transitions
- Batched GraphQL PR details queries into chunks to prevent rate limits
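The CLI-to-fetch migration described above can be sketched as a thin authenticated wrapper. This is a hypothetical sketch: the helper names (`buildSearchPath`, `fetchGh`) and the exact query shape are assumptions based on the description, not the PR's actual code.

```typescript
const API_BASE = "https://api.github.com";

// Pure helper: build the search path for open PRs, optionally filtered by author.
function buildSearchPath(author: string): string {
  const q = author === "" ? "is:pr+is:open" : `is:pr+is:open+author:${author}`;
  return `search/issues?q=${q}&per_page=100`;
}

// Thin wrapper over fetch with token auth; token retrieval (e.g. via
// `gh auth token`) is elided here.
async function fetchGh(path: string, token: string): Promise<unknown> {
  const res = await fetch(`${API_BASE}/${path}`, {
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: "application/vnd.github+json",
    },
  });
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
  return res.json();
}
```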
Reviewer's Guide

Refactors GitHub integration to use the REST/GraphQL APIs with a shared auth helper, adds intent-based diff grouping with boilerplate folding in the files panel, brings caching, performance, and UX improvements to PR listing and preview, and introduces a new split-branch inspection command.

Sequence diagram for loading PR list with caching and GitHub API

sequenceDiagram
actor User
participant LsCommand
participant PRCache
participant DB as DbModule
participant Github as GithubApi
participant GitHubAPI
User->>LsCommand: run raft ls
activate LsCommand
LsCommand->>DB: getCachedPRs()
DB-->>LsCommand: cachedPullRequests[]
alt hasCached
LsCommand->>LsCommand: sort cachedPullRequests
LsCommand-->>User: initial PR list
else noCached
LsCommand->>LsCommand: show loading state
end
LsCommand->>Github: fetchOpenPRs(author,onProgress)
activate Github
Github->>Github: fetchGh("search/issues?...&per_page=100")
Github->>GitHubAPI: HTTPS request (REST)
GitHubAPI-->>Github: JSON items[]
Github->>Github: parseSearchResults(items)
Github-->>LsCommand: pullRequests[]
deactivate Github
LsCommand->>LsCommand: sort pullRequests
LsCommand->>DB: cachePRs(pullRequests)
LsCommand-->>User: updated PR list
loop for each visible PR needing details
LsCommand->>PRCache: hasDetails(pr.url)?
alt notCached
LsCommand->>Github: batchFetchPRDetails(prsToFetch)
activate Github
Github->>Github: build GraphQL query
Github->>GitHubAPI: HTTPS request (GraphQL)
GitHubAPI-->>Github: data for PR batch
Github-->>LsCommand: Map<url,PRDetails>
deactivate Github
LsCommand->>PRCache: setDetails(url,details)
PRCache->>DB: cachePRDetails(url,details)
else cached
PRCache-->>LsCommand: PRDetails
end
end
LsCommand->>LsCommand: compute urgencyMap, sortedPRs
LsCommand-->>User: attention-sorted PR list
deactivate LsCommand
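The cache interplay in the diagram above can be condensed into a read-through/write-through sketch. The class and method names follow the diagram; the persistence functions are injected stubs standing in for the real `bun:sqlite` calls, and the memory-only `hasDetails` mirrors the reviewers' suggestion below rather than the PR's exact code.

```typescript
class PRCache<T> {
  private details = new Map<string, T>();

  constructor(
    private loadFromDb: (url: string) => T | null,
    private saveToDb: (url: string, data: T) => void,
  ) {}

  setDetails(url: string, data: T): void {
    this.details.set(url, data); // write-through: memory first...
    this.saveToDb(url, data);    // ...then persist (SQLite in the PR)
  }

  getDetails(url: string): T | undefined {
    const hit = this.details.get(url);
    if (hit !== undefined) return hit;
    const fromDb = this.loadFromDb(url); // read-through on a miss
    if (fromDb !== null) {
      this.details.set(url, fromDb);     // hydrate memory for next time
      return fromDb;
    }
    return undefined;
  }

  // Memory-only check, so hot render paths never trigger synchronous DB I/O.
  hasDetails(url: string): boolean {
    return this.details.has(url);
  }
}
```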
Sequence diagram for semantic file grouping and boilerplate toggle

sequenceDiagram
actor User
participant PanelFiles
participant DiffUtils
User->>PanelFiles: open Files tab with files[]
activate PanelFiles
PanelFiles->>PanelFiles: useMemo group files by intent
loop for each file
PanelFiles->>DiffUtils: getFileIntent(file.filename)
DiffUtils-->>PanelFiles: intent (Source_Code|Tests|Config|Boilerplate)
PanelFiles->>PanelFiles: add file to groups[intent]
end
PanelFiles->>User: render groups in intentOrder
Note over PanelFiles: Boilerplate group is folded by default
User->>PanelFiles: press Shift+B
PanelFiles->>PanelFiles: toggle showBoilerplate
alt showBoilerplate is true
PanelFiles->>User: render Boilerplate header and all boilerplate file diffs
else showBoilerplate is false
PanelFiles->>User: render collapsed Boilerplate summary row
end
deactivate PanelFiles
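The grouping loop in the diagram reduces to a simple classify-and-bucket pass. The heuristics below are illustrative assumptions only; the PR's real rules live in `getFileIntent` in `diff-utils.ts`.

```typescript
type Intent = "Source Code" | "Tests" | "Config" | "Boilerplate";

// Illustrative heuristics; the actual classification lives in diff-utils.ts.
function getFileIntent(filename: string): Intent {
  const lower = filename.toLowerCase();
  if (lower.includes(".test.") || lower.includes("__tests__")) return "Tests";
  if (lower.endsWith(".lock") || lower.endsWith("-lock.json")) return "Boilerplate";
  if (lower.endsWith(".yml") || lower.endsWith(".yaml") || lower.endsWith(".toml")) return "Config";
  return "Source Code";
}

// Bucket files by intent, preserving input order within each group.
function groupByIntent(files: string[]): Map<Intent, string[]> {
  const groups = new Map<Intent, string[]>();
  for (const file of files) {
    const intent = getFileIntent(file);
    const bucket = groups.get(intent) ?? [];
    bucket.push(file);
    groups.set(intent, bucket);
  }
  return groups;
}
```

The UI then renders groups in a fixed intent order and, per the diagram, keeps the Boilerplate group folded until Shift+B toggles it.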
ER diagram for new SQLite PR cache schema

erDiagram
pull_requests {
TEXT url PK
JSON data
DATETIME updated_at
}
pr_details {
TEXT url PK
JSON data
DATETIME updated_at
}
pr_panel_data {
TEXT url PK
JSON data
DATETIME updated_at
}
pull_requests ||--|| pr_details : same_pr_url
pull_requests ||--|| pr_panel_data : same_pr_url
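All three tables in the ER diagram share one shape (a `url` key, a JSON payload, a timestamp), so the DDL can be generated from a list. This is a sketch of the schema the diagram implies; the actual migration in `lib/db.ts` may differ.

```typescript
const CACHE_TABLES = ["pull_requests", "pr_details", "pr_panel_data"] as const;

// One DDL statement per cache table; `data` holds a JSON.stringify'd payload.
function cacheTableDdl(table: string): string {
  return (
    `CREATE TABLE IF NOT EXISTS ${table} ` +
    `(url TEXT PRIMARY KEY, data TEXT NOT NULL, ` +
    `updated_at TEXT DEFAULT CURRENT_TIMESTAMP)`
  );
}

const schema = CACHE_TABLES.map(cacheTableDdl);
```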
Walkthrough

This PR introduces persistent SQLite-based caching for PRs and PR details, batch fetching of PR information from GitHub's API, a new split management command with an interactive TUI, and UI enhancements including file grouping by intent (Source Code, Tests, Config, Boilerplate) with togglable boilerplate visibility. The core github.ts module shifts from gh CLI-based operations to direct fetch-based API calls with token authentication.
Sequence Diagram(s)

sequenceDiagram
participant UI as ls.tsx UI
participant Cache as Cache Layer
participant DB as SQLite DB
participant GitHub as GitHub API
participant Batch as Batch Fetcher
UI->>Cache: On mount: load cached PRs
Cache->>DB: getCachedPRs()
DB-->>Cache: [PR list] or empty
Cache-->>UI: Populate sorted list immediately
UI->>Batch: Need PR details (batch)
Batch->>GitHub: batchFetchPRDetails (batched GraphQL)
GitHub-->>Batch: [PRDetails map]
Batch->>Cache: Update details cache
Cache->>DB: cachePRDetails(url, details)
DB-->>Cache: ✓ Stored
Cache-->>UI: Render with details
Note over UI,DB: New PRs from search/filter
UI->>GitHub: fetchRepoPRs()
GitHub-->>UI: [PRs]
UI->>Cache: cachePRs(newList)
Cache->>DB: Clear & insert PRs
DB-->>Cache: ✓ Transaction complete
sequenceDiagram
participant User as User Input
participant Split as SplitCommand
participant FS as File System
participant State as split-state lib
User->>Split: Launch split command
Split->>FS: getRepoRoot()
FS-->>Split: repo path
Split->>FS: Read .raft-split.json
FS-->>Split: file content
Split->>State: readSplitState()
State-->>Split: SplitState
Split->>Split: formatSplitTopology()
Split-->>User: Render topology tree + splits list
User->>Split: Press 'j'/'k' navigate
Split-->>User: Update selection highlight
User->>Split: Press 'enter' on split
Split->>FS: Open PR URL (system command)
FS-->>User: Browser opens URL
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
Hey - I've found 2 issues, and left some high level feedback:
- In `SplitCommand`, the `repo` prop is accepted but never used, and the PR-opening logic relies on `Bun.spawn(["open", ...])`, which will only work on macOS; consider either wiring the `repo` parameter into the command or removing it, and using a cross-platform opener (e.g., `xdg-open`/`start`) to avoid platform-specific behavior.
- The SQLite cache in `lib/db.ts` is initialized and opened at module import time, which can make tests and tooling harder to control; consider lazily creating the database/connection on first use so environments that don't need persistence (or have restricted filesystems) can avoid side effects.
- The `PanelFiles` boilerplate grouping currently treats all `.yml`/`.yaml`/`.toml`/`.ini`/dockerfile/makefile as `Config`, which may sweep some application-level resources into config; if you expect mixed usage, you might want to tighten these heuristics (e.g., by path prefixes like `config/`, `.github/`, etc.) to avoid misclassification.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `SplitCommand`, the `repo` prop is accepted but never used, and the PR-opening logic relies on `Bun.spawn(["open", ...])`, which will only work on macOS; consider either wiring the `repo` parameter into the command or removing it, and using a cross‑platform opener (e.g., `xdg-open`/`start`) to avoid platform-specific behavior.
- The SQLite cache in `lib/db.ts` is initialized and opened at module import time, which can make tests and tooling harder to control; consider lazily creating the database/connection on first use so environments that don't need persistence (or have restricted filesystems) can avoid side effects.
- The `PanelFiles` boilerplate grouping currently treats all `.yml`/`.yaml`/`.toml`/`.ini`/`dockerfile`/`makefile` as `Config`, which may sweep some application-level resources into config; if you expect mixed usage, you might want to tighten these heuristics (e.g., by path prefixes like `config/`, `.github/`, etc.) to avoid misclassification.
## Individual Comments
### Comment 1
<location path="src/lib/cache.ts" line_range="24-25" />
<code_context>
+ cachePRDetails(url, data);
}
hasDetails(url: string): boolean {
- return this.details.has(url)
+ return this.details.has(url) || getCachedPRDetails(url) !== null;
}
</code_context>
<issue_to_address>
**suggestion (performance):** Avoid hitting SQLite on every `hasDetails`/`hasPanelData` call to prevent unnecessary synchronous I/O in hot paths.
Because `hasDetails`/`hasPanelData` now call `getCachedPR*` on cache misses, uncached URLs incur a synchronous DB read on every check, which is risky in render-driven paths. Consider either (a) maintaining a `knownMissing`/negative cache, or (b) restricting DB access to the `get*` methods and having `has*` read only the in-memory maps, with callers responsible for triggering loads when `get*` returns `undefined`.
Suggested implementation:
```typescript
hasDetails(url: string): boolean {
// Only consult the in-memory map; DB access is restricted to getDetails.
return this.details.has(url);
}
```
```typescript
hasPanelData(url: string): boolean {
// Only consult the in-memory map; DB access is restricted to getPanelData.
return this.panelData.has(url);
}
```
1. Ensure that `getDetails` and `getPanelData` both:
- Call their respective `getCachedPR*` functions on cache miss.
- Populate the in-memory maps (`this.details` / `this.panelData`) with any data loaded from SQLite before returning it.
2. Call sites that previously relied on `hasDetails`/`hasPanelData` triggering disk reads should now either:
- Call `getDetails`/`getPanelData` directly and branch on `undefined`, or
- Explicitly trigger a load via the appropriate `get*` before using `has*` if they need to ensure the cache is hydrated.
</issue_to_address>
### Comment 2
<location path="src/lib/__tests__/cache.test.ts" line_range="38-41" />
<code_context>
- expect(cache.hasDetails("x")).toBe(false)
- cache.setDetails("x", { additions: 0, deletions: 0, commentCount: 0, reviews: [], headRefName: "" })
- expect(cache.hasDetails("x")).toBe(true)
+ const uniqueKey = "test-unique-key-" + Date.now();
+ expect(cache.hasDetails(uniqueKey)).toBe(false)
+ cache.setDetails(uniqueKey, { additions: 0, deletions: 0, commentCount: 0, reviews: [], headRefName: "" })
+ expect(cache.hasDetails(uniqueKey)).toBe(true)
})
})
</code_context>
<issue_to_address>
**suggestion (testing):** Extend PRCache tests to cover DB-backed behavior and avoid real DB side effects
Because `PRCache` now delegates to `db.ts`, this test only covers the in-memory path. We should also verify DB-backed behavior by mocking `./db` so that:
- `getCachedPRDetails` returns a known object when the in-memory `details` map is empty, and `getDetails` surfaces and caches it.
- `setDetails` calls `cachePRDetails` with the expected arguments.
Also, as written, `hasDetails`/`getDetails` in tests will hit the real DB under `~/.config/raft/raft.sqlite`. Please mock `./db` to an in-memory fake in this suite so we avoid mutating user state and keep tests deterministic.
Suggested implementation:
```typescript
test("has() returns correct boolean", () => {
const cache = new PRCache()
const key = "test-has-details-key"
expect(cache.hasDetails(key)).toBe(false)
cache.setDetails(key, { additions: 0, deletions: 0, commentCount: 0, reviews: [], headRefName: "" })
expect(cache.hasDetails(key)).toBe(true)
})
test("getDetails returns DB-backed value and caches it", () => {
const cache = new PRCache()
const key = "test-db-backed-key"
// Ensure no in-memory details before DB lookup
expect(cache.hasDetails(key)).toBe(false)
// Arrange DB to return a known object
const db = jest.requireMock("../db") as {
getCachedPRDetails: jest.Mock
cachePRDetails: jest.Mock
}
const dbDetails = {
additions: 5,
deletions: 3,
commentCount: 2,
reviews: [{ state: "APPROVED" }],
headRefName: "db-backed-branch",
}
db.getCachedPRDetails.mockReturnValue(dbDetails)
const result = cache.getDetails(key)
// Surfaces DB value
expect(result).toEqual(dbDetails)
// Caches result in memory so subsequent lookups don't need DB
expect(cache.hasDetails(key)).toBe(true)
expect(db.getCachedPRDetails).toHaveBeenCalledWith(key)
})
test("setDetails persists to DB via cachePRDetails", () => {
const cache = new PRCache()
const key = "test-set-details-db-persist"
const details = {
additions: 1,
deletions: 1,
commentCount: 0,
reviews: [],
headRefName: "persisted-branch",
}
const db = jest.requireMock("../db") as {
getCachedPRDetails: jest.Mock
cachePRDetails: jest.Mock
}
cache.setDetails(key, details)
// Persists to DB
expect(db.cachePRDetails).toHaveBeenCalledWith(key, details)
// Still behaves correctly in-memory
expect(cache.hasDetails(key)).toBe(true)
expect(cache.getDetails(key)).toEqual(details)
})
})
```
To fully implement the suggestion and avoid real DB side effects, you should also:
1. Add a Jest module mock for `../db` at the top of this test file (before importing `PRCache`), for example:
```ts
jest.mock("../db", () => {
return {
getCachedPRDetails: jest.fn(),
cachePRDetails: jest.fn(),
}
})
```
2. Ensure the mock path (`"../db"`) matches the actual import path used inside the `PRCache` implementation. If `PRCache` imports from `"./db"` or another relative path, adjust both the `jest.mock` call and `jest.requireMock` calls accordingly.
3. Optionally, add a `beforeEach(() => jest.clearAllMocks())` in this test suite to keep DB mock expectations isolated between tests.
</issue_to_address>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 93279851c7
const CHUNK_SIZE = 20;
for (let i = 0; i < prs.length; i += CHUNK_SIZE) {
  const chunk = prs.slice(i, i + CHUNK_SIZE);
  let query = "query {\\n";
Use a real newline in batched GraphQL query header
The batch query is initialized with "query {\\n", which embeds a literal backslash token before the first field. GraphQL does not accept \ outside string literals, so this causes fetchGhGraphql to reject and every batch to fall into the catch path, leaving PR details empty and breaking lifecycle/attention metadata loading.
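A sketch of the fix Codex describes: build the header with a real newline escape rather than a doubled backslash. The `PRRef` shape and the selected fields (`additions deletions`) are placeholders, not the PR's actual selection set.

```typescript
type PRRef = { owner: string; name: string; number: number };

// Build one aliased repository(...) selection per PR in the chunk.
function buildBatchQuery(chunk: PRRef[]): string {
  let query = "query {\n"; // a real newline, not a literal backslash-n
  chunk.forEach((pr, j) => {
    query +=
      `  pr_${j}: repository(owner: "${pr.owner}", name: "${pr.name}") {\n` +
      `    pullRequest(number: ${pr.number}) { additions deletions }\n` +
      `  }\n`;
  });
  return query + "}";
}
```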
-  if (!json) return []
-  return parseSearchResults(json)
+  onProgress?.("Fetching all open PRs...");
+  const json = await fetchGh("search/issues?q=is:pr+is:open&per_page=100");
Paginate open-PR search instead of fetching only page 1
This search call requests per_page=100 but never iterates page, so users with more than 100 matching PRs only get a partial list. Because the app replaces cached PR rows with each refresh, the missing pages effectively disappear from the UI/state after sync, which is a functional regression from the previous higher-limit behavior.
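A sketch of the pagination loop the comment asks for. Here `fetchPage` stands in for a `fetchGh` call with a `&page=N` parameter appended; stopping on a short page is one common convention, and GitHub's search API additionally caps results at 1000, which a real implementation would need to handle.

```typescript
// Accumulate items across pages until a short page signals the end.
async function fetchAllSearchPages<T>(
  fetchPage: (page: number) => Promise<{ items: T[] }>,
  perPage = 100,
): Promise<T[]> {
  const all: T[] = [];
  for (let page = 1; ; page++) {
    const { items } = await fetchPage(page);
    all.push(...items);
    if (items.length < perPage) break; // a short page means we're done
  }
  return all;
}
```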
  hasDetails(url: string): boolean {
-   return this.details.has(url)
+   return this.details.has(url) || getCachedPRDetails(url) !== null;
Revalidate persisted PR details before skipping fetches
Treating any DB hit as a permanent cache hit means details are never refreshed once stored. In ls, entries with hasDetails(url) are excluded from network fetches, so CI status, approvals, unresolved threads, and conflicts can remain stale across sessions and misdrive attention sorting/lifecycle actions.
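One way to revalidate, as the comment suggests, is to attach a freshness check to DB hits so stale rows count as misses and trigger a refetch. The 15-minute TTL below is an arbitrary assumption, not a value from the PR.

```typescript
const DETAILS_TTL_MS = 15 * 60 * 1000; // 15 minutes; threshold is assumed

// A cached row counts as a hit only while it is younger than the TTL.
function isFresh(updatedAt: string, now: number = Date.now()): boolean {
  const age = now - new Date(updatedAt).getTime();
  return age >= 0 && age < DETAILS_TTL_MS;
}
```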
Pull request overview
This PR adds persistent caching and a new split command while refactoring GitHub integration away from heavy `gh` CLI usage toward direct REST/GraphQL calls authenticated via a token obtained from `gh auth token`.
Changes:
- Introduces a SQLite-backed cache layer for PR lists/details/panel data and wires it into `ls`.
- Refactors `src/lib/github.ts` to use direct GitHub REST/GraphQL requests (token sourced from `gh`).
- Adds split-state persistence + a new `raft split` UI command, plus UI improvements (panel skeletons, file intent grouping, memoized PR rows).
Reviewed changes
Copilot reviewed 17 out of 17 changed files in this pull request and generated 10 comments.
Show a summary per file
| File | Description |
|---|---|
| src/lib/split-state.ts | New split state model + read/write + topology formatting. |
| src/lib/github.ts | Replaces many gh CLI calls with REST/GraphQL fetch helpers; adds batched details fetching. |
| src/lib/diff-utils.ts | Adds file “intent” classification used for grouping in the files panel. |
| src/lib/db.ts | New SQLite persistence for PR caching (PR list/details/panel data). |
| src/lib/cache.ts | Extends in-memory cache with SQLite read-through/write-through behavior. |
| src/lib/auth.ts | Adds getGithubToken() using gh auth token via safeSpawn/buildCleanEnv. |
| src/lib/__tests__/github.test.ts | Updates parseSearchResults tests to match REST search schema. |
| src/lib/__tests__/cache.test.ts | Tweaks cache test keys to avoid collisions with persistent storage. |
| src/index.tsx | Adds split command routing and help text. |
| src/hooks/usePanel.ts | Tightens types for state setters. |
| src/components/skeleton.tsx | Adds a per-tab panel skeleton. |
| src/components/preview-panel.tsx | Displays skeleton while loading panel content. |
| src/components/pr-table.tsx | Memoizes PR row rendering. |
| src/components/panel-files.tsx | Groups files by intent and adds boilerplate toggling via keyboard. |
| src/commands/split.tsx | New command to visualize split topology and open/copy PR URLs. |
| src/commands/ls.tsx | Uses cached PRs, deferred search, batched details fetch, and revised sorting/grouping. |
| src/__tests__/integration.test.ts | Updates target repo in integration tests. |
Comments suppressed due to low confidence (1)
src/commands/ls.tsx:247
- selectedPR is derived from sortedPRs, but usePanel() is given filteredPRs along with selectedIndex. usePanel assumes selectedIndex is an index into the provided allPRs array (used for neighbor prefetching), so this will prefetch the wrong PRs once sorting is applied. Pass sortedPRs instead so indices align.
const selectedPR = sortedPRs[selectedIndex] ?? null
// Panel state management (shared hook handles data fetching + caching + prefetching)
const panel = usePanel(selectedPR, filteredPRs, selectedIndex)
const { panelOpen, panelTab, splitRatio, panelFullscreen, panelData, panelLoading,
Actionable comments posted: 9
🧹 Nitpick comments (3)
src/components/skeleton.tsx (1)
58-60: Use `PanelTab` instead of `string` to keep tab handling type-safe and exhaustive.

`PreviewPanel` already passes a `PanelTab`; keeping this prop as `string` allows invalid values and falls back to silent `null` rendering.
PreviewPanelalready passes aPanelTab; keeping this prop asstringallows invalid values and falls back to silentnullrendering.♻️ Proposed refactor
 import { useState, useEffect } from "react"
+import type { PanelTab } from "../lib/types"

-export function PanelSkeleton({ tab, width, height }: { tab: string; width: number; height: number }) {
+export function PanelSkeleton({ tab, width, height }: { tab: PanelTab; width: number; height: number }) {

Also applies to: 119-120
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/skeleton.tsx` around lines 58 - 60, PanelSkeleton currently types its tab prop as string which permits invalid values; change the prop type to the PanelTab union (use PanelTab instead of string) and update any other skeleton components that accept tab (the other occurrence around the component at the end of the file) so callers like PreviewPanel can only pass valid tabs; also make the tab switch/exhaustive handling within PanelSkeleton use a switch over PanelTab (or a discriminated mapping) so TypeScript will catch missing cases rather than silently rendering null.src/lib/split-state.ts (1)
28-42: File I/O uses Bun APIs correctly per coding guidelines.
`Bun.file` and `Bun.write` are used appropriately. The `readSplitState` function gracefully handles errors by returning `null`.
writeSplitStateis synchronous butBun.writereturns aPromise. This means write errors won't be caught by callers.🔧 Consider making writeSplitState async to handle errors
-export function writeSplitState(repoRoot: string, state: SplitState): void { +export async function writeSplitState(repoRoot: string, state: SplitState): Promise<void> { const path = `${repoRoot}/.raft-split.json` - Bun.write(path, JSON.stringify(state, null, 2) + "\n") + await Bun.write(path, JSON.stringify(state, null, 2) + "\n") }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/split-state.ts` around lines 28 - 42, The writeSplitState function is currently synchronous but calls Bun.write (which returns a Promise), so failures aren't propagated; change writeSplitState to be async, await Bun.write(...) and either let errors propagate (remove silent failures) or catch and rethrow/log as appropriate so callers can handle I/O errors; update the function signature writeSplitState(repoRoot: string, state: SplitState): Promise<void> and ensure callers are updated to await the call.src/lib/diff-utils.ts (1)
37-84: File intent classification is well-structured, with a few edge case considerations.

The classification logic covers common patterns effectively. A few observations:
Line 50:
lower.includes("generated")may be overly broad—it would match files likeuser-generated-content.ts. Consider using path segment matching likelower.includes("/generated/")orlower.startsWith("generated/").Lines 77-78: The checks
lower === "dockerfile"andlower === "makefile"only match if the entire lowercased filename equals these strings. This works for root-level files but won't match paths likedocker/Dockerfile. Consider usinglower.endsWith("/dockerfile")or checking just the basename.These are edge cases that may not affect typical usage, so flagging as optional refinement.
🔧 Suggested refinement for edge cases
-    lower.includes("generated")
+    lower.includes("/generated/") ||
+    lower.includes("/generated.") ||
+    lower.startsWith("generated/") ||
+    lower.startsWith("generated.")
   ) {
     return "Boilerplate"
   }

-    lower === "dockerfile" ||
-    lower === "makefile"
+    lower.endsWith("/dockerfile") || lower === "dockerfile" ||
+    lower.endsWith("/makefile") || lower === "makefile"
Verify each finding against the current code and only fix it if needed. In `@src/lib/diff-utils.ts` around lines 37 - 84, getFileIntent's current checks are too broad for "generated" and too strict for Dockerfile/Makefile; update the logic in getFileIntent so the "generated" test matches path segments (e.g., look for "/generated/" or filenames starting with "generated/") instead of any substring via lower.includes("generated"), and change the docker/make checks (currently lower === "dockerfile" and lower === "makefile") to detect basenames or path endings (e.g., use a basename check or lower.endsWith("/dockerfile") / lower.endsWith("/makefile")) so files like "docker/Dockerfile" are classified as "Config".
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/skeleton.tsx`:
- Around line 58-68: Computed skeleton widths/heights in PanelSkeleton and
related SkeletonBlock usages can be negative (e.g., Math.floor(width * ratio)
and repeat(Math.min(width - 2, 40))), causing runtime RangeError; clamp all
computed sizes with Math.max(0, ...) (or Math.min/Math.max to enforce upper
bounds) before using them in repeat() or as width/height props so values never
go below 0 (apply to PanelSkeleton, SkeletonBlock calls, and any
repeat(Math.min(width - 2, 40)) sites).
In `@src/hooks/usePanel.ts`:
- Around line 27-28: The file uses React.Dispatch and React.SetStateAction for
the setSplitRatio and setPanelFullscreen types but React is not imported; update
the top of usePanel.ts to import the types directly (import { Dispatch,
SetStateAction } from 'react') and change the declarations to use
Dispatch<SetStateAction<number>> for setSplitRatio and
Dispatch<SetStateAction<boolean>> for setPanelFullscreen so the types resolve
correctly.
In `@src/index.tsx`:
- Around line 214-216: The SplitCommand is ignoring the repo prop passed from
the CLI (config.repoFilter) because it always calls getRepoRoot() internally;
update SplitCommand to accept and use the repo prop when provided (e.g., prefer
the incoming prop over calling getRepoRoot()), or if you intend to disallow CLI
repo selection, remove repoFilter from the split command wiring in src/index.tsx
and update docs; specifically modify the SplitCommand component (and any
functions inside it that call getRepoRoot()) to use the passed-in repo prop when
non-empty, or remove the prop from the caller and related config so there is no
unused prop.
In `@src/lib/auth.ts`:
- Around line 3-20: The getGithubToken function races on cold start because
cachedToken is only set after safeSpawn completes; change it to guard concurrent
callers by adding a shared in-flight promise (e.g., pendingTokenPromise)
alongside cachedToken so the first caller kicks off safeSpawn and stores the
promise, subsequent callers await that promise instead of spawning new
processes, and once resolved set cachedToken and clear pendingTokenPromise;
update getGithubToken to return cachedToken immediately if present, otherwise
await the pending promise or create it by calling safeSpawn with
buildCleanEnv(), handle errors by rejecting/clearing the pending promise so
future calls can retry, and reference cachedToken and getGithubToken (and the
safeSpawn call) when making the change.
In `@src/lib/db.ts`:
- Around line 38-41: getCachedPRs currently returns all rows regardless of age;
change its SQL to filter out entries whose updated_at is older than the
configured TTL (same predicate used in getCachedPRPanelData). Update the
db.query call in getCachedPRs to include a WHERE clause comparing updated_at
against the TTL cutoff (using the same TTL constant/function used by
getCachedPRPanelData), then parse the remaining rows as before; repeat the same
TTL filtering approach for getCachedPRPanelData to ensure cached panel payloads
expire correctly.
- Around line 7-12: The module currently opens ~/.config/raft/raft.sqlite on
import (configDir, dbPath, db) which leaks state into tests; change it to avoid
immediate file-backed DB creation by making DB initialization configurable and
lazy: export a function like initDb(path?: string) or getDb() that creates and
returns the Database instance instead of constructing db at module top-level,
and prefer an in-memory DB for tests by checking process.env.NODE_ENV === 'test'
or a configurable env var (e.g. RAFT_DB_PATH) so tests can pass ':memory:' or a
temp path; keep the existing exported symbol name (db) behavior by setting it
when initDb is called to preserve call sites.
In `@src/lib/github.ts`:
- Around line 84-127: fetchAllAccountPRs, fetchOpenPRs and fetchRepoPRs
currently only request a single page (per_page=100) and thus miss additional
PRs; change each to paginate like fetchPRPanelData: add a page counter, loop
requests appending results from each page (use &page=${page}&per_page=100 or add
page param to the query), stop when the returned items length is less than 100
(or zero), and return the concatenated list; for fetchRepoPRs map each page's
items into the PullRequest shape and concatenate before returning, preserving
the existing error handling.
- Around line 137-144: findStackComment and the comment-fetching in
fetchPRPanelData currently call fetchGh without pagination and therefore only
get the first page of comments; update both to paginate (like the existing files
pagination) by looping page numbers, calling fetchGh with query params page and
per_page, accumulating results for issueComments and codeComments until an empty
page is returned, then search the accumulated comments for STACK_COMMENT_MARKER
in findStackComment (returning c.id) and use the full accumulated lists in
fetchPRPanelData so no comments are missed on busy PRs.
- Around line 7-40: fetchGh and fetchGhGraphql currently call fetch() with no
timeout and can hang; add a 30s timeout using AbortController: create an
AbortController, set a setTimeout to call controller.abort() after 30000ms, pass
controller.signal into the fetch call (for fetchGh include it in the options
spread and for fetchGhGraphql add signal to the request init), ensure the
timeout is cleared (clearTimeout) after fetch resolves or errors, and throw or
let the abort error propagate so callers can handle timeouts; reference
functions fetchGh and fetchGhGraphql and preserve existing header/response
handling and getGithubToken usage.
---
Nitpick comments:
In `@src/components/skeleton.tsx`:
- Around line 58-60: PanelSkeleton currently types its tab prop as string which
permits invalid values; change the prop type to the PanelTab union (use PanelTab
instead of string) and update any other skeleton components that accept tab (the
other occurrence around the component at the end of the file) so callers like
PreviewPanel can only pass valid tabs; also make the tab switch/exhaustive
handling within PanelSkeleton use a switch over PanelTab (or a discriminated
mapping) so TypeScript will catch missing cases rather than silently rendering
null.
In `@src/lib/diff-utils.ts`:
- Around line 37-84: getFileIntent's current checks are too broad for
"generated" and too strict for Dockerfile/Makefile; update the logic in
getFileIntent so the "generated" test matches path segments (e.g., look for
"/generated/" or filenames starting with "generated/") instead of any substring
via lower.includes("generated"), and change the docker/make checks (currently
lower === "dockerfile" and lower === "makefile") to detect basenames or path
endings (e.g., use a basename check or lower.endsWith("/dockerfile") /
lower.endsWith("/makefile")) so files like "docker/Dockerfile" are classified as
"Config".
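A path-segment-aware classifier along these lines would address both findings. Sketch only — `classifyFileIntent` is a hypothetical stand-in; the real `getFileIntent` in `src/lib/diff-utils.ts` covers more categories:

```typescript
// Hypothetical sketch of segment-based intent classification.
export function classifyFileIntent(path: string): string {
  const lower = path.toLowerCase()
  const base = lower.split("/").pop() ?? lower
  // Match "generated" only as a path segment or filename prefix,
  // not as an arbitrary substring (so "degenerated.ts" stays Source).
  if (
    lower.includes("/generated/") ||
    lower.startsWith("generated/") ||
    base.startsWith("generated.")
  ) {
    return "Boilerplate"
  }
  // Compare the basename so nested files like "docker/Dockerfile" match.
  if (base === "dockerfile" || base === "makefile") {
    return "Config"
  }
  return "Source"
}
```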
In `@src/lib/split-state.ts`:
- Around line 28-42: The writeSplitState function is currently synchronous but
calls Bun.write (which returns a Promise), so failures aren't propagated; change
writeSplitState to be async, await Bun.write(...) and either let errors
propagate (remove silent failures) or catch and rethrow/log as appropriate so
callers can handle I/O errors; update the function signature
writeSplitState(repoRoot: string, state: SplitState): Promise<void> and ensure
callers are updated to await the call.
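The async rewrite can be sketched with an injected writer (illustrative — the real `writeSplitState` calls `Bun.write` on the split-state file path). Awaiting the write lets I/O failures reach the caller instead of vanishing:

```typescript
// Hypothetical async write helper; the writer is injected here for
// illustration only. Awaiting it propagates disk errors to the caller.
export async function writeStateFile(
  path: string,
  state: unknown,
  writeFile: (path: string, data: string) => Promise<unknown>,
): Promise<void> {
  await writeFile(path, JSON.stringify(state, null, 2))
}
```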
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0f18be0e-5b2d-48af-8755-d422f18944ae
📒 Files selected for processing (17)
src/__tests__/integration.test.ts, src/commands/ls.tsx, src/commands/split.tsx, src/components/panel-files.tsx, src/components/pr-table.tsx, src/components/preview-panel.tsx, src/components/skeleton.tsx, src/hooks/usePanel.ts, src/index.tsx, src/lib/__tests__/cache.test.ts, src/lib/__tests__/github.test.ts, src/lib/auth.ts, src/lib/cache.ts, src/lib/db.ts, src/lib/diff-utils.ts, src/lib/github.ts, src/lib/split-state.ts
export function PanelSkeleton({ tab, width, height }: { tab: string; width: number; height: number }) {
  if (tab === "body") {
    return (
      <box flexDirection="column" width={width} height={height} paddingX={1}>
        <box height={1} marginBottom={1}><SkeletonBlock width={Math.floor(width * 0.4)} color="#292e42" /></box>
        <box height={1}><SkeletonBlock width={Math.floor(width * 0.9)} color="#1f2335" /></box>
        <box height={1}><SkeletonBlock width={Math.floor(width * 0.85)} color="#1f2335" /></box>
        <box height={1} marginBottom={1}><SkeletonBlock width={Math.floor(width * 0.6)} color="#1f2335" /></box>
        <box height={1}><SkeletonBlock width={Math.floor(width * 0.95)} color="#1f2335" /></box>
        <box height={1}><SkeletonBlock width={Math.floor(width * 0.7)} color="#1f2335" /></box>
      </box>
Clamp computed skeleton widths/heights to avoid runtime repeat() crashes.
At Line 108 and Line 112, repeat(Math.min(width - 2, 40)) can become negative when width is narrow, causing RangeError. Same risk applies to block widths derived from Math.floor(width * ratio) when width is negative.
🐛 Proposed fix
export function PanelSkeleton({ tab, width, height }: { tab: string; width: number; height: number }) {
+ const safeWidth = Math.max(0, Math.floor(width))
+ const safeHeight = Math.max(1, Math.floor(height))
+ const blockWidth = (ratio: number) => Math.max(0, Math.floor(safeWidth * ratio))
+ const frameWidth = Math.max(0, Math.min(safeWidth - 2, 40))
+
if (tab === "body") {
return (
- <box flexDirection="column" width={width} height={height} paddingX={1}>
- <box height={1} marginBottom={1}><SkeletonBlock width={Math.floor(width * 0.4)} color="#292e42" /></box>
+ <box flexDirection="column" width={safeWidth} height={safeHeight} paddingX={1}>
+ <box height={1} marginBottom={1}><SkeletonBlock width={blockWidth(0.4)} color="#292e42" /></box>
...
</box>
)
}
...
- <box height={1}><text fg="#292e42">{"\u250C" + "\u2500".repeat(Math.min(width - 2, 40))}</text></box>
+ <box height={1}><text fg="#292e42">{"\u250C" + "\u2500".repeat(frameWidth)}</text></box>
...
- <box height={1}><text fg="#292e42">{"\u2514" + "\u2500".repeat(Math.min(width - 2, 40))}</text></box>
+ <box height={1}><text fg="#292e42">{"\u2514" + "\u2500".repeat(frameWidth)}</text></box>

Also applies to: 74-84, 89-99, 104-116
setSplitRatio: React.Dispatch<React.SetStateAction<number>>
setPanelFullscreen: React.Dispatch<React.SetStateAction<boolean>>
🧩 Analysis chain
🏁 Script executed:
cat -n src/hooks/usePanel.ts | head -60
Repository: OctavianTocan/raft
Length of output: 2818
Import the React state types before using React.Dispatch.
React is not in scope in this file, so these members will not type-check. Use named type imports (Dispatch, SetStateAction) from React.
💡 Suggested fix
-import { useState, useEffect, useRef, type MutableRefObject } from "react"
+import {
+ useState,
+ useEffect,
+ useRef,
+ type Dispatch,
+ type MutableRefObject,
+ type SetStateAction,
+} from "react"
@@
- setSplitRatio: React.Dispatch<React.SetStateAction<number>>
- setPanelFullscreen: React.Dispatch<React.SetStateAction<boolean>>
+ setSplitRatio: Dispatch<SetStateAction<number>>
+ setPanelFullscreen: Dispatch<SetStateAction<boolean>>📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| setSplitRatio: React.Dispatch<React.SetStateAction<number>> | |
| setPanelFullscreen: React.Dispatch<React.SetStateAction<boolean>> | |
| import { | |
| useState, | |
| useEffect, | |
| useRef, | |
| type Dispatch, | |
| type MutableRefObject, | |
| type SetStateAction, | |
| } from "react" | |
| // ... other code ... | |
| setSplitRatio: Dispatch<SetStateAction<number>> | |
| setPanelFullscreen: Dispatch<SetStateAction<boolean>> |
case "split":
  root.render(<SplitCommand repo={config.repoFilter} />)
  break
The repo prop passed to SplitCommand is unused.
Per the context snippets from src/commands/split.tsx:77-117 and src/commands/split.tsx:7-9, the SplitCommand component ignores the repo prop and instead calls getRepoRoot() internally to discover the repository path. This means the --repo= CLI argument (parsed at line 69) has no effect for the split command.
Consider either:
- Using the `repo` prop in `SplitCommand` when provided, or
- Removing `repoFilter` from the split command config and documenting that `--repo` is not supported for this command.
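The prop-preference logic is small. A hedged sketch (hypothetical `resolveRepoRoot` helper — the real `SplitCommand` currently calls `getRepoRoot()` directly and takes no such helper):

```typescript
// Prefer an explicitly passed repo path over auto-discovery.
// Illustrative only; `discover` stands in for the real getRepoRoot().
export function resolveRepoRoot(
  repoProp: string | undefined,
  discover: () => string,
): string {
  return repoProp && repoProp.trim() !== "" ? repoProp : discover()
}
```

An empty or missing prop falls through to discovery, so existing callers that pass nothing keep their current behavior.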
let cachedToken: string | null = null;

export async function getGithubToken(): Promise<string> {
  if (cachedToken) {
    return cachedToken;
  }

  // Use the gh CLI to get the OAuth token, avoiding any .env leaks via buildCleanEnv
  const { stdout, exitCode, stderr } = await safeSpawn(["gh", "auth", "token"], {
    env: buildCleanEnv(),
  });

  if (exitCode !== 0) {
    throw new Error(`Failed to get GitHub token from gh CLI. Ensure you are logged in via 'gh auth login'. Error: ${stderr}`);
  }

  cachedToken = stdout.trim();
  return cachedToken;
Deduplicate the first token lookup across concurrent callers.
cachedToken is only populated after safeSpawn() finishes, so the initial burst of fetchGh()/fetchGhGraphql() calls will each run gh auth token. fetchPRPanelData() alone fans out four authenticated requests, so cold starts still pay multiple subprocess launches.
💡 Suggested fix
let cachedToken: string | null = null;
+let cachedTokenPromise: Promise<string> | null = null;
export async function getGithubToken(): Promise<string> {
- if (cachedToken) {
- return cachedToken;
- }
+ if (cachedToken) return cachedToken;
+ if (cachedTokenPromise) return cachedTokenPromise;
- // Use the gh CLI to get the OAuth token, avoiding any .env leaks via buildCleanEnv
- const { stdout, exitCode, stderr } = await safeSpawn(["gh", "auth", "token"], {
- env: buildCleanEnv(),
- });
-
- if (exitCode !== 0) {
- throw new Error(`Failed to get GitHub token from gh CLI. Ensure you are logged in via 'gh auth login'. Error: ${stderr}`);
- }
-
- cachedToken = stdout.trim();
- return cachedToken;
+ cachedTokenPromise = (async () => {
+ const { stdout, exitCode, stderr } = await safeSpawn(["gh", "auth", "token"], {
+ env: buildCleanEnv(),
+ });
+
+ if (exitCode !== 0) {
+ throw new Error(`Failed to get GitHub token from gh CLI. Ensure you are logged in via 'gh auth login'. Error: ${stderr}`);
+ }
+
+ cachedToken = stdout.trim();
+ return cachedToken;
+ })();
+
+ try {
+ return await cachedTokenPromise;
+ } finally {
+ cachedTokenPromise = null;
+ }
}
// Initialize database in ~/.config/raft/raft.sqlite
const configDir = join(homedir(), ".config", "raft");
mkdirSync(configDir, { recursive: true });

const dbPath = join(configDir, "raft.sqlite");
export const db = new Database(dbPath);
Don't bind tests to the user's real cache DB.
This module opens ~/.config/raft/raft.sqlite on import, so test runs share and mutate the same persistent store. The timestamp key in src/lib/__tests__/cache.test.ts is already compensating for leaked rows instead of getting a clean database.
💡 Suggested fix
-import { join } from "node:path";
+import { dirname, join } from "node:path";
@@
-const configDir = join(homedir(), ".config", "raft");
-mkdirSync(configDir, { recursive: true });
-
-const dbPath = join(configDir, "raft.sqlite");
+const dbPath = process.env.RAFT_DB_PATH ?? join(homedir(), ".config", "raft", "raft.sqlite");
+mkdirSync(dirname(dbPath), { recursive: true });
export const db = new Database(dbPath);
export function getCachedPRs(): PullRequest[] {
  const query = db.query(`SELECT data FROM pull_requests`);
  return query.all().map((row: any) => JSON.parse(row.data));
}
Use updated_at to expire persisted cache entries.
All three getters return rows regardless of age, so PR details and panel payloads can survive across restarts indefinitely. Because usePanel() short-circuits on any panel-data hit, stale comments/files can look fresh forever.
💡 Suggested fix
+const CACHE_TTL_MINUTES = 15;
+
export function getCachedPRDetails(url: string): PRDetails | null {
- const query = db.query(`SELECT data FROM pr_details WHERE url = ?`);
+ const query = db.query(`
+ SELECT data
+ FROM pr_details
+ WHERE url = ?
+ AND updated_at >= datetime('now', '-${CACHE_TTL_MINUTES} minutes')
+ `);
const row = query.get(url) as any;
if (!row) return null;
return JSON.parse(row.data);
}

Apply the same TTL predicate to getCachedPRs() and getCachedPRPanelData().
Also applies to: 59-78
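The TTL predicate can be mirrored in application code when deciding whether a cached row is still usable. Illustrative helper only — the actual fix belongs in the SQL `WHERE` clause of `getCachedPRs`/`getCachedPRPanelData`, matching `updated_at >= datetime('now', '-15 minutes')`:

```typescript
// Sketch of the same 15-minute TTL check in TypeScript. The `nowMs`
// parameter is injected to keep the function deterministic for testing.
const CACHE_TTL_MINUTES = 15

export function isFresh(updatedAtIso: string, nowMs: number = Date.now()): boolean {
  const ageMs = nowMs - Date.parse(updatedAtIso)
  return ageMs <= CACHE_TTL_MINUTES * 60_000
}
```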
export async function fetchGh(endpoint: string, options: RequestInit = {}): Promise<any> {
  const token = await getGithubToken();
  const url = endpoint.startsWith("http") ? endpoint : `https://api.github.com/${endpoint.replace(/^\//, "")}`;

  const headers = new Headers(options.headers);
  headers.set("Authorization", `Bearer ${token}`);
  if (!headers.has("Accept")) {
    headers.set("Accept", "application/vnd.github.v3+json");
  }

  const response = await fetch(url, { ...options, headers });

  if (!response.ok) {
    const text = await response.text();
    throw new Error(`GitHub API error ${response.status}: ${text}`);
  }

  if (response.status === 204) {
    return {};
  }

  return response.json();
}

export async function fetchGhGraphql(query: string, variables: any = {}): Promise<any> {
  const token = await getGithubToken();
  const response = await fetch("https://api.github.com/graphql", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query, variables })
  });
🧩 Analysis chain
🏁 Script executed:
# Check if safeSpawn in process.ts has a 30s timeout
head -100 src/lib/process.ts | tail -60
Repository: OctavianTocan/raft
Length of output: 2003
🏁 Script executed:
# Check the full github.ts file to see actual implementation
cat -n src/lib/github.ts
Repository: OctavianTocan/raft
Length of output: 17129
🏁 Script executed:
# Search for any existing timeout handling in github.ts
rg -n "timeout|abort|AbortController" src/lib/github.ts
Repository: OctavianTocan/raft
Length of output: 44
🏁 Script executed:
# Find all callers of fetchGh and fetchGhGraphql
rg -n "fetchGh|fetchGhGraphql" --type ts --type tsx -B 2 -A 2
Repository: OctavianTocan/raft
Length of output: 89
🏁 Script executed:
rg "fetchGh|fetchGhGraphql" --type ts -l
Repository: OctavianTocan/raft
Length of output: 79
🏁 Script executed:
# Search across all files for callers
rg "fetchGh\(|fetchGhGraphql\(" -B 1 -A 1 | head -50
Repository: OctavianTocan/raft
Length of output: 2662
🏁 Script executed:
# Check if there are any other timeout mechanisms or patterns in the codebase
rg "AbortController|timeout|signal.*abort" --type ts
Repository: OctavianTocan/raft
Length of output: 601
Add timeout protection to GitHub API fetch helpers.
safeSpawn() enforces a 30s timeout to prevent hung subprocesses from blocking the CLI. The new fetchGh() and fetchGhGraphql() wrappers lack any timeout mechanism—raw fetch() calls will hang indefinitely if the GitHub API becomes unresponsive. Since all REST and GraphQL operations funnel through these shared wrappers, a stalled connection will block the entire CLI.
Suggested fix
+const GITHUB_REQUEST_TIMEOUT_MS = 30_000;
+
+async function fetchWithTimeout(url: string, options: RequestInit): Promise<Response> {
+ const controller = new AbortController();
+ const timeout = setTimeout(() => controller.abort(), GITHUB_REQUEST_TIMEOUT_MS);
+ try {
+ return await fetch(url, { ...options, signal: options.signal ?? controller.signal });
+ } finally {
+ clearTimeout(timeout);
+ }
+}
+
export async function fetchGh(endpoint: string, options: RequestInit = {}): Promise<any> {
const token = await getGithubToken();
const url = endpoint.startsWith("http") ? endpoint : `https://api.github.com/${endpoint.replace(/^\//, "")}`;
@@
- const response = await fetch(url, { ...options, headers });
+ const response = await fetchWithTimeout(url, { ...options, headers });
@@
export async function fetchGhGraphql(query: string, variables: any = {}): Promise<any> {
const token = await getGithubToken();
- const response = await fetch("https://api.github.com/graphql", {
+ const response = await fetchWithTimeout("https://api.github.com/graphql", {
method: "POST",
headers: {
"Authorization": `Bearer ${token}`,
"Content-Type": "application/json",
},
body: JSON.stringify({ query, variables })
});
export async function fetchAllAccountPRs(
  onProgress?: (status: string) => void,
): Promise<PullRequest[]> {
-  onProgress?.("Discovering accounts...")
-  const accounts = await getGhAccounts()
-  const originalAccount = await getActiveAccount()
-  const allPRs: PullRequest[] = []
-  const seen = new Set<string>()
-
-  for (let i = 0; i < accounts.length; i++) {
-    const account = accounts[i]
-    onProgress?.(`Fetching PRs for ${account} (${i + 1}/${accounts.length})...`)
-    if (accounts.length > 1) {
-      try { await switchAccount(account) } catch { continue }
-    }
-    try {
-      const json = await runGh([
-        "search", "prs",
-        "--author=@me",
-        "--state=open",
-        "--limit=100",
-        "--json", "number,title,url,body,state,repository,isDraft,createdAt",
-      ])
-      if (json) {
-        const parsed = parseSearchResults(json)
-        for (const pr of parsed) {
-          if (!seen.has(pr.url)) {
-            seen.add(pr.url)
-            allPRs.push(pr)
-          }
-        }
-        onProgress?.(`Found ${allPRs.length} PRs so far...`)
-      }
-    } catch { /* skip account if query fails */ }
-  }
-
-  onProgress?.(`Loaded ${allPRs.length} PRs across ${accounts.length} accounts`)
-
-  // Restore original account
-  if (accounts.length > 1 && originalAccount) {
-    try { await switchAccount(originalAccount) } catch { /* ignore */ }
-  }
-
-  return allPRs
+  onProgress?.("Fetching PRs...");
+  const json = await fetchGh("search/issues?q=is:pr+is:open+author:@me&per_page=100");
+  return parseSearchResults(json.items);
}
/**
 * Fetch open pull requests for a specific author or all accessible repositories.
 *
 * - If `author` is undefined: fetches PRs across all authenticated accounts via `@me`.
 * - If `author` is empty string: fetches all open PRs accessible to the current account.
 * - If `author` is provided: fetches PRs by that specific author.
 *
 * @param author - GitHub username or empty string for all repos, undefined for @me across accounts
 * @param onProgress - Optional callback for progress status messages
 * @returns Array of open pull requests
 */
export async function fetchOpenPRs(
  author?: string,
  onProgress?: (status: string) => void,
): Promise<PullRequest[]> {
  if (author === "") {
    // Empty string means fetch all PRs across all repos the user has access to
-    onProgress?.("Fetching all open PRs...")
-    const json = await runGh([
-      "search", "prs",
-      "--state=open",
-      "--limit=1000",
-      "--json", "number,title,url,body,state,repository,isDraft,createdAt,author",
-    ])
-    if (!json) return []
-    return parseSearchResults(json)
+    onProgress?.("Fetching all open PRs...");
+    const json = await fetchGh("search/issues?q=is:pr+is:open&per_page=100");
+    return parseSearchResults(json.items);
  }
  if (author) {
-    onProgress?.(`Fetching PRs for ${author}...`)
-    const json = await runGh([
-      "search", "prs",
-      `--author=${author}`,
-      "--state=open",
-      "--limit=100",
-      "--json", "number,title,url,body,state,repository,isDraft,createdAt,author",
-    ])
-    if (!json) return []
-    return parseSearchResults(json)
+    onProgress?.(`Fetching PRs for ${author}...`);
+    const json = await fetchGh(`search/issues?q=is:pr+is:open+author:${author}&per_page=100`);
+    return parseSearchResults(json.items);
  }
-  return fetchAllAccountPRs(onProgress)
+  return fetchAllAccountPRs(onProgress);
}
/** Try fetching repo PRs, attempting each account if needed. */
export async function fetchRepoPRs(repo: string): Promise<PullRequest[]> {
-  const accounts = await getGhAccounts()
-  const originalAccount = await getActiveAccount()
-
-  for (const account of accounts) {
-    if (accounts.length > 1) {
-      try { await switchAccount(account) } catch { continue }
-    }
-    try {
-      const json = await runGh([
-        "pr", "list",
-        "--repo", repo,
-        "--state=open",
-        "--limit=100",
-        "--json", "number,title,url,body,state,isDraft,headRefName,baseRefName,createdAt",
-      ])
-      // Restore original account before returning
-      if (accounts.length > 1 && originalAccount) {
-        try { await switchAccount(originalAccount) } catch { /* ignore */ }
-      }
-      if (!json) return []
-      const raw = JSON.parse(json) as Array<{
-        number: number
-        title: string
-        url: string
-        body: string
-        state: string
-        isDraft: boolean
-        headRefName: string
-        baseRefName: string
-        createdAt: string
-      }>
-      return raw.map((pr) => ({
-        number: pr.number,
-        title: pr.title,
-        url: `https://github.com/${repo}/pull/${pr.number}`,
-        body: (pr.body ?? "").split("\n")[0]?.slice(0, 80) ?? "",
-        state: pr.state,
-        isDraft: pr.isDraft,
-        repo,
-        headRefName: pr.headRefName,
-        baseRefName: pr.baseRefName,
-        createdAt: pr.createdAt,
-      }))
-    } catch {
-      // This account can't access the repo, try next
-      continue
-    }
-  }
-
-  // Restore original account
-  if (accounts.length > 1 && originalAccount) {
-    try { await switchAccount(originalAccount) } catch { /* ignore */ }
-  }
+  try {
+    const prs = await fetchGh(`repos/${repo}/pulls?state=open&per_page=100`);
+    return prs.map((pr: any) => ({
+      number: pr.number,
+      title: pr.title,
+      url: pr.html_url,
+      body: (pr.body ?? "").split("\n")[0]?.slice(0, 80) ?? "",
+      state: pr.state,
+      isDraft: pr.draft || false,
+      repo,
+      headRefName: pr.head.ref,
+      baseRefName: pr.base.ref,
+      createdAt: pr.created_at,
+    }));
+  } catch {
+    return [];
+  }
-
-  return []
}
Implement pagination for PR fetches to handle repos with >100 open PRs.
The fetchAllAccountPRs, fetchOpenPRs, and fetchRepoPRs functions only fetch the first page (100 results) and stop, silently missing PRs in active repositories. Implement pagination following the pattern used in fetchPRPanelData (lines 286–296): loop with page parameter, concatenate results, and break when the response has fewer than 100 items.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/github.ts` around lines 84 - 127, fetchAllAccountPRs, fetchOpenPRs
and fetchRepoPRs currently only request a single page (per_page=100) and thus
miss additional PRs; change each to paginate like fetchPRPanelData: add a page
counter, loop requests appending results from each page (use
&page=${page}&per_page=100 or add page param to the query), stop when the
returned items length is less than 100 (or zero), and return the concatenated
list; for fetchRepoPRs map each page's items into the PullRequest shape and
concatenate before returning, preserving the existing error handling.
export async function findStackComment(repo: string, prNumber: number): Promise<number | null> {
-  const json = await runGh([
-    "api",
-    `repos/${repo}/issues/${prNumber}/comments`,
-    "--jq", `.[] | select(.body | contains("${STACK_COMMENT_MARKER}")) | .id`,
-  ])
-  if (!json) return null
-  const id = parseInt(json.split("\n")[0], 10)
-  return isNaN(id) ? null : id
+  const comments = await fetchGh(`repos/${repo}/issues/${prNumber}/comments`);
+  for (const c of comments) {
+    if (c.body && c.body.includes(STACK_COMMENT_MARKER)) {
+      return c.id;
+    }
+  }
+  return null;
Paginate comment reads to avoid duplicates and incomplete data on busy PRs.
findStackComment() (line 138) and fetchPRPanelData() (lines 281-282) fetch issue and pull request comments without pagination parameters. Since GitHub's default page size is 30 items, these functions will only retrieve the first page of comments. On busy PRs with many comments, this leads to:
- Missed stack comments (allowing duplicate stack comments to be created)
- Incomplete panel discussions (review comments beyond the first page are lost)
Note that fetchPRPanelData() already implements pagination for files (lines 290-295); apply the same pattern to issueComments and codeComments fetches.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/github.ts` around lines 137 - 144, findStackComment and the
comment-fetching in fetchPRPanelData currently call fetchGh without pagination
and therefore only get the first page of comments; update both to paginate (like
the existing files pagination) by looping page numbers, calling fetchGh with
query params page and per_page, accumulating results for issueComments and
codeComments until an empty page is returned, then search the accumulated
comments for STACK_COMMENT_MARKER in findStackComment (returning c.id) and use
the full accumulated lists in fetchPRPanelData so no comments are missed on busy
PRs.
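The comment-pagination fix the prompt describes might look like the sketch below. It is illustrative only — `fetchAllComments` and `findMarkerComment` are hypothetical names, and the page callback stands in for `fetchGh` with `?page=${page}&per_page=100`:

```typescript
// Minimal comment shape for this sketch; GitHub's real payload has more fields.
interface IssueComment { id: number; body?: string }

// Hypothetical sketch of the suggested fix: accumulate comment pages until an
// empty page is returned, so busy PRs are read in full.
async function fetchAllComments(
  fetchPage: (page: number) => Promise<IssueComment[]>,
): Promise<IssueComment[]> {
  const all: IssueComment[] = [];
  for (let page = 1; ; page++) {
    const batch = await fetchPage(page);
    if (batch.length === 0) break; // empty page: no more comments
    all.push(...batch);
  }
  return all;
}

// findStackComment would then scan the full accumulated list, not just page one.
function findMarkerComment(comments: IssueComment[], marker: string): number | null {
  const hit = comments.find((c) => c.body?.includes(marker));
  return hit ? hit.id : null;
}
```

With the full list in hand, the stack-comment marker cannot be missed past page one, which prevents duplicate stack comments from being created.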
Summary
- Boilerplate files (yarn.lock, generated types, etc.) are hidden by default.
- `Shift+B` keybind to expand hidden boilerplate files.
- Removed the `collapseDiffRegions` code, which was incompatible with OpenTUI's diff parser.

Summary by Sourcery
Replace gh CLI–based GitHub integration with token-authenticated REST/GraphQL APIs, add semantic intent-based diff grouping with boilerplate folding, introduce persistent PR caching and split-branch inspection, and refine list/panel UX and performance.
New Features:
Bug Fixes:
Enhancements:
Tests:
Summary by CodeRabbit
Release Notes
New Features
Improvements