perf: rewrite gh CLI subprocesses to use native fetch and SQLite caching#4
OctavianTocan wants to merge 1 commit into `main`.
Conversation
- Replaced 140+ `gh` CLI subprocess calls with native `fetch` via the GitHub API
- Added in-memory OAuth token caching via `gh auth token`
- Added local SQLite persistence using `bun:sqlite` for zero-latency PR lists
- Optimized the React rendering cascade with `useDeferredValue` and `React.memo`
- Implemented `PanelSkeleton` for smooth, tab-aware loading transitions
- Batched GraphQL PR details queries into chunks to prevent rate limits
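The token caching can be sketched as follows; the memoized-promise shape and the injected `fetchToken` parameter are illustrative reconstructions, not the PR's actual implementation (which shells out to `gh auth token` once and stores the string):

```typescript
type TokenFetcher = () => Promise<string>;

// Memoize the in-flight promise so concurrent callers trigger at most one
// `gh auth token` subprocess; later calls reuse the cached result.
let tokenPromise: Promise<string> | null = null;

function getGithubToken(fetchToken: TokenFetcher): Promise<string> {
  if (tokenPromise === null) {
    tokenPromise = fetchToken();
  }
  return tokenPromise;
}

// Example: the fetcher runs once even across repeated calls.
let spawns = 0;
const fakeFetcher: TokenFetcher = async () => {
  spawns += 1;
  return "gho_example_token"; // stand-in for `gh auth token` stdout
};

const first = getGithubToken(fakeFetcher);
const second = getGithubToken(fakeFetcher);
```

Caching the promise (rather than the resolved string) also deduplicates concurrent first calls, which matters when several components request the token during the same render.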
### Reviewer's Guide

Replaces gh CLI subprocess usage with native GitHub REST/GraphQL fetch helpers and introduces SQLite-backed caching and React rendering optimizations for faster, smoother PR listing and previewing, plus a new split-branch visualization command.

Sequence diagram for `ls` command PR loading with SQLite and GitHub API:

sequenceDiagram
actor User
participant RaftCLI
participant LsCommand
participant DB as DBModule
participant Github as GithubAPI
participant Auth as GithubAuth
participant GhCLI
participant GitHubAPI
User->>RaftCLI: run raft ls
RaftCLI->>LsCommand: render LsCommand
rect rgb(240,240,240)
LsCommand->>DB: getCachedPRs()
DB-->>LsCommand: PullRequest[] cachedPRs
alt cachedPRs not empty
LsCommand->>LsCommand: setAllPRs(cachedPRs)
LsCommand->>LsCommand: setLoading(false)
end
end
LsCommand->>LsCommand: load() async effect
LsCommand->>Github: fetchOpenPRs(author, onProgress)
Github->>Github: fetchGh("search/issues...")
Github->>Auth: getGithubToken()
alt token not cached
Auth->>GhCLI: safeSpawn([gh auth token])
GhCLI-->>Auth: stdout token
Auth-->>Github: oauth token
else token cached
Auth-->>Github: cached token
end
Github->>GitHubAPI: HTTPS request search/issues
GitHubAPI-->>Github: JSON search results
Github-->>LsCommand: PullRequest[] results
LsCommand->>LsCommand: sort results
LsCommand->>DB: cachePRs(results)
DB-->>LsCommand: ok
LsCommand->>LsCommand: setAllPRs(results)
LsCommand->>LsCommand: setLoading(false)
loop for visible PRs
LsCommand->>Github: batchFetchPRDetails(pr slice)
Github->>GitHubAPI: GraphQL batch query
GitHubAPI-->>Github: PRDetails map
Github-->>LsCommand: Map url to PRDetails
LsCommand->>DB: cachePRDetails(url, details)
end
ER diagram for new SQLite PR caching schema:

erDiagram
PULL_REQUESTS {
string url PK
json data
datetime updated_at
}
PR_DETAILS {
string url PK
json data
datetime updated_at
}
PR_PANEL_DATA {
string url PK
json data
datetime updated_at
}
PULL_REQUESTS ||--|| PR_DETAILS : same_pr_url
PULL_REQUESTS ||--|| PR_PANEL_DATA : same_pr_url
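Assuming the three tables map directly onto this diagram, the DDL would look roughly like the following; the column affinities are an assumption (the real statements live in `src/lib/db.ts`, and `bun:sqlite` would store the JSON payloads as serialized TEXT):

```typescript
// Hypothetical DDL mirroring the ER diagram above; illustrative, not the
// repository's actual schema definition.
const TABLES = ["pull_requests", "pr_details", "pr_panel_data"];

const SCHEMA = TABLES.map(
  (name) => `CREATE TABLE IF NOT EXISTS ${name} (
  url TEXT PRIMARY KEY,            -- PR URL, shared key across all three tables
  data TEXT NOT NULL,              -- JSON.stringify'd payload
  updated_at TEXT DEFAULT CURRENT_TIMESTAMP
);`,
).join("\n");
```

Each statement would be applied once at startup, right after the `new Database(dbPath)` call.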
Class diagram for PRCache, DB helpers, and split state types:

classDiagram
class PRCache {
- Map~string, PRDetails~ details
- Map~string, PRPanelData~ panelData
+ PRCache()
+ getDetails(url string) PRDetails
+ setDetails(url string, data PRDetails) void
+ hasDetails(url string) boolean
+ getPanelData(url string) PRPanelData
+ setPanelData(url string, data PRPanelData) void
+ hasPanelData(url string) boolean
}
class DBModule {
+ db Database
+ getCachedPRs() PullRequest[]
+ cachePRs(prs PullRequest[]) void
+ getCachedPRDetails(url string) PRDetails
+ cachePRDetails(url string, details PRDetails) void
+ getCachedPRPanelData(url string) PRPanelData
+ cachePRPanelData(url string, panelData PRPanelData) void
}
class GithubAuth {
- cachedToken string
+ getGithubToken() Promise~string~
}
class GithubAPI {
+ fetchGh(endpoint string, options RequestInit) Promise~any~
+ fetchGhGraphql(query string, variables any) Promise~any~
+ parseSearchResults(items any[]) PullRequest[]
+ fetchAllAccountPRs(onProgress function) Promise~PullRequest[]~
+ fetchOpenPRs(author string, onProgress function) Promise~PullRequest[]~
+ fetchRepoPRs(repo string) Promise~PullRequest[]~
+ updatePRTitle(repo string, prNumber number, title string) Promise~void~
+ findStackComment(repo string, prNumber number) Promise~number~
+ upsertStackComment(repo string, prNumber number, body string) Promise~void~
+ getCurrentRepo() Promise~string~
+ batchFetchPRDetails(prs PRIdentifier[]) Promise~Map~string, PRDetails~~
+ fetchPRDetails(repo string, prNumber number) Promise~PRDetails~
+ fetchPRPanelData(repo string, prNumber number) Promise~PRPanelData~
+ submitPRReview(repo string, prNumber number, event ReviewEvent, body string) Promise~void~
+ replyToReviewComment(repo string, prNumber number, commentId number, body string) Promise~void~
+ postPRComment(repo string, prNumber number, body string) Promise~void~
+ fetchReviewThreads(repo string, prNumber number) Promise~ReviewThread[]~
+ resolveReviewThread(threadId string) Promise~void~
+ fetchCIStatus(repo string, ref string) Promise~CIReturnStatus~
+ fetchHasConflicts(repo string, prNumber number) Promise~boolean~
}
class PRIdentifier {
+ repo string
+ number number
+ url string
}
class SplitEntry {
+ number number
+ name string
+ branch string
+ files string[]
+ lines number
+ dependsOn number[]
+ prNumber number
+ prUrl string
+ status SplitEntryStatus
}
class SplitState {
+ version number
+ originalBranch string
+ targetBranch string
+ strategy string
+ createdAt string
+ status SplitPhase
+ topology string
+ splits SplitEntry[]
}
class SplitStateModule {
+ readSplitState(repoRoot string) Promise~SplitState~
+ writeSplitState(repoRoot string, state SplitState) void
+ formatSplitTopology(state SplitState) string[]
}
class SplitCommand {
+ repo string
+ SplitCommand(props SplitCommandProps)
}
class SplitCommandProps {
+ repo string
}
PRCache --> DBModule : uses
GithubAPI --> GithubAuth : uses
GithubAPI --> PRIdentifier : uses
SplitStateModule --> SplitState : manages
SplitState --> SplitEntry : contains
SplitCommand --> SplitStateModule : reads
SplitCommand --> SplitState : renders
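The chunked GraphQL batching that `batchFetchPRDetails` performs can be sketched generically. The `pr0`/`pr1` alias scheme and the selected fields are assumptions, and as the reviews note, production code should pass `owner`/`name` as GraphQL variables rather than interpolating them:

```typescript
interface PRIdentifier { repo: string; number: number; url: string }

// Split work into fixed-size chunks so one GraphQL request covers many PRs
// without tripping rate limits on large PR lists.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

// Build one aliased query per chunk (illustrative field selection).
function buildBatchQuery(prs: PRIdentifier[]): string {
  const fields = prs.map((pr, i) => {
    const [owner, name] = pr.repo.split("/");
    return `  pr${i}: repository(owner: "${owner}", name: "${name}") { pullRequest(number: ${pr.number}) { additions deletions } }`;
  });
  return `query {\n${fields.join("\n")}\n}`; // real newlines, not escaped "\\n"
}

const prs: PRIdentifier[] = Array.from({ length: 45 }, (_, i) => ({
  repo: "octocat/hello-world",
  number: i + 1,
  url: `https://github.com/octocat/hello-world/pull/${i + 1}`,
}));
const chunks = chunk(prs, 20); // 20 + 20 + 5
```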
### Walkthrough

This PR introduces a new split workflow viewer command, replaces GitHub CLI-based operations with direct HTTP/GraphQL API requests, adds a SQLite-backed caching layer for PR data and details, and enhances the PR list command with deferred filtering, multi-mode sorting, cache-backed loading, and batch detail fetching. GitHub token retrieval is now delegated to a dedicated auth module.

### Sequence Diagram(s)

sequenceDiagram
actor User
participant UI as LS Command UI
participant Cache as In-Memory Cache
participant DB as SQLite DB
participant GitHub as GitHub API
User->>UI: Load PR list
UI->>Cache: getCachedPRs()
alt Cache hit
Cache-->>UI: Return cached PRs
UI->>UI: Sort, filter, display
else Cache miss
UI->>DB: Query pull_requests table
alt DB has data
DB-->>UI: Return PR data
UI->>UI: Populate state, display
else DB empty
UI->>GitHub: fetchGh (PR search)
GitHub-->>UI: PR results
end
UI->>Cache: cachePRs(results)
Cache->>DB: INSERT/REPLACE rows
end
User->>UI: Select PR for details
UI->>Cache: getDetails(prUrl)
alt Details in cache
Cache-->>UI: Return PRDetails
else Not cached
UI->>DB: Query pr_details table
alt Found in DB
DB-->>UI: Return PRDetails
Cache->>Cache: Store in memory
else Not in DB
UI->>GitHub: batchFetchPRDetails(prs)
GitHub-->>UI: Return details map
UI->>Cache: setDetails(url, details)
Cache->>DB: INSERT/REPLACE row
end
end
UI->>UI: Render detail panel with deferred search
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Hey - I've found 3 issues, and left some high level feedback:
- In `batchFetchPRDetails` the GraphQL query is constructed via string interpolation of `owner`/`name` directly into the query string; consider switching to variables (as done in `fetchReviewThreads`) to avoid issues if repo names ever contain characters that need escaping, and to keep the query safer and easier to maintain.
- In `SplitCommand` the `repo` prop is accepted but never used, which can be confusing; either use it to constrain or label the split state, or remove it from the props to keep the API minimal.
- `SplitCommand` uses `Bun.spawn(["open", ...])` to launch PR URLs, which is macOS-specific; consider a cross-platform opener (`xdg-open`/`start`, or a small helper that picks the right command per platform) so the command works consistently on Linux and Windows.
## Individual Comments
### Comment 1
<location path="src/commands/ls.tsx" line_range="118" />
<code_context>
const [authorFilter, setAuthorFilter] = useState<string | null>(null)
+ const deferredSearchQuery = useDeferredValue(searchQuery)
+
const filteredPRs = useMemo(() => {
</code_context>
<issue_to_address>
**issue (bug_risk):** `useDeferredValue` is used but not imported in this module.
This hook isn’t imported in this file, so `deferredSearchQuery` will fail at runtime. Please add `useDeferredValue` to the React import (e.g. `import React, { ..., useDeferredValue } from "react"`).
</issue_to_address>
### Comment 2
<location path="src/lib/cache.ts" line_range="24-25" />
<code_context>
+ cachePRDetails(url, data);
}
hasDetails(url: string): boolean {
- return this.details.has(url)
+ return this.details.has(url) || getCachedPRDetails(url) !== null;
}
</code_context>
<issue_to_address>
**suggestion (performance):** `hasDetails` and `hasPanelData` now hit SQLite on every cache miss, which may be more expensive than intended.
Because the in-memory map is checked first, any cache miss will now trigger a SQLite lookup. If these methods run in hot render paths, that extra I/O could be significant. Depending on your access pattern, you could either rely solely on `this.details`/`this.panelData` (accepting possible stale false negatives) or add a simple negative-cache indicator so you don’t repeatedly hit SQLite for known-missing URLs.
Suggested implementation:
```typescript
export class PRCache {
// In-memory backing for immediate synchronous reads during renders
private details = new Map<string, PRDetails>()
private panelData = new Map<string, PRPanelData>()
// Negative cache to avoid repeated SQLite lookups for known-missing entries
private missingDetails = new Set<string>()
private missingPanelData = new Set<string>()
```
```typescript
getDetails(url: string): PRDetails | undefined {
// Fast in-memory hit
if (this.details.has(url)) return this.details.get(url);
// If we've already confirmed this URL is missing in the backing cache,
// don't hit SQLite again.
if (this.missingDetails.has(url)) return undefined;
const fromDb = getCachedPRDetails(url);
if (fromDb) {
// Populate in-memory cache and clear any negative-cache marker
this.details.set(url, fromDb);
this.missingDetails.delete(url);
return fromDb;
}
// Record a negative cache entry to avoid repeated SQLite lookups
this.missingDetails.add(url);
  return undefined;
}
```
```typescript
hasDetails(url: string): boolean {
// Only consult the in-memory map; avoid backing-store I/O on cache miss.
// This keeps hasDetails safe to call from hot render paths.
return this.details.has(url);
}
```
To fully align with the suggestion, similar negative-caching logic should be added for the panel data path:
1. Update `getPanelData(url: string)` to:
   - Return immediately if `this.panelData.has(url)`.
   - Return `undefined` if `this.missingPanelData.has(url)`.
   - On SQLite miss, add `url` to `this.missingPanelData`.
   - On SQLite hit, populate `this.panelData` and `delete` from `this.missingPanelData`.
2. Revert `hasPanelData(url: string)` to only check `this.panelData.has(url)` and not query SQLite, mirroring the updated `hasDetails` behavior.

Apply the same patterns as shown for `getDetails`/`hasDetails`, using `panelData`, `missingPanelData`, and the corresponding panel-data SQLite accessors.
</issue_to_address>
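The panel-data mirror of this suggestion can be sketched with an injected reader standing in for `getCachedPRPanelData`; the constructor injection is purely illustrative (the real class calls the SQLite helpers directly):

```typescript
interface PRPanelData { commentCount: number }

class PanelDataCache {
  private panelData = new Map<string, PRPanelData>();
  private missingPanelData = new Set<string>();

  constructor(private readDb: (url: string) => PRPanelData | null) {}

  getPanelData(url: string): PRPanelData | undefined {
    if (this.panelData.has(url)) return this.panelData.get(url);
    // Negative-cache hit: we already know the backing store has nothing.
    if (this.missingPanelData.has(url)) return undefined;
    const fromDb = this.readDb(url);
    if (fromDb) {
      this.panelData.set(url, fromDb); // repopulate memory from persistence
      this.missingPanelData.delete(url);
      return fromDb;
    }
    this.missingPanelData.add(url); // remember the miss
    return undefined;
  }

  setPanelData(url: string, data: PRPanelData): void {
    this.panelData.set(url, data);
    this.missingPanelData.delete(url);
  }

  // In-memory only, so it stays cheap in hot render paths.
  hasPanelData(url: string): boolean {
    return this.panelData.has(url);
  }
}

let dbReads = 0;
const cache = new PanelDataCache((url) => {
  dbReads += 1;
  return url === "hit" ? { commentCount: 2 } : null;
});
cache.getPanelData("miss");
cache.getPanelData("miss"); // second miss served by the negative cache
const found = cache.getPanelData("hit");
```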
### Comment 3
<location path="src/lib/__tests__/cache.test.ts" line_range="38-41" />
<code_context>
- expect(cache.hasDetails("x")).toBe(false)
- cache.setDetails("x", { additions: 0, deletions: 0, commentCount: 0, reviews: [], headRefName: "" })
- expect(cache.hasDetails("x")).toBe(true)
+ const uniqueKey = "test-unique-key-" + Date.now();
+ expect(cache.hasDetails(uniqueKey)).toBe(false)
+ cache.setDetails(uniqueKey, { additions: 0, deletions: 0, commentCount: 0, reviews: [], headRefName: "" })
+ expect(cache.hasDetails(uniqueKey)).toBe(true)
})
})
</code_context>
<issue_to_address>
**suggestion (testing):** Isolate PRCache tests from the SQLite backend and add coverage for DB-backed reads/writes.
Because `PRCache` now delegates to `getCachedPRDetails`/`cachePRDetails` (and the panel-data variants), these tests will hit the real SQLite DB in `~/.config/raft/raft.sqlite`, introducing hidden I/O, potential flakiness, and leaving DB interactions unvalidated.
To improve this:
1. Mock `getCachedPRDetails`, `cachePRDetails`, `getCachedPRPanelData`, and `cachePRPanelData` in this file so tests stay fast, deterministic, and let you assert calls (URL, payload, etc.).
2. Add a test for a cache miss in memory but a hit in the DB layer (e.g. `details` map empty while `getCachedPRDetails` returns data) to verify that `getDetails` repopulates the in-memory map from persistent storage.
</issue_to_address>
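One way to get that isolation is to thread the persistence layer through as a parameter, so a test can substitute an in-memory map. This is a sketch of the "miss in memory, hit in DB" case, not the repository's actual test setup (which would more likely mock the `db` module):

```typescript
interface PRDetails { additions: number }

// Factory wiring an in-memory "SQLite" so tests are deterministic and never
// touch ~/.config/raft/raft.sqlite.
function makeCache(backing: Map<string, PRDetails>) {
  const details = new Map<string, PRDetails>();
  return {
    getDetails(url: string): PRDetails | undefined {
      if (details.has(url)) return details.get(url);
      const fromDb = backing.get(url);
      if (fromDb) details.set(url, fromDb); // memory repopulated from DB layer
      return fromDb;
    },
    inMemorySize: () => details.size,
  };
}

// Miss in memory, hit in the DB layer:
const backing = new Map<string, PRDetails>([
  ["https://github.com/o/r/pull/1", { additions: 3 }],
]);
const cache = makeCache(backing);
const details = cache.getDetails("https://github.com/o/r/pull/1");
```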
Pull request overview
This PR replaces most gh CLI subprocess usage with direct GitHub REST/GraphQL fetch calls, adds token + SQLite-backed caching to speed up PR list/panel loads, and introduces several UI performance/loading improvements (plus a new split command).
Changes:
- Introduce `fetch`-based GitHub REST/GraphQL helpers (including batched GraphQL PR details).
- Add SQLite persistence and integrate it into the in-memory PR cache and the `ls` flow.
- Improve TUI rendering/loading (deferred search, memoized rows, tab-aware panel skeleton) and add `raft split`.
Reviewed changes
Copilot reviewed 15 out of 15 changed files in this pull request and generated 9 comments.
Show a summary per file
| File | Description |
|---|---|
| src/lib/split-state.ts | New split state schema + read/write + topology formatting. |
| src/lib/github.ts | Replaces gh calls with REST/GraphQL fetch, adds batching and new helpers. |
| src/lib/db.ts | Adds SQLite persistence for PR lists/details/panel data. |
| src/lib/cache.ts | PRCache now reads/writes through SQLite for persistence. |
| src/lib/auth.ts | Adds cached OAuth token retrieval via gh auth token. |
| src/lib/__tests__/github.test.ts | Updates parseSearchResults tests for new input shape. |
| src/lib/__tests__/cache.test.ts | Adjusts cache test keying to avoid collisions with persisted cache. |
| src/index.tsx | Registers new split command and CLI help entry. |
| src/hooks/usePanel.ts | Tightens setter types for panel state setters. |
| src/components/skeleton.tsx | Adds PanelSkeleton for tab-specific loading placeholders. |
| src/components/preview-panel.tsx | Uses PanelSkeleton during panel loading transitions. |
| src/components/pr-table.tsx | Memoizes PRRow to reduce re-render churn. |
| src/commands/split.tsx | New TUI for viewing .raft-split.json state and opening/copying PR URLs. |
| src/commands/ls.tsx | Adds SQLite warm start, deferred search, new sorting path, and batched details fetch. |
| src/__tests__/integration.test.ts | Updates real-network repo target for integration tests. |
Comments suppressed due to low confidence (1)

src/commands/ls.tsx:247

`LsCommand` now derives `selectedPR` from `sortedPRs`, but passes `filteredPRs` + `selectedIndex` into `usePanel`. `usePanel` assumes `selectedIndex` indexes into the `allPRs` array for neighbor prefetching; passing a differently-ordered array will prefetch the wrong neighbors (and can fetch unrelated PRs). Pass `sortedPRs` (or whichever array `selectedIndex` refers to) to `usePanel` to keep indices consistent.
```typescript
const selectedPR = sortedPRs[selectedIndex] ?? null
// Panel state management (shared hook handles data fetching + caching + prefetching)
const panel = usePanel(selectedPR, filteredPRs, selectedIndex)
const { panelOpen, panelTab, splitRatio, panelFullscreen, panelData, panelLoading,
```
Diff context from `src/lib/github.ts`:

```diff
 export async function fetchAllAccountPRs(
   onProgress?: (status: string) => void,
 ): Promise<PullRequest[]> {
-  onProgress?.("Discovering accounts...")
-  const accounts = await getGhAccounts()
-  const originalAccount = await getActiveAccount()
-  const allPRs: PullRequest[] = []
-  const seen = new Set<string>()
-
-  for (let i = 0; i < accounts.length; i++) {
-    const account = accounts[i]
-    onProgress?.(`Fetching PRs for ${account} (${i + 1}/${accounts.length})...`)
-    if (accounts.length > 1) {
-      try { await switchAccount(account) } catch { continue }
-    }
-    try {
-      const json = await runGh([
-        "search", "prs",
-        "--author=@me",
-        "--state=open",
-        "--limit=100",
-        "--json", "number,title,url,body,state,repository,isDraft,createdAt",
-      ])
-      if (json) {
-        const parsed = parseSearchResults(json)
-        for (const pr of parsed) {
-          if (!seen.has(pr.url)) {
-            seen.add(pr.url)
-            allPRs.push(pr)
-          }
-        }
-        onProgress?.(`Found ${allPRs.length} PRs so far...`)
-      }
-    } catch { /* skip account if query fails */ }
-  }
-
-  onProgress?.(`Loaded ${allPRs.length} PRs across ${accounts.length} accounts`)
-
-  // Restore original account
-  if (accounts.length > 1 && originalAccount) {
-    try { await switchAccount(originalAccount) } catch { /* ignore */ }
-  }
-
-  return allPRs
+  onProgress?.("Fetching PRs...");
+  const json = await fetchGh("search/issues?q=is:pr+is:open+author:@me&per_page=100");
+  return parseSearchResults(json.items);
 }

 /**
  * Fetch open pull requests for a specific author or all accessible repositories.
  *
  * - If `author` is undefined: fetches PRs across all authenticated accounts via `@me`.
  * - If `author` is empty string: fetches all open PRs accessible to the current account.
  * - If `author` is provided: fetches PRs by that specific author.
  *
  * @param author - GitHub username or empty string for all repos, undefined for @me across accounts
  * @param onProgress - Optional callback for progress status messages
  * @returns Array of open pull requests
  */
 export async function fetchOpenPRs(
   author?: string,
   onProgress?: (status: string) => void,
 ): Promise<PullRequest[]> {
   if (author === "") {
     // Empty string means fetch all PRs across all repos the user has access to
-    onProgress?.("Fetching all open PRs...")
-    const json = await runGh([
-      "search", "prs",
-      "--state=open",
-      "--limit=1000",
-      "--json", "number,title,url,body,state,repository,isDraft,createdAt,author",
-    ])
-    if (!json) return []
-    return parseSearchResults(json)
+    onProgress?.("Fetching all open PRs...");
+    const json = await fetchGh("search/issues?q=is:pr+is:open&per_page=100");
+    return parseSearchResults(json.items);
   }
   if (author) {
-    onProgress?.(`Fetching PRs for ${author}...`)
-    const json = await runGh([
-      "search", "prs",
-      `--author=${author}`,
-      "--state=open",
-      "--limit=100",
-      "--json", "number,title,url,body,state,repository,isDraft,createdAt,author",
-    ])
-    if (!json) return []
-    return parseSearchResults(json)
+    onProgress?.(`Fetching PRs for ${author}...`);
+    const json = await fetchGh(`search/issues?q=is:pr+is:open+author:${author}&per_page=100`);
+    return parseSearchResults(json.items);
   }
-  return fetchAllAccountPRs(onProgress)
+  return fetchAllAccountPRs(onProgress);
 }
```
```typescript
const map = await batchFetchPRDetails([{repo, number: prNumber, url: `https://github.com/${repo}/pull/${prNumber}`}]);
const details = map.get(`https://github.com/${repo}/pull/${prNumber}`);
if (!details) throw new Error("Failed to fetch PR details");
```
```typescript
export function writeSplitState(repoRoot: string, state: SplitState): void {
  const path = `${repoRoot}/.raft-split.json`
  Bun.write(path, JSON.stringify(state, null, 2) + "\n")
```
```typescript
// Initialize database in ~/.config/raft/raft.sqlite
const configDir = join(homedir(), ".config", "raft");
mkdirSync(configDir, { recursive: true });

const dbPath = join(configDir, "raft.sqlite");
export const db = new Database(dbPath);
```
```typescript
interface SplitCommandProps {
  repo?: string
}

async function getRepoRoot(): Promise<string | null> {
  const proc = Bun.spawn(["git", "rev-parse", "--show-toplevel"], {
    stdout: "pipe",
    stderr: "pipe",
  })
  const stdout = await new Response(proc.stdout).text()
  const code = await proc.exited
  return code === 0 ? stdout.trim() : null
}
```
```diff
 export function parseSearchResults(items: any[]): PullRequest[] {
   return items.map((pr) => {
     const firstLine = (pr.body ?? "").split("\n")[0] ?? "";
     const repoUrlParts = pr.repository_url.split("/");
     const repo = `${repoUrlParts[repoUrlParts.length - 2]}/${repoUrlParts[repoUrlParts.length - 1]}`;
     return {
       number: pr.number,
       title: pr.title,
-      url: pr.url,
+      url: pr.html_url,
       body: firstLine.slice(0, 80),
       state: pr.state,
-      isDraft: pr.isDraft,
-      repo: pr.repository.nameWithOwner,
+      isDraft: pr.draft || false,
+      repo,
```
```typescript
for (const split of splits) {
  if (split.dependsOn.length === 0) {
    roots.push(split)
  } else {
    for (const dep of split.dependsOn) {
      const existing = childrenOf.get(dep) ?? []
      existing.push(split)
      childrenOf.set(dep, existing)
    }
```
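The adjacency-building loop above can be exercised in isolation; the `SplitEntry` shape here is trimmed to the fields the loop touches:

```typescript
interface SplitEntry { number: number; name: string; dependsOn: number[] }

// Same shape as the loop above: splits with no dependencies become roots;
// everything else is indexed under each parent it depends on.
function buildTopology(splits: SplitEntry[]) {
  const roots: SplitEntry[] = [];
  const childrenOf = new Map<number, SplitEntry[]>();
  for (const split of splits) {
    if (split.dependsOn.length === 0) {
      roots.push(split);
    } else {
      for (const dep of split.dependsOn) {
        const existing = childrenOf.get(dep) ?? [];
        existing.push(split);
        childrenOf.set(dep, existing);
      }
    }
  }
  return { roots, childrenOf };
}

const { roots, childrenOf } = buildTopology([
  { number: 1, name: "db layer", dependsOn: [] },
  { number: 2, name: "api", dependsOn: [1] },
  { number: 3, name: "ui", dependsOn: [1, 2] },
]);
```

Note that an entry with multiple dependencies appears under every parent, so a renderer walking `childrenOf` may visit the same node more than once.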
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 543495e0c7
```typescript
const CHUNK_SIZE = 20;
for (let i = 0; i < prs.length; i += CHUNK_SIZE) {
  const chunk = prs.slice(i, i + CHUNK_SIZE);
  let query = "query {\\n";
```
Replace escaped newline in GraphQL query header
The batch query is initialized with "query {\\n", which inserts a literal backslash-n into the GraphQL document instead of whitespace. GraphQL treats \ outside a string as invalid syntax, so fetchGhGraphql will reject these batched detail requests; batchFetchPRDetails then logs and drops the chunk, leaving PR lifecycle/CI/review data missing and causing fetchPRDetails callers to fail when no entry is returned.
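The difference is easy to demonstrate at the string level:

```typescript
// In a normal (non-raw) string literal, "\\n" produces a backslash followed
// by the letter n -- invalid syntax inside a GraphQL document -- while "\n"
// produces an actual newline.
const broken = "query {\\n"; // 9 chars: ends with '\' and 'n'
const fixed = "query {\n";   // 8 chars: ends with a newline
```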
```typescript
fetchGh(`repos/${repo}/issues/${prNumber}/comments`),
fetchGh(`repos/${repo}/pulls/${prNumber}/comments`),
fetchGh(`repos/${repo}/pulls/${prNumber}/files?per_page=100`),
fetchReviewThreads(repo, prNumber),
```
Catch review-thread fetch failures in panel loading
Including fetchReviewThreads(repo, prNumber) directly in Promise.all makes the entire panel fetch fail if the GraphQL review-thread call errors (for example due to transient GraphQL/rate-limit issues), even when body/comments/files REST calls succeeded. This is a regression from the previous behavior where thread metadata failures degraded to an empty thread list instead of blanking the preview panel.
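A minimal sketch of the suggested degrade-to-empty behavior; the helper names mimic the PR's functions but the signatures are illustrative:

```typescript
interface PanelData { body: string; threads: string[] }

// Keep REST results even when the GraphQL review-thread call fails: a
// transient rate-limit error degrades to an empty thread list instead of
// rejecting the whole Promise.all and blanking the preview panel.
async function loadPanel(
  fetchBody: () => Promise<string>,
  fetchReviewThreads: () => Promise<string[]>,
): Promise<PanelData> {
  const [body, threads] = await Promise.all([
    fetchBody(),
    fetchReviewThreads().catch(() => [] as string[]), // degrade, don't fail
  ]);
  return { body, threads };
}

// Simulate a failing GraphQL call alongside a healthy REST call.
const result = loadPanel(
  async () => "PR description",
  async () => { throw new Error("rate limited"); },
);
```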
```typescript
const json = await fetchGh("search/issues?q=is:pr+is:open+author:@me&per_page=100");
return parseSearchResults(json.items);
```
Reintroduce account iteration in all-account PR fetch
fetchAllAccountPRs now executes a single author:@me search and returns immediately, so it no longer aggregates PRs from multiple authenticated GitHub CLI accounts. As a result, flows that rely on this default path to scan "across all accounts" will silently omit PRs owned under non-active accounts.
Actionable comments posted: 7
🧹 Nitpick comments (6)
src/components/skeleton.tsx (2)
58-58: The `height` prop is declared but unused.

The component accepts a `height` parameter that isn't referenced in any of the returned layouts. Either use it to constrain the skeleton height or remove it from the props interface.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@src/components/skeleton.tsx` at line 58, `PanelSkeleton` currently declares a `height` prop that is never used; either remove `height` from the props signature (and any callers) or apply it to the rendered skeleton container so the prop actually controls vertical size; ensure the prop type is kept in sync with the function signature and any consuming components.
58-58: Consider using the `PanelTab` type for better type safety.

The `tab` parameter is typed as `string`, but a `PanelTab` type exists in `src/lib/types.ts` (used by `preview-panel.tsx`). Using the union type would provide compile-time safety against typos and make the valid values explicit.

🔧 Proposed fix

```diff
+import type { PanelTab } from "../lib/types"
+
-export function PanelSkeleton({ tab, width, height }: { tab: string; width: number; height: number }) {
+export function PanelSkeleton({ tab, width, height }: { tab: PanelTab; width: number; height: number }) {
```

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@src/components/skeleton.tsx` at line 58, the `PanelSkeleton` function currently types the `tab` parameter as `string`; change it to use the existing union type `PanelTab` by importing `PanelTab` from `src/lib/types.ts` and updating the signature, and ensure any callers pass a value assignable to `PanelTab` (or cast/update them) so the code compiles.

src/lib/split-state.ts (1)
Verify each finding against the current code and only fix it if needed. In `@src/components/skeleton.tsx` at line 58: the PanelSkeleton function currently types the `tab` parameter as `string`; change it to use the existing union type `PanelTab` by importing it from `src/lib/types.ts` and updating the signature `export function PanelSkeleton({ tab, width, height }: { tab: PanelTab; width: number; height: number })`; ensure any callers pass a value assignable to `PanelTab` (or cast/update them) so the code compiles.

src/lib/split-state.ts (1)

28-37: Consider validating the parsed JSON structure. The `as SplitState` cast assumes the JSON matches the expected shape. If the file contains malformed or outdated data (e.g., a missing `splits` array), callers may encounter runtime errors when accessing properties. A minimal check (e.g., verifying that `splits` is an array) would make the function more robust.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/split-state.ts` around lines 28-37: readSplitState currently casts parsed JSON to SplitState unconditionally, which can cause runtime errors if the file is malformed; update readSplitState to validate the parsed object before returning it. After JSON.parse(text), check that the result is an object and that required properties like `splits` exist with the expected types (e.g., Array.isArray(parsed.splits)), optionally validate elements, and return null (or throw) when validation fails so callers never receive an invalid SplitState.

src/lib/auth.ts (1)
19-19: Redundant `trim()` call. The `safeSpawn` function already trims stdout by default (the `trim` option defaults to `true` per `src/lib/process.ts:45`), so this extra `.trim()` is harmless but unnecessary.

🔧 Suggested simplification

- cachedToken = stdout.trim();
+ cachedToken = stdout;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/auth.ts` at line 19: the assignment to cachedToken unnecessarily calls `.trim()` because safeSpawn already returns trimmed stdout; replace `cachedToken = stdout.trim()` with `cachedToken = stdout` to simplify the code without changing behavior.

src/lib/github.ts (1)
423-433: CI status returns "ready" when no check runs exist. When there are no check runs (line 427), the function returns "ready". This may be misleading for repos that don't use GitHub Actions/checks; "unknown" might be more accurate since we can't confirm the code is actually ready.

💡 Consider alternative

- if (checkRuns.length === 0) return "ready";
+ if (checkRuns.length === 0) return "unknown";

Or keep "ready" but document that "no checks" is treated as implicitly passing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/github.ts` around lines 423-433: fetchCIStatus currently returns "ready" when there are no check runs, which is misleading for repos without CI; change the branch so that when checkRuns.length === 0 it returns "unknown" instead of "ready", and adjust any tests or callers that assumed no-checks == "ready" to expect "unknown", or document the behavior accordingly.

src/commands/ls.tsx (1)
343-346: Inconsistent array reference for panel navigation bounds. Lines 343-344 use `filteredPRs.length` for up/down navigation bounds, but the component elsewhere uses `sortedPRs` (e.g., line 492 uses `sortedPRs.length`). Since `sortedPRs` is derived from `filteredPRs` with the same length, this is functionally correct but inconsistent.

♻️ For consistency

  if (key.name === "down") {
-   setSelectedIndex((i) => Math.min(filteredPRs.length - 1, i + 1))
+   setSelectedIndex((i) => Math.min(sortedPRs.length - 1, i + 1))
  } else if (key.name === "up") {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/ls.tsx` around lines 343-346: the navigation bounds use filteredPRs.length while the rest of the component uses sortedPRs, so update the up/down handlers to use sortedPRs.length for consistency; modify the setSelectedIndex calls in the key handling (the block that checks key.name === "down" / "up") to reference sortedPRs.length instead of filteredPRs.length, keeping the same Math.min/Math.max logic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b6681bb2-c8ca-443f-869f-f14ba05de9a3
📒 Files selected for processing (15)
src/__tests__/integration.test.ts, src/commands/ls.tsx, src/commands/split.tsx, src/components/pr-table.tsx, src/components/preview-panel.tsx, src/components/skeleton.tsx, src/hooks/usePanel.ts, src/index.tsx, src/lib/__tests__/cache.test.ts, src/lib/__tests__/github.test.ts, src/lib/auth.ts, src/lib/cache.ts, src/lib/db.ts, src/lib/github.ts, src/lib/split-state.ts
async function getRepoRoot(): Promise<string | null> {
  const proc = Bun.spawn(["git", "rev-parse", "--show-toplevel"], {
    stdout: "pipe",
    stderr: "pipe",
  })
  const stdout = await new Response(proc.stdout).text()
  const code = await proc.exited
  return code === 0 ? stdout.trim() : null
}
Process cleanup missing — potential file descriptor leak.
Unlike safeSpawn in src/lib/process.ts, this function doesn't call proc.kill() or proc.unref() after awaiting the result. If getRepoRoot is called repeatedly (e.g., during component remounts), this could accumulate file descriptors.
🛡️ Suggested fix using try/finally cleanup
async function getRepoRoot(): Promise<string | null> {
const proc = Bun.spawn(["git", "rev-parse", "--show-toplevel"], {
stdout: "pipe",
stderr: "pipe",
})
- const stdout = await new Response(proc.stdout).text()
- const code = await proc.exited
- return code === 0 ? stdout.trim() : null
+ try {
+ const stdout = await new Response(proc.stdout).text()
+ const code = await proc.exited
+ return code === 0 ? stdout.trim() : null
+ } finally {
+ try { proc.kill() } catch {}
+ proc.unref()
+ }
}

Alternatively, consider reusing safeSpawn from ../lib/process for consistent cleanup handling.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/commands/split.tsx` around lines 11 - 19, getRepoRoot currently spawns a
Bun process but never cleans it up, risking leaked fds; update getRepoRoot to
ensure the child process is always terminated/unreferenced (call proc.kill()
and/or proc.unref() in a finally block) after awaiting proc.exited and reading
stdout, or replace the spawn with the existing safeSpawn helper from
../lib/process to get consistent cleanup behavior; reference the getRepoRoot
function and safeSpawn so reviewers can locate and apply the change.
Bun.spawn(["open", splits[selectedIndex].prUrl!], { stdout: "ignore", stderr: "ignore" })
showFlash("Opening PR...")
open command is macOS-specific and affects multiple commands.
The open command only works on macOS. On Linux, xdg-open is used, and on Windows, start or cmd /c start. This will fail silently on non-macOS systems due to stdout: "ignore", stderr: "ignore".
This pattern appears in 6 locations across 4 files:
- src/commands/split.tsx:98
- src/commands/stack.tsx:125, 147
- src/commands/ls.tsx:377, 497
- src/commands/log.tsx:143
Implement a cross-platform solution to handle URL opening on macOS, Linux, and Windows.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/commands/split.tsx` around lines 98 - 99, The code uses the macOS-only
command "open" (e.g., Bun.spawn(["open", splits[selectedIndex].prUrl!])) which
fails silently on other platforms; create a cross-platform helper (e.g.,
openUrl(url: string)) that checks process.platform and calls Bun.spawn with
["open", url] on darwin, ["xdg-open", url] on linux, and ["cmd", "/c", "start",
"", url] on win32 (or the equivalent start invocation), await the spawn, and
surface errors instead of silencing stdout/stderr; replace direct Bun.spawn
calls in split.tsx (splits[selectedIndex].prUrl), stack.tsx, ls.tsx, and log.tsx
to use openUrl and use showFlash to report success or failure.
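As a sketch of what such a helper could look like (hypothetical names; `openCommand` only builds the argv, which the caller would hand to `Bun.spawn` and report failures for via `showFlash`):

```typescript
// Hypothetical cross-platform helper: build the argv for opening a URL.
// The caller is expected to pass process.platform and spawn the result.
function openCommand(url: string, platform: string): string[] {
  switch (platform) {
    case "darwin":
      return ["open", url] // macOS
    case "win32":
      return ["cmd", "/c", "start", "", url] // Windows; empty arg is the window title
    default:
      return ["xdg-open", url] // Linux and other unixes with xdg-utils
  }
}
```

Keeping the platform switch in a pure function makes the behavior unit-testable without spawning anything; the call site can then await the child and surface errors instead of discarding stdout/stderr.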
  hasDetails(url: string): boolean {
-   return this.details.has(url)
+   return this.details.has(url) || getCachedPRDetails(url) !== null;
  }
hasDetails() and hasPanelData() cause redundant DB queries.
When data exists in the database but not in memory, hasDetails() fetches and parses the data via getCachedPRDetails(), then discards it. A subsequent getDetails() call re-queries the same data. The usage pattern in src/commands/ls.tsx:212-226 shows hasDetails() filtering followed by getDetails() on the same URLs, doubling DB I/O.
🐛 Proposed fix: Cache the result when checking existence
hasDetails(url: string): boolean {
- return this.details.has(url) || getCachedPRDetails(url) !== null;
+ if (this.details.has(url)) return true;
+ const fromDb = getCachedPRDetails(url);
+ if (fromDb) {
+ this.details.set(url, fromDb);
+ return true;
+ }
+ return false;
} hasPanelData(url: string): boolean {
- return this.panelData.has(url) || getCachedPRPanelData(url) !== null;
+ if (this.panelData.has(url)) return true;
+ const fromDb = getCachedPRPanelData(url);
+ if (fromDb) {
+ this.panelData.set(url, fromDb);
+ return true;
+ }
+ return false;
Also applies to: 43-45
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/cache.ts` around lines 24 - 26, hasDetails() and hasPanelData()
currently call getCachedPRDetails()/getCachedPanelData() to check DB existence
then discard the parsed result, causing a second DB read when
getDetails()/getPanelData() is called; modify hasDetails(url) and
hasPanelData(key) to, when the in-memory Map (this.details / this.panels) misses
and the respective getCached... call returns non-null, parse/assign that
returned value into the in-memory cache (this.details.set(url, parsed) or
this.panels.set(key, parsed)) and then return true; this ensures the subsequent
getDetails() / getPanelData() use the cached value and avoid duplicate DB
queries.
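The read-through pattern this fix describes can be shown in isolation. This is an illustrative sketch, not the project's actual cache class: `DetailsCache` and `dbGet` stand in for the real cache and `getCachedPRDetails`, and the existence check promotes DB hits into memory so the follow-up `get()` never re-queries the database.

```typescript
// Illustrative read-through cache: has() promotes DB hits into memory
// so a subsequent get() is served from the Map instead of a second DB read.
class DetailsCache<T> {
  private mem = new Map<string, T>()
  constructor(private dbGet: (key: string) => T | null) {}

  has(key: string): boolean {
    if (this.mem.has(key)) return true
    const fromDb = this.dbGet(key)
    if (fromDb !== null) {
      this.mem.set(key, fromDb) // promote so get() avoids duplicate DB I/O
      return true
    }
    return false
  }

  get(key: string): T | null {
    return this.mem.get(key) ?? this.dbGet(key)
  }
}
```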
const configDir = join(homedir(), ".config", "raft");
mkdirSync(configDir, { recursive: true });

const dbPath = join(configDir, "raft.sqlite");
export const db = new Database(dbPath);
Module initialization can throw without recovery path.
If mkdirSync fails (e.g., permission denied) or new Database() fails (e.g., disk full, locked), the entire module import will throw, crashing any consumer. Consider wrapping initialization in a try-catch with a fallback (e.g., in-memory DB or graceful degradation).
🛡️ Suggested defensive initialization
+let db: Database;
+try {
const configDir = join(homedir(), ".config", "raft");
mkdirSync(configDir, { recursive: true });
-
const dbPath = join(configDir, "raft.sqlite");
-export const db = new Database(dbPath);
+ db = new Database(dbPath);
+} catch (e) {
+ // Fallback to in-memory database if disk access fails
+ console.error("Failed to initialize persistent cache, using in-memory:", e);
+ db = new Database(":memory:");
+}
+export { db };📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| const configDir = join(homedir(), ".config", "raft"); | |
| mkdirSync(configDir, { recursive: true }); | |
| const dbPath = join(configDir, "raft.sqlite"); | |
| export const db = new Database(dbPath); | |
| let db: Database; | |
| try { | |
| const configDir = join(homedir(), ".config", "raft"); | |
| mkdirSync(configDir, { recursive: true }); | |
| const dbPath = join(configDir, "raft.sqlite"); | |
| db = new Database(dbPath); | |
| } catch (e) { | |
| // Fallback to in-memory database if disk access fails | |
| console.error("Failed to initialize persistent cache, using in-memory:", e); | |
| db = new Database(":memory:"); | |
| } | |
| export { db }; |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/db.ts` around lines 8 - 12, Wrap the filesystem and DB creation in a
try-catch so module initialization cannot throw: catch failures from mkdirSync
and new Database, log the error, and fall back to an in-memory database (use
dbPath fallback ":memory:") so the exported symbol db always exists; ensure you
reference and protect the existing symbols configDir, dbPath, mkdirSync, and
Database and export a usable db even when the on-disk initialization fails.
  if (author === "") {
    // Empty string means fetch all PRs across all repos the user has access to
-   onProgress?.("Fetching all open PRs...")
-   const json = await runGh([
-     "search", "prs",
-     "--state=open",
-     "--limit=1000",
-     "--json", "number,title,url,body,state,repository,isDraft,createdAt,author",
-   ])
-   if (!json) return []
-   return parseSearchResults(json)
+   onProgress?.("Fetching all open PRs...");
+   const json = await fetchGh("search/issues?q=is:pr+is:open&per_page=100");
+   return parseSearchResults(json.items);
  }
  if (author) {
-   onProgress?.(`Fetching PRs for ${author}...`)
-   const json = await runGh([
-     "search", "prs",
-     `--author=${author}`,
-     "--state=open",
-     "--limit=100",
-     "--json", "number,title,url,body,state,repository,isDraft,createdAt,author",
-   ])
-   if (!json) return []
-   return parseSearchResults(json)
+   onProgress?.(`Fetching PRs for ${author}...`);
+   const json = await fetchGh(`search/issues?q=is:pr+is:open+author:${author}&per_page=100`);
+   return parseSearchResults(json.items);
URL parameters not encoded — special characters may break queries.
If author contains special characters (e.g., +, &, spaces, or Unicode), the URL will be malformed. Use encodeURIComponent for user-provided values.
🛡️ Suggested fix
if (author === "") {
onProgress?.("Fetching all open PRs...");
const json = await fetchGh("search/issues?q=is:pr+is:open&per_page=100");
return parseSearchResults(json.items);
}
if (author) {
onProgress?.(`Fetching PRs for ${author}...`);
- const json = await fetchGh(`search/issues?q=is:pr+is:open+author:${author}&per_page=100`);
+ const json = await fetchGh(`search/issues?q=is:pr+is:open+author:${encodeURIComponent(author)}&per_page=100`);
return parseSearchResults(json.items);
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/github.ts` around lines 96 - 104, The search URLs built in the branch
handling author use the raw author string which can break the query if it
contains spaces or special characters; update the code that calls fetchGh (the
two cases using fetchGh("search/issues?q=...") and
fetchGh(`search/issues?q=...author:${author}...`)) to pass an encoded author
term using encodeURIComponent(author) when author is used in the query, and
ensure the non-author branch remains unchanged; keep
parseSearchResults(json.items) as the return.
let query = "query {\n";
for (let j = 0; j < chunk.length; j++) {
  const pr = chunk[j];
  const [owner, name] = pr.repo.split("/");
  query += `
    pr_${j}: repository(owner: "${owner}", name: "${name}") {
      pullRequest(number: ${pr.number}) {
        additions
        deletions
        comments { totalCount }
        headRefName
        headRefOid
        mergeable
        reviews(first: 100) {
          nodes {
            author { login }
            state
          }
        }
        reviewThreads(first: 100) {
          nodes {
            isResolved
          }
        }
        commits(last: 1) {
          nodes {
            commit {
              statusCheckRollup {
                state
              }
            }
          }
        }
      }
    }
  `;
}
GraphQL query construction is vulnerable to injection.
Repository owner/name values are interpolated directly into the query string without escaping. If a repo name contains " or other special characters, the query will break or behave unexpectedly. Use GraphQL variables instead of string interpolation.
🛡️ Suggested fix using variables
The current approach builds the query dynamically with string interpolation. A safer approach would be to use a single query with variables, though this requires restructuring the batching logic. At minimum, validate/escape the owner and name:
const [owner, name] = pr.repo.split("/");
+ // Basic validation - repo names shouldn't contain quotes
+ if (owner.includes('"') || name.includes('"')) {
+ console.error(`Invalid repo name: ${pr.repo}`);
+ continue;
+ }
query += `
  pr_${j}: repository(owner: "${owner}", name: "${name}") {

For a more robust solution, consider using a parameterized approach with aliases that map to variables, or process repos individually with the existing fetchGhGraphql which properly handles variables.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/github.ts` around lines 190 - 226, The GraphQL query built in the
loop (variable query inside the chunk-processing code) directly interpolates
repo owner/name (pr.repo split into owner and name) and is vulnerable to
injection or breaking on special characters; fix by switching to a
variables-based request (use fetchGhGraphql which supports variables) or by
validating/escaping owner and name before interpolation: restructure the
batching to send one parameterized GraphQL query that uses aliases (e.g., pr_0,
pr_1) mapped to variables for owner/name, or fall back to querying repos
individually via fetchGhGraphql with safe variables instead of string
interpolation.
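A sketch of the variables-based batching this prompt suggests (a hypothetical helper, assuming the GraphQL endpoint accepts a `variables` map alongside the query, as the review says `fetchGhGraphql` does): each chunk entry becomes an aliased field whose owner, name, and number arrive as typed variables instead of interpolated strings.

```typescript
interface PRRef { repo: string; number: number }

// Build one parameterized query per chunk: aliases pr_0, pr_1, ... map to
// $owner_N / $name_N / $num_N variables, so repo names never appear in the
// query text and cannot break or inject into it.
function buildBatchedQuery(chunk: PRRef[]): {
  query: string
  variables: Record<string, string | number>
} {
  const variables: Record<string, string | number> = {}
  const decls: string[] = []
  const bodies: string[] = []
  chunk.forEach((pr, j) => {
    const [owner = "", name = ""] = pr.repo.split("/")
    variables[`owner_${j}`] = owner
    variables[`name_${j}`] = name
    variables[`num_${j}`] = pr.number
    decls.push(`$owner_${j}: String!, $name_${j}: String!, $num_${j}: Int!`)
    bodies.push(
      `pr_${j}: repository(owner: $owner_${j}, name: $name_${j}) {\n` +
        `  pullRequest(number: $num_${j}) { additions deletions }\n}`,
    )
  })
  return { query: `query (${decls.join(", ")}) {\n${bodies.join("\n")}\n}`, variables }
}
```

The field selection is abbreviated here; the real query would carry the full selection set shown in the diff above.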
export function writeSplitState(repoRoot: string, state: SplitState): void {
  const path = `${repoRoot}/.raft-split.json`
  Bun.write(path, JSON.stringify(state, null, 2) + "\n")
}
Make writeSplitState async and await Bun.write().
Bun.write() returns a Promise<number>, but the current code calls it without awaiting in a synchronous function. The write operation is fire-and-forget, meaning the function returns before the file is written, potentially causing data loss or race conditions.
Proposed fix
-export function writeSplitState(repoRoot: string, state: SplitState): void {
+export async function writeSplitState(repoRoot: string, state: SplitState): Promise<void> {
const path = `${repoRoot}/.raft-split.json`
- Bun.write(path, JSON.stringify(state, null, 2) + "\n")
+ await Bun.write(path, JSON.stringify(state, null, 2) + "\n")
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/split-state.ts` around lines 39 - 42, Change writeSplitState to be
asynchronous and await the Bun.write() call: update the function signature
writeSplitState(repoRoot: string, state: SplitState) to return Promise<void>
(make it async) and await Bun.write(path, JSON.stringify(state, null, 2) + "\n")
so the write completes before the function resolves; propagate or let exceptions
surface so callers can handle write failures.
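The two split-state suggestions above (validate before casting, and await the write) can be sketched together. This is an illustration under assumptions: `SplitState` is reduced to a minimal `{ splits }` shape, and the writer takes an injectable `write` function standing in for `Bun.write`.

```typescript
// Minimal SplitState shape assumed for illustration.
interface SplitState { splits: unknown[] }

// Validate instead of blindly casting: reject anything without a `splits` array.
function parseSplitState(text: string): SplitState | null {
  try {
    const parsed: unknown = JSON.parse(text)
    if (typeof parsed !== "object" || parsed === null) return null
    if (!Array.isArray((parsed as { splits?: unknown }).splits)) return null
    return parsed as SplitState
  } catch {
    return null // malformed JSON never reaches callers
  }
}

// Async write: resolves only after the underlying write completes.
// `write` stands in for Bun.write, which returns a Promise<number>.
async function writeSplitState(
  repoRoot: string,
  state: SplitState,
  write: (path: string, data: string) => Promise<number>,
): Promise<void> {
  await write(`${repoRoot}/.raft-split.json`, JSON.stringify(state, null, 2) + "\n")
}
```

Returning null on malformed input keeps the caller contract simple, while the awaited write lets callers observe failures instead of racing a fire-and-forget operation.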
Summary

- Replaced 140+ `gh` CLI subprocess calls with native `fetch` via the GitHub API
- Added in-memory OAuth token caching via `gh auth token`
- Added local SQLite persistence using `bun:sqlite` for zero-latency PR lists
- Optimized React rendering cascade with `useDeferredValue` and `React.memo`
- Implemented `PanelSkeleton` for smooth, tab-aware loading transitions

Summary by Sourcery

Replace GitHub CLI subprocess usage with direct GitHub API calls backed by token-based auth, add persistent SQLite-backed caching for PR data, and improve the PR list and preview UI responsiveness and split-branch introspection.

New Features:
- `split` command and supporting split-state model to inspect and visualize split-branch topology and status.
- `PanelSkeleton` component to show structured loading placeholders in the preview panel.

Bug Fixes:
- `gh` CLI JSON output.

Enhancements:
- `gh` CLI for most operations.
- `gh auth token` command to avoid repeated invocations.
- `split` command in parsing and help output.

Documentation:
- `split` command and its purpose.

Tests:

Summary by CodeRabbit

New Features

Improvements