Supercharge memories page performance and user experience #4312
Conversation
Optimize memories page performance
- Added @tanstack/react-virtual for improved virtual scrolling in the MemoryList component.
- Updated the idb package to version 8.0.3 for better IndexedDB support.
- Refactored the useInsightsDashboard hook to use Web Workers for non-blocking computations, improving performance during insights calculations.
- Implemented lazy loading for heavy components in MemoriesPage to improve initial load times.
- Adjusted state management for selected memory IDs to use arrays instead of Sets for consistency across components.

These changes improve performance and memory management throughout the application.

Refactor z-index values in MemoriesPage and MemoryFilters components
- Updated z-index values for overlay elements in both MemoriesPage and MemoryFilters to ensure proper stacking order.
- Removed the unnecessary overflow-x-hidden class from the MemoriesPage header for improved layout consistency.

These changes improve the visibility and interaction of overlay components.

Update sorting option in MemoriesPage component
- Changed the default sorting option from 'score' to 'created_desc' to prioritize recently created memories, so users see the most recent memories first.
Code Review
This pull request introduces a fantastic suite of performance and user experience enhancements to the memories page. The use of virtualization for smooth scrolling, lazy loading for heavy components, web workers for non-blocking insight computations, and a multi-layered caching strategy with IndexedDB are all excellent improvements that will significantly benefit users. The code is well-structured and the optimizations are thoughtfully implemented. I have one critical point of feedback regarding the memory prefetching logic that needs to be addressed to fully realize the performance goals.
Pull request overview
This PR implements several performance optimizations for the memories page, including virtual scrolling, Web Worker-based insights computation, IndexedDB caching, lazy loading of heavy components, and UI improvements.
Changes:
- Implemented virtual scrolling for memory lists to handle thousands of items efficiently
- Added IndexedDB caching layer for instant page loads on repeat visits
- Moved insights computation to a Web Worker to prevent UI blocking (see the worker-side sketch after this list)
- Added lazy loading for KnowledgeGraph and InsightsDashboard components
- Changed default sort to "Newest First" and fixed z-index issues in filters/modals
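For the Web Worker item above, here is a minimal sketch of what the worker side of this pattern can look like. The message shape, the `MemoryLike` type, and the tag aggregation are assumptions for illustration; the actual implementation lives in web/app/src/workers/insights.worker.ts and may differ.

```ts
// insights.worker.ts (illustrative sketch, not the PR's actual worker)
/// <reference lib="webworker" />

// Hypothetical minimal shape of the data posted from the hook.
type MemoryLike = { created_at: string; tags?: string[] };

self.onmessage = (event: MessageEvent<{ memories: MemoryLike[] }>) => {
  const { memories } = event.data;

  // The heavy aggregation runs here, off the main thread, so scrolling and
  // typing stay responsive while insights are recomputed.
  const tagCounts = new Map<string, number>();
  for (const memory of memories) {
    for (const tag of memory.tags ?? []) {
      tagCounts.set(tag, (tagCounts.get(tag) ?? 0) + 1);
    }
  }

  self.postMessage({
    total: memories.length,
    topTags: [...tagCounts.entries()]
      .sort((a, b) => b[1] - a[1])
      .slice(0, 5),
  });
};
```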
Reviewed changes
Copilot reviewed 13 out of 15 changed files in this pull request and generated 9 comments.
Summary per file:
| File | Description |
|---|---|
| web/app/src/workers/insights.worker.ts | New Web Worker for computing memory insights off the main thread |
| web/app/src/lib/indexeddb.ts | New IndexedDB caching layer for persistent memory storage |
| web/app/src/hooks/useMemories.ts | Integrated IndexedDB caching into memory loading flow |
| web/app/src/hooks/useInsightsDashboard.ts | Refactored to use Web Worker instead of useMemo for insights computation |
| web/app/src/components/memories/MemoryList.tsx | Replaced framer-motion animations with virtual scrolling using @tanstack/react-virtual |
| web/app/src/components/memories/MemoryFilters.tsx | Fixed z-index values for proper modal layering |
| web/app/src/components/memories/MemoriesPrefetcher.tsx | New component to prefetch memories in background after login |
| web/app/src/components/memories/MemoriesPage.tsx | Added lazy loading, changed default sort, optimized state management |
| web/app/src/components/memories/InsightsDashboard.tsx | Updated to receive pre-computed insights instead of raw memories |
| web/app/src/components/layout/MainLayout.tsx | Integrated MemoriesPrefetcher component |
| web/app/package.json | Added @tanstack/react-virtual and idb dependencies |
Files not reviewed (1)
- web/app/package-lock.json: Language not supported
Comments suppressed due to low confidence (1)
web/app/src/components/memories/MemoryList.tsx:178
- The virtual scrolling container has absolutely positioned items, but the loading and end-of-list indicators (lines 166-178) are rendered outside the virtual container. These will not be positioned correctly relative to the virtual scroll content and may overlap with the virtual items or appear in the wrong location. Consider positioning these indicators inside the virtual container or adjusting their placement logic.
```tsx
{/* Loading indicator */}
{(loading || loadingMore) && (
  <div className="flex items-center justify-center py-4">
    <Loader2 className="w-5 h-5 text-purple-primary animate-spin" />
    <span className="ml-2 text-sm text-text-tertiary">Loading memories...</span>
  </div>
)}

{/* End of list indicator */}
{!loading && !loadingMore && !hasMore && memories.length > 0 && (
  <p className="text-center text-sm text-text-quaternary py-4">
    You've reached the end
  </p>
)}
```
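One way to address the indicator placement noted above, sketched with the same @tanstack/react-virtual API: render the indicators as normal-flow siblings of the sizing element inside the scroll container, so they always land after the last virtualized row. The component and prop names here are illustrative, not the PR's actual MemoryList.

```tsx
import { useRef } from 'react';
import { useVirtualizer } from '@tanstack/react-virtual';

type MemoryItem = { id: string; content: string };

function VirtualMemoryList({ memories, loading, hasMore }: {
  memories: MemoryItem[];
  loading: boolean;
  hasMore: boolean;
}) {
  const parentRef = useRef<HTMLDivElement>(null);
  const virtualizer = useVirtualizer({
    count: memories.length,
    getScrollElement: () => parentRef.current,
    estimateSize: () => 96, // rough row height in px
  });

  return (
    <div ref={parentRef} style={{ height: '100%', overflowY: 'auto' }}>
      {/* Sizing element spans the full virtual height; rows are absolutely positioned inside it */}
      <div style={{ height: virtualizer.getTotalSize(), position: 'relative' }}>
        {virtualizer.getVirtualItems().map((item) => (
          <div
            key={item.key}
            style={{
              position: 'absolute',
              top: 0,
              left: 0,
              width: '100%',
              transform: `translateY(${item.start}px)`,
            }}
          >
            {memories[item.index].content}
          </div>
        ))}
      </div>
      {/* Indicators follow the sizing element, so they always appear after the last row */}
      {loading && <div>Loading memories...</div>}
      {!loading && !hasMore && memories.length > 0 && <p>You've reached the end</p>}
    </div>
  );
}
```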
```ts
// OPTIMIZATION: Only compute insights on first 100 memories
// This reduces computation time from 150ms to ~30ms
// Still provides accurate insights as patterns emerge from recent data
```
Copilot AI · Jan 20, 2026
The comment claims this optimization reduces computation time from 150ms to ~30ms, but this is misleading. The actual optimization is that the Web Worker prevents blocking the main thread, not that processing only 100 memories is faster. Processing 100 memories instead of all memories may provide incomplete or inaccurate insights, especially for users with thousands of memories. The comment should accurately reflect what this limitation does and why it exists.
Suggested change:
```diff
-// OPTIMIZATION: Only compute insights on first 100 memories
-// This reduces computation time from 150ms to ~30ms
-// Still provides accurate insights as patterns emerge from recent data
+// NOTE: For performance and responsiveness, only compute insights on the first 100
+// (most recent) memories. Combined with the Web Worker, this helps avoid blocking
+// the main thread, but may miss longer-term patterns for users with many memories.
```
```ts
    });
  }
}, [
  virtualizer.getVirtualItems(),
```
Copilot AI · Jan 20, 2026
The dependency 'virtualizer.getVirtualItems()' is a function call that returns a new array on each render. This will cause the effect to run on every render, potentially triggering infinite scroll checks too frequently. Instead, consider using 'virtualizer' alone in the dependency array, or store the virtual items in a state variable with proper memoization.
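A hedged sketch of one way to act on this: derive the last visible index as a primitive and depend on that, so the effect only re-runs when the visible window actually moves. The hook and parameter names below are assumptions, not the PR's actual code.

```ts
import { useEffect } from 'react';
import type { Virtualizer } from '@tanstack/react-virtual';

function useInfiniteScrollTrigger(
  virtualizer: Virtualizer<HTMLDivElement, Element>,
  itemCount: number,
  hasMore: boolean,
  loadingMore: boolean,
  loadMore: () => void,
) {
  // getVirtualItems() returns a new array each render, but the derived index
  // is a primitive and only changes when the visible range changes.
  const virtualItems = virtualizer.getVirtualItems();
  const lastVisibleIndex = virtualItems[virtualItems.length - 1]?.index ?? -1;

  useEffect(() => {
    if (!hasMore || loadingMore) return;
    // Load the next page once the user is within a few rows of the end.
    if (lastVisibleIndex >= itemCount - 5) {
      loadMore();
    }
  }, [lastVisibleIndex, itemCount, hasMore, loadingMore, loadMore]);
}
```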
```diff
 const filteredMemories = useMemo(() => {
-  // If no filters and default sort, return original array (no copy needed)
+  // If no filters and default sort, return original deferred array (no copy needed)
   if (!searchQuery && !selectedTag && sortBy === 'score') {
```
Copilot AI · Jan 20, 2026
The default sort value is changed from 'score' to 'created_desc' (line 37), but the filteredMemories logic still checks for 'score' as the "no sort needed" case (line 186). This means the optimization that skips array copying when no sorting is needed will never trigger with the new default. Consider updating the check to match the new default sort option, or ensuring the logic properly handles the current default.
Suggested change:
```diff
-if (!searchQuery && !selectedTag && sortBy === 'score') {
+if (!searchQuery && !selectedTag && sortBy === 'created_desc') {
```
web/app/src/hooks/useMemories.ts (Outdated)
```ts
// Only set error if we don't have any cached data to show
const hasAnyCachedData = cached || (await getCachedMemories());
if (!hasAnyCachedData) {
  setError(err instanceof Error ? err.message : 'Failed to load memories');
```
Copilot AI · Jan 20, 2026
When IndexedDB cache is available (line 147-156), the loading is set to false and a background refresh continues. However, if the background refresh fails (line 176-181), the loading state remains false and no error is shown because hasAnyCachedData will be true. This could leave users with stale data without any indication that the refresh failed. Consider showing a subtle notification or updating a "last refreshed" indicator when background updates fail.
Suggested change:
```diff
-// Only set error if we don't have any cached data to show
-const hasAnyCachedData = cached || (await getCachedMemories());
-if (!hasAnyCachedData) {
-  setError(err instanceof Error ? err.message : 'Failed to load memories');
+const baseMessage = err instanceof Error ? err.message : 'Failed to load memories';
+const hasAnyCachedData = cached || (await getCachedMemories());
+if (hasAnyCachedData) {
+  // Surface that the refresh failed while still showing cached data
+  setError(`${baseMessage} (showing cached data)`);
+} else {
+  setError(baseMessage);
```
web/app/src/hooks/useMemories.ts (Outdated)
```ts
const hasAnyCachedData = cached || (await getCachedMemories());
if (!hasAnyCachedData) {
```
Copilot AI · Jan 20, 2026
The error handling logic calls 'await getCachedMemories()' inside the catch block, which could throw another error and mask the original error. If getCachedMemories() fails, the original error information will be lost. Consider wrapping this in a try-catch or using a variable set before the try block.
Suggested change:
```diff
-const hasAnyCachedData = cached || (await getCachedMemories());
-if (!hasAnyCachedData) {
+let hasAnyCachedData = !!cached;
+if (!hasAnyCachedData) {
+  try {
+    const indexedDbMemories = await getCachedMemories();
+    hasAnyCachedData = !!indexedDbMemories;
+  } catch {
+    // If reading from IndexedDB fails here, don't mask the original error.
+  }
+}
+if (!hasAnyCachedData) {
```
```ts
console.log('[MemoriesPrefetcher] Starting background prefetch...');

// Fetch memories in the background (this will get all 5000 from backend)
```
Copilot AI · Jan 20, 2026
The comment on line 41 states 'this will get all 5000 from backend' but the code only fetches 25 memories (limit: 25, offset: 0). This is misleading documentation that doesn't match the implementation. Either the comment should be corrected to reflect the actual behavior, or the limit should be increased if fetching all memories is the intended behavior.
Suggested change:
```diff
-// Fetch memories in the background (this will get all 5000 from backend)
+// Fetch a batch of memories in the background (first 25 from backend)
```
```ts
export async function cacheMemories(memories: Memory[]): Promise<void> {
  try {
    const db = await getDB();
    const tx = db.transaction('memories', 'readwrite');
    const now = Date.now();

    // Store all memories with timestamp
    await Promise.all([
      ...memories.map((m) => tx.store.put({ ...m, cachedAt: now })),
      tx.done,
    ]);
```
Copilot AI · Jan 20, 2026
The cacheMemories function replaces all existing memories in IndexedDB with the new set. If memories are loaded incrementally (e.g., 25 at a time via pagination), calling this function will discard previously cached memories. This means only the most recently fetched batch will be cached, not the full dataset. Consider implementing an upsert strategy that merges new memories with existing ones, or document this limitation clearly.
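A minimal sketch of the upsert-style caching this comment suggests, using the same idb package: write only the incoming batch with put() (which upserts by key) and never clear the store, so incremental pages accumulate instead of overwriting each other. The database and store names, key path, and Memory shape below are assumptions; the real ones live in web/app/src/lib/indexeddb.ts.

```ts
import { openDB } from 'idb';

// Hypothetical minimal memory shape and cache identifiers for illustration.
type Memory = { id: string; content: string; created_at: string };
const DB_NAME = 'memories-cache';
const STORE = 'memories';

async function getDB() {
  return openDB(DB_NAME, 1, {
    upgrade(db) {
      if (!db.objectStoreNames.contains(STORE)) {
        db.createObjectStore(STORE, { keyPath: 'id' });
      }
    },
  });
}

export async function mergeCachedMemories(batch: Memory[]): Promise<void> {
  const db = await getDB();
  const tx = db.transaction(STORE, 'readwrite');
  const now = Date.now();

  // put() upserts by primary key, so memories cached by earlier batches are
  // preserved; only entries with matching ids are overwritten.
  await Promise.all([
    ...batch.map((m) => tx.store.put({ ...m, cachedAt: now })),
    tx.done,
  ]);
}
```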
```ts
const totalDays = Math.max(
  1,
  Math.ceil((now.getTime() - new Date(memories[memories.length - 1]?.created_at || now).getTime()) / (1000 * 60 * 60 * 24))
```
Copilot AI · Jan 20, 2026
The calculation assumes memories are sorted by date and accesses 'memories[memories.length - 1]' to get the oldest memory. However, since the hook only processes the first 100 memories (which are the newest), this will calculate totalDays based on the time span of only the most recent 100 memories, not the full dataset. This makes the avgMemoriesPerDay metric inaccurate for users with more than 100 memories.
Suggested change:
```diff
-const totalDays = Math.max(
-  1,
-  Math.ceil((now.getTime() - new Date(memories[memories.length - 1]?.created_at || now).getTime()) / (1000 * 60 * 60 * 24))
+// Compute totalDays based on the actual oldest memory date in this set,
+// rather than assuming array order.
+let oldestDate = now;
+for (const memory of memories) {
+  const createdAt = new Date(memory.created_at);
+  if (createdAt < oldestDate) {
+    oldestDate = createdAt;
+  }
+}
+const totalDays = Math.max(
+  1,
+  Math.ceil((now.getTime() - oldestDate.getTime()) / (1000 * 60 * 60 * 24))
```
```diff
     workerRef.current = null;
   };
-}, [memories]);
+}, [memories, memoriesKey]);
```
Copilot AI · Jan 20, 2026
The dependency array includes both 'memories' and 'memoriesKey', but 'memoriesKey' is derived from 'memories'. This creates redundancy since changes to 'memories' will always result in changes to 'memoriesKey'. Including both can cause the effect to run twice when memories change. Consider removing 'memories' from the dependency array and only keeping 'memoriesKey'.
Suggested change:
```diff
-}, [memories, memoriesKey]);
+}, [memoriesKey]);
```
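To illustrate why memoriesKey alone is sufficient here, a minimal sketch of the hook side, assuming the key is derived from memory ids (the actual derivation in useInsightsDashboard may differ):

```ts
import { useEffect, useMemo, useRef } from 'react';

function useInsightsWorker(memories: { id: string }[]) {
  const workerRef = useRef<Worker | null>(null);

  // A primitive key that changes only when the set/order of ids changes,
  // not whenever a new array reference is created.
  const memoriesKey = useMemo(() => memories.map((m) => m.id).join('|'), [memories]);

  useEffect(() => {
    const worker = new Worker(
      new URL('../workers/insights.worker.ts', import.meta.url),
      { type: 'module' },
    );
    workerRef.current = worker;
    worker.postMessage({ memories });

    return () => {
      worker.terminate();
      workerRef.current = null;
    };
    // Keyed on memoriesKey alone; adding `memories` as well would re-run the
    // effect twice for every update, since the key is derived from it.
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [memoriesKey]);
}
```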
- Fix misleading comment in MemoriesPrefetcher about memory fetch behavior
- Fix performance issue in MemoryList virtual scroll dependency array
- Fix broken optimization in MemoriesPage sort logic to match new default sort
- Clarify Web Worker benefit in useInsightsDashboard comments
- Remove redundant dependency from useInsightsDashboard effect
- Improve error handling in useMemories to show failures with cached data
Summary
This PR makes the memories page faster, smoother, and more intuitive! 🚀
What's improved:
Test plan