feat(marketplace): add item metrics tracking and favorites system #39

linkai0924 merged 2 commits into main
Conversation
Implement preview_count, install_count, and favorite_count as denormalized counters on capability_items, driven by behavior logs and a dedicated item_favorites relation table. Move LogBehavior to a public route so anonymous views are recorded, add idempotent POST/DELETE /items/:id/favorite endpoints behind auth, and return the favorited flag in GetItem for the current user.

Signed-off-by: 林凯90331 <90331@sangfor.com>
Co-authored-by: CoStrict <zgsm@sangfor.com.cn>
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
✅ Files skipped from review due to trivial changes (1)
🚧 Files skipped from review as they are similar to previous changes (2)
📝 Walkthrough
Adds per-item preview/install/favorite counters and a new per-user item_favorites relation.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Client
    participant API
    participant Service
    participant DB
    User->>Client: Click favorite
    Client->>API: POST /api/items/:id/favorite
    API->>API: Authenticate user
    API->>DB: SELECT item (with registry) + access check
    DB-->>API: Item row
    alt access allowed
        API->>Service: FavoriteItem(itemID, userID)
        activate Service
        Service->>DB: BEGIN
        Service->>DB: INSERT INTO item_favorites (item_id, user_id)
        alt inserted
            Service->>DB: UPDATE capability_items SET favorite_count = favorite_count + 1
            Service->>DB: COMMIT
            Service-->>API: (count, wasCreated=true)
        else already exists
            Service->>DB: ROLLBACK
            Service-->>API: (count, wasCreated=false)
        end
        deactivate Service
        API-->>Client: 200 {favorited: true/false, favoriteCount: N}
    else access denied
        API-->>Client: 403
    end
```
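The "already exists" branch in the diagram can race: two concurrent requests may both pass an existence check before either inserts. A concurrency-safe variant lets the unique index arbitrate and treats the duplicate-key insert error as "already favorited". The sketch below is illustrative, not the project's code: `errDuplicateKey` stands in for a GORM-style duplicate-key error (e.g. `gorm.ErrDuplicatedKey`), and the in-memory map stands in for the `item_favorites` table.

```go
package main

import (
	"errors"
	"fmt"
)

// errDuplicateKey stands in for a unique-constraint violation; the
// (item_id, user_id) unique index makes the INSERT itself the race arbiter.
var errDuplicateKey = errors.New("duplicated key")

// favorites is a fake item_favorites table keyed by "itemID/userID".
var favorites = map[string]bool{}

func insertFavorite(itemID, userID string) error {
	key := itemID + "/" + userID
	if favorites[key] {
		return fmt.Errorf("insert item_favorites: %w", errDuplicateKey)
	}
	favorites[key] = true
	return nil
}

// favoriteItem returns created=false (not an error) when the row already
// exists; only the created=true path should increment favorite_count.
func favoriteItem(itemID, userID string) (created bool, err error) {
	if err := insertFavorite(itemID, userID); err != nil {
		if errors.Is(err, errDuplicateKey) {
			return false, nil // lost the race or repeated request: idempotent no-op
		}
		return false, err // real failure: transaction should roll back
	}
	return true, nil
}

func main() {
	first, _ := favoriteItem("item-1", "user-1")
	second, _ := favoriteItem("item-1", "user-1")
	fmt.Println(first, second) // true false
}
```

The same shape keeps the POST endpoint idempotent for clients: a repeated favorite returns 200 with `favorited: true` rather than a 409.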
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 9
🧹 Nitpick comments (1)
server/Makefile (1)
16-17: `swagger-check` isn't actually a check.
`swag fmt` rewrites annotations; it won't tell CI whether generated docs are stale. Either rename this to something like `swagger-fmt`, or make `swagger-check` regenerate `docs/` and fail on a diff so API changes can't land with stale Swagger output.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/Makefile` around lines 16 - 17, The Makefile target swagger-check is misleading because it only runs "$(SWAG) fmt" which mutates annotations instead of failing CI on stale generated docs; update the target named "swagger-check" (or rename it to "swagger-fmt") so it either (a) runs the full generator via "$(SWAG) init" (or the project's docs generation command) into docs/ and then compares the generated output to the repo with "git diff --exit-code docs/" to fail when there are differences, or (b) rename the target to "swagger-fmt" and keep the current "$(SWAG) fmt" behaviour; adjust references to the Makefile target accordingly and keep the SWAG variable and docs/ path names (swagger-check, SWAG, docs/) to locate the correct place to change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@server/cmd/api/main.go`:
- Line 194: The POST route api.POST("/items/:id/behavior") currently uses
recommendHandler.LogBehavior and accepts arbitrary actionType values allowing
clients to modify preview_count/install_count; update LogBehavior to validate
actionType against a strict whitelist (e.g., only allow read-only/view actions
for anonymous users and explicitly map allowed action strings to internal
enums), enforce per-IP rate-limiting on this route (use your existing rate
limiter middleware or add one tied to request.RemoteAddr/IP), reject or 403 any
disallowed actions, and consider adding a separate read-only endpoint (e.g., GET
or POST /items/:id/view) that only records view events with tight validation
instead of exposing the generic write handler.
In `@server/cmd/migrate/main.go`:
- Line 39: The AutoMigrate invocation is currently including
&models.ItemFavorite{}, which causes GORM to create the item_favorites table
before the SQL goose migration runs; remove &models.ItemFavorite{} from the
AutoMigrate arguments (leave other model types intact) so the SQL migration file
remains the authoritative source for creating and configuring the item_favorites
table (you can add a brief comment near the db.AutoMigrate call referencing that
item_favorites is managed by the goose migration).
In `@server/internal/handlers/marketplace_test.go`:
- Around line 603-607: The first POST to /api/items/item-fav2/favorite is
currently ignored, allowing the test to pass if only the retry succeeds; change
the test to capture the response from the initial call (the first postJSON
invocation) and assert that its response code equals http.StatusOK before
issuing the second postJSON retry and asserting that one as well—locate the two
calls to postJSON in marketplace_test.go and replace the discarded first call
with a variable (e.g., w1) and an assertion that w1.Code == http.StatusOK, then
keep the existing second call/assertion for the retry.
In `@server/internal/handlers/recommend.go`:
- Around line 163-205: Update RecommendHandler.LogBehavior to enforce auth and
item visibility: load the item by itemID (using your existing item
service/store) before calling h.behaviorSvc.LogBehavior, and if uid (from
middleware.UserIDKey) is empty only allow actionType ==
models.ActionType("view") and only if the loaded item is public/accessible;
reject other action types for anonymous requests with a 401/403. For
authenticated users allow all valid action types but still verify the item is
visible to that user (reject mutations to private/hidden items). Ensure error
responses are returned before calling h.behaviorSvc.LogBehavior and reference
the existing symbols RecommendHandler.LogBehavior, middleware.UserIDKey,
models.ActionType, and h.behaviorSvc.LogBehavior to locate the changes.
In `@server/internal/handlers/registry_test.go`:
- Around line 1083-1093: The async test is flaky because the background
goroutine in DownloadItem calls database.GetDB() which may open a different
sqlite in-memory connection, so update setupTestDB to open SQLite with shared
in-memory cache (e.g., use DSN with mode=memory&cache=shared) and configure
GORM/sql.DB to use a single connection (SetMaxOpenConns(1), SetMaxIdleConns(1));
this ensures database.DB and any goroutines share the same in-memory state so
the polling loop in the test (which reads models.CapabilityItem via database.DB)
consistently observes the install_count increment.
In `@server/internal/handlers/registry.go`:
- Around line 241-252: The download handler currently logs a behavior with
ActionInstall via services.NewBehaviorService(...).LogBehavior in the anonymous
goroutine (see the LogBehavior call and ActionInstall constant), which inflates
install_count; change this to record a non-counting download event instead by
using a distinct action (e.g., models.ActionDownload) or by adding a metadata
flag (e.g., "count": false) so the backend will not increment install_count, and
keep true installs only on the client-confirmed /install behavior endpoint
(update the ContextDirectAccess metadata if needed to reflect "download" and
ensure downstream aggregation ignores this event for install counts).
In `@server/internal/services/behavior_service.go`:
- Around line 141-159: The current insert path in the favorite flow (checking
existing via tx.Where(...).First(&existing), setting created := err ==
gorm.ErrRecordNotFound, then tx.Create(&favorite) and UpdateColumn on
models.CapabilityItem) can race: concurrent requests can both see no row and one
Create will fail with a duplicate-key error; handle this by treating a
duplicate-key create error as created=false instead of returning an error.
Concretely, catch the unique-constraint/duplicate-key error returned by
tx.Create(&favorite) (the insert on models.ItemFavorite), set created = false
(or re-query the existing row) and proceed without incrementing favorite_count;
only run the UpdateColumn increment when created is true. Also ensure you still
rollback and return errors for non-duplicate failures.
- Around line 84-122: The current create flow omits ItemID/RegistryID then does
unchecked Update calls and a separate counter bump, returning success even if
those follow-ups fail; wrap the DB work in a single transaction (use
s.db.Transaction or db.Transaction) so you Create(log) (without Omit), or
Create(log) then set ItemID/RegistryID on the saved log via
tx.Model(log).Update/Updates checking errors, and then run the counter update
inside the same transaction (call updateItemStats with the saved log.ItemID or
inline the counter updates using tx.Model(&models.CapabilityItem{}).Where("id =
?", log.ItemID).UpdateColumn(...)), returning any error to cause rollback;
ensure updateExperienceScore is adapted to run within the transaction or called
after a successful commit if it must be outside, and propagate errors from all
DB operations instead of ignoring them.
In `@server/migrations/20260401103000_add_item_metrics_and_favorites.sql`:
- Around line 5-12: The migration may be skipped by CREATE TABLE IF NOT EXISTS
when the ORM AutoMigrates models.ItemFavorite without a CapabilityItem relation,
so add a deterministic FK guarantee: after the CREATE TABLE block for
item_favorites, run an explicit ALTER TABLE item_favorites ADD CONSTRAINT (e.g.
fk_item_favorites_item_id) FOREIGN KEY (item_id) REFERENCES capability_items(id)
ON DELETE CASCADE if it doesn't already exist; alternatively (or in addition)
update the models.ItemFavorite struct to declare the CapabilityItem relation so
AutoMigrate creates the same foreign key—ensure the migration includes the
explicit ALTER TABLE statement referencing table item_favorites and column
item_id and the constraint name so the FK is always applied.
---
Nitpick comments:
In `@server/Makefile`:
- Around line 16-17: The Makefile target swagger-check is misleading because it
only runs "$(SWAG) fmt" which mutates annotations instead of failing CI on stale
generated docs; update the target named "swagger-check" (or rename it to
"swagger-fmt") so it either (a) runs the full generator via "$(SWAG) init" (or
the project's docs generation command) into docs/ and then compares the
generated output to the repo with "git diff --exit-code docs/" to fail when
there are differences, or (b) rename the target to "swagger-fmt" and keep the
current "$(SWAG) fmt" behaviour; adjust references to the Makefile target
accordingly and keep the SWAG variable and docs/ path names (swagger-check,
SWAG, docs/) to locate the correct place to change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ffb05c1c-5e33-4338-882a-c4162e46ca25
📒 Files selected for processing (14)
docs/proposals/SKILL_DATA_DESIGN.md
portal
server/Makefile
server/cmd/api/main.go
server/cmd/migrate/main.go
server/internal/handlers/capability_item.go
server/internal/handlers/capability_item_test.go
server/internal/handlers/marketplace_test.go
server/internal/handlers/recommend.go
server/internal/handlers/registry.go
server/internal/handlers/registry_test.go
server/internal/models/models.go
server/internal/services/behavior_service.go
server/migrations/20260401103000_add_item_metrics_and_favorites.sql
```go
		marketplace.GET("/items/new", recommendHandler.GetNewAndNoteworthy)
	}
	api.GET("/items/:id/similar", searchHandler.FindSimilar)
	api.POST("/items/:id/behavior", recommendHandler.LogBehavior)
```
🧩 Analysis chain
🏁 Script executed:
# Find the recommend handler file
fd -t f "recommend" server/ --type f | head -20

Repository: XDfield/costrict-web
Length of output: 147
🏁 Script executed:
# Search for LogBehavior handler implementation
rg -t go "LogBehavior" server/ -A 20

Repository: XDfield/costrict-web
Length of output: 18071
🏁 Script executed:
# Look at the main.go file around line 194 to understand middleware context
sed -n '180,210p' server/cmd/api/main.go

Repository: XDfield/costrict-web
Length of output: 1328
🏁 Script executed:
# Search for behavior types and validation
rg -t go "behavior" server/internal/services/ -i -A 5 | head -50

Repository: XDfield/costrict-web
Length of output: 3844
🏁 Script executed:
# Check ActionType definition and allowed values
rg -t go "ActionType" server/internal/models/ -A 3 | head -40

Repository: XDfield/costrict-web
Length of output: 1548
🏁 Script executed:
# Search for install_count and preview_count updates mentioned in review
rg -t go "install_count|preview_count" server/ -B 3 -A 3

Repository: XDfield/costrict-web
Length of output: 7128
🏁 Script executed:
# Check if there's rate-limiting middleware applied
rg -t go "rate|throttle|limit" server/cmd/api/main.go -i

Repository: XDfield/costrict-web
Length of output: 46
🏁 Script executed:
# Look for any behavior logging logic related to counters
rg -t go "LogBehavior" server/internal/services/behavior_service.go -A 40

Repository: XDfield/costrict-web
Length of output: 2094
🏁 Script executed:
# Check the full middleware setup for the public api group
sed -n '150,200p' server/cmd/api/main.go

Repository: XDfield/costrict-web
Length of output: 2273
🏁 Script executed:
# Verify there's no validation of ActionType values anywhere
rg -t go "ActionType" server/internal/handlers/recommend.go -B 5 -A 10

Repository: XDfield/costrict-web
Length of output: 1197
Do not expose a generic behavior endpoint as public without per-action validation and rate-limiting.
This endpoint accepts any arbitrary actionType string value with no validation or whitelist. The behavior service directly updates preview_count and install_count based on the client-supplied action type, allowing anonymous users to trivially forge item engagement metrics. Add strict ActionType validation (whitelist view-only actions if intended for anonymous use), implement per-IP rate-limiting, and consider a separate read-only endpoint for anonymous view tracking instead of exposing the generic write handler.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/cmd/api/main.go` at line 194, The POST route
api.POST("/items/:id/behavior") currently uses recommendHandler.LogBehavior and
accepts arbitrary actionType values allowing clients to modify
preview_count/install_count; update LogBehavior to validate actionType against a
strict whitelist (e.g., only allow read-only/view actions for anonymous users
and explicitly map allowed action strings to internal enums), enforce per-IP
rate-limiting on this route (use your existing rate limiter middleware or add
one tied to request.RemoteAddr/IP), reject or 403 any disallowed actions, and
consider adding a separate read-only endpoint (e.g., GET or POST
/items/:id/view) that only records view events with tight validation instead of
exposing the generic write handler.
```go
	&models.CapabilityAsset{},
	&models.CapabilityArtifact{},
	&models.BehaviorLog{},
	&models.ItemFavorite{},
```
🛠️ Refactor suggestion | 🟠 Major
Don’t let AutoMigrate create item_favorites ahead of the SQL migration.
This PR already introduces the favorites schema via goose, but AutoMigrate now creates the table before goose runs. That gives fresh installs a different migration path from upgraded ones and can silently skip SQL-only defaults/constraints once the table already exists. Prefer keeping item_favorites owned by the migration file instead of adding it here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/cmd/migrate/main.go` at line 39, The AutoMigrate invocation is
currently including &models.ItemFavorite{}, which causes GORM to create the
item_favorites table before the SQL goose migration runs; remove
&models.ItemFavorite{} from the AutoMigrate arguments (leave other model types
intact) so the SQL migration file remains the authoritative source for creating
and configuring the item_favorites table (you can add a brief comment near the
db.AutoMigrate call referencing that item_favorites is managed by the goose
migration).
```go
	postJSON(r, "/api/items/item-fav2/favorite", map[string]interface{}{})
	w := postJSON(r, "/api/items/item-fav2/favorite", map[string]interface{}{})
	if w.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
	}
```
Assert the first favorite request succeeds.
This test can still pass if the initial POST /favorite fails and only the retry succeeds, which hides regressions in the first write path.
Suggested tightening

```diff
 r := newMarketplaceRouter("user-fav2")
-postJSON(r, "/api/items/item-fav2/favorite", map[string]interface{}{})
+first := postJSON(r, "/api/items/item-fav2/favorite", map[string]interface{}{})
+if first.Code != http.StatusOK {
+	t.Fatalf("expected first favorite to return 200, got %d: %s", first.Code, first.Body.String())
+}
 w := postJSON(r, "/api/items/item-fav2/favorite", map[string]interface{}{})
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/internal/handlers/marketplace_test.go` around lines 603 - 607, The
first POST to /api/items/item-fav2/favorite is currently ignored, allowing the
test to pass if only the retry succeeds; change the test to capture the response
from the initial call (the first postJSON invocation) and assert that its
response code equals http.StatusOK before issuing the second postJSON retry and
asserting that one as well—locate the two calls to postJSON in
marketplace_test.go and replace the discarded first call with a variable (e.g.,
w1) and an assertion that w1.Code == http.StatusOK, then keep the existing
second call/assertion for the retry.
```go
func (h *RecommendHandler) LogBehavior(c *gin.Context) {
	itemID := c.Param("id")

	var req struct {
		ActionType  string                 `json:"actionType" binding:"required"`
		Context     string                 `json:"context"`
		SearchQuery string                 `json:"searchQuery"`
		SessionID   string                 `json:"sessionId"`
		DurationMs  int64                  `json:"durationMs"`
		Rating      int                    `json:"rating"`
		Feedback    string                 `json:"feedback"`
		Metadata    map[string]interface{} `json:"metadata"`
	}

	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid request: " + err.Error()})
		return
	}

	// Get user ID
	userID, _ := c.Get(middleware.UserIDKey)
	uid, _ := userID.(string)

	behaviorReq := services.LogBehaviorRequest{
		UserID:      uid,
		ItemID:      itemID,
		ActionType:  models.ActionType(req.ActionType),
		Context:     models.ContextType(req.Context),
		SearchQuery: req.SearchQuery,
		SessionID:   req.SessionID,
		DurationMs:  req.DurationMs,
		Rating:      req.Rating,
		Feedback:    req.Feedback,
		Metadata:    req.Metadata,
	}

	log, err := h.behaviorSvc.LogBehavior(c.Request.Context(), behaviorReq)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusCreated, log)
}
```
Public LogBehavior needs auth and visibility guards.
This handler now accepts anonymous traffic without checking item visibility or restricting the action type. An unauthenticated caller can submit install, success/fail, ratings, or feedback and mutate item metrics; if they know a private item ID, they can mutate hidden-item stats too. If the public route is only meant for anonymous previews, load the item here and allow uid == "" only for accessible public items with actionType == "view".
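The guard this comment describes can be isolated as a pure decision function. In the sketch below, item visibility is reduced to two booleans; the real handler would load the item and apply the project's access rules, and the error names are hypothetical, not existing symbols.

```go
package main

import (
	"errors"
	"fmt"
)

var (
	errForbidden    = errors.New("forbidden")
	errUnauthorized = errors.New("authentication required")
)

// authorizeBehavior decides whether a (possibly anonymous) caller may log
// actionType against an item. Anonymous callers may only log "view" on
// public items; authenticated callers may log any action, but only against
// items visible to them.
func authorizeBehavior(uid, actionType string, itemPublic, visibleToUser bool) error {
	if uid == "" {
		if actionType != "view" {
			return errUnauthorized // anonymous installs/ratings/feedback rejected
		}
		if !itemPublic {
			return errForbidden // no stats mutation on private items
		}
		return nil
	}
	if !visibleToUser {
		return errForbidden
	}
	return nil
}

func main() {
	fmt.Println(authorizeBehavior("", "view", true, false))      // <nil>
	fmt.Println(authorizeBehavior("", "install", true, false))   // authentication required
	fmt.Println(authorizeBehavior("u1", "install", false, true)) // <nil>
}
```

Running this check before `h.behaviorSvc.LogBehavior` keeps the public route useful for anonymous view tracking without exposing the counter-mutating actions.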
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/internal/handlers/recommend.go` around lines 163 - 205, Update
RecommendHandler.LogBehavior to enforce auth and item visibility: load the item
by itemID (using your existing item service/store) before calling
h.behaviorSvc.LogBehavior, and if uid (from middleware.UserIDKey) is empty only
allow actionType == models.ActionType("view") and only if the loaded item is
public/accessible; reject other action types for anonymous requests with a
401/403. For authenticated users allow all valid action types but still verify
the item is visible to that user (reject mutations to private/hidden items).
Ensure error responses are returned before calling h.behaviorSvc.LogBehavior and
reference the existing symbols RecommendHandler.LogBehavior,
middleware.UserIDKey, models.ActionType, and h.behaviorSvc.LogBehavior to locate
the changes.
```go
	deadline := time.Now().Add(500 * time.Millisecond)
	for {
		var item models.CapabilityItem
		if err := database.DB.First(&item, "id = ?", "item-byid").Error; err == nil && item.InstallCount == 1 {
			break
		}
		if time.Now().After(deadline) {
			t.Fatal("expected install_count to be incremented after download")
		}
		time.Sleep(10 * time.Millisecond)
	}
```
🧩 Analysis chain
🏁 Script executed:
# First, let's look at the test file to understand the structure
wc -l server/internal/handlers/registry_test.go

Repository: XDfield/costrict-web
Length of output: 110
🏁 Script executed:
# Read the specific section mentioned in the review (lines 1083-1093)
sed -n '1083,1093p' server/internal/handlers/registry_test.go

Repository: XDfield/costrict-web
Length of output: 421
🏁 Script executed:
# Find the setupTestDB function
rg "func setupTestDB" server/internal/handlers/registry_test.go -A 15

Repository: XDfield/costrict-web
Length of output: 513
🏁 Script executed:
# Check if there's a DownloadItem handler and BehaviorService
rg "DownloadItem|BehaviorService" server/internal/handlers/registry_test.go -B 2 -A 5

Repository: XDfield/costrict-web
Length of output: 1018
🏁 Script executed:
# Find the DownloadItem handler implementation
rg "func DownloadItem" server/internal/handlers/ -A 30

Repository: XDfield/costrict-web
Length of output: 2124
🏁 Script executed:
# Look for BehaviorService and where it's called
rg "BehaviorService|go .*BehaviorService" server/internal/handlers/ -B 2 -A 5

Repository: XDfield/costrict-web
Length of output: 2255
🏁 Script executed:
# Check the complete test that includes lines 1083-1093
sed -n '1050,1100p' server/internal/handlers/registry_test.go

Repository: XDfield/costrict-web
Length of output: 1758
🏁 Script executed:
# Look for any goroutine spawning in the handler
rg "go " server/internal/handlers/registry.go -B 2 -A 2

Repository: XDfield/costrict-web
Length of output: 262
🏁 Script executed:
# Check the GetDB function to understand connection handling
rg "func GetDB|func \(.*\) GetDB" server/ -A 5

Repository: XDfield/costrict-web
Length of output: 466
🏁 Script executed:
# Look at the database initialization
rg "sqlite.Open|SetMaxOpenConns|SetMaxIdleConns" server/ -B 2 -A 2

Repository: XDfield/costrict-web
Length of output: 439
🏁 Script executed:
# Check if there's any connection pool configuration
find server -name "*.go" -type f | xargs grep -l "database\|gorm" | head -5 | xargs grep "MaxOpen\|MaxIdle\|cache"

Repository: XDfield/costrict-web
Length of output: 147
🏁 Script executed:
# Check how the test sets up the global database.DB
sed -n '1050,1075p' server/internal/handlers/registry_test.go

Repository: XDfield/costrict-web
Length of output: 1019
🏁 Script executed:
# Search for where database.DB is assigned in setupTestDB
rg "database.DB\s*=" server/internal/handlers/registry_test.go -B 2 -A 2

Repository: XDfield/costrict-web
Length of output: 125
🏁 Script executed:
# Check the full setupTestDB function
rg "func setupTestDB" server/internal/handlers/registry_test.go -A 60 | head -70

Repository: XDfield/costrict-web
Length of output: 2012
🏁 Script executed:
# Check GORM's default connection pool settings
web_search
What are GORM's default MaxOpenConns and MaxIdleConns settings for SQLite?

Repository: XDfield/costrict-web
Length of output: 183
🌐 Web query:
GORM SQLite default MaxOpenConns connection pool settings
💡 Result:
GORM with SQLite uses Go's database/sql package under the hood, which provides the connection pool. GORM does not set any custom defaults for MaxOpenConns or other pool settings; it inherits the standard library's defaults. The default for MaxOpenConns in database/sql is 0, meaning unlimited open connections (the pool creates new connections as needed without an upper limit). To access and configure the underlying *sql.DB (and thus the pool settings) after opening a GORM connection: sqlDB, err := db.DB if err != nil { // handle error } sqlDB.SetMaxOpenConns(100) // example: limit to 100 open connections
Citations:
- 1: https://gorm.io/docs/generic_interface.html
- 2: https://gorm.io/docs/connecting_to_the_database.html
- 3: https://go.dev/doc/database/manage-connections
- 4: https://pkg.go.dev/database/sql/
- 5: https://pkg.go.dev/database/sql@go1.24.2
🌐 Web query:
SQLite :memory: database per connection vs shared cache
💡 Result:
In SQLite, using ":memory:" for an in-memory database creates a distinct, private database per connection. Each connection gets its own independent in-memory database that is not shared with other connections, even within the same process. The database is deleted when the connection closes. To share the same in-memory database across multiple connections in the same process, use a URI filename with the shared cache parameter: "file::memory:?cache=shared". All connections opened this way share the same database content and pager cache. The database is automatically deleted when the last connection closes. For multiple distinct shareable in-memory databases, use named ones like "file:memdb1?mode=memory&cache=shared". Connections using the exact same name share that specific database. Shared cache mode has a special locking model (transaction-level by default, with limitations like no WAL support in shared :memory: without additional setup) and is discouraged by SQLite developers for general use due to complexity and potential issues, though it works reliably for in-memory sharing within a process. Shared cache only works within the same process, not across processes. Per-connection ":memory:" uses private cache (each connection has its own pager cache), while shared cache mode shares the pager cache and database content across specified connections.
Citations:
- 1: https://www.sqlite.org/inmemorydb.html
- 2: https://sqlite.org/sharedcache.html
- 3: https://sqlite.org/forum/info/d33f5d5dd1168b52
- 4: https://stackoverflow.com/questions/36447766/multiple-sqlite-connections-to-a-database-in-memory
- 5: https://stackoverflow.com/questions/58649529/how-to-create-multiple-memory-databases-in-sqlite3
- 6: https://sqlite.org/c3ref/enable_shared_cache.html
- 7: https://sqlite.org/forum/info/5489305f601b8c3f93417e24c1ebd04a645dc9f655ab78d837b57425af1aa869
- 8: https://sqlite.org/forum/info/760f6e655e84af72
- 9: https://www.reddit.com/r/sqlite/comments/14z6vud/is_sqlite_page_cache_shared_or_not/
🏁 Script executed:
# Check LogBehavior implementation to confirm it updates install_count
rg "func.*LogBehavior|install_count|InstallCount" server/internal/services/ -B 2 -A 8

Repository: XDfield/costrict-web
Length of output: 8122
This async assertion is flaky with sqlite.Open(":memory:").
The background goroutine spawned by the DownloadItem handler calls database.GetDB(), which may return a different connection from the pool than the main test thread. With GORM's default unlimited connection pooling and SQLite's per-connection in-memory databases, that goroutine sees an empty :memory: database and writes the install_count increment there. The main test thread polls its own connection's database, which remains unchanged, causing the assertion to timeout intermittently.
Stabilize setupTestDB by enabling shared cache and limiting connections to a single instance:
Fix for setupTestDB

```diff
-	db, err := gorm.Open(sqlite.Open(":memory:"), &gorm.Config{
+	db, err := gorm.Open(sqlite.Open("file::memory:?cache=shared"), &gorm.Config{
 		Logger: logger.Default.LogMode(logger.Silent),
 	})
+	sqlDB, _ := db.DB()
+	sqlDB.SetMaxOpenConns(1)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/internal/handlers/registry_test.go` around lines 1083 - 1093, The
async test is flaky because the background goroutine in DownloadItem calls
database.GetDB() which may open a different sqlite in-memory connection, so
update setupTestDB to open SQLite with shared in-memory cache (e.g., use DSN
with mode=memory&cache=shared) and configure GORM/sql.DB to use a single
connection (SetMaxOpenConns(1), SetMaxIdleConns(1)); this ensures database.DB
and any goroutines share the same in-memory state so the polling loop in the
test (which reads models.CapabilityItem via database.DB) consistently observes
the install_count increment.
```go
	go func(item models.CapabilityItem, userID string) {
		_, _ = services.NewBehaviorService(database.GetDB()).LogBehavior(context.Background(), services.LogBehaviorRequest{
			UserID:     userID,
			ItemID:     item.ID,
			RegistryID: item.RegistryID,
			ActionType: models.ActionInstall,
			Context:    models.ContextDirectAccess,
			Metadata: map[string]interface{}{
				"source": "download",
			},
		})
	}(item, userID)
```
/download still inflates install_count.
Routing this through LogBehavior changes the plumbing, but every download still emits ActionInstall. That makes install_count depend on retries/crawlers, and it will double-count once the client also sends the explicit install behavior the proposal describes. Keep installs on the client-confirmed behavior endpoint, or log a separate non-counted download event here.
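One way to express the proposed split is a single mapping from action type to the counter it may bump, with download mapping to none. `ActionDownload` here is the comment's suggestion, not an existing constant, and the string values are assumptions about the project's action names.

```go
package main

import "fmt"

// counterFor returns the capability_items column an action increments, or
// "" when the action is recorded only as an analytics event. "download" is
// deliberately non-counting so retries and crawlers cannot inflate
// install_count; only the client-confirmed "install" bumps it.
func counterFor(action string) string {
	switch action {
	case "view":
		return "preview_count"
	case "install":
		return "install_count"
	case "favorite":
		return "favorite_count"
	case "download":
		return "" // logged for analytics, never counted
	default:
		return ""
	}
}

func main() {
	fmt.Println(counterFor("download") == "") // true: no counter bump
	fmt.Println(counterFor("install"))        // install_count
}
```

Centralizing the mapping also gives downstream aggregation one place to decide which events count.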
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/internal/handlers/registry.go` around lines 241 - 252, The download
handler currently logs a behavior with ActionInstall via
services.NewBehaviorService(...).LogBehavior in the anonymous goroutine (see the
LogBehavior call and ActionInstall constant), which inflates install_count;
change this to record a non-counting download event instead by using a distinct
action (e.g., models.ActionDownload) or by adding a metadata flag (e.g.,
"count": false) so the backend will not increment install_count, and keep true
installs only on the client-confirmed /install behavior endpoint (update the
ContextDirectAccess metadata if needed to reflect "download" and ensure
downstream aggregation ignores this event for install counts).
```go
	// Use Omit to skip empty UUID fields
	result := s.db.Omit("ItemID", "RegistryID").Create(log)
	if result.Error != nil {
		return nil, result.Error
	}

	// Manually set ItemID and RegistryID if they have valid values
	if log.ItemID != "" {
		s.db.Model(log).Update("item_id", log.ItemID)
	}
	if log.RegistryID != "" {
		s.db.Model(log).Update("registry_id", log.RegistryID)
	}

	// Keep aggregate counters in sync so list/detail APIs can return them directly.
	s.updateItemStats(req.ItemID, req.ActionType)

	return log, nil
}

// updateItemStats updates item statistics based on behavior
func (s *BehaviorService) updateItemStats(itemID string, actionType models.ActionType) {
	db := database.GetDB()

	switch actionType {
	case models.ActionView:
		db.Model(&models.CapabilityItem{}).
			Where("id = ?", itemID).
			UpdateColumn("preview_count", gorm.Expr("preview_count + 1"))

	case models.ActionInstall:
		db.Model(&models.CapabilityItem{}).
			Where("id = ?", itemID).
			UpdateColumn("install_count", gorm.Expr("install_count + 1"))

	case models.ActionSuccess, models.ActionFail:
		s.updateExperienceScore(itemID)
	}
}
```
Don't return success after best-effort log/counter writes.
Create omits ItemID/RegistryID, patches them back with unchecked Update calls, and then bumps counters afterward. Any failure in those follow-up writes still returns 201, which can leave behavior_logs and capability_items out of sync. Persist the final row shape and counter update inside one checked transaction, and drive the counter update from the saved log.ItemID rather than raw request input.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/internal/services/behavior_service.go` around lines 84 - 122, The
current create flow omits ItemID/RegistryID then does unchecked Update calls and
a separate counter bump, returning success even if those follow-ups fail; wrap
the DB work in a single transaction (use s.db.Transaction or db.Transaction) so
you Create(log) (without Omit), or Create(log) then set ItemID/RegistryID on the
saved log via tx.Model(log).Update/Updates checking errors, and then run the
counter update inside the same transaction (call updateItemStats with the saved
log.ItemID or inline the counter updates using
tx.Model(&models.CapabilityItem{}).Where("id = ?",
log.ItemID).UpdateColumn(...)), returning any error to cause rollback; ensure
updateExperienceScore is adapted to run within the transaction or called after a
successful commit if it must be outside, and propagate errors from all DB
operations instead of ignoring them.
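The transactional shape the prompt describes can be shown with a self-contained sketch. This Python/sqlite3 example (the project uses Go with GORM; table and column names are simplified assumptions) commits the log insert and the counter bump together, so a failure in either statement rolls back both:

```python
import sqlite3

def log_behavior(conn: sqlite3.Connection, item_id: str, action: str) -> None:
    """Insert a behavior log row and bump the matching counter atomically.

    Using the connection as a context manager commits on success and rolls
    back on any exception, so behavior_logs and capability_items cannot
    drift apart. (Sketch only; schema names are assumptions.)
    """
    with conn:
        conn.execute(
            "INSERT INTO behavior_logs (item_id, action) VALUES (?, ?)",
            (item_id, action),
        )
        if action == "install":
            conn.execute(
                "UPDATE capability_items "
                "SET install_count = install_count + 1 WHERE id = ?",
                (item_id,),
            )
```

Because errors propagate out of the `with` block, the caller cannot silently return 201 after a failed follow-up write, which is the core of the review finding.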
```go
	var existing models.ItemFavorite
	err := tx.Where("item_id = ? AND user_id = ?", itemID, userID).First(&existing).Error
	if err != nil && err != gorm.ErrRecordNotFound {
		tx.Rollback()
		return 0, false, err
	}

	created := err == gorm.ErrRecordNotFound
	if created {
		if err := tx.Create(&favorite).Error; err != nil {
			tx.Rollback()
			return 0, false, err
		}
		if err := tx.Model(&models.CapabilityItem{}).
			Where("id = ?", itemID).
			UpdateColumn("favorite_count", gorm.Expr("favorite_count + 1")).Error; err != nil {
			tx.Rollback()
			return 0, false, err
		}
```
The favorite path still races on duplicate requests.
Two concurrent requests can both observe “no existing favorite”, one insert wins, and the other hits the unique index and returns 500. That breaks the endpoint’s idempotent contract on retries/double-clicks. Use insert-if-absent semantics, or treat duplicate-key conflicts as created=false, before incrementing favorite_count.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/internal/services/behavior_service.go` around lines 141 - 159, The
current insert path in the favorite flow (checking existing via
tx.Where(...).First(&existing), setting created := err ==
gorm.ErrRecordNotFound, then tx.Create(&favorite) and UpdateColumn on
models.CapabilityItem) can race: concurrent requests can both see no row and one
Create will fail with a duplicate-key error; handle this by treating a
duplicate-key create error as created=false instead of returning an error.
Concretely, catch the unique-constraint/duplicate-key error returned by
tx.Create(&favorite) (the insert on models.ItemFavorite), set created = false
(or re-query the existing row) and proceed without incrementing favorite_count;
only run the UpdateColumn increment when created is true. Also ensure you still
rollback and return errors for non-duplicate failures.
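The insert-if-absent semantics can be made concrete with a small runnable sketch. This Python/sqlite3 example (the project uses Go/GORM, where `clause.OnConflict{DoNothing: true}` plays the same role; the schema here is a simplified assumption) treats a duplicate as created=false and increments the counter only on a real insert:

```python
import sqlite3

def favorite_item(conn: sqlite3.Connection, item_id: str, user_id: str) -> bool:
    """Idempotent favorite: returns True only when a new row was created.

    SQLite's INSERT OR IGNORE stands in for Postgres ON CONFLICT DO NOTHING,
    so a concurrent duplicate becomes created=False instead of a
    unique-violation error surfacing as a 500.
    """
    with conn:  # commit on success, rollback on error
        cur = conn.execute(
            "INSERT OR IGNORE INTO item_favorites (item_id, user_id) "
            "VALUES (?, ?)",
            (item_id, user_id),
        )
        created = cur.rowcount == 1  # 0 when the unique index absorbed a dup
        if created:
            conn.execute(
                "UPDATE capability_items "
                "SET favorite_count = favorite_count + 1 WHERE id = ?",
                (item_id,),
            )
    return created
```

Retries and double-clicks then converge on the same state: one favorite row, one counter increment, and a truthful `created` flag for the response body.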
```sql
CREATE TABLE IF NOT EXISTS item_favorites (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    item_id UUID NOT NULL REFERENCES capability_items(id) ON DELETE CASCADE,
    user_id VARCHAR(191) NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE UNIQUE INDEX IF NOT EXISTS idx_item_favorite ON item_favorites(item_id, user_id);
```
CREATE TABLE IF NOT EXISTS can skip the FK/cascade here.
server/cmd/migrate/main.go AutoMigrates models.ItemFavorite, but the model does not declare the CapabilityItem relation. If that path creates item_favorites first, this block no-ops and REFERENCES capability_items(id) ON DELETE CASCADE never gets applied. Add an explicit ALTER TABLE ... ADD CONSTRAINT step here, or mirror the FK in the model so both schema paths stay aligned.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/migrations/20260401103000_add_item_metrics_and_favorites.sql` around
lines 5 - 12, The migration may be skipped by CREATE TABLE IF NOT EXISTS when
the ORM AutoMigrates models.ItemFavorite without a CapabilityItem relation, so
add a deterministic FK guarantee: after the CREATE TABLE block for
item_favorites, run an explicit ALTER TABLE item_favorites ADD CONSTRAINT (e.g.
fk_item_favorites_item_id) FOREIGN KEY (item_id) REFERENCES capability_items(id)
ON DELETE CASCADE if it doesn't already exist; alternatively (or in addition)
update the models.ItemFavorite struct to declare the CapabilityItem relation so
AutoMigrate creates the same foreign key—ensure the migration includes the
explicit ALTER TABLE statement referencing table item_favorites and column
item_id and the constraint name so the FK is always applied.
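On Postgres, `ALTER TABLE ... ADD CONSTRAINT` has no `IF NOT EXISTS`, so one idempotent way to guarantee the FK regardless of which path created the table is a `DO` block that swallows `duplicate_object`. The constraint name below is an assumed convention, not taken from the migration:

```sql
-- Idempotently ensure the FK exists even if AutoMigrate created the table first.
-- (Sketch: fk_item_favorites_item_id is a hypothetical constraint name.)
DO $$
BEGIN
    ALTER TABLE item_favorites
        ADD CONSTRAINT fk_item_favorites_item_id
        FOREIGN KEY (item_id) REFERENCES capability_items(id) ON DELETE CASCADE;
EXCEPTION
    WHEN duplicate_object THEN NULL;  -- constraint already present
END $$;
```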
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/build-images.yaml:
- Around line 103-106: Validate and sanitize the user-provided portal_branch
before using it in the BRANCH variable and in the git commands: enforce a safe
branch-name pattern (e.g., allow only alphanumerics, hyphens, underscores and
slashes via a regex) and if validation fails fall back to 'dev' or exit with an
error; apply this check where BRANCH is set and before the git fetch/checkout
steps so the BRANCH value used by git fetch origin
"+refs/heads/${BRANCH}:refs/remotes/origin/${BRANCH}" and git checkout -B
"${BRANCH}" "origin/${BRANCH}" is guaranteed safe and cannot inject shell
metacharacters.
- Around line 101-107: The "Update portal submodule to target branch" step
currently runs for every matrix service; restrict it to only run for portal
builds by adding a conditional to the step (use the step name "Update portal
submodule to target branch") that checks the matrix/service or build input for
portal (e.g., if: matrix.service == 'portal' or an equivalent expression that
checks the requested services), leaving the existing BRANCH, cd portal, git
fetch and git checkout commands intact so non-portal matrix entries skip the
submodule update.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 576eae80-67b2-480a-b803-47f7197fc228
📒 Files selected for processing (1)
.github/workflows/build-images.yaml
```yaml
      - name: Update portal submodule to target branch
        run: |
          BRANCH="${{ inputs.portal_branch || 'dev' }}"
          cd portal
          git fetch origin '+refs/heads/dev:refs/remotes/origin/dev'
          git checkout -B dev origin/dev
          git fetch origin "+refs/heads/${BRANCH}:refs/remotes/origin/${BRANCH}"
          git checkout -B "${BRANCH}" "origin/${BRANCH}"
          echo "portal submodule branch: ${BRANCH}"
```
Scope the portal submodule checkout to portal builds only.
This step currently runs for every matrix service. Non-portal builds can fail due to portal branch issues even when portal is not requested.
Suggested fix

```diff
 - name: Update portal submodule to target branch
+  if: matrix.service == 'portal'
   run: |
     BRANCH="${{ inputs.portal_branch || 'dev' }}"
     cd portal
     git fetch origin "+refs/heads/${BRANCH}:refs/remotes/origin/${BRANCH}"
     git checkout -B "${BRANCH}" "origin/${BRANCH}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/build-images.yaml around lines 101 - 107, The "Update
portal submodule to target branch" step currently runs for every matrix service;
restrict it to only run for portal builds by adding a conditional to the step
(use the step name "Update portal submodule to target branch") that checks the
matrix/service or build input for portal (e.g., if: matrix.service == 'portal'
or an equivalent expression that checks the requested services), leaving the
existing BRANCH, cd portal, git fetch and git checkout commands intact so
non-portal matrix entries skip the submodule update.
```shell
BRANCH="${{ inputs.portal_branch || 'dev' }}"
cd portal
git fetch origin '+refs/heads/dev:refs/remotes/origin/dev'
git checkout -B dev origin/dev
git fetch origin "+refs/heads/${BRANCH}:refs/remotes/origin/${BRANCH}"
git checkout -B "${BRANCH}" "origin/${BRANCH}"
```
Validate portal_branch before using it in git commands.
workflow_dispatch input is user-provided. Without validation, malformed values can break the run and increase command-safety risk.
Suggested hardening

```diff
 run: |
   BRANCH="${{ inputs.portal_branch || 'dev' }}"
+  if ! git check-ref-format --branch "$BRANCH" >/dev/null 2>&1; then
+    echo "Invalid portal_branch: $BRANCH" >&2
+    exit 1
+  fi
   cd portal
   git fetch origin "+refs/heads/${BRANCH}:refs/remotes/origin/${BRANCH}"
   git checkout -B "${BRANCH}" "origin/${BRANCH}"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
BRANCH="${{ inputs.portal_branch || 'dev' }}"
if ! git check-ref-format --branch "$BRANCH" >/dev/null 2>&1; then
  echo "Invalid portal_branch: $BRANCH" >&2
  exit 1
fi
cd portal
git fetch origin "+refs/heads/${BRANCH}:refs/remotes/origin/${BRANCH}"
git checkout -B "${BRANCH}" "origin/${BRANCH}"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/build-images.yaml around lines 103 - 106, Validate and
sanitize the user-provided portal_branch before using it in the BRANCH variable
and in the git commands: enforce a safe branch-name pattern (e.g., allow only
alphanumerics, hyphens, underscores and slashes via a regex) and if validation
fails fall back to 'dev' or exit with an error; apply this check where BRANCH is
set and before the git fetch/checkout steps so the BRANCH value used by git
fetch origin "+refs/heads/${BRANCH}:refs/remotes/origin/${BRANCH}" and git
checkout -B "${BRANCH}" "origin/${BRANCH}" is guaranteed safe and cannot inject
shell metacharacters.
Allow specifying the portal submodule branch when triggering the build-images workflow, defaulting to 'dev' for backward compatibility. Signed-off-by: 林凯90331 <90331@sangfor.com> Co-authored-by: CoStrict <zgsm@sangfor.com.cn>
Force-pushed c6d94a4 to c72007b