
KMS-649: PR adds Redis-backed response caching for hot read endpoints (/concepts*, /concept*, /tree*) with a scheduled cache-prime Lambda, plus deployment/runtime updates to support Redis in AWS and local development. #94

Merged
cgokey merged 42 commits into main from KMS-649 on Feb 27, 2026

Conversation

cgokey (Contributor) commented Feb 23, 2026

Overview

What is the feature?

This PR adds Redis-backed response caching for hot read endpoints (/concepts*, /concept*, /tree*) with a scheduled cache-prime Lambda, plus deployment/runtime updates to support Redis in AWS and local development.

What is the Solution?

I introduced Redis cache key builders and cache read/write helpers in shared code, wired cache lookups/writes into getConcept, getConcepts, and getKeywordsTree, and added a scheduled primeConceptsCache Lambda that validates a published-version marker, clears old cache keys, and primes target routes.

On the infrastructure side, I added a dedicated RedisStack, wired endpoint env vars into Lambdas, updated cron scheduling, and removed old API Gateway cache helper code.

For local/dev tooling, I added Redis scripts (redis:start|stop|connect|memory_used), a local invoke script for the prime Lambda, updated the RDF4J helper scripts, and consolidated the local docs into the root README.
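The Redis scripts above might be wired into package.json roughly like this (a Docker-based sketch; the container name, image tag, and exact script bodies in the PR may differ):

```json
{
  "scripts": {
    "redis:start": "docker run -d --name kms-redis -p 6379:6379 redis:7",
    "redis:stop": "docker stop kms-redis && docker rm kms-redis",
    "redis:connect": "docker exec -it kms-redis redis-cli",
    "redis:memory_used": "docker exec kms-redis redis-cli INFO memory"
  }
}
```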

What areas of the application does this impact?

This change impacts the KMS read path end to end: API handlers, shared runtime logic, infrastructure, local tooling, and tests.

  • Handlers: getConcept, getConcepts, and getKeywordsTree now use Redis-backed response caching with shared logger-based cache observability.
  • Shared code: new cache key and cache access modules support concept, concepts, and tree responses, and the new prime workflow modules (primeConceptsCache, primeConcepts, primeKeywordTrees) add version-marker based cache refresh behavior and route warming.
  • CDK: Redis is introduced as a first-class deployment component through RedisStack, Lambda environment wiring, and scheduled invocation changes, while older API Gateway cache helper wiring was removed.
  • Local tooling and docs: new Redis helper scripts and prime invocation tooling, plus documentation updates in the root README that replace the separate local invoke README.
  • Tests: coverage was expanded and updated across all touched handlers and shared modules to validate cache hit/miss behavior, error handling paths, and prime execution logic.
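The version-marker based refresh in the prime workflow can be sketched as follows. This is an illustrative outline of the check-clear-warm-mark sequence described above, with all dependencies injected and all identifiers hypothetical; the real primeConceptsCache will differ in its wiring.

```javascript
// Skip priming when the cache already reflects the published version;
// otherwise clear stale keys, warm each target route (the handlers'
// own read-through caching repopulates Redis), then record the marker.
const primeConceptsCache = async ({
  getMarker,       // async () -> currently cached version marker
  setMarker,       // async (version) -> persist new marker
  clearStaleKeys,  // async () -> drop old cache entries
  warmRoute,       // async (route) -> fetch a route to repopulate its entry
  publishedVersion,
  routes,
}) => {
  if ((await getMarker()) === publishedVersion) {
    return { skipped: true, primed: 0 };
  }
  await clearStaleKeys();
  let primed = 0;
  for (const route of routes) {
    await warmRoute(route);
    primed += 1;
  }
  await setMarker(publishedVersion);
  return { skipped: false, primed };
};
```

Making the second scheduled run a no-op (skipped: true) when the marker is current is what keeps the cron-driven Lambda cheap between publishes.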

Testing

Environment for testing: local SAM + Docker Redis + local RDF4J (and branch deploy validation in SIT if needed)
Collection to test with: KMS concepts/schemes pulled via rdf4j:pull and loaded with rdf4j:setup

Install deps:
npm install

Start RDF4J and load data:
npm run rdf4j:build
npm run rdf4j:create-network
npm run rdf4j:start
npm run rdf4j:pull
npm run rdf4j:setup

Start local Redis:
npm run redis:start

Start local API:
npm run start-local

Verify cache behavior:
Call /concepts twice and confirm first miss / second hit in logs
Call /concept/{id} twice and confirm hit/miss logging
Call /tree/concept_scheme/all twice and confirm hit/miss logging

Invoke cache-prime locally:
npm run prime-cache:invoke-local
Verify marker check, cache clear, and priming summaries in logs

Checklist

  • I have added automated tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings

Christopher D. Gokey added 15 commits February 21, 2026 19:51
…setup/pull to a single full-file loader, add start-local:watch, and harden SPARQL read paths with shared single-flight/adaptive retry plus aligned test updates.
…ad timeout defaults to 30s, and wire retry/timeout env vars through CDK+Bamboo
…nt and propagate SPARQL/concepts timeout-retry env defaults through CDK and Bamboo
… cache settings when cache cluster is disabled
codecov-commenter commented Feb 23, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 99.64%. Comparing base (744646f) to head (1f8b3e7).

Additional details and impacted files
@@            Coverage Diff             @@
##             main      #94      +/-   ##
==========================================
+ Coverage   99.59%   99.64%   +0.04%     
==========================================
  Files         143      148       +5     
  Lines        2478     2805     +327     
  Branches      608      683      +75     
==========================================
+ Hits         2468     2795     +327     
  Misses          9        9              
  Partials        1        1              


cgokey changed the title on Feb 25, 2026 from "KMS-649: Hardens KMS read-path behavior under burst/retry traffic and prevents /concepts* traffic spikes from saturating RDF4J and causing cascading timeouts." to "KMS-649: PR adds Redis-backed response caching for hot read endpoints (/concepts*, /concept*, /tree*) with a scheduled cache-prime Lambda, plus deployment/runtime updates to support Redis in AWS and local development."
htranho (Contributor) commented Feb 26, 2026

Add the step npm run rdf4j:build to 'Testing', before npm run rdf4j:create-network.

cgokey merged commit c521f11 into main on Feb 27, 2026
6 checks passed
cgokey deleted the KMS-649 branch on February 27, 2026 at 14:07