A thread-safe, in-memory LRU (Least Recently Used) Cache built from scratch in core Java.
Supports TTL-based expiry, eviction tracking, and concurrent access — zero external libraries required.
- O(1) get and put using a HashMap + doubly-linked list
- LRU eviction — least recently used entry is dropped when capacity is full
- TTL expiry — entries auto-expire after a configurable duration
- Thread safety via `ReentrantReadWriteLock` — parallel reads, exclusive writes
- Background sweeper — daemon thread cleans up expired entries periodically
- Live stats — hit count, miss count, eviction count, hit rate via `AtomicLong`
- Builder API — fluent construction with `capacity`, `ttlMs`, `sweepIntervalMs`
- Built-in test runner — 10 tests covering correctness, TTL, and concurrency (no JUnit needed)
```
javac LRUCacheDemo.java
java LRUCacheDemo
```

Requires Java 11+ only. No Maven, no Gradle, no external jars.
```java
LRUCache<String, String> cache = LRUCache.<String, String>builder()
    .capacity(500)
    .ttlMs(600_000)          // entries expire after 10 minutes
    .sweepIntervalMs(60_000)
    .build();

cache.put("user:42", "Alice");
String val = cache.get("user:42"); // returns null if expired or evicted

System.out.println(cache.stats());
// hits=1 misses=0 evictions=0 hitRate=100%

cache.shutdown(); // stops the background sweeper thread
```

```
HEAD ↔ [most recent] ↔ ... ↔ [least recent] ↔ TAIL
          ↑                         ↑
   promoted on get()       evicted on overflow
```
A HashMap<K, Node> gives O(1) key lookup.
A doubly-linked list maintains recency order.
Sentinel head and tail nodes eliminate null checks in every list operation.
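The two structures can be sketched together in a stripped-down core. This is a minimal, illustrative sketch — the names (`LruCore`, `unlink`, `linkFront`, `lruKey`) and the `String`-only keys are simplifications, not the repo's exact API — but it shows why the sentinels keep every list operation free of null checks and O(1):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the HashMap + sentinel doubly-linked-list core.
public class LruCore {
    static class Node {
        String key, value;
        Node prev, next;
        Node(String k, String v) { key = k; value = v; }
    }

    final Map<String, Node> map = new HashMap<>(); // O(1) key -> node lookup
    final Node head = new Node(null, null);        // sentinel: MRU side
    final Node tail = new Node(null, null);        // sentinel: LRU side

    LruCore() { head.next = tail; tail.prev = head; }

    // Unlink in O(1) -- sentinels guarantee prev/next are never null.
    void unlink(Node n) { n.prev.next = n.next; n.next.prev = n.prev; }

    // Insert right after head: the most-recently-used position.
    void linkFront(Node n) {
        n.next = head.next; n.prev = head;
        head.next.prev = n; head.next = n;
    }

    void put(String k, String v) {
        Node n = map.get(k);
        if (n != null) { n.value = v; unlink(n); }
        else { n = new Node(k, v); map.put(k, n); }
        linkFront(n); // new or updated entries become MRU
    }

    String get(String k) {
        Node n = map.get(k);
        if (n == null) return null;
        unlink(n); linkFront(n); // promote to MRU on every hit
        return n.value;
    }

    // The node just before tail is the eviction candidate.
    String lruKey() { return tail.prev == head ? null : tail.prev.key; }

    public static void main(String[] args) {
        LruCore c = new LruCore();
        c.put("a", "1"); c.put("b", "2"); c.put("c", "3");
        c.get("a"); // promotes "a"; "b" is now least recently used
        System.out.println(c.lruKey()); // prints: b
    }
}
```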
| Operation | Lock used | Why |
|---|---|---|
| `get()` | Write lock | A cache hit moves the node to front — that's a structural write |
| `put()` | Write lock | Inserts a node and potentially evicts another |
| `size()` | Read lock | Read-only — safe to share with other readers |
| Stats counters | None (`AtomicLong`) | Lock-free increment — never slows the hot path |
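The locking discipline in the table reduces to one pattern. Here is a hedged sketch of that shape — the method bodies are placeholders, not the repo's actual cache logic:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the locking pattern: write lock for structural changes,
// read lock for read-only queries, AtomicLong for lock-free stats.
public class LockPattern {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final AtomicLong hits = new AtomicLong();
    private int size = 0;

    public void put(String k, String v) {
        lock.writeLock().lock();     // exclusive: mutates the list
        try { size++; }
        finally { lock.writeLock().unlock(); }
    }

    public String get(String k) {
        lock.writeLock().lock();     // also exclusive: a hit relinks the node
        try {
            hits.incrementAndGet();  // lock-free counter, cheap on the hot path
            return "v";
        } finally { lock.writeLock().unlock(); }
    }

    public int size() {
        lock.readLock().lock();      // shared: many readers may hold it at once
        try { return size; }
        finally { lock.readLock().unlock(); }
    }

    public static void main(String[] args) {
        LockPattern p = new LockPattern();
        p.put("a", "1");
        p.get("a");
        System.out.println(p.size() + " " + p.hits.get()); // prints: 1 1
    }
}
```

Note the `try`/`finally` around every critical section: the unlock must run even if the body throws, or the cache deadlocks.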
- Each `Node` stores an absolute `expiryMs` timestamp at creation.
- `get()` checks expiry lazily — an expired entry is treated as a miss and removed.
- A background `ScheduledExecutorService` sweeps the list periodically to catch entries that were never accessed again.
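The lazy half of that scheme is just a timestamp comparison on read. A small sketch, with illustrative names (`LazyExpiry`, `isExpired`) rather than the repo's exact code:

```java
// Sketch of lazy TTL expiry: each node stores an absolute deadline,
// and get() compares it to the clock before returning the value.
public class LazyExpiry {
    static class Node {
        final String value;
        final long expiryMs; // absolute wall-clock deadline, fixed at creation

        Node(String value, long ttlMs) {
            this.value = value;
            this.expiryMs = System.currentTimeMillis() + ttlMs;
        }

        boolean isExpired(long nowMs) { return nowMs > expiryMs; }
    }

    public static void main(String[] args) throws InterruptedException {
        Node n = new Node("Alice", 50); // 50 ms TTL
        System.out.println(n.isExpired(System.currentTimeMillis())); // false: within TTL
        Thread.sleep(80);
        // Past the deadline: a real get() would treat this as a miss and remove it.
        System.out.println(n.isExpired(System.currentTimeMillis())); // true
    }
}
```

Storing the absolute deadline (rather than the TTL itself) makes each check a single comparison with no arithmetic on the hot path.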
```
LRUCacheDemo.java          ← everything in one file
├── Node<K,V>              ← doubly-linked list node with TTL
├── CacheStats             ← AtomicLong hit/miss/eviction counters
├── LRUCache<K,V>          ← main cache (ReadWriteLock + sweeper + builder)
└── main()                 ← built-in test runner + live demo
```
| Test | What it verifies |
|---|---|
| get returns null on miss | Basic miss behaviour |
| put then get round-trip | Basic correctness |
| put updates existing value | In-place update + size stays same |
| Evicts LRU on overflow | Correct eviction ordering |
| Update promotes to MRU | put on existing key resets recency |
| Eviction count tracked | Stats accuracy |
| Entry expires after TTL | TTL correctness |
| Entry accessible within TTL | TTL boundary correctness |
| Hit rate calculation | Stats accuracy |
| 8 threads concurrent — no corruption | Thread safety under load |
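The last test's shape is worth spelling out: release all workers at once with a `CountDownLatch`, then check the result from the main thread. This sketch uses a `ConcurrentHashMap` as a stand-in target so it runs on its own; the real test hammers `LRUCache` instead:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

// Sketch of the concurrency test's structure: N threads start together,
// do many operations each, and the main thread verifies no lost updates.
public class ConcurrencySketch {
    public static void main(String[] args) throws InterruptedException {
        final int threads = 8, opsPerThread = 1_000;
        ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
        CountDownLatch start = new CountDownLatch(1);
        CountDownLatch done = new CountDownLatch(threads);

        for (int t = 0; t < threads; t++) {
            final int id = t;
            new Thread(() -> {
                try {
                    start.await(); // all workers begin at the same instant
                    for (int i = 0; i < opsPerThread; i++) {
                        cache.put("k" + id + ":" + i, "v");
                        cache.get("k" + id + ":" + i);
                    }
                } catch (InterruptedException ignored) {
                } finally {
                    done.countDown();
                }
            }).start();
        }

        start.countDown(); // fire the starting gun
        done.await();      // wait for every worker to finish
        System.out.println(cache.size()); // prints: 8000 (no lost updates)
    }
}
```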
- Data structures: doubly-linked list, hash map, sentinel nodes
- Concurrency: `ReentrantReadWriteLock`, `AtomicLong`, `ScheduledExecutorService`, `CountDownLatch`
- Design patterns: Builder pattern, Generics
- Java internals: daemon threads, lock granularity trade-offs, lazy vs eager expiry
MIT