
LRU Cache — Java (No Dependencies)

A thread-safe, in-memory LRU (Least Recently Used) Cache built from scratch in core Java.
Supports TTL-based expiry, eviction tracking, and concurrent access — zero external libraries required.


Features

  • O(1) get and put using a HashMap + doubly-linked list
  • LRU eviction — least recently used entry is dropped when capacity is full
  • TTL expiry — entries auto-expire after a configurable duration
  • Thread safety via ReentrantReadWriteLock — parallel reads, exclusive writes
  • Background sweeper — daemon thread cleans up expired entries periodically
  • Live stats — hit count, miss count, eviction count, hit rate via AtomicLong
  • Builder API — fluent construction with capacity, ttlSeconds, sweepIntervalMs
  • Built-in test runner — 10 tests covering correctness, TTL, and concurrency (no JUnit needed)

Quick Start

```shell
javac LRUCacheDemo.java
java LRUCacheDemo
```

Requires Java 11+ only. No Maven, no Gradle, no external jars.


Usage

```java
LRUCache<String, String> cache = LRUCache.<String, String>builder()
    .capacity(500)
    .ttlSeconds(600)        // entries expire after 10 minutes
    .sweepIntervalMs(60_000)
    .build();

cache.put("user:42", "Alice");
String val = cache.get("user:42");  // returns null if expired or evicted

System.out.println(cache.stats());
// hits=1  misses=0  evictions=0  hitRate=100%

cache.shutdown(); // stops the background sweeper thread
```

How It Works

Data structure

```
HEAD ↔ [most recent] ↔ ... ↔ [least recent] ↔ TAIL
         ↑                         ↑
    promoted on get()         evicted on overflow
```

A HashMap<K, Node> gives O(1) key lookup.
A doubly-linked list maintains recency order.
Sentinel head and tail nodes eliminate null checks in every list operation.
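The map-plus-list core described above can be sketched as follows. This is a minimal, single-threaded illustration of the technique, not the code in LRUCacheDemo.java; names like `LruCore` and `moveToFront` are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the HashMap + sentinel doubly-linked list core.
// Single-threaded on purpose; the real cache adds locking, TTL, and stats.
class LruCore<K, V> {
    private static final class Node<K, V> {
        K key; V value;
        Node<K, V> prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<K, Node<K, V>> map = new HashMap<>();
    private final Node<K, V> head = new Node<>(null, null); // sentinel
    private final Node<K, V> tail = new Node<>(null, null); // sentinel

    LruCore(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    V get(K key) {
        Node<K, V> n = map.get(key);
        if (n == null) return null;
        moveToFront(n);                 // a hit is a structural write
        return n.value;
    }

    void put(K key, V value) {
        Node<K, V> n = map.get(key);
        if (n != null) { n.value = value; moveToFront(n); return; }
        if (map.size() == capacity) {   // evict the least recently used
            Node<K, V> lru = tail.prev;
            unlink(lru);
            map.remove(lru.key);
        }
        n = new Node<>(key, value);
        map.put(key, n);
        insertAfterHead(n);
    }

    // Sentinels guarantee prev/next are never null in these helpers.
    private void unlink(Node<K, V> n) { n.prev.next = n.next; n.next.prev = n.prev; }
    private void insertAfterHead(Node<K, V> n) {
        n.next = head.next; n.prev = head;
        head.next.prev = n; head.next = n;
    }
    private void moveToFront(Node<K, V> n) { unlink(n); insertAfterHead(n); }
}
```

Both `get` and `put` touch only a constant number of pointers plus one hash lookup, which is where the O(1) claim comes from.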

Concurrency model

| Operation | Lock used | Why |
| --- | --- | --- |
| `get()` | Write lock | A cache hit moves the node to the front, which is a structural write |
| `put()` | Write lock | Inserts a node and potentially evicts another |
| `size()` | Read lock | Read-only, safe to share with other readers |
| Stats counters | None (`AtomicLong`) | Lock-free increments never slow the hot path |
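The locking pattern in the table can be sketched like this. The method bodies and field names are illustrative assumptions, not code lifted from LRUCacheDemo.java.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative locking skeleton: write lock for structural mutations,
// read lock for pure reads, AtomicLong for counters with no lock at all.
class LockingSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final AtomicLong hits = new AtomicLong();
    private int size;

    Object get(Object key) {
        lock.writeLock().lock();       // write lock: a hit relinks the node
        try {
            hits.incrementAndGet();    // lock-free counter bump
            return null;               // ... lookup + moveToFront elided ...
        } finally {
            lock.writeLock().unlock();
        }
    }

    int size() {
        lock.readLock().lock();        // read lock: many readers in parallel
        try {
            return size;
        } finally {
            lock.readLock().unlock();
        }
    }

    long hits() { return hits.get(); }
}
```

Note that even `get()` needs the write lock: promoting a node to the front mutates the list, so a plain read lock would allow two threads to corrupt the pointers.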

TTL lifecycle

  1. Each Node stores an absolute expiryMs timestamp at creation.
  2. get() checks expiry lazily — expired entry is treated as a miss and removed.
  3. A background ScheduledExecutorService sweeps the list periodically to catch entries that were never accessed again.

Project Structure

LRUCacheDemo.java        ← everything in one file
  ├── Node<K,V>          ← doubly-linked list node with TTL
  ├── CacheStats         ← AtomicLong hit/miss/eviction counters
  ├── LRUCache<K,V>      ← main cache (ReadWriteLock + sweeper + builder)
  └── main()             ← built-in test runner + live demo

Test Coverage

| Test | What it verifies |
| --- | --- |
| get returns null on miss | Basic miss behaviour |
| put then get round-trip | Basic correctness |
| put updates existing value | In-place update, size stays the same |
| Evicts LRU on overflow | Correct eviction ordering |
| Update promotes to MRU | put on an existing key resets recency |
| Eviction count tracked | Stats accuracy |
| Entry expires after TTL | TTL correctness |
| Entry accessible within TTL | TTL boundary correctness |
| Hit rate calculation | Stats accuracy |
| 8 threads concurrent, no corruption | Thread safety under load |
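The last row is the interesting one. A JUnit-free concurrency check like it typically releases all workers on a `CountDownLatch` and asserts no updates were lost. This sketch uses a `ConcurrentHashMap` stand-in so it is self-contained; the real test targets the cache itself.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CountDownLatch;

// Sketch of an 8-thread smoke test: one latch starts everyone at once,
// another waits for all workers, then a single assertion checks the result.
class ConcurrencySmokeTest {
    static boolean run() throws InterruptedException {
        ConcurrentMap<String, Integer> cache = new ConcurrentHashMap<>();
        int threads = 8, opsPerThread = 1_000;
        CountDownLatch start = new CountDownLatch(1);
        CountDownLatch done = new CountDownLatch(threads);

        for (int t = 0; t < threads; t++) {
            final int id = t;
            new Thread(() -> {
                try {
                    start.await();                 // all threads begin together
                    for (int i = 0; i < opsPerThread; i++) {
                        String key = "k" + id + ":" + i;
                        cache.put(key, i);
                        cache.get(key);
                    }
                } catch (InterruptedException ignored) {
                } finally {
                    done.countDown();
                }
            }).start();
        }
        start.countDown();                         // fire the starting gun
        done.await();                              // wait for every worker
        return cache.size() == threads * opsPerThread; // no lost updates
    }
}
```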

Concepts Demonstrated

  • Data structures: doubly-linked list, hash map, sentinel nodes
  • Concurrency: ReentrantReadWriteLock, AtomicLong, ScheduledExecutorService, CountDownLatch
  • Design patterns: Builder pattern, Generics
  • Java internals: daemon threads, lock granularity trade-offs, lazy vs eager expiry

License

MIT
