A DCB-inspired event store library for TypeScript.

BoundlessDB — because consistency boundaries should be dynamic, not fixed.

```bash
npm install boundlessdb
```

Interactive Browser Demo — no installation required! The entire event store runs client-side in your browser using WebAssembly SQLite.
- 🚀 Works in Browser — Full client-side event sourcing via sql.js (WASM)
- 🔑 No Streams — Events organized via configurable consistency keys
- ⚙️ Config-based Key Extraction — Events remain pure business data
- 🎟️ AppendCondition — Simple, transparent optimistic concurrency control
- ⚡ Conflict Detection with Delta — Get exactly what changed since your read
- 🔄 Reindex Script — Change your config, run the reindex script, keys are rebuilt safely
- 💾 SQLite, PostgreSQL & In-Memory — Multiple storage backends
- 📦 Embedded Library — No separate server, runs in your process
```typescript
import { createEventStore, SqliteStorage } from 'boundlessdb';

const store = createEventStore({
  storage: new SqliteStorage(':memory:'),
  consistency: {
    eventTypes: {
      CourseCreated: {
        keys: [{ name: 'course', path: 'data.courseId' }]
      },
      StudentSubscribed: {
        keys: [
          { name: 'course', path: 'data.courseId' },
          { name: 'student', path: 'data.studentId' }
        ]
      }
    }
  }
});
```

You append an event with business data:
```typescript
// `result` is the QueryResult from a previous query().read()
await store.append([{
  type: 'StudentSubscribed',
  data: { courseId: 'cs101', studentId: 'alice' }
}], result.appendCondition);
```

Your config tells BoundlessDB which fields are consistency keys:
```typescript
consistency: {
  eventTypes: {
    StudentSubscribed: {
      keys: [
        { name: 'course', path: 'data.courseId' },
        { name: 'student', path: 'data.studentId' }
      ]
    }
  }
}
// → Extracts: course='cs101', student='alice'
```

Keys are stored in a separate index table, linked to the event position:

```
event_keys: [pos:1, course, cs101], [pos:1, student, alice]
```
Find all events matching key conditions — no need to list event types:

```typescript
const result = await store.query()
  .matchKeys({ course: 'cs101' })
  .read();
// result.appendCondition captures: "I read all matching events up to position X"
```

```typescript
// 1️⃣ READ — Query by key and get an appendCondition
const { events, appendCondition } = await store.query<CourseEvent>()
  .matchKeys({ course: 'cs101' })
  .read();

// 2️⃣ DECIDE — Build state, run business logic
const state = events.reduce(evolve, initialState);
const newEvents = decide(command, state);

// 3️⃣ WRITE — Append with optimistic concurrency
const result = await store.append(newEvents, appendCondition);
```

```typescript
const initialState = { enrolled: 0, capacity: 30 };

// evolve: (state, event) → new state
const evolve = (state, event) => {
  switch (event.type) {
    case 'StudentSubscribed':
      return { ...state, enrolled: state.enrolled + 1 };
    default:
      return state;
  }
};

// decide: (command, state) → events[]
const decide = (command, state) => {
  if (state.enrolled >= state.capacity) {
    throw new Error('Course is full!');
  }
  return [{ type: 'StudentSubscribed', data: command }];
};
```

```typescript
if (result.conflict) {
  // Someone else wrote while you were deciding
  console.log('Events since your read:', result.conflictingEvents);
  // Retry with result.appendCondition
} else {
  console.log('Success at position', result.position);
}
```

Build queries with a chainable API:
```typescript
// Key-only: everything about course cs101 (any event type!)
const { events, appendCondition } = await store.query<CourseEvent>()
  .matchKeys({ course: 'cs101' })
  .read();

// Multi-key AND: Alice's enrollment in cs101
const enrollment = await store.query<CourseEvent>()
  .matchKeys({ course: 'cs101', student: 'alice' })
  .read();

// Multi-type + key: course lifecycle events for cs101 (OR between types)
const lifecycle = await store.query<CourseEvent>()
  .matchTypeAndKeys('CourseCreated', { course: 'cs101' })
  .matchTypeAndKeys('CourseCancelled', { course: 'cs101' })
  .read();

// Type + key
const enrollments = await store.query<CourseEvent>()
  .matchTypeAndKeys('StudentSubscribed', { course: 'cs101' })
  .fromPosition(100n)
  .limit(50)
  .read();
```

| Method | Description |
|---|---|
| `matchKeys({ key: value, ... })` | Match events by key(s), any type. Keys within = AND. Each call = new condition (OR). |
| `matchType(...types)` | Match events of one or more types. Each call = new condition (OR). |
| `matchTypeAndKeys(type, { key: value, ... })` | Match events by type and key(s). Keys within = AND. Each call = new condition (OR). |
| `fromPosition(bigint)` | Start reading from position |
| `limit(number)` | Limit the number of results |
| `read()` | Execute the query; returns a `QueryResult` |

Rules:
- Each top-level call (`matchKeys()`, `matchType()`, `matchTypeAndKeys()`) starts a new condition (OR between conditions)
- Keys within a single call's object are AND — the same event must match all
The AppendCondition controls optimistic concurrency. It follows the DCB specification:

```typescript
interface AppendCondition {
  failIfEventsMatch: QueryCondition[]; // What constitutes a conflict?
  after?: bigint;                      // From which position to check? (optional)
}
```

```typescript
const result = await store.query()
  .matchKeys({ course: 'cs101' })
  .read();
// appendCondition = { failIfEventsMatch: [...], after: <last_position> }
await store.append(newEvents, result.appendCondition);
```

Checks: were there NEW events (after my read) that match my query?
- ✅ No conflict if nothing new was written
- ❌ Conflict if someone else wrote matching events

```typescript
await store.append(newEvents, {
  failIfEventsMatch: [{ type: 'StudentSubscribed', key: 'course', value: 'cs101' }],
  after: 42n
});
```

Checks: events AFTER position 42 only. Use case: custom retry logic, or when you already know the position.

```typescript
await store.append(newEvents, {
  failIfEventsMatch: [{ type: 'UserCreated', key: 'username', value: 'alice' }]
  // no 'after' → checks from position 0
});
```

Checks: ALL events from the beginning. Use case: uniqueness checks without reading first.
- ❌ Fails if ANY matching event exists
- "Username 'alice' must not exist yet"

```typescript
await store.append(newEvents, null);
```

Checks: nothing. Use case: first write, or events where conflicts don't matter.

| Case | `after` | Checks |
|---|---|---|
| Read → Append | From read position | Events AFTER read |
| Manual | Explicit position | Events AFTER position |
| Uniqueness | Omitted | ALL events |
| Blind | `null` condition | Nothing |
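The cases in the table reduce to a single predicate. Here is a minimal sketch of the check semantics — hypothetical helper types and function, not the library's implementation (the storage engines perform this check atomically inside a transaction):

```typescript
// Sketch: an append conflicts if any stored event AFTER `after` matches
// any condition in failIfEventsMatch. Omitting `after` checks ALL events
// (positions are assumed to start at 1).
interface StoredEvent { position: bigint; type: string; keys: Record<string, string> }
interface QueryCondition { type?: string; key?: string; value?: string }

function conflicts(
  events: StoredEvent[],
  failIfEventsMatch: QueryCondition[],
  after: bigint = 0n
): boolean {
  return events.some(e =>
    e.position > after &&
    failIfEventsMatch.some(c =>
      (c.type === undefined || c.type === e.type) &&
      (c.key === undefined || e.keys[c.key] === c.value)
    )
  );
}
```

With `after` omitted, an existing `UserCreated` event for `username='alice'` triggers a conflict — exactly the uniqueness case above; passing the read position skips everything you already saw.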
Traditional streams give you ONE boundary. DCB lets you query ANY combination:

```typescript
// Key-only: "Everything about course cs101"
store.query().matchKeys({ course: 'cs101' }).read()

// Multi-key AND: "Alice's enrollment in cs101"
store.query()
  .matchKeys({ course: 'cs101', student: 'alice' })
  .read()

// Multi-type + key: "Course lifecycle events for cs101"
store.query()
  .matchTypeAndKeys('CourseCreated', { course: 'cs101' })
  .matchTypeAndKeys('CourseCancelled', { course: 'cs101' })
  .read()

// OR: "All cancellations OR everything about Alice"
store.query()
  .matchType('CourseCancelled')    // condition 1
  .matchKeys({ student: 'alice' }) // condition 2 (OR)
  .read()
```

- Keys within a single call = AND — keys passed in `matchKeys({})` / `matchTypeAndKeys()` must all match the same event
- Multiple top-level calls = OR — each call starts a new condition (events matching either)
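The two rules can be sketched as a small predicate (assumed semantics for illustration, not library code): keys inside one condition object are ANDed, and an event matches the query if ANY condition matches.

```typescript
// AND within a condition, OR across conditions.
interface IndexedEvent { type: string; keys: Record<string, string> }
type KeyCondition = Record<string, string>;

function matchesQuery(event: IndexedEvent, conditions: KeyCondition[]): boolean {
  return conditions.some(cond =>                                  // OR across calls
    Object.entries(cond).every(([k, v]) => event.keys[k] === v)   // AND within a call
  );
}
```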
```typescript
// AND: Events where course='cs101' AND student='alice' (same event)
store.query()
  .matchKeys({ course: 'cs101', student: 'alice' })
  .read();

// OR: Events where course='cs101' OR student='alice' (different events)
store.query()
  .matchKeys({ course: 'cs101' })
  .matchKeys({ student: 'alice' })
  .read();
```

Keys are extracted from event payloads via configuration — events stay pure:
```typescript
const consistency = {
  eventTypes: {
    OrderPlaced: {
      keys: [
        { name: 'order', path: 'data.orderId' },
        { name: 'customer', path: 'data.customer.id' },
        { name: 'month', path: 'data.timestamp', transform: 'MONTH' }
      ]
    }
  }
};
```

| Option | Description |
|---|---|
| `name` | Key name used in queries |
| `path` | Dot-notation path in the event (e.g. `data.customer.id`) |
| `transform` | Transform the extracted value (see below) |
| `nullHandling` | `error` (default), `skip`, or `default` |
| `defaultValue` | Value used when `nullHandling: 'default'` |
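How these options could combine during extraction — a hypothetical helper sketching the assumed behavior (the library's actual implementation may differ):

```typescript
type NullHandling = 'error' | 'skip' | 'default';
interface KeyDef { name: string; path: string; nullHandling?: NullHandling; defaultValue?: string }

// Walk the dot-notation path and apply nullHandling when the value is missing.
function extractKey(event: object, def: KeyDef): string | undefined {
  const value = def.path
    .split('.')
    .reduce<any>((obj, prop) => (obj == null ? undefined : obj[prop]), event);
  if (value != null) return String(value);
  switch (def.nullHandling ?? 'error') {
    case 'skip':    return undefined;        // key simply not indexed for this event
    case 'default': return def.defaultValue; // fall back to the configured value
    default:        throw new Error(`Missing value at ${def.path} for key '${def.name}'`);
  }
}
```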
Transforms modify the extracted value before indexing:

| Transform | Input | Output | Use Case |
|---|---|---|---|
| `LOWER` | `"Alice@Email.COM"` | `"alice@email.com"` | Case-insensitive matching |
| `UPPER` | `"alice"` | `"ALICE"` | Normalized codes |
| `MONTH` | `"2026-02-20T14:30:00Z"` | `"2026-02"` | Monthly partitioning |
| `YEAR` | `"2026-02-20T14:30:00Z"` | `"2026"` | Yearly aggregation |
| `DATE` | `"2026-02-20T14:30:00Z"` | `"2026-02-20"` | Daily partitioning |
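For ISO-8601 timestamps the date transforms in the table are effectively prefix slices. A sketch of the assumed behavior (not the library's code):

```typescript
type Transform = 'LOWER' | 'UPPER' | 'MONTH' | 'YEAR' | 'DATE';

// Apply a transform to an already-extracted string value.
// Date transforms assume an ISO-8601 input like '2026-02-20T14:30:00Z'.
function applyTransform(value: string, t: Transform): string {
  switch (t) {
    case 'LOWER': return value.toLowerCase();
    case 'UPPER': return value.toUpperCase();
    case 'YEAR':  return value.slice(0, 4);  // '2026'
    case 'MONTH': return value.slice(0, 7);  // '2026-02'
    case 'DATE':  return value.slice(0, 10); // '2026-02-20'
    default: throw new Error(`Unknown transform: ${t}`);
  }
}
```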
Example: Time-based partitioning

```typescript
const consistency = {
  eventTypes: {
    OrderPlaced: {
      keys: [
        { name: 'order', path: 'data.orderId' },
        { name: 'month', path: 'data.placedAt', transform: 'MONTH' }
      ]
    }
  }
};

// Event: { type: 'OrderPlaced', data: { orderId: 'ORD-123', placedAt: '2026-02-20T14:30:00Z' } }
// Extracted keys: order="ORD-123", month="2026-02"

// Query all orders from February 2026:
const { events } = await store.query()
  .matchTypeAndKeys('OrderPlaced', { month: '2026-02' })
  .read();
```

This is great for Close the Books patterns — query all events in a time period efficiently!
The consistency config is hashed and stored in the database. When you change your config (add/remove keys, change paths or transforms), the key index must be rebuilt.
On startup, BoundlessDB detects config changes:
```
stored_hash:  "a1b2c3..." (from last run)
current_hash: "x9y8z7..." (from your config)
→ Error: Config hash mismatch. Run the reindex script before starting the application.
```
This is intentional — reindexing millions of events should be an explicit step, not a surprise on startup.
Run the reindex script as part of your deployment (like a database migration):
```bash
# SQLite
npx tsx scripts/reindex.ts --config ./consistency.config.ts --db ./events.sqlite

# PostgreSQL
npx tsx scripts/reindex.ts --config ./consistency.config.ts --connection postgresql://user:pass@localhost/db

# Custom batch size (default: 10,000)
npx tsx scripts/reindex.ts --config ./consistency.config.ts --db ./events.sqlite --batch-size 50000
```

The `--config` file must default-export a `ConsistencyConfig` (see `benchmark/consistency.config.ts` for an example).
The script:
- Checks the hash first — if unchanged, exits immediately ("No reindex needed")
- Processes in batches — never loads all events into memory
- Shows live progress — percentage, throughput, ETA
- Is crash-safe — stores progress in metadata, resumes from where it left off
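The batching and crash-safety could be structured roughly like this — a sketch of the assumed design, not the actual script (`ProgressStore` is a hypothetical stand-in for the metadata table):

```typescript
// Progress is persisted after every batch, so a restart resumes from the
// last checkpoint instead of reprocessing everything.
interface ProgressStore { get(): bigint; set(pos: bigint): void }

async function reindexInBatches(
  totalEvents: bigint,
  batchSize: bigint,
  processBatch: (from: bigint, to: bigint) => Promise<void>,
  progress: ProgressStore
): Promise<void> {
  let pos = progress.get(); // 0n on a fresh run, last checkpoint after a crash
  while (pos < totalEvents) {
    const to = pos + batchSize < totalEvents ? pos + batchSize : totalEvents;
    await processBatch(pos, to); // only this slice of events is in memory
    progress.set(to);            // checkpoint: resume here if we crash
    pos = to;
  }
}
```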
```
🔄 Reindex (SQLite)
Config hash: fd7b17c0... → a3e91b44...
Events: 50,001,237
[████████████░░░░░░░░░░░░░░░░░░] 40%  20,000,000 / 50,001,237  142,857 keys/s  ETA 3m 30s
✅ Reindex complete: 50,001,237 events, 112,482,011 keys (8m 12s)
```
Add the reindex script to your deployment pipeline:
```yaml
# Example: GitHub Actions
- name: Reindex (if config changed)
  run: npx tsx scripts/reindex.ts --config ./consistency.config.ts --db ./events.sqlite
```

The script exits with code 0 in both cases (no reindex needed / reindex completed successfully), so it's safe to run on every deploy.
You can also call `reindexBatch()` directly on a storage engine:

```typescript
const storage = new SqliteStorage('./events.sqlite');

await storage.reindexBatch(extractKeys, {
  batchSize: 10_000,
  onProgress: (done, total) => {
    console.log(`${done}/${total}`);
  }
});
```

BoundlessDB works entirely in the browser with no server required:
```html
<script type="module">
  import { createEventStore, SqlJsStorage } from './boundless.browser.js';

  const store = createEventStore({
    storage: new SqlJsStorage(),
    consistency: {
      eventTypes: {
        TodoAdded: { keys: [{ name: 'list', path: 'data.listId' }] }
      }
    }
  });
  // Everything runs client-side!
</script>
```

```bash
npm run build:browser
# → ui/public/boundless.browser.js (~100KB)
```

BoundlessDB supports concurrent access from multiple processes (e.g. Supabase Edge Functions, multiple server instances) when using PostgreSQL.
The conflict check and write happen atomically in a single SERIALIZABLE transaction. PostgreSQL detects overlapping reads and aborts one transaction if two processes try to append with conflicting consistency keys. The library retries automatically.
```typescript
// Edge Function A and B run simultaneously:
// Both read, both decide, both try to append.
// PostgreSQL ensures only one succeeds — the other gets a conflict result.
const result = await store.append(newEvents, appendCondition);
if (result.conflict) {
  // Retry with fresh state
}
```

Key behavior:
- Appends with different keys proceed in parallel (no conflict)
- Appends with overlapping keys are serialized (one wins, one retries)
- This maps directly to DCB's Dynamic Consistency Boundaries
No configuration needed — atomic conflict detection is built into all storage engines.
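At the application level, "one wins, one retries" usually becomes a small loop around the read-decide-append cycle. A hypothetical retry helper — a sketch, not the library's built-in retry (`attempt` is assumed to re-read, re-decide, and re-append with fresh state):

```typescript
type AppendOutcome = { conflict: boolean; position?: bigint };

// Re-run the whole read-decide-append cycle until the append succeeds
// or the retry budget is exhausted.
async function withRetry(
  attempt: () => Promise<AppendOutcome>,
  maxRetries = 3
): Promise<AppendOutcome> {
  let result = await attempt();
  for (let i = 0; result.conflict && i < maxRetries; i++) {
    result = await attempt(); // fresh read → fresh appendCondition
  }
  return result; // caller still checks result.conflict after exhausting retries
}
```

Retrying the full cycle (rather than just the append) matters: the conflicting events may change the decision itself, not just the append condition.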
| Backend | Environment | Persistence |
|---|---|---|
| `SqliteStorage` | Node.js | File or `:memory:` |
| `SqlJsStorage` | Browser | In-memory (WASM) |
| `PostgresStorage` | Node.js | PostgreSQL database |
| `InMemoryStorage` | Any | None (testing) |
For production deployments with PostgreSQL:
```typescript
import { createEventStore, PostgresStorage } from 'boundlessdb';

const storage = new PostgresStorage('postgresql://user:pass@localhost/mydb');
await storage.init(); // Required: creates tables if they don't exist

const store = createEventStore({
  storage,
  consistency: { /* ... */ }
});
```

Note: PostgreSQL support requires the `pg` package:

```bash
npm install pg
```

Define type-safe events using the Event marker type:
```typescript
import { Event, EventStore } from 'boundlessdb';

// Define your events
type ProductItemAdded = Event<'ProductItemAdded', {
  cartId: string;
  productId: string;
  quantity: number;
}>;

type ProductItemRemoved = Event<'ProductItemRemoved', {
  cartId: string;
  productId: string;
}>;

// Create a union type for all cart events
type CartEvents = ProductItemAdded | ProductItemRemoved;

// Read with type safety
const result = await store.query<CartEvents>()
  .matchKeys({ cart: 'cart-123' })
  .read();

// TypeScript knows the event types!
for (const event of result.events) {
  if (event.type === 'ProductItemAdded') {
    console.log(event.data.quantity); // ✅ typed as number
  }
}
</script>
```

The fluent query builder is the recommended API. For advanced use, conditions can also be passed directly to store.read():
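As an aside before the raw-condition shapes: the same union pays off when folding events into state. A sketch of an exhaustive `evolve` using TypeScript's `never` check — a local stand-in `Event` type is defined so the snippet is self-contained (the library's `Event` marker may differ):

```typescript
// Local stand-in for the library's Event marker type (assumption).
type Event<T extends string, D> = { type: T; data: D };

type ProductItemAdded = Event<'ProductItemAdded', { cartId: string; productId: string; quantity: number }>;
type ProductItemRemoved = Event<'ProductItemRemoved', { cartId: string; productId: string }>;
type CartEvents = ProductItemAdded | ProductItemRemoved;

interface CartState { items: Record<string, number> }

// Fold events into state; the `never` default makes the switch exhaustive.
function evolve(state: CartState, event: CartEvents): CartState {
  switch (event.type) {
    case 'ProductItemAdded': {
      const qty = (state.items[event.data.productId] ?? 0) + event.data.quantity;
      return { items: { ...state.items, [event.data.productId]: qty } };
    }
    case 'ProductItemRemoved': {
      const { [event.data.productId]: _removed, ...rest } = state.items;
      return { items: rest };
    }
    default: {
      // Unreachable while every union member is handled above.
      const _exhaustive: never = event;
      return state;
    }
  }
}
```

With every member handled, `event` narrows to `never` in the default branch — adding a new event type to `CartEvents` becomes a compile-time error until `evolve` handles it.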
```typescript
// Key-only: events with key, regardless of type
{ keys: [{ name: 'course', value: 'cs101' }] }

// Type-only: all events of type
{ type: 'CourseCreated' }

// Type + single key
{ type: 'ProductItemAdded', key: 'cart', value: 'cart-123' }

// Type + multi-key AND
{ type: 'StudentSubscribed', keys: [
  { name: 'course', value: 'cs101' },
  { name: 'student', value: 'alice' }
]}

// Multi-type
{ types: ['CourseCreated', 'CourseCancelled'] }

// Multi-type + keys
{ types: ['CourseCreated', 'CourseCancelled'], keys: [{ name: 'course', value: 'cs101' }] }
```

```typescript
// Get ALL events (useful for admin/debug/export)
const result = await store.all().read();
```

```typescript
const store = createEventStore({
  storage: SqliteStorage | SqlJsStorage | PostgresStorage | InMemoryStorage,
  consistency: ConsistencyConfig, // Key extraction rules
});
```

Fluent query builder:
```typescript
const result = await store.query<CourseEvent>()
  .matchKeys({ course: 'cs101' }) // key-only (any event type)
  .read();
```

```typescript
const result = await store.query<CourseEvent>()
  .matchTypeAndKeys('CourseCreated', { course: 'cs101' })   // type + key
  .matchTypeAndKeys('CourseCancelled', { course: 'cs101' }) // OR type + key
  .fromPosition(100n) // start from position
  .limit(50)          // limit results
  .read();            // execute, returns QueryResult
```

| Method | Description |
|---|---|
| `matchKeys({ key: value, ... })` | Match events by key(s), any type. Keys within = AND. Each call = new condition (OR). |
| `matchType(...types)` | Match events of type(s). Each call = new condition (OR). |
| `matchTypeAndKeys(type, { key: value, ... })` | Match events by type and key(s). Keys within = AND. Each call = new condition (OR). |
| `fromPosition(bigint)` | Start reading from position |
| `limit(number)` | Limit the number of results |
| `read()` | Execute the query; returns a `QueryResult` |
```typescript
const result = await store.read<CartEvents>({
  conditions: [
    { type: string }                                             // unconstrained
    | { type: string, key: string, value: string }               // single key
    | { type: string, keys: { name: string, value: string }[] }  // multi-key AND
  ],
  fromPosition?: bigint,
  limit?: number,
});

result.events          // StoredEvent<E>[]
result.position        // bigint
result.conditions      // QueryCondition[]
result.appendCondition // AppendCondition (for store.append)
result.count           // number
result.isEmpty()       // boolean
result.first()         // StoredEvent<E> | undefined
result.last()          // StoredEvent<E> | undefined
```

```typescript
// With appendCondition from read()
const readResult = await store.read<CartEvents>({ conditions });
const result = await store.append<CartEvents>([newEvent], readResult.appendCondition);

// With manual AppendCondition (DCB spec compliant)
const manualResult = await store.append<CartEvents>([newEvent], {
  failIfEventsMatch: [{ type: 'UserCreated', key: 'username', value: 'alice' }],
  after: 42n // optional
});

// Without consistency check
const blindResult = await store.append<CartEvents>([newEvent], null);

// Result handling
if (result.conflict) {
  result.conflictingEvents; // StoredEvent<E>[] — what changed since your read
  result.appendCondition;   // Fresh condition for retry
} else {
  result.position;          // Position of last appended event
  result.appendCondition;   // Condition for next operation
}
```

Run benchmarks against SQLite or PostgreSQL:
```bash
# SQLite (in-memory)
npx tsx benchmark/sqlite-query.ts --events 1m

# SQLite (on-disk, shuffled by default)
npx tsx benchmark/sqlite-query.ts --events 1m --disk

# PostgreSQL
npx tsx benchmark/postgres-query.ts --events 1m
```

Benchmarks and reindex share the same config file format:

```bash
# Run benchmark with custom config
npx tsx benchmark/sqlite-query.ts --events 1m --disk --config ./my-config.ts

# Two configs are included:
#   benchmark/consistency.config.ts         — full (course + student + lesson keys)
#   benchmark/consistency.config.minimal.ts — minimal (course key only)
```

Changing the consistency config changes the key index. BoundlessDB enforces this:
```bash
# 1. Benchmark with full config
npx tsx benchmark/sqlite-query.ts --events 1m --disk

# 2. Switch to minimal config → reindex first!
npx tsx scripts/reindex.ts --config ./benchmark/consistency.config.minimal.ts \
  --db ./benchmark/boundless-bench.sqlite

# 3. Benchmark with minimal config
npx tsx benchmark/sqlite-query.ts --events 1m --disk \
  --config ./benchmark/consistency.config.minimal.ts

# 4. Switch back → reindex again
npx tsx scripts/reindex.ts --config ./benchmark/consistency.config.ts \
  --db ./benchmark/boundless-bench.sqlite
```

Skipping the reindex step will throw an error — just like in production.
For a detailed walkthrough of how config changes affect the key index, see docs/benchmark-reindex-workflow.md.
```bash
npm install
npm test
npm run build
npm run build:browser
```

For detailed SQL query plans and optimization notes, see docs/sqlite-queries.md.
- dcb.events — Dynamic Consistency Boundaries
- Giraflow — Event Modeling visualization
Built with ❤️ for Event Sourcing