BoundlessDB

A DCB-inspired event store library for TypeScript, with support for SQLite, PostgreSQL, and in-memory storage

// Minimal read: everything about Alice's enrollment in cs101
const { events, appendCondition } = await store.query<CourseEvent>()
  .matchKey('course', 'cs101')
  .andKey('student', 'alice')
  .read();

// Build state from matching events
const state: CourseState = events.reduce(evolve, initialState);
// Full decision flow: read, evolve, decide, append
const { events, appendCondition } = await store.query<CourseEvent>()
  .matchType('StudentSubscribed')
  .andKey('course', 'cs101')
  .andKey('student', 'alice')
  .read();

// Build state
const state: CourseState = events.reduce(evolve, initialState);

// Business logic
const newEvents: CourseEvent[] = decide(command, state);

// Append with optimistic concurrency
await store.append(newEvents, appendCondition);
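`evolve` and `decide` are application code, not part of BoundlessDB. A minimal sketch of what they might look like for this enrollment example (all types, field names, and the duplicate-enrollment rule are illustrative assumptions):

```typescript
// Illustrative domain model -- not part of BoundlessDB's API.
type CourseEvent =
  | { type: 'StudentSubscribed'; data: { courseId: string; studentId: string } }
  | { type: 'StudentUnsubscribed'; data: { courseId: string; studentId: string } };

interface CourseState { enrolled: Set<string> }
const initialState: CourseState = { enrolled: new Set() };

// evolve: fold one event into the current state.
function evolve(state: CourseState, event: CourseEvent): CourseState {
  const enrolled = new Set(state.enrolled);
  switch (event.type) {
    case 'StudentSubscribed':
      enrolled.add(event.data.studentId);
      return { enrolled };
    case 'StudentUnsubscribed':
      enrolled.delete(event.data.studentId);
      return { enrolled };
    default:
      return state;
  }
}

// decide: apply the business rule and emit new events (none if already enrolled).
interface SubscribeCommand { courseId: string; studentId: string }
function decide(command: SubscribeCommand, state: CourseState): CourseEvent[] {
  if (state.enrolled.has(command.studentId)) return [];
  return [{
    type: 'StudentSubscribed',
    data: { courseId: command.courseId, studentId: command.studentId },
  }];
}
```

Because the state is folded only from the events your query matched, the decision is scoped to exactly the boundary that the appendCondition later protects.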
// Read from two boundaries
const cartResult = await store.query()
  .matchKey('cart', 'cart-42').read();

const inventoryResult = await store.query()
  .matchKey('product', 'shoe-xl').read();

// Merge conditions: protects both boundaries
const merged = cartResult.appendCondition
  .mergeWith(inventoryResult.appendCondition);

// Single atomic append
await store.append(newEvents, merged);
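A plausible semantics for merging two conditions: union the failIfEventsMatch clauses and keep the lower of the two read positions, so events relevant to either boundary get checked. A standalone sketch of that idea (the QueryItem shape and the mergeConditions helper are assumptions about behaviour, not the library's internals):

```typescript
interface QueryItem { type?: string; key: string; value: string }
interface AppendCondition { failIfEventsMatch: QueryItem[]; after?: bigint }

// Union the clauses; keep the lowest read position so nothing relevant to
// either boundary slips through. If either side has no 'after' (i.e. it
// checks all history), the merged condition must check all history too.
function mergeConditions(a: AppendCondition, b: AppendCondition): AppendCondition {
  const after =
    a.after === undefined || b.after === undefined
      ? undefined
      : a.after < b.after ? a.after : b.after;
  return { failIfEventsMatch: [...a.failIfEventsMatch, ...b.failIfEventsMatch], after };
}
```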
🛒 Live Shopping Cart Demo · 🎮 Browser Demo
npm install boundlessdb
🚫 No Streams
Events organized via configurable consistency keys, not rigid stream boundaries.

⚙️ Config-based Keys
Extract consistency keys from event payloads. Events stay pure business data.

⚡ Conflict Detection
Get exactly what changed since your read, plus a fresh condition for retry.

🔄 One-Command Reindex
Change your consistency config, run one command. Keys are rebuilt: batched, resumable, crash-safe.

💾 Multiple Storage Engines
SQLite for embedded, PostgreSQL for production, sql.js for the browser, or in-memory for testing.

🔒 Multi-Node Safe
Atomic conflict detection via SERIALIZABLE transactions. Safe for Supabase Edge Functions and concurrent deployments.

🔗 Multi-Key AND Queries
Chain .andKey() for AND semantics: match events where ALL keys match in one query.

📦 Embedded Library
Runs in your process, no separate server needed. (No gRPC/HTTP API yet.)

Beyond Traditional DCB

Dynamic Consistency Boundaries (DCB) solve optimistic concurrency by attaching consistency keys (tags) to events. BoundlessDB takes this further:

Traditional DCB: keys (tags) are written onto events at append time. Once written, they're immutable.
BoundlessDB: keys are extracted from payloads via config. Events stay pure, and the config can change anytime.

💡 Why is this more flexible?

• No migration needed: change your consistency boundaries by updating config, not events
• Events stay clean: business data only, no infrastructure concerns
• Retroactive changes: add new keys to existing events via reindex

Conflict? No Problem!

const result = await store.append(newEvents, appendCondition);

if (result.conflict) {
  // Someone else enrolled while you were deciding!
  console.log('Events since your read:', result.conflictingEvents);

  // Retry with a fresh appendCondition
  await store.append(newEvents, result.appendCondition);
} else {
  // Success!
  console.log('Enrolled at position', result.position);
}
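A single retry, as above, can lose the race again under contention. A bounded retry loop over the same result shape is easy to write as application code (the AppendResult shape mirrors the fields used above; appendWithRetry and the attempt limit are assumptions, and in practice you would usually rebuild state from conflictingEvents before retrying):

```typescript
interface AppendCondition { failIfEventsMatch: unknown[]; after?: bigint }
interface AppendResult {
  conflict: boolean;
  position?: bigint;
  conflictingEvents?: unknown[];
  appendCondition?: AppendCondition; // fresh condition returned on conflict
}
type AppendFn = (events: unknown[], cond: AppendCondition | null) => Promise<AppendResult>;

// Retry a conflicted append with the fresh condition from each result,
// giving up after maxAttempts and returning the last result either way.
async function appendWithRetry(
  append: AppendFn,
  events: unknown[],
  condition: AppendCondition | null,
  maxAttempts = 5,
): Promise<AppendResult> {
  let cond = condition;
  for (let attempt = 1; ; attempt++) {
    const result = await append(events, cond);
    if (!result.conflict || attempt >= maxAttempts) return result;
    cond = result.appendCondition ?? cond;
  }
}
```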

How Conflict Detection Works

// The appendCondition captures your exact query scope (DCB spec):
appendCondition = {
  failIfEventsMatch: [{ type: 'StudentSubscribed', key: 'course', value: 'cs101' }],
  after: 5n  // Position at time of read (optional)
}

// On append, BoundlessDB checks:
// "Are there NEW events (after pos 5) that MATCH these conditions?"

// ✅ NO conflict if someone wrote:
//    - StudentSubscribed for course='math201' (different key value)
//    - CourseCreated for cs101 (different event type)

// ❌ CONFLICT only if:
//    - StudentSubscribed for course='cs101' was written
//    - (it matches your query conditions!)
💡 Conflicts are scoped to your read, not global. This is the power of Dynamic Consistency Boundaries!
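The check above reduces to a pure predicate: is there any event newer than the read position that matches any clause? A sketch of that semantics (the StoredEvent shape is an assumption for illustration, not BoundlessDB's storage format):

```typescript
interface QueryItem { type?: string; key: string; value: string }
interface AppendCondition { failIfEventsMatch: QueryItem[]; after?: bigint }
interface StoredEvent { position: bigint; type: string; keys: Record<string, string> }

// Conflict iff some event after the read position matches some clause:
// same type (when the clause specifies one) AND the same key/value pair.
function hasConflict(condition: AppendCondition, log: StoredEvent[]): boolean {
  const after = condition.after ?? -1n; // no 'after' -> check the whole log
  return log.some(e =>
    e.position > after &&
    condition.failIfEventsMatch.some(q =>
      (q.type === undefined || q.type === e.type) && e.keys[q.key] === q.value,
    ),
  );
}
```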

AppendCondition Cases

The AppendCondition controls when a conflict is detected. Four patterns cover all use cases:

// Standard flow: Read first, then append
const result = await store.query()
  .matchKey('course', 'cs101')
  .read();

// appendCondition = { failIfEventsMatch: [...], after: position }
await store.append(events, result.appendCondition);
✅ No conflict if nothing new was written
❌ Conflict if someone else wrote matching events after your read
// Check from a specific position
await store.append(events, {
  failIfEventsMatch: [
    { type: 'StudentSubscribed', key: 'course', value: 'cs101' }
  ],
  after: 42n
});
Use case: Custom retry logic, or when you know the exact position.
Checks only events AFTER position 42.
// Check ALL events (no 'after' = from position 0)
await store.append(events, {
  failIfEventsMatch: [
    { type: 'UserCreated', key: 'username', value: 'alice' }
  ]
  // no 'after' → checks ALL events!
});
Use case: Uniqueness checks without reading first.
โŒ Fails if ANY matching event exists anywhere.
Example: "Username 'alice' must not exist yet"
// No consistency check at all
await store.append(events, null);
Use case: First write, or events where conflicts don't matter.
No checks performed, event is always appended.

Query Across Multiple Dimensions

// Key-only: "Everything about course cs101"
store.query().matchKey('course', 'cs101').read()

// Multi-key AND: "Alice's enrollment in cs101"
store.query()
  .matchKey('course', 'cs101')
  .andKey('student', 'alice')
  .read()

// Multi-type + key: "Course lifecycle events for cs101"
store.query()
  .matchType('CourseCreated', 'CourseCancelled')
  .andKey('course', 'cs101')
  .read()

// OR: "All cancellations OR everything about Alice"
store.query()
  .matchType('CourseCancelled')          // condition 1
  .matchKey('student', 'alice')          // condition 2 (OR)
  .read()

Config-based Key Extraction

Keys are extracted from event payloads via configuration.
Events stay pure: no tags or metadata pollution!
const consistency = {
  eventTypes: {
    CourseCreated: {
      keys: [
        { name: 'course', path: 'data.courseId' }
      ]
    },
    StudentSubscribed: {
      keys: [
        { name: 'course', path: 'data.courseId' },
        { name: 'student', path: 'data.studentId' },
        { name: 'semester', path: 'data.semester', transform: 'UPPER' }
      ]
    }
  }
};
When an event is appended:
{ type: 'StudentSubscribed', data: { courseId: 'cs101', studentId: 'alice', semester: 'ws24' } }

Keys are automatically extracted and indexed:
→ course: 'cs101'
→ student: 'alice'
→ semester: 'WS24' (transformed to uppercase)
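Extraction like this is ordinary path lookup plus an optional transform. A minimal sketch of the behaviour described above (extractKeys, valueAt, and the type names are assumptions for illustration; the config shape mirrors the example):

```typescript
type Transform = 'UPPER' | 'LOWER';
interface KeySpec { name: string; path: string; transform?: Transform }
interface ConsistencyConfig { eventTypes: Record<string, { keys: KeySpec[] }> }
interface AnyEvent { type: string; [field: string]: unknown }

// Walk a dotted path like 'data.courseId' into the event object.
function valueAt(obj: unknown, path: string): unknown {
  return path.split('.').reduce<any>((o, p) => (o == null ? undefined : o[p]), obj);
}

// Extract the configured keys for this event type; missing paths yield no key.
function extractKeys(config: ConsistencyConfig, event: AnyEvent): Record<string, string> {
  const out: Record<string, string> = {};
  for (const spec of config.eventTypes[event.type]?.keys ?? []) {
    const raw = valueAt(event, spec.path);
    if (raw === undefined) continue;
    const s = String(raw);
    out[spec.name] =
      spec.transform === 'UPPER' ? s.toUpperCase() :
      spec.transform === 'LOWER' ? s.toLowerCase() : s;
  }
  return out;
}
```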

One-Command Reindex

// Change your config → run one command → done
// No migration files. No manual SQL. Just config.

$ npx tsx scripts/reindex.ts --config ./consistency.ts --db ./events.sqlite

  🔄 Reindex (SQLite)
  Config hash: a1b2c3... → x9y8z7...
  Events: 50,001,237

  [████████████████████████████░░] 93%  46,500,000 / 50,001,237  148,201 keys/s

  ✅ Reindex complete: 50,001,237 events, 112,482,011 keys (8m 12s)

// Batched, resumable, crash-safe.
// Add it to your CI/CD pipeline; it runs only when the config changed.

How It Works

1. Event Appended
   You append an event with business data:
   { type: 'StudentSubscribed', data: { courseId: 'cs101', studentId: 'alice' } }

2. Keys Extracted
   Config tells BoundlessDB which fields are consistency keys:
   course → 'cs101', student → 'alice'

3. Index Updated
   Keys are stored in a separate index table, linked to the event position:
   event_keys: [pos:1, course, cs101], [pos:1, student, alice]

4. Query by Keys
   Find all events matching any combination of key conditions:
   WHERE (type='StudentSubscribed' AND key='course' AND value='cs101')

💡 Config changed? BoundlessDB compares the config hash on startup. If it differs, all keys are automatically re-extracted from existing events. No manual migration needed!
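The four steps above can be modelled end to end with a toy in-memory index. This is a sketch of the idea, not the storage engine (names like appendIndexed and queryByKeys are invented for illustration):

```typescript
interface IndexedEvent { position: number; type: string; data: Record<string, string> }

const log: IndexedEvent[] = [];
// event_keys analogue: "key:value" -> positions of events carrying that key
const eventKeys = new Map<string, Set<number>>();

// Steps 1-3: append the event and index its extracted keys by position.
function appendIndexed(
  type: string,
  data: Record<string, string>,
  keys: Record<string, string>,
): number {
  const position = log.length + 1;
  log.push({ position, type, data });
  for (const [k, v] of Object.entries(keys)) {
    const bucket = eventKeys.get(`${k}:${v}`) ?? new Set<number>();
    bucket.add(position);
    eventKeys.set(`${k}:${v}`, bucket);
  }
  return position;
}

// Step 4: AND semantics = intersect the position sets of all requested keys.
function queryByKeys(...pairs: [string, string][]): IndexedEvent[] {
  const sets = pairs.map(([k, v]) => eventKeys.get(`${k}:${v}`) ?? new Set<number>());
  if (sets.length === 0) return [];
  const positions = [...sets[0]].filter(p => sets.every(s => s.has(p)));
  return log.filter(e => positions.includes(e.position));
}
```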

⚡ Performance

Query performance at 50,000,000 events.

Query                       Results    SQLite      PostgreSQL
Single type                  24,940    117.58 ms   302.48 ms
Constrained (type + key)        167      0.49 ms     3.73 ms
Highly selective                 10      0.14 ms     1.14 ms
Mixed (2 types, 1 key)          334      1.41 ms     3.83 ms
Full aggregate (3 types)      2,004      4.59 ms     7.96 ms
Append (single event)             -      1.39 ms     3.27 ms
Read + Append (recent)            -      1.93 ms     5.28 ms
Read + Append (cold)              -      1.12 ms     6.57 ms

Write throughput: 26,827 evt/s (SQLite on disk) · 6,950 evt/s (PostgreSQL 16)

💡 Sub-millisecond latency for selective lookups, even with 50M events in the store. Queries were run in shuffled order (no cache bias).

🔒 Conflict & Concurrency (PostgreSQL, 50M events)

SERIALIZABLE isolation with decorrelated jitter backoff.

Scenario                                   Latency    Conflicts   Success   Throughput
Append with condition                      2.35 ms    -           -         -
Conflict detection                         1.00 ms    -           -         -
Conflict + retry round-trip                4.13 ms    -           -         -
10 writers × 100 events, same key          3,869 ms   39/round    10/10     258 evt/s
10 writers × 100 events, different keys    722 ms     0           10/10     1,384 evt/s

Parameters: 50 rounds · 10 app-level retries · PostgreSQL SERIALIZABLE · decorrelated jitter backoff (50 ms base, 2 s cap) · 10 internal retries (40001)

💡 Same key = serialized (conflicts, retries). Different keys = full parallelism (zero conflicts). DCB boundaries map directly to PostgreSQL SERIALIZABLE concurrency; no tuning required. Run the benchmarks yourself.
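Decorrelated jitter, the backoff named above, derives each sleep from the previous sleep rather than the attempt number: sleep = min(cap, rand(base, prev × 3)). A sketch with the parameters listed above (50 ms base, 2 s cap); the function name is an assumption:

```typescript
// Decorrelated jitter: the next sleep is uniform in [base, max(base, prev * 3)],
// clamped to the cap. This spreads retries out without synchronized waves.
function nextBackoffMs(prevMs: number, baseMs = 50, capMs = 2000): number {
  const upper = Math.max(baseMs, prevMs * 3);
  const sleep = baseMs + Math.random() * (upper - baseMs);
  return Math.min(capMs, sleep);
}
```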