Redis Graph Cache
A TypeScript-first Redis caching layer for Node.js applications that need schema-driven normalization, atomic concurrent writes, ZSET-backed paginated lists, automatic relationship hydration, and graceful resilience under failure.
What it does
Schema-driven storage
Declare entities and relationships once; the cache handles normalization, hydration, and key generation.
Atomic writes via Lua
Every read-modify-write path is implemented as a single Lua script so concurrent writers cannot lose each other's updates.
Two list flavours
list (plain JSON array) for small collections; indexedList (ZSET-backed) for paginated, sorted feeds with atomic trim.
Automatic hydration
Reading an entity automatically fetches its related entities and assembles them into a nested object graph.
TypeScript native
Full generic typing with keyof TSchema throughout; the API is type-safe from the schema definition onward.
Built-in resilience
Circuit breaker, exponential backoff retry, connection pooling, and production safety guards out of the box.
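To make the schema-driven, keyof TSchema typing concrete, here is a self-contained sketch of the pattern. The `EntityDef` and `GraphCache` names below are illustrative, not this library's actual exports, and the in-memory Map stands in for Redis:

```typescript
// Illustrative only: `EntityDef` and `GraphCache` are hypothetical names,
// not this library's real API. The Map stands in for Redis.
interface EntityDef<T> {
  keyPrefix: string;
  sample?: T; // carries the entity shape for inference
}

interface Post { id: string; title: string; authorId: string }
interface User { id: string; name: string }

// Declare entities once; the schema maps entity names to shapes and key prefixes.
const schema = {
  post: { keyPrefix: "post:" } as EntityDef<Post>,
  user: { keyPrefix: "user:" } as EntityDef<User>,
};

// Recover the entity shape for a given schema key.
type ShapeOf<S, K extends keyof S> = S[K] extends EntityDef<infer T> ? T : never;

class GraphCache<TSchema extends Record<string, EntityDef<unknown>>> {
  private store = new Map<string, unknown>();
  constructor(private schema: TSchema) {}

  // `K extends keyof TSchema` means a misspelled entity name is a compile error,
  // and `value` must match that entity's declared shape.
  set<K extends keyof TSchema & string>(kind: K, id: string, value: ShapeOf<TSchema, K>): void {
    this.store.set(this.schema[kind].keyPrefix + id, value);
  }

  get<K extends keyof TSchema & string>(kind: K, id: string): ShapeOf<TSchema, K> | undefined {
    return this.store.get(this.schema[kind].keyPrefix + id) as ShapeOf<TSchema, K> | undefined;
  }
}

const cache = new GraphCache(schema);
cache.set("post", "1", { id: "1", title: "hello", authorId: "u1" });
const post = cache.get("post", "1"); // typed as Post | undefined
```

The payoff is that key generation and typing both flow from one schema object: `cache.get("psot", …)` or a wrongly shaped value fails at compile time, not at runtime.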
Realistic Operating Envelope
Single Redis instance, single-instance Node application. Numbers assume a reasonable Redis (8–16+ GB RAM, low network latency to the app) and 1–10 KB payloads.
| Workload | Status |
|---|---|
| ~5k concurrent users, ~100k entities, JSON-array lists with up to a few thousand items | Comfortably supported |
| ~20k concurrent users with indexedList + connection pool | Supported with capacity planning |
| ~50k concurrent users, poolSize: 8–16, compression on, request coalescing at the service layer | Achievable with monitoring + stampede protection |
| 50k+ users sustained on a single Redis | Not supported today — Redis Cluster is on the roadmap |
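As a sketch, the ~50k-user row's settings might look like the following config object. Only poolSize and keyPrefix are named in this README; the `compression` flag and the overall option shape are assumptions, not the library's confirmed API:

```typescript
// Hypothetical option names: only `poolSize` and `keyPrefix` appear in this
// README; `compression` and the interface shape are assumptions.
interface CacheOptions {
  poolSize: number;
  compression: boolean;
  keyPrefix?: string;
}

const highLoadOptions: CacheOptions = {
  poolSize: 12,      // within the 8–16 range suggested for ~50k concurrent users
  compression: true, // assumed flag name for payload compression
  keyPrefix: "app:", // keyPrefix is mentioned under clearAllCache scoping
};
```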
What it is not suitable for
Be honest with yourself before deploying:
- Redis Cluster is not supported yet — planned for a future release. For now, use a single Redis instance. Hash-tagging, slot-aware MGET/pipelines, and replica routing are not implemented.
- Off-thread JSON is not implemented. At very high throughput (~50k+ ops/sec on one Node process), synchronous JSON.parse becomes the bottleneck. The standard fix is PM2 cluster mode, not a library change.
- Cache stampede protection is not built in. If a hot key expires under load, every concurrent reader will fall through to your DB at once. Add request coalescing at your service layer.
- No automatic invalidation graph for plain list schemas. Use indexedList with trackMembership: true, or hand-roll list invalidation in your DB mutation paths.
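The stampede caveat above is straightforward to handle at the service layer: concurrent readers of the same key share one in-flight promise instead of each falling through to the database. A minimal sketch (the `coalesced` helper and its loader are illustrative, not part of this library):

```typescript
// Request coalescing: all concurrent callers for a key await the same
// in-flight promise; only the first call actually hits the loader.
const inFlight = new Map<string, Promise<unknown>>();

async function coalesced<T>(key: string, load: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(key);
  if (existing) return existing as Promise<T>;
  // First caller for this key: start the load and clean up when it settles,
  // so a later miss starts a fresh load.
  const p = load().finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}

// Usage: a hot key expiring under load now costs one DB round trip,
// not one per concurrent reader. `loadPostFromDb` is a stand-in for your loader.
async function getPost(id: string, loadPostFromDb: (id: string) => Promise<string>) {
  return coalesced(`post:${id}`, () => loadPostFromDb(id));
}
```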
Atomicity Contract
| Operation | Atomicity |
|---|---|
| writeEntity (per key) | Atomic — CAS-with-retry per normalized key. Concurrent writers do not lose each other's updates. |
| writeList (across N items) | Per-key atomic per item; the list-key write is independent. Two concurrent writeList calls writing the same list key may overlap. |
| addListItem / removeListItem | Atomic — single Lua script. Idempotent add; safe under contention. |
| addIndexedListItem / removeIndexedListItem | Atomic — single Lua script. Trim and back-index update happen in the same script. |
| invalidateEntity (cascade) | Atomic — reads the back-index, ZREMs from every tracked list, deletes the entity and the back-index, all in one script. |
| clearAllCache | Atomic FLUSHDB (or scoped SCAN+UNLINK with keyPrefix). Guarded by required confirm and a production-mode block. |
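The writeEntity row describes CAS-with-retry semantics. The shape of that loop can be sketched as follows, simulated against an in-memory Map so it runs without Redis; the real implementation lives inside a single Lua script, and `casWrite`, the version field, and the retry count are illustrative:

```typescript
// Sketch of a CAS-with-retry write. In single-threaded sync JS the conflict
// branch never fires; the re-check models the guarantee the Lua script
// provides under real concurrency. All names here are illustrative.
interface Versioned<T> { version: number; value: T }

const store = new Map<string, Versioned<unknown>>();

function casWrite<T>(key: string, update: (cur: T | undefined) => T, maxRetries = 5): T {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const before = store.get(key);
    // Compute the new value from the current one (read-modify-write).
    const next = update(before?.value as T | undefined);
    // Commit only if no other writer touched the key since we read it;
    // otherwise retry with the fresh value, so no update is lost.
    if (store.get(key) === before) {
      store.set(key, { version: (before?.version ?? 0) + 1, value: next });
      return next;
    }
  }
  throw new Error(`CAS contention on ${key} after ${maxRetries} attempts`);
}
```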
What is not atomic
- Two concurrent writeList calls writing the same list key — the value may end up reflecting either writer's id list.
- Sequences of operations across keys (e.g. "update post then add to feed") — cross-key transactions are out of scope.
- Cache invalidation across application instances — every process that mutates the underlying data must call invalidateEntity.
Resilience Guarantees
- Circuit breaker opens when Redis is degraded, failing fast instead of letting failures cascade
- Exponential backoff retry with jitter for transient failures
- Connection pooling for improved throughput and partial failure tolerance
- Production safety guards on destructive operations (e.g., clearAllCache)
- Configurable limits to prevent runaway memory usage
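As a reference point for the retry bullet, here is a standalone exponential-backoff helper using the common "full jitter" variant (each delay is drawn uniformly from zero up to the capped exponential bound). The library's internal retry API and defaults are not specified here, so treat the names and numbers as assumptions:

```typescript
// Exponential backoff with full jitter: delay grows as base * 2^attempt,
// capped at capMs, then a uniform random fraction of that bound is slept.
// Helper name, defaults, and option names are illustrative.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  { retries = 4, baseMs = 50, capMs = 2000 } = {},
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err; // budget exhausted: surface the error
      const bound = Math.min(capMs, baseMs * 2 ** attempt);
      const delay = Math.random() * bound; // full jitter spreads retries out
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Jitter matters here for the same reason as stampede protection: without it, many clients that failed together retry together, hammering a recovering Redis in synchronized waves.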