Production Checklist

A checklist to ensure your redis-graph-cache deployment is production-ready.

Configuration

Set a keyPrefix

Always set redis.keyPrefix when sharing Redis with other applications or environments. This enables safe scoped cache clearing.

Configure connection pool

Set redis.poolSize to 4-8 for production. A pool increases throughput, and if one connection drops, the remaining connections keep serving requests.

Enable compression for large payloads

Set cache.enableCompression: true if your entities run larger than a few KB, and tune compressionThreshold (default 1024 bytes) to match your payload sizes.

Set appropriate TTL values

Ensure schema TTLs align with your data freshness requirements. Use cascadeTTL for parent-child relationships when parents are longer-lived.

Configure circuit breaker

Adjust resilience.circuitBreaker thresholds based on your Redis latency and failure tolerance.

Set operational limits

Configure limits.maxHydrationDepth and limits.maxMemoryUsagePerOperation to prevent runaway memory usage.
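
The settings above can be collected into one config object. The option names below come from this checklist; the nesting shape and the circuit-breaker knob names (failureThreshold, resetTimeoutMs) are assumptions, so check your library version's types for the authoritative schema.

```typescript
// Sketch of a production config using the option names from this checklist.
// The circuit-breaker field names are assumed, not confirmed by the docs.
const productionConfig = {
  redis: {
    keyPrefix: "myapp:prod:", // scopes keys per app/environment
    poolSize: 8,              // 4-8 recommended for production
  },
  cache: {
    enableCompression: true,
    compressionThreshold: 1024, // bytes; compress entities above this size
  },
  resilience: {
    circuitBreaker: {
      failureThreshold: 5,    // trip after 5 consecutive failures (assumed knob)
      resetTimeoutMs: 30_000, // probe Redis again after 30s (assumed knob)
    },
  },
  limits: {
    maxHydrationDepth: 3,
    maxMemoryUsagePerOperation: 64 * 1024 * 1024, // 64 MB per operation
  },
};
```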

Redis Setup

Use Redis 6 or later

Ensure your Redis version is >= 6 for full compatibility with all features.

Configure maxmemory policy

Set maxmemory and maxmemory-policy (e.g., allkeys-lru) to prevent Redis from running out of memory.
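
In redis.conf (the 4gb value is a placeholder; size it to your working set):

```conf
maxmemory 4gb
maxmemory-policy allkeys-lru
```

The same settings can be applied at runtime with CONFIG SET, but persist them in redis.conf so they survive a restart.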

Enable persistence (optional)

Configure RDB/AOF persistence if you need to survive Redis restarts. Note that this adds latency.
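
A common starting point in redis.conf, if you opt in to persistence:

```conf
appendonly yes
appendfsync everysec   # fsync once per second: durability/latency balance
save 900 1             # plus an RDB snapshot if >= 1 key changed in 15 min
```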

Monitor Redis metrics

Set up monitoring for memory usage, hit rate, latency, and connection count.

Size Redis correctly

For ~100k entities at typical sizes, plan for ~1–2 GB of RAM; for 1M entities, 8–16 GB. Compression cuts these roughly in half. These are rules of thumb only: run INFO memory against a populated instance to measure actual usage for your data. Use allkeys-lru only if it is acceptable for cache-only data to be evicted.
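
A back-of-envelope version of the sizing rule above. The 2x overhead factor (key names, indexes, allocator slack) and the ~50% compression ratio are assumptions consistent with the figures in this checklist; trust INFO memory over this.

```typescript
// Rough Redis memory estimate. overheadFactor and the compression
// ratio are assumptions; measure with INFO memory for real numbers.
function estimateRedisBytes(
  entityCount: number,
  avgEntityBytes: number,
  compressed: boolean,
): number {
  const overheadFactor = 2; // keys, indexes, allocator slack (assumed)
  const raw = entityCount * avgEntityBytes * overheadFactor;
  return compressed ? raw / 2 : raw;
}

// 100k entities at ~8 KB each, uncompressed: ~1.5 GB
const gb = estimateRedisBytes(100_000, 8 * 1024, false) / 1024 ** 3;
```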

Use PM2 cluster mode (or equivalent) above ~30k ops/sec

For sustained workloads above ~30k ops/sec on one Node process, sync JSON.parse on the event loop becomes the bottleneck — not Redis. Run multiple Node processes behind a load balancer (PM2 exec_mode: 'cluster', Kubernetes pods, etc.). Each process opens its own pool.
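
A minimal PM2 ecosystem file for cluster mode (the script path is a placeholder for your app's entry point):

```javascript
// ecosystem.config.js
module.exports = {
  apps: [{
    name: "api",
    script: "dist/server.js", // placeholder: your app entry point
    exec_mode: "cluster",
    instances: "max",         // one worker per CPU core
  }],
};
```

Start it with pm2 start ecosystem.config.js. Because each worker opens its own pool, multiply poolSize by the worker count when sizing Redis's maxclients.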

Add stampede protection at the service layer

When a hot key expires, every concurrent reader misses the cache and hits your database. The cache does not deduplicate these concurrent fetches. Add request coalescing, e.g. p-memoize, dataloader, or a small in-process map of in-flight DB fetches keyed by entity id.
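
The in-process map variant is a few lines. This is a generic sketch, independent of the cache library: concurrent misses for the same entity id share one in-flight database fetch instead of each hitting the DB.

```typescript
// Request coalescing: the first miss for an id starts the DB fetch;
// every concurrent miss for the same id awaits that same promise.
const inflight = new Map<string, Promise<unknown>>();

async function fetchCoalesced<T>(
  id: string,
  loadFromDb: (id: string) => Promise<T>,
): Promise<T> {
  const pending = inflight.get(id);
  if (pending) return pending as Promise<T>;
  const p = loadFromDb(id).finally(() => inflight.delete(id)); // clean up when settled
  inflight.set(id, p);
  return p;
}
```

Note this only deduplicates within one process; in cluster mode each worker coalesces independently, which still caps DB load at one fetch per worker.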

Application Integration

Handle errors gracefully

Catch and handle CircuitBreakerOpenError and RedisConnectionError by falling back to your primary data store.
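
A read-through helper along these lines keeps the fallback in one place. CircuitBreakerOpenError and RedisConnectionError are the error names from this checklist; the sketch matches on error.name so it stays library-agnostic, and the cache/DB loaders are injected placeholders.

```typescript
// Serve from cache; on cache-layer outages, fall back to the primary store.
async function getEntity<T>(
  id: string,
  fromCache: (id: string) => Promise<T>,
  fromDb: (id: string) => Promise<T>,
): Promise<T> {
  try {
    return await fromCache(id);
  } catch (err) {
    const name = (err as Error).name;
    if (name === "CircuitBreakerOpenError" || name === "RedisConnectionError") {
      return fromDb(id); // degraded mode: primary store answers directly
    }
    throw err; // anything else is a real bug; surface it
  }
}
```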

Wire up metrics

Call getMetrics() periodically and export to your observability stack. Hit rate is the most important signal.
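
For example, flatten the counters into gauges on a timer. The hits/misses shape returned by getMetrics() is an assumption here; adapt the field names to your version's actual output.

```typescript
// Convert cache counters into flat gauges for a metrics backend.
// The CacheMetrics shape is assumed, not confirmed by the library docs.
interface CacheMetrics { hits: number; misses: number; }

function toGauges(m: CacheMetrics): Record<string, number> {
  const total = m.hits + m.misses;
  return {
    "cache.hits": m.hits,
    "cache.misses": m.misses,
    "cache.hit_rate": total === 0 ? 0 : m.hits / total, // the key signal
  };
}

// e.g. setInterval(() => report(toGauges(cache.getMetrics())), 15_000);
// where report() pushes to your statsd/Prometheus exporter.
```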

Implement cache invalidation

Call invalidateEntity() when data changes in your database to maintain cache consistency.
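
Order matters on the write path: update the primary store first, then invalidate, so a concurrent reader can at worst re-cache the new value. invalidateEntity is the call named in this checklist, but its exact signature and the db.updateUser helper are placeholders for this sketch.

```typescript
// Write-then-invalidate: the source of truth changes before the
// cached copy is dropped, so no reader can re-cache stale data.
async function updateUser(
  id: string,
  patch: object,
  db: { updateUser: (id: string, p: object) => Promise<void> },
  cache: { invalidateEntity: (type: string, id: string) => Promise<void> },
): Promise<void> {
  await db.updateUser(id, patch);           // primary store first
  await cache.invalidateEntity("user", id); // then drop the stale copy
}
```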

Graceful shutdown

Call disconnect() on shutdown to close Redis connections gracefully.
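
One way to wire this up: stop accepting traffic first, then close the cache. disconnect() is the call named in this checklist; the HTTP server stop function is a placeholder you supply.

```typescript
// Build a shutdown routine: drain the HTTP server, then close Redis.
// Returned as a function so the signal wiring stays in one place.
function buildShutdown(
  cache: { disconnect: () => Promise<void> },
  stopServer: () => Promise<void>,
): () => Promise<void> {
  return async () => {
    await stopServer();       // stop accepting new requests first
    await cache.disconnect(); // then close Redis connections cleanly
  };
}

// Wire it up once at startup:
// const shutdown = buildShutdown(cache, () => server.close());
// process.once("SIGTERM", () => shutdown().then(() => process.exit(0)));
```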

Schema Design

Use indexedList for large collections

Use indexedList instead of plain list for collections that may exceed ~200 items.

Enable trackMembership when needed

Set trackMembership: true only when you need cascade invalidation. It doubles the write traffic per insert.

Set maxSize on indexed lists

Configure maxSize on indexed lists to cap memory usage for unbounded feeds.

Avoid circular relations

Design your schema to avoid circular relations, or use excludeRelations to break cycles during reads.
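
The schema guidance above might look like the following. Only the option names (indexedList, maxSize, trackMembership, excludeRelations) come from this checklist; the overall schema shape and the getEntity read call are assumptions, so match them against your version's actual API.

```typescript
// Schema sketch: the field layout is assumed; only the option
// names are taken from this checklist.
const userSchema = {
  ttl: 3600, // seconds
  relations: {
    posts: {
      type: "indexedList",   // per-item access; no full-array parse on read
      maxSize: 1000,         // cap unbounded feeds
      trackMembership: true, // only if you need cascade invalidation (2x writes)
    },
  },
};

// At read time, break a user -> posts -> author cycle (assumed call shape):
// cache.getEntity("user", id, { excludeRelations: ["posts.author"] });
```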

Testing & Validation

Load test before deployment

Run load tests to validate your configuration can handle expected traffic patterns.
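
One lightweight option is autocannon, a Node-based HTTP load tester; the URL, connection count, and duration below are placeholders to adapt to your real traffic shape.

```shell
# 100 concurrent connections for 30 seconds against a hot read path.
npx autocannon -c 100 -d 30 http://localhost:3000/users/123
```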

Test failure scenarios

Test how your application behaves when Redis is down or the circuit breaker trips.

Validate cache hit rate

Track hitRate from getMetrics(). Hit rate is the only honest signal of whether the cache is paying off; investigate sub-50% rates.

Common Pitfalls

Redis Cluster is not supported yet

Cluster support is on the roadmap for a future release. For now, deploy against a single Redis instance (primary with replicas is fine, but a single keyspace).

Don't ignore the circuit breaker

Always handle CircuitBreakerOpenError. Ignoring it will cause your application to fail when Redis is degraded.

Don't use plain list for large collections

Plain lists parse the entire array on every read. Use indexedList for collections larger than ~200 items.

Don't skip keyPrefix in production

Without keyPrefix, clearAllCache wipes the entire Redis DB. Always set it when sharing Redis.