Serializers
Lossless round-tripping of rich JS types — Date, BigInt, Map, Set, RegExp, Buffer, NaN, ±Infinity.
Default: TAGGED_SERIALIZER
Entity payloads round-trip through a pluggable Serializer that preserves JS types raw JSON.stringify cannot represent natively. The default serializer is enabled automatically; no configuration is required.
Type fidelity matrix
| JS type | Schema type | Without (raw JSON) | With default serializer |
|---|---|---|---|
| `string` | `'string'` | string | string |
| `number` | `'number'` | number | number |
| `boolean` | `'boolean'` | boolean | boolean |
| plain object | `'object'` | object | object |
| array | `'array'` | array | array |
| `Date` | `'date'` | ISO string | `Date` instance |
| `BigInt` | `'bigint'` | throws `TypeError` | `BigInt` |
| `Map` | `'map'` | `{}` (data lost) | `Map` |
| `Set` | `'set'` | `{}` (data lost) | `Set` |
| `RegExp` | `'regexp'` | `{}` (data lost) | `RegExp` |
| `Buffer` | `'buffer'` | `{type:"Buffer",data:[...]}` | `Buffer` |
| `NaN`, `±Infinity` | `'number'` | `null` | original number |
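To make the "with default serializer" column concrete, here is a minimal sketch of how a tagged codec can preserve such values. This is illustrative only, not the library's actual implementation, and the `__tag` key name is an assumption for demonstration:

```typescript
// Illustrative sketch only; not redis-graph-cache's real codec.
// The tag key name '__tag' is assumed for demonstration.
const TAG = '__tag';

const taggedStringify = (value: unknown): string =>
  JSON.stringify(value, function (key, v) {
    const raw = (this as Record<string, unknown>)[key];
    // Date.prototype.toJSON runs before the replacer, so inspect the raw value.
    if (raw instanceof Date) return { [TAG]: 'date', v: raw.toISOString() };
    if (typeof v === 'bigint') return { [TAG]: 'bigint', v: v.toString() };
    if (v instanceof Map) return { [TAG]: 'map', v: [...v.entries()] };
    if (v instanceof Set) return { [TAG]: 'set', v: [...v.values()] };
    if (typeof v === 'number' && !Number.isFinite(v))
      return { [TAG]: 'number', v: String(v) }; // NaN, ±Infinity
    return v;
  });

const taggedParse = (s: string): unknown =>
  JSON.parse(s, (_key, v) => {
    if (v && typeof v === 'object' && TAG in v) {
      switch (v[TAG]) {
        case 'date':   return new Date(v.v);
        case 'bigint': return BigInt(v.v);
        case 'map':    return new Map(v.v);
        case 'set':    return new Set(v.v);
        case 'number': return Number(v.v); // Number('NaN') is NaN, etc.
      }
    }
    return v; // untagged objects (plain JSON) pass through untouched
  });
```

Note that the reviver only touches objects carrying the tag key, which is also what makes the backward-compatibility guarantees below possible: plain JSON flows through unchanged.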
Field type is advisory, not coercive
The type you declare in fields exists for documentation and to catch obvious typos at schema registration. The cache does not coerce values based on it. What you write is what you read back.
```ts
// Schema says 'date', but you pass a string:
fields: { createdAt: { type: 'date' as const } },
data: { createdAt: '2026-04-26T10:00:00Z' }

// On read you get back a STRING, not a Date — because you wrote
// a string. The serializer only transforms values whose JS type
// is special. Strings round-trip as strings.

// To get a Date back, write a Date:
data: { createdAt: new Date('2026-04-26T10:00:00Z') }
```

By design
The cache is a cache, not a validator. Runtime coercion would hide bugs where producers and consumers disagree about the wire format.
Backward compatibility
- Reads of plain JSON written by older versions or other producers work unchanged. The reviver only transforms objects that carry the internal tag key.
- Writes that contain only JSON-native values produce byte-for-byte identical output to plain JSON.stringify, so the on-wire format is unchanged for ordinary data.
Opting out / plugging in another codec
```ts
import {
  RedisGraphCache,
  JSON_SERIALIZER,
  type Serializer,
} from 'redis-graph-cache';
import superjson from 'superjson';

// Opt out: original behaviour, fastest, but lossy for the rich types above.
new RedisGraphCache(schema, {
  cache: { serializer: JSON_SERIALIZER },
});

// Plug in superjson (or devalue, cbor-x, etc.):
const codec: Serializer = {
  stringify: (v) => superjson.stringify(v),
  parse: (s) => superjson.parse(s),
};

new RedisGraphCache(schema, {
  cache: { serializer: codec },
});
```

Don't mix serializers
Don't mix serializers against the same cache without flushing first — old entries written with one serializer may not parse correctly with another.
Lists are not routed through the serializer
List bodies (the JSON id-array stored under a list schema's key) are intentionally not routed through the serializer, because Lua scripts on the Redis side parse them directly. Lists always use plain JSON; only entity values pass through the serializer.
undefined handling
Intentionally not tagged
undefined is kept identical to "field omitted" so the merge logic doesn't accidentally clobber existing values on partial updates. See Null vs Undefined.
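The rationale can be sketched with a toy merge. This is a hypothetical helper, not the library's actual merge code: treating an explicitly-undefined field the same as an omitted one means a partial update never clobbers an existing value, while null remains an intentional overwrite.

```typescript
// Hypothetical sketch of merge logic that skips undefined fields.
function mergeUpdate<T extends Record<string, unknown>>(
  existing: T,
  patch: Partial<T>,
): T {
  const out: Record<string, unknown> = { ...existing };
  for (const [key, value] of Object.entries(patch)) {
    if (value !== undefined) out[key] = value; // undefined means "leave as is"
  }
  return out as T;
}

// undefined is skipped; null would overwrite on purpose:
mergeUpdate({ a: 1, b: 2 }, { a: undefined, b: 3 }); // → { a: 1, b: 3 }
```

If undefined were tagged and stored, the merge could not tell "caller wants this field untouched" apart from "caller wants this field set to undefined", and partial updates would silently erase data.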