Description
We're experiencing `RangeError: Maximum call stack size exceeded` in `GlobalCache.getEntity` when denormalizing large datasets (~3,000 entities) with bidirectional schema references. The existing same-pk cycle detection doesn't prevent this because the traversal visits different entity instances of different types at each hop.
Reproduction
Schema setup
```ts
import { Entity } from '@data-client/rest';

class Department extends Entity {
  id = '';
  name = '';
  buildings: Building[] = [];
  parent: Department | null = null;
  children: Department[] = [];
  pk() { return this.id; }
  static key = 'Department';
}

class Building extends Entity {
  id = '';
  name = '';
  departments: Department[] = [];
  rooms: Room[] = [];
  pk() { return this.id; }
  static key = 'Building';
}

class Room extends Entity {
  id = '';
  name = '';
  building: Building | null = null;
  departments: Department[] = [];
  buildings: Building[] = [];
  pk() { return this.id; }
  static key = 'Room';
}

// Schemas are assigned after all classes are declared, so the circular
// class references don't hit the temporal dead zone at definition time.
Department.schema = {
  buildings: [Building], // Department → Building
  parent: Department,
  children: [Department],
};
Building.schema = {
  departments: [Department], // Building → Department (bidirectional)
  rooms: [Room],
};
Room.schema = {
  building: Building,
  departments: [Department], // Room → Department (cross-type)
  buildings: [Building],     // Room → Building (cross-type)
};
```
What happens
With small datasets (50 departments, 50 buildings), everything works. With large datasets (~3,000 departments, ~3,000 buildings, each with bidirectional references), `useSuspense` crashes:

```
RangeError: Maximum call stack size exceeded
    at GlobalCache.getEntity (@data-client/react)
    at unvisitEntity
    at unvisit
    at denormalize
    ...
```
Why the existing cycle detection doesn't help
`GlobalCache.getEntity` tracks visited pks per entity type (`cycleCacheKey`). It detects when the same pk for the same entity type is revisited. But with bidirectional cross-type references, the traversal visits different pks at each hop:

```
Department:dept-1 → Building:bldg-1 → Department:dept-2 → Building:bldg-2 → Department:dept-3 → ...
```

Each pk is unique, so the cycle detection never fires. With 3,000 interconnected entities, the recursion depth exceeds the JS call stack (~10,000 frames).
Console log proof
We added depth tracking to `GlobalCache.getEntity` and captured the denormalization chain at depth 200:

```
[denorm] depth 200 chain:
[Department]:{"filter...
  → Department:a1b2c3d4
  → Building:e5f6g7h8
  → Department:i9j0k1l2
  → Building:m3n4o5p6
  → Department:q7r8s9t0
  → Building:u1v2w3x4
  → Department:y5z6a7b8
  → ... (200 unique entity pks, still going)
```
Every entity in the chain has a different pk. The chain alternates between entity types, visiting thousands of unique entities before overflowing.
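For reference, the depth tracking works roughly like the standalone sketch below (our actual patch lives inside `GlobalCache.getEntity`; the `tracked` helper and labels here are illustrative, not data-client code). A shared stack records each entity on the live denormalization path and dumps the chain when it reaches the threshold:

```typescript
// Illustrative depth tracker: push a label on entry, pop on exit, and log
// the full chain when the live recursion depth hits 200.
const chain: string[] = [];

function tracked<T>(label: string, fn: () => T): T {
  chain.push(label);
  if (chain.length === 200) {
    console.log(`[denorm] depth 200 chain:\n  ${chain.join('\n  → ')}`);
  }
  try {
    return fn();
  } finally {
    chain.pop(); // unwind so `chain` always mirrors the current call stack
  }
}

// Each nested getEntity-style call wraps its recursion:
const depth = tracked('Department:dept-1', () =>
  tracked('Building:bldg-1', () => chain.length),
);
console.log(depth); // 2
```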
Key observations
- It doesn't crash on initial load — only after enough entities from related types accumulate in the cache (e.g., loading a list of buildings, then switching to a list of departments, so both are now in the cache).
- `page[size]: 500` × 5 pages = 2,500 primary entities plus ~20,000 included entities in the cache. `useSuspense` denormalizes 500 primary entities synchronously, traversing the full connected graph.
- The problem is depth, not cycles — even without true cycles, 3,000 entities connected via `department.buildings → building.departments → ...` create a traversal chain thousands of frames deep through unique pks.
Expected behavior
Denormalization should not overflow the stack regardless of how many entities are in the cache or how they're connected.
Proposed solutions
Option A: Global depth limit
Add a configurable max denormalization depth. Beyond that depth, return entity pks instead of fully resolved entities (similar to how same-pk cycle detection returns early).
```ts
class Department extends Entity {
  static schema = {
    buildings: [Building],
  };
  static maxDenormDepth = 3; // Stop resolving after 3 levels
}
```
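Internally, Option A could look roughly like the sketch below. Everything here is hypothetical (the `resolve` function, the flat `Tables`/`Rel` shapes, and the relation map are illustrations, not data-client internals): once the cap is reached, the resolver returns the bare pk instead of recursing:

```typescript
type Tables = Record<string, Record<string, any>>;
type Rel = { field: string; type: string };

// Depth-capped resolver: beyond maxDepth, hand back the pk string
// (the same shape the existing same-pk cycle guard returns early with).
function resolve(
  tables: Tables,
  rels: Record<string, Rel[]>,
  type: string,
  pk: string,
  depth: number,
  maxDepth: number,
): any {
  const raw = tables[type]?.[pk];
  if (!raw || depth >= maxDepth) return pk; // cap hit: stop resolving
  const out: Record<string, any> = { ...raw };
  for (const { field, type: relType } of rels[type] ?? []) {
    const ids: string[] = raw[field] ?? [];
    out[field] = ids.map(id =>
      resolve(tables, rels, relType, id, depth + 1, maxDepth),
    );
  }
  return out;
}

// Bidirectional graph like the reproduction: dept-1 ↔ bldg-1.
const tables: Tables = {
  Department: { 'dept-1': { id: 'dept-1', buildings: ['bldg-1'] } },
  Building: { 'bldg-1': { id: 'bldg-1', departments: ['dept-1'] } },
};
const rels: Record<string, Rel[]> = {
  Department: [{ field: 'buildings', type: 'Building' }],
  Building: [{ field: 'departments', type: 'Department' }],
};

const dept = resolve(tables, rels, 'Department', 'dept-1', 0, 3);
// At depth 3 the innermost value is the bare pk string, not an entity.
console.log(JSON.stringify(dept));
```

(The sketch only handles list relations for brevity; singular relations like `parent` would need the same treatment.)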
Option B: Lazy relationship resolution (like Ember Data)
Support { async: true } or { lazy: true } on schema relationships. Instead of eagerly resolving during denormalization, return a proxy/getter that resolves on first property access:
```ts
class Department extends Entity {
  static schema = {
    buildings: { schema: [Building], lazy: true }, // resolved on access, not during denormalize
  };
}
```
This follows Ember Data's pattern of { async: true } relationships and would fundamentally prevent deep traversal since relationships are only resolved when explicitly accessed.
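A rough sketch of how lazy resolution could work internally (this is an assumption, not an existing data-client feature; `lazyEntity` and the `Tables`/`Rel` shapes are invented for illustration). A getter defers the lookup until the field is first read, so building the result object never recurses:

```typescript
type Tables = Record<string, Record<string, any>>;
type Rel = { field: string; type: string };

function lazyEntity(
  tables: Tables,
  rels: Record<string, Rel[]>,
  type: string,
  pk: string,
): any {
  const raw = tables[type][pk];
  const out: any = { ...raw };
  for (const { field, type: relType } of rels[type] ?? []) {
    // The relation materializes only when the field is first read, so
    // "denormalization" itself does no graph traversal at all.
    Object.defineProperty(out, field, {
      get: () =>
        (raw[field] ?? []).map((id: string) =>
          lazyEntity(tables, rels, relType, id),
        ),
    });
  }
  return out;
}

const tables: Tables = {
  Department: { 'dept-1': { id: 'dept-1', buildings: ['bldg-1'] } },
  Building: { 'bldg-1': { id: 'bldg-1', departments: ['dept-1'] } },
};
const rels: Record<string, Rel[]> = {
  Department: [{ field: 'buildings', type: 'Building' }],
  Building: [{ field: 'departments', type: 'Department' }],
};

const dept = lazyEntity(tables, rels, 'Department', 'dept-1');
// Constructing `dept` traversed nothing; each property read resolves one level.
console.log(dept.buildings[0].departments[0].id); // dept-1
```

Even an infinitely deep bidirectional graph stays safe here, because each property access only resolves a single level on demand.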
Environment
- @data-client/react: 0.15.7
- @data-client/rest: 0.15.7
- React: 18
- Browser: Chrome 131+
- Dataset: ~3,000 entities per type, bidirectional relationships, JSON:API with `include` strings
Related