At large scale, the diff algorithm scales poorly: it's roughly O(n^2) to begin with, and it becomes much worse at large n when the GC goes into overdrive to keep memory usage below ~512MB (despite us telling V8 to let it go up to 8GB). This has been fixed in this commit by reworking the algorithm to be O(n) and reducing recomputation of set information.
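For illustration, here's a minimal sketch of the shape of the O(n) rework: membership checks go through a `Map` keyed by entity id instead of nested array scans, so each side of the diff is walked exactly once. The names (`Entity`, `diffEntities`, the `hash` field) are hypothetical, not the actual code in the commit.

```ts
// Hypothetical sketch: diff two entity lists in O(n) by keying the
// "before" side on id, instead of scanning it once per "after" entry.
interface Entity {
  id: string;
  hash: string; // precomputed content hash, so comparisons are O(1)
}

function diffEntities(before: Entity[], after: Entity[]) {
  const beforeById = new Map(before.map((e) => [e.id, e])); // O(n) build
  const added: Entity[] = [];
  const changed: Entity[] = [];

  for (const e of after) {
    const prev = beforeById.get(e.id); // O(1) lookup instead of an O(n) scan
    if (prev === undefined) added.push(e);
    else if (prev.hash !== e.hash) changed.push(e);
    beforeById.delete(e.id); // whatever is left at the end was removed
  }

  return { added, changed, removed: [...beforeById.values()] };
}
```

Building the map is O(n) and each lookup/delete is amortized O(1), so the whole diff is O(n); the only allocations are the map and the result arrays, which also keeps GC pressure bounded.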
At all scales, the TypeORM read logic is not optimized for batch reads: sub-entities are queried one at a time with no caching. According to the official docs, there's nothing we can do here besides writing our own batch read mechanism, either entirely from scratch or by digging into the TypeORM internals, both of which have significant downsides (a rough sketch of the hand-rolled approach is below). There also appears to be a lot of GC activity during this phase, so we might be able to speed it up somewhat if we can figure out the right V8 flags to set, but the phase appears to be mostly IO-bound (because of how it splits up the Postgres queries), so that would only be a minor win.
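For reference, here's a minimal sketch of what a hand-rolled batch read might look like, assuming sub-entities reference their parent through a `parentId` column: collect the parent ids, load all children in a single `IN (...)` query via TypeORM's `In()` operator, and group them in memory. The `Child` entity and field names are hypothetical, not our actual schema.

```ts
import { Column, Entity, In, PrimaryColumn, Repository } from "typeorm";

// Hypothetical sub-entity; stands in for whatever TypeORM would
// otherwise lazy-load one row at a time.
@Entity()
class Child {
  @PrimaryColumn()
  id!: string;

  @Column()
  parentId!: string;
}

// Load all children for a set of parents in one query, instead of
// letting TypeORM issue one query per parent. Returns the children
// grouped by parentId so callers can stitch them back onto parents.
async function loadChildrenBatched(
  childRepo: Repository<Child>,
  parentIds: string[],
): Promise<Map<string, Child[]>> {
  const children = await childRepo.find({
    where: { parentId: In(parentIds) }, // one round trip instead of n
  });

  const byParent = new Map<string, Child[]>();
  for (const child of children) {
    const bucket = byParent.get(child.parentId) ?? [];
    bucket.push(child);
    byParent.set(child.parentId, bucket);
  }
  return byParent;
}
```

This stays on the public repository API, so it avoids touching TypeORM internals, at the cost of bypassing the relation-loading machinery and doing the parent/child stitching ourselves.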