Conversation
…12679)

* perf: optimize schema serialization layer for list, record, object, and isPlainObject

  - list.ts: Replace O(n²) reduce with a push-based loop (eliminates an array copy per item)
  - record.ts: Replace O(n²) reduce with a mutable accumulator (eliminates an object spread per entry)
  - object.ts: Cache property metadata at schema construction time (avoids recomputing it on every call)
  - isPlainObject.ts: Simplify the prototype chain check from 3+ getPrototypeOf calls to 2

  Co-Authored-By: unknown <>

* fix: defer isSchemaRequired check to first use for lazy schema compatibility

  The previous optimization eagerly called isSchemaRequired() at schema construction time, but lazy() schemas are not yet resolved at that point, causing "Cannot read properties of undefined (reading 'getType')" errors. This fix uses lazy memoization: required keys are computed on the first parse()/json() call instead of at construction time, then cached for subsequent calls.

  Co-Authored-By: unknown <>

* chore: add serialization layer benchmarking suite

  Adds a vitest bench suite covering all optimized code paths:

  - list parse/json at 100, 1K, and 10K items
  - record parse/json at 100, 1K, and 5K entries (string + numeric keys)
  - object parse/json with 10, 50, and 100 properties
  - nested IR-like structures (depth 2-3, breadth 5)
  - lazy recursive schemas (depth 2-4, breadth 3)
  - schema construction overhead
  - caching effectiveness (fresh vs. cached schema)

  Run with:
  npx vitest bench --config generators/typescript/utils/commons/vitest.config.ts --dir generators/typescript/utils/core-utilities/tests/unit/schemas/benchmarks

  Co-Authored-By: unknown <>

* perf: use lazy memoization for rawKeyToProperty to eliminate construction overhead

  Replace the eager rawKeyToProperty precomputation with lazy memoization via getRawKeyToProperty(). This defers the schema iteration to the first parse()/json() call and caches the result, giving the best of both worlds:

  - Schema construction: ~0.98x of baseline (essentially free; it was 0.60-0.75x with precompute)
  - Runtime parse/json: the same speedups as before (1.5-2000x depending on the path)

  The pattern: define nothing at construction time, compute and cache on first use. This is especially valuable when many schemas are defined at module load time but only a subset are exercised per request.

  Co-Authored-By: unknown <>

* perf: round 2 optimizations - cached Set, for-in loops, eliminate rest spread

  Additional optimizations building on round 1:

  1. object.ts: Cache requiredKeysSet alongside the requiredKeys arrays, eliminating per-call Set construction. Use a for-in loop instead of Object.entries() to avoid an intermediate array allocation. Track missing required keys with a counter instead of cloning the Set.
  2. record.ts: Replace the entries() wrapper (Object.entries + tuple allocation) with a for-in loop + direct property access.
  3. union.ts: Replace the destructuring rest spread ({[discriminant]: val, ...rest}) with an explicit for-in loop to avoid creating an intermediate object.
  4. getObjectLikeUtils.ts: Replace the O(n²) reduce+spread pattern in withParsedProperties with a for-in loop + direct assignment.

  Benchmark: 8.75x geometric mean vs. baseline (runtime); 1.58x additional over round 1.

  Co-Authored-By: unknown <>

* fix: correct .js import extensions in CLI and generator-cli copies

  Fix barrel import paths: '../object-like' -> '../object-like/index.js' for all copies that use .js extensions in their imports.

  Co-Authored-By: unknown <>

* fix: use cached hasOwnProperty ref to prevent esbuild Object.hasOwn transformation

  Co-Authored-By: unknown <>

* fix: revert non-generator changes and exclude benchmarks from generated SDKs

  Co-Authored-By: unknown <>

* chore: add versions.yml entry for serialization performance optimization

  Co-Authored-By: unknown <>

* fix: bump version to 3.49.0 for feat changelog type

  Co-Authored-By: unknown <>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
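The reduce-to-loop change described for list.ts and record.ts in the first commit above boils down to the pattern below. This is a minimal sketch; `Item` and `serializeItem` are illustrative stand-ins, not the generator's actual API.

```typescript
type Item = { id: number };

// Stand-in for the per-item serialization work.
const serializeItem = (item: Item): Item => ({ ...item });

// Before: reduce copies the accumulator array on every iteration -> O(n²).
function serializeListQuadratic(items: Item[]): Item[] {
    return items.reduce<Item[]>((acc, item) => [...acc, serializeItem(item)], []);
}

// After: a push-based loop mutates a single array -> O(n).
function serializeListLinear(items: Item[]): Item[] {
    const result: Item[] = [];
    for (const item of items) {
        result.push(serializeItem(item));
    }
    return result;
}

// The record.ts fix is analogous: replace `{ ...acc, [key]: value }` per
// entry with direct assignment `acc[key] = value` on one mutable object.
```

Both functions produce the same output; only the allocation behavior differs, which is why the change shows up at 1K-10K items in the benchmark suite.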
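The lazy-memoization pattern from the rawKeyToProperty commit above can be sketched as follows. `createObjectSchema`, the `Property` shape, and this `parse()` are simplified assumptions standing in for the real schema builder; only the compute-on-first-use-then-cache structure is the point.

```typescript
interface Property {
    rawKey: string;
    parsedKey: string;
}

function createObjectSchema(properties: Property[]) {
    // Nothing is computed at construction time, so defining many schemas at
    // module load stays cheap, and lazy() schemas are not forced too early.
    let rawKeyToProperty: Record<string, Property> | undefined;

    const getRawKeyToProperty = (): Record<string, Property> => {
        if (rawKeyToProperty === undefined) {
            rawKeyToProperty = {};
            for (const property of properties) {
                rawKeyToProperty[property.rawKey] = property;
            }
        }
        return rawKeyToProperty;
    };

    return {
        parse(raw: Record<string, unknown>): Record<string, unknown> {
            // Computed on the first call, cached for every call after.
            const lookup = getRawKeyToProperty();
            const parsed: Record<string, unknown> = {};
            for (const rawKey in lookup) {
                parsed[lookup[rawKey].parsedKey] = raw[rawKey];
            }
            return parsed;
        }
    };
}
```

The same shape explains the earlier isSchemaRequired fix: deferring the lookup until the first parse()/json() call means lazy() schemas have resolved by the time they are inspected.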
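The union.ts change (item 3 of the round-2 commit above) replaces a rest-spread destructure with an explicit loop. A sketch of the idea, with hypothetical names (`stripDiscriminant` is not the generator's real function):

```typescript
function stripDiscriminant(
    parsed: Record<string, unknown>,
    discriminant: string
): { discriminantValue: unknown; rest: Record<string, unknown> } {
    // Before: const { [discriminant]: discriminantValue, ...rest } = parsed;
    // The rest spread builds `rest` through an internal property copy; the
    // explicit for-in loop writes the surviving keys straight into the
    // output object instead.
    const rest: Record<string, unknown> = {};
    let discriminantValue: unknown;
    for (const key in parsed) {
        if (key === discriminant) {
            discriminantValue = parsed[key];
        } else {
            rest[key] = parsed[key];
        }
    }
    return { discriminantValue, rest };
}
```

The getObjectLikeUtils.ts fix in the same commit is the same move applied to a reduce+spread accumulator: one loop, direct assignment, no per-key object copies.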
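The cached-hasOwnProperty fix above can be sketched like this (the `hasOwn` helper name is an assumption). Per the commit, calling `obj.hasOwnProperty(key)` directly was being rewritten by esbuild into `Object.hasOwn`, which is unavailable on older runtimes; holding a module-level reference keeps the call as a plain `Function.prototype.call`.

```typescript
// Cache the reference once; the bundler sees an ordinary .call() and
// leaves it alone. This also works for objects with a null prototype.
const hasOwnProperty = Object.prototype.hasOwnProperty;

function hasOwn(obj: object, key: string): boolean {
    return hasOwnProperty.call(obj, key);
}
```

As a side benefit, the cached reference is immune to an object shadowing `hasOwnProperty` with its own key, which matters when parsing arbitrary user JSON.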
Co-authored-by: Swimburger <Swimburger@users.noreply.github.com>
Co-authored-by: unknown <> Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
…2692)

Co-Authored-By: unknown <>
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>

…onsistency and stability (#12691)

Co-authored-by: Niels Swimberghe <3382717+Swimburger@users.noreply.github.com>

…e ETE test resilience (#12689)

Co-authored-by: unknown <>
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Created by pull[bot] (v2.0.0-alpha.4)