fix(normalizer): handle cast-wrapped timestamps, NULL defaults, and numeric overflow #238
Both commands previously returned the raw `VerifyDatabaseSchemaResult` on schema verification failure instead of the standard `CliErrorEnvelope`, forcing agents that parse `--json` output to handle two different shapes. Now both commands map schema verification failures through a new `errorSchemaVerificationFailed` factory (PN-RTM-3004), which wraps the full verification tree in `meta.verificationResult`. Human TTY output still renders the tree before the error summary.
Add 13 journey-based e2e test files covering 30 CLI scenarios (A-Z + P2-P5)
across greenfield setup, schema evolution, brownfield adoption, drift detection,
migration edge cases, and error scenarios. Each journey is a single it() block
with descriptive assertion labels for step-level failure identification.
Infrastructure:
- journey-test-helpers.ts: CommandResult-based runners for all CLI commands,
contract swap helper, JSON parsing, SQL helper
- 5 contract fixtures (base, additive, destructive, add-table, v3)
- Config templates with {{DB_URL}} placeholder replacement
Known limitations (marked as TODO in tests):
- db schema-verify and db update hit Vite SSR module resolution error
after DDL changes (PN-CLI-4999) — affects drift-schema journeys M/N
- Migration chain breakage recovery (P3) requires manual intervention
- Target mismatch (U) needs better simulation approach
The "Vite SSR error" was actually a stale build of @prisma-next/core-control-plane missing the errorSchemaVerificationFailed export in dist. After a rebuild, db schema-verify and db sign properly return exit code 1 for verification failures.

Journey M: Phantom drift now fully tests verify false-positive → schema-verify catches drift. db update recovery fails as expected (the planner can't re-add dropped NOT NULL columns — documented as a known limitation).

Journey N: Extra column drift now tests verify → tolerant/strict schema-verify → expand contract → db update (with --no-interactive fallback to -y for confirmation handling) → tolerant verify passes.

Also restores exact exit code 1 assertions for schema-verify and db sign failures in the brownfield-adoption and drift-marker journeys.
…rrel

The function was defined in errors.ts but missing from the exports barrel, causing a runtime `undefined` when imported via the package's public API. This was the root cause of the PN-CLI-4999 "Vite SSR error" in e2e tests — the function was available in source but not in built dist.
…name)
The test was deleting the wrong migration directory — 'initial' sorted last
alphabetically (i > a) but was the chain root (∅→base), not the intended
target 'add-posts' (additive→v3). After deleting the root, the planner
correctly couldn't reconstruct the graph.
Fix: find the migration dir by name suffix ('_add_posts') instead of
relying on alphabetical sort. Now P3.03–P3.04 fully test the recovery
flow: re-plan the missing edge → apply succeeds.
Instead of tampering with the marker row (which only affects stored JSON, not the hash comparison), edit contract.json on disk to change the target field from "postgres" to "mysql". db verify compares contractIR.target against config.target.targetId, so this triggers PN-RTM-3003.
Remove unused parseJsonOutput import and add commander as devDependency so the Command type import in journey-test-helpers resolves.
Tolerant vs strict schema-verify is already tested in Journey N (drift-schema.e2e.test.ts) with extra columns. Journey H tested the same code path with extra tables, adding no unique coverage.
…test.ts P4 (partial apply/resume) and P5 (no migration path) were explicitly noted in the plan as already covered by the command-specific test. P5's recovery pattern is also identical to P3 (chain breakage).
Q (apply noop), R (plan noop), and X (show variants) are now tail steps of Journey B. They reuse B's already-applied migration chain instead of spinning up 3 separate PGlite instances for 4 assertions.
Journey I's JSON assertions are already covered per-command in A.09, A.10, B.10, and isolated command tests. Journey J (help output) has near-zero regression prevention value. Journey Y (global flags) is retained as a lightweight no-DB test.
- 30 → 22 journeys, 13 → 10 test files
- Removed: H (redundant with N), I (JSON covered inline), J (help), P4 and P5 (covered by command-specific tests)
- Merged: Q, R, X as tail steps of Journey B
- Updated cross-reference matrix, file structure, acceptance criteria, implementation phases, and parallelism estimates
- `pnpm test:journeys` from root runs only journey e2e tests
- Dedicated vitest config with forks pool (4 workers) and spinUpPpgDev timeouts for PGlite-backed tests
- Wired through turbo.json for proper build dependency resolution
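A minimal sketch of what such a dedicated config could look like. The option names (`pool`, `poolOptions.forks.maxForks`, `testTimeout`) are real Vitest options; the glob, worker count, and timeout values here are illustrative assumptions, not the repo's actual config:

```typescript
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    include: ['test/cli-journeys/**/*.e2e.test.ts'],
    pool: 'forks',                           // one child process per worker
    poolOptions: { forks: { maxForks: 4 } }, // cap parallelism for PGlite
    testTimeout: 120_000,                    // dev-database spin-up is slow
    hookTimeout: 120_000,
  },
});
```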
Add a self-contained README to cli-journeys/ summarizing all 11 test files by what they cover. Rewrite top-level JSDoc comments in each test file to describe scenarios in plain words instead of opaque codes. Also gitignore the vite-plugin output/ fixture directory.
Reverts the errorSchemaVerificationFailed wrapping introduced in 0f57e7f. The db sign e2e test expects the raw VerifyDatabaseSchemaResult shape (with `schema` at top level) in JSON failure output, not a CliErrorEnvelope.
- Initialize all closeDb declarations with no-op default to prevent TypeError if createDevDatabase() fails in beforeAll
- Use callback form for DB_URL replacement to prevent special character corruption in connection strings (e.g., $& in passwords)
- Re-throw unexpected errors in runCommand/runCommandRaw when no CLI exit code was captured, preventing silent masking of real regressions
- Run V.02 (dry-run) before V.01 (mutating db init) so the dry-run validates the pre-mutation state
- Merge redundant B.05/B.06 steps — both called runMigrationStatus with the same DB-connected context; consolidated into one step
- Fix misleading comments in Journey N (age drift stays unresolved; test intentionally validates tolerant mode)
- Clarify README: "each journey (describe block)" not just "describe"
- Replace `node:path` with `pathe` for consistent cross-platform paths
- Remove try/catch in getMigrationDirs that masked broken test setup
…umeric overflow
parsePostgresDefault now correctly normalizes:
- Cast-wrapped timestamp defaults: ('now'::text)::timestamp without time zone,
now()::timestamptz, CURRENT_TIMESTAMP::timestamp with time zone, etc.
- NULL and NULL::type defaults to { kind: 'literal', value: null }
- Rejects non-finite numeric defaults (Infinity from enormous literals)
Also: renamed misleading _nativeType parameter, pre-compiled all inline
regexes to module-level constants, simplified NULL_PATTERN, removed
duplicate verification-level tests, added negative timestamp test cases,
and confirmed extension types (citext, ltree, uuid, etc.) pass through
the type normalizer unchanged.
@prisma-next/runtime-executor
@prisma-next/sql-runtime
@prisma-next/extension-paradedb
@prisma-next/extension-pgvector
@prisma-next/postgres
@prisma-next/sql-orm-client
@prisma-next/contract-authoring
@prisma-next/contract-ts
@prisma-next/ids
@prisma-next/psl-parser
@prisma-next/cli
@prisma-next/emitter
@prisma-next/eslint-plugin
@prisma-next/migration-tools
@prisma-next/vite-plugin-contract-emit
@prisma-next/sql-contract
@prisma-next/sql-errors
@prisma-next/sql-operations
@prisma-next/sql-schema-ir
@prisma-next/sql-contract-psl
@prisma-next/sql-contract-ts
@prisma-next/sql-contract-emitter
@prisma-next/family-sql
@prisma-next/sql-kysely-lane
@prisma-next/sql-lane-query-builder
@prisma-next/sql-relational-core
@prisma-next/sql-lane
@prisma-next/target-postgres
@prisma-next/adapter-postgres
@prisma-next/driver-postgres
@prisma-next/core-control-plane
@prisma-next/core-execution-plane
@prisma-next/config
@prisma-next/contract
@prisma-next/operations
@prisma-next/plan
@prisma-next/utils
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/3-targets/6-adapters/postgres/src/core/default-normalizer.ts (1)
98-105: ⚠️ Potential issue | 🟠 Major

`Number.isFinite()` accepts lossy numerics, collapsing distinct Postgres defaults into identical values. Very large integers and high-precision decimals round during `Number()` conversion but still pass `Number.isFinite()` checks. For example, `'9007199254740993'` becomes `9007199254740992` (precision loss confirmed) and `'0.12345678901234567890123456789'` becomes `0.12345678901234568`. This causes distinct Postgres default values to normalize identically, corrupting the comparison model.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/3-targets/6-adapters/postgres/src/core/default-normalizer.ts` around lines 98 - 105, The numeric normalization currently converts matched strings with Number(trimmed) and accepts any finite result, which lets lossy conversions (e.g., beyond Number.MAX_SAFE_INTEGER or high-precision decimals) slip through; change the logic in the numeric branch (the block using NUMERIC_PATTERN and isBigInt) to detect precision loss and refuse to normalize lossy values: after computing const num = Number(trimmed) keep the existing BigInt branch for isBigInt, but otherwise check that the numeric conversion is precision-preserving (for integers use Number.isSafeInteger(num) and for decimals compare a canonicalized string of num to a canonicalized trimmed input or use a precision-stable test) and if the value is not preserved return undefined instead of returning num; update the function where NUMERIC_PATTERN and isBigInt are used so only precision-preserving numbers are normalized.
🧹 Nitpick comments (9)
packages/3-targets/6-adapters/postgres/test/control-adapter.test.ts (1)
1-1653: Consider splitting this test file in a follow-up.

The file is 1653 lines, significantly exceeding the 500-line guideline. Given that `control-adapter.defaults.test.ts` already exists as a companion file, consider further splitting by functionality (e.g., `control-adapter.introspect.test.ts` for the introspection tests, keeping type normalization tests separate). This would improve navigability and maintainability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/3-targets/6-adapters/postgres/test/control-adapter.test.ts` around lines 1 - 1653, This test file is too large; split the tests into smaller focused files: move all introspection-related specs (describe('introspect', all its it blocks, and the USER-DEFINED enum tests) into a new control-adapter.introspect.test.ts that imports PostgresControlAdapter and reuses createMockDriver, and move the normalizeSchemaNativeType table-driven tests into control-adapter.normalize.test.ts; keep any shared helpers (createMockDriver, includes, QueryHandler type) in a small helper file or export them from a new control-adapter.test-utils.ts and update imports so PostgresControlAdapter and normalizeSchemaNativeType continue to be tested with the same fixtures.

packages/2-sql/3-tooling/family/test/schema-verify.defaults.test.ts (1)
1-806: File exceeds 500-line limit (806 lines) — split by functionality.

As per coding guidelines, test files should be kept under 500 lines. This file can be split along the existing `describe` block boundaries:

| New file | Describe block | Lines |
| --- | --- | --- |
| schema-verify.defaults.basic.test.ts | verifySqlSchema - defaults | ~305 |
| schema-verify.defaults.timestamps.test.ts | verifySqlSchema - timestamp default normalization | ~70 |
| schema-verify.defaults.null.test.ts | verifySqlSchema - null default normalization | ~35 |
| schema-verify.defaults.casts.test.ts | verifySqlSchema - string literal defaults with type casts | ~320 |

Alternatively, combine the smaller timestamp and null blocks into a single `schema-verify.defaults.normalization.test.ts` file, and keep the cast tests separate. The `testNormalizer` helper can be extracted to `schema-verify.helpers.ts` alongside the existing helpers to avoid duplication across split files.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/3-tooling/family/test/schema-verify.defaults.test.ts` around lines 1 - 806, Split the oversized test file by moving the testNormalizer helper into the existing helpers module (export testNormalizer from schema-verify.helpers) and then create multiple test files that each contain one or more describe blocks (e.g. move the "verifySqlSchema - defaults" block to schema-verify.defaults.basic.test.ts, the "verifySqlSchema - timestamp default normalization" and "verifySqlSchema - null default normalization" blocks into schema-verify.defaults.normalization.test.ts, and the "verifySqlSchema - string literal defaults with type casts" block into schema-verify.defaults.casts.test.ts) — update imports in each new test file to import testNormalizer, verifySqlSchema, createTestContract, createTestSchemaIR, createContractTable, createSchemaTable, and emptyTypeMetadataRegistry from their helpers/modules so each file stays under 500 lines and re-exports/uses testNormalizer centrally.

packages/3-targets/6-adapters/postgres/test/control-adapter.defaults.test.ts (1)
381-470: Split these parser suites by concern.

The new timestamp, NULL, and extension cases are separate concerns, and this file is now at the repo's test-file size ceiling. Moving them into feature-specific files will keep future default-normalizer changes easier to navigate and extend.

As per coding guidelines, "Keep test files under 500 lines to maintain readability and navigability. If a test file exceeds this limit, it should be split into multiple files." and "Split test files when they exceed 500 lines, contain multiple distinct concerns that can be logically separated, or have multiple top-level `describe` blocks that can be split by functionality."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/3-targets/6-adapters/postgres/test/control-adapter.defaults.test.ts` around lines 381 - 470, The test file currently mixes three distinct concerns under parsePostgresDefault (timestamp-normalization, NULL-normalization, and extension-type defaults); split them into three feature-specific test files (e.g., control-adapter.defaults.timestamp.test.ts, control-adapter.defaults.null.test.ts, control-adapter.defaults.extensions.test.ts), moving the corresponding describe blocks and their it.each cases for timestamp casts, NULL cases, and extension/uuid/jsonb cases respectively; ensure each new file imports parsePostgresDefault and any helper types the original file used, remove the moved blocks from the original file, and run tests to confirm no missing imports or duplicated top-level describes remain.

test/integration/test/fixtures/cli/cli-e2e-test-app/fixtures/cli-journeys/contract.ts (1)
1-17: Single-source the baseline journey contract.

`contract.ts` and `contract-base.ts` are intentionally identical right now. That duplication will drift the next time the baseline schema changes, and then the fixture app's starting contract and the "reset" contract stop matching. Please move the shared definition behind a small helper so both files stay locked together. As per coding guidelines, "Don't add comments if avoidable; prefer code which expresses its intent."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/test/fixtures/cli/cli-e2e-test-app/fixtures/cli-journeys/contract.ts` around lines 1 - 17, Extract the shared contract definition into a small helper function (e.g., createBaseContract) that returns the result of defineContract<CodecTypes>() configured with postgresPack, int4Column, textColumn, the 'user' table and the 'User' model; export that helper and then have both contract.ts and contract-base.ts simply import it and set export const contract = createBaseContract(), so the single source of truth is the helper (referencing symbols: defineContract, CodecTypes, postgresPack, int4Column, textColumn, and export const contract).

test/integration/test/cli-journeys/config-errors.e2e.test.ts (1)
10-11: Prefer `pathe` over `node:path` for path operations.

Per repository conventions, use `pathe` instead of `node:path` for consistent cross-platform behavior.

♻️ Proposed fix

```diff
 import { writeFileSync } from 'node:fs';
-import { join } from 'node:path';
+import { join } from 'pathe';
```

Based on learnings: "In the Prisma-next repository, prefer importing and using 'pathe' over the built-in 'node:path' module for path operations across the codebase."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/test/cli-journeys/config-errors.e2e.test.ts` around lines 10 - 11, The import uses node:path for join which violates repo convention to use pathe; replace "import { join } from 'node:path';" with "import { join } from 'pathe';" and ensure any subsequent uses of join (e.g., in this test) remain the same; keep writeFileSync imported from 'node:fs' as-is.test/integration/test/cli-journeys/connection-and-contract-errors.e2e.test.ts (1)
16-18: Use `pathe` instead of `node:path`.

This repo standardizes on `pathe` for path manipulation in TypeScript.

♻️ Suggested change

```diff
 import { readFileSync, writeFileSync } from 'node:fs';
-import { join } from 'node:path';
+import { join } from 'pathe';
```

As per coding guidelines, "`**/*.{ts,tsx,mts,cts}`: Use `pathe` instead of `node:path` for path manipulation in TypeScript files."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/test/cli-journeys/connection-and-contract-errors.e2e.test.ts` around lines 16 - 18, Replace the node:path import with pathe: change the import that currently reads "import { join } from 'node:path'" to import join from 'pathe' (or destructured import { join } from 'pathe' consistent with repo style) so all path operations in this test (look for the join usage in connection-and-contract-errors.e2e.test.ts) use pathe; keep other fs imports (readFileSync, writeFileSync) untouched and run typecheck to ensure no other references to node:path remain.test/integration/test/cli-journeys/schema-evolution-migrations.e2e.test.ts (1)
15-17: Use `pathe` instead of `node:path`.

This matters here because the joined value is later passed back to the CLI at Line 145.

♻️ Suggested change

```diff
 import { readdirSync } from 'node:fs';
-import { join } from 'node:path';
+import { join } from 'pathe';
```

As per coding guidelines, "`**/*.{ts,tsx,mts,cts}`: Use `pathe` instead of `node:path` for path manipulation in TypeScript files."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/test/cli-journeys/schema-evolution-migrations.e2e.test.ts` around lines 15 - 17, Replace the use of node:path with pathe for path manipulation: update the import that currently says "import { join } from 'node:path';" to "import { join } from 'pathe';" so the test uses pathe's join when building paths passed to the CLI (refer to the imports at the top where readdirSync and join are imported); no other changes to readdirSync import are required.test/integration/test/cli-journeys/drift-migration-dag.e2e.test.ts (1)
10-12: Use `pathe` instead of `node:path`.

This repo standardizes on `pathe` for path manipulation in TypeScript.

♻️ Suggested change

```diff
 import { readdirSync, rmSync } from 'node:fs';
-import { join } from 'node:path';
+import { join } from 'pathe';
```

As per coding guidelines, "`**/*.{ts,tsx,mts,cts}`: Use `pathe` instead of `node:path` for path manipulation in TypeScript files."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/test/cli-journeys/drift-migration-dag.e2e.test.ts` around lines 10 - 12, Replace the import of join from 'node:path' with the standardized 'pathe' import used across the repo: locate the import line that currently reads "import { join } from 'node:path';" in the test file and change it to import join (or path-appropriate function) from 'pathe' so all path manipulation in drift-migration-dag.e2e.test.ts uses pathe; keep other fs imports (readdirSync, rmSync) unchanged.test/integration/test/utils/journey-test-helpers.ts (1)
383-393: Consider top-level import instead of dynamic import.

The `withClient` is dynamically imported from `@prisma-next/test-utils`, but this package is already a direct dependency used in the test files that consume these helpers. A static import at the top of the file would be simpler and avoid the async overhead.

Proposed fix

Add to imports at top of file:

```typescript
import { withClient } from '@prisma-next/test-utils';
```

Then simplify the function:

```diff
 export async function sql(
   connectionString: string,
   query: string,
   params?: unknown[],
 ): Promise<{ rows: Record<string, unknown>[] }> {
-  const { withClient } = await import('@prisma-next/test-utils');
   return withClient(connectionString, async (client) => {
     const result = await client.query(query, params);
     return { rows: result.rows as Record<string, unknown>[] };
   });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/test/utils/journey-test-helpers.ts` around lines 383 - 393, The helper function sql currently does a dynamic import of withClient which adds unnecessary async overhead; replace the dynamic import with a top-level static import ("import { withClient } from '@prisma-next/test-utils';") and update the sql function to call withClient(connectionString, ...) directly (remove the await import(...) line), keeping the same parameter names and return shape so function sql and references to withClient remain consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/3-targets/6-adapters/postgres/src/core/default-normalizer.ts`:
- Around line 27-43: The isTimestampDefault helper currently unwraps casted
expressions and treats clock_timestamp() the same as now(), which is incorrect;
update isTimestampDefault to detect clock_timestamp() separately (e.g., add a
CLOCK_TIMESTAMP_PATTERN regex) and ensure that if the inner expression matches
clock_timestamp() (including casted forms via TIMESTAMP_CAST_SUFFIX or
TEXT_CAST_SUFFIX) it is not collapsed/treated as NOW_LITERAL_PATTERN—check for
CLOCK_TIMESTAMP_PATTERN before using TEXT_CAST_SUFFIX replacement or returning
NOW_LITERAL_PATTERN so clock_timestamp() remains distinct from now().
In `@test/integration/test/cli-journeys/brownfield-adoption.e2e.test.ts`:
- Around line 89-97: The test currently runs runDbUpdate(ctx) then immediately
runs runDbSign(ctx, ['--json']), which can mask failures of the update; insert a
call to runDbVerify(ctx) immediately after runDbUpdate(ctx) (before runDbSign)
and assert its exitCode is 0 to ensure the DB is aligned, then keep
runDbSign(ctx, ['--json']) as a separate verification step and continue to
parseJsonOutput(signJson) as before; update references in assertions to use
runDbVerify and ensure the error message labels (e.g., 'F.07: db verify')
reflect this check.
In
`@test/integration/test/cli-journeys/connection-and-contract-errors.e2e.test.ts`:
- Around line 171-178: The test currently accepts both exit codes for runDbInit
(variables dryRun and init checking exitCode) which masks planner failures;
update the assertions to pin expected behavior: for dryRun (variable dryRun)
assert the single expected exitCode (e.g., 0) or, if multiple exit codes are
truly valid, assert the corresponding diagnostic text appears on dryRun.stdout;
do the same for init (variable init) — either require the specific exitCode the
CLI guarantees or require a specific machine-readable diagnostic string to be
present in init.stdout so the test verifies the reason for success vs conflict
rather than accepting any outcome.
In `@test/integration/test/cli-journeys/db-update-workflows.e2e.test.ts`:
- Around line 133-137: After the E.07 JSON accept check, add a verification step
to ensure the destructive update actually applied and schema/marker alignment is
intact by invoking runDbVerify(ctx) or performing a no-op runDbUpdate(ctx,
['--dry-run']) after parsing jsonAccept/jsonSuccess; update the test block
around runDbUpdate, jsonAccept and jsonSuccess to call runDbVerify (or dry-run)
and assert its exitCode is 0 so partial-apply regressions are caught.
In `@test/integration/test/cli-journeys/drift-migration-dag.e2e.test.ts`:
- Around line 88-95: After calling runMigrationApply(ctx) in the P3.04 step, add
a verification call to ensure the recovered state is consistent: invoke
runMigrationStatus(ctx) or runDbVerify(ctx) and assert its exitCode is 0 (and
optionally assert expected status/output) so the test fails if the re-planned
additive→v3 migration applied but left markers/DB inconsistent; update the
assertions adjacent to runMigrationApply to use the chosen verifier function.
In `@test/integration/test/cli-journeys/drift-schema.e2e.test.ts`:
- Around line 147-153: The test omits asserting the successful path when
runDbUpdate(ctx, ['--no-interactive']) returns exitCode 0; add an explicit
assertion that update.exitCode is 0 (with a helpful message) in the branch where
update.exitCode === 0 so the success case is verified; keep the existing retry
logic for non-zero exitCode that calls runDbUpdate and asserts updateY.exitCode
=== 0; reference the runDbUpdate call and the update/updateY variables and their
exitCode properties when adding the assertion.
In `@test/integration/test/cli-journeys/help-and-flags.e2e.test.ts`:
- Around line 17-34: The test 'Y.01: --no-color suppresses ANSI codes in stdout'
currently only asserts output exists; add an explicit assertion that
result.stdout contains no ANSI escape sequences by checking result.stdout
against a regex for ANSI codes (e.g., /\x1B\[[0-9;]*[A-Za-z]/) and assert it
does not match; update the test that uses setupJourneyNoDb and runContractEmit
to include this negative match on result.stdout so the test truly verifies ANSI
codes are absent.
In `@test/integration/test/cli-journeys/multi-step-migration.e2e.test.ts`:
- Around line 69-83: The test currently checks only for the presence of
"Pending"/"Applied" strings from runMigrationStatus output (after stripAnsi)
which can pass with incorrect counts; update the assertions to verify exact
migration counts or names by invoking runMigrationStatus with the --json flag
(or parsing structured output) and asserting that the returned JSON shows
exactly 2 pending before runMigrationApply and 0 pending (and both migration
names present in applied) after; locate and update the calls to
runMigrationStatus and the corresponding expects (and keep runMigrationApply
as-is) so you assert counts/names from the parsed JSON rather than just
containing "Pending"/"Applied".
In `@test/integration/test/cli-journeys/schema-evolution-migrations.e2e.test.ts`:
- Around line 198-212: The test currently accepts either a successful migration
apply or a silent failure followed by db update, which leaves behavior unpinned;
update the block around runMigrationApply(ctx) so you explicitly handle the two
scenarios: if apply.exitCode !== 0 assert the expected failure diagnostic is
emitted on the machine-readable channel (check apply.stdout for the specific
error message or code your CLI returns) before calling runDbUpdate(ctx) and
asserting update.exitCode === 0; otherwise assert apply.exitCode === 0 and
assert the expected success diagnostic is present in apply.stdout. Reference
runMigrationApply and runDbUpdate to locate and change the logic.
In
`@test/integration/test/fixtures/cli/cli-e2e-test-app/fixtures/cli-journeys/prisma-next.config.ts`:
- Around line 7-20: Add an explicit driver field to the config by setting
driver: postgresDriver in the exported defineConfig call; specifically update
the object that currently has family: sql, target: postgres, adapter:
postgresAdapter to also include driver: postgresDriver so the
family/target/adapter/driver tuple (in defineConfig) enforces type-level
compatibility and matches familyId/targetId conventions.
manipulation: update the import that currently says "import { join } from
'node:path';" to "import { join } from 'pathe';" so the test uses pathe's join
when building paths passed to the CLI (refer to the imports at the top where
readdirSync and join are imported); no other changes to readdirSync import are
required.
In
`@test/integration/test/fixtures/cli/cli-e2e-test-app/fixtures/cli-journeys/contract.ts`:
- Around line 1-17: Extract the shared contract definition into a small helper
function (e.g., createBaseContract) that returns the result of
defineContract<CodecTypes>() configured with postgresPack, int4Column,
textColumn, the 'user' table and the 'User' model; export that helper and then
have both contract.ts and contract-base.ts simply import it and set export const
contract = createBaseContract(), so the single source of truth is the helper
(referencing symbols: defineContract, CodecTypes, postgresPack, int4Column,
textColumn, and export const contract).
In `@test/integration/test/utils/journey-test-helpers.ts`:
- Around line 383-393: The helper function sql currently does a dynamic import
of withClient which adds unnecessary async overhead; replace the dynamic import
with a top-level static import ("import { withClient } from
'@prisma-next/test-utils';") and update the sql function to call
withClient(connectionString, ...) directly (remove the await import(...) line),
keeping the same parameter names and return shape so function sql and references
to withClient remain consistent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 6ee7f604-1205-4d02-8d2a-fa2d8336a15e
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (33)
- package.json
- packages/1-framework/1-core/migration/control-plane/src/errors.ts
- packages/1-framework/1-core/migration/control-plane/src/exports/errors.ts
- packages/1-framework/3-tooling/cli/src/utils/cli-errors.ts
- packages/2-sql/3-tooling/family/test/schema-verify.defaults.test.ts
- packages/3-targets/6-adapters/postgres/src/core/default-normalizer.ts
- packages/3-targets/6-adapters/postgres/test/control-adapter.defaults.test.ts
- packages/3-targets/6-adapters/postgres/test/control-adapter.test.ts
- test/integration/.gitignore
- test/integration/package.json
- test/integration/test/cli-journeys/README.md
- test/integration/test/cli-journeys/brownfield-adoption.e2e.test.ts
- test/integration/test/cli-journeys/config-errors.e2e.test.ts
- test/integration/test/cli-journeys/connection-and-contract-errors.e2e.test.ts
- test/integration/test/cli-journeys/db-update-workflows.e2e.test.ts
- test/integration/test/cli-journeys/drift-marker.e2e.test.ts
- test/integration/test/cli-journeys/drift-migration-dag.e2e.test.ts
- test/integration/test/cli-journeys/drift-schema.e2e.test.ts
- test/integration/test/cli-journeys/greenfield-setup.e2e.test.ts
- test/integration/test/cli-journeys/help-and-flags.e2e.test.ts
- test/integration/test/cli-journeys/multi-step-migration.e2e.test.ts
- test/integration/test/cli-journeys/schema-evolution-migrations.e2e.test.ts
- test/integration/test/fixtures/cli/cli-e2e-test-app/fixtures/cli-journeys/contract-add-table.ts
- test/integration/test/fixtures/cli/cli-e2e-test-app/fixtures/cli-journeys/contract-additive.ts
- test/integration/test/fixtures/cli/cli-e2e-test-app/fixtures/cli-journeys/contract-base.ts
- test/integration/test/fixtures/cli/cli-e2e-test-app/fixtures/cli-journeys/contract-destructive.ts
- test/integration/test/fixtures/cli/cli-e2e-test-app/fixtures/cli-journeys/contract-v3.ts
- test/integration/test/fixtures/cli/cli-e2e-test-app/fixtures/cli-journeys/contract.ts
- test/integration/test/fixtures/cli/cli-e2e-test-app/fixtures/cli-journeys/prisma-next.config.ts
- test/integration/test/fixtures/cli/cli-e2e-test-app/fixtures/cli-journeys/prisma-next.config.with-db.ts
- test/integration/test/utils/journey-test-helpers.ts
- test/integration/vitest.journeys.config.ts
- turbo.json
clock_timestamp() returns wall-clock time (can differ across rows in a single INSERT), while now()/CURRENT_TIMESTAMP return transaction start time (constant within a transaction). Normalizing them together hid this semantic difference. Split TIMESTAMP_PATTERN into NOW_FUNCTION_PATTERN and CLOCK_TIMESTAMP_PATTERN, and replace isTimestampDefault() with canonicalizeTimestampDefault() that returns the correct canonical form for each.
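The split described above can be sketched as follows. Only the names `NOW_FUNCTION_PATTERN`, `CLOCK_TIMESTAMP_PATTERN`, and `canonicalizeTimestampDefault` come from the commit; the exact regexes and return shape are assumptions, not the production code:

```typescript
// Transaction-start time: now() and CURRENT_TIMESTAMP are interchangeable,
// so both canonicalize to 'now()'. (Regex shapes are illustrative.)
const NOW_FUNCTION_PATTERN = /^(now\(\)|current_timestamp(\(\d*\))?)$/i;
// Wall-clock time: clock_timestamp() can differ across rows in a single
// INSERT, so it must never be folded into 'now()'.
const CLOCK_TIMESTAMP_PATTERN = /^clock_timestamp\(\)$/i;

function canonicalizeTimestampDefault(expression: string): string | undefined {
  const trimmed = expression.trim();
  if (NOW_FUNCTION_PATTERN.test(trimmed)) return 'now()';
  if (CLOCK_TIMESTAMP_PATTERN.test(trimmed)) return 'clock_timestamp()';
  return undefined; // not a recognized timestamp default
}

console.log(canonicalizeTimestampDefault('CURRENT_TIMESTAMP')); // → 'now()'
console.log(canonicalizeTimestampDefault('clock_timestamp()')); // → 'clock_timestamp()'
```

Returning a distinct canonical form per function family is what preserves the semantic difference the old `isTimestampDefault()` boolean erased.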
# Conflicts:
#	packages/1-framework/1-core/migration/control-plane/src/errors.ts
#	test/integration/test/cli-journeys/README.md
#	test/integration/test/cli-journeys/brownfield-adoption.e2e.test.ts
#	test/integration/test/cli-journeys/config-errors.e2e.test.ts
#	test/integration/test/cli-journeys/connection-and-contract-errors.e2e.test.ts
#	test/integration/test/cli-journeys/db-update-workflows.e2e.test.ts
#	test/integration/test/cli-journeys/drift-marker.e2e.test.ts
#	test/integration/test/cli-journeys/drift-migration-dag.e2e.test.ts
#	test/integration/test/cli-journeys/drift-schema.e2e.test.ts
#	test/integration/test/cli-journeys/greenfield-setup.e2e.test.ts
#	test/integration/test/cli-journeys/help-and-flags.e2e.test.ts
#	test/integration/test/cli-journeys/multi-step-migration.e2e.test.ts
#	test/integration/test/cli-journeys/schema-evolution-migrations.e2e.test.ts
#	test/integration/test/utils/journey-test-helpers.ts
#	test/integration/vitest.journeys.config.ts
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/2-sql/3-tooling/family/test/schema-verify.defaults.test.ts (1)
18-70: ⚠️ Potential issue | 🟠 Major — Test normalizer drift from production parser can hide verification regressions.
`testNormalizer` still differs from `parsePostgresDefault` in behavior (not using `nativeType`, different NULL cast acceptance, unconditional JSON parsing of string literals, no `Number.isFinite` guard). This can make `verifySqlSchema` tests pass while production normalization would differ.
Suggested alignment patch
```diff
-const testNormalizer: DefaultNormalizer = (rawDefault: string): ColumnDefault | undefined => {
+const testNormalizer: DefaultNormalizer = (
+  rawDefault: string,
+  nativeType?: string,
+): ColumnDefault | undefined => {
   const trimmed = rawDefault.trim();
+  const normalizedType = nativeType?.toLowerCase();
+  const isBigInt = normalizedType === 'bigint' || normalizedType === 'int8';
@@
-  if (/^NULL(?:::(?:"[^"]+"|[\w\s]+)(?:\(\d+(?:,\d+)?\))?)?$/i.test(trimmed)) {
+  if (/^NULL(?:::.+)?$/i.test(trimmed)) {
     return { kind: 'literal', value: null };
   }
@@
   if (/^-?\d+(\.\d+)?$/.test(trimmed)) {
-    return { kind: 'literal', value: Number(trimmed) };
+    if (isBigInt) {
+      return { kind: 'literal', value: { $type: 'bigint', value: trimmed } };
+    }
+    const num = Number(trimmed);
+    if (!Number.isFinite(num)) return undefined;
+    return { kind: 'literal', value: num };
   }
@@
   if (stringMatch?.[1] !== undefined) {
     const unescaped = stringMatch[1].replace(/''/g, "'");
-    try {
-      return { kind: 'literal', value: JSON.parse(unescaped) };
-    } catch {
-      return { kind: 'literal', value: unescaped };
-    }
+    if (normalizedType === 'json' || normalizedType === 'jsonb') {
+      try {
+        return { kind: 'literal', value: JSON.parse(unescaped) };
+      } catch {
+        // keep string fallback
+      }
+    }
+    return { kind: 'literal', value: unescaped };
   }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/3-tooling/family/test/schema-verify.defaults.test.ts` around lines 18 - 70, testNormalizer diverges from parsePostgresDefault causing test/production drift; update testNormalizer to match parsePostgresDefault by (1) preserving/returning nativeType when detecting explicit casts (use the same nativeType extraction logic as parsePostgresDefault for patterns like ::text or ::json), (2) broaden NULL matching to accept NULL::type forms the same way parsePostgresDefault does, (3) restrict numeric literal acceptance to Number.isFinite(Number(trimmed)) before returning a numeric literal, and (4) only attempt JSON.parse on string literal contents when they are valid JSON (e.g., start with "{" or "[") or otherwise keep the unescaped string; ensure the fallback and function-kind handling (returning { kind: 'function', expression: ... }) remain unchanged so verifySqlSchema behavior matches production parsePostgresDefault.
🧹 Nitpick comments (1)
packages/2-sql/3-tooling/family/test/schema-verify.defaults.test.ts (1)
72-803: Split this test file; it exceeds the repository size limit for test files. The file is now 800+ lines and mixes multiple concerns; please split by feature area (e.g., `defaults.timestamp`, `defaults.null`, `defaults.string-literals`) to keep maintenance and navigation manageable. As per coding guidelines: "`**/*.test.ts`: Keep test files under 500 lines... Split test files when they exceed 500 lines, contain multiple distinct concerns...".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/3-tooling/family/test/schema-verify.defaults.test.ts` around lines 72 - 803, This test file is too large and should be split into multiple smaller test files by feature area; extract the describe blocks titled "verifySqlSchema - defaults", "verifySqlSchema - timestamp default normalization", "verifySqlSchema - null default normalization", and "verifySqlSchema - string literal defaults with type casts" into separate test files (e.g., schema-verify.defaults.general.test.ts, schema-verify.defaults.timestamp.test.ts, schema-verify.defaults.null.test.ts, schema-verify.defaults.string-literals.test.ts). For each new file, copy the relevant describe block and its contained it tests along with required helpers and imports (createTestContract, createContractTable, createTestSchemaIR, createSchemaTable, verifySqlSchema, emptyTypeMetadataRegistry, testNormalizer, SqlContract/SqlStorage references), run and fix any relative import paths, and ensure each file exports nothing (just contains tests) so the test runner picks them up; keep existing test assertions and behavior intact. Ensure no duplicated global setup is introduced and run the test suite to confirm all tests still pass.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@packages/2-sql/3-tooling/family/test/schema-verify.defaults.test.ts`:
- Around line 18-70: testNormalizer diverges from parsePostgresDefault causing
test/production drift; update testNormalizer to match parsePostgresDefault by
(1) preserving/returning nativeType when detecting explicit casts (use the same
nativeType extraction logic as parsePostgresDefault for patterns like ::text or
::json), (2) broaden NULL matching to accept NULL::type forms the same way
parsePostgresDefault does, (3) restrict numeric literal acceptance to
Number.isFinite(Number(trimmed)) before returning a numeric literal, and (4)
only attempt JSON.parse on string literal contents when they are valid JSON
(e.g., start with "{" or "[") or otherwise keep the unescaped string; ensure the
fallback and function-kind handling (returning { kind: 'function', expression:
... }) remain unchanged so verifySqlSchema behavior matches production
parsePostgresDefault.
---
Nitpick comments:
In `@packages/2-sql/3-tooling/family/test/schema-verify.defaults.test.ts`:
- Around line 72-803: This test file is too large and should be split into
multiple smaller test files by feature area; extract the describe blocks titled
"verifySqlSchema - defaults", "verifySqlSchema - timestamp default
normalization", "verifySqlSchema - null default normalization", and
"verifySqlSchema - string literal defaults with type casts" into separate test
files (e.g., schema-verify.defaults.general.test.ts,
schema-verify.defaults.timestamp.test.ts, schema-verify.defaults.null.test.ts,
schema-verify.defaults.string-literals.test.ts). For each new file, copy the
relevant describe block and its contained it tests along with required helpers
and imports (createTestContract, createContractTable, createTestSchemaIR,
createSchemaTable, verifySqlSchema, emptyTypeMetadataRegistry, testNormalizer,
SqlContract/SqlStorage references), run and fix any relative import paths, and
ensure each file exports nothing (just contains tests) so the test runner picks
them up; keep existing test assertions and behavior intact. Ensure no duplicated
global setup is introduced and run the test suite to confirm all tests still
pass.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 7e7b9c9a-f734-4d26-be5b-b661fc13707e
📒 Files selected for processing (3)
packages/2-sql/3-tooling/family/test/schema-verify.defaults.test.ts
packages/3-targets/6-adapters/postgres/src/core/default-normalizer.ts
packages/3-targets/6-adapters/postgres/test/control-adapter.defaults.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/3-targets/6-adapters/postgres/test/control-adapter.defaults.test.ts
saevarb
left a comment
LGTM - I just have a random thought that I don't think is worth acting on now
Summary
`db schema-verify` and `db sign` compare introspected types/defaults against contract declarations via `parsePostgresDefault`. The normalizer handled common Postgres aliases (`varchar` → `character varying`, `now()`/`CURRENT_TIMESTAMP`) but failed on cast-wrapped forms and NULL defaults, causing false negatives (mismatch reported when types/defaults are semantically identical). This PR fixes TML-2068.
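A minimal sketch of the fixed NULL and numeric-overflow handling (the `ColumnDefault` shape is simplified here; this is illustrative, not the production parser):

```typescript
// Simplified stand-in for the real ColumnDefault union.
type ColumnDefault =
  | { kind: 'literal'; value: number | null }
  | { kind: 'function'; expression: string };

// Matches NULL, NULL::text, NULL::"MyEnum" — all mean "default is NULL".
const NULL_PATTERN = /^NULL(?:::.+)?$/i;

function parseDefaultSketch(rawDefault: string): ColumnDefault | undefined {
  const trimmed = rawDefault.trim();
  if (NULL_PATTERN.test(trimmed)) return { kind: 'literal', value: null };
  if (/^-?\d+(\.\d+)?$/.test(trimmed)) {
    const num = Number(trimmed);
    // Overflow guard: a 300+-digit literal parses to Infinity, which would
    // silently compare unequal forever. Report it as unparseable instead.
    if (!Number.isFinite(num)) return undefined;
    return { kind: 'literal', value: num };
  }
  return { kind: 'function', expression: trimmed };
}

console.log(parseDefaultSketch('NULL::"MyEnum"')); // → { kind: 'literal', value: null }
console.log(parseDefaultSketch('1' + '0'.repeat(309))); // → undefined
```

Treating cast-wrapped `NULL` as a literal rather than an unknown function is what removes the false mismatch against contracts that declare no default value.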
Before / After

| Input | Before | After |
| --- | --- | --- |
| `('now'::text)::timestamp without time zone` | `{ kind: 'function', expression: "('now'::text)::timestamp without time zone" }` (raw passthrough) | `{ kind: 'function', expression: 'now()' }` |
| `CURRENT_TIMESTAMP::timestamptz` | `{ kind: 'function', expression: "CURRENT_TIMESTAMP::timestamptz" }` (raw passthrough) | `{ kind: 'function', expression: 'now()' }` |
| `NULL` / `NULL::text` / `NULL::"MyEnum"` | `{ kind: 'function', expression: "NULL" }` (treated as unknown function) | `{ kind: 'literal', value: null }` |
| `nativeType` parameter | `_` prefix + JSDoc says "currently unused" | `nativeType` (renamed; JSDoc updated) |
| `Number("1" + "0".repeat(309))` | `{ kind: 'literal', value: Infinity }` | `undefined` (unparseable) |

Changes
Production (`default-normalizer.ts`):
- `isTimestampDefault()` helper to unwrap cast-wrapped timestamp forms (strips `::timestamp[tz]` suffix, outer parens, `::text` inner cast, then retests)
- `NULL_PATTERN` matching `NULL` with optional `::type` cast → `{ kind: 'literal', value: null }`
- `TEXT_CAST_SUFFIX` and `NOW_LITERAL_PATTERN` extracted as module-level constants (was inline)
- `NULL_PATTERN` relaxed from over-specified grammar validation to `/^NULL(?:::.+)?$/i`
- `Number.isFinite()` guard for numeric defaults
- `_nativeType` → `nativeType`, updated JSDoc

Tests (3 files, +31 test cases net):
- table-driven timestamp cast cases (`it.each`)
- table-driven NULL default cases (`it.each`)
- `normalizeSchemaNativeType` cases

Test plan
- `pnpm vitest run packages/3-targets/6-adapters/postgres/test/control-adapter.defaults.test.ts` — 43 passed
- `pnpm vitest run packages/2-sql/3-tooling/family/test/schema-verify.defaults.test.ts` — 20 passed
- `pnpm vitest run packages/3-targets/6-adapters/postgres/test/control-adapter.test.ts` — 45 passed
- `db schema-verify` no longer false-negatives on `now()` vs `CURRENT_TIMESTAMP`

Summary by CodeRabbit
Release Notes
Bug Fixes
Tests