tmp(demo): add CLI recording infrastructure, journey tests, and demo slides #239
Draft
Conversation
Both commands previously returned raw `VerifyDatabaseSchemaResult` on schema verification failure instead of the standard `CliErrorEnvelope`. This made agents parsing `--json` output handle two different shapes. Now both commands map schema verification failures through a new `errorSchemaVerificationFailed` factory (PN-RTM-3004), which wraps the full verification tree in `meta.verificationResult`. Human TTY output still renders the tree before the error summary.
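A minimal sketch of that wrapping, assuming a simple envelope shape (the field names beyond the PN-RTM-3004 code and `meta.verificationResult` are illustrative, not the real CLI's types):

```typescript
// Illustrative sketch: wrap a schema verification failure in one standard
// CLI error envelope so --json consumers only ever parse a single shape.
// The exact envelope fields are assumptions, not the real CLI's interface.
interface CliErrorEnvelope {
  code: string;
  message: string;
  meta?: Record<string, unknown>;
}

function errorSchemaVerificationFailed(verificationResult: unknown): CliErrorEnvelope {
  return {
    code: 'PN-RTM-3004',
    message: 'Database schema verification failed',
    meta: { verificationResult }, // full verification tree preserved for agents
  };
}
```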
Add 13 journey-based e2e test files covering 30 CLI scenarios (A-Z + P2-P5)
across greenfield setup, schema evolution, brownfield adoption, drift detection,
migration edge cases, and error scenarios. Each journey is a single it() block
with descriptive assertion labels for step-level failure identification.
Infrastructure:
- journey-test-helpers.ts: CommandResult-based runners for all CLI commands,
contract swap helper, JSON parsing, SQL helper
- 5 contract fixtures (base, additive, destructive, add-table, v3)
- Config templates with {{DB_URL}} placeholder replacement
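The runner contract might look roughly like this (the field names and the helper signature are assumptions, not the actual journey-test-helpers API):

```typescript
// Assumed sketch of the CommandResult shape returned by the CLI runners,
// plus the JSON parsing helper; real names and fields may differ.
interface CommandResult {
  exitCode: number;
  stdout: string;
  stderr: string;
}

function parseJsonOutput<T = unknown>(result: CommandResult): T {
  // Journey steps that pass --json can parse stdout into a typed value.
  return JSON.parse(result.stdout) as T;
}
```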
Known limitations (marked as TODO in tests):
- db schema-verify and db update hit Vite SSR module resolution error
after DDL changes (PN-CLI-4999) — affects drift-schema journeys M/N
- Migration chain breakage recovery (P3) requires manual intervention
- Target mismatch (U) needs better simulation approach
The "Vite SSR error" was actually a stale build of @prisma-next/core-control-plane missing the errorSchemaVerificationFailed export in dist. After a rebuild, db schema-verify and db sign properly return exit code 1 for verification failures.

Journey M (phantom drift) now fully tests the flow: verify false-positive → schema-verify catches drift. db update recovery fails as expected (the planner can't re-add dropped NOT NULL columns; documented as a known limitation).

Journey N (extra column drift) now tests: verify → tolerant/strict schema-verify → expand contract → db update (with --no-interactive falling back to -y for confirmation handling) → tolerant verify passes.

Also restores exact exit code 1 assertions for schema-verify and db sign failures in the brownfield-adoption and drift-marker journeys.
…rrel

The function was defined in errors.ts but missing from the exports barrel, causing a runtime undefined when imported via the package's public API. This was the root cause of the PN-CLI-4999 "Vite SSR error" in the e2e tests: the function was available in source but not in the built dist.
…name)
The test was deleting the wrong migration directory — 'initial' sorted last
alphabetically (i > a) but was the chain root (∅→base), not the intended
target 'add-posts' (additive→v3). After deleting the root, the planner
correctly couldn't reconstruct the graph.
Fix: find the migration dir by name suffix ('_add_posts') instead of
relying on alphabetical sort. Now P3.03–P3.04 fully test the recovery
flow: re-plan the missing edge → apply succeeds.
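The fix can be sketched like this (directory names are illustrative):

```typescript
// Find a migration directory by its name suffix instead of relying on
// alphabetical order, which silently picked the chain root instead of
// the intended 'add-posts' target.
function findMigrationDir(dirs: string[], suffix: string): string | undefined {
  return dirs.find((dir) => dir.endsWith(suffix));
}
```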
Instead of tampering with the marker row (which only affects stored JSON, not the hash comparison), edit contract.json on disk to change the target field from "postgres" to "mysql". db verify compares contractIR.target against config.target.targetId, so this triggers PN-RTM-3003.
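Sketch of the tampering step, assuming a minimal contract.json with a top-level `target` field (shown in memory here; the real test rewrites the file on disk):

```typescript
// Simulate a target mismatch by editing the contract's target field so it
// no longer matches config.target.targetId, which db verify compares against.
const raw = '{"target":"postgres","models":{}}';
const contract = JSON.parse(raw) as { target: string };
contract.target = 'mysql'; // triggers PN-RTM-3003 on the next db verify
const tampered = JSON.stringify(contract);
```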
Remove unused parseJsonOutput import and add commander as devDependency so the Command type import in journey-test-helpers resolves.
Tolerant vs strict schema-verify is already tested in Journey N (drift-schema.e2e.test.ts) with extra columns. Journey H tested the same code path with extra tables, adding no unique coverage.
…test.ts

P4 (partial apply/resume) and P5 (no migration path) were explicitly noted in the plan as already covered by the command-specific tests. P5's recovery pattern is also identical to P3 (chain breakage).
Q (apply noop), R (plan noop), and X (show variants) are now tail steps of Journey B. They reuse B's already-applied migration chain instead of spinning up 3 separate PGlite instances for 4 assertions.
Journey I's JSON assertions are already covered per-command in A.09, A.10, B.10, and isolated command tests. Journey J (help output) has near-zero regression prevention value. Journey Y (global flags) is retained as a lightweight no-DB test.
- 30 → 22 journeys, 13 → 10 test files
- Removed: H (redundant with N), I (JSON covered inline), J (help), P4 and P5 (covered by command-specific tests)
- Merged: Q, R, X as tail steps of Journey B
- Updated cross-reference matrix, file structure, acceptance criteria, implementation phases, and parallelism estimates
- `pnpm test:journeys` from root runs only journey e2e tests
- Dedicated vitest config with forks pool (4 workers) and spinUpPpgDev timeouts for PGlite-backed tests
- Wired through turbo.json for proper build dependency resolution
Add a self-contained README to cli-journeys/ summarizing all 11 test files by what they cover. Rewrite top-level JSDoc comments in each test file to describe scenarios in plain words instead of opaque codes. Also gitignore the vite-plugin output/ fixture directory.
Reverts the errorSchemaVerificationFailed wrapping introduced in 0f57e7f. The db sign e2e test expects the raw VerifyDatabaseSchemaResult shape (with `schema` at top level) in JSON failure output, not a CliErrorEnvelope.
- Initialize all closeDb declarations with a no-op default to prevent a TypeError if createDevDatabase() fails in beforeAll
- Use the callback form for DB_URL replacement to prevent special-character corruption in connection strings (e.g., $& in passwords)
- Re-throw unexpected errors in runCommand/runCommandRaw when no CLI exit code was captured, preventing silent masking of real regressions
- Run V.02 (dry-run) before V.01 (mutating db init) so the dry-run validates the pre-mutation state
- Merge redundant B.05/B.06 steps: both called runMigrationStatus with the same DB-connected context; consolidated into one step
- Fix misleading comments in Journey N (age drift stays unresolved; the test intentionally validates tolerant mode)
- Clarify README: "each journey (describe block)", not just "describe"
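The callback-form DB_URL replacement, as a small sketch (helper name assumed):

```typescript
// Replace the {{DB_URL}} placeholder via the callback form of String.replace,
// so replacement patterns like $& inside passwords are inserted literally
// instead of being expanded by the regex engine.
function renderConfigTemplate(template: string, dbUrl: string): string {
  return template.replace(/\{\{DB_URL\}\}/g, () => dbUrl);
}
```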
Add automated CLI recording system using VHS (agentstation fork) to produce animated SVG and plain-text ASCII captures of CLI commands. - Recording script with per-command and journey modes - Two-layer caching: turbo (task-level) + per-recording output probes - CI workflow (workflow_dispatch) to regenerate and open a PR - docker-compose.yaml for local PostgreSQL on port 5433 - CLI_RECORDING.md documenting the full setup
(cherry picked from commit 0e453e20ceea6f6337dfbdfffdaea2109a91af6b)
Build a minimal PATH (wrapper dir, node, vhs/ttyd, system paths) instead of dumping the developer's full $PATH into committed tapes. Eliminates machine-specific personal tool directories from recordings.
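A sketch of assembling such a minimal PATH (the directory entries are made up):

```typescript
// Build a minimal PATH from only the known-needed directories, de-duplicated
// in order and joined with the POSIX separator, instead of inheriting the
// developer's full $PATH into committed tapes.
function buildMinimalPath(entries: string[]): string {
  return [...new Set(entries)].join(':');
}
```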
- Replace `node:path` with `pathe` for consistent cross-platform paths
- Remove try/catch in getMigrationDirs that masked broken test setup
…umeric overflow
parsePostgresDefault now correctly normalizes:
- Cast-wrapped timestamp defaults: ('now'::text)::timestamp without time zone,
now()::timestamptz, CURRENT_TIMESTAMP::timestamp with time zone, etc.
- NULL and NULL::type defaults to { kind: 'literal', value: null }
- Rejects non-finite numeric defaults (Infinity from enormous literals)
Also: renamed misleading _nativeType parameter, pre-compiled all inline
regexes to module-level constants, simplified NULL_PATTERN, removed
duplicate verification-level tests, added negative timestamp test cases,
and confirmed extension types (citext, ltree, uuid, etc.) pass through
the type normalizer unchanged.
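A condensed sketch of the NULL and non-finite-numeric rules (function and pattern names are assumptions; the real parsePostgresDefault handles many more shapes):

```typescript
type NormalizedDefault = { kind: 'literal'; value: number | null } | undefined;

// NULL and NULL::type both normalize to a null literal; numeric defaults
// that overflow to Infinity are rejected rather than silently kept.
const NULL_PATTERN = /^null(::[\w ]+)?$/i;

function normalizeDefault(raw: string): NormalizedDefault {
  if (NULL_PATTERN.test(raw)) return { kind: 'literal', value: null };
  const n = Number(raw);
  if (!Number.isNaN(n)) {
    return Number.isFinite(n) ? { kind: 'literal', value: n } : undefined;
  }
  return undefined;
}
```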
clock_timestamp() returns wall-clock time (can differ across rows in a single INSERT), while now()/CURRENT_TIMESTAMP return transaction start time (constant within a transaction). Normalizing them together hid this semantic difference. Split TIMESTAMP_PATTERN into NOW_FUNCTION_PATTERN and CLOCK_TIMESTAMP_PATTERN, and replace isTimestampDefault() with canonicalizeTimestampDefault() that returns the correct canonical form for each.
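A sketch of the split (patterns simplified; the real ones cover more cast variants):

```typescript
// now()/CURRENT_TIMESTAMP (transaction start time) canonicalize together,
// while clock_timestamp() (wall-clock time) keeps its own canonical form
// so the semantic difference stays visible in verification.
const NOW_FUNCTION_PATTERN = /^\(?'now'::text\)?(::timestamp.*)?$|^now\(\)|^current_timestamp/i;
const CLOCK_TIMESTAMP_PATTERN = /^clock_timestamp\(\)/i;

function canonicalizeTimestampDefault(raw: string): string | undefined {
  if (CLOCK_TIMESTAMP_PATTERN.test(raw)) return 'clock_timestamp()';
  if (NOW_FUNCTION_PATTERN.test(raw)) return 'now()';
  return undefined;
}
```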
When stdout is not a TTY (e.g., piped to jq, captured by scripts), the CLI now automatically outputs JSON without requiring --json. This follows standard Unix conventions where tools emit structured output when piped, making `prisma-next db verify | jq '.contract'` work seamlessly. Previously, piping without --json produced no stdout output at all because human-readable decoration was suppressed (correct) but no JSON was emitted as a fallback (incorrect). Adds a journey test (A.11) verifying parseGlobalFlags auto-enables json when process.stdout.isTTY is falsy, and a VHS recording demonstrating the jq piping workflow.
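The decision rule reduces to something like this (helper name assumed; the PR wires the check through parseGlobalFlags):

```typescript
// Emit JSON when --json was passed, or when stdout is not a TTY
// (piped or captured), matching the Unix convention described above.
// Note: process.stdout.isTTY is undefined, not false, when piped.
function shouldEmitJson(jsonFlag: boolean, isTTY: boolean | undefined): boolean {
  return jsonFlag || !isTTY;
}
```

With this rule, `prisma-next db verify | jq '.contract'` works without an explicit flag, while interactive terminals still get the human-readable rendering.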
Summary

- Normalize `clock_timestamp()`, cast-wrapped timestamps, and NULL defaults
- `db verify` performs live schema verification by default

Test plan

- `pnpm test:e2e` passes
- `pnpm test:integration` passes (including new journey tests)