[WIP] Fix Prisma connection setup to avoid degraded performance #1409
Closed
Copilot stopped work on behalf of jaypatrick due to an error (March 25, 2026 23:52)
Thanks for asking me to work on this. I will get started on it and keep this PR's description up to date as I form a plan and make progress.
## Original prompt
### Context & Root Cause

The live site at https://adblock-frontend.jayson-knight.workers.dev/ shows "Degraded performance" and "Data may be stale" on every page load, and `GET /api/health` returns `database: { status: "down", latency_ms: 0 }`.

The `latency_ms: 0` is the key tell: the probe threw before making any network call. The root cause is in `worker/lib/prisma.ts`: `@prisma/adapter-pg` ≥ 1.0 changed its constructor signature. `PrismaPg` no longer accepts a plain config object — it requires a `pg.Pool` instance. Passing `{ connectionString }` causes a synchronous throw at client-creation time, not at query time. Because `latency_ms` is measured from before `createPrismaClient()` is called, it records 0 ms. This means every request that touches the database layer silently fails, including anonymous page loads that trigger `BetterAuthProvider.verifyToken()` → `createAuth()` → `createPrismaClient()`. The Cloudflare Hyperdrive dashboard shows zero connections because the `pg.Pool` is never created — Prisma throws before it can connect.

The secondary issue — `PrismaClientConfigSchema` previously only accepted `postgresql://`, but Hyperdrive's `.connectionString` property returns `postgres://` — has already been fixed in the current codebase (the schema now accepts both). That fix is documented in `docs/troubleshooting/KB-002-hyperdrive-database-down.md`. However, the `PrismaPg` Pool constructor bug was introduced at the same time and is still present.

### Changes Required
#### 1. Fix `worker/lib/prisma.ts` — PrismaPg Pool constructor (PRIMARY BUG FIX)

Replace the broken `PrismaPg({ connectionString })` call with one that:

- imports `pg` (as the rest of the codebase does in `worker/utils/pg-pool.ts`)
- creates a `new Pool({ connectionString })` instance
- passes the `Pool` instance to `PrismaPg`
- wraps construction in a `try/catch` that surfaces a clear error message instead of a cryptic Prisma internal throw
- calls `$disconnect()` in a `finally` block to release the Hyperdrive proxy socket (see existing docs in `docs/troubleshooting/neon-troubleshooting.md`)

The corrected factory should look like:
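A minimal sketch of the corrected shape. `Pool` and `PrismaPg` below are tiny local stand-ins for the real `pg` and `@prisma/adapter-pg` exports so the snippet is self-contained, and the name `createPrismaAdapter` is illustrative (the real factory lives in `worker/lib/prisma.ts`); the point is structural: construct a real `Pool` first, pass the instance to `PrismaPg`, and wrap construction so failures surface clearly.

```typescript
// Stand-in for pg's Pool: in the real code, `import { Pool } from 'pg'`.
class Pool {
  constructor(public config: { connectionString: string }) {
    if (!config.connectionString) throw new Error('connectionString is required');
  }
}

// Stand-in for @prisma/adapter-pg: PrismaPg >= 1.0 requires a pg.Pool
// instance, not a plain config object.
class PrismaPg {
  constructor(public pool: Pool) {
    if (!(pool instanceof Pool)) throw new Error('PrismaPg requires a pg.Pool');
  }
}

// Hypothetical factory showing the corrected call shape.
function createPrismaAdapter(connectionString: string): PrismaPg {
  try {
    const pool = new Pool({ connectionString });
    return new PrismaPg(pool); // pass the Pool instance, not { connectionString }
  } catch (error) {
    // Surface a clear message instead of a cryptic Prisma internal throw.
    const message = error instanceof Error ? error.message : String(error);
    throw new Error(`Prisma adapter setup failed: ${message}`);
  }
}
```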
If making `createPrismaClient` async is a larger refactor, an alternative is to keep it synchronous but use the already-existing `createPgPool` helper from `worker/utils/pg-pool.ts`. Either approach is acceptable as long as a real `pg.Pool` is passed to `PrismaPg`.

Update `_internals` accordingly, and update every call site (health handler, prisma middleware, auth factory) to await the factory.

#### 2. Add a query-level timeout to the database probe in `worker/handlers/health.ts`
The current `databaseProbe()` has no timeout on the Prisma query. If the Hyperdrive proxy hangs (e.g. a misconfigured binding or a cold start), the health check itself will hang until the Worker CPU limit kills the request — which can take 30 seconds and causes the cascading 502 visible in the UI.

Wrap the `prisma.$queryRaw` call with a `Promise.race` against `AbortSignal.timeout(5000)` (5-second deadline). On timeout, return `{ status: 'down', latency_ms: 5000, error: 'probe timed out' }`. This is the same pattern used in `worker/handlers/container-status.ts`.
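The deadline can be sketched like this, assuming the probe can take the query as a thunk; `runQuery` stands in for the real `prisma.$queryRaw` call, and the exact signature in `worker/handlers/health.ts` may differ:

```typescript
const PROBE_TIMEOUT_MS = 5000;

async function databaseProbe(
  runQuery: () => Promise<unknown>,
): Promise<{ status: 'up' | 'down'; latency_ms: number; error?: string }> {
  const start = Date.now();
  // AbortSignal.timeout() fires after the deadline; wrap it in a rejecting
  // promise so it can race the query.
  const signal = AbortSignal.timeout(PROBE_TIMEOUT_MS);
  const deadline = new Promise<never>((_, reject) => {
    signal.addEventListener('abort', () => reject(new Error('probe timed out')), { once: true });
  });
  try {
    await Promise.race([runQuery(), deadline]);
    return { status: 'up', latency_ms: Date.now() - start };
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error);
    return message === 'probe timed out'
      ? { status: 'down', latency_ms: PROBE_TIMEOUT_MS, error: message }
      : { status: 'down', latency_ms: Date.now() - start, error: message.slice(0, 200) };
  }
}
```

Note that `Promise.race` subscribes to both promises, so the deadline's late rejection after a fast success is still observed and does not become an unhandled rejection.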
Also add an `error` field to the `DatabaseResult` type so the error message is surfaced in the JSON response.

#### 3. Add `error` field propagation in the health response types

Update `DatabaseResult` in `worker/handlers/health.ts`. Populate `error` in the catch block with `error instanceof Error ? error.message : String(error)` (truncated to 200 chars to avoid leaking stack traces).
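As a sketch, the updated shape and the truncation could look like the following; the field names follow the JSON shown in the health response above, and `probeErrorMessage` is a hypothetical helper name:

```typescript
// Hypothetical updated DatabaseResult in worker/handlers/health.ts.
interface DatabaseResult {
  status: 'up' | 'down';
  latency_ms: number;
  error?: string; // populated only on failure
}

// Cap the message at 200 chars so stack-trace-sized strings never leak.
function probeErrorMessage(error: unknown): string {
  const message = error instanceof Error ? error.message : String(error);
  return message.slice(0, 200);
}
```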
#### 4. Update `worker/middleware/prisma-middleware.ts` — add `$disconnect` in finally

The middleware currently sets the Prisma client in context and calls `next()` but never disconnects. Add a `try/finally` around `next()` to call `prisma.$disconnect()` after the request completes. This prevents connection leaks under load: