Add Discord approval gate for Luma event drafts #19
Conversation
📝 Walkthrough

This PR introduces a Discord-based event review workflow to Luma Sync. It adds database infrastructure for tracking review sessions, Discord API integration for creating and monitoring review threads, workflow automation for polling approvals, and configuration support for Discord bot authentication.
Sequence Diagram

```mermaid
sequenceDiagram
    participant WF as Luma Sync Workflow
    participant Discord as Discord API
    participant DB as Database
    participant Poll as Approval Poller
    WF->>WF: Create event draft
    WF->>Discord: POST root message to review channel
    Discord-->>WF: channelId, rootMessageId
    WF->>Discord: POST thread with draft details
    Discord-->>WF: threadId, lastSeenMessageId
    WF->>DB: Create review session (pending)
    DB-->>WF: sessionId
    Note over Poll: Post-processing loop
    Poll->>DB: List pending sessions (limit 100)
    DB-->>Poll: pendingSessions[]
    loop For each pending session
        Poll->>Discord: List thread messages after lastSeenMessageId
        Discord-->>Poll: messages[]
        Poll->>Poll: Scan for explicit approval
        alt Approval found
            Poll->>DB: Get event isDraft status
            Poll->>DB: Update isDraft = false
            Poll->>DB: Update session status = approved
            Poll->>WF: Increment approvedCount
        else New messages detected
            Poll->>DB: Update session cursor
        end
    end
    WF-->>WF: Return with approvedCount
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 4
🧹 Nitpick comments (6)
app/src/workflows/luma-sync/steps/events.ts (1)
185-186: Naming inconsistency: `set_live_after_explicit_approval` uses snake_case.

Every other function in this file uses camelCase (`createEventFromDraft`, `resolveUniqueSlug`, etc.). The snake_case name with a separate camelCase alias on line 216 is unnecessary indirection — just name the function `setLiveAfterExplicitApproval` directly.

♻️ Proposed fix

```diff
-export async function set_live_after_explicit_approval({
+export async function setLiveAfterExplicitApproval({
   reviewSessionId,
   eventId,
   approvalMessageId,
   ...
 }
-
-export const setLiveAfterExplicitApproval = set_live_after_explicit_approval;
```

Also applies to: 216-216
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/workflows/luma-sync/steps/events.ts` around lines 185-186, the function is misnamed in snake_case; rename set_live_after_explicit_approval to camelCase setLiveAfterExplicitApproval and remove the separate camelCase alias to avoid indirection; update any exports/uses in this module that reference set_live_after_explicit_approval so naming is consistent with other functions like createEventFromDraft and resolveUniqueSlug.

app/src/lib/schema.ts (1)
197-214: Consider constraining `status` (and optionally `provider`) at the database level.

Both `status` and `provider` are plain `text` columns that accept any string, but the workflow only expects specific values (`"pending"`/`"approved"` for status, `"discord"` for provider). A `pgEnum` or a `CHECK` constraint would prevent invalid data from being written at the DB level, consistent with how `hackathonStateEnum` and `profileTypeEnum` are used elsewhere in this schema.
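For illustration, a sketch of how the suggested enum constraint could look with `drizzle-orm/pg-core`. The enum names and column set here are assumptions, not the repo's actual `schema.ts` shape:

```typescript
// Sketch only: enum-constrained columns for event_review_sessions.
// eventReviewStatusEnum / eventReviewProviderEnum are suggested names,
// not existing identifiers in this repository.
import { pgEnum, pgTable, text, uuid } from "drizzle-orm/pg-core";

export const eventReviewStatusEnum = pgEnum("event_review_status", [
  "pending",
  "approved",
]);
export const eventReviewProviderEnum = pgEnum("event_review_provider", [
  "discord",
]);

export const eventReviewSessionsTable = pgTable("event_review_sessions", {
  id: uuid("id").defaultRandom().primaryKey(),
  provider: eventReviewProviderEnum("provider").notNull().default("discord"),
  status: eventReviewStatusEnum("status").notNull().default("pending"),
  threadId: text("thread_id").notNull(),
});
```

With this in place, inserting any status other than `"pending"` or `"approved"` fails at the database rather than surfacing later in the poller.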
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/lib/schema.ts` around lines 197-214, the event_review_sessions table uses free-form text for provider and status; constrain them at the DB level by introducing an enum or CHECK constraint in eventReviewSessionsTable: create a pgEnum (e.g., eventReviewStatusEnum with "pending" and "approved") and use it for the status column instead of text("status"), and optionally create a provider enum (or CHECK) limited to "discord"; update the status and provider references in eventReviewSessionsTable so the schema enforces allowed values.

app/src/workflows/luma-sync/steps/discord.ts (3)
254-307: Minor: after sorting ascending, `latestSeenMessageId` is always `messages[messages.length - 1].id`.

The loop to track `latestSeenMessageId` is correct but redundant with the sort. After `messages.sort(...)`, you could just assign `latestSeenMessageId = messages[messages.length - 1].id` and simplify the loop to only scan for approvals. Not worth a change if you prefer the explicit approach.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/workflows/luma-sync/steps/discord.ts` around lines 254 - 307, In pollDiscordThreadForApproval, after messages.sort((left, right) => compareSnowflakeIds(left.id, right.id)) set latestSeenMessageId = messages[messages.length - 1].id (or null when messages empty) instead of updating it inside the loop, and then simplify the for (const message of messages) loop to only check non-bot messages for isExplicitApprovalMessage and set approvalMessageId; keep the existing return shape (latestSeenMessageId, approvalMessageId) and preserve error handling around listThreadMessages.
91-93: Redundant `new Date()` wrapping.

`value` is already a `Date`; wrapping it in `new Date(value)` just clones it. You can call `value.toISOString()` directly.

♻️ Proposed fix

```diff
 function toIsoString(value: Date): string {
-  return new Date(value).toISOString();
+  return value.toISOString();
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/workflows/luma-sync/steps/discord.ts` around lines 91 - 93, The toIsoString function wraps an existing Date in new Date(value) unnecessarily; update the toIsoString function to call value.toISOString() directly (keep the function name toIsoString and parameter value) to avoid cloning the Date object and return the ISO string.
133-164: No timeout or rate-limit handling on Discord API calls.

Discord enforces per-route rate limits and returns `429` with a `Retry-After` header. Currently, a `429` is treated as a generic failure. For a polling workflow that runs repeatedly, consider at minimum:

- Adding an `AbortSignal.timeout()` (or equivalent) to the `fetch` call to avoid hanging indefinitely.
- Detecting `429` responses and either respecting `Retry-After` or surfacing a distinct error so the caller can back off.

Not blocking, but worth considering for production robustness.
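A minimal sketch of what this could look like. `DiscordRateLimitError` and `discordRequestSketch` are hypothetical names for illustration, and the 10-second timeout and 1-second fallback are arbitrary defaults:

```typescript
// Sketch: surface 429s distinctly so the polling caller can back off.
// DiscordRateLimitError is a suggested name, not an existing class.
class DiscordRateLimitError extends Error {
  constructor(readonly retryAfterSeconds: number) {
    super(`Discord rate limit hit; retry after ${retryAfterSeconds}s`);
    this.name = "DiscordRateLimitError";
  }
}

// Discord sends Retry-After in seconds; fall back to 1s when missing or garbled.
function parseRetryAfter(header: string | null): number {
  if (header === null || header.trim() === "") return 1;
  const seconds = Number(header);
  return Number.isFinite(seconds) && seconds >= 0 ? seconds : 1;
}

// Usage inside a request helper: bounded fetch plus explicit 429 handling.
async function discordRequestSketch(url: string): Promise<Response> {
  const response = await fetch(url, { signal: AbortSignal.timeout(10_000) });
  if (response.status === 429) {
    throw new DiscordRateLimitError(
      parseRetryAfter(response.headers.get("Retry-After")),
    );
  }
  return response;
}
```

The caller can then catch `DiscordRateLimitError` specifically and sleep for `retryAfterSeconds` before the next poll, instead of treating it like any other failure.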
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/workflows/luma-sync/steps/discord.ts` around lines 133-164, in discordRequest add a request timeout and explicit 429 handling: create an AbortController (or use AbortSignal.timeout) and pass its signal into fetch to avoid hangs, and after receiving the response check for status 429; if 429, read the Retry-After header (or parse the response body) and either sleep for that many seconds before retrying or throw a distinct rate-limit error (e.g., DiscordRateLimitError) that includes the retry-after value so callers can back off; keep the existing error payload parsing for other non-ok statuses.

app/src/workflows/luma-sync/index.ts (1)
240-280: Sequential polling of up to 100 sessions — adequate for now, but note scalability.

Each pending session is polled one-by-one with a Discord API call. At 100 sessions, and assuming ~200-500 ms per API call, this loop could take 20-50 seconds. This is fine for a background workflow, but as the number of pending reviews grows:
- The hardcoded limit of 100 means sessions beyond that threshold are silently skipped each cycle.
- Sequential execution amplifies latency linearly.
For now this is acceptable, but consider batching or parallelizing (with rate-limit awareness) if the volume grows.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/workflows/luma-sync/index.ts` around lines 240 - 280, The loop polls up to 100 sessions sequentially causing latency and silently skipping >100 items; change listPendingDiscordReviewSessions to accept a configurable/pageable limit or implement pagination, and process sessions in parallel with controlled concurrency (e.g., p-map or a Promise pool) when calling pollDiscordThreadForApproval, set_live_after_explicit_approval, and updateDiscordReviewSessionCursor so you retain per-session error handling and cursor updates; ensure you throttle/handle Discord rate limits and keep the errors.push logic intact for failed tasks.
```ts
try {
  const reviewThread =
    await createDiscordReviewThreadForEvent(createdEvent);
  await createDiscordReviewSession({
    eventId: createdEvent.id,
    channelId: reviewThread.channelId,
    rootMessageId: reviewThread.rootMessageId,
    threadId: reviewThread.threadId,
    lastSeenMessageId: reviewThread.lastSeenMessageId,
  });
} catch (reviewSetupError) {
  errors.push({
    scope: "review",
    reference: createdEvent.id,
    error: `Failed to create Discord review thread: ${toErrorMessage(
      reviewSetupError,
    )}`,
  });
}
```
Discord review setup failure is non-blocking — good resilience.
If the Discord thread or session creation fails, the event is still created as a draft and the error is logged. The event won't have a review session though, which means it'll stay as a draft indefinitely with no way to approve it through Discord polling.
Consider whether an orphaned draft (no review session) should be surfaced more prominently or retried on next sync, so it doesn't silently remain in draft forever.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/workflows/luma-sync/index.ts` around lines 210 - 229, When Discord
thread/session creation fails in the try/catch around
createDiscordReviewThreadForEvent and createDiscordReviewSession for
createdEvent, mark the created event so it won't be silently orphaned: update
the event record (or call a helper like enqueueReviewRetry) to set a
"needsReviewRetry" or "reviewRetryAt" field and persist it, and/or push the
event ID into a retry queue/topic for the next sync; also record the failure
detail in errors as you already do. Ensure changes touch the same code paths
that persist events (referencing createdEvent,
createDiscordReviewThreadForEvent, createDiscordReviewSession) so subsequent
sync runs can detect and retry creating the review session.
```ts
function compareSnowflakeIds(a: string, b: string): number {
  try {
    const aValue = BigInt(a);
    const bValue = BigInt(b);
    if (aValue === bValue) return 0;
    return aValue > bValue ? 1 : -1;
  } catch {
    return a.localeCompare(b);
  }
}
```
localeCompare fallback can mis-order numeric snowflake strings.
If BigInt() parsing fails (e.g., non-numeric input), localeCompare does lexicographic comparison which gives wrong results for numeric strings of differing lengths (e.g., "9" > "10" lexicographically). Consider throwing or padding instead, though in practice Discord snowflake IDs should always be valid numeric strings, so the risk is low.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/workflows/luma-sync/steps/discord.ts` around lines 67 - 76, The
compareSnowflakeIds function currently falls back to localeCompare on BigInt
parse failure which can mis-order numeric strings; update the catch to do a
numeric-aware fallback: if both inputs match /^\d+$/ then compare by length
(shorter = smaller) and only use localeCompare if lengths equal, otherwise throw
an error (include the invalid inputs) so callers are alerted to truly
non-numeric snowflake values; reference compareSnowflakeIds to locate and change
the catch behavior.
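One way to sketch the numeric-aware comparison described above. Checking the digit pattern up front makes the intent explicit; this is an illustration of the suggestion, not the repo's exact fix:

```typescript
// Discord snowflake IDs are decimal strings of 64-bit integers, so compare
// them numerically via BigInt and refuse non-numeric input loudly.
function compareSnowflakeIds(a: string, b: string): number {
  if (/^\d+$/.test(a) && /^\d+$/.test(b)) {
    const aValue = BigInt(a);
    const bValue = BigInt(b);
    if (aValue === bValue) return 0;
    return aValue > bValue ? 1 : -1;
  }
  // Throwing here alerts callers to truly malformed IDs instead of
  // silently mis-ordering them with localeCompare.
  throw new Error(`Invalid snowflake IDs: "${a}", "${b}"`);
}
```

This fixes the `"9"` vs `"10"` case the review calls out: both parse as BigInt, so the comparison is numeric rather than lexicographic.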
```ts
const responseText = await response.text();
const responseData =
  responseText.length > 0 ? JSON.parse(responseText) : null;
```
JSON.parse can throw on non-empty, non-JSON response bodies.
If Discord returns a non-empty body that isn't valid JSON (e.g., an HTML error page from a proxy/CDN), JSON.parse(responseText) will throw an unhandled SyntaxError, masking the actual HTTP error status.
🛡️ Proposed fix

```diff
 const responseText = await response.text();
-const responseData =
-  responseText.length > 0 ? JSON.parse(responseText) : null;
+let responseData: unknown = null;
+if (responseText.length > 0) {
+  try {
+    responseData = JSON.parse(responseText);
+  } catch {
+    if (!response.ok) {
+      throw new Error(
+        `Discord API request failed (${response.status}): ${responseText.slice(0, 200)}`,
+      );
+    }
+  }
+}
```
+ }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/workflows/luma-sync/steps/discord.ts` around lines 147 - 149, The
code parses Discord's response with JSON.parse(responseText) which can throw if
the body is non-empty but not valid JSON; wrap the parse in a try/catch (or use
a guarded parse helper) around JSON.parse so that on parse failure you set
responseData to null (or include the raw responseText for diagnostics) and do
not let a SyntaxError mask the HTTP status; update the logic around
responseText/responseData in this module (discord.ts) so any thrown parse errors
are caught and the code continues to handle non-2xx statuses using
response.status and the safe responseData value.
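The guarded-parse idea can also be factored into a tiny helper. `safeJsonParse` is a hypothetical name for illustration:

```typescript
// Returns parsed JSON, or null for empty or invalid bodies (e.g. an HTML
// error page from a proxy), so the HTTP status stays the primary signal.
function safeJsonParse(text: string): unknown {
  if (text.length === 0) return null;
  try {
    return JSON.parse(text);
  } catch {
    return null;
  }
}
```

With this helper, the caller keeps branching on `response.status` and treats `null` response data as "no usable body" regardless of why parsing failed.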
```ts
export async function set_live_after_explicit_approval({
  reviewSessionId,
  eventId,
  approvalMessageId,
}: {
  reviewSessionId: string;
  eventId: string;
  approvalMessageId: string;
}): Promise<boolean> {
  "use step";

  const [updatedEvent] = await db
    .update(eventsTable)
    .set({
      isDraft: false,
    })
    .where(and(eq(eventsTable.id, eventId), eq(eventsTable.isDraft, true)))
    .returning({ id: eventsTable.id });

  await db
    .update(eventReviewSessionsTable)
    .set({
      status: "approved",
      approvalMessageId,
      lastSeenMessageId: approvalMessageId,
    })
    .where(eq(eventReviewSessionsTable.id, reviewSessionId));

  return Boolean(updatedEvent);
}
```
Non-atomic dual update — event and review session can diverge on partial failure.

`set_live_after_explicit_approval` performs two independent UPDATE statements. If the first succeeds (setting `isDraft = false`) but the second fails (marking the session `"approved"`), the event goes live while the session remains `"pending"`, causing the poller to keep trying to approve an already-live event.

The project uses `drizzle-orm/neon-http`, which does not support `db.transaction()`. Use `db.batch()` to send both updates together as a batch:
```diff
 export async function set_live_after_explicit_approval({
   reviewSessionId,
   eventId,
   approvalMessageId,
 }: {
   reviewSessionId: string;
   eventId: string;
   approvalMessageId: string;
 }): Promise<boolean> {
   "use step";
-  const [updatedEvent] = await db
-    .update(eventsTable)
-    .set({
-      isDraft: false,
-    })
-    .where(and(eq(eventsTable.id, eventId), eq(eventsTable.isDraft, true)))
-    .returning({ id: eventsTable.id });
-
-  await db
-    .update(eventReviewSessionsTable)
-    .set({
-      status: "approved",
-      approvalMessageId,
-      lastSeenMessageId: approvalMessageId,
-    })
-    .where(eq(eventReviewSessionsTable.id, reviewSessionId));
-
-  return Boolean(updatedEvent);
+  const results = await db.batch([
+    db
+      .update(eventsTable)
+      .set({
+        isDraft: false,
+      })
+      .where(and(eq(eventsTable.id, eventId), eq(eventsTable.isDraft, true)))
+      .returning({ id: eventsTable.id }),
+    db
+      .update(eventReviewSessionsTable)
+      .set({
+        status: "approved",
+        approvalMessageId,
+        lastSeenMessageId: approvalMessageId,
+      })
+      .where(eq(eventReviewSessionsTable.id, reviewSessionId)),
+  ]);
+
+  const [updatedEvent] = results[0] as Array<{ id: string }>;
+  return Boolean(updatedEvent);
 }
```

Note: `db.batch()` sends all queries together in a single request, reducing the window for divergence, but does not provide full ACID transaction guarantees over HTTP. For true transactional safety, consider switching to the neon-serverless driver.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/workflows/luma-sync/steps/events.ts` around lines 185 - 214, Replace
the two independent updates in set_live_after_explicit_approval with a single
db.batch call that sends both UPDATE queries together: build the eventsTable
update (setting isDraft=false and returning id for eventId) and the
eventReviewSessionsTable update (setting status="approved", approvalMessageId
and lastSeenMessageId) and pass both queries into db.batch([...]) so they are
sent in one request; then extract the first batch result to determine
updatedEvent truthiness to return, using the existing symbols eventId,
reviewSessionId, approvalMessageId, eventsTable, eventReviewSessionsTable and
db.batch instead of two separate await db.update(...) calls.
Summary

- Adds an `event_review_sessions` table to track Discord review state per imported Luma event
- Imported events are created as drafts (`isDraft = true`)
- An explicit "Approved" reply in the Discord review thread marks the event approved and sets it live

Verification

- `bun run typecheck`
- `bun run build`
- `bun run fmt`
- `bun run db:generate`

Summary by CodeRabbit

Release Notes

- New Features
- Chores