Conversation
…, workspaces, playground
Sprint 15: Streamable HTTP streaming responses + webhook-to-MCP resource pipeline
Sprint 16: Per-provider webhook signature validation (Stripe/GitHub/Slack), webhook UI, composite server architecture with namespace-based tool dispatch
Sprint 17: Composite builder UI, merged tool preview, dashboard navigation updates
Sprint 18: Team workspaces with RBAC (owner/admin/member/viewer), response caching with per-tool TTLs and write-through invalidation
Sprint 19: Workspace switcher, member management, interactive playground with schema-to-form renderer
Security hardened: raw body signature validation, cross-tenant ownership checks, constant-time HMAC comparison, Redis SCAN over KEYS, rate limiting on webhook endpoints, user-scoped cache keys, pending invite isolation
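The "constant-time HMAC comparison" item above can be sketched with `node:crypto` — a minimal illustration only, not the PR's per-provider validators; the hex encoding and parameter names are assumptions:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Verify an HMAC-SHA256 webhook signature against the raw request body.
// Comparing with timingSafeEqual avoids leaking how many leading bytes
// of the signature match.
export function verifySignature(rawBody: string, signature: string, secret: string): boolean {
  const expected = createHmac('sha256', secret).update(rawBody).digest('hex');
  const a = Buffer.from(expected, 'utf8');
  const b = Buffer.from(signature, 'utf8');
  // timingSafeEqual throws on length mismatch, so check lengths first.
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b);
}
```

Note that validation must run over the raw body bytes, before any JSON parsing or re-serialization, or the computed digest will not match the provider's.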
Note
Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in your CodeRabbit settings. Use the following commands to manage reviews:
📝 Walkthrough
Adds composite MCP routing with namespaced dispatch and per-user plan gating, SSE streamable HTTP tools and resource endpoints, webhook ingestion with signature validation and SSE notification, Redis-backed response caching, workspace/composite DB models and APIs, frontend hooks/pages/components, transformer webhook support, and accompanying tests.
Sequence diagram(s): (mermaid diagrams omitted)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 19
Note
Due to the large number of review comments, comments of Critical and Major severity were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/runtime/src/transports/streamable-http.ts (1)
216-221: ⚠️ Potential issue | 🟠 Major

Don't advertise `resources.subscribe` before it's implemented.

Line 220 tells clients that resource subscriptions are supported, but the switch only handles `resources/list` and `resources/read`; a `resources/subscribe` request still falls through to "Method not found". That breaks the capability contract established during `initialize`.

Proposed fix:

```diff
   capabilities: {
     tools: { listChanged: true },
-    resources: { subscribe: true, listChanged: true },
+    resources: { listChanged: true },
   },
```

Also applies to: 334-399, 405-406
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/runtime/src/transports/streamable-http.ts` around lines 216 - 221, The initialize response currently advertises resources.subscribe support but the request dispatcher (in the streamable HTTP transport) only implements resources/list and resources/read, causing subscribe requests to be unhandled; update the initialization capability object used in jsonRpcSuccess (the block producing protocolVersion/capabilities) to remove or set resources.subscribe to false and ensure the same change is applied consistently to the other initialization response locations referenced (around the other ranges noted) so that advertised capabilities match implemented handlers (search for jsonRpcSuccess and the initialization response objects to locate all occurrences).
🟡 Minor comments (12)
apps/web/components/playground/schema-form.tsx (1)

161-171: ⚠️ Potential issue | 🟡 Minor

Pass the `required` parameter to array fields for required-indicator display.

The `SchemaField` function receives `required`, but the array case (lines 163-171) doesn't pass it to `ArrayField`, and `ArrayField` doesn't forward it to `FieldWrapper` (line 223). This prevents array fields from showing the required indicator, unlike all other field types.

🔧 Suggested fix:

```diff
 function ArrayField({
   name,
   schema,
   value,
   onChange,
   description,
+  required,
   disabled,
   depth = 0,
 }: {
   readonly name: string;
   readonly schema: JSONSchema;
   readonly value: unknown;
   readonly onChange: (value: unknown) => void;
   readonly description?: string;
+  readonly required?: boolean;
   readonly disabled?: boolean;
   readonly depth?: number;
 }) {
```

```diff
-  return (
-    <FieldWrapper name={label} description={description}>
+  return (
+    <FieldWrapper name={label} description={description} required={required}>
```

```diff
 case 'array': {
   return (
     <ArrayField
       name={name}
       schema={schema}
       value={value}
       onChange={onChange}
       description={description}
+      required={required}
       disabled={disabled}
       depth={depth}
     />
   );
 }
```
Verify each finding against the current code and only fix it if needed. In `@apps/web/components/playground/schema-form.tsx` around lines 161 - 171, The array branch of SchemaField is not passing the required prop to ArrayField, so ArrayField (and ultimately FieldWrapper) never receives it; update the array case in SchemaField to include required={required} when rendering <ArrayField ...>, and then modify the ArrayField component to accept a required prop and forward it into its FieldWrapper invocation (e.g., pass required={required} to FieldWrapper) so the required indicator is displayed for array fields.
apps/web/app/api/workspaces/[id]/members/[userId]/route.ts (1)
34-40: ⚠️ Potential issue | 🟡 Minor

Handle the case where the target member doesn't exist.

If `targetUserId` is not a member of this workspace, `getUserRole` may return `null` or `undefined`. The current code only checks for `targetRole === 'owner'` but proceeds to call `removeMember` even if the user doesn't exist. This could result in a misleading success response or a database error, depending on the repository implementation.

🛡️ Proposed fix to handle a non-existent member:

```diff
 const targetRole = await repo.getUserRole(workspaceId, targetUserId);
+if (!targetRole) {
+  throw new ApiError('NOT_FOUND', 'Member not found', 404);
+}
+
 if (targetRole === 'owner') {
   throw new ApiError('FORBIDDEN', 'Cannot remove the workspace owner', 403);
 }
```
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/api/workspaces/[id]/members/[userId]/route.ts` around lines 34 - 40, Check whether repo.getUserRole(workspaceId, targetUserId) returns null/undefined before proceeding: if targetRole is falsy, throw an ApiError('NOT_FOUND', 'Target user is not a member of this workspace', 404) and do not call repo.removeMember; otherwise keep the existing owner check (targetRole === 'owner') and then call repo.removeMember(workspaceId, targetUserId) only for valid non-owner members.
apps/web/app/api/workspaces/route.ts (1)
37-39: ⚠️ Potential issue | 🟡 Minor

Add handling for duplicate workspace slug errors.

The `repo.create` method will throw a database unique-constraint violation when a duplicate slug is provided, but `withErrorHandler` only handles `ApiError` and `ZodError` instances. Database constraint violations fall through to a generic 500 error instead of returning the appropriate `409 Conflict` response. Add a try-catch in the repository's `create` method to detect duplicate slug errors and throw an `ApiError` with status 409, or extend `withErrorHandler` to map database constraint violations accordingly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/api/workspaces/route.ts` around lines 37 - 39, The repository create path currently allows DB unique-constraint errors from WorkspaceRepository.create to bubble up as 500s; update WorkspaceRepository.create to catch the DB-specific duplicate-slug error (e.g., identify by the DB error code/message thrown by your ORM/driver) and rethrow an ApiError with status 409 and a clear message like "workspace slug already exists", or alternatively extend withErrorHandler to detect the same DB constraint error and map it to new ApiError(409, ...) before returning; target the WorkspaceRepository.create method and the withErrorHandler error-mapping logic and ensure ApiError is used so the route returns 409 Conflict for duplicate slugs.apps/web/app/api/workspaces/[id]/members/route.ts-30-37 (1)
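As a sketch of the error-mapping approach suggested above: Postgres reports unique-constraint violations with SQLSTATE `23505`, and the `ApiError` class below is a stand-in mirroring the `(code, message, status)` shape these routes appear to use — both are assumptions for illustration:

```typescript
// Hypothetical ApiError mirroring the shape used in these routes.
class ApiError extends Error {
  constructor(
    readonly code: string,
    message: string,
    readonly status: number,
  ) {
    super(message);
  }
}

// Translate a driver-level unique-constraint violation (SQLSTATE 23505)
// into a 409 Conflict; rethrow anything else unchanged.
export function mapDbError(err: unknown): never {
  const code = (err as { code?: string } | null)?.code;
  if (code === '23505') {
    throw new ApiError('CONFLICT', 'workspace slug already exists', 409);
  }
  throw err;
}
```

A repository `create` would wrap its insert in `try { … } catch (err) { mapDbError(err); }`, keeping the route handler free of driver-specific error codes.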
30-37: ⚠️ Potential issue | 🟡 Minor

The duplicate invitation concern is already prevented by the database schema.

The `workspaceMembers` table enforces a composite primary key on `(workspaceId, userId)`, which prevents duplicate entries at the database level. If the same email is invited twice, the database will reject the duplicate insertion.

However, the code lacks graceful error handling for this case. When attempting to invite the same email twice, the API will fail with a raw database constraint violation error instead of providing a user-friendly message. Consider adding a check before insertion or wrapping the insert in error handling to return a meaningful response (e.g., "User already invited" or "User already a member").
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/api/workspaces/[id]/members/route.ts` around lines 30 - 37, The add-member flow uses WorkspaceRepository.addMember (called from the route handler after getDb()) but doesn't handle duplicate-invite DB constraint errors, causing raw DB errors to surface; wrap the addMember call in a try/catch (or do a pre-check via a repository method like hasMember/getMember) and detect the unique-constraint/duplicate-insert error from your DB client, then return a friendly API response such as "User already invited" or "User already a member" instead of propagating the raw DB error. Ensure you reference WorkspaceRepository.addMember in the route handler and normalize the error handling around getDb() -> new WorkspaceRepository(db) -> repo.addMember(...) so duplicates are caught and translated to a 4xx response.
apps/web/app/dashboard/settings/members/page.tsx (1)
136-144: ⚠️ Potential issue | 🟡 Minor

Add an accessible name to the remove-member button.
This is an icon-only control right now, so assistive technology does not get a reliable label for the destructive action.
Proposed fix
```diff
 {member.role !== "owner" && (
   <Button
     variant="ghost"
     size="sm"
+    aria-label={`Remove ${member.userId} from workspace`}
     className="h-7 w-7 p-0 text-muted-foreground hover:text-destructive shrink-0"
     onClick={() => handleRemove(member.userId)}
     disabled={removeMember.isPending}
   >
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/dashboard/settings/members/page.tsx` around lines 136 - 144, The icon-only remove button lacks an accessible name; update the Button (the instance wrapping <Trash2 />) to provide an explicit accessible label (e.g., aria-label) so screen readers announce the action — use a contextual label like aria-label={`Remove ${member.name || 'member'}`} (fall back to a generic "Remove member" if member.name is unavailable) and keep the existing props (onClick={handleRemove(member.userId)} and disabled={removeMember.isPending}) so the control remains operable and descriptive for assistive tech.
apps/web/app/api/tools/[id]/execute/route.ts (1)
100-114: ⚠️ Potential issue | 🟡 Minor

Missing error check on `callRes` before parsing JSON.

If the runtime returns a non-2xx status, `callRes.json()` may fail or return unexpected content. The response should be validated before parsing.

🐛 Proposed fix:

```diff
 // Execute tool
 const callRes = await fetch(runtimeUrl, {
   method: 'POST',
   headers,
   body: JSON.stringify({
     jsonrpc: '2.0',
     id: 2,
     method: 'tools/call',
     params: { name: tool.toolName, arguments: input.arguments },
   }),
 });

+if (!callRes.ok) {
+  return NextResponse.json(
+    createSuccessResponse({ error: 'Tool execution failed', status: callRes.status }),
+    { status: 502 },
+  );
+}
+
 const result = await callRes.json();
 return NextResponse.json(createSuccessResponse(result));
```
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/api/tools/[id]/execute/route.ts` around lines 100 - 114, The code calls fetch(runtimeUrl) into callRes and immediately does await callRes.json() without checking the response status; update the block around callRes to verify callRes.ok (or status) before parsing, and if not ok read the response body (json or text) and return an error via NextResponse.json(createSuccessResponse(...)) or an appropriate error response; reference the variables callRes, runtimeUrl, tool.toolName, and the return path using NextResponse.json and createSuccessResponse to locate and change the logic so non-2xx responses are handled gracefully and include the runtime error details.
apps/web/app/api/workspaces/[id]/route.ts (1)
23-25: ⚠️ Potential issue | 🟡 Minor

Inconsistent rate limit handling between GET and PUT.

In `GET`, `withRateLimit` is called but its return value is not checked, while `PUT` correctly returns early when rate limited. If `withRateLimit` returns a response when the limit is exceeded, `GET` will ignore it and continue processing.

🐛 Proposed fix:

```diff
 const rbac = await resolveWorkspaceContext(id, 'viewer');
-await withRateLimit(rbac.userId);
+const rateLimited = await withRateLimit(rbac.userId);
+if (rateLimited) return rateLimited;
```
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/api/workspaces/[id]/route.ts` around lines 23 - 25, The GET handler calls resolveWorkspaceContext(id, 'viewer') and invokes withRateLimit(rbac.userId) but ignores its return value; update the GET flow to mirror PUT by capturing the result of withRateLimit(rbac.userId) and returning early when it yields a Response (i.e., if (const rl = await withRateLimit(...)) return rl;), so rate-limited requests are stopped before further processing.
apps/runtime/__tests__/unit/response-cache.test.ts (1)
28-32: ⚠️ Potential issue | 🟡 Minor

Update the mock `scan` signature to match actual ioredis usage.

The mock declares only 3 parameters `(_cursor, _match, pattern)`, but actual calls pass 5 arguments: `scan(cursor, 'MATCH', pattern, 'COUNT', 100)`. Update the signature to accept the remaining parameters:

```typescript
scan: vi.fn().mockImplementation(
  (_cursor: string, _match: string, pattern: string, _count: string, _limit: number) => {
    const prefix = pattern.replace(/\*$/, '');
    const matched = [...store.keys()].filter((k) => k.startsWith(prefix));
    return Promise.resolve(['0', matched]);
  },
),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/runtime/__tests__/unit/response-cache.test.ts` around lines 28 - 32, Update the mock scan implementation in the test to match ioredis's actual call signature by adding the extra parameters so the function accepts (cursor, _match, pattern, _count, _limit); modify the mock declared as scan: vi.fn().mockImplementation(...) to include the additional _count and _limit parameters, keep the existing logic that strips the trailing '*' from pattern and filters store.keys(), and return Promise.resolve(['0', matched]) as before so tests receive the same result shape.
apps/runtime/src/transports/composite.ts (1)
10-10: ⚠️ Potential issue | 🟡 Minor

Remove unused import `ToolDefinition`.

The `ToolDefinition` type is imported but never used, causing the ESLint `@typescript-eslint/no-unused-vars` error in CI.

🔧 Proposed fix:

```diff
-import type { ToolLoader, ToolDefinition } from '../registry/tool-loader.js';
+import type { ToolLoader } from '../registry/tool-loader.js';
```
Verify each finding against the current code and only fix it if needed. In `@apps/runtime/src/transports/composite.ts` at line 10, Remove the unused type import ToolDefinition from the import list in composite.ts: update the import that currently references "ToolLoader, ToolDefinition" (from '../registry/tool-loader.js') to only import the used symbol(s) (e.g., "ToolLoader") so the unused-variable ESLint error (`@typescript-eslint/no-unused-vars`) is resolved.
apps/runtime/src/transports/composite.ts (1)
1-6: ⚠️ Potential issue | 🟡 Minor

Fix import ordering to resolve ESLint warning.

The ESLint `import/order` rule flags an empty line within the import group between lines 4-6.

🔧 Proposed fix:

```diff
 import { randomUUID } from 'node:crypto';
-
 import type { Request, Response, Router } from 'express';
 import express from 'express';
 import type { Redis } from 'ioredis';
```
Verify each finding against the current code and only fix it if needed. In `@apps/runtime/src/transports/composite.ts` around lines 1 - 6, The ESLint import/order warning is caused by an empty line inside the same import group in apps/runtime/src/transports/composite.ts; remove the blank line between the express import ("import express from 'express'") and the Redis type import ("import type { Redis } from 'ioredis'") so that randomUUID, the express-related imports (Request, Response, Router, express) and the Redis type import are contiguous in the same group (or otherwise reorder them into proper groups) to satisfy the import/order rule.
apps/web/app/api/composites/[id]/route.ts (1)
18-24: ⚠️ Potential issue | 🟡 Minor

Remove the `.min(1)` constraint to allow clearing all composite members.

The `members` array has `.min(1).optional()`, which prevents sending an empty array. However, the repository implementation (lines 182-195 of `composite.repository.ts`) explicitly supports clearing all members by deleting existing members when a new (possibly empty) array is provided. The UI also allows users to remove members one by one, potentially reaching an empty state. Sending an empty array from the UI would fail validation. Remove `.min(1)` to permit clearing members when updating a composite.
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/api/composites/[id]/route.ts` around lines 18 - 24, The members schema currently enforces `.min(1)` which blocks sending an empty array; remove the `.min(1)` call from the members validation (i.e., change the declaration that currently reads members: z.array(...).min(1).max(20).optional() to drop .min(1)) so an empty array is permitted while preserving `.max(20).optional()`, leaving repository logic in composite.repository.ts to perform the clear/delete behavior; run relevant tests or exercise the update endpoint to confirm empty arrays are accepted.
apps/runtime/src/middleware/response-cache.ts (1)
5-7: ⚠️ Potential issue | 🟡 Minor

Fix import ordering per ESLint rules.

The pipeline is failing due to import order. The type import of `../mcp/tool-executor.js` should occur before the type import of `../observability/logger.js`.

🔧 Proposed fix:

```diff
-import type { Logger } from '../observability/logger.js';
-import { metrics } from '../observability/metrics.js';
 import type { MCPToolResult } from '../mcp/tool-executor.js';
+import type { Logger } from '../observability/logger.js';
+import { metrics } from '../observability/metrics.js';
```
Verify each finding against the current code and only fix it if needed. In `@apps/runtime/src/middleware/response-cache.ts` around lines 5 - 7, Reorder the type imports so the type import for MCPToolResult appears before the type import for Logger to satisfy ESLint import-order rules; specifically, swap the two import lines that bring in the Logger type and the MCPToolResult type (references: Logger, MCPToolResult) so the MCPToolResult type import precedes the Logger type import.
🧹 Nitpick comments (23)
apps/web/components/playground/schema-form.tsx (1)
237-243: Set explicit button type for add/remove controls.

If this component is rendered inside a `<form>`, default button behavior may submit unexpectedly. Setting `type="button"` makes intent explicit.

🔧 Suggested fix:

```diff
 <Button
+  type="button"
   variant="ghost"
   size="sm"
@@
 <Button
+  type="button"
   variant="outline"
   size="sm"
   onClick={handleAdd}
```

Also applies to: 248-252
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/components/playground/schema-form.tsx` around lines 237 - 243, The add/remove Button elements in schema-form.tsx lack an explicit type, which can cause them to submit a surrounding form; update the Button components used for removal and addition (the one calling handleRemove(i) and the sibling add button that calls handleAdd or similar) to include type="button" so their intent is explicit and they won't trigger form submission; locate the Button instances around the handlers handleRemove and handleAdd (the controls at the shown diff and the ones at lines 248-252) and add the type attribute to each.
apps/web/lib/db/migrations/0011_webhook_events.sql (1)
15-16: Ensure the eviction job is scheduled before production deployment.The comment correctly identifies the need for a scheduled job to purge old webhook events. Without this, the table will grow unbounded. Consider creating a tracking issue or adding the job to deployment runbooks.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/db/migrations/0011_webhook_events.sql` around lines 15 - 16, Add a scheduled eviction job to purge old rows from the webhook_events table before production deployment by creating a cron/worker task that runs the SQL shown (DELETE FROM webhook_events WHERE received_at < NOW() - INTERVAL '30 days') on a regular cadence, and document this in deployment runbooks and/or create a tracking issue so the job is not forgotten; reference the webhook_events table and the eviction SQL snippet when adding the task and documentation.apps/runtime/src/server.ts (1)
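The retention predicate such a job would apply can be sketched as follows. The 30-day window and the `DELETE FROM webhook_events WHERE received_at < NOW() - INTERVAL '30 days'` statement come from the migration comment; the function names are hypothetical:

```typescript
// Decide whether a webhook_events row is past the retention window.
// Mirrors: DELETE FROM webhook_events
//          WHERE received_at < NOW() - INTERVAL '30 days'
export function isExpired(receivedAt: Date, now: Date, retentionDays = 30): boolean {
  const cutoff = now.getTime() - retentionDays * 24 * 60 * 60 * 1000;
  return receivedAt.getTime() < cutoff;
}

// The job itself can be as simple as a timer that issues the DELETE;
// `run` would execute the SQL above against the database.
export function startEvictionJob(run: () => Promise<void>, intervalMs = 24 * 60 * 60 * 1000) {
  return setInterval(() => void run().catch(console.error), intervalMs);
}
```

In production a cron-style scheduler (or `pg_cron` on the database side) is usually preferable to an in-process timer, since it survives restarts and avoids duplicate runs across replicas.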
24-25: Fix import ordering per linter.

Pipeline reports import order violations. Consider reordering `./webhooks/receiver.js` to come after `./webhooks/notifier.js`.

🔧 Proposed fix:

```diff
-import { createWebhookRouter } from './webhooks/receiver.js';
 import { WebhookNotifier } from './webhooks/notifier.js';
+import { createWebhookRouter } from './webhooks/receiver.js';
```
Verify each finding against the current code and only fix it if needed. In `@apps/runtime/src/server.ts` around lines 24 - 25, The import order violates the linter: swap the two import statements so WebhookNotifier is imported before createWebhookRouter; specifically reorder the imports that reference WebhookNotifier and createWebhookRouter (so the import that brings in WebhookNotifier appears above the import that brings in createWebhookRouter) to satisfy the project's import-order rule.
apps/web/lib/db/schema/specs.ts (1)
14-14: Add `.references()` and an index to the Drizzle schema for `workspaceId`.

The migrations define a foreign key constraint and index at the database level (migration 0013 adds the FK with `REFERENCES workspaces(id) ON DELETE CASCADE`, and migration 0014 creates `idx_specs_workspace`), but the Drizzle schema doesn't declare them. Add `.references()` to `workspaceId` and include an index in the table definition for type safety, ORM consistency, and proper documentation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/db/schema/specs.ts` at line 14, The Drizzle schema for the Specs table is missing the foreign key and index declared in migrations; update the workspaceId column declaration (currently uuid('workspace_id')) to include .references('workspaces', 'id') and add the corresponding index definition (named idx_specs_workspace) to the table definition so the Specs table schema includes the FK and index for workspaceId (use the table's index helper referencing workspaceId to match the migration).
apps/web/lib/hooks/use-workspaces.ts (1)
4-4: Use the shared invite input type here.

This reintroduces a free-form `role: string` even though `@apifold/types` already exports the constrained invite contract. Typos like `"Owner"` will compile and only fail once the request hits the API.

Proposed fix:

```diff
-import type { Workspace, WorkspaceWithMembers, CreateWorkspaceInput, UpdateWorkspaceInput } from "@apifold/types";
+import type {
+  Workspace,
+  WorkspaceWithMembers,
+  CreateWorkspaceInput,
+  UpdateWorkspaceInput,
+  InviteMemberInput,
+} from "@apifold/types";
@@
 export function useInviteMember() {
   const queryClient = useQueryClient();
   return useMutation({
-    mutationFn: ({ workspaceId, email, role }: { readonly workspaceId: string; readonly email: string; readonly role: string }) =>
-      api.post(`/workspaces/${workspaceId}/members`, { email, role }),
+    mutationFn: ({ workspaceId, ...input }: { readonly workspaceId: string } & InviteMemberInput) =>
+      api.post(`/workspaces/${workspaceId}/members`, input),
     onSuccess: (_data, variables) => {
       queryClient.invalidateQueries({ queryKey: ["workspaces", variables.workspaceId] });
     },
   });
 }
```

Also applies to: 45-49
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/hooks/use-workspaces.ts` at line 4, The hook currently reintroduces an unconstrained role: string for invites; import and use the shared invite input type exported from `@apifold/types` (instead of the free-form role) and update all places in use-workspaces.ts that construct or type invite payloads (including the occurrences around the earlier import line and the spots around lines 45-49) to use that shared invite type; specifically add the exported invite type to the existing import list (alongside Workspace, WorkspaceWithMembers, CreateWorkspaceInput, UpdateWorkspaceInput) and replace any local/inline invite type or parameter (e.g., role: string) with the shared type so the hook's function signatures and request bodies match the constrained invite contract.
apps/web/components/dashboard/workspace-switcher.tsx (2)
38-66: Improve dropdown accessibility with keyboard support and ARIA attributes.

The dropdown implementation lacks:
- `aria-expanded` on the trigger button
- `aria-haspopup="listbox"` or `role="menu"` semantics
- Keyboard navigation (Escape to close, arrow keys for selection)
- Focus trapping within the dropdown

Consider using a headless UI library (e.g., Radix UI, Headless UI) or adding keyboard event handlers.

♻️ Minimal accessibility improvements:

```diff
 <button
   type="button"
   onClick={() => setOpen((prev) => !prev)}
+  aria-expanded={open}
+  aria-haspopup="listbox"
   className="flex items-center gap-2 rounded-md border border-border bg-background px-3 py-1.5 text-sm transition-colors duration-150 hover:bg-muted/50"
 >
```

Additionally, consider adding an `onKeyDown` handler to close on Escape:

```tsx
onKeyDown={(e) => {
  if (e.key === 'Escape') setOpen(false);
}}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/components/dashboard/workspace-switcher.tsx` around lines 38 - 66, The dropdown lacks ARIA and keyboard support; update the workspace-switcher component to add aria-expanded on the trigger button and aria-haspopup="listbox" (or "menu"), mark the dropdown container with role="menu" or role="listbox", and mark each workspace button with role="menuitem" (or option) and tabIndex to be focusable; implement an onKeyDown on the dropdown (or container) to handle Escape (call setOpen(false)), ArrowDown/ArrowUp to move a focusIndex state and programmatically focus the corresponding workspace button (use refs or a ref array), and Enter/Space to invoke onSwitch(workspace.id); optionally wrap focus-management in a small focus trap to keep focus inside while open or replace with Radix/Headless UI if preferred. Ensure you reference/modify setOpen, onSwitch, current, workspaces and the mapped workspace button rendering to add these attributes and handlers.
17-19: Handle loading and error states from `useWorkspaces`.

The hook destructures only `data: workspaces`, but `useWorkspaces` likely provides `isLoading` and `error` states. Consider showing a loading indicator or handling errors gracefully rather than rendering an empty dropdown.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/components/dashboard/workspace-switcher.tsx` around lines 17 - 19, The component uses useWorkspaces but only reads data (workspaces) and can render an empty dropdown during loading or on error; update the Workspace Switcher to also destructure isLoading and error from useWorkspaces, render a loading state (spinner or disabled placeholder) when isLoading is true, and render an error/fallback UI (or disabled dropdown with a message) when error is present; adjust how current is computed (const { data: workspaces, isLoading, error } = useWorkspaces(); const current = !isLoading && !error ? (workspaces?.find(w => w.id === currentWorkspaceId) ?? workspaces?.[0]) : undefined) and ensure the UI disables selection when isLoading/error to avoid empty dropdowns.
apps/runtime/src/mcp/protocol-handler.ts (2)
268-277: URI host validation may be too permissive.

The regex `^webhook:\/\/[^/]+\/(.+)$` accepts any non-slash characters as the host, but doesn't verify it matches `server.slug`. A client could potentially craft a URI like `webhook://other-server/event` and the handler would still attempt to read from the current server's Redis keys. While this doesn't expose cross-tenant data (the Redis key uses `server.id`), it could cause confusion.

♻️ Validate that the URI host matches the server slug:

```diff
-  const match = uri.match(/^webhook:\/\/[^/]+\/(.+)$/);
+  const match = uri.match(/^webhook:\/\/([^/]+)\/(.+)$/);
   if (!match) {
     return jsonRpcError(req.id, -32602, 'Invalid resource URI');
   }
-  const eventName = match[1]!;
+  const [, host, eventName] = match;
+
+  if (host !== server.slug) {
+    return jsonRpcError(req.id, -32602, 'Resource URI does not match server');
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/runtime/src/mcp/protocol-handler.ts` around lines 268 - 277, The URI host extraction currently uses /^webhook:\/\/[^/]+\/(.+)$/ and only captures the event name; change the parsing to capture the host and event (e.g., /^webhook:\/\/([^/]+)\/(.+)$/ via the existing uri.match call), then compare the captured host against server.slug and return jsonRpcError(req.id, -32602, 'Invalid resource URI host') if they differ; keep the existing eventName validation (/^[a-zA-Z0-9_.:-]+$/ and max length) and continue using server.id for Redis keys as before.
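The suggested parsing-plus-validation flow can be isolated into a pure helper, sketched below. The function name is hypothetical, and the 200-character length cap is illustrative (the original only says "max length"); the event-name whitelist is the one from the handler:

```typescript
// Parse a webhook resource URI of the form webhook://<host>/<event>,
// requiring the host to match the server slug. Returns the event name,
// or null for any invalid or mismatched URI.
export function parseWebhookUri(uri: string, serverSlug: string): string | null {
  const match = uri.match(/^webhook:\/\/([^/]+)\/(.+)$/);
  if (!match) return null;
  const host = match[1]!;
  const eventName = match[2]!;
  if (host !== serverSlug) return null;
  // Same event-name whitelist as the handler's validation;
  // the length cap value here is an assumption.
  if (eventName.length > 200 || !/^[a-zA-Z0-9_.:-]+$/.test(eventName)) return null;
  return eventName;
}
```

Keeping the check in a pure function makes the host/slug mismatch case trivially unit-testable without standing up the JSON-RPC handler.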
231-246: Consider adding a limit to prevent unbounded resource listings.

`scanKeys` iterates through all matching Redis keys without a limit. For servers with many webhook events, this could return a very large array, impacting memory and response time.

♻️ Add a configurable limit:

```diff
+const MAX_RESOURCES = 1000;
+
 try {
   const pattern = `webhook:latest:${server.id}:*`;
   const keys = await scanKeys(this.redis, pattern);
   const prefix = `webhook:latest:${server.id}:`;
   const resources = keys
+    .slice(0, MAX_RESOURCES)
     .map((key) => key.slice(prefix.length))
     .filter((name) => name.length > 0 && /^[a-zA-Z0-9_.:-]+$/.test(name))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/runtime/src/mcp/protocol-handler.ts` around lines 231 - 246, The keys listing can be unbounded because scanKeys returns all matches; update the logic around scanKeys and resources to enforce a configurable limit (e.g., via a new config or parameter) so you stop collecting keys once the limit is reached and only map those keys into resources; specifically, modify the call/site that uses scanKeys (symbol: scanKeys) in protocol-handler.ts to accept/implement a maxResults arg or iterate results incrementally, break when count >= limit, and then pass the truncated list to the existing mapping that builds resources before returning jsonRpcSuccess(req.id, { resources }); ensure the limit is configurable and validated.apps/web/app/api/workspaces/route.ts (1)
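For reference, the cursor loop behind a SCAN-based `scanKeys` with an early-exit limit looks roughly like this. It is typed against a minimal slice of the client interface rather than ioredis itself, and the in-memory stub stands in for Redis, so both are assumptions for illustration:

```typescript
// Minimal slice of the ioredis surface this helper needs.
interface ScanClient {
  scan(
    cursor: string,
    match: 'MATCH',
    pattern: string,
    count: 'COUNT',
    n: number,
  ): Promise<[string, string[]]>;
}

// Iterate SCAN cursors until the server returns cursor "0",
// stopping early once `limit` keys have been collected.
export async function scanKeys(client: ScanClient, pattern: string, limit = 1000): Promise<string[]> {
  const keys: string[] = [];
  let cursor = '0';
  do {
    const [next, batch] = await client.scan(cursor, 'MATCH', pattern, 'COUNT', 100);
    keys.push(...batch);
    cursor = next;
  } while (cursor !== '0' && keys.length < limit);
  return keys.slice(0, limit);
}

// In-memory stub emulating a single-pass SCAN, for demonstration only.
export function stubClient(allKeys: string[]): ScanClient {
  return {
    scan: (_cursor, _match, pattern) => {
      const prefix = pattern.replace(/\*$/, '');
      return Promise.resolve(['0', allKeys.filter((k) => k.startsWith(prefix))]);
    },
  };
}
```

Unlike KEYS, SCAN only guarantees completeness at the end of a full iteration, so capping the loop trades exhaustiveness for bounded latency, which is the right trade for a resource listing.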
9-12: Tighten slug validation to prevent edge cases.
The current regex `^[a-z0-9-]+$` allows slugs like `---`, `-test`, or `test-`. Consider anchoring the pattern to require alphanumeric characters at the start/end.
♻️ Suggested stricter slug pattern
 const createSchema = z.object({
   name: z.string().min(1).max(100),
-  slug: z.string().min(1).max(50).regex(/^[a-z0-9-]+$/),
+  slug: z.string().min(1).max(50).regex(/^[a-z0-9]+(?:-[a-z0-9]+)*$/),
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/api/workspaces/route.ts` around lines 9 - 12, The slug regex in createSchema currently allows leading/trailing or all-hyphen slugs; update the slug validator in createSchema to require an alphanumeric start and end and only allow hyphens between alphanumeric groups (e.g. use a pattern like ^[a-z0-9]+(?:-[a-z0-9]+)*$) so slugs like "-test", "test-", or "---" are rejected; replace the existing regex on the slug field and keep the existing z.string().min/max constraints and an appropriate error message.
apps/web/lib/db/schema/composite-servers.ts (1)
25-25: Add a foreign key constraint for `workspaceId`.
The `workspaceId` column is nullable but lacks a foreign key reference to the `workspaces` table. This is inconsistent with how other tables reference workspaces (e.g., `workspaceMembers`) and could lead to orphaned records if a workspace is deleted.
Consider updating line 25 to:
workspaceId: uuid('workspace_id').references(() => workspaces.id, { onDelete: 'cascade' }),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/db/schema/composite-servers.ts` at line 25, The workspaceId column (uuid('workspace_id')) is nullable but missing a foreign-key constraint; update its definition in composite-servers.ts to reference the workspaces table (workspaces.id) and add onDelete: 'cascade' so workspaceId is enforced and rows are removed when a workspace is deleted—i.e. change the uuid('workspace_id') definition to reference workspaces.id with onDelete: 'cascade' for consistency with workspaceMembers.
apps/web/lib/db/migrations/0013_workspaces.sql (1)
25-28: Consider adding indexes on `workspace_id` columns for query performance.
The new `workspace_id` foreign key columns will likely be used in WHERE clauses for workspace-scoped queries. Adding indexes can significantly improve query performance, especially as data grows.
📊 Suggested indexes
CREATE INDEX idx_specs_workspace ON specs (workspace_id);
CREATE INDEX idx_mcp_servers_workspace ON mcp_servers (workspace_id);
CREATE INDEX idx_credentials_workspace ON credentials (workspace_id);
CREATE INDEX idx_composite_servers_workspace ON composite_servers (workspace_id);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/db/migrations/0013_workspaces.sql` around lines 25 - 28, Add indexes on the newly added workspace_id foreign-key columns to improve query performance for workspace-scoped queries: create indexes for specs(workspace_id), mcp_servers(workspace_id), credentials(workspace_id), and composite_servers(workspace_id) (e.g., CREATE INDEX idx_specs_workspace ON specs (workspace_id) etc.), ensuring each index name is unique and matches the table/column names added in the migration.
apps/runtime/__tests__/integration/streamable-http.test.ts (2)
255-260: SSE event assertions may be brittle if event names change.
The test asserts specific event names (`event: progress`, `event: message`) which couples the test to implementation details. Consider extracting these as constants shared with the implementation, or at minimum add a comment documenting the expected SSE protocol.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/runtime/__tests__/integration/streamable-http.test.ts` around lines 255 - 260, The test currently asserts literal SSE event strings ("event: progress", "event: message") which couples the test to implementation details; update the test to reference shared constants from the implementation (e.g., SSE_EVENT_PROGRESS, SSE_EVENT_MESSAGE) or import the protocol constants used by the stream publisher and use those in the assertions (or, if importing isn't feasible, add a clear inline comment above the assertions documenting the expected SSE event names and protocol contract). Locate the assertions using the callRes/text() result (variable body) and replace the hard-coded strings with the shared symbol names (or add the comment) so the test remains stable if event names change.
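A minimal sketch of the shared-constants approach the finding suggests; `SSE_EVENTS` is a hypothetical module name, not an existing runtime export:

```typescript
// Hypothetical shared constants; the runtime does not currently export these.
const SSE_EVENTS = {
  PROGRESS: 'progress',
  MESSAGE: 'message',
} as const;

// Both the transport and the test would reference the constant, so a renamed
// event breaks at one site instead of silently failing string assertions.
const body = 'event: progress\ndata: {"pct":50}\n\nevent: message\ndata: {}\n\n';
const hasProgress = body.includes(`event: ${SSE_EVENTS.PROGRESS}`);
const hasMessage = body.includes(`event: ${SSE_EVENTS.MESSAGE}`);
console.log(hasProgress, hasMessage); // true true
```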
49-181: Consider extracting test setup to a shared helper.
The `beforeEach` block has significant complexity (~130 lines). Extracting the runtime setup into a reusable helper (similar to `createTestLogger`) would improve readability and enable reuse across other integration test files.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/runtime/__tests__/integration/streamable-http.test.ts` around lines 49 - 181, The beforeEach test setup is too large and should be extracted into a reusable helper to improve readability and reuse; create a helper (e.g., createRuntimeTestSetup) that encapsulates the logic currently in beforeEach—initializing upstreamApp/upstreamServer, ServerRegistry.upsert, ToolLoader mock, CredentialCache, CircuitBreaker, ConnectionMonitor, SessionManager, mockRedis (createMockRedis), ProtocolHandler, and createApp invocation—and return the resulting server/baseUrl/registry/protocolHandler/toolLoader/credentialCache objects; then simplify the test file's beforeEach to call this helper and assign the returned values (keeping existing symbols like beforeEach, createApp, ProtocolHandler, ToolLoader, CredentialCache, ServerRegistry, createMockRedis unchanged so consumers can find and reuse them).
apps/web/lib/hooks/use-composites.ts (2)
55-62: Delete mutation should also invalidate the specific composite query.
If a component is still mounted and holding a reference to `["composites", id]`, it won't know the composite was deleted. Consider invalidating both query keys for consistency with `useUpdateComposite`.
♻️ Proposed fix
 export function useDeleteComposite() {
   const queryClient = useQueryClient();
   return useMutation({
     mutationFn: (id: string) => api.delete<void>(`/composites/${id}`),
-    onSuccess: () => {
+    onSuccess: (_data, id) => {
+      queryClient.invalidateQueries({ queryKey: ["composites", id] });
       queryClient.invalidateQueries({ queryKey: ["composites"] });
     },
   });
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/hooks/use-composites.ts` around lines 55 - 62, The delete mutation in useDeleteComposite currently only invalidates the list query ["composites"], so any mounted components holding ["composites", id] won't update; modify useDeleteComposite's onSuccess to accept the mutation variables (the id) and call queryClient.invalidateQueries for both the list key ["composites"] and the specific key ["composites", id] (use the second onSuccess param to get the id), keeping the mutationFn (id: string) => api.delete(...) unchanged.
19-24: An unresolved `id` can still produce a malformed request.
The `enabled: !!id` check handles empty strings correctly (`!!''` is `false`, so the query is disabled). However, consider whether callers might pass `undefined` before the ID is resolved, which would result in a request to `/composites/undefined`.
♻️ More explicit check
 export function useComposite(id: string) {
   return useQuery({
     queryKey: ["composites", id],
     queryFn: () => api.get<CompositeServerWithMembers>(`/composites/${id}`),
-    enabled: !!id,
+    enabled: !!id && id.length > 0,
   });
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/hooks/use-composites.ts` around lines 19 - 24, The useComposite hook can be called with undefined and still build a request path like `/composites/undefined`; update the enabled check and guard the query function so it only runs for a non-empty string id: change the enabled condition in useComposite to explicitly check that id is a defined, non-empty string (e.g., typeof id === "string" && id.length > 0) and ensure the queryFn uses that validated id (or returns/throws if id is absent) so api.get is never called with "undefined"; reference the useComposite function, its queryKey ["composites", id], and the queryFn to locate where to apply the fix.
packages/transformer/src/webhooks.ts (1)
28-37: Version check may exclude future OpenAPI versions.
The `startsWith('3.1')` check will reject OpenAPI 3.2+ specs that also support webhooks. Consider a more flexible version comparison.
♻️ Suggested improvement
-  if (!spec.openapi.startsWith('3.1')) {
+  const [major, minor] = spec.openapi.split('.').map(Number);
+  if (major < 3 || (major === 3 && minor < 1)) {
     return {
       resources: [],
       notifications: [],
       warnings: [{
         code: 'WEBHOOKS_UNSUPPORTED_VERSION',
-        message: 'Webhooks require OpenAPI 3.1+; found ' + spec.openapi,
+        message: `Webhooks require OpenAPI 3.1+; found ${spec.openapi}`,
       }],
     };
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/transformer/src/webhooks.ts` around lines 28 - 37, The current check using spec.openapi.startsWith('3.1') will reject newer OpenAPI 3.x versions; change it to parse spec.openapi into numeric major/minor (e.g., split on '.' or use a semver lib) and accept if major > 3 OR (major === 3 AND minor >= 1). Replace the startsWith check with a reusable helper (e.g., isOpenApiAtLeast31) and use that helper where spec.openapi is evaluated so OpenAPI 3.2+ and other future 3.x versions are correctly allowed.
apps/web/app/api/tools/[id]/execute/route.ts (1)
88-93: Error response uses `createSuccessResponse`, which is semantically misleading.
Using `createSuccessResponse` to wrap an error object is confusing for API consumers. Consider using a dedicated error response helper or returning a plain error structure.
♻️ Cleaner error response
 if (!initRes.ok) {
   return NextResponse.json(
-    createSuccessResponse({ error: 'Runtime unavailable', status: initRes.status }),
+    { success: false, error: { code: 'RUNTIME_UNAVAILABLE', message: 'Runtime unavailable' } },
     { status: 502 },
   );
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/api/tools/`[id]/execute/route.ts around lines 88 - 93, Replace the misleading use of createSuccessResponse when initRes.ok is false: inside the initRes.ok check in route.ts (the block returning NextResponse.json for runtime unavailable) return a proper error payload instead—either call the project's error helper (e.g., createErrorResponse or createFailureResponse if available) or return a plain { error: 'Runtime unavailable', status: initRes.status } object directly to NextResponse.json with status 502; update the import if you add a new helper and ensure the response shape matches other API error responses.
apps/web/lib/middleware/rbac.ts (1)
1-2: Remove unused imports.
`NextResponse` and `createErrorResponse` are imported but not used in this file. The middleware throws `ApiError`, which is handled by the error handler wrapper at the route level.
🔧 Proposed fix
-import { NextResponse } from 'next/server';
-import { createErrorResponse } from '@apifold/types';
 import type { WorkspaceRole } from '@apifold/types';
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/middleware/rbac.ts` around lines 1 - 2, Remove the unused imports NextResponse and createErrorResponse from the top of apps/web/lib/middleware/rbac.ts since the file throws ApiError and relies on the route-level error handler; update the import statement to only include symbols actually referenced in this module (e.g., remove NextResponse and createErrorResponse) so there are no unused imports flagged.
apps/web/lib/db/repositories/composite.repository.ts (1)
202-206: Consider returning deletion status or throwing on not found.
The `delete` method silently succeeds even if the composite doesn't exist. This might be intentional (idempotent delete), but if the caller expects confirmation that something was deleted, consider returning a boolean or checking affected rows.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/db/repositories/composite.repository.ts` around lines 202 - 206, The delete method currently always resolves void and hides whether a row was removed; update the async delete(userId: string, id: string) in composite.repository.ts to return a deletion status (e.g., boolean) or throw when nothing was deleted: call this.db.delete(compositeServers)... and inspect the returned result/affected-rows (or returned rows) from the delete operation, then return true when a row was removed (false if none) or throw a NotFound error when affected rows === 0; adjust callers to handle the new boolean/exception accordingly.
packages/types/src/workspace.ts (1)
36-39: Consider restricting `owner` from `InviteMemberInput.role`.
The `InviteMemberInput` allows setting `role` to `'owner'`, but typically owner is assigned during workspace creation. Inviting someone as owner could lead to multiple owners. Consider using a restricted type or validating at the API layer.
💡 Alternative type
export type InvitableRole = 'admin' | 'member' | 'viewer';

export interface InviteMemberInput {
  readonly email: string;
  readonly role: InvitableRole;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/types/src/workspace.ts` around lines 36 - 39, InviteMemberInput currently allows WorkspaceRole values including 'owner', which can permit inviting owners; update the type to prevent assigning owner on invite by replacing or restricting InviteMemberInput.role (referencing InviteMemberInput and WorkspaceRole) — either introduce a new union type (e.g., InvitableRole) excluding 'owner' and use it for InviteMemberInput.role, or add validation at the API layer to reject 'owner' when processing invite requests; ensure all references to InviteMemberInput are updated accordingly and adjust any callers to use the new restricted role set.
apps/web/app/dashboard/composites/[id]/page.tsx (1)
202-210: Consider adding error feedback for mutations.
The `handleMembersUpdate` and `handleDelete` functions call mutations but don't handle error states. Users won't see feedback if the update/delete fails.
💡 Suggested improvement
 const handleMembersUpdate = (members: readonly { serverId: string; namespace: string; displayOrder: number }[]) => {
-  updateComposite.mutate({ id, input: { members } });
+  updateComposite.mutate({ id, input: { members } }, {
+    onError: (error) => {
+      // Consider using a toast notification
+      console.error('Failed to update members:', error);
+    },
+  });
 };

 const handleDelete = () => {
   deleteComposite.mutate(id, {
     onSuccess: () => router.push("/dashboard/composites"),
+    onError: (error) => {
+      setConfirmDelete(false);
+      // Consider using a toast notification
+      console.error('Failed to delete composite:', error);
+    },
   });
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/dashboard/composites/`[id]/page.tsx around lines 202 - 210, Add error feedback to the mutations by supplying onError handlers to updateComposite.mutate and deleteComposite.mutate: update the handleMembersUpdate and handleDelete functions to pass an options object with onError that surfaces the error (e.g., show a toast/snackbar or set an error state) and keep the existing onSuccess behavior (for handleDelete, still call router.push on success). Specifically modify calls to updateComposite.mutate(...) and deleteComposite.mutate(id, {...}) to include onError callbacks that log the error and display a user-friendly message so failures are visible to the user.
apps/runtime/src/middleware/response-cache.ts (1)
56-72: Consider skipping cache writes for write methods.
`setCachedResponse` will write cache entries for POST/PUT/DELETE/PATCH operations, but `getCachedResponse` never reads them. Adding the same WRITE_METHODS check here would avoid wasted Redis writes.
♻️ Proposed refactor
 export async function setCachedResponse(
   deps: ResponseCacheDeps,
   config: CacheConfig,
   input: Readonly<Record<string, unknown>>,
   result: MCPToolResult,
 ): Promise<void> {
   if (config.cacheTtlSeconds <= 0) return;
+  if (WRITE_METHODS.has(config.httpMethod.toLowerCase())) return;
   if (result.isError) return;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/runtime/src/middleware/response-cache.ts` around lines 56 - 72, setCachedResponse is currently writing cache for all requests including write operations; add the same WRITE_METHODS check used by getCachedResponse to skip caching for HTTP write methods (e.g., POST/PUT/PATCH/DELETE). Update setCachedResponse (function name) to inspect the incoming request method from CacheConfig (or the same property getCachedResponse reads) and return early when the method is in WRITE_METHODS, to avoid unnecessary Redis writes and keep behavior consistent with getCachedResponse.
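The symmetric guard can be factored into one predicate shared by the read and write paths; a minimal sketch, where `CacheDecisionInput` mirrors the `CacheConfig` fields cited in the review and is an assumption, not the runtime's actual type:

```typescript
// Sketch: one cacheability predicate used by both getCachedResponse and
// setCachedResponse, so write-method requests are neither served from nor
// written to the cache.
const WRITE_METHODS = new Set(['post', 'put', 'patch', 'delete']);

interface CacheDecisionInput {
  cacheTtlSeconds: number;
  httpMethod: string;
  isError?: boolean; // only meaningful on the write path
}

function isCacheable({ cacheTtlSeconds, httpMethod, isError }: CacheDecisionInput): boolean {
  if (cacheTtlSeconds <= 0) return false;
  if (WRITE_METHODS.has(httpMethod.toLowerCase())) return false;
  if (isError) return false;
  return true;
}

console.log(isCacheable({ cacheTtlSeconds: 60, httpMethod: 'GET' })); // true
console.log(isCacheable({ cacheTtlSeconds: 60, httpMethod: 'POST' })); // false
console.log(isCacheable({ cacheTtlSeconds: 0, httpMethod: 'GET' })); // false
```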
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3f85ebc8-f810-4c5b-993f-b3d25dd4f5ae
📒 Files selected for processing (58)
apps/runtime/__tests__/integration/composite.test.ts
apps/runtime/__tests__/integration/streamable-http.test.ts
apps/runtime/__tests__/unit/namespace.test.ts
apps/runtime/__tests__/unit/response-cache.test.ts
apps/runtime/src/mcp/namespace.ts
apps/runtime/src/mcp/protocol-handler.ts
apps/runtime/src/mcp/session-manager.ts
apps/runtime/src/middleware/response-cache.ts
apps/runtime/src/registry/tool-loader.ts
apps/runtime/src/server.ts
apps/runtime/src/transports/composite.ts
apps/runtime/src/transports/streamable-http.ts
apps/runtime/src/webhooks/notifier.ts
apps/runtime/src/webhooks/receiver.ts
apps/runtime/src/webhooks/signature.ts
apps/web/__tests__/rbac/permissions.test.ts
apps/web/app/api/composites/[id]/route.ts
apps/web/app/api/composites/route.ts
apps/web/app/api/servers/[id]/webhook-events/route.ts
apps/web/app/api/tools/[id]/execute/route.ts
apps/web/app/api/workspaces/[id]/members/[userId]/route.ts
apps/web/app/api/workspaces/[id]/members/route.ts
apps/web/app/api/workspaces/[id]/route.ts
apps/web/app/api/workspaces/route.ts
apps/web/app/dashboard/composites/[id]/page.tsx
apps/web/app/dashboard/composites/page.tsx
apps/web/app/dashboard/servers/[id]/layout.tsx
apps/web/app/dashboard/servers/[id]/webhooks/page.tsx
apps/web/app/dashboard/settings/members/page.tsx
apps/web/components/dashboard/workspace-switcher.tsx
apps/web/components/playground/schema-form.tsx
apps/web/lib/constants/navigation.ts
apps/web/lib/db/migrations/0011_webhook_events.sql
apps/web/lib/db/migrations/0012_composite_servers.sql
apps/web/lib/db/migrations/0013_workspaces.sql
apps/web/lib/db/migrations/0014_migrate_users_to_workspaces.sql
apps/web/lib/db/migrations/0015_tool_cache_ttl.sql
apps/web/lib/db/migrations/0016_enforce_workspace_not_null.sql
apps/web/lib/db/repositories/composite.repository.ts
apps/web/lib/db/repositories/workspace.repository.ts
apps/web/lib/db/schema/composite-servers.ts
apps/web/lib/db/schema/credentials.ts
apps/web/lib/db/schema/index.ts
apps/web/lib/db/schema/servers.ts
apps/web/lib/db/schema/specs.ts
apps/web/lib/db/schema/workspaces.ts
apps/web/lib/hooks/index.ts
apps/web/lib/hooks/use-composites.ts
apps/web/lib/hooks/use-webhook-events.ts
apps/web/lib/hooks/use-workspaces.ts
apps/web/lib/middleware/rbac.ts
packages/transformer/__tests__/webhooks.test.ts
packages/transformer/src/index.ts
packages/transformer/src/types.ts
packages/transformer/src/webhooks.ts
packages/types/src/composite.ts
packages/types/src/index.ts
packages/types/src/workspace.ts
const NAMESPACE_SEPARATOR = '__';

export function prefixToolName(namespace: string, toolName: string): string {
  return `${namespace}${NAMESPACE_SEPARATOR}${toolName}`;
}

export function stripNamespace(namespacedName: string): { namespace: string; toolName: string } | null {
  const idx = namespacedName.indexOf(NAMESPACE_SEPARATOR);
  if (idx < 0) return null;
  return {
    namespace: namespacedName.slice(0, idx),
    toolName: namespacedName.slice(idx + NAMESPACE_SEPARATOR.length),
  };
}

export function isNamespaced(name: string): boolean {
  return name.includes(NAMESPACE_SEPARATOR);
}
Reject ambiguous namespace values before encoding.
`prefixToolName("foo__bar", "baz")` serializes to `foo__bar__baz`, but `stripNamespace()` decodes that as `{ namespace: "foo", toolName: "bar__baz" }`. The helper also treats `__tool` and `ns__` as valid. If callers ever pass those shapes, composite dispatch can target the wrong member or a blank tool name.
Proposed fix
export function prefixToolName(namespace: string, toolName: string): string {
+ if (!namespace || namespace.includes(NAMESPACE_SEPARATOR)) {
+ throw new Error(`namespace must be non-empty and must not contain "${NAMESPACE_SEPARATOR}"`);
+ }
+ if (!toolName) {
+ throw new Error('toolName must be non-empty');
+ }
return `${namespace}${NAMESPACE_SEPARATOR}${toolName}`;
}
export function stripNamespace(namespacedName: string): { namespace: string; toolName: string } | null {
const idx = namespacedName.indexOf(NAMESPACE_SEPARATOR);
- if (idx < 0) return null;
+ if (idx <= 0) return null;
+ const toolName = namespacedName.slice(idx + NAMESPACE_SEPARATOR.length);
+ if (!toolName) return null;
return {
namespace: namespacedName.slice(0, idx),
- toolName: namespacedName.slice(idx + NAMESPACE_SEPARATOR.length),
+ toolName,
};
}
export function isNamespaced(name: string): boolean {
- return name.includes(NAMESPACE_SEPARATOR);
+ return stripNamespace(name) !== null;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/runtime/src/mcp/namespace.ts` around lines 1 - 18, Reject and validate
inputs containing the namespace separator or empty segments in prefixToolName
and isNamespaced-related flows: update prefixToolName to check that namespace
and toolName are non-empty and do not include NAMESPACE_SEPARATOR (and similarly
ensure names do not start or end with the separator), and have it throw or
return an error for invalid inputs; adjust stripNamespace to handle and reject
ambiguous inputs (e.g., those with no separator or with empty
namespace/toolName) rather than silently parsing them; reference
NAMESPACE_SEPARATOR, prefixToolName, stripNamespace, and isNamespaced when
locating where to add these validations.
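The ambiguity is easy to reproduce with a standalone copy of the current helpers (reimplemented here for illustration, outside the runtime):

```typescript
// Standalone reimplementation of the helpers above, to show the ambiguity.
const SEP = '__';

function prefixToolName(namespace: string, toolName: string): string {
  return `${namespace}${SEP}${toolName}`;
}

function stripNamespace(name: string): { namespace: string; toolName: string } | null {
  const idx = name.indexOf(SEP);
  if (idx < 0) return null;
  return { namespace: name.slice(0, idx), toolName: name.slice(idx + SEP.length) };
}

// Round trip is not the identity when the namespace itself contains "__":
const encoded = prefixToolName('foo__bar', 'baz'); // "foo__bar__baz"
const decoded = stripNamespace(encoded);
console.log(decoded); // { namespace: 'foo', toolName: 'bar__baz' }
```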
const uriMatch = uri.match(/^webhook:\/\/[^/]+\/(.+)$/);
if (!uriMatch) {
  response = jsonRpcError(message.id, -32602, 'Invalid resource URI');
  break;
}
const eventName = uriMatch[1]!;

if (!/^[a-zA-Z0-9_.:-]+$/.test(eventName) || eventName.length > 200) {
  response = jsonRpcError(message.id, -32602, 'Invalid event name in resource URI');
  break;
}

try {
  const redisKey = `webhook:latest:${server.id}:${eventName}`;
  const data = await deps.redis.get(redisKey);

  if (!data) {
    response = jsonRpcError(message.id, -32002, 'Resource not found');
    break;
  }

  response = jsonRpcSuccess(message.id, {
    contents: [{
      uri,
      mimeType: 'application/json',
      text: data,
    }],
  });
Validate the resource URI host before reading Redis.
resources/read ignores the webhook://<slug>/... authority and always reads from the currently addressed server. A request like webhook://other-server/order.created can therefore return this server's order.created payload, which breaks URI semantics and can poison client-side caching.
Proposed fix
- const uriMatch = uri.match(/^webhook:\/\/[^/]+\/(.+)$/);
+ const uriMatch = uri.match(/^webhook:\/\/([^/]+)\/(.+)$/);
if (!uriMatch) {
response = jsonRpcError(message.id, -32602, 'Invalid resource URI');
break;
}
- const eventName = uriMatch[1]!;
+ const uriServerSlug = uriMatch[1]!;
+ const eventName = uriMatch[2]!;
+
+ if (uriServerSlug !== server.slug) {
+ response = jsonRpcError(message.id, -32602, 'Resource URI does not belong to this server');
+ break;
+ }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/runtime/src/transports/streamable-http.ts` around lines 366 - 393, The
code currently only extracts the event name (uriMatch) and ignores the URI
authority; update the URI parsing to capture the host (e.g., change the regex to
capture both host and eventName), then validate that the captured host equals
the current server identifier used for storage (server.id) before reading Redis
(deps.redis.get). If the host does not match, return a JSON-RPC error (use
jsonRpcError with the same message.id) instead of reading redisKey, and only
construct redisKey and call deps.redis.get when the host matches; reference the
existing variables uriMatch, eventName, server.id, redisKey, deps.redis.get and
jsonRpcError to locate where to change the logic.
notify(serverSlug: string, eventName: string, payload: unknown): void {
  const notification = Object.freeze({
    jsonrpc: '2.0' as const,
    method: `notifications/webhook/${eventName}`,
    params: { eventName, payload },
  });

  const data = JSON.stringify(notification);
  let sent = 0;

  this.sessionManager.forEachSession(serverSlug, (session) => {
    this.sessionManager.sendEvent(session, 'message', data);
    sent++;
  });
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
rg -n -C3 '\bsessionManager\.create\s*\(' apps/runtime/src
rg -n -C3 '\bauthenticated\b' apps/runtime/src
rg -n -C3 'new WebhookNotifier|\.notify\s*\(' apps/runtime/src
Repository: Work90210/APIFold
🏁 Script executed:
rg -n 'sessionManager\.create' apps/runtime/src
🏁 Script executed:
rg -n 'forEachSession|sendEvent' apps/runtime/src/mcp/session-manager.ts -A 5
Add defense-in-depth auth check to webhook fanout and normalize event name parameter.
The webhook notifier broadcasts to all sessions matching a server slug without an authentication guard. While the SSE transport currently only creates sessions with authenticated: true, adding a defensive check prevents data leakage if a future code path creates unauthenticated sessions. Additionally, eventName should be normalized before embedding in the method path.
Suggested hardening
notify(serverSlug: string, eventName: string, payload: unknown): void {
+ const normalizedEventName = eventName.replace(/[^a-zA-Z0-9._-]/g, '_');
const notification = Object.freeze({
jsonrpc: '2.0' as const,
- method: `notifications/webhook/${eventName}`,
- params: { eventName, payload },
+ method: `notifications/webhook/${normalizedEventName}`,
+ params: { eventName: normalizedEventName, payload },
});
const data = JSON.stringify(notification);
let sent = 0;
this.sessionManager.forEachSession(serverSlug, (session) => {
+ if (!session.authenticated) return;
this.sessionManager.sendEvent(session, 'message', data);
sent++;
  });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/runtime/src/webhooks/notifier.ts` around lines 18 - 31, The notify
method currently builds the notification using the raw eventName and blindly
sends to every session; update notify to first normalize the eventName (e.g.,
coerce to string, trim, and sanitize/slugify so it’s safe to embed in the method
path) and use that normalized value when constructing the notification object,
and then, inside the this.sessionManager.forEachSession(serverSlug, (session) =>
...) callback, add an authentication guard that skips sessions that are not
authenticated (check session.authenticated or the appropriate property) before
calling this.sessionManager.sendEvent(session, 'message', data); reference the
notify function and the sessionManager.forEachSession/sendEvent calls when
making the changes.
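The normalization step from the suggested hardening can be exercised in isolation; the replacement regex is taken from the diff above, while `webhookMethod` is an illustrative helper name:

```typescript
// Normalize an event name before embedding it in a JSON-RPC method path.
// The character class mirrors the replace(/[^a-zA-Z0-9._-]/g, '_') in the
// suggested hardening.
function normalizeEventName(eventName: string): string {
  return String(eventName).trim().replace(/[^a-zA-Z0-9._-]/g, '_');
}

function webhookMethod(eventName: string): string {
  return `notifications/webhook/${normalizeEventName(eventName)}`;
}

console.log(webhookMethod('order.created')); // notifications/webhook/order.created
console.log(webhookMethod('order created/x')); // notifications/webhook/order_created_x
```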
onChange={(e) => {
  const raw = e.target.value;
  if (raw === '') { onChange(undefined); return; }
  onChange(type === 'integer' ? parseInt(raw, 10) : parseFloat(raw));
}}
🧩 Analysis chain
🏁 Script executed:
cat -n apps/web/components/playground/schema-form.tsx | sed -n '95,125p'
🏁 Script executed:
node << 'EOF'
// Test parseInt and parseFloat behavior with intermediate inputs
const testCases = ['-', '.', '-.', '.5', '-5', '-5.5', 'abc', ''];
console.log("parseInt behavior:");
testCases.forEach(input => {
const result = parseInt(input, 10);
console.log(` parseInt("${input}", 10) = ${result}, isNaN: ${Number.isNaN(result)}`);
});
console.log("\nparseFloat behavior:");
testCases.forEach(input => {
const result = parseFloat(input);
console.log(` parseFloat("${input}") = ${result}, isNaN: ${Number.isNaN(result)}`);
});
console.log("\nNumber() behavior:");
testCases.forEach(input => {
const result = Number(input);
console.log(` Number("${input}") = ${result}, isNaN: ${Number.isNaN(result)}`);
});
EOF
🏁 Script executed:
# Find where this component is defined and used
grep -n "case 'integer':" apps/web/components/playground/schema-form.tsx | head -5
🏁 Script executed:
# Look for the component definition and its props
head -100 apps/web/components/playground/schema-form.tsx | cat -n
🏁 Script executed:
# Check if there's any downstream handling of NaN values
rg "isNaN|Number.isNaN" apps/web/components/playground/ -A 2 -B 2
🏁 Script executed:
# Check the full component to see the complete SchemaField implementation
wc -l apps/web/components/playground/schema-form.tsx
🏁 Script executed:
# Review the suggested fix logic to verify it's correct
node << 'EOF'
// Verify the suggested fix handles all cases
const testCases = [
{ raw: '', expected: 'return (undefined)' },
{ raw: '-', expected: 'return (NaN check)' },
{ raw: '.', expected: 'return (NaN check)' },
{ raw: '-.', expected: 'return (NaN check)' },
{ raw: '.5', type: 'number', expected: 0.5 },
{ raw: '5', type: 'integer', expected: 5 },
{ raw: '5', type: 'number', expected: 5 },
{ raw: '5.5', type: 'integer', expected: 'return (not integer)' },
{ raw: '5.5', type: 'number', expected: 5.5 },
{ raw: '-5.5', type: 'integer', expected: 'return (not integer)' },
];
console.log("Suggested fix validation:\n");
testCases.forEach(tc => {
const { raw, type = 'number', expected } = tc;
// Simulate the suggested fix logic
if (raw === '') {
console.log(`✓ raw="${raw}": returns undefined (empty string)`);
return;
}
const parsed = Number(raw);
if (Number.isNaN(parsed)) {
console.log(`✓ raw="${raw}": returns early (NaN detected)`);
return;
}
if (type === 'integer' && !Number.isInteger(parsed)) {
console.log(`✓ raw="${raw}", type=integer: returns early (not integer)`);
return;
}
console.log(`✓ raw="${raw}", type=${type}: onChange(${parsed})`);
});
EOF
Length of output: 514
Guard against NaN propagation from number inputs.
At line 110, intermediate values during typing (e.g., `-`, `.`, `-.`) parse to `NaN` with `parseInt` and `parseFloat`, and are then propagated to the form state via the `onChange` callback with no validation.
Update the handler to validate the parsed value:
🔧 Suggested fix
onChange={(e) => {
  const raw = e.target.value;
  if (raw === '') { onChange(undefined); return; }
- onChange(type === 'integer' ? parseInt(raw, 10) : parseFloat(raw));
+ const parsed = Number(raw);
+ if (Number.isNaN(parsed)) return;
+ if (type === 'integer' && !Number.isInteger(parsed)) return;
+ onChange(parsed);
}}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

onChange={(e) => {
  const raw = e.target.value;
  if (raw === '') { onChange(undefined); return; }
  onChange(type === 'integer' ? parseInt(raw, 10) : parseFloat(raw));
}}

After:

onChange={(e) => {
  const raw = e.target.value;
  if (raw === '') { onChange(undefined); return; }
  const parsed = Number(raw);
  if (Number.isNaN(parsed)) return;
  if (type === 'integer' && !Number.isInteger(parsed)) return;
  onChange(parsed);
}}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/web/components/playground/schema-form.tsx` around lines 107 - 111, The
current inline onChange handler in the form's number input (the arrow function
using variables type and onChange) passes parseInt/parseFloat results directly,
which can propagate NaN for intermediate typing states; update the handler to
parse the value, guard against NaN and other non-numeric intermediates, and call
onChange(undefined) for invalid parses (or when raw === ''), otherwise call
onChange with the parsed number; specifically, in the onChange handler used in
the component, compute parsed = type === 'integer' ? parseInt(raw, 10) :
parseFloat(raw), then if Number.isNaN(parsed) (or raw matches only '-' or '.'),
call onChange(undefined) else call onChange(parsed).
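The guard the bot proposes is easy to test in isolation. Below is a sketch that factors the same logic into a standalone helper; the function name `parseNumericInput` and the `null` = "ignore this keystroke" convention are mine, not the PR's:

```typescript
// Hypothetical helper mirroring the suggested onChange guard.
// Returns a number only for complete, type-valid input, `undefined`
// when the field is cleared, and `null` for "ignore this keystroke"
// (intermediate input such as '-', '.', '-.').
type NumericType = 'integer' | 'number';

function parseNumericInput(raw: string, type: NumericType): number | undefined | null {
  if (raw === '') return undefined; // cleared field -> unset the value
  const parsed = Number(raw); // Number('-') and Number('.') are NaN
  if (Number.isNaN(parsed)) return null; // keep previous form state
  if (type === 'integer' && !Number.isInteger(parsed)) return null;
  return parsed;
}
```

A caller can then map `undefined` to clearing the field and `null` to doing nothing, which keeps `NaN` out of the form state entirely.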
encryptedClientSecret: text('encrypted_client_secret'),
tokenExpiresAt: timestamp('token_expires_at', { withTimezone: true }),
provider: text('provider'),
workspaceId: uuid('workspace_id'),
🧩 Analysis chain
🏁 Script executed:
# First, let's look at the current credentials schema file
cat -n apps/web/lib/db/schema/credentials.ts
Length of output: 2113
🏁 Script executed:
# Check the workspaces schema structure
cat -n apps/web/lib/db/schema/workspaces.ts
Length of output: 1506
🏁 Script executed:
# Let's also check the servers schema to understand the pattern mentioned
cat -n apps/web/lib/db/schema/servers.ts
Length of output: 2203
🏁 Script executed:
# Search for migrations related to workspace constraints
find . -type f -name "*migration*" -o -name "*migrate*" | grep -E "\.(ts|js|sql)$" | head -20
Length of output: 162
🏁 Script executed:
# Look for any workspace-related constraints in the codebase
rg "workspaceId.*notNull|workspaceId.*references" --type ts -B 2 -A 2
Length of output: 43
🏁 Script executed:
# Check the migration that handles workspace migration
cat -n apps/web/lib/db/migrations/0014_migrate_users_to_workspaces.sql
Length of output: 3030
🏁 Script executed:
# Search for other migrations that might define workspace constraints
find apps/web/lib/db/migrations -type f -name "*.sql" | sort | xargs grep -l "workspace" 2>/dev/null
Length of output: 234
🏁 Script executed:
# Check if there are any ALTER TABLE statements that add constraints
rg "ALTER TABLE.*workspace|ADD.*CONSTRAINT.*workspace" --type sql -B 2 -A 2
Length of output: 2503
🏁 Script executed:
# Look for TypeScript code that reads/writes credentials to understand how workspaceId is used
rg "credentials\." apps/web/lib/db -A 3 -B 1 --type ts | head -50
Length of output: 4047
🏁 Script executed:
# Check the full credential.repository.ts to understand workspaceId usage
cat -n apps/web/lib/db/repositories/credential.repository.ts
Length of output: 10417
🏁 Script executed:
# Search for any places that query or set credentials.workspaceId
rg "credentials\.workspaceId|workspaceId.*credentials" --type ts -B 2 -A 2
Length of output: 43
credentials.workspaceId schema must reflect existing and pending DB constraints.
The Drizzle schema at line 30 declares workspaceId as nullable without a foreign key reference. However, migration 0013 already adds a DB-level FK constraint (REFERENCES workspaces(id) ON DELETE CASCADE), and migration 0016 will enforce NOT NULL. This schema drift causes type mismatches: the schema claims the column is optional and unconstrained, while the database enforces the opposite. Update the schema to align:
🧩 Suggested schema alignment
import { pgTable, uuid, text, timestamp, index } from 'drizzle-orm/pg-core';
import { sql } from 'drizzle-orm';
import { mcpServers } from './servers';
+import { workspaces } from './workspaces';
@@
- workspaceId: uuid('workspace_id'),
+ workspaceId: uuid('workspace_id')
+   .references(() => workspaces.id, { onDelete: 'cascade' })
+   .notNull(),

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

workspaceId: uuid('workspace_id'),

After:

workspaceId: uuid('workspace_id')
  .references(() => workspaces.id, { onDelete: 'cascade' })
  .notNull(),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/web/lib/db/schema/credentials.ts` at line 30, The
credentials.workspaceId column in the Drizzle schema must be updated to match DB
migrations 0013 and 0016: make workspaceId non-nullable and add the foreign-key
reference to workspaces(id) with ON DELETE CASCADE; update the credentials table
schema entry (workspaceId) so it declares the FK to workspaces.id and marks the
column as NOT NULL to avoid schema/type drift and match the existing DB
constraints.
domainVerifiedAt: timestamp('domain_verified_at', { withTimezone: true }),
domainVerificationToken: text('domain_verification_token'),
tokenHash: text('token_hash'),
workspaceId: uuid('workspace_id'),
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
rg -n -C2 'mcp_servers|workspace_id|NOT NULL|REFERENCES' apps/web/lib/db/migrations/0013_workspaces.sql apps/web/lib/db/migrations/0014_migrate_users_to_workspaces.sql apps/web/lib/db/migrations/0016_enforce_workspace_not_null.sql
rg -n -C3 'workspaceId' apps/web/lib/db/schema/servers.ts
Length of output: 10276
Add .notNull() constraint and FK reference to match enforced DB constraints.
Migration 0016 enforces mcp_servers.workspace_id as NOT NULL, and migration 0013 adds the FK reference to workspaces(id). The current Drizzle schema is nullable without FK reference, creating type safety and integrity gaps.
Suggested fix
import {
@@
} from 'drizzle-orm/pg-core';
import { specs } from './specs';
+import { workspaces } from './workspaces';
@@
- workspaceId: uuid('workspace_id'),
+ workspaceId: uuid('workspace_id')
+   .references(() => workspaces.id)
+   .notNull(),

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

workspaceId: uuid('workspace_id'),

After:

workspaceId: uuid('workspace_id')
  .references(() => workspaces.id)
  .notNull(),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/web/lib/db/schema/servers.ts` at line 33, The servers schema's
workspaceId column is currently nullable and missing the foreign-key, so update
the workspaceId column definition in apps/web/lib/db/schema/servers.ts (symbol:
workspaceId) to add .notNull() and a FK reference to the workspaces table (e.g.
.references(() => workspaces.id) or the appropriate Drizzle reference form), and
import the workspaces schema if not already imported so the reference resolves.
- Fix migration backfill key mismatch (CRITICAL): UPDATEs now use encode(digest(...)) matching the INSERT slug formula
- Fix cache httpMethod case-sensitivity (CRITICAL): lowercase before checking WRITE_METHODS set
- Validate namespace against separator in prefixToolName
- Deep-sort nested objects in cache key hash for deterministic keys
- Validate resource URI host matches addressed server slug
- Fail closed when signature validator configured but raw body missing
- Reject duplicate serverId/namespace in composite create schema
- Add 30s/60s timeouts to playground execution fetch calls
- Handle query error states in composites, webhooks, members pages
- Handle empty workspace gracefully in members page
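The "deep-sort nested objects in cache key hash" bullet above is a standard trick for deterministic cache keys; here is a minimal sketch of the idea (function names and hash choice are illustrative, not the PR's actual implementation):

```typescript
import { createHash } from 'node:crypto';

// Recursively sort object keys before hashing so argument order never
// changes the cache key. Function names are illustrative, not the PR's.
function deepSort(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(deepSort); // arrays keep their order
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>)
        .sort(([a], [b]) => (a < b ? -1 : a > b ? 1 : 0))
        .map(([k, v]) => [k, deepSort(v)]),
    );
  }
  return value; // primitives pass through to JSON.stringify
}

function cacheKeyHash(args: Record<string, unknown>): string {
  return createHash('sha256').update(JSON.stringify(deepSort(args))).digest('hex');
}
```

Without the sort, `{a: 1, b: 2}` and `{b: 2, a: 1}` would serialize differently and miss the cache for identical calls.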
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/runtime/src/transports/streamable-http.ts (1)
216-221: ⚠️ Potential issue | 🟠 Major

Don't advertise `resources.subscribe` until the method exists.

This `initialize` response now declares `subscribe: true`, but this router still falls through to `-32601` for `resources/subscribe`. That breaks capability discovery for compliant clients. `apps/runtime/src/mcp/protocol-handler.ts` has the same gap, so whichever fix you choose needs to land in both initialize paths.

💡 Proposed fix

response = jsonRpcSuccess(message.id, {
  protocolVersion: '2024-11-05',
  capabilities: {
    tools: { listChanged: true },
-   resources: { subscribe: true, listChanged: true },
+   resources: { listChanged: true },
  },
  serverInfo: {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/runtime/src/transports/streamable-http.ts` around lines 216 - 221, The initialize response is advertising capabilities.resources.subscribe: true even though the router doesn't implement resources/subscribe (calls fall through to -32601); update the initialize logic that builds the jsonRpcSuccess response (the block producing jsonRpcSuccess(message.id, { protocolVersion:..., capabilities: { resources: { subscribe: true }}})) to only set resources.subscribe to true when the corresponding RPC handler is actually registered/available (e.g., check the request router/handler registry or feature flag used by protocol handling), and apply the same conditional change in the other initialize path in mcp/protocol-handler.ts so both initialization flows only advertise subscribe when resources/subscribe is implemented.
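One way to keep advertised capabilities honest is to derive them from the set of registered methods. A minimal sketch, assuming a handler registry exposed as a set of method names (the runtime's real registry may look different):

```typescript
// Derive advertised capabilities from the methods actually registered, so
// `initialize` can never promise `resources/subscribe` that the router will
// later reject with -32601. The registry shape is an assumption.
function buildCapabilities(registeredMethods: ReadonlySet<string>) {
  return {
    tools: { listChanged: true },
    resources: {
      listChanged: true,
      ...(registeredMethods.has('resources/subscribe') ? { subscribe: true } : {}),
    },
  };
}
```

With this shape, adding the `resources/subscribe` handler later flips the advertisement automatically, so the two initialize paths cannot drift.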
♻️ Duplicate comments (1)
apps/web/app/dashboard/servers/[id]/webhooks/page.tsx (1)
65-79: ⚠️ Potential issue | 🟠 Major

Handle `useServer` failures separately from "not found."

`useServer` errors still fall through to `if (!server) return null` (Line 78), which can hide auth/backend failures as a blank page. Add an explicit server-error branch before the null/not-found branch.

Suggested fix

- const { data: server, isLoading: serverLoading } = useServer(id);
+ const {
+   data: server,
+   isLoading: serverLoading,
+   isError: serverError,
+ } = useServer(id);

  if (serverLoading) {
    return (
      <div className="space-y-4">
        <Skeleton className="h-6 w-48" />
        <Skeleton className="h-24 w-full rounded-lg" />
        <Skeleton className="h-64 w-full rounded-lg" />
      </div>
    );
  }

- if (!server) return null;
+ if (serverError) {
+   return (
+     <div className="rounded-lg border border-destructive/30 p-6 text-center">
+       <p className="text-sm text-destructive">Failed to load server details. Please try again.</p>
+     </div>
+   );
+ }
+
+ if (!server) return null;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/dashboard/servers/`[id]/webhooks/page.tsx around lines 65 - 79, The page currently treats any falsy server as "not found" which hides real errors; update the server-loading branch to handle useServer errors explicitly by checking the error flag from useServer (e.g., isError or error returned by useServer) and render an error state/message before the "if (!server) return null" not-found branch; locate the useServer usage (const { data: server, isLoading: serverLoading } = useServer(id)) and expand it to read and check its error indicator (e.g., isError or error) and return an appropriate error UI when set, prior to the null return.
🧹 Nitpick comments (1)
apps/web/app/dashboard/servers/[id]/webhooks/page.tsx (1)
27-31: Expose expand/collapse state to assistive tech.

The row is clickable, but screen readers don't get expanded state/context. Add `aria-expanded` and `aria-controls` on the button and an `id` on the expanded region.

Suggested fix

-import { use, useState } from "react";
+import { use, useId, useState } from "react";

 function EventRow({ event }: { readonly event: WebhookEvent }) {
   const [expanded, setExpanded] = useState(false);
+  const panelId = useId();

   return (
     <div className="border-b border-border/50 last:border-b-0">
       <button
         type="button"
         onClick={() => setExpanded((prev) => !prev)}
+        aria-expanded={expanded}
+        aria-controls={panelId}
         className="flex w-full items-center gap-3 px-4 py-3 text-left transition-colors duration-150 hover:bg-muted/30"
       >
@@
       {expanded && (
-        <div className="border-t border-border/30 bg-muted/20 px-4 py-3">
+        <div id={panelId} className="border-t border-border/30 bg-muted/20 px-4 py-3">
           <pre className="overflow-x-auto rounded-md bg-background p-3 text-xs font-mono leading-relaxed">
             {JSON.stringify(event.payload, null, 2)}
           </pre>
         </div>
       )}

Also applies to: 48-53
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/dashboard/servers/`[id]/webhooks/page.tsx around lines 27 - 31, The clickable row button toggled by the onClick using setExpanded and the expanded state does not expose its state to assistive tech; update the button element to include aria-expanded={expanded} and aria-controls referencing a unique id (e.g., `${id}-webhook-details`) and add that id to the expanded region element (the container rendered when expanded) so screen readers can associate the button with the controlled content; apply the same aria-expanded/aria-controls/id pattern to the second toggle button block (lines ~48-53) so both toggle controls are accessible.
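The button/panel pairing behind this disclosure pattern can be captured in a small helper so the two sides never drift apart. A sketch with illustrative names, not code from the PR:

```typescript
// Compute the paired attributes for a disclosure button and its panel so
// `aria-expanded`, `aria-controls`, and the panel `id` stay in sync.
// Illustrative helper, not code from the PR.
function disclosureProps(panelId: string, expanded: boolean) {
  return {
    button: { 'aria-expanded': expanded, 'aria-controls': panelId },
    panel: { id: panelId, hidden: !expanded },
  };
}
```

In a React component the two returned objects would be spread onto the button and panel elements respectively.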
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/runtime/src/mcp/protocol-handler.ts`:
- Around line 236-244: The resources listing currently only validates charset
for event names in the map that produces uri/name/description (the
resources/list path), causing it to advertise eventNames that will be rejected
later; update the filter used in that chain (the .filter(...) before .map(...)
in protocol-handler.ts) to also enforce the same max-length check used by
resources/read and apps/runtime/src/webhooks/receiver.ts (<= 200 chars) and keep
it consistent with the validation in
apps/runtime/src/transports/streamable-http.ts so the allowed charset and length
are identical across resources/list, resources/read, and the webhook receiver.
In `@apps/web/app/dashboard/settings/members/page.tsx`:
- Around line 27-32: The code computes workspaceId eagerly causing an
empty-string fallback before workspaces load; update the logic around
useWorkspaces/useWorkspace so you only derive workspaceId after workspaces are
loaded or when an explicitId exists: check the result of useWorkspaces (or its
isLoading flag) and if workspaces are undefined and explicitId is null, do not
set workspaceId to "" (use undefined or skip calling useWorkspace) and render a
loading state instead; change the ownedWorkspace/workspaceId computation to run
only when workspaces is defined (refer to useWorkspaces, searchParams,
explicitId, ownedWorkspace, workspaceId, and useWorkspace).
- Around line 150-156: The remove-button branch only hides owners but still
shows the trash icon for the current user's row; update the conditional so the
Button is not rendered when the member.userId equals the current user/membership
id. Replace the render guard from member.role !== "owner" to something like
member.role !== "owner" && member.userId !== <currentUserId> (use the app's
current user identifier variable — e.g., session.user.id or
currentMembership.userId), keeping the same
onClick={handleRemove(member.userId)} and disabled={removeMember.isPending}
behavior.
- Around line 151-159: The trash icon button is inaccessible because it lacks an
accessible name; update the Button in the members list (the component rendering
<Button ... onClick={() => handleRemove(member.userId)}
disabled={removeMember.isPending}> with <Trash2/>) to include an aria-label that
identifies the target member (e.g., `aria-label={`Remove ${member.email ||
member.name}`}` or similar using member.userId if no human-readable name), so
assistive tech can announce the destructive action for the correct member while
preserving the existing onClick and disabled behavior.
- Around line 100-116: The email input and role select lack accessible names;
update the invite fields in the members settings component to provide explicit
labels or aria labels (e.g., add unique ids like "invite-email" and
"invite-role" and associate <label htmlFor="invite-email">Email</label> and
<label htmlFor="invite-role">Role</label>, or add aria-label/aria-labelledby) so
screen readers can identify the controls that use email, setEmail, role, setRole
and the WorkspaceRole select; ensure labels are present visually or as
screen-reader-only (sr-only) text.
---
Outside diff comments:
In `@apps/runtime/src/transports/streamable-http.ts`:
- Around line 216-221: The initialize response is advertising
capabilities.resources.subscribe: true even though the router doesn't implement
resources/subscribe (calls fall through to -32601); update the initialize logic
that builds the jsonRpcSuccess response (the block producing
jsonRpcSuccess(message.id, { protocolVersion:..., capabilities: { resources: {
subscribe: true }}})) to only set resources.subscribe to true when the
corresponding RPC handler is actually registered/available (e.g., check the
request router/handler registry or feature flag used by protocol handling), and
apply the same conditional change in the other initialize path in
mcp/protocol-handler.ts so both initialization flows only advertise subscribe
when resources/subscribe is implemented.
---
Duplicate comments:
In `@apps/web/app/dashboard/servers/`[id]/webhooks/page.tsx:
- Around line 65-79: The page currently treats any falsy server as "not found"
which hides real errors; update the server-loading branch to handle useServer
errors explicitly by checking the error flag from useServer (e.g., isError or
error returned by useServer) and render an error state/message before the "if
(!server) return null" not-found branch; locate the useServer usage (const {
data: server, isLoading: serverLoading } = useServer(id)) and expand it to read
and check its error indicator (e.g., isError or error) and return an appropriate
error UI when set, prior to the null return.
---
Nitpick comments:
In `@apps/web/app/dashboard/servers/`[id]/webhooks/page.tsx:
- Around line 27-31: The clickable row button toggled by the onClick using
setExpanded and the expanded state does not expose its state to assistive tech;
update the button element to include aria-expanded={expanded} and aria-controls
referencing a unique id (e.g., `${id}-webhook-details`) and add that id to the
expanded region element (the container rendered when expanded) so screen readers
can associate the button with the controlled content; apply the same
aria-expanded/aria-controls/id pattern to the second toggle button block (lines
~48-53) so both toggle controls are accessible.
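Several prompts above ask for the same event-name rule (a charset check plus a 200-char cap) in resources/list, resources/read, and the webhook receiver. A shared validator sketch — the exact charset here is an assumption, not the runtime's actual pattern:

```typescript
// One event-name rule shared by resources/list, resources/read, and the
// webhook receiver. The 200-char cap comes from the review; the exact
// charset (alphanumerics plus . _ -) is an assumption.
const EVENT_NAME_MAX_LENGTH = 200;
const EVENT_NAME_PATTERN = /^[A-Za-z0-9._-]+$/;

function isValidEventName(name: string): boolean {
  return (
    name.length > 0 &&
    name.length <= EVENT_NAME_MAX_LENGTH &&
    EVENT_NAME_PATTERN.test(name)
  );
}
```

Centralizing the rule is what prevents the drift the review flags, where list advertises names that read later rejects.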
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3181d126-7479-45e7-b41d-fee07b4dc2ca
📒 Files selected for processing (11)
- apps/runtime/src/mcp/namespace.ts
- apps/runtime/src/mcp/protocol-handler.ts
- apps/runtime/src/middleware/response-cache.ts
- apps/runtime/src/transports/streamable-http.ts
- apps/runtime/src/webhooks/receiver.ts
- apps/web/app/api/composites/route.ts
- apps/web/app/api/tools/[id]/execute/route.ts
- apps/web/app/dashboard/composites/page.tsx
- apps/web/app/dashboard/servers/[id]/webhooks/page.tsx
- apps/web/app/dashboard/settings/members/page.tsx
- apps/web/lib/db/migrations/0014_migrate_users_to_workspaces.sql
✅ Files skipped from review due to trivial changes (4)
- apps/runtime/src/mcp/namespace.ts
- apps/web/lib/db/migrations/0014_migrate_users_to_workspaces.sql
- apps/web/app/dashboard/composites/page.tsx
- apps/runtime/src/middleware/response-cache.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- apps/web/app/api/composites/route.ts
- apps/runtime/src/webhooks/receiver.ts
- apps/web/app/api/tools/[id]/execute/route.ts
const { data: workspaces } = useWorkspaces();
// Prefer explicit workspace from URL, fall back to first owned workspace
const explicitId = searchParams.get("workspace");
const ownedWorkspace = workspaces?.find((w) => w.slug.startsWith("user-")) ?? workspaces?.[0];
const workspaceId = explicitId ?? ownedWorkspace?.id ?? "";
const { data: workspace, isLoading } = useWorkspace(workspaceId);
Wait for useWorkspaces() before falling into the empty state.
Without a `workspace` query param, `workspaceId` is `""` until the workspace list resolves, so this can briefly render "No workspace selected" on first load before the fallback workspace is picked.
Suggested fix
- const { data: workspaces } = useWorkspaces();
+ const { data: workspaces, isLoading: isLoadingWorkspaces } = useWorkspaces();
  // Prefer explicit workspace from URL, fall back to first owned workspace
- const explicitId = searchParams.get("workspace");
+ const explicitId = searchParams.get("workspace") || null;
  const ownedWorkspace = workspaces?.find((w) => w.slug.startsWith("user-")) ?? workspaces?.[0];
  const workspaceId = explicitId ?? ownedWorkspace?.id ?? "";
  const { data: workspace, isLoading } = useWorkspace(workspaceId);
+ const isResolvingWorkspace =
+   (!explicitId && isLoadingWorkspaces) || (Boolean(workspaceId) && isLoading);
@@
- if (isLoading && workspaceId) {
+ if (isResolvingWorkspace) {

Also applies to: 58-79
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/web/app/dashboard/settings/members/page.tsx` around lines 27 - 32, The
code computes workspaceId eagerly causing an empty-string fallback before
workspaces load; update the logic around useWorkspaces/useWorkspace so you only
derive workspaceId after workspaces are loaded or when an explicitId exists:
check the result of useWorkspaces (or its isLoading flag) and if workspaces are
undefined and explicitId is null, do not set workspaceId to "" (use undefined or
skip calling useWorkspace) and render a loading state instead; change the
ownedWorkspace/workspaceId computation to run only when workspaces is defined
(refer to useWorkspaces, searchParams, explicitId, ownedWorkspace, workspaceId,
and useWorkspace).
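The loading race described above boils down to a three-state resolution. A hedged sketch as a pure function (names are mine, not the page's):

```typescript
// Resolve the active workspace id without flashing an empty state: while
// the list is loading and no explicit id exists, report 'loading' instead
// of falling back to "". Illustrative names, not the page's actual code.
interface Workspace {
  readonly id: string;
  readonly slug: string;
}

type Resolution =
  | { readonly status: 'loading' }
  | { readonly status: 'ready'; readonly workspaceId: string }
  | { readonly status: 'empty' };

function resolveWorkspaceId(
  explicitId: string | null,
  workspaces: readonly Workspace[] | undefined,
  isLoadingWorkspaces: boolean,
): Resolution {
  if (explicitId) return { status: 'ready', workspaceId: explicitId };
  if (isLoadingWorkspaces || workspaces === undefined) return { status: 'loading' };
  const owned = workspaces.find((w) => w.slug.startsWith('user-')) ?? workspaces[0];
  return owned ? { status: 'ready', workspaceId: owned.id } : { status: 'empty' };
}
```

The component can then branch on `status` alone, which keeps the "No workspace selected" state from ever rendering during the initial fetch.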
- Align resources/list event name filter with 200-char limit from read path
- Add CHECK constraint for non-negative cache_ttl_seconds
- Wait for useWorkspaces() before showing empty state on members page
- Add aria-label to invite email/role fields and remove button
- Hide remove button on current user's own row
🧹 Nitpick comments (1)
apps/runtime/src/transports/streamable-http.ts (1)
334-404: Consider extracting shared resource handlers to eliminate duplication.

This `resources/list` and `resources/read` logic is nearly identical to `handleResourcesList` and `handleResourcesRead` in `protocol-handler.ts` (lines 222-251). The protocol-handler versions exist but are never invoked by this transport since streamable-http.ts has its own switch dispatcher.

Having two implementations that must stay synchronized creates maintenance burden and risk of drift. Consider extracting the shared logic into a reusable module that both transports can call.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/runtime/src/transports/streamable-http.ts` around lines 334 - 404, The two cases 'resources/list' and 'resources/read' in streamable-http.ts duplicate logic already implemented as handleResourcesList and handleResourcesRead in protocol-handler.ts; extract the shared logic into a new reusable module (e.g., resources-handler) that exports functions like listResources(deps, server) and readResource(deps, server, uri) (or adapt names to handleResourcesList/handleResourcesRead) and then replace the switch-case bodies in streamable-http.ts to call those shared functions, passing deps, server, message.id (and logger/jsonRpc helpers if needed), and translate their results into the same jsonRpcSuccess/jsonRpcError responses; ensure you remove duplicated code and update imports so both streamable-http.ts and protocol-handler.ts consume the single implementation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@apps/runtime/src/transports/streamable-http.ts`:
- Around line 334-404: The two cases 'resources/list' and 'resources/read' in
streamable-http.ts duplicate logic already implemented as handleResourcesList
and handleResourcesRead in protocol-handler.ts; extract the shared logic into a
new reusable module (e.g., resources-handler) that exports functions like
listResources(deps, server) and readResource(deps, server, uri) (or adapt names
to handleResourcesList/handleResourcesRead) and then replace the switch-case
bodies in streamable-http.ts to call those shared functions, passing deps,
server, message.id (and logger/jsonRpc helpers if needed), and translate their
results into the same jsonRpcSuccess/jsonRpcError responses; ensure you remove
duplicated code and update imports so both streamable-http.ts and
protocol-handler.ts consume the single implementation.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2818aa55-22b7-4d30-adf2-b8d8b5f5c96b
📒 Files selected for processing (4)
- apps/runtime/src/mcp/protocol-handler.ts
- apps/runtime/src/transports/streamable-http.ts
- apps/web/app/dashboard/settings/members/page.tsx
- apps/web/lib/db/migrations/0015_tool_cache_ttl.sql
✅ Files skipped from review due to trivial changes (3)
- apps/web/lib/db/migrations/0015_tool_cache_ttl.sql
- apps/runtime/src/mcp/protocol-handler.ts
- apps/web/app/dashboard/settings/members/page.tsx
- Remove unused ToolDefinition import in composite.ts (lint error)
- Override path-to-regexp@<0.1.13 to >=0.1.13 (GHSA-37ch-88jc-xwx2)
Actionable comments posted: 2
🧹 Nitpick comments (2)
apps/runtime/src/transports/composite.ts (2)
188-189: Hardcoded `sessionId: 'composite'` may conflict with session-tracking logic.

If `executeTool` or downstream code uses `sessionId` for caching, rate-limiting, or state isolation, all composite calls sharing the same session ID could produce unintended collisions.

Consider deriving a unique session ID (e.g., from the composite ID or request context) if session isolation matters.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/runtime/src/transports/composite.ts` around lines 188 - 189, The hardcoded sessionId in the composite transport creates collisions; change the context creation near toolInput and context so sessionId is derived uniquely (for example use the existing randomUUID() requestId or derive from the composite identifier/params) instead of the literal 'composite' so that executeTool and downstream logic get a unique sessionId per composite call; update the context assignment in the composite.ts block that builds context (where toolInput is defined) to populate sessionId with a unique value based on requestId or composite id/params.
82-92: Request body type assertion lacks runtime validation.

The type assertion on lines 82-87 trusts `req.body` to match the expected shape. If middleware or network conditions produce malformed JSON (e.g., `null`, array, or missing fields), accessing `message.jsonrpc` or `message.method` may behave unexpectedly.

Consider adding a lightweight runtime guard:

🛡️ Proposed defensive check

- const message = req.body as {
+ const body = req.body;
+ if (typeof body !== 'object' || body === null || Array.isArray(body)) {
+   res.status(400).json({ error: 'Request body must be a JSON object' });
+   return;
+ }
+
+ const message = body as {
    readonly jsonrpc: string;
    readonly id: string | number;
    readonly method: string;
    readonly params?: Readonly<Record<string, unknown>>;
  };

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/runtime/src/transports/composite.ts` around lines 82 - 92, The current code directly asserts req.body into message and then reads message.jsonrpc/message.method, which can throw or behave incorrectly for null/array/malformed bodies; add a lightweight runtime guard before the assertion: ensure req.body is a non-null object and not an array (e.g., typeof req.body === 'object' && req.body !== null && !Array.isArray(req.body')), then verify the presence and types of the JSON-RPC fields (jsonrpc === '2.0', typeof method === 'string' && method.length > 0, id !== undefined) and only then assign to message and proceed; update the validation branch around message, req.body, jsonrpc, method, id (in composite.ts) to return 400 for any malformed body.
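The defensive check above can also be expressed as a reusable type guard. A minimal sketch along the reviewer's lines, not the PR's implementation:

```typescript
// Reusable type guard for a JSON-RPC 2.0 request body, along the lines of
// the reviewer's proposed check. A minimal sketch, not the PR's code.
interface JsonRpcRequest {
  readonly jsonrpc: '2.0';
  readonly id: string | number;
  readonly method: string;
  readonly params?: Readonly<Record<string, unknown>>;
}

function isJsonRpcRequest(body: unknown): body is JsonRpcRequest {
  if (typeof body !== 'object' || body === null || Array.isArray(body)) return false;
  const b = body as Record<string, unknown>;
  return (
    b.jsonrpc === '2.0' &&
    (typeof b.id === 'string' || typeof b.id === 'number') &&
    typeof b.method === 'string' &&
    b.method.length > 0
  );
}
```

A type guard keeps the 400 branch in the router to a single `if (!isJsonRpcRequest(req.body))` check while the compiler narrows the body afterward.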
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/runtime/src/transports/composite.ts`:
- Around line 175-186: The current usage gating calls getPlanLimitsForUser and
checkAndIncrementUsage with server.userId (the member server owner), which
charges the wrong account; change these calls to use the composite's owner id
(composite.userId) so billing/quotas are applied to the composite creator rather
than the referenced server owner—update the variables passed into
getPlanLimitsForUser(...) and checkAndIncrementUsage(...), keeping deps.redis
and logger intact, and ensure any log messages or error text still reference the
correct user id for auditing.
- Around line 63-80: Add a JSDoc block to createCompositeRouter that documents
its required middleware preconditions: callers must mount authentication
middleware that sets req.serverTokenVerified = true (e.g., the
createServerTokenAuth middleware) so the router’s internal authenticated check
works, and must also apply rate-limiting middleware on the
/mcp/composite/:identifier path to protect this endpoint; reference
createCompositeRouter and the req.serverTokenVerified flag in the comment so
integrators know to register those middlewares when mounting the router.
---
Nitpick comments:
In `@apps/runtime/src/transports/composite.ts`:
- Around line 188-189: The hardcoded sessionId in the composite transport
creates collisions; change the context creation near toolInput and context so
sessionId is derived uniquely (for example use the existing randomUUID()
requestId or derive from the composite identifier/params) instead of the literal
'composite' so that executeTool and downstream logic get a unique sessionId per
composite call; update the context assignment in the composite.ts block that
builds context (where toolInput is defined) to populate sessionId with a unique
value based on requestId or composite id/params.
- Around line 82-92: The current code directly asserts req.body into message and
then reads message.jsonrpc/message.method, which can throw or behave incorrectly
for null/array/malformed bodies; add a lightweight runtime guard before the
assertion: ensure req.body is a non-null object and not an array (e.g., typeof
req.body === 'object' && req.body !== null && !Array.isArray(req.body)), then
verify the presence and types of the JSON-RPC fields (jsonrpc === '2.0', typeof
method === 'string' && method.length > 0, id !== undefined) and only then assign
to message and proceed; update the validation branch around message, req.body,
jsonrpc, method, id (in composite.ts) to return 400 for any malformed body.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 7999cfbf-17cf-4a4c-af3f-480264980ef9
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (2)
- `apps/runtime/src/transports/composite.ts`
- `package.json`
```typescript
export function createCompositeRouter(deps: CompositeRouterDeps): Router {
  const { logger, registry, compositeRegistry, toolLoader, toolExecutorDeps } = deps;
  const router = express.Router();

  router.post('/mcp/composite/:identifier', async (req: Request, res: Response) => {
    const identifier = req.params['identifier']!;
    const composite = resolveComposite(compositeRegistry, identifier);

    if (!composite || !composite.isActive) {
      res.status(404).json({ error: 'Composite server not found' });
      return;
    }

    const authenticated = (req as unknown as Record<string, unknown>)['serverTokenVerified'] === true;
    if (!authenticated) {
      res.status(401).json({ error: 'Authorization required' });
      return;
    }
```
Missing auth middleware dependency documentation and rate limiting coverage.
The authentication guard at lines 76-80 depends on serverTokenVerified being set by createServerTokenAuth middleware. Per server.ts (lines 168-172), this middleware is mounted only at /mcp/:slug and does not cover /mcp/composite/:identifier. The same applies to rate limiting (lines 140-148 in server.ts).
While the check fails closed (returns 401 if flag is unset), the router currently provides no indication of this precondition. When the router is mounted, forgetting to register auth/rate-limiting middleware would leave the endpoint non-functional or unprotected.
Add a JSDoc comment on `createCompositeRouter` documenting that callers must mount:
- Auth middleware that sets `req.serverTokenVerified = true`
- Rate limiting middleware on the `/mcp/composite/:identifier` path
📝 Proposed documentation addition

```diff
+/**
+ * Creates a composite MCP JSON-RPC router.
+ *
+ * IMPORTANT: Before mounting this router, the caller MUST register:
+ * 1. Authentication middleware (e.g., `createServerTokenAuth`) on
+ *    `/mcp/composite/:identifier` that sets `req.serverTokenVerified = true`.
+ * 2. Rate-limiting middleware on the same path prefix.
+ *
+ * The existing `/mcp/:slug` middleware does NOT cover composite routes.
+ */
 export function createCompositeRouter(deps: CompositeRouterDeps): Router {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/runtime/src/transports/composite.ts` around lines 63 - 80, Add a JSDoc
block to createCompositeRouter that documents its required middleware
preconditions: callers must mount authentication middleware that sets
req.serverTokenVerified = true (e.g., the createServerTokenAuth middleware) so
the router’s internal authenticated check works, and must also apply
rate-limiting middleware on the /mcp/composite/:identifier path to protect this
endpoint; reference createCompositeRouter and the req.serverTokenVerified flag
in the comment so integrators know to register those middlewares when mounting
the router.
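To illustrate why the mounting-order precondition matters, here is a dependency-free sketch (plain functions standing in for Express middleware and handlers; `serverTokenAuth`, `compositeHandler`, and the token check are hypothetical stand-ins, not the PR's implementations):

```typescript
// Minimal model of the precondition: auth must run before the composite
// handler and set serverTokenVerified, otherwise the handler fails closed.
type Req = { serverTokenVerified?: boolean; headers: Record<string, string> };
type Handler = (req: Req) => { status: number; body: unknown };
type Middleware = (req: Req, next: Handler) => { status: number; body: unknown };

// Stand-in for createServerTokenAuth: verifies a bearer token (checkToken is
// a placeholder) and sets the flag the composite router later reads.
const serverTokenAuth =
  (checkToken: (t: string) => boolean): Middleware =>
  (req, next) => {
    const token = (req.headers['authorization'] ?? '').replace(/^Bearer /, '');
    if (!checkToken(token)) return { status: 401, body: { error: 'Authorization required' } };
    req.serverTokenVerified = true;
    return next(req);
  };

// Stand-in for the composite router's fail-closed guard.
const compositeHandler: Handler = (req) =>
  req.serverTokenVerified === true
    ? { status: 200, body: { ok: true } }
    : { status: 401, body: { error: 'Authorization required' } };

// Correct wiring: the auth middleware wraps the handler. Mounting the
// handler without the wrapper leaves every request stuck at 401.
const mounted = (req: Req) => serverTokenAuth((t) => t === 'secret')(req, compositeHandler);
```

The fail-closed behavior is visible in the model: calling `compositeHandler` directly (auth forgotten) always returns 401, which is why the JSDoc precondition is worth documenting.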
```typescript
      // Usage gate
      const planLimits = await getPlanLimitsForUser(deps.redis, server.userId);
      const usageCheck = await checkAndIncrementUsage(
        { redis: deps.redis, logger },
        server.userId,
        planLimits,
      );

      if (!usageCheck.allowed) {
        response = jsonRpcError(message.id, -32003, 'Usage limit reached. Upgrade your plan.');
        break;
      }
```
Usage billing charges the member server owner, not the composite owner.
Line 176 fetches plan limits for `server.userId` (the member server's owner), and lines 177-181 increment that user's usage counter. However, the composite has its own `userId` (the composite creator).
Current behavior: If user A creates a composite referencing user B's servers, calling tools through A's composite decrements B's quota—potentially exhausting B's limits or causing billing surprises.
Consider whether the composite owner (composite.userId) should be billed instead, or document this as intentional policy.
🔧 Potential fix: bill composite owner

```diff
       // Usage gate
-      const planLimits = await getPlanLimitsForUser(deps.redis, server.userId);
+      const planLimits = await getPlanLimitsForUser(deps.redis, composite.userId);
       const usageCheck = await checkAndIncrementUsage(
         { redis: deps.redis, logger },
-        server.userId,
+        composite.userId,
         planLimits,
       );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/runtime/src/transports/composite.ts` around lines 175 - 186, The current
usage gating calls getPlanLimitsForUser and checkAndIncrementUsage with
server.userId (the member server owner), which charges the wrong account; change
these calls to use the composite's owner id (composite.userId) so billing/quotas
are applied to the composite creator rather than the referenced server
owner—update the variables passed into getPlanLimitsForUser(...) and
checkAndIncrementUsage(...), keeping deps.redis and logger intact, and ensure
any log messages or error text still reference the correct user id for auditing.
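The composite router dispatches namespaced tool names (e.g. `stripe__listCharges`, per the PR summary) to the right member server, which implies a split step before lookup. A minimal sketch, assuming a double-underscore separator; the helper name `splitNamespacedTool` and its return shape are assumptions, not the PR's API:

```typescript
// Split a namespaced tool name ("<namespace>__<tool>") into its parts.
// Returns null for names without a separator or with an empty half, so the
// caller can reject them before attempting server lookup.
function splitNamespacedTool(name: string): { namespace: string; tool: string } | null {
  const sep = name.indexOf('__');
  if (sep <= 0 || sep + 2 >= name.length) return null; // need non-empty halves
  return { namespace: name.slice(0, sep), tool: name.slice(sep + 2) };
}
```

Using the first separator occurrence means tool names may themselves contain `__`; whether that is allowed is a policy choice the real implementation would pin down.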
Coverage Report for packages/transformer
(file coverage table not recoverable)
Summary
Implements five sprints of platform features across the runtime, transformer, types, and web app:
- `resources/list` and `resources/read` protocol handlers
- Namespaced tool dispatch (e.g., `stripe__listCharges`), composite CRUD API and repository

Security
Three rounds of security review and remediation addressed all findings:
New files (58 total, 4725 lines added)
- `webhooks/{receiver,signature,notifier}.ts`, `transports/composite.ts`, `mcp/namespace.ts`, `middleware/response-cache.ts`
- `webhooks.ts`, types updates (`MCPResourceDefinition`, `MCPNotificationDefinition`)
- `composite.ts`, `workspace.ts`
- `workspaces/` CRUD + members, `composites/` CRUD, `webhook-events/`, `tools/[id]/execute/`

Test plan
- `pnpm typecheck` — all 14 packages pass
- `pnpm --filter @apifold/transformer test` — 200/200 pass (8 new webhook tests)
- `pnpm --filter @apifold/runtime test` — 181/181 pass (31 new: 8 composite + 9 namespace + 12 cache + 6 streamable-http); 4 pre-existing failures unchanged
- `pnpm --filter @apifold/web test` — 296/296 pass (12 new RBAC); 22 pre-existing failures unchanged

Summary by CodeRabbit
New Features
Improvements