
feat: sprints 15-19 — streaming HTTP, webhooks, composites, workspaces, playground#181

Merged
Work90210 merged 6 commits into master from feat/sprint-15-streamable-http-webhooks
Mar 30, 2026

Conversation

Work90210 (Owner) commented Mar 28, 2026

Summary

Implements five sprints of platform features across the runtime, transformer, types, and web app:

  • Sprint 15 — Streamable HTTP + Webhooks Start: SSE streaming for long-running tool calls, webhook receiver with signature validation (Stripe/GitHub/Slack), webhook-to-MCP resource/notification pipeline, resources/list and resources/read protocol handlers
  • Sprint 16 — Webhooks Complete + Multi-Spec Composition: Webhook URL display and event log viewer in dashboard, composite server architecture with namespace-prefixed tool dispatch (stripe__listCharges), composite CRUD API and repository
  • Sprint 17 — Composites Complete + Hardening: Composite builder UI with member management and merged tool preview, dashboard navigation updates (Webhooks sidebar, Composites global nav)
  • Sprint 18 — Team Workspaces + Response Caching: Workspace and membership tables with phased migration, RBAC middleware (owner > admin > member > viewer), workspace repository, Redis response cache with per-tool TTLs and write-through invalidation
  • Sprint 19 — Workspace UI + Interactive Playground: Workspace switcher, member invite/remove page, workspace CRUD API with RBAC enforcement, schema-to-form renderer (recursive JSON Schema → HTML form controls), tool execution proxy API
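The namespace-prefixed dispatch from Sprint 16 can be sketched with small helpers. The `__` separator matches the `stripe__listCharges` example above and the helper names follow the PR's namespace utilities; everything else is an illustrative sketch, not the runtime's exact implementation:

```typescript
// Sketch of namespace-prefixed tool naming for composite servers.
// Separator and helper names follow the PR description; details are illustrative.
const NAMESPACE_SEPARATOR = '__';

/** Prefix a member's tool with its namespace: ('stripe', 'listCharges') -> 'stripe__listCharges'. */
export function prefixToolName(namespace: string, toolName: string): string {
  return `${namespace}${NAMESPACE_SEPARATOR}${toolName}`;
}

/** Split a namespaced tool name, or return null if it carries no namespace prefix. */
export function stripNamespace(
  name: string,
): { namespace: string; toolName: string } | null {
  const idx = name.indexOf(NAMESPACE_SEPARATOR);
  if (idx <= 0) return null;
  return {
    namespace: name.slice(0, idx),
    toolName: name.slice(idx + NAMESPACE_SEPARATOR.length),
  };
}

export function isNamespaced(name: string): boolean {
  return stripNamespace(name) !== null;
}
```

Splitting on the first separator keeps member tool names that themselves contain `__` intact.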

Security

Three rounds of security review and remediation addressed all findings:

  • CRITICAL: Raw body bytes for webhook HMAC validation (not re-serialized JSON), cross-tenant ownership verification on composite members, workspace plan self-upgrade blocked, tool execute route aligned with workspace RBAC
  • HIGH: Constant-time HMAC comparison with process-startup random key, webhook rate limiting (120 req/min/slug), Redis SCAN replaces KEYS, server slug and namespace validation, pending invite isolation (acceptedAt required for RBAC), user-scoped cache keys, self-removal prevention, targetUserId validation
  • MEDIUM: DB payload size CHECK constraint, cache batch-delete during SCAN, workspace-aware members page routing, collision-free migration slugs via SHA-256 hash, inline delete confirmation (no window.confirm)
  • LOW: Event name re-validation on Redis read path, replay protection documentation, XSS safety comment on schema descriptions, NOT NULL enforcement migration stub
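A minimal sketch of the signature-related remediations (raw body bytes plus constant-time comparison), assuming a generic hex-digest HMAC-SHA256 scheme; the provider-specific header formats (Stripe's `t=`/`v1=`, GitHub's `sha256=` prefix, Slack's versioned signature) are deliberately omitted:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Sketch of the hardened check: the HMAC is computed over the raw request
// bytes (never re-serialized JSON, which can reorder keys and change bytes),
// and digests are compared in constant time. The hex-digest format and
// function name are assumptions for illustration.
export function verifyWebhookSignature(
  rawBody: Buffer,
  signatureHex: string,
  secret: string,
): boolean {
  const expected = createHmac('sha256', secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, 'hex');
  // timingSafeEqual throws on length mismatch, so reject short/long digests first.
  if (received.length !== expected.length) return false;
  return timingSafeEqual(received, expected);
}
```

The length pre-check leaks only digest length, which is public for SHA-256; the byte comparison itself stays constant-time.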

New files (58 total, 4725 lines added)

| Area | Files |
| --- | --- |
| Runtime | `webhooks/{receiver,signature,notifier}.ts`, `transports/composite.ts`, `mcp/namespace.ts`, `middleware/response-cache.ts` |
| Transformer | `webhooks.ts`, types updates (`MCPResourceDefinition`, `MCPNotificationDefinition`) |
| Types | `composite.ts`, `workspace.ts` |
| Web API | `workspaces/` CRUD + members, `composites/` CRUD, `webhook-events/`, `tools/[id]/execute/` |
| Web UI | Webhooks page, composites list + builder, members page, workspace switcher, schema-form renderer |
| DB | Migrations 0011-0016 (webhook_events, composite_servers, workspaces, data migration, cache TTL, NOT NULL enforcement) |
| Tests | 47 new tests across 5 test files |

Test plan

  • pnpm typecheck — all 14 packages pass
  • pnpm --filter @apifold/transformer test — 200/200 pass (8 new webhook tests)
  • pnpm --filter @apifold/runtime test — 181/181 pass (31 new: 8 composite + 9 namespace + 12 cache + 6 streamable-http); 4 pre-existing failures unchanged
  • pnpm --filter @apifold/web test — 296/296 pass (12 new RBAC); 22 pre-existing failures unchanged
  • Verified web test baseline identical on master (same 22 failures)
  • Three security review rounds with full remediation
  • Run migrations 0011-0016 against staging database
  • Manual test: webhook receiver with Stripe test event
  • Manual test: composite endpoint with two member servers
  • Manual test: workspace invite flow end-to-end
  • Manual test: playground schema-to-form with complex nested schema

Summary by CodeRabbit

  • New Features

    • Webhooks: receive, verify (Stripe/GitHub/Slack/generic), persist, notify SSE clients, view recent events, and copyable endpoints.
    • Composite servers: create/manage composites with namespaced merged tools, runtime composite endpoint, and dashboard CRUD + tool previews.
    • Workspaces: create/manage workspaces and members with RBAC, APIs, pages, and migrations.
    • Developer UX: schema-driven tool parameter form and workspace switcher component.
  • Improvements

    • SSE streaming for tool calls (EventStream).
    • Redis-backed response caching with deterministic keys and server-scoped invalidation.
    • Namespacing utilities and expanded tooling for resources listing/reading.
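The "deterministic keys" mentioned above can be sketched as canonical-JSON hashing, so logically equal argument objects produce the same cache key regardless of property order. The key layout and helper names are assumptions for illustration, not the runtime's exact format:

```typescript
import { createHash } from 'node:crypto';

// Serialize with sorted object keys so {a:1,b:2} and {b:2,a:1} hash identically.
function canonicalJson(value: unknown): string {
  if (Array.isArray(value)) {
    return `[${value.map(canonicalJson).join(',')}]`;
  }
  if (value !== null && typeof value === 'object') {
    const entries = Object.entries(value as Record<string, unknown>)
      .sort(([a], [b]) => a.localeCompare(b))
      .map(([k, v]) => `${JSON.stringify(k)}:${canonicalJson(v)}`);
    return `{${entries.join(',')}}`;
  }
  return JSON.stringify(value);
}

export function buildCacheKey(
  userId: string,
  serverSlug: string,
  toolName: string,
  args: unknown,
): string {
  const digest = createHash('sha256').update(canonicalJson(args)).digest('hex');
  // User-scoped keys (a HIGH-severity fix in this PR) prevent cross-user cache
  // reads; the server-scoped prefix enables invalidation by prefix scan.
  return `cache:${serverSlug}:${userId}:${toolName}:${digest}`;
}
```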

…, workspaces, playground

Sprint 15: Streamable HTTP streaming responses + webhook-to-MCP resource pipeline
Sprint 16: Per-provider webhook signature validation (Stripe/GitHub/Slack), webhook UI, composite server architecture with namespace-based tool dispatch
Sprint 17: Composite builder UI, merged tool preview, dashboard navigation updates
Sprint 18: Team workspaces with RBAC (owner/admin/member/viewer), response caching with per-tool TTLs and write-through invalidation
Sprint 19: Workspace switcher, member management, interactive playground with schema-to-form renderer

Security hardened: raw body signature validation, cross-tenant ownership checks, constant-time HMAC comparison, Redis SCAN over KEYS, rate limiting on webhook endpoints, user-scoped cache keys, pending invite isolation
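The "Redis SCAN over KEYS" hardening can be sketched as cursor-based invalidation: SCAN walks the keyspace in pages instead of blocking the server the way KEYS does on large datasets. The structural `RedisLike` interface and function name below are illustrative assumptions; the page deletes mirror the PR's batch-delete-during-SCAN fix:

```typescript
// Structural subset of the ioredis client used here, so the sketch is
// self-contained; real code would take an ioredis `Redis` instance.
export interface RedisLike {
  scan(cursor: string, match: 'MATCH', pattern: string, count: 'COUNT', n: number): Promise<[string, string[]]>;
  del(...keys: string[]): Promise<number>;
}

// Delete all cached responses for one server without blocking Redis:
// iterate SCAN pages and batch-delete each page's matches.
export async function invalidateServerCache(redis: RedisLike, serverSlug: string): Promise<number> {
  let cursor = '0';
  let removed = 0;
  do {
    const [next, keys] = await redis.scan(cursor, 'MATCH', `cache:${serverSlug}:*`, 'COUNT', 100);
    if (keys.length > 0) {
      removed += await redis.del(...keys);
    }
    cursor = next;
  } while (cursor !== '0');
  return removed;
}
```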

coderabbitai bot commented Mar 28, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds composite MCP routing with namespaced dispatch and per-user plan gating, SSE streamable HTTP tools and resource endpoints, webhook ingestion with signature validation and SSE notification, Redis-backed response caching, workspace/composite DB models and APIs, frontend hooks/pages/components, transformer webhook support, and accompanying tests.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Runtime — Composite Router & Tests**<br>`apps/runtime/src/transports/composite.ts`, `apps/runtime/__tests__/integration/composite.test.ts` | New Express composite router POST /mcp/composite/:identifier implementing JSON‑RPC initialize/tools/list/tools/call with namespace parsing, member resolution, per-user plan gating (Redis), execution proxying, JSON‑RPC error mapping; integration tests exercise routing, namespacing, and dispatch. |
| **Runtime — Streamable HTTP & Protocol**<br>`apps/runtime/src/transports/streamable-http.ts`, `apps/runtime/src/mcp/protocol-handler.ts`, `apps/runtime/__tests__/integration/streamable-http.test.ts` | SSE streaming support for tools/call (Accept: text/event-stream); added MCP methods resources/list and resources/read backed by Redis scan/read; protocol announces resources capability; integration tests cover streaming, resources, and webhook persistence. |
| **Runtime — Webhooks (Receiver, Notifier, Signature)**<br>`apps/runtime/src/webhooks/receiver.ts`, `apps/runtime/src/webhooks/notifier.ts`, `apps/runtime/src/webhooks/signature.ts`, `apps/runtime/src/server.ts` | New webhook receiver route POST /webhooks/:slug/:eventName with raw-body capture, size limits, optional per-server signature validation (Stripe/GitHub/Slack/generic), Redis persistence, optional DB insert, SSE notifications via WebhookNotifier, and server wiring plus a webhook-specific rate limiter. |
| **Runtime — SessionManager & Namespace utils**<br>`apps/runtime/src/mcp/session-manager.ts`, `apps/runtime/src/mcp/namespace.ts`, `apps/runtime/__tests__/unit/namespace.test.ts` | SessionManager gains forEachSession(slug, cb); added namespacing utilities (NAMESPACE_SEPARATOR, prefixToolName, stripNamespace, isNamespaced) and unit tests for namespacing behavior. |
| **Runtime — Response Cache Middleware & Tests**<br>`apps/runtime/src/middleware/response-cache.ts`, `apps/runtime/__tests__/unit/response-cache.test.ts` | Redis-backed response cache helpers with deterministic key building (canonical JSON + SHA-256), get/set/invalidate with TTL and write-method bypass, safe error handling and logging; unit tests with mock Redis. |
| **Runtime — Tool Loader Typing**<br>`apps/runtime/src/registry/tool-loader.ts` | Extended ToolDefinition with optional cacheTtlSeconds and httpMethod fields. |
| **Web — DB Migrations & Schema**<br>`apps/web/lib/db/migrations/0011_webhook_events.sql`, `.../0012_composite_servers.sql`, `.../0013_workspaces.sql`, `.../0014_migrate_users_to_workspaces.sql`, `.../0015_tool_cache_ttl.sql`, `.../0016_enforce_workspace_not_null.sql` | New migrations adding webhook_events, composite_servers/composite_members, workspaces/workspace_members, backfill personal workspaces, tool cache_ttl column and constraint, and enforce workspace_id non-null. |
| **Web — DB Schema Modules & Repositories**<br>`apps/web/lib/db/schema/composite-servers.ts`, `apps/web/lib/db/schema/workspaces.ts`, `apps/web/lib/db/repositories/composite.repository.ts`, `apps/web/lib/db/repositories/workspace.repository.ts`, `apps/web/lib/db/schema/*` | Drizzle schema modules and repositories for composites and workspaces with CRUD, transactional member updates, ownership validation, token hashing, and repository APIs. |
| **Web — API Routes**<br>`apps/web/app/api/composites/route.ts`, `apps/web/app/api/composites/[id]/route.ts`, `apps/web/app/api/workspaces/route.ts`, `apps/web/app/api/workspaces/[id]/route.ts`, `apps/web/app/api/workspaces/[id]/members/route.ts`, `apps/web/app/api/workspaces/[id]/members/[userId]/route.ts`, `apps/web/app/api/tools/[id]/execute/route.ts`, `apps/web/app/api/servers/[id]/webhook-events/route.ts` | New REST handlers for composites and workspace CRUD/member management, tool execute proxy that calls runtime initialize + tools/call, and webhook event listing with Zod validation, RBAC checks, and rate-limiting. |
| **Web — RBAC, Hooks & Frontend UI**<br>`apps/web/lib/middleware/rbac.ts`, `apps/web/lib/hooks/*`, `apps/web/app/dashboard/composites/*`, `apps/web/app/dashboard/servers/[id]/webhooks/page.tsx`, `apps/web/app/dashboard/settings/members/page.tsx`, `apps/web/components/*`, `apps/web/lib/constants/navigation.ts` | RBAC helpers and role checks; React Query hooks for composites/workspaces/webhook-events; dashboard pages for composites and webhooks, members UI, WorkspaceSwitcher and SchemaForm components, and navigation addition for Composites/Webhooks. |
| **Transformer & Types — Webhook Support**<br>`packages/transformer/src/webhooks.ts`, `packages/transformer/src/types.ts`, `packages/transformer/src/index.ts`, `packages/transformer/__tests__/webhooks.test.ts`, `packages/types/src/composite.ts`, `packages/types/src/workspace.ts`, `packages/types/src/index.ts` | Transformer adds transformWebhooks to convert OpenAPI webhooks into MCP resources/notifications with warnings; new types for composites/workspaces and tests. |
| **Tests & Miscellaneous**<br>`apps/runtime/__tests__/*`, `packages/transformer/__tests__/webhooks.test.ts`, `apps/web/__tests__/rbac/permissions.test.ts` | Added unit and integration tests for composite dispatch, streamable HTTP, response cache, namespace utils, webhook transformer, and RBAC permissions; notes areas requiring attention: Redis scan semantics, SSE framing, signature edge cases, and DB migration order. |
| **Config & Tooling**<br>`.gitleaks.toml`, `package.json` | Added gitleaks allowlist for tests path and a pnpm override for path-to-regexp version. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
participant Client as Client
participant Composite as CompositeRouter
participant Registry as ServerRegistry/ToolLoader
participant Redis as Redis (plan limits)
participant Upstream as Upstream Server
Client->>Composite: POST /mcp/composite/:id { JSON-RPC tools/call namespaced }
Composite->>Registry: resolve composite members & load tools
Composite->>Redis: getPlanLimitsForUser(userId)
Redis-->>Composite: plan limits
Composite->>Redis: checkAndIncrementUsage(userId, toolKey)
Redis-->>Composite: allowed / exceeded
Composite->>Upstream: proxy tool execution (executeTool)
Upstream-->>Composite: tool result
Composite-->>Client: JSON‑RPC success/error response
```
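The routing step in the diagram above (namespaced tools/call to the right member) can be sketched as follows; the member shape, helper name, and error payloads are illustrative assumptions, though the `-32602` code is the standard JSON-RPC "invalid params" error:

```typescript
// Sketch of composite dispatch: split a namespaced tool name on '__',
// find the member that owns that namespace, and hand back the bare tool
// name for proxying. Types and names are illustrative.
export interface CompositeMember {
  namespace: string;   // e.g. 'stripe'
  serverSlug: string;  // member server to proxy into
}

export interface ToolCall {
  name: string;        // e.g. 'stripe__listCharges'
  arguments: unknown;
}

export function resolveDispatch(
  members: CompositeMember[],
  call: ToolCall,
): { member: CompositeMember; toolName: string } | { errorCode: number; message: string } {
  const sep = call.name.indexOf('__');
  if (sep <= 0) {
    return { errorCode: -32602, message: `Tool name is not namespaced: ${call.name}` };
  }
  const namespace = call.name.slice(0, sep);
  const toolName = call.name.slice(sep + 2);
  const member = members.find((m) => m.namespace === namespace);
  if (!member) {
    return { errorCode: -32602, message: `Unknown namespace: ${namespace}` };
  }
  return { member, toolName };
}
```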

```mermaid
sequenceDiagram
participant Sender as Webhook Sender
participant Receiver as WebhookRouter
participant Redis as Redis (webhook:latest...)
participant DB as Database
participant Notifier as WebhookNotifier
participant Sessions as SessionManager
Sender->>Receiver: POST /webhooks/:slug/:eventName (body + headers)
Receiver->>Redis: SET webhook:latest:{serverId}:{eventName}
Receiver->>DB: INSERT webhook_events (optional)
Receiver->>Notifier: notify(serverSlug, eventName, payload)
Notifier->>Sessions: forEachSession(serverSlug) → sendEvent to SSE clients
Receiver-->>Sender: 200 { ok: true } / 4xx / 5xx
```
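The Notifier→Sessions step can be sketched like this; the `forEachSession(slug, cb)` callback shape follows the walkthrough, while the session type and function name are assumptions. The `event:`/`data:` framing with a blank-line terminator is standard SSE:

```typescript
// Sketch of webhook fan-out: write one SSE frame to every live session
// attached to a server slug, returning the delivery count.
export interface SseSession {
  write(chunk: string): void;
}

export type ForEachSession = (slug: string, cb: (session: SseSession) => void) => void;

export function notifyWebhook(
  forEachSession: ForEachSession,
  serverSlug: string,
  eventName: string,
  payload: unknown,
): number {
  let delivered = 0;
  // SSE frame: named event, JSON payload, blank line terminates the event.
  const frame = `event: ${eventName}\ndata: ${JSON.stringify(payload)}\n\n`;
  forEachSession(serverSlug, (session) => {
    session.write(frame);
    delivered += 1;
  });
  return delivered;
}
```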

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is insufficient. The required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title 'feat: sprints 15-19 — streaming HTTP, webhooks, composites, workspaces, playground' clearly summarizes the main changes by highlighting five major feature categories implemented across multiple sprints. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 19

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/runtime/src/transports/streamable-http.ts (1)

216-221: ⚠️ Potential issue | 🟠 Major

Don't advertise resources.subscribe before it's implemented.

Line 220 tells clients that resource subscriptions are supported, but the switch only handles resources/list and resources/read; a resources/subscribe request still falls through to “Method not found”. That breaks the capability contract established during initialize.

Proposed fix

```diff
             capabilities: {
               tools: { listChanged: true },
-              resources: { subscribe: true, listChanged: true },
+              resources: { listChanged: true },
             },
```

Also applies to: 334-399, 405-406

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/transports/streamable-http.ts` around lines 216 - 221, The
initialize response currently advertises resources.subscribe support but the
request dispatcher (in the streamable HTTP transport) only implements
resources/list and resources/read, causing subscribe requests to be unhandled;
update the initialization capability object used in jsonRpcSuccess (the block
producing protocolVersion/capabilities) to remove or set resources.subscribe to
false and ensure the same change is applied consistently to the other
initialization response locations referenced (around the other ranges noted) so
that advertised capabilities match implemented handlers (search for
jsonRpcSuccess and the initialization response objects to locate all
occurrences).
🟡 Minor comments (12)
apps/web/components/playground/schema-form.tsx-161-171 (1)

161-171: ⚠️ Potential issue | 🟡 Minor

Pass required parameter to array fields for required indicator display.

The SchemaField function receives required, but the array case (lines 163-171) doesn't pass it to ArrayField, and ArrayField doesn't forward it to FieldWrapper (line 223). This prevents array fields from showing the required indicator, unlike all other field types.

🔧 Suggested fix
```diff
 function ArrayField({
   name,
   schema,
   value,
   onChange,
   description,
+  required,
   disabled,
   depth = 0,
 }: {
   readonly name: string;
   readonly schema: JSONSchema;
   readonly value: unknown;
   readonly onChange: (value: unknown) => void;
   readonly description?: string;
+  readonly required?: boolean;
   readonly disabled?: boolean;
   readonly depth?: number;
 }) {
-  return (
-    <FieldWrapper name={label} description={description}>
+  return (
+    <FieldWrapper name={label} description={description} required={required}>
```

```diff
     case 'array': {
       return (
         <ArrayField
           name={name}
           schema={schema}
           value={value}
           onChange={onChange}
           description={description}
+          required={required}
           disabled={disabled}
           depth={depth}
         />
       );
     }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/components/playground/schema-form.tsx` around lines 161 - 171, The
array branch of SchemaField is not passing the required prop to ArrayField, so
ArrayField (and ultimately FieldWrapper) never receives it; update the array
case in SchemaField to include required={required} when rendering <ArrayField
...>, and then modify the ArrayField component to accept a required prop and
forward it into its FieldWrapper invocation (e.g., pass required={required} to
FieldWrapper) so the required indicator is displayed for array fields.
apps/web/app/api/workspaces/[id]/members/[userId]/route.ts-34-40 (1)

34-40: ⚠️ Potential issue | 🟡 Minor

Handle case where target member doesn't exist.

If targetUserId is not a member of this workspace, getUserRole may return null or undefined. The current code only checks for targetRole === 'owner' but proceeds to call removeMember even if the user doesn't exist. This could result in a misleading success response or a database error depending on the repository implementation.

🛡️ Proposed fix to handle non-existent member
```diff
     const targetRole = await repo.getUserRole(workspaceId, targetUserId);
 
+    if (!targetRole) {
+      throw new ApiError('NOT_FOUND', 'Member not found', 404);
+    }
+
     if (targetRole === 'owner') {
       throw new ApiError('FORBIDDEN', 'Cannot remove the workspace owner', 403);
     }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/api/workspaces/`[id]/members/[userId]/route.ts around lines 34 -
40, Check whether repo.getUserRole(workspaceId, targetUserId) returns
null/undefined before proceeding: if targetRole is falsy, throw an
ApiError('NOT_FOUND', 'Target user is not a member of this workspace', 404) and
do not call repo.removeMember; otherwise keep the existing owner check
(targetRole === 'owner') and then call repo.removeMember(workspaceId,
targetUserId) only for valid non-owner members.
apps/web/app/api/workspaces/route.ts-37-39 (1)

37-39: ⚠️ Potential issue | 🟡 Minor

Add handling for duplicate workspace slug errors.

The repo.create method will throw a database unique constraint violation when a duplicate slug is provided, but withErrorHandler only handles ApiError and ZodError instances. Database constraint violations fall through to a generic 500 error instead of returning the appropriate 409 Conflict response. Add a try-catch in the repository's create method to detect duplicate slug errors and throw an ApiError with status 409, or extend withErrorHandler to map database constraint violations accordingly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/api/workspaces/route.ts` around lines 37 - 39, The repository
create path currently allows DB unique-constraint errors from
WorkspaceRepository.create to bubble up as 500s; update
WorkspaceRepository.create to catch the DB-specific duplicate-slug error (e.g.,
identify by the DB error code/message thrown by your ORM/driver) and rethrow an
ApiError with status 409 and a clear message like "workspace slug already
exists", or alternatively extend withErrorHandler to detect the same DB
constraint error and map it to new ApiError(409, ...) before returning; target
the WorkspaceRepository.create method and the withErrorHandler error-mapping
logic and ensure ApiError is used so the route returns 409 Conflict for
duplicate slugs.
apps/web/app/api/workspaces/[id]/members/route.ts-30-37 (1)

30-37: ⚠️ Potential issue | 🟡 Minor

The duplicate invitation concern is already prevented by the database schema.

The workspaceMembers table enforces a composite primary key on (workspaceId, userId), which prevents duplicate entries at the database level. If the same email is invited twice, the database will reject the duplicate insertion.

However, the code lacks graceful error handling for this case. When attempting to invite the same email twice, the API will fail with a raw database constraint violation error instead of providing a user-friendly message. Consider adding a check before insertion or wrapping the insert in error handling to return a meaningful response (e.g., "User already invited" or "User already a member").

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/api/workspaces/`[id]/members/route.ts around lines 30 - 37, The
add-member flow uses WorkspaceRepository.addMember (called from the route
handler after getDb()) but doesn't handle duplicate-invite DB constraint errors,
causing raw DB errors to surface; wrap the addMember call in a try/catch (or do
a pre-check via a repository method like hasMember/getMember) and detect the
unique-constraint/duplicate-insert error from your DB client, then return a
friendly API response such as "User already invited" or "User already a member"
instead of propagating the raw DB error. Ensure you reference
WorkspaceRepository.addMember in the route handler and normalize the error
handling around getDb() -> new WorkspaceRepository(db) -> repo.addMember(...) so
duplicates are caught and translated to a 4xx response.
apps/web/app/dashboard/settings/members/page.tsx-136-144 (1)

136-144: ⚠️ Potential issue | 🟡 Minor

Add an accessible name to the remove-member button.

This is an icon-only control right now, so assistive technology does not get a reliable label for the destructive action.

Proposed fix
```diff
               {member.role !== "owner" && (
                 <Button
                   variant="ghost"
                   size="sm"
+                  aria-label={`Remove ${member.userId} from workspace`}
                   className="h-7 w-7 p-0 text-muted-foreground hover:text-destructive shrink-0"
                   onClick={() => handleRemove(member.userId)}
                   disabled={removeMember.isPending}
                 >
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/settings/members/page.tsx` around lines 136 - 144, The
icon-only remove button lacks an accessible name; update the Button (the
instance wrapping <Trash2 />) to provide an explicit accessible label (e.g.,
aria-label) so screen readers announce the action — use a contextual label like
aria-label={`Remove ${member.name || 'member'}`} (fall back to a generic "Remove
member" if member.name is unavailable) and keep the existing props
(onClick={handleRemove(member.userId)} and disabled={removeMember.isPending}) so
the control remains operable and descriptive for assistive tech.
apps/web/app/api/tools/[id]/execute/route.ts-100-114 (1)

100-114: ⚠️ Potential issue | 🟡 Minor

Missing error check on callRes before parsing JSON.

If the runtime returns a non-2xx status, callRes.json() may fail or return unexpected content. The response should be validated before parsing.

🐛 Proposed fix
```diff
     // Execute tool
     const callRes = await fetch(runtimeUrl, {
       method: 'POST',
       headers,
       body: JSON.stringify({
         jsonrpc: '2.0',
         id: 2,
         method: 'tools/call',
         params: { name: tool.toolName, arguments: input.arguments },
       }),
     });
 
+    if (!callRes.ok) {
+      return NextResponse.json(
+        createSuccessResponse({ error: 'Tool execution failed', status: callRes.status }),
+        { status: 502 },
+      );
+    }
+
     const result = await callRes.json();
 
     return NextResponse.json(createSuccessResponse(result));
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/api/tools/`[id]/execute/route.ts around lines 100 - 114, The
code calls fetch(runtimeUrl) into callRes and immediately does await
callRes.json() without checking the response status; update the block around
callRes to verify callRes.ok (or status) before parsing, and if not ok read the
response body (json or text) and return an error via
NextResponse.json(createSuccessResponse(...)) or an appropriate error response;
reference the variables callRes, runtimeUrl, tool.toolName, and the return path
using NextResponse.json and createSuccessResponse to locate and change the logic
so non-2xx responses are handled gracefully and include the runtime error
details.
apps/web/app/api/workspaces/[id]/route.ts-23-25 (1)

23-25: ⚠️ Potential issue | 🟡 Minor

Inconsistent rate limit handling between GET and PUT.

In GET, withRateLimit is called but its return value is not checked, while PUT correctly returns early when rate limited. If withRateLimit returns a response when the limit is exceeded, GET will ignore it and continue processing.

🐛 Proposed fix
```diff
     const rbac = await resolveWorkspaceContext(id, 'viewer');
-    await withRateLimit(rbac.userId);
+    const rateLimited = await withRateLimit(rbac.userId);
+    if (rateLimited) return rateLimited;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/api/workspaces/`[id]/route.ts around lines 23 - 25, The GET
handler calls resolveWorkspaceContext(id, 'viewer') and invokes
withRateLimit(rbac.userId) but ignores its return value; update the GET flow to
mirror PUT by capturing the result of withRateLimit(rbac.userId) and returning
early when it yields a Response (i.e., `const rl = await withRateLimit(...); if (rl) return rl;`), so rate-limited requests are stopped before further processing.
apps/runtime/__tests__/unit/response-cache.test.ts-28-32 (1)

28-32: ⚠️ Potential issue | 🟡 Minor

Update mock scan signature to match actual ioredis usage.

The mock declares only 3 parameters (_cursor, _match, pattern), but actual calls pass 5 arguments: scan(cursor, 'MATCH', pattern, 'COUNT', 100). Update the signature to accept the remaining parameters:

```typescript
scan: vi.fn().mockImplementation((_cursor: string, _match: string, pattern: string, _count: string, _limit: number) => {
  const prefix = pattern.replace(/\*$/, '');
  const matched = [...store.keys()].filter((k) => k.startsWith(prefix));
  return Promise.resolve(['0', matched]);
}),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/__tests__/unit/response-cache.test.ts` around lines 28 - 32,
Update the mock scan implementation in the test to match ioredis's actual call
signature by adding the extra parameters so the function accepts (cursor,
_match, pattern, _count, _limit); modify the mock declared as scan:
vi.fn().mockImplementation(...) to include the additional _count and _limit
parameters, keep the existing logic that strips the trailing '*' from pattern
and filters store.keys(), and return Promise.resolve(['0', matched]) as before
so tests receive the same result shape.
apps/runtime/src/transports/composite.ts-10-10 (1)

10-10: ⚠️ Potential issue | 🟡 Minor

Remove unused import ToolDefinition.

The ToolDefinition type is imported but never used, causing the ESLint @typescript-eslint/no-unused-vars error in CI.

🔧 Proposed fix
```diff
-import type { ToolLoader, ToolDefinition } from '../registry/tool-loader.js';
+import type { ToolLoader } from '../registry/tool-loader.js';
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/transports/composite.ts` at line 10, Remove the unused type
import ToolDefinition from the import list in composite.ts: update the import
that currently references "ToolLoader, ToolDefinition" (from
'../registry/tool-loader.js') to only import the used symbol(s) (e.g.,
"ToolLoader") so the unused-variable ESLint error
(`@typescript-eslint/no-unused-vars`) is resolved.
apps/runtime/src/transports/composite.ts-1-6 (1)

1-6: ⚠️ Potential issue | 🟡 Minor

Fix import ordering to resolve ESLint warning.

The ESLint import/order rule flags an empty line within the import group between lines 4-6.

🔧 Proposed fix
```diff
 import { randomUUID } from 'node:crypto';
-
 import type { Request, Response, Router } from 'express';
 import express from 'express';
 
 import type { Redis } from 'ioredis';
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/transports/composite.ts` around lines 1 - 6, The ESLint
import/order warning is caused by an empty line inside the same import group in
apps/runtime/src/transports/composite.ts; remove the blank line between the
express import ("import express from 'express'") and the Redis type import
("import type { Redis } from 'ioredis'") so that randomUUID, the express-related
imports (Request, Response, Router, express) and the Redis type import are
contiguous in the same group (or otherwise reorder them into proper groups) to
satisfy the import/order rule.
apps/web/app/api/composites/[id]/route.ts-18-24 (1)

18-24: ⚠️ Potential issue | 🟡 Minor

Remove the .min(1) constraint to allow clearing all composite members.

The members array has .min(1).optional() which prevents sending an empty array. However, the repository implementation (lines 182-195 of composite.repository.ts) explicitly supports clearing all members by deleting existing members when a new (possibly empty) array is provided. The UI also allows users to remove members one by one, potentially reaching an empty state. Sending an empty array from the UI would fail validation. Remove .min(1) to permit clearing members when updating a composite.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/api/composites/`[id]/route.ts around lines 18 - 24, The members
schema currently enforces `.min(1)` which blocks sending an empty array; remove
the `.min(1)` call from the members validation (i.e., change the declaration
that currently reads members: z.array(...).min(1).max(20).optional() to drop
.min(1)) so an empty array is permitted while preserving `.max(20).optional()`,
leaving repository logic in composite.repository.ts to perform the clear/delete
behavior; run relevant tests or exercise the update endpoint to confirm empty
arrays are accepted.
apps/runtime/src/middleware/response-cache.ts (1)

5-7: ⚠️ Potential issue | 🟡 Minor

Fix import ordering per ESLint rules.

The pipeline is failing due to import order. Type import of ../mcp/tool-executor.js should occur before type import of ../observability/logger.js.

🔧 Proposed fix
-import type { Logger } from '../observability/logger.js';
-import { metrics } from '../observability/metrics.js';
 import type { MCPToolResult } from '../mcp/tool-executor.js';
+import type { Logger } from '../observability/logger.js';
+import { metrics } from '../observability/metrics.js';
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/middleware/response-cache.ts` around lines 5 - 7, reorder
the type imports so the type import for MCPToolResult appears before the type
import for Logger to satisfy ESLint import-order rules; specifically, swap the
two import lines that bring in the Logger type and the MCPToolResult type
(references: Logger, MCPToolResult) so the MCPToolResult type import precedes
the Logger type import.
🧹 Nitpick comments (23)
apps/web/components/playground/schema-form.tsx (1)

237-243: Set explicit button type for add/remove controls.

If this component is rendered inside a <form>, the default type="submit" means clicks may submit the form unexpectedly. Setting type="button" makes the intent explicit.

🔧 Suggested fix
             <Button
+              type="button"
               variant="ghost"
               size="sm"
@@
         <Button
+          type="button"
           variant="outline"
           size="sm"
           onClick={handleAdd}

Also applies to: 248-252

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/components/playground/schema-form.tsx` around lines 237 - 243, the
add/remove Button elements in schema-form.tsx lack an explicit type, which can
cause them to submit a surrounding form; update the Button components used for
removal and addition (the one calling handleRemove(i) and the sibling add button
that calls handleAdd or similar) to include type="button" so their intent is
explicit and they won't trigger form submission; locate the Button instances
around the handlers handleRemove and handleAdd (the controls at the shown diff
and the ones at lines 248-252) and add the type attribute to each.
apps/web/lib/db/migrations/0011_webhook_events.sql (1)

15-16: Ensure the eviction job is scheduled before production deployment.

The comment correctly identifies the need for a scheduled job to purge old webhook events. Without this, the table will grow unbounded. Consider creating a tracking issue or adding the job to deployment runbooks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/migrations/0011_webhook_events.sql` around lines 15 - 16, add
a scheduled eviction job to purge old rows from the webhook_events table before
production deployment by creating a cron/worker task that runs the SQL shown
(DELETE FROM webhook_events WHERE received_at < NOW() - INTERVAL '30 days') on a
regular cadence, and document this in deployment runbooks and/or create a
tracking issue so the job is not forgotten; reference the webhook_events table
and the eviction SQL snippet when adding the task and documentation.
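As a sketch of what the scheduled task could run, a small helper that builds the eviction statement keeps the retention window in one place. The table name and 30-day interval come from the migration comment; the helper name and the scheduling mechanism (pg_cron, a worker loop, etc.) are assumptions, not existing code.

```typescript
// Builds the eviction statement for the scheduled purge job. How the query
// gets executed (pg_cron, a cron container, a worker loop) is a deployment
// decision; RETENTION_DAYS mirrors the migration comment's 30-day window.
export const RETENTION_DAYS = 30;

export function buildEvictionQuery(retentionDays: number = RETENTION_DAYS): string {
  if (!Number.isInteger(retentionDays) || retentionDays <= 0) {
    throw new Error("retentionDays must be a positive integer");
  }
  return `DELETE FROM webhook_events WHERE received_at < NOW() - INTERVAL '${retentionDays} days'`;
}
```

Keeping the statement in one exported helper also makes the retention window easy to assert on in tests and to reference from the runbook entry.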
apps/runtime/src/server.ts (1)

24-25: Fix import ordering per linter.

Pipeline reports import order violations. Consider reordering ./webhooks/receiver.js to come after ./webhooks/notifier.js.

🔧 Proposed fix
-import { createWebhookRouter } from './webhooks/receiver.js';
 import { WebhookNotifier } from './webhooks/notifier.js';
+import { createWebhookRouter } from './webhooks/receiver.js';
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/server.ts` around lines 24 - 25, the import order violates
the linter: swap the two import statements so WebhookNotifier is imported before
createWebhookRouter; specifically reorder the imports that reference
WebhookNotifier and createWebhookRouter (so the import that brings in
WebhookNotifier appears above the import that brings in createWebhookRouter) to
satisfy the project's import-order rule.
apps/web/lib/db/schema/specs.ts (1)

14-14: Add .references() and index to the Drizzle schema for workspaceId.

The migrations define a foreign key constraint and index at the database level (migration 0013 adds the FK with REFERENCES workspaces(id) ON DELETE CASCADE, and migration 0014 creates idx_specs_workspace), but the Drizzle schema doesn't declare them. Add .references(() => workspaces.id, { onDelete: 'cascade' }) to workspaceId and include an index in the table definition for type safety, ORM consistency, and proper documentation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/schema/specs.ts` at line 14, the Drizzle schema for the specs
table is missing the foreign key and index declared in migrations; update the
workspaceId column declaration (currently uuid('workspace_id')) to include
.references(() => workspaces.id, { onDelete: 'cascade' }) and add the
corresponding index definition (named idx_specs_workspace) to the table
definition so the specs table schema includes the FK and index for workspaceId
(use the table's index helper referencing workspaceId to match the migration).
apps/web/lib/hooks/use-workspaces.ts (1)

4-4: Use the shared invite input type here.

This reintroduces a free-form role: string even though @apifold/types already exports the constrained invite contract. Typos like "Owner" will compile and only fail once the request hits the API.

Proposed fix
-import type { Workspace, WorkspaceWithMembers, CreateWorkspaceInput, UpdateWorkspaceInput } from "@apifold/types";
+import type {
+  Workspace,
+  WorkspaceWithMembers,
+  CreateWorkspaceInput,
+  UpdateWorkspaceInput,
+  InviteMemberInput,
+} from "@apifold/types";
@@
 export function useInviteMember() {
   const queryClient = useQueryClient();
   return useMutation({
-    mutationFn: ({ workspaceId, email, role }: { readonly workspaceId: string; readonly email: string; readonly role: string }) =>
-      api.post(`/workspaces/${workspaceId}/members`, { email, role }),
+    mutationFn: ({ workspaceId, ...input }: { readonly workspaceId: string } & InviteMemberInput) =>
+      api.post(`/workspaces/${workspaceId}/members`, input),
     onSuccess: (_data, variables) => {
       queryClient.invalidateQueries({ queryKey: ["workspaces", variables.workspaceId] });
     },
   });
 }

Also applies to: 45-49

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/hooks/use-workspaces.ts` at line 4, the hook currently
reintroduces an unconstrained role: string for invites; import and use the
shared invite input type exported from `@apifold/types` (instead of the free-form
role) and update all places in use-workspaces.ts that construct or type invite
payloads (including the occurrences around the earlier import line and the spots
around lines 45-49) to use that shared invite type; specifically add the
exported invite type to the existing import list (alongside Workspace,
WorkspaceWithMembers, CreateWorkspaceInput, UpdateWorkspaceInput) and replace
any local/inline invite type or parameter (e.g., role: string) with the shared
type so the hook's function signatures and request bodies match the constrained
invite contract.
apps/web/components/dashboard/workspace-switcher.tsx (2)

38-66: Improve dropdown accessibility with keyboard support and ARIA attributes.

The dropdown implementation lacks:

  • aria-expanded on the trigger button
  • aria-haspopup="listbox" or role="menu" semantics
  • Keyboard navigation (Escape to close, arrow keys for selection)
  • Focus trapping within the dropdown

Consider using a headless UI library (e.g., Radix UI, Headless UI) or adding keyboard event handlers.

♻️ Minimal accessibility improvements
       <button
         type="button"
         onClick={() => setOpen((prev) => !prev)}
+        aria-expanded={open}
+        aria-haspopup="listbox"
         className="flex items-center gap-2 rounded-md border border-border bg-background px-3 py-1.5 text-sm transition-colors duration-150 hover:bg-muted/50"
       >

Additionally, consider adding an onKeyDown handler to close on Escape:

onKeyDown={(e) => { if (e.key === 'Escape') setOpen(false); }}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/components/dashboard/workspace-switcher.tsx` around lines 38 - 66,
the dropdown lacks ARIA and keyboard support; update the workspace-switcher
component to add aria-expanded on the trigger button and aria-haspopup="listbox"
(or "menu"), mark the dropdown container with role="menu" or role="listbox", and
mark each workspace button with role="menuitem" (or option) and tabIndex to be
focusable; implement an onKeyDown on the dropdown (or container) to handle
Escape (call setOpen(false)), ArrowDown/ArrowUp to move a focusIndex state and
programmatically focus the corresponding workspace button (use refs or a ref
array), and Enter/Space to invoke onSwitch(workspace.id); optionally wrap
focus-management in a small focus trap to keep focus inside while open or
replace with Radix/Headless UI if preferred. Ensure you reference/modify
setOpen, onSwitch, current, workspaces and the mapped workspace button rendering
to add these attributes and handlers.

17-19: Handle loading and error states from useWorkspaces.

The hook destructures only data: workspaces, but useWorkspaces likely provides isLoading and error states. Consider showing a loading indicator or handling errors gracefully rather than rendering an empty dropdown.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/components/dashboard/workspace-switcher.tsx` around lines 17 - 19,
the component uses useWorkspaces but only reads data (workspaces) and can render
an empty dropdown during loading or on error; update the Workspace Switcher to
also destructure isLoading and error from useWorkspaces, render a loading state
(spinner or disabled placeholder) when isLoading is true, and render an
error/fallback UI (or disabled dropdown with a message) when error is present;
adjust how current is computed (const { data: workspaces, isLoading, error } =
useWorkspaces(); const current = !isLoading && !error ? (workspaces?.find(w =>
w.id === currentWorkspaceId) ?? workspaces?.[0]) : undefined) and ensure the UI
disables selection when isLoading/error to avoid empty dropdowns.
apps/runtime/src/mcp/protocol-handler.ts (2)

268-277: URI host validation may be too permissive.

The regex ^webhook:\/\/[^/]+\/(.+)$ accepts any non-slash characters as the host, but doesn't verify it matches server.slug. A client could potentially craft a URI like webhook://other-server/event and the handler would still attempt to read from the current server's Redis keys. While this doesn't expose cross-tenant data (the Redis key uses server.id), it could cause confusion.

♻️ Validate URI host matches server slug
-    const match = uri.match(/^webhook:\/\/[^/]+\/(.+)$/);
+    const match = uri.match(/^webhook:\/\/([^/]+)\/(.+)$/);
     if (!match) {
       return jsonRpcError(req.id, -32602, 'Invalid resource URI');
     }
-    const eventName = match[1]!;
+    const [, host, eventName] = match;
+
+    if (host !== server.slug) {
+      return jsonRpcError(req.id, -32602, 'Resource URI does not match server');
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/mcp/protocol-handler.ts` around lines 268 - 277, the URI
host extraction currently uses /^webhook:\/\/[^/]+\/(.+)$/ and only captures the
event name; change the parsing to capture the host and event (e.g.,
/^webhook:\/\/([^/]+)\/(.+)$/ via the existing uri.match call), then compare the
captured host against server.slug and return jsonRpcError(req.id, -32602,
'Invalid resource URI host') if they differ; keep the existing eventName
validation (/^[a-zA-Z0-9_.:-]+$/ and max length) and continue using server.id
for Redis keys as before.
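A stricter parser along the lines of the proposed diff can be sketched as a standalone helper. The names here (parseWebhookUri, serverSlug, the result shape) are illustrative, not the existing handler code:

```typescript
// Capture both the host and the event name, then require the host to match
// the server slug before the caller touches any Redis keys.
const WEBHOOK_URI = /^webhook:\/\/([^/]+)\/(.+)$/;

export function parseWebhookUri(
  uri: string,
  serverSlug: string,
): { ok: true; eventName: string } | { ok: false; error: string } {
  const match = uri.match(WEBHOOK_URI);
  if (!match) {
    return { ok: false, error: "Invalid resource URI" };
  }
  const host = match[1]!;
  const eventName = match[2]!;
  if (host !== serverSlug) {
    return { ok: false, error: "Resource URI does not match server" };
  }
  return { ok: true, eventName };
}
```

The handler would still apply its existing event-name validation and keyed-by-server.id Redis lookup on the returned eventName.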

231-246: Consider adding a limit to prevent unbounded resource listings.

scanKeys iterates through all matching Redis keys without a limit. For servers with many webhook events, this could return a very large array, impacting memory and response time.

♻️ Add a configurable limit
+const MAX_RESOURCES = 1000;
+
   try {
     const pattern = `webhook:latest:${server.id}:*`;
     const keys = await scanKeys(this.redis, pattern);
     const prefix = `webhook:latest:${server.id}:`;

     const resources = keys
+      .slice(0, MAX_RESOURCES)
       .map((key) => key.slice(prefix.length))
       .filter((name) => name.length > 0 && /^[a-zA-Z0-9_.:-]+$/.test(name))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/mcp/protocol-handler.ts` around lines 231 - 246, the keys
listing can be unbounded because scanKeys returns all matches; update the logic
around scanKeys and resources to enforce a configurable limit (e.g., via a new
config or parameter) so you stop collecting keys once the limit is reached and
only map those keys into resources; specifically, modify the call/site that uses
scanKeys (symbol: scanKeys) in protocol-handler.ts to accept/implement a
maxResults arg or iterate results incrementally, break when count >= limit, and
then pass the truncated list to the existing mapping that builds resources
before returning jsonRpcSuccess(req.id, { resources }); ensure the limit is
configurable and validated.
apps/web/app/api/workspaces/route.ts (1)

9-12: Tighten slug validation to prevent edge cases.

The current regex ^[a-z0-9-]+$ allows slugs like ---, -test, or test-. Consider anchoring the pattern to require alphanumeric characters at the start/end.

♻️ Suggested stricter slug pattern
 const createSchema = z.object({
   name: z.string().min(1).max(100),
-  slug: z.string().min(1).max(50).regex(/^[a-z0-9-]+$/),
+  slug: z.string().min(1).max(50).regex(/^[a-z0-9]+(?:-[a-z0-9]+)*$/),
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/api/workspaces/route.ts` around lines 9 - 12, the slug regex in
createSchema currently allows leading/trailing or all-hyphen slugs; update the
slug validator in createSchema to require an alphanumeric start and end and only
allow hyphens between alphanumeric groups (e.g. use a pattern like
^[a-z0-9]+(?:-[a-z0-9]+)*$) so slugs like "-test", "test-", or "---" are
rejected; replace the existing regex on the slug field and keep the existing
z.string().min/max constraints and an appropriate error message.
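The behavioral difference between the two patterns can be checked directly. This is a sketch; the schema itself would keep the surrounding z.string().min(1).max(50) constraints around the regex:

```typescript
// Loose pattern from the current schema vs. the stricter suggestion: the
// strict form requires alphanumeric runs at both ends, with single hyphens
// only between them.
export const LOOSE_SLUG = /^[a-z0-9-]+$/;
export const STRICT_SLUG = /^[a-z0-9]+(?:-[a-z0-9]+)*$/;

export function isValidSlug(slug: string): boolean {
  return STRICT_SLUG.test(slug);
}
```

Note that "---", "-test", and "test-" all pass the loose pattern but fail the strict one.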
apps/web/lib/db/schema/composite-servers.ts (1)

25-25: Add a foreign key constraint for workspaceId.

The workspaceId column is nullable but lacks a foreign key reference to the workspaces table. This is inconsistent with how other tables reference workspaces (e.g., workspaceMembers) and could lead to orphaned records if a workspace is deleted.

Consider updating line 25 to:

workspaceId: uuid('workspace_id').references(() => workspaces.id, { onDelete: 'cascade' }),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/schema/composite-servers.ts` at line 25, the workspaceId
column (uuid('workspace_id')) is nullable but missing a foreign-key constraint;
update its definition in composite-servers.ts to reference the workspaces table
(workspaces.id) and add onDelete: 'cascade' so workspaceId is enforced and rows
are removed when a workspace is deleted—i.e. change the uuid('workspace_id')
definition to reference workspaces.id with onDelete: 'cascade' for consistency
with workspaceMembers.
apps/web/lib/db/migrations/0013_workspaces.sql (1)

25-28: Consider adding indexes on workspace_id columns for query performance.

The new workspace_id foreign key columns will likely be used in WHERE clauses for workspace-scoped queries. Adding indexes can significantly improve query performance, especially as data grows.

📊 Suggested indexes
CREATE INDEX idx_specs_workspace ON specs (workspace_id);
CREATE INDEX idx_mcp_servers_workspace ON mcp_servers (workspace_id);
CREATE INDEX idx_credentials_workspace ON credentials (workspace_id);
CREATE INDEX idx_composite_servers_workspace ON composite_servers (workspace_id);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/migrations/0013_workspaces.sql` around lines 25 - 28, add
indexes on the newly added workspace_id foreign-key columns to improve query
performance for workspace-scoped queries: create indexes for
specs(workspace_id), mcp_servers(workspace_id), credentials(workspace_id), and
composite_servers(workspace_id) (e.g., CREATE INDEX idx_specs_workspace ON specs
(workspace_id) etc.), ensuring each index name is unique and matches the
table/column names added in the migration.
apps/runtime/__tests__/integration/streamable-http.test.ts (2)

255-260: SSE event assertions may be brittle if event names change.

The test asserts specific event names (event: progress, event: message) which couples the test to implementation details. Consider extracting these as constants shared with the implementation, or at minimum add a comment documenting the expected SSE protocol.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/__tests__/integration/streamable-http.test.ts` around lines 255
- 260, the test currently asserts literal SSE event strings ("event: progress",
"event: message") which couples the test to implementation details; update the
test to reference shared constants from the implementation (e.g.,
SSE_EVENT_PROGRESS, SSE_EVENT_MESSAGE) or import the protocol constants used by
the stream publisher and use those in the assertions (or, if importing isn't
feasible, add a clear inline comment above the assertions documenting the
expected SSE event names and protocol contract). Locate the assertions using the
callRes/text() result (variable body) and replace the hard-coded strings with
the shared symbol names (or add the comment) so the test remains stable if event
names change.
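One way to share the contract, as a sketch: a small constants module that both the stream publisher and the test import, so a rename happens in one place. The constant names and the formatSseEvent helper are hypothetical; only the event strings come from the assertions above.

```typescript
// Shared SSE event names for the streamable-http transport and its tests.
export const SSE_EVENT_PROGRESS = "progress";
export const SSE_EVENT_MESSAGE = "message";

export function formatSseEvent(event: string, data: unknown): string {
  // Serialize one SSE frame: an event-name line, a data line, and the
  // blank line that terminates the frame.
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
}
```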

49-181: Consider extracting test setup to a shared helper.

The beforeEach block has significant complexity (~130 lines). Extracting the runtime setup into a reusable helper (similar to createTestLogger) would improve readability and enable reuse across other integration test files.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/__tests__/integration/streamable-http.test.ts` around lines 49 -
181, the beforeEach test setup is too large and should be extracted into a
reusable helper to improve readability and reuse; create a helper (e.g.,
createRuntimeTestSetup) that encapsulates the logic currently in
beforeEach—initializing upstreamApp/upstreamServer, ServerRegistry.upsert,
ToolLoader mock, CredentialCache, CircuitBreaker, ConnectionMonitor,
SessionManager, mockRedis (createMockRedis), ProtocolHandler, and createApp
invocation—and return the resulting
server/baseUrl/registry/protocolHandler/toolLoader/credentialCache objects; then
simplify the test file's beforeEach to call this helper and assign the returned
values (keeping existing symbols like beforeEach, createApp, ProtocolHandler,
ToolLoader, CredentialCache, ServerRegistry, createMockRedis unchanged so
consumers can find and reuse them).
apps/web/lib/hooks/use-composites.ts (2)

55-62: Delete mutation should also invalidate the specific composite query.

If a component is still mounted and holding a reference to ["composites", id], it won't know the composite was deleted. Consider invalidating both query keys for consistency with useUpdateComposite.

♻️ Proposed fix
 export function useDeleteComposite() {
   const queryClient = useQueryClient();
   return useMutation({
     mutationFn: (id: string) => api.delete<void>(`/composites/${id}`),
-    onSuccess: () => {
+    onSuccess: (_data, id) => {
+      queryClient.invalidateQueries({ queryKey: ["composites", id] });
       queryClient.invalidateQueries({ queryKey: ["composites"] });
     },
   });
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/hooks/use-composites.ts` around lines 55 - 62, the delete
mutation in useDeleteComposite currently only invalidates the list query
["composites"], so any mounted components holding ["composites", id] won't
update; modify useDeleteComposite's onSuccess to accept the mutation variables
(the id) and call queryClient.invalidateQueries for both the list key
["composites"] and the specific key ["composites", id] (use the second onSuccess
param to get the id), keeping the mutationFn (id: string) => api.delete(...)
unchanged.

19-24: Make the unresolved-id guard explicit.

The enabled: !!id check already excludes empty strings and undefined (both are falsy), so no request is sent today. However, the id: string signature invites callers to pass an unresolved route param, and if the enabled guard were ever removed the hook would request /composites/undefined. An explicit check documents the intent.

♻️ More explicit check
 export function useComposite(id: string) {
   return useQuery({
     queryKey: ["composites", id],
     queryFn: () => api.get<CompositeServerWithMembers>(`/composites/${id}`),
-    enabled: !!id,
+    enabled: !!id && id.length > 0,
   });
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/hooks/use-composites.ts` around lines 19 - 24, the useComposite
hook can be called with undefined and still build a request path like
`/composites/undefined`; update the enabled check and guard the query function
so it only runs for a non-empty string id: change the enabled condition in
useComposite to explicitly check that id is a defined, non-empty string (e.g.,
typeof id === "string" && id.length > 0) and ensure the queryFn uses that
validated id (or returns/throws if id is absent) so api.get is never called with
"undefined"; reference the useComposite function, its queryKey ["composites",
id], and the queryFn to locate where to apply the fix.
packages/transformer/src/webhooks.ts (1)

28-37: Version check may exclude future OpenAPI versions.

The startsWith('3.1') check will reject OpenAPI 3.2+ specs that also support webhooks. Consider a more flexible version comparison.

♻️ Suggested improvement
-  if (!spec.openapi.startsWith('3.1')) {
+  const [major, minor] = spec.openapi.split('.').map(Number);
+  if (major < 3 || (major === 3 && minor < 1)) {
     return {
       resources: [],
       notifications: [],
       warnings: [{
         code: 'WEBHOOKS_UNSUPPORTED_VERSION',
-        message: 'Webhooks require OpenAPI 3.1+; found ' + spec.openapi,
+        message: `Webhooks require OpenAPI 3.1+; found ${spec.openapi}`,
       }],
     };
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/transformer/src/webhooks.ts` around lines 28 - 37, the current check
using spec.openapi.startsWith('3.1') will reject newer OpenAPI 3.x versions;
change it to parse spec.openapi into numeric major/minor (e.g., split on '.' or
use a semver lib) and accept if major > 3 OR (major === 3 AND minor >= 1).
Replace the startsWith check with a reusable helper (e.g., isOpenApiAtLeast31)
and use that helper where spec.openapi is evaluated so OpenAPI 3.2+ and other
future 3.x versions are correctly allowed.
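A helper along the suggested lines, assuming spec.openapi is a dotted version string such as "3.1.0" (the isOpenApiAtLeast31 name is the reviewer's suggestion, not existing code):

```typescript
// Accepts any OpenAPI version >= 3.1 (3.1.x, 3.2+, 4.x), rejects 3.0.x and
// earlier. Malformed version strings parse to NaN and are rejected too.
export function isOpenApiAtLeast31(version: string): boolean {
  const [major = 0, minor = 0] = version.split(".").map(Number);
  return major > 3 || (major === 3 && minor >= 1);
}
```

The existing warning branch would then read `if (!isOpenApiAtLeast31(spec.openapi))` in place of the startsWith check.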
apps/web/app/api/tools/[id]/execute/route.ts (1)

88-93: Error response uses createSuccessResponse which is semantically misleading.

Using createSuccessResponse to wrap an error object is confusing for API consumers. Consider using a dedicated error response helper or returning a plain error structure.

♻️ Cleaner error response
     if (!initRes.ok) {
       return NextResponse.json(
-        createSuccessResponse({ error: 'Runtime unavailable', status: initRes.status }),
+        { success: false, error: { code: 'RUNTIME_UNAVAILABLE', message: 'Runtime unavailable' } },
         { status: 502 },
       );
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/api/tools/`[id]/execute/route.ts around lines 88 - 93, replace
the misleading use of createSuccessResponse when initRes.ok is false: inside the
initRes.ok check in route.ts (the block returning NextResponse.json for runtime
unavailable) return a proper error payload instead—either call the project's
error helper (e.g., createErrorResponse or createFailureResponse if available)
or return a plain { error: 'Runtime unavailable', status: initRes.status }
object directly to NextResponse.json with status 502; update the import if you
add a new helper and ensure the response shape matches other API error
responses.
apps/web/lib/middleware/rbac.ts (1)

1-2: Remove unused imports.

NextResponse and createErrorResponse are imported but not used in this file. The middleware throws ApiError which is handled by the error handler wrapper at the route level.

🔧 Proposed fix
-import { NextResponse } from 'next/server';
-import { createErrorResponse } from '@apifold/types';
 import type { WorkspaceRole } from '@apifold/types';
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/middleware/rbac.ts` around lines 1 - 2, remove the unused
imports NextResponse and createErrorResponse from the top of
apps/web/lib/middleware/rbac.ts since the file throws ApiError and relies on the
route-level error handler; update the import statement to only include symbols
actually referenced in this module (e.g., remove NextResponse and
createErrorResponse) so there are no unused imports flagged.
apps/web/lib/db/repositories/composite.repository.ts (1)

202-206: Consider returning deletion status or throwing on not found.

The delete method silently succeeds even if the composite doesn't exist. This might be intentional (idempotent delete), but if the caller expects confirmation that something was deleted, consider returning a boolean or checking affected rows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/repositories/composite.repository.ts` around lines 202 - 206,
the delete method currently always resolves void and hides whether a row was
removed; update the async delete(userId: string, id: string) in
composite.repository.ts to return a deletion status (e.g., boolean) or throw
when nothing was deleted: call this.db.delete(compositeServers)... and inspect
the returned result/affected-rows (or returned rows) from the delete operation,
then return true when a row was removed (false if none) or throw a NotFound
error when affected rows === 0; adjust callers to handle the new
boolean/exception accordingly.
packages/types/src/workspace.ts (1)

36-39: Consider restricting owner from InviteMemberInput.role.

The InviteMemberInput allows setting role to 'owner', but typically owner is assigned during workspace creation. Inviting someone as owner could lead to multiple owners. Consider using a restricted type or validating at the API layer.

💡 Alternative type
export type InvitableRole = 'admin' | 'member' | 'viewer';

export interface InviteMemberInput {
  readonly email: string;
  readonly role: InvitableRole;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/types/src/workspace.ts` around lines 36 - 39, InviteMemberInput
currently allows WorkspaceRole values including 'owner', which can permit
inviting owners; update the type to prevent assigning owner on invite by
replacing or restricting InviteMemberInput.role (referencing InviteMemberInput
and WorkspaceRole) — either introduce a new union type (e.g., InvitableRole)
excluding 'owner' and use it for InviteMemberInput.role, or add validation at
the API layer to reject 'owner' when processing invite requests; ensure all
references to InviteMemberInput are updated accordingly and adjust any callers
to use the new restricted role set.
apps/web/app/dashboard/composites/[id]/page.tsx (1)

202-210: Consider adding error feedback for mutations.

The handleMembersUpdate and handleDelete functions call mutations but don't handle error states. Users won't see feedback if the update/delete fails.

💡 Suggested improvement
 const handleMembersUpdate = (members: readonly { serverId: string; namespace: string; displayOrder: number }[]) => {
-  updateComposite.mutate({ id, input: { members } });
+  updateComposite.mutate({ id, input: { members } }, {
+    onError: (error) => {
+      // Consider using a toast notification
+      console.error('Failed to update members:', error);
+    },
+  });
 };

 const handleDelete = () => {
   deleteComposite.mutate(id, {
     onSuccess: () => router.push("/dashboard/composites"),
+    onError: (error) => {
+      setConfirmDelete(false);
+      // Consider using a toast notification
+      console.error('Failed to delete composite:', error);
+    },
   });
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/composites/`[id]/page.tsx around lines 202 - 210, add
error feedback to the mutations by supplying onError handlers to
updateComposite.mutate and deleteComposite.mutate: update the
handleMembersUpdate and handleDelete functions to pass an options object with
onError that surfaces the error (e.g., show a toast/snackbar or set an error
state) and keep the existing onSuccess behavior (for handleDelete, still call
router.push on success). Specifically modify calls to
updateComposite.mutate(...) and deleteComposite.mutate(id, {...}) to include
onError callbacks that log the error and display a user-friendly message so
failures are visible to the user.
apps/runtime/src/middleware/response-cache.ts (1)

56-72: Consider skipping cache writes for write methods.

setCachedResponse will write cache entries for POST/PUT/DELETE/PATCH operations, but getCachedResponse never reads them. Adding the same WRITE_METHODS check here would avoid wasted Redis writes.

♻️ Proposed refactor
 export async function setCachedResponse(
   deps: ResponseCacheDeps,
   config: CacheConfig,
   input: Readonly<Record<string, unknown>>,
   result: MCPToolResult,
 ): Promise<void> {
   if (config.cacheTtlSeconds <= 0) return;
+  if (WRITE_METHODS.has(config.httpMethod.toLowerCase())) return;
   if (result.isError) return;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/middleware/response-cache.ts` around lines 56 - 72,
setCachedResponse is currently writing cache for all requests including write
operations; add the same WRITE_METHODS check used by getCachedResponse to skip
caching for HTTP write methods (e.g., POST/PUT/PATCH/DELETE). Update
setCachedResponse (function name) to inspect the incoming request method from
CacheConfig (or the same property getCachedResponse reads) and return early when
the method is in WRITE_METHODS, to avoid unnecessary Redis writes and keep
behavior consistent with getCachedResponse.
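The guard could be factored out so both cache paths agree by construction. This sketch assumes WRITE_METHODS contains the lowercase write verbs, as the finding describes:

```typescript
// Single cacheability predicate shared by getCachedResponse and
// setCachedResponse: skip zero/negative TTLs and HTTP write methods.
const WRITE_METHODS = new Set(["post", "put", "patch", "delete"]);

export function shouldCache(httpMethod: string, cacheTtlSeconds: number): boolean {
  if (cacheTtlSeconds <= 0) return false;
  return !WRITE_METHODS.has(httpMethod.toLowerCase());
}
```

Both functions would then call shouldCache(config.httpMethod, config.cacheTtlSeconds) up front, eliminating the asymmetry between reads and writes.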

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 3f85ebc8-f810-4c5b-993f-b3d25dd4f5ae

📥 Commits

Reviewing files that changed from the base of the PR and between 6070602 and 572d90c.

📒 Files selected for processing (58)
  • apps/runtime/__tests__/integration/composite.test.ts
  • apps/runtime/__tests__/integration/streamable-http.test.ts
  • apps/runtime/__tests__/unit/namespace.test.ts
  • apps/runtime/__tests__/unit/response-cache.test.ts
  • apps/runtime/src/mcp/namespace.ts
  • apps/runtime/src/mcp/protocol-handler.ts
  • apps/runtime/src/mcp/session-manager.ts
  • apps/runtime/src/middleware/response-cache.ts
  • apps/runtime/src/registry/tool-loader.ts
  • apps/runtime/src/server.ts
  • apps/runtime/src/transports/composite.ts
  • apps/runtime/src/transports/streamable-http.ts
  • apps/runtime/src/webhooks/notifier.ts
  • apps/runtime/src/webhooks/receiver.ts
  • apps/runtime/src/webhooks/signature.ts
  • apps/web/__tests__/rbac/permissions.test.ts
  • apps/web/app/api/composites/[id]/route.ts
  • apps/web/app/api/composites/route.ts
  • apps/web/app/api/servers/[id]/webhook-events/route.ts
  • apps/web/app/api/tools/[id]/execute/route.ts
  • apps/web/app/api/workspaces/[id]/members/[userId]/route.ts
  • apps/web/app/api/workspaces/[id]/members/route.ts
  • apps/web/app/api/workspaces/[id]/route.ts
  • apps/web/app/api/workspaces/route.ts
  • apps/web/app/dashboard/composites/[id]/page.tsx
  • apps/web/app/dashboard/composites/page.tsx
  • apps/web/app/dashboard/servers/[id]/layout.tsx
  • apps/web/app/dashboard/servers/[id]/webhooks/page.tsx
  • apps/web/app/dashboard/settings/members/page.tsx
  • apps/web/components/dashboard/workspace-switcher.tsx
  • apps/web/components/playground/schema-form.tsx
  • apps/web/lib/constants/navigation.ts
  • apps/web/lib/db/migrations/0011_webhook_events.sql
  • apps/web/lib/db/migrations/0012_composite_servers.sql
  • apps/web/lib/db/migrations/0013_workspaces.sql
  • apps/web/lib/db/migrations/0014_migrate_users_to_workspaces.sql
  • apps/web/lib/db/migrations/0015_tool_cache_ttl.sql
  • apps/web/lib/db/migrations/0016_enforce_workspace_not_null.sql
  • apps/web/lib/db/repositories/composite.repository.ts
  • apps/web/lib/db/repositories/workspace.repository.ts
  • apps/web/lib/db/schema/composite-servers.ts
  • apps/web/lib/db/schema/credentials.ts
  • apps/web/lib/db/schema/index.ts
  • apps/web/lib/db/schema/servers.ts
  • apps/web/lib/db/schema/specs.ts
  • apps/web/lib/db/schema/workspaces.ts
  • apps/web/lib/hooks/index.ts
  • apps/web/lib/hooks/use-composites.ts
  • apps/web/lib/hooks/use-webhook-events.ts
  • apps/web/lib/hooks/use-workspaces.ts
  • apps/web/lib/middleware/rbac.ts
  • packages/transformer/__tests__/webhooks.test.ts
  • packages/transformer/src/index.ts
  • packages/transformer/src/types.ts
  • packages/transformer/src/webhooks.ts
  • packages/types/src/composite.ts
  • packages/types/src/index.ts
  • packages/types/src/workspace.ts

Comment on lines +1 to +18
const NAMESPACE_SEPARATOR = '__';

export function prefixToolName(namespace: string, toolName: string): string {
  return `${namespace}${NAMESPACE_SEPARATOR}${toolName}`;
}

export function stripNamespace(namespacedName: string): { namespace: string; toolName: string } | null {
  const idx = namespacedName.indexOf(NAMESPACE_SEPARATOR);
  if (idx < 0) return null;
  return {
    namespace: namespacedName.slice(0, idx),
    toolName: namespacedName.slice(idx + NAMESPACE_SEPARATOR.length),
  };
}

export function isNamespaced(name: string): boolean {
  return name.includes(NAMESPACE_SEPARATOR);
}

⚠️ Potential issue | 🟠 Major

Reject ambiguous namespace values before encoding.

prefixToolName("foo__bar", "baz") serializes to foo__bar__baz, but stripNamespace() decodes that as { namespace: "foo", toolName: "bar__baz" }. The helper also treats __tool and ns__ as valid. If callers ever pass those shapes, composite dispatch can target the wrong member or a blank tool name.

Proposed fix
 export function prefixToolName(namespace: string, toolName: string): string {
+  if (!namespace || namespace.includes(NAMESPACE_SEPARATOR)) {
+    throw new Error(`namespace must be non-empty and must not contain "${NAMESPACE_SEPARATOR}"`);
+  }
+  if (!toolName) {
+    throw new Error('toolName must be non-empty');
+  }
   return `${namespace}${NAMESPACE_SEPARATOR}${toolName}`;
 }
 
 export function stripNamespace(namespacedName: string): { namespace: string; toolName: string } | null {
   const idx = namespacedName.indexOf(NAMESPACE_SEPARATOR);
-  if (idx < 0) return null;
+  if (idx <= 0) return null;
+  const toolName = namespacedName.slice(idx + NAMESPACE_SEPARATOR.length);
+  if (!toolName) return null;
   return {
     namespace: namespacedName.slice(0, idx),
-    toolName: namespacedName.slice(idx + NAMESPACE_SEPARATOR.length),
+    toolName,
   };
 }
 
 export function isNamespaced(name: string): boolean {
-  return name.includes(NAMESPACE_SEPARATOR);
+  return stripNamespace(name) !== null;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/mcp/namespace.ts` around lines 1 - 18, Reject and validate
inputs containing the namespace separator or empty segments in prefixToolName
and isNamespaced-related flows: update prefixToolName to check that namespace
and toolName are non-empty and do not include NAMESPACE_SEPARATOR (and similarly
ensure names do not start or end with the separator), and have it throw or
return an error for invalid inputs; adjust stripNamespace to handle and reject
ambiguous inputs (e.g., those with no separator or with empty
namespace/toolName) rather than silently parsing them; reference
NAMESPACE_SEPARATOR, prefixToolName, stripNamespace, and isNamespaced when
locating where to add these validations.
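The ambiguity is easy to reproduce in isolation; the helpers below are copied verbatim from the snippet above, and the `foo__bar`/`baz` inputs are illustrative only:

```typescript
const NAMESPACE_SEPARATOR = '__';

function prefixToolName(namespace: string, toolName: string): string {
  return `${namespace}${NAMESPACE_SEPARATOR}${toolName}`;
}

function stripNamespace(namespacedName: string): { namespace: string; toolName: string } | null {
  const idx = namespacedName.indexOf(NAMESPACE_SEPARATOR);
  if (idx < 0) return null;
  return {
    namespace: namespacedName.slice(0, idx),
    toolName: namespacedName.slice(idx + NAMESPACE_SEPARATOR.length),
  };
}

// Encoding a namespace that already contains the separator is not reversible:
const encoded = prefixToolName('foo__bar', 'baz'); // 'foo__bar__baz'
const decoded = stripNamespace(encoded);
// decoded is { namespace: 'foo', toolName: 'bar__baz' } — not the original pair.
```

Because `indexOf` finds the first separator, decoding always splits at the leftmost `__`, which is why the proposed fix rejects separator-bearing namespaces at encode time.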

Comment on lines +366 to +393
const uriMatch = uri.match(/^webhook:\/\/[^/]+\/(.+)$/);
if (!uriMatch) {
  response = jsonRpcError(message.id, -32602, 'Invalid resource URI');
  break;
}
const eventName = uriMatch[1]!;

if (!/^[a-zA-Z0-9_.:-]+$/.test(eventName) || eventName.length > 200) {
  response = jsonRpcError(message.id, -32602, 'Invalid event name in resource URI');
  break;
}

try {
  const redisKey = `webhook:latest:${server.id}:${eventName}`;
  const data = await deps.redis.get(redisKey);

  if (!data) {
    response = jsonRpcError(message.id, -32002, 'Resource not found');
    break;
  }

  response = jsonRpcSuccess(message.id, {
    contents: [{
      uri,
      mimeType: 'application/json',
      text: data,
    }],
  });

⚠️ Potential issue | 🟠 Major

Validate the resource URI host before reading Redis.

resources/read ignores the webhook://<slug>/... authority and always reads from the currently addressed server. A request like webhook://other-server/order.created can therefore return this server's order.created payload, which breaks URI semantics and can poison client-side caching.

Proposed fix
-          const uriMatch = uri.match(/^webhook:\/\/[^/]+\/(.+)$/);
+          const uriMatch = uri.match(/^webhook:\/\/([^/]+)\/(.+)$/);
           if (!uriMatch) {
             response = jsonRpcError(message.id, -32602, 'Invalid resource URI');
             break;
           }
-          const eventName = uriMatch[1]!;
+          const uriServerSlug = uriMatch[1]!;
+          const eventName = uriMatch[2]!;
+
+          if (uriServerSlug !== server.slug) {
+            response = jsonRpcError(message.id, -32602, 'Resource URI does not belong to this server');
+            break;
+          }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/transports/streamable-http.ts` around lines 366 - 393, The
code currently only extracts the event name (uriMatch) and ignores the URI
authority; update the URI parsing to capture the host (e.g., change the regex to
capture both host and eventName), then validate that the captured host equals
the current server identifier used for storage (server.id) before reading Redis
(deps.redis.get). If the host does not match, return a JSON-RPC error (use
jsonRpcError with the same message.id) instead of reading redisKey, and only
construct redisKey and call deps.redis.get when the host matches; reference the
existing variables uriMatch, eventName, server.id, redisKey, deps.redis.get and
jsonRpcError to locate where to change the logic.
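As a standalone sketch, the two-capture regex from the proposed fix parses out both the authority and the event name; `parseWebhookUri` is a hypothetical helper name, not code from the PR:

```typescript
function parseWebhookUri(uri: string): { slug: string; eventName: string } | null {
  // Capture the authority (server slug) as well as the event path,
  // so the caller can reject URIs addressed to a different server.
  const match = uri.match(/^webhook:\/\/([^/]+)\/(.+)$/);
  if (!match) return null;
  return { slug: match[1] as string, eventName: match[2] as string };
}

const parsed = parseWebhookUri('webhook://other-server/order.created');
// parsed: { slug: 'other-server', eventName: 'order.created' }
// A handler would compare parsed.slug against the addressed server's slug
// before touching Redis, returning a JSON-RPC -32602 error on mismatch.
```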

Comment on lines +18 to +31
notify(serverSlug: string, eventName: string, payload: unknown): void {
  const notification = Object.freeze({
    jsonrpc: '2.0' as const,
    method: `notifications/webhook/${eventName}`,
    params: { eventName, payload },
  });

  const data = JSON.stringify(notification);
  let sent = 0;

  this.sessionManager.forEachSession(serverSlug, (session) => {
    this.sessionManager.sendEvent(session, 'message', data);
    sent++;
  });

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

rg -n -C3 '\bsessionManager\.create\s*\(' apps/runtime/src
rg -n -C3 '\bauthenticated\b' apps/runtime/src
rg -n -C3 'new WebhookNotifier|\.notify\s*\(' apps/runtime/src

Repository: Work90210/APIFold

Length of output: 6465


🏁 Script executed:

rg -n 'sessionManager\.create' apps/runtime/src

Repository: Work90210/APIFold

Length of output: 203


🏁 Script executed:

rg -n 'forEachSession|sendEvent' apps/runtime/src/mcp/session-manager.ts -A 5

Repository: Work90210/APIFold

Length of output: 1360


Add defense-in-depth auth check to webhook fanout and normalize event name parameter.

The webhook notifier broadcasts to all sessions matching a server slug without an authentication guard. While the SSE transport currently only creates sessions with authenticated: true, adding a defensive check prevents data leakage if a future code path creates unauthenticated sessions. Additionally, eventName should be normalized before embedding in the method path.

Suggested hardening
   notify(serverSlug: string, eventName: string, payload: unknown): void {
+    const normalizedEventName = eventName.replace(/[^a-zA-Z0-9._-]/g, '_');
     const notification = Object.freeze({
       jsonrpc: '2.0' as const,
-      method: `notifications/webhook/${eventName}`,
-      params: { eventName, payload },
+      method: `notifications/webhook/${normalizedEventName}`,
+      params: { eventName: normalizedEventName, payload },
     });

     const data = JSON.stringify(notification);
     let sent = 0;

     this.sessionManager.forEachSession(serverSlug, (session) => {
+      if (!session.authenticated) return;
       this.sessionManager.sendEvent(session, 'message', data);
       sent++;
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/webhooks/notifier.ts` around lines 18 - 31, The notify
method currently builds the notification using the raw eventName and blindly
sends to every session; update notify to first normalize the eventName (e.g.,
coerce to string, trim, and sanitize/slugify so it’s safe to embed in the method
path) and use that normalized value when constructing the notification object,
and then, inside the this.sessionManager.forEachSession(serverSlug, (session) =>
...) callback, add an authentication guard that skips sessions that are not
authenticated (check session.authenticated or the appropriate property) before
calling this.sessionManager.sendEvent(session, 'message', data); reference the
notify function and the sessionManager.forEachSession/sendEvent calls when
making the changes.

Comment on lines +107 to +111
onChange={(e) => {
  const raw = e.target.value;
  if (raw === '') { onChange(undefined); return; }
  onChange(type === 'integer' ? parseInt(raw, 10) : parseFloat(raw));
}}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n apps/web/components/playground/schema-form.tsx | sed -n '95,125p'

Repository: Work90210/APIFold

Length of output: 1330


🏁 Script executed:

node << 'EOF'
// Test parseInt and parseFloat behavior with intermediate inputs
const testCases = ['-', '.', '-.', '.5', '-5', '-5.5', 'abc', ''];

console.log("parseInt behavior:");
testCases.forEach(input => {
  const result = parseInt(input, 10);
  console.log(`  parseInt("${input}", 10) = ${result}, isNaN: ${Number.isNaN(result)}`);
});

console.log("\nparseFloat behavior:");
testCases.forEach(input => {
  const result = parseFloat(input);
  console.log(`  parseFloat("${input}") = ${result}, isNaN: ${Number.isNaN(result)}`);
});

console.log("\nNumber() behavior:");
testCases.forEach(input => {
  const result = Number(input);
  console.log(`  Number("${input}") = ${result}, isNaN: ${Number.isNaN(result)}`);
});
EOF

Repository: Work90210/APIFold

Length of output: 1019


🏁 Script executed:

# Find where this component is defined and used
grep -n "case 'integer':" apps/web/components/playground/schema-form.tsx | head -5

Repository: Work90210/APIFold

Length of output: 83


🏁 Script executed:

# Look for the component definition and its props
head -100 apps/web/components/playground/schema-form.tsx | cat -n

Repository: Work90210/APIFold

Length of output: 3885


🏁 Script executed:

# Check if there's any downstream handling of NaN values
rg "isNaN|Number.isNaN" apps/web/components/playground/ -A 2 -B 2

Repository: Work90210/APIFold

Length of output: 43


🏁 Script executed:

# Check the full component to see the complete SchemaField implementation
wc -l apps/web/components/playground/schema-form.tsx

Repository: Work90210/APIFold

Length of output: 111


🏁 Script executed:

# Review the suggested fix logic to verify it's correct
node << 'EOF'
// Verify the suggested fix handles all cases
const testCases = [
  { raw: '', expected: 'return (undefined)' },
  { raw: '-', expected: 'return (NaN check)' },
  { raw: '.', expected: 'return (NaN check)' },
  { raw: '-.', expected: 'return (NaN check)' },
  { raw: '.5', type: 'number', expected: 0.5 },
  { raw: '5', type: 'integer', expected: 5 },
  { raw: '5', type: 'number', expected: 5 },
  { raw: '5.5', type: 'integer', expected: 'return (not integer)' },
  { raw: '5.5', type: 'number', expected: 5.5 },
  { raw: '-5.5', type: 'integer', expected: 'return (not integer)' },
];

console.log("Suggested fix validation:\n");
testCases.forEach(tc => {
  const { raw, type = 'number', expected } = tc;
  
  // Simulate the suggested fix logic
  if (raw === '') {
    console.log(`✓ raw="${raw}": returns undefined (empty string)`);
    return;
  }
  
  const parsed = Number(raw);
  if (Number.isNaN(parsed)) {
    console.log(`✓ raw="${raw}": returns early (NaN detected)`);
    return;
  }
  
  if (type === 'integer' && !Number.isInteger(parsed)) {
    console.log(`✓ raw="${raw}", type=integer: returns early (not integer)`);
    return;
  }
  
  console.log(`✓ raw="${raw}", type=${type}: onChange(${parsed})`);
});
EOF

Repository: Work90210/APIFold

Length of output: 514


Guard against NaN propagation from number inputs.

At line 110, intermediate values during typing (e.g., -, ., -.) parse to NaN with parseInt and parseFloat, which are then propagated to the form state via the onChange callback with no validation.

Update the handler to validate the parsed value:

🔧 Suggested fix
             onChange={(e) => {
               const raw = e.target.value;
               if (raw === '') { onChange(undefined); return; }
-              onChange(type === 'integer' ? parseInt(raw, 10) : parseFloat(raw));
+              const parsed = Number(raw);
+              if (Number.isNaN(parsed)) return;
+              if (type === 'integer' && !Number.isInteger(parsed)) return;
+              onChange(parsed);
             }}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/components/playground/schema-form.tsx` around lines 107 - 111, The
current inline onChange handler in the form's number input (the arrow function
using variables type and onChange) passes parseInt/parseFloat results directly,
which can propagate NaN for intermediate typing states; update the handler to
parse the value, guard against NaN and other non-numeric intermediates, and call
onChange(undefined) for invalid parses (or when raw === ''), otherwise call
onChange with the parsed number; specifically, in the onChange handler used in
the component, compute parsed = type === 'integer' ? parseInt(raw, 10) :
parseFloat(raw), then if Number.isNaN(parsed) (or raw matches only '-' or '.'),
call onChange(undefined) else call onChange(parsed).
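The guard from the suggested fix can be exercised outside React by factoring it into a pure function (`parseNumericInput` is a hypothetical name; the empty-string → `onChange(undefined)` case stays with the caller, as in the original handler):

```typescript
// Returns the value to commit to form state, or undefined when the keystroke
// should be ignored (NaN intermediates, non-integers for integer fields).
function parseNumericInput(raw: string, type: 'integer' | 'number'): number | undefined {
  const parsed = Number(raw);
  if (Number.isNaN(parsed)) return undefined;                            // '-', '.', '-.', 'abc'
  if (type === 'integer' && !Number.isInteger(parsed)) return undefined; // '5.5'
  return parsed;
}
```

Note that `Number(raw)` is stricter than `parseFloat` (it rejects trailing garbage like `'5abc'`), which is why the fix switches to it rather than keeping the `parseInt`/`parseFloat` pair.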

encryptedClientSecret: text('encrypted_client_secret'),
tokenExpiresAt: timestamp('token_expires_at', { withTimezone: true }),
provider: text('provider'),
workspaceId: uuid('workspace_id'),

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's look at the current credentials schema file
cat -n apps/web/lib/db/schema/credentials.ts

Repository: Work90210/APIFold

Length of output: 2113


🏁 Script executed:

# Check the workspaces schema structure
cat -n apps/web/lib/db/schema/workspaces.ts

Repository: Work90210/APIFold

Length of output: 1506


🏁 Script executed:

# Let's also check the servers schema to understand the pattern mentioned
cat -n apps/web/lib/db/schema/servers.ts

Repository: Work90210/APIFold

Length of output: 2203


🏁 Script executed:

# Search for migrations related to workspace constraints
find . -type f -name "*migration*" -o -name "*migrate*" | grep -E "\.(ts|js|sql)$" | head -20

Repository: Work90210/APIFold

Length of output: 162


🏁 Script executed:

# Look for any workspace-related constraints in the codebase
rg "workspaceId.*notNull|workspaceId.*references" --type ts -B 2 -A 2

Repository: Work90210/APIFold

Length of output: 43


🏁 Script executed:

# Check the migration that handles workspace migration
cat -n apps/web/lib/db/migrations/0014_migrate_users_to_workspaces.sql

Repository: Work90210/APIFold

Length of output: 3030


🏁 Script executed:

# Search for other migrations that might define workspace constraints
find apps/web/lib/db/migrations -type f -name "*.sql" | sort | xargs grep -l "workspace" 2>/dev/null

Repository: Work90210/APIFold

Length of output: 234


🏁 Script executed:

# Check if there are any ALTER TABLE statements that add constraints
rg "ALTER TABLE.*workspace|ADD.*CONSTRAINT.*workspace" --type sql -B 2 -A 2

Repository: Work90210/APIFold

Length of output: 2503


🏁 Script executed:

# Look for TypeScript code that reads/writes credentials to understand how workspaceId is used
rg "credentials\." apps/web/lib/db -A 3 -B 1 --type ts | head -50

Repository: Work90210/APIFold

Length of output: 4047


🏁 Script executed:

# Check the full credential.repository.ts to understand workspaceId usage
cat -n apps/web/lib/db/repositories/credential.repository.ts

Repository: Work90210/APIFold

Length of output: 10417


🏁 Script executed:

# Search for any places that query or set credentials.workspaceId
rg "credentials\.workspaceId|workspaceId.*credentials" --type ts -B 2 -A 2

Repository: Work90210/APIFold

Length of output: 43


credentials.workspaceId schema must reflect existing and pending DB constraints.

The Drizzle schema at line 30 declares workspaceId as nullable without a foreign key reference. However, migration 0013 already adds a DB-level FK constraint (REFERENCES workspaces(id) ON DELETE CASCADE), and migration 0016 will enforce NOT NULL. This schema drift causes type mismatches: the schema claims the column is optional and unconstrained, while the database enforces the opposite. Update the schema to align:

🧩 Suggested schema alignment
 import { pgTable, uuid, text, timestamp, index } from 'drizzle-orm/pg-core';
 import { sql } from 'drizzle-orm';
 import { mcpServers } from './servers';
+import { workspaces } from './workspaces';
@@
-    workspaceId: uuid('workspace_id'),
+    workspaceId: uuid('workspace_id')
+      .references(() => workspaces.id, { onDelete: 'cascade' })
+      .notNull(),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/schema/credentials.ts` at line 30, The
credentials.workspaceId column in the Drizzle schema must be updated to match DB
migrations 0013 and 0016: make workspaceId non-nullable and add the foreign-key
reference to workspaces(id) with ON DELETE CASCADE; update the credentials table
schema entry (workspaceId) so it declares the FK to workspaces.id and marks the
column as NOT NULL to avoid schema/type drift and match the existing DB
constraints.

domainVerifiedAt: timestamp('domain_verified_at', { withTimezone: true }),
domainVerificationToken: text('domain_verification_token'),
tokenHash: text('token_hash'),
workspaceId: uuid('workspace_id'),

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

rg -n -C2 'mcp_servers|workspace_id|NOT NULL|REFERENCES' apps/web/lib/db/migrations/0013_workspaces.sql apps/web/lib/db/migrations/0014_migrate_users_to_workspaces.sql apps/web/lib/db/migrations/0016_enforce_workspace_not_null.sql
rg -n -C3 'workspaceId' apps/web/lib/db/schema/servers.ts

Repository: Work90210/APIFold

Length of output: 10276


Add .notNull() constraint and FK reference to match enforced DB constraints.

Migration 0016 enforces mcp_servers.workspace_id as NOT NULL, and migration 0013 adds the FK reference to workspaces(id). The current Drizzle schema is nullable without FK reference, creating type safety and integrity gaps.

Suggested fix
 import {
@@
 } from 'drizzle-orm/pg-core';
 import { specs } from './specs';
+import { workspaces } from './workspaces';
@@
-    workspaceId: uuid('workspace_id'),
+    workspaceId: uuid('workspace_id')
+      .references(() => workspaces.id)
+      .notNull(),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/schema/servers.ts` at line 33, The servers schema's
workspaceId column is currently nullable and missing the foreign-key, so update
the workspaceId column definition in apps/web/lib/db/schema/servers.ts (symbol:
workspaceId) to add .notNull() and a FK reference to the workspaces table (e.g.
.references(() => workspaces.id) or the appropriate Drizzle reference form), and
import the workspaces schema if not already imported so the reference resolves.

- Fix migration backfill key mismatch (CRITICAL): UPDATEs now use
  encode(digest(...)) matching the INSERT slug formula
- Fix cache httpMethod case-sensitivity (CRITICAL): lowercase before
  checking WRITE_METHODS set
- Validate namespace against separator in prefixToolName
- Deep-sort nested objects in cache key hash for deterministic keys
- Validate resource URI host matches addressed server slug
- Fail closed when signature validator configured but raw body missing
- Reject duplicate serverId/namespace in composite create schema
- Add 30s/60s timeouts to playground execution fetch calls
- Handle query error states in composites, webhooks, members pages
- Handle empty workspace gracefully in members page

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/runtime/src/transports/streamable-http.ts (1)

216-221: ⚠️ Potential issue | 🟠 Major

Don't advertise resources.subscribe until the method exists.

This initialize response now declares subscribe: true, but this router still falls through to -32601 for resources/subscribe. That breaks capability discovery for compliant clients. apps/runtime/src/mcp/protocol-handler.ts has the same gap, so whichever fix you choose needs to land in both initialize paths.

💡 Proposed fix
           response = jsonRpcSuccess(message.id, {
             protocolVersion: '2024-11-05',
             capabilities: {
               tools: { listChanged: true },
-              resources: { subscribe: true, listChanged: true },
+              resources: { listChanged: true },
             },
             serverInfo: {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/transports/streamable-http.ts` around lines 216 - 221, The
initialize response is advertising capabilities.resources.subscribe: true even
though the router doesn't implement resources/subscribe (calls fall through to
-32601); update the initialize logic that builds the jsonRpcSuccess response
(the block producing jsonRpcSuccess(message.id, { protocolVersion:...,
capabilities: { resources: { subscribe: true }}})) to only set
resources.subscribe to true when the corresponding RPC handler is actually
registered/available (e.g., check the request router/handler registry or feature
flag used by protocol handling), and apply the same conditional change in the
other initialize path in mcp/protocol-handler.ts so both initialization flows
only advertise subscribe when resources/subscribe is implemented.
♻️ Duplicate comments (1)
apps/web/app/dashboard/servers/[id]/webhooks/page.tsx (1)

65-79: ⚠️ Potential issue | 🟠 Major

Handle useServer failures separately from “not found.”

useServer errors still fall through to if (!server) return null (Line 78), which can hide auth/backend failures as a blank page. Add an explicit server-error branch before the null/not-found branch.

Suggested fix
-  const { data: server, isLoading: serverLoading } = useServer(id);
+  const {
+    data: server,
+    isLoading: serverLoading,
+    isError: serverError,
+  } = useServer(id);

   if (serverLoading) {
     return (
       <div className="space-y-4">
         <Skeleton className="h-6 w-48" />
         <Skeleton className="h-24 w-full rounded-lg" />
         <Skeleton className="h-64 w-full rounded-lg" />
       </div>
     );
   }

-  if (!server) return null;
+  if (serverError) {
+    return (
+      <div className="rounded-lg border border-destructive/30 p-6 text-center">
+        <p className="text-sm text-destructive">Failed to load server details. Please try again.</p>
+      </div>
+    );
+  }
+
+  if (!server) return null;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/servers/`[id]/webhooks/page.tsx around lines 65 - 79,
The page currently treats any falsy server as "not found" which hides real
errors; update the server-loading branch to handle useServer errors explicitly
by checking the error flag from useServer (e.g., isError or error returned by
useServer) and render an error state/message before the "if (!server) return
null" not-found branch; locate the useServer usage (const { data: server,
isLoading: serverLoading } = useServer(id)) and expand it to read and check its
error indicator (e.g., isError or error) and return an appropriate error UI when
set, prior to the null return.
🧹 Nitpick comments (1)
apps/web/app/dashboard/servers/[id]/webhooks/page.tsx (1)

27-31: Expose expand/collapse state to assistive tech.

The row is clickable, but screen readers don’t get expanded state/context. Add aria-expanded and aria-controls on the button and an id on the expanded region.

Suggested fix
-import { use, useState } from "react";
+import { use, useId, useState } from "react";

 function EventRow({ event }: { readonly event: WebhookEvent }) {
   const [expanded, setExpanded] = useState(false);
+  const panelId = useId();

   return (
     <div className="border-b border-border/50 last:border-b-0">
       <button
         type="button"
         onClick={() => setExpanded((prev) => !prev)}
+        aria-expanded={expanded}
+        aria-controls={panelId}
         className="flex w-full items-center gap-3 px-4 py-3 text-left transition-colors duration-150 hover:bg-muted/30"
       >
@@
       {expanded && (
-        <div className="border-t border-border/30 bg-muted/20 px-4 py-3">
+        <div id={panelId} className="border-t border-border/30 bg-muted/20 px-4 py-3">
           <pre className="overflow-x-auto rounded-md bg-background p-3 text-xs font-mono leading-relaxed">
             {JSON.stringify(event.payload, null, 2)}
           </pre>
         </div>
       )}

Also applies to: 48-53

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/servers/`[id]/webhooks/page.tsx around lines 27 - 31,
The clickable row button toggled by the onClick using setExpanded and the
expanded state does not expose its state to assistive tech; update the button
element to include aria-expanded={expanded} and aria-controls referencing a
unique id (e.g., `${id}-webhook-details`) and add that id to the expanded region
element (the container rendered when expanded) so screen readers can associate
the button with the controlled content; apply the same
aria-expanded/aria-controls/id pattern to the second toggle button block (lines
~48-53) so both toggle controls are accessible.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/runtime/src/mcp/protocol-handler.ts`:
- Around line 236-244: The resources listing currently only validates charset
for event names in the map that produces uri/name/description (the
resources/list path), causing it to advertise eventNames that will be rejected
later; update the filter used in that chain (the .filter(...) before .map(...)
in protocol-handler.ts) to also enforce the same max-length check used by
resources/read and apps/runtime/src/webhooks/receiver.ts (<= 200 chars) and keep
it consistent with the validation in
apps/runtime/src/transports/streamable-http.ts so the allowed charset and length
are identical across resources/list, resources/read, and the webhook receiver.
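The consistency fix above amounts to one validator shared by resources/list, resources/read, and the webhook receiver. A minimal sketch, assuming an alphanumeric-plus-`._-` charset (the helper name and exact regex are illustrative; only the 200-char limit comes from the review):

```typescript
// Hypothetical shared validator; the allowed charset and the function name
// are assumptions, not the repo's actual API. Both the list and read paths
// (and the webhook receiver) would import this single definition.
const EVENT_NAME_MAX_LENGTH = 200;
const EVENT_NAME_PATTERN = /^[A-Za-z0-9._-]+$/;

function isValidEventName(name: string): boolean {
  return (
    name.length > 0 &&
    name.length <= EVENT_NAME_MAX_LENGTH &&
    EVENT_NAME_PATTERN.test(name)
  );
}
```

With one predicate, resources/list can no longer advertise an event name that resources/read would later reject.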

In `@apps/web/app/dashboard/settings/members/page.tsx`:
- Around line 27-32: The code computes workspaceId eagerly causing an
empty-string fallback before workspaces load; update the logic around
useWorkspaces/useWorkspace so you only derive workspaceId after workspaces are
loaded or when an explicitId exists: check the result of useWorkspaces (or its
isLoading flag) and if workspaces are undefined and explicitId is null, do not
set workspaceId to "" (use undefined or skip calling useWorkspace) and render a
loading state instead; change the ownedWorkspace/workspaceId computation to run
only when workspaces is defined (refer to useWorkspaces, searchParams,
explicitId, ownedWorkspace, workspaceId, and useWorkspace).
- Around line 150-156: The remove-button branch only hides owners but still
shows the trash icon for the current user's row; update the conditional so the
Button is not rendered when the member.userId equals the current user/membership
id. Replace the render guard from member.role !== "owner" to something like
member.role !== "owner" && member.userId !== <currentUserId> (use the app's
current user identifier variable — e.g., session.user.id or
currentMembership.userId), keeping the same
onClick={handleRemove(member.userId)} and disabled={removeMember.isPending}
behavior.
- Around line 151-159: The trash icon button is inaccessible because it lacks an
accessible name; update the Button in the members list (the component rendering
<Button ... onClick={() => handleRemove(member.userId)}
disabled={removeMember.isPending}> with <Trash2/>) to include an aria-label that
identifies the target member (e.g., `aria-label={`Remove ${member.email ||
member.name}`}` or similar using member.userId if no human-readable name), so
assistive tech can announce the destructive action for the correct member while
preserving the existing onClick and disabled behavior.
- Around line 100-116: The email input and role select lack accessible names;
update the invite fields in the members settings component to provide explicit
labels or aria labels (e.g., add unique ids like "invite-email" and
"invite-role" and associate <label htmlFor="invite-email">Email</label> and
<label htmlFor="invite-role">Role</label>, or add aria-label/aria-labelledby) so
screen readers can identify the controls that use email, setEmail, role, setRole
and the WorkspaceRole select; ensure labels are present visually or as
screen-reader-only (sr-only) text.

---

Outside diff comments:
In `@apps/runtime/src/transports/streamable-http.ts`:
- Around line 216-221: The initialize response is advertising
capabilities.resources.subscribe: true even though the router doesn't implement
resources/subscribe (calls fall through to -32601); update the initialize logic
that builds the jsonRpcSuccess response (the block producing
jsonRpcSuccess(message.id, { protocolVersion:..., capabilities: { resources: {
subscribe: true }}})) to only set resources.subscribe to true when the
corresponding RPC handler is actually registered/available (e.g., check the
request router/handler registry or feature flag used by protocol handling), and
apply the same conditional change in the other initialize path in
mcp/protocol-handler.ts so both initialization flows only advertise subscribe
when resources/subscribe is implemented.
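A minimal sketch of the conditional advertisement, assuming a handler registry keyed by JSON-RPC method name (the `Map`-based registry is a hypothetical stand-in for the real dispatcher):

```typescript
// Sketch: derive advertised capabilities from what is actually registered,
// so initialize never claims a capability that would later return -32601.
type RpcHandler = (params: unknown) => unknown;

function buildResourceCapabilities(
  handlers: ReadonlyMap<string, RpcHandler>,
): { resources: { subscribe: boolean } } {
  return {
    resources: {
      // Only advertise subscribe when a resources/subscribe handler exists.
      subscribe: handlers.has('resources/subscribe'),
    },
  };
}
```

Both initialize paths (streamable-http.ts and protocol-handler.ts) could call the same builder, keeping the advertised and implemented capability sets in sync by construction.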

---

Duplicate comments:
In `@apps/web/app/dashboard/servers/[id]/webhooks/page.tsx`:
- Around line 65-79: The page currently treats any falsy server as "not found"
which hides real errors; update the server-loading branch to handle useServer
errors explicitly by checking the error flag from useServer (e.g., isError or
error returned by useServer) and render an error state/message before the "if
(!server) return null" not-found branch; locate the useServer usage (const {
data: server, isLoading: serverLoading } = useServer(id)) and expand it to read
and check its error indicator (e.g., isError or error) and return an appropriate
error UI when set, prior to the null return.

---

Nitpick comments:
In `@apps/web/app/dashboard/servers/[id]/webhooks/page.tsx`:
- Around line 27-31: The clickable row button toggled by the onClick using
setExpanded and the expanded state does not expose its state to assistive tech;
update the button element to include aria-expanded={expanded} and aria-controls
referencing a unique id (e.g., `${id}-webhook-details`) and add that id to the
expanded region element (the container rendered when expanded) so screen readers
can associate the button with the controlled content; apply the same
aria-expanded/aria-controls/id pattern to the second toggle button block (lines
~48-53) so both toggle controls are accessible.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 3181d126-7479-45e7-b41d-fee07b4dc2ca

📥 Commits

Reviewing files that changed from the base of the PR and between 572d90c and d4dbe4b.

📒 Files selected for processing (11)
  • apps/runtime/src/mcp/namespace.ts
  • apps/runtime/src/mcp/protocol-handler.ts
  • apps/runtime/src/middleware/response-cache.ts
  • apps/runtime/src/transports/streamable-http.ts
  • apps/runtime/src/webhooks/receiver.ts
  • apps/web/app/api/composites/route.ts
  • apps/web/app/api/tools/[id]/execute/route.ts
  • apps/web/app/dashboard/composites/page.tsx
  • apps/web/app/dashboard/servers/[id]/webhooks/page.tsx
  • apps/web/app/dashboard/settings/members/page.tsx
  • apps/web/lib/db/migrations/0014_migrate_users_to_workspaces.sql
✅ Files skipped from review due to trivial changes (4)
  • apps/runtime/src/mcp/namespace.ts
  • apps/web/lib/db/migrations/0014_migrate_users_to_workspaces.sql
  • apps/web/app/dashboard/composites/page.tsx
  • apps/runtime/src/middleware/response-cache.ts
🚧 Files skipped from review as they are similar to previous changes (3)
  • apps/web/app/api/composites/route.ts
  • apps/runtime/src/webhooks/receiver.ts
  • apps/web/app/api/tools/[id]/execute/route.ts

Comment on lines +27 to +32
const { data: workspaces } = useWorkspaces();
// Prefer explicit workspace from URL, fall back to first owned workspace
const explicitId = searchParams.get("workspace");
const ownedWorkspace = workspaces?.find((w) => w.slug.startsWith("user-")) ?? workspaces?.[0];
const workspaceId = explicitId ?? ownedWorkspace?.id ?? "";
const { data: workspace, isLoading } = useWorkspace(workspaceId);

⚠️ Potential issue | 🟡 Minor

Wait for useWorkspaces() before falling into the empty state.

Without a workspace query param, workspaceId is "" until the workspace list resolves, so this can briefly render “No workspace selected” on first load before the fallback workspace is picked.

Suggested fix
-  const { data: workspaces } = useWorkspaces();
+  const { data: workspaces, isLoading: isLoadingWorkspaces } = useWorkspaces();
   // Prefer explicit workspace from URL, fall back to first owned workspace
-  const explicitId = searchParams.get("workspace");
+  const explicitId = searchParams.get("workspace") || null;
   const ownedWorkspace = workspaces?.find((w) => w.slug.startsWith("user-")) ?? workspaces?.[0];
   const workspaceId = explicitId ?? ownedWorkspace?.id ?? "";
   const { data: workspace, isLoading } = useWorkspace(workspaceId);
+  const isResolvingWorkspace =
+    (!explicitId && isLoadingWorkspaces) || (Boolean(workspaceId) && isLoading);
@@
-  if (isLoading && workspaceId) {
+  if (isResolvingWorkspace) {

Also applies to: 58-79

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/settings/members/page.tsx` around lines 27 - 32, The
code computes workspaceId eagerly causing an empty-string fallback before
workspaces load; update the logic around useWorkspaces/useWorkspace so you only
derive workspaceId after workspaces are loaded or when an explicitId exists:
check the result of useWorkspaces (or its isLoading flag) and if workspaces are
undefined and explicitId is null, do not set workspaceId to "" (use undefined or
skip calling useWorkspace) and render a loading state instead; change the
ownedWorkspace/workspaceId computation to run only when workspaces is defined
(refer to useWorkspaces, searchParams, explicitId, ownedWorkspace, workspaceId,
and useWorkspace).

- Align resources/list event name filter with 200-char limit from read path
- Add CHECK constraint for non-negative cache_ttl_seconds
- Wait for useWorkspaces() before showing empty state on members page
- Add aria-label to invite email/role fields and remove button
- Hide remove button on current user's own row

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
apps/runtime/src/transports/streamable-http.ts (1)

334-404: Consider extracting shared resource handlers to eliminate duplication.

This resources/list and resources/read logic is nearly identical to handleResourcesList and handleResourcesRead in protocol-handler.ts (lines 222-251). The protocol-handler versions exist but are never invoked by this transport since streamable-http.ts has its own switch dispatcher.

Having two implementations that must stay synchronized creates maintenance burden and risk of drift. Consider extracting the shared logic into a reusable module that both transports can call.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/transports/streamable-http.ts` around lines 334 - 404, The
two cases 'resources/list' and 'resources/read' in streamable-http.ts duplicate
logic already implemented as handleResourcesList and handleResourcesRead in
protocol-handler.ts; extract the shared logic into a new reusable module (e.g.,
resources-handler) that exports functions like listResources(deps, server) and
readResource(deps, server, uri) (or adapt names to
handleResourcesList/handleResourcesRead) and then replace the switch-case bodies
in streamable-http.ts to call those shared functions, passing deps, server,
message.id (and logger/jsonRpc helpers if needed), and translate their results
into the same jsonRpcSuccess/jsonRpcError responses; ensure you remove
duplicated code and update imports so both streamable-http.ts and
protocol-handler.ts consume the single implementation.
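The suggested extraction can be sketched as a small shared module that both transports call; the function names, the `webhook://` URI scheme, and the parameter shapes below are illustrative assumptions, not the repo's actual signatures:

```typescript
// Hypothetical shared module (e.g., resources-handler.ts). Each transport's
// switch-case body shrinks to a call into these two functions, so the
// list/read logic can no longer drift between streamable-http.ts and
// protocol-handler.ts.
interface ResourceDescriptor {
  readonly uri: string;
  readonly name: string;
}

function listResources(
  serverSlug: string,
  eventNames: readonly string[],
): ResourceDescriptor[] {
  return eventNames.map((name) => ({
    uri: `webhook://${serverSlug}/${name}`,
    name,
  }));
}

function readResource(
  serverSlug: string,
  payloads: ReadonlyMap<string, unknown>,
  uri: string,
): unknown {
  const prefix = `webhook://${serverSlug}/`;
  // Reject URIs for other servers or malformed schemes up front.
  if (!uri.startsWith(prefix)) return undefined;
  return payloads.get(uri.slice(prefix.length));
}
```

Each transport would then only translate these results into its own jsonRpcSuccess/jsonRpcError envelopes.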
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@apps/runtime/src/transports/streamable-http.ts`:
- Around line 334-404: The two cases 'resources/list' and 'resources/read' in
streamable-http.ts duplicate logic already implemented as handleResourcesList
and handleResourcesRead in protocol-handler.ts; extract the shared logic into a
new reusable module (e.g., resources-handler) that exports functions like
listResources(deps, server) and readResource(deps, server, uri) (or adapt names
to handleResourcesList/handleResourcesRead) and then replace the switch-case
bodies in streamable-http.ts to call those shared functions, passing deps,
server, message.id (and logger/jsonRpc helpers if needed), and translate their
results into the same jsonRpcSuccess/jsonRpcError responses; ensure you remove
duplicated code and update imports so both streamable-http.ts and
protocol-handler.ts consume the single implementation.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 2818aa55-22b7-4d30-adf2-b8d8b5f5c96b

📥 Commits

Reviewing files that changed from the base of the PR and between d4dbe4b and 5719177.

📒 Files selected for processing (4)
  • apps/runtime/src/mcp/protocol-handler.ts
  • apps/runtime/src/transports/streamable-http.ts
  • apps/web/app/dashboard/settings/members/page.tsx
  • apps/web/lib/db/migrations/0015_tool_cache_ttl.sql
✅ Files skipped from review due to trivial changes (3)
  • apps/web/lib/db/migrations/0015_tool_cache_ttl.sql
  • apps/runtime/src/mcp/protocol-handler.ts
  • apps/web/app/dashboard/settings/members/page.tsx

KyleFuehri added 2 commits March 28, 2026 17:09
- Remove unused ToolDefinition import in composite.ts (lint error)
- Override path-to-regexp@<0.1.13 to >=0.1.13 (GHSA-37ch-88jc-xwx2)

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
apps/runtime/src/transports/composite.ts (2)

188-189: Hardcoded sessionId: 'composite' may conflict with session-tracking logic.

If executeTool or downstream code uses sessionId for caching, rate-limiting, or state isolation, all composite calls sharing the same session ID could produce unintended collisions.

Consider deriving a unique session ID (e.g., from the composite ID or request context) if session isolation matters.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/transports/composite.ts` around lines 188 - 189, The
hardcoded sessionId in the composite transport creates collisions; change the
context creation near toolInput and context so sessionId is derived uniquely
(for example use the existing randomUUID() requestId or derive from the
composite identifier/params) instead of the literal 'composite' so that
executeTool and downstream logic get a unique sessionId per composite call;
update the context assignment in the composite.ts block that builds context
(where toolInput is defined) to populate sessionId with a unique value based on
requestId or composite id/params.

82-92: Request body type assertion lacks runtime validation.

The type assertion on line 82-87 trusts req.body to match the expected shape. If middleware or network conditions produce malformed JSON (e.g., null, array, or missing fields), accessing message.jsonrpc or message.method may behave unexpectedly.

Consider adding a lightweight runtime guard:

🛡️ Proposed defensive check
-    const message = req.body as {
+    const body = req.body;
+    if (typeof body !== 'object' || body === null || Array.isArray(body)) {
+      res.status(400).json({ error: 'Request body must be a JSON object' });
+      return;
+    }
+
+    const message = body as {
       readonly jsonrpc: string;
       readonly id: string | number;
       readonly method: string;
       readonly params?: Readonly<Record<string, unknown>>;
     };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/transports/composite.ts` around lines 82 - 92, The current
code directly asserts req.body into message and then reads
message.jsonrpc/message.method, which can throw or behave incorrectly for
null/array/malformed bodies; add a lightweight runtime guard before the
assertion: ensure req.body is a non-null object and not an array (e.g., typeof
req.body === 'object' && req.body !== null && !Array.isArray(req.body)), then
verify the presence and types of the JSON-RPC fields (jsonrpc === '2.0', typeof
method === 'string' && method.length > 0, id !== undefined) and only then assign
to message and proceed; update the validation branch around message, req.body,
jsonrpc, method, id (in composite.ts) to return 400 for any malformed body.
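The defensive check above can go one step further and become a type predicate, so the type assertion disappears entirely. The field checks follow the review's list; the guard name and interface are illustrative:

```typescript
interface JsonRpcRequest {
  readonly jsonrpc: string;
  readonly id: string | number;
  readonly method: string;
  readonly params?: Readonly<Record<string, unknown>>;
}

// Narrow an unknown request body without `as`; rejects null, arrays,
// and bodies missing or mistyping the required JSON-RPC fields.
function isJsonRpcRequest(body: unknown): body is JsonRpcRequest {
  if (typeof body !== 'object' || body === null || Array.isArray(body)) {
    return false;
  }
  const candidate = body as Record<string, unknown>;
  return (
    candidate['jsonrpc'] === '2.0' &&
    (typeof candidate['id'] === 'string' || typeof candidate['id'] === 'number') &&
    typeof candidate['method'] === 'string' &&
    candidate['method'].length > 0
  );
}
```

The route handler would then be `if (!isJsonRpcRequest(req.body)) { res.status(400).json(...); return; }`, with `message` typed correctly from there on.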
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/runtime/src/transports/composite.ts`:
- Around line 175-186: The current usage gating calls getPlanLimitsForUser and
checkAndIncrementUsage with server.userId (the member server owner), which
charges the wrong account; change these calls to use the composite's owner id
(composite.userId) so billing/quotas are applied to the composite creator rather
than the referenced server owner—update the variables passed into
getPlanLimitsForUser(...) and checkAndIncrementUsage(...), keeping deps.redis
and logger intact, and ensure any log messages or error text still reference the
correct user id for auditing.
- Around line 63-80: Add a JSDoc block to createCompositeRouter that documents
its required middleware preconditions: callers must mount authentication
middleware that sets req.serverTokenVerified = true (e.g., the
createServerTokenAuth middleware) so the router’s internal authenticated check
works, and must also apply rate-limiting middleware on the
/mcp/composite/:identifier path to protect this endpoint; reference
createCompositeRouter and the req.serverTokenVerified flag in the comment so
integrators know to register those middlewares when mounting the router.

---

Nitpick comments:
In `@apps/runtime/src/transports/composite.ts`:
- Around line 188-189: The hardcoded sessionId in the composite transport
creates collisions; change the context creation near toolInput and context so
sessionId is derived uniquely (for example use the existing randomUUID()
requestId or derive from the composite identifier/params) instead of the literal
'composite' so that executeTool and downstream logic get a unique sessionId per
composite call; update the context assignment in the composite.ts block that
builds context (where toolInput is defined) to populate sessionId with a unique
value based on requestId or composite id/params.
- Around line 82-92: The current code directly asserts req.body into message and
then reads message.jsonrpc/message.method, which can throw or behave incorrectly
for null/array/malformed bodies; add a lightweight runtime guard before the
assertion: ensure req.body is a non-null object and not an array (e.g., typeof
req.body === 'object' && req.body !== null && !Array.isArray(req.body)), then
verify the presence and types of the JSON-RPC fields (jsonrpc === '2.0', typeof
method === 'string' && method.length > 0, id !== undefined) and only then assign
to message and proceed; update the validation branch around message, req.body,
jsonrpc, method, id (in composite.ts) to return 400 for any malformed body.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 7999cfbf-17cf-4a4c-af3f-480264980ef9

📥 Commits

Reviewing files that changed from the base of the PR and between ba56b61 and 16e436f.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (2)
  • apps/runtime/src/transports/composite.ts
  • package.json

Comment on lines +63 to +80
export function createCompositeRouter(deps: CompositeRouterDeps): Router {
const { logger, registry, compositeRegistry, toolLoader, toolExecutorDeps } = deps;
const router = express.Router();

router.post('/mcp/composite/:identifier', async (req: Request, res: Response) => {
const identifier = req.params['identifier']!;
const composite = resolveComposite(compositeRegistry, identifier);

if (!composite || !composite.isActive) {
res.status(404).json({ error: 'Composite server not found' });
return;
}

const authenticated = (req as unknown as Record<string, unknown>)['serverTokenVerified'] === true;
if (!authenticated) {
res.status(401).json({ error: 'Authorization required' });
return;
}

⚠️ Potential issue | 🟠 Major

Missing auth middleware dependency documentation and rate limiting coverage.

The authentication guard at lines 76-80 depends on serverTokenVerified being set by createServerTokenAuth middleware. Per server.ts (lines 168-172), this middleware is mounted only at /mcp/:slug and does not cover /mcp/composite/:identifier. The same applies to rate limiting (lines 140-148 in server.ts).

While the check fails closed (returns 401 if flag is unset), the router currently provides no indication of this precondition. When the router is mounted, forgetting to register auth/rate-limiting middleware would leave the endpoint non-functional or unprotected.

Add a JSDoc comment on createCompositeRouter documenting that callers must mount:

  1. Auth middleware that sets req.serverTokenVerified = true
  2. Rate limiting middleware on the /mcp/composite/:identifier path
📝 Proposed documentation addition
+/**
+ * Creates a composite MCP JSON-RPC router.
+ *
+ * IMPORTANT: Before mounting this router, the caller MUST register:
+ * 1. Authentication middleware (e.g., `createServerTokenAuth`) on
+ *    `/mcp/composite/:identifier` that sets `req.serverTokenVerified = true`.
+ * 2. Rate-limiting middleware on the same path prefix.
+ *
+ * The existing `/mcp/:slug` middleware does NOT cover composite routes.
+ */
 export function createCompositeRouter(deps: CompositeRouterDeps): Router {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/transports/composite.ts` around lines 63 - 80, Add a JSDoc
block to createCompositeRouter that documents its required middleware
preconditions: callers must mount authentication middleware that sets
req.serverTokenVerified = true (e.g., the createServerTokenAuth middleware) so
the router’s internal authenticated check works, and must also apply
rate-limiting middleware on the /mcp/composite/:identifier path to protect this
endpoint; reference createCompositeRouter and the req.serverTokenVerified flag
in the comment so integrators know to register those middlewares when mounting
the router.

Comment on lines +175 to +186
// Usage gate
const planLimits = await getPlanLimitsForUser(deps.redis, server.userId);
const usageCheck = await checkAndIncrementUsage(
{ redis: deps.redis, logger },
server.userId,
planLimits,
);

if (!usageCheck.allowed) {
response = jsonRpcError(message.id, -32003, 'Usage limit reached. Upgrade your plan.');
break;
}

⚠️ Potential issue | 🟠 Major

Usage billing charges the member server owner, not the composite owner.

Line 176 fetches plan limits for server.userId (the member server's owner), and line 177-181 increments that user's usage counter. However, the composite has its own userId (the composite creator).

Current behavior: If user A creates a composite referencing user B's servers, calling tools through A's composite decrements B's quota—potentially exhausting B's limits or causing billing surprises.

Consider whether the composite owner (composite.userId) should be billed instead, or document this as intentional policy.

🔧 Potential fix: bill composite owner
           // Usage gate
-          const planLimits = await getPlanLimitsForUser(deps.redis, server.userId);
+          const planLimits = await getPlanLimitsForUser(deps.redis, composite.userId);
           const usageCheck = await checkAndIncrementUsage(
             { redis: deps.redis, logger },
-            server.userId,
+            composite.userId,
             planLimits,
           );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/transports/composite.ts` around lines 175 - 186, The current
usage gating calls getPlanLimitsForUser and checkAndIncrementUsage with
server.userId (the member server owner), which charges the wrong account; change
these calls to use the composite's owner id (composite.userId) so billing/quotas
are applied to the composite creator rather than the referenced server
owner—update the variables passed into getPlanLimitsForUser(...) and
checkAndIncrementUsage(...), keeping deps.redis and logger intact, and ensure
any log messages or error text still reference the correct user id for auditing.

@github-actions
Contributor

Coverage Report for packages/transformer

| Status | Category | Percentage | Covered / Total |
| --- | --- | --- | --- |
| 🔵 | Lines | 94.11% (🎯 80%) | 767 / 815 |
| 🔵 | Statements | 94.11% (🎯 80%) | 767 / 815 |
| 🔵 | Functions | 97.87% (🎯 80%) | 46 / 47 |
| 🔵 | Branches | 89.07% (🎯 80%) | 318 / 357 |

Changed files:

| File | Stmts | Branches | Functions | Lines | Uncovered Lines |
| --- | --- | --- | --- | --- | --- |
| packages/transformer/src/index.ts | 100% | 100% | 100% | 100% | |
| packages/transformer/src/types.ts | 100% | 100% | 100% | 100% | |
| packages/transformer/src/webhooks.ts | 97.53% | 89.65% | 100% | 97.53% | 113-114 |
Generated in workflow #261 for commit 6fb11a1 by the Vitest Coverage Report Action

@Work90210 Work90210 merged commit 8e180a2 into master Mar 30, 2026
11 of 13 checks passed
@Work90210 Work90210 deleted the feat/sprint-15-streamable-http-webhooks branch March 30, 2026 06:55