Prep for usage: bulk API, export, WebSocket, mount hardening, E2E #4
khaliqgant merged 17 commits into main
Conversation
…E2E infrastructure Extends the Go server with bulk seed/export endpoints, WebSocket file-change notifications, and binary file support. Hardens mount sync with conflict resolution and bidirectional sync. Adds E2E test script, workflow definitions, design docs, and updated TypeScript SDK. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds relayfile-cli with login/mount/seed/export commands, Homebrew tap formula, GitHub Actions release workflow, install script, and user-facing docs (API reference, CLI design, guides). Updates .gitignore to exclude compiled binaries and agent tool configs. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds OIDC-based npm publish with provenance for the TypeScript SDK. Includes npm update step per prpm trusted publishing guidance to avoid outdated npm versions on runners. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds dedicated GitHub Actions workflows for CI (tests + typecheck), npm publishing, and Go binary releases. Updates SDK package.json, tsconfig, and README. Adds CI/CD design doc. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds sdk/relayfile package that downloads the correct platform binary on install, so users can `npx relayfile` or `npm install -g relayfile`. Updates publish-npm workflow to use OIDC (no NPM_TOKEN), adds npm update step, and publishes both @relayfile/sdk and relayfile packages with version sync. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Configures root package.json with workspaces pointing to sdk/relayfile-sdk and sdk/relayfile. Makes CLI postinstall non-fatal so installs work before a release binary exists. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Renames sdk/relayfile-sdk and sdk/relayfile to packages/. Updates all workflow files, GitHub Actions, and package.json references to use the new paths. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Provenance only works in CI with OIDC — removing from package.json so local publishes work. CI workflows already pass --provenance explicitly. Also normalizes repository URLs to avoid npm publish warnings. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Astro + Tailwind marketing site with hero, feature grid, architecture diagram, API preview, and use cases sections. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
CI was failing with "workflow file issue" due to npm cache requiring a lockfile path and the workers-typecheck job referencing packages/server which doesn't exist. Simplified to use go.mod for Go version. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Resolves merge conflicts from sdk/ -> packages/ rename. Fixes:
- Tar export: log error instead of writing JSON after headers sent
- filepath.Clean -> path.Clean for OS-independent tar entry names
- Bulk write: reject files on store read errors instead of skipping permission checks
- WebSocket: load catch-up events before subscribing to avoid duplicates
- normalizeEncoding: return empty string for utf-8 to preserve omitempty
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
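The filepath.Clean -> path.Clean fix deserves a concrete sketch: tar entry names must use forward slashes on every OS, which path.Clean preserves while filepath.Clean would rewrite to backslashes on Windows. The tarEntryName helper below is illustrative, not the repo's actual function.

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

// tarEntryName normalizes a name for use inside a tar archive.
// path.Clean always treats '/' as the separator regardless of OS;
// filepath.Clean would use '\' on Windows, producing invalid entries.
func tarEntryName(name string) string {
	// Prefix with '/' so ".." segments cannot escape, then strip it back off.
	return path.Clean("/" + name)[1:]
}

func main() {
	fmt.Println(tarEntryName("docs//readme.md")) // docs/readme.md
	fmt.Println(path.Clean("a//b/../c"))         // a/c
	// On Unix, filepath.Clean agrees with path.Clean; on Windows it
	// would yield `a\b` here, which is not a valid tar entry name.
	fmt.Println(filepath.Clean("a//b"))
}
```

The leading-slash trick also doubles as cheap path-traversal protection for archive entries, since Clean resolves ".." against the synthetic root.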
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds FilesystemEventType, EventOrigin, OperationStatus, WritebackState, SyncProviderStatus, SyncProviderStatusState to the SDK's public exports. These are needed by relayfile-cloud which now imports types from the SDK instead of duplicating them. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Devin review fixes:
1. Tar export: return proper error response instead of empty 200
2. WebSocket: subscribe-before-catchup to prevent missed events, with dedup via EventID to avoid replaying catch-up events
3. CLI seed: single progress line instead of N instant messages
4. Bulk write: normalize file path before passing to BulkWrite (matches permission check path)
New E2E tests:
- Bulk write API: creates 5 files, verifies count + no errors
- JSON export: verifies non-empty array response
- Tar export: verifies gzip content-type and magic bytes
- WebSocket: connects, writes file, verifies event received
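The subscribe-before-catchup plus EventID dedup pattern can be sketched as follows. The Event struct and mergeEvents helper are hypothetical names, assuming only what the commit describes: events carry a unique EventID, and subscribing before fetching catch-up means the two sources may overlap but cannot leave a gap.

```go
package main

import "fmt"

// Event is an illustrative stand-in for the server's file-change event.
type Event struct {
	EventID string
	Path    string
}

// mergeEvents replays catch-up events first, then live events, skipping
// any live event whose EventID was already delivered during catch-up.
// Subscribing before fetching catch-up guarantees no event falls between
// the two streams; the EventID dedup handles the resulting overlap.
func mergeEvents(catchUp, live []Event) []Event {
	seen := make(map[string]bool, len(catchUp))
	out := make([]Event, 0, len(catchUp)+len(live))
	for _, e := range catchUp {
		seen[e.EventID] = true
		out = append(out, e)
	}
	for _, e := range live {
		if seen[e.EventID] {
			continue // already replayed via catch-up
		}
		out = append(out, e)
	}
	return out
}

func main() {
	catchUp := []Event{{"1", "a.txt"}, {"2", "b.txt"}}
	live := []Event{{"2", "b.txt"}, {"3", "c.txt"}} // "2" overlaps
	for _, e := range mergeEvents(catchUp, live) {
		fmt.Println(e.EventID, e.Path)
	}
}
```

The inverse order (catch-up before subscribe, as in the earlier commit) needs no dedup but can miss events written in the window between the two calls.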
All new E2E tests (bulk write, export, WebSocket) were using raw fetch() without the required X-Correlation-Id header, causing 400 responses. Switched to the existing api() helper which includes the header automatically. Also fixed export route path and file write method/endpoint.
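The lesson generalizes beyond the TypeScript test suite: attach required headers in one request-building helper rather than at each call site. A minimal Go sketch of the same idea; the helper name and URL are illustrative, and only the X-Correlation-Id header name comes from this PR.

```go
package main

import (
	"fmt"
	"net/http"
)

// newAPIRequest mirrors what the E2E api() helper does for fetch():
// every request to the server must carry X-Correlation-Id, or the
// server rejects it with a 400. Centralizing this in one constructor
// makes it impossible for an individual test to forget the header.
func newAPIRequest(method, url, correlationID string) (*http.Request, error) {
	req, err := http.NewRequest(method, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("X-Correlation-Id", correlationID)
	return req, nil
}

func main() {
	req, err := newAPIRequest(http.MethodGet, "http://localhost:8080/v1/files", "e2e-123")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("X-Correlation-Id"))
}
```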
1. Tar export: split into prepareTarExport (can return error response) and streamTarExport (headers committed, errors logged only). Prep errors (bad base64, etc.) now return proper HTTP 500.
2. WS reader goroutine: derive context from caller instead of context.Background() so it's cancelled on syncer shutdown.
3. E2E WebSocket: fire-and-forget the bulk write that triggers the event; this prevents an unhandled rejection when the server shuts down before the response arrives.
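A sketch of the prepare/stream split for the tar export. The two function names match the commit message, but the signatures and bodies are illustrative: all fallible work happens in prepareTarExport while an HTTP error can still be sent; once streaming begins, failures can only be logged, never written to the body.

```go
package main

import (
	"archive/tar"
	"compress/gzip"
	"fmt"
	"log"
	"net/http"
	"net/http/httptest"
)

type tarExport struct {
	files map[string][]byte // entry name -> decoded contents
}

// prepareTarExport does everything that can fail (decoding, validation)
// before any response headers are written, so errors become HTTP 500s.
func prepareTarExport(files map[string][]byte) (*tarExport, error) {
	if len(files) == 0 {
		return nil, fmt.Errorf("no files to export")
	}
	return &tarExport{files: files}, nil
}

// streamTarExport runs after headers are committed: any failure here is
// logged only, since a JSON error body would corrupt the gzip stream.
func streamTarExport(w http.ResponseWriter, exp *tarExport) {
	w.Header().Set("Content-Type", "application/gzip")
	gz := gzip.NewWriter(w)
	defer gz.Close()
	tw := tar.NewWriter(gz)
	defer tw.Close() // closes before gz.Close (LIFO), flushing the tar footer
	for name, data := range exp.files {
		hdr := &tar.Header{Name: name, Mode: 0o644, Size: int64(len(data))}
		if err := tw.WriteHeader(hdr); err != nil {
			log.Printf("tar export: %v", err)
			return
		}
		if _, err := tw.Write(data); err != nil {
			log.Printf("tar export: %v", err)
			return
		}
	}
}

func handler(w http.ResponseWriter, r *http.Request, files map[string][]byte) {
	exp, err := prepareTarExport(files)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	streamTarExport(w, exp)
}

func main() {
	rec := httptest.NewRecorder()
	handler(rec, nil, map[string][]byte{"a.txt": []byte("hi")})
	body := rec.Body.Bytes()
	fmt.Printf("%x %x\n", body[0], body[1]) // gzip magic bytes: 1f 8b
}
```

Checking the leading 1f 8b bytes mirrors what the new tar-export E2E test verifies over HTTP.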
return err
}

readCtx, cancel := context.WithCancel(ctx)
🔴 WebSocket connection torn down after every sync cycle due to short-lived parent context
The connectWebSocket method at internal/mountsync/syncer.go:425 derives readCtx from the ctx parameter of SyncOnce. Both callers (cmd/relayfile-mount/main.go:68-69 and cmd/relayfile-cli/main.go:365-366) create a short-lived timeout context (default 15s) per sync cycle: ctx, cancel := context.WithTimeout(rootCtx, *timeout) with defer cancel(). When SyncOnce returns and the timeout context is canceled, readCtx — a child of that context — is also canceled. This causes wsjson.Read(readCtx, conn, &event) in readWebSocketLoop to fail with context.Canceled, tearing down the WebSocket connection.

On the next sync cycle, needsWS evaluates to true again (since handleWebSocketDisconnect sets s.wsConn = nil), causing a reconnect. This connect/disconnect cycle repeats every sync interval, making WebSocket real-time streaming completely non-functional — the connection never lives long enough to receive any live events.
Call chain showing the context propagation
run() creates context.WithTimeout(rootCtx, 15s) → passes to SyncOnce(ctx) → passes to connectWebSocket(ctx) → readCtx, cancel := context.WithCancel(ctx) → go readWebSocketLoop(readCtx, conn) → when run() returns, defer cancel() kills readCtx → read loop exits → handleWebSocketDisconnect sets wsConn = nil.
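The failure mode can be reproduced in a few lines, independent of the repo's code: a context derived from a per-cycle timeout context dies as soon as that cycle's deferred cancel fires. The function below is a self-contained illustration, not syncer code.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// cancelParentThenCheck reproduces the bug shape: readCtx is derived
// from a short-lived per-sync-cycle context, so canceling the cycle
// context (the deferred cancel() in run()) kills the read loop too.
func cancelParentThenCheck() error {
	// Per-sync-cycle context, as created by run() with WithTimeout.
	cycleCtx, cancelCycle := context.WithTimeout(context.Background(), time.Minute)

	// readCtx derived from the cycle context (the flagged line).
	readCtx, cancelRead := context.WithCancel(cycleCtx)
	defer cancelRead()

	cancelCycle() // sync cycle returns; its deferred cancel() fires

	<-readCtx.Done() // the WebSocket read loop's context is already dead
	return readCtx.Err()
}

func main() {
	fmt.Println(cancelParentThenCheck()) // context canceled
}
```

Cancellation always propagates parent to child in Go's context tree, which is exactly why a per-cycle parent cannot own a connection meant to outlive the cycle.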
Prompt for agents
In internal/mountsync/syncer.go, the readCtx for the WebSocket read loop must NOT be derived from the per-sync-cycle ctx parameter. Instead, it should use a longer-lived context that survives across sync cycles.
Option 1: Store a long-lived cancel context on the Syncer struct, created once (e.g. in NewSyncer or on first connect), and use that as the parent for readCtx instead of the SyncOnce ctx.
Option 2: Use context.Background() as the parent for readCtx, relying on the wsCancel function for cleanup.
The fix at line 425 of internal/mountsync/syncer.go should change from:
readCtx, cancel := context.WithCancel(ctx)
to something like:
readCtx, cancel := context.WithCancel(context.Background())
This ensures the WebSocket read loop survives beyond a single SyncOnce call. The handleWebSocketDisconnect method already handles cleanup via wsCancel.
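Option 1 in sketch form, assuming a Syncer struct roughly like the one described (field and method names here are illustrative): the WebSocket context is created once, parented on context.Background() rather than the per-cycle ctx, so it survives each SyncOnce call and is torn down only through wsCancel.

```go
package main

import (
	"context"
	"fmt"
)

// Syncer holds a long-lived context for the WebSocket read loop,
// decoupled from the short-lived per-sync-cycle contexts.
type Syncer struct {
	wsCtx    context.Context
	wsCancel context.CancelFunc
}

// ensureWSContext lazily creates a context that is NOT derived from the
// per-cycle SyncOnce ctx, so a sync cycle ending leaves the WebSocket
// read loop running. Disconnect handling would call s.wsCancel.
func (s *Syncer) ensureWSContext() context.Context {
	if s.wsCtx == nil {
		s.wsCtx, s.wsCancel = context.WithCancel(context.Background())
	}
	return s.wsCtx
}

func main() {
	s := &Syncer{}

	// Simulate one sync cycle with its own short-lived context.
	cycleCtx, cancelCycle := context.WithCancel(context.Background())
	readCtx := s.ensureWSContext()
	_ = cycleCtx
	cancelCycle() // cycle ends; readCtx is unaffected

	fmt.Println(readCtx.Err()) // <nil>: read loop context survives the cycle
}
```

The trade-off versus Option 2 (bare context.Background() at the call site) is that storing the context on the struct gives shutdown code one obvious handle to cancel.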
Summary
This PR will be continuously updated as the workflows run.
Test plan
🤖 Generated with Claude Code