
feat: relayauth integration — JWKS verification + path-scoped access#5

Open
khaliqgant wants to merge 10 commits into main from feat/relayauth-integration

Conversation

@khaliqgant (Member) commented Mar 24, 2026

Summary

Adds workflow (workflows/integrate-relayauth.ts) to integrate relayauth into relayfile:

  • Go server: verify relayauth JWTs via JWKS endpoint
  • Path-scoped access: relayfile:fs:write:/src/api/* restricts file writes to matching paths
  • Mount daemon: accepts relayauth tokens via existing --token flag
  • TS SDK: fromRelayAuth() factory for relayauth-authenticated clients
  • Backwards compatible — existing relayfile JWTs still work
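
The path-scoped claim format above could be enforced with a small matcher along these lines. This is a minimal sketch under assumptions: the helper name `scopeAllows` and the exact `relayfile:fs:<action>:<pattern>` claim layout are illustrative, not the PR's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// scopeAllows reports whether a scope string of the assumed form
// "relayfile:fs:<action>:<path-pattern>" permits the given action on path.
// A trailing "/*" in the pattern matches any path under that prefix.
func scopeAllows(scope, action, path string) bool {
	parts := strings.SplitN(scope, ":", 4)
	if len(parts) != 4 || parts[0] != "relayfile" || parts[1] != "fs" || parts[2] != action {
		return false
	}
	pattern := parts[3]
	if strings.HasSuffix(pattern, "/*") {
		return strings.HasPrefix(path, strings.TrimSuffix(pattern, "*"))
	}
	return path == pattern
}

func main() {
	fmt.Println(scopeAllows("relayfile:fs:write:/src/api/*", "write", "/src/api/users.go")) // true
	fmt.Println(scopeAllows("relayfile:fs:write:/src/api/*", "write", "/src/lib/db.go"))    // false
}
```

A trailing "/*" is treated as a prefix wildcard; any other pattern must match the path exactly.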

Depends on

  • @relayauth/sdk types (for token format reference)
  • Go JWKS verification (standard library, no external deps)
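
For reference, the standard library really is enough for JWKS handling: a JWKS document is JSON, and an RS256 public key is rebuilt from the base64url-encoded n and e fields. A sketch — field subset and error handling simplified, and `publicKeyFor` is a hypothetical helper, not code from this PR:

```go
package main

import (
	"crypto/rsa"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"math/big"
)

// jwk is the subset of a JWKS key entry needed to rebuild an RS256 public key.
type jwk struct {
	Kid string `json:"kid"`
	N   string `json:"n"` // modulus, base64url without padding
	E   string `json:"e"` // public exponent, base64url without padding
}

type jwks struct {
	Keys []jwk `json:"keys"`
}

// publicKeyFor finds the key with the given kid in a raw JWKS document
// and decodes its modulus and exponent into an *rsa.PublicKey.
func publicKeyFor(doc []byte, kid string) (*rsa.PublicKey, error) {
	var set jwks
	if err := json.Unmarshal(doc, &set); err != nil {
		return nil, err
	}
	for _, k := range set.Keys {
		if k.Kid != kid {
			continue
		}
		n, err := base64.RawURLEncoding.DecodeString(k.N)
		if err != nil {
			return nil, err
		}
		e, err := base64.RawURLEncoding.DecodeString(k.E)
		if err != nil {
			return nil, err
		}
		return &rsa.PublicKey{
			N: new(big.Int).SetBytes(n),
			E: int(new(big.Int).SetBytes(e).Int64()),
		}, nil
	}
	return nil, fmt.Errorf("no key with kid %q", kid)
}

func main() {
	// Toy JWKS: n = 5, e = 65537 ("AQAB").
	doc := []byte(`{"keys":[{"kid":"k1","n":"BQ","e":"AQAB"}]}`)
	key, err := publicKeyFor(doc, "k1")
	if err != nil {
		panic(err)
	}
	fmt.Println(key.N, key.E) // 5 65537
}
```

Signature verification over the token's signing input then proceeds with crypto/rsa (rsa.VerifyPKCS1v15 for RS256), still with no external dependencies.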

Run

agent-relay run workflows/integrate-relayauth.ts

🤖 Generated with Claude Code



khaliqgant and others added 9 commits March 24, 2026 14:58
…E2E infrastructure

Extends the Go server with bulk seed/export endpoints, WebSocket file-change
notifications, and binary file support. Hardens mount sync with conflict
resolution and bidirectional sync. Adds E2E test script, workflow definitions,
design docs, and updated TypeScript SDK.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds relayfile-cli with login/mount/seed/export commands, Homebrew tap
formula, GitHub Actions release workflow, install script, and user-facing
docs (API reference, CLI design, guides). Updates .gitignore to exclude
compiled binaries and agent tool configs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds OIDC-based npm publish with provenance for the TypeScript SDK.
Includes npm update step per prpm trusted publishing guidance to avoid
outdated npm versions on runners.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds dedicated GitHub Actions workflows for CI (tests + typecheck),
npm publishing, and Go binary releases. Updates SDK package.json,
tsconfig, and README. Adds CI/CD design doc.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds sdk/relayfile package that downloads the correct platform binary
on install, so users can `npx relayfile` or `npm install -g relayfile`.
Updates publish-npm workflow to use OIDC (no NPM_TOKEN), adds
npm update step, and publishes both @relayfile/sdk and relayfile
packages with version sync.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Configures root package.json with workspaces pointing to sdk/relayfile-sdk
and sdk/relayfile. Makes CLI postinstall non-fatal so installs work before
a release binary exists.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Renames sdk/relayfile-sdk and sdk/relayfile to packages/. Updates all
workflow files, GitHub Actions, and package.json references to use the
new paths.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Provenance only works in CI with OIDC — removing from package.json so
local publishes work. CI workflows already pass --provenance explicitly.
Also normalizes repository URLs to avoid npm publish warnings.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Workflow to add relayauth JWT verification to the Go server and mount
daemon. Path-scoped access (relayfile:fs:write:/src/api/*), JWKS
caching, backwards compat with existing relayfile JWTs.

Depends on: @relayauth/sdk (for types reference), Go JWKS verification

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@devin-ai-integration (bot) left a comment


Devin Review found 2 potential issues.

View 7 additional findings in Devin Review.


Comment on lines +1428 to +1431
case "tar":
    if err := s.writeTarExport(w, visible); err != nil {
        writeError(w, http.StatusInternalServerError, "internal_error", err.Error(), correlationID)
    }


🔴 Tar export sends HTTP 200 before writing, making error responses impossible

writeTarExport calls w.WriteHeader(http.StatusOK) at internal/httpapi/server.go:2033 before writing tar entries. If tw.WriteHeader or tw.Write fails afterward (lines 2047-2051), the error is returned to handleExport at internal/httpapi/server.go:1429-1430, which calls writeError with status 500. However, since WriteHeader(200) was already called, Go's HTTP server silently ignores the second WriteHeader(500) call. The client receives a truncated/corrupt tar file with a 200 OK status, making the failure invisible to the caller.

Prompt for agents
In internal/httpapi/server.go, the handleExport function at line 1428-1431 calls writeTarExport which sends a 200 OK header before writing tar content. If tar writing fails after that, the writeError call at line 1430 has no effect because the status code was already sent. Fix this by having writeTarExport decode all content BEFORE calling WriteHeader, so that decoding errors can still produce proper error responses. For write errors that occur during streaming (after WriteHeader), either log the error and let the truncated response signal failure to the client, or buffer the entire tar in memory before writing (trading memory for correctness). Remove the writeError call at line 1430 since it cannot meaningfully change the response after WriteHeader was already called inside writeTarExport.


Comment on lines +3690 to +3691
case "", "utf-8", "utf8":
    return "utf-8", nil


🟡 normalizeEncoding returns "utf-8" for empty input, making omitempty on File.Encoding ineffective

normalizeEncoding("") returns "utf-8" at internal/relayfile/store.go:3690-3691. This value is stored in File.Encoding (which has json:"encoding,omitempty" at line 76). Because the returned value is non-empty, omitempty never triggers, so every file written via WriteFile or BulkWrite — including plain text files where the caller did not specify encoding — will have "encoding": "utf-8" in JSON responses. Meanwhile, provider-synced files explicitly set Encoding: "" at internal/relayfile/store.go:3259, so they omit the field. This creates an inconsistency: the same text file has different JSON shapes depending on whether it was written by an agent or synced from a provider. The fix is to return "" (empty string) instead of "utf-8" for the default case, letting omitempty omit the field for text files.

Suggested change

- case "", "utf-8", "utf8":
-     return "utf-8", nil
+ case "", "utf-8", "utf8":
+     return "", nil


@khaliqgant (Member, Author) left a comment


Review: relayauth integration (relayfile)

Verdict: Needs changes before merge

This PR bundles ~6 distinct features. Key issues:

Critical/High

  1. Misleading title — the PR contains no actual JWKS code; the relayauth integration exists only as a workflow definition. This should be split into focused PRs.
  2. Hardcoded local paths in workflow files.
  3. Duplicate release workflows — both trigger on v* tags, so tagged releases race each other.
  4. Potential deadlock in syncer mutex discipline.
  5. Silent event dropping with no logging.

Medium

  1. No bulk write size limits enforced (despite design doc claims).
  2. WebSocket token in query params (security consideration).
  3. Homebrew formula has placeholder SHA256s.
  4. README references old SDK path.

Missing Tests

  • CLI tool: 940 lines, zero test coverage.
  • No tests for bulk write limits, WebSocket reconnection.


@devin-ai-integration (bot) left a comment


Devin Review found 1 new potential issue.

View 10 additional findings in Devin Review.


Comment on lines +434 to +436
for i := range files {
    fmt.Fprintf(stdout, "Seeding %d/%d files...\n", i+1, len(files))
}


🟡 CLI seed prints all progress messages instantly before the bulk upload even starts

The runSeed function at lines 434-436 uses a for i := range files loop that simply iterates over the already-collected files slice and prints "Seeding X/Y files..." for each one. This loop completes instantly before the single bulk API call at line 439. The user sees all progress lines (e.g. "Seeding 1/100", "Seeding 2/100", ..., "Seeding 100/100") dumped at once, then a pause while the actual upload happens. The loop should either be removed (since all files are sent in one request) or the progress should be tied to actual batched uploads.

Suggested change

- for i := range files {
-     fmt.Fprintf(stdout, "Seeding %d/%d files...\n", i+1, len(files))
- }
+ fmt.Fprintf(stdout, "Seeding %d files...\n", len(files))

