feat: add native Linux desktop app (Rust + GTK4) #143
witstudios wants to merge 4 commits into main
Conversation
Rust + GTK4 + libadwaita desktop application for PPG agent orchestration.

- Sidebar with project/worktree/agent hierarchy and status badges
- Home dashboard with agent stats and cairo commit heatmap
- Command palette (Ctrl+Shift+P) for spawning agents
- REST API client (reqwest) for all ppg serve endpoints
- WebSocket manager (tokio-tungstenite) with exponential backoff
- Settings dialog with connection, terminal, and appearance config
- VTE terminal pane placeholder (ready for libvte integration)
- Setup view with prerequisite checks (ppg + tmux)
📝 Walkthrough

Adds a full iOS SwiftUI app (models, networking, WebSocket, state, views, tests), a Linux GTK4/LibAdwaita desktop app in Rust (models, UI, services, REST/WebSocket clients), extensive TypeScript server and CLI features (auth, TLS, daemon serve commands, core operation extractions), plus project config and .gitignore updates.
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
- Dashboard stats: Running/Completed/Failed/Killed (was Running/Idle/Exited/Total)
- Completed = exited with code 0, Failed = exited non-zero, Killed = gone
- Add recent commits fetch via `git log --format=%h|%s|%ar -n 10`
- Add missing POST /api/reset endpoint to PpgClient
Actionable comments posted: 9
Note: Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (17)
**.gitignore** (1)
Lines 17-18: ⚠️ Potential issue | 🟠 Major

Remove the global `*.xcodeproj` and `*.xcworkspace` patterns: they ignore tracked project files.

This repository has Xcode project files that are already version-controlled (`PPG CLI/PPG CLI.xcodeproj/project.pbxproj`, `PPG CLI/PPG CLI.xcodeproj/project.xcworkspace/contents.xcworkspacedata`). Adding these broad patterns will prevent future updates to the project configuration from being committed.

Keep only the existing user-data ignores (`DerivedData/`, `**/xcuserdata/`, `*.xcuserstate`, `*.profraw`), which are the appropriate ones to ignore.
**ios/PPGMobile/PPGMobile/Assets.xcassets/AppIcon.appiconset/Contents.json** (1)
Lines 3-7: ⚠️ Potential issue | 🟠 Major

Set the `filename` field to the actual app icon file.

The 1024×1024 app icon entry has `"filename": null`, which is invalid according to Apple's asset catalog format and will cause Xcode/App Store validation to fail. This is required for app submission.

Either provide the correct filename (e.g., `"filename": "AppIcon-1024.png"`) and add the corresponding PNG file to the directory, or remove the entry if it is not needed.

Suggested fix:

```diff
   "images": [
     {
+      "filename": "AppIcon-1024.png",
       "idiom": "universal",
       "platform": "ios",
       "size": "1024x1024"
     }
   ],
```
**linux/src/ui/command_palette.rs** (1)
Lines 194-202: ⚠️ Potential issue | 🟠 Major

RwLock guard held across `.await`.

Same issue as in `window.rs`: `client.read().unwrap()` acquires a lock that is held for the duration of the async `spawn()` call.

🐛 Proposed fix:

```diff
         services_spawn.runtime.spawn(async move {
+            let client = client.read().unwrap().clone();
             let req = SpawnRequest {
                 name: variant.clone(),
                 agent: Some(variant),
                 prompt: if prompt_text.is_empty() { None } else { Some(prompt_text) },
                 count: None,
             };
-            match client.read().unwrap().spawn(&req).await {
+            match client.spawn(&req).await {
```
**linux/src/ui/window.rs** (1)
Lines 239-258: ⚠️ Potential issue | 🟠 Major

RwLock guard held across `.await`; may block other threads.

At line 240, `client.read().unwrap()` acquires a `std::sync::RwLock` read guard, and `.status().await` is called on it. This holds the lock for the entire duration of the HTTP request, blocking any other code that needs to access or modify the client.

🐛 Proposed fix: clone the client before the async block, or restructure to release the lock before awaiting:

```diff
         services.runtime.spawn(async move {
-            match client.read().unwrap().status().await {
+            let client_clone = client.read().unwrap().clone();
+            match client_clone.status().await {
                 Ok(manifest) => {
```

This requires `PpgClient` to implement `Clone`. Alternatively, if `PpgClient` wraps a `reqwest::Client` (which is cheap to clone), extract the necessary data before the spawn.
**src/core/tls.ts** (1)
Lines 15-19: ⚠️ Potential issue | 🟠 Major

Path inconsistency: the hardcoded `'certs'` directory doesn't match `tlsDir()` from paths.ts.

This module stores certs in `.ppg/certs/` (line 16), but `src/lib/paths.ts` defines `tlsDir()` as `.ppg/serve/tls/` and provides helpers like `tlsServerKeyPath()` and `tlsServerCertPath()`. This mismatch will cause other code using the path helpers to look in the wrong location.

Proposed fix: use the path helpers from lib/paths.ts:

```diff
-import { ppgDir } from '../lib/paths.js';
+import { tlsDir, tlsServerKeyPath, tlsServerCertPath } from '../lib/paths.js';
 import { execaEnv } from '../lib/env.js';

 export interface TlsCredentials {
   key: string;
   cert: string;
   fingerprint: string;
 }

 export async function ensureTlsCerts(projectRoot: string): Promise<TlsCredentials> {
-  const certsDir = path.join(ppgDir(projectRoot), 'certs');
-  const keyPath = path.join(certsDir, 'server.key');
-  const certPath = path.join(certsDir, 'server.crt');
+  const certsDir = tlsDir(projectRoot);
+  const keyPath = tlsServerKeyPath(projectRoot);
+  const certPath = tlsServerCertPath(projectRoot);
```

Note: you'll also need to update the file names in `tlsServerKeyPath`/`tlsServerCertPath` in paths.ts to match (`server.key`/`server.crt` vs `server-key.pem`/`server-cert.pem`), or update this module to use the `.pem` naming convention.

Based on learnings: "Use path helpers from `lib/paths.ts` for all path computation: `ppgDir()`, `manifestPath()`, `resultFile()`, `worktreePath()`, etc."
**src/commands/serve.test.ts** (1)
Lines 291-306: ⚠️ Potential issue | 🟠 Major

Test expects the wrong function: `serveDaemonCommand` calls `startServer`, not `runServeDaemon`.

The implementation at line 200 of `src/commands/serve.ts` calls `startServer({ projectRoot, port: options.port, host: options.host, token: options.token })`, but the test mocks and expects `runServeDaemon` instead. This test will not catch regressions in the actual implementation. Additionally, while the test invocation omits the `token` parameter (which is acceptable since it's optional), the mock assertion should reflect the actual call signature, including the token when present.
**src/commands/serve.test.ts** (1)
Lines 35-45: ⚠️ Potential issue | 🟠 Major

The mock is missing `startServer` from `../server/index.js`, which is what `serveDaemonCommand` actually calls.

The implementation calls `startServer({ projectRoot, port, host, token })`, not `runServeDaemon`. Add a mock for the `../server/index.js` module and update the test expectations accordingly. Currently, the test checks for calls to `runServeDaemon`, which is never invoked.
**src/core/tls.ts** (1)
Lines 51-70: ⚠️ Potential issue | 🟠 Major

Add OpenSSL to the documented system requirements and improve error handling for the missing dependency.

OpenSSL is not listed in the requirements (README line 59), but the `generateSelfSignedCert` function at line 58 shells out to `openssl`. This is problematic on macOS (the target platform), where OpenSSL is not pre-installed; only LibreSSL is. Users without Homebrew-installed OpenSSL will encounter an uncaught execa error with no helpful context.

Two fixes needed:

- Document OpenSSL as a required dependency in the setup instructions, with macOS-specific guidance (e.g., `brew install openssl`).
- Wrap the `execa('openssl', ...)` call in a try-catch to provide a clear error message if the command is not found or fails.
**src/commands/serve.ts** (1)
Lines 132-136: ⚠️ Potential issue | 🟠 Major

Do not swallow `SIGTERM` delivery failures.

The blanket catch treats all `process.kill` errors as success. This can report a stopped daemon even when termination failed; downstream success output becomes misleading.

Suggested fix:

```diff
   try {
     process.kill(pid, 'SIGTERM');
-  } catch {
-    // Already dead
+  } catch (err) {
+    const code = (err as NodeJS.ErrnoException).code;
+    if (code !== 'ESRCH') {
+      throw err;
+    }
   }
```
**src/commands/serve.ts** (1)
Lines 70-75: ⚠️ Potential issue | 🟠 Major

Validate `port` and `host` explicitly instead of using non-null assertions.

`ServeStartOptions` defines both as optional, but `!` allows `undefined` to flow into command construction. Add runtime required checks and port-range validation.

Suggested fix:

```diff
-  const port = options.port!;
-  const host = options.host!;
+  if (options.port == null || options.host == null) {
+    throw new PpgError('Missing required serve options: --port and --host', 'INVALID_ARGS');
+  }
+  const { port, host } = options;
+
+  if (!Number.isInteger(port) || port < 1 || port > 65535) {
+    throw new PpgError(`Invalid port: "${port}"`, 'INVALID_ARGS');
+  }

   if (!VALID_HOST.test(host)) {
     throw new PpgError(`Invalid host: "${host}"`, 'INVALID_ARGS');
   }
```
**src/core/operations/kill.ts** (1)
Lines 1-2: ⚠️ Potential issue | 🟠 Major

Kill routing should refresh agent status before target selection.

Current decisions rely on cached manifest status (`running` vs terminal). That can be stale and lead to incorrect skip/kill behavior across all paths.

💡 Suggested fix:

```diff
-import { killAgent, killAgents } from '../agent.js';
+import { killAgent, killAgents, refreshAllAgentStatuses } from '../agent.js';

 export async function performKill(input: KillInput): Promise<KillResult> {
   const { projectRoot } = input;

   if (!input.agent && !input.worktree && !input.all) {
     throw new PpgError('One of --agent, --worktree, or --all is required', 'INVALID_ARGS');
   }

+  await updateManifest(projectRoot, (m) => refreshAllAgentStatuses(m, projectRoot));
+
   if (input.agent) {
     return killSingleAgent(projectRoot, input.agent, input);
   } else if (input.worktree) {
     return killWorktreeAgents(projectRoot, input.worktree, input);
   } else {
     return killAllAgents(projectRoot, input);
   }
 }
```

Based on learnings: detect agent status using `core/agent.ts:checkAgentStatus()` with signal-stack priority: result file → pane exists → pane dead → current command → running.

Also applies to: 33-47
**src/core/operations/merge.ts** (1)
Lines 45-49: ⚠️ Potential issue | 🟠 Major

`dryRun` is currently mutating manifest state.

The dry-run path returns early, but `updateManifest(...)` has already executed and can persist refreshed agent statuses. That makes dry-run non-read-only.

💡 Suggested fix:

```diff
 export async function performMerge(options: MergeOptions): Promise<MergeResult> {
   const { projectRoot, worktreeRef, force = false, dryRun = false } = options;
   const strategy = options.strategy ?? 'squash';
   const shouldCleanup = options.cleanup !== false;

-  // Load and refresh manifest
-  await requireManifest(projectRoot);
-  const manifest = await updateManifest(projectRoot, async (m) => {
-    return refreshAllAgentStatuses(m, projectRoot);
-  });
+  // Load manifest (read-only for dry-run), refresh only for real execution
+  const initialManifest = await requireManifest(projectRoot);
+  const manifest = dryRun
+    ? initialManifest
+    : await updateManifest(projectRoot, async (m) => refreshAllAgentStatuses(m, projectRoot));
```

Also applies to: 66-78
**src/core/operations/kill.ts** (1)
Lines 145-163: ⚠️ Potential issue | 🟠 Major

Cleanup self-protection is dropped, but delete/remove still proceeds.

`removeWorktreeCleanup(...)` ignores the `cleanupWorktree(...)` output, so callers can report/remove/delete worktrees even when cleanup is self-protected. That can desync manifest state from actual runtime state.

💡 Suggested fix:

```diff
 async function removeWorktreeCleanup(
   projectRoot: string,
   wtId: string,
   selfPaneId: string | null,
   paneMap?: Map<string, PaneInfo>,
-): Promise<void> {
+): Promise<{ selfProtected: boolean }> {
   const manifest = await readManifest(projectRoot);
   const wt = resolveWorktree(manifest, wtId);
-  if (!wt) return;
-  await cleanupWorktree(projectRoot, wt, { selfPaneId, paneMap });
+  if (!wt) return { selfProtected: false };
+  const result = await cleanupWorktree(projectRoot, wt, { selfPaneId, paneMap });
+  return { selfProtected: result.selfProtected };
 }
```

Then gate `removed`/`deleted` decisions in callers on `!cleanupResult.selfProtected`.

Also applies to: 225-247, 252-262
**ios/PPGMobile/PPGMobile/Views/Terminal/TerminalView.swift** (1)
Lines 117-133: ⚠️ Potential issue | 🟠 Major

Don't mark the subscription active before WebSocket attachment succeeds.

Line 119 sets `isSubscribed` before the WebSocket guard on line 132. If `wsManager` is nil, the model reports subscribed without any live subscriber being registered.

🩹 Suggested fix:

```diff
     func subscribe(agentId: String, appState: AppState) async {
         guard !isSubscribed else { return }
-        isSubscribed = true

         // Fetch initial log content via REST
         do {
             let logs = try await appState.client.fetchAgentLogs(agentId: agentId, lines: 200)
             output = logs.output
             trimOutput()
         } catch {
             output = "Failed to load logs: \(error.localizedDescription)"
             hasError = true
         }

         // Subscribe to live WebSocket updates
         guard let ws = appState.wsManager else { return }
+        isSubscribed = true
         subscriptionID = TerminalMessageRouter.shared.addSubscriber(wsManager: ws) { [weak self] message in
             guard message.type == "terminal:output",
                   message.agentId == agentId,
                   let data = message.data else { return }
             Task { @MainActor [weak self] in
```
**PPG CLI/PPG CLI/WebSocketManager.swift** (1)
Lines 12-13: ⚠️ Potential issue | 🟠 Major

Require Swift 6.1 or later for `nonisolated` type declarations.

These `nonisolated` modifiers on top-level types (enum, class) are only supported in Swift 6.1+ (SE-0449). Earlier toolchains (Swift 5.x and 6.0) will fail to compile this file.

If your project targets an earlier Swift version, remove the `nonisolated` modifiers.

Compatibility fallback:

```diff
-nonisolated enum WebSocketConnectionState: Equatable, Sendable {
+enum WebSocketConnectionState: Equatable, Sendable {
@@
-nonisolated enum WebSocketEvent: Sendable {
+enum WebSocketEvent: Sendable {
@@
-nonisolated enum WebSocketCommand: Sendable {
+enum WebSocketCommand: Sendable {
@@
-nonisolated class WebSocketManager: NSObject, @unchecked Sendable, URLSessionWebSocketDelegate {
+class WebSocketManager: NSObject, @unchecked Sendable, URLSessionWebSocketDelegate {
```

Also applies to: 28-29, 38-39, 63-63
**ios/PPGMobile/PPGMobile/Views/Settings/AddServerView.swift** (1)
Lines 75-77: ⚠️ Potential issue | 🟠 Major

Require a non-empty token before enabling "Add Server".

Line 76 does not validate `trimmedToken`, so line 107 can save/connect with an empty token and fail auth immediately.

Suggested fix:

```diff
     private var isValid: Bool {
-        !trimmedHost.isEmpty && parsedPort != nil
+        !trimmedHost.isEmpty && !trimmedToken.isEmpty && parsedPort != nil
     }
```

Also applies to: 101-108
**ios/PPGMobile/PPGMobileTests/ServerConnectionTests.swift** (1)
Lines 110-115: ⚠️ Potential issue | 🟠 Major

Fix the missing-`port` expectation in `rejectsMissingFields`.

Line 113 currently expects `nil` when `port` is omitted, but `ServerConnection.fromQRCode` defaults an omitted `port` to `7700` (see `ios/PPGMobile/PPGMobile/Models/ServerConnection.swift`, lines 115-117). This makes the test assert the wrong contract.

Suggested test update:

```diff
 @Test("fromQRCode returns nil when required fields are missing")
 func rejectsMissingFields() {
     #expect(ServerConnection.fromQRCode("ppg://connect?host=x&port=1") == nil) // no token
-    #expect(ServerConnection.fromQRCode("ppg://connect?host=x&token=t") == nil) // no port
+    let parsed = ServerConnection.fromQRCode("ppg://connect?host=x&token=t") // port defaults to 7700
+    #expect(parsed?.host == "x")
+    #expect(parsed?.port == 7700)
+    #expect(parsed?.token == "t")
     #expect(ServerConnection.fromQRCode("ppg://connect?port=1&token=t") == nil) // no host
 }
```
🟡 Minor comments (18)
**src/server/auth.ts** (1)
Lines 147-153: ⚠️ Potential issue | 🟡 Minor

Consider distinguishing permission errors from corruption.

Non-`ENOENT` errors (e.g., `EACCES` for permission denied, `EMFILE` for too many open files) are currently reported as `AuthCorruptError`, which may be misleading during debugging. Consider checking for permission errors specifically:

Suggested refinement:

```diff
   } catch (err) {
     if ((err as NodeJS.ErrnoException).code === 'ENOENT') {
       cache = { tokens: [] };
       return cache;
     }
+    // Could add: if (code === 'EACCES') throw new AuthAccessError(filePath);
     throw new AuthCorruptError(filePath);
   }
```
**linux/src/api/websocket.rs** (1)
Lines 103-107: ⚠️ Potential issue | 🟡 Minor

URL-encode the token query parameter.

If the token contains special characters (`&`, `=`, `?`, spaces, etc.), the URL will be malformed. Use URL encoding to ensure the token is safely included.

🔒 Proposed fix:

```diff
+use url::form_urlencoded;
+
 let url = if let Some(ref t) = token {
-    format!("{}?token={}", ws_url, t)
+    let encoded_token: String = form_urlencoded::byte_serialize(t.as_bytes()).collect();
+    format!("{}?token={}", ws_url, encoded_token)
 } else {
     ws_url.clone()
 };
```

Note: the `url` crate is already in `Cargo.toml`, so `form_urlencoded` is available.
**linux/src/util/shell.rs** (1)
Lines 15-40: ⚠️ Potential issue | 🟡 Minor

Remove `tmux_attach_command` or restructure it for proper usage.

This function is unused throughout the codebase. Additionally, if it were called via `std::process::Command::args(...)`, the semicolons would be treated as literal arguments rather than tmux command separators; tmux only interprets `;` as a separator in shell contexts.

The codebase already has `tmux_attach_shell_command` (lines 43-55), which correctly constructs the command as a shell string with properly escaped separators (`\;`). Consider removing the `tmux_attach_command` variant, or clarify its intended use case.
**linux/src/main.rs** (1)
Lines 24-35: ⚠️ Potential issue | 🟡 Minor

Missing argument values are silently ignored.

If a user runs `ppg-desktop --url` without providing a URL, the flag is silently skipped. This could lead to confusion when the app connects to the default server instead of the intended one.

Proposed fix to warn on missing values:

```diff
             "--url" | "-u" => {
                 if i + 1 < args.len() {
                     server_url = Some(args[i + 1].clone());
                     i += 1;
+                } else {
+                    eprintln!("Warning: --url requires a value");
                 }
             }
             "--token" | "-t" => {
                 if i + 1 < args.len() {
                     token = Some(args[i + 1].clone());
                     i += 1;
+                } else {
+                    eprintln!("Warning: --token requires a value");
                 }
             }
```
Verify each finding against the current code and only fix it if needed. In `@linux/src/main.rs` around lines 24 - 35, The argument parsing silently ignores missing values for the "--url"/"-u" and "--token"/"-t" flags; update the parsing block that sets server_url and token so that if i + 1 >= args.len() or the next arg starts with '-' you emit an error (e.g. eprintln! or process::exit with a non-zero code) and/or show usage, rather than silently skipping; apply this check for both the "--url"/"-u" and "--token"/"-t" branches and ensure you do not consume a following flag as a value when validating the next arg.

linux/src/ui/setup_view.rs (1)
161-175: ⚠️ Potential issue | 🟡 Minor

Icon CSS classes accumulate on repeated checks.
When `update_check_status` is called multiple times (e.g., via the Retry button), the icon accumulates both `success` and `error` CSS classes because only `add_css_class` is called without removing the opposite class first. The status label correctly removes the opposite class, but the icon does not.

🐛 Proposed fix
```diff
 fn update_check_status(icon: &gtk::Image, status: &gtk::Label, found: bool) {
     if found {
         icon.set_icon_name(Some("emblem-ok-symbolic"));
+        icon.remove_css_class("error");
         icon.add_css_class("success");
         status.set_text("Found");
         status.add_css_class("status-running");
         status.remove_css_class("status-failed");
     } else {
         icon.set_icon_name(Some("dialog-error-symbolic"));
+        icon.remove_css_class("success");
         icon.add_css_class("error");
         status.set_text("Not found");
         status.add_css_class("status-failed");
         status.remove_css_class("status-running");
     }
 }
```
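The underlying pattern, keeping a pair of mutually exclusive state classes from accumulating, is language-agnostic. A small TypeScript sketch over a plain `Set` (names are illustrative, not the GTK API):

```typescript
// Toggle between mutually exclusive "success"/"error" classes: always
// remove the opposite class before adding the current one, so repeated
// calls never leave both applied at once.
function setCheckStatus(classes: Set<string>, found: boolean): void {
  classes.delete(found ? "error" : "success");
  classes.add(found ? "success" : "error");
}
```

However many times the check is retried with alternating results, the set ends up holding exactly one of the two classes.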
Verify each finding against the current code and only fix it if needed. In `@linux/src/ui/setup_view.rs` around lines 161 - 175, In update_check_status ensure the icon's previous state class is removed before adding the new one: when found is true call icon.remove_css_class("error") then icon.add_css_class("success") (and set_icon_name as now), and when found is false call icon.remove_css_class("success") then icon.add_css_class("error"); mirror the status label logic so icon.set_icon_name, icon.add_css_class, and icon.remove_css_class are updated accordingly to avoid accumulating both classes.

linux/src/state.rs (1)
82-86: ⚠️ Potential issue | 🟡 Minor

Silent error swallowing on settings persistence.
`let _ = inner.settings.save()` discards the save result, so users won't know if their settings failed to persist. Consider logging the error or returning a `Result` so callers can surface feedback.

🛠️ Proposed fix to log save failures
```diff
 pub fn update_settings<F: FnOnce(&mut AppSettings)>(&self, f: F) {
     let mut inner = self.inner.write().unwrap();
     f(&mut inner.settings);
-    let _ = inner.settings.save();
+    if let Err(e) = inner.settings.save() {
+        eprintln!("Failed to save settings: {e}");
+    }
 }
```
Verify each finding against the current code and only fix it if needed. In `@linux/src/state.rs` around lines 82 - 86, The update_settings method currently swallows errors from inner.settings.save() (in update_settings), so change it to propagate or surface failures: update update_settings(&self, f: F) to return a Result<(), E> (or a suitable crate error type) and after applying f(&mut inner.settings) call inner.settings.save()? and return its Result (mapping types if needed); alternatively, if you prefer logging, call the project logger/tracing to log the Err from inner.settings.save() instead of discarding it. Ensure you reference update_settings, self.inner, and inner.settings.save so callers can see persistence failures.

src/core/operations/spawn.ts (1)
60-60: ⚠️ Potential issue | 🟡 Minor

Validate `count` before side effects.

`count <= 0` currently allows creating/adopting a worktree and returning success with no spawned agents. Fail fast with `INVALID_ARGS`.

As per coding guidelines: Use `PpgError` hierarchy with typed error codes: `TMUX_NOT_FOUND`, `NOT_GIT_REPO`, `NOT_INITIALIZED`, `MANIFEST_LOCK`, `WORKTREE_NOT_FOUND`, `AGENT_NOT_FOUND`, `MERGE_FAILED`, `INVALID_ARGS`, `AGENTS_RUNNING`, `WAIT_TIMEOUT`, `AGENTS_FAILED`, `NO_SESSION_ID`.

🔧 Proposed fix
```diff
 const agentConfig = resolveAgentConfig(config, options.agent);
 const count = options.count ?? 1;
+if (!Number.isInteger(count) || count < 1) {
+  throw new PpgError('--count must be a positive integer', 'INVALID_ARGS');
+}
```
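As a standalone illustration of the guard, here is a hedged TypeScript sketch; `PpgError` below is a stand-in class for illustration, not the project's actual error hierarchy:

```typescript
// Stand-in for the project's typed error hierarchy (illustrative only).
class PpgError extends Error {
  constructor(message: string, public code: string) {
    super(message);
  }
}

// Fail fast before any side effects: reject zero, negative, and
// non-integer counts with a typed INVALID_ARGS error.
function validateCount(count: number): number {
  if (!Number.isInteger(count) || count < 1) {
    throw new PpgError("--count must be a positive integer", "INVALID_ARGS");
  }
  return count;
}
```

Rejecting the value up front means no worktree is created or adopted for a request that could never spawn an agent.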
Verify each finding against the current code and only fix it if needed. In `@src/core/operations/spawn.ts` at line 60, Validate the computed count immediately after the assignment "const count = options.count ?? 1;" and before any side effects (creating/adopting a worktree or starting agents); if count is not a positive integer (> 0) throw a PpgError with the typed error code INVALID_ARGS and a concise message (e.g. "count must be a positive integer") so the function fails fast and avoids creating/adopting worktrees when no agents should be spawned.

src/core/prompt.ts (1)
19-23: ⚠️ Potential issue | 🟡 Minor

Don't swallow all prompt-directory read errors.
Returning `[]` for every `readdir` failure makes real I/O/permission issues indistinguishable from "directory not found". Only ignore `ENOENT`; rethrow others.

🔧 Proposed fix
```diff
 async function readMdNames(dir: string): Promise<string[]> {
   try {
     const files = await fs.readdir(dir);
     return files.filter((f) => f.endsWith('.md')).map((f) => f.replace(/\.md$/, '')).sort();
-  } catch {
-    return [];
+  } catch (err) {
+    if ((err as NodeJS.ErrnoException).code === 'ENOENT') {
+      return [];
+    }
+    throw err;
+  }
 }
```
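A self-contained TypeScript sketch of the same ENOENT-only handling, assuming Node's `fs.promises` API (the function mirrors the diff but is reproduced here for illustration):

```typescript
import { promises as fs } from "node:fs";

// List .md basenames in a directory; treat a missing directory as an
// empty result, but let every other failure (permissions, I/O) propagate.
async function readMdNames(dir: string): Promise<string[]> {
  try {
    const files = await fs.readdir(dir);
    return files
      .filter((f) => f.endsWith(".md"))
      .map((f) => f.replace(/\.md$/, ""))
      .sort();
  } catch (err) {
    if ((err as NodeJS.ErrnoException).code === "ENOENT") {
      return []; // directory not found is a normal, empty result
    }
    throw err; // real I/O or permission problems must surface
  }
}
```

Calling it on a nonexistent directory resolves to `[]`, while a permission error on an existing directory still rejects and reaches the caller.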
Verify each finding against the current code and only fix it if needed. In `@src/core/prompt.ts` around lines 19 - 23, The current try/catch around fs.readdir swallows all errors; change the catch to capture the error (e.g., catch (err)) and only return [] when err.code === 'ENOENT', otherwise rethrow the error so I/O/permission issues surface; update the block around the fs.readdir call (the try that returns files.filter(...).sort()) to check err.code and rethrow for non-ENOENT cases.

src/lib/paths.ts (1)
114-116: ⚠️ Potential issue | 🟡 Minor

Consolidate duplicate path functions: `serveJsonPath` and `serveStatePath` both return the same path.

Both functions return `path.join(ppgDir(projectRoot), 'serve.json')` but are used in different parts of the codebase. Remove one function and update all callers (`serveJsonPath` in src/core/serve.ts and src/commands/serve.ts, and `serveStatePath` in src/server/index.ts) to use a single consolidated function.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/paths.ts` around lines 114 - 116, There are two duplicate helpers (serveJsonPath and serveStatePath) returning the same path; remove the duplication by keeping serveJsonPath as the single exported helper, delete serveStatePath, and update all places that import or call serveStatePath to import/call serveJsonPath instead (specifically the modules that currently use serveJsonPath and the module that used serveStatePath). Ensure the remaining export signature of serveJsonPath is unchanged (export function serveJsonPath(projectRoot: string): string) and update any import statements to reference serveJsonPath, then run type checks to confirm no leftover imports.

src/core/serve.ts (1)
67-71: ⚠️ Potential issue | 🟡 Minor

Strictly validate PID file contents before accepting them as a process ID.

`parseInt()` accepts numeric prefixes (e.g. `123abc`), so a corrupted `serve.pid` can be treated as valid and point to an unrelated process.

Suggested fix
```diff
-  const pid = parseInt(raw, 10);
-  if (isNaN(pid)) {
+  const text = raw.trim();
+  if (!/^\d+$/.test(text)) {
     await cleanupPidFile(pidPath);
     return null;
   }
+  const pid = Number(text);
```
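The strict-digits check can be demonstrated on its own. A TypeScript sketch (the helper name is hypothetical):

```typescript
// Accept only an all-digits PID file payload; parseInt's prefix parsing
// ("123abc" → 123) would otherwise let corrupted files through.
function parsePidStrict(raw: string): number | null {
  const text = raw.trim();
  return /^\d+$/.test(text) ? Number(text) : null;
}

console.log(parsePidStrict("4242\n")); // → 4242
console.log(parsePidStrict("123abc")); // → null (parseInt would return 123)
```

A trailing newline (the common case for PID files) is tolerated by the `trim()`, while any non-digit content rejects the whole value.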
Verify each finding against the current code and only fix it if needed. In `@src/core/serve.ts` around lines 67 - 71, The PID parsing accepts strings with numeric prefixes (e.g. "123abc"); change the validation so you first trim the raw content and ensure it matches a full digits-only pattern (e.g. /^\d+$/) before converting; if the trimmed raw fails that test, call cleanupPidFile(pidPath) and return null (same behavior as the current error path). Update the logic around raw, pid, and parseInt in serve.ts to use this stricter check.

ios/PPGMobile/PPGMobile/Views/Terminal/TerminalInputBar.swift (1)
17-20: ⚠️ Potential issue | 🟡 Minor

Add an explicit accessibility label to the icon-only send button.
VoiceOver will otherwise announce the SF Symbol name, which is less clear than an action label.
♿ Suggested accessibility tweak
```diff
 Button(action: onSend) {
     Image(systemName: "arrow.up.circle.fill")
         .font(.title2)
 }
+.accessibilityLabel("Send to terminal")
+.accessibilityHint("Sends the current input text")
 .disabled(text.isEmpty)
```
Verify each finding against the current code and only fix it if needed. In `@ios/PPGMobile/PPGMobile/Views/Terminal/TerminalInputBar.swift` around lines 17 - 20, The send button currently uses an icon-only Image(systemName: "arrow.up.circle.fill") inside Button(action: onSend) which causes VoiceOver to read the SF Symbol name; add an explicit accessibility label (e.g., "Send" or a localized equivalent) to the Button or the Image so screen readers announce the action clearly (refer to Button(action: onSend) and Image(systemName: "arrow.up.circle.fill") in TerminalInputBar.swift).

ios/PPGMobile/PPGMobile/Views/Spawn/SpawnView.swift (1)
20-27: ⚠️ Potential issue | 🟡 Minor

Trim newlines in validation and payload sanitization.
Line 26 and line 156 only trim `.whitespaces`, so a newline-only prompt can pass `isFormValid` and be submitted.

Suggested fix
```diff
 private var sanitizedName: String {
-    name.trimmingCharacters(in: .whitespaces)
+    name.trimmingCharacters(in: .whitespacesAndNewlines)
 }

 private var isFormValid: Bool {
     let hasName = !sanitizedName.isEmpty && sanitizedName.wholeMatch(of: Self.namePattern) != nil
-    let hasPrompt = !prompt.trimmingCharacters(in: .whitespaces).isEmpty
+    let hasPrompt = !prompt.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty
     return hasName && hasPrompt
 }
 ...
-let trimmedPrompt = prompt.trimmingCharacters(in: .whitespaces)
+let trimmedPrompt = prompt.trimmingCharacters(in: .whitespacesAndNewlines)
```

Also applies to: 156-157
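For contrast, JavaScript's `String.prototype.trim()` already strips newlines, which is the behavior the Swift fix restores with `.whitespacesAndNewlines`. A TypeScript sketch of the intended validation (the name pattern and function shape are illustrative, not the app's code):

```typescript
// Reject whitespace- or newline-only input before submitting a spawn
// request. trim() removes spaces, tabs, and newlines alike.
function isFormValid(name: string, prompt: string): boolean {
  const namePattern = /^[A-Za-z0-9_-]+$/; // illustrative agent-name pattern
  const trimmedName = name.trim();
  const trimmedPrompt = prompt.trim();
  return namePattern.test(trimmedName) && trimmedPrompt.length > 0;
}
```

A prompt consisting solely of `"\n"` trims to the empty string and fails validation, which is exactly the case the Swift `.whitespaces`-only trim lets through.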
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ios/PPGMobile/PPGMobile/Views/Spawn/SpawnView.swift` around lines 20 - 27, The validation and sanitization currently only trim .whitespaces which lets newline-only values pass; update the sanitization and validation to use .whitespacesAndNewlines so sanitizedName (used by isFormValid/namePattern) and the prompt trimming (the prompt payload sanitization around the code referenced at lines ~156-157) both remove newlines as well as spaces; locate the properties sanitizedName and isFormValid and the prompt payload sanitization and replace .trimmingCharacters(in: .whitespaces) with .trimmingCharacters(in: .whitespacesAndNewlines) to ensure newline-only input is rejected and not submitted.

ios/PPGMobile/PPGMobile/Views/Settings/SettingsView.swift (1)
9-10: ⚠️ Potential issue | 🟡 Minor

Prevent stale test tasks from clearing newer `testResult` state.

If the user starts a second test before the first task's delayed reset finishes, the older task can clear the newer result.
Suggested fix (run-id guard)
```diff
 @State private var testResult: TestResult?
 @State private var showQRError = false
+@State private var testRunID = UUID()
 ...
 private func testConnection() {
+    let runID = UUID()
+    testRunID = runID
     testResult = .testing
     Task { @MainActor in
         do {
             _ = try await appState.client.fetchStatus()
             testResult = .success
         } catch {
             testResult = .failure(error.localizedDescription)
         }
         try? await Task.sleep(for: .seconds(3))
-        if !Task.isCancelled {
+        if !Task.isCancelled && testRunID == runID {
             testResult = nil
         }
     }
 }
```

Also applies to: 217-230
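The run-id guard generalizes beyond SwiftUI. A TypeScript sketch with illustrative names; only the latest run may clear the shared status:

```typescript
// Run-id guard: each run captures an id at start; the delayed reset
// fires only if no newer run has started in the meantime.
let currentRunId = 0;
let status: string | null = null;

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function runConnectionTest(outcome: string, resetDelayMs: number): Promise<void> {
  const runId = ++currentRunId;
  status = outcome; // in the real view: .testing, then .success/.failure
  await sleep(resetDelayMs);
  if (runId === currentRunId) {
    status = null; // safe: this is still the latest run
  }
}
```

Starting a second test partway through the first one's reset window leaves the second result intact; without the `runId === currentRunId` check, the first task's delayed reset would wipe it.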
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ios/PPGMobile/PPGMobile/Views/Settings/SettingsView.swift` around lines 9 - 10, The delayed reset task clears testResult even if a newer test has started; introduce a run-id guard to avoid stale clears: add a @State private var currentTestRunId: UUID (or Int) that you update whenever starting a new test, capture that id in the async/Task closure that performs the delayed reset, and before mutating testResult or showQRError check that the captured id matches currentTestRunId; apply the same guard to the other delayed-reset location (the code around the second occurrence that manipulates testResult/showQRError) so only the most recent test run can clear its own state.

ios/PPGMobile/PPGMobile/Models/ServerConnection.swift (1)
126-132: ⚠️ Potential issue | 🟡 Minor

Consider: `0.0.0.0` might not be a usable client address.

The server binds to `0.0.0.0` to listen on all interfaces, but clients can't actually connect to `0.0.0.0`. If the QR code contains `0.0.0.0`, the connection will likely fail. Consider warning the user or prompting for the actual server IP.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ios/PPGMobile/PPGMobile/Models/ServerConnection.swift` around lines 126 - 132, The QR-parsed ServerConnection currently treats host == "0.0.0.0" as "Local Mac" but still uses "0.0.0.0" as the connection host, which is not a routable client address; update the logic that constructs ServerConnection (the return block creating ServerConnection(name:host:port:token:caCertificate:)) to detect "0.0.0.0" and either prompt the user to enter or select a real server IP (or resolve a usable LAN IP/hostname) and use that value for the host field, and surface a warning/alert if no valid reachable address is provided so the app doesn't attempt to connect to 0.0.0.0 silently.

ios/PPGMobile/PPGMobile/Models/ServerConnection.swift (1)
83-97: ⚠️ Potential issue | 🟡 Minor

Fallback QR string loses connection data.

If `URLComponents.string` returns `nil` (which shouldn't happen with valid data, but could with edge cases), the fallback `"ppg://connect"` produces a non-functional QR code with no host/token. Consider making `qrCodeString` return `Optional<String>` or using a fatalError for this unexpected case.

🛡️ Option: make return type optional
```diff
-var qrCodeString: String {
+var qrCodeString: String? {
     var components = URLComponents()
     // ...
-    return components.string ?? "ppg://connect"
+    return components.string
 }
```
Verify each finding against the current code and only fix it if needed. In `@ios/PPGMobile/PPGMobile/Models/ServerConnection.swift` around lines 83 - 97, Change qrCodeString to return an optional String instead of using the hardcoded fallback: update the computed property signature from var qrCodeString: String to var qrCodeString: String? and return components.string directly (removing the "ppg://connect" fallback). Ensure callers of qrCodeString (e.g., UI or QR generator) are updated to handle the optional (display an error, skip generation, or surface a message). Keep the construction logic with URLComponents, host, port, token, and optional caCertificate unchanged.

ios/PPGMobile/PPGMobile/Models/DashboardModels.swift (1)
91-106: ⚠️ Potential issue | 🟡 Minor

Avoid repeated `ISO8601DateFormatter` instantiation.

`ISO8601DateFormatter()` is relatively expensive to create. These computed properties instantiate a new formatter on every access, which can add up in list views rendering many agents/worktrees.

⚡ Use a shared static formatter
```diff
+private let iso8601Formatter: ISO8601DateFormatter = {
+    let formatter = ISO8601DateFormatter()
+    return formatter
+}()
+
 extension AgentEntry {
     var startDate: Date? {
-        ISO8601DateFormatter().date(from: startedAt)
+        iso8601Formatter.date(from: startedAt)
     }
 }

 extension WorktreeEntry {
     var createdDate: Date? {
-        ISO8601DateFormatter().date(from: createdAt)
+        iso8601Formatter.date(from: createdAt)
     }

     var mergedDate: Date? {
-        mergedAt.flatMap { ISO8601DateFormatter().date(from: $0) }
+        mergedAt.flatMap { iso8601Formatter.date(from: $0) }
     }
 }
```
Verify each finding against the current code and only fix it if needed. In `@ios/PPGMobile/PPGMobile/Models/DashboardModels.swift` around lines 91 - 106, AgentEntry.startDate and WorktreeEntry.createdDate/mergedDate currently instantiate ISO8601DateFormatter() on every access; replace those per-access creations with a shared static ISO8601DateFormatter instance (e.g., add a single static let isoFormatter = ISO8601DateFormatter() in a common scope such as an extension or a private file-level helper) and use that shared formatter inside the computed properties startDate, createdDate, and mergedDate to avoid repeated expensive allocations.

ios/PPGMobile/PPGMobile/Views/Dashboard/WorktreeDetailView.swift (1)
26-31: ⚠️ Potential issue | 🟡 Minor

Silent error swallowing may confuse users.

The `try?` on line 28 silently discards errors from `mergeWorktree`. If the merge fails, the user sees no feedback; the manifest simply refreshes showing unchanged state. Consider surfacing errors via an alert or the `appState.error` property so users understand when an action fails.

The same pattern appears in the "Kill All Agents" dialog (line 39), the agent kill/restart handlers (lines 109, 113), and the "Create Pull Request" action (line 151).
🛡️ Example: surface errors to user
Button("Squash Merge") { Task { - try? await appState.client.mergeWorktree(worktreeId: worktreeId) - await appState.manifestStore.refresh() + do { + try await appState.client.mergeWorktree(worktreeId: worktreeId) + } catch { + // Surface error to user via appState or local `@State` + } + await appState.manifestStore.refresh() } }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ios/PPGMobile/PPGMobile/Views/Dashboard/WorktreeDetailView.swift` around lines 26 - 31, The Button action currently swallows errors with try? when calling mergeWorktree, which hides failures from users; change this to a do-catch inside the Task, await try appState.client.mergeWorktree(worktreeId: worktreeId) and in catch set a user-visible error (e.g. assign the caught Error to appState.error or trigger an alert) before/after calling await appState.manifestStore.refresh(); apply the same pattern to the other handlers mentioned (the "Kill All Agents" dialog, killAgent/restartAgent handlers, and createPullRequest action) so any thrown error from mergeWorktree, killAgent, restartAgent, or createPullRequest is surfaced instead of being ignored.

ios/PPGMobile/PPGMobile/Views/Settings/QRScannerView.swift (1)
130-132: ⚠️ Potential issue | 🟡 Minor

Silent failure when camera is unavailable leaves the user with no feedback.

If the camera device is unavailable (simulator, device with broken camera, or `AVCaptureDeviceInput` initialization fails), the view returns without any visual indication to the user. They'll see the scan overlay on a blank background with no explanation.

Consider setting an error state or showing a fallback UI similar to `cameraPermissionView`:

Suggested approach
```diff
+@State private var cameraUnavailable = false
```

In `QRCameraView`, propagate failure via a binding or callback, or check device availability in `QRScannerView` before rendering `QRCameraView`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ios/PPGMobile/PPGMobile/Views/Settings/QRScannerView.swift` around lines 130 - 132, QRScannerView currently silently returns the view when AVCaptureDevice.default(for: .video) or AVCaptureDeviceInput(device:) fails (the guard around AVCaptureDevice.default and AVCaptureDeviceInput), leaving users with an empty scanner UI; update QRScannerView to detect this failure and surface an error/fallback UI (reuse cameraPermissionView or a new error view) by propagating the failure state to QRCameraView via a `@Binding` or callback or by checking device availability before instantiating QRCameraView, and ensure the code that currently uses AVCaptureDevice.default(for: .video) and AVCaptureDeviceInput(device:) sets an explicit error state so the UI shows an explanatory message and fallback controls when the camera is unavailable.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `linux/Cargo.lock` is excluded by `!**/*.lock`
- `package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (119)
- .gitignore
- PPG CLI/PPG CLI/WebSocketManager.swift
- PPG CLI/PPG CLITests/WebSocketManagerTests.swift
- ios/PPGMobile/PPGMobile/App/ContentView.swift
- ios/PPGMobile/PPGMobile/App/PPGMobileApp.swift
- ios/PPGMobile/PPGMobile/Assets.xcassets/AppIcon.appiconset/Contents.json
- ios/PPGMobile/PPGMobile/Assets.xcassets/Contents.json
- ios/PPGMobile/PPGMobile/Models/AgentVariant.swift
- ios/PPGMobile/PPGMobile/Models/DashboardModels.swift
- ios/PPGMobile/PPGMobile/Models/Manifest.swift
- ios/PPGMobile/PPGMobile/Models/ServerConnection.swift
- ios/PPGMobile/PPGMobile/Models/ServerConnectionTests.swift
- ios/PPGMobile/PPGMobile/Networking/PPGClient.swift
- ios/PPGMobile/PPGMobile/Networking/TokenStorage.swift
- ios/PPGMobile/PPGMobile/Networking/WebSocketManager.swift
- ios/PPGMobile/PPGMobile/State/AppState.swift
- ios/PPGMobile/PPGMobile/State/ManifestStore.swift
- ios/PPGMobile/PPGMobile/Views/Dashboard/AgentRow.swift
- ios/PPGMobile/PPGMobile/Views/Dashboard/DashboardView.swift
- ios/PPGMobile/PPGMobile/Views/Dashboard/WorktreeCard.swift
- ios/PPGMobile/PPGMobile/Views/Dashboard/WorktreeDetailView.swift
- ios/PPGMobile/PPGMobile/Views/Settings/AddServerView.swift
- ios/PPGMobile/PPGMobile/Views/Settings/QRScannerView.swift
- ios/PPGMobile/PPGMobile/Views/Settings/SettingsView.swift
- ios/PPGMobile/PPGMobile/Views/Spawn/SpawnView.swift
- ios/PPGMobile/PPGMobile/Views/Terminal/TerminalInputBar.swift
- ios/PPGMobile/PPGMobile/Views/Terminal/TerminalView.swift
- ios/PPGMobile/PPGMobileTests/AgentVariantTests.swift
- ios/PPGMobile/PPGMobileTests/ManifestTests.swift
- ios/PPGMobile/PPGMobileTests/ServerConnectionTests.swift
- ios/PPGMobile/project.yml
- linux/Cargo.toml
- linux/README.md
- linux/src/api/client.rs
- linux/src/api/mod.rs
- linux/src/api/websocket.rs
- linux/src/app.rs
- linux/src/main.rs
- linux/src/models/agent_variant.rs
- linux/src/models/manifest.rs
- linux/src/models/mod.rs
- linux/src/models/settings.rs
- linux/src/state.rs
- linux/src/style.css
- linux/src/ui/command_palette.rs
- linux/src/ui/home_dashboard.rs
- linux/src/ui/mod.rs
- linux/src/ui/pane_grid.rs
- linux/src/ui/settings_dialog.rs
- linux/src/ui/setup_view.rs
- linux/src/ui/sidebar.rs
- linux/src/ui/terminal_pane.rs
- linux/src/ui/window.rs
- linux/src/ui/worktree_detail.rs
- linux/src/util/mod.rs
- linux/src/util/shell.rs
- package.json
- src/cli.ts
- src/commands/init.ts
- src/commands/kill.ts
- src/commands/list.ts
- src/commands/merge.ts
- src/commands/pr.ts
- src/commands/restart.ts
- src/commands/serve.test.ts
- src/commands/serve.ts
- src/commands/spawn.test.ts
- src/commands/spawn.ts
- src/commands/status.ts
- src/core/agent.ts
- src/core/kill.test.ts
- src/core/kill.ts
- src/core/lifecycle.ts
- src/core/merge.test.ts
- src/core/merge.ts
- src/core/operations/kill.test.ts
- src/core/operations/kill.ts
- src/core/operations/merge.test.ts
- src/core/operations/merge.ts
- src/core/operations/restart.test.ts
- src/core/operations/restart.ts
- src/core/operations/spawn.test.ts
- src/core/operations/spawn.ts
- src/core/pr.ts
- src/core/prompt.test.ts
- src/core/prompt.ts
- src/core/serve.test.ts
- src/core/serve.ts
- src/core/spawn.ts
- src/core/tls.ts
- src/lib/errors.ts
- src/lib/paths.test.ts
- src/lib/paths.ts
- src/server/auth.test.ts
- src/server/auth.ts
- src/server/error-handler.test.ts
- src/server/error-handler.ts
- src/server/index.test.ts
- src/server/index.ts
- src/server/routes/agents.test.ts
- src/server/routes/agents.ts
- src/server/routes/config.test.ts
- src/server/routes/config.ts
- src/server/routes/spawn.test.ts
- src/server/routes/spawn.ts
- src/server/routes/status.test.ts
- src/server/routes/status.ts
- src/server/routes/worktrees.test.ts
- src/server/routes/worktrees.ts
- src/server/tls.test.ts
- src/server/tls.ts
- src/server/ws/events.ts
- src/server/ws/handler.test.ts
- src/server/ws/handler.ts
- src/server/ws/terminal.test.ts
- src/server/ws/terminal.ts
- src/server/ws/watcher.test.ts
- src/server/ws/watcher.ts
- src/test-fixtures.ts
💤 Files with no reviewable changes (1)
- src/commands/init.ts
```swift
XCTAssertEqual(conn?.host, "192.168.1.10")
XCTAssertEqual(conn?.port, 7700)
XCTAssertEqual(conn?.token, "abc123")
XCTAssertNil(conn?.ca)
```
🧩 Analysis chain
🏁 Script executed:
```shell
find . -name "ServerConnection*.swift" -type f
```

Repository: 2witstudios/ppg-cli
Length of output: 238
🏁 Script executed:
```shell
# Get line counts first
wc -l "./ios/PPGMobile/PPGMobile/Models/ServerConnection.swift" "./ios/PPGMobile/PPGMobile/Models/ServerConnectionTests.swift"
```

Repository: 2witstudios/ppg-cli
Length of output: 203
🏁 Script executed:
```shell
# Read the ServerConnection.swift file to check property definitions
cat -n "./ios/PPGMobile/PPGMobile/Models/ServerConnection.swift"
```

Repository: 2witstudios/ppg-cli
Length of output: 6366
🏁 Script executed:
```shell
# Read the ServerConnectionTests.swift file to check the test assertions
cat -n "./ios/PPGMobile/PPGMobile/Models/ServerConnectionTests.swift"
```

Repository: 2witstudios/ppg-cli
Length of output: 6321
Property name mismatch: ca vs caCertificate.
The tests access `conn?.ca`, but `ServerConnection` defines the property as `caCertificate`. Additionally, the initializer calls use the `ca:` parameter label instead of `caCertificate:`. These mismatches will cause compile errors.
🐛 Fix property and parameter names
Property access (lines 16, 28, 99):

```diff
-XCTAssertNil(conn?.ca)
+XCTAssertNil(conn?.caCertificate)

-XCTAssertEqual(conn?.ca, "dGVzdA==")
+XCTAssertEqual(conn?.caCertificate, "dGVzdA==")
```

Initializer parameters (lines 124, 129):

```diff
-let conn = ServerConnection(host: "myhost", port: 7700, token: "abc", ca: "dGVzdA==")
+let conn = ServerConnection(host: "myhost", port: 7700, token: "abc", caCertificate: "dGVzdA==")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ios/PPGMobile/PPGMobile/Models/ServerConnectionTests.swift` at line 16, Tests
reference a non-existent property/parameter name; update all uses of the old
name to match ServerConnection's API: replace accesses to conn?.ca with
conn?.caCertificate and update any initializer calls using the ca: parameter to
use caCertificate: instead so they match the ServerConnection property and
initializer signatures (e.g., in ServerConnection tests and places that call
ServerConnection(..., ca: ...) change to ServerConnection(..., caCertificate:
...)).
```swift
let conn = ServerConnection(host: "myhost", port: 7700, token: "abc")
XCTAssertEqual(conn.baseURL?.absoluteString, "http://myhost:7700")
}

func testBaseURLUsesHTTPSWithCA() {
    let conn = ServerConnection(host: "myhost", port: 7700, token: "abc", ca: "dGVzdA==")
    XCTAssertEqual(conn.baseURL?.absoluteString, "https://myhost:7700")
```
Initializer parameter name mismatch.
The tests use ca: as the parameter label, but ServerConnection.init uses caCertificate: as shown in ServerConnection.swift line 16.
🐛 Fix initializer calls
```diff
-let conn = ServerConnection(host: "myhost", port: 7700, token: "abc", ca: "dGVzdA==")
+let conn = ServerConnection(host: "myhost", port: 7700, token: "abc", caCertificate: "dGVzdA==")
```

Apply to lines 124 and 129 as well.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 let conn = ServerConnection(host: "myhost", port: 7700, token: "abc")
 XCTAssertEqual(conn.baseURL?.absoluteString, "http://myhost:7700")
 }

 func testBaseURLUsesHTTPSWithCA() {
-    let conn = ServerConnection(host: "myhost", port: 7700, token: "abc", ca: "dGVzdA==")
+    let conn = ServerConnection(host: "myhost", port: 7700, token: "abc", caCertificate: "dGVzdA==")
     XCTAssertEqual(conn.baseURL?.absoluteString, "https://myhost:7700")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ios/PPGMobile/PPGMobile/Models/ServerConnectionTests.swift` around lines 119
- 125, The test initializer calls for ServerConnection use the wrong parameter
label `ca:`; update the calls in ServerConnectionTests (the instances that call
ServerConnection(host:port:token:...)) to use the actual initializer parameter
name `caCertificate:` so they match
ServerConnection.init(host:port:token:caCertificate:), and make the same change
for the other occurrences noted (the other two test lines).
```rust
let msg_label = gtk::Label::new(Some(message));
msg_label.set_halign(gtk::Align::Start);
msg_label.set_hexpand(true);
msg_label.set_ellipsize(pango::EllipsizeMode::End);
```
Missing pango import.
pango::EllipsizeMode::End is used but pango is not imported. This will cause a compilation error.
🐛 Proposed fix
Add the import at the top of the file:
```diff
 use cairo;
 use gtk4::prelude::*;
 use gtk4::{self as gtk};
+use pango;
```
Verify each finding against the current code and only fix it if needed.
In `@linux/src/ui/home_dashboard.rs` at line 234, The code calls
msg_label.set_ellipsize(pango::EllipsizeMode::End) but pango is not imported;
add the appropriate pango import at the top of the file (so the symbol
pango::EllipsizeMode::End resolves) — for example import the pango crate or the
specific EllipsizeMode enum so references in home_dashboard.rs (where
msg_label.set_ellipsize is used) compile.
```rust
use gtk4::prelude::*;
use gtk4::{self as gtk};

use crate::models::manifest::{AgentEntry, AgentStatus, Manifest, WorktreeEntry};
use crate::state::Services;
use crate::ui::window::SidebarSelection;
```
Missing pango import causes compilation error.
Line 216 uses pango::EllipsizeMode::End, but pango is not imported. This will fail to compile.
Proposed fix
```diff
 use gtk4::prelude::*;
 use gtk4::{self as gtk};
+use pango;
 use crate::models::manifest::{AgentEntry, AgentStatus, Manifest, WorktreeEntry};
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 use gtk4::prelude::*;
 use gtk4::{self as gtk};
+use pango;

 use crate::models::manifest::{AgentEntry, AgentStatus, Manifest, WorktreeEntry};
 use crate::state::Services;
 use crate::ui::window::SidebarSelection;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@linux/src/ui/sidebar.rs` around lines 1 - 6, Add an import for the Pango enum
used at line 216 so the code compiles: add a top-level use for pango (e.g. add
"use pango::EllipsizeMode;" alongside the other use statements in sidebar.rs) so
references to pango::EllipsizeMode::End resolve correctly.
```rust
kill_button.connect_clicked(move |_| {
    if let Some(ref wt_id) = *id_kill.borrow() {
        let client = services_kill.client.clone();
        let id = wt_id.clone();
        services_kill.runtime.spawn(async move {
            let _ = client.read().unwrap().kill_worktree(&id).await;
        });
    }
});
```
Critical: RwLock guard held across .await can cause deadlocks or panics.
client.read().unwrap() acquires a lock guard, then the async method kill_worktree is called. The guard is held across the await point, which can cause issues since standard RwLock guards are not Send. Additionally, unwrap() on a potentially poisoned lock can panic.
Proposed fix: clone client data before spawning

```diff
 kill_button.connect_clicked(move |_| {
     if let Some(ref wt_id) = *id_kill.borrow() {
         let client = services_kill.client.clone();
         let id = wt_id.clone();
         services_kill.runtime.spawn(async move {
-            let _ = client.read().unwrap().kill_worktree(&id).await;
+            let client_clone = {
+                let c = client.read().expect("client lock poisoned");
+                c.clone()
+            };
+            if let Err(e) = client_clone.kill_worktree(&id).await {
+                log::error!("Failed to kill worktree {}: {}", id, e);
+            }
         });
     }
 });
```

Since PpgClient derives Clone, clone it before the async block to avoid holding the lock across await points.
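The clone-before-await pattern can be checked in a self-contained sketch. Everything here is a stand-in (a hypothetical `PpgClient` with one field, and a synchronous `kill_worktree` that only formats the request it would send); the point is that the guard is dropped before any slow or awaited work:

```rust
use std::sync::{Arc, RwLock};

#[derive(Clone)]
struct PpgClient {
    base_url: String,
}

impl PpgClient {
    // Synchronous stand-in for the real async HTTP call.
    fn kill_worktree(&self, id: &str) -> String {
        format!("POST {}/api/worktrees/{}/kill", self.base_url, id)
    }
}

fn main() {
    let shared: Arc<RwLock<PpgClient>> = Arc::new(RwLock::new(PpgClient {
        base_url: "http://localhost:7777".into(),
    }));

    // Scope the read guard: clone the client, then let the guard drop
    // before the long-running (or awaited) call happens.
    let client = {
        let guard = shared.read().expect("client lock poisoned");
        guard.clone()
    }; // guard dropped here; nothing is held across the "await"

    let result = client.kill_worktree("wt-1");
    assert_eq!(result, "POST http://localhost:7777/api/worktrees/wt-1/kill");
    println!("{result}");
}
```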
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@linux/src/ui/worktree_detail.rs` around lines 103 - 111, The click handler
currently calls client.read().unwrap() and holds the RwLock guard across the
async call to kill_worktree, risking deadlocks and panics; instead, take the
read lock on services_kill.client inside kill_button.connect_clicked, clone the
inner PpgClient (since PpgClient: Clone) while the guard is held, handle a
poisoned lock without unwrap (e.g., map/match and bail or log), drop the guard,
then call services_kill.runtime.spawn and inside the async use the cloned client
to call cloned_client.kill_worktree(&id). This avoids holding the RwLock guard
across the await and removes the unwrap panic risk; apply to the
kill_button.connect_clicked closure where services_kill.client, runtime.spawn,
and kill_worktree are used.
```rust
merge_button.connect_clicked(move |_| {
    if let Some(ref wt_id) = *id_merge.borrow() {
        let client = services_merge.client.clone();
        let id = wt_id.clone();
        services_merge.runtime.spawn(async move {
            let req = MergeRequest {
                strategy: Some("squash".to_string()),
                cleanup: Some(true),
                force: None,
            };
            let _ = client.read().unwrap().merge_worktree(&id, &req).await;
        });
    }
});
```
Same RwLock guard issue in merge button handler.
The merge handler has the same problem—holding the lock guard across an await point.
Proposed fix

```diff
 merge_button.connect_clicked(move |_| {
     if let Some(ref wt_id) = *id_merge.borrow() {
         let client = services_merge.client.clone();
         let id = wt_id.clone();
         services_merge.runtime.spawn(async move {
             let req = MergeRequest {
                 strategy: Some("squash".to_string()),
                 cleanup: Some(true),
                 force: None,
             };
-            let _ = client.read().unwrap().merge_worktree(&id, &req).await;
+            let client_clone = client.read().expect("client lock poisoned").clone();
+            if let Err(e) = client_clone.merge_worktree(&id, &req).await {
+                log::error!("Failed to merge worktree {}: {}", id, e);
+            }
         });
     }
 });
```

📝 Committable suggestion
```rust
merge_button.connect_clicked(move |_| {
    if let Some(ref wt_id) = *id_merge.borrow() {
        let client = services_merge.client.clone();
        let id = wt_id.clone();
        services_merge.runtime.spawn(async move {
            let req = MergeRequest {
                strategy: Some("squash".to_string()),
                cleanup: Some(true),
                force: None,
            };
            let client_clone = client.read().expect("client lock poisoned").clone();
            if let Err(e) = client_clone.merge_worktree(&id, &req).await {
                log::error!("Failed to merge worktree {}: {}", id, e);
            }
        });
    }
});
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@linux/src/ui/worktree_detail.rs` around lines 116 - 129, The
merge_button.connect_clicked handler currently holds an RwLock read guard
(services_merge.client.read().unwrap()) across an await, causing potential
deadlocks; fix it by obtaining the read guard synchronously, cloning the inner
client (or required handle) into a local variable, dropping the guard before
calling services_merge.runtime.spawn, then move the cloned client and cloned id
into the async block so MergeRequest/merge_worktree are awaited without any
RwLock held (refer to merge_button.connect_clicked, id_merge,
services_merge.client, services_merge.runtime, MergeRequest, and
merge_worktree).
```ts
let command = `ppg serve _daemon --port ${port} --host ${host}`;
if (options.token) {
  command += ` --token ${options.token}`;
}
```
Escape shell arguments before sending tmux command.
options.token is interpolated directly into a shell command. A crafted token can inject arbitrary shell fragments in the tmux window.
Suggested fix

```diff
+function shQuote(value: string): string {
+  return `'${value.replace(/'/g, `'\\''`)}'`;
+}
+
 ...
-let command = `ppg serve _daemon --port ${port} --host ${host}`;
+let command = `ppg serve _daemon --port ${port} --host ${shQuote(host)}`;
 if (options.token) {
-  command += ` --token ${options.token}`;
+  command += ` --token ${shQuote(options.token)}`;
 }
```

📝 Committable suggestion
```ts
function shQuote(value: string): string {
  return `'${value.replace(/'/g, `'\\''`)}'`;
}

let command = `ppg serve _daemon --port ${port} --host ${shQuote(host)}`;
if (options.token) {
  command += ` --token ${shQuote(options.token)}`;
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/commands/serve.ts` around lines 98 - 101, The command string built into
the variable `command` interpolates `options.token` directly and is vulnerable
to shell injection; sanitize or escape `options.token` before appending it to
`command` (or switch to building an argv list instead of a single shell string).
Locate where `command` is constructed (the `ppg serve _daemon --port ${port}
--host ${host}` block) and either (a) escape the token for the shell by wrapping
it in single quotes and escaping any internal single quotes (e.g., replace '
with '\''), or (b) change the tmux invocation to use an API that accepts an arg
array/args (avoid a single shell line) so `options.token` is passed safely as an
argument. Ensure you apply the same escaping/arg-safety for any other
user-controlled interpolation into `command`.
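The escaping rule behind `shQuote` (wrap the value in single quotes; splice each embedded `'` as `'\''`) can be demonstrated in a few lines. This is an illustrative Rust port of the same rule, not code from the PR:

```rust
/// POSIX shell single-quote escaping: wrap the value in single quotes and
/// replace each embedded ' with '\'' (close quote, escaped quote, reopen).
fn sh_quote(value: &str) -> String {
    format!("'{}'", value.replace('\'', r"'\''"))
}

fn main() {
    // A benign token is simply wrapped.
    assert_eq!(sh_quote("abc123"), "'abc123'");
    // Shell metacharacters are inert inside single quotes.
    assert_eq!(sh_quote("x; rm -rf /"), "'x; rm -rf /'");
    // Embedded single quotes are spliced safely.
    assert_eq!(sh_quote("it's"), r"'it'\''s'");
    println!("ok");
}
```

Passing the command as an argv array to tmux, where possible, avoids the need for quoting entirely.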
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@linux/src/api/client.rs`:
- Around line 121-165: The get, post, and post_no_body request builders
currently have no per-request timeout; update each of the methods (get, post,
post_no_body) to call .timeout(Duration::from_secs(15)) on the req builder
before sending to avoid hanging requests (use
std::time::Duration::from_secs(15)); ensure any missing use/import for Duration
is added and keep the timeout applied to the same req variable prior to
req.send().await.
In `@linux/src/ui/home_dashboard.rs`:
- Around line 157-160: The manifest update currently calls fetch_heatmap_data
and fetch_recent_commits on every update, which spawns unbounded background git
work; modify update_manifest to throttle/coalesce those background calls (e.g.,
debounce or rate-limit to a configurable interval) so repeated calls within the
interval do not spawn new tasks. Implement this by tracking the last-run time or
a pending task flag inside the struct (or use a single shared
debounce/timer/future) and only call fetch_heatmap_data and fetch_recent_commits
when the throttle allows (or schedule a single consolidated run after the
interval), ensuring the unique functions update_manifest, fetch_heatmap_data,
and fetch_recent_commits are the ones gated.
- Around line 168-183: The current parsing of git output (the command built with
args "log", "--format=%h|%s|%ar", "-n", "10") and the splitn call using '|'
breaks when the commit subject contains '|' — change the git format to emit a
delimiter that cannot appear in subjects (e.g. use a control character via %x1F
or %x00) and update the parsing where stdout is split (the code building
commits: Vec<(String,String,String)> and the filter_map that does splitn(3,
'|')) to split on that same unique byte/char; keep the three-field logic and
handle from_utf8_lossy as before so each tuple still contains (hash, subject,
relative_time).
- Around line 162-210: fetch_recent_commits and fetch_heatmap_data currently
move GTK widgets (commits_list, DrawingArea) and Rc<RefCell<_>> into
std::thread::spawn which is illegal; instead, create an async_channel (tx, rx)
for plain serializable results (e.g., Vec<Commit> or Vec<HeatmapPoint>), spawn
the background thread to compute only data from root (do not move gtk widgets or
Rc<RefCell> into the closure) and send the results via tx (send_blocking or
tx.send), then on the main thread use glib::spawn_future_local or a loop
receiving from rx to call glib::idle_add_once and update commits_list via
create_commit_row or update the DrawingArea from the received data; keep
references to commits_list, DrawingArea, and any Rc<RefCell<_>> entirely on the
main thread and only use the plain data from the channel inside glib callbacks.
```rust
async fn get<T: serde::de::DeserializeOwned>(&self, path: &str) -> Result<T> {
    let mut req = self.client.get(self.url(path));
    if let Some(auth) = self.auth_header() {
        req = req.header("Authorization", auth);
    }
    let resp = req.send().await.context("HTTP GET failed")?;
    let status = resp.status();
    if !status.is_success() {
        let body = resp.text().await.unwrap_or_default();
        anyhow::bail!("HTTP {} — {}", status, body);
    }
    resp.json().await.context("Failed to parse JSON response")
}

async fn post<B: Serialize, T: serde::de::DeserializeOwned>(
    &self,
    path: &str,
    body: &B,
) -> Result<T> {
    let mut req = self.client.post(self.url(path)).json(body);
    if let Some(auth) = self.auth_header() {
        req = req.header("Authorization", auth);
    }
    let resp = req.send().await.context("HTTP POST failed")?;
    let status = resp.status();
    if !status.is_success() {
        let body = resp.text().await.unwrap_or_default();
        anyhow::bail!("HTTP {} — {}", status, body);
    }
    resp.json().await.context("Failed to parse JSON response")
}

async fn post_no_body<T: serde::de::DeserializeOwned>(&self, path: &str) -> Result<T> {
    let mut req = self.client.post(self.url(path));
    if let Some(auth) = self.auth_header() {
        req = req.header("Authorization", auth);
    }
    let resp = req.send().await.context("HTTP POST failed")?;
    let status = resp.status();
    if !status.is_success() {
        let body = resp.text().await.unwrap_or_default();
        anyhow::bail!("HTTP {} — {}", status, body);
    }
    resp.json().await.context("Failed to parse JSON response")
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n linux/src/api/client.rs | sed -n '1,50p'
```

Repository: 2witstudios/ppg-cli
Length of output: 1553

🏁 Script executed:

```shell
wc -l linux/src/api/client.rs
```

Repository: 2witstudios/ppg-cli
Length of output: 90

🏁 Script executed:

```shell
cat -n linux/src/api/client.rs | sed -n '110,170p'
```

Repository: 2witstudios/ppg-cli
Length of output: 2690

🏁 Script executed:

```shell
rg -n 'timeout|Duration' linux/src/api/client.rs
```

Repository: 2witstudios/ppg-cli
Length of output: 45

🏁 Script executed:

```shell
rg -n 'struct.*Client|impl.*Client' linux/src/api/client.rs -A 2
```

Repository: 2witstudios/ppg-cli
Length of output: 245

🏁 Script executed:

```shell
cat -n linux/src/api/client.rs | sed -n '99,112p'
```

Repository: 2witstudios/ppg-cli
Length of output: 577

🏁 Script executed:

```shell
rg -n 'Cargo.toml' --type toml -l
```

Repository: 2witstudios/ppg-cli
Length of output: 45

🏁 Script executed:

```shell
cat Cargo.toml | grep -A 3 reqwest
```

Repository: 2witstudios/ppg-cli
Length of output: 105

🏁 Script executed:

```shell
find . -name 'Cargo.toml' -type f
```

Repository: 2witstudios/ppg-cli
Length of output: 81

🏁 Script executed:

```shell
cat linux/Cargo.toml | grep -A 3 reqwest
```

Repository: 2witstudios/ppg-cli
Length of output: 252
Add per-request timeouts to prevent hung UI/network operations.
The get, post, and post_no_body methods lack timeout protection. Without timeouts, failed server connections or network hangs will cause indefinite waits, freezing async operations and the UI. This is especially critical since these are called from public API methods throughout the client.
Reqwest 0.12 supports per-request timeouts via .timeout(). Apply a reasonable timeout (e.g., 15 seconds) to each request:
Proposed fix

```diff
+use std::time::Duration;
+
+const HTTP_TIMEOUT: Duration = Duration::from_secs(15);
+
 async fn get<T: serde::de::DeserializeOwned>(&self, path: &str) -> Result<T> {
-    let mut req = self.client.get(self.url(path));
+    let mut req = self.client.get(self.url(path)).timeout(HTTP_TIMEOUT);
@@
 async fn post<B: Serialize, T: serde::de::DeserializeOwned>(
@@
-    let mut req = self.client.post(self.url(path)).json(body);
+    let mut req = self
+        .client
+        .post(self.url(path))
+        .json(body)
+        .timeout(HTTP_TIMEOUT);
@@
 async fn post_no_body<T: serde::de::DeserializeOwned>(&self, path: &str) -> Result<T> {
-    let mut req = self.client.post(self.url(path));
+    let mut req = self.client.post(self.url(path)).timeout(HTTP_TIMEOUT);
```

📝 Committable suggestion
```rust
use std::time::Duration;

const HTTP_TIMEOUT: Duration = Duration::from_secs(15);

async fn get<T: serde::de::DeserializeOwned>(&self, path: &str) -> Result<T> {
    let mut req = self.client.get(self.url(path)).timeout(HTTP_TIMEOUT);
    if let Some(auth) = self.auth_header() {
        req = req.header("Authorization", auth);
    }
    let resp = req.send().await.context("HTTP GET failed")?;
    let status = resp.status();
    if !status.is_success() {
        let body = resp.text().await.unwrap_or_default();
        anyhow::bail!("HTTP {} — {}", status, body);
    }
    resp.json().await.context("Failed to parse JSON response")
}

async fn post<B: Serialize, T: serde::de::DeserializeOwned>(
    &self,
    path: &str,
    body: &B,
) -> Result<T> {
    let mut req = self
        .client
        .post(self.url(path))
        .json(body)
        .timeout(HTTP_TIMEOUT);
    if let Some(auth) = self.auth_header() {
        req = req.header("Authorization", auth);
    }
    let resp = req.send().await.context("HTTP POST failed")?;
    let status = resp.status();
    if !status.is_success() {
        let body = resp.text().await.unwrap_or_default();
        anyhow::bail!("HTTP {} — {}", status, body);
    }
    resp.json().await.context("Failed to parse JSON response")
}

async fn post_no_body<T: serde::de::DeserializeOwned>(&self, path: &str) -> Result<T> {
    let mut req = self.client.post(self.url(path)).timeout(HTTP_TIMEOUT);
    if let Some(auth) = self.auth_header() {
        req = req.header("Authorization", auth);
    }
    let resp = req.send().await.context("HTTP POST failed")?;
    let status = resp.status();
    if !status.is_success() {
        let body = resp.text().await.unwrap_or_default();
        anyhow::bail!("HTTP {} — {}", status, body);
    }
    resp.json().await.context("Failed to parse JSON response")
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@linux/src/api/client.rs` around lines 121 - 165, The get, post, and
post_no_body request builders currently have no per-request timeout; update each
of the methods (get, post, post_no_body) to call
.timeout(Duration::from_secs(15)) on the req builder before sending to avoid
hanging requests (use std::time::Duration::from_secs(15)); ensure any missing
use/import for Duration is added and keep the timeout applied to the same req
variable prior to req.send().await.
```rust
// Fetch git log data for heatmap and recent commits (async)
self.fetch_heatmap_data(&manifest.project_root);
self.fetch_recent_commits(&manifest.project_root);
}
```
Throttle git refresh calls from update_manifest.
Every manifest update currently spawns two background git commands. With frequent WS updates this can create unbounded process/thread churn and degrade responsiveness.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@linux/src/ui/home_dashboard.rs` around lines 157 - 160, The manifest update
currently calls fetch_heatmap_data and fetch_recent_commits on every update,
which spawns unbounded background git work; modify update_manifest to
throttle/coalesce those background calls (e.g., debounce or rate-limit to a
configurable interval) so repeated calls within the interval do not spawn new
tasks. Implement this by tracking the last-run time or a pending task flag
inside the struct (or use a single shared debounce/timer/future) and only call
fetch_heatmap_data and fetch_recent_commits when the throttle allows (or
schedule a single consolidated run after the interval), ensuring the unique
functions update_manifest, fetch_heatmap_data, and fetch_recent_commits are the
ones gated.
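The gating the comment asks for needs no GTK at all to sketch: track the last run and refuse work inside the interval. `Throttle` and `allow` below are illustrative names, not code from this PR:

```rust
use std::time::{Duration, Instant};

/// Minimal rate limiter: `allow()` returns true at most once per `interval`.
struct Throttle {
    interval: Duration,
    last_run: Option<Instant>,
}

impl Throttle {
    fn new(interval: Duration) -> Self {
        Self { interval, last_run: None }
    }

    /// Returns true (and records the run) only if the interval has elapsed.
    fn allow(&mut self) -> bool {
        let now = Instant::now();
        match self.last_run {
            Some(t) if now.duration_since(t) < self.interval => false,
            _ => {
                self.last_run = Some(now);
                true
            }
        }
    }
}

fn main() {
    let mut throttle = Throttle::new(Duration::from_millis(50));
    assert!(throttle.allow());  // first call runs
    assert!(!throttle.allow()); // immediate repeat is gated
    std::thread::sleep(Duration::from_millis(60));
    assert!(throttle.allow());  // allowed again after the interval
    println!("ok");
}
```

In `update_manifest`, the two git fetches would then only run when `allow()` returns true, or be rescheduled once after the interval to coalesce bursts of WS updates.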
```rust
fn fetch_recent_commits(&self, project_root: &str) {
    let root = project_root.to_string();
    let commits_list = self.commits_list.clone();

    std::thread::spawn(move || {
        let output = std::process::Command::new("git")
            .args([
                "log",
                "--format=%h|%s|%ar",
                "-n",
                "10",
            ])
            .current_dir(&root)
            .output();

        if let Ok(output) = output {
            let stdout = String::from_utf8_lossy(&output.stdout);
            let commits: Vec<(String, String, String)> = stdout
                .lines()
                .filter_map(|line| {
                    let parts: Vec<&str> = line.splitn(3, '|').collect();
                    if parts.len() == 3 {
                        Some((
                            parts[0].to_string(),
                            parts[1].to_string(),
                            parts[2].to_string(),
                        ))
                    } else {
                        None
                    }
                })
                .collect();

            glib::idle_add_once(move || {
                // Clear existing rows
                while let Some(row) = commits_list.row_at_index(0) {
                    commits_list.remove(&row);
                }

                if commits.is_empty() {
                    commits_list.append(&create_commit_row("—", "No commits found", ""));
                } else {
                    for (hash, message, time) in &commits {
                        commits_list.append(&create_commit_row(hash, message, time));
                    }
                }
            });
        }
    });
```
🧩 Analysis chain
🌐 Web query:
gtk4-rs 0.9 glib 0.20 std::thread::spawn Send trait bounds
💡 Result:
std::thread::spawn requires the spawned closure (and anything it captures) to be Send + 'static. In gtk4-rs 0.9, GTK is not thread-safe, so none of GTK’s wrapper types implement Send or Sync—you can’t move a gtk::Widget, glib::Object, etc. into a thread::spawn closure (or into any callback that requires Send). (docs.rs)
What to do instead (recommended pattern)

- Do the work on a worker thread, but only with `Send` data (plain Rust types, `Arc<Mutex<_>>`, etc.).
- Send results back to the main thread (UI thread) using an async channel (since `glib::MainContext::channel()` was removed in gtk-rs-core 0.19+). (gtk-rs.org)
- On the main thread, receive messages and update GTK widgets there.
Sketch:
```rust
let (tx, rx) = async_channel::unbounded::<MyResult>();

// Main thread: drive UI updates
glib::MainContext::default().spawn_local(async move {
    while let Ok(msg) = rx.recv().await {
        // safe: update GTK here (main thread)
        // widget.set_label(...)
    }
});

// Worker thread: do computation, send only data back
std::thread::spawn(move || {
    let result = do_work();
    let _ = tx.send_blocking(result);
});
```

Why not glib::MainContext::invoke_local / idle_add_local?
Those “_local” APIs intentionally don’t require Send, but they may only be called from the thread that owns the main context (i.e., the GTK main thread). (docs.rs)
If you share the exact error you’re getting (the type that “is not Send”), I can show the smallest refactor for that specific case.
Citations:
- 1: https://docs.rs/gtk4/latest/gtk4/
- 2: https://gtk-rs.org/blog/2024/06/01/new-release.html?utm_source=openai
- 3: https://docs.rs/glib/latest/glib/struct.MainContext.html
🌐 Web query:
gtk4 ListBox DrawingArea Send Sync trait implementation gtk-rs
💡 Result:
In gtk4 / gtk-rs, ListBox, DrawingArea, and in general Widget types are explicitly !Send and !Sync, because GTK objects are not thread-safe and must only be used from the main (GTK) thread. That’s why you can’t (and shouldn’t) “implement Send/Sync” for them to move them across threads. (docs.rs)
What to do instead (common pattern)

- Keep all GTK widgets (`ListBoxRow`, `DrawingArea`, etc.) owned/used only on the main thread.
- If you need background work, send data (not widgets) back to the main thread using `glib::MainContext::channel()` / `Sender` + attach receiver to main context, or `glib::idle_add_local(...)` / `glib::timeout_add_local(...)`, then update widgets and call `queue_draw()` from the main thread. (stackoverflow.com)

For a DrawingArea inside a ListBoxRow

- Store per-row model/state separately (e.g., `Rc<RefCell<State>>` if single-threaded; or `Arc<Mutex<State>>` if produced by a worker thread).
- In `drawing_area.set_draw_func(...)`, read the current state and render.
- When new data arrives on the main thread, update the state and call `drawing_area.queue_draw()`.
About “unsafe impl Send/Sync”
gtk-rs used to require manual unsafe impl Send/Sync for some custom GObject subclasses, but that’s about your subclass type and only when it’s actually thread-safe; it does not make GTK widgets thread-safe, and it doesn’t apply to ListBox/Widget types which remain !Send/!Sync. (gtk-rs.org)
Citations:
- 1: https://docs.rs/gtk4/latest/gtk4/struct.ListBox.html?utm_source=openai
- 2: https://stackoverflow.com/questions/66510406/gtk-rs-how-to-update-view-from-another-thread?utm_source=openai
- 3: https://gtk-rs.org/blog/2022/10/18/new-release.html?utm_source=openai
🌐 Web query:
glib idle_add_once idle_add_local_once difference thread safety
💡 Result:
What they do (both “*_once” variants)

- `g_idle_add_once()` / `idle_add_once()` schedules an idle callback at `G_PRIORITY_DEFAULT_IDLE` on the global-default main context, and removes itself after the first call. (docs.gtk.org)

The key difference: “local” vs “non-local”

- Non-local (`idle_add_once` / `g_idle_add_once`)
  - Designed to be safe to call from other threads: GLib allows sources to be added/removed from a `GMainContext` from other threads, and main-loop/context operations for built-in sources are thread-safe. (gnome.pages.gitlab.gnome.org)
  - Because it targets the global-default main context, the callback will run in whatever thread is iterating that context (in GTK apps, typically the “main thread”). (docs.gtk.org)
- Local (`idle_add_local_once`) (commonly seen in language bindings like gtk-rs)
  - Intended for same-thread use only: it does not require `Send`, but in exchange it must be scheduled from the thread that owns the main context; calling it from another thread is considered an error (gtk-rs panics). (docs.rs)
  - Practical meaning: it’s “thread-constrained” rather than “thread-safe to schedule from anywhere”.

Thread-safety implications (rule of thumb)

- From a worker thread, use `g_idle_add_once()` (or `g_main_context_invoke()` if you need to target a specific context) to marshal work back to the main loop thread. (gnome.pages.gitlab.gnome.org)
- Use `idle_add_local_once` only when you are already on the owning/main-loop thread and want the convenience of capturing non-`Send` data safely. (docs.rs)
Citations:
- 1: https://docs.gtk.org/glib/func.idle_add_once.html?utm_source=openai
- 2: https://gnome.pages.gitlab.gnome.org/libsoup/glib/glib-The-Main-Event-Loop.html?utm_source=openai
- 3: https://docs.gtk.org/gtk4/section-threading.html?utm_source=openai
- 4: https://docs.rs/glib/latest/glib/source/fn.idle_add_local.html?utm_source=openai
- 5: https://gnome.pages.gitlab.gnome.org/libsoup/glib/glib-The-Main-Event-Loop.html?utm_source=openai
- 6: https://docs.rs/glib/latest/glib/source/fn.idle_add_local.html?utm_source=openai
🏁 Script executed:

```shell
# First, let's check the actual file to see the context
find . -name "home_dashboard.rs" -type f
```

Repository: 2witstudios/ppg-cli
Length of output: 95

🏁 Script executed:

```shell
# Let's check the actual code in the file around lines 162-210 and 213-249
cat -n linux/src/ui/home_dashboard.rs | sed -n '160,250p'
```

Repository: 2witstudios/ppg-cli
Length of output: 4120

🏁 Script executed:

```shell
# Also check if there are any use statements or type definitions we need to understand
cat -n linux/src/ui/home_dashboard.rs | head -60
```

Repository: 2witstudios/ppg-cli
Length of output: 2633

🏁 Script executed:

```shell
# Let's check if this code compiles - look for any Cargo.toml and try to understand the structure
find . -name "Cargo.toml" -type f | head -5
```

Repository: 2witstudios/ppg-cli
Length of output: 81

🏁 Script executed:

```shell
# Let's also search for the type of commits_list and heatmap_area to confirm they are GTK widgets
cat -n linux/src/ui/home_dashboard.rs | grep -A 5 "commits_list\|heatmap_area" | head -30
```

Repository: 2witstudios/ppg-cli
Length of output: 1388

🏁 Script executed:

```shell
# Check the full struct definition to confirm the types
cat -n linux/src/ui/home_dashboard.rs | sed -n '13,26p'
```

Repository: 2witstudios/ppg-cli
Length of output: 550

🏁 Script executed:

```shell
# Check if there's any unsafe impl Send/Sync for these types
rg "unsafe impl.*Send|unsafe impl.*Sync" linux/src/ui/home_dashboard.rs
```

Repository: 2witstudios/ppg-cli
Length of output: 45

🏁 Script executed:

```shell
# Look for any #[allow] or other suppressions
rg "#\[allow|#\[cfg" linux/src/ui/home_dashboard.rs
```

Repository: 2witstudios/ppg-cli
Length of output: 45

🏁 Script executed:

```shell
# Check the Cargo.toml to see the gtk4-rs and glib versions
cat linux/Cargo.toml | grep -A 30 "\[dependencies\]"
```

Repository: 2witstudios/ppg-cli
Length of output: 742

🏁 Script executed:

```shell
# Check if the code actually compiles by looking at any build artifacts or errors
find . -name "target" -type d | head -1
```

Repository: 2witstudios/ppg-cli
Length of output: 45

🏁 Script executed:

```shell
# Let's search for any other patterns in the codebase that might show how channels are being used
rg "async_channel|channel\|sender\|receiver" linux/src/ui/home_dashboard.rs
```

Repository: 2witstudios/ppg-cli
Length of output: 45

🏁 Script executed:

```shell
# Check if there are other similar patterns in the codebase that DO use channels correctly
rg -l "std::thread::spawn" linux/src --type rust
```

Repository: 2witstudios/ppg-cli
Length of output: 93

🏁 Script executed:

```shell
# Try to see if there are any compile errors or if the code successfully builds
# Let's check if there's a target directory that might indicate build status
ls -la linux/ | grep -E "target|Cargo"
```

Repository: 2witstudios/ppg-cli
Length of output: 186

🏁 Script executed:

```shell
# Let's also check if there are similar correct patterns elsewhere using async_channel
rg -A 10 "async_channel" linux/src --type rust | head -40
```

Repository: 2witstudios/ppg-cli
Length of output: 2260
Do not move GTK widgets or Rc<RefCell<_>> into std::thread::spawn closures.
fetch_recent_commits (lines 162–210) and fetch_heatmap_data (lines 213–249) move gtk::ListBox, gtk::DrawingArea, and Rc<RefCell<_>> into std::thread::spawn. These types are !Send in gtk4-rs 0.9 and cannot be moved across thread boundaries. This code fails to compile due to trait bounds on std::thread::spawn.
The correct pattern (already used in window.rs): spawn a background thread that computes plain data only, then send results through async_channel back to the main thread where GTK widgets are updated via glib::idle_add_once or glib::spawn_future_local.
Correct pattern

```rust
let (tx, rx) = async_channel::unbounded::<Vec<Commit>>();

// Main thread: receive data and update widgets
glib::spawn_future_local(async move {
    while let Ok(commits) = rx.recv().await {
        // safe: update GTK widgets here
    }
});

// Worker thread: compute and send plain data only
std::thread::spawn(move || {
    let commits = compute_commits(&root);
    let _ = tx.send_blocking(commits);
});
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@linux/src/ui/home_dashboard.rs` around lines 162 - 210, fetch_recent_commits
and fetch_heatmap_data currently move GTK widgets (commits_list, DrawingArea)
and Rc<RefCell<_>> into std::thread::spawn which is illegal; instead, create an
async_channel (tx, rx) for plain serializable results (e.g., Vec<Commit> or
Vec<HeatmapPoint>), spawn the background thread to compute only data from root
(do not move gtk widgets or Rc<RefCell> into the closure) and send the results
via tx (send_blocking or tx.send), then on the main thread use
glib::spawn_future_local or a loop receiving from rx to call glib::idle_add_once
and update commits_list via create_commit_row or update the DrawingArea from the
received data; keep references to commits_list, DrawingArea, and any
Rc<RefCell<_>> entirely on the main thread and only use the plain data from the
channel inside glib callbacks.
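The data-only handoff described above can be exercised outside GTK entirely. Below is a minimal std-only sketch of the same shape: `mpsc` stands in for `async_channel`, and the receiving side stands in for the `glib::spawn_future_local` loop that would touch widgets. `Commit`, `compute_commits`, and `fetch_commits_off_thread` are illustrative names, not part of the actual codebase.

```rust
use std::sync::mpsc;
use std::thread;

// Plain, Send-able data computed off the main thread.
#[derive(Debug, Clone, PartialEq)]
pub struct Commit {
    pub hash: String,
    pub subject: String,
}

// Stand-in for running `git log` in the worktree root.
fn compute_commits() -> Vec<Commit> {
    vec![Commit { hash: "abc1234".into(), subject: "init".into() }]
}

// Worker thread computes and sends plain data only; the receiving side
// (the GTK main thread in the real app) is the only place that would
// touch widgets.
pub fn fetch_commits_off_thread() -> Vec<Commit> {
    let (tx, rx) = mpsc::channel::<Vec<Commit>>();
    let worker = thread::spawn(move || {
        let _ = tx.send(compute_commits());
    });
    let commits = rx.recv().expect("worker dropped sender without sending");
    worker.join().expect("worker panicked");
    commits
}

fn main() {
    for c in fetch_commits_off_thread() {
        println!("{} {}", c.hash, c.subject);
    }
}
```

The key property is that only `Send`-able values cross the thread boundary; every widget mutation stays on the receiving side.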
.args([
    "log",
    "--format=%h|%s|%ar",
    "-n",
    "10",
])
.current_dir(&root)
.output();

if let Ok(output) = output {
    let stdout = String::from_utf8_lossy(&output.stdout);
    let commits: Vec<(String, String, String)> = stdout
        .lines()
        .filter_map(|line| {
            let parts: Vec<&str> = line.splitn(3, '|').collect();
            if parts.len() == 3 {
Use a delimiter that cannot appear in commit subjects.
Parsing git log --format=%h|%s|%ar with splitn(..., '|') breaks when %s contains |, causing wrong message/time splits.
Proposed fix
- "--format=%h|%s|%ar",
+ "--format=%h%x1f%s%x1f%ar",
@@
- let parts: Vec<&str> = line.splitn(3, '|').collect();
+ let parts: Vec<&str> = line.splitn(3, '\x1f').collect();
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:
.args([
    "log",
    "--format=%h|%s|%ar",
    "-n",
    "10",
])
.current_dir(&root)
.output();
if let Ok(output) = output {
    let stdout = String::from_utf8_lossy(&output.stdout);
    let commits: Vec<(String, String, String)> = stdout
        .lines()
        .filter_map(|line| {
            let parts: Vec<&str> = line.splitn(3, '|').collect();
            if parts.len() == 3 {

Suggested:
.args([
    "log",
    "--format=%h%x1f%s%x1f%ar",
    "-n",
    "10",
])
.current_dir(&root)
.output();
if let Ok(output) = output {
    let stdout = String::from_utf8_lossy(&output.stdout);
    let commits: Vec<(String, String, String)> = stdout
        .lines()
        .filter_map(|line| {
            let parts: Vec<&str> = line.splitn(3, '\x1f').collect();
            if parts.len() == 3 {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@linux/src/ui/home_dashboard.rs` around lines 168 - 183, The current parsing
of git output (the command built with args "log", "--format=%h|%s|%ar", "-n",
"10") and the splitn call using '|' breaks when the commit subject contains '|'
— change the git format to emit a delimiter that cannot appear in subjects (e.g.
use a control character via %x1F or %x00) and update the parsing where stdout is
split (the code building commits: Vec<(String,String,String)> and the filter_map
that does splitn(3, '|')) to split on that same unique byte/char; keep the
three-field logic and handle from_utf8_lossy as before so each tuple still
contains (hash, subject, relative_time).
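To see why the `%x1f` format string fixes the parse, here is a small runnable sketch. The `parse_line` helper is hypothetical, but its split logic mirrors the suggested fix: splitting on the ASCII unit separator (which git emits for `%x1f` and which cannot appear in a normal commit subject) keeps a `|` inside the subject from corrupting the three-way split.

```rust
fn parse_line(line: &str) -> Option<(String, String, String)> {
    // Split on the ASCII unit separator (0x1F), matching --format=%h%x1f%s%x1f%ar.
    let mut parts = line.splitn(3, '\x1f');
    match (parts.next(), parts.next(), parts.next()) {
        (Some(h), Some(s), Some(t)) => Some((h.into(), s.into(), t.into())),
        _ => None,
    }
}

fn main() {
    // A '|' in the subject no longer breaks the split.
    let line = "abc1234\x1ffix: a | b in subject\x1f2 days ago";
    let (hash, subject, when) = parse_line(line).unwrap();
    assert_eq!(hash, "abc1234");
    assert_eq!(subject, "fix: a | b in subject");
    assert_eq!(when, "2 days ago");
    println!("ok");
}
```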
- Fix WebSocket reconnect: centralize ws_tx/ws_rx in Services so reconnect shares the same event pipeline instead of dropping the receiver immediately
- Fix ping keepalive: actually send JSON ping commands via Arc<Mutex<SplitSink>> instead of a no-op loop
- Settings dialog now triggers WS reconnect after saving
- Add toast notification system (ToastOverlay) for user feedback on API successes/failures across spawn, kill, merge, restart, logs
- Add sidebar right-click context menus for worktrees (Kill, Merge) and agents (Kill, Restart, View Logs)
- Wire setup view as conditional first-launch flow that checks for ppg and tmux prerequisites before connecting
- Fix command palette timer leak by storing SourceId and cancelling on dialog close
- Add PR URL display in worktree detail info grid
- Remove 5 dead CSS classes (terminal-pane, command-palette, status-badge, heatmap, commit-row)
Actionable comments posted: 7
♻️ Duplicate comments (1)
linux/src/ui/worktree_detail.rs (1)
111-112: ⚠️ Potential issue | 🔴 Critical
Previously reported: lock guard is still held across .await in both handlers.
This was flagged earlier and is still present on Line 112 and Line 144 (read().unwrap().<async>().await), with the same deadlock/panic risk.
🐛 Proposed fix
- let client = services_kill.client.clone();
+ let client = match services_kill.client.read() {
+     Ok(guard) => guard.clone(),
+     Err(_) => return,
+ };
@@
- match client.read().unwrap().kill_worktree(&id).await {
+ match client.kill_worktree(&id).await {
@@
- let client = services_merge.client.clone();
+ let client = match services_merge.client.read() {
+     Ok(guard) => guard.clone(),
+     Err(_) => return,
+ };
@@
- match client.read().unwrap().merge_worktree(&id, &req).await {
+ match client.merge_worktree(&id, &req).await {

Also applies to: 144-145
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@linux/src/ui/worktree_detail.rs` around lines 111 - 112, The read lock is being held across an .await in the spawn handlers (e.g., services_kill.runtime.spawn calling client.read().unwrap().kill_worktree(&id).await and the similar client.read().unwrap().<async>().await at lines ~144-145); fix by acquiring the read guard, cloning or extracting the inner client/needed data while the guard is held (for example clone the Arc/handle or copy required fields), immediately drop the guard, then call the async method (kill_worktree or the other async call) on the cloned/extracted value so no RwLockGuard is held across .await; update both spawn handlers (the services_kill.runtime.spawn block(s) that call kill_worktree and the other client.read().unwrap().<async>().await call) accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@linux/src/state.rs`:
- Around line 82-86: The update_settings method currently swallows errors from
inner.settings.save(); change it to return a Result so callers can observe
failures: update the signature pub fn update_settings<F: FnOnce(&mut
AppSettings)>(&self, f: F) -> Result<(), E> (use your crate's error type, e.g.,
anyhow::Error or a custom SaveError), keep acquiring the write lock and calling
f(&mut inner.settings) as before, then call inner.settings.save()? and
map/propagate the returned error so update_settings returns Err(save_err) on
failure; alternatively, if you must keep the void signature, at minimum log the
error from inner.settings.save() using your logging facility (e.g., log::error!
or tracing::error!) with context mentioning update_settings and the save error.
Ensure references are to update_settings, AppSettings, and
inner.settings.save().
- Line 100: The code leaks the Tokio runtime by calling std::mem::forget on the
runtime handle; remove those mem::forget calls and keep ownership of the runtime
inside Services so it can be dropped/shutdown normally. Change the Services
field from tokio::runtime::Handle to owning tokio::runtime::Runtime (update the
struct field named runtime), remove all std::mem::forget(...) usages that
currently leak the runtime, and either implement Drop for Services to call
Runtime::shutdown_background()/shutdown_timeout or rely on dropping the Runtime
to clean up threads; update any constructors (e.g., Services::new or the
function that creates the runtime) to store the owned Runtime rather than
leaking a Handle.
In `@linux/src/ui/command_palette.rs`:
- Around line 180-196: The code currently calls
client.read().unwrap().spawn(&req).await inside the async block, which holds the
RwLock read guard across .await and uses unwrap; fix this by acquiring the read
guard and cloning the underlying PpgClient
(services_spawn.client.read().expect("...").clone()) before creating the async
move, then call cloned_client.spawn(&req).await inside the spawned task (and
replace unwrap with expect or proper error handling to avoid panic-on-poison);
update references to use the cloned client variable instead of client.read()
inside the async block and keep SpawnRequest construction unchanged.
In `@linux/src/ui/settings_dialog.rs`:
- Around line 50-54: The code currently holds a read RwLock guard on
services_test.client across an await by calling
client.read().unwrap().test_connection().await; instead, acquire the read guard
on services_test.client, clone the inner PpgClient (the cloned type used by
test_connection), drop the guard, and then call the async test_connection() on
the cloned client inside the runtime.spawn block; update the closure to use the
cloned client (e.g., clone the PpgClient from services_test.client.read() before
the async) so no lock is held across .await.
In `@linux/src/ui/sidebar.rs`:
- Around line 213-249: The RwLock read guards are being held across awaits in
the action handlers (e.g., the kill_action closure using
services_kill.client.read().unwrap().kill_worktree and the merge_action using
services_merge.client.read().unwrap().merge_worktree); fix by taking the read
lock synchronously, cloning the inner client/handle into a local variable,
dropping the guard, and then spawning/awaiting using the cloned handle so no
RwLock guard lives across await points (apply the same pattern to the
kill_agent, restart_agent and agent_logs handlers).
In `@linux/src/ui/window.rs`:
- Around line 119-127: The shortcut is currently invoking the app-scoped action
name but the action is registered on the window; update the shortcut string to
target the window scope. Locate the creation/registration of the palette action
(palette_action via gio::SimpleAction::new("command-palette", ...) and
window.add_action(&palette_action)) and change the accelerator/shortcut
reference from "app.command-palette" to "win.command-palette" so the
Ctrl+Shift+P binding calls the window-scoped action.
- Around line 317-319: The code is holding a std::sync::RwLockReadGuard across
an .await (e.g., the expression client.read().unwrap().status().await inside the
services.runtime.spawn closure), which is not allowed; fix by cloning or
otherwise extracting the inner client handle while holding the read lock and
then dropping the guard before awaiting (for example, let client_clone =
client.read().unwrap().clone(); drop(guard); client_clone.status().await). Apply
the same pattern to all occurrences (search for
client.read().unwrap().status().await in window.rs, worktree_detail.rs,
sidebar.rs, command_palette.rs) so the RwLock guard is released before any
await.
---
Duplicate comments:
In `@linux/src/ui/worktree_detail.rs`:
- Around line 111-112: The read lock is being held across an .await in the spawn
handlers (e.g., services_kill.runtime.spawn calling
client.read().unwrap().kill_worktree(&id).await and the similar
client.read().unwrap().<async>().await at lines ~144-145); fix by acquiring the
read guard, cloning or extracting the inner client/needed data while the guard
is held (for example clone the Arc/handle or copy required fields), immediately
drop the guard, then call the async method (kill_worktree or the other async
call) on the cloned/extracted value so no RwLockGuard is held across .await;
update both spawn handlers (the services_kill.runtime.spawn block(s) that call
kill_worktree and the other client.read().unwrap().<async>().await call)
accordingly.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
linux/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (9)
- linux/src/api/websocket.rs
- linux/src/app.rs
- linux/src/state.rs
- linux/src/style.css
- linux/src/ui/command_palette.rs
- linux/src/ui/settings_dialog.rs
- linux/src/ui/sidebar.rs
- linux/src/ui/window.rs
- linux/src/ui/worktree_detail.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- linux/src/api/websocket.rs
pub fn update_settings<F: FnOnce(&mut AppSettings)>(&self, f: F) {
    let mut inner = self.inner.write().unwrap();
    f(&mut inner.settings);
    let _ = inner.settings.save();
}
Do not swallow settings persistence failures.
Line 85 ignores save() errors, so failed writes are silent. Please log/propagate this failure path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@linux/src/state.rs` around lines 82 - 86, The update_settings method
currently swallows errors from inner.settings.save(); change it to return a
Result so callers can observe failures: update the signature pub fn
update_settings<F: FnOnce(&mut AppSettings)>(&self, f: F) -> Result<(), E> (use
your crate's error type, e.g., anyhow::Error or a custom SaveError), keep
acquiring the write lock and calling f(&mut inner.settings) as before, then call
inner.settings.save()? and map/propagate the returned error so update_settings
returns Err(save_err) on failure; alternatively, if you must keep the void
signature, at minimum log the error from inner.settings.save() using your
logging facility (e.g., log::error! or tracing::error!) with context mentioning
update_settings and the save error. Ensure references are to update_settings,
AppSettings, and inner.settings.save().
pub state: AppState,
pub client: Arc<RwLock<PpgClient>>,
pub ws: Arc<RwLock<WsManager>>,
pub runtime: tokio::runtime::Handle,
Avoid leaking the Tokio runtime with std::mem::forget.
Line 128 permanently leaks runtime threads/resources. Keep the runtime owned by Services instead of leaking it.
💡 Proposed fix
pub struct Services {
@@
- pub runtime: tokio::runtime::Handle,
+ pub runtime: Arc<tokio::runtime::Runtime>,
@@
- let runtime = tokio::runtime::Builder::new_multi_thread()
+ let runtime = Arc::new(tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
- .expect("Failed to create tokio runtime");
- let handle = runtime.handle().clone();
-
- // Keep runtime alive by leaking it — it lives for the app's lifetime.
- std::mem::forget(runtime);
+ .expect("Failed to create tokio runtime"));
@@
- runtime: handle,
+ runtime,
@@
- &self.runtime,
+ self.runtime.handle(),
);

Also applies to: 121-129, 137-137, 165-165
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@linux/src/state.rs` at line 100, The code leaks the Tokio runtime by calling
std::mem::forget on the runtime handle; remove those mem::forget calls and keep
ownership of the runtime inside Services so it can be dropped/shutdown normally.
Change the Services field from tokio::runtime::Handle to owning
tokio::runtime::Runtime (update the struct field named runtime), remove all
std::mem::forget(...) usages that currently leak the runtime, and either
implement Drop for Services to call
Runtime::shutdown_background()/shutdown_timeout or rely on dropping the Runtime
to clean up threads; update any constructors (e.g., Services::new or the
function that creates the runtime) to store the owned Runtime rather than
leaking a Handle.
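The ownership idea generalizes beyond tokio. As a std-only analogy (no tokio dependency, so the shapes differ from the real `Runtime` API), the sketch below keeps a worker resource as an owned struct field and shuts it down in `Drop` instead of leaking it with `mem::forget`; all names here are illustrative.

```rust
use std::sync::mpsc;
use std::thread::{self, JoinHandle};

// Analog of Services owning a tokio Runtime: the worker is an owned field,
// so it is joined on Drop instead of being leaked with mem::forget.
struct Services {
    tx: Option<mpsc::Sender<String>>,
    worker: Option<JoinHandle<usize>>,
}

impl Services {
    fn new() -> Self {
        let (tx, rx) = mpsc::channel::<String>();
        // Worker drains the channel and reports how many jobs it saw.
        let worker = thread::spawn(move || rx.iter().count());
        Services { tx: Some(tx), worker: Some(worker) }
    }

    fn submit(&self, job: &str) {
        if let Some(tx) = &self.tx {
            let _ = tx.send(job.to_string());
        }
    }

    // Explicit shutdown, analogous to Runtime::shutdown_background/_timeout.
    fn shutdown(mut self) -> usize {
        self.tx.take(); // close the channel so the worker loop ends
        self.worker.take().map(|h| h.join().unwrap()).unwrap_or(0)
    }
}

impl Drop for Services {
    fn drop(&mut self) {
        self.tx.take();
        if let Some(h) = self.worker.take() {
            let _ = h.join(); // clean up instead of leaking the thread
        }
    }
}

fn main() {
    let services = Services::new();
    services.submit("status");
    services.submit("spawn");
    assert_eq!(services.shutdown(), 2);
    println!("ok");
}
```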
let client = services_spawn.client.clone();
let variant = variant_id.clone();
let prompt_text = prompt.clone();
let toast_tx = services_spawn.toast_tx.clone();
services_spawn.runtime.spawn(async move {
    let req = SpawnRequest {
        name: variant.clone(),
        agent: Some(variant),
        prompt: if prompt_text.is_empty() {
            None
        } else {
            Some(prompt_text)
        },
        count: None,
    };
    match client.read().unwrap().spawn(&req).await {
        Ok(resp) => {
🧩 Analysis chain
🏁 Script executed:
cat -n linux/src/ui/command_palette.rs | sed -n '175,200p'
Length of output: 1370
🏁 Script executed:
# Find the struct definition containing services_spawn
rg "services_spawn.*client" -B 5 -A 5 linux/src/ui/command_palette.rs | head -30
Length of output: 475
🏁 Script executed:
# Also look for the struct/function definition this code is in
sed -n '160,184p' linux/src/ui/command_palette.rs
Length of output: 1049
🏁 Script executed:
# Look for the function signature or where services parameter comes from
sed -n '130,175p' linux/src/ui/command_palette.rs
Length of output: 1874
🏁 Script executed:
# Search for struct definitions with a client field that is a lock
rg "struct.*{" linux/src/ui/command_palette.rs -A 10
Length of output: 144
🏁 Script executed:
# Find the function that contains this code
rg "fn.*{" linux/src/ui/command_palette.rs -B 2 | head -40
Length of output: 136
🏁 Script executed:
# Look at the beginning of the file for the function signature
head -150 linux/src/ui/command_palette.rs | tail -100
Length of output: 4096
🏁 Script executed:
# Look for the function definition - search for "pub fn" or "fn"
rg "^pub fn|^fn" linux/src/ui/command_palette.rs
Length of output: 129
🏁 Script executed:
# Check the full file structure
wc -l linux/src/ui/command_palette.rs
head -50 linux/src/ui/command_palette.rs
Length of output: 1721
🏁 Script executed:
# Find the Services struct definition
find linux/src -name "*.rs" -type f -exec grep -l "struct Services" {} \;
Length of output: 81
🏁 Script executed:
# Search for Services definition
rg "struct Services" linux/src -A 10
Length of output: 821
🏁 Script executed:
# Now I have the critical info: client is Arc<RwLock<PpgClient>>
# Let's verify if PpgClient implements Clone
rg "struct PpgClient|impl Clone.*PpgClient|#\[derive.*Clone.*\]" linux/src -B 2 -A 5 | head -50
Length of output: 2464
🏁 Script executed:
# Also check the api/client module
find linux/src -name "*.rs" -path "*/api/*" | head -10
Length of output: 134
🏁 Script executed:
# Find PpgClient definition
rg "struct PpgClient|impl.*PpgClient" linux/src/api/client.rs -B 2 -A 15 | head -60
Length of output: 1062
🏁 Script executed:
# Also check if the lock guard is Send
rg "impl Send|impl Sync" linux/src/api/client.rs -B 5 -A 5
Length of output: 45
🏁 Script executed:
# Check if PpgClient::spawn is an async method
rg "pub async fn spawn|pub fn spawn" linux/src/api/client.rs -A 5
Length of output: 363
🏁 Script executed:
# Check the actual tokio::spawn signature and whether the code compiles
# Let's also check what RwLock is used
rg "use.*RwLock|Arc<RwLock" linux/src/state.rs -B 2 -A 2
Length of output: 907
🏁 Script executed:
# Let's check if this code compiles - search for any compilation errors or comments about this
rg "client.read|RwLock" linux/src/ui/command_palette.rs -B 2 -A 2
Length of output: 299
🏁 Script executed:
# Let's verify the full context around the async block to understand the lock guard issue
sed -n '180,210p' linux/src/ui/command_palette.rs
Length of output: 1485
Clone the client outside the async block to avoid holding the lock guard across .await.
Line 195 acquires a read lock from client.read().unwrap() and holds it across the spawn(&req).await call. This blocks other readers/writers during the network request, creating lock contention and deadlock risk. Additionally, the .unwrap() adds panic-on-poison risk.
Instead, clone the PpgClient value outside the async block before awaiting:
Proposed fix
- let client = services_spawn.client.clone();
+ let client = match services_spawn.client.read() {
+ Ok(guard) => guard.clone(),
+ Err(_) => return,
+ };
@@
- match client.read().unwrap().spawn(&req).await {
+ match client.spawn(&req).await {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@linux/src/ui/command_palette.rs` around lines 180 - 196, The code currently
calls client.read().unwrap().spawn(&req).await inside the async block, which
holds the RwLock read guard across .await and uses unwrap; fix this by acquiring
the read guard and cloning the underlying PpgClient
(services_spawn.client.read().expect("...").clone()) before creating the async
move, then call cloned_client.spawn(&req).await inside the spawned task (and
replace unwrap with expect or proper error handling to avoid panic-on-poison);
update references to use the cloned client variable instead of client.read()
inside the async block and keep SpawnRequest construction unchanged.
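The clone-before-await pattern can be demonstrated with plain std types. In this sketch the `Client` type is hypothetical but cheaply clonable (the real `PpgClient` derives `Clone` and wraps a `reqwest::Client`, which is internally reference-counted), so the read guard lives only for the duration of the clone and is never held while the slow call runs.

```rust
use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Duration;

// Hypothetical stand-in for PpgClient: cheap to clone.
#[derive(Clone)]
struct Client {
    base_url: String,
}

impl Client {
    // Stand-in for the network call; the lock is NOT held while this runs.
    fn spawn_agent(&self, name: &str) -> String {
        thread::sleep(Duration::from_millis(5));
        format!("{}/api/spawn/{}", self.base_url, name)
    }
}

fn main() {
    let shared = Arc::new(RwLock::new(Client { base_url: "http://localhost:7777".into() }));

    // Take the lock only long enough to clone; the guard is dropped at the
    // end of this statement, so a writer (e.g. the settings dialog swapping
    // the client) is never blocked for the duration of a request.
    let client = shared.read().unwrap().clone();

    let url = client.spawn_agent("sonnet");
    assert_eq!(url, "http://localhost:7777/api/spawn/sonnet");

    // A writer can proceed immediately, even mid-request.
    shared.write().unwrap().base_url = "http://localhost:8888".into();
    println!("ok");
}
```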
let client = services_test.client.clone();
let btn = test_btn_ref.clone();
services_test.runtime.spawn(async move {
    let result = client.read().unwrap().test_connection().await;
    let label = match result {
🧩 Analysis chain
🏁 Script executed:
find . -type f -name "settings_dialog.rs" | head -20
Length of output: 96
🏁 Script executed:
# Search for the specific file mentioned in the review
git ls-files | grep -i settings_dialog.rs
Length of output: 94
🏁 Script executed:
# If found, let's read the context around lines 50-54
find . -name "settings_dialog.rs" -type f -exec wc -l {} \;
Length of output: 100
🏁 Script executed:
sed -n '40,65p' linux/src/ui/settings_dialog.rs
Length of output: 1122
🏁 Script executed:
# Find the Services struct definition
rg -A 5 "struct Services|pub struct Services" linux/src/
Length of output: 413
🏁 Script executed:
# Search for client field definition
rg "client:" linux/src/ -B 2 -A 1
Length of output: 2218
🏁 Script executed:
# Find test_connection method in PpgClient
rg -A 10 "fn test_connection|pub fn test_connection" linux/src/api/client.rs
Length of output: 361
🏁 Script executed:
# Check if PpgClient is Clone and what's in it
rg -B 5 -A 15 "pub struct PpgClient" linux/src/api/client.rs
Length of output: 593
🏁 Script executed:
# Check if the proposed fix would even compile
# The proposed fix tries to clone a RwLockReadGuard, which doesn't implement Clone
# Let me verify by checking what PpgClient looks like and if it's safe to clone within the lock
cat linux/src/api/client.rs | head -50
Length of output: 1203
🏁 Script executed:
# Let's verify the exact issue: can we clone PpgClient and is it safe?
rg "impl Clone for PpgClient|#\[derive.*Clone" linux/src/api/client.rs -B 2 -A 2
Length of output: 167
🏁 Script executed:
# Verify: does RwLockReadGuard implement Clone? It shouldn't
# Let me check Rust std library behavior by examining patterns in the codebase
rg "RwLockReadGuard" linux/src/ -A 2 -B 2
Length of output: 45
Acquire the RwLock guard and clone the client before the async operation.
Line 53 acquires a read guard on the RwLock and holds it across .await, which can cause deadlocks under contention. The test_connection() call is async and may yield control, allowing another task to attempt acquiring the write lock.
Since PpgClient is Clone, acquire and clone it within the lock, then drop the guard before entering the async block:
Suggested fix
let client = services_test.client.clone();
+ let client = client.read().unwrap().clone();
let btn = test_btn_ref.clone();
services_test.runtime.spawn(async move {
- let result = client.read().unwrap().test_connection().await;
+ let result = client.test_connection().await;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:
let client = services_test.client.clone();
let btn = test_btn_ref.clone();
services_test.runtime.spawn(async move {
    let result = client.read().unwrap().test_connection().await;
    let label = match result {

Suggested:
let client = services_test.client.clone();
let client = client.read().unwrap().clone();
let btn = test_btn_ref.clone();
services_test.runtime.spawn(async move {
    let result = client.test_connection().await;
    let label = match result {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@linux/src/ui/settings_dialog.rs` around lines 50 - 54, The code currently
holds a read RwLock guard on services_test.client across an await by calling
client.read().unwrap().test_connection().await; instead, acquire the read guard
on services_test.client, clone the inner PpgClient (the cloned type used by
test_connection), drop the guard, and then call the async test_connection() on
the cloned client inside the runtime.spawn block; update the closure to use the
cloned client (e.g., clone the PpgClient from services_test.client.read() before
the async) so no lock is held across .await.
kill_action.connect_activate(move |_, _| {
    let client = services_kill.client.clone();
    let id = wt_id_kill.clone();
    let toast_tx = services_kill.toast_tx.clone();
    services_kill.runtime.spawn(async move {
        match client.read().unwrap().kill_worktree(&id).await {
            Ok(_) => {
                let _ = toast_tx.send(crate::state::ToastMessage {
                    text: format!("Killed worktree {}", id),
                    is_error: false,
                }).await;
            }
            Err(e) => {
                let _ = toast_tx.send(crate::state::ToastMessage {
                    text: format!("Kill failed: {}", e),
                    is_error: true,
                }).await;
            }
        }
    });
});
action_group.add_action(&kill_action);

let merge_action = gio::SimpleAction::new(&format!("merge.{}", wt_id), None);
let services_merge = services_ctx.clone();
let wt_id_merge = wt_id.clone();
merge_action.connect_activate(move |_, _| {
    let client = services_merge.client.clone();
    let id = wt_id_merge.clone();
    let toast_tx = services_merge.toast_tx.clone();
    services_merge.runtime.spawn(async move {
        let req = crate::api::client::MergeRequest {
            strategy: Some("squash".to_string()),
            cleanup: Some(true),
            force: None,
        };
        match client.read().unwrap().merge_worktree(&id, &req).await {
🧩 Analysis chain
🏁 Script executed:
find . -name "sidebar.rs" -type f
Length of output: 88
🏁 Script executed:
wc -l ./linux/src/ui/sidebar.rs
Length of output: 92
🏁 Script executed:
sed -n '200,260p' ./linux/src/ui/sidebar.rs | cat -n
Length of output: 3070
🏁 Script executed:
sed -n '1,50p' ./linux/src/ui/sidebar.rs | cat -n
Length of output: 2172
🏁 Script executed:
find . -name "*.rs" -type f | xargs grep -l "struct Services" | head -5
Length of output: 83
🏁 Script executed:
cat -n ./linux/src/state.rs | head -100
Length of output: 3656
🏁 Script executed:
sed -n '100,120p' ./linux/src/state.rs | cat -n
Length of output: 1056
🏁 Script executed:
find . -name "*.rs" -type f | xargs grep -l "struct PpgClient\|impl.*PpgClient" | head -5
Length of output: 88
🏁 Script executed:
cat -n ./linux/src/api/client.rs | head -150
Length of output: 5329
🏁 Script executed:
sed -n '320,370p' ./linux/src/ui/sidebar.rs | cat -n
Length of output: 2465
🏁 Script executed:
sed -n '150,250p' ./linux/src/api/client.rs | cat -n
Length of output: 3916
🏁 Script executed:
sed -n '380,400p' ./linux/src/ui/sidebar.rs | cat -n
Length of output: 1233
RwLock guards are held across network awaits in the action handlers, risking lock-poisoning panics and contention.
The handlers clone the Arc but acquire the RwLock inside the async block, holding the guard across network calls. Clone the client before spawning the async task:
- Line 213–249 (kill_worktree)
- Line 333–365 (kill_agent, restart_agent)
- Line 391+ (agent_logs)
Recommended fix pattern
- let client = services_kill.client.clone();
+ let client = match services_kill.client.read() {
+ Ok(guard) => guard.clone(),
+ Err(_) => return,
+ };
@@
- match client.read().unwrap().kill_worktree(&id).await {
+ match client.kill_worktree(&id).await {
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:
kill_action.connect_activate(move |_, _| {
    let client = services_kill.client.clone();
    let id = wt_id_kill.clone();
    let toast_tx = services_kill.toast_tx.clone();
    services_kill.runtime.spawn(async move {
        match client.read().unwrap().kill_worktree(&id).await {
            Ok(_) => {
                let _ = toast_tx.send(crate::state::ToastMessage {
                    text: format!("Killed worktree {}", id),
                    is_error: false,
                }).await;
            }
            Err(e) => {
                let _ = toast_tx.send(crate::state::ToastMessage {
                    text: format!("Kill failed: {}", e),
                    is_error: true,
                }).await;
            }
        }
    });
});
action_group.add_action(&kill_action);
let merge_action = gio::SimpleAction::new(&format!("merge.{}", wt_id), None);
let services_merge = services_ctx.clone();
let wt_id_merge = wt_id.clone();
merge_action.connect_activate(move |_, _| {
    let client = services_merge.client.clone();
    let id = wt_id_merge.clone();
    let toast_tx = services_merge.toast_tx.clone();
    services_merge.runtime.spawn(async move {
        let req = crate::api::client::MergeRequest {
            strategy: Some("squash".to_string()),
            cleanup: Some(true),
            force: None,
        };
        match client.read().unwrap().merge_worktree(&id, &req).await {

Suggested:
kill_action.connect_activate(move |_, _| {
    let client = match services_kill.client.read() {
        Ok(guard) => guard.clone(),
        Err(_) => return,
    };
    let id = wt_id_kill.clone();
    let toast_tx = services_kill.toast_tx.clone();
    services_kill.runtime.spawn(async move {
        match client.kill_worktree(&id).await {
            Ok(_) => {
                let _ = toast_tx.send(crate::state::ToastMessage {
                    text: format!("Killed worktree {}", id),
                    is_error: false,
                }).await;
            }
            Err(e) => {
                let _ = toast_tx.send(crate::state::ToastMessage {
                    text: format!("Kill failed: {}", e),
                    is_error: true,
                }).await;
            }
        }
    });
});
action_group.add_action(&kill_action);
let merge_action = gio::SimpleAction::new(&format!("merge.{}", wt_id), None);
let services_merge = services_ctx.clone();
let wt_id_merge = wt_id.clone();
merge_action.connect_activate(move |_, _| {
    let client = services_merge.client.clone();
    let id = wt_id_merge.clone();
    let toast_tx = services_merge.toast_tx.clone();
    services_merge.runtime.spawn(async move {
        let req = crate::api::client::MergeRequest {
            strategy: Some("squash".to_string()),
            cleanup: Some(true),
            force: None,
        };
        match client.read().unwrap().merge_worktree(&id, &req).await {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@linux/src/ui/sidebar.rs` around lines 213 - 249, The RwLock read guards are
being held across awaits in the action handlers (e.g., the kill_action closure
using services_kill.client.read().unwrap().kill_worktree and the merge_action
using services_merge.client.read().unwrap().merge_worktree); fix by taking the
read lock synchronously, cloning the inner client/handle into a local variable,
dropping the guard, and then spawning/awaiting using the cloned handle so no
RwLock guard lives across await points (apply the same pattern to the
kill_agent, restart_agent and agent_logs handlers).
| let palette_action = gio::SimpleAction::new("command-palette", None); | ||
| let services_cp = services.clone(); | ||
| let window_ref = window.clone(); | ||
| palette_action.connect_activate(move |_, _| { | ||
| let palette = CommandPalette::new(services_cp.clone()); | ||
| palette.present(&window_ref); | ||
| }); | ||
| window.add_action(&palette_action); | ||
|
|
🧩 Analysis chain
🏁 Script executed:
fd linux/src/ui/window.rs --type f -x wc -l {}
Repository: 2witstudios/ppg-cli
Length of output: 45
🏁 Script executed:
sed -n '115,140p' linux/src/ui/window.rs
Length of output: 1256
🏁 Script executed:
grep -n "command-palette" linux/src/ui/window.rs
Length of output: 244
🏁 Script executed:
rg "command-palette" --type rust -B 2 -A 2
Length of output: 887
🏁 Script executed:
rg "SimpleAction::new.*command-palette" --type rust
Length of output: 163
🏁 Script executed:
rg "add_action.*command-palette" --type rust
Length of output: 45
🏁 Script executed:
rg "app\." linux/src/ui/window.rs | grep -i action | head -20
Length of output: 243
Change shortcut action scope from app to win.
The command-palette action is registered to the window at line 126, but the shortcut at line 132 attempts to invoke app.command-palette. This scope mismatch will cause the Ctrl+Shift+P shortcut to fail. Change the action string to reference the window scope:
Fix
- let action = gtk::ShortcutAction::parse_string("action(app.command-palette)").unwrap();
+ let action = gtk::ShortcutAction::parse_string("action(win.command-palette)").unwrap();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@linux/src/ui/window.rs` around lines 119 - 127, The shortcut is currently
invoking the app-scoped action name but the action is registered on the window;
update the shortcut string to target the window scope. Locate the
creation/registration of the palette action (palette_action via
gio::SimpleAction::new("command-palette", ...) and
window.add_action(&palette_action)) and change the accelerator/shortcut
reference from "app.command-palette" to "win.command-palette" so the
Ctrl+Shift+P binding calls the window-scoped action.
| services.runtime.spawn(async move { | ||
| match client.read().unwrap().status().await { | ||
| Ok(manifest) => { |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# First, let's locate and read the file to see the context around lines 317-319
wc -l linux/src/ui/window.rs
Length of output: 89
🏁 Script executed:
#!/bin/bash
# Read the code around the specified lines with context
sed -n '300,330p' linux/src/ui/window.rs | cat -n
Length of output: 1647
🏁 Script executed:
#!/bin/bash
# Let's find where client is defined in this file
rg -n "let.*client\s*=" linux/src/ui/window.rs | head -20
Length of output: 112
🏁 Script executed:
#!/bin/bash
# Look for the client type definition more broadly in the function context
sed -n '280,330p' linux/src/ui/window.rs | cat -n
Length of output: 2396
🏁 Script executed:
#!/bin/bash
# Find services struct definition to see the client field type
rg -n "struct.*Services\|field.*client\|client.*:" linux/src/ui/window.rs | head -20
Length of output: 45
🏁 Script executed:
#!/bin/bash
# Look for Services struct in the entire codebase
fd -t f "\.rs$" linux | xargs rg -l "struct Services" | head -5
Length of output: 81
🏁 Script executed:
#!/bin/bash
# Search for Services definition in common locations
rg -n "struct Services" --type rust | head -10
Length of output: 106
🏁 Script executed:
#!/bin/bash
# Read Services struct definition around line 96
sed -n '90,130p' linux/src/state.rs | cat -n
Length of output: 2028
🏁 Script executed:
#!/bin/bash
# Find PpgClient definition and status method
rg -n "struct PpgClient\|impl.*PpgClient\|fn status" --type rust | head -20
Length of output: 45
🏁 Script executed:
#!/bin/bash
# Search for PpgClient in the codebase
fd -t f "\.rs$" | xargs rg -l "struct PpgClient" | head -3
Length of output: 86
🏁 Script executed:
#!/bin/bash
# Read PpgClient struct definition and find status method
rg -n "struct PpgClient|fn status" linux/src/api/client.rs -A 5 | head -30
Length of output: 405
🏁 Script executed:
#!/bin/bash
# Get line count to understand file size
wc -l linux/src/api/client.rs
Length of output: 90
🏁 Script executed:
#!/bin/bash
# Check imports in state.rs to see which RwLock is used
head -30 linux/src/state.rs | grep -E "use|RwLock"
Length of output: 257
🏁 Script executed:
#!/bin/bash
# Also check the full imports section
rg -n "use.*RwLock|use std::sync|use tokio::sync|use parking_lot" linux/src/state.rs
Length of output: 94
🏁 Script executed:
#!/bin/bash
# I need to verify which RwLock is being used and check if the current code
# actually compiles. The issue claims there's a problem with holding an RwLock
# guard across await, which would be a serious Rust compilation error if it's
# std::sync::RwLock.
# Check if the code compiles by looking at the file more broadly
cat -n linux/src/state.rs | head -20
Length of output: 207
🏁 Script executed:
#!/bin/bash
# Verify the exact lines in the file at 317-319
sed -n '315,325p' linux/src/ui/window.rs | cat -n
Length of output: 717
🏁 Script executed:
#!/bin/bash
# Check if the code compiles by looking for build/target files
ls -la | head -20
Length of output: 1267
🏁 Script executed:
#!/bin/bash
# Try to understand the async context better - check if code actually has the pattern
grep -n "read().unwrap().status().await" linux/src/ui/window.rs
Length of output: 124
🏁 Script executed:
#!/bin/bash
# Check if PpgClient derives Clone
rg -B 5 "pub struct PpgClient" linux/src/api/client.rs | head -10
Length of output: 187
🏁 Script executed:
#!/bin/bash
# Check the exact derive attributes and also check Client type
sed -n '1,20p' linux/src/api/client.rs | cat -n
Length of output: 651
🏁 Script executed:
#!/bin/bash
# Check git log to see recent commits and if there were fixes
git log --oneline linux/src/ui/window.rs | head -10
Length of output: 127
🏁 Script executed:
#!/bin/bash
# I need to verify if this code actually compiles or if it's a known issue.
# Let me check the recent changes and see if there's a compilation error.
# Check if there's a Cargo.toml and what version of Rust is expected
cat Cargo.toml | head -30
Length of output: 207
🏁 Script executed:
#!/bin/bash
# Actually, let me check if this code would even compile. std::sync::RwLock::read()
# returns a guard that doesn't implement Send, so you can't use it across an
# await boundary. This is a hard compiler error.
# Let me verify this is the actual current state of the code.
# Verify line numbers more carefully - the original comment says lines 317-319
# Let's check the actual content
sed -n '316,320p' linux/src/ui/window.rs | cat -n
Length of output: 353
🏁 Script executed:
#!/bin/bash
# Verify exact lines at 316-320
sed -n '316,320p' linux/src/ui/window.rs | cat -n
Length of output: 329
🏁 Script executed:
#!/bin/bash
# Check what the recent fix commit did
git show 8b6b65c --stat | head -20
Length of output: 1077
🏁 Script executed:
#!/bin/bash
# See if there are other instances of this pattern
rg "read\(\)\.unwrap\(\)\." linux/src/ -n
Length of output: 1312
🏁 Script executed:
#!/bin/bash
# Check if the code actually compiles - look for build output
ls -la linux/target/debug 2>/dev/null | head -5 || echo "No build artifacts"
Length of output: 45
🏁 Script executed:
#!/bin/bash
# Let's verify the exact nature of the issue by examining if this is truly a compilation error
# Check how tokio::runtime::Handle::spawn works with non-Send futures
rg -A 5 "services.runtime.spawn" linux/src/ui/window.rs | head -20
Length of output: 349
🏁 Script executed:
#!/bin/bash
# Also check if there's maybe a different RwLock being used from a different crate
rg "use.*RwLock" linux/src/ui/window.rs
Length of output: 45
🏁 Script executed:
#!/bin/bash
# Check all imports in window.rs to see full context
head -40 linux/src/ui/window.rs | cat -n
Length of output: 1603
🏁 Script executed:
#!/bin/bash
# Check if this code compiles - look for Cargo.toml
cat Cargo.toml | grep -A 5 "\[package\]"
Length of output: 105
🏁 Script executed:
#!/bin/bash
# Try to check recent test/build status by looking at git
git log --all --oneline | head -5
Length of output: 378
🏁 Script executed:
#!/bin/bash
# Let me verify by checking what happens when you hold a std::sync::RwLock guard across await
# This should NOT compile in Rust - let's verify the exact semantics
# Check if maybe there's a different runtime being used
rg "tokio::runtime\|spawn" linux/src/state.rs -B 2 -A 2
Length of output: 45
🏁 Script executed:
#!/bin/bash
# Find Cargo.toml - might be in a subdirectory
find . -name "Cargo.toml" -type f | head -5
Length of output: 81
🏁 Script executed:
#!/bin/bash
# Let me verify the core issue: std::sync::RwLock guard across await
# by checking what the actual compiler would say
# First, let's see what version of Rust this uses
rg "rust-version|edition" . --type toml | head -10
Length of output: 139
🏁 Script executed:
#!/bin/bash
# Check if there's a workspace Cargo.toml
ls -la Cargo.* 2>/dev/null || echo "No Cargo files at root"
Length of output: 85
🏁 Script executed:
#!/bin/bash
# Look for any build configuration
ls -la | grep -i cargo
Length of output: 45
🏁 Script executed:
#!/bin/bash
# Check the dependencies in Cargo.toml to understand the full picture
cat linux/Cargo.toml | head -50
Length of output: 922
🏁 Script executed:
#!/bin/bash
# Look for any possible custom RwLock implementations or re-exports
rg "type RwLock|pub use.*RwLock" linux/src/ --type rust
Length of output: 45
🏁 Script executed:
#!/bin/bash
# Check the exact tokio version
cat linux/Cargo.toml | grep tokio
Length of output: 177
🏁 Script executed:
#!/bin/bash
# This is the critical test: check if the code actually compiles
# Let's try to build it
cd linux && cargo check 2>&1 | head -100
Length of output: 309
🏁 Script executed:
#!/bin/bash
# If cargo isn't available, let's see what the actual error would be
# by checking if there are any build artifacts
ls linux/target/ 2>/dev/null || echo "No build artifacts found"
Length of output: 87
🏁 Script executed:
#!/bin/bash
# Let me also verify the exact behavior expected: does the RwLock guard escape the await?
# by reading the exact scope more carefully
sed -n '310,330p' linux/src/ui/window.rs | cat -n
Length of output: 1203
Fix holding std::sync::RwLock guard across .await boundary.
The pattern client.read().unwrap().status().await holds a non-Send RwLockReadGuard across an await point, which is a compile error. Clone the client before await to release the lock guard immediately.
Proposed fix
services.runtime.spawn(async move {
- match client.read().unwrap().status().await {
+ let client = match client.read() {
+ Ok(guard) => guard.clone(),
+ Err(_) => {
+ let _ = toast_tx.try_send(ToastMessage {
+ text: "Connection failed: client lock poisoned".to_string(),
+ is_error: true,
+ });
+ return;
+ }
+ };
+ match client.status().await {
This pattern appears in multiple locations (worktree_detail.rs, sidebar.rs, command_palette.rs).
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| services.runtime.spawn(async move { | |
| match client.read().unwrap().status().await { | |
| Ok(manifest) => { | |
| services.runtime.spawn(async move { | |
| let client = match client.read() { | |
| Ok(guard) => guard.clone(), | |
| Err(_) => { | |
| let _ = toast_tx.try_send(ToastMessage { | |
| text: "Connection failed: client lock poisoned".to_string(), | |
| is_error: true, | |
| }); | |
| return; | |
| } | |
| }; | |
| match client.status().await { | |
| Ok(manifest) => { |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@linux/src/ui/window.rs` around lines 317 - 319, The code is holding a
std::sync::RwLockReadGuard across an .await (e.g., the expression
client.read().unwrap().status().await inside the services.runtime.spawn
closure), which is not allowed; fix by cloning or otherwise extracting the inner
client handle while holding the read lock and then dropping the guard before
awaiting (for example, let client_clone = client.read().unwrap().clone();
drop(guard); client_clone.status().await). Apply the same pattern to all
occurrences (search for client.read().unwrap().status().await in window.rs,
worktree_detail.rs, sidebar.rs, command_palette.rs) so the RwLock guard is
released before any await.
Summary
- Talks to ppg serve via REST API (reqwest) and WebSocket (tokio-tungstenite) with auto-reconnect
What's included
Architecture
- Code lives under linux/
- async_channel for cross-thread WS → GTK dispatch (glib 0.20 compatible)
- camelCase field mapping
Test plan
- cd linux && cargo build --release
- ppg serve start, then ./target/release/ppg-desktop
- Settings file: ~/.config/ppg-desktop/settings.toml
Summary by CodeRabbit
New Features
Enhancements
Tests