feat: CLI for recording, uploading, and video management #1706

salmonumbrella wants to merge 2 commits into CapSoftware:main
apps/cli/src/daemon.rs
Outdated
```diff
@@ -0,0 +1 @@
```
**Build-breaking: empty `daemon.rs` conflicts with `daemon/mod.rs`**

In Rust 2018 and later editions (including 2024 used here), when the compiler resolves `mod daemon;` in `main.rs`, it finds BOTH `src/daemon.rs` AND `src/daemon/mod.rs` and emits a hard error:

```
error[E0761]: file for module `daemon` found at both "src/daemon.rs" and "src/daemon/mod.rs"
```

This empty file needs to be deleted. The actual module declarations live in `apps/cli/src/daemon/mod.rs`, which is the correct single source of truth for the `daemon` module.
apps/cli/src/main.rs

```rust
enum Commands {
    /// Export a '.cap' project to an mp4 file
    Export(Export),
    /// Start a recording or list available capture targets and devices
    Record(RecordArgs),
    Auth(auth::AuthArgs),
    Upload(upload_cmd::UploadArgs),
    List(videos::ListArgs),
    Get(videos::GetArgs),
    Delete(videos::DeleteArgs),
    Open(videos::OpenArgs),
    Orgs(orgs::OrgsArgs),
    S3(s3::S3Args),
}
```
**`cap transcript` and `cap password` commands never registered**

The PR description documents both `cap transcript <id>` and `cap password <id> --set/--remove`, and their full implementations (`TranscriptArgs` with `run()` and `PasswordArgs` with `run()`) exist in `videos.rs`. However, neither type is added to the `Commands` enum here, so both commands are completely unreachable from the CLI — they won't appear in `--help` and cannot be invoked.

The `Commands` enum needs `Transcript(videos::TranscriptArgs)` and `Password(videos::PasswordArgs)` variants, with corresponding `Commands::Transcript(t) => t.run(json_output).await?` and `Commands::Password(p) => p.run(json_output).await?` arms in the `match` below.
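For illustration, the wiring the comment asks for can be sketched without the clap derive machinery (the struct fields and string payloads here are hypothetical stand-ins; the real arms are async and call `run(json_output)`):

```rust
// Hypothetical stand-ins for the types in videos.rs.
struct TranscriptArgs { id: String }
struct PasswordArgs { id: String }

enum Commands {
    Transcript(TranscriptArgs),
    Password(PasswordArgs),
}

// Mirrors the match arms the review asks for; the real CLI would
// instead call `t.run(json_output).await?` / `p.run(json_output).await?`.
fn dispatch(cmd: Commands) -> String {
    match cmd {
        Commands::Transcript(t) => format!("transcript {}", t.id),
        Commands::Password(p) => format!("password {}", p.id),
    }
}

fn main() {
    let out = dispatch(Commands::Transcript(TranscriptArgs { id: "abc".into() }));
    assert_eq!(out, "transcript abc");
}
```

Once the variants exist, clap derives the subcommands and both commands show up in `--help` automatically.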
crates/upload/src/upload.rs

```rust
{
    Ok(resp) => {
        let etag = resp
            .headers()
            .get("etag")
            .and_then(|v| v.to_str().ok())
            .unwrap_or("")
            .to_string();
        return Ok(etag);
    }
```
**S3 error responses silently treated as successful chunk uploads**

The `Ok(resp)` arm never checks `resp.status()`. If S3 returns a 403 (expired presigned URL), 500, or any other error status, the code returns `Ok("")` (empty ETag string) and moves on. The multipart complete call will then fail with a confusing S3 error about malformed ETags rather than a clear "chunk upload failed" message. The retry logic is also bypassed for HTTP-level errors.
```suggestion
Ok(resp) => {
    if !resp.status().is_success() {
        let status = resp.status();
        let body = resp.text().await.unwrap_or_default();
        warn!(part_number, attempt, %status, body = %body, "Chunk upload HTTP error");
        continue;
    }
    let etag = resp
        .headers()
        .get("etag")
        .and_then(|v| v.to_str().ok())
        .unwrap_or("")
        .to_string();
    return Ok(etag);
}
```
apps/cli/src/auth.rs
Outdated
```rust
    .readable()
    .await
    .map_err(|e| format!("Stream not readable: {e}"))?;
let n = stream
    .try_read(&mut buf)
    .map_err(|e| format!("Failed to read: {e}"))?;
let request = String::from_utf8_lossy(&buf[..n]);

let api_key = extract_query_param(&request, "api_key");
let user_id = extract_query_param(&request, "user_id");

let response_body = if api_key.is_some() {
    "Authentication successful! You can close this tab."
} else {
    "Authentication failed. Please try again."
```
**`try_read` / `try_write` may silently transfer partial HTTP data**

`try_read` and `try_write` are non-blocking: they transfer only the bytes available at that instant. `try_read` can legitimately return 0 bytes on a spurious `readable()` wakeup, causing the API key parse to fail. `try_write` may write only part of the HTTP response, leaving the browser with a broken reply. Use `BufReader::read_line` in a loop and `write_all` instead.
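A minimal sketch of the suggested read pattern, using std's `BufRead` over an in-memory request (the actual code is async; tokio's `AsyncBufReadExt::read_line` mirrors this std API, and the response side would likewise use `write_all`):

```rust
use std::io::{BufRead, BufReader, Read};

// Read the HTTP request line-by-line until the blank line that ends the
// headers. read_line keeps pulling bytes until a full line arrives, so a
// partial read cannot truncate the query string carrying the api_key.
fn read_request<R: Read>(stream: R) -> std::io::Result<String> {
    let mut reader = BufReader::new(stream);
    let mut request = String::new();
    loop {
        let mut line = String::new();
        if reader.read_line(&mut line)? == 0 || line == "\r\n" || line == "\n" {
            break; // connection closed or end of headers
        }
        request.push_str(&line);
    }
    Ok(request)
}

fn main() {
    // &[u8] implements Read, standing in for the TcpStream.
    let raw = b"GET /?api_key=abc&user_id=u1 HTTP/1.1\r\nHost: x\r\n\r\n";
    let request = read_request(&raw[..]).unwrap();
    assert!(request.contains("api_key=abc"));
}
```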
apps/cli/src/auth.rs
Outdated
```rust
let (stream, _addr) = listener
    .accept()
```
**No timeout on OAuth callback — CLI hangs indefinitely**

`listener.accept()` blocks forever if the user closes the browser tab. Add a `tokio::time::timeout` (e.g. 5 minutes) so the user gets a clear error message rather than a hung process.
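The async fix would be `tokio::time::timeout(Duration::from_secs(300), listener.accept())`. The same bounded-wait idea can be illustrated with std only (blocking `accept()` on a helper thread, wait capped by `recv_timeout`; the error strings are hypothetical):

```rust
use std::net::{TcpListener, TcpStream};
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run the blocking accept() on a thread and bound the wait, so the CLI
// fails with a clear message instead of hanging forever.
fn accept_with_timeout(
    listener: TcpListener,
    timeout: Duration,
) -> Result<TcpStream, String> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(listener.accept());
    });
    match rx.recv_timeout(timeout) {
        Ok(Ok((stream, _addr))) => Ok(stream),
        Ok(Err(e)) => Err(format!("Failed to accept connection: {e}")),
        Err(_) => Err("Timed out waiting for OAuth callback".to_string()),
    }
}

fn main() {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    // No browser ever connects here, so the wait times out.
    let result = accept_with_timeout(listener, Duration::from_millis(200));
    assert!(result.unwrap_err().contains("Timed out"));
}
```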
apps/cli/src/s3.rs
Outdated
```rust
struct S3ConfigArgs {
    #[arg(long)]
    provider: String,
    #[arg(long)]
    bucket: String,
    #[arg(long)]
    region: String,
    #[arg(long)]
    endpoint: String,
    #[arg(long)]
    access_key_id: String,
    #[arg(long)]
    secret_access_key: String,
}

#[derive(Args)]
struct S3TestArgs {
    #[arg(long)]
    provider: String,
    #[arg(long)]
    bucket: String,
    #[arg(long)]
    region: String,
    #[arg(long)]
    endpoint: String,
    #[arg(long)]
    access_key_id: String,
    #[arg(long)]
    secret_access_key: String,
}

impl S3Args {
    pub async fn run(self, json: bool) -> Result<(), String> {
        match self.command {
            S3Commands::Config(args) => set_config(args, json).await,
            S3Commands::Test(args) => test_config(args, json).await,
            S3Commands::Get => get_config(json).await,
            S3Commands::Delete => delete_config(json).await,
        }
    }
}

fn build_s3_input_from_config(args: &S3ConfigArgs) -> S3ConfigInput {
    S3ConfigInput {
        provider: args.provider.clone(),
        access_key_id: args.access_key_id.clone(),
        secret_access_key: args.secret_access_key.clone(),
        endpoint: args.endpoint.clone(),
        bucket_name: args.bucket.clone(),
        region: args.region.clone(),
    }
}

fn build_s3_input_from_test(args: &S3TestArgs) -> S3ConfigInput {
    S3ConfigInput {
        provider: args.provider.clone(),
        access_key_id: args.access_key_id.clone(),
        secret_access_key: args.secret_access_key.clone(),
        endpoint: args.endpoint.clone(),
        bucket_name: args.bucket.clone(),
        region: args.region.clone(),
    }
```
**`S3ConfigArgs` and `S3TestArgs` are identical — and so are their builder functions**

Both structs have exactly the same six fields, and `build_s3_input_from_config` / `build_s3_input_from_test` are line-for-line duplicates. A single shared `S3BucketArgs` struct reused in both `S3Commands::Config` and `S3Commands::Test` would eliminate the duplication.
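The dedup can be sketched as follows, stripped of the clap attributes so it stands alone (in the real code the shared struct would keep `#[arg(long)]` on each field and be embedded in both subcommand variants, e.g. via clap's `#[command(flatten)]` — an assumption about how it would be wired):

```rust
#[derive(Clone, Debug)]
struct S3ConfigInput {
    provider: String,
    access_key_id: String,
    secret_access_key: String,
    endpoint: String,
    bucket_name: String,
    region: String,
}

// One struct shared by S3Commands::Config and S3Commands::Test
// replaces the two identical arg structs.
struct S3BucketArgs {
    provider: String,
    bucket: String,
    region: String,
    endpoint: String,
    access_key_id: String,
    secret_access_key: String,
}

// One builder replaces the two line-for-line duplicates.
fn build_s3_input(args: &S3BucketArgs) -> S3ConfigInput {
    S3ConfigInput {
        provider: args.provider.clone(),
        access_key_id: args.access_key_id.clone(),
        secret_access_key: args.secret_access_key.clone(),
        endpoint: args.endpoint.clone(),
        bucket_name: args.bucket.clone(),
        region: args.region.clone(),
    }
}

fn main() {
    let args = S3BucketArgs {
        provider: "aws".into(),
        bucket: "my-bucket".into(),
        region: "us-east-1".into(),
        endpoint: "https://s3.amazonaws.com".into(),
        access_key_id: "AKIA...".into(),
        secret_access_key: "secret".into(),
    };
    let input = build_s3_input(&args);
    assert_eq!(input.bucket_name, "my-bucket");
    assert_eq!(input.region, "us-east-1");
}
```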
crates/upload/src/thumbnail.rs

```rust
extract_first_frame(video_path, &frame_path)?;
let jpeg_data = compress_image(&frame_path)?;

debug!(
    video_id,
    size_bytes = jpeg_data.len(),
    "Uploading thumbnail"
);

client
    .upload_signed(video_id, "screenshot/screen-capture.jpg", jpeg_data)
    .await
    .map_err(|e| format!("Failed to upload thumbnail: {e}"))?;

std::fs::remove_file(&frame_path).ok();
Ok(())
}
```
**Temp frame file leaked when `compress_image` fails**

If `extract_first_frame` succeeds but `compress_image` returns an error, the `?` early-returns before `std::fs::remove_file(&frame_path)` is reached, leaving the `.png` file in the temp directory.
```suggestion
extract_first_frame(video_path, &frame_path)?;
let jpeg_result = compress_image(&frame_path);
std::fs::remove_file(&frame_path).ok(); // clean up regardless
let jpeg_data = jpeg_result?;
```
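The cleanup-regardless pattern in the suggestion can be exercised in isolation; a std-only sketch where `compress_image` is a hypothetical stand-in that always fails:

```rust
use std::fs;
use std::path::Path;

// Hypothetical stand-in for the real compressor: always errors,
// simulating the failure path that leaked the temp frame.
fn compress_image(_path: &Path) -> Result<Vec<u8>, String> {
    Err("decode error".to_string())
}

// Run the fallible step, remove the temp file unconditionally,
// then propagate the result with `?`-style semantics.
fn make_thumbnail(frame_path: &Path) -> Result<Vec<u8>, String> {
    let jpeg_result = compress_image(frame_path);
    fs::remove_file(frame_path).ok(); // clean up regardless of the outcome
    jpeg_result
}

fn main() {
    let frame_path = std::env::temp_dir().join("cap-cli-frame-test.png");
    fs::write(&frame_path, b"fake png").unwrap();
    // Compression fails, but the temp file is still removed.
    assert!(make_thumbnail(&frame_path).is_err());
    assert!(!frame_path.exists());
}
```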
Adds a `cap` CLI and supporting `cap-upload` crate for headless screen recording, uploading, and video management. Closes #669.

**New crate: `cap-upload` (`crates/upload/`)**

Standalone upload library with auth, multipart chunked uploads, and a full API client for all desktop endpoints (including feedback submission and debug log upload via multipart).

**CLI commands (`apps/cli/`)**

- `cap auth login/logout/status` (`--api-key` auth)
- `cap upload <file> [--password] [--org]`
- `cap record start/stop/status`
- `cap record screens/windows/cameras`
- `cap record start --auto-zoom/--no-auto-zoom`
- `cap record start --capture-keys/--no-capture-keys`
- `cap record start --exclude <window>`
- `cap export <project> [output]` (export a `.cap` project to MP4)
- `cap config get [--json]`
- `cap config set --fps/--auto-zoom/--capture-keys`
- `cap config set --exclude-add/--exclude-remove/--exclude-reset`
- `cap system-info [--json]`
- `cap feedback <message>`
- `cap debug upload`
- `cap debug logs`
- `cap list [--org] [--limit] [--offset]`
- `cap info <id>`
- `cap transcript <id>`
- `cap password <id> --set/--remove`
- `cap delete/open <id>`
- `cap orgs list`
- `cap s3 config/test/get/delete`

**Settings resolution**
Layered config: CLI flags > `~/.config/cap/settings.json` > Tauri desktop store > built-in defaults. The `cap config` commands read/write the CLI config file; recording flags override everything per-invocation.

**File logging**
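The layered resolution can be sketched as option chaining, one field at a time (the `fps` field and the default of 30 here are hypothetical, not taken from the codebase):

```rust
// Each layer yields Some(value) if it sets the option; the first
// Some wins: CLI flag > CLI config file > desktop store > default.
fn resolve_fps(
    cli_flag: Option<u32>,
    cli_config: Option<u32>,
    desktop_store: Option<u32>,
) -> u32 {
    cli_flag
        .or(cli_config)
        .or(desktop_store)
        .unwrap_or(30) // built-in default (value hypothetical)
}

fn main() {
    // A per-invocation flag overrides everything.
    assert_eq!(resolve_fps(Some(60), Some(24), Some(25)), 60);
    // With no flag, the CLI config file wins over the desktop store.
    assert_eq!(resolve_fps(None, Some(24), Some(25)), 24);
    // Nothing set anywhere: built-in default.
    assert_eq!(resolve_fps(None, None, None), 30);
}
```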
All CLI invocations append structured logs to `~/.config/cap/logs/cap-cli.log` (no ANSI). Upload them with `cap debug upload`.

**Web API additions (`apps/web/`)**

Four new authenticated endpoints under `/api/desktop/video/`:

- `GET /list` — paginated video listing with optional org filter
- `GET /info` — video details including AI metadata
- `GET /transcript` — fetch VTT transcript from S3
- `POST /password` — set or remove video password