feat(auth): auto-migrate legacy credentials.enc to per-account format#239
Conversation
Fixes #232

- Add `migrate_legacy_credentials()` that runs once on startup
- Decrypts legacy file, resolves email via userinfo, re-saves per-account
- Creates `accounts.json` with migrated account as default
- Renames legacy file to `.bak`
- Remove legacy `credentials.enc` fallback from `resolve_account`
- Gracefully handles offline/failure with warnings
🦋 Changeset detected

Latest commit: 9604715

The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
Codecov Report

❌ Patch coverage is low: none of the 102 new lines are covered.

```
@@            Coverage Diff             @@
##             main     #239      +/-   ##
==========================================
- Coverage   57.69%   57.29%   -0.41%
==========================================
  Files          38       38
  Lines       14328    14430     +102
==========================================
  Hits         8267     8267
- Misses       6061     6163     +102
```

☔ View full report in Codecov by Sentry.
Code Review
This pull request introduces an automatic migration for legacy credentials, which is a great enhancement for user experience. The implementation is thorough, with good error handling and user feedback. I've identified two areas for improvement in src/auth.rs:
- The logic for ensuring the migration runs only once per process is complex and involves duplicated code. I've suggested a refactoring to a more standard pattern using `tokio::sync::Mutex`, which is cleaner and easier to maintain.
- The backup mechanism for the legacy credentials file doesn't account for pre-existing backup files on Windows, which could cause the `rename` operation to fail. I've suggested a fix to make this more robust.
Overall, these are solid changes that improve the CLI's usability.
…ckup

Addresses review comments:
- Replace `std::sync::Once` with `AtomicBool` + `tokio::sync::Mutex` to eliminate duplicated precondition checks and simplify concurrency logic for the async migration function.
- Remove existing `.bak` file before rename to prevent failures on Windows, where rename fails if the destination already exists.
/gemini review
Code Review
This pull request introduces a helpful feature to automatically migrate legacy credentials to the new per-account format. The implementation is comprehensive, covering various steps from decryption to creating backups. However, I've identified some critical issues. The new migration function uses blocking file I/O within an async context, which can cause performance problems. Additionally, the JSON parsing for legacy credentials is not robust and could lead to panics. I've also suggested refactoring the large migration function to improve maintainability and testability. Addressing these points will make the migration process more reliable and robust.
src/auth.rs
Outdated
```rust
if backup_path.exists() {
    if let Err(e) = std::fs::remove_file(&backup_path) {
        eprintln!(
            "[gws] Warning: Failed to remove existing backup file '{}': {e}",
            backup_path.display()
        );
    }
}
```
This code uses the blocking `std::fs::remove_file` and `backup_path.exists()` inside an async function. Synchronous I/O operations can block the thread of the async runtime, leading to performance issues or even deadlocks in a concurrent application.

You should use the asynchronous equivalents from `tokio::fs`. This issue is present for other file operations within `migrate_legacy_credentials` as well:

- `credential_store::load_encrypted()` (line 223, uses `std::fs::read`)
- `credential_store::save_encrypted_for()` (line 304, uses blocking I/O)
- `crate::accounts::save_accounts()` (line 313, uses blocking I/O)
- `std::fs::rename()` (line 329)

All these blocking calls should be replaced with their `tokio::fs` counterparts.
```diff
-if backup_path.exists() {
-    if let Err(e) = std::fs::remove_file(&backup_path) {
-        eprintln!(
-            "[gws] Warning: Failed to remove existing backup file '{}': {e}",
-            backup_path.display()
-        );
-    }
-}
+if tokio::fs::metadata(&backup_path).await.is_ok() {
+    if let Err(e) = tokio::fs::remove_file(&backup_path).await {
+        eprintln!(
+            "[gws] Warning: Failed to remove existing backup file '{}': {e}",
+            backup_path.display()
+        );
+    }
+}
```
```rust
async fn migrate_legacy_credentials() {
    use std::sync::atomic::{AtomicBool, Ordering};
    use tokio::sync::Mutex;

    static MIGRATION_LOCK: Mutex<()> = Mutex::const_new(());
    static MIGRATION_ATTEMPTED: AtomicBool = AtomicBool::new(false);

    // Quick, non-locking check to bail out early if migration has already been handled.
    if MIGRATION_ATTEMPTED.load(Ordering::Relaxed) {
        return;
    }

    // Acquire a lock to ensure only one task performs the detailed check and migration.
    let _guard = MIGRATION_LOCK.lock().await;

    // Re-check after acquiring the lock, in case another task just finished.
    if MIGRATION_ATTEMPTED.load(Ordering::SeqCst) {
        return;
    }

    // Mark as attempted before the checks, so we only ever try this logic once per process.
    MIGRATION_ATTEMPTED.store(true, Ordering::SeqCst);

    let legacy_path = credential_store::encrypted_credentials_path();
    let registry = crate::accounts::load_accounts().ok().flatten();

    // Only migrate if legacy file exists AND no accounts registry exists
    if !legacy_path.exists() || registry.is_some() {
        return;
    }

    eprintln!("[gws] Migrating legacy credentials to per-account format...");

    // Decrypt the legacy credentials
    let json_str = match credential_store::load_encrypted() {
        Ok(s) => s,
        Err(e) => {
            eprintln!("[gws] Warning: Failed to decrypt legacy credentials: {e}");
            eprintln!("[gws] Run 'gws auth login' to re-authenticate.");
            return;
        }
    };

    // Parse credentials to get refresh_token
    let creds: serde_json::Value = match serde_json::from_str(&json_str) {
        Ok(v) => v,
        Err(e) => {
            eprintln!("[gws] Warning: Failed to parse legacy credentials: {e}");
            return;
        }
    };

    let client_id = creds["client_id"].as_str().unwrap_or_default();
    let client_secret = creds["client_secret"].as_str().unwrap_or_default();
    let refresh_token = creds["refresh_token"].as_str().unwrap_or_default();

    if client_id.is_empty() || client_secret.is_empty() || refresh_token.is_empty() {
        eprintln!("[gws] Warning: Legacy credentials are incomplete, cannot migrate.");
        eprintln!("[gws] Run 'gws auth login' to re-authenticate.");
        return;
    }

    // Get an access token to determine the email
    let secret = yup_oauth2::authorized_user::AuthorizedUserSecret {
        client_id: client_id.to_string(),
        client_secret: client_secret.to_string(),
        refresh_token: refresh_token.to_string(),
        key_type: "authorized_user".to_string(),
    };

    let auth = match yup_oauth2::AuthorizedUserAuthenticator::builder(secret)
        .build()
        .await
    {
        Ok(a) => a,
        Err(e) => {
            eprintln!("[gws] Warning: Failed to build authenticator for migration: {e}");
            eprintln!("[gws] Run 'gws auth login' to re-authenticate.");
            return;
        }
    };

    let token = match auth
        .token(&["https://www.googleapis.com/auth/userinfo.email"])
        .await
    {
        Ok(t) => t,
        Err(e) => {
            eprintln!("[gws] Warning: Failed to get token for migration: {e}");
            eprintln!("[gws] Run 'gws auth login' to re-authenticate.");
            return;
        }
    };

    let access_token = match token.token() {
        Some(t) => t.to_string(),
        None => {
            eprintln!("[gws] Warning: No access token available for migration.");
            return;
        }
    };

    // Get email via tokeninfo
    let email = match crate::auth_commands::fetch_userinfo_email(&access_token).await {
        Some(e) => e,
        None => {
            eprintln!("[gws] Warning: Could not determine email from legacy credentials.");
            eprintln!("[gws] Run 'gws auth login' to re-authenticate.");
            return;
        }
    };

    eprintln!("[gws] Found account: {email}");

    // Save as per-account credentials
    if let Err(e) = credential_store::save_encrypted_for(&json_str, &email) {
        eprintln!("[gws] Warning: Failed to save migrated credentials: {e}");
        return;
    }

    // Register in accounts.json using the existing helper
    let mut registry = crate::accounts::AccountsRegistry::default();
    crate::accounts::add_account(&mut registry, &email);

    if let Err(e) = crate::accounts::save_accounts(&registry) {
        eprintln!("[gws] Warning: Failed to save accounts registry: {e}");
        return;
    }

    // Rename legacy file to .bak
    // On Windows, `rename` fails if the destination exists. Remove old backup first.
    let backup_path = legacy_path.with_extension("enc.bak");
    if backup_path.exists() {
        if let Err(e) = std::fs::remove_file(&backup_path) {
            eprintln!(
                "[gws] Warning: Failed to remove existing backup file '{}': {e}",
                backup_path.display()
            );
        }
    }
    if let Err(e) = std::fs::rename(&legacy_path, &backup_path) {
        eprintln!("[gws] Warning: Failed to rename legacy credentials: {e}");
        // Still succeeded in migration, just couldn't clean up
    }

    eprintln!(
        "[gws] ✓ Migrated credentials for {}. Backup at {}",
        email,
        backup_path.display()
    );
}
```
The migrate_legacy_credentials function is over 150 lines long and handles many different concerns: concurrency control, file I/O, decryption, JSON parsing, network requests, and state updates. This high complexity makes the function difficult to read, test, and maintain.
Consider refactoring this logic into several smaller, more focused functions. For example:
- A function to decrypt and parse the legacy credentials, returning a `Result`.
- A function to fetch the user's email, given the credentials.
- A function to save the new credentials and update the accounts registry.
- A function to handle file cleanup.
This would improve modularity and allow for more granular error handling and unit testing.
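As a rough illustration of that decomposition (all function names and types below are hypothetical stand-ins, not the crate's real API), the orchestrator could shrink to a short `?`-pipeline over small, individually testable steps:

```rust
// Hypothetical sketch: each stage is a stub standing in for the real logic
// (decryption, userinfo lookup, registry update, file cleanup).
type MigrationError = String;

fn load_and_parse_legacy() -> Result<String, MigrationError> {
    // Stub for: decrypt credentials.enc and validate required fields.
    Ok("{...}".to_string())
}

fn resolve_email(_creds: &str) -> Result<String, MigrationError> {
    // Stub for: fetch the account email via the userinfo endpoint.
    Ok("user@example.com".to_string())
}

fn save_migrated(_creds: &str, email: &str) -> Result<(), MigrationError> {
    // Stub for: per-account credential save plus accounts.json update.
    println!("saved credentials for {email}");
    Ok(())
}

fn cleanup_legacy_file() -> Result<(), MigrationError> {
    // Stub for: rename credentials.enc to credentials.enc.bak.
    Ok(())
}

fn migrate() -> Result<String, MigrationError> {
    let creds = load_and_parse_legacy()?;
    let email = resolve_email(&creds)?;
    save_migrated(&creds, &email)?;
    cleanup_legacy_file()?;
    Ok(email)
}

fn main() {
    match migrate() {
        Ok(email) => println!("migrated {email}"),
        Err(e) => eprintln!("[gws] Warning: migration failed: {e}"),
    }
}
```

With this shape, the warning-and-return boilerplate collapses into a single `match` at the call site, and each stage can be unit-tested without touching the network or the filesystem.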
src/auth.rs
Outdated
```rust
let client_id = creds["client_id"].as_str().unwrap_or_default();
let client_secret = creds["client_secret"].as_str().unwrap_or_default();
let refresh_token = creds["refresh_token"].as_str().unwrap_or_default();
```
Accessing JSON fields by index (e.g., `creds["client_id"]`) silently yields `Null` for missing keys, and `as_str().unwrap_or_default()` turns a value that is present but not a string (e.g., `null` or a number) into an empty string, hiding the distinction between "absent" and "malformed". For more robust and explicit parsing, you should use `get()` and `and_then()` to safely access and convert the values.
```diff
-let client_id = creds["client_id"].as_str().unwrap_or_default();
-let client_secret = creds["client_secret"].as_str().unwrap_or_default();
-let refresh_token = creds["refresh_token"].as_str().unwrap_or_default();
+let client_id = creds.get("client_id").and_then(|v| v.as_str()).unwrap_or_default();
+let client_secret = creds.get("client_secret").and_then(|v| v.as_str()).unwrap_or_default();
+let refresh_token = creds.get("refresh_token").and_then(|v| v.as_str()).unwrap_or_default();
```
Addresses review comments:
- Use `creds.get().and_then(as_str)` instead of the index operator to prevent potential panics on missing or non-string JSON values.
- Use `tokio::fs` for rename/remove operations to avoid blocking the async runtime during backup file cleanup.
/gemini review
Code Review
This pull request introduces an automatic migration for legacy credentials to a new per-account format. The changes are well-structured, and the migration logic correctly handles backups and potential failures. However, I've identified a potential inter-process race condition in the migration logic that could occur if multiple instances of the CLI are run concurrently. My feedback focuses on addressing this to make the migration process more robust.
```rust
///
/// On failure (e.g. offline, can't determine email), prints a warning and
/// leaves the legacy file in place — the user can manually re-run `gws auth login`.
async fn migrate_legacy_credentials() {
```
There was a problem hiding this comment.
There is a potential inter-process race condition here. If two gws commands are run simultaneously, both processes could attempt the migration concurrently. The current locking mechanism (Mutex and AtomicBool) only prevents race conditions within a single process.
While this may not lead to data corruption in this specific case (as both processes would be writing the same data), it can cause confusing output for the user and result in redundant API calls and file operations. The second process to finish will likely fail to rename the legacy credentials file, adding to the confusion.
To prevent this, you should implement an inter-process locking mechanism, such as a file lock. A common strategy is to atomically create a lock file (e.g., ~/.config/gws/.migration.lock) at the beginning of the migration process and remove it upon completion or failure.
For example, you could use tokio::fs::OpenOptions::new().create_new(true) to attempt to create the lock file. If it succeeds, this process has the lock. If it fails with AlreadyExists, another process holds the lock, and this process should wait or exit.
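A minimal std-only sketch of that approach (the helper name and lock-file path are hypothetical; a production version would also need to handle stale lock files left behind by a crashed process):

```rust
use std::fs::OpenOptions;
use std::io::ErrorKind;
use std::path::Path;

/// Try to acquire an exclusive inter-process lock by atomically creating
/// a lock file. Returns Ok(true) if this process now holds the lock,
/// Ok(false) if another process already created the file.
fn try_acquire_lock(lock_path: &Path) -> std::io::Result<bool> {
    match OpenOptions::new()
        .write(true)
        .create_new(true) // fails with AlreadyExists if the file is present
        .open(lock_path)
    {
        Ok(_file) => Ok(true),
        Err(e) if e.kind() == ErrorKind::AlreadyExists => Ok(false),
        Err(e) => Err(e),
    }
}

fn main() -> std::io::Result<()> {
    let lock = std::env::temp_dir().join("gws-migration-demo.lock");
    let _ = std::fs::remove_file(&lock); // clean slate for the demo
    assert!(try_acquire_lock(&lock)?);   // first process wins the race
    assert!(!try_acquire_lock(&lock)?);  // a second attempt sees the lock
    std::fs::remove_file(&lock)?;        // release on completion or failure
    Ok(())
}
```

The `tokio::fs::OpenOptions` equivalent mentioned above behaves the same way; the key property is that `create_new` maps to an atomic create-exclusive operation at the OS level, so exactly one process can win.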
Summary

Fixes #232

When users who authenticated with an older version of gws run the updated CLI, their legacy `credentials.enc` file is automatically migrated to the per-account format.

What changed

- `src/auth.rs`: Added `migrate_legacy_credentials()` that runs once before account resolution: decrypts `credentials.enc`, re-saves it as `credentials.<b64-email>.enc`, creates `accounts.json` with the account as default, and renames the legacy file to `.bak`.
- `src/auth.rs`: Removed the `None` → legacy `credentials.enc` fallback in `resolve_account()`. After migration, all credential resolution goes through the accounts registry or ADC.
- `src/auth_commands.rs`: Made `fetch_userinfo_email` `pub(crate)` so the migration function can use it.

Migration is safe

- On failure, a warning is printed and the user can re-run `gws auth login`.
- The legacy file is renamed to `.bak`, not deleted.
- Guarded by `std::sync::Once` so migration runs at most once per process.
- Migration only runs when `accounts.json` doesn't exist.

Test plan

- `cargo clippy -- -D warnings` clean