12 changes: 12 additions & 0 deletions .changeset/migrate-legacy-credentials.md
@@ -0,0 +1,12 @@
---
"@googleworkspace/cli": minor
---

Auto-migrate legacy `credentials.enc` to per-account format on first run. When a legacy credential file exists without an `accounts.json` registry, gws now automatically:
1. Decrypts the legacy credentials
2. Determines the account email via Google userinfo
3. Re-saves as `credentials.<b64-email>.enc`
4. Creates `accounts.json` with the account as default
5. Renames the legacy file to `credentials.enc.bak`

Also removes the legacy `credentials.enc` fallback path — all credential resolution now goes through the accounts registry or ADC.
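The `<b64-email>` filename component can be illustrated with a small sketch. This assumes URL-safe, unpadded base64 of the account email; the actual encoding is whatever `credential_store::encrypted_credentials_path_for` uses, and the helper names here are hypothetical:

```rust
// Hypothetical illustration of deriving `credentials.<b64-email>.enc`.
// Assumes URL-safe, unpadded base64; the real scheme lives in credential_store.
fn b64_url_nopad(input: &[u8]) -> String {
    const ALPHABET: &[u8; 64] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
    let mut out = String::new();
    for chunk in input.chunks(3) {
        // Pack up to three bytes into a 24-bit group.
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = ((b[0] as u32) << 16) | ((b[1] as u32) << 8) | b[2] as u32;
        let symbols = [n >> 18, (n >> 12) & 63, (n >> 6) & 63, n & 63];
        // Emit only as many symbols as the chunk warrants (no '=' padding).
        for &s in symbols.iter().take(chunk.len() + 1) {
            out.push(ALPHABET[s as usize] as char);
        }
    }
    out
}

fn credentials_file_name(email: &str) -> String {
    format!("credentials.{}.enc", b64_url_nopad(email.as_bytes()))
}

fn main() {
    println!("{}", credentials_file_name("a@b.c"));
}
```

Encoding the email (rather than using it verbatim) keeps characters like `@` and `+` out of filenames, which matters on Windows and for shell quoting.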
193 changes: 185 additions & 8 deletions src/auth.rs
@@ -86,19 +86,27 @@ pub async fn get_token(scopes: &[&str], account: Option<&str>) -> anyhow::Result

// If env var credentials are specified, skip account resolution entirely
if creds_file.is_some() {
-    let enc_path = credential_store::encrypted_credentials_path();
let default_path = config_dir.join("credentials.json");
let token_cache = config_dir.join("token_cache.json");
+    // When using env var creds, we don't need account-specific paths
+    let enc_path = PathBuf::from("/nonexistent");
let creds = load_credentials_inner(creds_file.as_deref(), &enc_path, &default_path).await?;
return get_token_inner(scopes, creds, &token_cache, impersonated_user.as_deref()).await;
}

// Auto-migrate legacy credentials.enc if present and no accounts.json exists
migrate_legacy_credentials().await;

// Resolve account from registry
let resolved_account = resolve_account(account)?;

let enc_path = match &resolved_account {
Some(email) => credential_store::encrypted_credentials_path_for(email),
-    None => credential_store::encrypted_credentials_path(),
+    None => {
+        // No account resolved — no legacy fallback, just use a non-existent path
+        // so load_credentials_inner falls through to ADC/plaintext
+        config_dir.join("credentials.nonexistent.enc")
+    }
};

// Per-account token cache: token_cache.<b64-email>.json
@@ -125,7 +133,7 @@ pub async fn get_token(scopes: &[&str], account: Option<&str>) -> anyhow::Result
/// Resolve which account to use:
/// 1. Explicit `account` parameter takes priority.
/// 2. Fall back to `accounts.json` default.
-/// 3. If no registry exists, return None to allow legacy `credentials.enc` fallthrough.
+/// 3. If no registry exists, return None (caller falls through to ADC/plaintext).
fn resolve_account(account: Option<&str>) -> anyhow::Result<Option<String>> {
let registry = crate::accounts::load_accounts()?;

@@ -161,13 +169,182 @@ fn resolve_account(account: Option<&str>) -> anyhow::Result<Option<String>> {
);
}
}
-    // No account, no registry — use legacy credentials if they exist
-    (None, None) => {
-        // Fall through to standard credential loading which will pick up
-        // the legacy credentials.enc file if it exists.
-        Ok(None)
-    }
+    // No account, no registry — no credentials to resolve
+    (None, None) => Ok(None),
}
}

/// Auto-migrate legacy `credentials.enc` to the per-account format.
///
/// If `credentials.enc` exists and no `accounts.json` registry has been created
/// yet, this function:
/// 1. Decrypts the legacy file
/// 2. Obtains an access token to determine the email via Google userinfo
/// 3. Saves as `credentials.<b64-email>.enc`
/// 4. Registers the account in `accounts.json` as default
/// 5. Renames `credentials.enc` → `credentials.enc.bak`
///
/// On failure (e.g. offline, can't determine email), prints a warning and
/// leaves the legacy file in place — the user can manually re-run `gws auth login`.
async fn migrate_legacy_credentials() {
Review comment from a contributor (severity: high):

There is a potential inter-process race condition here. If two gws commands are run simultaneously, both processes could attempt the migration concurrently. The current locking mechanism (Mutex and AtomicBool) only prevents race conditions within a single process.

While this may not lead to data corruption in this specific case (as both processes would be writing the same data), it can cause confusing output for the user and result in redundant API calls and file operations. The second process to finish will likely fail to rename the legacy credentials file, adding to the confusion.

To prevent this, you should implement an inter-process locking mechanism, such as a file lock. A common strategy is to atomically create a lock file (e.g., ~/.config/gws/.migration.lock) at the beginning of the migration process and remove it upon completion or failure.

For example, you could use tokio::fs::OpenOptions::new().create_new(true) to attempt to create the lock file. If it succeeds, this process has the lock. If it fails with AlreadyExists, another process holds the lock, and this process should wait or exit.
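A minimal sketch of that strategy, using blocking `std::fs` for brevity (the async `tokio::fs::OpenOptions` variant is analogous); the lock path and guard type here are illustrative, not part of gws:

```rust
use std::fs::OpenOptions;
use std::io::ErrorKind;
use std::path::{Path, PathBuf};

/// Guard that removes the lock file when the holder is done (or unwinds).
struct MigrationLock(PathBuf);

impl Drop for MigrationLock {
    fn drop(&mut self) {
        let _ = std::fs::remove_file(&self.0);
    }
}

/// Take an exclusive inter-process lock by atomically creating the file.
/// `create_new(true)` fails with AlreadyExists if another process holds it.
fn try_lock(path: &Path) -> std::io::Result<Option<MigrationLock>> {
    match OpenOptions::new().write(true).create_new(true).open(path) {
        Ok(_) => Ok(Some(MigrationLock(path.to_path_buf()))),
        Err(e) if e.kind() == ErrorKind::AlreadyExists => Ok(None),
        Err(e) => Err(e),
    }
}

fn main() -> std::io::Result<()> {
    let lock_path = std::env::temp_dir().join("gws-migration.lock");
    if let Some(_guard) = try_lock(&lock_path)? {
        // ... perform the migration while holding the lock ...
    } else {
        eprintln!("[gws] Another process is migrating credentials; skipping.");
    }
    Ok(())
}
```

Note that a hard crash can leave a stale lock file behind; real implementations often write the holder's PID into the file or treat locks older than some timeout as abandoned.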

use std::sync::atomic::{AtomicBool, Ordering};
use tokio::sync::Mutex;

static MIGRATION_LOCK: Mutex<()> = Mutex::const_new(());
static MIGRATION_ATTEMPTED: AtomicBool = AtomicBool::new(false);

// Quick, non-locking check to bail out early if migration has already been handled.
if MIGRATION_ATTEMPTED.load(Ordering::Relaxed) {
return;
}

// Acquire a lock to ensure only one task performs the detailed check and migration.
let _guard = MIGRATION_LOCK.lock().await;

// Re-check after acquiring the lock, in case another task just finished.
if MIGRATION_ATTEMPTED.load(Ordering::SeqCst) {
return;
}

// Mark as attempted before the checks, so we only ever try this logic once per process.
MIGRATION_ATTEMPTED.store(true, Ordering::SeqCst);

let legacy_path = credential_store::encrypted_credentials_path();
let registry = crate::accounts::load_accounts().ok().flatten();

// Only migrate if legacy file exists AND no accounts registry exists
if !legacy_path.exists() || registry.is_some() {
return;
}

eprintln!("[gws] Migrating legacy credentials to per-account format...");

// Decrypt the legacy credentials
let json_str = match credential_store::load_encrypted() {
Ok(s) => s,
Err(e) => {
eprintln!("[gws] Warning: Failed to decrypt legacy credentials: {e}");
eprintln!("[gws] Run 'gws auth login' to re-authenticate.");
return;
}
};

// Parse credentials to get refresh_token
let creds: serde_json::Value = match serde_json::from_str(&json_str) {
Ok(v) => v,
Err(e) => {
eprintln!("[gws] Warning: Failed to parse legacy credentials: {e}");
return;
}
};

let client_id = creds
.get("client_id")
.and_then(|v| v.as_str())
.unwrap_or_default();
let client_secret = creds
.get("client_secret")
.and_then(|v| v.as_str())
.unwrap_or_default();
let refresh_token = creds
.get("refresh_token")
.and_then(|v| v.as_str())
.unwrap_or_default();

if client_id.is_empty() || client_secret.is_empty() || refresh_token.is_empty() {
eprintln!("[gws] Warning: Legacy credentials are incomplete, cannot migrate.");
eprintln!("[gws] Run 'gws auth login' to re-authenticate.");
return;
}

// Get an access token to determine the email
let secret = yup_oauth2::authorized_user::AuthorizedUserSecret {
client_id: client_id.to_string(),
client_secret: client_secret.to_string(),
refresh_token: refresh_token.to_string(),
key_type: "authorized_user".to_string(),
};

let auth = match yup_oauth2::AuthorizedUserAuthenticator::builder(secret)
.build()
.await
{
Ok(a) => a,
Err(e) => {
eprintln!("[gws] Warning: Failed to build authenticator for migration: {e}");
eprintln!("[gws] Run 'gws auth login' to re-authenticate.");
return;
}
};

let token = match auth
.token(&["https://www.googleapis.com/auth/userinfo.email"])
.await
{
Ok(t) => t,
Err(e) => {
eprintln!("[gws] Warning: Failed to get token for migration: {e}");
eprintln!("[gws] Run 'gws auth login' to re-authenticate.");
return;
}
};

let access_token = match token.token() {
Some(t) => t.to_string(),
None => {
eprintln!("[gws] Warning: No access token available for migration.");
return;
}
};

// Get email via the userinfo endpoint
let email = match crate::auth_commands::fetch_userinfo_email(&access_token).await {
Some(e) => e,
None => {
eprintln!("[gws] Warning: Could not determine email from legacy credentials.");
eprintln!("[gws] Run 'gws auth login' to re-authenticate.");
return;
}
};

eprintln!("[gws] Found account: {email}");

// Save as per-account credentials
if let Err(e) = credential_store::save_encrypted_for(&json_str, &email) {
eprintln!("[gws] Warning: Failed to save migrated credentials: {e}");
return;
}

// Register in accounts.json using the existing helper
let mut registry = crate::accounts::AccountsRegistry::default();
crate::accounts::add_account(&mut registry, &email);

if let Err(e) = crate::accounts::save_accounts(&registry) {
eprintln!("[gws] Warning: Failed to save accounts registry: {e}");
return;
}

// Rename legacy file to .bak
// On Windows, `rename` fails if the destination exists. Remove old backup first.
let backup_path = legacy_path.with_extension("enc.bak");
if tokio::fs::metadata(&backup_path).await.is_ok() {
if let Err(e) = tokio::fs::remove_file(&backup_path).await {
eprintln!(
"[gws] Warning: Failed to remove existing backup file '{}': {e}",
backup_path.display()
);
}
}
if let Err(e) = tokio::fs::rename(&legacy_path, &backup_path).await {
eprintln!("[gws] Warning: Failed to rename legacy credentials: {e}");
// Still succeeded in migration, just couldn't clean up
}

eprintln!(
"[gws] ✓ Migrated credentials for {}. Backup at {}",
email,
backup_path.display()
);
}
Comment on lines +189 to 348
Review comment from a contributor (severity: high):

The migrate_legacy_credentials function is over 150 lines long and handles many different concerns: concurrency control, file I/O, decryption, JSON parsing, network requests, and state updates. This high complexity makes the function difficult to read, test, and maintain.

Consider refactoring this logic into several smaller, more focused functions. For example:

  • A function to decrypt and parse the legacy credentials, returning a Result.
  • A function to fetch the user's email, given the credentials.
  • A function to save the new credentials and update the accounts registry.
  • A function to handle file cleanup.

This would improve modularity and allow for more granular error handling and unit testing.
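One hedged way to sketch that decomposition (the names, types, and stage bodies below are illustrative stand-ins, not the crate's actual API): each stage returns a `Result`, so the orchestrator collapses the repeated warn-and-return blocks into a single `match`:

```rust
// Illustrative skeleton only: LegacyCreds and the stage bodies stand in for
// the real decryption, userinfo, and registry logic in src/auth.rs.
struct LegacyCreds {
    json: String,
}

fn load_and_parse_legacy() -> Result<LegacyCreds, String> {
    // Would call credential_store::load_encrypted() and validate the fields.
    Ok(LegacyCreds { json: r#"{"refresh_token":"..."}"#.to_string() })
}

fn determine_email(_creds: &LegacyCreds) -> Result<String, String> {
    // Would mint a token and call fetch_userinfo_email().
    Ok("user@example.com".to_string())
}

fn save_and_register(creds: &LegacyCreds, _email: &str) -> Result<(), String> {
    // Would call save_encrypted_for() and update accounts.json.
    let _ = &creds.json;
    Ok(())
}

fn backup_legacy_file() -> Result<(), String> {
    // Would rename credentials.enc to credentials.enc.bak.
    Ok(())
}

/// Orchestrator: one `?` per stage instead of a warn-and-return block each.
fn migrate() -> Result<String, String> {
    let creds = load_and_parse_legacy()?;
    let email = determine_email(&creds)?;
    save_and_register(&creds, &email)?;
    backup_legacy_file()?;
    Ok(email)
}

fn main() {
    match migrate() {
        Ok(email) => eprintln!("[gws] ✓ Migrated credentials for {email}"),
        Err(e) => eprintln!("[gws] Warning: migration skipped: {e}"),
    }
}
```

Each stage can then be unit-tested in isolation, and the single failure path keeps the "leave the legacy file in place and warn" behavior in exactly one spot.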


async fn get_token_inner(
2 changes: 1 addition & 1 deletion src/auth_commands.rs
@@ -443,7 +443,7 @@ async fn handle_login(args: &[String]) -> Result<(), GwsError> {
}

/// Fetch the authenticated user's email from Google's userinfo endpoint.
-async fn fetch_userinfo_email(access_token: &str) -> Option<String> {
+pub(crate) async fn fetch_userinfo_email(access_token: &str) -> Option<String> {
let client = match crate::client::build_client() {
Ok(c) => c,
Err(_) => return None,