
feat: add atlas-server CLI commands#41

Open
pthmas wants to merge 5 commits into main from pthmas/cli-binary-support

Conversation


@pthmas pthmas commented Mar 30, 2026

Summary

  • add a subcommand-based CLI for atlas-server
  • add database utility commands for migrations, validation, dump, restore, and reset
  • add CI coverage for the CLI database flow plus a warning-only snapshot compatibility check
  • update local and Docker entrypoints to use the new CLI

Commands

  • atlas-server run: start the API, indexer, optional DA worker, and optional faucet
  • atlas-server migrate: run database migrations and exit
  • atlas-server check: validate config and verify database/RPC connectivity
  • atlas-server db dump <output>: create a PostgreSQL logical dump with pg_dump
  • atlas-server db restore <input>: restore a PostgreSQL logical dump with pg_restore --clean --if-exists
  • atlas-server db reset --confirm: truncate indexed data while keeping schema and migrations
  • just build-release: build a release binary and copy it to build/atlas-server
  • just install: install atlas-server from the local crate
  • just backend-run: run the server via cargo run --bin atlas-server -- run
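Putting the subcommands above together, a typical local flow might look like this (a sketch only: the connection URL and dump filename are illustrative, and the db commands read `DATABASE_URL` from the environment as set up in CI):

```shell
# Illustrative values; adjust for your local Postgres.
export DATABASE_URL=postgres://atlas:atlas@localhost:5432/atlas

atlas-server migrate                   # run database migrations and exit
atlas-server check                     # validate config and DB/RPC connectivity
atlas-server db dump snapshot.dump     # logical dump via pg_dump
atlas-server db restore snapshot.dump  # pg_restore --clean --if-exists
atlas-server db reset --confirm        # truncate indexed data, keep schema
atlas-server run                       # start API, indexer, optional DA/faucet
```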

Summary by CodeRabbit

  • New Features

    • Added CLI subcommands: run (start server), migrate, check (validate config & RPC/DB connectivity), and db utilities (dump/restore/reset).
  • Tests

    • Added integration tests to validate DB schema, partitions, indexes, extensions, and constraint enforcement after migrations.
  • Chores

    • Enhanced CI with database flow and snapshot-compatibility jobs.
    • Added build/install tasks and changed container startup to invoke the server with an explicit run command.


coderabbitai bot commented Mar 30, 2026

📝 Walkthrough

Adds a clap-based CLI to atlas-server with Run/Migrate/Check/Db subcommands, switches main to CLI dispatch, adds config-from-CLI constructors and tests, extends integration schema tests, updates Docker/Just build/run behavior, and adds two CI jobs validating migrations, dump/restore, and snapshot compatibility.

Changes

Cohort / File(s) Summary
CI workflow
\.github/workflows/ci.yml
Added backend-cli-db-flow and backend-cli-snapshot-compat jobs to provision Postgres, run migrations, insert deterministic rows, perform pg_dump/db restore, re-run migrations, and assert counts; snapshot-compat can use a git worktree and is continue-on-error; docker job dependency updated to include the new job.
Build tasks & Docker
Justfile, backend/Dockerfile
Added build-release and install Just targets; updated backend-run to pass -- run; Docker CMD changed to ["atlas-server","run"].
Workspace deps
backend/Cargo.toml, backend/crates/atlas-server/Cargo.toml
Added workspace dependency clap = { version = "4", features = ["derive","env"] } and referenced clap in the atlas-server crate.
CLI surface & export
backend/crates/atlas-server/src/cli.rs, backend/crates/atlas-server/src/lib.rs
New clap-based CLI module exposing Cli, Command, RunArgs, MigrateArgs, Db subcommands and multiple typed argument groups; added pub mod cli;.
Main control flow & helpers
backend/crates/atlas-server/src/main.rs
Replaced unconditional startup with CLI dispatch (Run/Migrate/Check/Db), moved .env load earlier, per-command tracing init, added run and check handlers, DB utility commands (cmd_db_dump, cmd_db_restore, cmd_db_reset), postgres_command helper, and DB URL validation.
Config & faucet mapping (tests gated)
backend/crates/atlas-server/src/config.rs
Added Config::from_run_args and FaucetConfig::from_faucet_args (gated #[cfg(test)]) with validations (SSE buffer range, DA invariants, chain-name trimming/default), plus unit tests for these mappings.
Integration tests
backend/crates/atlas-server/tests/integration/main.rs, backend/crates/atlas-server/tests/integration/schema.rs
Added schema test module and integration tests that run migrations, assert expected tables/partitions/indexes/extensions, and verify constraint enforcement via duplicate-insert failure checks.

Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant CLI as CLI Parser\n(clap)
    participant Config as Config\n(Constructor)
    participant DB as Database\n(Migrations/Pool)
    participant RPC as RPC Client
    participant Server as Server\n(API & Indexer)

    User->>CLI: atlas-server run [args]
    CLI->>CLI: parse args & env fallbacks
    CLI->>Config: Config::from_run_args(RunArgs)
    Config->>DB: run migrations / create pool
    DB-->>Config: pool ready
    Config->>RPC: fetch_chain_id (health check)
    RPC-->>Config: chain_id
    CLI->>Server: start server with Config
    Server->>DB: background indexer / tasks
    Server->>Server: start HTTP API listener
    Server-->>User: ready / listening
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • tac0turtle

Poem

🐰 I hopped through code with nimble paws,
New commands and tests to bless the cause.
Run, migrate, dump, then check the scheme—
Schemas safe, the CI hums like cream.
A carrot cheer for working streams.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 46.81%, which is below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title check ✅ Passed: the title 'feat: add atlas-server CLI commands' directly and clearly describes the main change, introducing a subcommand-based CLI for the atlas-server binary with database utilities.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
backend/Dockerfile (1)

12-22: ⚠️ Potential issue | 🟠 Major

The server image still can't execute db dump or db restore.

The new CLI exposes those subcommands, but this runtime stage only ships atlas-server plus CA roots. CI already has to install PostgreSQL client tooling before exercising them in .github/workflows/ci.yml (Lines 112-113 and 204-205), so overriding this container to atlas-server db dump ... / db restore ... will currently fail because pg_dump / pg_restore are missing. Either ship the PostgreSQL client utilities in this image or split/document those commands as host-only.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/Dockerfile` around lines 12 - 22, The runtime Dockerfile currently
only ships atlas-server and CA certs so invoking atlas-server db dump or
atlas-server db restore fails because pg_dump/pg_restore are missing; update the
server stage (where atlas-server is copied and USER atlas is set) to install the
PostgreSQL client utilities (e.g., add apk add --no-cache postgresql-client or
the appropriate package for Alpine) so pg_dump and pg_restore are available, or
alternatively document that the db dump/restore subcommands are host-only and
must be run from an image/CI step that has PostgreSQL client tooling installed;
ensure references to atlas-server, db dump, db restore, pg_dump and pg_restore
are covered.
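As a concrete illustration of the first option, the runtime stage could install the libpq client tools. This is a sketch only: the Alpine base, builder stage name, binary path, and `atlas` user are assumptions drawn from the review context, not from the actual Dockerfile.

```dockerfile
# Hypothetical runtime stage; builder stage name and paths are assumed.
FROM alpine:3.20 AS server
# postgresql-client provides pg_dump and pg_restore for the db subcommands.
RUN apk add --no-cache ca-certificates postgresql-client
COPY --from=builder /app/target/release/atlas-server /usr/local/bin/atlas-server
USER atlas
CMD ["atlas-server", "run"]
```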
🧹 Nitpick comments (5)
backend/crates/atlas-server/tests/integration/schema.rs (1)

146-258: These constraint tests write into a shared database and can reuse old fixture rows.

common::pool() is a single static pool for the whole integration binary (backend/crates/atlas-server/tests/integration/common.rs, Lines 13-57), so the fixed ids inserted here persist across test cases. In duplicate_block_number_is_rejected, the ON CONFLICT DO NOTHING setup can even silently reuse a pre-existing row, which makes the duplicate assertion pass for the wrong reason. Running each case inside a transaction, or at least asserting rows_affected() == 1 on the setup inserts, will keep these checks deterministic.

♻️ Example isolation pattern
 common::run(async {
-    let pool = common::pool();
+    let mut tx = common::pool().begin().await.expect("begin tx");
 
     sqlx::query(...)
-        .execute(pool)
+        .execute(&mut *tx)
         .await
         .expect("insert block");
 
     let result = sqlx::query(...)
-        .execute(pool)
+        .execute(&mut *tx)
         .await;
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-server/tests/integration/schema.rs` around lines 146 -
258, The tests duplicate_block_number_is_rejected,
duplicate_transaction_is_rejected and duplicate_erc20_transfer_is_rejected use a
shared static pool (common::pool()) so setup inserts can reuse existing rows;
wrap each test’s setup and assertions in a DB transaction (begin and roll back
at the end) or, at minimum, assert the setup insert returned rows_affected() ==
1 to ensure the row was actually created (for the first test remove or avoid
relying on ON CONFLICT DO NOTHING for the setup insert), and for the erc20 test
check the insert() execute result’s rows_affected() == 1 before attempting the
duplicate insert; use the existing query executions (the
sqlx::query(...).execute(pool).await results) to perform these checks so the
unique-constraint assertions remain deterministic.
backend/crates/atlas-server/src/config.rs (2)

256-323: Share normalization with from_env() before these two paths drift further.

This constructor already blank-filters cors_origin, while from_env() still preserves the raw string. Pulling the common validation/normalization into one helper would keep env-only and CLI-driven startup behavior aligned.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-server/src/config.rs` around lines 256 - 323, The CLI
path in Config::from_run_args trims-and-filters empty strings for cors_origin
but from_env() keeps raw values; extract that normalization into a shared helper
(e.g., normalize_optional_string or enhance parse_optional_env) that trims,
converts empty -> None, and returns Option<String>, then use that helper from
both Config::from_run_args and Config::from_env() (or wherever from_env() builds
Config) so both code paths apply identical blank-filtering/normalization for
cors_origin and any other optional string fields (chain.logo_url, branding
colors).
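A minimal sketch of the shared helper this prompt suggests (the name `normalize_optional_string` comes from the prompt above; the real signature in the codebase may differ):

```rust
/// Trim an optional string and map blank values to `None`, so env-only and
/// CLI-driven config paths normalize fields like `cors_origin` identically.
fn normalize_optional_string(value: Option<&str>) -> Option<String> {
    value
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .map(str::to_owned)
}

fn main() {
    assert_eq!(
        normalize_optional_string(Some("  https://example.com  ")),
        Some("https://example.com".to_string())
    );
    assert_eq!(normalize_optional_string(Some("   ")), None);
    assert_eq!(normalize_optional_string(None), None);
    println!("ok");
}
```

Both `Config::from_run_args` and `Config::from_env` would then funnel optional string fields through this one helper, so the two startup paths cannot drift.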

327-370: Add direct coverage for FaucetConfig::from_faucet_args.

The new CLI faucet validator has its own required-field and env checks, but the added tests only cover Config::from_run_args. A few focused cases here would make the enabled path much safer to change.

As per coding guidelines, "Add unit tests for new logic in a #[cfg(test)] mod tests block in the same file".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-server/src/config.rs` around lines 327 - 370, Add a new
#[cfg(test)] mod tests in the same file and add focused unit tests that directly
exercise FaucetConfig::from_faucet_args: test the disabled path (args.enabled =
false), the enabled happy path (set args with valid amount string and cooldown,
and set FAUCET_PRIVATE_KEY env var to a valid key), and failure cases for
missing FAUCET_PRIVATE_KEY env var, invalid FAUCET_PRIVATE_KEY
(PrivateKeySigner::from_str error), missing/zero/invalid amount string
(parse_faucet_amount_to_wei and U256::ZERO), missing/zero cooldown_minutes, and
an overflow cooldown_minutes (checked_mul returning None); use the
crate::cli::FaucetArgs struct to build args and assert Ok(Self) or the
appropriate anyhow::Error for each scenario so changes to from_faucet_args are
directly covered.
backend/crates/atlas-server/src/cli.rs (1)

4-52: Please add parser smoke tests for the public CLI contract.

This file is now the user-facing interface, but there are no Cli::try_parse_from(...) tests for the critical subcommands or env fallbacks. A small #[cfg(test)] mod tests here would catch accidental flag/arg regressions early.

As per coding guidelines, "Add unit tests for new logic in a #[cfg(test)] mod tests block in the same file".

Also applies to: 56-409

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-server/src/cli.rs` around lines 4 - 52, Add a
#[cfg(test)] mod tests in this file that exercises the public CLI contract by
calling Cli::try_parse_from(...) for each critical subcommand (Command::Run,
Command::Migrate, Command::Check, Command::Db) and verifying the parsed struct
types and key fields (e.g., RunArgs, MigrateArgs, DatabaseArgs, RpcArgs) using
simple asserts; include tests that exercise env fallbacks by setting/unsetting
relevant environment variables and then calling Cli::try_parse_from to ensure
the parser picks up defaults from env; keep tests small, deterministic, and
placed next to the definitions of Cli, Command, RunArgs, and MigrateArgs so
future flag/arg regressions are caught early.
backend/crates/atlas-server/src/main.rs (1)

260-263: Return an error here instead of terminating the whole process.

std::process::exit in a helper makes this branch untestable and skips normal unwinding. Let main decide the final exit code.

Possible simplification
 async fn cmd_db_reset(db_url: &str, confirm: bool) -> Result<()> {
     if !confirm {
-        eprintln!("This will DELETE all indexed data. Pass --confirm to proceed.");
-        std::process::exit(1);
+        anyhow::bail!("This will DELETE all indexed data. Pass --confirm to proceed.");
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-server/src/main.rs` around lines 260 - 263, Replace the
unconditional process termination with an error return so callers (main) can
decide exit behavior: instead of calling std::process::exit(1) in the branch
where !confirm, return an Err describing the aborted destructive action (e.g.,
Err(anyhow::anyhow!("operation aborted: pass --confirm to proceed"))). Update
the function signature to return a Result if it doesn't already (or propagate
the error via ?), and ensure callers (including main) handle the Result and set
the final process exit code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/crates/atlas-server/src/cli.rs`:
- Around line 59-66: Remove the command-line flag that accepts
credential-bearing DB URLs and make the field env-only: for the struct field
declared as pub url: String with the #[arg(id = "db-url", long = "atlas.db.url",
env = "DATABASE_URL", ...)] attribute, remove the long = "atlas.db.url" (and any
value_name) so the value is sourced only from env (env = "DATABASE_URL"); apply
the same change to the other occurrences referenced around the code (the other
fields using id "db-url" at the lines you noted) so no CLI flag exposes
passwords in shell history or process lists.

In `@backend/crates/atlas-server/src/main.rs`:
- Around line 234-235: The code currently invokes Command::new("pg_dump") and
Command::new("pg_restore") with the full db_url in args (db_url), exposing
credentials in the process list; instead parse db_url (use Url or a simple
parser) to extract host, port, user, password, and database name, set those as
environment variables on the Command instances using .env("PGHOST", host),
.env("PGPORT", port), .env("PGUSER", user), .env("PGPASSWORD", password) (or
populate a .pgpass file) and pass only the database name (or omit --dbname so
libpq uses the env vars) and keep the output arg (output) unchanged; update the
Command::new("pg_dump") and Command::new("pg_restore") invocations to use
.env/.envs instead of embedding db_url in args.
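The env-variable translation this comment describes can be sketched in plain Rust. This is illustration only: it hand-rolls the URL splitting (the real code would likely use the `url` crate), `pg_env_from_url` is a hypothetical name, and query parameters such as `sslmode` are ignored here though the actual helper would map them too.

```rust
use std::collections::HashMap;

/// Split a `postgres://user:pass@host:port/db` URL into libpq environment
/// variables (PGHOST, PGPORT, PGUSER, PGPASSWORD, PGDATABASE) so the URL
/// never appears in the argv of pg_dump / pg_restore.
fn pg_env_from_url(db_url: &str) -> Result<HashMap<&'static str, String>, String> {
    let rest = db_url
        .strip_prefix("postgres://")
        .or_else(|| db_url.strip_prefix("postgresql://"))
        .ok_or("unsupported scheme")?;
    // Optional user:password section before '@'.
    let (userinfo, hostpart) = match rest.split_once('@') {
        Some((u, h)) => (Some(u), h),
        None => (None, rest),
    };
    let (hostport, dbname) = hostpart.split_once('/').ok_or("missing database name")?;
    let dbname = dbname.split('?').next().unwrap_or("");
    if dbname.is_empty() {
        return Err("missing database name".into());
    }
    let mut env = HashMap::new();
    if let Some(ui) = userinfo {
        let (user, pass) = match ui.split_once(':') {
            Some((u, p)) => (u, Some(p)),
            None => (ui, None),
        };
        env.insert("PGUSER", user.to_string());
        if let Some(p) = pass {
            env.insert("PGPASSWORD", p.to_string());
        }
    }
    let (host, port) = match hostport.split_once(':') {
        Some((h, p)) => (h, Some(p)),
        None => (hostport, None),
    };
    env.insert("PGHOST", host.to_string());
    if let Some(p) = port {
        env.insert("PGPORT", p.to_string());
    }
    env.insert("PGDATABASE", dbname.to_string());
    Ok(env)
}

fn main() {
    let env = pg_env_from_url("postgres://atlas:s3cret@localhost:5432/atlas").unwrap();
    assert_eq!(env["PGHOST"], "localhost");
    assert_eq!(env["PGPORT"], "5432");
    assert_eq!(env["PGUSER"], "atlas");
    assert_eq!(env["PGPASSWORD"], "s3cret");
    assert_eq!(env["PGDATABASE"], "atlas");
    assert!(pg_env_from_url("mysql://x/y").is_err());
    println!("ok");
}
```

The returned map would be applied with `Command::env(...)` (after clearing any inherited `PG*` variables), leaving only the output path in argv.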

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 40b984b2-8c21-4d12-afa1-41c4a341a3d5

📥 Commits

Reviewing files that changed from the base of the PR and between 68e84ba and 54d0318.

📒 Files selected for processing (11)
  • .github/workflows/ci.yml
  • Justfile
  • backend/Cargo.toml
  • backend/Dockerfile
  • backend/crates/atlas-server/Cargo.toml
  • backend/crates/atlas-server/src/cli.rs
  • backend/crates/atlas-server/src/config.rs
  • backend/crates/atlas-server/src/lib.rs
  • backend/crates/atlas-server/src/main.rs
  • backend/crates/atlas-server/tests/integration/main.rs
  • backend/crates/atlas-server/tests/integration/schema.rs


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
.github/workflows/ci.yml (1)

280-283: ⚠️ Potential issue | 🟡 Minor

Use the CLI dump wrapper in the compat job.

Line 280 passes a credential-bearing URL to pg_dump --dbname, which reintroduces the argv exposure the new dump helper was added to avoid. Reusing atlas-server db dump here keeps credentials out of argv and exercises the same path as the shipped CLI.

🔁 Suggested fix
-          pg_dump --dbname="$SOURCE_DB_URL" --format=custom --file "$DUMP_FILE"
+          DATABASE_URL="$SOURCE_DB_URL" cargo run --quiet --bin atlas-server -- db dump "$DUMP_FILE"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 280 - 283, Replace the direct pg_dump
invocation with the CLI wrapper command so credentials aren't exposed in argv:
call the atlas-server db dump helper (e.g., run DATABASE_URL="$SOURCE_DB_URL"
cargo run --quiet --bin atlas-server -- db dump ... to produce "$DUMP_FILE")
instead of pg_dump --dbname="$SOURCE_DB_URL", then continue using
DATABASE_URL="$RESTORE_DB_URL" cargo run --quiet --bin atlas-server -- db
restore "$DUMP_FILE" and migrate; ensure the dump command writes the same
"$DUMP_FILE" and that all credential-bearing URLs are passed via the
DATABASE_URL environment variable to atlas-server rather than as pg_dump
arguments.
🧹 Nitpick comments (1)
backend/crates/atlas-server/src/cli.rs (1)

64-87: Add parser coverage for the env-only DATABASE_URL path.

#[arg(skip = database_url_from_env())] makes this behavior invisible to clap itself, so a small Cli::try_parse_from matrix for run, migrate, db dump, and db reset --confirm would be the safest way to lock it down.

As per coding guidelines, "backend/crates/**/*.rs: Add unit tests for new logic in a #[cfg(test)] mod tests block in the same file".

Also applies to: 382-410

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-server/src/cli.rs` around lines 64 - 87, Add unit tests
in a #[cfg(test)] mod tests block in the same file to cover the env-only
DATABASE_URL path created by database_url_from_env(); specifically write a
Cli::try_parse_from matrix exercising the commands "run", "migrate", "db dump",
and "db reset --confirm" to ensure DatabaseArgs (the struct with #[arg(skip =
database_url_from_env())]) behaves correctly when DATABASE_URL is provided only
via environment; set/clear the env var inside each test case and assert parsing
succeeds and yields the expected DatabaseArgs.url value or error state as
appropriate.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/crates/atlas-server/src/main.rs`:
- Around line 49-92: The subprocess inherits the parent's PG* environment which
can leak host/user/password into pg_dump/pg_restore; before setting any PG
variables, call command.env_remove(...) for each relevant name (PGHOST, PGPORT,
PGUSER, PGPASSWORD, PGDATABASE, PGSSLMODE, PGSSLCERT, PGSSLKEY, PGSSLROOTCERT,
PGSSLCRL, PGAPPNAME, PGOPTIONS, PGCONNECT_TIMEOUT) so inherited values are
cleared, then proceed to set the ones parsed from url as the current code
(references: the Command instance named command and the env-setting blocks for
PGHOST/PGPORT/PGUSER/PGPASSWORD/PGDATABASE and the query_pairs match arms).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e6e0b500-3c11-49c8-a00f-a511ed717c34

📥 Commits

Reviewing files that changed from the base of the PR and between 54d0318 and 1c4da5a.

📒 Files selected for processing (4)
  • .github/workflows/ci.yml
  • backend/crates/atlas-server/src/cli.rs
  • backend/crates/atlas-server/src/config.rs
  • backend/crates/atlas-server/src/main.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/crates/atlas-server/src/config.rs


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
backend/crates/atlas-server/src/main.rs (2)

346-348: Prefer returning an error over std::process::exit here.

Exiting inside the helper bypasses normal cleanup and makes the --confirm guard hard to unit-test. bail!(...) preserves the same non-zero CLI behavior because main already returns Result<()>.

♻️ Small refactor
     if !confirm {
-        eprintln!("This will DELETE all indexed data. Pass --confirm to proceed.");
-        std::process::exit(1);
+        bail!("This will DELETE all indexed data. Pass --confirm to proceed.");
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-server/src/main.rs` around lines 346 - 348, Replace the
std::process::exit call with an error return using anyhow's bail! so the helper
returns Err instead of terminating the process; in the if !confirm branch (in
main or the helper where the check lives) remove std::process::exit(1) and call
bail!("This will DELETE all indexed data. Pass --confirm to proceed.") (or use
return Err(anyhow::anyhow!(...)) if you prefer), and add use anyhow::bail; at
the top if not already imported so the CLI keeps a non-zero exit while allowing
normal cleanup and unit-testing.

29-112: Please add same-file unit tests for the new DB CLI helpers.

The test module still only covers shutdown and fetch_chain_id, so the new DATABASE_URL parsing/env translation, invalid-URL branches, and db reset --confirm guard are untested.

As per coding guidelines, "Add unit tests for new logic in a #[cfg(test)] mod tests block in the same file".

Also applies to: 319-362

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-server/src/main.rs` around lines 29 - 112, Add a new
#[cfg(test)] mod tests block in this same file containing unit tests for the
DATABASE_URL helpers: write tests for required_db_url to assert it trims input
and returns Err when empty, and tests for postgres_command that (1) parse a full
postgres:// URL with user/password/host/port/dbname and query params and assert
the returned Command has the expected environment variables set (PGHOST, PGPORT,
PGUSER, PGPASSWORD, PGDATABASE, PGSSLMODE, PGSSLCERT, PGSSLKEY, PGSSLROOTCERT,
PGSSLCRL, PGAPPNAME, PGOPTIONS, PGCONNECT_TIMEOUT), (2) assert it returns an Err
for non-postgres schemes, and (3) assert it returns an Err when the URL is
missing a database name; also add a test covering the db reset confirmation
guard mentioned in the review (the same-file test should exercise the function
that enforces "--confirm" for reset) so all new branches in required_db_url and
postgres_command are covered.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/crates/atlas-server/src/main.rs`:
- Around line 44-47: The current logic reads the database name only from
url.path() (variable database_name) and bails if empty; instead parse the URL
query parameters (url.query()) for libpq-style keys and map them to the
corresponding PG_* env vars before rejecting: extract keys like
dbname→PGDATABASE, host/hostaddr→PGHOST, port→PGPORT, user→PGUSER,
password→PGPASSWORD, service→PGSERVICE and any other libpq parameters present,
and prefer query params over path when path is empty; update the code around
database_name, the url parsing block and the SSL/app query translation (the
existing mapping logic near lines 81–109) to populate std::env variables
accordingly and only bail if neither path nor dbname query yields a database
name.
- Around line 332-335: The cmd_db_restore function currently calls
postgres_command(db_url)? and relies on PGDATABASE but must pass the database
name explicitly to pg_restore using the --dbname/-d flag for direct-restore
mode; update cmd_db_restore to parse or obtain the DB name from the provided
db_url (reusing the same extraction logic used by postgres_command) and add
.arg("--dbname").arg(db_name) (or .args(["--dbname", &db_name])) before
.args([...input...]) so pg_restore runs in direct-restore mode instead of script
mode.
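The suggested precedence for the first comment (URL path first, then a libpq-style `dbname` query parameter) can be sketched without any URL crate; `resolve_database_name` is a hypothetical helper for illustration, not the code in `main.rs`:

```rust
use std::collections::HashMap;

// Hypothetical helper showing the suggested precedence: take the database
// name from the URL path when present, otherwise fall back to a libpq-style
// `dbname` query parameter, and only error when neither yields a name.
fn resolve_database_name(path: &str, query: &str) -> Result<String, String> {
    let from_path = path.trim_start_matches('/');
    if !from_path.is_empty() {
        return Ok(from_path.to_string());
    }
    // Parse "key=value&key=value" pairs as they appear in the URL query.
    let params: HashMap<&str, &str> = query
        .split('&')
        .filter_map(|kv| kv.split_once('='))
        .collect();
    match params.get("dbname") {
        Some(name) if !name.is_empty() => Ok((*name).to_string()),
        _ => Err("no database name in URL path or dbname query parameter".into()),
    }
}

fn main() {
    assert_eq!(resolve_database_name("/atlas", "").unwrap(), "atlas");
    assert_eq!(
        resolve_database_name("/", "dbname=atlas&sslmode=require").unwrap(),
        "atlas"
    );
    assert!(resolve_database_name("/", "sslmode=require").is_err());
    println!("ok");
}
```

The same parsed map could feed the other libpq keys (`host`, `port`, `user`, `password`) into their `PG*` environment variables before the command is spawned.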

---
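The direct-restore fix for `cmd_db_restore` can be sketched by building the `pg_restore` invocation with an explicit `--dbname` flag; `restore_command` is a hypothetical helper that only constructs the `Command` for inspection and never spawns `pg_restore`:

```rust
use std::ffi::OsStr;
use std::process::Command;

// Hypothetical sketch of the fix: pass the database name to pg_restore
// explicitly via --dbname so it runs in direct-restore mode rather than
// emitting an SQL script. The Command is only built here, never spawned.
fn restore_command(db_name: &str, input: &str) -> Command {
    let mut cmd = Command::new("pg_restore");
    cmd.args(["--clean", "--if-exists"])
        .args(["--dbname", db_name])
        .arg(input);
    cmd
}

fn main() {
    let cmd = restore_command("atlas", "backup.dump");
    let args: Vec<&OsStr> = cmd.get_args().collect();
    assert_eq!(cmd.get_program(), "pg_restore");
    assert!(args.windows(2).any(|w| w[0] == "--dbname" && w[1] == "atlas"));
    println!("ok");
}
```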

Nitpick comments:
In `@backend/crates/atlas-server/src/main.rs`:
- Around line 346-348: Replace the std::process::exit call with an error return
using anyhow's bail! so the helper returns Err instead of terminating the
process; in the if !confirm branch (in main or the helper where the check lives)
remove std::process::exit(1) and call bail!("This will DELETE all indexed data.
Pass --confirm to proceed.") (or use return Err(anyhow::anyhow!(...)) if you
prefer), and add use anyhow::bail; at the top if not already imported so the CLI
keeps a non-zero exit while allowing normal cleanup and unit-testing.
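The exit-to-error suggestion can be sketched with a plain `Result`; the real code would use `anyhow::bail!`, which is avoided here only to keep the snippet dependency-free:

```rust
// Hypothetical confirmation guard that returns an error instead of calling
// std::process::exit(1), so callers can clean up normally and unit tests
// can exercise both branches.
fn ensure_confirmed(confirm: bool) -> Result<(), String> {
    if !confirm {
        return Err("This will DELETE all indexed data. Pass --confirm to proceed.".into());
    }
    Ok(())
}

fn main() {
    assert!(ensure_confirmed(false).is_err());
    assert!(ensure_confirmed(true).is_ok());
    println!("ok");
}
```

The CLI still exits non-zero because `main` propagates the `Err`, but intermediate destructors run and the guard becomes trivially testable.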
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 9839e5f3-fef6-43a1-9663-3a898ceb66a2

📥 Commits

Reviewing files that changed from the base of the PR and between 1c4da5a and cfe7347.

📒 Files selected for processing (1)
  • backend/crates/atlas-server/src/main.rs

@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (1)
backend/crates/atlas-server/src/main.rs (1)

404-421: Consider extracting the table list to a constant or querying dynamically.

The hardcoded table list in TRUNCATE will need manual updates whenever tables are added or removed from the schema. This could lead to stale data remaining after a reset if new tables are added but this list isn't updated.

💡 Alternative approach

Consider querying information_schema.tables filtered by schema to dynamically build the truncate list, or extracting the table names to a shared constant that can be referenced by tests:

// Option 1: Shared constant (simpler)
const INDEXED_TABLES: &[&str] = &[
    "blocks", "transactions", "event_logs", /* ... */
];

// Option 2: Dynamic query (more robust)
// SELECT table_name FROM information_schema.tables 
// WHERE table_schema = 'public' AND table_name NOT IN ('_sqlx_migrations', ...)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/crates/atlas-server/src/main.rs` around lines 404 - 421, The TRUNCATE
in cmd_db_reset currently uses a hardcoded comma-separated table list which will
go out-of-sync as schema changes; refactor by extracting the table names into a
single shared constant (e.g. INDEXED_TABLES) or build the list dynamically by
querying information_schema.tables for the target schema (excluding migrations
and internal tables), then join those names to produce the TRUNCATE ... CASCADE
SQL string used in cmd_db_reset; update any tests or callers to reference the
new constant if you choose the shared-constant approach.
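The shared-constant option can be sketched as follows; `INDEXED_TABLES` is the hypothetical name from the suggestion, and the table names are only the examples used in the comment above, not the crate's full schema:

```rust
// Hypothetical shared constant so cmd_db_reset and the tests reference one
// table list; "blocks", "transactions", "event_logs" are the example names
// from the review, not the real schema.
const INDEXED_TABLES: &[&str] = &["blocks", "transactions", "event_logs"];

// Join the constant into the TRUNCATE ... CASCADE statement used by reset.
fn truncate_sql() -> String {
    format!("TRUNCATE TABLE {} CASCADE", INDEXED_TABLES.join(", "))
}

fn main() {
    assert_eq!(
        truncate_sql(),
        "TRUNCATE TABLE blocks, transactions, event_logs CASCADE"
    );
    println!("ok");
}
```

A test can then compare `INDEXED_TABLES` against `information_schema.tables` and fail when the schema and the constant drift apart.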

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 71116649-dca2-4f6a-ab7c-373f3def0f6c

📥 Commits

Reviewing files that changed from the base of the PR and between cfe7347 and 4697404.

📒 Files selected for processing (1)
  • backend/crates/atlas-server/src/main.rs

@pthmas pthmas self-assigned this Mar 30, 2026
@pthmas pthmas requested a review from tac0turtle March 30, 2026 14:20
      - name: Test
        run: cargo test --workspace --all-targets

  backend-cli-db-flow:
Contributor

can this be a test to run locally as well?
