Conversation
📝 Walkthrough

Adds a clap-based CLI to atlas-server with Run/Migrate/Check/Db subcommands, switches main to CLI dispatch, adds config-from-CLI constructors and tests, extends integration schema tests, updates Docker/Just build/run behavior, and adds two CI jobs validating migrations, dump/restore, and snapshot compatibility.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant CLI as CLI Parser\n(clap)
    participant Config as Config\n(Constructor)
    participant DB as Database\n(Migrations/Pool)
    participant RPC as RPC Client
    participant Server as Server\n(API & Indexer)
    User->>CLI: atlas-server run [args]
    CLI->>CLI: parse args & env fallbacks
    CLI->>Config: Config::from_run_args(RunArgs)
    Config->>DB: run migrations / create pool
    DB-->>Config: pool ready
    Config->>RPC: fetch_chain_id (health check)
    RPC-->>Config: chain_id
    CLI->>Server: start server with Config
    Server->>DB: background indexer / tasks
    Server->>Server: start HTTP API listener
    Server-->>User: ready / listening
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/Dockerfile (1)
12-22: ⚠️ Potential issue | 🟠 Major

The server image still can't execute `db dump` or `db restore`.

The new CLI exposes those subcommands, but this runtime stage only ships `atlas-server` plus CA roots. CI already has to install PostgreSQL client tooling before exercising them in `.github/workflows/ci.yml` (lines 112-113 and 204-205), so overriding this container to `atlas-server db dump ...` / `db restore ...` will currently fail because `pg_dump`/`pg_restore` are missing. Either ship the PostgreSQL client utilities in this image or split/document those commands as host-only.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/Dockerfile` around lines 12 - 22, The runtime Dockerfile currently only ships atlas-server and CA certs so invoking atlas-server db dump or atlas-server db restore fails because pg_dump/pg_restore are missing; update the server stage (where atlas-server is copied and USER atlas is set) to install the PostgreSQL client utilities (e.g., add apk add --no-cache postgresql-client or the appropriate package for Alpine) so pg_dump and pg_restore are available, or alternatively document that the db dump/restore subcommands are host-only and must be run from an image/CI step that has PostgreSQL client tooling installed; ensure references to atlas-server, db dump, db restore, pg_dump and pg_restore are covered.
🧹 Nitpick comments (5)
backend/crates/atlas-server/tests/integration/schema.rs (1)
146-258: These constraint tests write into a shared database and can reuse old fixture rows.

`common::pool()` is a single static pool for the whole integration binary (backend/crates/atlas-server/tests/integration/common.rs, lines 13-57), so the fixed ids inserted here persist across test cases. In `duplicate_block_number_is_rejected`, the `ON CONFLICT DO NOTHING` setup can even silently reuse a pre-existing row, which makes the duplicate assertion pass for the wrong reason. Running each case inside a transaction, or at least asserting `rows_affected() == 1` on the setup inserts, will keep these checks deterministic.

♻️ Example isolation pattern
```diff
 common::run(async {
-    let pool = common::pool();
+    let mut tx = common::pool().begin().await.expect("begin tx");
     sqlx::query(...)
-        .execute(pool)
+        .execute(&mut *tx)
         .await
         .expect("insert block");
     let result = sqlx::query(...)
-        .execute(pool)
+        .execute(&mut *tx)
         .await;
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/tests/integration/schema.rs` around lines 146 - 258, The tests duplicate_block_number_is_rejected, duplicate_transaction_is_rejected and duplicate_erc20_transfer_is_rejected use a shared static pool (common::pool()) so setup inserts can reuse existing rows; wrap each test’s setup and assertions in a DB transaction (begin and roll back at the end) or, at minimum, assert the setup insert returned rows_affected() == 1 to ensure the row was actually created (for the first test remove or avoid relying on ON CONFLICT DO NOTHING for the setup insert), and for the erc20 test check the insert() execute result’s rows_affected() == 1 before attempting the duplicate insert; use the existing query executions (the sqlx::query(...).execute(pool).await results) to perform these checks so the unique-constraint assertions remain deterministic.

backend/crates/atlas-server/src/config.rs (2)
256-323: Share normalization with `from_env()` before these two paths drift further.

This constructor already blank-filters `cors_origin`, while `from_env()` still preserves the raw string. Pulling the common validation/normalization into one helper would keep env-only and CLI-driven startup behavior aligned.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/config.rs` around lines 256 - 323, The CLI path in Config::from_run_args trims-and-filters empty strings for cors_origin but from_env() keeps raw values; extract that normalization into a shared helper (e.g., normalize_optional_string or enhance parse_optional_env) that trims, converts empty -> None, and returns Option<String>, then use that helper from both Config::from_run_args and Config::from_env() (or wherever from_env() builds Config) so both code paths apply identical blank-filtering/normalization for cors_origin and any other optional string fields (chain.logo_url, branding colors).
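The shared helper the comment suggests can be sketched in a few lines. The name `normalize_optional_string` comes from the review's suggestion, and the exact signature is an assumption rather than code from the PR:

```rust
/// Trim a string and treat blank values as absent, so env-sourced and
/// CLI-sourced optional fields (e.g. cors_origin) normalize identically.
fn normalize_optional_string(raw: Option<String>) -> Option<String> {
    raw.and_then(|s| {
        let trimmed = s.trim();
        if trimmed.is_empty() {
            None
        } else {
            Some(trimmed.to_string())
        }
    })
}

fn main() {
    assert_eq!(normalize_optional_string(None), None);
    assert_eq!(normalize_optional_string(Some("   ".into())), None);
    assert_eq!(
        normalize_optional_string(Some("  https://example.com  ".into())),
        Some("https://example.com".to_string())
    );
    println!("ok");
}
```

Both `Config::from_run_args` and `from_env()` would then call this one helper, so blank-filtering cannot drift between the two startup paths.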
327-370: Add direct coverage for `FaucetConfig::from_faucet_args`.

The new CLI faucet validator has its own required-field and env checks, but the added tests only cover `Config::from_run_args`. A few focused cases here would make the enabled path much safer to change.

As per coding guidelines, "Add unit tests for new logic in a `#[cfg(test)] mod tests` block in the same file".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/config.rs` around lines 327 - 370, Add a new #[cfg(test)] mod tests in the same file and add focused unit tests that directly exercise FaucetConfig::from_faucet_args: test the disabled path (args.enabled = false), the enabled happy path (set args with valid amount string and cooldown, and set FAUCET_PRIVATE_KEY env var to a valid key), and failure cases for missing FAUCET_PRIVATE_KEY env var, invalid FAUCET_PRIVATE_KEY (PrivateKeySigner::from_str error), missing/zero/invalid amount string (parse_faucet_amount_to_wei and U256::ZERO), missing/zero cooldown_minutes, and an overflow cooldown_minutes (checked_mul returning None); use the crate::cli::FaucetArgs struct to build args and assert Ok(Self) or the appropriate anyhow::Error for each scenario so changes to from_faucet_args are directly covered.

backend/crates/atlas-server/src/cli.rs (1)
4-52: Please add parser smoke tests for the public CLI contract.

This file is now the user-facing interface, but there are no `Cli::try_parse_from(...)` tests for the critical subcommands or env fallbacks. A small `#[cfg(test)] mod tests` here would catch accidental flag/arg regressions early.

As per coding guidelines, "Add unit tests for new logic in a `#[cfg(test)] mod tests` block in the same file".

Also applies to: 56-409
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/cli.rs` around lines 4 - 52, Add a #[cfg(test)] mod tests in this file that exercises the public CLI contract by calling Cli::try_parse_from(...) for each critical subcommand (Command::Run, Command::Migrate, Command::Check, Command::Db) and verifying the parsed struct types and key fields (e.g., RunArgs, MigrateArgs, DatabaseArgs, RpcArgs) using simple asserts; include tests that exercise env fallbacks by setting/unsetting relevant environment variables and then calling Cli::try_parse_from to ensure the parser picks up defaults from env; keep tests small, deterministic, and placed next to the definitions of Cli, Command, RunArgs, and MigrateArgs so future flag/arg regressions are caught early.backend/crates/atlas-server/src/main.rs (1)
260-263: Return an error here instead of terminating the whole process.

`std::process::exit` in a helper makes this branch untestable and skips normal unwinding. Let `main` decide the final exit code.

Possible simplification
```diff
 async fn cmd_db_reset(db_url: &str, confirm: bool) -> Result<()> {
     if !confirm {
-        eprintln!("This will DELETE all indexed data. Pass --confirm to proceed.");
-        std::process::exit(1);
+        anyhow::bail!("This will DELETE all indexed data. Pass --confirm to proceed.");
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/main.rs` around lines 260 - 263, Replace the unconditional process termination with an error return so callers (main) can decide exit behavior: instead of calling std::process::exit(1) in the branch where !confirm, return an Err describing the aborted destructive action (e.g., Err(anyhow::anyhow!("operation aborted: pass --confirm to proceed"))). Update the function signature to return a Result if it doesn't already (or propagate the error via ?), and ensure callers (including main) handle the Result and set the final process exit code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/crates/atlas-server/src/cli.rs`:
- Around line 59-66: Remove the command-line flag that accepts
credential-bearing DB URLs and make the field env-only: for the struct field
declared as pub url: String with the #[arg(id = "db-url", long = "atlas.db.url",
env = "DATABASE_URL", ...)] attribute, remove the long = "atlas.db.url" (and any
value_name) so the value is sourced only from env (env = "DATABASE_URL"); apply
the same change to the other occurrences referenced around the code (the other
fields using id "db-url" at the lines you noted) so no CLI flag exposes
passwords in shell history or process lists.
In `@backend/crates/atlas-server/src/main.rs`:
- Around line 234-235: The code currently invokes Command::new("pg_dump") and
Command::new("pg_restore") with the full db_url in args (db_url), exposing
credentials in the process list; instead parse db_url (use Url or a simple
parser) to extract host, port, user, password, and database name, set those as
environment variables on the Command instances using .env("PGHOST", host),
.env("PGPORT", port), .env("PGUSER", user), .env("PGPASSWORD", password) (or
populate a .pgpass file) and pass only the database name (or omit --dbname so
libpq uses the env vars) and keep the output arg (output) unchanged; update the
Command::new("pg_dump") and Command::new("pg_restore") invocations to use
.env/.envs instead of embedding db_url in args.
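The env-over-argv approach described above can be sketched with the standard library alone. This hand-rolled parser is an illustration only (no percent-decoding, no query-parameter handling, which real code should get from the `url` crate), and `pg_env_from_url` is a hypothetical name, not a function from the PR:

```rust
use std::process::Command;

/// Split `postgres://user:pass@host:port/dbname` into libpq environment
/// variables so credentials never appear in the argv of pg_dump/pg_restore.
/// Deliberately simplified: no percent-decoding, no query parameters.
fn pg_env_from_url(url: &str) -> Option<Vec<(String, String)>> {
    let rest = url
        .strip_prefix("postgres://")
        .or_else(|| url.strip_prefix("postgresql://"))?;
    // rsplit so passwords containing '@' keep the last '@' as the separator
    let (userinfo, hostpart) = match rest.rsplit_once('@') {
        Some((u, h)) => (Some(u), h),
        None => (None, rest),
    };
    let (hostport, dbname) = hostpart.split_once('/')?;
    let mut env = Vec::new();
    if let Some(userinfo) = userinfo {
        match userinfo.split_once(':') {
            Some((user, pass)) => {
                env.push(("PGUSER".into(), user.into()));
                env.push(("PGPASSWORD".into(), pass.into()));
            }
            None => env.push(("PGUSER".into(), userinfo.into())),
        }
    }
    match hostport.split_once(':') {
        Some((host, port)) => {
            env.push(("PGHOST".into(), host.into()));
            env.push(("PGPORT".into(), port.into()));
        }
        None => env.push(("PGHOST".into(), hostport.into())),
    }
    env.push(("PGDATABASE".into(), dbname.into()));
    Some(env)
}

fn main() {
    let env = pg_env_from_url("postgres://atlas:s3cret@db.internal:5433/atlas").unwrap();
    // pg_dump reads the connection from the environment; argv stays clean.
    let mut cmd = Command::new("pg_dump");
    cmd.args(["--format=custom", "--file", "dump.bin"]).envs(env.clone());
    assert!(env.contains(&("PGPASSWORD".to_string(), "s3cret".to_string())));
    println!("{env:?}");
}
```

With libpq-style environment variables set, `--dbname` can be omitted entirely and tools such as `ps` see only the flags, never the password.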
---
Outside diff comments:
In `@backend/Dockerfile`:
- Around line 12-22: The runtime Dockerfile currently only ships atlas-server
and CA certs so invoking atlas-server db dump or atlas-server db restore fails
because pg_dump/pg_restore are missing; update the server stage (where
atlas-server is copied and USER atlas is set) to install the PostgreSQL client
utilities (e.g., add apk add --no-cache postgresql-client or the appropriate
package for Alpine) so pg_dump and pg_restore are available, or alternatively
document that the db dump/restore subcommands are host-only and must be run from
an image/CI step that has PostgreSQL client tooling installed; ensure references
to atlas-server, db dump, db restore, pg_dump and pg_restore are covered.
---
Nitpick comments:
In `@backend/crates/atlas-server/src/cli.rs`:
- Around line 4-52: Add a #[cfg(test)] mod tests in this file that exercises the
public CLI contract by calling Cli::try_parse_from(...) for each critical
subcommand (Command::Run, Command::Migrate, Command::Check, Command::Db) and
verifying the parsed struct types and key fields (e.g., RunArgs, MigrateArgs,
DatabaseArgs, RpcArgs) using simple asserts; include tests that exercise env
fallbacks by setting/unsetting relevant environment variables and then calling
Cli::try_parse_from to ensure the parser picks up defaults from env; keep tests
small, deterministic, and placed next to the definitions of Cli, Command,
RunArgs, and MigrateArgs so future flag/arg regressions are caught early.
In `@backend/crates/atlas-server/src/config.rs`:
- Around line 256-323: The CLI path in Config::from_run_args trims-and-filters
empty strings for cors_origin but from_env() keeps raw values; extract that
normalization into a shared helper (e.g., normalize_optional_string or enhance
parse_optional_env) that trims, converts empty -> None, and returns
Option<String>, then use that helper from both Config::from_run_args and
Config::from_env() (or wherever from_env() builds Config) so both code paths
apply identical blank-filtering/normalization for cors_origin and any other
optional string fields (chain.logo_url, branding colors).
- Around line 327-370: Add a new #[cfg(test)] mod tests in the same file and add
focused unit tests that directly exercise FaucetConfig::from_faucet_args: test
the disabled path (args.enabled = false), the enabled happy path (set args with
valid amount string and cooldown, and set FAUCET_PRIVATE_KEY env var to a valid
key), and failure cases for missing FAUCET_PRIVATE_KEY env var, invalid
FAUCET_PRIVATE_KEY (PrivateKeySigner::from_str error), missing/zero/invalid
amount string (parse_faucet_amount_to_wei and U256::ZERO), missing/zero
cooldown_minutes, and an overflow cooldown_minutes (checked_mul returning None);
use the crate::cli::FaucetArgs struct to build args and assert Ok(Self) or the
appropriate anyhow::Error for each scenario so changes to from_faucet_args are
directly covered.
In `@backend/crates/atlas-server/src/main.rs`:
- Around line 260-263: Replace the unconditional process termination with an
error return so callers (main) can decide exit behavior: instead of calling
std::process::exit(1) in the branch where !confirm, return an Err describing the
aborted destructive action (e.g., Err(anyhow::anyhow!("operation aborted: pass
--confirm to proceed"))). Update the function signature to return a Result if it
doesn't already (or propagate the error via ?), and ensure callers (including
main) handle the Result and set the final process exit code.
In `@backend/crates/atlas-server/tests/integration/schema.rs`:
- Around line 146-258: The tests duplicate_block_number_is_rejected,
duplicate_transaction_is_rejected and duplicate_erc20_transfer_is_rejected use a
shared static pool (common::pool()) so setup inserts can reuse existing rows;
wrap each test’s setup and assertions in a DB transaction (begin and roll back
at the end) or, at minimum, assert the setup insert returned rows_affected() ==
1 to ensure the row was actually created (for the first test remove or avoid
relying on ON CONFLICT DO NOTHING for the setup insert), and for the erc20 test
check the insert() execute result’s rows_affected() == 1 before attempting the
duplicate insert; use the existing query executions (the
sqlx::query(...).execute(pool).await results) to perform these checks so the
unique-constraint assertions remain deterministic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 40b984b2-8c21-4d12-afa1-41c4a341a3d5
📒 Files selected for processing (11)
- .github/workflows/ci.yml
- Justfile
- backend/Cargo.toml
- backend/Dockerfile
- backend/crates/atlas-server/Cargo.toml
- backend/crates/atlas-server/src/cli.rs
- backend/crates/atlas-server/src/config.rs
- backend/crates/atlas-server/src/lib.rs
- backend/crates/atlas-server/src/main.rs
- backend/crates/atlas-server/tests/integration/main.rs
- backend/crates/atlas-server/tests/integration/schema.rs
Actionable comments posted: 1
♻️ Duplicate comments (1)
.github/workflows/ci.yml (1)
280-283: ⚠️ Potential issue | 🟡 Minor

Use the CLI dump wrapper in the compat job.

Line 280 passes a credential-bearing URL to `pg_dump --dbname`, which reintroduces the argv exposure the new dump helper was added to avoid. Reusing `atlas-server db dump` here keeps credentials out of argv and exercises the same path as the shipped CLI.

🔁 Suggested fix
```diff
-pg_dump --dbname="$SOURCE_DB_URL" --format=custom --file "$DUMP_FILE"
+DATABASE_URL="$SOURCE_DB_URL" cargo run --quiet --bin atlas-server -- db dump "$DUMP_FILE"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci.yml around lines 280 - 283, Replace the direct pg_dump invocation with the CLI wrapper command so credentials aren't exposed in argv: call the atlas-server db dump helper (e.g., run DATABASE_URL="$SOURCE_DB_URL" cargo run --quiet --bin atlas-server -- db dump ... to produce "$DUMP_FILE") instead of pg_dump --dbname="$SOURCE_DB_URL", then continue using DATABASE_URL="$RESTORE_DB_URL" cargo run --quiet --bin atlas-server -- db restore "$DUMP_FILE" and migrate; ensure the dump command writes the same "$DUMP_FILE" and that all credential-bearing URLs are passed via the DATABASE_URL environment variable to atlas-server rather than as pg_dump arguments.
🧹 Nitpick comments (1)
backend/crates/atlas-server/src/cli.rs (1)
64-87: Add parser coverage for the env-only `DATABASE_URL` path.

`#[arg(skip = database_url_from_env())]` makes this behavior invisible to clap itself, so a small `Cli::try_parse_from` matrix for `run`, `migrate`, `db dump`, and `db reset --confirm` would be the safest way to lock it down.

As per coding guidelines, "`backend/crates/**/*.rs`: Add unit tests for new logic in a `#[cfg(test)] mod tests` block in the same file".

Also applies to: 382-410
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/cli.rs` around lines 64 - 87, Add unit tests in a #[cfg(test)] mod tests block in the same file to cover the env-only DATABASE_URL path created by database_url_from_env(); specifically write a Cli::try_parse_from matrix exercising the commands "run", "migrate", "db dump", and "db reset --confirm" to ensure DatabaseArgs (the struct with #[arg(skip = database_url_from_env())]) behaves correctly when DATABASE_URL is provided only via environment; set/clear the env var inside each test case and assert parsing succeeds and yields the expected DatabaseArgs.url value or error state as appropriate.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/crates/atlas-server/src/main.rs`:
- Around line 49-92: The subprocess inherits the parent's PG* environment which
can leak host/user/password into pg_dump/pg_restore; before setting any PG
variables, call command.env_remove(...) for each relevant name (PGHOST, PGPORT,
PGUSER, PGPASSWORD, PGDATABASE, PGSSLMODE, PGSSLCERT, PGSSLKEY, PGSSLROOTCERT,
PGSSLCRL, PGAPPNAME, PGOPTIONS, PGCONNECT_TIMEOUT) so inherited values are
cleared, then proceed to set the ones parsed from url as the current code
(references: the Command instance named command and the env-setting blocks for
PGHOST/PGPORT/PGUSER/PGPASSWORD/PGDATABASE and the query_pairs match arms).
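The scrubbing step this prompt describes can be verified with the standard library alone: `Command::get_envs` reports explicit removals as `(name, None)`, which makes the behavior easy to assert. `scrubbed_pg_command` is an illustrative name, and the variable list mirrors the one in the comment above:

```rust
use std::ffi::OsStr;
use std::process::Command;

/// libpq variables that must not leak from the parent environment into
/// pg_dump/pg_restore; cleared before any URL-derived values are set.
const LIBPQ_VARS: &[&str] = &[
    "PGHOST", "PGPORT", "PGUSER", "PGPASSWORD", "PGDATABASE",
    "PGSSLMODE", "PGSSLCERT", "PGSSLKEY", "PGSSLROOTCERT", "PGSSLCRL",
    "PGAPPNAME", "PGOPTIONS", "PGCONNECT_TIMEOUT",
];

fn scrubbed_pg_command(program: &str) -> Command {
    let mut cmd = Command::new(program);
    for var in LIBPQ_VARS {
        // Inherited values are dropped even if we never set a replacement.
        cmd.env_remove(var);
    }
    cmd
}

fn main() {
    let mut cmd = scrubbed_pg_command("pg_dump");
    cmd.env("PGHOST", "localhost").env("PGDATABASE", "atlas");
    // get_envs yields (name, None) for removals and (name, Some(value)) for sets.
    let removed: Vec<_> = cmd
        .get_envs()
        .filter(|(_, v)| v.is_none())
        .map(|(k, _)| k.to_os_string())
        .collect();
    assert!(removed.contains(&OsStr::new("PGPASSWORD").to_os_string()));
    assert!(cmd
        .get_envs()
        .any(|(k, v)| k == OsStr::new("PGHOST") && v == Some(OsStr::new("localhost"))));
    println!("scrubbed {} vars", removed.len());
}
```

Because `get_envs` exposes the override map without spawning, this pattern also answers the review's call for same-file unit tests of the env translation.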
---
Duplicate comments:
In @.github/workflows/ci.yml:
- Around line 280-283: Replace the direct pg_dump invocation with the CLI
wrapper command so credentials aren't exposed in argv: call the atlas-server db
dump helper (e.g., run DATABASE_URL="$SOURCE_DB_URL" cargo run --quiet --bin
atlas-server -- db dump ... to produce "$DUMP_FILE") instead of pg_dump
--dbname="$SOURCE_DB_URL", then continue using DATABASE_URL="$RESTORE_DB_URL"
cargo run --quiet --bin atlas-server -- db restore "$DUMP_FILE" and migrate;
ensure the dump command writes the same "$DUMP_FILE" and that all
credential-bearing URLs are passed via the DATABASE_URL environment variable to
atlas-server rather than as pg_dump arguments.
---
Nitpick comments:
In `@backend/crates/atlas-server/src/cli.rs`:
- Around line 64-87: Add unit tests in a #[cfg(test)] mod tests block in the
same file to cover the env-only DATABASE_URL path created by
database_url_from_env(); specifically write a Cli::try_parse_from matrix
exercising the commands "run", "migrate", "db dump", and "db reset --confirm" to
ensure DatabaseArgs (the struct with #[arg(skip = database_url_from_env())])
behaves correctly when DATABASE_URL is provided only via environment; set/clear
the env var inside each test case and assert parsing succeeds and yields the
expected DatabaseArgs.url value or error state as appropriate.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e6e0b500-3c11-49c8-a00f-a511ed717c34
📒 Files selected for processing (4)
- .github/workflows/ci.yml
- backend/crates/atlas-server/src/cli.rs
- backend/crates/atlas-server/src/config.rs
- backend/crates/atlas-server/src/main.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/crates/atlas-server/src/config.rs
Actionable comments posted: 2
🧹 Nitpick comments (2)
backend/crates/atlas-server/src/main.rs (2)
346-348: Prefer returning an error over `std::process::exit` here.

Exiting inside the helper bypasses normal cleanup and makes the `--confirm` guard hard to unit-test. `bail!(...)` preserves the same non-zero CLI behavior because `main` already returns `Result<()>`.

♻️ Small refactor
```diff
 if !confirm {
-    eprintln!("This will DELETE all indexed data. Pass --confirm to proceed.");
-    std::process::exit(1);
+    bail!("This will DELETE all indexed data. Pass --confirm to proceed.");
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/main.rs` around lines 346 - 348, Replace the std::process::exit call with an error return using anyhow's bail! so the helper returns Err instead of terminating the process; in the if !confirm branch (in main or the helper where the check lives) remove std::process::exit(1) and call bail!("This will DELETE all indexed data. Pass --confirm to proceed.") (or use return Err(anyhow::anyhow!(...)) if you prefer), and add use anyhow::bail; at the top if not already imported so the CLI keeps a non-zero exit while allowing normal cleanup and unit-testing.
29-112: Please add same-file unit tests for the new DB CLI helpers.

The test module still only covers shutdown and `fetch_chain_id`, so the new `DATABASE_URL` parsing/env translation, invalid-URL branches, and `db reset --confirm` guard are untested.

As per coding guidelines, "Add unit tests for new logic in a `#[cfg(test)] mod tests` block in the same file".

Also applies to: 319-362
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/main.rs` around lines 29 - 112, Add a new #[cfg(test)] mod tests block in this same file containing unit tests for the DATABASE_URL helpers: write tests for required_db_url to assert it trims input and returns Err when empty, and tests for postgres_command that (1) parse a full postgres:// URL with user/password/host/port/dbname and query params and assert the returned Command has the expected environment variables set (PGHOST, PGPORT, PGUSER, PGPASSWORD, PGDATABASE, PGSSLMODE, PGSSLCERT, PGSSLKEY, PGSSLROOTCERT, PGSSLCRL, PGAPPNAME, PGOPTIONS, PGCONNECT_TIMEOUT), (2) assert it returns an Err for non-postgres schemes, and (3) assert it returns an Err when the URL is missing a database name; also add a test covering the db reset confirmation guard mentioned in the review (the same-file test should exercise the function that enforces "--confirm" for reset) so all new branches in required_db_url and postgres_command are covered.
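The trim-and-reject behavior the first test case targets is simple enough to sketch directly. The function name matches the one mentioned in the review, but its real signature in the PR may differ:

```rust
/// Minimal sketch of the guard under test: trim DATABASE_URL and reject
/// blank or missing values with a descriptive error. Signature assumed.
fn required_db_url(raw: Option<&str>) -> Result<String, String> {
    match raw.map(str::trim) {
        Some(url) if !url.is_empty() => Ok(url.to_string()),
        _ => Err("DATABASE_URL must be set and non-empty".to_string()),
    }
}

fn main() {
    assert_eq!(
        required_db_url(Some("  postgres://localhost/atlas ")),
        Ok("postgres://localhost/atlas".to_string())
    );
    assert!(required_db_url(Some("   ")).is_err());
    assert!(required_db_url(None).is_err());
    println!("ok");
}
```

Keeping the helper pure (taking the raw value rather than reading the environment itself) is what makes these branches trivially testable in a `#[cfg(test)]` module.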
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/crates/atlas-server/src/main.rs`:
- Around line 44-47: The current logic reads the database name only from
url.path() (variable database_name) and bails if empty; instead parse the URL
query parameters (url.query()) for libpq-style keys and map them to the
corresponding PG_* env vars before rejecting: extract keys like
dbname→PGDATABASE, host/hostaddr→PGHOST, port→PGPORT, user→PGUSER,
password→PGPASSWORD, service→PGSERVICE and any other libpq parameters present,
and prefer query params over path when path is empty; update the code around
database_name, the url parsing block and the SSL/app query translation (the
existing mapping logic near lines 81–109) to populate std::env variables
accordingly and only bail if neither path nor dbname query yields a database
name.
- Around line 332-335: The cmd_db_restore function currently calls
postgres_command(db_url)? and relies on PGDATABASE but must pass the database
name explicitly to pg_restore using the --dbname/-d flag for direct-restore
mode; update cmd_db_restore to parse or obtain the DB name from the provided
db_url (reusing the same extraction logic used by postgres_command) and add
.arg("--dbname").arg(db_name) (or .args(["--dbname", &db_name])) before
.args([...input...]) so pg_restore runs in direct-restore mode instead of script
mode.
---
Nitpick comments:
In `@backend/crates/atlas-server/src/main.rs`:
- Around line 346-348: Replace the std::process::exit call with an error return
using anyhow's bail! so the helper returns Err instead of terminating the
process; in the if !confirm branch (in main or the helper where the check lives)
remove std::process::exit(1) and call bail!("This will DELETE all indexed data.
Pass --confirm to proceed.") (or use return Err(anyhow::anyhow!(...)) if you
prefer), and add use anyhow::bail; at the top if not already imported so the CLI
keeps a non-zero exit while allowing normal cleanup and unit-testing.
- Around line 29-112: Add a new #[cfg(test)] mod tests block in this same file
containing unit tests for the DATABASE_URL helpers: write tests for
required_db_url to assert it trims input and returns Err when empty, and tests
for postgres_command that (1) parse a full postgres:// URL with
user/password/host/port/dbname and query params and assert the returned Command
has the expected environment variables set (PGHOST, PGPORT, PGUSER, PGPASSWORD,
PGDATABASE, PGSSLMODE, PGSSLCERT, PGSSLKEY, PGSSLROOTCERT, PGSSLCRL, PGAPPNAME,
PGOPTIONS, PGCONNECT_TIMEOUT), (2) assert it returns an Err for non-postgres
schemes, and (3) assert it returns an Err when the URL is missing a database
name; also add a test covering the db reset confirmation guard mentioned in the
review (the same-file test should exercise the function that enforces
"--confirm" for reset) so all new branches in required_db_url and
postgres_command are covered.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9839e5f3-fef6-43a1-9663-3a898ceb66a2
📒 Files selected for processing (1)
backend/crates/atlas-server/src/main.rs
🧹 Nitpick comments (1)
backend/crates/atlas-server/src/main.rs (1)
404-421: Consider extracting the table list to a constant or querying dynamically.

The hardcoded table list in `TRUNCATE` will need manual updates whenever tables are added or removed from the schema. This could lead to stale data remaining after a reset if new tables are added but this list isn't updated.

💡 Alternative approach

Consider querying `information_schema.tables` filtered by schema to dynamically build the truncate list, or extracting the table names to a shared constant that can be referenced by tests:

```rust
// Option 1: Shared constant (simpler)
const INDEXED_TABLES: &[&str] = &[
    "blocks", "transactions", "event_logs", /* ... */
];

// Option 2: Dynamic query (more robust)
// SELECT table_name FROM information_schema.tables
// WHERE table_schema = 'public' AND table_name NOT IN ('_sqlx_migrations', ...)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/crates/atlas-server/src/main.rs` around lines 404 - 421, The TRUNCATE in cmd_db_reset currently uses a hardcoded comma-separated table list which will go out-of-sync as schema changes; refactor by extracting the table names into a single shared constant (e.g. INDEXED_TABLES) or build the list dynamically by querying information_schema.tables for the target schema (excluding migrations and internal tables), then join those names to produce the TRUNCATE ... CASCADE SQL string used in cmd_db_reset; update any tests or callers to reference the new constant if you choose the shared-constant approach.
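Option 1 is straightforward to wire into `cmd_db_reset`; a minimal sketch of deriving the statement from a shared constant (the table names here are an illustrative subset, not the full schema):

```rust
/// Illustrative subset of indexed tables; the real list lives in the schema.
const INDEXED_TABLES: &[&str] = &["blocks", "transactions", "event_logs"];

/// Build a single TRUNCATE statement so foreign-key cycles are handled in
/// one pass and identity columns restart from scratch.
fn truncate_statement(tables: &[&str]) -> String {
    format!(
        "TRUNCATE TABLE {} RESTART IDENTITY CASCADE",
        tables.join(", ")
    )
}

fn main() {
    let sql = truncate_statement(INDEXED_TABLES);
    assert_eq!(
        sql,
        "TRUNCATE TABLE blocks, transactions, event_logs RESTART IDENTITY CASCADE"
    );
    println!("{sql}");
}
```

Because the constant is shared, a schema test can compare it against `information_schema.tables` and fail loudly when a new table is added without updating the reset path.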
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@backend/crates/atlas-server/src/main.rs`:
- Around line 404-421: The TRUNCATE in cmd_db_reset currently uses a hardcoded
comma-separated table list which will go out-of-sync as schema changes; refactor
by extracting the table names into a single shared constant (e.g.
INDEXED_TABLES) or build the list dynamically by querying
information_schema.tables for the target schema (excluding migrations and
internal tables), then join those names to produce the TRUNCATE ... CASCADE SQL
string used in cmd_db_reset; update any tests or callers to reference the new
constant if you choose the shared-constant approach.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 71116649-dca2-4f6a-ab7c-373f3def0f6c
📒 Files selected for processing (1)
backend/crates/atlas-server/src/main.rs
```yaml
      - name: Test
        run: cargo test --workspace --all-targets

  backend-cli-db-flow:
```
can this be a test to run locally as well?
Summary
`atlas-server` Commands
- `atlas-server run`: start the API, indexer, optional DA worker, and optional faucet
- `atlas-server migrate`: run database migrations and exit
- `atlas-server check`: validate config and verify database/RPC connectivity
- `atlas-server db dump <output>`: create a PostgreSQL logical dump with `pg_dump`
- `atlas-server db restore <input>`: restore a PostgreSQL logical dump with `pg_restore --clean --if-exists`
- `atlas-server db reset --confirm`: truncate indexed data while keeping schema and migrations
- `just build-release`: build a release binary and copy it to `build/atlas-server`
- `just install`: install `atlas-server` from the local crate
- `just backend-run`: run the server via `cargo run --bin atlas-server -- run`

Summary by CodeRabbit
New Features
Tests
Chores