Conversation

Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the corresponding setting. Use the following commands to manage reviews, or the checkboxes below for quick actions:
📝 Walkthrough

Adds Redis support (ping and backup), introduces a RedisDatabase, propagates an optional is_test flag through the Database trait and implementations to conditionally skip file locking, updates tests, Docker/CI, and adds a test-requirements installer script and redis-tools in images.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Runner as Service/Runner
    participant Factory as DatabaseFactory
    participant RedisDB as RedisDatabase
    participant BackupTask as redis::backup::run
    participant CLI as redis-cli
    participant FS as Filesystem

    Runner->>Factory: create_for_backup(cfg)
    Factory->>RedisDB: RedisDatabase::new(cfg)
    Runner->>RedisDB: backup(backup_dir, is_test=Some(false))
    alt test_mode != true
        RedisDB->>RedisDB: FileLock::acquire()
    end
    RedisDB->>BackupTask: run(cfg, backup_dir, ".rdb")
    BackupTask->>CLI: execute redis-cli --rdb <path> (with auth if set)
    CLI->>FS: write RDB file
    FS-->>CLI: file created
    CLI-->>BackupTask: stdout/stderr
    BackupTask-->>RedisDB: return PathBuf
    alt test_mode != true
        RedisDB->>RedisDB: FileLock::release()
    end
    RedisDB-->>Runner: Result<PathBuf>
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Poem
🚥 Pre-merge checks (✅ 2 | ❌ 1)

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
📝 Coding Plan
Actionable comments posted: 12
🧹 Nitpick comments (5)
src/domain/factory.rs (1)
37-47: Avoid routing Redis through the restore factory. Line 47 returns a `RedisDatabase`, but `RedisDatabase::restore` always bails. That turns an unsupported feature into a later runtime failure after restore backend selection has already succeeded. Consider rejecting `DbType::Redis` at the factory/service boundary instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/factory.rs` around lines 37 - 47, the factory function create_for_restore currently returns a RedisDatabase for DbType::Redis even though RedisDatabase::restore always fails; change create_for_restore to reject Redis at this boundary by returning a Result (e.g., Result<Arc<dyn Database>, RestoreError>) instead of unconditionally constructing RedisDatabase, update the match arm for DbType::Redis to return an Err with a clear unsupported-backend error, and propagate this Result to callers so restore selection fails fast; reference create_for_restore, DbType::Redis, RedisDatabase::restore when making the change.

docker-compose.yml (2)
107-107: Consider pinning the Redis image version for reproducibility. Using `redis:latest` can lead to inconsistent behavior across environments when the image is updated. Consider pinning to a specific version (e.g., `redis:7.2-alpine`).

Also applies to: 118-118
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker-compose.yml` at line 107, Replace the floating Redis image tag "redis:latest" with a pinned, explicit version to ensure reproducible builds (for example "redis:7.2-alpine"); update both occurrences currently using "image: redis:latest" so the docker-compose service definitions reference the pinned tag (or a variable like ${REDIS_IMAGE} sourced from .env) instead of latest.
42-104: Large blocks of commented-out code. Consider using Docker Compose profiles instead of commenting out services. This would allow enabling/disabling services more cleanly via the `--profile` flag.

💡 Example using profiles

```yaml
services:
  db-mariadb:
    profiles: ["mariadb"]
    container_name: db-mariadb
    image: mariadb:latest
    # ... rest of config
  db-mongodb:
    profiles: ["mongodb"]
    container_name: db-mongodb
    # ... rest of config
```

Then run with:

```shell
docker compose --profile mariadb --profile mongodb up
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker-compose.yml` around lines 42 - 104, The large blocks of commented-out service definitions (db-mariadb, db-mongodb-auth, db-mongodb, sqlite) should be converted to optional Docker Compose profiles so services can be enabled/disabled via compose flags; update each commented service to an active service entry and add a profiles: ["mariadb"] / ["mongodb-auth"] / ["mongodb"] / ["sqlite"] array on the respective service definitions (retain existing keys like container_name, image, ports, volumes, environment, healthcheck, command) and remove the comment markers so maintainers can run them with docker compose --profile <name> up.

src/domain/sqlite/database.rs (2)
30-40: Lock release may mask backup errors. If `backup::run` fails and then `FileLock::release` also fails, the release error will propagate via `?` and mask the original backup error. Consider capturing the backup result first and ensuring it's returned even if release fails.

This pattern is consistent across all database implementations (Redis, MongoDB, Postgres, MySQL), so it may warrant a broader refactor if addressed.
♻️ Suggested improvement to preserve original error

```diff
 async fn backup(&self, dir: &Path, is_test: Option<bool>) -> Result<PathBuf> {
     let test_mode = is_test.unwrap_or(false);
     if !test_mode {
         FileLock::acquire(&self.cfg.generated_id, DbOpLock::Backup.as_str()).await?;
     }
     let res = backup::run(self.cfg.clone(), dir.to_path_buf(), self.file_extension()).await;
     if !test_mode {
-        FileLock::release(&self.cfg.generated_id).await?;
+        if let Err(e) = FileLock::release(&self.cfg.generated_id).await {
+            tracing::warn!("Failed to release lock: {e}");
+        }
     }
     res
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/sqlite/database.rs` around lines 30 - 40, In the backup function (async fn backup) the FileLock::release error can overwrite a failing backup::run error; capture the result of backup::run into a variable (e.g., let res = backup::run(...).await;) before releasing the lock, then call FileLock::release and if it fails, decide: if res is Err return the original res (and optionally log the release error), else return the release error; ensure this pattern uses the same symbols (backup::run, FileLock::release, cfg.generated_id, DbOpLock::Backup) so the original backup error is preserved while still surfacing lock-release failures when there was no backup error.
41-51: Same error-masking concern in restore method. The restore method has the same potential issue where a `FileLock::release` failure could mask the original `restore::run` error. Consider applying the same fix as suggested for the backup method for consistency.
Verify each finding against the current code and only fix it if needed. In `@src/domain/sqlite/database.rs` around lines 41 - 51, The restore method can mask an error from restore::run when FileLock::release fails; change restore to store the result of restore::run in a variable, then always attempt FileLock::release (when not test_mode) and if release fails while restore::run already returned Err, return a combined/annotated error that preserves the original restore::run error plus the release failure (or at minimum return the original restore error and log the release error); specifically update the restore function (references: restore, restore::run, FileLock::acquire, FileLock::release, DbOpLock::Restore) to attempt release in a separate step and merge or chain errors instead of letting release overwrite the original error.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@databases.json`:
- Around line 61-67: The committed test config contains a real-looking secret in
the "password" field for the entry named "Test database 9 - Redis Auth"; remove
the concrete value and replace it with a non-secret placeholder or an
environment-reference (e.g., set password to a placeholder like "<REDACTED>" or
reference an env var) so the file no longer contains secrets, and rotate the
credential if it was ever used; update the entry that contains "generated_id":
"16678160-ff7e-4c97-8c83-0adeff214681" accordingly to ensure tests still find
the record.
In `@docker-compose.yml`:
- Around line 106-128: Move the hardcoded password out of the db-redis-auth
service and use environment variable substitution: create a .env entry
REDIS_PASSWORD (and add .env to .gitignore), remove the literal
"m6o2L0Ukw3Og5RJUKh" from the environment list and the command array in the
db-redis-auth service, and reference the variable (e.g., ${REDIS_PASSWORD}) for
both the REDIS_PASSWORD environment key and the --requirepass argument so the
service uses the value from the environment instead of committing secrets to
git.
In `@scripts/tests/requirements.sh`:
- Around line 15-16: The apt install call for the PostgreSQL client is
interactive and may block CI; update the installation command in the
scripts/tests/requirements.sh where the script echoes "Installing
postgresql-client" and then runs the installer so it runs non-interactively by
adding the automatic yes flag (e.g., change the postgresql-client install
invocation to include -y) so the package installs without prompting.
In `@src/core/agent.rs`:
- Around line 46-50: The current lookup using
config.databases.iter().find(...).unwrap() can panic if no matching generated_id
is found; change the unwrap into a safe conditional (e.g., if let Some(database)
= config.databases.iter().find(|cfg_db| cfg_db.generated_id == db.generated_id)
{ ... } else { log an error mentioning db.generated_id and skip/continue this
loop iteration }) so the agent loop does not crash on inconsistent generated_id;
keep the info! logging inside the Some branch and ensure the else branch emits
an error or warning and continues.
In `@src/domain/postgres/backup.rs`:
- Around line 30-34: The check treating None as test mode is inverted: replace
the conditional that uses is_test == Option::from(false) with a clear boolean
default (e.g. is_test.unwrap_or(false)) so that None becomes false (non-test)
and the code uses select_pg_path(&version).join("pg_dump") by default; only use
the PATH-resolved "pg_dump" (the "pg_dump".to_string().parse()? branch) when
is_test is true. Update the conditional around pg_dump (variable pg_dump in
src/domain/postgres/backup.rs) accordingly to use is_test.unwrap_or(false) or an
equivalent match.
In `@src/domain/redis/backup.rs`:
- Around line 12-36: The backup subprocess is spawned with cmd.output() inside
the spawn_blocking closure without any timeout; change this to run the process
with a cancellable timeout like in ping.rs: instead of calling cmd.output(),
call cmd.spawn() to get a Child (preserve use of cfg and file_path), then use
tokio::time::timeout(Duration::from_secs(10), child.wait_with_output()) from the
async context (move the blocking spawn out if needed) so the wait will fail
after 10s; on timeout, ensure you call child.kill() (and wait for reaping) and
return an appropriate error via context("Redis backup command timed out");
handle and propagate other child errors as before.
- Around line 28-34: The current Backup routine passes credentials on the
command line (cfg.password via "-a" and cfg.username via "--user") and logs the
full cmd (debug!("Command Backup: {:?}", cmd)), which exposes secrets; change it
to supply authentication via a Redis URI in the environment (e.g., set an env
var like REDIS_URL or REDISCLI_AUTH from cfg) and remove adding "-a" and
"--user" to cmd, and update the debug logging to print a redacted/sanitized
version of the command (replace any credential-containing args or show only
executable and non-sensitive flags) so that cfg.password/cfg.username are never
included in argv or logs; locate references to cfg.password, cfg.username, cmd,
file_path and the debug! call to implement these changes.
In `@src/domain/redis/ping.rs`:
- Around line 24-36: The code currently logs the full command and raw redis-cli
output (see the debug!("Command Ping: {:?}", cmd) and info!("Redis
stdout/stderr") lines in ping.rs where timeout(..., cmd.output()).await is
used), which may expose credentials; remove or replace those logs with sanitized
metadata only: do not log the raw `cmd` or full stdout/stderr, instead log
non-sensitive info such as success/failure, exit status, a truncated/sanitized
snippet with secrets redacted (implement or call a helper like
sanitize_command(cmd) or redact_sensitive_output(stdout/stderr)), and any safe
error context (e.g., "Ping command failed: <error message>" without embedding
credentials). Ensure all references to logging inside the match arms for
Ok(output) and error paths use the sanitizer before logging.
In `@src/services/config.rs`:
- Around line 176-179: The config currently accepts a Redis `database`
(db.database) but never uses it at runtime; update the match in
services/config.rs where `database_name` is computed so that for DbType::Redis
you explicitly reject a non-empty `db.database` instead of calling
`optional(&db.database)` — validate `db.database` is empty and return a
configuration error (with context such as `db.name` and the "database" field) if
it's set; reference `DbType::Redis`, `db.database`/`database_name`, and ensure
this behavior matches the Redis code paths in src/domain/redis/ping.rs and
src/domain/redis/backup.rs so callers aren’t silently ignored.
In `@src/tests/domain/postgres.rs`:
- Around line 81-87: The test currently assigns an empty PathBuf to backup_file
when decompression yields != 1, hiding the real failure; replace that branch
with an explicit assert or expect on files.len() == 1 (with a clear message like
"decompression produced X files, expected 1") before creating backup_file, so
restore/format failures point to the decompression problem; refer to the
backup_file variable and the call site that later passes it into
create_for_restore to locate where to add the assertion.
- Around line 24-29: The test currently falls back to 127.0.0.1:5432 when
Testcontainers fails to resolve the container endpoint (using
container.get_host().await.unwrap_or(Host::parse("127.0.0.1").unwrap()) and
container.get_host_port_ipv4(5432).await.unwrap_or(5432)); change these to fail
fast by removing the default fallbacks and surface the error instead (e.g., use
expect or propagate the Result from container.get_host() and
container.get_host_port_ipv4() so the test panics or returns an Err when
resolution fails), updating references where host and port are used to assume
successful resolution from container.get_host() and
container.get_host_port_ipv4().
In `@src/tests/domain/redis.rs`:
- Around line 14-22: The test currently falls back to localhost when container
lookups fail (using container.get_host().await.unwrap_or(...) and
container.get_host_port_ipv4(6379).await.unwrap_or(...)), which can mask
container setup failures; change the code to fail fast by removing the fallback
defaults and instead unwrap/expect the Results from container.get_host() and
container.get_host_port_ipv4(6379) (or propagate the error), so that resolution
failures for host and port (variables host and port) cause the test to abort
immediately rather than defaulting to 127.0.0.1:6379.
---
Nitpick comments:
In `@docker-compose.yml`:
- Line 107: Replace the floating Redis image tag "redis:latest" with a pinned,
explicit version to ensure reproducible builds (for example "redis:7.2-alpine");
update both occurrences currently using "image: redis:latest" so the
docker-compose service definitions reference the pinned tag (or a variable like
${REDIS_IMAGE} sourced from .env) instead of latest.
- Around line 42-104: The large blocks of commented-out service definitions
(db-mariadb, db-mongodb-auth, db-mongodb, sqlite) should be converted to
optional Docker Compose profiles so services can be enabled/disabled via compose
flags; update each commented service to an active service entry and add a
profiles: ["mariadb"] / ["mongodb-auth"] / ["mongodb"] / ["sqlite"] array on the
respective service definitions (retain existing keys like container_name, image,
ports, volumes, environment, healthcheck, command) and remove the comment
markers so maintainers can run them with docker compose --profile <name> up.
In `@src/domain/factory.rs`:
- Around line 37-47: The factory function create_for_restore currently returns a
RedisDatabase for DbType::Redis even though RedisDatabase::restore always fails;
change create_for_restore to reject Redis at this boundary by returning a Result
(e.g., Result<Arc<dyn Database>, RestoreError>) instead of unconditionally
constructing RedisDatabase, update the match arm for DbType::Redis to return an
Err with a clear unsupported-backend error, and propagate this Result to callers
so restore selection fails fast; reference create_for_restore, DbType::Redis,
RedisDatabase::restore when making the change.
In `@src/domain/sqlite/database.rs`:
- Around line 30-40: In the backup function (async fn backup) the
FileLock::release error can overwrite a failing backup::run error; capture the
result of backup::run into a variable (e.g., let res = backup::run(...).await;)
before releasing the lock, then call FileLock::release and if it fails, decide:
if res is Err return the original res (and optionally log the release error),
else return the release error; ensure this pattern uses the same symbols
(backup::run, FileLock::release, cfg.generated_id, DbOpLock::Backup) so the
original backup error is preserved while still surfacing lock-release failures
when there was no backup error.
- Around line 41-51: The restore method can mask an error from restore::run when
FileLock::release fails; change restore to store the result of restore::run in a
variable, then always attempt FileLock::release (when not test_mode) and if
release fails while restore::run already returned Err, return a
combined/annotated error that preserves the original restore::run error plus the
release failure (or at minimum return the original restore error and log the
release error); specifically update the restore function (references: restore,
restore::run, FileLock::acquire, FileLock::release, DbOpLock::Restore) to
attempt release in a separate step and merge or chain errors instead of letting
release overwrite the original error.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 8858f40c-21cf-4442-ace5-515033cbfbad
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (26)
- .github/workflows/codecov.yml
- Cargo.toml
- databases.json
- docker-compose.yml
- docker/Dockerfile
- scripts/tests/requirements.sh
- src/core/agent.rs
- src/domain/factory.rs
- src/domain/mod.rs
- src/domain/mongodb/database.rs
- src/domain/mysql/database.rs
- src/domain/postgres/backup.rs
- src/domain/postgres/database.rs
- src/domain/postgres/restore.rs
- src/domain/redis/backup.rs
- src/domain/redis/database.rs
- src/domain/redis/mod.rs
- src/domain/redis/ping.rs
- src/domain/sqlite/database.rs
- src/services/backup/runner.rs
- src/services/config.rs
- src/services/restore/runner.rs
- src/tests/domain/mod.rs
- src/tests/domain/postgres.rs
- src/tests/domain/redis.rs
- src/tests/mod.rs
```json
"name": "Test database 9 - Redis Auth",
"type": "redis",
"password": "m6o2L0Ukw3Og5RJUKh",
"port": 6379,
"username": "default",
"host": "db-redis-auth",
"generated_id": "16678160-ff7e-4c97-8c83-0adeff214681"
```
Remove committed credential-like password from test config.
The Redis Auth entry includes a concrete secret-looking value. This should not be committed as-is; use a known placeholder (or env-injected value) and rotate it if it was ever real.
🔐 Suggested fix
```diff
 {
   "name": "Test database 9 - Redis Auth",
   "type": "redis",
-  "password": "m6o2L0Ukw3Og5RJUKh",
+  "password": "changeme",
   "port": 6379,
   "username": "default",
   "host": "db-redis-auth",
   "generated_id": "16678160-ff7e-4c97-8c83-0adeff214681"
 }
```

📝 Committable suggestion
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
"name": "Test database 9 - Redis Auth",
"type": "redis",
"password": "changeme",
"port": 6379,
"username": "default",
"host": "db-redis-auth",
"generated_id": "16678160-ff7e-4c97-8c83-0adeff214681"
```
🧰 Tools
🪛 Gitleaks (8.30.0)
[high] 63-63: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@databases.json` around lines 61 - 67, The committed test config contains a
real-looking secret in the "password" field for the entry named "Test database 9
- Redis Auth"; remove the concrete value and replace it with a non-secret
placeholder or an environment-reference (e.g., set password to a placeholder
like "<REDACTED>" or reference an env var) so the file no longer contains
secrets, and rotate the credential if it was ever used; update the entry that
contains "generated_id": "16678160-ff7e-4c97-8c83-0adeff214681" accordingly to
ensure tests still find the record.
```yaml
db-redis:
  image: redis:latest
  container_name: db-redis
  ports:
    - "6379:6379"
  volumes:
    - redis-data:/data
  command: [ "redis-server", "--appendonly", "yes" ]
  networks:
    - portabase

db-redis-auth:
  image: redis:latest
  container_name: db-redis-auth
  ports:
    - "6380:6379"
  volumes:
    - redis-data-auth:/data
  environment:
    - REDIS_PASSWORD=m6o2L0Ukw3Og5RJUKh
  command: [ "redis-server", "--requirepass", "m6o2L0Ukw3Og5RJUKh", "--appendonly", "yes" ]
  networks:
    - portabase
```
Hardcoded Redis password should be moved to environment variables.
The Redis password m6o2L0Ukw3Og5RJUKh is hardcoded in both the environment variable (line 125) and the command (line 126). Even for development environments, credentials should not be committed to version control. Consider using a .env file with variable substitution.
🔐 Suggested fix using .env file
Create a `.env` file (and add it to `.gitignore`):

```
REDIS_PASSWORD=m6o2L0Ukw3Og5RJUKh
```

Then update the docker-compose.yml:
```diff
 db-redis-auth:
   image: redis:latest
   container_name: db-redis-auth
   ports:
     - "6380:6379"
   volumes:
     - redis-data-auth:/data
   environment:
-    - REDIS_PASSWORD=m6o2L0Ukw3Og5RJUKh
-  command: [ "redis-server", "--requirepass", "m6o2L0Ukw3Og5RJUKh", "--appendonly", "yes" ]
+    - REDIS_PASSWORD=${REDIS_PASSWORD}
+  command: [ "redis-server", "--requirepass", "${REDIS_PASSWORD}", "--appendonly", "yes" ]
   networks:
     - portabase
```

📝 Committable suggestion
- portabase📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
db-redis:
  image: redis:latest
  container_name: db-redis
  ports:
    - "6379:6379"
  volumes:
    - redis-data:/data
  command: [ "redis-server", "--appendonly", "yes" ]
  networks:
    - portabase

db-redis-auth:
  image: redis:latest
  container_name: db-redis-auth
  ports:
    - "6380:6379"
  volumes:
    - redis-data-auth:/data
  environment:
    - REDIS_PASSWORD=${REDIS_PASSWORD}
  command: [ "redis-server", "--requirepass", "${REDIS_PASSWORD}", "--appendonly", "yes" ]
  networks:
    - portabase
```
🧰 Tools
🪛 Gitleaks (8.30.0)
[high] 125-125: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docker-compose.yml` around lines 106 - 128, Move the hardcoded password out
of the db-redis-auth service and use environment variable substitution: create a
.env entry REDIS_PASSWORD (and add .env to .gitignore), remove the literal
"m6o2L0Ukw3Og5RJUKh" from the environment list and the command array in the
db-redis-auth service, and reference the variable (e.g., ${REDIS_PASSWORD}) for
both the REDIS_PASSWORD environment key and the --requirepass argument so the
service uses the value from the environment instead of committing secrets to
git.
```diff
 let database = config.databases.iter().find(|cfg_db|cfg_db.generated_id == db.generated_id).unwrap();
 info!(
-    "Generated Id: {} | backup action: {} | restore action: {}",
-    db.generated_id, db.data.backup.action, db.data.restore.action
+    "Generated Id: {} | backup action: {} | restore action: {} | Database Name: {}",
+    db.generated_id, db.data.backup.action, db.data.restore.action, database.name,
 );
```
Avoid panic on missing config match in agent loop.
Line 46 can panic via unwrap(). A single inconsistent generated_id will crash the run loop.
Proposed fix

```diff
-let database = config.databases.iter().find(|cfg_db|cfg_db.generated_id == db.generated_id).unwrap();
+let Some(database) = config
+    .databases
+    .iter()
+    .find(|cfg_db| cfg_db.generated_id == db.generated_id)
+else {
+    tracing::warn!("Skipping status for unknown generated_id: {}", db.generated_id);
+    continue;
+};
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
let Some(database) = config
    .databases
    .iter()
    .find(|cfg_db| cfg_db.generated_id == db.generated_id)
else {
    tracing::warn!("Skipping status for unknown generated_id: {}", db.generated_id);
    continue;
};
info!(
    "Generated Id: {} | backup action: {} | restore action: {} | Database Name: {}",
    db.generated_id, db.data.backup.action, db.data.restore.action, database.name,
);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/core/agent.rs` around lines 46 - 50, The current lookup using
config.databases.iter().find(...).unwrap() can panic if no matching generated_id
is found; change the unwrap into a safe conditional (e.g., if let Some(database)
= config.databases.iter().find(|cfg_db| cfg_db.generated_id == db.generated_id)
{ ... } else { log an error mentioning db.generated_id and skip/continue this
loop iteration }) so the agent loop does not crash on inconsistent generated_id;
keep the info! logging inside the Some branch and ensure the else branch emits
an error or warning and continues.
```rust
debug!("Command Ping: {:?}", cmd);

let result = timeout(Duration::from_secs(10), cmd.output()).await;

match result {
    Ok(output) => {
        let output = output.context("Failed to execute redis-cli")?;
        let stdout = String::from_utf8_lossy(&output.stdout);
        let stderr = String::from_utf8_lossy(&output.stderr);

        info!("Redis stdout: {}", stdout);
        info!("Redis stderr: {}", stderr);
```
Do not log credential-bearing command/output.
This logs the full redis-cli command and raw outputs; when auth is used, credentials can be exposed in logs. Log only sanitized metadata.
🧼 Suggested fix

```diff
-    debug!("Command Ping: {:?}", cmd);
+    debug!(
+        "Running Redis ping (host={}, port={}, has_auth={})",
+        cfg.host,
+        cfg.port,
+        !cfg.password.is_empty()
+    );
@@
-    info!("Redis stdout: {}", stdout);
-    info!("Redis stderr: {}", stderr);
+    debug!("Redis ping completed (status={:?})", output.status);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/domain/redis/ping.rs` around lines 24 - 36, The code currently logs the
full command and raw redis-cli output (see the debug!("Command Ping: {:?}", cmd)
and info!("Redis stdout/stderr") lines in ping.rs where timeout(...,
cmd.output()).await is used), which may expose credentials; remove or replace
those logs with sanitized metadata only: do not log the raw `cmd` or full
stdout/stderr, instead log non-sensitive info such as success/failure, exit
status, a truncated/sanitized snippet with secrets redacted (implement or call a
helper like sanitize_command(cmd) or redact_sensitive_output(stdout/stderr)),
and any safe error context (e.g., "Ping command failed: <error message>" without
embedding credentials). Ensure all references to logging inside the match arms
for Ok(output) and error paths use the sanitizer before logging.
```diff
 let database_name = match db.db_type {
-    DbType::Sqlite => optional(&db.database),
+    DbType::Sqlite | DbType::Redis => optional(&db.database),
     _ => required(&db.database, &db.name, "database")?
 };
```
Redis database is accepted here but ignored by runtime.
Lines 177-178 preserve a database value for Redis, but the Redis paths in src/domain/redis/ping.rs and src/domain/redis/backup.rs never read cfg.database. A non-empty value is silently discarded, which is misleading if callers expect logical-DB selection. Either reject the field for Redis or wire it through explicitly.
🛠️ If Redis backups are instance-level only, reject the unused field

```diff
-    let database_name = match db.db_type {
-        DbType::Sqlite | DbType::Redis => optional(&db.database),
-        _ => required(&db.database, &db.name, "database")?
-    };
+    let database_name = match db.db_type {
+        DbType::Sqlite => optional(&db.database),
+        DbType::Redis => {
+            let database_name = optional(&db.database);
+            if !database_name.is_empty() {
+                return Err(format!(
+                    "Field 'database' is not supported for Redis database '{}'",
+                    db.name
+                ));
+            }
+            database_name
+        }
+        _ => required(&db.database, &db.name, "database")?
+    };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
let database_name = match db.db_type {
    DbType::Sqlite => optional(&db.database),
    DbType::Redis => {
        let database_name = optional(&db.database);
        if !database_name.is_empty() {
            return Err(format!(
                "Field 'database' is not supported for Redis database '{}'",
                db.name
            ));
        }
        database_name
    }
    _ => required(&db.database, &db.name, "database")?
};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/config.rs` around lines 176 - 179, The config currently accepts
a Redis `database` (db.database) but never uses it at runtime; update the match
in services/config.rs where `database_name` is computed so that for
DbType::Redis you explicitly reject a non-empty `db.database` instead of calling
`optional(&db.database)` — validate `db.database` is empty and return a
configuration error (with context such as `db.name` and the "database" field) if
it's set; reference `DbType::Redis`, `db.database`/`database_name`, and ensure
this behavior matches the Redis code paths in src/domain/redis/ping.rs and
src/domain/redis/backup.rs so callers aren’t silently ignored.
let host = container
    .get_host()
    .await
    .unwrap_or(Host::parse("127.0.0.1").unwrap());

let port = container.get_host_port_ipv4(5432).await.unwrap_or(5432);
🧩 Analysis chain
🏁 Script executed:
find . -name "postgres.rs" -path "*/tests/domain/*" -type f
Repository: Portabase/agent
Length of output: 89
🏁 Script executed:
wc -l ./src/tests/domain/postgres.rs
Repository: Portabase/agent
Length of output: 93
🏁 Script executed:
cat -n ./src/tests/domain/postgres.rs
Repository: Portabase/agent
Length of output: 3959
Fail fast if Testcontainers cannot resolve the Postgres endpoint.
Lines 27 and 29 fall back to 127.0.0.1:5432, which can connect the test to a developer's local Postgres instead of the container when endpoint resolution fails. This should stop the test immediately.
🧪 Prefer explicit test setup failure
let host = container
.get_host()
.await
- .unwrap_or(Host::parse("127.0.0.1").unwrap());
+ .expect("failed to resolve Postgres testcontainer host");
- let port = container.get_host_port_ipv4(5432).await.unwrap_or(5432);
+ let port = container
+ .get_host_port_ipv4(5432)
+ .await
+ .expect("failed to resolve Postgres testcontainer port");
📝 Committable suggestion
let host = container
    .get_host()
    .await
    .expect("failed to resolve Postgres testcontainer host");
let port = container
    .get_host_port_ipv4(5432)
    .await
    .expect("failed to resolve Postgres testcontainer port");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tests/domain/postgres.rs` around lines 24 - 29, The test currently falls
back to 127.0.0.1:5432 when Testcontainers fails to resolve the container
endpoint (using
container.get_host().await.unwrap_or(Host::parse("127.0.0.1").unwrap()) and
container.get_host_port_ipv4(5432).await.unwrap_or(5432)); change these to fail
fast by removing the default fallbacks and surface the error instead (e.g., use
expect or propagate the Result from container.get_host() and
container.get_host_port_ipv4() so the test panics or returns an Err when
resolution fails), updating references where host and port are used to assume
successful resolution from container.get_host() and
container.get_host_port_ipv4().
let backup_file: PathBuf;

if files.len() == 1 {
    backup_file = files[0].clone()
} else {
    backup_file = "".into()
}
Assert on decompression output instead of using an empty path.
Line 86 feeds "" into create_for_restore when decompression yields zero files or more than one, so later failures point at restore/format handling instead of the real issue. Stop here with an explicit assertion.
🧩 Tighten the test contract
- let backup_file: PathBuf;
-
- if files.len() == 1 {
- backup_file = files[0].clone()
- } else {
- backup_file = "".into()
- }
+ assert_eq!(files.len(), 1, "expected exactly one decompressed backup file");
+ let backup_file: PathBuf = files[0].clone();
📝 Committable suggestion
assert_eq!(files.len(), 1, "expected exactly one decompressed backup file");
let backup_file: PathBuf = files[0].clone();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tests/domain/postgres.rs` around lines 81 - 87, The test currently
assigns an empty PathBuf to backup_file when decompression yields != 1, hiding
the real failure; replace that branch with an explicit assert or expect on
files.len() == 1 (with a clear message like "decompression produced X files,
expected 1") before creating backup_file, so restore/format failures point to
the decompression problem; refer to the backup_file variable and the call site
that later passes it into create_for_restore to locate where to add the
assertion.
let host = container
    .get_host()
    .await
    .unwrap_or(Host::parse("127.0.0.1").unwrap());

let port = container
    .get_host_port_ipv4(6379)
    .await
    .unwrap_or(6379);
🧩 Analysis chain
🏁 Script executed:
cat -n src/tests/domain/redis.rs
Repository: Portabase/agent
Length of output: 2238
Fail fast if Testcontainers cannot resolve the Redis endpoint.
Lines 17 and 22 fall back to 127.0.0.1:6379, so a broken container lookup can accidentally hit a local Redis instance and hide the real setup failure. This should abort the test immediately.
🧪 Prefer explicit test setup failure
let host = container
.get_host()
.await
- .unwrap_or(Host::parse("127.0.0.1").unwrap());
+ .expect("failed to resolve Redis testcontainer host");
let port = container
.get_host_port_ipv4(6379)
.await
- .unwrap_or(6379);
+ .expect("failed to resolve Redis testcontainer port");
📝 Committable suggestion
let host = container
    .get_host()
    .await
    .expect("failed to resolve Redis testcontainer host");
let port = container
    .get_host_port_ipv4(6379)
    .await
    .expect("failed to resolve Redis testcontainer port");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tests/domain/redis.rs` around lines 14 - 22, The test currently falls
back to localhost when container lookups fail (using
container.get_host().await.unwrap_or(...) and
container.get_host_port_ipv4(6379).await.unwrap_or(...)), which can mask
container setup failures; change the code to fail fast by removing the fallback
defaults and instead unwrap/expect the Results from container.get_host() and
container.get_host_port_ipv4(6379) (or propagate the error), so that resolution
failures for host and port (variables host and port) cause the test to abort
immediately rather than defaulting to 127.0.0.1:6379.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/tests/requirements.sh`:
- Around line 16-17: Replace the deprecated use of "wget --quiet -O -
https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -" and the
plain repo line written by "echo ... | sudo tee
/etc/apt/sources.list.d/pgdg.list" with the modern keyring approach: download
and dearmor the PostgreSQL key into a keyring file under /usr/share/keyrings
(e.g., postgresql-archive-keyring.gpg) and update the repository entry to
include "signed-by=/usr/share/keyrings/postgresql-archive-keyring.gpg" in the
source list; ensure the commands that currently reference "wget --quiet -O -
https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -" and the
"echo ... | sudo tee /etc/apt/sources.list.d/pgdg.list" snippet are replaced
accordingly so apt uses the keyring instead of apt-key.
- Around line 39-41: The Linux installation block in
scripts/tests/requirements.sh currently installs postgresql-client-17 which
mismatches the macOS block that installs postgresql@18; update the Linux
commands to install postgresql-client-18 so both platforms use PostgreSQL 18
(look for the Linux apt-get/apt install lines referencing postgresql-client-17
and replace them to reference postgresql-client-18 to match the macOS brew
install/postgresql@18 and brew link lines).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: ffdc5bab-6d55-4809-a418-d85f6c1da494
📒 Files selected for processing (1)
scripts/tests/requirements.sh
scripts/tests/requirements.sh
Outdated
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" | sudo tee /etc/apt/sources.list.d/pgdg.list
apt-key is deprecated and may cause CI failures.
apt-key has been deprecated since apt 2.1 (Ubuntu 20.04+, Debian 11+) and will eventually be removed. Modern Ubuntu/Debian versions show warnings, and this approach will break on future releases.
🔧 Proposed fix using modern keyring approach
- wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
- echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" | sudo tee /etc/apt/sources.list.d/pgdg.list
+ sudo mkdir -p /etc/apt/keyrings
+ wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo gpg --dearmor -o /etc/apt/keyrings/postgresql.gpg
+ echo "deb [signed-by=/etc/apt/keyrings/postgresql.gpg] http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" | sudo tee /etc/apt/sources.list.d/pgdg.list
📝 Committable suggestion
sudo mkdir -p /etc/apt/keyrings
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo gpg --dearmor -o /etc/apt/keyrings/postgresql.gpg
echo "deb [signed-by=/etc/apt/keyrings/postgresql.gpg] http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" | sudo tee /etc/apt/sources.list.d/pgdg.list
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/tests/requirements.sh` around lines 16 - 17, Replace the deprecated
use of "wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc |
sudo apt-key add -" and the plain repo line written by "echo ... | sudo tee
/etc/apt/sources.list.d/pgdg.list" with the modern keyring approach: download
and dearmor the PostgreSQL key into a keyring file under /usr/share/keyrings
(e.g., postgresql-archive-keyring.gpg) and update the repository entry to
include "signed-by=/usr/share/keyrings/postgresql-archive-keyring.gpg" in the
source list; ensure the commands that currently reference "wget --quiet -O -
https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -" and the
"echo ... | sudo tee /etc/apt/sources.list.d/pgdg.list" snippet are replaced
accordingly so apt uses the keyring instead of apt-key.
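The keyring flow proposed above can be sanity-checked before touching apt's config: build the signed-by source line and inspect it. The keyring path matches the proposed fix; the jammy fallback codename is purely illustrative for machines without lsb_release.

```shell
# Build the pgdg source entry with a signed-by attribute instead of apt-key.
# KEYRING matches the path proposed in the fix; the codename fallback is a
# hypothetical default for machines without lsb_release installed.
KEYRING=/etc/apt/keyrings/postgresql.gpg
CODENAME="$(lsb_release -cs 2>/dev/null || echo jammy)"
echo "deb [signed-by=$KEYRING] http://apt.postgresql.org/pub/repos/apt ${CODENAME}-pgdg main"
```

Once the line looks right, it can be piped to sudo tee as in the suggestion; apt then trusts only packages signed by that specific keyring.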
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/tests/requirements.sh`:
- Line 14: The mkdir usage with sudo creates $TARGET_DIR but later cp and chmod
run without sudo and the assets path is relative and CWD-dependent; update the
script to compute an absolute assets path (e.g., ASSETS_DIR="$(cd "$(dirname
"${BASH_SOURCE[0]}")"/../../assets/tools && pwd)") and use that instead of
../../assets/tools, always perform filesystem operations against $TARGET_DIR
using sudo when required (apply sudo to cp and chmod or validate writability
before attempting non-sudo operations), and add explicit architecture
validation/error handling around the ARCH detection logic (fail loudly with a
clear error message if architecture is unsupported instead of silently skipping
installation). Ensure references to TARGET_DIR, ASSETS_DIR (or the computed
script directory), the mkdir -p/cp/chmod commands, and the ARCH detection block
are updated accordingly.
- Around line 24-27: PG_SRC is being set from brew --prefix which can fail and
return an empty value causing PG_SRC to fallback to /bin and trigger a bad copy;
update the logic that sets and validates PG_SRC (and other similar variables) to
first capture brew --prefix postgresql@$v into a temp var, verify it's non-empty
and that "$PREFIX/bin/psql" (or the "$PREFIX/bin" directory) exists before
assigning PG_SRC or proceeding to the copy step, and if the check fails log the
"not installed" message and skip the copy/install; apply the same validation
pattern to the other occurrences referenced (the similar lookups that lead to
copying system /bin).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 1732315f-76ad-426d-ad83-b64bee22c228
📒 Files selected for processing (4)
scripts/tests/requirements.sh
src/domain/postgres/backup.rs
src/domain/postgres/connection.rs
src/domain/postgres/restore.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- src/domain/postgres/restore.rs
- src/domain/postgres/backup.rs
for v in 12 13 14 15 16 17 18; do
    TARGET_DIR="$POSTGRES_BASE/$v/bin"
    sudo mkdir -p "$TARGET_DIR"
🧩 Analysis chain
🏁 Script executed:
cat -n scripts/tests/requirements.sh
Repository: Portabase/agent
Length of output: 3067
Fix reliability issues in Linux PostgreSQL installation: deterministic paths, consistent permissions, and architecture validation.
Three coupled critical issues prevent reliable installation:
- Line 52 uses a CWD-dependent relative path (../../assets/tools), causing the PostgreSQL install to be silently skipped depending on where the script is invoked.
- Line 14 creates directories with sudo, but lines 18–20 copy files and line 40 changes permissions without sudo, causing permission failures when the script lacks write access.
- Lines 17–21 lack error handling for unsupported Linux architectures; unrecognized architectures silently skip binary installation.
🔧 Proposed fix
POSTGRES_BASE="/usr/local/postgresql"
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+ASSETS_DIR="$SCRIPT_DIR/../../assets/tools"
echo "Detecting OS and architecture..."
OS_TYPE="$(uname -s)"
ARCH="$(uname -m)"
@@
if [[ "$OS_TYPE" == "Linux" ]]; then
if [[ "$ARCH" == "x86_64" ]]; then
- cp -r ../../assets/tools/amd64/postgresql/postgresql-$v/bin/* "$TARGET_DIR/"
- elif [[ "$ARCH" == "aarch64" ]]; then
- cp -r ../../assets/tools/arm64/postgresql/postgresql-$v/bin/* "$TARGET_DIR/"
+ sudo cp -r "$ASSETS_DIR/amd64/postgresql/postgresql-$v/bin/"* "$TARGET_DIR/"
+ elif [[ "$ARCH" == "aarch64" || "$ARCH" == "arm64" ]]; then
+ sudo cp -r "$ASSETS_DIR/arm64/postgresql/postgresql-$v/bin/"* "$TARGET_DIR/"
+ else
+ echo "Unsupported Linux architecture: $ARCH"
+ exit 1
fi
@@
- find "$TARGET_DIR" -type f -exec chmod +x {} +
+ sudo find "$TARGET_DIR" -type f -exec chmod +x {} +
@@
- if [[ -d "../../assets/tools" ]]; then
+ if [[ -d "$ASSETS_DIR" ]]; then
install_pg_binaries
+ else
+ echo "Missing PostgreSQL assets directory: $ASSETS_DIR"
+ exit 1
 fi
Also applies to: 17–21, 40, 52–54
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/tests/requirements.sh` at line 14, The mkdir usage with sudo creates
$TARGET_DIR but later cp and chmod run without sudo and the assets path is
relative and CWD-dependent; update the script to compute an absolute assets path
(e.g., ASSETS_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")"/../../assets/tools &&
pwd)") and use that instead of ../../assets/tools, always perform filesystem
operations against $TARGET_DIR using sudo when required (apply sudo to cp and
chmod or validate writability before attempting non-sudo operations), and add
explicit architecture validation/error handling around the ARCH detection logic
(fail loudly with a clear error message if architecture is unsupported instead
of silently skipping installation). Ensure references to TARGET_DIR, ASSETS_DIR
(or the computed script directory), the mkdir -p/cp/chmod commands, and the ARCH
detection block are updated accordingly.
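The "fail loudly on unsupported architectures" step asked for above can be sketched as a small helper. The amd64/arm64 subdirectory names mirror the assets layout in the proposed fix; the helper name and everything else here is an assumption, not code from the PR.

```shell
# Map the machine architecture to an assets subdirectory, failing loudly
# on anything unrecognized instead of silently skipping the install.
arch_to_subdir() {
  case "$1" in
    x86_64)        echo amd64 ;;
    aarch64|arm64) echo arm64 ;;
    *) echo "Unsupported Linux architecture: $1" >&2; return 1 ;;
  esac
}

arch_to_subdir "$(uname -m)"
```

Because the helper returns non-zero on the wildcard branch, a caller running under set -e (or checking the exit status) aborts instead of continuing with an empty source directory.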
PG_SRC="$(brew --prefix postgresql@$v)/bin" 2>/dev/null || true

if [[ ! -d "$PG_SRC" ]]; then
    echo "PostgreSQL $v not installed via Homebrew. Trying to install..."
🧩 Analysis chain
🏁 Script executed:
cat -n scripts/tests/requirements.sh | head -50
Repository: Portabase/agent
Length of output: 2144
🏁 Script executed:
# Test the Bash semantics described in the review
# When brew --prefix fails with || true, does the result + "/bin" give "/bin"?
python3 << 'EOF'
# Simulate the issue
import subprocess
import os
# Test 1: What does command substitution give us with || true?
test_cases = [
'echo "$(false 2>/dev/null || true)/bin"', # false returns nothing
'echo "$(echo -n "" 2>/dev/null || true)/bin"', # empty string case
'echo "$(brew --prefix postgresql@16 2>/dev/null || true)/bin" || echo "brew not available"' # real case
]
for test in test_cases:
print(f"Test: {test}")
result = subprocess.run(test, shell=True, capture_output=True, text=True)
output = result.stdout.rstrip('\n')
print(f"Result: '{output}'")
print(f"Is '/bin'? {output == '/bin'}")
print()
EOF
Repository: Portabase/agent
Length of output: 348
🏁 Script executed:
# Now examine the actual file content around the flagged lines
sed -n '20,40p' scripts/tests/requirements.sh
Repository: Portabase/agent
Length of output: 893
Prevent accidental /bin copy when Homebrew formula lookup fails.
On line 24, if brew --prefix postgresql@$v fails, PG_SRC evaluates to /bin. Since the system /bin directory exists, the condition on line 26 passes, and line 36 copies system binaries instead of PostgreSQL binaries.
🔧 Proposed fix
- PG_SRC="$(brew --prefix postgresql@$v)/bin" 2>/dev/null || true
+ PG_PREFIX="$(brew --prefix postgresql@$v 2>/dev/null || true)"
+ PG_SRC="${PG_PREFIX:+$PG_PREFIX/bin}"
- if [[ ! -d "$PG_SRC" ]]; then
+ if [[ -z "$PG_SRC" || ! -d "$PG_SRC" ]]; then
echo "PostgreSQL $v not installed via Homebrew. Trying to install..."
if ! brew install postgresql@$v; then
echo "PostgreSQL $v not available, skipping..."
continue # move on to the next version
fi
- PG_SRC="$(brew --prefix postgresql@$v)/bin"
+ PG_PREFIX="$(brew --prefix postgresql@$v)"
+ PG_SRC="$PG_PREFIX/bin"
 fi
Also applies to: 32, 35–36
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/tests/requirements.sh` around lines 24 - 27, PG_SRC is being set from
brew --prefix which can fail and return an empty value causing PG_SRC to
fallback to /bin and trigger a bad copy; update the logic that sets and
validates PG_SRC (and other similar variables) to first capture brew --prefix
postgresql@$v into a temp var, verify it's non-empty and that "$PREFIX/bin/psql"
(or the "$PREFIX/bin" directory) exists before assigning PG_SRC or proceeding to
the copy step, and if the check fails log the "not installed" message and skip
the copy/install; apply the same validation pattern to the other occurrences
referenced (the similar lookups that lead to copying system /bin).
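The pitfall itself reproduces with plain shell, no Homebrew needed: a failing command substitution expands to the empty string, so only the appended suffix survives. Here false stands in for a missing brew formula lookup.

```shell
# A failed substitution expands to "", so the "/bin" suffix alone survives
# and the resulting path points at the system /bin directory, which exists.
PG_SRC="$(false 2>/dev/null || true)/bin"
echo "PG_SRC=$PG_SRC"            # prints PG_SRC=/bin
if [ -d "$PG_SRC" ]; then
  echo "directory check passes"  # so the not-installed branch is skipped
fi
```

This is why the fix captures the prefix into its own variable and uses ${PREFIX:+$PREFIX/bin}: an empty prefix then yields an empty path, which fails the directory check as intended.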
♻️ Duplicate comments (2)
scripts/tests/requirements.sh (2)
35-44: ⚠️ Potential issue | 🔴 Critical
brew --prefix failure causes the system /bin to be copied.
If brew --prefix postgresql@$v fails (formula not installed), the command substitution returns an empty string, making PG_SRC="/bin". Since /bin always exists, the directory check on line 37 passes, and line 47 copies system binaries into the PostgreSQL directory.
🔧 Proposed fix with proper validation
-    PG_SRC="$(brew --prefix postgresql@$v)/bin" 2>/dev/null || true
-
-    if [[ ! -d "$PG_SRC" ]]; then
+    PG_PREFIX="$(brew --prefix postgresql@$v 2>/dev/null || true)"
+    PG_SRC="${PG_PREFIX:+$PG_PREFIX/bin}"
+
+    if [[ -z "$PG_SRC" || ! -d "$PG_SRC" ]]; then
         echo "PostgreSQL $v not installed via Homebrew. Trying to install..."
         if ! brew install postgresql@$v; then
             echo "PostgreSQL $v not available, skipping..."
             continue
         fi
-        PG_SRC="$(brew --prefix postgresql@$v)/bin"
+        PG_PREFIX="$(brew --prefix postgresql@$v)"
+        PG_SRC="$PG_PREFIX/bin"
     fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/tests/requirements.sh` around lines 35 - 44, The PG_SRC assignment uses command substitution that can yield an empty string and fall back to "/bin", so change the logic around PG_SRC/brew --prefix: first capture the brew prefix into a separate variable (e.g., PREFIX="$(brew --prefix postgresql@$v 2>/dev/null || true)"), validate that PREFIX is non-empty and that "$PREFIX/bin" exists, and only then set PG_SRC="$PREFIX/bin"; if PREFIX is empty or "$PREFIX/bin" doesn't exist, treat the formula as not installed and run the brew install/postgres install branch (the code paths around PG_SRC and the directory check should reference PG_SRC and fail-safe conditions so you never copy from /bin).
17-24: ⚠️ Potential issue | 🔴 Critical
Relative paths are CWD-dependent and will break when the script is invoked from different directories.
Lines 18-20 use ../../assets/tools/..., which resolves relative to the current working directory, not the script's location. This causes silent failures in CI or when invoked from the repository root.
🔧 Proposed fix using script-relative paths
Add this near the top of the script (after line 7):
 ARCH="$(uname -m)"
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+ASSETS_DIR="$SCRIPT_DIR/../../assets/tools"
Then update the path references inside the function:
 if [[ "$OS_TYPE" == "Linux" ]]; then
     if [[ "$ARCH" == "x86_64" ]]; then
-        SRC_DIR="../../assets/tools/amd64/postgresql/postgresql-$v/bin"
+        SRC_DIR="$ASSETS_DIR/amd64/postgresql/postgresql-$v/bin"
     elif [[ "$ARCH" == "aarch64" ]]; then
-        SRC_DIR="../../assets/tools/arm64/postgresql/postgresql-$v/bin"
+        SRC_DIR="$ASSETS_DIR/arm64/postgresql/postgresql-$v/bin"
     else
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/tests/requirements.sh` around lines 17 - 24, The relative asset paths in the script rely on the current working directory and should be made script-relative: add logic near the top to compute the script directory (e.g., set a SCRIPT_DIR variable using the script's own path) and then update the SRC_DIR assignments (the branches that set SRC_DIR for ARCH == "x86_64" and ARCH == "aarch64") to build paths off that SCRIPT_DIR instead of "../../assets/..."; keep the same ARCH checks and error branch, only replace the hardcoded relative prefixes with the script-rooted path.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@scripts/tests/requirements.sh`:
- Around line 35-44: The PG_SRC assignment uses command substitution that can
yield an empty string and fall back to "/bin", so change the logic around
PG_SRC/brew --prefix: first capture the brew prefix into a separate variable
(e.g., PREFIX="$(brew --prefix postgresql@$v 2>/dev/null || true)"), validate
that PREFIX is non-empty and that "$PREFIX/bin" exists, and only then set
PG_SRC="$PREFIX/bin"; if PREFIX is empty or "$PREFIX/bin" doesn't exist, treat
the formula as not installed and run the brew install/postgres install branch
(the code paths around PG_SRC and the directory check should reference PG_SRC
and fail-safe conditions so you never copy from /bin).
- Around line 17-24: The relative asset paths in the script rely on the current
working directory and should be made script-relative: add logic near the top to
compute the script directory (e.g., set a SCRIPT_DIR variable using the script's
own path) and then update the SRC_DIR assignments (the branches that set SRC_DIR
for ARCH == "x86_64" and ARCH == "aarch64") to build paths off that SCRIPT_DIR
instead of "../../assets/..."; keep the same ARCH checks and error branch, only
replace the hardcoded relative prefixes with the script-rooted path.
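Both duplicate comments converge on script-relative path resolution, which can be verified end to end: a script that derives its own directory prints the same path no matter where it is invoked from. The temp layout below is purely illustrative.

```shell
# Create a throwaway script that resolves its own location, then run it
# from an unrelated working directory: the printed path tracks the script
# file, not the caller's CWD.
tmp="$(cd "$(mktemp -d)" && pwd)"
mkdir -p "$tmp/scripts/tests"
cat > "$tmp/scripts/tests/where.sh" <<'EOF'
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
echo "$SCRIPT_DIR"
EOF
(cd / && bash "$tmp/scripts/tests/where.sh")
rm -rf "$tmp"
```

An ASSETS_DIR built from SCRIPT_DIR therefore resolves identically whether the script is launched from the repo root, CI, or its own directory.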
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 36c8da68-c743-4f9e-a02d-0ff778ba6ed3
📒 Files selected for processing (1)
scripts/tests/requirements.sh
Codecov Report
❌ Patch coverage is
📢 Thoughts on this report? Let us know!
Summary by CodeRabbit
New Features
Tests
Chores