10 changes: 5 additions & 5 deletions .claude/skills/review-pr/SKILL.md
@@ -59,10 +59,10 @@ gh pr diff <number> --repo <owner/repo>
For each specific issue, post a comment on the exact file and line:

```bash
gh api -X POST -H "Accept: application/vnd.github+json" "repos/<owner/repo>/pulls/<number>/comments" -f body=$'Your comment\n\n— Claude Code' -f path="path/to/file" -F line=<line_number> -f side="RIGHT" -f commit_id="<headRefOid>"
gh api -X POST -H "Accept: application/vnd.github+json" "repos/<owner/repo>/pulls/<number>/comments" -f body="Your comment<br><br>— Claude Code" -f path="path/to/file" -F line=<line_number> -f side="RIGHT" -f commit_id="<headRefOid>"
```

**The command must stay on a single bash line.** Use `$'...'` quoting for the `-f body=` value, with `\n` for line breaks. Never use `<br>` — it renders as literal text inside code blocks and suggestion blocks.
**The command must stay on a single bash line.** Never use newlines in bash commands — use `<br>` for line breaks in comment bodies. Never put `<br>` inside code blocks or suggestion blocks — use `\n` within `$'...'` quoting only for content inside fenced code blocks.

Each inline comment must:
- Be short and direct — say what's wrong, why it's wrong, and how to fix it in 1-3 sentences
@@ -76,7 +76,7 @@ Each inline comment must:
Only suggest when you can show the exact replacement. For architectural or design issues, just describe the problem.
Example with a suggestion block:
```bash
gh api ... -f body=$'Missing the shared-guidelines update command.\n\n```suggestion\n/plugin update shared-guidelines@scality-agent-hub\n/plugin update scality-skills@scality-agent-hub\n```\n\n— Claude Code' ...
gh api ... -f body=$'Missing the shared-guidelines update command.<br><br>```suggestion\n/plugin update shared-guidelines@scality-agent-hub\n/plugin update scality-skills@scality-agent-hub\n```<br><br>— Claude Code' ...
```
- Escape single quotes inside `$'...'` as `\'` (e.g., `don\'t`)
- End with: `— Claude Code`
@@ -86,10 +86,10 @@ Use the line number from the **new version** of the file (the line number you'd
#### Part B: Summary comment

```bash
gh pr comment <number> --repo <owner/repo> --body $'LGTM\n\nReview by Claude Code'
gh pr comment <number> --repo <owner/repo> --body "LGTM<br><br>Review by Claude Code"
```

**The command must stay on a single bash line.** Use `$'...'` quoting with `\n` for line breaks.
**The command must stay on a single bash line.** Never use newlines in bash commands — use `<br>` for line breaks in comment bodies. Never put `<br>` inside code blocks or suggestion blocks.

Do not describe or summarize the PR. For each issue, state the problem on one line, then list one or more suggestions below it:

7 changes: 7 additions & 0 deletions .github/workflows/dependency-review.yaml
@@ -14,3 +14,10 @@ jobs:

- name: 'Dependency Review'
uses: actions/dependency-review-action@v4
with:
# aws-sdk v2 is intentionally used as a devDependency because the
# fix-missing-replication-permissions script runs inside the vault
# container of older S3C versions (pre-9.5.2) where only v2 is
# available. This low-severity advisory is informational (no patch
# exists, v2 is end-of-life).
allow-ghsas: GHSA-j965-2qgj-vjmq
3 changes: 2 additions & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "s3utils",
"version": "1.17.4",
"version": "1.17.5",
"engines": {
"node": ">= 22"
},
@@ -55,6 +55,7 @@
},
"devDependencies": {
"@aws-sdk/client-iam": "^3.962.0",
"aws-sdk": "^2.1692.0",
"@scality/eslint-config-scality": "scality/Guidelines#8.3.0",
"@sinonjs/fake-timers": "^14.0.0",
"eslint": "^9.14.0",
221 changes: 217 additions & 4 deletions replicationAudit/README.md
@@ -1,12 +1,15 @@
# TL;DR Complete Workflow Example

Here's a complete example running the two scripts and audit the IAM policies used by CRR:
Here's a complete example running the three scripts and auditing the IAM policies used by CRR.

Use the fix-missing-replication-permissions.js script (step 6) to correct any missing permissions found by the check script. If the fix script fails with "missing ownerDisplayName", re-run check-replication-permissions.js — this field was added in s3utils 1.17.5.

From your local machine: copy scripts to the supervisor

```bash
scp replicationAudit/list-buckets-with-replication.sh root@<supervisor-ip>:/root/
scp replicationAudit/check-replication-permissions.js root@<supervisor-ip>:/root/
scp replicationAudit/fix-missing-replication-permissions.js root@<supervisor-ip>:/root/
```

Connect to the supervisor
@@ -26,6 +29,8 @@ ansible -i env/$ENV_DIR/inventory runners_s3[0] -m copy \
-a 'src=/root/list-buckets-with-replication.sh dest=/root/'
ansible -i env/$ENV_DIR/inventory runners_s3[0] -m copy \
-a 'src=/root/check-replication-permissions.js dest={{ env_host_logs}}/scality-vault{{ container_name_suffix | default("")}}/logs'
ansible -i env/$ENV_DIR/inventory runners_s3[0] -m copy \
-a 'src=/root/fix-missing-replication-permissions.js dest={{ env_host_logs}}/scality-vault{{ container_name_suffix | default("")}}/logs'

# Step 2: Run list-buckets-with-replication.sh
ansible -i env/$ENV_DIR/inventory runners_s3[0] -m shell \
@@ -41,7 +46,7 @@ ansible -i env/$ENV_DIR/inventory md1-cluster1 -m shell \
LEADER_IP=<leader-ip-from-step-3>

ansible -i env/$ENV_DIR/inventory runners_s3[0] -m shell \
-a "mv /root/buckets-with-replication.json {{ env_host_logs}}/scality-vault{{ container_name_suffix | default("")}}/logs && \
-a "cp /root/buckets-with-replication.json {{ env_host_logs}}/scality-vault{{ container_name_suffix | default("")}}/logs && \
ctrctl exec scality-vault{{ container_name_suffix | default("")}} node /logs/check-replication-permissions.js \
/logs/buckets-with-replication.json $LEADER_IP /logs/missing.json"

@@ -50,12 +55,35 @@ ansible -i env/$ENV_DIR/inventory runners_s3[0] -m shell \
-a 'cat {{ env_host_logs}}/scality-vault{{ container_name_suffix | default("")}}/logs/missing.json' \
| grep -v CHANGED | tee /root/replicationAudit_missing.json

# Step 6: Clean up
# Step 6: Fix missing permissions
# Copy admin credentials and missing.json to vault container and run inside it
ansible -i env/$ENV_DIR/inventory runners_s3[0] -m copy \
-a "src=/srv/scality/s3/s3-offline/federation/env/$ENV_DIR/vault/admin-clientprofile/admin1.json dest={{ env_host_logs}}/scality-vault{{ container_name_suffix | default("")}}/logs"

ansible -i env/$ENV_DIR/inventory runners_s3[0] -m copy \
-a 'src=/root/replicationAudit_missing.json dest={{ env_host_logs}}/scality-vault{{ container_name_suffix | default("")}}/logs/missing.json'

ansible -i env/$ENV_DIR/inventory runners_s3[0] -m shell \
-a 'ctrctl exec scality-vault{{ container_name_suffix | default("")}} env NODE_PATH=/home/scality/vault/node_modules node /logs/fix-missing-replication-permissions.js \
/logs/missing.json localhost /logs/admin1.json /logs/replication-fix-results.json'

# Retrieve fix results
ansible -i env/$ENV_DIR/inventory runners_s3[0] -m shell \
-a 'cat {{ env_host_logs}}/scality-vault{{ container_name_suffix | default("")}}/logs/replication-fix-results.json' \
| grep -v CHANGED | tee /root/replicationAudit_fix_results.json

# Step 7: Re-run check to verify fixes (repeat steps 3-5)

# Step 8: Clean up remote files
ansible -i env/$ENV_DIR/inventory runners_s3[0] -m shell \
-a 'rm -f {{ env_host_logs}}/scality-vault{{ container_name_suffix | default("")}}/logs/missing.json \
{{ env_host_logs}}/scality-vault{{ container_name_suffix | default("")}}/logs/check-replication-permissions.js \
{{ env_host_logs}}/scality-vault{{ container_name_suffix | default("")}}/logs/fix-missing-replication-permissions.js \
{{ env_host_logs}}/scality-vault{{ container_name_suffix | default("")}}/logs/buckets-with-replication.json \
/root/list-buckets-with-replication.sh'
{{ env_host_logs}}/scality-vault{{ container_name_suffix | default("")}}/logs/replication-fix-results.json \
{{ env_host_logs}}/scality-vault{{ container_name_suffix | default("")}}/logs/admin1.json \
/root/list-buckets-with-replication.sh \
/root/buckets-with-replication.json'
```

# Scripts Documentation
@@ -285,6 +313,13 @@ node check-replication-permissions.js [input-file] [leader-ip] [output-file] [--

### Output Format

> **Breaking change (since 1.17.5):** The output now includes `ownerDisplayName`
> in each result entry. This field is required by
> `fix-missing-replication-permissions.js` to identify accounts without an
> extra API call. If you ran `check-replication-permissions.js` on version
> 1.17.4 or earlier, **re-run it** to produce an output that
> `fix-missing-replication-permissions.js` can consume.

The script produces a JSON file with metadata and results. The `results` array
contains **only buckets missing the `s3:ReplicateObject` permission**.

@@ -310,6 +345,7 @@ contains **only buckets missing the `s3:ReplicateObject` permission**.
"results": [
{
"bucket": "bucket-old-1",
"ownerDisplayName": "testaccount",
"sourceRole": "arn:aws:iam::267390090509:role/crr-role-outdated",
"policies": [
{
@@ -461,3 +497,180 @@ Output saved to: /tmp/missing.json
**Script timeout**

- For many buckets, run directly on the S3 connector node via interactive SSH

---

## fix-missing-replication-permissions.js

Reads the output of `check-replication-permissions.js` and creates IAM policies
with `s3:ReplicateObject` for roles that are missing it.

The script applies **minimal changes**: one policy per bucket, with an explicit
Statement ID (`AllowReplicateObjectAuditFix`) so the policies are easily
identifiable later. Using one policy per bucket makes re-runs truly idempotent:
if the policy already exists, its document is guaranteed to be identical.

### Prerequisites

- Output from `check-replication-permissions.js` (missing.json)
- Vault admin credentials (`admin1.json` with `accessKey` and `secretKeyValue`).
Found on the supervisor at:
```
/srv/scality/s3/s3-offline/federation/env/<ENV_DIR>/vault/admin-clientprofile/admin1.json
```
- The script runs inside the vault container (`scality-vault`), which has `vaultclient` and `aws-sdk` pre-installed

### Usage

```bash
node fix-missing-replication-permissions.js <input-file> <vault-host> <admin-config> [output-file] [options]
```

| Argument | Default | Description |
|----------|---------|-------------|
| `input-file` | (required) | Path to missing.json from check script |
| `vault-host` | (required) | Vault admin host (use `localhost` when running inside the vault container) |
| `admin-config` | (required) | Path to admin credentials JSON |
| `output-file` | replication-fix-results.json | Output file path |
| `--iam-port <port>` | 8600 | Vault admin and IAM API port |
| `--https` | (not set) | Use HTTPS to connect to Vault |
| `--dry-run` | (not set) | Show what would be done without making changes |

### How It Works

1. **Reads** the missing permissions file and enriches each entry with parsed role fields
2. **Maps** account IDs to names using `ownerDisplayName` from the input (no API call)
3. For each bucket entry (grouped by account for credential reuse):
- **Generates** a temporary access key via vault admin API (15-minute auto-expiry)
- **Creates** one IAM policy per bucket with `s3:ReplicateObject`
- **Attaches** the policy to the role
- **Deletes** the temporary access key (falls back to auto-expiry on failure)
4. **Writes** results to the output file
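Step 3's per-account grouping can be sketched as follows. `groupByAccount` is a hypothetical helper, and the account-ID extraction assumes the standard role ARN layout shown elsewhere in this README:

```javascript
// Group missing-permission entries by account ID so that one temporary
// access key can be reused for all of an account's buckets.
function groupByAccount(entries) {
    const groups = new Map();
    for (const entry of entries) {
        // The account ID is the 5th colon-separated field of the role ARN:
        // arn:aws:iam::<accountId>:role/<roleName>
        const accountId = entry.sourceRole.split(':')[4];
        if (!groups.has(accountId)) groups.set(accountId, []);
        groups.get(accountId).push(entry);
    }
    return groups;
}
```

Grouping this way keeps key creation at one per account rather than one per bucket, which is why the example run below reports three policies created but only one key.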

### Policy Created

For each bucket, the script creates a policy named
`s3-replication-audit-fix-<bucketName>`:

```json
{
"Version": "2012-10-17",
"Statement": [{
"Sid": "AllowReplicateObjectAuditFix",
"Effect": "Allow",
"Action": "s3:ReplicateObject",
"Resource": "arn:aws:s3:::bucket-old-1/*"
}]
}
```
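Both the name and the document are derived purely from the bucket name, which is what makes regeneration deterministic. A sketch of that construction (assumed; it mirrors the examples above rather than the script's actual code):

```javascript
// Build the deterministic policy name and document for one bucket.
// Because both depend only on the bucket name, re-running the fix
// script regenerates byte-identical policies.
function buildFixPolicy(bucketName) {
    return {
        policyName: `s3-replication-audit-fix-${bucketName}`,
        document: {
            Version: '2012-10-17',
            Statement: [{
                Sid: 'AllowReplicateObjectAuditFix',
                Effect: 'Allow',
                Action: 's3:ReplicateObject',
                Resource: `arn:aws:s3:::${bucketName}/*`,
            }],
        },
    };
}
```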

### Output Format

```json
{
"metadata": {
"timestamp": "2026-02-23T20:35:00.000Z",
"durationMs": 65,
"inputFile": "missing.json",
"dryRun": false,
"counts": {
"totalBucketsProcessed": 3,
"totalBucketsFixed": 3,
"policiesCreated": 3,
"policiesAttached": 3,
"keysCreated": 1,
"keysDeleted": 1,
"errors": 0
}
},
"fixes": [
{
"accountId": "267390090509",
"accountName": "testaccount",
"roleName": "crr-role-outdated",
"roleArn": "arn:aws:iam::267390090509:role/crr-role-outdated",
"policyName": "s3-replication-audit-fix-bucket-old-1",
"policyArn": "arn:aws:iam::267390090509:policy/s3-replication-audit-fix-bucket-old-1",
"bucket": "bucket-old-1",
"status": "success"
},
{
"accountId": "267390090509",
"accountName": "testaccount",
"roleName": "crr-role-outdated",
"roleArn": "arn:aws:iam::267390090509:role/crr-role-outdated",
"policyName": "s3-replication-audit-fix-bucket-old-2",
"policyArn": "arn:aws:iam::267390090509:policy/s3-replication-audit-fix-bucket-old-2",
"bucket": "bucket-old-2",
"status": "success"
}
],
"errors": []
}
```

### Example Run

```
=== Fix Missing Replication Permissions ===
Input: missing.json
Output: replication-fix-results.json
Vault/IAM: 13.50.166.21:8600

Processing 3 bucket(s)

[1/3] Bucket "bucket-old-1" — role "crr-role-outdated" — account "testaccount"
Created policy "s3-replication-audit-fix-bucket-old-1"
Attached policy to role "crr-role-outdated"
[2/3] Bucket "bucket-old-2" — role "crr-role-outdated" — account "testaccount"
Created policy "s3-replication-audit-fix-bucket-old-2"
Attached policy to role "crr-role-outdated"
[3/3] Bucket "bucket-old-3" — role "crr-role-outdated" — account "testaccount"
Created policy "s3-replication-audit-fix-bucket-old-3"
Attached policy to role "crr-role-outdated"
Deleted temp key for account "testaccount" (267390090509)

=== Summary ===
Buckets processed: 3
Buckets fixed: 3
Policies created: 3
Policies attached: 3
Keys created: 1
Keys deleted: 1
Errors: 0
Duration: 0.7s
Output saved to: replication-fix-results.json

Done.
```

### Idempotency

The script is safe to run multiple times:

- Each bucket has its own policy, so if it already exists the document is
guaranteed to be identical — `EntityAlreadyExists` is a true no-op
- Attaching an already-attached policy is a no-op in IAM
- Temporary access keys auto-expire after 15 minutes even if deletion fails
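The `EntityAlreadyExists` handling can be sketched like this; the wrapper and its error-code check are assumptions based on the aws-sdk v2 convention, not the script's literal code:

```javascript
// Create the policy, treating EntityAlreadyExists as success: with one
// deterministic policy per bucket, an existing policy is identical by
// construction, so the conflict is safe to ignore.
async function createPolicyIdempotent(iam, policyName, document) {
    try {
        await iam.createPolicy({
            PolicyName: policyName,
            PolicyDocument: JSON.stringify(document),
        }).promise();
        return 'created';
    } catch (err) {
        if (err.code === 'EntityAlreadyExists') return 'already-exists';
        throw err; // anything else is a real failure
    }
}
```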

### Troubleshooting

**"No ownerDisplayName found for account"**

- The input file is missing `ownerDisplayName`. Re-run `check-replication-permissions.js`
to generate a fresh output that includes this field.

**"Failed to generate temp key"**

- Verify admin credentials in the config file
- Ensure vault admin API is reachable on the specified host and port

**IAM operation errors**

- Check that the IAM port is correct (default 8600, may differ per deployment)
- Verify the role still exists in vault

**"Failed to delete temp key"**

- Non-critical: the key auto-expires after 15 minutes
- The error is logged but does not prevent other operations
6 changes: 3 additions & 3 deletions replicationAudit/check-replication-permissions.js
@@ -421,7 +421,7 @@ async function main() {

// Process each bucket
for (let i = 0; i < buckets.length; i++) {
const { bucket, sourceRole } = buckets[i];
const { bucket, sourceRole, ownerDisplayName } = buckets[i];

if (!sourceRole) {
logProgress(i + 1, buckets.length, bucket, 'SKIP (no role)');
@@ -440,12 +440,12 @@ } else {
} else {
const reason = result.error || 's3:ReplicateObject';
logProgress(i + 1, buckets.length, bucket, `MISSING: ${reason}`);
results.push(result);
results.push({ ...result, ownerDisplayName });
stats.missing++;
}
} catch (e) {
logProgress(i + 1, buckets.length, bucket, `ERROR: ${e.message}`);
results.push({ bucket, sourceRole, error: e.message, policies: [] });
results.push({ bucket, sourceRole, ownerDisplayName, error: e.message, policies: [] });
stats.errors++;
}
}