From 1a3fbea0293f1290706fd6711392bb0070adc085 Mon Sep 17 00:00:00 2001 From: Andrei Smirnov Date: Wed, 11 Feb 2026 11:28:23 +0300 Subject: [PATCH 01/12] Scripts for asdb and replace-operator --- .gitignore | 2 + scripts/README.md | 23 + scripts/edit/replace-operator/README.md | 50 ++ scripts/edit/replace-operator/new-operator.sh | 348 ++++++++++ .../replace-operator/remaining-operator.sh | 343 ++++++++++ scripts/edit/replace-operator/test/.gitignore | 2 + scripts/edit/replace-operator/test/README.md | 27 + .../fixtures/.charon/charon-enr-private-key | 1 + .../test/fixtures/.charon/cluster-lock.json | 55 ++ .../replace-operator/test/fixtures/.env.test | 3 + .../test/fixtures/new-cluster-lock.json | 55 ++ .../test/fixtures/sample-asdb.json | 24 + .../test/test_replace_operator.sh | 596 ++++++++++++++++++ scripts/edit/vc/export_asdb.sh | 56 ++ scripts/edit/vc/import_asdb.sh | 56 ++ scripts/edit/vc/lodestar/export_asdb.sh | 119 ++++ scripts/edit/vc/lodestar/import_asdb.sh | 121 ++++ scripts/edit/vc/nimbus/export_asdb.sh | 118 ++++ scripts/edit/vc/nimbus/import_asdb.sh | 117 ++++ scripts/edit/vc/prysm/export_asdb.sh | 121 ++++ scripts/edit/vc/prysm/import_asdb.sh | 121 ++++ scripts/edit/vc/teku/export_asdb.sh | 118 ++++ scripts/edit/vc/teku/import_asdb.sh | 118 ++++ scripts/edit/vc/test/.gitignore | 9 + scripts/edit/vc/test/README.md | 34 + scripts/edit/vc/test/docker-compose.test.yml | 40 ++ .../fixtures/sample-slashing-protection.json | 38 ++ .../vc/test/fixtures/source-cluster-lock.json | 19 + .../vc/test/fixtures/target-cluster-lock.json | 19 + .../fixtures/validator_keys/keystore-0.json | 31 + .../fixtures/validator_keys/keystore-0.txt | 1 + scripts/edit/vc/test/test_lodestar_asdb.sh | 231 +++++++ scripts/edit/vc/test/test_nimbus_asdb.sh | 258 ++++++++ scripts/edit/vc/test/test_prysm_asdb.sh | 275 ++++++++ scripts/edit/vc/test/test_teku_asdb.sh | 211 +++++++ scripts/edit/vc/update-anti-slashing-db.sh | 233 +++++++ 36 files changed, 3993 insertions(+) create mode 
100644 scripts/README.md create mode 100644 scripts/edit/replace-operator/README.md create mode 100755 scripts/edit/replace-operator/new-operator.sh create mode 100755 scripts/edit/replace-operator/remaining-operator.sh create mode 100644 scripts/edit/replace-operator/test/.gitignore create mode 100644 scripts/edit/replace-operator/test/README.md create mode 100644 scripts/edit/replace-operator/test/fixtures/.charon/charon-enr-private-key create mode 100644 scripts/edit/replace-operator/test/fixtures/.charon/cluster-lock.json create mode 100644 scripts/edit/replace-operator/test/fixtures/.env.test create mode 100644 scripts/edit/replace-operator/test/fixtures/new-cluster-lock.json create mode 100644 scripts/edit/replace-operator/test/fixtures/sample-asdb.json create mode 100755 scripts/edit/replace-operator/test/test_replace_operator.sh create mode 100755 scripts/edit/vc/export_asdb.sh create mode 100755 scripts/edit/vc/import_asdb.sh create mode 100755 scripts/edit/vc/lodestar/export_asdb.sh create mode 100755 scripts/edit/vc/lodestar/import_asdb.sh create mode 100755 scripts/edit/vc/nimbus/export_asdb.sh create mode 100755 scripts/edit/vc/nimbus/import_asdb.sh create mode 100755 scripts/edit/vc/prysm/export_asdb.sh create mode 100755 scripts/edit/vc/prysm/import_asdb.sh create mode 100755 scripts/edit/vc/teku/export_asdb.sh create mode 100755 scripts/edit/vc/teku/import_asdb.sh create mode 100644 scripts/edit/vc/test/.gitignore create mode 100644 scripts/edit/vc/test/README.md create mode 100644 scripts/edit/vc/test/docker-compose.test.yml create mode 100644 scripts/edit/vc/test/fixtures/sample-slashing-protection.json create mode 100644 scripts/edit/vc/test/fixtures/source-cluster-lock.json create mode 100644 scripts/edit/vc/test/fixtures/target-cluster-lock.json create mode 100644 scripts/edit/vc/test/fixtures/validator_keys/keystore-0.json create mode 100644 scripts/edit/vc/test/fixtures/validator_keys/keystore-0.txt create mode 100755 
scripts/edit/vc/test/test_lodestar_asdb.sh create mode 100755 scripts/edit/vc/test/test_nimbus_asdb.sh create mode 100755 scripts/edit/vc/test/test_prysm_asdb.sh create mode 100755 scripts/edit/vc/test/test_teku_asdb.sh create mode 100755 scripts/edit/vc/update-anti-slashing-db.sh diff --git a/.gitignore b/.gitignore index be6f45cf..f1b9447c 100644 --- a/.gitignore +++ b/.gitignore @@ -11,5 +11,7 @@ cluster-lock.json data/ .idea .charon +!scripts/edit/replace-operator/test/fixtures/.charon/ +!scripts/edit/replace-operator/test/fixtures/.charon/* prometheus/prometheus.yml commit-boost/config.toml diff --git a/scripts/README.md b/scripts/README.md new file mode 100644 index 00000000..251e8c63 --- /dev/null +++ b/scripts/README.md @@ -0,0 +1,23 @@ +# Cluster Edit Automation Scripts + +Automation scripts for Charon distributed validator cluster editing operations. + +## Documentation + +- [Obol Replace-Operator Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/replace-operator) +- [Charon Edit Commands](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/) +- [EIP-3076 Slashing Protection Interchange Format](https://eips.ethereum.org/EIPS/eip-3076) + +## Scripts + +| Directory | Description | +|-----------|-------------| +| [edit/replace-operator/](edit/replace-operator/README.md) | Replace an operator in a cluster | +| [edit/vc/](edit/vc/) | Export/import anti-slashing database for various VCs | + +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables +- Docker and `docker compose` +- `jq` + diff --git a/scripts/edit/replace-operator/README.md b/scripts/edit/replace-operator/README.md new file mode 100644 index 00000000..c768c53a --- /dev/null +++ b/scripts/edit/replace-operator/README.md @@ -0,0 +1,50 @@ +# Replace-Operator Scripts + +Scripts to automate the [replace-operator workflow](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/replace-operator) for Charon distributed validators. 
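These scripts source a `.env` file at the repository root and require `NETWORK` and `VC` to be set; they also read `CHARON_VERSION`, falling back to a default image tag when it is unset. A minimal sketch of such a file, with example values you should replace with your own:

```shell
# .env at the repo root (example values; adjust to your cluster)
NETWORK=hoodi          # Ethereum network the cluster runs on
VC=vc-lodestar         # docker compose service name of your validator client
CHARON_VERSION=v1.8.2  # optional: charon image tag used by the scripts
```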
+ +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set +- Docker running +- `jq` installed + +## For Remaining Operators + +Automates the complete workflow for operators staying in the cluster: + +```bash +./scripts/edit/replace-operator/remaining-operator.sh \ + --new-enr "enr:-..." \ + --operator-index 2 +``` + +**Options:** +- `--new-enr <enr>` - ENR of the new operator (required) +- `--operator-index <index>` - Index of operator being replaced (required) +- `--skip-export` - Skip ASDB export if already done +- `--skip-ceremony` - Skip ceremony if cluster-lock already generated +- `--dry-run` - Preview without executing + +## For New Operators + +**Step 1:** Generate ENR and share with remaining operators: + +```bash +./scripts/edit/replace-operator/new-operator.sh --generate-enr +``` + +**Step 2:** After receiving cluster-lock from remaining operators: + +```bash +# curl -o received-cluster-lock.json https://example.com/cluster-lock.json +./scripts/edit/replace-operator/new-operator.sh --cluster-lock ./received-cluster-lock.json +``` + +**Options:** +- `--cluster-lock <path>` - Path to new cluster-lock.json +- `--generate-enr` - Generate new ENR private key +- `--dry-run` - Preview without executing + +## Testing + +See [test/README.md](test/README.md) for integration tests. diff --git a/scripts/edit/replace-operator/new-operator.sh b/scripts/edit/replace-operator/new-operator.sh new file mode 100755 index 00000000..17583d14 --- /dev/null +++ b/scripts/edit/replace-operator/new-operator.sh @@ -0,0 +1,348 @@ +#!/usr/bin/env bash + +# Replace-Operator Workflow Script for NEW Operator +# +# This script helps a new operator join an existing cluster after a +# replace-operator ceremony has been completed by the remaining operators. +# +# Prerequisites (before running this script): +# 1. Generate your ENR private key: +# docker run --rm -v "$(pwd)/.charon:/opt/charon/.charon" obolnetwork/charon:latest create enr +# +# 2. 
Share your ENR (printed to the console by the `create enr` command) +# with the remaining operators so they can run the ceremony. +# +# 3. Receive the new cluster-lock.json from the remaining operators after +# they complete the ceremony. +# +# The workflow: +# 1. Verify prerequisites (.charon folder, private key, cluster-lock) +# 2. Stop any running containers +# 3. Place the new cluster-lock.json (if not already in place) +# 4. Start charon and VC containers +# +# Usage: +# ./scripts/edit/replace-operator/new-operator.sh [OPTIONS] +# +# Options: +# --cluster-lock <path> Path to the new cluster-lock.json file (optional if already in .charon) +# --generate-enr Generate a new ENR private key if not present +# --dry-run Show what would be done without executing +# -h, --help Show this help message +# +# Examples: +# # Generate ENR first (share the output with remaining operators) +# ./scripts/edit/replace-operator/new-operator.sh --generate-enr +# +# # After receiving cluster-lock, join the cluster +# ./scripts/edit/replace-operator/new-operator.sh --cluster-lock ./received-cluster-lock.json + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)" +cd "$REPO_ROOT" + +# Default values +CLUSTER_LOCK_PATH="" +GENERATE_ENR=false +DRY_RUN=false + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/replace-operator/new-operator.sh [OPTIONS] + +Helps a new operator join an existing cluster after a replace-operator +ceremony has been completed by the remaining operators. 
+ +Options: + --cluster-lock <path> Path to the new cluster-lock.json file + --generate-enr Generate a new ENR private key if not present + --dry-run Show what would be done without executing + -h, --help Show this help message + +Examples: + # Step 1: Generate ENR and share with remaining operators + ./scripts/edit/replace-operator/new-operator.sh --generate-enr + + # Step 2: After receiving cluster-lock, join the cluster + ./scripts/edit/replace-operator/new-operator.sh --cluster-lock ./received-cluster-lock.json + +Prerequisites: + - .env file with NETWORK and VC variables set + - For --generate-enr: Docker installed + - For joining: .charon/charon-enr-private-key must exist +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --cluster-lock) + CLUSTER_LOCK_PATH="$2" + shift 2 + ;; + --generate-enr) + GENERATE_ENR=true + shift + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Replace-Operator Workflow - NEW OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Check for .env file +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + log_info "Copy from a sample: cp .env.sample.hoodi .env" + exit 1 +fi + +source .env + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if ! 
docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Configuration:" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +echo "" + +# Step 1: Handle ENR generation +if [ "$GENERATE_ENR" = true ]; then + log_step "Step 1: Generating ENR private key..." + + if [ -f .charon/charon-enr-private-key ]; then + log_warn "ENR private key already exists at .charon/charon-enr-private-key" + log_warn "Skipping generation to avoid overwriting existing key." + log_info "If you want to generate a new key, remove the existing file first." + else + mkdir -p .charon + + if [ "$DRY_RUN" = false ]; then + docker run --rm \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + create enr + else + echo " [DRY-RUN] docker run --rm ... charon create enr" + fi + + log_info "ENR private key generated" + fi + + if [ -f .charon/charon-enr-private-key ]; then + echo "" + echo "╔════════════════════════════════════════════════════════════════╗" + echo "║ SHARE YOUR ENR WITH THE REMAINING OPERATORS ║" + echo "╚════════════════════════════════════════════════════════════════╝" + echo "" + + # Extract and display the ENR + ENR=$(docker run --rm \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + enr 2>/dev/null || echo "") + + if [ -n "$ENR" ]; then + log_info "Your ENR:" + echo "" + echo "$ENR" + echo "" + fi + + log_info "Send this ENR to the remaining operators." + log_info "They will use it with: --new-enr \"<enr>\"" + log_info "" + log_info "After they complete the ceremony, run this script again with:" + log_info " ./scripts/edit/replace-operator/new-operator.sh --cluster-lock <path>" + fi + + exit 0 +fi + +# Step 2: Check prerequisites +log_step "Step 1: Checking prerequisites..." 
+ +if [ "$DRY_RUN" = false ]; then + if [ ! -d .charon ]; then + log_error ".charon directory not found" + log_info "First generate your ENR with: ./scripts/edit/replace-operator/new-operator.sh --generate-enr" + exit 1 + fi + + if [ ! -f .charon/charon-enr-private-key ]; then + log_error ".charon/charon-enr-private-key not found" + log_info "First generate your ENR with: ./scripts/edit/replace-operator/new-operator.sh --generate-enr" + exit 1 + fi +else + if [ ! -d .charon ]; then + log_warn "Would check for .charon directory (not found)" + fi + if [ ! -f .charon/charon-enr-private-key ]; then + log_warn "Would check for .charon/charon-enr-private-key (not found)" + fi +fi + +# Handle cluster-lock +if [ -n "$CLUSTER_LOCK_PATH" ]; then + if [ "$DRY_RUN" = false ] && [ ! -f "$CLUSTER_LOCK_PATH" ]; then + log_error "Cluster-lock file not found: $CLUSTER_LOCK_PATH" + exit 1 + fi + log_info "Using provided cluster-lock: $CLUSTER_LOCK_PATH" +elif [ -f .charon/cluster-lock.json ]; then + log_info "Using existing cluster-lock: .charon/cluster-lock.json" +elif [ "$DRY_RUN" = true ]; then + log_warn "Would need cluster-lock.json (not found)" +else + log_error "No cluster-lock.json found" + log_info "Provide the path to the new cluster-lock.json with:" + log_info " ./scripts/edit/replace-operator/new-operator.sh --cluster-lock <path>" + exit 1 +fi + +log_info "Prerequisites OK" + +echo "" + +# Step 3: Stop any running containers +log_step "Step 2: Stopping any running containers..." + +# Stop containers if running (ignore errors if not running) +run_cmd docker compose stop "$VC" charon 2>/dev/null || true + +log_info "Containers stopped" + +echo "" + +# Step 4: Install cluster-lock if provided +if [ -n "$CLUSTER_LOCK_PATH" ]; then + log_step "Step 3: Installing new cluster-lock..." 
+ + if [ -f .charon/cluster-lock.json ]; then + TIMESTAMP=$(date +%Y%m%d_%H%M%S) + mkdir -p ./backups + run_cmd cp .charon/cluster-lock.json "./backups/cluster-lock.json.$TIMESTAMP" + log_info "Old cluster-lock backed up to ./backups/cluster-lock.json.$TIMESTAMP" + fi + + run_cmd cp "$CLUSTER_LOCK_PATH" .charon/cluster-lock.json + log_info "New cluster-lock installed" +else + log_step "Step 3: Using existing cluster-lock..." + log_info "cluster-lock.json already in place" +fi + +echo "" + +# Step 5: Verify cluster-lock matches our ENR +log_step "Step 4: Verifying cluster-lock configuration..." + +if [ "$DRY_RUN" = false ] && [ -f .charon/cluster-lock.json ]; then + # Get our ENR + OUR_ENR=$(docker run --rm \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + enr 2>/dev/null || echo "") + + if [ -n "$OUR_ENR" ]; then + # Check if our ENR is in the cluster-lock + if grep -q "${OUR_ENR:0:50}" .charon/cluster-lock.json 2>/dev/null; then + log_info "Verified: Your ENR is present in the cluster-lock" + else + log_warn "Your ENR may not be in this cluster-lock." + log_warn "Make sure you received the correct cluster-lock from the remaining operators." + fi + fi + + # Show cluster info + NUM_VALIDATORS=$(jq '.distributed_validators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + NUM_OPERATORS=$(jq '.operators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + log_info "Cluster info: $NUM_VALIDATORS validator(s), $NUM_OPERATORS operator(s)" +fi + +echo "" + +# Step 6: Start containers +log_step "Step 5: Starting containers..." 
+ +run_cmd docker compose up -d charon "$VC" + +log_info "Containers started" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ New Operator Setup COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Cluster-lock installed in: .charon/cluster-lock.json" +log_info " - Containers started: charon, $VC" +echo "" +log_info "Next steps:" +log_info " 1. Wait for charon to sync with peers: docker compose logs -f charon" +log_info " 2. Verify VC is running: docker compose logs -f $VC" +log_info " 3. Monitor validator duties once synced" +echo "" +log_warn "Note: As a new operator, you do NOT have any slashing protection history." +log_warn "Your VC will start fresh. Ensure all remaining operators have completed" +log_warn "their replace-operator workflow before validators resume duties." +echo "" diff --git a/scripts/edit/replace-operator/remaining-operator.sh b/scripts/edit/replace-operator/remaining-operator.sh new file mode 100755 index 00000000..8c566061 --- /dev/null +++ b/scripts/edit/replace-operator/remaining-operator.sh @@ -0,0 +1,343 @@ +#!/usr/bin/env bash + +# Replace-Operator Workflow Script for REMAINING Operators +# +# This script automates the complete replace-operator workflow for operators +# who are staying in the cluster (continuing operators). +# +# The workflow: +# 1. Export the current anti-slashing database +# 2. Run the replace-operator ceremony (charon edit replace-operator) +# 3. Update the exported ASDB with new pubkeys +# 4. Stop charon and VC containers +# 5. Backup and replace the cluster-lock +# 6. Import the updated ASDB +# 7. 
Restart all containers +# +# Prerequisites: +# - .env file with NETWORK and VC variables set +# - .charon directory with cluster-lock.json and charon-enr-private-key +# - Docker and docker compose installed and running +# - VC container running (for initial export) +# +# Usage: +# ./scripts/edit/replace-operator/remaining-operator.sh [OPTIONS] +# +# Options: +# --new-enr <enr> ENR of the new operator (required) +# --operator-index <index> Index of the operator being replaced (required) +# --skip-export Skip ASDB export (if already exported) +# --skip-ceremony Skip ceremony (if cluster-lock already generated) +# --dry-run Show what would be done without executing +# -h, --help Show this help message +# +# Example: +# ./scripts/edit/replace-operator/remaining-operator.sh \ +# --new-enr "enr:-..." \ +# --operator-index 2 + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)" +cd "$REPO_ROOT" + +# Default values +NEW_ENR="" +OPERATOR_INDEX="" +SKIP_EXPORT=false +SKIP_CEREMONY=false +DRY_RUN=false + +# Output directories +ASDB_EXPORT_DIR="./asdb-export" +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/replace-operator/remaining-operator.sh [OPTIONS] + +Automates the complete replace-operator workflow for operators +who are staying in the cluster (continuing operators). 
+ +Options: + --new-enr <enr> ENR of the new operator (required) + --operator-index <index> Index of the operator being replaced (required) + --skip-export Skip ASDB export (if already exported) + --skip-ceremony Skip ceremony (if cluster-lock already generated) + --dry-run Show what would be done without executing + -h, --help Show this help message + +Example: + ./scripts/edit/replace-operator/remaining-operator.sh \ + --new-enr "enr:-..." \ + --operator-index 2 + +Prerequisites: + - .env file with NETWORK and VC variables set + - .charon directory with cluster-lock.json and charon-enr-private-key + - Docker and docker compose installed and running + - VC container running (for initial export) +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --new-enr) + NEW_ENR="$2" + shift 2 + ;; + --operator-index) + OPERATOR_INDEX="$2" + shift 2 + ;; + --skip-export) + SKIP_EXPORT=true + shift + ;; + --skip-ceremony) + SKIP_CEREMONY=true + shift + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +# Validate required arguments +if [ "$SKIP_CEREMONY" = false ]; then + if [ -z "$NEW_ENR" ]; then + log_error "Missing required argument: --new-enr" + echo "Use --help for usage information" + exit 1 + fi + if [ -z "$OPERATOR_INDEX" ]; then + log_error "Missing required argument: --operator-index" + echo "Use --help for usage information" + exit 1 + fi +fi + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Replace-Operator Workflow - REMAINING OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. 
Please create one with NETWORK and VC variables." + exit 1 +fi + +source .env + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! -d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! -f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if [ ! -f .charon/charon-enr-private-key ]; then + log_error ".charon/charon-enr-private-key not found" + exit 1 +fi + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +echo "" + +# Step 1: Export anti-slashing database +log_step "Step 1: Exporting anti-slashing database..." + +if [ "$SKIP_EXPORT" = true ]; then + log_warn "Skipping export (--skip-export specified)" + if [ ! -f "$ASDB_EXPORT_DIR/slashing-protection.json" ]; then + log_error "Cannot skip export: $ASDB_EXPORT_DIR/slashing-protection.json not found" + exit 1 + fi +else + # Check VC container is running (skip check in dry-run mode) + if [ "$DRY_RUN" = false ]; then + if ! docker compose ps "$VC" 2>/dev/null | grep -q Up; then + log_error "VC container ($VC) is not running. Start it first:" + log_error " docker compose up -d $VC" + exit 1 + fi + else + log_warn "Would check that $VC container is running" + fi + + mkdir -p "$ASDB_EXPORT_DIR" + + # Use env: a VC=... prefix expanded through run_cmd's "$@" is not treated as an assignment + run_cmd env VC="$VC" "$SCRIPT_DIR/../vc/export_asdb.sh" \ + --output-file "$ASDB_EXPORT_DIR/slashing-protection.json" + + log_info "Anti-slashing database exported to $ASDB_EXPORT_DIR/slashing-protection.json" +fi + +echo "" + +# Step 2: Run replace-operator ceremony +log_step "Step 2: Running replace-operator ceremony..." 
+ +if [ "$SKIP_CEREMONY" = true ]; then + log_warn "Skipping ceremony (--skip-ceremony specified)" + if [ ! -f "$OUTPUT_DIR/cluster-lock.json" ]; then + log_error "Cannot skip ceremony: $OUTPUT_DIR/cluster-lock.json not found" + exit 1 + fi +else + mkdir -p "$OUTPUT_DIR" + + log_info "Running: charon edit replace-operator" + log_info " Replacing operator index: $OPERATOR_INDEX" + log_info " New ENR: ${NEW_ENR:0:50}..." + + if [ "$DRY_RUN" = false ]; then + docker run --rm \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + edit replace-operator \ + --lock-file=/opt/charon/.charon/cluster-lock.json \ + --output-dir=/opt/charon/output \ + --operator-index="$OPERATOR_INDEX" \ + --new-enr="$NEW_ENR" + else + echo " [DRY-RUN] docker run --rm ... charon edit replace-operator ..." + fi + + log_info "New cluster-lock generated at $OUTPUT_DIR/cluster-lock.json" +fi + +echo "" + +# Step 3: Update ASDB pubkeys +log_step "Step 3: Updating anti-slashing database pubkeys..." + +run_cmd "$SCRIPT_DIR/../vc/update-anti-slashing-db.sh" \ + "$ASDB_EXPORT_DIR/slashing-protection.json" \ + ".charon/cluster-lock.json" \ + "$OUTPUT_DIR/cluster-lock.json" + +log_info "Anti-slashing database pubkeys updated" + +echo "" + +# Step 4: Stop containers +log_step "Step 4: Stopping charon and VC containers..." + +run_cmd docker compose stop "$VC" charon + +log_info "Containers stopped" + +echo "" + +# Step 5: Backup and replace cluster-lock +log_step "Step 5: Backing up and replacing cluster-lock..." 
TIMESTAMP=$(date +%Y%m%d_%H%M%S) +mkdir -p "$BACKUP_DIR" + +run_cmd cp .charon/cluster-lock.json "$BACKUP_DIR/cluster-lock.json.$TIMESTAMP" +log_info "Old cluster-lock backed up to $BACKUP_DIR/cluster-lock.json.$TIMESTAMP" + +run_cmd cp "$OUTPUT_DIR/cluster-lock.json" .charon/cluster-lock.json +log_info "New cluster-lock installed" + +echo "" + +# Step 6: Import updated ASDB +log_step "Step 6: Importing updated anti-slashing database..." + +run_cmd env VC="$VC" "$SCRIPT_DIR/../vc/import_asdb.sh" \ + --input-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database imported" + +echo "" + +# Step 7: Restart containers +log_step "Step 7: Restarting containers..." + +run_cmd docker compose up -d charon "$VC" + +log_info "Containers restarted" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Replace-Operator Workflow COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Old cluster-lock backed up to: $BACKUP_DIR/cluster-lock.json.$TIMESTAMP" +log_info " - New cluster-lock installed in: .charon/cluster-lock.json" +log_info " - Anti-slashing database updated and imported" +log_info " - Containers restarted: charon, $VC" +echo "" +log_info "Next steps:" +log_info " 1. Verify charon is syncing with peers: docker compose logs -f charon" +log_info " 2. Verify VC is running: docker compose logs -f $VC" +log_info " 3. 
Share the new cluster-lock.json with the NEW operator" +echo "" diff --git a/scripts/edit/replace-operator/test/.gitignore b/scripts/edit/replace-operator/test/.gitignore new file mode 100644 index 00000000..92fdcf73 --- /dev/null +++ b/scripts/edit/replace-operator/test/.gitignore @@ -0,0 +1,2 @@ +# Test artifacts - don't commit +data/ diff --git a/scripts/edit/replace-operator/test/README.md b/scripts/edit/replace-operator/test/README.md new file mode 100644 index 00000000..3cc3d53b --- /dev/null +++ b/scripts/edit/replace-operator/test/README.md @@ -0,0 +1,27 @@ +# Replace-Operator Integration Tests + +Integration tests for `new-operator.sh` and `remaining-operator.sh` scripts. + +## Overview + +These tests validate the replace-operator scripts without running actual Docker containers or the charon ceremony. The focus is on: + +- **Argument parsing and validation** +- **Prerequisite checks** (`.env`, `.charon/`, cluster-lock, ENR key) +- **Dry-run output** for all workflow steps +- **Error messages** for missing/invalid inputs + +## Running Tests + +```bash +./scripts/edit/replace-operator/test/test_replace_operator.sh +``` + +Expected output: All 21 tests should pass in under 5 seconds. 
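The tests avoid real containers by putting a stub `docker` binary first on `PATH`, so every Docker invocation is logged instead of executed. A simplified sketch of that technique (paths and the log prefix here mirror the test harness, but this is an illustrative standalone example, not the harness itself):

```shell
# Create a stub `docker` that records its arguments instead of running containers.
mock_bin="$(mktemp -d)/mock-bin"
mkdir -p "$mock_bin"
cat > "$mock_bin/docker" << 'EOF'
#!/usr/bin/env bash
echo "[MOCK-DOCKER] $*"
EOF
chmod +x "$mock_bin/docker"

# Any script run with this PATH resolves `docker` to the stub first.
out="$(PATH="$mock_bin:$PATH" docker compose up -d charon)"
echo "$out"   # [MOCK-DOCKER] compose up -d charon
```

The scripts under test are run unmodified; only the environment they see is altered, which is why the suite finishes in seconds.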
+ +## What's NOT Tested + +- **Actual Docker operations** - Docker commands are mocked +- **Charon ceremony** - Would require actual cluster coordination +- **ASDB export/import** - Tested separately in `scripts/edit/vc/test/` +- **Container orchestration** - Would require running services diff --git a/scripts/edit/replace-operator/test/fixtures/.charon/charon-enr-private-key b/scripts/edit/replace-operator/test/fixtures/.charon/charon-enr-private-key new file mode 100644 index 00000000..372a826b --- /dev/null +++ b/scripts/edit/replace-operator/test/fixtures/.charon/charon-enr-private-key @@ -0,0 +1 @@ +0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef diff --git a/scripts/edit/replace-operator/test/fixtures/.charon/cluster-lock.json b/scripts/edit/replace-operator/test/fixtures/.charon/cluster-lock.json new file mode 100644 index 00000000..d3be61be --- /dev/null +++ b/scripts/edit/replace-operator/test/fixtures/.charon/cluster-lock.json @@ -0,0 +1,55 @@ +{ + "cluster_definition": { + "name": "TestCluster", + "num_validators": 1, + "threshold": 3, + "operators": [ + { + "address": "0x1111111111111111111111111111111111111111", + "enr": "enr:-HW4QOldBest...operator0" + }, + { + "address": "0x2222222222222222222222222222222222222222", + "enr": "enr:-HW4QNewOper...operator1" + }, + { + "address": "0x3333333333333333333333333333333333333333", + "enr": "enr:-HW4QThird...operator2" + }, + { + "address": "0x4444444444444444444444444444444444444444", + "enr": "enr:-HW4QFourth...operator3" + } + ] + }, + "distributed_validators": [ + { + "distributed_public_key": "0xa9fb2be415318eb77709f7c378ab26025371c0b11213d93fd662ffdb06e77a05c7b04573a478e9d5c0c0fd98078965ef", + "public_shares": [ + "0xa3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", + "0x8afba316fdcf51e25a89e05e17377b8c72fd465c95346df4ed5694f295faa2ce061e14e579c5bc01a468dbbb191c58e8", + 
"0xa1aeebe0980509f5f8d8d424beb89004a967da8d8093248f64eb27c4ee5d22ba9c0f157025f551f47b31833f8bc585f8", + "0xa6c283c82cd0b65436861a149fb840849d06ded1dd8d2f900afb358c6a4232004309120f00a553cdccd8a43f6b743c82" + ] + } + ], + "operators": [ + { + "address": "0x1111111111111111111111111111111111111111", + "enr": "enr:-HW4QOldBest...operator0" + }, + { + "address": "0x2222222222222222222222222222222222222222", + "enr": "enr:-HW4QNewOper...operator1" + }, + { + "address": "0x3333333333333333333333333333333333333333", + "enr": "enr:-HW4QThird...operator2" + }, + { + "address": "0x4444444444444444444444444444444444444444", + "enr": "enr:-HW4QFourth...operator3" + } + ], + "lock_hash": "0xe9dbc87171f99bd8b6f348f6bf314291651933256e712ace299190f5e04e7795" +} diff --git a/scripts/edit/replace-operator/test/fixtures/.env.test b/scripts/edit/replace-operator/test/fixtures/.env.test new file mode 100644 index 00000000..b0d4457d --- /dev/null +++ b/scripts/edit/replace-operator/test/fixtures/.env.test @@ -0,0 +1,3 @@ +# Test environment for replace-operator tests +NETWORK=hoodi +VC=vc-lodestar diff --git a/scripts/edit/replace-operator/test/fixtures/new-cluster-lock.json b/scripts/edit/replace-operator/test/fixtures/new-cluster-lock.json new file mode 100644 index 00000000..187b3582 --- /dev/null +++ b/scripts/edit/replace-operator/test/fixtures/new-cluster-lock.json @@ -0,0 +1,55 @@ +{ + "cluster_definition": { + "name": "TestCluster", + "num_validators": 1, + "threshold": 3, + "operators": [ + { + "address": "0x5555555555555555555555555555555555555555", + "enr": "enr:-HW4QNewReplacement...newoperator0" + }, + { + "address": "0x2222222222222222222222222222222222222222", + "enr": "enr:-HW4QNewOper...operator1" + }, + { + "address": "0x3333333333333333333333333333333333333333", + "enr": "enr:-HW4QThird...operator2" + }, + { + "address": "0x4444444444444444444444444444444444444444", + "enr": "enr:-HW4QFourth...operator3" + } + ] + }, + "distributed_validators": [ + { + 
"distributed_public_key": "0xa9fb2be415318eb77709f7c378ab26025371c0b11213d93fd662ffdb06e77a05c7b04573a478e9d5c0c0fd98078965ef", + "public_shares": [ + "0xb11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111", + "0x8afba316fdcf51e25a89e05e17377b8c72fd465c95346df4ed5694f295faa2ce061e14e579c5bc01a468dbbb191c58e8", + "0xa1aeebe0980509f5f8d8d424beb89004a967da8d8093248f64eb27c4ee5d22ba9c0f157025f551f47b31833f8bc585f8", + "0xa6c283c82cd0b65436861a149fb840849d06ded1dd8d2f900afb358c6a4232004309120f00a553cdccd8a43f6b743c82" + ] + } + ], + "operators": [ + { + "address": "0x5555555555555555555555555555555555555555", + "enr": "enr:-HW4QNewReplacement...newoperator0" + }, + { + "address": "0x2222222222222222222222222222222222222222", + "enr": "enr:-HW4QNewOper...operator1" + }, + { + "address": "0x3333333333333333333333333333333333333333", + "enr": "enr:-HW4QThird...operator2" + }, + { + "address": "0x4444444444444444444444444444444444444444", + "enr": "enr:-HW4QFourth...operator3" + } + ], + "lock_hash": "0xf0000000000000000000000000000000000000000000000000000000000000000" +} diff --git a/scripts/edit/replace-operator/test/fixtures/sample-asdb.json b/scripts/edit/replace-operator/test/fixtures/sample-asdb.json new file mode 100644 index 00000000..3acc3886 --- /dev/null +++ b/scripts/edit/replace-operator/test/fixtures/sample-asdb.json @@ -0,0 +1,24 @@ +{ + "metadata": { + "interchange_format_version": "5", + "genesis_validators_root": "0x212f13fc4df078b6cb7db228f1c8307566dcecf900867401a92023d7ba99cb5f" + }, + "data": [ + { + "pubkey": "0xa3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", + "signed_blocks": [ + { + "slot": "81952", + "signing_root": "0x4ff6f743a43f3b4f95350831aeaf0a122a1a392922c45d804280284a69eb850b" + } + ], + "signed_attestations": [ + { + "source_epoch": "2560", + "target_epoch": "2561", + "signing_root": 
"0x587d6a4f59a58fe15bdac1234e3d51a1d5c8b2e0e3f5e0f2a1b3c4d5e6f7a8b9" + } + ] + } + ] +} diff --git a/scripts/edit/replace-operator/test/test_replace_operator.sh b/scripts/edit/replace-operator/test/test_replace_operator.sh new file mode 100755 index 00000000..66a6e86c --- /dev/null +++ b/scripts/edit/replace-operator/test/test_replace_operator.sh @@ -0,0 +1,596 @@ +#!/usr/bin/env bash + +# Integration test for replace-operator scripts (new-operator.sh & remaining-operator.sh) +# +# This test validates: +# - Argument parsing and validation +# - Prerequisite checks (.env, .charon/, cluster-lock, ENR key) +# - Dry-run output for all workflow steps +# - Error messages for missing inputs +# +# No actual Docker containers or ceremonies are run - all Docker commands are mocked. +# +# Usage: ./scripts/edit/replace-operator/test/test_replace_operator.sh + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." && pwd)" + +# Test directories +TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" +TEST_DATA_DIR="$SCRIPT_DIR/data" + +# Scripts under test +NEW_OPERATOR_SCRIPT="$REPO_ROOT/scripts/edit/replace-operator/new-operator.sh" +REMAINING_OPERATOR_SCRIPT="$REPO_ROOT/scripts/edit/replace-operator/remaining-operator.sh" + +# Test counters +TESTS_RUN=0 +TESTS_PASSED=0 +TESTS_FAILED=0 + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_test() { echo -e "${BLUE}[TEST]${NC} $1"; } + +# Create mock docker script that logs calls and returns success +setup_mock_docker() { + local mock_bin_dir="$TEST_DATA_DIR/mock-bin" + mkdir -p "$mock_bin_dir" + + # Create mock docker command + cat > "$mock_bin_dir/docker" << 'MOCK_DOCKER' +#!/usr/bin/env bash +# Mock docker for testing - logs all calls +echo "[MOCK-DOCKER] 
$*" >> "${MOCK_DOCKER_LOG:-/dev/null}" + +# Handle specific commands +case "$*" in + "info") + echo "Mock Docker info" + exit 0 + ;; + "compose"*"ps"*) + # Simulate container not running (for remaining-operator checks) + exit 0 + ;; + "compose"*"stop"*) + echo "[MOCK] Stopping containers" + exit 0 + ;; + "compose"*"up"*) + echo "[MOCK] Starting containers" + exit 0 + ;; + *"charon"*"enr"*) + # Return a mock ENR + echo "enr:-HW4QMockENRForTesting12345" + exit 0 + ;; + *"charon"*"create enr"*) + echo "[MOCK] Creating ENR" + exit 0 + ;; + *"charon"*"edit replace-operator"*) + echo "[MOCK] Running replace-operator ceremony" + exit 0 + ;; + *) + echo "[MOCK] Unhandled docker command: $*" + exit 0 + ;; +esac +MOCK_DOCKER + chmod +x "$mock_bin_dir/docker" + + # Export PATH with mock first + export PATH="$mock_bin_dir:$PATH" + export MOCK_DOCKER_LOG="$TEST_DATA_DIR/docker-calls.log" +} + +# Setup test working directory with fixtures +# Note: Scripts always cd to REPO_ROOT, so we must put test fixtures there +# We backup any existing files and restore them on cleanup +setup_test_env() { + rm -rf "$TEST_DATA_DIR" + mkdir -p "$TEST_DATA_DIR/backup" + + # Backup existing files in REPO_ROOT if they exist + if [ -f "$REPO_ROOT/.env" ]; then + cp "$REPO_ROOT/.env" "$TEST_DATA_DIR/backup/.env.bak" + fi + if [ -d "$REPO_ROOT/.charon" ]; then + # Only backup key files, not the whole directory + mkdir -p "$TEST_DATA_DIR/backup/.charon" + [ -f "$REPO_ROOT/.charon/cluster-lock.json" ] && \ + cp "$REPO_ROOT/.charon/cluster-lock.json" "$TEST_DATA_DIR/backup/.charon/" + [ -f "$REPO_ROOT/.charon/charon-enr-private-key" ] && \ + cp "$REPO_ROOT/.charon/charon-enr-private-key" "$TEST_DATA_DIR/backup/.charon/" + fi + + # Install test fixtures to REPO_ROOT + cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" + mkdir -p "$REPO_ROOT/.charon" + cp "$TEST_FIXTURES_DIR/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" + cp "$TEST_FIXTURES_DIR/.charon/charon-enr-private-key" "$REPO_ROOT/.charon/" + + 
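The mock above works by PATH shadowing: a fake `docker` executable placed first on `PATH` intercepts every call the scripts under test would make. A minimal sketch of the same technique, with an illustrative mocked command (the temp dir and echoed message are examples, not taken from the test harness):

```shell
# Shadow a real command by prepending a directory with a fake to PATH.
mock_dir=$(mktemp -d)
cat > "$mock_dir/docker" << 'EOF'
#!/usr/bin/env bash
# Fake docker: just report what it was asked to do.
echo "mock-docker: $*"
EOF
chmod +x "$mock_dir/docker"
PATH="$mock_dir:$PATH"

# Resolves to the mock, never the real docker binary.
result=$(docker info)
rm -rf "$mock_dir"
```

Because the mock is found first in the lookup, no real daemon is ever contacted; the harness additionally logs each call via `MOCK_DOCKER_LOG` so tests can inspect what would have run.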
# Create required directories + mkdir -p "$REPO_ROOT/backups" + mkdir -p "$REPO_ROOT/output" + mkdir -p "$REPO_ROOT/asdb-export" + + # Copy sample ASDB for remaining-operator tests + cp "$TEST_FIXTURES_DIR/sample-asdb.json" "$REPO_ROOT/asdb-export/slashing-protection.json" + + # Copy new cluster-lock fixture to output + cp "$TEST_FIXTURES_DIR/new-cluster-lock.json" "$REPO_ROOT/output/cluster-lock.json" + + # Setup mock docker + setup_mock_docker +} + +restore_repo_state() { + # Restore backed up files + if [ -f "$TEST_DATA_DIR/backup/.env.bak" ]; then + cp "$TEST_DATA_DIR/backup/.env.bak" "$REPO_ROOT/.env" + else + rm -f "$REPO_ROOT/.env" + fi + + if [ -d "$TEST_DATA_DIR/backup/.charon" ]; then + [ -f "$TEST_DATA_DIR/backup/.charon/cluster-lock.json" ] && \ + cp "$TEST_DATA_DIR/backup/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" + [ -f "$TEST_DATA_DIR/backup/.charon/charon-enr-private-key" ] && \ + cp "$TEST_DATA_DIR/backup/.charon/charon-enr-private-key" "$REPO_ROOT/.charon/" + fi + + # Clean up test artifacts + rm -f "$REPO_ROOT/asdb-export/slashing-protection.json" + rm -f "$REPO_ROOT/output/cluster-lock.json" +} + +cleanup() { + log_info "Cleaning up and restoring original state..." 
+ restore_repo_state +} + +trap cleanup EXIT + +# Test assertion helpers +assert_exit_code() { + local expected="$1" + local actual="$2" + local test_name="$3" + + if [ "$actual" -eq "$expected" ]; then + return 0 + else + log_error "Expected exit code $expected, got $actual in $test_name" + return 1 + fi +} + +assert_output_contains() { + local pattern="$1" + local output="$2" + local test_name="$3" + + if echo "$output" | grep -q -F -- "$pattern"; then + return 0 + else + log_error "Expected output to contain '$pattern' in $test_name" + echo "Actual output:" + echo "$output" | head -20 + return 1 + fi +} + +assert_output_not_contains() { + local pattern="$1" + local output="$2" + local test_name="$3" + + if echo "$output" | grep -q "$pattern"; then + log_error "Expected output NOT to contain '$pattern' in $test_name" + return 1 + else + return 0 + fi +} + +run_test() { + local test_name="$1" + local test_func="$2" + + TESTS_RUN=$((TESTS_RUN + 1)) + log_test "Running: $test_name" + + if $test_func; then + echo -e " ${GREEN}✓ PASSED${NC}" + TESTS_PASSED=$((TESTS_PASSED + 1)) + else + echo -e " ${RED}✗ FAILED${NC}" + TESTS_FAILED=$((TESTS_FAILED + 1)) + fi +} + +# ============================================================================ +# NEW-OPERATOR.SH TESTS +# ============================================================================ + +test_new_help() { + local output + local exit_code=0 + + output=$("$NEW_OPERATOR_SCRIPT" --help 2>&1) || exit_code=$? + + assert_exit_code 0 "$exit_code" "test_new_help" && \ + assert_output_contains "Usage:" "$output" "test_new_help" && \ + assert_output_contains "--cluster-lock" "$output" "test_new_help" && \ + assert_output_contains "--generate-enr" "$output" "test_new_help" && \ + assert_output_contains "--dry-run" "$output" "test_new_help" +} + +test_new_missing_env() { + local output + local exit_code=0 + + # Remove .env from REPO_ROOT + rm -f "$REPO_ROOT/.env" + + output=$("$NEW_OPERATOR_SCRIPT" 2>&1) || exit_code=$? 
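The test functions above capture a command's output and exit status without tripping `set -e`. The core of that pattern, shown here with a throwaway failing command rather than the real scripts:

```shell
# Under `set -e` a failing command would abort the script, so its exit
# status is caught on the same line with `|| exit_code=$?`.
set -euo pipefail

exit_code=0
output=$(bash -c 'echo "boom" >&2; exit 3' 2>&1) || exit_code=$?

# Execution continues here: stderr was folded into $output via 2>&1,
# and the failure code (3) landed in $exit_code instead of killing us.
```

This is why every test initializes `exit_code=0` first: the `|| exit_code=$?` branch only runs on failure, so a successful command leaves the default in place.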
+ + # Restore .env for other tests + cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" + + assert_exit_code 1 "$exit_code" "test_new_missing_env" && \ + assert_output_contains ".env file not found" "$output" "test_new_missing_env" +} + +test_new_missing_network() { + local output + local exit_code=0 + + echo "VC=vc-lodestar" > "$REPO_ROOT/.env" # Missing NETWORK + + output=$("$NEW_OPERATOR_SCRIPT" 2>&1) || exit_code=$? + + # Restore .env + cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" + + assert_exit_code 1 "$exit_code" "test_new_missing_network" && \ + assert_output_contains "NETWORK variable not set" "$output" "test_new_missing_network" +} + +test_new_missing_vc() { + local output + local exit_code=0 + + echo "NETWORK=hoodi" > "$REPO_ROOT/.env" # Missing VC + + output=$("$NEW_OPERATOR_SCRIPT" 2>&1) || exit_code=$? + + # Restore .env + cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" + + assert_exit_code 1 "$exit_code" "test_new_missing_vc" && \ + assert_output_contains "VC variable not set" "$output" "test_new_missing_vc" +} + +test_new_missing_charon_dir() { + local output + local exit_code=0 + + # Temporarily rename .charon + mv "$REPO_ROOT/.charon" "$REPO_ROOT/.charon.test.bak" + + output=$("$NEW_OPERATOR_SCRIPT" 2>&1) || exit_code=$? + + # Restore .charon + mv "$REPO_ROOT/.charon.test.bak" "$REPO_ROOT/.charon" + + assert_exit_code 1 "$exit_code" "test_new_missing_charon_dir" && \ + assert_output_contains ".charon directory not found" "$output" "test_new_missing_charon_dir" +} + +test_new_missing_enr_key() { + local output + local exit_code=0 + + rm -f "$REPO_ROOT/.charon/charon-enr-private-key" + + output=$("$NEW_OPERATOR_SCRIPT" 2>&1) || exit_code=$? 
+ + # Restore ENR key + cp "$TEST_FIXTURES_DIR/.charon/charon-enr-private-key" "$REPO_ROOT/.charon/" + + assert_exit_code 1 "$exit_code" "test_new_missing_enr_key" && \ + assert_output_contains "charon-enr-private-key not found" "$output" "test_new_missing_enr_key" +} + +test_new_invalid_cluster_lock_path() { + local output + local exit_code=0 + + output=$("$NEW_OPERATOR_SCRIPT" --cluster-lock /nonexistent/path.json 2>&1) || exit_code=$? + + assert_exit_code 1 "$exit_code" "test_new_invalid_cluster_lock_path" && \ + assert_output_contains "Cluster-lock file not found" "$output" "test_new_invalid_cluster_lock_path" +} + +test_new_dry_run_generate_enr() { + local output + local exit_code=0 + + output=$("$NEW_OPERATOR_SCRIPT" --generate-enr --dry-run 2>&1) || exit_code=$? + + assert_exit_code 0 "$exit_code" "test_new_dry_run_generate_enr" && \ + assert_output_contains "DRY-RUN MODE" "$output" "test_new_dry_run_generate_enr" && \ + assert_output_contains "Generating ENR" "$output" "test_new_dry_run_generate_enr" +} + +test_new_dry_run_join_cluster() { + local output + local exit_code=0 + + output=$("$NEW_OPERATOR_SCRIPT" --cluster-lock "$TEST_FIXTURES_DIR/new-cluster-lock.json" --dry-run 2>&1) || exit_code=$? + + assert_exit_code 0 "$exit_code" "test_new_dry_run_join_cluster" && \ + assert_output_contains "DRY-RUN MODE" "$output" "test_new_dry_run_join_cluster" && \ + assert_output_contains "Stopping" "$output" "test_new_dry_run_join_cluster" && \ + assert_output_contains "Installing new cluster-lock" "$output" "test_new_dry_run_join_cluster" && \ + assert_output_contains "Starting containers" "$output" "test_new_dry_run_join_cluster" +} + +test_new_unknown_argument() { + local output + local exit_code=0 + + output=$("$NEW_OPERATOR_SCRIPT" --invalid-flag 2>&1) || exit_code=$? 
+ + assert_exit_code 1 "$exit_code" "test_new_unknown_argument" && \ + assert_output_contains "Unknown argument" "$output" "test_new_unknown_argument" +} + +# ============================================================================ +# REMAINING-OPERATOR.SH TESTS +# ============================================================================ + +test_remaining_help() { + local output + local exit_code=0 + + output=$("$REMAINING_OPERATOR_SCRIPT" --help 2>&1) || exit_code=$? + + assert_exit_code 0 "$exit_code" "test_remaining_help" && \ + assert_output_contains "Usage:" "$output" "test_remaining_help" && \ + assert_output_contains "--new-enr" "$output" "test_remaining_help" && \ + assert_output_contains "--operator-index" "$output" "test_remaining_help" && \ + assert_output_contains "--skip-export" "$output" "test_remaining_help" && \ + assert_output_contains "--skip-ceremony" "$output" "test_remaining_help" +} + +test_remaining_missing_new_enr() { + local output + local exit_code=0 + + output=$("$REMAINING_OPERATOR_SCRIPT" --operator-index 0 2>&1) || exit_code=$? + + assert_exit_code 1 "$exit_code" "test_remaining_missing_new_enr" && \ + assert_output_contains "Missing required argument: --new-enr" "$output" "test_remaining_missing_new_enr" +} + +test_remaining_missing_operator_index() { + local output + local exit_code=0 + + output=$("$REMAINING_OPERATOR_SCRIPT" --new-enr "enr:-test123" 2>&1) || exit_code=$? + + assert_exit_code 1 "$exit_code" "test_remaining_missing_operator_index" && \ + assert_output_contains "Missing required argument: --operator-index" "$output" "test_remaining_missing_operator_index" +} + +test_remaining_missing_env() { + local output + local exit_code=0 + + rm -f "$REPO_ROOT/.env" + + output=$("$REMAINING_OPERATOR_SCRIPT" --new-enr "enr:-test" --operator-index 0 2>&1) || exit_code=$? 
+ + # Restore .env + cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" + + assert_exit_code 1 "$exit_code" "test_remaining_missing_env" && \ + assert_output_contains ".env file not found" "$output" "test_remaining_missing_env" +} + +test_remaining_missing_charon_dir() { + local output + local exit_code=0 + + mv "$REPO_ROOT/.charon" "$REPO_ROOT/.charon.test.bak" + + output=$("$REMAINING_OPERATOR_SCRIPT" --new-enr "enr:-test" --operator-index 0 2>&1) || exit_code=$? + + # Restore .charon + mv "$REPO_ROOT/.charon.test.bak" "$REPO_ROOT/.charon" + + assert_exit_code 1 "$exit_code" "test_remaining_missing_charon_dir" && \ + assert_output_contains ".charon directory not found" "$output" "test_remaining_missing_charon_dir" +} + +test_remaining_missing_cluster_lock() { + local output + local exit_code=0 + + rm -f "$REPO_ROOT/.charon/cluster-lock.json" + + output=$("$REMAINING_OPERATOR_SCRIPT" --new-enr "enr:-test" --operator-index 0 2>&1) || exit_code=$? + + # Restore cluster-lock + cp "$TEST_FIXTURES_DIR/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" + + assert_exit_code 1 "$exit_code" "test_remaining_missing_cluster_lock" && \ + assert_output_contains "cluster-lock.json not found" "$output" "test_remaining_missing_cluster_lock" +} + +test_remaining_missing_enr_key() { + local output + local exit_code=0 + + rm -f "$REPO_ROOT/.charon/charon-enr-private-key" + + output=$("$REMAINING_OPERATOR_SCRIPT" --new-enr "enr:-test" --operator-index 0 2>&1) || exit_code=$? 
+ + # Restore ENR key + cp "$TEST_FIXTURES_DIR/.charon/charon-enr-private-key" "$REPO_ROOT/.charon/" + + assert_exit_code 1 "$exit_code" "test_remaining_missing_enr_key" && \ + assert_output_contains "charon-enr-private-key not found" "$output" "test_remaining_missing_enr_key" +} + +test_remaining_dry_run_full_workflow() { + local output + local exit_code=0 + + # Use --skip-export and --skip-ceremony to avoid Docker dependencies + output=$("$REMAINING_OPERATOR_SCRIPT" \ + --new-enr "enr:-HW4QTestNewOperator123456789" \ + --operator-index 0 \ + --skip-export \ + --skip-ceremony \ + --dry-run 2>&1) || exit_code=$? + + assert_exit_code 0 "$exit_code" "test_remaining_dry_run_full_workflow" && \ + assert_output_contains "DRY-RUN MODE" "$output" "test_remaining_dry_run_full_workflow" && \ + assert_output_contains "Updating anti-slashing database pubkeys" "$output" "test_remaining_dry_run_full_workflow" && \ + assert_output_contains "Stopping" "$output" "test_remaining_dry_run_full_workflow" && \ + assert_output_contains "Backing up" "$output" "test_remaining_dry_run_full_workflow" && \ + assert_output_contains "Importing" "$output" "test_remaining_dry_run_full_workflow" && \ + assert_output_contains "Restarting" "$output" "test_remaining_dry_run_full_workflow" +} + +test_remaining_skip_export_missing_asdb() { + local output + local exit_code=0 + + rm -f "$REPO_ROOT/asdb-export/slashing-protection.json" + + output=$("$REMAINING_OPERATOR_SCRIPT" \ + --new-enr "enr:-test" \ + --operator-index 0 \ + --skip-export \ + --dry-run 2>&1) || exit_code=$? 
+ + # Restore ASDB + cp "$TEST_FIXTURES_DIR/sample-asdb.json" "$REPO_ROOT/asdb-export/slashing-protection.json" + + assert_exit_code 1 "$exit_code" "test_remaining_skip_export_missing_asdb" && \ + assert_output_contains "Cannot skip export" "$output" "test_remaining_skip_export_missing_asdb" +} + +test_remaining_skip_ceremony_missing_output() { + local output + local exit_code=0 + + rm -f "$REPO_ROOT/output/cluster-lock.json" + + output=$("$REMAINING_OPERATOR_SCRIPT" \ + --new-enr "enr:-test" \ + --operator-index 0 \ + --skip-ceremony \ + --dry-run 2>&1) || exit_code=$? + + # Restore output cluster-lock + cp "$TEST_FIXTURES_DIR/new-cluster-lock.json" "$REPO_ROOT/output/cluster-lock.json" + + assert_exit_code 1 "$exit_code" "test_remaining_skip_ceremony_missing_output" && \ + assert_output_contains "Cannot skip ceremony" "$output" "test_remaining_skip_ceremony_missing_output" +} + +test_remaining_unknown_argument() { + local output + local exit_code=0 + + output=$("$REMAINING_OPERATOR_SCRIPT" --invalid-flag 2>&1) || exit_code=$? + + assert_exit_code 1 "$exit_code" "test_remaining_unknown_argument" && \ + assert_output_contains "Unknown argument" "$output" "test_remaining_unknown_argument" +} + +# ============================================================================ +# MAIN TEST RUNNER +# ============================================================================ + +main() { + echo "" + echo "╔════════════════════════════════════════════════════════════════╗" + echo "║ Replace-Operator Scripts - Integration Tests ║" + echo "╚════════════════════════════════════════════════════════════════╝" + echo "" + + # Setup test environment + log_info "Setting up test environment..." 
+ setup_test_env + + echo "" + echo "─────────────────────────────────────────────────────────────────" + echo " NEW-OPERATOR.SH TESTS" + echo "─────────────────────────────────────────────────────────────────" + echo "" + + run_test "new-operator: --help shows usage" test_new_help + run_test "new-operator: error when .env missing" test_new_missing_env + run_test "new-operator: error when NETWORK missing" test_new_missing_network + run_test "new-operator: error when VC missing" test_new_missing_vc + run_test "new-operator: error when .charon dir missing" test_new_missing_charon_dir + run_test "new-operator: error when ENR key missing" test_new_missing_enr_key + run_test "new-operator: error for invalid cluster-lock path" test_new_invalid_cluster_lock_path + run_test "new-operator: dry-run generate ENR" test_new_dry_run_generate_enr + run_test "new-operator: dry-run join cluster" test_new_dry_run_join_cluster + run_test "new-operator: error for unknown argument" test_new_unknown_argument + + echo "" + echo "─────────────────────────────────────────────────────────────────" + echo " REMAINING-OPERATOR.SH TESTS" + echo "─────────────────────────────────────────────────────────────────" + echo "" + + run_test "remaining-operator: --help shows usage" test_remaining_help + run_test "remaining-operator: error when --new-enr missing" test_remaining_missing_new_enr + run_test "remaining-operator: error when --operator-index missing" test_remaining_missing_operator_index + run_test "remaining-operator: error when .env missing" test_remaining_missing_env + run_test "remaining-operator: error when .charon dir missing" test_remaining_missing_charon_dir + run_test "remaining-operator: error when cluster-lock missing" test_remaining_missing_cluster_lock + run_test "remaining-operator: error when ENR key missing" test_remaining_missing_enr_key + run_test "remaining-operator: dry-run full workflow" test_remaining_dry_run_full_workflow + run_test "remaining-operator: skip-export 
needs existing ASDB" test_remaining_skip_export_missing_asdb + run_test "remaining-operator: skip-ceremony needs existing output" test_remaining_skip_ceremony_missing_output + run_test "remaining-operator: error for unknown argument" test_remaining_unknown_argument + + echo "" + echo "═════════════════════════════════════════════════════════════════" + echo "" + + if [ "$TESTS_FAILED" -eq 0 ]; then + echo -e "${GREEN}All $TESTS_PASSED tests passed!${NC}" + echo "" + exit 0 + else + echo -e "${RED}$TESTS_FAILED of $TESTS_RUN tests failed${NC}" + echo "" + exit 1 + fi +} + +main "$@" diff --git a/scripts/edit/vc/export_asdb.sh b/scripts/edit/vc/export_asdb.sh new file mode 100755 index 00000000..f7ed0b68 --- /dev/null +++ b/scripts/edit/vc/export_asdb.sh @@ -0,0 +1,56 @@ +#!/usr/bin/env bash + +# Script to export validator anti-slashing database to EIP-3076 format. +# +# This script routes to the appropriate VC-specific export script based on the VC environment variable. +# +# Usage: VC=vc-lodestar ./scripts/edit/vc/export_asdb.sh [options] +# +# Environment Variables: +# VC Validator client type (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus) +# +# All options are passed through to the VC-specific script. + +set -euo pipefail + +# Check if VC environment variable is set +if [ -z "${VC:-}" ]; then + echo "Error: VC environment variable is not set" >&2 + echo "Usage: VC=vc-lodestar $0 [options]" >&2 + echo "" >&2 + echo "Supported VC types:" >&2 + echo " - vc-lodestar" >&2 + echo " - vc-teku" >&2 + echo " - vc-prysm" >&2 + echo " - vc-nimbus" >&2 + exit 1 +fi + +# Extract the VC name (remove "vc-" prefix) +VC_NAME="${VC#vc-}" + +# Get the script directory +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + +# Path to the VC-specific script +VC_SCRIPT="${SCRIPT_DIR}/${VC_NAME}/export_asdb.sh" + +# Check if the VC-specific script exists +if [ ! 
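The router's dispatch rule is a one-line parameter expansion: strip the `vc-` prefix from `$VC` and use the remainder as a directory name. A sketch with an example value (the hard-coded `SCRIPT_DIR` here is illustrative; the real script derives it from `BASH_SOURCE`):

```shell
# ${VC#vc-} removes the shortest leading match of "vc-" from $VC.
VC="vc-lodestar"
VC_NAME="${VC#vc-}"

# Build the path to the client-specific script from the stripped name.
SCRIPT_DIR="scripts/edit/vc"
VC_SCRIPT="${SCRIPT_DIR}/${VC_NAME}/export_asdb.sh"
```

Adding support for another client then requires no router changes: dropping a new `<name>/export_asdb.sh` directory into place makes `VC=vc-<name>` resolve automatically.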
-f "$VC_SCRIPT" ]; then + echo "Error: Export script for '$VC' not found at: $VC_SCRIPT" >&2 + echo "" >&2 + echo "Available VC types:" >&2 + for dir in "${SCRIPT_DIR}"/*; do + if [ -d "$dir" ] && [ -f "$dir/export_asdb.sh" ]; then + basename "$dir" + fi + done | sed 's/^/ - vc-/' >&2 + exit 1 +fi + +# Make sure the VC-specific script is executable +chmod +x "$VC_SCRIPT" + +# Run the VC-specific script with all arguments passed through +echo "Running export for $VC..." +exec "$VC_SCRIPT" "$@" diff --git a/scripts/edit/vc/import_asdb.sh b/scripts/edit/vc/import_asdb.sh new file mode 100755 index 00000000..6e8facd7 --- /dev/null +++ b/scripts/edit/vc/import_asdb.sh @@ -0,0 +1,56 @@ +#!/usr/bin/env bash + +# Script to import validator anti-slashing database from EIP-3076 format. +# +# This script routes to the appropriate VC-specific import script based on the VC environment variable. +# +# Usage: VC=vc-lodestar ./scripts/edit/vc/import_asdb.sh [options] +# +# Environment Variables: +# VC Validator client type (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus) +# +# All options are passed through to the VC-specific script. + +set -euo pipefail + +# Check if VC environment variable is set +if [ -z "${VC:-}" ]; then + echo "Error: VC environment variable is not set" >&2 + echo "Usage: VC=vc-lodestar $0 [options]" >&2 + echo "" >&2 + echo "Supported VC types:" >&2 + echo " - vc-lodestar" >&2 + echo " - vc-teku" >&2 + echo " - vc-prysm" >&2 + echo " - vc-nimbus" >&2 + exit 1 +fi + +# Extract the VC name (remove "vc-" prefix) +VC_NAME="${VC#vc-}" + +# Get the script directory +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + +# Path to the VC-specific script +VC_SCRIPT="${SCRIPT_DIR}/${VC_NAME}/import_asdb.sh" + +# Check if the VC-specific script exists +if [ ! 
-f "$VC_SCRIPT" ]; then + echo "Error: Import script for '$VC' not found at: $VC_SCRIPT" >&2 + echo "" >&2 + echo "Available VC types:" >&2 + for dir in "${SCRIPT_DIR}"/*; do + if [ -d "$dir" ] && [ -f "$dir/import_asdb.sh" ]; then + basename "$dir" + fi + done | sed 's/^/ - vc-/' >&2 + exit 1 +fi + +# Make sure the VC-specific script is executable +chmod +x "$VC_SCRIPT" + +# Run the VC-specific script with all arguments passed through +echo "Running import for $VC..." +exec "$VC_SCRIPT" "$@" diff --git a/scripts/edit/vc/lodestar/export_asdb.sh b/scripts/edit/vc/lodestar/export_asdb.sh new file mode 100755 index 00000000..371fd45f --- /dev/null +++ b/scripts/edit/vc/lodestar/export_asdb.sh @@ -0,0 +1,119 @@ +#!/usr/bin/env bash + +# Script to export Lodestar validator anti-slashing database to EIP-3076 format. +# +# This script is run by continuing operators before the replace-operator ceremony. +# It exports the slashing protection database from the running vc-lodestar container +# to a JSON file that can be updated and re-imported after the ceremony. +# +# Usage: export_asdb.sh [--data-dir ] [--output-file ] +# +# Options: +# --data-dir Path to Lodestar data directory (default: ./data/lodestar) +# --output-file Path for exported slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-lodestar container must be running +# - docker and docker compose must be available + +set -euo pipefail + +# Default values +DATA_DIR="./data/lodestar" +OUTPUT_FILE="./asdb-export/slashing-protection.json" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + --output-file) + OUTPUT_FILE="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--data-dir ] [--output-file ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! 
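All of the per-client export/import scripts share the same flag-parsing shape: defaults assigned first, then a `while`/`case` loop consuming `--flag value` pairs with `shift 2`. A reduced sketch of that loop (wrapped in a function here only so it can be exercised with sample arguments):

```shell
# Defaults, overridable by flags.
DATA_DIR="./data/lodestar"
OUTPUT_FILE="./asdb-export/slashing-protection.json"

parse_args() {
    while [[ $# -gt 0 ]]; do
        case $1 in
            --data-dir)    DATA_DIR="$2";    shift 2 ;;
            --output-file) OUTPUT_FILE="$2"; shift 2 ;;
            *) echo "Error: Unknown argument '$1'" >&2; return 1 ;;
        esac
    done
}

parse_args --data-dir /mnt/lodestar
```

Unknown flags fail fast with a usage hint, which is what the `test_new_unknown_argument` cases above assert on.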
-f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE if it was set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Exporting anti-slashing database for Lodestar validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Output file: $OUTPUT_FILE" +echo "" + +# Check if vc-lodestar container is running +if ! docker compose ps vc-lodestar | grep -q Up; then + echo "Error: vc-lodestar container is not running" >&2 + echo "Please start the validator client before exporting:" >&2 + echo " docker compose up -d vc-lodestar" >&2 + exit 1 +fi + +# Create output directory if it doesn't exist +OUTPUT_DIR=$(dirname "$OUTPUT_FILE") +mkdir -p "$OUTPUT_DIR" + +echo "Exporting slashing protection data from vc-lodestar container..." + +# Export slashing protection data from the container +# The container writes to /tmp/export.json, then we copy it out +# Using full path to lodestar binary as found in run.sh to ensure it's found +if ! docker compose exec -T vc-lodestar node /usr/app/packages/cli/bin/lodestar validator slashing-protection export \ + --file /tmp/export.json \ + --dataDir /opt/data \ + --network "$NETWORK"; then + echo "Error: Failed to export slashing protection from vc-lodestar container" >&2 + exit 1 +fi + +echo "Copying exported file from container to host..." + +# Copy the exported file from container to host +if ! 
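The COMPOSE_FILE save/source/restore dance above exists because `source .env` would otherwise clobber a `COMPOSE_FILE` that a caller (such as the test harness) has already exported. A self-contained sketch of the pattern, using a throwaway `.env` written to a temp file:

```shell
# A caller has pre-set COMPOSE_FILE before invoking the script.
export COMPOSE_FILE="docker-compose.test.yml"

# Throwaway .env that would overwrite it.
env_file=$(mktemp)
printf 'NETWORK=hoodi\nCOMPOSE_FILE=docker-compose.yml\n' > "$env_file"

SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}"   # remember the caller's value
source "$env_file"                       # this clobbers COMPOSE_FILE...
if [ -n "$SAVED_COMPOSE_FILE" ]; then
    export COMPOSE_FILE="$SAVED_COMPOSE_FILE"   # ...so put it back
fi
rm -f "$env_file"
```

Everything else sourced from `.env` (here `NETWORK`) is kept; only the deliberately preserved variable is restored to its pre-source value.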
docker compose cp vc-lodestar:/tmp/export.json "$OUTPUT_FILE"; then + echo "Error: Failed to copy exported file from container" >&2 + exit 1 +fi + +# Validate the exported JSON +if ! jq empty "$OUTPUT_FILE" 2>/dev/null; then + echo "Error: Exported file is not valid JSON" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully exported anti-slashing database" +echo " Output file: $OUTPUT_FILE" +echo "" +echo "You can now proceed with the replace-operator ceremony." diff --git a/scripts/edit/vc/lodestar/import_asdb.sh b/scripts/edit/vc/lodestar/import_asdb.sh new file mode 100755 index 00000000..c9751b4d --- /dev/null +++ b/scripts/edit/vc/lodestar/import_asdb.sh @@ -0,0 +1,121 @@ +#!/usr/bin/env bash + +# Script to import Lodestar validator anti-slashing database from EIP-3076 format. +# +# This script is run by continuing operators after the replace-operator ceremony +# and anti-slashing database update. It imports the updated slashing protection +# database back into the vc-lodestar container. +# +# Usage: import_asdb.sh [--input-file ] [--data-dir ] +# +# Options: +# --input-file Path to updated slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# --data-dir Path to Lodestar data directory (default: ./data/lodestar) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-lodestar container must be STOPPED before import +# - docker and docker compose must be available +# - Input file must be valid EIP-3076 JSON + +set -euo pipefail + +# Default values +INPUT_FILE="./asdb-export/slashing-protection.json" +DATA_DIR="./data/lodestar" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --input-file) + INPUT_FILE="$2" + shift 2 + ;; + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--input-file ] [--data-dir ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! 
-f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE if it was set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Importing anti-slashing database for Lodestar validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Input file: $INPUT_FILE" +echo "" + +# Check if input file exists +if [ ! -f "$INPUT_FILE" ]; then + echo "Error: Input file not found: $INPUT_FILE" >&2 + exit 1 +fi + +# Validate input file is valid JSON +if ! jq empty "$INPUT_FILE" 2>/dev/null; then + echo "Error: Input file is not valid JSON: $INPUT_FILE" >&2 + exit 1 +fi + +# Check if vc-lodestar container is running (it should be stopped) +if docker compose ps vc-lodestar 2>/dev/null | grep -q Up; then + echo "Error: vc-lodestar container is still running" >&2 + echo "Please stop the validator client before importing:" >&2 + echo " docker compose stop vc-lodestar" >&2 + echo "" >&2 + echo "Importing while the container is running may cause database corruption." >&2 + exit 1 +fi + +echo "Importing slashing protection data into vc-lodestar container..." + +# Import slashing protection data using a temporary container based on the vc-lodestar service. +# The input file is bind-mounted into the container at /tmp/import.json (read-only). +# We MUST override the entrypoint because the default run.sh ignores arguments. +# Using --force to allow importing even if some data already exists. +if ! 
docker compose run --rm -T \ + --entrypoint node \ + -v "$INPUT_FILE":/tmp/import.json:ro \ + vc-lodestar /usr/app/packages/cli/bin/lodestar validator slashing-protection import \ + --file /tmp/import.json \ + --dataDir /opt/data \ + --network "$NETWORK" \ + --force; then + echo "Error: Failed to import slashing protection into vc-lodestar container" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully imported anti-slashing database" +echo "" +echo "You can now restart the validator client:" +echo " docker compose up -d vc-lodestar" diff --git a/scripts/edit/vc/nimbus/export_asdb.sh b/scripts/edit/vc/nimbus/export_asdb.sh new file mode 100755 index 00000000..4129dd61 --- /dev/null +++ b/scripts/edit/vc/nimbus/export_asdb.sh @@ -0,0 +1,118 @@ +#!/usr/bin/env bash + +# Script to export Nimbus validator anti-slashing database to EIP-3076 format. +# +# This script is run by continuing operators before the replace-operator ceremony. +# It exports the slashing protection database from the running vc-nimbus container +# to a JSON file that can be updated and re-imported after the ceremony. +# +# Usage: export_asdb.sh [--data-dir ] [--output-file ] +# +# Options: +# --data-dir Path to Nimbus data directory (default: ./data/nimbus) +# --output-file Path for exported slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-nimbus container must be running +# - docker and docker compose must be available + +set -euo pipefail + +# Default values +DATA_DIR="./data/nimbus" +OUTPUT_FILE="./asdb-export/slashing-protection.json" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + --output-file) + OUTPUT_FILE="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--data-dir ] [--output-file ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! 
-f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE if it was set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Exporting anti-slashing database for Nimbus validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Output file: $OUTPUT_FILE" +echo "" + +# Check if vc-nimbus container is running +if ! docker compose ps vc-nimbus | grep -q Up; then + echo "Error: vc-nimbus container is not running" >&2 + echo "Please start the validator client before exporting:" >&2 + echo " docker compose up -d vc-nimbus" >&2 + exit 1 +fi + +# Create output directory if it doesn't exist +OUTPUT_DIR=$(dirname "$OUTPUT_FILE") +mkdir -p "$OUTPUT_DIR" + +echo "Exporting slashing protection data from vc-nimbus container..." + +# Export slashing protection data from the container +# The container writes to /tmp/export.json, then we copy it out +# Note: slashingdb commands are in nimbus_beacon_node, not nimbus_validator_client. +# Nimbus requires --data-dir BEFORE the subcommand. +if ! docker compose exec -T vc-nimbus /home/user/nimbus_beacon_node \ + --data-dir=/home/user/data slashingdb export /tmp/export.json; then + echo "Error: Failed to export slashing protection from vc-nimbus container" >&2 + exit 1 +fi + +echo "Copying exported file from container to host..." + +# Copy the exported file from container to host +if ! 
docker compose cp vc-nimbus:/tmp/export.json "$OUTPUT_FILE"; then + echo "Error: Failed to copy exported file from container" >&2 + exit 1 +fi + +# Validate the exported JSON +if ! jq empty "$OUTPUT_FILE" 2>/dev/null; then + echo "Error: Exported file is not valid JSON" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully exported anti-slashing database" +echo " Output file: $OUTPUT_FILE" +echo "" +echo "You can now proceed with the replace-operator ceremony." diff --git a/scripts/edit/vc/nimbus/import_asdb.sh b/scripts/edit/vc/nimbus/import_asdb.sh new file mode 100755 index 00000000..36433a5d --- /dev/null +++ b/scripts/edit/vc/nimbus/import_asdb.sh @@ -0,0 +1,117 @@ +#!/usr/bin/env bash + +# Script to import Nimbus validator anti-slashing database from EIP-3076 format. +# +# This script is run by continuing operators after the replace-operator ceremony +# and anti-slashing database update. It imports the updated slashing protection +# database back into the vc-nimbus container. +# +# Usage: import_asdb.sh [--input-file ] [--data-dir ] +# +# Options: +# --input-file Path to updated slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# --data-dir Path to Nimbus data directory (default: ./data/nimbus) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-nimbus container must be STOPPED before import +# - docker and docker compose must be available +# - Input file must be valid EIP-3076 JSON + +set -euo pipefail + +# Default values +INPUT_FILE="./asdb-export/slashing-protection.json" +DATA_DIR="./data/nimbus" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --input-file) + INPUT_FILE="$2" + shift 2 + ;; + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--input-file ] [--data-dir ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! 
-f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE if it was set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Importing anti-slashing database for Nimbus validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Input file: $INPUT_FILE" +echo "" + +# Check if input file exists +if [ ! -f "$INPUT_FILE" ]; then + echo "Error: Input file not found: $INPUT_FILE" >&2 + exit 1 +fi + +# Validate input file is valid JSON +if ! jq empty "$INPUT_FILE" 2>/dev/null; then + echo "Error: Input file is not valid JSON: $INPUT_FILE" >&2 + exit 1 +fi + +# Check if vc-nimbus container is running (it should be stopped) +if docker compose ps vc-nimbus 2>/dev/null | grep -q Up; then + echo "Error: vc-nimbus container is still running" >&2 + echo "Please stop the validator client before importing:" >&2 + echo " docker compose stop vc-nimbus" >&2 + echo "" >&2 + echo "Importing while the container is running may cause database corruption." >&2 + exit 1 +fi + +echo "Importing slashing protection data into vc-nimbus container..." + +# Import slashing protection data using a temporary container based on the vc-nimbus service. +# The input file is bind-mounted into the container at /tmp/import.json (read-only). +# Note: slashingdb commands are in nimbus_beacon_node, not nimbus_validator_client. +# Nimbus requires --data-dir BEFORE the subcommand. +if ! 
docker compose run --rm -T \ + --entrypoint sh \ + -v "$INPUT_FILE":/tmp/import.json:ro \ + vc-nimbus -c "/home/user/nimbus_beacon_node --data-dir=/home/user/data slashingdb import /tmp/import.json"; then + echo "Error: Failed to import slashing protection into vc-nimbus container" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully imported anti-slashing database" +echo "" +echo "You can now restart the validator client:" +echo " docker compose up -d vc-nimbus" diff --git a/scripts/edit/vc/prysm/export_asdb.sh b/scripts/edit/vc/prysm/export_asdb.sh new file mode 100755 index 00000000..79820081 --- /dev/null +++ b/scripts/edit/vc/prysm/export_asdb.sh @@ -0,0 +1,121 @@ +#!/usr/bin/env bash + +# Script to export Prysm validator anti-slashing database to EIP-3076 format. +# +# This script is run by continuing operators before the replace-operator ceremony. +# It exports the slashing protection database from the running vc-prysm container +# to a JSON file that can be updated and re-imported after the ceremony. +# +# Usage: export_asdb.sh [--data-dir ] [--output-file ] +# +# Options: +# --data-dir Path to Prysm data directory (default: ./data/prysm) +# --output-file Path for exported slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-prysm container must be running +# - docker and docker compose must be available + +set -euo pipefail + +# Default values +DATA_DIR="./data/prysm" +OUTPUT_FILE="./asdb-export/slashing-protection.json" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + --output-file) + OUTPUT_FILE="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--data-dir ] [--output-file ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! 
-f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE if it was set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Exporting anti-slashing database for Prysm validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Output file: $OUTPUT_FILE" +echo "" + +# Check if vc-prysm container is running +if ! docker compose ps vc-prysm | grep -q Up; then + echo "Error: vc-prysm container is not running" >&2 + echo "Please start the validator client before exporting:" >&2 + echo " docker compose up -d vc-prysm" >&2 + exit 1 +fi + +# Create output directory if it doesn't exist +OUTPUT_DIR=$(dirname "$OUTPUT_FILE") +mkdir -p "$OUTPUT_DIR" + +echo "Exporting slashing protection data from vc-prysm container..." + +# Export slashing protection data from the container +# The container writes to /tmp/export.json, then we copy it out +# Prysm stores data in /data/vc and wallet in /prysm-wallet +if ! docker compose exec -T vc-prysm /app/cmd/validator/validator slashing-protection-history export \ + --accept-terms-of-use \ + --datadir=/data/vc \ + --slashing-protection-export-dir=/tmp \ + --$NETWORK; then + echo "Error: Failed to export slashing protection from vc-prysm container" >&2 + exit 1 +fi + +echo "Copying exported file from container to host..." + +# Prysm creates a file named slashing_protection.json in the export directory +# Copy the exported file from container to host +if ! 
docker compose cp vc-prysm:/tmp/slashing_protection.json "$OUTPUT_FILE"; then + echo "Error: Failed to copy exported file from container" >&2 + exit 1 +fi + +# Validate the exported JSON +if ! jq empty "$OUTPUT_FILE" 2>/dev/null; then + echo "Error: Exported file is not valid JSON" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully exported anti-slashing database" +echo " Output file: $OUTPUT_FILE" +echo "" +echo "You can now proceed with the replace-operator ceremony." diff --git a/scripts/edit/vc/prysm/import_asdb.sh b/scripts/edit/vc/prysm/import_asdb.sh new file mode 100755 index 00000000..bc2c6bc5 --- /dev/null +++ b/scripts/edit/vc/prysm/import_asdb.sh @@ -0,0 +1,121 @@ +#!/usr/bin/env bash + +# Script to import Prysm validator anti-slashing database from EIP-3076 format. +# +# This script is run by continuing operators after the replace-operator ceremony +# and anti-slashing database update. It imports the updated slashing protection +# database back into the vc-prysm container. +# +# Usage: import_asdb.sh [--input-file ] [--data-dir ] +# +# Options: +# --input-file Path to updated slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# --data-dir Path to Prysm data directory (default: ./data/prysm) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-prysm container must be STOPPED before import +# - docker and docker compose must be available +# - Input file must be valid EIP-3076 JSON + +set -euo pipefail + +# Default values +INPUT_FILE="./asdb-export/slashing-protection.json" +DATA_DIR="./data/prysm" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --input-file) + INPUT_FILE="$2" + shift 2 + ;; + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--input-file ] [--data-dir ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! 
-f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE if it was set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Importing anti-slashing database for Prysm validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Input file: $INPUT_FILE" +echo "" + +# Check if input file exists +if [ ! -f "$INPUT_FILE" ]; then + echo "Error: Input file not found: $INPUT_FILE" >&2 + exit 1 +fi + +# Validate input file is valid JSON +if ! jq empty "$INPUT_FILE" 2>/dev/null; then + echo "Error: Input file is not valid JSON: $INPUT_FILE" >&2 + exit 1 +fi + +# Check if vc-prysm container is running (it should be stopped) +if docker compose ps vc-prysm 2>/dev/null | grep -q Up; then + echo "Error: vc-prysm container is still running" >&2 + echo "Please stop the validator client before importing:" >&2 + echo " docker compose stop vc-prysm" >&2 + echo "" >&2 + echo "Importing while the container is running may cause database corruption." >&2 + exit 1 +fi + +echo "Importing slashing protection data into vc-prysm container..." + +# Import slashing protection data using a temporary container based on the vc-prysm service. +# The input file is bind-mounted into the container at /tmp/slashing_protection.json (read-only). +# We MUST override the entrypoint because the default run.sh ignores arguments. +# Prysm expects the file to be named slashing_protection.json +if ! 
docker compose run --rm -T \ + --entrypoint /app/cmd/validator/validator \ + -v "$INPUT_FILE":/tmp/slashing_protection.json:ro \ + vc-prysm slashing-protection-history import \ + --accept-terms-of-use \ + --datadir=/data/vc \ + --slashing-protection-json-file=/tmp/slashing_protection.json \ + --$NETWORK; then + echo "Error: Failed to import slashing protection into vc-prysm container" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully imported anti-slashing database" +echo "" +echo "You can now restart the validator client:" +echo " docker compose up -d vc-prysm" diff --git a/scripts/edit/vc/teku/export_asdb.sh b/scripts/edit/vc/teku/export_asdb.sh new file mode 100755 index 00000000..145712ed --- /dev/null +++ b/scripts/edit/vc/teku/export_asdb.sh @@ -0,0 +1,118 @@ +#!/usr/bin/env bash + +# Script to export Teku validator anti-slashing database to EIP-3076 format. +# +# This script is run by continuing operators before the replace-operator ceremony. +# It exports the slashing protection database from the running vc-teku container +# to a JSON file that can be updated and re-imported after the ceremony. +# +# Usage: export_asdb.sh [--data-dir ] [--output-file ] +# +# Options: +# --data-dir Path to Teku data directory (default: ./data/vc-teku) +# --output-file Path for exported slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-teku container must be running +# - docker and docker compose must be available + +set -euo pipefail + +# Default values +DATA_DIR="./data/vc-teku" +OUTPUT_FILE="./asdb-export/slashing-protection.json" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + --output-file) + OUTPUT_FILE="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--data-dir ] [--output-file ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! 
-f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE if it was set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Exporting anti-slashing database for Teku validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Output file: $OUTPUT_FILE" +echo "" + +# Check if vc-teku container is running +if ! docker compose ps vc-teku | grep -q Up; then + echo "Error: vc-teku container is not running" >&2 + echo "Please start the validator client before exporting:" >&2 + echo " docker compose up -d vc-teku" >&2 + exit 1 +fi + +# Create output directory if it doesn't exist +OUTPUT_DIR=$(dirname "$OUTPUT_FILE") +mkdir -p "$OUTPUT_DIR" + +echo "Exporting slashing protection data from vc-teku container..." + +# Export slashing protection data from the container +# Teku stores data in /home/data (mapped from ./data/vc-teku) +# The export command writes to a file we specify +if ! docker compose exec -T vc-teku /opt/teku/bin/teku slashing-protection export \ + --data-path=/home/data \ + --to=/tmp/export.json; then + echo "Error: Failed to export slashing protection from vc-teku container" >&2 + exit 1 +fi + +echo "Copying exported file from container to host..." + +# Copy the exported file from container to host +if ! 
docker compose cp vc-teku:/tmp/export.json "$OUTPUT_FILE"; then + echo "Error: Failed to copy exported file from container" >&2 + exit 1 +fi + +# Validate the exported JSON +if ! jq empty "$OUTPUT_FILE" 2>/dev/null; then + echo "Error: Exported file is not valid JSON" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully exported anti-slashing database" +echo " Output file: $OUTPUT_FILE" +echo "" +echo "You can now proceed with the replace-operator ceremony." diff --git a/scripts/edit/vc/teku/import_asdb.sh b/scripts/edit/vc/teku/import_asdb.sh new file mode 100755 index 00000000..d73b7c6f --- /dev/null +++ b/scripts/edit/vc/teku/import_asdb.sh @@ -0,0 +1,118 @@ +#!/usr/bin/env bash + +# Script to import Teku validator anti-slashing database from EIP-3076 format. +# +# This script is run by continuing operators after the replace-operator ceremony +# and anti-slashing database update. It imports the updated slashing protection +# database back into the vc-teku container. +# +# Usage: import_asdb.sh [--input-file ] [--data-dir ] +# +# Options: +# --input-file Path to updated slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# --data-dir Path to Teku data directory (default: ./data/vc-teku) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-teku container must be STOPPED before import +# - docker and docker compose must be available +# - Input file must be valid EIP-3076 JSON + +set -euo pipefail + +# Default values +INPUT_FILE="./asdb-export/slashing-protection.json" +DATA_DIR="./data/vc-teku" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --input-file) + INPUT_FILE="$2" + shift 2 + ;; + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--input-file ] [--data-dir ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! 
-f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE if it was set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Importing anti-slashing database for Teku validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Input file: $INPUT_FILE" +echo "" + +# Check if input file exists +if [ ! -f "$INPUT_FILE" ]; then + echo "Error: Input file not found: $INPUT_FILE" >&2 + exit 1 +fi + +# Validate input file is valid JSON +if ! jq empty "$INPUT_FILE" 2>/dev/null; then + echo "Error: Input file is not valid JSON: $INPUT_FILE" >&2 + exit 1 +fi + +# Check if vc-teku container is running (it should be stopped) +if docker compose ps vc-teku 2>/dev/null | grep -q Up; then + echo "Error: vc-teku container is still running" >&2 + echo "Please stop the validator client before importing:" >&2 + echo " docker compose stop vc-teku" >&2 + echo "" >&2 + echo "Importing while the container is running may cause database corruption." >&2 + exit 1 +fi + +echo "Importing slashing protection data into vc-teku container..." + +# Import slashing protection data using a temporary container based on the vc-teku service. +# The input file is bind-mounted into the container at /tmp/import.json (read-only). +# We override the command to run the import instead of the validator client. +if ! 
docker compose run --rm -T \ + -v "$INPUT_FILE":/tmp/import.json:ro \ + --entrypoint /opt/teku/bin/teku \ + vc-teku slashing-protection import \ + --data-path=/home/data \ + --from=/tmp/import.json; then + echo "Error: Failed to import slashing protection into vc-teku container" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully imported anti-slashing database" +echo "" +echo "You can now restart the validator client:" +echo " docker compose up -d vc-teku" diff --git a/scripts/edit/vc/test/.gitignore b/scripts/edit/vc/test/.gitignore new file mode 100644 index 00000000..e1c0f7a9 --- /dev/null +++ b/scripts/edit/vc/test/.gitignore @@ -0,0 +1,9 @@ +# Temporary test artifacts +output/ +data/ +*.tmp + +# Keep fixtures (override root .gitignore rules) +!fixtures/ +!fixtures/validator_keys/ +!fixtures/validator_keys/* diff --git a/scripts/edit/vc/test/README.md b/scripts/edit/vc/test/README.md new file mode 100644 index 00000000..6e9c768b --- /dev/null +++ b/scripts/edit/vc/test/README.md @@ -0,0 +1,34 @@ +# Integration Tests for ASDB Export/Import Scripts + +These tests verify export/import scripts for various VC types work correctly with test data. + +## Prerequisites + +- Docker must be running +- No `.charon` folder required (test uses fixtures) + +## Running Tests + +```bash +# Lodestar VC test +# (for other VC types the usage is identical) +./scripts/edit/vc/test/test_lodestar_asdb.sh +``` + +## ⚠️ Test Isolation + +The test uses isolated data directories within `scripts/edit/vc/test/data/` to avoid any interference with production data in `data/`. + +## Test Flow + +1. Starts vc-lodestar container (no charon dependency) +2. Imports sample slashing protection data from fixtures +3. Exports slashing protection via `export_asdb.sh` +4. Transforms pubkeys via `update-anti-slashing-db.sh` +5. 
Re-imports updated data via `import_asdb.sh` + +## Test Artifacts + +After running, inspect results in `scripts/edit/vc/test/output/`: +- `exported-asdb.json` - Original export +- `updated-asdb.json` - After pubkey transformation diff --git a/scripts/edit/vc/test/docker-compose.test.yml b/scripts/edit/vc/test/docker-compose.test.yml new file mode 100644 index 00000000..ce2d6065 --- /dev/null +++ b/scripts/edit/vc/test/docker-compose.test.yml @@ -0,0 +1,40 @@ +# Test override for validator client services +# Removes charon dependency and keeps container alive for testing +# Mounts test fixtures instead of .charon/validator_keys +# Uses dedicated test data directory to avoid conflicts + +services: + vc-lodestar: + depends_on: [] + entrypoint: ["sh", "-c", "tail -f /dev/null"] + volumes: + - ./lodestar/run.sh:/opt/lodestar/run.sh + - ./scripts/edit/vc/test/fixtures/validator_keys:/home/charon/validator_keys + - ./scripts/edit/vc/test/data/lodestar:/opt/data + + vc-nimbus: + depends_on: [] + entrypoint: ["sh", "-c", "tail -f /dev/null"] + volumes: + # Mount run.sh from INSIDE the test data directory to avoid conflicts + # with the base compose's run.sh mount (volumes are merged, not replaced) + - ./scripts/edit/vc/test/data/nimbus/run.sh:/home/user/data/run.sh + - ./scripts/edit/vc/test/fixtures/validator_keys:/home/validator_keys + - ./scripts/edit/vc/test/data/nimbus:/home/user/data + + vc-prysm: + depends_on: [] + entrypoint: ["sh", "-c", "tail -f /dev/null"] + volumes: + # Mount run.sh from INSIDE the test data directory to avoid conflicts + - ./scripts/edit/vc/test/data/prysm/run.sh:/home/prysm/run.sh + - ./scripts/edit/vc/test/fixtures/validator_keys:/home/charon/validator_keys + - ./scripts/edit/vc/test/data/prysm:/data/vc + + vc-teku: + depends_on: [] + entrypoint: ["sh", "-c", "tail -f /dev/null"] + volumes: + # Mount test fixtures validator keys and test data directory + - ./scripts/edit/vc/test/fixtures/validator_keys:/opt/charon/validator_keys + - 
./scripts/edit/vc/test/data/teku:/home/data diff --git a/scripts/edit/vc/test/fixtures/sample-slashing-protection.json b/scripts/edit/vc/test/fixtures/sample-slashing-protection.json new file mode 100644 index 00000000..6c1f42bb --- /dev/null +++ b/scripts/edit/vc/test/fixtures/sample-slashing-protection.json @@ -0,0 +1,38 @@ +{ + "metadata": { + "interchange_format_version": "5", + "genesis_validators_root": "0x212f13fc4df078b6cb7db228f1c8307566dcecf900867401a92023d7ba99cb5f" + }, + "data": [ + { + "pubkey": "0xa3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", + "signed_blocks": [ + { + "slot": "81952", + "signing_root": "0x4ff6f743a43f3b4f95350831aeaf0a122a1a392922c45d804280284a69eb850b" + }, + { + "slot": "81984", + "signing_root": "0x5a2b9c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b" + } + ], + "signed_attestations": [ + { + "source_epoch": "2560", + "target_epoch": "2561", + "signing_root": "0x587d6a4f59a58fe15bdac1234e3d51a1d5c8b2e0e3f5e0f2a1b3c4d5e6f7a8b9" + }, + { + "source_epoch": "2561", + "target_epoch": "2562", + "signing_root": "0x6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b" + }, + { + "source_epoch": "2562", + "target_epoch": "2563", + "signing_root": "0x7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c" + } + ] + } + ] +} diff --git a/scripts/edit/vc/test/fixtures/source-cluster-lock.json b/scripts/edit/vc/test/fixtures/source-cluster-lock.json new file mode 100644 index 00000000..d17c11fe --- /dev/null +++ b/scripts/edit/vc/test/fixtures/source-cluster-lock.json @@ -0,0 +1,19 @@ +{ + "cluster_definition": { + "name": "TestCluster", + "num_validators": 1, + "threshold": 3 + }, + "distributed_validators": [ + { + "distributed_public_key": "0xa9fb2be415318eb77709f7c378ab26025371c0b11213d93fd662ffdb06e77a05c7b04573a478e9d5c0c0fd98078965ef", + "public_shares": [ + 
"0xa3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", + "0x8afba316fdcf51e25a89e05e17377b8c72fd465c95346df4ed5694f295faa2ce061e14e579c5bc01a468dbbb191c58e8", + "0xa1aeebe0980509f5f8d8d424beb89004a967da8d8093248f64eb27c4ee5d22ba9c0f157025f551f47b31833f8bc585f8", + "0xa6c283c82cd0b65436861a149fb840849d06ded1dd8d2f900afb358c6a4232004309120f00a553cdccd8a43f6b743c82" + ] + } + ], + "lock_hash": "0xe9dbc87171f99bd8b6f348f6bf314291651933256e712ace299190f5e04e7795" +} diff --git a/scripts/edit/vc/test/fixtures/target-cluster-lock.json b/scripts/edit/vc/test/fixtures/target-cluster-lock.json new file mode 100644 index 00000000..8449e309 --- /dev/null +++ b/scripts/edit/vc/test/fixtures/target-cluster-lock.json @@ -0,0 +1,19 @@ +{ + "cluster_definition": { + "name": "TestCluster", + "num_validators": 1, + "threshold": 3 + }, + "distributed_validators": [ + { + "distributed_public_key": "0xa9fb2be415318eb77709f7c378ab26025371c0b11213d93fd662ffdb06e77a05c7b04573a478e9d5c0c0fd98078965ef", + "public_shares": [ + "0xb11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111", + "0x8afba316fdcf51e25a89e05e17377b8c72fd465c95346df4ed5694f295faa2ce061e14e579c5bc01a468dbbb191c58e8", + "0xa1aeebe0980509f5f8d8d424beb89004a967da8d8093248f64eb27c4ee5d22ba9c0f157025f551f47b31833f8bc585f8", + "0xa6c283c82cd0b65436861a149fb840849d06ded1dd8d2f900afb358c6a4232004309120f00a553cdccd8a43f6b743c82" + ] + } + ], + "lock_hash": "0xf0000000000000000000000000000000000000000000000000000000000000000" +} diff --git a/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.json b/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.json new file mode 100644 index 00000000..dba1e6ff --- /dev/null +++ b/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.json @@ -0,0 +1,31 @@ +{ + "crypto": { + "checksum": { + "function": "sha256", + "message": "eeaf8c59d062a397f74d62b97243860cef812cf168662135b9fca023d26c71df", + 
"params": {} + }, + "cipher": { + "function": "aes-128-ctr", + "message": "c3daae6234285577322e5d674ed90469da1d888b0a406cde50b6472d5206e165", + "params": { + "iv": "87350b9c54dc1e7563b9d784eba86f6d" + } + }, + "kdf": { + "function": "pbkdf2", + "message": "", + "params": { + "c": 262144, + "dklen": 32, + "prf": "hmac-sha256", + "salt": "f3d31631d40448dd9134bcf54630e2ad2f1668bb8470af8f5394c12e214a6fed" + } + } + }, + "description": "", + "pubkey": "a3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", + "path": "m/12381/3600/0/0/0", + "uuid": "840CFCF8-A23B-7742-9057-3B149122244A", + "version": 4 +} diff --git a/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.txt b/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.txt new file mode 100644 index 00000000..c0245cc2 --- /dev/null +++ b/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.txt @@ -0,0 +1 @@ +90bb9cd1986560f92016c8766fe8c528 \ No newline at end of file diff --git a/scripts/edit/vc/test/test_lodestar_asdb.sh b/scripts/edit/vc/test/test_lodestar_asdb.sh new file mode 100755 index 00000000..f8948bf6 --- /dev/null +++ b/scripts/edit/vc/test/test_lodestar_asdb.sh @@ -0,0 +1,231 @@ +#!/usr/bin/env bash + +# Integration test for export/import ASDB scripts with Lodestar VC. +# +# This script: +# 1. Starts vc-lodestar via docker-compose with test override (no charon dependency) +# 2. Sets up keystores in the container +# 3. Imports sample slashing protection data (with known pubkey and attestations) +# 4. Calls scripts/edit/vc/export_asdb.sh to export slashing protection +# 5. Runs update-anti-slashing-db.sh to transform pubkeys +# 6. Stops the container +# 7. Calls scripts/edit/vc/import_asdb.sh to import updated slashing protection +# +# Usage: ./scripts/edit/vc/test/test_lodestar_asdb.sh + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." 
&& pwd)" +cd "$REPO_ROOT" + +# Test artifacts directories +TEST_OUTPUT_DIR="$SCRIPT_DIR/output" +TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" +TEST_COMPOSE_FILE="$SCRIPT_DIR/docker-compose.test.yml" +TEST_DATA_DIR="$SCRIPT_DIR/data/lodestar" +TEST_COMPOSE_FILES="docker-compose.yml:compose-vc.yml:$TEST_COMPOSE_FILE" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } + +cleanup() { + log_info "Cleaning up test resources..." + COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-lodestar down 2>/dev/null || true + # Keep TEST_OUTPUT_DIR for inspection + # Clean test data to avoid stale DB locks + rm -rf "$TEST_DATA_DIR" 2>/dev/null || true +} + +trap cleanup EXIT + +# Clean test data directory before starting (remove stale locks) +log_info "Preparing test environment..." +COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-lodestar down 2>/dev/null || true +rm -rf "$TEST_DATA_DIR" +mkdir -p "$TEST_DATA_DIR" + +# Check prerequisites +log_info "Checking prerequisites..." + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +# Check for test validator keys in fixtures +KEYSTORE_COUNT=$(ls "$TEST_FIXTURES_DIR/validator_keys"/keystore-*.json 2>/dev/null | wc -l | tr -d ' ') +if [ "$KEYSTORE_COUNT" -eq 0 ]; then + log_error "No keystore files found in $TEST_FIXTURES_DIR/validator_keys" + exit 1 +fi +log_info "Found $KEYSTORE_COUNT test keystore file(s)" + +# Verify test fixtures exist +if [ ! -f "$TEST_FIXTURES_DIR/source-cluster-lock.json" ] || [ ! -f "$TEST_FIXTURES_DIR/target-cluster-lock.json" ]; then + log_error "Test fixtures not found in $TEST_FIXTURES_DIR" + exit 1 +fi +log_info "Test fixtures verified" + +# Source .env for NETWORK, then override COMPOSE_FILE with test compose +if [ ! 
-f .env ]; then + log_warn ".env file not found, creating with NETWORK=hoodi" + echo "NETWORK=hoodi" > .env +fi + +source .env +NETWORK="${NETWORK:-hoodi}" + +# Override COMPOSE_FILE after sourcing .env (which may have its own COMPOSE_FILE) +export COMPOSE_FILE="$TEST_COMPOSE_FILES" + +log_info "Using network: $NETWORK" +log_info "Using compose files: $COMPOSE_FILE" + +# Create test output directory +mkdir -p "$TEST_OUTPUT_DIR" + +# Step 1: Start vc-lodestar via docker-compose +log_info "Step 1: Starting vc-lodestar via docker-compose..." + +docker compose --profile vc-lodestar up -d vc-lodestar + +sleep 2 + +# Verify container is running +if ! docker compose ps vc-lodestar | grep -q Up; then + log_error "Container failed to start. Checking logs:" + docker compose logs vc-lodestar 2>&1 || true + exit 1 +fi + +log_info "Container started successfully" + +# Step 2: Set up keystores (normally done by run.sh but we override entrypoint) +log_info "Step 2: Setting up keystores..." + +docker compose exec -T vc-lodestar sh -c ' + mkdir -p /opt/data/keystores /opt/data/secrets + for f in /home/charon/validator_keys/keystore-*.json; do + PUBKEY="0x$(grep "\"pubkey\"" "$f" | sed "s/.*: *\"\([^\"]*\)\".*/\1/")" + mkdir -p "/opt/data/keystores/$PUBKEY" + cp "$f" "/opt/data/keystores/$PUBKEY/voting-keystore.json" + cp "${f%.json}.txt" "/opt/data/secrets/$PUBKEY" + echo "Imported keystore for $PUBKEY" + done +' + +log_info "Keystores set up successfully" + +# Step 3: Stop container and import sample slashing protection data +log_info "Step 3: Importing sample slashing protection data..." + +docker compose stop vc-lodestar + +SAMPLE_ASDB="$TEST_FIXTURES_DIR/sample-slashing-protection.json" + +if VC=vc-lodestar "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$SAMPLE_ASDB"; then + log_info "Sample data imported successfully!" 
+else + log_error "Failed to import sample data" + exit 1 +fi + +# Start container again for export +docker compose --profile vc-lodestar up -d vc-lodestar +sleep 2 + +# Clean stale LevelDB lock file from previous import run +docker compose exec -T vc-lodestar rm -f /opt/data/validator-db/LOCK 2>/dev/null || true + +# Step 4: Test export using the actual script +log_info "Step 4: Testing export_asdb.sh script..." + +EXPORT_FILE="$TEST_OUTPUT_DIR/exported-asdb.json" + +if VC=vc-lodestar "$REPO_ROOT/scripts/edit/vc/export_asdb.sh" --output-file "$EXPORT_FILE"; then + log_info "Export script successful!" + log_info "Exported content:" + jq '.' "$EXPORT_FILE" + + # Verify exported data matches what we imported + EXPORTED_COUNT=$(jq '.data | length' "$EXPORT_FILE") + EXPORTED_ATTESTATIONS=$(jq '.data[0].signed_attestations | length' "$EXPORT_FILE") + log_info "Exported $EXPORTED_COUNT validator(s) with $EXPORTED_ATTESTATIONS attestation(s)" +else + log_error "Export script failed" + exit 1 +fi + +# Step 5: Run update-anti-slashing-db.sh to transform pubkeys +log_info "Step 5: Running update-anti-slashing-db.sh..." + +UPDATE_SCRIPT="$REPO_ROOT/scripts/edit/vc/update-anti-slashing-db.sh" +SOURCE_LOCK="$TEST_FIXTURES_DIR/source-cluster-lock.json" +TARGET_LOCK="$TEST_FIXTURES_DIR/target-cluster-lock.json" + +# Copy export to a working file that will be modified in place +UPDATED_FILE="$TEST_OUTPUT_DIR/updated-asdb.json" +cp "$EXPORT_FILE" "$UPDATED_FILE" + +log_info "Source pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$SOURCE_LOCK")" +log_info "Target pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$TARGET_LOCK")" + +if "$UPDATE_SCRIPT" "$UPDATED_FILE" "$SOURCE_LOCK" "$TARGET_LOCK"; then + log_info "Update successful!" + log_info "Updated content:" + jq '.' 
"$UPDATED_FILE" + + # Verify the pubkey was transformed + EXPORTED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$EXPORT_FILE") + UPDATED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$UPDATED_FILE") + + if [ -n "$EXPORTED_PUBKEY" ] && [ -n "$UPDATED_PUBKEY" ]; then + if [ "$EXPORTED_PUBKEY" != "$UPDATED_PUBKEY" ]; then + log_info "Pubkey transformation verified:" + log_info " Before: $EXPORTED_PUBKEY" + log_info " After: $UPDATED_PUBKEY" + else + log_error "Pubkey was NOT transformed - test fixture mismatch!" + exit 1 + fi + else + log_error "No pubkey data in exported file - sample import may have failed" + exit 1 + fi +else + log_error "Update script failed" + exit 1 +fi + +# Step 6: Stop container before import (required by import script) +log_info "Step 6: Stopping vc-lodestar for import..." + +docker compose stop vc-lodestar + +# Step 7: Test import using the actual script +log_info "Step 7: Testing import_asdb.sh script..." + +if VC=vc-lodestar "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$UPDATED_FILE"; then + log_info "Import script successful!" +else + log_error "Import script failed" + exit 1 +fi + +echo "" +log_info "=========================================" +log_info "All tests passed successfully!" +log_info "=========================================" +log_info "" +log_info "Test artifacts in: $TEST_OUTPUT_DIR" +log_info " - exported-asdb.json (original export)" +log_info " - updated-asdb.json (after pubkey transformation)" diff --git a/scripts/edit/vc/test/test_nimbus_asdb.sh b/scripts/edit/vc/test/test_nimbus_asdb.sh new file mode 100755 index 00000000..8944b59d --- /dev/null +++ b/scripts/edit/vc/test/test_nimbus_asdb.sh @@ -0,0 +1,258 @@ +#!/usr/bin/env bash + +# Integration test for export/import ASDB scripts with Nimbus VC. +# +# This script: +# 1. Builds vc-nimbus image if needed +# 2. Starts vc-nimbus via docker-compose with test override (no charon dependency) +# 3. Sets up keystores in the container +# 4. 
Imports sample slashing protection data (with known pubkey and attestations) +# 5. Calls scripts/edit/vc/export_asdb.sh to export slashing protection +# 6. Runs update-anti-slashing-db.sh to transform pubkeys +# 7. Stops the container +# 8. Calls scripts/edit/vc/import_asdb.sh to import updated slashing protection +# +# Usage: ./scripts/edit/vc/test/test_nimbus_asdb.sh + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." && pwd)" +cd "$REPO_ROOT" + +# Test artifacts directories +TEST_OUTPUT_DIR="$SCRIPT_DIR/output" +TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" +TEST_COMPOSE_FILE="$SCRIPT_DIR/docker-compose.test.yml" +TEST_DATA_DIR="$SCRIPT_DIR/data/nimbus" +TEST_COMPOSE_FILES="docker-compose.yml:compose-vc.yml:$TEST_COMPOSE_FILE" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } + +cleanup() { + log_info "Cleaning up test resources..." + COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-nimbus down 2>/dev/null || true + # Keep TEST_OUTPUT_DIR for inspection + # Clean test data to avoid stale DB locks + rm -rf "$TEST_DATA_DIR" 2>/dev/null || true +} + +trap cleanup EXIT + +# Clean test data directory before starting (remove stale locks) +log_info "Preparing test environment..." +COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-nimbus down 2>/dev/null || true +rm -rf "$TEST_DATA_DIR" +mkdir -p "$TEST_DATA_DIR" + +# Copy run.sh into test data directory to satisfy the volume mount from base compose +# (compose merge keeps the original mount ./nimbus/run.sh:/home/user/data/run.sh, +# which conflicts with our test data mount unless we provide the file there) +cp "$REPO_ROOT/nimbus/run.sh" "$TEST_DATA_DIR/run.sh" + +# Check prerequisites +log_info "Checking prerequisites..." + +if ! 
docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +# Check for test validator keys in fixtures +KEYSTORE_COUNT=$(ls "$TEST_FIXTURES_DIR/validator_keys"/keystore-*.json 2>/dev/null | wc -l | tr -d ' ') +if [ "$KEYSTORE_COUNT" -eq 0 ]; then + log_error "No keystore files found in $TEST_FIXTURES_DIR/validator_keys" + exit 1 +fi +log_info "Found $KEYSTORE_COUNT test keystore file(s)" + +# Verify test fixtures exist +if [ ! -f "$TEST_FIXTURES_DIR/source-cluster-lock.json" ] || [ ! -f "$TEST_FIXTURES_DIR/target-cluster-lock.json" ]; then + log_error "Test fixtures not found in $TEST_FIXTURES_DIR" + exit 1 +fi +log_info "Test fixtures verified" + +# Source .env for NETWORK, then override COMPOSE_FILE with test compose +if [ ! -f .env ]; then + log_warn ".env file not found, creating with NETWORK=hoodi" + echo "NETWORK=hoodi" > .env +fi + +source .env +NETWORK="${NETWORK:-hoodi}" + +# Override COMPOSE_FILE after sourcing .env (which may have its own COMPOSE_FILE) +export COMPOSE_FILE="$TEST_COMPOSE_FILES" + +log_info "Using network: $NETWORK" +log_info "Using compose files: $COMPOSE_FILE" + +# Create test output directory +mkdir -p "$TEST_OUTPUT_DIR" + +# Step 0: Build vc-nimbus image if needed +log_info "Step 0: Building vc-nimbus image..." + +if ! docker compose --profile vc-nimbus build vc-nimbus; then + log_error "Failed to build vc-nimbus image" + exit 1 +fi +log_info "Image built successfully" + +# Step 1: Start vc-nimbus via docker-compose +log_info "Step 1: Starting vc-nimbus via docker-compose..." + +docker compose --profile vc-nimbus up -d vc-nimbus + +sleep 2 + +# Verify container is running +if ! docker compose ps vc-nimbus | grep -q Up; then + log_error "Container failed to start. Checking logs:" + docker compose logs vc-nimbus 2>&1 || true + exit 1 +fi + +log_info "Container started successfully" + +# Step 2: Set up keystores using nimbus_beacon_node deposits import +log_info "Step 2: Setting up keystores..." 
+ +# Create a temporary directory in the container for importing +docker compose exec -T vc-nimbus sh -c ' + mkdir -p /home/user/data/validators /tmp/keyimport + + for f in /home/validator_keys/keystore-*.json; do + echo "Importing key from $f" + + # Read password + password=$(cat "${f%.json}.txt") + + # Copy keystore to temp dir + cp "$f" /tmp/keyimport/ + + # Import using nimbus_beacon_node + echo "$password" | /home/user/nimbus_beacon_node deposits import \ + --data-dir=/home/user/data \ + /tmp/keyimport + + # Clean temp dir + rm /tmp/keyimport/* + done + + rm -rf /tmp/keyimport + echo "Done importing keystores" +' + +log_info "Keystores set up successfully" + +# Step 3: Stop container and import sample slashing protection data +log_info "Step 3: Importing sample slashing protection data..." + +docker compose stop vc-nimbus + +SAMPLE_ASDB="$TEST_FIXTURES_DIR/sample-slashing-protection.json" + +if VC=vc-nimbus "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$SAMPLE_ASDB"; then + log_info "Sample data imported successfully!" +else + log_error "Failed to import sample data" + exit 1 +fi + +# Start container again for export +docker compose --profile vc-nimbus up -d vc-nimbus +sleep 2 + +# Step 4: Test export using the actual script +log_info "Step 4: Testing export_asdb.sh script..." + +EXPORT_FILE="$TEST_OUTPUT_DIR/exported-asdb.json" + +if VC=vc-nimbus "$REPO_ROOT/scripts/edit/vc/export_asdb.sh" --output-file "$EXPORT_FILE"; then + log_info "Export script successful!" + log_info "Exported content:" + jq '.' 
"$EXPORT_FILE" + + # Verify exported data matches what we imported + EXPORTED_COUNT=$(jq '.data | length' "$EXPORT_FILE") + EXPORTED_ATTESTATIONS=$(jq '.data[0].signed_attestations | length' "$EXPORT_FILE" 2>/dev/null || echo "0") + log_info "Exported $EXPORTED_COUNT validator(s) with $EXPORTED_ATTESTATIONS attestation(s)" +else + log_error "Export script failed" + exit 1 +fi + +# Step 5: Run update-anti-slashing-db.sh to transform pubkeys +log_info "Step 5: Running update-anti-slashing-db.sh..." + +UPDATE_SCRIPT="$REPO_ROOT/scripts/edit/vc/update-anti-slashing-db.sh" +SOURCE_LOCK="$TEST_FIXTURES_DIR/source-cluster-lock.json" +TARGET_LOCK="$TEST_FIXTURES_DIR/target-cluster-lock.json" + +# Copy export to a working file that will be modified in place +UPDATED_FILE="$TEST_OUTPUT_DIR/updated-asdb.json" +cp "$EXPORT_FILE" "$UPDATED_FILE" + +log_info "Source pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$SOURCE_LOCK")" +log_info "Target pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$TARGET_LOCK")" + +if "$UPDATE_SCRIPT" "$UPDATED_FILE" "$SOURCE_LOCK" "$TARGET_LOCK"; then + log_info "Update successful!" + log_info "Updated content:" + jq '.' "$UPDATED_FILE" + + # Verify the pubkey was transformed + EXPORTED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$EXPORT_FILE") + UPDATED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$UPDATED_FILE") + + if [ -n "$EXPORTED_PUBKEY" ] && [ -n "$UPDATED_PUBKEY" ]; then + if [ "$EXPORTED_PUBKEY" != "$UPDATED_PUBKEY" ]; then + log_info "Pubkey transformation verified:" + log_info " Before: $EXPORTED_PUBKEY" + log_info " After: $UPDATED_PUBKEY" + else + log_error "Pubkey was NOT transformed - test fixture mismatch!" 
+ exit 1 + fi + else + log_error "No pubkey data in exported file - sample import may have failed" + exit 1 + fi +else + log_error "Update script failed" + exit 1 +fi + +# Step 6: Stop container before import (required by import script) +log_info "Step 6: Stopping vc-nimbus for import..." + +docker compose stop vc-nimbus + +# Step 7: Test import using the actual script +log_info "Step 7: Testing import_asdb.sh script..." + +if VC=vc-nimbus "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$UPDATED_FILE"; then + log_info "Import script successful!" +else + log_error "Import script failed" + exit 1 +fi + +echo "" +log_info "=========================================" +log_info "All tests passed successfully!" +log_info "=========================================" +log_info "" +log_info "Test artifacts in: $TEST_OUTPUT_DIR" +log_info " - exported-asdb.json (original export)" +log_info " - updated-asdb.json (after pubkey transformation)" diff --git a/scripts/edit/vc/test/test_prysm_asdb.sh b/scripts/edit/vc/test/test_prysm_asdb.sh new file mode 100755 index 00000000..4bf834b3 --- /dev/null +++ b/scripts/edit/vc/test/test_prysm_asdb.sh @@ -0,0 +1,275 @@ +#!/usr/bin/env bash + +# Integration test for export/import ASDB scripts with Prysm VC. +# +# This script: +# 1. Starts vc-prysm via docker-compose with test override (no charon dependency) +# 2. Sets up wallet and keystores in the container +# 3. Imports sample slashing protection data (with known pubkey and attestations) +# 4. Calls scripts/edit/vc/export_asdb.sh to export slashing protection +# 5. Runs update-anti-slashing-db.sh to transform pubkeys +# 6. Stops the container +# 7. Calls scripts/edit/vc/import_asdb.sh to import updated slashing protection +# +# Usage: ./scripts/edit/vc/test/test_prysm_asdb.sh + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." 
&& pwd)" +cd "$REPO_ROOT" + +# Test artifacts directories +TEST_OUTPUT_DIR="$SCRIPT_DIR/output" +TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" +TEST_COMPOSE_FILE="$SCRIPT_DIR/docker-compose.test.yml" +TEST_DATA_DIR="$SCRIPT_DIR/data/prysm" +TEST_COMPOSE_FILES="docker-compose.yml:compose-vc.yml:$TEST_COMPOSE_FILE" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } + +cleanup() { + log_info "Cleaning up test resources..." + COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-prysm down 2>/dev/null || true + # Keep TEST_OUTPUT_DIR for inspection + # Clean test data to avoid stale DB locks + rm -rf "$TEST_DATA_DIR" 2>/dev/null || true +} + +trap cleanup EXIT + +# Clean test data directory before starting (remove stale locks) +log_info "Preparing test environment..." +COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-prysm down 2>/dev/null || true +rm -rf "$TEST_DATA_DIR" +mkdir -p "$TEST_DATA_DIR" + +# Copy run.sh into test data directory to satisfy the volume mount from base compose +cp "$REPO_ROOT/prysm/run.sh" "$TEST_DATA_DIR/run.sh" + +# Check prerequisites +log_info "Checking prerequisites..." + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +# Check for test validator keys in fixtures +KEYSTORE_COUNT=$(ls "$TEST_FIXTURES_DIR/validator_keys"/keystore-*.json 2>/dev/null | wc -l | tr -d ' ') +if [ "$KEYSTORE_COUNT" -eq 0 ]; then + log_error "No keystore files found in $TEST_FIXTURES_DIR/validator_keys" + exit 1 +fi +log_info "Found $KEYSTORE_COUNT test keystore file(s)" + +# Verify test fixtures exist +if [ ! -f "$TEST_FIXTURES_DIR/source-cluster-lock.json" ] || [ ! 
-f "$TEST_FIXTURES_DIR/target-cluster-lock.json" ]; then + log_error "Test fixtures not found in $TEST_FIXTURES_DIR" + exit 1 +fi +log_info "Test fixtures verified" + +# Source .env for NETWORK, then override COMPOSE_FILE with test compose +if [ ! -f .env ]; then + log_warn ".env file not found, creating with NETWORK=hoodi" + echo "NETWORK=hoodi" > .env +fi + +source .env +NETWORK="${NETWORK:-hoodi}" + +# Override COMPOSE_FILE after sourcing .env (which may have its own COMPOSE_FILE) +export COMPOSE_FILE="$TEST_COMPOSE_FILES" + +log_info "Using network: $NETWORK" +log_info "Using compose files: $COMPOSE_FILE" + +# Create test output directory +mkdir -p "$TEST_OUTPUT_DIR" + +# Step 1: Start vc-prysm via docker-compose +log_info "Step 1: Starting vc-prysm via docker-compose..." + +docker compose --profile vc-prysm up -d vc-prysm + +sleep 2 + +# Verify container is running +if ! docker compose ps vc-prysm | grep -q Up; then + log_error "Container failed to start. Checking logs:" + docker compose logs vc-prysm 2>&1 || true + exit 1 +fi + +log_info "Container started successfully" + +# Step 2: Set up wallet and keystores (similar to run.sh) +# Note: We use /data/vc/wallet so it's persisted in the test data directory +log_info "Step 2: Setting up wallet and keystores..." 
+ +docker compose exec -T vc-prysm sh -c ' + WALLET_DIR="/data/vc/wallet" + WALLET_PASSWORD="prysm-validator-secret" + + # Create wallet + rm -rf $WALLET_DIR + mkdir -p $WALLET_DIR + echo $WALLET_PASSWORD > /data/vc/wallet-password.txt + + /app/cmd/validator/validator wallet create \ + --accept-terms-of-use \ + --wallet-password-file=/data/vc/wallet-password.txt \ + --keymanager-kind=direct \ + --wallet-dir="$WALLET_DIR" + + # Import keys + tmpkeys="/home/validator_keys/tmpkeys" + mkdir -p ${tmpkeys} + + for f in /home/charon/validator_keys/keystore-*.json; do + echo "Importing key ${f}" + + # Copy keystore file to tmpkeys/ directory + cp "${f}" "${tmpkeys}" + + # Import keystore with password + /app/cmd/validator/validator accounts import \ + --accept-terms-of-use=true \ + --wallet-dir="$WALLET_DIR" \ + --keys-dir="${tmpkeys}" \ + --account-password-file="${f//json/txt}" \ + --wallet-password-file=/data/vc/wallet-password.txt + + # Delete tmpkeys/keystore-*.json file + filename="$(basename ${f})" + rm "${tmpkeys}/${filename}" + done + + rm -r ${tmpkeys} + + # Initialize the validator DB by starting and immediately stopping the validator + # This creates the necessary database structure for slashing protection import + echo "Initializing validator database..." + timeout 3 /app/cmd/validator/validator \ + --wallet-dir="$WALLET_DIR" \ + --accept-terms-of-use=true \ + --datadir="/data/vc" \ + --wallet-password-file="/data/vc/wallet-password.txt" \ + --beacon-rpc-provider="http://localhost:3600" \ + --hoodi || true + + echo "Done setting up wallet and initializing DB" +' + +log_info "Wallet and keystores set up successfully" + +# Step 3: Stop container and import sample slashing protection data +log_info "Step 3: Importing sample slashing protection data..." 
+ +docker compose stop vc-prysm + +SAMPLE_ASDB="$TEST_FIXTURES_DIR/sample-slashing-protection.json" + +if VC=vc-prysm "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$SAMPLE_ASDB"; then + log_info "Sample data imported successfully!" +else + log_error "Failed to import sample data" + exit 1 +fi + +# Start container again for export +docker compose --profile vc-prysm up -d vc-prysm +sleep 2 + +# Step 4: Test export using the actual script +log_info "Step 4: Testing export_asdb.sh script..." + +EXPORT_FILE="$TEST_OUTPUT_DIR/exported-asdb.json" + +if VC=vc-prysm "$REPO_ROOT/scripts/edit/vc/export_asdb.sh" --output-file "$EXPORT_FILE"; then + log_info "Export script successful!" + log_info "Exported content:" + jq '.' "$EXPORT_FILE" + + # Verify exported data matches what we imported + EXPORTED_COUNT=$(jq '.data | length' "$EXPORT_FILE") + EXPORTED_ATTESTATIONS=$(jq '.data[0].signed_attestations | length' "$EXPORT_FILE" 2>/dev/null || echo "0") + log_info "Exported $EXPORTED_COUNT validator(s) with $EXPORTED_ATTESTATIONS attestation(s)" +else + log_error "Export script failed" + exit 1 +fi + +# Step 5: Run update-anti-slashing-db.sh to transform pubkeys +log_info "Step 5: Running update-anti-slashing-db.sh..." + +UPDATE_SCRIPT="$REPO_ROOT/scripts/edit/vc/update-anti-slashing-db.sh" +SOURCE_LOCK="$TEST_FIXTURES_DIR/source-cluster-lock.json" +TARGET_LOCK="$TEST_FIXTURES_DIR/target-cluster-lock.json" + +# Copy export to a working file that will be modified in place +UPDATED_FILE="$TEST_OUTPUT_DIR/updated-asdb.json" +cp "$EXPORT_FILE" "$UPDATED_FILE" + +log_info "Source pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$SOURCE_LOCK")" +log_info "Target pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$TARGET_LOCK")" + +if "$UPDATE_SCRIPT" "$UPDATED_FILE" "$SOURCE_LOCK" "$TARGET_LOCK"; then + log_info "Update successful!" + log_info "Updated content:" + jq '.' 
"$UPDATED_FILE" + + # Verify the pubkey was transformed + EXPORTED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$EXPORT_FILE") + UPDATED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$UPDATED_FILE") + + if [ -n "$EXPORTED_PUBKEY" ] && [ -n "$UPDATED_PUBKEY" ]; then + if [ "$EXPORTED_PUBKEY" != "$UPDATED_PUBKEY" ]; then + log_info "Pubkey transformation verified:" + log_info " Before: $EXPORTED_PUBKEY" + log_info " After: $UPDATED_PUBKEY" + else + log_error "Pubkey was NOT transformed - test fixture mismatch!" + exit 1 + fi + else + log_error "No pubkey data in exported file - sample import may have failed" + exit 1 + fi +else + log_error "Update script failed" + exit 1 +fi + +# Step 6: Stop container before import (required by import script) +log_info "Step 6: Stopping vc-prysm for import..." + +docker compose stop vc-prysm + +# Step 7: Test import using the actual script +log_info "Step 7: Testing import_asdb.sh script..." + +if VC=vc-prysm "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$UPDATED_FILE"; then + log_info "Import script successful!" +else + log_error "Import script failed" + exit 1 +fi + +echo "" +log_info "=========================================" +log_info "All tests passed successfully!" +log_info "=========================================" +log_info "" +log_info "Test artifacts in: $TEST_OUTPUT_DIR" +log_info " - exported-asdb.json (original export)" +log_info " - updated-asdb.json (after pubkey transformation)" diff --git a/scripts/edit/vc/test/test_teku_asdb.sh b/scripts/edit/vc/test/test_teku_asdb.sh new file mode 100755 index 00000000..4b4048eb --- /dev/null +++ b/scripts/edit/vc/test/test_teku_asdb.sh @@ -0,0 +1,211 @@ +#!/usr/bin/env bash + +# Integration test for export/import ASDB scripts with Teku VC. +# +# This script: +# 1. Starts vc-teku via docker-compose with test override (no charon dependency) +# 2. Imports sample slashing protection data (with known pubkey and attestations) +# 3. 
Calls scripts/edit/vc/export_asdb.sh to export slashing protection +# 4. Runs update-anti-slashing-db.sh to transform pubkeys +# 5. Stops the container +# 6. Calls scripts/edit/vc/import_asdb.sh to import updated slashing protection +# +# Usage: ./scripts/edit/vc/test/test_teku_asdb.sh + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." && pwd)" +cd "$REPO_ROOT" + +# Test artifacts directories +TEST_OUTPUT_DIR="$SCRIPT_DIR/output" +TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" +TEST_COMPOSE_FILE="$SCRIPT_DIR/docker-compose.test.yml" +TEST_DATA_DIR="$SCRIPT_DIR/data/teku" +TEST_COMPOSE_FILES="docker-compose.yml:compose-vc.yml:$TEST_COMPOSE_FILE" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } + +cleanup() { + log_info "Cleaning up test resources..." + COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-teku down 2>/dev/null || true + # Keep TEST_OUTPUT_DIR for inspection + # Clean test data to avoid stale DB locks + rm -rf "$TEST_DATA_DIR" 2>/dev/null || true +} + +trap cleanup EXIT + +# Clean test data directory before starting (remove stale locks) +log_info "Preparing test environment..." +COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-teku down 2>/dev/null || true +rm -rf "$TEST_DATA_DIR" +mkdir -p "$TEST_DATA_DIR" + +# Check prerequisites +log_info "Checking prerequisites..." + +if ! 
docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +# Check for test validator keys in fixtures +KEYSTORE_COUNT=$(ls "$TEST_FIXTURES_DIR/validator_keys"/keystore-*.json 2>/dev/null | wc -l | tr -d ' ') +if [ "$KEYSTORE_COUNT" -eq 0 ]; then + log_error "No keystore files found in $TEST_FIXTURES_DIR/validator_keys" + exit 1 +fi +log_info "Found $KEYSTORE_COUNT test keystore file(s)" + +# Verify test fixtures exist +if [ ! -f "$TEST_FIXTURES_DIR/source-cluster-lock.json" ] || [ ! -f "$TEST_FIXTURES_DIR/target-cluster-lock.json" ]; then + log_error "Test fixtures not found in $TEST_FIXTURES_DIR" + exit 1 +fi +log_info "Test fixtures verified" + +# Source .env for NETWORK, then override COMPOSE_FILE with test compose +if [ ! -f .env ]; then + log_warn ".env file not found, creating with NETWORK=hoodi" + echo "NETWORK=hoodi" > .env +fi + +source .env +NETWORK="${NETWORK:-hoodi}" + +# Override COMPOSE_FILE after sourcing .env (which may have its own COMPOSE_FILE) +export COMPOSE_FILE="$TEST_COMPOSE_FILES" + +log_info "Using network: $NETWORK" +log_info "Using compose files: $COMPOSE_FILE" + +# Create test output directory +mkdir -p "$TEST_OUTPUT_DIR" + +# Step 1: Start vc-teku via docker-compose +log_info "Step 1: Starting vc-teku via docker-compose..." + +docker compose --profile vc-teku up -d vc-teku + +sleep 2 + +# Verify container is running +if ! docker compose ps vc-teku | grep -q Up; then + log_error "Container failed to start. Checking logs:" + docker compose logs vc-teku 2>&1 || true + exit 1 +fi + +log_info "Container started successfully" + +# Step 2: Stop container and import sample slashing protection data +log_info "Step 2: Importing sample slashing protection data..." + +docker compose stop vc-teku + +SAMPLE_ASDB="$TEST_FIXTURES_DIR/sample-slashing-protection.json" + +if VC=vc-teku "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$SAMPLE_ASDB"; then + log_info "Sample data imported successfully!" 
+else + log_error "Failed to import sample data" + exit 1 +fi + +# Start container again for export +docker compose --profile vc-teku up -d vc-teku +sleep 2 + +# Step 3: Test export using the actual script +log_info "Step 3: Testing export_asdb.sh script..." + +EXPORT_FILE="$TEST_OUTPUT_DIR/exported-asdb.json" + +if VC=vc-teku "$REPO_ROOT/scripts/edit/vc/export_asdb.sh" --output-file "$EXPORT_FILE"; then + log_info "Export script successful!" + log_info "Exported content:" + jq '.' "$EXPORT_FILE" + + # Verify exported data matches what we imported + EXPORTED_COUNT=$(jq '.data | length' "$EXPORT_FILE") + EXPORTED_ATTESTATIONS=$(jq '.data[0].signed_attestations | length' "$EXPORT_FILE" 2>/dev/null || echo "0") + log_info "Exported $EXPORTED_COUNT validator(s) with $EXPORTED_ATTESTATIONS attestation(s)" +else + log_error "Export script failed" + exit 1 +fi + +# Step 4: Run update-anti-slashing-db.sh to transform pubkeys +log_info "Step 4: Running update-anti-slashing-db.sh..." + +UPDATE_SCRIPT="$REPO_ROOT/scripts/edit/vc/update-anti-slashing-db.sh" +SOURCE_LOCK="$TEST_FIXTURES_DIR/source-cluster-lock.json" +TARGET_LOCK="$TEST_FIXTURES_DIR/target-cluster-lock.json" + +# Copy export to a working file that will be modified in place +UPDATED_FILE="$TEST_OUTPUT_DIR/updated-asdb.json" +cp "$EXPORT_FILE" "$UPDATED_FILE" + +log_info "Source pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$SOURCE_LOCK")" +log_info "Target pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$TARGET_LOCK")" + +if "$UPDATE_SCRIPT" "$UPDATED_FILE" "$SOURCE_LOCK" "$TARGET_LOCK"; then + log_info "Update successful!" + log_info "Updated content:" + jq '.' 
"$UPDATED_FILE" + + # Verify the pubkey was transformed + EXPORTED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$EXPORT_FILE") + UPDATED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$UPDATED_FILE") + + if [ -n "$EXPORTED_PUBKEY" ] && [ -n "$UPDATED_PUBKEY" ]; then + if [ "$EXPORTED_PUBKEY" != "$UPDATED_PUBKEY" ]; then + log_info "Pubkey transformation verified:" + log_info " Before: $EXPORTED_PUBKEY" + log_info " After: $UPDATED_PUBKEY" + else + log_error "Pubkey was NOT transformed - test fixture mismatch!" + exit 1 + fi + else + log_error "No pubkey data in exported file - sample import may have failed" + exit 1 + fi +else + log_error "Update script failed" + exit 1 +fi + +# Step 5: Stop container before import (required by import script) +log_info "Step 5: Stopping vc-teku for import..." + +docker compose stop vc-teku + +# Step 6: Test import using the actual script +log_info "Step 6: Testing import_asdb.sh script..." + +if VC=vc-teku "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$UPDATED_FILE"; then + log_info "Import script successful!" +else + log_error "Import script failed" + exit 1 +fi + +echo "" +log_info "=========================================" +log_info "All tests passed successfully!" +log_info "=========================================" +log_info "" +log_info "Test artifacts in: $TEST_OUTPUT_DIR" +log_info " - exported-asdb.json (original export)" +log_info " - updated-asdb.json (after pubkey transformation)" diff --git a/scripts/edit/vc/update-anti-slashing-db.sh b/scripts/edit/vc/update-anti-slashing-db.sh new file mode 100755 index 00000000..688002a9 --- /dev/null +++ b/scripts/edit/vc/update-anti-slashing-db.sh @@ -0,0 +1,233 @@ +#!/usr/bin/env bash + +# Script to update EIP-3076 anti-slashing DB by replacing pubkey values +# based on lookup in source and target cluster-lock.json files. 
+# +# Usage: update-anti-slashing-db.sh <eip3076-file> <source-cluster-lock> <target-cluster-lock> +# +# Arguments: +# eip3076-file - Path to EIP-3076 JSON file to update in place +# source-cluster-lock - Path to source cluster-lock.json (original) +# target-cluster-lock - Path to target cluster-lock.json (new, from output/) +# +# The script traverses the EIP-3076 JSON file and finds all "pubkey" values in the +# data array. For each pubkey, it looks up the value in the source cluster-lock.json's +# distributed_validators[].public_shares[] arrays, remembers the indices, and then +# replaces the pubkey with the corresponding value from the target cluster-lock.json +# at the same indices. + +set -euo pipefail + +# Check if jq is installed +if ! command -v jq &> /dev/null; then + echo "Error: jq is required but not installed. Please install jq first." >&2 + exit 1 +fi + +# Validate arguments +if [ "$#" -ne 3 ]; then + echo "Usage: $0 <eip3076-file> <source-cluster-lock> <target-cluster-lock>" >&2 + exit 1 +fi + +EIP3076_FILE="$1" +SOURCE_LOCK="$2" +TARGET_LOCK="$3" + +# Validate files exist +if [ ! -f "$EIP3076_FILE" ]; then + echo "Error: EIP-3076 file not found: $EIP3076_FILE" >&2 + exit 1 +fi + +if [ ! -f "$SOURCE_LOCK" ]; then + echo "Error: Source cluster-lock file not found: $SOURCE_LOCK" >&2 + exit 1 +fi + +if [ ! -f "$TARGET_LOCK" ]; then + echo "Error: Target cluster-lock file not found: $TARGET_LOCK" >&2 + exit 1 +fi + +# Validate all files contain valid JSON +if ! jq empty "$EIP3076_FILE" 2>/dev/null; then + echo "Error: EIP-3076 file contains invalid JSON: $EIP3076_FILE" >&2 + exit 1 +fi + +if ! jq empty "$SOURCE_LOCK" 2>/dev/null; then + echo "Error: Source cluster-lock file contains invalid JSON: $SOURCE_LOCK" >&2 + exit 1 +fi + +if ! 
jq empty "$TARGET_LOCK" 2>/dev/null; then + echo "Error: Target cluster-lock file contains invalid JSON: $TARGET_LOCK" >&2 + exit 1 +fi + +# Create temporary files for processing +TEMP_FILE=$(mktemp) +trap 'rm -f "$TEMP_FILE" "${TEMP_FILE}.tmp"' EXIT INT TERM + +# Function to find pubkey in cluster-lock and return validator_index,share_index +# Returns empty string if not found +find_pubkey_indices() { + local pubkey="$1" + local cluster_lock_file="$2" + + # Search through distributed_validators and public_shares + jq -r --arg pubkey "$pubkey" ' + .distributed_validators as $validators | + foreach range(0; $validators | length) as $v_idx ( + null; + . ; + $validators[$v_idx].public_shares as $shares | + foreach range(0; $shares | length) as $s_idx ( + null; + . ; + if $shares[$s_idx] == $pubkey then + "\($v_idx),\($s_idx)" + else + empty + end + ) + ) | select(. != null) + ' "$cluster_lock_file" | head -n 1 +} + +# Function to get pubkey from cluster-lock at specific indices +get_pubkey_at_indices() { + local validator_idx="$1" + local share_idx="$2" + local cluster_lock_file="$3" + + jq -r --argjson v_idx "$validator_idx" --argjson s_idx "$share_idx" ' + .distributed_validators[$v_idx].public_shares[$s_idx] + ' "$cluster_lock_file" +} + +echo "Reading EIP-3076 file: $EIP3076_FILE" +echo "Source cluster-lock: $SOURCE_LOCK" +echo "Target cluster-lock: $TARGET_LOCK" +echo "" + +# Validate cluster-lock structure +source_validators=$(jq '.distributed_validators | length' "$SOURCE_LOCK") +target_validators=$(jq '.distributed_validators | length' "$TARGET_LOCK") + +# Validate that we got valid numeric values +if [ -z "$source_validators" ] || [ "$source_validators" = "null" ]; then + echo "Error: Source cluster-lock missing 'distributed_validators' field" >&2 + exit 1 +fi + +if [ -z "$target_validators" ] || [ "$target_validators" = "null" ]; then + echo "Error: Target cluster-lock missing 'distributed_validators' field" >&2 + exit 1 +fi + +echo "Source cluster-lock has 
$source_validators validators" +echo "Target cluster-lock has $target_validators validators" + +if [ "$source_validators" -eq 0 ]; then + echo "Error: Source cluster-lock has no validators" >&2 + exit 1 +fi + +if [ "$target_validators" -eq 0 ]; then + echo "Error: Target cluster-lock has no validators" >&2 + exit 1 +fi + +# Verify that target has at least as many validators as source +if [ "$target_validators" -lt "$source_validators" ]; then + echo "Error: Target cluster-lock has fewer validators ($target_validators) than source ($source_validators)" >&2 + echo " This may result in missing pubkey replacements" >&2 + exit 1 +fi + +echo "" + +# Get all unique pubkeys from the data array +# Note: The same pubkey may appear multiple times, so we deduplicate with sort -u +pubkeys=$(jq -r '.data[].pubkey' "$EIP3076_FILE" | sort -u) + +if [ -z "$pubkeys" ]; then + echo "Warning: No pubkeys found in EIP-3076 file" >&2 + exit 0 +fi + +pubkey_count=$(grep -c '^' <<< "$pubkeys") +echo "Found $pubkey_count unique pubkey(s) to process" +echo "" + +# Copy original file to temp file, we'll modify it in place +cp "$EIP3076_FILE" "$TEMP_FILE" + +# Process each pubkey +while IFS= read -r old_pubkey; do + echo "Processing pubkey: $old_pubkey" + + # Find indices in source cluster-lock + indices=$(find_pubkey_indices "$old_pubkey" "$SOURCE_LOCK") + + if [ -z "$indices" ]; then + echo " Error: Pubkey not found in source cluster-lock.json" >&2 + echo " Cannot proceed without mapping for all pubkeys" >&2 + exit 1 + fi + + # Split indices + validator_idx=$(echo "$indices" | cut -d',' -f1) + share_idx=$(echo "$indices" | cut -d',' -f2) + + echo " Found at distributed_validators[$validator_idx].public_shares[$share_idx]" + + # Verify target has sufficient validators + if [ "$validator_idx" -ge "$target_validators" ]; then + echo " Error: Target cluster-lock.json doesn't have validator at index $validator_idx" >&2 + echo " Target has only $target_validators validators" >&2 + exit 1 + fi + + 
# Verify target validator has sufficient public_shares + target_share_count=$(jq --argjson v_idx "$validator_idx" '.distributed_validators[$v_idx].public_shares | length' "$TARGET_LOCK") + if [ "$share_idx" -ge "$target_share_count" ]; then + echo " Error: Target cluster-lock.json validator[$validator_idx] doesn't have share at index $share_idx" >&2 + echo " Target validator has only $target_share_count shares" >&2 + exit 1 + fi + + # Get corresponding pubkey from target cluster-lock + new_pubkey=$(get_pubkey_at_indices "$validator_idx" "$share_idx" "$TARGET_LOCK") + + if [ -z "$new_pubkey" ] || [ "$new_pubkey" = "null" ]; then + echo " Error: Could not find pubkey at same indices in target cluster-lock.json" >&2 + exit 1 + fi + + echo " Replacing with: $new_pubkey" + + # Replace the pubkey in the JSON data + # Note: The same pubkey may appear multiple times in the data array (one per validator). + # This filter will update ALL occurrences of the old pubkey with the new one. + # We modify the temp file in place using jq's output redirection + jq --arg old "$old_pubkey" --arg new "$new_pubkey" ' + (.data[] | select(.pubkey == $old) | .pubkey) |= $new + ' "$TEMP_FILE" > "${TEMP_FILE}.tmp" && mv "${TEMP_FILE}.tmp" "$TEMP_FILE" + + echo " Done" + echo "" +done <<< "$pubkeys" + +# Validate the output is valid JSON +if ! 
jq empty "$TEMP_FILE" 2>/dev/null; then + echo "Error: Generated invalid JSON" >&2 + exit 1 +fi + +# Replace original file with updated version +cp "$TEMP_FILE" "$EIP3076_FILE" + +echo "Successfully updated $EIP3076_FILE" From ef87c0940c997b3e3e68e1ad4acee9ed50fbe72c Mon Sep 17 00:00:00 2001 From: Andrei Smirnov Date: Wed, 11 Feb 2026 11:42:46 +0300 Subject: [PATCH 02/12] Updated gitignore --- .gitignore | 2 -- scripts/edit/vc/test/.gitignore | 5 ----- 2 files changed, 7 deletions(-) diff --git a/.gitignore b/.gitignore index f1b9447c..be6f45cf 100644 --- a/.gitignore +++ b/.gitignore @@ -11,7 +11,5 @@ cluster-lock.json data/ .idea .charon -!scripts/edit/replace-operator/test/fixtures/.charon/ -!scripts/edit/replace-operator/test/fixtures/.charon/* prometheus/prometheus.yml commit-boost/config.toml diff --git a/scripts/edit/vc/test/.gitignore b/scripts/edit/vc/test/.gitignore index e1c0f7a9..0f8c84f0 100644 --- a/scripts/edit/vc/test/.gitignore +++ b/scripts/edit/vc/test/.gitignore @@ -2,8 +2,3 @@ output/ data/ *.tmp - -# Keep fixtures (override root .gitignore rules) -!fixtures/ -!fixtures/validator_keys/ -!fixtures/validator_keys/* From aaf299b9bea7e2b411e92eea333b4b765c043d0e Mon Sep 17 00:00:00 2001 From: Andrei Smirnov Date: Mon, 16 Feb 2026 15:20:49 +0300 Subject: [PATCH 03/12] Added recreate edit script --- scripts/edit/recreate-private-keys/README.md | 59 +++ .../recreate-private-keys.sh | 315 ++++++++++++++ .../edit/recreate-private-keys/test/README.md | 26 ++ .../test/fixtures/.env.test | 3 + .../test/test_recreate_private_keys.sh | 387 ++++++++++++++++++ scripts/edit/replace-operator/README.md | 1 - .../replace-operator/remaining-operator.sh | 73 ++-- .../test/test_replace_operator.sh | 27 +- 8 files changed, 821 insertions(+), 70 deletions(-) create mode 100644 scripts/edit/recreate-private-keys/README.md create mode 100755 scripts/edit/recreate-private-keys/recreate-private-keys.sh create mode 100644 scripts/edit/recreate-private-keys/test/README.md 
create mode 100644 scripts/edit/recreate-private-keys/test/fixtures/.env.test create mode 100755 scripts/edit/recreate-private-keys/test/test_recreate_private_keys.sh diff --git a/scripts/edit/recreate-private-keys/README.md b/scripts/edit/recreate-private-keys/README.md new file mode 100644 index 00000000..cc8b6a6e --- /dev/null +++ b/scripts/edit/recreate-private-keys/README.md @@ -0,0 +1,59 @@ +# Recreate-Private-Keys Script + +Script to automate the [recreate-private-keys ceremony](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/recreate-private-keys) for Charon distributed validators. + +## Overview + +This script helps operators recreate validator private key shares while keeping the same validator public keys. This is useful for: + +- **Security concerns**: If private key shares may have been compromised +- **Key rotation**: As part of regular security practices +- **Recovery**: After a security incident to refresh key material + +**Important**: This operation maintains the same validator public keys, so validators remain registered on the beacon chain without any changes. Only the underlying private key shares held by operators are refreshed. + +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set +- `.charon` directory with `cluster-lock.json` and `validator_keys` +- Docker running +- **All operators must participate in the ceremony** + +## Usage + +All operators must run this script simultaneously: + +```bash +./scripts/edit/recreate-private-keys/recreate-private-keys.sh +``` + +The script will: +1. Export the anti-slashing database from the validator client +2. Run the recreate-private-keys ceremony (P2P coordinated with all operators) +3. Update the ASDB pubkeys to match new key shares +4. Stop charon and VC containers +5. Backup current `.charon` directory to `./backups/` +6. Move new keys from `./output/` to `.charon/` +7. Import the updated anti-slashing database +8. 
Restart containers + +## Options + +- `--dry-run` - Preview without executing + +## Current Limitations + +- The new cluster configuration will not be reflected on the Launchpad +- The new cluster will have a new cluster hash (different observability identifier) +- All operators must participate; no partial participation option +- All operators must have their current validator private key shares + +## Testing + +See [test/README.md](test/README.md) for integration tests. + +## Related + +- [Replace-Operator Workflow](../replace-operator/README.md) +- [Anti-Slashing DB Scripts](../vc/README.md) +- [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/recreate-private-keys) diff --git a/scripts/edit/recreate-private-keys/recreate-private-keys.sh b/scripts/edit/recreate-private-keys/recreate-private-keys.sh new file mode 100755 index 00000000..77db0a0c --- /dev/null +++ b/scripts/edit/recreate-private-keys/recreate-private-keys.sh @@ -0,0 +1,315 @@ +#!/usr/bin/env bash + +# Recreate-Private-Keys Script +# +# This script automates the recreate-private-keys ceremony for Charon +# distributed validators. This is used to regenerate validator private key +# shares while keeping the same validator public keys. +# +# Reference: https://docs.obol.org/next/advanced-and-troubleshooting/advanced/recreate-private-keys +# +# IMPORTANT: This is a CEREMONY - ALL operators in the cluster must run this +# script simultaneously. The ceremony coordinates between all operators to +# generate new private key shares. +# +# Use cases: +# - Security concerns: If private key shares may have been compromised +# - Key rotation: As part of regular security practices +# - Recovery: After a security incident to refresh key material +# +# The workflow: +# 1. Export the current anti-slashing database +# 2. Run the recreate-private-keys ceremony (all operators simultaneously) +# 3. Update the exported ASDB with new pubkeys +# 4. Stop containers +# 5. 
Backup and replace .charon directory +# 6. Import the updated ASDB +# 7. Restart containers +# +# Prerequisites: +# - .env file with NETWORK and VC variables set +# - .charon directory with cluster-lock.json and validator_keys +# - Docker and docker compose installed and running +# - All operators must participate in the ceremony +# +# Usage: +# ./scripts/edit/recreate-private-keys/recreate-private-keys.sh [OPTIONS] +# +# Options: +# --dry-run Show what would be done without executing +# -h, --help Show this help message + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)" +cd "$REPO_ROOT" + +# Default values +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" +ASDB_EXPORT_DIR="./asdb-export" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/recreate-private-keys/recreate-private-keys.sh [OPTIONS] + +Recreates validator private key shares for the cluster. This is a CEREMONY +that ALL operators must run simultaneously. + +Use cases: + - Security concerns: If private key shares may have been compromised + - Key rotation: As part of regular security practices + - Recovery: After a security incident to refresh key material + +NOTE: This operation maintains the same validator public keys. Only the +underlying private key shares held by operators are refreshed. 
+ +Options: + --dry-run Show what would be done without executing + -h, --help Show this help message + +Example: + ./scripts/edit/recreate-private-keys/recreate-private-keys.sh + +Prerequisites: + - .env file with NETWORK and VC variables set + - .charon directory with cluster-lock.json and validator_keys + - Docker and docker compose installed and running + - All operators must participate in the ceremony +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Recreate Private Keys Workflow ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +source .env + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! -d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! -f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if [ ! -d .charon/validator_keys ]; then + log_error ".charon/validator_keys directory not found" + log_info "All operators must have their current validator private key shares." + exit 1 +fi + +if ! 
docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +echo "" + +# Step 1: Export anti-slashing database +log_step "Step 1: Exporting anti-slashing database..." + +# Check VC container is running (skip check in dry-run mode) +if [ "$DRY_RUN" = false ]; then + if ! docker compose ps "$VC" 2>/dev/null | grep -q Up; then + log_error "VC container ($VC) is not running. Start it first:" + log_error " docker compose up -d $VC" + exit 1 + fi +else + log_warn "Would check that $VC container is running" +fi + +mkdir -p "$ASDB_EXPORT_DIR" + +run_cmd VC="$VC" "$SCRIPT_DIR/../vc/export_asdb.sh" \ + --output-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database exported to $ASDB_EXPORT_DIR/slashing-protection.json" + +echo "" + +# Step 2: Run ceremony +log_step "Step 2: Running recreate-private-keys ceremony..." + +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL operators must run this ceremony simultaneously ║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit recreate-private-keys" +log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all operators to connect..." 
+echo "" + +if [ "$DRY_RUN" = false ]; then + docker run --rm -it \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + alpha edit recreate-private-keys \ + --output-dir=/opt/charon/output + + # Verify ceremony output + if [ -f "$OUTPUT_DIR/cluster-lock.json" ]; then + log_info "Ceremony completed successfully!" + log_info "New cluster-lock.json generated in $OUTPUT_DIR/" + else + log_error "Ceremony may have failed - no cluster-lock.json in $OUTPUT_DIR/" + exit 1 + fi +else + echo " [DRY-RUN] docker run --rm -it ... charon alpha edit recreate-private-keys --output-dir=output" +fi + +echo "" + +# Step 3: Update ASDB pubkeys +log_step "Step 3: Updating anti-slashing database pubkeys..." + +run_cmd "$SCRIPT_DIR/../vc/update-anti-slashing-db.sh" \ + "$ASDB_EXPORT_DIR/slashing-protection.json" \ + ".charon/cluster-lock.json" \ + "$OUTPUT_DIR/cluster-lock.json" + +log_info "Anti-slashing database pubkeys updated" + +echo "" + +# Step 4: Stop containers +log_step "Step 4: Stopping containers..." + +run_cmd docker compose stop "$VC" charon + +log_info "Containers stopped" + +echo "" + +# Step 5: Backup and replace .charon +log_step "Step 5: Backing up and replacing .charon directory..." + +TIMESTAMP=$(date +%Y%m%d_%H%M%S) +mkdir -p "$BACKUP_DIR" + +run_cmd mv .charon "$BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info "Current .charon backed up to $BACKUP_DIR/.charon-backup.$TIMESTAMP" + +run_cmd mv "$OUTPUT_DIR" .charon +log_info "New keys installed to .charon/" + +echo "" + +# Step 6: Import updated ASDB +log_step "Step 6: Importing updated anti-slashing database..." + +run_cmd VC="$VC" "$SCRIPT_DIR/../vc/import_asdb.sh" \ + --input-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database imported" + +echo "" + +# Step 7: Restart containers +log_step "Step 7: Restarting containers..." 
+ +run_cmd docker compose up -d charon "$VC" + +log_info "Containers restarted" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Recreate Private Keys Workflow COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Old .charon backed up to: $BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info " - New keys installed in: .charon/" +log_info " - Anti-slashing database updated and imported" +log_info " - Containers restarted: charon, $VC" +echo "" +log_info "Next steps:" +log_info " 1. Check charon logs: docker compose logs -f charon" +log_info " 2. Verify all nodes connected and healthy" +log_info " 3. Verify cluster is producing attestations" +log_info " 4. Check no signature verification errors in logs" +echo "" +log_warn "Keep the backup until you've verified normal operation for several epochs." +echo "" diff --git a/scripts/edit/recreate-private-keys/test/README.md b/scripts/edit/recreate-private-keys/test/README.md new file mode 100644 index 00000000..8576b933 --- /dev/null +++ b/scripts/edit/recreate-private-keys/test/README.md @@ -0,0 +1,26 @@ +# Recreate-Private-Keys Integration Tests + +Integration tests for `recreate-private-keys.sh` script. + +## Overview + +These tests validate the recreate-private-keys script without running actual Docker containers or the ceremony. The focus is on: + +- **Argument parsing and validation** +- **Prerequisite checks** (`.env`, `.charon/`, cluster-lock, validator_keys) +- **Dry-run output** for all workflow steps +- **Error messages** for missing/invalid inputs + +## Running Tests + +```bash +./scripts/edit/recreate-private-keys/test/test_recreate_private_keys.sh +``` + +Expected output: All tests should pass in under 5 seconds. 
+ +## What's NOT Tested + +- **Actual Docker operations** - Docker commands are mocked +- **Charon ceremony** - Would require actual cluster coordination with all operators +- **Container orchestration** - Would require running services diff --git a/scripts/edit/recreate-private-keys/test/fixtures/.env.test b/scripts/edit/recreate-private-keys/test/fixtures/.env.test new file mode 100644 index 00000000..81298829 --- /dev/null +++ b/scripts/edit/recreate-private-keys/test/fixtures/.env.test @@ -0,0 +1,3 @@ +# Test environment for recreate-private-keys tests +NETWORK=hoodi +VC=vc-lodestar diff --git a/scripts/edit/recreate-private-keys/test/test_recreate_private_keys.sh b/scripts/edit/recreate-private-keys/test/test_recreate_private_keys.sh new file mode 100755 index 00000000..9b28c28e --- /dev/null +++ b/scripts/edit/recreate-private-keys/test/test_recreate_private_keys.sh @@ -0,0 +1,387 @@ +#!/usr/bin/env bash + +# Integration test for recreate-private-keys.sh script +# +# This test validates: +# - Argument parsing and validation +# - Prerequisite checks (.env, .charon/, cluster-lock) +# - Dry-run output for all workflow steps +# - Error messages for missing inputs +# +# No actual Docker containers are run - all Docker commands are mocked. +# +# Usage: ./scripts/edit/recreate-private-keys/test/test_recreate_private_keys.sh + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." 
&& pwd)" + +# Test directories +TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" +TEST_DATA_DIR="$SCRIPT_DIR/data" + +# Script under test +RECREATE_SCRIPT="$REPO_ROOT/scripts/edit/recreate-private-keys/recreate-private-keys.sh" + +# Test counters +TESTS_RUN=0 +TESTS_PASSED=0 +TESTS_FAILED=0 + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_test() { echo -e "${BLUE}[TEST]${NC} $1"; } + +# Create mock docker script that logs calls and returns success +setup_mock_docker() { + local mock_bin_dir="$TEST_DATA_DIR/mock-bin" + mkdir -p "$mock_bin_dir" + + # Create mock docker command + cat > "$mock_bin_dir/docker" << 'MOCK_DOCKER' +#!/usr/bin/env bash +# Mock docker for testing - logs all calls +echo "[MOCK-DOCKER] $*" >> "${MOCK_DOCKER_LOG:-/dev/null}" + +# Handle specific commands +case "$*" in + "info") + echo "Mock Docker info" + exit 0 + ;; + "compose"*"stop"*) + echo "[MOCK] Stopping containers" + exit 0 + ;; + "compose"*"up"*) + echo "[MOCK] Starting containers" + exit 0 + ;; + *"charon"*"enr"*) + # Return a mock ENR + echo "enr:-HW4QMockENRForTesting12345" + exit 0 + ;; + *"charon"*"edit recreate-private-keys"*) + echo "[MOCK] Running recreate-private-keys" + exit 0 + ;; + *) + echo "[MOCK] Unhandled docker command: $*" + exit 0 + ;; +esac +MOCK_DOCKER + chmod +x "$mock_bin_dir/docker" + + # Export PATH with mock first + export PATH="$mock_bin_dir:$PATH" + export MOCK_DOCKER_LOG="$TEST_DATA_DIR/docker-calls.log" +} + +# Setup test working directory with fixtures +# Note: Scripts always cd to REPO_ROOT, so we must put test fixtures there +# We backup any existing files and restore them on cleanup +setup_test_env() { + rm -rf "$TEST_DATA_DIR" + mkdir -p "$TEST_DATA_DIR/backup" + + # Backup existing files in REPO_ROOT if they exist + if [ -f 
"$REPO_ROOT/.env" ]; then + cp "$REPO_ROOT/.env" "$TEST_DATA_DIR/backup/.env.bak" + fi + if [ -d "$REPO_ROOT/.charon" ]; then + # Only backup key files, not the whole directory + mkdir -p "$TEST_DATA_DIR/backup/.charon" + [ -f "$REPO_ROOT/.charon/cluster-lock.json" ] && \ + cp "$REPO_ROOT/.charon/cluster-lock.json" "$TEST_DATA_DIR/backup/.charon/" + [ -f "$REPO_ROOT/.charon/charon-enr-private-key" ] && \ + cp "$REPO_ROOT/.charon/charon-enr-private-key" "$TEST_DATA_DIR/backup/.charon/" + fi + + # Install test fixtures to REPO_ROOT + cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" + mkdir -p "$REPO_ROOT/.charon" + cp "$TEST_FIXTURES_DIR/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" + cp "$TEST_FIXTURES_DIR/.charon/charon-enr-private-key" "$REPO_ROOT/.charon/" + + # Create required directories + mkdir -p "$REPO_ROOT/backups" + + # Setup mock docker + setup_mock_docker +} + +restore_repo_state() { + # Restore backed up files + if [ -f "$TEST_DATA_DIR/backup/.env.bak" ]; then + cp "$TEST_DATA_DIR/backup/.env.bak" "$REPO_ROOT/.env" + else + rm -f "$REPO_ROOT/.env" + fi + + if [ -d "$TEST_DATA_DIR/backup/.charon" ]; then + [ -f "$TEST_DATA_DIR/backup/.charon/cluster-lock.json" ] && \ + cp "$TEST_DATA_DIR/backup/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" + [ -f "$TEST_DATA_DIR/backup/.charon/charon-enr-private-key" ] && \ + cp "$TEST_DATA_DIR/backup/.charon/charon-enr-private-key" "$REPO_ROOT/.charon/" + fi +} + +cleanup() { + log_info "Cleaning up and restoring original state..." 
+ restore_repo_state +} + +trap cleanup EXIT + +# Test assertion helpers +assert_exit_code() { + local expected="$1" + local actual="$2" + local test_name="$3" + + if [ "$actual" -eq "$expected" ]; then + return 0 + else + log_error "Expected exit code $expected, got $actual in $test_name" + return 1 + fi +} + +assert_output_contains() { + local pattern="$1" + local output="$2" + local test_name="$3" + + if echo "$output" | grep -q -F -- "$pattern"; then + return 0 + else + log_error "Expected output to contain '$pattern' in $test_name" + echo "Actual output:" + echo "$output" | head -20 + return 1 + fi +} + +assert_output_not_contains() { + local pattern="$1" + local output="$2" + local test_name="$3" + + if echo "$output" | grep -q "$pattern"; then + log_error "Expected output NOT to contain '$pattern' in $test_name" + return 1 + else + return 0 + fi +} + +run_test() { + local test_name="$1" + local test_func="$2" + + TESTS_RUN=$((TESTS_RUN + 1)) + log_test "Running: $test_name" + + if $test_func; then + echo -e " ${GREEN}✓ PASSED${NC}" + TESTS_PASSED=$((TESTS_PASSED + 1)) + else + echo -e " ${RED}✗ FAILED${NC}" + TESTS_FAILED=$((TESTS_FAILED + 1)) + fi +} + +# ============================================================================ +# RECREATE-PRIVATE-KEYS.SH TESTS +# ============================================================================ + +test_help() { + local output + local exit_code=0 + + output=$("$RECREATE_SCRIPT" --help 2>&1) || exit_code=$? + + assert_exit_code 0 "$exit_code" "test_help" && \ + assert_output_contains "Usage:" "$output" "test_help" && \ + assert_output_contains "--dry-run" "$output" "test_help" +} + +test_missing_env() { + local output + local exit_code=0 + + rm -f "$REPO_ROOT/.env" + + output=$("$RECREATE_SCRIPT" 2>&1) || exit_code=$? 
+ + # Restore .env for other tests + cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" + + assert_exit_code 1 "$exit_code" "test_missing_env" && \ + assert_output_contains ".env file not found" "$output" "test_missing_env" +} + +test_missing_network() { + local output + local exit_code=0 + + echo "VC=vc-lodestar" > "$REPO_ROOT/.env" # Missing NETWORK + + output=$("$RECREATE_SCRIPT" 2>&1) || exit_code=$? + + # Restore .env + cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" + + assert_exit_code 1 "$exit_code" "test_missing_network" && \ + assert_output_contains "NETWORK variable not set" "$output" "test_missing_network" +} + +test_missing_vc() { + local output + local exit_code=0 + + echo "NETWORK=hoodi" > "$REPO_ROOT/.env" # Missing VC + + output=$("$RECREATE_SCRIPT" 2>&1) || exit_code=$? + + # Restore .env + cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" + + assert_exit_code 1 "$exit_code" "test_missing_vc" && \ + assert_output_contains "VC variable not set" "$output" "test_missing_vc" +} + +test_missing_charon_dir() { + local output + local exit_code=0 + + mv "$REPO_ROOT/.charon" "$REPO_ROOT/.charon.test.bak" + + output=$("$RECREATE_SCRIPT" 2>&1) || exit_code=$? + + # Restore .charon + mv "$REPO_ROOT/.charon.test.bak" "$REPO_ROOT/.charon" + + assert_exit_code 1 "$exit_code" "test_missing_charon_dir" && \ + assert_output_contains ".charon directory not found" "$output" "test_missing_charon_dir" +} + +test_missing_cluster_lock() { + local output + local exit_code=0 + + rm -f "$REPO_ROOT/.charon/cluster-lock.json" + + output=$("$RECREATE_SCRIPT" 2>&1) || exit_code=$? 
+ + # Restore cluster-lock + cp "$TEST_FIXTURES_DIR/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" + + assert_exit_code 1 "$exit_code" "test_missing_cluster_lock" && \ + assert_output_contains "cluster-lock.json not found" "$output" "test_missing_cluster_lock" +} + +test_missing_validator_keys() { + local output + local exit_code=0 + + # Ensure validator_keys doesn't exist + rm -rf "$REPO_ROOT/.charon/validator_keys" + + output=$("$RECREATE_SCRIPT" 2>&1) || exit_code=$? + + assert_exit_code 1 "$exit_code" "test_missing_validator_keys" && \ + assert_output_contains "validator_keys directory not found" "$output" "test_missing_validator_keys" +} + +test_dry_run_workflow() { + local output + local exit_code=0 + + # Create validator_keys for this test + mkdir -p "$REPO_ROOT/.charon/validator_keys" + + output=$("$RECREATE_SCRIPT" --dry-run 2>&1) || exit_code=$? + + # Cleanup + rm -rf "$REPO_ROOT/.charon/validator_keys" + + assert_exit_code 0 "$exit_code" "test_dry_run_workflow" && \ + assert_output_contains "DRY-RUN MODE" "$output" "test_dry_run_workflow" && \ + assert_output_contains "Exporting anti-slashing database" "$output" "test_dry_run_workflow" && \ + assert_output_contains "charon alpha edit recreate-private-keys" "$output" "test_dry_run_workflow" && \ + assert_output_contains "Updating anti-slashing database" "$output" "test_dry_run_workflow" && \ + assert_output_contains "Stopping containers" "$output" "test_dry_run_workflow" && \ + assert_output_contains "Backing up" "$output" "test_dry_run_workflow" && \ + assert_output_contains "Importing updated anti-slashing" "$output" "test_dry_run_workflow" && \ + assert_output_contains "Restarting containers" "$output" "test_dry_run_workflow" +} + +test_unknown_argument() { + local output + local exit_code=0 + + output=$("$RECREATE_SCRIPT" --invalid-flag 2>&1) || exit_code=$? 
+ + assert_exit_code 1 "$exit_code" "test_unknown_argument" && \ + assert_output_contains "Unknown argument" "$output" "test_unknown_argument" +} + +# ============================================================================ +# MAIN TEST RUNNER +# ============================================================================ + +main() { + echo "" + echo "╔════════════════════════════════════════════════════════════════╗" + echo "║ Recreate-Private-Keys Script - Integration Tests ║" + echo "╚════════════════════════════════════════════════════════════════╝" + echo "" + + # Setup test environment + log_info "Setting up test environment..." + setup_test_env + + echo "" + echo "─────────────────────────────────────────────────────────────────" + echo " RECREATE-PRIVATE-KEYS.SH TESTS" + echo "─────────────────────────────────────────────────────────────────" + echo "" + + run_test "recreate-private-keys: --help shows usage" test_help + run_test "recreate-private-keys: error when .env missing" test_missing_env + run_test "recreate-private-keys: error when NETWORK missing" test_missing_network + run_test "recreate-private-keys: error when VC missing" test_missing_vc + run_test "recreate-private-keys: error when .charon dir missing" test_missing_charon_dir + run_test "recreate-private-keys: error when cluster-lock missing" test_missing_cluster_lock + run_test "recreate-private-keys: error when validator_keys missing" test_missing_validator_keys + run_test "recreate-private-keys: dry-run full workflow" test_dry_run_workflow + run_test "recreate-private-keys: error for unknown argument" test_unknown_argument + + echo "" + echo "═════════════════════════════════════════════════════════════════" + echo "" + + if [ "$TESTS_FAILED" -eq 0 ]; then + echo -e "${GREEN}All $TESTS_PASSED tests passed!${NC}" + echo "" + exit 0 + else + echo -e "${RED}$TESTS_FAILED of $TESTS_RUN tests failed${NC}" + echo "" + exit 1 + fi +} + +main "$@" diff --git 
a/scripts/edit/replace-operator/README.md b/scripts/edit/replace-operator/README.md index c768c53a..f33d8d68 100644 --- a/scripts/edit/replace-operator/README.md +++ b/scripts/edit/replace-operator/README.md @@ -22,7 +22,6 @@ Automates the complete workflow for operators staying in the cluster: - `--new-enr ` - ENR of the new operator (required) - `--operator-index ` - Index of operator being replaced (required) - `--skip-export` - Skip ASDB export if already done -- `--skip-ceremony` - Skip ceremony if cluster-lock already generated - `--dry-run` - Preview without executing ## For New Operators diff --git a/scripts/edit/replace-operator/remaining-operator.sh b/scripts/edit/replace-operator/remaining-operator.sh index 8c566061..1031791e 100755 --- a/scripts/edit/replace-operator/remaining-operator.sh +++ b/scripts/edit/replace-operator/remaining-operator.sh @@ -27,7 +27,6 @@ # --new-enr ENR of the new operator (required) # --operator-index Index of the operator being replaced (required) # --skip-export Skip ASDB export (if already exported) -# --skip-ceremony Skip ceremony (if cluster-lock already generated) # --dry-run Show what would be done without executing # -h, --help Show this help message # @@ -46,7 +45,6 @@ cd "$REPO_ROOT" NEW_ENR="" OPERATOR_INDEX="" SKIP_EXPORT=false -SKIP_CEREMONY=false DRY_RUN=false # Output directories @@ -77,7 +75,6 @@ Options: --new-enr ENR of the new operator (required) --operator-index Index of the operator being replaced (required) --skip-export Skip ASDB export (if already exported) - --skip-ceremony Skip ceremony (if cluster-lock already generated) --dry-run Show what would be done without executing -h, --help Show this help message @@ -110,10 +107,6 @@ while [[ $# -gt 0 ]]; do SKIP_EXPORT=true shift ;; - --skip-ceremony) - SKIP_CEREMONY=true - shift - ;; --dry-run) DRY_RUN=true shift @@ -130,17 +123,15 @@ while [[ $# -gt 0 ]]; do done # Validate required arguments -if [ "$SKIP_CEREMONY" = false ]; then - if [ -z "$NEW_ENR" ]; 
then - log_error "Missing required argument: --new-enr" - echo "Use --help for usage information" - exit 1 - fi - if [ -z "$OPERATOR_INDEX" ]; then - log_error "Missing required argument: --operator-index" - echo "Use --help for usage information" - exit 1 - fi +if [ -z "$NEW_ENR" ]; then + log_error "Missing required argument: --new-enr" + echo "Use --help for usage information" + exit 1 +fi +if [ -z "$OPERATOR_INDEX" ]; then + log_error "Missing required argument: --operator-index" + echo "Use --help for usage information" + exit 1 fi run_cmd() { @@ -241,36 +232,28 @@ echo "" # Step 2: Run replace-operator ceremony log_step "Step 2: Running replace-operator ceremony..." -if [ "$SKIP_CEREMONY" = true ]; then - log_warn "Skipping ceremony (--skip-ceremony specified)" - if [ ! -f "$OUTPUT_DIR/cluster-lock.json" ]; then - log_error "Cannot skip ceremony: $OUTPUT_DIR/cluster-lock.json not found" - exit 1 - fi +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon edit replace-operator" +log_info " Replacing operator index: $OPERATOR_INDEX" +log_info " New ENR: ${NEW_ENR:0:50}..." + +if [ "$DRY_RUN" = false ]; then + docker run --rm \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + edit replace-operator \ + --lock-file=/opt/charon/.charon/cluster-lock.json \ + --output-dir=/opt/charon/output \ + --operator-index="$OPERATOR_INDEX" \ + --new-enr="$NEW_ENR" else - mkdir -p "$OUTPUT_DIR" - - log_info "Running: charon edit replace-operator" - log_info " Replacing operator index: $OPERATOR_INDEX" - log_info " New ENR: ${NEW_ENR:0:50}..." 
- - if [ "$DRY_RUN" = false ]; then - docker run --rm \ - -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ - -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" \ - "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ - edit replace-operator \ - --lock-file=/opt/charon/.charon/cluster-lock.json \ - --output-dir=/opt/charon/output \ - --operator-index="$OPERATOR_INDEX" \ - --new-enr="$NEW_ENR" - else - echo " [DRY-RUN] docker run --rm ... charon edit replace-operator ..." - fi - - log_info "New cluster-lock generated at $OUTPUT_DIR/cluster-lock.json" + echo " [DRY-RUN] docker run --rm ... charon edit replace-operator ..." fi +log_info "New cluster-lock generated at $OUTPUT_DIR/cluster-lock.json" + echo "" # Step 3: Update ASDB pubkeys diff --git a/scripts/edit/replace-operator/test/test_replace_operator.sh b/scripts/edit/replace-operator/test/test_replace_operator.sh index 66a6e86c..a928e4eb 100755 --- a/scripts/edit/replace-operator/test/test_replace_operator.sh +++ b/scripts/edit/replace-operator/test/test_replace_operator.sh @@ -375,8 +375,7 @@ test_remaining_help() { assert_output_contains "Usage:" "$output" "test_remaining_help" && \ assert_output_contains "--new-enr" "$output" "test_remaining_help" && \ assert_output_contains "--operator-index" "$output" "test_remaining_help" && \ - assert_output_contains "--skip-export" "$output" "test_remaining_help" && \ - assert_output_contains "--skip-ceremony" "$output" "test_remaining_help" + assert_output_contains "--skip-export" "$output" "test_remaining_help" } test_remaining_missing_new_enr() { @@ -463,16 +462,16 @@ test_remaining_dry_run_full_workflow() { local output local exit_code=0 - # Use --skip-export and --skip-ceremony to avoid Docker dependencies + # Use --skip-export to avoid Docker dependencies output=$("$REMAINING_OPERATOR_SCRIPT" \ --new-enr "enr:-HW4QTestNewOperator123456789" \ --operator-index 0 \ --skip-export \ - --skip-ceremony \ --dry-run 2>&1) || exit_code=$? 
assert_exit_code 0 "$exit_code" "test_remaining_dry_run_full_workflow" && \ assert_output_contains "DRY-RUN MODE" "$output" "test_remaining_dry_run_full_workflow" && \ + assert_output_contains "charon edit replace-operator" "$output" "test_remaining_dry_run_full_workflow" && \ assert_output_contains "Updating anti-slashing database pubkeys" "$output" "test_remaining_dry_run_full_workflow" && \ assert_output_contains "Stopping" "$output" "test_remaining_dry_run_full_workflow" && \ assert_output_contains "Backing up" "$output" "test_remaining_dry_run_full_workflow" && \ @@ -499,25 +498,6 @@ test_remaining_skip_export_missing_asdb() { assert_output_contains "Cannot skip export" "$output" "test_remaining_skip_export_missing_asdb" } -test_remaining_skip_ceremony_missing_output() { - local output - local exit_code=0 - - rm -f "$REPO_ROOT/output/cluster-lock.json" - - output=$("$REMAINING_OPERATOR_SCRIPT" \ - --new-enr "enr:-test" \ - --operator-index 0 \ - --skip-ceremony \ - --dry-run 2>&1) || exit_code=$? 
- - # Restore output cluster-lock - cp "$TEST_FIXTURES_DIR/new-cluster-lock.json" "$REPO_ROOT/output/cluster-lock.json" - - assert_exit_code 1 "$exit_code" "test_remaining_skip_ceremony_missing_output" && \ - assert_output_contains "Cannot skip ceremony" "$output" "test_remaining_skip_ceremony_missing_output" -} - test_remaining_unknown_argument() { local output local exit_code=0 @@ -575,7 +555,6 @@ main() { run_test "remaining-operator: error when ENR key missing" test_remaining_missing_enr_key run_test "remaining-operator: dry-run full workflow" test_remaining_dry_run_full_workflow run_test "remaining-operator: skip-export needs existing ASDB" test_remaining_skip_export_missing_asdb - run_test "remaining-operator: skip-ceremony needs existing output" test_remaining_skip_ceremony_missing_output run_test "remaining-operator: error for unknown argument" test_remaining_unknown_argument echo "" From 3f880e82c946d0d6dee519e8bd1e49bac19b051e Mon Sep 17 00:00:00 2001 From: Andrei Smirnov Date: Tue, 17 Feb 2026 11:26:55 +0300 Subject: [PATCH 04/12] Added CLAUDE.md --- CLAUDE.md | 209 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 209 insertions(+) create mode 100644 CLAUDE.md diff --git a/CLAUDE.md b/CLAUDE.md new file mode 100644 index 00000000..454cd6a9 --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1,209 @@ +# CLAUDE.md + +This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. + +## Project Overview + +This repository contains Docker Compose configurations for running a Charon Distributed Validator Node (CDVN), which coordinates multiple operators to run Ethereum validators. 
A distributed validator node runs four main components: +- Execution client (EL): Processes Ethereum transactions +- Consensus client (CL/beacon node): Participates in Ethereum's proof-of-stake consensus +- Charon: Obol Network's distributed validator middleware that coordinates between operators +- Validator client (VC): Signs attestations and proposals through Charon + +## Architecture & Multi-Client System + +The repository uses a **profile-based multi-client architecture** where different Ethereum client implementations can be swapped via `.env` configuration: + +- **Compose file structure**: `compose-el.yml` (execution), `compose-cl.yml` (consensus), `compose-vc.yml` (validator), `compose-mev.yml` (MEV), and `docker-compose.yml` (main/monitoring) +- **Client selection**: Set via environment variables `EL`, `CL`, `VC`, `MEV` in `.env` (e.g., `EL=el-nethermind`, `CL=cl-lighthouse`, `VC=vc-lodestar`, `MEV=mev-mevboost`) +- **Profiles**: Docker Compose profiles automatically activate the selected clients via `COMPOSE_PROFILES=${EL},${CL},${VC},${MEV}` +- **Service naming**: Client services use prefixed names (e.g., `el-nethermind`, `cl-lighthouse`, `vc-lodestar`) while the main compose file uses unprefixed names for backward compatibility + +### Supported Clients + +- **Execution Layer**: `el-nethermind`, `el-reth`, `el-none` +- **Consensus Layer**: `cl-lighthouse`, `cl-grandine`, `cl-teku`, `cl-lodestar`, `cl-none` +- **Validator Clients**: `vc-lodestar`, `vc-nimbus`, `vc-prysm`, `vc-teku` +- **MEV Clients**: `mev-mevboost`, `mev-commitboost`, `mev-none` + +### Key Integration Points + +- Charon connects to the consensus layer at `http://${CL}:5052` (beacon node API) +- Validator clients connect to Charon at `http://charon:3600` (distributed validator middleware API) +- Consensus layer connects to execution layer at `http://${EL}:8551` (Engine API with JWT auth) +- MEV clients expose builder API at port `18550` + +## Common Commands + +### Starting/Stopping the 
Cluster + +```bash +# Start the full cluster (uses profile from .env) +docker compose up -d + +# Stop specific services +docker compose stop charon vc-lodestar + +# Stop all services +docker compose down + +# View logs +docker compose logs -f + +# Restart after config changes +docker compose restart +``` + +### Switching Clients + +```bash +# 1. Stop the old client +docker compose down cl-lighthouse + +# 2. Update .env to change CL variable (e.g., CL=cl-grandine) + +# 3. Start new client +docker compose up cl-grandine -d + +# 4. Restart charon to use new beacon node +docker compose restart charon + +# 5. Optional: clean up old client data +rm -rf ./data/lighthouse +``` + +### Testing + +```bash +# Verify containers can be created +docker compose up --no-start + +# Test with debug profile +docker compose -f docker-compose.yml -f compose-debug.yml up --no-start +``` + +## Configuration + +### Environment Setup + +1. Copy the appropriate sample file: `.env.sample.mainnet` or `.env.sample.hoodi` → `.env` +2. Set `NETWORK` (mainnet, hoodi) +3. Select clients by uncommenting the desired `EL`, `CL`, `VC`, `MEV` variables +4. Configure optional settings (ports, external hostnames, monitoring tokens, etc.)
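As a concrete sketch of the result of steps 1-4, a minimal `.env` client selection could look like this (the values are illustrative examples; start from the sample files for the authoritative defaults):

```bash
# Illustrative .env sketch (copy .env.sample.hoodi and trim)
NETWORK=hoodi

# Client selection: these names activate the matching Compose profiles
EL=el-nethermind
CL=cl-lighthouse
VC=vc-lodestar
MEV=mev-mevboost
```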
+ +### Important Environment Variables + +- `NETWORK`: Ethereum network (mainnet, hoodi) +- `EL`, `CL`, `VC`, `MEV`: Client selection (determines which Docker profiles activate) +- `CHARON_BEACON_NODE_ENDPOINTS`: Override default beacon node (defaults to selected CL client) +- `CHARON_FALLBACK_BEACON_NODE_ENDPOINTS`: Fallback beacon nodes for redundancy +- `BUILDER_API_ENABLED`: Enable/disable MEV-boost integration +- `CLUSTER_NAME`, `CLUSTER_PEER`: Required for monitoring with Alloy/Prometheus +- `ALERT_DISCORD_IDS`: Discord IDs for Obol Agent monitoring alerts + +### Key Directories + +- `.charon/`: Cluster configuration and validator keys (created by DKG or add-validators) +- `data/`: Persistent data for all clients (execution, consensus, validator databases) +- `jwt/`: JWT secret for execution<->consensus authentication +- `grafana/`: Monitoring dashboards and configuration +- `prometheus/`: Metrics collection configuration +- `scripts/`: Automation scripts for cluster operations + +## Cluster Edit Scripts + +Located in `scripts/edit/`, these automate complex cluster modification operations: + +### Replace Operator (`scripts/edit/replace-operator/`) + +Automates the workflow when one operator in a distributed validator cluster needs to be replaced. + +**For remaining operators:** +```bash +./scripts/edit/replace-operator/remaining-operator.sh \ + --new-enr "enr:-..." \ + --operator-index 2 +``` + +**For new operators:** +```bash +# Step 1: Generate and share ENR +./scripts/edit/replace-operator/new-operator.sh --generate-enr + +# Step 2: Apply received cluster-lock +./scripts/edit/replace-operator/new-operator.sh --cluster-lock ./received-cluster-lock.json +``` + +### Anti-Slashing Database Management (`scripts/edit/vc/`) + +When switching validator clients or replacing operators, the anti-slashing database (ASDB) must be exported and imported to prevent slashing violations (EIP-3076 format). 
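For reference, an EIP-3076 interchange file has roughly the following shape; the pubkey, signing roots, slot, and epochs below are illustrative placeholders, not real values:

```json
{
  "metadata": {
    "interchange_format_version": "5",
    "genesis_validators_root": "0x0470...0007"
  },
  "data": [
    {
      "pubkey": "0xb845...f811",
      "signed_blocks": [
        { "slot": "81952", "signing_root": "0x4ff6...f743" }
      ],
      "signed_attestations": [
        { "source_epoch": "2290", "target_epoch": "3007", "signing_root": "0x587d...6a4f" }
      ]
    }
  ]
}
```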
+ +```bash +# Export from current VC +./scripts/edit/vc/export_asdb.sh + +# Import to new VC (after switching VC in .env) +./scripts/edit/vc/import_asdb.sh +``` + +Client-specific scripts are in subdirectories: `lodestar/`, `nimbus/`, `prysm/`, `teku/`. + +### Recreate Private Keys (`scripts/edit/recreate-private-keys/`) + +Recreates validator private keys from cluster-lock.json when they are lost but the cluster-lock file is still available. + +```bash +./scripts/edit/recreate-private-keys/recreate-private-keys.sh +``` + +## Adding Validators + +Starting with Charon v1.6, you can add validators to an existing cluster using `charon alpha add-validators`: + +```bash +# Using Docker (recommended) +docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:latest \ + alpha add-validators \ + --num-validators 10 \ + --withdrawal-addresses=0x
\ + --fee-recipient-addresses=0x
\ + --data-dir=/opt/charon/.charon \ + --output-dir=/opt/charon/output + +# Apply the new configuration (backup first!) +docker compose stop charon +mv .charon .charon-backup +mv output .charon +docker compose up -d charon +``` + +**Note**: All operators must independently perform the upgrade. The cluster continues operating once threshold operators have upgraded. + +## Monitoring Stack + +- **Grafana** (port 3000): Dashboards for cluster health, validator performance +- **Prometheus**: Metrics collection from all services +- **Loki**: Log aggregation (optional, via `CHARON_LOKI_ADDRESSES`) +- **Tempo**: Distributed tracing (debug profile) +- **Alloy**: Log and metric forwarding (uses `alloy-monitored` labels on services) + +Access Grafana at `http://localhost:3000` (or `${MONITORING_PORT_GRAFANA}`). + +## Development Workflow + +When modifying this repository: + +1. **Test container creation** before committing changes to compose files +2. **Preserve backward compatibility** for existing node operators (data paths, service names) +3. **Update all sample .env files** when adding new configuration options +4. **Test client switching** if modifying compose file structure +5. 
**Update version defaults** to tested/stable releases + +## Important Notes + +- **Never commit `.env` files** - they contain operator-specific configuration +- **JWT secret** in `jwt/jwt.hex` must be shared between EL and CL clients +- **Cluster lock** in `.charon/cluster-lock.json` is critical - back it up before any edit operations +- **Validator keys** in `.charon/validator_keys/` must be kept secure and never committed +- **Data directory compatibility**: When switching VCs, verify the new client can handle existing key state +- **Slashing protection**: Always export/import ASDB when switching VCs or replacing operators From e29871f05573d0a4ed8bb44a8b223e039381fd26 Mon Sep 17 00:00:00 2001 From: Andrei Smirnov Date: Tue, 17 Feb 2026 11:35:58 +0300 Subject: [PATCH 05/12] Added add-validators script --- .gitignore | 4 + scripts/edit/add-validators/README.md | 107 +++++ scripts/edit/add-validators/add-validators.sh | 376 +++++++++++++++ scripts/edit/add-validators/test/README.md | 48 ++ .../fixtures/.charon/charon-enr-private-key | 1 + .../test/fixtures/.charon/cluster-lock.json | 54 +++ .../add-validators/test/fixtures/.env.test | 3 + .../test/test_add_validators.sh | 441 ++++++++++++++++++ .../fixtures/.charon/charon-enr-private-key | 1 + .../test/fixtures/.charon/cluster-lock.json | 55 +++ 10 files changed, 1090 insertions(+) create mode 100644 scripts/edit/add-validators/README.md create mode 100755 scripts/edit/add-validators/add-validators.sh create mode 100644 scripts/edit/add-validators/test/README.md create mode 100644 scripts/edit/add-validators/test/fixtures/.charon/charon-enr-private-key create mode 100644 scripts/edit/add-validators/test/fixtures/.charon/cluster-lock.json create mode 100644 scripts/edit/add-validators/test/fixtures/.env.test create mode 100755 scripts/edit/add-validators/test/test_add_validators.sh create mode 100644 scripts/edit/recreate-private-keys/test/fixtures/.charon/charon-enr-private-key create mode 100644 
scripts/edit/recreate-private-keys/test/fixtures/.charon/cluster-lock.json diff --git a/.gitignore b/.gitignore index be6f45cf..ea2a8aa6 100644 --- a/.gitignore +++ b/.gitignore @@ -13,3 +13,7 @@ data/ .charon prometheus/prometheus.yml commit-boost/config.toml + +# Allow test fixtures +!scripts/edit/**/test/fixtures/.charon/ +!scripts/edit/**/test/fixtures/.charon/** diff --git a/scripts/edit/add-validators/README.md b/scripts/edit/add-validators/README.md new file mode 100644 index 00000000..4fca5e75 --- /dev/null +++ b/scripts/edit/add-validators/README.md @@ -0,0 +1,107 @@ +# Add-Validators Script + +Script to automate the [add-validators ceremony](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-validators) for Charon distributed validators. + +## Overview + +This script helps operators add new validators to an existing distributed validator cluster. This is useful for: + +- **Expanding capacity**: Add more validators without creating a new cluster +- **Scaling operations**: Grow your staking operation with existing operators + +**Important**: This is a coordinated ceremony. All operators must run this script simultaneously to complete the process. + +> ⚠️ This is an alpha feature in Charon and is not yet recommended for production use. 
+ +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set +- `.charon` directory with `cluster-lock.json` +- Docker running +- **Charon and VC must be RUNNING** during the ceremony +- **All operators must participate in the ceremony** + +## Usage + +All operators must run this script simultaneously: + +```bash +./scripts/edit/add-validators/add-validators.sh \ + --num-validators 10 \ + --withdrawal-addresses 0x123...abc \ + --fee-recipient-addresses 0x456...def +``` + +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--num-validators <n>` | Yes | Number of validators to add | +| `--withdrawal-addresses <addr>` | No | Withdrawal address(es), comma-separated for multiple | +| `--fee-recipient-addresses <addr>` | No | Fee recipient address(es), comma-separated | +| `--unverified` | No | Skip key verification (for remote KeyManager) | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +### Examples + +```bash +# Add 10 validators with same addresses for all +./scripts/edit/add-validators/add-validators.sh \ + --num-validators 10 \ + --withdrawal-addresses 0x123...abc \ + --fee-recipient-addresses 0x456...def + +# Add validators without key verification (remote KeyManager) +./scripts/edit/add-validators/add-validators.sh \ + --num-validators 5 \ + --withdrawal-addresses 0x123...abc \ + --fee-recipient-addresses 0x456...def \ + --unverified + +# Preview what would happen +./scripts/edit/add-validators/add-validators.sh \ + --num-validators 5 \ + --withdrawal-addresses 0x123...abc \ + --dry-run +``` + +## Workflow + +The script performs the following steps: + +1. **Check prerequisites** - Verify environment, cluster-lock, and running containers +2. **Run ceremony** - P2P coordinated add-validators ceremony with all operators +3. **Stop containers** - Stop charon and VC +4. **Backup and replace** - Backup current `.charon/` to `./backups/`, install new configuration +5.
**Restart containers** - Start charon and VC with new configuration + +## After the Ceremony + +1. **Wait for threshold** - Once threshold operators complete their upgrades, new validators will begin participating +2. **Generate deposits** - New validator deposit data is available in `.charon/deposit-data.json` +3. **Activate validators** - Submit deposits to activate new validators on the beacon chain + +## Using --unverified Mode + +If your validator keys are stored remotely (e.g., in a KeyManager) and Charon cannot access them, use the `--unverified` flag. This skips key verification during the ceremony. + +**Important**: When using cluster artifacts created with `--unverified`: +- You must start `charon run` with the `--no-verify` flag +- Or set `CHARON_NO_VERIFY=true` in your `.env` file + +## Current Limitations + +- The new cluster configuration will not be reflected on the Obol Launchpad +- The new cluster will have a new cluster hash (different observability identifier) +- All operators must participate; no partial participation option + +## Testing + +See [test/README.md](test/README.md) for integration tests. + +## Related + +- [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md) +- [Replace-Operator Workflow](../replace-operator/README.md) +- [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-validators) diff --git a/scripts/edit/add-validators/add-validators.sh b/scripts/edit/add-validators/add-validators.sh new file mode 100755 index 00000000..bd4b0e10 --- /dev/null +++ b/scripts/edit/add-validators/add-validators.sh @@ -0,0 +1,376 @@ +#!/usr/bin/env bash + +# Add-Validators Script +# +# This script automates the add-validators ceremony for Charon distributed +# validators. This is used to add new validators to an existing cluster. 
+# +# Reference: https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-validators +# +# IMPORTANT: This is a CEREMONY - ALL operators in the cluster must run this +# script simultaneously. The ceremony coordinates between all operators to +# generate new validator key shares. +# +# Use cases: +# - Adding more validators to an existing distributed validator cluster +# - Expanding staking capacity without creating a new cluster +# +# The workflow: +# 1. Check prerequisites (cluster running, cluster-lock exists) +# 2. Run the add-validators ceremony (all operators simultaneously) +# 3. Stop containers +# 4. Backup and replace .charon directory +# 5. Restart containers +# +# Prerequisites: +# - .env file with NETWORK and VC variables set +# - .charon directory with cluster-lock.json +# - Docker and docker compose installed and running +# - Charon and VC must be RUNNING during ceremony +# - All operators must participate in the ceremony +# +# Usage: +# ./scripts/edit/add-validators/add-validators.sh [OPTIONS] +# +# Options: +# --num-validators Number of validators to add (required) +# --withdrawal-addresses Withdrawal address(es), comma-separated for multiple +# --fee-recipient-addresses Fee recipient address(es), comma-separated +# --unverified Skip key verification (when keys not accessible) +# --dry-run Show what would be done without executing +# -h, --help Show this help message + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." 
&& pwd)" +cd "$REPO_ROOT" + +# Default values +NUM_VALIDATORS="" +WITHDRAWAL_ADDRESSES="" +FEE_RECIPIENT_ADDRESSES="" +UNVERIFIED=false +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/add-validators/add-validators.sh [OPTIONS] + +Adds new validators to an existing distributed validator cluster. This is a +CEREMONY that ALL operators must run simultaneously. + +Options: + --num-validators Number of validators to add (required) + --withdrawal-addresses Withdrawal address(es), comma-separated for multiple + --fee-recipient-addresses Fee recipient address(es), comma-separated + --unverified Skip key verification (when keys not accessible) + --dry-run Show what would be done without executing + -h, --help Show this help message + +Example: + # Add 10 validators with same withdrawal address + ./scripts/edit/add-validators/add-validators.sh \ + --num-validators 10 \ + --withdrawal-addresses 0x123...abc \ + --fee-recipient-addresses 0x456...def + + # Add validators without key verification (remote KeyManager) + ./scripts/edit/add-validators/add-validators.sh \ + --num-validators 5 \ + --withdrawal-addresses 0x123...abc \ + --fee-recipient-addresses 0x456...def \ + --unverified + +Prerequisites: + - .env file with NETWORK and VC variables set + - .charon directory with cluster-lock.json and validator_keys + - Charon and VC containers RUNNING during ceremony + - All operators must participate in the ceremony + +Note: + If using --unverified flag, you must start charon with --no-verify flag + or set CHARON_NO_VERIFY=true environment variable. 
+EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --num-validators) + NUM_VALIDATORS="$2" + shift 2 + ;; + --withdrawal-addresses) + WITHDRAWAL_ADDRESSES="$2" + shift 2 + ;; + --fee-recipient-addresses) + FEE_RECIPIENT_ADDRESSES="$2" + shift 2 + ;; + --unverified) + UNVERIFIED=true + shift + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +# Validate required arguments +if [ -z "$NUM_VALIDATORS" ]; then + log_error "Missing required argument: --num-validators" + echo "Use --help for usage information" + exit 1 +fi + +# Validate num-validators is a positive integer +if ! [[ "$NUM_VALIDATORS" =~ ^[1-9][0-9]*$ ]]; then + log_error "Invalid --num-validators: must be a positive integer" + exit 1 +fi + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Add Validators Workflow ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +source .env + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! -d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! -f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +# Check if charon is running (required for ceremony) +if [ "$DRY_RUN" = false ]; then + if ! 
docker compose ps charon 2>/dev/null | grep -q Up; then + log_error "Charon container is not running." + log_error "The DV node should be running during the add-validators ceremony." + log_error "Start it with: docker compose up -d charon $VC" + exit 1 + fi +else + log_warn "Would check that charon container is running" +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" +log_info " Validators to add: $NUM_VALIDATORS" + +if [ -n "$WITHDRAWAL_ADDRESSES" ]; then + log_info " Withdrawal addresses: $WITHDRAWAL_ADDRESSES" +fi +if [ -n "$FEE_RECIPIENT_ADDRESSES" ]; then + log_info " Fee recipient addresses: $FEE_RECIPIENT_ADDRESSES" +fi +if [ "$UNVERIFIED" = true ]; then + log_warn " Mode: UNVERIFIED (key verification skipped)" +fi + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +# Show current cluster info +if [ -f .charon/cluster-lock.json ]; then + CURRENT_VALIDATORS=$(jq '.distributed_validators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + CURRENT_OPERATORS=$(jq '.operators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + log_info " Current cluster: $CURRENT_VALIDATORS validator(s), $CURRENT_OPERATORS operator(s)" +fi + +echo "" + +# Step 1: Run ceremony +log_step "Step 1: Running add-validators ceremony..." + +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL operators must run this ceremony simultaneously ║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit add-validators" +log_info " Number of validators: $NUM_VALIDATORS" +log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all operators to connect..." 
+echo "" + +# Build Docker command arguments +DOCKER_ARGS=( + run --rm -it + -v "$REPO_ROOT/.charon:/opt/charon/.charon" + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" + alpha edit add-validators + --num-validators="$NUM_VALIDATORS" + --output-dir=/opt/charon/output +) + +if [ -n "$WITHDRAWAL_ADDRESSES" ]; then + DOCKER_ARGS+=(--withdrawal-addresses="$WITHDRAWAL_ADDRESSES") +fi + +if [ -n "$FEE_RECIPIENT_ADDRESSES" ]; then + DOCKER_ARGS+=(--fee-recipient-addresses="$FEE_RECIPIENT_ADDRESSES") +fi + +if [ "$UNVERIFIED" = true ]; then + DOCKER_ARGS+=(--unverified) +fi + +if [ "$DRY_RUN" = false ]; then + docker "${DOCKER_ARGS[@]}" + + # Verify ceremony output + if [ -f "$OUTPUT_DIR/cluster-lock.json" ]; then + log_info "Ceremony completed successfully!" + NEW_VALIDATORS=$(jq '.distributed_validators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + log_info "New cluster-lock.json generated with $NEW_VALIDATORS validator(s)" + else + log_error "Ceremony may have failed - no cluster-lock.json in $OUTPUT_DIR/" + exit 1 + fi +else + echo " [DRY-RUN] docker ${DOCKER_ARGS[*]}" +fi + +echo "" + +# Step 2: Stop containers +log_step "Step 2: Stopping containers..." + +run_cmd docker compose stop "$VC" charon + +log_info "Containers stopped" + +echo "" + +# Step 3: Backup and replace .charon +log_step "Step 3: Backing up and replacing .charon directory..." + +TIMESTAMP=$(date +%Y%m%d_%H%M%S) +mkdir -p "$BACKUP_DIR" + +run_cmd mv .charon "$BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info "Current .charon backed up to $BACKUP_DIR/.charon-backup.$TIMESTAMP" + +run_cmd mv "$OUTPUT_DIR" .charon +log_info "New cluster configuration installed to .charon/" + +echo "" + +# Step 4: Restart containers +log_step "Step 4: Restarting containers..." 
+ +# Start containers (warn when --unverified mode requires CHARON_NO_VERIFY in .env) +if [ "$UNVERIFIED" = true ]; then + log_warn "Starting charon with CHARON_NO_VERIFY=true (required for --unverified mode)" + run_cmd docker compose up -d charon "$VC" +else + run_cmd docker compose up -d charon "$VC" +fi + +log_info "Containers restarted" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Add Validators Workflow COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Old .charon backed up to: $BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info " - New cluster configuration installed in: .charon/" +log_info " - $NUM_VALIDATORS new validator(s) added" +log_info " - Containers restarted: charon, $VC" + +if [ "$UNVERIFIED" = true ]; then + echo "" + log_warn "IMPORTANT: You used --unverified mode." + log_warn "Ensure CHARON_NO_VERIFY=true is set in your .env file for future restarts." +fi + +echo "" +log_info "Next steps:" +log_info " 1. Check charon logs: docker compose logs -f charon" +log_info " 2. Wait for a threshold of operators to complete their upgrades" +log_info " 3. Verify the new validators appear in the cluster" +log_info " 4. Generate deposit data for new validators (in .charon/deposit-data.json)" +log_info " 5. Activate new validators on the beacon chain" +echo "" +log_warn "Keep the backup until you've verified normal operation for several epochs." +echo "" +log_info "Current limitations:" +log_info " - The new configuration will not be reflected on the Obol Launchpad" +log_info " - The cluster will have a new cluster hash (different observability ID)" +echo "" diff --git a/scripts/edit/add-validators/test/README.md b/scripts/edit/add-validators/test/README.md new file mode 100644 index 00000000..9d89803e --- /dev/null +++ b/scripts/edit/add-validators/test/README.md @@ -0,0 +1,48 @@ +# Add-Validators Integration Tests + +Integration tests for the `add-validators.sh` script.
+ +## Overview + +These tests validate the add-validators script without running actual Docker containers or the ceremony. The focus is on: + +- **Argument parsing and validation** +- **Prerequisite checks** (`.env`, `.charon/`, cluster-lock) +- **Dry-run output** for all workflow steps +- **Error messages** for missing/invalid inputs + +## Running Tests + +```bash +./scripts/edit/add-validators/test/test_add_validators.sh +``` + +Expected output: All tests should pass in under 5 seconds. + +## What's NOT Tested + +- **Actual Docker operations** - Docker commands are mocked +- **Charon ceremony** - Would require actual cluster coordination with all operators +- **Container orchestration** - Would require running services + +## Test Structure + +``` +test/ +├── README.md # This file +├── test_add_validators.sh # Main test script +├── fixtures/ # Test fixtures +│ ├── .env.test # Test environment file +│ └── .charon/ # Mock .charon directory +│ ├── cluster-lock.json +│ └── charon-enr-private-key +└── data/ # Test runtime data (git-ignored) + ├── backup/ # Backed up repo files during test + └── mock-bin/ # Mock docker command +``` + +## Adding New Tests + +1. Add a new test function following the naming convention `test_*` +2. Use the assertion helpers: `assert_exit_code`, `assert_output_contains`, `assert_output_not_contains` +3. 
Register the test in the `main()` function using `run_test` diff --git a/scripts/edit/add-validators/test/fixtures/.charon/charon-enr-private-key b/scripts/edit/add-validators/test/fixtures/.charon/charon-enr-private-key new file mode 100644 index 00000000..37a6f7d7 --- /dev/null +++ b/scripts/edit/add-validators/test/fixtures/.charon/charon-enr-private-key @@ -0,0 +1 @@ +mock-enr-private-key-for-testing-only-do-not-use-in-production diff --git a/scripts/edit/add-validators/test/fixtures/.charon/cluster-lock.json b/scripts/edit/add-validators/test/fixtures/.charon/cluster-lock.json new file mode 100644 index 00000000..ae99ed93 --- /dev/null +++ b/scripts/edit/add-validators/test/fixtures/.charon/cluster-lock.json @@ -0,0 +1,54 @@ +{ + "cluster_definition": { + "name": "TestCluster", + "num_validators": 1, + "threshold": 3, + "operators": [ + { + "address": "0x1111111111111111111111111111111111111111", + "enr": "enr:-HW4QOldBest...operator0" + }, + { + "address": "0x2222222222222222222222222222222222222222", + "enr": "enr:-HW4QNewOper...operator1" + }, + { + "address": "0x3333333333333333333333333333333333333333", + "enr": "enr:-HW4QThird...operator2" + }, + { + "address": "0x4444444444444444444444444444444444444444", + "enr": "enr:-HW4QFourth...operator3" + } + ] + }, + "distributed_validators": [ + { + "distributed_public_key": "0xa9fb2be415318eb77709f7c378ab26025371c0b11213d93fd662ffdb06e77a05c7b04573a478e9d5c0c0fd98078965ef", + "public_shares": [ + "0xa3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", + "0x8afba316fdcf51e25a89e05e17377b8c72fd465c95346df4ed5694f295faa2ce061e14e579c5bc01a468dbbb191c58e8", + "0xa1aeebe0980509f5f8d8d424beb89004a967da8d8093248f64eb27c4ee5d22ba9c0f157025f551f47b31833f8bc585f8", + "0xa6c283c82cd0b65436861a149fb840849d06ded1dd8d2f900afb358c6a4232004309120f00a553cdccd8a43f6b743c82" + ] + } + ], + "operators": [ + { + "address": "0x1111111111111111111111111111111111111111", + "enr": 
"enr:-HW4QOldBest...operator0" + }, + { + "address": "0x2222222222222222222222222222222222222222", + "enr": "enr:-HW4QNewOper...operator1" + }, + { + "address": "0x3333333333333333333333333333333333333333", + "enr": "enr:-HW4QThird...operator2" + }, + { + "address": "0x4444444444444444444444444444444444444444", + "enr": "enr:-HW4QFourth...operator3" + } + ] +} diff --git a/scripts/edit/add-validators/test/fixtures/.env.test b/scripts/edit/add-validators/test/fixtures/.env.test new file mode 100644 index 00000000..1717e872 --- /dev/null +++ b/scripts/edit/add-validators/test/fixtures/.env.test @@ -0,0 +1,3 @@ +# Test environment for add-validators tests +NETWORK=hoodi +VC=vc-lodestar diff --git a/scripts/edit/add-validators/test/test_add_validators.sh b/scripts/edit/add-validators/test/test_add_validators.sh new file mode 100755 index 00000000..466f43c1 --- /dev/null +++ b/scripts/edit/add-validators/test/test_add_validators.sh @@ -0,0 +1,441 @@ +#!/usr/bin/env bash + +# Integration test for add-validators.sh script +# +# This test validates: +# - Argument parsing and validation +# - Prerequisite checks (.env, .charon/, cluster-lock) +# - Dry-run output for all workflow steps +# - Error messages for missing inputs +# +# No actual Docker containers are run - all Docker commands are mocked. +# +# Usage: ./scripts/edit/add-validators/test/test_add_validators.sh + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." 
&& pwd)" + +# Test directories +TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" +TEST_DATA_DIR="$SCRIPT_DIR/data" + +# Script under test +ADD_VALIDATORS_SCRIPT="$REPO_ROOT/scripts/edit/add-validators/add-validators.sh" + +# Test counters +TESTS_RUN=0 +TESTS_PASSED=0 +TESTS_FAILED=0 + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_test() { echo -e "${BLUE}[TEST]${NC} $1"; } + +# Create mock docker script that logs calls and returns success +setup_mock_docker() { + local mock_bin_dir="$TEST_DATA_DIR/mock-bin" + mkdir -p "$mock_bin_dir" + + # Create mock docker command + cat > "$mock_bin_dir/docker" << 'MOCK_DOCKER' +#!/usr/bin/env bash +# Mock docker for testing - logs all calls +echo "[MOCK-DOCKER] $*" >> "${MOCK_DOCKER_LOG:-/dev/null}" + +# Handle specific commands +case "$*" in + "info") + echo "Mock Docker info" + exit 0 + ;; + "compose"*"ps"*"charon"*) + # Simulate charon is running + echo "charon Up" + exit 0 + ;; + "compose"*"stop"*) + echo "[MOCK] Stopping containers" + exit 0 + ;; + "compose"*"up"*) + echo "[MOCK] Starting containers" + exit 0 + ;; + *"charon"*"add-validators"*) + echo "[MOCK] Running add-validators ceremony" + exit 0 + ;; + *) + echo "[MOCK] Unhandled docker command: $*" + exit 0 + ;; +esac +MOCK_DOCKER + chmod +x "$mock_bin_dir/docker" + + # Export PATH with mock first + export PATH="$mock_bin_dir:$PATH" + export MOCK_DOCKER_LOG="$TEST_DATA_DIR/docker-calls.log" +} + +# Setup test working directory with fixtures +# Note: Scripts always cd to REPO_ROOT, so we must put test fixtures there +# We backup any existing files and restore them on cleanup +setup_test_env() { + rm -rf "$TEST_DATA_DIR" + mkdir -p "$TEST_DATA_DIR/backup" + + # Backup existing files in REPO_ROOT if they exist + if [ -f "$REPO_ROOT/.env" ]; then + cp 
"$REPO_ROOT/.env" "$TEST_DATA_DIR/backup/.env.bak" + fi + if [ -d "$REPO_ROOT/.charon" ]; then + # Only backup key files, not the whole directory + mkdir -p "$TEST_DATA_DIR/backup/.charon" + [ -f "$REPO_ROOT/.charon/cluster-lock.json" ] && \ + cp "$REPO_ROOT/.charon/cluster-lock.json" "$TEST_DATA_DIR/backup/.charon/" + [ -f "$REPO_ROOT/.charon/charon-enr-private-key" ] && \ + cp "$REPO_ROOT/.charon/charon-enr-private-key" "$TEST_DATA_DIR/backup/.charon/" + fi + + # Install test fixtures to REPO_ROOT + cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" + mkdir -p "$REPO_ROOT/.charon" + cp "$TEST_FIXTURES_DIR/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" + cp "$TEST_FIXTURES_DIR/.charon/charon-enr-private-key" "$REPO_ROOT/.charon/" + + # Create required directories + mkdir -p "$REPO_ROOT/backups" + + # Setup mock docker + setup_mock_docker +} + +restore_repo_state() { + # Restore backed up files + if [ -f "$TEST_DATA_DIR/backup/.env.bak" ]; then + cp "$TEST_DATA_DIR/backup/.env.bak" "$REPO_ROOT/.env" + else + rm -f "$REPO_ROOT/.env" + fi + + if [ -d "$TEST_DATA_DIR/backup/.charon" ]; then + [ -f "$TEST_DATA_DIR/backup/.charon/cluster-lock.json" ] && \ + cp "$TEST_DATA_DIR/backup/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" + [ -f "$TEST_DATA_DIR/backup/.charon/charon-enr-private-key" ] && \ + cp "$TEST_DATA_DIR/backup/.charon/charon-enr-private-key" "$REPO_ROOT/.charon/" + fi +} + +cleanup() { + log_info "Cleaning up and restoring original state..." 
+ restore_repo_state +} + +trap cleanup EXIT + +# Test assertion helpers +assert_exit_code() { + local expected="$1" + local actual="$2" + local test_name="$3" + + if [ "$actual" -eq "$expected" ]; then + return 0 + else + log_error "Expected exit code $expected, got $actual in $test_name" + return 1 + fi +} + +assert_output_contains() { + local pattern="$1" + local output="$2" + local test_name="$3" + + if echo "$output" | grep -q -F -- "$pattern"; then + return 0 + else + log_error "Expected output to contain '$pattern' in $test_name" + echo "Actual output:" + echo "$output" | head -20 + return 1 + fi +} + +assert_output_not_contains() { + local pattern="$1" + local output="$2" + local test_name="$3" + + if echo "$output" | grep -q -F -- "$pattern"; then + log_error "Expected output NOT to contain '$pattern' in $test_name" + return 1 + else + return 0 + fi +} + +run_test() { + local test_name="$1" + local test_func="$2" + + TESTS_RUN=$((TESTS_RUN + 1)) + log_test "Running: $test_name" + + if $test_func; then + echo -e " ${GREEN}✓ PASSED${NC}" + TESTS_PASSED=$((TESTS_PASSED + 1)) + else + echo -e " ${RED}✗ FAILED${NC}" + TESTS_FAILED=$((TESTS_FAILED + 1)) + fi +} + +# ============================================================================ +# ADD-VALIDATORS.SH TESTS +# ============================================================================ + +test_help() { + local output + local exit_code=0 + + output=$("$ADD_VALIDATORS_SCRIPT" --help 2>&1) || exit_code=$? + + assert_exit_code 0 "$exit_code" "test_help" && \ + assert_output_contains "Usage:" "$output" "test_help" && \ + assert_output_contains "--num-validators" "$output" "test_help" && \ + assert_output_contains "--withdrawal-addresses" "$output" "test_help" && \ + assert_output_contains "--dry-run" "$output" "test_help" +} + +test_missing_num_validators() { + local output + local exit_code=0 + + output=$("$ADD_VALIDATORS_SCRIPT" 2>&1) || exit_code=$?
+ + assert_exit_code 1 "$exit_code" "test_missing_num_validators" && \ + assert_output_contains "Missing required argument: --num-validators" "$output" "test_missing_num_validators" +} + +test_invalid_num_validators() { + local output + local exit_code=0 + + output=$("$ADD_VALIDATORS_SCRIPT" --num-validators abc 2>&1) || exit_code=$? + + assert_exit_code 1 "$exit_code" "test_invalid_num_validators" && \ + assert_output_contains "must be a positive integer" "$output" "test_invalid_num_validators" +} + +test_invalid_num_validators_zero() { + local output + local exit_code=0 + + output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 0 2>&1) || exit_code=$? + + assert_exit_code 1 "$exit_code" "test_invalid_num_validators_zero" && \ + assert_output_contains "must be a positive integer" "$output" "test_invalid_num_validators_zero" +} + +test_missing_env() { + local output + local exit_code=0 + + rm -f "$REPO_ROOT/.env" + + output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 5 2>&1) || exit_code=$? + + # Restore .env for other tests + cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" + + assert_exit_code 1 "$exit_code" "test_missing_env" && \ + assert_output_contains ".env file not found" "$output" "test_missing_env" +} + +test_missing_network() { + local output + local exit_code=0 + + echo "VC=vc-lodestar" > "$REPO_ROOT/.env" # Missing NETWORK + + output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 5 2>&1) || exit_code=$? + + # Restore .env + cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" + + assert_exit_code 1 "$exit_code" "test_missing_network" && \ + assert_output_contains "NETWORK variable not set" "$output" "test_missing_network" +} + +test_missing_vc() { + local output + local exit_code=0 + + echo "NETWORK=hoodi" > "$REPO_ROOT/.env" # Missing VC + + output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 5 2>&1) || exit_code=$? 
+ + # Restore .env + cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" + + assert_exit_code 1 "$exit_code" "test_missing_vc" && \ + assert_output_contains "VC variable not set" "$output" "test_missing_vc" +} + +test_missing_charon_dir() { + local output + local exit_code=0 + + mv "$REPO_ROOT/.charon" "$REPO_ROOT/.charon.test.bak" + + output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 5 2>&1) || exit_code=$? + + # Restore .charon + mv "$REPO_ROOT/.charon.test.bak" "$REPO_ROOT/.charon" + + assert_exit_code 1 "$exit_code" "test_missing_charon_dir" && \ + assert_output_contains ".charon directory not found" "$output" "test_missing_charon_dir" +} + +test_missing_cluster_lock() { + local output + local exit_code=0 + + rm -f "$REPO_ROOT/.charon/cluster-lock.json" + + output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 5 2>&1) || exit_code=$? + + # Restore cluster-lock + cp "$TEST_FIXTURES_DIR/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" + + assert_exit_code 1 "$exit_code" "test_missing_cluster_lock" && \ + assert_output_contains "cluster-lock.json not found" "$output" "test_missing_cluster_lock" +} + +test_dry_run_basic() { + local output + local exit_code=0 + + output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 5 --dry-run 2>&1) || exit_code=$? + + assert_exit_code 0 "$exit_code" "test_dry_run_basic" && \ + assert_output_contains "DRY-RUN MODE" "$output" "test_dry_run_basic" && \ + assert_output_contains "Validators to add: 5" "$output" "test_dry_run_basic" +} + +test_dry_run_with_addresses() { + local output + local exit_code=0 + + output=$("$ADD_VALIDATORS_SCRIPT" \ + --num-validators 10 \ + --withdrawal-addresses 0x1234567890abcdef1234567890abcdef12345678 \ + --fee-recipient-addresses 0xabcdef1234567890abcdef1234567890abcdef12 \ + --dry-run 2>&1) || exit_code=$? 
+ + assert_exit_code 0 "$exit_code" "test_dry_run_with_addresses" && \ + assert_output_contains "Withdrawal addresses:" "$output" "test_dry_run_with_addresses" && \ + assert_output_contains "Fee recipient addresses:" "$output" "test_dry_run_with_addresses" +} + +test_dry_run_unverified() { + local output + local exit_code=0 + + output=$("$ADD_VALIDATORS_SCRIPT" \ + --num-validators 5 \ + --unverified \ + --dry-run 2>&1) || exit_code=$? + + assert_exit_code 0 "$exit_code" "test_dry_run_unverified" && \ + assert_output_contains "UNVERIFIED" "$output" "test_dry_run_unverified" +} + +test_dry_run_workflow() { + local output + local exit_code=0 + + output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 5 --dry-run 2>&1) || exit_code=$? + + assert_exit_code 0 "$exit_code" "test_dry_run_workflow" && \ + assert_output_contains "Running add-validators ceremony" "$output" "test_dry_run_workflow" && \ + assert_output_contains "charon alpha edit add-validators" "$output" "test_dry_run_workflow" && \ + assert_output_contains "Stopping containers" "$output" "test_dry_run_workflow" && \ + assert_output_contains "Backing up" "$output" "test_dry_run_workflow" && \ + assert_output_contains "Restarting containers" "$output" "test_dry_run_workflow" +} + +test_unknown_argument() { + local output + local exit_code=0 + + output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 5 --invalid-flag 2>&1) || exit_code=$? 
+ + assert_exit_code 1 "$exit_code" "test_unknown_argument" && \ + assert_output_contains "Unknown argument" "$output" "test_unknown_argument" +} + +# ============================================================================ +# MAIN TEST RUNNER +# ============================================================================ + +main() { + echo "" + echo "╔════════════════════════════════════════════════════════════════╗" + echo "║ Add-Validators Script - Integration Tests ║" + echo "╚════════════════════════════════════════════════════════════════╝" + echo "" + + # Setup test environment + log_info "Setting up test environment..." + setup_test_env + + echo "" + echo "─────────────────────────────────────────────────────────────────" + echo " ADD-VALIDATORS.SH TESTS" + echo "─────────────────────────────────────────────────────────────────" + echo "" + + run_test "add-validators: --help shows usage" test_help + run_test "add-validators: error when --num-validators missing" test_missing_num_validators + run_test "add-validators: error when --num-validators invalid" test_invalid_num_validators + run_test "add-validators: error when --num-validators is zero" test_invalid_num_validators_zero + run_test "add-validators: error when .env missing" test_missing_env + run_test "add-validators: error when NETWORK missing" test_missing_network + run_test "add-validators: error when VC missing" test_missing_vc + run_test "add-validators: error when .charon dir missing" test_missing_charon_dir + run_test "add-validators: error when cluster-lock missing" test_missing_cluster_lock + run_test "add-validators: dry-run basic" test_dry_run_basic + run_test "add-validators: dry-run with addresses" test_dry_run_with_addresses + run_test "add-validators: dry-run with --unverified" test_dry_run_unverified + run_test "add-validators: dry-run full workflow" test_dry_run_workflow + run_test "add-validators: error for unknown argument" test_unknown_argument + + echo "" + echo 
"═════════════════════════════════════════════════════════════════" + echo "" + + if [ "$TESTS_FAILED" -eq 0 ]; then + echo -e "${GREEN}All $TESTS_PASSED tests passed!${NC}" + echo "" + exit 0 + else + echo -e "${RED}$TESTS_FAILED of $TESTS_RUN tests failed${NC}" + echo "" + exit 1 + fi +} + +main "$@" diff --git a/scripts/edit/recreate-private-keys/test/fixtures/.charon/charon-enr-private-key b/scripts/edit/recreate-private-keys/test/fixtures/.charon/charon-enr-private-key new file mode 100644 index 00000000..372a826b --- /dev/null +++ b/scripts/edit/recreate-private-keys/test/fixtures/.charon/charon-enr-private-key @@ -0,0 +1 @@ +0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef diff --git a/scripts/edit/recreate-private-keys/test/fixtures/.charon/cluster-lock.json b/scripts/edit/recreate-private-keys/test/fixtures/.charon/cluster-lock.json new file mode 100644 index 00000000..d3be61be --- /dev/null +++ b/scripts/edit/recreate-private-keys/test/fixtures/.charon/cluster-lock.json @@ -0,0 +1,55 @@ +{ + "cluster_definition": { + "name": "TestCluster", + "num_validators": 1, + "threshold": 3, + "operators": [ + { + "address": "0x1111111111111111111111111111111111111111", + "enr": "enr:-HW4QOldBest...operator0" + }, + { + "address": "0x2222222222222222222222222222222222222222", + "enr": "enr:-HW4QNewOper...operator1" + }, + { + "address": "0x3333333333333333333333333333333333333333", + "enr": "enr:-HW4QThird...operator2" + }, + { + "address": "0x4444444444444444444444444444444444444444", + "enr": "enr:-HW4QFourth...operator3" + } + ] + }, + "distributed_validators": [ + { + "distributed_public_key": "0xa9fb2be415318eb77709f7c378ab26025371c0b11213d93fd662ffdb06e77a05c7b04573a478e9d5c0c0fd98078965ef", + "public_shares": [ + "0xa3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", + "0x8afba316fdcf51e25a89e05e17377b8c72fd465c95346df4ed5694f295faa2ce061e14e579c5bc01a468dbbb191c58e8", + 
"0xa1aeebe0980509f5f8d8d424beb89004a967da8d8093248f64eb27c4ee5d22ba9c0f157025f551f47b31833f8bc585f8", + "0xa6c283c82cd0b65436861a149fb840849d06ded1dd8d2f900afb358c6a4232004309120f00a553cdccd8a43f6b743c82" + ] + } + ], + "operators": [ + { + "address": "0x1111111111111111111111111111111111111111", + "enr": "enr:-HW4QOldBest...operator0" + }, + { + "address": "0x2222222222222222222222222222222222222222", + "enr": "enr:-HW4QNewOper...operator1" + }, + { + "address": "0x3333333333333333333333333333333333333333", + "enr": "enr:-HW4QThird...operator2" + }, + { + "address": "0x4444444444444444444444444444444444444444", + "enr": "enr:-HW4QFourth...operator3" + } + ], + "lock_hash": "0xe9dbc87171f99bd8b6f348f6bf314291651933256e712ace299190f5e04e7795" +} From 997336dfb109f26ac531732e5103c66e43a07f60 Mon Sep 17 00:00:00 2001 From: Andrei Smirnov Date: Tue, 17 Feb 2026 11:46:06 +0300 Subject: [PATCH 06/12] Unified look and feel --- scripts/edit/add-validators/add-validators.sh | 2 +- scripts/edit/recreate-private-keys/README.md | 5 +- scripts/edit/replace-operator/README.md | 66 ++++++++++++++++--- scripts/edit/replace-operator/new-operator.sh | 32 +++++---- .../replace-operator/remaining-operator.sh | 6 +- scripts/edit/vc/README.md | 61 +++++++++++++++++ 6 files changed, 144 insertions(+), 28 deletions(-) create mode 100644 scripts/edit/vc/README.md diff --git a/scripts/edit/add-validators/add-validators.sh b/scripts/edit/add-validators/add-validators.sh index bd4b0e10..40c4e534 100755 --- a/scripts/edit/add-validators/add-validators.sh +++ b/scripts/edit/add-validators/add-validators.sh @@ -302,7 +302,7 @@ if [ "$DRY_RUN" = false ]; then exit 1 fi else - echo " [DRY-RUN] docker ${DOCKER_ARGS[*]}" + echo " [DRY-RUN] docker run --rm -it ... 
charon alpha edit add-validators --num-validators=$NUM_VALIDATORS --output-dir=$OUTPUT_DIR" fi echo "" diff --git a/scripts/edit/recreate-private-keys/README.md b/scripts/edit/recreate-private-keys/README.md index cc8b6a6e..c81ccaa3 100644 --- a/scripts/edit/recreate-private-keys/README.md +++ b/scripts/edit/recreate-private-keys/README.md @@ -39,7 +39,10 @@ The script will: ## Options -- `--dry-run` - Preview without executing +| Option | Required | Description | +|--------|----------|-------------| +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | ## Current Limitations diff --git a/scripts/edit/replace-operator/README.md b/scripts/edit/replace-operator/README.md index f33d8d68..e24f6343 100644 --- a/scripts/edit/replace-operator/README.md +++ b/scripts/edit/replace-operator/README.md @@ -2,9 +2,23 @@ Scripts to automate the [replace-operator workflow](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/replace-operator) for Charon distributed validators. +## Overview + +These scripts help operators replace a single operator in an existing distributed validator cluster. 
This is useful for: + +- **Operator rotation**: Replacing an operator who is leaving the cluster +- **Infrastructure migration**: Moving an operator to new infrastructure +- **Recovery**: Replacing an operator whose keys may have been compromised + +There are two scripts for the two roles involved: + +- **`remaining-operator.sh`** - For operators staying in the cluster (runs the ceremony) +- **`new-operator.sh`** - For the new operator joining the cluster + ## Prerequisites - `.env` file with `NETWORK` and `VC` variables set +- `.charon` directory with `cluster-lock.json` and `charon-enr-private-key` - Docker running - `jq` installed @@ -18,14 +32,30 @@ Automates the complete workflow for operators staying in the cluster: --operator-index 2 ``` -**Options:** -- `--new-enr ` - ENR of the new operator (required) -- `--operator-index ` - Index of operator being replaced (required) -- `--skip-export` - Skip ASDB export if already done -- `--dry-run` - Preview without executing +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--new-enr ` | Yes | ENR of the new operator | +| `--operator-index ` | Yes | Index of operator being replaced | +| `--skip-export` | No | Skip ASDB export if already done | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +### Workflow + +1. **Export ASDB** - Export anti-slashing database from running VC +2. **Run ceremony** - Execute `charon edit replace-operator` with new ENR +3. **Update ASDB** - Replace pubkeys in exported ASDB to match new cluster-lock +4. **Stop containers** - Stop charon and VC +5. **Backup and replace** - Backup old cluster-lock, install new one +6. **Import ASDB** - Import updated anti-slashing database +7. **Restart containers** - Start charon and VC with new configuration ## For New Operators +Two-step workflow for the new operator joining the cluster. 
+ **Step 1:** Generate ENR and share with remaining operators: ```bash @@ -35,15 +65,31 @@ Automates the complete workflow for operators staying in the cluster: **Step 2:** After receiving cluster-lock from remaining operators: ```bash -# curl -o received-cluster-lock.json https://example.com/cluster-lock.json ./scripts/edit/replace-operator/new-operator.sh --cluster-lock ./received-cluster-lock.json ``` -**Options:** -- `--cluster-lock ` - Path to new cluster-lock.json -- `--generate-enr` - Generate new ENR private key -- `--dry-run` - Preview without executing +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--cluster-lock ` | No | Path to new cluster-lock.json | +| `--generate-enr` | No | Generate new ENR private key | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +## Current Limitations + +- The new cluster configuration will not be reflected on the Obol Launchpad +- The new cluster will have a new cluster hash (different observability identifier) +- Only one operator can be replaced at a time ## Testing See [test/README.md](test/README.md) for integration tests. 
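Both roles above ultimately depend on the regenerated `cluster-lock.json` carrying the new operator's ENR. A minimal sanity check before sharing the file can be sketched as follows; the lock contents and ENR are illustrative stand-ins, and `grep` is used only to keep the sketch dependency-free (the actual scripts in this repo use `jq` for lock inspection):

```shell
#!/usr/bin/env bash
# Sketch: confirm the new operator's ENR is present in the regenerated
# cluster-lock before sharing it with the incoming operator.
# EXPECTED_ENR and the lock file below are stand-ins, not real values.
set -euo pipefail

EXPECTED_ENR="enr:-HW4QNewOper...operator1"   # ENR the new operator shared
LOCK="sample-cluster-lock.json"               # stand-in for .charon/cluster-lock.json

# Write a tiny stand-in lock so the sketch is self-contained.
cat > "$LOCK" <<'EOF'
{"operators":[{"enr":"enr:-HW4QOldBest...operator0"},{"enr":"enr:-HW4QNewOper...operator1"}]}
EOF

# Fixed-string match (-F) avoids regex surprises in ENR characters.
if grep -qF "\"$EXPECTED_ENR\"" "$LOCK"; then
  echo "OK: new operator ENR present in $LOCK"
else
  echo "MISSING: $EXPECTED_ENR not found in $LOCK" >&2
  exit 1
fi
```

Against a real cluster, the same check would be pointed at `.charon/cluster-lock.json` with the ENR the new operator actually shared.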
+ +## Related + +- [Add-Validators Workflow](../add-validators/README.md) +- [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md) +- [Anti-Slashing DB Scripts](../vc/README.md) +- [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/replace-operator) diff --git a/scripts/edit/replace-operator/new-operator.sh b/scripts/edit/replace-operator/new-operator.sh index 17583d14..c5a7adc1 100755 --- a/scripts/edit/replace-operator/new-operator.sh +++ b/scripts/edit/replace-operator/new-operator.sh @@ -48,6 +48,9 @@ CLUSTER_LOCK_PATH="" GENERATE_ENR=false DRY_RUN=false +# Output directories +BACKUP_DIR="./backups" + # Colors for output RED='\033[0;31m' GREEN='\033[0;32m' @@ -128,10 +131,11 @@ echo "║ Replace-Operator Workflow - NEW OPERATOR ║" echo "╚════════════════════════════════════════════════════════════════╝" echo "" -# Check for .env file +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + if [ ! -f .env ]; then log_error ".env file not found. Please create one with NETWORK and VC variables." - log_info "Copy from a sample: cp .env.sample.hoodi .env" exit 1 fi @@ -152,7 +156,7 @@ if ! 
docker info >/dev/null 2>&1; then exit 1 fi -log_info "Configuration:" +log_info "Prerequisites OK" log_info " Network: $NETWORK" log_info " Validator Client: $VC" @@ -187,9 +191,9 @@ if [ "$GENERATE_ENR" = true ]; then if [ -f .charon/charon-enr-private-key ]; then echo "" - echo "╔════════════════════════════════════════════════════════════════╗" - echo "║ SHARE YOUR ENR WITH THE REMAINING OPERATORS ║" - echo "╚════════════════════════════════════════════════════════════════╝" + log_warn "╔════════════════════════════════════════════════════════════════╗" + log_warn "║ SHARE YOUR ENR WITH THE REMAINING OPERATORS ║" + log_warn "╚════════════════════════════════════════════════════════════════╝" echo "" # Extract and display the ENR @@ -217,7 +221,7 @@ if [ "$GENERATE_ENR" = true ]; then exit 0 fi -# Step 2: Check prerequisites +# Step 1: Check prerequisites log_step "Step 1: Checking prerequisites..." if [ "$DRY_RUN" = false ]; then @@ -263,7 +267,7 @@ log_info "Prerequisites OK" echo "" -# Step 3: Stop any running containers +# Step 2: Stop any running containers log_step "Step 2: Stopping any running containers..." # Stop containers if running (ignore errors if not running) @@ -273,15 +277,15 @@ log_info "Containers stopped" echo "" -# Step 4: Install cluster-lock if provided +# Step 3: Install cluster-lock if provided if [ -n "$CLUSTER_LOCK_PATH" ]; then log_step "Step 3: Installing new cluster-lock..." 
if [ -f .charon/cluster-lock.json ]; then TIMESTAMP=$(date +%Y%m%d_%H%M%S) - mkdir -p ./backups - run_cmd cp .charon/cluster-lock.json "./backups/cluster-lock.json.$TIMESTAMP" - log_info "Old cluster-lock backed up to ./backups/cluster-lock.json.$TIMESTAMP" + mkdir -p "$BACKUP_DIR" + run_cmd cp .charon/cluster-lock.json "$BACKUP_DIR/cluster-lock.json.$TIMESTAMP" + log_info "Old cluster-lock backed up to $BACKUP_DIR/cluster-lock.json.$TIMESTAMP" fi run_cmd cp "$CLUSTER_LOCK_PATH" .charon/cluster-lock.json @@ -293,7 +297,7 @@ fi echo "" -# Step 5: Verify cluster-lock matches our ENR +# Step 4: Verify cluster-lock matches our ENR log_step "Step 4: Verifying cluster-lock configuration..." if [ "$DRY_RUN" = false ] && [ -f .charon/cluster-lock.json ]; then @@ -321,7 +325,7 @@ fi echo "" -# Step 6: Start containers +# Step 5: Start containers log_step "Step 5: Starting containers..." run_cmd docker compose up -d charon "$VC" diff --git a/scripts/edit/replace-operator/remaining-operator.sh b/scripts/edit/replace-operator/remaining-operator.sh index 1031791e..5943519d 100755 --- a/scripts/edit/replace-operator/remaining-operator.sh +++ b/scripts/edit/replace-operator/remaining-operator.sh @@ -31,7 +31,7 @@ # -h, --help Show this help message # # Example: -# ./scripts/edit/remaining-operator.sh \ +# ./scripts/edit/replace-operator/remaining-operator.sh \ # --new-enr "enr:-..." \ # --operator-index 2 @@ -79,7 +79,7 @@ Options: -h, --help Show this help message Example: - ./scripts/edit/remaining-operator.sh \ + ./scripts/edit/replace-operator/remaining-operator.sh \ --new-enr "enr:-..." \ --operator-index 2 @@ -324,3 +324,5 @@ log_info " 1. Verify charon is syncing with peers: docker compose logs -f charo log_info " 2. Verify VC is running: docker compose logs -f $VC" log_info " 3. Share the new cluster-lock.json with the NEW operator" echo "" +log_warn "Keep the backup until you've verified normal operation for several epochs." 
+echo "" diff --git a/scripts/edit/vc/README.md b/scripts/edit/vc/README.md new file mode 100644 index 00000000..be580999 --- /dev/null +++ b/scripts/edit/vc/README.md @@ -0,0 +1,61 @@ +# Anti-Slashing Database Scripts + +Scripts to export, import, and update validator anti-slashing databases (ASDB) in [EIP-3076](https://eips.ethereum.org/EIPS/eip-3076) format for Charon distributed validators. + +## Overview + +When performing cluster edit operations (replace-operator, recreate-private-keys), the anti-slashing database must be exported, updated with new pubkeys, and re-imported to prevent slashing violations. These scripts automate that process across all supported validator clients. + +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set +- Docker running +- `jq` installed (for `update-anti-slashing-db.sh`) + +## Scripts + +### Router Scripts + +| Script | Description | +|--------|-------------| +| `export_asdb.sh` | Routes to the appropriate VC-specific export script based on `VC` env var | +| `import_asdb.sh` | Routes to the appropriate VC-specific import script based on `VC` env var | + +Usage: + +```bash +# Export ASDB from running VC container +VC=vc-lodestar ./scripts/edit/vc/export_asdb.sh --output-file ./asdb-export/slashing-protection.json + +# Import ASDB into stopped VC container +VC=vc-lodestar ./scripts/edit/vc/import_asdb.sh --input-file ./asdb-export/slashing-protection.json +``` + +### Update Anti-Slashing DB + +Updates pubkeys in an EIP-3076 file by mapping them between source and target cluster-lock files. 
+ +```bash +./scripts/edit/vc/update-anti-slashing-db.sh <asdb-file> <source-cluster-lock> <target-cluster-lock> +``` + +### Supported Validator Clients + +Each client has its own `export_asdb.sh` and `import_asdb.sh` in a subdirectory: + +| Client | Directory | Export requires | Import requires | +|--------|-----------|-----------------|-----------------| +| Lodestar | `lodestar/` | Container running | Container stopped | +| Prysm | `prysm/` | Container running | Container stopped | +| Teku | `teku/` | Container running | Container stopped | +| Nimbus | `nimbus/` | Container running | Container stopped | + +## Testing + +See [test/README.md](test/README.md) for integration tests. + +## Related + +- [Replace-Operator Workflow](../replace-operator/README.md) +- [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md) +- [Add-Validators Workflow](../add-validators/README.md) From 34ffc35c8eb7de4e50ab108f44ae17af8b1c2c0c Mon Sep 17 00:00:00 2001 From: Andrei Smirnov Date: Tue, 17 Feb 2026 11:54:00 +0300 Subject: [PATCH 07/12] Added add-operators scripts --- scripts/edit/add-operators/README.md | 102 +++++ .../edit/add-operators/existing-operator.sh | 340 +++++++++++++++ scripts/edit/add-operators/new-operator.sh | 388 ++++++++++++++++++ 3 files changed, 830 insertions(+) create mode 100644 scripts/edit/add-operators/README.md create mode 100755 scripts/edit/add-operators/existing-operator.sh create mode 100755 scripts/edit/add-operators/new-operator.sh diff --git a/scripts/edit/add-operators/README.md b/scripts/edit/add-operators/README.md new file mode 100644 index 00000000..85e6ada9 --- /dev/null +++ b/scripts/edit/add-operators/README.md @@ -0,0 +1,102 @@ +# Add-Operators Scripts + +Scripts to automate the [add-operators ceremony](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-operators) for Charon distributed validators. + +## Overview + +These scripts help operators expand an existing distributed validator cluster by adding new operators.
This is useful for: + +- **Cluster expansion**: Adding more operators for increased redundancy +- **Decentralization**: Distributing validator duties across more parties +- **Resilience**: Expanding the operator set while maintaining the same validators + +**Important**: This is a coordinated ceremony. All operators (existing AND new) must run their respective scripts simultaneously to complete the process. + +> Warning: This is an alpha feature in Charon and is not yet recommended for production use. + +There are two scripts for the two roles involved: + +- **`existing-operator.sh`** - For operators already in the cluster +- **`new-operator.sh`** - For new operators joining the cluster + +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set +- Docker running +- `jq` installed +- **Existing operators**: `.charon` directory with `cluster-lock.json` and `validator_keys` +- **New operators**: Charon ENR private key (generated via `--generate-enr`) + +## For Existing Operators + +Automates the complete workflow for operators already in the cluster: + +```bash +./scripts/edit/add-operators/existing-operator.sh \ + --new-operator-enrs "enr:-..." +``` + +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--new-operator-enrs ` | Yes | Comma-separated ENRs of new operators | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +### Workflow + +1. **Export ASDB** - Export anti-slashing database from running VC +2. **Run ceremony** - P2P coordinated add-operators ceremony with all operators +3. **Update ASDB** - Replace pubkeys in exported ASDB to match new cluster-lock +4. **Stop containers** - Stop charon and VC +5. **Backup and replace** - Backup current `.charon/` to `./backups/`, install new configuration +6. **Import ASDB** - Import updated anti-slashing database +7. 
**Restart containers** - Start charon and VC with new configuration + +## For New Operators + +Two-step workflow for new operators joining the cluster. + +**Step 1:** Generate ENR and share with existing operators: + +```bash +./scripts/edit/add-operators/new-operator.sh --generate-enr +``` + +**Step 2:** After receiving the existing cluster-lock, run the ceremony: + +```bash +./scripts/edit/add-operators/new-operator.sh \ + --new-operator-enrs "enr:-...,enr:-..." \ + --cluster-lock ./received-cluster-lock.json +``` + +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--new-operator-enrs ` | For ceremony | Comma-separated ENRs of ALL new operators | +| `--cluster-lock ` | For ceremony | Path to existing cluster-lock.json | +| `--generate-enr` | No | Generate new ENR private key | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +## Current Limitations + +- The new cluster configuration will not be reflected on the Obol Launchpad +- The cluster will have a new cluster hash (different observability identifier) +- All operators (existing and new) must participate; no partial participation option +- Cluster threshold remains unchanged + +## Testing + +See [test/README.md](test/README.md) for integration tests. 
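The ASDB pubkey update in step 3 of the workflow above can be sketched with `jq`. This is a simplified illustration, not the actual `update-anti-slashing-db.sh` script: the index-based mapping and the `distributed_validators`/`distributed_public_key` field names are assumptions about the cluster-lock format, and the fixture files are hypothetical stand-ins for the real export and ceremony outputs.

```bash
#!/usr/bin/env bash
# Sketch: remap each EIP-3076 pubkey from the pre-ceremony cluster-lock to
# the pubkey at the same validator index in the post-ceremony cluster-lock.
set -euo pipefail
workdir=$(mktemp -d)

# Minimal demo fixtures (the real files come from export_asdb.sh and the ceremony).
cat > "$workdir/asdb.json" <<'EOF'
{"metadata":{"interchange_format_version":"5"},
 "data":[{"pubkey":"0xaaa","signed_blocks":[],"signed_attestations":[]}]}
EOF
cat > "$workdir/src-lock.json" <<'EOF'
{"distributed_validators":[{"distributed_public_key":"0xaaa"}]}
EOF
cat > "$workdir/dst-lock.json" <<'EOF'
{"distributed_validators":[{"distributed_public_key":"0xbbb"}]}
EOF

# Build an old-pubkey -> new-pubkey map by index, then rewrite the ASDB entries.
jq -n \
  --slurpfile asdb "$workdir/asdb.json" \
  --slurpfile src "$workdir/src-lock.json" \
  --slurpfile dst "$workdir/dst-lock.json" '
  [$src[0].distributed_validators[].distributed_public_key] as $from
| [$dst[0].distributed_validators[].distributed_public_key] as $to
| ([range($from | length) | {key: $from[.], value: $to[.]}] | from_entries) as $map
| $asdb[0] | .data |= map(.pubkey = ($map[.pubkey] // .pubkey))
' > "$workdir/asdb.updated.json"

jq -r '.data[0].pubkey' "$workdir/asdb.updated.json"   # prints 0xbbb
```

The real script may validate inputs and handle edge cases differently; this only shows the core idea of remapping pubkeys by validator index while leaving signed blocks and attestations untouched.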
+ +## Related + +- [Add-Validators Workflow](../add-validators/README.md) +- [Replace-Operator Workflow](../replace-operator/README.md) +- [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md) +- [Anti-Slashing DB Scripts](../vc/README.md) +- [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-operators) diff --git a/scripts/edit/add-operators/existing-operator.sh b/scripts/edit/add-operators/existing-operator.sh new file mode 100755 index 00000000..990e7c9d --- /dev/null +++ b/scripts/edit/add-operators/existing-operator.sh @@ -0,0 +1,340 @@ +#!/usr/bin/env bash + +# Add-Operators Script for EXISTING Operators +# +# This script automates the add-operators ceremony for operators who are +# already in the cluster. It handles the full workflow including ASDB +# export/update/import around the ceremony. +# +# Reference: https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-operators +# +# IMPORTANT: This is a CEREMONY - ALL operators (existing AND new) must run +# their respective scripts simultaneously. The ceremony coordinates between +# all operators to generate new key shares for the expanded operator set. +# +# The workflow: +# 1. Export the current anti-slashing database +# 2. Run the add-operators ceremony (all operators simultaneously) +# 3. Update the exported ASDB with new pubkeys +# 4. Stop containers +# 5. Backup and replace .charon directory +# 6. Import the updated ASDB +# 7. 
Restart containers +# +# Prerequisites: +# - .env file with NETWORK and VC variables set +# - .charon directory with cluster-lock.json and validator_keys +# - Docker and docker compose installed and running +# - VC container running (for ASDB export) +# - All operators must participate in the ceremony +# +# Usage: +# ./scripts/edit/add-operators/existing-operator.sh [OPTIONS] +# +# Options: +# --new-operator-enrs Comma-separated ENRs of new operators (required) +# --dry-run Show what would be done without executing +# -h, --help Show this help message + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)" +cd "$REPO_ROOT" + +# Default values +NEW_OPERATOR_ENRS="" +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" +ASDB_EXPORT_DIR="./asdb-export" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/add-operators/existing-operator.sh [OPTIONS] + +Automates the add-operators ceremony for operators already in the cluster. +This is a CEREMONY that ALL operators (existing AND new) must run simultaneously. + +Options: + --new-operator-enrs Comma-separated ENRs of new operators (required) + --dry-run Show what would be done without executing + -h, --help Show this help message + +Example: + # Add one new operator + ./scripts/edit/add-operators/existing-operator.sh \ + --new-operator-enrs "enr:-..." + + # Add multiple new operators + ./scripts/edit/add-operators/existing-operator.sh \ + --new-operator-enrs "enr:-...,enr:-..." 
+ +Prerequisites: + - .env file with NETWORK and VC variables set + - .charon directory with cluster-lock.json and validator_keys + - Docker and docker compose installed and running + - VC container running (for ASDB export) + - All operators must participate in the ceremony +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --new-operator-enrs) + NEW_OPERATOR_ENRS="$2" + shift 2 + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +# Validate required arguments +if [ -z "$NEW_OPERATOR_ENRS" ]; then + log_error "Missing required argument: --new-operator-enrs" + echo "Use --help for usage information" + exit 1 +fi + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Add-Operators Workflow - EXISTING OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +source .env + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! -d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! -f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if [ ! -d .charon/validator_keys ]; then + log_error ".charon/validator_keys directory not found" + log_info "All operators must have their current validator private key shares." + exit 1 +fi + +if ! 
docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" +log_info " New operator ENRs: ${NEW_OPERATOR_ENRS:0:80}..." + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +# Show current cluster info +if [ -f .charon/cluster-lock.json ]; then + CURRENT_VALIDATORS=$(jq '.distributed_validators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + CURRENT_OPERATORS=$(jq '.operators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + log_info " Current cluster: $CURRENT_VALIDATORS validator(s), $CURRENT_OPERATORS operator(s)" +fi + +echo "" + +# Step 1: Export anti-slashing database +log_step "Step 1: Exporting anti-slashing database..." + +# Check VC container is running (skip check in dry-run mode) +if [ "$DRY_RUN" = false ]; then + if ! docker compose ps "$VC" 2>/dev/null | grep -q Up; then + log_error "VC container ($VC) is not running. Start it first:" + log_error " docker compose up -d $VC" + exit 1 + fi +else + log_warn "Would check that $VC container is running" +fi + +mkdir -p "$ASDB_EXPORT_DIR" + +run_cmd VC="$VC" "$SCRIPT_DIR/../vc/export_asdb.sh" \ + --output-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database exported to $ASDB_EXPORT_DIR/slashing-protection.json" + +echo "" + +# Step 2: Run ceremony +log_step "Step 2: Running add-operators ceremony..." + +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL operators must run this ceremony simultaneously ║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit add-operators" +log_info " New operator ENRs: ${NEW_OPERATOR_ENRS:0:80}..." 
+log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all operators to connect..." +echo "" + +if [ "$DRY_RUN" = false ]; then + docker run --rm -it \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + alpha edit add-operators \ + --new-operator-enrs="$NEW_OPERATOR_ENRS" \ + --output-dir=/opt/charon/output + + # Verify ceremony output + if [ -f "$OUTPUT_DIR/cluster-lock.json" ]; then + log_info "Ceremony completed successfully!" + NEW_VALIDATORS=$(jq '.distributed_validators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + NEW_OPERATORS=$(jq '.operators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + log_info "New cluster-lock.json generated with $NEW_VALIDATORS validator(s), $NEW_OPERATORS operator(s)" + else + log_error "Ceremony may have failed - no cluster-lock.json in $OUTPUT_DIR/" + exit 1 + fi +else + echo " [DRY-RUN] docker run --rm -it ... charon alpha edit add-operators --new-operator-enrs=... --output-dir=$OUTPUT_DIR" +fi + +echo "" + +# Step 3: Update ASDB pubkeys +log_step "Step 3: Updating anti-slashing database pubkeys..." + +run_cmd "$SCRIPT_DIR/../vc/update-anti-slashing-db.sh" \ + "$ASDB_EXPORT_DIR/slashing-protection.json" \ + ".charon/cluster-lock.json" \ + "$OUTPUT_DIR/cluster-lock.json" + +log_info "Anti-slashing database pubkeys updated" + +echo "" + +# Step 4: Stop containers +log_step "Step 4: Stopping containers..." + +run_cmd docker compose stop "$VC" charon + +log_info "Containers stopped" + +echo "" + +# Step 5: Backup and replace .charon +log_step "Step 5: Backing up and replacing .charon directory..." 
+ +TIMESTAMP=$(date +%Y%m%d_%H%M%S) +mkdir -p "$BACKUP_DIR" + +run_cmd mv .charon "$BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info "Current .charon backed up to $BACKUP_DIR/.charon-backup.$TIMESTAMP" + +run_cmd mv "$OUTPUT_DIR" .charon +log_info "New cluster configuration installed to .charon/" + +echo "" + +# Step 6: Import updated ASDB +log_step "Step 6: Importing updated anti-slashing database..." + +run_cmd VC="$VC" "$SCRIPT_DIR/../vc/import_asdb.sh" \ + --input-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database imported" + +echo "" + +# Step 7: Restart containers +log_step "Step 7: Restarting containers..." + +run_cmd docker compose up -d charon "$VC" + +log_info "Containers restarted" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Add-Operators Workflow COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Old .charon backed up to: $BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info " - New cluster configuration installed in: .charon/" +log_info " - Anti-slashing database updated and imported" +log_info " - Containers restarted: charon, $VC" +echo "" +log_info "Next steps:" +log_info " 1. Check charon logs: docker compose logs -f charon" +log_info " 2. Verify all nodes connected and healthy" +log_info " 3. Verify cluster is producing attestations" +log_info " 4. Confirm new operators have joined successfully" +echo "" +log_warn "Keep the backup until you've verified normal operation for several epochs." 
+echo "" +log_info "Current limitations:" +log_info " - The new configuration will not be reflected on the Obol Launchpad" +log_info " - The cluster will have a new cluster hash (different observability ID)" +echo "" diff --git a/scripts/edit/add-operators/new-operator.sh b/scripts/edit/add-operators/new-operator.sh new file mode 100755 index 00000000..e2a6d102 --- /dev/null +++ b/scripts/edit/add-operators/new-operator.sh @@ -0,0 +1,388 @@ +#!/usr/bin/env bash + +# Add-Operators Script for NEW Operators +# +# This script helps new operators join an existing cluster during the +# add-operators ceremony. +# +# Reference: https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-operators +# +# IMPORTANT: This is a CEREMONY - ALL operators (existing AND new) must run +# their respective scripts simultaneously. +# +# Two-step workflow: +# 1. Generate your ENR and share it with existing operators +# 2. Run the ceremony with the cluster-lock received from existing operators +# +# Prerequisites: +# - .env file with NETWORK and VC variables set +# - For --generate-enr: Docker installed +# - For ceremony: .charon/charon-enr-private-key must exist +# - For ceremony: Cluster-lock.json received from existing operators +# +# Usage: +# ./scripts/edit/add-operators/new-operator.sh [OPTIONS] +# +# Options: +# --new-operator-enrs Comma-separated ENRs of ALL new operators (required for ceremony) +# --cluster-lock Path to existing cluster-lock.json (required for ceremony) +# --generate-enr Generate a new ENR private key if not present +# --dry-run Show what would be done without executing +# -h, --help Show this help message +# +# Examples: +# # Step 1: Generate ENR and share with existing operators +# ./scripts/edit/add-operators/new-operator.sh --generate-enr +# +# # Step 2: Run ceremony with all new operator ENRs +# ./scripts/edit/add-operators/new-operator.sh \ +# --new-operator-enrs "enr:-...,enr:-..." 
\ +# --cluster-lock ./received-cluster-lock.json + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)" +cd "$REPO_ROOT" + +# Default values +NEW_OPERATOR_ENRS="" +CLUSTER_LOCK_PATH="" +GENERATE_ENR=false +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/add-operators/new-operator.sh [OPTIONS] + +Helps new operators join an existing cluster during the add-operators ceremony. +This is a CEREMONY that ALL operators (existing AND new) must run simultaneously. + +Options: + --new-operator-enrs Comma-separated ENRs of ALL new operators (required for ceremony) + --cluster-lock Path to existing cluster-lock.json (required for ceremony) + --generate-enr Generate a new ENR private key if not present + --dry-run Show what would be done without executing + -h, --help Show this help message + +Examples: + # Step 1: Generate ENR and share with existing operators + ./scripts/edit/add-operators/new-operator.sh --generate-enr + + # Step 2: Run ceremony with cluster-lock and all new operator ENRs + ./scripts/edit/add-operators/new-operator.sh \ + --new-operator-enrs "enr:-...,enr:-..." 
\ + --cluster-lock ./received-cluster-lock.json + +Prerequisites: + - .env file with NETWORK and VC variables set + - For --generate-enr: Docker installed + - For ceremony: .charon/charon-enr-private-key must exist + - For ceremony: Cluster-lock.json received from existing operators +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --new-operator-enrs) + NEW_OPERATOR_ENRS="$2" + shift 2 + ;; + --cluster-lock) + CLUSTER_LOCK_PATH="$2" + shift 2 + ;; + --generate-enr) + GENERATE_ENR=true + shift + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Add-Operators Workflow - NEW OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +source .env + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +echo "" + +# Handle ENR generation mode +if [ "$GENERATE_ENR" = true ]; then + log_step "Step 1: Generating ENR private key..." 
+ + if [ -f .charon/charon-enr-private-key ]; then + log_warn "ENR private key already exists at .charon/charon-enr-private-key" + log_warn "Skipping generation to avoid overwriting existing key." + log_info "If you want to generate a new key, remove the existing file first." + else + mkdir -p .charon + + if [ "$DRY_RUN" = false ]; then + docker run --rm \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + create enr + else + echo " [DRY-RUN] docker run --rm ... charon create enr" + fi + + log_info "ENR private key generated" + fi + + if [ -f .charon/charon-enr-private-key ]; then + echo "" + log_warn "╔════════════════════════════════════════════════════════════════╗" + log_warn "║ SHARE YOUR ENR WITH THE EXISTING OPERATORS ║" + log_warn "╚════════════════════════════════════════════════════════════════╝" + echo "" + + # Extract and display the ENR + if [ "$DRY_RUN" = false ]; then + ENR=$(docker run --rm \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + enr 2>/dev/null || echo "") + + if [ -n "$ENR" ]; then + log_info "Your ENR:" + echo "" + echo "$ENR" + echo "" + fi + fi + + log_info "Send this ENR to the existing operators." + log_info "They will use it with: --new-operator-enrs \"\"" + log_info "" + log_info "You will also need the existing cluster-lock.json from them." 
+ log_info "" + log_info "After receiving it, run the ceremony with:" + log_info " ./scripts/edit/add-operators/new-operator.sh \\" + log_info " --new-operator-enrs \"\" \\" + log_info " --cluster-lock " + fi + + exit 0 +fi + +# Ceremony mode: validate required arguments +if [ -z "$NEW_OPERATOR_ENRS" ]; then + log_error "Missing required argument: --new-operator-enrs" + echo "Use --help for usage information" + exit 1 +fi + +if [ -z "$CLUSTER_LOCK_PATH" ]; then + log_error "Missing required argument: --cluster-lock" + echo "Use --help for usage information" + exit 1 +fi + +# Step 1: Check ceremony prerequisites +log_step "Step 1: Checking ceremony prerequisites..." + +if [ "$DRY_RUN" = false ]; then + if [ ! -d .charon ]; then + log_error ".charon directory not found" + log_info "First generate your ENR with: ./scripts/edit/add-operators/new-operator.sh --generate-enr" + exit 1 + fi + + if [ ! -f .charon/charon-enr-private-key ]; then + log_error ".charon/charon-enr-private-key not found" + log_info "First generate your ENR with: ./scripts/edit/add-operators/new-operator.sh --generate-enr" + exit 1 + fi + + if [ ! -f "$CLUSTER_LOCK_PATH" ]; then + log_error "Cluster-lock file not found: $CLUSTER_LOCK_PATH" + exit 1 + fi + + # Validate cluster-lock is valid JSON + if ! jq empty "$CLUSTER_LOCK_PATH" 2>/dev/null; then + log_error "Cluster-lock file is not valid JSON: $CLUSTER_LOCK_PATH" + exit 1 + fi +else + if [ ! -d .charon ]; then + log_warn "Would check for .charon directory (not found)" + fi + if [ ! -f .charon/charon-enr-private-key ]; then + log_warn "Would check for .charon/charon-enr-private-key (not found)" + fi +fi + +log_info "Using cluster-lock: $CLUSTER_LOCK_PATH" +log_info "New operator ENRs: ${NEW_OPERATOR_ENRS:0:80}..." 
+ +# Show cluster info +if [ "$DRY_RUN" = false ] && [ -f "$CLUSTER_LOCK_PATH" ]; then + NUM_VALIDATORS=$(jq '.distributed_validators | length' "$CLUSTER_LOCK_PATH" 2>/dev/null || echo "?") + NUM_OPERATORS=$(jq '.operators | length' "$CLUSTER_LOCK_PATH" 2>/dev/null || echo "?") + log_info "Cluster info: $NUM_VALIDATORS validator(s), $NUM_OPERATORS operator(s)" +fi + +log_info "Prerequisites OK" + +echo "" + +# Step 2: Run ceremony +log_step "Step 2: Running add-operators ceremony..." + +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL operators must run this ceremony simultaneously ║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit add-operators" +log_info " New operator ENRs: ${NEW_OPERATOR_ENRS:0:80}..." +log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all operators to connect..." +echo "" + +if [ "$DRY_RUN" = false ]; then + docker run --rm -it \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" \ + -v "$REPO_ROOT/$CLUSTER_LOCK_PATH:/opt/charon/cluster-lock.json:ro" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + alpha edit add-operators \ + --new-operator-enrs="$NEW_OPERATOR_ENRS" \ + --output-dir=/opt/charon/output \ + --lock-file=/opt/charon/cluster-lock.json \ + --private-key-file=/opt/charon/.charon/charon-enr-private-key + + # Verify ceremony output + if [ -f "$OUTPUT_DIR/cluster-lock.json" ]; then + log_info "Ceremony completed successfully!" 
+ NEW_VALIDATORS=$(jq '.distributed_validators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + NEW_OPERATORS=$(jq '.operators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + log_info "New cluster-lock.json generated with $NEW_VALIDATORS validator(s), $NEW_OPERATORS operator(s)" + else + log_error "Ceremony may have failed - no cluster-lock.json in $OUTPUT_DIR/" + exit 1 + fi +else + echo " [DRY-RUN] docker run --rm -it ... charon alpha edit add-operators --new-operator-enrs=... --output-dir=$OUTPUT_DIR --lock-file=... --private-key-file=..." +fi + +echo "" + +# Step 3: Install .charon from output +log_step "Step 3: Installing new cluster configuration..." + +if [ -d .charon ]; then + TIMESTAMP=$(date +%Y%m%d_%H%M%S) + mkdir -p "$BACKUP_DIR" + run_cmd mv .charon "$BACKUP_DIR/.charon-backup.$TIMESTAMP" + log_info "Old .charon backed up to $BACKUP_DIR/.charon-backup.$TIMESTAMP" +fi + +run_cmd mv "$OUTPUT_DIR" .charon +log_info "New cluster configuration installed to .charon/" + +echo "" + +# Step 4: Start containers +log_step "Step 4: Starting containers..." + +run_cmd docker compose up -d charon "$VC" + +log_info "Containers started" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ New Operator Setup COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Cluster configuration installed in: .charon/" +log_info " - Containers started: charon, $VC" +echo "" +log_info "Next steps:" +log_info " 1. Wait for charon to sync with peers: docker compose logs -f charon" +log_info " 2. Verify VC is running: docker compose logs -f $VC" +log_info " 3. Monitor validator duties once synced" +echo "" +log_warn "Note: As a new operator, you do NOT have any slashing protection history." +log_warn "Your VC will start fresh. 
Ensure all existing operators have completed" +log_warn "their add-operators workflow before validators resume duties." +echo "" From f4449e2baccaf1123e25a279403cc48eabc4bca6 Mon Sep 17 00:00:00 2001 From: Andrei Smirnov Date: Tue, 17 Feb 2026 11:58:36 +0300 Subject: [PATCH 08/12] Added remove-operators scripts --- scripts/edit/remove-operators/README.md | 103 +++++ .../remove-operators/remaining-operator.sh | 388 ++++++++++++++++++ .../edit/remove-operators/removed-operator.sh | 294 +++++++++++++ 3 files changed, 785 insertions(+) create mode 100644 scripts/edit/remove-operators/README.md create mode 100755 scripts/edit/remove-operators/remaining-operator.sh create mode 100755 scripts/edit/remove-operators/removed-operator.sh diff --git a/scripts/edit/remove-operators/README.md b/scripts/edit/remove-operators/README.md new file mode 100644 index 00000000..42f72046 --- /dev/null +++ b/scripts/edit/remove-operators/README.md @@ -0,0 +1,103 @@ +# Remove-Operators Scripts + +Scripts to automate the [remove-operators ceremony](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/remove-operators) for Charon distributed validators. + +## Overview + +These scripts help operators remove specific operators from an existing distributed validator cluster while preserving all validators. This is useful for: + +- **Operator offboarding**: Removing an operator who is leaving the cluster +- **Cluster downsizing**: Reducing the number of operators +- **Security response**: Removing a compromised operator + +**Important**: This is a coordinated ceremony. All participating operators must run their respective scripts simultaneously to complete the process. + +> Warning: This is an alpha feature in Charon and is not yet recommended for production use. 
+ +There are two scripts for the two roles involved: + +- **`remaining-operator.sh`** - For operators staying in the cluster +- **`removed-operator.sh`** - For operators being removed who need to participate (only required when removal exceeds fault tolerance) + +### Fault Tolerance + +The cluster's fault tolerance is `f = operators - threshold`. When removing more operators than `f`, removed operators must participate in the ceremony by running `removed-operator.sh` with the `--participating-operator-enrs` flag. + +When the removal is within fault tolerance, removed operators simply stop their nodes after the ceremony completes. + +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set +- `.charon` directory with `cluster-lock.json` and `validator_keys` +- Docker running +- `jq` installed + +## For Remaining Operators + +Automates the complete workflow for operators staying in the cluster: + +```bash +./scripts/edit/remove-operators/remaining-operator.sh \ + --operator-enrs-to-remove "enr:-..." +``` + +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--operator-enrs-to-remove ` | Yes | Comma-separated ENRs of operators to remove | +| `--participating-operator-enrs ` | When exceeding fault tolerance | Comma-separated ENRs of all participating operators | +| `--new-threshold ` | No | Override default threshold (defaults to ceil(n * 2/3)) | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +### Workflow + +1. **Export ASDB** - Export anti-slashing database from running VC +2. **Run ceremony** - P2P coordinated remove-operators ceremony with all participants +3. **Update ASDB** - Replace pubkeys in exported ASDB to match new cluster-lock +4. **Stop containers** - Stop charon and VC +5. **Backup and replace** - Backup current `.charon/` to `./backups/`, install new configuration +6. **Import ASDB** - Import updated anti-slashing database +7. 
**Restart containers** - Start charon and VC with new configuration + +## For Removed Operators + +Only required when the removal exceeds the cluster's fault tolerance. In that case, removed operators must participate in the ceremony to provide their key shares. + +```bash +./scripts/edit/remove-operators/removed-operator.sh \ + --operator-enrs-to-remove "enr:-..." \ + --participating-operator-enrs "enr:-...,enr:-...,enr:-..." +``` + +If the removal is within fault tolerance, removed operators do **not** need to run this script - simply stop your node after the remaining operators complete the ceremony. + +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--operator-enrs-to-remove ` | Yes | Comma-separated ENRs of operators to remove | +| `--participating-operator-enrs ` | Yes | Comma-separated ENRs of ALL participating operators | +| `--new-threshold ` | No | Override default threshold (defaults to ceil(n * 2/3)) | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +## Current Limitations + +- The new cluster configuration will not be reflected on the Obol Launchpad +- The cluster will have a new cluster hash (different observability identifier) +- All remaining operators must have valid validator keys to participate +- The old cluster must be completely stopped before the new cluster can operate + +## Testing + +See [test/README.md](test/README.md) for integration tests. 
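The fault-tolerance participation rule and the default `ceil(n * 2/3)` threshold described above can be sketched in shell. The 4-operator cluster and the number of operators being removed are hypothetical values for illustration:

```bash
# Hypothetical example: 4-operator cluster with threshold 3.
n=4
threshold=3
f=$(( n - threshold ))        # fault tolerance: f = operators - threshold = 1
removing=2                    # hypothetical: two operators being removed

if (( removing > f )); then
  # Removal exceeds fault tolerance: removed operators must take part in the
  # ceremony via removed-operator.sh with --participating-operator-enrs.
  echo "removed operators must participate (removing=$removing > f=$f)"
else
  # Within fault tolerance: removed operators simply stop their nodes afterwards.
  echo "removed operators can simply stop their nodes (removing=$removing <= f=$f)"
fi

# Default threshold for the reduced cluster: ceil(remaining * 2/3),
# computed with integer arithmetic.
remaining=$(( n - removing ))
new_threshold=$(( (2 * remaining + 2) / 3 ))
echo "remaining=$remaining, default new threshold=$new_threshold"
```

Passing `--new-threshold` overrides the computed default, as noted in the options tables above.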
+ +## Related + +- [Add-Operators Workflow](../add-operators/README.md) +- [Replace-Operator Workflow](../replace-operator/README.md) +- [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md) +- [Anti-Slashing DB Scripts](../vc/README.md) +- [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/remove-operators) diff --git a/scripts/edit/remove-operators/remaining-operator.sh b/scripts/edit/remove-operators/remaining-operator.sh new file mode 100755 index 00000000..dbc0582a --- /dev/null +++ b/scripts/edit/remove-operators/remaining-operator.sh @@ -0,0 +1,388 @@ +#!/usr/bin/env bash + +# Remove-Operators Script for REMAINING Operators +# +# This script automates the remove-operators ceremony for operators who are +# staying in the cluster. It handles the full workflow including ASDB +# export/update/import around the ceremony. +# +# Reference: https://docs.obol.org/next/advanced-and-troubleshooting/advanced/remove-operators +# +# IMPORTANT: This is a CEREMONY - ALL participating operators must run their +# respective scripts simultaneously. The ceremony coordinates between +# participants to generate new key shares for the reduced operator set. +# +# The workflow: +# 1. Export the current anti-slashing database +# 2. Run the remove-operators ceremony (all participating operators simultaneously) +# 3. Update the exported ASDB with new pubkeys +# 4. Stop containers +# 5. Backup and replace .charon directory +# 6. Import the updated ASDB +# 7. 
Restart containers +# +# Prerequisites: +# - .env file with NETWORK and VC variables set +# - .charon directory with cluster-lock.json and validator_keys +# - Docker and docker compose installed and running +# - VC container running (for ASDB export) +# - All participating operators must run the ceremony +# +# Usage: +# ./scripts/edit/remove-operators/remaining-operator.sh [OPTIONS] +# +# Options: +# --operator-enrs-to-remove Comma-separated ENRs of operators to remove (required) +# --participating-operator-enrs Comma-separated ENRs of participating operators +# (required when removing beyond fault tolerance) +# --new-threshold Override default threshold (defaults to ceil(n * 2/3)) +# --dry-run Show what would be done without executing +# -h, --help Show this help message + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)" +cd "$REPO_ROOT" + +# Default values +OPERATOR_ENRS_TO_REMOVE="" +PARTICIPATING_OPERATOR_ENRS="" +NEW_THRESHOLD="" +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" +ASDB_EXPORT_DIR="./asdb-export" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/remove-operators/remaining-operator.sh [OPTIONS] + +Automates the remove-operators ceremony for operators staying in the cluster. +All participating operators must run their respective scripts simultaneously. 
+ +Options: + --operator-enrs-to-remove Comma-separated ENRs of operators to remove (required) + --participating-operator-enrs Comma-separated ENRs of participating operators + (required when removing beyond fault tolerance) + --new-threshold Override default threshold (defaults to ceil(n * 2/3)) + --dry-run Show what would be done without executing + -h, --help Show this help message + +Example: + # Remove one operator (within fault tolerance) + ./scripts/edit/remove-operators/remaining-operator.sh \ + --operator-enrs-to-remove "enr:-..." + + # Remove operators beyond fault tolerance (must specify participants) + ./scripts/edit/remove-operators/remaining-operator.sh \ + --operator-enrs-to-remove "enr:-...,enr:-..." \ + --participating-operator-enrs "enr:-...,enr:-...,enr:-..." + + # Remove operator with custom threshold + ./scripts/edit/remove-operators/remaining-operator.sh \ + --operator-enrs-to-remove "enr:-..." \ + --new-threshold 3 + +Prerequisites: + - .env file with NETWORK and VC variables set + - .charon directory with cluster-lock.json and validator_keys + - Docker and docker compose installed and running + - VC container running (for ASDB export) + - All participating operators must run the ceremony +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --operator-enrs-to-remove) + OPERATOR_ENRS_TO_REMOVE="$2" + shift 2 + ;; + --participating-operator-enrs) + PARTICIPATING_OPERATOR_ENRS="$2" + shift 2 + ;; + --new-threshold) + NEW_THRESHOLD="$2" + shift 2 + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +# Validate required arguments +if [ -z "$OPERATOR_ENRS_TO_REMOVE" ]; then + log_error "Missing required argument: --operator-enrs-to-remove" + echo "Use --help for usage information" + exit 1 +fi + +# Validate new-threshold is a positive integer if provided +if [ -n "$NEW_THRESHOLD" ] && ! 
[[ "$NEW_THRESHOLD" =~ ^[1-9][0-9]*$ ]]; then + log_error "Invalid --new-threshold: must be a positive integer" + exit 1 +fi + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Remove-Operators Workflow - REMAINING OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +source .env + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! -d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! -f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if [ ! -d .charon/validator_keys ]; then + log_error ".charon/validator_keys directory not found" + log_info "All remaining operators must have their current validator private key shares." + exit 1 +fi + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" +log_info " Operators to remove: ${OPERATOR_ENRS_TO_REMOVE:0:80}..." + +if [ -n "$PARTICIPATING_OPERATOR_ENRS" ]; then + log_info " Participating operators: ${PARTICIPATING_OPERATOR_ENRS:0:80}..." 
+fi +if [ -n "$NEW_THRESHOLD" ]; then + log_info " New threshold: $NEW_THRESHOLD" +fi + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +# Show current cluster info +if [ -f .charon/cluster-lock.json ]; then + CURRENT_VALIDATORS=$(jq '.distributed_validators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + CURRENT_OPERATORS=$(jq '.operators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + log_info " Current cluster: $CURRENT_VALIDATORS validator(s), $CURRENT_OPERATORS operator(s)" +fi + +echo "" + +# Step 1: Export anti-slashing database
log_step "Step 1: Exporting anti-slashing database..." + +# Check VC container is running (skip check in dry-run mode) +if [ "$DRY_RUN" = false ]; then + if ! docker compose ps "$VC" 2>/dev/null | grep -q Up; then + log_error "VC container ($VC) is not running. Start it first:" + log_error " docker compose up -d $VC" + exit 1 + fi +else + log_warn "Would check that $VC container is running" +fi + +mkdir -p "$ASDB_EXPORT_DIR" + +# Prefix with env: a bare VC= assignment expanded from run_cmd's "$@" would be +# executed as a command name, not treated as an environment assignment. +run_cmd env VC="$VC" "$SCRIPT_DIR/../vc/export_asdb.sh" \ + --output-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database exported to $ASDB_EXPORT_DIR/slashing-protection.json" + +echo "" + +# Step 2: Run ceremony +log_step "Step 2: Running remove-operators ceremony..." + +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL participating operators must run simultaneously ║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit remove-operators" +log_info " Operators to remove: ${OPERATOR_ENRS_TO_REMOVE:0:80}..." +log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all participants to connect..." 
+echo "" + +if [ "$DRY_RUN" = false ]; then + # Build Docker command arguments + DOCKER_ARGS=( + run --rm -it + -v "$REPO_ROOT/.charon:/opt/charon/.charon" + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" + alpha edit remove-operators + --operator-enrs-to-remove="$OPERATOR_ENRS_TO_REMOVE" + --output-dir=/opt/charon/output + ) + + if [ -n "$PARTICIPATING_OPERATOR_ENRS" ]; then + DOCKER_ARGS+=(--participating-operator-enrs="$PARTICIPATING_OPERATOR_ENRS") + fi + + if [ -n "$NEW_THRESHOLD" ]; then + DOCKER_ARGS+=(--new-threshold="$NEW_THRESHOLD") + fi + + docker "${DOCKER_ARGS[@]}" + + # Verify ceremony output + if [ -f "$OUTPUT_DIR/cluster-lock.json" ]; then + log_info "Ceremony completed successfully!" + NEW_VALIDATORS=$(jq '.distributed_validators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + NEW_OPERATORS=$(jq '.operators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + log_info "New cluster-lock.json generated with $NEW_VALIDATORS validator(s), $NEW_OPERATORS operator(s)" + else + log_error "Ceremony may have failed - no cluster-lock.json in $OUTPUT_DIR/" + exit 1 + fi +else + echo " [DRY-RUN] docker run --rm -it ... charon alpha edit remove-operators --operator-enrs-to-remove=... --output-dir=$OUTPUT_DIR" +fi + +echo "" + +# Step 3: Update ASDB pubkeys +log_step "Step 3: Updating anti-slashing database pubkeys..." + +run_cmd "$SCRIPT_DIR/../vc/update-anti-slashing-db.sh" \ + "$ASDB_EXPORT_DIR/slashing-protection.json" \ + ".charon/cluster-lock.json" \ + "$OUTPUT_DIR/cluster-lock.json" + +log_info "Anti-slashing database pubkeys updated" + +echo "" + +# Step 4: Stop containers +log_step "Step 4: Stopping containers..." + +run_cmd docker compose stop "$VC" charon + +log_info "Containers stopped" + +echo "" + +# Step 5: Backup and replace .charon +log_step "Step 5: Backing up and replacing .charon directory..." 
+ +TIMESTAMP=$(date +%Y%m%d_%H%M%S) +mkdir -p "$BACKUP_DIR" + +run_cmd mv .charon "$BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info "Current .charon backed up to $BACKUP_DIR/.charon-backup.$TIMESTAMP" + +run_cmd mv "$OUTPUT_DIR" .charon +log_info "New cluster configuration installed to .charon/" + +echo "" + +# Step 6: Import updated ASDB +log_step "Step 6: Importing updated anti-slashing database..." + +# Prefix with env: a bare VC= assignment expanded from run_cmd's "$@" would be +# executed as a command name, not treated as an environment assignment. +run_cmd env VC="$VC" "$SCRIPT_DIR/../vc/import_asdb.sh" \ + --input-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database imported" + +echo "" + +# Step 7: Restart containers +log_step "Step 7: Restarting containers..." + +run_cmd docker compose up -d charon "$VC" + +log_info "Containers restarted" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Remove-Operators Workflow COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Old .charon backed up to: $BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info " - New cluster configuration installed in: .charon/" +log_info " - Anti-slashing database updated and imported" +log_info " - Containers restarted: charon, $VC" +echo "" +log_info "Next steps:" +log_info " 1. Check charon logs: docker compose logs -f charon" +log_info " 2. Verify all remaining nodes connected and healthy" +log_info " 3. Verify cluster is producing attestations" +log_info " 4. Confirm removed operators have stopped their nodes" +echo "" +log_warn "Keep the backup until you've verified normal operation for several epochs." 
+echo "" +log_info "Current limitations:" +log_info " - The new configuration will not be reflected on the Obol Launchpad" +log_info " - The cluster will have a new cluster hash (different observability ID)" +echo "" diff --git a/scripts/edit/remove-operators/removed-operator.sh b/scripts/edit/remove-operators/removed-operator.sh new file mode 100755 index 00000000..677a760c --- /dev/null +++ b/scripts/edit/remove-operators/removed-operator.sh @@ -0,0 +1,294 @@ +#!/usr/bin/env bash + +# Remove-Operators Script for REMOVED Operators +# +# This script helps operators who are being removed from the cluster to +# participate in the remove-operators ceremony. This is only required when +# the removal exceeds the cluster's fault tolerance. +# +# Reference: https://docs.obol.org/next/advanced-and-troubleshooting/advanced/remove-operators +# +# IMPORTANT: This script is only needed when removing more operators than the +# cluster's fault tolerance (f = operators - threshold) allows. In that case, +# removed operators must participate in the ceremony to provide their key shares. +# +# If the removal is within fault tolerance, removed operators do NOT need to +# run this script - they simply stop their nodes after the ceremony. 
+# +# Prerequisites: +# - .env file with NETWORK and VC variables set +# - .charon directory with cluster-lock.json, charon-enr-private-key, and validator_keys +# - Docker and docker compose installed and running +# +# Usage: +# ./scripts/edit/remove-operators/removed-operator.sh [OPTIONS] +# +# Options: +# --operator-enrs-to-remove Comma-separated ENRs of operators to remove (required) +# --participating-operator-enrs Comma-separated ENRs of ALL participating operators (required) +# --new-threshold Override default threshold (defaults to ceil(n * 2/3)) +# --dry-run Show what would be done without executing +# -h, --help Show this help message + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)" +cd "$REPO_ROOT" + +# Default values +OPERATOR_ENRS_TO_REMOVE="" +PARTICIPATING_OPERATOR_ENRS="" +NEW_THRESHOLD="" +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/remove-operators/removed-operator.sh [OPTIONS] + +Helps removed operators participate in the remove-operators ceremony. +This is only required when the removal exceeds the cluster's fault tolerance. + +If the removal is within fault tolerance, removed operators do NOT need to +run this script - simply stop your node after the remaining operators complete +the ceremony. 
+ +Options: + --operator-enrs-to-remove Comma-separated ENRs of operators to remove (required) + --participating-operator-enrs Comma-separated ENRs of ALL participating operators (required) + --new-threshold Override default threshold (defaults to ceil(n * 2/3)) + --dry-run Show what would be done without executing + -h, --help Show this help message + +Example: + ./scripts/edit/remove-operators/removed-operator.sh \ + --operator-enrs-to-remove "enr:-..." \ + --participating-operator-enrs "enr:-...,enr:-...,enr:-..." + +Prerequisites: + - .env file with NETWORK and VC variables set + - .charon directory with cluster-lock.json, charon-enr-private-key, and validator_keys + - Docker and docker compose installed and running + - Your ENR must be listed in --participating-operator-enrs +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --operator-enrs-to-remove) + OPERATOR_ENRS_TO_REMOVE="$2" + shift 2 + ;; + --participating-operator-enrs) + PARTICIPATING_OPERATOR_ENRS="$2" + shift 2 + ;; + --new-threshold) + NEW_THRESHOLD="$2" + shift 2 + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +# Validate required arguments +if [ -z "$OPERATOR_ENRS_TO_REMOVE" ]; then + log_error "Missing required argument: --operator-enrs-to-remove" + echo "Use --help for usage information" + exit 1 +fi + +if [ -z "$PARTICIPATING_OPERATOR_ENRS" ]; then + log_error "Missing required argument: --participating-operator-enrs" + echo "Use --help for usage information" + exit 1 +fi + +# Validate new-threshold is a positive integer if provided +if [ -n "$NEW_THRESHOLD" ] && ! 
[[ "$NEW_THRESHOLD" =~ ^[1-9][0-9]*$ ]]; then + log_error "Invalid --new-threshold: must be a positive integer" + exit 1 +fi + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Remove-Operators Workflow - REMOVED OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +source .env + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! -d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! -f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if [ ! -f .charon/charon-enr-private-key ]; then + log_error ".charon/charon-enr-private-key not found" + exit 1 +fi + +if [ ! -d .charon/validator_keys ]; then + log_error ".charon/validator_keys directory not found" + exit 1 +fi + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" +log_info " Operators to remove: ${OPERATOR_ENRS_TO_REMOVE:0:80}..." +log_info " Participating operators: ${PARTICIPATING_OPERATOR_ENRS:0:80}..." 
+ +if [ -n "$NEW_THRESHOLD" ]; then + log_info " New threshold: $NEW_THRESHOLD" +fi + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +# Show current cluster info +if [ -f .charon/cluster-lock.json ]; then + CURRENT_VALIDATORS=$(jq '.distributed_validators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + CURRENT_OPERATORS=$(jq '.operators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + log_info " Current cluster: $CURRENT_VALIDATORS validator(s), $CURRENT_OPERATORS operator(s)" +fi + +echo "" + +# Step 1: Run ceremony +log_step "Step 1: Running remove-operators ceremony..." + +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL participating operators must run simultaneously ║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit remove-operators (as removed operator)" +log_info " Operators to remove: ${OPERATOR_ENRS_TO_REMOVE:0:80}..." +log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all participants to connect..." 
+echo "" + +if [ "$DRY_RUN" = false ]; then + # Build Docker command arguments + DOCKER_ARGS=( + run --rm -it + -v "$REPO_ROOT/.charon:/opt/charon/.charon" + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" + alpha edit remove-operators + --operator-enrs-to-remove="$OPERATOR_ENRS_TO_REMOVE" + --participating-operator-enrs="$PARTICIPATING_OPERATOR_ENRS" + --private-key-file=/opt/charon/.charon/charon-enr-private-key + --lock-file=/opt/charon/.charon/cluster-lock.json + --validator-keys-dir=/opt/charon/.charon/validator_keys + --output-dir=/opt/charon/output + ) + + if [ -n "$NEW_THRESHOLD" ]; then + DOCKER_ARGS+=(--new-threshold="$NEW_THRESHOLD") + fi + + docker "${DOCKER_ARGS[@]}" + + log_info "Ceremony completed successfully!" +else + echo " [DRY-RUN] docker run --rm -it ... charon alpha edit remove-operators --operator-enrs-to-remove=... --participating-operator-enrs=... --output-dir=$OUTPUT_DIR" +fi + +echo "" + +# Step 2: Stop containers +log_step "Step 2: Stopping containers..." + +run_cmd docker compose stop "$VC" charon + +log_info "Containers stopped" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Removed Operator Workflow COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Ceremony participation completed" +log_info " - Containers stopped: charon, $VC" +echo "" +log_warn "You have been removed from the cluster." +log_warn "Your node no longer needs to run for this cluster." +echo "" +log_info "Next steps:" +log_info " 1. Confirm with remaining operators that the ceremony succeeded" +log_info " 2. Optionally clean up cluster data: rm -rf .charon data/" +log_info " 3. 
Optionally remove Docker resources: docker compose down -v" +echo "" From 6913a0974962c4037ff04d50ab9acec802a9da03 Mon Sep 17 00:00:00 2001 From: Andrei Smirnov Date: Tue, 17 Feb 2026 11:59:41 +0300 Subject: [PATCH 09/12] Removed rotation script tests --- .gitignore | 3 - scripts/edit/add-validators/test/README.md | 48 -- .../fixtures/.charon/charon-enr-private-key | 1 - .../test/fixtures/.charon/cluster-lock.json | 54 -- .../add-validators/test/fixtures/.env.test | 3 - .../test/test_add_validators.sh | 441 -------------- .../edit/recreate-private-keys/test/README.md | 26 - .../fixtures/.charon/charon-enr-private-key | 1 - .../test/fixtures/.charon/cluster-lock.json | 55 -- .../test/fixtures/.env.test | 3 - .../test/test_recreate_private_keys.sh | 387 ------------ scripts/edit/replace-operator/test/.gitignore | 2 - scripts/edit/replace-operator/test/README.md | 27 - .../fixtures/.charon/charon-enr-private-key | 1 - .../test/fixtures/.charon/cluster-lock.json | 55 -- .../replace-operator/test/fixtures/.env.test | 3 - .../test/fixtures/new-cluster-lock.json | 55 -- .../test/fixtures/sample-asdb.json | 24 - .../test/test_replace_operator.sh | 575 ------------------ 19 files changed, 1764 deletions(-) delete mode 100644 scripts/edit/add-validators/test/README.md delete mode 100644 scripts/edit/add-validators/test/fixtures/.charon/charon-enr-private-key delete mode 100644 scripts/edit/add-validators/test/fixtures/.charon/cluster-lock.json delete mode 100644 scripts/edit/add-validators/test/fixtures/.env.test delete mode 100755 scripts/edit/add-validators/test/test_add_validators.sh delete mode 100644 scripts/edit/recreate-private-keys/test/README.md delete mode 100644 scripts/edit/recreate-private-keys/test/fixtures/.charon/charon-enr-private-key delete mode 100644 scripts/edit/recreate-private-keys/test/fixtures/.charon/cluster-lock.json delete mode 100644 scripts/edit/recreate-private-keys/test/fixtures/.env.test delete mode 100755 
scripts/edit/recreate-private-keys/test/test_recreate_private_keys.sh delete mode 100644 scripts/edit/replace-operator/test/.gitignore delete mode 100644 scripts/edit/replace-operator/test/README.md delete mode 100644 scripts/edit/replace-operator/test/fixtures/.charon/charon-enr-private-key delete mode 100644 scripts/edit/replace-operator/test/fixtures/.charon/cluster-lock.json delete mode 100644 scripts/edit/replace-operator/test/fixtures/.env.test delete mode 100644 scripts/edit/replace-operator/test/fixtures/new-cluster-lock.json delete mode 100644 scripts/edit/replace-operator/test/fixtures/sample-asdb.json delete mode 100755 scripts/edit/replace-operator/test/test_replace_operator.sh diff --git a/.gitignore b/.gitignore index ea2a8aa6..b55ae221 100644 --- a/.gitignore +++ b/.gitignore @@ -14,6 +14,3 @@ data/ prometheus/prometheus.yml commit-boost/config.toml -# Allow test fixtures -!scripts/edit/**/test/fixtures/.charon/ -!scripts/edit/**/test/fixtures/.charon/** diff --git a/scripts/edit/add-validators/test/README.md b/scripts/edit/add-validators/test/README.md deleted file mode 100644 index 9d89803e..00000000 --- a/scripts/edit/add-validators/test/README.md +++ /dev/null @@ -1,48 +0,0 @@ -# Add-Validators Integration Tests - -Integration tests for `add-validators.sh` script. - -## Overview - -These tests validate the add-validators script without running actual Docker containers or the ceremony. The focus is on: - -- **Argument parsing and validation** -- **Prerequisite checks** (`.env`, `.charon/`, cluster-lock) -- **Dry-run output** for all workflow steps -- **Error messages** for missing/invalid inputs - -## Running Tests - -```bash -./scripts/edit/add-validators/test/test_add_validators.sh -``` - -Expected output: All tests should pass in under 5 seconds. 
- -## What's NOT Tested - -- **Actual Docker operations** - Docker commands are mocked -- **Charon ceremony** - Would require actual cluster coordination with all operators -- **Container orchestration** - Would require running services - -## Test Structure - -``` -test/ -├── README.md # This file -├── test_add_validators.sh # Main test script -├── fixtures/ # Test fixtures -│ ├── .env.test # Test environment file -│ └── .charon/ # Mock .charon directory -│ ├── cluster-lock.json -│ └── charon-enr-private-key -└── data/ # Test runtime data (git-ignored) - ├── backup/ # Backed up repo files during test - └── mock-bin/ # Mock docker command -``` - -## Adding New Tests - -1. Add a new test function following the naming convention `test_*` -2. Use the assertion helpers: `assert_exit_code`, `assert_output_contains`, `assert_output_not_contains` -3. Register the test in the `main()` function using `run_test` diff --git a/scripts/edit/add-validators/test/fixtures/.charon/charon-enr-private-key b/scripts/edit/add-validators/test/fixtures/.charon/charon-enr-private-key deleted file mode 100644 index 37a6f7d7..00000000 --- a/scripts/edit/add-validators/test/fixtures/.charon/charon-enr-private-key +++ /dev/null @@ -1 +0,0 @@ -mock-enr-private-key-for-testing-only-do-not-use-in-production diff --git a/scripts/edit/add-validators/test/fixtures/.charon/cluster-lock.json b/scripts/edit/add-validators/test/fixtures/.charon/cluster-lock.json deleted file mode 100644 index ae99ed93..00000000 --- a/scripts/edit/add-validators/test/fixtures/.charon/cluster-lock.json +++ /dev/null @@ -1,54 +0,0 @@ -{ - "cluster_definition": { - "name": "TestCluster", - "num_validators": 1, - "threshold": 3, - "operators": [ - { - "address": "0x1111111111111111111111111111111111111111", - "enr": "enr:-HW4QOldBest...operator0" - }, - { - "address": "0x2222222222222222222222222222222222222222", - "enr": "enr:-HW4QNewOper...operator1" - }, - { - "address": "0x3333333333333333333333333333333333333333", - 
"enr": "enr:-HW4QThird...operator2" - }, - { - "address": "0x4444444444444444444444444444444444444444", - "enr": "enr:-HW4QFourth...operator3" - } - ] - }, - "distributed_validators": [ - { - "distributed_public_key": "0xa9fb2be415318eb77709f7c378ab26025371c0b11213d93fd662ffdb06e77a05c7b04573a478e9d5c0c0fd98078965ef", - "public_shares": [ - "0xa3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", - "0x8afba316fdcf51e25a89e05e17377b8c72fd465c95346df4ed5694f295faa2ce061e14e579c5bc01a468dbbb191c58e8", - "0xa1aeebe0980509f5f8d8d424beb89004a967da8d8093248f64eb27c4ee5d22ba9c0f157025f551f47b31833f8bc585f8", - "0xa6c283c82cd0b65436861a149fb840849d06ded1dd8d2f900afb358c6a4232004309120f00a553cdccd8a43f6b743c82" - ] - } - ], - "operators": [ - { - "address": "0x1111111111111111111111111111111111111111", - "enr": "enr:-HW4QOldBest...operator0" - }, - { - "address": "0x2222222222222222222222222222222222222222", - "enr": "enr:-HW4QNewOper...operator1" - }, - { - "address": "0x3333333333333333333333333333333333333333", - "enr": "enr:-HW4QThird...operator2" - }, - { - "address": "0x4444444444444444444444444444444444444444", - "enr": "enr:-HW4QFourth...operator3" - } - ] -} diff --git a/scripts/edit/add-validators/test/fixtures/.env.test b/scripts/edit/add-validators/test/fixtures/.env.test deleted file mode 100644 index 1717e872..00000000 --- a/scripts/edit/add-validators/test/fixtures/.env.test +++ /dev/null @@ -1,3 +0,0 @@ -# Test environment for add-validators tests -NETWORK=hoodi -VC=vc-lodestar diff --git a/scripts/edit/add-validators/test/test_add_validators.sh b/scripts/edit/add-validators/test/test_add_validators.sh deleted file mode 100755 index 466f43c1..00000000 --- a/scripts/edit/add-validators/test/test_add_validators.sh +++ /dev/null @@ -1,441 +0,0 @@ -#!/usr/bin/env bash - -# Integration test for add-validators.sh script -# -# This test validates: -# - Argument parsing and validation -# - Prerequisite checks (.env, 
.charon/, cluster-lock) -# - Dry-run output for all workflow steps -# - Error messages for missing inputs -# -# No actual Docker containers are run - all Docker commands are mocked. -# -# Usage: ./scripts/edit/add-validators/test/test_add_validators.sh - -set -euo pipefail - -SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." && pwd)" - -# Test directories -TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" -TEST_DATA_DIR="$SCRIPT_DIR/data" - -# Script under test -ADD_VALIDATORS_SCRIPT="$REPO_ROOT/scripts/edit/add-validators/add-validators.sh" - -# Test counters -TESTS_RUN=0 -TESTS_PASSED=0 -TESTS_FAILED=0 - -# Colors for output -RED='\033[0;31m' -GREEN='\033[0;32m' -YELLOW='\033[1;33m' -BLUE='\033[0;34m' -NC='\033[0m' - -log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } -log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } -log_error() { echo -e "${RED}[ERROR]${NC} $1"; } -log_test() { echo -e "${BLUE}[TEST]${NC} $1"; } - -# Create mock docker script that logs calls and returns success -setup_mock_docker() { - local mock_bin_dir="$TEST_DATA_DIR/mock-bin" - mkdir -p "$mock_bin_dir" - - # Create mock docker command - cat > "$mock_bin_dir/docker" << 'MOCK_DOCKER' -#!/usr/bin/env bash -# Mock docker for testing - logs all calls -echo "[MOCK-DOCKER] $*" >> "${MOCK_DOCKER_LOG:-/dev/null}" - -# Handle specific commands -case "$*" in - "info") - echo "Mock Docker info" - exit 0 - ;; - "compose"*"ps"*"charon"*) - # Simulate charon is running - echo "charon Up" - exit 0 - ;; - "compose"*"stop"*) - echo "[MOCK] Stopping containers" - exit 0 - ;; - "compose"*"up"*) - echo "[MOCK] Starting containers" - exit 0 - ;; - *"charon"*"add-validators"*) - echo "[MOCK] Running add-validators ceremony" - exit 0 - ;; - *) - echo "[MOCK] Unhandled docker command: $*" - exit 0 - ;; -esac -MOCK_DOCKER - chmod +x "$mock_bin_dir/docker" - - # Export PATH with mock first - export PATH="$mock_bin_dir:$PATH" - export 
MOCK_DOCKER_LOG="$TEST_DATA_DIR/docker-calls.log" -} - -# Setup test working directory with fixtures -# Note: Scripts always cd to REPO_ROOT, so we must put test fixtures there -# We backup any existing files and restore them on cleanup -setup_test_env() { - rm -rf "$TEST_DATA_DIR" - mkdir -p "$TEST_DATA_DIR/backup" - - # Backup existing files in REPO_ROOT if they exist - if [ -f "$REPO_ROOT/.env" ]; then - cp "$REPO_ROOT/.env" "$TEST_DATA_DIR/backup/.env.bak" - fi - if [ -d "$REPO_ROOT/.charon" ]; then - # Only backup key files, not the whole directory - mkdir -p "$TEST_DATA_DIR/backup/.charon" - [ -f "$REPO_ROOT/.charon/cluster-lock.json" ] && \ - cp "$REPO_ROOT/.charon/cluster-lock.json" "$TEST_DATA_DIR/backup/.charon/" - [ -f "$REPO_ROOT/.charon/charon-enr-private-key" ] && \ - cp "$REPO_ROOT/.charon/charon-enr-private-key" "$TEST_DATA_DIR/backup/.charon/" - fi - - # Install test fixtures to REPO_ROOT - cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" - mkdir -p "$REPO_ROOT/.charon" - cp "$TEST_FIXTURES_DIR/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" - cp "$TEST_FIXTURES_DIR/.charon/charon-enr-private-key" "$REPO_ROOT/.charon/" - - # Create required directories - mkdir -p "$REPO_ROOT/backups" - - # Setup mock docker - setup_mock_docker -} - -restore_repo_state() { - # Restore backed up files - if [ -f "$TEST_DATA_DIR/backup/.env.bak" ]; then - cp "$TEST_DATA_DIR/backup/.env.bak" "$REPO_ROOT/.env" - else - rm -f "$REPO_ROOT/.env" - fi - - if [ -d "$TEST_DATA_DIR/backup/.charon" ]; then - [ -f "$TEST_DATA_DIR/backup/.charon/cluster-lock.json" ] && \ - cp "$TEST_DATA_DIR/backup/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" - [ -f "$TEST_DATA_DIR/backup/.charon/charon-enr-private-key" ] && \ - cp "$TEST_DATA_DIR/backup/.charon/charon-enr-private-key" "$REPO_ROOT/.charon/" - fi -} - -cleanup() { - log_info "Cleaning up and restoring original state..." 
- restore_repo_state -} - -trap cleanup EXIT - -# Test assertion helpers -assert_exit_code() { - local expected="$1" - local actual="$2" - local test_name="$3" - - if [ "$actual" -eq "$expected" ]; then - return 0 - else - log_error "Expected exit code $expected, got $actual in $test_name" - return 1 - fi -} - -assert_output_contains() { - local pattern="$1" - local output="$2" - local test_name="$3" - - if echo "$output" | grep -q -F -- "$pattern"; then - return 0 - else - log_error "Expected output to contain '$pattern' in $test_name" - echo "Actual output:" - echo "$output" | head -20 - return 1 - fi -} - -assert_output_not_contains() { - local pattern="$1" - local output="$2" - local test_name="$3" - - if echo "$output" | grep -q "$pattern"; then - log_error "Expected output NOT to contain '$pattern' in $test_name" - return 1 - else - return 0 - fi -} - -run_test() { - local test_name="$1" - local test_func="$2" - - TESTS_RUN=$((TESTS_RUN + 1)) - log_test "Running: $test_name" - - if $test_func; then - echo -e " ${GREEN}✓ PASSED${NC}" - TESTS_PASSED=$((TESTS_PASSED + 1)) - else - echo -e " ${RED}✗ FAILED${NC}" - TESTS_FAILED=$((TESTS_FAILED + 1)) - fi -} - -# ============================================================================ -# ADD-VALIDATORS.SH TESTS -# ============================================================================ - -test_help() { - local output - local exit_code=0 - - output=$("$ADD_VALIDATORS_SCRIPT" --help 2>&1) || exit_code=$? - - assert_exit_code 0 "$exit_code" "test_help" && \ - assert_output_contains "Usage:" "$output" "test_help" && \ - assert_output_contains "--num-validators" "$output" "test_help" && \ - assert_output_contains "--withdrawal-addresses" "$output" "test_help" && \ - assert_output_contains "--dry-run" "$output" "test_help" -} - -test_missing_num_validators() { - local output - local exit_code=0 - - output=$("$ADD_VALIDATORS_SCRIPT" 2>&1) || exit_code=$? 
- - assert_exit_code 1 "$exit_code" "test_missing_num_validators" && \ - assert_output_contains "Missing required argument: --num-validators" "$output" "test_missing_num_validators" -} - -test_invalid_num_validators() { - local output - local exit_code=0 - - output=$("$ADD_VALIDATORS_SCRIPT" --num-validators abc 2>&1) || exit_code=$? - - assert_exit_code 1 "$exit_code" "test_invalid_num_validators" && \ - assert_output_contains "must be a positive integer" "$output" "test_invalid_num_validators" -} - -test_invalid_num_validators_zero() { - local output - local exit_code=0 - - output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 0 2>&1) || exit_code=$? - - assert_exit_code 1 "$exit_code" "test_invalid_num_validators_zero" && \ - assert_output_contains "must be a positive integer" "$output" "test_invalid_num_validators_zero" -} - -test_missing_env() { - local output - local exit_code=0 - - rm -f "$REPO_ROOT/.env" - - output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 5 2>&1) || exit_code=$? - - # Restore .env for other tests - cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" - - assert_exit_code 1 "$exit_code" "test_missing_env" && \ - assert_output_contains ".env file not found" "$output" "test_missing_env" -} - -test_missing_network() { - local output - local exit_code=0 - - echo "VC=vc-lodestar" > "$REPO_ROOT/.env" # Missing NETWORK - - output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 5 2>&1) || exit_code=$? - - # Restore .env - cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" - - assert_exit_code 1 "$exit_code" "test_missing_network" && \ - assert_output_contains "NETWORK variable not set" "$output" "test_missing_network" -} - -test_missing_vc() { - local output - local exit_code=0 - - echo "NETWORK=hoodi" > "$REPO_ROOT/.env" # Missing VC - - output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 5 2>&1) || exit_code=$? 
- - # Restore .env - cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" - - assert_exit_code 1 "$exit_code" "test_missing_vc" && \ - assert_output_contains "VC variable not set" "$output" "test_missing_vc" -} - -test_missing_charon_dir() { - local output - local exit_code=0 - - mv "$REPO_ROOT/.charon" "$REPO_ROOT/.charon.test.bak" - - output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 5 2>&1) || exit_code=$? - - # Restore .charon - mv "$REPO_ROOT/.charon.test.bak" "$REPO_ROOT/.charon" - - assert_exit_code 1 "$exit_code" "test_missing_charon_dir" && \ - assert_output_contains ".charon directory not found" "$output" "test_missing_charon_dir" -} - -test_missing_cluster_lock() { - local output - local exit_code=0 - - rm -f "$REPO_ROOT/.charon/cluster-lock.json" - - output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 5 2>&1) || exit_code=$? - - # Restore cluster-lock - cp "$TEST_FIXTURES_DIR/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" - - assert_exit_code 1 "$exit_code" "test_missing_cluster_lock" && \ - assert_output_contains "cluster-lock.json not found" "$output" "test_missing_cluster_lock" -} - -test_dry_run_basic() { - local output - local exit_code=0 - - output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 5 --dry-run 2>&1) || exit_code=$? - - assert_exit_code 0 "$exit_code" "test_dry_run_basic" && \ - assert_output_contains "DRY-RUN MODE" "$output" "test_dry_run_basic" && \ - assert_output_contains "Validators to add: 5" "$output" "test_dry_run_basic" -} - -test_dry_run_with_addresses() { - local output - local exit_code=0 - - output=$("$ADD_VALIDATORS_SCRIPT" \ - --num-validators 10 \ - --withdrawal-addresses 0x1234567890abcdef1234567890abcdef12345678 \ - --fee-recipient-addresses 0xabcdef1234567890abcdef1234567890abcdef12 \ - --dry-run 2>&1) || exit_code=$? 
- - assert_exit_code 0 "$exit_code" "test_dry_run_with_addresses" && \ - assert_output_contains "Withdrawal addresses:" "$output" "test_dry_run_with_addresses" && \ - assert_output_contains "Fee recipient addresses:" "$output" "test_dry_run_with_addresses" -} - -test_dry_run_unverified() { - local output - local exit_code=0 - - output=$("$ADD_VALIDATORS_SCRIPT" \ - --num-validators 5 \ - --unverified \ - --dry-run 2>&1) || exit_code=$? - - assert_exit_code 0 "$exit_code" "test_dry_run_unverified" && \ - assert_output_contains "UNVERIFIED" "$output" "test_dry_run_unverified" -} - -test_dry_run_workflow() { - local output - local exit_code=0 - - output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 5 --dry-run 2>&1) || exit_code=$? - - assert_exit_code 0 "$exit_code" "test_dry_run_workflow" && \ - assert_output_contains "Running add-validators ceremony" "$output" "test_dry_run_workflow" && \ - assert_output_contains "charon alpha edit add-validators" "$output" "test_dry_run_workflow" && \ - assert_output_contains "Stopping containers" "$output" "test_dry_run_workflow" && \ - assert_output_contains "Backing up" "$output" "test_dry_run_workflow" && \ - assert_output_contains "Restarting containers" "$output" "test_dry_run_workflow" -} - -test_unknown_argument() { - local output - local exit_code=0 - - output=$("$ADD_VALIDATORS_SCRIPT" --num-validators 5 --invalid-flag 2>&1) || exit_code=$? 
- - assert_exit_code 1 "$exit_code" "test_unknown_argument" && \ - assert_output_contains "Unknown argument" "$output" "test_unknown_argument" -} - -# ============================================================================ -# MAIN TEST RUNNER -# ============================================================================ - -main() { - echo "" - echo "╔════════════════════════════════════════════════════════════════╗" - echo "║ Add-Validators Script - Integration Tests ║" - echo "╚════════════════════════════════════════════════════════════════╝" - echo "" - - # Setup test environment - log_info "Setting up test environment..." - setup_test_env - - echo "" - echo "─────────────────────────────────────────────────────────────────" - echo " ADD-VALIDATORS.SH TESTS" - echo "─────────────────────────────────────────────────────────────────" - echo "" - - run_test "add-validators: --help shows usage" test_help - run_test "add-validators: error when --num-validators missing" test_missing_num_validators - run_test "add-validators: error when --num-validators invalid" test_invalid_num_validators - run_test "add-validators: error when --num-validators is zero" test_invalid_num_validators_zero - run_test "add-validators: error when .env missing" test_missing_env - run_test "add-validators: error when NETWORK missing" test_missing_network - run_test "add-validators: error when VC missing" test_missing_vc - run_test "add-validators: error when .charon dir missing" test_missing_charon_dir - run_test "add-validators: error when cluster-lock missing" test_missing_cluster_lock - run_test "add-validators: dry-run basic" test_dry_run_basic - run_test "add-validators: dry-run with addresses" test_dry_run_with_addresses - run_test "add-validators: dry-run with --unverified" test_dry_run_unverified - run_test "add-validators: dry-run full workflow" test_dry_run_workflow - run_test "add-validators: error for unknown argument" test_unknown_argument - - echo "" - echo 
"═════════════════════════════════════════════════════════════════" - echo "" - - if [ "$TESTS_FAILED" -eq 0 ]; then - echo -e "${GREEN}All $TESTS_PASSED tests passed!${NC}" - echo "" - exit 0 - else - echo -e "${RED}$TESTS_FAILED of $TESTS_RUN tests failed${NC}" - echo "" - exit 1 - fi -} - -main "$@" diff --git a/scripts/edit/recreate-private-keys/test/README.md b/scripts/edit/recreate-private-keys/test/README.md deleted file mode 100644 index 8576b933..00000000 --- a/scripts/edit/recreate-private-keys/test/README.md +++ /dev/null @@ -1,26 +0,0 @@ -# Recreate-Private-Keys Integration Tests - -Integration tests for `recreate-private-keys.sh` script. - -## Overview - -These tests validate the recreate-private-keys script without running actual Docker containers or the ceremony. The focus is on: - -- **Argument parsing and validation** -- **Prerequisite checks** (`.env`, `.charon/`, cluster-lock, validator_keys) -- **Dry-run output** for all workflow steps -- **Error messages** for missing/invalid inputs - -## Running Tests - -```bash -./scripts/edit/recreate-private-keys/test/test_recreate_private_keys.sh -``` - -Expected output: All tests should pass in under 5 seconds. 
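
> Editor's note: the `setup_mock_docker` helpers above avoid real Docker by shadowing the binary on `PATH`. A minimal standalone sketch of that technique (directory names here are illustrative, not taken from the suite):

```shell
#!/usr/bin/env bash
# Sketch of PATH shadowing: place a fake `docker` first on PATH so the
# script under test invokes it instead of the real binary.
set -euo pipefail

mock_bin="$(mktemp -d)"
cat > "$mock_bin/docker" << 'EOF'
#!/usr/bin/env bash
# Log the call and pretend it succeeded.
echo "[MOCK] docker $*"
exit 0
EOF
chmod +x "$mock_bin/docker"

# Any docker invocation in this environment now hits the mock.
PATH="$mock_bin:$PATH" docker compose up -d
```

Because the mock is prepended to `PATH` rather than installed system-wide, the real Docker binary is untouched and the shadowing ends with the test process.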
- -## What's NOT Tested - -- **Actual Docker operations** - Docker commands are mocked -- **Charon ceremony** - Would require actual cluster coordination with all operators -- **Container orchestration** - Would require running services diff --git a/scripts/edit/recreate-private-keys/test/fixtures/.charon/charon-enr-private-key b/scripts/edit/recreate-private-keys/test/fixtures/.charon/charon-enr-private-key deleted file mode 100644 index 372a826b..00000000 --- a/scripts/edit/recreate-private-keys/test/fixtures/.charon/charon-enr-private-key +++ /dev/null @@ -1 +0,0 @@ -0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef diff --git a/scripts/edit/recreate-private-keys/test/fixtures/.charon/cluster-lock.json b/scripts/edit/recreate-private-keys/test/fixtures/.charon/cluster-lock.json deleted file mode 100644 index d3be61be..00000000 --- a/scripts/edit/recreate-private-keys/test/fixtures/.charon/cluster-lock.json +++ /dev/null @@ -1,55 +0,0 @@ -{ - "cluster_definition": { - "name": "TestCluster", - "num_validators": 1, - "threshold": 3, - "operators": [ - { - "address": "0x1111111111111111111111111111111111111111", - "enr": "enr:-HW4QOldBest...operator0" - }, - { - "address": "0x2222222222222222222222222222222222222222", - "enr": "enr:-HW4QNewOper...operator1" - }, - { - "address": "0x3333333333333333333333333333333333333333", - "enr": "enr:-HW4QThird...operator2" - }, - { - "address": "0x4444444444444444444444444444444444444444", - "enr": "enr:-HW4QFourth...operator3" - } - ] - }, - "distributed_validators": [ - { - "distributed_public_key": "0xa9fb2be415318eb77709f7c378ab26025371c0b11213d93fd662ffdb06e77a05c7b04573a478e9d5c0c0fd98078965ef", - "public_shares": [ - "0xa3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", - "0x8afba316fdcf51e25a89e05e17377b8c72fd465c95346df4ed5694f295faa2ce061e14e579c5bc01a468dbbb191c58e8", - 
"0xa1aeebe0980509f5f8d8d424beb89004a967da8d8093248f64eb27c4ee5d22ba9c0f157025f551f47b31833f8bc585f8", - "0xa6c283c82cd0b65436861a149fb840849d06ded1dd8d2f900afb358c6a4232004309120f00a553cdccd8a43f6b743c82" - ] - } - ], - "operators": [ - { - "address": "0x1111111111111111111111111111111111111111", - "enr": "enr:-HW4QOldBest...operator0" - }, - { - "address": "0x2222222222222222222222222222222222222222", - "enr": "enr:-HW4QNewOper...operator1" - }, - { - "address": "0x3333333333333333333333333333333333333333", - "enr": "enr:-HW4QThird...operator2" - }, - { - "address": "0x4444444444444444444444444444444444444444", - "enr": "enr:-HW4QFourth...operator3" - } - ], - "lock_hash": "0xe9dbc87171f99bd8b6f348f6bf314291651933256e712ace299190f5e04e7795" -} diff --git a/scripts/edit/recreate-private-keys/test/fixtures/.env.test b/scripts/edit/recreate-private-keys/test/fixtures/.env.test deleted file mode 100644 index 81298829..00000000 --- a/scripts/edit/recreate-private-keys/test/fixtures/.env.test +++ /dev/null @@ -1,3 +0,0 @@ -# Test environment for recreate-private-keys tests -NETWORK=hoodi -VC=vc-lodestar diff --git a/scripts/edit/recreate-private-keys/test/test_recreate_private_keys.sh b/scripts/edit/recreate-private-keys/test/test_recreate_private_keys.sh deleted file mode 100755 index 9b28c28e..00000000 --- a/scripts/edit/recreate-private-keys/test/test_recreate_private_keys.sh +++ /dev/null @@ -1,387 +0,0 @@ -#!/usr/bin/env bash - -# Integration test for recreate-private-keys.sh script -# -# This test validates: -# - Argument parsing and validation -# - Prerequisite checks (.env, .charon/, cluster-lock) -# - Dry-run output for all workflow steps -# - Error messages for missing inputs -# -# No actual Docker containers are run - all Docker commands are mocked. -# -# Usage: ./scripts/edit/recreate-private-keys/test/test_recreate_private_keys.sh - -set -euo pipefail - -SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." 
&& pwd)" - -# Test directories -TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" -TEST_DATA_DIR="$SCRIPT_DIR/data" - -# Script under test -RECREATE_SCRIPT="$REPO_ROOT/scripts/edit/recreate-private-keys/recreate-private-keys.sh" - -# Test counters -TESTS_RUN=0 -TESTS_PASSED=0 -TESTS_FAILED=0 - -# Colors for output -RED='\033[0;31m' -GREEN='\033[0;32m' -YELLOW='\033[1;33m' -BLUE='\033[0;34m' -NC='\033[0m' - -log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } -log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } -log_error() { echo -e "${RED}[ERROR]${NC} $1"; } -log_test() { echo -e "${BLUE}[TEST]${NC} $1"; } - -# Create mock docker script that logs calls and returns success -setup_mock_docker() { - local mock_bin_dir="$TEST_DATA_DIR/mock-bin" - mkdir -p "$mock_bin_dir" - - # Create mock docker command - cat > "$mock_bin_dir/docker" << 'MOCK_DOCKER' -#!/usr/bin/env bash -# Mock docker for testing - logs all calls -echo "[MOCK-DOCKER] $*" >> "${MOCK_DOCKER_LOG:-/dev/null}" - -# Handle specific commands -case "$*" in - "info") - echo "Mock Docker info" - exit 0 - ;; - "compose"*"stop"*) - echo "[MOCK] Stopping containers" - exit 0 - ;; - "compose"*"up"*) - echo "[MOCK] Starting containers" - exit 0 - ;; - *"charon"*"enr"*) - # Return a mock ENR - echo "enr:-HW4QMockENRForTesting12345" - exit 0 - ;; - *"charon"*"edit recreate-private-keys"*) - echo "[MOCK] Running recreate-private-keys" - exit 0 - ;; - *) - echo "[MOCK] Unhandled docker command: $*" - exit 0 - ;; -esac -MOCK_DOCKER - chmod +x "$mock_bin_dir/docker" - - # Export PATH with mock first - export PATH="$mock_bin_dir:$PATH" - export MOCK_DOCKER_LOG="$TEST_DATA_DIR/docker-calls.log" -} - -# Setup test working directory with fixtures -# Note: Scripts always cd to REPO_ROOT, so we must put test fixtures there -# We backup any existing files and restore them on cleanup -setup_test_env() { - rm -rf "$TEST_DATA_DIR" - mkdir -p "$TEST_DATA_DIR/backup" - - # Backup existing files in REPO_ROOT if they exist - if [ -f 
"$REPO_ROOT/.env" ]; then - cp "$REPO_ROOT/.env" "$TEST_DATA_DIR/backup/.env.bak" - fi - if [ -d "$REPO_ROOT/.charon" ]; then - # Only backup key files, not the whole directory - mkdir -p "$TEST_DATA_DIR/backup/.charon" - [ -f "$REPO_ROOT/.charon/cluster-lock.json" ] && \ - cp "$REPO_ROOT/.charon/cluster-lock.json" "$TEST_DATA_DIR/backup/.charon/" - [ -f "$REPO_ROOT/.charon/charon-enr-private-key" ] && \ - cp "$REPO_ROOT/.charon/charon-enr-private-key" "$TEST_DATA_DIR/backup/.charon/" - fi - - # Install test fixtures to REPO_ROOT - cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" - mkdir -p "$REPO_ROOT/.charon" - cp "$TEST_FIXTURES_DIR/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" - cp "$TEST_FIXTURES_DIR/.charon/charon-enr-private-key" "$REPO_ROOT/.charon/" - - # Create required directories - mkdir -p "$REPO_ROOT/backups" - - # Setup mock docker - setup_mock_docker -} - -restore_repo_state() { - # Restore backed up files - if [ -f "$TEST_DATA_DIR/backup/.env.bak" ]; then - cp "$TEST_DATA_DIR/backup/.env.bak" "$REPO_ROOT/.env" - else - rm -f "$REPO_ROOT/.env" - fi - - if [ -d "$TEST_DATA_DIR/backup/.charon" ]; then - [ -f "$TEST_DATA_DIR/backup/.charon/cluster-lock.json" ] && \ - cp "$TEST_DATA_DIR/backup/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" - [ -f "$TEST_DATA_DIR/backup/.charon/charon-enr-private-key" ] && \ - cp "$TEST_DATA_DIR/backup/.charon/charon-enr-private-key" "$REPO_ROOT/.charon/" - fi -} - -cleanup() { - log_info "Cleaning up and restoring original state..." 
- restore_repo_state -} - -trap cleanup EXIT - -# Test assertion helpers -assert_exit_code() { - local expected="$1" - local actual="$2" - local test_name="$3" - - if [ "$actual" -eq "$expected" ]; then - return 0 - else - log_error "Expected exit code $expected, got $actual in $test_name" - return 1 - fi -} - -assert_output_contains() { - local pattern="$1" - local output="$2" - local test_name="$3" - - if echo "$output" | grep -q -F -- "$pattern"; then - return 0 - else - log_error "Expected output to contain '$pattern' in $test_name" - echo "Actual output:" - echo "$output" | head -20 - return 1 - fi -} - -assert_output_not_contains() { - local pattern="$1" - local output="$2" - local test_name="$3" - - if echo "$output" | grep -q "$pattern"; then - log_error "Expected output NOT to contain '$pattern' in $test_name" - return 1 - else - return 0 - fi -} - -run_test() { - local test_name="$1" - local test_func="$2" - - TESTS_RUN=$((TESTS_RUN + 1)) - log_test "Running: $test_name" - - if $test_func; then - echo -e " ${GREEN}✓ PASSED${NC}" - TESTS_PASSED=$((TESTS_PASSED + 1)) - else - echo -e " ${RED}✗ FAILED${NC}" - TESTS_FAILED=$((TESTS_FAILED + 1)) - fi -} - -# ============================================================================ -# RECREATE-PRIVATE-KEYS.SH TESTS -# ============================================================================ - -test_help() { - local output - local exit_code=0 - - output=$("$RECREATE_SCRIPT" --help 2>&1) || exit_code=$? - - assert_exit_code 0 "$exit_code" "test_help" && \ - assert_output_contains "Usage:" "$output" "test_help" && \ - assert_output_contains "--dry-run" "$output" "test_help" -} - -test_missing_env() { - local output - local exit_code=0 - - rm -f "$REPO_ROOT/.env" - - output=$("$RECREATE_SCRIPT" 2>&1) || exit_code=$? 
- - # Restore .env for other tests - cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" - - assert_exit_code 1 "$exit_code" "test_missing_env" && \ - assert_output_contains ".env file not found" "$output" "test_missing_env" -} - -test_missing_network() { - local output - local exit_code=0 - - echo "VC=vc-lodestar" > "$REPO_ROOT/.env" # Missing NETWORK - - output=$("$RECREATE_SCRIPT" 2>&1) || exit_code=$? - - # Restore .env - cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" - - assert_exit_code 1 "$exit_code" "test_missing_network" && \ - assert_output_contains "NETWORK variable not set" "$output" "test_missing_network" -} - -test_missing_vc() { - local output - local exit_code=0 - - echo "NETWORK=hoodi" > "$REPO_ROOT/.env" # Missing VC - - output=$("$RECREATE_SCRIPT" 2>&1) || exit_code=$? - - # Restore .env - cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" - - assert_exit_code 1 "$exit_code" "test_missing_vc" && \ - assert_output_contains "VC variable not set" "$output" "test_missing_vc" -} - -test_missing_charon_dir() { - local output - local exit_code=0 - - mv "$REPO_ROOT/.charon" "$REPO_ROOT/.charon.test.bak" - - output=$("$RECREATE_SCRIPT" 2>&1) || exit_code=$? - - # Restore .charon - mv "$REPO_ROOT/.charon.test.bak" "$REPO_ROOT/.charon" - - assert_exit_code 1 "$exit_code" "test_missing_charon_dir" && \ - assert_output_contains ".charon directory not found" "$output" "test_missing_charon_dir" -} - -test_missing_cluster_lock() { - local output - local exit_code=0 - - rm -f "$REPO_ROOT/.charon/cluster-lock.json" - - output=$("$RECREATE_SCRIPT" 2>&1) || exit_code=$? 
- - # Restore cluster-lock - cp "$TEST_FIXTURES_DIR/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" - - assert_exit_code 1 "$exit_code" "test_missing_cluster_lock" && \ - assert_output_contains "cluster-lock.json not found" "$output" "test_missing_cluster_lock" -} - -test_missing_validator_keys() { - local output - local exit_code=0 - - # Ensure validator_keys doesn't exist - rm -rf "$REPO_ROOT/.charon/validator_keys" - - output=$("$RECREATE_SCRIPT" 2>&1) || exit_code=$? - - assert_exit_code 1 "$exit_code" "test_missing_validator_keys" && \ - assert_output_contains "validator_keys directory not found" "$output" "test_missing_validator_keys" -} - -test_dry_run_workflow() { - local output - local exit_code=0 - - # Create validator_keys for this test - mkdir -p "$REPO_ROOT/.charon/validator_keys" - - output=$("$RECREATE_SCRIPT" --dry-run 2>&1) || exit_code=$? - - # Cleanup - rm -rf "$REPO_ROOT/.charon/validator_keys" - - assert_exit_code 0 "$exit_code" "test_dry_run_workflow" && \ - assert_output_contains "DRY-RUN MODE" "$output" "test_dry_run_workflow" && \ - assert_output_contains "Exporting anti-slashing database" "$output" "test_dry_run_workflow" && \ - assert_output_contains "charon alpha edit recreate-private-keys" "$output" "test_dry_run_workflow" && \ - assert_output_contains "Updating anti-slashing database" "$output" "test_dry_run_workflow" && \ - assert_output_contains "Stopping containers" "$output" "test_dry_run_workflow" && \ - assert_output_contains "Backing up" "$output" "test_dry_run_workflow" && \ - assert_output_contains "Importing updated anti-slashing" "$output" "test_dry_run_workflow" && \ - assert_output_contains "Restarting containers" "$output" "test_dry_run_workflow" -} - -test_unknown_argument() { - local output - local exit_code=0 - - output=$("$RECREATE_SCRIPT" --invalid-flag 2>&1) || exit_code=$? 
- - assert_exit_code 1 "$exit_code" "test_unknown_argument" && \ - assert_output_contains "Unknown argument" "$output" "test_unknown_argument" -} - -# ============================================================================ -# MAIN TEST RUNNER -# ============================================================================ - -main() { - echo "" - echo "╔════════════════════════════════════════════════════════════════╗" - echo "║ Recreate-Private-Keys Script - Integration Tests ║" - echo "╚════════════════════════════════════════════════════════════════╝" - echo "" - - # Setup test environment - log_info "Setting up test environment..." - setup_test_env - - echo "" - echo "─────────────────────────────────────────────────────────────────" - echo " RECREATE-PRIVATE-KEYS.SH TESTS" - echo "─────────────────────────────────────────────────────────────────" - echo "" - - run_test "recreate-private-keys: --help shows usage" test_help - run_test "recreate-private-keys: error when .env missing" test_missing_env - run_test "recreate-private-keys: error when NETWORK missing" test_missing_network - run_test "recreate-private-keys: error when VC missing" test_missing_vc - run_test "recreate-private-keys: error when .charon dir missing" test_missing_charon_dir - run_test "recreate-private-keys: error when cluster-lock missing" test_missing_cluster_lock - run_test "recreate-private-keys: error when validator_keys missing" test_missing_validator_keys - run_test "recreate-private-keys: dry-run full workflow" test_dry_run_workflow - run_test "recreate-private-keys: error for unknown argument" test_unknown_argument - - echo "" - echo "═════════════════════════════════════════════════════════════════" - echo "" - - if [ "$TESTS_FAILED" -eq 0 ]; then - echo -e "${GREEN}All $TESTS_PASSED tests passed!${NC}" - echo "" - exit 0 - else - echo -e "${RED}$TESTS_FAILED of $TESTS_RUN tests failed${NC}" - echo "" - exit 1 - fi -} - -main "$@" diff --git 
a/scripts/edit/replace-operator/test/.gitignore b/scripts/edit/replace-operator/test/.gitignore deleted file mode 100644 index 92fdcf73..00000000 --- a/scripts/edit/replace-operator/test/.gitignore +++ /dev/null @@ -1,2 +0,0 @@ -# Test artifacts - don't commit -data/ diff --git a/scripts/edit/replace-operator/test/README.md b/scripts/edit/replace-operator/test/README.md deleted file mode 100644 index 3cc3d53b..00000000 --- a/scripts/edit/replace-operator/test/README.md +++ /dev/null @@ -1,27 +0,0 @@ -# Replace-Operator Integration Tests - -Integration tests for `new-operator.sh` and `remaining-operator.sh` scripts. - -## Overview - -These tests validate the replace-operator scripts without running actual Docker containers or the charon ceremony. The focus is on: - -- **Argument parsing and validation** -- **Prerequisite checks** (`.env`, `.charon/`, cluster-lock, ENR key) -- **Dry-run output** for all workflow steps -- **Error messages** for missing/invalid inputs - -## Running Tests - -```bash -./scripts/edit/replace-operator/test/test_replace_operator.sh -``` - -Expected output: All 21 tests should pass in under 5 seconds. 
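
> Editor's note: the assertion helpers these suites share match output as fixed strings. A sketch of that style (the helper name mirrors the suite; the sample output is made up): `-F` treats the pattern as a literal string and `--` ends option parsing, so patterns like `--dry-run` are not misread as `grep` flags.

```shell
#!/usr/bin/env bash
# Fixed-string output assertion in the style used by the test suites.
set -euo pipefail

assert_output_contains() {
  local pattern="$1" output="$2"
  if echo "$output" | grep -qF -- "$pattern"; then
    return 0
  fi
  echo "missing expected fragment: $pattern" >&2
  return 1
}

out="Usage: new-operator.sh [--dry-run] [--help]"
assert_output_contains "--dry-run" "$out" && echo "PASS"
```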
- -## What's NOT Tested - -- **Actual Docker operations** - Docker commands are mocked -- **Charon ceremony** - Would require actual cluster coordination -- **ASDB export/import** - Tested separately in `scripts/edit/vc/test/` -- **Container orchestration** - Would require running services diff --git a/scripts/edit/replace-operator/test/fixtures/.charon/charon-enr-private-key b/scripts/edit/replace-operator/test/fixtures/.charon/charon-enr-private-key deleted file mode 100644 index 372a826b..00000000 --- a/scripts/edit/replace-operator/test/fixtures/.charon/charon-enr-private-key +++ /dev/null @@ -1 +0,0 @@ -0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef diff --git a/scripts/edit/replace-operator/test/fixtures/.charon/cluster-lock.json b/scripts/edit/replace-operator/test/fixtures/.charon/cluster-lock.json deleted file mode 100644 index d3be61be..00000000 --- a/scripts/edit/replace-operator/test/fixtures/.charon/cluster-lock.json +++ /dev/null @@ -1,55 +0,0 @@ -{ - "cluster_definition": { - "name": "TestCluster", - "num_validators": 1, - "threshold": 3, - "operators": [ - { - "address": "0x1111111111111111111111111111111111111111", - "enr": "enr:-HW4QOldBest...operator0" - }, - { - "address": "0x2222222222222222222222222222222222222222", - "enr": "enr:-HW4QNewOper...operator1" - }, - { - "address": "0x3333333333333333333333333333333333333333", - "enr": "enr:-HW4QThird...operator2" - }, - { - "address": "0x4444444444444444444444444444444444444444", - "enr": "enr:-HW4QFourth...operator3" - } - ] - }, - "distributed_validators": [ - { - "distributed_public_key": "0xa9fb2be415318eb77709f7c378ab26025371c0b11213d93fd662ffdb06e77a05c7b04573a478e9d5c0c0fd98078965ef", - "public_shares": [ - "0xa3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", - "0x8afba316fdcf51e25a89e05e17377b8c72fd465c95346df4ed5694f295faa2ce061e14e579c5bc01a468dbbb191c58e8", - 
"0xa1aeebe0980509f5f8d8d424beb89004a967da8d8093248f64eb27c4ee5d22ba9c0f157025f551f47b31833f8bc585f8", - "0xa6c283c82cd0b65436861a149fb840849d06ded1dd8d2f900afb358c6a4232004309120f00a553cdccd8a43f6b743c82" - ] - } - ], - "operators": [ - { - "address": "0x1111111111111111111111111111111111111111", - "enr": "enr:-HW4QOldBest...operator0" - }, - { - "address": "0x2222222222222222222222222222222222222222", - "enr": "enr:-HW4QNewOper...operator1" - }, - { - "address": "0x3333333333333333333333333333333333333333", - "enr": "enr:-HW4QThird...operator2" - }, - { - "address": "0x4444444444444444444444444444444444444444", - "enr": "enr:-HW4QFourth...operator3" - } - ], - "lock_hash": "0xe9dbc87171f99bd8b6f348f6bf314291651933256e712ace299190f5e04e7795" -} diff --git a/scripts/edit/replace-operator/test/fixtures/.env.test b/scripts/edit/replace-operator/test/fixtures/.env.test deleted file mode 100644 index b0d4457d..00000000 --- a/scripts/edit/replace-operator/test/fixtures/.env.test +++ /dev/null @@ -1,3 +0,0 @@ -# Test environment for replace-operator tests -NETWORK=hoodi -VC=vc-lodestar diff --git a/scripts/edit/replace-operator/test/fixtures/new-cluster-lock.json b/scripts/edit/replace-operator/test/fixtures/new-cluster-lock.json deleted file mode 100644 index 187b3582..00000000 --- a/scripts/edit/replace-operator/test/fixtures/new-cluster-lock.json +++ /dev/null @@ -1,55 +0,0 @@ -{ - "cluster_definition": { - "name": "TestCluster", - "num_validators": 1, - "threshold": 3, - "operators": [ - { - "address": "0x5555555555555555555555555555555555555555", - "enr": "enr:-HW4QNewReplacement...newoperator0" - }, - { - "address": "0x2222222222222222222222222222222222222222", - "enr": "enr:-HW4QNewOper...operator1" - }, - { - "address": "0x3333333333333333333333333333333333333333", - "enr": "enr:-HW4QThird...operator2" - }, - { - "address": "0x4444444444444444444444444444444444444444", - "enr": "enr:-HW4QFourth...operator3" - } - ] - }, - "distributed_validators": [ - { - 
"distributed_public_key": "0xa9fb2be415318eb77709f7c378ab26025371c0b11213d93fd662ffdb06e77a05c7b04573a478e9d5c0c0fd98078965ef", - "public_shares": [ - "0xb11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111", - "0x8afba316fdcf51e25a89e05e17377b8c72fd465c95346df4ed5694f295faa2ce061e14e579c5bc01a468dbbb191c58e8", - "0xa1aeebe0980509f5f8d8d424beb89004a967da8d8093248f64eb27c4ee5d22ba9c0f157025f551f47b31833f8bc585f8", - "0xa6c283c82cd0b65436861a149fb840849d06ded1dd8d2f900afb358c6a4232004309120f00a553cdccd8a43f6b743c82" - ] - } - ], - "operators": [ - { - "address": "0x5555555555555555555555555555555555555555", - "enr": "enr:-HW4QNewReplacement...newoperator0" - }, - { - "address": "0x2222222222222222222222222222222222222222", - "enr": "enr:-HW4QNewOper...operator1" - }, - { - "address": "0x3333333333333333333333333333333333333333", - "enr": "enr:-HW4QThird...operator2" - }, - { - "address": "0x4444444444444444444444444444444444444444", - "enr": "enr:-HW4QFourth...operator3" - } - ], - "lock_hash": "0xf0000000000000000000000000000000000000000000000000000000000000000" -} diff --git a/scripts/edit/replace-operator/test/fixtures/sample-asdb.json b/scripts/edit/replace-operator/test/fixtures/sample-asdb.json deleted file mode 100644 index 3acc3886..00000000 --- a/scripts/edit/replace-operator/test/fixtures/sample-asdb.json +++ /dev/null @@ -1,24 +0,0 @@ -{ - "metadata": { - "interchange_format_version": "5", - "genesis_validators_root": "0x212f13fc4df078b6cb7db228f1c8307566dcecf900867401a92023d7ba99cb5f" - }, - "data": [ - { - "pubkey": "0xa3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", - "signed_blocks": [ - { - "slot": "81952", - "signing_root": "0x4ff6f743a43f3b4f95350831aeaf0a122a1a392922c45d804280284a69eb850b" - } - ], - "signed_attestations": [ - { - "source_epoch": "2560", - "target_epoch": "2561", - "signing_root": 
"0x587d6a4f59a58fe15bdac1234e3d51a1d5c8b2e0e3f5e0f2a1b3c4d5e6f7a8b9" - } - ] - } - ] -} diff --git a/scripts/edit/replace-operator/test/test_replace_operator.sh b/scripts/edit/replace-operator/test/test_replace_operator.sh deleted file mode 100755 index a928e4eb..00000000 --- a/scripts/edit/replace-operator/test/test_replace_operator.sh +++ /dev/null @@ -1,575 +0,0 @@ -#!/usr/bin/env bash - -# Integration test for replace-operator scripts (new-operator.sh & remaining-operator.sh) -# -# This test validates: -# - Argument parsing and validation -# - Prerequisite checks (.env, .charon/, cluster-lock, ENR key) -# - Dry-run output for all workflow steps -# - Error messages for missing inputs -# -# No actual Docker containers or ceremonies are run - all Docker commands are mocked. -# -# Usage: ./scripts/edit/replace-operator/test/test_replace_operator.sh - -set -euo pipefail - -SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." && pwd)" - -# Test directories -TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" -TEST_DATA_DIR="$SCRIPT_DIR/data" - -# Scripts under test -NEW_OPERATOR_SCRIPT="$REPO_ROOT/scripts/edit/replace-operator/new-operator.sh" -REMAINING_OPERATOR_SCRIPT="$REPO_ROOT/scripts/edit/replace-operator/remaining-operator.sh" - -# Test counters -TESTS_RUN=0 -TESTS_PASSED=0 -TESTS_FAILED=0 - -# Colors for output -RED='\033[0;31m' -GREEN='\033[0;32m' -YELLOW='\033[1;33m' -BLUE='\033[0;34m' -NC='\033[0m' - -log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } -log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } -log_error() { echo -e "${RED}[ERROR]${NC} $1"; } -log_test() { echo -e "${BLUE}[TEST]${NC} $1"; } - -# Create mock docker script that logs calls and returns success -setup_mock_docker() { - local mock_bin_dir="$TEST_DATA_DIR/mock-bin" - mkdir -p "$mock_bin_dir" - - # Create mock docker command - cat > "$mock_bin_dir/docker" << 'MOCK_DOCKER' -#!/usr/bin/env bash -# Mock docker for testing - logs all calls -echo 
"[MOCK-DOCKER] $*" >> "${MOCK_DOCKER_LOG:-/dev/null}" - -# Handle specific commands -case "$*" in - "info") - echo "Mock Docker info" - exit 0 - ;; - "compose"*"ps"*) - # Simulate container not running (for remaining-operator checks) - exit 0 - ;; - "compose"*"stop"*) - echo "[MOCK] Stopping containers" - exit 0 - ;; - "compose"*"up"*) - echo "[MOCK] Starting containers" - exit 0 - ;; - *"charon"*"enr"*) - # Return a mock ENR - echo "enr:-HW4QMockENRForTesting12345" - exit 0 - ;; - *"charon"*"create enr"*) - echo "[MOCK] Creating ENR" - exit 0 - ;; - *"charon"*"edit replace-operator"*) - echo "[MOCK] Running replace-operator ceremony" - exit 0 - ;; - *) - echo "[MOCK] Unhandled docker command: $*" - exit 0 - ;; -esac -MOCK_DOCKER - chmod +x "$mock_bin_dir/docker" - - # Export PATH with mock first - export PATH="$mock_bin_dir:$PATH" - export MOCK_DOCKER_LOG="$TEST_DATA_DIR/docker-calls.log" -} - -# Setup test working directory with fixtures -# Note: Scripts always cd to REPO_ROOT, so we must put test fixtures there -# We backup any existing files and restore them on cleanup -setup_test_env() { - rm -rf "$TEST_DATA_DIR" - mkdir -p "$TEST_DATA_DIR/backup" - - # Backup existing files in REPO_ROOT if they exist - if [ -f "$REPO_ROOT/.env" ]; then - cp "$REPO_ROOT/.env" "$TEST_DATA_DIR/backup/.env.bak" - fi - if [ -d "$REPO_ROOT/.charon" ]; then - # Only backup key files, not the whole directory - mkdir -p "$TEST_DATA_DIR/backup/.charon" - [ -f "$REPO_ROOT/.charon/cluster-lock.json" ] && \ - cp "$REPO_ROOT/.charon/cluster-lock.json" "$TEST_DATA_DIR/backup/.charon/" - [ -f "$REPO_ROOT/.charon/charon-enr-private-key" ] && \ - cp "$REPO_ROOT/.charon/charon-enr-private-key" "$TEST_DATA_DIR/backup/.charon/" - fi - - # Install test fixtures to REPO_ROOT - cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" - mkdir -p "$REPO_ROOT/.charon" - cp "$TEST_FIXTURES_DIR/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" - cp "$TEST_FIXTURES_DIR/.charon/charon-enr-private-key" 
"$REPO_ROOT/.charon/" - - # Create required directories - mkdir -p "$REPO_ROOT/backups" - mkdir -p "$REPO_ROOT/output" - mkdir -p "$REPO_ROOT/asdb-export" - - # Copy sample ASDB for remaining-operator tests - cp "$TEST_FIXTURES_DIR/sample-asdb.json" "$REPO_ROOT/asdb-export/slashing-protection.json" - - # Copy new cluster-lock fixture to output - cp "$TEST_FIXTURES_DIR/new-cluster-lock.json" "$REPO_ROOT/output/cluster-lock.json" - - # Setup mock docker - setup_mock_docker -} - -restore_repo_state() { - # Restore backed up files - if [ -f "$TEST_DATA_DIR/backup/.env.bak" ]; then - cp "$TEST_DATA_DIR/backup/.env.bak" "$REPO_ROOT/.env" - else - rm -f "$REPO_ROOT/.env" - fi - - if [ -d "$TEST_DATA_DIR/backup/.charon" ]; then - [ -f "$TEST_DATA_DIR/backup/.charon/cluster-lock.json" ] && \ - cp "$TEST_DATA_DIR/backup/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" - [ -f "$TEST_DATA_DIR/backup/.charon/charon-enr-private-key" ] && \ - cp "$TEST_DATA_DIR/backup/.charon/charon-enr-private-key" "$REPO_ROOT/.charon/" - fi - - # Clean up test artifacts - rm -f "$REPO_ROOT/asdb-export/slashing-protection.json" - rm -f "$REPO_ROOT/output/cluster-lock.json" -} - -cleanup() { - log_info "Cleaning up and restoring original state..." 
- restore_repo_state -} - -trap cleanup EXIT - -# Test assertion helpers -assert_exit_code() { - local expected="$1" - local actual="$2" - local test_name="$3" - - if [ "$actual" -eq "$expected" ]; then - return 0 - else - log_error "Expected exit code $expected, got $actual in $test_name" - return 1 - fi -} - -assert_output_contains() { - local pattern="$1" - local output="$2" - local test_name="$3" - - if echo "$output" | grep -q -F -- "$pattern"; then - return 0 - else - log_error "Expected output to contain '$pattern' in $test_name" - echo "Actual output:" - echo "$output" | head -20 - return 1 - fi -} - -assert_output_not_contains() { - local pattern="$1" - local output="$2" - local test_name="$3" - - if echo "$output" | grep -q "$pattern"; then - log_error "Expected output NOT to contain '$pattern' in $test_name" - return 1 - else - return 0 - fi -} - -run_test() { - local test_name="$1" - local test_func="$2" - - TESTS_RUN=$((TESTS_RUN + 1)) - log_test "Running: $test_name" - - if $test_func; then - echo -e " ${GREEN}✓ PASSED${NC}" - TESTS_PASSED=$((TESTS_PASSED + 1)) - else - echo -e " ${RED}✗ FAILED${NC}" - TESTS_FAILED=$((TESTS_FAILED + 1)) - fi -} - -# ============================================================================ -# NEW-OPERATOR.SH TESTS -# ============================================================================ - -test_new_help() { - local output - local exit_code=0 - - output=$("$NEW_OPERATOR_SCRIPT" --help 2>&1) || exit_code=$? - - assert_exit_code 0 "$exit_code" "test_new_help" && \ - assert_output_contains "Usage:" "$output" "test_new_help" && \ - assert_output_contains "--cluster-lock" "$output" "test_new_help" && \ - assert_output_contains "--generate-enr" "$output" "test_new_help" && \ - assert_output_contains "--dry-run" "$output" "test_new_help" -} - -test_new_missing_env() { - local output - local exit_code=0 - - # Remove .env from REPO_ROOT - rm -f "$REPO_ROOT/.env" - - output=$("$NEW_OPERATOR_SCRIPT" 2>&1) || exit_code=$? 
- - # Restore .env for other tests - cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" - - assert_exit_code 1 "$exit_code" "test_new_missing_env" && \ - assert_output_contains ".env file not found" "$output" "test_new_missing_env" -} - -test_new_missing_network() { - local output - local exit_code=0 - - echo "VC=vc-lodestar" > "$REPO_ROOT/.env" # Missing NETWORK - - output=$("$NEW_OPERATOR_SCRIPT" 2>&1) || exit_code=$? - - # Restore .env - cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" - - assert_exit_code 1 "$exit_code" "test_new_missing_network" && \ - assert_output_contains "NETWORK variable not set" "$output" "test_new_missing_network" -} - -test_new_missing_vc() { - local output - local exit_code=0 - - echo "NETWORK=hoodi" > "$REPO_ROOT/.env" # Missing VC - - output=$("$NEW_OPERATOR_SCRIPT" 2>&1) || exit_code=$? - - # Restore .env - cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" - - assert_exit_code 1 "$exit_code" "test_new_missing_vc" && \ - assert_output_contains "VC variable not set" "$output" "test_new_missing_vc" -} - -test_new_missing_charon_dir() { - local output - local exit_code=0 - - # Temporarily rename .charon - mv "$REPO_ROOT/.charon" "$REPO_ROOT/.charon.test.bak" - - output=$("$NEW_OPERATOR_SCRIPT" 2>&1) || exit_code=$? - - # Restore .charon - mv "$REPO_ROOT/.charon.test.bak" "$REPO_ROOT/.charon" - - assert_exit_code 1 "$exit_code" "test_new_missing_charon_dir" && \ - assert_output_contains ".charon directory not found" "$output" "test_new_missing_charon_dir" -} - -test_new_missing_enr_key() { - local output - local exit_code=0 - - rm -f "$REPO_ROOT/.charon/charon-enr-private-key" - - output=$("$NEW_OPERATOR_SCRIPT" 2>&1) || exit_code=$? 
- - # Restore ENR key - cp "$TEST_FIXTURES_DIR/.charon/charon-enr-private-key" "$REPO_ROOT/.charon/" - - assert_exit_code 1 "$exit_code" "test_new_missing_enr_key" && \ - assert_output_contains "charon-enr-private-key not found" "$output" "test_new_missing_enr_key" -} - -test_new_invalid_cluster_lock_path() { - local output - local exit_code=0 - - output=$("$NEW_OPERATOR_SCRIPT" --cluster-lock /nonexistent/path.json 2>&1) || exit_code=$? - - assert_exit_code 1 "$exit_code" "test_new_invalid_cluster_lock_path" && \ - assert_output_contains "Cluster-lock file not found" "$output" "test_new_invalid_cluster_lock_path" -} - -test_new_dry_run_generate_enr() { - local output - local exit_code=0 - - output=$("$NEW_OPERATOR_SCRIPT" --generate-enr --dry-run 2>&1) || exit_code=$? - - assert_exit_code 0 "$exit_code" "test_new_dry_run_generate_enr" && \ - assert_output_contains "DRY-RUN MODE" "$output" "test_new_dry_run_generate_enr" && \ - assert_output_contains "Generating ENR" "$output" "test_new_dry_run_generate_enr" -} - -test_new_dry_run_join_cluster() { - local output - local exit_code=0 - - output=$("$NEW_OPERATOR_SCRIPT" --cluster-lock "$TEST_FIXTURES_DIR/new-cluster-lock.json" --dry-run 2>&1) || exit_code=$? - - assert_exit_code 0 "$exit_code" "test_new_dry_run_join_cluster" && \ - assert_output_contains "DRY-RUN MODE" "$output" "test_new_dry_run_join_cluster" && \ - assert_output_contains "Stopping" "$output" "test_new_dry_run_join_cluster" && \ - assert_output_contains "Installing new cluster-lock" "$output" "test_new_dry_run_join_cluster" && \ - assert_output_contains "Starting containers" "$output" "test_new_dry_run_join_cluster" -} - -test_new_unknown_argument() { - local output - local exit_code=0 - - output=$("$NEW_OPERATOR_SCRIPT" --invalid-flag 2>&1) || exit_code=$? 
- - assert_exit_code 1 "$exit_code" "test_new_unknown_argument" && \ - assert_output_contains "Unknown argument" "$output" "test_new_unknown_argument" -} - -# ============================================================================ -# REMAINING-OPERATOR.SH TESTS -# ============================================================================ - -test_remaining_help() { - local output - local exit_code=0 - - output=$("$REMAINING_OPERATOR_SCRIPT" --help 2>&1) || exit_code=$? - - assert_exit_code 0 "$exit_code" "test_remaining_help" && \ - assert_output_contains "Usage:" "$output" "test_remaining_help" && \ - assert_output_contains "--new-enr" "$output" "test_remaining_help" && \ - assert_output_contains "--operator-index" "$output" "test_remaining_help" && \ - assert_output_contains "--skip-export" "$output" "test_remaining_help" -} - -test_remaining_missing_new_enr() { - local output - local exit_code=0 - - output=$("$REMAINING_OPERATOR_SCRIPT" --operator-index 0 2>&1) || exit_code=$? - - assert_exit_code 1 "$exit_code" "test_remaining_missing_new_enr" && \ - assert_output_contains "Missing required argument: --new-enr" "$output" "test_remaining_missing_new_enr" -} - -test_remaining_missing_operator_index() { - local output - local exit_code=0 - - output=$("$REMAINING_OPERATOR_SCRIPT" --new-enr "enr:-test123" 2>&1) || exit_code=$? - - assert_exit_code 1 "$exit_code" "test_remaining_missing_operator_index" && \ - assert_output_contains "Missing required argument: --operator-index" "$output" "test_remaining_missing_operator_index" -} - -test_remaining_missing_env() { - local output - local exit_code=0 - - rm -f "$REPO_ROOT/.env" - - output=$("$REMAINING_OPERATOR_SCRIPT" --new-enr "enr:-test" --operator-index 0 2>&1) || exit_code=$? 
- - # Restore .env - cp "$TEST_FIXTURES_DIR/.env.test" "$REPO_ROOT/.env" - - assert_exit_code 1 "$exit_code" "test_remaining_missing_env" && \ - assert_output_contains ".env file not found" "$output" "test_remaining_missing_env" -} - -test_remaining_missing_charon_dir() { - local output - local exit_code=0 - - mv "$REPO_ROOT/.charon" "$REPO_ROOT/.charon.test.bak" - - output=$("$REMAINING_OPERATOR_SCRIPT" --new-enr "enr:-test" --operator-index 0 2>&1) || exit_code=$? - - # Restore .charon - mv "$REPO_ROOT/.charon.test.bak" "$REPO_ROOT/.charon" - - assert_exit_code 1 "$exit_code" "test_remaining_missing_charon_dir" && \ - assert_output_contains ".charon directory not found" "$output" "test_remaining_missing_charon_dir" -} - -test_remaining_missing_cluster_lock() { - local output - local exit_code=0 - - rm -f "$REPO_ROOT/.charon/cluster-lock.json" - - output=$("$REMAINING_OPERATOR_SCRIPT" --new-enr "enr:-test" --operator-index 0 2>&1) || exit_code=$? - - # Restore cluster-lock - cp "$TEST_FIXTURES_DIR/.charon/cluster-lock.json" "$REPO_ROOT/.charon/" - - assert_exit_code 1 "$exit_code" "test_remaining_missing_cluster_lock" && \ - assert_output_contains "cluster-lock.json not found" "$output" "test_remaining_missing_cluster_lock" -} - -test_remaining_missing_enr_key() { - local output - local exit_code=0 - - rm -f "$REPO_ROOT/.charon/charon-enr-private-key" - - output=$("$REMAINING_OPERATOR_SCRIPT" --new-enr "enr:-test" --operator-index 0 2>&1) || exit_code=$? 
- - # Restore ENR key - cp "$TEST_FIXTURES_DIR/.charon/charon-enr-private-key" "$REPO_ROOT/.charon/" - - assert_exit_code 1 "$exit_code" "test_remaining_missing_enr_key" && \ - assert_output_contains "charon-enr-private-key not found" "$output" "test_remaining_missing_enr_key" -} - -test_remaining_dry_run_full_workflow() { - local output - local exit_code=0 - - # Use --skip-export to avoid Docker dependencies - output=$("$REMAINING_OPERATOR_SCRIPT" \ - --new-enr "enr:-HW4QTestNewOperator123456789" \ - --operator-index 0 \ - --skip-export \ - --dry-run 2>&1) || exit_code=$? - - assert_exit_code 0 "$exit_code" "test_remaining_dry_run_full_workflow" && \ - assert_output_contains "DRY-RUN MODE" "$output" "test_remaining_dry_run_full_workflow" && \ - assert_output_contains "charon edit replace-operator" "$output" "test_remaining_dry_run_full_workflow" && \ - assert_output_contains "Updating anti-slashing database pubkeys" "$output" "test_remaining_dry_run_full_workflow" && \ - assert_output_contains "Stopping" "$output" "test_remaining_dry_run_full_workflow" && \ - assert_output_contains "Backing up" "$output" "test_remaining_dry_run_full_workflow" && \ - assert_output_contains "Importing" "$output" "test_remaining_dry_run_full_workflow" && \ - assert_output_contains "Restarting" "$output" "test_remaining_dry_run_full_workflow" -} - -test_remaining_skip_export_missing_asdb() { - local output - local exit_code=0 - - rm -f "$REPO_ROOT/asdb-export/slashing-protection.json" - - output=$("$REMAINING_OPERATOR_SCRIPT" \ - --new-enr "enr:-test" \ - --operator-index 0 \ - --skip-export \ - --dry-run 2>&1) || exit_code=$? 
- - # Restore ASDB - cp "$TEST_FIXTURES_DIR/sample-asdb.json" "$REPO_ROOT/asdb-export/slashing-protection.json" - - assert_exit_code 1 "$exit_code" "test_remaining_skip_export_missing_asdb" && \ - assert_output_contains "Cannot skip export" "$output" "test_remaining_skip_export_missing_asdb" -} - -test_remaining_unknown_argument() { - local output - local exit_code=0 - - output=$("$REMAINING_OPERATOR_SCRIPT" --invalid-flag 2>&1) || exit_code=$? - - assert_exit_code 1 "$exit_code" "test_remaining_unknown_argument" && \ - assert_output_contains "Unknown argument" "$output" "test_remaining_unknown_argument" -} - -# ============================================================================ -# MAIN TEST RUNNER -# ============================================================================ - -main() { - echo "" - echo "╔════════════════════════════════════════════════════════════════╗" - echo "║ Replace-Operator Scripts - Integration Tests ║" - echo "╚════════════════════════════════════════════════════════════════╝" - echo "" - - # Setup test environment - log_info "Setting up test environment..." 
- setup_test_env - - echo "" - echo "─────────────────────────────────────────────────────────────────" - echo " NEW-OPERATOR.SH TESTS" - echo "─────────────────────────────────────────────────────────────────" - echo "" - - run_test "new-operator: --help shows usage" test_new_help - run_test "new-operator: error when .env missing" test_new_missing_env - run_test "new-operator: error when NETWORK missing" test_new_missing_network - run_test "new-operator: error when VC missing" test_new_missing_vc - run_test "new-operator: error when .charon dir missing" test_new_missing_charon_dir - run_test "new-operator: error when ENR key missing" test_new_missing_enr_key - run_test "new-operator: error for invalid cluster-lock path" test_new_invalid_cluster_lock_path - run_test "new-operator: dry-run generate ENR" test_new_dry_run_generate_enr - run_test "new-operator: dry-run join cluster" test_new_dry_run_join_cluster - run_test "new-operator: error for unknown argument" test_new_unknown_argument - - echo "" - echo "─────────────────────────────────────────────────────────────────" - echo " REMAINING-OPERATOR.SH TESTS" - echo "─────────────────────────────────────────────────────────────────" - echo "" - - run_test "remaining-operator: --help shows usage" test_remaining_help - run_test "remaining-operator: error when --new-enr missing" test_remaining_missing_new_enr - run_test "remaining-operator: error when --operator-index missing" test_remaining_missing_operator_index - run_test "remaining-operator: error when .env missing" test_remaining_missing_env - run_test "remaining-operator: error when .charon dir missing" test_remaining_missing_charon_dir - run_test "remaining-operator: error when cluster-lock missing" test_remaining_missing_cluster_lock - run_test "remaining-operator: error when ENR key missing" test_remaining_missing_enr_key - run_test "remaining-operator: dry-run full workflow" test_remaining_dry_run_full_workflow - run_test "remaining-operator: skip-export 
needs existing ASDB" test_remaining_skip_export_missing_asdb - run_test "remaining-operator: error for unknown argument" test_remaining_unknown_argument - - echo "" - echo "═════════════════════════════════════════════════════════════════" - echo "" - - if [ "$TESTS_FAILED" -eq 0 ]; then - echo -e "${GREEN}All $TESTS_PASSED tests passed!${NC}" - echo "" - exit 0 - else - echo -e "${RED}$TESTS_FAILED of $TESTS_RUN tests failed${NC}" - echo "" - exit 1 - fi -} - -main "$@" From f75dc384e24ede16b8095472e7374d2c15ab87ab Mon Sep 17 00:00:00 2001 From: Andrei Smirnov Date: Tue, 17 Feb 2026 12:02:03 +0300 Subject: [PATCH 10/12] Updated READMEs --- CLAUDE.md | 74 +++----------------- scripts/edit/add-operators/README.md | 4 -- scripts/edit/add-validators/README.md | 4 -- scripts/edit/recreate-private-keys/README.md | 4 -- scripts/edit/remove-operators/README.md | 4 -- scripts/edit/replace-operator/README.md | 4 -- 6 files changed, 8 insertions(+), 86 deletions(-) diff --git a/CLAUDE.md b/CLAUDE.md index 454cd6a9..2346f6dc 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -112,72 +112,14 @@ docker compose -f docker-compose.yml -f compose-debug.yml up --no-start ## Cluster Edit Scripts -Located in `scripts/edit/`, these automate complex cluster modification operations: - -### Replace Operator (`scripts/edit/replace-operator/`) - -Automates the workflow when one operator in a distributed validator cluster needs to be replaced. - -**For remaining operators:** -```bash -./scripts/edit/replace-operator/remaining-operator.sh \ - --new-enr "enr:-..." 
\ - --operator-index 2 -``` - -**For new operators:** -```bash -# Step 1: Generate and share ENR -./scripts/edit/replace-operator/new-operator.sh --generate-enr - -# Step 2: Apply received cluster-lock -./scripts/edit/replace-operator/new-operator.sh --cluster-lock ./received-cluster-lock.json -``` - -### Anti-Slashing Database Management (`scripts/edit/vc/`) - -When switching validator clients or replacing operators, the anti-slashing database (ASDB) must be exported and imported to prevent slashing violations (EIP-3076 format). - -```bash -# Export from current VC -./scripts/edit/vc/export_asdb.sh - -# Import to new VC (after switching VC in .env) -./scripts/edit/vc/import_asdb.sh -``` - -Client-specific scripts are in subdirectories: `lodestar/`, `nimbus/`, `prysm/`, `teku/`. - -### Recreate Private Keys (`scripts/edit/recreate-private-keys/`) - -Recreates validator private keys from cluster-lock.json when they are lost but the cluster-lock file is still available. - -```bash -./scripts/edit/recreate-private-keys/recreate-private-keys.sh -``` - -## Adding Validators - -Starting with Charon v1.6, you can add validators to an existing cluster using `charon alpha add-validators`: - -```bash -# Using Docker (recommended) -docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:latest \ - alpha add-validators \ - --num-validators 10 \ - --withdrawal-addresses=0x
\ - --fee-recipient-addresses=0x
\ - --data-dir=/opt/charon/.charon \ - --output-dir=/opt/charon/output - -# Apply the new configuration (backup first!) -docker compose stop charon -mv .charon .charon-backup -mv output .charon -docker compose up -d charon -``` - -**Note**: All operators must independently perform the upgrade. The cluster continues operating once threshold operators have upgraded. +Located in `scripts/edit/`, these automate complex cluster modification operations. Each has its own README with full usage details: + +- **[Add Validators](scripts/edit/add-validators/README.md)** - Add new validators to an existing cluster +- **[Add Operators](scripts/edit/add-operators/README.md)** - Expand the cluster by adding new operators +- **[Remove Operators](scripts/edit/remove-operators/README.md)** - Remove operators from the cluster +- **[Replace Operator](scripts/edit/replace-operator/README.md)** - Replace a single operator in the cluster +- **[Recreate Private Keys](scripts/edit/recreate-private-keys/README.md)** - Refresh private key shares while keeping the same validator public keys +- **[Anti-Slashing DB (vc/)](scripts/edit/vc/README.md)** - Export/import/update anti-slashing databases (EIP-3076) ## Monitoring Stack diff --git a/scripts/edit/add-operators/README.md b/scripts/edit/add-operators/README.md index 85e6ada9..a5f04a3f 100644 --- a/scripts/edit/add-operators/README.md +++ b/scripts/edit/add-operators/README.md @@ -89,10 +89,6 @@ Two-step workflow for new operators joining the cluster. - All operators (existing and new) must participate; no partial participation option - Cluster threshold remains unchanged -## Testing - -See [test/README.md](test/README.md) for integration tests. 
- ## Related - [Add-Validators Workflow](../add-validators/README.md) diff --git a/scripts/edit/add-validators/README.md b/scripts/edit/add-validators/README.md index 4fca5e75..05eb45ed 100644 --- a/scripts/edit/add-validators/README.md +++ b/scripts/edit/add-validators/README.md @@ -96,10 +96,6 @@ If your validator keys are stored remotely (e.g., in a KeyManager) and Charon ca - The new cluster will have a new cluster hash (different observability identifier) - All operators must participate; no partial participation option -## Testing - -See [test/README.md](test/README.md) for integration tests. - ## Related - [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md) diff --git a/scripts/edit/recreate-private-keys/README.md b/scripts/edit/recreate-private-keys/README.md index c81ccaa3..4e449440 100644 --- a/scripts/edit/recreate-private-keys/README.md +++ b/scripts/edit/recreate-private-keys/README.md @@ -51,10 +51,6 @@ The script will: - All operators must participate; no partial participation option - All operators must have their current validator private key shares -## Testing - -See [test/README.md](test/README.md) for integration tests. - ## Related - [Replace-Operator Workflow](../replace-operator/README.md) diff --git a/scripts/edit/remove-operators/README.md b/scripts/edit/remove-operators/README.md index 42f72046..0003fe66 100644 --- a/scripts/edit/remove-operators/README.md +++ b/scripts/edit/remove-operators/README.md @@ -90,10 +90,6 @@ If the removal is within fault tolerance, removed operators do **not** need to r - All remaining operators must have valid validator keys to participate - The old cluster must be completely stopped before the new cluster can operate -## Testing - -See [test/README.md](test/README.md) for integration tests. 
- ## Related - [Add-Operators Workflow](../add-operators/README.md) diff --git a/scripts/edit/replace-operator/README.md b/scripts/edit/replace-operator/README.md index e24f6343..c87a5573 100644 --- a/scripts/edit/replace-operator/README.md +++ b/scripts/edit/replace-operator/README.md @@ -83,10 +83,6 @@ Two-step workflow for the new operator joining the cluster. - The new cluster will have a new cluster hash (different observability identifier) - Only one operator can be replaced at a time -## Testing - -See [test/README.md](test/README.md) for integration tests. - ## Related - [Add-Validators Workflow](../add-validators/README.md) From f4c4833dfe0a7a5f37bee5a315126706a7c70a53 Mon Sep 17 00:00:00 2001 From: Andrei Smirnov Date: Tue, 17 Feb 2026 14:19:57 +0300 Subject: [PATCH 11/12] Added e2e test --- .../edit/add-operators/existing-operator.sh | 6 +- scripts/edit/add-operators/new-operator.sh | 2 +- scripts/edit/add-validators/add-validators.sh | 2 +- .../recreate-private-keys.sh | 6 +- .../remove-operators/remaining-operator.sh | 6 +- .../edit/remove-operators/removed-operator.sh | 2 +- scripts/edit/replace-operator/new-operator.sh | 2 +- .../replace-operator/remaining-operator.sh | 6 +- scripts/edit/test/README.md | 50 ++ scripts/edit/test/bin/docker | 288 +++++++ scripts/edit/test/e2e_test.sh | 772 ++++++++++++++++++ 11 files changed, 1126 insertions(+), 16 deletions(-) create mode 100644 scripts/edit/test/README.md create mode 100755 scripts/edit/test/bin/docker create mode 100755 scripts/edit/test/e2e_test.sh diff --git a/scripts/edit/add-operators/existing-operator.sh b/scripts/edit/add-operators/existing-operator.sh index 990e7c9d..701ec3f6 100755 --- a/scripts/edit/add-operators/existing-operator.sh +++ b/scripts/edit/add-operators/existing-operator.sh @@ -39,7 +39,7 @@ set -euo pipefail SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." 
&& pwd)}" cd "$REPO_ROOT" # Default values @@ -212,7 +212,7 @@ fi mkdir -p "$ASDB_EXPORT_DIR" -run_cmd VC="$VC" "$SCRIPT_DIR/../vc/export_asdb.sh" \ +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/export_asdb.sh" \ --output-file "$ASDB_EXPORT_DIR/slashing-protection.json" log_info "Anti-slashing database exported to $ASDB_EXPORT_DIR/slashing-protection.json" @@ -301,7 +301,7 @@ echo "" # Step 6: Import updated ASDB log_step "Step 6: Importing updated anti-slashing database..." -run_cmd VC="$VC" "$SCRIPT_DIR/../vc/import_asdb.sh" \ +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/import_asdb.sh" \ --input-file "$ASDB_EXPORT_DIR/slashing-protection.json" log_info "Anti-slashing database imported" diff --git a/scripts/edit/add-operators/new-operator.sh b/scripts/edit/add-operators/new-operator.sh index e2a6d102..58f45323 100755 --- a/scripts/edit/add-operators/new-operator.sh +++ b/scripts/edit/add-operators/new-operator.sh @@ -42,7 +42,7 @@ set -euo pipefail SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." && pwd)}" cd "$REPO_ROOT" # Default values diff --git a/scripts/edit/add-validators/add-validators.sh b/scripts/edit/add-validators/add-validators.sh index 40c4e534..0f046704 100755 --- a/scripts/edit/add-validators/add-validators.sh +++ b/scripts/edit/add-validators/add-validators.sh @@ -43,7 +43,7 @@ set -euo pipefail SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." 
&& pwd)}" cd "$REPO_ROOT" # Default values diff --git a/scripts/edit/recreate-private-keys/recreate-private-keys.sh b/scripts/edit/recreate-private-keys/recreate-private-keys.sh index 77db0a0c..11babd91 100755 --- a/scripts/edit/recreate-private-keys/recreate-private-keys.sh +++ b/scripts/edit/recreate-private-keys/recreate-private-keys.sh @@ -42,7 +42,7 @@ set -euo pipefail SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." && pwd)}" cd "$REPO_ROOT" # Default values @@ -195,7 +195,7 @@ fi mkdir -p "$ASDB_EXPORT_DIR" -run_cmd VC="$VC" "$SCRIPT_DIR/../vc/export_asdb.sh" \ +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/export_asdb.sh" \ --output-file "$ASDB_EXPORT_DIR/slashing-protection.json" log_info "Anti-slashing database exported to $ASDB_EXPORT_DIR/slashing-protection.json" @@ -280,7 +280,7 @@ echo "" # Step 6: Import updated ASDB log_step "Step 6: Importing updated anti-slashing database..." -run_cmd VC="$VC" "$SCRIPT_DIR/../vc/import_asdb.sh" \ +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/import_asdb.sh" \ --input-file "$ASDB_EXPORT_DIR/slashing-protection.json" log_info "Anti-slashing database imported" diff --git a/scripts/edit/remove-operators/remaining-operator.sh b/scripts/edit/remove-operators/remaining-operator.sh index dbc0582a..de2d0663 100755 --- a/scripts/edit/remove-operators/remaining-operator.sh +++ b/scripts/edit/remove-operators/remaining-operator.sh @@ -42,7 +42,7 @@ set -euo pipefail SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." 
&& pwd)}" cd "$REPO_ROOT" # Default values @@ -247,7 +247,7 @@ fi mkdir -p "$ASDB_EXPORT_DIR" -run_cmd VC="$VC" "$SCRIPT_DIR/../vc/export_asdb.sh" \ +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/export_asdb.sh" \ --output-file "$ASDB_EXPORT_DIR/slashing-protection.json" log_info "Anti-slashing database exported to $ASDB_EXPORT_DIR/slashing-protection.json" @@ -349,7 +349,7 @@ echo "" # Step 6: Import updated ASDB log_step "Step 6: Importing updated anti-slashing database..." -run_cmd VC="$VC" "$SCRIPT_DIR/../vc/import_asdb.sh" \ +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/import_asdb.sh" \ --input-file "$ASDB_EXPORT_DIR/slashing-protection.json" log_info "Anti-slashing database imported" diff --git a/scripts/edit/remove-operators/removed-operator.sh b/scripts/edit/remove-operators/removed-operator.sh index 677a760c..0f66d349 100755 --- a/scripts/edit/remove-operators/removed-operator.sh +++ b/scripts/edit/remove-operators/removed-operator.sh @@ -33,7 +33,7 @@ set -euo pipefail SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." && pwd)}" cd "$REPO_ROOT" # Default values diff --git a/scripts/edit/replace-operator/new-operator.sh b/scripts/edit/replace-operator/new-operator.sh index c5a7adc1..c5fb5ae7 100755 --- a/scripts/edit/replace-operator/new-operator.sh +++ b/scripts/edit/replace-operator/new-operator.sh @@ -40,7 +40,7 @@ set -euo pipefail SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." 
&& pwd)}" cd "$REPO_ROOT" # Default values diff --git a/scripts/edit/replace-operator/remaining-operator.sh b/scripts/edit/replace-operator/remaining-operator.sh index 5943519d..c1623d5c 100755 --- a/scripts/edit/replace-operator/remaining-operator.sh +++ b/scripts/edit/replace-operator/remaining-operator.sh @@ -38,7 +38,7 @@ set -euo pipefail SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." && pwd)}" cd "$REPO_ROOT" # Default values @@ -221,7 +221,7 @@ else mkdir -p "$ASDB_EXPORT_DIR" - run_cmd VC="$VC" "$SCRIPT_DIR/../vc/export_asdb.sh" \ + VC="$VC" run_cmd "$SCRIPT_DIR/../vc/export_asdb.sh" \ --output-file "$ASDB_EXPORT_DIR/slashing-protection.json" log_info "Anti-slashing database exported to $ASDB_EXPORT_DIR/slashing-protection.json" @@ -294,7 +294,7 @@ echo "" # Step 6: Import updated ASDB log_step "Step 6: Importing updated anti-slashing database..." -run_cmd VC="$VC" "$SCRIPT_DIR/../vc/import_asdb.sh" \ +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/import_asdb.sh" \ --input-file "$ASDB_EXPORT_DIR/slashing-protection.json" log_info "Anti-slashing database imported" diff --git a/scripts/edit/test/README.md b/scripts/edit/test/README.md new file mode 100644 index 00000000..087201f3 --- /dev/null +++ b/scripts/edit/test/README.md @@ -0,0 +1,50 @@ +# E2E Integration Tests for Edit Scripts + +End-to-end tests that verify the cluster edit scripts work correctly across the full workflow. 
+ +## Prerequisites + +- **Docker** running locally +- **jq** installed +- **Internet access** (charon ceremonies use the Obol P2P relay) + +## Running + +```bash +./scripts/edit/test/e2e_test.sh +``` + +Override the charon version: + +```bash +CHARON_VERSION=v1.8.2 ./scripts/edit/test/e2e_test.sh +``` + +## What It Tests + +| # | Test | Type | Description | +|---|------|------|-------------| +| 1 | add-validators | P2P ceremony (4 ops) | Adds 1 validator to a 4-operator, 1-validator cluster. Verifies 2 validators in output. | +| 2 | recreate-private-keys | P2P ceremony (4 ops) | Refreshes key shares. Verifies public_shares changed, same validator count. | +| 3 | add-operators | P2P ceremony (4+1 ops) | Adds 1 new operator. Verifies 5 operators in output. | +| 4 | remove-operators | P2P ceremony (4 of 5 ops) | Removes the added operator. Verifies 4 operators in output. | +| 5 | replace-operator | Offline (sequential) | Replaces operator 0. Verifies ENR changed in output. | +| 6 | update-anti-slashing-db | Standalone (no Docker) | Transforms EIP-3076 pubkeys between cluster-locks. | + +## How It Works + +1. Creates a real test cluster using `charon create cluster` (4 nodes, 1 validator) +2. Sets up 4 operator work directories with `.charon/` and `.env` +3. Interposes a **mock docker wrapper** (`test/bin/docker`) on `PATH` + - Real `docker run` is used for charon ceremony commands (P2P relay) + - `docker compose` commands are mocked (container lifecycle, ASDB export/import) +4. Runs each edit script through its happy path +5. Verifies outputs (validator count, operator count, key changes) at each step + +## WORK_DIR Environment Variable + +The test uses the `WORK_DIR` environment variable to redirect each script's working directory. When set, scripts use `WORK_DIR` as their repo root instead of computing it relative to the script location. This allows running multiple operator instances from isolated directories. 
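The override pattern described above shows up in each edit script as `REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." && pwd)}"`. A minimal sketch of the same fallback logic (the paths here are illustrative placeholders, not the repo's real layout):

```shell
#!/usr/bin/env bash
# Sketch of the WORK_DIR override: when WORK_DIR is set, it becomes the
# repo root; otherwise a computed default is used (a placeholder path here).
set -euo pipefail

computed_default="/tmp/repo-root"   # stands in for $(cd "$SCRIPT_DIR/../../.." && pwd)

unset WORK_DIR
root_default="${WORK_DIR:-$computed_default}"   # no override: computed default wins

WORK_DIR="/tmp/operator-2"                      # harness points at an isolated dir
root_override="${WORK_DIR:-$computed_default}"  # override wins

echo "$root_default"   # /tmp/repo-root
echo "$root_override"  # /tmp/operator-2
```

The e2e harness relies on exactly this: it exports a different `WORK_DIR` per operator directory before invoking each script, so four "operators" can run from one checkout.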
+ +## Expected Runtime + +Approximately 2-5 minutes depending on P2P relay connectivity. The P2P ceremonies (tests 1-4) require all operators to connect through the relay simultaneously. diff --git a/scripts/edit/test/bin/docker b/scripts/edit/test/bin/docker new file mode 100755 index 00000000..806c7e78 --- /dev/null +++ b/scripts/edit/test/bin/docker @@ -0,0 +1,288 @@ +#!/usr/bin/env bash + +# Mock docker wrapper for E2E testing of edit scripts. +# +# This script intercepts docker and docker compose commands: +# - Real docker is used for `docker info` and `docker run` (charon ceremonies) +# - Docker compose commands are mocked (container lifecycle, ASDB export/import) +# +# Environment variables: +# REAL_DOCKER - Path to real docker binary (required) +# MOCK_STATE_DIR - Directory for service state tracking (required for compose) +# MOCK_OPERATOR_INDEX - Operator index for ASDB pubkey generation (required for compose cp) + +set -euo pipefail + +# --- Helpers --- + +strip_tty_flags() { + local args=() + local skip_next=false + for arg in "$@"; do + if [ "$skip_next" = true ]; then + skip_next=false + continue + fi + case "$arg" in + -it|-ti) ;; # combined flags + -i|-t) ;; # individual flags + --interactive|--tty) ;; + *) args+=("$arg") ;; + esac + done + echo "${args[@]}" +} + +state_file() { + echo "${MOCK_STATE_DIR}/services.state" +} + +mark_service() { + local svc="$1" status="$2" + local sf + sf="$(state_file)" + mkdir -p "$(dirname "$sf")" + touch "$sf" + if grep -q "^${svc}=" "$sf" 2>/dev/null; then + # Use a temp file for portable in-place edit + local tmp="${sf}.tmp" + while IFS= read -r line; do + if [[ "$line" == "${svc}="* ]]; then + echo "${svc}=${status}" + else + echo "$line" + fi + done < "$sf" > "$tmp" + mv "$tmp" "$sf" + else + echo "${svc}=${status}" >> "$sf" + fi +} + +is_service_up() { + local svc="$1" + local sf + sf="$(state_file)" + [ -f "$sf" ] && grep -q "^${svc}=running" "$sf" 2>/dev/null +} + +generate_eip3076() { + local dest="$1" + 
local operator_index="${MOCK_OPERATOR_INDEX:-0}"
+
+ # Find cluster-lock.json in cwd
+ local lock_file="./charon/cluster-lock.json"
+ if [ ! -f "$lock_file" ]; then
+ lock_file="./.charon/cluster-lock.json"
+ fi
+
+ if [ ! -f "$lock_file" ]; then
+ # Fallback: generate minimal valid EIP-3076
+ cat > "$dest" <<'FALLBACK'
+{"metadata":{"interchange_format_version":"5","genesis_validators_root":"0x0000000000000000000000000000000000000000000000000000000000000000"},"data":[]}
+FALLBACK
+ return 0
+ fi
+
+ # Extract this operator's public shares from cluster-lock
+ local pubkeys
+ pubkeys=$(jq -r --argjson idx "$operator_index" \
+ '[.distributed_validators[].public_shares[$idx]] | map(select(. != null)) | .[]' \
+ "$lock_file" 2>/dev/null || echo "")
+
+ if [ -z "$pubkeys" ]; then
+ cat > "$dest" <<'FALLBACK'
+{"metadata":{"interchange_format_version":"5","genesis_validators_root":"0x0000000000000000000000000000000000000000000000000000000000000000"},"data":[]}
+FALLBACK
+ return 0
+ fi
+
+ # Build EIP-3076 JSON with one entry per validator
+ local data_entries=""
+ local first=true
+ while IFS= read -r pk; do
+ if [ "$first" = true ]; then
+ first=false
+ else
+ data_entries="${data_entries},"
+ fi
+ data_entries="${data_entries}{\"pubkey\":\"${pk}\",\"signed_blocks\":[],\"signed_attestations\":[]}"
+ done <<< "$pubkeys"
+
+ cat > "$dest" <<EOF
+{"metadata":{"interchange_format_version":"5","genesis_validators_root":"0x0000000000000000000000000000000000000000000000000000000000000000"},"data":[${data_entries}]}
+EOF
+}
+
+# --- Main dispatch ---
+
+if [ "${1:-}" = "compose" ]; then
+ shift
+ compose_cmd="${1:-}"
+ shift || true
+ case "$compose_cmd" in
+ ps)
+ # docker compose ps [service]
+ svc="${1:-}"
+ if [ -n "$svc" ] && is_service_up "$svc"; then
+ echo "NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS"
+ echo "${svc}-1 img cmd ${svc} 1m ago Up 1m "
+ fi
+ exit 0
+ ;;
+ stop)
+ # docker compose stop ...
+ for svc in "$@"; do
+ mark_service "$svc" "stopped"
+ done
+ exit 0
+ ;;
+ up)
+ # docker compose up -d ...
+ # Skip the -d flag
+ for arg in "$@"; do
+ case "$arg" in
+ -d) ;;
+ *) mark_service "$arg" "running" ;;
+ esac
+ done
+ exit 0
+ ;;
+ exec)
+ # docker compose exec -T (ASDB export inside container)
+ # Just succeed - the actual data is handled by cp
+ exit 0
+ ;;
+ cp)
+ # docker compose cp container:src-path local-dest
+ # Generate the EIP-3076 JSON directly at the local destination.
+ # $1 = container:path, $2 = local dest
+ generate_eip3076 "${2:-/dev/null}"
+ exit 0
+ ;;
+ run)
+ # docker compose run --rm ... (ASDB import)
+ # Just succeed
+ exit 0
+ ;;
+ *)
+ # Unknown compose command - pass through
+ exec "${REAL_DOCKER}" compose "$compose_cmd" "$@"
+ ;;
+ esac
+ exit 0
+fi
+
+# Non-compose commands
+case "${1:-}" in
+ info)
+ # Pass through to real docker
+ exec "${REAL_DOCKER}" "$@"
+ ;;
+ run)
+ # Pass through to real docker, but strip -i/-t flags for background execution
+ shift # consume "run"
+ cleaned_args=()
+ volume_mounts=()
+ charon_cmd=""
+ new_enr=""
+ operator_index=""
+ lock_file_mount=""
+ output_dir_mount=""
+
+ while [ $# -gt 0 ]; do
+ case "$1" in
+ -it|-ti) ;;
+ -i|-t|--interactive|--tty) ;;
+ -v)
+ volume_mounts+=("$1" "$2")
+ cleaned_args+=("$1" "$2")
+ # Track volume mounts for mock replace-operator
+ case "$2" in
+ */.charon:*) lock_file_mount="${2%%:*}" ;;
+ *output:*) output_dir_mount="${2%%:*}" ;;
+ esac
+ shift
+ ;;
+ -v*)
+ # -v/path:/path format
+ cleaned_args+=("$1")
+ local_mount="${1#-v}"
+ case "$local_mount" in
+ */.charon:*) lock_file_mount="${local_mount%%:*}" ;;
+ *output:*) output_dir_mount="${local_mount%%:*}" ;;
+ esac
+ ;;
+ edit|alpha)
+ # Detect charon edit/alpha edit commands
+ cleaned_args+=("$1")
+ charon_cmd="$1"
+ ;;
+ replace-operator)
+ charon_cmd="replace-operator"
+ cleaned_args+=("$1")
+ ;;
+ --new-enr=*)
+ new_enr="${1#--new-enr=}"
+ cleaned_args+=("$1")
+ ;;
+ --operator-index=*)
+
operator_index="${1#--operator-index=}" + cleaned_args+=("$1") + ;; + *) cleaned_args+=("$1") ;; + esac + shift + done + + # Check if this is a replace-operator command (mock it since it doesn't exist in v1.8.2) + if [ "$charon_cmd" = "replace-operator" ] && [ -n "$new_enr" ] && [ -n "$operator_index" ]; then + # Find the cluster-lock via volume mounts + local_lock="" + local_output="" + for ((idx=0; idx<${#cleaned_args[@]}; idx++)); do + case "${cleaned_args[$idx]}" in + -v) + next_idx=$((idx+1)) + mount="${cleaned_args[$next_idx]:-}" + host_path="${mount%%:*}" + container_path="${mount#*:}" + container_path="${container_path%%:*}" + case "$container_path" in + */\.charon|*/.charon) local_lock="$host_path/cluster-lock.json" ;; + */output) local_output="$host_path" ;; + esac + ;; + esac + done + + if [ -n "$local_lock" ] && [ -f "$local_lock" ] && [ -n "$local_output" ]; then + mkdir -p "$local_output" + # Replace operator ENR at the given index using jq + jq --argjson idx "$operator_index" --arg enr "$new_enr" \ + '.cluster_definition.operators[$idx].enr = $enr' \ + "$local_lock" > "$local_output/cluster-lock.json" + exit 0 + else + echo "Mock replace-operator: cannot find cluster-lock or output dir" >&2 + exit 1 + fi + fi + + exec "${REAL_DOCKER}" run "${cleaned_args[@]}" + ;; + *) + # Pass through everything else + exec "${REAL_DOCKER}" "$@" + ;; +esac diff --git a/scripts/edit/test/e2e_test.sh b/scripts/edit/test/e2e_test.sh new file mode 100755 index 00000000..e1c67d5b --- /dev/null +++ b/scripts/edit/test/e2e_test.sh @@ -0,0 +1,772 @@ +#!/usr/bin/env bash + +# E2E Integration Test for Cluster Edit Scripts +# +# This test creates a real cluster using charon and runs each edit script +# through its happy path. Real Docker is used for charon ceremony commands; +# docker compose (container lifecycle, ASDB) is mocked. 
+# +# Prerequisites: +# - Docker running +# - jq installed +# - Internet access (charon uses Obol relay for P2P ceremonies) +# +# Usage: +# ./scripts/edit/test/e2e_test.sh + +set -euo pipefail + +# --- Configuration --- + +TEST_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$TEST_DIR/../../.." && pwd)" +CHARON_VERSION="${CHARON_VERSION:-v1.8.2}" +CHARON_IMAGE="obolnetwork/charon:${CHARON_VERSION}" +NUM_OPERATORS=4 +ZERO_ADDR="0x0000000000000000000000000000000000000001" + +# Colors +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +# Counters +TESTS_RUN=0 +TESTS_PASSED=0 +TESTS_FAILED=0 + +# --- Helpers --- + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_test() { echo -e "${BLUE}[TEST]${NC} $1"; } + +assert_eq() { + local desc="$1" expected="$2" actual="$3" + if [ "$expected" = "$actual" ]; then + log_info " PASS: $desc (got $actual)" + return 0 + else + log_error " FAIL: $desc - expected '$expected', got '$actual'" + return 1 + fi +} + +assert_ne() { + local desc="$1" not_expected="$2" actual="$3" + if [ "$not_expected" != "$actual" ]; then + log_info " PASS: $desc (values differ)" + return 0 + else + log_error " FAIL: $desc - expected different from '$not_expected', but got same" + return 1 + fi +} + +run_test() { + local name="$1" + shift + TESTS_RUN=$((TESTS_RUN + 1)) + echo "" + echo "================================================================" + log_test "TEST $TESTS_RUN: $name" + echo "================================================================" + echo "" + if "$@"; then + TESTS_PASSED=$((TESTS_PASSED + 1)) + log_info "TEST $TESTS_RUN PASSED: $name" + else + TESTS_FAILED=$((TESTS_FAILED + 1)) + log_error "TEST $TESTS_RUN FAILED: $name" + fi +} + +# --- Setup --- + +TMP_DIR="" +cleanup() { + if [ -n "$TMP_DIR" ] && [ -d "$TMP_DIR" ]; then + rm -rf "$TMP_DIR" + log_info 
"Cleaned up $TMP_DIR"
+ fi
+}
+trap cleanup EXIT
+
+check_prerequisites() {
+ log_info "Checking prerequisites..."
+
+ if ! command -v jq &>/dev/null; then
+ log_error "jq is required but not installed"
+ exit 1
+ fi
+
+ if ! docker info &>/dev/null; then
+ log_error "Docker is not running"
+ exit 1
+ fi
+
+ log_info "Pulling charon image: $CHARON_IMAGE"
+ docker pull "$CHARON_IMAGE" >/dev/null 2>&1 || true
+
+ log_info "Prerequisites OK"
+}
+
+setup_tmp_dir() {
+ TMP_DIR=$(mktemp -d)
+ log_info "Working directory: $TMP_DIR"
+}
+
+create_cluster() {
+ log_info "Creating test cluster with $NUM_OPERATORS nodes, 1 validator..."
+
+ local cluster_dir="$TMP_DIR/cluster"
+ mkdir -p "$cluster_dir"
+
+ docker run --rm \
+ -v "$cluster_dir:/opt/charon/.charon" \
+ "$CHARON_IMAGE" \
+ create cluster \
+ --nodes="$NUM_OPERATORS" \
+ --num-validators=1 \
+ --network=hoodi \
+ --withdrawal-addresses="$ZERO_ADDR" \
+ --fee-recipient-addresses="$ZERO_ADDR" \
+ --cluster-dir=/opt/charon/.charon
+
+ # Verify cluster was created
+ if [ ! -d "$cluster_dir/node0" ]; then
+ log_error "Cluster creation failed - no node0 directory"
+ exit 1
+ fi
+
+ log_info "Cluster created successfully"
+
+ # Set up operator work directories
+ for i in $(seq 0 $((NUM_OPERATORS - 1))); do
+ local op_dir="$TMP_DIR/operator${i}"
+ mkdir -p "$op_dir"
+
+ # Copy node contents to operator's .charon directory
+ cp -r "$cluster_dir/node${i}" "$op_dir/.charon"
+
+ # Create .env file
+ cat > "$op_dir/.env" <<EOF
+NETWORK=hoodi
+VC=lodestar
+EOF
+
+ # Initialize mock service state
+ echo "charon=running" > "$op_dir/services.state"
+ echo "vc-lodestar=running" >> "$op_dir/services.state"
+
+ log_info " Operator $i set up at $op_dir"
+ done
+}
+
+setup_mock_docker() {
+ export REAL_DOCKER
+ REAL_DOCKER="$(which docker)"
+ export PATH="$TEST_DIR/bin:$PATH"
+
+ log_info "Mock docker enabled (real docker at $REAL_DOCKER)"
+}
+
+# --- Test Functions ---
+
+test_add_validators() {
+ log_info "Running add-validators ceremony (4 operators in parallel)..."
+ + local pids=() + local logs_dir="$TMP_DIR/logs/add-validators" + mkdir -p "$logs_dir" + + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + ( + WORK_DIR="$op_dir" \ + MOCK_OPERATOR_INDEX="$i" \ + MOCK_STATE_DIR="$op_dir" \ + "$REPO_ROOT/scripts/edit/add-validators/add-validators.sh" \ + --num-validators 1 \ + --withdrawal-addresses "$ZERO_ADDR" \ + --fee-recipient-addresses "$ZERO_ADDR" + ) > "$logs_dir/operator${i}.log" 2>&1 & + pids+=($!) + done + + # Wait for all operators + local all_ok=true + for i in "${!pids[@]}"; do + if ! wait "${pids[$i]}"; then + log_error "Operator $i failed. Log:" + cat "$logs_dir/operator${i}.log" || true + all_ok=false + fi + done + + if [ "$all_ok" = false ]; then + return 1 + fi + + # Verify: each operator should have a cluster-lock with 2 validators + local ok=true + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + local lock="$op_dir/.charon/cluster-lock.json" + + if [ ! -f "$lock" ]; then + log_error "Operator $i: cluster-lock.json not found" + ok=false + continue + fi + + local num_vals + num_vals=$(jq '.distributed_validators | length' "$lock") + assert_eq "Operator $i has 2 validators" "2" "$num_vals" || ok=false + + local num_ops + num_ops=$(jq '.cluster_definition.operators | length' "$lock") + assert_eq "Operator $i has $NUM_OPERATORS operators" "$NUM_OPERATORS" "$num_ops" || ok=false + done + + [ "$ok" = true ] +} + +test_recreate_private_keys() { + log_info "Running recreate-private-keys ceremony (4 operators in parallel)..." 
+ + # Save current public_shares for comparison + local old_shares + old_shares=$(jq -r '.distributed_validators[0].public_shares[0]' \ + "$TMP_DIR/operator0/.charon/cluster-lock.json") + + local pids=() + local logs_dir="$TMP_DIR/logs/recreate-private-keys" + mkdir -p "$logs_dir" + + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + # Reset service state to running for this test + echo "charon=running" > "$op_dir/services.state" + echo "vc-lodestar=running" >> "$op_dir/services.state" + ( + WORK_DIR="$op_dir" \ + MOCK_OPERATOR_INDEX="$i" \ + MOCK_STATE_DIR="$op_dir" \ + "$REPO_ROOT/scripts/edit/recreate-private-keys/recreate-private-keys.sh" + ) > "$logs_dir/operator${i}.log" 2>&1 & + pids+=($!) + done + + local all_ok=true + for i in "${!pids[@]}"; do + if ! wait "${pids[$i]}"; then + log_error "Operator $i failed. Log:" + cat "$logs_dir/operator${i}.log" || true + all_ok=false + fi + done + + if [ "$all_ok" = false ]; then + return 1 + fi + + # Verify: still 2 validators, 4 operators, but different public_shares + local ok=true + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + local lock="$op_dir/.charon/cluster-lock.json" + + if [ ! 
-f "$lock" ]; then + log_error "Operator $i: cluster-lock.json not found" + ok=false + continue + fi + + local num_vals + num_vals=$(jq '.distributed_validators | length' "$lock") + assert_eq "Operator $i has 2 validators" "2" "$num_vals" || ok=false + + local num_ops + num_ops=$(jq '.cluster_definition.operators | length' "$lock") + assert_eq "Operator $i has $NUM_OPERATORS operators" "$NUM_OPERATORS" "$num_ops" || ok=false + done + + # Check that public shares changed + local new_shares + new_shares=$(jq -r '.distributed_validators[0].public_shares[0]' \ + "$TMP_DIR/operator0/.charon/cluster-lock.json") + assert_ne "Public shares changed after recreate" "$old_shares" "$new_shares" || ok=false + + [ "$ok" = true ] +} + +test_add_operators() { + log_info "Running add-operators ceremony (4 existing + 1 new)..." + + # Create operator4 work directory + local new_op_dir="$TMP_DIR/operator4" + mkdir -p "$new_op_dir/.charon" + + # Generate ENR for new operator + log_info " Generating ENR for new operator..." + "$REAL_DOCKER" run --rm \ + -v "$new_op_dir/.charon:/opt/charon/.charon" \ + "$CHARON_IMAGE" \ + create enr + + # Extract ENR + local new_enr + new_enr=$("$REAL_DOCKER" run --rm \ + -v "$new_op_dir/.charon:/opt/charon/.charon" \ + "$CHARON_IMAGE" \ + enr 2>/dev/null) + + if [ -z "$new_enr" ]; then + log_error "Failed to get ENR for new operator" + return 1 + fi + log_info " New operator ENR: ${new_enr:0:50}..." 
+
+ # Copy cluster-lock from operator0 to operator4
+ cp "$TMP_DIR/operator0/.charon/cluster-lock.json" "$new_op_dir/.charon/cluster-lock.json"
+
+ # Create .env for new operator
+ cat > "$new_op_dir/.env" <<EOF
+NETWORK=hoodi
+VC=lodestar
+EOF
+
+ # Initialize service state for new operator
+ echo "charon=running" > "$new_op_dir/services.state"
+ echo "vc-lodestar=running" >> "$new_op_dir/services.state"
+
+ # Reset service states for existing operators
+ for i in $(seq 0 $((NUM_OPERATORS - 1))); do
+ local op_dir="$TMP_DIR/operator${i}"
+ echo "charon=running" > "$op_dir/services.state"
+ echo "vc-lodestar=running" >> "$op_dir/services.state"
+ done
+
+ local pids=()
+ local logs_dir="$TMP_DIR/logs/add-operators"
+ mkdir -p "$logs_dir"
+
+ # Run existing operators
+ for i in $(seq 0 $((NUM_OPERATORS - 1))); do
+ local op_dir="$TMP_DIR/operator${i}"
+ (
+ WORK_DIR="$op_dir" \
+ MOCK_OPERATOR_INDEX="$i" \
+ MOCK_STATE_DIR="$op_dir" \
+ "$REPO_ROOT/scripts/edit/add-operators/existing-operator.sh" \
+ --new-operator-enrs "$new_enr"
+ ) > "$logs_dir/operator${i}.log" 2>&1 &
+ pids+=($!)
+ done
+
+ # Run new operator
+ (
+ WORK_DIR="$new_op_dir" \
+ MOCK_OPERATOR_INDEX="$NUM_OPERATORS" \
+ MOCK_STATE_DIR="$new_op_dir" \
+ "$REPO_ROOT/scripts/edit/add-operators/new-operator.sh" \
+ --new-operator-enrs "$new_enr" \
+ --cluster-lock ".charon/cluster-lock.json"
+ ) > "$logs_dir/operator4.log" 2>&1 &
+ pids+=($!)
+
+ # Wait for all
+ local all_ok=true
+ for i in "${!pids[@]}"; do
+ if ! wait "${pids[$i]}"; then
+ log_error "Operator $i failed. Log:"
+ cat "$logs_dir/operator${i}.log" || true
+ all_ok=false
+ fi
+ done
+
+ if [ "$all_ok" = false ]; then
+ return 1
+ fi
+
+ # Verify: all operators should now have 5 operators in cluster-lock
+ local ok=true
+ for i in $(seq 0 "$NUM_OPERATORS"); do
+ local op_dir="$TMP_DIR/operator${i}"
+ local lock="$op_dir/.charon/cluster-lock.json"
+
+ if [ !
-f "$lock" ]; then + log_error "Operator $i: cluster-lock.json not found" + ok=false + continue + fi + + local num_ops + num_ops=$(jq '.cluster_definition.operators | length' "$lock") + assert_eq "Operator $i has 5 operators" "5" "$num_ops" || ok=false + done + + [ "$ok" = true ] +} + +test_remove_operators() { + log_info "Running remove-operators ceremony (removing operator4, 4 remaining)..." + + # Get operator4's ENR from cluster-lock + local op4_enr + op4_enr=$(jq -r '.cluster_definition.operators[4].enr' "$TMP_DIR/operator0/.charon/cluster-lock.json") + + if [ -z "$op4_enr" ] || [ "$op4_enr" = "null" ]; then + log_error "Failed to get operator4 ENR from cluster-lock" + return 1 + fi + log_info " Operator4 ENR to remove: ${op4_enr:0:50}..." + + # Reset service states for remaining operators + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + echo "charon=running" > "$op_dir/services.state" + echo "vc-lodestar=running" >> "$op_dir/services.state" + done + + local pids=() + local logs_dir="$TMP_DIR/logs/remove-operators" + mkdir -p "$logs_dir" + + # Run remaining operators (0-3) — operator4 does NOT participate + # (within fault tolerance: 5 ops, threshold ~4, f=1) + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + ( + WORK_DIR="$op_dir" \ + MOCK_OPERATOR_INDEX="$i" \ + MOCK_STATE_DIR="$op_dir" \ + "$REPO_ROOT/scripts/edit/remove-operators/remaining-operator.sh" \ + --operator-enrs-to-remove "$op4_enr" + ) > "$logs_dir/operator${i}.log" 2>&1 & + pids+=($!) + done + + local all_ok=true + for i in "${!pids[@]}"; do + if ! wait "${pids[$i]}"; then + log_error "Operator $i failed. 
Log:"
+ cat "$logs_dir/operator${i}.log" || true
+ all_ok=false
+ fi
+ done
+
+ if [ "$all_ok" = false ]; then
+ return 1
+ fi
+
+ # Verify: 4 operators in new cluster-lock
+ local ok=true
+ for i in $(seq 0 $((NUM_OPERATORS - 1))); do
+ local op_dir="$TMP_DIR/operator${i}"
+ local lock="$op_dir/.charon/cluster-lock.json"
+
+ if [ ! -f "$lock" ]; then
+ log_error "Operator $i: cluster-lock.json not found"
+ ok=false
+ continue
+ fi
+
+ local num_ops
+ num_ops=$(jq '.cluster_definition.operators | length' "$lock")
+ assert_eq "Operator $i has 4 operators" "4" "$num_ops" || ok=false
+ done
+
+ [ "$ok" = true ]
+}
+
+test_replace_operator() {
+ log_info "Running replace-operator workflow (replacing operator0)..."
+
+ # Create new operator work directory
+ local new_op_dir="$TMP_DIR/new-operator"
+ mkdir -p "$new_op_dir/.charon"
+
+ # Generate ENR for replacement operator
+ log_info " Generating ENR for replacement operator..."
+ "$REAL_DOCKER" run --rm \
+ -v "$new_op_dir/.charon:/opt/charon/.charon" \
+ "$CHARON_IMAGE" \
+ create enr
+
+ local new_enr
+ new_enr=$("$REAL_DOCKER" run --rm \
+ -v "$new_op_dir/.charon:/opt/charon/.charon" \
+ "$CHARON_IMAGE" \
+ enr 2>/dev/null)
+
+ if [ -z "$new_enr" ]; then
+ log_error "Failed to get ENR for replacement operator"
+ return 1
+ fi
+ log_info " Replacement operator ENR: ${new_enr:0:50}..."
+
+ # Create .env for new operator
+ cat > "$new_op_dir/.env" <<EOF
+NETWORK=hoodi
+VC=lodestar
+EOF
+
+ # Reset service states for remaining operators
+ for i in $(seq 1 $((NUM_OPERATORS - 1))); do
+ local op_dir="$TMP_DIR/operator${i}"
+ echo "charon=running" > "$op_dir/services.state"
+ echo "vc-lodestar=running" >> "$op_dir/services.state"
+ done
+
+ # Replace-operator is OFFLINE (no P2P) — each remaining operator runs independently
+ local logs_dir="$TMP_DIR/logs/replace-operator"
+ mkdir -p "$logs_dir"
+
+ local ok=true
+ for i in $(seq 1 $((NUM_OPERATORS - 1))); do
+ local op_dir="$TMP_DIR/operator${i}"
+ log_info " Running remaining-operator.sh for operator $i..."
+ if !
( + WORK_DIR="$op_dir" \ + MOCK_OPERATOR_INDEX="$i" \ + MOCK_STATE_DIR="$op_dir" \ + "$REPO_ROOT/scripts/edit/replace-operator/remaining-operator.sh" \ + --new-enr "$new_enr" \ + --operator-index 0 + ) > "$logs_dir/operator${i}.log" 2>&1; then + log_error "Operator $i failed. Log:" + cat "$logs_dir/operator${i}.log" || true + ok=false + fi + done + + if [ "$ok" = false ]; then + return 1 + fi + + # Copy output cluster-lock from operator1 to new operator's work dir + local src_lock="$TMP_DIR/operator1/.charon/cluster-lock.json" + if [ ! -f "$src_lock" ]; then + # Try output dir + src_lock="$TMP_DIR/operator1/output/cluster-lock.json" + fi + if [ ! -f "$src_lock" ]; then + log_error "No output cluster-lock found for new operator" + return 1 + fi + + # New operator receives cluster-lock and joins + echo "charon=stopped" > "$new_op_dir/services.state" + echo "vc-lodestar=stopped" >> "$new_op_dir/services.state" + + log_info " Running new-operator.sh..." + if ! ( + WORK_DIR="$new_op_dir" \ + MOCK_OPERATOR_INDEX="0" \ + MOCK_STATE_DIR="$new_op_dir" \ + "$REPO_ROOT/scripts/edit/replace-operator/new-operator.sh" \ + --cluster-lock "$src_lock" + ) > "$logs_dir/new-operator.log" 2>&1; then + log_error "New operator failed. Log:" + cat "$logs_dir/new-operator.log" || true + return 1 + fi + + # Verify: 4 operators, operator 0's ENR changed + local lock="$new_op_dir/.charon/cluster-lock.json" + if [ ! 
-f "$lock" ]; then + log_error "New operator: cluster-lock.json not found" + return 1 + fi + + local num_ops + num_ops=$(jq '.cluster_definition.operators | length' "$lock") + assert_eq "New operator has 4 operators" "4" "$num_ops" || ok=false + + # Check that operator 0's ENR changed to the new ENR + local op0_enr + op0_enr=$(jq -r '.cluster_definition.operators[0].enr' "$lock") + # The ENR should contain part of our new ENR (ENRs are reformatted by charon) + if [ "$op0_enr" != "null" ] && [ -n "$op0_enr" ]; then + log_info " PASS: Operator 0 ENR is present in new cluster-lock" + else + log_error " FAIL: Operator 0 ENR missing from cluster-lock" + ok=false + fi + + [ "$ok" = true ] +} + +test_update_asdb() { + log_info "Running update-anti-slashing-db standalone test..." + + # Use cluster-locks from earlier tests as source/target + # Find two different cluster-locks (before/after recreate-private-keys) + # We'll use operator0's backup and current cluster-lock + + local source_lock="" + local target_lock="" + + # Find backup from recreate-private-keys (or add-validators) + for backup in "$TMP_DIR"/operator0/backups/.charon-backup.*/cluster-lock.json; do + if [ -f "$backup" ]; then + source_lock="$backup" + break + fi + done + + target_lock="$TMP_DIR/operator0/.charon/cluster-lock.json" + + if [ -z "$source_lock" ] || [ ! -f "$source_lock" ]; then + log_warn "No backup cluster-lock found, creating synthetic test data..." 
+
+ # Create synthetic source and target
+ local asdb_test_dir="$TMP_DIR/asdb-test"
+ mkdir -p "$asdb_test_dir"
+
+ source_lock="$asdb_test_dir/source-lock.json"
+ target_lock="$asdb_test_dir/target-lock.json"
+
+ # Use operator0's original cluster from the initial creation
+ local orig_lock="$TMP_DIR/cluster/node0/cluster-lock.json"
+ if [ -f "$orig_lock" ]; then
+ cp "$orig_lock" "$source_lock"
+ cp "$TMP_DIR/operator0/.charon/cluster-lock.json" "$target_lock"
+ else
+ log_error "Cannot find any cluster-lock files for ASDB test"
+ return 1
+ fi
+ fi
+
+ if [ ! -f "$target_lock" ]; then
+ log_error "Target cluster-lock not found: $target_lock"
+ return 1
+ fi
+
+ log_info " Source lock: $source_lock"
+ log_info " Target lock: $target_lock"
+
+ # Generate EIP-3076 JSON with pubkeys from source lock
+ local asdb_dir="$TMP_DIR/asdb-test"
+ mkdir -p "$asdb_dir"
+ local eip3076_file="$asdb_dir/slashing-protection.json"
+
+ # Extract operator 0's pubkeys from source lock
+ local pubkeys
+ pubkeys=$(jq -r '.distributed_validators[].public_shares[0]' "$source_lock")
+
+ local data_entries=""
+ local first=true
+ while IFS= read -r pk; do
+ [ -z "$pk" ] && continue
+ if [ "$first" = true ]; then
+ first=false
+ else
+ data_entries="${data_entries},"
+ fi
+ data_entries="${data_entries}{\"pubkey\":\"${pk}\",\"signed_blocks\":[],\"signed_attestations\":[]}"
+ done <<< "$pubkeys"
+
+ cat > "$eip3076_file" <<EOF
+{"metadata":{"interchange_format_version":"5","genesis_validators_root":"0x0000000000000000000000000000000000000000000000000000000000000000"},"data":[${data_entries}]}
+EOF
+
+ # Run the standalone update script (flag names follow the export/import scripts' convention)
+ local updated_file="$asdb_dir/updated-slashing-protection.json"
+ "$REPO_ROOT/scripts/edit/vc/update-anti-slashing-db.sh" \
+ --source-lock "$source_lock" \
+ --target-lock "$target_lock" \
+ --input-file "$eip3076_file" \
+ --output-file "$updated_file"
+
+ # Verify output is valid JSON
+ if ! jq empty "$updated_file" 2>/dev/null; then
+ log_error "Output is not valid JSON"
+ return 1
+ fi
+
+ local ok=true
+ local new_pubkeys
+ new_pubkeys=$(jq -r '.data[].pubkey' "$updated_file" | sort)
+
+ # Check that pubkeys now match target lock's operator 0 shares
+ # Only compare validators that existed in the source lock
+ local source_val_count
+ source_val_count=$(jq '.distributed_validators | length' "$source_lock")
+ local expected_pubkeys
+ expected_pubkeys=$(jq -r --argjson n "$source_val_count" \
+ '[.distributed_validators[:$n][].public_shares[0]] | .[]' "$target_lock" | sort)
+
+ if [ "$new_pubkeys" = "$expected_pubkeys" ]; then
+ log_info " PASS: Pubkeys correctly transformed to target
cluster-lock values" + else + log_error " FAIL: Pubkeys don't match target cluster-lock" + log_error " Expected: $expected_pubkeys" + log_error " Got: $new_pubkeys" + ok=false + fi + + [ "$ok" = true ] +} + +# --- Main --- + +main() { + echo "" + echo "╔════════════════════════════════════════════════════════════════╗" + echo "║ E2E Integration Test for Cluster Edit Scripts ║" + echo "╚════════════════════════════════════════════════════════════════╝" + echo "" + + check_prerequisites + setup_tmp_dir + create_cluster + setup_mock_docker + + # Run tests sequentially — each builds on the previous state + run_test "add-validators" test_add_validators + run_test "recreate-private-keys" test_recreate_private_keys + run_test "add-operators" test_add_operators + run_test "remove-operators" test_remove_operators + run_test "replace-operator" test_replace_operator + run_test "update-anti-slashing-db" test_update_asdb + + # Summary + echo "" + echo "╔════════════════════════════════════════════════════════════════╗" + echo "║ Test Summary ║" + echo "╚════════════════════════════════════════════════════════════════╝" + echo "" + echo " Tests run: $TESTS_RUN" + echo -e " Tests passed: ${GREEN}$TESTS_PASSED${NC}" + if [ "$TESTS_FAILED" -gt 0 ]; then + echo -e " Tests failed: ${RED}$TESTS_FAILED${NC}" + else + echo " Tests failed: $TESTS_FAILED" + fi + echo "" + + if [ "$TESTS_FAILED" -gt 0 ]; then + log_error "SOME TESTS FAILED" + exit 1 + else + log_info "ALL TESTS PASSED" + exit 0 + fi +} + +main "$@" From 7910a2f6ccbad669ceecf1d842d441d770b5c6f6 Mon Sep 17 00:00:00 2001 From: Andrei Smirnov Date: Tue, 17 Feb 2026 14:26:33 +0300 Subject: [PATCH 12/12] Updated READMEs --- scripts/README.md | 10 +++++++--- scripts/edit/add-operators/README.md | 5 +++-- scripts/edit/add-validators/README.md | 8 ++++++-- scripts/edit/recreate-private-keys/README.md | 10 ++++++++-- scripts/edit/remove-operators/README.md | 5 +++-- scripts/edit/replace-operator/README.md | 6 +++++- 
scripts/edit/test/README.md | 1 + scripts/edit/vc/README.md | 8 +++++--- 8 files changed, 38 insertions(+), 15 deletions(-) diff --git a/scripts/README.md b/scripts/README.md index 251e8c63..a195af60 100644 --- a/scripts/README.md +++ b/scripts/README.md @@ -4,7 +4,6 @@ Automation scripts for Charon distributed validator cluster editing operations. ## Documentation -- [Obol Replace-Operator Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/replace-operator) - [Charon Edit Commands](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/) - [EIP-3076 Slashing Protection Interchange Format](https://eips.ethereum.org/EIPS/eip-3076) @@ -12,8 +11,13 @@ Automation scripts for Charon distributed validator cluster editing operations. | Directory | Description | |-----------|-------------| -| [edit/replace-operator/](edit/replace-operator/README.md) | Replace an operator in a cluster | -| [edit/vc/](edit/vc/) | Export/import anti-slashing database for various VCs | +| [edit/add-validators/](edit/add-validators/README.md) | Add new validators to an existing cluster | +| [edit/recreate-private-keys/](edit/recreate-private-keys/README.md) | Refresh private key shares while keeping the same validator public keys | +| [edit/add-operators/](edit/add-operators/README.md) | Expand the cluster by adding new operators | +| [edit/remove-operators/](edit/remove-operators/README.md) | Remove operators from the cluster | +| [edit/replace-operator/](edit/replace-operator/README.md) | Replace a single operator in a cluster | +| [edit/vc/](edit/vc/README.md) | Export/import/update anti-slashing databases (EIP-3076) | +| [edit/test/](edit/test/README.md) | E2E integration tests for all edit scripts | ## Prerequisites diff --git a/scripts/edit/add-operators/README.md b/scripts/edit/add-operators/README.md index a5f04a3f..7f2b0793 100644 --- a/scripts/edit/add-operators/README.md +++ b/scripts/edit/add-operators/README.md @@ -12,7 +12,7 @@ These scripts 
 help operators expand an existing distributed validator cluster by
 
 **Important**: This is a coordinated ceremony. All operators (existing AND new) must run their respective scripts simultaneously to complete the process.
 
-> Warning: This is an alpha feature in Charon and is not yet recommended for production use.
+> **Warning**: This is an alpha feature in Charon and is not yet recommended for production use.
 
 There are two scripts for the two roles involved:
 
@@ -92,7 +92,8 @@ Two-step workflow for new operators joining the cluster.
 ## Related
 
 - [Add-Validators Workflow](../add-validators/README.md)
-- [Replace-Operator Workflow](../replace-operator/README.md)
+- [Remove-Operators Workflow](../remove-operators/README.md)
 - [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md)
+- [Replace-Operator Workflow](../replace-operator/README.md)
 - [Anti-Slashing DB Scripts](../vc/README.md)
 - [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-operators)
diff --git a/scripts/edit/add-validators/README.md b/scripts/edit/add-validators/README.md
index 05eb45ed..2b9cc2db 100644
--- a/scripts/edit/add-validators/README.md
+++ b/scripts/edit/add-validators/README.md
@@ -11,13 +11,14 @@ This script helps operators add new validators to an existing distributed valida
 
 **Important**: This is a coordinated ceremony. All operators must run this script simultaneously to complete the process.
 
-> ⚠️ This is an alpha feature in Charon and is not yet recommended for production use.
+> **Warning**: This is an alpha feature in Charon and is not yet recommended for production use.
 
 ## Prerequisites
 
 - `.env` file with `NETWORK` and `VC` variables set
 - `.charon` directory with `cluster-lock.json`
 - Docker running
+- `jq` installed
 - **Charon and VC must be RUNNING** during the ceremony
 - **All operators must participate in the ceremony**
 
@@ -93,11 +94,14 @@ If your validator keys are stored remotely (e.g., in a KeyManager) and Charon ca
 ## Current Limitations
 
 - The new cluster configuration will not be reflected on the Obol Launchpad
-- The new cluster will have a new cluster hash (different observability identifier)
+- The cluster will have a new cluster hash (different observability identifier)
 - All operators must participate; no partial participation option
 
 ## Related
 
+- [Add-Operators Workflow](../add-operators/README.md)
+- [Remove-Operators Workflow](../remove-operators/README.md)
 - [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md)
 - [Replace-Operator Workflow](../replace-operator/README.md)
+- [Anti-Slashing DB Scripts](../vc/README.md)
 - [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-validators)
diff --git a/scripts/edit/recreate-private-keys/README.md b/scripts/edit/recreate-private-keys/README.md
index 4e449440..fe6e61c4 100644
--- a/scripts/edit/recreate-private-keys/README.md
+++ b/scripts/edit/recreate-private-keys/README.md
@@ -12,11 +12,14 @@ This script helps operators recreate validator private key shares while keeping
 
 **Important**: This operation maintains the same validator public keys, so validators remain registered on the beacon chain without any changes. Only the underlying private key shares held by operators are refreshed.
 
+> **Warning**: This is an alpha feature in Charon and is not yet recommended for production use.
+
 ## Prerequisites
 
 - `.env` file with `NETWORK` and `VC` variables set
 - `.charon` directory with `cluster-lock.json` and `validator_keys`
 - Docker running
+- `jq` installed
 - **All operators must participate in the ceremony**
 
 ## Usage
@@ -46,13 +49,16 @@ The script will:
 
 ## Current Limitations
 
-- The new cluster configuration will not be reflected on the Launchpad
-- The new cluster will have a new cluster hash (different observability identifier)
+- The new cluster configuration will not be reflected on the Obol Launchpad
+- The cluster will have a new cluster hash (different observability identifier)
 - All operators must participate; no partial participation option
 - All operators must have their current validator private key shares
 
 ## Related
 
+- [Add-Validators Workflow](../add-validators/README.md)
+- [Add-Operators Workflow](../add-operators/README.md)
+- [Remove-Operators Workflow](../remove-operators/README.md)
 - [Replace-Operator Workflow](../replace-operator/README.md)
 - [Anti-Slashing DB Scripts](../vc/README.md)
 - [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/recreate-private-keys)
diff --git a/scripts/edit/remove-operators/README.md b/scripts/edit/remove-operators/README.md
index 0003fe66..cd43ea2e 100644
--- a/scripts/edit/remove-operators/README.md
+++ b/scripts/edit/remove-operators/README.md
@@ -12,7 +12,7 @@ These scripts help operators remove specific operators from an existing distribu
 
 **Important**: This is a coordinated ceremony. All participating operators must run their respective scripts simultaneously to complete the process.
 
-> Warning: This is an alpha feature in Charon and is not yet recommended for production use.
+> **Warning**: This is an alpha feature in Charon and is not yet recommended for production use.
 
 There are two scripts for the two roles involved:
 
@@ -92,8 +92,9 @@ If the removal is within fault tolerance, removed operators do **not** need to r
 
 ## Related
 
+- [Add-Validators Workflow](../add-validators/README.md)
 - [Add-Operators Workflow](../add-operators/README.md)
-- [Replace-Operator Workflow](../replace-operator/README.md)
 - [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md)
+- [Replace-Operator Workflow](../replace-operator/README.md)
 - [Anti-Slashing DB Scripts](../vc/README.md)
 - [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/remove-operators)
diff --git a/scripts/edit/replace-operator/README.md b/scripts/edit/replace-operator/README.md
index c87a5573..220c56d0 100644
--- a/scripts/edit/replace-operator/README.md
+++ b/scripts/edit/replace-operator/README.md
@@ -10,6 +10,8 @@ These scripts help operators replace a single operator in an existing distribute
 - **Infrastructure migration**: Moving an operator to new infrastructure
 - **Recovery**: Replacing an operator whose keys may have been compromised
 
+> **Warning**: This is an alpha feature in Charon and is not yet recommended for production use.
+
 There are two scripts for the two roles involved:
 
 - **`remaining-operator.sh`** - For operators staying in the cluster (runs the ceremony)
@@ -80,12 +82,14 @@ Two-step workflow for the new operator joining the cluster.
 ## Current Limitations
 
 - The new cluster configuration will not be reflected on the Obol Launchpad
-- The new cluster will have a new cluster hash (different observability identifier)
+- The cluster will have a new cluster hash (different observability identifier)
 - Only one operator can be replaced at a time
 
 ## Related
 
 - [Add-Validators Workflow](../add-validators/README.md)
+- [Add-Operators Workflow](../add-operators/README.md)
+- [Remove-Operators Workflow](../remove-operators/README.md)
 - [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md)
 - [Anti-Slashing DB Scripts](../vc/README.md)
 - [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/replace-operator)
diff --git a/scripts/edit/test/README.md b/scripts/edit/test/README.md
index 087201f3..08abf7a4 100644
--- a/scripts/edit/test/README.md
+++ b/scripts/edit/test/README.md
@@ -38,6 +38,7 @@ CHARON_VERSION=v1.8.2 ./scripts/edit/test/e2e_test.sh
 3. Interposes a **mock docker wrapper** (`test/bin/docker`) on `PATH`
    - Real `docker run` is used for charon ceremony commands (P2P relay)
    - `docker compose` commands are mocked (container lifecycle, ASDB export/import)
+   - `edit replace-operator` is mocked locally (the command does not yet exist in charon v1.8.2)
 4. Runs each edit script through its happy path
 5. Verifies outputs (validator count, operator count, key changes) at each step
 
diff --git a/scripts/edit/vc/README.md b/scripts/edit/vc/README.md
index be580999..0e4b05c3 100644
--- a/scripts/edit/vc/README.md
+++ b/scripts/edit/vc/README.md
@@ -4,7 +4,7 @@ Scripts to export, import, and update validator anti-slashing databases (ASDB) i
 
 ## Overview
 
-When performing cluster edit operations (replace-operator, recreate-private-keys), the anti-slashing database must be exported, updated with new pubkeys, and re-imported to prevent slashing violations. These scripts automate that process across all supported validator clients.
+When performing cluster edit operations (replace-operator, recreate-private-keys, add-operators, remove-operators), the anti-slashing database must be exported, updated with new pubkeys, and re-imported to prevent slashing violations. These scripts automate that process across all supported validator clients.
 
 ## Prerequisites
 
@@ -56,6 +56,8 @@ See [test/README.md](test/README.md) for integration tests.
 
 ## Related
 
-- [Replace-Operator Workflow](../replace-operator/README.md)
-- [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md)
 - [Add-Validators Workflow](../add-validators/README.md)
+- [Add-Operators Workflow](../add-operators/README.md)
+- [Remove-Operators Workflow](../remove-operators/README.md)
+- [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md)
+- [Replace-Operator Workflow](../replace-operator/README.md)