diff --git a/.gitignore b/.gitignore index be6f45cf..b55ae221 100644 --- a/.gitignore +++ b/.gitignore @@ -13,3 +13,4 @@ data/ .charon prometheus/prometheus.yml commit-boost/config.toml + diff --git a/CLAUDE.md b/CLAUDE.md new file mode 100644 index 00000000..2346f6dc --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1,151 @@ +# CLAUDE.md + +This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. + +## Project Overview + +This repository contains Docker Compose configurations for running a Charon Distributed Validator Node (CDVN), which coordinates multiple operators to run Ethereum validators. A distributed validator node runs four main components: +- Execution client (EL): Processes Ethereum transactions +- Consensus client (CL/beacon node): Participates in Ethereum's proof-of-stake consensus +- Charon: Obol Network's distributed validator middleware that coordinates between operators +- Validator client (VC): Signs attestations and proposals through Charon + +## Architecture & Multi-Client System + +The repository uses a **profile-based multi-client architecture** where different Ethereum client implementations can be swapped via `.env` configuration: + +- **Compose file structure**: `compose-el.yml` (execution), `compose-cl.yml` (consensus), `compose-vc.yml` (validator), `compose-mev.yml` (MEV), and `docker-compose.yml` (main/monitoring) +- **Client selection**: Set via environment variables `EL`, `CL`, `VC`, `MEV` in `.env` (e.g., `EL=el-nethermind`, `CL=cl-lighthouse`, `VC=vc-lodestar`, `MEV=mev-mevboost`) +- **Profiles**: Docker Compose profiles automatically activate the selected clients via `COMPOSE_PROFILES=${EL},${CL},${VC},${MEV}` +- **Service naming**: Client services use prefixed names (e.g., `el-nethermind`, `cl-lighthouse`, `vc-lodestar`) while the main compose file uses unprefixed names for backward compatibility + +### Supported Clients + +- **Execution Layer**: `el-nethermind`, `el-reth`, `el-none` +- **Consensus Layer**: `cl-lighthouse`, `cl-grandine`, `cl-teku`, `cl-lodestar`, `cl-none` +- **Validator Clients**: `vc-lodestar`, `vc-nimbus`, `vc-prysm`, `vc-teku` +- **MEV Clients**: `mev-mevboost`, `mev-commitboost`, `mev-none` + +### Key Integration Points + +- Charon connects to the consensus layer at `http://${CL}:5052` (beacon node API) +- Validator clients connect to Charon at `http://charon:3600` (distributed validator middleware API) +- Consensus layer connects to execution layer at `http://${EL}:8551` (Engine API with JWT auth) +- MEV clients expose builder API at port `18550` + +## Common Commands + +### Starting/Stopping the Cluster + +```bash +# Start the full cluster (uses profile from .env) +docker compose up -d + +# Stop specific services +docker compose down + +# Stop all services +docker compose down + +# View logs +docker compose logs -f + +# Restart after config changes +docker compose restart +``` + +### Switching Clients + +```bash +# 1. Stop the old client +docker compose down cl-lighthouse + +# 2. Update .env to change CL variable (e.g., CL=cl-grandine) + +# 3. Start new client +docker compose up cl-grandine -d + +# 4. Restart charon to use new beacon node +docker compose restart charon + +# 5. Optional: clean up old client data +rm -rf ./data/lighthouse +``` + +### Testing + +```bash +# Verify containers can be created +docker compose up --no-start + +# Test with debug profile +docker compose -f docker-compose.yml -f compose-debug.yml up --no-start +``` + +## Configuration + +### Environment Setup + +1. 
Copy the appropriate sample file: `.env.sample.mainnet` or `.env.sample.hoodi` → `.env` +2. Set `NETWORK` (mainnet, hoodi) +3. Select clients by uncommenting the desired `EL`, `CL`, `VC`, `MEV` variables +4. Configure optional settings (ports, external hostnames, monitoring tokens, etc.) + +### Important Environment Variables + +- `NETWORK`: Ethereum network (mainnet, hoodi) +- `EL`, `CL`, `VC`, `MEV`: Client selection (determines which Docker profiles activate) +- `CHARON_BEACON_NODE_ENDPOINTS`: Override default beacon node (defaults to selected CL client) +- `CHARON_FALLBACK_BEACON_NODE_ENDPOINTS`: Fallback beacon nodes for redundancy +- `BUILDER_API_ENABLED`: Enable/disable MEV-boost integration +- `CLUSTER_NAME`, `CLUSTER_PEER`: Required for monitoring with Alloy/Prometheus +- `ALERT_DISCORD_IDS`: Discord IDs for Obol Agent monitoring alerts + +### Key Directories + +- `.charon/`: Cluster configuration and validator keys (created by DKG or add-validators) +- `data/`: Persistent data for all clients (execution, consensus, validator databases) +- `jwt/`: JWT secret for execution<->consensus authentication +- `grafana/`: Monitoring dashboards and configuration +- `prometheus/`: Metrics collection configuration +- `scripts/`: Automation scripts for cluster operations + +## Cluster Edit Scripts + +Located in `scripts/edit/`, these automate complex cluster modification operations. Each has its own README with full usage details: + +- **[Add Validators](scripts/edit/add-validators/README.md)** - Add new validators to an existing cluster +- **[Add Operators](scripts/edit/add-operators/README.md)** - Expand the cluster by adding new operators +- **[Remove Operators](scripts/edit/remove-operators/README.md)** - Remove operators from the cluster +- **[Replace Operator](scripts/edit/replace-operator/README.md)** - Replace a single operator in the cluster +- **[Recreate Private Keys](scripts/edit/recreate-private-keys/README.md)** - Refresh private key shares while keeping the same validator public keys +- **[Anti-Slashing DB (vc/)](scripts/edit/vc/README.md)** - Export/import/update anti-slashing databases (EIP-3076) + +## Monitoring Stack + +- **Grafana** (port 3000): Dashboards for cluster health, validator performance +- **Prometheus**: Metrics collection from all services +- **Loki**: Log aggregation (optional, via `CHARON_LOKI_ADDRESSES`) +- **Tempo**: Distributed tracing (debug profile) +- **Alloy**: Log and metric forwarding (uses `alloy-monitored` labels on services) + +Access Grafana at `http://localhost:3000` (or `${MONITORING_PORT_GRAFANA}`). + +## Development Workflow + +When modifying this repository: + +1. **Test container creation** before committing changes to compose files +2. **Preserve backward compatibility** for existing node operators (data paths, service names) +3. **Update all sample .env files** when adding new configuration options +4. **Test client switching** if modifying compose file structure +5. 
**Update version defaults** to tested/stable releases + +## Important Notes + +- **Never commit `.env` files** - they contain operator-specific configuration +- **JWT secret** in `jwt/jwt.hex` must be shared between EL and CL clients +- **Cluster lock** in `.charon/cluster-lock.json` is critical - back it up before any edit operations +- **Validator keys** in `.charon/validator_keys/` must be kept secure and never committed +- **Data directory compatibility**: When switching VCs, verify the new client can handle existing key state +- **Slashing protection**: Always export/import ASDB when switching VCs or replacing operators diff --git a/scripts/README.md b/scripts/README.md new file mode 100644 index 00000000..a195af60 --- /dev/null +++ b/scripts/README.md @@ -0,0 +1,27 @@ +# Cluster Edit Automation Scripts + +Automation scripts for Charon distributed validator cluster editing operations. + +## Documentation + +- [Charon Edit Commands](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/) +- [EIP-3076 Slashing Protection Interchange Format](https://eips.ethereum.org/EIPS/eip-3076) + +## Scripts + +| Directory | Description | +|-----------|-------------| +| [edit/add-validators/](edit/add-validators/README.md) | Add new validators to an existing cluster | +| [edit/recreate-private-keys/](edit/recreate-private-keys/README.md) | Refresh private key shares while keeping the same validator public keys | +| [edit/add-operators/](edit/add-operators/README.md) | Expand the cluster by adding new operators | +| [edit/remove-operators/](edit/remove-operators/README.md) | Remove operators from the cluster | +| [edit/replace-operator/](edit/replace-operator/README.md) | Replace a single operator in a cluster | +| [edit/vc/](edit/vc/README.md) | Export/import/update anti-slashing databases (EIP-3076) | +| [edit/test/](edit/test/README.md) | E2E integration tests for all edit scripts | + +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables +- Docker and `docker compose` +- `jq` + diff --git a/scripts/edit/add-operators/README.md b/scripts/edit/add-operators/README.md new file mode 100644 index 00000000..7f2b0793 --- /dev/null +++ b/scripts/edit/add-operators/README.md @@ -0,0 +1,99 @@ +# Add-Operators Scripts + +Scripts to automate the [add-operators ceremony](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-operators) for Charon distributed validators. + +## Overview + +These scripts help operators expand an existing distributed validator cluster by adding new operators. This is useful for: + +- **Cluster expansion**: Adding more operators for increased redundancy +- **Decentralization**: Distributing validator duties across more parties +- **Resilience**: Expanding the operator set while maintaining the same validators + +**Important**: This is a coordinated ceremony. All operators (existing AND new) must run their respective scripts simultaneously to complete the process. + +> **Warning**: This is an alpha feature in Charon and is not yet recommended for production use. 
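+
+After the ceremony completes, each operator can sanity-check the generated lock file before installing it, for example (the scripts below write their ceremony output to `./output/`):
+
+```bash
+# Confirm the expanded operator set in the ceremony output
+jq '.operators | length' ./output/cluster-lock.json
+```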
+ +There are two scripts for the two roles involved: + +- **`existing-operator.sh`** - For operators already in the cluster +- **`new-operator.sh`** - For new operators joining the cluster + +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set +- Docker running +- `jq` installed +- **Existing operators**: `.charon` directory with `cluster-lock.json` and `validator_keys` +- **New operators**: Charon ENR private key (generated via `--generate-enr`) + +## For Existing Operators + +Automates the complete workflow for operators already in the cluster: + +```bash +./scripts/edit/add-operators/existing-operator.sh \ + --new-operator-enrs "enr:-..." +``` + +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--new-operator-enrs ` | Yes | Comma-separated ENRs of new operators | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +### Workflow + +1. **Export ASDB** - Export anti-slashing database from running VC +2. **Run ceremony** - P2P coordinated add-operators ceremony with all operators +3. **Update ASDB** - Replace pubkeys in exported ASDB to match new cluster-lock +4. **Stop containers** - Stop charon and VC +5. **Backup and replace** - Backup current `.charon/` to `./backups/`, install new configuration +6. **Import ASDB** - Import updated anti-slashing database +7. **Restart containers** - Start charon and VC with new configuration + +## For New Operators + +Two-step workflow for new operators joining the cluster. + +**Step 1:** Generate ENR and share with existing operators: + +```bash +./scripts/edit/add-operators/new-operator.sh --generate-enr +``` + +**Step 2:** After receiving the existing cluster-lock, run the ceremony: + +```bash +./scripts/edit/add-operators/new-operator.sh \ + --new-operator-enrs "enr:-...,enr:-..." \ + --cluster-lock ./received-cluster-lock.json +``` + +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--new-operator-enrs ` | For ceremony | Comma-separated ENRs of ALL new operators | +| `--cluster-lock ` | For ceremony | Path to existing cluster-lock.json | +| `--generate-enr` | No | Generate new ENR private key | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +## Current Limitations + +- The new cluster configuration will not be reflected on the Obol Launchpad +- The cluster will have a new cluster hash (different observability identifier) +- All operators (existing and new) must participate; no partial participation option +- Cluster threshold remains unchanged + +## Related + +- [Add-Validators Workflow](../add-validators/README.md) +- [Remove-Operators Workflow](../remove-operators/README.md) +- [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md) +- [Replace-Operator Workflow](../replace-operator/README.md) +- [Anti-Slashing DB Scripts](../vc/README.md) +- [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-operators) diff --git a/scripts/edit/add-operators/existing-operator.sh b/scripts/edit/add-operators/existing-operator.sh new file mode 100755 index 00000000..701ec3f6 --- /dev/null +++ b/scripts/edit/add-operators/existing-operator.sh @@ -0,0 +1,340 @@ +#!/usr/bin/env bash + +# Add-Operators Script for EXISTING Operators +# +# This script automates the add-operators ceremony for operators who are +# already in the cluster. It handles the full workflow including ASDB +# export/update/import around the ceremony. 
+# +# Reference: https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-operators +# +# IMPORTANT: This is a CEREMONY - ALL operators (existing AND new) must run +# their respective scripts simultaneously. The ceremony coordinates between +# all operators to generate new key shares for the expanded operator set. +# +# The workflow: +# 1. Export the current anti-slashing database +# 2. Run the add-operators ceremony (all operators simultaneously) +# 3. Update the exported ASDB with new pubkeys +# 4. Stop containers +# 5. Backup and replace .charon directory +# 6. Import the updated ASDB +# 7. Restart containers +# +# Prerequisites: +# - .env file with NETWORK and VC variables set +# - .charon directory with cluster-lock.json and validator_keys +# - Docker and docker compose installed and running +# - VC container running (for ASDB export) +# - All operators must participate in the ceremony +# +# Usage: +# ./scripts/edit/add-operators/existing-operator.sh [OPTIONS] +# +# Options: +# --new-operator-enrs Comma-separated ENRs of new operators (required) +# --dry-run Show what would be done without executing +# -h, --help Show this help message + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." && pwd)}" +cd "$REPO_ROOT" + +# Default values +NEW_OPERATOR_ENRS="" +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" +ASDB_EXPORT_DIR="./asdb-export" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/add-operators/existing-operator.sh [OPTIONS] + +Automates the add-operators ceremony for operators already in the cluster. +This is a CEREMONY that ALL operators (existing AND new) must run simultaneously. + +Options: + --new-operator-enrs Comma-separated ENRs of new operators (required) + --dry-run Show what would be done without executing + -h, --help Show this help message + +Example: + # Add one new operator + ./scripts/edit/add-operators/existing-operator.sh \ + --new-operator-enrs "enr:-..." + + # Add multiple new operators + ./scripts/edit/add-operators/existing-operator.sh \ + --new-operator-enrs "enr:-...,enr:-..." 
+ +Prerequisites: + - .env file with NETWORK and VC variables set + - .charon directory with cluster-lock.json and validator_keys + - Docker and docker compose installed and running + - VC container running (for ASDB export) + - All operators must participate in the ceremony +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --new-operator-enrs) + NEW_OPERATOR_ENRS="$2" + shift 2 + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +# Validate required arguments +if [ -z "$NEW_OPERATOR_ENRS" ]; then + log_error "Missing required argument: --new-operator-enrs" + echo "Use --help for usage information" + exit 1 +fi + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Add-Operators Workflow - EXISTING OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +source .env + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! -d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! -f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if [ ! -d .charon/validator_keys ]; then + log_error ".charon/validator_keys directory not found" + log_info "All operators must have their current validator private key shares." + exit 1 +fi + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" +log_info " New operator ENRs: ${NEW_OPERATOR_ENRS:0:80}..." + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +# Show current cluster info +if [ -f .charon/cluster-lock.json ]; then + CURRENT_VALIDATORS=$(jq '.distributed_validators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + CURRENT_OPERATORS=$(jq '.operators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + log_info " Current cluster: $CURRENT_VALIDATORS validator(s), $CURRENT_OPERATORS operator(s)" +fi + +echo "" + +# Step 1: Export anti-slashing database +log_step "Step 1: Exporting anti-slashing database..." + +# Check VC container is running (skip check in dry-run mode) +if [ "$DRY_RUN" = false ]; then + if ! docker compose ps "$VC" 2>/dev/null | grep -q Up; then + log_error "VC container ($VC) is not running. Start it first:" + log_error " docker compose up -d $VC" + exit 1 + fi +else + log_warn "Would check that $VC container is running" +fi + +mkdir -p "$ASDB_EXPORT_DIR" + +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/export_asdb.sh" \ + --output-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database exported to $ASDB_EXPORT_DIR/slashing-protection.json" + +echo "" + +# Step 2: Run ceremony +log_step "Step 2: Running add-operators ceremony..." 
+ +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL operators must run this ceremony simultaneously ║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit add-operators" +log_info " New operator ENRs: ${NEW_OPERATOR_ENRS:0:80}..." +log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all operators to connect..." +echo "" + +if [ "$DRY_RUN" = false ]; then + docker run --rm -it \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + alpha edit add-operators \ + --new-operator-enrs="$NEW_OPERATOR_ENRS" \ + --output-dir=/opt/charon/output + + # Verify ceremony output + if [ -f "$OUTPUT_DIR/cluster-lock.json" ]; then + log_info "Ceremony completed successfully!" + NEW_VALIDATORS=$(jq '.distributed_validators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + NEW_OPERATORS=$(jq '.operators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + log_info "New cluster-lock.json generated with $NEW_VALIDATORS validator(s), $NEW_OPERATORS operator(s)" + else + log_error "Ceremony may have failed - no cluster-lock.json in $OUTPUT_DIR/" + exit 1 + fi +else + echo " [DRY-RUN] docker run --rm -it ... charon alpha edit add-operators --new-operator-enrs=... --output-dir=$OUTPUT_DIR" +fi + +echo "" + +# Step 3: Update ASDB pubkeys +log_step "Step 3: Updating anti-slashing database pubkeys..." + +run_cmd "$SCRIPT_DIR/../vc/update-anti-slashing-db.sh" \ + "$ASDB_EXPORT_DIR/slashing-protection.json" \ + ".charon/cluster-lock.json" \ + "$OUTPUT_DIR/cluster-lock.json" + +log_info "Anti-slashing database pubkeys updated" + +echo "" + +# Step 4: Stop containers +log_step "Step 4: Stopping containers..." + +run_cmd docker compose stop "$VC" charon + +log_info "Containers stopped" + +echo "" + +# Step 5: Backup and replace .charon +log_step "Step 5: Backing up and replacing .charon directory..." + +TIMESTAMP=$(date +%Y%m%d_%H%M%S) +mkdir -p "$BACKUP_DIR" + +run_cmd mv .charon "$BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info "Current .charon backed up to $BACKUP_DIR/.charon-backup.$TIMESTAMP" + +run_cmd mv "$OUTPUT_DIR" .charon +log_info "New cluster configuration installed to .charon/" + +echo "" + +# Step 6: Import updated ASDB +log_step "Step 6: Importing updated anti-slashing database..." + +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/import_asdb.sh" \ + --input-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database imported" + +echo "" + +# Step 7: Restart containers +log_step "Step 7: Restarting containers..." + +run_cmd docker compose up -d charon "$VC" + +log_info "Containers restarted" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Add-Operators Workflow COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Old .charon backed up to: $BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info " - New cluster configuration installed in: .charon/" +log_info " - Anti-slashing database updated and imported" +log_info " - Containers restarted: charon, $VC" +echo "" +log_info "Next steps:" +log_info " 1. Check charon logs: docker compose logs -f charon" +log_info " 2. 
Verify all nodes connected and healthy" +log_info " 3. Verify cluster is producing attestations" +log_info " 4. Confirm new operators have joined successfully" +echo "" +log_warn "Keep the backup until you've verified normal operation for several epochs." +echo "" +log_info "Current limitations:" +log_info " - The new configuration will not be reflected on the Obol Launchpad" +log_info " - The cluster will have a new cluster hash (different observability ID)" +echo "" diff --git a/scripts/edit/add-operators/new-operator.sh b/scripts/edit/add-operators/new-operator.sh new file mode 100755 index 00000000..58f45323 --- /dev/null +++ b/scripts/edit/add-operators/new-operator.sh @@ -0,0 +1,388 @@ +#!/usr/bin/env bash + +# Add-Operators Script for NEW Operators +# +# This script helps new operators join an existing cluster during the +# add-operators ceremony. +# +# Reference: https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-operators +# +# IMPORTANT: This is a CEREMONY - ALL operators (existing AND new) must run +# their respective scripts simultaneously. +# +# Two-step workflow: +# 1. Generate your ENR and share it with existing operators +# 2. Run the ceremony with the cluster-lock received from existing operators +# +# Prerequisites: +# - .env file with NETWORK and VC variables set +# - For --generate-enr: Docker installed +# - For ceremony: .charon/charon-enr-private-key must exist +# - For ceremony: Cluster-lock.json received from existing operators +# +# Usage: +# ./scripts/edit/add-operators/new-operator.sh [OPTIONS] +# +# Options: +# --new-operator-enrs Comma-separated ENRs of ALL new operators (required for ceremony) +# --cluster-lock Path to existing cluster-lock.json (required for ceremony) +# --generate-enr Generate a new ENR private key if not present +# --dry-run Show what would be done without executing +# -h, --help Show this help message +# +# Examples: +# # Step 1: Generate ENR and share with existing operators +# ./scripts/edit/add-operators/new-operator.sh --generate-enr +# +# # Step 2: Run ceremony with all new operator ENRs +# ./scripts/edit/add-operators/new-operator.sh \ +# --new-operator-enrs "enr:-...,enr:-..." \ +# --cluster-lock ./received-cluster-lock.json + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." && pwd)}" +cd "$REPO_ROOT" + +# Default values +NEW_OPERATOR_ENRS="" +CLUSTER_LOCK_PATH="" +GENERATE_ENR=false +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/add-operators/new-operator.sh [OPTIONS] + +Helps new operators join an existing cluster during the add-operators ceremony. +This is a CEREMONY that ALL operators (existing AND new) must run simultaneously. 
+ +Options: + --new-operator-enrs Comma-separated ENRs of ALL new operators (required for ceremony) + --cluster-lock Path to existing cluster-lock.json (required for ceremony) + --generate-enr Generate a new ENR private key if not present + --dry-run Show what would be done without executing + -h, --help Show this help message + +Examples: + # Step 1: Generate ENR and share with existing operators + ./scripts/edit/add-operators/new-operator.sh --generate-enr + + # Step 2: Run ceremony with cluster-lock and all new operator ENRs + ./scripts/edit/add-operators/new-operator.sh \ + --new-operator-enrs "enr:-...,enr:-..." \ + --cluster-lock ./received-cluster-lock.json + +Prerequisites: + - .env file with NETWORK and VC variables set + - For --generate-enr: Docker installed + - For ceremony: .charon/charon-enr-private-key must exist + - For ceremony: Cluster-lock.json received from existing operators +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --new-operator-enrs) + NEW_OPERATOR_ENRS="$2" + shift 2 + ;; + --cluster-lock) + CLUSTER_LOCK_PATH="$2" + shift 2 + ;; + --generate-enr) + GENERATE_ENR=true + shift + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Add-Operators Workflow - NEW OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +source .env + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +echo "" + +# Handle ENR generation mode +if [ "$GENERATE_ENR" = true ]; then + log_step "Step 1: Generating ENR private key..." + + if [ -f .charon/charon-enr-private-key ]; then + log_warn "ENR private key already exists at .charon/charon-enr-private-key" + log_warn "Skipping generation to avoid overwriting existing key." + log_info "If you want to generate a new key, remove the existing file first." + else + mkdir -p .charon + + if [ "$DRY_RUN" = false ]; then + docker run --rm \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + create enr + else + echo " [DRY-RUN] docker run --rm ... 
charon create enr" + fi + + log_info "ENR private key generated" + fi + + if [ -f .charon/charon-enr-private-key ]; then + echo "" + log_warn "╔════════════════════════════════════════════════════════════════╗" + log_warn "║ SHARE YOUR ENR WITH THE EXISTING OPERATORS ║" + log_warn "╚════════════════════════════════════════════════════════════════╝" + echo "" + + # Extract and display the ENR + if [ "$DRY_RUN" = false ]; then + ENR=$(docker run --rm \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + enr 2>/dev/null || echo "") + + if [ -n "$ENR" ]; then + log_info "Your ENR:" + echo "" + echo "$ENR" + echo "" + fi + fi + + log_info "Send this ENR to the existing operators." + log_info "They will use it with: --new-operator-enrs \"\"" + log_info "" + log_info "You will also need the existing cluster-lock.json from them." + log_info "" + log_info "After receiving it, run the ceremony with:" + log_info " ./scripts/edit/add-operators/new-operator.sh \\" + log_info " --new-operator-enrs \"\" \\" + log_info " --cluster-lock " + fi + + exit 0 +fi + +# Ceremony mode: validate required arguments +if [ -z "$NEW_OPERATOR_ENRS" ]; then + log_error "Missing required argument: --new-operator-enrs" + echo "Use --help for usage information" + exit 1 +fi + +if [ -z "$CLUSTER_LOCK_PATH" ]; then + log_error "Missing required argument: --cluster-lock" + echo "Use --help for usage information" + exit 1 +fi + +# Step 1: Check ceremony prerequisites +log_step "Step 1: Checking ceremony prerequisites..." + +if [ "$DRY_RUN" = false ]; then + if [ ! -d .charon ]; then + log_error ".charon directory not found" + log_info "First generate your ENR with: ./scripts/edit/add-operators/new-operator.sh --generate-enr" + exit 1 + fi + + if [ ! -f .charon/charon-enr-private-key ]; then + log_error ".charon/charon-enr-private-key not found" + log_info "First generate your ENR with: ./scripts/edit/add-operators/new-operator.sh --generate-enr" + exit 1 + fi + + if [ ! -f "$CLUSTER_LOCK_PATH" ]; then + log_error "Cluster-lock file not found: $CLUSTER_LOCK_PATH" + exit 1 + fi + + # Validate cluster-lock is valid JSON + if ! jq empty "$CLUSTER_LOCK_PATH" 2>/dev/null; then + log_error "Cluster-lock file is not valid JSON: $CLUSTER_LOCK_PATH" + exit 1 + fi +else + if [ ! -d .charon ]; then + log_warn "Would check for .charon directory (not found)" + fi + if [ ! -f .charon/charon-enr-private-key ]; then + log_warn "Would check for .charon/charon-enr-private-key (not found)" + fi +fi + +log_info "Using cluster-lock: $CLUSTER_LOCK_PATH" +log_info "New operator ENRs: ${NEW_OPERATOR_ENRS:0:80}..." + +# Show cluster info +if [ "$DRY_RUN" = false ] && [ -f "$CLUSTER_LOCK_PATH" ]; then + NUM_VALIDATORS=$(jq '.distributed_validators | length' "$CLUSTER_LOCK_PATH" 2>/dev/null || echo "?") + NUM_OPERATORS=$(jq '.operators | length' "$CLUSTER_LOCK_PATH" 2>/dev/null || echo "?") + log_info "Cluster info: $NUM_VALIDATORS validator(s), $NUM_OPERATORS operator(s)" +fi + +log_info "Prerequisites OK" + +echo "" + +# Step 2: Run ceremony +log_step "Step 2: Running add-operators ceremony..." + +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL operators must run this ceremony simultaneously ║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit add-operators" +log_info " New operator ENRs: ${NEW_OPERATOR_ENRS:0:80}..." 
+log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all operators to connect..." +echo "" + +if [ "$DRY_RUN" = false ]; then + docker run --rm -it \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" \ + -v "$REPO_ROOT/$CLUSTER_LOCK_PATH:/opt/charon/cluster-lock.json:ro" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + alpha edit add-operators \ + --new-operator-enrs="$NEW_OPERATOR_ENRS" \ + --output-dir=/opt/charon/output \ + --lock-file=/opt/charon/cluster-lock.json \ + --private-key-file=/opt/charon/.charon/charon-enr-private-key + + # Verify ceremony output + if [ -f "$OUTPUT_DIR/cluster-lock.json" ]; then + log_info "Ceremony completed successfully!" + NEW_VALIDATORS=$(jq '.distributed_validators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + NEW_OPERATORS=$(jq '.operators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + log_info "New cluster-lock.json generated with $NEW_VALIDATORS validator(s), $NEW_OPERATORS operator(s)" + else + log_error "Ceremony may have failed - no cluster-lock.json in $OUTPUT_DIR/" + exit 1 + fi +else + echo " [DRY-RUN] docker run --rm -it ... charon alpha edit add-operators --new-operator-enrs=... --output-dir=$OUTPUT_DIR --lock-file=... --private-key-file=..." +fi + +echo "" + +# Step 3: Install .charon from output +log_step "Step 3: Installing new cluster configuration..." + +if [ -d .charon ]; then + TIMESTAMP=$(date +%Y%m%d_%H%M%S) + mkdir -p "$BACKUP_DIR" + run_cmd mv .charon "$BACKUP_DIR/.charon-backup.$TIMESTAMP" + log_info "Old .charon backed up to $BACKUP_DIR/.charon-backup.$TIMESTAMP" +fi + +run_cmd mv "$OUTPUT_DIR" .charon +log_info "New cluster configuration installed to .charon/" + +echo "" + +# Step 4: Start containers +log_step "Step 4: Starting containers..." + +run_cmd docker compose up -d charon "$VC" + +log_info "Containers started" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ New Operator Setup COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Cluster configuration installed in: .charon/" +log_info " - Containers started: charon, $VC" +echo "" +log_info "Next steps:" +log_info " 1. Wait for charon to sync with peers: docker compose logs -f charon" +log_info " 2. Verify VC is running: docker compose logs -f $VC" +log_info " 3. Monitor validator duties once synced" +echo "" +log_warn "Note: As a new operator, you do NOT have any slashing protection history." +log_warn "Your VC will start fresh. Ensure all existing operators have completed" +log_warn "their add-operators workflow before validators resume duties." +echo "" diff --git a/scripts/edit/add-validators/README.md b/scripts/edit/add-validators/README.md new file mode 100644 index 00000000..2b9cc2db --- /dev/null +++ b/scripts/edit/add-validators/README.md @@ -0,0 +1,107 @@ +# Add-Validators Script + +Script to automate the [add-validators ceremony](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-validators) for Charon distributed validators. + +## Overview + +This script helps operators add new validators to an existing distributed validator cluster. 
This is useful for: + +- **Expanding capacity**: Add more validators without creating a new cluster +- **Scaling operations**: Grow your staking operation with existing operators + +**Important**: This is a coordinated ceremony. All operators must run this script simultaneously to complete the process. + +> **Warning**: This is an alpha feature in Charon and is not yet recommended for production use. + +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set +- `.charon` directory with `cluster-lock.json` +- Docker running +- `jq` installed +- **Charon and VC must be RUNNING** during the ceremony +- **All operators must participate in the ceremony** + +## Usage + +All operators must run this script simultaneously: + +```bash +./scripts/edit/add-validators/add-validators.sh \ + --num-validators 10 \ + --withdrawal-addresses 0x123...abc \ + --fee-recipient-addresses 0x456...def +``` + +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--num-validators ` | Yes | Number of validators to add | +| `--withdrawal-addresses ` | No | Withdrawal address(es), comma-separated for multiple | +| `--fee-recipient-addresses ` | No | Fee recipient address(es), comma-separated | +| `--unverified` | No | Skip key verification (for remote KeyManager) | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +### Examples + +```bash +# Add 10 validators with same addresses for all +./scripts/edit/add-validators/add-validators.sh \ + --num-validators 10 \ + --withdrawal-addresses 0x123...abc \ + --fee-recipient-addresses 0x456...def + +# Add validators without key verification (remote KeyManager) +./scripts/edit/add-validators/add-validators.sh \ + --num-validators 5 \ + --withdrawal-addresses 0x123...abc \ + --fee-recipient-addresses 0x456...def \ + --unverified + +# Preview what would happen +./scripts/edit/add-validators/add-validators.sh \ + --num-validators 5 \ + --withdrawal-addresses 0x123...abc \ + --dry-run +``` + +## Workflow + +The script performs the following steps: + +1. **Check prerequisites** - Verify environment, cluster-lock, and running containers +2. **Run ceremony** - P2P coordinated add-validators ceremony with all operators +3. **Stop containers** - Stop charon and VC +4. **Backup and replace** - Backup current `.charon/` to `./backups/`, install new configuration +5. **Restart containers** - Start charon and VC with new configuration + +## After the Ceremony + +1. **Wait for threshold** - Once threshold operators complete their upgrades, new validators will begin participating +2. **Generate deposits** - New validator deposit data is available in `.charon/deposit-data.json` +3. **Activate validators** - Submit deposits to activate new validators on the beacon chain + +## Using --unverified Mode + +If your validator keys are stored remotely (e.g., in a KeyManager) and Charon cannot access them, use the `--unverified` flag. This skips key verification during the ceremony. 
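+
+As a minimal sketch of the runtime side (the exact options are spelled out in the note below):
+
+```bash
+# .env: required so `charon run` accepts cluster artifacts created with --unverified
+CHARON_NO_VERIFY=true
+```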
+ +**Important**: When using cluster artifacts created with `--unverified`: +- You must start `charon run` with the `--no-verify` flag +- Or set `CHARON_NO_VERIFY=true` in your `.env` file + +## Current Limitations + +- The new cluster configuration will not be reflected on the Obol Launchpad +- The cluster will have a new cluster hash (different observability identifier) +- All operators must participate; no partial participation option + +## Related + +- [Add-Operators Workflow](../add-operators/README.md) +- [Remove-Operators Workflow](../remove-operators/README.md) +- [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md) +- [Replace-Operator Workflow](../replace-operator/README.md) +- [Anti-Slashing DB Scripts](../vc/README.md) +- [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-validators) diff --git a/scripts/edit/add-validators/add-validators.sh b/scripts/edit/add-validators/add-validators.sh new file mode 100755 index 00000000..0f046704 --- /dev/null +++ b/scripts/edit/add-validators/add-validators.sh @@ -0,0 +1,376 @@ +#!/usr/bin/env bash + +# Add-Validators Script +# +# This script automates the add-validators ceremony for Charon distributed +# validators. This is used to add new validators to an existing cluster. +# +# Reference: https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-validators +# +# IMPORTANT: This is a CEREMONY - ALL operators in the cluster must run this +# script simultaneously. The ceremony coordinates between all operators to +# generate new validator key shares. +# +# Use cases: +# - Adding more validators to an existing distributed validator cluster +# - Expanding staking capacity without creating a new cluster +# +# The workflow: +# 1. Check prerequisites (cluster running, cluster-lock exists) +# 2. Run the add-validators ceremony (all operators simultaneously) +# 3. Stop containers +# 4. Backup and replace .charon directory +# 5. Restart containers +# +# Prerequisites: +# - .env file with NETWORK and VC variables set +# - .charon directory with cluster-lock.json +# - Docker and docker compose installed and running +# - Charon and VC must be RUNNING during ceremony +# - All operators must participate in the ceremony +# +# Usage: +# ./scripts/edit/add-validators/add-validators.sh [OPTIONS] +# +# Options: +# --num-validators Number of validators to add (required) +# --withdrawal-addresses Withdrawal address(es), comma-separated for multiple +# --fee-recipient-addresses Fee recipient address(es), comma-separated +# --unverified Skip key verification (when keys not accessible) +# --dry-run Show what would be done without executing +# -h, --help Show this help message + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." && pwd)}" +cd "$REPO_ROOT" + +# Default values +NUM_VALIDATORS="" +WITHDRAWAL_ADDRESSES="" +FEE_RECIPIENT_ADDRESSES="" +UNVERIFIED=false +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/add-validators/add-validators.sh [OPTIONS] + +Adds new validators to an existing distributed validator cluster. 
This is a +CEREMONY that ALL operators must run simultaneously. + +Options: + --num-validators Number of validators to add (required) + --withdrawal-addresses Withdrawal address(es), comma-separated for multiple + --fee-recipient-addresses Fee recipient address(es), comma-separated + --unverified Skip key verification (when keys not accessible) + --dry-run Show what would be done without executing + -h, --help Show this help message + +Example: + # Add 10 validators with same withdrawal address + ./scripts/edit/add-validators/add-validators.sh \ + --num-validators 10 \ + --withdrawal-addresses 0x123...abc \ + --fee-recipient-addresses 0x456...def + + # Add validators without key verification (remote KeyManager) + ./scripts/edit/add-validators/add-validators.sh \ + --num-validators 5 \ + --withdrawal-addresses 0x123...abc \ + --fee-recipient-addresses 0x456...def \ + --unverified + +Prerequisites: + - .env file with NETWORK and VC variables set + - .charon directory with cluster-lock.json and validator_keys + - Charon and VC containers RUNNING during ceremony + - All operators must participate in the ceremony + +Note: + If using --unverified flag, you must start charon with --no-verify flag + or set CHARON_NO_VERIFY=true environment variable. +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --num-validators) + NUM_VALIDATORS="$2" + shift 2 + ;; + --withdrawal-addresses) + WITHDRAWAL_ADDRESSES="$2" + shift 2 + ;; + --fee-recipient-addresses) + FEE_RECIPIENT_ADDRESSES="$2" + shift 2 + ;; + --unverified) + UNVERIFIED=true + shift + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +# Validate required arguments +if [ -z "$NUM_VALIDATORS" ]; then + log_error "Missing required argument: --num-validators" + echo "Use --help for usage information" + exit 1 +fi + +# Validate num-validators is a positive integer +if ! [[ "$NUM_VALIDATORS" =~ ^[1-9][0-9]*$ ]]; then + log_error "Invalid --num-validators: must be a positive integer" + exit 1 +fi + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Add Validators Workflow ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +source .env + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! -d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! -f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +# Check if charon is running (required for ceremony) +if [ "$DRY_RUN" = false ]; then + if ! docker compose ps charon 2>/dev/null | grep -q Up; then + log_error "Charon container is not running." + log_error "The DV node should be running during the add-validators ceremony." 
+ log_error "Start it with: docker compose up -d charon $VC" + exit 1 + fi +else + log_warn "Would check that charon container is running" +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" +log_info " Validators to add: $NUM_VALIDATORS" + +if [ -n "$WITHDRAWAL_ADDRESSES" ]; then + log_info " Withdrawal addresses: $WITHDRAWAL_ADDRESSES" +fi +if [ -n "$FEE_RECIPIENT_ADDRESSES" ]; then + log_info " Fee recipient addresses: $FEE_RECIPIENT_ADDRESSES" +fi +if [ "$UNVERIFIED" = true ]; then + log_warn " Mode: UNVERIFIED (key verification skipped)" +fi + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +# Show current cluster info +if [ -f .charon/cluster-lock.json ]; then + CURRENT_VALIDATORS=$(jq '.distributed_validators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + CURRENT_OPERATORS=$(jq '.operators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + log_info " Current cluster: $CURRENT_VALIDATORS validator(s), $CURRENT_OPERATORS operator(s)" +fi + +echo "" + +# Step 1: Run ceremony +log_step "Step 1: Running add-validators ceremony..." + +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL operators must run this ceremony simultaneously ║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit add-validators" +log_info " Number of validators: $NUM_VALIDATORS" +log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all operators to connect..." +echo "" + +# Build Docker command arguments +DOCKER_ARGS=( + run --rm -it + -v "$REPO_ROOT/.charon:/opt/charon/.charon" + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" + alpha edit add-validators + --num-validators="$NUM_VALIDATORS" + --output-dir=/opt/charon/output +) + +if [ -n "$WITHDRAWAL_ADDRESSES" ]; then + DOCKER_ARGS+=(--withdrawal-addresses="$WITHDRAWAL_ADDRESSES") +fi + +if [ -n "$FEE_RECIPIENT_ADDRESSES" ]; then + DOCKER_ARGS+=(--fee-recipient-addresses="$FEE_RECIPIENT_ADDRESSES") +fi + +if [ "$UNVERIFIED" = true ]; then + DOCKER_ARGS+=(--unverified) +fi + +if [ "$DRY_RUN" = false ]; then + docker "${DOCKER_ARGS[@]}" + + # Verify ceremony output + if [ -f "$OUTPUT_DIR/cluster-lock.json" ]; then + log_info "Ceremony completed successfully!" + NEW_VALIDATORS=$(jq '.distributed_validators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + log_info "New cluster-lock.json generated with $NEW_VALIDATORS validator(s)" + else + log_error "Ceremony may have failed - no cluster-lock.json in $OUTPUT_DIR/" + exit 1 + fi +else + echo " [DRY-RUN] docker run --rm -it ... charon alpha edit add-validators --num-validators=$NUM_VALIDATORS --output-dir=$OUTPUT_DIR" +fi + +echo "" + +# Step 2: Stop containers +log_step "Step 2: Stopping containers..." + +run_cmd docker compose stop "$VC" charon + +log_info "Containers stopped" + +echo "" + +# Step 3: Backup and replace .charon +log_step "Step 3: Backing up and replacing .charon directory..." 
+ +TIMESTAMP=$(date +%Y%m%d_%H%M%S) +mkdir -p "$BACKUP_DIR" + +run_cmd mv .charon "$BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info "Current .charon backed up to $BACKUP_DIR/.charon-backup.$TIMESTAMP" + +run_cmd mv "$OUTPUT_DIR" .charon +log_info "New cluster configuration installed to .charon/" + +echo "" + +# Step 4: Restart containers +log_step "Step 4: Restarting containers..." + +# Build up command with potential --no-verify flag +if [ "$UNVERIFIED" = true ]; then + log_warn "Starting charon with CHARON_NO_VERIFY=true (required for --unverified mode)" + run_cmd docker compose up -d charon "$VC" +else + run_cmd docker compose up -d charon "$VC" +fi + +log_info "Containers restarted" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Add Validators Workflow COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Old .charon backed up to: $BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info " - New cluster configuration installed in: .charon/" +log_info " - $NUM_VALIDATORS new validator(s) added" +log_info " - Containers restarted: charon, $VC" + +if [ "$UNVERIFIED" = true ]; then + echo "" + log_warn "IMPORTANT: You used --unverified mode." + log_warn "Ensure CHARON_NO_VERIFY=true is set in your .env file for future restarts." +fi + +echo "" +log_info "Next steps:" +log_info " 1. Check charon logs: docker compose logs -f charon" +log_info " 2. Wait for threshold operators to complete their upgrades" +log_info " 3. Verify new validators appear in cluster" +log_info " 4. Generate deposit data for new validators (in .charon/deposit-data.json)" +log_info " 5. Activate new validators on the beacon chain" +echo "" +log_warn "Keep the backup until you've verified normal operation for several epochs." +echo "" +log_info "Current limitations:" +log_info " - The new configuration will not be reflected on the Obol Launchpad" +log_info " - The cluster will have a new cluster hash (different observability ID)" +echo "" diff --git a/scripts/edit/recreate-private-keys/README.md b/scripts/edit/recreate-private-keys/README.md new file mode 100644 index 00000000..fe6e61c4 --- /dev/null +++ b/scripts/edit/recreate-private-keys/README.md @@ -0,0 +1,64 @@ +# Recreate-Private-Keys Script + +Script to automate the [recreate-private-keys ceremony](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/recreate-private-keys) for Charon distributed validators. + +## Overview + +This script helps operators recreate validator private key shares while keeping the same validator public keys. This is useful for: + +- **Security concerns**: If private key shares may have been compromised +- **Key rotation**: As part of regular security practices +- **Recovery**: After a security incident to refresh key material + +**Important**: This operation maintains the same validator public keys, so validators remain registered on the beacon chain without any changes. Only the underlying private key shares held by operators are refreshed. + +> **Warning**: This is an alpha feature in Charon and is not yet recommended for production use. 
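+
+Because the validator public keys must not change, a quick post-ceremony check is to compare them between the backed-up and new lock files. A sketch (the `distributed_public_key` field name is assumed from the cluster-lock format; substitute your backup timestamp):
+
+```bash
+# Expect no output if the validator pubkeys are identical before and after
+diff <(jq -r '.distributed_validators[].distributed_public_key' ./backups/.charon-backup.<timestamp>/cluster-lock.json) \
+     <(jq -r '.distributed_validators[].distributed_public_key' .charon/cluster-lock.json)
+```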
+ +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set +- `.charon` directory with `cluster-lock.json` and `validator_keys` +- Docker running +- `jq` installed +- **All operators must participate in the ceremony** + +## Usage + +All operators must run this script simultaneously: + +```bash +./scripts/edit/recreate-private-keys/recreate-private-keys.sh +``` + +The script will: +1. Export the anti-slashing database from the validator client +2. Run the recreate-private-keys ceremony (P2P coordinated with all operators) +3. Update the ASDB pubkeys to match new key shares +4. Stop charon and VC containers +5. Backup current `.charon` directory to `./backups/` +6. Move new keys from `./output/` to `.charon/` +7. Import the updated anti-slashing database +8. Restart containers + +## Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +## Current Limitations + +- The new cluster configuration will not be reflected on the Obol Launchpad +- The cluster will have a new cluster hash (different observability identifier) +- All operators must participate; no partial participation option +- All operators must have their current validator private key shares + +## Related + +- [Add-Validators Workflow](../add-validators/README.md) +- [Add-Operators Workflow](../add-operators/README.md) +- [Remove-Operators Workflow](../remove-operators/README.md) +- [Replace-Operator Workflow](../replace-operator/README.md) +- [Anti-Slashing DB Scripts](../vc/README.md) +- [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/recreate-private-keys) diff --git a/scripts/edit/recreate-private-keys/recreate-private-keys.sh b/scripts/edit/recreate-private-keys/recreate-private-keys.sh new file mode 100755 index 00000000..11babd91 --- /dev/null +++ b/scripts/edit/recreate-private-keys/recreate-private-keys.sh @@ -0,0 +1,315 @@ +#!/usr/bin/env bash + +# Recreate-Private-Keys Script +# +# This script automates the recreate-private-keys ceremony for Charon +# distributed validators. This is used to regenerate validator private key +# shares while keeping the same validator public keys. +# +# Reference: https://docs.obol.org/next/advanced-and-troubleshooting/advanced/recreate-private-keys +# +# IMPORTANT: This is a CEREMONY - ALL operators in the cluster must run this +# script simultaneously. The ceremony coordinates between all operators to +# generate new private key shares. +# +# Use cases: +# - Security concerns: If private key shares may have been compromised +# - Key rotation: As part of regular security practices +# - Recovery: After a security incident to refresh key material +# +# The workflow: +# 1. Export the current anti-slashing database +# 2. Run the recreate-private-keys ceremony (all operators simultaneously) +# 3. Update the exported ASDB with new pubkeys +# 4. Stop containers +# 5. Backup and replace .charon directory +# 6. Import the updated ASDB +# 7. 
Restart containers +# +# Prerequisites: +# - .env file with NETWORK and VC variables set +# - .charon directory with cluster-lock.json and validator_keys +# - Docker and docker compose installed and running +# - All operators must participate in the ceremony +# +# Usage: +# ./scripts/edit/recreate-private-keys/recreate-private-keys.sh [OPTIONS] +# +# Options: +# --dry-run Show what would be done without executing +# -h, --help Show this help message + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." && pwd)}" +cd "$REPO_ROOT" + +# Default values +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" +ASDB_EXPORT_DIR="./asdb-export" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/recreate-private-keys/recreate-private-keys.sh [OPTIONS] + +Recreates validator private key shares for the cluster. This is a CEREMONY +that ALL operators must run simultaneously. + +Use cases: + - Security concerns: If private key shares may have been compromised + - Key rotation: As part of regular security practices + - Recovery: After a security incident to refresh key material + +NOTE: This operation maintains the same validator public keys. Only the +underlying private key shares held by operators are refreshed. + +Options: + --dry-run Show what would be done without executing + -h, --help Show this help message + +Example: + ./scripts/edit/recreate-private-keys/recreate-private-keys.sh + +Prerequisites: + - .env file with NETWORK and VC variables set + - .charon directory with cluster-lock.json and validator_keys + - Docker and docker compose installed and running + - All operators must participate in the ceremony +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Recreate Private Keys Workflow ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +source .env + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! -d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! -f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if [ ! -d .charon/validator_keys ]; then + log_error ".charon/validator_keys directory not found" + log_info "All operators must have their current validator private key shares." + exit 1 +fi + +if ! 
docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +echo "" + +# Step 1: Export anti-slashing database +log_step "Step 1: Exporting anti-slashing database..." + +# Check VC container is running (skip check in dry-run mode) +if [ "$DRY_RUN" = false ]; then + if ! docker compose ps "$VC" 2>/dev/null | grep -q Up; then + log_error "VC container ($VC) is not running. Start it first:" + log_error " docker compose up -d $VC" + exit 1 + fi +else + log_warn "Would check that $VC container is running" +fi + +mkdir -p "$ASDB_EXPORT_DIR" + +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/export_asdb.sh" \ + --output-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database exported to $ASDB_EXPORT_DIR/slashing-protection.json" + +echo "" + +# Step 2: Run ceremony +log_step "Step 2: Running recreate-private-keys ceremony..." + +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL operators must run this ceremony simultaneously ║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit recreate-private-keys" +log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all operators to connect..." +echo "" + +if [ "$DRY_RUN" = false ]; then + docker run --rm -it \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + alpha edit recreate-private-keys \ + --output-dir=/opt/charon/output + + # Verify ceremony output + if [ -f "$OUTPUT_DIR/cluster-lock.json" ]; then + log_info "Ceremony completed successfully!" + log_info "New cluster-lock.json generated in $OUTPUT_DIR/" + else + log_error "Ceremony may have failed - no cluster-lock.json in $OUTPUT_DIR/" + exit 1 + fi +else + echo " [DRY-RUN] docker run --rm -it ... charon alpha edit recreate-private-keys --output-dir=output" +fi + +echo "" + +# Step 3: Update ASDB pubkeys +log_step "Step 3: Updating anti-slashing database pubkeys..." + +run_cmd "$SCRIPT_DIR/../vc/update-anti-slashing-db.sh" \ + "$ASDB_EXPORT_DIR/slashing-protection.json" \ + ".charon/cluster-lock.json" \ + "$OUTPUT_DIR/cluster-lock.json" + +log_info "Anti-slashing database pubkeys updated" + +echo "" + +# Step 4: Stop containers +log_step "Step 4: Stopping containers..." + +run_cmd docker compose stop "$VC" charon + +log_info "Containers stopped" + +echo "" + +# Step 5: Backup and replace .charon +log_step "Step 5: Backing up and replacing .charon directory..." + +TIMESTAMP=$(date +%Y%m%d_%H%M%S) +mkdir -p "$BACKUP_DIR" + +run_cmd mv .charon "$BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info "Current .charon backed up to $BACKUP_DIR/.charon-backup.$TIMESTAMP" + +run_cmd mv "$OUTPUT_DIR" .charon +log_info "New keys installed to .charon/" + +echo "" + +# Step 6: Import updated ASDB +log_step "Step 6: Importing updated anti-slashing database..." + +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/import_asdb.sh" \ + --input-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database imported" + +echo "" + +# Step 7: Restart containers +log_step "Step 7: Restarting containers..." 
+ +run_cmd docker compose up -d charon "$VC" + +log_info "Containers restarted" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Recreate Private Keys Workflow COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Old .charon backed up to: $BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info " - New keys installed in: .charon/" +log_info " - Anti-slashing database updated and imported" +log_info " - Containers restarted: charon, $VC" +echo "" +log_info "Next steps:" +log_info " 1. Check charon logs: docker compose logs -f charon" +log_info " 2. Verify all nodes connected and healthy" +log_info " 3. Verify cluster is producing attestations" +log_info " 4. Check no signature verification errors in logs" +echo "" +log_warn "Keep the backup until you've verified normal operation for several epochs." +echo "" diff --git a/scripts/edit/remove-operators/README.md b/scripts/edit/remove-operators/README.md new file mode 100644 index 00000000..cd43ea2e --- /dev/null +++ b/scripts/edit/remove-operators/README.md @@ -0,0 +1,100 @@ +# Remove-Operators Scripts + +Scripts to automate the [remove-operators ceremony](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/remove-operators) for Charon distributed validators. + +## Overview + +These scripts help operators remove specific operators from an existing distributed validator cluster while preserving all validators. This is useful for: + +- **Operator offboarding**: Removing an operator who is leaving the cluster +- **Cluster downsizing**: Reducing the number of operators +- **Security response**: Removing a compromised operator + +**Important**: This is a coordinated ceremony. All participating operators must run their respective scripts simultaneously to complete the process. + +> **Warning**: This is an alpha feature in Charon and is not yet recommended for production use. + +There are two scripts for the two roles involved: + +- **`remaining-operator.sh`** - For operators staying in the cluster +- **`removed-operator.sh`** - For operators being removed who need to participate (only required when removal exceeds fault tolerance) + +### Fault Tolerance + +The cluster's fault tolerance is `f = operators - threshold`. When removing more operators than `f`, removed operators must participate in the ceremony by running `removed-operator.sh` with the `--participating-operator-enrs` flag. + +When the removal is within fault tolerance, removed operators simply stop their nodes after the ceremony completes. + +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set +- `.charon` directory with `cluster-lock.json` and `validator_keys` +- Docker running +- `jq` installed + +## For Remaining Operators + +Automates the complete workflow for operators staying in the cluster: + +```bash +./scripts/edit/remove-operators/remaining-operator.sh \ + --operator-enrs-to-remove "enr:-..." +``` + +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--operator-enrs-to-remove ` | Yes | Comma-separated ENRs of operators to remove | +| `--participating-operator-enrs ` | When exceeding fault tolerance | Comma-separated ENRs of all participating operators | +| `--new-threshold ` | No | Override default threshold (defaults to ceil(n * 2/3)) | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +### Workflow + +1. 
**Export ASDB** - Export anti-slashing database from running VC +2. **Run ceremony** - P2P coordinated remove-operators ceremony with all participants +3. **Update ASDB** - Replace pubkeys in exported ASDB to match new cluster-lock +4. **Stop containers** - Stop charon and VC +5. **Backup and replace** - Backup current `.charon/` to `./backups/`, install new configuration +6. **Import ASDB** - Import updated anti-slashing database +7. **Restart containers** - Start charon and VC with new configuration + +## For Removed Operators + +Only required when the removal exceeds the cluster's fault tolerance. In that case, removed operators must participate in the ceremony to provide their key shares. + +```bash +./scripts/edit/remove-operators/removed-operator.sh \ + --operator-enrs-to-remove "enr:-..." \ + --participating-operator-enrs "enr:-...,enr:-...,enr:-..." +``` + +If the removal is within fault tolerance, removed operators do **not** need to run this script - simply stop your node after the remaining operators complete the ceremony. + +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--operator-enrs-to-remove ` | Yes | Comma-separated ENRs of operators to remove | +| `--participating-operator-enrs ` | Yes | Comma-separated ENRs of ALL participating operators | +| `--new-threshold ` | No | Override default threshold (defaults to ceil(n * 2/3)) | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +## Current Limitations + +- The new cluster configuration will not be reflected on the Obol Launchpad +- The cluster will have a new cluster hash (different observability identifier) +- All remaining operators must have valid validator keys to participate +- The old cluster must be completely stopped before the new cluster can operate + +## Related + +- [Add-Validators Workflow](../add-validators/README.md) +- [Add-Operators Workflow](../add-operators/README.md) +- [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md) +- [Replace-Operator Workflow](../replace-operator/README.md) +- [Anti-Slashing DB Scripts](../vc/README.md) +- [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/remove-operators) diff --git a/scripts/edit/remove-operators/remaining-operator.sh b/scripts/edit/remove-operators/remaining-operator.sh new file mode 100755 index 00000000..de2d0663 --- /dev/null +++ b/scripts/edit/remove-operators/remaining-operator.sh @@ -0,0 +1,388 @@ +#!/usr/bin/env bash + +# Remove-Operators Script for REMAINING Operators +# +# This script automates the remove-operators ceremony for operators who are +# staying in the cluster. It handles the full workflow including ASDB +# export/update/import around the ceremony. +# +# Reference: https://docs.obol.org/next/advanced-and-troubleshooting/advanced/remove-operators +# +# IMPORTANT: This is a CEREMONY - ALL participating operators must run their +# respective scripts simultaneously. The ceremony coordinates between +# participants to generate new key shares for the reduced operator set. +# +# The workflow: +# 1. Export the current anti-slashing database +# 2. Run the remove-operators ceremony (all participating operators simultaneously) +# 3. Update the exported ASDB with new pubkeys +# 4. Stop containers +# 5. Backup and replace .charon directory +# 6. Import the updated ASDB +# 7. 
Restart containers +# +# Prerequisites: +# - .env file with NETWORK and VC variables set +# - .charon directory with cluster-lock.json and validator_keys +# - Docker and docker compose installed and running +# - VC container running (for ASDB export) +# - All participating operators must run the ceremony +# +# Usage: +# ./scripts/edit/remove-operators/remaining-operator.sh [OPTIONS] +# +# Options: +# --operator-enrs-to-remove Comma-separated ENRs of operators to remove (required) +# --participating-operator-enrs Comma-separated ENRs of participating operators +# (required when removing beyond fault tolerance) +# --new-threshold Override default threshold (defaults to ceil(n * 2/3)) +# --dry-run Show what would be done without executing +# -h, --help Show this help message + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." && pwd)}" +cd "$REPO_ROOT" + +# Default values +OPERATOR_ENRS_TO_REMOVE="" +PARTICIPATING_OPERATOR_ENRS="" +NEW_THRESHOLD="" +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" +ASDB_EXPORT_DIR="./asdb-export" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/remove-operators/remaining-operator.sh [OPTIONS] + +Automates the remove-operators ceremony for operators staying in the cluster. +All participating operators must run their respective scripts simultaneously. + +Options: + --operator-enrs-to-remove Comma-separated ENRs of operators to remove (required) + --participating-operator-enrs Comma-separated ENRs of participating operators + (required when removing beyond fault tolerance) + --new-threshold Override default threshold (defaults to ceil(n * 2/3)) + --dry-run Show what would be done without executing + -h, --help Show this help message + +Example: + # Remove one operator (within fault tolerance) + ./scripts/edit/remove-operators/remaining-operator.sh \ + --operator-enrs-to-remove "enr:-..." + + # Remove operators beyond fault tolerance (must specify participants) + ./scripts/edit/remove-operators/remaining-operator.sh \ + --operator-enrs-to-remove "enr:-...,enr:-..." \ + --participating-operator-enrs "enr:-...,enr:-...,enr:-..." + + # Remove operator with custom threshold + ./scripts/edit/remove-operators/remaining-operator.sh \ + --operator-enrs-to-remove "enr:-..." 
\ + --new-threshold 3 + +Prerequisites: + - .env file with NETWORK and VC variables set + - .charon directory with cluster-lock.json and validator_keys + - Docker and docker compose installed and running + - VC container running (for ASDB export) + - All participating operators must run the ceremony +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --operator-enrs-to-remove) + OPERATOR_ENRS_TO_REMOVE="$2" + shift 2 + ;; + --participating-operator-enrs) + PARTICIPATING_OPERATOR_ENRS="$2" + shift 2 + ;; + --new-threshold) + NEW_THRESHOLD="$2" + shift 2 + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +# Validate required arguments +if [ -z "$OPERATOR_ENRS_TO_REMOVE" ]; then + log_error "Missing required argument: --operator-enrs-to-remove" + echo "Use --help for usage information" + exit 1 +fi + +# Validate new-threshold is a positive integer if provided +if [ -n "$NEW_THRESHOLD" ] && ! [[ "$NEW_THRESHOLD" =~ ^[1-9][0-9]*$ ]]; then + log_error "Invalid --new-threshold: must be a positive integer" + exit 1 +fi + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Remove-Operators Workflow - REMAINING OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +source .env + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! -d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! -f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if [ ! -d .charon/validator_keys ]; then + log_error ".charon/validator_keys directory not found" + log_info "All remaining operators must have their current validator private key shares." + exit 1 +fi + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" +log_info " Operators to remove: ${OPERATOR_ENRS_TO_REMOVE:0:80}..." + +if [ -n "$PARTICIPATING_OPERATOR_ENRS" ]; then + log_info " Participating operators: ${PARTICIPATING_OPERATOR_ENRS:0:80}..." +fi +if [ -n "$NEW_THRESHOLD" ]; then + log_info " New threshold: $NEW_THRESHOLD" +fi + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +# Show current cluster info +if [ -f .charon/cluster-lock.json ]; then + CURRENT_VALIDATORS=$(jq '.distributed_validators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + CURRENT_OPERATORS=$(jq '.operators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + log_info " Current cluster: $CURRENT_VALIDATORS validator(s), $CURRENT_OPERATORS operator(s)" +fi + +echo "" + +# Step 1: Export anti-slashing database +log_step "Step 1: Exporting anti-slashing database..." + +# Check VC container is running (skip check in dry-run mode) +if [ "$DRY_RUN" = false ]; then + if ! 
docker compose ps "$VC" 2>/dev/null | grep -q Up; then + log_error "VC container ($VC) is not running. Start it first:" + log_error " docker compose up -d $VC" + exit 1 + fi +else + log_warn "Would check that $VC container is running" +fi + +mkdir -p "$ASDB_EXPORT_DIR" + +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/export_asdb.sh" \ + --output-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database exported to $ASDB_EXPORT_DIR/slashing-protection.json" + +echo "" + +# Step 2: Run ceremony +log_step "Step 2: Running remove-operators ceremony..." + +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL participating operators must run simultaneously ║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit remove-operators" +log_info " Operators to remove: ${OPERATOR_ENRS_TO_REMOVE:0:80}..." +log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all participants to connect..." +echo "" + +if [ "$DRY_RUN" = false ]; then + # Build Docker command arguments + DOCKER_ARGS=( + run --rm -it + -v "$REPO_ROOT/.charon:/opt/charon/.charon" + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" + alpha edit remove-operators + --operator-enrs-to-remove="$OPERATOR_ENRS_TO_REMOVE" + --output-dir=/opt/charon/output + ) + + if [ -n "$PARTICIPATING_OPERATOR_ENRS" ]; then + DOCKER_ARGS+=(--participating-operator-enrs="$PARTICIPATING_OPERATOR_ENRS") + fi + + if [ -n "$NEW_THRESHOLD" ]; then + DOCKER_ARGS+=(--new-threshold="$NEW_THRESHOLD") + fi + + docker "${DOCKER_ARGS[@]}" + + # Verify ceremony output + if [ -f "$OUTPUT_DIR/cluster-lock.json" ]; then + log_info "Ceremony completed successfully!" + NEW_VALIDATORS=$(jq '.distributed_validators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + NEW_OPERATORS=$(jq '.operators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + log_info "New cluster-lock.json generated with $NEW_VALIDATORS validator(s), $NEW_OPERATORS operator(s)" + else + log_error "Ceremony may have failed - no cluster-lock.json in $OUTPUT_DIR/" + exit 1 + fi +else + echo " [DRY-RUN] docker run --rm -it ... charon alpha edit remove-operators --operator-enrs-to-remove=... --output-dir=$OUTPUT_DIR" +fi + +echo "" + +# Step 3: Update ASDB pubkeys +log_step "Step 3: Updating anti-slashing database pubkeys..." + +run_cmd "$SCRIPT_DIR/../vc/update-anti-slashing-db.sh" \ + "$ASDB_EXPORT_DIR/slashing-protection.json" \ + ".charon/cluster-lock.json" \ + "$OUTPUT_DIR/cluster-lock.json" + +log_info "Anti-slashing database pubkeys updated" + +echo "" + +# Step 4: Stop containers +log_step "Step 4: Stopping containers..." + +run_cmd docker compose stop "$VC" charon + +log_info "Containers stopped" + +echo "" + +# Step 5: Backup and replace .charon +log_step "Step 5: Backing up and replacing .charon directory..." + +TIMESTAMP=$(date +%Y%m%d_%H%M%S) +mkdir -p "$BACKUP_DIR" + +run_cmd mv .charon "$BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info "Current .charon backed up to $BACKUP_DIR/.charon-backup.$TIMESTAMP" + +run_cmd mv "$OUTPUT_DIR" .charon +log_info "New cluster configuration installed to .charon/" + +echo "" + +# Step 6: Import updated ASDB +log_step "Step 6: Importing updated anti-slashing database..." 
+ +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/import_asdb.sh" \ + --input-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database imported" + +echo "" + +# Step 7: Restart containers +log_step "Step 7: Restarting containers..." + +run_cmd docker compose up -d charon "$VC" + +log_info "Containers restarted" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Remove-Operators Workflow COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Old .charon backed up to: $BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info " - New cluster configuration installed in: .charon/" +log_info " - Anti-slashing database updated and imported" +log_info " - Containers restarted: charon, $VC" +echo "" +log_info "Next steps:" +log_info " 1. Check charon logs: docker compose logs -f charon" +log_info " 2. Verify all remaining nodes connected and healthy" +log_info " 3. Verify cluster is producing attestations" +log_info " 4. Confirm removed operators have stopped their nodes" +echo "" +log_warn "Keep the backup until you've verified normal operation for several epochs." +echo "" +log_info "Current limitations:" +log_info " - The new configuration will not be reflected on the Obol Launchpad" +log_info " - The cluster will have a new cluster hash (different observability ID)" +echo "" diff --git a/scripts/edit/remove-operators/removed-operator.sh b/scripts/edit/remove-operators/removed-operator.sh new file mode 100755 index 00000000..0f66d349 --- /dev/null +++ b/scripts/edit/remove-operators/removed-operator.sh @@ -0,0 +1,294 @@ +#!/usr/bin/env bash + +# Remove-Operators Script for REMOVED Operators +# +# This script helps operators who are being removed from the cluster to +# participate in the remove-operators ceremony. This is only required when +# the removal exceeds the cluster's fault tolerance. +# +# Reference: https://docs.obol.org/next/advanced-and-troubleshooting/advanced/remove-operators +# +# IMPORTANT: This script is only needed when removing more operators than the +# cluster's fault tolerance (f = operators - threshold) allows. In that case, +# removed operators must participate in the ceremony to provide their key shares. +# +# If the removal is within fault tolerance, removed operators do NOT need to +# run this script - they simply stop their nodes after the ceremony. +# +# Prerequisites: +# - .env file with NETWORK and VC variables set +# - .charon directory with cluster-lock.json, charon-enr-private-key, and validator_keys +# - Docker and docker compose installed and running +# +# Usage: +# ./scripts/edit/remove-operators/removed-operator.sh [OPTIONS] +# +# Options: +# --operator-enrs-to-remove Comma-separated ENRs of operators to remove (required) +# --participating-operator-enrs Comma-separated ENRs of ALL participating operators (required) +# --new-threshold Override default threshold (defaults to ceil(n * 2/3)) +# --dry-run Show what would be done without executing +# -h, --help Show this help message + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." 
&& pwd)}" +cd "$REPO_ROOT" + +# Default values +OPERATOR_ENRS_TO_REMOVE="" +PARTICIPATING_OPERATOR_ENRS="" +NEW_THRESHOLD="" +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/remove-operators/removed-operator.sh [OPTIONS] + +Helps removed operators participate in the remove-operators ceremony. +This is only required when the removal exceeds the cluster's fault tolerance. + +If the removal is within fault tolerance, removed operators do NOT need to +run this script - simply stop your node after the remaining operators complete +the ceremony. + +Options: + --operator-enrs-to-remove Comma-separated ENRs of operators to remove (required) + --participating-operator-enrs Comma-separated ENRs of ALL participating operators (required) + --new-threshold Override default threshold (defaults to ceil(n * 2/3)) + --dry-run Show what would be done without executing + -h, --help Show this help message + +Example: + ./scripts/edit/remove-operators/removed-operator.sh \ + --operator-enrs-to-remove "enr:-..." \ + --participating-operator-enrs "enr:-...,enr:-...,enr:-..." + +Prerequisites: + - .env file with NETWORK and VC variables set + - .charon directory with cluster-lock.json, charon-enr-private-key, and validator_keys + - Docker and docker compose installed and running + - Your ENR must be listed in --participating-operator-enrs +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --operator-enrs-to-remove) + OPERATOR_ENRS_TO_REMOVE="$2" + shift 2 + ;; + --participating-operator-enrs) + PARTICIPATING_OPERATOR_ENRS="$2" + shift 2 + ;; + --new-threshold) + NEW_THRESHOLD="$2" + shift 2 + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +# Validate required arguments +if [ -z "$OPERATOR_ENRS_TO_REMOVE" ]; then + log_error "Missing required argument: --operator-enrs-to-remove" + echo "Use --help for usage information" + exit 1 +fi + +if [ -z "$PARTICIPATING_OPERATOR_ENRS" ]; then + log_error "Missing required argument: --participating-operator-enrs" + echo "Use --help for usage information" + exit 1 +fi + +# Validate new-threshold is a positive integer if provided +if [ -n "$NEW_THRESHOLD" ] && ! [[ "$NEW_THRESHOLD" =~ ^[1-9][0-9]*$ ]]; then + log_error "Invalid --new-threshold: must be a positive integer" + exit 1 +fi + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Remove-Operators Workflow - REMOVED OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +source .env + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! 
-d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! -f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if [ ! -f .charon/charon-enr-private-key ]; then + log_error ".charon/charon-enr-private-key not found" + exit 1 +fi + +if [ ! -d .charon/validator_keys ]; then + log_error ".charon/validator_keys directory not found" + exit 1 +fi + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" +log_info " Operators to remove: ${OPERATOR_ENRS_TO_REMOVE:0:80}..." +log_info " Participating operators: ${PARTICIPATING_OPERATOR_ENRS:0:80}..." + +if [ -n "$NEW_THRESHOLD" ]; then + log_info " New threshold: $NEW_THRESHOLD" +fi + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +# Show current cluster info +if [ -f .charon/cluster-lock.json ]; then + CURRENT_VALIDATORS=$(jq '.distributed_validators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + CURRENT_OPERATORS=$(jq '.operators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + log_info " Current cluster: $CURRENT_VALIDATORS validator(s), $CURRENT_OPERATORS operator(s)" +fi + +echo "" + +# Step 1: Run ceremony +log_step "Step 1: Running remove-operators ceremony..." + +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL participating operators must run simultaneously ║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit remove-operators (as removed operator)" +log_info " Operators to remove: ${OPERATOR_ENRS_TO_REMOVE:0:80}..." +log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all participants to connect..." +echo "" + +if [ "$DRY_RUN" = false ]; then + # Build Docker command arguments + DOCKER_ARGS=( + run --rm -it + -v "$REPO_ROOT/.charon:/opt/charon/.charon" + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" + alpha edit remove-operators + --operator-enrs-to-remove="$OPERATOR_ENRS_TO_REMOVE" + --participating-operator-enrs="$PARTICIPATING_OPERATOR_ENRS" + --private-key-file=/opt/charon/.charon/charon-enr-private-key + --lock-file=/opt/charon/.charon/cluster-lock.json + --validator-keys-dir=/opt/charon/.charon/validator_keys + --output-dir=/opt/charon/output + ) + + if [ -n "$NEW_THRESHOLD" ]; then + DOCKER_ARGS+=(--new-threshold="$NEW_THRESHOLD") + fi + + docker "${DOCKER_ARGS[@]}" + + log_info "Ceremony completed successfully!" +else + echo " [DRY-RUN] docker run --rm -it ... charon alpha edit remove-operators --operator-enrs-to-remove=... --participating-operator-enrs=... --output-dir=$OUTPUT_DIR" +fi + +echo "" + +# Step 2: Stop containers +log_step "Step 2: Stopping containers..." + +run_cmd docker compose stop "$VC" charon + +log_info "Containers stopped" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Removed Operator Workflow COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Ceremony participation completed" +log_info " - Containers stopped: charon, $VC" +echo "" +log_warn "You have been removed from the cluster." 
+log_warn "Your node no longer needs to run for this cluster." +echo "" +log_info "Next steps:" +log_info " 1. Confirm with remaining operators that the ceremony succeeded" +log_info " 2. Optionally clean up cluster data: rm -rf .charon data/" +log_info " 3. Optionally remove Docker resources: docker compose down -v" +echo "" diff --git a/scripts/edit/replace-operator/README.md b/scripts/edit/replace-operator/README.md new file mode 100644 index 00000000..220c56d0 --- /dev/null +++ b/scripts/edit/replace-operator/README.md @@ -0,0 +1,95 @@ +# Replace-Operator Scripts + +Scripts to automate the [replace-operator workflow](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/replace-operator) for Charon distributed validators. + +## Overview + +These scripts help operators replace a single operator in an existing distributed validator cluster. This is useful for: + +- **Operator rotation**: Replacing an operator who is leaving the cluster +- **Infrastructure migration**: Moving an operator to new infrastructure +- **Recovery**: Replacing an operator whose keys may have been compromised + +> **Warning**: This is an alpha feature in Charon and is not yet recommended for production use. + +There are two scripts for the two roles involved: + +- **`remaining-operator.sh`** - For operators staying in the cluster (runs the ceremony) +- **`new-operator.sh`** - For the new operator joining the cluster + +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set +- `.charon` directory with `cluster-lock.json` and `charon-enr-private-key` +- Docker running +- `jq` installed + +## For Remaining Operators + +Automates the complete workflow for operators staying in the cluster: + +```bash +./scripts/edit/replace-operator/remaining-operator.sh \ + --new-enr "enr:-..." \ + --operator-index 2 +``` + +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--new-enr ` | Yes | ENR of the new operator | +| `--operator-index ` | Yes | Index of operator being replaced | +| `--skip-export` | No | Skip ASDB export if already done | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +### Workflow + +1. **Export ASDB** - Export anti-slashing database from running VC +2. **Run ceremony** - Execute `charon edit replace-operator` with new ENR +3. **Update ASDB** - Replace pubkeys in exported ASDB to match new cluster-lock +4. **Stop containers** - Stop charon and VC +5. **Backup and replace** - Backup old cluster-lock, install new one +6. **Import ASDB** - Import updated anti-slashing database +7. **Restart containers** - Start charon and VC with new configuration + +## For New Operators + +Two-step workflow for the new operator joining the cluster. 
+ +**Step 1:** Generate ENR and share with remaining operators: + +```bash +./scripts/edit/replace-operator/new-operator.sh --generate-enr +``` + +**Step 2:** After receiving cluster-lock from remaining operators: + +```bash +./scripts/edit/replace-operator/new-operator.sh --cluster-lock ./received-cluster-lock.json +``` + +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--cluster-lock ` | No | Path to new cluster-lock.json | +| `--generate-enr` | No | Generate new ENR private key | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +## Current Limitations + +- The new cluster configuration will not be reflected on the Obol Launchpad +- The cluster will have a new cluster hash (different observability identifier) +- Only one operator can be replaced at a time + +## Related + +- [Add-Validators Workflow](../add-validators/README.md) +- [Add-Operators Workflow](../add-operators/README.md) +- [Remove-Operators Workflow](../remove-operators/README.md) +- [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md) +- [Anti-Slashing DB Scripts](../vc/README.md) +- [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/replace-operator) diff --git a/scripts/edit/replace-operator/new-operator.sh b/scripts/edit/replace-operator/new-operator.sh new file mode 100755 index 00000000..c5fb5ae7 --- /dev/null +++ b/scripts/edit/replace-operator/new-operator.sh @@ -0,0 +1,352 @@ +#!/usr/bin/env bash + +# Replace-Operator Workflow Script for NEW Operator +# +# This script helps a new operator join an existing cluster after a +# replace-operator ceremony has been completed by the remaining operators. +# +# Prerequisites (before running this script): +# 1. Generate your ENR private key: +# docker run --rm -v "$(pwd)/.charon:/opt/charon/.charon" obolnetwork/charon:latest create enr +# +# 2. Share your ENR (found in .charon/charon-enr-private-key.pub or printed by the command) +# with the remaining operators so they can run the ceremony. +# +# 3. Receive the new cluster-lock.json from the remaining operators after +# they complete the ceremony. +# +# The workflow: +# 1. Verify prerequisites (.charon folder, private key, cluster-lock) +# 2. Stop any running containers +# 3. Place the new cluster-lock.json (if not already in place) +# 4. Start charon and VC containers +# +# Usage: +# ./scripts/edit/replace-operator/new-operator.sh [OPTIONS] +# +# Options: +# --cluster-lock Path to the new cluster-lock.json file (optional if already in .charon) +# --generate-enr Generate a new ENR private key if not present +# --dry-run Show what would be done without executing +# -h, --help Show this help message +# +# Examples: +# # Generate ENR first (share the output with remaining operators) +# ./scripts/edit/replace-operator/new-operator.sh --generate-enr +# +# # After receiving cluster-lock, join the cluster +# ./scripts/edit/replace-operator/new-operator.sh --cluster-lock ./received-cluster-lock.json + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." 
&& pwd)}" +cd "$REPO_ROOT" + +# Default values +CLUSTER_LOCK_PATH="" +GENERATE_ENR=false +DRY_RUN=false + +# Output directories +BACKUP_DIR="./backups" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/replace-operator/new-operator.sh [OPTIONS] + +Helps a new operator join an existing cluster after a replace-operator +ceremony has been completed by the remaining operators. + +Options: + --cluster-lock Path to the new cluster-lock.json file + --generate-enr Generate a new ENR private key if not present + --dry-run Show what would be done without executing + -h, --help Show this help message + +Examples: + # Step 1: Generate ENR and share with remaining operators + ./scripts/edit/replace-operator/new-operator.sh --generate-enr + + # Step 2: After receiving cluster-lock, join the cluster + ./scripts/edit/replace-operator/new-operator.sh --cluster-lock ./received-cluster-lock.json + +Prerequisites: + - .env file with NETWORK and VC variables set + - For --generate-enr: Docker installed + - For joining: .charon/charon-enr-private-key must exist +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --cluster-lock) + CLUSTER_LOCK_PATH="$2" + shift 2 + ;; + --generate-enr) + GENERATE_ENR=true + shift + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Replace-Operator Workflow - NEW OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +source .env + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +echo "" + +# Step 1: Handle ENR generation +if [ "$GENERATE_ENR" = true ]; then + log_step "Step 1: Generating ENR private key..." + + if [ -f .charon/charon-enr-private-key ]; then + log_warn "ENR private key already exists at .charon/charon-enr-private-key" + log_warn "Skipping generation to avoid overwriting existing key." + log_info "If you want to generate a new key, remove the existing file first." + else + mkdir -p .charon + + if [ "$DRY_RUN" = false ]; then + docker run --rm \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + create enr + else + echo " [DRY-RUN] docker run --rm ... 
charon create enr" + fi + + log_info "ENR private key generated" + fi + + if [ -f .charon/charon-enr-private-key ]; then + echo "" + log_warn "╔════════════════════════════════════════════════════════════════╗" + log_warn "║ SHARE YOUR ENR WITH THE REMAINING OPERATORS ║" + log_warn "╚════════════════════════════════════════════════════════════════╝" + echo "" + + # Extract and display the ENR + if [ -f .charon/charon-enr-private-key ]; then + ENR=$(docker run --rm \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + enr 2>/dev/null || echo "") + + if [ -n "$ENR" ]; then + log_info "Your ENR:" + echo "" + echo "$ENR" + echo "" + fi + fi + + log_info "Send this ENR to the remaining operators." + log_info "They will use it with: --new-enr \"\"" + log_info "" + log_info "After they complete the ceremony, run this script again with:" + log_info " ./scripts/edit/replace-operator/new-operator.sh --cluster-lock " + fi + + exit 0 +fi + +# Step 1: Check prerequisites +log_step "Step 1: Checking prerequisites..." + +if [ "$DRY_RUN" = false ]; then + if [ ! -d .charon ]; then + log_error ".charon directory not found" + log_info "First generate your ENR with: ./scripts/edit/replace-operator/new-operator.sh --generate-enr" + exit 1 + fi + + if [ ! -f .charon/charon-enr-private-key ]; then + log_error ".charon/charon-enr-private-key not found" + log_info "First generate your ENR with: ./scripts/edit/replace-operator/new-operator.sh --generate-enr" + exit 1 + fi +else + if [ ! -d .charon ]; then + log_warn "Would check for .charon directory (not found)" + fi + if [ ! -f .charon/charon-enr-private-key ]; then + log_warn "Would check for .charon/charon-enr-private-key (not found)" + fi +fi + +# Handle cluster-lock +if [ -n "$CLUSTER_LOCK_PATH" ]; then + if [ "$DRY_RUN" = false ] && [ ! -f "$CLUSTER_LOCK_PATH" ]; then + log_error "Cluster-lock file not found: $CLUSTER_LOCK_PATH" + exit 1 + fi + log_info "Using provided cluster-lock: $CLUSTER_LOCK_PATH" +elif [ -f .charon/cluster-lock.json ]; then + log_info "Using existing cluster-lock: .charon/cluster-lock.json" +elif [ "$DRY_RUN" = true ]; then + log_warn "Would need cluster-lock.json (not found)" +else + log_error "No cluster-lock.json found" + log_info "Provide the path to the new cluster-lock.json with:" + log_info " ./scripts/edit/replace-operator/new-operator.sh --cluster-lock " + exit 1 +fi + +log_info "Prerequisites OK" + +echo "" + +# Step 2: Stop any running containers +log_step "Step 2: Stopping any running containers..." + +# Stop containers if running (ignore errors if not running) +run_cmd docker compose stop "$VC" charon 2>/dev/null || true + +log_info "Containers stopped" + +echo "" + +# Step 3: Install cluster-lock if provided +if [ -n "$CLUSTER_LOCK_PATH" ]; then + log_step "Step 3: Installing new cluster-lock..." + + if [ -f .charon/cluster-lock.json ]; then + TIMESTAMP=$(date +%Y%m%d_%H%M%S) + mkdir -p "$BACKUP_DIR" + run_cmd cp .charon/cluster-lock.json "$BACKUP_DIR/cluster-lock.json.$TIMESTAMP" + log_info "Old cluster-lock backed up to $BACKUP_DIR/cluster-lock.json.$TIMESTAMP" + fi + + run_cmd cp "$CLUSTER_LOCK_PATH" .charon/cluster-lock.json + log_info "New cluster-lock installed" +else + log_step "Step 3: Using existing cluster-lock..." + log_info "cluster-lock.json already in place" +fi + +echo "" + +# Step 4: Verify cluster-lock matches our ENR +log_step "Step 4: Verifying cluster-lock configuration..." 
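+# Best-effort sanity check: look for a prefix of this node's ENR in the lock file.
+# This does not cryptographically verify the lock; it only catches an obviously wrong file.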
+ +if [ "$DRY_RUN" = false ] && [ -f .charon/cluster-lock.json ]; then + # Get our ENR + OUR_ENR=$(docker run --rm \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + enr 2>/dev/null || echo "") + + if [ -n "$OUR_ENR" ]; then + # Check if our ENR is in the cluster-lock + if grep -q "${OUR_ENR:0:50}" .charon/cluster-lock.json 2>/dev/null; then + log_info "Verified: Your ENR is present in the cluster-lock" + else + log_warn "Your ENR may not be in this cluster-lock." + log_warn "Make sure you received the correct cluster-lock from the remaining operators." + fi + fi + + # Show cluster info + NUM_VALIDATORS=$(jq '.distributed_validators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + NUM_OPERATORS=$(jq '.operators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + log_info "Cluster info: $NUM_VALIDATORS validator(s), $NUM_OPERATORS operator(s)" +fi + +echo "" + +# Step 5: Start containers +log_step "Step 5: Starting containers..." + +run_cmd docker compose up -d charon "$VC" + +log_info "Containers started" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ New Operator Setup COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Cluster-lock installed in: .charon/cluster-lock.json" +log_info " - Containers started: charon, $VC" +echo "" +log_info "Next steps:" +log_info " 1. Wait for charon to sync with peers: docker compose logs -f charon" +log_info " 2. Verify VC is running: docker compose logs -f $VC" +log_info " 3. Monitor validator duties once synced" +echo "" +log_warn "Note: As a new operator, you do NOT have any slashing protection history." +log_warn "Your VC will start fresh. Ensure all remaining operators have completed" +log_warn "their replace-operator workflow before validators resume duties." +echo "" diff --git a/scripts/edit/replace-operator/remaining-operator.sh b/scripts/edit/replace-operator/remaining-operator.sh new file mode 100755 index 00000000..c1623d5c --- /dev/null +++ b/scripts/edit/replace-operator/remaining-operator.sh @@ -0,0 +1,328 @@ +#!/usr/bin/env bash + +# Replace-Operator Workflow Script for REMAINING Operators +# +# This script automates the complete replace-operator workflow for operators +# who are staying in the cluster (continuing operators). +# +# The workflow: +# 1. Export the current anti-slashing database +# 2. Run the replace-operator ceremony (charon edit replace-operator) +# 3. Update the exported ASDB with new pubkeys +# 4. Stop charon and VC containers +# 5. Backup and replace the cluster-lock +# 6. Import the updated ASDB +# 7. Restart all containers +# +# Prerequisites: +# - .env file with NETWORK and VC variables set +# - .charon directory with cluster-lock.json and charon-enr-private-key +# - Docker and docker compose installed and running +# - VC container running (for initial export) +# +# Usage: +# ./scripts/edit/replace-operator/remaining-operator.sh [OPTIONS] +# +# Options: +# --new-enr ENR of the new operator (required) +# --operator-index Index of the operator being replaced (required) +# --skip-export Skip ASDB export (if already exported) +# --dry-run Show what would be done without executing +# -h, --help Show this help message +# +# Example: +# ./scripts/edit/replace-operator/remaining-operator.sh \ +# --new-enr "enr:-..." 
\ +# --operator-index 2 + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." && pwd)}" +cd "$REPO_ROOT" + +# Default values +NEW_ENR="" +OPERATOR_INDEX="" +SKIP_EXPORT=false +DRY_RUN=false + +# Output directories +ASDB_EXPORT_DIR="./asdb-export" +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/replace-operator/remaining-operator.sh [OPTIONS] + +Automates the complete replace-operator workflow for operators +who are staying in the cluster (continuing operators). + +Options: + --new-enr ENR of the new operator (required) + --operator-index Index of the operator being replaced (required) + --skip-export Skip ASDB export (if already exported) + --dry-run Show what would be done without executing + -h, --help Show this help message + +Example: + ./scripts/edit/replace-operator/remaining-operator.sh \ + --new-enr "enr:-..." \ + --operator-index 2 + +Prerequisites: + - .env file with NETWORK and VC variables set + - .charon directory with cluster-lock.json and charon-enr-private-key + - Docker and docker compose installed and running + - VC container running (for initial export) +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --new-enr) + NEW_ENR="$2" + shift 2 + ;; + --operator-index) + OPERATOR_INDEX="$2" + shift 2 + ;; + --skip-export) + SKIP_EXPORT=true + shift + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +# Validate required arguments +if [ -z "$NEW_ENR" ]; then + log_error "Missing required argument: --new-enr" + echo "Use --help for usage information" + exit 1 +fi +if [ -z "$OPERATOR_INDEX" ]; then + log_error "Missing required argument: --operator-index" + echo "Use --help for usage information" + exit 1 +fi + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Replace-Operator Workflow - REMAINING OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +source .env + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! -d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! -f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if [ ! -f .charon/charon-enr-private-key ]; then + log_error ".charon/charon-enr-private-key not found" + exit 1 +fi + +if ! 
docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +echo "" + +# Step 1: Export anti-slashing database +log_step "Step 1: Exporting anti-slashing database..." + +if [ "$SKIP_EXPORT" = true ]; then + log_warn "Skipping export (--skip-export specified)" + if [ ! -f "$ASDB_EXPORT_DIR/slashing-protection.json" ]; then + log_error "Cannot skip export: $ASDB_EXPORT_DIR/slashing-protection.json not found" + exit 1 + fi +else + # Check VC container is running (skip check in dry-run mode) + if [ "$DRY_RUN" = false ]; then + if ! docker compose ps "$VC" 2>/dev/null | grep -q Up; then + log_error "VC container ($VC) is not running. Start it first:" + log_error " docker compose up -d $VC" + exit 1 + fi + else + log_warn "Would check that $VC container is running" + fi + + mkdir -p "$ASDB_EXPORT_DIR" + + VC="$VC" run_cmd "$SCRIPT_DIR/../vc/export_asdb.sh" \ + --output-file "$ASDB_EXPORT_DIR/slashing-protection.json" + + log_info "Anti-slashing database exported to $ASDB_EXPORT_DIR/slashing-protection.json" +fi + +echo "" + +# Step 2: Run replace-operator ceremony +log_step "Step 2: Running replace-operator ceremony..." + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon edit replace-operator" +log_info " Replacing operator index: $OPERATOR_INDEX" +log_info " New ENR: ${NEW_ENR:0:50}..." + +if [ "$DRY_RUN" = false ]; then + docker run --rm \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.8.2}" \ + edit replace-operator \ + --lock-file=/opt/charon/.charon/cluster-lock.json \ + --output-dir=/opt/charon/output \ + --operator-index="$OPERATOR_INDEX" \ + --new-enr="$NEW_ENR" +else + echo " [DRY-RUN] docker run --rm ... charon edit replace-operator ..." +fi + +log_info "New cluster-lock generated at $OUTPUT_DIR/cluster-lock.json" + +echo "" + +# Step 3: Update ASDB pubkeys +log_step "Step 3: Updating anti-slashing database pubkeys..." + +run_cmd "$SCRIPT_DIR/../vc/update-anti-slashing-db.sh" \ + "$ASDB_EXPORT_DIR/slashing-protection.json" \ + ".charon/cluster-lock.json" \ + "$OUTPUT_DIR/cluster-lock.json" + +log_info "Anti-slashing database pubkeys updated" + +echo "" + +# Step 4: Stop containers +log_step "Step 4: Stopping charon and VC containers..." + +run_cmd docker compose stop "$VC" charon + +log_info "Containers stopped" + +echo "" + +# Step 5: Backup and replace cluster-lock +log_step "Step 5: Backing up and replacing cluster-lock..." + +TIMESTAMP=$(date +%Y%m%d_%H%M%S) +mkdir -p "$BACKUP_DIR" + +run_cmd cp .charon/cluster-lock.json "$BACKUP_DIR/cluster-lock.json.$TIMESTAMP" +log_info "Old cluster-lock backed up to $BACKUP_DIR/cluster-lock.json.$TIMESTAMP" + +run_cmd cp "$OUTPUT_DIR/cluster-lock.json" .charon/cluster-lock.json +log_info "New cluster-lock installed" + +echo "" + +# Step 6: Import updated ASDB +log_step "Step 6: Importing updated anti-slashing database..." + +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/import_asdb.sh" \ + --input-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database imported" + +echo "" + +# Step 7: Restart containers +log_step "Step 7: Restarting containers..." 
+ +run_cmd docker compose up -d charon "$VC" + +log_info "Containers restarted" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Replace-Operator Workflow COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Old cluster-lock backed up to: $BACKUP_DIR/cluster-lock.json.$TIMESTAMP" +log_info " - New cluster-lock installed in: .charon/cluster-lock.json" +log_info " - Anti-slashing database updated and imported" +log_info " - Containers restarted: charon, $VC" +echo "" +log_info "Next steps:" +log_info " 1. Verify charon is syncing with peers: docker compose logs -f charon" +log_info " 2. Verify VC is running: docker compose logs -f $VC" +log_info " 3. Share the new cluster-lock.json with the NEW operator" +echo "" +log_warn "Keep the backup until you've verified normal operation for several epochs." +echo "" diff --git a/scripts/edit/test/README.md b/scripts/edit/test/README.md new file mode 100644 index 00000000..08abf7a4 --- /dev/null +++ b/scripts/edit/test/README.md @@ -0,0 +1,51 @@ +# E2E Integration Tests for Edit Scripts + +End-to-end tests that verify the cluster edit scripts work correctly across the full workflow. + +## Prerequisites + +- **Docker** running locally +- **jq** installed +- **Internet access** (charon ceremonies use the Obol P2P relay) + +## Running + +```bash +./scripts/edit/test/e2e_test.sh +``` + +Override the charon version: + +```bash +CHARON_VERSION=v1.8.2 ./scripts/edit/test/e2e_test.sh +``` + +## What It Tests + +| # | Test | Type | Description | +|---|------|------|-------------| +| 1 | add-validators | P2P ceremony (4 ops) | Adds 1 validator to a 4-operator, 1-validator cluster. Verifies 2 validators in output. | +| 2 | recreate-private-keys | P2P ceremony (4 ops) | Refreshes key shares. Verifies public_shares changed, same validator count. | +| 3 | add-operators | P2P ceremony (4+1 ops) | Adds 1 new operator. Verifies 5 operators in output. | +| 4 | remove-operators | P2P ceremony (4 of 5 ops) | Removes the added operator. Verifies 4 operators in output. | +| 5 | replace-operator | Offline (sequential) | Replaces operator 0. Verifies ENR changed in output. | +| 6 | update-anti-slashing-db | Standalone (no Docker) | Transforms EIP-3076 pubkeys between cluster-locks. | + +## How It Works + +1. Creates a real test cluster using `charon create cluster` (4 nodes, 1 validator) +2. Sets up 4 operator work directories with `.charon/` and `.env` +3. Interposes a **mock docker wrapper** (`test/bin/docker`) on `PATH` + - Real `docker run` is used for charon ceremony commands (P2P relay) + - `docker compose` commands are mocked (container lifecycle, ASDB export/import) + - `edit replace-operator` is mocked locally (the command does not yet exist in charon v1.8.2) +4. Runs each edit script through its happy path +5. Verifies outputs (validator count, operator count, key changes) at each step + +## WORK_DIR Environment Variable + +The test uses the `WORK_DIR` environment variable to redirect each script's working directory. When set, scripts use `WORK_DIR` as their repo root instead of computing it relative to the script location. This allows running multiple operator instances from isolated directories. + +## Expected Runtime + +Approximately 2-5 minutes depending on P2P relay connectivity. The P2P ceremonies (tests 1-4) require all operators to connect through the relay simultaneously. 
diff --git a/scripts/edit/test/bin/docker b/scripts/edit/test/bin/docker new file mode 100755 index 00000000..806c7e78 --- /dev/null +++ b/scripts/edit/test/bin/docker @@ -0,0 +1,288 @@ +#!/usr/bin/env bash + +# Mock docker wrapper for E2E testing of edit scripts. +# +# This script intercepts docker and docker compose commands: +# - Real docker is used for `docker info` and `docker run` (charon ceremonies) +# - Docker compose commands are mocked (container lifecycle, ASDB export/import) +# +# Environment variables: +# REAL_DOCKER - Path to real docker binary (required) +# MOCK_STATE_DIR - Directory for service state tracking (required for compose) +# MOCK_OPERATOR_INDEX - Operator index for ASDB pubkey generation (required for compose cp) + +set -euo pipefail + +# --- Helpers --- + +strip_tty_flags() { + local args=() + local skip_next=false + for arg in "$@"; do + if [ "$skip_next" = true ]; then + skip_next=false + continue + fi + case "$arg" in + -it|-ti) ;; # combined flags + -i|-t) ;; # individual flags + --interactive|--tty) ;; + *) args+=("$arg") ;; + esac + done + echo "${args[@]}" +} + +state_file() { + echo "${MOCK_STATE_DIR}/services.state" +} + +mark_service() { + local svc="$1" status="$2" + local sf + sf="$(state_file)" + mkdir -p "$(dirname "$sf")" + touch "$sf" + if grep -q "^${svc}=" "$sf" 2>/dev/null; then + # Use a temp file for portable in-place edit + local tmp="${sf}.tmp" + while IFS= read -r line; do + if [[ "$line" == "${svc}="* ]]; then + echo "${svc}=${status}" + else + echo "$line" + fi + done < "$sf" > "$tmp" + mv "$tmp" "$sf" + else + echo "${svc}=${status}" >> "$sf" + fi +} + +is_service_up() { + local svc="$1" + local sf + sf="$(state_file)" + [ -f "$sf" ] && grep -q "^${svc}=running" "$sf" 2>/dev/null +} + +generate_eip3076() { + local dest="$1" + local operator_index="${MOCK_OPERATOR_INDEX:-0}" + + # Find cluster-lock.json in cwd + local lock_file="./charon/cluster-lock.json" + if [ ! -f "$lock_file" ]; then + lock_file="./.charon/cluster-lock.json" + fi + + if [ ! -f "$lock_file" ]; then + # Fallback: generate minimal valid EIP-3076 + cat > "$dest" <<'FALLBACK' +{"metadata":{"interchange_format_version":"5","genesis_validators_root":"0x0000000000000000000000000000000000000000000000000000000000000000"},"data":[]} +FALLBACK + return 0 + fi + + # Extract this operator's public shares from cluster-lock + local pubkeys + pubkeys=$(jq -r --argjson idx "$operator_index" \ + '[.distributed_validators[].public_shares[$idx]] | map(select(. != null)) | .[]' \ + "$lock_file" 2>/dev/null || echo "") + + if [ -z "$pubkeys" ]; then + cat > "$dest" <<'FALLBACK' +{"metadata":{"interchange_format_version":"5","genesis_validators_root":"0x0000000000000000000000000000000000000000000000000000000000000000"},"data":[]} +FALLBACK + return 0 + fi + + # Build EIP-3076 JSON with one entry per validator + local data_entries="" + local first=true + while IFS= read -r pk; do + if [ "$first" = true ]; then + first=false + else + data_entries="${data_entries}," + fi + data_entries="${data_entries}{\"pubkey\":\"${pk}\",\"signed_blocks\":[],\"signed_attestations\":[]}" + done <<< "$pubkeys" + + cat > "$dest" < + svc="${1:-}" + if [ -n "$svc" ] && is_service_up "$svc"; then + echo "NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS" + echo "${svc}-1 img cmd ${svc} 1m ago Up 1m " + fi + exit 0 + ;; + stop) + # docker compose stop ... + for svc in "$@"; do + mark_service "$svc" "stopped" + done + exit 0 + ;; + up) + # docker compose up -d ... 
+ # Skip the -d flag + for arg in "$@"; do + case "$arg" in + -d) ;; + *) mark_service "$arg" "running" ;; + esac + done + exit 0 + ;; + exec) + # docker compose exec -T (ASDB export inside container) + # Just succeed - the actual data is handled by cp + exit 0 + ;; + cp) + # docker compose cp : + # Generate EIP-3076 JSON at + local_dest="${2:-}" + if [ -z "$local_dest" ]; then + # Maybe args are container:src dest + local_dest="${2:-/dev/null}" + fi + # The arguments are: container:path localpath + # $1 = container:path, $2 = local dest + generate_eip3076 "${2:-/dev/null}" + exit 0 + ;; + run) + # docker compose run --rm ... (ASDB import) + # Just succeed + exit 0 + ;; + *) + # Unknown compose command - pass through + exec "${REAL_DOCKER}" compose "$compose_cmd" "$@" + ;; + esac + exit 0 +fi + +# Non-compose commands +case "${1:-}" in + info) + # Pass through to real docker + exec "${REAL_DOCKER}" "$@" + ;; + run) + # Pass through to real docker, but strip -i/-t flags for background execution + shift # consume "run" + cleaned_args=() + volume_mounts=() + charon_cmd="" + new_enr="" + operator_index="" + lock_file_mount="" + output_dir_mount="" + + while [ $# -gt 0 ]; do + case "$1" in + -it|-ti) ;; + -i|-t|--interactive|--tty) ;; + -v) + volume_mounts+=("$1" "$2") + cleaned_args+=("$1" "$2") + # Track volume mounts for mock replace-operator + case "$2" in + */.charon:*) lock_file_mount="${2%%:*}" ;; + *output:*) output_dir_mount="${2%%:*}" ;; + esac + shift + ;; + -v*) + # -v/path:/path format + cleaned_args+=("$1") + local_mount="${1#-v}" + case "$local_mount" in + */.charon:*) lock_file_mount="${local_mount%%:*}" ;; + *output:*) output_dir_mount="${local_mount%%:*}" ;; + esac + ;; + edit|"alpha") + # Detect charon edit/alpha edit commands + cleaned_args+=("$1") + charon_cmd="$1" + ;; + replace-operator) + charon_cmd="replace-operator" + cleaned_args+=("$1") + ;; + --new-enr=*) + new_enr="${1#--new-enr=}" + cleaned_args+=("$1") + ;; + --operator-index=*) + operator_index="${1#--operator-index=}" + cleaned_args+=("$1") + ;; + *) cleaned_args+=("$1") ;; + esac + shift + done + + # Check if this is a replace-operator command (mock it since it doesn't exist in v1.8.2) + if [ "$charon_cmd" = "replace-operator" ] && [ -n "$new_enr" ] && [ -n "$operator_index" ]; then + # Find the cluster-lock via volume mounts + local_lock="" + local_output="" + for ((idx=0; idx<${#cleaned_args[@]}; idx++)); do + case "${cleaned_args[$idx]}" in + -v) + next_idx=$((idx+1)) + mount="${cleaned_args[$next_idx]:-}" + host_path="${mount%%:*}" + container_path="${mount#*:}" + container_path="${container_path%%:*}" + case "$container_path" in + */\.charon|*/.charon) local_lock="$host_path/cluster-lock.json" ;; + */output) local_output="$host_path" ;; + esac + ;; + esac + done + + if [ -n "$local_lock" ] && [ -f "$local_lock" ] && [ -n "$local_output" ]; then + mkdir -p "$local_output" + # Replace operator ENR at the given index using jq + jq --argjson idx "$operator_index" --arg enr "$new_enr" \ + '.cluster_definition.operators[$idx].enr = $enr' \ + "$local_lock" > "$local_output/cluster-lock.json" + exit 0 + else + echo "Mock replace-operator: cannot find cluster-lock or output dir" >&2 + exit 1 + fi + fi + + exec "${REAL_DOCKER}" run "${cleaned_args[@]}" + ;; + *) + # Pass through everything else + exec "${REAL_DOCKER}" "$@" + ;; +esac diff --git a/scripts/edit/test/e2e_test.sh b/scripts/edit/test/e2e_test.sh new file mode 100755 index 00000000..e1c67d5b --- /dev/null +++ b/scripts/edit/test/e2e_test.sh @@ -0,0 
+1,772 @@ +#!/usr/bin/env bash + +# E2E Integration Test for Cluster Edit Scripts +# +# This test creates a real cluster using charon and runs each edit script +# through its happy path. Real Docker is used for charon ceremony commands; +# docker compose (container lifecycle, ASDB) is mocked. +# +# Prerequisites: +# - Docker running +# - jq installed +# - Internet access (charon uses Obol relay for P2P ceremonies) +# +# Usage: +# ./scripts/edit/test/e2e_test.sh + +set -euo pipefail + +# --- Configuration --- + +TEST_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$TEST_DIR/../../.." && pwd)" +CHARON_VERSION="${CHARON_VERSION:-v1.8.2}" +CHARON_IMAGE="obolnetwork/charon:${CHARON_VERSION}" +NUM_OPERATORS=4 +ZERO_ADDR="0x0000000000000000000000000000000000000001" + +# Colors +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +# Counters +TESTS_RUN=0 +TESTS_PASSED=0 +TESTS_FAILED=0 + +# --- Helpers --- + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_test() { echo -e "${BLUE}[TEST]${NC} $1"; } + +assert_eq() { + local desc="$1" expected="$2" actual="$3" + if [ "$expected" = "$actual" ]; then + log_info " PASS: $desc (got $actual)" + return 0 + else + log_error " FAIL: $desc - expected '$expected', got '$actual'" + return 1 + fi +} + +assert_ne() { + local desc="$1" not_expected="$2" actual="$3" + if [ "$not_expected" != "$actual" ]; then + log_info " PASS: $desc (values differ)" + return 0 + else + log_error " FAIL: $desc - expected different from '$not_expected', but got same" + return 1 + fi +} + +run_test() { + local name="$1" + shift + TESTS_RUN=$((TESTS_RUN + 1)) + echo "" + echo "================================================================" + log_test "TEST $TESTS_RUN: $name" + echo "================================================================" + echo "" + if "$@"; then + TESTS_PASSED=$((TESTS_PASSED + 1)) + log_info "TEST $TESTS_RUN PASSED: $name" + else + TESTS_FAILED=$((TESTS_FAILED + 1)) + log_error "TEST $TESTS_RUN FAILED: $name" + fi +} + +# --- Setup --- + +TMP_DIR="" +cleanup() { + if [ -n "$TMP_DIR" ] && [ -d "$TMP_DIR" ]; then + rm -rf "$TMP_DIR" + log_info "Cleaned up $TMP_DIR" + fi +} +trap cleanup EXIT + +check_prerequisites() { + log_info "Checking prerequisites..." + + if ! command -v jq &>/dev/null; then + log_error "jq is required but not installed" + exit 1 + fi + + if ! docker info &>/dev/null; then + log_error "Docker is not running" + exit 1 + fi + + log_info "Pulling charon image: $CHARON_IMAGE" + docker pull "$CHARON_IMAGE" >/dev/null 2>&1 || true + + log_info "Prerequisites OK" +} + +setup_tmp_dir() { + TMP_DIR=$(mktemp -d) + log_info "Working directory: $TMP_DIR" +} + +create_cluster() { + log_info "Creating test cluster with $NUM_OPERATORS nodes, 1 validator..." + + local cluster_dir="$TMP_DIR/cluster" + mkdir -p "$cluster_dir" + + docker run --rm \ + -v "$cluster_dir:/opt/charon/.charon" \ + "$CHARON_IMAGE" \ + create cluster \ + --nodes="$NUM_OPERATORS" \ + --num-validators=1 \ + --network=hoodi \ + --withdrawal-addresses="$ZERO_ADDR" \ + --fee-recipient-addresses="$ZERO_ADDR" \ + --cluster-dir=/opt/charon/.charon + + # Verify cluster was created + if [ ! 
-d "$cluster_dir/node0" ]; then + log_error "Cluster creation failed - no node0 directory" + exit 1 + fi + + log_info "Cluster created successfully" + + # Set up operator work directories + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + mkdir -p "$op_dir" + + # Copy node contents to operator's .charon directory + cp -r "$cluster_dir/node${i}" "$op_dir/.charon" + + # Create .env file + cat > "$op_dir/.env" < "$op_dir/services.state" + echo "vc-lodestar=running" >> "$op_dir/services.state" + + log_info " Operator $i set up at $op_dir" + done +} + +setup_mock_docker() { + export REAL_DOCKER + REAL_DOCKER="$(which docker)" + export PATH="$TEST_DIR/bin:$PATH" + + log_info "Mock docker enabled (real docker at $REAL_DOCKER)" +} + +# --- Test Functions --- + +test_add_validators() { + log_info "Running add-validators ceremony (4 operators in parallel)..." + + local pids=() + local logs_dir="$TMP_DIR/logs/add-validators" + mkdir -p "$logs_dir" + + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + ( + WORK_DIR="$op_dir" \ + MOCK_OPERATOR_INDEX="$i" \ + MOCK_STATE_DIR="$op_dir" \ + "$REPO_ROOT/scripts/edit/add-validators/add-validators.sh" \ + --num-validators 1 \ + --withdrawal-addresses "$ZERO_ADDR" \ + --fee-recipient-addresses "$ZERO_ADDR" + ) > "$logs_dir/operator${i}.log" 2>&1 & + pids+=($!) + done + + # Wait for all operators + local all_ok=true + for i in "${!pids[@]}"; do + if ! wait "${pids[$i]}"; then + log_error "Operator $i failed. Log:" + cat "$logs_dir/operator${i}.log" || true + all_ok=false + fi + done + + if [ "$all_ok" = false ]; then + return 1 + fi + + # Verify: each operator should have a cluster-lock with 2 validators + local ok=true + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + local lock="$op_dir/.charon/cluster-lock.json" + + if [ ! -f "$lock" ]; then + log_error "Operator $i: cluster-lock.json not found" + ok=false + continue + fi + + local num_vals + num_vals=$(jq '.distributed_validators | length' "$lock") + assert_eq "Operator $i has 2 validators" "2" "$num_vals" || ok=false + + local num_ops + num_ops=$(jq '.cluster_definition.operators | length' "$lock") + assert_eq "Operator $i has $NUM_OPERATORS operators" "$NUM_OPERATORS" "$num_ops" || ok=false + done + + [ "$ok" = true ] +} + +test_recreate_private_keys() { + log_info "Running recreate-private-keys ceremony (4 operators in parallel)..." + + # Save current public_shares for comparison + local old_shares + old_shares=$(jq -r '.distributed_validators[0].public_shares[0]' \ + "$TMP_DIR/operator0/.charon/cluster-lock.json") + + local pids=() + local logs_dir="$TMP_DIR/logs/recreate-private-keys" + mkdir -p "$logs_dir" + + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + # Reset service state to running for this test + echo "charon=running" > "$op_dir/services.state" + echo "vc-lodestar=running" >> "$op_dir/services.state" + ( + WORK_DIR="$op_dir" \ + MOCK_OPERATOR_INDEX="$i" \ + MOCK_STATE_DIR="$op_dir" \ + "$REPO_ROOT/scripts/edit/recreate-private-keys/recreate-private-keys.sh" + ) > "$logs_dir/operator${i}.log" 2>&1 & + pids+=($!) + done + + local all_ok=true + for i in "${!pids[@]}"; do + if ! wait "${pids[$i]}"; then + log_error "Operator $i failed. 
Log:" + cat "$logs_dir/operator${i}.log" || true + all_ok=false + fi + done + + if [ "$all_ok" = false ]; then + return 1 + fi + + # Verify: still 2 validators, 4 operators, but different public_shares + local ok=true + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + local lock="$op_dir/.charon/cluster-lock.json" + + if [ ! -f "$lock" ]; then + log_error "Operator $i: cluster-lock.json not found" + ok=false + continue + fi + + local num_vals + num_vals=$(jq '.distributed_validators | length' "$lock") + assert_eq "Operator $i has 2 validators" "2" "$num_vals" || ok=false + + local num_ops + num_ops=$(jq '.cluster_definition.operators | length' "$lock") + assert_eq "Operator $i has $NUM_OPERATORS operators" "$NUM_OPERATORS" "$num_ops" || ok=false + done + + # Check that public shares changed + local new_shares + new_shares=$(jq -r '.distributed_validators[0].public_shares[0]' \ + "$TMP_DIR/operator0/.charon/cluster-lock.json") + assert_ne "Public shares changed after recreate" "$old_shares" "$new_shares" || ok=false + + [ "$ok" = true ] +} + +test_add_operators() { + log_info "Running add-operators ceremony (4 existing + 1 new)..." + + # Create operator4 work directory + local new_op_dir="$TMP_DIR/operator4" + mkdir -p "$new_op_dir/.charon" + + # Generate ENR for new operator + log_info " Generating ENR for new operator..." + "$REAL_DOCKER" run --rm \ + -v "$new_op_dir/.charon:/opt/charon/.charon" \ + "$CHARON_IMAGE" \ + create enr + + # Extract ENR + local new_enr + new_enr=$("$REAL_DOCKER" run --rm \ + -v "$new_op_dir/.charon:/opt/charon/.charon" \ + "$CHARON_IMAGE" \ + enr 2>/dev/null) + + if [ -z "$new_enr" ]; then + log_error "Failed to get ENR for new operator" + return 1 + fi + log_info " New operator ENR: ${new_enr:0:50}..." + + # Copy cluster-lock from operator0 to operator4 + cp "$TMP_DIR/operator0/.charon/cluster-lock.json" "$new_op_dir/.charon/cluster-lock.json" + + # Create .env for new operator + cat > "$new_op_dir/.env" < "$new_op_dir/services.state" + echo "vc-lodestar=running" >> "$new_op_dir/services.state" + + # Reset service states for existing operators + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + echo "charon=running" > "$op_dir/services.state" + echo "vc-lodestar=running" >> "$op_dir/services.state" + done + + local pids=() + local logs_dir="$TMP_DIR/logs/add-operators" + mkdir -p "$logs_dir" + + # Run existing operators + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + ( + WORK_DIR="$op_dir" \ + MOCK_OPERATOR_INDEX="$i" \ + MOCK_STATE_DIR="$op_dir" \ + "$REPO_ROOT/scripts/edit/add-operators/existing-operator.sh" \ + --new-operator-enrs "$new_enr" + ) > "$logs_dir/operator${i}.log" 2>&1 & + pids+=($!) + done + + # Run new operator + ( + WORK_DIR="$new_op_dir" \ + MOCK_OPERATOR_INDEX="$NUM_OPERATORS" \ + MOCK_STATE_DIR="$new_op_dir" \ + "$REPO_ROOT/scripts/edit/add-operators/new-operator.sh" \ + --new-operator-enrs "$new_enr" \ + --cluster-lock ".charon/cluster-lock.json" + ) > "$logs_dir/operator4.log" 2>&1 & + pids+=($!) + + # Wait for all + local all_ok=true + for i in "${!pids[@]}"; do + if ! wait "${pids[$i]}"; then + log_error "Operator $i failed. 
Log:" + cat "$logs_dir/operator${i}.log" || true + all_ok=false + fi + done + + if [ "$all_ok" = false ]; then + return 1 + fi + + # Verify: all operators should now have 5 operators in cluster-lock + local ok=true + for i in $(seq 0 "$NUM_OPERATORS"); do + local op_dir="$TMP_DIR/operator${i}" + local lock="$op_dir/.charon/cluster-lock.json" + + if [ ! -f "$lock" ]; then + log_error "Operator $i: cluster-lock.json not found" + ok=false + continue + fi + + local num_ops + num_ops=$(jq '.cluster_definition.operators | length' "$lock") + assert_eq "Operator $i has 5 operators" "5" "$num_ops" || ok=false + done + + [ "$ok" = true ] +} + +test_remove_operators() { + log_info "Running remove-operators ceremony (removing operator4, 4 remaining)..." + + # Get operator4's ENR from cluster-lock + local op4_enr + op4_enr=$(jq -r '.cluster_definition.operators[4].enr' "$TMP_DIR/operator0/.charon/cluster-lock.json") + + if [ -z "$op4_enr" ] || [ "$op4_enr" = "null" ]; then + log_error "Failed to get operator4 ENR from cluster-lock" + return 1 + fi + log_info " Operator4 ENR to remove: ${op4_enr:0:50}..." + + # Reset service states for remaining operators + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + echo "charon=running" > "$op_dir/services.state" + echo "vc-lodestar=running" >> "$op_dir/services.state" + done + + local pids=() + local logs_dir="$TMP_DIR/logs/remove-operators" + mkdir -p "$logs_dir" + + # Run remaining operators (0-3) — operator4 does NOT participate + # (within fault tolerance: 5 ops, threshold ~4, f=1) + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + ( + WORK_DIR="$op_dir" \ + MOCK_OPERATOR_INDEX="$i" \ + MOCK_STATE_DIR="$op_dir" \ + "$REPO_ROOT/scripts/edit/remove-operators/remaining-operator.sh" \ + --operator-enrs-to-remove "$op4_enr" + ) > "$logs_dir/operator${i}.log" 2>&1 & + pids+=($!) + done + + local all_ok=true + for i in "${!pids[@]}"; do + if ! wait "${pids[$i]}"; then + log_error "Operator $i failed. Log:" + cat "$logs_dir/operator${i}.log" || true + all_ok=false + fi + done + + if [ "$all_ok" = false ]; then + return 1 + fi + + # Verify: 4 operators in new cluster-lock + local ok=true + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + local lock="$op_dir/.charon/cluster-lock.json" + + if [ ! -f "$lock" ]; then + log_error "Operator $i: cluster-lock.json not found" + ok=false + continue + fi + + local num_ops + num_ops=$(jq '.cluster_definition.operators | length' "$lock") + assert_eq "Operator $i has 4 operators" "4" "$num_ops" || ok=false + done + + [ "$ok" = true ] +} + +test_replace_operator() { + log_info "Running replace-operator workflow (replacing operator0)..." + + # Create new operator work directory + local new_op_dir="$TMP_DIR/new-operator" + mkdir -p "$new_op_dir/.charon" + + # Generate ENR for replacement operator + log_info " Generating ENR for replacement operator..." + "$REAL_DOCKER" run --rm \ + -v "$new_op_dir/.charon:/opt/charon/.charon" \ + "$CHARON_IMAGE" \ + create enr + + local new_enr + new_enr=$("$REAL_DOCKER" run --rm \ + -v "$new_op_dir/.charon:/opt/charon/.charon" \ + "$CHARON_IMAGE" \ + enr 2>/dev/null) + + if [ -z "$new_enr" ]; then + log_error "Failed to get ENR for replacement operator" + return 1 + fi + log_info " Replacement operator ENR: ${new_enr:0:50}..." 
+ + # Create .env for new operator + cat > "$new_op_dir/.env" < "$op_dir/services.state" + echo "vc-lodestar=running" >> "$op_dir/services.state" + done + + # Replace-operator is OFFLINE (no P2P) — each remaining operator runs independently + local logs_dir="$TMP_DIR/logs/replace-operator" + mkdir -p "$logs_dir" + + local ok=true + for i in $(seq 1 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + log_info " Running remaining-operator.sh for operator $i..." + if ! ( + WORK_DIR="$op_dir" \ + MOCK_OPERATOR_INDEX="$i" \ + MOCK_STATE_DIR="$op_dir" \ + "$REPO_ROOT/scripts/edit/replace-operator/remaining-operator.sh" \ + --new-enr "$new_enr" \ + --operator-index 0 + ) > "$logs_dir/operator${i}.log" 2>&1; then + log_error "Operator $i failed. Log:" + cat "$logs_dir/operator${i}.log" || true + ok=false + fi + done + + if [ "$ok" = false ]; then + return 1 + fi + + # Copy output cluster-lock from operator1 to new operator's work dir + local src_lock="$TMP_DIR/operator1/.charon/cluster-lock.json" + if [ ! -f "$src_lock" ]; then + # Try output dir + src_lock="$TMP_DIR/operator1/output/cluster-lock.json" + fi + if [ ! -f "$src_lock" ]; then + log_error "No output cluster-lock found for new operator" + return 1 + fi + + # New operator receives cluster-lock and joins + echo "charon=stopped" > "$new_op_dir/services.state" + echo "vc-lodestar=stopped" >> "$new_op_dir/services.state" + + log_info " Running new-operator.sh..." + if ! ( + WORK_DIR="$new_op_dir" \ + MOCK_OPERATOR_INDEX="0" \ + MOCK_STATE_DIR="$new_op_dir" \ + "$REPO_ROOT/scripts/edit/replace-operator/new-operator.sh" \ + --cluster-lock "$src_lock" + ) > "$logs_dir/new-operator.log" 2>&1; then + log_error "New operator failed. Log:" + cat "$logs_dir/new-operator.log" || true + return 1 + fi + + # Verify: 4 operators, operator 0's ENR changed + local lock="$new_op_dir/.charon/cluster-lock.json" + if [ ! -f "$lock" ]; then + log_error "New operator: cluster-lock.json not found" + return 1 + fi + + local num_ops + num_ops=$(jq '.cluster_definition.operators | length' "$lock") + assert_eq "New operator has 4 operators" "4" "$num_ops" || ok=false + + # Check that operator 0's ENR changed to the new ENR + local op0_enr + op0_enr=$(jq -r '.cluster_definition.operators[0].enr' "$lock") + # The ENR should contain part of our new ENR (ENRs are reformatted by charon) + if [ "$op0_enr" != "null" ] && [ -n "$op0_enr" ]; then + log_info " PASS: Operator 0 ENR is present in new cluster-lock" + else + log_error " FAIL: Operator 0 ENR missing from cluster-lock" + ok=false + fi + + [ "$ok" = true ] +} + +test_update_asdb() { + log_info "Running update-anti-slashing-db standalone test..." + + # Use cluster-locks from earlier tests as source/target + # Find two different cluster-locks (before/after recreate-private-keys) + # We'll use operator0's backup and current cluster-lock + + local source_lock="" + local target_lock="" + + # Find backup from recreate-private-keys (or add-validators) + for backup in "$TMP_DIR"/operator0/backups/.charon-backup.*/cluster-lock.json; do + if [ -f "$backup" ]; then + source_lock="$backup" + break + fi + done + + target_lock="$TMP_DIR/operator0/.charon/cluster-lock.json" + + if [ -z "$source_lock" ] || [ ! -f "$source_lock" ]; then + log_warn "No backup cluster-lock found, creating synthetic test data..." 
+ + # Create synthetic source and target + local asdb_test_dir="$TMP_DIR/asdb-test" + mkdir -p "$asdb_test_dir" + + source_lock="$asdb_test_dir/source-lock.json" + target_lock="$asdb_test_dir/target-lock.json" + + # Use operator0's original cluster from the initial creation + local orig_lock="$TMP_DIR/cluster/node0/cluster-lock.json" + if [ -f "$orig_lock" ]; then + cp "$orig_lock" "$source_lock" + cp "$TMP_DIR/operator0/.charon/cluster-lock.json" "$target_lock" + else + log_error "Cannot find any cluster-lock files for ASDB test" + return 1 + fi + fi + + if [ ! -f "$target_lock" ]; then + log_error "Target cluster-lock not found: $target_lock" + return 1 + fi + + log_info " Source lock: $source_lock" + log_info " Target lock: $target_lock" + + # Generate EIP-3076 JSON with pubkeys from source lock + local asdb_dir="$TMP_DIR/asdb-test" + mkdir -p "$asdb_dir" + local eip3076_file="$asdb_dir/slashing-protection.json" + + # Extract operator 0's pubkeys from source lock + local pubkeys + pubkeys=$(jq -r '.distributed_validators[].public_shares[0]' "$source_lock") + + local data_entries="" + local first=true + while IFS= read -r pk; do + [ -z "$pk" ] && continue + if [ "$first" = true ]; then + first=false + else + data_entries="${data_entries}," + fi + data_entries="${data_entries}{\"pubkey\":\"${pk}\",\"signed_blocks\":[],\"signed_attestations\":[]}" + done <<< "$pubkeys" + + cat > "$eip3076_file" </dev/null; then + log_error "Output is not valid JSON" + return 1 + fi + + # Check that pubkeys now match target lock's operator 0 shares + # Only compare validators that existed in the source lock + local source_val_count + source_val_count=$(jq '.distributed_validators | length' "$source_lock") + local expected_pubkeys + expected_pubkeys=$(jq -r --argjson n "$source_val_count" \ + '[.distributed_validators[:$n][].public_shares[0]] | .[]' "$target_lock" | sort) + + if [ "$new_pubkeys" = "$expected_pubkeys" ]; then + log_info " PASS: Pubkeys correctly transformed to target cluster-lock values" + else + log_error " FAIL: Pubkeys don't match target cluster-lock" + log_error " Expected: $expected_pubkeys" + log_error " Got: $new_pubkeys" + ok=false + fi + + [ "$ok" = true ] +} + +# --- Main --- + +main() { + echo "" + echo "╔════════════════════════════════════════════════════════════════╗" + echo "║ E2E Integration Test for Cluster Edit Scripts ║" + echo "╚════════════════════════════════════════════════════════════════╝" + echo "" + + check_prerequisites + setup_tmp_dir + create_cluster + setup_mock_docker + + # Run tests sequentially — each builds on the previous state + run_test "add-validators" test_add_validators + run_test "recreate-private-keys" test_recreate_private_keys + run_test "add-operators" test_add_operators + run_test "remove-operators" test_remove_operators + run_test "replace-operator" test_replace_operator + run_test "update-anti-slashing-db" test_update_asdb + + # Summary + echo "" + echo "╔════════════════════════════════════════════════════════════════╗" + echo "║ Test Summary ║" + echo "╚════════════════════════════════════════════════════════════════╝" + echo "" + echo " Tests run: $TESTS_RUN" + echo -e " Tests passed: ${GREEN}$TESTS_PASSED${NC}" + if [ "$TESTS_FAILED" -gt 0 ]; then + echo -e " Tests failed: ${RED}$TESTS_FAILED${NC}" + else + echo " Tests failed: $TESTS_FAILED" + fi + echo "" + + if [ "$TESTS_FAILED" -gt 0 ]; then + log_error "SOME TESTS FAILED" + exit 1 + else + log_info "ALL TESTS PASSED" + exit 0 + fi +} + +main "$@" diff --git a/scripts/edit/vc/README.md 
b/scripts/edit/vc/README.md new file mode 100644 index 00000000..0e4b05c3 --- /dev/null +++ b/scripts/edit/vc/README.md @@ -0,0 +1,63 @@ +# Anti-Slashing Database Scripts + +Scripts to export, import, and update validator anti-slashing databases (ASDB) in [EIP-3076](https://eips.ethereum.org/EIPS/eip-3076) format for Charon distributed validators. + +## Overview + +When performing cluster edit operations (replace-operator, recreate-private-keys, add-operators, remove-operators), the anti-slashing database must be exported, updated with new pubkeys, and re-imported to prevent slashing violations. These scripts automate that process across all supported validator clients. + +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set +- Docker running +- `jq` installed (for `update-anti-slashing-db.sh`) + +## Scripts + +### Router Scripts + +| Script | Description | +|--------|-------------| +| `export_asdb.sh` | Routes to the appropriate VC-specific export script based on `VC` env var | +| `import_asdb.sh` | Routes to the appropriate VC-specific import script based on `VC` env var | + +Usage: + +```bash +# Export ASDB from running VC container +VC=vc-lodestar ./scripts/edit/vc/export_asdb.sh --output-file ./asdb-export/slashing-protection.json + +# Import ASDB into stopped VC container +VC=vc-lodestar ./scripts/edit/vc/import_asdb.sh --input-file ./asdb-export/slashing-protection.json +``` + +### Update Anti-Slashing DB + +Updates pubkeys in an EIP-3076 file by mapping them between source and target cluster-lock files. + +```bash +./scripts/edit/vc/update-anti-slashing-db.sh +``` + +### Supported Validator Clients + +Each client has its own `export_asdb.sh` and `import_asdb.sh` in a subdirectory: + +| Client | Directory | Export requires | Import requires | +|--------|-----------|-----------------|-----------------| +| Lodestar | `lodestar/` | Container running | Container stopped | +| Prysm | `prysm/` | Container running | Container stopped | +| Teku | `teku/` | Container running | Container stopped | +| Nimbus | `nimbus/` | Container running | Container stopped | + +## Testing + +See [test/README.md](test/README.md) for integration tests. + +## Related + +- [Add-Validators Workflow](../add-validators/README.md) +- [Add-Operators Workflow](../add-operators/README.md) +- [Remove-Operators Workflow](../remove-operators/README.md) +- [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md) +- [Replace-Operator Workflow](../replace-operator/README.md) diff --git a/scripts/edit/vc/export_asdb.sh b/scripts/edit/vc/export_asdb.sh new file mode 100755 index 00000000..f7ed0b68 --- /dev/null +++ b/scripts/edit/vc/export_asdb.sh @@ -0,0 +1,56 @@ +#!/usr/bin/env bash + +# Script to export validator anti-slashing database to EIP-3076 format. +# +# This script routes to the appropriate VC-specific export script based on the VC environment variable. +# +# Usage: VC=vc-lodestar ./scripts/edit/vc/export_asdb.sh [options] +# +# Environment Variables: +# VC Validator client type (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus) +# +# All options are passed through to the VC-specific script. 
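+#
+# The overall flow these router scripts support (a sketch with illustrative paths; the
+# update step's three positional arguments mirror the call made by the replace-operator
+# workflow script: EIP-3076 file, current cluster-lock, new cluster-lock):
+#
+#   VC=vc-lodestar ./scripts/edit/vc/export_asdb.sh --output-file ./asdb-export/slashing-protection.json
+#   ./scripts/edit/vc/update-anti-slashing-db.sh ./asdb-export/slashing-protection.json .charon/cluster-lock.json ./output/cluster-lock.json
+#   VC=vc-lodestar ./scripts/edit/vc/import_asdb.sh --input-file ./asdb-export/slashing-protection.json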
+ +set -euo pipefail + +# Check if VC environment variable is set +if [ -z "${VC:-}" ]; then + echo "Error: VC environment variable is not set" >&2 + echo "Usage: VC=vc-lodestar $0 [options]" >&2 + echo "" >&2 + echo "Supported VC types:" >&2 + echo " - vc-lodestar" >&2 + echo " - vc-teku" >&2 + echo " - vc-prysm" >&2 + echo " - vc-nimbus" >&2 + exit 1 +fi + +# Extract the VC name (remove "vc-" prefix) +VC_NAME="${VC#vc-}" + +# Get the script directory +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + +# Path to the VC-specific script +VC_SCRIPT="${SCRIPT_DIR}/${VC_NAME}/export_asdb.sh" + +# Check if the VC-specific script exists +if [ ! -f "$VC_SCRIPT" ]; then + echo "Error: Export script for '$VC' not found at: $VC_SCRIPT" >&2 + echo "" >&2 + echo "Available VC types:" >&2 + for dir in "${SCRIPT_DIR}"/*; do + if [ -d "$dir" ] && [ -f "$dir/export_asdb.sh" ]; then + basename "$dir" + fi + done | sed 's/^/ - vc-/' >&2 + exit 1 +fi + +# Make sure the VC-specific script is executable +chmod +x "$VC_SCRIPT" + +# Run the VC-specific script with all arguments passed through +echo "Running export for $VC..." +exec "$VC_SCRIPT" "$@" diff --git a/scripts/edit/vc/import_asdb.sh b/scripts/edit/vc/import_asdb.sh new file mode 100755 index 00000000..6e8facd7 --- /dev/null +++ b/scripts/edit/vc/import_asdb.sh @@ -0,0 +1,56 @@ +#!/usr/bin/env bash + +# Script to import validator anti-slashing database from EIP-3076 format. +# +# This script routes to the appropriate VC-specific import script based on the VC environment variable. +# +# Usage: VC=vc-lodestar ./scripts/edit/vc/import_asdb.sh [options] +# +# Environment Variables: +# VC Validator client type (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus) +# +# All options are passed through to the VC-specific script. + +set -euo pipefail + +# Check if VC environment variable is set +if [ -z "${VC:-}" ]; then + echo "Error: VC environment variable is not set" >&2 + echo "Usage: VC=vc-lodestar $0 [options]" >&2 + echo "" >&2 + echo "Supported VC types:" >&2 + echo " - vc-lodestar" >&2 + echo " - vc-teku" >&2 + echo " - vc-prysm" >&2 + echo " - vc-nimbus" >&2 + exit 1 +fi + +# Extract the VC name (remove "vc-" prefix) +VC_NAME="${VC#vc-}" + +# Get the script directory +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + +# Path to the VC-specific script +VC_SCRIPT="${SCRIPT_DIR}/${VC_NAME}/import_asdb.sh" + +# Check if the VC-specific script exists +if [ ! -f "$VC_SCRIPT" ]; then + echo "Error: Import script for '$VC' not found at: $VC_SCRIPT" >&2 + echo "" >&2 + echo "Available VC types:" >&2 + for dir in "${SCRIPT_DIR}"/*; do + if [ -d "$dir" ] && [ -f "$dir/import_asdb.sh" ]; then + basename "$dir" + fi + done | sed 's/^/ - vc-/' >&2 + exit 1 +fi + +# Make sure the VC-specific script is executable +chmod +x "$VC_SCRIPT" + +# Run the VC-specific script with all arguments passed through +echo "Running import for $VC..." +exec "$VC_SCRIPT" "$@" diff --git a/scripts/edit/vc/lodestar/export_asdb.sh b/scripts/edit/vc/lodestar/export_asdb.sh new file mode 100755 index 00000000..371fd45f --- /dev/null +++ b/scripts/edit/vc/lodestar/export_asdb.sh @@ -0,0 +1,119 @@ +#!/usr/bin/env bash + +# Script to export Lodestar validator anti-slashing database to EIP-3076 format. +# +# This script is run by continuing operators before the replace-operator ceremony. +# It exports the slashing protection database from the running vc-lodestar container +# to a JSON file that can be updated and re-imported after the ceremony. 
+# +# Usage: export_asdb.sh [--data-dir ] [--output-file ] +# +# Options: +# --data-dir Path to Lodestar data directory (default: ./data/lodestar) +# --output-file Path for exported slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-lodestar container must be running +# - docker and docker compose must be available + +set -euo pipefail + +# Default values +DATA_DIR="./data/lodestar" +OUTPUT_FILE="./asdb-export/slashing-protection.json" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + --output-file) + OUTPUT_FILE="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--data-dir ] [--output-file ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! -f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE if it was set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Exporting anti-slashing database for Lodestar validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Output file: $OUTPUT_FILE" +echo "" + +# Check if vc-lodestar container is running +if ! docker compose ps vc-lodestar | grep -q Up; then + echo "Error: vc-lodestar container is not running" >&2 + echo "Please start the validator client before exporting:" >&2 + echo " docker compose up -d vc-lodestar" >&2 + exit 1 +fi + +# Create output directory if it doesn't exist +OUTPUT_DIR=$(dirname "$OUTPUT_FILE") +mkdir -p "$OUTPUT_DIR" + +echo "Exporting slashing protection data from vc-lodestar container..." + +# Export slashing protection data from the container +# The container writes to /tmp/export.json, then we copy it out +# Using full path to lodestar binary as found in run.sh to ensure it's found +if ! docker compose exec -T vc-lodestar node /usr/app/packages/cli/bin/lodestar validator slashing-protection export \ + --file /tmp/export.json \ + --dataDir /opt/data \ + --network "$NETWORK"; then + echo "Error: Failed to export slashing protection from vc-lodestar container" >&2 + exit 1 +fi + +echo "Copying exported file from container to host..." + +# Copy the exported file from container to host +if ! docker compose cp vc-lodestar:/tmp/export.json "$OUTPUT_FILE"; then + echo "Error: Failed to copy exported file from container" >&2 + exit 1 +fi + +# Validate the exported JSON +if ! jq empty "$OUTPUT_FILE" 2>/dev/null; then + echo "Error: Exported file is not valid JSON" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully exported anti-slashing database" +echo " Output file: $OUTPUT_FILE" +echo "" +echo "You can now proceed with the replace-operator ceremony." 
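Since the Lodestar export above produces EIP-3076 JSON, a quick optional sanity check on the result before proceeding can be useful (not part of the script; `jq` is already a prerequisite, and the path shown is the script's default output location):

```bash
# Print the interchange format version and the number of exported pubkey entries.
jq '.metadata.interchange_format_version, (.data | length)' ./asdb-export/slashing-protection.json
```

Expect `"5"` for the format version and typically one `data` entry per validator pubkey this node runs.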
diff --git a/scripts/edit/vc/lodestar/import_asdb.sh b/scripts/edit/vc/lodestar/import_asdb.sh new file mode 100755 index 00000000..c9751b4d --- /dev/null +++ b/scripts/edit/vc/lodestar/import_asdb.sh @@ -0,0 +1,121 @@ +#!/usr/bin/env bash + +# Script to import Lodestar validator anti-slashing database from EIP-3076 format. +# +# This script is run by continuing operators after the replace-operator ceremony +# and anti-slashing database update. It imports the updated slashing protection +# database back into the vc-lodestar container. +# +# Usage: import_asdb.sh [--input-file ] [--data-dir ] +# +# Options: +# --input-file Path to updated slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# --data-dir Path to Lodestar data directory (default: ./data/lodestar) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-lodestar container must be STOPPED before import +# - docker and docker compose must be available +# - Input file must be valid EIP-3076 JSON + +set -euo pipefail + +# Default values +INPUT_FILE="./asdb-export/slashing-protection.json" +DATA_DIR="./data/lodestar" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --input-file) + INPUT_FILE="$2" + shift 2 + ;; + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--input-file ] [--data-dir ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! -f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE if it was set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Importing anti-slashing database for Lodestar validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Input file: $INPUT_FILE" +echo "" + +# Check if input file exists +if [ ! -f "$INPUT_FILE" ]; then + echo "Error: Input file not found: $INPUT_FILE" >&2 + exit 1 +fi + +# Validate input file is valid JSON +if ! jq empty "$INPUT_FILE" 2>/dev/null; then + echo "Error: Input file is not valid JSON: $INPUT_FILE" >&2 + exit 1 +fi + +# Check if vc-lodestar container is running (it should be stopped) +if docker compose ps vc-lodestar 2>/dev/null | grep -q Up; then + echo "Error: vc-lodestar container is still running" >&2 + echo "Please stop the validator client before importing:" >&2 + echo " docker compose stop vc-lodestar" >&2 + echo "" >&2 + echo "Importing while the container is running may cause database corruption." >&2 + exit 1 +fi + +echo "Importing slashing protection data into vc-lodestar container..." + +# Import slashing protection data using a temporary container based on the vc-lodestar service. +# The input file is bind-mounted into the container at /tmp/import.json (read-only). +# We MUST override the entrypoint because the default run.sh ignores arguments. +# Using --force to allow importing even if some data already exists. +if ! 
docker compose run --rm -T \ + --entrypoint node \ + -v "$INPUT_FILE":/tmp/import.json:ro \ + vc-lodestar /usr/app/packages/cli/bin/lodestar validator slashing-protection import \ + --file /tmp/import.json \ + --dataDir /opt/data \ + --network "$NETWORK" \ + --force; then + echo "Error: Failed to import slashing protection into vc-lodestar container" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully imported anti-slashing database" +echo "" +echo "You can now restart the validator client:" +echo " docker compose up -d vc-lodestar" diff --git a/scripts/edit/vc/nimbus/export_asdb.sh b/scripts/edit/vc/nimbus/export_asdb.sh new file mode 100755 index 00000000..4129dd61 --- /dev/null +++ b/scripts/edit/vc/nimbus/export_asdb.sh @@ -0,0 +1,118 @@ +#!/usr/bin/env bash + +# Script to export Nimbus validator anti-slashing database to EIP-3076 format. +# +# This script is run by continuing operators before the replace-operator ceremony. +# It exports the slashing protection database from the running vc-nimbus container +# to a JSON file that can be updated and re-imported after the ceremony. +# +# Usage: export_asdb.sh [--data-dir ] [--output-file ] +# +# Options: +# --data-dir Path to Nimbus data directory (default: ./data/nimbus) +# --output-file Path for exported slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-nimbus container must be running +# - docker and docker compose must be available + +set -euo pipefail + +# Default values +DATA_DIR="./data/nimbus" +OUTPUT_FILE="./asdb-export/slashing-protection.json" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + --output-file) + OUTPUT_FILE="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--data-dir ] [--output-file ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! -f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE if it was set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Exporting anti-slashing database for Nimbus validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Output file: $OUTPUT_FILE" +echo "" + +# Check if vc-nimbus container is running +if ! docker compose ps vc-nimbus | grep -q Up; then + echo "Error: vc-nimbus container is not running" >&2 + echo "Please start the validator client before exporting:" >&2 + echo " docker compose up -d vc-nimbus" >&2 + exit 1 +fi + +# Create output directory if it doesn't exist +OUTPUT_DIR=$(dirname "$OUTPUT_FILE") +mkdir -p "$OUTPUT_DIR" + +echo "Exporting slashing protection data from vc-nimbus container..." + +# Export slashing protection data from the container +# The container writes to /tmp/export.json, then we copy it out +# Note: slashingdb commands are in nimbus_beacon_node, not nimbus_validator_client. +# Nimbus requires --data-dir BEFORE the subcommand. +if ! 
docker compose exec -T vc-nimbus /home/user/nimbus_beacon_node \ + --data-dir=/home/user/data slashingdb export /tmp/export.json; then + echo "Error: Failed to export slashing protection from vc-nimbus container" >&2 + exit 1 +fi + +echo "Copying exported file from container to host..." + +# Copy the exported file from container to host +if ! docker compose cp vc-nimbus:/tmp/export.json "$OUTPUT_FILE"; then + echo "Error: Failed to copy exported file from container" >&2 + exit 1 +fi + +# Validate the exported JSON +if ! jq empty "$OUTPUT_FILE" 2>/dev/null; then + echo "Error: Exported file is not valid JSON" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully exported anti-slashing database" +echo " Output file: $OUTPUT_FILE" +echo "" +echo "You can now proceed with the replace-operator ceremony." diff --git a/scripts/edit/vc/nimbus/import_asdb.sh b/scripts/edit/vc/nimbus/import_asdb.sh new file mode 100755 index 00000000..36433a5d --- /dev/null +++ b/scripts/edit/vc/nimbus/import_asdb.sh @@ -0,0 +1,117 @@ +#!/usr/bin/env bash + +# Script to import Nimbus validator anti-slashing database from EIP-3076 format. +# +# This script is run by continuing operators after the replace-operator ceremony +# and anti-slashing database update. It imports the updated slashing protection +# database back into the vc-nimbus container. +# +# Usage: import_asdb.sh [--input-file ] [--data-dir ] +# +# Options: +# --input-file Path to updated slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# --data-dir Path to Nimbus data directory (default: ./data/nimbus) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-nimbus container must be STOPPED before import +# - docker and docker compose must be available +# - Input file must be valid EIP-3076 JSON + +set -euo pipefail + +# Default values +INPUT_FILE="./asdb-export/slashing-protection.json" +DATA_DIR="./data/nimbus" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --input-file) + INPUT_FILE="$2" + shift 2 + ;; + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--input-file ] [--data-dir ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! -f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE if it was set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Importing anti-slashing database for Nimbus validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Input file: $INPUT_FILE" +echo "" + +# Check if input file exists +if [ ! -f "$INPUT_FILE" ]; then + echo "Error: Input file not found: $INPUT_FILE" >&2 + exit 1 +fi + +# Validate input file is valid JSON +if ! 
jq empty "$INPUT_FILE" 2>/dev/null; then + echo "Error: Input file is not valid JSON: $INPUT_FILE" >&2 + exit 1 +fi + +# Check if vc-nimbus container is running (it should be stopped) +if docker compose ps vc-nimbus 2>/dev/null | grep -q Up; then + echo "Error: vc-nimbus container is still running" >&2 + echo "Please stop the validator client before importing:" >&2 + echo " docker compose stop vc-nimbus" >&2 + echo "" >&2 + echo "Importing while the container is running may cause database corruption." >&2 + exit 1 +fi + +echo "Importing slashing protection data into vc-nimbus container..." + +# Import slashing protection data using a temporary container based on the vc-nimbus service. +# The input file is bind-mounted into the container at /tmp/import.json (read-only). +# Note: slashingdb commands are in nimbus_beacon_node, not nimbus_validator_client. +# Nimbus requires --data-dir BEFORE the subcommand. +if ! docker compose run --rm -T \ + --entrypoint sh \ + -v "$INPUT_FILE":/tmp/import.json:ro \ + vc-nimbus -c "/home/user/nimbus_beacon_node --data-dir=/home/user/data slashingdb import /tmp/import.json"; then + echo "Error: Failed to import slashing protection into vc-nimbus container" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully imported anti-slashing database" +echo "" +echo "You can now restart the validator client:" +echo " docker compose up -d vc-nimbus" diff --git a/scripts/edit/vc/prysm/export_asdb.sh b/scripts/edit/vc/prysm/export_asdb.sh new file mode 100755 index 00000000..79820081 --- /dev/null +++ b/scripts/edit/vc/prysm/export_asdb.sh @@ -0,0 +1,121 @@ +#!/usr/bin/env bash + +# Script to export Prysm validator anti-slashing database to EIP-3076 format. +# +# This script is run by continuing operators before the replace-operator ceremony. +# It exports the slashing protection database from the running vc-prysm container +# to a JSON file that can be updated and re-imported after the ceremony. +# +# Usage: export_asdb.sh [--data-dir ] [--output-file ] +# +# Options: +# --data-dir Path to Prysm data directory (default: ./data/prysm) +# --output-file Path for exported slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-prysm container must be running +# - docker and docker compose must be available + +set -euo pipefail + +# Default values +DATA_DIR="./data/prysm" +OUTPUT_FILE="./asdb-export/slashing-protection.json" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + --output-file) + OUTPUT_FILE="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--data-dir ] [--output-file ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! 
-f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE if it was set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Exporting anti-slashing database for Prysm validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Output file: $OUTPUT_FILE" +echo "" + +# Check if vc-prysm container is running +if ! docker compose ps vc-prysm | grep -q Up; then + echo "Error: vc-prysm container is not running" >&2 + echo "Please start the validator client before exporting:" >&2 + echo " docker compose up -d vc-prysm" >&2 + exit 1 +fi + +# Create output directory if it doesn't exist +OUTPUT_DIR=$(dirname "$OUTPUT_FILE") +mkdir -p "$OUTPUT_DIR" + +echo "Exporting slashing protection data from vc-prysm container..." + +# Export slashing protection data from the container +# The container writes to /tmp/export.json, then we copy it out +# Prysm stores data in /data/vc and wallet in /prysm-wallet +if ! docker compose exec -T vc-prysm /app/cmd/validator/validator slashing-protection-history export \ + --accept-terms-of-use \ + --datadir=/data/vc \ + --slashing-protection-export-dir=/tmp \ + --$NETWORK; then + echo "Error: Failed to export slashing protection from vc-prysm container" >&2 + exit 1 +fi + +echo "Copying exported file from container to host..." + +# Prysm creates a file named slashing_protection.json in the export directory +# Copy the exported file from container to host +if ! docker compose cp vc-prysm:/tmp/slashing_protection.json "$OUTPUT_FILE"; then + echo "Error: Failed to copy exported file from container" >&2 + exit 1 +fi + +# Validate the exported JSON +if ! jq empty "$OUTPUT_FILE" 2>/dev/null; then + echo "Error: Exported file is not valid JSON" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully exported anti-slashing database" +echo " Output file: $OUTPUT_FILE" +echo "" +echo "You can now proceed with the replace-operator ceremony." diff --git a/scripts/edit/vc/prysm/import_asdb.sh b/scripts/edit/vc/prysm/import_asdb.sh new file mode 100755 index 00000000..bc2c6bc5 --- /dev/null +++ b/scripts/edit/vc/prysm/import_asdb.sh @@ -0,0 +1,121 @@ +#!/usr/bin/env bash + +# Script to import Prysm validator anti-slashing database from EIP-3076 format. +# +# This script is run by continuing operators after the replace-operator ceremony +# and anti-slashing database update. It imports the updated slashing protection +# database back into the vc-prysm container. 
+# +# Usage: import_asdb.sh [--input-file ] [--data-dir ] +# +# Options: +# --input-file Path to updated slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# --data-dir Path to Prysm data directory (default: ./data/prysm) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-prysm container must be STOPPED before import +# - docker and docker compose must be available +# - Input file must be valid EIP-3076 JSON + +set -euo pipefail + +# Default values +INPUT_FILE="./asdb-export/slashing-protection.json" +DATA_DIR="./data/prysm" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --input-file) + INPUT_FILE="$2" + shift 2 + ;; + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--input-file ] [--data-dir ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! -f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE if it was set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Importing anti-slashing database for Prysm validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Input file: $INPUT_FILE" +echo "" + +# Check if input file exists +if [ ! -f "$INPUT_FILE" ]; then + echo "Error: Input file not found: $INPUT_FILE" >&2 + exit 1 +fi + +# Validate input file is valid JSON +if ! jq empty "$INPUT_FILE" 2>/dev/null; then + echo "Error: Input file is not valid JSON: $INPUT_FILE" >&2 + exit 1 +fi + +# Check if vc-prysm container is running (it should be stopped) +if docker compose ps vc-prysm 2>/dev/null | grep -q Up; then + echo "Error: vc-prysm container is still running" >&2 + echo "Please stop the validator client before importing:" >&2 + echo " docker compose stop vc-prysm" >&2 + echo "" >&2 + echo "Importing while the container is running may cause database corruption." >&2 + exit 1 +fi + +echo "Importing slashing protection data into vc-prysm container..." + +# Import slashing protection data using a temporary container based on the vc-prysm service. +# The input file is bind-mounted into the container at /tmp/slashing_protection.json (read-only). +# We MUST override the entrypoint because the default run.sh ignores arguments. +# Prysm expects the file to be named slashing_protection.json +if ! 
docker compose run --rm -T \ + --entrypoint /app/cmd/validator/validator \ + -v "$INPUT_FILE":/tmp/slashing_protection.json:ro \ + vc-prysm slashing-protection-history import \ + --accept-terms-of-use \ + --datadir=/data/vc \ + --slashing-protection-json-file=/tmp/slashing_protection.json \ + --$NETWORK; then + echo "Error: Failed to import slashing protection into vc-prysm container" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully imported anti-slashing database" +echo "" +echo "You can now restart the validator client:" +echo " docker compose up -d vc-prysm" diff --git a/scripts/edit/vc/teku/export_asdb.sh b/scripts/edit/vc/teku/export_asdb.sh new file mode 100755 index 00000000..145712ed --- /dev/null +++ b/scripts/edit/vc/teku/export_asdb.sh @@ -0,0 +1,118 @@ +#!/usr/bin/env bash + +# Script to export Teku validator anti-slashing database to EIP-3076 format. +# +# This script is run by continuing operators before the replace-operator ceremony. +# It exports the slashing protection database from the running vc-teku container +# to a JSON file that can be updated and re-imported after the ceremony. +# +# Usage: export_asdb.sh [--data-dir ] [--output-file ] +# +# Options: +# --data-dir Path to Teku data directory (default: ./data/vc-teku) +# --output-file Path for exported slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-teku container must be running +# - docker and docker compose must be available + +set -euo pipefail + +# Default values +DATA_DIR="./data/vc-teku" +OUTPUT_FILE="./asdb-export/slashing-protection.json" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + --output-file) + OUTPUT_FILE="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--data-dir ] [--output-file ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! -f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE if it was set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Exporting anti-slashing database for Teku validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Output file: $OUTPUT_FILE" +echo "" + +# Check if vc-teku container is running +if ! docker compose ps vc-teku | grep -q Up; then + echo "Error: vc-teku container is not running" >&2 + echo "Please start the validator client before exporting:" >&2 + echo " docker compose up -d vc-teku" >&2 + exit 1 +fi + +# Create output directory if it doesn't exist +OUTPUT_DIR=$(dirname "$OUTPUT_FILE") +mkdir -p "$OUTPUT_DIR" + +echo "Exporting slashing protection data from vc-teku container..." + +# Export slashing protection data from the container +# Teku stores data in /home/data (mapped from ./data/vc-teku) +# The export command writes to a file we specify +if ! 
docker compose exec -T vc-teku /opt/teku/bin/teku slashing-protection export \ + --data-path=/home/data \ + --to=/tmp/export.json; then + echo "Error: Failed to export slashing protection from vc-teku container" >&2 + exit 1 +fi + +echo "Copying exported file from container to host..." + +# Copy the exported file from container to host +if ! docker compose cp vc-teku:/tmp/export.json "$OUTPUT_FILE"; then + echo "Error: Failed to copy exported file from container" >&2 + exit 1 +fi + +# Validate the exported JSON +if ! jq empty "$OUTPUT_FILE" 2>/dev/null; then + echo "Error: Exported file is not valid JSON" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully exported anti-slashing database" +echo " Output file: $OUTPUT_FILE" +echo "" +echo "You can now proceed with the replace-operator ceremony." diff --git a/scripts/edit/vc/teku/import_asdb.sh b/scripts/edit/vc/teku/import_asdb.sh new file mode 100755 index 00000000..d73b7c6f --- /dev/null +++ b/scripts/edit/vc/teku/import_asdb.sh @@ -0,0 +1,118 @@ +#!/usr/bin/env bash + +# Script to import Teku validator anti-slashing database from EIP-3076 format. +# +# This script is run by continuing operators after the replace-operator ceremony +# and anti-slashing database update. It imports the updated slashing protection +# database back into the vc-teku container. +# +# Usage: import_asdb.sh [--input-file ] [--data-dir ] +# +# Options: +# --input-file Path to updated slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# --data-dir Path to Teku data directory (default: ./data/vc-teku) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-teku container must be STOPPED before import +# - docker and docker compose must be available +# - Input file must be valid EIP-3076 JSON + +set -euo pipefail + +# Default values +INPUT_FILE="./asdb-export/slashing-protection.json" +DATA_DIR="./data/vc-teku" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --input-file) + INPUT_FILE="$2" + shift 2 + ;; + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--input-file ] [--data-dir ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! -f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE if it was set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Importing anti-slashing database for Teku validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Input file: $INPUT_FILE" +echo "" + +# Check if input file exists +if [ ! -f "$INPUT_FILE" ]; then + echo "Error: Input file not found: $INPUT_FILE" >&2 + exit 1 +fi + +# Validate input file is valid JSON +if ! 
jq empty "$INPUT_FILE" 2>/dev/null; then + echo "Error: Input file is not valid JSON: $INPUT_FILE" >&2 + exit 1 +fi + +# Check if vc-teku container is running (it should be stopped) +if docker compose ps vc-teku 2>/dev/null | grep -q Up; then + echo "Error: vc-teku container is still running" >&2 + echo "Please stop the validator client before importing:" >&2 + echo " docker compose stop vc-teku" >&2 + echo "" >&2 + echo "Importing while the container is running may cause database corruption." >&2 + exit 1 +fi + +echo "Importing slashing protection data into vc-teku container..." + +# Import slashing protection data using a temporary container based on the vc-teku service. +# The input file is bind-mounted into the container at /tmp/import.json (read-only). +# We override the command to run the import instead of the validator client. +if ! docker compose run --rm -T \ + -v "$INPUT_FILE":/tmp/import.json:ro \ + --entrypoint /opt/teku/bin/teku \ + vc-teku slashing-protection import \ + --data-path=/home/data \ + --from=/tmp/import.json; then + echo "Error: Failed to import slashing protection into vc-teku container" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully imported anti-slashing database" +echo "" +echo "You can now restart the validator client:" +echo " docker compose up -d vc-teku" diff --git a/scripts/edit/vc/test/.gitignore b/scripts/edit/vc/test/.gitignore new file mode 100644 index 00000000..0f8c84f0 --- /dev/null +++ b/scripts/edit/vc/test/.gitignore @@ -0,0 +1,4 @@ +# Temporary test artifacts +output/ +data/ +*.tmp diff --git a/scripts/edit/vc/test/README.md b/scripts/edit/vc/test/README.md new file mode 100644 index 00000000..6e9c768b --- /dev/null +++ b/scripts/edit/vc/test/README.md @@ -0,0 +1,34 @@ +# Integration Tests for ASDB Export/Import Scripts + +These tests verify export/import scripts for various VC types work correctly with test data. + +## Prerequisites + +- Docker must be running +- No `.charon` folder required (test uses fixtures) + +## Running Tests + +```bash +# Lodestar VC test +# (for other VC types the usage is identical) +./scripts/edit/vc/test/test_lodestar_asdb.sh +``` + +## ⚠️ Test Isolation + +The test uses isolated data directories within `scripts/edit/vc/test/data/` to avoid any interference with production data in `data/`. + +## Test Flow + +1. Starts vc-lodestar container (no charon dependency) +2. Imports sample slashing protection data from fixtures +3. Exports slashing protection via `export_asdb.sh` +4. Transforms pubkeys via `update-anti-slashing-db.sh` +5. 
Re-imports updated data via `import_asdb.sh` + +## Test Artifacts + +After running, inspect results in `scripts/edit/vc/test/output/`: +- `exported-asdb.json` - Original export +- `updated-asdb.json` - After pubkey transformation diff --git a/scripts/edit/vc/test/docker-compose.test.yml b/scripts/edit/vc/test/docker-compose.test.yml new file mode 100644 index 00000000..ce2d6065 --- /dev/null +++ b/scripts/edit/vc/test/docker-compose.test.yml @@ -0,0 +1,40 @@ +# Test override for validator client services +# Removes charon dependency and keeps container alive for testing +# Mounts test fixtures instead of .charon/validator_keys +# Uses dedicated test data directory to avoid conflicts + +services: + vc-lodestar: + depends_on: [] + entrypoint: ["sh", "-c", "tail -f /dev/null"] + volumes: + - ./lodestar/run.sh:/opt/lodestar/run.sh + - ./scripts/edit/vc/test/fixtures/validator_keys:/home/charon/validator_keys + - ./scripts/edit/vc/test/data/lodestar:/opt/data + + vc-nimbus: + depends_on: [] + entrypoint: ["sh", "-c", "tail -f /dev/null"] + volumes: + # Mount run.sh from INSIDE the test data directory to avoid conflicts + # with the base compose's run.sh mount (volumes are merged, not replaced) + - ./scripts/edit/vc/test/data/nimbus/run.sh:/home/user/data/run.sh + - ./scripts/edit/vc/test/fixtures/validator_keys:/home/validator_keys + - ./scripts/edit/vc/test/data/nimbus:/home/user/data + + vc-prysm: + depends_on: [] + entrypoint: ["sh", "-c", "tail -f /dev/null"] + volumes: + # Mount run.sh from INSIDE the test data directory to avoid conflicts + - ./scripts/edit/vc/test/data/prysm/run.sh:/home/prysm/run.sh + - ./scripts/edit/vc/test/fixtures/validator_keys:/home/charon/validator_keys + - ./scripts/edit/vc/test/data/prysm:/data/vc + + vc-teku: + depends_on: [] + entrypoint: ["sh", "-c", "tail -f /dev/null"] + volumes: + # Mount test fixtures validator keys and test data directory + - ./scripts/edit/vc/test/fixtures/validator_keys:/opt/charon/validator_keys + - ./scripts/edit/vc/test/data/teku:/home/data diff --git a/scripts/edit/vc/test/fixtures/sample-slashing-protection.json b/scripts/edit/vc/test/fixtures/sample-slashing-protection.json new file mode 100644 index 00000000..6c1f42bb --- /dev/null +++ b/scripts/edit/vc/test/fixtures/sample-slashing-protection.json @@ -0,0 +1,38 @@ +{ + "metadata": { + "interchange_format_version": "5", + "genesis_validators_root": "0x212f13fc4df078b6cb7db228f1c8307566dcecf900867401a92023d7ba99cb5f" + }, + "data": [ + { + "pubkey": "0xa3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", + "signed_blocks": [ + { + "slot": "81952", + "signing_root": "0x4ff6f743a43f3b4f95350831aeaf0a122a1a392922c45d804280284a69eb850b" + }, + { + "slot": "81984", + "signing_root": "0x5a2b9c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b" + } + ], + "signed_attestations": [ + { + "source_epoch": "2560", + "target_epoch": "2561", + "signing_root": "0x587d6a4f59a58fe15bdac1234e3d51a1d5c8b2e0e3f5e0f2a1b3c4d5e6f7a8b9" + }, + { + "source_epoch": "2561", + "target_epoch": "2562", + "signing_root": "0x6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b" + }, + { + "source_epoch": "2562", + "target_epoch": "2563", + "signing_root": "0x7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c" + } + ] + } + ] +} diff --git a/scripts/edit/vc/test/fixtures/source-cluster-lock.json b/scripts/edit/vc/test/fixtures/source-cluster-lock.json new file mode 100644 index 00000000..d17c11fe --- /dev/null +++ 
b/scripts/edit/vc/test/fixtures/source-cluster-lock.json @@ -0,0 +1,19 @@ +{ + "cluster_definition": { + "name": "TestCluster", + "num_validators": 1, + "threshold": 3 + }, + "distributed_validators": [ + { + "distributed_public_key": "0xa9fb2be415318eb77709f7c378ab26025371c0b11213d93fd662ffdb06e77a05c7b04573a478e9d5c0c0fd98078965ef", + "public_shares": [ + "0xa3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", + "0x8afba316fdcf51e25a89e05e17377b8c72fd465c95346df4ed5694f295faa2ce061e14e579c5bc01a468dbbb191c58e8", + "0xa1aeebe0980509f5f8d8d424beb89004a967da8d8093248f64eb27c4ee5d22ba9c0f157025f551f47b31833f8bc585f8", + "0xa6c283c82cd0b65436861a149fb840849d06ded1dd8d2f900afb358c6a4232004309120f00a553cdccd8a43f6b743c82" + ] + } + ], + "lock_hash": "0xe9dbc87171f99bd8b6f348f6bf314291651933256e712ace299190f5e04e7795" +} diff --git a/scripts/edit/vc/test/fixtures/target-cluster-lock.json b/scripts/edit/vc/test/fixtures/target-cluster-lock.json new file mode 100644 index 00000000..8449e309 --- /dev/null +++ b/scripts/edit/vc/test/fixtures/target-cluster-lock.json @@ -0,0 +1,19 @@ +{ + "cluster_definition": { + "name": "TestCluster", + "num_validators": 1, + "threshold": 3 + }, + "distributed_validators": [ + { + "distributed_public_key": "0xa9fb2be415318eb77709f7c378ab26025371c0b11213d93fd662ffdb06e77a05c7b04573a478e9d5c0c0fd98078965ef", + "public_shares": [ + "0xb11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111", + "0x8afba316fdcf51e25a89e05e17377b8c72fd465c95346df4ed5694f295faa2ce061e14e579c5bc01a468dbbb191c58e8", + "0xa1aeebe0980509f5f8d8d424beb89004a967da8d8093248f64eb27c4ee5d22ba9c0f157025f551f47b31833f8bc585f8", + "0xa6c283c82cd0b65436861a149fb840849d06ded1dd8d2f900afb358c6a4232004309120f00a553cdccd8a43f6b743c82" + ] + } + ], + "lock_hash": "0xf0000000000000000000000000000000000000000000000000000000000000000" +} diff --git a/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.json b/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.json new file mode 100644 index 00000000..dba1e6ff --- /dev/null +++ b/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.json @@ -0,0 +1,31 @@ +{ + "crypto": { + "checksum": { + "function": "sha256", + "message": "eeaf8c59d062a397f74d62b97243860cef812cf168662135b9fca023d26c71df", + "params": {} + }, + "cipher": { + "function": "aes-128-ctr", + "message": "c3daae6234285577322e5d674ed90469da1d888b0a406cde50b6472d5206e165", + "params": { + "iv": "87350b9c54dc1e7563b9d784eba86f6d" + } + }, + "kdf": { + "function": "pbkdf2", + "message": "", + "params": { + "c": 262144, + "dklen": 32, + "prf": "hmac-sha256", + "salt": "f3d31631d40448dd9134bcf54630e2ad2f1668bb8470af8f5394c12e214a6fed" + } + } + }, + "description": "", + "pubkey": "a3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", + "path": "m/12381/3600/0/0/0", + "uuid": "840CFCF8-A23B-7742-9057-3B149122244A", + "version": 4 +} diff --git a/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.txt b/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.txt new file mode 100644 index 00000000..c0245cc2 --- /dev/null +++ b/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.txt @@ -0,0 +1 @@ +90bb9cd1986560f92016c8766fe8c528 \ No newline at end of file diff --git a/scripts/edit/vc/test/test_lodestar_asdb.sh b/scripts/edit/vc/test/test_lodestar_asdb.sh new file mode 100755 index 00000000..f8948bf6 --- /dev/null +++ b/scripts/edit/vc/test/test_lodestar_asdb.sh 
@@ -0,0 +1,231 @@ +#!/usr/bin/env bash + +# Integration test for export/import ASDB scripts with Lodestar VC. +# +# This script: +# 1. Starts vc-lodestar via docker-compose with test override (no charon dependency) +# 2. Sets up keystores in the container +# 3. Imports sample slashing protection data (with known pubkey and attestations) +# 4. Calls scripts/edit/vc/export_asdb.sh to export slashing protection +# 5. Runs update-anti-slashing-db.sh to transform pubkeys +# 6. Stops the container +# 7. Calls scripts/edit/vc/import_asdb.sh to import updated slashing protection +# +# Usage: ./scripts/edit/vc/test/test_lodestar_asdb.sh + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." && pwd)" +cd "$REPO_ROOT" + +# Test artifacts directories +TEST_OUTPUT_DIR="$SCRIPT_DIR/output" +TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" +TEST_COMPOSE_FILE="$SCRIPT_DIR/docker-compose.test.yml" +TEST_DATA_DIR="$SCRIPT_DIR/data/lodestar" +TEST_COMPOSE_FILES="docker-compose.yml:compose-vc.yml:$TEST_COMPOSE_FILE" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } + +cleanup() { + log_info "Cleaning up test resources..." + COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-lodestar down 2>/dev/null || true + # Keep TEST_OUTPUT_DIR for inspection + # Clean test data to avoid stale DB locks + rm -rf "$TEST_DATA_DIR" 2>/dev/null || true +} + +trap cleanup EXIT + +# Clean test data directory before starting (remove stale locks) +log_info "Preparing test environment..." +COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-lodestar down 2>/dev/null || true +rm -rf "$TEST_DATA_DIR" +mkdir -p "$TEST_DATA_DIR" + +# Check prerequisites +log_info "Checking prerequisites..." + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +# Check for test validator keys in fixtures +KEYSTORE_COUNT=$(ls "$TEST_FIXTURES_DIR/validator_keys"/keystore-*.json 2>/dev/null | wc -l | tr -d ' ') +if [ "$KEYSTORE_COUNT" -eq 0 ]; then + log_error "No keystore files found in $TEST_FIXTURES_DIR/validator_keys" + exit 1 +fi +log_info "Found $KEYSTORE_COUNT test keystore file(s)" + +# Verify test fixtures exist +if [ ! -f "$TEST_FIXTURES_DIR/source-cluster-lock.json" ] || [ ! -f "$TEST_FIXTURES_DIR/target-cluster-lock.json" ]; then + log_error "Test fixtures not found in $TEST_FIXTURES_DIR" + exit 1 +fi +log_info "Test fixtures verified" + +# Source .env for NETWORK, then override COMPOSE_FILE with test compose +if [ ! -f .env ]; then + log_warn ".env file not found, creating with NETWORK=hoodi" + echo "NETWORK=hoodi" > .env +fi + +source .env +NETWORK="${NETWORK:-hoodi}" + +# Override COMPOSE_FILE after sourcing .env (which may have its own COMPOSE_FILE) +export COMPOSE_FILE="$TEST_COMPOSE_FILES" + +log_info "Using network: $NETWORK" +log_info "Using compose files: $COMPOSE_FILE" + +# Create test output directory +mkdir -p "$TEST_OUTPUT_DIR" + +# Step 1: Start vc-lodestar via docker-compose +log_info "Step 1: Starting vc-lodestar via docker-compose..." + +docker compose --profile vc-lodestar up -d vc-lodestar + +sleep 2 + +# Verify container is running +if ! docker compose ps vc-lodestar | grep -q Up; then + log_error "Container failed to start. 
Checking logs:" + docker compose logs vc-lodestar 2>&1 || true + exit 1 +fi + +log_info "Container started successfully" + +# Step 2: Set up keystores (normally done by run.sh but we override entrypoint) +log_info "Step 2: Setting up keystores..." + +docker compose exec -T vc-lodestar sh -c ' + mkdir -p /opt/data/keystores /opt/data/secrets + for f in /home/charon/validator_keys/keystore-*.json; do + PUBKEY="0x$(grep "\"pubkey\"" "$f" | sed "s/.*: *\"\([^\"]*\)\".*/\1/")" + mkdir -p "/opt/data/keystores/$PUBKEY" + cp "$f" "/opt/data/keystores/$PUBKEY/voting-keystore.json" + cp "${f%.json}.txt" "/opt/data/secrets/$PUBKEY" + echo "Imported keystore for $PUBKEY" + done +' + +log_info "Keystores set up successfully" + +# Step 3: Stop container and import sample slashing protection data +log_info "Step 3: Importing sample slashing protection data..." + +docker compose stop vc-lodestar + +SAMPLE_ASDB="$TEST_FIXTURES_DIR/sample-slashing-protection.json" + +if VC=vc-lodestar "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$SAMPLE_ASDB"; then + log_info "Sample data imported successfully!" +else + log_error "Failed to import sample data" + exit 1 +fi + +# Start container again for export +docker compose --profile vc-lodestar up -d vc-lodestar +sleep 2 + +# Clean stale LevelDB lock file from previous import run +docker compose exec -T vc-lodestar rm -f /opt/data/validator-db/LOCK 2>/dev/null || true + +# Step 4: Test export using the actual script +log_info "Step 4: Testing export_asdb.sh script..." + +EXPORT_FILE="$TEST_OUTPUT_DIR/exported-asdb.json" + +if VC=vc-lodestar "$REPO_ROOT/scripts/edit/vc/export_asdb.sh" --output-file "$EXPORT_FILE"; then + log_info "Export script successful!" + log_info "Exported content:" + jq '.' "$EXPORT_FILE" + + # Verify exported data matches what we imported + EXPORTED_COUNT=$(jq '.data | length' "$EXPORT_FILE") + EXPORTED_ATTESTATIONS=$(jq '.data[0].signed_attestations | length' "$EXPORT_FILE") + log_info "Exported $EXPORTED_COUNT validator(s) with $EXPORTED_ATTESTATIONS attestation(s)" +else + log_error "Export script failed" + exit 1 +fi + +# Step 5: Run update-anti-slashing-db.sh to transform pubkeys +log_info "Step 5: Running update-anti-slashing-db.sh..." + +UPDATE_SCRIPT="$REPO_ROOT/scripts/edit/vc/update-anti-slashing-db.sh" +SOURCE_LOCK="$TEST_FIXTURES_DIR/source-cluster-lock.json" +TARGET_LOCK="$TEST_FIXTURES_DIR/target-cluster-lock.json" + +# Copy export to a working file that will be modified in place +UPDATED_FILE="$TEST_OUTPUT_DIR/updated-asdb.json" +cp "$EXPORT_FILE" "$UPDATED_FILE" + +log_info "Source pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$SOURCE_LOCK")" +log_info "Target pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$TARGET_LOCK")" + +if "$UPDATE_SCRIPT" "$UPDATED_FILE" "$SOURCE_LOCK" "$TARGET_LOCK"; then + log_info "Update successful!" + log_info "Updated content:" + jq '.' "$UPDATED_FILE" + + # Verify the pubkey was transformed + EXPORTED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$EXPORT_FILE") + UPDATED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$UPDATED_FILE") + + if [ -n "$EXPORTED_PUBKEY" ] && [ -n "$UPDATED_PUBKEY" ]; then + if [ "$EXPORTED_PUBKEY" != "$UPDATED_PUBKEY" ]; then + log_info "Pubkey transformation verified:" + log_info " Before: $EXPORTED_PUBKEY" + log_info " After: $UPDATED_PUBKEY" + else + log_error "Pubkey was NOT transformed - test fixture mismatch!" 
+ exit 1 + fi + else + log_error "No pubkey data in exported file - sample import may have failed" + exit 1 + fi +else + log_error "Update script failed" + exit 1 +fi + +# Step 6: Stop container before import (required by import script) +log_info "Step 6: Stopping vc-lodestar for import..." + +docker compose stop vc-lodestar + +# Step 7: Test import using the actual script +log_info "Step 7: Testing import_asdb.sh script..." + +if VC=vc-lodestar "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$UPDATED_FILE"; then + log_info "Import script successful!" +else + log_error "Import script failed" + exit 1 +fi + +echo "" +log_info "=========================================" +log_info "All tests passed successfully!" +log_info "=========================================" +log_info "" +log_info "Test artifacts in: $TEST_OUTPUT_DIR" +log_info " - exported-asdb.json (original export)" +log_info " - updated-asdb.json (after pubkey transformation)" diff --git a/scripts/edit/vc/test/test_nimbus_asdb.sh b/scripts/edit/vc/test/test_nimbus_asdb.sh new file mode 100755 index 00000000..8944b59d --- /dev/null +++ b/scripts/edit/vc/test/test_nimbus_asdb.sh @@ -0,0 +1,258 @@ +#!/usr/bin/env bash + +# Integration test for export/import ASDB scripts with Nimbus VC. +# +# This script: +# 1. Builds vc-nimbus image if needed +# 2. Starts vc-nimbus via docker-compose with test override (no charon dependency) +# 3. Sets up keystores in the container +# 4. Imports sample slashing protection data (with known pubkey and attestations) +# 5. Calls scripts/edit/vc/export_asdb.sh to export slashing protection +# 6. Runs update-anti-slashing-db.sh to transform pubkeys +# 7. Stops the container +# 8. Calls scripts/edit/vc/import_asdb.sh to import updated slashing protection +# +# Usage: ./scripts/edit/vc/test/test_nimbus_asdb.sh + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." && pwd)" +cd "$REPO_ROOT" + +# Test artifacts directories +TEST_OUTPUT_DIR="$SCRIPT_DIR/output" +TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" +TEST_COMPOSE_FILE="$SCRIPT_DIR/docker-compose.test.yml" +TEST_DATA_DIR="$SCRIPT_DIR/data/nimbus" +TEST_COMPOSE_FILES="docker-compose.yml:compose-vc.yml:$TEST_COMPOSE_FILE" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } + +cleanup() { + log_info "Cleaning up test resources..." + COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-nimbus down 2>/dev/null || true + # Keep TEST_OUTPUT_DIR for inspection + # Clean test data to avoid stale DB locks + rm -rf "$TEST_DATA_DIR" 2>/dev/null || true +} + +trap cleanup EXIT + +# Clean test data directory before starting (remove stale locks) +log_info "Preparing test environment..." +COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-nimbus down 2>/dev/null || true +rm -rf "$TEST_DATA_DIR" +mkdir -p "$TEST_DATA_DIR" + +# Copy run.sh into test data directory to satisfy the volume mount from base compose +# (compose merge keeps the original mount ./nimbus/run.sh:/home/user/data/run.sh, +# which conflicts with our test data mount unless we provide the file there) +cp "$REPO_ROOT/nimbus/run.sh" "$TEST_DATA_DIR/run.sh" + +# Check prerequisites +log_info "Checking prerequisites..." + +if ! 
docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +# Check for test validator keys in fixtures +KEYSTORE_COUNT=$(ls "$TEST_FIXTURES_DIR/validator_keys"/keystore-*.json 2>/dev/null | wc -l | tr -d ' ') +if [ "$KEYSTORE_COUNT" -eq 0 ]; then + log_error "No keystore files found in $TEST_FIXTURES_DIR/validator_keys" + exit 1 +fi +log_info "Found $KEYSTORE_COUNT test keystore file(s)" + +# Verify test fixtures exist +if [ ! -f "$TEST_FIXTURES_DIR/source-cluster-lock.json" ] || [ ! -f "$TEST_FIXTURES_DIR/target-cluster-lock.json" ]; then + log_error "Test fixtures not found in $TEST_FIXTURES_DIR" + exit 1 +fi +log_info "Test fixtures verified" + +# Source .env for NETWORK, then override COMPOSE_FILE with test compose +if [ ! -f .env ]; then + log_warn ".env file not found, creating with NETWORK=hoodi" + echo "NETWORK=hoodi" > .env +fi + +source .env +NETWORK="${NETWORK:-hoodi}" + +# Override COMPOSE_FILE after sourcing .env (which may have its own COMPOSE_FILE) +export COMPOSE_FILE="$TEST_COMPOSE_FILES" + +log_info "Using network: $NETWORK" +log_info "Using compose files: $COMPOSE_FILE" + +# Create test output directory +mkdir -p "$TEST_OUTPUT_DIR" + +# Step 0: Build vc-nimbus image if needed +log_info "Step 0: Building vc-nimbus image..." + +if ! docker compose --profile vc-nimbus build vc-nimbus; then + log_error "Failed to build vc-nimbus image" + exit 1 +fi +log_info "Image built successfully" + +# Step 1: Start vc-nimbus via docker-compose +log_info "Step 1: Starting vc-nimbus via docker-compose..." + +docker compose --profile vc-nimbus up -d vc-nimbus + +sleep 2 + +# Verify container is running +if ! docker compose ps vc-nimbus | grep -q Up; then + log_error "Container failed to start. Checking logs:" + docker compose logs vc-nimbus 2>&1 || true + exit 1 +fi + +log_info "Container started successfully" + +# Step 2: Set up keystores using nimbus_beacon_node deposits import +log_info "Step 2: Setting up keystores..." + +# Create a temporary directory in the container for importing +docker compose exec -T vc-nimbus sh -c ' + mkdir -p /home/user/data/validators /tmp/keyimport + + for f in /home/validator_keys/keystore-*.json; do + echo "Importing key from $f" + + # Read password + password=$(cat "${f%.json}.txt") + + # Copy keystore to temp dir + cp "$f" /tmp/keyimport/ + + # Import using nimbus_beacon_node + echo "$password" | /home/user/nimbus_beacon_node deposits import \ + --data-dir=/home/user/data \ + /tmp/keyimport + + # Clean temp dir + rm /tmp/keyimport/* + done + + rm -rf /tmp/keyimport + echo "Done importing keystores" +' + +log_info "Keystores set up successfully" + +# Step 3: Stop container and import sample slashing protection data +log_info "Step 3: Importing sample slashing protection data..." + +docker compose stop vc-nimbus + +SAMPLE_ASDB="$TEST_FIXTURES_DIR/sample-slashing-protection.json" + +if VC=vc-nimbus "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$SAMPLE_ASDB"; then + log_info "Sample data imported successfully!" +else + log_error "Failed to import sample data" + exit 1 +fi + +# Start container again for export +docker compose --profile vc-nimbus up -d vc-nimbus +sleep 2 + +# Step 4: Test export using the actual script +log_info "Step 4: Testing export_asdb.sh script..." + +EXPORT_FILE="$TEST_OUTPUT_DIR/exported-asdb.json" + +if VC=vc-nimbus "$REPO_ROOT/scripts/edit/vc/export_asdb.sh" --output-file "$EXPORT_FILE"; then + log_info "Export script successful!" + log_info "Exported content:" + jq '.' 
"$EXPORT_FILE" + + # Verify exported data matches what we imported + EXPORTED_COUNT=$(jq '.data | length' "$EXPORT_FILE") + EXPORTED_ATTESTATIONS=$(jq '.data[0].signed_attestations | length' "$EXPORT_FILE" 2>/dev/null || echo "0") + log_info "Exported $EXPORTED_COUNT validator(s) with $EXPORTED_ATTESTATIONS attestation(s)" +else + log_error "Export script failed" + exit 1 +fi + +# Step 5: Run update-anti-slashing-db.sh to transform pubkeys +log_info "Step 5: Running update-anti-slashing-db.sh..." + +UPDATE_SCRIPT="$REPO_ROOT/scripts/edit/vc/update-anti-slashing-db.sh" +SOURCE_LOCK="$TEST_FIXTURES_DIR/source-cluster-lock.json" +TARGET_LOCK="$TEST_FIXTURES_DIR/target-cluster-lock.json" + +# Copy export to a working file that will be modified in place +UPDATED_FILE="$TEST_OUTPUT_DIR/updated-asdb.json" +cp "$EXPORT_FILE" "$UPDATED_FILE" + +log_info "Source pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$SOURCE_LOCK")" +log_info "Target pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$TARGET_LOCK")" + +if "$UPDATE_SCRIPT" "$UPDATED_FILE" "$SOURCE_LOCK" "$TARGET_LOCK"; then + log_info "Update successful!" + log_info "Updated content:" + jq '.' "$UPDATED_FILE" + + # Verify the pubkey was transformed + EXPORTED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$EXPORT_FILE") + UPDATED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$UPDATED_FILE") + + if [ -n "$EXPORTED_PUBKEY" ] && [ -n "$UPDATED_PUBKEY" ]; then + if [ "$EXPORTED_PUBKEY" != "$UPDATED_PUBKEY" ]; then + log_info "Pubkey transformation verified:" + log_info " Before: $EXPORTED_PUBKEY" + log_info " After: $UPDATED_PUBKEY" + else + log_error "Pubkey was NOT transformed - test fixture mismatch!" + exit 1 + fi + else + log_error "No pubkey data in exported file - sample import may have failed" + exit 1 + fi +else + log_error "Update script failed" + exit 1 +fi + +# Step 6: Stop container before import (required by import script) +log_info "Step 6: Stopping vc-nimbus for import..." + +docker compose stop vc-nimbus + +# Step 7: Test import using the actual script +log_info "Step 7: Testing import_asdb.sh script..." + +if VC=vc-nimbus "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$UPDATED_FILE"; then + log_info "Import script successful!" +else + log_error "Import script failed" + exit 1 +fi + +echo "" +log_info "=========================================" +log_info "All tests passed successfully!" +log_info "=========================================" +log_info "" +log_info "Test artifacts in: $TEST_OUTPUT_DIR" +log_info " - exported-asdb.json (original export)" +log_info " - updated-asdb.json (after pubkey transformation)" diff --git a/scripts/edit/vc/test/test_prysm_asdb.sh b/scripts/edit/vc/test/test_prysm_asdb.sh new file mode 100755 index 00000000..4bf834b3 --- /dev/null +++ b/scripts/edit/vc/test/test_prysm_asdb.sh @@ -0,0 +1,275 @@ +#!/usr/bin/env bash + +# Integration test for export/import ASDB scripts with Prysm VC. +# +# This script: +# 1. Starts vc-prysm via docker-compose with test override (no charon dependency) +# 2. Sets up wallet and keystores in the container +# 3. Imports sample slashing protection data (with known pubkey and attestations) +# 4. Calls scripts/edit/vc/export_asdb.sh to export slashing protection +# 5. Runs update-anti-slashing-db.sh to transform pubkeys +# 6. Stops the container +# 7. 
Calls scripts/edit/vc/import_asdb.sh to import updated slashing protection +# +# Usage: ./scripts/edit/vc/test/test_prysm_asdb.sh + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." && pwd)" +cd "$REPO_ROOT" + +# Test artifacts directories +TEST_OUTPUT_DIR="$SCRIPT_DIR/output" +TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" +TEST_COMPOSE_FILE="$SCRIPT_DIR/docker-compose.test.yml" +TEST_DATA_DIR="$SCRIPT_DIR/data/prysm" +TEST_COMPOSE_FILES="docker-compose.yml:compose-vc.yml:$TEST_COMPOSE_FILE" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } + +cleanup() { + log_info "Cleaning up test resources..." + COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-prysm down 2>/dev/null || true + # Keep TEST_OUTPUT_DIR for inspection + # Clean test data to avoid stale DB locks + rm -rf "$TEST_DATA_DIR" 2>/dev/null || true +} + +trap cleanup EXIT + +# Clean test data directory before starting (remove stale locks) +log_info "Preparing test environment..." +COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-prysm down 2>/dev/null || true +rm -rf "$TEST_DATA_DIR" +mkdir -p "$TEST_DATA_DIR" + +# Copy run.sh into test data directory to satisfy the volume mount from base compose +cp "$REPO_ROOT/prysm/run.sh" "$TEST_DATA_DIR/run.sh" + +# Check prerequisites +log_info "Checking prerequisites..." + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +# Check for test validator keys in fixtures +KEYSTORE_COUNT=$(ls "$TEST_FIXTURES_DIR/validator_keys"/keystore-*.json 2>/dev/null | wc -l | tr -d ' ') +if [ "$KEYSTORE_COUNT" -eq 0 ]; then + log_error "No keystore files found in $TEST_FIXTURES_DIR/validator_keys" + exit 1 +fi +log_info "Found $KEYSTORE_COUNT test keystore file(s)" + +# Verify test fixtures exist +if [ ! -f "$TEST_FIXTURES_DIR/source-cluster-lock.json" ] || [ ! -f "$TEST_FIXTURES_DIR/target-cluster-lock.json" ]; then + log_error "Test fixtures not found in $TEST_FIXTURES_DIR" + exit 1 +fi +log_info "Test fixtures verified" + +# Source .env for NETWORK, then override COMPOSE_FILE with test compose +if [ ! -f .env ]; then + log_warn ".env file not found, creating with NETWORK=hoodi" + echo "NETWORK=hoodi" > .env +fi + +source .env +NETWORK="${NETWORK:-hoodi}" + +# Override COMPOSE_FILE after sourcing .env (which may have its own COMPOSE_FILE) +export COMPOSE_FILE="$TEST_COMPOSE_FILES" + +log_info "Using network: $NETWORK" +log_info "Using compose files: $COMPOSE_FILE" + +# Create test output directory +mkdir -p "$TEST_OUTPUT_DIR" + +# Step 1: Start vc-prysm via docker-compose +log_info "Step 1: Starting vc-prysm via docker-compose..." + +docker compose --profile vc-prysm up -d vc-prysm + +sleep 2 + +# Verify container is running +if ! docker compose ps vc-prysm | grep -q Up; then + log_error "Container failed to start. Checking logs:" + docker compose logs vc-prysm 2>&1 || true + exit 1 +fi + +log_info "Container started successfully" + +# Step 2: Set up wallet and keystores (similar to run.sh) +# Note: We use /data/vc/wallet so it's persisted in the test data directory +log_info "Step 2: Setting up wallet and keystores..." 
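+# The block below mirrors prysm/run.sh: create a wallet, import the test keystores,
+# then briefly start the validator (killed by `timeout`) so Prysm creates its
+# validator database, which the later slashing-protection import relies on.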
+ +docker compose exec -T vc-prysm sh -c ' + WALLET_DIR="/data/vc/wallet" + WALLET_PASSWORD="prysm-validator-secret" + + # Create wallet + rm -rf $WALLET_DIR + mkdir -p $WALLET_DIR + echo $WALLET_PASSWORD > /data/vc/wallet-password.txt + + /app/cmd/validator/validator wallet create \ + --accept-terms-of-use \ + --wallet-password-file=/data/vc/wallet-password.txt \ + --keymanager-kind=direct \ + --wallet-dir="$WALLET_DIR" + + # Import keys + tmpkeys="/home/validator_keys/tmpkeys" + mkdir -p ${tmpkeys} + + for f in /home/charon/validator_keys/keystore-*.json; do + echo "Importing key ${f}" + + # Copy keystore file to tmpkeys/ directory + cp "${f}" "${tmpkeys}" + + # Import keystore with password + /app/cmd/validator/validator accounts import \ + --accept-terms-of-use=true \ + --wallet-dir="$WALLET_DIR" \ + --keys-dir="${tmpkeys}" \ + --account-password-file="${f//json/txt}" \ + --wallet-password-file=/data/vc/wallet-password.txt + + # Delete tmpkeys/keystore-*.json file + filename="$(basename ${f})" + rm "${tmpkeys}/${filename}" + done + + rm -r ${tmpkeys} + + # Initialize the validator DB by starting and immediately stopping the validator + # This creates the necessary database structure for slashing protection import + echo "Initializing validator database..." + timeout 3 /app/cmd/validator/validator \ + --wallet-dir="$WALLET_DIR" \ + --accept-terms-of-use=true \ + --datadir="/data/vc" \ + --wallet-password-file="/data/vc/wallet-password.txt" \ + --beacon-rpc-provider="http://localhost:3600" \ + --hoodi || true + + echo "Done setting up wallet and initializing DB" +' + +log_info "Wallet and keystores set up successfully" + +# Step 3: Stop container and import sample slashing protection data +log_info "Step 3: Importing sample slashing protection data..." + +docker compose stop vc-prysm + +SAMPLE_ASDB="$TEST_FIXTURES_DIR/sample-slashing-protection.json" + +if VC=vc-prysm "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$SAMPLE_ASDB"; then + log_info "Sample data imported successfully!" +else + log_error "Failed to import sample data" + exit 1 +fi + +# Start container again for export +docker compose --profile vc-prysm up -d vc-prysm +sleep 2 + +# Step 4: Test export using the actual script +log_info "Step 4: Testing export_asdb.sh script..." + +EXPORT_FILE="$TEST_OUTPUT_DIR/exported-asdb.json" + +if VC=vc-prysm "$REPO_ROOT/scripts/edit/vc/export_asdb.sh" --output-file "$EXPORT_FILE"; then + log_info "Export script successful!" + log_info "Exported content:" + jq '.' "$EXPORT_FILE" + + # Verify exported data matches what we imported + EXPORTED_COUNT=$(jq '.data | length' "$EXPORT_FILE") + EXPORTED_ATTESTATIONS=$(jq '.data[0].signed_attestations | length' "$EXPORT_FILE" 2>/dev/null || echo "0") + log_info "Exported $EXPORTED_COUNT validator(s) with $EXPORTED_ATTESTATIONS attestation(s)" +else + log_error "Export script failed" + exit 1 +fi + +# Step 5: Run update-anti-slashing-db.sh to transform pubkeys +log_info "Step 5: Running update-anti-slashing-db.sh..." 
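+# update-anti-slashing-db.sh rewrites the exported file in place: each pubkey is
+# located in the source cluster-lock's public_shares and replaced with the share
+# at the same validator/operator indices in the target cluster-lock.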
+ +UPDATE_SCRIPT="$REPO_ROOT/scripts/edit/vc/update-anti-slashing-db.sh" +SOURCE_LOCK="$TEST_FIXTURES_DIR/source-cluster-lock.json" +TARGET_LOCK="$TEST_FIXTURES_DIR/target-cluster-lock.json" + +# Copy export to a working file that will be modified in place +UPDATED_FILE="$TEST_OUTPUT_DIR/updated-asdb.json" +cp "$EXPORT_FILE" "$UPDATED_FILE" + +log_info "Source pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$SOURCE_LOCK")" +log_info "Target pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$TARGET_LOCK")" + +if "$UPDATE_SCRIPT" "$UPDATED_FILE" "$SOURCE_LOCK" "$TARGET_LOCK"; then + log_info "Update successful!" + log_info "Updated content:" + jq '.' "$UPDATED_FILE" + + # Verify the pubkey was transformed + EXPORTED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$EXPORT_FILE") + UPDATED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$UPDATED_FILE") + + if [ -n "$EXPORTED_PUBKEY" ] && [ -n "$UPDATED_PUBKEY" ]; then + if [ "$EXPORTED_PUBKEY" != "$UPDATED_PUBKEY" ]; then + log_info "Pubkey transformation verified:" + log_info " Before: $EXPORTED_PUBKEY" + log_info " After: $UPDATED_PUBKEY" + else + log_error "Pubkey was NOT transformed - test fixture mismatch!" + exit 1 + fi + else + log_error "No pubkey data in exported file - sample import may have failed" + exit 1 + fi +else + log_error "Update script failed" + exit 1 +fi + +# Step 6: Stop container before import (required by import script) +log_info "Step 6: Stopping vc-prysm for import..." + +docker compose stop vc-prysm + +# Step 7: Test import using the actual script +log_info "Step 7: Testing import_asdb.sh script..." + +if VC=vc-prysm "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$UPDATED_FILE"; then + log_info "Import script successful!" +else + log_error "Import script failed" + exit 1 +fi + +echo "" +log_info "=========================================" +log_info "All tests passed successfully!" +log_info "=========================================" +log_info "" +log_info "Test artifacts in: $TEST_OUTPUT_DIR" +log_info " - exported-asdb.json (original export)" +log_info " - updated-asdb.json (after pubkey transformation)" diff --git a/scripts/edit/vc/test/test_teku_asdb.sh b/scripts/edit/vc/test/test_teku_asdb.sh new file mode 100755 index 00000000..4b4048eb --- /dev/null +++ b/scripts/edit/vc/test/test_teku_asdb.sh @@ -0,0 +1,211 @@ +#!/usr/bin/env bash + +# Integration test for export/import ASDB scripts with Teku VC. +# +# This script: +# 1. Starts vc-teku via docker-compose with test override (no charon dependency) +# 2. Imports sample slashing protection data (with known pubkey and attestations) +# 3. Calls scripts/edit/vc/export_asdb.sh to export slashing protection +# 4. Runs update-anti-slashing-db.sh to transform pubkeys +# 5. Stops the container +# 6. Calls scripts/edit/vc/import_asdb.sh to import updated slashing protection +# +# Usage: ./scripts/edit/vc/test/test_teku_asdb.sh + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." 
&& pwd)" +cd "$REPO_ROOT" + +# Test artifacts directories +TEST_OUTPUT_DIR="$SCRIPT_DIR/output" +TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" +TEST_COMPOSE_FILE="$SCRIPT_DIR/docker-compose.test.yml" +TEST_DATA_DIR="$SCRIPT_DIR/data/teku" +TEST_COMPOSE_FILES="docker-compose.yml:compose-vc.yml:$TEST_COMPOSE_FILE" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } + +cleanup() { + log_info "Cleaning up test resources..." + COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-teku down 2>/dev/null || true + # Keep TEST_OUTPUT_DIR for inspection + # Clean test data to avoid stale DB locks + rm -rf "$TEST_DATA_DIR" 2>/dev/null || true +} + +trap cleanup EXIT + +# Clean test data directory before starting (remove stale locks) +log_info "Preparing test environment..." +COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-teku down 2>/dev/null || true +rm -rf "$TEST_DATA_DIR" +mkdir -p "$TEST_DATA_DIR" + +# Check prerequisites +log_info "Checking prerequisites..." + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +# Check for test validator keys in fixtures +KEYSTORE_COUNT=$(ls "$TEST_FIXTURES_DIR/validator_keys"/keystore-*.json 2>/dev/null | wc -l | tr -d ' ') +if [ "$KEYSTORE_COUNT" -eq 0 ]; then + log_error "No keystore files found in $TEST_FIXTURES_DIR/validator_keys" + exit 1 +fi +log_info "Found $KEYSTORE_COUNT test keystore file(s)" + +# Verify test fixtures exist +if [ ! -f "$TEST_FIXTURES_DIR/source-cluster-lock.json" ] || [ ! -f "$TEST_FIXTURES_DIR/target-cluster-lock.json" ]; then + log_error "Test fixtures not found in $TEST_FIXTURES_DIR" + exit 1 +fi +log_info "Test fixtures verified" + +# Source .env for NETWORK, then override COMPOSE_FILE with test compose +if [ ! -f .env ]; then + log_warn ".env file not found, creating with NETWORK=hoodi" + echo "NETWORK=hoodi" > .env +fi + +source .env +NETWORK="${NETWORK:-hoodi}" + +# Override COMPOSE_FILE after sourcing .env (which may have its own COMPOSE_FILE) +export COMPOSE_FILE="$TEST_COMPOSE_FILES" + +log_info "Using network: $NETWORK" +log_info "Using compose files: $COMPOSE_FILE" + +# Create test output directory +mkdir -p "$TEST_OUTPUT_DIR" + +# Step 1: Start vc-teku via docker-compose +log_info "Step 1: Starting vc-teku via docker-compose..." + +docker compose --profile vc-teku up -d vc-teku + +sleep 2 + +# Verify container is running +if ! docker compose ps vc-teku | grep -q Up; then + log_error "Container failed to start. Checking logs:" + docker compose logs vc-teku 2>&1 || true + exit 1 +fi + +log_info "Container started successfully" + +# Step 2: Stop container and import sample slashing protection data +log_info "Step 2: Importing sample slashing protection data..." + +docker compose stop vc-teku + +SAMPLE_ASDB="$TEST_FIXTURES_DIR/sample-slashing-protection.json" + +if VC=vc-teku "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$SAMPLE_ASDB"; then + log_info "Sample data imported successfully!" +else + log_error "Failed to import sample data" + exit 1 +fi + +# Start container again for export +docker compose --profile vc-teku up -d vc-teku +sleep 2 + +# Step 3: Test export using the actual script +log_info "Step 3: Testing export_asdb.sh script..." 
+ +EXPORT_FILE="$TEST_OUTPUT_DIR/exported-asdb.json" + +if VC=vc-teku "$REPO_ROOT/scripts/edit/vc/export_asdb.sh" --output-file "$EXPORT_FILE"; then + log_info "Export script successful!" + log_info "Exported content:" + jq '.' "$EXPORT_FILE" + + # Verify exported data matches what we imported + EXPORTED_COUNT=$(jq '.data | length' "$EXPORT_FILE") + EXPORTED_ATTESTATIONS=$(jq '.data[0].signed_attestations | length' "$EXPORT_FILE" 2>/dev/null || echo "0") + log_info "Exported $EXPORTED_COUNT validator(s) with $EXPORTED_ATTESTATIONS attestation(s)" +else + log_error "Export script failed" + exit 1 +fi + +# Step 4: Run update-anti-slashing-db.sh to transform pubkeys +log_info "Step 4: Running update-anti-slashing-db.sh..." + +UPDATE_SCRIPT="$REPO_ROOT/scripts/edit/vc/update-anti-slashing-db.sh" +SOURCE_LOCK="$TEST_FIXTURES_DIR/source-cluster-lock.json" +TARGET_LOCK="$TEST_FIXTURES_DIR/target-cluster-lock.json" + +# Copy export to a working file that will be modified in place +UPDATED_FILE="$TEST_OUTPUT_DIR/updated-asdb.json" +cp "$EXPORT_FILE" "$UPDATED_FILE" + +log_info "Source pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$SOURCE_LOCK")" +log_info "Target pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$TARGET_LOCK")" + +if "$UPDATE_SCRIPT" "$UPDATED_FILE" "$SOURCE_LOCK" "$TARGET_LOCK"; then + log_info "Update successful!" + log_info "Updated content:" + jq '.' "$UPDATED_FILE" + + # Verify the pubkey was transformed + EXPORTED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$EXPORT_FILE") + UPDATED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$UPDATED_FILE") + + if [ -n "$EXPORTED_PUBKEY" ] && [ -n "$UPDATED_PUBKEY" ]; then + if [ "$EXPORTED_PUBKEY" != "$UPDATED_PUBKEY" ]; then + log_info "Pubkey transformation verified:" + log_info " Before: $EXPORTED_PUBKEY" + log_info " After: $UPDATED_PUBKEY" + else + log_error "Pubkey was NOT transformed - test fixture mismatch!" + exit 1 + fi + else + log_error "No pubkey data in exported file - sample import may have failed" + exit 1 + fi +else + log_error "Update script failed" + exit 1 +fi + +# Step 5: Stop container before import (required by import script) +log_info "Step 5: Stopping vc-teku for import..." + +docker compose stop vc-teku + +# Step 6: Test import using the actual script +log_info "Step 6: Testing import_asdb.sh script..." + +if VC=vc-teku "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$UPDATED_FILE"; then + log_info "Import script successful!" +else + log_error "Import script failed" + exit 1 +fi + +echo "" +log_info "=========================================" +log_info "All tests passed successfully!" +log_info "=========================================" +log_info "" +log_info "Test artifacts in: $TEST_OUTPUT_DIR" +log_info " - exported-asdb.json (original export)" +log_info " - updated-asdb.json (after pubkey transformation)" diff --git a/scripts/edit/vc/update-anti-slashing-db.sh b/scripts/edit/vc/update-anti-slashing-db.sh new file mode 100755 index 00000000..688002a9 --- /dev/null +++ b/scripts/edit/vc/update-anti-slashing-db.sh @@ -0,0 +1,233 @@ +#!/usr/bin/env bash + +# Script to update EIP-3076 anti-slashing DB by replacing pubkey values +# based on lookup in source and target cluster-lock.json files. 
+# +# Usage: update-anti-slashing-db.sh +# +# Arguments: +# eip3076-file - Path to EIP-3076 JSON file to update in place +# source-cluster-lock - Path to source cluster-lock.json (original) +# target-cluster-lock - Path to target cluster-lock.json (new, from output/) +# +# The script traverses the EIP-3076 JSON file and finds all "pubkey" values in the +# data array. For each pubkey, it looks up the value in the source cluster-lock.json's +# distributed_validators[].public_shares[] arrays, remembers the indices, and then +# replaces the pubkey with the corresponding value from the target cluster-lock.json +# at the same indices. + +set -euo pipefail + +# Check if jq is installed +if ! command -v jq &> /dev/null; then + echo "Error: jq is required but not installed. Please install jq first." >&2 + exit 1 +fi + +# Validate arguments +if [ "$#" -ne 3 ]; then + echo "Usage: $0 " >&2 + exit 1 +fi + +EIP3076_FILE="$1" +SOURCE_LOCK="$2" +TARGET_LOCK="$3" + +# Validate files exist +if [ ! -f "$EIP3076_FILE" ]; then + echo "Error: EIP-3076 file not found: $EIP3076_FILE" >&2 + exit 1 +fi + +if [ ! -f "$SOURCE_LOCK" ]; then + echo "Error: Source cluster-lock file not found: $SOURCE_LOCK" >&2 + exit 1 +fi + +if [ ! -f "$TARGET_LOCK" ]; then + echo "Error: Target cluster-lock file not found: $TARGET_LOCK" >&2 + exit 1 +fi + +# Validate all files contain valid JSON +if ! jq empty "$EIP3076_FILE" 2>/dev/null; then + echo "Error: EIP-3076 file contains invalid JSON: $EIP3076_FILE" >&2 + exit 1 +fi + +if ! jq empty "$SOURCE_LOCK" 2>/dev/null; then + echo "Error: Source cluster-lock file contains invalid JSON: $SOURCE_LOCK" >&2 + exit 1 +fi + +if ! jq empty "$TARGET_LOCK" 2>/dev/null; then + echo "Error: Target cluster-lock file contains invalid JSON: $TARGET_LOCK" >&2 + exit 1 +fi + +# Create temporary files for processing +TEMP_FILE=$(mktemp) +trap 'rm -f "$TEMP_FILE" "${TEMP_FILE}.tmp"' EXIT INT TERM + +# Function to find pubkey in cluster-lock and return validator_index,share_index +# Returns empty string if not found +find_pubkey_indices() { + local pubkey="$1" + local cluster_lock_file="$2" + + # Search through distributed_validators and public_shares + jq -r --arg pubkey "$pubkey" ' + .distributed_validators as $validators | + foreach range(0; $validators | length) as $v_idx ( + null; + . ; + $validators[$v_idx].public_shares as $shares | + foreach range(0; $shares | length) as $s_idx ( + null; + . ; + if $shares[$s_idx] == $pubkey then + "\($v_idx),\($s_idx)" + else + empty + end + ) + ) | select(. 
!= null) + ' "$cluster_lock_file" | head -n 1 +} + +# Function to get pubkey from cluster-lock at specific indices +get_pubkey_at_indices() { + local validator_idx="$1" + local share_idx="$2" + local cluster_lock_file="$3" + + jq -r --argjson v_idx "$validator_idx" --argjson s_idx "$share_idx" ' + .distributed_validators[$v_idx].public_shares[$s_idx] + ' "$cluster_lock_file" +} + +echo "Reading EIP-3076 file: $EIP3076_FILE" +echo "Source cluster-lock: $SOURCE_LOCK" +echo "Target cluster-lock: $TARGET_LOCK" +echo "" + +# Validate cluster-lock structure +source_validators=$(jq '.distributed_validators | length' "$SOURCE_LOCK") +target_validators=$(jq '.distributed_validators | length' "$TARGET_LOCK") + +# Validate that we got valid numeric values +if [ -z "$source_validators" ] || [ "$source_validators" = "null" ]; then + echo "Error: Source cluster-lock missing 'distributed_validators' field" >&2 + exit 1 +fi + +if [ -z "$target_validators" ] || [ "$target_validators" = "null" ]; then + echo "Error: Target cluster-lock missing 'distributed_validators' field" >&2 + exit 1 +fi + +echo "Source cluster-lock has $source_validators validators" +echo "Target cluster-lock has $target_validators validators" + +if [ "$source_validators" -eq 0 ]; then + echo "Error: Source cluster-lock has no validators" >&2 + exit 1 +fi + +if [ "$target_validators" -eq 0 ]; then + echo "Error: Target cluster-lock has no validators" >&2 + exit 1 +fi + +# Verify that target has at least as many validators as source +if [ "$target_validators" -lt "$source_validators" ]; then + echo "Error: Target cluster-lock has fewer validators ($target_validators) than source ($source_validators)" >&2 + echo " This may result in missing pubkey replacements" >&2 + exit 1 +fi + +echo "" + +# Get all unique pubkeys from the data array +# Note: The same pubkey may appear multiple times, so we deduplicate with sort -u +pubkeys=$(jq -r '.data[].pubkey' "$EIP3076_FILE" | sort -u) + +if [ -z "$pubkeys" ]; then + echo "Warning: No pubkeys found in EIP-3076 file" >&2 + exit 0 +fi + +pubkey_count=$(grep -c '^' <<< "$pubkeys") +echo "Found $pubkey_count unique pubkey(s) to process" +echo "" + +# Copy original file to temp file, we'll modify it in place +cp "$EIP3076_FILE" "$TEMP_FILE" + +# Process each pubkey +while IFS= read -r old_pubkey; do + echo "Processing pubkey: $old_pubkey" + + # Find indices in source cluster-lock + indices=$(find_pubkey_indices "$old_pubkey" "$SOURCE_LOCK") + + if [ -z "$indices" ]; then + echo " Error: Pubkey not found in source cluster-lock.json" >&2 + echo " Cannot proceed without mapping for all pubkeys" >&2 + exit 1 + fi + + # Split indices + validator_idx=$(echo "$indices" | cut -d',' -f1) + share_idx=$(echo "$indices" | cut -d',' -f2) + + echo " Found at distributed_validators[$validator_idx].public_shares[$share_idx]" + + # Verify target has sufficient validators + if [ "$validator_idx" -ge "$target_validators" ]; then + echo " Error: Target cluster-lock.json doesn't have validator at index $validator_idx" >&2 + echo " Target has only $target_validators validators" >&2 + exit 1 + fi + + # Verify target validator has sufficient public_shares + target_share_count=$(jq --argjson v_idx "$validator_idx" '.distributed_validators[$v_idx].public_shares | length' "$TARGET_LOCK") + if [ "$share_idx" -ge "$target_share_count" ]; then + echo " Error: Target cluster-lock.json validator[$validator_idx] doesn't have share at index $share_idx" >&2 + echo " Target validator has only $target_share_count shares" >&2 + exit 1 + 
fi + + # Get corresponding pubkey from target cluster-lock + new_pubkey=$(get_pubkey_at_indices "$validator_idx" "$share_idx" "$TARGET_LOCK") + + if [ -z "$new_pubkey" ] || [ "$new_pubkey" = "null" ]; then + echo " Error: Could not find pubkey at same indices in target cluster-lock.json" >&2 + exit 1 + fi + + echo " Replacing with: $new_pubkey" + + # Replace the pubkey in the JSON data + # Note: The same pubkey may appear multiple times in the data array (one per validator). + # This filter will update ALL occurrences of the old pubkey with the new one. + # We modify the temp file in place using jq's output redirection + jq --arg old "$old_pubkey" --arg new "$new_pubkey" ' + (.data[] | select(.pubkey == $old) | .pubkey) |= $new + ' "$TEMP_FILE" > "${TEMP_FILE}.tmp" && mv "${TEMP_FILE}.tmp" "$TEMP_FILE" + + echo " Done" + echo "" +done <<< "$pubkeys" + +# Validate the output is valid JSON +if ! jq empty "$TEMP_FILE" 2>/dev/null; then + echo "Error: Generated invalid JSON" >&2 + exit 1 +fi + +# Replace original file with updated version +cp "$TEMP_FILE" "$EIP3076_FILE" + +echo "Successfully updated $EIP3076_FILE"