diff --git a/README.md b/README.md
index 8bfc2a7..77970cf 100644
--- a/README.md
+++ b/README.md
@@ -24,6 +24,7 @@ A single command line quickstart to spin up lean node(s)
 3. **yq**: YAML processor for automated configuration parsing
    - Install on macOS: `brew install yq`
    - Install on Linux: See [yq installation guide](https://github.com/mikefarah/yq#install)
+4. **Python 3 + PyYAML** (optional, for the leanpoint upstreams sync): required only if you use the automatic leanpoint upstreams sync (tooling server). Install with `pip install pyyaml` or `uv add pyyaml`.
 
 ## Quick Start
 
@@ -95,6 +96,54 @@ NETWORK_DIR=local-devnet ./spin-node.sh --node all --generateGenesis --aggregato
 # is updated in validator-config.yaml before nodes are started
 ```
 
+### Leanpoint deployment
+
+After validator nodes are spun up, leanpoint is deployed so it can monitor them. Behavior depends on the deployment mode:
+
+- **Local deployment** (`NETWORK_DIR=local-devnet`, `deployment_mode: local`): Leanpoint runs **locally**. `sync-leanpoint-upstreams.sh` generates `upstreams.json` (with `--docker` so the container can reach host validators at `host.docker.internal`), writes it to `/data/upstreams.json`, pulls the latest image, and starts a local Docker container. The UI is served at http://localhost:5555. The container is removed on Ctrl+C cleanup or when you run with `--stop`.
+- **Ansible/remote deployment**: Leanpoint is updated on the **tooling server**. The script rsyncs `upstreams.json` to the server, pulls the latest image there, and recreates the remote container.
+
+**What runs:**
+1. `convert-validator-config.py` reads `validator-config.yaml` and generates `upstreams.json` (validator URLs for health checks).
+2. `sync-leanpoint-upstreams.sh` either deploys leanpoint locally (local devnet) or syncs to the tooling server and recreates the remote container (Ansible).
+
+**Remote defaults:** Tooling server `46.225.10.32`, user `root`, remote path `/etc/leanpoint/upstreams.json`, container name `leanpoint`. Override these with env vars (see the script header in `sync-leanpoint-upstreams.sh`).
+
+**SSH key for remote sync:** With Ansible deployment, the tooling server may require a specific SSH key. Pass `--sshKey ~/.ssh/id_ed25519_github` (or `--private-key`) so the sync can succeed.
+
+**Skip via flag:** Pass `--skip-leanpoint` to `spin-node.sh` to skip leanpoint deployment (local and remote). Alternatively, set `LEANPOINT_SYNC_DISABLED=1`. The step is also skipped automatically when the convert script or the validator config is missing.
+
+**Standalone use of the convert script:** You can generate `upstreams.json` for local leanpoint without the tooling server:
+
+```sh
+# From the lean-quickstart root
+python3 convert-validator-config.py local-devnet/genesis/validator-config.yaml upstreams.json
+# With --docker for leanpoint in Docker reaching a host devnet:
+python3 convert-validator-config.py local-devnet/genesis/validator-config.yaml upstreams-local-docker.json --docker
+```
+
+Requires Python 3 and PyYAML (`pip install pyyaml`).
+
+### Remote Observability Stack
+
+Every Ansible deployment automatically deploys an observability stack alongside each lean node on remote hosts. No additional flags are needed.
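For reference, the per-host scrape config that the role renders (via `prometheus.yml.j2`) can be sketched in Python. This is an illustrative model, not code from the repo: the ports and the central `remote_write` URL shown are the defaults from `ansible/roles/observability/defaults/main.yml`, and `172.17.0.1` is the Docker bridge gateway the template targets.

```python
import json

def render_prometheus_config(node_name, metrics_port,
                             node_exporter_port=9100, cadvisor_port=9098,
                             remote_write_url="http://46.225.10.32:9090/api/v1/write"):
    """Build a dict form of the per-host prometheus.yml the role templates."""
    host = "172.17.0.1"  # Docker bridge gateway, as in prometheus.yml.j2

    def target(port, type_label):
        # one static_config entry: a single target plus identifying labels
        return {"targets": [f"{host}:{port}"],
                "labels": {"type": type_label, "node_id": node_name}}

    return {
        "global": {"scrape_interval": "15s"},
        "scrape_configs": [{
            "job_name": node_name,
            "static_configs": [
                target(metrics_port, "app"),         # lean node metrics
                target(node_exporter_port, "node"),  # system metrics
                target(cadvisor_port, "docker"),     # container metrics
            ],
            # copy node_id into the instance label, as the template does
            "relabel_configs": [{"source_labels": ["node_id"],
                                 "target_label": "instance"}],
        }],
        # forward everything to the central prometheus
        "remote_write": [{"url": remote_write_url}],
    }

cfg = render_prometheus_config("zeam_0", 9095)
print(json.dumps(cfg, indent=2))
```

Each target carries a `node_id` label that the relabel rule copies into `instance`, which is how the central prometheus tells hosts apart after `remote_write`.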
+
+**What gets deployed on each remote host:**
+- **cadvisor** - container metrics
+- **node-exporter** - system metrics
+- **prometheus** - scrapes local targets, remote_writes to the central server
+- **promtail** - collects lean node container logs and pushes them to Loki
+
+**How it works:**
+- The local prometheus on each host scrapes the lean node (at its `metricsPort`), cadvisor, node-exporter, and itself, then forwards all data to the central prometheus via `remote_write`
+- Promtail discovers the lean node container via the Docker socket and pushes its logs to the central Loki
+
+**Key properties:**
+- **Idempotent**: cadvisor and node-exporter are started only if not already running; prometheus and promtail are recreated on each run to pick up config changes (prometheus data persists on the host)
+- **Persistent**: observability containers are not stopped when lean nodes are stopped; they run independently
+- **Configurable**: central endpoints, images, and ports can be overridden in `ansible/roles/observability/defaults/main.yml`
+- **Remote config path**: `/opt/lean-quickstart/observability/` on each host
+
 ## Args
 
 1. `NETWORK_DIR` is an env to specify the network directory. It should have a `genesis` directory with the genesis config. A `data` folder will be created inside this `NETWORK_DIR` if not already there.
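The `upstreams.json` generation described in the Leanpoint deployment section above can be sketched as follows. This is an illustrative reimplementation of the per-validator mapping that `convert-validator-config.py` performs (the `/v0/health` path and the `host.docker.internal` rewrite under `--docker` are taken from that script), not the script itself:

```python
def to_upstream(validator, index, base_port=8081, docker_host=False):
    """Map one validator-config.yaml entry to a leanpoint upstream record."""
    ip = validator.get("enrFields", {}).get("ip", "127.0.0.1")
    if docker_host:
        # --docker: leanpoint runs in a container, reach the host devnet
        ip = "host.docker.internal"
    # prefer the node's apiPort, else fall back to base_port + index
    port = validator.get("apiPort", base_port + index)
    return {"name": validator.get("name", f"validator_{index}"),
            "url": f"http://{ip}:{port}",
            "path": "/v0/health"}  # health check endpoint

entry = {"name": "zeam_0", "enrFields": {"ip": "127.0.0.1"}, "apiPort": 5051}
print(to_upstream(entry, 0, docker_host=True))
```

With `--docker`, leanpoint inside a container resolves `host.docker.internal` to the host (Docker Desktop/Orbstack), so it can reach validators of a host-run devnet.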
@@ -570,6 +619,7 @@ For more details, see the [Docker Desktop host networking documentation](https:/ This quickstart includes automated configuration parsing: - **Official Genesis Generation**: Uses PK's `eth-beacon-genesis` docker tool from [PR #36](https://github.com/ethpandaops/eth-beacon-genesis/pull/36) +- **Leanpoint upstreams sync**: After nodes are spun up, `convert-validator-config.py` and `sync-leanpoint-upstreams.sh` generate `upstreams.json` from `validator-config.yaml`, rsync it to the tooling server, and recreate the leanpoint container (see [Leanpoint deployment](#leanpoint-deployment)) - **Complete File Set**: Generates `validators.yaml`, `nodes.yaml`, `genesis.json`, `genesis.ssz`, and `.key` files - **QUIC Port Detection**: Automatically extracts QUIC ports from `validator-config.yaml` using `yq` - **Node Detection**: Dynamically discovers available nodes from the validator configuration diff --git a/ansible-devnet/genesis/validator-config.yaml b/ansible-devnet/genesis/validator-config.yaml index 6bc6603..fcf8160 100644 --- a/ansible-devnet/genesis/validator-config.yaml +++ b/ansible-devnet/genesis/validator-config.yaml @@ -14,6 +14,7 @@ validators: ip: "46.224.123.223" quic: 9001 metricsPort: 9095 + apiPort: 5055 isAggregator: false count: 1 # number of indices for this node @@ -26,6 +27,7 @@ validators: ip: "77.42.27.219" quic: 9001 metricsPort: 9095 + apiPort: 5055 isAggregator: false devnet: 1 count: 1 @@ -39,6 +41,7 @@ validators: ip: "46.224.123.220" quic: 9001 metricsPort: 9095 + apiPort: 5055 isAggregator: false count: 1 @@ -51,6 +54,7 @@ validators: ip: "46.224.135.177" quic: 9001 metricsPort: 9095 + apiPort: 5055 isAggregator: false count: 1 @@ -63,6 +67,7 @@ validators: ip: "46.224.135.169" quic: 9001 metricsPort: 9095 + apiPort: 5055 isAggregator: false count: 1 @@ -75,6 +80,7 @@ validators: ip: "37.27.250.20" quic: 9001 metricsPort: 9095 + apiPort: 9095 isAggregator: false count: 1 @@ -87,5 +93,6 @@ validators: 
ip: "78.47.44.215" quic: 9001 metricsPort: 9095 + apiPort: 9095 isAggregator: false count: 1 \ No newline at end of file diff --git a/ansible/playbooks/deploy-nodes.yml b/ansible/playbooks/deploy-nodes.yml index b2755ed..398952c 100644 --- a/ansible/playbooks/deploy-nodes.yml +++ b/ansible/playbooks/deploy-nodes.yml @@ -14,6 +14,9 @@ hosts: localhost connection: local gather_facts: yes + tags: + - deploy + - observability vars: validator_config_file: "{{ genesis_dir }}/validator-config.yaml" @@ -205,6 +208,17 @@ - setup - deploy + - name: Deploy observability stack + include_role: + name: observability + apply: + tags: + - observability + - deploy + tags: + - observability + - deploy + - name: Deploy this node include_tasks: helpers/deploy-single-node.yml tags: diff --git a/ansible/playbooks/site.yml b/ansible/playbooks/site.yml index 52dbbac..0f86840 100644 --- a/ansible/playbooks/site.yml +++ b/ansible/playbooks/site.yml @@ -3,7 +3,7 @@ # 1. Clean data directories (if clean_data=true) # 2. Generate genesis files locally (including .key files) # 3. Copy genesis files to remote hosts -# 4. Deploy nodes +# 4. 
Deploy nodes (observability stack + lean node per host) # # Usage: # ansible-playbook -i inventory/hosts.yml playbooks/site.yml \ diff --git a/ansible/roles/ethlambda/tasks/main.yml b/ansible/roles/ethlambda/tasks/main.yml index 4d59f9d..fc68d54 100644 --- a/ansible/roles/ethlambda/tasks/main.yml +++ b/ansible/roles/ethlambda/tasks/main.yml @@ -38,6 +38,7 @@ loop: - enrFields.quic - metricsPort + - apiPort - isAggregator when: node_name is defined @@ -45,7 +46,8 @@ set_fact: ethlambda_quic_port: "{{ ethlambda_node_config.results[0].stdout }}" ethlambda_metrics_port: "{{ ethlambda_node_config.results[1].stdout }}" - ethlambda_is_aggregator: "{{ 'true' if (ethlambda_node_config.results[2].stdout | default('') | trim) == 'true' else 'false' }}" + ethlambda_api_port: "{{ ethlambda_node_config.results[2].stdout }}" + ethlambda_is_aggregator: "{{ 'true' if (ethlambda_node_config.results[3].stdout | default('') | trim) == 'true' else 'false' }}" when: ethlambda_node_config is defined - name: Ensure node key file exists @@ -97,7 +99,8 @@ --gossipsub-port {{ ethlambda_quic_port }} --node-id {{ node_name }} --node-key /config/{{ node_name }}.key - --metrics-address 0.0.0.0 + --http-address 0.0.0.0 + --api-port {{ ethlambda_api_port }} --metrics-port {{ ethlambda_metrics_port }} {{ '--is-aggregator' if (ethlambda_is_aggregator | default('false')) == 'true' else '' }} {{ ('--checkpoint-sync-url ' + checkpoint_sync_url) if (checkpoint_sync_url is defined and checkpoint_sync_url | length > 0) else '' }} diff --git a/ansible/roles/grandine/tasks/main.yml b/ansible/roles/grandine/tasks/main.yml index 4f76c17..5e0f520 100644 --- a/ansible/roles/grandine/tasks/main.yml +++ b/ansible/roles/grandine/tasks/main.yml @@ -38,6 +38,7 @@ loop: - enrFields.quic - metricsPort + - apiPort - privkey - isAggregator when: node_name is defined @@ -46,8 +47,9 @@ set_fact: grandine_quic_port: "{{ grandine_node_config.results[0].stdout }}" grandine_metrics_port: "{{ 
grandine_node_config.results[1].stdout }}" - grandine_privkey: "{{ grandine_node_config.results[2].stdout }}" - grandine_is_aggregator: "{{ 'true' if (grandine_node_config.results[3].stdout | default('') | trim) == 'true' else 'false' }}" + grandine_api_port: "{{ grandine_node_config.results[2].stdout }}" + grandine_privkey: "{{ grandine_node_config.results[3].stdout }}" + grandine_is_aggregator: "{{ 'true' if (grandine_node_config.results[4].stdout | default('') | trim) == 'true' else 'false' }}" when: grandine_node_config is defined - name: Ensure node key file exists @@ -101,11 +103,12 @@ --node-id {{ node_name }} --node-key /config/{{ node_name }}.key --port {{ grandine_quic_port }} - --address 0.0.0.0 --hash-sig-key-dir /config/hash-sig-keys - --metrics --http-address 0.0.0.0 - --http-port {{ grandine_metrics_port }} + --http-port {{ grandine_api_port }} + --metrics + --metrics-address 0.0.0.0 + --metrics-port {{ grandine_metrics_port }} {{ '--is-aggregator' if (grandine_is_aggregator | default('false')) == 'true' else '' }} {{ ('--checkpoint-sync-url ' + checkpoint_sync_url) if (checkpoint_sync_url is defined and checkpoint_sync_url | length > 0) else '' }} register: grandine_container diff --git a/ansible/roles/lantern/tasks/main.yml b/ansible/roles/lantern/tasks/main.yml index aca089b..bb3696d 100644 --- a/ansible/roles/lantern/tasks/main.yml +++ b/ansible/roles/lantern/tasks/main.yml @@ -29,6 +29,7 @@ loop: - enrFields.quic - metricsPort + - apiPort - privkey - isAggregator when: node_name is defined @@ -37,8 +38,9 @@ set_fact: lantern_quic_port: "{{ lantern_node_config.results[0].stdout }}" lantern_metrics_port: "{{ lantern_node_config.results[1].stdout }}" - lantern_privkey: "{{ lantern_node_config.results[2].stdout }}" - lantern_is_aggregator: "{{ 'true' if (lantern_node_config.results[3].stdout | default('') | trim) == 'true' else 'false' }}" + lantern_api_port: "{{ lantern_node_config.results[2].stdout }}" + lantern_privkey: "{{ 
lantern_node_config.results[3].stdout }}" + lantern_is_aggregator: "{{ 'true' if (lantern_node_config.results[4].stdout | default('') | trim) == 'true' else 'false' }}" when: lantern_node_config is defined - name: Ensure node key file exists @@ -90,7 +92,7 @@ --node-key-path /config/{{ node_name }}.key --listen-address /ip4/0.0.0.0/udp/{{ lantern_quic_port }}/quic-v1 --metrics-port {{ lantern_metrics_port }} - --http-port 5055 + --http-port {{ lantern_api_port }} --hash-sig-key-dir /config/hash-sig-keys {{ '--is-aggregator' if (lantern_is_aggregator | default('false')) == 'true' else '' }} {{ ('--checkpoint-sync-url ' + checkpoint_sync_url) if (checkpoint_sync_url is defined and checkpoint_sync_url | length > 0) else '' }} diff --git a/ansible/roles/lighthouse/tasks/main.yml b/ansible/roles/lighthouse/tasks/main.yml index 6040796..cb8bd02 100644 --- a/ansible/roles/lighthouse/tasks/main.yml +++ b/ansible/roles/lighthouse/tasks/main.yml @@ -37,6 +37,7 @@ loop: - enrFields.quic - metricsPort + - apiPort - privkey - isAggregator when: node_name is defined @@ -45,8 +46,9 @@ set_fact: lighthouse_quic_port: "{{ lighthouse_node_config.results[0].stdout }}" lighthouse_metrics_port: "{{ lighthouse_node_config.results[1].stdout }}" - lighthouse_privkey: "{{ lighthouse_node_config.results[2].stdout }}" - lighthouse_is_aggregator: "{{ 'true' if (lighthouse_node_config.results[3].stdout | default('') | trim) == 'true' else 'false' }}" + lighthouse_api_port: "{{ lighthouse_node_config.results[2].stdout }}" + lighthouse_privkey: "{{ lighthouse_node_config.results[3].stdout }}" + lighthouse_is_aggregator: "{{ 'true' if (lighthouse_node_config.results[4].stdout | default('') | trim) == 'true' else 'false' }}" when: lighthouse_node_config is defined - name: Ensure node key file exists @@ -104,6 +106,7 @@ --metrics --metrics-address 0.0.0.0 --metrics-port {{ lighthouse_metrics_port }} + --api-port {{ lighthouse_api_port }} {{ '--is-aggregator' if (lighthouse_is_aggregator | 
default('false')) == 'true' else '' }} {{ ('--checkpoint-sync-url ' + checkpoint_sync_url) if (checkpoint_sync_url is defined and checkpoint_sync_url | length > 0) else '' }} register: lighthouse_container diff --git a/ansible/roles/observability/defaults/main.yml b/ansible/roles/observability/defaults/main.yml new file mode 100644 index 0000000..7dc86fa --- /dev/null +++ b/ansible/roles/observability/defaults/main.yml @@ -0,0 +1,12 @@ +--- +observability_dir: "/opt/lean-quickstart/observability" +cadvisor_image: "gcr.io/cadvisor/cadvisor:latest" +node_exporter_image: "prom/node-exporter:latest" +prometheus_image: "prom/prometheus:latest" +promtail_image: "grafana/promtail:latest" +prometheus_port: 9090 +promtail_port: 9080 +cadvisor_port: 9098 +node_exporter_port: 9100 +remote_write_url: "http://46.225.10.32:9090/api/v1/write" +loki_push_url: "http://46.225.10.32:3100/loki/api/v1/push" diff --git a/ansible/roles/observability/tasks/main.yml b/ansible/roles/observability/tasks/main.yml new file mode 100644 index 0000000..f7ae6b2 --- /dev/null +++ b/ansible/roles/observability/tasks/main.yml @@ -0,0 +1,141 @@ +--- +# Observability role: Deploy cadvisor, node_exporter, prometheus, and promtail +# alongside each lean node on remote hosts. 
+ +- name: Extract metricsPort from validator-config.yaml + shell: | + yq eval ".validators[] | select(.name == \"{{ node_name }}\") | .metricsPort" "{{ local_genesis_dir }}/validator-config.yaml" + register: obs_metrics_port_raw + changed_when: false + delegate_to: localhost + +- name: Set metricsPort fact + set_fact: + obs_metrics_port: "{{ obs_metrics_port_raw.stdout | trim }}" + +- name: Create observability config directory + file: + path: "{{ observability_dir }}" + state: directory + mode: '0755' + +- name: Template prometheus config + template: + src: prometheus.yml.j2 + dest: "{{ observability_dir }}/prometheus.yml" + mode: '0644' + +- name: Template promtail config + template: + src: promtail.yml.j2 + dest: "{{ observability_dir }}/promtail.yml" + mode: '0644' + +# --- cadvisor (no config, only start if not running) --- + +- name: Check if cadvisor exists + command: docker ps -a --filter name=^cadvisor$ -q + register: cadvisor_exists + changed_when: false + failed_when: false + +- name: Check if cadvisor is running + command: docker ps --filter name=^cadvisor$ -q + register: cadvisor_running + changed_when: false + failed_when: false + +- name: Start stopped cadvisor container + command: docker start cadvisor + when: cadvisor_exists.stdout != "" and cadvisor_running.stdout == "" + +- name: Create cadvisor container + command: >- + docker run -d + --name cadvisor + --restart unless-stopped + --network host + -v /:/rootfs:ro + -v /var/run:/var/run:ro + -v /sys:/sys:ro + -v /var/lib/docker/:/var/lib/docker:ro + {{ cadvisor_image }} + when: cadvisor_exists.stdout == "" + +# --- node_exporter (no config, only start if not running) --- + +- name: Check if node_exporter exists + command: docker ps -a --filter name=^node_exporter$ -q + register: node_exporter_exists + changed_when: false + failed_when: false + +- name: Check if node_exporter is running + command: docker ps --filter name=^node_exporter$ -q + register: node_exporter_running + changed_when: false + 
failed_when: false + +- name: Start stopped node_exporter container + command: docker start node_exporter + when: node_exporter_exists.stdout != "" and node_exporter_running.stdout == "" + +- name: Create node_exporter container + command: >- + docker run -d + --name node_exporter + --restart unless-stopped + --network host + --pid host + -v /proc:/host/proc:ro + -v /sys:/host/sys:ro + -v /:/rootfs:ro + {{ node_exporter_image }} + --path.procfs=/host/proc + --path.sysfs=/host/sys + --path.rootfs=/rootfs + when: node_exporter_exists.stdout == "" + +# --- prometheus (always recreate to pick up config/mount changes, data persists on host) --- + +- name: Create prometheus data directory + file: + path: "{{ observability_dir }}/prometheus-data" + state: directory + mode: '0777' + recurse: yes + +- name: Remove existing prometheus container + command: docker rm -f prometheus + failed_when: false + +- name: Start prometheus container + command: >- + docker run -d + --name prometheus + --restart unless-stopped + --network host + -v {{ observability_dir }}/prometheus.yml:/etc/prometheus/prometheus.yml:ro + -v {{ observability_dir }}/prometheus-data:/prometheus + {{ prometheus_image }} + --config.file=/etc/prometheus/prometheus.yml + --storage.tsdb.retention.time=15d + --web.listen-address=0.0.0.0:{{ prometheus_port }} + +# --- promtail (always recreate to pick up config/mount changes) --- + +- name: Remove existing promtail container + command: docker rm -f promtail + failed_when: false + +- name: Start promtail container + command: >- + docker run -d + --name promtail + --restart unless-stopped + --network host + -v {{ observability_dir }}/promtail.yml:/etc/promtail/config.yml:ro + -v /var/run/docker.sock:/var/run/docker.sock:ro + -v /var/lib/docker/containers:/var/lib/docker/containers:ro + {{ promtail_image }} + -config.file=/etc/promtail/config.yml diff --git a/ansible/roles/observability/templates/prometheus.yml.j2 
b/ansible/roles/observability/templates/prometheus.yml.j2 new file mode 100644 index 0000000..b2e55bf --- /dev/null +++ b/ansible/roles/observability/templates/prometheus.yml.j2 @@ -0,0 +1,23 @@ +global: + scrape_interval: 15s + +scrape_configs: + - job_name: '{{ node_name }}' + static_configs: + - targets: ['172.17.0.1:{{ obs_metrics_port }}'] + labels: + type: 'app' + node_id: '{{ node_name }}' + - targets: ['172.17.0.1:{{ node_exporter_port }}'] + labels: + type: 'node' + node_id: '{{ node_name }}' + - targets: ['172.17.0.1:{{ cadvisor_port }}'] + labels: + type: 'docker' + node_id: '{{ node_name }}' + relabel_configs: + - source_labels: [node_id] + target_label: instance +remote_write: + - url: {{ remote_write_url }} diff --git a/ansible/roles/observability/templates/promtail.yml.j2 b/ansible/roles/observability/templates/promtail.yml.j2 new file mode 100644 index 0000000..ebf0ea0 --- /dev/null +++ b/ansible/roles/observability/templates/promtail.yml.j2 @@ -0,0 +1,28 @@ +server: + http_listen_port: {{ promtail_port }} + grpc_listen_port: 0 + +positions: + filename: /tmp/positions.yaml + +clients: + - url: {{ loki_push_url }} + +scrape_configs: + - job_name: {{ node_name }} + docker_sd_configs: + - host: unix:///var/run/docker.sock + refresh_interval: 5s + filters: + - name: name + values: ["{{ node_name }}"] + relabel_configs: + - source_labels: ['__meta_docker_container_name'] + regex: '/(.*)' + target_label: 'container' + - source_labels: ['__meta_docker_container_log_stream'] + target_label: 'stream' + - target_label: 'node' + replacement: '{{ node_name }}' + - target_label: 'host' + replacement: '{{ ansible_host }}' diff --git a/ansible/roles/qlean/tasks/main.yml b/ansible/roles/qlean/tasks/main.yml index 7967a15..336f8bd 100644 --- a/ansible/roles/qlean/tasks/main.yml +++ b/ansible/roles/qlean/tasks/main.yml @@ -36,6 +36,7 @@ loop: - enrFields.quic - metricsPort + - apiPort - privkey - isAggregator when: node_name is defined @@ -44,8 +45,9 @@ set_fact: 
qlean_quic_port: "{{ qlean_node_config.results[0].stdout }}" qlean_metrics_port: "{{ qlean_node_config.results[1].stdout }}" - qlean_privkey: "{{ qlean_node_config.results[2].stdout }}" - qlean_is_aggregator: "{{ 'true' if (qlean_node_config.results[3].stdout | default('') | trim) == 'true' else 'false' }}" + qlean_api_port: "{{ qlean_node_config.results[2].stdout }}" + qlean_privkey: "{{ qlean_node_config.results[3].stdout }}" + qlean_is_aggregator: "{{ 'true' if (qlean_node_config.results[4].stdout | default('') | trim) == 'true' else 'false' }}" when: qlean_node_config is defined - name: Extract validator index from validators.yaml @@ -114,7 +116,7 @@ --metrics-host 0.0.0.0 --metrics-port {{ qlean_metrics_port }} --api-host 0.0.0.0 - --api-port 5053 + --api-port {{ qlean_api_port }} {{ '--is-aggregator' if (qlean_is_aggregator | default('false')) == 'true' else '' }} {{ ('--checkpoint-sync-url ' + checkpoint_sync_url) if (checkpoint_sync_url is defined and checkpoint_sync_url | length > 0) else '' }} -ldebug diff --git a/ansible/roles/ream/tasks/main.yml b/ansible/roles/ream/tasks/main.yml index c800ad1..a86ed4f 100644 --- a/ansible/roles/ream/tasks/main.yml +++ b/ansible/roles/ream/tasks/main.yml @@ -37,6 +37,7 @@ loop: - enrFields.quic - metricsPort + - apiPort - privkey - isAggregator when: node_name is defined @@ -45,8 +46,9 @@ set_fact: ream_quic_port: "{{ ream_node_config.results[0].stdout }}" ream_metrics_port: "{{ ream_node_config.results[1].stdout }}" - ream_privkey: "{{ ream_node_config.results[2].stdout }}" - ream_is_aggregator: "{{ 'true' if (ream_node_config.results[3].stdout | default('') | trim) == 'true' else 'false' }}" + ream_api_port: "{{ ream_node_config.results[2].stdout }}" + ream_privkey: "{{ ream_node_config.results[3].stdout }}" + ream_is_aggregator: "{{ 'true' if (ream_node_config.results[4].stdout | default('') | trim) == 'true' else 'false' }}" when: ream_node_config is defined - name: Ensure node key file exists @@ -100,6 +102,7 @@ 
--metrics-address 0.0.0.0 --metrics-port {{ ream_metrics_port }} --http-address 0.0.0.0 + --http-port {{ ream_api_port }} {{ '--is-aggregator' if (ream_is_aggregator | default('false')) == 'true' else '' }} {{ ('--checkpoint-sync-url ' + checkpoint_sync_url) if (checkpoint_sync_url is defined and checkpoint_sync_url | length > 0) else '' }} register: ream_container diff --git a/ansible/roles/zeam/tasks/main.yml b/ansible/roles/zeam/tasks/main.yml index 2d50a8c..2fc01c7 100644 --- a/ansible/roles/zeam/tasks/main.yml +++ b/ansible/roles/zeam/tasks/main.yml @@ -44,6 +44,7 @@ delegate_to: localhost loop: - metricsPort + - apiPort - privkey - isAggregator when: node_name is defined @@ -51,7 +52,8 @@ - name: Set node metrics port and aggregator flag set_fact: zeam_metrics_port: "{{ node_config.results[0].stdout }}" - zeam_is_aggregator: "{{ 'true' if (node_config.results[2].stdout | default('') | trim) == 'true' else 'false' }}" + zeam_api_port: "{{ node_config.results[1].stdout }}" + zeam_is_aggregator: "{{ 'true' if (node_config.results[3].stdout | default('') | trim) == 'true' else 'false' }}" when: node_config is defined - name: Ensure node key file exists @@ -108,7 +110,8 @@ --node-id {{ node_name }} --node-key /config/{{ node_name }}.key --metrics_enable - --api-port {{ zeam_metrics_port }} + --api-port {{ zeam_api_port }} + --metrics-port {{ zeam_metrics_port }} {{ '--is-aggregator' if (zeam_is_aggregator | default('false')) == 'true' else '' }} {{ ('--checkpoint-sync-url ' + checkpoint_sync_url) if (checkpoint_sync_url is defined and checkpoint_sync_url | length > 0) else '' }} register: zeam_container_result diff --git a/client-cmds/ethlambda-cmd.sh b/client-cmds/ethlambda-cmd.sh index 25d3cee..86f5e89 100644 --- a/client-cmds/ethlambda-cmd.sh +++ b/client-cmds/ethlambda-cmd.sh @@ -28,7 +28,8 @@ node_binary="$binary_path \ --gossipsub-port $quicPort \ --node-id $item \ --node-key $configDir/$item.key \ - --metrics-address 0.0.0.0 \ + --http-address 0.0.0.0 \ + 
--api-port $apiPort \ --metrics-port $metricsPort \ $attestation_committee_flag \ $aggregator_flag \ @@ -40,7 +41,8 @@ node_docker="ghcr.io/lambdaclass/ethlambda:devnet3 \ --gossipsub-port $quicPort \ --node-id $item \ --node-key /config/$item.key \ - --metrics-address 0.0.0.0 \ + --http-address 0.0.0.0 \ + --api-port $apiPort \ --metrics-port $metricsPort \ $attestation_committee_flag \ $aggregator_flag \ diff --git a/client-cmds/grandine-cmd.sh b/client-cmds/grandine-cmd.sh index f0aac2e..52a1432 100644 --- a/client-cmds/grandine-cmd.sh +++ b/client-cmds/grandine-cmd.sh @@ -26,9 +26,11 @@ node_binary="$grandine_bin \ --node-key $configDir/$privKeyPath \ --port $quicPort \ --address 0.0.0.0 \ - --metrics \ --http-address 0.0.0.0 \ - --http-port $metricsPort \ + --http-port $apiPort \ + --metrics \ + --metrics-address 0.0.0.0 \ + --metrics-port $metricsPort \ --hash-sig-key-dir $configDir/hash-sig-keys \ $attestation_committee_flag \ $aggregator_flag \ @@ -41,10 +43,11 @@ node_docker="sifrai/lean:devnet-3 \ --node-id $item \ --node-key /config/$privKeyPath \ --port $quicPort \ - --address 0.0.0.0 \ - --metrics \ --http-address 0.0.0.0 \ - --http-port $metricsPort \ + --http-port $apiPort \ + --metrics \ + --metrics-address 0.0.0.0 \ + --metrics-port $metricsPort \ --hash-sig-key-dir /config/hash-sig-keys \ $attestation_committee_flag \ $aggregator_flag \ diff --git a/client-cmds/lantern-cmd.sh b/client-cmds/lantern-cmd.sh index 97ab11b..91eb024 100755 --- a/client-cmds/lantern-cmd.sh +++ b/client-cmds/lantern-cmd.sh @@ -20,11 +20,6 @@ if [ -n "$attestationCommitteeCount" ]; then attestation_committee_flag="--attestation-committee-count $attestationCommitteeCount" fi -# Set HTTP port (default to 5055 if not specified in validator-config.yaml) -if [ -z "$httpPort" ]; then - httpPort="5055" -fi - # Set checkpoint sync URL when restarting with checkpoint sync checkpoint_sync_flag="" if [ -n "${checkpoint_sync_url:-}" ]; then @@ -43,7 +38,7 @@ 
node_binary="$scriptDir/lantern/build/lantern_cli \ --node-id $item --node-key-path $configDir/$privKeyPath \ --listen-address /ip4/0.0.0.0/udp/$quicPort/quic-v1 \ --metrics-port $metricsPort \ - --http-port $httpPort \ + --http-port $apiPort \ --log-level info \ --hash-sig-key-dir $configDir/hash-sig-keys \ $attestation_committee_flag \ @@ -60,7 +55,7 @@ node_docker="$LANTERN_IMAGE --data-dir /data \ --node-id $item --node-key-path /config/$privKeyPath \ --listen-address /ip4/0.0.0.0/udp/$quicPort/quic-v1 \ --metrics-port $metricsPort \ - --http-port $httpPort \ + --http-port $apiPort \ --log-level info \ --hash-sig-key-dir /config/hash-sig-keys \ $attestation_committee_flag \ diff --git a/client-cmds/lighthouse-cmd.sh b/client-cmds/lighthouse-cmd.sh index 4bbd765..eb45016 100644 --- a/client-cmds/lighthouse-cmd.sh +++ b/client-cmds/lighthouse-cmd.sh @@ -33,6 +33,7 @@ node_binary="$lighthouse_bin lean_node \ $metrics_flag \ --metrics-address 0.0.0.0 \ --metrics-port $metricsPort \ + --api-port $apiPort \ $attestation_committee_flag \ $aggregator_flag \ $checkpoint_sync_flag" @@ -49,6 +50,7 @@ node_docker="hopinheimer/lighthouse:latest lighthouse lean_node \ $metrics_flag \ --metrics-address 0.0.0.0 \ --metrics-port $metricsPort \ + --api-port $apiPort \ $attestation_committee_flag \ $aggregator_flag \ $checkpoint_sync_flag" diff --git a/client-cmds/qlean-cmd.sh b/client-cmds/qlean-cmd.sh index f8f7b98..f8b1f47 100644 --- a/client-cmds/qlean-cmd.sh +++ b/client-cmds/qlean-cmd.sh @@ -44,7 +44,10 @@ node_binary="$scriptDir/qlean/build/src/executable/qlean \ --data-dir $dataDir/$item \ --node-id $item --node-key $configDir/$privKeyPath \ --listen-addr /ip4/0.0.0.0/udp/$quicPort/quic-v1 \ - --prometheus-port $metricsPort \ + --metrics-host 0.0.0.0 \ + --metrics-port $metricsPort \ + --api-host 0.0.0.0 \ + --api-port $apiPort \ $attestation_committee_flag \ $aggregator_flag \ $checkpoint_sync_flag \ @@ -64,7 +67,7 @@ node_docker="$QLEAN_IMAGE \ --metrics-host 0.0.0.0 \ 
--metrics-port $metricsPort \ --api-host 0.0.0.0 \ - --api-port 5053 \ + --api-port $apiPort \ $attestation_committee_flag \ $aggregator_flag \ $checkpoint_sync_flag \ diff --git a/client-cmds/ream-cmd.sh b/client-cmds/ream-cmd.sh index c6c3342..ef387bb 100755 --- a/client-cmds/ream-cmd.sh +++ b/client-cmds/ream-cmd.sh @@ -34,6 +34,7 @@ node_binary="$scriptDir/../ream/target/release/ream --data-dir $dataDir/$item \ --metrics-address 0.0.0.0 \ --metrics-port $metricsPort \ --http-address 0.0.0.0 \ + --http-port $apiPort \ $attestation_committee_flag \ $aggregator_flag \ $checkpoint_sync_flag" @@ -49,6 +50,7 @@ node_docker="ghcr.io/reamlabs/ream:latest-devnet3 --data-dir /data \ --metrics-address 0.0.0.0 \ --metrics-port $metricsPort \ --http-address 0.0.0.0 \ + --http-port $apiPort \ $attestation_committee_flag \ $aggregator_flag \ $checkpoint_sync_flag" diff --git a/client-cmds/zeam-cmd.sh b/client-cmds/zeam-cmd.sh index ef6faae..a7e5f32 100644 --- a/client-cmds/zeam-cmd.sh +++ b/client-cmds/zeam-cmd.sh @@ -30,7 +30,8 @@ node_binary="$scriptDir/../zig-out/bin/zeam node \ --data-dir $dataDir/$item \ --node-id $item --node-key $configDir/$item.key \ $metrics_flag \ - --api-port $metricsPort \ + --api-port $apiPort \ + --metrics-port $metricsPort \ $attestation_committee_flag \ $aggregator_flag \ $checkpoint_sync_flag" @@ -41,7 +42,8 @@ node_docker="--security-opt seccomp=unconfined blockblaz/zeam:devnet3 node \ --data-dir /data \ --node-id $item --node-key /config/$item.key \ $metrics_flag \ - --api-port $metricsPort \ + --api-port $apiPort \ + --metrics-port $metricsPort \ $attestation_committee_flag \ $aggregator_flag \ $checkpoint_sync_flag" diff --git a/convert-validator-config.py b/convert-validator-config.py new file mode 100644 index 0000000..91a6e18 --- /dev/null +++ b/convert-validator-config.py @@ -0,0 +1,125 @@ +#!/usr/bin/env python3 +""" +Convert validator-config.yaml to upstreams.json for leanpoint. 
+ +This script reads a validator-config.yaml file (used by lean-quickstart) +and generates an upstreams.json file that leanpoint can use to monitor +multiple lean nodes. + +Usage: + python3 convert-validator-config.py [validator-config.yaml] [output.json] [--docker] + +Options: + --docker Use host.docker.internal so leanpoint running in Docker can + reach a devnet on the host (e.g. upstreams-local-docker.json). + +Examples: + python3 convert-validator-config.py \\ + local-devnet/genesis/validator-config.yaml \\ + upstreams.json + + python3 convert-validator-config.py \\ + ansible-devnet/genesis/validator-config.yaml \\ + upstreams.json + + python3 convert-validator-config.py \\ + local-devnet/genesis/validator-config.yaml \\ + upstreams-local-docker.json --docker +""" + +import sys +import json +import yaml + + +def convert_validator_config( + yaml_path: str, + output_path: str, + base_port: int = 8081, + docker_host: bool = False, +): + """ + Convert validator-config.yaml to upstreams.json. + + Args: + yaml_path: Path to validator-config.yaml + output_path: Path to output upstreams.json + base_port: Base HTTP port for beacon API (default: 8081) + docker_host: If True, use host.docker.internal so leanpoint in Docker + can reach a devnet running on the host (Docker Desktop/Orbstack). 
+ """ + with open(yaml_path, 'r') as f: + config = yaml.safe_load(f) + + if 'validators' not in config: + print("Error: No 'validators' key found in config", file=sys.stderr) + sys.exit(1) + + upstreams = [] + + for idx, validator in enumerate(config['validators']): + name = validator.get('name', f'validator_{idx}') + + # Try to get IP from enrFields, default to localhost + ip = "127.0.0.1" + if 'enrFields' in validator and 'ip' in validator['enrFields']: + ip = validator['enrFields']['ip'] + if docker_host: + ip = "host.docker.internal" + + # Use apiPort from config + http_port = validator.get('apiPort', base_port + idx) + + upstream = { + "name": name, + "url": f"http://{ip}:{http_port}", + "path": "/v0/health" # Health check endpoint + } + + upstreams.append(upstream) + + output = {"upstreams": upstreams} + + with open(output_path, 'w') as f: + json.dump(output, f, indent=2) + + print(f"āœ… Converted {len(upstreams)} validators to {output_path}") + print(f"\nGenerated upstreams:") + for u in upstreams: + print(f" - {u['name']}: {u['url']}{u['path']}") + + print(f"\nšŸ’” To use: leanpoint --upstreams-config {output_path}") + + +def main(): + args = [a for a in sys.argv[1:] if a != "--docker"] + docker_host = "--docker" in sys.argv + + if len(args) < 2: + if len(args) == 0: + print(__doc__) + print("\nUsing default paths...") + yaml_path = "local-devnet/genesis/validator-config.yaml" + output_path = "upstreams.json" + else: + yaml_path = args[0] + output_path = "upstreams-local-docker.json" if docker_host else "upstreams.json" + else: + yaml_path = args[0] + output_path = args[1] + + try: + convert_validator_config(yaml_path, output_path, docker_host=docker_host) + except FileNotFoundError as e: + print(f"Error: File not found: {e}", file=sys.stderr) + sys.exit(1) + except yaml.YAMLError as e: + print(f"Error: Invalid YAML: {e}", file=sys.stderr) + sys.exit(1) + except Exception as e: + print(f"Error: {e}", file=sys.stderr) + sys.exit(1) + + +if __name__ == 
"__main__": + main() diff --git a/local-devnet/genesis/validator-config.yaml b/local-devnet/genesis/validator-config.yaml index fc79da5..c1e26de 100644 --- a/local-devnet/genesis/validator-config.yaml +++ b/local-devnet/genesis/validator-config.yaml @@ -14,6 +14,7 @@ validators: ip: "127.0.0.1" quic: 9001 metricsPort: 8081 + apiPort: 5051 isAggregator: false count: 1 - name: "ream_0" @@ -23,6 +24,7 @@ validators: ip: "127.0.0.1" quic: 9002 metricsPort: 8082 + apiPort: 5052 isAggregator: false count: 1 - name: "qlean_0" @@ -34,6 +36,7 @@ validators: ip: "127.0.0.1" quic: 9003 metricsPort: 8083 + apiPort: 5053 isAggregator: false count: 1 - name: "lantern_0" @@ -45,6 +48,7 @@ validators: quic: 9004 metricsPort: 8084 httpPort: 5054 + apiPort: 5054 isAggregator: false count: 1 @@ -57,6 +61,7 @@ validators: ip: "127.0.0.1" quic: 9005 metricsPort: 8085 + apiPort: 5055 isAggregator: false count: 1 @@ -69,6 +74,7 @@ validators: ip: "127.0.0.1" quic: 9006 metricsPort: 8086 + apiPort: 8086 isAggregator: false count: 1 @@ -81,5 +87,6 @@ validators: ip: "127.0.0.1" quic: 9007 metricsPort: 8087 + apiPort: 8087 isAggregator: false count: 1 diff --git a/parse-env.sh b/parse-env.sh index 2fd8c7b..694ae44 100755 --- a/parse-env.sh +++ b/parse-env.sh @@ -100,6 +100,10 @@ while [[ $# -gt 0 ]]; do shift # past argument shift # past value ;; + --skip-leanpoint) + skipLeanpoint=true + shift + ;; *) # unknown option shift # past argument ;; @@ -139,3 +143,4 @@ echo "aggregatorNode = ${aggregatorNode:-}" echo "coreDumps = ${coreDumps:-disabled}" echo "checkpointSyncUrl = ${checkpointSyncUrl:-}" echo "restartClient = ${restartClient:-}" +echo "skipLeanpoint = ${skipLeanpoint:-false}" diff --git a/parse-vc.sh b/parse-vc.sh index abfa24c..4fae8d2 100644 --- a/parse-vc.sh +++ b/parse-vc.sh @@ -51,6 +51,12 @@ if [ -z "$httpPort" ] || [ "$httpPort" == "null" ]; then httpPort="" fi +# Automatically extract API port using yq (optional - only some clients use it) +apiPort=$(yq eval ".validators[] 
| select(.name == \"$item\") | .apiPort" "$validator_config_file") +if [ -z "$apiPort" ] || [ "$apiPort" == "null" ]; then + apiPort="" +fi + # Automatically extract devnet using yq (optional - only ream uses it) devnet=$(yq eval ".validators[] | select(.name == \"$item\") | .devnet" "$validator_config_file") if [ -z "$devnet" ] || [ "$devnet" == "null" ]; then @@ -111,6 +117,7 @@ if [ "$keyType" == "hash-sig" ] && [ "$hashSigKeyIndex" != "null" ] && [ -n "$ha echo "Node: $item" echo "QUIC Port: $quicPort" echo "Metrics Port: $metricsPort" + echo "API Port: ${apiPort:-}" echo "Devnet: ${devnet:-}" echo "Private Key File: $privKeyPath" echo "Key Type: $keyType" @@ -125,6 +132,7 @@ else echo "Node: $item" echo "QUIC Port: $quicPort" echo "Metrics Port: $metricsPort" + echo "API Port: ${apiPort:-}" echo "Devnet: ${devnet:-}" echo "Private Key File: $privKeyPath" echo "Is Aggregator: $isAggregator" diff --git a/spin-node.sh b/spin-node.sh index a60300b..f100554 100755 --- a/spin-node.sh +++ b/spin-node.sh @@ -255,7 +255,14 @@ if [ "$deployment_mode" == "ansible" ]; then echo "āŒ Ansible deployment failed. Exiting." exit 1 fi - + + if [ -z "$skipLeanpoint" ]; then + # Sync leanpoint upstreams to tooling server and restart remote container (no 5th arg = remote) + if ! "$scriptDir/sync-leanpoint-upstreams.sh" "$validator_config_file" "$scriptDir" "$sshKeyFile" "$useRoot"; then + echo "Warning: leanpoint sync failed. If the tooling server requires a specific SSH key, run with: --sshKey " + fi + fi + # Ansible deployment succeeded, exit normally exit 0 fi @@ -305,6 +312,13 @@ if [ -n "$stopNodes" ] && [ "$stopNodes" == "true" ]; then fi fi + # Stop local leanpoint container if running + if [ -n "$dockerWithSudo" ]; then + sudo docker rm -f leanpoint 2>/dev/null || echo " Container leanpoint not found or already stopped" + else + docker rm -f leanpoint 2>/dev/null || echo " Container leanpoint not found or already stopped" + fi + echo "āœ… Local nodes stopped successfully!" 
exit 0 fi @@ -464,6 +478,16 @@ if [ -n "$enableMetrics" ] && [ "$enableMetrics" == "true" ]; then echo "" fi +# Deploy leanpoint: locally (local devnet) or sync to tooling server (Ansible), unless --skip-leanpoint +local_leanpoint_deployed=0 +if [ -z "$skipLeanpoint" ]; then + if "$scriptDir/sync-leanpoint-upstreams.sh" "$validator_config_file" "$scriptDir" "$sshKeyFile" "$useRoot" "$dataDir"; then + local_leanpoint_deployed=1 + else + echo "Warning: leanpoint deploy failed. For remote sync, pass --sshKey if the tooling server requires it." + fi +fi + container_names="${spin_nodes[*]}" process_ids="${spinned_pids[*]}" @@ -481,6 +505,12 @@ cleanup() { echo "$execCmd" eval "$execCmd" + if [ "${local_leanpoint_deployed:-0}" = "1" ]; then + execCmd="docker rm -f leanpoint" + [ -n "$dockerWithSudo" ] && execCmd="sudo $execCmd" + eval "$execCmd" 2>/dev/null || true + fi + # try for process ids execCmd="kill -9 $process_ids" echo "$execCmd" diff --git a/sync-leanpoint-upstreams.sh b/sync-leanpoint-upstreams.sh new file mode 100755 index 0000000..8e1c48c --- /dev/null +++ b/sync-leanpoint-upstreams.sh @@ -0,0 +1,98 @@ +#!/bin/bash +# sync-leanpoint-upstreams.sh: Regenerate upstreams.json from validator-config.yaml, +# then either deploy leanpoint locally (local devnet) or rsync to tooling server and +# restart the remote container (Ansible/remote deployment). +# +# Used after validator nodes are spun up so leanpoint monitors the current set +# of nodes. Called at the end of spin-node.sh (both Ansible and local deployment). +# +# Usage: +# sync-leanpoint-upstreams.sh [ssh_key_file] [use_root] [local_data_dir] +# +# If local_data_dir (5th arg) is set, leanpoint is deployed locally: upstreams.json +# is written there (with --docker so leanpoint in Docker can reach host validators), +# and a local Docker container is started. Otherwise upstreams are synced to the +# remote tooling server and the remote container is recreated. 
+# +# Env (optional): +# TOOLING_SERVER Tooling server host (default: 46.225.10.32) +# TOOLING_SERVER_USER SSH user on tooling server (default: root) +# LEANPOINT_DIR Path containing convert-validator-config.py (default: script_dir) +# REMOTE_UPSTREAMS_PATH Remote path for upstreams.json (default: /etc/leanpoint/upstreams.json) +# LEANPOINT_CONTAINER Docker container name (default: leanpoint) +# LEANPOINT_IMAGE Docker image to pull and run (default: 0xpartha/leanpoint:latest) +# LEANPOINT_SYNC_DISABLED Set to 1 to skip (e.g. when tooling server is not used) + +set -e + +validator_config_file="${1:?Usage: sync-leanpoint-upstreams.sh [ssh_key_file] [use_root] [local_data_dir]}" +scriptDir="${2:?Usage: sync-leanpoint-upstreams.sh [ssh_key_file] [use_root] [local_data_dir]}" +sshKeyFile="${3:-}" +useRoot="${4:-false}" +local_data_dir="${5:-}" + +TOOLING_SERVER="${TOOLING_SERVER:-46.225.10.32}" +TOOLING_SERVER_USER="${TOOLING_SERVER_USER:-root}" +LEANPOINT_DIR="${LEANPOINT_DIR:-$scriptDir}" +REMOTE_UPSTREAMS_PATH="${REMOTE_UPSTREAMS_PATH:-/etc/leanpoint/upstreams.json}" +LEANPOINT_CONTAINER="${LEANPOINT_CONTAINER:-leanpoint}" +LEANPOINT_IMAGE="${LEANPOINT_IMAGE:-0xpartha/leanpoint:latest}" + +if [ "${LEANPOINT_SYNC_DISABLED:-0}" = "1" ]; then + echo "Leanpoint sync disabled (LEANPOINT_SYNC_DISABLED=1), skipping." + exit 0 +fi + +convert_script="$LEANPOINT_DIR/convert-validator-config.py" +if [ ! -f "$convert_script" ]; then + echo "Warning: convert-validator-config.py not found at $convert_script, skipping leanpoint sync." + exit 0 +fi + +if [ ! -f "$validator_config_file" ]; then + echo "Warning: validator config not found at $validator_config_file, skipping leanpoint sync." 
+ exit 0 +fi + +# --- Local deployment: generate upstreams with --docker, run leanpoint container locally --- +if [ -n "$local_data_dir" ]; then + mkdir -p "$local_data_dir" + local_upstreams="$local_data_dir/upstreams.json" + python3 "$convert_script" "$validator_config_file" "$local_upstreams" --docker || { + echo "Warning: convert-validator-config.py failed, skipping local leanpoint deploy." + exit 0 + } + docker pull "$LEANPOINT_IMAGE" + docker stop "$LEANPOINT_CONTAINER" 2>/dev/null || true + docker rm "$LEANPOINT_CONTAINER" 2>/dev/null || true + docker run -d --name "$LEANPOINT_CONTAINER" --restart unless-stopped -p 5555:5555 \ + -v "$local_upstreams:/etc/leanpoint/upstreams.json:ro" "$LEANPOINT_IMAGE" + echo "Leanpoint deployed locally at http://localhost:5555 (upstreams: $local_upstreams)." + exit 0 +fi + +# --- Remote deployment: rsync to tooling server and recreate container there --- +remote_target="${TOOLING_SERVER_USER}@${TOOLING_SERVER}" +ssh_cmd="ssh -o StrictHostKeyChecking=no" +if [ -n "$sshKeyFile" ]; then + key_path="$sshKeyFile" + [[ "$key_path" == ~* ]] && key_path="${key_path/#\~/$HOME}" + if [ -f "$key_path" ]; then + ssh_cmd="ssh -i $key_path -o StrictHostKeyChecking=no" + fi +fi + +out_file=$(mktemp) +trap "rm -f $out_file" EXIT +python3 "$convert_script" "$validator_config_file" "$out_file" || { + echo "Warning: convert-validator-config.py failed, skipping leanpoint sync." 
+  exit 0
+}
+
+remote_dir=$(dirname "$REMOTE_UPSTREAMS_PATH")
+$ssh_cmd "$remote_target" "mkdir -p $remote_dir"
+rsync -e "$ssh_cmd" "$out_file" "${remote_target}:${REMOTE_UPSTREAMS_PATH}"
+
+# Pull, recreate, and start the container on the tooling server. 'set -e' on the
+# remote shell makes a failed pull abort before the running container is removed,
+# while the stop/rm steps stay non-fatal when no container exists yet.
+$ssh_cmd "$remote_target" "set -e; docker pull $LEANPOINT_IMAGE; docker stop $LEANPOINT_CONTAINER 2>/dev/null || true; docker rm $LEANPOINT_CONTAINER 2>/dev/null || true; docker run -d --name $LEANPOINT_CONTAINER --restart unless-stopped -p 5555:5555 -v $REMOTE_UPSTREAMS_PATH:/etc/leanpoint/upstreams.json:ro $LEANPOINT_IMAGE"
+
+echo "Leanpoint upstreams synced to $TOOLING_SERVER, image $LEANPOINT_IMAGE pulled, container '$LEANPOINT_CONTAINER' recreated."
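
For reviewers who want to sanity-check the conversion without running the repo script, the mapping `convert-validator-config.py` applies can be sketched as follows. This is a minimal illustration, not the repo code: the `to_upstreams` helper name is an assumption, and the two sample validator entries are drawn from the `validator-config.yaml` hunk in this diff.

```python
# Illustrative sketch: each validator entry in validator-config.yaml becomes one
# leanpoint upstream with a /v0/health check path. The to_upstreams name is
# hypothetical; the logic mirrors convert_validator_config() in this diff.
def to_upstreams(validators, docker_host=False, base_port=8081):
    upstreams = []
    for idx, v in enumerate(validators):
        # IP comes from enrFields, defaulting to localhost
        ip = v.get("enrFields", {}).get("ip", "127.0.0.1")
        if docker_host:
            # --docker: leanpoint inside Docker reaches validators on the host
            ip = "host.docker.internal"
        # apiPort from config, falling back to base_port + index
        port = v.get("apiPort", base_port + idx)
        upstreams.append({
            "name": v.get("name", f"validator_{idx}"),
            "url": f"http://{ip}:{port}",
            "path": "/v0/health",
        })
    return upstreams

# Sample entries matching the validator-config.yaml hunk in this diff
validators = [
    {"name": "zeam_0", "enrFields": {"ip": "127.0.0.1"}, "apiPort": 5051},
    {"name": "ream_0", "enrFields": {"ip": "127.0.0.1"}, "apiPort": 5052},
]
print(to_upstreams(validators, docker_host=True)[0]["url"])
# prints http://host.docker.internal:5051
```

Without `docker_host`, the same entries resolve to `http://127.0.0.1:5051` and `http://127.0.0.1:5052`, which is the shape leanpoint expects in `upstreams.json`.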