
docs: clarify node removal strategies for IP recovery #21

Merged

chinameok merged 2 commits into master from AIT-65863-fix on Feb 14, 2026

Conversation

@chinameok (Collaborator) commented Feb 11, 2026

Summary by CodeRabbit

  • Documentation
    • Expanded node removal guide with two strategies: Random and Targeted, plus step-by-step workflows.
    • Added annotation-based targeted-deletion workflow and IP recovery scenario.
    • Introduced a Data Loss Warning for scale-down operations and data distribution considerations.
    • Reworked IP pool management into an "Extend IP Pool" procedure with examples for expanding pools and adding entries.
    • Adjusted step labeling and monitoring checkpoints for scaling and verification.
    • Added version compatibility notes to template rollout/upgrade guidance.
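The annotation-based targeted-deletion workflow summarized above can be sketched end to end. A minimal sketch, assuming hypothetical machine names and pool name; the annotation key shown follows the Cluster API convention and may differ on this platform, so check the actual docs before using it:

```shell
# Sketch of the annotation-based targeted-deletion workflow.
# Machine names, pool name, and the annotation key are illustrative.
NAMESPACE="cpaas-system"
MACHINES="worker-a worker-b"      # machines slated for removal (hypothetical)
CURRENT_REPLICAS=5

# Step 1: mark each machine for deletion (commands printed, not executed here).
for M in $MACHINES; do
  echo "kubectl annotate machine $M -n $NAMESPACE cluster.x-k8s.io/delete-machine=yes"
done

# Step 2: shrink replicas by exactly the number of annotated machines, so the
# platform deletes the annotated machines rather than randomly chosen ones.
ANNOTATED=$(echo "$MACHINES" | wc -w)
NEW_REPLICAS=$((CURRENT_REPLICAS - ANNOTATED))
echo "kubectl patch machinedeployment worker-pool-1 -n $NAMESPACE --type=json" \
     "-p='[{\"op\":\"replace\",\"path\":\"/spec/replicas\",\"value\":$NEW_REPLICAS}]'"
```

The key invariant is in step 2: the replica reduction must equal the number of annotated machines, which is the behavior the expanded docs call out.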

@coderabbitai bot commented Feb 11, 2026

Walkthrough

This PR expands the Node Management docs (docs/en/node.mdx), adding detailed workflows for scaling down and removing worker nodes, two removal strategies (Random and Targeted), IP pool expansion instructions, data-loss warnings, and examples/commands for annotation-based deletions and IP recovery.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Node Management docs<br>`docs/en/node.mdx` | Large content additions and rework: new Random and Targeted node removal workflows, annotation-based targeted-deletion steps, monitoring and verification steps, IP pool expansion procedure (export/patch/merge full array), IP recovery scenario, data-loss warning, updated step numbering and examples (many code/YAML blocks). |
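The "export/patch/merge full array" procedure mentioned here hinges on one subtlety: a `--type='merge'` patch replaces the whole `/spec/pool` array, so every existing entry must be resubmitted alongside the new ones. A minimal sketch of assembling such a payload, with placeholder entry values and pool name (not taken from the actual docs):

```shell
# Assemble a merge-patch payload that keeps the existing pool entry and
# appends a new one. Entry fields and names are placeholders.
EXISTING='{"ip":"10.0.0.11","mask":"255.255.255.0","hostname":"worker-1"}'
NEW_ENTRY='{"ip":"10.0.0.12","mask":"255.255.255.0","hostname":"worker-2"}'

# A merge patch on /spec/pool replaces the array, so both entries go in.
PATCH=$(printf '{"spec":{"pool":[%s,%s]}}' "$EXISTING" "$NEW_ENTRY")
echo "$PATCH"

# The patch would then be applied with (printed only, not executed here):
echo "kubectl patch dcsiphostnamepool <pool-name> -n cpaas-system --type=merge -p='$PATCH'"
```

Omitting any existing entry from `PATCH` would silently drop it from the pool, which is why the docs add a warning around this step.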

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs

  • Ait 65863 #20: Edits the same docs/en/node.mdx with overlapping node scaling/removal workflows, targeted deletion annotations, and IP pool expansion steps.

Suggested reviewers

  • wgkingk

Poem

🐰 I nibbled docs by moonlight’s glow,
Annotated paths where lost IPs go.
Two carrot-trails — random or true —
Nodes hop out, then new ones queue.

🚥 Pre-merge checks | ✅ 4
✅ Passed checks (4 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title Check | ✅ Passed | The title accurately reflects the main changes in the pull request, which focuses on clarifying node removal strategies and IP recovery scenarios in the Node Management documentation. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage; skipping docstring coverage check. |
| Merge Conflict Detection | ✅ Passed | No merge conflicts detected when merging into master. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


No actionable comments were generated in the recent review. 🎉

🧹 Recent nitpick comments
docs/en/node.mdx (2)

627-634: Add a concrete kubectl example for targeted scale-down, matching the random removal section.

The random removal section (line 548-551) includes a concrete kubectl patch example, but the targeted removal step only describes the scenario textually. Adding a matching example would improve consistency.

Suggested addition after line 632:

````diff
     **Example**: If you annotated 2 machines, reduce replicas by exactly 2 (e.g., from 5 to 3)

+    ```bash
+    kubectl patch machinedeployment worker-pool-1 -n cpaas-system \
+      --type='json' -p='[{"op": "replace", "path": "/spec/replicas", "value": 3}]'
+    ```
+
     The platform will delete the **annotated** machines, not randomly selected ones.
````

338-380: Consider JSON Patch for safer array append without manual entry copying.

The current merge patch approach (--type='merge') is documented with a helpful warning, but requires users to manually copy all existing entries into the patch command. A safer alternative is JSON Patch (--type='json') with the append operator, which adds entries without needing to include existing ones:

💡 Alternative using JSON Patch:

```bash
kubectl patch dcsiphostnamepool <worker-iphostname-pool-name> -n cpaas-system \
  --type='json' -p='[
    {"op":"add","path":"/spec/pool/-","value":{"ip":"<new-worker-ip-1>","mask":"<worker-mask>","gateway":"<worker-gateway>","dns":"<worker-dns>","hostname":"<new-worker-hostname-1>","machineName":"<new-worker-machine-name-1>"}},
    {"op":"add","path":"/spec/pool/-","value":{"ip":"<new-worker-ip-2>","mask":"<worker-mask>","gateway":"<worker-gateway>","dns":"<worker-dns>","hostname":"<new-worker-hostname-2>","machineName":"<new-worker-machine-name-2>"}}
  ]'
```

The `/spec/pool/-` path appends to the array without the risk of accidentally omitting existing entries. JSON Patch is well supported for custom resources and eliminates the manual copying step.


@chinameok chinameok requested a review from wgkingk February 11, 2026 08:42
@coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@docs/en/node.mdx`:
- Around line 490-499: Add a clarifying note immediately after the "Scale Down
the MachineDeployment" kubectl patch example explaining that the replica
reduction must equal the number of previously annotated machines: if you
annotate N machines, reduce replicas by N to have exactly those annotated
machines deleted; reducing by fewer deletes fewer annotated machines, and
reducing by more deletes all annotated machines plus additional (random)
machines. Reference the section title "Scale Down the MachineDeployment" and the
kubectl patch command in the note so readers know where this behavior applies.
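The rule described in this note reduces to simple arithmetic. A sketch with hypothetical counts:

```shell
# If you annotate N machines, reduce replicas by exactly N.
CURRENT=5        # current replica count (hypothetical)
ANNOTATED=2      # machines carrying the deletion annotation (hypothetical)
NEW=$((CURRENT - ANNOTATED))

# Reducing by exactly N deletes exactly the annotated machines; by fewer,
# only some of them; by more, all of them plus randomly selected extras.
printf '[{"op":"replace","path":"/spec/replicas","value":%d}]\n' "$NEW"
```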
🧹 Nitpick comments (1)
docs/en/node.mdx (1)

520-526: Consider moving the data-loss warning earlier so it applies visibly to both strategies.

Currently, a reader following the "Random Removal" path linearly will not encounter this warning until after the entire "Targeted Removal" section. Moving it immediately after the intro (after line 395) would ensure both paths benefit from the warning before the user executes any commands.

@cloudflare-workers-and-pages bot commented Feb 11, 2026

Deploying alauda-immutable-infra with Cloudflare Pages

Latest commit: f9f3698
Status: ✅  Deploy successful!
Preview URL: https://9074dd23.alauda-immutable-infra.pages.dev
Branch Preview URL: https://ait-65863-fix.alauda-immutable-infra.pages.dev


- Add IP pool expansion step before scaling up worker nodes
- Move Data Loss Warning to apply to both removal strategies
- Add replica count guidance for targeted machine removal
- Update info callout for template rollout behavior
- Move version compatibility warning to proper section

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
@chinameok chinameok enabled auto-merge February 14, 2026 02:38
@chinameok chinameok merged commit b1be4d9 into master Feb 14, 2026
3 checks passed
@chinameok chinameok deleted the AIT-65863-fix branch February 14, 2026 02:46
