
refactor: extract capability lookup into PeerPool helpers #509

Open
xdustinface wants to merge 1 commit into v0.42-dev from refactor/peer-service-lookup

Conversation

@xdustinface
Collaborator

@xdustinface xdustinface commented Mar 10, 2026

  • Add peer_with_service() and peers_with_service() on PeerPool to replace repeated "iterate peers, check service flag" loops in message routing.
  • Generalize the match/log/error pattern for required-service peer selection in send_to_single_peer so new service requirements only need a single match arm.

Summary by CodeRabbit

  • Improvements
    • Optimized peer selection for sync and message delivery by selecting peers based on advertised capabilities; prefers compressed headers and compact-filter-capable peers when available.
    • Introduced sticky sync-peer tracking with clearer logging and better fallback/error handling when capable peers are unavailable.
  • Tests
    • Added coverage for peer-selection helpers and updated test utilities to support custom test peers.

@coderabbitai
Contributor

coderabbitai bot commented Mar 10, 2026

📝 Walkthrough

Walkthrough

Refactors peer selection to use new PeerPool service-query methods. Adds peer_with_service and peers_with_service, updates the network manager to select and track sync peers via pool queries (including headers2/compact-filters capability handling), and adjusts test utilities and peer test helpers accordingly.

Changes

Cohort / File(s) Summary
PeerPool service queries
dash-spv/src/network/pool.rs
Added peer_with_service(flags) and peers_with_service(flags) to return one or all connected peers advertising given ServiceFlags. Implemented test scaffolding and unit tests for these lookups.
Network manager selection logic
dash-spv/src/network/manager.rs
Replaced per-peer scans with pool-level queries for GetCFHeaders/GetCFilters and GetHeaders/GetHeaders2 (preferring headers2 when available). Introduced current_sync_peer tracking, tighter logging, and adjusted send/distribution paths to use pool methods and capability-aware filtering.
Peer test helpers & test utils
dash-spv/src/network/peer.rs, dash-spv/src/test_utils/network.rs
Added test-only Peer::set_services and changed Peer::dummy to accept a SocketAddr parameter. Updated tests to construct peers with explicit addresses.

Sequence Diagram(s)

sequenceDiagram
    participant Manager
    participant PeerPool
    participant Peer
    participant Network

    Manager->>PeerPool: peer_with_service(flags) / peers_with_service(flags)
    PeerPool->>Peer: acquire read lock, check has_service(flags)
    Peer-->>PeerPool: matching peer(s) (addr, Arc<RwLock<Peer>>)
    PeerPool-->>Manager: return peer(s)
    Manager->>Peer: send message / mark current_sync_peer
    Peer->>Network: transmit message over connection
    Network-->>Manager: ack / error

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

I nibble logs and tweak the code today,
From scan to pool we hop away.
Peers now answer when I call their name,
Sync hops steady — no more wild game.
Hooray, a carrot for every new peer! 🐰✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The pull request title 'refactor: extract capability lookup into PeerPool helpers' accurately summarizes the main change: extracting peer capability lookup logic into new PeerPool helper methods (peer_with_service and peers_with_service).
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.






@codecov

codecov bot commented Mar 10, 2026

Codecov Report

❌ Patch coverage is 85.43689% with 15 lines in your changes missing coverage. Please review.
✅ Project coverage is 66.90%. Comparing base (b1451dc) to head (bbaf01d).
⚠️ Report is 1 commit behind head on v0.42-dev.

Files with missing lines Patch % Lines
dash-spv/src/network/manager.rs 31.81% 15 Missing ⚠️
Additional details and impacted files
@@              Coverage Diff              @@
##           v0.42-dev     #509      +/-   ##
=============================================
+ Coverage      66.85%   66.90%   +0.04%     
=============================================
  Files            313      313              
  Lines          64775    64835      +60     
=============================================
+ Hits           43306    43376      +70     
+ Misses         21469    21459      -10     
Flag Coverage Δ *Carryforward flag
core 74.93% <ø> (ø)
dash-network 74.92% <ø> (ø) Carriedforward from b1451dc
dash-network-ffi 34.74% <ø> (ø) Carriedforward from b1451dc
dash-spv 68.42% <ø> (+0.23%) ⬆️ Carriedforward from b1451dc
dash-spv-ffi 34.74% <ø> (ø) Carriedforward from b1451dc
dashcore 74.92% <ø> (ø) Carriedforward from b1451dc
dashcore-private 74.92% <ø> (ø) Carriedforward from b1451dc
dashcore-rpc 19.92% <ø> (ø) Carriedforward from b1451dc
dashcore-rpc-json 19.92% <ø> (ø) Carriedforward from b1451dc
dashcore_hashes 74.92% <ø> (ø) Carriedforward from b1451dc
ffi 35.98% <ø> (+0.01%) ⬆️
key-wallet 65.69% <ø> (ø) Carriedforward from b1451dc
key-wallet-ffi 34.74% <ø> (ø) Carriedforward from b1451dc
key-wallet-manager 65.69% <ø> (ø) Carriedforward from b1451dc
rpc 19.92% <ø> (ø)
spv 81.11% <85.43%> (+0.15%) ⬆️
wallet 65.69% <ø> (ø)

*This pull request uses carry-forward flags.

Files with missing lines Coverage Δ
dash-spv/src/network/peer.rs 58.71% <100.00%> (-0.83%) ⬇️
dash-spv/src/network/pool.rs 88.20% <100.00%> (+7.53%) ⬆️
dash-spv/src/network/manager.rs 57.69% <31.81%> (-0.06%) ⬇️

... and 5 files with indirect coverage changes


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
dash-spv/src/network/manager.rs (1)

994-1012: ⚠️ Potential issue | 🟠 Major

Split GetHeaders and GetHeaders2 selection rules.

These branches currently share the same fallback logic, but send_message_to_peer() only has a safe downgrade path for GetHeaders. An explicit GetHeaders2 can still be routed to a peer in headers2_disabled or, after the fallback, to a peer outside the headers2-capable set. GetHeaders2 should require an eligible peer and return a protocol error otherwise; current_sync_peer should also be filtered through headers2_disabled before reuse.

Also applies to: 1078-1089

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/src/network/manager.rs` around lines 994 - 1012, The GetHeaders2
selection must only pick peers that support headers2 and must not fall back to
non-headers2 peers; update the branch that runs when check_headers2 is true to
(1) filter the reused current_sync_peer through the headers2_disabled set and
ensure peer.read().await.supports_headers2() before reusing it, (2) when no such
peer is found do not call peer_with_service or pick peers[0] but instead return
a protocol error (similar to other error paths) so send_message_to_peer() will
never be given a GetHeaders2 to a headers2-disabled peer, and (3) make the
identical change in the second occurrence around the 1078-1089 region; use the
symbols current_sync_peer, headers2_disabled, supports_headers2(),
peer_with_service(ServiceFlags::NODE_HEADERS_COMPRESSED) and
send_message_to_peer() to locate and modify the logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@dash-spv/src/network/manager.rs`:
- Around line 983-987: The snapshot "peers" can be stale between calling
self.pool.peer_with_service(flags) and later peers.find(...), causing "Selected
peer not found"; after obtaining the address from
self.pool.peer_with_service(flags) (the selected_peer variable) re-resolve the
peer from the current pool instead of using the old peers snapshot — e.g.,
replace the peers.find(...) call with a fresh lookup on the pool (call the pool
method that returns a peer by address, e.g.,
self.pool.get_peer_by_address(address) or self.pool.peer(&address).await) and
use that result, keeping the selection branch that logs "Selected peer ..." but
using the freshly-resolved peer object for further work.

In `@dash-spv/src/network/pool.rs`:
- Around line 153-177: Add unit tests for the new service-selection helpers
peer_with_service() and peers_with_service(): under #[cfg(test)] add async
#[tokio::test] cases that (1) verify no-match returns None/empty, (2) verify
first-match returns the first matching SocketAddr, and (3) verify multiple
matches are all returned by peers_with_service(). Create a Pool, populate its
peers map via pool.peers.write().await with Arc<RwLock<Peer>> entries that
advertise specific ServiceFlags, then call peer_with_service(flags) and
peers_with_service(flags) and assert expected results; place tests next to the
implementation in the same file. Ensure you exercise both functions and cover
no-match, single-match, and multi-match scenarios.

---

Outside diff comments:
In `@dash-spv/src/network/manager.rs`:
- Around line 994-1012: The GetHeaders2 selection must only pick peers that
support headers2 and must not fall back to non-headers2 peers; update the branch
that runs when check_headers2 is true to (1) filter the reused current_sync_peer
through the headers2_disabled set and ensure
peer.read().await.supports_headers2() before reusing it, (2) when no such peer
is found do not call peer_with_service or pick peers[0] but instead return a
protocol error (similar to other error paths) so send_message_to_peer() will
never be given a GetHeaders2 to a headers2-disabled peer, and (3) make the
identical change in the second occurrence around the 1078-1089 region; use the
symbols current_sync_peer, headers2_disabled, supports_headers2(),
peer_with_service(ServiceFlags::NODE_HEADERS_COMPRESSED) and
send_message_to_peer() to locate and modify the logic.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 76aae646-b9b2-466a-967e-43c2ba68b7a8

📥 Commits

Reviewing files that changed from the base of the PR and between 98988cc and cdcfe3d.

📒 Files selected for processing (2)
  • dash-spv/src/network/manager.rs
  • dash-spv/src/network/pool.rs

@xdustinface xdustinface marked this pull request as draft March 10, 2026 06:53
@xdustinface xdustinface force-pushed the refactor/peer-service-lookup branch from 9e7f807 to a43d564 on March 10, 2026 09:33
- Add `peer_with_service()` and `peers_with_service()` on `PeerPool` to replace repeated "iterate peers, check service flag" loops in message routing.
- Add unit tests
- Generalize the match/log/error pattern for required-service peer
selection in `send_to_single_peer` so new service requirements only need a single match arm.
@xdustinface xdustinface force-pushed the refactor/peer-service-lookup branch from a43d564 to bbaf01d on March 10, 2026 09:38
@xdustinface xdustinface marked this pull request as ready for review March 10, 2026 15:51
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
dash-spv/src/network/manager.rs (1)

983-987: ⚠️ Potential issue | 🟡 Minor

Resolve the selected peer from the live pool right before send.

peer_with_service() runs after the snapshot at Line 968, so it can pick a peer that is missing from peers. That still makes the later find(...) fail with Selected peer not found even though a valid peer exists.

Suggested fix
-        let (addr, peer) = peers
-            .iter()
-            .find(|(a, _)| *a == selected_peer)
-            .ok_or_else(|| NetworkError::ConnectionFailed("Selected peer not found".to_string()))?;
-
-        self.send_message_to_peer(addr, peer, message).await
+        let peer = self.pool.get_peer(&selected_peer).await.ok_or_else(|| {
+            NetworkError::ConnectionFailed(
+                "Selected peer disconnected before send".to_string(),
+            )
+        })?;
+
+        self.send_message_to_peer(&selected_peer, &peer, message).await

Also applies to: 1009-1013, 1053-1059

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dash-spv/src/network/manager.rs` around lines 983 - 987, The selected peer
can be stale because peer_with_service() was called before taking the snapshot
used by peers.find(...); to fix, re-resolve the peer from the live pool
immediately before using it: call self.pool.peer_with_service(flags).await (or
the equivalent live lookup) right before the peers.find(...) / send path and use
that fresh address to locate the Peer object so the later find(...) will
succeed; apply this change for the other similar blocks that use
peer_with_service() earlier (the blocks referenced around peer selection at the
other occurrences).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@dash-spv/src/network/pool.rs`:
- Around line 153-180: The pool read lock is held while awaiting per-peer
RwLocks in peer_with_service and peers_with_service; to fix, first take a
short-lived snapshot of the peer addresses and Arcs (e.g. collect
Vec<(SocketAddr, Arc<RwLock<Peer>>)>) while holding self.peers.read().await,
then drop that read guard and only afterwards iterate the snapshot and await
peer.read().await to call has_service; update both peer_with_service and
peers_with_service to follow this pattern (use Arc::clone or peer.clone() when
building the snapshot so the Arcs outlive the released pool lock).

---

Duplicate comments:
In `@dash-spv/src/network/manager.rs`:
- Around line 983-987: The selected peer can be stale because
peer_with_service() was called before taking the snapshot used by
peers.find(...); to fix, re-resolve the peer from the live pool immediately
before using it: call self.pool.peer_with_service(flags).await (or the
equivalent live lookup) right before the peers.find(...) / send path and use
that fresh address to locate the Peer object so the later find(...) will
succeed; apply this change for the other similar blocks that use
peer_with_service() earlier (the blocks referenced around peer selection at the
other occurrences).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: bb2447ee-7c31-44cd-a8f6-1e7b2339e449

📥 Commits

Reviewing files that changed from the base of the PR and between cdcfe3d and bbaf01d.

📒 Files selected for processing (4)
  • dash-spv/src/network/manager.rs
  • dash-spv/src/network/peer.rs
  • dash-spv/src/network/pool.rs
  • dash-spv/src/test_utils/network.rs

