refactor: extract capability lookup into PeerPool helpers #509
xdustinface wants to merge 1 commit into v0.42-dev
Conversation
📝 Walkthrough
Refactors peer selection to use new PeerPool service-query methods.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Manager
    participant PeerPool
    participant Peer
    participant Network
    Manager->>PeerPool: peer_with_service(flags) / peers_with_service(flags)
    PeerPool->>Peer: acquire read lock, check has_service(flags)
    Peer-->>PeerPool: matching peer(s) (addr, Arc<RwLock<Peer>>)
    PeerPool-->>Manager: return peer(s)
    Manager->>Peer: send message / mark current_sync_peer
    Peer->>Network: transmit message over connection
    Network-->>Manager: ack / error
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 3 passed
Codecov Report
❌ Patch coverage is
Additional details and impacted files

```diff
@@            Coverage Diff             @@
##          v0.42-dev     #509    +/-   ##
=============================================
+ Coverage     66.85%    66.90%   +0.04%
=============================================
  Files           313       313
  Lines         64775     64835      +60
=============================================
+ Hits          43306     43376      +70
+ Misses        21469     21459      -10
=============================================
```

*This pull request uses carry forward flags.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
dash-spv/src/network/manager.rs (1)
994-1012: ⚠️ Potential issue | 🟠 Major
Split `GetHeaders` and `GetHeaders2` selection rules.
These branches currently share the same fallback logic, but `send_message_to_peer()` only has a safe downgrade path for `GetHeaders`. An explicit `GetHeaders2` can still be routed to a peer in `headers2_disabled` or, after the fallback, to a peer outside the headers2-capable set. `GetHeaders2` should require an eligible peer and return a protocol error otherwise; `current_sync_peer` should also be filtered through `headers2_disabled` before reuse.
Also applies to: 1078-1089
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/network/manager.rs` around lines 994 - 1012, The GetHeaders2 selection must only pick peers that support headers2 and must not fall back to non-headers2 peers; update the branch that runs when check_headers2 is true to (1) filter the reused current_sync_peer through the headers2_disabled set and ensure peer.read().await.supports_headers2() before reusing it, (2) when no such peer is found do not call peer_with_service or pick peers[0] but instead return a protocol error (similar to other error paths) so send_message_to_peer() will never be given a GetHeaders2 to a headers2-disabled peer, and (3) make the identical change in the second occurrence around the 1078-1089 region; use the symbols current_sync_peer, headers2_disabled, supports_headers2(), peer_with_service(ServiceFlags::NODE_HEADERS_COMPRESSED) and send_message_to_peer() to locate and modify the logic.
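The stricter rule the reviewer asks for can be sketched as a pure selection function. This is a hypothetical simplification, not the repository's code: the real manager works on `Arc<RwLock<Peer>>` entries behind async locks, while here a plain `PeerInfo` record models the capability view and `supports_headers2` / `headers2_disabled` are passed in directly.

```rust
use std::collections::HashSet;
use std::net::SocketAddr;

// Hypothetical, simplified capability view of a connected peer.
#[derive(Clone, Copy)]
struct PeerInfo {
    addr: SocketAddr,
    supports_headers2: bool,
}

// Select a peer for GetHeaders2: reuse current_sync_peer only if it is
// headers2-capable and not in the disabled set; otherwise require another
// eligible peer, and fail instead of silently downgrading to GetHeaders.
fn select_headers2_peer(
    current_sync_peer: Option<PeerInfo>,
    peers: &[PeerInfo],
    headers2_disabled: &HashSet<SocketAddr>,
) -> Result<SocketAddr, String> {
    let eligible = |p: &PeerInfo| p.supports_headers2 && !headers2_disabled.contains(&p.addr);

    if let Some(current) = current_sync_peer {
        if eligible(&current) {
            return Ok(current.addr);
        }
    }
    peers
        .iter()
        .copied()
        .find(|p| eligible(p))
        .map(|p| p.addr)
        .ok_or_else(|| "no headers2-capable peer available".to_string())
}
```

Because the error path replaces the fallback, a `GetHeaders2` request can never reach a headers2-disabled peer; the caller decides whether to surface the error or retry with plain `GetHeaders`.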
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dash-spv/src/network/manager.rs`:
- Around line 983-987: The snapshot "peers" can be stale between calling
self.pool.peer_with_service(flags) and later peers.find(...), causing "Selected
peer not found"; after obtaining the address from
self.pool.peer_with_service(flags) (the selected_peer variable) re-resolve the
peer from the current pool instead of using the old peers snapshot — e.g.,
replace the peers.find(...) call with a fresh lookup on the pool (call the pool
method that returns a peer by address, e.g.,
self.pool.get_peer_by_address(address) or self.pool.peer(&address).await) and
use that result, keeping the selection branch that logs "Selected peer ..." but
using the freshly-resolved peer object for further work.
In `@dash-spv/src/network/pool.rs`:
- Around line 153-177: Add unit tests for the new service-selection helpers
peer_with_service() and peers_with_service(): under #[cfg(test)] add async
#[tokio::test] cases that (1) verify no-match returns None/empty, (2) verify
first-match returns the first matching SocketAddr, and (3) verify multiple
matches are all returned by peers_with_service(). Create a Pool, populate its
peers map via pool.peers.write().await with Arc<RwLock<Peer>> entries that
advertise specific ServiceFlags, then call peer_with_service(flags) and
peers_with_service(flags) and assert expected results; place tests next to the
implementation in the same file. Ensure you exercise both functions and cover
no-match, single-match, and multi-match scenarios.
---
Outside diff comments:
In `@dash-spv/src/network/manager.rs`:
- Around line 994-1012: The GetHeaders2 selection must only pick peers that
support headers2 and must not fall back to non-headers2 peers; update the branch
that runs when check_headers2 is true to (1) filter the reused current_sync_peer
through the headers2_disabled set and ensure
peer.read().await.supports_headers2() before reusing it, (2) when no such peer
is found do not call peer_with_service or pick peers[0] but instead return a
protocol error (similar to other error paths) so send_message_to_peer() will
never be given a GetHeaders2 to a headers2-disabled peer, and (3) make the
identical change in the second occurrence around the 1078-1089 region; use the
symbols current_sync_peer, headers2_disabled, supports_headers2(),
peer_with_service(ServiceFlags::NODE_HEADERS_COMPRESSED) and
send_message_to_peer() to locate and modify the logic.
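The no-match / single-match / multi-match coverage requested for the pool helpers might look like the following self-contained sketch. The `Pool`, `Peer`, and `ServiceFlags` types here are minimal stand-ins for the real `dash-spv` types, and a synchronous `std::sync::RwLock` replaces tokio's async lock so the example runs without a runtime; `pool_with` is a hypothetical test helper.

```rust
use std::collections::HashMap;
use std::net::SocketAddr;
use std::sync::{Arc, RwLock};

// Minimal stand-ins for the real pool types.
#[derive(Clone, Copy)]
struct ServiceFlags(u64);

struct Peer {
    services: ServiceFlags,
}

impl Peer {
    // True if every requested flag bit is advertised by this peer.
    fn has_service(&self, flags: ServiceFlags) -> bool {
        self.services.0 & flags.0 == flags.0
    }
}

struct Pool {
    peers: RwLock<HashMap<SocketAddr, Arc<RwLock<Peer>>>>,
}

impl Pool {
    // First peer advertising the requested service flags, if any.
    fn peer_with_service(&self, flags: ServiceFlags) -> Option<SocketAddr> {
        self.peers
            .read()
            .unwrap()
            .iter()
            .find(|(_, p)| p.read().unwrap().has_service(flags))
            .map(|(a, _)| *a)
    }

    // All peers advertising the requested service flags.
    fn peers_with_service(&self, flags: ServiceFlags) -> Vec<SocketAddr> {
        self.peers
            .read()
            .unwrap()
            .iter()
            .filter(|(_, p)| p.read().unwrap().has_service(flags))
            .map(|(a, _)| *a)
            .collect()
    }
}

// Test helper: build a pool from (address, service bits) pairs.
fn pool_with(entries: &[(SocketAddr, u64)]) -> Pool {
    let mut map = HashMap::new();
    for (addr, bits) in entries {
        map.insert(*addr, Arc::new(RwLock::new(Peer { services: ServiceFlags(*bits) })));
    }
    Pool { peers: RwLock::new(map) }
}
```

In the real codebase the equivalent cases would live in a `#[cfg(test)]` module next to the helpers, as `#[tokio::test]` functions awaiting the async locks.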
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 76aae646-b9b2-466a-967e-43c2ba68b7a8
📒 Files selected for processing (2)
dash-spv/src/network/manager.rs
dash-spv/src/network/pool.rs
9e7f807 to a43d564
- Add `peer_with_service()` and `peers_with_service()` on `PeerPool` to replace repeated "iterate peers, check service flag" loops in message routing.
- Add unit tests.
- Generalize the match/log/error pattern for required-service peer selection in `send_to_single_peer` so new service requirements only need a single match arm.
a43d564 to bbaf01d
Actionable comments posted: 1
♻️ Duplicate comments (1)
dash-spv/src/network/manager.rs (1)
983-987: ⚠️ Potential issue | 🟡 Minor
Resolve the selected peer from the live pool right before send.
`peer_with_service()` runs after the snapshot at Line 968, so it can pick a peer that is missing from `peers`. That still makes the later `find(...)` fail with `Selected peer not found` even though a valid peer exists.
Suggested fix:

```diff
-        let (addr, peer) = peers
-            .iter()
-            .find(|(a, _)| *a == selected_peer)
-            .ok_or_else(|| NetworkError::ConnectionFailed("Selected peer not found".to_string()))?;
-
-        self.send_message_to_peer(addr, peer, message).await
+        let peer = self.pool.get_peer(&selected_peer).await.ok_or_else(|| {
+            NetworkError::ConnectionFailed(
+                "Selected peer disconnected before send".to_string(),
+            )
+        })?;
+
+        self.send_message_to_peer(&selected_peer, &peer, message).await
```

Also applies to: 1009-1013, 1053-1059
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/network/manager.rs` around lines 983 - 987, The selected peer can be stale because peer_with_service() was called before taking the snapshot used by peers.find(...); to fix, re-resolve the peer from the live pool immediately before using it: call self.pool.peer_with_service(flags).await (or the equivalent live lookup) right before the peers.find(...) / send path and use that fresh address to locate the Peer object so the later find(...) will succeed; apply this change for the other similar blocks that use peer_with_service() earlier (the blocks referenced around peer selection at the other occurrences).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dash-spv/src/network/pool.rs`:
- Around line 153-180: The pool read lock is held while awaiting per-peer
RwLocks in peer_with_service and peers_with_service; to fix, first take a
short-lived snapshot of the peer addresses and Arcs (e.g. collect
Vec<(SocketAddr, Arc<RwLock<Peer>>)>) while holding self.peers.read().await,
then drop that read guard and only afterwards iterate the snapshot and await
peer.read().await to call has_service; update both peer_with_service and
peers_with_service to follow this pattern (use Arc::clone or peer.clone() when
building the snapshot so the Arcs outlive the released pool lock).
---
Duplicate comments:
In `@dash-spv/src/network/manager.rs`:
- Around line 983-987: The selected peer can be stale because
peer_with_service() was called before taking the snapshot used by
peers.find(...); to fix, re-resolve the peer from the live pool immediately
before using it: call self.pool.peer_with_service(flags).await (or the
equivalent live lookup) right before the peers.find(...) / send path and use
that fresh address to locate the Peer object so the later find(...) will
succeed; apply this change for the other similar blocks that use
peer_with_service() earlier (the blocks referenced around peer selection at the
other occurrences).
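The snapshot pattern described for pool.rs (don't hold the pool lock while locking individual peers) can be sketched as below. This is a simplified stand-in: the real code awaits per-peer tokio `RwLock`s, whereas this sketch uses `std::sync::RwLock` so it runs without an async runtime, but the shape is the same — clone the `Arc`s under a short-lived pool guard, drop the guard, then lock each peer.

```rust
use std::collections::HashMap;
use std::net::SocketAddr;
use std::sync::{Arc, RwLock};

// Minimal stand-ins for the real pool types.
#[derive(Clone, Copy)]
struct ServiceFlags(u64);

struct Peer {
    services: ServiceFlags,
}

impl Peer {
    fn has_service(&self, flags: ServiceFlags) -> bool {
        self.services.0 & flags.0 == flags.0
    }
}

struct Pool {
    peers: RwLock<HashMap<SocketAddr, Arc<RwLock<Peer>>>>,
}

impl Pool {
    // Take a short-lived snapshot of (addr, Arc) pairs, release the pool
    // lock, and only then lock each peer to inspect its service flags.
    fn peers_with_service(&self, flags: ServiceFlags) -> Vec<SocketAddr> {
        let snapshot: Vec<(SocketAddr, Arc<RwLock<Peer>>)> = {
            let guard = self.peers.read().unwrap();
            guard.iter().map(|(a, p)| (*a, Arc::clone(p))).collect()
        }; // pool read guard dropped here, before any per-peer locking

        snapshot
            .into_iter()
            .filter(|(_, p)| p.read().unwrap().has_service(flags))
            .map(|(a, _)| a)
            .collect()
    }
}
```

In the async version this ordering matters even more: holding the pool guard across `peer.read().await` would keep every other pool operation blocked for the duration of each per-peer await.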
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: bb2447ee-7c31-44cd-a8f6-1e7b2339e449
📒 Files selected for processing (4)
dash-spv/src/network/manager.rs
dash-spv/src/network/peer.rs
dash-spv/src/network/pool.rs
dash-spv/src/test_utils/network.rs
- Add `peer_with_service()` and `peers_with_service()` on `PeerPool` to replace repeated "iterate peers, check service flag" loops in message routing.
- Generalize the match/log/error pattern for required-service peer selection in `send_to_single_peer` so new service requirements only need a single match arm.
Summary by CodeRabbit