
Minor fixes #1787

Merged
reubenyap merged 16 commits into master from minor_fixes on Mar 19, 2026

Conversation

Contributor

@levonpetrosyan93 levonpetrosyan93 commented Mar 2, 2026

Note

Medium Risk
Touches consensus-adjacent validation paths (Spark spend verification, fee accounting, reorg handling) and P2P relay logic, which could impact block acceptance/relay behavior if incorrect. Changes are mostly defensive checks and data-structure swaps, but they sit on critical codepaths.

Overview
Hardens Spark/SparkName transaction validation and reorg handling. Adds explicit overflow guards for fee + vout in Spark spend construction/verification, fixes fee accumulation during disconnect, enforces MoneyRange after adding Spark fees, and adds stricter SparkName v2 checks (operation type bounds, disallow opUnregister, require nonzero validity).

Fixes Spark spend proof caching + async verification edge cases. On reorg disconnect, invalidates cached Spark spend proof results for spends in the disconnected block; tightens async proof-check bookkeeping to handle cache entry removal during unlock windows and falls back to synchronous verification if async scheduling fails.

Improves crypto/network input validation and relay tracking. Adds additional infinity checks in Chaum verification, switches Grootle batch bind weight generation to Scalar::randomize(), validates extended key serialization length, and replaces unbounded per-peer Dandelion inventory std::set with a bounded CRollingBloomFilter (with locking updates in net processing) to reduce memory/DoS risk.

Wallet/RPC reliability. Makes Spark spend-key generation throw WalletLocked instead of returning a default key, updates wallet creation/tests/RPC to handle this explicitly, and adds missing includes/cleaner error messages.

Written by Cursor Bugbot for commit 2f01b89. This will update automatically on new commits. Configure here.

@coderabbitai
Contributor

coderabbitai Bot commented Mar 2, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

This PR introduces multiple defensive validation checks, refactors inventory tracking, and modifies exception handling and async verification logic. Changes include: adding buffer/overflow checks for cryptographic keys and fee calculations, replacing set-based Dandelion inventory tracking with Bloom filters, refactoring wallet lock detection to use exceptions, implementing async verification scheduling for Spark spends, validating SparkName v2 operations, and fixing fee accumulation during block disconnection.

Changes

Cohort / File(s): Summary

Cryptographic Validation (src/key.h, src/pubkey.h, src/libspark/chaum.cpp, src/libspark/grootle.cpp)
Added runtime size validation for extended key serialization; added infinity validation for Chaum proof inputs; refactored bind_weight sampling to use Scalar::randomize() instead of a PRNG; removed a trailing newline from an error message.

Fee & Overflow Defense (src/libspark/spend_transaction.h, src/libspark/spend_transaction.cpp, src/validation.cpp)
Added overflow checks for the fee + vout sum in the spend transaction constructor, setter, and verifier; fixed fee accumulation to add rather than overwrite during block disconnection; added fee range validation during block connection.

Dandelion Inventory Tracking (src/net.h, src/net.cpp, src/net_processing.cpp)
Replaced std::set<uint256> setDandelionInventoryKnown with CRollingBloomFilter filterDandelionInventoryKnown for bounded-memory membership tracking; updated all check/insert operations in inventory push and message processing logic.

Exception Handling Refactoring (src/spark/sparkwallet.cpp, src/wallet/rpcwallet.cpp, src/test/sparkname_tests.cpp)
Changed wallet lock detection from post-generation equality checks to an explicit WalletLocked exception; updated transaction construction and RPC call sites to catch and handle locked-wallet exceptions; adjusted test error handling.

Async Spark Verification (src/spark/state.cpp)
Added a scheduledAsync flag in CheckSparkSpendTransaction to manage conditional async proof verification batching; invalidated cached results for all Spark spend proofs on block disconnection; added a pre-existence check before updating state after async completion.

SparkName v2 Validation (src/sparkname.cpp)
Added operation type validation rejecting values ≥ opMaximumValue and opUnregister operations; added a validity period guard rejecting zero-block periods; updated ValidateSparkNameData to report matching error descriptions.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested reviewers

  • psolstice
  • aleflm

Poem

🐰 A rabbit hops through validation gates,
Bloom filters spring where sets deflate,
Async tasks now schedule with care,
Fee overflows caught mid-air!
Security strengthens, bugs beware! ✨

🚥 Pre-merge checks | ❌ 3

❌ Failed checks (2 warnings, 1 inconclusive)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 12.50%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
  • Description check: ⚠️ Warning. The auto-generated description covers all major changes but does not match the required template with 'PR intention' and 'Code changes brief' sections. Resolution: restructure the description to add an explicit 'PR intention' section stating the primary goal and a 'Code changes brief' section explaining architectural/UX changes not evident from the code.
  • Title check: ❓ Inconclusive. The title 'Minor fixes' is vague and conveys nothing about the scope of the changes. Resolution: use a descriptive title such as 'Add validation hardening for Spark spends, Dandelion inventory tracking, and fee accounting'.


@coderabbitai coderabbitai Bot requested a review from psolstice March 2, 2026 23:59
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
src/spark/sparkwallet.cpp (1)

44-52: Minor inconsistency in error message formatting.

The error messages here end with \n (e.g., "Spark wallet creation FAILED, wallet is locked\n"), while similar messages in CreateSparkSpendTransaction (line 1439) and CreateSparkNameTransaction (line 1699) end with a period (e.g., "Unable to generate spend key, wallet is locked.").

Consider standardizing the format across all locations for consistency.

💡 Suggested fix for consistency
-                throw std::runtime_error("Spark wallet creation FAILED, wallet is locked\n");
+                throw std::runtime_error("Spark wallet creation FAILED, wallet is locked.");
             }
-            throw std::runtime_error("Spark wallet creation FAILED, unable to generate spend key\n");
+            throw std::runtime_error("Spark wallet creation FAILED, unable to generate spend key.");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/spark/sparkwallet.cpp` around lines 44 - 52, Standardize the error
message formatting in the spend key generation try/catch: in the catch for
generateSpendKey (symbols: generateSpendKey, SPARK_WALLET_LOCKED_MSG, spendKey)
replace the messages that end with "\n" with the same wording and punctuation
used elsewhere (e.g., use a trailing period and the same phrasing like "Unable
to generate spend key, wallet is locked." and "Spark wallet creation FAILED,
unable to generate spend key.") so the locked-specific and generic failure
messages match CreateSparkSpendTransaction/CreateSparkNameTransaction
formatting.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/spark/state.cpp`:
- Around line 461-471: The code holds a reference to
gCheckedSparkSpendTransactions[hashTx] across an unlocked wait which can be
erased by DisconnectTipSpark; after reacquiring cs_checkedSparkSpendTransactions
(around the point where the code currently writes to checkState), use
find(hashTx) on gCheckedSparkSpendTransactions to confirm the entry still exists
before mutating it (checkState and related fields); if find fails, set
fRecheckNeeded = true and continue instead of dereferencing the stale reference;
ensure all accesses to the map around the wait use the iterator/result from find
and avoid using the previously held reference.

In `@src/sparkname.cpp`:
- Around line 339-341: The preflight check in this block currently rejects
operationType values >= opMaximumValue but does not mirror the zero-validity
rejection added in CheckSparkNameTx/ValidateSparkNameData; add a check for
sparkNameData.nVersion >= 2 and sparkNameData.nValidity == 0 and set
errorDescription to a clear message (e.g., "invalid validity") so zero validity
is rejected early. Locate the validation branch handling
sparkNameData.nVersion/operationType in the same function and add the
nValidity==0 check alongside the existing operationType check to keep behavior
consistent with ValidateSparkNameData and CheckSparkNameTx.
- Around line 171-173: The current consensus check in CheckSparkNameTx accepts
any operationType < CSparkNameTxData::opMaximumValue, which incorrectly allows
opUnregister while ValidateSparkNameData treats opUnregister as unsupported;
update the validation to also reject opUnregister by adding an explicit check
against CSparkNameTxData::opUnregister (e.g., treat operationType ==
(uint8_t)CSparkNameTxData::opUnregister as invalid) alongside the existing
operationType >= (uint8_t)CSparkNameTxData::opMaximumValue check so that
sparkNameData.nVersion, sparkNameData.operationType and
CSparkNameTxData::opUnregister are used to return state.DoS(...) for that
unsupported operation.

In `@src/wallet/rpcwallet.cpp`:
- Around line 4996-4999: Replace the std::runtime_error throw in the catch block
that compares e.what() to SPARK_WALLET_LOCKED_MSG with a JSONRPCError using
RPC_WALLET_UNLOCK_NEEDED so RPC clients get the same error code; specifically,
in the catch in rpcwallet.cpp where SPARK_WALLET_LOCKED_MSG is checked, change
throw std::runtime_error(_("Unable to generate spend key, wallet is locked."))
to throw JSONRPCError(RPC_WALLET_UNLOCK_NEEDED, _("Unable to generate spend key,
wallet is locked.")) so the message is preserved but the RPC error code matches
other Spark RPC paths.

---

Nitpick comments:
In `@src/spark/sparkwallet.cpp`:
- Around line 44-52: Standardize the error message formatting in the spend key
generation try/catch: in the catch for generateSpendKey (symbols:
generateSpendKey, SPARK_WALLET_LOCKED_MSG, spendKey) replace the messages that
end with "\n" with the same wording and punctuation used elsewhere (e.g., use a
trailing period and the same phrasing like "Unable to generate spend key, wallet
is locked." and "Spark wallet creation FAILED, unable to generate spend key.")
so the locked-specific and generic failure messages match
CreateSparkSpendTransaction/CreateSparkNameTransaction formatting.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 46eee34 and beb1feb.

📒 Files selected for processing (15)
  • src/key.h
  • src/libspark/chaum.cpp
  • src/libspark/grootle.cpp
  • src/libspark/spend_transaction.cpp
  • src/libspark/spend_transaction.h
  • src/net.cpp
  • src/net.h
  • src/net_processing.cpp
  • src/spark/sparkwallet.cpp
  • src/spark/sparkwallet.h
  • src/spark/state.cpp
  • src/sparkname.cpp
  • src/test/sparkname_tests.cpp
  • src/validation.cpp
  • src/wallet/rpcwallet.cpp

Comment thread src/spark/state.cpp
Comment thread src/sparkname.cpp
Comment thread src/sparkname.cpp
Comment on lines +339 to +341
else if (sparkNameData.nVersion >= 2 && sparkNameData.operationType >= (uint8_t)CSparkNameTxData::opMaximumValue)
errorDescription = "invalid operation type";

Contributor


⚠️ Potential issue | 🟡 Minor

Mirror the minimum-validity check in ValidateSparkNameData.

CheckSparkNameTx now rejects zero validity (Line 181), but this preflight validator still allows it. That can surface late failures instead of early user-facing validation.

Proposed fix
+    else if (sparkNameData.sparkNameValidityBlocks == 0)
+        errorDescription = "transaction validity period must be at least 1 block";
+
     else if (sparkNameData.sparkNameValidityBlocks > 365*24*24*10)
         errorDescription = "transaction can't be valid for more than 10 years";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/sparkname.cpp` around lines 339 - 341, The preflight check in this block
currently rejects operationType values >= opMaximumValue but does not mirror the
zero-validity rejection added in CheckSparkNameTx/ValidateSparkNameData; add a
check for sparkNameData.nVersion >= 2 and sparkNameData.nValidity == 0 and set
errorDescription to a clear message (e.g., "invalid validity") so zero validity
is rejected early. Locate the validation branch handling
sparkNameData.nVersion/operationType in the same function and add the
nValidity==0 check alongside the existing operationType check to keep behavior
consistent with ValidateSparkNameData and CheckSparkNameTx.

Comment thread src/wallet/rpcwallet.cpp Outdated
Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
src/wallet/rpcwallet.cpp (1)

1441-1444: Tighten lock message wording for exact-match condition.

Since this branch is entered only on exact SPARK_WALLET_LOCKED_MSG, "wallet is locked" is more accurate than "wallet may be locked".

✏️ Suggested wording tweak
-            throw JSONRPCError(RPC_WALLET_UNLOCK_NEEDED, "Unable to generate spend key, wallet may be locked");
+            throw JSONRPCError(RPC_WALLET_UNLOCK_NEEDED, "Unable to generate spend key, wallet is locked");

Also applies to: 4996-4999

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/wallet/rpcwallet.cpp` around lines 1441 - 1444, The catch block that
checks for an exact match to SPARK_WALLET_LOCKED_MSG should return a precise
error message: update the JSONRPCError call (RPC_WALLET_UNLOCK_NEEDED) to use
"Unable to generate spend key, wallet is locked" instead of "wallet may be
locked"; make the identical change for the other occurrence that matches
SPARK_WALLET_LOCKED_MSG (the second JSONRPCError usage referenced in the
review).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@src/wallet/rpcwallet.cpp`:
- Around line 1441-1444: The catch block that checks for an exact match to
SPARK_WALLET_LOCKED_MSG should return a precise error message: update the
JSONRPCError call (RPC_WALLET_UNLOCK_NEEDED) to use "Unable to generate spend
key, wallet is locked" instead of "wallet may be locked"; make the identical
change for the other occurrence that matches SPARK_WALLET_LOCKED_MSG (the second
JSONRPCError usage referenced in the review).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 0bad5d73-b271-4493-a21e-25f46368395d

📥 Commits

Reviewing files that changed from the base of the PR and between beb1feb and 67ebcac.

📒 Files selected for processing (3)
  • src/spark/state.cpp
  • src/sparkname.cpp
  • src/wallet/rpcwallet.cpp

@firoorg firoorg deleted a comment from cursor Bot Mar 6, 2026
@reubenyap
Member

bugbot run

@firoorg firoorg deleted a comment from cursor Bot Mar 6, 2026

@cursor cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

Autofix Details

Bugbot Autofix prepared a fix for the issue found in the latest run.

  • ✅ Fixed: Missing opUnregister validation in ValidateSparkNameData
    • Added the missing opUnregister rejection check in ValidateSparkNameData to match the existing check in CheckSparkNameTx, ensuring both functions enforce the same operation type constraints.

Create PR

Or push these changes by commenting:

@cursor push d2fbad8c5d
Preview (d2fbad8c5d)
diff --git a/src/sparkname.cpp b/src/sparkname.cpp
--- a/src/sparkname.cpp
+++ b/src/sparkname.cpp
@@ -345,6 +345,9 @@
     else if (sparkNameData.nVersion >= 2 && sparkNameData.operationType >= (uint8_t)CSparkNameTxData::opMaximumValue)
         errorDescription = "invalid operation type";
 
+    else if (sparkNameData.nVersion >= 2 && sparkNameData.operationType == (uint8_t)CSparkNameTxData::opUnregister)
+        errorDescription = "unregister operation is not supported yet";
+
     else if (sparkNames.count(ToUpper(sparkNameData.name)) > 0 &&
                 sparkNames[ToUpper(sparkNameData.name)].sparkAddress != sparkNameData.sparkAddress &&
                 (sparkNameData.nVersion < 2 || sparkNameData.operationType == CSparkNameTxData::opRegister))
This Bugbot Autofix run was free. To enable autofix for future PRs, go to the Cursor dashboard.

Comment thread src/sparkname.cpp
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/net_processing.cpp`:
- Around line 1197-1201: Accesses to the mutable bloom filter
filterDandelionInventoryKnown in ProcessGetData() and the message handling
branch (where connman.PushMessage(..., NetMsgType::DANDELIONTX, ...) and the
dandelionServiceDiscoveryHash check occur) are not protected by cs_inventory,
causing data races; wrap every read or write of filterDandelionInventoryKnown
with the same lock used elsewhere (LOCK(cs_inventory)) so the pattern matches
PushInventory() and other guarded paths, i.e., acquire LOCK(cs_inventory) before
checking contains(inv.hash) or modifying the bloom filter and release after the
check/operation to eliminate the race.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 110ff605-89ed-4de2-a3a1-d1f68f3d2fee

📥 Commits

Reviewing files that changed from the base of the PR and between 67ebcac and 627c220.

📒 Files selected for processing (3)
  • src/net.cpp
  • src/net.h
  • src/net_processing.cpp
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/net.h

Comment thread src/net_processing.cpp Outdated
Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
src/net_processing.cpp (1)

3619-3621: Remove redundant nested cs_inventory lock in SendMessages().

Line 3605 already holds LOCK(pto->cs_inventory) protecting the entire inventory send block. The additional lock at line 3620 is unnecessary; since CCriticalSection uses a recursive mutex, it succeeds but adds complexity without benefit. Rely on the outer lock scope instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/net_processing.cpp` around lines 3619 - 3621, Remove the redundant inner
LOCK on pto->cs_inventory inside SendMessages(): the outer scope already holds
LOCK(pto->cs_inventory) protecting the inventory send block, so delete the
nested LOCK that surrounds the loop iterating over
pto->vInventoryDandelionTxToSend. Update SendMessages() to rely on the existing
outer LOCK(pto->cs_inventory) and leave the loop over
vInventoryDandelionTxToSend unwrapped by a second cs_inventory lock.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@src/net_processing.cpp`:
- Around line 3619-3621: Remove the redundant inner LOCK on pto->cs_inventory
inside SendMessages(): the outer scope already holds LOCK(pto->cs_inventory)
protecting the inventory send block, so delete the nested LOCK that surrounds
the loop iterating over pto->vInventoryDandelionTxToSend. Update SendMessages()
to rely on the existing outer LOCK(pto->cs_inventory) and leave the loop over
vInventoryDandelionTxToSend unwrapped by a second cs_inventory lock.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 04fddfa1-d5e3-4207-a483-effa10fe0b55

📥 Commits

Reviewing files that changed from the base of the PR and between 627c220 and 03380f6.

📒 Files selected for processing (1)
  • src/net_processing.cpp


@cursor cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

Autofix Details

Bugbot Autofix prepared a fix for the issue found in the latest run.

  • ✅ Fixed: Redundant nested lock on same mutex
    • Removed the redundant inner LOCK(pto->cs_inventory) and its enclosing braces since the outer scope at line 3605 already holds the same lock.

Create PR

Or push these changes by commenting:

@cursor push 62eac5847c
Preview (62eac5847c)
diff --git a/src/net_processing.cpp b/src/net_processing.cpp
--- a/src/net_processing.cpp
+++ b/src/net_processing.cpp
@@ -3616,25 +3616,22 @@
             pto->vInventoryBlockToSend.clear();
 
             // Add Dandelion transactions
-            {
-                LOCK(pto->cs_inventory);
-                for (const uint256& hash : pto->vInventoryDandelionTxToSend) {
-                    pto->filterDandelionInventoryKnown.insert(hash);
-                    uint256 dandelionServiceDiscoveryHash;
-                    dandelionServiceDiscoveryHash.SetHex(
-                            "0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff");
-                    if (!pto->fSupportsDandelion && hash != dandelionServiceDiscoveryHash) {
-                        //LogPrintf("Pushing transaction MSG_TX %s to %s.",
-                        //          hash.ToString(), pto->addr.ToString());
-                        vInv.push_back(CInv(MSG_TX, hash));
-                    } else {
-                        vInv.push_back(CInv(MSG_DANDELION_TX, hash));
-                    }
-                    if (vInv.size() == MAX_INV_SZ) {
-                            connman.PushMessage(pto, msgMaker.Make(NetMsgType::INV, vInv));
-                        vInv.clear();
-                    }
+            for (const uint256& hash : pto->vInventoryDandelionTxToSend) {
+                pto->filterDandelionInventoryKnown.insert(hash);
+                uint256 dandelionServiceDiscoveryHash;
+                dandelionServiceDiscoveryHash.SetHex(
+                        "0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff");
+                if (!pto->fSupportsDandelion && hash != dandelionServiceDiscoveryHash) {
+                    //LogPrintf("Pushing transaction MSG_TX %s to %s.",
+                    //          hash.ToString(), pto->addr.ToString());
+                    vInv.push_back(CInv(MSG_TX, hash));
+                } else {
+                    vInv.push_back(CInv(MSG_DANDELION_TX, hash));
                 }
+                if (vInv.size() == MAX_INV_SZ) {
+                        connman.PushMessage(pto, msgMaker.Make(NetMsgType::INV, vInv));
+                    vInv.clear();
+                }
             }
             pto->vInventoryDandelionTxToSend.clear();


Comment thread src/net_processing.cpp Outdated
…for consistency and clarity. Update comments for clarity in sparkname.cpp and remove unused SPARK_WALLET_LOCKED_MSG constant.
@reubenyap
Member

bugbot run


@cursor cursor Bot left a comment


✅ Bugbot reviewed your changes and found no new issues!

Comment @cursor review or bugbot run to trigger another review on this PR

@reubenyap reubenyap merged commit 99258e4 into master Mar 19, 2026
37 of 40 checks passed
@reubenyap reubenyap deleted the minor_fixes branch March 19, 2026 16:50
@coderabbitai coderabbitai Bot mentioned this pull request Mar 20, 2026
