Conversation
…on of Spark spend transactions. Added logic to handle cases where verification is scheduled asynchronously, ensuring synchronous verification occurs when necessary for defense-in-depth. Updated return behavior to reflect verification status.
…rkNameData functions
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough

This PR introduces multiple defensive validation checks, refactors inventory tracking, and modifies exception handling and async verification logic. Changes include: adding buffer/overflow checks for cryptographic keys and fee calculations, replacing set-based Dandelion inventory tracking with Bloom filters, refactoring wallet lock detection to use exceptions, implementing async verification scheduling for Spark spends, validating SparkName v2 operations, and fixing fee accumulation during block disconnection.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ❌ 3 failed (2 warnings, 1 inconclusive)
Actionable comments posted: 4
🧹 Nitpick comments (1)
src/spark/sparkwallet.cpp (1)
44-52: Minor inconsistency in error message formatting. The error messages here end with `\n` (e.g., `"Spark wallet creation FAILED, wallet is locked\n"`), while similar messages in `CreateSparkSpendTransaction` (line 1439) and `CreateSparkNameTransaction` (line 1699) end with a period (e.g., `"Unable to generate spend key, wallet is locked."`). Consider standardizing the format across all locations for consistency.
💡 Suggested fix for consistency
- throw std::runtime_error("Spark wallet creation FAILED, wallet is locked\n");
+ throw std::runtime_error("Spark wallet creation FAILED, wallet is locked.");
  }
- throw std::runtime_error("Spark wallet creation FAILED, unable to generate spend key\n");
+ throw std::runtime_error("Spark wallet creation FAILED, unable to generate spend key.");

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/spark/sparkwallet.cpp` around lines 44 - 52, Standardize the error message formatting in the spend key generation try/catch: in the catch for generateSpendKey (symbols: generateSpendKey, SPARK_WALLET_LOCKED_MSG, spendKey) replace the messages that end with "\n" with the same wording and punctuation used elsewhere (e.g., use a trailing period and the same phrasing like "Unable to generate spend key, wallet is locked." and "Spark wallet creation FAILED, unable to generate spend key.") so the locked-specific and generic failure messages match CreateSparkSpendTransaction/CreateSparkNameTransaction formatting.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/spark/state.cpp`:
- Around line 461-471: The code holds a reference to
gCheckedSparkSpendTransactions[hashTx] across an unlocked wait which can be
erased by DisconnectTipSpark; after reacquiring cs_checkedSparkSpendTransactions
(around the point where the code currently writes to checkState), use
find(hashTx) on gCheckedSparkSpendTransactions to confirm the entry still exists
before mutating it (checkState and related fields); if find fails, set
fRecheckNeeded = true and continue instead of dereferencing the stale reference;
ensure all accesses to the map around the wait use the iterator/result from find
and avoid using the previously held reference.
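The re-find pattern this comment describes can be sketched with stdlib containers. All names here (`g_checked`, `CheckResult`, `finalize_entry`) are illustrative stand-ins for `gCheckedSparkSpendTransactions` and the surrounding verification code, not the project's actual API:

```cpp
#include <cassert>
#include <map>
#include <mutex>
#include <string>

// Illustrative stand-in for gCheckedSparkSpendTransactions: a cache of
// verification results keyed by tx hash, guarded by a mutex.
struct CheckResult { bool checkState = false; };
static std::map<std::string, CheckResult> g_checked;
static std::mutex g_cs;

// After a window where the lock was dropped (e.g. waiting on an async
// verifier), any previously held reference may be stale: another thread
// (the DisconnectTipSpark analogue) can erase the entry. Reacquire the
// lock and re-find before mutating. Returns false when the entry
// vanished, signalling that a recheck is needed.
bool finalize_entry(const std::string& hashTx, bool result) {
    std::lock_guard<std::mutex> lock(g_cs);   // reacquire the lock
    auto it = g_checked.find(hashTx);         // never reuse the old reference
    if (it == g_checked.end())
        return false;                         // entry erased: fRecheckNeeded
    it->second.checkState = result;           // safe: iterator is fresh
    return true;
}
```

The key point is that the fresh `find()` result, not the pre-wait reference, is the only thing dereferenced after the lock is reacquired.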
In `@src/sparkname.cpp`:
- Around line 339-341: The preflight check in this block currently rejects
operationType values >= opMaximumValue but does not mirror the zero-validity
rejection added in CheckSparkNameTx/ValidateSparkNameData; add a check for
sparkNameData.nVersion >= 2 and sparkNameData.nValidity == 0 and set
errorDescription to a clear message (e.g., "invalid validity") so zero validity
is rejected early. Locate the validation branch handling
sparkNameData.nVersion/operationType in the same function and add the
nValidity==0 check alongside the existing operationType check to keep behavior
consistent with ValidateSparkNameData and CheckSparkNameTx.
- Around line 171-173: The current consensus check in CheckSparkNameTx accepts
any operationType < CSparkNameTxData::opMaximumValue, which incorrectly allows
opUnregister while ValidateSparkNameData treats opUnregister as unsupported;
update the validation to also reject opUnregister by adding an explicit check
against CSparkNameTxData::opUnregister (e.g., treat operationType ==
(uint8_t)CSparkNameTxData::opUnregister as invalid) alongside the existing
operationType >= (uint8_t)CSparkNameTxData::opMaximumValue check so that
sparkNameData.nVersion, sparkNameData.operationType and
CSparkNameTxData::opUnregister are used to return state.DoS(...) for that
unsupported operation.
In `@src/wallet/rpcwallet.cpp`:
- Around line 4996-4999: Replace the std::runtime_error throw in the catch block
that compares e.what() to SPARK_WALLET_LOCKED_MSG with a JSONRPCError using
RPC_WALLET_UNLOCK_NEEDED so RPC clients get the same error code; specifically,
in the catch in rpcwallet.cpp where SPARK_WALLET_LOCKED_MSG is checked, change
throw std::runtime_error(_("Unable to generate spend key, wallet is locked."))
to throw JSONRPCError(RPC_WALLET_UNLOCK_NEEDED, _("Unable to generate spend key,
wallet is locked.")) so the message is preserved but the RPC error code matches
other Spark RPC paths.
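The error-code mapping this comment asks for can be sketched in isolation. `RPC_WALLET_UNLOCK_NEEDED` and `SPARK_WALLET_LOCKED_MSG` exist in the real codebase; their values here are illustrative placeholders, and `rpc_code_for` is a hypothetical helper:

```cpp
#include <cassert>
#include <string>

// Illustrative placeholders: the real constants live in the RPC layer
// (RPC_WALLET_UNLOCK_NEEDED) and the Spark wallet (SPARK_WALLET_LOCKED_MSG).
static const int RPC_WALLET_UNLOCK_NEEDED = -13;
static const int RPC_INTERNAL_ERROR = -32603;
static const std::string SPARK_WALLET_LOCKED_MSG = "Spark wallet is locked";

// The pattern the review asks for: an exact match on the locked-wallet
// exception message maps to the unlock-needed RPC code, so every Spark RPC
// path reports the same code; anything else stays a generic internal error.
int rpc_code_for(const std::string& what) {
    return (what == SPARK_WALLET_LOCKED_MSG) ? RPC_WALLET_UNLOCK_NEEDED
                                             : RPC_INTERNAL_ERROR;
}
```

A `std::runtime_error` carries only a message string, so matching `e.what()` and rethrowing as a structured `JSONRPCError` is what preserves the code for RPC clients.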
ℹ️ Review info
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
src/key.h, src/libspark/chaum.cpp, src/libspark/grootle.cpp, src/libspark/spend_transaction.cpp, src/libspark/spend_transaction.h, src/net.cpp, src/net.h, src/net_processing.cpp, src/spark/sparkwallet.cpp, src/spark/sparkwallet.h, src/spark/state.cpp, src/sparkname.cpp, src/test/sparkname_tests.cpp, src/validation.cpp, src/wallet/rpcwallet.cpp
else if (sparkNameData.nVersion >= 2 && sparkNameData.operationType >= (uint8_t)CSparkNameTxData::opMaximumValue)
    errorDescription = "invalid operation type";
Mirror the minimum-validity check in ValidateSparkNameData.
CheckSparkNameTx now rejects zero validity (Line 181), but this preflight validator still allows it. That can surface late failures instead of early user-facing validation.
Proposed fix
+ else if (sparkNameData.sparkNameValidityBlocks == 0)
+ errorDescription = "transaction validity period must be at least 1 block";
+
else if (sparkNameData.sparkNameValidityBlocks > 365*24*24*10)
errorDescription = "transaction can't be valid for more than 10 years";
There was a problem hiding this comment.
🧹 Nitpick comments (1)
src/wallet/rpcwallet.cpp (1)
1441-1444: Tighten lock message wording for exact-match condition. Since this branch is entered only on an exact `SPARK_WALLET_LOCKED_MSG` match, "wallet is locked" is more accurate than "wallet may be locked".

✏️ Suggested wording tweak

- throw JSONRPCError(RPC_WALLET_UNLOCK_NEEDED, "Unable to generate spend key, wallet may be locked");
+ throw JSONRPCError(RPC_WALLET_UNLOCK_NEEDED, "Unable to generate spend key, wallet is locked");

Also applies to: 4996-4999
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/wallet/rpcwallet.cpp` around lines 1441 - 1444, The catch block that checks for an exact match to SPARK_WALLET_LOCKED_MSG should return a precise error message: update the JSONRPCError call (RPC_WALLET_UNLOCK_NEEDED) to use "Unable to generate spend key, wallet is locked" instead of "wallet may be locked"; make the identical change for the other occurrence that matches SPARK_WALLET_LOCKED_MSG (the second JSONRPCError usage referenced in the review).
📒 Files selected for processing (3)
src/spark/state.cpp, src/sparkname.cpp, src/wallet/rpcwallet.cpp
bugbot run
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Autofix Details
Bugbot Autofix prepared a fix for the issue found in the latest run.
- ✅ Fixed: Missing opUnregister validation in ValidateSparkNameData
- Added the missing opUnregister rejection check in ValidateSparkNameData to match the existing check in CheckSparkNameTx, ensuring both functions enforce the same operation type constraints.
Or push these changes by commenting:
@cursor push d2fbad8c5d
Preview (d2fbad8c5d)
diff --git a/src/sparkname.cpp b/src/sparkname.cpp
--- a/src/sparkname.cpp
+++ b/src/sparkname.cpp
@@ -345,6 +345,9 @@
else if (sparkNameData.nVersion >= 2 && sparkNameData.operationType >= (uint8_t)CSparkNameTxData::opMaximumValue)
errorDescription = "invalid operation type";
+ else if (sparkNameData.nVersion >= 2 && sparkNameData.operationType == (uint8_t)CSparkNameTxData::opUnregister)
+ errorDescription = "unregister operation is not supported yet";
+
else if (sparkNames.count(ToUpper(sparkNameData.name)) > 0 &&
sparkNames[ToUpper(sparkNameData.name)].sparkAddress != sparkNameData.sparkAddress &&
(sparkNameData.nVersion < 2 || sparkNameData.operationType == CSparkNameTxData::opRegister))
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/net_processing.cpp`:
- Around line 1197-1201: Accesses to the mutable bloom filter
filterDandelionInventoryKnown in ProcessGetData() and the message handling
branch (where connman.PushMessage(..., NetMsgType::DANDELIONTX, ...) and the
dandelionServiceDiscoveryHash check occur) are not protected by cs_inventory,
causing data races; wrap every read or write of filterDandelionInventoryKnown
with the same lock used elsewhere (LOCK(cs_inventory)) so the pattern matches
PushInventory() and other guarded paths, i.e., acquire LOCK(cs_inventory) before
checking contains(inv.hash) or modifying the bloom filter and release after the
check/operation to eliminate the race.
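The guarded-access pattern can be sketched with stdlib types. A plain set stands in for the rolling bloom filter, and `Peer`, `filterKnown`, `already_known`, and `mark_known` are illustrative names, not the project's actual symbols:

```cpp
#include <cassert>
#include <mutex>
#include <string>
#include <unordered_set>

// Illustrative per-peer state: the real code guards
// filterDandelionInventoryKnown (a rolling bloom filter) with cs_inventory;
// here a plain set stands in for the filter.
struct Peer {
    std::mutex cs_inventory;
    std::unordered_set<std::string> filterKnown;
};

// Every read and every write takes the same lock, matching the guarded
// pattern in PushInventory(): no unlocked contains()/insert() call remains.
bool already_known(Peer& peer, const std::string& hash) {
    std::lock_guard<std::mutex> lock(peer.cs_inventory);
    return peer.filterKnown.count(hash) != 0;
}

void mark_known(Peer& peer, const std::string& hash) {
    std::lock_guard<std::mutex> lock(peer.cs_inventory);
    peer.filterKnown.insert(hash);
}
```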
📒 Files selected for processing (3)
src/net.cpp, src/net.h, src/net_processing.cpp
🚧 Files skipped from review as they are similar to previous changes (1)
- src/net.h
🧹 Nitpick comments (1)
src/net_processing.cpp (1)
3619-3621: Remove redundant nested `cs_inventory` lock in `SendMessages()`. Line 3605 already holds `LOCK(pto->cs_inventory)`, protecting the entire inventory send block. The additional lock at line 3620 is unnecessary; since `CCriticalSection` uses a recursive mutex, it succeeds but adds complexity without benefit. Rely on the outer lock scope instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/net_processing.cpp` around lines 3619 - 3621, Remove the redundant inner LOCK on pto->cs_inventory inside SendMessages(): the outer scope already holds LOCK(pto->cs_inventory) protecting the inventory send block, so delete the nested LOCK that surrounds the loop iterating over pto->vInventoryDandelionTxToSend. Update SendMessages() to rely on the existing outer LOCK(pto->cs_inventory) and leave the loop over vInventoryDandelionTxToSend unwrapped by a second cs_inventory lock.
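Why the nested lock "succeeds but adds complexity" can be shown with `std::recursive_mutex`, which re-locks the same way `CCriticalSection` does. The names `send_inventory` and `sends` are illustrative:

```cpp
#include <cassert>
#include <mutex>

// CCriticalSection wraps a recursive mutex, so re-locking a mutex the
// current thread already holds succeeds; std::recursive_mutex shows the
// same behavior. The nested lock below is therefore harmless but redundant.
static std::recursive_mutex cs_inventory;
static int sends = 0;

void send_inventory() {
    std::lock_guard<std::recursive_mutex> outer(cs_inventory);
    {
        // Redundant nested lock: already held via `outer`. With a
        // non-recursive std::mutex this line would deadlock; the review
        // suggests deleting it and relying on the outer scope.
        std::lock_guard<std::recursive_mutex> inner(cs_inventory);
        ++sends;
    }
    ++sends;
}
```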
📒 Files selected for processing (1)
src/net_processing.cpp
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Autofix Details
Bugbot Autofix prepared a fix for the issue found in the latest run.
- ✅ Fixed: Redundant nested lock on same mutex
- Removed the redundant inner LOCK(pto->cs_inventory) and its enclosing braces since the outer scope at line 3605 already holds the same lock.
Or push these changes by commenting:
@cursor push 62eac5847c
Preview (62eac5847c)
diff --git a/src/net_processing.cpp b/src/net_processing.cpp
--- a/src/net_processing.cpp
+++ b/src/net_processing.cpp
@@ -3616,25 +3616,22 @@
pto->vInventoryBlockToSend.clear();
// Add Dandelion transactions
- {
- LOCK(pto->cs_inventory);
- for (const uint256& hash : pto->vInventoryDandelionTxToSend) {
- pto->filterDandelionInventoryKnown.insert(hash);
- uint256 dandelionServiceDiscoveryHash;
- dandelionServiceDiscoveryHash.SetHex(
- "0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff");
- if (!pto->fSupportsDandelion && hash != dandelionServiceDiscoveryHash) {
- //LogPrintf("Pushing transaction MSG_TX %s to %s.",
- // hash.ToString(), pto->addr.ToString());
- vInv.push_back(CInv(MSG_TX, hash));
- } else {
- vInv.push_back(CInv(MSG_DANDELION_TX, hash));
- }
- if (vInv.size() == MAX_INV_SZ) {
- connman.PushMessage(pto, msgMaker.Make(NetMsgType::INV, vInv));
- vInv.clear();
- }
+ for (const uint256& hash : pto->vInventoryDandelionTxToSend) {
+ pto->filterDandelionInventoryKnown.insert(hash);
+ uint256 dandelionServiceDiscoveryHash;
+ dandelionServiceDiscoveryHash.SetHex(
+ "0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff");
+ if (!pto->fSupportsDandelion && hash != dandelionServiceDiscoveryHash) {
+ //LogPrintf("Pushing transaction MSG_TX %s to %s.",
+ // hash.ToString(), pto->addr.ToString());
+ vInv.push_back(CInv(MSG_TX, hash));
+ } else {
+ vInv.push_back(CInv(MSG_DANDELION_TX, hash));
}
+ if (vInv.size() == MAX_INV_SZ) {
+ connman.PushMessage(pto, msgMaker.Make(NetMsgType::INV, vInv));
+ vInv.clear();
+ }
}
pto->vInventoryDandelionTxToSend.clear();
…for consistency and clarity. Update comments for clarity in sparkname.cpp and remove unused SPARK_WALLET_LOCKED_MSG constant.
bugbot run

Note
Medium Risk
Touches consensus-adjacent validation paths (Spark spend verification, fee accounting, reorg handling) and P2P relay logic, which could impact block acceptance/relay behavior if incorrect. Changes are mostly defensive checks and data-structure swaps, but they sit on critical codepaths.
Overview
Hardens Spark/SparkName transaction validation and reorg handling. Adds explicit overflow guards for `fee + vout` in Spark spend construction/verification, fixes fee accumulation during disconnect, enforces `MoneyRange` after adding Spark fees, and adds stricter SparkName v2 checks (operation type bounds, disallow `opUnregister`, require nonzero validity).

Fixes Spark spend proof caching + async verification edge cases. On reorg disconnect, invalidates cached Spark spend proof results for spends in the disconnected block; tightens async proof-check bookkeeping to handle cache entry removal during unlock windows and falls back to synchronous verification if async scheduling fails.
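The overflow guards described in the overview follow a standard check-before-add shape. The `CAmount` typedef and `MAX_MONEY` value here mirror the conventions of Bitcoin-derived codebases but are illustrative, not the project's exact constants:

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

// Illustrative types/constants in the style of Bitcoin-derived codebases.
using CAmount = int64_t;
static const CAmount MAX_MONEY = 21000000LL * 100000000LL;

// Check-before-add: reject a fee + vout sum before it can wrap, instead of
// testing a result that has already overflowed (signed overflow is undefined
// behavior in C++), then apply a MoneyRange-style bound on the total.
bool add_checked(CAmount fee, CAmount vout, CAmount& out) {
    if (fee < 0 || vout < 0)
        return false;                                      // negative amounts invalid
    if (fee > std::numeric_limits<CAmount>::max() - vout)
        return false;                                      // sum would overflow
    out = fee + vout;
    return out <= MAX_MONEY;                               // MoneyRange-style cap
}
```

The ordering matters: the comparison against `max() - vout` is itself overflow-free because `vout` is already known to be non-negative.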
Improves crypto/network input validation and relay tracking. Adds additional infinity checks in Chaum verification, switches Grootle batch bind weight generation to `Scalar::randomize()`, validates extended key serialization length, and replaces the unbounded per-peer Dandelion inventory `std::set` with a bounded `CRollingBloomFilter` (with locking updates in net processing) to reduce memory/DoS risk.

Wallet/RPC reliability. Makes Spark spend-key generation throw `WalletLocked` instead of returning a default key, updates wallet creation/tests/RPC to handle this explicitly, and adds missing includes/cleaner error messages.

Written by Cursor Bugbot for commit 2f01b89. This will update automatically on new commits. Configure here.
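The bounded-inventory swap above can be sketched with stdlib containers. The real change uses `CRollingBloomFilter` (constant memory, occasional false positives); this hypothetical `RollingKnownSet` instead evicts the oldest entry at a fixed cap, capturing the same bounded-memory property that removes the per-peer DoS vector of an unbounded `std::set`:

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <string>
#include <unordered_set>

// Illustrative bounded known-inventory tracker (not CRollingBloomFilter):
// once `cap_` entries are tracked, inserting a new hash evicts the oldest,
// so per-peer memory stays constant no matter how many hashes a peer sends.
class RollingKnownSet {
    size_t cap_;
    std::deque<std::string> order_;       // insertion order, oldest at front
    std::unordered_set<std::string> seen_;
public:
    explicit RollingKnownSet(size_t cap) : cap_(cap) {}
    void insert(const std::string& h) {
        if (seen_.count(h))
            return;                        // already tracked
        if (order_.size() == cap_) {       // full: evict the oldest hash
            seen_.erase(order_.front());
            order_.pop_front();
        }
        order_.push_back(h);
        seen_.insert(h);
    }
    bool contains(const std::string& h) const { return seen_.count(h) != 0; }
};
```

A bloom filter makes the opposite trade: it can never forget an entry mid-window but may report false positives, which is acceptable for "have I already relayed this?" checks.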