
refactor: harden runtime safety across client and prover#2342

Open
sergeytimoshin wants to merge 7 commits into main from
sergey/runtime-safety-hardening

Conversation

@sergeytimoshin
Contributor

@sergeytimoshin sergeytimoshin commented Mar 14, 2026

Summary by CodeRabbit

  • New Features

    • Rich, height-parametric batch snapshot type for safer batch & proof workflows.
    • New error kinds for clearer diagnostics (serialization, integer conversion, process/health checks).
  • Bug Fixes

    • Replaced unsafe/unhandled parsing with safe, fallible handling to avoid crashes.
    • Stricter validation for inputs and configuration with clearer error messages.
  • Refactor

    • Unified JSON/proof handling and prover startup with health checks and improved error propagation.

@coderabbitai
Contributor

coderabbitai bot commented Mar 14, 2026

Warning

Rate limit exceeded

@sergeytimoshin has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 29 minutes and 46 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 19f30201-17bd-429a-a6f0-d1b20f858250

📥 Commits

Reviewing files that changed from the base of the PR and between 713ac36 and 8ed76e5.

⛔ Files ignored due to path filters (60)
  • Cargo.lock is excluded by !**/*.lock and included by none
  • forester-utils/src/account_zero_copy.rs is excluded by none and included by none
  • forester-utils/src/address_merkle_tree_config.rs is excluded by none and included by none
  • forester-utils/src/forester_epoch.rs is excluded by none and included by none
  • forester-utils/src/instructions/compress_and_close_mint.rs is excluded by none and included by none
  • forester-utils/src/rate_limiter.rs is excluded by none and included by none
  • forester-utils/src/registry.rs is excluded by none and included by none
  • program-tests/account-compression-test/tests/address_merkle_tree_tests.rs is excluded by none and included by none
  • program-tests/account-compression-test/tests/merkle_tree_tests.rs is excluded by none and included by none
  • program-tests/batched-merkle-tree-test/tests/merkle_tree.rs is excluded by none and included by none
  • program-tests/compressed-token-test/tests/freeze/functional.rs is excluded by none and included by none
  • program-tests/compressed-token-test/tests/v1.rs is excluded by none and included by none
  • program-tests/system-cpi-v2-test/tests/event.rs is excluded by none and included by none
  • program-tests/system-cpi-v2-test/tests/invoke_cpi_with_read_only.rs is excluded by none and included by none
  • program-tests/utils/src/account_zero_copy.rs is excluded by none and included by none
  • program-tests/utils/src/actions/legacy/instructions/transfer2.rs is excluded by none and included by none
  • program-tests/utils/src/address_tree_rollover.rs is excluded by none and included by none
  • program-tests/utils/src/assert_compressed_tx.rs is excluded by none and included by none
  • program-tests/utils/src/assert_merkle_tree.rs is excluded by none and included by none
  • program-tests/utils/src/assert_queue.rs is excluded by none and included by none
  • program-tests/utils/src/batched_address_tree.rs is excluded by none and included by none
  • program-tests/utils/src/e2e_test_env.rs is excluded by none and included by none
  • program-tests/utils/src/lib.rs is excluded by none and included by none
  • program-tests/utils/src/state_tree_rollover.rs is excluded by none and included by none
  • program-tests/utils/src/test_batch_forester.rs is excluded by none and included by none
  • program-tests/utils/src/test_forester.rs is excluded by none and included by none
  • sdk-tests/anchor-manual-test/tests/shared.rs is excluded by none and included by none
  • sdk-tests/anchor-semi-manual-test/tests/shared/mod.rs is excluded by none and included by none
  • sdk-tests/anchor-semi-manual-test/tests/stress_test.rs is excluded by none and included by none
  • sdk-tests/client-test/tests/light_client.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/amm_stress_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/amm_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/basic_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/d10_ata_idempotent_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/d10_token_accounts_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/d11_zero_copy_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/failing_tests.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/integration_tests.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/shared.rs is excluded by none and included by none
  • sdk-tests/pinocchio-light-program-test/tests/shared/mod.rs is excluded by none and included by none
  • sdk-tests/pinocchio-light-program-test/tests/stress_test.rs is excluded by none and included by none
  • sdk-tests/pinocchio-manual-test/tests/shared.rs is excluded by none and included by none
  • sdk-tests/sdk-anchor-test/programs/sdk-anchor-test/tests/read_only.rs is excluded by none and included by none
  • sdk-tests/sdk-anchor-test/programs/sdk-anchor-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/sdk-native-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/sdk-pinocchio-v1-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/sdk-pinocchio-v2-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/ctoken_pda.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/decompress_full_cpi.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/pda_ctoken.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/test_4_invocations.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/test_4_transfer2.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/test_deposit.rs is excluded by none and included by none
  • sdk-tests/sdk-v1-native-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/single-account-loader-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/single-ata-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/single-mint-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/single-pda-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/single-token-test/tests/test.rs is excluded by none and included by none
📒 Files selected for processing (54)
  • forester/src/config.rs
  • forester/src/forester_status.rs
  • forester/src/main.rs
  • forester/src/processor/v2/proof_worker.rs
  • forester/src/queue_helpers.rs
  • forester/tests/test_compressible_pda.rs
  • prover/client/src/constants.rs
  • prover/client/src/errors.rs
  • prover/client/src/helpers.rs
  • prover/client/src/proof.rs
  • prover/client/src/proof_client.rs
  • prover/client/src/proof_types/batch_address_append/json.rs
  • prover/client/src/proof_types/batch_append/json.rs
  • prover/client/src/proof_types/batch_append/proof_inputs.rs
  • prover/client/src/proof_types/batch_update/json.rs
  • prover/client/src/proof_types/batch_update/proof_inputs.rs
  • prover/client/src/proof_types/combined/v1/json.rs
  • prover/client/src/proof_types/combined/v2/json.rs
  • prover/client/src/proof_types/combined/v2/proof_inputs.rs
  • prover/client/src/proof_types/inclusion/v1/json.rs
  • prover/client/src/proof_types/inclusion/v1/proof_inputs.rs
  • prover/client/src/proof_types/inclusion/v2/json.rs
  • prover/client/src/proof_types/inclusion/v2/proof_inputs.rs
  • prover/client/src/proof_types/non_inclusion/v1/json.rs
  • prover/client/src/proof_types/non_inclusion/v1/proof_inputs.rs
  • prover/client/src/proof_types/non_inclusion/v2/json.rs
  • prover/client/src/proof_types/non_inclusion/v2/proof_inputs.rs
  • prover/client/src/prover.rs
  • prover/client/tests/batch_append.rs
  • prover/client/tests/batch_update.rs
  • prover/client/tests/combined.rs
  • prover/client/tests/inclusion.rs
  • prover/client/tests/init_merkle_tree.rs
  • prover/client/tests/non_inclusion.rs
  • sdk-libs/client/src/fee.rs
  • sdk-libs/client/src/indexer/error.rs
  • sdk-libs/client/src/indexer/options.rs
  • sdk-libs/client/src/indexer/photon_indexer.rs
  • sdk-libs/client/src/indexer/types/proof.rs
  • sdk-libs/client/src/indexer/types/queue.rs
  • sdk-libs/client/src/indexer/types/token.rs
  • sdk-libs/client/src/interface/initialize_config.rs
  • sdk-libs/client/src/interface/instructions.rs
  • sdk-libs/client/src/interface/load_accounts.rs
  • sdk-libs/client/src/interface/mod.rs
  • sdk-libs/client/src/interface/pack.rs
  • sdk-libs/client/src/interface/serialize.rs
  • sdk-libs/client/src/local_test_validator.rs
  • sdk-libs/client/src/rpc/client.rs
  • sdk-libs/client/src/utils.rs
  • sdk-libs/macros/src/light_pdas/seeds/extract.rs
  • sdk-libs/program-test/src/indexer/test_indexer.rs
  • sdk-libs/program-test/src/program_test/compressible_setup.rs
  • sdk-libs/program-test/src/program_test/light_program_test.rs
📝 Walkthrough

Replace panic-prone unwraps/unsafe blocks with Result-based error handling across Forester, Prover, and the SDKs; add new ProverClientError variants; introduce a const-generic AddressBatchSnapshot and batched proof-reconstruction APIs; and adapt many public/internal functions and tests to propagate or handle fallible operations.
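The overall pattern can be sketched in miniature. The error variant names below are illustrative stand-ins, not the actual ProverClientError definitions:

```rust
use std::fmt;

// Illustrative error type; the real ProverClientError has more variants.
#[derive(Debug, PartialEq)]
enum ClientError {
    Serialization(String),
    IntConversion(String),
}

impl fmt::Display for ClientError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ClientError::Serialization(m) => write!(f, "serialization error: {m}"),
            ClientError::IntConversion(m) => write!(f, "integer conversion error: {m}"),
        }
    }
}

// Before: `bytes.len() as u32` or `.try_into().unwrap()` could truncate or panic.
// After: a fallible conversion whose error is propagated with `?` at call sites.
fn checked_len(bytes: &[u8]) -> Result<u32, ClientError> {
    u32::try_from(bytes.len()).map_err(|e| ClientError::IntConversion(e.to_string()))
}

fn main() {
    assert_eq!(checked_len(&[1, 2, 3]), Ok(3));
}
```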

Changes

  • Forester core & runtime (forester/src/config.rs, forester/src/forester_status.rs, forester/src/main.rs, forester/src/queue_helpers.rs): Replace unwrap/unsafe parsing with fallible Result paths (tree_ids parsing, queue HashSet parsing); make rate-limiter construction return/propagate errors. Focus: config parsing and safe account parsing.
  • Forester v2 pipeline (forester/src/processor/v2/helpers.rs, forester/src/processor/v2/proof_worker.rs, forester/src/processor/v2/strategy/address.rs): Add a public generic AddressBatchSnapshot<const HEIGHT: usize> and change get_batch_snapshot to return Option; derive/validate leaves_hashchain on demand; make ProofInput::to_json fallible and thread error propagation through proof submission/polling and batch processing. Attention: API surface and error propagation in the proof worker and address staging.
  • Prover client core & errors (prover/client/src/errors.rs, prover/client/src/helpers.rs, prover/client/src/prover.rs): Introduce new ProverClientError variants and convert many helpers to return Result<..., ProverClientError>; change spawn_prover to return Result and add a staged startup/health-check flow. Focus: process start/wait and error semantics.
  • Proof parsing / client API (prover/client/src/proof.rs, prover/client/src/proof_client.rs): Make deserialize/compress/negation and related proof functions fallible (return Result); change poll_proof_completion to accept &str; propagate errors from proof assembly and JSON construction.
  • Proof JSON/types & circuit inputs (prover/client/src/proof_types/.../*.rs, prover/client/src/proof_types/batch_address_append/proof_inputs.rs): Many to_string/from_* functions now return Result<..., ProverClientError>; several public input accessors return Results; the batch_address_append signature changed to accept borrowed slices and fixed-size proof arrays (const-generic HEIGHT). Attention: callsites and test adjustments.
  • Prover & Forester tests (forester/tests/test_compressible_pda.rs, prover/client/tests/*): Test call sites updated to unwrap new Results (e.g., spawn_prover().await.unwrap(), to_string().unwrap()); batch tests adjusted to fixed-size proof arrays and reference-based inputs. Watch for panics in tests vs. propagated errors.
  • SDK indexer: queue & proof types (sdk-libs/client/src/indexer/types/queue.rs, sdk-libs/client/src/indexer/types/proof.rs, sdk-libs/client/src/indexer/photon_indexer.rs, sdk-libs/client/src/indexer/options.rs, sdk-libs/client/src/indexer/error.rs): Introduce const-generic proof APIs and batched reconstruct_proofs, add node-lookup optimizations, make pack_tree_infos fallible (Result), change the FilterSelector conversion to From<&...>, and preserve CustomError messages. Attention: proof heights, signature changes, and new Result returns.
  • SDK interface & serialization (sdk-libs/client/src/interface/serialize.rs, sdk-libs/client/src/interface/instructions.rs, sdk-libs/client/src/interface/initialize_config.rs, sdk-libs/client/src/interface/pack.rs, sdk-libs/client/src/interface/load_accounts.rs, sdk-libs/client/src/interface/mod.rs): Add a crate-private serialize_anchor_data helper; make instruction builders return std::io::Result<Instruction>; add PackError::Indexer; introduce PDA grouping, MAX_PDAS_PER_IX, and a MissingPdaCompressedData error; propagate pack/serialize errors.
  • SDK utils, validator & program-test (sdk-libs/client/src/utils.rs, sdk-libs/client/src/local_test_validator.rs, sdk-libs/client/src/fee.rs, sdk-libs/client/src/rpc/client.rs, sdk-libs/program-test/src/...): Convert various startup/spawn functions to return Results (io::Result or mapped errors), improve process/output checks, avoid unnecessary clones, and map errors into RPC/test-harness paths.
  • Misc small fixes (sdk-libs/macros/.../extract.rs, sdk-libs/client/src/indexer/types/token.rs, sdk-libs/client/src/indexer/photon_indexer.rs): Remove unnecessary clones, minor ownership and From adjustments, and small signature tweaks to align with fallible flows.
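The move to fixed-size, const-generic proof arrays mentioned above can be illustrated with a small sketch. The function name and the height value 26 are illustrative, not the PR's actual API:

```rust
// Illustrative: convert a runtime Vec of proof nodes into a fixed-size
// array whose length is checked against the tree HEIGHT at the type level.
fn to_fixed_proof<const HEIGHT: usize>(
    nodes: Vec<[u8; 32]>,
) -> Result<[[u8; 32]; HEIGHT], String> {
    let got = nodes.len();
    nodes
        .try_into()
        .map_err(|_| format!("expected proof of length {HEIGHT}, got {got}"))
}

fn main() {
    // HEIGHT = 26 chosen here purely for illustration.
    assert!(to_fixed_proof::<26>(vec![[0u8; 32]; 26]).is_ok());
    assert!(to_fixed_proof::<26>(vec![[0u8; 32]; 3]).is_err());
}
```

Once the length is encoded in the type, downstream circuit-input code no longer needs per-call length checks or panicking indexing.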

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

Suggested labels

ai-review

Suggested reviewers

  • SwenSchaeferjohann
  • ananas-block

Poem

✨ Unwraps retreated, Results now stand guard,
Hashchains rebuilt where the proofs ran hard,
Spawn checks sing health, serialization breathes light,
Tests now unwrap boldly or fail fast at night,
A steadier pipeline wakes — robust and bright.

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check — ✅ Passed. Check skipped: CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed. The title 'refactor: harden runtime safety across client and prover' accurately captures the main objective of this comprehensive changeset, which systematically replaces unsafe patterns and error-prone unwrap calls with robust Result-based error handling throughout the codebase.
  • Docstring Coverage — ✅ Passed. Docstring coverage is 70.80%, which is sufficient; the required threshold is 70.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch sergey/runtime-safety-hardening
📝 Coding Plan
  • Generate coding plan for human review comments

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@sergeytimoshin force-pushed the sergey/runtime-safety-hardening branch 4 times, most recently from af9dae3 to e3f17dd (March 14, 2026 19:12)
@sergeytimoshin force-pushed the sergey/runtime-safety-hardening branch from e3f17dd to 6e6d9c0 (March 14, 2026 19:21)
@sergeytimoshin force-pushed the sergey/runtime-safety-hardening branch 2 times, most recently from 65676df to 131375b (March 14, 2026 19:52)
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 14

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
sdk-libs/client/src/interface/initialize_config.rs (1)

120-124: ⚠️ Potential issue | 🟠 Major

Import and reuse the module constant instead of hardcoding the discriminator.

Line 122 duplicates INITIALIZE_COMPRESSION_CONFIG_DISCRIMINATOR from instructions.rs as a hardcoded local constant. Because the discriminator value is already defined at the module level, hardcoding a second copy creates a maintenance burden and risks silent drift if one copy is updated without the other. Replace the local const with an import from the instructions module.

use crate::interface::instructions::INITIALIZE_COMPRESSION_CONFIG_DISCRIMINATOR;

// Then use it:
data.extend_from_slice(&INITIALIZE_COMPRESSION_CONFIG_DISCRIMINATOR);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/client/src/interface/initialize_config.rs` around lines 120 - 124,
Replace the locally hardcoded const DISCRIMINATOR used for the
"initialize_compression_config" anchor with the shared module constant
INITIALIZE_COMPRESSION_CONFIG_DISCRIMINATOR from instructions.rs: remove the
local DISCRIMINATOR definition and import
INITIALIZE_COMPRESSION_CONFIG_DISCRIMINATOR into the file, then use that
constant where the code currently appends the discriminator (the code around
serialize_anchor_data and instruction_data) so the anchor uses the single
canonical value.
sdk-libs/client/src/utils.rs (1)

10-16: ⚠️ Potential issue | 🟡 Minor

Use strict UTF-8 parsing and trim only line endings when extracting filesystem paths from command output.

String::from_utf8_lossy silently replaces invalid UTF-8 bytes with the Unicode replacement character (U+FFFD), and str::trim() removes all leading/trailing whitespace—including spaces and tabs. For a binary locator, these behaviors can corrupt a valid path. Decode strictly with from_utf8(), then strip only the trailing line ending (\r\n or \n), leaving any legitimate path characters intact.

🛠️ Correct path extraction
-        let light_path = String::from_utf8_lossy(&output.stdout).trim().to_string();
+        let light_path = String::from_utf8(output.stdout)
+            .ok()?
+            .trim_end_matches(&['\r', '\n'][..])
+            .to_owned();

Also applies to: lines 30–38

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/client/src/utils.rs` around lines 10 - 16, The current use of
String::from_utf8_lossy and str::trim when building light_path can corrupt valid
filesystem paths; replace the lossy decoding with strict UTF-8 parsing using
std::str::from_utf8(&output.stdout).ok() (returning None on parse error), then
remove only trailing line endings (handle "\r\n" and "\n") via trim_end_matches
to produce the final String; apply the same fix to the other occurrence (the
similar code block around lines 30–38) so both code paths strictly validate
UTF-8 and only strip line endings when producing light_path.
forester/src/forester_status.rs (1)

673-687: ⚠️ Potential issue | 🟠 Major

Don't report unreadable V1 queues as empty.

Lines 674-684 and 729-739 currently turn parse_hash_set_from_bytes failures into (Some(0), Some(0)). That makes a corrupted or layout-mismatched queue look healthy, which underreports pending work in the status output and hides the real operational failure. Prefer logging and returning None here, or bubble the parse error so "empty" and "unparseable" stay distinct.

Also applies to: 728-742

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/forester_status.rs` around lines 673 - 687, The code currently
maps parse_hash_set_from_bytes::<QueueAccount> failures into (Some(0), Some(0)),
hiding parse/layout errors; change the logic in the block using
parse_hash_set_from_bytes::<QueueAccount> so that on parse failure you log the
error (including context) and return (None, None) (or propagate the parse error)
instead of unwrap_or((0,0)); adjust the closure that computes (len, cap) (which
references .iter(), .get_capacity(), and .sequence_number) to only produce
(Some(len), Some(cap)) when parse succeeds and otherwise produce (None, None);
apply the same fix to the second occurrence of this pattern (the similar block
around the other parse call).
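The "empty vs. unparseable" distinction the comment asks for can be sketched as follows. The parsing function here is a simplified stand-in for parse_hash_set_from_bytes, not the real layout:

```rust
// Illustrative stand-in: the real parse_hash_set_from_bytes is more involved.
fn parse_queue_len(bytes: &[u8]) -> Result<usize, String> {
    if bytes.len() < 8 {
        return Err("account data too short for queue layout".into());
    }
    Ok(usize::from_le_bytes(bytes[..8].try_into().unwrap()))
}

// Before: `parse_queue_len(bytes).unwrap_or(0)` reported corrupt data as empty.
// After: log the failure and return None so status output marks the queue
// as unreadable rather than healthy.
fn queue_len_for_status(bytes: &[u8]) -> Option<usize> {
    match parse_queue_len(bytes) {
        Ok(len) => Some(len),
        Err(e) => {
            eprintln!("failed to parse queue account: {e}");
            None
        }
    }
}

fn main() {
    assert_eq!(queue_len_for_status(&5usize.to_le_bytes()), Some(5));
    assert_eq!(queue_len_for_status(&[0u8; 3]), None);
}
```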
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@forester/src/processor/v2/helpers.rs`:
- Around line 489-534: The get_batch_snapshot function must only return fully
available, internally consistent fixed-size batches: do not shrink the requested
range to available; instead, if end > self.wait_for_batch(end) return Ok(None).
Also verify that all companion vectors (data.addresses, data.low_element_values,
data.low_element_next_values, data.low_element_indices,
data.low_element_next_indices, and data.reconstruct_proofs range) have at least
end elements and that their slices [start..end] produce the same length as
addresses before constructing AddressBatchSnapshot; if any length mismatch or
addresses.is_empty() then return Ok(None) (not Err or a truncated snapshot).
Keep the existing fallback for missing leaves_hash_chains but only use it when
the full batch is present.
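The "full batch or Ok(None)" rule above can be sketched with simplified types; the struct shapes and field set here are illustrative, not the actual helpers.rs definitions:

```rust
// Illustrative: only return a snapshot when the requested range is fully
// covered by every companion vector; never shrink or truncate the batch.
struct Data {
    addresses: Vec<[u8; 32]>,
    low_element_indices: Vec<u64>,
}

struct Snapshot {
    addresses: Vec<[u8; 32]>,
    low_element_indices: Vec<u64>,
}

fn get_batch_snapshot(data: &Data, start: usize, end: usize) -> Result<Option<Snapshot>, String> {
    // Incomplete or inconsistent data is "not ready" (None), not an error.
    if start >= end || end > data.addresses.len() || end > data.low_element_indices.len() {
        return Ok(None);
    }
    Ok(Some(Snapshot {
        addresses: data.addresses[start..end].to_vec(),
        low_element_indices: data.low_element_indices[start..end].to_vec(),
    }))
}

fn main() {
    let data = Data { addresses: vec![[0; 32]; 10], low_element_indices: vec![0; 10] };
    assert!(get_batch_snapshot(&data, 0, 10).unwrap().is_some());
    assert!(get_batch_snapshot(&data, 0, 12).unwrap().is_none());
}
```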

In `@prover/client/src/proof_types/batch_address_append/proof_inputs.rs`:
- Line 369: The new error-return paths can leave changelog, indexed_changelog,
and sparse_merkle_tree half-updated (e.g., after calling
patcher.push_changelog_entry::<HEIGHT>, inserting into indexed_changelog, and
sparse_merkle_tree.append()), so make these mutations transactional: either
stage changes in temporary structures and only commit them back to changelog,
indexed_changelog, and sparse_merkle_tree after all fallible operations succeed,
or implement explicit rollback logic that reverts the earlier
push_changelog_entry::<HEIGHT>, the indexed_changelog insert, and the
sparse_merkle_tree.append() on every subsequent ?-return path (the branches
noted around the indexed_changelog insertion and the block that can fail at
lines ~398-408 and ~431-435). Ensure any temporary clones or rollback routines
preserve original state and are invoked before returning Err.
- Around line 202-209: Before taking slices or indexing, validate that
new_element_values.len() >= zkp_batch_size and that all parallel input
slices/vecs have length >= zkp_batch_size; if any check fails return the
appropriate ProverClientError instead of panicking. Specifically, add a guard
before the line using let new_element_values =
&new_element_values[..zkp_batch_size]; that verifies new_element_values.len() >=
zkp_batch_size and also verify lengths of the parallel collections used later
(the collections referenced by low_element_circuit_merkle_proofs,
new_element_circuit_merkle_proofs, patched_low_element_next_values/indices,
patched_low_element_values/indices or the original input slices they derive
from) and return an Err(ProverClientError::...) on mismatch. Apply the same
validation for the other slicing/indexing blocks around the 244-273 region.
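A length guard of the kind requested might look like this sketch; the parameter names mirror the comment but the function itself is hypothetical:

```rust
// Illustrative guard: validate every parallel input covers zkp_batch_size
// before any slicing, returning an error instead of panicking on an index.
fn check_batch_lengths(
    zkp_batch_size: usize,
    new_element_values: &[[u8; 32]],
    low_element_values: &[[u8; 32]],
    low_element_indices: &[u64],
) -> Result<(), String> {
    for (name, len) in [
        ("new_element_values", new_element_values.len()),
        ("low_element_values", low_element_values.len()),
        ("low_element_indices", low_element_indices.len()),
    ] {
        if len < zkp_batch_size {
            return Err(format!("{name} has {len} elements, need {zkp_batch_size}"));
        }
    }
    Ok(())
}

fn main() {
    let vals = vec![[0u8; 32]; 4];
    let idx = vec![0u64; 4];
    assert!(check_batch_lengths(4, &vals, &vals, &idx).is_ok());
    assert!(check_batch_lengths(8, &vals, &vals, &idx).is_err());
}
```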

In `@prover/client/src/proof_types/batch_update/proof_inputs.rs`:
- Around line 115-117: The public_inputs method currently indexes self.0[0],
which panics on empty BatchUpdateInputs and only returns the first circuit's
inputs; change public_inputs to first guard for empty inputs (e.g., return a
ProverClientError variant when self.0.is_empty()) and then iterate over all
entries in self.0, calling public_inputs_arr()? on each element and collect
their [u8; 32] results into the returned Vec<[u8; 32]>, preserving failure
semantics via the existing Result type; update references to public_inputs and
use ProverClientError for the empty-case error.
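The "guard for empty, then iterate all entries" shape can be sketched with toy types (the Circuit and BatchInputs structs here are illustrative, not the real BatchUpdateInputs):

```rust
// Illustrative: collect public inputs from every circuit, failing on an
// empty input set instead of indexing self.0[0] and panicking.
struct Circuit(u8);

impl Circuit {
    fn public_inputs_arr(&self) -> Result<[u8; 32], String> {
        Ok([self.0; 32])
    }
}

struct BatchInputs(Vec<Circuit>);

impl BatchInputs {
    fn public_inputs(&self) -> Result<Vec<[u8; 32]>, String> {
        if self.0.is_empty() {
            return Err("no circuit inputs provided".into());
        }
        // `collect` on an iterator of Results short-circuits on the first Err.
        self.0.iter().map(|c| c.public_inputs_arr()).collect()
    }
}

fn main() {
    let inputs = BatchInputs(vec![Circuit(1), Circuit(2)]);
    assert_eq!(inputs.public_inputs().unwrap().len(), 2);
    assert!(BatchInputs(vec![]).public_inputs().is_err());
}
```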

In `@prover/client/src/proof.rs`:
- Around line 72-89: Replace the loose trim_start_matches("0x") logic in
deserialize_hex_string_to_be_bytes with a strict single-prefix handling and
switch to unsigned parsing: check hex_str.strip_prefix("0x") and use the
stripped slice if present (or the original string if not) so inputs like
"0x0x01" do not silently become valid; parse into num_bigint::BigUint (use
BigUint::from_str_radix or parse_bytes) instead of BigInt, use
BigUint::to_bytes_be() for the bytes, and keep the existing
zero-pad/length-check logic but map parse errors to
ProverClientError::InvalidHexString with the original hex_str included for
debugging (referencing the function deserialize_hex_string_to_be_bytes and
variables big_int/big_int_bytes which should be renamed to
big_uint/big_uint_bytes).
- Around line 167-170: The deserialization in negate_g1 currently uses
G1::deserialize_with_mode(..., Validate::No), allowing untrusted prover bytes
through; change the call to use Validate::Yes so the G1 point is checked
(on-curve/subgroup) before any negation. Update the G1::deserialize_with_mode
invocation in negate_g1 (after convert_endianness) to pass Validate::Yes and
keep the existing map_err to ProverClientError::InvalidProofData to surface
validation failures.
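The strict-prefix handling asked for above can be sketched without a bigint dependency; this hypothetical helper strips at most one "0x", rejects non-hex characters (so "0x0x01" fails), and left-pads to 32 big-endian bytes:

```rust
// Illustrative strict variant: strip_prefix removes at most one "0x", and
// any remaining non-hex character (including a second 'x') is an error.
fn hex_to_be_bytes(hex_str: &str) -> Result<[u8; 32], String> {
    let digits = hex_str.strip_prefix("0x").unwrap_or(hex_str);
    if digits.is_empty() || digits.len() > 64 {
        return Err(format!("invalid hex string: {hex_str}"));
    }
    let mut out = [0u8; 32];
    // Walk digits from the end so odd-length strings are handled correctly.
    for (i, c) in digits.bytes().rev().enumerate() {
        let v = (c as char)
            .to_digit(16)
            .ok_or_else(|| format!("invalid hex string: {hex_str}"))? as u8;
        out[31 - i / 2] |= v << ((i % 2) * 4);
    }
    Ok(out)
}

fn main() {
    let mut expected = [0u8; 32];
    expected[31] = 0x01;
    assert_eq!(hex_to_be_bytes("0x01").unwrap(), expected);
    assert!(hex_to_be_bytes("0x0x01").is_err());
}
```

Using an unsigned decode also removes the sign ambiguity that motivated the BigInt-to-BigUint suggestion.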

In `@prover/client/src/prover.rs`:
- Around line 35-36: The startup and health-check logic currently treats any
Process::wait_with_output() result and any reqwest send().await as success; fix
by explicitly checking the child process exit status (use
Output::status.success() or Output::exit_ok()) after wait_with_output() in the
startup sequence (the function invoking wait_with_output()), and update the
health_check() implementation to inspect the HTTP response status (call
response.error_for_status() or check response.status().is_success() after
send().await) before returning true; ensure both the startup caller and
health_check() return Err on non-successful exit codes or HTTP 4xx/5xx responses
so failing startups/health checks are detected.
- Line 19: spawn_prover currently uses the blocking
std::process::Command::wait_with_output() and health_check uses
std::thread::sleep(), which block the Tokio runtime; replace the child spawn and
wait logic in spawn_prover by using tokio::process::Command and await its
.output().await (or .status().await) instead of wait_with_output, and change any
std::thread::sleep(duration) in health_check to
tokio::time::sleep(duration).await so all waits are async; update imports (use
tokio::process::Command and tokio::time::sleep) and ensure error handling
remains the same for the functions spawn_prover and health_check.
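The bounded health-check loop both comments describe can be sketched as below. The probe closure stands in for the real HTTP check (which should also inspect the exit status via Output::status.success() and reject 4xx/5xx responses); the function name is hypothetical:

```rust
use std::time::Duration;

// Illustrative startup check: retry a health probe a bounded number of
// times, returning Err if the prover never becomes healthy.
fn wait_healthy<F: FnMut() -> bool>(
    mut probe: F,
    attempts: u32,
    delay: Duration,
) -> Result<(), String> {
    for _ in 0..attempts {
        if probe() {
            return Ok(());
        }
        // In async code this would be tokio::time::sleep(delay).await.
        std::thread::sleep(delay);
    }
    Err(format!("prover not healthy after {attempts} attempts"))
}

fn main() {
    // Succeeds on the third probe.
    let mut n = 0;
    let ok = wait_healthy(|| { n += 1; n >= 3 }, 5, Duration::from_millis(1));
    assert!(ok.is_ok());
    assert!(wait_healthy(|| false, 2, Duration::from_millis(1)).is_err());
}
```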

In `@prover/client/tests/batch_address_append.rs`:
- Around line 97-102: The test still treats
proof_types::batch_address_append::to_json as infallible; update the test to
handle its Result return (for the calls around the existing to_json usages at
the two sites noted) by either changing the test signature to return Result<(),
Box<dyn std::error::Error>> and using the ? operator on each
proof_types::batch_address_append::to_json(...) call, or explicitly
unwrap/map_err them (e.g., .unwrap() or .expect("...")); locate the two to_json
call sites in batch_address_append.rs and replace the current direct String
usage with error-aware handling so the code compiles with the fallible API.

In `@sdk-libs/client/src/indexer/types/proof.rs`:
- Around line 214-223: The loop currently keeps the first computed
output_tree_index and silently ignores any subsequent differing indices; change
this to detect mismatches and return an error instead of keeping the first
value. In the code paths using account.tree_info.next_tree_info and
account.tree_info.pack_output_tree_index(packed_accounts) (and the local
variable output_tree_index), compute each index as before but if
output_tree_index is Some(prev) and the new index != prev then return a
descriptive error (e.g., Err or a custom ValidationError) indicating mixed
output-tree indices; otherwise set output_tree_index = Some(index) when it was
None. Apply this guard where PackedStateTreeInfos expects a single
output_tree_index so inconsistent state proofs are rejected.
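The mismatch guard can be sketched over bare indices; this hypothetical helper errors on mixed values instead of silently keeping the first:

```rust
// Illustrative: require all accounts to resolve to a single output tree
// index; a mismatch is a hard error rather than a silently dropped value.
fn single_output_tree_index(indices: &[u8]) -> Result<Option<u8>, String> {
    let mut out: Option<u8> = None;
    for &idx in indices {
        match out {
            None => out = Some(idx),
            Some(prev) if prev != idx => {
                return Err(format!("mixed output tree indices: {prev} vs {idx}"));
            }
            Some(_) => {}
        }
    }
    Ok(out)
}

fn main() {
    assert_eq!(single_output_tree_index(&[2, 2, 2]), Ok(Some(2)));
    assert_eq!(single_output_tree_index(&[]), Ok(None));
    assert!(single_output_tree_index(&[2, 3]).is_err());
}
```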

In `@sdk-libs/client/src/interface/load_accounts.rs`:
- Around line 209-214: The function group_pda_specs currently uses
assert!(max_per_group > 0, ...) which introduces a panic; replace this with
debug_assert!(max_per_group > 0, ...) to keep the developer invariant check
during testing while avoiding release-time panics, and keep the signature and
return type unchanged (call sites that pass MAX_PDAS_PER_IX will be unaffected);
if you intend this to be a public/runtime-validated API instead, alternatively
change group_pda_specs to return Result<Vec<Vec<&PdaSpec<V>>>, SomeError> and
explicitly return an Err when max_per_group == 0, but prefer debug_assert! for
internal invariants.

In `@sdk-libs/client/src/local_test_validator.rs`:
- Around line 69-70: The local variable named `path` is misleading because it
becomes a full command string; rename both occurrences (the initial `let path =
"cli/test_bin/run test-validator";` and the subsequent `let mut path =
format!("{}/{}", project_root.trim(), path);`) to `command` (e.g., `let command
= ...` and `let mut command = ...`) and update all subsequent uses like
`path.push_str(...)` to `command.push_str(...)` so the variable name reflects it
holds a command/command line rather than a filesystem path.
- Around line 128-136: The explicit std::mem::drop(child) after spawning the
process (Command::new(...).spawn() -> child) is unnecessary; remove the drop and
instead add a brief comment above the spawn or before awaiting
tokio::time::sleep(tokio::time::Duration::from_secs(config.wait_time)).await
explaining the intent to detach/ignore the child process (e.g., "intentionally
detaching spawned child; not awaiting its exit"), so the purpose is clear while
relying on scope-based drop for cleanup.
- Around line 113-126: The blocking std::process::Command.wait() call in
local_test_validator.rs stalls the tokio runtime; replace std::process::Command
and std::process::Stdio with tokio::process::Command and tokio::process::Stdio,
spawn the child the same way you do now, and await the child's termination with
child.wait().await (checking the returned ExitStatus.success()). Update imports
accordingly and remove the blocking child.wait() call so the async function does
not perform blocking IO.

---

Outside diff comments:
In `@forester/src/forester_status.rs`:
- Around line 673-687: The code currently maps
parse_hash_set_from_bytes::<QueueAccount> failures into (Some(0), Some(0)),
hiding parse/layout errors; change the logic in the block using
parse_hash_set_from_bytes::<QueueAccount> so that on parse failure you log the
error (including context) and return (None, None) (or propagate the parse error)
instead of unwrap_or((0,0)); adjust the closure that computes (len, cap) (which
references .iter(), .get_capacity(), and .sequence_number) to only produce
(Some(len), Some(cap)) when parse succeeds and otherwise produce (None, None);
apply the same fix to the second occurrence of this pattern (the similar block
around the other parse call).

In `@sdk-libs/client/src/interface/initialize_config.rs`:
- Around line 120-124: Replace the locally hardcoded const DISCRIMINATOR used
for the "initialize_compression_config" anchor with the shared module constant
INITIALIZE_COMPRESSION_CONFIG_DISCRIMINATOR from instructions.rs: remove the
local DISCRIMINATOR definition and import
INITIALIZE_COMPRESSION_CONFIG_DISCRIMINATOR into the file, then use that
constant where the code currently appends the discriminator (the code around
serialize_anchor_data and instruction_data) so the anchor uses the single
canonical value.

In `@sdk-libs/client/src/utils.rs`:
- Around line 10-16: The current use of String::from_utf8_lossy and str::trim
when building light_path can corrupt valid filesystem paths; replace the lossy
decoding with strict UTF-8 parsing using
std::str::from_utf8(&output.stdout).ok() (returning None on parse error), then
remove only trailing line endings (handle "\r\n" and "\n") via trim_end_matches
to produce the final String; apply the same fix to the other occurrence (the
similar code block around lines 30–38) so both code paths strictly validate
UTF-8 and only strip line endings when producing light_path.
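The strict-decoding rule above can be sketched as follows; `decode_stdout_path` is a hypothetical helper, not the actual `utils.rs` function:

```rust
// Hypothetical helper: strictly validate UTF-8 (returning None on invalid
// bytes instead of lossy-replacing them) and strip only a trailing line
// ending ("\r\n" or "\n"), leaving the rest of the path untouched.
fn decode_stdout_path(stdout: &[u8]) -> Option<String> {
    let s = std::str::from_utf8(stdout).ok()?;
    let s = s
        .strip_suffix("\r\n")
        .or_else(|| s.strip_suffix('\n'))
        .unwrap_or(s);
    Some(s.to_string())
}
```

Unlike `str::trim`, this preserves interior and leading/trailing spaces that are legal in filesystem paths.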
ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 0f7749e7-8edf-4ebf-9da0-2da516629b7a

📥 Commits

Reviewing files that changed from the base of the PR and between 2644529 and 65676df.

⛔ Files ignored due to path filters (63)
  • forester-utils/src/account_zero_copy.rs is excluded by none and included by none
  • forester-utils/src/address_merkle_tree_config.rs is excluded by none and included by none
  • forester-utils/src/address_staging_tree.rs is excluded by none and included by none
  • forester-utils/src/forester_epoch.rs is excluded by none and included by none
  • forester-utils/src/instructions/compress_and_close_mint.rs is excluded by none and included by none
  • forester-utils/src/rate_limiter.rs is excluded by none and included by none
  • forester-utils/src/registry.rs is excluded by none and included by none
  • program-tests/account-compression-test/tests/address_merkle_tree_tests.rs is excluded by none and included by none
  • program-tests/account-compression-test/tests/merkle_tree_tests.rs is excluded by none and included by none
  • program-tests/batched-merkle-tree-test/tests/merkle_tree.rs is excluded by none and included by none
  • program-tests/compressed-token-test/tests/freeze/functional.rs is excluded by none and included by none
  • program-tests/compressed-token-test/tests/v1.rs is excluded by none and included by none
  • program-tests/system-cpi-v2-test/tests/event.rs is excluded by none and included by none
  • program-tests/system-cpi-v2-test/tests/invoke_cpi_with_read_only.rs is excluded by none and included by none
  • program-tests/utils/src/account_zero_copy.rs is excluded by none and included by none
  • program-tests/utils/src/actions/legacy/instructions/transfer2.rs is excluded by none and included by none
  • program-tests/utils/src/address_tree_rollover.rs is excluded by none and included by none
  • program-tests/utils/src/assert_compressed_tx.rs is excluded by none and included by none
  • program-tests/utils/src/assert_merkle_tree.rs is excluded by none and included by none
  • program-tests/utils/src/assert_queue.rs is excluded by none and included by none
  • program-tests/utils/src/batched_address_tree.rs is excluded by none and included by none
  • program-tests/utils/src/e2e_test_env.rs is excluded by none and included by none
  • program-tests/utils/src/lib.rs is excluded by none and included by none
  • program-tests/utils/src/mock_batched_forester.rs is excluded by none and included by none
  • program-tests/utils/src/state_tree_rollover.rs is excluded by none and included by none
  • program-tests/utils/src/test_batch_forester.rs is excluded by none and included by none
  • program-tests/utils/src/test_forester.rs is excluded by none and included by none
  • sdk-tests/anchor-manual-test/tests/shared.rs is excluded by none and included by none
  • sdk-tests/anchor-semi-manual-test/tests/shared/mod.rs is excluded by none and included by none
  • sdk-tests/anchor-semi-manual-test/tests/stress_test.rs is excluded by none and included by none
  • sdk-tests/client-test/tests/light_client.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/amm_stress_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/amm_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/basic_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/d10_ata_idempotent_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/d10_token_accounts_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/d11_zero_copy_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/failing_tests.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/integration_tests.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/shared.rs is excluded by none and included by none
  • sdk-tests/pinocchio-light-program-test/tests/shared/mod.rs is excluded by none and included by none
  • sdk-tests/pinocchio-light-program-test/tests/stress_test.rs is excluded by none and included by none
  • sdk-tests/pinocchio-manual-test/tests/shared.rs is excluded by none and included by none
  • sdk-tests/sdk-anchor-test/programs/sdk-anchor-test/tests/read_only.rs is excluded by none and included by none
  • sdk-tests/sdk-anchor-test/programs/sdk-anchor-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/sdk-native-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/sdk-pinocchio-v1-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/sdk-pinocchio-v2-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/ctoken_pda.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/decompress_full_cpi.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/pda_ctoken.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/test_4_invocations.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/test_4_transfer2.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/test_deposit.rs is excluded by none and included by none
  • sdk-tests/sdk-v1-native-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/single-account-loader-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/single-ata-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/single-mint-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/single-pda-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/single-token-test/tests/test.rs is excluded by none and included by none
  • sparse-merkle-tree/src/indexed_changelog.rs is excluded by none and included by none
  • sparse-merkle-tree/tests/indexed_changelog.rs is excluded by none and included by none
📒 Files selected for processing (56)
  • forester/src/config.rs
  • forester/src/forester_status.rs
  • forester/src/main.rs
  • forester/src/processor/v2/helpers.rs
  • forester/src/processor/v2/proof_worker.rs
  • forester/src/processor/v2/strategy/address.rs
  • forester/src/queue_helpers.rs
  • forester/tests/test_compressible_pda.rs
  • prover/client/src/errors.rs
  • prover/client/src/helpers.rs
  • prover/client/src/proof.rs
  • prover/client/src/proof_client.rs
  • prover/client/src/proof_types/batch_address_append/json.rs
  • prover/client/src/proof_types/batch_address_append/proof_inputs.rs
  • prover/client/src/proof_types/batch_append/json.rs
  • prover/client/src/proof_types/batch_append/proof_inputs.rs
  • prover/client/src/proof_types/batch_update/json.rs
  • prover/client/src/proof_types/batch_update/proof_inputs.rs
  • prover/client/src/proof_types/combined/v1/json.rs
  • prover/client/src/proof_types/combined/v2/json.rs
  • prover/client/src/proof_types/combined/v2/proof_inputs.rs
  • prover/client/src/proof_types/inclusion/v1/json.rs
  • prover/client/src/proof_types/inclusion/v1/proof_inputs.rs
  • prover/client/src/proof_types/inclusion/v2/json.rs
  • prover/client/src/proof_types/inclusion/v2/proof_inputs.rs
  • prover/client/src/proof_types/non_inclusion/v1/json.rs
  • prover/client/src/proof_types/non_inclusion/v1/proof_inputs.rs
  • prover/client/src/proof_types/non_inclusion/v2/json.rs
  • prover/client/src/proof_types/non_inclusion/v2/proof_inputs.rs
  • prover/client/src/prover.rs
  • prover/client/tests/batch_address_append.rs
  • prover/client/tests/batch_append.rs
  • prover/client/tests/batch_update.rs
  • prover/client/tests/combined.rs
  • prover/client/tests/inclusion.rs
  • prover/client/tests/init_merkle_tree.rs
  • prover/client/tests/non_inclusion.rs
  • sdk-libs/client/src/fee.rs
  • sdk-libs/client/src/indexer/error.rs
  • sdk-libs/client/src/indexer/options.rs
  • sdk-libs/client/src/indexer/photon_indexer.rs
  • sdk-libs/client/src/indexer/types/proof.rs
  • sdk-libs/client/src/indexer/types/queue.rs
  • sdk-libs/client/src/indexer/types/token.rs
  • sdk-libs/client/src/interface/initialize_config.rs
  • sdk-libs/client/src/interface/instructions.rs
  • sdk-libs/client/src/interface/load_accounts.rs
  • sdk-libs/client/src/interface/mod.rs
  • sdk-libs/client/src/interface/pack.rs
  • sdk-libs/client/src/interface/serialize.rs
  • sdk-libs/client/src/local_test_validator.rs
  • sdk-libs/client/src/rpc/client.rs
  • sdk-libs/client/src/utils.rs
  • sdk-libs/macros/src/light_pdas/seeds/extract.rs
  • sdk-libs/program-test/src/program_test/compressible_setup.rs
  • sdk-libs/program-test/src/program_test/light_program_test.rs

Comment on lines +489 to +534
pub fn get_batch_snapshot&lt;const HEIGHT: usize&gt;(
    &self,
    start: usize,
    end: usize,
    hashchain_idx: usize,
) -> crate::Result&lt;Option&lt;AddressBatchSnapshot&lt;HEIGHT&gt;&gt;&gt; {
    let available = self.wait_for_batch(end);
    if start >= available {
-        return None;
+        return Ok(None);
    }
    let actual_end = end.min(available);
    let data = lock_recover(&self.data, "streaming_address_queue.data");
-    Some(BatchDataSlice {
-        addresses: data.addresses[start..actual_end].to_vec(),
+
+    let addresses = data.addresses[start..actual_end].to_vec();
+    if addresses.is_empty() {
+        return Err(anyhow!("Empty batch at start={}", start));
+    }
+
+    let leaves_hashchain = match data.leaves_hash_chains.get(hashchain_idx).copied() {
+        Some(hashchain) => hashchain,
+        None => {
+            tracing::debug!(
+                "Missing leaves_hash_chain for batch {} (available: {}), deriving from addresses",
+                hashchain_idx,
+                data.leaves_hash_chains.len()
+            );
+            create_hash_chain_from_slice(&addresses).map_err(|error| {
+                anyhow!(
+                    "Failed to derive leaves_hash_chain for batch {} from {} addresses: {}",
+                    hashchain_idx,
+                    addresses.len(),
+                    error
+                )
+            })?
+        }
+    };
+
+    Ok(Some(AddressBatchSnapshot {
        low_element_values: data.low_element_values[start..actual_end].to_vec(),
        low_element_next_values: data.low_element_next_values[start..actual_end].to_vec(),
        low_element_indices: data.low_element_indices[start..actual_end].to_vec(),
        low_element_next_indices: data.low_element_next_indices[start..actual_end].to_vec(),
-    })
+        low_element_proofs: data.reconstruct_proofs::&lt;HEIGHT&gt;(start..actual_end)?,
+        addresses,
+        leaves_hashchain,
+    }))
⚠️ Potential issue | 🟠 Major

Only return complete, internally consistent address batches.

Lines 495-534 can shrink the requested range to available and then slice the companion vectors unchecked. If background fetching ends early or the indexer returns mismatched vector lengths, this either panics or feeds process_batch a truncated batch with a derived hashchain. Short or inconsistent batches should stay "not ready" until the full fixed-size batch is present.

Suggested guard
-        let actual_end = end.min(available);
+        if available < end {
+            return Ok(None);
+        }
+        let actual_end = end;
         let data = lock_recover(&self.data, "streaming_address_queue.data");

-        let addresses = data.addresses[start..actual_end].to_vec();
+        let addresses = data
+            .addresses
+            .get(start..actual_end)
+            .ok_or_else(|| anyhow!("addresses shorter than requested batch"))?
+            .to_vec();
+        // Apply the same checked slicing to every parallel vector before
+        // constructing `AddressBatchSnapshot`.
Based on learnings: ZKP batch sizes for address trees must be 10 or 250 in `src/initialize_address_tree.rs`. Only these circuit sizes are supported for address trees.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/processor/v2/helpers.rs` around lines 489 - 534, The
get_batch_snapshot function must only return fully available, internally
consistent fixed-size batches: do not shrink the requested range to available;
instead, if end > self.wait_for_batch(end) return Ok(None). Also verify that all
companion vectors (data.addresses, data.low_element_values,
data.low_element_next_values, data.low_element_indices,
data.low_element_next_indices, and data.reconstruct_proofs range) have at least
end elements and that their slices [start..end] produce the same length as
addresses before constructing AddressBatchSnapshot; if any length mismatch or
addresses.is_empty() then return Ok(None) (not Err or a truncated snapshot).
Keep the existing fallback for missing leaves_hash_chains but only use it when
the full batch is present.
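The "only full batches" guard can be sketched in isolation; `checked_batch_slice` is a hypothetical stand-in for the parallel-vector slicing in `get_batch_snapshot`:

```rust
// Hypothetical stand-in: return Some(slice) only when the full fixed-size
// range [start, end) is available; short or out-of-bounds batches stay
// "not ready" (None) instead of being truncated or panicking.
fn checked_batch_slice<T: Clone>(
    data: &[T],
    start: usize,
    end: usize,
    available: usize,
) -> Option<Vec<T>> {
    if end > available {
        return None; // batch not fully fetched yet
    }
    // get() returns None if any companion vector is shorter than claimed,
    // instead of panicking like data[start..end] would.
    data.get(start..end).map(|s| s.to_vec())
}
```

Applying the same checked slice to every parallel vector keeps a snapshot internally consistent: either all fields cover the full range, or the caller gets `None`.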

Comment on lines +202 to +209
let new_element_values = &new_element_values[..zkp_batch_size];
let mut new_root = [0u8; 32];
let mut low_element_circuit_merkle_proofs = Vec::with_capacity(new_element_values.len());
let mut new_element_circuit_merkle_proofs = Vec::with_capacity(new_element_values.len());
let mut patched_low_element_next_values = Vec::with_capacity(new_element_values.len());
let mut patched_low_element_next_indices = Vec::with_capacity(new_element_values.len());
let mut patched_low_element_values = Vec::with_capacity(new_element_values.len());
let mut patched_low_element_indices = Vec::with_capacity(new_element_values.len());
⚠️ Potential issue | 🔴 Critical

Validate batch lengths before the new slice/index operations.

Line 202 panics when new_element_values.len() < zkp_batch_size, and the loop then blindly indexes the other parallel slices with the same bound. This reintroduces hard panics on malformed indexer input instead of returning a ProverClientError.

🩹 Proposed fix
 ) -> Result<BatchAddressAppendInputs, ProverClientError> {
+    for (name, len) in [
+        ("new_element_values", new_element_values.len()),
+        ("low_element_values", low_element_values.len()),
+        ("low_element_next_values", low_element_next_values.len()),
+        ("low_element_indices", low_element_indices.len()),
+        ("low_element_next_indices", low_element_next_indices.len()),
+        ("low_element_proofs", low_element_proofs.len()),
+    ] {
+        if len < zkp_batch_size {
+            return Err(ProverClientError::GenericError(format!(
+                "{name} has length {len}, expected at least {zkp_batch_size}",
+            )));
+        }
+    }
+
     let new_element_values = &new_element_values[..zkp_batch_size];

Also applies to: 244-273

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prover/client/src/proof_types/batch_address_append/proof_inputs.rs` around
lines 202 - 209, Before taking slices or indexing, validate that
new_element_values.len() >= zkp_batch_size and that all parallel input
slices/vecs have length >= zkp_batch_size; if any check fails return the
appropriate ProverClientError instead of panicking. Specifically, add a guard
before the line using let new_element_values =
&new_element_values[..zkp_batch_size]; that verifies new_element_values.len() >=
zkp_batch_size and also verify lengths of the parallel collections used later
(the collections referenced by low_element_circuit_merkle_proofs,
new_element_circuit_merkle_proofs, patched_low_element_next_values/indices,
patched_low_element_values/indices or the original input slices they derive
from) and return an Err(ProverClientError::...) on mismatch. Apply the same
validation for the other slicing/indexing blocks around the 244-273 region.
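The guard suggested above can be isolated into a small check; `validate_batch_lengths` and the `String` error are hypothetical simplifications of the `ProverClientError` path:

```rust
// Hypothetical simplification: verify every parallel input is at least
// zkp_batch_size long before any slicing or indexing, returning an error
// (instead of panicking) on malformed indexer input.
fn validate_batch_lengths(
    zkp_batch_size: usize,
    inputs: &[(&str, usize)],
) -> Result<(), String> {
    for (name, len) in inputs {
        if *len < zkp_batch_size {
            return Err(format!(
                "{name} has length {len}, expected at least {zkp_batch_size}"
            ));
        }
    }
    Ok(())
}
```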

-        );
+        )?;

        patcher.push_changelog_entry::&lt;HEIGHT&gt;(changelog, changelog_entry);
⚠️ Potential issue | 🔴 Critical

Make the new error-return path transactional.

By the time Line 404 can fail, the current iteration has already pushed to changelog (Line 369), indexed_changelog (Line 385), and mutated sparse_merkle_tree via append(). Returning Err from these new ? paths leaves caller-owned state half-applied, so retries can start from a corrupted tree/changelog snapshot.

🔧 Safer direction

Stage mutations in temporary state and only commit them back after the whole batch succeeds. If cloning the tree is too expensive, add explicit rollback for sparse_merkle_tree, changelog, and indexed_changelog on every fallible branch after the first mutation.

Also applies to: 385-385, 398-408, 431-435

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prover/client/src/proof_types/batch_address_append/proof_inputs.rs` at line
369, The new error-return paths can leave changelog, indexed_changelog, and
sparse_merkle_tree half-updated (e.g., after calling
patcher.push_changelog_entry::<HEIGHT>, inserting into indexed_changelog, and
sparse_merkle_tree.append()), so make these mutations transactional: either
stage changes in temporary structures and only commit them back to changelog,
indexed_changelog, and sparse_merkle_tree after all fallible operations succeed,
or implement explicit rollback logic that reverts the earlier
push_changelog_entry::<HEIGHT>, the indexed_changelog insert, and the
sparse_merkle_tree.append() on every subsequent ?-return path (the branches
noted around the indexed_changelog insertion and the block that can fail at
lines ~398-408 and ~431-435). Ensure any temporary clones or rollback routines
preserve original state and are invoked before returning Err.
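The stage-then-commit direction can be sketched generically; this is not the tree code itself, just the transactional shape, under the assumption that cloning staged entries is cheap:

```rust
// Hypothetical sketch of the transactional pattern: perform all fallible
// work on staged temporaries first, and mutate the caller-owned state only
// after the whole batch has succeeded, so an Err leaves state untouched.
fn apply_batch_transactional(
    state: &mut Vec<u64>,
    batch: &[Result<u64, String>],
) -> Result<(), String> {
    let mut staged = Vec::with_capacity(batch.len());
    for entry in batch {
        staged.push(entry.clone()?); // any failure aborts before commit
    }
    state.extend(staged); // commit point: only reached on full success
    Ok(())
}
```

When cloning the real tree is too expensive, the equivalent discipline is explicit rollback of every prior mutation on each fallible branch, as the comment suggests.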

Comment on lines +115 to +117
    pub fn public_inputs(&self) -> Result&lt;Vec&lt;[u8; 32]&gt;, ProverClientError&gt; {
        // Concatenate all public inputs into a single flat vector
-        vec![self.0[0].public_inputs_arr()]
+        Ok(vec![self.0[0].public_inputs_arr()?])
⚠️ Potential issue | 🔴 Critical

Guard empty input and include every circuit in public_inputs.

Line 117 still does self.0[0], so an empty BatchUpdateInputs panics, and the method silently drops every element after the first even though it returns a Vec<[u8; 32]>.

🩹 Proposed fix
 impl BatchUpdateInputs<'_> {
     pub fn public_inputs(&self) -> Result<Vec<[u8; 32]>, ProverClientError> {
-        // Concatenate all public inputs into a single flat vector
-        Ok(vec![self.0[0].public_inputs_arr()?])
+        if self.0.is_empty() {
+            return Err(ProverClientError::GenericError(
+                "batch update inputs cannot be empty".to_string(),
+            ));
+        }
+
+        self.0
+            .iter()
+            .map(BatchUpdateCircuitInputs::public_inputs_arr)
+            .collect()
     }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prover/client/src/proof_types/batch_update/proof_inputs.rs` around lines 115
- 117, The public_inputs method currently indexes self.0[0], which panics on
empty BatchUpdateInputs and only returns the first circuit's inputs; change
public_inputs to first guard for empty inputs (e.g., return a ProverClientError
variant when self.0.is_empty()) and then iterate over all entries in self.0,
calling public_inputs_arr()? on each element and collect their [u8; 32] results
into the returned Vec<[u8; 32]>, preserving failure semantics via the existing
Result type; update references to public_inputs and use ProverClientError for
the empty-case error.
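The empty-guard plus map/collect shape can be sketched with plain byte vectors; the 32-byte conversion stands in for `public_inputs_arr` and the `String` error for `ProverClientError`:

```rust
// Hypothetical sketch: reject empty input up front, then fallibly convert
// every element (not just the first), collecting the Results into one Vec.
fn all_public_inputs(chunks: &[Vec<u8>]) -> Result<Vec<[u8; 32]>, String> {
    if chunks.is_empty() {
        return Err("batch update inputs cannot be empty".to_string());
    }
    chunks
        .iter()
        .map(|c| {
            <[u8; 32]>::try_from(c.as_slice())
                .map_err(|_| format!("expected 32 bytes, got {}", c.len()))
        })
        .collect()
}
```

Collecting an iterator of `Result<T, E>` into `Result<Vec<T>, E>` short-circuits on the first failure, which preserves the method's error semantics without panicking.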

Comment on lines +72 to 89
pub fn deserialize_hex_string_to_be_bytes(hex_str: &str) -> Result&lt;[u8; 32], ProverClientError&gt; {
    let trimmed_str = hex_str.trim_start_matches("0x");
-    let big_int = num_bigint::BigInt::from_str_radix(trimmed_str, 16).unwrap();
+    let big_int = num_bigint::BigInt::from_str_radix(trimmed_str, 16)
+        .map_err(|e| ProverClientError::InvalidHexString(format!("{} ({})", hex_str, e)))?;
    let big_int_bytes = big_int.to_bytes_be().1;
    if big_int_bytes.len() < 32 {
        let mut result = [0u8; 32];
        result[32 - big_int_bytes.len()..].copy_from_slice(&big_int_bytes);
-        result
+        Ok(result)
    } else {
-        big_int_bytes.try_into().unwrap()
+        let len = big_int_bytes.len();
+        big_int_bytes.try_into().map_err(|_| {
+            ProverClientError::InvalidHexString(format!(
+                "expected at most 32 bytes, got {} for {}",
+                len, hex_str
+            ))
+        })
    }
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's read the file to confirm the code matches the review
wc -l prover/client/src/proof.rs

Repository: Lightprotocol/light-protocol

Length of output: 102


🏁 Script executed:

# Read the specific lines mentioned in the review
cat -n prover/client/src/proof.rs | sed -n '70,95p'

Repository: Lightprotocol/light-protocol

Length of output: 1162


🏁 Script executed:

# Also check the negate_g1 function mentioned in the scratchpad
cat -n prover/client/src/proof.rs | sed -n '100,120p'

Repository: Lightprotocol/light-protocol

Length of output: 989


🏁 Script executed:

# Check the Cargo.toml to understand dependencies
fd -t f "Cargo.toml" prover/client/ | head -1 | xargs cat

Repository: Lightprotocol/light-protocol

Length of output: 1402


🏁 Script executed:

# Find the negate_g1 function
cat -n prover/client/src/proof.rs | grep -A 15 "fn negate_g1"

Repository: Lightprotocol/light-protocol

Length of output: 935


🏁 Script executed:

# Check the actual behavior of trim_start_matches with multiple prefixes
python3 << 'EOF'
# Simulate Rust's trim_start_matches behavior
test_str = "0x0x01"
# trim_start_matches removes all leading characters that match ANY character in the pattern
# So "0x" as a pattern means remove all '0' and 'x' characters from the start
result = test_str.lstrip("0x")
print(f"trim_start_matches('0x') on '{test_str}' = '{result}'")

# Test with strip_prefix behavior
test_str2 = "0x0x01"
if test_str2.startswith("0x"):
    result2 = test_str2[2:]  # strip_prefix
    print(f"strip_prefix('0x') on '{test_str2}' = '{result2}'")

# Test with more examples
examples = ["0x01", "0x0x01", "0x00x01", "00xABC"]
for ex in examples:
    trimmed = ex.lstrip("0x")
    print(f"'{ex}'.lstrip('0x') = '{trimmed}'")
EOF

Repository: Lightprotocol/light-protocol

Length of output: 267


🏁 Script executed:

# Check the num_bigint documentation and test negative parsing
python3 << 'EOF'
# The behavior of BigInt::from_str_radix with a sign
# In Rust's num_bigint, from_str_radix should NOT parse the sign
# But let's check if there's a signed version

# Looking at the typical implementation:
# BigInt::from_str_radix expects a string without a sign
# If you include a '-' it would fail because '-' is not a valid hex digit

# However, the concern is valid IF the input could somehow have a sign
# Let's check: what if someone passes "0x-ff"?
test_input = "0x-ff"
trimmed = test_input.lstrip("0x")
print(f"Input: '{test_input}'")
print(f"After lstrip('0x'): '{trimmed}'")
# The lstrip would remove all leading 0s and xs, so "-ff" becomes "-ff"
# But from_str_radix("-ff", 16) would fail because '-' is not hex

# However, if the sign were parsed separately or through a different mechanism:
# BigInt has a to_bytes_be() that returns (Sign, Vec<u8>)
# If you discard the sign, you lose information about negativity
print("\nThe concern is valid IF:")
print("1. Somehow a negative BigInt is created")
print("2. to_bytes_be().1 discards the sign")
print("3. The magnitude is then treated as the full value")
EOF

Repository: Lightprotocol/light-protocol

Length of output: 266


🏁 Script executed:

# Search for any sign-handling code or documentation
fd -t f "proof.rs" prover/client/src/ --exec head -30 {} \;

Repository: Lightprotocol/light-protocol

Length of output: 938


🏁 Script executed:

# Search for tests of deserialize_hex_string_to_be_bytes
rg "deserialize_hex_string_to_be_bytes" -A 5 -B 5 prover/client/src/

Repository: Lightprotocol/light-protocol

Length of output: 4954


🏁 Script executed:

# Check if there's documentation about the expected input format
fd -t f "\.md$" prover/client/ -exec cat {} \;

Repository: Lightprotocol/light-protocol

Length of output: 218


🏁 Script executed:

# Let's verify if from_str_radix can actually parse negative numbers
python3 << 'EOF'
# In Rust, BigInt::from_str_radix signature is:
# fn from_str_radix(s: &str, radix: u32) -> Result<BigInt, ParseBigIntError>
# It does NOT support signs - only the magnitude

# The behavior is:
# "ff" with radix 16 -> BigInt(255)
# "-ff" with radix 16 -> ParseBigIntError (invalid digit '-')

# So the ACTUAL risk is not from negative numbers, but from:
# 1. Malformed prefixes like "0x0x01"
# 2. Using the sign after discarding it (which can't happen)

# The correct analysis:
print("Risk 1: trim_start_matches removes ALL leading 0s and xs")
print("  '0x0x01' -> '1' (WRONG, should be '0x01' or error)")
print("Risk 2: from_str_radix cannot parse '-' so negative risk is theoretical")
print("  But using BigUint makes intent clear (always unsigned field elements)")
print()
print("Conclusion: The trim_start_matches is the real bug")
print("The BigInt vs BigUint is a code clarity/correctness improvement")
EOF

Repository: Lightprotocol/light-protocol

Length of output: 441


🏁 Script executed:

# Check if the Gnark prover output could have any unusual prefixes
rg "0x" prover/client/src/ -B 2 -A 2 | head -50

Repository: Lightprotocol/light-protocol

Length of output: 1583


🏁 Script executed:

# Look for any integration tests or examples
fd -t f "test\|example" prover/client/ | head -10

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Search for GnarkProofJson to understand the input format
rg "GnarkProofJson" prover/client/src/ -B 3 -A 10

Repository: Lightprotocol/light-protocol

Length of output: 2310


🏁 Script executed:

# Let me verify the actual behavior of trim_start_matches one more time with concrete Rust
python3 << 'EOF'
# In Rust, trim_start_matches("0x") removes characters from the set {'0', 'x'}
# This is NOT the same as removing the prefix "0x"

test_cases = [
    "0x01",
    "0x0x01",
    "0xab",
    "0x0x0x01",
    "xx01",
    "0x",
    "x0x01"
]

for test in test_cases:
    # Simulate trim_start_matches by removing all leading '0' and 'x'
    trimmed = test.lstrip("0x")
    # Then parse as hex
    try:
        value = int(trimmed, 16)
        print(f"'{test}' -> '{trimmed}' -> {value:032x}")
    except ValueError as e:
        print(f"'{test}' -> '{trimmed}' -> ERROR: {e}")
EOF

Repository: Lightprotocol/light-protocol

Length of output: 443


🏁 Script executed:

# Verify what Validate::No actually does in arkworks
web_search "arkworks Validate::No CanonicalDeserialize"

Repository: Lightprotocol/light-protocol

Length of output: 122


🌐 Web query:

arkworks CanonicalDeserialize Validate::No point validation

💡 Result:

In arkworks, point validation is controlled by the validate: Validate flag passed to CanonicalDeserialize::deserialize_with_mode.

  • Validate::No means “skip post-deserialization validation”; the exact checks skipped are type-dependent (it’s whatever that type implements in Valid::check()). (arkworks.rs)
  • The “unchecked” convenience methods are literally just deserialize_with_mode(..., Validate::No):
    • deserialize_compressed_unchecked → Validate::No
    • deserialize_uncompressed_unchecked → Validate::No (docs.rs)

For elliptic-curve points, the skipped validation typically includes checks like “is this encoding a valid curve point (on-curve)?” and often “is it in the correct prime-order subgroup?” (again: depends on that point type’s Valid::check() implementation). (arkworks.rs)

If you use Validate::No / *_unchecked on untrusted input, you can end up accepting invalid points; some downstream code assumes validated points and this can lead to serious bugs (some projects even warn about potential undefined behavior when skipping these checks). (docs.rs)


Use strip_prefix to enforce strict hex format.

The current code uses trim_start_matches("0x") on line 73, which removes all leading characters matching '0' or 'x' — not just the prefix. Inputs like 0x0x01 become 1 instead of triggering an error. This validation bypass is a bug: either trim to exactly one 0x prefix using strip_prefix("0x").unwrap_or(hex_str), or reject malformed prefixes outright.

Additionally, parse into BigUint instead of BigInt to make explicit that these are field element magnitudes (always unsigned), eliminating any future confusion about sign handling and matching bn254 semantics.

Proposed fix
 pub fn deserialize_hex_string_to_be_bytes(hex_str: &str) -> Result<[u8; 32], ProverClientError> {
-    let trimmed_str = hex_str.trim_start_matches("0x");
-    let big_int = num_bigint::BigInt::from_str_radix(trimmed_str, 16)
+    let trimmed_str = hex_str.strip_prefix("0x").unwrap_or(hex_str);
+    let big_int = num_bigint::BigUint::from_str_radix(trimmed_str, 16)
         .map_err(|e| ProverClientError::InvalidHexString(format!("{} ({})", hex_str, e)))?;
-    let big_int_bytes = big_int.to_bytes_be().1;
+    let big_int_bytes = big_int.to_bytes_be();
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
pub fn deserialize_hex_string_to_be_bytes(hex_str: &str) -> Result<[u8; 32], ProverClientError> {
let trimmed_str = hex_str.strip_prefix("0x").unwrap_or(hex_str);
let big_int = num_bigint::BigUint::from_str_radix(trimmed_str, 16)
.map_err(|e| ProverClientError::InvalidHexString(format!("{} ({})", hex_str, e)))?;
let big_int_bytes = big_int.to_bytes_be();
if big_int_bytes.len() < 32 {
let mut result = [0u8; 32];
result[32 - big_int_bytes.len()..].copy_from_slice(&big_int_bytes);
Ok(result)
} else {
let len = big_int_bytes.len();
big_int_bytes.try_into().map_err(|_| {
ProverClientError::InvalidHexString(format!(
"expected at most 32 bytes, got {} for {}",
len, hex_str
))
})
}
}
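The zero-pad and length-check branches in the suggestion can be exercised on their own; pad_be_32 below is a hypothetical helper mirroring that logic:

```rust
fn pad_be_32(bytes: &[u8]) -> Option<[u8; 32]> {
    if bytes.len() > 32 {
        return None; // more than 32 bytes cannot fit a field element
    }
    let mut out = [0u8; 32];
    // left-pad: the big-endian magnitude goes at the tail of the array
    out[32 - bytes.len()..].copy_from_slice(bytes);
    Some(out)
}

fn main() {
    let padded = pad_be_32(&[0xAB, 0xCD]).unwrap();
    assert_eq!(padded[30..], [0xAB, 0xCD]);
    assert!(padded[..30].iter().all(|b| *b == 0));
    assert!(pad_be_32(&[0u8; 33]).is_none());
}
```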
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prover/client/src/proof.rs` around lines 72 - 89, Replace the loose
trim_start_matches("0x") logic in deserialize_hex_string_to_be_bytes with a
strict single-prefix handling and switch to unsigned parsing: check
hex_str.strip_prefix("0x") and use the stripped slice if present (or the
original string if not) so inputs like "0x0x01" do not silently become valid;
parse into num_bigint::BigUint (use BigUint::from_str_radix or parse_bytes)
instead of BigInt, use BigUint::to_bytes_be() for the bytes, and keep the
existing zero-pad/length-check logic but map parse errors to
ProverClientError::InvalidHexString with the original hex_str included for
debugging (referencing the function deserialize_hex_string_to_be_bytes and
variables big_int/big_int_bytes which should be renamed to
big_uint/big_uint_bytes).

Comment on lines 214 to 223
if let Some(next) = account.tree_info.next_tree_info {
// SAFETY: account will always have a state Merkle tree context.
// pack_output_tree_index only panics on an address Merkle tree context.
let index = next.pack_output_tree_index(packed_accounts).unwrap();
let index = next.pack_output_tree_index(packed_accounts)?;
if output_tree_index.is_none() {
output_tree_index = Some(index);
}
} else {
// SAFETY: account will always have a state Merkle tree context.
// pack_output_tree_index only panics on an address Merkle tree context.
let index = account
.tree_info
.pack_output_tree_index(packed_accounts)
.unwrap();
let index = account.tree_info.pack_output_tree_index(packed_accounts)?;
if output_tree_index.is_none() {
output_tree_index = Some(index);
}

⚠️ Potential issue | 🟠 Major

Reject mixed output-tree indices instead of keeping the first one.

PackedStateTreeInfos can only carry one output_tree_index, but this loop silently keeps the first computed value and ignores later mismatches. If different state proofs resolve to different output trees, we pack inconsistent instruction data with no error.
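The guard reduces to a standalone pattern (set_once and the String error are illustrative, not the crate's types):

```rust
fn set_once(slot: &mut Option<u8>, index: u8) -> Result<(), String> {
    match *slot {
        // a second, different index is a hard error
        Some(existing) if existing != index => Err(format!(
            "state proofs reference multiple output tree indices: {} and {}",
            existing, index
        )),
        // first value wins
        None => {
            *slot = Some(index);
            Ok(())
        }
        // repeats of the same value are fine
        _ => Ok(()),
    }
}

fn main() {
    let mut slot = None;
    assert!(set_once(&mut slot, 3).is_ok());
    assert!(set_once(&mut slot, 3).is_ok()); // same index: accepted
    assert!(set_once(&mut slot, 7).is_err()); // mismatch: rejected
    assert_eq!(slot, Some(3));
}
```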

🛡️ Suggested guard
             if let Some(next) = account.tree_info.next_tree_info {
                 let index = next.pack_output_tree_index(packed_accounts)?;
-                if output_tree_index.is_none() {
-                    output_tree_index = Some(index);
-                }
+                match output_tree_index {
+                    Some(existing) if existing != index => {
+                        return Err(IndexerError::InvalidParameters(format!(
+                            "state proofs reference multiple output tree indices: {} and {}",
+                            existing, index
+                        )));
+                    }
+                    None => output_tree_index = Some(index),
+                    _ => {}
+                }
             } else {
                 let index = account.tree_info.pack_output_tree_index(packed_accounts)?;
-                if output_tree_index.is_none() {
-                    output_tree_index = Some(index);
-                }
+                match output_tree_index {
+                    Some(existing) if existing != index => {
+                        return Err(IndexerError::InvalidParameters(format!(
+                            "state proofs reference multiple output tree indices: {} and {}",
+                            existing, index
+                        )));
+                    }
+                    None => output_tree_index = Some(index),
+                    _ => {}
+                }
             }
📝 Committable suggestion


Suggested change
if let Some(next) = account.tree_info.next_tree_info {
let index = next.pack_output_tree_index(packed_accounts)?;
match output_tree_index {
Some(existing) if existing != index => {
return Err(IndexerError::InvalidParameters(format!(
"state proofs reference multiple output tree indices: {} and {}",
existing, index
)));
}
None => output_tree_index = Some(index),
_ => {}
}
} else {
let index = account.tree_info.pack_output_tree_index(packed_accounts)?;
match output_tree_index {
Some(existing) if existing != index => {
return Err(IndexerError::InvalidParameters(format!(
"state proofs reference multiple output tree indices: {} and {}",
existing, index
)));
}
None => output_tree_index = Some(index),
_ => {}
}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/client/src/indexer/types/proof.rs` around lines 214 - 223, The loop
currently keeps the first computed output_tree_index and silently ignores any
subsequent differing indices; change this to detect mismatches and return an
error instead of keeping the first value. In the code paths using
account.tree_info.next_tree_info and
account.tree_info.pack_output_tree_index(packed_accounts) (and the local
variable output_tree_index), compute each index as before but if
output_tree_index is Some(prev) and the new index != prev then return a
descriptive error (e.g., Err or a custom ValidationError) indicating mixed
output-tree indices; otherwise set output_tree_index = Some(index) when it was
None. Apply this guard where PackedStateTreeInfos expects a single
output_tree_index so inconsistent state proofs are rejected.

Comment on lines +209 to +214
fn group_pda_specs<'a, V>(
specs: &[&'a PdaSpec<V>],
max_per_group: usize,
) -> Vec<Vec<&'a PdaSpec<V>>> {
assert!(max_per_group > 0, "max_per_group must be non-zero");
if specs.is_empty() {

🧹 Nitpick | 🔵 Trivial

The assert! contradicts this PR's goal of removing panic-prone patterns.

Given the PR objective is to harden runtime safety by replacing unwrap/panic with Result-based handling, this assert! stands out. While the current callers always pass MAX_PDAS_PER_IX = 8, making this safe in practice, the pattern is inconsistent with the broader refactoring philosophy.

Consider using debug_assert! if this is purely a programmer invariant check, or returning a Result if this function could eventually be exposed more broadly.
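If the function were exposed more broadly, the Result-based alternative could look like this sketch (GroupError is a hypothetical error type; only the grouping arithmetic is shown):

```rust
#[derive(Debug, PartialEq)]
enum GroupError {
    ZeroGroupSize,
}

// How many groups `len` items split into, rejecting a zero group
// size at runtime instead of panicking.
fn group_count(len: usize, max_per_group: usize) -> Result<usize, GroupError> {
    if max_per_group == 0 {
        return Err(GroupError::ZeroGroupSize);
    }
    Ok(len.div_ceil(max_per_group))
}

fn main() {
    assert_eq!(group_count(17, 8), Ok(3)); // 8 + 8 + 1
    assert_eq!(group_count(0, 8), Ok(0));
    assert_eq!(group_count(5, 0), Err(GroupError::ZeroGroupSize));
}
```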

Option A: Use debug_assert! for internal invariant
 fn group_pda_specs<'a, V>(
     specs: &[&'a PdaSpec<V>],
     max_per_group: usize,
 ) -> Vec<Vec<&'a PdaSpec<V>>> {
-    assert!(max_per_group > 0, "max_per_group must be non-zero");
+    debug_assert!(max_per_group > 0, "max_per_group must be non-zero");
     if specs.is_empty() {
         return Vec::new();
     }
📝 Committable suggestion


Suggested change
fn group_pda_specs<'a, V>(
specs: &[&'a PdaSpec<V>],
max_per_group: usize,
) -> Vec<Vec<&'a PdaSpec<V>>> {
debug_assert!(max_per_group > 0, "max_per_group must be non-zero");
if specs.is_empty() {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/client/src/interface/load_accounts.rs` around lines 209 - 214, The
function group_pda_specs currently uses assert!(max_per_group > 0, ...) which
introduces a panic; replace this with debug_assert!(max_per_group > 0, ...) to
keep the developer invariant check during testing while avoiding release-time
panics, and keep the signature and return type unchanged (call sites that pass
MAX_PDAS_PER_IX will be unaffected); if you intend this to be a
public/runtime-validated API instead, alternatively change group_pda_specs to
return Result<Vec<Vec<&PdaSpec<V>>>, SomeError> and explicitly return an Err
when max_per_group == 0, but prefer debug_assert! for internal invariants.

Comment on lines +69 to +70
let path = "cli/test_bin/run test-validator";
let mut path = format!("{}/{}", project_root.trim(), path);

🧹 Nitpick | 🔵 Trivial

Minor: Variable name path is misleading.

The variable starts as a path but quickly becomes a full command string with arguments. Consider renaming to command or cmd_line for clarity.

💡 Suggested rename
-    let path = "cli/test_bin/run test-validator";
-    let mut path = format!("{}/{}", project_root.trim(), path);
+    let base_cmd = "cli/test_bin/run test-validator";
+    let mut command = format!("{}/{}", project_root.trim(), base_cmd);

Then update all subsequent path.push_str(...) to command.push_str(...).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/client/src/local_test_validator.rs` around lines 69 - 70, The local
variable named `path` is misleading because it becomes a full command string;
rename both occurrences (the initial `let path = "cli/test_bin/run
test-validator";` and the subsequent `let mut path = format!("{}/{}",
project_root.trim(), path);`) to `command` (e.g., `let command = ...` and `let
mut command = ...`) and update all subsequent uses like `path.push_str(...)` to
`command.push_str(...)` so the variable name reflects it holds a command/command
line rather than a filesystem path.

Comment on lines +113 to 126
let mut child = Command::new("sh")
.arg("-c")
.arg(path)
.stdin(Stdio::null())
.stdout(Stdio::inherit())
.stderr(Stdio::inherit())
.spawn()?;
let status = child.wait()?;
if !status.success() {
return Err(io::Error::other(format!(
"validator CLI exited with status {}",
status
)));
}

⚠️ Potential issue | 🟠 Major

Blocking wait() call inside async function will stall the tokio runtime.

When child.wait() is called on line 120, it performs a blocking syscall that halts the current tokio worker thread. This prevents other async tasks from making progress on that thread—particularly problematic if the validator takes time to start.

Consider using tokio::process::Command which provides an async-compatible wait():

🔧 Proposed fix using tokio::process

Update the import at the top:

 use std::{
     io,
-    process::{Command, Stdio},
+    process::Stdio,
 };
+use tokio::process::Command;

The spawn and wait calls remain the same syntactically, but wait() becomes .await-able:

         let mut child = Command::new("sh")
             .arg("-c")
             .arg(path)
             .stdin(Stdio::null())
             .stdout(Stdio::inherit())
             .stderr(Stdio::inherit())
             .spawn()?;
-        let status = child.wait()?;
+        let status = child.wait().await?;

As per coding guidelines: "Use async functions with #[tokio::main] for asynchronous Rust code in the Light Protocol client library"—this implies async functions should avoid blocking the runtime.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/client/src/local_test_validator.rs` around lines 113 - 126, The
blocking std::process::Command.wait() call in local_test_validator.rs stalls the
tokio runtime; replace std::process::Command and std::process::Stdio with
tokio::process::Command and tokio::process::Stdio, spawn the child the same way
you do now, and await the child's termination with child.wait().await (checking
the returned ExitStatus.success()). Update imports accordingly and remove the
blocking child.wait() call so the async function does not perform blocking IO.

Comment on lines +128 to +136
let child = Command::new("sh")
.arg("-c")
.arg(path)
.stdin(Stdio::null())
.stdout(Stdio::null())
.stderr(Stdio::null())
.spawn()?;
std::mem::drop(child);
tokio::time::sleep(tokio::time::Duration::from_secs(config.wait_time)).await;

🧹 Nitpick | 🔵 Trivial

Minor: Explicit drop is unnecessary; consider a brief comment on intent.

std::mem::drop(child) on line 135 only moves the drop point earlier in the block; because std::process::Child's Drop neither kills nor reaps the spawned process, the effect is the same as letting child go out of scope. If the intent is to communicate "we're intentionally detaching this process," a comment would be clearer than the explicit drop.

That said, this is a nitpick—the code works correctly.
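For illustration, an explicit drop(x) only changes when Drop runs, never whether it runs (Child and drop_order here are toy types, unrelated to std::process::Child):

```rust
use std::{cell::RefCell, rc::Rc};

struct Child(Rc<RefCell<Vec<&'static str>>>, &'static str);

impl Drop for Child {
    fn drop(&mut self) {
        self.0.borrow_mut().push(self.1);
    }
}

// Records the order in which drops and statements run.
fn drop_order() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let a = Child(log.clone(), "explicit");
        drop(a); // Drop runs immediately here
        log.borrow_mut().push("after explicit drop");
        let _b = Child(log.clone(), "scoped");
        log.borrow_mut().push("before scope end");
        // _b's Drop runs at the closing brace
    }
    let result = log.borrow().clone();
    result
}

fn main() {
    assert_eq!(
        drop_order(),
        ["explicit", "after explicit drop", "before scope end", "scoped"]
    );
}
```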

💡 Optional: Replace explicit drop with comment
         let child = Command::new("sh")
             .arg("-c")
             .arg(path)
             .stdin(Stdio::null())
             .stdout(Stdio::null())
             .stderr(Stdio::null())
             .spawn()?;
-        std::mem::drop(child);
+        // Child handle dropped intentionally; validator runs detached in background
+        drop(child);
         tokio::time::sleep(tokio::time::Duration::from_secs(config.wait_time)).await;
📝 Committable suggestion


Suggested change
let child = Command::new("sh")
.arg("-c")
.arg(path)
.stdin(Stdio::null())
.stdout(Stdio::null())
.stderr(Stdio::null())
.spawn()?;
// Child handle dropped intentionally; validator runs detached in background
drop(child);
tokio::time::sleep(tokio::time::Duration::from_secs(config.wait_time)).await;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/client/src/local_test_validator.rs` around lines 128 - 136, The
explicit std::mem::drop(child) after spawning the process
(Command::new(...).spawn() -> child) is unnecessary; remove the drop and instead
add a brief comment above the spawn or before awaiting
tokio::time::sleep(tokio::time::Duration::from_secs(config.wait_time)).await
explaining the intent to detach/ignore the child process (e.g., "intentionally
detaching spawned child; not awaiting its exit"), so the purpose is clear while
relying on scope-based drop for cleanup.

@sergeytimoshin sergeytimoshin force-pushed the sergey/runtime-safety-hardening branch from 131375b to 1012099 Compare March 14, 2026 21:07
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
sdk-libs/client/src/interface/load_accounts.rs (1)

213-213: ⚠️ Potential issue | 🟡 Minor

Replace panicking invariant check in group_pda_specs (already reported).

Line 213 uses assert!, which can panic in release and reintroduces panic-prone behavior in a runtime-safety hardening PR. Prefer debug_assert! for internal invariants, or return a Result for runtime validation.

#!/bin/bash
# Verify the panic-prone invariant is still present.
rg -n -C2 'assert!\(max_per_group > 0' sdk-libs/client/src/interface/load_accounts.rs
Suggested minimal fix
-    assert!(max_per_group > 0, "max_per_group must be non-zero");
+    debug_assert!(max_per_group > 0, "max_per_group must be non-zero");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/client/src/interface/load_accounts.rs` at line 213, Replace the
panic-prone assert in group_pda_specs: instead of assert!(max_per_group > 0,
...), use debug_assert!(max_per_group > 0, ...) if this is an internal
invariant, or change group_pda_specs to return a Result and validate
max_per_group at runtime (returning an Err when <= 0) so callers can handle the
error; update callers accordingly if you opt for the Result approach.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@sdk-libs/client/src/interface/load_accounts.rs`:
- Around line 252-259: The current try_join_all in fetch_individual_proofs and
fetch_proof_batches issues all RPCs at once; replace those unbounded fans with a
bounded concurrent stream using futures::stream::iter(...) +
TryStreamExt::try_buffer_unordered(CONCURRENCY_LIMIT).try_collect().await so
only CONCURRENCY_LIMIT in-flight get_validity_proof calls run at once; pick a
sane limit (e.g., introduce a constant MAX_CONCURRENT_PROOF_RPCS = 32 or make it
configurable) and keep the existing mapping that calls
indexer.get_validity_proof(...).await.map(|r| r.value) but wrap that in the
stream element (use cloned hashes or iterate over hashes.clone()) and use
try_buffer_unordered/try_collect to preserve Result error propagation; apply the
same change in both fetch_individual_proofs and fetch_proof_batches (referencing
those function names and reuse MAX_PDAS_PER_IX/MAX_ATAS_PER_IX only for
grouping, not for concurrency).

In `@sdk-libs/program-test/src/indexer/test_indexer.rs`:
- Around line 2510-2516: Replace the unsafe .unwrap() calls on
response_result.text().await and deserialize_gnark_proof_json(&body) with proper
error propagation consistent with the surrounding code: call
response_result.text().await? (or map_err into IndexerError::CustomError) to
assign body and then use deserialize_gnark_proof_json(&body).map_err(|e|
IndexerError::CustomError(e.to_string()))? to produce proof_json so failures
return IndexerError rather than panicking; update the function signature to
return Result where necessary to support the ? operator.

---

Duplicate comments:
In `@sdk-libs/client/src/interface/load_accounts.rs`:
- Line 213: Replace the panic-prone assert in group_pda_specs: instead of
assert!(max_per_group > 0, ...), use debug_assert!(max_per_group > 0, ...) if
this is an internal invariant, or change group_pda_specs to return a Result and
validate max_per_group at runtime (returning an Err when <= 0) so callers can
handle the error; update callers accordingly if you opt for the Result approach.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: ae0a0a1d-a04c-4e6c-bc59-d4b6c1826804

📥 Commits

Reviewing files that changed from the base of the PR and between 65676df and 1012099.

⛔ Files ignored due to path filters (4)
  • forester-utils/src/account_zero_copy.rs is excluded by none and included by none
  • program-tests/utils/src/e2e_test_env.rs is excluded by none and included by none
  • program-tests/utils/src/mock_batched_forester.rs is excluded by none and included by none
  • program-tests/utils/src/test_batch_forester.rs is excluded by none and included by none
📒 Files selected for processing (2)
  • sdk-libs/client/src/interface/load_accounts.rs
  • sdk-libs/program-test/src/indexer/test_indexer.rs

Comment on lines +252 to +259
futures::future::try_join_all(hashes.iter().map(|hash| async move {
indexer
.get_validity_proof(vec![*hash], vec![], None)
.await
.map(|response| response.value)
}))
.await
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify unbounded fan-out call sites in proof-fetch helpers.
rg -n -C3 'try_join_all\(' sdk-libs/client/src/interface/load_accounts.rs

Repository: Lightprotocol/light-protocol

Length of output: 541


🏁 Script executed:

# Read the full context of both functions to understand batch sizes and call patterns
cat -n sdk-libs/client/src/interface/load_accounts.rs | sed -n '240,280p'

Repository: Lightprotocol/light-protocol

Length of output: 1368


🏁 Script executed:

# Check where these functions are called and what batch sizes are typically used
rg -n 'fetch_individual_proofs\|fetch_proof_batches' sdk-libs/client/src/interface/load_accounts.rs -A 2 -B 2

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Look for any existing concurrency bounds or rate limiting patterns in the codebase
rg -n 'buffer_unordered\|buffered\|buffering\|MAX.*IN_FLIGHT\|MAX.*CONCURRENT' sdk-libs/client/src/ -t rust

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Check the Indexer trait definition to see if there are hints about concurrency
fd -e rs -path '*indexer*' sdk-libs/client | head -20

Repository: Lightprotocol/light-protocol

Length of output: 240


🏁 Script executed:

# Find call sites of fetch_individual_proofs and fetch_proof_batches
rg -n 'fetch_individual_proofs\|fetch_proof_batches' sdk-libs/client/src/interface/load_accounts.rs --no-heading

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Check the Indexer trait definition
find sdk-libs/client -name '*.rs' -type f | xargs grep -l 'trait Indexer'

Repository: Lightprotocol/light-protocol

Length of output: 116


🏁 Script executed:

# Look for batch size usage patterns - what are typical hash counts?
rg -n 'MAX.*ACCOUNTS\|MAX.*HASHES\|MAX.*BATCH' sdk-libs/client/src/ --type rust

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Check if there are rate limiting mechanisms in the indexer implementation
find sdk-libs/client -name '*.rs' -type f | xargs grep -l 'get_validity_proof' | head -5

Repository: Lightprotocol/light-protocol

Length of output: 309


🏁 Script executed:

# Read the Indexer trait to understand the interface
cat -n sdk-libs/client/src/indexer/indexer_trait.rs | head -100

Repository: Lightprotocol/light-protocol

Length of output: 4749


🏁 Script executed:

# Find where fetch_individual_proofs and fetch_proof_batches are called
rg -n 'fetch_individual_proofs\|fetch_proof_batches' sdk-libs/client/src/ -A 2 -B 2

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Check what MAX constants exist at the top of load_accounts.rs
cat -n sdk-libs/client/src/interface/load_accounts.rs | head -50

Repository: Lightprotocol/light-protocol

Length of output: 2054


🏁 Script executed:

# Continue reading load_accounts.rs to find constants
cat -n sdk-libs/client/src/interface/load_accounts.rs | sed -n '50,150p'

Repository: Lightprotocol/light-protocol

Length of output: 4177


🏁 Script executed:

# Find all callers of fetch_individual_proofs
rg -n 'fetch_individual_proofs' sdk-libs/client/src/interface/ -B 5 -A 10

Repository: Lightprotocol/light-protocol

Length of output: 2656


🏁 Script executed:

# Check if there's a photon indexer that might give hints about rate limiting
cat -n sdk-libs/client/src/indexer/photon_indexer.rs | head -80

Repository: Lightprotocol/light-protocol

Length of output: 3321


🏁 Script executed:

# Look at the third function fetch_proofs_batched to understand the pattern
cat -n sdk-libs/client/src/interface/load_accounts.rs | sed -n '278,310p'

Repository: Lightprotocol/light-protocol

Length of output: 1187


🏁 Script executed:

# Search for any documentation or comments about rate limiting/concurrency issues
rg -n 'rate.limit\|concurrent\|unbounded\|timeout\|backpressure' sdk-libs/client/src/ -i

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Check if there are any existing bounds or buffer_unordered patterns in the SDK
rg -n 'buffer_unordered\|StreamExt\|TryStreamExt' sdk-libs/ -t rust

Repository: Lightprotocol/light-protocol

Length of output: 54


Bound concurrent proof RPCs to prevent request storms.

Both fetch_individual_proofs (line 252) and fetch_proof_batches (line 269) issue all requests at once via try_join_all. With many cold accounts—particularly mints or PDAs—this creates unbounded fan-out against the indexer, risking timeouts, rate-limits, or backpressure.

The MAX_PDAS_PER_IX and MAX_ATAS_PER_IX constants (8 each) bound grouping per instruction, not the total number of concurrent RPC calls. Use buffer_unordered to cap in-flight requests:

Suggested fix (bounded in-flight concurrency)
+use futures::{StreamExt, TryStreamExt};
 ...
 const MAX_ATAS_PER_IX: usize = 8;
 const MAX_PDAS_PER_IX: usize = 8;
+const MAX_PROOF_REQUESTS_IN_FLIGHT: usize = 16;
 ...
 async fn fetch_individual_proofs<I: Indexer>(
     hashes: &[[u8; 32]],
     indexer: &I,
 ) -> Result<Vec<ValidityProofWithContext>, IndexerError> {
     if hashes.is_empty() {
         return Ok(vec![]);
     }

-    futures::future::try_join_all(hashes.iter().map(|hash| async move {
-        indexer
-            .get_validity_proof(vec![*hash], vec![], None)
-            .await
-            .map(|response| response.value)
-    }))
-    .await
+    futures::stream::iter(hashes.iter().copied())
+        .map(|hash| async move {
+            indexer
+                .get_validity_proof(vec![hash], vec![], None)
+                .await
+                .map(|response| response.value)
+        })
+        .buffer_unordered(MAX_PROOF_REQUESTS_IN_FLIGHT)
+        .try_collect()
+        .await
 }
 ...
 async fn fetch_proof_batches<I: Indexer>(
     hash_batches: &[Vec<[u8; 32]>],
     indexer: &I,
 ) -> Result<Vec<ValidityProofWithContext>, IndexerError> {
     if hash_batches.is_empty() {
         return Ok(vec![]);
     }

-    futures::future::try_join_all(hash_batches.iter().map(|hashes| async move {
-        indexer
-            .get_validity_proof(hashes.clone(), vec![], None)
-            .await
-            .map(|response| response.value)
-    }))
-    .await
+    futures::stream::iter(hash_batches.iter().cloned())
+        .map(|hashes| async move {
+            indexer
+                .get_validity_proof(hashes, vec![], None)
+                .await
+                .map(|response| response.value)
+        })
+        .buffer_unordered(MAX_PROOF_REQUESTS_IN_FLIGHT)
+        .try_collect()
+        .await
 }
📝 Committable suggestion


Suggested change
use futures::{StreamExt, TryStreamExt};
// ... existing code ...
const MAX_ATAS_PER_IX: usize = 8;
const MAX_PDAS_PER_IX: usize = 8;
const MAX_PROOF_REQUESTS_IN_FLIGHT: usize = 16;
// ... existing code ...
async fn fetch_individual_proofs<I: Indexer>(
hashes: &[[u8; 32]],
indexer: &I,
) -> Result<Vec<ValidityProofWithContext>, IndexerError> {
if hashes.is_empty() {
return Ok(vec![]);
}
futures::stream::iter(hashes.iter().copied())
.map(|hash| async move {
indexer
.get_validity_proof(vec![hash], vec![], None)
.await
.map(|response| response.value)
})
.buffer_unordered(MAX_PROOF_REQUESTS_IN_FLIGHT)
.try_collect()
.await
}
// ... existing code ...
async fn fetch_proof_batches<I: Indexer>(
hash_batches: &[Vec<[u8; 32]>],
indexer: &I,
) -> Result<Vec<ValidityProofWithContext>, IndexerError> {
if hash_batches.is_empty() {
return Ok(vec![]);
}
futures::stream::iter(hash_batches.iter().cloned())
.map(|hashes| async move {
indexer
.get_validity_proof(hashes, vec![], None)
.await
.map(|response| response.value)
})
.buffer_unordered(MAX_PROOF_REQUESTS_IN_FLIGHT)
.try_collect()
.await
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/client/src/interface/load_accounts.rs` around lines 252 - 259, The
current try_join_all in fetch_individual_proofs and fetch_proof_batches issues
all RPCs at once; replace those unbounded fans with a bounded concurrent stream
using futures::stream::iter(...) +
TryStreamExt::try_buffer_unordered(CONCURRENCY_LIMIT).try_collect().await so
only CONCURRENCY_LIMIT in-flight get_validity_proof calls run at once; pick a
sane limit (e.g., introduce a constant MAX_CONCURRENT_PROOF_RPCS = 32 or make it
configurable) and keep the existing mapping that calls
indexer.get_validity_proof(...).await.map(|r| r.value) but wrap that in the
stream element (use cloned hashes or iterate over hashes.clone()) and use
try_buffer_unordered/try_collect to preserve Result error propagation; apply the
same change in both fetch_individual_proofs and fetch_proof_batches (referencing
those function names and reuse MAX_PDAS_PER_IX/MAX_ATAS_PER_IX only for
grouping, not for concurrency).

@sergeytimoshin sergeytimoshin force-pushed the sergey/runtime-safety-hardening branch from 1012099 to 0b04dbc Compare March 14, 2026 21:34
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (7)
sdk-libs/client/src/rpc/client.rs (2)

342-344: 🧹 Nitpick | 🔵 Trivial

Use tracing::warn! instead of println! for consistency.

The rest of this file uses tracing::warn for logging (see lines 213, 394, 466). Using println! here bypasses structured logging and won't appear in your observability pipeline. The same applies to line 471.

📝 Suggested fix
                         UiInstruction::Parsed(_) => {
-                            println!("Parsed instructions are not implemented yet");
+                            warn!("Parsed instructions are not implemented yet");
                         }

Apply the same change at line 471.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/client/src/rpc/client.rs` around lines 342 - 344, The match arm for
UiInstruction::Parsed(_) currently uses println! which bypasses structured
logging; replace that println! with tracing::warn!("Parsed instructions are not
implemented yet") and do the same for the other println! usage noted later in
this file; ensure the tracing crate is in scope (use tracing::warn) so the
warnings integrate with the existing tracing-based observability used by the
file.

906-906: ⚠️ Potential issue | 🟠 Major

Use .first() instead of unbounded indexing to handle potential empty event vectors defensively.

At line 906, e[0] will panic if the vector is empty. While the logic in event_from_light_transaction suggests non-empty vectors are returned when Some(...) is present, defensive programming—especially in a PR focused on hardening runtime safety—should avoid relying on this assumption.

Replace the unbounded index with .first() to handle the edge case explicitly:

Suggested fix
-        let event = parsed_event.map(|(e, signature, slot)| (e[0].event.clone(), signature, slot));
+        let event = match parsed_event {
+            Some((e, signature, slot)) => {
+                let first = e.first().ok_or_else(|| {
+                    RpcError::CustomError("No events found in transaction".to_string())
+                })?;
+                Some((first.event.clone(), signature, slot))
+            }
+            None => None,
+        };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/client/src/rpc/client.rs` at line 906, The mapping that constructs
`event` uses `e[0]` which can panic on empty vectors; change the closure over
`parsed_event` to use `e.first()` and propagate the absence safely. Concretely,
replace the `parsed_event.map(|(e, signature, slot)| (e[0].event.clone(),
signature, slot))` logic with an approach that uses `e.first()` (e.g.
`parsed_event.and_then(|(e, signature, slot)| e.first().cloned().map(|ev|
(ev.event.clone(), signature, slot)))`) so empty `e` yields `None` instead of
panicking; update any downstream handling accordingly (this touches the
`parsed_event` -> `event` mapping and the `event_from_light_transaction` flow).
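The `.first()` pattern from the fix above can be sketched in isolation. This is a minimal std-only sketch; `Event` and `first_event` are hypothetical stand-ins for the parsed event type and the mapping in client.rs, not the actual API:

```rust
// Hypothetical stand-in for the parsed event type in client.rs.
#[derive(Clone, Debug, PartialEq)]
struct Event {
    event: String,
}

// Maps an optional (events, signature, slot) tuple to the first event,
// yielding None instead of panicking when the vector is empty.
fn first_event(
    parsed: Option<(Vec<Event>, String, u64)>,
) -> Option<(String, String, u64)> {
    parsed.and_then(|(e, signature, slot)| {
        e.first().map(|ev| (ev.event.clone(), signature, slot))
    })
}

fn main() {
    // Non-empty vector: the first event is returned.
    let some = first_event(Some((
        vec![Event { event: "tx".into() }],
        "sig".into(),
        1,
    )));
    assert_eq!(some, Some(("tx".into(), "sig".into(), 1)));

    // Empty vector: e[0] would panic here; .first() yields None.
    assert_eq!(first_event(Some((vec![], "sig".into(), 1))), None);
}
```

The real code propagates an RpcError instead of returning None, but the panic-free access pattern is the same.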
sdk-libs/program-test/src/indexer/test_indexer.rs (1)

2498-2531: ⚠️ Potential issue | 🟠 Major

Retry loop can spin forever on non-success HTTP responses.

When send() succeeds but status is non-2xx, retries is never decremented, so this can loop indefinitely and hammer the prover.

🛠️ Suggested fix
             let mut retries = 3;
             while retries > 0 {
                 let response_result = client
                     .post(format!("{}{}", SERVER_ADDRESS, PROVE_PATH))
                     .header("Content-Type", "text/plain; charset=utf-8")
                     .body(json_payload.clone())
                     .send()
                     .await;
                 if let Ok(response_result) = response_result {
                     if response_result.status().is_success() {
                         let body = response_result.text().await.unwrap();
                         let proof_json = deserialize_gnark_proof_json(&body).unwrap();
                         let (proof_a, proof_b, proof_c) = proof_from_json_struct(proof_json)
                             .map_err(|e| IndexerError::CustomError(e.to_string()))?;
                         let (proof_a, proof_b, proof_c) =
                             compress_proof(&proof_a, &proof_b, &proof_c)
                                 .map_err(|e| IndexerError::CustomError(e.to_string()))?;
                         return Ok(ValidityProofWithContext {
                             accounts: account_proof_inputs,
                             addresses: address_proof_inputs,
                             proof: CompressedProof {
                                 a: proof_a,
                                 b: proof_b,
                                 c: proof_c,
                             }
                             .into(),
                         });
+                    } else {
+                        tokio::time::sleep(Duration::from_secs(5)).await;
+                        retries -= 1;
                     }
                 } else {
                     println!("Error: {:#?}", response_result);
                     tokio::time::sleep(Duration::from_secs(5)).await;
                     retries -= 1;
                 }
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/program-test/src/indexer/test_indexer.rs` around lines 2498 - 2531,
The retry loop using the mutable retries variable can spin forever when
client.post.send() returns Ok(response_result) but response_result.status() is
non-success because retries is only decremented in the Err branch; modify the
loop in test_indexer.rs so that whenever the response is Ok but
!response_result.status().is_success() you decrement retries and await a backoff
(e.g., tokio::time::sleep(Duration::from_secs(5))). Concretely, inside the while
loop after checking if let Ok(response_result) = response_result, add an else
branch for the non-success status that logs the status, sleeps, and decrements
retries (mirroring the current Err behavior) so both network errors and non-2xx
responses progress toward loop termination.
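The termination property the comment asks for can be shown with a std-only sketch of the control flow: no HTTP, outcomes are scripted, and the branch structure mirrors the suggested fix (decrement on both transport errors and non-2xx responses):

```rust
// Simulated outcome of one HTTP attempt: Err = transport error,
// Ok(status) = a response carrying the given HTTP status code.
type Attempt = Result<u16, &'static str>;

// Runs the retry loop over scripted outcomes. Returns Some(status) on a
// 2xx response, or None once retries are exhausted. The key point from
// the review: retries is decremented on *both* failure paths, so a
// server that keeps answering 500 cannot make the loop spin forever.
fn run_with_retries(outcomes: &[Attempt], mut retries: u32) -> Option<u16> {
    let mut i = 0;
    while retries > 0 {
        let attempt = outcomes.get(i).copied().unwrap_or(Err("no more outcomes"));
        i += 1;
        match attempt {
            Ok(status) if (200..300).contains(&status) => return Some(status),
            Ok(_non_2xx) => {
                // Previously this branch neither slept nor decremented,
                // so non-success responses looped forever.
                retries -= 1;
            }
            Err(_transport_error) => {
                retries -= 1;
            }
        }
    }
    None
}

fn main() {
    // Transport error, then 500, then success: two retries consumed.
    assert_eq!(
        run_with_retries(&[Err("conn refused"), Ok(500), Ok(200)], 3),
        Some(200)
    );
    // Nothing but 500s: the loop now terminates instead of spinning.
    assert_eq!(run_with_retries(&[Ok(500), Ok(500), Ok(500)], 3), None);
}
```

In the real code each failure branch also sleeps before retrying; the sleep is omitted here to keep the sketch synchronous.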
sdk-libs/client/src/interface/initialize_config.rs (1)

120-122: 🧹 Nitpick | 🔵 Trivial

Consider using Discriminator trait instead of hardcoded bytes.

The coding guidelines recommend using the anchor_lang::Discriminator trait for instruction discriminators. While this is pre-existing code, if you're touching this area anyway:

-        // Anchor discriminator for "initialize_compression_config"
-        // SHA256("global:initialize_compression_config")[..8]
-        const DISCRIMINATOR: [u8; 8] = [133, 228, 12, 169, 56, 76, 222, 61];
+        // Use Anchor's Discriminator trait when the anchor feature is enabled
+        #[cfg(feature = "anchor")]
+        let discriminator = <YourInstructionType as anchor_lang::Discriminator>::DISCRIMINATOR;

This would make the code more maintainable if the instruction signature changes. As per coding guidelines: "Use anchor_lang::Discriminator trait to get 8-byte instruction discriminators from Anchor programs."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/client/src/interface/initialize_config.rs` around lines 120 - 122,
Replace the hardcoded DISCRIMINATOR bytes for "initialize_compression_config"
with Anchor's Discriminator helper: remove the const DISCRIMINATOR and obtain
the 8-byte discriminator at runtime from the instruction type using the
anchor_lang::Discriminator implementation for the corresponding instruction
struct (e.g., the InitializeCompressionConfig instruction struct) and take the
first 8 bytes of that discriminator; update any usages that referenced
DISCRIMINATOR to call the Discriminator::discriminator method on the instruction
type instead so changes to the instruction signature automatically produce the
correct bytes.
forester/src/forester_status.rs (1)

674-685: ⚠️ Potential issue | 🟡 Minor

Queue parse failures are silently masked as empty queues.

At Line 674 and Line 729, deserialization errors are swallowed with .ok().unwrap_or((0, 0)), which can misreport a broken queue as “0 pending / 0 capacity”. Please at least emit a warning when parsing fails (or surface None stats) to avoid false operational signals.

Suggested patch (apply to both StateV1 and AddressV1 branches)
-                    parse_hash_set_from_bytes::<QueueAccount>(&acc.data)
-                        .ok()
-                        .map(|hs| {
-                            let len = hs
-                                .iter()
-                                .filter(|(_, cell)| cell.sequence_number.is_none())
-                                .count() as u64;
-                            let cap = hs.get_capacity() as u64;
-                            (len, cap)
-                        })
-                        .unwrap_or((0, 0))
+                    match parse_hash_set_from_bytes::<QueueAccount>(&acc.data) {
+                        Ok(hs) => {
+                            let len = hs
+                                .iter()
+                                .filter(|(_, cell)| cell.sequence_number.is_none())
+                                .count() as u64;
+                            let cap = hs.get_capacity() as u64;
+                            (len, cap)
+                        }
+                        Err(e) => {
+                            warn!("Failed to parse queue account {}: {:?}", tree.queue, e);
+                            (0, 0)
+                        }
+                    }

Also applies to: 729-740

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@forester/src/forester_status.rs` around lines 674 - 685, The code currently
swallows deserialization errors from
parse_hash_set_from_bytes::<QueueAccount>(&acc.data) by using
.ok().unwrap_or((0,0)), causing broken queues to be reported as 0/0; change this
to detect Err and emit a warning (e.g. tracing::warn! or process logger)
including the error and account identity, or return None/Option for the stats so
callers can treat parse failures explicitly; apply the same change in both
StateV1 and AddressV1 branches where parse_hash_set_from_bytes and the
(.ok().map(...).unwrap_or((0,0))) pattern is used to compute (len, cap),
replacing the silent unwrap with explicit error logging or Option propagation.
prover/client/src/errors.rs (1)

6-6: ⚠️ Potential issue | 🟠 Major

This widens a public exhaustive enum, creating a breaking API change that needs explicit release communication.

ProverClientError is public and lacks #[non_exhaustive], so adding variants (IntegerConversion, JsonSerialization, ProcessStart, ProcessWait, ProjectRootNotFound, HealthCheckFailed) breaks exhaustive downstream match statements. Combined with the changed return types (e.g., to_string() now returns Result<String, ProverClientError>), this is a significant API shift.

While the version bump to 8.0.0 reflects major release intent, there's no CHANGELOG, README, or migration guide in prover/client/ documenting these breaking changes. Before release, add explicit release notes or migration guidance so downstream users understand what changed and how to adapt.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prover/client/src/errors.rs` at line 6, Public enum ProverClientError was
widened which breaks downstream exhaustive matches; either restore backwards
compatibility or mark the enum non-exhaustive and document the breaking changes:
add #[non_exhaustive] to ProverClientError (so consumers must add a fallback
arm), or revert the new variants until you can coordinate a major release, and
create clear release notes/migration guide in prover/client/ (CHANGELOG, README)
describing the new variants (IntegerConversion, JsonSerialization, ProcessStart,
ProcessWait, ProjectRootNotFound, HealthCheckFailed) and the changed return
types (e.g., to_string() -> Result<String, ProverClientError>) with examples of
how callers should update match statements and error handling.
prover/client/src/proof_types/batch_update/proof_inputs.rs (1)

122-132: ⚠️ Potential issue | 🔴 Critical

Validate all batch vector lengths before computing the updated root.

This function zips leaves, merkle_proofs, and path_indices, but Line 182 still indexes tx_hashes[i] and Lines 199-220 serialize the full vectors afterward. A short tx_hashes panics; short proofs or path indices silently compute a root for only a prefix of the batch.

Proposed fix
 pub fn get_batch_update_inputs<const HEIGHT: usize>(
     current_root: [u8; 32],
     tx_hashes: Vec<[u8; 32]>,
     leaves: Vec<[u8; 32]>,
@@
     batch_size: u32,
     previous_changelogs: &[ChangelogEntry<HEIGHT>],
 ) -> Result<(BatchUpdateCircuitInputs, Vec<ChangelogEntry<HEIGHT>>), ProverClientError> {
+    let expected = batch_size as usize;
+    if leaves.len() != expected
+        || old_leaves.len() != expected
+        || merkle_proofs.len() != expected
+        || tx_hashes.len() != expected
+        || path_indices.len() != expected
+    {
+        return Err(ProverClientError::GenericError(format!(
+            "Input vector length mismatch: leaves={}, old_leaves={}, merkle_proofs={}, tx_hashes={}, path_indices={}, expected batch_size={}",
+            leaves.len(),
+            old_leaves.len(),
+            merkle_proofs.len(),
+            tx_hashes.len(),
+            path_indices.len(),
+            batch_size,
+        )));
+    }
+
     let mut new_root = [0u8; 32];

Based on learnings: Batch nullify ZKP verification must validate that public_input_hash equals Hash([old_root, new_root, leaves_hash_chain]), verify nullifiers are correctly computed as Hash(Leaves[i], PathIndices[i], TxHashes[i]) for each position in batch.

Also applies to: 139-184, 199-220

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prover/client/src/proof_types/batch_update/proof_inputs.rs` around lines 122
- 132, The function get_batch_update_inputs must validate that all batch vectors
(tx_hashes, leaves, merkle_proofs, path_indices, old_leaves) have lengths
consistent with each other and with batch_size before any indexing or computing
the updated root; add an early check (and return an error) ensuring
tx_hashes.len() == leaves.len() == merkle_proofs.len() == path_indices.len() ==
old_leaves.len() and that these equal batch_size as usize (or at least are >=
batch_size) so subsequent zips/indices cannot panic or silently truncate. After
validation, compute nullifiers per position using the exact relationship
Hash(leaves[i], path_indices[i], tx_hashes[i]) and only then serialize/return
the full vectors (remove any partial-prefix logic). Also ensure the
public_input_hash computation uses Hash([old_root, new_root, leaves_hashchain])
after the validated new_root is produced. Reference get_batch_update_inputs,
BatchUpdateCircuitInputs, and ChangelogEntry<HEIGHT> when making these changes.
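The early length guard proposed above can be factored into a small helper. This is a std-only sketch with simplified types and a String error; the real function takes more parameters and returns ProverClientError:

```rust
// Validates that every batch vector matches batch_size before any
// indexing, so a short tx_hashes cannot panic and short proofs cannot
// silently compute a root for only a prefix of the batch.
fn validate_batch_lengths(
    tx_hashes: &[[u8; 32]],
    leaves: &[[u8; 32]],
    old_leaves: &[[u8; 32]],
    merkle_proofs: &[Vec<[u8; 32]>],
    path_indices: &[u32],
    batch_size: u32,
) -> Result<(), String> {
    let expected = batch_size as usize;
    let lens = [
        tx_hashes.len(),
        leaves.len(),
        old_leaves.len(),
        merkle_proofs.len(),
        path_indices.len(),
    ];
    if lens.iter().any(|&l| l != expected) {
        return Err(format!(
            "input vector length mismatch: {:?}, expected batch_size={}",
            lens, batch_size
        ));
    }
    Ok(())
}

fn main() {
    let h = [[0u8; 32]; 2];
    let proofs = vec![vec![[0u8; 32]]; 2];
    assert!(validate_batch_lengths(&h, &h, &h, &proofs, &[0, 1], 2).is_ok());
    // A short tx_hashes is rejected up front instead of panicking later
    // at tx_hashes[i].
    assert!(validate_batch_lengths(&h[..1], &h, &h, &proofs, &[0, 1], 2).is_err());
}
```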
♻️ Duplicate comments (7)
sdk-libs/program-test/src/indexer/test_indexer.rs (1)

2508-2514: ⚠️ Potential issue | 🔴 Critical

Unresolved panic path remains in proof response parsing (Line 2508, Line 2509).

unwrap() is still used on response body extraction and GNARK JSON deserialization; malformed/proxy/error payloads will still panic here.

♻️ Suggested fix
-                        let body = response_result.text().await.unwrap();
-                        let proof_json = deserialize_gnark_proof_json(&body).unwrap();
+                        let body = response_result
+                            .text()
+                            .await
+                            .map_err(|e| IndexerError::CustomError(e.to_string()))?;
+                        let proof_json = deserialize_gnark_proof_json(&body)
+                            .map_err(|e| IndexerError::CustomError(e.to_string()))?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/program-test/src/indexer/test_indexer.rs` around lines 2508 - 2514,
The code currently uses unwrap() on response_result.text().await and
deserialize_gnark_proof_json(&body) which can panic on malformed or error
payloads; replace these unwraps with proper error propagation by mapping
failures into IndexerError (e.g., use .await.map_err(|e|
IndexerError::CustomError(e.to_string())) for response_result.text(), and change
deserialize_gnark_proof_json(&body).unwrap() to
deserialize_gnark_proof_json(&body).map_err(|e|
IndexerError::CustomError(e.to_string())) so that the subsequent
proof_from_json_struct and compress_proof error handling remains consistent.
sdk-libs/client/src/local_test_validator.rs (1)

69-105: ⚠️ Potential issue | 🟠 Major

Replace sh -c command-string execution with argument-vector-based async process spawning.

Line 69–105 builds one shell command string from dynamic values, then Line 113–115 executes it via sh -c; this is injection-prone and breaks on shell-special characters. Also, Line 120 uses blocking child.wait()? inside an async function, which can stall Tokio workers.

🔧 Proposed refactor (safe args + non-blocking wait)
 use std::{
     io,
-    process::{Command, Stdio},
+    process::Stdio,
 };
+use tokio::process::Command;

@@
-    let path = "cli/test_bin/run test-validator";
-    let mut path = format!("{}/{}", project_root.trim(), path);
+    let cli_path = format!("{}/cli/test_bin/run", project_root.trim());
+    let mut args = vec!["test-validator".to_string()];

     if !config.enable_indexer {
-        path.push_str(" --skip-indexer");
+        args.push("--skip-indexer".to_string());
     }

     if let Some(limit_ledger_size) = config.limit_ledger_size {
-        path.push_str(&format!(" --limit-ledger-size {}", limit_ledger_size));
+        args.push("--limit-ledger-size".to_string());
+        args.push(limit_ledger_size.to_string());
     }

     for sbf_program in config.sbf_programs.iter() {
-        path.push_str(&format!(
-            " --sbf-program {} {}",
-            sbf_program.0, sbf_program.1
-        ));
+        args.push("--sbf-program".to_string());
+        args.push(sbf_program.0.clone());
+        args.push(sbf_program.1.clone());
     }

     for upgradeable_program in config.upgradeable_programs.iter() {
-        path.push_str(&format!(
-            " --upgradeable-program {} {} {}",
-            upgradeable_program.program_id,
-            upgradeable_program.program_path,
-            upgradeable_program.upgrade_authority
-        ));
+        args.push("--upgradeable-program".to_string());
+        args.push(upgradeable_program.program_id.clone());
+        args.push(upgradeable_program.program_path.clone());
+        args.push(upgradeable_program.upgrade_authority.clone());
     }

     if !config.enable_prover {
-        path.push_str(" --skip-prover");
+        args.push("--skip-prover".to_string());
     }

     if config.use_surfpool {
-        path.push_str(" --use-surfpool");
+        args.push("--use-surfpool".to_string());
     }

     for arg in config.validator_args.iter() {
-        path.push_str(&format!(" {}", arg));
+        args.push(arg.clone());
     }

-    println!("Starting validator with command: {}", path);
+    println!("Starting validator: {} {}", cli_path, args.join(" "));

@@
-        let mut child = Command::new("sh")
-            .arg("-c")
-            .arg(path)
+        let mut child = Command::new(&cli_path)
+            .args(&args)
             .stdin(Stdio::null())
             .stdout(Stdio::inherit())
             .stderr(Stdio::inherit())
             .spawn()?;
-        let status = child.wait()?;
+        let status = child.wait().await?;

#!/bin/bash
# Verify no shell-string execution and no blocking wait in async function.
rg -n --type=rust 'Command::new\("sh"\)|\.arg\("-c"\)|child\.wait\(\)\?' sdk-libs/client/src/local_test_validator.rs

As per coding guidelines: "Use async functions with #[tokio::main] for asynchronous Rust code in the Light Protocol client library".

Also applies to: 113-121

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/client/src/local_test_validator.rs` around lines 69 - 105, The code
builds a single shell command string in the variable path using config flags
(enable_indexer, limit_ledger_size, sbf_programs, upgradeable_programs,
enable_prover, use_surfpool, validator_args) and later executes it with sh -c
and uses blocking child.wait() inside an async context; change this to construct
a Vec<String>/Vec<&str> of argv entries (program path as the command and each
flag/value as separate args using the same config fields and
sbf_program/program_id/program_path/upgrade_authority values), spawn the process
with tokio::process::Command
(Command::new(program_path).args(&args).spawn().await?), and replace the
blocking child.wait()? usage with the async child.wait().await (or child.await)
to avoid shell injection and blocking the Tokio reactor. Ensure each sbf_program
and upgradeable_program adds separate args (no shell interpolation) and pass
validator_args elements as individual args.
prover/client/src/proof.rs (2)

167-170: ⚠️ Potential issue | 🟠 Major

Validate the G1 point before negating it.

Line 169 still deserializes prover-supplied bytes with Validate::No, so off-curve or wrong-subgroup points can pass through negate_g1.

Proposed fix
-    let g1: G1 = G1::deserialize_with_mode(g1_le.as_slice(), Compress::No, Validate::No)
+    let g1: G1 = G1::deserialize_with_mode(g1_le.as_slice(), Compress::No, Validate::Yes)
         .map_err(|e| ProverClientError::InvalidProofData(e.to_string()))?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prover/client/src/proof.rs` around lines 167 - 170, The function negate_g1
currently deserializes prover bytes using G1::deserialize_with_mode(...,
Validate::No) which allows off-curve or wrong-subgroup points; change the
deserialization call in negate_g1 to use a validating mode (e.g., Validate::Yes)
so the G1 point is validated during deserialization before any negation, and
propagate the validation error via ProverClientError::InvalidProofData as
already done.

72-89: ⚠️ Potential issue | 🟠 Major

Strip at most one 0x prefix when decoding proof coordinates.

Line 73 still uses trim_start_matches("0x"), which strips repeated 0x prefixes. Malformed input like 0x0x01 becomes 01 and parses successfully instead of failing.

Proposed fix
 pub fn deserialize_hex_string_to_be_bytes(hex_str: &str) -> Result<[u8; 32], ProverClientError> {
-    let trimmed_str = hex_str.trim_start_matches("0x");
-    let big_int = num_bigint::BigInt::from_str_radix(trimmed_str, 16)
+    let trimmed_str = hex_str.strip_prefix("0x").unwrap_or(hex_str);
+    let big_uint = num_bigint::BigUint::from_str_radix(trimmed_str, 16)
         .map_err(|e| ProverClientError::InvalidHexString(format!("{} ({})", hex_str, e)))?;
-    let big_int_bytes = big_int.to_bytes_be().1;
-    if big_int_bytes.len() < 32 {
+    let big_uint_bytes = big_uint.to_bytes_be();
+    if big_uint_bytes.len() < 32 {
         let mut result = [0u8; 32];
-        result[32 - big_int_bytes.len()..].copy_from_slice(&big_int_bytes);
+        result[32 - big_uint_bytes.len()..].copy_from_slice(&big_uint_bytes);
         Ok(result)
     } else {
-        let len = big_int_bytes.len();
-        big_int_bytes.try_into().map_err(|_| {
+        let len = big_uint_bytes.len();
+        big_uint_bytes.try_into().map_err(|_| {
             ProverClientError::InvalidHexString(format!(
                 "expected at most 32 bytes, got {} for {}",
                 len, hex_str
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prover/client/src/proof.rs` around lines 72 - 89, The function
deserialize_hex_string_to_be_bytes currently uses trim_start_matches("0x"),
which removes every leading "0x" and lets malformed inputs like "0x0x01" parse;
change the logic to strip at most one "0x" prefix by replacing
trim_start_matches with a single-strip approach (e.g. use
strip_prefix("0x").unwrap_or(hex_str) or equivalent) when computing trimmed_str
in deserialize_hex_string_to_be_bytes so only the first "0x" is removed and
inputs with repeated prefixes fail to parse.
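The difference between the two prefix-stripping approaches is easy to demonstrate in a std-only sketch; `strip_hex_prefix` and `is_valid_hex` are illustrative helpers, not the actual functions in proof.rs:

```rust
// strip_prefix removes at most one "0x", so a doubled prefix leaves a
// leading "0x" behind and the string then fails hex-digit validation.
fn strip_hex_prefix(s: &str) -> &str {
    s.strip_prefix("0x").unwrap_or(s)
}

fn is_valid_hex(s: &str) -> bool {
    let t = strip_hex_prefix(s);
    !t.is_empty() && t.chars().all(|c| c.is_ascii_hexdigit())
}

fn main() {
    assert_eq!(strip_hex_prefix("0x01"), "01");
    assert_eq!(strip_hex_prefix("01"), "01");
    // trim_start_matches("0x") would turn "0x0x01" into "01" and parse
    // it; strip_prefix leaves "0x01", where 'x' is not a hex digit.
    assert_eq!(strip_hex_prefix("0x0x01"), "0x01");
    assert!(is_valid_hex("0xdeadbeef"));
    assert!(!is_valid_hex("0x0x01"));
}
```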
prover/client/src/prover.rs (2)

57-65: ⚠️ Potential issue | 🟠 Major

Check the child exit status and HTTP status before returning healthy.

Line 58 only waits for the child process; it never verifies output.status.success(). Line 84 treats any Ok(Response) as healthy, including 4xx/5xx responses. That can make spawn_prover() return Ok(()) after a failed startup or a failing health endpoint.

Proposed fix
-        command
-            .wait_with_output()
-            .map_err(|error| ProverClientError::ProcessWait(error.to_string()))?;
+        let output = command
+            .wait_with_output()
+            .map_err(|error| ProverClientError::ProcessWait(error.to_string()))?;
+
+        if !output.status.success() {
+            return Err(ProverClientError::ProcessWait(format!(
+                "start-prover exited with status {}: {}",
+                output.status,
+                String::from_utf8_lossy(&output.stderr),
+            )));
+        }
@@
-            Ok(_) => {
+            Ok(response) if response.status().is_success() => {
                 result = true;
                 break;
             }
-            Err(_) => {
+            Ok(_) | Err(_) => {
                 sleep(Duration::from_secs(timeout as u64));
             }

Also applies to: 79-85

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prover/client/src/prover.rs` around lines 57 - 65, spawn_prover currently
waits for the child via command.wait_with_output() but doesn't verify the
child's exit status, and health_check treats any Ok(Response) as success; update
spawn_prover to inspect the output.status.success() from
command.wait_with_output() and return an error (ProverClientError::ProcessExit
or similar) if the process failed, and update health_check to examine the HTTP
response status (use response.status().is_success()) rather than just Ok/Err so
non-2xx responses are considered failures; key symbols to modify:
command.wait_with_output(), spawn_prover(), and health_check() (check Response
handling to require status.is_success()) ensuring callers only return Ok(())
when both the process exit is successful and health_check reports a 2xx status.

1-5: ⚠️ Potential issue | 🟠 Major

Use Tokio process/sleep APIs in this async startup path.

Lines 58 and 89 still use blocking std APIs inside async functions. With the 120-second retry window, a stalled prover startup can pin Tokio worker threads for a long time.

Proposed fix
 use std::{
-    process::Command,
     sync::atomic::{AtomicBool, Ordering},
-    thread::sleep,
     time::Duration,
 };
 
+use tokio::{process::Command, time::sleep};
+
@@
-        command
-            .wait_with_output()
-            .map_err(|error| ProverClientError::ProcessWait(error.to_string()))?;
+        command
+            .wait_with_output()
+            .await
+            .map_err(|error| ProverClientError::ProcessWait(error.to_string()))?;
@@
-                sleep(Duration::from_secs(timeout as u64));
+                sleep(Duration::from_secs(timeout as u64)).await;

Also applies to: 51-59, 75-90

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prover/client/src/prover.rs` around lines 1 - 5, The startup path is using
blocking std APIs (std::process::Command and thread::sleep) inside async code
which can block Tokio workers; replace uses of std::process::Command with
tokio::process::Command and call .output().await (or spawn and await) instead of
blocking output(), and replace thread::sleep/Duration with
tokio::time::sleep(Duration::from_secs(...)). Update any loops that poll with
AtomicBool to use async-friendly waits (await tokio::time::sleep) so the
120-second retry loop yields to the runtime; ensure you import
tokio::process::Command and tokio::time::sleep and change any synchronous calls
at the sites referencing Command and sleep (around the reported locations) to
their async equivalents and .await them.
prover/client/src/proof_types/batch_update/proof_inputs.rs (1)

115-117: ⚠️ Potential issue | 🔴 Critical

Return every batch public input instead of self.0[0].

Line 117 still panics on empty input and silently drops every BatchUpdateCircuitInputs after the first, even though this API returns a Vec<[u8; 32]>.

Proposed fix
 impl BatchUpdateInputs<'_> {
     pub fn public_inputs(&self) -> Result<Vec<[u8; 32]>, ProverClientError> {
-        // Concatenate all public inputs into a single flat vector
-        Ok(vec![self.0[0].public_inputs_arr()?])
+        if self.0.is_empty() {
+            return Err(ProverClientError::GenericError(
+                "batch update inputs cannot be empty".to_string(),
+            ));
+        }
+
+        self.0
+            .iter()
+            .map(BatchUpdateCircuitInputs::public_inputs_arr)
+            .collect()
     }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prover/client/src/proof_types/batch_update/proof_inputs.rs` around lines 115
- 117, The current public_inputs method only returns the first element
(self.0[0]) and will panic on an empty vector; change it to iterate over all
BatchUpdateCircuitInputs in self.0, call public_inputs_arr() on each, and
collect the results into a Vec<[u8; 32]> (propagating any error returned by
public_inputs_arr) instead of indexing [0]; also handle the empty-case by
returning an empty Vec rather than panicking. Use the public_inputs function,
the BatchUpdateCircuitInputs entries (self.0) and the public_inputs_arr() method
as the points to modify.
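The collect-over-all-elements shape described above can be sketched with simplified stand-in types (a `u8` payload and a String error in place of the real circuit inputs and ProverClientError):

```rust
// Hypothetical stand-in for BatchUpdateCircuitInputs: each element can
// fallibly produce one 32-byte public-input array.
struct CircuitInputs {
    value: u8,
}

impl CircuitInputs {
    fn public_inputs_arr(&self) -> Result<[u8; 32], String> {
        Ok([self.value; 32])
    }
}

struct BatchUpdateInputs(Vec<CircuitInputs>);

impl BatchUpdateInputs {
    // Iterates over every element instead of indexing [0]: no panic on
    // empty input, and no silently dropped batches after the first.
    // collect() on an iterator of Results short-circuits on the first Err.
    fn public_inputs(&self) -> Result<Vec<[u8; 32]>, String> {
        self.0.iter().map(CircuitInputs::public_inputs_arr).collect()
    }
}

fn main() {
    let batches = BatchUpdateInputs(vec![
        CircuitInputs { value: 1 },
        CircuitInputs { value: 2 },
    ]);
    let inputs = batches.public_inputs().unwrap();
    assert_eq!(inputs.len(), 2);
    assert_eq!(inputs[1], [2u8; 32]);
    // Empty input yields Ok(vec![]) rather than an index panic.
    assert!(BatchUpdateInputs(vec![]).public_inputs().unwrap().is_empty());
}
```

Whether the empty case should be Ok(vec![]) (as here) or an explicit error (as in the suggested fix) is a policy choice for the callers.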
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@prover/client/src/proof_types/batch_append/proof_inputs.rs`:
- Around line 180-186: Before iterating with zip, validate that
old_leaves.len(), leaves.len(), and merkle_proofs.len() all equal batch_size and
return a hard error if any mismatch; specifically add a guard near the start of
the function that checks these lengths and returns
ProverClientError::InvalidProofData (or another appropriate error) when they
differ so you never silently build a partial batch (this covers the loop that
consumes old_leaves, leaves, and merkle_proofs and the subsequent
merkle_proof.as_slice().try_into() conversion).

In `@sdk-libs/client/src/indexer/types/queue.rs`:
- Around line 159-210: The method reconstruct_proof_with_lookup currently
redundantly calls validate_proof_height::<HEIGHT>() even though its only caller,
reconstruct_proofs, already performs that check; remove the
validate_proof_height::<HEIGHT>() invocation and its error propagation from
reconstruct_proof_with_lookup so reconstruct_proofs remains the single
validation point (leave reconstruct_proofs and validate_proof_height unchanged),
ensuring reconstruct_proof_with_lookup assumes the caller has validated the
height (or add a brief comment noting that responsibility).

In `@sdk-libs/client/src/interface/instructions.rs`:
- Line 104: Unify the builder return type by introducing a single module-level
alias or error enum (e.g., type BuilderResult = Result<Instruction, Box<dyn
std::error::Error + Send + Sync>> or a custom InstructionBuildError) and replace
the mixed signatures (the functions currently returning
std::io::Result<Instruction> and those returning Result<Instruction, Box<dyn
std::error::Error>>) with that alias; update all builder function signatures and
their internal error returns to use BuilderResult and adjust imports/usages
accordingly so callers only handle one consistent error type for Instruction
construction.

In `@sdk-libs/client/src/interface/load_accounts.rs`:
- Line 334: Remove the redundant declaration that recomputes program_id (the
line using specs.first().map(|s| s.program_id()).unwrap_or_default()); instead
reuse the existing program_id binding already declared earlier in the same
scope; delete this duplicate statement so there is only a single program_id
binding referenced by the surrounding code (e.g., the later uses in
load_accounts-related logic).

---

Outside diff comments:
In `@forester/src/forester_status.rs`:
- Around line 674-685: The code currently swallows deserialization errors from
parse_hash_set_from_bytes::<QueueAccount>(&acc.data) by using
.ok().unwrap_or((0,0)), causing broken queues to be reported as 0/0; change this
to detect Err and emit a warning (e.g. tracing::warn! or process logger)
including the error and account identity, or return None/Option for the stats so
callers can treat parse failures explicitly; apply the same change in both
StateV1 and AddressV1 branches where parse_hash_set_from_bytes and the
(.ok().map(...).unwrap_or((0,0))) pattern is used to compute (len, cap),
replacing the silent unwrap with explicit error logging or Option propagation.
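The Option-propagation variant can be sketched without the forester types; `queue_stats` and the `Result` input are stand-ins for the `parse_hash_set_from_bytes` call:

```rust
fn queue_stats(
    parsed: Result<(usize, usize), String>,
    account: &str,
) -> Option<(usize, usize)> {
    match parsed {
        Ok(stats) => Some(stats),
        Err(e) => {
            // Surface the failure instead of silently reporting 0/0
            // (the real code would use tracing::warn! here).
            eprintln!("warning: failed to parse queue account {account}: {e}");
            None
        }
    }
}

fn main() {
    // A healthy queue reports its real (len, cap).
    assert_eq!(queue_stats(Ok((3, 8)), "queue1"), Some((3, 8)));
    // A broken queue is reported as None, not as 0/0.
    assert_eq!(queue_stats(Err("bad bytes".to_string()), "queue2"), None);
}
```

Callers can then render `None` distinctly (e.g. `"?"`) so broken queues are visible in status output.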

In `@prover/client/src/errors.rs`:
- Line 6: Public enum ProverClientError was widened which breaks downstream
exhaustive matches; either restore backwards compatibility or mark the enum
non-exhaustive and document the breaking changes: add #[non_exhaustive] to
ProverClientError (so consumers must add a fallback arm), or revert the new
variants until you can coordinate a major release, and create clear release
notes/migration guide in prover/client/ (CHANGELOG, README) describing the new
variants (IntegerConversion, JsonSerialization, ProcessStart, ProcessWait,
ProjectRootNotFound, HealthCheckFailed) and the changed return types (e.g.,
to_string() -> Result<String, ProverClientError>) with examples of how callers
should update match statements and error handling.
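The `#[non_exhaustive]` option can be shown in miniature; the variants below are a reduced stand-in for the real enum:

```rust
// Adding #[non_exhaustive] forces downstream crates to keep a fallback arm,
// so later variants like IntegerConversion no longer break exhaustive matches.
#[non_exhaustive]
#[derive(Debug)]
enum ProverClientError {
    InvalidProofData(String),
    IntegerConversion(String),
}

fn describe(err: &ProverClientError) -> &'static str {
    match err {
        ProverClientError::InvalidProofData(_) => "bad proof data",
        ProverClientError::IntegerConversion(_) => "int conversion",
        // Downstream crates must keep this arm once the enum is #[non_exhaustive];
        // inside the defining crate it is merely unreachable.
        _ => "other",
    }
}

fn main() {
    assert_eq!(
        describe(&ProverClientError::IntegerConversion("u64".into())),
        "int conversion"
    );
    assert_eq!(
        describe(&ProverClientError::InvalidProofData("x".into())),
        "bad proof data"
    );
}
```

Note `#[non_exhaustive]` only constrains *other* crates; the defining crate can still match exhaustively.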

In `@prover/client/src/proof_types/batch_update/proof_inputs.rs`:
- Around line 122-132: The function get_batch_update_inputs must validate that
all batch vectors (tx_hashes, leaves, merkle_proofs, path_indices, old_leaves)
have lengths consistent with each other and with batch_size before any indexing
or computing the updated root; add an early check (and return an error) ensuring
tx_hashes.len() == leaves.len() == merkle_proofs.len() == path_indices.len() ==
old_leaves.len() and that these equal batch_size as usize (or at least are >=
batch_size) so subsequent zips/indices cannot panic or silently truncate. After
validation, compute nullifiers per position using the exact relationship
Hash(leaves[i], path_indices[i], tx_hashes[i]) and only then serialize/return
the full vectors (remove any partial-prefix logic). Also ensure the
public_input_hash computation uses Hash([old_root, new_root, leaves_hashchain])
after the validated new_root is produced. Reference get_batch_update_inputs,
BatchUpdateCircuitInputs, and ChangelogEntry<HEIGHT> when making these changes.
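The early length check can be factored into a small helper; this is a std-only sketch with a hypothetical error type, not the repository's actual signature:

```rust
#[derive(Debug, PartialEq)]
enum ProofInputError {
    LengthMismatch {
        name: &'static str,
        expected: usize,
        got: usize,
    },
}

// Hard-fail if any named batch vector's length differs from batch_size,
// so later zips/indexing can neither panic nor silently truncate.
fn validate_batch_lengths(
    batch_size: usize,
    lens: &[(&'static str, usize)],
) -> Result<(), ProofInputError> {
    for &(name, got) in lens {
        if got != batch_size {
            return Err(ProofInputError::LengthMismatch {
                name,
                expected: batch_size,
                got,
            });
        }
    }
    Ok(())
}

fn main() {
    assert!(validate_batch_lengths(2, &[("leaves", 2), ("merkle_proofs", 2)]).is_ok());
    assert_eq!(
        validate_batch_lengths(2, &[("leaves", 1)]),
        Err(ProofInputError::LengthMismatch {
            name: "leaves",
            expected: 2,
            got: 1
        })
    );
}
```

In `get_batch_update_inputs` the helper would be called once, up front, with all five vectors.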

In `@sdk-libs/client/src/interface/initialize_config.rs`:
- Around line 120-122: Replace the hardcoded DISCRIMINATOR bytes for
"initialize_compression_config" with Anchor's Discriminator helper: remove the
const DISCRIMINATOR and obtain the 8-byte discriminator at runtime from the
instruction type using the anchor_lang::Discriminator implementation for the
corresponding instruction struct (e.g., the InitializeCompressionConfig
instruction struct) and take the first 8 bytes of that discriminator; update any
usages that referenced DISCRIMINATOR to call the Discriminator::discriminator
method on the instruction type instead so changes to the instruction signature
automatically produce the correct bytes.

In `@sdk-libs/client/src/rpc/client.rs`:
- Around line 342-344: The match arm for UiInstruction::Parsed(_) currently uses
println! which bypasses structured logging; replace that println! with
tracing::warn!("Parsed instructions are not implemented yet") and do the same
for the other println! usage noted later in this file; ensure the tracing crate
is in scope (use tracing::warn) so the warnings integrate with the existing
tracing-based observability used by the file.
- Line 906: The mapping that constructs `event` uses `e[0]` which can panic on
empty vectors; change the closure over `parsed_event` to use `e.first()` and
propagate the absence safely. Concretely, replace the `parsed_event.map(|(e,
signature, slot)| (e[0].event.clone(), signature, slot))` logic with an approach
that uses `e.first()` (e.g. `parsed_event.and_then(|(e, signature, slot)|
e.first().cloned().map(|ev| (ev.event.clone(), signature, slot)))`) so empty `e`
yields `None` instead of panicking; update any downstream handling accordingly
(this touches the `parsed_event` -> `event` mapping and the
`event_from_light_transaction` flow).
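The `e.first()` rewrite can be demonstrated in isolation; `ParsedEvent` is a stand-in for the real event type:

```rust
#[derive(Clone, Debug, PartialEq)]
struct ParsedEvent {
    event: String,
}

fn first_event(
    parsed: Option<(Vec<ParsedEvent>, String, u64)>,
) -> Option<(String, String, u64)> {
    // `e.first()` returns None on an empty vector instead of panicking like `e[0]`.
    parsed.and_then(|(e, signature, slot)| {
        e.first().map(|ev| (ev.event.clone(), signature, slot))
    })
}

fn main() {
    let some = first_event(Some((
        vec![ParsedEvent { event: "tx".to_string() }],
        "sig".to_string(),
        7,
    )));
    assert_eq!(some, Some(("tx".to_string(), "sig".to_string(), 7)));
    // An empty event vector yields None instead of an index panic.
    assert_eq!(first_event(Some((vec![], "sig".to_string(), 7))), None);
}
```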

In `@sdk-libs/program-test/src/indexer/test_indexer.rs`:
- Around line 2498-2531: The retry loop using the mutable retries variable can
spin forever when client.post.send() returns Ok(response_result) but
response_result.status() is non-success because retries is only decremented in
the Err branch; modify the loop in test_indexer.rs so that whenever the response
is Ok but !response_result.status().is_success() you decrement retries and await
a backoff (e.g., tokio::time::sleep(Duration::from_secs(5))). Concretely, inside
the while loop after checking if let Ok(response_result) = response_result, add
an else branch for the non-success status that logs the status, sleeps, and
decrements retries (mirroring the current Err behavior) so both network errors
and non-2xx responses progress toward loop termination.
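The termination property can be sketched with a synchronous stand-in (the real loop is async and would `tokio::time::sleep` between attempts). Responses are modeled as `Ok(status)` or `Err(network error)`:

```rust
fn run_with_retries(
    mut responses: impl Iterator<Item = Result<u16, &'static str>>,
    mut retries: u32,
) -> Option<u16> {
    while retries > 0 {
        match responses.next() {
            Some(Ok(status)) if (200..300).contains(&status) => return Some(status),
            // Both network errors AND non-2xx statuses consume a retry,
            // so the loop always terminates. (The real impl also sleeps here.)
            Some(Ok(_)) | Some(Err(_)) | None => retries -= 1,
        }
    }
    None
}

fn main() {
    // A stream of 500s exhausts retries instead of spinning forever.
    assert_eq!(run_with_retries(std::iter::repeat(Ok::<u16, &'static str>(500)), 3), None);
    // A transient error followed by a 200 still succeeds.
    assert_eq!(run_with_retries([Err("timeout"), Ok(200)].into_iter(), 3), Some(200));
}
```

The first assertion is exactly the bug being fixed: before the change, an endless run of non-2xx responses never decremented `retries`.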

---

Duplicate comments:
In `@prover/client/src/proof_types/batch_update/proof_inputs.rs`:
- Around line 115-117: The current public_inputs method only returns the first
element (self.0[0]) and will panic on an empty vector; change it to iterate over
all BatchUpdateCircuitInputs in self.0, call public_inputs_arr() on each, and
collect the results into a Vec<[u8; 32]> (propagating any error returned by
public_inputs_arr) instead of indexing [0]; also handle the empty-case by
returning an empty Vec rather than panicking. Use the public_inputs function,
the BatchUpdateCircuitInputs entries (self.0) and the public_inputs_arr() method
as the points to modify.
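The collect-instead-of-index pattern looks like this in miniature; the types here are reduced stand-ins for the real `BatchUpdateCircuitInputs`:

```rust
#[derive(Debug)]
enum ProverClientError {
    InvalidProofData(String),
}

struct BatchUpdateCircuitInputs {
    hash: [u8; 32],
}

impl BatchUpdateCircuitInputs {
    // Assumed fallible, as described in the comment.
    fn public_inputs_arr(&self) -> Result<[u8; 32], ProverClientError> {
        Ok(self.hash)
    }
}

struct BatchUpdateInputs(Vec<BatchUpdateCircuitInputs>);

impl BatchUpdateInputs {
    // Collects every entry instead of indexing [0]; an empty input
    // yields Ok(empty Vec) rather than panicking, and the first
    // per-entry error is propagated.
    fn public_inputs(&self) -> Result<Vec<[u8; 32]>, ProverClientError> {
        self.0.iter().map(|i| i.public_inputs_arr()).collect()
    }
}

fn main() {
    let empty = BatchUpdateInputs(vec![]);
    assert_eq!(empty.public_inputs().unwrap().len(), 0);

    let two = BatchUpdateInputs(vec![
        BatchUpdateCircuitInputs { hash: [1; 32] },
        BatchUpdateCircuitInputs { hash: [2; 32] },
    ]);
    assert_eq!(two.public_inputs().unwrap(), vec![[1u8; 32], [2u8; 32]]);
}
```

`collect()` on an iterator of `Result`s short-circuits on the first `Err`, which gives the error propagation for free.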

In `@prover/client/src/proof.rs`:
- Around line 167-170: The function negate_g1 currently deserializes prover
bytes using G1::deserialize_with_mode(..., Validate::No) which allows off-curve
or wrong-subgroup points; change the deserialization call in negate_g1 to use a
validating mode (e.g., Validate::Yes) so the G1 point is validated during
deserialization before any negation, and propagate the validation error via
ProverClientError::InvalidProofData as already done.
- Around line 72-89: The function deserialize_hex_string_to_be_bytes currently
uses trim_start_matches("0x"), which removes every leading "0x" and lets
malformed inputs like "0x0x01" parse; change the logic to strip at most one "0x"
prefix by replacing trim_start_matches with a single-strip approach (e.g. use
strip_prefix("0x").unwrap_or(hex_str) or equivalent) when computing trimmed_str
in deserialize_hex_string_to_be_bytes so only the first "0x" is removed and
inputs with repeated prefixes fail to parse.
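The single-strip behavior is easy to verify in isolation:

```rust
fn strip_hex_prefix(hex_str: &str) -> &str {
    // strip_prefix removes at most one "0x"; trim_start_matches("0x")
    // would keep stripping and let "0x0x01" parse as "01".
    hex_str.strip_prefix("0x").unwrap_or(hex_str)
}

fn main() {
    assert_eq!(strip_hex_prefix("0x01"), "01");
    assert_eq!(strip_hex_prefix("01"), "01");
    // A doubled prefix survives one strip, so hex decoding then fails as intended.
    assert_eq!(strip_hex_prefix("0x0x01"), "0x01");
    assert!(u8::from_str_radix(strip_hex_prefix("0x0x01"), 16).is_err());
}
```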

In `@prover/client/src/prover.rs`:
- Around line 57-65: spawn_prover currently waits for the child via
command.wait_with_output() but doesn't verify the child's exit status, and
health_check treats any Ok(Response) as success; update spawn_prover to inspect
the output.status.success() from command.wait_with_output() and return an error
(ProverClientError::ProcessExit or similar) if the process failed, and update
health_check to examine the HTTP response status (use
response.status().is_success()) rather than just Ok/Err so non-2xx responses are
considered failures; key symbols to modify: command.wait_with_output(),
spawn_prover(), and health_check() (check Response handling to require
status.is_success()) ensuring callers only return Ok(()) when both the process
exit is successful and health_check reports a 2xx status.
- Around line 1-5: The startup path is using blocking std APIs
(std::process::Command and thread::sleep) inside async code which can block
Tokio workers; replace uses of std::process::Command with
tokio::process::Command and call .output().await (or spawn and await) instead of
blocking output(), and replace thread::sleep/Duration with
tokio::time::sleep(Duration::from_secs(...)). Update any loops that poll with
AtomicBool to use async-friendly waits (await tokio::time::sleep) so the
120-second retry loop yields to the runtime; ensure you import
tokio::process::Command and tokio::time::sleep and change any synchronous calls
at the sites referencing Command and sleep (around the reported locations) to
their async equivalents and .await them.

In `@sdk-libs/client/src/local_test_validator.rs`:
- Around line 69-105: The code builds a single shell command string in the
variable path using config flags (enable_indexer, limit_ledger_size,
sbf_programs, upgradeable_programs, enable_prover, use_surfpool, validator_args)
and later executes it with sh -c and uses blocking child.wait() inside an async
context; change this to construct a Vec<String>/Vec<&str> of argv entries
(program path as the command and each flag/value as separate args using the same
config fields and sbf_program/program_id/program_path/upgrade_authority values),
spawn the process with tokio::process::Command
(Command::new(program_path).args(&args).spawn().await?), and replace the
blocking child.wait()? usage with the async child.wait().await (or child.await)
to avoid shell injection and blocking the Tokio reactor. Ensure each sbf_program
and upgradeable_program adds separate args (no shell interpolation) and pass
validator_args elements as individual args.

In `@sdk-libs/program-test/src/indexer/test_indexer.rs`:
- Around line 2508-2514: The code currently uses unwrap() on
response_result.text().await and deserialize_gnark_proof_json(&body) which can
panic on malformed or error payloads; replace these unwraps with proper error
propagation by mapping failures into IndexerError (e.g., use .await.map_err(|e|
IndexerError::CustomError(e.to_string())) for response_result.text(), and change
deserialize_gnark_proof_json(&body).unwrap() to
deserialize_gnark_proof_json(&body).map_err(|e|
IndexerError::CustomError(e.to_string())) so that the subsequent
proof_from_json_struct and compress_proof error handling remains consistent.
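The `map_err` pattern in isolation, with a fallible parse standing in for `deserialize_gnark_proof_json`:

```rust
#[derive(Debug)]
enum IndexerError {
    CustomError(String),
}

// Hypothetical fallible parse standing in for deserialize_gnark_proof_json.
fn parse_proof(body: &str) -> Result<u64, std::num::ParseIntError> {
    body.trim().parse()
}

fn handle_body(body: &str) -> Result<u64, IndexerError> {
    // Propagate instead of unwrap(): a malformed payload becomes an
    // IndexerError the caller can handle, not a panic.
    parse_proof(body).map_err(|e| IndexerError::CustomError(e.to_string()))
}

fn main() {
    assert_eq!(handle_body("42").unwrap(), 42);
    assert!(handle_body("not-json").is_err());
}
```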
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 0e88447c-7e72-4ee8-aac1-79ba2ed62bd2

📥 Commits

Reviewing files that changed from the base of the PR and between 1012099 and 713ac36.

⛔ Files ignored due to path filters (61)
  • Cargo.lock is excluded by !**/*.lock and included by none
  • forester-utils/src/account_zero_copy.rs is excluded by none and included by none
  • forester-utils/src/address_merkle_tree_config.rs is excluded by none and included by none
  • forester-utils/src/forester_epoch.rs is excluded by none and included by none
  • forester-utils/src/instructions/compress_and_close_mint.rs is excluded by none and included by none
  • forester-utils/src/rate_limiter.rs is excluded by none and included by none
  • forester-utils/src/registry.rs is excluded by none and included by none
  • program-tests/account-compression-test/tests/address_merkle_tree_tests.rs is excluded by none and included by none
  • program-tests/account-compression-test/tests/merkle_tree_tests.rs is excluded by none and included by none
  • program-tests/batched-merkle-tree-test/tests/merkle_tree.rs is excluded by none and included by none
  • program-tests/compressed-token-test/tests/freeze/functional.rs is excluded by none and included by none
  • program-tests/compressed-token-test/tests/v1.rs is excluded by none and included by none
  • program-tests/system-cpi-v2-test/tests/event.rs is excluded by none and included by none
  • program-tests/system-cpi-v2-test/tests/invoke_cpi_with_read_only.rs is excluded by none and included by none
  • program-tests/utils/src/account_zero_copy.rs is excluded by none and included by none
  • program-tests/utils/src/actions/legacy/instructions/transfer2.rs is excluded by none and included by none
  • program-tests/utils/src/address_tree_rollover.rs is excluded by none and included by none
  • program-tests/utils/src/assert_compressed_tx.rs is excluded by none and included by none
  • program-tests/utils/src/assert_merkle_tree.rs is excluded by none and included by none
  • program-tests/utils/src/assert_queue.rs is excluded by none and included by none
  • program-tests/utils/src/batched_address_tree.rs is excluded by none and included by none
  • program-tests/utils/src/e2e_test_env.rs is excluded by none and included by none
  • program-tests/utils/src/lib.rs is excluded by none and included by none
  • program-tests/utils/src/mock_batched_forester.rs is excluded by none and included by none
  • program-tests/utils/src/state_tree_rollover.rs is excluded by none and included by none
  • program-tests/utils/src/test_batch_forester.rs is excluded by none and included by none
  • program-tests/utils/src/test_forester.rs is excluded by none and included by none
  • sdk-tests/anchor-manual-test/tests/shared.rs is excluded by none and included by none
  • sdk-tests/anchor-semi-manual-test/tests/shared/mod.rs is excluded by none and included by none
  • sdk-tests/anchor-semi-manual-test/tests/stress_test.rs is excluded by none and included by none
  • sdk-tests/client-test/tests/light_client.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/amm_stress_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/amm_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/basic_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/d10_ata_idempotent_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/d10_token_accounts_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/d11_zero_copy_test.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/failing_tests.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/integration_tests.rs is excluded by none and included by none
  • sdk-tests/csdk-anchor-full-derived-test/tests/shared.rs is excluded by none and included by none
  • sdk-tests/pinocchio-light-program-test/tests/shared/mod.rs is excluded by none and included by none
  • sdk-tests/pinocchio-light-program-test/tests/stress_test.rs is excluded by none and included by none
  • sdk-tests/pinocchio-manual-test/tests/shared.rs is excluded by none and included by none
  • sdk-tests/sdk-anchor-test/programs/sdk-anchor-test/tests/read_only.rs is excluded by none and included by none
  • sdk-tests/sdk-anchor-test/programs/sdk-anchor-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/sdk-native-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/sdk-pinocchio-v1-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/sdk-pinocchio-v2-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/ctoken_pda.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/decompress_full_cpi.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/pda_ctoken.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/test_4_invocations.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/test_4_transfer2.rs is excluded by none and included by none
  • sdk-tests/sdk-token-test/tests/test_deposit.rs is excluded by none and included by none
  • sdk-tests/sdk-v1-native-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/single-account-loader-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/single-ata-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/single-mint-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/single-pda-test/tests/test.rs is excluded by none and included by none
  • sdk-tests/single-token-test/tests/test.rs is excluded by none and included by none
📒 Files selected for processing (53)
  • forester/src/config.rs
  • forester/src/forester_status.rs
  • forester/src/main.rs
  • forester/src/processor/v2/proof_worker.rs
  • forester/src/queue_helpers.rs
  • forester/tests/test_compressible_pda.rs
  • prover/client/src/errors.rs
  • prover/client/src/helpers.rs
  • prover/client/src/proof.rs
  • prover/client/src/proof_client.rs
  • prover/client/src/proof_types/batch_address_append/json.rs
  • prover/client/src/proof_types/batch_append/json.rs
  • prover/client/src/proof_types/batch_append/proof_inputs.rs
  • prover/client/src/proof_types/batch_update/json.rs
  • prover/client/src/proof_types/batch_update/proof_inputs.rs
  • prover/client/src/proof_types/combined/v1/json.rs
  • prover/client/src/proof_types/combined/v2/json.rs
  • prover/client/src/proof_types/combined/v2/proof_inputs.rs
  • prover/client/src/proof_types/inclusion/v1/json.rs
  • prover/client/src/proof_types/inclusion/v1/proof_inputs.rs
  • prover/client/src/proof_types/inclusion/v2/json.rs
  • prover/client/src/proof_types/inclusion/v2/proof_inputs.rs
  • prover/client/src/proof_types/non_inclusion/v1/json.rs
  • prover/client/src/proof_types/non_inclusion/v1/proof_inputs.rs
  • prover/client/src/proof_types/non_inclusion/v2/json.rs
  • prover/client/src/proof_types/non_inclusion/v2/proof_inputs.rs
  • prover/client/src/prover.rs
  • prover/client/tests/batch_append.rs
  • prover/client/tests/batch_update.rs
  • prover/client/tests/combined.rs
  • prover/client/tests/inclusion.rs
  • prover/client/tests/init_merkle_tree.rs
  • prover/client/tests/non_inclusion.rs
  • sdk-libs/client/src/fee.rs
  • sdk-libs/client/src/indexer/error.rs
  • sdk-libs/client/src/indexer/options.rs
  • sdk-libs/client/src/indexer/photon_indexer.rs
  • sdk-libs/client/src/indexer/types/proof.rs
  • sdk-libs/client/src/indexer/types/queue.rs
  • sdk-libs/client/src/indexer/types/token.rs
  • sdk-libs/client/src/interface/initialize_config.rs
  • sdk-libs/client/src/interface/instructions.rs
  • sdk-libs/client/src/interface/load_accounts.rs
  • sdk-libs/client/src/interface/mod.rs
  • sdk-libs/client/src/interface/pack.rs
  • sdk-libs/client/src/interface/serialize.rs
  • sdk-libs/client/src/local_test_validator.rs
  • sdk-libs/client/src/rpc/client.rs
  • sdk-libs/client/src/utils.rs
  • sdk-libs/macros/src/light_pdas/seeds/extract.rs
  • sdk-libs/program-test/src/indexer/test_indexer.rs
  • sdk-libs/program-test/src/program_test/compressible_setup.rs
  • sdk-libs/program-test/src/program_test/light_program_test.rs

Comment on lines +180 to +186
let merkle_proof_array = merkle_proof.as_slice().try_into().map_err(|_| {
    ProverClientError::InvalidProofData(format!(
        "invalid merkle proof length: expected {}, got {}",
        HEIGHT,
        merkle_proof.len()
    ))
})?;

⚠️ Potential issue | 🟠 Major

Validate full batch vector lengths before iterating.

At Line 135, zip truncates to the shortest input. If old_leaves, leaves, or merkle_proofs is shorter than batch_size, this function silently builds a partial batch while still emitting the original batch_size. Please hard-fail on length mismatch before the loop.

🛠️ Suggested guard
 pub fn get_batch_append_inputs<const HEIGHT: usize>(
@@
 ) -> Result<(BatchAppendsCircuitInputs, Vec<ChangelogEntry<HEIGHT>>), ProverClientError> {
+    let expected_len = batch_size as usize;
+    if old_leaves.len() != expected_len {
+        return Err(ProverClientError::InvalidProofData(format!(
+            "old_leaves length mismatch: expected {}, got {}",
+            expected_len,
+            old_leaves.len()
+        )));
+    }
+    if leaves.len() != expected_len {
+        return Err(ProverClientError::InvalidProofData(format!(
+            "leaves length mismatch: expected {}, got {}",
+            expected_len,
+            leaves.len()
+        )));
+    }
+    if merkle_proofs.len() != expected_len {
+        return Err(ProverClientError::InvalidProofData(format!(
+            "merkle_proofs length mismatch: expected {}, got {}",
+            expected_len,
+            merkle_proofs.len()
+        )));
+    }
+
     let mut new_root = [0u8; 32];

Based on learnings: Prover server (prover/server/) must generate ZK proofs for batch operations (append, nullify, address append).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prover/client/src/proof_types/batch_append/proof_inputs.rs` around lines 180
- 186, Before iterating with zip, validate that old_leaves.len(), leaves.len(),
and merkle_proofs.len() all equal batch_size and return a hard error if any
mismatch; specifically add a guard near the start of the function that checks
these lengths and returns ProverClientError::InvalidProofData (or another
appropriate error) when they differ so you never silently build a partial batch
(this covers the loop that consumes old_leaves, leaves, and merkle_proofs and
the subsequent merkle_proof.as_slice().try_into() conversion).

Comment on lines +159 to 210
fn reconstruct_proof_with_lookup<const HEIGHT: usize>(
    &self,
    address_idx: usize,
    node_lookup: &HashMap<u64, usize>,
) -> Result<[[u8; 32]; HEIGHT], IndexerError> {
    self.validate_proof_height::<HEIGHT>()?;
    let leaf_index = *self.low_element_indices.get(address_idx).ok_or_else(|| {
        IndexerError::MissingResult {
            context: "reconstruct_proof".to_string(),
            message: format!(
                "address_idx {} out of bounds for low_element_indices (len {})",
                address_idx,
                self.low_element_indices.len(),
            ),
        }
    })?;
    let mut proof = [[0u8; 32]; HEIGHT];
    let mut pos = leaf_index;

    for (level, proof_element) in proof.iter_mut().enumerate() {
        let sibling_pos = if pos.is_multiple_of(2) {
            pos + 1
        } else {
            pos - 1
        };
        let sibling_idx = Self::encode_node_index(level, sibling_pos);
        let hash_idx = node_lookup.get(&sibling_idx).copied().ok_or_else(|| {
            IndexerError::MissingResult {
                context: "reconstruct_proof".to_string(),
                message: format!(
                    "Missing proof node at level {} position {} (encoded: {})",
                    level, sibling_pos, sibling_idx
                ),
            }
        })?;
        let hash = self
            .node_hashes
            .get(hash_idx)
            .ok_or_else(|| IndexerError::MissingResult {
                context: "reconstruct_proof".to_string(),
                message: format!(
                    "node_hashes index {} out of bounds (len {})",
                    hash_idx,
                    self.node_hashes.len(),
                ),
            })?;
        *proof_element = *hash;
        pos /= 2;
    }

    Ok(proof)
}

🧹 Nitpick | 🔵 Trivial

Minor: redundant height validation in lookup path.

reconstruct_proof_with_lookup calls validate_proof_height::<HEIGHT>() at line 164, but its only caller reconstruct_proofs already validates at line 132. Consider removing the internal check to avoid O(n) redundant validations when reconstructing many proofs:

     fn reconstruct_proof_with_lookup<const HEIGHT: usize>(
         &self,
         address_idx: usize,
         node_lookup: &HashMap<u64, usize>,
     ) -> Result<[[u8; 32]; HEIGHT], IndexerError> {
-        self.validate_proof_height::<HEIGHT>()?;
         let leaf_index = *self.low_element_indices.get(address_idx).ok_or_else(|| {

That said, keeping it is acceptable as defensive programming if this method might be called directly in the future.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
fn reconstruct_proof_with_lookup<const HEIGHT: usize>(
&self,
address_idx: usize,
node_lookup: &HashMap<u64, usize>,
) -> Result<[[u8; 32]; HEIGHT], IndexerError> {
self.validate_proof_height::<HEIGHT>()?;
let leaf_index = *self.low_element_indices.get(address_idx).ok_or_else(|| {
IndexerError::MissingResult {
context: "reconstruct_proof".to_string(),
message: format!(
"address_idx {} out of bounds for low_element_indices (len {})",
address_idx,
self.low_element_indices.len(),
),
}
})?;
let mut proof = [[0u8; 32]; HEIGHT];
let mut pos = leaf_index;
for (level, proof_element) in proof.iter_mut().enumerate() {
let sibling_pos = if pos.is_multiple_of(2) {
pos + 1
} else {
pos - 1
};
let sibling_idx = Self::encode_node_index(level, sibling_pos);
let hash_idx = node_lookup.get(&sibling_idx).copied().ok_or_else(|| {
IndexerError::MissingResult {
context: "reconstruct_proof".to_string(),
message: format!(
"Missing proof node at level {} position {} (encoded: {})",
level, sibling_pos, sibling_idx
),
}
})?;
let hash =
self.node_hashes
.get(hash_idx)
.ok_or_else(|| IndexerError::MissingResult {
context: "reconstruct_proof".to_string(),
message: format!(
"node_hashes index {} out of bounds (len {})",
hash_idx,
self.node_hashes.len(),
),
})?;
*proof_element = *hash;
pos /= 2;
}
Ok(proof)
}
fn reconstruct_proof_with_lookup<const HEIGHT: usize>(
&self,
address_idx: usize,
node_lookup: &HashMap<u64, usize>,
) -> Result<[[u8; 32]; HEIGHT], IndexerError> {
let leaf_index = *self.low_element_indices.get(address_idx).ok_or_else(|| {
IndexerError::MissingResult {
context: "reconstruct_proof".to_string(),
message: format!(
"address_idx {} out of bounds for low_element_indices (len {})",
address_idx,
self.low_element_indices.len(),
),
}
})?;
let mut proof = [[0u8; 32]; HEIGHT];
let mut pos = leaf_index;
for (level, proof_element) in proof.iter_mut().enumerate() {
let sibling_pos = if pos.is_multiple_of(2) {
pos + 1
} else {
pos - 1
};
let sibling_idx = Self::encode_node_index(level, sibling_pos);
let hash_idx = node_lookup.get(&sibling_idx).copied().ok_or_else(|| {
IndexerError::MissingResult {
context: "reconstruct_proof".to_string(),
message: format!(
"Missing proof node at level {} position {} (encoded: {})",
level, sibling_pos, sibling_idx
),
}
})?;
let hash =
self.node_hashes
.get(hash_idx)
.ok_or_else(|| IndexerError::MissingResult {
context: "reconstruct_proof".to_string(),
message: format!(
"node_hashes index {} out of bounds (len {})",
hash_idx,
self.node_hashes.len(),
),
})?;
*proof_element = *hash;
pos /= 2;
}
Ok(proof)
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/client/src/indexer/types/queue.rs` around lines 159 - 210, The
method reconstruct_proof_with_lookup currently redundantly calls
validate_proof_height::<HEIGHT>() even though its only caller,
reconstruct_proofs, already performs that check; remove the
validate_proof_height::<HEIGHT>() invocation and its error propagation from
reconstruct_proof_with_lookup so reconstruct_proofs remains the single
validation point (leave reconstruct_proofs and validate_proof_height unchanged),
ensuring reconstruct_proof_with_lookup assumes the caller has validated the
height (or add a brief comment noting that responsibility).

address_space: Vec<Pubkey>,
config_bump: Option<u8>,
) -> Instruction {
) -> std::io::Result<Instruction> {

🧹 Nitpick | 🔵 Trivial

Unify builder error types across this module.

Line 104 and Line 154 now return std::io::Result<Instruction>, while other builders still use Result<Instruction, Box<dyn std::error::Error>>. Consider a single module-level error type to avoid split caller handling and lost context.

Also applies to: 154-154

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/client/src/interface/instructions.rs` at line 104, Unify the builder
return type by introducing a single module-level alias or error enum (e.g., type
BuilderResult = Result<Instruction, Box<dyn std::error::Error + Send + Sync>> or
a custom InstructionBuildError) and replace the mixed signatures (the functions
currently returning std::io::Result<Instruction> and those returning
Result<Instruction, Box<dyn std::error::Error>>) with that alias; update all
builder function signatures and their internal error returns to use
BuilderResult and adjust imports/usages accordingly so callers only handle one
consistent error type for Instruction construction.

.collect();
.collect::<Result<Vec<_>, _>>()?;

let program_id = specs.first().map(|s| s.program_id()).unwrap_or_default();

🧹 Nitpick | 🔵 Trivial

Redundant program_id computation.

This is identical to line 311. You can remove this line and reuse the existing program_id binding.

🧹 Proposed fix
-    let program_id = specs.first().map(|s| s.program_id()).unwrap_or_default();
-
     instructions::create_decompress_accounts_idempotent_instruction(
         &program_id,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk-libs/client/src/interface/load_accounts.rs` at line 334, Remove the
redundant declaration that recomputes program_id (the line using
specs.first().map(|s| s.program_id()).unwrap_or_default()); instead reuse the
existing program_id binding already declared earlier in the same scope; delete
this duplicate statement so there is only a single program_id binding referenced
by the surrounding code (e.g., the later uses in load_accounts-related logic).

@sergeytimoshin sergeytimoshin force-pushed the sergey/runtime-safety-hardening branch from 713ac36 to 8ed76e5 Compare March 15, 2026 00:37