
refactor: implement fdv2 polling initializer / synchronizer#519

Open
beekld wants to merge 15 commits into main from beeklimt/SDK-2097

Conversation

Contributor

@beekld beekld commented Apr 3, 2026

This implements the C++ polling initializer and synchronizer.

The bulk of this code was generated by Claude based on the Java version, but I've gone through it line-by-line and I think it makes sense. That said, I'm new to both FDv2 and ASIO, so I could be missing something.

Probably the most controversial part is the decision from the previous PR to use std::future and a blocking call. If we decide we need Java-like conditions, or if we need the callers to be non-blocking, we could change these to use asio::deferred instead. But I don't think these changes require that yet.


Note

Medium Risk
Adds new FDv2 polling data-source implementations and a protocol state machine that determines how flag/segment updates are applied; bugs here could lead to missed or partial updates or incorrect retry/fallback behavior.

Overview
Introduces a shared FDv2ProtocolHandler state machine to consume FDv2 wire-format events (server-intent, put-object, delete-object, payload-transferred, error, goodbye) and emit complete FDv2ChangeSets, discarding partial state on errors and skipping unknown kinds for forward compatibility.

Adds FDv2 polling support in the server SDK via a blocking one-shot FDv2PollingInitializer and a recurring FDv2PollingSynchronizer with a minimum poll interval, selector-based basis query support, optional payload filtering, and consistent mapping of HTTP/parse/server errors into FDv2SourceResult outcomes. Includes unit coverage for the new protocol handler and wires the new sources/handler into the internal and server SDK CMake builds.

Reviewed by Cursor Bugbot for commit 0996cca. Bugbot is set up for automated code reviews on this repo.

@beekld beekld requested a review from a team as a code owner April 3, 2026 23:01

@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 4 potential issues.


transferred.version, transferred.state}}};
Reset();
return changeset;
}


Missing kInactive guard in payload-transferred handler

High Severity

The put-object and delete-object handlers both early-return monostate when state_ == State::kInactive, but the payload-transferred handler lacks this guard. When inactive, the ternary (state_ == State::kPartial) ? kPartial : kFull evaluates to kFull, producing a changeset that signals "replace all data with these (zero) changes" — effectively wiping the data store. This handler is shared between polling and streaming, so a malformed server response or out-of-order event could trigger it.


std::lock_guard lock(mutex_);
closed_ = true;
cv_.notify_one();
}

Timer cancel outside mutex causes data race

Medium Severity

Close() calls timer_.cancel() before acquiring mutex_, while Next() calls timer_.cancel(), timer_.expires_after(), and timer_.async_wait() while holding mutex_. Since boost::asio::steady_timer is not thread-safe, concurrent access from Close() and Next() is undefined behavior. The cancel needs to be serialized with the timer operations in Next(), for example by moving it inside the locked section or posting it to the executor.

Additional Locations (1)

data_interfaces::FDv2SourceResult::TerminalError{
MakeError(ErrorKind::kErrorResponse, res.Status(), std::move(msg)),
false}};
}

HandlePollResult duplicated across initializer and synchronizer

Low Severity

HandlePollResult (~140 lines), MakeError, the URL-building logic, and the error message constants are substantively identical between polling_initializer.cpp and polling_synchronizer.cpp. Any future bug fix or protocol change would need to be applied to both copies, increasing the risk of divergence.

Additional Locations (1)

std::lock_guard guard(mutex_);
*shared_result = std::move(res);
cv_.notify_one();
});
Request callbacks capture raw this, risking use-after-free

High Severity

The requester_.Request callbacks capture raw this to access mutex_, cv_, and result_. AsioRequester does not cancel in-flight requests on destruction — FoxyClient instances are independent shared_ptrs that live until completion. If Close() unblocks Run()/Next() and the caller destroys the object, the still-pending callback fires and dereferences a dangling this. The existing PollingDataSource in the same codebase avoids this by using weak_from_this() in its callbacks.

Additional Locations (1)

