fix(store): flag deduping for experimental incremental gRPC sync #1940
erenatas wants to merge 1 commit into open-feature:main
Conversation
Code Review
This pull request introduces a membership-based deduplication mechanism for flags, adding a new membership table to track flag associations with flagSetIds and refactoring the Update logic into incremental and full snapshot modes. Review feedback identifies critical issues where the Get method does not respect source priority during membership resolution and the incremental update logic fails to update flag content if the key already exists. Furthermore, there are concerns regarding performance overhead in the Watch implementation and potential database churn when handling orphaned flags during updates.
/gemini review
Code Review
This pull request introduces flag deduplication in the store to optimize memory usage for workloads where flags are shared across multiple flagSetIds. It implements membership tracking using internal Go maps to manage the relationship between flags and their sets without the overhead of additional database indexes. The Get, GetAll, and Watch operations are updated to resolve flags through these membership maps. Review feedback highlights a performance improvement opportunity in the Get method to replace a linear scan with a map lookup and suggests ensuring that flagSetId is consistently included in the flag metadata, even when the source metadata is nil.
I understand the motivation; 8GB to 150MB is dramatic. But this PR fundamentally changes the store architecture by introducing a parallel state system with manual synchronization, read-time patching, and dual code paths through every query method. That's a lot of new surface area for bugs in the critical evaluation path, all for an experimental feature 😅. Maybe the 810x duplication of 938 common flags can be fixed on the sync server end? Can we rethink this approach?
Force-pushed d8bd915 to d4933c1
/gemini review
Force-pushed d4933c1 to 18f3786
Code Review
This pull request introduces a flag deduplication mechanism for incremental updates by adding a membership table to track flag associations with specific flagSetIds. This allows identical flags shared across multiple sets to be stored once per source, optimizing memory usage as demonstrated by the new benchmarks. The Update, Get, GetAll, and Watch methods have been refactored to support this membership-aware logic. Review feedback identifies a redundant priority check in the update path, duplicated metadata patching logic that should be refactored into a helper, and a potential logic error where mixing update modes could lead to unintended flag deletions.
… mode

Add a membership table to track which flagSetIds reference which flags, enabling per-flagSetId queries without storing duplicate flag entries. Flags are stored once per (key, source) in the flags table while the membership table records (flagSetId, key, source) triples.

Key behaviors:
- Get iterates all membership entries for a (flagSetId, key) and selects the highest-priority flag, consistent with GetAll/collectViaMembership.
- updateIncremental upserts flag content when a (key, source) already exists at equal or lower priority, so content changes (targeting rules, variants) are not silently dropped.
- Metadata always includes flagSetId even when the original metadata is nil, ensuring consistent serialization for clients.
- Watch registers scoped per-(key, source) watch channels from memdb instead of watching the entire flags table, avoiding O(N_watchers x N_updates) spurious wake-ups.

Signed-off-by: Eren Atas <eren_atas@hotmail.com>
Force-pushed 18f3786 to ccbe893
/gemini review
Can you have a look now? I tried addressing your concerns. Non-incremental callers won't touch membership code, and all experimental code is isolated behind the introduced gate.
Code Review
This pull request implements a membership-based deduplication mechanism for incremental updates within the flag store. By introducing a new membership table, the store now tracks associations between flag keys and flagSetIds, allowing identical flags shared across multiple sets to be stored only once per source. The Get, GetAll, and Watch operations have been updated to resolve flags through this membership mapping when a flagSetId is provided. Review feedback focuses on improving the robustness of flag iteration during membership resolution and refactoring duplicated metadata patching logic into a common helper function.
```go
if fRaw := flagIt.Next(); fRaw != nil {
	candidate := fRaw.(model.Flag)
	if !found || candidate.Priority >= best.Priority {
		best = candidate
		found = true
	}
}
```
For robustness and consistency with collectViaMembership, this should iterate through all flags returned by the iterator for the given source. While the incremental update logic aims to maintain a single record per (key, source), the store might contain multiple entries if it was previously populated via non-deduplicated updates.
Suggested change:

```diff
-if fRaw := flagIt.Next(); fRaw != nil {
+for fRaw := flagIt.Next(); fRaw != nil; fRaw = flagIt.Next() {
 	candidate := fRaw.(model.Flag)
 	if !found || candidate.Priority >= best.Priority {
 		best = candidate
 		found = true
 	}
 }
```
```go
best.FlagSetId = flagSetId
if best.Metadata == nil {
	best.Metadata = make(model.Metadata)
} else {
	patched := make(model.Metadata, len(best.Metadata))
	for k, v := range best.Metadata {
		patched[k] = v
	}
	best.Metadata = patched
}
best.Metadata["flagSetId"] = flagSetId
```
```go
flag.FlagSetId = flagSetId // patch to match the queried flagSetId
if flag.Metadata == nil {
	flag.Metadata = make(model.Metadata)
} else {
	patched := make(model.Metadata, len(flag.Metadata))
	for k, v := range flag.Metadata {
		patched[k] = v
	}
	flag.Metadata = patched
}
flag.Metadata["flagSetId"] = flagSetId
```



This PR
Adds a membership table to the in-memory store that tracks which flagSetIds reference which flags, enabling flag deduplication during incremental gRPC sync updates. Flags shared across flagSetIds are stored once per (key, source) in the flags table, while the membership table records (flagSetId, key, source) triples for per-flagSetId query resolution.
Get, GetAll, and Watch resolve flags through membership when a flagSetId selector doesn't match directly, iterating all entries to select the highest-priority flag. Content changes are upserted in place so updated targeting rules or variants are not dropped, and flagSetId is always populated in metadata even when the original is nil. Watch registers scoped per-(key, source) channels instead of watching the entire flags table. All membership code paths are gated behind an atomic.Bool set only on the first incremental update, so non-incremental callers never enter them.
Why
When many projects share most of the same flags, storing one copy per flagSetId wastes memory and can lead to OOMs. Deduplicating shared flags keeps the incremental sync path scalable without changing full-snapshot behavior.