
fix(store): flag deduping for experimental incremental gRPC sync#1940

Open
erenatas wants to merge 1 commit into open-feature:main from erenatas:erenatas/membership-dedup

Conversation

Contributor

@erenatas erenatas commented Apr 14, 2026

This PR

Adds a membership table to the in-memory store that tracks which flagSetIds reference which flags, enabling flag deduplication during incremental gRPC sync updates. Flags shared across flagSetIds are stored once per (key, source) in the flags table, while the membership table records (flagSetId, key, source) triples for per-flagSetId query resolution.

Get, GetAll, and Watch resolve flags through membership when a flagSetId selector doesn't match directly, iterating all entries to select the highest-priority flag. Content changes are upserted in place so updated targeting rules or variants are not dropped, and flagSetId is always populated in metadata even when the original is nil. Watch registers scoped per-(key, source) channels instead of watching the entire flags table.

All membership code paths are gated behind an atomic.Bool set only on the first incremental update, so non-incremental callers never enter them.
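To make the dedup idea above concrete, here is a minimal, self-contained sketch of a membership-keyed store with an atomic gate. All names (`flagRecord`, `store.upsert`) are hypothetical and simplified; the real implementation uses go-memdb tables rather than plain maps.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// flagRecord is stored once per (key, source), no matter how many
// flagSetIds reference it.
type flagRecord struct {
	Key      string
	Source   string
	Priority int
}

type store struct {
	incremental atomic.Bool                   // gate: set on the first incremental update
	flags       map[[2]string]flagRecord      // (key, source) -> single flag copy
	membership  map[string]map[[2]string]bool // flagSetId -> set of (key, source)
}

func newStore() *store {
	return &store{
		flags:      make(map[[2]string]flagRecord),
		membership: make(map[string]map[[2]string]bool),
	}
}

// upsert stores the flag content once and links the flagSetId to it.
// Content changes for an existing (key, source) are applied in place.
func (s *store) upsert(flagSetId string, f flagRecord) {
	s.incremental.Store(true)
	ks := [2]string{f.Key, f.Source}
	s.flags[ks] = f
	if s.membership[flagSetId] == nil {
		s.membership[flagSetId] = make(map[[2]string]bool)
	}
	s.membership[flagSetId][ks] = true
}

func main() {
	s := newStore()
	// Two flag sets share the same flag; only one copy is stored.
	s.upsert("set-a", flagRecord{Key: "dark-mode", Source: "grpc", Priority: 1})
	s.upsert("set-b", flagRecord{Key: "dark-mode", Source: "grpc", Priority: 1})
	fmt.Println(len(s.flags), len(s.membership)) // one flag copy, two memberships
}
```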

Why

When many projects share most of the same flags, storing one copy per flagSetId wastes memory and can lead to OOMs. Deduplicating shared flags keeps the incremental sync path scalable without changing full-snapshot behavior.

@erenatas erenatas requested review from a team as code owners April 14, 2026 07:28

netlify bot commented Apr 14, 2026

Deploy Preview for polite-licorice-3db33c canceled.

Name | Link
🔨 Latest commit | ccbe893
🔍 Latest deploy log | https://app.netlify.com/projects/polite-licorice-3db33c/deploys/69de84bbcdb69f000780f3a2

@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Apr 14, 2026
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a membership-based deduplication mechanism for flags, adding a new membership table to track flag associations with flagSetIds and refactoring the Update logic into incremental and full snapshot modes. Review feedback identifies critical issues where the Get method does not respect source priority during membership resolution and the incremental update logic fails to update flag content if the key already exists. Furthermore, there are concerns regarding performance overhead in the Watch implementation and potential database churn when handling orphaned flags during updates.

Comment thread core/pkg/store/store.go Outdated
Comment thread core/pkg/store/store.go
Comment thread core/pkg/store/store.go Outdated
Comment thread core/pkg/store/store.go
@erenatas erenatas changed the title fix(store): deduplicate flags across flagSetIds in incremental update… fix(store): flag deduplication for incremental gRPC sync updates Apr 14, 2026
@erenatas
Contributor Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces flag deduplication in the store to optimize memory usage for workloads where flags are shared across multiple flagSetIds. It implements membership tracking using internal Go maps to manage the relationship between flags and their sets without the overhead of additional database indexes. The Get, GetAll, and Watch operations are updated to resolve flags through these membership maps. Review feedback highlights a performance improvement opportunity in the Get method to replace a linear scan with a map lookup and suggests ensuring that flagSetId is consistently included in the flag metadata, even when the source metadata is nil.

Comment thread core/pkg/store/store.go Outdated
Comment thread core/pkg/store/store.go Outdated
Comment thread core/pkg/store/store.go Outdated
@toddbaert toddbaert changed the title fix(store): flag deduplication for incremental gRPC sync updates fix(store): flag deduping for experimental incremental gRPC sync Apr 14, 2026
@toddbaert
Member

I understand the motivation; 8GB to 150MB is dramatic. But this PR fundamentally changes the store architecture by introducing a parallel state system with manual synchronization, read-time patching, and dual code paths through every query method. That's a lot of new surface area for bugs in the critical evaluation path, all for an experimental feature 😅.

Maybe the 810x duplication of 938 common flags can be fixed on the sync server end? Can we rethink this approach?

@erenatas erenatas force-pushed the erenatas/membership-dedup branch 2 times, most recently from d8bd915 to d4933c1 Compare April 14, 2026 17:50
@erenatas
Contributor Author

/gemini review

@erenatas erenatas force-pushed the erenatas/membership-dedup branch from d4933c1 to 18f3786 Compare April 14, 2026 17:51
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a flag deduplication mechanism for incremental updates by adding a membership table to track flag associations with specific flagSetIds. This allows identical flags shared across multiple sets to be stored once per source, optimizing memory usage as demonstrated by the new benchmarks. The Update, Get, GetAll, and Watch methods have been refactored to support this membership-aware logic. Review feedback identifies a redundant priority check in the update path, duplicated metadata patching logic that should be refactored into a helper, and a potential logic error where mixing update modes could lead to unintended flag deletions.

Comment thread core/pkg/store/store.go
Comment thread core/pkg/store/store.go Outdated
Comment thread core/pkg/store/store.go
… mode

Add a membership table to track which flagSetIds reference which flags,
enabling per-flagSetId queries without storing duplicate flag entries.
Flags are stored once per (key, source) in the flags table while the
membership table records (flagSetId, key, source) triples.

Key behaviors:
- Get iterates all membership entries for a (flagSetId, key) and selects
  the highest-priority flag, consistent with GetAll/collectViaMembership.
- updateIncremental upserts flag content when a (key, source) already
  exists at equal or lower priority, so content changes (targeting rules,
  variants) are not silently dropped.
- Metadata always includes flagSetId even when the original metadata is
  nil, ensuring consistent serialization for clients.
- Watch registers scoped per-(key, source) watch channels from memdb
  instead of watching the entire flags table, avoiding O(N_watchers x
  N_updates) spurious wake-ups.

Signed-off-by: Eren Atas <eren_atas@hotmail.com>
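The resolution rule in the first bullet above — scan every membership candidate and keep the highest-priority one — can be sketched roughly as follows. This is a simplified illustration with hypothetical types; the actual store iterates go-memdb query results.

```go
package main

import "fmt"

// flag is a stand-in for the store's model.Flag (illustrative only).
type flag struct {
	Key      string
	Source   string
	Priority int
}

// resolve scans all candidates a flagSetId's membership points at and keeps
// the highest-priority one, mirroring the Get/collectViaMembership behavior
// described in the commit message.
func resolve(candidates []flag) (flag, bool) {
	var best flag
	found := false
	for _, c := range candidates {
		if !found || c.Priority >= best.Priority {
			best = c
			found = true
		}
	}
	return best, found
}

func main() {
	got, ok := resolve([]flag{
		{Key: "dark-mode", Source: "file", Priority: 1},
		{Key: "dark-mode", Source: "grpc", Priority: 2},
	})
	fmt.Println(ok, got.Source) // the grpc copy wins on priority
}
```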
@erenatas erenatas force-pushed the erenatas/membership-dedup branch from 18f3786 to ccbe893 Compare April 14, 2026 18:17
@erenatas
Contributor Author

/gemini review

@erenatas
Contributor Author

> I understand the motivation; 8GB to 150MB is dramatic. But this PR fundamentally changes the store architecture by introducing a parallel state system with manual synchronization, read-time patching, and dual code paths through every query method. That's a lot of new surface area for bugs in the critical evaluation path, all for an experimental feature 😅.
>
> Maybe the 810x duplication of 938 common flags can be fixed on the sync server end? Can we rethink this approach?

Can you have a look now? I tried addressing your concerns. Non-incremental callers won't touch membership code, and all experimental code is isolated behind the introduced gate.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request implements a membership-based deduplication mechanism for incremental updates within the flag store. By introducing a new membership table, the store now tracks associations between flag keys and flagSetIds, allowing identical flags shared across multiple sets to be stored only once per source. The Get, GetAll, and Watch operations have been updated to resolve flags through this membership mapping when a flagSetId is provided. Review feedback focuses on improving the robustness of flag iteration during membership resolution and refactoring duplicated metadata patching logic into a common helper function.

Comment thread core/pkg/store/store.go
Comment on lines +251 to +257
if fRaw := flagIt.Next(); fRaw != nil {
    candidate := fRaw.(model.Flag)
    if !found || candidate.Priority >= best.Priority {
        best = candidate
        found = true
    }
}
Contributor


medium

For robustness and consistency with collectViaMembership, this should iterate through all flags returned by the iterator for the given source. While the incremental update logic aims to maintain a single record per (key, source), the store might contain multiple entries if it was previously populated via non-deduplicated updates.

Suggested change

for fRaw := flagIt.Next(); fRaw != nil; fRaw = flagIt.Next() {
    candidate := fRaw.(model.Flag)
    if !found || candidate.Priority >= best.Priority {
        best = candidate
        found = true
    }
}

Comment thread core/pkg/store/store.go
Comment on lines +260 to +270
best.FlagSetId = flagSetId
if best.Metadata == nil {
    best.Metadata = make(model.Metadata)
} else {
    patched := make(model.Metadata, len(best.Metadata))
    for k, v := range best.Metadata {
        patched[k] = v
    }
    best.Metadata = patched
}
best.Metadata["flagSetId"] = flagSetId
Contributor


medium

This manual metadata patching logic is duplicated and can be simplified by using the existing patchMetadata helper function.

best.FlagSetId = flagSetId
best.Metadata = patchMetadata(best.Metadata, model.Metadata{"flagSetId": flagSetId})
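The `patchMetadata` helper referenced in this suggestion is not shown in the diff. Assuming it copies the original map and overlays the patch (an assumption, not the actual implementation), a minimal sketch might look like:

```go
package main

import "fmt"

// Metadata is a stand-in for the store's model.Metadata type.
type Metadata map[string]any

// patchMetadata (sketch; assumed behavior) copies the original map and
// overlays the patch, so the stored flag's metadata is never mutated in place.
func patchMetadata(original, patch Metadata) Metadata {
	out := make(Metadata, len(original)+len(patch))
	for k, v := range original {
		out[k] = v
	}
	for k, v := range patch {
		out[k] = v
	}
	return out
}

func main() {
	orig := Metadata{"owner": "platform"}
	patched := patchMetadata(orig, Metadata{"flagSetId": "set-a"})
	fmt.Println(len(orig), patched["flagSetId"]) // original map is untouched
}
```

Copy-on-write here matters because the flag record is shared across flagSetIds; mutating its metadata in place would leak one set's `flagSetId` into another set's reads.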

Comment thread core/pkg/store/store.go
Comment on lines +661 to +671
flag.FlagSetId = flagSetId // patch to match the queried flagSetId
if flag.Metadata == nil {
    flag.Metadata = make(model.Metadata)
} else {
    patched := make(model.Metadata, len(flag.Metadata))
    for k, v := range flag.Metadata {
        patched[k] = v
    }
    flag.Metadata = patched
}
flag.Metadata["flagSetId"] = flagSetId
Contributor


medium

This manual metadata patching logic is duplicated and can be simplified by using the existing patchMetadata helper function.

flag.FlagSetId = flagSetId
flag.Metadata = patchMetadata(flag.Metadata, model.Metadata{"flagSetId": flagSetId})
