Initial placement #3830

Open
cthulhu-rider wants to merge 6 commits into master from initial-placement

Conversation

@cthulhu-rider
Contributor

No description provided.

@codecov

codecov bot commented Feb 18, 2026

Codecov Report

❌ Patch coverage is 13.80597% with 231 lines in your changes missing coverage. Please review.
✅ Project coverage is 25.76%. Comparing base (db0bf32) to head (e605908).

Files with missing lines Patch % Lines
pkg/services/object/put/distributed.go 10.55% 190 Missing and 5 partials ⚠️
cmd/neofs-cli/modules/container/get.go 0.00% 24 Missing ⚠️
pkg/services/object/put/ec.go 42.85% 4 Missing and 4 partials ⚠️
pkg/services/object/put/util.go 50.00% 4 Missing ⚠️
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##           master    #3830      +/-   ##
==========================================
- Coverage   25.83%   25.76%   -0.08%
==========================================
  Files         669      669
  Lines       43074    43287     +213
==========================================
+ Hits        11127    11151      +24
- Misses      30930    31114     +184
- Partials     1017     1022      +5
```


@cthulhu-rider force-pushed the initial-placement branch 7 times, most recently from 96f2c39 to a4553c1 on February 26, 2026 at 17:02
@cthulhu-rider
Contributor Author

Ready overall. The only thing left from me is MaxReplicas tests.

@cthulhu-rider marked this pull request as ready for review on February 26, 2026 at 17:11
This refactoring will help to support initial placement rules
(nspcc-dev/neofs-api#352) which will not apply
to system objects.

Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
Such requests are specific, so it's clearer to handle them in a
separate branch. Also, avoid undefined field values.

Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
nspcc-dev/neofs-sdk-go@e6342b6...b703a48.

IR performs verification of initial rules against the main ones.

CLI supports initial rules specified in JSON file only for now.

SN allows such policies to be created, but is not yet able to serve
them. This will be supported in future updates.

Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
Except policies with `max_replicas` set, which will be supported
separately. Requests to such containers are denied for now.

Requests with `copies_number` now behave like ones with initial
`max_replicas`.

Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
Support `max_replicas` and `prefer_local` settings.

Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
```go
	repLimits []uint, ecLimits []uint32, limit uint32, localOnly bool, wg *sync.WaitGroup, rulesCounters []ruleCounters, repResultsMtx *sync.RWMutex, repResults map[string]bool) (bool, error) {
	for {
		var added bool
		for i := range repLimits {
```
Member

I don't see local flag working correctly here. You're still iterating over rules in the order they're defined in policy, but locality preference should break it. If you have [max=2, [2, 2]] initial and local node is in the second set that rule should be processed first even though it's defined to be the second in policy.

I also think this can be simplified to rule set adjustment: you have the long-term list, you filter it through the initial replica sets (potentially removing zeroes completely), then you sort these lists based on locality, then you go over them as usual, just with an additional overall counter that can't exceed max replicas. No additional logic should be required to go over rules, start workers, etc.
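The suggested simplification could be sketched roughly as follows. This is a minimal illustration, not PR code: `ruleSet`, `orderRuleSets`, `planPuts` and the node labels are all hypothetical names, and real placement would pick nodes dynamically rather than in list order.

```go
package main

import (
	"fmt"
	"sort"
)

// ruleSet pairs a per-set replica limit with its candidate nodes
// (hypothetical type for illustration).
type ruleSet struct {
	limit int      // replicas allowed by the initial policy for this set
	nodes []string // candidate nodes, in policy order
}

func containsNode(nodes []string, n string) bool {
	for _, v := range nodes {
		if v == n {
			return true
		}
	}
	return false
}

// orderRuleSets sorts sets so that those containing the local node come
// first; the relative policy order is preserved otherwise.
func orderRuleSets(sets []ruleSet, local string) []ruleSet {
	out := make([]ruleSet, len(sets))
	copy(out, sets)
	sort.SliceStable(out, func(i, j int) bool {
		return containsNode(out[i].nodes, local) && !containsNode(out[j].nodes, local)
	})
	return out
}

// planPuts walks the ordered sets, picking targets until either the
// per-set limit or the overall maxReplicas cap is reached.
func planPuts(sets []ruleSet, maxReplicas int) []string {
	var targets []string
	for _, s := range sets {
		for i := 0; i < s.limit && i < len(s.nodes); i++ {
			if len(targets) >= maxReplicas {
				return targets
			}
			targets = append(targets, s.nodes[i])
		}
	}
	return targets
}

func main() {
	sets := []ruleSet{
		{limit: 2, nodes: []string{"N1", "N2"}},
		{limit: 2, nodes: []string{"N3", "N4"}},
	}
	// Local node N3 sits in the second set, so that set is processed first.
	ordered := orderRuleSets(sets, "N3")
	fmt.Println(planPuts(ordered, 2)) // [N3 N4]
}
```

The point of the sketch is that no per-rule worker logic is needed: locality is handled once by the sort, and `max_replicas` by a single counter.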

Contributor Author

I don't see local flag working correctly here. You're still iterating over rules in the order they're defined in policy, but locality preference should break it.

```go
localOnly && !localNodeInSet(t.placementIterator.neoFSNet, nodeLists[i]) { // we're still trying with local sets
```

This condition makes the difference: we process local sets first, then switch to non-local ones.

then you go over them as usual just having an additional overall counter that can't exceed max replicas

Let's discuss this first. My thought was: when a [max=2, [2, 2]] policy is handled with

[N1 N2 ...]
[N3 N4 ...]

nodes, we primarily PUT on N1+N3, taking into account that they are (naturally) in different locations/subnets/etc. Your suggestion is to PUT on N1+N2 as we'd do with the main policy. Do we see potential in the proposed placement order?

Member

i would expect N1+N3 placement in this example, according to the second example in the issue

Member

Let's say it's undefined and the node can do whatever it finds appropriate (and it'll try to do whatever is easiest, of course, and this can change in the future). People who want to spread for sure will use [max=2, [1, 1]]. But [2, 0], [0, 2] and [1, 1] are all equally good for [max=2, [2, 2]].
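For the [max=2, [2, 2]] example, the set of equally valid per-set replica splits can be enumerated mechanically. A hypothetical sketch (`validSplits` is an illustrative helper, not PR code):

```go
package main

import "fmt"

// validSplits enumerates per-set replica counts that satisfy both the
// per-set caps and the overall max (hypothetical helper for illustration).
func validSplits(caps []int, max int) [][]int {
	var out [][]int
	var rec func(i, left int, cur []int)
	rec = func(i, left int, cur []int) {
		if i == len(caps) {
			if left == 0 {
				// copy cur so later appends don't alias the stored slice
				out = append(out, append([]int(nil), cur...))
			}
			return
		}
		for c := 0; c <= caps[i] && c <= left; c++ {
			rec(i+1, left-c, append(cur, c))
		}
	}
	rec(0, max, nil)
	return out
}

func main() {
	// [max=2, [2, 2]]: every split below satisfies the policy equally well.
	fmt.Println(validSplits([]int{2, 2}, 2)) // [[0 2] [1 1] [2 0]]
}
```

All three splits meet the constraints, which is exactly why the node is free to pick whichever is cheapest.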

Member

On to why: consider failing replications. Whatever strategy you choose initially, you can end up in any of these cases and it'll be valid (all constraints satisfied). So there is no guarantee over how these nodes are chosen, which, by the way, is the case even for local: that's just a hint that tries to optimize things, not a guarantee. If local nodes are not available while remote ones are, it's perfectly fine to put to remote nodes only.

```go
	})
}

initial := t.initialPolicy != nil && (t.sessionSigner != nil || !t.localOnly)
```
Member

why sessionSigner is special for us?

