Conversation
Codecov Report

❌ Patch coverage: additional details and impacted files below.

```
@@            Coverage Diff             @@
##           master    #3830      +/-   ##
==========================================
- Coverage   25.83%   25.76%   -0.08%
==========================================
  Files         669      669
  Lines       43074    43287     +213
==========================================
+ Hits        11127    11151      +24
- Misses      30930    31114     +184
- Partials     1017     1022       +5
```

☔ View full report in Codecov by Sentry.
Force-pushed 96f2c39 to a4553c1 (Compare)
Ready overall. The only thing left from me is `MaxReplicas` tests.
Force-pushed a4553c1 to f87d1da (Compare)
This refactoring will help to support initial placement rules (nspcc-dev/neofs-api#352) which will not apply to system objects. Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
Such requests are specific, so it's clearer to handle them in a separate branch. Also, avoid undefined field values. Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
nspcc-dev/neofs-sdk-go@e6342b6...b703a48. IR performs verification of initial rules against the main ones. CLI supports initial rules specified in JSON file only for now. SN allows such policies to be created, but is not yet able to serve them. This will be supported in future updates. Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
Except policies with set `max_replicas` which will be supported separately. Requests to such containers are denied for now. Requests with `copies_number` now behave like ones with initial `max_replicas`. Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
Support `max_replicas` and `prefer_local` settings. Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
Force-pushed f87d1da to e605908 (Compare)
```go
	repLimits []uint, ecLimits []uint32, limit uint32, localOnly bool, wg *sync.WaitGroup, rulesCounters []ruleCounters, repResultsMtx *sync.RWMutex, repResults map[string]bool) (bool, error) {
	for {
		var added bool
		for i := range repLimits {
```
I don't see the local flag working correctly here. You're still iterating over rules in the order they're defined in the policy, but locality preference should break it. If you have [max=2, [2, 2]] initial and the local node is in the second set, that rule should be processed first even though it's defined second in the policy.

I also think this can be simplified to a rule-set adjustment: take the long-term list, filter it through the initial replica sets (potentially removing zeroes completely), then sort these lists based on locality, then go over them as usual with just an additional overall counter that can't exceed max replicas. No additional logic should be required to iterate over rules, start workers, etc.
> I don't see local flag working correctly here. You're still iterating over rules in the order they're defined in policy, but locality preference should break it.

This condition makes a diff: we process local sets only first, then switch to the non-local ones.

> then you go over them as usual just having an additional overall counter that can't exceed max replicas
Let's discuss this first. My thought was: when the [max=2, [2, 2]] policy is handled with

[N1 N2 ...]
[N3 N4 ...]

nodes, we primarily PUT on N1+N3, taking into account that they are (naturally) in different locations/subnets/etc. Your suggestion is to PUT on N1+N2, as we'd do with the main policy. Do we see potential in the proposed placement order?
I would expect N1+N3 placement in this example, according to the second example in the issue.
Let's say it's undefined and the node can do whatever it finds appropriate (it'll try to do whatever is easier, of course, and this can change in the future). People that want to spread for sure will use [max=2, [1, 1]]. But [2, 0], [0, 2] and [1, 1] are all equally good for [max=2, [2, 2]].
On to why: consider failing replications. Whatever strategy you choose initially, you can end up in any of these cases and it'll still be valid (all constraints satisfied). So there is no guarantee over how these nodes are chosen. That is the case even for the local flag, btw: it's just a hint that tries to optimize things, not a guarantee. If local nodes are not available while remote ones are, it's perfectly fine to put to remote nodes only.
```go
	})
}

initial := t.initialPolicy != nil && (t.sessionSigner != nil || !t.localOnly)
```
Why is `sessionSigner` special for us?