Add topologySpreadConstraints configuration to pod spec. #2530
laiminhtrung1997 wants to merge 15 commits into zalando:master
Conversation
Force-pushed from b948c94 to 44fcb1f
We need that feature too.
	}
	if !reflect.DeepEqual(c.Statefulset.Spec.Template.Spec.TopologySpreadConstraints, statefulSet.Spec.Template.Spec.TopologySpreadConstraints) {
		needsReplace = true
		needsRollUpdate = true
	}
Does this really need to trigger a rolling update of pods executed by the operator? Won't K8s take care of it once the statefulset is replaced?
I see, but is this wrong too?
https://github.com/zalando/postgres-operator/blob/master/pkg/cluster/cluster.go#L472
Hm, good point. Maybe we can leave it as is for now. With a rolling update we make sure pods immediately adhere to the new constraints.
	InitContainersOld         []v1.Container                `json:"init_containers,omitempty"`
	PodPriorityClassNameOld   string                        `json:"pod_priority_class_name,omitempty"`

	TopologySpreadConstraints []v1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"`
pkg/cluster/k8sres.go (outdated)
	false,
	"",
	false,
	[]v1.TopologySpreadConstraint{},
Atm we reuse configured tolerations for logical backup as well, so I guess we can do the same with constraints.
Ok I got it.
pkg/cluster/k8sres.go (outdated)
	return podAntiAffinity
}

func generateTopologySpreadConstraints(labels labels.Set, topologySpreadConstraintObjs []v1.TopologySpreadConstraint) []v1.TopologySpreadConstraint {
would like to see a unit test for this function :)
I added a unit test for this function.
Can you also write an e2e test that checks that the constraints work as expected, please?
Force-pushed from 530f847 to 18023cb
Can you please provide
	}
}
k8s.api.core_v1.patch_node(master_nodes[0], patch_node_label)
k8s.api.core_v1.patch_node(replica_nodes[0], patch_node_label)
I would expect the e2e test to patch the Postgresql manifest, add topologySpreadConstraints, and then check whether the pods spread evenly. But you're only patching the nodes here?
I updated the e2e test.
Cannot start a pipeline due to: Click on pipeline status check Details link below for more information.
Dear @FxKu |
8 similar comments
Hello, what's next for this issue?
Dear all,
I think we should add topologySpreadConstraints to the pod spec so the pods can spread across zones for high availability.
Could someone review it, please? Thank you very much.
Best regards.