[OCPERT-264][hold] Test OTE migration in single-module mode with keeping compat_otp #757
ming1013 wants to merge 1 commit into openshift:master from …
Conversation
Walkthrough
This pull request introduces a new …

Changes
…
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
/hold
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: …

The full list of commands accepted by this bot can be found here.

Details
Needs approval from an approver in each of these files: …
Approvers can indicate their approval by writing `/approve` in a comment.
Actionable comments posted: 12
Note: Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (21)
tests-extension/test/e2e/testdata/router/ocp57404-stateful-set.yaml (1)

17-24: ⚠️ Potential issue | 🟠 Major
Add container securityContext to avoid policy failures and reduce privilege

The pod template does not set `allowPrivilegeEscalation: false` or non-root constraints. This weakens security posture and can fail in restricted-enforcement environments used by e2e runs.

Suggested patch

```diff
 spec:
   terminationGracePeriodSeconds: 10
   containers:
   - name: nginx
     image: quay.io/openshifttest/nginx-alpine@sha256:cee6930776b92dc1e93b73f9e5965925d49cff3d2e91e1d071c2f0ff72cbca29
+    securityContext:
+      allowPrivilegeEscalation: false
+      runAsNonRoot: true
+      capabilities:
+        drop:
+        - ALL
+      seccompProfile:
+        type: RuntimeDefault
     ports:
     - containerPort: 8080
       name: web
```

As per coding guidelines: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/ocp57404-stateful-set.yaml` around lines 17 - 24, Add a container-level securityContext to the pod template to prevent privilege escalation and enforce non-root execution: under the container named "nginx" in the "spec" -> "containers" section, add securityContext with allowPrivilegeEscalation: false and runAsNonRoot: true (optionally add capabilities: drop: ["ALL"] for stricter isolation) so the container definition (name: nginx) includes securityContext: { allowPrivilegeEscalation: false, runAsNonRoot: true } to satisfy restricted policy checks.

tests-extension/test/e2e/testdata/router/ingress-with-class.yaml (1)
17-17: ⚠️ Potential issue | 🟠 Major
Use deterministic path matching for e2e stability.

Line 17 uses `ImplementationSpecific`, which can vary by ingress controller and make this fixture flaky across environments. Prefer `Prefix` (or `Exact`) for predictable behavior in migration tests.

Proposed fix

```diff
-        pathType: ImplementationSpecific
+        pathType: Prefix
```

As per coding guidelines: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/ingress-with-class.yaml` at line 17, Replace the non-deterministic ingress pathType value used in the test fixture (the line containing "pathType: ImplementationSpecific") with a deterministic matcher such as "pathType: Prefix" (or "Exact" if you need an exact match) so e2e tests behave consistently across ingress controllers; update the YAML entry where pathType is set to use Prefix to ensure stable path matching in migration tests.

tests-extension/test/e2e/testdata/router/extdns/catalog-source.yaml (1)
13-14: ⚠️ Potential issue | 🟠 Major
Both External DNS and AWS Load Balancer catalog sources use `securityContextConfig: legacy` without documented justification.

Two test fixtures use the less-restrictive `legacy` setting:

- tests-extension/test/e2e/testdata/router/extdns/catalog-source.yaml, line 14
- tests-extension/test/e2e/testdata/router/awslb/catalog-source.yaml, line 14

If the operators don't explicitly require `legacy`, migrate both to `restricted` to avoid weaker SCC assumptions in test environments.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/extdns/catalog-source.yaml` around lines 13 - 14, Update the grpcPodConfig.securityContextConfig key from "legacy" to "restricted" in the External DNS and AWS Load Balancer catalog-source.yaml test fixtures (the extdns catalog-source.yaml and awslb catalog-source.yaml), i.e., change grpcPodConfig.securityContextConfig: legacy -> grpcPodConfig.securityContextConfig: restricted; run the e2e tests and if an operator explicitly requires legacy, document that requirement and keep legacy only for that operator-specific fixture.

tests-extension/test/e2e/testdata/router/rsyslogd-pod.yaml (1)
11-20: ⚠️ Potential issue | 🟠 Major
Harden the test pod security context (privileged is too broad).

Line 12 enables full privilege, which is a high-risk default even in e2e fixtures. Prefer least-privilege settings and only add the specific capability needed for port 514.

Proposed hardening

```diff
   securityContext:
-    privileged: true
+    privileged: false
+    allowPrivilegeEscalation: false
+    readOnlyRootFilesystem: true
+    runAsNonRoot: false
+    capabilities:
+      add:
+      - NET_BIND_SERVICE
```

As per coding guidelines: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/rsyslogd-pod.yaml` around lines 11 - 20, Replace the broad securityContext: privileged: true with a least-privilege policy: remove privileged and add a securityContext that grants only the NET_BIND_SERVICE capability (securityContext.capabilities.add: ["NET_BIND_SERVICE"]) so the container can bind to port 514, and also set runAsNonRoot: true (and optionally a non-root runAsUser) to avoid running as root; update the container spec that contains ports 514/TCP and 514/UDP accordingly.

tests-extension/test/e2e/testdata/router/coreDNS-pod.yaml (1)
8-27: ⚠️ Potential issue | 🟠 Major
Harden this pod/container security context.

Line 9 (`runAsNonRoot: false`) and line 22 (`allowPrivilegeEscalation: true`) grant avoidable privilege, and `readOnlyRootFilesystem` is missing.

Proposed hardening

```diff
 spec:
   securityContext:
-    runAsNonRoot: false
+    runAsNonRoot: true
     seccompProfile:
       type: RuntimeDefault
@@
     securityContext:
-      allowPrivilegeEscalation: true
+      allowPrivilegeEscalation: false
+      readOnlyRootFilesystem: true
+      runAsUser: 65534
       capabilities:
         add:
         - NET_BIND_SERVICE
         drop:
         - ALL
```

As per coding guidelines: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/coreDNS-pod.yaml` around lines 8 - 27, The pod spec's securityContext and container securityContext for the coredns container are too permissive; update securityContext.runAsNonRoot to true, set the container securityContext.allowPrivilegeEscalation to false, and add securityContext.readOnlyRootFilesystem: true for the coredns container (reference container name "coredns" and the top-level "securityContext" and nested "securityContext" fields) while keeping seccompProfile as RuntimeDefault and preserving required capability NET_BIND_SERVICE; ensure capability drops remain and adjust any other conflicting flags.

tests-extension/test/e2e/bindata.mk (1)
7-7: ⚠️ Potential issue | 🟠 Major
Pin the `go-bindata` version to ensure reproducible builds.

Line 7 uses `@latest`, causing non-deterministic tool installation across environments and time. This impacts CI reliability and supply-chain reproducibility.

Proposed fix

```diff
-	GOFLAGS= go install github.com/go-bindata/go-bindata/v3/go-bindata@latest
+	GOFLAGS= go install github.com/go-bindata/go-bindata/v3/go-bindata@v3.1.3
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/bindata.mk` at line 7, Replace the non-deterministic install of go-bindata that uses "@latest" with a pinned semantic version to ensure reproducible builds; locate the install invocation "GOFLAGS= go install github.com/go-bindata/go-bindata/v3/go-bindata@latest" in the bindata make target and change the `@latest` suffix to a specific released tag (e.g., `@v3.x.y`) so CI and local builds always install the exact same tool version.

tests-extension/test/e2e/testdata/router/testpod-60350.yaml (1)
13-15: ⚠️ Potential issue | 🟠 Major
Remove duplicate `name` key in the container spec.

The container mapping defines `name` twice (lines 13 and 15). This is invalid YAML syntax and will cause parsing failures or silent overwrites depending on the parser implementation.

Proposed fix

```diff
   containers:
-  - name: myapp-container
+  - name: testpod-60350
     image: quay.io/openshifttest/busybox@sha256:c5439d7db88ab5423999530349d327b04279ad3161d7596d2126dfb5b02bfd1f
-    name: testpod-60350
     securityContext:
       allowPrivilegeEscalation: false
       capabilities:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/testpod-60350.yaml` around lines 13 - 15, The container spec defines the `name` key twice ("myapp-container" and "testpod-60350"); remove the duplicate so the container mapping only has a single `name` entry. Locate the container block that includes the image "quay.io/openshifttest/busybox@sha256:c5439d7db88ab5423999530349d327b04279ad3161d7596d2126dfb5b02bfd1f" and delete the extraneous `name: testpod-60350` line (or rename it appropriately if you intended a different field), leaving only `name: myapp-container`.

tests-extension/test/e2e/testdata/router/ingresscontroller-tuning.yaml (1)
22-22: ⚠️ Potential issue | 🟠 Major
Replace deprecated node selector label.

Line 22 uses `beta.kubernetes.io/os`, which is deprecated and can cause unschedulable IngressController pods on newer clusters. Update to the stable label `kubernetes.io/os`.

Suggested fix

```diff
-      beta.kubernetes.io/os: linux
+      kubernetes.io/os: linux
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/ingresscontroller-tuning.yaml` at line 22, Update the deprecated node selector key `beta.kubernetes.io/os` to the stable `kubernetes.io/os` in the IngressController manifest: find the nodeSelector (or labels) entry referencing `beta.kubernetes.io/os` and replace that key with `kubernetes.io/os` so pods can be scheduled on newer clusters.

tests-extension/test/e2e/testdata/router/awslb/podsvc.yaml (1)
7-13: ⚠️ Potential issue | 🟠 Major
Add security context to pod and container spec to ensure test compatibility with restricted admission policies.
The pod and container lack security hardening settings that are standard in other test data files (testpod-60492.yaml, testpod-60350.yaml, test-client-pod.yaml). This causes failures under restricted Pod Security admission policies used in OpenShift environments.
Suggested fix
```diff
 spec:
+  securityContext:
+    runAsNonRoot: true
+    seccompProfile:
+      type: RuntimeDefault
   containers:
   - image: quay.io/openshifttest/nginx-alpine@sha256:cee6930776b92dc1e93b73f9e5965925d49cff3d2e91e1d071c2f0ff72cbca29
     name: nginx
+    securityContext:
+      allowPrivilegeEscalation: false
+      readOnlyRootFilesystem: true
+      capabilities:
+        drop:
+        - ALL
     ports:
     - containerPort: 8080
     - containerPort: 8443
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/awslb/podsvc.yaml` around lines 7 - 13, The Pod spec in the YAML is missing Pod/Container securityContext entries which causes failures under restricted Pod Security admission; update the top-level spec to include a pod-level securityContext (e.g., runAsNonRoot: true, runAsUser: 1000, fsGroup: 1000, allowPrivilegeEscalation: false via podSecurityContext where appropriate) and add a container-level securityContext for the nginx container (name: nginx) setting runAsNonRoot: true, runAsUser, readOnlyRootFilesystem: true, allowPrivilegeEscalation: false and dropping all capabilities; ensure these keys are placed under the same spec and under spec.containers for the container so the Pod conforms to restricted policies.

tests-extension/test/e2e/testdata/router/httpbin-deploy.yaml (1)
20-28: ⚠️ Potential issue | 🟠 Major
Add container securityContext for both `httpbin` containers.

Lines 20-28 currently rely on defaults, which can cause test failures in restricted policy environments and weakens the security posture of the test fixture.
Suggested fix
```diff
       containers:
       - name: httpbin-http
         image: quay.io/openshifttest/httpbin@sha256:cc44fbd857f4148d8aad8359acc03efa719517e01d390b152e4f3830ad871c9f
+        securityContext:
+          allowPrivilegeEscalation: false
+          capabilities:
+            drop: ["ALL"]
         ports:
         - containerPort: 8080
       - name: httpbin-https
         image: quay.io/openshifttest/httpbin@sha256:f57f4e682e05bcdadb103c93ae5ab9be166f79bcbbccaf45d92a2cad18da8d64
+        securityContext:
+          allowPrivilegeEscalation: false
+          capabilities:
+            drop: ["ALL"]
         ports:
         - containerPort: 8443
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/httpbin-deploy.yaml` around lines 20 - 28, Add a pod-level securityContext for the two containers (httpbin-http and httpbin-https) by updating the spec.template.spec.containers entries to include a securityContext that enforces non-root execution and minimal privileges (e.g., runAsNonRoot: true, runAsUser: 1000, allowPrivilegeEscalation: false, capabilities.drop: ["ALL"], and readOnlyRootFilesystem: true); make sure the fields are inside each container block (for httpbin-http and httpbin-https) so the test deployment complies with restricted policy environments.

tests-extension/test/e2e/testdata/router/testpod-60492.yaml (1)
13-15: ⚠️ Potential issue | 🟠 Major
Remove duplicate `name` key in container spec (YAML correctness issue).

The container mapping at lines 13–15 defines `name` twice. YAML parsers will silently use the last value (testpod-60492), potentially overriding the intended container name. Strict YAML tooling may reject this entirely.

Suggested fix

```diff
   containers:
-  - name: myapp-container
+  - name: testpod-60492
     image: quay.io/openshifttest/busybox@sha256:c5439d7db88ab5423999530349d327b04279ad3161d7596d2126dfb5b02bfd1f
-    name: testpod-60492
     securityContext:
       allowPrivilegeEscalation: false
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/testpod-60492.yaml` around lines 13 - 15, The container spec contains two duplicate name keys ("myapp-container" and "testpod-60492"); remove the incorrect duplicate so the container mapping has exactly one name field (keep the intended container name "myapp-container" and delete the extra "name: testpod-60492") to ensure valid YAML and avoid overriding the container name.

tests-extension/test/e2e/testdata/router/bug1826225-proh2-deploy.yaml (1)
20-23: ⚠️ Potential issue | 🟠 Major
Add container security context to match suite conventions.

The nginx container lacks explicit `securityContext` settings. Other similar fixtures in the same test directory (e.g., test-client-pod.yaml, testpod-60492.yaml) define container-level security hardening. For consistency and to prevent policy drift on stricter clusters, add the standard hardening pattern:

Proposed change

```diff
       containers:
       - name: nginx
         image: quay.io/openshifttest/nginx-alpine@sha256:cee6930776b92dc1e93b73f9e5965925d49cff3d2e91e1d071c2f0ff72cbca29
+        securityContext:
+          allowPrivilegeEscalation: false
+          capabilities:
+            drop: ["ALL"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/bug1826225-proh2-deploy.yaml` around lines 20 - 23, Add a container-level securityContext to the nginx container definition (container name "nginx") to match suite conventions: set runAsNonRoot: true, allowPrivilegeEscalation: false, readOnlyRootFilesystem: true, drop all capabilities, and include a seccompProfile (runtime/default) and appropriate capabilities.drop list; place this securityContext under the nginx container spec so the pod follows the same hardening as test-client-pod.yaml and testpod-60492.yaml.

tests-extension/test/e2e/route-weight.go (1)
560-561: ⚠️ Potential issue | 🟠 Major
Bug: `srvPod2Name` is not used in the grep pattern.

Lines 560-561 reference `srvPod3Name` twice instead of using `srvPod2Name`. This appears to be a copy-paste error that prevents proper validation of pod2's weight in the HAProxy configuration.

Proposed fix

```diff
-	selectedSrvNum := fmt.Sprintf("cat haproxy.config | grep -E \"server pod:ingress-canary|server pod:%s|server pod:%s|server pod:%s\"| wc -l", srvPod1Name, srvPod3Name, srvPod3Name)
-	selectedWeight1Num := fmt.Sprintf("cat haproxy.config | grep -E \"server pod:ingress-canary|server pod:%s|server pod:%s|server pod:%s\"| grep \"weight 1\" |wc -l", srvPod1Name, srvPod3Name, srvPod3Name)
+	selectedSrvNum := fmt.Sprintf("cat haproxy.config | grep -E \"server pod:ingress-canary|server pod:%s|server pod:%s|server pod:%s\"| wc -l", srvPod1Name, srvPod2Name, srvPod3Name)
+	selectedWeight1Num := fmt.Sprintf("cat haproxy.config | grep -E \"server pod:ingress-canary|server pod:%s|server pod:%s|server pod:%s\"| grep \"weight 1\" |wc -l", srvPod1Name, srvPod2Name, srvPod3Name)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/route-weight.go` around lines 560 - 561, The grep patterns in the fmt.Sprintf calls for selectedSrvNum and selectedWeight1Num mistakenly use srvPod3Name twice; replace the duplicated srvPod3Name with srvPod2Name so the patterns include srvPod1Name, srvPod2Name, srvPod3Name (update the fmt.Sprintf invocations that build selectedSrvNum and selectedWeight1Num accordingly).

tests-extension/test/e2e/aws_util.go (1)
56-58: ⚠️ Potential issue | 🟠 Major
Hardcoded AWS region may cause failures in non-us-west-2 clusters.

The region is hardcoded to `"us-west-2"` for STS/IAM clients, but IAM operations are global and STS endpoints should match the cluster's region. For clusters in other regions (e.g., eu-west-1, ap-southeast-1), this could lead to unexpected behavior or latency issues.

Consider retrieving the cluster's region dynamically using `compat_otp.GetAWSClusterRegion(oc)` or using the global STS endpoint.
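A minimal sketch of the region-aware alternative, assuming `compat_otp.GetAWSClusterRegion(oc)` behaves as described above; `newRegionalConfig` is an illustrative helper, not an existing function in this repo:

```go
package e2e

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
)

// newRegionalConfig loads an aws-sdk-go-v2 config pinned to the given
// region instead of a hardcoded "us-west-2", so STS/IAM clients built
// from it match the cluster's region.
func newRegionalConfig(ctx context.Context, region string) (aws.Config, error) {
	return config.LoadDefaultConfig(ctx, config.WithRegion(region))
}
```

Callers would then build clients with something like `cfg, err := newRegionalConfig(ctx, compat_otp.GetAWSClusterRegion(oc))` before constructing the STS and IAM clients.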
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/aws_util.go` around lines 56 - 58, Replace the hardcoded region passed to config.LoadDefaultConfig("us-west-2") by dynamically resolving the cluster region or using the global STS endpoint: call compat_otp.GetAWSClusterRegion(oc) (or accept an injected region variable) and pass that region into config.LoadDefaultConfig, or omit region and configure the STS client to use the global endpoint; update where cfg is created in the function that calls config.LoadDefaultConfig so IAM/STS clients match the cluster region (refer to config.LoadDefaultConfig and compat_otp.GetAWSClusterRegion to locate the change).

tests-extension/test/e2e/externaldns_util.go (1)
109-114: ⚠️ Potential issue | 🟠 Major
Potential panic if no hosted zones are returned.

Accessing `hostedZoneDetails.HostedZones[0]` without checking if the slice is non-empty could cause a runtime panic if the domain name doesn't match any hosted zones.

🐛 Proposed fix

```diff
 	hostedZoneDetails, err := route53Client.ListHostedZonesByNameWithContext(
 		context.Background(), &route53.ListHostedZonesByNameInput{
 			DNSName: aws.String(domainName)})
 	o.Expect(err).NotTo(o.HaveOccurred())
+	o.Expect(hostedZoneDetails.HostedZones).NotTo(o.BeEmpty(), "No hosted zones found for domain: "+domainName)
 	e2e.Logf("The zone Id of the host domain '%s' is '%s'", *hostedZoneDetails.HostedZones[0].Name, *hostedZoneDetails.HostedZones[0].Id)
 	return strings.TrimPrefix(*hostedZoneDetails.HostedZones[0].Id, "/hostedzone/")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/externaldns_util.go` around lines 109 - 114, The code calls route53Client.ListHostedZonesByNameWithContext and immediately indexes hostedZoneDetails.HostedZones[0], which can panic if the slice is empty; update the logic after ListHostedZonesByNameWithContext to verify len(hostedZoneDetails.HostedZones) > 0 and fail the test with a clear message (using o.Expect or returning an error) if no zones are returned, then safely read Name/Id and continue (references: ListHostedZonesByNameWithContext, hostedZoneDetails.HostedZones, e2e.Logf, o.Expect).

tests-extension/test/e2e/tuning.go (1)
612-624: ⚠️ Potential issue | 🟠 Major
Comment/code mismatch - comment says 50000 but code patches 500000.

The comment at line 612 says "Patch tuningOptions/maxConnections 50000" but lines 613-614 actually patch with `500000`. Additionally, the subsequent assertion at line 622 expects `maxconn 50000`, which contradicts the patched value. This appears to be a test logic error.

🐛 Proposed fix

```diff
-	compat_otp.By("Patch tuningOptions/maxConnections 50000 to the ingress-controller")
-	maxConnections = "500000"
+	compat_otp.By("Patch tuningOptions/maxConnections 500000 to the ingress-controller")
+	maxConnections = "500000"
 	patchResourceAsAdmin(oc, ingctrl.namespace, ingctrlResource, "{\"spec\": {\"tuningOptions\": {\"maxConnections\": "+maxConnections+"}}}")
 	ensureRouterDeployGenerationIs(oc, ingctrl.name, "2")
 	compat_otp.By("Check ROUTER_MAX_CONNECTIONS env in a route pod which should be " + maxConnections)
 	podname = getOneNewRouterPodFromRollingUpdate(oc, ingctrl.name)
 	maxConnSearch := readRouterPodEnv(oc, podname, "ROUTER_MAX_CONNECTIONS")
 	o.Expect(maxConnSearch).To(o.ContainSubstring("ROUTER_MAX_CONNECTIONS=" + maxConnections))
-	compat_otp.By("Check maxconn in haproxy.config which should be 50000")
-	ensureHaproxyBlockConfigContains(oc, podname, "maxconn", []string{"maxconn 50000"})
+	compat_otp.By("Check maxconn in haproxy.config which should be " + maxConnections)
+	ensureHaproxyBlockConfigContains(oc, podname, "maxconn", []string{"maxconn " + maxConnections})
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/tuning.go` around lines 612 - 624, The test is inconsistent: the comment and haproxy assertion expect 50000 but patch uses 500000; fix by making the patched value and subsequent checks consistent (change maxConnections variable to "50000" so patchResourceAsAdmin(...) sends 50000, and keep the readRouterPodEnv/ensureHaproxyBlockConfigContains validations as-is). Update the maxConnections variable and the JSON passed to patchResourceAsAdmin (and any related string-building) so getOneNewRouterPodFromRollingUpdate, readRouterPodEnv, and ensureHaproxyBlockConfigContains all operate on the same 50000 value.

tests-extension/test/e2e/awslb-operator.go (1)
125-134: ⚠️ Potential issue | 🟠 Major
Incorrect subnet count check - comparing string length instead of array element count.

`internalSubnets` is a JSON string (e.g., `["subnet-1","subnet-2"]`), so `len(internalSubnets) > 1` checks the string length, not the number of subnets. A single subnet like `["subnet-12345"]` has length > 1 and would incorrectly pass this check.

🐛 Proposed fix

```diff
 	// ALB provision relies on the number of subnets (should >=2)
 	internalSubnets, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("awsloadbalancercontroller/cluster", "-ojsonpath={.status.subnets.internal}").Output()
 	o.Expect(err).NotTo(o.HaveOccurred())
 	e2e.Logf("The discovered subnets are: %v", internalSubnets)
-	if len(internalSubnets) > 1 {
+	// Count actual subnet entries by counting occurrences, or parse the JSON
+	subnetCount := strings.Count(internalSubnets, "subnet-")
+	if subnetCount >= 2 {
 		waitForLoadBalancerProvision(oc, oc.Namespace(), "ingress-test")
 	} else {
 		output, err := oc.Run("describe").Args("ingress", "ingress-test").Output()
 		o.Expect(err).NotTo(o.HaveOccurred())
 		e2e.Logf("Number of subnets doesn't meet the requirement, see event of ingress: %v", output)
 	}
```
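The AI prompt below suggests `json.Unmarshal` rather than substring counting; a self-contained sketch of that variant (`countSubnets` is illustrative, not an existing helper):

```go
package e2e

import "encoding/json"

// countSubnets parses jsonpath output like ["subnet-a","subnet-b"] and
// returns the number of array elements rather than the string's length.
func countSubnets(raw string) (int, error) {
	var subnets []string
	if err := json.Unmarshal([]byte(raw), &subnets); err != nil {
		return 0, err
	}
	return len(subnets), nil
}
```

With this, `["subnet-12345"]` yields 1 and correctly fails a `>= 2` requirement, whereas `len(internalSubnets) > 1` on the raw string would pass.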
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/awslb-operator.go` around lines 125 - 134, The code is mistakenly checking the string length of internalSubnets instead of the number of subnet entries; to fix, parse internalSubnets (the JSON output from oc.AsAdmin().WithoutNamespace().Run("get").Args(...).Output()) into a []string using json.Unmarshal, check len(parsedSubnets) > 1 to decide whether to call waitForLoadBalancerProvision(oc, oc.Namespace(), "ingress-test"), and on unmarshal error fall back to describing the ingress (the oc.Run("describe").Args("ingress", "ingress-test").Output() branch) while ensuring you o.Expect(err).NotTo(o.HaveOccurred()) for the describe call.

tests-extension/test/e2e/route-admission.go (1)
49-51: ⚠️ Potential issue | 🟠 Major
Modifying shared fixture files in-place causes test pollution.

Using `sed -i''` modifies the original template file `ingresscontroller-np.yaml`. If this test runs before others that use the same template, subsequent tests will see the modified file with "InterNamespaceAllowed" instead of "Strict". This can lead to flaky or failing tests.

Consider copying the template to a temporary file before modification, or using Go's template processing to inject values dynamically without modifying the source file.
🐛 Proposed fix: Use a temporary copy
```diff
+	// Copy template to temp file before modification
+	tempFile, err := os.CreateTemp("", "ingresscontroller-*.yaml")
+	o.Expect(err).NotTo(o.HaveOccurred())
+	defer os.Remove(tempFile.Name())
+	templateContent, err := os.ReadFile(customTemp)
+	o.Expect(err).NotTo(o.HaveOccurred())
+	_, err = tempFile.Write(templateContent)
+	o.Expect(err).NotTo(o.HaveOccurred())
+	tempFile.Close()
-	sedCmd := fmt.Sprintf(`sed -i'' -e 's|Strict|%s|g' %s`, "InterNamespaceAllowed", customTemp)
+	sedCmd := fmt.Sprintf(`sed -i'' -e 's|Strict|%s|g' %s`, "InterNamespaceAllowed", tempFile.Name())
 	_, err := exec.Command("bash", "-c", sedCmd).Output()
 	o.Expect(err).NotTo(o.HaveOccurred())
+	ingctrl.template = tempFile.Name()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/route-admission.go` around lines 49 - 51, The test currently modifies the shared template in-place using sed (see sedCmd and exec.Command) which pollutes fixtures; instead create a temporary copy of the template (ingresscontroller-np.yaml) into customTemp before running sed, or better: read the template into memory and perform the string replacement in Go and write the result to a temp file, then run exec.Command against that temp file and ensure cleanup; update the code around customTemp, sedCmd and the exec.Command call to operate on the isolated temp file so the original fixture is never mutated.

tests-extension/test/e2e/dns.go (1)
875-879: ⚠️ Potential issue | 🟠 Major
Fail the spec when the DaemonSet is still scheduling on the tainted nodes.

These branches only log unexpected misscheduled counts. The test still passes when the scheduler behavior is wrong.

🐛 Proposed fix

```diff
-	if !strings.Contains(podOut, "Number of Nodes Misscheduled: 1") {
-		e2e.Logf("Number of Nodes Misscheduled: 1 is not expected")
-	}
+	o.Expect(podOut).To(o.ContainSubstring("Number of Nodes Misscheduled: 1"))
@@
-	if !strings.Contains(podOut2, "Number of Nodes Misscheduled: 2") {
-		e2e.Logf("Number of Nodes Misscheduled: 2 is not expected")
-	}
+	o.Expect(podOut2).To(o.ContainSubstring("Number of Nodes Misscheduled: 2"))
```

Also applies to: 885-889
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/dns.go` around lines 875 - 879, The test currently only logs when the daemonset output lacks "Number of Nodes Misscheduled: 1" (in the block using podOut from oc.AsAdmin().WithoutNamespace().Run("describe").Args(...)), so the spec still passes on failure; change that logging-only branch to an actual test assertion that fails the spec (e.g., use the test framework's Expect/Fail helpers) when the substring is not present. Update the block that checks podOut and the similar block at the other occurrence (lines 885-889) to assert the presence of "Number of Nodes Misscheduled: 1" (or invert to assert the undesired state is absent) using the existing o.Expect / matcher style used elsewhere in the file so the test fails instead of merely logging.

tests-extension/test/e2e/dns.go (1)
1017-1022: ⚠️ Potential issue | 🟠 Major
Skip on any Windows node, not only on two or more.

The message says this case does not work on clusters with Windows nodes, but `len(windowNodeList) > 1` still runs it on single-Windows-node clusters.

🐛 Proposed fix

```diff
-	if len(windowNodeList) > 1 {
+	if len(windowNodeList) > 0 {
 		g.Skip("This case will not work on clusters having windows nodes")
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/dns.go` around lines 1017 - 1022, The test currently only skips when len(windowNodeList) > 1, so clusters with a single Windows node still run; update the skip check to skip whenever any Windows node exists by changing the condition to len(windowNodeList) > 0 (leave the call to compat_otp.GetAllNodesbyOSType and the g.Skip("This case will not work on clusters having windows nodes") intact so the behavior and message remain consistent).

tests-extension/test/e2e/ingress.go (1)
688-691: ⚠️ Potential issue | 🟠 Major
Re-read the recreated route before asserting its labels.

Line 689 still checks the pre-delete `routeLabels` value, so Step 8.1 never verifies the new route that was created in lines 685-686.

🐛 Proposed fix

```diff
 	compat_otp.By(`8.1: Check the route that all the ingress' labels are synced to it`)
+	routeLabels = getByJsonPath(oc, ns, "route/"+routeName, "{.metadata.labels}")
 	o.Expect(routeLabels).Should(o.And(
 		o.ContainSubstring(`"shard":"beta"`),
 		o.ContainSubstring(`"Market_01":"No1-In-APEC"`)))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/ingress.go` around lines 688 - 691, The assertion is using the stale routeLabels value from before the route was recreated; after recreating the route (the steps immediately before the compat_otp.By call) re-query the route and update the routeLabels variable with the fresh labels (i.e., call the same get/fetch route method used earlier to retrieve labels and assign it to routeLabels) before the o.Expect(...) assertion so Step 8.1 verifies the new route's labels.
🟡 Minor comments (6)
tests-extension/test/e2e/testdata/router/subdomain-routes/route.yaml (1)

7-8: ⚠️ Potential issue | 🟡 Minor
Omit empty `annotations` field to follow Kubernetes best practices.

Line 7 declares `annotations:` with no value (null). While the Kubernetes apiserver accepts this and treats it as an empty map, best practice is to omit the field entirely when no annotations are needed, rather than explicitly setting it to null. This also avoids potential issues with strict validation tools like kubeconform.

Proposed change

```diff
 metadata:
-  annotations:
   labels:
     name: service-unsecure
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/subdomain-routes/route.yaml` around lines 7 - 8, Remove the empty annotations mapping from the route.yaml (delete the "annotations:" key declared with no value) so the file omits the annotations field entirely when none are needed; ensure you leave the "labels:" mapping intact and the YAML remains valid (only add "annotations:" back when you need to supply non-empty annotation key/values).

tests-extension/test/e2e/testdata/router/ipfailover.yaml (1)
22-27: ⚠️ Potential issue | 🟡 Minor
Invalid field: `privileged: true` is not a valid pod spec field.

Line 24 has `privileged: true` at the pod spec level, but this field only exists in container `securityContext`. The field will be ignored by Kubernetes. Since the container already has `securityContext.privileged: true` (line 36), this line should be removed to avoid confusion.

Proposed fix

```diff
     spec:
       serviceAccountName: ipfailover
-      privileged: true
       hostNetwork: true
       nodeSelector:
         node-role.kubernetes.io/worker: ""
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/ipfailover.yaml` around lines 22 - 27, Remove the invalid top-level pod spec field privileged: true (under spec) — this field only belongs in a container's securityContext; leave the existing container securityContext.privileged: true in place (it already grants the privilege) and delete the spec-level privileged entry to avoid Kubernetes ignoring it and causing confusion.

tests-extension/test/e2e/externaldns_util.go (1)
91-100: ⚠️ Potential issue | 🟡 Minor
Modifying fixture file in-place may cause test pollution.

`updateFilebySedCmd` modifies the `aws-sts-creds-secret.yaml` file directly. If the test fails before completion or runs concurrently, subsequent tests may see incorrect values. Consider using a temporary copy or template substitution.
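A minimal sketch of the temp-copy pattern, assuming the caller deletes the returned file when done; `copyToTempFile` is illustrative, not an existing helper in this repo:

```go
package e2e

import "os"

// copyToTempFile writes a private copy of a shared fixture so sed-style
// edits never mutate the original template.
func copyToTempFile(src, pattern string) (string, error) {
	data, err := os.ReadFile(src)
	if err != nil {
		return "", err
	}
	tmp, err := os.CreateTemp("", pattern)
	if err != nil {
		return "", err
	}
	defer tmp.Close()
	if _, err := tmp.Write(data); err != nil {
		os.Remove(tmp.Name())
		return "", err
	}
	return tmp.Name(), nil
}
```

The same pattern would also address the in-place `sed` edit flagged above in route-admission.go.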
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/externaldns_util.go` around lines 91 - 100, createSecretUsingRoleARN currently calls updateFilebySedCmd directly on the fixture file (aws-sts-creds-secret.yaml) which mutates the source and can pollute other tests; instead, copy the fixture to a temporary file (e.g., create a temp file path from testdata.FixturePath result), run updateFilebySedCmd against that temp copy, use the temp file when invoking oc.AsAdmin().Run("create").Args("-f", ...).Output(), and ensure the temp file is removed/cleaned up after the test; update references to awsStsCredSecret and updateFilebySedCmd in createSecretUsingRoleARN accordingly.

tests-extension/test/e2e/tuning.go (1)
196-199: ⚠️ Potential issue | 🟡 Minor
Inconsistent timeout format assertion - possible missing 's' suffix.

Line 198 expects `ROUTER_DEFAULT_SERVER_TIMEOUT=33` while the patch at line 190 sets `serverTimeout: "33s"`. The other assertions (lines 196, 197, 199) include the `s` suffix. This inconsistency suggests a potential bug where the assertion may incorrectly pass or fail.

🐛 Proposed fix

```diff
 	o.Expect(checkenv).To(o.ContainSubstring(`ROUTER_CLIENT_FIN_TIMEOUT=3s`))
 	o.Expect(checkenv).To(o.ContainSubstring(`ROUTER_DEFAULT_CLIENT_TIMEOUT=33s`))
-	o.Expect(checkenv).To(o.ContainSubstring(`ROUTER_DEFAULT_SERVER_TIMEOUT=33`))
+	o.Expect(checkenv).To(o.ContainSubstring(`ROUTER_DEFAULT_SERVER_TIMEOUT=33s`))
 	o.Expect(checkenv).To(o.ContainSubstring(`ROUTER_DEFAULT_SERVER_FIN_TIMEOUT=3s`))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/tuning.go` around lines 196 - 199, The test assertion for ROUTER_DEFAULT_SERVER_TIMEOUT is missing the 's' suffix and is inconsistent with the configured value; update the Expect call that uses checkenv and currently checks for `ROUTER_DEFAULT_SERVER_TIMEOUT=33` to assert `ROUTER_DEFAULT_SERVER_TIMEOUT=33s` so it matches the serverTimeout:"33s" setting and the other timeout assertions.

tests-extension/test/e2e/gatewayapi.go (1)
265-269: ⚠️ Potential issue | 🟡 Minor
Unchecked return values may hide test failures.

`repeatCmdOnClient` return values are ignored at lines 266 and 269. If the Prometheus query fails to return expected results within the retry window, the test will silently continue without proper assertion.

🐛 Proposed fix

```diff
 	curlCmd := []string{"-n", "openshift-monitoring", "-c", "prometheus", "prometheus-k8s-0", "-i", "--", "curl", "-k", "-s", "-H", fmt.Sprintf("Authorization: Bearer %v", token), fmt.Sprintf("%s%s", url, query), "--connect-timeout", "10"}
-	repeatCmdOnClient(oc, curlCmd, `"gateway_class_type":"openshift"`, 300, 1)
+	result, err := repeatCmdOnClient(oc, curlCmd, `"gateway_class_type":"openshift"`, 300, 1)
+	o.Expect(err).NotTo(o.HaveOccurred())
+	o.Expect(result).To(o.ContainSubstring(`"gateway_class_type":"openshift"`))
 	compat_otp.By("4.0: Confirm the metrics values are correct")
-	repeatCmdOnClient(oc, curlCmd, `"1"`, 120, 1)
+	result, err = repeatCmdOnClient(oc, curlCmd, `"1"`, 120, 1)
+	o.Expect(err).NotTo(o.HaveOccurred())
+	o.Expect(result).To(o.ContainSubstring(`"1"`))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/gatewayapi.go` around lines 265 - 269, The two calls to repeatCmdOnClient (the one checking `"gateway_class_type":"openshift"` and the one checking `"1"`) ignore its return value so failures are hidden; capture the return (e.g., ok, out, err or similar signature) from repeatCmdOnClient for both invocations and assert/fail the test when it indicates failure (use the project test helper/assertion used elsewhere—e.g., t.Fatalf/Expect/require.FailWith message or Ginkgo Fail—with a descriptive message including oc, curlCmd and the returned output/error) so the test fails visibly if the Prometheus queries never match.

tests-extension/test/e2e/route-admission.go (1)
173-176: ⚠️ Potential issue | 🟡 Minor
Stale pod reference after rollout may cause flaky assertion.

At line 175, `routerpod` is used to check environment variables, but this is the old pod reference from line 158. After patching at line 172, the deployment rolls out new pods, so `routerpod` may no longer exist or may be terminating. The assertion should use the newly rolled out pod.

🐛 Proposed fix

```diff
 	compat_otp.By("5. Patch the custom ingress controller and set namespaceOwnership to Strict")
 	patchResourceAsAdmin(oc, ingctrl.namespace, "ingresscontrollers/"+ingctrl.name, "{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"Strict\"}}}")
+	ensureRouterDeployGenerationIs(oc, ingctrl.name, "3")
 	compat_otp.By("6. Check the ROUTER_DISABLE_NAMESPACE_OWNERSHIP_CHECK env variable, which should be false")
-	namespaceOwnershipEnv = readRouterPodEnv(oc, routerpod, "ROUTER_DISABLE_NAMESPACE_OWNERSHIP_CHECK")
+	newerRouterpod := getOneNewRouterPodFromRollingUpdate(oc, ingctrl.name)
+	namespaceOwnershipEnv = readRouterPodEnv(oc, newerRouterpod, "ROUTER_DISABLE_NAMESPACE_OWNERSHIP_CHECK")
 	o.Expect(namespaceOwnershipEnv).To(o.ContainSubstring("ROUTER_DISABLE_NAMESPACE_OWNERSHIP_CHECK=false"))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/route-admission.go` around lines 173 - 176, The assertion uses the stale variable routerpod (set before the patch/rollout) which can be terminated during rollout; after applying the patch at the rollout step, re-query/reassign the current router pod before calling readRouterPodEnv. Specifically, after the deployment patch/rollout call (the step around the patch at line 172), wait for the router to be ready and obtain the new pod (e.g., re-run the same pod lookup used earlier or call the helper that returns the current router pod), assign it to routerpod, then call readRouterPodEnv(oc, routerpod, "ROUTER_DISABLE_NAMESPACE_OWNERSHIP_CHECK") and perform the Expect check so the assertion always targets the newly rolled-out pod.
🧹 Nitpick comments (10)
tests-extension/test/e2e/testdata/router/awslb/subscription-src-qe.yaml (1)
7-8: Consider pinning operator version for reproducible e2e runs.

Using a moving `stable-v1` channel with automatic approval can make outcomes drift over time. For stable CI, prefer pinning `startingCSV` (or equivalent version lock) in this fixture.

As per coding guidelines: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/awslb/subscription-src-qe.yaml` around lines 7 - 8, The fixture currently uses a moving channel value "channel: stable-v1" with "installPlanApproval: Automatic"; pin the operator to a reproducible CSV by adding a specific startingCSV entry (e.g., startingCSV: <operatorName>.vX.Y.Z) and keep or adjust installPlanApproval as desired so CI installs a fixed operator version instead of drifting with the channel; update the YAML fields "channel", "startingCSV", and "installPlanApproval" in this manifest to reflect the chosen pinned CSV.

tests-extension/test/e2e/testdata/router/egressfirewall-wildcard.yaml (1)
7-10: Prefer a controlled wildcard domain for stable CI behavior.
Line 9 uses `*.google.com`, which introduces external dependency variance. A test-owned wildcard domain would improve repeatability.

As per coding guidelines: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/egressfirewall-wildcard.yaml` around lines 7 - 10, Replace the external wildcard dnsName "*.google.com" used in the Allow rule with a controlled, test-owned wildcard domain to stabilize CI (e.g., a local or test-only wildcard like "*.test-domain" under your test namespace); update the dnsName value in the Allow block (the entry with type: Allow and dnsName) and keep the subsequent Deny rule intact so the test exercises the same wildcard behaviour without relying on external services.

tests-extension/test/e2e/testdata/router/egressfirewall-multiDomain.yaml (1)
7-15: Reduce e2e flakiness by avoiding third-party domains in allow rules.
Lines 9 and 12 depend on external domains not controlled by CI. Prefer a project-controlled/test-owned domain to keep the test deterministic and reduce intermittent failures.

As per coding guidelines: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/egressfirewall-multiDomain.yaml` around lines 7 - 15, Replace third-party DNS entries in the egress firewall allow rules with project-controlled test domains to avoid CI flakiness: find the two Allow rules that reference dnsName: registry-1.docker.io and dnsName: www.facebook.com and change them to point to a stable test-owned domain (e.g., a fixture domain used by tests or an internal mock service) and update any corresponding test fixtures or DNS mocks used by the e2e suite so the rules and assertions reference that controlled domain instead of external hosts.

tests-extension/test/e2e/testdata/fixtures.go (2)
56-66: Handle the `filepath.Walk` error return.

The error returned by `filepath.Walk` is ignored. If permission changes fail during traversal, the subsequent `os.Rename` may fail with a less clear error.

♻️ Proposed fix

```diff
-	filepath.Walk(extractedPath, func(path string, info os.FileInfo, err error) error {
+	if err := filepath.Walk(extractedPath, func(path string, info os.FileInfo, err error) error {
 		if err != nil {
 			return err
 		}
 		if info.IsDir() {
-			os.Chmod(path, 0755)
+			if chmodErr := os.Chmod(path, 0755); chmodErr != nil {
+				return chmodErr
+			}
 		} else {
-			os.Chmod(path, 0644)
+			if chmodErr := os.Chmod(path, 0644); chmodErr != nil {
+				return chmodErr
+			}
 		}
 		return nil
-	})
+	}); err != nil {
+		panic(fmt.Sprintf("failed to set permissions on extracted files: %v", err))
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/fixtures.go` around lines 56 - 66, The call to filepath.Walk currently ignores its returned error, so failures from os.Chmod during traversal can be lost and cause later operations (like os.Rename) to fail with unclear errors; capture and handle the error returned by filepath.Walk (the invocation that walks extractedPath), e.g., assign its return value to a variable and propagate or log it (return the error from the surrounding function or wrap it with context), ensuring any failures from the os.Chmod calls inside the walker are surfaced to callers.
5-5: Use `os.MkdirTemp` instead of deprecated `ioutil.TempDir`.

`ioutil.TempDir` is deprecated since Go 1.16. Replace with `os.MkdirTemp`.

♻️ Proposed fix

```diff
 import (
 	"fmt"
-	"io/ioutil"
 	"os"
 	"path/filepath"
 	"sort"
 	"strings"
 )
```

And on line 18:

```diff
-	fixtureDir, err = ioutil.TempDir("", "testdata-fixtures-")
+	fixtureDir, err = os.MkdirTemp("", "testdata-fixtures-")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/fixtures.go` at line 5, Replace the deprecated ioutil.TempDir usage and import: remove the "io/ioutil" import and add "os" instead, then update any call to ioutil.TempDir(...) (likely in the fixtures setup code) to os.MkdirTemp(...) keeping the same parameters (dir, pattern) and handling its (string, error) return; also update any error handling or cleanup that references the old function name accordingly.

tests-extension/test/e2e/testdata/router/extdns/sts-exdns-perms-policy.json (1)
9-23: Consider narrowing Route53 resources to test-owned hosted zones in this test fixture.

The policy grants `route53:ChangeResourceRecordSets` on all hosted zones (`hostedzone/*`) and read-only list operations on all resources (`*`). If tests can operate with specific hosted zone ARNs, parameterizing those would align with least-privilege principles.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/extdns/sts-exdns-perms-policy.json` around lines 9 - 23, The policy currently allows route53:ChangeResourceRecordSets on "hostedzone/*" and list actions on "*" which is too broad; update the policy in sts-exdns-perms-policy.json to target the specific hosted zone ARNs used by the tests (replace "hostedzone/*" with the test-owned hosted zone ARN strings) and avoid wildcard resources for list operations by parameterizing the Resource array for "route53:ListHostedZones", "route53:ListResourceRecordSets", and "route53:ListTagsForResource" to the same test zone ARNs or to a minimal set of ARNs supplied by the test harness; ensure the test runner injects those ARNs via the fixture variables so the JSON uses concrete ARNs instead of "*" or wildcards.

tests-extension/test/e2e/testdata/router/awslb/sts-albo-perms-policy.json (1)
11-18: Consider scoping `ec2:CreateTags`/`ec2:DeleteTags` to reduce resource blast radius.

The wildcard subnet resource (`arn:aws:ec2:*:*:subnet/*`) is broad. While acceptable for test fixtures, best practice would constrain by account/region and add tag-based `Condition` keys to limit tagging to resources managed by the load balancer operator.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/awslb/sts-albo-perms-policy.json` around lines 11 - 18, Narrow the ec2:CreateTags and ec2:DeleteTags permission by replacing the wildcard Resource "arn:aws:ec2:*:*:subnet/*" with more specific ARNs scoped to the expected account/region and add tag-based Conditions (e.g., aws:RequestTag or aws:ResourceTag with a specific tag key/value that identifies resources managed by the load balancer operator) so that the statements granting ec2:CreateTags and ec2:DeleteTags only apply to the intended subnets.

tests-extension/test/e2e/testdata/router/awslb/catalog-source.yaml (1)
14-14: This is a test fixture that intentionally uses `securityContextConfig: legacy` as part of a consistent pattern across multiple CatalogSource test fixtures in the codebase. Test fixtures have different security requirements than production code, and changing this would require verification that `restricted` works with the test's operator compatibility requirements. Consider this only if there's evidence that `restricted` is compatible with the test setup.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/testdata/router/awslb/catalog-source.yaml` at line 14, The test fixture sets securityContextConfig: legacy in the CatalogSource YAML; to address the review, either leave it unchanged (if test suites rely on legacy) or change it to securityContextConfig: restricted only after confirming operator compatibility and test environment supports restricted contexts; if you change it, update the CatalogSource manifest (the securityContextConfig key in this test fixture) and run the full e2e test suite and any operator compatibility checks to verify tests still pass and no permissions/PodSecurityPolicy failures occur.

tests-extension/test/e2e/ipfailover.go (1)
318-324: Hard sleep of 125 seconds may cause slow test execution.

The `time.Sleep(125 * time.Second)` is used to wait for the preempt_delay (120s) to expire. While this is necessary for testing the preemption feature, consider adding a comment explaining why polling cannot be used here, or if possible, use a shorter preempt_delay value (e.g., 10s) for faster test execution.
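If a shorter preempt_delay is viable, the fixed sleep could become a poll; a generic, self-contained sketch of the shape such a loop might take (not an existing helper in this suite):

```go
package e2e

import "time"

// pollUntil retries check every interval until it returns true or the
// timeout elapses, an alternative to a fixed 125-second sleep once the
// preempt delay is shortened for tests.
func pollUntil(interval, timeout time.Duration, check func() bool) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if check() {
			return true
		}
		time.Sleep(interval)
	}
	return false
}
```

The `check` callback would wrap whatever VIP-ownership query the test already performs, so the test completes as soon as failover converges instead of always waiting the full delay.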
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/ipfailover.go` around lines 318 - 324, Replace the hard 125s sleep with a clear change: either (preferred) make the preempt delay configurable for tests and set it to a shorter value (e.g., 10s) so the test uses a reduced preempt_delay and then poll for the VIP with waitForPrimaryPod/futurePrimaryPod instead of sleeping, or if changing the delay is not possible add an explicit comment above time.Sleep explaining why polling cannot be used and why the full preempt_delay (120s) must be waited; update references to preempt_delay and keep slicingElement(new_backup, latestpods) and waitForPrimaryPod(oc, ns, futurePrimaryPod[0], virtualIP) intact.

tests-extension/test/e2e/aws_util.go (1)
12-18: Mixed AWS SDK versions (v1 and v2) may cause confusion and compatibility issues.

The file imports both `aws-sdk-go-v2` (lines 12-15) and `aws-sdk-go` v1 (lines 16-18). At line 277, `aws.String("vpc")` from v2 is passed to the v1 EC2 client's `AllocateAddressInput`. While this may work due to similar pointer semantics, mixing SDKs increases maintenance burden and confusion.

Consider migrating to a single SDK version consistently.
Also applies to: 274-280
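A hedged sketch of an all-v2 equivalent for the EIP allocation, assuming the v2 `ec2` service module is (or would become) a dependency; `allocateEIP` is illustrative:

```go
package e2e

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
	"github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

// allocateEIP allocates a VPC Elastic IP using only v2 types, avoiding
// the v1-client-plus-v2-aws.String("vpc") mix flagged above.
func allocateEIP(ctx context.Context) (string, error) {
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return "", err
	}
	out, err := ec2.NewFromConfig(cfg).AllocateAddress(ctx, &ec2.AllocateAddressInput{
		Domain: types.DomainTypeVpc, // v2 uses a typed enum, not a string pointer
	})
	if err != nil {
		return "", err
	}
	return aws.ToString(out.AllocationId), nil
}
```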
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests-extension/test/e2e/aws_util.go` around lines 12 - 18, The file mixes AWS SDK v2 and v1 which causes confusion and potential incompatibilities; update the code to use a single SDK version (preferably AWS SDK for Go v2) by replacing v1 imports and types with v2 equivalents: remove github.com/aws/aws-sdk-go/aws/awserr, github.com/aws/aws-sdk-go/aws/session, and github.com/aws/aws-sdk-go/service/ec2, import github.com/aws/aws-sdk-go-v2/service/ec2 (and ec2/types if needed), change any v1 constructs like ec2.AllocateAddressInput and aws.String("vpc") to the v2 API equivalents (ec2.AllocateAddressInput from v2 and pointers/strings per v2 conventions), and update calls that use the v1 EC2 client to the v2 client methods so all EC2 allocations and error handling consistently use the SDK v2 types and error handling.
tests-extension/test/e2e/cloud_load_balancer.go

```go
publishStatus, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("-n", "openshift-ingress-operator", "dnsrecord", dnsRecordName, `-o=jsonpath={.status.zones[*].conditions[?(@.type == "Published")].status})`).Output()
o.Expect(err).NotTo(o.HaveOccurred())
```
Fix the JSONPath before checking the Published condition.
The argument ends with `.status})`, so this `oc get dnsrecord` call will fail instead of returning the zone statuses.
🐛 Proposed fix
```diff
-	publishStatus, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("-n", "openshift-ingress-operator", "dnsrecord", dnsRecordName, `-o=jsonpath={.status.zones[*].conditions[?(@.type == "Published")].status})`).Output()
+	publishStatus, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("-n", "openshift-ingress-operator", "dnsrecord", dnsRecordName, `-o=jsonpath={.status.zones[*].conditions[?(@.type == "Published")].status}`).Output()
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
publishStatus, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("-n", "openshift-ingress-operator", "dnsrecord", dnsRecordName, `-o=jsonpath={.status.zones[*].conditions[?(@.type == "Published")].status}`).Output()
o.Expect(err).NotTo(o.HaveOccurred())
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests-extension/test/e2e/cloud_load_balancer.go` around lines 133 - 134, The
JSONPath passed to oc in the publishStatus retrieval is malformed (it ends with
`.status})`) causing the
`oc.AsAdmin().WithoutNamespace().Run("get").Args(...).Output()` call to fail;
fix the JSONPath string in the call that assigns publishStatus (the line using
dnsRecordName and `-o=jsonpath=...`) by removing the extraneous closing
parenthesis and ensuring the JSONPath properly selects
`.status.zones[*].conditions[?(@.type=="Published")].status` so the command
returns the zone statuses.
tests-extension/test/e2e/dns-operator.go

```go
outputOpcfg, errOpcfg := oc.AsAdmin().WithoutNamespace().Run("get").Args(
	"dns.operator", "default", `-o=jsonpath={.status.conditions[?(@.type=="Degraded")].message}}`).Output()
o.Expect(errOpcfg).NotTo(o.HaveOccurred())
```
Remove the extra `}` from the degraded-condition JSONPath.

This query is malformed as written, so Step 4 can fail before the actual status assertion runs.
🐛 Proposed fix
```diff
-	outputOpcfg, errOpcfg := oc.AsAdmin().WithoutNamespace().Run("get").Args(
-		"dns.operator", "default", `-o=jsonpath={.status.conditions[?(@.type=="Degraded")].message}}`).Output()
+	outputOpcfg, errOpcfg := oc.AsAdmin().WithoutNamespace().Run("get").Args(
+		"dns.operator", "default", `-o=jsonpath={.status.conditions[?(@.type=="Degraded")].message}`).Output()
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
outputOpcfg, errOpcfg := oc.AsAdmin().WithoutNamespace().Run("get").Args(
	"dns.operator", "default", `-o=jsonpath={.status.conditions[?(@.type=="Degraded")].message}`).Output()
o.Expect(errOpcfg).NotTo(o.HaveOccurred())
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests-extension/test/e2e/dns-operator.go` around lines 96 - 98, The JSONPath
in the oc command is malformed due to an extra closing brace; update the string
passed to oc.AsAdmin().WithoutNamespace().Run("get").Args(...) (where variables
outputOpcfg and errOpcfg are assigned) by removing the extra `}` so the option
becomes -o=jsonpath={.status.conditions[?(@.type=="Degraded")].message}; keep
the rest of the call and error check
(o.Expect(errOpcfg).NotTo(o.HaveOccurred())) unchanged.
tests-extension/test/e2e/externaldns.go

```go
patchStr := "[{\"op\":\"replace\",\"path\":\"/spec/zones/0\",\"value\":" + zoneID + "}]"
patchGlobalResourceAsAdmin(oc, "externaldnses.externaldns.olm.openshift.io/"+crName, patchStr)
```
Quote `zoneID` in the JSON patch.

Line 133 appends a raw string into the JSON body. `zoneID` is a string value, so the patch sent in line 134 is invalid JSON and the update will fail.
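A small self-contained sketch of why `%q` fixes this (the zone ID value used here is hypothetical):

```go
package e2e

import "fmt"

// buildZonePatch emits a quoted, escaped JSON string via %q; raw string
// concatenation would produce "value":Z123EXAMPLE, which is not JSON.
func buildZonePatch(zoneID string) string {
	return fmt.Sprintf(`[{"op":"replace","path":"/spec/zones/0","value":%q}]`, zoneID)
}
```

For example, `buildZonePatch("Z123EXAMPLE")` returns `[{"op":"replace","path":"/spec/zones/0","value":"Z123EXAMPLE"}]`, a valid JSON Patch document.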
🐛 Proposed fix
- patchStr := "[{\"op\":\"replace\",\"path\":\"/spec/zones/0\",\"value\":" + zoneID + "}]"
+ patchStr := fmt.Sprintf(`[{"op":"replace","path":"/spec/zones/0","value":%q}]`, zoneID)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests-extension/test/e2e/externaldns.go` around lines 133 - 134, The JSON
patch currently builds patchStr by concatenating zoneID without quotes,
producing invalid JSON; update the construction of patchStr so the zoneID value
is a quoted JSON string (e.g., insert escaped quotes around zoneID when building
patchStr) before calling patchGlobalResourceAsAdmin; modify the code that
creates patchStr (referencing the patchStr variable and zoneID) to produce a
valid JSON patch payload for the replace operation on /spec/zones/0.
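This finding and the two that follow (the GCP fqdnTemplate/zone patch and the componentRoutes add/remove) share one root cause: JSON patch bodies assembled by string concatenation. A sketch of the alternative, with a hypothetical patchOp type that is not part of this repo: let encoding/json do the quoting and escaping, and rely on omitempty so remove ops carry no value field.

package main

import (
	"encoding/json"
	"fmt"
)

// patchOp mirrors one RFC 6902 operation. Value is omitted when nil,
// so "remove" operations serialize without a value field.
type patchOp struct {
	Op    string      `json:"op"`
	Path  string      `json:"path"`
	Value interface{} `json:"value,omitempty"`
}

// mustPatch marshals the operations into the JSON body expected by
// `oc patch --type=json`.
func mustPatch(ops ...patchOp) string {
	b, err := json.Marshal(ops)
	if err != nil {
		panic(err) // only reachable for unmarshalable values
	}
	return string(b)
}

func main() {
	zoneID := "Z123EXAMPLE" // hypothetical hosted-zone ID
	// String values come out quoted and escaped, so the unquoted-zoneID
	// bug above cannot recur:
	fmt.Println(mustPatch(patchOp{Op: "replace", Path: "/spec/zones/0", Value: zoneID}))
	// And a remove op is emitted without any value:
	fmt.Println(mustPatch(patchOp{Op: "remove", Path: "/spec/componentRoutes"}))
}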
patchStr := "[{\"op\":\"replace\",\"path\":\"/spec/source/fqdnTemplate/0\",\"value\":'{{.Name}}." + baseDomain + "'},{\"op\":\"replace\",\"path\":\"/spec/zones/0\",\"value\":" + zoneName + "}]"
patchGlobalResourceAsAdmin(oc, "externaldnses.externaldns.olm.openshift.io/"+crName, patchStr)
Build the GCP patch with valid JSON string values.
Line 188 uses single quotes for fqdnTemplate and leaves zoneName unquoted. That produces an invalid patch payload, so Line 189 will be rejected.
🐛 Proposed fix
- patchStr := "[{\"op\":\"replace\",\"path\":\"/spec/source/fqdnTemplate/0\",\"value\":'{{.Name}}." + baseDomain + "'},{\"op\":\"replace\",\"path\":\"/spec/zones/0\",\"value\":" + zoneName + "}]"
+ patchStr := fmt.Sprintf(
+ `[{"op":"replace","path":"/spec/source/fqdnTemplate/0","value":%q},{"op":"replace","path":"/spec/zones/0","value":%q}]`,
+ "{{.Name}}."+baseDomain,
+ zoneName,
+ )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests-extension/test/e2e/externaldns.go` around lines 188 - 189, patchStr is
currently built with single quotes around the fqdnTemplate value and with
zoneName unquoted, producing invalid JSON for the patch; update the construction
of patchStr (used with patchGlobalResourceAsAdmin and variables oc/crName) so
both the fqdnTemplate value and zoneName are proper JSON strings — e.g. use
fmt.Sprintf or strconv.Quote to produce a JSON-safe string like
[{"op":"replace","path":"/spec/source/fqdnTemplate/0","value":"{{.Name}}.<baseDomain>."},{"op":"replace","path":"/spec/zones/0","value":"<zoneName>"}]
(ensure the dynamic baseDomain and zoneName are properly quoted/escaped).
removeRoute := fmt.Sprintf("[{\"op\":\"remove\", \"path\":\"/spec/componentRoutes\", \"value\":[{\"hostname\": \"1digit9.apps.%s\", \"name\": \"downloads\", \"namespace\": \"openshift-console\"}]}]}]", getBaseDomain(oc))
addRoute := fmt.Sprintf("[{\"op\":\"add\", \"path\":\"/spec/componentRoutes\", \"value\":[{\"hostname\": \"1digit9.apps.%s\", \"name\": \"downloads\", \"namespace\": \"openshift-console\"}]}]}]", getBaseDomain(oc))
defer patchGlobalResourceAsAdmin(oc, resourceName, removeRoute)
patchGlobalResourceAsAdmin(oc, resourceName, addRoute)
componentRoutes add/remove patches are malformed.
Both payloads have extra closing brackets, and the remove operation should not include a value. As written, the patch calls will not update ingress.config/cluster.
🐛 Proposed fix
- removeRoute := fmt.Sprintf("[{\"op\":\"remove\", \"path\":\"/spec/componentRoutes\", \"value\":[{\"hostname\": \"1digit9.apps.%s\", \"name\": \"downloads\", \"namespace\": \"openshift-console\"}]}]}]", getBaseDomain(oc))
- addRoute := fmt.Sprintf("[{\"op\":\"add\", \"path\":\"/spec/componentRoutes\", \"value\":[{\"hostname\": \"1digit9.apps.%s\", \"name\": \"downloads\", \"namespace\": \"openshift-console\"}]}]}]", getBaseDomain(oc))
+ removeRoute := `[{"op":"remove","path":"/spec/componentRoutes"}]`
+ addRoute := fmt.Sprintf(`[{"op":"add","path":"/spec/componentRoutes","value":[{"hostname":"1digit9.apps.%s","name":"downloads","namespace":"openshift-console"}]}]`, getBaseDomain(oc))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests-extension/test/e2e/ingress.go` around lines 489 - 492, The JSON patch
payloads in removeRoute and addRoute are malformed (extra closing brackets) and
the remove operation must not include a value; update removeRoute to be a single
JSON patch object with op "remove" and path "/spec/componentRoutes" (no value),
and fix addRoute to be a single JSON patch object with op "add", path
"/spec/componentRoutes", and a proper "value" array containing the route entry;
keep the same fmt.Sprintf usage and callsites (removeRoute, addRoute,
patchGlobalResourceAsAdmin, resourceName, getBaseDomain, oc) so the
patchGlobalResourceAsAdmin calls send valid JSON patch bodies.
| "ports": [ | ||
| { | ||
| "containerPort": 8080, | ||
| "securityContext": { | ||
| "privileged": "true" | ||
| } | ||
| } | ||
| ] |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
FILE="tests-extension/test/e2e/testdata/router/httpbin-pod-withprivilege.json"
python - <<'PY'
import json, pathlib, sys
p = pathlib.Path("tests-extension/test/e2e/testdata/router/httpbin-pod-withprivilege.json")
obj = json.loads(p.read_text())
bad = False
for c in obj.get("spec", {}).get("containers", []):
name = c.get("name")
for i, port in enumerate(c.get("ports", [])):
if "securityContext" in port:
print(f"[FAIL] container={name} ports[{i}] has illegal field: securityContext")
bad = True
sc = port["securityContext"]
if isinstance(sc, dict) and "privileged" in sc and not isinstance(sc["privileged"], bool):
print(f"[FAIL] container={name} ports[{i}] securityContext.privileged is not boolean")
bad = True
if bad:
sys.exit(1)
print("[OK] No misplaced port-level securityContext found.")
PY
Repository: openshift/router
Length of output: 371
🏁 Script executed:
cat -n tests-extension/test/e2e/testdata/router/httpbin-pod-withprivilege.json
Repository: openshift/router
Length of output: 1187
🌐 Web query:
Kubernetes Pod securityContext container spec placement
💡 Result:
In Kubernetes, the securityContext field can be placed at two levels in the Pod specification: 1. Pod-level securityContext: Located directly under spec.securityContext (type: PodSecurityContext). This applies security settings to all containers in the Pod, such as runAsUser, runAsGroup, fsGroup, and supplementalGroups. It affects volume ownership and pod-wide settings. 2. Container-level securityContext: Located under spec.containers[].securityContext (type: SecurityContext). This applies settings only to the specific container, such as allowPrivilegeEscalation, readOnlyRootFilesystem, capabilities, runAsNonRoot, and seccompProfile. Container-level settings override overlapping Pod-level settings (e.g., runAsUser). Example YAML structure: apiVersion: v1 kind: Pod metadata: name: example spec: securityContext: # Pod-level runAsUser: 1000 fsGroup: 2000 containers: - name: app image: nginx securityContext: # Container-level (overrides pod-level where applicable) allowPrivilegeEscalation: false readOnlyRootFilesystem: true Pod-level settings provide defaults for all containers; container-level provides overrides for individual containers. Fields like fsGroup are pod-only, while capabilities are container-only.
Citations:
- 1: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
- 2: https://kubernetes.io/docs/concepts/workloads/pods/advanced-pod-config/
- 3: https://oneuptime.com/blog/post/2026-02-09-securitycontext-pod-container-hierarchy/view
Move securityContext to container level and use boolean type for privileged.
securityContext is placed inside ports[] objects (lines 17–19 and 29–31) where Kubernetes does not support it, and privileged incorrectly uses string "true" instead of boolean true. This makes the manifest invalid or causes the privilege setting to be ignored by Kubernetes.
Proposed fix
"spec": {
"containers": [{
"name": "httpbin-http",
"image": "quay.io/openshifttest/httpbin@sha256:cc44fbd857f4148d8aad8359acc03efa719517e01d390b152e4f3830ad871c9f",
+ "securityContext": {
+ "privileged": true
+ },
"ports": [
{
- "containerPort": 8080,
- "securityContext": {
- "privileged": "true"
- }
+ "containerPort": 8080
}
]
},
{
"name": "httpbin-https",
"image": "quay.io/openshifttest/httpbin@sha256:f57f4e682e05bcdadb103c93ae5ab9be166f79bcbbccaf45d92a2cad18da8d64",
+ "securityContext": {
+ "privileged": true
+ },
"ports": [
{
- "containerPort": 8443,
- "securityContext": {
- "privileged": "true"
- }
+ "containerPort": 8443
}
]
}]
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests-extension/test/e2e/testdata/router/httpbin-pod-withprivilege.json`
around lines 14 - 21, The securityContext block is incorrectly nested inside
each ports[] entry and privileged is a string; move the securityContext out of
the ports array to the container spec level (next to the ports field) and change
"privileged": "true" to a boolean privileged: true so the container's
securityContext is valid; update any other ports[] entries that repeat the
misplaced securityContext to use the single container-level securityContext
(refer to the ports array and the container's securityContext/privileged fields
in the manifest).
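Fixture mistakes like this usually only surface when the manifest is applied. A sketch of an earlier guard, assuming the fixture path below and a test-time dependency on sigs.k8s.io/yaml (neither is confirmed by this PR): strict-decode the fixture into the typed API object, which rejects unknown fields such as a securityContext under ports[] and fails on the string "true" where a boolean privileged is expected. Pointed at the Route and Ingress templates reviewed below, the same check would also flag a namespace sitting outside metadata.

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Hypothetical path; adjust to wherever the fixture lives on disk.
	data, err := os.ReadFile("testdata/router/httpbin-pod-withprivilege.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var pod corev1.Pod
	// UnmarshalStrict (sigs.k8s.io/yaml accepts JSON too) rejects fields
	// the Go type does not define and mismatched scalar types, so both
	// problems in this fixture fail here instead of at apply time.
	if err := yaml.UnmarshalStrict(data, &pod); err != nil {
		fmt.Fprintf(os.Stderr, "fixture invalid: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("fixture OK:", pod.Name)
}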
metadata:
  name: ${NAME}
  annotations:
    route.openshift.io/destination-ca-certificate-secret: service-secret
    route.openshift.io/termination: reencrypt
namespace: ${NAMESPACE}
spec:
YAML structure error: namespace is outside metadata block.
The namespace field on line 11 is at the wrong indentation level. It should be nested under metadata, not as a sibling of spec. This will cause the Ingress to be created without the intended namespace in its metadata.
Proposed fix
metadata:
name: ${NAME}
+ namespace: ${NAMESPACE}
annotations:
route.openshift.io/destination-ca-certificate-secret: service-secret
route.openshift.io/termination: reencrypt
- namespace: ${NAMESPACE}
spec:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests-extension/test/e2e/testdata/router/ingress-destCA.yaml` around lines 6
- 12, The YAML has namespace placed as a sibling of spec instead of inside
metadata; move the namespace key so it is nested under the metadata block
alongside name and annotations (i.e., ensure metadata contains name, annotations
(route.openshift.io/destination-ca-certificate-secret), and namespace) so the
resource metadata is correct before the spec.
kind: List
items:
Add missing top-level apiVersion for List.
Line 1 defines kind: List without apiVersion, so this manifest can fail validation/apply.
Proposed fix
+apiVersion: v1
kind: List
items:
As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests-extension/test/e2e/testdata/router/ingress-resource.yaml` around lines
1 - 2, The manifest starts with "kind: List" but lacks a top-level "apiVersion",
causing validation/apply failures; add a top-level "apiVersion: v1" immediately
above or next to "kind: List" so the List resource is valid, and verify that
each child under "items:" retains its own correct apiVersion/kind as needed
(ensure "apiVersion" exists for the List and for individual items).
metadata:
  annotations:
  labels:
    name: service-unsecure
  name: ${SUBDOMAIN_NAME}-unsecure1
namespace: ${NAMESPACE}
spec:
  subdomain: ${SUBDOMAIN_NAME}
  to:
    kind: Service
    name: service-unsecure
namespace is outside the metadata block.
The namespace: field on line 11 is at the same indentation level as spec:, placing it outside metadata:. This is invalid - namespace must be nested under metadata. The same issue repeats for all four Route objects.
🐛 Proposed fix for the first Route (apply similar fix to others)
- apiVersion: route.openshift.io/v1
kind: Route
metadata:
- annotations:
labels:
name: service-unsecure
name: ${SUBDOMAIN_NAME}-unsecure1
- namespace: ${NAMESPACE}
+ namespace: ${NAMESPACE}
spec:
  subdomain: ${SUBDOMAIN_NAME}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@tests-extension/test/e2e/testdata/router/subdomain-routes/ocp51148-route.yaml`
around lines 6 - 16, The namespace field is incorrectly placed at the same level
as spec; move each Route's namespace line into the metadata block (indenting it
to match labels/ name entries) so metadata contains name, labels, annotations
and namespace, and ensure namespace appears before spec for all four Route
objects (e.g., adjust the ${SUBDOMAIN_NAME}-unsecure1 Route so namespace is
nested under metadata alongside labels/name).
output = getByJsonPath(oc, ns, "route/"+routeNames[0], "{.spec.host}")
o.Expect(output).Should(o.ContainSubstring(`service-secure-%s.ocp51980.%s`, ns, baseDomain))
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
gomod=$(fd '^go\.mod$' | head -n1)
echo "go.mod: $gomod"
awk '/github.com\/onsi\/gomega/ {print "gomega version:", $2}' "$gomod"
echo
echo "Local call site:"
sed -n '587,591p' tests-extension/test/e2e/tls.go
echo
python - <<'PY'
import pathlib, re, urllib.request
gomod = next(pathlib.Path('.').rglob('go.mod'))
text = gomod.read_text()
match = re.search(r'github.com/onsi/gomega\s+(\S+)', text)
version = match.group(1) if match else 'main'
url = f'https://raw.githubusercontent.com/onsi/gomega/{version}/matchers.go'
print(f'Fetching {url}')
src = urllib.request.urlopen(url, timeout=15).read().decode()
for i, line in enumerate(src.splitlines(), 1):
if 'func ContainSubstring' in line:
print(f'{i}:{line}')
break
else:
raise SystemExit('ContainSubstring definition not found in fetched source')
PY
Repository: openshift/router
Length of output: 3140
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Checking gomega version in go.mod ==="
grep "github.com/onsi/gomega" go.mod || echo "Not found in go.mod"
echo
echo "=== Checking for vendor directory ==="
if [ -d "vendor/github.com/onsi/gomega" ]; then
echo "Found vendor/github.com/onsi/gomega"
head -50 vendor/github.com/onsi/gomega/matchers.go 2>/dev/null | grep -A5 "func ContainSubstring" || echo "String not found in first 50 lines"
else
echo "No vendor directory for gomega"
fi
echo
echo "=== Finding all ContainSubstring usages in test files ==="
rg "ContainSubstring" tests-extension/test/e2e/ -A1 -B1 --no-heading | head -60
echo
echo "=== Checking imports in tls.go ==="
head -30 tests-extension/test/e2e/tls.go | grep -E "^import|fmt|gomega"
echo
echo "=== Examining the exact lines 589-590 context ==="
sed -n '585,595p' tests-extension/test/e2e/tls.go
Repository: openshift/router
Length of output: 5284
Prefer a single pre-formatted substring for consistency.
Line 590 passes a format string plus ns and baseDomain to o.ContainSubstring(). gomega's ContainSubstring accepts optional Sprintf-style arguments and formats them into the substring, so the call is functionally valid; however, every other usage in the codebase passes a single pre-formatted string. Use fmt.Sprintf to build the string first (the fmt import is already present) so the assertions stay uniform.
Fix
-	o.Expect(output).Should(o.ContainSubstring(`service-secure-%s.ocp51980.%s`, ns, baseDomain))
+	o.Expect(output).Should(o.ContainSubstring(fmt.Sprintf(`service-secure-%s.ocp51980.%s`, ns, baseDomain)))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests-extension/test/e2e/tls.go` around lines 589 - 590, The call to
o.ContainSubstring passes a format string and arguments straight to the
matcher, which works but diverges from every other assertion in the codebase;
for consistency, pass a single formatted string by wrapping the format and
vars with fmt.Sprintf (for example, build the expected host using
fmt.Sprintf("service-secure-%s.ocp51980.%s", ns, baseDomain)) and then call
o.Expect(output).Should(o.ContainSubstring(formattedString)); locate this near
the getByJsonPath call that uses routeNames, ns, and baseDomain and update that
assertion accordingly.
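For reference, a standalone sketch of the two equivalent assertion forms (the hostname values are invented for illustration):

package main

import (
	"fmt"

	g "github.com/onsi/gomega"
)

func main() {
	gm := g.NewGomega(func(message string, _ ...int) { panic(message) })
	host := "service-secure-ns1.ocp51980.example.com"
	// gomega formats trailing args into the substring via fmt.Sprintf,
	// so the three-argument form matches...
	gm.Expect(host).To(g.ContainSubstring("service-secure-%s.ocp51980.%s", "ns1", "example.com"))
	// ...and so does the pre-formatted form the codebase prefers.
	gm.Expect(host).To(g.ContainSubstring(fmt.Sprintf("service-secure-%s.ocp51980.%s", "ns1", "example.com")))
	fmt.Println("both matchers passed")
}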
@ming1013: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard.
Details
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
$ ./tests-extension/bin/router-tests-ext list | grep sig- | head -n 2
"name": "[OTP][sig-network-edge] Network_Edge Component_ALBO [Level0] Author:hongli-ROSA-OSD_CCS-ConnectedOnly-High-51189-Install aws-load-balancer-operator and controller [Serial]",
"name": "[OTP][sig-network-edge] Network_Edge Component_ALBO Author:hongli-ROSA-OSD_CCS-ConnectedOnly-Medium-51191-Provision ALB by creating an ingress [Serial]",
$ ./tests-extension/bin/router-tests-ext run-test "[OTP][sig-network-edge] Network_Edge Component_ALBO [Level0] Author:hongli-ROSA-OSD_CCS-ConnectedOnly-High-51189-Install aws-load-balancer-operator and controller [Serial]"
I0330 14:05:27.353847 159833 test_context.go:567] The --provider flag is not set. Continuing as if --provider=skeleton had been used.
Running Suite: - /home/minl/ote-migration-test/router
Random Seed: 1774850727 - will randomize all specs
Will run 1 of 1 specs
[OTP][sig-network-edge] Network_Edge Component_ALBO [Level0] Author:hongli-ROSA-OSD_CCS-ConnectedOnly-High-51189-Install aws-load-balancer-operator and controller [Serial]
/home/minl/ote-migration-test/router/tests-extension/test/e2e/awslb-operator.go:55
STEP: Creating a kubernetes client @ 03/30/26 14:05:27.686
I0330 14:05:33.928084 159833 client.go:293] configPath is now "/tmp/configfile2130183046"
I0330 14:05:33.928138 159833 client.go:368] The user is now "e2e-test-awslb-rhztg-user"
I0330 14:05:33.928154 159833 client.go:370] Creating project "e2e-test-awslb-rhztg"
I0330 14:05:34.321587 159833 client.go:378] Waiting on permissions in project "e2e-test-awslb-rhztg" ...
I0330 14:05:35.646184 159833 client.go:407] DeploymentConfig capability is enabled, adding 'deployer' SA to the list of default SAs
I0330 14:05:35.974358 159833 client.go:422] Waiting for ServiceAccount "default" to be provisioned...
I0330 14:05:36.735462 159833 client.go:422] Waiting for ServiceAccount "builder" to be provisioned...
I0330 14:05:37.499084 159833 client.go:422] Waiting for ServiceAccount "deployer" to be provisioned...
I0330 14:05:38.257126 159833 client.go:432] Waiting for RoleBinding "system:image-pullers" to be provisioned...
I0330 14:05:38.911672 159833 client.go:432] Waiting for RoleBinding "system:image-builders" to be provisioned...
I0330 14:05:39.573083 159833 client.go:432] Waiting for RoleBinding "system:deployers" to be provisioned...
I0330 14:05:42.393898 159833 client.go:469] Project "e2e-test-awslb-rhztg" has been fully provisioned.
STEP: Ensure the operartor pod is ready 03/30/26 14:05:48.556
STEP: Ensure the operartor pod is ready @ 03/30/26 14:05:48.556
I0330 14:05:55.139005 159833 util.go:383] the Ready status of pod is True
STEP: Create CR AWSLoadBalancerController 03/30/26 14:05:55.139
STEP: Create CR AWSLoadBalancerController @ 03/30/26 14:05:55.139
I0330 14:06:04.180231 159833 util.go:383] the Ready status of pod is
I0330 14:06:04.180310 159833 util.go:385] failed to get pod status: , retrying...
I0330 14:06:09.187385 159833 util.go:383] the Ready status of pod is
I0330 14:06:09.187490 159833 util.go:385] failed to get pod status: , retrying...
I0330 14:06:14.211819 159833 util.go:383] the Ready status of pod is
I0330 14:06:14.211917 159833 util.go:385] failed to get pod status: , retrying...
I0330 14:06:19.185861 159833 util.go:383] the Ready status of pod is
I0330 14:06:19.185969 159833 util.go:385] failed to get pod status: , retrying...
I0330 14:06:24.195661 159833 util.go:383] the Ready status of pod is
I0330 14:06:24.195748 159833 util.go:385] failed to get pod status: , retrying...
I0330 14:06:29.424758 159833 util.go:383] the Ready status of pod is True
I0330 14:06:31.356753 159833 client.go:689] Deleted {user.openshift.io/v1, Resource=users e2e-test-awslb-rhztg-user}, err:
I0330 14:06:31.689199 159833 client.go:689] Deleted {oauth.openshift.io/v1, Resource=oauthclients e2e-client-e2e-test-awslb-rhztg}, err:
I0330 14:06:32.023746 159833 client.go:689] Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens sha256~9Fv69mnGhCq5cxIZOwKeNmImu8CcoxZyPyE12u-_9To}, err:
STEP: Destroying namespace "e2e-test-awslb-rhztg" for this suite. @ 03/30/26 14:06:33.255
• [65.908 seconds]
Ran 1 of 1 Specs in 65.908 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
$ tree tests-extension/test/e2e/ -L 1
tests-extension/test/e2e/
├── awslb-operator.go
├── aws_util.go
├── bindata.mk
├── cloud_load_balancer.go
├── cookies.go
├── dns.go
├── dns-operator.go
├── externaldns.go
├── externaldns_util.go
├── gatewayapi.go
├── haproxy-router.go
├── headers.go
├── ingress.go
├── ingress-operator.go
├── ipfailover.go
├── logging.go
├── microshift.go
├── OWNERS
├── route-admission.go
├── route.go
├── route-hsts.go
├── route-weight.go
├── testdata
├── tls.go
├── tuning.go
└── util.go
$ grep -r -ni compat_otp | wc -l
2336
$ cd ../../../
minl@minl-thinkpadx1carbongen13:~/ote-migration-test/router$ podman build -t router:test -f ./images/router/haproxy/Dockerfile.ocp .
[1/2] STEP 1/5: FROM registry.ci.openshift.org/ocp/builder:rhel-9-golang-1.25-openshift-4.22 AS test-extension-builder
[1/2] STEP 2/5: RUN mkdir -p /go/src/github.com/openshift/router
--> Using cache 276ce06d3d587abcc3b2aaf1fc07cbc76bdd26a586634a3a210327ba0abe8362
--> 276ce06d3d58
[1/2] STEP 3/5: WORKDIR /go/src/github.com/openshift/router
--> Using cache a0a8fc0adeb86c7866d4f911f376806250ab6cc0429b2171d33fda3f9339e266
--> a0a8fc0adeb8
[1/2] STEP 4/5: COPY . .
--> fe8d652ec5f5
[1/2] STEP 5/5: RUN cd tests-extension && make build && cd bin && tar -czvf router-test-extension.tar.gz router-tests-ext && rm -f router-tests-ext
Building extension binary...
make[1]: Entering directory '/go/src/github.com/openshift/router/tests-extension/test/e2e'
Installing go-bindata...
go: downloading github.com/go-bindata/go-bindata/v3 v3.1.3
go: downloading github.com/go-bindata/go-bindata v3.1.2+incompatible
Generating bindata...
/go/bin/go-bindata -nocompress -nometadata
-pkg testdata -o testdata/bindata.go -prefix "testdata" testdata/...
✅ Bindata generated successfully
make[1]: Leaving directory '/go/src/github.com/openshift/router/tests-extension/test/e2e'
GOTOOLCHAIN=auto GOSUMDB=sum.golang.org go build -o bin/router-tests-ext ./cmd
✅ Binary built: bin/router-tests-ext
router-tests-ext
--> 7cb6f486a94a
[2/2] STEP 1/10: FROM registry.ci.openshift.org/ocp/4.22:haproxy-router-base
[2/2] STEP 2/10: RUN INSTALL_PKGS="socat haproxy28 rsyslog procps-ng util-linux" && yum install -y --setopt=install_weak_deps=0 $INSTALL_PKGS && rpm -V $INSTALL_PKGS && yum clean all && mkdir -p /var/lib/haproxy/router/{certs,cacerts,allowlists} && mkdir -p /var/lib/haproxy/{conf/.tmp,run,bin,log,mtls} && touch /var/lib/haproxy/conf/{{os_http_be,os_edge_reencrypt_be,os_tcp_be,os_sni_passthrough,os_route_http_redirect,cert_config,os_wildcard_domain}.map,haproxy.config} && setcap 'cap_net_bind_service=ep' /usr/sbin/haproxy && chown -R :0 /var/lib/haproxy && chmod -R g+w /var/lib/haproxy && sed -i 's/SECLEVEL=2/SECLEVEL=1/g' /etc/crypto-policies/back-ends/opensslcnf.config
ART yum/dnf wrapper [2]: Checking for CI build pod repo definitions...
ART yum/dnf wrapper [2]: Did not detect that this script is running in a CI build pod. Will not install CI repositories.
ART yum/dnf wrapper [2]: Installed repos that can be used when connected to the VPN.
ART yum/dnf wrapper [2]: DNF will search for repo files in: /etc/yum.repos.art/localdev
ART yum/dnf wrapper [2]: Configured repositories in rh-vpn-repos.repo:
ART yum/dnf wrapper [2]: - rhel-9-appstream-rpms-x86_64
ART yum/dnf wrapper [2]: - rhel-9-appstream-rpms-aarch64
ART yum/dnf wrapper [2]: - rhel-9-appstream-rpms-s390x
ART yum/dnf wrapper [2]: - rhel-9-appstream-rpms-ppc64le
ART yum/dnf wrapper [2]: - rhel-9-baseos-rpms-x86_64
ART yum/dnf wrapper [2]: - rhel-9-baseos-rpms-aarch64
ART yum/dnf wrapper [2]: - rhel-9-baseos-rpms-s390x
ART yum/dnf wrapper [2]: - rhel-9-baseos-rpms-ppc64le
ART yum/dnf wrapper [2]: - rhel-9-codeready-builder-rpms-x86_64
ART yum/dnf wrapper [2]: - rhel-9-codeready-builder-rpms-aarch64
ART yum/dnf wrapper [2]: - rhel-9-codeready-builder-rpms-s390x
ART yum/dnf wrapper [2]: - rhel-9-codeready-builder-rpms-ppc64le
ART yum/dnf wrapper [2]: - rhel-9-fast-datapath-rpms-x86_64
ART yum/dnf wrapper [2]: - rhel-9-fast-datapath-rpms-aarch64
ART yum/dnf wrapper [2]: - rhel-9-fast-datapath-rpms-s390x
ART yum/dnf wrapper [2]: - rhel-9-fast-datapath-rpms-ppc64le
ART yum/dnf wrapper [2]: - rhel-9-server-ironic-rpms-x86_64
ART yum/dnf wrapper [2]: - rhel-9-server-ironic-rpms-aarch64
ART yum/dnf wrapper [2]: - rhel-9-server-ironic-rpms-s390x
ART yum/dnf wrapper [2]: - rhel-9-server-ironic-rpms-ppc64le
ART yum/dnf wrapper [2]: - rhel-9-server-ose-rpms-x86_64
ART yum/dnf wrapper [2]: - rhel-9-server-ose-rpms-aarch64
ART yum/dnf wrapper [2]: - rhel-9-server-ose-rpms-s390x
ART yum/dnf wrapper [2]: - rhel-9-server-ose-rpms-ppc64le
(microdnf:69): librhsm-WARNING **: 06:32:27.068: Found 0 entitlement certificates
(microdnf:69): librhsm-WARNING **: 06:32:27.069: Found 0 entitlement certificates
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Package Repository Size
Installing:
haproxy28-2.8.18-1.rhaos4.22.el9.x86_64 rhel-9-server-ose-rpms-x86_64 2.5 MB
libestr-0.1.11-4.el9.x86_64 rhel-9-appstream-rpms-x86_64 29.2 kB
libfastjson-0.99.9-5.el9.x86_64 rhel-9-appstream-rpms-x86_64 41.5 kB
rsyslog-8.2506.0-2.el9.x86_64 rhel-9-appstream-rpms-x86_64 836.7 kB
Transaction Summary:
Installing: 4 packages
Reinstalling: 0 packages
Upgrading: 0 packages
Obsoleting: 0 packages
Removing: 0 packages
Downgrading: 0 packages
Downloading packages...
warning: Unsupported version of key: V6
(microdnf:69): libdnf-WARNING **: 08:15:23.888: failed to parse public key for /etc/pki/rpm-gpg/RPM-GPG-KEY-PQC-redhat-release
Running transaction test...
Installing: libfastjson;0.99.9-5.el9;x86_64;rhel-9-appstream-rpms-x86_64
Installing: libestr;0.1.11-4.el9;x86_64;rhel-9-appstream-rpms-x86_64
Installing: rsyslog;8.2506.0-2.el9;x86_64;rhel-9-appstream-rpms-x86_64
Installing: haproxy28;2.8.18-1.rhaos4.22.el9;x86_64;rhel-9-server-ose-rpms-x86_64
Complete.
ART yum/dnf wrapper [192]: DNF will search for repo files in: /etc/yum.repos.art/localdev
ART yum/dnf wrapper [192]: Configured repositories in redhat.repo:
ART yum/dnf wrapper [192]: Configured repositories in rh-vpn-repos.repo:
ART yum/dnf wrapper [192]: - rhel-9-appstream-rpms-x86_64
ART yum/dnf wrapper [192]: - rhel-9-appstream-rpms-aarch64
ART yum/dnf wrapper [192]: - rhel-9-appstream-rpms-s390x
ART yum/dnf wrapper [192]: - rhel-9-appstream-rpms-ppc64le
ART yum/dnf wrapper [192]: - rhel-9-baseos-rpms-x86_64
ART yum/dnf wrapper [192]: - rhel-9-baseos-rpms-aarch64
ART yum/dnf wrapper [192]: - rhel-9-baseos-rpms-s390x
ART yum/dnf wrapper [192]: - rhel-9-baseos-rpms-ppc64le
ART yum/dnf wrapper [192]: - rhel-9-codeready-builder-rpms-x86_64
ART yum/dnf wrapper [192]: - rhel-9-codeready-builder-rpms-aarch64
ART yum/dnf wrapper [192]: - rhel-9-codeready-builder-rpms-s390x
ART yum/dnf wrapper [192]: - rhel-9-codeready-builder-rpms-ppc64le
ART yum/dnf wrapper [192]: - rhel-9-fast-datapath-rpms-x86_64
ART yum/dnf wrapper [192]: - rhel-9-fast-datapath-rpms-aarch64
ART yum/dnf wrapper [192]: - rhel-9-fast-datapath-rpms-s390x
ART yum/dnf wrapper [192]: - rhel-9-fast-datapath-rpms-ppc64le
ART yum/dnf wrapper [192]: - rhel-9-server-ironic-rpms-x86_64
ART yum/dnf wrapper [192]: - rhel-9-server-ironic-rpms-aarch64
ART yum/dnf wrapper [192]: - rhel-9-server-ironic-rpms-s390x
ART yum/dnf wrapper [192]: - rhel-9-server-ironic-rpms-ppc64le
ART yum/dnf wrapper [192]: - rhel-9-server-ose-rpms-x86_64
ART yum/dnf wrapper [192]: - rhel-9-server-ose-rpms-aarch64
ART yum/dnf wrapper [192]: - rhel-9-server-ose-rpms-s390x
ART yum/dnf wrapper [192]: - rhel-9-server-ose-rpms-ppc64le
(microdnf:224): librhsm-WARNING **: 08:15:24.674: Found 0 entitlement certificates
(microdnf:224): librhsm-WARNING **: 08:15:24.676: Found 0 entitlement certificates
Complete.
--> 098b45e03fa7
[2/2] STEP 3/10: COPY images/router/haproxy/ /var/lib/haproxy/
--> af9796398744
[2/2] STEP 4/10: COPY --from=test-extension-builder /go/src/github.com/openshift/router/tests-extension/bin/router-test-extension.tar.gz /usr/bin/
--> 909e24f26905
[2/2] STEP 5/10: LABEL io.k8s.display-name="OpenShift HAProxy Router" io.k8s.description="This component offers ingress to an OpenShift cluster via Ingress and Route rules." io.openshift.tags="openshift,router,haproxy"
--> 594bd8c7164e
[2/2] STEP 6/10: USER 1001
--> aaa9e7748b61
[2/2] STEP 7/10: EXPOSE 80 443
--> fb054c311c70
[2/2] STEP 8/10: WORKDIR /var/lib/haproxy/conf
--> bf33b7b3a9e4
[2/2] STEP 9/10: ENV TEMPLATE_FILE=/var/lib/haproxy/conf/haproxy-config.template RELOAD_SCRIPT=/var/lib/haproxy/reload-haproxy
--> bcdc36470e5e
[2/2] STEP 10/10: ENTRYPOINT ["/usr/bin/openshift-router", "--v=2"]
[2/2] COMMIT router:test
--> 5b9551e116b6
Successfully tagged localhost/router:test
5b9551e116b640a4a36fcb2a2db9733f2588d92d4ba70bf84f0a3f32cd67d642
$ podman run --rm --entrypoint ls router:test -lh /usr/bin/router-test-extension.tar.gz
-rw-r--r--. 1 root root 120M Mar 30 06:31 /usr/bin/router-test-extension.tar.gz