NE-2292: Add origin test for the NO-OLM testing #30883
rhamini3 wants to merge 4 commits into openshift:main
Conversation
|
Skipping CI for Draft Pull Request. |
|
|
@rhamini3: An error was encountered searching for bug NE-2292 on the Jira server at https://issues.redhat.com. No known errors were detected, please see the full error message for details.
Full error message:
No response returned: Get "https://issues.redhat.com/rest/api/2/issue/NE-2292": GET https://issues.redhat.com/rest/api/2/issue/NE-2292 giving up after 5 attempt(s)
Please contact an administrator to resolve this issue, then request a bug refresh.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository. |
|
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: rhamini3. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment. |
|
/test e2e-gcp-ovn-techpreview |
|
/test e2e-gcp-ovn-techpreview |
|
/test e2e-gcp-ovn-techpreview |
|
Job Failure Risk Analysis for sha: 31d592e
|
```go
// deleted, and so we have an explicit delete command here just
// in case. The --ignore-not-found option should prevent errors
// if garbage collection has already deleted the object.
o.Expect(oc.AsAdmin().WithoutNamespace().Run("delete").Args("--ignore-not-found=true", "istio", istioName).Execute()).Should(o.Succeed())
```
This is something you will need to fix: this `g.AfterEach(func() {` is specific to OSSM, and needs to be adjusted to not do OSSM-specific tasks (and possibly to add some Sail Library-specific cleanup).
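One way to address this — a hypothetical sketch, not the PR's actual helper (the function name, the `olmBased` flag, and the Sail-path cleanup target below are all assumptions) — is to branch the `AfterEach` cleanup on the installation mode so the OSSM-specific Istio CR delete only runs on the OLM path:

```go
package main

import "fmt"

// cleanupArgs is a hypothetical sketch of mode-aware AfterEach cleanup:
// the OLM/OSSM path deletes the Istio CR, while the Sail-library path
// would instead clean up whatever Sail-managed resources are left behind.
func cleanupArgs(olmBased bool, istioName string) []string {
	if olmBased {
		// OSSM-specific: the Istio CR may outlive the gateway objects.
		return []string{"delete", "--ignore-not-found=true", "istio", istioName}
	}
	// Sail-library path: no Istio CR exists; placeholder cleanup target.
	return []string{"delete", "--ignore-not-found=true", "deployment", "istiod"}
}

func main() {
	// Each test variant would feed its own mode into the shared cleanup.
	fmt.Println(cleanupArgs(true, "openshift-gateway"))
	fmt.Println(cleanupArgs(false, "openshift-gateway"))
}
```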
|
/test e2e-gcp-ovn-techpreview |
|
/test e2e-gcp-ovn-techpreview |
|
@rhamini3 We should kick this off with my PR, as the tech preview job will fail without it: /testwith openshift/origin/master/e2e-gcp-ovn-techpreview openshift/cluster-ingress-operator#1354 |
|
/testwith openshift/origin/master/e2e-gcp-ovn-techpreview openshift/cluster-ingress-operator#1354 |
|
@gcs278, |
|
/testwith openshift/origin/master/e2e-gcp-ovn-techpreview openshift/cluster-ingress-operator#1354 |
|
@gcs278, |
|
/testwith openshift/origin/main/e2e-gcp-ovn-techpreview openshift/cluster-ingress-operator#1354 |
```go
func skipIfNoOLMFeatureGateEnabled(oc *exutil.CLI) {
	featuregate, err := oc.AsAdmin().Run("get").Args("featuregate", "cluster", "-o=jsonpath='{.status.featureGates[*].enabled[*].name}'").Output()
	o.Expect(err).NotTo(o.HaveOccurred())
	if strings.Contains(featuregate, "GatewayAPIWithoutOLM") {
		g.Skip("Skip test since it requires OLM resources")
	}
}
```
nit: better to use the structured API client rather than a raw `oc` command:
Suggested change:

```go
func skipIfNoOLMFeatureGateEnabled(oc *exutil.CLI) {
	fgs, err := oc.AdminConfigClient().ConfigV1().FeatureGates().Get(context.TODO(), "cluster", metav1.GetOptions{})
	o.Expect(err).NotTo(o.HaveOccurred(), "Error getting cluster FeatureGates.")
	// Skip if the GatewayAPIWithoutOLM feature gate is enabled
	for _, fg := range fgs.Status.FeatureGates {
		for _, enabledFG := range fg.Enabled {
			if enabledFG.Name == "GatewayAPIWithoutOLM" {
				g.Skip("Skip test since it requires OLM resources")
			}
		}
	}
}
```
```diff
@@ -114,7 +114,6 @@ var _ = g.Describe("[sig-network-edge][OCPFeatureGate:GatewayAPIController][Feat
 	// skip non clould platforms since gateway needs LB service
 	skipGatewayIfNonCloudPlatform(oc)
```
We should skip everything, right? If we don't skip everything, we run the non-skipped tests both here and in gatewayapiwitholm.go for tech preview jobs.

Suggested change:

```go
skipGatewayIfNonCloudPlatform(oc)
skipIfNoOLMFeatureGateEnabled(oc)
```
```go
	o.Expect(errCheck).NotTo(o.HaveOccurred(), "GatewayClass %q was not installed and accepted", gatewayClassName)
})

g.It("Ensure custom gatewayclass can be accepted and managed by CIO", func() {
```
Pinging @JoelSpeed (I had claude reword 🐱):
Quick question about feature gate promotion and test migration strategy:
We're promoting GatewayAPIWithoutOLM (TechPreview), which provides an alternative implementation to the existing GatewayAPIController (GA). Both implement the same Gateway API functionality, but through different installation paths:
- GatewayAPIController: OLM-based OSSM installation (current GA)
- GatewayAPIWithoutOLM: CIO-based OSSM installation (TechPreview → GA)
Reference: https://redhat.atlassian.net/browse/NE-2286
Current Situation
We've duplicated our test suite to maintain separate CI signals:
- 4 of 8 tests are identical
- 3 of 8 tests are mostly the same with minor implementation differences (checking Istio CR vs. istiod deployment)
- 1 of 8 tests is completely different (verifying OLM vs. verifying CIO provisioning)
Example of duplication:
[OCPFeatureGate:GatewayAPIController] Ensure HTTPRoute object is created
[OCPFeatureGate:GatewayAPIWithoutOLM] Ensure HTTPRoute object is created
(These are identical tests, just tagged differently)
Questions
1. Is this duplication strategy correct for accumulating separate CI signals during the TechPreview phase?
2. When GatewayAPIWithoutOLM graduates to GA, what's the migration path? Do we simply delete all GatewayAPIController tests and rename the GatewayAPIWithoutOLM tests to be the primary tests, or is there a different recommended approach for test migration?
3. Would renaming the 4 identical tests now to use [OCPFeatureGate:GatewayAPIWithoutOLM] preserve their test history in Sippy, or does changing the test name break historical tracking?
I would not expect you to duplicate the tests, but instead to label the tests with the multiple gates that they are expected to validate
As for renaming, can you poke someone from TRT? They know the semantics of how renaming works better than I do. I believe we don't typically drop the feature gate labels from tests because of some renaming issues, but there is a mapping so that you can rename and retain behaviour — TRT folks will confirm this better than I can.
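To make the labeling suggestion concrete — a hypothetical sketch, since origin's actual tagging helpers may differ — a single shared test could carry one `[OCPFeatureGate:...]` label per gate it validates, instead of being duplicated per gate:

```go
package main

import "fmt"

// testName builds a Ginkgo-style test name carrying one
// [OCPFeatureGate:...] label per feature gate the test validates.
func testName(base string, gates ...string) string {
	name := ""
	for _, gate := range gates {
		name += "[OCPFeatureGate:" + gate + "]"
	}
	return name + " " + base
}

func main() {
	// One test tagged with both gates, instead of two duplicated tests.
	fmt.Println(testName("Ensure HTTPRoute object is created",
		"GatewayAPIController", "GatewayAPIWithoutOLM"))
}
```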
|
e2e-gcp-ovn-techpreview failed as it doesn't have the PR implementation to test against. |
|
@rhamini3: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here. |
```go
_, err := oc.AsAdmin().Run("get").Args("-n", namespace, name).Output()
o.Expect(err).Should(o.Or(o.Not(o.HaveOccurred()), o.MatchError(apierrors.IsNotFound, "NotFound")))
```
I think this always returns without any error, did you mean this?

Suggested change:

```go
_, err := oc.AsAdmin().Run("get").Args("-n", namespace, name).Output()
o.Expect(err).To(o.HaveOccurred(), "Expected resource %s to not exist in namespace %s", name, namespace)
```
```go
}

func checkCIOConditions(oc *exutil.CLI, name string, cond string) error {
	timeout := 20 * time.Minute
```
20 minutes is a lot of time 😄. We do 5 minutes for installation in the CIO presubmit: openshift/cluster-ingress-operator#1372
If you do have time, I think refactoring to share the same functions, mainly on the identical tests, is going to be quite helpful.
In theory, when we go GA with the feature gate, I think we will be deleting the old tests. So helpers won't be that important, however, if we miss GA this release, then it's going to be in TP for a while and having duplicated tests around is not desirable. Probably best we do it right, but it's a lower priority for me.
|
Risk analysis has seen new tests most likely introduced by this PR. New Test Risks for sha: d16a07c
New tests seen in this PR at sha: d16a07c
|
Adding origin testing for CIO provisioning without OLM resources