
OCPBUGS-77015: wait for patch port to apply drop garp flows#2910

Open
bpickard22 wants to merge 1 commit into openshift:master from bpickard22:arp-patch-port-wait

Conversation

@bpickard22
Contributor

@bpickard22 bpickard22 commented Feb 20, 2026

There is currently a race in the ovnkube-lib script: we try to add drop flows to the patch port to prevent duplicate GARPs, but if the patch port isn't ready yet, ovn-controller can connect to the db and create the patch port without the flow.

We need to wait for the patch port to exist before we try to add the GARP drop flow.

Also added some logs for better visibility.

Assisted-By: Claude Sonnet 4.5

cc @pperiyasamy

Summary by CodeRabbit

  • Bug Fixes
    • Added automatic retry-and-wait for bridge integration setup with periodic delays and a non-blocking background start.
    • Improved INFO logs when retries are occurring and when integration steps are applied.
    • Emit a WARN if retries exhaust without success and explicitly report failure when the required port never appears.

@coderabbitai

coderabbitai bot commented Feb 20, 2026

Walkthrough

Updated OVN-Kubernetes script lib: add_garp_drop_flow() now returns explicit exit codes (0 on success, 1 when patch port not found) and returns immediately when matched. Added wait-and-add-garp-drop-flow() which retries add_garp_drop_flow() up to 60 times with 2s sleeps and enhanced logging; callers updated to use it in background for br-ex.

Changes

Cohort / File(s) Summary
GARP Drop Flow Retry Logic
bindata/network/ovn-kubernetes/common/008-script-lib.yaml
Reworked add_garp_drop_flow() to return 0 on success and 1 when no matching patch port is found (immediate return on match). Added wait-and-add-garp-drop-flow() that retries add_garp_drop_flow() up to 60 times with 2s sleeps, logs INFO when waiting/adding, logs WARN on final repeated failure, and updated call sites to invoke the wrapper in background for br-ex (e.g., (wait-and-add-garp-drop-flow br-ex &)).
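The retry wrapper described above can be sketched as a runnable approximation. The real add_garp_drop_flow shells out to ovs-vsctl/ovs-ofctl; here it is replaced by a stub that succeeds on the third attempt so only the retry logic is exercised. Function and variable names follow the walkthrough; the stub's behavior is invented for illustration.

```shell
#!/usr/bin/env bash
# Stub for the real helper, which queries OVS for the patch port and
# installs the drop flow. Here it "finds the port" on the third attempt.
ATTEMPTS=0
add_garp_drop_flow() {
  ATTEMPTS=$((ATTEMPTS + 1))
  [[ "${ATTEMPTS}" -ge 3 ]] && return 0   # 0: flow added
  return 1                                # 1: patch port not found yet
}

# Retry wrapper: keeps calling add_garp_drop_flow until it succeeds or
# the retry budget is exhausted, then warns instead of failing hard.
wait-and-add-garp-drop-flow() {
  local bridge="$1"
  local max_retries=60
  local retries=0
  until add_garp_drop_flow "${bridge}"; do
    (( retries += 1 ))
    if [[ "${retries}" -ge "${max_retries}" ]]; then
      echo "WARN - patch port never appeared on ${bridge}, giving up"
      return 1
    fi
    sleep 0.01   # the real script sleeps 2s between attempts
  done
  echo "INFO - GARP drop flow applied on ${bridge}"
}

# The call site backgrounds the wrapper so startup is not blocked:
#   (wait-and-add-garp-drop-flow br-ex &)
wait-and-add-garp-drop-flow br-ex
```

Note that hyphenated function names such as wait-and-add-garp-drop-flow are accepted by bash (though not by POSIX sh), which is why the script can use that naming.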

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 5 passed

  • Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: The pull request title accurately summarizes the main change: introducing logic to wait for the patch port before applying drop GARP flows, which directly addresses the race condition described in the PR objectives.
  • Docstring Coverage — ✅ Passed: No functions found in the changed files to evaluate docstring coverage; skipping the check.
  • Stable And Deterministic Test Names — ✅ Passed: The modified file is a YAML configuration file containing bash scripts for network operations, not test code with test definitions or test names.
  • Test Structure And Quality — ✅ Passed: The PR modifies only a bash script library file, not test code. The custom check assesses Go test code quality using Ginkgo patterns, which is not applicable.


@openshift-ci
Contributor

openshift-ci bot commented Feb 20, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: bpickard22
Once this PR has been reviewed and has the lgtm label, please assign abhat for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@coderabbitai bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
bindata/network/ovn-kubernetes/common/008-script-lib.yaml (1)

244-252: ⚠️ Potential issue | 🟠 Major

Validate of_port readiness and ovs-ofctl success before returning.

The function returns 0 immediately after calling ovs-ofctl add-flow, without checking if of_port is -1 (port not ready) or if the command failed. This causes the retry wrapper to exit prematurely, even when the patch port isn't ready or the flow didn't apply, defeating the race condition fix.

Before returning success, validate that of_port is not -1 or empty, and check the ovs-ofctl exit status:

Suggested guard & status check
         of_port=$(ovs-vsctl get interface "$port_name" ofport)
+        if [[ -z "${of_port}" || "${of_port}" == "-1" ]]; then
+          return 1
+        fi
-        ovs-ofctl add-flow "$bridge" "cookie=$cookie,table=0,priority=$priority,in_port=$of_port,arp,arp_op=1,actions=drop" > /dev/null
-        echo "$(date -Iseconds) - INFO - Added GARP drop flow on $bridge port $port_name (ofport=$of_port)"
-        return 0
+        if ovs-ofctl add-flow "$bridge" "cookie=$cookie,table=0,priority=$priority,in_port=$of_port,arp,arp_op=1,actions=drop" > /dev/null; then
+          echo "$(date -Iseconds) - INFO - Added GARP drop flow on $bridge port $port_name (ofport=$of_port)"
+          return 0
+        fi
+        return 1
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bindata/network/ovn-kubernetes/common/008-script-lib.yaml` around lines 244 -
252, The loop adds a GARP drop flow then returns success immediately without
verifying the interface ofport or the ovs-ofctl result; update the code around
the ovs-vsctl/ovs-ofctl calls (symbols: ovs-vsctl, of_port, ovs-ofctl, return 0,
bridge, port_name) to first ensure of_port is neither empty nor "-1" and then
check the exit status of the ovs-ofctl add-flow command (capture its return
code) before printing the success message and returning 0; if of_port is invalid
or ovs-ofctl failed, do not return—log the failure and allow the retry wrapper
to continue/try the next port or repeat instead.

@bpickard22
Contributor Author

/retest

@bpickard22
Contributor Author

CI is failing because the wait-and-add-drop-flows function I added is blocking. It currently has a retry window of 60 seconds, but in a CI cluster it takes longer than 60 seconds for the patch port to come up, so the container errors out, causing the failures.

I think there are two ways we could go about this fix:

  1. Run the GARP flow check in the background so it waits for the patch port to come up after ovnkube-node starts.
    This will work, but I don't love it. We can make it fail gracefully, but then we could hit the same issue where we try to add the drop flows to a patch port that doesn't exist, and when ovn-controller creates the patch port from the db, it will not have the flows.
    I also think this will be difficult to debug.

  2. Fix this in ovn-k.
    We could remove the drop-flows logic here and add the drop flows there, where it is easier to track the state of the patch port. I will work on a PR for this in the meantime anyway, but would like your thoughts @pperiyasamy @arghosh93.

I think we can avoid the race between ovn-controller and the ovnkube script more easily with this solution, but it will be a bigger change.

@bpickard22
Contributor Author

I think for now I am going to go with a mix: I will make the check run in the background and fail gracefully, which should provide some relief. In the meantime I will work on a fix in ovn-k; when that merges, we can remove the bash workaround here for the drop flows.
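A standalone sketch of that "background it and fail gracefully" approach: every name here is hypothetical, and the patch port's existence is simulated with a marker file rather than real OVS state.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: a best-effort waiter that is detached from the main
# startup path and logs a warning instead of failing when it gives up.
READY_MARKER=/tmp/garp-demo-port-ready   # stand-in for "the patch port exists"
rm -f "${READY_MARKER}"

best_effort_wait() {
  local tries=0 max_tries=20             # stands in for the 60 x 2s window
  while [[ ! -e "${READY_MARKER}" ]]; do
    if (( ++tries >= max_tries )); then
      echo "WARN - gave up waiting, continuing without the drop flow"
      return 0                           # graceful: never fail the container
    fi
    sleep 0.05
  done
  echo "INFO - port ready, flow would be added here"
}

( best_effort_wait & )   # detached subshell: startup is not blocked
echo "startup continues immediately"
```

The trade-off is exactly the one noted above: because the waiter can give up without failing the container, a slow patch port can still end up without the flow, which is why the proper fix belongs in ovn-k.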

@coderabbitai bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
bindata/network/ovn-kubernetes/common/008-script-lib.yaml (1)

245-257: ⚠️ Potential issue | 🟠 Major

Return success only after the flow is actually installed.

With the new wait loop, returning 0 immediately after a port name match can stop retries even if of_port is still -1 or ovs-ofctl add-flow fails. That would leave the drop flow missing and reintroduce the race. Please gate the success path on a valid of_port and a successful ovs-ofctl call.

Proposed fix
         if [[ "$port_name" == *to-br-int ]]; then
           local of_port
           of_port=$(ovs-vsctl get interface "$port_name" ofport)
-          ovs-ofctl add-flow "$bridge" "cookie=$cookie,table=0,priority=$priority,in_port=$of_port,arp,arp_op=1,actions=drop" > /dev/null
-          echo "$(date -Iseconds) - INFO - Added GARP drop flow on $bridge port $port_name (ofport=$of_port)"
-          return 0
+          if [[ "${of_port}" -gt 0 ]] && ovs-ofctl add-flow "$bridge" "cookie=$cookie,table=0,priority=$priority,in_port=$of_port,arp,arp_op=1,actions=drop" > /dev/null; then
+            echo "$(date -Iseconds) - INFO - Added GARP drop flow on $bridge port $port_name (ofport=$of_port)"
+            return 0
+          fi
         fi
       done
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bindata/network/ovn-kubernetes/common/008-script-lib.yaml` around lines 245 -
257, The loop currently returns success as soon as a port name match is found
even if of_port is "-1" or ovs-ofctl failed; update the block that handles
matching port_name so that after setting of_port (from ovs-vsctl get interface
... ofport) you verify of_port is a valid number (not "-1") and then run
ovs-ofctl add-flow and check its exit status before logging and returning 0; if
of_port is invalid or ovs-ofctl fails, do not return success — either continue
scanning other ports or let the function fall through to the final "return 1" so
retries can happen. Ensure you reference the variables and commands used
(port_name, of_port, bridge, cookie, priority, ovs-vsctl get interface,
ovs-ofctl add-flow) when implementing the checks.

ℹ️ Review info

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between dbbd6ad and 0e93809.

📒 Files selected for processing (1)
  • bindata/network/ovn-kubernetes/common/008-script-lib.yaml

@bpickard22
Contributor Author

/retest-required

@bpickard22
Contributor Author

None of the failing jobs have passed for some time, so I think it should be safe to override, right @jluhrsen?

@bpickard22 bpickard22 changed the title wait for patch port to apply drop garp flows OCPBUGS-77015: wait for patch port to apply drop garp flows Feb 24, 2026
@openshift-ci-robot added the labels jira/severity-critical (referenced Jira bug's severity is critical for the branch this PR is targeting), jira/valid-reference (this PR references a valid Jira ticket of any type), and jira/invalid-bug (a referenced Jira bug is invalid for the branch this PR is targeting) on Feb 24, 2026.
@openshift-ci-robot
Contributor

@bpickard22: This pull request references Jira Issue OCPBUGS-77015, which is invalid:

  • expected the bug to target the "4.22.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

      # Wait for patch port to exist and add GARP drop flow.
      wait-and-add-garp-drop-flow() {
        local bridge="$1"
        local max_retries=90
Contributor

Did you check comment #2801 (comment) on the PR which added this GARP drop flow?

This comment basically says that this workaround is for hard reboots or container restarts and should not require a retry, as during a graceful node reboot we add GARP drop flows while stopping the nodeControllerManager.

https://github.com/ovn-kubernetes/ovn-kubernetes/blob/master/go-controller/pkg/node/bridgeconfig/bridgeflows.go#L65-L76

Contributor Author

That's the whole problem, I believe. We have to ensure (or in this case, best-effort) that the drop flow gets onto the patch port.
There is a race between this script (where we write the flow) and ovn-controller creating the patch port: if the port is not up when we try to write the flow, we just continue, and then ovn-controller creates the patch port without the flow.

@bpickard22
Contributor Author

/retest-required

@bpickard22
Contributor Author

Also, to address the CodeRabbit comment: initially I was trying to gate the script to ensure that adding the drop flows returns successfully, but in CI clusters it takes too long for the patch port to come up, so it would always time out, causing the container to fail and CI to error out. This workaround should provide relief via more success in adding the drop flows to the port, but there are scenarios in which it will not. The fix here -> ovn-kubernetes/ovn-kubernetes#6011 will ensure that it always works, and we can remove the workaround here when that merges, but in the meantime I believe this is our best option.

        (( retries += 1 ))
        if [[ "${retries}" -lt "${max_retries}" ]]; then
          echo "$(date -Iseconds) - WARN - Patch port not found on ${bridge}, retry ${retries}/${max_retries}"
          sleep 2
Contributor

Sleeping for 2 seconds seems aggressive to me. Can we increase the number of retries instead?

Contributor

Please be mindful about info level logs if we are setting max_retries to a very high number.

Contributor Author

Yeah, good catch. I will reduce the level to a warning and reduce the number of logs we send.

Contributor Author

I still think the 2-second sleep is the right number to give us the best shot at the patch port being up, but I reduced the number of retries from 90 to 60, so in the worst case we only sit here for 2 minutes.
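The worst-case figure quoted above can be sanity-checked directly (the values are the ones from this thread):

```shell
# 60 retries with a 2-second sleep between attempts
max_retries=60
sleep_seconds=2
echo "worst case: $(( max_retries * sleep_seconds ))s"   # prints: worst case: 120s
```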

There is currently a race in the ovnkube-lib script where we try to add
some drop flows to prevent duplicate GARPs to the patch port, but if the
patch port isn't ready, ovn-controller can connect to the db and
create the patch port without the flow.

We need to wait for the patch port to exist before we try to add the
GARP drop flow.

Also added some logs for better visibility.

Assisted-By: Claude Sonnet 4.5
Signed-off-by: Benjamin Pickard <bpickard@redhat.com>
@bpickard22 bpickard22 force-pushed the arp-patch-port-wait branch from 0e93809 to 49ab7b2 Compare March 3, 2026 19:39
@openshift-ci-robot
Contributor

@bpickard22: This pull request references Jira Issue OCPBUGS-77015, which is invalid:

  • expected the bug to target the "4.22.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.


@coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@bindata/network/ovn-kubernetes/common/008-script-lib.yaml`:
- Around line 249-252: The add_garp_drop_flow() function currently echoes
success and returns 0 immediately after running ovs-ofctl add-flow even if that
command failed; change the logic to capture and check the exit status of the
ovs-ofctl invocation (the command that writes the flow for $bridge / $of_port)
before logging success and returning 0. If ovs-ofctl returns non-zero, log an
error including the exit code and any stderr (don't let the > /dev/null swallow
the check), and return a non-zero value so the caller can retry; only return 0
when the ovs-ofctl add-flow command actually succeeded.


📥 Commits

Reviewing files that changed from the base of the PR and between 0e93809 and 49ab7b2.

📒 Files selected for processing (1)
  • bindata/network/ovn-kubernetes/common/008-script-lib.yaml

Comment on lines 249 to +252
          ovs-ofctl add-flow "$bridge" "cookie=$cookie,table=0,priority=$priority,in_port=$of_port,arp,arp_op=1,actions=drop" > /dev/null
-         break
+         echo "$(date -Iseconds) - INFO - Added GARP drop flow on $bridge port $port_name (ofport=$of_port)"
+         return 0
          fi

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Inspect the add_garp_drop_flow function and surrounding context
rg -n -C3 'add_garp_drop_flow\(|ovs-ofctl add-flow|return 0|return 1' bindata/network/ovn-kubernetes/common/008-script-lib.yaml | head -100

Repository: openshift/cluster-network-operator



Gate success on ovs-ofctl add-flow exit status.

add_garp_drop_flow() reports success even when flow insertion fails, which stops retries early. The function must check the ovs-ofctl add-flow exit status before returning success.

Proposed fix
-          ovs-ofctl add-flow "$bridge" "cookie=$cookie,table=0,priority=$priority,in_port=$of_port,arp,arp_op=1,actions=drop" > /dev/null
-          echo "$(date -Iseconds) - INFO - Added GARP drop flow on $bridge port $port_name (ofport=$of_port)"
-          return 0
+          if ovs-ofctl add-flow "$bridge" "cookie=$cookie,table=0,priority=$priority,in_port=$of_port,arp,arp_op=1,actions=drop" > /dev/null; then
+            echo "$(date -Iseconds) - INFO - Added GARP drop flow on $bridge port $port_name (ofport=$of_port)"
+            return 0
+          fi
+          echo "$(date -Iseconds) - WARN - Failed to add GARP drop flow on $bridge port $port_name (ofport=$of_port), will retry"
+          return 1

@openshift-ci
Contributor

openshift-ci bot commented Mar 3, 2026

@bpickard22: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-ovn-ipsec-step-registry 49ab7b2 link true /test e2e-ovn-ipsec-step-registry
ci/prow/e2e-metal-ipi-ovn-dualstack-bgp 49ab7b2 link true /test e2e-metal-ipi-ovn-dualstack-bgp
ci/prow/e2e-aws-ovn-windows 49ab7b2 link true /test e2e-aws-ovn-windows
ci/prow/e2e-aws-ovn-hypershift-conformance 49ab7b2 link true /test e2e-aws-ovn-hypershift-conformance
ci/prow/e2e-aws-ovn-upgrade 49ab7b2 link true /test e2e-aws-ovn-upgrade
ci/prow/e2e-gcp-ovn-upgrade 49ab7b2 link true /test e2e-gcp-ovn-upgrade
ci/prow/e2e-aws-ovn-upgrade-ipsec 49ab7b2 link true /test e2e-aws-ovn-upgrade-ipsec
ci/prow/e2e-metal-ipi-ovn-ipv6 49ab7b2 link true /test e2e-metal-ipi-ovn-ipv6
ci/prow/e2e-metal-ipi-ovn-dualstack-bgp-local-gw 49ab7b2 link true /test e2e-metal-ipi-ovn-dualstack-bgp-local-gw
ci/prow/e2e-aws-ovn-serial-1of2 49ab7b2 link true /test e2e-aws-ovn-serial-1of2
ci/prow/hypershift-e2e-aks 49ab7b2 link true /test hypershift-e2e-aks
ci/prow/e2e-azure-ovn-upgrade 49ab7b2 link true /test e2e-azure-ovn-upgrade
ci/prow/e2e-metal-ipi-ovn-ipv6-ipsec 49ab7b2 link true /test e2e-metal-ipi-ovn-ipv6-ipsec
ci/prow/4.22-upgrade-from-stable-4.21-e2e-aws-ovn-upgrade 49ab7b2 link false /test 4.22-upgrade-from-stable-4.21-e2e-aws-ovn-upgrade
ci/prow/4.22-upgrade-from-stable-4.21-e2e-gcp-ovn-upgrade 49ab7b2 link false /test 4.22-upgrade-from-stable-4.21-e2e-gcp-ovn-upgrade
ci/prow/e2e-aws-ovn-serial-2of2 49ab7b2 link true /test e2e-aws-ovn-serial-2of2
ci/prow/4.22-upgrade-from-stable-4.21-e2e-azure-ovn-upgrade 49ab7b2 link false /test 4.22-upgrade-from-stable-4.21-e2e-azure-ovn-upgrade
ci/prow/security 49ab7b2 link false /test security
ci/prow/images 49ab7b2 link true /test images

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@jluhrsen
Contributor

jluhrsen commented Mar 4, 2026

/test e2e-gcp-ovn

this working yet?
