
Add a FailoverChannel wrapper on top of IsolationChannel to maintain a set of primary and failover channel. #37840

Open
parveensania wants to merge 5 commits into apache:master from parveensania:failover-channel

Conversation

@parveensania
Contributor

Adds a FailoverChannel wrapper class on top of IsolationChannels to maintain a primary channel and a failover channel, and to fall back to the failover channel if connectivity over the primary channel cannot be established. The primary channel will be retried again after a cooling period.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the robustness of gRPC communication within Dataflow workers by implementing a failover mechanism for channels. It allows the system to gracefully handle primary channel connectivity issues by switching to a fallback channel and periodically attempting to restore the primary connection, thereby enhancing the overall stability and reliability of the worker's interaction with the Windmill service.

Highlights

  • New FailoverChannel Implementation: Introduced a new FailoverChannel class that wraps a primary and a fallback gRPC ManagedChannel, enabling automatic failover to the fallback channel if the primary becomes unavailable.
  • Resilient Channel Management: The FailoverChannel includes logic to retry the primary channel after a configurable cooling period (1 hour), enhancing the resilience of gRPC communication.
  • Integration into StreamingDataflowWorker: Modified StreamingDataflowWorker to conditionally create and use the FailoverChannel within the ChannelCache if isolated channels are enabled, leveraging dispatcher endpoints for fallback.
  • gRPC Request Enhancements: Updated the GetWorkRequest proto definition to include computation_id_filter and backend_worker_token fields, and integrated the backend_worker_token into the FanOutStreamingEngineWorkerHarness.
  • Testing and Utilities: Added comprehensive unit tests for FailoverChannel and refactored common test utility classes like NoopClientCall into a dedicated file for better organization.
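The failover-and-retry behavior described in the highlights can be sketched in plain Java. This is an illustrative model only, not the PR's actual code: the class name `FailoverDecider` and its bookkeeping are assumptions based on the summary, with the one-hour cooling period taken from the highlights.

```java
import java.time.Duration;
import java.util.function.LongSupplier;

// Illustrative model (not the PR's code) of the cooling-period logic: after a
// transient failure on the primary channel, calls go to the fallback until the
// cooling period elapses, at which point the primary is retried.
final class FailoverDecider {
    // The PR summary mentions a configurable 1-hour cooling period.
    private static final long COOLING_PERIOD_NANOS = Duration.ofHours(1).toNanos();

    private final LongSupplier nanoClock;  // injectable clock for testing
    private boolean inFallback = false;
    private long lastFallbackTimeNanos;

    FailoverDecider(LongSupplier nanoClock) {
        this.nanoClock = nanoClock;
    }

    // Record that an RPC on the primary channel failed with a transient error.
    synchronized void recordPrimaryFailure() {
        inFallback = true;
        lastFallbackTimeNanos = nanoClock.getAsLong();
    }

    // True if new calls should use the primary channel.
    synchronized boolean usePrimary() {
        if (!inFallback) {
            return true;
        }
        if (nanoClock.getAsLong() - lastFallbackTimeNanos > COOLING_PERIOD_NANOS) {
            inFallback = false;  // cooling period elapsed; retry the primary
            return true;
        }
        return false;
    }
}
```

Injecting the clock as a `LongSupplier` mirrors the `nanoClock` field visible in the diff hunks below and lets tests advance time without sleeping.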


Changelog
  • runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/StreamingDataflowWorker.java
    • Imported new classes for failover channel and credentials.
    • Modified createChannelCache to accept GrpcDispatcherClient and to construct a FailoverChannel when isolated channels are enabled, using primary and fallback IsolationChannel instances.
  • runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/streaming/harness/FanOutStreamingEngineWorkerHarness.java
    • Updated createAndStartWindmillStreamSender to include backendWorkerToken in the GetWorkRequest if available from the endpoint.
  • runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/windmill/client/grpc/GrpcDispatcherClient.java
    • Changed the visibility of getDispatcherEndpoints() method from package-private to public.
  • runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/windmill/client/grpc/stubs/FailoverChannel.java
    • Added a new FailoverChannel class that implements ManagedChannel and provides failover logic between a primary and a fallback channel, with a cooling period for retrying the primary.
  • runners/google-cloud-dataflow-java/worker/src/test/java/org/apache/beam/runners/dataflow/worker/streaming/harness/FanOutStreamingEngineWorkerHarnessTest.java
    • Modified getWorkRequest helper to accept a backendWorkerToken parameter.
    • Updated testStreamsStartCorrectly to verify createDirectGetWorkStream calls with specific worker tokens.
  • runners/google-cloud-dataflow-java/worker/src/test/java/org/apache/beam/runners/dataflow/worker/windmill/client/grpc/stubs/FailoverChannelTest.java
    • Added a new test class FailoverChannelTest to verify the failover and retry mechanisms of the FailoverChannel, including credential handling.
  • runners/google-cloud-dataflow-java/worker/src/test/java/org/apache/beam/runners/dataflow/worker/windmill/client/grpc/stubs/IsolationChannelTest.java
    • Removed the NoopClientCall and NoopClientCallListener inner classes, as they were moved to a dedicated file.
    • Changed NoopMarshaller to be public.
  • runners/google-cloud-dataflow-java/worker/src/test/java/org/apache/beam/runners/dataflow/worker/windmill/client/grpc/stubs/NoopClientCall.java
    • Added a new NoopClientCall class and its inner NoopClientCallListener for use in gRPC client testing.
  • runners/google-cloud-dataflow-java/worker/windmill/src/main/proto/windmill.proto
    • Added computation_id_filter and backend_worker_token fields to the GetWorkRequest message.

@parveensania
Contributor Author

R: @arunpandianp

@parveensania
Contributor Author

R: @scwhittle

@github-actions
Contributor

Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control. If you'd like to restart, comment assign set of reviewers

@parveensania
Contributor Author

assign set of reviewers

@github-actions
Contributor

Assigning reviewers:

R: @Abacn added as fallback since no labels match configuration

Note: If you would like to opt out of this review, comment assign to next reviewer.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

@parveensania
Contributor Author

stop reviewer notifications

@parveensania
Contributor Author

R: @arunpandianp @scwhittle

@github-actions
Contributor

Stopping reviewer notifications for this pull request: requested by reviewer. If you'd like to restart, comment assign set of reviewers

@github-actions
Contributor

Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control. If you'd like to restart, comment assign set of reviewers

@arunpandianp
Contributor

When looking for similar implementations, I came across GcpMultiEndpointChannel https://github.com/GoogleCloudPlatform/grpc-gcp-java/blob/master/grpc-gcp/src/main/java/com/google/cloud/grpc/GcpMultiEndpointChannel.java

GcpMultiEndpointChannel uses the channel ConnectivityStatus to determine which channel to use. Would it be more robust if FailoverChannel used ConnectivityStatus instead of RPC status to fail over?

Thinking something like: wait X seconds for the primary to become ready the first time, and fail over to the fallback channel if it takes longer than that. We can let the primary retry connections in the background and switch to it whenever it becomes ready.
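The connectivity-driven scheme sketched above could look roughly like the following. This is a dependency-free model, not GcpMultiEndpointChannel's or the PR's code: `ChannelState` stands in for gRPC's `ConnectivityState`, and the 30-second grace period is an arbitrary placeholder.

```java
import java.time.Duration;
import java.util.function.LongSupplier;

// Local stand-in for io.grpc.ConnectivityState, to keep the sketch
// dependency-free.
enum ChannelState { IDLE, CONNECTING, READY, TRANSIENT_FAILURE, SHUTDOWN }

// Hypothetical connectivity-based selector: give the primary a grace period to
// become READY; after that, route calls to the fallback, but switch back to
// the primary as soon as it reports READY again.
final class ConnectivitySelector {
    private static final long GRACE_PERIOD_NANOS = Duration.ofSeconds(30).toNanos();

    private final LongSupplier nanoClock;
    private ChannelState primaryState = ChannelState.IDLE;
    private long notReadySinceNanos;

    ConnectivitySelector(LongSupplier nanoClock) {
        this.nanoClock = nanoClock;
        this.notReadySinceNanos = nanoClock.getAsLong();
    }

    // In a real implementation this would be driven by
    // ManagedChannel#notifyWhenStateChanged callbacks.
    synchronized void onStateChange(ChannelState newState) {
        if (primaryState == ChannelState.READY && newState != ChannelState.READY) {
            notReadySinceNanos = nanoClock.getAsLong();  // start the grace timer
        }
        primaryState = newState;
    }

    synchronized boolean usePrimary() {
        if (primaryState == ChannelState.READY) {
            return true;  // always prefer a READY primary
        }
        // Not READY: stay on the primary only until the grace period expires.
        return nanoClock.getAsLong() - notReadySinceNanos <= GRACE_PERIOD_NANOS;
    }
}
```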

}

private void notifyFailure(Status status, boolean isFallback, String methodName) {
if (!status.isOk() && !isFallback && fallback != null) {
Contributor

The javadoc on the class says we fallback only on UNAVAILABLE errors. Based on the code here it looks like we'll fallback on any errors. Is this expected?

https://grpc.io/docs/guides/error/ says network-level issues may return UNAVAILABLE, UNKNOWN, or DEADLINE_EXCEEDED. Should we include them here?

Contributor Author

I was previously triggering fallback only on UNAVAILABLE, but later changed it to any non-OK status and forgot to update the comment. I have now changed the check to trigger fallback only on UNAVAILABLE, UNKNOWN, or DEADLINE_EXCEEDED.

@parveensania
Contributor Author

When looking for similar implementations, I came across GcpMultiEndpointChannel https://github.com/GoogleCloudPlatform/grpc-gcp-java/blob/master/grpc-gcp/src/main/java/com/google/cloud/grpc/GcpMultiEndpointChannel.java

GcpMultiEndpointChannel uses the channel ConnectivityStatus to determine which channel to use. Would it be more robust if FailoverChannel used ConnectivityStatus instead of RPC status to fail over?

Thinking something like: wait X seconds for the primary to become ready the first time, and fail over to the fallback channel if it takes longer than that. We can let the primary retry connections in the background and switch to it whenever it becomes ready.

I went for a hybrid approach: check both connection state and RPC status. Connection-state errors could be transient, so we move back to the primary as soon as the state changes to READY. RPC status can capture server-side issues too, like the backend not responding (for instance, requests getting rejected by security policies; there could be other reasons too). For this I've used a longer cooling period before we retry the primary. WDYT?
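The hybrid check described here could combine the two signals roughly as follows. This sketch is hypothetical, not the PR's code: the one-hour RPC cooling period matches the javadoc quoted below, but the method names and the boolean-readiness simplification are assumptions.

```java
import java.time.Duration;
import java.util.function.LongSupplier;

// Hypothetical hybrid check: a connectivity signal that clears as soon as the
// primary is READY, combined with an RPC-status signal that keeps calls on
// the fallback for a long cooling period.
final class HybridFailoverCheck {
    private static final long RPC_COOLING_NANOS = Duration.ofHours(1).toNanos();

    private final LongSupplier nanoClock;
    private boolean primaryReady = true;
    private long lastRpcFailureNanos = Long.MIN_VALUE / 2;  // "long ago"

    HybridFailoverCheck(LongSupplier nanoClock) {
        this.nanoClock = nanoClock;
    }

    // Connectivity signal: move back as soon as the primary reports READY.
    synchronized void onPrimaryStateChange(boolean ready) {
        primaryReady = ready;
    }

    // RPC-status signal: a transient failure starts the long cooling period.
    synchronized void onPrimaryRpcFailure() {
        lastRpcFailureNanos = nanoClock.getAsLong();
    }

    // Use the primary only if it is READY and no recent RPC failure is
    // still inside the cooling window.
    synchronized boolean usePrimary() {
        boolean rpcCooledDown =
            nanoClock.getAsLong() - lastRpcFailureNanos > RPC_COOLING_NANOS;
        return primaryReady && rpcCooledDown;
    }
}
```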

* primary channel becomes READY again.
* <li><b>RPC Failover:</b> If primary channel RPC fails with transient errors ({@link
* Status.Code#UNAVAILABLE}, {@link Status.Code#DEADLINE_EXCEEDED}, or {@link
* Status.Code#UNKNOWN}), switches to fallback channel and waits for a 1-hour cooling period
Contributor

unless channel goes through unhealthy->healthy connectivity transition?
Want to make sure some race where we observe an rpc failure before we observe the connectivity failure doesn't cause us to stop using the primary channel if it reestablishes quickly.

registerPrimaryStateChangeListener();
}

// Test-only.
Contributor

how about removing this one then? The test can have a helper in itself that calls forTest below with default creds and time supplier


private FailoverChannel(
ManagedChannel primary,
@Nullable ManagedChannel fallback,
Contributor

can we just not support null here? It seems the caller could just use primary without creating a FailoverChannel if they don't want to support fallback, and then we don't have to complicate the code with it possibly being null.

}

private boolean shouldFallBackDueToPrimaryState() {
ConnectivityState connectivityState = primary.getState(true);
Contributor

passing true sounds like it might block attempting to connect if in the idle state. How about passing false and treating IDLE as not something that needs to be fallen back from?

Or could we just remove this if we are anyway setting up a change listener to observe it's changes?

private boolean shouldFallbackBasedOnRPCStatus(Status status) {
switch (status.getCode()) {
case UNAVAILABLE:
case DEADLINE_EXCEEDED:
Contributor

I'm worried that DEADLINE_EXCEEDED might occur for other reasons too.

One idea might be to see if the call had any responses, in that case we know that it was at some point connected to the backend and we could choose not to fallback.
We could also perhaps wait for several continuously failed rpcs or failing rpcs for some elapsed time period before falling back.
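The "several continuously failed RPCs" idea could be tracked with a small counter like this sketch. The threshold of 3 is an arbitrary placeholder, and the class and method names are hypothetical, not anything from the PR.

```java
// Hypothetical refinement of the reviewer's suggestion: only fall back after
// several consecutive transient failures, and reset the counter whenever a
// call makes observable progress (e.g. onHeaders/onMessage fired), since that
// proves the channel reached the backend at some point.
final class FailureTracker {
    private static final int CONSECUTIVE_FAILURE_THRESHOLD = 3;  // assumed value

    private int consecutiveFailures = 0;

    // Called when a call succeeds or receives headers/messages.
    synchronized void recordProgress() {
        consecutiveFailures = 0;  // the call reached the backend; trust the channel
    }

    // Returns true if this failure should trigger failover.
    synchronized boolean recordTransientFailure() {
        consecutiveFailures++;
        return consecutiveFailures >= CONSECUTIVE_FAILURE_THRESHOLD;
    }
}
```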

private final AtomicLong lastRPCFallbackTimeNanos = new AtomicLong(0);
private final AtomicLong primaryNotReadySinceNanos = new AtomicLong(-1);
private final LongSupplier nanoClock;
private final AtomicBoolean stateChangeListenerRegistered = new AtomicBoolean(false);
Contributor

can we move all the Atomics into a State object that we synchronize? we have long-lived calls so I don't think we have to worry about the performance of synchronized block versus atomic in the call creation path as long as we are not doing any blocking stuff within it.
I think it will help keep the code simpler and we don't have to worry about possible weird states races could put us in.
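Consolidating the atomics into one lock-guarded state object, as suggested, might look like this sketch. The field names mirror the atomics quoted above, but the methods are hypothetical illustrations, not the PR's code.

```java
// Hypothetical consolidation of the separate atomics into a single state
// object guarded by one monitor, so all fields are always read and written as
// a mutually consistent snapshot.
final class FailoverState {
    private long lastRpcFallbackTimeNanos = 0;
    private long primaryNotReadySinceNanos = -1;
    private boolean stateChangeListenerRegistered = false;

    // Returns true only for the first caller, replacing the
    // AtomicBoolean.compareAndSet idiom.
    synchronized boolean markListenerRegistered() {
        if (stateChangeListenerRegistered) {
            return false;
        }
        stateChangeListenerRegistered = true;
        return true;
    }

    synchronized void recordRpcFallback(long nowNanos) {
        lastRpcFallbackTimeNanos = nowNanos;
    }

    synchronized void recordPrimaryNotReady(long nowNanos) {
        primaryNotReadySinceNanos = nowNanos;
    }

    synchronized long nanosSinceRpcFallback(long nowNanos) {
        return nowNanos - lastRpcFallbackTimeNanos;
    }
}
```

Since the stream calls are long-lived, the monitor is only taken on call creation and state transitions, so contention should be negligible as long as no blocking work happens inside the lock.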

return currentTimeNanos - primaryNotReadySinceNanos.get() > PRIMARY_NOT_READY_WAIT_NANOS;
}

private void notifyFailure(Status status, boolean isFallback, String methodName) {
Contributor

nit: notifyCallDone? we call it on success too

super.start(
new SimpleForwardingClientCallListener<RespT>(responseListener) {
@Override
public void onClose(Status status, Metadata trailers) {
Contributor

here is where I was wondering: could we hook into onMessage or onHeaders to determine that the call made some progress before possibly failing with DEADLINE_EXCEEDED or UNAVAILABLE (which could possibly be from the backend status)?

currentFlowControlSettings),
currentFlowControlSettings.getOnReadyThresholdBytes());
ManagedChannel primaryChannel =
IsolationChannel.create(
Contributor

since it's being set up this way, IsolationChannel connectivity callbacks are going to be what is used. I'm not sure how that will work since it internally has multiple channels. Looking at it, it seems it just has the default ManagedChannel implementation, which throws an unimplemented exception.

What about having IsolationChannel on top of the fallback channels? That seems simpler to me, since IsolationChannel just internally creates the separate channels and otherwise doesn't do much more than forward things on.

It would be good to have a unit test of whatever setup we do use so that we flush out the issues there instead of requiring an integration test.
