
Fix #37736: Allow composite transforms to use implicit input chaining #37861

Open

liferoad wants to merge 11 commits into apache:master from liferoad:fix/beam-37736-composite-transform-v2

Conversation

@liferoad
Contributor

Issue

#37736

When using type: composite in Beam YAML, each sub-transform requires an explicit input, unlike type: chain which automatically passes the output of one transform to the next.
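For example (a minimal sketch using common Beam YAML transforms; names like Source and Square are illustrative, and the exact validation behavior may differ):

```yaml
# A chain pipeline: each transform's output flows to the next implicitly.
pipeline:
  type: chain
  transforms:
    - type: Create
      config:
        elements: [1, 2, 3]
    - type: MapToFields
      config:
        language: python
        fields:
          square: element * element
---
# A composite pipeline before this fix: each sub-transform had to
# name its input explicitly.
pipeline:
  type: composite
  transforms:
    - type: Create
      name: Source
      config:
        elements: [1, 2, 3]
    - type: MapToFields
      name: Square
      input: Source
      config:
        language: python
        fields:
          square: element * element
```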

Fix

Modified expand_composite_transform() in sdks/python/apache_beam/yaml/yaml_transform.py to automatically chain sub-transforms when no explicit inputs/outputs are specified, similar to how chain type transforms work.
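A simplified sketch of the chaining idea (not the actual Beam implementation; the function and spec shapes here are illustrative):

```python
def chain_implicit_inputs(transforms, composite_input=None):
    """Assign each sub-transform's missing input to the previous
    transform's name, mimicking 'chain'-style implicit chaining.

    `transforms` is a list of dicts like {'type': ..., 'name': ...,
    'input': ...}; entries that already declare an input are left alone.
    """
    previous = composite_input
    for t in transforms:
        if not t.get('input') and previous is not None:
            t['input'] = previous
        # The next transform chains from this one, by name if it has one.
        previous = t.get('name', t['type'])
    return transforms


specs = [
    {'type': 'Create', 'name': 'Source'},
    {'type': 'Square'},
    {'type': 'Increment'},
]
chained = chain_implicit_inputs(specs)
# Square now reads from Source, and Increment from Square.
```

The real change applies this idea inside expand_composite_transform(), and only when no sub-transform declares explicit inputs or outputs.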

Testing

Added test_composite_implicit_input_chaining test case in yaml_transform_test.py to verify the fix.


When a composite transform has no explicit inputs/outputs on its
sub-transforms, automatically chain them similar to how 'chain' type
transforms work.

Added test_composite_implicit_input_chaining to verify the fix.
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the usability of composite transforms within Beam YAML by introducing implicit input chaining. Previously, users had to explicitly define inputs and outputs for each sub-transform within a composite block. This change aligns composite transforms with the more streamlined chain type, automatically connecting the output of one sub-transform to the input of the next when no explicit I/O is provided. This simplifies the definition of sequential data processing pipelines using composite transforms, making them more intuitive and reducing boilerplate.

Highlights

  • Implicit Input Chaining for Composite Transforms: Enabled composite transforms in Beam YAML to automatically chain inputs and outputs between sub-transforms when no explicit input or output is specified, mirroring the behavior of chain type transforms.
  • Core Logic Update: Modified the expand_composite_transform function to incorporate the new logic for detecting and applying implicit input/output chaining.
  • New Test Case: Added a dedicated test case, test_composite_implicit_input_chaining, to ensure the correct functionality of this new implicit chaining behavior.


Changelog
  • sdks/python/apache_beam/yaml/yaml_transform.py
    • Implemented logic within expand_composite_transform to detect if a composite transform's sub-transforms lack explicit input/output definitions.
    • Added code to automatically assign the output of a preceding sub-transform as the input for the next sub-transform in the sequence.
    • Modified the composite's overall output to implicitly refer to the last sub-transform's output if not explicitly defined.
  • sdks/python/apache_beam/yaml/yaml_transform_test.py
    • Added test_composite_implicit_input_chaining to verify that a composite transform with multiple sub-transforms (Square, Increment) correctly processes data with implicit input chaining, producing the expected output.
Activity
  • No specific activity (comments, reviews, progress) has been recorded for this pull request yet.

@github-actions
Contributor

Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment `assign set of reviewers`.

This fix addresses the issue where composite transforms with no explicit
input specification were failing to receive inputs from the pipeline.

Key changes:
1. Fixed has_explicit_io check to use is_empty() instead of just checking
   key presence - this properly treats {} as 'no explicit input'
2. Added composite_has_input check to only do implicit chaining when
   the composite has an input to chain from
3. Fixed inner_scope_inputs computation to use parent scope's inputs
   when the composite has no explicit input
4. Fixed output handling to use is_empty() check (normalization sets {})
5. Fixed final return to correctly resolve scope inputs vs transform outputs
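The first point can be seen with plain dictionaries (a sketch; Beam's actual is_empty() helper operates on its own spec objects, and the spec shape here is illustrative):

```python
# Why key presence is not enough: normalization may set 'input' to {}.
spec = {'type': 'Square', 'input': {}}

has_key = 'input' in spec          # True: the key exists...
is_explicit = bool(spec['input'])  # ...but {} is falsy, so no real input

# Treating an empty mapping like a missing one avoids the false positive:
def has_explicit_input(spec):
    return bool(spec.get('input'))
```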
@github-actions
Contributor

Assigning reviewers:

R: @tvalentyn for label python.

Note: If you would like to opt out of this review, comment `assign to next reviewer`.

Available commands:

  • `stop reviewer notifications` - opt out of the automated review tooling
  • `remind me after tests pass` - tag the comment author after tests pass
  • `waiting on author` - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

@tvalentyn
Contributor

R: @derrickaw

@github-actions
Contributor

Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control. If you'd like to restart, comment `assign set of reviewers`.

@derrickaw
Collaborator

derrickaw commented Mar 19, 2026

Does any website documentation need improving due to this change? https://beam.apache.org/documentation/sdks/yaml/

@liferoad
Contributor Author

Good point about the docs! The current Beam YAML documentation states that non-linear (composite) pipelines require explicit inputs for each transform. With this change, composite transforms now also support implicit input chaining (similar to chain), so the docs should be updated to reflect that.

I will file a follow-up issue to update the documentation.

@liferoad
Contributor Author

#37894

@derrickaw
Collaborator

derrickaw commented Mar 20, 2026

LGTM pending tests.

