
Fix QwenImageEditPlus max_sequence_length and transformer text seq len kwargs#12991

Closed
tonera wants to merge 1 commit into huggingface:main from tonera:fix/qwenimage-edit-plus-max-seq-len

Conversation


@tonera tonera commented Jan 17, 2026

What does this PR do?

This PR fixes two issues in QwenImageEditPlusPipeline (pipeline_qwenimage_edit_plus.py) related to text sequence length handling:

  • Make max_sequence_length effective: the parameter was accepted by the public API but previously not applied. We now truncate prompt_embeds and prompt_embeds_mask to max_sequence_length.

  • Prevent failures in the denoising loop: QwenImage transformer/RoPE needs an explicit text sequence length, but some accelerated/quantized transformer implementations (e.g. nunchaku) don’t accept the new max_txt_seq_len kwarg. The pipeline now adapts to the transformer forward signature:

  • Prefer passing max_txt_seq_len when supported (or when **kwargs is accepted)

  • Otherwise fall back to deprecated txt_seq_lens

  • Otherwise pass nothing (let the implementation handle it)

  This avoids runtime errors such as:

  • ValueError: Either max_txt_seq_len or txt_seq_lens ... must be provided

  • TypeError: ... forward() got an unexpected keyword argument 'max_txt_seq_len'
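The fallback logic described above can be sketched roughly as follows. This is a minimal sketch, not the actual diffusers code: the helper names are hypothetical, while max_txt_seq_len and txt_seq_lens match the kwargs named in this PR.

```python
import inspect


def truncate_prompt_embeds(prompt_embeds, prompt_embeds_mask, max_sequence_length):
    # Enforce max_sequence_length by truncating along the sequence axis (dim 1).
    # Works on any tensor-like object that supports .shape and 2-D slicing.
    if prompt_embeds.shape[1] > max_sequence_length:
        prompt_embeds = prompt_embeds[:, :max_sequence_length]
        prompt_embeds_mask = prompt_embeds_mask[:, :max_sequence_length]
    return prompt_embeds, prompt_embeds_mask


def text_seq_len_kwargs(transformer_forward, max_txt_seq_len, txt_seq_lens):
    # Inspect the transformer's forward signature and pick the kwarg it accepts:
    # prefer max_txt_seq_len (also when **kwargs is accepted), fall back to the
    # deprecated txt_seq_lens, and otherwise pass nothing so the implementation
    # can handle it on its own.
    params = inspect.signature(transformer_forward).parameters
    accepts_var_kw = any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
    )
    if "max_txt_seq_len" in params or accepts_var_kw:
        return {"max_txt_seq_len": max_txt_seq_len}
    if "txt_seq_lens" in params:
        return {"txt_seq_lens": txt_seq_lens}
    return {}
```

The pipeline would then call the transformer with `transformer(..., **text_seq_len_kwargs(transformer.forward, max_txt_seq_len, txt_seq_lens))`, so older implementations such as nunchaku's never receive a kwarg they do not recognize.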

Who can review?

Tagging: @yiyixuxu @asomoza (pipelines)

@vladmandic
Contributor

gentle ping @yiyixuxu @sayakpaul @asomoza

@yiyixuxu
Collaborator

Hi @tonera, thanks for the PR.
But first, can you share a reproducible code example that demonstrates the issue? Is this something specific to nunchaku?

@tonera tonera closed this Apr 2, 2026
@vladmandic
Contributor

vladmandic commented Apr 2, 2026

Not sure why this PR was closed? It's definitely an issue, since not all code handles changes introduced in diffusers (yes, nunchaku is an example).
Also, it cannot be fixed purely on the nunchaku side: newer versions of nunchaku require newer torch/CUDA builds that dropped support for rtx1 and rtx2 GPUs, which means older GPUs require the existing older nunchaku, which worked correctly until the diffusers change made it incompatible.

@tonera
Author

tonera commented Apr 8, 2026

The pull request was submitted when version 0.37 was still in beta, and the current version is 0.38. The maintenance team is processing it too slowly.

@vladmandic
Contributor

@yiyixuxu @sayakpaul this is a real issue, and we had a valid PR 3 months ago; now we lost it :(

@sayakpaul
Member

I am also not sure why this was closed. I think @yiyixuxu's ask is valid. Cc: @dg845 as well.

@vladmandic
Contributor

The ask is valid if it comes in a reasonable time. After 3 months, it just makes contributors give up.

