
Fix shape mismatch on the masked_tokens param in decoder masked multi-head attention kernel. #773

Open

FengDSP wants to merge 2 commits into NVIDIA:main from FengDSP:fix_circular_cache

Conversation


FengDSP commented on Oct 24, 2023

This PR addresses an inconsistency in the shape of the masked_tokens array within the decoder's masked multi-head attention kernel. The expected shape of the masked_tokens array is [batch_size, session_length]; however, the current implementation in the repo treats it as [batch_size, memory_length]. This discrepancy leads to unexpected behavior when memory_length is not configured to be the same as session_length.
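The mismatch can be illustrated with a minimal sketch (hypothetical names and signatures, not the repo's actual kernel code): if the kernel strides rows of masked_tokens by memory_length while the caller lays the array out as [batch_size, session_length], every batch index past the first reads from the wrong row whenever the two lengths differ.

```cpp
// Minimal sketch with hypothetical names; the real kernel code differs.
// Assumption: masked_tokens is laid out by the caller as
// [batch_size, session_length].

// Buggy lookup: strides rows by memory_length, so for batch_idx > 0 it
// reads from the wrong row whenever memory_length != session_length.
__device__ bool is_token_masked_buggy(
    const bool* masked_tokens, int batch_idx, int step, int memory_length)
{
    return masked_tokens[batch_idx * memory_length + step];
}

// Fixed lookup: the row stride matches the [batch_size, session_length]
// layout actually used for the array.
__device__ bool is_token_masked_fixed(
    const bool* masked_tokens, int batch_idx, int step, int session_length)
{
    return masked_tokens[batch_idx * session_length + step];
}
```

Under that assumed layout, switching the row stride from memory_length to session_length is what restores consistent masking when the two lengths are configured differently.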

