
[ET-VK][runtime] Add prepack cache to avoid duplicate weight prepacking #18361

Open
SS-JIA wants to merge 4 commits into gh/SS-JIA/499/base from gh/SS-JIA/499/head

Conversation

@SS-JIA (Contributor) commented Mar 20, 2026

Stack from ghstack (oldest at bottom):

When embedding and linear ops share tied weights and both use the same
prepacking function (prepack_quantized_linear_weight), the weight gets
prepacked twice, wasting GPU memory. Add a cache to ComputeGraph keyed
by (input ValueRef, kernel name) that returns the already-prepacked
tensor on cache hit, avoiding the duplicate allocation.

Differential Revision: [D97430801](https://our.internmc.facebook.com/intern/diff/D97430801/)
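
A minimal sketch of the cache described above. The names here (`prepack_cached`, `prepack_cache_`, `PackedTensor`) are illustrative assumptions, not the actual ExecuTorch Vulkan API:

```cpp
// Sketch of a prepack cache on ComputeGraph, keyed by
// (input ValueRef, kernel name). Names are hypothetical.
#include <cstdint>
#include <map>
#include <string>
#include <utility>

using ValueRef = int32_t;      // handle to a value in the compute graph (assumption)
using PackedTensor = int32_t;  // handle to a prepacked GPU tensor (assumption)

class ComputeGraph {
 public:
  // Look up (weight, kernel_name); on a hit, return the tensor that was
  // already prepacked, skipping a second GPU allocation. On a miss, run
  // the prepack function once and remember the result.
  template <typename PrepackFn>
  PackedTensor prepack_cached(
      ValueRef weight, const std::string& kernel_name, PrepackFn&& prepack) {
    auto key = std::make_pair(weight, kernel_name);
    auto it = prepack_cache_.find(key);
    if (it != prepack_cache_.end()) {
      return it->second;  // cache hit: reuse the prepacked tensor
    }
    PackedTensor packed = prepack(weight);  // cache miss: prepack once
    prepack_cache_.emplace(std::move(key), packed);
    return packed;
  }

 private:
  // Cache keyed by (input ValueRef, kernel name), as in the description.
  std::map<std::pair<ValueRef, std::string>, PackedTensor> prepack_cache_;
};
```

With a cache shaped like this, the embedding and linear call sites that both go through prepack_quantized_linear_weight on a tied weight resolve to the same entry, so the packed tensor is allocated only once.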

@pytorch-bot bot commented Mar 20, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18361

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No Failures, 28 Pending

As of commit ccda364 with merge base 38b40bc:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

SS-JIA pushed a commit that referenced this pull request Mar 20, 2026
ghstack-source-id: 355089157
Pull Request resolved: #18361
@meta-cla bot added the CLA Signed label Mar 20, 2026
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

SS-JIA pushed a commit that referenced this pull request Mar 20, 2026
ghstack-source-id: 355234968
Pull Request resolved: #18361
SS-JIA pushed a commit that referenced this pull request Mar 20, 2026
ghstack-source-id: 355269010
Pull Request resolved: #18361
SS-JIA pushed a commit that referenced this pull request Mar 20, 2026
ghstack-source-id: 355353466
Pull Request resolved: #18361

Labels

CLA Signed, fb-exported, meta-exported
