Add LoRA multimethod export to CoreML static attention#18344
Closed
lucylq wants to merge 1 commit into gh/lucylq/142/head from
Conversation
lucylq (Contributor, Author)

Add support for exporting LoRA adapters as separate methods in a CoreML PTE file. CoreML POSITIONAL weight sharing deduplicates base weights across methods, so the binary overhead is just the lora_a/lora_b weights.

- StaticAttention: LoRA-aware projection creation for split_mha=False
- utils.py: skip_names + LoRALinear guard in replace_linear_with_split_linear
- export: --adapter CLI flag, adapter loading, _exclude_lora quantization filter, skip_split_names for POSITIONAL sharing, multimethod export branches

Authored with Claude.
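The two guards named above are easiest to see in code. Below is a minimal, hypothetical sketch, not the PR's actual implementation: LoRALinear, replace_linear_with_split_linear, skip_names, and _exclude_lora are names taken from the commit message, but the bodies here are illustrative stand-ins built on plain torch.nn.

```python
# Hypothetical sketch of the skip_names + LoRALinear guard and the
# _exclude_lora quantization filter. Not the PR's code; built on torch.nn.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Base projection plus a low-rank update: y = Wx + scale * B(Ax)."""

    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.lora_a = nn.Linear(in_features, rank, bias=False)   # down-projection
        self.lora_b = nn.Linear(rank, out_features, bias=False)  # up-projection
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


class SplitLinear(nn.Module):
    """One Linear split into n output chunks whose results are concatenated,
    so the module computes the same function as the original Linear."""

    def __init__(self, linear: nn.Linear, n: int):
        super().__init__()
        chunk = linear.out_features // n
        self.parts = nn.ModuleList()
        for i in range(n):
            part = nn.Linear(linear.in_features, chunk,
                             bias=linear.bias is not None)
            part.weight.data.copy_(linear.weight.data[i * chunk:(i + 1) * chunk])
            if linear.bias is not None:
                part.bias.data.copy_(linear.bias.data[i * chunk:(i + 1) * chunk])
            self.parts.append(part)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([p(x) for p in self.parts], dim=-1)


def replace_linear_with_split_linear(model: nn.Module, n: int, skip_names=()):
    """Recursively split plain Linears, but leave LoRALinear subtrees and any
    module named in skip_names untouched, so base tensors keep identical
    shapes and positions across methods."""
    for name, child in model.named_children():
        if name in skip_names or isinstance(child, LoRALinear):
            continue  # the LoRALinear guard: do not split adapted projections
        if isinstance(child, nn.Linear):
            setattr(model, name, SplitLinear(child, n))
        else:
            replace_linear_with_split_linear(child, n, skip_names)


def _exclude_lora(fqn: str, module: nn.Module) -> bool:
    """Quantization filter: quantize base Linear weights only, keeping the
    small lora_a/lora_b matrices in full precision."""
    return isinstance(module, nn.Linear) and "lora_" not in fqn
```

If the PR follows this shape, keeping the split topology and the base weight tensors identical across methods is what lets POSITIONAL weight sharing deduplicate them, so each extra method costs only its lora_a/lora_b tensors.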
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18344
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 121 Pending. As of commit b891fba with merge base eb92cec. NEW FAILURE: the following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
lucylq added a commit that referenced this pull request on Mar 19, 2026
Add LoRA multimethod export to CoreML static attention

ghstack-source-id: 16ab852
ghstack-comment-id: 4093471437
Pull-Request: #18344
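For context on the "multimethod export branches" bullet in the description, here is a rough, hypothetical sketch of what a two-method export can look like with the public torch.export and ExecuTorch APIs. The PR's real export script (with --adapter loading, the _exclude_lora quantization filter, and CoreML lowering) is more involved; Tiny, base_model, and lora_model below are stand-ins.

```python
# Hypothetical two-method export; stand-in model, not the PR's code.
import torch
from executorch.exir import to_edge


class Tiny(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(8, 8)

    def forward(self, x):
        return self.proj(x)


base_model, lora_model = Tiny(), Tiny()  # stand-ins for base/LoRA variants
x = (torch.randn(1, 8),)

# One ExportedProgram per method; CoreML POSITIONAL weight sharing can then
# deduplicate identical base weights across the methods in the final PTE.
methods = {
    "base": torch.export.export(base_model, x),
    "lora": torch.export.export(lora_model, x),
}
pte = to_edge(methods).to_executorch()
with open("model.pte", "wb") as f:
    f.write(pte.buffer)
```

The real flow would also lower the edge program to the CoreML backend before to_executorch, which is omitted here and is where the positional weight sharing actually applies.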