'generate copilot summary' consumes too many premium requests? #14348
Unanswered
chenluyao-mmind asked this question in Q&A
Replies: 1 comment
I haven't been able to reproduce this issue, but it's certainly not the expected behavior. The feature uses VS Code's Language Model API to send a single request to the LLM, using the first available model provided by VS Code. One possible explanation is that model selection is defaulting to Claude Opus 4.6 (fast mode), which currently has a 30x multiplier. I opened #14349 with a potential fix to prefer GPT-5-mini for this feature, which has a 0x multiplier on paid plans.
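The proposed fix amounts to a model-preference strategy: instead of taking whatever model the editor lists first, check for an explicitly preferred cheap family and fall back to the first available model otherwise. A minimal sketch of that selection logic, with hypothetical type and function names (the `ChatModel` shape, `PREFERRED_FAMILIES`, and `pickSummaryModel` are illustrative, not the extension's actual API):

```typescript
// Hypothetical model-selection sketch. Family names and multiplier values
// are illustrative, based on the discussion above.
interface ChatModel {
  family: string;      // e.g. "gpt-5-mini", "claude-opus-4.6"
  multiplier: number;  // premium-request multiplier on paid plans
}

// Families to try first; the preference list encodes which models are cheap.
const PREFERRED_FAMILIES = ["gpt-5-mini"];

function pickSummaryModel(available: ChatModel[]): ChatModel | undefined {
  for (const family of PREFERRED_FAMILIES) {
    const match = available.find((m) => m.family === family);
    if (match) return match;
  }
  // Previous behavior: whatever model the editor lists first.
  return available[0];
}

const models: ChatModel[] = [
  { family: "claude-opus-4.6", multiplier: 30 },
  { family: "gpt-5-mini", multiplier: 0 },
];

console.log(pickSummaryModel(models)?.family); // "gpt-5-mini"
```

With this ordering, a 30x-multiplier model is only used when no 0x-multiplier model is available, which would cap the cost of a single summary request at one premium request on paid plans.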
I am using GitHub Copilot Enterprise in Visual Studio Code and noticed that the code summarization feature consumes an unexpectedly high number of premium requests. A single "Generate copilot summary" action typically uses around 30 premium requests, which seems excessive for one operation, and there is no option to control the model or context size.