fix: auto-fix CI formatting issues for PR #873 (#874)
Conversation
Fixed:
- Import ordering (alphabetical) in `MetadataTab.tsx`, `SummaryTab.tsx`, `price-list.tsx`, `response-handler.ts`, `session.ts`
- Changed `let` to `const` for a never-reassigned variable in `response-handler.ts`

CI Run: https://github.com/ding113/claude-code-hub/actions/runs/22762234077

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```diff
  let tailBufferedBytes = 0;
  let wasTruncated = false;
- let inTailMode = false;
+ const inTailMode = false;
```
inTailMode is always false — tail data is never joined
The CI lint fixer correctly changed let to const because this variable is never reassigned. However, this surfaces a pre-existing logic issue: inTailMode is permanently false, which means joinChunks() (line 1164) always takes the early-return branch and returns only headText — the tail buffer is populated when the head fills beyond 1 MB, but its contents are never included in the assembled output used for stats/cost parsing.
For large streaming responses (> 1 MB), this means the tail portion of the SSE stream — which typically contains the final usage/cost events and fake-200 markers — is buffered but silently dropped when joinChunks() is called. Cost calculations, usage extraction, and error detection will all miss any data that was pushed into the tail buffer.
It looks like tail mode was intended to be activated by an `inTailMode = true` assignment when the head buffer fills up, but that assignment was never added. The `const` change in this PR makes the situation explicit — consider either:
1. Removing `inTailMode` entirely and relying solely on `headBufferedBytes < MAX_STATS_HEAD_BYTES` to gate head vs. tail routing, or
2. Restoring `let` and adding the missing assignment that switches to tail mode once the head is full.
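A minimal sketch of option 1, for illustration only: the `inTailMode` flag is dropped and both routing and joining are gated on the same `headBufferedBytes < MAX_STATS_HEAD_BYTES` condition. The class shape, constructor parameters, and buffer limits here are assumptions; only the identifier names (`joinChunks`, `headBufferedBytes`, `wasTruncated`, etc.) come from the review, and the real `response-handler.ts` will differ.

```typescript
// Hypothetical sketch, not the actual response-handler.ts implementation.
const MAX_STATS_HEAD_BYTES = 1024 * 1024; // assumed 1 MB head window
const MAX_STATS_TAIL_BYTES = 256 * 1024;  // assumed tail ring size

class StatsBuffer {
  private headChunks: string[] = [];
  private tailChunks: string[] = [];
  private headBufferedBytes = 0;
  private tailBufferedBytes = 0;
  private wasTruncated = false;

  // Limits are injectable so small values can be used in tests.
  constructor(
    private maxHeadBytes = MAX_STATS_HEAD_BYTES,
    private maxTailBytes = MAX_STATS_TAIL_BYTES,
  ) {}

  push(chunk: string): void {
    const bytes = Buffer.byteLength(chunk, "utf8");
    if (this.headBufferedBytes < this.maxHeadBytes) {
      this.headChunks.push(chunk);
      this.headBufferedBytes += bytes;
    } else {
      // A full head implies tail mode; no separate boolean flag is needed.
      this.tailChunks.push(chunk);
      this.tailBufferedBytes += bytes;
      // Evict oldest tail chunks once the tail ring exceeds its budget.
      while (this.tailBufferedBytes > this.maxTailBytes && this.tailChunks.length > 0) {
        const evicted = this.tailChunks.shift() as string;
        this.tailBufferedBytes -= Buffer.byteLength(evicted, "utf8");
        this.wasTruncated = true;
      }
    }
  }

  joinChunks(): string {
    const headText = this.headChunks.join("");
    // Gate on the same condition used for routing, instead of a flag
    // that nothing ever sets.
    if (this.headBufferedBytes < this.maxHeadBytes) return headText;
    const marker = this.wasTruncated ? "\n[...truncated...]\n" : "\n";
    return headText + marker + this.tailChunks.join("");
  }
}
```

With this shape there is no flag to forget to set, which is the failure mode the review describes.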
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/app/v1/_lib/proxy/response-handler.ts
Line: 1155
Comment:
`inTailMode` is always `false` — tail data is never joined
The CI lint fixer correctly changed `let` to `const` because this variable is never reassigned. However, this surfaces a pre-existing logic issue: `inTailMode` is permanently `false`, which means `joinChunks()` (line 1164) always takes the early-return branch and returns only `headText` — the tail buffer is populated when the head fills beyond 1 MB, but its contents are **never included** in the assembled output used for stats/cost parsing.
For large streaming responses (> 1 MB), this means the tail portion of the SSE stream — which typically contains the final usage/cost events and fake-200 markers — is buffered but silently dropped when `joinChunks()` is called. Cost calculations, usage extraction, and error detection will all miss any data that was pushed into the tail buffer.
It looks like tail-mode was intended to be activated by a `inTailMode = true` assignment when the head buffer fills up, but that assignment was never added. The `const` change in this PR makes the situation explicit — consider either:
1. Removing `inTailMode` entirely and relying solely on `headBufferedBytes < MAX_STATS_HEAD_BYTES` to gate head vs. tail routing, or
2. Restoring `let` and adding the missing assignment that switches to tail mode once the head is full.
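For option 2, a minimal sketch: `inTailMode` stays `let`, and the missing assignment that flips it when the head window fills is added. The factory function, parameter, and flow below are assumptions for illustration; they are not the actual code in `response-handler.ts`.

```typescript
// Hypothetical sketch of option 2: restore `let` and add the missing flip.
function makeChunkCollector(maxHeadBytes: number) {
  const headChunks: string[] = [];
  const tailChunks: string[] = [];
  let headBufferedBytes = 0;
  let inTailMode = false; // must stay `let`: it is reassigned below

  return {
    push(chunk: string): void {
      const bytes = Buffer.byteLength(chunk, "utf8");
      if (!inTailMode && headBufferedBytes + bytes > maxHeadBytes) {
        inTailMode = true; // the assignment the review says was never added
      }
      if (inTailMode) {
        tailChunks.push(chunk);
      } else {
        headChunks.push(chunk);
        headBufferedBytes += bytes;
      }
    },
    joinChunks(): string {
      const headText = headChunks.join("");
      if (!inTailMode) return headText; // small responses: head only
      return headText + tailChunks.join(""); // tail data is now included
    },
  };
}
```

Either option resolves the review comment; option 1 removes the state entirely, option 2 keeps the flag but makes it live.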
How can I resolve this? If you propose a fix, please make it concise.
CI Auto-Fix
Original PR: #873
Failed CI Run: PR Build Check
Fixes Applied
Not Auto-Fixable
No errors that couldn't be safely auto-fixed.
Verification
Auto-generated by Claude AI
Greptile Summary
This PR is an automated CI formatting fix for PR #873, reordering imports alphabetically across 5 files and converting one `let` declaration to `const` in `response-handler.ts`. All import changes are safe formatting-only fixes; however, the `let` → `const` change on `inTailMode` in `response-handler.ts` surfaces a pre-existing logic issue worth addressing.

Key changes:
- `MetadataTab.tsx`, `SummaryTab.tsx`, `price-list.tsx`, `session.ts`: Pure import reordering, no functional impact.
- `response-handler.ts`: Import reordering + `let inTailMode = false` → `const inTailMode = false`. The `const` change is lint-correct since the variable is never reassigned — but this also reveals that `inTailMode` is permanently `false`, meaning `joinChunks()` always early-returns with only the head buffer's content. Tail data accumulated for large streaming responses (> 1 MB) is buffered but never included in the assembled SSE text used for cost/usage extraction and fake-200 detection. This is a pre-existing bug from PR #873 (feat(pricing): resolve provider-aware billing for multi-provider models) that this PR makes structurally explicit.

Confidence Score: 2/5
- The formatting-only import changes are safe, but the logic flaw in `response-handler.ts` is exposed and warrants resolution.
- One file (`response-handler.ts`) has an additional `let` → `const` change that is lint-correct (the variable is never reassigned) but highlights a pre-existing logic flaw: `inTailMode` is always `false`, causing `joinChunks()` to silently discard all tail buffer data for large streaming responses (> 1 MB). This can lead to missed cost/usage events. The bug is pre-existing (not introduced by this PR), but since this PR makes it structurally permanent with `const`, it should be addressed before or alongside merging this change into the feature branch.
- Important file: `src/app/v1/_lib/proxy/response-handler.ts` (address the `inTailMode` logic issue).

Flowchart
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Incoming stream chunk] --> B{headBufferedBytes < MAX_STATS_HEAD_BYTES?}
    B -- Yes --> C[Push to headChunks]
    B -- No --> D[Push to tailChunks via pushToTail]
    D --> E{tailBufferedBytes > MAX_STATS_TAIL_BYTES?}
    E -- Yes --> F[Evict oldest tail chunks\nwasTruncated = true]
    E -- No --> G[Tail buffered]
    H[joinChunks called] --> I{inTailMode?\nalways false}
    I -- false\nalways --> J[Return headText only\ntail data IGNORED]
    I -- true\nnever reached --> K[Return headText + tailText\nwith truncation marker]
    style I fill:#f96,stroke:#c00,color:#000
    style J fill:#f96,stroke:#c00,color:#000
    style K fill:#9f9,stroke:#090,color:#000
```

Last reviewed commit: cba447b