⚡️ Speed up function _compile_ok by 134% in PR #1902 (codeflash/optimize-pr1897-2026-03-25T17.23.56)#1904
Conversation
Claude finished @codeflash-ai[bot]'s task in 2m 42s — View job

PR Review Summary

Prek Checks: ✅

Code Review
Classification: SMALL — 3 lines deleted (import, decorator, trailing newline) in one file.
Change: Removes the `@lru_cache(maxsize=1024)` decorator.
Verdict: Correct and safe.
Speedup claim credibility: Plausible.
One minor note: A few of the displayed generated regression tests test …

Duplicate Detection: No duplicates detected.
Test Coverage: Skipped — SMALL PR.
Last updated: 2026-03-25T17:37:00Z
11c5275 into codeflash/optimize-pr1897-2026-03-25T17.23.56
⚡️ This pull request contains optimizations for PR #1902
If you approve this dependent PR, these changes will be merged into the original PR branch `codeflash/optimize-pr1897-2026-03-25T17.23.56`.

📄 134% (1.34x) speedup for `_compile_ok` in `codeflash/languages/python/support.py`

⏱️ Runtime: 26.9 milliseconds → 11.5 milliseconds (best of 49 runs)

📝 Explanation and details

Removing the `@lru_cache(maxsize=1024)` decorator eliminated per-call overhead from argument hashing and dictionary lookups that exceeded the benefit of caching, since `compile()` is already fast (~15-30 µs for typical inputs) and the function is called with mostly unique source strings in practice. The 134% speedup (26.9 ms → 11.5 ms) reflects that cache-management cost dominated total runtime when processing diverse code snippets through the `validate_syntax` caller. Test results show consistent small wins across all cases, with the largest gains on short/invalid inputs where cache overhead was proportionally highest (e.g., the null-byte test improved 23.9%). The single regression is the unhashable-input test (43.8% slower) because `TypeError` now originates from `compile()` rather than cache-key construction, but this is an edge case with negligible absolute impact.

✅ Correctness verification report:
🌀 Generated Regression Tests (collapsed)
To edit these changes, run `git checkout codeflash/optimize-pr1902-2026-03-25T17.35.13` and push.