Fix mv read loop hardcoding 4 instead of nnode#1190
sbryngelson wants to merge 3 commits into MFlowCode:master
Conversation
📝 Walkthrough: Replaced the hardcoded multiplier 4 with the dynamic nnode parameter.
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Pull request overview
This PR fixes a bug in the bubble restart file reading logic where the mv data file loop was hardcoded to iterate 4 times instead of using the nnode variable, which could cause incorrect file reads when nnode != 4.
Changes:
- Updated the `mv` data file loop to use `nnode` instead of the hardcoded value `4`
- Corrected the file number offset calculation to use `nnode` for proper indexing
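The pattern being fixed can be sketched as follows. This is a hypothetical sketch based on the description above; the actual loop body and offset arithmetic in `src/pre_process/m_start_up.fpp` differ:

```fortran
! Before: the mv read loop iterates a literal 4 times, which only
! happens to be correct when the configured node count is 4
do i = 1, 4
    ! ... open/read the mv data file whose number is computed with 4 ...
end do

! After: the loop mirrors the pb read loop above it and uses nnode
! for both the loop bound and the file number offset
do i = 1, nnode
    ! ... open/read the mv data file whose number is computed with nnode ...
end do
```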
Force-pushed from 44e0f7f to 0870de7 (compare)
Codecov Report: ❌ Patch coverage is

Additional details and impacted files:

```
@@           Coverage Diff           @@
##           master    #1190   +/-  ##
=======================================
  Coverage   44.05%   44.05%
=======================================
  Files          70       70
  Lines       20496    20496
  Branches     1989     1989
=======================================
  Hits         9030     9030
  Misses      10328    10328
  Partials     1138     1138
```

☔ View full report in Codecov by Sentry.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/pre_process/m_start_up.fpp (1)
Lines 417-420: ⚠️ Potential issue | 🟡 Minor

`file_num` buffer may be too short for `sys_size + nb*nnode` when `nnode` is large.

The buffer is sized for the digit count of `sys_size` alone, but numbers written during pb/mv reads go up to `sys_size + nb*nnode`. For the current default (nnode = 4) this is benign because `sys_size` already incorporates `nb*nmom` (which exceeds `nb*4`). However, for larger `nnode` values, which this PR explicitly enables, the number of digits can increase, causing Fortran to fill `file_num` with `'*'` characters and silently failing to find the data file.

🐛 Proposed fix
```diff
-character(LEN= &
-           int(floor(log10(real(sys_size, wp)))) + 1) :: file_num
+character(LEN= &
+           int(floor(log10(real(sys_size + 2*nb*nnode, wp)))) + 1) :: file_num
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/pre_process/m_start_up.fpp` around lines 417 - 420, The declaration for file_num is too short because it only uses digits of sys_size; change the sizing to compute the maximum possible index (e.g. max_index = sys_size + nb*nnode) and use int(floor(log10(real(max_index, wp))) + 1) for the character(LEN=...) length so file_num can hold numbers up to sys_size + nb*nnode; update any related initializations that assume the old length and keep the symbol names file_num, sys_size, nb, nnode, and wp to locate and modify the declaration and any dependent code.
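The failure mode can be illustrated with a small standalone program. The values below are hypothetical; the real code sizes `file_num` from the module-level `sys_size`, `nb`, `nnode`, and `wp`:

```fortran
program file_num_width
    implicit none
    integer, parameter :: sys_size = 8, nb = 3, nnode = 10
    integer :: max_index, old_len, new_len

    ! Largest file number written during the pb/mv reads
    max_index = sys_size + 2*nb*nnode              ! 68 with these values

    ! Old sizing: digits of sys_size alone (1 digit for "8")
    old_len = int(floor(log10(real(sys_size)))) + 1

    ! Proposed sizing: digits of the largest possible index (2 for "68")
    new_len = int(floor(log10(real(max_index)))) + 1

    ! If the character length is old_len, an internal write such as
    !     write (file_num, '(I0)') max_index
    ! overflows the field, fills file_num with '*', and the data file
    ! lookup silently fails.
    print *, old_len, new_len
end program file_num_width
```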
Force-pushed from 9a63b88 to 5e704f7 (compare)
🧹 Nitpick comments (1)
src/common/m_mpi_common.fpp (1)
Lines 742-757: `collapse=4` with a 5-loop body; consider bumping to `collapse=5` for GPU parallelism consistency.

The direction-1 `qbmm_comm` pb-pack loop has 5 levels (l, k, j, i, q) but uses `collapse=4`, leaving `q` sequential inside each thread. Every other qbmm direction pack/unpack loop (including the adjacent mv-pack on line 759 and the direction-1 pb-unpack on line 951) uses `collapse=5`. There is no data dependency preventing `collapse=5` here; writes to `buff_send(r)` are unique per `(l, k, j, i, q)` combination.
♻️ Suggested fix

```diff
-        $:GPU_PARALLEL_LOOP(collapse=4,private='[r]')
+        $:GPU_PARALLEL_LOOP(collapse=5,private='[r]')
         do l = 0, p
             do k = 0, n
                 do j = 0, buff_size - 1
                     do i = nVar + 1, nVar + nnode
                         do q = 1, nb
```

Based on learnings: "When modifying MPI pack/unpack logic in one sweep direction, update all directions"; the same parallelism level should be consistent across the symmetric pack/unpack loops for correctness and maintainability.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/common/m_mpi_common.fpp` around lines 742 - 757, The GPU parallel loop for the qbmm_comm pb-pack currently uses $:GPU_PARALLEL_LOOP(collapse=4,private='[r]') but the loop body has five nested loops (l,k,j,i,q), so change collapse=4 to collapse=5 to allow all five loops to be collapsed for GPU parallelism; locate the qbmm_comm pack block that writes buff_send(r) (the nested loops with r = ... and buff_send(r) = real(pb_in(...), kind=wp)) and update the pragma to collapse=5 to match the adjacent mv-pack and pb-unpack loops for consistency.
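For reference, a minimal sketch of what the collapse clause changes. The directive form here is hypothetical OpenACC; the project's `$:GPU_PARALLEL_LOOP` Fypp macro presumably expands to the backend-specific equivalent:

```fortran
! With collapse(4), only the outer four loops form the parallel
! iteration space and q runs sequentially inside each thread.
! With collapse(5), all five loops are flattened, exposing nb times
! more parallelism. This is safe here because r is unique per
! (l, k, j, i, q), so every write to buff_send(r) is independent.
!$acc parallel loop collapse(5) private(r)
do l = 0, p
    do k = 0, n
        do j = 0, buff_size - 1
            do i = nVar + 1, nVar + nnode
                do q = 1, nb
                    ! r = linearized index over (l, k, j, i, q)
                    ! buff_send(r) = real(pb_in(...), kind=wp)
                end do
            end do
        end do
    end do
end do
```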
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- src/common/m_mpi_common.fpp
- src/pre_process/m_global_parameters.fpp
- src/pre_process/m_start_up.fpp
- src/simulation/m_global_parameters.fpp
🚧 Files skipped from review as they are similar to previous changes (3)
- src/pre_process/m_global_parameters.fpp
- src/simulation/m_global_parameters.fpp
- src/pre_process/m_start_up.fpp
The pb loop just above correctly uses nnode for both the loop bound and file number offset. The mv loop should match. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ommon Fix all remaining instances of the literal integer 4 representing the QBMM quadrature node count (nnode) across simulation and common modules, completing the nnode parameterization started in the PR. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Fix remaining instances of the literal 4 representing QBMM quadrature node count in pre_process/m_global_parameters.fpp and m_start_up.fpp, consistent with the fixes already applied to common and simulation. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Force-pushed from 6d1fd33 to cfca1b2 (compare)
User description
Summary
Severity: HIGH — mv data file loop uses wrong bounds and offsets when nnode != 4.
File:
`src/pre_process/m_start_up.fpp`, lines 479-482

The `pb` (pressure) read loop just above correctly uses `nnode` for both the loop bound and file number offset calculation. The `mv` (mass/velocity) read loop immediately below hardcodes `4` instead.

Before

After
Why this went undetected

`nnode` is 4 in the most common configuration, so the hardcoded value happens to be correct.

Test plan
🤖 Generated with Claude Code
Fixes #1210
CodeAnt-AI Description
Use configured nnode for QBMM I/O and MPI buffers instead of hardcoded 4
What Changed
Impact
✅ Fewer restart failures with non-default nnode
✅ Correct mv mass/velocity data on restart when nnode != 4
✅ Correct MPI I/O and communication sizing for QBMM runs