Fix missing private clauses in 3D viscous GPU loops #1225
sbryngelson wants to merge 3 commits into MFlowCode:master from
Conversation
Pull request overview
Fixes a GPU data-race hazard in the 3D viscous stress kernels by aligning their GPU_PARALLEL_LOOP privatization with the already-correct 2D implementations.
Changes:
- Add missing mixture variables (`rho_visc`, `gamma_visc`, `pi_inf_visc`, `alpha_visc_sum`) to the `private` clauses for the 3D shear-stress and bulk-stress GPU parallel loops.
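To make the change concrete, the sketch below shows the resulting 3D shear-stress loop structure. The directive line matches the diff in this PR; the loop bounds and body are illustrative placeholders, not the actual m_viscous.fpp code.

```fortran
! Sketch only: bounds and body are placeholders; the directive mirrors the PR diff.
$:GPU_PARALLEL_LOOP(collapse=3, private='[i,j,k,l,rho_visc, gamma_visc, pi_inf_visc, alpha_visc_sum, alpha_visc, alpha_rho_visc, Re_visc, tau_Re]')
do l = 0, p
    do k = 0, n
        do j = 0, m
            ! each collapsed (j,k,l) iteration now gets its own copy of the
            ! mixture temporaries (rho_visc, gamma_visc, pi_inf_visc,
            ! alpha_visc_sum), so concurrent GPU threads no longer share them
        end do
    end do
end do
```

Without the added names in `private`, every thread of the collapsed loop would read and write the same copies of these scalars, which is the data race this PR fixes.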
🧹 Nitpick comments (1)
src/simulation/m_viscous.fpp (1)
321-321: LGTM — consider adding `q` to the private clause to be fully explicit

The added variables (`i`, `j`, `k`, `l`, `rho_visc`, `gamma_visc`, `pi_inf_visc`, `alpha_visc_sum`) correctly mirror the 2D shear-stress loop at line 104 and fix the GPU race condition.

The integer `q` (declared at line 83) is used as the induction variable of the inner `do q = 1, Re_size(i)` seq sub-loop (lines 392–396). While OpenACC treats subroutine-local scalars as implicitly private in a `parallel` region, the project guideline calls for all loop-local variables to be listed explicitly. The same gap exists in the 2D counterpart at line 104.

💡 Suggested addition

-$:GPU_PARALLEL_LOOP(collapse=3, private='[i,j,k,l,rho_visc, gamma_visc, pi_inf_visc, alpha_visc_sum, alpha_visc, alpha_rho_visc, Re_visc, tau_Re]')
+$:GPU_PARALLEL_LOOP(collapse=3, private='[i,j,k,l,q,rho_visc, gamma_visc, pi_inf_visc, alpha_visc_sum, alpha_visc, alpha_rho_visc, Re_visc, tau_Re]')

(The same addition should be applied to the 2D shear/bulk loops at lines 104 and 214, and to the 3D bulk loop at line 430.)
Based on learnings: "Ensure `private(...)` declarations on all loop-local variables in GPU-accelerated code to prevent unintended data sharing."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/simulation/m_viscous.fpp` at line 321, Add the induction variable q to the private clause of the GPU_PARALLEL_LOOP directives to explicitly mark it as loop-local: update the existing GPU_PARALLEL_LOOP(collapse=3, private='[...]') that governs the 3D shear-stress loop (and the analogous GPU_PARALLEL_LOOP directives used for the 2D shear and bulk loops and the 3D bulk loop) so that 'q' is included among the private variables alongside i,j,k,l,rho_visc,gamma_visc,pi_inf_visc,alpha_visc_sum,alpha_visc,alpha_rho_visc,Re_visc,tau_Re; this ensures the inner "do q = 1, Re_size(i)" seq sub-loop uses a private induction variable and satisfies the project guideline for explicit loop-local private declarations.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/simulation/m_viscous.fpp`:
- Line 430: The OpenACC/OpenMP GPU_PARALLEL_LOOP directive missing the
inner-loop induction variable causes incorrect privatization; update the
directive that currently lists private='[i,j,k,l,rho_visc, gamma_visc,
pi_inf_visc, alpha_visc_sum, alpha_visc, alpha_rho_visc, Re_visc, tau_Re]' to
also include the inner seq-loop induction variable q so it matches the 2D
bulk-stress loop’s private clause and the inner seq loop that uses q (the inner
loop around lines 501–505 referenced in the review).
---
Nitpick comments:
In `@src/simulation/m_viscous.fpp`:
- Line 321: Add the induction variable q to the private clause of the
GPU_PARALLEL_LOOP directives to explicitly mark it as loop-local: update the
existing GPU_PARALLEL_LOOP(collapse=3, private='[...]') that governs the 3D
shear-stress loop (and the analogous GPU_PARALLEL_LOOP directives used for the
2D shear and bulk loops and the 3D bulk loop) so that 'q' is included among the
private variables alongside
i,j,k,l,rho_visc,gamma_visc,pi_inf_visc,alpha_visc_sum,alpha_visc,alpha_rho_visc,Re_visc,tau_Re;
this ensures the inner "do q = 1, Re_size(i)" seq sub-loop uses a private
induction variable and satisfies the project guideline for explicit loop-local
private declarations.
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files

@@ Coverage Diff @@
##           master    #1225   +/- ##
=======================================
  Coverage   44.05%   44.05%
=======================================
  Files          70       70
  Lines       20496    20496
  Branches     1989     1989
=======================================
  Hits         9030     9030
  Misses      10328    10328
  Partials     1138     1138

☔ View full report in Codecov by Sentry.
src/simulation/m_viscous.fpp (Outdated)

 if (shear_stress) then ! Shear stresses
-    $:GPU_PARALLEL_LOOP(collapse=3, private='[alpha_visc, alpha_rho_visc, Re_visc, tau_Re]')
+    $:GPU_PARALLEL_LOOP(collapse=3, private='[i,j,k,l,rho_visc, gamma_visc, pi_inf_visc, alpha_visc_sum, alpha_visc, alpha_rho_visc, Re_visc, tau_Re]')
Suggestion: Include the additional loop iterator in the GPU private list for this parallel loop to avoid potential data races and fully comply with the GPU macro guidelines. [custom_rule]
Severity Level: Minor
-$:GPU_PARALLEL_LOOP(collapse=3, private='[i,j,k,l,rho_visc, gamma_visc, pi_inf_visc, alpha_visc_sum, alpha_visc, alpha_rho_visc, Re_visc, tau_Re]')
+$:GPU_PARALLEL_LOOP(collapse=3, private='[i,j,k,l,q,rho_visc, gamma_visc, pi_inf_visc, alpha_visc_sum, alpha_visc, alpha_rho_visc, Re_visc, tau_Re]')
Why it matters? ⭐
The improved code adds the loop-local iterator q to the GPU macro private list, which directly fixes a GPU-macro guideline violation in the PR: loop-local scalars used inside parallel regions must be privatized (see "GPU — private(...) on loop-local variables" in the review rules). The inner Re_size loop uses q inside the GPU_PARALLEL_LOOP region and leaving it shared risks data races on device/host. The suggested improved_code is syntactically consistent with existing GPU macro usage and resolves the rule violation.
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** src/simulation/m_viscous.fpp
**Line:** 322:322
**Comment:**
*Custom Rule: Include the additional loop iterator in the GPU private list for this parallel loop to avoid potential data races and fully comply with the GPU macro guidelines.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
src/simulation/m_viscous.fpp (Outdated)

 if (bulk_stress) then ! Bulk stresses
-    $:GPU_PARALLEL_LOOP(collapse=3, private='[alpha_visc, alpha_rho_visc, Re_visc, tau_Re]')
+    $:GPU_PARALLEL_LOOP(collapse=3, private='[i,j,k,l,rho_visc, gamma_visc, pi_inf_visc, alpha_visc_sum, alpha_visc, alpha_rho_visc, Re_visc, tau_Re]')
Suggestion: Add the missing loop iterator to the GPU private list for this bulk-stress parallel loop so that all loop-local scalars are correctly privatized. [custom_rule]
Severity Level: Minor
-$:GPU_PARALLEL_LOOP(collapse=3, private='[i,j,k,l,rho_visc, gamma_visc, pi_inf_visc, alpha_visc_sum, alpha_visc, alpha_rho_visc, Re_visc, tau_Re]')
+$:GPU_PARALLEL_LOOP(collapse=3, private='[i,j,k,l,q,rho_visc, gamma_visc, pi_inf_visc, alpha_visc_sum, alpha_visc, alpha_rho_visc, Re_visc, tau_Re]')
Why it matters? ⭐
Like the shear-stress case, the bulk-stress parallel region contains an inner loop "do q = 1, Re_size(i)". Omitting q from the private list violates the GPU macro guideline to mark loop-local iterators private and can produce data races on device. Adding q to the private list directly addresses this correctness issue per the repository GPU rules.
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** src/simulation/m_viscous.fpp
**Line:** 431:431
**Comment:**
*Custom Rule: Add the missing loop iterator to the GPU private list for this bulk-stress parallel loop so that all loop-local scalars are correctly privatized.
Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
CodeAnt AI Incremental review completed.
🧹 Nitpick comments (1)
src/simulation/m_viscous.fpp (1)
105-105: Minor: trailing space before comma in `alpha_visc_sum ,alpha_visc`.

Lines 105 and 215 have `alpha_visc_sum ,alpha_visc` (space before the comma), which is inconsistent with the standard `alpha_visc_sum, alpha_visc` formatting used in the corresponding 3D blocks at lines 322 and 431.

✏️ Proposed fix (both occurrences)

-$:GPU_PARALLEL_LOOP(collapse=3, private='[i,j,k,l,q,rho_visc, gamma_visc, pi_inf_visc, alpha_visc_sum ,alpha_visc, alpha_rho_visc, Re_visc, tau_Re]')
+$:GPU_PARALLEL_LOOP(collapse=3, private='[i,j,k,l,q,rho_visc, gamma_visc, pi_inf_visc, alpha_visc_sum, alpha_visc, alpha_rho_visc, Re_visc, tau_Re]')

Also applies to: 215-215
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/simulation/m_viscous.fpp` at line 105, Fix the minor formatting typo in the GPU_PARALLEL_LOOP private list: remove the stray space before the comma so the variables read "alpha_visc_sum, alpha_visc" (consistent with the 3D blocks); update both occurrences where "alpha_visc_sum ,alpha_visc" appears (lines around the GPU_PARALLEL_LOOP uses) so the private clause lists "alpha_visc_sum, alpha_visc" exactly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/simulation/m_viscous.fpp`:
- Line 105: Fix the minor formatting typo in the GPU_PARALLEL_LOOP private list:
remove the stray space before the comma so the variables read "alpha_visc_sum,
alpha_visc" (consistent with the 3D blocks); update both occurrences where
"alpha_visc_sum ,alpha_visc" appears (lines around the GPU_PARALLEL_LOOP uses)
so the private clause lists "alpha_visc_sum, alpha_visc" exactly.
The 3D shear stress and bulk stress GPU parallel loops were missing rho_visc, gamma_visc, pi_inf_visc, and alpha_visc_sum from their private clauses. The corresponding 2D loops already had these variables listed. Without privatization, these variables could cause race conditions on GPU.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Add i,j,k,l to the private list for the 3D shear_stress and bulk_stress GPU parallel loops, matching the pattern already used by the analogous 2D loops at lines 105 and 215.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

The sequential loop iterator q (used in `do q = 1, Re_size(i)`) was not privatized in any of the four GPU parallel regions. Without explicit privatization, q is shared across GPU threads on OpenACC and AMD OpenMP backends, causing a data race in Reynolds number computation.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
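The hazard around `q` described in these commit messages can be sketched as follows. Only the `do q = 1, Re_size(i)` shape and the private list come from the PR; the surrounding bounds and body are hypothetical placeholders.

```fortran
! Sketch only: if 'q' were omitted from the private list, some OpenACC/OpenMP
! backends would keep a single shared q, so two collapsed iterations could
! advance the same counter concurrently and corrupt the Reynolds-number
! accumulation.
$:GPU_PARALLEL_LOOP(collapse=3, private='[i,j,k,l,q,rho_visc, gamma_visc, pi_inf_visc, alpha_visc_sum, alpha_visc, alpha_rho_visc, Re_visc, tau_Re]')
do l = 0, p
    do k = 0, n
        do j = 0, m
            ! (i is set in the elided loop body)
            do q = 1, Re_size(i)  ! inner seq sub-loop; q must be thread-private
                ! accumulate viscous Reynolds-number terms for fluid q
            end do
        end do
    end do
end do
```

Listing `q` explicitly, as the final commit does, makes the privatization uniform across all four GPU parallel regions regardless of backend defaults.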
Force-pushed 164cda3 to 9b3ee97
Claude Code Review

Files changed: 1

Summary of changes

Findings

No bugs or CLAUDE.md violations found. The fix is correct: all four GPU parallel loops now carry identical and complete private variable lists.

Improvement opportunities:

Head SHA: 9b3ee97 — reviewed by Claude Code
User description
Summary
`GPU_PARALLEL_LOOP` directives were missing `rho_visc`, `gamma_visc`, `pi_inf_visc`, and `alpha_visc_sum` from their `private` clauses

Test plan
🤖 Generated with Claude Code
Summary by CodeRabbit
CodeAnt-AI Description
Fix GPU race conditions in 3D viscous computations by privatizing temporaries
What Changed
Impact
✅ Correct 3D viscous results on GPU
✅ Fewer GPU race-condition artifacts in viscous simulations
✅ No change to CPU outputs