[SofaCUDA] ElementFEMForceField: Generic CUDA implementation #6071
Open
fredroy wants to merge 21 commits into sofa-framework:master from
Conversation
alxbilger (Contributor) reviewed on Apr 8, 2026 and left a comment:
A criticism is that the code assumes the space is 3D. There are also dynamic checks on the number of nodes per element, even though this data is constexpr.
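The point above can be sketched as follows. This is an illustrative trait (names like `ElementTraits`, `Hexahedron`, and `stiffnessBlockSize` are hypothetical, not SOFA's actual API): when the node count and dimension are compile-time constants, runtime checks can become `static_assert`s or template parameters.

```cpp
#include <cstddef>

// Hypothetical element traits: node count and spatial dimension are known at
// compile time, so they can drive template code instead of runtime checks.
template <class Element>
struct ElementTraits;

struct Hexahedron {};
struct Tetrahedron {};

template <> struct ElementTraits<Hexahedron>
{
    static constexpr std::size_t nbNodes = 8;
    static constexpr std::size_t dim = 3;
};
template <> struct ElementTraits<Tetrahedron>
{
    static constexpr std::size_t nbNodes = 4;
    static constexpr std::size_t dim = 3;
};

// Replaces a dynamic `switch (nbNodesPerElem)` with a compile-time value.
template <class Element>
constexpr std::size_t stiffnessBlockSize()
{
    return ElementTraits<Element>::nbNodes * ElementTraits<Element>::dim;
}

// Checked at compile time rather than per call:
static_assert(stiffnessBlockSize<Hexahedron>() == 24);
static_assert(stiffnessBlockSize<Tetrahedron>() == 12);
```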
Extract assembled stiffness matrices into a separate contiguous buffer (m_assembledStiffnessMatrices) to replace getReadAccessor calls on Data<vector<FactorizedElementStiffness>> inside parallel forEachRange lambdas. The read accessor acquires a shared lock on the Data object, causing contention across threads and effectively serializing the parallel work during CG iterations. Using a direct const reference to a plain vector eliminates this synchronization bottleneck (~3x speedup in parallel mode). As a secondary benefit, the contiguous buffer only stores the assembled 24x24 matrices (~4.6 KB each) rather than the full FactorizedElementStiffness structs (~14 KB each), improving cache utilization.
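The pattern described above can be sketched in isolation. This is a minimal stand-in, not SOFA code: `GuardedData` mimics a `Data<vector<...>>` whose read accessor takes a lock on each access, and `parallelSum` shows the fix — snapshot into a plain contiguous vector once, then let workers read it lock-free.

```cpp
#include <algorithm>
#include <cstddef>
#include <mutex>
#include <numeric>
#include <thread>
#include <vector>

// Illustrative stand-in for a lock-guarded Data container (not SOFA's API).
struct GuardedData
{
    std::vector<double> values;
    mutable std::mutex mtx;

    std::vector<double> copyOut() const  // one lock, one copy
    {
        std::lock_guard<std::mutex> lock(mtx);
        return values;
    }
};

// Pattern from the change: take the snapshot BEFORE the parallel region, so
// no thread touches the lock inside the hot loop.
double parallelSum(const GuardedData& data, unsigned nbThreads)
{
    const std::vector<double> snapshot = data.copyOut();
    std::vector<double> partial(nbThreads, 0.0);
    std::vector<std::thread> workers;
    const std::size_t chunk = (snapshot.size() + nbThreads - 1) / nbThreads;
    for (unsigned t = 0; t < nbThreads; ++t)
    {
        workers.emplace_back([&, t] {
            const std::size_t begin = t * chunk;
            const std::size_t end = std::min(snapshot.size(), begin + chunk);
            for (std::size_t i = begin; i < end; ++i)
                partial[t] += snapshot[i];  // lock-free reads of plain memory
        });
    }
    for (auto& w : workers) w.join();
    return std::accumulate(partial.begin(), partial.end(), 0.0);
}
```

Acquiring the accessor inside each `forEachRange` lambda would instead serialize the workers on the shared lock, which is the contention the PR removes.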
Replace hardcoded 3D assumption and extern "C" + switch(nbNodesPerElem) runtime dispatch with fully compile-time C++ template parameters <T, NNodes, Dim>. All kernel dimensions, stiffness block sizes, and gather loops are now generic over Dim. The .inl callers use a single template call with constexpr nNodes and dim from the trait, eliminating both the if-constexpr type branching and the runtime NNodes switch. Explicit template instantiations in the .cu files provide the needed symbols. Applied to both ElementLinearSmallStrainFEMForceField and ElementCorotationalFEMForceField CUDA implementations.
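The dispatch style described above can be sketched as plain C++ (a stand-in for the CUDA kernels; all names here are illustrative, not the PR's actual symbols). Element type, node count, and dimension are template parameters, so block sizes and gather loops resolve at compile time instead of through a runtime `switch`.

```cpp
#include <array>
#include <cstddef>

// Hypothetical kernel wrapper, fully generic over <T, NNodes, Dim>.
template <typename T, std::size_t NNodes, std::size_t Dim>
struct ElementKernel
{
    static constexpr std::size_t BlockSize = NNodes * Dim;  // e.g. 8*3 = 24

    // Gather the coordinates of one element's nodes into a local block;
    // both loop bounds are compile-time constants.
    static std::array<T, BlockSize>
    gather(const T* positions, const std::size_t* elementNodes)
    {
        std::array<T, BlockSize> local{};
        for (std::size_t n = 0; n < NNodes; ++n)      // generic over NNodes
            for (std::size_t d = 0; d < Dim; ++d)     // generic over Dim
                local[n * Dim + d] = positions[elementNodes[n] * Dim + d];
        return local;
    }
};

// Explicit instantiations (in the real PR these live in the .cu files and
// provide the linker symbols):
template struct ElementKernel<float, 8, 3>;  // hexahedra
template struct ElementKernel<float, 4, 3>;  // tetrahedra
```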
4f653db to e41d0b6
fredroy (Contributor, Author): Taken into account by 🤖
This reverts commit e41d0b6.
Based on #5882: this PR implements #5882 for SofaCUDA.
Claude did the heavy lifting:
The human implemented the examples, comparisons, and benchmarks against the current FEM implementation in SofaCUDA.
As for the benchmarks (tetrahedra and hexahedra only):
In a nutshell,
Benchmarks (corotational only...)
(i7 13700K + 4080Ti)
can be launched like this:
LinearSolver is CG, 250 iterations, tol=1e-12
40x10x10 grid (1000 steps, 4000 nodes, 3159 hexahedra)
76x16x16 grid (1000 steps, 19456 nodes, 16875 hexahedra) (no CPU)
For fun, the new hexahedron version with a 200x30x30 grid (180k points, 167k hexahedra) and a downgraded CG (50 iterations, tol of 1e-06):
1000 iterations done in 65.5718 s (15.2504 FPS)

By submitting this pull request, I acknowledge that I have read, understand, and agree to the SOFA Developer Certificate of Origin (DCO).