
[SofaCUDA] ElementFEMForceField: Generic CUDA implementation #6071

Open
fredroy wants to merge 21 commits into sofa-framework:master from fredroy:femelasticity_cuda

Conversation


@fredroy (Contributor) commented Apr 8, 2026

Based on #5882.

This is the SofaCUDA implementation of #5882.
Claude did the heavy lifting:

Add CUDA-accelerated implementations of ElementLinearSmallStrainFEMForceField and ElementCorotationalFEMForceField for both float and double precision (CudaVec3f/CudaVec3d). Uses a two-kernel approach (per-element compute + per-vertex gather) with SoA memory layout for coalesced GPU access. The corotational version supports full GPU-side rotation computation for triangle and hexahedron elements.
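
To make the two-kernel idea concrete, here is a minimal CUDA sketch of the pattern (hypothetical kernel and buffer names, simplified float-only signatures; the PR's actual kernels are templated over element type and precision):

    // Kernel 1: one thread per element; each element writes its per-node
    // force contributions to an element-major scratch buffer, so no two
    // threads ever write the same location (no atomics needed).
    __global__ void elementForceKernel(int nbElem, int nbNodesPerElem,
                                       const int* elemNodes,   // element-major connectivity
                                       const float* positions, // packed xyz
                                       float* elemForces)      // nbElem * nbNodesPerElem * 3
    {
        const int e = blockIdx.x * blockDim.x + threadIdx.x;
        if (e >= nbElem) return;
        for (int n = 0; n < nbNodesPerElem; ++n)
        {
            const int node = elemNodes[e * nbNodesPerElem + n];
            // ... real kernel: strain/stress from the element's node positions
            // and its stiffness, accumulated into this per-element block ...
            for (int c = 0; c < 3; ++c)
                elemForces[(e * nbNodesPerElem + n) * 3 + c] =
                    0.0f * positions[3 * node + c]; // placeholder
        }
    }

    // Kernel 2: one thread per vertex; gathers the contributions of every
    // element touching that vertex via a precomputed CSR-style map, so the
    // final writes to the force vector are coalesced and race-free.
    __global__ void vertexGatherKernel(int nbVertices,
                                       const int* gatherOffsets, // size nbVertices + 1
                                       const int* gatherEntries, // (element,node) slots in elemForces
                                       const float* elemForces,
                                       float* forces)
    {
        const int v = blockIdx.x * blockDim.x + threadIdx.x;
        if (v >= nbVertices) return;
        float f[3] = {0.f, 0.f, 0.f};
        for (int i = gatherOffsets[v]; i < gatherOffsets[v + 1]; ++i)
            for (int c = 0; c < 3; ++c)
                f[c] += elemForces[3 * gatherEntries[i] + c];
        for (int c = 0; c < 3; ++c)
            forces[3 * v + c] += f[c];
    }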

The human implemented the examples, comparisons, and benchmarks against the current FEM implementation in SofaCUDA.

As for the benchmarks (tet and hex only), in a nutshell:

  • the legacy Tetrahedron version is very specialized/optimized, so it is noticeably faster than the new one (and the more elements there are, the larger its lead);
  • the Hexahedron version is the opposite: the new one is faster than the legacy, and the more elements there are, the bigger the gap.

Benchmarks (corotational only), run on an i7 13700K + 4080Ti, can be launched like this:

    for version in new legacy; do
      for template in Vec3d CudaVec3f CudaVec3d; do
        NX=40 NY=10 NZ=10 NBSTEPS=1000 FEM_VERSION=$version FEM_TEMPLATE=$template \
          python ../../../src/sandbox/applications/plugins/SofaCUDA/examples/ElementFEMForcefield/benchmarks/Hexahedron_corotational.py \
          | grep "steps done in"
      done
    done

The linear solver is CG (250 iterations, tol=1e-12).

40x10x10 grid (1000 steps, 4000 nodes, 3159 hexa)

HexahedronFEM: version=new template=Vec3d grid=(40, 10, 10) | 1000 steps done in 16.1s (62.019 fps).
HexahedronFEM: version=new template=CudaVec3f grid=(40, 10, 10) | 1000 steps done in 6.0s (166.63 fps).
HexahedronFEM: version=new template=CudaVec3d grid=(40, 10, 10) | 1000 steps done in 7.17s (139.38 fps).
HexahedronFEM: version=legacy template=Vec3d grid=(40, 10, 10) | 1000 steps done in 57.4s (17.413 fps).
HexahedronFEM: version=legacy template=CudaVec3f grid=(40, 10, 10) | 1000 steps done in 7.26s (137.71 fps).
HexahedronFEM: version=legacy template=CudaVec3d grid=(40, 10, 10) | 1000 steps done in 65.6s (15.253 fps).
---
TetrahedronFEM: version=new template=Vec3d grid=(40, 10, 10) | 1000 steps done in 31.6s (31.661 fps).
TetrahedronFEM: version=new template=CudaVec3f grid=(40, 10, 10) | 1000 steps done in 6.42s (155.66 fps).
TetrahedronFEM: version=new template=CudaVec3d grid=(40, 10, 10) | 1000 steps done in 8.31s (120.36 fps).
TetrahedronFEM: version=legacy template=Vec3d grid=(40, 10, 10) | 1000 steps done in 88.5s (11.298 fps).
TetrahedronFEM: version=legacy template=CudaVec3f grid=(40, 10, 10) | 1000 steps done in 5.28s (189.25 fps).
TetrahedronFEM: version=legacy template=CudaVec3d grid=(40, 10, 10) | 1000 steps done in 6.42s (155.79 fps).

76x16x16 grid (1000 steps, 19456 nodes, 16875 hexa) (no CPU runs)

HexahedronFEM: version=new template=CudaVec3f grid=(76, 16, 16) | 1000 steps done in 11.9s (83.914 fps).
HexahedronFEM: version=new template=CudaVec3d grid=(76, 16, 16) | 1000 steps done in 21.1s (47.282 fps).
HexahedronFEM: version=legacy template=CudaVec3f grid=(76, 16, 16) | 1000 steps done in 21.5s (46.525 fps).
HexahedronFEM: version=legacy template=CudaVec3d grid=(76, 16, 16) | 1000 steps done in 9.35e+02s (1.07 fps).
---
TetrahedronFEM: version=new template=CudaVec3f grid=(76, 16, 16) | 1000 steps done in 17.9s (55.988 fps).
TetrahedronFEM: version=new template=CudaVec3d grid=(76, 16, 16) | 1000 steps done in 58.8s (17.007 fps).
TetrahedronFEM: version=legacy template=CudaVec3f grid=(76, 16, 16) | 1000 steps done in 10.4s (95.943 fps).
TetrahedronFEM: version=legacy template=CudaVec3d grid=(76, 16, 16) | 1000 steps done in 17.9s (55.955 fps).

For fun, here is the new hexa version with a 200x30x30 grid (180k points, 167k hexa) and a downgraded CG (50 iterations, tol=1e-06):
1000 iterations done in 65.5718 s (15.2504 FPS)


By submitting this pull request, I acknowledge that
I have read, understood, and agree to the SOFA Developer Certificate of Origin (DCO).


Reviewers will merge this pull-request only if

  • it builds with SUCCESS for all platforms on the CI.
  • it does not generate new warnings.
  • it does not generate new unit test failures.
  • it does not generate new scene test failures.
  • it does not break API compatibility.
  • it is more than 1 week old (or has fast-merge label).

@fredroy added labels pr: enhancement, pr: status to review, pr: highlighted in next release, pr: based on previous PR, pr: AI-aided on Apr 8, 2026

@alxbilger (Contributor) left a comment


One criticism is that the code assumes the space is 3D. There are also some dynamic checks on the number of nodes per element, even though this data is constexpr.

fredroy added 18 commits April 9, 2026 08:19
  Extract assembled stiffness matrices into a separate contiguous buffer
  (m_assembledStiffnessMatrices) to replace getReadAccessor calls on
  Data<vector<FactorizedElementStiffness>> inside parallel forEachRange
  lambdas. The read accessor acquires a shared lock on the Data object,
  causing contention across threads and effectively serializing the
  parallel work during CG iterations. Using a direct const reference to a
  plain vector eliminates this synchronization bottleneck (~3x speedup in
  parallel mode). As a secondary benefit, the contiguous buffer only
  stores the assembled 24x24 matrices (~4.6 KB each) rather than the full
  FactorizedElementStiffness structs (~14 KB each), improving cache
  utilization.
  Replace hardcoded 3D assumption and extern "C" + switch(nbNodesPerElem)
  runtime dispatch with fully compile-time C++ template parameters
  <T, NNodes, Dim>. All kernel dimensions, stiffness block sizes, and gather
  loops are now generic over Dim. The .inl callers use a single templated
  call with constexpr nNodes and dim from the trait, eliminating both the
  if-constexpr type branching and the runtime NNodes switch. Explicit
  template instantiations in the .cu files provide the needed symbols.
  Applied to both ElementLinearSmallStrainFEMForceField and
  ElementCorotationalFEMForceField CUDA implementations.
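
A self-contained sketch of the contiguous-buffer extraction described in the first commit message above (stand-in types only; the real code uses SOFA's Data<> accessors and forEachRange):

    #include <array>
    #include <cstddef>
    #include <vector>

    // Stand-in for the assembled 24x24 element stiffness block
    // (24*24 doubles = ~4.6 KB, matching the commit message).
    using Mat24 = std::array<double, 24 * 24>;

    // Stand-in for the full ~14 KB FactorizedElementStiffness struct;
    // only `assembled` is needed during the CG iterations.
    struct FactorizedElementStiffness
    {
        Mat24 assembled;
        std::array<double, 1200> factorizationData; // suggests the extra ~9 KB
    };

    struct ElementFEMForceField
    {
        // In SOFA this vector lives behind Data<>, whose read accessor
        // takes a shared lock; acquiring it inside each parallel task
        // is what serialized the work.
        std::vector<FactorizedElementStiffness> elementData;

        // Contiguous, lock-free copy holding only the assembled blocks.
        std::vector<Mat24> m_assembledStiffnessMatrices;

        // Called once after assembly, outside any parallel region.
        void extractAssembledBuffer()
        {
            m_assembledStiffnessMatrices.resize(elementData.size());
            for (std::size_t e = 0; e < elementData.size(); ++e)
                m_assembledStiffnessMatrices[e] = elementData[e].assembled;
        }
        // The parallel forEachRange lambdas then capture
        // `const auto& K = m_assembledStiffnessMatrices;` and index K[e]
        // with no Data lock and better cache locality.
    };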
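
And a sketch of the compile-time dispatch described in the second commit message (hypothetical kernel name and signature; <T, NNodes, Dim> are the template parameters named above):

    // Before: extern "C" entry points plus a runtime switch(nbNodesPerElem).
    // After: everything is a template parameter, so block sizes and loop
    // bounds are compile-time constants inside the kernel.
    template <class T, int NNodes, int Dim>
    __global__ void elementForceKernel(int nbElem, const int* elemNodes,
                                       const T* positions, T* elemForces)
    {
        constexpr int BSize = NNodes * Dim;            // stiffness block size, known at compile time
        const int e = blockIdx.x * blockDim.x + threadIdx.x;
        if (e >= nbElem) return;
        T f[BSize];                                    // fixed size, sits in registers
        for (int i = 0; i < BSize; ++i) f[i] = T(0);
        for (int n = 0; n < NNodes; ++n)               // fully unrollable loops
        {
            const int node = elemNodes[e * NNodes + n];
            for (int d = 0; d < Dim; ++d)
                f[n * Dim + d] += T(0) * positions[node * Dim + d]; // placeholder math
        }
        for (int i = 0; i < BSize; ++i)
            elemForces[e * BSize + i] = f[i];
    }

    // The .inl caller passes constexpr traits, e.g. for linear hexahedra
    // in 3D: elementForceKernel<float, 8, 3><<<grid, block>>>(...);

    // Explicit instantiations in the .cu file provide the symbols the
    // host translation units link against:
    template __global__ void elementForceKernel<float, 8, 3>(int, const int*, const float*, float*);
    template __global__ void elementForceKernel<double, 8, 3>(int, const int*, const double*, double*);
    template __global__ void elementForceKernel<float, 4, 3>(int, const int*, const float*, float*);
    template __global__ void elementForceKernel<double, 4, 3>(int, const int*, const double*, double*);
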
@fredroy force-pushed the femelasticity_cuda branch from 4f653db to e41d0b6 on April 8, 2026 23:19

@fredroy (Author) commented Apr 9, 2026

One criticism is that the code assumes the space is 3D. There are also some dynamic checks on the number of nodes per element, even though this data is constexpr.

Taken into account by 🤖

