Reduce redundant CUDA Jacobian uploads during a linear solve #2806
LwhJesse wants to merge 3 commits into su2code:develop from
Conversation
#ifdef HAVE_CUDA
  if (config->GetCUDA()) Jacobian.HtDTransfer();
#endif
  auto mat_vec = CSysMatrixVectorProduct<ScalarType>(Jacobian, geometry, config);
It seems we could make this part of CSysMatrixVectorProduct to handle all cases.
Good point, I agree.
I will revise this so the CUDA matrix upload is handled inside CSysMatrixVectorProduct, rather than requiring each caller to do it explicitly before constructing the matvec wrapper. Then GPUMatrixVectorProduct() can stay free of the per-matvec matrix upload, while the device-side matrix is reused across repeated operator() calls.
I will also remove the explicit HtDTransfer() calls from CSysSolve.cpp and CNewtonIntegration.hpp, check the other CSysMatrixVectorProduct construction paths, and re-run CUDA/non-CUDA tests before marking this ready for review.
Thanks, I updated the PR accordingly.
The CUDA Jacobian upload is now handled in CSysMatrixVectorProduct, so the upload
logic is centralized there instead of being repeated at individual caller sites. This
keeps GPUMatrixVectorProduct() free of the per-matvec matrix upload while covering the
linear solve and Newton-Krylov paths consistently.
I also re-ran the CUDA benchmarks against the latest develop. The performance benefit
remains, with the original large self-contained CUDA cases still showing about 1.28x to
1.31x speedup. nsys shows that this comes from reduced HtoD transfer traffic rather
than changes in the GPU matvec kernel itself.
I additionally ran supplemental targeted NK coverage to exercise the
CNewtonIntegration path affected by this change.
After rebasing and re-checking the call paths, I adjusted the upload timing in
CSysMatrixVectorProduct.
The upload is still handled centrally in CSysMatrixVectorProduct, but it is now
deferred to the first actual matvec use instead of happening in the constructor.
This avoids uploading too early on paths where the Jacobian may still be
modified after wrapper construction.
I re-ran the local CUDA benchmarks on the rebased branch, and I also checked a
targeted CPU regression subset plus develop-vs-patched GPU history consistency.
The speedup remains, and the final GPU history values match develop on the
original benchmark cases.
Cool. Anything else you plan to add to this PR? If not it LGTM.
71e26d0 to 478598a (force-push)
Proposed Changes
This draft PR reduces redundant CUDA Jacobian uploads in the CUDA matrix-vector product path.
Previously, the CUDA matvec path uploaded the Jacobian from host to device inside each
GPUMatrixVectorProduct() call, which could repeatedly transfer the same matrix during a
single linear solve.
This revision keeps the per-matvec upload removed from GPUMatrixVectorProduct(), but
handles the CUDA matrix upload inside CSysMatrixVectorProduct with a lazy first-use policy:
- the per-matvec HtDTransfer() call is removed from CSysMatrixGPU.cu;
- the upload is centralized in CSysMatrixVectorProduct;
- the upload happens on first matvec use instead of in the constructor.

This preserves the original optimization goal while avoiding an early-upload timing issue on
paths where the Jacobian can still be modified after the matvec wrapper is constructed.
Validation
CUDA benchmark performance

Updated local CUDA benchmarks against the rebased develop on the original self-contained
cases show:
- Geometric mean speedup: 1.246x.

CUDA numerical consistency
I also re-ran the 6 original CUDA benchmark cases with develop_gpu and patched_gpu builds
and compared the final history.csv values. The final-state numerical results matched for all
6 cases, with max_abs_delta = 0.0 in each case.

Targeted CPU regression coverage
I additionally ran a targeted workflow-style CPU subset on the most relevant multizone / FSI
paths:
- serial_dyn_fsi
- parallel_dyn_fsi
- parallel_channel_3D
- parallel_aachen_restart
- serial_fsi_cht
- serial_fsi_cht_restart

Results:
- dyn_fsi, channel_3D, and aachen_restart pass on both base and patched builds.
- stat_fsi / stat_fsi_restart are not clean on base either, so that issue is not unique to
this PR.
Related Work
None.
PR Checklist
- My contribution generates no new compiler warnings (try --warnlevel=3 when using meson).
- My contribution is commented and consistent with the SU2 style guide
(https://su2code.github.io/docs_v7/Style-Guide/).
- I used pre-commit run --all to format old commits.
- I have added a test case that demonstrates my contribution, if necessary.
- I have updated relevant documentation (config_template.cpp), if necessary.