Reduce redundant CUDA Jacobian uploads during a linear solve #2806

Draft
LwhJesse wants to merge 3 commits into su2code:develop from LwhJesse:perf/gpu-single-upload-pr

Conversation


LwhJesse commented Apr 30, 2026

Proposed Changes

This draft PR reduces redundant CUDA Jacobian uploads in the CUDA matrix-vector product path.

Previously, the CUDA matvec path uploaded the Jacobian from host to device inside each GPUMatrixVectorProduct() call, which could repeatedly transfer the same matrix during a single linear solve.

This revision keeps the per-matvec upload removed from GPUMatrixVectorProduct(), but handles the CUDA matrix upload inside CSysMatrixVectorProduct with a lazy first-use policy:

  • remove the per-matvec HtDTransfer() call from CSysMatrixGPU.cu;
  • keep the upload ownership inside CSysMatrixVectorProduct;
  • defer the CUDA matrix upload until the first actual matvec call, instead of doing it in the constructor.

This preserves the original optimization goal while avoiding an early-upload timing issue on paths where the Jacobian can still be modified after the matvec wrapper is constructed.
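
As a rough sketch, the lazy first-use policy behaves like the following (Matrix and MatVecWrapper are illustrative stand-ins for CSysMatrix and CSysMatrixVectorProduct, not the actual SU2 implementation):

struct Matrix {
  void HtDTransfer() { /* copy the host Jacobian to the device */ }
  void GPUMatVec(const double* u, double* v) const { /* device-side SpMV */ }
};

class MatVecWrapper {
  Matrix& mat;
  mutable bool uploaded = false;  // tracks whether the device copy is current
 public:
  explicit MatVecWrapper(Matrix& m) : mat(m) {}  // note: no upload here
  void operator()(const double* u, double* v) const {
    if (!uploaded) {       // first matvec of the solve: upload exactly once
      mat.HtDTransfer();
      uploaded = true;
    }
    mat.GPUMatVec(u, v);   // repeated calls reuse the device-side matrix
  }
};

Repeated operator() calls within one linear solve then share a single host-to-device transfer.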

Validation

CUDA benchmark performance

Updated local CUDA benchmarks against the rebased develop on the original self-contained cases show:

Case                  Develop      Patched      Speedup
periodic2d_sector     1.890917 s   1.844045 s   1.025x
udf_lam_flatplate_s   4.853856 s   3.534693 s   1.373x
udf_lam_flatplate_m   23.328064 s  17.361233 s  1.344x
udf_lam_flatplate_l   39.389757 s  30.838123 s  1.277x
udf_test_11_probes_s  3.179211 s   2.585054 s   1.230x
udf_test_11_probes_m  16.393155 s  12.990735 s  1.262x

Geometric mean speedup: 1.246x.
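
For reference, that figure is the sixth root of the product of the six per-case speedups:

$(1.025 \times 1.373 \times 1.344 \times 1.277 \times 1.230 \times 1.262)^{1/6} \approx 1.246$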

CUDA numerical consistency

I also re-ran the 6 original CUDA benchmark cases with:

  • develop_gpu
  • patched_gpu

and compared the final history.csv values. The final-state numerical results matched for all 6 cases, with max_abs_delta = 0.0 in each case.

Targeted CPU regression coverage

I additionally ran a targeted workflow-style CPU subset on the most relevant multizone / FSI paths:

  • serial_dyn_fsi
  • parallel_dyn_fsi
  • parallel_channel_3D
  • parallel_aachen_restart
  • serial_fsi_cht
  • serial_fsi_cht_restart

Results:

  • dyn_fsi, channel_3D, and Aachen_restart pass on both base and patched builds.
  • stat_fsi / stat_fsi_restart are not clean on base either, so that issue is not unique to this PR.

Related Work

None.

PR Checklist

  • I am submitting my contribution to the develop branch.
  • My contribution generates no new compiler warnings (try with --warnlevel=3 when using meson).
  • My contribution is commented and consistent with SU2 style (https://su2code.github.io/docs_v7/Style-Guide/).
  • I used the pre-commit hook to prevent dirty commits and used pre-commit run --all to format old commits.
  • I have added a test case that demonstrates my contribution, if necessary.
  • I have updated appropriate documentation (Tutorials, Docs Page, config_template.cpp), if necessary.

Comment thread on Common/src/linear_algebra/CSysSolve.cpp (outdated)
Comment on lines 1467 to 1470:
#ifdef HAVE_CUDA
if (config->GetCUDA()) Jacobian.HtDTransfer();
#endif
auto mat_vec = CSysMatrixVectorProduct<ScalarType>(Jacobian, geometry, config);

Member

It seems we could make this part of CSysMatrixVectorProduct to handle all cases.

Author

Good point, I agree.

I will revise this so the CUDA matrix upload is handled inside CSysMatrixVectorProduct, rather than requiring each caller to do it explicitly before constructing the matvec wrapper. Then GPUMatrixVectorProduct() can stay free of the per-matvec matrix upload, while the device-side matrix is reused across repeated operator() calls.

I will also remove the explicit HtDTransfer() calls from CSysSolve.cpp and CNewtonIntegration.hpp, check the other CSysMatrixVectorProduct construction paths, and re-run CUDA/non-CUDA tests before marking this ready for review.

Author

Thanks, I updated the PR accordingly.

The CUDA Jacobian upload is now handled in CSysMatrixVectorProduct, so the upload logic is centralized there instead of being repeated at individual caller sites. This keeps GPUMatrixVectorProduct() free of the per-matvec matrix upload while covering the linear solve and Newton-Krylov paths consistently.

I also re-ran the CUDA benchmarks against the latest develop. The performance benefit remains, with the original large self-contained CUDA cases still showing about 1.28x to 1.31x speedup. nsys shows that this comes from reduced HtoD transfer traffic rather than changes in the GPU matvec kernel itself.

I additionally ran targeted NK coverage to exercise the CNewtonIntegration path affected by this change.

Author

After rebasing and re-checking the call paths, I adjusted the upload timing in CSysMatrixVectorProduct.

The upload is still handled centrally in CSysMatrixVectorProduct, but it is now deferred to the first actual matvec use instead of happening in the constructor. This avoids uploading too early on paths where the Jacobian may still be modified after wrapper construction.
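
As a toy illustration of that hazard (hypothetical names, not the SU2 code):

#include <cstdio>

struct Jacobian {
  double host = 0.0;    // stand-in for the host-side matrix data
  double device = 0.0;  // stand-in for the device-side copy
  void HtDTransfer() { device = host; }  // host-to-device upload
};

int main() {
  Jacobian J;
  J.HtDTransfer();  // eager policy: upload at construction (device sees 0)
  J.host = 42.0;    // Jacobian assembled/modified after wrapper construction
  // A matvec at this point would read the stale device copy (0, not 42).
  J.HtDTransfer();  // lazy policy: upload at first matvec (device sees 42)
  std::printf("device = %g\n", J.device);
  return 0;
}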

I re-ran the local CUDA benchmarks on the rebased branch, and I also checked a targeted CPU regression subset plus develop-vs-patched GPU history consistency. The speedup remains, and the final GPU history values match develop on the original benchmark cases.

Member

Cool. Anything else you plan to add to this PR? If not it LGTM.
