From c9a3753bdc6fbd3508831acfbdb72a3c787270e7 Mon Sep 17 00:00:00 2001
From: Timo Heister
Date: Wed, 31 Jul 2024 16:04:18 -0400
Subject: [PATCH] step-64: cleanup intro, remove CUDA, mention Kokkos

step-64 does not use CUDA directly anymore
---
 examples/step-64/doc/intro.dox    | 48 +++++++++----------------------
 include/deal.II/lac/cuda_vector.h | 18 ++++++++++++
 2 files changed, 31 insertions(+), 35 deletions(-)

diff --git a/examples/step-64/doc/intro.dox b/examples/step-64/doc/intro.dox
index 80a8c03ba0..d77a6b0d03 100644
--- a/examples/step-64/doc/intro.dox
+++ b/examples/step-64/doc/intro.dox
@@ -7,7 +7,7 @@ This program was contributed by Bruno Turcksin and Daniel Arndt, Oak Ridge Natio

 <h1>Introduction</h1>

-This example shows how to implement a matrix-free method on the GPU using CUDA
+This example shows how to implement a matrix-free method on the GPU using Kokkos
 for the Helmholtz equation with variable coefficients on a hypercube. The linear
 system will be solved using the conjugate gradient method and is parallelized
 with MPI.
@@ -23,10 +23,12 @@ the most popular architecture for machine learning.
 
 On the other hand, GPUs are not easy to program. This program explores the deal.II
 capabilities to see how efficiently such a program can be implemented.
+We achieve a performance-portable implementation by relying on the Kokkos
+library, which supports different devices including CPUs (serial or using OpenMP),
+NVIDIA GPUs (when configured with CUDA), AMD GPUs (with ROCm), and more.
 
 While we have tried for the interface of the matrix-free classes for the CPU and
-the GPU to be as close as possible, there are a few differences. When using
-the matrix-free framework on a GPU, one must write some CUDA code. However, the
-amount is fairly small and the use of CUDA is limited to a few keywords.
+the GPU to be as close as possible, there are a few differences. However, these
+differences are small, and the use of Kokkos is limited to a few keywords.

 <h3>The test case</h3>

@@ -59,9 +61,7 @@ separate steps:
 The data movements can either be done explicitly by the user code or done
 automatically using UVM (Unified Virtual Memory). In deal.II, only the first method
 is supported. While it means an extra burden for the user, this
-allows for
-better control of data movement and more importantly it avoids to mistakenly run
-important kernels on the host instead of the device.
+allows for better control of data movement.
 
 The data movement in deal.II is done using LinearAlgebra::ReadWriteVector. These
 vectors can be seen as buffers on the host that are used to either store data
@@ -70,37 +70,15 @@ that can be used on the device:
 - LinearAlgebra::CUDAWrappers::Vector, which is similar to the more common
 Vector<Number>, and
 - LinearAlgebra::distributed::Vector<Number,
-MemorySpace::CUDA>, which is a regular
+MemorySpace::Default>, which is a regular
 LinearAlgebra::distributed::Vector where we have specified which memory space
 to use.
 
-If no memory space is specified, the default is MemorySpace::Host.
+If no memory space is specified, the default is MemorySpace::Host. MemorySpace::Default
+corresponds to data living on the device.
 
-Next, we show how to move data to/from the device using
-LinearAlgebra::CUDAWrappers::Vector:
-@code
-  unsigned int size = 10;
-  LinearAlgebra::ReadWriteVector<double> rw_vector(size);
-
-  ...do something with the rw_vector...
-
-  // Move the data to the device:
-  LinearAlgebra::CUDAWrappers::Vector<double> vector_dev(size);
-  vector_dev.import_elements(rw_vector, VectorOperations::insert);
-
-  ...do some computations on the device...
-
-  // Move the data back to the host:
-  rw_vector.import_elements(vector_dev, VectorOperations::insert);
-@endcode
-Both of the vector classes used here only work on a single machine,
-i.e., one memory space on a host and one on a device.
-
-But there are cases where one wants to run a parallel computation
-between multiple MPI processes on a number of machines, each of which
-is equipped with GPUs. In that case, one wants to use
-`LinearAlgebra::distributed::Vector<Number, MemorySpace::CUDA>`,
-which is similar but the `import()` stage may involve MPI communication:
+To copy a vector to/from the device, you can use `import_elements()`, which
+may also involve MPI communication:
 @code
   IndexSet locally_owned_dofs, locally_relevant_dofs;
   ...fill the two IndexSet objects...
@@ -111,7 +89,7 @@ which is similar but the `import()` stage may involve MPI communication:
   ...do something with the rw_vector...
 
   // Move the data to the device:
-  LinearAlgebra::distributed::Vector<double, MemorySpace::CUDA>
+  LinearAlgebra::distributed::Vector<double, MemorySpace::Default>
     distributed_vector_dev(locally_owned_dofs, MPI_COMM_WORLD);
   distributed_vector_dev.import_elements(owned_rw_vector,
                                          VectorOperations::insert);
diff --git a/include/deal.II/lac/cuda_vector.h b/include/deal.II/lac/cuda_vector.h
index ee2677d793..820bc53020 100644
--- a/include/deal.II/lac/cuda_vector.h
+++ b/include/deal.II/lac/cuda_vector.h
@@ -48,6 +48,24 @@ namespace LinearAlgebra
 *
 * @note Only float and double are supported.
 *

+ * <h3>Moving data</h3>

+ * You can move data to/from the device as follows:
+ * @code
+ * unsigned int size = 10;
+ * LinearAlgebra::ReadWriteVector<double> rw_vector(size);
+ *
+ * ...do something with the rw_vector...
+ *
+ * // Move the data to the device:
+ * LinearAlgebra::CUDAWrappers::Vector<double> vector_dev(size);
+ * vector_dev.import_elements(rw_vector, VectorOperations::insert);
+ *
+ * ...do some computations on the device...
+ *
+ * // Move the data back to the host:
+ * rw_vector.import_elements(vector_dev, VectorOperations::insert);
+ * @endcode
+ *
 * @see CUDAWrappers
 * @ingroup Vectors
 */
--
2.39.5
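
A note on the "few keywords" remark in the intro.dox hunk above: in practice this refers to Kokkos constructs such as KOKKOS_LAMBDA and Kokkos::parallel_for, which mark code for compilation and execution on the device. The following is a minimal, self-contained Kokkos sketch given purely for illustration; it is not taken from step-64 (which accesses Kokkos through the deal.II matrix-free framework rather than directly), and the view name, kernel label, and loop size are arbitrary.
@code
#include <Kokkos_Core.hpp>

int main(int argc, char *argv[])
{
  Kokkos::initialize(argc, argv);
  {
    // An array of 100 doubles allocated in the default memory space
    // (device memory when a GPU backend is enabled, host memory otherwise):
    Kokkos::View<double *> values("values", 100);

    // KOKKOS_LAMBDA marks the loop body so that it can be compiled for,
    // and executed on, the device:
    Kokkos::parallel_for(
      "fill_values", values.extent(0), KOKKOS_LAMBDA(const int i) {
        values(i) = 2.0 * i;
      });

    // Wait for the kernel to finish before the View goes out of scope:
    Kokkos::fence();
  }
  Kokkos::finalize();
  return 0;
}
@endcode
When Kokkos is configured with a CUDA or ROCm backend, the same source is compiled into a GPU kernel; otherwise it runs on the host, which is what makes this style of code performance portable.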
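
The intro.dox hunk above only shows the host-to-device direction for LinearAlgebra::distributed::Vector. As a sketch of the full round trip, reusing the object names from the patch's own snippet (locally_owned_dofs, owned_rw_vector, distributed_vector_dev) and assuming the rest of the step-64 setup is in place, the copy back to the host uses import_elements() in the opposite direction:
@code
  // Host-side buffer for the locally owned entries:
  LinearAlgebra::ReadWriteVector<double> owned_rw_vector(locally_owned_dofs);

  // Vector whose data lives in MemorySpace::Default, i.e., on the device:
  LinearAlgebra::distributed::Vector<double, MemorySpace::Default>
    distributed_vector_dev(locally_owned_dofs, MPI_COMM_WORLD);

  // Host -> device:
  distributed_vector_dev.import_elements(owned_rw_vector,
                                         VectorOperations::insert);

  ...do some computations on the device...

  // Device -> host:
  owned_rw_vector.import_elements(distributed_vector_dev,
                                  VectorOperations::insert);
@endcode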