author = {R. M. Kynch and P. D. Ledger},
title = {Resolving the sign conflict problem for hp{\textendash}hexahedral {N}{\'{e}}d{\'{e}}lec elements with application to eddy current problems},
journal = {Computers {\&} Structures}
-}
\ No newline at end of file
+}
+
+@inproceedings{ljungkvist2017,
+ title={{M}atrix-{F}ree {F}inite-{E}lement {C}omputations on {G}raphics
+ {P}rocessors with {A}daptively {R}efined {U}nstructured {M}eshes},
+ author={Karl Ljungkvist},
+ booktitle={Proceedings of the 25th High Performance Computing Symposium,
+ SpringSim-HPC},
+ year={2017},
+ address={Virginia Beach, VA, USA},
+ month={April 23-26},
+}
\subsection{GPU support via CUDA}
\label{subsec:gpu}
-The GPU support was significantly extended for the current release. On the one hand, the MPI-parallel vector class
-\texttt{LinearAlgebra::distributed::Vector} has gained a second template
-argument \texttt{MemorySpace} which can either be \texttt{Host} or
-\texttt{CUDA}. In the latter case, the data resides in the GPU memory. Also,
-the ghost exchange can be performed completely via CUDA in case a compatible
-MPI library is found (as is usual on large GPU-based supercomputers).
+The GPU support was significantly extended for the current release:
+\begin{itemize}
+ \item \texttt{LinearAlgebra::distributed::Vector}: the MPI-parallel vector
+ class has gained a second template argument \texttt{MemorySpace} which can
+ either be \texttt{Host} or \texttt{CUDA}. In the latter case, the data
+ resides in the GPU memory. By default, the template parameter is
+ \texttt{Host} and the behavior is unchanged compared to previous versions.
+ When using CUDA, the ghost exchange can be performed either by first
+ copying the relevant data to the host, performing the MPI communication
+ there, and finally moving the data back to the device, or, if a CUDA-aware
+ MPI implementation is available, by performing the communication directly
+ from GPU to GPU (a minimal setup sketch is shown after this list).
+ \item Constrained degrees of freedom: the matrix-free framework now
+ supports constrained degrees of freedom. The implementation is based on
+ the algorithm described in \cite{ljungkvist2017}. With this addition, the
+ user can not only impose Dirichlet boundary conditions but also use the
+ matrix-free framework on adaptively refined meshes (see the second sketch
+ after this list). The only restriction is that, for two-dimensional
+ meshes, the polynomial degree of the finite elements must be odd. There is
+ no such restriction in three dimensions.
+ \item MPI matrix-free: using \texttt{LinearAlgebra::distributed::Vector},
+ the matrix-free framework can scale to multiple GPUs by taking advantage
+ of MPI. Each MPI process can only use one GPU and, therefore, if multiple
+ GPUs are available in one node, it is necessary to have as many ranks as
+ there are GPUs (a possible rank-to-device mapping is sketched after this
+ list). Using the Nvidia Multi-Process Service (MPS), it is also possible
+ for multiple processes to use the same GPU. This can be advantageous if
+ the amount of work on one rank is not sufficient to fully utilize a GPU.
+\end{itemize}
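+
+A minimal sketch of how a device-resident vector might be set up is shown
+below; the helper function name and the assumption that the
+\texttt{reinit()} interface of the host vector carries over unchanged to
+the \texttt{CUDA} memory space are illustrations rather than verbatim
+library documentation:
+\begin{verbatim}
+#include <deal.II/base/index_set.h>
+#include <deal.II/lac/la_parallel_vector.h>
+
+using namespace dealii;
+
+// Hypothetical helper: initialize a device-resident parallel vector from
+// the locally owned and ghost index sets of a DoFHandler. The second
+// template argument MemorySpace::CUDA places the data in GPU memory;
+// the default MemorySpace::Host keeps the behavior of previous releases.
+void setup_device_vector(
+  LinearAlgebra::distributed::Vector<double, MemorySpace::CUDA> &vec,
+  const IndexSet &locally_owned_dofs,
+  const IndexSet &locally_relevant_dofs,
+  const MPI_Comm  mpi_communicator)
+{
+  // Assumed to mirror the reinit() interface of the host vector.
+  vec.reinit(locally_owned_dofs, locally_relevant_dofs, mpi_communicator);
+
+  // Ghost exchange: staged through host buffers, or performed directly
+  // from GPU to GPU if a CUDA-aware MPI implementation is available.
+  vec.update_ghost_values();
+}
+\end{verbatim}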
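+
+The constraint handling can be sketched as follows; the structure follows
+the \texttt{step-64} tutorial, but the exact \texttt{reinit()} signature of
+\texttt{CUDAWrappers::MatrixFree} shown here should be read as an
+assumption rather than as authoritative documentation:
+\begin{verbatim}
+#include <deal.II/base/function.h>
+#include <deal.II/base/quadrature_lib.h>
+#include <deal.II/dofs/dof_handler.h>
+#include <deal.II/dofs/dof_tools.h>
+#include <deal.II/fe/mapping_q_generic.h>
+#include <deal.II/lac/affine_constraints.h>
+#include <deal.II/matrix_free/cuda_matrix_free.h>
+#include <deal.II/numerics/vector_tools.h>
+
+using namespace dealii;
+
+// Hypothetical setup routine for the GPU matrix-free framework on an
+// adaptively refined mesh with homogeneous Dirichlet boundary conditions.
+template <int dim>
+void setup_matrix_free(const DoFHandler<dim> &dof_handler,
+                       const unsigned int     fe_degree,
+                       CUDAWrappers::MatrixFree<dim, double> &mf_data)
+{
+  // Collect hanging-node and Dirichlet constraints as usual ...
+  AffineConstraints<double> constraints;
+  DoFTools::make_hanging_node_constraints(dof_handler, constraints);
+  VectorTools::interpolate_boundary_values(
+    dof_handler, 0, Functions::ZeroFunction<dim>(), constraints);
+  constraints.close();
+
+  // ... and hand them to the GPU matrix-free object, which resolves them
+  // on the device following the approach of Ljungkvist cited above.
+  const MappingQGeneric<dim> mapping(fe_degree);
+  typename CUDAWrappers::MatrixFree<dim, double>::AdditionalData data;
+  mf_data.reinit(mapping,
+                 dof_handler,
+                 constraints,
+                 QGauss<1>(fe_degree + 1),
+                 data);
+}
+\end{verbatim}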
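+
+Finally, a common way to realize the one-GPU-per-rank mapping is to bind
+each node-local rank to a device at startup. The sketch below uses only the
+plain MPI and CUDA runtime APIs and is a generic illustration, not a
+\dealii facility:
+\begin{verbatim}
+#include <cuda_runtime.h>
+#include <mpi.h>
+
+// Bind each MPI rank to one GPU of its node (round-robin over the
+// node-local rank). When Nvidia MPS is running, several ranks may
+// share a device instead.
+void bind_rank_to_gpu(const MPI_Comm comm)
+{
+  // Determine the rank within the node via a shared-memory communicator.
+  MPI_Comm node_comm;
+  MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, 0,
+                      MPI_INFO_NULL, &node_comm);
+  int local_rank = 0;
+  MPI_Comm_rank(node_comm, &local_rank);
+  MPI_Comm_free(&node_comm);
+
+  // Select the device this rank will use for all subsequent CUDA calls.
+  int n_devices = 0;
+  cudaGetDeviceCount(&n_devices);
+  cudaSetDevice(local_rank % n_devices);
+}
+\end{verbatim}
+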
Furthermore, a new tutorial program \texttt{step-64} has been added that demonstrates the usage of matrix-free methods on Nvidia GPUs. GPUs are advantageous for these kinds of operations because of their superior hardware characteristics, in particular a higher memory bandwidth than server CPUs within a given power envelope. The matrix-free GPU components integrated in \dealii have been analyzed against CPUs in \cite{KronbichlerLjungkvist2019}, where the application to geometric multigrid solvers is discussed.