Section~\ref{subsec:ad}),
\item Dedicated support for symbolic algebra (see
Section~\ref{subsec:sd}),
\item Full support for $hp$~adaptivity in parallel computations (see
Section~\ref{subsec:hp}),
\item Interfaces to the HDF5 file format and libraries (see
Section~\ref{subsec:hdf5}),
compiling expressions using the LLVM JIT compiler.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Full support for $hp$~adaptivity in parallel computations}
\label{subsec:hp}
\dealii{} has had support for $hp$~adaptive methods since around 2005
(documented in \cite{BangerthKayserHerold2007}) and for parallel
computations on distributed meshes since around 2010 (see
\cite{BangerthBursteddeHeisterKronbichler11}), but not for both at the
same time. Combining the two required addressing difficulties in the following
areas:
\begin{enumerate}
\item Data structures: The data structures necessary to store the
  indices of degrees of freedom are substantially more
  complicated for $hp$~algorithms than for the $h$~adaptive schemes
that were already implemented. This is because the number of degrees
of freedom per cell is now no longer constant. Furthermore,
  faces and edges may need to store more than one set of indices if
  the adjacent cells use different elements; one conceivable storage
  scheme is illustrated in a code sketch following this list. This is
  particularly challenging in parallel computations, where some of these
  faces and edges lie at the boundary between processors (adjacent to ghost
  cells) and the data structures stored on different
  processors have to be reconciled.
\item Algorithms: Already for $h$~adaptive meshes, enumerating all
  degrees of freedom uniquely on the global mesh is difficult, as evidenced by
  the complexity of the algorithm shown in Section 3.1 of
  \cite{BangerthBursteddeHeisterKronbichler11}, whose description requires more
  than one page of text and which is implemented in many hundreds of lines of
  code.
  These difficulties are even more pronounced when using $hp$~adaptivity.
  The main obstacle is the desire to unify the indices of
  matching degrees of freedom on adjacent cells whenever elements with
  continuous polynomials are used. For example, the edge degree of freedom
  of a $Q_2$~element has to be merged with the middle one of the three
  edge degrees of freedom of a $Q_4$~element on a neighboring cell.
  Section 4.2 of \cite{BangerthKayserHerold2007} discusses a sequential algorithm
that eliminates one of these degrees of freedom in favor of another,
but it introduces a ``master'' and a ``slave'' side of the
  interface. This is of no major consequence in sequential
  computations, but requires care in parallel computations, where the two
  sides of an interface may be owned by different processors and the
  designation has to be made consistently between them.
\item Data transfer patterns: An important algorithm in parallel
finite element methods is the exchange of information stored on
cells during mesh repartitioning. This happens, for example, when
  interpolating the solution from one mesh to the next
  adaptively refined mesh; or when adapting the polynomial degrees
associated with each cell and repartitioning in order to
balance the computational cost of each processor's partition. When
  using $h$~adaptive methods, the amount of data associated with each
cell is fixed and the algorithms that implement the data transfer
  are consequently relatively simple. On the other hand, in $hp$~contexts,
  each cell may have a different number of unknowns
associated with it, and the algorithms that transfer the data are
  substantially more complicated. In order to implement those, we rely
  on recent enhancements of the \pfrst~library (documented in
  \cite{Burstedde2018}) to transfer data of variable size across
  processors; the kind of variable-size packing this requires is
  illustrated in a code sketch following this list. Furthermore, the
  amount of data associated with each cell may be large on cells with
  higher polynomial degrees, and might profit from compression before
  sending.

\item Balancing computational cost: For $h$~adaptive algorithms, the
amount of work associated with each cell is essentially the same,
  both during the assembly of linear systems and during the
  solver phase. For $hp$~adaptive methods, this is no longer the
case. Consequently, balancing the cost of work between different
processors' partitions is no longer as easy as ensuring that every
  processor owns a roughly equal number of cells. Rather, one needs to
  assign a weighting factor to each cell that describes its
  relative cost compared to some reference. To make things worse, the
relative cost of assembly on a cell might not match the relative
cost of the linear solver associated with this cell, leading to
  difficult trade-offs in defining optimal weighting factors. In this
  release, we supplied the basic functionality to attach arbitrary
  weighting factors to cells, but users still have to find reasonable
  weights for themselves; one possible heuristic is sketched in a code
  example following this list.
\end{enumerate}
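
To make the data structure difficulties above more concrete, the
following C++ sketch shows one conceivable way a face could store a
separate set of degree-of-freedom indices for each element adjacent to
it. It is purely illustrative: the structure, the names, and the index
values are hypothetical and do not reproduce \dealii{}'s internal
storage format.
\begin{verbatim}
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical storage for a single face of an hp discretization: the
// face may have to keep one set of global DoF indices for each finite
// element that is active on one of its adjacent cells.
struct FaceDoFIndices
{
  // key: polynomial degree of the adjacent element,
  // value: global indices of the DoFs that element places on the face
  std::map<unsigned int, std::vector<std::uint64_t>> indices_per_element;
};

int main()
{
  // A 2d face (edge) between a Q2 cell and a Q4 cell: the Q2 side
  // stores one edge-interior index, the Q4 side stores three. After
  // unification, the middle Q4 index coincides with the Q2 index
  // (the numbers are made up).
  FaceDoFIndices face;
  face.indices_per_element[2] = {17};
  face.indices_per_element[4] = {42, 17, 43};
}
\end{verbatim}
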
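
The data transfer difficulties above stem from having to move per-cell
data of varying length between processors. The following sketch
illustrates, under our own hypothetical naming and independently of the
actual \pfrst{} and \dealii{} interfaces, how such data can be packed
into a single contiguous buffer together with an offset table so that
the receiving side can recover each cell's chunk.
\begin{verbatim}
#include <cstring>
#include <vector>

// Generic illustration: concatenate per-cell data of variable size into
// one send buffer, remembering where each cell's chunk starts.
struct PackedCellData
{
  std::vector<std::size_t> offsets; // one entry per cell, plus a final one
  std::vector<char>        buffer;  // all cells' bytes, back to back
};

PackedCellData pack(const std::vector<std::vector<double>> &per_cell_values)
{
  PackedCellData packed;
  packed.offsets.push_back(0);
  for (const auto &values : per_cell_values)
    {
      const std::size_t old_size = packed.buffer.size();
      packed.buffer.resize(old_size + values.size() * sizeof(double));
      std::memcpy(packed.buffer.data() + old_size,
                  values.data(), values.size() * sizeof(double));
      packed.offsets.push_back(packed.buffer.size());
    }
  return packed;
}

// The receiver slices cell c's data out of the buffer; unlike in the
// purely h-adaptive case, the chunk length differs from cell to cell.
std::vector<double> unpack_cell(const PackedCellData &packed, std::size_t c)
{
  const std::size_t begin = packed.offsets[c];
  const std::size_t n     = (packed.offsets[c + 1] - begin) / sizeof(double);
  std::vector<double> values(n);
  std::memcpy(values.data(), packed.buffer.data() + begin,
              n * sizeof(double));
  return values;
}
\end{verbatim}
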
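
Finally, regarding the load balancing item above, the sketch below
shows one plausible, though certainly not optimal, weighting heuristic
a user might attach to cells: the cost of a cell is assumed to grow
with the square of its number of degrees of freedom, i.e., with
$(p+1)^{2d}$ for a tensor-product element of degree $p$ in $d$ space
dimensions. All names are hypothetical and the snippet does not show
\dealii{}'s actual interface for registering cell weights.
\begin{verbatim}
#include <cmath>

// Hypothetical per-cell information needed to estimate the cell's cost.
struct CellInfo
{
  unsigned int fe_degree; // polynomial degree p assigned to this cell
  unsigned int dim;       // space dimension d
};

// Illustrative weight: the number of DoFs of a tensor-product element,
// (p+1)^d, squared as a rough proxy for the cost of assembling the
// cell's local matrix. Choosing this exponent is exactly the kind of
// trade-off discussed in the text.
unsigned int cell_weight(const CellInfo &cell)
{
  const double n_dofs = std::pow(cell.fe_degree + 1.0, cell.dim);
  return static_cast<unsigned int>(n_dofs * n_dofs);
}

// A partitioner would then equilibrate the sum of these weights, rather
// than the raw number of cells, across the processors' partitions.
\end{verbatim}
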
All of these issues have been addressed in the current release and are
\begin{itemize}
\item For geometric multigrid: \cite{Kanschat2004,JanssenKanschat2011,ClevengerHeisterKanschatKronbichler2019};
\item For distributed parallel computing: \cite{BangerthBursteddeHeisterKronbichler11};
\item For $hp$~adaptivity: \cite{BangerthKayserHerold2007};
\item For partition-of-unity (PUM) and enrichment methods of the
finite element space: \cite{Davydov2016};
\item For matrix-free and fast assembly techniques:
\item nanoflann \cite{nanoflann}
\item NetCDF \cite{rew1990netcdf}
\item OpenCASCADE \cite{opencascade-web-page}
\item p4est \cite{Burstedde2018,p4est}
\item PETSc \cite{petsc-user-ref,petsc-web-page}
\item ROL \cite{ridzal2014rapid}
\item ScaLAPACK \cite{slug}