\usepackage{graphicx}
\usepackage{xspace}
+%\renewcommand{\baselinestretch}{2.0}
+
\usepackage[normalem]{ulem}
+ \usepackage{todonotes}
+
\pgfplotsset{compat=1.9}
\newcommand{\specialword}[1]{\texttt{#1}}
\href{https://dealii.org/developer/doxygen/deal.II/changes_between_9_0_1_and_9_1_0.html}{
in the file that lists all changes for this release}, see \cite{changes91}.
- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
- \subsection{bla1}
- \label{subsec:bla1}
-
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\subsection{Improved large-scale performance}
+\label{subsec:performance}
+
+Large-scale simulations with 304,128 cores have revealed bottlenecks in release
+9.1 during initialization, caused by expensive collective operations
+like \texttt{MPI\_Allgather()} and \texttt{MPI\_\allowbreak Alltoall()}. One example is the
+pre-computation of the index ranges of all processes, which were stored in an array
+and are needed to set up
+the \texttt{Utilities::MPI::Par\-ti\-ti\-oner} class.
+In release 9.2, we have removed such arrays and have replaced
+the \texttt{MPI\_Allgather}/\allowbreak\texttt{MPI\_\allowbreak Alltoall}
+function calls with consensus algorithms~\cite{hoefler2010scalable}, which can be
+found in the namespace \texttt{Utilities::\allowbreak MPI::\allowbreak ConsensusAlgorithms}
+(short: \texttt{CA}): now, only the locally relevant information about the index
+ranges is (re)computed when needed, using these algorithms.
+We provide two flavors of the consensus algorithm: the two-step
+approach \texttt{CA::PEX} and \texttt{CA::NBX},
+which uses only point-to-point communications and a single \texttt{MPI\_Ibarrier()}.
+
+%Consensus algorithms are algorithms dedicated to efficient dynamic-sparse
+%communication patterns. In this context, the term ``dynamic-sparse'' means
+%that by the time this function is called, the other processes do not know
+%yet that they have to answer requests and
+%each process only has to communicate with a small subset of processes of the
+%MPI communicator. We provide two flavors of the consensus algorithm: the two-step
+%approach \texttt{ConsensusAlgorithms::PEX} and the \texttt{ConsensusAlgorithms::NBX},
+%which uses only point-to-point communications and a single \texttt{MPI\_IBarrier()}.
+%The class \texttt{ConsensusAlgorithms::Selector} selects one of the two previous
+%algorithms, depending on the number of processes.
+
+Users can apply the new algorithms to their own dynamic-sparse problems by
+providing a list of target
+processes and pack/unpack routines, either by implementing the interface
+\texttt{CA::\allowbreak Process} or by passing \texttt{std::function}
+objects to \texttt{CA::AnonymousProcess}.
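+
+The following sketch illustrates the second route; the header name and the
+\texttt{std::function} signatures of the \texttt{CA::AnonymousProcess}
+constructor are simplified here, the payload type \texttt{unsigned int} is
+chosen only for illustration, and the three helper functions are hypothetical,
+so we refer to the class documentation for the authoritative interface:
+\begin{verbatim}
+#include <deal.II/base/mpi_consensus_algorithms.h>
+
+using namespace dealii::Utilities::MPI;
+
+const MPI_Comm comm = MPI_COMM_WORLD;
+
+// Ranks this process wants to send a request to (dynamic-sparse:
+// the receivers do not know beforehand that they will be asked).
+std::vector<unsigned int> targets = compute_my_targets();
+
+ConsensusAlgorithms::AnonymousProcess<unsigned int, unsigned int> process(
+  [&]() { return targets; },
+  // pack a request for the given rank:
+  [&](const unsigned int other_rank, std::vector<unsigned int> &send_buffer) {
+    send_buffer = create_request_for(other_rank);
+  },
+  // unpack a request and pack the answer:
+  [&](const unsigned int other_rank,
+      const std::vector<unsigned int> &request,
+      std::vector<unsigned int> &answer) {
+    answer = answer_request_of(other_rank, request);
+  });
+
+// Selector picks CA::NBX or CA::PEX depending on the number of processes.
+ConsensusAlgorithms::Selector<unsigned int, unsigned int>(process, comm).run();
+\end{verbatim}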
+
+By replacing the collective communications during setup and removing the arrays
+that contain information for each process (enabled by the application of consensus
+algorithms and other modifications; a full list of the changes leading to this
+improvement can be found online), we were able to significantly improve the setup
+time for large-scale simulations and to solve a Poisson problem with multigrid
+with 12 trillion unknowns.
+Figure~\ref{} compares the timings of a simulation (including setup) with the
+previous release 9.1 and with the current release 9.2.
+{\color{red}TODO[Peter/Martin] description of the results}
+
+The new code has also been applied to solve problems with adaptively refined
+meshes with more than 4B unknowns. {\color{red}TODO[Timo]}
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\subsection{A new fully distributed triangulation class}
+\label{subsec:pft}
+
+Up to release 9.1, all triangulation classes of \texttt{deal.II} have in common that the coarse grid is shared by
+all processes and that the actual mesh used for computations is constructed by repeated
+refinement. However, this approach has its limitations in industrial applications where, often, the mesh comes
+from an external mesh generator in the form of a file that already contains millions
+or tens of millions of cells. For such configurations, each MPI process reading the
+complete mesh might already exhaust the available main memory.
+
+The new class \texttt{parallel::fullydistributed::Triangulation} targets this issue
+by also distributing the coarse grid. Such
+a triangulation can be created by providing to each process a \texttt{Triangulation\-De\-scrip\-tion::Description} struct, containing
+1) the relevant data to construct the local part of the coarse grid, 2) the
+translation of the local coarse-cell IDs to globally unique IDs, 3) the hierarchy
+of mesh refinements, and 4) the owner of the cells on the active mesh level as well
+as on the multigrid levels. Once the triangulation has been set up with this struct,
+no adaptive changes to the mesh are allowed at the moment.
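+
+A minimal sketch of this workflow, assuming a serial mesh that still fits into
+memory and has been partitioned with \texttt{GridTools::partition\_triangulation()};
+the 3D template parameter and the mesh setup are only for illustration:
+\begin{verbatim}
+#include <deal.II/base/mpi.h>
+#include <deal.II/distributed/fully_distributed_tria.h>
+#include <deal.II/grid/grid_tools.h>
+#include <deal.II/grid/tria.h>
+
+using namespace dealii;
+
+const MPI_Comm comm = MPI_COMM_WORLD;
+
+Triangulation<3> base;
+// ... fill `base`, e.g., via GridIn, then partition it:
+GridTools::partition_triangulation(
+  Utilities::MPI::n_mpi_processes(comm), base);
+
+// Convert the partitioned mesh into per-process description structs ...
+const auto description = TriangulationDescription::Utilities::
+  create_description_from_triangulation(base, comm);
+
+// ... and build the fully distributed triangulation from them:
+parallel::fullydistributed::Triangulation<3> tria(comm);
+tria.create_triangulation(description);
+\end{verbatim}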
+
+%The \texttt{TriangulationDescription::Description} struct can be filled manually or
+%by the utility functions from the \texttt{TriangulationDescription::Utilities}
+%namespace. The function \texttt{create\_\allowbreak description\_\allowbreak
+%from\_ \allowbreak triangulation()} can convert a base triangulation (partitioned
+%serial \texttt{Tri\-angulation} and \texttt{parallel::distributed::Triangulation})
+%to such a struct. The advantage of this approach is that all known utility
+%functions from the namespaces \texttt{GridIn} and \texttt{GridTools} can be used
+%on the base triangulations before converting them to the structs. Since this
+%function suffers from the same main memory problems as described above, we also
+%provide the function \texttt{create\_description\_from\_triangulation\_in\_groups()},
+%which creates the structs only on the master process in a process group. These
+%structs are filled one by one and are sent to the relevant processes once they are
+%ready. A sensible process group size might contain all processes of one compute node.
+
+
+In contrast to \texttt{parallel::distributed::\allowbreak Tri\-an\-gu\-la\-tion},
+the new fully distributed triangulation class works not only for 2D and 3D
+problems, but also for 1D problems. It can be used in the context of geometric
+multigrid methods, and it supports periodic boundary conditions as well as
+hanging nodes.
+
+%We intend to extend the usability of the new triangulation class in regard of
+%different aspects, e.g., I/O. In addition, we would like to enable adaptive mesh
+%refinement, a feature of the other triangulation classes, which is very much
+%appreciated by many users. For repartitioning, we plan to use an oracle approach
+%known from \texttt{parallel::distributed::Triangulation}. Here, we would like to
+%rely on a user-provided partitioner, which might also be a graph partitioner.
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\subsection{Advances in the SIMD capabilities and the matrix-free infrastructure}
+\label{subsec:mf}
+
+%\begin{itemize}
+%\item ECL
+%\item VectorizedArrayType
+%\end{itemize}
+
+The class \texttt{VectorizedArray<Number>} is a key ingredient for the high
+node-level performance of the matrix-free algorithms in deal.II~\cite{KronbichlerKormann2012, KronbichlerKormann2019}. It is a wrapper
+class around $n$ vector entries of type \texttt{Number} and delegates relevant
+function calls to appropriate hardware intrinsics. Up to release 9.1, the
+vector length $n$ was set at compile time of the library to the highest
+value supported by the given processor architecture.
+
+The class \texttt{VectorizedArray} has been made more user-friendly by making
+it compatible with the STL algorithms found in the header \texttt{<algorithm>}.
+The length of the vector can now be queried by \texttt{VectorizedArray::size()} and its underlying number type by \texttt{VectorizedArray::value\_type}.
+Furthermore, the \texttt{VectorizedArray} class supports range-based iteration over its entries and works, among others, with the following
+algorithms: \texttt{std::\allowbreak ad\-vance()}, \texttt{std::distance()}, and \texttt{std::max\_element()}.
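+
+A short sketch of these additions (the concrete lane values are, of course,
+hardware-dependent):
+\begin{verbatim}
+#include <deal.II/base/vectorization.h>
+#include <algorithm>
+
+using namespace dealii;
+
+VectorizedArray<double> v;
+for (unsigned int i = 0; i < VectorizedArray<double>::size(); ++i)
+  v[i] = i; // fill the lanes individually
+
+double sum = 0.;
+for (const double lane : v) // range-based iteration over the lanes
+  sum += lane;
+
+const double max_lane = *std::max_element(v.begin(), v.end());
+\end{verbatim}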
+
+The class has also been extended with a second, optional template argument,
+\texttt{VectorizedArray<Number, size>}, where \texttt{size} determines the
+vector length, i.e., the number of lanes and thereby the instruction-set
+extension to be used. By default, \texttt{size} is set to the highest value
+supported by the given hardware. A full list of supported
+vector lengths is presented in Table~\ref{tab:vectorizedarray}.
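+
+For example (the lane counts correspond to the rows of
+Table~\ref{tab:vectorizedarray}):
+\begin{verbatim}
+VectorizedArray<double>    a; // default: widest supported, e.g., 8 lanes
+VectorizedArray<double, 2> b; // explicitly request 2 lanes (SSE2)
+VectorizedArray<float, 16> c; // 16 float lanes (requires AVX-512)
+\end{verbatim}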
+
+All matrix-free related classes (like \texttt{MatrixFree} and \texttt{FEEvaluation})
+have always been templated with the floating-point number type \texttt{Number}
+(e.g., \texttt{double} or \texttt{float}); the computations were performed implicitly
+on \texttt{VectorizedArray<Number>} structs with the highest available
+instruction-set-architecture extension, with each lane responsible for a separate
+cell (vectorization over elements). In release 9.2, all matrix-free classes
+have been extended with a new optional template argument specifying the
+\texttt{VectorizedArrayType}. This allows users to select the vector length/ISA and,
+as a consequence, the number of cells to be processed at once, directly in their
+applications. For example, the deal.II-based
+library \texttt{hyper.deal}~\cite{munch2020hyperdeal}, which solves the 6D Vlasov--Poisson equation with high-order
+discontinuous Galerkin methods (with more than 1024 degrees of freedom per cell), works with \texttt{MatrixFree} objects of different SIMD-vector
+length in the same application and benefits, in terms of performance, from the possibility of decreasing the number of cells processed by a single SIMD instruction.
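+
+A sketch of selecting a narrower SIMD width than the default; the dimension and
+polynomial degree are illustrative values, and the \texttt{reinit()} call with
+the usual \texttt{DoFHandler}, constraints, and quadrature arguments is omitted:
+\begin{verbatim}
+constexpr int dim = 3, fe_degree = 2; // illustrative values
+
+// Process two cells per SIMD instruction in double precision
+// (SSE2 width) instead of the widest extension available:
+using VectorizedArrayType = VectorizedArray<double, 2>;
+
+MatrixFree<dim, double, VectorizedArrayType> matrix_free;
+// matrix_free.reinit(dof_handler, constraints, quadrature, additional_data);
+
+FEEvaluation<dim, fe_degree, fe_degree + 1, 1, double, VectorizedArrayType>
+  phi(matrix_free);
+\end{verbatim}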
+
+\begin{table}
+\caption{Supported vector lengths of the class \texttt{VectorizedArray} and
+the corresponding instruction-set-architecture extensions.}\label{tab:vectorizedarray}
+\centering
+\begin{tabular}{ccc}
+\toprule
+\textbf{double} & \textbf{float} & \textbf{ISA}\\
+\midrule
+\texttt{VectorizedArray<double, 1>} & \texttt{VectorizedArray<float, 1>} & (auto-vectorization) \\
+\texttt{VectorizedArray<double, 2>} & \texttt{VectorizedArray<float, 4>} & SSE2 \\
+\texttt{VectorizedArray<double, 4>} & \texttt{VectorizedArray<float, 8>} & AVX/AVX2 \\
+\texttt{VectorizedArray<double, 8>} & \texttt{VectorizedArray<float, 16>} & AVX-512 \\
+\bottomrule
+\end{tabular}
+
+\caption{Comparison of relevant SIMD-related classes in deal.II and C++20.}\label{tab:simd}
+\centering
+\begin{tabular}{cc}
+\toprule
+\textbf{VectorizedArray (deal.II)} & \textbf{std::simd (C++20)} \\
+\midrule
+\texttt{VectorizedArray<Number>} & \texttt{std::experimental::native\_simd<Number>} \\
+\texttt{VectorizedArray<Number, size>} & \texttt{std::experimental::fixed\_size\_simd<Number, size>} \\ \bottomrule
+\end{tabular}
+\end{table}
+
+A side effect of introducing the new template argument \texttt{VectorizedArrayType}
+in the \texttt{MatrixFree} classes is that any data structure can be used as
+\texttt{VectorizedArrayType}, provided it supports the required
+functionality like \texttt{size()} and \texttt{value\_type}. In this context, we
+would like to highlight that the new \texttt{C++20} feature \texttt{std::simd}
+can be processed by the matrix-free infrastructure with minor internal
+adjustments, as an open pull request shows
+(see \url{https://github.com/dealii/dealii/pull/9994}).
+Table~\ref{tab:simd} compares the deal.II-specific SIMD classes with
+the equivalent C++20 classes.
+%We welcome the standardization of the SIMD
+%parallelization paradigm in C++ and intend to replace step by step our own
+%wrapper class, which has been continuously developed over the last decade. We
+%would like to emphasize that the work invested in this class was not in vain,
+%since many performance-relevant utility functions implemented with \texttt{VectorizedArray} in mind (e.g., \texttt{vectorized\_load\_and\_transpose}
+%and \texttt{vectorized\_transpose\_and\_store}) will be still used, since they
+%have not become part of the standard.
+
+%Further additions to the \texttt{MatrixFree} infrastructure consist of:
+%\begin{itemize}
+%\item a new variant of \texttt{MatrixFree::cell\_loop()}: It takes two
+%\texttt{std::function} objects with ranges on the locally owned degrees of freedom, one
+%with work to be scheduled before the cell operation first touches some
+%unknowns and another with work to be executed after the cell operation last
+%touches them. The goal of
+%these functions is to bring vector operations close to the time when the
+%vector entries are accessed by the cell operation, which increases the cache
+%hit rate of modern processors by improved temporal locality.
+%\item a new form of loop \texttt{MatrixFree::loop\_cell\_centric()}: This
+%kind of loop can be used in the context of discontinuous Galerkin methods,
+%where both cell and face integrals have to be evaluated. While in the case of
+%the traditional \texttt{loop}, cell and face integrals have been performed
+%independently, the new loop performs all cell and face integrals of a cell in
+%one go. This includes that each face integral has to be evaluated twice, but
+%entries have to be written into the solution vector only once with improved
+%data locality. Previous publications based on \texttt{deal.II} have shown the
+%relevance of the latter aspect for reaching higher performance.
+%\end{itemize}
+
+As a next step, we intend to allow users to set
+\texttt{VectorizedArrayType} to plain \texttt{Number}, which would not fall back to
+\texttt{VectorizedArray<Number, 1>} but instead trigger a specialized code path
+exploiting vectorization within an element~\cite{KronbichlerKormann2019}.
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\subsection{Advances in GPU support}
+\label{subsec:gpu}
+
+\begin{itemize}
+\item overlapping of computation and communication in the case of CUDA-aware MPI
+\end{itemize}
+
+
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{New and improved tutorial and code gallery programs}
\label{subsec:steps}
- Many of the \dealii{} tutorial programs were substantially revised as
- part of this release. In particular, we have converted many places
- that now allow for simpler code through the use of C++11 features such
- as range-based for loops and lambda functions.
-
- In addition, there are seven new tutorial programs:
+ Many of the \dealii{} tutorial programs were revised in a variety of
+ ways as part of this release. A particular example is that we have
+ converted a number of programs to use range-based for loops (a C++11
+ feature) for iterating over ranges of integer indices, such as loops over
+ all quadrature points or all indices of degrees of freedom during
+ assembly. This makes sense given that
+ range-based loops have become the idiomatic approach
+ and that we had previously already converted loops over
+ all cells in this way.
+
+ In addition, there are a number of new tutorial programs:
\begin{itemize}
\item \texttt{step-47}
+ \todo[inline]{Zhuoran to write}
\item \texttt{step-50}
+ \todo[inline]{Timo/Conrad/... to write}
\item \texttt{step-58}
-\item \texttt{step-65}
-\todo[inline]{Martin to write}
-\item \texttt{step-67}
-\todo[inline]{Martin to write}
+ \todo[inline]{Wolfgang to write}
+\item \texttt{step-65} presents \texttt{TransfiniteInterpolationManifold}, a
+manifold class that can propagate curved boundary information into the
+interior of a computational domain, and \texttt{MappingQCache}, which can sample
+the information of expensive manifolds at the points of a \texttt{MappingQ} and
+cache it for further use.
+\item \texttt{step-67} presents an explicit time integrator for the
+compressible Euler equations discretized with a high-order discontinuous
+Galerkin scheme using the matrix-free infrastructure. Besides the use of
+matrix-free evaluators for systems of equations and over-integration, it also
+presents \texttt{MatrixFreeOperators::CellwiseInverseMassMatrix}, a fast implementation
+of the action of the inverse mass matrix in the DG setting using tensor
+products. Furthermore, this tutorial demonstrates the usage of the new
+pre- and post-operations that can be passed to \texttt{cell\_loop()} to schedule
+operations on sections of vectors close
+to the matrix-vector product to increase data locality (see the sketch after
+this list), and it discusses performance-related aspects.
\item \texttt{step-69}
+ \todo[inline]{Matthias/Ignacio to write}
\item \texttt{step-70}
+ \todo[inline]{Also need to update announce and announce-short if this
+ makes it into the release.}
\end{itemize}
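+
+The pre- and post-operations mentioned for \texttt{step-67} take the form of two
+\texttt{std::function} objects operating on ranges of locally owned vector
+entries. The following sketch illustrates the idea; the operator class, its
+\texttt{local\_apply()} member, and the vector updates are placeholders:
+\begin{verbatim}
+matrix_free.cell_loop(
+  &Operator::local_apply, this, dst, src,
+  // executed on each index range [first, last) of locally owned
+  // entries right before the cell operation first touches them:
+  [&](const unsigned int first, const unsigned int last) {
+    for (unsigned int i = first; i < last; ++i)
+      dst.local_element(i) = 0.;
+  },
+  // executed right after the cell operation last touches them:
+  [&](const unsigned int first, const unsigned int last) {
+    for (unsigned int i = first; i < last; ++i)
+      dst.local_element(i) += factor * src.local_element(i);
+  });
+\end{verbatim}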
+ \todo[inline]{Do we have new code gallery programs?}
+
+ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+ \subsection{Better support for parallel $hp$-adaptive algorithms}
+ \label{subsec:hp}
+
+ \todo[inline]{Marc: Your section}
+
+
+ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+ \subsection{Support for particle-based methods}
+ \label{subsec:particles}
+
+ \todo[inline]{Luca: Your section}
+
+
+ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+ \subsection{Python interfaces}
+ \label{subsec:python}
+
+ \todo[inline]{What's new here? mention step-49 and step-53 versions written in python.}
+
+
+
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Incompatible changes}