step are drastically different from those used in Step-33, which focuses on
the use of automatic differentiation. From a programming perspective this
tutorial will focus on a number of techniques found in large-scale
-computations: hybrid thread- and process- (MPI) parallelization; efficient
+computations: hybrid thread-MPI parallelization; efficient
local numbering of degrees of freedom; concurrent post-processing and
write-out of results using worker threads; as well as checkpointing and
restart.
where $\mathbb{I} \in \mathbb{R}^{d \times d}$ is the identity matrix and
$\otimes$ denotes the tensor product. Here, we have introduced the pressure
$p$ that, in general, is defined by a closed-form equation of state.
-For the tutorial we limit the discussion to the class of polytropic ideal gases
-for which the pressure is given by
+In this tutorial we limit the discussion to the class of polytropic
+ideal gases for which the pressure is given by
@f{align*}
-p = p(\textbf{u}) := (\gamma -1) \Big(E - \frac{\|\textbf{m}\|^2}{2\,\rho}
+p = p(\textbf{u}) := (\gamma -1) \Big(E -
+\tfrac{|\textbf{m}|_{\ell^2}^2}{2\,\rho}
\Big),
@f}
where the factor $\gamma \in (1,5/3]$ denotes the
<a href="https://en.wikipedia.org/wiki/Heat_capacity_ratio">ratio of
-specific heats</a>, and $\|\,.\|$ denotes the Euclidian norm.
+specific heats</a>, and $|\cdot|_{\ell^2}$ denotes the Euclidean norm.
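+
+As a concrete (and purely illustrative) example of this closed-form
+relationship, the pressure of a state $\textbf{u}=[\rho,\textbf{m},E]^{\top}$
+could be computed by a small helper function along the following lines. The
+function name and the default value $\gamma = 7/5$ are assumptions made for
+the sake of the sketch and are not part of this tutorial program:
+@code
+#include <deal.II/base/tensor.h>
+
+// Hypothetical helper: p(u) = (gamma - 1) (E - |m|^2 / (2 rho)) for a state
+// u = [rho, m, E] stored as a rank-1 Tensor with dim + 2 components.
+template <int dim>
+double compute_pressure(const dealii::Tensor<1, dim + 2> &U,
+                        const double                      gamma = 7. / 5.)
+{
+  const double rho = U[0];
+
+  double m_square = 0.;
+  for (unsigned int d = 0; d < dim; ++d)
+    m_square += U[1 + d] * U[1 + d];
+
+  const double E = U[dim + 1];
+
+  return (gamma - 1.) * (E - 0.5 * m_square / rho);
+}
+@endcode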
<h4>Solution theory</h4>
@f{align}
\mathcal{B} = \big\{ \textbf{u} =
[\rho, \textbf{m},E]^{\top} \in \mathbb{R}^{d+2} \, \big |
- \quad
- \rho > 0,
- \quad
- \ E - \tfrac{|\textbf{m}|_{\ell^2}^2}{2 \rho} > 0,
- \quad
+ \
+ \rho > 0 \, ,
+ \
+ E - \tfrac{|\textbf{m}|_{\ell^2}^2}{2 \rho} > 0 \, ,
+ \
s(\mathbf{u}) \geq \min_{x \in \Omega} s(\mathbf{u}_0(\mathbf{x}))
\big\}.
@f}
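+
+As a hedged illustration (the function below is not part of the tutorial
+program), the first two constraints defining $\mathcal{B}$ can be checked
+pointwise for a given state; verifying the minimum principle on the specific
+entropy would additionally require the value
+$\min_{x \in \Omega} s(\mathbf{u}_0(\mathbf{x}))$:
+@code
+#include <deal.II/base/tensor.h>
+
+// Check positivity of the density and of the internal energy
+// E - |m|^2 / (2 rho) for a state U = [rho, m, E].
+template <int dim>
+bool has_admissible_density_and_internal_energy(
+  const dealii::Tensor<1, dim + 2> &U)
+{
+  const double rho = U[0];
+
+  double m_square = 0.;
+  for (unsigned int d = 0; d < dim; ++d)
+    m_square += U[1 + d] * U[1 + d];
+
+  const double E = U[dim + 1];
+
+  return (rho > 0.) && (E - 0.5 * m_square / rho > 0.);
+}
+@endcode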
instance @cite GuermondErn2004 Chapter 5 and references therein). Most
time-dependent discretization approaches described in the deal.II tutorials
are based on such a (semi-discrete) variational approach. Fundamentally,
-from an analysis perspective, variational discretizations are conceived in
-order to provide some notion of global (integral) stabiliy, meaning an
+from an analysis perspective, variational discretizations are conceived
+to provide some notion of global (integral) stability, meaning an
estimate of the form
@f{align*}
shockless regime and similarly benign regimes.
However, in the transonic and supersonic regimes, and in shock-hydrodynamics
-
applications, the use of variational schemes might be questionable. In fact,
at the time of this writing, most shock-hydrodynamics codes are still
firmly grounded in finite volume methods. The main reason for failure of
\mathbb{R}^{d+2}$ and $\phi_i$ is a scalar-valued shape function.
@note For simplicity we will consider the usual Lagrange finite elements.
-In such this context we let $\{\mathbf{x}_i\}_{i \in \mathcal{V}}$ denote
-the set of all support points (see @ref GlossSupport "this glossary
-entry"), where $\mathbf{x}_i \in \mathbb{R}^d$. Then each index $i \in
+In this context, let $\{\mathbf{x}_i\}_{i \in \mathcal{V}}$ denote
+the set of all support points (see @ref GlossSupport "this glossary entry"),
+where $\mathbf{x}_i \in \mathbb{R}^d$. Then each index $i \in
\mathcal{V}$ uniquely identifies a support point $\mathbf{x}_i$, as well as
a scalar-valued shape function $\phi_i$.
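+
+As a brief, hypothetical illustration of this index-to-support-point
+correspondence (the function name is made up), the points
+$\{\mathbf{x}_i\}_{i \in \mathcal{V}}$ of a scalar Lagrange discretization
+can be collected with DoFTools::map_dofs_to_support_points():
+@code
+#include <deal.II/base/point.h>
+#include <deal.II/dofs/dof_handler.h>
+#include <deal.II/dofs/dof_tools.h>
+#include <deal.II/fe/mapping.h>
+
+#include <vector>
+
+// Collect the support point x_i associated with every degree of freedom i.
+// The DoFHandler is assumed to be initialized with a scalar FE_Q element.
+template <int dim>
+std::vector<dealii::Point<dim>>
+collect_support_points(const dealii::Mapping<dim>    &mapping,
+                       const dealii::DoFHandler<dim> &dof_handler)
+{
+  std::vector<dealii::Point<dim>> support_points(dof_handler.n_dofs());
+
+  dealii::DoFTools::map_dofs_to_support_points(mapping,
+                                               dof_handler,
+                                               support_points);
+  return support_points;
+}
+@endcode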
-With this notation at hand we can define the scheme as
+With this notation at hand we can define the scheme as:
@f{align*}
m_i \frac{\mathbf{U}_i^{n+1} - \mathbf{U}_i^{n}}{\tau}
+ \sum_{j \in \mathcal{I}(i)} \mathbb{f}(\mathbf{U}_j^{n})\cdot
- \mathbf{c}_{ij} - d_{ij} \mathbf{U}_j^{n} = \boldsymbol{0}
+ \mathbf{c}_{ij} - d_{ij} \mathbf{U}_j^{n} = \boldsymbol{0} \, ,
@f}
where
definition in order to focus first on some algorithmic and implementational
questions. We note that
- $m_i$ and $\mathbf{c}_{ij}$ do not evolve in time (provided we keep the
- discretization fixed). It thus makes sense to assemble the matrices
- once in a so called <i>offline computation</i> and reuse them in every
- time step. They are part of what we are going to call off-line data.
+ discretization fixed). It thus makes sense to assemble these
+ matrices/vectors once in a so called <i>offline computation</i> and reuse
+ them in every time step. They are part of what we are going to call
+ off-line data.
- At every time step we have to evaluate $\mathbb{f}(\mathbf{U}_j^{n})$ and
$d_{ij} := \max \{ \lambda_{\text{max}}
(\mathbf{U}_i^{n},\mathbf{U}_j^{n}, \textbf{n}_{ij}),
@f{align*}
&\textbf{for } i \in \mathcal{V} \\
-&\ \ \ \ \{\mathbf{c}_{ij}\}_{j \in \mathcal{I}(i)} \leftarrow \texttt{gather} (\textbf{c}, \mathcal{I}(i)) \\
-&\ \ \ \ \{\textbf{U}_j^n\}_{j \in \mathcal{I}(i)} \leftarrow \texttt{gather} (\textbf{U}^n, \mathcal{I}(i)) \\
+&\ \ \ \ \{\mathbf{c}_{ij}\}_{j \in \mathcal{I}(i)} \leftarrow
+\texttt{gather_cij_vectors} (\textbf{c}, \mathcal{I}(i)) \\
+&\ \ \ \ \{\textbf{U}_j^n\}_{j \in \mathcal{I}(i)} \leftarrow
+\texttt{gather_state_vectors} (\textbf{U}^n, \mathcal{I}(i)) \\
&\ \ \ \ \ \textbf{U}_i^{n+1} \leftarrow \mathbf{U}_i^{n} \\
&\ \ \ \ \textbf{for } j \in \mathcal{I}(i) \\
&\ \ \ \ \ \ \ \ \texttt{compute } d_{ij} \\
&\ \ \ \ \ \ \ \ \textbf{U}_i^{n+1} \leftarrow \textbf{U}_i^{n+1} - \frac{\tau_n}{m_i}
\big(\mathbb{f}(\mathbf{U}_j^{n})\cdot \mathbf{c}_{ij} - d_{ij} \mathbf{U}_j^{n}\big) \\
&\ \ \ \ \textbf{end} \\
-&\ \ \ \ \texttt{scatter} (\textbf{U}^n, \mathcal{I}(i), \textbf{U}_i^n)) \\
+&\ \ \ \ \texttt{scatter_updated_state} (\textbf{U}_i^{n+1}) \\
&\textbf{end}
@f}
- Here $\textbf{c}$ and $\textbf{U}^n$ are a global matrix and a global vector
containing all the vectors $\mathbf{c}_{ij}$ and all the states
$\mathbf{U}_j^n$, respectively.
-- $\texttt{gather_cij_vectors}$ and $\texttt{gather_state_vectors}$ are
-hypothetical implementations that collect (from global matrices and vectors)
-only the quantities required to compute the update at the node $i$.
+- $\texttt{gather_cij_vectors}$, $\texttt{gather_state_vectors}$, and
+$\texttt{scatter_updated_state}$ are hypothetical implementations that
+either collect the required local quantities from, or write the updated
+nodal state back into, the global matrices and vectors.
- Note that if we assume a Cartesian mesh in two space
dimensions, first-order polynomial space $\mathbb{Q}^1$, and that
$\mathbf{x}_i$ is an interior node (i.e. $\mathbf{x}_i$ is not on the boundary
nine state-vectors (i.e. all the states in the patch/macro element associated to
the shape function $\phi_i$). This is one of the major differences with the
usual cell-based loop where the gather functionality (encoded in
-FEValuesBase<dim, spacedim>.get_function_values() ) only collects values for the
-local cell (just a subset of the patch).
-
-It is worth noting that, from a practitioner's point of view
-fully-algebraic schemes (i.e. no bilinear forms, no cell-loops, and no
-quadratures) are not unusual at all in the CFD community. There is rich history
-of application of this kind of schemes, also called "edge-based" or
-"graph-based" finite element schemes (see for instance @cite Rainald2008 for
-more historical references).
-
-We note that:
-- This algorithm does not require any form of quadrature or cell-loops.
-- Here, $\textbf{c}$ and $\textbf{U}^n$ are a global matrix and a global vector
- containing all the vectors $\mathbf{c}_{ij}$ and all the states
- $\mathbf{U}_j^n$, respectively.
-- $\texttt{gather}$ and $\texttt{scatter}$ are helper functions
- that collect from, or distribute values from global vectors and matrices.
-- For an interior node $\mathbf{x}_i$ on a regular mesh in two space
- dimensions (assuming a first-order polynomial space $\mathbb{Q}^1$) the
- stencil $\mathcal{I}(i)$ contains nine entries. The update for a single
- state $\textbf{U}_i^n$ thus depends on nine state-vectors
- $\{\textbf{U}_j^n\}_{j \in \mathcal{I}(i)}$ (i.e., all the states in the
- patch formed by the support of the shape function $\phi_i$). This is one
- of the major differences compared to cell-based loop, where an update
- typically only operates on states associated with a single cell.
+FEValuesBase<dim, spacedim>::get_function_values() in the case of deal.II)
+only collects values for the local cell (just a subset of the patch).
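+
+To make the structure of such a gather concrete: the stencil $\mathcal{I}(i)$
+is exactly the set of (nonzero) column indices of row $i$ of the sparsity
+pattern, so collecting it amounts to a plain loop over that row. The
+following sketch is purely illustrative (the function name is made up, and
+this is not the tutorial's actual code path):
+@code
+#include <deal.II/base/types.h>
+#include <deal.II/lac/sparsity_pattern.h>
+
+#include <vector>
+
+// Return the stencil I(i), i.e., the column indices of row i of a given
+// sparsity pattern.
+std::vector<dealii::types::global_dof_index>
+stencil_of_row(const dealii::SparsityPattern        &sparsity,
+               const dealii::types::global_dof_index i)
+{
+  std::vector<dealii::types::global_dof_index> stencil;
+
+  for (auto it = sparsity.begin(i); it != sparsity.end(i); ++it)
+    stencil.push_back(it->column());
+
+  return stencil;
+}
+@endcode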
The actual implementation will deviate from the above pseudo-code in one key aspect:
-The time-step size $\tau$ has to be chosen subject to a CFL condition
+the time-step size $\tau$ has to be chosen subject to a CFL condition
@f{align*}
\tau_n = c_{\text{cfl}}\,\min_{
i\in\mathcal{V}}\left(\frac{m_i}{-2\,d_{ii}^{n}}\right),
// <code>lac/la_parallel_vector.h</code>. Instead of a Trilinos- or
// PETSc-specific matrix class, we will use a non-distributed
// dealii::SparseMatrix (<code>lac/sparse_matrix.h</code>) to store the local
-// part of the $c_{ij}$, $n_{ij}$ and $d_{ij}$ matrices.
+// part of the $\mathbf{c}_{ij}$, $\mathbf{n}_{ij}$ and $d_{ij}$ matrices.
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/parallel.h>
#include <deal.II/base/parameter_acceptor.h>
// and scratch data object private and make methods and data structures
// used by other classes public.
//
-// @note: A cleaner approach would be to guard access to all data
+// @note A cleaner approach would be to guard access to all data
// structures by <a
// href="https://en.wikipedia.org/wiki/Mutator_method">getter/setter
// functions</a>. For the sake of brevity, we refrain from that approach,
// though.
+//
+// We also note that the vast majority of classes are derived from
+// ParameterAcceptor. This facilitates the population of all the global
+// parameters into a single (global) ParameterHandler. More explanations
+// about the use of inheritance from ParameterAcceptor as a global
+// subscription mechanism can be found in Step-59.
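+//
+// As a hedged illustration of this mechanism (the class name and parameter
+// below are made up and not part of this program), the pattern boils down
+// to deriving from ParameterAcceptor, declaring parameters in the
+// constructor, and calling ParameterAcceptor::initialize() once at startup:
+// @code
+//   #include <deal.II/base/parameter_acceptor.h>
+//
+//   class MyParameters : public dealii::ParameterAcceptor
+//   {
+//   public:
+//     MyParameters()
+//       : ParameterAcceptor("MyParameters")
+//     {
+//       add_parameter("refinement", refinement, "Global refinement level");
+//     }
+//
+//     unsigned int refinement = 5;
+//   };
+//
+//   int main()
+//   {
+//     MyParameters parameters;
+//     dealii::ParameterAcceptor::initialize("example.prm");
+//     // parameters.refinement now holds the value read from example.prm
+//   }
+// @endcode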
namespace Step69
{
//
// The class <code>Discretization</code> contains all data structures
// concerning the mesh (triangulation) and discretization (mapping,
- // finite element, quadrature) of the problem. We use the
+ // finite element, quadrature) of the problem. As mentioned, we use the
// ParameterAcceptor class to automatically populate problem-specific
// parameters, such as the geometry information
// (<code>length</code>, etc.) or the refinement level
// constructor, and defer the creation of the mesh to the
// <code>setup()</code> method that can be called once all parameters are
// read-in via ParameterAcceptor::initialize().
- //
+
template <int dim>
class Discretization : public ParameterAcceptor
{
//
// The class <code>OfflineData</code> contains pretty much all components
// of the discretization that do not evolve in time, in particular, the
- // DoFHandler, SparsityPattern, boundary maps, the lumped mass, $c_{ij}$,
- // and $n_{ij}$ matrices.
- //
- // Here, the term <i>offline</i> refers to the fact that all the class
+ // DoFHandler, SparsityPattern, boundary maps, the lumped mass,
+ // $\mathbf{c}_{ij}$, and $\mathbf{n}_{ij}$ matrices. Here, the term
+ // <i>offline</i> refers to the fact that all the class
// members of <code>OfflineData</code> have well-defined values
// independent of the current time step. This means that they can be
// initialized ahead of time (at <i>time step zero</i>) and are not meant
private:
// We declare a private callback function that will be wired up to the
- // ParameterAcceptor::parse_parameters_call_back signal
+ // ParameterAcceptor::parse_parameters_call_back signal.
void parse_parameters_callback();
Tensor<1, dim> initial_direction;
// classes at hand we can now implement the explicit time-stepping scheme
// that was introduced in the discussion above. The main method of the
// <code>TimeStep</code> class is <code>step(vector_type &U, double
- // t)</code>. That takes a reference to a state vector <code>U</code> and
- // a time point <code>t</code> as arguments, computes the updated solution,
- // stores it in the vector <code>temp</code>, swaps its contents with the
- // vector <code>U</code>, and returns the chosen step-size $\tau$.
+ // t)</code> that takes a reference to a state vector <code>U</code> and
+ // a time point <code>t</code> as input arguments, computes the updated
+ // solution, stores it in the vector <code>temp</code>, swaps its contents
+ // with the vector <code>U</code>, and returns the chosen step-size
+ // $\tau$.
//
// The other important method is <code>prepare()</code> which primarily
// sets the proper partition and sparsity pattern for the temporary
- // vector <code>temp</code> and the matrix <code>dij_matrix</code>.
- //
+ // vector <code>temp</code> and the matrix <code>dij_matrix</code>,
+ // respectively.
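+ //
+ // As a hedged usage sketch (assuming a <code>TimeStep</code> object
+ // <code>time_step</code> and a state vector <code>U</code> maintained by
+ // the caller), a single explicit update then reads:
+ // @code
+ //   time_step.prepare();                      // once, before time stepping
+ //   double t = 0.;
+ //   const double tau = time_step.step(U, t);  // one explicit update
+ //   t += tau;
+ // @endcode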
template <int dim>
class TimeStep : public ParameterAcceptor
// The first major task at hand is the typical triplet of grid
// generation, setup of data structures, and assembly. A notable novelty
// in this example step is the use of the ParameterAcceptor class that we
- // use to populate parameter values: We first initialize the
+ // use to populate parameter values: we first initialize the
// ParameterAcceptor class by calling its constructor with a string
// <code>subsection</code> denoting the correct subsection in the
// parameter file. Then, in the constructor body every parameter value is
}
// Note that in the previous constructor we only passed the MPI
- // communicator to the <code>triangulation</code>but we still have not
+ // communicator to the <code>triangulation</code> but we still have not
// initialized the underlying geometry/mesh. As mentioned earlier, we
// have to postpone this task to the <code>setup()</code> function that
// gets called after the ParameterAcceptor::initialize() function has
// "holes" in the index range. The distributed matrices offered by
// deal.II avoid this by translating from a global index range into a
// contiguous local index range. But this is precisely the type of
- // index manipulation we want to avoid.
+ // index manipulation we want to avoid in our assembly loops.
//
- // Lucky enough, the Utilities::MPI::Partitioner used for distributed
- // vectors provides exactly what we need: It manages a translation from
+ // The Utilities::MPI::Partitioner already implements the translation from
// a global index range to a contiguous local (per MPI rank) index
- // range. We therefore simply create a "local" sparsity pattern for the
- // contiguous index range $[0,$<code>n_locally_relevant</code>$)$ and
- // translate between global dof indices and the above local range with
- // the help of the Utilities::MPI::Partitioner::global_to_local()
- // function. All that is left to do is to ensure that we always access
+ // range (we don't have to reinvent the wheel). We just need to use that
+ // translation capability (once and only once) in order to create a
+ // "local" sparsity pattern for the contiguous index range
+ // $[0,$<code>n_locally_relevant</code>$)$. The translation itself is
+ // provided by the Utilities::MPI::Partitioner::global_to_local() function.
+ // All that is left to do is to ensure that, when implementing our scatter
+ // and gather auxiliary functions, we always access
// elements of a distributed vector by a call to
- // LinearAlgebra::distributed::Vector::local_element(). That way we
- // avoid index translations altogether.
+ // LinearAlgebra::distributed::Vector::local_element(). That way we avoid
+ // index translations altogether and operate exclusively with local indices.
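+ //
+ // As a hedged sketch of what such an access pattern looks like (the
+ // variable names are made up and this is not the actual implementation
+ // that follows), reading one component of a distributed vector
+ // <code>U</code> at a locally relevant global index would be:
+ // @code
+ //   const unsigned int i   = partitioner->global_to_local(global_index);
+ //   const double       U_i = U.local_element(i);
+ // @endcode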
{
TimerOutput::Scope t(