@dealiiTutorialDOI{10.5281/zenodo.3698223,https://zenodo.org/badge/DOI/10.5281/zenodo.3698223.svg}
-<a name="Intro></a>
+<a name="Intro"></a>
<h1>Introduction</h1>
This tutorial presents a first-order scheme for solving compressible
$\langle \text{div} \, \mathbb{f}(\mathbf{u}), \mathbf{u}\rangle$ (understood as
the $L^2(\Omega)$ inner product or duality pairing) is not guaranteed to be
non-negative. Notions such as energy-stability or $L^2(\Omega)$-stability are
-(in general) meaningles in this context.
+(in general) meaningless in this context.
Historically, the most fruitful step taken in order to deepen the
understanding of hyperbolic conservation laws was to assume that the
- {\epsilon} \Delta \mathbf{u}^{\epsilon} = 0.
@f}
Such solutions, which are understood as the solution recovered in the
-zero-viscosity limit, are often refered to as <i>viscosity solutions</i>.
+zero-viscosity limit, are often referred to as <i>viscosity solutions</i>.
-(This is, because physically $\epsilon$ can be understood as related to the viscosity of the
+(This is because physically $\epsilon$ can be understood as related to the viscosity of the
fluid, i.e., a quantity that indicates the amount of friction neighboring gas particles moving at
different speeds exert on each other. The Euler equations themselves are derived under
(integral, or high order moments) sense.
-In context of a numerical approximation, a violation of such a constraint
+In the context of a numerical approximation, a violation of such a constraint
-has dire consequences: it almost surely leads to catrastrophic failure of
+has dire consequences: it almost surely leads to catastrophic failure of
the numerical scheme, loss of hyperbolicity, and overall, loss of
well-posedness of the (discrete) problem. It would also mean that we have computed
something that can not be interpreted physically. (For example, what are we to make
\lambda_{\text{max}} (\mathbf{U}_j^{n}, \mathbf{U}_i^{n},
-\textbf{n}_{ji}) \} \|\mathbf{c}_{ij}\|$ if $i \not = j$ is the so
-called <i>graph viscosity</i>. The graph viscosity serves as a
+\textbf{n}_{ji}) \} \|\mathbf{c}_{ij}\|$ if $i \not = j$ is the
+so-called <i>graph viscosity</i>. The graph viscosity serves as a
- stabilization term, it is omewhat the discrete counterpart of
+ stabilization term; it is, in some sense, the discrete counterpart of
$\epsilon \Delta \mathbf{u}$ that appears in the notion of viscosity
solution described above. We will base our construction of $d_{ij}$ on
an estimate of the maximal local wavespeed $\lambda_{\text{max}}$ that
will be explained in detail in a moment.
- - the diagonal entres of the viscosity matrix are defined as
+ - the diagonal entries of the viscosity matrix are defined as
$d_{ii} = - \sum_{j \in \mathcal{I}(i)\backslash \{i\}} d_{ij}$.
- $\textbf{n}_{ij} = \frac{\mathbf{c}_{ij}}{ \|\mathbf{c}_{ij}\| }$ is a
normalization of the $\textbf{c}_{ij}$ matrix that enters the
The definition of $\lambda_{\text{max}} (\mathbf{U},\mathbf{V},
\textbf{n})$ is far from trivial and we will postpone the precise
-definition in order to focus first on some algorithmic and implementational
+definition in order to focus first on some algorithmic and implementation
questions. We note that
- $m_i$ and $\mathbf{c}_{ij}$ do not evolve in time (provided we keep the
discretization fixed). It thus makes sense to assemble these
i\in\mathcal{V}}\left(\frac{m_i}{-2\,d_{ii}^{n}}\right),
@f}
-where $0<c_{\text{cfl}}\le1$ is a chosen constant. This will require to
+where $0<c_{\text{cfl}}\le1$ is a chosen constant. This will require us to
-compute all $d_{ij}$ in a separte step prior to actually performing above
+compute all $d_{ij}$ in a separate step prior to actually performing above
update. The core principle remains unchanged, though: we do not loop over
cells but rather over all edges of the sparsity graph.
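Purely as an illustration of this edge-based loop (and not as the actual
implementation used later in this program, which organizes the data and the
parallelization differently), the assembly of the graph viscosities and the
computation of the step size could be sketched as follows. Here,
<code>lambda_max()</code>, <code>c_ij()</code>, <code>m_i()</code>,
<code>stencil_of()</code>, and the state vector <code>U</code> are
hypothetical placeholders for the quantities introduced above:
@code
double tau_n = std::numeric_limits<double>::max();

for (unsigned int i = 0; i < n_nodes; ++i)
  {
    double d_ii = 0.;
    for (const unsigned int j : stencil_of(i)) // edges of the sparsity graph
      {
        if (j == i)
          continue;

        const auto n_ij = c_ij(i, j) / c_ij(i, j).norm();
        // n_ji = -n_ij whenever x_i or x_j lies away from the boundary:
        const double d_ij = std::max(lambda_max(U[i], U[j], n_ij),
                                     lambda_max(U[j], U[i], -n_ij)) *
                            c_ij(i, j).norm();
        d_ii -= d_ij; // accumulates d_ii = -\sum_{j != i} d_ij
      }

    tau_n = std::min(tau_n, m_i(i) / (-2. * d_ii));
  }

tau_n *= c_cfl;
@endcode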
// usually centers around either a single data structure (such as the
// Triangulation) in the <code>Discretization</code> class, or a single
// method (such as the <code>make_one_step()</code> function of the
-// <code>TimeStepping</code> class). We typically declare parameter variables
+// <code>%TimeStepping</code> class). We typically declare parameter variables
-// and scratch data object `private` and make methods and data structures
+// and scratch data objects `private` and make methods and data structures
// used by other classes `public`.
//
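// As a purely schematic illustration of this convention (the class and
// member names in the following snippet are made up and do not occur in
// this program), such a layout might look like:
//
// @code
//   template <int dim>
//   class SchematicClass
//   {
//   public:
//     SchematicClass();
//     void compute();                   // method used by other classes
//     Vector<double> result;            // data structure used by other classes
//
//   private:
//     double         some_parameter;    // run-time parameter
//     Vector<double> temporary_storage; // scratch data
//   };
// @endcode
//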
Tensor<1, 3> initial_1d_state;
};
- // @sect4{The <code>TimeStepping</code> class}
+ // @sect4{The <code>%TimeStepping</code> class}
//
// With the <code>OfflineData</code> and <code>ProblemDescription</code>
// classes at hand we can now implement the explicit time-stepping scheme
// that was introduced in the discussion above. The main method of the
- // <code>TimeStepping</code> class is <code>make_one_step(vector_type &U,
+ // <code>%TimeStepping</code> class is <code>make_one_step(vector_type &U,
// double t)</code> that takes a reference to a state vector
-// <code>U</code> and a time point <code>t</code> (as input arguments)
+// <code>U</code> and a time point <code>t</code> (as input arguments),
// computes the updated solution, stores it in the vector
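//
// A hypothetical driver loop may illustrate how such an interface is meant
// to be used (this is only a sketch: the object name is made up, and we
// assume for the purpose of the illustration that the function returns the
// step size it selected):
//
// @code
//   double t = 0.;
//   while (t < t_final)
//     t += time_stepping.make_one_step(U, t);
// @endcode
//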
// @sect4{The <code>MainLoop</code> class}
//
// Now, all that is left to do is to chain the methods implemented in the
- // <code>TimeStepping</code>, <code>InitialValues</code>, and
+ // <code>%TimeStepping</code>, <code>InitialValues</code>, and
// <code>SchlierenPostprocessor</code> classes together. We do this in a
// separate class <code>MainLoop</code> that contains an object of every
// class and again reads in a number of parameters with the help of the
// assemble the local part of a matrix exclusively on a given MPI
// rank. Instead, we will compute nonlinear updates while iterating
-// over (the local part) of a connectivity stencil; a task for which
+// over (the local part of) a connectivity stencil, a task for which
- // deal.II's own SparsityPattern is specificially optimized for.
+ // deal.II's own SparsityPattern is specifically optimized.
//
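// As an illustration of what iterating over such a connectivity stencil
// looks like (a schematic fragment with made-up variable names, not code
// taken from this program), every stored coupling $(i,j)$ of a
// SparsityPattern can be visited row by row:
//
// @code
//   for (unsigned int i = 0; i < sparsity_pattern.n_rows(); ++i)
//     for (auto jt = sparsity_pattern.begin(i); jt != sparsity_pattern.end(i);
//          ++jt)
//       {
//         const auto j = jt->column();
//         // update the quantities that couple node i with node j
//       }
// @endcode
//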
// This design consideration has a caveat, though. What makes the
// deal.II SparseMatrix class fast is the <a
// the pseudo-code in the introduction) that will repeat over and over
// again. That's why this is the right time to introduce them.
//
- // We have the thread paralellization capability
+ // We have the thread parallelization capability
-// parallel::apply_to_subranges() that is somehow more general than the
+// parallel::apply_to_subranges() that is somewhat more general than the
// WorkStream framework. In particular, parallel::apply_to_subranges() can
// be used for our node-loops. This functionality requires four input
// <code>on_subranges</code> lambda we need to name the iterator type
// of the object returned by <code>boost::irange<unsigned
// int>()</code>. This is unfortunately a very convoluted name exposing
- // implementational details about <code>boost::irange</code>. For this
+ // implementation details about <code>boost::irange</code>. For this
// reason we resort to the <a
// href="https://en.cppreference.com/w/cpp/language/decltype"><code>decltype</code></a>
// specifier, a C++11 feature that returns the type of an entity, or
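//
// The following fragment illustrates the pattern just described. It is a
// simplified sketch: <code>n_locally_owned</code> and the grain size
// <code>4096</code> are placeholders, and the loop body is left empty.
//
// @code
//   const auto indices = boost::irange<unsigned int>(0, n_locally_owned);
//
//   const auto on_subranges =
//     [&](const decltype(indices.begin()) i1,
//         const decltype(indices.begin()) i2) {
//       for (auto it = i1; it != i2; ++it)
//         {
//           const unsigned int i = *it;
//           // process node i
//         }
//     };
//
//   parallel::apply_to_subranges(indices.begin(),
//                                indices.end(),
//                                on_subranges,
//                                4096);
// @endcode
//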
// Finally, we normalize the vectors stored in
// <code>OfflineData<dim>::BoundaryNormalMap</code>. This operation has
- // not been thread paralellized as it would neither illustrate any
+ // not been thread parallelized as it would neither illustrate any
// important concept nor lead to any noticeable speed gain.
for (auto &it : boundary_normal_map)
{
// @sect4{The Forward Euler step}
- // The constructor of the <code>TimeStepping</code> class does not contain
+ // The constructor of the <code>%TimeStepping</code> class does not contain
// any surprising code:
template <int dim>
// symmetric, i.e., $d_{ij} = d_{ji}$. In this regard we note here that
// $\int_{\Omega} \nabla \phi_j \phi_i \, \mathrm{d}\mathbf{x}= -
// \int_{\Omega} \nabla \phi_i \phi_j \, \mathrm{d}\mathbf{x}$ (or
- // equivanlently $\mathbf{c}_{ij} = - \mathbf{c}_{ji}$) provided either
+ // equivalently $\mathbf{c}_{ij} = - \mathbf{c}_{ji}$) provided either
// $\mathbf{x}_i$ or $\mathbf{x}_j$ is a support point located away
// from the boundary. In this case we can check that
// $\lambda_{\text{max}} (\mathbf{U}_i^{n}, \mathbf{U}_j^{n},
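//
// (To fill in the reasoning behind this identity: integration by parts,
// that is, the divergence theorem applied to the product $\phi_i \phi_j$,
// gives
// @f{align*}
// \mathbf{c}_{ij} + \mathbf{c}_{ji}
// \;=\; \int_{\Omega} \nabla (\phi_i \phi_j) \, \mathrm{d}\mathbf{x}
// \;=\; \int_{\partial\Omega} \phi_i \phi_j \, \mathbf{n} \, \mathrm{d}s,
// @f}
// and the boundary integral vanishes as soon as one of the two shape
// functions has its support away from $\partial\Omega$.)
//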
-// The second thing to note is that we have to compute global minimum and
+// The second thing to note is that we have to compute the global minimum and
// maximum $\max_j |\nabla r_j|$ and $\min_j |\nabla r_j|$. Following the
// same ideas used to compute the time step size in the class member
- // <code>TimeStepping<dim>::step()</code> we define $\max_j |\nabla r_j|$
+ // <code>%TimeStepping<dim>::step()</code> we define $\max_j |\nabla r_j|$
// and $\min_j |\nabla r_j|$ as atomic doubles in order to resolve any
// conflicts between threads. As usual, we use
// <code>Utilities::MPI::max()</code> and
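//
// The following fragment sketches this pattern for the maximum (the
// variable names are made up, and the minimum is handled analogously with
// the comparison reversed):
//
// @code
//   std::atomic<double> r_j_max{0.};
//
//   // Thread-safe update with a candidate value r_j:
//   double current = r_j_max.load();
//   while (current < r_j && !r_j_max.compare_exchange_weak(current, r_j))
//     ;
//
//   // Reduce over all MPI ranks once the threads have finished:
//   const double global_max =
//     Utilities::MPI::max(r_j_max.load(), mpi_communicator);
// @endcode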