discretization, using a solution expansion of the form
@f[
\mathbf{w}_h(\mathbf{x}, t) =
-\sum_{j=0}^{n_\mathbf{dofs}} \boldsymbol{\varphi}_j(\mathbf{x}) {w}_j(t).
+\sum_{j=1}^{n_\text{dofs}} \boldsymbol{\varphi}_j(\mathbf{x}) {w}_j(t).
@f]
Here, $\boldsymbol{\varphi}_j$ denotes the $j$th basis function, written
in vector form with separate shape functions for the different components. As
opposed to the continuous finite element method where some shape functions
span across element boundaries, the shape functions are local to a single
element in DG methods, with a discontinuity from one element to the next. The
-connectio of the solution from one cell to its neighbors is instead
+connection of the solution from one cell to its neighbors is instead
imposed by the numerical fluxes
specified below. This allows for some additional flexibility, for example to
introduce directionality in the numerical method via upwinding.
\sqrt{\gamma p^{\pm} / \rho^{\pm}}$, in this tutorial program, and leave other
variants to a possible extension. We also note that the HLL flux has been
extended in the literature to the so-called HLLC flux, where C stands for the
-ability to represent so-called contact discontinuities.
+ability to represent contact discontinuities.
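For orientation, the structure of such a two-state numerical flux can be
illustrated by the local Lax--Friedrichs (Rusanov) flux in its standard
textbook form,
@f[
\hat{\mathbf{F}}(\mathbf{w}^-, \mathbf{w}^+) =
\frac{1}{2}\left(\mathbf F(\mathbf{w}^-) + \mathbf F(\mathbf{w}^+)\right)\mathbf{n}
+ \frac{\lambda}{2}\left(\mathbf{w}^- - \mathbf{w}^+\right),
\qquad
\lambda = \max\left(\|\mathbf{u}^-\| + c^-,\; \|\mathbf{u}^+\| + c^+\right),
@f]
where $c^{\pm} = \sqrt{\gamma p^{\pm} / \rho^{\pm}}$ is the speed of sound of
the two states adjacent to the face and $\mathbf{n}$ the outer normal of the
$-$ side. This is only meant to illustrate the structure; the variant
implemented in the program may define $\lambda$ slightly differently.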
At the boundaries with no neighboring state $\mathbf{w}^+$ available, it is
common practice to deduce suitable exterior values from the boundary
eigenvalues of the linearization of $\mathbf F(\mathbf w)$ with respect to
$\mathbf{w}$. In this program, we set the time step as follows:
@f[
-\Delta t = \frac{Cr}{p^{1.5}}\left(\frac{1}{\max\left[\frac{\|\mathbf{u}\|}{h_u} +
- \frac{c}{h_c}\right]}\right),
+\Delta t = \frac{\mathrm{Cr}}{p^{1.5}}\left(\frac{1}
+ {\max\left[\frac{\|\mathbf{u}\|}{h_u} + \frac{c}{h_c}\right]}\right),
@f]
-with the maximum taken over all quadrature points and all cells. The power
-$p^{1.5}$ used for the polynomial scaling is heuristic and represents the
-closest fit for polynomial degrees between 1 and 8, see e.g.
-@cite SchoederKormann2018. In the limit of higher degrees, $p>10$, a scaling
-of $p^2$ is more accurate, related to the inverse estimates typically used for
+with the maximum taken over all quadrature points and all cells. The
+dimensionless number $\mathrm{Cr}$ denotes the Courant number and can be
+chosen up to a maximally stable number $\mathrm{Cr}_\text{max}$, whose value
+depends on the selected time stepping method and its stability properties. The
+power $p^{1.5}$ used for the polynomial scaling is heuristic and represents
+the closest fit for polynomial degrees between 1 and 8, see e.g.
+@cite SchoederKormann2018. In the limit of higher degrees, $p>10$, a scaling
+of $p^2$ is more accurate, related to the inverse estimates typically used for
interior penalty methods. Regarding the <i>effective</i> mesh sizes $h_u$ and
$h_c$ used in the formula, we note that the convective transport is
directional. Thus an appropriate scaling is to use the element length in the
@f]
With such a definition, the update to $\mathbf{w}_h^n$ shares the storage with
the information for the intermediate values $\mathbf{k}_i$. Starting with
-$\mathbf{w}^{n+1}=\mathbf{w}^n$ and $\mathbf{t}_1 = \mathbf{w}^n$, the update
+$\mathbf{w}^{n+1}=\mathbf{w}^n$ and $\mathbf{r}_1 = \mathbf{w}^n$, the update
in each of the $s$ stages simplifies to
@f[
\begin{aligned}
\mathbf{k}_i &=
-\mathcal M^{-1} \mathcal L_h\left(t^n+c_i\Delta t, \mathbf{t}_{i} \right),\\
-\mathbf{t}_{i+1} &= \mathbf{w}_h^{n+1} + \Delta t \, a_i \mathbf{k}_i,\\
+\mathcal M^{-1} \mathcal L_h\left(t^n+c_i\Delta t, \mathbf{r}_{i} \right),\\
+\mathbf{r}_{i+1} &= \mathbf{w}_h^{n+1} + \Delta t \, a_i \mathbf{k}_i,\\
\mathbf{w}_h^{n+1} &= \mathbf{w}_h^{n+1} + \Delta t \, b_i \mathbf{k}_i.
\end{aligned}
@f]
-Besides the vector $\mathbf w_h$ that is successively updated, this scheme
+Besides the vector $\mathbf w_h^{n+1}$ that is successively updated, this scheme
only needs two auxiliary vectors, namely the vector $\mathbf{k}_i$ to hold the
-evaluation of the differential operator, and the vector $\mathbf{t}_i$ that
+evaluation of the differential operator, and the vector $\mathbf{r}_i$ that
holds the right-hand side for the differential operator application. In
-subsequent stages $i$, the values $\mathbf{k}_i$ and $\mathbf{t}_i$ can use
+subsequent stages $i$, the values $\mathbf{k}_i$ and $\mathbf{r}_i$ can use
the same storage.
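Written out with plain vector operations, one pass through the stages could
look as follows. This is a sketch only: the actual implementation below fuses
these updates into the operator evaluation, and `apply_inverse_mass_and_Lh()`
is a hypothetical placeholder for the combined action of $\mathcal M^{-1}$ and
$\mathcal L_h$.
@code
// w holds w^{n+1} and r holds r_i, both initialized to w^n; k is scratch.
for (unsigned int i = 0; i < n_stages; ++i)
  {
    // k_i = M^{-1} L_h(t^n + c_i * dt, r_i)
    apply_inverse_mass_and_Lh(time + c[i] * time_step, r, k);
    if (i + 1 < n_stages)
      {
        r = w;                      // r_{i+1} = w^{n+1} + dt * a_i * k_i,
        r.add(a[i] * time_step, k); // using w^{n+1} before the b_i update
      }
    w.add(b[i] * time_step, k);     // w^{n+1} += dt * b_i * k_i
  }
@endcode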
The main advantages of low-storage variants are the reduced memory consumption
aliasing errors can introduce unphysical oscillations in the numerical
solution for <i>barely</i> resolved simulations. The fact that aliasing mostly
affects coarse resolutions -- whereas finer meshes with the same scheme
-otherwise work fine -- is not surprising because well-resolved simulations
-have tend to be smooth on length-scales of a cell (i.e., they have
+work fine -- is not surprising because well-resolved simulations
+tend to be smooth on length-scales of a cell (i.e., they have
small coefficients in the higher polynomial degrees that are missed by
too few quadrature points, whereas the main solution contribution in the lower
polynomial degrees is still well-captured -- this is simply a consequence of Taylor's
theorem). To address this topic, various approaches have been proposed in the
DG literature. One technique is filtering, which damps the solution components
pertaining to higher polynomial degrees. As the chosen nodal basis is not
hierarchical, this would mean to transform from the nodal basis into a
the context of the incompressible Navier-Stokes equations, where the
$\mathbf{u}\otimes \mathbf{u}$ nonlinearity results in polynomial integrands
of degree $3p$ (when also considering the test function), which can be
-integrated exactly with $\textbf{floor}\left(\frac{3p}{2}\right)+1$ quadrature
+integrated exactly with $\textrm{floor}\left(\frac{3p}{2}\right)+1$ quadrature
points per direction as long as the element geometry is affine. In the context
of the Euler equations with non-polynomial integrands, the choice is less
clear. Depending on the variation in the various variables both
-$\textbf{floor}\left(\frac{3p}{2}\right)+1$ or $2p+1$ points (integrating
+$\textrm{floor}\left(\frac{3p}{2}\right)+1$ or $2p+1$ points (integrating
exactly polynomials of degree $3p$ or $4p$, respectively) are common.
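To make the point counts concrete, recall that a one-dimensional Gauss formula
with $n$ points integrates polynomials up to degree $2n-1$ exactly, so the
minimal number of points for an integrand of degree $k$ is
$\textrm{floor}\left(\frac{k}{2}\right)+1$. A purely illustrative helper (not
part of the program) would read:
@code
// Smallest n with 2n - 1 >= k, i.e., n = k/2 + 1 using integer division.
unsigned int minimal_gauss_points_1d(const unsigned int k)
{
  return k / 2 + 1;
}
// k = 3p  ->  floor(3p/2) + 1 points per direction
// k = 4p  ->  2p + 1 points per direction
@endcode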
To reflect this variability in the choice of quadrature in the program, we
variant of mass lumping, though not the one with an additional integration
error as utilized in step-48) has been shown to not alter discretization
accuracy. The Lagrange basis in the points of Gaussian quadrature is sometimes
-also referred to as a collocation setup , as the nodal points of the
+also referred to as a collocation setup, as the nodal points of the
polynomials coincide (= are "co-located") with the points of quadrature, obviating some
interpolation operations. Given the fact that we want to use more quadrature
points for nonlinear terms in $\mathcal L_h$, however, the collocation
The class MatrixFreeOperators::CellwiseInverseMassMatrix implements this
operation: It changes from the basis contained in the finite element (in this
case, FE_DGQ) to the Lagrange basis in Gaussian quadrature points. Here, the
-diagonal mass matrix can be evaluate, which is simply the inverse of
-the `JxW` factors (i.e., the quadrature weight times the determinant of the
+inverse of a diagonal mass matrix can be evaluated, which is simply the inverse
+of the `JxW` factors (i.e., the quadrature weight times the determinant of the
Jacobian from reference to real coordinates). Once this is done, we can change
back to the standard nodal Gauss-Lobatto basis.
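In code, this cell-wise inverse takes roughly the following shape. This is a
condensed sketch of the pattern appearing later in the program, where `data`
is the MatrixFree object, `src` and `dst` are the global vectors, `cell_range`
is the range of cell batches handed in by MatrixFree::cell_loop(), and the
quadrature index 1 is assumed to refer to the Gauss formula with `degree + 1`
points for the `dim + 2` Euler components:
@code
FEEvaluation<dim, degree, degree + 1, dim + 2, Number> phi(data, 0, 1);
MatrixFreeOperators::CellwiseInverseMassMatrix<dim, degree, dim + 2, Number>
  inverse(phi);

for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
  {
    phi.reinit(cell);
    phi.read_dof_values(src);

    // Apply the inverse mass matrix in place on the cell-local coefficients.
    inverse.apply(phi.begin_dof_values(), phi.begin_dof_values());

    phi.set_dof_values(dst);
  }
@endcode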
The advantage of this particular way of applying the inverse mass matrix is
-the fact that it is of similar cost as the forward application of a mass
-matrix, and cheaper than the evaluation of the spatial operator $\mathcal L_h$
-which is more costly due to over-integration and face integrals. (We
+a cost similar to the forward application of a mass matrix, which is cheaper
+than the evaluation of the spatial operator $\mathcal L_h$
+with over-integration and face integrals. (We
will demonstrate this with detailed timing information in the
<a href="#Results">results section</a>.) In fact, it
is so cheap that it is limited by the bandwidth of reading the source vector,
between times 5 and 6.5. After that point, the flow is simply uniform
in the same direction, and the maximum velocity of the gas is reduced
compared to the previous state where the uniform velocity was overlaid
-by the vortex. Our time step formula recognizes this and only
-uses the acoustic limit in the last part of the simulation when
-determining the time step size.
+by the vortex. Our time step formula recognizes this effect.
The final block of output shows detailed information about the timing
of individual parts of the program; it breaks this down by showing
in run time can be traced back to cache effects on the given hardware (with 40
MB of L2 cache and 55 MB of L3 cache): While not all of the relevant data fits
into caches for 9.4 million DoFs (one vector takes 75 MB and we have three
-vectors plus some additional data in MatrixFree), there is capacity for almost
-half of one vector nonetheless. Given that modern caches are more sophisticated than
+vectors plus some additional data in MatrixFree), there is capacity for one and
+a half vectors nonetheless. Given that modern caches are more sophisticated than
the naive least-recently-used strategy (where we would have little re-use as
the data is used in a streaming-like fashion), we can assume that a sizeable
fraction of data can indeed be delivered from caches for the 9.4 million DoFs
// fact that only two vectors are needed per stage, namely the accumulated
// part of the solution $\mathbf{w}$ (that will hold the solution
// $\mathbf{w}^{n+1}$ at the new time $t^{n+1}$ after the last stage), the
- // update vector $\mathbf{T}_i$ that gets evaluated during the stages, plus
- // one vector $\mathbf{K}_i$ to hold the evaluation of the operator. Such a
+ // update vector $\mathbf{r}_i$ that gets evaluated during the stages, plus
+ // one vector $\mathbf{k}_i$ to hold the evaluation of the operator. Such a
// Runge--Kutta setup reduces the memory storage and memory access. As the
// memory bandwidth is often the performance-limiting factor on modern
// hardware when the evaluation of the differential operator is
}
// The main function of the time integrator is to go through the stages,
- // evaluate the operator, prepare the $\mathbf{T}_i$ vector for the next
+ // evaluate the operator, prepare the $\mathbf{r}_i$ vector for the next
// evaluation, and update the solution vector $\mathbf{w}$. We hand off
// the work to the `pde_operator` involved in order to be able to merge
// the vector operations of the Runge--Kutta setup with the evaluation of
//
// We separately call the operator for the first stage because we need
// slightly modified arguments there: Here, we evaluate the solution from
- // the old solution $\mathbf{w}^n$ rather than a $\mathbf T_i$ vector, so
+ // the old solution $\mathbf{w}^n$ rather than an $\mathbf{r}_i$ vector, so
// the first argument is `solution`. We here let the stage vector
- // $\mathbf{T}_i$ also hold the temporary result of the evaluation, as it
+ // $\mathbf{r}_i$ also hold the temporary result of the evaluation, as it
// is not used otherwise. For all subsequent stages, we use the vector
- // `vec_Ki` as the second vector argument to store the result of the
+ // `vec_ki` as the second vector argument to store the result of the
// operator evaluation. Finally, when we are at the last stage, we must
- // skip the computation of the vector $\mathbf{T}_{s+1}$ as there is no
+ // skip the computation of the vector $\mathbf{r}_{s+1}$ as there is no
// coefficient $a_s$ available (nor will it be used).
template <typename VectorType, typename Operator>
void perform_time_step(const Operator &pde_operator,
const double current_time,
const double time_step,
VectorType & solution,
- VectorType & vec_Ti,
- VectorType & vec_Ki)
+ VectorType & vec_ri,
+ VectorType & vec_ki)
{
AssertDimension(ai.size() + 1, bi.size());
pde_operator.perform_stage(current_time,
bi[0] * time_step,
ai[0] * time_step,
solution,
- vec_Ti,
+ vec_ri,
solution,
- vec_Ti);
+ vec_ri);
double sum_previous_bi = 0;
for (unsigned int stage = 1; stage < bi.size(); ++stage)
{
(stage == bi.size() - 1 ?
0 :
ai[stage] * time_step),
- vec_Ti,
- vec_Ki,
+ vec_ri,
+ vec_ki,
solution,
- vec_Ti);
+ vec_ri);
sum_previous_bi += bi[stage - 1];
}
}
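// A call to this function in the time loop then looks as follows (a sketch,
// assuming an object `integrator` of this class, the spatial operator
// `euler_operator`, and the two register vectors `rk_register_1` and
// `rk_register_2` that are set up in EulerProblem::run() below):
// @code
//   integrator.perform_time_step(euler_operator, time, time_step,
//                                solution, rk_register_1, rk_register_2);
// @endcode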
perform_stage(const Number cur_time,
const Number factor_solution,
const Number factor_ai,
- const LinearAlgebra::distributed::Vector<Number> &current_Ti,
- LinearAlgebra::distributed::Vector<Number> & vec_Ki,
+ const LinearAlgebra::distributed::Vector<Number> &current_ri,
+ LinearAlgebra::distributed::Vector<Number> & vec_ki,
LinearAlgebra::distributed::Vector<Number> & solution,
- LinearAlgebra::distributed::Vector<Number> &next_Ti) const;
+ LinearAlgebra::distributed::Vector<Number> &next_ri) const;
void project(const Function<dim> & function,
LinearAlgebra::distributed::Vector<Number> &solution) const;
// This function implements EulerOperator::apply() followed by some updates
- // to the vectors, namely `next_Ti = solution + factor_ai * K_i` and
- // `solution += factor_solution * K_i`. Rather than performing these
+ // to the vectors, namely `next_ri = solution + factor_ai * k_i` and
+ // `solution += factor_solution * k_i`. Rather than performing these
// steps through the vector interfaces, we here present an alternative
// strategy that is faster on cache-based architectures. As the memory
// consumed by the vectors is often much larger than what fits into caches,
// the data has to effectively come from the slow RAM memory. The situation
// can be improved by loop fusion, i.e., performing both the updates to
- // `next_Ki` and `solution` within a single sweep. In that case, we would
- // read the two vectors `rhs` and `solution` and write into `next_Ki` and
+ // `next_ri` and `solution` within a single sweep. In that case, we would
+ // read the two vectors `rhs` and `solution` and write into `next_ri` and
// `solution`, compared to at least 4 reads and two writes in the baseline
// case. Here, we go one step further and perform the loop immediately when
// the mass matrix inversion has finished on a part of the
// practice that we ensure that there is no overlapping, also called
// aliasing, between the index ranges of the pointers we use inside the
// loops). Note that we select a different code path for the last
- // Runge--Kutta stage when we do not need to update the `next_Ti`
+ // Runge--Kutta stage when we do not need to update the `next_ri`
// vector. This strategy gives a considerable speedup. Whereas the inverse
// mass matrix and vector updates take more than 60% of the computational
// time with default vector updates on a 40-core machine, the percentage is
const Number current_time,
const Number factor_solution,
const Number factor_ai,
- const LinearAlgebra::distributed::Vector<Number> &current_Ti,
- LinearAlgebra::distributed::Vector<Number> & vec_Ki,
+ const LinearAlgebra::distributed::Vector<Number> &current_ri,
+ LinearAlgebra::distributed::Vector<Number> & vec_ki,
LinearAlgebra::distributed::Vector<Number> & solution,
- LinearAlgebra::distributed::Vector<Number> & next_Ti) const
+ LinearAlgebra::distributed::Vector<Number> & next_ri) const
{
{
TimerOutput::Scope t(timer, "rk_stage - integrals L_h");
&EulerOperator::local_apply_face,
&EulerOperator::local_apply_boundary_face,
this,
- vec_Ki,
- current_Ti,
+ vec_ki,
+ current_ri,
true,
MatrixFree<dim, Number>::DataAccessOnFaces::values,
MatrixFree<dim, Number>::DataAccessOnFaces::values);
data.cell_loop(
&EulerOperator::local_apply_inverse_mass_matrix,
this,
- next_Ti,
- vec_Ki,
+ next_ri,
+ vec_ki,
std::function<void(const unsigned int, const unsigned int)>(),
[&](const unsigned int start_range, const unsigned int end_range) {
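// This lambda is invoked by cell_loop() on each sub-range of locally owned
// vector entries as soon as the inverse mass matrix result for that range
// has been written into `next_ri`, so the Runge--Kutta vector updates below
// happen while the data still resides in caches.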
const Number ai = factor_ai;
DEAL_II_OPENMP_SIMD_PRAGMA
for (unsigned int i = start_range; i < end_range; ++i)
{
- const Number K_i = next_Ti.local_element(i);
+ const Number k_i = next_ri.local_element(i);
const Number sol_i = solution.local_element(i);
- solution.local_element(i) = sol_i + bi * K_i;
+ solution.local_element(i) = sol_i + bi * k_i;
}
}
else
DEAL_II_OPENMP_SIMD_PRAGMA
for (unsigned int i = start_range; i < end_range; ++i)
{
- const Number K_i = next_Ti.local_element(i);
+ const Number k_i = next_ri.local_element(i);
const Number sol_i = solution.local_element(i);
- solution.local_element(i) = sol_i + bi * K_i;
- next_Ti.local_element(i) = sol_i + ai * K_i;
+ solution.local_element(i) = sol_i + bi * k_i;
+ next_ri.local_element(i) = sol_i + ai * k_i;
}
}
});
// The EulerProblem::run() function puts all pieces together. It starts off
// by calling the function that creates the mesh and sets up data structures
// and initializing the time integrator and the two temporary vectors of the
- // low-storage integrator. Before we start the time loop, we compute the
- // time step size by the `EulerOperator::compute_cell_transport_speed()`
- // function. For reasons of comparison, we compare the result obtained there
- // with the minimal mesh size and print them to screen. For velocities and
- // speeds of sound close to unity as in this tutorial program, the predicted
- // effective mesh size will be close, but they could vary if scaling were
- // different.
+ // low-storage integrator. We call these vectors `rk_register_1` and
+ // `rk_register_2`, and use the first vector to represent the quantity
+ // $\mathbf{r}_i$ and the second one for $\mathbf{k}_i$. Before we start the
+ // time loop, we compute the time step size by the
+ // `EulerOperator::compute_cell_transport_speed()` function. For
+ // comparison, we also compute the minimal mesh size and print both
+ // quantities to screen. For velocities and speeds of sound close to unity
+ // as in this tutorial program, the predicted effective mesh size will be
+ // close to the minimal mesh size, but the two could differ if the scaling
+ // were different.
template <int dim>
void EulerProblem<dim>::run()
{