<h3> GMRES Iteration Counts </h3>
-The major advantage for GMG is that it is an $\mathcal{O}(n)$ method, that is, the complexity of the problem increases linearly with the problem size. To show then that the linear solver presented in this tutorial is also $\mathcal{O}(n)$, all one needs to do is show that the iteration counts for the GMRES solve stay roughly constant as we refine the mesh.
+The major advantage of GMG is that it is an $\mathcal{O}(n)$ method,
+that is, the complexity of the problem increases linearly with the
+problem size. To show that the linear solver presented in this
+tutorial is also $\mathcal{O}(n)$, all one needs to do is show that
+the iteration counts for the GMRES solve stay roughly constant as we
+refine the mesh.
-Each of the following tables gives the GMRES iteration counts to reduce the initial residual by 1e-8.
+Each of the following tables gives the GMRES iteration counts required
+to reduce the initial residual by a factor of $10^8$.
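+
+For reference, these counts can be read off of the SolverControl object
+after the solve. A minimal sketch of how this looks (the variable names
+are illustrative, and we assume a zero initial guess so that the
+initial residual equals $\|b\|$):
+@code
+SolverControl solver_control(1000, 1e-8 * system_rhs.l2_norm());
+SolverGMRES<Vector<double>> solver(solver_control);
+solver.solve(system_matrix, solution, system_rhs, preconditioner);
+std::cout << "GMRES iterations: " << solver_control.last_step()
+          << std::endl;
+@endcode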
<h5> DoF/Cell Renumbering </h5>
-Staring with the additive smoothers, we see that renumbering the DoFs/cells has no effect on convergence speed. This is because these smoothers compute operations on each DoF (point-smoother) or cell (block-smoother) independently and add up the results. Since we can define these smoothers as an application of a sum of matrices, and matrix addition is communicative, the order at which we sum the different components will not affect the end result.
+Starting with the additive smoothers, we see that renumbering the
+DoFs/cells has no effect on convergence speed. This is because these
+smoothers compute operations on each DoF (point-smoother) or cell
+(block-smoother) independently and add up the results. Since we can
+define these smoothers as an application of a sum of matrices, and
+matrix addition is commutative, the order at which we sum the
+different components will not affect the end result.
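+Written out, one sweep of an additive smoother has the standard form
+@f[
+ x \;\leftarrow\; x + \omega \sum_k R_k^T A_k^{-1} R_k (b - A x),
+@f]
+where $R_k$ selects the $k$-th DoF (point smoothers) or the DoFs of
+the $k$-th cell (block smoothers); every term of the sum is computed
+from the same residual, so the order of summation is immaterial.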
+
<table align="center" class="doxtable">
<tr>
</tr>
</table>
-On the other hand, for multiplicative smoothers, we can speed up convergence by renumbering the DoFs/cells in the advection direction, and similarly, we can slow down convergence if we do the renumbering in the opposite direction. This is because the multiplicative smoothers transfer information from DoF to DoF, and the solutions at any one DoF of an advection-dominate problem is highly dependent on the solution at DoFs upstream of the advection direction.
-
-As we increase $\varepsilon$, however, the problem becomes more diffusion-dominated and the effectiveness of DoF/cell renumbering drops. On the other hand, in the extreme case of $\varepsilon=0$ (advection-only), we have a 1st-order PDE and multiplicative methods become effective solvers (Note: special care must be taken for the boundary conditions in this case).
+On the other hand, for the multiplicative smoothers, we can speed up
+convergence by renumbering the DoFs/cells in the advection direction,
+and similarly, we can slow down convergence if we do the renumbering
+in the opposite direction. This is because advection-dominated
+problems have a directional flow of information (in the advection
+direction) which, given the right renumbering of DoFs/cells,
+multiplicative methods are able to capture.
+
+This feature of multiplicative methods is, however, dependent on the
+value of $\varepsilon$. As we increase $\varepsilon$ and the problem
+becomes more diffusion-dominated, we have a more uniform propagation
+of information over the mesh and there is a diminished advantage for
+renumbering in the advection direction. On the opposite end, in the
+extreme case of $\varepsilon=0$ (advection-only), we have a 1st-order
+PDE and multiplicative methods with the right renumbering become
+effective solvers (Note: special care must be taken for the boundary
+conditions in this case).
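+
+By contrast, one multiplicative sweep applies the same updates
+sequentially,
+@f[
+ x \;\leftarrow\; x + \omega R_k^T A_k^{-1} R_k (b - A x),
+ \qquad k = 1,\ldots,K,
+@f]
+recomputing the residual after each block, so what has been computed
+for block $k$ is already visible to block $k+1$ and the traversal
+order matters.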
<table align="center" class="doxtable">
<tr>
<h5> Point vs. Block Smoothers </h5>
-We will limit the results to runs using the downstream renumbering. Here is a cross comparison of all four smoothers for both $Q_1$ and $Q_3$ elements:
+We will limit the results to runs using the downstream
+renumbering. Here is a cross comparison of all four smoothers for both
+$Q_1$ and $Q_3$ elements:
<table align="center" class="doxtable">
<tr>
</tr>
</table>
-We see that for $Q_1$, both multiplicative smoothers require a smaller combination of smoothing steps and iteration counts than either additive smoother. However, when we in increase the degree to a $Q_3$ element, there is a clear advantage for the block smoothers in terms of the increase in smoothing steps and iterations required to solve. Specifically, the block SOR smoother gives constant iterations over degree, and the block Jacobi only sees about a 38% increase in iterations compared to 75% and 183% for Jacobi and SOR respectively.
+We see that for $Q_1$, both multiplicative smoothers require a smaller
+combination of smoothing steps and iteration counts than either
+additive smoother. However, when we increase the degree to a $Q_3$
+element, there is a clear advantage for the block smoothers in terms
+of the number of smoothing steps and iterations required to
+solve. Specifically, the block SOR smoother gives constant iteration
+counts over the degree, and the block Jacobi smoother only sees about
+a 38% increase in iterations compared to 75% and 183% for Jacobi and
+SOR respectively.
<h3> Cost </h3>
-Iteration counts do not tell the full story in the optimality of a one smoother over another. Obviously we must examine the cost of an iteration. Block smoothers here are at a disadvantage as they are having to construct and invert a cell matrix for each cell. Here is a comparison of solve times for a $Q_3$ element with 74,496 DoFs:
+Iteration counts do not tell the full story about the optimality of
+one smoother over another. Obviously we must also examine the cost of
+an iteration. Block smoothers here are at a disadvantage as they must
+construct and invert a cell matrix for each cell. Here is a comparison
+of solve times for a $Q_3$ element with 74,496 DoFs:
<table align="center" class="doxtable">
<tr>
</tr>
</table>
-The smoother that requires the most iterations (Jacobi) actually take the shortest time (roughly 2/3 the time of the next fastest method). This is because all that is requires to apply a Jacobi smoothing step is multiplication by a diagonal matrix which is very cheap. On the other hand, while SOR requires over 3x more iterations (each with 3x more smoothing steps) than block SOR, the times are roughly equivalent, implying that a smoothing step of block SOR is roughly 9x slower than a smoothing step of SOR. Lastly, block Jacobi is almost 6x more expensive than block SOR, which intuitively makes sense from the fact that 1 step of each method has the same cost (inverting the cell matrices and either adding or multiply them together), and block Jacobi has 3 times the smoothing step per iteration with 2 times the iterations.
+The smoother that requires the most iterations (Jacobi) actually takes
+the shortest time (roughly 2/3 the time of the next fastest
+method). This is because all that is required to apply a Jacobi
+smoothing step is multiplication by a diagonal matrix, which is very
+cheap. On the other hand, while SOR requires over 3x more iterations
+(each with 3x more smoothing steps) than block SOR, the times are
+roughly equivalent, implying that a smoothing step of block SOR is
+roughly 9x slower than a smoothing step of SOR. Lastly, block Jacobi
+is almost 6x more expensive than block SOR, which intuitively makes
+sense from the fact that 1 step of each method has the same cost
+(inverting the cell matrices and either adding or multiplying them
+together), and block Jacobi takes 3 times as many smoothing steps per
+iteration and 2 times as many iterations.
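+
+To spell out the arithmetic behind the 9x figure: the total solve time
+scales roughly like (iterations) $\times$ (smoothing steps per
+iteration) $\times$ (cost per smoothing step), so equal solve times
+with 3x the iterations and 3x the steps per iteration give
+@f[
+ 3 \cdot 3 \cdot c_{\text{SOR}} \approx c_{\text{block SOR}}
+ \quad\Longrightarrow\quad
+ c_{\text{block SOR}} \approx 9\, c_{\text{SOR}},
+@f]
+where $c$ denotes the cost of a single smoothing step.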
<h3> Additional Points </h3>
There are a few more important points to mention:
-1. For a mesh distributed in parallel, multiplicative methods cannot be executed over the entire domain. One can use a hybrid method where a multiplicative smoother is applied on each subdomain, but as you increase the number of subdomains, the method converges to that of an additive method. This is a major disadvantage to these methods.
+1. For a mesh distributed in parallel, multiplicative methods cannot
+be executed over the entire domain. One can use a hybrid method where
+a multiplicative smoother is applied on each subdomain, but as you
+increase the number of subdomains, the method approaches the behavior
+of an additive method. This is a major disadvantage of these methods.
-2. Current research into block smoothers suggest that soon we will be able to compute the inverse of the cell matrices much cheaper than what is currently being done inside deal.II. It seems that one should be able to take advantage of matrix free implementations and also the fact that, in the interior of the domain, cell matrices tend to look very similar.
+2. Current research into block smoothers suggests that soon we will be
+able to compute the inverse of the cell matrices much more cheaply than
+what is currently being done inside deal.II. This research is based on
+the fast diagonalization method (dating back to the 1960s) and has
+been used in the spectral community for around 20 years (see, e.g.,
+https://doi.org/10.1007/s10915-004-4787-3). There are currently
+efforts to generalize these methods to DG and make them more
+robust. Also, it seems that one should be able to take advantage of
+matrix-free implementations and the fact that, in the interior of
+the domain, cell matrices tend to look very similar, allowing fewer
+matrix inverse computations.
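+
+To sketch the idea of fast diagonalization: if a (2D) cell matrix is
+separable,
+@f[
+ A_K = M \otimes K + K \otimes M,
+@f]
+then solving the small generalized eigenvalue problem
+$K S = M S \Lambda$ (with the normalization $S^T M S = I$) yields
+@f[
+ A_K^{-1} = (S \otimes S)
+            (\Lambda \otimes I + I \otimes \Lambda)^{-1}
+            (S \otimes S)^T,
+@f]
+which can be applied with a few small dense matrix products instead of
+an explicit inversion.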
-Combining 1. and 2. gives a good reason for expecting that a method like block Jacobi could become very powerful in the future, even though currently for these examples it is quite slow.
+Combining 1. and 2. gives a good reason for expecting that a method
+like block Jacobi could become very powerful in the future, even
+though currently for these examples it is quite slow.
<h5> Constant iterations for Q<sub>5</sub> </h5>
-Change the number of smoothing steps and the smoother relaxation parameter (set in <code>Smoother::AdditionalData()</code> inside <code>create_smoother()</code>, only necessary for point smoothers) so that we maintain a constant number of iterations for a $Q_5$ element.
+Change the number of smoothing steps and the smoother relaxation
+parameter (set in <code>Smoother::AdditionalData()</code> inside
+<code>create_smoother()</code>, only necessary for point smoothers) so
+that we maintain a constant number of iterations for a $Q_5$ element.
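+
+As a sketch, both knobs are set inside create_smoother() (the calls
+below use illustrative names and placeholder values to be tuned, not
+known-good settings):
+@code
+smoother->initialize(mg_matrices, Smoother::AdditionalData(0.5));
+smoother->set_steps(6);
+@endcode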
<h5> Effectiveness of renumbering for changing epsilon </h5>
-Increase/decrease the parameter ``Epsilon" in the .prm files of the multiplicative methods and observe the relationship of downstream vs. upstream reordering.
+Increase/decrease the parameter "Epsilon" in the .prm files of the
+multiplicative methods and observe for which values renumbering no
+longer influences convergence speed.
// @sect3{MeshWorker Data}
- // The following are structure needed by assemble_cell()
+ // The following are structures needed by the assemble_cell()
// function used by MeshWorker::mesh_loop(). ScratchData
- // contains a FeValues object with is needed for assembling
- // a cells local contribution, while CopyData contains the
- // output from a cells local contribution and necessary information
+ // contains an FEValues object which is needed for assembling
+ // a cell's local contribution, while CopyData contains the
+ // output from a cell's local contribution and necessary information
// to copy that to the global system.
template <int dim>
// @sect3{Cell permutations}
//
// The ordering in which cells and degrees of freedom are traversed
- // will play a roll in the speed of convergence for multiplicative
+ // will play a role in the speed of convergence for multiplicative
// methods. Here we define functions which return a specific ordering
// of cells to be used by the block smoothers.
for (const auto &cell : dof_handler.active_cell_iterators())
ordered_cells.push_back(cell->index());
- // shuffle the elements:
+ // Shuffle the elements:
std::mt19937 random_number_generator;
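+      // Note: the default-constructed std::mt19937 always starts from
+      // the same fixed seed, so the "random" ordering produced below is
+      // deterministic and reproducible between runs. The loop itself is
+      // a Fisher-Yates shuffle.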
for (unsigned int i = 1; i < n_cells; ++i)
{
- // get a random number between 0 and i (inclusive):
+ // Get a random number between 0 and i (inclusive):
const unsigned int j =
std::uniform_int_distribution<>(0, i)(random_number_generator);
- // if possible, swap the elements:
+ // If possible, swap the elements:
if (i != j)
std::swap(ordered_cells[i], ordered_cells[j]);
}
// @sect3{Right-hand Side and Boundary Values}
- // The problem solved in this tutorial is an adaptation of Ex. ___
- // found in ______ (how to cite?), namely, we add a hole in the middle
- // of our domain.
+ // The problem solved in this tutorial is an adaptation of Ex. 3.1.3
+ // found on pg. 118 of Finite Elements and Fast Iterative Solvers:
+ // with Applications in Incompressible Fluid Dynamics by Elman, Silvester,
+ // and Wathen. The main difference is that we add a hole in the center
+ // of our domain with zero Dirichlet boundary conditions.
// We have a zero right-hand side.
template <int dim>
// @sect3{Streamline Diffusion}
- // Streamline diffusion stabilization term. Value defined in
- // 'On discontinuity–capturing methods for convection–diffusion
- // equations' (cite?)
+ // Streamline diffusion stabilization term. Value is defined in
+ // "On Discontinuity-Capturing Methods for Convection-Diffusion
+ // Equations" by Volker John and Petr Knobloch
+ // (https://link.springer.com/chapter/10.1007/978-3-540-34288-5_27).
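+ // For reference, the classical one-dimensional "optimal" value from
+ // that paper has the form (a sketch, with $p_k$ the polynomial
+ // degree, $h_K$ the cell diameter, and $\boldsymbol{\beta}$ the
+ // advection direction, matching the hk and eps arguments below):
+ // @f[
+ //  \delta_K = \frac{h_K}{2 p_k \|\boldsymbol{\beta}\|}
+ //             \left(\coth(\mathrm{Pe}_K) - \frac{1}{\mathrm{Pe}_K}\right),
+ //  \qquad
+ //  \mathrm{Pe}_K = \frac{\|\boldsymbol{\beta}\| h_K}{2 \varepsilon p_k}.
+ // @f]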
template <int dim>
double compute_stabilization_delta(const double hk,
const double eps,
// @sect3{<code>AdvectionProblem</code> class}
- // This main class of the program, and should look very similar to step-16.
- // The major difference is that, since we are defining our multigrid smoother
- // at runtime, we choose to define a function create_smoother() and a class
- // object mg_smoother which is a std::unique_ptr to a smoother that is derived
- // from MGSmoother. Note that for smoother derived from RelaxationBlock, we
- // must include a smoother_data object for each level. This will contain
- // information about the cell ordering and the method of inverting cell
- // matrices.
+ // This is the main class of the program, and should look very similar to
+ // step-16. The major difference is that, since we are defining our multigrid
+ // smoother at runtime, we choose to define a function create_smoother() and a
+ // class object mg_smoother which is a std::unique_ptr to a smoother that is
+ // derived from MGSmoother. Note that for smoothers derived from
+ // RelaxationBlock, we must include a smoother_data object for each level.
+ // This will contain information about the cell ordering and the method of
+ // inverting cell matrices.
template <int dim>
class AdvectionProblem
// @sect4{<code>AdvectionProblem::setup_system</code>}
- // Here we setup the DoFHandler, ConstraintMatrix, and sparsity patterns for
+ // Here we set up the DoFHandler, ConstraintMatrix, and sparsity patterns for
// both active and multigrid level meshes.
template <int dim>
{
const unsigned int n_levels = triangulation.n_levels();
- // Setup active DoFs:
dof_handler.distribute_dofs(fe);
// We could renumber the active DoFs with the DoFRenumbering class
// here, but the smoothers only act on multigrid levels and as such, this
- // wouldn't matter for the computations. Instead, we will renumber the
+ // would not matter for the computations. Instead, we will renumber the
// DoFs on each multigrid level below.
solution.reinit(dof_handler.n_dofs());
/*keep_constrained_dofs = */ false);
sparsity_pattern.copy_from(dsp);
-
system_matrix.reinit(sparsity_pattern);
-
- // Setup GMG DoFs:
dof_handler.distribute_mg_dofs();
// Renumber DoFs on each level in downstream or upstream direction if
rhs_values);
// If we are using streamline diffusion we must add its contribution
- // to both the cell matrix and the cell right-handside. If we are not
+ // to both the cell matrix and the cell right-hand side. If we are not
// using streamline diffusion, setting $\delta=0$ negates this contribution
// below and we are left with the standard, Galerkin finite element
// assembly.
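+ // In standard streamline diffusion (SUPG) form, the extra term is,
+ // schematically,
+ // @f[
+ //  \delta_K \int_K (\boldsymbol{\beta}\cdot\nabla v)
+ //  \left(-\varepsilon \Delta u + \boldsymbol{\beta}\cdot\nabla u - f\right) dx,
+ // @f]
+ // split between the cell matrix (the terms in $u$) and the cell
+ // right-hand side (the $f$ term), so $\delta_K=0$ indeed recovers the
+ // plain Galerkin assembly.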
MeshWorker::assemble_own_cells);
// Unlike the constraints for the active level, we choose to create
- // local constraint matrices for each multigrid level since they are
- // never needed elsewhere in the program.
+ // constraint objects for each multigrid level local to this function
+ // since they are never needed elsewhere in the program.
std::vector<AffineConstraints<double>> boundary_constraints(
triangulation.n_global_levels());
for (unsigned int level = 0; level < triangulation.n_global_levels();
// @sect4{<code>AdvectionProblem::setup_smoother</code>}
- // Here we setup the smoother based on the settings in the .prm. The two
+ // Here we set up the smoother based on the settings in the .prm. The two
// options of significance are the number of pre- and post-smoothing
// steps on each level of the multigrid v-cycle and the relaxation parameter.
// Since multiplicative methods tend to be more powerful than additive methods,
// fewer smoothing steps are required to see convergence independent of mesh
- // size. The same hold for block smoothers over point smoothers. This is
+ // size. The same holds for block smoothers over point smoothers. This is
// reflected in the choice for the number of smoothing steps for each type of
// smoother below.
// The relaxation parameter for point smoothers is chosen based on trial and
// error, and reflect the values necessary to keep the iteration counts in
// the GMRES solve constant (or as close as possible) as we refine the mesh.
- // The two values given for both "Jacobi" and "SOR" are for degree 1 and
- // degree 3 finite elements. If the user wants to change to another degree,
- // they may need to adjust these numbers. For block smoothers, this parameter
- // has a more straightforward interpretation, namely that for additive methods
- // in 2D, a DoF can have a repeated contribution from up to 4 cells,
- // therefore we must relax these methods by 0.25 to compensate. This is not an
- // issue for multiplicative methods as each cell inverse application carries
- // new information to all its DoFs.
-
- // Finally, as mention above, the point smoothers only operate on DoFs, and
+ // The two values given for both "Jacobi" and "SOR" in the .prm files are for
+ // degree 1 and degree 3 finite elements. If the user wants to change to
+ // another degree, they may need to adjust these numbers. For block smoothers,
+ // this parameter has a more straightforward interpretation, namely that for
+ // additive methods in 2D, a DoF can have a repeated contribution from up to 4
+ // cells, and therefore we must relax these methods by 0.25 to compensate. This is
+ // not an issue for multiplicative methods as each cell inverse application
+ // carries new information to all its DoFs.
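+ // As a sketch for the additive block case (the actual set-up happens
+ // in create_smoother(); RelaxationBlock's AdditionalData carries this
+ // parameter):
+ // @code
+ //   smoother_data[level].relaxation = 0.25;
+ // @endcode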
+
+ // Finally, as mentioned above, the point smoothers only operate on DoFs, and
// the block smoothers on cells, so only the block smoothers need to be given
// information regarding cell orderings. DoF ordering for point smoothers has
// already been taken care of in setup_system().
// @sect4{<code>AdvectionProblem::solve</code>}
// Before we can solve the system, we must first set up the multigrid
- // preconditioner. This is requires the setup of the transfer between levels,
+ // preconditioner. This requires the setup of the transfer between levels,
// the coarse matrix solver, and the smoother. This setup is almost
// identical to that of step-16, the main difference being the various smoothers
// defined above and the fact that we need different interface edge matrices
// The last thing to note is that since our problem is non-symmetric, we must
// use an appropriate Krylov subspace method. We choose here to
- // use GMRES since it offers the guarentee of residual reduction in each
+ // use GMRES since it offers the guarantee of residual reduction in each
// iteration. The major disadvantage to GMRES is that, for each iteration, we
// must store an additional temporary vector as well as compute an additional
- // scalar product. However, the goal of this tutorial is to have very low
- // iteration counts by using a powerful GMG preconditioner, so this should not
- // be a factor. If the user is interested, another sutaible method offered in
- // deal.II would be BiCGStab.
+ // scalar product. This requirement is relaxed by using the restarted GMRES
+ // method, which caps the number of vectors we are required to store
+ // at any one time (here we restart after 50 temporary vectors, or 48
+ // iterations). This has the disadvantage that we lose the information
+ // gathered throughout the iteration, and therefore we could see slower
+ // convergence. However, the goal of this tutorial is to have very low
+ // iteration counts by using a powerful GMG preconditioner, so we have picked
+ // the restart length such that all of the results shown below converge prior
+ // to restarting, and thus we effectively have a standard GMRES method. If
+ // the user is interested, another suitable method offered in deal.II would
+ // be BiCGStab.
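+ // As a sketch, the restart length is the number of temporary vectors
+ // handed to the solver (illustrative, matching the 50 vectors
+ // mentioned above):
+ // @code
+ //   SolverGMRES<Vector<double>>::AdditionalData gmres_data(50);
+ //   SolverGMRES<Vector<double>> solver(solver_control, gmres_data);
+ // @endcode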
template <int dim>
void AdvectionProblem<dim>::solve()