+<i>This program was contributed by Ryan Grove and Timo Heister.</i>
+
<a name="Intro"></a>
<h1>Introduction</h1>
The purpose of this tutorial is to create an efficient linear solver
for the Stokes equation and compare it to alternative
approaches. Using FGMRES with geometric multigrid as a preconditioner
-for the velocity block, we see that the linear solvers used in Step-22
+for the velocity block, we see that the linear solvers used in step-22
cannot keep up since multigrid is the only way to get $O(n)$ solve
time. Using the Timer class, we collect some statistics to compare
setup times, solve times, and number of iterations. We also compute
errors to make sure that what we have implemented is correct.
Let $u \in H_0^1 = \{ u \in H^1(\Omega), u|_{\partial \Omega} = 0 \}$
-and $p \in L_*^2 = \{ p_f \in L^2(\Omega), \int_\Omega p_f = 0
+and $p \in L_*^2 = \{ p \in L^2(\Omega), \int_\Omega p = 0
\}$. The Stokes equations read as follows in non-dimensionalized form:
-@f{eqnarray*} - 2 \text{div} \frac {1}{2} \left[ (\nabla \textbf{u}) +
-(\nabla \textbf{u})^T\right] + \nabla p & =& f \\ - \nabla \cdot u &=&
-0 @f}
+@f{eqnarray*}
+ - 2 \text{div} \frac {1}{2} \left[ (\nabla \textbf{u})
+ + (\nabla \textbf{u})^T\right] + \nabla p & =& f \\
+ - \nabla \cdot u &=& 0
+@f}
Note that we are using the deformation tensor instead of $\Delta u$ (a
detailed description of the difference between the two can be found in
-Step-22, but in summary, the deformation tensor is more physical as
+step-22, but in summary, the deformation tensor is more physical as
well as more expensive).
<h3> Linear Solver and Preconditioning Issues </h3>
@f{eqnarray*}
\left(\begin{array}{cc} A & B^T \\ B & 0
\end{array}\right) \left(\begin{array}{c} U \\ P \end{array}\right) =
-\left(\begin{array}{c} F \\ 0 \end{array}\right), @f}
+\left(\begin{array}{c} F \\ 0 \end{array}\right).
+@f}
-Our goal is to compare several solver approaches. In contrast to the
-way in which step-22 solves the Stokes equation, we instead attack the
-block system at once using a direct solver or FMGRES with an efficient
+Our goal is to compare several solution approaches. While step-22
+solves the linear system using a "Schur complement approach" in two
+separate steps, we instead attack the
+block system at once using FGMRES with an efficient
preconditioner. The idea is as follows: if we find a block
preconditioner $P$ such that the matrix
is simple, then an iterative solver with that preconditioner will
converge in a few iterations. Notice that we are doing right
-preconditioning for this. Using the Schur complement $S=BA^{-1}B^T$,
+preconditioning here. Using the Schur complement $S=BA^{-1}B^T$,
we find that
@f{eqnarray*}
-P^{-1} = \left(\begin{array}{cc} \widetilde{A} & B^T \\ 0 &
- \widetilde{S} \end{array}\right)^{-1} @f}
+P^{-1} = \left(\begin{array}{cc} A & B^T \\ 0 &
+ S \end{array}\right)^{-1}
+@f}
-is a good choice, where $\widetilde{A}$ is an approximation of $A$, $\widetilde{S}$ is an approximation of $S$, and
+is a good choice. Letting $\widetilde{A^{-1}}$ be an approximation of $A^{-1}$
+and $\widetilde{S^{-1}}$ an approximation of $S^{-1}$, we see that
@f{eqnarray*}
-P =
+P^{-1} =
\left(\begin{array}{cc} A^{-1} & 0 \\ 0 & I \end{array}\right)
\left(\begin{array}{cc} I & B^T \\ 0 & -I \end{array}\right)
-\left(\begin{array}{cc} I & 0 \\ 0 & S^{-1} \end{array}\right). @f}
+\left(\begin{array}{cc} I & 0 \\ 0 & S^{-1} \end{array}\right)
+\approx
+\left(\begin{array}{cc} \widetilde{A^{-1}} & 0 \\ 0 & I \end{array}\right)
+\left(\begin{array}{cc} I & B^T \\ 0 & -I \end{array}\right)
+\left(\begin{array}{cc} I & 0 \\ 0 & \widetilde{S^{-1}} \end{array}\right).
+ @f}
Since $P$ is only meant to serve as a preconditioner, we shall use
-approximations to the inverse of the Schur complement $S$ and the
-matrix $A$. Therefore, in the above equations, $-M_p=\widetilde{S} \approx
-S$, where $M_p$ is the pressure mass matrix and is solved by using CG
-+ ILU, and $\widetilde{A^{-1}}$ is an approximation of $A^{-1}$ obtained by one of
+the approximations on the right in the equation above.
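+
+To see why an iterative solver then needs only a few iterations, it helps to
+look at the exact factorization (that is, before replacing $A^{-1}$ and
+$S^{-1}$ by their approximations): a short computation shows that right
+preconditioning gives
+@f{eqnarray*}
+\left(\begin{array}{cc} A & B^T \\ B & 0 \end{array}\right)
+\left(\begin{array}{cc} A^{-1} & 0 \\ 0 & I \end{array}\right)
+\left(\begin{array}{cc} I & B^T \\ 0 & -I \end{array}\right)
+\left(\begin{array}{cc} I & 0 \\ 0 & S^{-1} \end{array}\right)
+=
+\left(\begin{array}{cc} I & 0 \\ B A^{-1} & I \end{array}\right),
+@f}
+i.e., the identity plus a nilpotent off-diagonal block, for which GMRES
+converges in at most two iterations in exact arithmetic. The quality of the
+approximations $\widetilde{A^{-1}}$ and $\widetilde{S^{-1}}$ then determines
+how close we get to this ideal situation.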
+
+As discussed in step-22, $-M_p^{-1}=\widetilde{S^{-1}} \approx
+S^{-1}$, where $M_p$ is the pressure mass matrix. Solves with $M_p$ are done
+approximately with CG preconditioned by ILU, and $\widetilde{A^{-1}}$ is obtained by one of
multiple methods: CG with ILU as preconditioner, just using ILU, CG with GMG (Geometric
-Multigrid as described in step-16) as a precondtioner, or just performing a few V-cycles
-of GMG. The inclusion of CG is more expensive, in general.
+Multigrid as described in step-16) as a preconditioner, or just performing a single V-cycle
+of GMG.
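+
+To make the structure of this preconditioner concrete, the following is a
+sketch of how its application to a block vector could be written in deal.II.
+It is meant for illustration only: the names
+<code>pressure_mass_matrix</code>, <code>ilu_of_pressure_mass</code>,
+<code>a_preconditioner</code>, and <code>use_expensive</code> are
+placeholders and not necessarily the identifiers used in the program, which
+wraps these steps in a preconditioner class discussed below.
+
+@code
+#include <deal.II/lac/block_sparse_matrix.h>
+#include <deal.II/lac/block_vector.h>
+#include <deal.II/lac/solver_cg.h>
+#include <deal.II/lac/solver_control.h>
+#include <deal.II/lac/sparse_ilu.h>
+
+using namespace dealii;
+
+// Apply the block preconditioner from the factorization above to
+// src=(src_u,src_p): an approximate Schur complement solve with the pressure
+// mass matrix, the coupling through B^T, and an approximate solve with A.
+template <class APreconditionerType>
+void apply_block_preconditioner(
+  const BlockSparseMatrix<double> &system_matrix,
+  const SparseMatrix<double>      &pressure_mass_matrix,
+  const SparseILU<double>         &ilu_of_pressure_mass,
+  const APreconditionerType       &a_preconditioner,
+  const bool                       use_expensive,
+  const BlockVector<double>       &src,
+  BlockVector<double>             &dst)
+{
+  // Pressure block: CG on the pressure mass matrix, preconditioned by ILU,
+  // realizes the approximate Schur complement solve. (The overall sign
+  // depends on how the blocks are assembled.)
+  {
+    SolverControl            control(1000, 1e-6 * src.block(1).l2_norm());
+    SolverCG<Vector<double>> cg(control);
+    dst.block(1) = 0;
+    cg.solve(pressure_mass_matrix,
+             dst.block(1),
+             src.block(1),
+             ilu_of_pressure_mass);
+  }
+
+  // Velocity block: tmp = src_u - B^T dst_p, ...
+  Vector<double> tmp(src.block(0));
+  system_matrix.block(0, 1).vmult(tmp, dst.block(1));
+  tmp.sadd(-1.0, src.block(0));
+
+  // ... followed by an approximate solve with A: either an inner CG solve
+  // with the chosen preconditioner (expensive), or a single application of
+  // ILU or of one multigrid V-cycle (cheap).
+  if (use_expensive)
+    {
+      SolverControl            control(10000, 1e-4 * tmp.l2_norm());
+      SolverCG<Vector<double>> cg(control);
+      dst.block(0) = 0;
+      cg.solve(system_matrix.block(0, 0), dst.block(0), tmp, a_preconditioner);
+    }
+  else
+    a_preconditioner.vmult(dst.block(0), tmp);
+}
+@endcode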
As a comparison, instead of FGMRES, we also use the direct solver
-UMFPACK to compare our results to. If you want to use UMFPACK as a
-solver, it is important to set the first pressure node equal to zero
-to avoid the system being singular (recall that the Stokes equation
-itself only determines the pressure up to a constant when using only
-Dirichlet boundary conditions). If we do not do this, then the direct
-solver will produce an error message whereas the iterative solvers
-quietly solve it anyway.
+UMFPACK on the whole system. If you want to use
+a direct solver (like UMFPACK), the system needs to be invertible. To avoid
+the one-dimensional null space given by the constant pressures, we fix the first pressure unknown
+to zero. This is not necessary for the iterative solvers.
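+
+A minimal sketch of how one such constraint could be added with a recent
+deal.II version is shown below; it assumes a Taylor-Hood FESystem in which
+the pressure is component <code>dim</code>, and the function name is made up
+for illustration. The constraint has to be added before the constraints
+object is closed.
+
+@code
+#include <deal.II/base/index_set.h>
+#include <deal.II/dofs/dof_handler.h>
+#include <deal.II/dofs/dof_tools.h>
+#include <deal.II/fe/fe_values_extractors.h>
+#include <deal.II/lac/affine_constraints.h>
+
+using namespace dealii;
+
+// Constrain one pressure degree of freedom to zero so that the saddle point
+// system becomes invertible for the direct solver.
+template <int dim>
+void fix_first_pressure_dof(const DoFHandler<dim>     &dof_handler,
+                            AffineConstraints<double> &constraints)
+{
+  const FEValuesExtractors::Scalar pressure(dim);
+  const IndexSet pressure_dofs =
+    DoFTools::extract_dofs(dof_handler,
+                           dof_handler.get_fe().component_mask(pressure));
+
+  // add_line() without an inhomogeneity constrains the DoF to zero.
+  constraints.add_line(pressure_dofs.nth_index_in_set(0));
+}
+@endcode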
<h3> Reference Solution </h3>
-The domain, right hand side, and boundary conditions we implement
-are chosen for their simplicity and the fact that they make it
-possible for us to compute errors using a reference solution. We
-apply Dirichlet boundary condtions for the whole velocity on the whole
-boundary of the domain Ω=[0,1]×[0,1]×[0,1]. To enforce the boundary
-conditions we can just use our reference solution that we will now
-define.
+The test problem is a "Manufactured Solution" (see step-7 for details).
+We apply Dirichlet boundary conditions for the velocity on the whole
+boundary of the domain $\Omega=[0,1]\times[0,1]\times[0,1]$.
+To enforce the boundary conditions we can just use our reference solution that
+we will now define.
Let $u=(u_1,u_2,u_3)=(2\sin (\pi x), - \pi y \cos (\pi x),- \pi z \cos
(\pi x))$ and $p = \sin (\pi x)\cos (\pi y)\sin (\pi z)$.
value function, but vector_value, value_list, etc. Different things
you use in your code will require one of these particular
functions. This can be confusing at first, but luckily the only thing
-you actually need to implement is value. The other ones have default
-implementations inside deal.II and will be called on their own as long
-as you implement value correctly.
+you actually have to implement is @p value. The other ones have default
+implementations inside deal.II and will call your implementation of @p value
+by default.
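+
+As an illustration, a reference solution class for the 3d solution given
+above could look as follows (the class name is chosen for this sketch and is
+not necessarily the one used in the program):
+
+@code
+#include <deal.II/base/function.h>
+#include <deal.II/base/numbers.h>
+#include <deal.II/base/point.h>
+
+#include <cmath>
+
+using namespace dealii;
+
+// Exact solution (u_1,u_2,u_3,p) as a single vector-valued function with
+// dim+1 = 4 components. Only value() is overridden; vector_value(),
+// value_list(), etc. fall back to the Function base class, which calls
+// value() component by component.
+class ReferenceSolution : public Function<3>
+{
+public:
+  ReferenceSolution()
+    : Function<3>(3 + 1)
+  {}
+
+  virtual double value(const Point<3>    &p,
+                       const unsigned int component) const override
+  {
+    const double pi = numbers::PI;
+
+    switch (component)
+      {
+        case 0: // u_1
+          return 2.0 * std::sin(pi * p[0]);
+        case 1: // u_2
+          return -pi * p[1] * std::cos(pi * p[0]);
+        case 2: // u_3
+          return -pi * p[2] * std::cos(pi * p[0]);
+        default: // pressure
+          return std::sin(pi * p[0]) * std::cos(pi * p[1]) *
+                 std::sin(pi * p[2]);
+      }
+  }
+};
+@endcode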
Notice that our reference solution fulfills $\nabla \cdot u = 0$. In
addition, the pressure is chosen to have a mean value of zero. For
-the Method of Manufactured Solutions of step-7, we need to find $\bf
+the "Method of Manufactured Solutions" of step-7, we need to find $\bf
f$ such that:
@f{align*}
finite element system for the velocity. Since this is now part of the
entire system, it is no longer easy to access. The reason for this is
that there is currently no way in deal.II to ask, "May I have just
-part of a DoF handler?" So in order to answer this request for our
-needs, we have to create a new DoF handler for just the velocites and
-assure that it has the same ordering as the DoF Handler for the entire
-system so that you can copy over one to the other.
+part of a DoFHandler?" So in order to answer this request for our
+needs, we have to create a new DoFHandler for just the velocities and
+assure that it has the same ordering as the DoFHandler for the entire
+system so that we can copy over solution vectors element by element.
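+
+In code, the idea could be sketched as follows (variable names are
+illustrative, not necessarily those of the program):
+
+@code
+#include <deal.II/dofs/dof_handler.h>
+#include <deal.II/dofs/dof_renumbering.h>
+#include <deal.II/fe/fe_q.h>
+#include <deal.II/fe/fe_system.h>
+#include <deal.II/grid/tria.h>
+
+using namespace dealii;
+
+// Two DoFHandlers on the same triangulation: one for the full Taylor-Hood
+// system and one that only knows about the velocity element. After the
+// block-wise renumbering of the full system, all velocity unknowns come
+// first, so the first velocity_dof_handler.n_dofs() entries of a full
+// solution vector can be copied entry by entry into a vector on the
+// velocity-only DoFHandler.
+template <int dim>
+void setup_two_dof_handlers(Triangulation<dim> &triangulation,
+                            const unsigned int  pressure_degree)
+{
+  const FESystem<dim> velocity_fe(FE_Q<dim>(pressure_degree + 1), dim);
+  const FESystem<dim> fe(velocity_fe, 1, FE_Q<dim>(pressure_degree), 1);
+
+  DoFHandler<dim> dof_handler(triangulation);          // velocity + pressure
+  DoFHandler<dim> velocity_dof_handler(triangulation); // velocity only
+
+  dof_handler.distribute_dofs(fe);
+  DoFRenumbering::block_wise(dof_handler); // velocities before pressure
+
+  velocity_dof_handler.distribute_dofs(velocity_fe);
+}
+@endcode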
<h3> Differences from Step-22 </h3>
The main difference between
-step-55 and step-22 is that we use block solvers instead of the Schur
+step-56 and step-22 is that we use block solvers instead of the Schur
complement approach used in step-22. Details of this approach can be
-found under the Block Schur complement preconditioner subsection of
-the Possible Extensions section of step-22. For the preconditioner of
+found under the "Block Schur complement preconditioner" subsection of
+the "Possible Extensions" section of step-22. For the preconditioner of
the velocity block, we borrow a class from ASPECT called
-BlockSchurPreconditioner that has the option to solve for the inverse
+@p BlockSchurPreconditioner that has the option to solve for the inverse
of $A$ or just apply one preconditioner sweep for it instead, which
provides us with an expensive and cheap approach, respectively.
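+
+The interface of such a class could look like the simplified sketch below
+(member and constructor argument names are illustrative; the class actually
+used in the program is adapted from ASPECT and differs in its details). The
+boolean flag selects between the expensive variant, an inner CG solve for
+$A$, and the cheap one, a single sweep of the given preconditioner for $A$;
+the application in <code>vmult()</code> follows the steps sketched earlier in
+this introduction.
+
+@code
+#include <deal.II/base/subscriptor.h>
+#include <deal.II/lac/block_sparse_matrix.h>
+#include <deal.II/lac/block_vector.h>
+
+using namespace dealii;
+
+template <class PreconditionerAType, class PreconditionerSType>
+class BlockSchurPreconditioner : public Subscriptor
+{
+public:
+  BlockSchurPreconditioner(const BlockSparseMatrix<double> &system_matrix,
+                           const SparseMatrix<double> &pressure_mass_matrix,
+                           const PreconditionerAType  &preconditioner_A,
+                           const PreconditionerSType  &preconditioner_S,
+                           const bool                  do_solve_A)
+    : system_matrix(system_matrix)
+    , pressure_mass_matrix(pressure_mass_matrix)
+    , preconditioner_A(preconditioner_A)
+    , preconditioner_S(preconditioner_S)
+    , do_solve_A(do_solve_A)
+  {}
+
+  // Apply the block preconditioner: approximate Schur complement solve,
+  // B^T coupling, then either an inner CG solve for A (do_solve_A == true)
+  // or a single application of preconditioner_A.
+  void vmult(BlockVector<double> &dst, const BlockVector<double> &src) const;
+
+private:
+  const BlockSparseMatrix<double> &system_matrix;
+  const SparseMatrix<double>      &pressure_mass_matrix;
+  const PreconditionerAType       &preconditioner_A;
+  const PreconditionerSType       &preconditioner_S;
+  const bool                       do_solve_A;
+};
+@endcode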
<h3> Errors </h3>
+We first run the code and confirm that the manufactured solution converges
+with the correct rates as predicted by the error analysis of mixed finite
+element problems. Given sufficiently smooth exact solutions $u$ and $p$,
+the errors of the Taylor-Hood element $Q_k \times Q_{k-1}$ should be
+
+@f[
+\| u -u_h \|_0 + h ( \| u- u_h\|_1 + \|p - p_h \|_0)
+\leq C h^{k+1} ( \|u \|_{k+1} + \| p \|_k )
+@f]
+
+see, for example, Ern/Guermond "Theory and Practice of Finite Elements", Section
+4.2.5, p. 195. For $k=2$, i.e., the $Q_2 \times Q_1$ element, this predicts third
+order convergence for the velocity error in the $L_2$ norm and second order for the
+velocity error in the $H^1$ seminorm and the pressure error in the $L_2$ norm.
+This is indeed what we observe:
+
<table align="center" border="1">
<tr>
<th> </th>
<h3> Timing Results </h3>
+Here is a table summarizing the solver iterations and timings done for the
+various solver combinations implemented in the code:
+
<table align="center" border="1">
<tr>
<th> </th>
As can be seen from the table,
-1. UMFPACK uses large amounts of memory, especially in 3d. Also,
-UMFPACK timings do not scale favorably with problem size.
+1. UMFPACK uses large amounts of memory, especially in 3d. Also, UMFPACK
+timings do not scale favorably with problem size.
+
+2. Because we are using inner solvers for $A$ and $S$, ILU and GMG require the
+same number of outer iterations.
-2. The number of iterations for $A$ increase for ILU with refinement
-leading to worse then linear scaling in solve time. In contrast, the
-number of inner iterations for $A$ stay constant with GMG leading to
-nearly perfect scaling in solve time.
+3. The number of inner iterations for $A$ increases for ILU with refinement, leading
+to worse than linear scaling in solve time. In contrast, the number of inner
+iterations for $A$ stays constant with GMG, leading to nearly perfect scaling in
+solve time.
-3. GMG needs slightly more memory than ILU.
+4. GMG needs slightly more memory than ILU to store the level and interface
+matrices.
<h3> Possible extensions </h3>
-<h4> Use expensive or cheap preconditioner </h4>
-Currently, use_expensive is set to true, but if you set it to false, then you will not be using CG in your calculation of $\widetilde{A^{-1}}$ which is an approximation of $A^{-1}$. Depending on if you chose to use ILU or GMG, you would just be using ILU or a few v-cycles of GMG, respectively, instead of using them as a preconditioner to CG.
+<h4> Check higher order discretizations </h4>
+
+Experiment with higher order stable FE pairs and check that you observe the
+correct convergence rates.
+
+<h4> Compare with cheap preconditioner </h4>
+
+Currently, the boolean @p use_expensive in solve() is set to true. When set to false,
+$\widetilde{A^{-1}}$ will be a single preconditioner application instead of an inner
+CG with the GMG or ILU preconditioner, respectively.
+
+Notice that the number of FGMRES iterations stays constant under refinement if
+you use GMG (so $A^{-1}$ is approximated by a V-cycle). This means that the
+multigrid preconditioner is optimal and independent of $h$.
+