From: Wolfgang Bangerth
Date: Thu, 2 Jun 2016 14:57:41 +0000 (-0500)
Subject: Minor updates to step-56.
X-Git-Tag: v8.5.0-rc1~988^2
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=dcbef1033b8126a9459ac24c3133ddc311fb4e13;p=dealii.git

Minor updates to step-56.
---

diff --git a/examples/step-56/doc/intro.dox b/examples/step-56/doc/intro.dox
index 1c16afcd36..31e83e19c5 100644
--- a/examples/step-56/doc/intro.dox
+++ b/examples/step-56/doc/intro.dox
@@ -56,7 +56,7 @@ preconditioner $P$ such that the matrix
 is simple, then an iterative solver with that preconditioner will
 converge in a few iterations. Notice that we are doing right
-preconditioning for here. Using the Schur complement $S=BA^{-1}B^T$,
+preconditioning here. Using the Schur complement $S=BA^{-1}B^T$,
 we find that

 @f{eqnarray*}
@@ -70,7 +70,7 @@ and $\widetilde{S^{-1}}$ of $S^{-1}$, we see
 P^{-1} =
 \left(\begin{array}{cc} A^{-1} & 0 \\ 0 & I \end{array}\right)
 \left(\begin{array}{cc} I & B^T \\ 0 & -I \end{array}\right)
-\left(\begin{array}{cc} I & 0 \\ 0 & S^{-1} \end{array}\right).
+\left(\begin{array}{cc} I & 0 \\ 0 & S^{-1} \end{array}\right)
 \approx
 \left(\begin{array}{cc} \widetilde{A^{-1}} & 0 \\ 0 & I \end{array}\right)
 \left(\begin{array}{cc} I & B^T \\ 0 & -I \end{array}\right)
@@ -80,10 +80,12 @@ P^{-1} =
 Since $P$ is aimed to be a preconditioner only, we shall use
 the approximations on the right in the equation above.

-As discussed in step-22, $-M_p^{-1}=\widetilde{S^{-1}} \approx
+As discussed in step-22, $-M_p^{-1}=:\widetilde{S^{-1}} \approx
 S^{-1}$, where $M_p$ is the pressure mass matrix and is solved approximately by using CG
-with ILU, and $\widetilde{A^{-1}}$ is obtained by one of
-multiple methods: CG with ILU as preconditioner, just using ILU, CG with GMG (Geometric
+with ILU as a preconditioner, and $\widetilde{A^{-1}}$ is obtained by one of
+multiple methods: solving a linear system with CG and ILU as
+preconditioner, just using one application of an ILU, solving a linear
+system with CG and GMG (Geometric
 Multigrid as described in step-16) as a preconditioner, or just
 performing a single V-cycle of GMG.
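
As an illustration of how the right preconditioner $P^{-1}$ just described acts on a
block vector, here is a minimal sketch in the spirit of the @p BlockSchurPreconditioner
class mentioned further down in this patch. It is not the literal step-56 code: the class
name, the members (@p system_matrix, @p pressure_mass_matrix, @p A_preconditioner,
@p Mp_preconditioner), and the CG tolerance are illustrative placeholders.

@code
#include <deal.II/lac/block_sparse_matrix.h>
#include <deal.II/lac/block_vector.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

template <class PreconditionerA, class PreconditionerMp>
class BlockSchurPreconditionerSketch
{
public:
  BlockSchurPreconditionerSketch(const BlockSparseMatrix<double> &system_matrix,
                                 const SparseMatrix<double> &pressure_mass_matrix,
                                 const PreconditionerA      &A_preconditioner,
                                 const PreconditionerMp     &Mp_preconditioner)
    : system_matrix(system_matrix)
    , pressure_mass_matrix(pressure_mass_matrix)
    , A_preconditioner(A_preconditioner)
    , Mp_preconditioner(Mp_preconditioner)
  {}

  void vmult(BlockVector<double> &dst, const BlockVector<double> &src) const
  {
    // Pressure part: approximate the action of the Schur complement inverse
    // by a CG solve with the pressure mass matrix, then flip the sign
    // (the -M_p^{-1} of the text).
    {
      SolverControl            control(1000, 1e-6 * src.block(1).l2_norm());
      SolverCG<Vector<double>> cg(control);
      dst.block(1) = 0;
      cg.solve(pressure_mass_matrix, dst.block(1), src.block(1), Mp_preconditioner);
      dst.block(1) *= -1.0;
    }

    // Apply the upper-right block: tmp = src_0 - B^T dst_1.
    Vector<double> tmp(src.block(0).size());
    system_matrix.block(0, 1).vmult(tmp, dst.block(1));
    tmp *= -1.0;
    tmp += src.block(0);

    // Velocity part: here the "cheap" choice of one application of the
    // preconditioner for A; an inner CG solve would be the "expensive" one.
    A_preconditioner.vmult(dst.block(0), tmp);
  }

private:
  const BlockSparseMatrix<double> &system_matrix;
  const SparseMatrix<double>      &pressure_mass_matrix;
  const PreconditionerA           &A_preconditioner;
  const PreconditionerMp          &Mp_preconditioner;
};
@endcode

The trade-off between the cheap and the expensive choice for $\widetilde{A^{-1}}$ is
discussed in the results.dox changes below.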

@@ -93,24 +95,27 @@ a direct solver (like UMFPACK), the system needs to be invertible. To avoid
 the one dimensional null space given by the constant pressures, we fix the first pressure unknown
 to zero. This is not necessary for the iterative solvers.
+

 <h3>Reference Solution</h3>

-The test problem is a "Manufactured Solution" (see step-7 for details).
+The test problem is a "Manufactured Solution" (see step-7 for
+details), and we choose $u=(u_1,u_2,u_3)=(2\sin (\pi x), - \pi y \cos
+(\pi x),- \pi z \cos (\pi x))$ and $p = \sin (\pi x)\cos (\pi y)\sin
+(\pi z)$.
 We apply Dirichlet boundary conditions for the velocity on the whole
 boundary of the domain $\Omega=[0,1]\times[0,1]\times[0,1]$.
-To enforce the boundary conditions we can just use our reference solution that
-we will now define.
-
-Let $u=(u_1,u_2,u_3)=(2\sin (\pi x), - \pi y \cos (\pi x),- \pi z \cos
-(\pi x))$ and $p = \sin (\pi x)\cos (\pi y)\sin (\pi z)$.
+To enforce the boundary conditions we can just use our reference solution.

 If you look up in the deal.II manual what is needed to create a class
-inherited from Function@<dim@>, you will find not only a
-value function, but vector_value, value_list, etc. Different things
-you use in your code will require one of these particular
+derived from Function@<dim@>, you will find that this
+class has numerous @p virtual functions, including
+Function::value(), Function::vector_value(), Function::value_list(),
+etc., all of which can be overloaded. Different parts of deal.II
+will require different ones of these particular
 functions. This can be confusing at first, but luckily the only thing
-you actually have to implement is @p value. The other ones have default
-implementations inside deal.II and will call your implementation of @p value
+you actually have to implement is @p value(). The other virtual
+functions in the Function class have default
+implementations inside that will call your implementation of @p value
 by default.

 Notice that our reference solution fulfills $\nabla \cdot u = 0$. In
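
To illustrate the point about Function@<dim@> made in this hunk: a reference-solution
class for the $u$ and $p$ given above only needs to override value(). The sketch below
uses an illustrative class name and is written for the 3d case of this program; the
actual program may implement further virtual functions (for instance gradients for the
error computation).

@code
#include <deal.II/base/function.h>
#include <deal.II/base/numbers.h>
#include <deal.II/base/point.h>

#include <cmath>

using namespace dealii;

// dim+1 components: velocities 0..dim-1, then the pressure.
template <int dim>
class ReferenceSolutionSketch : public Function<dim>
{
public:
  ReferenceSolutionSketch()
    : Function<dim>(dim + 1)
  {}

  virtual double value(const Point<dim>  &p,
                       const unsigned int component = 0) const override
  {
    const double x = p[0];
    const double y = p[1];
    const double z = (dim == 3 ? p[2] : 0.0);

    if (component == 0)
      return 2.0 * std::sin(numbers::PI * x);
    else if (component == 1)
      return -numbers::PI * y * std::cos(numbers::PI * x);
    else if (component == 2 && dim == 3)
      return -numbers::PI * z * std::cos(numbers::PI * x);
    else // last component: the pressure
      return std::sin(numbers::PI * x) * std::cos(numbers::PI * y) *
             std::sin(numbers::PI * z);
  }
};
@endcode

If some piece of the library needs one of the other virtual functions, say
vector_value(), the default implementation in Function simply calls value() once per
component, exactly as described above.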

@@ -138,25 +143,41 @@ we need to post process the solution after solving. To do this we use
 the VectorTools::compute_mean_value() function to compute the mean value
 of the pressure to subtract it from the pressure.
+

 <h3>DoF Handlers</h3>
-
-Geometric multigrid needs to know about the
-finite element system for the velocity. Since this is now part of the
-entire system, it is no longer easy to access. The reason for this is
-that there is currently no way in deal.II to ask, "May I have just
-part of a DoFHandler?" So in order to answer this request for our
-needs, we have to create a new DoFHandler for just the velocities and
-assure that it has the same ordering as the DoFHandler for the entire
-system so that we can copy over solution vectors element by element.
+
+The way we implement geometric multigrid here only executes it on the
+velocity variables (i.e., the $A$ matrix described above) but not the
+pressure. One could implement this in different ways, including one in
+which one considers all coarse grid operations as acting on $2\times
+2$ block systems where we only consider the top left
+block. Alternatively, we can implement things by really only
+considering a linear system on the velocity part of the overall finite
+element discretization. The latter is the way we want to use here.
+
+To implement this, one would need to be able to ask questions such as
+"May I have just part of a DoFHandler?". This is not possible at the
+time when this program was written, so in order to answer this request
+for our needs, we simply create a separate, second DoFHandler for just the
+velocities. We then build linear systems for the multigrid
+preconditioner based on only this second DoFHandler, and simply
+transfer the first block of (overall) vectors into corresponding
+vectors for the entire second DoFHandler. To make this work, we have
+to assure that the order in which the (velocity) degrees of freedom are
+ordered in the two DoFHandler objects is the same. This is in fact the
+case by first distributing degrees of freedom on both, and then using
+the same sequence of DoFRenumbering operations on both.
+
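
The following sketches the two-DoFHandler setup just described. The element degrees, the
helper function, and the specific DoFRenumbering call are illustrative; what it is meant
to show is the rule stated in the text, namely that both DoFHandler objects get their
degrees of freedom distributed and then renumbered by the same sequence of operations,
so that block 0 of the global vectors can be copied element by element into a vector
defined on the velocity-only DoFHandler.

@code
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/grid/tria.h>

using namespace dealii;

template <int dim>
void setup_dof_handlers(const Triangulation<dim> &triangulation)
{
  // The velocity element appears as a single block inside the Taylor-Hood pair:
  const FESystem<dim> velocity_fe(FE_Q<dim>(2), dim);
  const FESystem<dim> fe(velocity_fe, 1, FE_Q<dim>(1), 1);

  DoFHandler<dim> dof_handler(triangulation);          // full (u,p) system
  DoFHandler<dim> velocity_dof_handler(triangulation); // velocity only

  dof_handler.distribute_dofs(fe);
  velocity_dof_handler.distribute_dofs(velocity_fe);

  // Rule from the text: whatever renumbering is used has to be applied to both
  // handlers in the same way. Cuthill_McKee is shown here only as an example of
  // such a call; the actual program has its own sequence of renumbering steps
  // (which in particular also groups all velocities in front of the pressures
  // in dof_handler, so that they form block 0 of the system vectors).
  DoFRenumbering::Cuthill_McKee(dof_handler);
  DoFRenumbering::Cuthill_McKee(velocity_dof_handler);

  // A multigrid preconditioner built on velocity_dof_handler would additionally
  // need level degrees of freedom; that part is omitted here.
}
@endcode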

 <h3>Differences from Step-22</h3>

-The main difference between
-step-56 and step-22 is that we use block solvers instead of the Schur
-Complement approach used in step-22. Details of this approach can be
-found under the "Block Schur complement preconditioner" subsection of
-the "Possible Extensions" section of step-22. For the preconditioner of
-the velocity block, we borrow a class from ASPECT called
-@p BlockSchurPreconditioner that has the option to solve for the inverse
-of $A$ or just apply one preconditioner sweep for it instead, which
-provides us with an expensive and cheap approach, respectively.
+The main difference between step-56 and step-22 is that we use block
+solvers instead of the Schur Complement approach used in
+step-22. Details of this approach can be found under the "Block Schur
+complement preconditioner" subsection of the "Possible Extensions"
+section of step-22. For the preconditioner of the velocity block, we
+borrow a class from ASPECT
+called @p BlockSchurPreconditioner that has the option to solve for
+the inverse of $A$ or just apply one preconditioner sweep for it
+instead, which provides us with an expensive and cheap approach,
+respectively.
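
The outer Krylov method that these pieces plug into is FGMRES, which tolerates a
preconditioner that changes from one iteration to the next (as it does here whenever
inner iterative solves are used). A sketch of that outer call follows; the object
names and the tolerance are placeholders rather than the actual step-56 code.

@code
#include <deal.II/lac/block_sparse_matrix.h>
#include <deal.II/lac/block_vector.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/solver_gmres.h>

using namespace dealii;

// Right-preconditioned FGMRES on the coupled (u,p) system.
template <class BlockPreconditioner>
void solve_block_system(const BlockSparseMatrix<double> &system_matrix,
                        const BlockVector<double>       &system_rhs,
                        BlockVector<double>             &solution,
                        const BlockPreconditioner       &preconditioner)
{
  SolverControl solver_control(10000, 1e-10 * system_rhs.l2_norm());
  SolverFGMRES<BlockVector<double>> solver(solver_control);
  solver.solve(system_matrix, solution, system_rhs, preconditioner);
}
@endcode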

diff --git a/examples/step-56/doc/results.dox b/examples/step-56/doc/results.dox
index fc9c431a67..b51f0c92bb 100644
--- a/examples/step-56/doc/results.dox
+++ b/examples/step-56/doc/results.dox
@@ -2,10 +2,10 @@

 <h3>Errors</h3>

-We first run the code and confirm that the manufactured solution converges
+We first run the code and confirm that the finite element solution converges
 with the correct rates as predicted by the error analysis of mixed finite
 element problems. Given sufficiently smooth exact solutions $u$ and $p$,
-the errors of the Taylor-Hood element Q_k-Q_{k-1} should be
+the errors of the Taylor-Hood element $Q_k \times Q_{k-1}$ should be

 @f[
 \| u -u_h \|_0 + h ( \| u- u_h\|_1 + \|p - p_h \|_0)
@@ -13,17 +13,19 @@ the errors of the Taylor-Hood element Q_k-Q_{k-1} should be
 @f]

 see for example Ern/Guermond "Theory and Practice of Finite Elements", Section
-4.2.5 p195. This is indeed what we observe:
+4.2.5 p195. This is indeed what we observe, using the $Q_2 \times Q_1$
+element as an example (this is what is done in the code, but is easily
+changed in main()):

 <table align="center" class="doxtable">
   <tr>
     <th></th>
     <th>L2 Velocity</th>
-    <th>Rate</th>
+    <th>Reduction</th>
     <th>L2 Pressure</th>
-    <th>Rate</th>
+    <th>Reduction</th>
     <th>H1 Velocity</th>
-    <th>Rate</th>
+    <th>Reduction</th>
   </tr>
   <tr>
     <td colspan="7" align="center">3D, 3 global refinements</td>
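
For the $Q_2 \times Q_1$ pair used in the code (i.e., $k=2$), the estimate above
translates into the convergence orders

@f[
  \| u - u_h \|_0 = \mathcal{O}(h^3), \qquad
  \| u - u_h \|_1 = \mathcal{O}(h^2), \qquad
  \| p - p_h \|_0 = \mathcal{O}(h^2),
@f]

so each global refinement step should reduce the velocity $L_2$ error by roughly a
factor of 8 and the other two errors by roughly a factor of 4.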

@@ -57,8 +59,12 @@ see for example Ern/Guermond "Theory and Practice of Finite Elements", Section

 <h3>Timing Results</h3>

-Here is a table summarizing the solver iterations and timings done for the
-various solver combinations implemented in the code:
+Let us compare the direct solver approach using UMFPACK to the two
+methods in which we choose $\widetilde {A^{-1}}=A^{-1}$ and
+$\widetilde{S^{-1}}=S^{-1}$ by solving linear systems with $A,S$ using
+CG. The preconditioner for CG is then either ILU or GMG.
+The following table summarizes solver iterations, timings, and virtual
+memory (VM) peak usage:

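
Of the three approaches compared here, the UMFPACK variant simply factors the coupled
system once and then applies the factorization, roughly as follows (a sketch with
placeholder names; the single pressure degree of freedom fixed to zero, as described in
the introduction, is assumed to have been dealt with already):

@code
#include <deal.II/lac/block_sparse_matrix.h>
#include <deal.II/lac/block_vector.h>
#include <deal.II/lac/sparse_direct.h>

using namespace dealii;

void solve_with_umfpack(const BlockSparseMatrix<double> &system_matrix,
                        const BlockVector<double>       &system_rhs,
                        BlockVector<double>             &solution)
{
  SparseDirectUMFPACK direct_solver;
  direct_solver.initialize(system_matrix);   // LU factorization: the expensive part
  direct_solver.vmult(solution, system_rhs); // forward/backward substitution
}
@endcode

The observations further down note that the direct solver's timings do not scale
favorably with problem size.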
@@ -172,9 +178,9 @@ timings do not scale favorably with problem size.
 2. Because we are using inner solvers for $A$ and $S$, ILU and GMG require the same number of
 outer iterations.

-3. The number of iterations for $A$ increase for ILU with refinement leading
+3. The number of (inner) iterations for $A$ increases for ILU with refinement, leading
 to worse than linear scaling in solve time. In contrast, the number of inner
-iterations for $A$ stay constant with GMG leading to nearly perfect scaling in
+iterations for $A$ stays constant with GMG leading to nearly perfect scaling in
 solve time.

 4. GMG needs slightly more memory than ILU to store the level and interface

@@ -189,11 +195,17 @@ correct convergence rates.

 <h3>Compare with cheap preconditioner</h3>

-Currently, the boolean use_expensive in solve () is set to true. When set to false,
-$\widetilde{A^{-1}}$ will be a single preconditioner application instead of an inner
-CG with the GMG or ILU preconditioner, respectively.
+The introduction also outlined another option to precondition the
+overall system, namely one in which we do not choose $\widetilde
+{A^{-1}}=A^{-1}$ as in the table above, but in which
+$\widetilde{A^{-1}}$ is only a single preconditioner application with
+GMG or ILU, respectively.

-Notice that the number of FGMRES iterations stays constant under refinement if
-you use GMG (so $A^{-1}$ is approximated by a V-cycle). This means that the
+This is in fact implemented in the code: Currently, the boolean
+use_expensive in solve() is set to @p true. The
+option mentioned above is obtained by setting it to @p false.
+
+What you will find is that the number of FGMRES iterations stays
+constant under refinement if you use GMG this way. This means that the
 Multigrid is optimal and independent of $h$.
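
Concretely, the @p use_expensive flag described above switches between an inner CG
solve with $A$ and a single application of the preconditioner for $A$. A sketch of that
branch is below; the function, its arguments, and the tolerance are illustrative rather
than the actual members of the program.

@code
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

template <class PreconditionerA>
void apply_A_inverse_approximation(const SparseMatrix<double> &A,
                                   const PreconditionerA      &A_preconditioner,
                                   const Vector<double>       &rhs,
                                   Vector<double>             &dst,
                                   const bool                  use_expensive)
{
  if (use_expensive)
    {
      // "Expensive": solve with A using CG, preconditioned by ILU or GMG.
      SolverControl            control(10000, 1e-4 * rhs.l2_norm());
      SolverCG<Vector<double>> cg(control);
      dst = 0;
      cg.solve(A, dst, rhs, A_preconditioner);
    }
  else
    {
      // "Cheap": one application of the preconditioner, i.e. one ILU sweep
      // or one GMG V-cycle.
      A_preconditioner.vmult(dst, rhs);
    }
}
@endcode

With the cheap variant and GMG, every application of $\widetilde{A^{-1}}$ is exactly one
V-cycle, which is the setting in which the FGMRES iteration count quoted above stays
constant under refinement.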