From: Wolfgang Bangerth Date: Mon, 4 Jan 2010 13:10:10 +0000 (+0000) Subject: Delete files specific to step-22. Rename step-22.cc into step-42.cc. X-Git-Tag: v8.0.0~6712 X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=7a4cc186bcce6d0f885a83b3296959c3a074d01d;p=dealii.git Delete files specific to step-22. Rename step-22.cc into step-42.cc. git-svn-id: https://svn.dealii.org/trunk@20277 0785d39b-7218-0410-832d-ea1e28bc413d --- diff --git a/deal.II/examples/step-42/doc/intro.dox b/deal.II/examples/step-42/doc/intro.dox index 24c3d91a7b..c59cfdf609 100644 --- a/deal.II/examples/step-42/doc/intro.dox +++ b/deal.II/examples/step-42/doc/intro.dox @@ -1,783 +1,2 @@ -
This program was contributed by Martin Kronbichler and Wolfgang
Bangerth.

This material is based upon work partly supported by the National
Science Foundation under Award No. EAR-0426271 and The California Institute of
Technology. Any opinions, findings, and conclusions or recommendations
expressed in this publication are those of the author and do not
necessarily reflect the views of the National Science Foundation or of The
California Institute of Technology.

<h1>Introduction</h1>

This program deals with the Stokes system of equations which reads as
follows in non-dimensionalized form:
@f{eqnarray*}
  -\textrm{div}\; \varepsilon(\textbf{u}) + \nabla p &=& \textbf{f},
  \\
  -\textrm{div}\; \textbf{u} &=& 0,
@f}
where $\textbf u$ denotes the velocity of a fluid, $p$ is its
pressure, $\textbf f$ are external forces, and
$\varepsilon(\textbf{u})= \nabla^s{\textbf{u}}= \frac 12 \left[
(\nabla \textbf{u}) + (\nabla \textbf{u})^T\right]$ is the
rank-2 tensor of symmetrized gradients; a component-wise definition
of it is $\varepsilon(\textbf{u})_{ij}=\frac
12\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)$.

The Stokes equations describe the steady-state motion of a
slow-moving, viscous fluid such as honey, rocks in the earth mantle,
or other cases where inertia does not play a significant role. If a
fluid is moving fast enough that inertia forces are significant
compared to viscous friction, the Stokes equations are no longer
valid; taking into account inertia effects then leads to the
nonlinear Navier-Stokes equations. However, in this tutorial program,
we will focus on the simpler Stokes system.

To be well-posed, we will have to add boundary conditions to the
equations. Which boundary conditions are possible here will
become clear once we discuss the weak form of the equations.

The equations covered here fall into the class of vector-valued problems. A
top-level overview of this topic can be found in the @ref vector_valued module.

<h3>Weak form</h3>

The weak form of the equations is obtained by writing it in vector
form as
@f{eqnarray*}
  \left(
    {-\textrm{div}\; \varepsilon(\textbf{u}) + \nabla p}
    \atop
    {-\textrm{div}\; \textbf{u}}
  \right)
  =
  \left(
    {\textbf{f}}
    \atop
    0
  \right),
@f}
forming the dot product from the left with a vector-valued test
function $\phi = \left({\textbf v \atop q}\right)$ and integrating
over the domain $\Omega$, yielding the following set of equations:
@f{eqnarray*}
  (\mathrm v,
   -\textrm{div}\; \varepsilon(\textbf{u}) + \nabla p)_{\Omega}
  -
  (q,\textrm{div}\; \textbf{u})_{\Omega}
  =
  (\textbf{v}, \textbf{f})_\Omega,
@f}
which has to hold for all test functions $\phi = \left({\textbf v
\atop q}\right)$.

In practice, one wants to impose as little regularity on the pressure
variable as possible; consequently, we integrate by parts the second term:
@f{eqnarray*}
  (\mathrm v, -\textrm{div}\; \varepsilon(\textbf{u}))_{\Omega}
  - (\textrm{div}\; \textbf{v}, p)_{\Omega}
  + (\textbf{n}\cdot\textbf{v}, p)_{\partial\Omega}
  -
  (q,\textrm{div}\; \textbf{u})_{\Omega}
  =
  (\textbf{v}, \textbf{f})_\Omega.
@f}
Likewise, we integrate by parts the first term to obtain
@f{eqnarray*}
  (\nabla \mathrm v,\varepsilon(\textbf{u}))_{\Omega}
  -
  (\textbf{n} \otimes \mathrm v,\varepsilon(\textbf{u}))_{\partial\Omega}
  - (\textrm{div}\; \textbf{v}, p)_{\Omega}
  + (\textbf{n}\cdot\textbf{v}, p)_{\partial\Omega}
  -
  (q,\textrm{div}\; \textbf{u})_{\Omega}
  =
  (\textbf{v}, \textbf{f})_\Omega,
@f}
where the scalar product between two tensor-valued quantities is here
defined as
@f{eqnarray*}
  (\nabla \mathrm v,\varepsilon(\textbf{u}))_{\Omega}
  =
  \int_\Omega \sum_{i,j=1}^d \frac{\partial v_j}{\partial x_i}
  \varepsilon(\textbf{u})_{ij} \ dx.
@f}
Because the scalar product between a general tensor like
$\nabla\mathrm v$ and a symmetric tensor like
$\varepsilon(\textbf{u})$ equals the scalar product between the
symmetrized forms of the two, we can also write the bilinear form
above as follows:
@f{eqnarray*}
  (\varepsilon(\mathrm v),\varepsilon(\textbf{u}))_{\Omega}
  -
  (\textbf{n} \otimes \mathrm v,\varepsilon(\textbf{u}))_{\partial\Omega}
  - (\textrm{div}\; \textbf{v}, p)_{\Omega}
  + (\textbf{n}\cdot\textbf{v}, p)_{\partial\Omega}
  -
  (q,\textrm{div}\; \textbf{u})_{\Omega}
  =
  (\textbf{v}, \textbf{f})_\Omega.
@f}
We will deal with the boundary terms in the next section, but it is already
clear from the domain terms
@f{eqnarray*}
  (\varepsilon(\mathrm v),\varepsilon(\textbf{u}))_{\Omega}
  - (\textrm{div}\; \textbf{v}, p)_{\Omega}
  -
  (q,\textrm{div}\; \textbf{u})_{\Omega}
@f}
of the bilinear form that the Stokes equations yield a symmetric bilinear
form, and consequently a symmetric (if indefinite) system matrix.

<h3>%Boundary conditions</h3>

The weak form just derived immediately presents us with different
possibilities for imposing boundary conditions:
1. Dirichlet velocity boundary conditions: On a part
   $\Gamma_D\subset\partial\Omega$ we may impose Dirichlet conditions
   on the velocity $\textbf u$:
   @f{eqnarray*}
     \textbf u = \textbf g_D \qquad\qquad \textrm{on}\ \Gamma_D.
   @f}
   Because test functions $\textbf v$ come from the tangent space of
   the solution variable, we have that $\textbf v=0$ on $\Gamma_D$
   and consequently that
   @f{eqnarray*}
     -(\textbf{n} \otimes \mathrm
     v,\varepsilon(\textbf{u}))_{\Gamma_D}
     +
     (\textbf{n}\cdot\textbf{v}, p)_{\Gamma_D}
     = 0.
   @f}
   In other words, as usual, strongly imposed boundary values do not
   appear in the weak form.

   It is noteworthy that if we impose Dirichlet boundary values on the entire
   boundary, then the pressure is only determined up to a constant. An
   algorithmic realization of that would use tools similar to those seen in
   step-11.
2. Neumann-type or natural boundary conditions: On the rest of the boundary
   $\Gamma_N=\partial\Omega\backslash\Gamma_D$, let us re-write the
   boundary terms as follows:
   @f{eqnarray*}
     -(\textbf{n} \otimes \mathrm
     v,\varepsilon(\textbf{u}))_{\Gamma_N}
     +
     (\textbf{n}\cdot\textbf{v}, p)_{\Gamma_N}
     &=&
     \sum_{i,j=1}^d
     -(n_i v_j,\varepsilon(\textbf{u})_{ij})_{\Gamma_N}
     +
     \sum_{i=1}^d
     (n_i v_i, p)_{\Gamma_N}
     \\
     &=&
     \sum_{i,j=1}^d
     -(n_i v_j,\varepsilon(\textbf{u})_{ij})_{\Gamma_N}
     +
     \sum_{i,j=1}^d
     (n_i v_j, p \delta_{ij})_{\Gamma_N}
     \\
     &=&
     \sum_{i,j=1}^d
     (n_i v_j,p \delta_{ij} - \varepsilon(\textbf{u})_{ij})_{\Gamma_N}
     \\
     &=&
     (\textbf{n} \otimes \mathrm v,
     p \textbf{1} - \varepsilon(\textbf{u}))_{\Gamma_N}
     \\
     &=&
     (\mathrm v,
     \textbf{n}\cdot [p \textbf{1} - \varepsilon(\textbf{u})])_{\Gamma_N}.
   @f}
   In other words, on the Neumann part of the boundary we can
   prescribe values for the total stress:
   @f{eqnarray*}
     \textbf{n}\cdot [p \textbf{1} - \varepsilon(\textbf{u})]
     =
     \textbf g_N \qquad\qquad \textrm{on}\ \Gamma_N.
   @f}
   If the boundary is subdivided into Dirichlet and Neumann parts
   $\Gamma_D,\Gamma_N$, this then leads to the following weak form:
   @f{eqnarray*}
     (\varepsilon(\mathrm v),\varepsilon(\textbf{u}))_{\Omega}
     - (\textrm{div}\; \textbf{v}, p)_{\Omega}
     -
     (q,\textrm{div}\; \textbf{u})_{\Omega}
     =
     (\textbf{v}, \textbf{f})_\Omega
     -
     (\textbf{v}, \textbf g_N)_{\Gamma_N}.
   @f}
3. Robin-type boundary conditions: Robin boundary conditions are a mixture of
   Dirichlet and Neumann boundary conditions. They would read
   @f{eqnarray*}
     \textbf{n}\cdot [p \textbf{1} - \varepsilon(\textbf{u})]
     =
     \textbf S \textbf u \qquad\qquad \textrm{on}\ \Gamma_R,
   @f}
   with a rank-2 tensor (matrix) $\textbf S$. The associated weak form is
   @f{eqnarray*}
     (\varepsilon(\mathrm v),\varepsilon(\textbf{u}))_{\Omega}
     - (\textrm{div}\; \textbf{v}, p)_{\Omega}
     -
     (q,\textrm{div}\; \textbf{u})_{\Omega}
     +
     (\textbf S \textbf u, \textbf{v})_{\Gamma_R}
     =
     (\textbf{v}, \textbf{f})_\Omega.
   @f}
4. Partial boundary conditions: It is possible to combine Dirichlet and
   Neumann boundary conditions by only enforcing each of them for certain
   components of the velocity. For example, one way to impose artificial
   boundary conditions is to require that the flow is perpendicular to the
   boundary, i.e. that the tangential component $\textbf u_{\textbf t}=(\textbf
   1-\textbf n\otimes\textbf n)\textbf u$ be zero, thereby constraining
   <code>dim</code>-1 components of the velocity. The remaining component can
   be constrained by requiring that the normal component of the normal
   stress be zero, yielding the following set of boundary conditions:
   @f{eqnarray*}
     \textbf u_{\textbf t} &=& 0,
     \\
     \textbf n \cdot \left(\textbf{n}\cdot [p \textbf{1} -
     \varepsilon(\textbf{u})] \right)
     &=&
     0.
   @f}

   An alternative is the case where one wants the flow to be parallel
   rather than perpendicular to the boundary (in deal.II, the
   VectorTools::compute_no_normal_flux_constraints function can do this for
   you; a minimal call is sketched after this list). This is frequently the
   case for problems with a free boundary
   (e.g. at the surface of a river or lake if vertical forces of the flow are
   not large enough to actually deform the surface), or if no significant
   friction is exerted by the boundary on the fluid (e.g. at the interface
   between earth mantle and earth core where two fluids meet that are
   stratified by different densities but that both have small enough
   viscosities to not introduce much tangential stress on each other).
   In formulas, this means that
   @f{eqnarray*}
     \textbf{n}\cdot\textbf u &=& 0,
     \\
     (\textbf 1-\textbf n\otimes\textbf n)
     \left(\textbf{n}\cdot [p \textbf{1} -
     \varepsilon(\textbf{u})] \right)
     &=&
     0,
   @f}
   with the first condition (which needs to be imposed strongly) fixing a
   single component of the velocity, and the second (which would be enforced
   in the weak form) fixing the remaining two components.
Despite this wealth of possibilities, we will only use Dirichlet and
(homogeneous) Neumann boundary conditions in this tutorial program.
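As an aside, the tangential-flow condition mentioned in point 4 above requires
no more than a single library call. The following sketch (the boundary
indicator and object names are assumptions for illustration) constrains the
normal velocity component to zero on the indicated boundary; the weak,
stress-related part of the condition requires no code at all:

@code
// Constrain n.u = 0 on all faces with boundary indicator 0; the first
// velocity component starts at index 0 within the combined element.
std::set<unsigned char> no_normal_flux_boundaries;
no_normal_flux_boundaries.insert (0);
VectorTools::compute_no_normal_flux_constraints (dof_handler,
                                                 0,
                                                 no_normal_flux_boundaries,
                                                 constraints);
@endcode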

<h3>Discretization</h3>

As developed above, the weak form of the equations with Dirichlet and Neumann
boundary conditions on $\Gamma_D$ and $\Gamma_N$ reads like this: find
$\textbf u\in \textbf V_g = \{\varphi \in H^1(\Omega)^d:
\varphi_{\Gamma_D}=\textbf g_D\}, p\in Q=L^2(\Omega)$ so that
@f{eqnarray*}
  (\varepsilon(\mathrm v),\varepsilon(\textbf{u}))_{\Omega}
  - (\textrm{div}\; \textbf{v}, p)_{\Omega}
  -
  (q,\textrm{div}\; \textbf{u})_{\Omega}
  =
  (\textbf{v}, \textbf{f})_\Omega
  -
  (\textbf{v}, \textbf g_N)_{\Gamma_N}
@f}
for all test functions
$\textbf v\in \textbf V_0 = \{\varphi \in H^1(\Omega)^d:
\varphi_{\Gamma_D}=0\},q\in Q$.

These equations represent a symmetric saddle point problem. It is well known
that a solution then only exists if the function spaces in which we search for
a solution satisfy certain conditions, typically referred to as the
Babuska-Brezzi or Ladyzhenskaya-Babuska-Brezzi (LBB) conditions. The continuous
function spaces above satisfy them. However, when we discretize the equations
by replacing the continuous variables and test functions by finite element
functions in finite dimensional spaces $\textbf V_{g,h}\subset \textbf V_g,
Q_h\subset Q$, we have to make sure that $\textbf V_h,Q_h$ also satisfy the
LBB conditions. This is similar to what we had to do in @ref step_20
"step-20".

For the Stokes equations, there are a number of possible choices to ensure
that the finite element spaces are compatible with the LBB condition. A simple
and accurate choice that we will use here is $\textbf u_h\in Q_{p+1}^d,
p_h\in Q_p$, i.e. use elements one order higher for the velocities than for
the pressures.

This then leads to the following discrete problem: find $\textbf u_h,p_h$ so
that
@f{eqnarray*}
  (\varepsilon(\mathrm v_h),\varepsilon(\textbf u_h))_{\Omega}
  - (\textrm{div}\; \textbf{v}_h, p_h)_{\Omega}
  -
  (q_h,\textrm{div}\; \textbf{u}_h)_{\Omega}
  =
  (\textbf{v}_h, \textbf{f})_\Omega
  -
  (\textbf{v}_h, \textbf g_N)_{\Gamma_N}
@f}
for all test functions $\textbf v_h, q_h$. Assembling the linear system
associated with this problem follows the same lines used in @ref step_20
"step-20" and @ref step_21 "step-21", and is explained in detail in the @ref
vector_valued module.
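In deal.II, such a pair of spaces (the classical Taylor-Hood element for
$p=1$) is expressed as a single vector-valued element. The following is a
minimal sketch of how this looks in code; the polynomial degrees are
illustrative rather than quoted from the program:

@code
// dim copies of a Q2 element for the velocity, one Q1 element for the
// pressure: velocities one order higher than pressures, as required above.
FESystem<dim> fe (FE_Q<dim>(2), dim,
                  FE_Q<dim>(1), 1);
@endcode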

<h3>Linear solver and preconditioning issues</h3>

The weak form of the discrete equations naturally leads to the following
linear system for the nodal values of the velocity and pressure fields:
@f{eqnarray*}
  \left(\begin{array}{cc}
    A & B^T \\ B & 0
  \end{array}\right)
  \left(\begin{array}{c}
    U \\ P
  \end{array}\right)
  =
  \left(\begin{array}{c}
    F \\ G
  \end{array}\right).
@f}
Like in @ref step_20 "step-20" and @ref step_21 "step-21", we will solve this
system of equations by forming the Schur complement, i.e. we will first find
the solution $P$ of
@f{eqnarray*}
  BA^{-1}B^T P &=& BA^{-1} F - G,
@f}
and then
@f{eqnarray*}
  AU &=& F - B^TP.
@f}
The way we do this is pretty much exactly what we did in these previous
tutorial programs, i.e. we use the same classes SchurComplement
and InverseMatrix again. There are two significant differences,
however:
1. First, in the mixed Laplace equation we had to deal with the question of
   how to precondition the Schur complement $B^TM^{-1}B$, which was spectrally
   equivalent to the Laplace operator on the pressure space (because $B$
   represents the gradient operator, $B^T$ its adjoint $-\textrm{div}$, and $M$
   the identity (up to the material parameter $K^{-1}$), so $B^TM^{-1}B$ is
   something like $-\textrm{div} \mathbf 1 \nabla = -\Delta$). Consequently,
   the matrix is badly conditioned for small mesh sizes and we had to come up
   with an elaborate preconditioning scheme for the Schur complement.

2. Second, every time we multiplied with $B^TM^{-1}B$ we had to solve with the
   mass matrix $M$. This wasn't particularly difficult, however, since the
   mass matrix is always well conditioned and thus simple to invert using CG
   and a little bit of preconditioning.
In other words, preconditioning the inner solver for $M$ was simple whereas
preconditioning the outer solver for $B^TM^{-1}B$ was complicated.

Here, the situation is pretty much exactly the opposite. The difference stems
from the fact that the matrix at the heart of the Schur complement does not
stem from the identity operator but from a variant of the Laplace operator,
$-\textrm{div} \nabla^s$ (where $\nabla^s$ is the symmetric gradient)
acting on a vector field. In the investigation of this issue
we largely follow the paper D. Silvester and A. Wathen:
"Fast iterative solution of stabilised Stokes systems part II. Using
general block preconditioners" (SIAM J. Numer. Anal., 31 (1994),
pp. 1352-1367). Principally, the difference in the matrix at the heart of the
Schur complement has two consequences:
1. First, it makes the outer preconditioner simple: the Schur complement
   corresponds to the operator $-\textrm{div} (-\textrm{div} \nabla^s)^{-1}
   \nabla$ on the pressure space; forgetting about the fact that we deal with
   symmetric gradients instead of the regular one, the Schur complement is
   something like $-\textrm{div} (-\textrm{div} \nabla)^{-1} \nabla =
   -\textrm{div} (-\Delta)^{-1} \nabla$, which, even if not mathematically
   entirely concise, is spectrally equivalent to the identity operator (a
   heuristic argument would be to commute the operators into
   $-\textrm{div}(-\Delta)^{-1} \nabla = -\textrm{div}\nabla(-\Delta)^{-1} =
   -\Delta(-\Delta)^{-1} = \mathbf 1$). It turns out that it isn't easy to
   solve this Schur complement in a straightforward way with the CG method:
   using no preconditioner, the condition number of the Schur complement
   matrix depends on the size ratios of the largest to the smallest cells, and
   one still needs on the order of 50-100 CG iterations. However, there is a
   simple cure: precondition with the mass matrix on the pressure space and we
   get down to between 5 and 15 CG iterations, pretty much independently of
   the structure of the mesh (take a look at the results section of this
   program to see that indeed the number of CG iterations does not change as
   we refine the mesh).

   So all we need in addition to what we already have is the mass matrix on
   the pressure variables. We could do that by building this matrix on the
   side in a separate data structure. However, it is worth remembering
   that although we build the system matrix
   @f{eqnarray*}
     \left(\begin{array}{cc}
       A & B^T \\ B & 0
     \end{array}\right)
   @f}
   as one object (of type BlockSparseMatrix), we never actually do
   matrix-vector products with this matrix, or any other operations that
   consider the entire matrix. Rather, we only build it in this form for
   convenience (because it reflects the structure of the FESystem finite
   element and associated DoFHandler object) but later only operate on
   the $(0,0),(0,1)$, and $(1,0)$ blocks of this matrix. In other words,
   our algorithm so far entirely ignores the $(1,1)$ (pressure-pressure)
   block as it is empty anyway.

   Now, as mentioned, we need a pressure mass matrix to precondition the
   Schur complement, and conveniently the pressure-pressure block of
   the matrix we build anyway is currently empty and ignored. So what we
   will do is to assemble the needed mass matrix in this space; this does
   change the global system matrix but since our algorithm never operates
   on the global matrix and instead only considers individual blocks,
   this fact does not affect what we actually compute. Later, when
   solving, we then precondition the Schur complement with $M_p^{-1}$ by
   doing a few CG iterations on the well-conditioned pressure mass matrix
   $M_p$ stored in the $(1,1)$ block.
2. While the outer preconditioner has become simpler compared to the
   mixed Laplace case discussed in @ref step_20 "step-20", the issue of
   the inner solver has become more complicated. In the mixed Laplace
   discretization, the Schur complement has the form $B^TM^{-1}B$. Thus,
   every time we multiplied with the Schur complement, we had to solve a
   linear system $M_uz=y$; this isn't too complicated there, however,
   since the mass matrix $M_u$ on the velocity space is well-conditioned.

   On the other hand, for the Stokes equation we consider here, the Schur
   complement is $BA^{-1}B^T$ where the matrix $A$ is related to the
   Laplace operator (it is, in fact, the matrix corresponding to the
   bilinear form $(\nabla^s \varphi_i, \nabla^s\varphi_j)$). Thus,
   solving with $A$ is a lot more complicated: the matrix is badly
   conditioned and we know that we need many iterations unless we have a
   very good preconditioner. What is worse, we have to solve with $A$
   every time we multiply with the Schur complement, which happens 5-15
   times when using the preconditioner described above.

   Because we have to solve with $A$ several times, it pays off to spend
   a bit more time once to create a good preconditioner for this
   matrix. So here's what we're going to do: in 2d, we use
   the ultimate preconditioner, namely a direct sparse LU decomposition of
   the matrix. This is implemented using the SparseDirectUMFPACK class
   that uses the UMFPACK direct solver to compute the decomposition. To
   use it, you will have to specify the <code>--enable-umfpack</code>
   switch when configuring the deal.II library; see the ReadMe file for
   instructions. With this, the inner solver converges in one iteration.

   In 2d, we can do this sort of thing because even reasonably large problems
   rarely have more than a few 100,000 unknowns with relatively few nonzero
   entries per row. Furthermore, the bandwidth of matrices in 2d is ${\cal
   O}(\sqrt{N})$ and therefore moderate. For such matrices, sparse factors can
   be computed in a matter of a few seconds. (As a point of reference,
   computing the sparse factors of a matrix of size $N$ and bandwidth $B$
   takes ${\cal O}(NB^2)$ operations. In 2d, this is ${\cal O}(N^2)$; though
   this is a higher complexity than, for example, assembling the linear system
   which takes ${\cal O}(N)$, the constant for computing the decomposition is
   so small that it doesn't become the dominating factor in the entire program
   until we get to very large %numbers of unknowns in the high 100,000s or
   more.)

   The situation changes in 3d, because there we quickly have many more
   unknowns and the bandwidth of matrices (which determines the number of
   nonzero entries in sparse LU factors) is ${\cal O}(N^{2/3})$, and there
   are many more entries per row as well. This makes using a sparse
   direct solver such as UMFPACK inefficient: only for problem sizes of a
   few 10,000 to maybe 100,000 unknowns can a sparse decomposition be
   computed using reasonable time and memory resources.

   What we do in that case is to use an incomplete LU decomposition (ILU) as
   a preconditioner, rather than actually computing complete LU factors. As
   it so happens, deal.II has a class that does this: SparseILU. Computing
   the ILU takes a time that only depends on the number of nonzero entries
   in the sparse matrix (or the number of entries we are willing to fill in
   the LU factors, if these should be more than the ones in the matrix), but
   is independent of the bandwidth of the matrix. It is therefore an
   operation that can efficiently be computed in 3d as well.

   On the other hand, an incomplete LU decomposition, by definition, does not
   represent an exact inverse of the matrix $A$. Consequently, preconditioning
   with the ILU will still require more than one iteration, unlike
   preconditioning with the sparse direct solver. The inner solver will
   therefore take more time when multiplying with the Schur complement, an
   unavoidable tradeoff.
In the program below, we will make use of the fact that the SparseILU and
SparseDirectUMFPACK classes have a very similar interface and can be used
interchangeably. All that we need is a switch class that, depending on the
dimension, provides a type that is either of the two classes mentioned
above. This is how we do that:
@code
template <int dim>
struct InnerPreconditioner;

template <>
struct InnerPreconditioner<2>
{
    typedef SparseDirectUMFPACK type;
};

template <>
struct InnerPreconditioner<3>
{
    typedef SparseILU<double> type;
};
@endcode

From hereon, we can refer to the type <code>typename
InnerPreconditioner@<dim@>::type</code> and automatically get the correct
preconditioner class. Because of the similarity of the interfaces of the two
classes, we will be able to use them interchangeably using the same syntax in
all places.
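Putting the pieces of this section together, the solve will schematically look
like the following sketch. It condenses the Schur complement approach using
the InverseMatrix and SchurComplement helper classes in the spirit of
@ref step_20 "step-20"; names and the stopping tolerance are illustrative
rather than literal quotes from the program:

@code
// Inverse of the velocity block, wrapped around the inner preconditioner:
InverseMatrix<SparseMatrix<double>,
              typename InnerPreconditioner<dim>::type>
  A_inverse (system_matrix.block(0,0), *A_preconditioner);

// Right hand side of the Schur complement equation: B A^{-1} F - G
Vector<double> tmp (solution.block(0).size());
Vector<double> schur_rhs (solution.block(1).size());
A_inverse.vmult (tmp, system_rhs.block(0));
system_matrix.block(1,0).vmult (schur_rhs, tmp);
schur_rhs -= system_rhs.block(1);

// CG on the Schur complement, preconditioned by the pressure mass
// matrix that was assembled into the otherwise unused (1,1) block:
SchurComplement<typename InnerPreconditioner<dim>::type>
  schur_complement (system_matrix, A_inverse);
SparseILU<double> mp_preconditioner;
mp_preconditioner.initialize (system_matrix.block(1,1),
                              SparseILU<double>::AdditionalData());
InverseMatrix<SparseMatrix<double>,SparseILU<double> >
  m_inverse (system_matrix.block(1,1), mp_preconditioner);

SolverControl solver_control (solution.block(1).size(),
                              1e-6*schur_rhs.l2_norm());
SolverCG<>    cg (solver_control);
cg.solve (schur_complement, solution.block(1), schur_rhs, m_inverse);

// Back-substitution for the velocity: A U = F - B^T P
system_matrix.block(0,1).vmult (tmp, solution.block(1));
tmp *= -1;
tmp += system_rhs.block(0);
A_inverse.vmult (solution.block(0), tmp);
@endcode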

<h3>The testcase</h3>

The domain, right hand side and boundary conditions we implement below relate
to a problem in geophysics: there, one wants to compute the flow field of
magma in the earth's interior under a mid-ocean rift. Rifts are places where
two continental plates are very slowly drifting apart (a few centimeters per
year at most), leaving a crack in the earth's crust that is filled with magma
from below. Without trying to be entirely realistic, we model this situation
by solving the following set of equations and boundary conditions on the
domain $\Omega=[-2,2]\times[0,1]\times[-1,0]$:
@f{eqnarray*}
  -\textrm{div}\; \varepsilon(\textbf{u}) + \nabla p &=& 0,
  \\
  -\textrm{div}\; \textbf{u} &=& 0,
  \\
  \mathbf u &=& \left(\begin{array}{c}
    -1 \\ 0 \\ 0
  \end{array}\right)
  \qquad\qquad \textrm{at}\ z=0, x<0,
  \\
  \mathbf u &=& \left(\begin{array}{c}
    +1 \\ 0 \\ 0
  \end{array}\right)
  \qquad\qquad \textrm{at}\ z=0, x>0,
  \\
  \mathbf u &=& \left(\begin{array}{c}
    0 \\ 0 \\ 0
  \end{array}\right)
  \qquad\qquad \textrm{at}\ z=0, x=0,
@f}
and using natural boundary conditions $\textbf{n}\cdot [p \textbf{1} -
\varepsilon(\textbf{u})] = 0$ everywhere else. In other words, at the
left part of the top surface we prescribe that the fluid moves with the
continental plate to the left at speed $-1$, that it moves to the right on the
right part of the top surface, and we impose natural flow conditions
everywhere else. In 2d, the description is essentially the same, with the
exception that we omit the second component of all vectors stated above.

As will become apparent in the results section, the
flow field will pull material from below and move it to the left and right
ends of the domain, as expected. The discontinuity of the velocity boundary
conditions will produce a singularity in the pressure at the center of the top
surface that sucks material all the way up to the top surface to fill the gap
left by the outward motion of material at this location.
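In code, the discontinuous velocity data above reduces to a small function
object. The following sketch assumes a BoundaryValues class derived from the
Function base class with dim+1 components (velocity plus pressure); the class
name is an assumption for illustration:

@code
// The x-velocity jumps from -1 to +1 across x=0 on the top surface;
// all other components, including the pressure slot, stay zero.
template <int dim>
void
BoundaryValues<dim>::vector_value (const Point<dim> &p,
                                   Vector<double>   &values) const
{
  values = 0;
  if (p[0] < 0)
    values(0) = -1;
  else if (p[0] > 0)
    values(0) = +1;
}
@endcode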

<h3>Implementation</h3>

- -

<h4>Using inhomogeneous constraints for implementing Dirichlet boundary conditions</h4>

In all the previous tutorial programs, we used the ConstraintMatrix merely
for handling hanging node constraints (with the exception of step-11).
However, the class can also be used to implement Dirichlet boundary
conditions, as we will show in this program, by fixing some node values
$x_i = b_i$. Note that these are inhomogeneous constraints, and we have to
pay some special attention to that. The way we are going to implement this
is to first read the boundary values into the ConstraintMatrix object by
using the call

@code
    VectorTools::interpolate_boundary_values (dof_handler,
                                              1,
                                              BoundaryValues<dim>(),
                                              constraints);
@endcode

very similar to how we were making the list of boundary nodes
before (note that we set Dirichlet conditions only on boundaries with
boundary flag 1). The actual application of the boundary values is then
handled by the ConstraintMatrix object directly, without any additional
interference.

We could then proceed as before, namely by filling the matrix and then
calling a condense function on the constraints object of the form
@code
    constraints.condense (system_matrix, system_rhs);
@endcode

Note that we call this on the system matrix and system right hand side
simultaneously, since resolving inhomogeneous constraints requires knowledge
about both the matrix entries and the right hand side. For efficiency
reasons, though, we choose another strategy: all the constraints collected
in the ConstraintMatrix can be resolved on the fly while writing local data
into the global matrix, by using the call
@code
    constraints.distribute_local_to_global (local_matrix, local_rhs,
                                            local_dof_indices,
                                            system_matrix, system_rhs);
@endcode

This technique is further discussed in the @ref step_27 "step-27" tutorial
program. All we need to know here is that this function does three things
at once: it writes the local data into the global matrix and right hand
side, it distributes the hanging node constraints and additionally
implements (inhomogeneous) Dirichlet boundary conditions. That's nice, isn't
it?

We can conclude that the ConstraintMatrix provides an alternative to using
MatrixTools::apply_boundary_values for implementing Dirichlet boundary
conditions.
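Inside the assembly loop, the corresponding per-cell code is then roughly the
following sketch (local_matrix, local_rhs, and local_dof_indices are the
usual cell-local objects, assumed to have been filled before):

@code
// After integrating local_matrix and local_rhs on the current cell:
cell->get_dof_indices (local_dof_indices);

// A single call copies the local contributions into the global objects
// and resolves hanging node as well as inhomogeneous Dirichlet
// constraints on the fly:
constraints.distribute_local_to_global (local_matrix, local_rhs,
                                        local_dof_indices,
                                        system_matrix, system_rhs);
@endcode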

<h4>Using ConstraintMatrix for increasing performance</h4>

-
Frequently, a sparse matrix contains a substantial number of elements that
actually are zero when we are about to start a linear solve. Such elements
are introduced when we eliminate constraints or implement Dirichlet
conditions, where we usually delete all entries in constrained rows and
columns, i.e., we set them to zero. The fraction of elements that are present
in the sparsity pattern, but do not really contain any information, can be up
to one fourth of the total number of elements in the matrix for the 3D
application considered in this tutorial program. Remember that matrix-vector
products or preconditioners operate on all the elements of a sparse matrix
(even those that are zero), which is an inefficiency we will avoid here.

An advantage of directly resolving constrained degrees of freedom is that we
can avoid having most of the entries that are going to be zero in our sparse
matrix — we do not need constrained entries during matrix construction
(as opposed to the traditional algorithms, which first fill the matrix, and
only resolve constraints afterwards). This will save both memory and time
when forming matrix-vector products. The way we are going to do that is to
pass the information about constraints to the function that generates the
sparsity pattern, and then set a <code>false</code> argument specifying that
we do not intend to use constrained entries:
@code
    DoFTools::make_sparsity_pattern (dof_handler, sparsity_pattern,
                                     constraints, false);
@endcode
This function, by the way, also obviates the call to the
condense() function on the sparsity pattern.
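In the setup routine, the relevant sequence of calls then looks roughly like
this sketch (in the actual program, a component mask additionally restricts
the boundary values to the velocity components):

@code
// Collect hanging node and boundary constraints first ...
constraints.clear ();
DoFTools::make_hanging_node_constraints (dof_handler, constraints);
VectorTools::interpolate_boundary_values (dof_handler,
                                          1,
                                          BoundaryValues<dim>(),
                                          constraints);
constraints.close ();

// ... then build the sparsity pattern without the constrained entries;
// no subsequent condense() call is needed:
DoFTools::make_sparsity_pattern (dof_handler, sparsity_pattern,
                                 constraints, false);
@endcode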

<h4>Performance optimizations</h4>

The program developed below has seen a lot of TLC. We have run it over and
over under profiling tools (mainly valgrind's cachegrind and callgrind
tools, as well as the KDE KCachegrind program for
visualization) to see where the bottlenecks are. This has paid off: through
this effort, the program has become about four times as fast when
considering the runtime of the refinement cycles zero through three,
reducing the overall number of CPU instructions executed from
869,574,060,348 to 199,853,005,625. For higher refinement levels, the gain
is probably even larger since some algorithms that are not ${\cal O}(N)$
have been eliminated.

Essentially, there are currently two algorithms in the program that do not
scale linearly with the number of degrees of freedom: renumbering of degrees
of freedom (which is ${\cal O}(N \log N)$), and the linear solver (which is
${\cal O}(N^{4/3})$). As for the first, while reordering degrees of freedom
may not scale linearly, it is an indispensable part of the overall algorithm
as it greatly improves the quality of the sparse ILU, easily making up for
the time spent on computing the renumbering; graphs and timings to
demonstrate this are shown in the documentation of the DoFRenumbering
namespace, also underlining the choice of the Cuthill-McKee reordering
algorithm used below.

As for the linear solver: as mentioned above, our implementation here uses a
Schur complement formulation. This is not necessarily the very best choice
but demonstrates various important techniques available in deal.II. The
question of which solver is best is again discussed in the section on
improved solvers in the results part of this program, along with code
showing alternative solvers and a comparison of their results.

Apart from this, many other algorithms have been tested and improved during
the creation of this program. For example, in building the sparsity pattern,
we originally used a BlockCompressedSparsityPattern object that added one
element at a time; however, its data structures are poorly adapted for the
large numbers of nonzero entries per row created by our discretization in
3d, leading to a quadratic behavior. Replacing the internal algorithms in
deal.II to set many elements at a time, and using a
BlockCompressedSimpleSparsityPattern as a better adapted data structure,
removed this bottleneck at the price of a slightly higher memory
consumption. Likewise, the implementation of the decomposition step in the
SparseILU class was very inefficient and has been replaced by one that is
about 10 times faster. Even the vmult function of the SparseILU has been
improved to save about twenty percent of time. Small improvements were
applied here and there. Moreover, the ConstraintMatrix object has been used
to eliminate a lot of entries in the sparse matrix that are eventually going
to be zero, see the section on using advanced
features of the ConstraintMatrix class.

A profile of how many CPU instructions are spent at the various
different places in the program during refinement cycles
zero through three in 3d is shown here:

@image html step-22.profile-3.png

As can be seen, at this refinement level approximately three quarters of the
instruction count is spent on the actual solver (the SparseILU::vmult calls
on the left, the SparseMatrix::vmult call in the middle for the Schur
complement solve, and another box representing the multiplications with
SparseILU and SparseMatrix in the solve for U).
About one fifth of
the instruction count is spent on matrix assembly and sparse ILU computation
(box in the lower right corner) and the rest on other things. Since floating
point operations such as in the SparseILU::vmult calls typically take much
longer than many of the logical operations and table lookups in matrix
assembly, the fraction of the run time taken up by matrix assembly is
actually significantly less than the fraction of instructions, as will
become apparent in the comparison we make in the results section.

For higher refinement levels, the boxes representing the solver as well as
the blue box at the top right stemming from the reordering algorithm are
going to grow at the expense of the other parts of the program, since they
don't scale linearly. The fact that at this moderate refinement level (3168
cells and 93176 degrees of freedom) the linear solver already makes up about
three quarters of the instructions is a good sign that most of the
algorithms used in this program are well-tuned and that major improvements
in speeding up the program are most likely not to come from hand-optimizing
individual aspects but by changing solver algorithms. We will address this
point in the discussion of results below as well.

As a final point, and as a point of reference, the following picture also
shows how the profile looked at an early stage of optimizing this program:

@image html step-22.profile-3.original.png

As mentioned above, the runtime of this version was about four times as long
as for the first profile, with the SparseILU decomposition taking up about
30% of the instruction count, and operations on the ill-suited
CompressedSparsityPattern about 10%. Both these bottlenecks have since been
completely removed.

diff --git a/deal.II/examples/step-42/doc/results.dox b/deal.II/examples/step-42/doc/results.dox
index 2afe3ca0a3..747bdbbb3d 100644
--- a/deal.II/examples/step-42/doc/results.dox
+++ b/deal.II/examples/step-42/doc/results.dox
@@ -1,809 +1,3 @@

<h1>Results</h1>

-

<h3>Output of the program and graphical visualization</h3>

- -

<h4>2D calculations</h4>

Running the program with the space dimension set to 2 in main()
yields the following output (when the flag is set to optimized in the
Makefile):
@code
examples/step-22> make run
============================ Remaking Makefile.dep
==============optimized===== step-22.cc
============================ Linking step-22
============================ Running step-22
Refinement cycle 0
   Number of active cells: 64
   Number of degrees of freedom: 679 (594+85)
   Assembling...
   Computing preconditioner...
   Solving...  11 outer CG Schur complement iterations for pressure

Refinement cycle 1
   Number of active cells: 160
   Number of degrees of freedom: 1683 (1482+201)
   Assembling...
   Computing preconditioner...
   Solving...  11 outer CG Schur complement iterations for pressure

Refinement cycle 2
   Number of active cells: 376
   Number of degrees of freedom: 3813 (3370+443)
   Assembling...
   Computing preconditioner...
   Solving...  11 outer CG Schur complement iterations for pressure

Refinement cycle 3
   Number of active cells: 880
   Number of degrees of freedom: 8723 (7722+1001)
   Assembling...
   Computing preconditioner...
   Solving...  11 outer CG Schur complement iterations for pressure

Refinement cycle 4
   Number of active cells: 2008
   Number of degrees of freedom: 19383 (17186+2197)
   Assembling...
   Computing preconditioner...
   Solving...  11 outer CG Schur complement iterations for pressure

Refinement cycle 5
   Number of active cells: 4288
   Number of degrees of freedom: 40855 (36250+4605)
   Assembling...
   Computing preconditioner...
   Solving...  11 outer CG Schur complement iterations for pressure
@endcode

The entire computation above takes about 20 seconds on a reasonably
quick (for 2007 standards) machine.

What we see immediately from this is that the number of (outer)
iterations does not increase as we refine the mesh. This confirms the
statement in the introduction that preconditioning the Schur
complement with the mass matrix indeed yields a matrix spectrally
equivalent to the identity matrix (i.e. with eigenvalues bounded above
and below independently of the mesh size or the relative sizes of
cells). In other words, the mass matrix and the Schur complement are
spectrally equivalent.

In the images below, we show the grids for the first six refinement
steps in the program. Observe how the grid is refined in regions
where the solution rapidly changes: On the upper boundary, we have
Dirichlet boundary conditions that are -1 in the left half of the line
and 1 in the right one, so there is an abrupt change at $x=0$. Likewise,
there are changes from Dirichlet to Neumann data in the two upper
corners, so there is need for refinement there as well:
@image html step-22.2d.mesh-0.png
@image html step-22.2d.mesh-1.png
@image html step-22.2d.mesh-2.png
@image html step-22.2d.mesh-3.png
@image html step-22.2d.mesh-4.png
@image html step-22.2d.mesh-5.png
Finally, following is a plot of the flow field. It shows fluid
transported along with the moving upper boundary and being replaced by
material coming from below:

@image html step-22.2d.solution.png

This plot uses the capability of VTK-based visualization programs (in
this case of VisIt) to show vector data; this is the result of us
declaring the velocity components of the finite element in use to be a
set of vector components, rather than independent scalar components, in
the <code>StokesProblem@<dim@>::output_results</code> function of this
tutorial program.
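The corresponding declaration in output_results is sketched below, following
the usual DataOut workflow (variable names are illustrative):

@code
// The first dim components form a vector field, the last one is a
// scalar; visualization programs can then draw velocity arrows.
std::vector<std::string> solution_names (dim, "velocity");
solution_names.push_back ("pressure");

std::vector<DataComponentInterpretation::DataComponentInterpretation>
  data_component_interpretation
  (dim, DataComponentInterpretation::component_is_part_of_vector);
data_component_interpretation
  .push_back (DataComponentInterpretation::component_is_scalar);

DataOut<dim> data_out;
data_out.attach_dof_handler (dof_handler);
data_out.add_data_vector (solution, solution_names,
                          DataOut<dim>::type_dof_data,
                          data_component_interpretation);
data_out.build_patches ();
@endcode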

<h4>3D calculations</h4>

In 3d, the screen output of the program looks like this:

@code
Refinement cycle 0
   Number of active cells: 32
   Number of degrees of freedom: 1356 (1275+81)
   Assembling...
   Computing preconditioner...
   Solving...  13 outer CG Schur complement iterations for pressure.

Refinement cycle 1
   Number of active cells: 144
   Number of degrees of freedom: 5088 (4827+261)
   Assembling...
   Computing preconditioner...
   Solving...  14 outer CG Schur complement iterations for pressure.

Refinement cycle 2
   Number of active cells: 704
   Number of degrees of freedom: 22406 (21351+1055)
   Assembling...
   Computing preconditioner...
   Solving...  14 outer CG Schur complement iterations for pressure.

Refinement cycle 3
   Number of active cells: 3168
   Number of degrees of freedom: 93176 (89043+4133)
   Assembling...
   Computing preconditioner...
   Solving...  15 outer CG Schur complement iterations for pressure.

Refinement cycle 4
   Number of active cells: 11456
   Number of degrees of freedom: 327808 (313659+14149)
   Assembling...
   Computing preconditioner...
   Solving...  15 outer CG Schur complement iterations for pressure.

Refinement cycle 5
   Number of active cells: 45056
   Number of degrees of freedom: 1254464 (1201371+53093)
   Assembling...
   Computing preconditioner...
   Solving...  14 outer CG Schur complement iterations for pressure.
@endcode

Again, we see that the number of outer iterations does not increase as
we refine the mesh. Nevertheless, the compute time increases
significantly: for the refinement cycles above it takes, respectively, a
few seconds, a few seconds, 30 seconds, 4 minutes, 15 minutes, and 1 hour
18 minutes. This overall
superlinear (in the number of unknowns) increase in runtime is due to the
fact that our inner solver is not ${\cal O}(N)$: a simple experiment shows
that as we keep refining the mesh, the average number of
ILU-preconditioned CG iterations to invert the velocity-velocity block
$A$ increases.

We will address the question of how possibly to improve our solver below.

As for the graphical output, the grids generated during the solution
look as follows:
@image html step-22.3d.mesh-0.png
@image html step-22.3d.mesh-1.png
@image html step-22.3d.mesh-2.png
@image html step-22.3d.mesh-3.png
@image html step-22.3d.mesh-4.png
@image html step-22.3d.mesh-5.png
Again, they show essentially the location of singularities introduced
by boundary conditions. The vector field computed makes for an
interesting graph:

@image html step-22.3d.solution.png

The isocontours shown here as well are those of the pressure
variable, showing the singularity at the point of discontinuous
velocity boundary conditions.

<h3>Sparsity pattern</h3>

As explained during the generation of the sparsity pattern, it is
important to have the numbering of degrees of freedom in mind when
using preconditioners like incomplete LU decompositions. This is most
conveniently visualized using the distribution of nonzero elements in
the stiffness matrix.

If we don't do anything special to renumber degrees of freedom (i.e.,
without using DoFRenumbering::Cuthill_McKee, but with using
DoFRenumbering::component_wise to ensure that degrees of freedom are
appropriately sorted into their corresponding blocks of the matrix and
vector), then we get the following image after the first adaptive
refinement in two dimensions:

@image html step-22.2d.sparsity-nor.png

In order to generate such a graph, you have to insert a piece of
code like the following at the end of the setup step.
@code
  {
    std::ofstream out ("sparsity_pattern.gpl");
    sparsity_pattern.print_gnuplot(out);
  }
@endcode

It is clearly visible that the nonzero entries are spread over almost the
whole matrix. This makes preconditioning by ILU inefficient: ILU generates a
Gaussian elimination (LU decomposition) without fill-in elements, which means
that the more tentative fill-in elements are left out, the worse the
approximation of the complete decomposition will be.

In this program, we have thus chosen a more advanced renumbering of
components. The renumbering with DoFRenumbering::Cuthill_McKee and grouping
the components into velocity and pressure yields the following output:

@image html step-22.2d.sparsity-ren.png

It is apparent that the situation has improved a lot. Most of the elements
are now concentrated around the diagonal in the (0,0) block in the matrix.
Similar effects are also visible for the other blocks. In this case, the ILU
decomposition will be much closer to the full LU decomposition, which
improves the quality of the preconditioner. (It may be interesting to note
that the sparse direct solver UMFPACK does some %internal renumbering of the
equations before actually generating a sparse LU decomposition; that
procedure leads to a very similar pattern to the one we got from the
Cuthill-McKee algorithm.)

Finally, we want to have a closer
look at a sparsity pattern in 3D. We show only the (0,0) block of the
matrix, again after one adaptive refinement. Apart from the fact that the
matrix size has increased, it is also visible that there are many more
entries in the matrix. Moreover, even for the optimized renumbering, there
will be a considerable amount of tentative fill-in elements. This
illustrates why UMFPACK is not a good choice in 3D - a full decomposition
needs many new entries that eventually won't fit into the physical memory
(RAM):

@image html step-22.3d.sparsity_uu-ren.png
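For reference, the two renumbering steps just described amount to the
following calls at the beginning of the setup step (a sketch; the
block_component vector maps the dim velocity components to block 0 and the
pressure to block 1):

@code
// Cuthill-McKee to cluster nonzero entries near the diagonal, then a
// component-wise sort so that the velocity/pressure block structure
// emerges:
DoFRenumbering::Cuthill_McKee (dof_handler);

std::vector<unsigned int> block_component (dim+1, 0);
block_component[dim] = 1;
DoFRenumbering::component_wise (dof_handler, block_component);
@endcode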

<h3>Possible Extensions</h3>

- - -

<h4>Improved linear solver in 3D</h4>

-
We have seen in the section on computational results that the number of
outer iterations does not depend on the mesh size, which is optimal in a
sense of scalability. This does, however, not apply to the solver as a
whole, as mentioned above:
we did not look at the number of inner iterations when generating the
inverse of the matrix $A$ and the mass matrix $M_p$. Of course, this is
unproblematic in the 2D case where we precondition $A$ with a direct solver
and the vmult operation of the inverse matrix structure will converge in
one single CG step, but this changes in 3D where we only use an ILU
preconditioner. There, the number of required preconditioned CG steps to
invert $A$ increases as the mesh is refined, and each vmult
operation involves on average approximately 14, 23, 36, 59, 75 and 101 inner
CG iterations in the refinement steps shown above. (On the other hand,
the number of iterations for applying the inverse pressure mass matrix is
always around five, both in two and three dimensions.) To summarize, most
work is spent on solving linear systems with the same matrix $A$ over and
over again. What makes this look even worse is the fact that we
actually invert a matrix that is about 95 percent the size of the total
system matrix and stands for 85 percent of the non-zero entries in the
sparsity pattern. Hence, the natural question is whether it is reasonable to
solve a linear system with matrix $A$ about 15 times when calculating the
solution to the block system.

The answer is, of course, that we can do that in a few other (most of the
time better) ways.
Nevertheless, it has to be remarked that an indefinite system like the one
at hand puts much higher
demands on the linear algebra than the standard elliptic problems we have
seen in the early tutorial programs. The improvements are still rather
unsatisfactory if one compares them with an elliptic problem of similar
size. Either way, we will introduce below a number of improvements to the
linear solver, a discussion that we will re-consider again with additional
options in the @ref step_31 "step-31" program.
<h5>Better ILU decomposition by smart reordering</h5>
A first attempt to improve the speed of the linear solution process is to
choose a dof reordering that makes the ILU closer to a full LU
decomposition, as already mentioned in the in-code comments. The
DoFRenumbering namespace compares several choices for the renumbering of
dofs for the Stokes equations. The best result regarding the computing time
was found for the King ordering, which is accessed through the call
DoFRenumbering::boost::king_ordering. With that ordering, the inner solver
needs considerably fewer operations, e.g. about 62
inner CG iterations for the inversion of $A$ at cycle 4 compared to about 75
iterations with the standard Cuthill-McKee algorithm. Also, the computing
time at cycle 4 decreased from about 17 to 11 minutes for the solve()
call. However, the King ordering (and the orderings provided by the
DoFRenumbering::boost namespace in general) has a serious drawback: it uses
much more memory than the built-in deal.II versions, since it acts on
abstract graphs rather than the geometry provided by the triangulation. In
the present case, the renumbering takes about 5 times as much memory, which
yields an infeasible algorithm for the last cycle in 3D with 1.2 million
unknowns.
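Trying this out amounts to replacing the Cuthill-McKee call by the following
one (sketch):

@code
// Boost-based King ordering: fewer inner CG iterations, but a much
// larger memory footprint while the renumbering is computed.
DoFRenumbering::boost::king_ordering (dof_handler);
@endcode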
<h5>Better preconditioner for the inner CG solver</h5>
Another idea to improve the situation even more would be to choose a
preconditioner that makes CG for the (0,0) matrix $A$ converge in a
mesh-independent number of iterations, say 10 to 30. We have seen such a
candidate in @ref step_16 "step-16": multigrid.
<h5>Block Schur complement preconditioner</h5>
Even with a good preconditioner for $A$, we still
need to solve the same linear system repeatedly (with different
right hand sides, though) in order to make the Schur complement solve
converge. The approach we are going to discuss here is how inner iteration
and outer iteration can be combined. If we persist in calculating the Schur
complement, there is no other possibility.

The alternative is to attack the block system at once and use an approximate
Schur complement as efficient preconditioner. The idea is as
follows: If we find a block preconditioner $P$ such that the matrix
@f{eqnarray*}
  P^{-1}\left(\begin{array}{cc}
    A & B^T \\ B & 0
  \end{array}\right)
@f}
is simple, then an iterative solver with that preconditioner will converge
in a few iterations. Using the Schur complement $S = B A^{-1} B^T$, one
finds that
@f{eqnarray*}
  P^{-1}
  =
  \left(\begin{array}{cc}
    A^{-1} & 0 \\ S^{-1} B A^{-1} & -S^{-1}
  \end{array}\right)
@f}
would appear to be a good choice since
@f{eqnarray*}
  P^{-1}\left(\begin{array}{cc}
    A & B^T \\ B & 0
  \end{array}\right)
  =
  \left(\begin{array}{cc}
    A^{-1} & 0 \\ S^{-1} B A^{-1} & -S^{-1}
  \end{array}\right)\cdot \left(\begin{array}{cc}
    A & B^T \\ B & 0
  \end{array}\right)
  =
  \left(\begin{array}{cc}
    I & A^{-1} B^T \\ 0 & I
  \end{array}\right).
@f}
This is the approach taken by the paper by Silvester and Wathen referenced
in the introduction (with the exception that Silvester and Wathen use
right preconditioning). In this case, a Krylov-based iterative method would
converge in one step only if exact inverses of $A$ and $S$ were applied,
since all the eigenvalues are one (and the number of iterations in such a
method is bounded by the number of distinct eigenvalues). Below, we will
discuss the choice of an adequate solver for this problem. First, we are
going to have a closer look at the implementation of the preconditioner.

Since $P$ is intended to act only as a preconditioner, we shall use
approximations to the inverse of the Schur complement $S$ and the matrix
$A$. Hence, the Schur complement will be approximated by the pressure mass
matrix $M_p$, and we use a preconditioner to $A$ (without an InverseMatrix
class around it) for approximating $A^{-1}$.

Here comes the class that implements the block Schur
complement preconditioner. The vmult operation for block vectors
according to the derivation above can be specified by three successive
operations:
@code
template <class PreconditionerA, class PreconditionerMp>
class BlockSchurPreconditioner : public Subscriptor
{
  public:
    BlockSchurPreconditioner (const BlockSparseMatrix<double>         &S,
                              const InverseMatrix<SparseMatrix<double>,
                                                  PreconditionerMp>   &Mpinv,
                              const PreconditionerA                   &Apreconditioner);

    void vmult (BlockVector<double>       &dst,
                const BlockVector<double> &src) const;

  private:
    const SmartPointer<const BlockSparseMatrix<double> > system_matrix;
    const SmartPointer<const InverseMatrix<SparseMatrix<double>,
                                           PreconditionerMp> > m_inverse;
    const PreconditionerA &a_preconditioner;

    mutable Vector<double> tmp;
};

template <class PreconditionerA, class PreconditionerMp>
BlockSchurPreconditioner<PreconditionerA, PreconditionerMp>::BlockSchurPreconditioner(
  const BlockSparseMatrix<double>                            &S,
  const InverseMatrix<SparseMatrix<double>,PreconditionerMp> &Mpinv,
  const PreconditionerA                                      &Apreconditioner)
  :
  system_matrix    (&S),
  m_inverse        (&Mpinv),
  a_preconditioner (Apreconditioner),
  tmp              (S.block(1,1).m())
{}

// Now the interesting function, the multiplication of
// the preconditioner with a BlockVector.
template <class PreconditionerA, class PreconditionerMp>
void BlockSchurPreconditioner<PreconditionerA, PreconditionerMp>::vmult (
  BlockVector<double>       &dst,
  const BlockVector<double> &src) const
{
  // Form u_new = A^{-1} u
  a_preconditioner.vmult (dst.block(0), src.block(0));
  // Form tmp = - B u_new + p
  // (SparseMatrix::residual does precisely this)
  system_matrix->block(1,0).residual(tmp, dst.block(0), src.block(1));
  // Change sign in tmp
  tmp *= -1;
  // Multiply by approximate Schur complement
  // (i.e. a pressure mass matrix)
  m_inverse->vmult (dst.block(1), tmp);
}
@endcode

Since we act on the whole block system now, we have to live with one
disadvantage: we need to perform the solver iterations on
the full block system instead of the smaller pressure space.

Now we turn to the question which solver we should use for the block
system. The first observation is that the resulting preconditioned matrix
cannot be solved with CG since it is neither positive definite nor
symmetric.

The deal.II libraries implement several solvers that are appropriate for the
problem at hand. One choice is the solver @ref SolverBicgstab "BiCGStab",
which was used for the solution of the unsymmetric advection problem in
step-9. The second option, the one we are going to choose, is
@ref SolverGMRES "GMRES" (generalized minimum residual). Both methods have
their pros and cons - there are problems where one of the two candidates
clearly outperforms the other, and vice versa.
Wikipedia's
article on the GMRES method gives a comparative presentation.
A more comprehensive and well-founded comparison can be read e.g. in the
book by J.W. Demmel (Applied Numerical Linear Algebra, SIAM, 1997, section
6.6.6).

For our specific problem with the ILU preconditioner for $A$, we certainly
need to perform hundreds of iterations on the block system for large problem
sizes (we won't beat CG!). Actually, this disfavors GMRES: During the GMRES
iterations, a basis of Krylov vectors is successively built up and some
operations are performed on these vectors. The more vectors are in this
basis, the more operations and memory will be needed. The number of
operations scales as ${\cal O}(n + k^2)$ and memory as ${\cal O}(kn)$, where
$k$ is the number of vectors in the Krylov basis and $n$ the size of the
(block) matrix.
To not let these demands grow excessively, deal.II limits the size $k$ of
the basis to 30 vectors by default.
Then, the basis is rebuilt. This implementation of the GMRES method is
called GMRES(k), with default $k=30$. What we have gained by this
restriction, namely a bound on operations and memory requirements, will be
compensated by the fact that we use an incomplete basis - this will increase
the number of required iterations.

BiCGStab, on the other hand, won't get slower when many iterations are
needed (one iteration uses only results from one preceding step and
not all the steps as GMRES). Besides the fact that BiCGStab is more
expensive per step since two matrix-vector products are needed (compared to
one for CG or GMRES), there is one main reason which makes BiCGStab not
appropriate for this problem: The preconditioner applies the inverse of the
pressure mass matrix by using the InverseMatrix class. Since the application
of the inverse matrix to a vector is done only in an approximate way (an
exact inverse is too expensive), this will also affect the solver. In the
case of BiCGStab, the Krylov vectors will not be orthogonal due to that
perturbation.
While
this is uncritical for a small number of steps (up to about 50), it ruins
the performance of the solver when these perturbations have grown to a
significant magnitude in the course of iterations.

We did some experiments with BiCGStab and found it to
be faster than GMRES up to refinement cycle 3 (in 3D), but it became very
slow for cycles 4 and 5 (even slower than the original Schur complement), so
the solver is useless in this situation. Choosing a sharper tolerance for
the inverse matrix class (<code>1e-10*src.l2_norm()</code> instead of
<code>1e-6*src.l2_norm()</code>) made BiCGStab perform well also for cycle
4, but did not change the failure on the very large problems.

GMRES is of course also affected by the approximate inverses, but it is not
as sensitive to orthogonality and retains a relatively good performance also
for large sizes, see the results below.

With this said, we turn to the realization of the solver call with GMRES
with $k=100$ temporary vectors:

@code
      SparseMatrix<double> pressure_mass_matrix;
      pressure_mass_matrix.reinit(sparsity_pattern.block(1,1));
      pressure_mass_matrix.copy_from(system_matrix.block(1,1));
      system_matrix.block(1,1) = 0;

      SparseILU<double> pmass_preconditioner;
      pmass_preconditioner.initialize (pressure_mass_matrix,
        SparseILU<double>::AdditionalData());

      InverseMatrix<SparseMatrix<double>,SparseILU<double> >
        m_inverse (pressure_mass_matrix, pmass_preconditioner);

      BlockSchurPreconditioner<typename InnerPreconditioner<dim>::type,
                               SparseILU<double> >
        preconditioner (system_matrix, m_inverse, *A_preconditioner);

      SolverControl solver_control (system_matrix.m(),
                                    1e-6*system_rhs.l2_norm());
      GrowingVectorMemory<BlockVector<double> > vector_memory;
      SolverGMRES<BlockVector<double> >::AdditionalData gmres_data;
      gmres_data.max_n_tmp_vectors = 100;

      SolverGMRES<BlockVector<double> > gmres(solver_control, vector_memory,
                                              gmres_data);

      gmres.solve(system_matrix, solution, system_rhs,
                  preconditioner);

      constraints.distribute (solution);

      std::cout << " "
                << solver_control.last_step()
                << " block GMRES iterations";
@endcode

Obviously, one needs to add the include file @ref SolverGMRES
"<lac/solver_gmres.h>" in order to make this run.
We call the solver with a BlockVector template in order to enable
GMRES to operate on block vectors and matrices.
Note also that we need to set the (1,1) block in the system
matrix to zero (we saved the pressure mass matrix there, which is not part
of the problem) after we copied the information to another matrix.

Using the Timer class, we collect some statistics that compare the runtime
of the block solver with the one from the problem implementation above.
Besides the solution with the two options we also check whether the
solutions of the two variants are close to each other (i.e. whether this
solver indeed gives the same solution as we had before) and calculate the
infinity norm of the vector difference.

Let's first see the results in 2D:
@code
Refinement cycle 0
   Number of active cells: 64
   Number of degrees of freedom: 679 (594+85) [0.005999 s]
   Assembling...  [0.002 s]
   Computing preconditioner... [0.003 s]
   Solving...
      Schur complement:  11 outer CG iterations for p  [0.007999 s]
      Block Schur preconditioner: 12 GMRES iterations [0.008998 s]
   difference l_infty between solution vectors: 8.18909e-07

Refinement cycle 1
   Number of active cells: 160
   Number of degrees of freedom: 1683 (1482+201) [0.013998 s]
   Assembling...  [0.005999 s]
   Computing preconditioner... [0.012998 s]
   Solving...
      Schur complement: 11 outer CG iterations for p  [0.029995 s]
      Block Schur preconditioner: 12 GMRES iterations [0.030995 s]
   difference l_infty between solution vectors: 9.32504e-06

Refinement cycle 2
   Number of active cells: 376
   Number of degrees of freedom: 3813 (3370+443) [0.031995 s]
   Assembling... [0.014998 s]
   Computing preconditioner... [0.044994 s]
   Solving...
      Schur complement: 11 outer CG iterations for p  [0.079987 s]
      Block Schur preconditioner: 13 GMRES iterations [0.092986 s]
   difference l_infty between solution vectors: 5.40689e-06

Refinement cycle 3
   Number of active cells: 880
   Number of degrees of freedom: 8723 (7722+1001) [0.074988 s]
   Assembling... [0.035995 s]
   Computing preconditioner... [0.110983 s]
   Solving...
      Schur complement: 11 outer CG iterations for p  [0.19697 s]
      Block Schur preconditioner: 13 GMRES iterations [0.242963 s]
   difference l_infty between solution vectors: 1.14676e-05

Refinement cycle 4
   Number of active cells: 2008
   Number of degrees of freedom: 19383 (17186+2197) [0.180973 s]
   Assembling... [0.081987 s]
   Computing preconditioner... [0.315952 s]
   Solving...
      Schur complement: 11 outer CG iterations for p  [0.673898 s]
      Block Schur preconditioner: 13 GMRES iterations [0.778882 s]
   difference l_infty between solution vectors: 3.13142e-05

Refinement cycle 5
   Number of active cells: 4288
   Number of degrees of freedom: 40855 (36250+4605) [0.386941 s]
   Assembling... [0.171974 s]
   Computing preconditioner... [0.766883 s]
   Solving...
      Schur complement: 11 outer CG iterations for p  [1.65275 s]
      Block Schur preconditioner: 13 GMRES iterations [1.81372 s]
   difference l_infty between solution vectors: 8.59668e-05
@endcode

We see that there is no huge difference in the solution time between the
block Schur complement preconditioner solver and the Schur complement
itself. The reason is simple: we used a direct solver as the preconditioner for
$A$ - so we cannot expect any gain by avoiding the inner iterations. We see
that the number of iterations has slightly increased for GMRES, but all in
all the two choices are fairly similar.

The picture of course changes in 3D:

@code
Refinement cycle 0
   Number of active cells: 32
   Number of degrees of freedom: 1356 (1275+81) [0.025996 s]
   Assembling... [0.056992 s]
   Computing preconditioner... [0.027995 s]
   Solving...
      Schur complement: 13 outer CG iterations for p  [0.275958 s]
      Block Schur preconditioner: 23 GMRES iterations [0.042994 s]
   difference l_infty between solution vectors: 1.11307e-05

Refinement cycle 1
   Number of active cells: 144
   Number of degrees of freedom: 5088 (4827+261) [0.102984 s]
   Assembling... [0.254961 s]
   Computing preconditioner... [0.161976 s]
   Solving...
      Schur complement: 14 outer CG iterations for p  [2.43963 s]
      Block Schur preconditioner: 42 GMRES iterations [0.352946 s]
   difference l_infty between solution vectors: 9.07409e-06

Refinement cycle 2
   Number of active cells: 704
   Number of degrees of freedom: 22406 (21351+1055) [0.52592 s]
   Assembling... [1.24481 s]
   Computing preconditioner... [0.948856 s]
   Solving...
      Schur complement: 14 outer CG iterations for p  [22.2056 s]
      Block Schur preconditioner: 78 GMRES iterations [4.75928 s]
   difference l_infty between solution vectors: 2.48042e-05

Refinement cycle 3
   Number of active cells: 3168
   Number of degrees of freedom: 93176 (89043+4133) [2.66759 s]
   Assembling... [5.66014 s]
   Computing preconditioner... [4.69529 s]
   Solving...
      Schur complement: 15 outer CG iterations for p  [235.74 s]
      Block Schur preconditioner: 162 GMRES iterations [63.7883 s]
   difference l_infty between solution vectors: 5.62978e-05

Refinement cycle 4
   Number of active cells: 11456
   Number of degrees of freedom: 327808 (313659+14149) [12.0242 s]
   Assembling... [20.2669 s]
   Computing preconditioner... [17.3384 s]
   Solving...
      Schur complement: 15 outer CG iterations for p  [817.287 s]
      Block Schur preconditioner: 294 GMRES iterations [347.307 s]
   difference l_infty between solution vectors: 0.000107536

Refinement cycle 5
   Number of active cells: 45056
   Number of degrees of freedom: 1254464 (1201371+53093) [89.8533 s]
   Assembling... [80.3588 s]
   Computing preconditioner... [73.0849 s]
   Solving...
      Schur complement: 14 outer CG iterations for p  [4401.66 s]
      Block Schur preconditioner: 587 GMRES iterations [3083.21 s]
   difference l_infty between solution vectors: 0.00025531
@endcode

Here, the block preconditioned solver is clearly superior to the Schur
complement, but the advantage becomes smaller as the number of mesh points
grows. This is because GMRES(k) scales worse with the problem size than CG,
as we discussed above. Nonetheless, the improvement by a factor of 3-5 for
moderate problem sizes is quite impressive.
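Since the discussion above attributes the deteriorating scaling to the
GMRES(k) restart, a natural experiment is to vary the restart length $k$. The
following sketch is not part of the tutorial program; it merely reuses the
objects from the solver call shown earlier (and assumes they are still in
scope) to loop over a few restart lengths and report the resulting iteration
counts:

@code
  const unsigned int restart_lengths[] = { 30, 60, 100, 200 };
  for (unsigned int i=0;
       i<sizeof(restart_lengths)/sizeof(restart_lengths[0]); ++i)
    {
      solution = 0;   // start from scratch so the runs are comparable

      SolverControl solver_control (system_matrix.m(),
                                    1e-6*system_rhs.l2_norm());
      GrowingVectorMemory<BlockVector<double> > vector_memory;
      SolverGMRES<BlockVector<double> >::AdditionalData gmres_data;
      gmres_data.max_n_tmp_vectors = restart_lengths[i];

      SolverGMRES<BlockVector<double> > gmres (solver_control, vector_memory,
                                               gmres_data);
      gmres.solve (system_matrix, solution, system_rhs,
                   preconditioner);
      constraints.distribute (solution);

      std::cout << "   k=" << restart_lengths[i]
                << ": " << solver_control.last_step()
                << " block GMRES iterations" << std::endl;
    }
@endcode

One would expect larger $k$ to reduce the iteration count at the price of more
memory and more work per iteration, in line with the ${\cal O}(kn)$ memory
estimate quoted above.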
<h4>Combining the block preconditioner and multigrid</h4>
An ideal linear solver for this problem could be imagined as a combination of
an optimal preconditioner for $A$ (e.g. multigrid) and the block
preconditioner described above; this is the approach taken in the
@ref step_31 "step-31" tutorial program.
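Since the program centralizes the choice of the inner preconditioner for the
velocity block in the InnerPreconditioner structure, experimenting along these
lines only requires exchanging a typedef. As a minimal sketch - using
PreconditionSSOR purely as a stand-in, since an actual multigrid
preconditioner needs the setup machinery discussed in step-16 - one would
replace the existing 3D specialization by something like:

@code
// Stand-in for a stronger preconditioner of the velocity block; the
// rest of the program picks this type up via InnerPreconditioner<dim>::type,
// so no other changes are needed for it to compile.
template <>
struct InnerPreconditioner<3>
{
  typedef PreconditionSSOR<SparseMatrix<double> > type;
};
@endcode

Whether such a simple swap actually yields a good preconditioner for the
velocity block is of course a separate question.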
<h4>No block matrices and vectors</h4>
Another possibility that can be taken into account is to not set up a block
system, but rather to solve the system of velocity and pressure all at once. The
options are a direct solve with UMFPACK (in 2D) or GMRES with ILU
preconditioning (in 3D). It should be straightforward to try that.
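For the 2D case, a minimal sketch of what the direct solve could look like
follows. It assumes that the coupled system has been assembled into
system_matrix as before, but with the actual (1,1) block (i.e. without the
pressure mass matrix stored there), and that SparseDirectUMFPACK - which is
templated on the matrix type - can be handed the block matrix and block
vectors directly:

@code
  // Factorize and solve the coupled saddle point system in one sweep,
  // instead of using the block decomposition.
  SparseDirectUMFPACK direct_solver;
  direct_solver.initialize (system_matrix);
  direct_solver.vmult (solution, system_rhs);
  constraints.distribute (solution);
@endcode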

<h4>More interesting testcases</h4>

The program can of course also serve as a basis to compute the flow in more
interesting cases. The original motivation to write this program was for it to
be a starting point for some geophysical flow problems, such as the
movement of magma under places where continental plates drift apart (for
example mid-ocean ridges). Of course, in such places, the geometry is more
complicated than the examples shown above, but it is not hard to accommodate
that.

For example, by using the following modification of the boundary values
function
@code
template <int dim>
double
BoundaryValues<dim>::value (const Point<dim>  &p,
                            const unsigned int component) const
{
  Assert (component < this->n_components,
          ExcIndexRange (component, 0, this->n_components));

  const double x_offset = std::atan(p[1]*4)/3;

  if (component == 0)
    return (p[0] < x_offset ? -1 : (p[0] > x_offset ? 1 : 0));
  return 0;
}
@endcode
and the following way to generate the mesh as the domain
$[-2,2]\times[-2,2]\times[-1,0]$
@code
  std::vector<unsigned int> subdivisions (dim, 1);
  subdivisions[0] = 4;
  if (dim>2)
    subdivisions[1] = 4;

  const Point<dim> bottom_left = (dim == 2 ?
                                  Point<dim>(-2,-1) :
                                  Point<dim>(-2,-2,-1));
  const Point<dim> top_right   = (dim == 2 ?
                                  Point<dim>(2,0) :
                                  Point<dim>(2,2,0));

  GridGenerator::subdivided_hyper_rectangle (triangulation,
                                             subdivisions,
                                             bottom_left,
                                             top_right);
@endcode
then we get images where the fault line is curved:
<TABLE WIDTH="60%" ALIGN="center">
  <tr>
    <td ALIGN="center">
      @image html step-22.3d-extension.png
    </td>
    <td ALIGN="center">
      @image html step-22.3d-grid-extension.png
    </td>
  </tr>
</table>
diff --git a/deal.II/examples/step-42/step-22.cc b/deal.II/examples/step-42/step-42.cc
similarity index 100%
rename from deal.II/examples/step-42/step-22.cc
rename to deal.II/examples/step-42/step-42.cc