that our inner solver is not ${\cal O}(N)$: a simple experiment shows
that as we keep refining the mesh, the average number of
ILU-preconditioned CG iterations to invert the velocity-velocity block
-$A$ increases from 12, 22, 35, 55, .....
+$A$ increases.
We will address the question of how this solver might be improved <a
href="#improved-solver">below</a>.
-It is clearly visible that the dofs are spread over almost the whole matrix.
-This makes preconditioning by ILU inefficient: ILU generates a Gaussian
-elimination (LU decomposition) without fill-in elements, which means that more
-tentative fill-ins left out will result in a worse approximation of the complete
-decomposition.
+It is clearly visible that the nonzero entries are spread over almost the
+whole matrix. This makes preconditioning by ILU inefficient: ILU generates a
+Gaussian elimination (LU decomposition) without fill-in elements, which means
+that more tentative fill-ins left out will result in a worse approximation of
+the complete decomposition.
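+As a side note, setting up such an ILU preconditioner for the
+velocity-velocity block takes only a few lines. The following is only a
+sketch; the name <code>system_matrix</code> for the block system matrix is an
+assumption:
+@code
+// Build an incomplete LU decomposition of the (0,0) (velocity-velocity)
+// block and use it as preconditioner for the inner CG solver.
+SparseILU<double> A_preconditioner;
+A_preconditioner.initialize (system_matrix.block(0,0),
+                             SparseILU<double>::AdditionalData());
+@endcode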
-In this program, we have thus chosen a more advanced renumbering of components.
-The renumbering with Cuthill_McKee and grouping the components into velocity
-and pressure yields the following output.
+In this program, we have thus chosen a more advanced renumbering of
+components. The renumbering with DoFRenumbering::Cuthill_McKee and grouping
+the components into velocity and pressure yields the following output:
@image html step-31.2d.sparsity-ren.png
-It is apparent that the situation has improved a lot. Most of the
-elements are now concentrated around the diagonal in the (0,0) block in the
-matrix. Similar effects are also visible for the other blocks. In this case, the
-ILU decomposition will be much closer to the full LU decomposition, which
-improves the quality of the preconditioner. It is also worthwile to note that
-the sparse direct solver UMFPACK does some internal renumbering of the equations
-before actually generating a sparse LU decomposition that leads to a
-similar pattern as the one we got from Cuthill_McKee.
+It is apparent that the situation has improved a lot. Most of the elements are
+now concentrated around the diagonal in the (0,0) block in the matrix. Similar
+effects are also visible for the other blocks. In this case, the ILU
+decomposition will be much closer to the full LU decomposition, which improves
+the quality of the preconditioner. (It may be interesting to note that the
+sparse direct solver UMFPACK does some internal renumbering of the equations
+before actually generating a sparse LU decomposition; that procedure leads to
+a very similar pattern to the one we got from the Cuthill-McKee algorithm.)
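+For completeness, the renumbering that produces this pattern takes only two
+calls. The following is a sketch; it assumes a DoFHandler object named
+<code>dof_handler</code> and that the pressure is the last of the
+<code>dim+1</code> vector components:
+@code
+// First reduce the bandwidth ...
+DoFRenumbering::Cuthill_McKee (dof_handler);
+// ... then sort the degrees of freedom so that all velocity dofs come
+// before all pressure dofs; within each group the Cuthill-McKee order
+// is retained:
+std::vector<unsigned int> block_component (dim+1, 0);
+block_component[dim] = 1;
+DoFRenumbering::component_wise (dof_handler, block_component);
+@endcode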
Finally, we want to have a closer
look at a sparsity pattern in 3D. We show only the (0,0) block of the
matrix. Moreover, even for the optimized renumbering, there will be a
considerable amount of tentative fill-in elements. This illustrates why UMFPACK
is not a good choice in 3D - a full decomposition needs many new entries that
- eventually won't fit into the physical memory (RAM).
+ eventually won't fit into the physical memory (RAM):
@image html step-31.3d.sparsity_uu-ren.png
iterations does not depend on the mesh size, which is optimal in the sense of
scalability. This is, however, only partly true, since we did not look at the
number of iterations for the inner iterations, i.e. generating the inverse of
-the matrix $A$ and the mass matrix $M_p$. Of course, this is
-unproblematic in the 2D case where we precondition with a direct solver and the
-vmult operation of the inverse matrix structure will converge in one single CG
-step, but this changes in 3D where we need to apply the ILU preconditioner.
-There, the number of required step basically doubles with half the element size,
-so the work gets more and more for larger systems. For the 3D results obtained
-above, each vmult operation involves approx. 14, 23, 36, 59, 72, 101 etc. inner
-CG iterations. (On the other hand, the number of iterations for applying the
-inverse pressure mass matrix is always about 10-11.)
-To summarize, most work is spent on
-creating the same matrix inverse over and over again. It is a natural question
-to ask whether we can do that any better and avoid inverting the same
-(complicated) matrix several times.
+the matrix $A$ and the mass matrix $M_p$. Of course, this is unproblematic in
+the 2D case where we precondition with a direct solver and the
+<code>vmult</code> operation of the inverse matrix structure will converge in
+one single CG step, but this changes in 3D where we need to apply the ILU
+preconditioner. There, the number of required preconditioned CG steps to
+invert $A$ basically increases as the mesh is refined. For
+the 3D results obtained above, each <code>vmult</code> operation involves
+on average approximately 14, 23, 36, 59, 72, 101, ... inner CG iterations
+on the successive refinement steps. (On the other hand,
+the number of iterations for applying the inverse pressure mass matrix is
+always about 10-11.) To summarize, most work is spent on solving linear
+systems with the same
+matrix $A$ over and over again. It is a natural question to ask whether we
+can do that better.
The answer is, of course, that we can do this in a few other, and usually
better, ways.
of the same linear system. The question is how this can be avoided. If we
persist in calculating the Schur complement, there is no other possibility.
The alternative is to attack the block system all at once and use an approximate
-Schur complement as an efficient preconditioner. The basic attempt is as
+Schur complement as an efficient preconditioner. The basic idea is as
follows: If we find a block preconditioner $P$ such that the matrix
@f{eqnarray*}
  P^{-1}\left(\begin{array}{cc}
    A & B^T \\ B & 0
  \end{array}\right)
@f}
is simple, then an iterative solver with that preconditioner will converge in a
few iterations. Using the Schur complement $S = B A^{-1} B^T$, one finds that
+@f{eqnarray*}
+ P^{-1}
+ =
+ \left(\begin{array}{cc}
+ A^{-1} & 0 \\ S^{-1} B A^{-1} & -S^{-1}
+ \end{array}\right)
+@f}
+would appear to be a good choice since
@f{eqnarray*}
P^{-1}\left(\begin{array}{cc}
  A & B^T \\ B & 0
  \end{array}\right)
  =
  \left(\begin{array}{cc}
  I & A^{-1} B^T \\ 0 & I
- \end{array}\right),
+ \end{array}\right).
@f}
-see also the paper by Silvester and Wathen referenced to in the introduction. In
+This is the approach taken in the paper by Silvester and Wathen referenced
+in the introduction. In
this case, a Krylov-based iterative method will converge in two steps, since
-there are only two distinct eigenvalues 0 and 1 of this matrix. Though, it is
-not possible to use CG since the block matrix is not positive definite (it has
-indeed both positive and negative eigenvalues, i.e. it is indefinite). Instead
-one has to choose e.g. the iterative solver GMRES, which is more expensive
-per iteration (more inner products to be calculated), but is applicable also to
-indefinite (and non-symmetric) matrices.
+the preconditioned matrix has only the single eigenvalue 1 and a minimal
+polynomial of degree two. The preconditioned system can, however, not be
+solved using CG since the preconditioned matrix is no longer symmetric.
+Instead, one has to use an iterative solver such as GMRES, which is more
+expensive per iteration (more inner products need to be calculated) but is
+applicable to nonsymmetric (and indefinite) matrices.
Since $P$ is intended to act only as a preconditioner, we shall only use
-approximations to the Schur complement $S$ and the matrix $A$. So an
-improved solver for the Stokes system is going to look like the following:
-The Schur
-complement will be approximated by the pressure mass matrix $M_p$, and we use a
-preconditioner to $A$ (without an InverseMatrix class around it) to approximate
-$A$. The advantage of this approach is however partly compensated by the fact
-that we need to perform the solver iterations on the full block system i
-nstead of the smaller pressure space.
+approximations to the Schur complement $S$ and the matrix $A$. So an improved
+solver for the Stokes system is going to look like the following: The Schur
+complement will be approximated by the pressure mass matrix $M_p$, and we use
+a preconditioner for $A$ (without an InverseMatrix class around it) to
+approximate $A^{-1}$. The advantage of this approach is, however, partly
+offset by the fact that we need to perform the solver iterations on the
+full block system instead of on the smaller pressure space.
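+Written out, the preconditioner that is actually applied after these two
+replacements (denoting by $\tilde A^{-1}$ the preconditioner used for $A$) is
+@f{eqnarray*}
+  \tilde P^{-1}
+  =
+  \left(\begin{array}{cc}
+    \tilde A^{-1} & 0 \\ M_p^{-1} B \tilde A^{-1} & -M_p^{-1}
+  \end{array}\right).
+@f}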
An implementation of such a solver could look like this:
First comes the class for the block Schur complement preconditioner, which implements
-a vmult operation of the preconditioner for block vectors. Note that the
+a <code>vmult</code> operation of the preconditioner for block vectors. Note that the
preconditioner $P$ described above is implemented by three successive
operations.
@code
const InverseMatrix<SparseMatrix<double>,PreconditionSSOR<> > &Mpinv,
const Preconditioner &Apreconditioner);
- void vmult (BlockVector<double> &dst,
- const BlockVector<double> &src) const;
+ void vmult (BlockVector<double> &dst,
+ const BlockVector<double> &src) const;
private:
const SmartPointer<const BlockSparseMatrix<double> > system_matrix;
a_preconditioner (Apreconditioner),
tmp (2)
{
- // We have to initialize the <code>BlockVector@</code>
+ // We have to initialize the BlockVector
// tmp to the correct sizes in the respective blocks
tmp.block(0).reinit(S.block(0,0).m());
tmp.block(1).reinit(S.block(1,1).m());
// Form u_new = A^{-1} u
a_preconditioner.vmult (dst.block(0), src.block(0));
// Form tmp.block(1) = - B u_new + p
- // (<code>SparseMatrix::residual</code>
- // does precisely this)
+ // (SparseMatrix::residual does precisely this)
system_matrix->block(1,0).residual(tmp.block(1),
dst.block(0), src.block(1));
// Change sign in tmp.block(1)
}
@endcode
-The actual solver call can be realized as follows.
+The actual solver call can be realized as follows:
@code
SparseMatrix<double> pressure_mass_matrix;
GMRES to operate on block vectors and matrices.
Note also that we need to set the (1,1) block in the system
matrix to zero (we saved the pressure mass matrix there, which is not part of the
-problem) after we copied to information to another matrix.
+problem) after we copied the information to another matrix.
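+A sketch of the surrounding code might look as follows; the names
+<code>system_matrix</code>, <code>sparsity_pattern</code>,
+<code>solution</code>, <code>system_rhs</code> and
+<code>block_schur_preconditioner</code> are assumptions, not necessarily
+what the program uses:
+@code
+// Save the pressure mass matrix that was assembled into the (1,1) block,
+// then set that block to zero since it is not part of the Stokes system:
+pressure_mass_matrix.reinit (sparsity_pattern.block(1,1));
+pressure_mass_matrix.copy_from (system_matrix.block(1,1));
+system_matrix.block(1,1) = 0;
+
+// Run GMRES on the whole block system, using the block Schur complement
+// preconditioner defined above:
+SolverControl solver_control (system_matrix.m(),
+                              1e-6*system_rhs.l2_norm());
+SolverGMRES<BlockVector<double> > gmres (solver_control);
+gmres.solve (system_matrix, solution, system_rhs,
+             block_schur_preconditioner);
+@endcode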
-Using the timer class, we collected some statistics that compare the runtime of
-the block solver with the one used in the problem implementation above. Besides
-the solution of the two systems we also check if the solutions to the two
-systems are close to each other, i.e. we calculate the infinity norm of the
-vector difference.
+Using the Timer class, we can collect some statistics that compare the runtime
+of the block solver with the one used in the problem implementation above (on
+a different machine than the one for which timings were reported
+above). Besides the solution of the two systems we also check if the solutions
+to the two systems are close to each other, i.e. we calculate the infinity
+norm of the vector difference.
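+A minimal sketch of how such timings can be gathered with the Timer class;
+<code>solve_schur()</code> and <code>solve_block()</code> are placeholders
+for the two solver variants:
+@code
+Timer timer;
+timer.start ();
+solve_schur ();      // Schur complement based solver
+timer.stop ();
+std::cout << "Schur complement: " << timer.cpu_time () << " s" << std::endl;
+
+timer.reset ();
+timer.start ();
+solve_block ();      // GMRES with the block Schur complement preconditioner
+timer.stop ();
+std::cout << "Block Schur preconditioner: " << timer.cpu_time () << " s"
+          << std::endl;
+@endcode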
Let's first see the results in 2D:
@code
max difference l_infty in the two solution vectors: 0.000245764
@endcode
-We see that the runtime for solution using the block Schur complement
+We see that the runtime for solution using the block Schur complement
preconditioner is higher than the one with the Schur complement directly. The
-reason is simple: We used a direct solve as preconditioner here - so there won't
-be any gain by avoiding the inner iterations (indeed, we see that slightly more
-iterations are needed).
+reason is simple: we used a direct solve as preconditioner for the latter - so
+there won't be any gain by avoiding the inner iterations (indeed, we see that
+slightly more iterations are needed).
The picture of course changes in 3D:
Another possibility worth considering is to not set up a block system at
all, but rather to solve the system of velocity and pressure all at once. The
options are a direct solve with UMFPACK (2D) or GMRES with ILU
-preconditioning (3D).
+preconditioning (3D). It should be straightforward to try that.
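+For the 2D case, and assuming the whole system were assembled into a single
+(non-block) matrix and right hand side named
+<code>whole_system_matrix</code> and <code>whole_system_rhs</code>, such a
+coupled direct solve would be as simple as this sketch:
+@code
+// Factorize the complete velocity-pressure system and apply the inverse:
+SparseDirectUMFPACK direct_solver;
+direct_solver.initialize (whole_system_matrix);
+direct_solver.vmult (whole_solution, whole_system_rhs);
+@endcode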