Finally, the bilinear form contains terms involving the gradients of the
velocity components of the shape functions. To be more precise, we are not
really interested in the full gradient, but only in the divergence of the velocity
components, i.e. $\text{div}\ \vec u_h^i = \sum_{d=0}^{dim-1}
\frac{\partial}{\partial x_d} (\vec u_h^i)_d$. Here's a function that returns
this quantity:
\begin{verbatim}
template <int dim>
double
extract_div (const FEValuesBase<dim> &fe_values,
             const unsigned int i,
             const unsigned int q)
{
  // sum up the derivatives d(u_i)_d / dx_d of the velocity
  // components of shape function i at quadrature point q:
  double divergence = 0;
  for (unsigned int d=0; d<dim; ++d)
    divergence += fe_values.shape_grad_component (i,q,d)[d];

  return divergence;
}
\end{verbatim}

\subsection*{Linear solvers and preconditioners}
After assembling the linear system we are faced with the task of solving
it. The problem here is that the matrix has a zero block at the bottom right
(there is no term in the bilinear form that couples the pressure $p$ with the
pressure test function $q$), and that it is indefinite. At least it is
symmetric. In other words: the Conjugate Gradient method is not going to
work. We would have to resort to other iterative solvers instead, such as
MinRes, SymmLQ, or GMRES, that can deal with indefinite systems. However, then
the next problem immediately surfaces: due to the zero block, there are zeros
on the diagonal, and none of the usual preconditioners (Jacobi, SSOR) will
work, as they require division by diagonal elements.
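
As a small illustration of the problem, the symmetric matrix
\begin{align*}
  \begin{pmatrix}
    1 & 1 \\ 1 & 0
  \end{pmatrix}
\end{align*}
has eigenvalues $(1\pm\sqrt 5)/2$ of opposite sign, so it is indefinite and CG
is not applicable; at the same time, a Jacobi preconditioner would have to
divide by the zero diagonal entry.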


\subsubsection*{Solving using the Schur complement}

In view of this, let us take another look at the matrix. If we sort our
degrees of freedom so that all velocity variables come before all pressure
variables, then we can subdivide the linear system $AX=B$ into the following
blocks:
\begin{align*}
  \begin{pmatrix}
    M & B^T \\ B & 0
  \end{pmatrix}
  \begin{pmatrix}
    U \\ P
  \end{pmatrix}
  =
  \begin{pmatrix}
    F \\ G
  \end{pmatrix},
\end{align*}
where $U,P$ are the values of velocity and pressure degrees of freedom,
respectively, $M$ is the mass matrix on the velocity space, $B$ corresponds to
the negative divergence operator, and $B^T$ is its transpose and corresponds
to the negative gradient.

By block elimination, we can rewrite this system in the following way
(multiply the first row of the system by $BM^{-1}$ and then subtract the
second row from it):
\begin{align*}
  BM^{-1}B^T P &= BM^{-1} F - G, \\
  MU &= F - B^TP.
\end{align*}
Here, the matrix $S=BM^{-1}B^T$ (called the \textit{Schur complement} of $A$)
is obviously symmetric. It is also positive definite, owing to the positive
definiteness of $M$ and the fact that $B^T$ has full column rank: for every
$v\neq 0$ we have $v^TSv = (B^Tv)^T M^{-1} (B^Tv) > 0$, since $B^Tv\neq 0$.

Consequently, if we could compute $S$, we could apply the Conjugate Gradient
method to it. However, computing $S$ is expensive, and $S$ is most
likely also a full matrix. On the other hand, the CG algorithm doesn't require
us to actually have a representation of $S$: it is sufficient to form
matrix-vector products with it. We can do so in steps: to compute $Sv$, we
\begin{itemize}
\item form $w = B^T v$;
\item solve $My = w$ for $y=M^{-1}w$, using the CG method applied to the
  positive definite and symmetric mass matrix $M$;
\item form $z=By$ to obtain $Sv=z$.
\end{itemize}
We will implement a class that does that in the program. Before showing its
code, let us first note that we need to multiply with $M^{-1}$ in several
places here: in multiplying with the Schur complement $S$, forming the right
hand side of the first equation, and solving in the second equation. From a
coding viewpoint, it is therefore appropriate to relegate such a recurring
operation to a class of its own. We call it \texttt{InverseMatrix}. As far as
linear solvers are concerned, this class will have all operations that solvers
need, which in fact includes only the ability to perform matrix-vector
products; we form them by using a CG solve (this of course requires that the
matrix passed to this class satisfies the requirements of the CG solver,
i.e. that it is symmetric and positive definite). Here are the relevant parts
of the code that implements this:
\begin{verbatim}
class InverseMatrix
{
  public:
    InverseMatrix (const SparseMatrix<double> &m);

    void vmult (Vector<double>       &dst,
                const Vector<double> &src) const;

  private:
    const SmartPointer<const SparseMatrix<double> > matrix;
    // ... further members, among them the vector_memory
    // object used below
};


void InverseMatrix::vmult (Vector<double>       &dst,
                           const Vector<double> &src) const
{
  SolverControl solver_control (src.size(), 1e-8*src.l2_norm());
  SolverCG<>    cg (solver_control, vector_memory);

  // make sure the solver starts from a well-defined vector
  // rather than whatever happened to be stored in dst before:
  dst = 0;

  cg.solve (*matrix, dst, src, PreconditionIdentity());
}
\end{verbatim}
Once created, objects of this class can act as matrices: they perform
matrix-vector multiplications. How this is actually done is irrelevant to the
outside world.
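
For illustration, applying $M^{-1}$ to some velocity-sized vector could then
look as in the following hypothetical snippet (the names \texttt{rhs} and
\texttt{x} are made up for this example):
\begin{verbatim}
  const InverseMatrix m_inverse (system_matrix.block(0,0));

  Vector<double> x (rhs.size());
  m_inverse.vmult (x, rhs);    // now x equals M^{-1} rhs
\end{verbatim}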

Using this class, we can then write a class that implements the Schur
complement in much the same way: to act as a matrix, it only needs to offer a
function to perform a matrix-vector multiplication, using the algorithm
above. Here are again the relevant parts of the code:
\begin{verbatim}
class SchurComplement
{
  public:
    SchurComplement (const BlockSparseMatrix<double> &A,
                     const InverseMatrix             &Minv);

    void vmult (Vector<double>       &dst,
                const Vector<double> &src) const;

  private:
    const SmartPointer<const BlockSparseMatrix<double> > system_matrix;
    const SmartPointer<const InverseMatrix>              m_inverse;

    mutable Vector<double> tmp1, tmp2;
};


void SchurComplement::vmult (Vector<double>       &dst,
                             const Vector<double> &src) const
{
  // form dst = B M^{-1} B^T src in the three steps given
  // in the list above:
  system_matrix->block(0,1).vmult (tmp1, src);   // tmp1 = B^T src
  m_inverse->vmult (tmp2, tmp1);                 // tmp2 = M^{-1} tmp1
  system_matrix->block(1,0).vmult (dst, tmp2);   // dst  = B tmp2
}
\end{verbatim}

In this code, the constructor takes a reference to a block sparse matrix for
the entire system, and a reference to an object representing the inverse of
the mass matrix. It stores these using \texttt{SmartPointer} objects (see
step-7), and additionally allocates two temporary vectors \texttt{tmp1} and
\texttt{tmp2} for the vectors labeled $w,y$ in the list above.
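
Although only the relevant parts of the class are shown, a constructor
consistent with this description might look as follows (a sketch; the exact
initializations are an assumption, based on the fact that both $w$ and $y$ are
velocity-sized vectors):
\begin{verbatim}
SchurComplement::
SchurComplement (const BlockSparseMatrix<double> &A,
                 const InverseMatrix             &Minv)
                :
                system_matrix (&A),
                m_inverse (&Minv),
                // both temporaries hold velocity-sized vectors:
                tmp1 (A.block(0,0).m()),
                tmp2 (A.block(0,0).m())
{}
\end{verbatim}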

In the matrix-vector multiplication function, the product $Sv$ is performed in
exactly the order outlined above. Note how we access the blocks $B^T$ and $B$
by calling \texttt{system\_matrix->block(0,1)} and
\texttt{system\_matrix->block(1,0)} respectively, thereby picking out
individual blocks of the block system. Multiplication by $M^{-1}$ happens
using the object introduced above.

With all this, we can go ahead and write down the solver we are going to
use. Essentially, all we need to do is form the right hand sides of the two
equations defining $P$ and $U$, and then solve them with the Schur complement
matrix and the mass matrix, respectively:
\begin{verbatim}
template <int dim>
void MixedLaplaceProblem<dim>::solve ()
{
  const InverseMatrix m_inverse (system_matrix.block(0,0));
  Vector<double> tmp (solution.block(0).size());

  // solve B M^{-1} B^T P = B M^{-1} F - G for the pressure:
  {
    Vector<double> schur_rhs (solution.block(1).size());

    m_inverse.vmult (tmp, system_rhs.block(0));
    system_matrix.block(1,0).vmult (schur_rhs, tmp);
    schur_rhs -= system_rhs.block(1);

    SolverControl solver_control (system_matrix.block(0,0).m(),
                                  1e-6*schur_rhs.l2_norm());
    SolverCG<>    cg (solver_control);

    cg.solve (SchurComplement(system_matrix, m_inverse),
              solution.block(1),
              schur_rhs,
              PreconditionIdentity());
  }

  // then solve M U = F - B^T P for the velocities:
  {
    system_matrix.block(0,1).vmult (tmp, solution.block(1));
    tmp *= -1;
    tmp += system_rhs.block(0);

    m_inverse.vmult (solution.block(0), tmp);
  }
}
\end{verbatim}

This code looks more impressive than it actually is. At the beginning, we
declare an object representing $M^{-1}$ and a temporary vector (of the size of
the first block of the solution, i.e. with as many entries as there are
velocity unknowns), and the two blocks surrounded by braces then solve the two
equations for $P$ and $U$, in this order. Most of the code in each of the two
blocks is actually devoted to constructing the proper right hand sides. For
the first equation, this is $BM^{-1}F-G$; for the second one, it is $F-B^TP$.
The first of the two systems is then solved with the CG method applied to the
Schur complement matrix, and for the second we simply multiply with
$M^{-1}$. The code as shown uses no preconditioner (i.e. the identity matrix
as preconditioner) for the Schur complement.



\subsubsection*{A preconditioner for the Schur complement}

One may ask whether it would help if we had a preconditioner for the Schur
complement $S=BM^{-1}B^T$. The general answer, as usual, is: of course. The
problem is only that we don't know anything about this Schur complement
matrix. We do not know its entries, all we know is its action. On the other
hand, we have to realize that our solver is expensive since in each iteration
we have to do one matrix-vector product with the Schur complement, which means
that we have to invert the mass matrix once in each iteration.

There are different approaches to preconditioning such a matrix. At one
extreme, we could use something that is cheap to apply and therefore has no
real impact on the work done in each iteration. At the other extreme is a
preconditioner that is itself very expensive, but in return really brings down
the number of iterations required to solve with $S$.

We will try something along the lines of the second approach, as much to
improve the performance of the program as to demonstrate some techniques. To
this end, let us recall that the ideal preconditioner is, of course, $S^{-1}$,
but that is unattainable. However, how about
\begin{align*}
  \tilde S^{-1} = \left[B\,(\text{diag}M)^{-1}B^T\right]^{-1}
\end{align*}
as a preconditioner? That would mean that every time we have to do one
preconditioning step, we actually have to solve with $\tilde S$. At first,
this looks almost as expensive as solving with $S$ right away. However, note
that in the inner iteration, we do not have to calculate $M^{-1}$, but only
the inverse of its diagonal, which is cheap.
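
Indeed, applying the inverse of the diagonal is nothing but a componentwise
scaling,
\begin{align*}
  \left((\text{diag}M)^{-1} w\right)_i = \frac{w_i}{M_{ii}},
\end{align*}
i.e. a single division per vector entry.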

To implement something like this, let us first generalize the
\texttt{InverseMatrix} class so that it can work not only with
\texttt{SparseMatrix} objects, but with any matrix type. This looks like so:
\begin{verbatim}
template <class Matrix>
class InverseMatrix
{
  public:
    InverseMatrix (const Matrix &m);

    void vmult (Vector<double>       &dst,
                const Vector<double> &src) const;

  private:
    const SmartPointer<const Matrix> matrix;

    // ...
};


template <class Matrix>
void InverseMatrix<Matrix>::vmult (Vector<double>       &dst,
                                   const Vector<double> &src) const
{
  SolverControl solver_control (src.size(), 1e-8*src.l2_norm());
  SolverCG<>    cg (solver_control, vector_memory);

  dst = 0;

  cg.solve (*matrix, dst, src, PreconditionIdentity());
}
\end{verbatim}
Essentially, the only change we have made is the introduction of a template
argument that generalizes the use of \texttt{SparseMatrix}.
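
With this change, the declaration of the mass matrix inverse in the
\texttt{solve()} function shown earlier would presumably have to name the
template argument, along these lines:
\begin{verbatim}
  const InverseMatrix<SparseMatrix<double> >
    m_inverse (system_matrix.block(0,0));
\end{verbatim}
The type of the second argument of the \texttt{SchurComplement} constructor
would then have to change accordingly.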

The next step is to define a class that represents the approximate Schur
complement. This should look very much like the Schur complement class itself,
except that it doesn't need the object representing $M^{-1}$ any more:
\begin{verbatim}
class ApproximateSchurComplement : public Subscriptor
{
  public:
    ApproximateSchurComplement (const BlockSparseMatrix<double> &A);

    void vmult (Vector<double>       &dst,
                const Vector<double> &src) const;

  private:
    const SmartPointer<const BlockSparseMatrix<double> > system_matrix;

    mutable Vector<double> tmp1, tmp2;
};


void ApproximateSchurComplement::vmult (Vector<double>       &dst,
                                        const Vector<double> &src) const
{
  system_matrix->block(0,1).vmult (tmp1, src);
  system_matrix->block(0,0).precondition_Jacobi (tmp2, tmp1);
  system_matrix->block(1,0).vmult (dst, tmp2);
}
\end{verbatim}
Note how the \texttt{vmult} function differs only in that it does one Jacobi
sweep (i.e. multiplies with the inverse of the diagonal) instead of
multiplying with the full $M^{-1}$.
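
In effect, for the default relaxation parameter of one, the call to
\texttt{precondition\_Jacobi} on a matrix \texttt{m} computes nothing more
complicated than the following loop (shown here only for illustration; the
actual implementation lives inside the \texttt{SparseMatrix} class):
\begin{verbatim}
  // dst = (diag m)^{-1} src, i.e. divide each entry of src
  // by the corresponding diagonal element of m:
  for (unsigned int i=0; i<dst.size(); ++i)
    dst(i) = src(i) / m.diag_element(i);
\end{verbatim}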

With all this, we already have the preconditioner: it should be the inverse of
the approximate Schur complement, i.e. we need code like this:
\begin{verbatim}
  ApproximateSchurComplement
    approximate_schur_complement (system_matrix);

  InverseMatrix<ApproximateSchurComplement>
    preconditioner (approximate_schur_complement);
\end{verbatim}
That's all!

Taken together, the first block of our \texttt{solve()} function will then
look like this:
\begin{verbatim}
  Vector<double> schur_rhs (solution.block(1).size());

  m_inverse.vmult (tmp, system_rhs.block(0));
  system_matrix.block(1,0).vmult (schur_rhs, tmp);
  schur_rhs -= system_rhs.block(1);

  SchurComplement
    schur_complement (system_matrix, m_inverse);

  ApproximateSchurComplement
    approximate_schur_complement (system_matrix);

  InverseMatrix<ApproximateSchurComplement>
    preconditioner (approximate_schur_complement);

  SolverControl solver_control (system_matrix.block(0,0).m(),
                                1e-6*schur_rhs.l2_norm());
  SolverCG<>    cg (solver_control);

  cg.solve (schur_complement, solution.block(1), schur_rhs,
            preconditioner);
\end{verbatim}
Note how we pass the so-defined preconditioner to the solver working on the
Schur complement matrix.

Obviously, applying this inverse of the approximate Schur complement is a very
expensive preconditioner, almost as expensive as inverting the Schur
complement itself. We can expect it to significantly reduce the number of
outer iterations required for the Schur complement. In fact it does: in a
typical run on 5 times refined meshes using elements of order 0, the number of
outer iterations drops from 164 to 12. On the other hand, we now have to apply
a very expensive preconditioner 12 times. A better measure is therefore simply
the run-time of the program: on my laptop, it drops from 28 to 23 seconds for
this testcase. That doesn't seem too impressive, but the savings become more
pronounced on finer meshes and with elements of higher order. For example, a
six times refined mesh and elements of order 2 yield a drop from 318 to 12
outer iterations, and a reduction in run-time from 338 to 229 seconds. Not
earth shattering, but significant.

\subsection*{Definition of the testcase}