From: Matthias Maier
Date: Fri, 8 Feb 2019 19:50:16 +0000 (-0600)
Subject: examples/step-20: Update introduction
X-Git-Tag: v9.1.0-rc1~347^2~9
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=62db86d737aaa557822b411340268352d25e8ada;p=dealii.git

examples/step-20: Update introduction

 - Introduce LinearOperator and PackagedOperation
 - Explain in detail how to solve the Schur complement using this
   functionality.
---

diff --git a/examples/step-20/doc/intro.dox b/examples/step-20/doc/intro.dox
index b9a7bde9cf..f090523b02 100644
--- a/examples/step-20/doc/intro.dox
+++ b/examples/step-20/doc/intro.dox
@@ -249,7 +249,7 @@ program we do that in the following way:
     for (unsigned int i=0; i<dofs_per_cell; ++i)
       for (unsigned int j=0; j<dofs_per_cell; ++j)
         {
-          const Tensor<1,dim> phi_j_u = fe_values[velocities].value (j, q);
-          const double div_phi_j_u = fe_values[velocities].divergence (j, q);
-          const double phi_j_p = fe_values[pressure].value (j, q);
+          const Tensor<1,dim> phi_j_u     = fe_values[velocities].value (j, q);
+          const double        div_phi_j_u = fe_values[velocities].divergence (j, q);
+          const double        phi_j_p     = fe_values[pressure].value (j, q);
           local_matrix(i,j) += (phi_i_u * k_inverse_values[q] * phi_j_u
                                 - div_phi_i_u * phi_j_p
@@ -333,23 +333,23 @@ the same way as above, i.e. the extractor classes also work on FEFaceValues obje
 @code
   for (unsigned int face_no=0;
-      face_no<GeometryInfo<dim>::faces_per_cell;
-      ++face_no)
-    if (cell->at_boundary(face_no))
-      {
-        fe_face_values.reinit (cell, face_no);
-
-        pressure_boundary_values
-        .value_list (fe_face_values.get_quadrature_points(),
-                     boundary_values);
-
-        for (unsigned int q=0; q<n_face_q_points; ++q)
-          for (unsigned int i=0; i<dofs_per_cell; ++i)
-            local_rhs(i) += -(fe_face_values[velocities].value (i, q) *
-                              fe_face_values.normal_vector(q) *
-                              boundary_values[q] *
-                              fe_face_values.JxW(q));
-      }
+       face_no<GeometryInfo<dim>::faces_per_cell;
+       ++face_no)
+    if (cell->at_boundary(face_no))
+      {
+        fe_face_values.reinit (cell, face_no);
+
+        pressure_boundary_values
+          .value_list (fe_face_values.get_quadrature_points(),
+                       boundary_values);
+
+        for (unsigned int q=0; q<n_face_q_points; ++q)
+          for (unsigned int i=0; i<dofs_per_cell; ++i)
+            local_rhs(i) += -(fe_face_values[velocities].value (i, q) *
+                              fe_face_values.normal_vector(q) *
+                              boundary_values[q] *
+                              fe_face_values.JxW(q));
+      }
 @endcode

@@ ... @@
 blocks that correspond to the individual operators that appear in
-the system. We note that the resulting solver is not optimal -- there are much
-better ways, for example those explained in the results section of step-22 or
-the one we use in step-43 for a problem rather similar to the current one --
-but that the goal is to introduce techniques rather than optimal solvers.
+the system. We note that the resulting solver is not optimal -- there are
+much better ways to solve the system efficiently, for example those
+explained in the results section of step-22 or the one we use in step-43
+for a problem similar to the current one. Here, our goal is to introduce
+new solution techniques and to show how they can be implemented in
+deal.II.
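+
+As a reminder of the notation used in the remainder of this introduction:
+after discretization we arrive at the block linear system
+@f[
+\left(\begin{array}{cc} M & B \\ B^T & 0 \end{array}\right)
+\left(\begin{array}{c} U \\ P \end{array}\right)
+=
+\left(\begin{array}{c} F \\ G \end{array}\right),
+@f]
+where $U$ and $P$ are the vectors of velocity and pressure unknowns, and
+$M$, $B$, and $B^T$ are the individual blocks we will work with below.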

<h3>Solving using the Schur complement</h3>

@@ -439,115 +440,162 @@ matrix-vector products.
 
 @note The key point in this consideration is to recognize that to implement
 an iterative solver such as CG or GMRES, we never actually need the actual
 elements of a matrix! All that is required is that we can form
-matrix-vector products. The same is true for preconditioners. In deal.II
-we encode this requirement by only requiring that matrices and preconditioners
-given to solver classes have a vmult() member function that does
-the matrix-vector product. How a class chooses to implement this function is
-not important to the solver. Consequently, classes can implement it by,
-for example, doing a sequence of products and linear solves as discussed
-above.
-
-Using this strategy, we can then implement a class that provides the
-function vmult() that is all that the SolverCG class
-requires from an object representing a matrix. We can make our life a
-bit easier by also introducing an object that represents $M^{-1}$ and
-that has its own vmult() function that, if called, solves
-the linear system with $M$. Using this (which we will implement as the
-InverseMatrix class in the body of the program), the class
-that implements the Schur only needs to offer the vmult()
-function to perform a matrix-vector multiplication, using the algorithm
-above. Here are again the relevant parts of the code:
-
+matrix-vector products. The same is true for preconditioners. In deal.II we
+encode this requirement by only requiring that matrices and preconditioners
+given to solver classes have a vmult() member function that
+does the matrix-vector product. How a class chooses to implement this
+function is not important to the solver. Consequently, classes can
+implement it by, for example, doing a sequence of products and linear
+solves as discussed above.
+
+
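+For illustration only -- this is not code from the tutorial itself -- a
+minimal sketch of such a class could look as follows; an object of this
+type can be handed to SolverCG::solve() in place of an actual matrix:
+@code
+  // Hypothetical example: a matrix-free operator representing 2*I.
+  // SolverCG never needs anything beyond this vmult() interface.
+  class ScaledIdentity
+  {
+  public:
+    void vmult (Vector<double>       &dst,
+                const Vector<double> &src) const
+    {
+      dst  = src;
+      dst *= 2.;
+    }
+  };
+@endcode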

+<h3>The LinearOperator framework in deal.II</h3>

+
+deal.II includes support for describing such linear operations in a very
+general way. This is done with the LinearOperator class that, like
+@ref ConceptMatrixType "the MatrixType concept",
+defines a minimal interface for applying a linear operation to a
+vector:
 @code
-class SchurComplement : public Subscriptor
-{
-  public:
-    SchurComplement (const BlockSparseMatrix<double>            &A,
-                     const InverseMatrix<SparseMatrix<double> > &inverse_mass);
-
-    void vmult (Vector<double>       &dst,
-                const Vector<double> &src) const;
-
-  private:
-    const SmartPointer<const BlockSparseMatrix<double> >            system_matrix;
-    const SmartPointer<const InverseMatrix<SparseMatrix<double> > > inverse_mass;
-
-    mutable Vector<double> tmp1, tmp2;
-};
-
-
-void SchurComplement::vmult (Vector<double>       &dst,
-                             const Vector<double> &src) const
-{
-  system_matrix->block(0,1).vmult (tmp1, src); // multiply with the top right block: B
-  inverse_mass->vmult (tmp2, tmp1);            // multiply with M^-1
-  system_matrix->block(1,0).vmult (dst, tmp2); // multiply with the bottom left block: B^T
-}
+  std::function<void(Range &, const Domain &)> vmult;
+  std::function<void(Range &, const Domain &)> vmult_add;
+  std::function<void(Domain &, const Range &)> Tvmult;
+  std::function<void(Domain &, const Range &)> Tvmult_add;
 @endcode
+The key difference between a LinearOperator and an ordinary matrix,
+however, is that a LinearOperator does not allow any further access to the
+underlying object. All you can do with a LinearOperator is to apply its
+"action" to a vector! We take the opportunity to introduce the
+LinearOperator concept at this point because it is a very useful tool that
+allows you to construct complex solvers and preconditioners in a very
+intuitive manner.
+
+As a first example let us construct a LinearOperator object that represents
+$M^{-1}$. This means that whenever the vmult() function of
+this operator is called it has to solve a linear system. This requires us
+to specify a solver (and a corresponding preconditioner). Assuming that
+M is a reference to the upper left block of the system matrix
+we can write:
+@code
+  const auto op_M = linear_operator(M);
 
-In this code, the constructor takes a reference to a block sparse matrix for
-the entire system, and a reference to the object representing the inverse of
-the mass matrix. It stores these using SmartPointer objects (see
-step-7), and additionally allocates two temporary vectors tmp1 and
-tmp2 for the vectors labeled $w,y$ in the list above.
+  PreconditionJacobi<> preconditioner_M;
+  preconditioner_M.initialize(M);
 
-In the matrix-vector multiplication function, the product $Sv$ is performed in
-exactly the order outlined above. Note how we access the blocks $B$ and $B^T$
-by calling system_matrix->block(0,1) and
-system_matrix->block(1,0) respectively, thereby picking out
-individual blocks of the block system. Multiplication by $M^{-1}$ happens
-using the object introduced above.
+  ReductionControl reduction_control_M(2000, 1.0e-18, 1.0e-10);
+  SolverCG<> solver_M(reduction_control_M);
 
-With all this, we can go ahead and write down the solver we are going to
-use. Essentially, all we need to do is form the right hand sides of the two
-equations defining $P$ and $U$, and then solve them with the Schur complement
-matrix and the mass matrix, respectively:
+  const auto op_M_inv = inverse_operator(op_M, solver_M, preconditioner_M);
+@endcode
+Rather than using a SolverControl we use the ReductionControl class here
+that stops iterations when either an absolute tolerance is reached (for
+which we choose $10^{-18}$) or when the residual is reduced by a certain
+factor (here, $10^{-10}$). In contrast, the SolverControl class only checks
+for absolute tolerances. We have to use ReductionControl in our case to
+work around a minor issue: The right hand sides that we will feed to
+op_M_inv are essentially formed by residuals that naturally
+have vastly differing norms. This makes control by an absolute tolerance
+very error prone.
+
+We now have a LinearOperator op_M_inv that we can use to
+construct more complicated operators such as the Schur complement $S$.
+Assuming that B is a reference to the upper right block,
+constructing a LinearOperator op_S is a matter of two lines:
+@code
+  const auto op_B = linear_operator(B);
+  const auto op_S = transpose_operator(op_B) * op_M_inv * op_B;
+@endcode
+Here, the multiplication of three LinearOperator objects yields a composite
+object op_S whose vmult() function first applies
+$B$, then $M^{-1}$ (i.e. solving an equation with $M$), and finally $B^T$
+to any given input vector. In that sense op_S.vmult() is
+similar to the following code:
+@code
+  B.vmult (tmp1, src);                              // multiply with the top right block: B
+  solver_M.solve (M, tmp2, tmp1, preconditioner_M); // multiply with M^-1
+  B.Tvmult (dst, tmp2);                             // multiply with the bottom left block: B^T
+@endcode
+(tmp1 and tmp2 are two temporary vectors).
+@note We could have achieved the same goal of creating a "matrix-like"
+object by implementing a specialized class SchurComplement
+that provides a suitable vmult() function. Skipping over some
+details this might have looked like the following:
 @code
-template <int dim>
-void MixedLaplaceProblem<dim>::solve ()
+class SchurComplement
 {
-  InverseMatrix<SparseMatrix<double> > inverse_mass (system_matrix.block(0,0));
-  Vector<double> tmp (solution.block(0).size());
+  public:
-  {
-    SchurComplement schur_complement (system_matrix, inverse_mass);
-    Vector<double> schur_rhs (solution.block(1).size());
-    inverse_mass.vmult (tmp, system_rhs.block(0));
-    system_matrix.block(1,0).vmult (schur_rhs, tmp);
-    schur_rhs -= system_rhs.block(1);
-
-    SolverControl solver_control (solution.block(1).size(),
-                                  1e-12*schur_rhs.l2_norm());
-    SolverCG<> cg (solver_control);
-
-    PreconditionIdentity preconditioner;
-    cg.solve (schur_complement, solution.block(1), schur_rhs,
-              preconditioner);
-  }
+  // ...
+  void vmult (Vector<double>       &dst,
+              const Vector<double> &src) const
   {
-    system_matrix.block(0,1).vmult (tmp, solution.block(1));
-    tmp *= -1;
-    tmp += system_rhs.block(0);
-
-    inverse_mass.vmult (solution.block(0), tmp);
+    B.vmult (tmp1, src);
+    solver_M.solve (M, tmp2, tmp1, preconditioner_M);
+    B.Tvmult (dst, tmp2);
   }
-}
+};
+@endcode
+Even though both approaches are exactly equivalent, the LinearOperator
+class has a big advantage over this manual approach.
+It provides so-called syntactic sugar: Mathematically, we think
+about $S$ as being the composite matrix $S=B^TM^{-1}B$ and the
+LinearOperator class allows you to write this out more or less verbatim,
+@code
+const auto op_M_inv = inverse_operator(op_M, solver_M, preconditioner_M);
+const auto op_S = transpose_operator(op_B) * op_M_inv * op_B;
 @endcode
+The manual approach, on the other hand, obscures this fact.
 
-This code looks more impressive than it actually is. At the beginning, we
-declare an object representing $M^{-1}$ and a temporary vector (of the size of
-the first block of the solution, i.e. with as many entries as there are
-velocity unknowns), and the two blocks surrounded by braces then solve the two
-equations for $P$ and $U$, in this order. Most of the code in each of the two
-blocks is actually devoted to constructing the proper right hand sides. For
-the first equation, this would be $B^TM^{-1}F-G$, and $-BP+F$ for the second
-one. The first hand side is then solved with the Schur complement matrix, and
-the second simply multiplied with $M^{-1}$. The code as shown uses no
-preconditioner (i.e. the identity matrix as preconditioner) for the Schur
-complement.
 
+All that is left for us to do now is to form the right hand sides of the
+two equations defining $P$ and $U$, and then solve them with the Schur
+complement matrix and the mass matrix, respectively. For example, the right
+hand side of the first equation reads $B^TM^{-1}F-G$. This could be
+implemented as follows:
+@code
+  Vector<double> schur_rhs (P.size());
+  Vector<double> tmp (U.size());
+  op_M_inv.vmult (tmp, F);
+  transpose_operator(op_B).vmult (schur_rhs, tmp);
+  schur_rhs -= G;
+@endcode
+Again, this is a perfectly valid approach, but the fact that deal.II
+requires us to manually resize the final and temporary vectors, and that
+every operation takes up a new line makes this hard to read. This is the
+point where a second class in the linear operator framework will help
+us. Similar in spirit to LinearOperator, a PackagedOperation stores a
+"computation":
+@code
+  std::function<void(Range &)> apply;
+  std::function<void(Range &)> apply_add;
+@endcode
+The class allows lazy evaluation of expressions involving vectors and
+linear operators. This is done by storing the computational expression and
+only performing the computation when either the object is converted to a
+vector object, or PackagedOperation::apply() (or
+PackagedOperation::apply_add()) is invoked by hand. Assuming that
+F and G are the two vectors of the right hand
+side, we can simply write:
+@code
+  const auto schur_rhs = transpose_operator(op_B) * op_M_inv * F - G;
+@endcode
+Here, schur_rhs is a PackagedOperation that records the
+computation we specified. It does not create a vector with the actual
+result immediately.
+
+With these prerequisites at hand, solving for $P$ and $U$ is a matter of
+creating another solver and inverse operator:
+@code
+  SolverControl solver_control_S(2000, 1.e-12);
+  SolverCG<> solver_S(solver_control_S);
+  PreconditionIdentity preconditioner_S;
+
+  const auto op_S_inv = inverse_operator(op_S, solver_S, preconditioner_S);
+
+  P = op_S_inv * schur_rhs;
+  U = op_M_inv * (F - op_B * P);
+@endcode
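+
+Taken together, a sketch of the new solve() function -- assuming the block
+names of this program and the definitions of op_B,
+op_M_inv, op_S, and op_S_inv
+given above -- then reads:
+@code
+  // Convenient references to the blocks of the linear system ...
+  const auto &M = system_matrix.block(0,0);
+  const auto &B = system_matrix.block(0,1);
+
+  const auto &F = system_rhs.block(0);
+  const auto &G = system_rhs.block(1);
+
+  auto &U = solution.block(0);
+  auto &P = solution.block(1);
+
+  // ... set up op_B, op_M_inv, op_S, and op_S_inv as discussed above,
+  // and then the actual solution process is just:
+  const auto schur_rhs = transpose_operator(op_B) * op_M_inv * F - G;
+  P = op_S_inv * schur_rhs;
+  U = op_M_inv * (F - op_B * P);
+@endcode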

<h3>A preconditioner for the Schur complement</h3>

@@ -579,82 +627,34 @@ this looks almost as expensive as solving with $S$ right away.
 However, note that in the inner iteration, we do not have to calculate
 $M^{-1}$, but only the inverse of its diagonal, which is cheap.
 
-The next step is to define a class that represents this approximate Schur
-complement. This should look very much like the Schur complement class itself,
-except that it doesn't need the object representing $M^{-1}$ any more
-since we can compute the inverse of the diagonal of $M$ on the fly:
-
+Thankfully, the LinearOperator framework makes this very easy to write out.
+We already used a Jacobi preconditioner (preconditioner_M) for
+the $M$ matrix earlier. So all that is left to do is to write down what the
+approximate Schur complement looks like:
 @code
-class ApproximateSchurComplement : public Subscriptor
-{
-public:
-  ApproximateSchurComplement (const BlockSparseMatrix<double> &A);
-
-  void vmult (Vector<double>       &dst,
-              const Vector<double> &src) const;
-
-private:
-  const SmartPointer<const BlockSparseMatrix<double> > system_matrix;
-
-  mutable Vector<double> tmp1, tmp2;
-};
-
-
-void
-ApproximateSchurComplement::vmult
-  (Vector<double>       &dst,
-   const Vector<double> &src) const
-  {
-    system_matrix->block(0,1).vmult (tmp1, src);
-    system_matrix->block(0,0).precondition_Jacobi (tmp2, tmp1);
-    system_matrix->block(1,0).vmult (dst, tmp2);
-  }
+  const auto op_aS =
+    transpose_operator(op_B) * linear_operator(preconditioner_M) * op_B;
 @endcode
-
-Note how the vmult function differs in simply doing one Jacobi sweep
+Note how this operator differs in that it simply does one Jacobi sweep
 (i.e. multiplying with the inverses of the diagonal) instead of multiplying
-with the full $M^{-1}$. (This is how a single Jacobi preconditioner
-step with $M$ is defined: it is the multiplication with the inverse of
-the diagonal of $M$; in other words, the operation $({\textrm{diag}\
-}M)^{-1}x$ on a vector $x$ is exactly what the function
-SparseMatrix::precondition_Jacobi above does.)
+with the full $M^{-1}$. (This is how a single Jacobi preconditioner step
+with $M$ is defined: it is the multiplication with the inverse of the
+diagonal of $M$; in other words, the operation $({\textrm{diag}\ }M)^{-1}x$
+on a vector $x$ is exactly what PreconditionJacobi does.)
 
 With all this, we nearly already have the preconditioner: it should be the
-inverse of the approximate Schur complement. We implement this with
-the InverseMatrix class:
-
+inverse of the approximate Schur complement. We implement this again with
+the inverse_operator() function:
 @code
-  ApproximateSchurComplement approximate_schur (system_matrix);
-  InverseMatrix<ApproximateSchurComplement> approximate_inverse
-    (approximate_schur);
+  ReductionControl reduction_control_aS(2000, 1.e-18, 1.0e-6);
+  SolverCG<> solver_aS(reduction_control_aS);
+  PreconditionIdentity preconditioner_aS;
+  const auto preconditioner_S =
+    inverse_operator(op_aS, solver_aS, preconditioner_aS);
 @endcode
+That's all!
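+
+To demonstrate how this preconditioner is used: when inverting the exact
+Schur complement, preconditioner_S simply takes the place of
+the PreconditionIdentity object from the previous section, for example:
+@code
+  // Sketch: precondition the CG solve with the exact Schur complement
+  // by the (approximately inverted) approximate Schur complement.
+  const auto op_S_inv = inverse_operator(op_S, solver_S, preconditioner_S);
+  P = op_S_inv * schur_rhs;
+@endcode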
-Taken together, the first block of our solve() function will then
-look like this:
-
-@code
-    SchurComplement schur_complement (system_matrix, inverse_mass);
-    Vector<double> schur_rhs (solution.block(1).size());
-    inverse_mass.vmult (tmp, system_rhs.block(0));
-    system_matrix.block(1,0).vmult (schur_rhs, tmp);
-    schur_rhs -= system_rhs.block(1);
-
-    SolverControl solver_control (solution.block(1).size(),
-                                  1e-12*schur_rhs.l2_norm());
-    SolverCG<> cg (solver_control);
-
-    ApproximateSchurComplement approximate_schur (system_matrix);
-    InverseMatrix<ApproximateSchurComplement> approximate_inverse
-      (approximate_schur);
-    cg.solve (schur_complement, solution.block(1), schur_rhs,
-              approximate_inverse);
-@endcode
-
-Note how we pass the so-defined preconditioner to the solver working on the
-Schur complement matrix.
-
 Obviously, applying this inverse of the approximate Schur complement is a
 very expensive preconditioner, almost as expensive as inverting the Schur
 complement itself. We can expect it to significantly reduce the number of
@@ -670,16 +670,6 @@ six times refined mesh and using elements of order 2 yields an improvement of
 earth shattering, but significant.
 
-

-<h3>A remark on similar functionality in deal.II</h3>

-
-As a final remark about solvers and preconditioners, let us note that a
-significant amount of functionality introduced above is actually also present
-in the library itself. It probably even is more powerful and general, but we
-chose to introduce this material here anyway to demonstrate how to work with
-block matrices and to develop solvers and preconditioners, rather than using
-black box components from the library.
-
-

<h3>Definition of the test case</h3>

In this tutorial program, we will solve the Laplace equation in mixed