From: David Wells
Date: Tue, 8 Dec 2015 22:25:25 +0000 (-0500)
Subject: Improve spacing in the preconditioners module.
X-Git-Tag: v8.4.0-rc2~161^2~3
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=e0e152a51323361818e4b42f748dc494408da444;p=dealii.git

Improve spacing in the preconditioners module.
---

diff --git a/doc/doxygen/headers/preconditioners.h b/doc/doxygen/headers/preconditioners.h
index 66e1b77d44..c6816e9a73 100644
--- a/doc/doxygen/headers/preconditioners.h
+++ b/doc/doxygen/headers/preconditioners.h
@@ -1,6 +1,6 @@
 // ---------------------------------------------------------------------
 //
-// Copyright (C) 2003 - 2013 by the deal.II authors
+// Copyright (C) 2003 - 2015 by the deal.II authors
 //
 // This file is part of the deal.II library.
 //
@@ -26,18 +26,18 @@
  * decompositions (ILU). In addition, sparse direct solvers can be used as
  * preconditioners when available.
  *
- * Broadly speaking, preconditioners are operators, which are
- * multiplied with a matrix to improve conditioning. The idea is, that
- * the preconditioned system <i>P<sup>-1</sup>Ax = P<sup>-1</sup>b</i>
- * is much easier to solve than the original system <i>Ax = b</i>.What
- * this means exactly depends on the structure of the matrix and
- * cannot be discussed here in generality. For symmetric, positive
- * definite matrices <i>A</i> and <i>P</i>, it means that the spectral
- * condition number (the quotient of greatest and smallest eigenvalue)
- * of <i>P<sup>-1</sup>A</i> is much smaller than the one of <i>A</i>.
- *
- * At hand of the simplest example, Richardson iteration, implemented
- * in SolverRichardson, the preconditioned iteration looks like
+ * Broadly speaking, preconditioners are operators that are multiplied with
+ * a matrix to improve conditioning. The idea is that the preconditioned
+ * system <i>P<sup>-1</sup>Ax = P<sup>-1</sup>b</i> is much easier to solve
+ * than the original system <i>Ax = b</i>. What this means exactly depends
+ * on the structure of the matrix and cannot be discussed here in
+ * generality. For symmetric, positive definite matrices <i>A</i> and
+ * <i>P</i>, it means that the spectral condition number (the quotient of
+ * greatest and smallest eigenvalue) of <i>P<sup>-1</sup>A</i> is much
+ * smaller than the one of <i>A</i>.
+ *
+ * For the simplest example, Richardson iteration, implemented in
+ * SolverRichardson, the preconditioned iteration looks like
  * @f[
  *  x^{k+1} = x^k - P^{-1} \bigl(A x^k - b\bigr).
  * @f]
@@ -120,30 +120,29 @@
  * void vmult (VECTOR& dst, const VECTOR& src) const;
  * void Tvmult (VECTOR& dst, const VECTOR& src) const;
  * @endcode
- * These functions apply the preconditioning operator to the source
- * vector $src$ and return the result in $dst$ as $dst=P^{-1}src$ or
- * $dst=P^{-T}src$. Preconditioned iterative
- * dolvers use these vmult() functions of the preconditioner.
- * Some solvers may also use Tvmult().
+ * These functions apply the preconditioning operator to the source vector
+ * $src$ and return the result in $dst$ as $dst=P^{-1}src$ or
+ * $dst=P^{-T}src$. Preconditioned iterative solvers use these
+ * vmult() functions of the preconditioner. Some solvers may also
+ * use Tvmult().
 *
 * <h3>Relaxation methods</h3>
 *
- * Additional to the interface described above, some preconditioners
- * like SOR and Jacobi have been known as iterative methods
- * themselves. For them, an additional interface exists, consisting of
- * the functions
+ * In addition to the interface described above, some preconditioners like
+ * SOR and Jacobi have been known as iterative methods themselves. For them,
+ * an additional interface exists, consisting of the functions
 * @code
 * void step (VECTOR& dst, const VECTOR& rhs) const;
 * void Tstep (VECTOR& dst, const VECTOR& rhs) const;
 * @endcode
 *
- * Here, $src$ is a residual vector and $dst$ is the iterate that is
- * supposed to be updated. In other words, the operation performed by
- * these functions is
- * $dst = dst - P^{-1} (A dst - rhs)$ and $dst = dst - P^{-T} (A dst - rhs)$. The
- * functions are called this way because they perform one step
- * of a fixed point (Richardson) iteration. Note that preconditioners
- * store a reference to the original matrix $A$ during initialization.
+ * Here, $rhs$ is the right hand side vector of the linear system and $dst$
+ * is the current iterate that is supposed to be updated. In other words,
+ * the operation performed by these functions is
+ * $dst = dst - P^{-1} (A dst - rhs)$ and $dst = dst - P^{-T} (A dst - rhs)$.
+ * The functions are called this way because they perform one step of a
+ * fixed point (Richardson) iteration. Note that preconditioners store a
+ * reference to the original matrix $A$ during initialization.
 *
 * @ingroup LAC
 * @ingroup Matrices
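
To make the vmult()/Tvmult() interface and the Richardson update in the documentation above concrete, here is a minimal, self-contained C++ sketch. It is not part of this patch and not deal.II code: the names DiagonalPreconditioner and apply_A are invented for illustration. The preconditioner is P = diag(A), and richardson() implements x^{k+1} = x^k - P^{-1}(A x^k - b) against the same two-function interface.

@code
#include <cstddef>
#include <vector>

// Hypothetical diagonal (Jacobi) preconditioner, P = diag(A).
// vmult() computes dst = P^{-1} src; since a diagonal matrix is
// symmetric, Tvmult() performs the same operation.
struct DiagonalPreconditioner
{
  std::vector<double> diagonal; // diagonal entries of A

  void vmult(std::vector<double> &dst, const std::vector<double> &src) const
  {
    for (std::size_t i = 0; i < src.size(); ++i)
      dst[i] = src[i] / diagonal[i];
  }

  void Tvmult(std::vector<double> &dst, const std::vector<double> &src) const
  {
    vmult(dst, src); // P = P^T for a diagonal matrix
  }
};

// Preconditioned Richardson iteration written against the vmult()
// interface: x^{k+1} = x^k - P^{-1} (A x^k - b). The callable
// apply_A stands in for a matrix-vector product y = A x.
template <typename MatVec, typename Preconditioner>
void richardson(std::vector<double>       &x,
                const std::vector<double> &b,
                const MatVec              &apply_A,
                const Preconditioner      &P,
                const unsigned int         n_steps)
{
  std::vector<double> residual(x.size()), update(x.size());
  for (unsigned int k = 0; k < n_steps; ++k)
    {
      apply_A(residual, x);                     // residual = A x
      for (std::size_t i = 0; i < x.size(); ++i)
        residual[i] -= b[i];                    // residual = A x - b
      P.vmult(update, residual);                // update = P^{-1} residual
      for (std::size_t i = 0; i < x.size(); ++i)
        x[i] -= update[i];                      // x = x - update
    }
}
@endcode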
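The relaxation interface is then just one application of the same fixed-point update. Below is a sketch under the same assumptions, reusing the hypothetical DiagonalPreconditioner above. Note that in deal.II itself step() is a two-argument member function that uses the matrix reference stored during initialization; here the matrix product is passed in explicitly only to keep the sketch self-contained.

@code
// One relaxation step, dst = dst - P^{-1} (A dst - rhs). Calling this
// repeatedly reproduces the preconditioned Richardson iteration above,
// which is why such functions are named step().
template <typename MatVec, typename Preconditioner>
void step(std::vector<double>       &dst,
          const std::vector<double> &rhs,
          const MatVec              &apply_A,
          const Preconditioner      &P)
{
  std::vector<double> residual(dst.size()), update(dst.size());
  apply_A(residual, dst);                       // residual = A dst
  for (std::size_t i = 0; i < dst.size(); ++i)
    residual[i] -= rhs[i];                      // residual = A dst - rhs
  P.vmult(update, residual);                    // update = P^{-1} residual
  for (std::size_t i = 0; i < dst.size(); ++i)
    dst[i] -= update[i];                        // dst = dst - update
}
@endcode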
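For completeness, a sketch of how the classes named in the documentation fit together in deal.II itself. SolverControl, SolverRichardson, and PreconditionJacobi are real deal.II classes, but the wrapper function, tolerance, and iteration limit are illustrative assumptions. An undamped Richardson iteration converges only for suitable P^{-1}A; SolverRichardson's AdditionalData provides a damping parameter if needed.

@code
#include <deal.II/lac/precondition.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/solver_richardson.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

// Assuming A, x, and b have already been assembled elsewhere:
void solve_with_jacobi(const SparseMatrix<double> &A,
                       Vector<double>             &x,
                       const Vector<double>       &b)
{
  SolverControl                    control(1000, 1e-12);
  SolverRichardson<Vector<double>> solver(control);

  PreconditionJacobi<SparseMatrix<double>> preconditioner;
  preconditioner.initialize(A); // stores a reference to A

  // The solver only ever calls preconditioner.vmult() internally.
  solver.solve(A, x, b, preconditioner);
}
@endcode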