From: Martin Kronbichler
Date: Fri, 4 Sep 2009 08:50:00 +0000 (+0000)
Subject: Properly set the degree of polynomial approximation in the Chebyshev smoother. Damped...
X-Git-Tag: v8.0.0~7183
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=21f4aa22d896569bd47e2462c0ff4f7762ff4f5c;p=dealii.git

Properly set the degree of polynomial approximation in the Chebyshev
smoother. Damped Jacobi is degree zero, not degree 1.

git-svn-id: https://svn.dealii.org/trunk@19390 0785d39b-7218-0410-832d-ea1e28bc413d
---

diff --git a/deal.II/examples/step-37/doc/intro.dox b/deal.II/examples/step-37/doc/intro.dox
index 13dc9e06e2..99b6276b62 100644
--- a/deal.II/examples/step-37/doc/intro.dox
+++ b/deal.II/examples/step-37/doc/intro.dox
@@ -432,11 +432,11 @@
 implements this concept using the parallel::apply_to_subranges() function.

Combination with multigrid

 Above, we have gone to significant efforts to implement a matrix-vector
-product that does actually store the matrix elements. In many user codes,
-however, there is more than just performing un uncertain number of
+product that does not actually store the matrix elements. In many user
+codes, however, one wants more than just performing some uncertain number of
 matrix-vector products — one wants to do as little of these operations
 as possible when solving linear equation systems. In theory, we could use
-the CG method without preconditioning, however, that would not by very
+the CG method without preconditioning, however, that would not be very
 efficient. Rather, one uses preconditioners for improving speed. On the
 other hand, most of the more frequently used preconditioners such as
 Jacobi, SSOR, ILU or algebraic multigrid (AMG) can now no longer be used here
@@ -444,23 +444,24 @@
 because their implementation requires knowledge of the elements of the
 system matrix.
 
 One solution is to use multigrid methods as shown in @ref step_16
-"step-16". They are known to be very fast, and
-they are suitable for our purpose here since they can be designed based
-purely on matrix-vector products. All one needs to do is to find a smoother
-that works with matrix-vector products only. One such candidate would be a
-damped Jacobi iteration, but that one is often not sufficiently good in
-damping high-frequency errors than one would like. A Chebyshev
-preconditioner, eventually, is what we use here. It can be seen as an
-extension of the Jacobi method by using Chebyshev polynomials. With degree
-zero, the Jacobi method is retrieved, whereas higher order corrections
-improve the smoothing properties if some parameters are chosen
-correctly. The effectiveness of Chebyshev smoothing in multigrid has been
-demonstrated, e.g., by M. Adams, M. Brezina, J. Hu, R. Tuminaro: Parallel
-multigrid smoothers: polynomial versus Gauss–Seidel,
-J. Comput. Phys. 188:593–610, 2003. This publication also names one
-more advantage of Chebyshev smoothers, namely that they are easy to
-parallelize, whereas SOR/Gauss–Seidel rely on substitutions, which can
-often only be tackled by working on diagonal subblocks of the matrix, which
+"step-16". They are known to be very fast, and they are suitable for our
+purpose since they can be designed based purely on matrix-vector products
+(plus knowledge of the diagonal entries of the matrix). All one needs
+to do is to find a smoother that works with matrix-vector products only. One
+such candidate would be a damped Jacobi iteration, but that one is often not
+as good at damping high-frequency errors as one would like. What we use
+here instead is a Chebyshev preconditioner. It can be seen as
+an extension of the Jacobi method by using Chebyshev polynomials. With
+degree zero, the Jacobi method is retrieved, whereas higher order
+corrections improve the smoothing properties if some parameters are suitably
+chosen. The effectiveness of Chebyshev smoothing in multigrid has been
+demonstrated, e.g., in the article M. Adams, M. Brezina, J. Hu,
+R. Tuminaro. Parallel multigrid smoothers: polynomial versus
+Gauss–Seidel, J. Comput. Phys. 188:593–610, 2003. This
+publication also identifies one more advantage of Chebyshev smoothers that
+we exploit here, namely that they are easy to parallelize, whereas
+SOR/Gauss–Seidel smoothing relies on substitutions, which can often
+only be parallelized by working on diagonal subblocks of the matrix, which
 decreases efficiency.
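To make the degree convention behind this patch concrete, here is a minimal, self-contained sketch of a Jacobi-preconditioned Chebyshev smoother of the kind described above. It assumes a generic operator with a vmult() member and the inverse diagonal of the matrix; the names (chebyshev_smooth, inv_diag) are made up for this illustration, and the coefficients follow the classical three-term Chebyshev recurrence, not the library's actual implementation. With degree == 0 the function performs exactly one damped Jacobi step; every further degree costs one additional matrix-vector product.

@code
#include <cstddef>
#include <vector>

typedef std::vector<double> Vec;

// One smoother application for A x = b with a zero initial guess; the
// Operator type only needs vmult(dst,src). lambda_min and lambda_max
// bound the eigenvalues of D^{-1} A.
template <class Operator>
void chebyshev_smooth (const Operator &A, const Vec &inv_diag,
                       Vec &dst, const Vec &src,
                       const unsigned int degree,
                       const double lambda_min, const double lambda_max)
{
  const std::size_t n = src.size();
  const double theta  = 0.5 * (lambda_max + lambda_min);   // interval center
  const double delta  = 0.5 * (lambda_max - lambda_min);   // half width
  const double sigma1 = theta / delta;

  // Degree zero: a single damped Jacobi step, dst = D^{-1} src / theta.
  Vec r(src), d(n), Ad(n);
  for (std::size_t i=0; i<n; ++i)
    dst[i] = d[i] = inv_diag[i] * r[i] / theta;

  // Each further degree costs one matrix-vector product and adds one
  // term of the Chebyshev polynomial via the three-term recurrence.
  double rho = 1.0 / sigma1;
  for (unsigned int k=0; k<degree; ++k)
    {
      A.vmult (Ad, d);
      const double rho_next = 1.0 / (2.0 * sigma1 - rho);
      for (std::size_t i=0; i<n; ++i)
        {
          r[i]   -= Ad[i];                       // r = src - A * dst
          d[i]    = rho_next * (rho * d[i] +
                                2.0 / delta * inv_diag[i] * r[i]);
          dst[i] += d[i];
        }
      rho = rho_next;
    }
}
@endcode

This is exactly the accounting the patch adopts below: the degree equals the number of matrix-vector products per vmult() application, so degree zero degenerates to damped Jacobi.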
 The implementation into the multigrid framework is then very
@@ -470,10 +471,10 @@ step_16 "step-16".

The test case

 In order to demonstrate the capabilities of the method, we work on a rather
-difficult problem, i.e., a more or less unstructured mesh (where the
-Jacobians are different from cell to cell) and with non-constant
-coefficients in a Poisson equation. If we worked on a constant-coefficient
-case with structured mesh, we could decrease the operation count by a factor
-of 4 in 2D and 6 in 3D by building a local matrix (which is then the same
-for all cells), and doing the products as in the first step of the above
-code pieces.
+general Poisson problem, based on a more or less unstructured mesh (where
+the Jacobians are different from cell to cell), higher order mappings to a
+curved boundary, and a non-constant coefficient in the equation. If we
+worked on a constant-coefficient case with structured mesh, we could
+decrease the operation count by a factor of 4 in 2D and 6 in 3D by building
+a local matrix (which is then the same for all cells), and doing the
+products as in the first development step of the above code pieces.

diff --git a/deal.II/examples/step-37/step-37.cc b/deal.II/examples/step-37/step-37.cc
index 4c9704acc1..49b9131ee2 100644
--- a/deal.II/examples/step-37/step-37.cc
+++ b/deal.II/examples/step-37/step-37.cc
@@ -24,7 +24,6 @@
 #include <...>
 #include <...>
-#include <...>
 #include <...>
 #include <...>
@@ -44,9 +43,8 @@
 #include <...>
 #include <...>
-#include <...>
-#include <...>
 #include <...>
+#include <...>
 #include <...>
 #include <...>
@@ -1302,7 +1300,7 @@ void LaplaceProblem<dim>::solve ()
       // $[\lambda_{\max}/10,\lambda_{\max}]$.
       typename SMOOTHER::AdditionalData smoother_data;
       smoother_data.smoothing_range = 10.;
-      smoother_data.degree = fe.degree+1;
+      smoother_data.degree = fe.degree;
       smoother_data.eig_cg_n_iterations = 4+2*fe.degree;
       mg_smoother.initialize(mg_matrices, smoother_data);

diff --git a/deal.II/lac/include/lac/precondition.h b/deal.II/lac/include/lac/precondition.h
index 37f161bd5c..a7fd5150bf 100644
--- a/deal.II/lac/include/lac/precondition.h
+++ b/deal.II/lac/include/lac/precondition.h
@@ -844,19 +844,21 @@ public:
     /**
      * Constructor.
      */
-    AdditionalData (const unsigned int degree = 1,
+    AdditionalData (const unsigned int degree = 0,
                     const double smoothing_range = 0.,
                     const bool nonzero_starting = false,
                     const unsigned int eig_cg_n_iterations = 8,
                     const double eig_cg_residual = 1e-2);
-
+
     /**
      * This determines the degree of the
      * Chebyshev polynomial. The degree
      * of the polynomial gives the number
      * of matrix-vector products to be
      * performed for one application of
-     * the vmult() operation.
+     * the vmult() operation. Degree zero
+     * corresponds to a damped Jacobi
+     * method.
      */
     unsigned int degree;

@@ -980,13 +982,13 @@ private:
     SmartPointer<...> matrix_ptr;

     /**
-     * Internal vector used for
+     * Internal vector used for the
      * vmult operation.
      */
     mutable VECTOR update1;

     /**
-     * Internal vector used for
+     * Internal vector used for the
      * vmult operation.
      */
     mutable VECTOR update2;

@@ -1550,8 +1552,8 @@ PreconditionChebyshev<MATRIX,VECTOR>::initialize (const MATRIX &matrix,
       // weighted by its diagonal.
       //
       // TODO: can we obtain this with the
-      // standard CG method? we would need to
-      // read the logfile in that case, which
+      // regular CG implementation? we would need
+      // to read the logfile in that case, which
       // does not seem feasible.
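As a usage note on the interface changed above (not part of the patch itself): the preconditioner can also be exercised outside the multigrid context. The sketch below runs PreconditionChebyshev as a stand-alone preconditioner for CG, relying only on the AdditionalData fields and the initialize()/vmult() entry points visible in this file. The include paths follow the in-tree layout of the time, and SystemMatrix is a placeholder for any matrix type offering vmult() and the diagonal information the preconditioner queries, so treat the details as assumptions rather than a verbatim recipe.

@code
#include <lac/precondition.h>
#include <lac/solver_cg.h>
#include <lac/solver_control.h>
#include <lac/vector.h>

using namespace dealii;

template <class SystemMatrix>
void solve_with_chebyshev (const SystemMatrix   &matrix,
                           Vector<double>       &solution,
                           const Vector<double> &rhs)
{
  typedef PreconditionChebyshev<SystemMatrix,Vector<double> > Chebyshev;

  typename Chebyshev::AdditionalData data;
  data.degree = 2;                 // two corrections; 0 would be damped Jacobi
  data.smoothing_range = 10.;      // act on [lambda_max/10, lambda_max]
  data.eig_cg_n_iterations = 10;   // CG steps for the eigenvalue estimate

  Chebyshev preconditioner;
  preconditioner.initialize (matrix, data);

  SolverControl solver_control (1000, 1e-12 * rhs.l2_norm());
  SolverCG<Vector<double> > cg (solver_control);
  cg.solve (matrix, solution, rhs, preconditioner);
}
@endcode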
   double max_eigenvalue, min_eigenvalue;
   {
@@ -1608,6 +1610,8 @@ PreconditionChebyshev<MATRIX,VECTOR>::initialize (const MATRIX &matrix,
       max_eigenvalue = T.eigenvalue(T.n()-1);
     }

+  // include a safety factor since the CG
+  // method will in general not be converged
   const double beta = 1.2 * max_eigenvalue;

   const double alpha = (data.smoothing_range > 0 ?
                         max_eigenvalue / data.smoothing_range :
@@ -1642,7 +1646,7 @@ PreconditionChebyshev<MATRIX,VECTOR>::vmult (VECTOR &dst,
       update1.equ(-1.,dst);
     }

-  for (unsigned int k=0; k<data.degree-1; ++k)
+  for (unsigned int k=0; k<data.degree; ++k)
    {
      matrix_ptr->vmult (update2, dst);
      update2 -= src;
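The hunk above adds the safety-factor comment: the largest eigenvalue is estimated with a few (generally unconverged) CG iterations via the associated Lanczos tridiagonal matrix T, then enlarged by 20% before the Chebyshev interval is set up. As a simpler stand-in for that CG/Lanczos estimator, the sketch below obtains a rough lambda_max by power iteration and applies the same safety logic; estimate_lambda_max and the other names are hypothetical, and power iteration is deliberately substituted here for the patch's CG-based estimate.

@code
#include <cmath>
#include <cstddef>
#include <vector>

typedef std::vector<double> Vec;

// Rough estimate of the largest eigenvalue by a fixed number of
// power-iteration steps; the Operator type only needs vmult(dst,src)
// and should apply the (diagonally preconditioned) operator D^{-1} A.
template <class Operator>
double estimate_lambda_max (const Operator &A, const std::size_t n,
                            const unsigned int n_iterations)
{
  Vec x(n, 1.0 / std::sqrt(double(n))), y(n);
  double lambda = 0.;
  for (unsigned int it=0; it<n_iterations; ++it)
    {
      A.vmult (y, x);
      double dot = 0., norm = 0.;
      for (std::size_t i=0; i<n; ++i)
        {
          dot  += x[i] * y[i];   // Rayleigh quotient of the unit vector x
          norm += y[i] * y[i];
        }
      lambda = dot;
      norm = std::sqrt(norm);
      for (std::size_t i=0; i<n; ++i)
        x[i] = y[i] / norm;      // normalize for the next sweep
    }
  return lambda;
}

// Mirrors the logic in the hunk above: widen the upper bound because
// the estimate is not converged, then derive the lower end of the
// smoothing interval from the smoothing range:
//   const double max_eigenvalue = estimate_lambda_max (A, n, 10);
//   const double beta  = 1.2 * max_eigenvalue;               // safety factor
//   const double alpha = max_eigenvalue / smoothing_range;   // e.g. range 10.
@endcode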