From: Wolfgang Bangerth
Date: Wed, 13 May 2020 21:18:12 +0000 (-0600)
Subject: Updates to the results section of step-50.
X-Git-Tag: v9.3.0-rc1~1634^2~14
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=1dbf01f008e32e3c40342305410253aed1d056e9;p=dealii.git

Updates to the results section of step-50.
---

diff --git a/examples/step-50/doc/results.dox b/examples/step-50/doc/results.dox
index 750b8e87d1..9e9657ed5b 100644
--- a/examples/step-50/doc/results.dox
+++ b/examples/step-50/doc/results.dox
@@ -70,15 +70,24 @@
 allowing for vectorization over 8 doubles (vectorization used only in the
 matrix computations). The code is compiled using gcc 7.1.0 with intel-mpi
 17.0.3. Trilinos 12.10.1 is used for the matrix-based GMG/AMG computations.
 
-The following table gives weak scale timings for this program on up to 256M DoFs
-and 7168 processors. Here, $\mathbb{E}$ is the partition efficiency from the
+We can then gather a variety of information by calling the program
+with the input files that are provided in the directory in which
+step-50 is located. Using these, and adjusting the number of mesh
+refinement steps, we can produce information about how well the
+program scales.
+
+The following table gives weak scaling timings for this program on up to 256M DoFs
+and 7,168 processors. (Recall that weak scaling keeps the number of
+degrees of freedom per processor constant while increasing the number of
+processors; i.e., it considers larger and larger problems.)
+Here, $\mathbb{E}$ is the partition efficiency from the
 introduction (also equal to 1.0/workload imbalance), "Setup" is a combination
 of setup, setup multigrid, assemble, and assemble multigrid from the timing
 blocks, and "Prec" is the preconditioner setup. Ideally all times would stay
 constant over each problem size for the individual solvers, but since the
 partition efficiency decreases from 0.371 to 0.161 from the smallest to the
 largest problem size, we expect to see an approximately $0.371/0.161=2.3$
 times increase in timings
-for GMG.
+for GMG. This is, in fact, pretty close to what we really get:
@@ -116,17 +125,17 @@ for GMG.
                                      MF-GMG                         MB-GMG                          AMG
  Procs  Cycle   DoFs  $\mathbb{E}$   Setup   Prec  Solve  Total     Setup   Prec  Solve   Total     Setup   Prec  Solve  Total
    112     13     4M          0.37   0.742  0.393  0.200  1.335     1.714  2.934  0.716   5.364     1.544  0.456  1.150  3.150
    448     15    16M          0.29   0.884  0.535  0.253  1.672     1.927  3.776  1.190   6.893     1.544  0.456  1.150  3.150
  1,792     17    65M          0.22   1.122  0.686  0.309  2.117     2.171  4.862  1.660   8.693     1.654  0.546  1.460  3.660
  7,168     19   256M          0.16   1.214  0.893  0.521  2.628     2.386  7.260  2.560  12.206     1.844  1.010  1.890  4.744
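The partition efficiency $\mathbb{E}$ in the first column block can be measured directly from the distribution of the level hierarchy across MPI ranks. The following stand-alone sketch is not part of this patch; it assumes the MGTools::workload_imbalance() helper available in recent deal.II releases, and the refinement criterion is an arbitrary choice just to make the hierarchy unbalanced:

@code
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/multigrid/mg_tools.h>

#include <iostream>

int main(int argc, char *argv[])
{
  using namespace dealii;

  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

  // A distributed mesh that also stores the multigrid level hierarchy.
  parallel::distributed::Triangulation<2> triangulation(
    MPI_COMM_WORLD,
    Triangulation<2>::limit_level_difference_at_vertices,
    parallel::distributed::Triangulation<2>::construct_multigrid_hierarchy);

  GridGenerator::hyper_L(triangulation);
  triangulation.refine_global(3);

  // Refine adaptively a few times; with purely global refinement the level
  // hierarchy stays (nearly) perfectly balanced and E would be close to 1.
  for (unsigned int step = 0; step < 4; ++step)
    {
      for (const auto &cell : triangulation.active_cell_iterators())
        if (cell->is_locally_owned() && cell->center().norm() < 0.5)
          cell->set_refine_flag();
      triangulation.execute_coarsening_and_refinement();
    }

  // E = 1 / (workload imbalance of the level hierarchy).
  const double imbalance = MGTools::workload_imbalance(triangulation);

  ConditionalOStream pcout(std::cout,
                           Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) ==
                             0);
  pcout << "Partition efficiency E = " << 1.0 / imbalance << std::endl;
}
@endcode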
@@ -197,14 +206,34 @@ for GMG.
-The following figure gives the strong scaling for each method for cycle 16
-(32M DoFs) and 19 (256M DoFs) on between 56 to 28672 processors. While the
-matrix-based GMG solver and AMG scale similarly and have a similar time to
-solution, the matrix-free GMG solver scales much better and solves the finer
-problem in roughly the same time as the AMG solver for the coarser mesh with
-only an eighth of the number of unknowns.
+On the other hand, the algebraic multigrid in the last set of columns
+is relatively unaffected by the increasing imbalance of the mesh
+hierarchy (because it doesn't use the mesh hierarchy) and the growth
+in time is rather driven by other factors that are well documented in
+the literature (most notably that the algorithmic complexity of
+some parts of algebraic multigrid methods appears to be ${\cal O}(N
+\log N)$ instead of ${\cal O}(N)$ for geometric multigrid).
+
+The upshot of the table above is that the matrix-free geometric multigrid
+method appears to be the fastest approach to solving this equation, even
+if not by a huge margin. Matrix-based methods, on the other hand, are
+consistently the worst.
+
+The following figure provides strong scaling results for each method, i.e.,
+we solve the same problem on more and more processors. Specifically,
+we consider the problems after 16 mesh refinement cycles
+(32M DoFs) and 19 cycles (256M DoFs), on between 56 and 28,672 processors:
+
+[figure: strong scaling of the matrix-free GMG, matrix-based GMG, and AMG solvers]
+
+While the matrix-based GMG solver and AMG scale similarly and have a
+similar time to solution (at least as long as there is a substantial
+number of unknowns per processor -- say, several tens of thousands), the
+matrix-free GMG solver scales much better and solves the finer problem
+in roughly the same time as the AMG solver for the coarser mesh with
+only an eighth of the number of processors. Conversely, it can solve the
+same problem on the same number of processors in about one eighth the
+time.
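The "Setup", "Prec", and "Solve" columns in the table above are wall-clock times accumulated per named section of the program. The following is only a minimal sketch of that measurement pattern using deal.II's TimerOutput class, not code from this patch; the section names simply mirror the timing blocks mentioned earlier, and the commented-out work is a placeholder:

@code
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/mpi.h>
#include <deal.II/base/timer.h>

#include <iostream>

int main(int argc, char *argv[])
{
  using namespace dealii;

  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

  ConditionalOStream pcout(std::cout,
                           Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) ==
                             0);

  // Accumulate wall-clock times per named section; print them only when
  // print_summary() is called at the end.
  TimerOutput computing_timer(MPI_COMM_WORLD,
                              pcout,
                              TimerOutput::never,
                              TimerOutput::wall_times);

  {
    TimerOutput::Scope timing(computing_timer, "setup");
    // ... distribute degrees of freedom, build constraints, ...
  }
  {
    TimerOutput::Scope timing(computing_timer, "assemble");
    // ... assemble the system (or set up matrix-free operators), ...
  }
  {
    TimerOutput::Scope timing(computing_timer, "solve");
    // ... set up the preconditioner and run the CG solve ...
  }

  computing_timer.print_summary();
}
@endcode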

Possible extensions

@@ -218,7 +247,24 @@ you can compare error rates.

Coarse solver

-A more interesting example would involve a complicated coarse mesh (see
-step-49 for inspiration). This then requires a more sophisticated coarse
-solver for the geometric multigrid methods. A common approach here is a switch
-to AMG by assembling the coarse matrix (even for the matrix-free version).
+A more interesting example would involve a more complicated coarse mesh (see
+step-49 for inspiration). The issue in that case is that the coarsest
+level of the mesh hierarchy is actually quite large, and one would
+have to think about ways to solve the coarse level problem
+efficiently. (This is not an issue for algebraic multigrid methods
+because they would just continue to build coarser and coarser levels
+of the matrix, regardless of their geometric origin.)
+
+In the program here, we simply solve the coarse level problem with a
+Conjugate Gradient method without any preconditioner. That is acceptable
+if the coarse problem is really small -- for example, if the coarse
+mesh had a single cell, then the coarse mesh problem has a $9\times 9$
+matrix in 2d, and a $27\times 27$ matrix in 3d; for the coarse mesh we
+use on the $L$-shaped domain of the current program, these sizes are
+$21\times 21$ in 2d and $117\times 117$ in 3d. But if the coarse mesh
+consists of hundreds or thousands of cells, this approach will no
+longer work well, and the coarse solve might start to dominate the
+overall run time of each V-cycle.
+A common approach is then to solve the coarse mesh problem using an
+algebraic multigrid preconditioner; this would then, however, require
+assembling the coarse matrix (even for the matrix-free version) as
+input to the AMG implementation.
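As a rough illustration of that last suggestion, one could wrap a CG solve preconditioned by Trilinos' AMG in deal.II's MGCoarseGridIterativeSolver. This is only a sketch under assumptions, not code from this patch or from step-50: the struct name, the choice of TrilinosWrappers::MPI::Vector as the coarse vector type, and the solver tolerances are all made up for the example, and for the matrix-free variant one would additionally have to transfer between this vector type and the one used on the finer levels:

@code
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/trilinos_precondition.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/multigrid/mg_coarse.h>

#include <memory>

using namespace dealii;

// The vector type the coarse solve operates on.
using CoarseVectorType = TrilinosWrappers::MPI::Vector;

using CoarseSolverType =
  MGCoarseGridIterativeSolver<CoarseVectorType,
                              SolverCG<CoarseVectorType>,
                              TrilinosWrappers::SparseMatrix,
                              TrilinosWrappers::PreconditionAMG>;

// Bundles the objects whose lifetimes must extend over the multigrid cycles.
struct AMGCoarseSolver
{
  SolverControl                     control;
  SolverCG<CoarseVectorType>        cg;
  TrilinosWrappers::PreconditionAMG amg;
  std::unique_ptr<CoarseSolverType> mg_coarse;

  // 'coarse_matrix' is the assembled level-0 matrix. It has to be assembled
  // explicitly even in the otherwise matrix-free variant because AMG needs
  // access to the matrix entries.
  explicit AMGCoarseSolver(const TrilinosWrappers::SparseMatrix &coarse_matrix)
    : control(1000, 1e-12)
    , cg(control)
  {
    amg.initialize(coarse_matrix);
    mg_coarse = std::make_unique<CoarseSolverType>(cg, coarse_matrix, amg);
    // *mg_coarse is what one would hand to the Multigrid object in place of
    // the unpreconditioned coarse-grid solver.
  }
};
@endcode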