From 294878fe2f7909b288ae261c73e73cf26fe92015 Mon Sep 17 00:00:00 2001
From: Martin Kronbichler
Date: Thu, 24 Nov 2016 13:04:53 +0100
Subject: [PATCH] Adjust numbers

---
 examples/step-37/doc/intro.dox   | 17 +++---
 examples/step-37/doc/results.dox | 97 ++++++++++++++++----------------
 examples/step-37/step-37.cc      |  8 +--
 3 files changed, 61 insertions(+), 61 deletions(-)

diff --git a/examples/step-37/doc/intro.dox b/examples/step-37/doc/intro.dox
index 3c8cbe1add..23120b3d16 100644
--- a/examples/step-37/doc/intro.dox
+++ b/examples/step-37/doc/intro.dox
@@ -17,11 +17,10 @@ International Conference on e-Science, 2011.

 <h1>Introduction</h1>

-This example shows how to implement a matrix-free method, that is, a
-method that does not explicitly store the matrix elements, for a
-second-order Poisson equation with variable coefficients on a
-hypercube. The linear system will be solved with a multigrid
-method.
+This example shows how to implement a matrix-free method, that is, a method
+that does not explicitly store the matrix elements, for a second-order Poisson
+equation with variable coefficients on a hypercube. The linear system will be
+solved with a multigrid method and uses large-scale parallelism with MPI.
 
 The major motivation for matrix-free methods is the fact that on today's
 processors access to main memory (i.e., for objects that do not fit in the
@@ -348,7 +347,7 @@ implementing generic differential operators through a common framework of
 cached data, whereas the latter specifically stores the coefficient for the
 Laplacian. In case applications demand for it, this specialization could pay
 off and would be worthwhile to consider. Note that the implementation in
-deal.II is smart enough to detenct Cartesian or affine geometries where the
+deal.II is smart enough to detect Cartesian or affine geometries where the
 Jacobian is constant throughout the cell and needs not be stored for every
 cell (and indeed often is the same over different cells as well).
 
@@ -528,12 +527,12 @@ the processor-local part of the solution as well as data fields for ghosted
 DoFs, i.e. DoFs that are owned by a remote processor but accessed by cells
 that are owned by the present processor. In the @ref GlossLocallyActiveDof
 "glossary" these degrees of freedom are referred to as locally active degrees
-of freedom. The function MatrixFree::initialize_dof_vector() prodides a method
+of freedom. The function MatrixFree::initialize_dof_vector() provides a method
 that sets this design. Note that hanging nodes can relate to additional
 ghosted degrees of freedom that must be included in the distributed vector but
 are not part of the locally active DoFs in the sense of the @ref
 GlossLocallyActiveDof "glossary". Moreover, the distributed vector holds the
-some MPI metadata for DoFs that are owned locally but needed by other
+MPI metadata for DoFs that are owned locally but needed by other
 processors. A benefit of the design of this vector class is the way ghosted
 entries are accessed. In the storage scheme of the vector, the data array
 extends beyond the processor-local part of the solution with further vector
@@ -552,7 +551,7 @@ The design of LinearAlgebra::distributed::Vector is similar to the
 PETScWrappers::MPI::Vector and TrilinosWrappers::MPI::Vector data types we
 have used in step-40 and step-32 before, but since we do not need any other
 parallel functionality of these libraries, we use the
-parallel::distributed::Vector class of deal.II instead of linking in another
+LinearAlgebra::distributed::Vector class of deal.II instead of linking in another
 large library in this tutorial program. Also note that the PETSc and Trilinos
 vectors do not provide the fine-grained control over ghost entries with direct
 array access because they abstract away the necessary implementation details.
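The ghosted-vector design described in the two hunks above (locally owned entries at the front of the local array, ghost entries appended behind them, plus the MPI metadata for the exchange) can be tried out in isolation. The sketch below is not part of the patch: the 100-entries-per-rank layout and the choice of ghost index are made up for illustration, and it assumes a recent deal.II release built with MPI, where the owned range is queried with locally_owned_size() (older releases call this local_size()).

@code
#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/lac/la_parallel_vector.h>

#include <iostream>

using namespace dealii;

int main(int argc, char **argv)
{
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
  const MPI_Comm     comm    = MPI_COMM_WORLD;
  const unsigned int rank    = Utilities::MPI::this_mpi_process(comm);
  const unsigned int n_ranks = Utilities::MPI::n_mpi_processes(comm);

  // Made-up index layout: each rank owns 100 consecutive entries and
  // additionally ghosts the first entry owned by the next rank.
  const types::global_dof_index n_global = 100 * n_ranks;
  IndexSet owned(n_global);
  owned.add_range(100 * rank, 100 * (rank + 1));
  IndexSet ghosted(n_global);
  if (rank + 1 < n_ranks)
    ghosted.add_index(100 * (rank + 1));

  LinearAlgebra::distributed::Vector<double> v(owned, ghosted, comm);

  // Locally owned entries sit at the front of the local array ...
  for (unsigned int i = 0; i < v.locally_owned_size(); ++i)
    v.local_element(i) = rank + 1.0;

  // ... and the ghost entries appended behind them only become valid
  // after an explicit exchange with the owning processor.
  v.update_ghost_values();
  if (rank + 1 < n_ranks)
    std::cout << "rank " << rank << " sees ghost value "
              << v(100 * (rank + 1)) << std::endl;
}
@endcode

Run with, e.g., mpirun -np 2: rank 0 ghosts global index 100, which rank 1 fills with the value 2, and that value only becomes visible on rank 0 after the explicit update_ghost_values() call.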
diff --git a/examples/step-37/doc/results.dox b/examples/step-37/doc/results.dox
index f6050c226d..e2f8f37571 100644
--- a/examples/step-37/doc/results.dox
+++ b/examples/step-37/doc/results.dox
@@ -53,18 +53,18 @@ As in step-16, we see that the number of CG iterations remains constant with
 increasing number of degrees of freedom. A constant number of iterations
 (together with optimal computational properties) means that the computing
 time approximately quadruples as the problem size quadruples from one cycle to the
-next. A look at the memory consumption reveals that the code is also very
-efficient in terms of storage. Around 2-4 million degrees of freedom fit into
-1 GB of memory. An interesting fact is that solving one linear system is
-cheaper than the setup, despite not building a matrix (approximately half of
-which is spent in the DoFHandler::distribute_dofs() and
-DoFHandler::distributed_mg_dofs() calls). This shows the high efficiency of
-this approach, but also that the deal.II data structures are quite expensive
-to set up and the setup cost must be amortized over several system solves.
-
-Not much changes if we run the program in three spatial dimensions, with the
-exception that problem sizes grow by a factor eight when refining the mesh and
-the obvious increase in computing times:
+next. The code is also very efficient in terms of storage. Around 2-4 million
+degrees of freedom fit into 1 GB of memory, see also the MPI results below. An
+interesting fact is that solving one linear system is cheaper than the setup,
+despite not building a matrix (approximately half of which is spent in the
+DoFHandler::distribute_dofs() and DoFHandler::distributed_mg_dofs()
+calls). This shows the high efficiency of this approach, but also that the
+deal.II data structures are quite expensive to set up and the setup cost must
+be amortized over several system solves.
+
+Not much changes if we run the program in three spatial dimensions. Since we
+use uniform mesh refinement, we get eight times as many elements and
+approximately eight times as many degrees of freedom with each cycle:
 
 @code
 Cycle 0
@@ -196,17 +196,18 @@ complexity.

 <h3>Comparison with a sparse matrix</h3>

 In order to understand the capabilities of the matrix-free implementation, we
-compare the performance of the 3d example above with a SparseMatrix
-implementation by measuring both the computation times for the initialization of
-the problem (distribute DoFs, setup and assemble matrices, setup multigrid
-structures) and the actual solution for the matrix-free variant and the
-variant based on sparse matrices. We base the preconditioner on float
-numbers and the actual matrix and vectors on double numbers, as shown
-above. Tests are run on an Intel Core i7-2620M notebook processor (two cores
-and AVX
+compare the performance of the 3d example above with a sparse matrix
+implementation based on TrilinosWrappers::SparseMatrix by measuring both the
+computation times for the initialization of the problem (distribute DoFs,
+setup and assemble matrices, setup multigrid structures) and the actual
+solution for the matrix-free variant and the variant based on sparse
+matrices. We base the preconditioner on float numbers and the actual matrix
+and vectors on double numbers, as shown above. Tests are run on an Intel Core
+i7-5500U notebook processor (two cores and AVX
 support, i.e., four operations on doubles can be done with one CPU
-instruction, which is heavily used in FEEvaluation) and optimized mode. The
-example makes use of multithreading, so both cores are actually used.
+instruction, which is heavily used in FEEvaluation), optimized mode, and two
+MPI ranks.
@@ -223,45 +224,45 @@ example makes use of multithreading, so both cores are actually used.
   <tr>
     <td align="right">125</td>
-    <td align="center">0.0048s</td>
-    <td align="center">0.00075s</td>
-    <td align="center">0.0023s</td>
-    <td align="center">0.00092s</td>
+    <td align="center">0.0042s</td>
+    <td align="center">0.0012s</td>
+    <td align="center">0.0022s</td>
+    <td align="center">0.00095s</td>
   </tr>
   <tr>
     <td align="right">729</td>
-    <td align="center">0.014s</td>
-    <td align="center">0.0022s</td>
-    <td align="center">0.0029s</td>
-    <td align="center">0.0028s</td>
+    <td align="center">0.012s</td>
+    <td align="center">0.0040s</td>
+    <td align="center">0.0027s</td>
+    <td align="center">0.0021s</td>
   </tr>
   <tr>
     <td align="right">4,913</td>
-    <td align="center">0.10s</td>
+    <td align="center">0.082s</td>
     <td align="center">0.012s</td>
-    <td align="center">0.014s</td>
+    <td align="center">0.011s</td>
     <td align="center">0.0057s</td>
   </tr>
   <tr>
     <td align="right">35,937</td>
-    <td align="center">0.80s</td>
-    <td align="center">0.14s</td>
-    <td align="center">0.087s</td>
-    <td align="center">0.065s</td>
+    <td align="center">0.73s</td>
+    <td align="center">0.13s</td>
+    <td align="center">0.048s</td>
+    <td align="center">0.040s</td>
   </tr>
   <tr>
     <td align="right">274,625</td>
-    <td align="center">5.93s</td>
-    <td align="center">1.05s</td>
-    <td align="center">0.60s</td>
-    <td align="center">0.43s</td>
+    <td align="center">5.43s</td>
+    <td align="center">1.01s</td>
+    <td align="center">0.33s</td>
+    <td align="center">0.25s</td>
   </tr>
   <tr>
     <td align="right">2,146,689</td>
-    <td align="center">46.7s</td>
-    <td align="center">8.44s</td>
-    <td align="center">4.96s</td>
-    <td align="center">3.56s</td>
+    <td align="center">43.8s</td>
+    <td align="center">8.24s</td>
+    <td align="center">2.42s</td>
+    <td align="center">2.06s</td>
   </tr>
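All matrix-free timings in the table above are produced by FEEvaluation-based cell loops of the kind the hunk refers to. The following sketch is not taken from step-37; it only shows the basic shape of one operator application for a constant-coefficient Laplacian on a serial mesh. It is written against a recent deal.II 9.x release (older releases pass booleans to evaluate()/integrate() instead of EvaluationFlags), and the mesh size, polynomial degree, and vector contents are arbitrary choices for illustration.

@code
#include <deal.II/base/mpi.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/mapping_q1.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/matrix_free/fe_evaluation.h>
#include <deal.II/matrix_free/matrix_free.h>

#include <iostream>

using namespace dealii;

int main(int argc, char **argv)
{
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

  constexpr unsigned int dim    = 3;
  constexpr unsigned int degree = 2;

  Triangulation<dim> triangulation;
  GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(3);

  FE_Q<dim>       fe(degree);
  DoFHandler<dim> dof_handler(triangulation);
  dof_handler.distribute_dofs(fe);

  AffineConstraints<double> constraints;
  constraints.close();

  MappingQ1<dim>          mapping;
  MatrixFree<dim, double> matrix_free;
  matrix_free.reinit(mapping, dof_handler, constraints,
                     QGauss<1>(degree + 1));

  LinearAlgebra::distributed::Vector<double> src, dst;
  matrix_free.initialize_dof_vector(src);
  matrix_free.initialize_dof_vector(dst);
  src = 1.;

  // One operator application dst = A * src without ever forming A: on each
  // vectorized batch of cells, evaluate the gradient of src in the quadrature
  // points, apply the quadrature weights and Jacobian data, and test with the
  // gradients of the shape functions.
  matrix_free.cell_loop(
    [](const MatrixFree<dim, double>                    &data,
       LinearAlgebra::distributed::Vector<double>       &dst_vec,
       const LinearAlgebra::distributed::Vector<double> &src_vec,
       const std::pair<unsigned int, unsigned int>      &cell_range) {
      FEEvaluation<dim, degree> phi(data);
      for (unsigned int cell = cell_range.first; cell < cell_range.second;
           ++cell)
        {
          phi.reinit(cell);
          phi.read_dof_values(src_vec);
          phi.evaluate(EvaluationFlags::gradients);
          for (unsigned int q = 0; q < phi.n_q_points; ++q)
            phi.submit_gradient(phi.get_gradient(q), q);
          phi.integrate(EvaluationFlags::gradients);
          phi.distribute_local_to_global(dst_vec);
        }
    },
    dst,
    src,
    /*zero_dst_vector =*/true);

  std::cout << "||A x||_2 = " << dst.l2_norm() << std::endl;
}
@endcode

The tutorial itself wraps this loop into a LaplaceOperator class with a variable coefficient, constraints, and a float-precision multigrid variant; the point of the sketch is merely that no matrix entry is ever stored or loaded from memory.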
@@ -316,7 +317,7 @@ network, has a latency of 1e-4 seconds. The multigrid V-cycle used in this
 program is also a form of global communication. Think about the coarse grid
 solve that happens on a single processor: It accumulates the contributions
 from all processors before it starts. When completed, the coarse grid solution
-is transfered to finer levels, where more and more processors help in
+is transferred to finer levels, where more and more processors help in
 smoothing until the fine grid. Essentially, this is a tree-like pattern over
 the processors in the network and controlled by the mesh. As opposed to the
 @p MPI_Allreduce operations where the tree in the reduction is optimized to the
@@ -370,8 +371,8 @@ problem size until an upper limit where the computation exhausts memory. We do
 this for 1k cores, 8k cores, and 65k cores and see that the problem size can
 be varied over almost two orders of magnitude with ideal scaling. The largest
 computation shown in this picture involves 292 billion ($2.92 \cdot 10^{11}$)
-degrees of freedom. On a DG computation of 147k DoFs, the above algorithms
-were also run on 532 billion DoFs.
+degrees of freedom. On a DG computation of 147k cores, the above algorithms
+were also run involving up to 549 billion (2^39) DoFs.
@@ -407,4 +408,4 @@ shared memory region of one node. To use this, one would need to both set the
 number of threads in the MPI_InitFinalize data structure in the main function,
 and set the MatrixFree::AdditionalData::tasks_parallel_scheme to
 partition_color to actually do the loop in parallel. This use case is
-discussed in step-48.
\ No newline at end of file
+discussed in step-48.
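The last hunk above names the two ingredients for combining MPI with multithreading: request threads in MPI_InitFinalize and select the partition_color task scheme in MatrixFree::AdditionalData. A minimal sketch of just these two settings follows; it is not part of the patch, assumes a recent deal.II 9.x built with MPI and p4est, and the mesh and element choices are placeholders.

@code
#include <deal.II/base/mpi.h>
#include <deal.II/base/numbers.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/mapping_q1.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/matrix_free/matrix_free.h>

using namespace dealii;

int main(int argc, char **argv)
{
  // Ask for all available threads on each MPI rank instead of the single
  // thread the tutorial uses by default.
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv,
                                            numbers::invalid_unsigned_int);

  constexpr unsigned int dim = 3;

  parallel::distributed::Triangulation<dim> triangulation(MPI_COMM_WORLD);
  GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(4);

  FE_Q<dim>       fe(2);
  DoFHandler<dim> dof_handler(triangulation);
  dof_handler.distribute_dofs(fe);

  AffineConstraints<double> constraints;
  DoFTools::make_hanging_node_constraints(dof_handler, constraints);
  constraints.close();

  // Request the colored task scheme so that the cell loops also run
  // multithreaded within each MPI rank.
  MatrixFree<dim, double>::AdditionalData additional_data;
  additional_data.tasks_parallel_scheme =
    MatrixFree<dim, double>::AdditionalData::partition_color;

  MappingQ1<dim>          mapping;
  MatrixFree<dim, double> matrix_free;
  matrix_free.reinit(mapping, dof_handler, constraints,
                     QGauss<1>(fe.degree + 1), additional_data);
}
@endcode

As the hunk notes, this hybrid setup is discussed in more detail in step-48.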
diff --git a/examples/step-37/step-37.cc b/examples/step-37/step-37.cc
index 4351caa03b..a12ae9601e 100644
--- a/examples/step-37/step-37.cc
+++ b/examples/step-37/step-37.cc
@@ -185,7 +185,7 @@ namespace Step37
   // diagonal entries of the underlying matrix. We need the diagonal for the
   // definition of the multigrid smoother. Since we consider a problem with
   // variable coefficient, we further implement a method that can fill the
-  // coefficiient values.
+  // coefficient values.
   //
   // Note that the file include/deal.II/matrix_free/operators.h
   // already contains an implementation of the Laplacian through the class
@@ -492,7 +492,7 @@ namespace Step37
   // FEEvaluation are designed to access vectors in MPI-local index space also
   // when working with multiple processors. Working in local index space means
   // that no index translation needs to be performed at the place the vector
-  // access happns, apart from the unavoidable indirect addressing. However,
+  // access happens, apart from the unavoidable indirect addressing. However,
   // local index spaces are ambiguous: While it is standard convention to
   // access the locally owned range of a vector with indices between 0 and the
   // local size, the numbering is not so clear for the ghosted entries and
@@ -1053,7 +1053,7 @@ namespace Step37
     // is used during the restriction phase of the multigrid V-cycle, whereas
     // vmult_interface_up is used during the prolongation phase.
     //
-    // Once the interface matrix created, we set up the remaining Multigrid
+    // Once the interface matrix is created, we set up the remaining Multigrid
     // preconditioner infrastructure in complete analogy to step-16 to obtain
     // a @p preconditioner object that can be applied to a matrix.
     mg::Matrix<LinearAlgebra::distributed::Vector<float> > mg_matrix(mg_matrices);
@@ -1075,7 +1075,7 @@
                    MGTransferMatrixFree<dim,float> >
     preconditioner(dof_handler, mg, mg_transfer);
 
-    // The setup of the multigrid routines was quite easy and one cannot see
+    // The setup of the multigrid routines is quite easy and one cannot see
     // any difference in the solve process as compared to step-16. All the
     // magic is hidden behind the implementation of the LaplaceOperator::vmult
     // operation. Note that we print out the solve time and the accumulated
-- 
2.39.5