From: kronbichler
Date: Sun, 8 Nov 2009 16:14:23 +0000 (+0000)
Subject: Extend the results section. Show comparison with sparse matrices.
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=af76bec649943b7096df90614f4cac2709a27c9a;p=dealii-svn.git

Extend the results section. Show comparison with sparse matrices.

git-svn-id: https://svn.dealii.org/trunk@20072 0785d39b-7218-0410-832d-ea1e28bc413d
---

diff --git a/deal.II/examples/step-37/doc/results.dox b/deal.II/examples/step-37/doc/results.dox
index b05321153e..20d7be7aa4 100644
--- a/deal.II/examples/step-37/doc/results.dox
+++ b/deal.II/examples/step-37/doc/results.dox
@@ -1,46 +1,48 @@

<h1>Results</h1>

-Since this example solves the same problem as @ref step_5 "step-5", we
-refer to the graphical output there. The only difference between the two is
-the solver and the implementation of the matrix-vector products.
+

<h3>Program output</h3>

+
+Since this example solves the same problem as @ref step_5 "step-5" (except for
+a different coefficient), we refer to the graphical output there. Here, we
+evaluate some aspects of the multigrid solver.

When we run this program in 2D for quadratic ($Q_2$) elements, we get the
following output:
@code
Cycle 0
Number of degrees of freedom: 337
-System matrix memory consumption: 0.02573 MBytes.
-Multigrid objects memory consumption: 0.05083 MBytes.
+System matrix memory consumption: 0.02573 MiB.
+Multigrid objects memory consumption: 0.05083 MiB.
Convergence in 10 CG iterations.

Cycle 1
Number of degrees of freedom: 1313
-System matrix memory consumption: 0.09254 MBytes.
-Multigrid objects memory consumption: 0.1794 MBytes.
+System matrix memory consumption: 0.09257 MiB.
+Multigrid objects memory consumption: 0.1794 MiB.
Convergence in 10 CG iterations.

Cycle 2
Number of degrees of freedom: 5185
-System matrix memory consumption: 0.3552 MBytes.
-Multigrid objects memory consumption: 0.6777 MBytes.
+System matrix memory consumption: 0.3553 MiB.
+Multigrid objects memory consumption: 0.6779 MiB.
Convergence in 10 CG iterations.

Cycle 3
Number of degrees of freedom: 20609
-System matrix memory consumption: 1.397 MBytes.
-Multigrid objects memory consumption: 2.644 MBytes.
+System matrix memory consumption: 1.397 MiB.
+Multigrid objects memory consumption: 2.645 MiB.
Convergence in 10 CG iterations.

Cycle 4
Number of degrees of freedom: 82177
-System matrix memory consumption: 5.544 MBytes.
-Multigrid objects memory consumption: 10.46 MBytes.
+System matrix memory consumption: 5.546 MiB.
+Multigrid objects memory consumption: 10.46 MiB.
Convergence in 10 CG iterations.

Cycle 5
Number of degrees of freedom: 328193
-System matrix memory consumption: 22.1 MBytes.
-Multigrid objects memory consumption: 41.64 MBytes.
+System matrix memory consumption: 22.11 MiB.
+Multigrid objects memory consumption: 41.65 MiB.
Convergence in 10 CG iterations.

@endcode

@@ -59,38 +61,246 @@ factor if one quarter):
@code
Cycle 0
Number of degrees of freedom: 517
-System matrix memory consumption: 0.1001 MBytes.
-Multigrid objects memory consumption: 0.1463 MBytes.
+System matrix memory consumption: 0.1001 MiB.
+Multigrid objects memory consumption: 0.1463 MiB.
Convergence in 9 CG iterations.

Cycle 1
Number of degrees of freedom: 3817
-System matrix memory consumption: 0.6613 MBytes.
-Multigrid objects memory consumption: 0.8896 MBytes.
+System matrix memory consumption: 0.6613 MiB.
+Multigrid objects memory consumption: 0.8896 MiB.
Convergence in 10 CG iterations.

Cycle 2
Number of degrees of freedom: 29521
-System matrix memory consumption: 5.099 MBytes.
-Multigrid objects memory consumption: 6.653 MBytes.
+System matrix memory consumption: 5.1 MiB.
+Multigrid objects memory consumption: 6.653 MiB.
Convergence in 10 CG iterations.

Cycle 3
Number of degrees of freedom: 232609
-System matrix memory consumption: 40.4 MBytes.
-Multigrid objects memory consumption: 52.24 MBytes.
+System matrix memory consumption: 40.4 MiB.
+Multigrid objects memory consumption: 52.24 MiB.
Convergence in 11 CG iterations.

Cycle 4
Number of degrees of freedom: 1847617
-System matrix memory consumption: 322 MBytes.
-Multigrid objects memory consumption: 415.1 MBytes.
+System matrix memory consumption: 322 MiB.
+Multigrid objects memory consumption: 415.1 MiB.
Convergence in 11 CG iterations.

@endcode

-A comparison of the memory consumption of the MatrixFree class with a
-standard SparseMatrix object shows that, for the largest problem considered
-in 3D, the memory consumption for SparseMatrix and its sparsity pattern
-would be about 1.3 GBytes, which is a factor of 4 more memory. If combined
-with cubic elements in 3D, the MatrixFree class would decrease memory
-consumption even by a factor of 12.
+
+

<h3>Comparison with a sparse matrix</h3>

+
+In order to understand the capabilities of this class, we compare the memory
+consumption and execution (wallclock) time for assembly and 50 matrix-vector
+products (MV) on a 3D problem with one million unknowns between the classical
+sparse matrix implementation (SpM) and the MatrixFree implementation shown
+here (M-F). Both matrices are based on @p double %numbers. The program is run
+on a 2.8 GHz Opteron processor with the ACML BLAS, once utilizing only one
+core, and once utilizing four cores. The sparse matrix is initialized using a
+CompressedSimpleSparsityPattern for calling the
+DoFTools::make_sparsity_pattern function, and then copied to a SparsityPattern
+object. The boundary nodes are eliminated using the ConstraintMatrix class, so
+that only actual nonzeros are stored in the matrix.
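
The sparse matrix setup described in the last two sentences typically looks
like the following sketch. This is not part of the tutorial program; the
object names @p dof_handler and @p constraints are placeholders, and the
exact overloads may differ in the deal.II version used for these timings:
@code
// Build the sparsity pattern in a compressed format first.
CompressedSimpleSparsityPattern csp (dof_handler.n_dofs());
DoFTools::make_sparsity_pattern (dof_handler, csp);

// Let the constraints (here assumed to hold the boundary values) strip
// out the constrained entries, so that only actual nonzeros remain.
constraints.condense (csp);

// Copy into a static SparsityPattern and attach the matrix to it.
SparsityPattern sparsity_pattern;
sparsity_pattern.copy_from (csp);

SparseMatrix<double> sparse_matrix (sparsity_pattern);
@endcode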

<table>
  <tr>
    <th></th>
    <th colspan="2">Memory consumption</th>
    <th colspan="2">Time assembly</th>
    <th colspan="2">Time 50 MV, 1 CPU</th>
    <th colspan="2">Time 50 MV, 4 CPUs</th>
  </tr>
  <tr>
    <th>element order</th>
    <th>SpM</th><th>M-F</th>
    <th>SpM</th><th>M-F</th>
    <th>SpM</th><th>M-F</th>
    <th>SpM</th><th>M-F</th>
  </tr>
  <tr><td>1</td><td>299 MiB</td><td>394 MiB</td><td>8.09 s</td><td>3.43 s</td><td>5.50 s</td><td>22.4 s</td><td>4.30 s</td><td>11.0 s</td></tr>
  <tr><td>2</td><td>698 MiB</td><td>177 MiB</td><td>12.43 s</td><td>1.32 s</td><td>12.0 s</td><td>18.6 s</td><td>9.10 s</td><td>6.31 s</td></tr>
  <tr><td>3</td><td>1295 MiB</td><td>124 MiB</td><td>41.1 s</td><td>1.31 s</td><td>21.2 s</td><td>23.7 s</td><td>16.0 s</td><td>7.43 s</td></tr>
  <tr><td>4</td><td>2282 MiB</td><td>107 MiB</td><td>117 s</td><td>1.97 s</td><td>40.8 s</td><td>36.3 s</td><td>19.7 s</td><td>10.9 s</td></tr>
  <tr><td>5</td><td>3597 MiB</td><td>96.4 MiB</td><td>510 s</td><td>5.52 s</td><td>75.7 s</td><td>53.9 s</td><td>29.3 s</td><td>15.7 s</td></tr>
  <tr><td>6</td><td>5679 MiB</td><td>96.3 MiB</td><td>2389 s</td><td>26.1 s</td><td>135 s</td><td>79.1 s</td><td>45.8 s</td><td>24.3 s</td></tr>
</table>

+
+There are a few interesting things to observe in the %numbers in this table.
+
+Firstly, we see the disappointing fact that for linear elements the
+MatrixFree class does actually consume more memory than a SparseMatrix with
+its SparsityPattern, despite the efforts made in this program. As mentioned
+earlier, this is mostly because the Transformation data is stored for every
+quadrature point. These are six doubles, and there are about eight times as
+many quadrature points as there are degrees of freedom. To a first
+approximation, the matrix then consumes 384 (= 8*6*sizeof(double)) bytes for
+each degree of freedom. On the other hand, the sparse matrix has a bandwidth
+of 27, so each dof gives rise to about 324 (= 27*12) bytes. A more clever
+implementation would try to compress the Jacobian transformation data, by
+exploiting similarities between the mappings within the cells, as well as
+from one cell to the next. This could dramatically reduce the memory
+requirements, and hence, increase the speed for lower-order implementations.
+
+Secondly, we observe that the memory requirements for a SparseMatrix grow
+quickly as the order of the elements increases, since more and more degrees
+of freedom couple to each other and there are increasingly many entries in
+each row. The matrix-free implementation does not suffer from this
+drawback. Here, the memory consumption even decreases, since the number of
+DoFs that are shared among elements decreases with higher order, which
+reduces the number of quadrature points per degree of freedom. Regarding the
+execution speed, we see that the matrix-free variant gets more competitive
+with higher order, and it does scale better (a speedup of 3.5 with four
+processors compared to the serial case, versus a speedup of 2-2.5 for the
+SparseMatrix). The advantage in %parallel scaling was expected, because the
+matrix-free variant is less memory-bound for higher order implementations.
+
+A third thing, which is unrelated to this tutorial program, is the fact that
+standard matrix assembly gets really slow for high order elements. The
+%numbers shown here are based on the usual assembly routines that many other
+tutorial programs make use of. A closer analysis shows that the cell data
+does not fit into cache anymore. One could circumvent this problem by
+writing the assembly as a matrix-matrix product, and using (cache-aware)
+BLAS implementations.
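
As a small aside, the back-of-the-envelope estimates quoted above (384 versus
324 bytes per degree of freedom for linear elements in 3D) can be reproduced
by a few lines of code. The following snippet is only meant as a check of
that arithmetic and is not part of the tutorial program (it assumes 8-byte
doubles and 4-byte column indices, as on the machine used here):
@code
#include <iostream>

int main ()
{
  // Matrix-free, Q1 in 3D: about eight quadrature points per degree of
  // freedom, each storing six doubles of Jacobian transformation data.
  const unsigned int mf_bytes_per_dof = 8 * 6 * sizeof(double);

  // SparseMatrix: about 27 entries per row, each needing one double for
  // the value and four bytes for the column index in the sparsity pattern.
  const unsigned int sp_bytes_per_dof = 27 * (sizeof(double) + 4);

  std::cout << "matrix-free:   " << mf_bytes_per_dof << " bytes per DoF\n"
            << "sparse matrix: " << sp_bytes_per_dof << " bytes per DoF\n";
  return 0;
}
@endcode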

+For completeness, here comes a similar table for a 2D problem with 5.7
+million unknowns. Since the excess in work for the matrix-free
+implementation is smaller than in 3D, the implementation is more competitive
+here.

<table>
  <tr>
    <th></th>
    <th colspan="2">Memory consumption</th>
    <th colspan="2">Time assembly</th>
    <th colspan="2">Time 50 MV, 4 CPUs</th>
  </tr>
  <tr>
    <th>element order</th>
    <th>SpM</th><th>M-F</th>
    <th>SpM</th><th>M-F</th>
    <th>SpM</th><th>M-F</th>
  </tr>
  <tr><td>1</td><td>659 MiB</td><td>661 MiB</td><td>18.8 s</td><td>6.45 s</td><td>11.0 s</td><td>28.8 s</td></tr>
  <tr><td>2</td><td>1119 MiB</td><td>391 MiB</td><td>15.6 s</td><td>2.46 s</td><td>17.1 s</td><td>16.2 s</td></tr>
  <tr><td>3</td><td>1711 MiB</td><td>318 MiB</td><td>17.4 s</td><td>1.82 s</td><td>23.1 s</td><td>13.7 s</td></tr>
  <tr><td>4</td><td>2434 MiB</td><td>285 MiB</td><td>24.2 s</td><td>1.34 s</td><td>31.1 s</td><td>14.6 s</td></tr>
  <tr><td>5</td><td>3289 MiB</td><td>266 MiB</td><td>35.9 s</td><td>1.26 s</td><td>29.6 s</td><td>16.7 s</td></tr>
  <tr><td>6</td><td>4274 MiB</td><td>254 MiB</td><td>58.0 s</td><td>1.12 s</td><td>35.9 s</td><td>19.4 s</td></tr>
</table>

diff --git a/deal.II/examples/step-37/step-37.cc b/deal.II/examples/step-37/step-37.cc
index ebfcce3157..d021496caf 100644
--- a/deal.II/examples/step-37/step-37.cc
+++ b/deal.II/examples/step-37/step-37.cc
@@ -152,7 +152,7 @@ namespace WorkStreamData
   {}

   template
-  ScratchData::ScratchData (const ScratchData &scratch)
+  ScratchData::ScratchData (const ScratchData &)
                   :
                   solutions ()
   {}
@@ -173,7 +173,7 @@ namespace WorkStreamData
   {}

   template
-  CopyData::CopyData (const CopyData &scratch)
+  CopyData::CopyData (const CopyData &)
                   :
                   ScratchData ()
   {}
@@ -603,7 +603,7 @@ MatrixFree::vmult (Vector &dst,
     // for the multigrid operations to be
     // well-defined): do the same. Since we
     // implement a symmetric operation, we can
-    // refer to the @ vmult_add operation.
+    // refer to the @p vmult_add operation.
 template
 template
 void
@@ -768,42 +768,36 @@ reinit (const unsigned int n_dofs_in,
     // to one chunk, which will determine the
     // size of the full matrix that we work
     // on. If we choose too few cells, then the
-    // gains from using the matrix-matrix product
-    // will not be fully utilized (dgemm tends to
-    // provide more efficiency the larger the
-    // matrix dimensions get). If we choose too
-    // many, we will firstly degrade
-    // parallelization (we need to have
-    // sufficiently independent tasks), and
-    // secondly introduce an inefficiency that
-    // comes from the computer architecture: In
-    // the actual working function above, right
-    // after the first matrix-matrix
-    // multiplication, we transform the solution
-    // on quadrature points by using
-    // derivatives. Obviously, we want to have
-    // fast access to that data, so it should
-    // still be present in processor cache and
-    // not needed to be fetched from main
-    // memory. The total memory usage of the data
-    // on quadrature points should not be more
-    // than about a third of the cache size of
-    // the processor in order to be on the safe
-    // side. Since most of today's processors
-    // provide 512 kB or more cache memory per
-    // core, we choose about 150 kB as a size to
-    // leave some room for other things to be
-    // stored in the CPU. Clearly, this is an
+    // gains from using the matrix-matrix
+    // product will not be fully utilized
+    // (dgemm tends to provide more efficiency
+    // the larger the matrix dimensions get),
+    // so we choose at least 60 cells for one
+    // chunk (except when there are very few
+    // cells, like on the coarse levels of the
+    // multigrid scheme). If we choose too
+    // many, we will degrade parallelization
+    // (we need to have sufficiently
+    // independent tasks). We need to also
+    // think about the fact that most high
+    // performance BLAS implementations
+    // internally work with square
+    // sub-matrices. Choosing as many cells in
+    // a chunk as there are degrees of freedom
+    // on each cell (coded in @p
+    // matrix_sizes.m) respects the BLAS GEMM
+    // design, whenever we exceed 60. Clearly,
+    // the chunk size is an
     // architecture-dependent value and the
-    // interested user can squeeze out some extra
-    // performance by hand-tuning this
-    // parameter. Once we have chosen the number
-    // of cells we collect in one chunk, we
-    // determine how many chunks we have on the
-    // given cell range and recalculate the
+    // interested user can squeeze out some
+    // extra performance by hand-tuning this
+    // parameter. Once we have chosen the
+    // number of cells we collect in one chunk,
+    // we determine how many chunks we have on
+    // the given cell range and recalculate the
     // actual chunk size in order to evenly
     // distribute the chunks.
-  const unsigned int divisor = 150000/(matrix_sizes.n*sizeof(double));
+  const unsigned int divisor = std::max(60U, matrix_sizes.m);
   const unsigned int n_chunks = std::max (matrix_sizes.n_cells/divisor + 1,
                                           2*multithread_info.n_default_threads);
@@ -1223,8 +1217,8 @@ void LaplaceProblem::setup_system ()
   system_matrix.get_constraints().close();
   std::cout.precision(4);
   std::cout << "System matrix memory consumption: "
-            << system_matrix.memory_consumption()/std::pow(2.,20.)
-            << " MBytes."
+            << system_matrix.memory_consumption()/double(1<<20)
+            << " MiB."
             << std::endl;

   solution.reinit (mg_dof_handler.n_dofs());
@@ -1522,8 +1516,8 @@ void LaplaceProblem::solve ()
                              mg_transfer.memory_consumption() +
                              coarse_matrix.memory_consumption());
   std::cout << "Multigrid objects memory consumption: "
-            << multigrid_memory/std::pow(2.,20.)
-            << " MBytes."
+            << multigrid_memory/double(1<<20)
+            << " MiB."
             << std::endl;

   SolverControl solver_control (1000, 1e-12);
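
To illustrate the chunking heuristic described in the rewritten comment above,
the following standalone sketch shows how the chunk count and chunk size come
out for some hypothetical input values. The real code takes @p matrix_sizes.m,
@p matrix_sizes.n_cells and the thread count from the program itself, and its
final rounding of the chunk size may differ from the simple round-up used here:
@code
#include <algorithm>
#include <iostream>

int main ()
{
  // Hypothetical example values: cells in the current range, degrees of
  // freedom per cell (matrix_sizes.m for Q4 elements in 3D), and threads.
  const unsigned int n_cells       = 10000;
  const unsigned int dofs_per_cell = 125;
  const unsigned int n_threads     = 4;

  // At least 60 cells per chunk, or as many cells as there are degrees of
  // freedom per cell, whichever is larger.
  const unsigned int divisor  = std::max (60U, dofs_per_cell);

  // Enough chunks to keep all threads busy ...
  const unsigned int n_chunks = std::max (n_cells/divisor + 1, 2*n_threads);

  // ... and then distribute the cells evenly over the chunks.
  const unsigned int chunk_size = (n_cells + n_chunks - 1) / n_chunks;

  std::cout << "n_chunks = " << n_chunks
            << ", chunk_size = " << chunk_size << std::endl;
  return 0;
}
@endcode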