From 6cac8760daa4392095a8b458f517c4fb46b99001 Mon Sep 17 00:00:00 2001
From: Wolfgang Bangerth
Date: Thu, 3 Sep 2009 18:25:58 +0000
Subject: [PATCH] Minor proof reading -- very good text already!

git-svn-id: https://svn.dealii.org/trunk@19378 0785d39b-7218-0410-832d-ea1e28bc413d
---
 deal.II/examples/step-37/doc/intro.dox | 107 ++++++++++++++-----------
 1 file changed, 58 insertions(+), 49 deletions(-)

diff --git a/deal.II/examples/step-37/doc/intro.dox b/deal.II/examples/step-37/doc/intro.dox
index f9f3405abe..ce94b64b38 100644
--- a/deal.II/examples/step-37/doc/intro.dox
+++ b/deal.II/examples/step-37/doc/intro.dox
@@ -6,7 +6,7 @@

Introduction

-This example shows how to implement a matrix-free method, that is, a method
+This example shows how to implement a matrix-free method, that is, a method
that does not explicitly store the matrix elements, for a second-order
Poisson equation with variable coefficients on an unstructured
mesh.
@@ -15,17 +15,17 @@ mesh.

Philosophic view on usual matrix-vector products

-In most of tutorial programs the code is built around assembling some sparse
-matrix and solving a linear equation system based on that matrix. The time
-spent in such programs is due to the construction of the sparse matrix
-(assembling) and performing some matrix-vector products (possibly together
+In most of the deal.II tutorial programs the code is built around assembling some sparse
+matrix and solving a linear equation system based on that matrix. The run
+time of such programs is mostly spent in the construction of the sparse matrix
+(assembling) and in performing matrix-vector products (possibly together
with some substitutions like in SSOR preconditioners) in an iterative Krylov
-method. This is a general concept in finite element program. Depending on
+method. This is a general concept in finite element programs. Depending on
the quality of the linear solver and the complexity of the equation to be
solved, between 40 and 95 percent of the computational time is spent in
-performing sparse matrix-vector products.
+performing sparse matrix-vector products.

-Let us shortly look at a simplified version of code for the matrix-vector
+Let us briefly look at a simplified version of code for the matrix-vector
product (the actual implementation in deal.II uses a different counter for
the innermost loop, which avoids having one additional counter variable):
@code
@@ -35,10 +35,10 @@ for (unsigned int row=0; row
matrix_values is continuously traveling from main memory into the CPU in
order to be multiplied with some vector element determined by the
-other array matrix_indices and add this product to a sum that
+other array column_indices and added to a sum that
eventually is written into the output vector. Let us assume for simplicity
that all the vector elements are sitting in the CPU (or some fast memory
like caches) and that we use a compressed storage scheme for the sparse
matrix, i.e., the format deal.II matrices (as well as PETSc and Trilinos
matrices) use. Then each matrix element corresponds to 12 bytes of data, 8
-bytes for the the respective element in matrix_values, and 4
-bytes for the unsigned integer position matrix_indices that
+bytes for the respective element in matrix_values, and 4
+bytes for the unsigned integer position column_indices that
tells which vector element we actually use for multiplication. Here we
neglect the additional array that tells us the ranges of individual rows in
the matrix. With that 12 bytes of data, we perform two floating point
@@ -77,26 +77,32 @@ What makes things worse is that today's processors have multiple cores, and
multiple cores have to compete for memory bandwidth. Imagine we have 8 cores
available with a theoretical peak performance of 96 GFLOPs. However, they can
only get about 35 GB/s of data. For our matrix-vector product, we would
-get a performance of about 6 GFLOPS, which is at nightmarish 6 precent of
-the processor's peak performance. Things won't get better in the future,
-rather worse. Memory bandwidth will most likely continue to grow more slowly
-than the number of cores (i.e., the computing power). Then we will probably
-not see that much of an improvement in the speed of our programs when
-computers become faster in the future.
-
-And remember: We are talking about a more or less ideal situation when the
-vectors do not need to read — in reality, a substantial amount of
-vector entries needs to be read, too, which makes things worse.
-
-Another point might be simply that matrices consume very much memory —
-sometimes too much. A billion matrix entries might seem like an enormous
+get a performance of about 6 GFLOPS, which is a nightmarish 6 per cent of
+the processor's peak performance. In practice, things are likely even
+worse since, for example, a substantial number of
+vector entries need to be read, too.
+
+Things won't get better in the future, rather worse: Memory bandwidth
+will most likely continue to grow more slowly than the number of cores
+(i.e., the computing power). Consequently, we will probably not see
+that much of an improvement in the speed of our programs even though
+computers do become faster in the future when we add up the speed of
+all of their cores.
+
+One may ask how this can be avoided. The root of the problem is that
+matrices just consume a lot of memory — sometimes so much that it
+limits the number of computations that can be done with them. A
+billion matrix entries might seem like an enormous
problem, but in fact, already a Laplace problem in 3D with cubic elements
and 6 million unknowns results in a matrix of that size, or even at 2
million unknowns for sixth order polynomial interpolation.

-This tutorial shows some alternative implementation that is less memory
+This tutorial shows an alternative that is less memory
demanding. This comes at the cost of more operations to be performed in an
-actual matrix-vector product.
+actual matrix-vector product. However, one can hope that because the
+speed with which computations can be done increases faster than the
+speed with which memory can be streamed into a processor, this
+trade-off will be worthwhile.
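
To make the bandwidth argument above concrete, the following sketch spells out the kind of compressed-row loop the text describes. It is only an illustration of the storage scheme discussed above, not the actual deal.II implementation; the function name vmult_crs and the array names row_indices, column_indices and matrix_values are chosen here purely for exposition.

@code
#include <cstddef>
#include <vector>

// Sparse matrix-vector product y = A*x in compressed row storage,
// using the three arrays discussed in the text:
//   row_indices[r] .. row_indices[r+1] : range of entries of row r,
//   column_indices[e]                  : column of entry e,
//   matrix_values[e]                   : value of entry e.
void vmult_crs (const std::vector<std::size_t>  &row_indices,
                const std::vector<unsigned int> &column_indices,
                const std::vector<double>       &matrix_values,
                const std::vector<double>       &x,
                std::vector<double>             &y)
{
  for (unsigned int row=0; row<y.size(); ++row)
    {
      double sum = 0;
      // Every entry streams 12 bytes from main memory (8 for the value,
      // 4 for the unsigned int column index) and performs only two
      // floating point operations on them.
      for (std::size_t e=row_indices[row]; e<row_indices[row+1]; ++e)
        sum += matrix_values[e] * x[column_indices[e]];
      y[row] = sum;
    }
}
@endcode

At the 35 GB/s quoted above, streaming 12 bytes per entry for two floating point operations caps this loop at roughly 2*35/12, i.e. about 6 GFLOPs, which is the figure given in the text.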

Avoiding matrices

@@ -109,14 +115,16 @@ P_\mathrm{cell,{loc-glob}}.
@f}
In this formula, the matrix $P_\mathrm{cell,{loc-glob}}$ is a permutation
matrix that defines the mapping from local degrees of freedom in the cells
-to the global degrees of freedom, which is usually implemented using a
-variable local_dof_indices.
+to the global degrees of freedom. The information from which this
+operator can be built is usually encoded in the
+local_dof_indices variable we have always used in the
+assembly of matrices.

If we are to perform a matrix-vector product, we can hence use that
@f{eqnarray*}
-y &=& A\cdot x = \left(\sum_{j=1}^{\mathrm{n,cells}} P_\mathrm{cell,{loc-glob}}^T
+y &=& A\cdot x = \left(\sum_{\text{cell}=1}^{\mathrm{n,cells}} P_\mathrm{cell,{loc-glob}}^T
A_\mathrm{cell} P_\mathrm{cell,{loc-glob}}\right) \cdot x\\
-&=& \sum_{j=1}^{\mathrm{n,cells}} P_\mathrm{cell,{loc-glob}}^T
+&=& \sum_{\text{cell}=1}^{\mathrm{n,cells}} P_\mathrm{cell,{loc-glob}}^T
A_\mathrm{cell} x_\mathrm{cell},
@f}
where $x_\mathrm{cell}$ are the values of x at the degrees of freedom
@@ -131,18 +139,18 @@ MatrixFree::vmult (Vector &dst,
  dst = 0;

  QGauss<dim>   quadrature_formula(fe.degree+1);
-  FEValues<dim> fe_values (fe, quadrature_formula,
+  FEValues<dim> fe_values (fe, quadrature_formula,
                            update_gradients | update_JxW_values);
-
+
  const unsigned int dofs_per_cell = fe.dofs_per_cell;
  const unsigned int n_q_points    = quadrature_formula.size();
-
+
  FullMatrix<double> cell_matrix (dofs_per_cell, dofs_per_cell);
  Vector<double>     cell_src (dofs_per_cell),
                     cell_dst (dofs_per_cell);
-
+
  std::vector<unsigned int> local_dof_indices (dofs_per_cell);
-
+
  typename DoFHandler<dim>::active_cell_iterator cell = dof_handler.begin_active(),
                                                 endc = dof_handler.end();
@@ -150,7 +158,7 @@ MatrixFree::vmult (Vector &dst,
    {
      cell_matrix = 0;
      fe_values.reinit (cell);
-
+
      for (unsigned int q_point=0; q_point<n_q_points; ++q_point)
  }
@endcode
-Here we neglected boundary conditions. Note how we first generate the local
+Here we neglected boundary conditions as well as any hanging nodes we
+may have. Note how we first generate the local
matrix in the usual way. To form the actual product as expressed in the
above formula, we read in the values of src at the cell-related degrees of
freedom, multiply by the local matrix, and finally add the result
@@ -198,11 +207,11 @@ columns). The matrix D is diagonal and contains the values
JxW(q) (or, rather, dim copies of it).

Every numerical analyst learns in one of her first classes that for
-forming a product of the shape
+forming a product of the form
@f{eqnarray*}
A_\mathrm{cell}\cdot x_\mathrm{cell} = B D B^T \cdot x_\mathrm{cell},
@f}
-one should never be done by forming the matrix-matrix products, but rather by
+one should never form the matrix-matrix products, but rather compute it by
multiplying with the vector from right to left. To put this into code, we can
write:
@code
@@ -210,7 +219,7 @@ write:
  for (; cell!=endc; ++cell)
    {
      fe_values.reinit (cell);
-
+
      cell->get_dof_indices (local_dof_indices);
      for (unsigned int i=0; i<dofs_per_cell; ++i)
-  FEValues<dim> fe_values_reference (fe, quadrature_formula,
+  FEValues<dim> fe_values_reference (fe, quadrature_formula,
                                     update_gradients);
  Triangulation<dim> reference_cell;
  GridGenerator::hyper_cube(reference_cell, 0., 1.);
  fe_values_reference.reinit (reference_cell.begin());
-  FEValues<dim> fe_values (fe, quadrature_formula,
+  FEValues<dim> fe_values (fe, quadrature_formula,
                           update_inverse_jacobians | update_JxW_values);
  for (; cell!=endc; ++cell)
    {
      fe_values.reinit (cell);
-
+
      cell->get_dof_indices (local_dof_indices);
      for (unsigned int i=0; i<dofs_per_cell; ++i)
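
To spell out the right-to-left rule stated above in isolation, here is a minimal, self-contained sketch in plain C++. It deliberately does not use the deal.II classes from the snippets above, and the names apply_BDBt, D_diagonal and n_cols are purely illustrative.

@code
#include <vector>

// Apply A_cell = B D B^T to a cell vector without ever forming the
// matrix-matrix product: multiply by B^T, scale by the diagonal D,
// then multiply by B. B is stored row-major with dofs_per_cell rows
// and n_cols = dim*n_q_points columns; D is given by its diagonal.
std::vector<double>
apply_BDBt (const std::vector<double> &B,
            const std::vector<double> &D_diagonal,
            const std::vector<double> &x_cell,
            const unsigned int         dofs_per_cell,
            const unsigned int         n_cols)
{
  // temp = B^T x_cell
  std::vector<double> temp (n_cols, 0.);
  for (unsigned int i=0; i<dofs_per_cell; ++i)
    for (unsigned int q=0; q<n_cols; ++q)
      temp[q] += B[i*n_cols+q] * x_cell[i];

  // temp = D temp  (D is diagonal, so this is just a pointwise scaling)
  for (unsigned int q=0; q<n_cols; ++q)
    temp[q] *= D_diagonal[q];

  // y_cell = B temp
  std::vector<double> y_cell (dofs_per_cell, 0.);
  for (unsigned int i=0; i<dofs_per_cell; ++i)
    for (unsigned int q=0; q<n_cols; ++q)
      y_cell[i] += B[i*n_cols+q] * temp[q];

  return y_cell;
}
@endcode

Evaluated this way, each cell product costs about 2*dofs_per_cell*n_cols operations plus a diagonal scaling, whereas forming B D B^T on every cell first would cost on the order of dofs_per_cell^2 * n_cols operations before the product with x_cell is even taken.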