From: kronbichler
Date: Tue, 3 Nov 2009 20:52:27 +0000 (+0000)
Subject: Update documentation of what happens in the matrix-free function.
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=bfee311ab8cae9261322613fa0d007070815e8b1;p=dealii-svn.git

Update documentation of what happens in the matrix-free function.

git-svn-id: https://svn.dealii.org/trunk@20019 0785d39b-7218-0410-832d-ea1e28bc413d
---

diff --git a/deal.II/examples/step-37/doc/intro.dox b/deal.II/examples/step-37/doc/intro.dox
index 58fd8614d8..40dafdf9c8 100644
--- a/deal.II/examples/step-37/doc/intro.dox
+++ b/deal.II/examples/step-37/doc/intro.dox
@@ -33,15 +33,15 @@ different structure for the innermost loop, thereby avoiding the counter
 variable):
 @code
 // y = A * x
-std::size_t element_index = 0;
+std::size_t element_index = A_row_indices[0];
 for (unsigned int row=0; row<n_rows; ++row)
   ...
 @endcode
 In this code, the array
-matrix_values is continuously traveling from main memory into the
+A_values is continuously traveling from main memory into the
 CPU in order to be multiplied with some vector element determined by the
-other array column_indices, and this product is added to a sum that
+other array A_column_indices, and this product is added to a sum that
 eventually is written into the output vector. Let us assume for simplicity
 that all the vector elements are sitting in the CPU (or some fast memory
 like caches) and that we use a compressed storage scheme for the sparse
 matrix, i.e., the format deal.II matrices (as well as PETSc and Trilinos
 matrices) use. Then each matrix element corresponds to 12 bytes of data, 8
 bytes for the
-respective element in matrix_values, and 4 bytes for the unsigned
-integer position column_indices that indicates which vector element
+respective element in A_values, and 4 bytes for the unsigned
+integer position A_column_indices that indicates which vector element
 we actually use for multiplication. Here we neglect the additional array
 that tells us the ranges of individual rows in the matrix. With those 12
 bytes of data, we perform two floating point operations, a multiplication
 and an addition.
@@ -203,19 +203,20 @@ is much cheaper.
 One way to improve this is to realize that conceptually the local
 matrix can be thought of as the product of three matrices,
 @f{eqnarray*}
-A_\mathrm{cell} = B D B^T,
+A_\mathrm{cell} = B_\mathrm{cell}^T D_\mathrm{cell} B_\mathrm{cell},
 @f}
 where for the example of the Laplace operator
-the $(i,q*\mathrm{dim}+d)$-th element of B is given by
-fe_values.shape_grad(i,q)[d] (i.e., it consists of
-dofs_per_cell rows and dim*n_q_points
-columns). The matrix D is diagonal and contains the values
+the $(q*\mathrm{dim}+d,i)$-th element of B_cell is given by
+fe_values.shape_grad(i,q)[d]. The matrix consists of
+dim*n_q_points rows and dofs_per_cell
+columns. The matrix D_cell is diagonal and contains the values
 fe_values.JxW(q) (or, rather, dim copies of it).
 Every numerical analyst learns in one of her first classes that for
 forming a product of the form
 @f{eqnarray*}
-A_\mathrm{cell}\cdot x_\mathrm{cell} = B D B^T \cdot x_\mathrm{cell},
+A_\mathrm{cell}\cdot x_\mathrm{cell} = B_\mathrm{cell}^T D_\mathrm{cell}
+B_\mathrm{cell} \cdot x_\mathrm{cell},
 @f}
 one should never form the matrix-matrix products, but rather multiply with
 the vector from right to left so that only matrix-vector products are
 formed.
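The excerpt above elides the body of the loop; for orientation, the complete
kernel that this hunk edits plausibly looks like the following sketch. This
is a reconstruction for illustration, not literal tutorial code, assuming the
arrays and sizes (A_values, A_column_indices, A_row_indices, x, y, n_rows)
are given:
@code
// Sketch of the full CRS matrix-vector kernel discussed above.
std::size_t element_index = A_row_indices[0];
for (unsigned int row=0; row<n_rows; ++row)
  {
    double sum = 0;
    // the entries of this row end at position A_row_indices[row+1]
    while (element_index != A_row_indices[row+1])
      {
        // 8 bytes loaded from A_values and 4 bytes from
        // A_column_indices for two floating point operations
        sum += A_values[element_index] * x[A_column_indices[element_index]];
        ++element_index;
      }
    y[row] = sum;
  }
@endcode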
@@ -288,7 +289,32 @@ factored out from the gradient, then we apply the weights of the quadrature,
 and we apply with the transposed Jacobian for preparing the third loop which
 again uses the gradients on the unit cell.
 
-Let's see how we can implement this:
+Let us again write this in terms of matrices. Let the matrix
+B_cell denote the cell-related gradient matrix, with each row
+containing the gradients of all shape functions for one quadrature
+point and coordinate direction. It is constructed by a matrix-matrix
+product as
+@f{eqnarray*}
+B_\mathrm{cell} = J_\mathrm{cell} B_\mathrm{ref\_cell},
+@f}
+where B_ref_cell denotes the gradient on the reference cell
+and J_cell denotes the Jacobian. J_cell is block-diagonal, and the
+block size is equal to the dimension of the problem. Each diagonal block
+is the Jacobian transformation that goes from the reference cell to the
+real cell.
+
+Putting things together, we find that
+@f{eqnarray*}
+A_\mathrm{cell} = B_\mathrm{cell}^T D_\mathrm{cell} B_\mathrm{cell}
+                = B_\mathrm{ref\_cell}^T J_\mathrm{cell}^T
+                  D_\mathrm{cell}
+                  J_\mathrm{cell} B_\mathrm{ref\_cell},
+@f}
+so we calculate the product (starting from the right)
+@f{eqnarray*}
+B_\mathrm{ref\_cell}^T J_\mathrm{cell}^T D_\mathrm{cell} J_\mathrm{cell}
+B_\mathrm{ref\_cell}\cdot x_\mathrm{cell}.
+@f}
 @code
 ...
 FEValues<dim> fe_values_reference (fe, quadrature_formula,
                                    update_gradients);
 ...
 @endcode
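To illustrate this right-to-left evaluation, here is a minimal linear-algebra
sketch using deal.II's FullMatrix and Vector classes. The names x_cell, x1,
y_cell, and jacobian_transformation are ours and not part of the tutorial;
B_ref_cell is stored here in its mathematical orientation, i.e., with
dim*n_q_points rows and dofs_per_cell columns:
@code
// Sketch: evaluate B_ref_cell^T J_cell^T D_cell J_cell B_ref_cell x_cell
// from right to left, so that only matrix-vector products occur.
Vector<double> x1 (dim*n_q_points);
B_ref_cell.vmult (x1, x_cell);                // x1 = B_ref_cell * x_cell

for (unsigned int q=0; q<n_q_points; ++q)     // J_cell^T D_cell J_cell is
  {                                           // block-diagonal: one dim x dim
    Tensor<1,dim> grad;                       // block per quadrature point
    for (unsigned int d=0; d<dim; ++d)
      grad[d] = x1(q*dim+d);
    grad = jacobian_transformation[q] * grad; // fused J^T (JxW) J block
    for (unsigned int d=0; d<dim; ++d)
      x1(q*dim+d) = grad[d];
  }

Vector<double> y_cell (dofs_per_cell);
B_ref_cell.Tvmult (y_cell, x1);               // y_cell = B_ref_cell^T * x1
@endcode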
@@ -388,7 +414,9 @@ there are a few more points done to be even more efficient, namely:

  • We pre-compute the inverse of the Jacobian of the transformation and
    store it in an extra array. This allows us to fuse the three
-   operations (apply Jacobian, multiply with weights, apply transposed
+   operations J_cell^T D_cell J_cell (apply Jacobian, multiply by
+   weights, apply transposed
    Jacobian) into one second-rank tensor that is also symmetric (so we
    only need to store half the tensor); see the sketch after this list.
  • We work on several cells at once when we apply the gradients of the
    unit cell. Since the matrix of reference-cell gradients is the same
    for all cells, this turns many small matrix-vector products into one
    larger matrix-matrix product; this chunking is discussed again, with
    a sketch, in the code comments below.
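To make the first point concrete: for the Laplace operator the real-cell
gradient is J^{-T} times the reference-cell gradient, so the fused
quadrature-point data reduces to the symmetric tensor J^{-1} J^{-T} det(J)
w_q. A minimal sketch of the pre-computation, where inv_jac is assumed to
hold the inverse Jacobian J^{-1} at the current quadrature point (the names
inv_jac and fused are ours):
@code
// Sketch: fuse (apply Jacobian, multiply by weights, apply transposed
// Jacobian) into one symmetric second-rank tensor per quadrature point.
SymmetricTensor<2,dim> fused;
for (unsigned int d=0; d<dim; ++d)
  for (unsigned int e=d; e<dim; ++e)
    {
      double sum = 0;
      for (unsigned int f=0; f<dim; ++f)
        sum += inv_jac[d][f] * inv_jac[e][f];  // (J^{-1} J^{-T})_{de}
      fused[d][e] = sum * fe_values.JxW(q);    // weight and det J included
    }
// Only the dim*(dim+1)/2 independent entries of 'fused' need storing.
@endcode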
diff --git a/deal.II/examples/step-37/step-37.cc b/deal.II/examples/step-37/step-37.cc
index a29ab473da..4abd0d8e8c 100644
--- a/deal.II/examples/step-37/step-37.cc
+++ b/deal.II/examples/step-37/step-37.cc
@@ -282,7 +282,7 @@ class MatrixFree : public Subscriptor
     copy_local_to_global (const WorkStreamData::CopyData &copy,
                           Vector<number> &dst) const;
 
-    FullMatrix<number>      small_matrix;
+    FullMatrix<number>      B_ref_cell;
     Table<2,unsigned int>   indices_local_to_global;
     Table<2,Transformation> derivatives;
 
@@ -433,40 +433,111 @@ set_derivative_data (const unsigned int cell_no,
   // call it simultaneously on many processors,
   // but with different cell range data.
   //
-  // Following the discussion in the
-  // introduction, we try to work on multiple
-  // cells at a time. This is possible
-  // because the small matrix remains the same
-  // on all the cells, and only the
-  // derivative information from the Jacobian
-  // is different. That way, the operation
-  // that is actually the multiplication of
-  // the small matrix with a vector (on the
-  // local dofs) becomes a multiplication of
-  // two full but small matrices with each
-  // other. This is an operation that can be
+  // The goal of this function is to provide
+  // the multiplication of a vector with the
+  // local contributions of a set of cells. As
+  // mentioned in the introduction, if we were
+  // to deal with a single cell, this would
+  // amount to performing the product
+  // @f{eqnarray*}
+  // P^T_\mathrm{cell,local-global} A_\mathrm{cell}
+  // P_\mathrm{cell,local-global} x
+  // @f}
+  // where
+  // @f{eqnarray*}
+  // A_\mathrm{cell} =
+  // B_\mathrm{ref\_cell}^T J_\mathrm{cell}^T
+  // D_\mathrm{cell}
+  // J_\mathrm{cell} B_\mathrm{ref\_cell}
+  // @f}
+  // and $P_\mathrm{cell,local-global}$ is the
+  // transformation from local to global
+  // indices.
+  //
+  // To do this, we would have to do the
+  // following steps:
+  //
+  // 1. Form $x_\mathrm{cell} =
+  //    P_\mathrm{cell,local-global} x$. This is
+  //    done using the command
+  //    ConstraintMatrix::get_dof_values.
+  // 2. Form $x_1 = B_\mathrm{ref\_cell}
+  //    x_\mathrm{cell}$. The vector $x_1$
+  //    contains the gradients on the reference
+  //    cell of the function described by the
+  //    local cell vector, evaluated at the
+  //    quadrature points.
+  // 3. Form $x_2 = J_\mathrm{cell}^T
+  //    D_\mathrm{cell} J_\mathrm{cell}
+  //    x_1$. This is a block-diagonal
+  //    operation, with the block size equal to
+  //    dim. The blocks just
+  //    correspond to the individual quadrature
+  //    points. The operation on each quadrature
+  //    point is implemented by the
+  //    Transformation class object that this
+  //    class is equipped with. Compared to the
+  //    introduction, the matrix
+  //    $D_\mathrm{cell}$ now contains the
+  //    JxW values and the
+  //    inhomogeneous coefficient.
+  // 4. Form $y_\mathrm{cell} =
+  //    B_\mathrm{ref\_cell}^T x_2$. This gives
+  //    the local result of the matrix-vector
+  //    product.
+  // 5. Form $y +=
+  //    P_\mathrm{cell,local-global}^T
+  //    y_\mathrm{cell}$. This adds the local
+  //    result to the global vector, which is
+  //    realized using the method
+  //    ConstraintMatrix::distribute_local_to_global.
+  //    Note that we do this in an extra
+  //    function called
+  //    copy_local_to_global
+  //    because that operation must not be done
+  //    in %parallel, in order to avoid two or
+  //    more processes trying to add to the same
+  //    positions in the result vector $y$.
+  //
+  // The steps 1 to 4 can be done in %parallel
+  // by multiple processes.
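Put together as code, the five steps for a single cell might look like the
following sketch. This is our illustration, not the program's actual
implementation: it uses the mathematical orientation of B_ref_cell, a
hypothetical array transformation[] of quadrature-point operators whose
transform() is assumed to modify dim entries in place, and assumes
local_dof_indices, constraints, x, and y are available:
@code
// Steps 1-5 for one cell (illustration only).
Vector<number> x_cell (dofs_per_cell), x1 (dim*n_q_points),
               y_cell (dofs_per_cell);

// 1. x_cell = P_{cell,local-global} x
constraints.get_dof_values (x, local_dof_indices.begin(),
                            x_cell.begin(), x_cell.end());

// 2. x1 = B_ref_cell x_cell
B_ref_cell.vmult (x1, x_cell);

// 3. x2 = J_cell^T D_cell J_cell x1, block by block (in place)
for (unsigned int q=0; q<n_q_points; ++q)
  transformation[q].transform (&x1(q*dim));

// 4. y_cell = B_ref_cell^T x2
B_ref_cell.Tvmult (y_cell, x1);

// 5. y += P_{cell,local-global}^T y_cell (serialized among threads)
constraints.distribute_local_to_global (y_cell, local_dof_indices, y);
@endcode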
+
+  // Now, it turns out that the most expensive
+  // part of the above is the multiplication
+  // $B_\mathrm{ref\_cell} x_\mathrm{cell}$ in
+  // the second step and the transpose
+  // operation in step 4. Note that the matrix
+  // $J_\mathrm{cell}^T D_\mathrm{cell}
+  // J_\mathrm{cell}$ is block-diagonal, and
+  // hence, its application is cheaper. Since
+  // the matrix $B_\mathrm{ref\_cell}$ is the
+  // same for all cells, all that changes is
+  // the vector $x_\mathrm{cell}$. Hence,
+  // nothing prevents us from collecting
+  // several cell vectors into a (rectangular)
+  // matrix and then performing a matrix-matrix
+  // product. These matrices are both full, but
+  // not very large, having on the order of
+  // dofs_per_cell rows and
+  // columns. This is an operation that can be
   // much better optimized than matrix-vector
   // products. The functions
-  // FullMatrix::mmult
-  // and
+  // FullMatrix::mmult and
   // FullMatrix::mTmult
   // use the BLAS dgemm function (as long as
   // BLAS has been detected in deal.II
   // configuration), which provides optimized
   // kernels for doing this product. In our
   // case, a matrix-matrix product is between
-  // three and five times faster than doing
-  // the matrix-vector product on one cell
-  // after the other. The variables that hold
-  // the solution on the respective cell's
-  // support points and the quadrature points
-  // are thus full matrices. The number of
-  // rows is given by the number of cells
-  // they work on, and the number of columns
-  // is the number of degrees of freedom per
-  // cell for the first and the number of
-  // quadrature points times the number of
-  // components per point for the latter.
+  // three and five times faster than doing the
+  // matrix-vector product on one cell after
+  // the other. The variables that hold the
+  // solution on the respective cell's support
+  // points and the quadrature points are thus
+  // full matrices. The number of rows is given
+  // by the number of cells they work on, and
+  // the number of columns is the number of
+  // degrees of freedom per cell for the first
+  // and the number of quadrature points times
+  // the number of components per point for the
+  // latter.
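In schematic form, the chunked product described in this comment looks as
follows (our sketch; note that the code stores B_ref_cell with dofs_per_cell
rows and n_q_points*n_comp columns, i.e., the transpose of the mathematical
B_ref_cell, and the names n_chunk_cells, cell_values, quad_values are ours):
@code
// Sketch: one matrix-matrix product replaces many small
// matrix-vector products over a chunk of cells.
FullMatrix<number> cell_values (n_chunk_cells, dofs_per_cell);
FullMatrix<number> quad_values (n_chunk_cells, n_q_points*n_comp);

// ... fill each row of cell_values with one cell's local vector ...

cell_values.mmult  (quad_values, B_ref_cell); // to the quadrature points
// ... apply the quadrature-point transformation to each row ...
quad_values.mTmult (cell_values, B_ref_cell); // back to the cell dofs
@endcode
Both products map to a single BLAS dgemm call each, which is where the
three- to fivefold speedup quoted above comes from.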
 template <typename number, class Transformation>
 template <typename CellChunkIterator>
 void
@@ -478,72 +549,6 @@ local_vmult (CellChunkIterator cell_range,
 {
   const unsigned int chunk_size = cell_range->second - cell_range->first;
 
-  // OK, now we are sitting in the loop that
-  // goes over our chunks of cells. What we
-  // need to do is five things: First, we have
-  // to give the full matrices containing the
-  // solution at cell dofs and quadrature
-  // points the correct sizes. We use the
-  // true argument in order to
-  // specify that this should be done fast,
-  // i.e., the field will not be initialized
-  // since we fill them manually in the very
-  // next step anyway. Then, we copy the
-  // source values from the global vector to
-  // the local cell range, and we perform a
-  // matrix-matrix product to transform the
-  // values to the quadrature points. It is a
-  // bit tricky to find out how the matrices
-  // should be multiplied with each other,
-  // i.e., which matrix needs to be
-  // transposed. One way to resolve this is to
-  // look at the matrix dimensions:
-  // solution_cells has
-  // current_chunk_size rows and
-  // matrix_sizes.m columns,
-  // whereas small_matrix has
-  // matrix_sizes.m rows and
-  // matrix_sizes.n columns, which
-  // is also the size of columns in the output
-  // matrix
-  // solution_points. Hence, the
-  // columns of the first matrix are as many as
-  // there are rows in the second, which means
-  // that the product is done non-transposed
-  // for both matrices.
-  //
-  // Once the first product is calculated, we
-  // apply the derivative information on all
-  // the cells and all the quadrature points by
-  // calling the transform
-  // operation of the
-  // Transformation class, and
-  // then use a second matrix-matrix product to
-  // get back to the solution values at the
-  // support points. This time, we need to
-  // transpose the small matrix, indicated by a
-  // mTmult in the operations. The
-  // fifth and last step is to add the local
-  // data into the global vector, which is what
-  // we did in many tutorial programs when
-  // assembling right hand sides. We use the
-  // indices_local_to_global field
-  // to find out how local dofs and global dofs
-  // are related to each other. Since we
-  // simultaneously apply the constraints, we
-  // hand this task off to the ConstraintMatrix
-  // object. Most often, the ConstraintMatrix
-  // function is applied to data
-  // from one cell at a time, but since we work
-  // on a whole chunk of dofs, we can feed the
-  // function with data from all the cells at
-  // once. We do this in an extra function
-  // since we split between %parallel code that
-  // can be run independently (this function)
-  // and code that needs to be synchronized
-  // between threads
-  // (copy_local_to_global
-  // function).
   copy.solutions.reinit (chunk_size,matrix_sizes.m, true);
   copy.first_cell = cell_range->first;
   copy.n_dofs = chunk_size*matrix_sizes.m;
@@ -553,13 +558,13 @@ local_vmult (CellChunkIterator cell_range,
                               &copy.solutions(0,0),
                               &copy.solutions(0,0)+copy.n_dofs);
 
-  copy.solutions.mmult (scratch.solutions, small_matrix);
+  copy.solutions.mmult (scratch.solutions, B_ref_cell);
 
   for (unsigned int i=0, k = copy.first_cell; i<chunk_size; ++i, ++k)
     ...
 
@@ ... @@ MatrixFree<number,Transformation>::vmult_add (Vector<number> &dst,
   // uses the small matrix for determining
   // the number of degrees of freedom per
   // cell (number of rows in
-  // small_matrix). The number
+  // B_ref_cell). The number
   // of quadrature points needs to be passed
   // through the last variable
   // n_points_per_cell, since
   ...

@@ -756,23 +761,23 @@
 template <typename number, class Transformation>
 void
 MatrixFree<number,Transformation>::
 reinit (const unsigned int        n_dofs_in,
         const unsigned int        n_cells_in,
-        const FullMatrix<number> &small_matrix_in,
+        const FullMatrix<number> &B_ref_cell_in,
         const unsigned int        n_points_per_cell)
 {
-  small_matrix = small_matrix_in;
+  B_ref_cell = B_ref_cell_in;
 
   derivatives.reinit (n_cells_in, n_points_per_cell);
-  indices_local_to_global.reinit (n_cells_in, small_matrix.m());
+  indices_local_to_global.reinit (n_cells_in, B_ref_cell.m());
 
   diagonal_is_calculated = false;
 
   matrix_sizes.n_dofs = n_dofs_in;
   matrix_sizes.n_cells = n_cells_in;
-  matrix_sizes.m = small_matrix.m();
-  matrix_sizes.n = small_matrix.n();
+  matrix_sizes.m = B_ref_cell.m();
+  matrix_sizes.n = B_ref_cell.n();
   matrix_sizes.n_points = n_points_per_cell;
-  matrix_sizes.n_comp = small_matrix.n()/matrix_sizes.n_points;
-  Assert(matrix_sizes.n_comp * n_points_per_cell == small_matrix.n(),
+  matrix_sizes.n_comp = B_ref_cell.n()/matrix_sizes.n_points;
+  Assert(matrix_sizes.n_comp * n_points_per_cell == B_ref_cell.n(),
          ExcInternalError());
 
   // One thing to make the matrix-vector
   ...

@@ -850,7 +855,7 @@ template <typename number, class Transformation>
 void
 MatrixFree<number,Transformation>::clear ()
 {
-  small_matrix.reinit(0,0);
+  B_ref_cell.reinit(0,0);
   derivatives.reinit (0,0);
   indices_local_to_global.reinit(0,0);

@@ -921,13 +926,13 @@ MatrixFree<number,Transformation>::calculate_diagonal() const
   for (unsigned int cell=0; cell<matrix_sizes.n_cells; ++cell)
     ...

@@ ... @@ MatrixFree<number,Transformation>::memory_consumption () const
   std::size_t glob_size = derivatives.memory_consumption() +
                           indices_local_to_global.memory_consumption() +
                           constraints.memory_consumption() +
-                          small_matrix.memory_consumption() +
+                          B_ref_cell.memory_consumption() +
                           diagonal_values.memory_consumption() +
                           matrix_sizes.chunks.size()*2*sizeof(unsigned int) +
                           sizeof(*this);
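Finally, how the renamed pieces fit together on the user side of this patch
might look like the following sketch; fe, quadrature_formula,
fe_values_reference, dof_handler, triangulation, and matrix_free are assumed
to be set up as in the rest of the program, and the layout follows the
Assert in reinit (dofs_per_cell rows, dim*n_q_points columns for the scalar
Laplace case):
@code
// Sketch: build the reference-cell gradient matrix in the layout the
// class expects and hand it to MatrixFree::reinit.
FullMatrix<double> B_ref_cell (fe.dofs_per_cell,
                               dim * quadrature_formula.size());
for (unsigned int i=0; i<fe.dofs_per_cell; ++i)
  for (unsigned int q=0; q<quadrature_formula.size(); ++q)
    for (unsigned int d=0; d<dim; ++d)
      B_ref_cell(i, q*dim+d) = fe_values_reference.shape_grad(i,q)[d];

matrix_free.reinit (dof_handler.n_dofs(),
                    triangulation.n_active_cells(),
                    B_ref_cell,
                    quadrature_formula.size());
@endcode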