<a name="Intro"></a>
<h1>Introduction</h1>
-This example shows how to implement a matrix-free method, that is, a
-method that does not explicitly store the matrix elements, for a
-second-order Poisson equation with variable coefficients on a
-hypercube. The linear system will be solved with a multigrid
-method.
+This example shows how to implement a matrix-free method, that is, a method
+that does not explicitly store the matrix elements, for a second-order Poisson
+equation with variable coefficients on a hypercube. The linear system will be
+solved with a multigrid method, using large-scale parallelism with MPI.
The major motivation for matrix-free methods is the fact that on today's
processors access to main memory (i.e., for objects that do not fit in the
cached data, whereas the latter specifically stores the coefficient for the
Laplacian. In case applications demand it, this specialization could pay
off and would be worthwhile to consider. Note that the implementation in
-deal.II is smart enough to detenct Cartesian or affine geometries where the
+deal.II is smart enough to detect Cartesian or affine geometries where the
Jacobian is constant throughout the cell and need not be stored for every
cell (and indeed often is the same over different cells as well).
DoFs, i.e., DoFs that are owned by a remote processor but accessed by cells
that are owned by the present processor. In the @ref GlossLocallyActiveDof
"glossary" these degrees of freedom are referred to as locally active degrees
-of freedom. The function MatrixFree::initialize_dof_vector() prodides a method
+of freedom. The function MatrixFree::initialize_dof_vector() provides a method
that sets up a vector with this layout. Note that hanging nodes can relate to additional
ghosted degrees of freedom that must be included in the distributed vector but
are not part of the locally active DoFs in the sense of the @ref
GlossLocallyActiveDof "glossary". Moreover, the distributed vector holds the
-some MPI metadata for DoFs that are owned locally but needed by other
+MPI metadata for DoFs that are owned locally but needed by other
processors. A benefit of the design of this vector class is the way ghosted
entries are accessed. In the storage scheme of the vector, the data array
extends beyond the processor-local part of the solution with further vector
PETScWrappers::MPI::Vector and TrilinosWrappers::MPI::Vector data types we
have used in step-40 and step-32 before, but since we do not need any other
parallel functionality of these libraries, we use the
-parallel::distributed::Vector class of deal.II instead of linking in another
+LinearAlgebra::distributed::Vector class of deal.II instead of linking in another
large library in this tutorial program. Also note that the PETSc and Trilinos
vectors do not provide the fine-grained control over ghost entries with direct
array access because they abstract away the necessary implementation details.
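+
+To make the vector layout concrete, the following is a minimal sketch (the
+object names @p matrix_free and @p vec are placeholders, not identifiers taken
+from this program) of how such a vector is typically created and how its ghost
+entries are brought up to date:
+
+@code
+// Assume `matrix_free` is an already initialized MatrixFree<dim,double>
+// object. initialize_dof_vector() sizes the vector for the locally owned
+// DoFs and attaches the ghost (locally active) index layout described above.
+LinearAlgebra::distributed::Vector<double> vec;
+matrix_free.initialize_dof_vector(vec);
+
+// ... write into the locally owned entries of `vec` ...
+
+vec.update_ghost_values(); // fetch read-only copies of the ghosted entries
+@endcode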
increasing number of degrees of freedom. A constant number of iterations
(together with optimal computational properties) means that the computing time
approximately quadruples as the problem size quadruples from one cycle to the
-next. A look at the memory consumption reveals that the code is also very
-efficient in terms of storage. Around 2-4 million degrees of freedom fit into
-1 GB of memory. An interesting fact is that solving one linear system is
-cheaper than the setup, despite not building a matrix (approximately half of
-which is spent in the DoFHandler::distribute_dofs() and
-DoFHandler::distributed_mg_dofs() calls). This shows the high efficiency of
-this approach, but also that the deal.II data structures are quite expensive
-to set up and the setup cost must be amortized over several system solves.
-
-Not much changes if we run the program in three spatial dimensions, with the
-exception that problem sizes grow by a factor eight when refining the mesh and
-the obvious increase in computing times:
+next. The code is also very efficient in terms of storage. Around 2-4 million
+degrees of freedom fit into 1 GB of memory; see also the MPI results below. An
+interesting fact is that solving one linear system is cheaper than the setup,
+despite not building a matrix (approximately half of the setup time is spent
+in the DoFHandler::distribute_dofs() and DoFHandler::distribute_mg_dofs()
+calls). This shows the high efficiency of this approach, but also that the
+deal.II data structures are quite expensive to set up and the setup cost must
+be amortized over several system solves.
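+
+As an illustration of where that setup time goes (a sketch, not code copied
+from this program; depending on the deal.II version, distribute_mg_dofs() may
+take the finite element as an argument), the two dominant calls are simply:
+
+@code
+// Enumerate degrees of freedom on the active mesh and on all multigrid
+// levels; together these account for roughly half of the setup time.
+dof_handler.distribute_dofs(fe);
+dof_handler.distribute_mg_dofs();
+@endcode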
+
+Not much changes if we run the program in three spatial dimensions. Since we
+use uniform mesh refinement, we get eight times as many elements and
+approximately eight times as many degrees of freedom with each cycle:
@code
Cycle 0
<h3>Comparison with a sparse matrix</h3>
In order to understand the capabilities of the matrix-free implementation, we
-compare the performance of the 3d example above with a SparseMatrix
-implementation by measuring both the computation times for the initialization of
-the problem (distribute DoFs, setup and assemble matrices, setup multigrid
-structures) and the actual solution for the matrix-free variant and the
-variant based on sparse matrices. We base the preconditioner on float
-numbers and the actual matrix and vectors on double numbers, as shown
-above. Tests are run on an Intel Core i7-2620M notebook processor (two cores
-and <a href="http://en.wikipedia.org/wiki/Advanced_Vector_Extensions">AVX</a>
+compare the performance of the 3d example above with a sparse matrix
+implementation based on TrilinosWrappers::SparseMatrix by measuring both the
+computation times for the initialization of the problem (distribute DoFs,
+setup and assemble matrices, setup multigrid structures) and the actual
+solution for the matrix-free variant and the variant based on sparse
+matrices. We base the preconditioner on float numbers and the actual matrix
+and vectors on double numbers, as shown above. Tests are run on an Intel Core
+i7-5500U notebook processor (two cores and <a
+href="http://en.wikipedia.org/wiki/Advanced_Vector_Extensions">AVX</a>
support, i.e., four operations on doubles can be done with one CPU
-instruction, which is heavily used in FEEvaluation) and optimized mode. The
-example makes use of multithreading, so both cores are actually used.
+instruction, which is heavily used in FEEvaluation), in optimized mode, and
+with two MPI ranks.
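+
+The split between a single-precision preconditioner and a double-precision
+system corresponds to type choices along the following lines (a sketch that
+mirrors the typedefs used in the program; treat the exact names as
+illustrative):
+
+@code
+// Multigrid level operators work in float, the outer system in double.
+typedef LaplaceOperator<dim, degree_finite_element, double> SystemMatrixType;
+typedef LaplaceOperator<dim, degree_finite_element, float>  LevelMatrixType;
+
+LinearAlgebra::distributed::Vector<double> solution;     // outer CG solve
+LinearAlgebra::distributed::Vector<float>  level_vector; // level smoothing
+@endcode
+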
<table align="center" border="1">
<tr>
</tr>
<tr>
<td align="right">125</td>
- <td align="center">0.0048s</td>
- <td align="center">0.00075s</td>
- <td align="center">0.0023s</td>
- <td align="center">0.00092s</td>
+ <td align="center">0.0042s</td>
+ <td align="center">0.0012s</td>
+ <td align="center">0.0022s</td>
+ <td align="center">0.00095s</td>
</tr>
<tr>
<td align="right">729</td>
- <td align="center">0.014s</td>
- <td align="center">0.0022s</td>
- <td align="center">0.0029s</td>
- <td align="center">0.0028s</td>
+ <td align="center">0.012s</td>
+ <td align="center">0.0040s</td>
+ <td align="center">0.0027s</td>
+ <td align="center">0.0021s</td>
</tr>
<tr>
<td align="right">4,913</td>
- <td align="center">0.10s</td>
+ <td align="center">0.082s</td>
<td align="center">0.012s</td>
- <td align="center">0.014s</td>
<td align="center">0.011s</td>
+ <td align="center">0.0057s</td>
</tr>
<tr>
<td align="right">35,937</td>
- <td align="center">0.80s</td>
- <td align="center">0.14s</td>
- <td align="center">0.087s</td>
- <td align="center">0.065s</td>
+ <td align="center">0.73s</td>
+ <td align="center">0.13s</td>
+ <td align="center">0.048s</td>
+ <td align="center">0.040s</td>
</tr>
<tr>
<td align="right">274,625</td>
- <td align="center">5.93s</td>
- <td align="center">1.05s</td>
- <td align="center">0.60s</td>
- <td align="center">0.43s</td>
+ <td align="center">5.43s</td>
+ <td align="center">1.01s</td>
+ <td align="center">0.33s</td>
+ <td align="center">0.25s</td>
</tr>
<tr>
<td align="right">2,146,689</td>
- <td align="center">46.7s</td>
- <td align="center">8.44s</td>
- <td align="center">4.96s</td>
- <td align="center">3.56s</td>
+ <td align="center">43.8s</td>
+ <td align="center">8.24s</td>
+ <td align="center">2.42s</td>
+ <td align="center">2.06s</td>
</tr>
</table>
program is also a form of global communication. Think about the coarse grid
solve that happens on a single processor: It accumulates the contributions
from all processors before it starts. When completed, the coarse grid solution
-is transfered to finer levels, where more and more processors help in
+is transferred to finer levels, where more and more processors help in
smoothing until reaching the fine grid. Essentially, this is a tree-like pattern over
the processors in the network and controlled by the mesh. As opposed to the
@p MPI_Allreduce operations where the tree in the reduction is optimized to the
this for 1k cores, 8k cores, and 65k cores and see that the problem size can
be varied over almost two orders of magnitude with ideal scaling. The largest
computation shown in this picture involves 292 billion ($2.92 \cdot 10^{11}$)
-degrees of freedom. On a DG computation of 147k DoFs, the above algorithms
-were also run on 532 billion DoFs.
+degrees of freedom. In a DG computation on 147k cores, the above algorithms
+were also run involving up to 549 billion ($2^{39}$) DoFs.
<img src="https://www.dealii.org/images/steps/developer/step-37.scaling_size.png" alt="">
number of threads in the MPI_InitFinalize data structure in the main function,
and set the MatrixFree::AdditionalData::tasks_parallel_scheme to
partition_color to actually do the loop in parallel. This use case is
-discussed in step-48.
\ No newline at end of file
+discussed in step-48.
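+
+A hedged sketch of those two adjustments (the variable names are illustrative
+and not part of this program):
+
+@code
+// In main(): let MPI_InitFinalize create as many threads as the machine
+// provides instead of the single thread requested by this program.
+Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv,
+                                          numbers::invalid_unsigned_int);
+
+// When filling MatrixFree::AdditionalData: select the threaded loop scheme.
+typename MatrixFree<dim,double>::AdditionalData additional_data;
+additional_data.tasks_parallel_scheme =
+  MatrixFree<dim,double>::AdditionalData::partition_color;
+@endcode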
// diagonal entries of the underlying matrix. We need the diagonal for the
// definition of the multigrid smoother. Since we consider a problem with
// variable coefficient, we further implement a method that can fill the
- // coefficiient values.
+ // coefficient values.
//
// Note that the file <code>include/deal.II/matrix_free/operators.h</code>
// already contains an implementation of the Laplacian through the class
// FEEvaluation are designed to access vectors in MPI-local index space also
// when working with multiple processors. Working in local index space means
// that no index translation needs to be performed at the place the vector
- // access happns, apart from the unavoidable indirect addressing. However,
+ // access happens, apart from the unavoidable indirect addressing. However,
// local index spaces are ambiguous: While it is standard convention to
// access the locally owned range of a vector with indices between 0 and the
// local size, the numbering is not so clear for the ghosted entries and
// is used during the restriction phase of the multigrid V-cycle, whereas
// vmult_interface_up is used during the prolongation phase.
//
- // Once the interface matrix created, we set up the remaining Multigrid
+ // Once the interface matrix is created, we set up the remaining Multigrid
// preconditioner infrastructure in complete analogy to step-16 to obtain
// a @p preconditioner object that can be applied to a matrix.
mg::Matrix<LinearAlgebra::distributed::Vector<float> > mg_matrix(mg_matrices);
MGTransferMatrixFree<dim,float> >
preconditioner(dof_handler, mg, mg_transfer);
- // The setup of the multigrid routines was quite easy and one cannot see
+ // The setup of the multigrid routines is quite easy and one cannot see
// any difference in the solve process as compared to step-16. All the
// magic is hidden behind the implementation of the LaplaceOperator::vmult
// operation. Note that we print out the solve time and the accumulated