<h3>Overview</h3>
This program does not introduce any new mathematical ideas; in fact, all it
does is carry out the same computations that step-8 already does, but it does
so in a different manner. Instead of using deal.II's own linear algebra
classes, we build everything on top of classes deal.II provides that wrap
around the linear algebra implementation of the <a
href="http://www.mcs.anl.gov/petsc/" target="_top">PETSc</a> library. And
since PETSc allows the distribution of matrices and vectors across several
computers within an MPI network, the resulting code will even be capable of
solving the problem in %parallel. If you don't know what PETSc is, then this
would be a good time to take a quick glimpse at their homepage.
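To give a first impression of what this looks like in user code, here is a
minimal sketch, not taken from this program, and assuming deal.II was
configured with PETSc and MPI support; the global size and the way it is
partitioned are arbitrary illustration values:
@code
#include <deal.II/lac/petsc_vector.h>

#include <mpi.h>

// Create a distributed vector with 1000 global entries, split as
// evenly as possible among all MPI processes:
void make_distributed_vector()
{
  int n_processes, rank;
  MPI_Comm_size(MPI_COMM_WORLD, &n_processes);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  const unsigned int n_global = 1000;
  const unsigned int n_local =
    n_global / n_processes +
    (rank < static_cast<int>(n_global % n_processes) ? 1 : 0);

  // Each process stores only its n_local elements of the vector:
  dealii::PETScWrappers::MPI::Vector vector(MPI_COMM_WORLD,
                                            n_global,
                                            n_local);
}
@endcode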
As a prerequisite of this program, you need to have PETSc installed, and if
you want to run in %parallel on a cluster, you also need <a
For larger problems, having to store the <i>entire</i> mesh on every processor
will clearly yield a bottleneck. Splitting up the mesh is slightly, though not
much, more complicated to achieve from a user perspective (though it is
<i>much</i> more complicated under the hood), and
we will show how to do this in step-40 and some other programs. There are
numerous occasions where, in the course of discussing how a function of this
from each processor, add them all up, and return the sum to all
processors. Internally, this is implemented using individual messages,
but to the user this is transparent. We call such operations <i>collectives</i>
because <i>all</i> processors participate in them. Collectives allow us
to write programs where not every copy of the executable is doing
something completely different (this would be incredibly difficult to
program) but where all copies are doing the same thing (though on
different data) for themselves, running through the same blocks of code;
then they communicate data through collectives; and then they go back to
doing something for themselves again, running through the same blocks of
code. This is the key to being able to write such programs at all, and it
is also what ensures that they can run on any number of processors,
since we do not have to write different code for each of the participating
processors.
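To illustrate, here is a minimal, self-contained sketch (not code from this
program): deal.II wraps the corresponding MPI operation in
Utilities::MPI::sum, so summing one number per processor and returning the
result to all of them can look like this:
@code
#include <deal.II/base/mpi.h>

#include <iostream>

int main(int argc, char *argv[])
{
  // Initialize MPI (and finalize it automatically at the end of main):
  dealii::Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  // Every copy of the program runs this same code, but computes its
  // own local value -- here simply its rank plus one:
  const unsigned int rank =
    dealii::Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);
  const double local_value = rank + 1.0;

  // The collective: all processors contribute their value, and all of
  // them get back the global sum.
  const double global_sum =
    dealii::Utilities::MPI::sum(local_value, MPI_COMM_WORLD);

  if (rank == 0)
    std::cout << "Sum over all processors: " << global_sum << std::endl;
}
@endcode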
processor is clearly not going to scale: it is going to take forever,
and maybe more importantly no single machine will have enough memory
to store a mesh that has a billion cells (at least not at the time of
writing this). In reality, programs like step-17 and step-18 can therefore
not be run on more than maybe 100 or 200 processors, and even then storing
the Triangulation and DoFHandler objects consumes the vast majority of
memory on each machine.
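A rough back-of-the-envelope estimate illustrates the point (the numbers here
are illustrative assumptions rather than measured values): if storing a single
cell of a Triangulation, together with its share of vertices, edges, and
faces, costs on the order of a few hundred bytes, then a mesh with a billion
cells requires on the order of a few hundred gigabytes of memory on
<i>every</i> machine that stores the entire mesh.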
Consequently, we need to approach the problem differently: to scale to
very large problems, each processor can only store its own little piece
of the Triangulation and DoFHandler objects. deal.II implements such a
scheme in the parallel::distributed namespace and the classes
therein. It builds on an external library, <a
href="http://www.p4est.org/" target="_top">p4est</a> (a play on the expression
<i>parallel forest</i> that describes the parallel storage of a
hierarchically constructed mesh as a forest of quad- or
oct-trees). You need to <a
-href="../../external-libs/p4est.html">install and configure p4est</a>
-but apart from that all of its workings are hidden under the surface
+href="../../external-libs/p4est.html">install and configure p4est,</a>
+but apart from that, all of its workings are hidden under the surface
of deal.II.
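As a minimal sketch of what using this class looks like, assuming deal.II was
configured with p4est and MPI (this snippet is illustrative and not part of
the present program):
@code
#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>

int main(int argc, char *argv[])
{
  dealii::Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  // Each processor stores only its own part of this mesh; p4est
  // manages the distributed storage behind the scenes:
  dealii::parallel::distributed::Triangulation<2> triangulation(
    MPI_COMM_WORLD);
  dealii::GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(5);

  // Only the cells owned by this processor are worked on here:
  for (const auto &cell : triangulation.active_cell_iterators())
    if (cell->is_locally_owned())
      {
        // ... do something on the locally owned cells ...
      }
}
@endcode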
In essence, what the parallel::distributed::Triangulation class and
// number within this universe the processor this job runs on is:
#include <deal.II/base/utilities.h>
// The next one provides a class, ConditionalOStream, that allows us to write
// code that would output things to a stream (such as <code>std::cout</code>)
// on every processor but throws the text away on all but one of them. We
// could achieve the same by simply putting an <code>if</code> statement in
// front of each place where we may generate output, but this doesn't make the
// The last part of this function deals with initializing the matrix with
// accompanying sparsity pattern. As in previous tutorial programs, we use
// the DynamicSparsityPattern as an intermediate with which we
 // then initialize the system matrix. To do so, we have to tell the sparsity
 // pattern its size, but as above, there is no way the resulting object will
// be able to store even a single pointer for each global degree of
// freedom; the best we can hope for is that it stores information about
 // each locally relevant degree of freedom, i.e., all those that we may
// ever touch in the process of assembling the matrix (the
// @ref distributed_paper "distributed computing paper" has a long
// discussion why one really needs the locally relevant, and not the small