<h3>Overview</h3>
This program does not introduce any new mathematical ideas; in fact, all it does
is to do the same computations that step-8 already does, but it does so in a
different manner. Instead of using deal.II's linear algebra classes, we build
everything on top of classes deal.II provides that wrap around the linear algebra implementation of the <a
href="http://www.mcs.anl.gov/petsc/" target="_top">PETSc</a> library. And
since PETSc allows the distribution of matrices and vectors across several
computers within an MPI network, the resulting code will even be capable of
solving the problem in %parallel. If you don't know what PETSc is, then this
would be a good time to take a quick glimpse at their homepage.
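To give a flavor of what working with these wrapper classes looks like, here is
a minimal, hedged sketch (not code taken from this program; the sizes and the
way the entries are split are made up for illustration) of creating a PETSc
vector whose entries are distributed across all MPI processes:
@code
#include <deal.II/base/mpi.h>
#include <deal.II/lac/petsc_vector.h>

using namespace dealii;

int main(int argc, char **argv)
{
  // Start up MPI (and PETSc through it); limit each process to one thread.
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  const unsigned int n_processes =
    Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);
  const unsigned int this_process =
    Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);

  // 1000 entries in total, split as evenly as possible among the
  // processes; the last process takes whatever remains.
  const unsigned int n_global = 1000;
  const unsigned int n_local =
    (this_process == n_processes - 1)
      ? n_global - (n_processes - 1) * (n_global / n_processes)
      : n_global / n_processes;

  // Each process stores only its n_local entries of the global vector.
  PETScWrappers::MPI::Vector v(MPI_COMM_WORLD, n_global, n_local);

  // Operations that need global information, such as the l2 norm,
  // communicate between all processes behind the scenes.
  const double norm = v.l2_norm();
  (void)norm;
}
@endcode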
As a prerequisite of this program, you need to have PETSc installed, and if
from each processor, add them all up, and return the sum to all
processors. Internally, this is implemented using individual messages,
but to the user this is transparent. We call such operations <i>collectives</i>
because <i>all</i> processors participate in them. Collectives allow us
to write programs where not every copy of the executable is doing
something completely different (this would be incredibly difficult to
program) but where all copies are doing the same thing (though on
different data) for themselves, running through the same blocks of code;
then they communicate data through collectives and then go back to doing
something for themselves again, running through the same blocks of code.
This is the key piece to being able to write programs, and it is the
key component to making sure that programs can run on any number of processors,
since we do not have to write different code for each of the participating
processors.
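As an illustration of such a collective (a hedged sketch, not code from this
program), summing up a value contributed by every process is a single call in
deal.II; internally it maps onto an MPI_Allreduce:
@code
#include <deal.II/base/mpi.h>

#include <iostream>

using namespace dealii;

int main(int argc, char **argv)
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  // Every process computes its own local contribution -- here simply
  // its rank, just as an example -- and all of them then call the same
  // collective operation:
  const double local_value =
    Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);

  // Utilities::MPI::sum adds the values from all processes and returns
  // the result to each of them.
  const double global_sum =
    Utilities::MPI::sum(local_value, MPI_COMM_WORLD);

  if (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
    std::cout << "Sum over all processes: " << global_sum << std::endl;
}
@endcode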
processor is clearly not going to scale: it is going to take forever,
and maybe more importantly no single machine will have enough memory
to store a mesh that has a billion cells (at least not at the time of
writing this). In reality, programs like step-17 and step-18 can therefore
not be run on more than maybe 100 or 200 processors, and even then storing
the Triangulation and DoFHandler objects consumes the vast majority of
memory on each machine.
Consequently, we need to approach the problem differently: to scale to
very large problems, each processor can only store its own little piece
of the Triangulation and DoFHandler objects. deal.II implements such a
scheme in the parallel::distributed namespace and the classes
therein. It builds on an external library, <a