<a name="Intro"></a>
<h1>Introduction</h1>
This program does not introduce any new mathematical ideas; in fact, all it
does is to do the exact same computations that @ref step_8 "step-8"
already does, but it does so in a different manner: instead of using deal.II's
own linear algebra classes, we build everything on top of classes deal.II
provides that wrap around the linear algebra implementation of the <a
href="http://www.mcs.anl.gov/petsc/" target="_top">PETSc</a> library. Since
PETSc can distribute matrices and vectors across several machines
within an MPI network, the resulting code will even be able to solve the
problem in parallel. If you don't know what PETSc is, then this would be a
good time to take a quick glimpse at their homepage.


As a prerequisite of this program, you need to have PETSc installed, and if
you want to run in parallel on a cluster, you also need <a
href="http://www-users.cs.umn.edu/~karypis/metis/index.html"
target="_top">METIS</a> to partition meshes. The installation of deal.II
together with these two additional libraries is described in the <a
href="../../readme.html" target="body">README</a> file.
-</p>
-<p>
+
+
Now, for the details: as mentioned, the program does not compute anything new,
so the use of finite element classes etc. is exactly the same as before. The
difference to previous programs is that we have replaced almost all uses of
the deal.II matrix and vector classes by their near-equivalents in the
PETScWrappers namespace. These wrapper classes offer an interface very similar
to that of the deal.II linear algebra classes, but instead of implementing the
functionality themselves, they simply forward all operations to the
corresponding PETSc functions. The wrappers are therefore only used to give
PETSc a more modern,
object oriented interface, and to make the use of PETSc and deal.II objects as
interchangeable as possible.
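To give a rough idea of what this looks like in code, here is a small sketch
(not an excerpt from the program; the function name is made up, the objects are
assumed to have been sized already, and exact header names and call signatures
depend on the deal.II version you use) that touches a distributed matrix and
vector through the wrapper classes, using the same syntax one would use for
deal.II's own SparseMatrix and Vector classes:
@code
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/petsc_vector.h>

using namespace dealii;

// A toy "assembly" step: add() and operator() have the same syntax as for
// deal.II's built-in SparseMatrix<double> and Vector<double>, which is what
// makes the two sets of classes largely interchangeable. The objects are
// assumed to have been initialized (sized and partitioned) elsewhere.
void toy_assembly(PETScWrappers::MPI::SparseMatrix &matrix,
                  PETScWrappers::MPI::Vector       &rhs)
{
  matrix.add(0, 0, 1.0);
  rhs(0) += 1.0;

  // The only visible difference: for the distributed classes, contributions
  // added to off-process rows have to be communicated before the objects can
  // be used further.
  matrix.compress(VectorOperation::add);
  rhs.compress(VectorOperation::add);
}
@endcode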


While the sequential PETSc wrapper classes do not have any advantage over
their deal.II counterparts, the main point of using PETSc is that it can run
in parallel. We will make use of this by partitioning the domain into as many
blocks ("subdomains") as there are processes in the MPI network. At the same
time, PETSc provides dummy MPI stubs that allow running the same program on a
single machine if so desired, without any changes.
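To illustrate the partitioning step, the following sketch (the function name is
made up, and a reasonably recent deal.II is assumed) lets deal.II, which by
default uses METIS for this task, assign every cell of the replicated mesh to
one of as many subdomains as there are processes in the given MPI
communicator:
@code
#include <deal.II/base/mpi.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_tools.h>

using namespace dealii;

// Assign every cell of the (fully replicated) mesh to one of as many
// subdomains as there are processes in the given MPI communicator. After
// this call, each cell reports its owner through cell->subdomain_id().
template <int dim>
void partition_mesh(Triangulation<dim> &triangulation,
                    const MPI_Comm      mpi_communicator)
{
  const unsigned int n_mpi_processes =
    Utilities::MPI::n_mpi_processes(mpi_communicator);

  GridTools::partition_triangulation(n_mpi_processes, triangulation);
}
@endcode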


Note, however, that the only data structures we parallelize are matrices and
vectors. We do, in particular, not split up the <code>Triangulation</code> and
<code>DoFHandler</code> classes: each process still has a complete copy of
these objects, i.e. of the entire mesh and all degrees of freedom. Distributing
these data structures as well would be a much harder task, and it would also
mean that many more parts of the application program
have to be changed, since for example loops over all cells can only include
locally available cells. We thus went for the path of least resistance and
only parallelized the linear algebra part.
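In practice this shows up in every loop over cells: each process iterates over
the complete, replicated mesh but only does work on the cells of its own
subdomain. A sketch of such a loop, with made-up names, might look as follows:
@code
#include <deal.II/dofs/dof_handler.h>

using namespace dealii;

// Every process sees all cells, but skips those owned by other processes.
// 'this_mpi_process' is the rank of the current process within the MPI
// communicator used for the computation.
template <int dim>
void assemble_on_own_cells(const DoFHandler<dim> &dof_handler,
                           const unsigned int     this_mpi_process)
{
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->subdomain_id() == this_mpi_process)
      {
        // ...compute the contributions of this cell and add them to the
        // distributed matrix and right hand side as usual...
      }
}
@endcode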


The techniques this program demonstrates are: how to use the PETSc wrapper
classes; how to parallelize operations for jobs running on an MPI network; and
how to partition the domain into subdomains to parallelize the work. Since
all this can only be demonstrated using actual code, let us go straight to the
code without much further ado.


<a name="Results"></a>
<h1>Results</h1>

If the program above is compiled and run on a single processor machine, it
should generate results that are very similar to those that we already got
with step-8. However, it becomes more interesting if we run it on a cluster of
workstations. How you start a parallel job depends on your MPI installation
and scheduler (often through a command such as <code>mpirun</code>); note
that the exact command line syntax varies. If you have found out how to run a
job on your system, you should get output like this for a job on 8 processors,
and with a few more refinement cycles than in the code above:
@code
Cycle 0:
Number of active cells: 64
Number of degrees of freedom: 162 (by partition: 22+22+20+20+18+16+20+24)
...
Number of degrees of freedom: 3771884 (by partition: 468452+474204+470818+470884+469960+
471186+470686+475694)
Solver converged in 2251 iterations.
@endcode


As can be seen, we can easily get to almost four million unknowns. In fact, the
code's runtime with 8 processes was less than 7 minutes up to (and including)
cycle 14, and 14 minutes including the second to last step. I lost the timing
information for the last step, but you get the idea. The observed memory
consumption was about 600 bytes per unknown, which is not bad, but one has to
consider that this is for every unknown, whether we store the matrix and vector
entries locally or not.


Here is some output generated in the 12th cycle of the program, i.e. with roughly
300,000 unknowns:
@image html step-17.12-ux.png
@image html step-17.12-uy.png


As one would hope for, the x- (left) and y-displacements (right) shown here
closely match what we already saw in step-8. What may be more interesting,
though, is to look at the mesh and partition at this step:
@image html step-17.12-grid.png
@image html step-17.12-partition.png

Again, the mesh (left) shows the same refinement pattern as seen
previously. The right panel shows the partitioning of the domain across the 8
processes, each indicated by a different color. The picture shows that the
partitioning algorithm has managed to assign roughly the same number
of cells to each subdomain; this equilibration is also easily identified in
the output shown above, where the number of degrees of freedom per subdomain is roughly
the same.
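If you are wondering how a picture of the partitioning can be generated at all:
the subdomain id of each cell can be converted into a piecewise constant output
field and written alongside the solution. The following sketch (with made-up
names, writing VTK output rather than the GMV files the program itself
produces, and assuming degrees of freedom have already been distributed) shows
one way to do this:
@code
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/data_out.h>

#include <ostream>
#include <vector>

using namespace dealii;

// Write the mesh partitioning as a cell-wise constant field: every cell
// gets the number of the subdomain (i.e., the process) it belongs to.
template <int dim>
void write_partitioning(const DoFHandler<dim> &dof_handler,
                        std::ostream          &out)
{
  std::vector<unsigned int> partition_int(
    dof_handler.get_triangulation().n_active_cells());
  GridTools::get_subdomain_association(dof_handler.get_triangulation(),
                                       partition_int);

  // A vector with one entry per cell is interpreted by DataOut as cell data.
  const Vector<double> partitioning(partition_int.begin(),
                                    partition_int.end());

  DataOut<dim> data_out;
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(partitioning, "partitioning");
  data_out.build_patches();
  data_out.write_vtk(out);
}
@endcode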


It is worth noting that if we ran the same program with a different number of
processes, we would likely get slightly different output: a different
mesh, and a different number of unknowns and of iterations to convergence. The
reason is that in parallel the results depend slightly on the partition: the
order in which matrix and vector entries are accumulated changes, so the
iterative solver converges to a slightly different solution within the
prescribed tolerance; this produces slightly different error indicators that
then lead to slightly different mesh cells tagged for refinement, and larger
differences in subsequent steps. The solution will always look very similar,
though.


Finally, here are some results for a 3d simulation. You can repeat these by
first changing
@code
ElasticProblem<2> elastic_problem;
@endcode
to
@code
ElasticProblem<3> elastic_problem;
@endcode
in the main function, and then in the Makefile, change the reference to the 2d
libraries to their 3d counterparts. If you then run the program in parallel,
you get something similar to this (this is for a job with 16 processes):
@code
Cycle 0:
Number of active cells: 512
Number of degrees of freedom: 2187 (by partition: 114+156+150+114+114+210+105+102+120+120+96+123+141+183+156+183)
...
Number of active cells: 461392
Number of degrees of freedom: 1497951 (by partition: 103587+100827+97611+93726+93429+88074+95892+88296+96882+93000+87864+90915+92232+86931+98091+90594)
Solver converged in 261 iterations.
@endcode


The last step, going up to 1.5 million unknowns, takes about 55 minutes with
16 processes on 8 dual-processor machines. The graphical output generated by
this job is rather large (cycle 5 already prints around 82 MB of GMV data), so
we content ourselves with showing output from cycle 4:
@image html step-17.4-3d-partition.png
@image html step-17.4-3d-ux.png


The left picture shows the partitioning of the cube into 16 processes, whereas
the right one shows the x-displacement along two cut planes through the cube.