<h3>Background and purpose</h3>
-<p>
+
In this example program, we will not so much be concerned with
describing new ways to use deal.II and its facilities, but rather
with presenting methods of writing modular and extensible finite
<li> a wave equation solver: 21,020 lines of code.
</ol>
(The library proper - without example programs and
-test suite - has slightly more than 150,000 lines of code.) In
-the opinion of the author of this example program, the sizes of these
+test suite - has slightly more than 150,000 lines of code as of spring 2002.)
+In the opinion of the author of this example program, the sizes of these
applications are at the edge of what one person, even an experienced
programmer, can manage.
-</p>
-<p>
+
+
The numbers above make one thing rather clear: monolithic programs that
are not broken up into smaller, mostly independent pieces have no way
of surviving, since even the author will quickly lose the overview of
since otherwise confusion will quickly prevail as one developer
would need to know if another changed something about the internals of
a different module if they were not cleanly separated.
-</p>
-<p>
+
+
In previous examples, you have seen how the library itself is broken
up into several complexes, each building atop the underlying ones, but
relatively independent of the other ones:
<li>output in various graphical formats;
<li>linear algebra classes.
</ol>
-</p>
-<p>
+
+
The goal of this program is now to give an example of how a relatively
simple finite element program could be structured such that we end up
with a set of modules that are as independent of each other as
worry that it might break at the other, as long as we do not touch the
interface through which the two ends communicate. The interface in
C++, of course, is the declaration of abstract base classes.
-</p>
-<p>
+
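To make this concrete, here is a minimal sketch of how two modules can
communicate solely through such an abstract base class: the solver hands a
finished solution to objects derived from an evaluation base class, and
neither side needs to know anything else about the other. The names and
details below are illustrative only and are not taken verbatim from the
program:

@code
// Sketch only: header paths follow the deal.II layout of the time this
// documentation was written; class names are chosen for illustration.
#include <base/point.h>
#include <dofs/dof_handler.h>
#include <lac/vector.h>

// The interface through which the solver and the evaluation modules
// communicate. The solver only ever sees this base class.
template <int dim>
class EvaluationBase
{
  public:
    virtual ~EvaluationBase () {}

    // Hand over a computed solution and let the evaluation object do
    // whatever it wants with it (compute a point value, write graphical
    // output, ...).
    virtual void operator () (const DoFHandler<dim> &dof_handler,
                              const Vector<double>  &solution) const = 0;
};

// One possible concrete evaluation: extract the solution value at a point.
template <int dim>
class PointValueEvaluation : public EvaluationBase<dim>
{
  public:
    PointValueEvaluation (const Point<dim> &evaluation_point)
      : evaluation_point (evaluation_point)
      {}

    virtual void operator () (const DoFHandler<dim> &dof_handler,
                              const Vector<double>  &solution) const
      {
        // ...find the vertex closest to evaluation_point and report the
        // corresponding entry of the solution vector...
      }

  private:
    const Point<dim> evaluation_point;
};
@endcode

A solver written against such a base class can be handed a list of
evaluation objects; adding or exchanging an evaluation then never requires
touching the solver itself.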
+
Here, we will implement (again) a Laplace solver, although with a
number of differences compared to previous example programs:
<ol>
<li>Separate the description of the test case with which we will
present the program from the rest of the program (one way to do this
is sketched right after this list).
</ol>
-</p>
-<p>
+
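Regarding the last of these points, one possible way is to collect
everything that defines the test case (the exact solution and the matching
right hand side) into small function classes of their own, so that
exchanging the test case never touches the solver. The following sketch is
illustrative only and written for the 2d case:

@code
// Sketch only: one way to encapsulate the data of a test case, separate
// from the solver. Names are illustrative; written for 2d.
#include <base/function.h>
#include <base/point.h>
#include <cmath>

// The exact solution u(x,y)=exp(x+sin(10y+5x^2)) of the test case, also
// usable as Dirichlet boundary values.
template <int dim>
class Solution : public Function<dim>
{
  public:
    virtual double value (const Point<dim>  &p,
                          const unsigned int /*component*/ = 0) const
      {
        return std::exp (p(0) + std::sin (10*p(1) + 5*p(0)*p(0)));
      }
};

// The matching right hand side f = -Laplace(u) would be declared in just
// the same way, so that the solver sees the test case only through the
// Function<dim> interface.
@endcode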
+
The things the program does are not new. In fact, this is more like a
melange of previous programs, cannibalizing various parts and
functions from earlier examples. It is the way they are arranged in
order to write successful numerical software if you feel uncomfortable
with the chosen ways. It should serve as a case study, however,
inspiring the reader with ideas to the desired end.
-</p>
-<p>
+
+
Once you have worked through the program, you will notice that it is
already somewhat complex in its structure. Nevertheless, it
only has about 850 lines of code, without comments. In real
the start is thus indispensable. Otherwise, it will have to be
redesigned at some point in its life, once it becomes too large to be
manageable.
-</p>
-<p>
+
+
Despite this, all three programs listed above have undergone major
revisions, or even rewrites. The wave program, for example, was once
entirely torn apart when it was still significantly smaller, just
an application where the objective is different. It should serve as an
inspiration for writing your own application in a modular way, to
avoid the pitfalls of too closely coupled codes.
-</p>
+
<h3>What the program does</h3>
-<p>
+
What the program actually does is not even the main point of this
program; its structure is more important. However, in a
few words, a description would be: solve the Laplace equation for a
given right hand side such that the solution is the function
- <em> u(x,t)=exp(x+sin(10y+5x<sup>2</sup>)) </em>. The goal of the
+$u(x,y)=\exp(x+\sin(10y+5x^2))$. The goal of the
computation is to get the value of the solution at the point
-<em>x<sub>0</sub>=(0.5,0.5)</em>, and to compare the accuracy with
+$x_0=(0.5,0.5)$, and to compare the accuracy with
which we resolve this value for two refinement criteria, namely global
refinement and refinement by the error indicator of Kelly et al., which
we have already used in previous examples.
-</p>
-<p>
+
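As an aside (this little computation is a sketch and is not spelled out in
the program itself): if, as is usual for such a manufactured solution, the
right hand side is taken to be $f=-\Delta u$, then with the abbreviation
$g(x,y)=10y+5x^2$ one finds
$\Delta u = e^{x+\sin g}\left[(1+10x\cos g)^2 + 10\cos g - 100x^2\sin g + 100\cos^2 g - 100\sin g\right]$,
and the right hand side is simply the negative of this expression.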
+
The results will, as usual, be discussed in the respective section of
this document. In doing so, we will find a slightly irritating
observation about the relative performance of the two refinement
criteria. In a later example program, building atop this one, we will
devise a different method that should hopefully perform better than
the techniques discussed here.
-</p>
-<p>
+
+
So much now for all the theoretical and anecdotal background. The best
way of learning about a program is to look at it, so here it is:
-</p>
+
<h1>Results</h1>
-<p>
+
The results of this program are not that interesting - after all
its purpose was not to demonstrate some new mathematical idea, and
also not how to program with deal.II, but rather to use the material
which we have developed in the previous examples to form something
which demonstrates a way to build modern finite element software in a
modular and extensible way.
-</p>
-<p>
+
+
Nevertheless, we of course show the results of the program. Of
foremost interest is the point value computation, for which we had
implemented the corresponding evaluation class. The results (i.e. the
output) of the program look as follows:
-<code>
-<pre>
+@code
Running tests with "global" refinement criterion:
-------------------------------------------------
Refinement cycle: 0 1 2 3 4 5 6
7100 1.5876
13059 1.5942
24749 1.5933
-</pre>
-</code>
-</p>
+@endcode
+
-<p>
What is surprising here is that the exact value is 1.59491554..., and that
it is obviously surprisingly difficult to compute the solution even to
only one per cent accuracy, although the solution is smooth (in fact
infinitely often differentiable). This smoothness is shown in the
graphical output generated by the program, here the coarse grid and the
first 9 refinement steps of the Kelly refinement indicator:
-</p>
+
<table width="80%" align="center">
<tr>
<td>
- <img src="step-13.data/pix-kelly/solution-kelly-0.jpg"
- alt="Solution Kelly, coarse grid">
+ @image html step-13.solution-kelly-0.png
</td>
<td>
- <img src="step-13.data/pix-kelly/solution-kelly-1.jpg"
- alt="Solution Kelly, 1 refinement steps">
+ @image html step-13.solution-kelly-1.png
</td>
</tr>
<tr>
<td>
- <img src="step-13.data/pix-kelly/solution-kelly-2.jpg"
- alt="Solution Kelly, 2 refinement steps">
+ @image html step-13.solution-kelly-2.png
</td>
<td>
- <img src="step-13.data/pix-kelly/solution-kelly-3.jpg"
- alt="Solution Kelly, 3 refinement steps">
+ @image html step-13.solution-kelly-3.png
</td>
</tr>
<tr>
<td>
- <img src="step-13.data/pix-kelly/solution-kelly-4.jpg"
- alt="Solution Kelly, 4 refinement steps">
+ @image html step-13.solution-kelly-4.png
</td>
<td>
- <img src="step-13.data/pix-kelly/solution-kelly-5.jpg"
- alt="Solution Kelly, 5 refinement steps">
+ @image html step-13.solution-kelly-5.png
</td>
</tr>
<tr>
<td>
- <img src="step-13.data/pix-kelly/solution-kelly-6.jpg"
- alt="Solution Kelly, 6 refinement steps">
+ @image html step-13.solution-kelly-6.png
</td>
<td>
- <img src="step-13.data/pix-kelly/solution-kelly-7.jpg"
- alt="Solution Kelly, 7 refinement steps">
+ @image html step-13.solution-kelly-7.png
</td>
</tr>
<tr>
<td>
- <img src="step-13.data/pix-kelly/solution-kelly-8.jpg"
- alt="Solution Kelly, 8 refinement steps">
+ @image html step-13.solution-kelly-8.png
</td>
<td>
- <img src="step-13.data/pix-kelly/solution-kelly-9.jpg"
- alt="Solution Kelly, 9 refinement steps">
+ @image html step-13.solution-kelly-9.png
</td>
</tr>
</table>
-<p>
+
While we are already looking at pictures, this is the eighth grid, as
viewed from above:
-</p>
-<p align="center">
- <img src="step-13.data/pix-kelly/grid-kelly-8.gif" alt="Kelly, grid 8">
-</p>
+@image html step-13.kelly-grid-8.png
-<p>
However, we are not yet finished with the evaluation of the point value
computation. In fact, plotting the error
-<em>e=|u(x<sub>h</sub>)-u<sub>h</sub>(x<sub>0</sub>)|</em> for the two
+$e=|u(x_0)-u_h(x_0)|$ for the two
refinement criteria yields the following picture:
-</p>
-<p align="center">
- <img src="step-13.data/error.gif" alt="error" width="80%">
-</p>
+
+@image html step-13.error.png
+
+
-<p>
What <em>is</em> disturbing about this picture is that not only is the
adaptive mesh refinement not better than global refinement as one
would usually expect, it is even significantly worse since its
convergence is irregular, preventing all extrapolation techniques when
using the values of subsequent meshes! On the other hand, global
-refinement provides a perfect <em>1/N</em> or <em>h<sup>-2</sup></em>
+refinement provides a perfect $1/N$ (i.e., $h^2$)
convergence history and offers every opportunity to even improve on
the point values by extrapolation. Global mesh refinement must
therefore be considered superior in this example! This is even more
surprising as the evaluation point is not somewhere in the left part
where the mesh is coarse, but rather to the right, and the adaptive
refinement should refine the mesh around the evaluation point as well.
-</p>
-<p>
+
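To illustrate the extrapolation remark above (a standard textbook device,
not something this program does): if the point error behaves like
$e(h)\approx C h^2$, then the values computed on two consecutive globally
refined meshes can be combined into the extrapolated value
$u^* = (4\,u_{h/2}(x_0) - u_h(x_0))/3$,
which cancels the leading error term. It is exactly this kind of
postprocessing that the irregular convergence on the adaptively refined
meshes makes impossible.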
+
We thus close the discussion of this example program with a question:
-</p>
+
<p align="center">
<strong><em>What is wrong with adaptivity if it is not better than
global refinement?</em></strong>
-</p>
-<p>
+
+
<em>Exercise at the end of this example:</em> There is a simple reason
for the bad and irregular behavior of the adapted mesh solutions. It
is easy to find out by looking at the mesh around the evaluation
better than global refinement, and if so, whether an even better order of
convergence (in terms of the number of degrees of freedom) is
achieved, or only a better constant.
-</p>
-<p>
+
+
(<em>Very brief answers for the impatient:</em> at steps with larger
errors, the mesh is not regular at the point of evaluation, i.e. some
of the adjacent cells have hanging nodes; this destroys some
superapproximation effects from which the globally refined mesh can
profit. Answer 2: this quick hack
-<code><pre>
+@code
bool refinement_indicated = false;
typename Triangulation<dim>::active_cell_iterator cell;
for (cell=triangulation->begin_active();
for (unsigned int v=0; v<GeometryInfo<dim>::vertices_per_cell; ++v)
if (cell->vertex(v) == Point<dim>(.5,.5))
cell->set_refine_flag ();
-</pre></code>
+@endcode
in the refinement function of the Kelly refinement class right before
executing refinement would improve the results (exercise: what does
the code do?), making them consistently better than global
refinement. Behavior is still irregular, though, so no results about
an order of convergence are possible.)
-</p>
+