<a name="Intro"></a>
<h1>Introduction</h1>
-<p>
-In <a href="step-18.html" target="body">step-18</a>, we saw a need to write
+
+In @ref step_18 "step-18", we saw a need to write
output files in an intermediate format: in a parallel program, it doesn't scale
well if all processors participate in computing a result, and then only a
single processor generates the graphical output. Rather, each of them should
generate output for its share of the domain, and later on merge all these
output files into a single one.
-</p>
-<p>
+
+
Thus was the beginning of step-19: it is the program that reads a number of
files written in intermediate format, and merges and converts them into the
final format that one would like to use for visualization. It can also be used
in the following way: if you do not yet know at computation time which
graphics program you would like to use, write your results in intermediate
format; it can later be converted, using the present program, to any other
format you may want.
-</p>
-<p>
+
+
While this in itself was not interesting enough to make a tutorial program, we
have used the opportunity to introduce one class that has proven to be
extremely helpful and useful in real application programs, but had not been
covered in any of the previous tutorial programs: the
<code>ParameterHandler</code> class. It is used to read parameters from an
input file, a typical example being the choice of the
equation to be solved, at run time. Other typical parameters are the number of
nonlinear iterations, the name of output files, or the names of input files
specifying material properties or boundary conditions.
-</p>
-<p>
+
+
Working with such parameter files is not rocket science. However, it is rather
tedious to write the parsers for such files, in particular if they should be
extensible, be able to group parameters into subsections, perform some error
checking on the input, and so on. The <code>ParameterHandler</code> class takes
care of most of this work: the application program declares the parameters it
expects (or calls a function of a library class that
declares a number of parameters for you), the <code>ParameterHandler</code>
class then reads an input file with all these parameters, and the application
program can then get their values back from this class.
-</p>
-<p>
+
+
In order to perform these three steps, the <code>ParameterHandler</code> offers
three sets of functions: first, the
<code>ParameterHandler::declare_entry</code> function is used to declare the
existence of a parameter with a given name, along with a default value. Beyond
that, there are optional arguments indicating a pattern that a parameter has to
satisfy, such as being an integer (see the discussion above), and a help text
that might later give an explanation of what the parameter stands for.
-</p>
-<p>
+
+
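As an illustration, here is a small hand-written sketch of this first step. It
is not the actual step-19 code: the subsection name "Dummy section", the
default values, and the list of allowed output formats are only illustrative,
while <code>declare_entry</code>, <code>enter_subsection</code>,
<code>leave_subsection</code> and the <code>Patterns</code> classes are the
real deal.II interface (include paths and namespaces differ between library
versions):
@code
#include <deal.II/base/parameter_handler.h>  // <base/parameter_handler.h> in old versions

using namespace dealii;

void declare_some_parameters (ParameterHandler &prm)
{
  // a free-form string parameter with an empty default value:
  prm.declare_entry ("Output file", "",
                     Patterns::Anything(),
                     "Name of the file to which the output is written");

  // a parameter that may only take one of a fixed set of values:
  prm.declare_entry ("Output format", "gnuplot",
                     Patterns::Selection ("dx|gnuplot|gmv|eps"),
                     "Format of the graphical output");

  // parameters can be grouped into (possibly nested) subsections:
  prm.enter_subsection ("Dummy section");
  {
    prm.declare_entry ("Dummy iterations", "42",
                       Patterns::Integer (1, 1000),
                       "A dummy integer parameter");
    prm.declare_entry ("Dummy generate output", "true",
                       Patterns::Bool(),
                       "Whether to produce any output at all");
  }
  prm.leave_subsection ();
}
@endcode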
Once all parameters have been declared, parameters can be read, using the
<code>ParameterHandler::read_input</code> family of functions. There are
versions of this function that can read from a file stream, that take a file
name, and a few others. While reading, each value is checked against the
pattern that has
been given to describe the kind of values a parameter can have. Input that uses
undeclared parameters, or values for parameters that do not conform to the
pattern, is rejected by raising an exception.
-</p>
-<p>
+
+
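A minimal sketch of this second step could look as follows. The file name
<code>parameters.prm</code> is made up, and
<code>declare_some_parameters</code> is the illustrative function sketched
above; note also that more recent versions of deal.II call these functions
<code>ParameterHandler::parse_input</code> instead:
@code
#include <fstream>

ParameterHandler prm;
declare_some_parameters (prm);      // the function sketched above

// read the parameter file by name ...
prm.read_input ("parameters.prm");

// ... or, equivalently, from any input stream:
std::ifstream input ("parameters.prm");
prm.read_input (input);
@endcode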
A typical input file will look like this:
-<code>
-<pre>
+@code
set Output format = dx
set Output file = my_output_file.dx
set Color of output = blue
set Generate output = false
end
-</pre>
-</code>
+@endcode
Note that subsections can be nested.
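For example, a parameter file with a nested subsection could look like the
following snippet (all entry and section names here are invented for
illustration; they are not parameters of step-19):
@code
# all names in this file are made up for illustration
set Damping parameter = 0.5

subsection Nonlinear solver
  set Maximal number of iterations = 100

  subsection Line search
    set Perform line search = true
  end
end
@endcode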
-</p>
-<p>
+
+
Finally, the application program can get the values of declared parameters back
by traversing the subsections of the parameter tree and using the
<code>ParameterHandler::get</code> and related functions. The
<code>ParameterHandler::get</code> function returns the value of a parameter
as a string, whereas <code>ParameterHandler::get_integer</code>,
<code>ParameterHandler::get_double</code>, and
<code>ParameterHandler::get_bool</code> already convert them to the indicated
type.
-</p>
-<p>
+
+
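Continuing the sketch from above (the subsection and entry names are again
only illustrative, not those of step-19), this third step could look like
this:
@code
// entries in the top-level section:
const std::string output_file   = prm.get ("Output file");
const std::string output_format = prm.get ("Output format");

// entries inside a subsection are accessed by first entering it:
prm.enter_subsection ("Dummy section");
const long int n_iterations    = prm.get_integer ("Dummy iterations");
const bool     generate_output = prm.get_bool ("Dummy generate output");
prm.leave_subsection ();
@endcode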
Using the <code>ParameterHandler</code> class therefore provides for a pretty
flexible mechanism to handle all sorts of moderately complex input files without
much effort on the side of the application programmer. We will use this to
provide all sorts of options to the step-19 program in order to convert from
intermediate file format to any other graphical file format.
-</p>
-<p>
+
+
The rest of the story is probably best told by looking at the source of step-19
itself. Let us, however, end this introduction by pointing the reader at the
extensive class documentation of the <code>ParameterHandler</code> class for
more information on specific details of that class.
-</p>
+
<a name="Results"></a>
<h1>Results</h1>
-<p>
+
With all of the above in place, here is what we first get if we just run the
program without any parameters at all:
-<code>
-<pre>
+@code
examples/step-19> ./step-19
Converter from deal.II intermediate format to other graphics formats.
# creating program
set Write preamble = true
end
-</pre>
-</code>
+@endcode
-<p>
That's a lot of output for such a little program, but then that's also a lot of
output formats that deal.II supports. You will realize that the output consists
first of the entries in the top-level section (sorted alphabetically), then of
the various subsections with their own entries. Most of these parameters are
declared by the
<code>DataOutBase</code> class, but there are also the dummy entries and
sections we have added in the <code>declare_parameters()</code> function, along
with their default values and documentation.
-</p>
-<p>
+
+
Let us try to run this program on a set of input files generated by a modified
- <a href="step-18.html" target="body">step-18</a> run on 32 nodes of a
- cluster. The computation was rather big, with more
+@ref step_18 "step-18" run on 32 nodes of a
+cluster. The computation was rather big, with more
than 350,000 cells and some 1.2M unknowns. That makes for 32 rather big
intermediate files that we will try to merge using the present program. Here is
the list of files, totaling some 245MB of data:
-<code>
-<pre>
+@code
examples/step-19> ls -l *d2
-rw-r--r-- 1 bangerth wheeler 7982085 Aug 12 10:11 solution-0005.0000-000.d2
-rw-r--r-- 1 bangerth wheeler 7888316 Aug 12 10:13 solution-0005.0000-001.d2
-rw-r--r-- 1 bangerth wheeler 7682418 Aug 12 10:08 solution-0005.0000-029.d2
-rw-r--r-- 1 bangerth wheeler 7544141 Aug 12 10:05 solution-0005.0000-030.d2
-rw-r--r-- 1 bangerth wheeler 7348899 Aug 12 10:04 solution-0005.0000-031.d2
-</pre>
-</code>
+@endcode
-<p>
So let's see what happens if we attempt to merge all these files into a single
one:
-<code>
-<pre>
+@code
examples/step-19> time ./step-19 solution-0005.0000-*.d2 -x gmv -o solution-0005.gmv
real 2m08.35s
user 1m26.61s
examples/step-19> ls -l solution-0005.gmv
-rw-r--r-- 1 bangerth wheeler 240680494 Sep 9 11:53 solution-0005.gmv
-</pre>
-</code>
+@endcode
So in roughly two minutes we have merged 240MB of data. Counting both reading
and writing, that averages out to a throughput of roughly 3.8MB per second, not
so bad.
-</p>
-<p>
-If visualized, the output looks very much like that shown for <a
-href="step-18.html" target="body">step-18</a>. But that's not quite as
+
+
+If visualized, the output looks very much like that shown for
+@ref step_18 "step-18". But that's not quite as
important for the moment; rather, we are interested in showing how to use the
parameter file. To this end, remember that if no parameter file is given, or if
it is empty, all the default values listed above are used. However, whatever we
specify in the parameter file is used, unless overridden again by
parameters found later on the command line.
-</p>
-<p>
+
+
For example, let us use a simple parameter file named
<code>solution-0005.prm</code> that contains only one line:
-<code>
-<pre>
+@code
set Output format = gnuplot
-</pre>
-</code>
+@endcode
If we run step-19 with it again, we obtain this (for simplicity, and because we
don't want to visualize 240MB of data anyway, we only convert a single
intermediate file, the twelfth, to gnuplot format):
-<code>
-<pre>
+@code
examples/step-19> ./step-19 solution-0005.0000-012.d2 -p solution-0005.prm -o solution-0005.gnuplot
examples/step-19> ls -l solution-0005.gnuplot
-rw-r--r-- 1 bangerth wheeler 20281669 Sep 9 12:15 solution-0005.gnuplot
-</pre>
-</code>
+@endcode
-<p>
We can then visualize this one file with gnuplot, obtaining something like
this:
-<p align="center">
-<img src="step-19.data/solution-0005.png" width="80%"></img>
-</p>
+@image html step-19.solution-0005.png
+
That's not particularly exciting, but the file we're looking at contains only
one 32nd of the entire domain anyway, so we can't expect much.
In more complicated situations, we would use parameter files that set more of
the values to non-default values. A file for which this is the case could look
like this, generating output for the OpenDX visualization program:
-<code>
-<pre>
+@code
set Output format = dx
set Output file = my_output_file.dx
set Dummy color of output = blue
set Dummy generate output = false
end
-</pre>
-</code>
+@endcode
If one wanted to, one could write comments into the file using the
same format as used above in the help text, i.e. everything on a line
following a hashmark (<code>#</code>) is considered a comment.
-</p>
-<p>
+
+
If one runs step-19 with this input file, this is what is going to happen:
-<code>
-<pre>
+@code
examples/step-19> ./step-19 solution-0005.0000-012.d2 -p solution-0005.prm
Line 4:
The entry value
Dummy iterations
does not match the given pattern
[Integer range 1...1000 (inclusive)]
-</pre>
-</code>
+@endcode
Ah, right: valid values for the iteration parameter needed to be within the
range [1...1000]. We would fix that, then go back to run the program with
correct parameters.
-</p>
-<p>
+
+
This program should have given some insight into the input parameter file
handling that deal.II provides. The <code>ParameterHandler</code> class has a
few more goodies beyond what has been shown in this program. For those who want
to use this class, it would be useful to read the documentation of that class
to get the full picture.
-</p>
+
<h3>Output of the program and graphical visualization</h3>
-<p>
+
If we run the program as is, we get this output:
-<pre><code>
+@code
examples/step-20> make run
============================ Remaking Makefile.dep
==============debug========= step-20.cc
Number of degrees of freedom: 208 (144+64)
10 CG Schur complement iterations to obtain convergence.
Errors: ||e_p||_L2 = 0.178055, ||e_u||_L2 = 0.0433435
-</code></pre>
+@endcode
+
The fact that the number of iterations is so small is, of course, due to the
good (but expensive!) preconditioner we have developed. To gain confidence in
the
solution, let us take a look at it. The following three images show (from left
to right) the x-velocity, the y-velocity, and the pressure:
-<p ALIGN=CENTER>
-<a href="step-20.data/u.png"><img width="32%" src="step-20.data/u.png"></a>
-<a href="step-20.data/v.png"><img width="32%" src="step-20.data/v.png"></a>
-<a href="step-20.data/p.png"><img width="32%" src="step-20.data/p.png"></a>
-</p>
+@image html step-20.u.png
+@image html step-20.v.png
+@image html step-20.p.png
+
+
-<p>
Let us start with the pressure: it is highest at the left and lowest at the
right, so flow will be from left to right. In addition, though hardly visible
in the graph, we have chosen the pressure field such that the flow left-right
something that can easily be seen in the left image. The middle image
represents inward flow in y-direction at the left end of the domain, and
outward flow in y-direction at the right end of the domain.
-</p>
-<p>
+
+
As an additional remark, note how the x-velocity in the left image is only
continuous in x-direction, whereas the y-velocity is continuous in
y-direction. The flow fields are discontinuous in the other directions. This
very obviously reflects the continuity properties of the Raviart-Thomas
elements, which are, in fact, only in the space H(div) and not in the space
-H<sup>1</sup>. Finally, the pressure field is completely discontinuous, but
+$H^1$. Finally, the pressure field is completely discontinuous, but
that should not be surprising given that we have chosen <code>FE_DGQ(0)</code> as
the finite element for that solution component.
-</p>
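To make the connection to the code, one way to set up such a pairing of
elements in deal.II is an <code>FESystem</code> that combines the two. The
following is only a sketch (the variable name and include paths are chosen
here for illustration and reflect current versions of the library):
@code
#include <deal.II/fe/fe_raviart_thomas.h>
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/fe_system.h>

using namespace dealii;

const unsigned int dim = 2;

// velocity: lowest-order Raviart-Thomas element, H(div)-conforming;
// pressure: piecewise constant, fully discontinuous element
FESystem<dim> mixed_element (FE_RaviartThomas<dim>(0), 1,
                             FE_DGQ<dim>(0),           1);
@endcode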
+
<h3>Convergence</h3>
-<p>
+
The program offers two obvious places where playing and observing convergence
is in order: the degree of the finite elements used (passed to the constructor
of the <code>MixedLaplaceProblem</code> class from <code>main()</code>), and
the number of global mesh refinement steps (as set in
<code>MixedLaplaceProblem::make_grid_and_dofs</code>). What one can do is to
change these values and observe the errors computed later on in the course of
the program run.
-</p>
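A sketch of such an experiment is shown below. The loop itself is not part of
the tutorial; the class name and the <code>run()</code> call are those of
step-20, while varying the refinement level additionally requires editing
<code>make_grid_and_dofs</code>:
@code
int main ()
{
  // run the program for polynomial degrees 0, 1, 2 and record the
  // errors it prints at the end of each run
  for (unsigned int degree = 0; degree < 3; ++degree)
    {
      MixedLaplaceProblem<2> mixed_laplace_problem (degree);
      mixed_laplace_problem.run ();
    }
}
@endcode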
-<p>
-If one does this, one finds the following pattern for the L<sub>2</sub> error
+
+
+If one does this, one finds the following pattern for the $L_2$ error
in the pressure variable:
<table align="center" border="1" cellspacing="3" cellpadding="3">
<tr>
</tr>
<tr>
- <td></td> <td>O(h)</td> <td>O(h<sup>2)</sup></td> <td>O(h<sup>3)</sup></td>
+  <td></td> <td>$O(h)$</td> <td>$O(h^2)$</td> <td>$O(h^3)$</td>
</tr>
</table>
The theoretically expected convergence orders are very nicely reflected by the
experimentally observed ones indicated in the last row of the table.
-</p>
-<p>
-One can make the same experiment with the L<sub>2</sub> error
+
+
+One can make the same experiment with the $L_2$ error
in the velocity variables:
<table align="center" border="1" cellspacing="3" cellpadding="3">
<tr>
</tr>
<tr>
- <td></td> <td>O(h)</td> <td>O(h<sup>2)</sup></td> <td>O(h<sup>3)</sup></td>
+  <td></td> <td>$O(h)$</td> <td>$O(h^2)$</td> <td>$O(h^3)$</td>
</tr>
</table>
The result concerning the convergence order is the same here.
-</p>
+
<a name="extensions"></a>
<h3>Possibilities for extensions</h3>
-<p>
+
Realistic flow computations for ground water or oil reservoir simulations will
not use a constant permeability. Here's a first, rather simple way to change
this situation: we use a permeability that decays very rapidly away from a
single flowline through the domain, down to a small background value. This
corresponds to a situation in which
the stone has cracked, or faulted, along one line, and the fluids flow much
more easily along this large crack. Here is how we could implement something like
this:
-<pre><code>
-template <int dim>
+@code
+template <int dim>
void
-KInverse<dim>::value_list (const std::vector<Point<dim> > &points,
- std::vector<Tensor<2,dim> > &values) const
+KInverse<dim>::value_list (const std::vector<Point<dim> > &points,
+ std::vector<Tensor<2,dim> > &values) const
{
Assert (points.size() == values.size(),
ExcDimensionMismatch (points.size(), values.size()));
- for (unsigned int p=0; p<points.size(); ++p)
+ for (unsigned int p=0; p<points.size(); ++p)
{
      values[p].clear ();

      // distance of the evaluation point from a sinusoidal flowline
      // through the domain; the precise shape used here is a
      // reconstruction of the lines elided above and is only illustrative
      const double distance_to_flowline
        = std::fabs(points[p][1] - 0.2*std::sin(10*points[p][0]));

      // the permeability decays exponentially away from the flowline,
      // but never drops below a background value of 0.001
      const double permeability = std::max(std::exp(-(distance_to_flowline *
                                                      distance_to_flowline)
                                                    / (0.1 * 0.1)),
                                           0.001);
- for (unsigned int d=0; d<dim; ++d)
+ for (unsigned int d=0; d<dim; ++d)
values[p][d][d] = 1./permeability;
}
}
-</code></pre>
+@endcode
Remember that the function returns the inverse of the permeability tensor.
-</p>
-<p>
+
+
With a significantly higher mesh resolution, we can visualize this, here with
x- and y-velocity:
-</p>
-<p ALIGN=CENTER>
-<a href="step-20.data/u-wiggle.png"><img width="48%" src="step-20.data/u-wiggle.png"></a>
-<a href="step-20.data/v-wiggle.png"><img width="48%" src="step-20.data/v-wiggle.png"></a>
-</p>
+
+@image html step-20.u-wiggle.png
+@image html step-20.v-wiggle.png
+
It is obvious how fluids flow essentially only along the middle line, and not
anywhere else.
-</p>
-<p>
+
+
Another possibility would be to use a random permeability field. A simple way
to achieve this would be to scatter a number of centers around the domain and
then use a permeability field that is the sum of (negative) exponentials for
each of these centers. Flow would then try to hop from one island of high
permeability to the next one. This is an entirely unscientific attempt at
describing a random medium, but one possibility to implement this behavior
would look like this:
-<pre><code>
-template <int dim>
-class KInverse : public TensorFunction<2,dim>
+@code
+template <int dim>
+class KInverse : public TensorFunction<2,dim>
{
public:
KInverse ();
- virtual void value_list (const std::vector<Point<dim> > &points,
- std::vector<Tensor<2,dim> > &values) const;
+ virtual void value_list (const std::vector<Point<dim> > &points,
+ std::vector<Tensor<2,dim> > &values) const;
private:
- std::vector<Point<dim> > centers;
+ std::vector<Point<dim> > centers;
};
-template <int dim>
-KInverse<dim>::KInverse ()
+template <int dim>
+KInverse<dim>::KInverse ()
{
const unsigned int N = 40;
centers.resize (N);
- for (unsigned int i=0; i<N; ++i)
- for (unsigned int d=0; d<dim; ++d)
+ for (unsigned int i=0; i<N; ++i)
+ for (unsigned int d=0; d<dim; ++d)
centers[i][d] = 2.*rand()/RAND_MAX-1;
}
-template <int dim>
+template <int dim>
void
-KInverse<dim>::value_list (const std::vector<Point<dim> > &points,
- std::vector<Tensor<2,dim> > &values) const
+KInverse<dim>::value_list (const std::vector<Point<dim> > &points,
+ std::vector<Tensor<2,dim> > &values) const
{
Assert (points.size() == values.size(),
ExcDimensionMismatch (points.size(), values.size()));
- for (unsigned int p=0; p<points.size(); ++p)
+ for (unsigned int p=0; p<points.size(); ++p)
{
values[p].clear ();
double permeability = 0;
- for (unsigned int i=0; i<centers.size(); ++i)
+ for (unsigned int i=0; i<centers.size(); ++i)
permeability += std::exp(-(points[p]-centers[i]).square()
/ (0.1 * 0.1));
const double normalized_permeability
= std::max(permeability, 0.005);
- for (unsigned int d=0; d<dim; ++d)
+ for (unsigned int d=0; d<dim; ++d)
values[p][d][d] = 1./normalized_permeability;
}
}
-</code></pre>
+@endcode
+
-<p>
With a permeability field like this, we would get x-velocities and pressures as
follows:
-</p>
-<p ALIGN=CENTER>
-<a href="step-20.data/u-random.png"><img width="48%" src="step-20.data/u-random.png"></a>
-<a href="step-20.data/p-random.png"><img width="48%" src="step-20.data/p-random.png"></a>
-</p>
+
+@image html step-20.u-random.png
+@image html step-20.p-random.png
+