From 01bad3e8b55caabeec50edf1bd7a77540e9e2221 Mon Sep 17 00:00:00 2001
From: Wolfgang Bangerth
Date: Tue, 28 Mar 2006 15:39:10 +0000
Subject: [PATCH] Move documentation files to the example directories directly.

git-svn-id: https://svn.dealii.org/trunk@12724 0785d39b-7218-0410-832d-ea1e28bc413d
---
 46 files changed, 9051 insertions(+)

diff --git a/deal.II/examples/step-1/doc/intro.dox b/deal.II/examples/step-1/doc/intro.dox
new file mode 100644
index 0000000000..9faf332d1c
--- /dev/null
+++ b/deal.II/examples/step-1/doc/intro.dox
@@ -0,0 +1,16 @@

Introduction

In this first example, we don't actually do very much, but show two
techniques: the syntax for generating triangulation objects, and the
elements of simple loops over all cells. We create two grids: one a
regularly refined square (not very exciting, but a common starting
grid for some problems), and one a more geometric attempt, namely a
ring-shaped domain refined towards the inner edge. The latter is
certainly not very useful and is probably only rarely used in
numerical analysis for PDEs (although, to everyone's surprise, it has
actually found its way into the literature: see the paper by M. Mu
titled "PDE.MART: A network-based problem-solving environment", ACM
Trans. Math. Software, vol. 31, pp. 508-531, 2005 :-), but it looks
nice and illustrates how loops over cells are written, and some of the
things you can do with cells; a short sketch of such a loop follows
below.
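To make the loop concrete, here is a minimal, self-contained sketch. It is
not taken from the tutorial program itself: the include paths follow current
deal.II conventions, and the use of GridGenerator::hyper_shell, the radii,
and the refinement criterion are our own illustrative choices.

@code
#include <deal.II/base/point.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>

using namespace dealii;

int main ()
{
  // A ring-shaped (annular) domain around the origin:
  Triangulation<2> triangulation;
  GridGenerator::hyper_shell (triangulation,
                              Point<2>(), // center
                              0.5,        // inner radius
                              1.0);       // outer radius

  // Loop over all active cells and flag those whose center lies
  // close to the inner edge; then execute the refinement.
  for (Triangulation<2>::active_cell_iterator
         cell = triangulation.begin_active();
       cell != triangulation.end(); ++cell)
    if (cell->center().norm() < 0.75)
      cell->set_refine_flag ();

  triangulation.execute_coarsening_and_refinement ();
}
@endcode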
diff --git a/deal.II/examples/step-1/doc/results.dox b/deal.II/examples/step-1/doc/results.dox
new file mode 100644
index 0000000000..60cb962c79
--- /dev/null
+++ b/deal.II/examples/step-1/doc/results.dox
@@ -0,0 +1,21 @@
Results

After it has run, the program has produced two grids, which look
like this:

@image html step-1.grid-1.png
@image html step-1.grid-2.png

The left one, well, is not very exciting. The right one is, at least,
unconventional.

diff --git a/deal.II/examples/step-10/doc/intro.dox b/deal.II/examples/step-10/doc/intro.dox
new file mode 100644
index 0000000000..2749945848
--- /dev/null
+++ b/deal.II/examples/step-10/doc/intro.dox
@@ -0,0 +1,61 @@

Introduction

This is a rather short example which only shows some aspects of using
higher order mappings. By mapping we mean the transformation
from the unit cell (i.e. the unit line, square, or cube) to the
cells in real space. In all the previous examples, we have implicitly
used linear or d-linear mappings; you will not have noticed this at
all, since this is what happens if you do not do anything
special. However, if your domain has curved boundaries, there are
cases where the piecewise linear approximation of the boundary
(i.e. by straight line segments) is not sufficient, and you want your
computational domain to be an approximation to the real domain using
curved boundaries as well. If the boundary approximation uses
piecewise quadratic parabolas to approximate the true boundary, then
we say that this is a quadratic or $Q_2$ approximation. If we
use piecewise graphs of cubic polynomials, then this is a $Q_3$
approximation, and so on.

For some differential equations, it is known that piecewise linear
approximations of the boundary, i.e. $Q_1$ mappings, are not
sufficient if the boundary of the domain is curved. Examples are the
biharmonic equation using $C^1$ elements, or the Euler equations
on domains with curved reflective boundaries. In these cases,
it is necessary to compute the integrals using a higher order
mapping. The reason, of course, is that if we do not use a higher
order mapping, the order of approximation of the boundary dominates
the order of convergence of the entire numerical scheme, irrespective
of the order of convergence of the discretization in the interior of
the domain.

Rather than demonstrating the use of higher order mappings with one of
these more complicated examples, we perform only a brief computation:
calculating the value of $\pi=3.141592653589793238462643\ldots$ by two
different methods.

The first method uses a triangulated approximation of the circle with
unit radius and integrates the unit function over it. Of course, if
the domain were the exact unit circle, then the area would be $\pi$, but
since we only use an approximation by piecewise polynomial segments,
the value of the area is not exactly $\pi$. However, it is known that as
we refine the triangulation, a $Q_p$ mapping approximates the boundary
with an order $h^{p+1}$, where $h$ is the mesh
width. We will check the values of the computed area of the circle and
their convergence towards $\pi$ under mesh refinement for different
mappings. We will also find a convergence behavior that is surprising
at first, but has a good explanation.

The second method works similarly, but this time does not use the area
of the triangulated unit circle, but rather its perimeter. $\pi$ is then
approximated by half of the perimeter, as the radius is equal to one.

diff --git a/deal.II/examples/step-10/doc/results.dox b/deal.II/examples/step-10/doc/results.dox
new file mode 100644
index 0000000000..cccf2683e5
--- /dev/null
+++ b/deal.II/examples/step-10/doc/results.dox
@@ -0,0 +1,194 @@

Results

The program performs two tasks: the first is to generate a
visualization of the mapped domain, the second to compute $\pi$ by the
two methods described. Let us first take a look at the generated
graphics. They are generated in Gnuplot format, and can be viewed with
the commands
@code
set data style lines
set size 0.721, 1
set nokey
plot [-1:1][-1:1] "ball0_mapping_q1.dat"
@endcode
or using one of the other filenames. The second line makes sure that
the aspect ratio of the generated output is actually 1:1, i.e. a
circle is drawn as a circle on your screen, rather than as an
ellipse. The third line switches off the key in the graphic, as that
would only print information (the filename) which is not that important
right now.

The following table shows the triangulated computational domain for
$Q_1$, $Q_2$, and $Q_3$ mappings, for the original coarse grid (left),
and a once uniformly refined grid (right). If your browser does not
display these pictures in acceptable quality, view them one by one.
@image html step-10.ball_mapping_q1_ref0.png
@image html step-10.ball_mapping_q1_ref1.png
@image html step-10.ball_mapping_q2_ref0.png
@image html step-10.ball_mapping_q2_ref1.png
@image html step-10.ball_mapping_q3_ref0.png
@image html step-10.ball_mapping_q3_ref1.png
These pictures show the obvious advantage of higher order mappings:
they approximate the true boundary quite well even on rather coarse
meshes. To demonstrate this a little further, the following table
shows the upper right quarter of the circle on the coarse mesh, with
the exact circle drawn in dashed lines:
@image html step-10.quarter-q1.png
@image html step-10.quarter-q2.png
@image html step-10.quarter-q3.png
Obviously the quadratic mapping approximates the boundary quite well,
while for the cubic mapping the difference between approximated domain
and true one is hardly visible even on the coarse grid. You can
also see that the mapping only changes anything at the outer
boundary of the triangulation. In the interior, all lines are still
represented by linear functions, resulting in additional computations
only on cells at the boundary. Higher order mappings are therefore
usually not noticeably slower than lower order ones, because the
additional computations are only performed on a small subset of all
cells.

The second purpose of the program was to compute the value of $\pi$ to
good accuracy. This is the output of this part of the program:
@code
Computation of Pi by the area:
==============================
Degree = 1
cells      eval.pi            error
   5 1.9999999999999998 1.1416e+00    -
  20 2.8284271247461898 3.1317e-01 1.87
  80 3.0614674589207178 8.0125e-02 1.97
 320 3.1214451522580520 2.0148e-02 1.99
1280 3.1365484905459389 5.0442e-03 2.00
5120 3.1403311569547521 1.2615e-03 2.00

Degree = 2
cells      eval.pi            error
   5 3.1045694996615869 3.7023e-02    -
  20 3.1391475703122276 2.4451e-03 3.92
  80 3.1414377167038303 1.5494e-04 3.98
 320 3.1415829366419019 9.7169e-06 4.00
1280 3.1415920457576907 6.0783e-07 4.00
5120 3.1415926155921126 3.7998e-08 4.00

Degree = 3
cells      eval.pi            error
   5 3.1465390309173475 4.9464e-03    -
  20 3.1419461263297386 3.5347e-04 3.81
  80 3.1416154689089382 2.2815e-05 3.95
 320 3.1415940909713274 1.4374e-06 3.99
1280 3.1415927436051230 9.0015e-08 4.00
5120 3.1415926592185492 5.6288e-09 4.00

Degree = 4
cells      eval.pi            error
   5 3.1418185737113964 2.2592e-04    -
  20 3.1415963919525050 3.7384e-06 5.92
  80 3.1415927128397780 5.9250e-08 5.98
 320 3.1415926545188264 9.2903e-10 5.99
1280 3.1415926536042722 1.4479e-11 6.00
5120 3.1415926535899668 1.7343e-13 6.38


Computation of Pi by the perimeter:
===================================
Degree = 1
cells      eval.pi            error
   5 2.8284271247461903 3.1317e-01    -
  20 3.0614674589207183 8.0125e-02 1.97
  80 3.1214451522580524 2.0148e-02 1.99
 320 3.1365484905459393 5.0442e-03 2.00
1280 3.1403311569547525 1.2615e-03 2.00
5120 3.1412772509327729 3.1540e-04 2.00

Degree = 2
cells      eval.pi            error
   5 3.1248930668550599 1.6700e-02    -
  20 3.1404050605605454 1.1876e-03 3.81
  80 3.1415157631807014 7.6890e-05 3.95
 320 3.1415878042798613 4.8493e-06 3.99
1280 3.1415923498174538 3.0377e-07 4.00
5120 3.1415926345932004 1.8997e-08 4.00

Degree = 3
cells      eval.pi            error
   5 3.1442603311164286 2.6677e-03    -
  20 3.1417729561193588 1.8030e-04 3.89
  80 3.1416041192612365 1.1466e-05 3.98
 320 3.1415933731961760 7.1961e-07 3.99
1280 3.1415926986118001 4.5022e-08 4.00
5120 3.1415926564043946 2.8146e-09 4.00

Degree = 4
cells      eval.pi            error
   5 3.1417078926581086 1.1524e-04    -
  20 3.1415945317216001 1.8781e-06 5.94
  80 3.1415926832497720 2.9660e-08 5.98
 320 3.1415926540544636 4.6467e-10 6.00
1280 3.1415926535970535 7.2602e-12 6.00
5120 3.1415926535899010 1.0805e-13 6.07
@endcode

One of the immediate observations from the output is that in all cases
the values converge quickly to the true value of
$\pi=3.141592653589793238462643$. Note that for the $Q_4$ mapping, the last
number is correct to 13 digits in both computations, which is already
quite a lot. However, also note that for the $Q_1$ mapping, even on the
finest grid the accuracy is significantly worse than on the coarse
grid for a $Q_4$ mapping!
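As an aside, the area computation can be reproduced in isolation with only a
few lines. The following is a minimal sketch, not the tutorial program
itself: it assumes a recent deal.II in which GridGenerator::hyper_ball
attaches a curved (spherical) boundary description automatically (in older
versions one had to attach a HyperBallBoundary object by hand), and the
mapping degree, refinement level and quadrature order are arbitrary choices.

@code
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/fe/mapping_q.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria.h>
#include <iostream>

using namespace dealii;

int main ()
{
  // Triangulate the unit disc and use a quadratic boundary mapping:
  Triangulation<2> triangulation;
  GridGenerator::hyper_ball (triangulation);
  triangulation.refine_global (2);

  const MappingQ<2> mapping (2);
  const FE_Q<2>     fe (1);
  DoFHandler<2>     dof_handler (triangulation);
  dof_handler.distribute_dofs (fe);

  // Integrate the unit function: the area is the sum of the
  // Jacobian-weighted quadrature weights JxW over all cells.
  const QGauss<2> quadrature (4);
  FEValues<2>     fe_values (mapping, fe, quadrature, update_JxW_values);

  double area = 0;
  for (const auto &cell : dof_handler.active_cell_iterators ())
    {
      fe_values.reinit (cell);
      for (unsigned int q = 0; q < quadrature.size (); ++q)
        area += fe_values.JxW (q);
    }
  std::cout << "pi ~ " << area << std::endl;
}
@endcode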
+ + + +The last column of the output shows the convergence order, in powers +of the mesh width $h$. In the introduction, we had stated that +the convergence order for a $Q_p$ mapping should be +$h^{p+1}$. However, in the example shown, the $Q_2$ and $Q_4$ +mappings show a convergence order of $h^{p+2}$! This at +first surprising fact is readily explained by the particular boundary +we have chosen in this example. In fact, the circle is described by the function +$\sqrt{1-x^2}$, which has the series expansion +$1-x^2/2-x^4/8-x^6/16+\ldots$ +around $x=0$. Thus, for the quadratic mapping where the +truncation error of the quadratic approximation should be cubic, there +is no such term but only a quartic one, which raises the convergence +order to 4, instead of 3. The same happens for the $Q_4$ mapping. + diff --git a/deal.II/examples/step-11/doc/intro.dox b/deal.II/examples/step-11/doc/intro.dox new file mode 100644 index 0000000000..d4671fcffd --- /dev/null +++ b/deal.II/examples/step-11/doc/intro.dox @@ -0,0 +1,110 @@ + +

Introduction

+ +The problem we will be considering is the solution of Laplace's problem with +Neumann boundary conditions only: +@f{eqnarray*} + -\Delta u &=& f \qquad \mathrm{in}\ \Omega, + \\ + \partial_n u &=& g \qquad \mathrm{on}\ \partial\Omega. +@f} +It is well known that if this problem is to have a solution, then the forces +need to satisfy the compatibility condition +@f[ + \int_\Omega f\; dx + \int_{\partial\Omega} g\; ds = 0. +@f] +We will consider the special case that $\Omega$ is the circle of radius 1 +around the origin, and $f=-2$, $g=1$. This choice satisfies the compatibility +condition. + +The compatibility condition allows a solution of the above equation, but it +nevertheless retains an ambiguity: since only derivatives of the solution +appear in the equations, the solution is only determined up to a constant. For +this reason, we have to pose another condition for the numerical solution, +which fixes this constant. + +For this, there are various possibilities: +
    +
  1. Fix one node of the discretization to zero or any other fixed value.
    This amounts to an additional condition $u_h(x_0)=0$. Although this is
    common practice, it is not necessarily a good idea, since we know that the
    solutions of Laplace's equation are only in $H^1$, which does not allow for
    the definition of point values because it is not a subset of the continuous
    functions. Therefore, even though fixing one node is allowed for
    discretized functions, it is not for continuous functions, and one can
    often see this in a resulting error spike at this point in the numerical
    solution.
  2. Fixing the mean value over the domain to zero or any other value. This
    is allowed on the continuous level, since $H^1(\Omega)\subset L^1(\Omega)$
    by Sobolev's inequality, and thus also on the discrete level, since there
    we only consider subsets of $H^1$.
  3. Fixing the mean value over the boundary of the domain to zero or any + other value. This is also allowed on the continuous level, since + $H^{1/2}(\partial\Omega)\subset L^1(\partial\Omega)$, again by Sobolev's + inequality. +
We will choose the last possibility, since we want to demonstrate another
technique with it.

While this describes the problem to be solved, we still have to figure out how
to implement it. Basically, except for the additional mean value constraint,
we have solved this problem several times, using Dirichlet boundary values,
and we only need to drop the treatment of Dirichlet boundary nodes. The use of
higher order mappings is also rather trivial and will be explained at the
various places where we use it; in almost all conceivable cases, you will only
consider the objects describing mappings as a black box which you need not
worry about, because their only use seems to be to be passed to places deep
inside the library where functions know how to handle them (i.e. in the
FEValues classes and their descendants).

The tricky point in this program is the use of the mean value
constraint. Fortunately, there is a class in the library which knows how to
handle such constraints, and we have used it quite often already, without
mentioning its generality. Note that if we assume that the boundary nodes are
spaced equally along the boundary, then the mean value constraint
@f[
  \int_{\partial \Omega} u(x) \; ds = 0
@f]
can be written as
@f[
  \sum_{i\in\partial\Omega_h} u_i = 0,
@f]
where the sum shall run over the indices of all degrees of freedom located
on the boundary of the computational domain. Let us denote by $i_0$ the index
on the boundary with the lowest number (or any other conveniently chosen
index); then the constraint can also be represented by
@f[
  u_{i_0} = \sum_{i\in\partial\Omega_h\backslash i_0} -u_i.
@f]
This, luckily, is exactly the form of constraints for which the
ConstraintMatrix class was designed. Note that we have used this
class in several previous examples for the representation of hanging
node constraints, which also have this form: there, the middle vertex shall
have the mean of the values of the adjacent vertices. In general, the
ConstraintMatrix class is designed to handle homogeneous constraints
of the form
@f[
  CU = 0
@f]
where $C$ denotes a matrix, and $U$ the vector of nodal values.

In this example, the mean value along the boundary allows just such a
representation, with $C$ being a matrix with just one row (i.e. there is only
one constraint). In the implementation, we will create a
ConstraintMatrix object, add one constraint (i.e. add another row to
the matrix) referring to the first boundary node $i_0$, and insert the weights
with which all the other nodes contribute, which in this example happens to be
just $-1$; a sketch of this setup follows below.

Later, we will use this object to eliminate the first boundary node from the
linear system of equations, reducing it to one which has a solution without
the ambiguity of the constant shift value. One of the problems of the
implementation will be that the explicit elimination of this node results in a
number of additional elements in the matrix, of which we do not know in
advance where they are located and how many additional entries will be in each
of the rows of the matrix. We will show how we can use an intermediate object
to work around this problem.

But now on to the implementation of the program solving this problem...
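Here is the kind of code this leads to, as a hedged fragment rather than the
program's literal implementation; in particular, the way the boundary degrees
of freedom are found here (DoFTools::extract_boundary_dofs, whose exact
signature has varied between library versions) is an assumption, and the
fragment presumes dof_handler already exists and <algorithm> is included.

@code
// Mark which degrees of freedom sit on the boundary:
std::vector<bool> boundary_dofs (dof_handler.n_dofs(), false);
DoFTools::extract_boundary_dofs (dof_handler,
                                 std::vector<bool> (1, true),
                                 boundary_dofs);

// i_0: the boundary degree of freedom with the lowest index.
const unsigned int first_boundary_dof
  = std::distance (boundary_dofs.begin(),
                   std::find (boundary_dofs.begin(),
                              boundary_dofs.end(),
                              true));

// Constrain u_{i_0} to minus the sum of all other boundary values:
ConstraintMatrix mean_value_constraints;
mean_value_constraints.add_line (first_boundary_dof);
for (unsigned int i=first_boundary_dof+1; i<dof_handler.n_dofs(); ++i)
  if (boundary_dofs[i] == true)
    mean_value_constraints.add_entry (first_boundary_dof, i, -1);
mean_value_constraints.close ();
@endcode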
diff --git a/deal.II/examples/step-11/doc/results.dox b/deal.II/examples/step-11/doc/results.dox
new file mode 100644
index 0000000000..a18a13cb86
--- /dev/null
+++ b/deal.II/examples/step-11/doc/results.dox
@@ -0,0 +1,45 @@
Results

This is what the program outputs:
@code
Using mapping with degree 1:
============================
cells  |u|_1    error
   5 0.680402 0.572912
  20 1.085518 0.167796
  80 1.208981 0.044334
 320 1.242041 0.011273
1280 1.250482 0.002832
5120 1.252605 0.000709

Using mapping with degree 2:
============================
cells  |u|_1    error
   5 1.050963 0.202351
  20 1.199642 0.053672
  80 1.239913 0.013401
 320 1.249987 0.003327
1280 1.252486 0.000828
5120 1.253108 0.000206

Using mapping with degree 3:
============================
cells  |u|_1    error
   5 1.086161 0.167153
  20 1.204349 0.048965
  80 1.240502 0.012812
 320 1.250059 0.003255
1280 1.252495 0.000819
5120 1.253109 0.000205
@endcode
As we expected, the convergence order for each of the different
mappings is clearly quadratic in the mesh size. What is
interesting, though, is that the error for a bilinear mapping
(i.e. degree 1) is more than three times larger than that for the
higher order mappings; it is therefore clearly advantageous in this
case to use a higher order mapping, not because it improves the order
of convergence but simply to reduce the constant in front of the
convergence order. On the other hand, using a cubic mapping improves
the result only insignificantly further, except on very coarse
grids.

diff --git a/deal.II/examples/step-12/doc/intro.dox b/deal.II/examples/step-12/doc/intro.dox
new file mode 100644
index 0000000000..6f7bdb7fef
--- /dev/null
+++ b/deal.II/examples/step-12/doc/intro.dox
@@ -0,0 +1,266 @@

Introduction

+ + +

Overview

This example is devoted to the discontinuous Galerkin method, or,
in short, the DG method. It covers the following topics:
    +
  1. Discretization of the linear transport equation with the DG method.
  2. Two different assembling routines for the system matrix based on
     face terms given as a sum of integrals, one of which
     a. loops over all cells and all their faces, and one of which
     b. loops over all faces, where each face is treated only once.
  3. Time comparison of the two assembling routines.
+ + +

Problem

+ +The DG method was first introduced to discretize simple transport +equations. Over the past years DG methods have been applied to a +variety of problems and many different schemes were introduced +employing a big zoo of different convective and diffusive fluxes. As +this example's purpose is to illustrate some implementational issues +of the DG discretization only, here we simply consider the linear +transport equation +@f[ + \nabla\cdot \left\{{\mathbf \beta} u\right\}=f \qquad\mbox{in }\Omega, +\qquad\qquad\qquad\mathrm{[transport-equation]}@f] +subject to the boundary conditions +@f[ +u=g\quad\mbox{on }\Gamma_-, +@f] +on the inflow part $\Gamma_-$ of the boundary $\Gamma=\partial\Omega$ +of the domain. Here, ${\mathbf \beta}={\mathbf \beta}(x)$ denotes a +vector field, $f$ a source function, $u$ the (scalar) solution +function, $g$ a boundary value function, +@f[ +\Gamma_-:=\{x\in\Gamma, {\mathbf \beta}(x)\cdot{\bf n}(x)<0\} +@f] +the inflow part of the boundary of the domain and ${\bf n}$ denotes +the unit outward normal to the boundary $\Gamma$. Equation +[transport-equation] is the conservative version of the +transport equation already considered in step 9 of this tutorial. + +In particular, we consider problem [transport-equation] on +$\Omega=[0,1]^2$ with ${\mathbf \beta}=\frac{1}{|x|}(-x_2, x_1)$ +representing a circular counterclockwise flow field, $f=0$ and $g=1$ +on $x\in\Gamma_-^1:=[0,0.5]\times\{0\}$ and $g=0$ on $x\in +\Gamma_-\setminus \Gamma_-^1$. + + +

Discretization

+ +Following the general paradigm of deriving DG discretizations for +purely hyperbolic equations, we first consider the general hyperbolic +problem +@f[ + \nabla\cdot {\mathcal F}(u)=f \qquad\mbox{in }\Omega, +@f] +subject to appropriate boundary conditions. Here ${\mathcal F}$ +denotes the flux function of the equation under consideration that in +our case, see equation [transport-equation], is represented by +${\mathcal F}(u)={\mathbf \beta} u$. For deriving the DG +discretization we start with a variational, mesh-dependent +formulation of the problem, +@f[ + \sum_\kappa\left\{-({\mathcal F}(u),\nabla v)_\kappa+({\mathcal + F}(u)\cdot{\bf n}, v)_{\partial\kappa}\right\}=(f,v)_\Omega, +@f] +that originates from [transport-equation] by multiplication with +a test function $v$ and integration by parts on each cell $\kappa$ of +the triangulation. Here $(\cdot, \cdot)_\kappa$ and $(\cdot, +\cdot)_{\partial\kappa}$ simply denote the integrals over the cell +$\kappa$ and the boundary $\partial\kappa$ of the cell, +respectively. To discretize the problem, the functions $u$ and $v$ are +replaced by discrete functions $u_h$ and $v_h$ that in the case of +discontinuous Galerkin methods belong to the space $V_h$ of +discontinuous piecewise polynomial functions of some degree $p$. Due +to the discontinuity of the discrete function $u_h$ on interelement +faces, the flux ${\mathcal F}(u)\cdot{\bf n}$ must be replaced by a +numerical flux function ${\mathcal H}(u_h^+, u_h^-, {\bf n})$, +where $u_h^+|_{\partial\kappa}$ denotes the inner trace (w.r.t. the +cell $\kappa$) of $u_h$ and $u_h^-|_{\partial\kappa}$ the outer trace, +i.e. the value of $u_h$ on the neighboring cell. Furthermore the +numerical flux function ${\mathcal H}$, among other things, must be +consistent, i.e. +@f[ +{\mathcal H}(u,u,{\bf n})={\mathcal F}(u)\cdot{\bf n}, +@f] +and conservative, i.e. +@f[ +{\mathcal H}(v,w,{\bf n})=-{\mathcal H}(w,v,-{\bf n}). +\qquad\qquad\qquad\mathrm{[conservative]}@f] +This yields the following discontinuous Galerkin + discretization: find $u_h\in V_h$ such that +@f[ + \sum_\kappa\left\{-({\mathcal F}(u_h),\nabla v_h)_\kappa+({\mathcal H}(u_h^+,u_h^-,{\bf n}), v_h)_{\partial\kappa}\right\}=(f,v_h)_\Omega, \quad\forall v_h\in V_h. +\qquad\qquad\qquad\mathrm{[dg-scheme]}@f] +Boundary conditions are realized by replacing $u_h^-$ on the inflow boundary $\Gamma_-$ by the boundary function $g$. +In the special case of the transport equation +[transport-equation] the numerical flux in its simplest form +is given by +@f[ + {\mathcal H}(u_h^+,u_h^-,{\bf n})(x)=\left\{\begin{array}{ll} + ({\mathbf \beta}\cdot{\bf n}\, u_h^-)(x),&\mbox{for } {\mathbf \beta}(x)\cdot{\bf n}(x)<0,\\ + ({\mathbf \beta}\cdot{\bf n}\, u_h^+)(x),&\mbox{for } {\mathbf \beta}(x)\cdot{\bf n}(x)\geq 0, +\end{array} +\right. +\qquad\qquad\qquad\mathrm{[flux-transport-equation]}@f] +where on the inflow part of the cell the value is taken from the +neighboring cell, $u_h^-$, and on the outflow part the value is +taken from the current cell, $u_h^+$. 
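In code, the decision in [flux-transport-equation] is just a sign check of
${\mathbf \beta}\cdot{\bf n}$ at each face quadrature point. The following is
a hedged, self-contained sketch, not the program's actual code; the function
name and the assumption that the traces $u_h^+$, $u_h^-$ and the normal are
already available (e.g. from FEFaceValues data) are ours.

@code
#include <deal.II/base/tensor.h>

using namespace dealii;

// Evaluate the upwind flux H(u^+, u^-, n) at one quadrature point.
double upwind_flux (const Tensor<1,2> &beta,
                    const Tensor<1,2> &normal,
                    const double       u_plus,   // trace from this cell
                    const double       u_minus)  // trace from the neighbor
{
  const double beta_n = beta * normal;  // beta(x) . n(x)
  return (beta_n < 0 ?
          beta_n * u_minus :            // inflow part: neighbor value
          beta_n * u_plus);             // outflow part: own value
}
@endcode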
Hence, the discontinuous Galerkin
scheme for the transport equation [transport-equation] is given
by: find $u_h\in V_h$ such that for all $v_h\in V_h$ the following
equation holds:
@f[
  \sum_\kappa\left\{-(u_h,{\mathbf \beta}\cdot\nabla v_h)_\kappa
  +({\mathbf \beta}\cdot{\bf n}\, u_h, v_h)_{\partial\kappa_+}
  +({\mathbf \beta}\cdot{\bf n}\, u_h^-, v_h)_{\partial\kappa_-\setminus\Gamma}\right\}
  =(f,v_h)_\Omega-({\mathbf \beta}\cdot{\bf n}\, g, v_h)_{\Gamma_-},
\qquad\qquad\qquad\mathrm{[dg-transport]}@f]
where $\partial\kappa_-:=\{x\in\partial\kappa,
{\mathbf \beta}(x)\cdot{\bf n}(x)<0\}$ denotes the inflow boundary
and $\partial\kappa_+=\partial\kappa\setminus \partial \kappa_-$ the
outflow part of cell $\kappa$. Below, this equation will be referred
to as the first version of the DG method. We note that after a
second integration by parts, we obtain: find $u_h\in V_h$ such that
@f[
  \sum_\kappa\left\{(\nabla\cdot\{{\mathbf \beta} u_h\},v_h)_\kappa
  -({\mathbf \beta}\cdot{\bf n} [u_h], v_h)_{\partial\kappa_-}\right\}
  =(f,v_h)_\Omega, \quad\forall v_h\in V_h,
@f]
where $[u_h]=u_h^+-u_h^-$ denotes the jump of the discrete function
between two neighboring cells and is defined to be $[u_h]=u_h^+-g$ on
the boundary of the domain. This is the discontinuous Galerkin scheme
for the transport equation given in its original notation.
Nevertheless, we will base the implementation of the scheme on the
form given by [dg-scheme] and [flux-transport-equation],
or [dg-transport], respectively.

Finally, we rewrite [dg-scheme] in terms of a summation over all
faces, where each face $e=\partial \kappa\cap\partial \kappa'$
between two neighboring cells $\kappa$ and $\kappa'$ occurs twice:
find $u_h\in V_h$ such that
@f[
  -\sum_\kappa({\mathcal F}(u_h),\nabla v_h)_\kappa+\sum_e\left\{({\mathcal H}(u_h^+,u_h^-,{\bf n}), v_h)_e+({\mathcal H}(u_h^-, u_h^+,-{\bf n}), v_h^-)_{e\setminus\Gamma}\right\}=(f,v_h)_\Omega \quad\forall v_h\in V_h.
\qquad\qquad\qquad\mathrm{[dg-scheme-faces-long]}@f]
By employing the conservativity [conservative] of the numerical flux,
this equation simplifies to: find $u_h\in V_h$ such that
@f[
  -\sum_\kappa({\mathcal F}(u_h),\nabla v_h)_\kappa+\sum_e({\mathcal H}(u_h^+,u_h^-,{\bf n}), [v_h])_{e\setminus\Gamma}+({\mathcal H}(u_h,g,{\bf n}), v_h)_{\Gamma}=(f,v_h)_\Omega \quad\forall v_h\in V_h.
\qquad\qquad\qquad\mathrm{[dg-scheme-faces]}@f]
Whereas the outer unit normal ${\bf n}|_{\partial\kappa}$ is uniquely
defined, this is not so for ${\bf n}_e$, as the latter might be the
normal from either side of the face. Hence, we need to fix the normal
${\bf n}$ on the face to be one of the two normals and denote the
other normal by $-{\bf n}$. This way we get $-{\bf n}$ in the second
face term in [dg-scheme-faces-long], which finally produces the
minus sign in the jump $[v_h]$ in equation [dg-scheme-faces].

For the linear transport equation [transport-equation],
equation [dg-scheme-faces] simplifies to
@f[
  -\sum_\kappa(u_h,{\mathbf \beta}\cdot\nabla v_h)_\kappa+\sum_e\left\{({\mathbf \beta}\cdot{\bf n}\, u_h, [v_h])_{e_+\setminus\Gamma}+({\mathbf \beta}\cdot{\bf n}\, u_h^-, [v_h])_{e_-\setminus\Gamma}\right\}=(f,v_h)_\Omega-({\mathbf \beta}\cdot{\bf n}\, g, v_h)_{\Gamma_-},
\qquad\qquad\qquad\mathrm{[dg-transport-gamma]}@f]
which will be referred to as the second version of the DG method.

Implementation

As already mentioned at the beginning of this example, we will
implement the assembling of the system matrix in two different ways.
The first one will be based on the first version [dg-transport]
of the DG method, which includes a sum of integrals over all cell
boundaries $\partial\kappa$. This is realized by a loop over all cells and
a nested loop over all faces of each cell. In this way each inner face
$e=\partial\kappa\cap\partial \kappa'$ is treated twice, the first
time when the outer loop treats cell $\kappa$ and the second time when it
treats cell $\kappa'$. As a consequence, some values, like the shape
function values at quadrature points on faces, need to be computed twice.

To overcome this overhead and for comparison, we also implement the
assembling of the matrix in a second, different way. This will
be based on the second version [dg-transport-gamma], which
includes a sum of integrals over all faces $e$. Here, several
difficulties occur.
    +
  1. As degrees of freedom are associated with cells (and not with faces),
    and as a normal is only defined with respect to a cell adjacent to the
    face, we cannot simply run over all faces of the triangulation but need
    to perform the nested loop over all cells and all faces of each cell,
    like in the first implementation. This is because in deal.II
    faces are accessible from cells but not vice versa.
  2. Due to the nested loop we arrive twice at each face. In order to
    assemble face terms only once, we either need to track which
    faces we have treated before, or we introduce a simple rule that decides
    from which of the two adjacent cells the face should be accessed and
    treated. Here, we employ the second approach (a code sketch follows
    after this list) and define the following rule:
      +
    1. If the two cells adjacent to a face are of the same refinement level we access and treat the face from the cell with lower index on this level. +
    2. If the two cells are of different refinement levels we access + and treat the face from the coarser cell. +
    +
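Expressed in code, the rule above might look like the following hedged
fragment from inside the nested cell/face loop. It is not the program's
literal code (which also has to deal with refined neighbors and their
subfaces); `cell` is assumed to be an active cell iterator and `face_no`
the number of the current face.

@code
if (!cell->face(face_no)->at_boundary())
  {
    const typename DoFHandler<dim>::cell_iterator
      neighbor = cell->neighbor(face_no);

    const bool treat_face_from_here
      = (neighbor->level() == cell->level() ?
         cell->index() < neighbor->index() :  // same level: lower index
         cell->level() < neighbor->level());  // different levels: coarser cell

    if (!treat_face_from_here)
      continue;  // this face is assembled when we visit the neighbor
  }
@endcode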
Before we start with the description of the code, we first introduce
its main ingredients. The main class is called
DGMethod. It comprises all basic objects like the
triangulation, the DoF handler, the system matrix and the solution vectors.
Furthermore, it has some member functions, the most prominent of
which are the assemble_system1 and assemble_system2
functions that implement the two different ways mentioned above for
assembling the system matrix. Within these assembling routines, several
different cases must be distinguished while performing the nested
loops over all cells and all faces of each cell and assembling the
respective face terms. While sitting on the current cell and looking
at a specific face there are the cases
    +
  1. face is at boundary, +
  2. neighboring cell is finer, +
  3. neighboring cell is of the same refinement level, and +
  4. neighboring cell is coarser +
where the `neighboring cell' and the current cell have the mentioned
faces in common. In the last three cases the assembling of the face terms
is almost the same. Hence, we can implement the assembling of the
face terms either by `copy and paste' (the lazy way, whose
disadvantages show up when the scheme or the equation needs to be
changed afterwards) or by calling a separate function that covers all
three cases. To be educational within this tutorial, we of course take
the latter approach. We go even further and encapsulate
this function, and everything that is needed for assembling the
specific equation under consideration, within a class called
DGTransportEquation. This class includes objects of all
equation-specific functions: the RHS and the
BoundaryValues classes, both derived from the Function
class, and the Beta class representing the vector field.
Furthermore, the DGTransportEquation class comprises the member
functions assemble_face_terms1 and
assemble_face_terms2, which are invoked by the
assemble_system1 and assemble_system2 functions of the
DGMethod class, respectively, and the functions
assemble_cell_term and assemble_boundary_term, which
are the same for both assembling routines. Due to the encapsulation of
all equation- and scheme-specific functions, the
DGTransportEquation class can easily be replaced by a similar
class that implements a different equation and a different DG method.
Indeed, the implementation of the assemble_system1 and
assemble_system2 functions of the DGMethod class will
be general enough to serve for different DG methods, different
equations, even for systems of equations (!) and, under small
modifications, for nonlinear problems. Finally, we note that the
program is dimension independent, i.e. after replacing
DGMethod<2> by DGMethod<3> the code runs in 3d.
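As a structural overview, the division of labor just described might be
sketched as follows. This is a hedged outline, not the program's literal
declarations; in particular the argument lists are abbreviated guesses.

@code
#include <deal.II/fe/fe_values.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

// Everything specific to the equation and the chosen fluxes:
template <int dim>
struct DGTransportEquation
{
  void assemble_cell_term     (const FEValues<dim>     &fe_values,
                               FullMatrix<double>      &cell_matrix,
                               Vector<double>          &cell_rhs) const;
  void assemble_boundary_term (const FEFaceValues<dim> &fe_face_values,
                               FullMatrix<double>      &cell_matrix,
                               Vector<double>          &cell_rhs) const;
  // Face terms for the first and second assembling strategy:
  void assemble_face_terms1   (/* face data */) const;
  void assemble_face_terms2   (/* face data */) const;
};

// The driver: mesh, DoFs, linear system, and the two assembly loops.
template <int dim>
class DGMethod
{
public:
  void assemble_system1 ();  // cell loop, each face visited twice
  void assemble_system2 ();  // each face term assembled only once
  // ...
};
@endcode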
diff --git a/deal.II/examples/step-12/doc/results.dox b/deal.II/examples/step-12/doc/results.dox
new file mode 100644
index 0000000000..63ff406f6f
--- /dev/null
+++ b/deal.II/examples/step-12/doc/results.dox
@@ -0,0 +1,89 @@
Results

The output of this program consists of the console output, the eps
files containing the grids, and the solutions given in gnuplot format.
@code
Cycle 0:
   Number of active cells:       64
   Number of degrees of freedom: 256
Time of assemble_system1: 0.05
Time of assemble_system2: 0.04
solution1 and solution2 coincide.
Writing grid to ...
Writing solution to ...

Cycle 1:
   Number of active cells:       112
   Number of degrees of freedom: 448
Time of assemble_system1: 0.09
Time of assemble_system2: 0.07
solution1 and solution2 coincide.
Writing grid to ...
Writing solution to ...

Cycle 2:
   Number of active cells:       214
   Number of degrees of freedom: 856
Time of assemble_system1: 0.17
Time of assemble_system2: 0.14
solution1 and solution2 coincide.
Writing grid to ...
Writing solution to ...

Cycle 3:
   Number of active cells:       415
   Number of degrees of freedom: 1660
Time of assemble_system1: 0.32
Time of assemble_system2: 0.28
solution1 and solution2 coincide.
Writing grid to ...
Writing solution to ...

Cycle 4:
   Number of active cells:       796
   Number of degrees of freedom: 3184
Time of assemble_system1: 0.62
Time of assemble_system2: 0.52
solution1 and solution2 coincide.
Writing grid to ...
Writing solution to ...

Cycle 5:
   Number of active cells:       1561
   Number of degrees of freedom: 6244
Time of assemble_system1: 1.23
Time of assemble_system2: 1.03
solution1 and solution2 coincide.
Writing grid to ...
Writing solution to ...
@endcode

We see that, as expected, the two solutions coincide on each refinement
step. The time saved by treating each face only once (the second version
of the DG method) compared to treating each face twice within a nested
loop over all cells and all faces of each cell (the first version) is
much smaller than one might have expected: the gain is less than 20% on
the last few refinement steps.

First we show the solutions on the initial mesh and on the meshes after
two and after five adaptive refinement steps.

@image html step-12.sol-0.png
@image html step-12.sol-2.png
@image html step-12.sol-5.png

Then we show the final grid (after 5 refinement steps). The
grid is largely concentrated in the vicinity of the jump of the
solution.

@image html step-12.grid-5.png

And finally we show a plot of a 3d computation.

@image html step-12.sol-5-3d.png

diff --git a/deal.II/examples/step-13/doc/intro.dox b/deal.II/examples/step-13/doc/intro.dox
new file mode 100644
index 0000000000..7362602221
--- /dev/null
+++ b/deal.II/examples/step-13/doc/intro.dox
@@ -0,0 +1,187 @@

Introduction

+ +

Background and purpose

In this example program, we will not so much be concerned with
describing new ways to use deal.II and its facilities, but rather
with presenting methods of writing modular and extensible finite
element programs. The main reason for this is the size and complexity
of modern research software: applications implementing modern error
estimation concepts and adaptive solution methods tend to become
rather large. For example, the three largest applications by the main
authors of deal.II are, at the time of writing of this example
program:
    +
  1. a program for solving hyperbolic conservation equations by the
    Discontinuous Galerkin Finite Element method: 33,775 lines of
    code;
  2. a parameter estimation program: 28,980 lines of code; +
  3. a wave equation solver: 21,020 lines of code. +
(The library proper - without example programs and
test suite - has slightly more than 150,000 lines of code as of spring 2002.)
In the opinion of the author of this example program, the sizes of these
applications are at the edge of what one person, even an experienced
programmer, can manage.

The numbers above make one thing rather clear: monolithic programs that
are not broken up into smaller, mostly independent pieces have no way
of surviving, since even the author will quickly lose the overview of
the various dependencies between different parts of a program. Only
data encapsulation, for example using object oriented programming
methods, and modularization by defining small but fixed interfaces can
help structure data flow and mutual interdependencies. It is also an
absolute prerequisite if more than one person is developing a program,
since otherwise confusion will quickly prevail as one developer
would need to know if another changed something about the internals of
a different module if they were not cleanly separated.

In previous examples, you have seen how the library itself is broken
up into several complexes, each building atop the underlying ones, but
relatively independent of the other ones:
    +
  1. the triangulation class complex, with associated iterator classes; +
  2. the finite element classes; +
  3. the DoFHandler class complex, with associated iterators, built on + the triangulation and finite element classes; +
  4. the classes implementing mappings between unit and real cells; +
  5. the FEValues class complex, built atop the finite elements and + mappings. +
+Besides these, and a large number of smaller classes, there are of +course the following ``tool'' modules: +
    +
  1. output in various graphical formats; +
  2. linear algebra classes. +
The goal of this program is now to give an example of how a relatively
simple finite element program could be structured such that we end up
with a set of modules that are as independent of each other as
possible. This allows one to change the program at one end, without having
to worry that it might break at the other, as long as we do not touch the
interface through which the two ends communicate. The interface in
C++, of course, is the declaration of abstract base classes.

Here, we will implement (again) a Laplace solver, although with a
number of differences compared to previous example programs:
    +
  1. The classes that implement the process of numerically solving the
    equation are no longer responsible for driving the process of
    ``solving-estimating error-refining-solving again''; we delegate
    this to external functions. This allows one first to use it as a
    building block in a larger context, where the solution of a
    Laplace equation might only be one part (for example, in a
    nonlinear problem, where Laplace equations might have to be solved
    in each nonlinear step). It would also allow one to build a framework
    around this class that allows using solvers for other
    equations (but with the same external interface) instead, in case
    some techniques shall be evaluated for different types of partial
    differential equations.
  2. It splits the process of evaluating the computed solution off into a
    separate set of classes. The reason is that one is usually not
    interested in the solution of a PDE per se, but rather in certain
    aspects of it. For example, one might wish to compute the traction
    at a certain boundary in elastic computations, or the signal of
    a seismic wave at a given receiver position. Sometimes, one might
    have an interest in several of these aspects. Since the evaluation
    of a solution is something that does not usually affect the process
    of solution, we split it off into a separate module, to allow for
    the development of such evaluation filters independently of the
    development of the solver classes.
  3. It separates the classes that implement mesh refinement from the
    classes that compute the solution.
  4. It separates the description of the test case with which we will
    present the program from the rest of the program.
The things the program does are not new. In fact, this is more like a
melange of previous programs, cannibalizing various parts and
functions from earlier examples. It is the way they are arranged in
this program that should be the focus of the reader, i.e. the software
design techniques used in the program to achieve the goal of
implementing the desired mathematical method. However, we must
stress that software design is in part also a subjective matter:
different people have different programming backgrounds and have
different opinions about the ``right'' style of programming; this
program therefore expresses only what the author considers useful
practice, and is not necessarily a style that you have to adopt in
order to write successful numerical software if you feel uncomfortable
with the chosen ways. It should serve as a case study, however,
inspiring the reader with ideas towards the desired end.

Once you have worked through the program, you will notice that it is
already somewhat complex in its structure. Nevertheless, it
only has about 850 lines of code, without comments. In real
applications, there would of course be comments and class
documentation, which would bring that to maybe 1200 lines. Yet, compared to
the applications listed above, this is still small, as they are 20 to
25 times as large. For programs that large, a proper design right from
the start is thus indispensable. Otherwise, the program will have to be
redesigned at some point in its life, once it becomes too large to be
manageable.

Despite this, all three programs listed above have undergone major
revisions, or even rewrites. The wave program, for example, was once
entirely torn apart when it was still significantly smaller, just
to be assembled again in a more modular form. By that time, it had
become impossible to add functionality without affecting older parts
of the code (the main problem with the code was the data flow: in time
dependent applications, the major concern is when to store data to disk
and when to reload it again; if this is not done in an organized
fashion, then you end up with data released too early, loaded too
late, or not released at all). Although the present example program
thus draws from several years of experience, it is certainly not
without flaws in its design, and in particular might not be suited for
an application where the objective is different. It should serve as an
inspiration for writing your own application in a modular way, to
avoid the pitfalls of too closely coupled codes.
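To give a flavor of the abstract-base-class interfaces meant above, here is a
hedged sketch of what the evaluation interface could look like. It is
illustrative only; the class name and the exact signature are assumptions,
not necessarily those of the program.

@code
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

// Base class of all objects that evaluate a computed solution.
// Derived classes decide what to extract: a point value, a mean
// value, graphical output, and so on. The solver classes only
// ever see this interface.
template <int dim>
class EvaluationBase
{
public:
  virtual ~EvaluationBase () {}

  virtual void operator () (const DoFHandler<dim> &dof_handler,
                            const Vector<double>  &solution) const = 0;
};
@endcode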

What the program does

What the program actually does is not even the main point of this
program; the structure of the program is more important. However, in a
few words, a description would be: solve the Laplace equation for a
given right hand side such that the solution is the function
$u(x,y)=\exp(x+\sin(10y+5x^2))$. The goal of the
computation is to get the value of the solution at the point
$x_0=(0.5,0.5)$, and to compare the accuracy with
which we resolve this value for two refinement criteria, namely global
refinement and refinement by the error indicator of Kelly et al., which
we have already used in previous examples.

The results will, as usual, be discussed in the respective section of
this document. In doing so, we will find a slightly irritating
observation about the relative performance of the two refinement
criteria. In a later example program, building atop this one, we will
devise a different method that should hopefully perform better than
the techniques discussed here.

So much now for all the theoretical and anecdotal background. The best
way of learning about a program is to look at it, so here it is:

diff --git a/deal.II/examples/step-13/doc/results.dox b/deal.II/examples/step-13/doc/results.dox
new file mode 100644
index 0000000000..b888db9825
--- /dev/null
+++ b/deal.II/examples/step-13/doc/results.dox
@@ -0,0 +1,190 @@

Results

The results of this program are not that interesting - after all,
its purpose was not to demonstrate some new mathematical idea, and
also not how to program with deal.II, but rather to use the material
which we have developed in the previous examples to form something
which demonstrates a way to build modern finite element software in a
modular and extensible way.

Nevertheless, we of course show the results of the program. Of
foremost interest is the point value computation, for which we had
implemented the corresponding evaluation class. The results (i.e. the
output) of the program look as follows:
@code
Running tests with "global" refinement criterion:
-------------------------------------------------
Refinement cycle: 0 1 2 3 4 5 6
DoFs   u(x_0)
   25  1.2868
   81  1.6945
  289  1.4658
 1089  1.5679
 4225  1.5882
16641  1.5932
66049  1.5945

Running tests with "kelly" refinement criterion:
------------------------------------------------
Refinement cycle: 0 1 2 3 4 5 6 7 8 9 10 11
DoFs   u(x_0)
   25  1.2868
   47  0.8775
   89  1.5365
  165  1.2974
  316  1.6442
  589  1.5221
 1090  1.5724
 2035  1.5622
 3754  1.5916
 7100  1.5876
13059  1.5942
24749  1.5933
@endcode

What surprises here is that the exact value is 1.59491554..., and that
it is obviously surprisingly difficult to compute the solution even to
only one per cent accuracy, although the solution is smooth (in fact,
infinitely often differentiable). This smoothness is shown in the
graphical output generated by the program, here the coarse grid and the
first 9 refinement steps of the Kelly refinement indicator:
@image html step-13.solution-kelly-0.png
@image html step-13.solution-kelly-1.png
@image html step-13.solution-kelly-2.png
@image html step-13.solution-kelly-3.png
@image html step-13.solution-kelly-4.png
@image html step-13.solution-kelly-5.png
@image html step-13.solution-kelly-6.png
@image html step-13.solution-kelly-7.png
@image html step-13.solution-kelly-8.png
@image html step-13.solution-kelly-9.png
While we are already looking at pictures, this is the eighth grid, as
viewed from the top:

@image html step-13.grid-kelly-8.png

However, we are not yet finished with evaluating the point value
computation. In fact, plotting the error
$e=|u(x_0)-u_h(x_0)|$ for the two
refinement criteria yields the following picture:

@image html step-13.error.png

What is disturbing about this picture is that not only is the
adaptive mesh refinement not better than global refinement, as one
would usually expect, it is even significantly worse since its
convergence is irregular, preventing all extrapolation techniques when
using the values of subsequent meshes! On the other hand, global
refinement provides a perfect $1/N$ (i.e. $h^2$)
convergence history and provides every opportunity to even improve on
the point values by extrapolation. Global mesh refinement must
therefore be considered superior in this example! This is even more
surprising as the evaluation point is not somewhere in the left part
where the mesh is coarse, but rather to the right, and the adaptive
refinement should refine the mesh around the evaluation point as well.

We thus close the discussion of this example program with a question:

What is wrong with adaptivity if it is not better than
global refinement?

Exercise at the end of this example: There is a simple reason
for the bad and irregular behavior of the adapted mesh solutions. It
is simple to find out by looking at the mesh around the evaluation
point in each of the steps - the data for this is in the output files
of the program. An exercise would therefore be to modify the mesh
refinement routine such that the problem (once you notice it) is
avoided. The second exercise is to check whether the results are then
better than with global refinement, and if so, whether an even better
order of convergence (in terms of the number of degrees of freedom) is
achieved, or only a better constant.

(Very brief answers for the impatient: at steps with larger
errors, the mesh is not regular at the point of evaluation, i.e. some
of the adjacent cells have hanging nodes; this destroys some
superapproximation effects of which the globally refined mesh can
profit. Answer 2: this quick hack
@code
  bool refinement_indicated = false;
  typename Triangulation<dim>::active_cell_iterator cell;
  for (cell=triangulation->begin_active();
       cell!=triangulation->end(); ++cell)
    for (unsigned int v=0; v<GeometryInfo<dim>::vertices_per_cell; ++v)
      if (cell->vertex(v) == Point<dim>(.5,.5))
        {
          cell->clear_coarsen_flag();
          refinement_indicated |= cell->refine_flag_set();
        };
  if (refinement_indicated)
    for (cell=triangulation->begin_active();
         cell!=triangulation->end(); ++cell)
      for (unsigned int v=0; v<GeometryInfo<dim>::vertices_per_cell; ++v)
        if (cell->vertex(v) == Point<dim>(.5,.5))
          cell->set_refine_flag ();
@endcode
in the refinement function of the Kelly refinement class, right before
executing refinement, would improve the results (exercise: what does
the code do?), making them consistently better than global
refinement. Behavior is still irregular, though, so no results about
an order of convergence are possible.)

diff --git a/deal.II/examples/step-14/doc/intro.dox b/deal.II/examples/step-14/doc/intro.dox
new file mode 100644
index 0000000000..6a6ffd3abd
--- /dev/null
+++ b/deal.II/examples/step-14/doc/intro.dox
@@ -0,0 +1,408 @@

Introduction

+ +

The maths

The Heidelberg group of Professor Rolf Rannacher, to which the three main
authors of the deal.II library belonged during their PhD time and partly also
afterwards, has been involved with adaptivity and error estimation for finite
element discretizations since the mid-1990s. The main achievement is the
development of error estimates for arbitrary functionals of the solution, and
of optimal mesh refinement for their computation.

We will not discuss the derivation of these concepts in too great detail, but
will implement the main ideas in the present example program. For a thorough
introduction into the general idea, we refer to the seminal work of Becker and
Rannacher @ref step_14_BR95 "[BR95]", @ref step_14_BR96r "[BR96r]", and the overview article of the same authors in
Acta Numerica @ref step_14_BR01 "[BR01]"; the first introduces the concept of error
estimation and adaptivity for general functional output for the Laplace
equation, while the second gives many examples of applications of these
concepts to a large number of other, more complicated equations. For
applications to individual types of equations, see also the publications by
Becker @ref step_14_Bec95 "[Bec95]", @ref step_14_Bec98 "[Bec98]",
Kanschat @ref step_14_Kan96 "[Kan96]", @ref step_14_FK97 "[FK97]",
Suttmeier @ref step_14_Sut96 "[Sut96]", @ref step_14_RS97 "[RS97]", @ref step_14_RS98c "[RS98c]",
@ref step_14_RS99 "[RS99]",
Bangerth @ref step_14_BR99b "[BR99b]", @ref step_14_Ban00w "[Ban00w]",
@ref step_14_BR01a "[BR01a]", @ref step_14_Ban02 "[Ban02]", and
Hartmann @ref step_14_Har02 "[Har02]", @ref step_14_HH01 "[HH01]",
@ref step_14_HH01b "[HH01b]".
All of these works, from the original introduction by Becker and Rannacher to
individual contributions to particular equations, have later been summarized
in a book by Bangerth and Rannacher that covers all of these topics; see
@ref step_14_BR03 "[BR03]".

The basic idea is the following: in applications, one is not usually
interested in the solution per se, but rather in certain aspects of it. For
example, in simulations of flow problems, one may want to know the lift or
drag of a body immersed in the fluid; it is this quantity that we want to know
to best accuracy, and whether the rest of the solution of the describing
equations is well resolved is not of primary interest. Likewise, in elasticity
one might want to know about values of the stress at certain points to guess
whether maximal load values of joints are safe, for example. Or, in radiative
transfer problems, mean flux intensities are of interest.

In all the cases just listed, it is the evaluation of a functional $J(u)$ of
the solution which we are interested in, rather than the values of $u$
everywhere. Since the exact solution $u$ is not available, but only its
numerical approximation $u_h$, it is sensible to ask whether the computed
value $J(u_h)$ is within certain limits of the exact value $J(u)$, i.e. we
want to bound the error with respect to this functional, $J(u)-J(u_h)$.

For simplicity of exposition, we henceforth assume that both the quantity of
interest $J$ and the equation are linear, and we will in particular
show the derivation for the Laplace equation with homogeneous Dirichlet
boundary conditions, although the concept is much more general. For this
general case, we refer to the references listed above. The goal is to obtain
bounds on the error, $J(e)=J(u)-J(u_h)$.
For this, let us denote by $z$ the
+solution of a dual problem, defined as follows:
+@f[
+  a(\varphi,z) = J(\varphi) \qquad \forall \varphi,
+@f]
+where $a(\cdot,\cdot)$ is the bilinear form associated with the differential
+equation, and the test functions are chosen from the corresponding solution
+space. Then, taking as special test function $\varphi=e$ the error, we have
+that
+@f[
+  J(e) = a(e,z)
+@f]
+and we can, by Galerkin orthogonality, rewrite this as
+@f[
+  J(e) = a(e,z-\varphi_h)
+@f]
+for all possible functions $\varphi_h$ from the discrete test space.
+
+Concretely, for Laplace's equation, the error identity reads
+@f[
+  J(e) = (\nabla e, \nabla(z-\varphi_h)).
+@f]
+For reasons that we will not explain, we do not want to use this formula as
+is, but rather split the scalar products into terms on all cells, and
+integrate by parts on each of them:
+@f{eqnarray*}
+  J(e)
+  &=&
+  \sum_K (\nabla (u-u_h), \nabla (z-\varphi_h))_K
+  \\
+  &=&
+  \sum_K (-\Delta (u-u_h), z-\varphi_h)_K
+  + (\partial_n (u-u_h), z-\varphi_h)_{\partial K}.
+@f}
+Next we use that $-\Delta u=f$, and that $\partial_n u$ is a quantity that is
+continuous almost everywhere, so the terms involving $\partial_n u$ on one
+cell cancel with those on its neighbor, where the normal vector has the
+opposite sign. At the boundary of the domain, where there is no neighbor cell
+with which this term could cancel, the weight $z-\varphi_h$ can be chosen as
+zero, since $z$ has zero boundary values, and $\varphi_h$ can be chosen to
+have the same.
+
+Thus, we have
+@f{eqnarray*}
+  J(e)
+  &=&
+  \sum_K (f+\Delta u_h, z-\varphi_h)_K
+  - (\partial_n u_h, z-\varphi_h)_{\partial K\backslash \partial\Omega}.
+@f}
+In a final step, note that when taking the normal derivative of $u_h$, we mean
+the value of this quantity as taken from this side of the cell (for the usual
+Lagrange elements, derivatives are not continuous across edges). We then
+rewrite the above formula by exchanging half of the edge integral of cell $K$
+with the neighbor cell $K'$, to obtain
+@f{eqnarray*}
+  J(e)
+  &=&
+  \sum_K (f+\Delta u_h, z-\varphi_h)_K
+  - \frac 12 (\partial_n u_h|_K + \partial_{n'} u_h|_{K'},
+              z-\varphi_h)_{\partial K\backslash \partial\Omega}.
+@f}
+Using that $n'=-n$ for the two normal vectors, we define the jump of the
+normal derivative by
+@f[
+  [\partial_n u_h] := \partial_n u_h|_K + \partial_{n'} u_h|_{K'}
+  =
+  \partial_n u_h|_K - \partial_n u_h|_{K'},
+@f]
+and get the final form after setting the discrete function $\varphi_h$, which
+is still arbitrary at this point, to the point interpolation of the dual
+solution, $\varphi_h=I_h z$:
+@f{eqnarray*}
+  J(e)
+  &=&
+  \sum_K (f+\Delta u_h, z-I_h z)_K
+  - \frac 12 ([\partial_n u_h],
+              z-I_h z)_{\partial K\backslash \partial\Omega}.
+@f}
+
+With this, we have obtained an exact representation of the error of the finite
+element discretization with respect to arbitrary (linear) functionals
+$J(\cdot)$. Its structure is a weighted form of a residual estimator, as both
+$f+\Delta u_h$ and $[\partial_n u_h]$ are cell and edge residuals that vanish
+on the exact solution, while $z-I_h z$ are weights indicating how important the
+residual on a certain cell is for the evaluation of the given functional.
+Furthermore, it is a cell-wise quantity, so we can use it as a mesh refinement
+criterion. The question is: how do we evaluate it? After all, the evaluation
+requires knowledge of the dual solution $z$, which carries the information
+about the quantity we want to know to best accuracy. 
+
+In some very special cases, this dual solution is known. For example, if the
+functional $J(\cdot)$ is the point evaluation, $J(\varphi)=\varphi(x_0)$, then
+the dual solution has to satisfy
+@f[
+  -\Delta z = \delta(x-x_0),
+@f]
+with the Dirac delta function on the right hand side, and the dual solution is
+the Green's function with respect to the point $x_0$. For simple geometries,
+this function is analytically known, and we could insert it into the error
+representation formula.
+
+However, we do not want to restrict ourselves to such special cases. Rather,
+we will compute the dual solution numerically, and approximate $z$ by some
+numerically obtained $\tilde z$. We note that it is not sufficient to compute
+this approximation $\tilde z$ using the same method as used for the primal
+solution $u_h$, since then $\tilde z-I_h \tilde z=0$, and the overall error
+estimate would be zero. Rather, the approximation $\tilde z$ has to be from a
+larger space than the primal finite element space. There are various ways to
+obtain such an approximation (see the cited literature), and we will choose to
+compute it with a higher order finite element space. While this is certainly
+not the most efficient way, it is simple since everything we need for it is
+already in place, and it also allows for simple experimentation. For more
+efficient methods, we again refer to the given literature, in particular
+@ref step_14_BR95 "[BR95]", @ref step_14_BR03 "[BR03]".
+
+With this, we end the discussion of the mathematical side of this program and
+turn to the actual implementation.
+
+
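+
+As a small, concrete illustration of the last point, here is a sketch of how
+the dual problem for the point value functional $J(\varphi)=\varphi(x_0)$
+could be discretized in a space one polynomial degree higher than the primal
+one. This is not the program's actual code; it assumes a version of the
+library that provides VectorTools::create_point_source_vector, and the
+evaluation point (0.75,0.75), in 2d, is only chosen for illustration:
+@code
+                                // dual space: one degree higher than
+                                // the primal FE_Q<dim>(1) space
+  FE_Q<dim>       dual_fe (2);
+  DoFHandler<dim> dual_dof_handler (triangulation);
+  dual_dof_handler.distribute_dofs (dual_fe);
+
+                                // right hand side of the dual problem:
+                                // (dual_rhs)_i = phi_i(x_0)
+  Vector<double> dual_rhs (dual_dof_handler.n_dofs());
+  VectorTools::create_point_source_vector (dual_dof_handler,
+                                           Point<dim>(0.75,0.75),
+                                           dual_rhs);
+
+                                // since a(.,.) is symmetric for the
+                                // Laplace equation, the matrix of the
+                                // dual problem is the usual Laplace
+                                // matrix, assembled on the dual space;
+                                // solve it for z as for the primal
+                                // problem
+@endcode
+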

The software

+
+The step-14 example program builds heavily on the techniques already used in
+the @ref step_13 "step-13" program. Its implementation of the dual weighted residual error
+estimator explained above is done by deriving a second class, aptly called
+DualSolver, from the Solver base class, and by a class
+(WeightedResidual) that joins the two again, controls the solution
+of the primal and dual problems, and then uses both to compute the error
+indicator for mesh refinement.
+
+The program continues the modular concept of the previous example by
+implementing the dual functional, which describes the quantity of interest, as an
+abstract base class, and by providing two different functionals which implement
+this interface. Adding a different quantity of interest is thus simple.
+
+One of the more fundamental differences is the handling of data. A common case
+is that you develop a program that solves a certain equation, and test it with
+different right hand sides, different domains, different coefficients and
+boundary values, etc. Usually, these have to match, so that exact solutions
+are known, or so that their combination makes sense at all.
+
+We demonstrate a simple, yet very flexible way to achieve this. We will put
+everything that belongs to a certain setup into one class,
+and provide a little C++ mortar around it, so that entire setups (domains,
+coefficients, right hand sides, etc.) can be exchanged by only changing
+something in one place.
+
+Going this way a little further, we have also centralized all the other
+parameters that describe how the program is to work in one place, such as the
+order of the finite element, the maximal number of degrees of freedom, the
+evaluation objects that shall be executed on the computed solutions, and so
+on. This allows for simpler configuration of the program, and we will show in
+a later program how to use a library class that can handle setting these
+parameters by reading an input file. The general aim is to reduce the number of
+places within a program where one may have to look when wanting to change some
+parameter, as it has turned out in practice that one forgets where they are as
+programs grow. Furthermore, putting all options describing what the program
+does in a certain run into a file (that can be stored with the results) helps
+repeatability of results more than if the various flags were set somewhere in
+the program, where their exact values are forgotten after the next change to
+this place.
+
+Unfortunately, the program has become rather long. While this admittedly
+reduces its usefulness as an example program, we think that it is a very good
+starting point for the development of programs for other kinds of problems,
+involving different equations than the Laplace equation treated here.
+Furthermore, it shows everything that we can show you about our approach to a
+posteriori error estimation, and its structure should make it simple for you
+to adjust this method to other problems, other functionals, other geometries,
+coefficients, etc.
+
+The author believes that the present program is his masterpiece among the
+example programs, regarding the mathematical complexity as well as the
+simplicity of adding extensions. If you use this program as a basis for your own
+programs, we would kindly like to ask you to state this fact and the name of
+the author of the example program, Wolfgang Bangerth, in publications that
+arise from that, if your program consists to a considerable part of the
+example program.
+
+
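+
+In outline, and with illustrative rather than actual signatures, the class
+structure described at the beginning of this section therefore looks like
+this (PrimalSolver is an assumed name for the class solving the primal
+problem):
+@code
+  template <int dim>
+  class DualSolver : public Solver<dim>
+  {
+                                // solve the dual problem
+                                // a(phi,z) = J(phi) for a given dual
+                                // functional J, using a higher order
+                                // finite element space
+  };
+
+  template <int dim>
+  class WeightedResidual : public PrimalSolver<dim>,
+                           public DualSolver<dim>
+  {
+                                // solve both primal and dual problems,
+                                // then evaluate the weighted residual
+                                // error representation cell by cell
+                                // and use it to flag cells for
+                                // refinement
+      void refine_grid ();
+  };
+@endcode
+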

Bibliography

+ +
+ +
@anchor step_14_Ban00w [Ban00w]
+
Wolfgang Bangerth. +
Mesh adaptivity and error control for a finite element approximation + of the elastic wave equation. +
In Alfredo Bermudez, Dolores Gomez, Christophe Hazard, Patrick + Joly, and Jean E. Roberts, editors, Proceedings of the Fifth + International Conference on Mathematical and Numerical Aspects of Wave + Propagation (Waves2000), Santiago de Compostela, Spain, 2000, pages + 725–729. SIAM, 2000. + + + +
@anchor step_14_Ban02 [Ban02]
+
Wolfgang Bangerth. +
Adaptive Finite Element Methods for the Identification of
+  Distributed Coefficients in Partial Differential Equations.
+
PhD thesis, University of Heidelberg, 2002. + + + +
@anchor step_14_BR99b [BR99b]
+
Wolfgang Bangerth and Rolf Rannacher. +
Finite element approximation of the acoustic wave equation: Error + control and mesh adaptation. +
East–West J. Numer. Math., 7(4):263–282, 1999. + + + +
@anchor step_14_BR03 [BR03]
+
Wolfgang Bangerth and Rolf Rannacher. +
Adaptive Finite Element Methods for Differential Equations. +
Birkhäuser Verlag, Basel, 2003. + + + +
@anchor step_14_BR01a [BR01a]
+
Wolfgang Bangerth and Rolf Rannacher. +
Adaptive finite element techniques for the acoustic wave equation. +
J. Comput. Acoustics, 9(2):575–591, 2001. + + + +
@anchor step_14_BR01 [BR01]
+
R. Becker and R. Rannacher. +
An optimal control approach to error estimation and mesh adaptation + in finite element methods. +
Acta Numerica, 10:1–102, 2001. + + + +
@anchor step_14_Bec95 [Bec95]
+
Roland Becker. +
An Adaptive Finite Element Method for the Incompressible + Navier-Stokes Equations on Time-dependent Domains. +
Dissertation, Universität Heidelberg, 1995. + + + +
@anchor step_14_Bec98 [Bec98]
+
Roland Becker. +
Weighted error estimators for finite element approximations of the + incompressible Navier-Stokes equations. +
Preprint 98-20, SFB 359, Universität Heidelberg, 1998. + + + +
@anchor step_14_BR96r [BR96r]
+
Roland Becker and Rolf Rannacher. +
A feed-back approach to error control in finite element methods: + Basic analysis and examples. +
East–West J. Numer. Math., 4:237–264, 1996. + + + +
@anchor step_14_BR95 [BR95]
+
Roland Becker and Rolf Rannacher. +
Weighted a posteriori error control in FE methods. +
In H. G. Bock et al., editors, ENUMATH 95, pages 621–637,
+  Paris, September 1998. World Scientific Publ., Singapore.
+
Also contained in @ref step_14_enumath97 "[enumath97]".
+
+
+
@anchor step_14_enumath97 [enumath97]
+
Hans Georg Bock, Franco Brezzi, Roland Glowinski, Guido Kanschat, Yuri A.
+  Kuznetsov, Jacques Periaux, and Rolf Rannacher, editors.
+
ENUMATH 97, Proceedings of the 2nd European Conference on + Numerical Mathematics and Advanced Applications, Singapore, 1998. World + Scientific. + + + +
@anchor step_14_FK97 [FK97]
+
Christian Führer and Guido Kanschat. +
A posteriori error control in radiative transfer. +
Computing, 58(4):317–334, 1997. + + + +
@anchor step_14_Har02 [Har02]
+
Ralf Hartmann. +
Adaptive Finite Element Methods for the Compressible Euler Equations. +
PhD thesis, University of Heidelberg, 2002. + + + +
@anchor step_14_HH01 [HH01]
+
Ralf Hartmann and Paul Houston. +
Adaptive discontinuous Galerkin finite element methods for + nonlinear hyperbolic conservation laws. +
SIAM J. Sci. Comput., 24:979–1004, 2002.
+
+
+
@anchor step_14_HH01b [HH01b]
+
Ralf Hartmann and Paul Houston. +
Adaptive discontinuous Galerkin finite element methods for the + compressible Euler equations. +
J. Comput. Phys., 183:508–532, 2002.
+
+
+
@anchor step_14_Kan96 [Kan96]
+
Guido Kanschat. +
Parallel and Adaptive Galerkin Methods for Radiative Transfer + Problems. +
Dissertation, Universität Heidelberg, 1996. + + + +
@anchor step_14_RS97 [RS97]
+
Rolf Rannacher and Franz-Theo Suttmeier. +
A feed-back approach to error control in finite element methods: + Application to linear elasticity. +
Comp. Mech., 19(5):434–446, 1997. + + + +
@anchor step_14_RS98c [RS98c]
+
Rolf Rannacher and Franz-Theo Suttmeier. +
A posteriori error control in finite element methods via duality + techniques: Application to perfect plasticity. +
Comp. Mech., 21(2):123–133, 1998. + + + +
@anchor step_14_RS99 [RS99]
+
Rolf Rannacher and Franz-Theo Suttmeier. +
A posteriori error control and mesh adaptation for finite element + models in elasticity and elasto-plasticity. +
Comput. Methods Appl. Mech. Engrg., pages 333–361, 1999. + + + +
@anchor step_14_Sut96 [Sut96]
+
Franz-Theo Suttmeier. +
Adaptive Finite Element Approximation of Problems in + Elasto-Plasticity Theory. +
Dissertation, Universität Heidelberg, 1996. + + + +
+ + + + + diff --git a/deal.II/examples/step-14/doc/results.html b/deal.II/examples/step-14/doc/results.html new file mode 100644 index 0000000000..85d13aedd1 --- /dev/null +++ b/deal.II/examples/step-14/doc/results.html @@ -0,0 +1,514 @@ + +

Results

+ +

Point values

+ +

+This program offers a lot of possibilities to play around with. We can thus
+only show a small part of all the possible results that can be obtained
+with the help of this program. However, you are encouraged to just try
+it out by changing the settings in the main program. Here, we start
+by simply letting it run, unmodified:
+

+Refinement cycle: 0
+   Number of degrees of freedom=72
+   Point value=0.03243
+   Estimated error=0.000702385
+Refinement cycle: 1
+   Number of degrees of freedom=67
+   Point value=0.0324827
+   Estimated error=0.000888953
+Refinement cycle: 2
+   Number of degrees of freedom=130
+   Point value=0.0329619
+   Estimated error=0.000454606
+Refinement cycle: 3
+   Number of degrees of freedom=307
+   Point value=0.0331934
+   Estimated error=0.000241254
+Refinement cycle: 4
+   Number of degrees of freedom=718
+   Point value=0.0333675
+   Estimated error=7.4912e-05
+Refinement cycle: 5
+   Number of degrees of freedom=1691
+   Point value=0.0334104
+   Estimated error=3.47976e-05
+Refinement cycle: 6
+   Number of degrees of freedom=4065
+   Point value=0.0334315
+   Estimated error=1.49476e-05
+Refinement cycle: 7
+   Number of degrees of freedom=9113
+   Point value=0.0334407
+   Estimated error=6.23712e-06
+Refinement cycle: 8
+   Number of degrees of freedom=22303
+   Point value=0.0334445
+
+ +

+ +

+First, let's look at what the program actually computed. On the fifth
+grid, the primal and dual numerical solutions look like this:
+
+
+
+
+
+
+ Primal solution + + + Dual solution + +
+Obviously, the region at the bottom left is so unimportant for the
+point value evaluation at the top right that the grid is left entirely
+unrefined there, even though the solution has singularities there! Due
+to the symmetry of right hand side and domain, the solution should
+actually look the same in all four corners as it does at the top
+right, but the mesh refinement criterion involving the dual solution
+chose to refine them differently.
+

+ +

+Here are some of the grids that are produced in the course of
+subsequent refinement:
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ + Grid 0 + + + + Grid 2 + +
+ + Grid 4 + + + + Grid 5 + +
+ + Grid 7 + + + + Grid 8 + +
+Note the subtle interplay between resolving the corner singularities,
+and resolving around the point of evaluation. It would be rather
+difficult to generate such a mesh by hand, as this would involve
+judging quantitatively how well each of the four corner singularities
+should be resolved, and balancing that against the resolution in the
+vicinity of the evaluation point.
+

+ +

+The program prints the point value and the estimated error in this
+quantity. From extrapolating it, we can guess that the exact value is
+approximately 0.0334473, plus or minus 0.0000001 (note that we get
+almost 6 valid digits from only 22,000 (primal) degrees of
+freedom). This number cannot be obtained from the value of the
+functional alone; rather, I have used the assumption that the error
+estimator is mostly exact, and added the estimated error to the
+computed value to get an approximation of the true
+value. Computing with more degrees of freedom shows that this
+assumption is indeed valid.
+
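+
+
+As a quick sanity check of this extrapolation using the numbers shown above:
+in refinement cycle 7 we get 0.0334407+0.0000062=0.0334469, and in cycle 6 we
+get 0.0334315+0.0000149=0.0334464, both already close to the guessed value of
+0.0334473.
+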

+ +

+From the computed results, we can generate two graphs: the first shows
+the convergence of the error J(u)-J(u_h) in the point value (taking the
+extrapolated value as correct), as well as the error in the value that
+we get by adding up the computed value J(u_h) and the estimated
+error eta (if the error estimator eta were exact, then the value
+J(u_h)+eta would equal the exact point value, and the error
+in this quantity would always be zero; however, since the error
+estimator is only a - good - approximation to the true error, we can
+only reduce the size of the error this way). In this graph, we also
+indicate the complexity O(1/N) to show that mesh refinement
+acts optimally in this case. The second chart compares
+true and estimated error, and shows that the two are actually very
+close to each other, even for such a complicated quantity as the point
+value:
+

+ + + + + + + +
+ + Error in point value + + + + Error in point value + +
+ + +

Comparing refinement criteria

+ +

+Since we have invested quite some effort when using the mesh
+refinement driven by the dual weighted error estimator (for solving
+the dual problem, and for evaluating the error representation), it is
+worthwhile asking whether that effort was successful. To this end, we
+first compare the achieved error levels for different mesh refinement
+criteria. To generate this data, simply change the value of the mesh
+refinement criterion variable in the main program. The results are as
+follows (for the weight in the Kelly indicator, we have chosen the
+function 1/(r^2+0.1^2), where r
+is the distance to the evaluation point; it can be shown that this is
+the optimal weight if we neglect the effects of boundaries):
+

+

+ Error comparison + +

+ +

+Checking these numbers, we see that for global refinement, the error
+is proportional to O(1/(sqrt(N) log(N))), and for the dual
+estimator to O(1/N). Generally speaking, we see that the dual
+weighted error estimator is better than the other refinement
+indicators, at least when compared with those that have a similarly
+regular behavior. The Kelly indicator produces smaller errors, but
+jumps around the picture rather irregularly, with the error also
+changing signs sometimes. Therefore, its behavior does not allow us to
+extrapolate the results to larger values of N. Furthermore, if we
+trust the error estimates of the dual weighted error estimator, the
+results can be improved by adding the estimated error to the computed
+values. In terms of reliability, the weighted estimator is thus better
+than the Kelly indicator, although the latter sometimes produces
+smaller errors.
+
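+
+
+For reference, the weight 1/(r^2+0.1^2) used for the Kelly indicator above
+could be implemented as a deal.II Function object along the following lines.
+This is a sketch, not the program's actual code; the class name and the
+evaluation point are made up for illustration:
+@code
+class KellyWeight : public Function<2>
+{
+  public:
+    virtual double value (const Point<2> &p,
+                          const unsigned int /*component*/ = 0) const
+      {
+                                // squared distance to the (assumed)
+                                // evaluation point
+        const Point<2> x0 (0.75, 0.75);
+        const double   r2 = (p-x0) * (p-x0);
+        return 1. / (r2 + 0.1*0.1);
+      }
+};
+@endcode
+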

+ + +

Evaluation of point stresses

+ +

+Besides evaluating the values of the solution at a certain point, the
+program also offers the possibility to evaluate the x-derivative at a
+certain point, and to tailor mesh refinement for this. To let the
+program compute these quantities, simply replace the two occurrences of
+PointValueEvaluation in the main function by
+PointXDerivativeEvaluation, and let the program run:
+

+Refinement cycle: 0
+   Number of degrees of freedom=72
+   Point x-derivative=-0.0719397
+   Estimated error=-0.0126173
+Refinement cycle: 1
+   Number of degrees of freedom=61
+   Point x-derivative=-0.0707956
+   Estimated error=-0.00774316
+Refinement cycle: 2
+   Number of degrees of freedom=131
+   Point x-derivative=-0.0568671
+   Estimated error=-0.00313426
+Refinement cycle: 3
+   Number of degrees of freedom=247
+   Point x-derivative=-0.053033
+   Estimated error=-0.00136114
+Refinement cycle: 4
+   Number of degrees of freedom=541
+   Point x-derivative=-0.0526461
+   Estimated error=-0.000555479
+Refinement cycle: 5
+   Number of degrees of freedom=1286
+   Point x-derivative=-0.0526896
+   Estimated error=-0.0002261
+Refinement cycle: 6
+   Number of degrees of freedom=2924
+   Point x-derivative=-0.0527503
+   Estimated error=-9.38035e-05
+Refinement cycle: 7
+   Number of degrees of freedom=6578
+   Point x-derivative=-0.0527877
+   Estimated error=-3.94139e-05
+Refinement cycle: 8
+   Number of degrees of freedom=14780
+   Point x-derivative=-0.0528047
+   Estimated error=-1.85456e-05
+Refinement cycle: 9
+   Number of degrees of freedom=31438
+   Point x-derivative=-0.0528145
+
+ +

+ +

+The solution looks roughly the same as before (the exact solution of +course is the same, only the grid changed a little), but the +dual solution is now different. A close-up around the point of +evaluation shows this: + + + +
+ Dual solution + +
+This time, the grids in refinement cycles 0, 5, 6, 7, 8, and 9 look +like this: + + + + + + + + + + + + + + + + + + +
+ + Grid 0 + + + + Grid 5 + +
+ + Grid 6 + + + + Grid 7 + +
+ + Grid 8 + + + + Grid 9 + +
+Note the asymmetry of the grids compared with those we obtained for
+the point evaluation, which is due to the directionality of the
+x-derivative for which we tailored the refinement criterion.
+

+ +

+Then, it is interesting to compare actually computed values of the
+quantity of interest (i.e. the x-derivative of the solution at one
+point) with a reference value of -0.0528223... plus or minus
+0.0000005. We get this reference value by computing on a finer grid after
+some more mesh refinements, with approximately 130,000 cells.
+Recall that if the error is O(1/N) in the optimal case, then
+taking a mesh with ten times more cells gives us one additional digit
+in the result.
+

+ +

+In the left part of the following chart, you again see the convergence +of the error towards this extrapolated value, while on the right you +see a comparison of true and estimated error: + + + + + + +
+ + Error in point derivative + + + + Error in point derivative + +
+After an initial phase where the true error changes its sign, the +estimated error matches it quite well, again. Also note the dramatic +improvement in the error when using the estimated error to correct the +computed value of J(uh). +

+ + +

Step-13 revisited

+ +

+If instead of the Exercise_2_3 data set, we choose +CurvedRidges in the main function, we can redo the +computations of the previous example program, to compare whether the +results obtained with the help of the dual weighted error estimator +are better than those we had previously. +

+ +

+First, the meshes after 9 and 10 adaptive refinement cycles, +respectively, look like this: + + + + + + +
+ Grid 9 + + + Grid 10 + +
+The features of the solution can still be seen slightly, but since the
+solution is smooth, the roughness of the dual solution entirely
+dominates the mesh refinement criterion, and leads to strongly
+concentrated meshes. The solution after the seventh refinement step
+looks like this:
+
+
+
+
+ Solution 7 + +
+Obviously, the solution is worse at some places, but the mesh
+refinement process should have ensured that these places are not
+important for computing the point value.
+

+ + +

+The next point is to compare the new (duality based) mesh refinement +criterion with the old ones. These are the results: +

+

+ Error comparison + +

+ +

+The results are, well, somewhat mixed. First, the Kelly indicator
+disqualifies itself by its unsteady behavior, changing the sign of the
+error several times, and with increasing errors under mesh
+refinement. The dual weighted error estimator has a monotone decrease
+in the error, and is better than the weighted Kelly and global
+refinement, but the margin is not as large as expected. This is due to
+the fact that, here, global refinement can exploit the regular
+structure of the meshes around the point of evaluation, which leads to
+a better order of convergence for the point error. However, if we had
+a mesh that is not locally rectangular, for example because we had to
+approximate curved boundaries, or if the coefficients were not
+constant, then this advantage of globally refined meshes would
+vanish, while the good performance of the duality based estimator
+would remain.
+

+ + + +

Conclusions and outlook

+ +

+The results shown here do not indicate too clearly the superiority of the
+dual weighted error estimation approach for mesh refinement over other
+mesh refinement criteria, such as the Kelly indicator. This is due to
+the relative simplicity of the applications shown. If you are not
+convinced yet that this approach is indeed superior, you are invited
+to browse through the literature indicated in the introduction, where
+plenty of examples are provided in which the dual weighted approach can
+reduce the necessary numerical work by orders of magnitude, making
+this the only way to compute certain quantities to reasonable
+accuracies at all.
+

+ +

+Besides the objections you may raise against its use as a mesh +refinement criterion, consider that accurate knowledge of the error in +the quantity one might want to compute is of great use, since we can +stop computations when we are satisfied with the accuracy. Using more +traditional approaches, it is very difficult to get accurate estimates +for arbitrary quantities, except for, maybe, the error in the energy +norm, and we will then have no guarantee that the result we computed +satisfies any requirements on its accuracy. Also, as was shown for the +evaluation of point values and derivatives, the error estimate can be +used to extrapolate the results, yielding much higher accuracy in the +quantity we want to know. +

+ +

+Leaving these mathematical considerations, we tried to write the +program in a modular way, such that implementing another test case, or +another evaluation and dual functional is simple. You are encouraged +to take the program as a basis for your own experiments, and to play a +little. +

+ + + + diff --git a/deal.II/examples/step-15/doc/intro.dox b/deal.II/examples/step-15/doc/intro.dox new file mode 100644 index 0000000000..f89450a3ac --- /dev/null +++ b/deal.II/examples/step-15/doc/intro.dox @@ -0,0 +1,299 @@ + +

Introduction

+ +

Foreword

+
+This program demonstrates a number of techniques that have not been shown in
+previous example programs. In particular, it shows how to program for
+one-dimensional problems, and some aspects of what to do with nonlinear
+problems, especially how to transfer the solution from one grid to the next
+finer one. Apart from this, however, the program does not attempt to do much
+more than to entertain those who sometimes like to play with maths.
+
+The application we chose is, as you will see, not even very well suited for
+anything, since it is rather impossible to solve. When I started to write the
+program, I was not aware of this, and it only turned out later that the
+optimization problem we are looking at here is severely plagued by many,
+likely even degenerate, minima, and that we cannot really hope to find a global
+one. What we do instead is to start the optimization from many initial
+guesses (which is cheap since the problem is 1d), and hope that we can get a
+reasonable best solution for some of them. While the whole thing, as an
+application, is not very satisfactory, keep in mind that solving particular
+applications is not the goal of the tutorial programs; rather, we would like
+to demonstrate techniques of programming with deal.II, which is indeed the
+focus here.
+
+

The problem

+
+Now for a description of the problem. In the book by Dacorogna on the
+Calculus of Variations, I found the following statement, which confused me
+tremendously at first (see Section 3.4.3, ``Lavrentiev Phenomenon'', very
+slightly edited):
+
+@par Theorem 4.6:
+
+  Let
+  @f[
+    I(u)=\int_0^1 (x-u^3)^2 (u')^6\; dx.
+  @f]
+  Let
+  @f[
+    {\cal W}_1 = \{ u\in W^{1,\infty}(0,1) : u(0)=0, u(1)=1 \}
+  @f]
+  @f[
+    {\cal W}_2 = \{ u\in W^{1,1}(0,1) : u(0)=0, u(1)=1 \}
+  @f]
+
+
+@par
+
+  Then
+  @f[
+    \inf_{u\in {\cal W}_1} I(u) \ge c_0 > 0 = \inf_{u\in {\cal W}_2} I(u).
+  @f]
+  Moreover the minimum of $I(u)$ over ${\cal W}_2$ is attained by
+  $u(x)=x^{1/3}$.
+
+
+@par Remarks.
+  [...]
+
+@par
+
+  ii) it is interesting to note that if one uses the usual finite element
+  methods (by taking piecewise affine functions, which are in $W^{1,\infty}$)
+  one will not be able to detect the minimum of some integrals such as the one
+  in the theorem.
+
+
+In other words: minimizing the energy functional over one space
+($W^{1,\infty}$) does not give the same value as minimizing over a larger
+space ($W^{1,1}$). Furthermore, the book gives a rough estimate of the value
+of the constant $c_0$, namely $c_0=\frac{7^23^5}{2^{18}5^5}\approx 1.61\cdot
+10^{-6}$ (although by this calculation it is obvious that the estimate is
+far too small; the point, of course, is just that it is strictly
+larger than zero).
+
+While the theorem was not surprising, the remark stunned me at first. After
+all, we know that we can approximate functions in $W^{1,1}$ to arbitrary
+accuracy. Also, although it is true that finite element functions are in
+$W^{1,\infty}$, this statement is not really accurate: if the function itself
+is bounded pointwise by, say, a constant $C$, then its gradient is bounded by
+$2C/h$, and thus $\|u_h\|_{1,\infty} \le 2C/h$. That means that we should be
+able to lift this limit just by mesh refinement. Finite element functions are
+therefore only in $W^{1,\infty}$ if one considers them on a fixed grid, not on
+a sequence of successively finer grids. (Note that we can lift the
+boundedness in $W^{1,1}$ in the same way only by considering functions that
+oscillate at cell frequency; these, however, do not converge in any reasonable
+measure.)
+
+So it took me a while to see where the problem lies. Here it is: While we are
+able to approximate functions to arbitrary accuracies in Sobolev
+norms, this does not necessarily also hold with respect to the functional
+$I(u)$! After all, this functional was made to show exactly these
+pathologies.
+
+What happens in this case is actually not so difficult to understand. Let us
+look at what happens if we plug the lowest-order (piecewise linear)
+interpolant $i_hu$ of the optimal solution $u=x^{1/3}$ into the functional
+$I(u)$: on the leftmost cell, the left end of $i_hu$ is fixed to zero by the
+boundary condition, and the right end has the value $i_hu(h)=u(h)=h^{1/3}$,
+so that $i_hu(x)=h^{-2/3}x$ on this cell. Let us only consider the
+contribution of this single cell to $I(u)$:
+@f{eqnarray*}
+  \int_0^h (x-(i_hu)^3)^2 ((i_hu)')^6 dx
+  &=&
+  \int_0^h (x-h^{-2}x^3)^2 (h^{-2/3})^6 dx
+  \\
+  &=&
+  h^{-4} \int_0^h (x^2-2h^{-2}x^4+h^{-4}x^6) dx
+  \\
+  &=&
+  h^{-4} (h^3/3-2h^3/5+h^3/7)
+  \\
+  &=& {\cal O}(h^{-1}).
+@f}
+Oops, even the contribution of the first cell blows up under mesh refinement,
+and we have not even summed up the contributions of the other cells! 
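+
+(To make the blow-up quantitative: the first-cell contribution computed above
+evaluates to
+@f[
+  h^{-4}\left(\frac{h^3}{3}-\frac{2h^3}{5}+\frac{h^3}{7}\right)
+  = \left(\frac{1}{3}-\frac{2}{5}+\frac{1}{7}\right) h^{-1}
+  = \frac{8}{105}\, h^{-1},
+@f]
+which indeed tends to infinity as $h\rightarrow 0$.)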
+
+It turns out that the other cells are not really problematic (since the
+gradient is bounded there by a constant independent of $h$), but we cannot
+really avoid the trouble with the first cell: if instead of the interpolant we
+choose some other finite element function that is closer on average to
+$x^{1/3}$ than the interpolant above, then we have to increase the slope of
+this function, since we have to obey the boundary condition at the left
+end. But then we are hit by the weight $(u')^6$. This weight is simply too
+strong!
+
+On the other hand, the interpolation of the linear function $\varphi(x)=x$
+connecting the boundary values has the finite energy $I(i_h\varphi)=8/105$,
+independent of the mesh size. Thus, $i_hx^{1/3}$ cannot be the minimizer of the
+energy as $h\rightarrow 0$. This is also easy to see by noting that
+the minimal value of $I$ cannot increase under mesh
+refinement: if it is finite for some function on some mesh, then it must be
+smaller or equal to that value on a finer mesh, since the original function is
+still in the space spanned by the shape functions on the finer grid, as finite
+element spaces are nested. However, the computation above shows that we should
+not be surprised if the value of the functional does not converge to zero, but
+rather to some finite value.
+
+There is one more conclusion to be drawn from the blow-up lesson above: we
+cannot expect the finite dimensional approximation to be close to the root
+function at the left end of the domain, for any mesh we choose! Because, if it
+were, then its energy would have to blow up. And we will see exactly this
+in the results section below.
+
+

What to do?

+
+After this somewhat theoretical introduction, let us just once in our life
+have fun with pure mathematics, and actually see what happens in this problem
+when we run the finite element method on it. So here goes: to find the
+minimum of $I(u)$, we have to find its stationary point. The condition for
+this reads
+@f[
+  I'(u,\varphi)
+  =
+  \int_0^1 6 (x-u^3) (u')^5 \{ (x-u^3)\varphi' - u^2 u' \varphi\}\ dx
+  = 0,
+@f]
+for all test functions $\varphi$ from the same space as that from which we
+take $u$, but with zero boundary conditions. If this space allows us to
+integrate by parts, then we could associate this with a two point boundary
+value problem
+@f{eqnarray*}
+  -(x-u^3) u^2(u')^6
+  - \frac{d}{dx} \left\{(x-u^3)^2 (u')^5\right\} = 0,
+  \qquad\qquad u(0)=0,
+  \quad u(1)=1.
+@f}
+Note that this equation degenerates wherever $u^3=x$, which is at least the
+case at $x\in\{0,1\}$ due to the prescribed boundary values for $u$, but
+possibly at other places as well. However, for finite elements, we will want
+to have the equation in weak form anyway. Since the equation is still
+nonlinear, one may be tempted to compute iterates
+$u_{k+1}=u_k+\alpha_k\delta u_k$ using a Newton method for updates $\delta
+u_k$, like in
+@f[
+  I''(u_k,\delta u_k,\varphi)
+  =
+  -I'(u_k, \varphi).
+@f]
+However, since $I''(u_k,\cdot,\cdot)$ may be an indefinite operator (and, as
+numerical experiments indicate, it in fact is during typical computations), we
+don't want to use this. Instead, we use a gradient method, for which we
+compute updates according to the following scheme:
+@f{eqnarray*}
+  \left<\delta u_k,\varphi\right>
+  =
+  -I'(u_k, \varphi).
+@f}
+For the scalar product on the left hand side, there are multiple valid ways;
+we choose the mesh dependent definition $\left<u,v\right> = \int_\Omega (uv +
+h(x)^2 \nabla u\cdot \nabla v)\; dx$, where the weight $h(x)^2$, i.e. using
+the local mesh width, is chosen so that the definition is dimensionally
+consistent. It also yields a matrix on the left hand side that is simple to
+invert: it is the sum of the well-conditioned mass matrix, and a Laplace
+matrix times a factor that counters the growth of the condition number of the
+Laplace matrix.
+
+The step length $\alpha_k$ is then computed using a one-dimensional line search
+finding
+@f{eqnarray*}
+  \alpha_k = \arg\min_\alpha I(u_k+\alpha\delta u_k),
+@f}
+or at least an approximation to this using a one-dimensional Newton method
+which itself has a line search. The details of this can be found in the code.
+We iterate the updates and line searches until the change in energy $I(u_k)$
+becomes too small to warrant any further iterations.
+
+The basic idea you should take away from all this is that we formulate the
+optimization method in a function space, and only discretize each step
+separately. A number of subsequent steps will be done on the same mesh, before
+we refine it and go on to do the same on the next finer mesh.
+
+As for mesh refinement, it is instructive to recall how residual based error
+estimates like the one used in the Kelly et al. error estimator are usually
+derived (the Kelly estimator is the one that we have used in most of the
+previous example programs). 
In a similar way, by looking at the residual of
+the strong form of the nonlinear equation we attempt to solve here, we may be
+tempted to consider the following expression for refinement of cell $K$:
+@f{eqnarray*}
+  \eta_K^2 &=&
+  h^2 \left\|
+  (x-u_h^3) (u_h')^4 \left\{ u_h^2 (u_h')^2 + 5(x-u_h^3)u_h'' + 2u_h'(1-3u_h^2u_h') \right\}
+  \right\|^2_K
+  \\
+  &&
+  + h \left| (x-u_h^3)^2 [(u_h')^5] \right|^2_{\partial K},
+@f}
+where $[\cdot]$ is the jump of a quantity across an intercell boundary, and
+$|\cdot|_{\partial K}$ is the sum of the quantity evaluated at the two end
+points of a cell. Note that in the evaluation of the jump, we have made use of
+the fact that $x-u_h^3$ is a continuous quantity, and can therefore be taken
+out of the jump operator.
+
+All these details actually matter -- while writing the program I have played
+around with many settings and different versions of the code, and the result
+is that if you don't have a good line search, good stopping criteria, the
+right metric (scalar product) for the gradient method, good initial values,
+and a good refinement criterion, then the nonlinear solver gets stuck quite
+readily for this highly nonlinear problem. Initially, I was hardly able to
+find solutions for which the energy dropped below 0.005, while the energy
+after the final iteration of the program as it stands is usually around 0.0003,
+and occasionally down to less than 3e-5.
+
+However, this is not enough. In the program, we start the solver on the coarse
+mesh many times, with randomly perturbed starting values, and while it
+converges every time, it yields a different solution, with a different energy,
+every time. One can therefore not say that the solver converges to a certain
+energy, and we can't answer the question of what the smallest value of $I(u)$
+might be in $W^{1,\infty}$. This is unsatisfactory, but maybe to be expected
+for such a contrived and pathological problem. Consider it an example in
+programming with deal.II then, and not an example in solving this particular
+problem.
+
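+
+Before turning to the implementation, here is a sketch of one ingredient
+mentioned above: the matrix of the mesh dependent scalar product
+$\left<u,v\right> = \int_\Omega (uv + h(x)^2 \nabla u\cdot\nabla v)\,dx$.
+This is not the program's actual code; the variable names are the usual
+deal.II assembly boilerplate, assumed to be set up as in previous tutorial
+programs:
+@code
+  typename DoFHandler<dim>::active_cell_iterator cell;
+  for (cell=dof_handler.begin_active(); cell!=dof_handler.end(); ++cell)
+    {
+      fe_values.reinit (cell);
+      cell_matrix = 0;
+                                // use the cell diameter as the local
+                                // mesh width h(x) on this cell
+      const double h2 = cell->diameter() * cell->diameter();
+      for (unsigned int q=0; q<n_q_points; ++q)
+        for (unsigned int i=0; i<dofs_per_cell; ++i)
+          for (unsigned int j=0; j<dofs_per_cell; ++j)
+            cell_matrix(i,j) += (fe_values.shape_value(i,q) *
+                                 fe_values.shape_value(j,q)
+                                 +
+                                 h2 *
+                                 fe_values.shape_grad(i,q) *
+                                 fe_values.shape_grad(j,q)) *
+                                fe_values.JxW(q);
+                                // ...distribute cell_matrix into the
+                                // global matrix as usual...
+    }
+@endcode
+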

Implementation

+
+The program implements all the steps mentioned above, and we will discuss them
+in the commented code below. In general, however, note that formulating the
+gradient method in function spaces, and only discretizing afterwards, has
+consequences: we have to linearize around $u_k$ when we want to compute
+$\delta u_k$, and we have to sum up these two functions afterwards. However,
+they may be living on different grids, if we have refined the grid before this
+step, so we will have to present a way to actually get a function from one
+grid to another. The SolutionTransfer class will help us here. On the
+other hand, discretizing every nonlinear step separately has the advantage
+that we can do the initial steps, when we are still far away from the
+solution, on a coarse mesh, and only go on to more expensive computations when
+we home in on a solution. We will use a
+very simplistic strategy for when we refine the mesh (every fifth nonlinear
+step), though. Realistic programs solving nonlinear problems will have to be more
+clever in this respect, but it suffices for the purposes of this program.
+
+We will show some of the things that are really simple in 1d (but sometimes
+different from what we are used to in 2d or 3d). Apart from this, the program
+does not contain much new stuff, but if it explains a few of the techniques
+that are available for nonlinear problems and in particular 1d problems, then
+this is not so bad, after all.
+
+Note: As shown below, the program starts the nonlinear solver from 10 different
+initial values, and outputs the results. This is not actually too many, but we
+did so to keep run-time short (around one and a half minutes on my laptop). If
+you want to increase the number of realizations, you may want to switch to
+optimized mode (by setting the ``debug-mode'' flag in the Makefile to ``off''),
+and increase the number of realizations to a larger value. On the same machine
+as above, I can compute 100 realizations in optimized mode in about 2 minutes.
+For this particular program, the difference between debug and optimized mode is
+thus about a factor of 7-8, which is explained in small part by the fact that
+we ask the compiler to optimize the code only in the latter mode, but for the
+most part by the fact that in optimized mode all the ``Assert'' checks are
+thrown out that make sure that function arguments are correct, and that check
+the internal consistency of the library. The library contains several
+thousand of these checks, and they significantly slow down debug
+computations, but we feel that the benefit of finding programming errors
+earlier, and knowing where exactly the problem appeared, is of significantly
+greater value than faster run-time. After all, all production runs of programs
+should be done in optimized mode anyway.
+
+A slowdown of a factor of 7-8 is unusual, however. For 2d and 3d applications,
+a typical value is around 4.
diff --git a/deal.II/examples/step-15/doc/results.dox b/deal.II/examples/step-15/doc/results.dox
new file mode 100644
index 0000000000..48e0e10f1a
--- /dev/null
+++ b/deal.II/examples/step-15/doc/results.dox
@@ -0,0 +1,119 @@
+

Results

+
+
+If run, the program generates output like this:
+
+
+@code
+Realization 0:
+  Energy: 0.00377302
+  Energy: 0.00106138
+  Energy: 0.000514363
+  Energy: 0.000382105
+  Energy: 0.000339017
+  Energy: 0.000327948
+  Energy: 0.000320299
+  Energy: 0.000318016
+  Energy: 0.000316735
+  Energy: 0.000316536
+  Energy: 0.000316463
+  Energy: 0.000316285
+  Energy: 0.000316227
+  Energy: 0.000316221
+  Energy: 0.00031622
+
+Realization 1:
+  Energy: 0.00279316
+  Energy: 0.000896516
+  Energy: 0.000504609
+  Energy: 0.000392703
+  Energy: 0.000317725
+  Energy: 0.000291881
+  Energy: 0.000288243
+  Energy: 0.000283541
+  Energy: 0.000282406
+  Energy: 0.000281842
+  Energy: 0.000281752
+  Energy: 0.000281743
+  Energy: 0.000281743
+
+....
+
+Realization 9:
+  Energy: 0.0103729
+  Energy: 0.0082121
+  Energy: 0.00733742
+  Energy: 0.00728154
+  Energy: 0.00725198
+  Energy: 0.00724302
+  Energy: 0.00724019
+  Energy: 0.00723837
+  Energy: 0.00723783
+  Energy: 0.00723772
+  Energy: 0.00690564
+  Energy: 0.00690562
+@endcode
+
+
+The lowest energy we have seen yet occurred in this run (you only get it by
+increasing the number of runs):
+
+@code
+Realization 18:
+  Energy: 0.00200645
+  Energy: 0.000638519
+  Energy: 0.00022749
+  Energy: 9.18962e-05
+  Energy: 5.42442e-05
+  Energy: 3.94415e-05
+  Energy: 3.42307e-05
+  Energy: 3.30727e-05
+  Energy: 3.19998e-05
+  Energy: 3.18104e-05
+  Energy: 2.97091e-05
+  Energy: 3.5011e-05
+@endcode
+
+
+Apparently something went wrong in the last step (the energy increased, which
+it shouldn't - but then this is a strongly nonlinear problem), which is also
+why the program aborted after this iteration. Apart from this, the iterations
+shown above demonstrate that our program is indeed able to reduce the energy
+in the solution in each iteration, as it should.
+
+
+
+Since the program did not really deliver the goal we had originally intended
+for it (the computation of the minimal energy of finite element spaces), the
+graphical output is also not very exciting. Here are plots of five of the
+first 10 solutions (clicking on a picture gives the unscaled version of the
+image):
+
+
+@image html step-15.solution-1.png
+
+
+
+And here are the first 100 solutions, where each node in each solution is
+represented as a dot. As can be seen, all the solutions cluster somewhat
+around the $x^{1/3}$ curve, here shown in turquoise:
+
+
+@image html step-15.solution-2.png
+
+
+
+Note that this behavior is mostly independent of the choice of starting data
+(which we have chosen to be close to this curve), which a posteriori justifies
+our choice. Some of the curves actually show a linear behavior of the solution
+close to the origin; this is particularly obvious when the curves are viewed
+in a log-log plot (not shown here, but rather left as an exercise to the
+reader).
+
+
+
+Given the almost complete absence of interesting results from this program, we
+hope that at least its source code provided some information with respect to
+programming with deal.II.
+
diff --git a/deal.II/examples/step-16/doc/intro.dox b/deal.II/examples/step-16/doc/intro.dox
new file mode 100644
index 0000000000..750048aaef
--- /dev/null
+++ b/deal.II/examples/step-16/doc/intro.dox
@@ -0,0 +1,35 @@
+

Introduction

+
+
+This example shows the basic usage of the multilevel functions in
+deal.II. It solves the Helmholtz equation with Neumann boundary conditions
+to avoid additional complications due to Dirichlet boundary conditions (for
+those, some library functions are still missing). Therefore, the solution is
+the constant function with value unity. In all other respects, it is similar
+to step-5.
+
+
+In order to allow sufficient flexibility in conjunction with systems of
+differential equations and block preconditioners, quite a few different objects
+have to be created before starting the multilevel method. These are
    +
  • MGTransfer, the object handling transfer between grids +
  • MGCoarse, the solver on the coarsest level +
  • MGSmoother, the smoother on all other levels +
• MGMatrix, the matrix object having a special level multiplication, i.e. we
+basically store one matrix per grid level and allow multiplication with it
+(see the sketch after this list).
+
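+
+In code, and assuming class names from a more recent version of the library
+than the one this page was written for (details will differ), combining these
+pieces might look roughly like this:
+@code
+  Multigrid<Vector<double> > mg (mg_matrix, mg_coarse, mg_transfer,
+                                 mg_smoother, mg_smoother);
+  PreconditionMG<dim, Vector<double>, MGTransferPrebuilt<Vector<double> > >
+    preconditioner (dof_handler, mg, mg_transfer);
+
+  SolverCG<Vector<double> > cg (solver_control);
+  cg.solve (system_matrix, solution, system_rhs, preconditioner);
+@endcode
+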
+
+
+
+These objects are combined in an object of type Multigrid, containing the
+implementation of the V-cycle, which is in turn used by the preconditioner
+PreconditionMG, ready to be plugged into a linear solver of the LAC library.
+
+
+
+The multilevel method in deal.II follows in many respects the outlines
+of the various publications by James Bramble, Joseph Pasciak and Jinchao Xu. In
+order to understand many of the options, a rough familiarity with their work is
+quite helpful.
+
diff --git a/deal.II/examples/step-16/doc/results.dox b/deal.II/examples/step-16/doc/results.dox
new file mode 100644
index 0000000000..e67fccc1c1
--- /dev/null
+++ b/deal.II/examples/step-16/doc/results.dox
@@ -0,0 +1,2 @@
+

Results

diff --git a/deal.II/examples/step-17/doc/intro.dox b/deal.II/examples/step-17/doc/intro.dox new file mode 100644 index 0000000000..db168de9c3 --- /dev/null +++ b/deal.II/examples/step-17/doc/intro.dox @@ -0,0 +1,73 @@ + +

Introduction

+
+
+This program does not introduce any new mathematical ideas; in fact, all it
+does is to do the exact same computations that @ref step_8 "step-8"
+already does, but it does so in a different manner: instead of using deal.II's
+own linear algebra classes, we build everything on top of classes deal.II
+provides that wrap around the linear algebra implementation of the PETSc library. And
+since PETSc allows one to distribute matrices and vectors across several computers
+within an MPI network, the resulting code will even be able to solve the
+problem in parallel. If you don't know what PETSc is, then this would be a
+good time to take a quick glimpse at their homepage.
+
+
+
+As a prerequisite of this program, you need to have PETSc installed, and if
+you want to run in parallel on a cluster, you also need METIS to partition meshes. The installation of deal.II
+together with these two additional libraries is described in the README file.
+
+
+
+Now, for the details: as mentioned, the program does not compute anything new,
+so the use of finite element classes etc. is exactly the same as before. The
+difference to previous programs is that we have replaced almost all uses of
+the classes Vector and SparseMatrix by their
+near-equivalents PETScWrappers::Vector and
+PETScWrappers::SparseMatrix (for sequential vectors and matrices,
+i.e. objects for which all elements are stored locally on one machine), and by
+PETScWrappers::MPI::Vector and
+PETScWrappers::MPI::SparseMatrix for versions of these classes
+where only a part of the matrix or vector is stored on each machine within an
+MPI network. These classes provide an interface that is very similar to that
+of the deal.II linear algebra classes, but instead of implementing this
+functionality themselves, they simply pass on to their corresponding PETSc
+functions. The wrappers are therefore only used to give PETSc a more modern,
+object oriented interface, and to make the use of PETSc and deal.II objects as
+interchangeable as possible.
+
+
+
+While the sequential PETSc wrapper classes do not have any advantage over
+their deal.II counterparts, the main point of using PETSc is that it can run
+in parallel. We will make use of this by partitioning the domain into as many
+blocks (``subdomains'') as there are processes in the MPI network. At the same
+time, PETSc provides dummy MPI stubs that allow running the same program on a
+single machine if so desired, without any changes.
+
+
+
+Note, however, that the only data structures we parallelize are matrices and
+vectors. In particular, we do not split up the Triangulation and
+DoFHandler classes: each process still has a complete copy of
+these objects, and all processes have exact copies of what the other processes
+have. Parallelizing the data structures used in hierarchic and unstructured
+triangulations is a very hard problem, and we do not attempt to do so at
+present. It would also require changing many more aspects of the application
+program, since for example loops over all cells could then only include
+locally available cells. We thus went for the path of least resistance and
+only parallelized the linear algebra part.
+
+
+
+The techniques this program demonstrates are: how to use the PETSc wrapper
+classes; how to parallelize operations for jobs running on an MPI network; and
+how to partition the domain into subdomains to parallelize the work. Since
+all this can only be demonstrated using actual code, let us go straight to the
+code without much further ado. 
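+
+As a taste of what this substitution looks like in practice (illustrative,
+not the program's exact declarations): where a sequential program would
+declare
+@code
+  Vector<double>       solution;
+  SparseMatrix<double> system_matrix;
+@endcode
+the parallel program instead declares
+@code
+  PETScWrappers::MPI::Vector       solution;
+  PETScWrappers::MPI::SparseMatrix system_matrix;
+@endcode
+with assembly and solver calls remaining almost unchanged.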
+ diff --git a/deal.II/examples/step-17/doc/results.dox b/deal.II/examples/step-17/doc/results.dox new file mode 100644 index 0000000000..fbcf5be57c --- /dev/null +++ b/deal.II/examples/step-17/doc/results.dox @@ -0,0 +1,208 @@ + +

Results

+ + +If the program above is compiled and run on a single processor machine, it +should generate results that are very similar to those that we already got +with step-8. However, it becomes more interesting if we run it on a cluster of +computers. Most clusters have some kind of scheduling system, all of which +have different calling syntaxes - on my system, I have to use the command +bsub with a whole host of options to run a job in parallel - so +that the exact command line syntax varies. If you have found out how to run a +job on your system, you should get output like this for a job on 8 processors, +and with a few more refinement cycles than in the code above: +@code +Cycle 0: + Number of active cells: 64 + Number of degrees of freedom: 162 (by partition: 22+22+20+20+18+16+20+24) + Solver converged in 23 iterations. +Cycle 1: + Number of active cells: 124 + Number of degrees of freedom: 302 (by partition: 38+42+36+34+44+44+36+28) + Solver converged in 35 iterations. +Cycle 2: + Number of active cells: 238 + Number of degrees of freedom: 570 (by partition: 68+80+66+74+58+68+78+78) + Solver converged in 46 iterations. +Cycle 3: + Number of active cells: 454 + Number of degrees of freedom: 1046 (by partition: 120+134+124+130+154+138+122+124) + Solver converged in 55 iterations. +Cycle 4: + Number of active cells: 868 + Number of degrees of freedom: 1926 (by partition: 232+276+214+248+230+224+234+268) + Solver converged in 77 iterations. +Cycle 5: + Number of active cells: 1654 + Number of degrees of freedom: 3550 (by partition: 418+466+432+470+442+474+424+424) + Solver converged in 93 iterations. +Cycle 6: + Number of active cells: 3136 + Number of degrees of freedom: 6702 (by partition: 838+796+828+892+866+798+878+806) + Solver converged in 127 iterations. +Cycle 7: + Number of active cells: 5962 + Number of degrees of freedom: 12446 (by partition: 1586+1484+1652+1552+1556+1576+1560+1480) + Solver converged in 158 iterations. +Cycle 8: + Number of active cells: 11320 + Number of degrees of freedom: 23586 (by partition: 2988+2924+2890+2868+2864+3042+2932+3078) + Solver converged in 225 iterations. +Cycle 9: + Number of active cells: 21424 + Number of degrees of freedom: 43986 (by partition: 5470+5376+5642+5450+5630+5470+5416+5532) + Solver converged in 282 iterations. +Cycle 10: + Number of active cells: 40696 + Number of degrees of freedom: 83754 (by partition: 10660+10606+10364+10258+10354+10322+10586+10604) + Solver converged in 392 iterations. +Cycle 11: + Number of active cells: 76978 + Number of degrees of freedom: 156490 (by partition: 19516+20148+19390+19390+19336+19450+19730+19530) + Solver converged in 509 iterations. +Cycle 12: + Number of active cells: 146206 + Number of degrees of freedom: 297994 (by partition: 37462+37780+37000+37060+37232+37328+36860+37272) + Solver converged in 705 iterations. +Cycle 13: + Number of active cells: 276184 + Number of degrees of freedom: 558766 (by partition: 69206+69404+69882+71266+70348+69616+69796+69248) + Solver converged in 945 iterations. +Cycle 14: + Number of active cells: 523000 + Number of degrees of freedom: 1060258 (by partition: 132928+132296+131626+132172+132170+133588+132252+133226) + Solver converged in 1282 iterations. +Cycle 15: + Number of active cells: 987394 + Number of degrees of freedom: 1994226 (by partition: 253276+249068+247430+248402+248496+251380+248272+247902) + Solver converged in 1760 iterations. 
+Cycle 16:
+   Number of active cells: 1867477
+   Number of degrees of freedom: 3771884 (by partition: 468452+474204+470818+470884+469960+471186+470686+475694)
+   Solver converged in 2251 iterations.
+@endcode
+
+
+
+As can be seen, we can easily get to almost four million unknowns. In fact, the
+code's runtime with 8 processes was less than 7 minutes up to (and including)
+cycle 14, and 14 minutes including the second-to-last step. I lost the timing
+information for the last step, but you get the idea. All this was with the
+debug flag in the Makefile set to "off", i.e. "optimized", and
+with the generation of graphical output switched off for the reasons stated in
+the program comments above. The biggest 2d computations we did had roughly 7.1
+million unknowns, and were done on 32 processes. It took about 40 minutes.
+Not surprisingly, the limiting factor for how far one can go is how much memory
+one has, since every process has to hold the entire mesh and DoFHandler objects,
+although matrices and vectors are split up. For the 7.1M computation, the memory
+consumption was about 600 bytes per unknown, which is not bad, but one has to
+consider that this is for every unknown, whether we store the matrix and vector
+entries locally or not.
+
+
+
+Here is some output generated in the 12th cycle of the program, i.e. with roughly
+300,000 unknowns:
+
+
+@image html step-17.12-ux.png
+@image html step-17.12-uy.png
+
+
+
+As one would hope for, the x- (left) and y-displacements (right) shown here
+closely match what we already saw in step-8. What may be more interesting,
+though, is to look at the mesh and partition at this step (to see the picture
+in its original size, simply click on it):
+
+
+@image html step-17.12-grid.png
+@image html step-17.12-partition.png
+
+
+Again, the mesh (left) shows the same refinement pattern as seen
+previously. The right panel shows the partitioning of the domain across the 8
+processes, each indicated by a different color. The picture shows that the
+subdomains are smaller where mesh cells are small, a fact that is to be
+expected given that the partitioning algorithm tries to equilibrate the number
+of cells in each subdomain; this equilibration is also easily identified in
+the output shown above, where the number of degrees of freedom per subdomain is
+roughly the same.
+
+
+
+It is worth noting that if we ran the same program with a different number of
+processes, we would likely get slightly different output: a different
+mesh, a different number of unknowns and iterations to convergence. The reason
+for this is that while the matrix and right hand side are the same independent
+of the number of processes used, the preconditioner is not: it performs an
+ILU(0) on the chunk of the matrix stored on each processor separately. Thus,
+its effectiveness as a preconditioner diminishes as the number of processes
+increases, which makes the number of iterations increase. Since a different
+preconditioner leads to slight changes in the computed solution, this will
+then lead to slightly different mesh cells tagged for refinement, and larger
+differences in subsequent steps. The solution will always look very similar,
+though.
+
+
+
+Finally, here are some results for a 3d simulation. You can repeat these by
+first changing
+@code
+  ElasticProblem<2> elastic_problem;
+@endcode
+to
+@code
+  ElasticProblem<3> elastic_problem;
+@endcode
+in the main function, and then in the Makefile, change the reference to the 2d
+libraries to their 3d counterparts. 
If you then run the program in parallel,
+you get something similar to this (this is for a job with 16 processes):
+@code
+Cycle 0:
+   Number of active cells:       512
+   Number of degrees of freedom: 2187 (by partition: 114+156+150+114+114+210+105+102+120+120+96+123+141+183+156+183)
+   Solver converged in 27 iterations.
+Cycle 1:
+   Number of active cells:       1604
+   Number of degrees of freedom: 6549 (by partition: 393+291+342+354+414+417+570+366+444+288+543+525+345+387+489+381)
+   Solver converged in 42 iterations.
+Cycle 2:
+   Number of active cells:       4992
+   Number of degrees of freedom: 19167 (by partition: 1428+1266+1095+1005+1455+1257+1410+1041+1320+1380+1080+1050+963+1005+1188+1224)
+   Solver converged in 65 iterations.
+Cycle 3:
+   Number of active cells:       15485
+   Number of degrees of freedom: 56760 (by partition: 3099+3714+3384+3147+4332+3858+3615+3117+3027+3888+3942+3276+4149+3519+3030+3663)
+   Solver converged in 96 iterations.
+Cycle 4:
+   Number of active cells:       48014
+   Number of degrees of freedom: 168762 (by partition: 11043+10752+9846+10752+9918+10584+10545+11433+12393+11289+10488+9885+10056+9771+11031+8976)
+   Solver converged in 132 iterations.
+Cycle 5:
+   Number of active cells:       148828
+   Number of degrees of freedom: 492303 (by partition: 31359+30588+34638+32244+30984+28902+33297+31569+29778+29694+28482+28032+32283+30702+31491+28260)
+   Solver converged in 179 iterations.
+Cycle 6:
+   Number of active cells:       461392
+   Number of degrees of freedom: 1497951 (by partition: 103587+100827+97611+93726+93429+88074+95892+88296+96882+93000+87864+90915+92232+86931+98091+90594)
+   Solver converged in 261 iterations.
+@endcode
+
+
+
+The last step, going up to 1.5 million unknowns, takes about 55 minutes with
+16 processes on 8 dual-processor machines. The graphical output generated by
+this job is rather large (cycle 5 already prints around 82 MB of GMV data), so
+we content ourselves with showing output from cycle 4 (again, clicking on the
+picture gives a version in original size):
+
+
+@image html step-17.4-3d-partition.png
+@image html step-17.4-3d-ux.png
+
+
+The left picture shows the partitioning of the cube into 16 processes, whereas
+the right one shows the x-displacement along two cut planes through the cube.
+
diff --git a/deal.II/examples/step-18/doc/intro.html b/deal.II/examples/step-18/doc/intro.html
new file mode 100644
index 0000000000..22a9f287ba
--- /dev/null
+++ b/deal.II/examples/step-18/doc/intro.html
@@ -0,0 +1,1538 @@
+
+

Introduction

+ +

+[A higher quality version of the introduction is available as a PDF +file by clicking here] +

+ + +

+This tutorial program is another one in the series on the elasticity problem
+that we have already started with step-8 and step-17. It extends the topic in
+two different directions: first, it solves the quasistatic but time dependent
+elasticity problem for large deformations with a Lagrangian mesh movement
+approach. Second, it shows some more techniques for solving such problems
+using parallel processing with PETSc's linear algebra. In addition to this, we
+show how to work around the main bottleneck of step-17, namely that we
+generated graphical output from only one process, and that this scaled very
+badly with larger numbers of processes and on large problems. Finally, a good
+number of assorted improvements and techniques are demonstrated that have not
+been shown yet in previous programs.
+

+As before in step-17, the program runs just fine on a single sequential
+machine as long as you have PETSc installed. Information on how to tell
+deal.II about a PETSc installation on your system can be found in the deal.II
+README file, which is linked to from the main documentation page
+doc/index.html in your installation of deal.II, or on the deal.II
+webpage http://www.dealii.org/.
+

+ +

+Quasistatic elastic deformation +

+ +

+ +

+Motivation of the model +

+ +

+In general, time-dependent small elastic deformations are described by the +elastic wave equation +

+\begin{gather}
+  \rho \frac{\partial^2 \vec u}{\partial t^2}
+  + c \frac{\partial \vec u}{\partial t}
+  - \div ( C \varepsilon(\vec u)) = \vec f
+  \qquad \text{in $\Omega$}, \tag{1}
+\end{gather}

+where $\vec u=\vec u (\vec x,t)$ is the deformation of the body, $\rho$
+and $c$ the density and attenuation coefficient, and $\vec f$ external forces.
+In addition, initial conditions

+\begin{gather}
+  \vec u(\cdot, 0) = \vec u_0(\cdot)
+  \qquad \text{on $\Omega$}, \tag{2}
+\end{gather}

+and Dirichlet (displacement) or Neumann (traction) boundary conditions need +to be specified for a unique solution: +

+\begin{align}
+  \vec u(\vec x,t) &= \vec d(\vec x,t)
+    && \text{on $\Gamma_D\subset\partial\Omega$}, \tag{3}\\
+  \vec n \ C \varepsilon(\vec u(\vec x,t)) &= \vec b(\vec x,t)
+    && \text{on $\Gamma_N=\partial\Omega\backslash\Gamma_D$}. \tag{4}
+\end{align}

+In the above formulation, $\varepsilon(\vec u)= \tfrac 12 (\nabla \vec u + \nabla
+\vec u^T)$ is the symmetric gradient of the displacement, also called the
+strain. $C$ is a tensor of rank 4, called the stress-strain
+tensor, that contains knowledge of the elastic strength of the material; its
+symmetry properties make sure that it maps symmetric tensors of rank 2
+(``matrices'' of dimension $d$, where $d$ is the spatial dimensionality) onto
+symmetric tensors of the same rank. We will comment on the roles of the strain
+and stress tensors more below. For the moment it suffices to say that we
+interpret the term $\div ( C \varepsilon(\vec u))$ as the vector with
+components $\tfrac \partial{\partial x_j} C_{ijkl} \varepsilon(\vec u)_{kl}$,
+where summation over indices $j,k,l$ is implied.
+

+The quasistatic limit of this equation is motivated as follows: each small
+perturbation of the body, for example by changes in boundary condition or the
+forcing function, will result in a corresponding change in the configuration
+of the body. In general, this will be in the form of waves radiating away from
+the location of the disturbance. Due to the presence of the damping term,
+these waves will be attenuated on a time scale of, say, $\tau$. Now, assume
+that all changes in external forcing happen on time scales that are
+much larger than $\tau$. In that case, the dynamic nature of the change is
+unimportant: we can consider the body to always be in static equilibrium,
+i.e. we can assume that at all times the body satisfies

+\begin{align}
+  - \div ( C \varepsilon(\vec u)) &= \vec f
+    && \text{in $\Omega$}, \tag{5}\\
+  \vec u(\vec x,t) &= \vec d(\vec x,t)
+    && \text{on $\Gamma_D$}, \tag{6}\\
+  \vec n \ C \varepsilon(\vec u(\vec x,t)) &= \vec b(\vec x,t)
+    && \text{on $\Gamma_N$}. \tag{7}
+\end{align}

+Note that the differential equation does not contain any time derivatives any
+more - all time dependence is introduced through boundary conditions and a
+possibly time-varying force function $\vec f(\vec x,t)$. The changes in
+configuration can therefore be considered as being stationary
+instantaneously. An alternative view of this is that $t$ is not really a time
+variable, but only a time-like parameter that governs the evolution of the
+problem.
+

+While these equations are sufficient to describe small deformations, computing
+large deformations is a little more complicated. To do so, let us first
+introduce a tensorial stress variable $\sigma$, and write the differential
+equations in terms of the stress:

+\begin{align}
+  - \div\sigma &= \vec f
+    && \text{in $\Omega(t)$}, \tag{8}\\
+  \vec u(\vec x,t) &= \vec d(\vec x,t)
+    && \text{on $\Gamma_D\subset\partial\Omega(t)$}, \tag{9}\\
+  \vec n \ C \varepsilon(\vec u(\vec x,t)) &= \vec b(\vec x,t)
+    && \text{on $\Gamma_N=\partial\Omega(t)\backslash\Gamma_D$}. \tag{10}
+\end{align}

+Note that these equations are posed on a domain $\Omega(t)$ that
+changes with time, with the boundary moving according to the
+displacements $\vec u(\vec x,t)$ of the points on the boundary. To
+complete this system, we have to specify the incremental relationship between
+the stress and the strain, as follows:

+\begin{gather}
+  \dot\sigma = C \varepsilon (\dot{\vec u}), \tag{11}
+\end{gather}

+where a dot indicates a time derivative. Both the stress $\sigma$ and the
+strain $\varepsilon(\vec u)$ are symmetric tensors of rank 2.
+

+ +

+Time discretization +

+ +

+Numerically, this system is solved as follows: first, we discretize
+the time component using a backward Euler scheme. This leads to a
+discrete equilibrium of force at time step $n$:

+\begin{gather}
+  -\div\sigma^n = f^n, \tag{12}
+\end{gather}

+where +

+\begin{gather}
+  \sigma^n = \sigma^{n-1} + C \varepsilon (\Delta \vec u^n), \tag{13}
+\end{gather}

+and $\Delta \vec u^n$ the incremental displacement for time step
+$n$. In addition, we have to specify initial data $\vec u(\cdot,0)=\vec u_0$.
+This way, if we want to solve for the displacement increment, we
+have to solve the following system:

+\begin{align}
+  - \div C \varepsilon(\Delta\vec u^n) &= \vec f + \div\sigma^{n-1}
+    && \text{in $\Omega(t_{n-1})$}, \tag{14}\\
+  \Delta \vec u^n(\vec x,t) &= \vec d(\vec x,t_n) - \vec d(\vec x,t_{n-1})
+    && \text{on $\Gamma_D\subset\partial\Omega(t_{n-1})$}, \tag{15}\\
+  \vec n \ C \varepsilon(\Delta \vec u^n(\vec x,t)) &= \vec b(\vec x,t_n)-\vec b(\vec x,t_{n-1})
+    && \text{on $\Gamma_N=\partial\Omega(t_{n-1})\backslash\Gamma_D$}. \tag{16}
+\end{align}
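+
+To see where the right hand side of (14) comes from, simply substitute the
+stress update (13) into the force balance (12):
+\begin{gather*}
+  -\div\left(\sigma^{n-1} + C \varepsilon (\Delta \vec u^n)\right) = \vec f^n
+  \quad\Longleftrightarrow\quad
+  -\div C \varepsilon(\Delta\vec u^n) = \vec f^n + \div\sigma^{n-1}.
+\end{gather*}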

+The weak form of this set of equations, which as usual is the basis for the
+finite element formulation, reads as follows: find $\Delta \vec u^n \in
+\{v\in H^1(\Omega(t_{n-1}))^d: v|_{\Gamma_D}=\vec d(\cdot,t_n) - \vec d(\cdot,t_{n-1})\}$
+such that

+\begin{gather}
+  \begin{split}
+  (C \varepsilon(\Delta\vec u^n), \varepsilon(\varphi) )_{\Omega(t_{n-1})}
+  =
+  (\vec f, \varphi)_{\Omega(t_{n-1})}
+  -(\sigma^{n-1},\varepsilon(\varphi))_{\Omega(t_{n-1})}
+  \\
+  +(\vec b(\vec x,t_n)-\vec b(\vec x,t_{n-1}), \varphi)_{\Gamma_N}
+  \\
+  \forall \varphi \in \{\vec v\in H^1(\Omega(t_{n-1}))^d: \vec v|_{\Gamma_D}=0\}.
+  \end{split} \tag{17}
+\end{gather}

+We note that, for simplicity, in the program we will always assume that there
+are no boundary forces, i.e. $\vec b = 0$, and that the deformation of the
+body is driven by body forces $\vec f$ and prescribed boundary displacements
+$\vec d$ alone. It is also worth noting that when integrating by parts, we
+would get terms of the form $(C \varepsilon(\Delta\vec u^n), \nabla \varphi
+)_{\Omega(t_{n-1})}$, but that we replace them with the term involving the
+symmetric gradient $\varepsilon(\varphi)$ instead of $\nabla\varphi$. Due to
+the symmetry of $C$, the two terms are equivalent, but the symmetric version
+avoids a potential for round-off to render the resulting matrix slightly
+non-symmetric.
+
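+
+The equivalence of the two terms rests on a one-line computation: any
+symmetric tensor $S$ of rank 2 (here $S = C\varepsilon(\Delta\vec u^n)$)
+annihilates the skew-symmetric part of $\nabla\varphi$ under contraction,
+so that
+\begin{gather*}
+  S : \nabla\varphi
+  = S : \tfrac 12 (\nabla\varphi + \nabla\varphi^T)
+  = S : \varepsilon(\varphi).
+\end{gather*}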

+The system at time step $n$, to be solved on the old domain
+$\Omega(t_{n-1})$, has exactly the form of a stationary elastic
+problem, and is therefore similar to what we have already implemented
+in previous example programs. We will therefore not comment on the
+space discretization beyond saying that we again use lowest order
+continuous finite elements.
+

+There are differences, however: + +

+  1. We have to move (update) the mesh after each time step, in order to be
+     able to solve the next time step on a new domain;
+
+  2. We need to know $\sigma^{n-1}$ to compute the next incremental
+     displacement, i.e. we need to compute it at the end of the time step
+     to make sure it is available for the next time step. Essentially,
+     the stress variable is our window to the history of deformation of
+     the body.
+These two operations are done in the functions move_mesh and +update_quadrature_point_history in the program. While moving +the mesh is only a technicality, updating the stress is a little more +complicated and will be discussed in the next section. + +
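+
+To illustrate what ``moving the mesh'' amounts to in deal.II, here is a
+minimal sketch of such a function. It assumes the usual member variables
+(a Triangulation, a DoFHandler, and a solution vector called
+incremental_displacement here); names and details are illustrative and
+need not match the program text exactly:
+
+template <int dim>
+void TopLevel<dim>::move_mesh ()
+{
+  // Every vertex is shared between adjacent cells, so make sure
+  // we move each one only once:
+  std::vector<bool> vertex_touched (triangulation.n_vertices(), false);
+
+  for (typename DoFHandler<dim>::active_cell_iterator
+         cell = dof_handler.begin_active();
+       cell != dof_handler.end(); ++cell)
+    for (unsigned int v=0; v<GeometryInfo<dim>::vertices_per_cell; ++v)
+      if (vertex_touched[cell->vertex_index(v)] == false)
+        {
+          vertex_touched[cell->vertex_index(v)] = true;
+
+          // Evaluate the displacement increment at this vertex and
+          // shift the vertex by that amount:
+          Point<dim> vertex_displacement;
+          for (unsigned int d=0; d<dim; ++d)
+            vertex_displacement[d]
+              = incremental_displacement(cell->vertex_dof_index(v,d));
+          cell->vertex(v) += vertex_displacement;
+        }
+}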

+ +

+Updating the stress variable +

+ +

+As indicated above, we need to have the stress variable $\sigma^n$ available
+when computing time step $n+1$, and we can compute it using

+\begin{gather}
+  \sigma^n = \sigma^{n-1} + C \varepsilon (\Delta \vec u^n). \tag{18}
+\end{gather}

+There are, despite the apparent simplicity of this equation, two questions
+that we need to discuss. The first concerns the way we store $\sigma^n$: even
+if we compute the incremental updates $\Delta\vec u^n$ using lowest-order
+finite elements, its symmetric gradient $\varepsilon(\Delta\vec u^n)$ is
+in general still a function that is not easy to describe. In particular, it is
+not a piecewise constant function, and on general meshes (with cells that are
+not rectangles parallel to the coordinate axes) or with non-constant
+stress-strain tensors $C$ it is not even a bi- or trilinear function. Thus, it
+is a priori not clear how to store $\sigma^n$ in a computer program.
+

+To decide this, we have to see where it is used. The only place where we
+require the stress is in the term
+$(\sigma^{n-1},\varepsilon(\varphi))_{\Omega(t_{n-1})}$. In practice, we of
+course replace this term by numerical quadrature:

+\begin{gather}
+  (\sigma^{n-1},\varepsilon(\varphi))_{\Omega(t_{n-1})}
+  =
+  \sum_{K\subset {\mathbb{T}}}
+  (\sigma^{n-1},\varepsilon(\varphi))_K
+  \approx
+  \sum_{K\subset {\mathbb{T}}}
+  \sum_q
+  w_q \ \sigma^{n-1}(\vec x_q) : \varepsilon(\varphi(\vec x_q)), \tag{19}
+\end{gather}

+where $w_q$ are the quadrature weights and $\vec x_q$ the quadrature points on
+cell $K$. This should make clear that what we really need is not the stress
+$\sigma^{n-1}$ in itself, but only the values of the stress in the quadrature
+points on all cells. This, however, is a simpler task: we only have to provide
+a data structure that is able to hold one symmetric tensor of rank 2 for each
+quadrature point on all cells (or, since we compute in parallel, all
+quadrature points of all cells that the present MPI process ``owns''). At the
+end of each time step we then only have to evaluate $\varepsilon(\Delta \vec
+u^n(\vec x_q))$, multiply it by the stress-strain tensor $C$, and use the
+result to update the stress $\sigma^n(\vec x_q)$ at quadrature point $q$.
+
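+
+In code, such a data structure can be as small as the following sketch.
+The name PointHistory matches the structure the program uses; the
+allocation line is illustrative and assumes a triangulation and a
+quadrature formula object are in scope:
+
+template <int dim>
+struct PointHistory
+{
+  // The only history we carry along: the stress at this quadrature
+  // point at the end of the previous time step.
+  SymmetricTensor<2,dim> old_stress;
+};
+
+// One entry per quadrature point of every cell (in the parallel case,
+// one would restrict this to the cells the present process owns):
+std::vector<PointHistory<dim> > quadrature_point_history
+  (triangulation.n_active_cells() * quadrature_formula.size());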

+The second complication is not visible in our notation as chosen above. It is
+due to the fact that we compute $\Delta u^n$ on the domain $\Omega(t_{n-1})$,
+and then use this displacement increment to both update the stress as well as
+move the mesh nodes around to get to $\Omega(t_n)$ on which the next increment
+is computed. What we have to make sure, in this context, is that moving the
+mesh does not only involve moving around the nodes, but also making
+corresponding changes to the stress variable: the updated stress is a variable
+that is defined with respect to the coordinate system of the material in the
+old domain, and has to be transferred to the new domain. The reason for this
+can be understood as follows: locally, the incremental deformation $\Delta\vec u$
+can be decomposed into three parts: a linear translation (the constant part
+of the displacement increment field in the neighborhood of a point), a
+dilational component (that part of the gradient of the displacement field that
+has a nonzero divergence), and a rotation. A linear translation of the material
+does not affect the stresses that are frozen into it - the stress values are
+simply translated along. The dilational or compressional change produces a
+corresponding stress update. However, the rotational component does not
+necessarily induce a nonzero stress update (think, in 2d, for example of the
+situation where $\Delta\vec u=(y, -x)^T$, with which $\varepsilon(\Delta \vec u)=0$).
+Nevertheless, if the material was pre-stressed in a certain
+direction, then this direction will be rotated along with the material. To
+this end, we have to define a rotation matrix $R(\Delta \vec u^n)$ that
+describes, at each point, the rotation due to the displacement increments. It
+is not hard to see that the actual dependence of $R$ on $\Delta \vec u^n$ can
+only be through the curl of the displacement, rather than the displacement
+itself or its full gradient (as mentioned above, the constant components of
+the increment describe translations, its divergence the dilational modes, and
+the curl the rotational modes). Since the exact form of $R$ is cumbersome, we
+only state it in the program code, and note that the correct updating formula
+for the stress variable is then

+\begin{gather}
+  \sigma^n
+  =
+  R(\Delta \vec u^n)^T
+  [\sigma^{n-1} + C \varepsilon (\Delta \vec u^n)]
+  R(\Delta \vec u^n). \tag{20}
+\end{gather}

+ +

+Both stress update and rotation are implemented in the function +update_quadrature_point_history of the example program. + +
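+
+For illustration, here is a sketch of how such a rotation matrix could be
+computed in 2d from the curl of the displacement increment. The conversion
+of the curl into an angle via the arc tangent mirrors the spirit of the
+program's function, but the exact formula stated there may differ in
+details such as signs and scaling:
+
+Tensor<2,2>
+get_rotation_matrix (const std::vector<Tensor<1,2> > &grad_u)
+{
+  // In 2d, the rotational part of the incremental deformation is
+  // measured by the scalar curl grad_u[1][0] - grad_u[0][1]; convert
+  // it into a rotation angle:
+  const double curl  = grad_u[1][0] - grad_u[0][1];
+  const double angle = std::atan (curl);
+
+  // Assemble the corresponding 2d rotation matrix:
+  Tensor<2,2> rotation;
+  rotation[0][0] =  std::cos (angle);
+  rotation[0][1] =  std::sin (angle);
+  rotation[1][0] = -std::sin (angle);
+  rotation[1][1] =  std::cos (angle);
+  return rotation;
+}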

+ +

+Parallel graphical output +

+ +

+In the step-17 example program, the main bottleneck for parallel computations +was that only the first processor generated output for the entire domain. +Since generating graphical output is expensive, this did not scale well when +large numbers of processors were involved. However, no viable ways around this +problem were implemented in the library at the time, and the problem was +deferred to a later version. + +

+This functionality has been implemented in the meantime, and this is the time
+to explain its use. Basically, what we need to do is let every process
+generate graphical output for that subset of cells that it owns, write them
+into separate files and have a way to merge them later on. At this point, it
+should be noted that none of the graphical output formats known to the author
+of this program allows for a simple way to later re-read it and merge it with
+other files corresponding to the same simulation. What deal.II therefore
+offers is the following: When you call the DataOut::build_patches
+function, an intermediate format is generated that contains all the
+information for the data on each cell. Usually, this intermediate format is
+then further processed and converted into one of the graphical formats that we
+can presently write, such as gmv, eps, ucd, gnuplot, or a number of other
+ones. Once written in these formats, there is no way to reconstruct the
+necessary information to merge multiple blocks of output. However, the base
+classes of DataOut also allow simply dumping the intermediate format
+to a file, from which it can later be recovered without loss of information.
+

+This has two advantages: first, simulations may just dump the intermediate +format data during run-time, and the user may later decide which particular +graphics format she wants to have. This way, she does not have to re-run the +entire simulation if graphical output is requested in a different format. One +typical case is that one would like to take a quick look at the data with +gnuplot, and then create high-quality pictures using GMV or OpenDX. Since both +can be generated out of the intermediate format without problem, there is no +need to re-run the simulation. + +

+In the present context, of more interest is the fact that in contrast to any +of the other formats, it is simple to merge multiple files of intermediate +format, if they belong to the same simulation. This is what we will do here: +we will generate one output file in intermediate format for each processor +that belongs to this computation (in the sequential case, this will simply be +a single file). They may then later be read in and merged so that we can +output a single file in whatever graphical format is requested. + +

+The way to do this is to first instruct the DataOutBase class to +write intermediate format rather than in gmv or any other graphical +format. This is simple: just use +data_out.write_deal_II_intermediate. We will write to a file +called solution-TTTT.TTTT.d2 if there is only one processor, or +files solution-TTTT.TTTT.NNN.d2 if this is really a parallel +job. Here, TTTT.TTTT denotes the time for which this output has +been generated, and NNN the number of the MPI process that did this. + +
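+
+In code, this part of output_results can look like the following sketch.
+The file name construction implements the naming scheme just described;
+present_time and this_mpi_process are assumed to be members of the
+enclosing class, and the exact formatting is illustrative:
+
+// Build a name of the form solution-TTTT.TTTT.NNN.d2 (in a purely
+// sequential run one would drop the .NNN part):
+char filename[64];
+std::sprintf (filename, "solution-%09.4f.%03d.d2",
+              present_time,
+              static_cast<int>(this_mpi_process));
+
+// Dump the intermediate format representation of the data; it can
+// later be converted to gmv, eps, etc., for example by step-19:
+std::ofstream output (filename);
+data_out.write_deal_II_intermediate (output);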

+The next step is to convert this file or these files into whatever +format you like. The program that does this is the step-19 tutorial program: +for example, for the first time step, call it through +

+../step-19/step-19 solution-0001.0000.*.d2 solution-0001.0000.gmv + +
+to merge all the intermediate format files into a single file in GMV +format. More details on the parameters of this program and what it can do for +you can be found in the documentation of the step-19 tutorial program. + +

+ +

+Overall structure of the program +

+ +

+The overall structure of the program can be inferred from the run() +function that first calls do_initial_timestep() for the first time +step, and then do_timestep() on all subsequent time steps. The +difference between these functions is only that in the first time step we +start on a coarse mesh, solve on it, refine the mesh adaptively, and then +start again with a clean state on that new mesh. This procedure gives us a +better starting mesh, although we should of course keep adapting the mesh as +iterations proceed - this isn't done in this program, but commented on below. + +
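+
+In outline, run() therefore has roughly the following shape. This is only
+a sketch; member names such as present_time and end_time, and the exact
+loop condition, are assumptions made for illustration:
+
+template <int dim>
+void TopLevel<dim>::run ()
+{
+  // The first time step is special: start on a coarse mesh, solve,
+  // refine adaptively, then start over on the better mesh:
+  do_initial_timestep ();
+
+  // All later time steps reuse the mesh that was moved along at the
+  // end of the previous step:
+  while (present_time < end_time)
+    do_timestep ();
+}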

+The common part of the two functions treating time steps is the following +sequence of operations on the present mesh: + +

    +
  • assemble_system () [via solve_timestep ()]:
+    This first function is also the most interesting one. It assembles the
+    linear system corresponding to the discretized version of equation
+    (17). This leads to a system matrix $A_{ij} = \sum_K A^K_{ij}$ built
+    up of local contributions on each cell $K$ with entries

    +
+    \begin{gather}
+      A^K_{ij} = (C \varepsilon(\varphi_j), \varepsilon(\varphi_i))_K; \tag{21}
+    \end{gather}

+    In practice, $A^K$ is computed using numerical quadrature according to the
+    formula

    +
+    \begin{gather}
+      A^K_{ij} = \sum_q w_q [\varepsilon(\varphi_i(\vec x_q)) : C :
+      \varepsilon(\varphi_j(\vec x_q))], \tag{22}
+    \end{gather}

+    with quadrature points $\vec x_q$ and weights $w_q$. We have built these
+    contributions before, in step-8 and step-17, but in both of these cases we
+    have done so rather clumsily by using knowledge of how the rank-4 tensor $C$
+    is composed, and considering individual elements of the strain tensors
+    $\varepsilon(\varphi_i),\varepsilon(\varphi_j)$. This is not really
+    convenient, in particular if we want to consider more complicated elasticity
+    models than the isotropic case for which $C$ had the convenient form
+    $C_{ijkl} = \lambda \delta_{ij} \delta_{kl} + \mu (\delta_{ik} \delta_{jl}
+    + \delta_{il} \delta_{jk})$. While we in fact do not use a more complicated
+    form than this in the present program, we nevertheless want to write it in a
+    way that would easily allow for this. It is then natural to introduce
+    classes that represent symmetric tensors of rank 2 (for the strains and
+    stresses) and 4 (for the stress-strain tensor $C$). Fortunately, deal.II
+    provides these: the SymmetricTensor<rank,dim> class template
+    provides a full-fledged implementation of such tensors of rank rank
+    (which needs to be an even number) and dimension dim.
+

+    What we then need is two things: a way to create the stress-strain rank-4
+    tensor $C$, as well as a way to create a symmetric tensor of rank 2 (the strain
+    tensor) from the gradients of a shape function $\varphi_i$ at a quadrature
+    point $\vec x_q$ on a given cell. At the top of the implementation of this
+    example program, you will find such functions (a sketch of the first one
+    is shown after this list). The first one,
+    get_stress_strain_tensor, takes two arguments corresponding to
+    the Lamé constants $\lambda$ and $\mu$ and returns the stress-strain tensor
+    for the isotropic case corresponding to these constants (in the program, we
+    will choose constants corresponding to steel); it would be simple to replace
+    this function by one that computes this tensor for the anisotropic case, or
+    one taking into account crystal symmetries, for example. The second one,
+    get_strain, takes an object of type FEValues and indices
+    $i$ and $q$ and returns the symmetric gradient, i.e. the strain,
+    corresponding to shape function $\varphi_i(\vec x_q)$, evaluated on the cell
+    on which the FEValues object was last reinitialized.
+

+    Given this, the innermost loop of assemble_system computes the
+    local contributions to the matrix in the following elegant way (the variable
+    stress_strain_tensor, corresponding to the tensor $C$, has
+    previously been initialized with the result of the first function above):

    +for (unsigned int i=0; i<dofs_per_cell; ++i)
    +  for (unsigned int j=0; j<dofs_per_cell; ++j) 
    +    for (unsigned int q_point=0; q_point<n_q_points;
    +         ++q_point)
    +      {
    +        const SymmetricTensor<2,dim>
    +          eps_phi_i = get_strain (fe_values, i, q_point),
    +          eps_phi_j = get_strain (fe_values, j, q_point);
    +
    +        cell_matrix(i,j) 
    +          += (eps_phi_i * stress_strain_tensor * eps_phi_j
    +              *
    +              fe_values.JxW (q_point));
    +      }
    +
+    It is worth noting the expressive power of this piece of code, and
+    comparing it with the complications we had to go through in previous examples
+    for the elasticity problem. (To be fair, the SymmetricTensor class
+    template did not exist when these previous examples were written.) For
+    simplicity, operator* provides for the (double summation) product
+    between symmetric tensors of even rank here.
+

    +Assembling the local contributions +

    +
+    \begin{gather}
+      \begin{split}
+      f^K_i &= (\vec f, \varphi_i)_K -(\sigma^{n-1},\varepsilon(\varphi_i))_K
+      \\
+      &\approx
+      \sum_q w_q \left\{
+      \vec f(\vec x_q) \cdot \varphi_i(\vec x_q) -
+      \sigma^{n-1}_q : \varepsilon(\varphi_i(\vec x_q))
+      \right\}
+      \end{split} \tag{23}
+    \end{gather}

+    to the right hand side of (17) is equally
+    straightforward (note that we do not consider any boundary tractions $\vec b$
+    here). Remember that we only had to store the old stress in the
+    quadrature points of cells. In the program, we will provide a variable
+    local_quadrature_points_data that allows us to access the stress
+    $\sigma^{n-1}_q$ in each quadrature point. With this, the code for the right
+    hand side looks like this, again rather elegant:
    +for (unsigned int i=0; i<dofs_per_cell; ++i)
    +  {
    +    const unsigned int 
    +      component_i = fe.system_to_component_index(i).first;
    +
    +    for (unsigned int q_point=0; q_point<n_q_points; ++q_point)
    +      {
    +        const SymmetricTensor<2,dim> &old_stress
    +          = local_quadrature_points_data[q_point].old_stress;
    +        
    +        cell_rhs(i) += (body_force_values[q_point](component_i) *
    +                        fe_values.shape_value (i,q_point)
    +                        -
    +                        old_stress *
    +                        get_strain (fe_values,i,q_point))
    +                       *
    +                       fe_values.JxW (q_point);
    +      }
    +  }
    +
+    Note that in the multiplication $\vec f(\vec x_q) \cdot \varphi_i(\vec x_q)$,
+    we have made use of the fact that for the chosen finite element, only
+    one vector component (namely component_i) of $\varphi_i$ is
+    nonzero, and that we therefore also have to consider only one component of
+    $\vec f(\vec x_q)$.
+

    +This essentially concludes the new material we present in this function. It + later has to deal with boundary conditions as well as hanging node + constraints, but this parallels what we had to do previously in other + programs already. + +

    +

  • solve_linear_problem () [via solve_timestep ()]: + Unlike the previous one, this function is not really interesting, since it + does what similar functions have done in all previous tutorial programs - + solving the linear system using the CG method, using an incomplete LU + decomposition as a preconditioner (in the parallel case, it uses an ILU of + each processor's block separately). It is virtually unchanged + from step-17. + +

    +

  • update_quadrature_point_history () [via
+    solve_timestep ()]: Based on the displacement field $\Delta \vec u^n$
+    computed before, we update the stress values in all quadrature points
+    according to (18) and (20),
+    including the rotation of the coordinate system.
+

    +

  • move_mesh (): Given the solution computed before, in this + function we deform the mesh by moving each vertex by the displacement vector + field evaluated at this particular vertex. + +

    +

  • output_results (): This function simply outputs the solution + based on what we have said above, i.e. every processor computes output only + for its own portion of the domain, and this can then be later merged by an + external program. In addition to the solution, we also compute the norm of + the stress averaged over all the quadrature points on each cell. +
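+
+As announced in the first item of the list above, here is a sketch of what
+the function creating the isotropic stress-strain tensor can look like. It
+directly implements the formula for $C_{ijkl}$ quoted there; details may
+differ slightly from the actual program text:
+
+template <int dim>
+SymmetricTensor<4,dim>
+get_stress_strain_tensor (const double lambda, const double mu)
+{
+  SymmetricTensor<4,dim> C;
+  for (unsigned int i=0; i<dim; ++i)
+    for (unsigned int j=0; j<dim; ++j)
+      for (unsigned int k=0; k<dim; ++k)
+        for (unsigned int l=0; l<dim; ++l)
+          // C_ijkl = lambda d_ij d_kl + mu (d_ik d_jl + d_il d_jk):
+          C[i][j][k][l] = (((i==k) && (j==l) ? mu : 0) +
+                           ((i==l) && (j==k) ? mu : 0) +
+                           ((i==j) && (k==l) ? lambda : 0));
+  return C;
+}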
+ +

+With this general structure of the code, we only have to define what case we +want to solve. For the present program, we have chosen to simulate the +quasistatic deformation of a vertical cylinder for which the bottom boundary +is fixed and the top boundary is pushed down at a prescribed vertical +velocity. However, the horizontal velocity of the top boundary is left +unspecified - one can imagine this situation as a well-greased plate pushing +from the top onto the cylinder, the points on the top boundary of the cylinder +being allowed to slide horizontally along the surface of the plate, but forced +to move downward by the plate. The inner and outer boundaries of the cylinder +are free and not subject to any prescribed deflection or traction. In +addition, gravity acts on the body. + +

+The program text will reveal more about how to implement this situation, and +the results section will show what displacement pattern comes out of this +simulation. + +

+ +

+Possible directions for extensions +

+ +

+The program as is does not really solve an equation that has many applications +in practice: quasi-static material deformation based on a purely elastic law +is almost boring. However, the program may serve as the starting point for +more interesting experiments, and that indeed was the initial motivation for +writing it. Here are some suggestions of what the program is missing and in +what direction it may be extended: + +

+ +

+Plasticity models. +

The most obvious extension is to use a more
+realistic material model for large-scale quasistatic deformation. The natural
+choice for this would be plasticity, in which a nonlinear relationship between
+stress and strain replaces equation (11). Plasticity
+models are usually rather complicated to program since the stress-strain
+dependence is generally non-smooth. The material can be thought of as being able
+to withstand only a maximal stress (the yield stress) after which it starts to
+``flow''. A mathematical description of this can be given in the form of a
+variational inequality, which alternatively can be treated as minimizing the
+elastic energy

+\begin{gather}
+  E(\vec u) =
+  (\varepsilon(\vec u), C\varepsilon(\vec u))_{\Omega}
+  - (\vec f, \vec u)_{\Omega}
+  - (\vec b, \vec u)_{\Gamma_N}, \tag{24}
+\end{gather}

+subject to the constraint +

+\begin{gather}
+  f(\sigma(\vec u)) \le 0 \tag{25}
+\end{gather}

+on the stress. This extension makes the problem to be solved in each time step +nonlinear, so we need another loop within each time step. + +
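+
+As a concrete example of such a yield function (not one used anywhere in
+this program), consider the classical von Mises criterion, in which the
+material yields once the deviatoric part of the stress becomes too large:
+\begin{gather*}
+  f(\sigma) = \sqrt{\tfrac 32\, \sigma^{dev} : \sigma^{dev}} - \sigma_Y,
+  \qquad
+  \sigma^{dev} = \sigma - \tfrac 1d \,(\textrm{tr}\,\sigma)\, I,
+\end{gather*}
+where $\sigma_Y$ is the yield stress of the material and $d$ the space
+dimension.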

+Without going into further details of this model, we refer to the excellent +book by Simo and Hughes on ``Computational Inelasticity'' for a +comprehensive overview of computational strategies for solving plastic +models. Alternatively, a brief but concise description of an algorithm for +plasticity is given in an article by S. Commend, A. Truty, and Th. Zimmermann, +titled ``Stabilized finite elements applied to +elastoplasticity: I. Mixed displacement-pressure formulation'' +(Computer Methods in Applied Mechanics and Engineering, vol. 193, +pp. 3559-3586, 2004). + +

+ +

+Stabilization issues. +

The formulation we have chosen, i.e. using +piecewise (bi-, tri-)linear elements for all components of the displacement +vector, and treating the stress as a variable dependent on the displacement is +appropriate for most materials. However, this so-called displacement-based +formulation becomes unstable and exhibits spurious modes for incompressible or +nearly-incompressible materials. While fluids are usually not elastic (in most +cases, the stress depends on velocity gradients, not displacement gradients, +although there are exceptions such as electro-rheologic fluids), there are a +few solids that are nearly incompressible, for example rubber. Another case is +that many plasticity models ultimately let the material become incompressible, +although this is outside the scope of the present program. + +

+Incompressibility is characterized by Poisson's ratio +

+\begin{gather*}
+  \nu = \frac{\lambda}{2(\lambda+\mu)},
+\end{gather*}

+where $\lambda,\mu$ are the Lamé constants of the material.
+Physical constraints indicate that $-1\le \nu\le \tfrac 12$ (the condition
+also follows from mathematical stability considerations). If $\nu$
+approaches $\tfrac 12$, then the material becomes incompressible. In that
+case, pure displacement-based formulations are no longer appropriate for the
+solution of such problems, and stabilization techniques have to be employed
+for a stable and accurate solution. The book and paper cited above give
+indications as to how to do this, but there is also a large volume of
+literature on this subject; a good start to get an overview of the topic can
+be found in the references of the paper by
+H.-Y. Duan and Q. Lin on ``Mixed finite elements of least-squares type for
+elasticity'' (Computer Methods in Applied Mechanics and Engineering, vol. 194,
+pp. 1093-1112, 2005).
+
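+
+To get a feeling for the numbers involved: with Lamé constants roughly
+appropriate for steel (assumed here to be $\lambda \approx 9.7\cdot 10^{10}$
+Pa and $\mu \approx 7.6\cdot 10^{10}$ Pa), one obtains
+\begin{gather*}
+  \nu = \frac{9.7}{2\,(9.7+7.6)} \approx 0.28,
+\end{gather*}
+comfortably away from the incompressible limit $\nu = \tfrac 12$.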

+ +

+Refinement during timesteps. +

In the present form, the program
+only refines the initial mesh a number of times, but then never again. For any
+kind of realistic simulation, one would want to extend this so that the mesh
+is refined and coarsened every few time steps instead. This is not hard to do,
+in fact, but has been left for future tutorial programs or as an exercise, if
+you wish. The main complication one has to overcome is that one has to
+transfer the data that is stored in the quadrature points of the cells of the
+old mesh to the new mesh, preferably by some sort of projection scheme. This
+is only slightly messy in the sequential case; in fact, the function
+FETools::get_projection_from_quadrature_points_matrix will do
+the projection, and the FiniteElement::get_restriction_matrix and
+FiniteElement::get_prolongation_matrix functions will do the
+transfer between mother and child cells. However, it becomes complicated
+once we run the program in parallel, since then each process only stores this
+data for the cells it owned on the old mesh, and it may need to know the
+values of the quadrature point data on other cells if the corresponding cells
+on the new mesh are assigned to this process after subdividing the new mesh. A
+global communication of these data elements is therefore necessary, making the
+entire process a little more unpleasant.
+

+ +

+Ensuring mesh regularity. +

At present, the program makes no attempt +to make sure that a cell, after moving its vertices at the end of the time +step, still has a valid geometry (i.e. that its Jacobian determinant is +positive and bounded away from zero everywhere). It is, in fact, not very hard +to set boundary values and forcing terms in such a way that one gets distorted +and inverted cells rather quickly. Certainly, in some cases of large +deformation, this is unavoidable with a mesh of finite mesh size, but in some +other cases this should be preventable by appropriate mesh refinement and/or a +reduction of the time step size. The program does not do that, but a more +sophisticated version definitely should employ some sort of heuristic defining +what amount of deformation of cells is acceptable, and what isn't. + +

+ +

+Compiling the program +

+ +

+Finally, just to remind everyone: the program runs in 3d (see the definition +of the elastic_problem variable in main(), unlike almost +all of the other example programs. While the compiler doesn't care what +dimension it compiles for, the linker has to know which library to link with. +And as explained in other places, this requires slight changes to the Makefile +compared to the other tutorial programs. In particular, everywhere where the +2d versions of libraries are mentioned, one needs to change this to 3d, +although this is already done in the distributed version of the Makefile. +Conversely, if you want to run the program in 2d (after making the necessary +changes to accommodate for a 2d geometry), you have to change the Makefile +back to allow for 2d. diff --git a/deal.II/examples/step-18/doc/intro.tex b/deal.II/examples/step-18/doc/intro.tex new file mode 100644 index 0000000000..0bc030bd3a --- /dev/null +++ b/deal.II/examples/step-18/doc/intro.tex @@ -0,0 +1,666 @@ +\documentclass{article} +\usepackage{amsmath} +\usepackage{amsfonts} +\renewcommand{\vec}[1]{\mathbf{#1}} +\renewcommand{\div}{\mathrm{div}\ } +\begin{document} + +This tutorial program is another one in the series on the elasticity problem +that we have already started with step-8 and step-17. It extends it into two +different directions: first, it solves the quasistatic but time dependent +elasticity problem for large deformations with a Lagrangian mesh movement +approach. Secondly, it shows some more techniques for solving such problems +using parallel processing with PETSc's linear algebra. In addition to this, we +show how to work around the main bottleneck of step-17, namely that we +generated graphical output from only one process, and that this scaled very +badly with larger numbers of processes and on large problems. Finally, a good +number of assorted improvements and techniques are demonstrated that have not +been shown yet in previous programs. + +As before in step-17, the program runs just as fine on a single sequential +machine as long as you have PETSc installed. Information on how to tell +deal.II about a PETSc installation on your system can be found in the deal.II +README file, which is linked to from the main documentation page +\texttt{doc/index.html} in your installation of deal.II, or on the deal.II +webpage \texttt{http://www.dealii.org/}. + + +\subsection*{Quasistatic elastic deformation} + +\subsubsection*{Motivation of the model} + +In general, time-dependent small elastic deformations are described by the +elastic wave equation +\begin{gather} + \rho \frac{\partial^2 \vec u}{\partial t^2} + + c \frac{\partial \vec u}{\partial t} + - \div ( C \varepsilon(\vec u)) = \vec f + \qquad + \text{in $\Omega$}, +\end{gather} +where $\vec u=\vec u (\vec x,t)$ is the deformation of the body, $\rho$ +and $c$ the density and attenuation coefficient, and $\vec f$ external forces. +In addition, initial conditions +\begin{align} + \vec u(\cdot, 0) = \vec u_0(\cdot) + \qquad + \text{on $\Omega$}, +\end{align} +and Dirichlet (displacement) or Neumann (traction) boundary conditions need +to be specified for a unique solution: +\begin{align} + \vec u(\vec x,t) &= \vec d(\vec x,t) + \qquad + &&\text{on $\Gamma_D\subset\partial\Omega$}, + \\ + \vec n \ C \varepsilon(\vec u(\vec x,t)) &= \vec b(\vec x,t) + \qquad + &&\text{on $\Gamma_N=\partial\Omega\backslash\Gamma_D$}. 
+\end{align} +In above formulation, $\varepsilon(\vec u)= \tfrac 12 (\nabla \vec u + \nabla +\vec u^T)$ is the symmetric gradient of the displacement, also called the +\textit{strain}. $C$ is a tensor of rank 4, called the \textit{stress-strain + tensor} that contains knowledge of the elastic strength of the material; its +symmetry properties make sure that it maps symmetric tensors of rank 2 +(``matrices'' of dimension $d$, where $d$ is the spatial dimensionality) onto +symmetric tensors of the same rank. We will comment on the roles of the strain +and stress tensors more below. For the moment it suffices to say that we +interpret the term $\div ( C \varepsilon(\vec u))$ as the vector with +components $\tfrac \partial{\partial x_j} C_{ijkl} \varepsilon(\vec u)_{kl}$, +where summation over indices $j,k,l$ is implied. + +The quasistatic limit of this equation is motivated as follows: each small +perturbation of the body, for example by changes in boundary condition or the +forcing function, will result in a corresponding change in the configuration +of the body. In general, this will be in the form of waves radiating away from +the location of the disturbance. Due to the presence of the damping term, +these waves will be attenuated on a time scale of, say, $\tau$. Now, assume +that all changes in external forcing happen on times scales that are +much larger than $\tau$. In that case, the dynamic nature of the change is +unimportant: we can consider the body to always be in static equilibrium, +i.e.~we can assume that at all times the body satisfies +\begin{align} + - \div ( C \varepsilon(\vec u)) &= \vec f + &&\text{in $\Omega$}, + \\ + \vec u(\vec x,t) &= \vec d(\vec x,t) + \qquad + &&\text{on $\Gamma_D$}, + \\ + \vec n \ C \varepsilon(\vec u(\vec x,t)) &= \vec b(\vec x,t) + \qquad + &&\text{on $\Gamma_N$}. +\end{align} +Note that the differential equation does not contain any time derivatives any +more -- all time dependence is introduced through boundary conditions and a +possibly time-varying force function $\vec f(\vec x,t)$. The changes in +configuration can therefore be considered as being stationary +instantaneously. An alternative view of this is that $t$ is not really a time +variable, but only a time-like parameter that governs the evolution of the +problem. + +While these equations are sufficient to describe small deformations, computing +large deformations is a little more complicated. To do so, let us first +introduce a tensorial stress variable $\sigma$, and write the differential +equations in terms of the stress: +\begin{align} + - \div \sigma &= \vec f + &&\text{in $\Omega(t)$}, + \\ + \vec u(\vec x,t) &= \vec d(\vec x,t) + \qquad + &&\text{on $\Gamma_D\subset\partial\Omega(t)$}, + \\ + \vec n \ C \varepsilon(\vec u(\vec x,t)) &= \vec b(\vec x,t) + \qquad + &&\text{on $\Gamma_N=\partial\Omega(t)\backslash\Gamma_D$}. +\end{align} +Note that these equations are posed on a domain $\Omega(t)$ that +changes with time, with the boundary moving according to the +displacements $\vec u(\vec x,t)$ of the points on the boundary. To +complete this system, we have to specify the incremental relationship between +the stress and the strain, as follows: +\begin{align} + \label{eq:stress-strain} + \dot\sigma = C \varepsilon (\dot{\vec u}), +\end{align} +where a dot indicates a time derivative. Both the stress $\sigma$ and the +strain $\varepsilon(\vec u)$ are symmetric tensors of rank 2. 
+ + +\subsubsection*{Time discretization} + +Numerically, this system is solved as follows: first, we discretize +the time component using a backward Euler scheme. This leads to a +discrete equilibrium of force at time step $n$: +\begin{gather} + -\div \sigma^n = f^n, +\end{gather} +where +\begin{gather} + \sigma^n = \sigma^{n-1} + C \varepsilon (\Delta \vec u^n), +\end{gather} +and $\Delta \vec u^n$ the incremental displacement for time step +$n$. In addition, we have to specify initial data $\vec u(\cdot,0)=\vec u_0$. +This way, if we want to solve for the displacement increment, we +have to solve the following system: +\begin{align} + - \div C \varepsilon(\Delta\vec u^n) &= \vec f + \div \sigma^{n-1} + &&\text{in $\Omega(t_{n-1})$}, + \\ + \Delta \vec u^n(\vec x,t) &= \vec d(\vec x,t_n) - \vec d(\vec x,t_{n-1}) + \qquad + &&\text{on $\Gamma_D\subset\partial\Omega(t_{n-1})$}, + \\ + \vec n \ C \varepsilon(\Delta \vec u^n(\vec x,t)) &= \vec b(\vec x,t_n)-\vec b(\vec x,t_{n-1}) + \qquad + &&\text{on $\Gamma_N=\partial\Omega(t_{n-1})\backslash\Gamma_D$}. +\end{align} +The weak form of this set of equations, which as usual is the basis for the +finite element formulation, reads as follows: find $\Delta \vec u^n \in +\{v\in H^1(\Omega(t_{n-1}))^d: v|_{\Gamma_D}=\vec d(\cdot,t_n) - \vec d(\cdot,t_{n-1})\}$ +such that +\begin{gather} + \begin{split} + \label{eq:linear-system} + (C \varepsilon(\Delta\vec u^n), \varepsilon(\varphi) )_{\Omega(t_{n-1})} + = + (\vec f, \varphi)_{\Omega(t_{n-1})} + -(\sigma^{n-1},\varepsilon(\varphi))_{\Omega(t_{n-1})} + \\ + +(\vec b(\vec x,t_n)-\vec b(\vec x,t_{n-1}), \varphi)_{\Gamma_N} + \\ + \forall \varphi \in \{\vec v\in H^1(\Omega(t_{n-1}))^d: \vec + v|_{\Gamma_D}=0\}. + \end{split} +\end{gather} +We note that, for simplicity, in the program we will always assume that there +are no boundary forces, i.e.~$\vec b = 0$, and that the deformation of the +body is driven by body forces $\vec f$ and prescribed boundary displacements +$\vec d$ alone. It is also worth noting that when integrating by parts, we +would get terms of the form $(C \varepsilon(\Delta\vec u^n), \nabla \varphi +)_{\Omega(t_{n-1})}$, but that we replace it with the term involving the +symmetric gradient $\varepsilon(\varphi)$ instead of $\nabla\varphi$. Due to +the symmetry of $C$, the two terms are equivalent, but the symmetric version +avoids a potential for round-off to render the resulting matrix slightly +non-symmetric. + +The system at time step $n$, to be solved on the old domain +$\Omega(t_{n-1})$, has exactly the form of a stationary elastic +problem, and is therefore similar to what we have already implemented +in previous example programs. We will therefore not comment on the +space discretization beyond saying that we again use lowest order +continuous finite elements. + +There are differences, however: +\begin{enumerate} + \item We have to move (update) the mesh after each time step, in order to be + able to solve the next time step on a new domain; + + \item We need to know $\sigma^{n-1}$ to compute the next incremental + displacement, i.e.~we need to compute it at the end of the time step + to make sure it is available for the next time step. Essentially, + the stress variable is our window to the history of deformation of + the body. +\end{enumerate} +These two operations are done in the functions \texttt{move\_mesh} and +\texttt{update\_\-quadrature\_\-point\_history} in the program. 
While moving +the mesh is only a technicality, updating the stress is a little more +complicated and will be discussed in the next section. + + +\subsubsection*{Updating the stress variable} + +As indicated above, we need to have the stress variable $\sigma^n$ available +when computing time step $n+1$, and we can compute it using +\begin{gather} + \label{eq:stress-update} + \sigma^n = \sigma^{n-1} + C \varepsilon (\Delta \vec u^n). +\end{gather} +There are, despite the apparent simplicity of this equation, two questions +that we need to discuss. The first concerns the way we store $\sigma^n$: even +if we compute the incremental updates $\Delta\vec u^n$ using lowest-order +finite elements, then its symmetric gradient $\varepsilon(\Delta\vec u^n)$ is +in general still a function that is not easy to describe. In particular, it is +not a piecewise constant function, and on general meshes (with cells that are +not rectangles parallel to the coordinate axes) or with non-constant +stress-strain tensors $C$ it is not even a bi- or trilinear function. Thus, it +is a priori not clear how to store $\sigma^n$ in a computer program. + +To decide this, we have to see where it is used. The only place where we +require the stress is in the term +$(\sigma^{n-1},\varepsilon(\varphi))_{\Omega(t_{n-1})}$. In practice, we of +course replace this term by numerical quadrature: +\begin{gather} + (\sigma^{n-1},\varepsilon(\varphi))_{\Omega(t_{n-1})} + = + \sum_{K\subset {\mathbb{T}}} + (\sigma^{n-1},\varepsilon(\varphi))_K + \approx + \sum_{K\subset {\mathbb{T}}} + \sum_q + w_q \ \sigma^{n-1}(\vec x_q) : \varepsilon(\varphi(\vec x_q), +\end{gather} +where $w_q$ are the quadrature weights and $\vec x_q$ the quadrature points on +cell $K$. This should make clear that what we really need is not the stress +$\sigma^{n-1}$ in itself, but only the values of the stress in the quadrature +points on all cells. This, however, is a simpler task: we only have to provide +a data structure that is able to hold one symmetric tensor of rank 2 for each +quadrature point on all cells (or, since we compute in parallel, all +quadrature points of all cells that the present MPI process ``owns''). At the +end of each time step we then only have to evaluate $\varepsilon(\Delta \vec +u^n(\vec x_q))$, multiply it by the stress-strain tensor $C$, and use the +result to update the stress $\sigma^n(\vec x_q)$ at quadrature point $q$. + +The second complication is not visible in our notation as chosen above. It is +due to the fact that we compute $\Delta u^n$ on the domain $\Omega(t_{n-1})$, +and then use this displacement increment to both update the stress as well as +move the mesh nodes around to get to $\Omega(t_n)$ on which the next increment +is computed. What we have to make sure, in this context, is that moving the +mesh does not only involve moving around the nodes, but also making +corresponding changes to the stress variable: the updated stress is a variable +that is defined with respect to the coordinate system of the material in the +old domain, and has to be transferred to the new domain. The reason for this +can be understood as follows: locally, the incremental deformation $\Delta\vec +u$ can be decomposed into three parts, a linear translation (the constant part +of the displacement increment field in the neighborhood of a point), a +dilational +component (that part of the gradient of the displacement field that has a +nonzero divergence), and a rotation. 
A linear translation of the material does +not affect the stresses that are frozen into it -- the stress values are +simply translated along. The dilational or compressional change produces a +corresponding stress update. However, the rotational component does not +necessarily induce a nonzero stress update (think, in 2d, for example of the +situation where $\Delta\vec u=(y, -x)^T$, with which $\varepsilon(\Delta \vec +u)=0$). Nevertheless, if the the material was pre-stressed in a certain +direction, then this direction will be rotated along with the material. To +this end, we have to define a rotation matrix $R(\Delta \vec u^n)$ that +describes, in each point the rotation due to the displacement increments. It +is not hard to see that the actual dependence of $R$ on $\Delta \vec u^n$ can +only be through the curl of the displacement, rather than the displacement +itself or its full gradient (as mentioned above, the constant components of +the increment describe translations, its divergence the dilational modes, and +the curl the rotational modes). Since the exact form of $R$ is cumbersome, we +only state it in the program code, and note that the correct updating formula +for the stress variable is then +\begin{gather} + \label{eq:stress-update+rot} + \sigma^n + = + R(\Delta \vec u^n)^T + [\sigma^{n-1} + C \varepsilon (\Delta \vec u^n)] + R(\Delta \vec u^n). +\end{gather} + +Both stress update and rotation are implemented in the function +\texttt{update\_\-quadrature\_\-point\_history} of the example program. + + +\subsection*{Parallel graphical output} + +In the step-17 example program, the main bottleneck for parallel computations +was that only the first processor generated output for the entire domain. +Since generating graphical output is expensive, this did not scale well when +large numbers of processors were involved. However, no viable ways around this +problem were implemented in the library at the time, and the problem was +deferred to a later version. + +This functionality has been implemented in the meantime, and this is the time +to explain its use. Basically, what we need to do is let every process +generate graphical output for that subset of cells that it owns, write them +into separate files and have a way to merge them later on. At this point, it +should be noted that none of the graphical output formats known to the author +of this program allows for a simple way to later re-read it and merge it with +other files corresponding to the same simulation. What deal.II therefore +offers is the following: When you call the \texttt{DataOut::build\_patches} +function, an intermediate format is generated that contains all the +information for the data on each cell. Usually, this intermediate format is +then further processed and converted into one of the graphical formats that we +can presently write, such as gmv, eps, ucd, gnuplot, or a number of other +ones. Once written in these formats, there is no way to reconstruct the +necessary information to merge multiple blocks of output. However, the base +classes of \texttt{DataOut} also allow to simply dump the intermediate format +to a file, from which it can later be recovered without loss of information. + +This has two advantages: first, simulations may just dump the intermediate +format data during run-time, and the user may later decide which particular +graphics format she wants to have. This way, she does not have to re-run the +entire simulation if graphical output is requested in a different format. 
One +typical case is that one would like to take a quick look at the data with +gnuplot, and then create high-quality pictures using GMV or OpenDX. Since both +can be generated out of the intermediate format without problem, there is no +need to re-run the simulation. + +In the present context, of more interest is the fact that in contrast to any +of the other formats, it is simple to merge multiple files of intermediate +format, if they belong to the same simulation. This is what we will do here: +we will generate one output file in intermediate format for each processor +that belongs to this computation (in the sequential case, this will simply be +a single file). They may then later be read in and merged so that we can +output a single file in whatever graphical format is requested. + +The way to do this is to first instruct the \texttt{DataOutBase} class to +write intermediate format rather than in gmv or any other graphical +format. This is simple: just use +\texttt{data\_out.write\_deal\_II\_intermediate}. We will write to a file +called \texttt{solution-TTTT.TTTT.d2} if there is only one processor, or +files \texttt{solution-TTTT.TTTT.NNN.d2} if this is really a parallel +job. Here, \texttt{TTTT.TTTT} denotes the time for which this output has +been generated, and \texttt{NNN} the number of the MPI process that did this. + +The next step is to convert this file or these files into whatever +format you like. The program that does this is the step-19 tutorial program: +for example, for the first time step, call it through +\begin{center} + \texttt{../step-19/step-19 solution-0001.0000.*.d2 solution-0001.0000.gmv} +\end{center} +to merge all the intermediate format files into a single file in GMV +format. More details on the parameters of this program and what it can do for +you can be found in the documentation of the step-19 tutorial program. + + + +\subsection*{Overall structure of the program} + +The overall structure of the program can be inferred from the \texttt{run()} +function that first calls \texttt{do\_initial\_timestep()} for the first time +step, and then \texttt{do\_timestep()} on all subsequent time steps. The +difference between these functions is only that in the first time step we +start on a coarse mesh, solve on it, refine the mesh adaptively, and then +start again with a clean state on that new mesh. This procedure gives us a +better starting mesh, although we should of course keep adapting the mesh as +iterations proceed -- this isn't done in this program, but commented on below. + +The common part of the two functions treating time steps is the following +sequence of operations on the present mesh: +\begin{itemize} +\item \texttt{assemble\_system ()} [via \texttt{solve\_timestep ()}]: + This first function is also the most interesting one. It assembles the + linear system corresponding to the discretized version of equation + \eqref{eq:linear-system}. This leads to a system matrix $A_{ij} = \sum_K + A^K_{ij}$ built up of local contributions on each cell $K$ with entries + \begin{gather} + A^K_{ij} = (C \varepsilon(\varphi_j), \varepsilon(\varphi_i))_K; + \end{gather} + In practice, $A^K$ is computed using numerical quadrature according to the + formula + \begin{gather} + A^K_{ij} = \sum_q w_q [\varepsilon(\varphi_i(\vec x_q)) : C : + \varepsilon(\varphi_j(\vec x_q))], + \end{gather} + with quadrature points $\vec x_q$ and weights $w_q$. 
+  We have built these
+  contributions before, in step-8 and step-17, but in both of these cases we
+  have done so rather clumsily, by using knowledge of how the rank-4 tensor
+  $C$ is composed, and considering individual elements of the strain tensors
+  $\varepsilon(\varphi_i),\varepsilon(\varphi_j)$. This is not really
+  convenient, in particular if we want to consider more complicated elasticity
+  models than the isotropic case for which $C$ had the convenient form
+  $C_{ijkl} = \lambda \delta_{ij} \delta_{kl} + \mu (\delta_{ik} \delta_{jl}
+  + \delta_{il} \delta_{jk})$. While we in fact do not use a more complicated
+  form than this in the present program, we nevertheless want to write it in a
+  way that would easily allow for this. It is then natural to introduce
+  classes that represent symmetric tensors of rank 2 (for the strains and
+  stresses) and 4 (for the stress-strain tensor $C$). Fortunately, deal.II
+  provides these: the \texttt{SymmetricTensor<rank,dim>} class template
+  provides a full-fledged implementation of such tensors of rank \texttt{rank}
+  (which needs to be an even number) and dimension \texttt{dim}.
+
+  What we then need are two things: a way to create the stress-strain rank-4
+  tensor $C$, as well as a way to create a symmetric tensor of rank 2 (the
+  strain tensor) from the gradients of a shape function $\varphi_i$ at a
+  quadrature point $\vec x_q$ on a given cell. At the top of the
+  implementation of this example program, you will find such functions. The
+  first one, \texttt{get\_stress\_strain\_tensor}, takes two arguments
+  corresponding to the Lam\'e constants $\lambda$ and $\mu$ and returns the
+  stress-strain tensor for the isotropic case corresponding to these constants
+  (in the program, we will choose constants corresponding to steel); it would
+  be simple to replace this function by one that computes this tensor for the
+  anisotropic case, or by one taking into account crystal symmetries, for
+  example. The second one, \texttt{get\_strain}, takes an object of type
+  \texttt{FEValues} and indices $i$ and $q$ and returns the symmetric
+  gradient, i.e. the strain, corresponding to shape function
+  $\varphi_i(\vec x_q)$, evaluated on the cell on which the \texttt{FEValues}
+  object was last reinitialized.
+
+  Given this, the innermost loop of \texttt{assemble\_system} computes the
+  local contributions to the matrix in the following elegant way (the variable
+  \texttt{stress\_strain\_tensor}, corresponding to the tensor $C$, has
+  previously been initialized with the result of the first function above):
+  \begin{verbatim}
+for (unsigned int i=0; i<dofs_per_cell; ++i)
+  for (unsigned int j=0; j<dofs_per_cell; ++j)
+    for (unsigned int q_point=0; q_point<n_q_points; ++q_point)
+      {
+        const SymmetricTensor<2,dim>
+          eps_phi_i = get_strain (fe_values, i, q_point),
+          eps_phi_j = get_strain (fe_values, j, q_point);
+
+        cell_matrix(i,j)
+          += (eps_phi_i * stress_strain_tensor * eps_phi_j
+              *
+              fe_values.JxW (q_point));
+      }
+  \end{verbatim}
+  It is worth noting the expressive power of this piece of code, and
+  comparing it with the complications we had to go through in previous
+  examples for the elasticity problem. (To be fair, the
+  \texttt{SymmetricTensor} class template did not exist when these previous
+  examples were written.) For simplicity, \texttt{operator*} here provides
+  the (double summation) product between symmetric tensors of even rank.
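+
+  Just for illustration, a sketch of what \texttt{get\_strain} could look
+  like is given below; it merely mirrors the verbal description above, so
+  take it as a sketch rather than as the program's literal text:
+  \begin{verbatim}
+template <int dim>
+SymmetricTensor<2,dim>
+get_strain (const FEValues<dim> &fe_values,
+            const unsigned int   shape_func,
+            const unsigned int   q_point)
+{
+  SymmetricTensor<2,dim> tmp;
+
+  // diagonal entries of the symmetric gradient
+  for (unsigned int i=0; i<dim; ++i)
+    tmp[i][i] = fe_values.shape_grad_component (shape_func,q_point,i)[i];
+
+  // off-diagonal entries: average of the two mixed derivatives
+  for (unsigned int i=0; i<dim; ++i)
+    for (unsigned int j=i+1; j<dim; ++j)
+      tmp[i][j] = (fe_values.shape_grad_component (shape_func,q_point,i)[j]
+                   +
+                   fe_values.shape_grad_component (shape_func,q_point,j)[i])
+                  / 2;
+
+  return tmp;
+}
+  \end{verbatim}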
+
+  Assembling the local contributions
+  \begin{gather}
+    \begin{split}
+      f^K_i &=
+      (\vec f, \varphi_i)_K -(\sigma^{n-1},\varepsilon(\varphi_i))_K
+      \\
+      &\approx
+      \sum_q
+      w_q \left\{
+        \vec f(\vec x_q) \cdot \varphi_i(\vec x_q) -
+        \sigma^{n-1}_q : \varepsilon(\varphi_i(\vec x_q))
+      \right\}
+    \end{split}
+  \end{gather}
+  to the right hand side of \eqref{eq:linear-system} is equally
+  straightforward (note that we do not consider any boundary tractions $\vec
+  b$ here). Remember that we only had to store the old stress in the
+  quadrature points of cells. In the program, we will provide a variable
+  \texttt{local\_quadrature\_points\_data} that allows us to access the
+  stress $\sigma^{n-1}_q$ in each quadrature point. With this, the code for
+  the right hand side looks like this, again rather elegant:
+  \begin{verbatim}
+for (unsigned int i=0; i<dofs_per_cell; ++i)
+  {
+    const unsigned int
+      component_i = fe.system_to_component_index(i).first;
+
+    for (unsigned int q_point=0; q_point<n_q_points; ++q_point)
+      {
+        const SymmetricTensor<2,dim> &old_stress
+          = local_quadrature_points_data[q_point].old_stress;
+
+        cell_rhs(i) += (body_force_values[q_point](component_i) *
+                        fe_values.shape_value (i,q_point)
+                        -
+                        old_stress *
+                        get_strain (fe_values,i,q_point))
+                       *
+                       fe_values.JxW (q_point);
+      }
+  }
+  \end{verbatim}
+  Note that in the multiplication $\vec f(\vec x_q) \cdot \varphi_i(\vec
+  x_q)$, we have made use of the fact that for the chosen finite element, only
+  one vector component (namely \texttt{component\_i}) of $\varphi_i$ is
+  nonzero, and that we therefore also have to consider only one component of
+  $\vec f(\vec x_q)$.
+
+  This essentially concludes the new material we present in this function. It
+  later has to deal with boundary conditions as well as hanging node
+  constraints, but this parallels what we had to do previously in other
+  programs already.
+
+\item \texttt{solve\_linear\_problem ()} [via \texttt{solve\_timestep ()}]:
+  Unlike the previous one, this function is not really interesting, since it
+  does what similar functions have done in all previous tutorial programs --
+  solving the linear system using the CG method, using an incomplete LU
+  decomposition as a preconditioner (in the parallel case, it uses an ILU of
+  each processor's block separately). It is virtually unchanged
+  from step-17.
+
+\item \texttt{update\_quadrature\_point\_history ()} [via
+  \texttt{solve\_timestep ()}]: Based on the displacement field $\Delta \vec
+  u^n$ computed before, we update the stress values in all quadrature points
+  according to \eqref{eq:stress-update} and \eqref{eq:stress-update+rot},
+  including the rotation of the coordinate system (a schematic version of
+  this update is shown right after this list).
+
+\item \texttt{move\_mesh ()}: Given the solution computed before, in this
+  function we deform the mesh by moving each vertex by the displacement vector
+  field evaluated at this particular vertex.
+
+\item \texttt{output\_results ()}: This function simply outputs the solution
+  based on what we have said above, i.e. every processor computes output only
+  for its own portion of the domain, and this can then be later merged by an
+  external program. In addition to the solution, we also compute the norm of
+  the stress averaged over all the quadrature points on each cell.
+\end{itemize}
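+
+Schematically, and with variable names that are meant to be illustrative
+rather than the program's literal code, the per-quadrature-point stress
+update of \texttt{update\_quadrature\_point\_history ()} looks like this
+(\texttt{old\_stress}, \texttt{incremental\_strain}, and \texttt{rotation}
+stand for $\sigma^{n-1}$, $\varepsilon(\Delta\vec u^n)$, and
+$R(\Delta\vec u^n)$ at the current quadrature point):
+\begin{verbatim}
+// apply the update formula (eq:stress-update+rot) at one quadrature point
+const SymmetricTensor<2,dim> new_stress
+  = old_stress + stress_strain_tensor * incremental_strain;
+
+const SymmetricTensor<2,dim> rotated_new_stress
+  = symmetrize (transpose (rotation) *
+                static_cast<Tensor<2,dim> >(new_stress) *
+                rotation);
+\end{verbatim}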
+
+With this general structure of the code, we only have to define what case we
+want to solve. For the present program, we have chosen to simulate the
+quasistatic deformation of a vertical cylinder for which the bottom boundary
+is fixed and the top boundary is pushed down at a prescribed vertical
+velocity. However, the horizontal velocity of the top boundary is left
+unspecified -- one can imagine this situation as a well-greased plate pushing
+from the top onto the cylinder, the points on the top boundary of the cylinder
+being allowed to slide horizontally along the surface of the plate, but forced
+to move downward by the plate. The inner and outer boundaries of the cylinder
+are free and not subject to any prescribed deflection or traction. In
+addition, gravity acts on the body.
+
+The program text will reveal more about how to implement this situation, and
+the results section will show what displacement pattern comes out of this
+simulation.
+
+\subsection*{Possible directions for extensions}
+
+The program as is does not really solve an equation that has many applications
+in practice: quasi-static material deformation based on a purely elastic law
+is almost boring. However, the program may serve as the starting point for
+more interesting experiments, and that indeed was the initial motivation for
+writing it. Here are some suggestions of what the program is missing and in
+what direction it may be extended:
+
+\paragraph*{Plasticity models.} The most obvious extension is to use a more
+realistic material model for large-scale quasistatic deformation. The natural
+choice for this would be plasticity, in which a nonlinear relationship between
+stress and strain replaces equation \eqref{eq:stress-strain}. Plasticity
+models are usually rather complicated to program since the stress-strain
+dependence is generally non-smooth. The material can be thought of as being
+able to withstand only a maximal stress (the yield stress) after which it
+starts to ``flow''. A mathematical description of this can be given in the
+form of a variational inequality, which alternatively can be treated as
+minimizing the elastic energy
+\begin{gather}
+  E(\vec u) =
+  (\varepsilon(\vec u), C\varepsilon(\vec u))_{\Omega}
+  - (\vec f, \vec u)_{\Omega} - (\vec b, \vec u)_{\Gamma_N},
+\end{gather}
+subject to the constraint
+\begin{gather}
+  f(\sigma(\vec u)) \le 0
+\end{gather}
+on the stress. This extension makes the problem to be solved in each time step
+nonlinear, so we need another loop within each time step.
+
+Without going into further details of this model, we refer to the excellent
+book by Simo and Hughes on ``Computational Inelasticity'' for a
+comprehensive overview of computational strategies for solving plastic
+models. Alternatively, a brief but concise description of an algorithm for
+plasticity is given in an article by S. Commend, A. Truty, and Th. Zimmermann,
+titled ``Stabilized finite elements applied to
+elastoplasticity: I. Mixed displacement-pressure formulation''
+(Computer Methods in Applied Mechanics and Engineering, vol. 193,
+pp. 3559--3586, 2004).
+
+
+\paragraph*{Stabilization issues.} The formulation we have chosen, i.e. using
+piecewise (bi-, tri-)linear elements for all components of the displacement
+vector, and treating the stress as a variable dependent on the displacement,
+is appropriate for most materials. However, this so-called displacement-based
+formulation becomes unstable and exhibits spurious modes for incompressible or
+nearly-incompressible materials. While fluids are usually not elastic (in most
+cases, the stress depends on velocity gradients, not displacement gradients,
+although there are exceptions such as electro-rheological fluids), there are a
+few solids that are nearly incompressible, for example rubber.
+Another case is that many plasticity models ultimately let the material
+become incompressible, although this is outside the scope of the present
+program.
+
+Incompressibility is characterized by Poisson's ratio
+\begin{gather*}
+  \nu = \frac{\lambda}{2(\lambda+\mu)},
+\end{gather*}
+where $\lambda,\mu$ are the Lam\'e constants of the material.
+Physical constraints indicate that $-1\le \nu\le \tfrac 12$ (the condition
+also follows from mathematical stability considerations). If $\nu$
+approaches $\tfrac 12$, then the material becomes incompressible; indeed,
+solving the relation above for $\lambda$ gives
+$\lambda = \frac{2\mu\nu}{1-2\nu}$, which blows up as $\nu\rightarrow\tfrac
+12$. In that case, pure displacement-based formulations are no longer
+appropriate for the solution of such problems, and stabilization techniques
+have to be employed for a stable and accurate solution. The book and paper
+cited above give indications as to how to do this, but there is also a large
+volume of literature on this subject; a good start to get an overview of the
+topic can be found in the references of the paper by
+H.-Y. Duan and Q. Lin on ``Mixed finite elements of least-squares type for
+elasticity'' (Computer Methods in Applied Mechanics and Engineering, vol. 194,
+pp. 1093--1112, 2005).
+
+
+\paragraph*{Refinement during timesteps.} In the present form, the program
+only refines the initial mesh a number of times, but then never again. For any
+kind of realistic simulation, one would want to extend this so that the mesh
+is refined and coarsened every few time steps instead. This is not hard to do,
+in fact, but has been left for future tutorial programs or as an exercise, if
+you wish. The main complication one has to overcome is that one has to
+transfer the data that is stored in the quadrature points of the cells of the
+old mesh to the new mesh, preferably by some sort of projection scheme. This
+is only slightly messy in the sequential case; in fact, the function
+\texttt{FETools::get\_projection\_from\_quadrature\_points\_matrix} will do
+the projection, and the \texttt{FiniteElement::get\_restriction\_matrix} and
+\texttt{FiniteElement::get\_prolongation\_matrix} functions will do the
+transfer between mother and child cells. However, it becomes complicated
+once we run the program in parallel, since then each process only stores this
+data for the cells it owned on the old mesh, and it may need to know the
+values of the quadrature point data on other cells if the corresponding cells
+on the new mesh are assigned to this process after subdividing the new mesh. A
+global communication of these data elements is therefore necessary, making the
+entire process a little more unpleasant.
+
+
+\paragraph*{Ensuring mesh regularity.} At present, the program makes no attempt
+to make sure that a cell, after moving its vertices at the end of the time
+step, still has a valid geometry (i.e. that its Jacobian determinant is
+positive and bounded away from zero everywhere). It is, in fact, not very hard
+to set boundary values and forcing terms in such a way that one gets distorted
+and inverted cells rather quickly. Certainly, in some cases of large
+deformation, this is unavoidable with a mesh of finite mesh size, but in some
+other cases this should be preventable by appropriate mesh refinement and/or a
+reduction of the time step size. The program does not do that, but a more
+sophisticated version definitely should employ some sort of heuristic defining
+what amount of deformation of cells is acceptable, and what isn't.
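+
+As a rough illustration of what such a heuristic might look like (the
+container \texttt{initial\_cell\_measure} and the threshold of 10 per cent
+are made up for this sketch, and we assume a \texttt{measure()}-like function
+that returns a cell's volume), one could compare cell volumes before and
+after \texttt{move\_mesh ()}:
+\begin{verbatim}
+// flag the mesh as degenerate if any cell has lost most of its volume;
+// 'initial_cell_measure' is a hypothetical map filled before time stepping
+bool mesh_is_degenerate = false;
+for (typename Triangulation<dim>::active_cell_iterator
+       cell = triangulation.begin_active();
+     cell != triangulation.end(); ++cell)
+  if (cell->measure() < 0.1 * initial_cell_measure[cell])
+    mesh_is_degenerate = true;
+\end{verbatim}
+One would then react by reducing the time step size, or by refining the mesh,
+before accepting a time step.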
+
+
+\subsection*{Compiling the program}
+
+Finally, just to remind everyone: the program runs in 3d (see the definition
+of the \texttt{elastic\_problem} variable in \texttt{main()}), unlike almost
+all of the other example programs. While the compiler doesn't care what
+dimension it compiles for, the linker has to know which library to link with.
+And as explained in other places, this requires slight changes to the Makefile
+compared to the other tutorial programs. In particular, everywhere where the
+2d versions of libraries are mentioned, one needs to change this to 3d,
+although this is already done in the distributed version of the Makefile.
+Conversely, if you want to run the program in 2d (after making the necessary
+changes to accommodate a 2d geometry), you have to change the Makefile
+back to allow for 2d.
+
+\end{document}
diff --git a/deal.II/examples/step-18/doc/results.html b/deal.II/examples/step-18/doc/results.html
new file mode 100644
index 0000000000..4fae5cc422
--- /dev/null
+++ b/deal.II/examples/step-18/doc/results.html
@@ -0,0 +1,380 @@
+
+

Results

+ +

+Running the program takes a good while if one doesn't change the flags
+in the Makefile: in debug mode (the default) and on only a single
+machine, it takes about 3h45min on my Athlon XP 2GHz. Fortunately, by
+setting debug-mode = off in the Makefile, this can be
+reduced significantly, to about 23 minutes, a much more reasonable time.
+

+ +

+If run, the program prints the following output, explaining what it is +doing during all that time: + +

+examples/step-18> time make run
+============================ Running step-18
+Timestep 1 at time 1
+  Cycle 0:
+    Number of active cells:       3712 (by partition: 3712)
+    Number of degrees of freedom: 17226 (by partition: 17226)
+    Assembling system... norm of rhs is 2.34224e+10
+    Solver converged in 117 iterations.
+    Updating quadrature point data...
+  Cycle 1:
+    Number of active cells:       12812 (by partition: 12812)
+    Number of degrees of freedom: 51726 (by partition: 51726)
+    Assembling system... norm of rhs is 2.34227e+10
+    Solver converged in 130 iterations.
+    Updating quadrature point data...
+    Moving mesh...
+
+Timestep 2 at time 2
+    Assembling system... norm of rhs is 2.30852e+10
+    Solver converged in 131 iterations.
+    Updating quadrature point data...
+    Moving mesh...
+
+Timestep 3 at time 3
+    Assembling system... norm of rhs is 2.27792e+10
+    Solver converged in 126 iterations.
+    Updating quadrature point data...
+    Moving mesh...
+
+Timestep 4 at time 4
+    Assembling system... norm of rhs is 2.25107e+10
+    Solver converged in 124 iterations.
+    Updating quadrature point data...
+    Moving mesh...
+
+Timestep 5 at time 5
+    Assembling system... norm of rhs is 2.22883e+10
+    Solver converged in 122 iterations.
+    Updating quadrature point data...
+    Moving mesh...
+
+Timestep 6 at time 6
+    Assembling system... norm of rhs is 2.21272e+10
+    Solver converged in 118 iterations.
+    Updating quadrature point data...
+    Moving mesh...
+
+Timestep 7 at time 7
+    Assembling system... norm of rhs is 2.20652e+10
+    Solver converged in 117 iterations.
+    Updating quadrature point data...
+    Moving mesh...
+
+Timestep 8 at time 8
+    Assembling system... norm of rhs is 2.22501e+10
+    Solver converged in 127 iterations.
+    Updating quadrature point data...
+    Moving mesh...
+
+Timestep 9 at time 9
+    Assembling system... norm of rhs is 2.32742e+10
+    Solver converged in 144 iterations.
+    Updating quadrature point data...
+    Moving mesh...
+
+Timestep 10 at time 10
+    Assembling system... norm of rhs is 2.55929e+10
+    Solver converged in 149 iterations.
+    Updating quadrature point data...
+    Moving mesh...
+
+
+In other words, it is computing on some 12,000 cells with about 52,000
+unknowns. Not a whole lot, but enough for a coupled three-dimensional
+problem to keep a computer busy for a while. At the end of the day,
+this is what we have for output:
+
+examples/step-18> ls -l *.d2
+-rw-r--r--  1 bangerth wheeler 8797414 May 25 09:10 solution-0001.0000.d2
+-rw-r--r--  1 bangerth wheeler 8788500 May 25 09:32 solution-0002.0000.d2
+-rw-r--r--  1 bangerth wheeler 8763718 May 25 09:55 solution-0003.0000.d2
+-rw-r--r--  1 bangerth wheeler 8738940 May 25 10:17 solution-0004.0000.d2
+-rw-r--r--  1 bangerth wheeler 8710104 May 25 10:39 solution-0005.0000.d2
+-rw-r--r--  1 bangerth wheeler 8685388 May 25 11:01 solution-0006.0000.d2
+-rw-r--r--  1 bangerth wheeler 8649088 May 25 11:23 solution-0007.0000.d2
+-rw-r--r--  1 bangerth wheeler 8585146 May 25 11:45 solution-0008.0000.d2
+-rw-r--r--  1 bangerth wheeler 8489764 May 25 12:07 solution-0009.0000.d2
+-rw-r--r--  1 bangerth wheeler 8405388 May 25 12:29 solution-0010.0000.d2
+
+
+

+ +

+Let us convert these files from deal.II intermediate format to gmv
+format (this assumes that you have already compiled the step-19 example
+program):
+

+examples/step-18> ../step-19/step-19
+
+Converter from deal.II intermediate format to other graphics formats.
+
+Usage: ./step-19 [-p parameter_file] list_of_input_files [-x output_format] output_file
+
+examples/step-18> ../step-19/step-19 solution-0001.0000.d2 -x gmv solution-0001.0000.gmv
+examples/step-18> ../step-19/step-19 solution-0002.0000.d2 -x gmv solution-0002.0000.gmv
+[...]
+
+
+Of course, since we have run the program only in sequential mode, there
+is just one intermediate file per time step that we have to take as
+input.
+

+ +

+If we visualize these files with GMV, we get to see the full picture +of the disaster our forced compression wreaks on the cylinder (click +on the images for a larger version; colors in the images encode the +norm of the stress in the material): +

+
+[Images: six snapshots of the deforming cylinder, colored by the norm of
+ the stress; top row: Time = 2, Time = 5, Time = 7; bottom row:
+ Time = 8, Time = 9, Time = 10.]
+

+As is clearly visible, as we keep compressing the cylinder, it starts
+to buckle and ultimately collapses. Towards the end of the simulation,
+the deflection pattern becomes nonsymmetric (the cylinder top slides
+to the right). The model clearly does not provide for this (all our
+forces and boundary deflections are symmetric), but the effect is
+probably physically correct anyway: in reality, small inhomogeneities
+in the body's material properties would lead it to buckle to one side
+to evade the forcing; in numerical simulations, small perturbations
+such as numerical round-off or an inexact solution of a linear system
+by an iterative solver could have the same effect. Another typical source of
+asymmetries in adaptive computations is that only a certain fraction of cells
+is refined in each step, which may lead to asymmetric meshes even if the
+original coarse mesh was symmetric.

+ + +

+Whether the computation is fully converged is a different matter. In order to +see whether it is, we ran the program again with one more global refinement at +the beginning and with the time step halved. This would have taken a very long +time on a single machine, so we used our cluster again and ran it on 16 +processors (8 dual-processor machines) in parallel. The beginning of the output +now looks like this: + +

+Timestep 1 at time 0.5
+  Cycle 0:
+    Number of active cells:       29696 (by partition: 1862+1890+1866+1850+1864+1850+1858+1842+1911+1851+1911+1804+1854+1816+1839+1828)
+    Number of degrees of freedom: 113100 (by partition: 7089+7218+6978+6972+7110+6840+7119+7023+7542+7203+7068+6741+6921+6759+7464+7053)
+    Assembling system... norm of rhs is 1.05874e+10
+    Solver converged in 289 iterations.
+    Updating quadrature point data...
+  Cycle 1:
+    Number of active cells:       102097 (by partition: 6346+6478+6442+6570+6370+6483+6413+6376+6403+6195+6195+6195+6494+6571+6371+6195)
+    Number of degrees of freedom: 358875 (by partition: 22257+22161+22554+22482+21759+23361+23040+21609+22347+20937+21801+21678+24126+25149+21321+22293)
+    Assembling system... norm of rhs is 3.46364e+10
+    Solver converged in 249 iterations.
+    Updating quadrature point data...
+    Moving mesh...
+
+Timestep 2 at time 1
+    Assembling system... norm of rhs is 3.42269e+10
+    Solver converged in 248 iterations.
+    Updating quadrature point data...
+    Moving mesh...
+
+Timestep 3 at time 1.5
+    Assembling system... norm of rhs is 3.38229e+10
+    Solver converged in 247 iterations.
+    Updating quadrature point data...
+    Moving mesh...
+
+Timestep 4 at time 2
+    Assembling system... norm of rhs is 3.34247e+10
+    Solver converged in 247 iterations.
+    Updating quadrature point data...
+    Moving mesh...
+
+[...]
+
+Timestep 20 at time 10
+    Assembling system... norm of rhs is 3.2449e+10
+    Solver converged in 493 iterations.
+    Updating quadrature point data...
+    Moving mesh...
+
+
+That's quite a good number of unknowns, given that we are in 3d. The output of
+this program is 16 files for each time step:
+
+examples/step-18> ls -l solution-0001.000*
+-rw-r--r--    1 bangerth mfw       4325219 Aug 11 09:44 solution-0001.0000-000.d2
+-rw-r--r--    1 bangerth mfw       4454460 Aug 11 09:44 solution-0001.0000-001.d2
+-rw-r--r--    1 bangerth mfw       4485242 Aug 11 09:43 solution-0001.0000-002.d2
+-rw-r--r--    1 bangerth mfw       4517364 Aug 11 09:43 solution-0001.0000-003.d2
+-rw-r--r--    1 bangerth mfw       4462829 Aug 11 09:43 solution-0001.0000-004.d2
+-rw-r--r--    1 bangerth mfw       4482487 Aug 11 09:43 solution-0001.0000-005.d2
+-rw-r--r--    1 bangerth mfw       4548619 Aug 11 09:43 solution-0001.0000-006.d2
+-rw-r--r--    1 bangerth mfw       4522421 Aug 11 09:43 solution-0001.0000-007.d2
+-rw-r--r--    1 bangerth mfw       4337529 Aug 11 09:43 solution-0001.0000-008.d2
+-rw-r--r--    1 bangerth mfw       4163047 Aug 11 09:43 solution-0001.0000-009.d2
+-rw-r--r--    1 bangerth mfw       4288247 Aug 11 09:43 solution-0001.0000-010.d2
+-rw-r--r--    1 bangerth mfw       4350410 Aug 11 09:43 solution-0001.0000-011.d2
+-rw-r--r--    1 bangerth mfw       4458427 Aug 11 09:43 solution-0001.0000-012.d2
+-rw-r--r--    1 bangerth mfw       4466037 Aug 11 09:43 solution-0001.0000-013.d2
+-rw-r--r--    1 bangerth mfw       4505679 Aug 11 09:44 solution-0001.0000-014.d2
+-rw-r--r--    1 bangerth mfw       4340488 Aug 11 09:44 solution-0001.0000-015.d2
+
+
+We merge and convert these 16 intermediate files into a single gmv file as +follows: + +
+examples/step-18> time ../step-19/step-19 solution-0001.0000-* -x gmv -o solution-0001.0000.gmv
+
+real    0m45.929s
+user    0m41.290s
+sys     0m0.990s
+examples/step-18> ls -l solution-0001.0000.gmv
+-rw-r--r--    1 bangerth mfw      68925360 Aug 11 17:04 solution-0001.0000.gmv
+
+
+ +

+Doing so for all time steps, we obtain gmv files that we can visualize (albeit
+with some difficulty: due to their size, gmv isn't exactly fast when plotting
+them). Here are, first, the mesh on which we compute and its partitioning
+among the 16 processors:

+
+[Images: the computational mesh and its partitioning among the 16
+ processors.]
+

+Finally, here is the same output as we have shown before for the much smaller +sequential case: +

+
+[Images: six snapshots of the refined computation, as before; top row:
+ Time = 2, Time = 5, Time = 7; bottom row: Time = 8, Time = 9,
+ Time = 10.]
+

+If one compares this with the previous run, the results are qualitatively
+similar, but quantitatively definitely different. The previous computation was
+therefore certainly not converged, though we can't say anything for sure about
+the present one. One would need an even finer computation to find out. However,
+the point may be moot: looking at the last picture in detail (click on it to
+see a larger version), it is pretty obvious that not only is the linear small
+deformation model we chose completely inadequate, but for a realistic
+simulation we would also need to make sure that the body does not intersect
+itself during deformation. Without such a formulation we cannot expect anything
+that makes sense, even if it produces nice pictures!

diff --git a/deal.II/examples/step-19/doc/intro.dox b/deal.II/examples/step-19/doc/intro.dox new file mode 100644 index 0000000000..9770ba1df4 --- /dev/null +++ b/deal.II/examples/step-19/doc/intro.dox @@ -0,0 +1,121 @@ + +

Introduction

+
+
+In @ref step_18 "step-18", we saw a need to write
+output files in an intermediate format: in a parallel program, it doesn't scale
+well if all processors participate in computing a result, and then only a
+single processor generates the graphical output. Rather, each of them should
+generate output for its share of the domain, and later on merge all these
+output files into a single one.
+
+
+
+Thus was the beginning of step-19: it is the program that reads a number of
+files written in intermediate format, and merges and converts them into the
+final format that one would like to use for visualization. It can also be used
+for the following purpose: if you are unsure at the time of a computation what
+graphics program you would like to use, write your results in intermediate
+format; it can later be converted, using the present program, to any other
+format you may want.
+
+
+
+While this in itself was not interesting enough to make a tutorial program, we
+have used the opportunity to introduce one class that has proven to be
+extremely useful in real application programs, but had not been
+covered by any of the previous tutorial programs: the
+ParameterHandler class. This class is used in applications that
+want to have some of their behavior determined at run time, using input
+files. For example, one may want to specify the geometry, or specifics of the
+equation to be solved, at run time. Other typical parameters are the number of
+nonlinear iterations, the name of output files, or the names of input files
+specifying material properties or boundary conditions.
+
+
+
+Working with such parameter files is not rocket science. However, it is rather
+tedious to write the parsers for such files, in particular if they should be
+extensible, be able to group parameters into subsections, and perform some
+error checks, such as that parameters can have only certain kinds of values
+(for example, an input file should only be allowed to contain integer values
+for parameters that denote a number of iterations), and similar
+requirements. The
+ParameterHandler class allows for all this: an application program
+will declare the parameters it expects (or call a function in the library that
+declares a number of parameters for you), the ParameterHandler
+class then reads an input file with all these parameters, and the application
+program can then get their values back from this class.
+
+
+
+In order to perform these three steps, the ParameterHandler class
+offers three sets of functions: first, the
+ParameterHandler::declare_entry function is used to declare the
+existence of a named parameter in the present section of the input file (one
+can enter and leave subsections in a parameter file just like you would
+navigate through a directory tree in a file system, with the functions
+ParameterHandler::enter_subsection and
+ParameterHandler::leave_subsection taking on the roles of the
+commands cd dir and cd ..; the only difference being
+that if you enter a subsection that has never been visited before, it is
+created: it isn't necessary to "create" subsections explicitly). When declaring
+a parameter, one has to specify its name and default value, in case the
+parameter isn't later listed explicitly in the parameter file. In addition to
+that, there are optional arguments indicating a pattern that a parameter has to
+satisfy, such as being an integer (see the discussion above), and a help text
+that might later give an explanation of what the parameter stands for.
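+
+
+As a small illustration of this first step (the entry names, default
+values, and patterns in this snippet are made up; they are not those of
+the present program), declaring one top-level parameter and one
+parameter inside a subsection could look like this:
+@code
+ParameterHandler prm;
+
+// a named entry: default value, admissible values, and a help text
+prm.declare_entry ("Maximal number of iterations", "10",
+                   Patterns::Integer (1, 1000),
+                   "A bound on the number of iterations");
+
+// entering a subsection that has never been visited before creates it
+prm.enter_subsection ("Application");
+prm.declare_entry ("Generate output", "true",
+                   Patterns::Bool (),
+                   "Whether any output should be generated");
+prm.leave_subsection ();
+@endcode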
+
+
+
+Once all parameters have been declared, parameters can be read, using the
+ParameterHandler::read_input family of functions. There are
+versions of this function that can read from a file stream, that take a file
+name, or that simply take a string and parse it. When reading parameters, the
+class makes sure that the input only lists parameters that have been declared
+before, and that the values of parameters satisfy the pattern that has been
+given to describe the kind of values a parameter can have. Input that uses
+undeclared parameters or values for parameters that do not conform to the
+pattern is rejected by raising an exception.
+
+
+
+A typical input file will look like this:
+@code
+set Output format = dx
+set Output file = my_output_file.dx
+
+set Maximal number of iterations = 13
+
+subsection Application
+  set Color of output = blue
+  set Generate output = false
+end
+@endcode
+Note that subsections can be nested.
+
+
+
+Finally, the application program can get the values of declared parameters back
+by traversing the subsections of the parameter tree and using the
+ParameterHandler::get and related functions.
+ParameterHandler::get simply returns the value of a parameter as a
+string, whereas ParameterHandler::get_integer,
+ParameterHandler::get_double, and
+ParameterHandler::get_bool directly convert the value to the
+indicated type.
+
+
+
+Using the ParameterHandler class therefore provides a pretty
+flexible mechanism to handle all sorts of moderately complex input files without
+much effort on the side of the application programmer. We will use this to
+provide all sorts of options to the step-19 program in order to convert from
+intermediate file format to any other graphical file format.
+
+
+
+The rest of the story is probably best told by looking at the source of step-19
+itself. Let us, however, end this introduction by pointing the reader to the
+extensive class documentation of the ParameterHandler class for
+more information on specific details of that class.
+
diff --git a/deal.II/examples/step-19/doc/results.dox b/deal.II/examples/step-19/doc/results.dox
new file mode 100644
index 0000000000..b13a41c982
--- /dev/null
+++ b/deal.II/examples/step-19/doc/results.dox
@@ -0,0 +1,260 @@
+
+

Results

+
+
+With all of the above, here is what we get if we just run the program
+without any parameters at all:
+@code
+examples/step-19> ./step-19
+
+Converter from deal.II intermediate format to other graphics formats.
+
+Usage:
+  ./step-19 [-p parameter_file] list_of_input_files
+            [-x output_format] [-o output_file]
+
+Parameter sequences in brackets can be omitted if a parameter file is
+specified on the command line and if it provides values for these
+missing parameters.
+
+The parameter file has the following format and allows the following
+values (you can cut and paste this and use it for your own parameter
+file):
+
+# Listing of Parameters
+# ---------------------
+# A dummy parameter asking for an integer
+set Dummy iterations = 42
+
+# The name of the output file to be generated
+set Output file      =
+
+# A name for the output format to be used
+set Output format    = gnuplot
+
+
+subsection DX output parameters
+  # A boolean field indicating whether neighborship information between cells
+  # is to be written to the OpenDX output file
+  set Write neighbors = true
+end
+
+
+subsection Dummy subsection
+  # A dummy parameter that shows how one can define a parameter that can be
+  # assigned values from a finite set of values
+  set Dummy color of output = red
+
+  # A dummy parameter that can be fed with either 'true' or 'false'
+  set Dummy generate output = true
+end
+
+
+subsection Eps output parameters
+  # Angle of the viewing position against the vertical axis
+  set Azimut angle                        = 60
+
+  # Name of a color function used to colorize mesh lines and/or cell
+  # interiors
+  set Color function                      = default
+
+  # Whether the interior of cells shall be shaded
+  set Color shading of interior of cells  = true
+
+  # Whether the mesh lines, or only the surface should be drawn
+  set Draw mesh lines                     = true
+
+  # Whether only the mesh lines, or also the interior of cells should be
+  # plotted. If this flag is false, then one can see through the mesh
+  set Fill interior of cells              = true
+
+  # Number of the input vector that is to be used to generate color
+  # information
+  set Index of vector for color           = 0
+
+  # Number of the input vector that is to be used to generate height
+  # information
+  set Index of vector for height          = 0
+
+  # The width in which the postscript renderer is to plot lines
+  set Line widths in eps units            = 0.5
+
+  # Whether width or height should be scaled to match the given size
+  set Scale to width or height            = width
+
+  # Scaling for the z-direction relative to the scaling used in x- and
+  # y-directions
+  set Scaling for z-axis                  = 1
+
+  # The size (width or height) to which the eps output file is to be scaled
+  set Size (width or height) in eps units = 300
+
+  # Angle of the viewing direction against the y-axis
+  set Turn angle                          = 30
+end
+
+
+subsection Povray output parameters
+  # Whether camera and lighting information should be put into an external
+  # file "data.inc" or into the POVRAY input file
+  set Include external file = true
+
+  # Whether POVRAY should use bicubic patches
+  set Use bicubic patches   = false
+
+  # A flag indicating whether POVRAY should use smoothed triangles instead of
+  # the usual ones
+  set Use smooth triangles  = false
+end
+
+
+subsection UCD output parameters
+  # A flag indicating whether a comment should be written to the beginning of
+  # the output file indicating date and time of creation as well as the
+  # creating program
+  set Write preamble = true
+end
+@endcode
+
+That's a lot of output for such a little program, but then that's also a lot of
+output formats that deal.II supports. You will realize that the output consists
+first of the entries in the top-level section (sorted alphabetically), followed
+by a sorted list of subsections. Most of the parameters have been declared by
+the DataOutBase class, but there are also the dummy entries and
+sections we have added in the declare_parameters() function, along
+with their default values and documentation.
+
+
+
+Let us try to run this program on a set of input files generated by a modified
+@ref step_18 "step-18" run on 32 nodes of a
+cluster. The computation was rather big, with more
+than 350,000 cells and some 1.2M unknowns. That makes for 32 rather big
+intermediate files that we will try to merge using the present program.
Here is +the list of files, totaling some 245MB of data: +@code +examples/step-19> ls -l *d2 +-rw-r--r-- 1 bangerth wheeler 7982085 Aug 12 10:11 solution-0005.0000-000.d2 +-rw-r--r-- 1 bangerth wheeler 7888316 Aug 12 10:13 solution-0005.0000-001.d2 +-rw-r--r-- 1 bangerth wheeler 7715984 Aug 12 10:09 solution-0005.0000-002.d2 +-rw-r--r-- 1 bangerth wheeler 7887648 Aug 12 10:06 solution-0005.0000-003.d2 +-rw-r--r-- 1 bangerth wheeler 7833291 Aug 12 10:09 solution-0005.0000-004.d2 +-rw-r--r-- 1 bangerth wheeler 7536394 Aug 12 10:07 solution-0005.0000-005.d2 +-rw-r--r-- 1 bangerth wheeler 7817551 Aug 12 10:06 solution-0005.0000-006.d2 +-rw-r--r-- 1 bangerth wheeler 7996660 Aug 12 10:07 solution-0005.0000-007.d2 +-rw-r--r-- 1 bangerth wheeler 7761545 Aug 12 10:06 solution-0005.0000-008.d2 +-rw-r--r-- 1 bangerth wheeler 7754027 Aug 12 10:07 solution-0005.0000-009.d2 +-rw-r--r-- 1 bangerth wheeler 7607545 Aug 12 10:11 solution-0005.0000-010.d2 +-rw-r--r-- 1 bangerth wheeler 7728039 Aug 12 10:07 solution-0005.0000-011.d2 +-rw-r--r-- 1 bangerth wheeler 7577293 Aug 12 10:14 solution-0005.0000-012.d2 +-rw-r--r-- 1 bangerth wheeler 7735626 Aug 12 10:10 solution-0005.0000-013.d2 +-rw-r--r-- 1 bangerth wheeler 7629075 Aug 12 10:10 solution-0005.0000-014.d2 +-rw-r--r-- 1 bangerth wheeler 7314459 Aug 12 10:09 solution-0005.0000-015.d2 +-rw-r--r-- 1 bangerth wheeler 7414738 Aug 12 10:10 solution-0005.0000-016.d2 +-rw-r--r-- 1 bangerth wheeler 7330518 Aug 12 10:05 solution-0005.0000-017.d2 +-rw-r--r-- 1 bangerth wheeler 7418213 Aug 12 10:11 solution-0005.0000-018.d2 +-rw-r--r-- 1 bangerth wheeler 7508715 Aug 12 10:08 solution-0005.0000-019.d2 +-rw-r--r-- 1 bangerth wheeler 7747143 Aug 12 10:06 solution-0005.0000-020.d2 +-rw-r--r-- 1 bangerth wheeler 7563548 Aug 12 10:05 solution-0005.0000-021.d2 +-rw-r--r-- 1 bangerth wheeler 7846767 Aug 12 10:12 solution-0005.0000-022.d2 +-rw-r--r-- 1 bangerth wheeler 7479576 Aug 12 10:12 solution-0005.0000-023.d2 +-rw-r--r-- 1 bangerth wheeler 7925060 Aug 12 10:12 solution-0005.0000-024.d2 +-rw-r--r-- 1 bangerth wheeler 7842034 Aug 12 10:13 solution-0005.0000-025.d2 +-rw-r--r-- 1 bangerth wheeler 7585448 Aug 12 10:13 solution-0005.0000-026.d2 +-rw-r--r-- 1 bangerth wheeler 7609698 Aug 12 10:10 solution-0005.0000-027.d2 +-rw-r--r-- 1 bangerth wheeler 7576053 Aug 12 10:08 solution-0005.0000-028.d2 +-rw-r--r-- 1 bangerth wheeler 7682418 Aug 12 10:08 solution-0005.0000-029.d2 +-rw-r--r-- 1 bangerth wheeler 7544141 Aug 12 10:05 solution-0005.0000-030.d2 +-rw-r--r-- 1 bangerth wheeler 7348899 Aug 12 10:04 solution-0005.0000-031.d2 +@endcode + +So let's see what happens if we attempt to merge all these files into a single +one: +@code +examples/step-19> time ./step-19 solution-0005.0000-*.d2 -x gmv -o solution-0005.gmv +real 2m08.35s +user 1m26.61s +system 0m05.74s + +examples/step-19> ls -l solution-0005.gmv +-rw-r--r-- 1 bangerth wheeler 240680494 Sep 9 11:53 solution-0005.gmv +@endcode +So in roughly two minutes we have merged 240MB of data. Counting reading and +writing, that averages a throughput of 3.8MB per second, not so bad. + + + +If visualized, the output looks very much like that shown for +@ref step_18 "step-18". But that's not quite as +important for the moment, rather we are interested in showing how to use the +parameter file. To this end, remember that if no parameter file is given, or if +it is empty, all the default values listed above are used. 
+However, whatever we specify in the parameter file is used, unless overridden
+again by parameters found later on the command line.
+
+
+
+For example, let us use a simple parameter file named
+solution-0005.prm that contains only one line:
+@code
+set Output format = gnuplot
+@endcode
+If we run step-19 with it again, we obtain this (for simplicity, and because we
+don't want to visualize 240MB of data anyway, we only convert a single
+intermediate file, the twelfth, to gnuplot format):
+@code
+examples/step-19> ./step-19 solution-0005.0000-012.d2 -p solution-0005.prm -o solution-0005.gnuplot
+
+examples/step-19> ls -l solution-0005.gnuplot
+-rw-r--r-- 1 bangerth wheeler 20281669 Sep 9 12:15 solution-0005.gnuplot
+@endcode
+
+We can then visualize this one file with gnuplot, obtaining something like
+this:
+@image html step-19.solution-0005.png
+
+That's not particularly exciting, but the file we're looking at contains only
+one 32nd of the entire domain anyway, so we can't expect much.
+
+In more complicated situations, we would use parameter files that set more of
+the values to non-default values. A file for which this is the case could look
+like this, generating output for the OpenDX visualization program:
+@code
+set Output format = dx
+set Output file = my_output_file.dx
+
+set Dummy iterations = -13
+
+subsection Dummy subsection
+  set Dummy color of output = blue
+  set Dummy generate output = false
+end
+@endcode
+If one wanted to, one could write comments into the file using the
+same format as used above in the help text, i.e. everything on a line
+following a hash mark (#) is considered a comment.
+
+
+
+If one runs step-19 with this input file, this is what is going to happen:
+@code
+examples/step-19> ./step-19 solution-0005.0000-012.d2 -p solution-0005.prm
+Line 4:
+    The entry value
+        -13
+    for the entry named
+        Dummy iterations
+    does not match the given pattern
+        [Integer range 1...1000 (inclusive)]
+@endcode
+Ah, right: valid values for the iteration parameter needed to be within the
+range [1...1000]. We would fix that, then go back to run the program with
+correct parameters.
+
+
+
+This program should have given some insight into the input parameter file
+handling that deal.II provides. The ParameterHandler class has a
+few more goodies beyond what has been shown in this program; for those who want
+to use this class, it would be useful to read the documentation of that class
+to get the full picture.
+
diff --git a/deal.II/examples/step-2/doc/intro.dox b/deal.II/examples/step-2/doc/intro.dox
new file mode 100644
index 0000000000..b9dd70b6e0
--- /dev/null
+++ b/deal.II/examples/step-2/doc/intro.dox
@@ -0,0 +1,17 @@
+
+

Introduction

+
+
+After we have created a grid in the previous example, we now show how
+to define degrees of freedom on this mesh. For this example, we
+will use the lowest order (Q1) finite elements, for which the degrees
+of freedom are associated with the vertices of the mesh. Later
+examples will demonstrate higher order elements where degrees of freedom are
+not necessarily associated with vertices any more, but can be associated
+with edges, faces, or cells.
+
+Defining degrees of freedom ("DoF"s for short) on a mesh is a rather
+simple task, since the library does all the work for you. However, for
+some algorithms, especially for some linear solvers, it is
+advantageous to have the degrees of freedom numbered in a certain
+order, and we will use the algorithm of Cuthill and McKee to do
+so. The results are written to a file and visualized using GNUPLOT.
diff --git a/deal.II/examples/step-2/doc/results.dox b/deal.II/examples/step-2/doc/results.dox
new file mode 100644
index 0000000000..300f50bb33
--- /dev/null
+++ b/deal.II/examples/step-2/doc/results.dox
@@ -0,0 +1,58 @@
+
+

Results

+
+After it has been run, the program has produced two sparsity
+patterns. We can visualize them using GNUPLOT:
+@code
+examples/step-2> gnuplot
+
+        G N U P L O T
+        Version 3.7 patchlevel 3
+        last modified Thu Dec 12 13:00:00 GMT 2002
+        System: Linux 2.6.11.4-21.10-default
+
+        Copyright(C) 1986 - 1993, 1998 - 2002
+        Thomas Williams, Colin Kelley and many others
+
+        Type `help` to access the on-line reference manual
+        The gnuplot FAQ is available from
+        http://www.gnuplot.info/gnuplot-faq.html
+
+        Send comments and requests for help to
+        Send bugs, suggestions and mods to
+
+
+Terminal type set to 'x11'
+gnuplot> set data style points
+gnuplot> plot "sparsity_pattern.1"
+@endcode
+
+The results then look like this (every cross denotes an entry that
+might be nonzero; whether the entry actually is zero or not of course
+depends on the equation under consideration, but the
+indicated positions in the matrix tell us which shape functions can
+and which can't couple, if the equation is a local, i.e. differential,
+one):
+
+ @image html step-2.sparsity-1.png + + @image html step-2.sparsity-2.png +
+ +The different regions in the left picture represent the degrees of +freedom on the different refinement levels of the triangulation. As +can be seen in the right picture, the sparsity pattern is much better +clustered around the main diagonal of the matrix after +renumbering. Although this might not be apparent, the number of +nonzero entries is the same in both pictures, of course. + +A common observation is that the more refined the grid is, the better +the clustering around the diagonal will get. + + diff --git a/deal.II/examples/step-20/doc/intro.dox b/deal.II/examples/step-20/doc/intro.dox new file mode 100644 index 0000000000..aa7b812e97 --- /dev/null +++ b/deal.II/examples/step-20/doc/intro.dox @@ -0,0 +1,715 @@ + +

Introduction

+
+This program is devoted to two aspects: the use of mixed finite elements -- in
+particular Raviart-Thomas elements -- and the use of block matrices to define
+solvers, preconditioners, and nested versions of those that use the
+substructure of the system matrix. The equation we are going to solve is again
+the Laplace equation, though with a matrix-valued coefficient:
+@f{eqnarray*}
+  -\nabla \cdot K({\mathbf x}) \nabla p &=& f \qquad {\textrm{in}\ } \Omega, \\
+  p &=& g \qquad {\textrm{on}\ }\partial\Omega.
+@f}
+$K({\mathbf x})$ is assumed to be uniformly positive definite, i.e. there is
+$\alpha>0$ such that the eigenvalues $\lambda_i({\mathbf x})$ of
+$K({\mathbf x})$ satisfy
+$\lambda_i({\mathbf x})\ge \alpha$. The use of the symbol $p$ instead of the usual
+$u$ for the solution variable will become clear in the next section.
+
+After discussing the equation and the formulation we are going to use to solve
+it, this introduction will cover the use of block matrices and vectors, the
+definition of solvers and preconditioners, and finally the actual test case we
+are going to solve.
+

Formulation, weak form, and discrete problem

+
+In the form above, the Laplace equation is considered a good model equation
+for fluid flow in porous media. In particular, if flow is so slow that all
+dynamic effects such as the acceleration terms in the Navier-Stokes equation
+become irrelevant, and if the flow pattern is stationary, then the Laplace
+equation models the pressure that drives the flow reasonably well. Because the
+solution variable is a pressure, we here use the name $p$ instead of the
+name $u$ more commonly used for the solution of partial differential equations.
+
+Typical applications of this view of the Laplace equation are then modeling
+groundwater flow, or the flow of hydrocarbons in oil reservoirs. In these
+applications, $K$ is then the permeability tensor, i.e. a measure for how much
+resistance the soil or rock matrix exerts on the fluid flow. In the
+applications just named, a desirable feature is that the numerical scheme is
+locally conservative, i.e. that whatever flows into a cell also flows out of
+it (or the difference is equal to the integral of the source terms over each
+cell, if the sources are nonzero). However, as it turns out, the usual
+discretizations of the Laplace equation do not satisfy this property. On the
+other hand, one can achieve this by choosing a different formulation.
+
+To this end, one first introduces a second variable, called the flux,
+${\mathbf u}=-K\nabla p$. By its definition, the flux is a vector in the
+negative
+direction of the pressure gradient, multiplied by the permeability tensor. If
+the permeability tensor is proportional to the unit matrix, this equation is
+easy to understand and intuitive: the higher the permeability, the higher the
+flux; and the flux is proportional to the gradient of the pressure, going from
+areas of high pressure to areas of low pressure.
+
+With this second variable, one then finds an alternative version of the
+Laplace equation, called the mixed formulation:
+@f{eqnarray*}
+  K^{-1} {\mathbf u} + \nabla p &=& 0 \qquad {\textrm{in}\ } \Omega, \\
+  -{\textrm{div}}\ {\mathbf u} &=& -f \qquad {\textrm{in}\ }\Omega, \\
+  p &=& g \qquad {\textrm{on}\ } \partial\Omega.
+@f}
+
+The weak formulation of this problem is found by multiplying the two
+equations with test functions and integrating some terms by parts:
+@f{eqnarray*}
+  A(\{{\mathbf u},p\},\{{\mathbf v},q\}) = F(\{{\mathbf v},q\}),
+@f}
+where
+@f{eqnarray*}
+  A(\{{\mathbf u},p\},\{{\mathbf v},q\})
+  &=&
+  ({\mathbf v}, K^{-1}{\mathbf u})_\Omega - ({\textrm{div}}\ {\mathbf v}, p)_\Omega
+  - (q,{\textrm{div}}\ {\mathbf u})_\Omega
+  \\
+  F(\{{\mathbf v},q\}) &=& -(g,{\mathbf v}\cdot {\mathbf n})_{\partial\Omega} - (f,q)_\Omega.
+@f}
+Here, ${\mathbf n}$ is the outward normal vector at the boundary. Note how in this
+formulation, Dirichlet boundary values of the original problem are
+incorporated in the weak form.
+
+To be well-posed, we have to look for solutions and test functions in the
+space $H({\textrm{div}})=\{{\mathbf w}\in L^2(\Omega)^d:\ {\textrm{div}}\ {\mathbf w}\in L^2\}$
+for ${\mathbf u}$, ${\mathbf v}$, and $L^2$ for $p,q$. It is a well-known fact stated in
+almost every book on finite element theory that if one chooses discrete finite
+element spaces for the approximation of ${\mathbf u},p$ inappropriately, then the
+resulting discrete saddle-point problem is unstable and the discrete solution
+will not converge to the exact solution.
+ +To overcome this, a number of different finite element pairs for ${\mathbf u},p$ +have been developed that lead to a stable discrete problem. One such pair is +to use the Raviart-Thomas spaces $RT(k)$ for the velocity ${\mathbf u}$ and +discontinuous elements of class $DQ(k)$ for the pressure $p$. For details +about these spaces, we refer in particular to the book on mixed finite element +methods by Brezzi and Fortin, but many other books on the theory of finite +elements, for example the classic book by Brenner and Scott, also state the +relevant results. + + +
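+
+
+In deal.II, such a pairing is expressed by combining the two element classes
+into one vector-valued element. As a sketch (the variable name is ours, and
+degree corresponds to the $k$ above), this looks as follows:
+@code
+// RT(k) for the dim velocity components, DQ(k) for the pressure
+FESystem<dim> fe (FE_RaviartThomas<dim>(degree), 1,
+                  FE_DGQ<dim>(degree),           1);
+@endcode
+This is also, in essence, the element the present program uses.
+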

Assembling the linear system

+
+The deal.II library (of course) implements Raviart-Thomas elements $RT(k)$ of
+arbitrary order $k$, as well as discontinuous elements $DQ(k)$. If we forget
+about their particular properties for a second, we then have to solve a
+discrete problem
+@f{eqnarray*}
+  A(x_h,w_h) = F(w_h),
+@f}
+with the bilinear form and right hand side as stated above, and $x_h=\{{\mathbf u}_h,p_h\}$, $w_h=\{{\mathbf v}_h,q_h\}$. Both $x_h$ and $w_h$ are from the space
+$X_h=RT(k)\times DQ(k)$, where $RT(k)$ is itself a space of $dim$-dimensional
+functions to accommodate the fact that the flow velocity is vector-valued.
+The necessary question then is: how do we do this in a program?
+
+Vector-valued elements have already been discussed in previous tutorial
+programs, the first time and in detail in @ref step_8 "step-8". The main difference there
+was that the vector-valued space $V_h$ is uniform in all its components: the
+$dim$ components of the displacement vector are all equal and from the same
+function space. What we could therefore do was to build $V_h$ as the outer
+product of $dim$ copies of the usual $Q(1)$ finite element space, and by this
+make sure that all our shape functions have only a single non-zero vector
+component. Instead of dealing with vector-valued shape functions, all we did
+in @ref step_8 "step-8" was therefore to look at the (scalar) only non-zero component and
+use the fe.system_to_component_index(i).first call to figure out
+which component this actually is.
+
+This doesn't work with Raviart-Thomas elements: following from their
+construction to satisfy certain regularity properties of the space
+$H({\textrm{div}})$, the shape functions of $RT(k)$ are usually nonzero in all
+their vector components at once. For this reason, were
+fe.system_to_component_index(i).first applied to determine the only
+nonzero component of shape function $i$, an exception would be generated. What
+we really need to do is to get at all vector components of a shape
+function. In deal.II diction, we call such finite elements
+non-primitive, whereas finite elements that are either scalar or for
+which every vector-valued shape function is nonzero only in a single vector
+component are called primitive.
+
+So what do we have to do for non-primitive elements? To figure this out, let
+us go back in the tutorial programs, almost to the very beginnings. There, we
+learned that we use the FEValues class to determine the values and
+gradients of shape functions at quadrature points. For example, we would call
+fe_values.shape_value(i,q_point) to obtain the value of the
+ith shape function on the quadrature point with number
+q_point. Later, in @ref step_8 "step-8" and other tutorial programs, we learned
+that this function call also works for vector-valued shape functions (of
+primitive finite elements), and that it returns the value of the only
+non-zero component of shape function i at quadrature point
+q_point.
+
+For non-primitive shape functions, this is clearly not going to work: there is
+no single non-zero vector component of shape function i, and the call
+to fe_values.shape_value(i,q_point) would consequently not make
+much sense.
+However, deal.II offers a second function call,
+fe_values.shape_value_component(i,q_point,comp), that returns the
+value of the compth vector component of shape function i at
+quadrature point q_point, where comp is an index between
+zero and the number of vector components of the present finite element; for
+example, the element we will use to describe velocities and pressures is going
+to have $dim+1$ components. It is worth noting that this function call can
+also be used for primitive shape functions: it will simply return zero for all
+components except one; for non-primitive shape functions, it will in general
+return a non-zero value for more than just one component.
+
+We could now attempt to rewrite the bilinear form above in terms of vector
+components. For example, in 2d, the first term could be rewritten like this
+(note that $u_0=x_0, u_1=x_1, p=x_2$):
+@f{eqnarray*}
+  ({\mathbf u}_h^i, K^{-1}{\mathbf u}_h^j)
+  =
+  &\left((x_h^i)_0, K^{-1}_{00} (x_h^j)_0\right) +
+  \left((x_h^i)_0, K^{-1}_{01} (x_h^j)_1\right)
+  \\
+  &+ \left((x_h^i)_1, K^{-1}_{10} (x_h^j)_0\right) +
+  \left((x_h^i)_1, K^{-1}_{11} (x_h^j)_1\right).
+@f}
+If we implemented this, we would get code like this:
+@code
+for (unsigned int q=0; q<n_q_points; ++q)
+  for (unsigned int i=0; i<dofs_per_cell; ++i)
+    for (unsigned int j=0; j<dofs_per_cell; ++j)
+      local_matrix(i,j)
+        += (k_inverse_values[q][0][0] *
+            fe_values.shape_value_component(i,q,0) *
+            fe_values.shape_value_component(j,q,0)
+            +
+            k_inverse_values[q][0][1] *
+            fe_values.shape_value_component(i,q,0) *
+            fe_values.shape_value_component(j,q,1)
+            +
+            k_inverse_values[q][1][0] *
+            fe_values.shape_value_component(i,q,1) *
+            fe_values.shape_value_component(j,q,0)
+            +
+            k_inverse_values[q][1][1] *
+            fe_values.shape_value_component(i,q,1) *
+            fe_values.shape_value_component(j,q,1))
+           *
+           fe_values.JxW(q);
+@endcode
+This is, at best, tedious, error prone, and not dimension independent. Let us
+therefore introduce small functions that extract the velocity and pressure
+components of a shape function, so that the assembly can be written in a
+dimension independent way. First, a function that extracts the velocity
+components:
+@code
+template <int dim>
+Tensor<1,dim>
+extract_u (const FEValuesBase<dim> &fe_values,
+           const unsigned int i,
+           const unsigned int q)
+{
+  Tensor<1,dim> tmp;
+
+  for (unsigned int d=0; d<dim; ++d)
+    tmp[d] = fe_values.shape_value_component (i,q,d);
+
+  return tmp;
+}
+@endcode
+What this function does is, given a fe_values object, to extract
+the values of the first $dim$ components of shape function i at
+quadrature point q, that is the velocity components of that shape
+function. Put differently, if we write shape functions $x_h^i$ as the tuple
+$\{{\mathbf u}_h^i,p_h^i\}$, then the function returns the velocity part of this
+tuple. Note that the velocity is of course a dim-dimensional tensor, and
+that the function returns a corresponding object.
+
+Likewise, we have a function that extracts the pressure component of a shape
+function:
+@code
+template <int dim>
+double extract_p (const FEValuesBase<dim> &fe_values,
+                  const unsigned int i,
+                  const unsigned int q)
+{
+  return fe_values.shape_value_component (i,q,dim);
+}
+@endcode
+Finally, the bilinear form contains terms involving the gradients of the
+velocity component of shape functions. To be more precise, we are not really
+interested in the full gradient, but only the divergence of the velocity
+components, i.e. ${\textrm{div}}\ {\mathbf u}_h^i = \sum_{d=0}^{dim-1}
+\frac{\partial}{\partial x_d} ({\mathbf u}_h^i)_d$. Here's a function that returns
+this quantity:
+@code
+template <int dim>
+double
+extract_div_u (const FEValuesBase<dim> &fe_values,
+               const unsigned int i,
+               const unsigned int q)
+{
+  double divergence = 0;
+  for (unsigned int d=0; d<dim; ++d)
+    divergence += fe_values.shape_grad_component (i,q,d)[d];
+
+  return divergence;
+}
+@endcode
+With these three functions, assembling the local matrix and right hand side
+contributions then takes the following form:
+@code
+for (unsigned int q=0; q<n_q_points; ++q)
+  for (unsigned int i=0; i<dofs_per_cell; ++i)
+    {
+      const Tensor<1,dim> phi_i_u = extract_u (fe_values, i, q);
+      const double div_phi_i_u = extract_div_u (fe_values, i, q);
+      const double phi_i_p = extract_p (fe_values, i, q);
+
+      for (unsigned int j=0; j<dofs_per_cell; ++j)
+        {
+          const Tensor<1,dim> phi_j_u = extract_u (fe_values, j, q);
+          const double div_phi_j_u = extract_div_u (fe_values, j, q);
+          const double phi_j_p = extract_p (fe_values, j, q);
+
+          local_matrix(i,j) += (phi_i_u * k_inverse_values[q] * phi_j_u
+                                - div_phi_i_u * phi_j_p
+                                - phi_i_p * div_phi_j_u)
+                               * fe_values.JxW(q);
+        }
+
+      local_rhs(i) += -(phi_i_p *
+                        rhs_values[q] *
+                        fe_values.JxW(q));
+    }
+@endcode
+This very closely resembles the form in which we originally wrote down the
+bilinear form and right hand side.
+
+There is one final term that we have to take care of: the right hand side
+contained the term $(g,{\mathbf v}\cdot {\mathbf n})_{\partial\Omega}$, constituting the
+weak enforcement of pressure boundary conditions. We have already seen in
+@ref step_7 "step-7" how to deal with face integrals: essentially exactly the same as with
+domain integrals, except that we have to use the FEFaceValues class
+instead of FEValues. To compute the boundary term we then simply have
+to loop over all boundary faces and integrate there. If you look closely at
+the definitions of the extract_* functions above, you will realize
+that it isn't even necessary to write new functions that extract the velocity
+and pressure components of shape functions from FEFaceValues objects:
+both FEValues and FEFaceValues are derived from a common
+base class, FEValuesBase, and the extraction functions above can
+therefore deal with both in exactly the same way. Assembling the missing
+boundary term then takes on the following form:
+@code
+for (unsigned int face_no=0;
+     face_no<GeometryInfo<dim>::faces_per_cell;
+     ++face_no)
+  if (cell->at_boundary(face_no))
+    {
+      fe_face_values.reinit (cell, face_no);
+
+      pressure_boundary_values
+        .value_list (fe_face_values.get_quadrature_points(),
+                     boundary_values);
+
+      for (unsigned int q=0; q<n_face_q_points; ++q)
+        for (unsigned int i=0; i<dofs_per_cell; ++i)
+          {
+            const Tensor<1,dim>
+              phi_i_u = extract_u (fe_face_values, i, q);
+
+            local_rhs(i) += -(phi_i_u *
+                              fe_face_values.normal_vector(q) *
+                              boundary_values[q] *
+                              fe_face_values.JxW(q));
+          }
+    }
+@endcode
+
+You will find the exact same code as above in the sources for the present
+program. We will therefore not comment much on it below.
+
+

Linear solvers and preconditioners

+ +After assembling the linear system we are faced with the task of solving +it. The problem here is: the matrix has a zero block at the bottom right +(there is no term in the bilinear form that couples the pressure $p$ with the +pressure test function $q$), and it is indefinite. At least it is +symmetric. In other words: the Conjugate Gradient method is not going to +work. We would have to resort to other iterative solvers instead, such as +MinRes, SymmLQ, or GMRES, that can deal with indefinite systems. However, then +the next problem immediately surfaces: due to the zero block, there are zeros +on the diagonal and none of the usual preconditioners (Jacobi, SSOR) will work +as they require division by diagonal elements. + + +

Solving using the Schur complement

+
+In view of this, let us take another look at the matrix. If we sort our
+degrees of freedom so that all velocity variables come before all pressure
+variables, then we can subdivide the linear system $AX=B$ into the following
+blocks:
+@f{eqnarray*}
+  \left(\begin{array}{cc}
+    M & B^T \\ B & 0
+  \end{array}\right)
+  \left(\begin{array}{c}
+    U \\ P
+  \end{array}\right)
+  =
+  \left(\begin{array}{c}
+    F \\ G
+  \end{array}\right),
+@f}
+where $U,P$ are the values of velocity and pressure degrees of freedom,
+respectively, $M$ is the mass matrix on the velocity space, $B$ corresponds to
+the negative divergence operator, and $B^T$ is its transpose and corresponds
+to the negative gradient.
+
+By block elimination, we can then re-order this system in the following way
+(multiply the first row of the system by $BM^{-1}$ and then subtract the
+second row from it):
+@f{eqnarray*}
+  BM^{-1}B^T P &=& BM^{-1} F - G, \\
+  MU &=& F - B^TP.
+@f}
+Here, the matrix $S=BM^{-1}B^T$ (called the Schur complement of $A$)
+is obviously symmetric and, owing to the positive definiteness of $M$ and the
+fact that $B^T$ has full column rank, $S$ is also positive
+definite.
+
+Consequently, if we could compute $S$, we could apply the Conjugate Gradient
+method to it. However, computing $S$ is expensive, and $S$ is most
+likely also a full matrix. On the other hand, the CG algorithm doesn't require
+us to actually have a representation of $S$, it is sufficient to form
+matrix-vector products with it. We can do so in steps: to compute $Sv$, we
    +
  1. form $w = B^T v$; +
  2. solve $My = w$ for $y=M^{-1}w$, using the CG method applied to the + positive definite and symmetric mass matrix $M$; +
  3. form $z=By$ to obtain $Sv=z$. +
+We will implement a class that does that in the program. Before showing its
+code, let us first note that we need to multiply with $M^{-1}$ in several
+places here: in multiplying with the Schur complement $S$, forming the right
+hand side of the first equation, and solving in the second equation. From a
+coding viewpoint, it is therefore appropriate to relegate such a recurring
+operation to a class of its own. We call it InverseMatrix. As far as
+linear solvers are concerned, this class will have all operations that solvers
+need, which in fact includes only the ability to perform matrix-vector
+products; we form them by using a CG solve (this of course requires that the
+matrix passed to this class satisfies the requirements of the CG
+solvers). Here are the relevant parts of the code that implements this:
+@code
+class InverseMatrix
+{
+  public:
+    InverseMatrix (const SparseMatrix<double> &m);
+
+    void vmult (Vector<double>       &dst,
+                const Vector<double> &src) const;
+
+  private:
+    const SmartPointer<const SparseMatrix<double> > matrix;
+    // ... (other members, among them the vector_memory object used below)
+};
+
+
+void InverseMatrix::vmult (Vector<double>       &dst,
+                           const Vector<double> &src) const
+{
+  SolverControl solver_control (src.size(), 1e-8*src.l2_norm());
+  SolverCG<>    cg (solver_control, vector_memory);
+
+  dst = 0;
+
+  cg.solve (*matrix, dst, src, PreconditionIdentity());
+}
+@endcode
+Once created, objects of this class can act as matrices: they perform
+matrix-vector multiplications. How this is actually done is irrelevant to the
+outside world.
+
+Using this class, we can then write a class that implements the Schur
+complement in much the same way: to act as a matrix, it only needs to offer a
+function to perform a matrix-vector multiplication, using the algorithm
+above. Here are again the relevant parts of the code:
+@code
+class SchurComplement
+{
+  public:
+    SchurComplement (const BlockSparseMatrix<double> &A,
+                     const InverseMatrix             &Minv);
+
+    void vmult (Vector<double>       &dst,
+                const Vector<double> &src) const;
+
+  private:
+    const SmartPointer<const BlockSparseMatrix<double> > system_matrix;
+    const SmartPointer<const InverseMatrix>              m_inverse;
+
+    mutable Vector<double> tmp1, tmp2;
+};
+
+
+void SchurComplement::vmult (Vector<double>       &dst,
+                             const Vector<double> &src) const
+{
+  system_matrix->block(0,1).vmult (tmp1, src);
+  m_inverse->vmult (tmp2, tmp1);
+  system_matrix->block(1,0).vmult (dst, tmp2);
+}
+@endcode
+
+In this code, the constructor takes a reference to a block sparse matrix for
+the entire system, and a reference to an object representing the inverse of
+the mass matrix. It stores these using SmartPointer objects (see
+@ref step_7 "step-7"), and additionally allocates two temporary vectors tmp1 and
+tmp2 for the vectors labeled $w,y$ in the list above.
+
+In the matrix-vector multiplication function, the product $Sv$ is performed in
+exactly the order outlined above. Note how we access the blocks $B^T$ and $B$
+by calling system_matrix->block(0,1) and
+system_matrix->block(1,0) respectively, thereby picking out
+individual blocks of the block system. Multiplication by $M^{-1}$ happens
+using the object introduced above.
+
+With all this, we can go ahead and write down the solver we are going to
+use.
Essentially, all we need to do is form the right hand sides of the two
+equations defining $P$ and $U$, and then solve them with the Schur complement
+matrix and the mass matrix, respectively:
+@code
+template <int dim>
+void MixedLaplaceProblem<dim>::solve ()
+{
+  const InverseMatrix m_inverse (system_matrix.block(0,0));
+  Vector<double> tmp (solution.block(0).size());
+
+  {
+    Vector<double> schur_rhs (solution.block(1).size());
+
+    m_inverse.vmult (tmp, system_rhs.block(0));
+    system_matrix.block(1,0).vmult (schur_rhs, tmp);
+    schur_rhs -= system_rhs.block(1);
+
+    SolverControl solver_control (system_matrix.block(0,0).m(),
+                                  1e-6*schur_rhs.l2_norm());
+    SolverCG<>    cg (solver_control);
+
+    cg.solve (SchurComplement(system_matrix, m_inverse),
+              solution.block(1),
+              schur_rhs,
+              PreconditionIdentity());
+  }
+  {
+    system_matrix.block(0,1).vmult (tmp, solution.block(1));
+    tmp *= -1;
+    tmp += system_rhs.block(0);
+
+    m_inverse.vmult (solution.block(0), tmp);
+  }
+}
+@endcode
+
+This code looks more impressive than it actually is. At the beginning, we
+declare an object representing $M^{-1}$ and a temporary vector (of the size of
+the first block of the solution, i.e. with as many entries as there are
+velocity unknowns), and the two blocks surrounded by braces then solve the two
+equations for $P$ and $U$, in this order. Most of the code in each of the two
+blocks is actually devoted to constructing the proper right hand sides. For
+the first equation, this would be $BM^{-1}F-G$, and $F-B^TP$ for the second
+one. The first equation is then solved with the Schur complement matrix, and
+the second simply multiplied with $M^{-1}$. The code as shown uses no
+preconditioner (i.e. the identity matrix as preconditioner) for the Schur
+complement.
+
+
+

A preconditioner for the Schur complement

+
+One may ask whether it would help if we had a preconditioner for the Schur
+complement $S=BM^{-1}B^T$. The general answer, as usual, is: of course. The
+problem is only, we don't know anything about this Schur complement matrix. We
+do not know its entries, all we know is its action. On the other hand, we have
+to realize that our solver is expensive since in each iteration we have to do
+one matrix-vector product with the Schur complement, which means that we have
+to invert the mass matrix once in each iteration.
+
+There are different approaches to preconditioning such a matrix. On the one
+extreme is to use something that is cheap to apply and therefore has no real
+impact on the work done in each iteration. The other extreme is a
+preconditioner that is itself very expensive, but in return really brings down
+the number of iterations required to solve with $S$.
+
+We will try something along the second approach, as much to improve the
+performance of the program as to demonstrate some techniques. To this end, let
+us recall that the ideal preconditioner is, of course, $S^{-1}$, but that is
+unattainable. However, how about
+@f{eqnarray*}
+  \tilde S^{-1} = [B ({\textrm{diag}\ }M)^{-1} B^T]^{-1}
+@f}
+as a preconditioner? That would mean that every time we have to do one
+preconditioning step, we actually have to solve with $\tilde S$. At first,
+this looks almost as expensive as solving with $S$ right away. However, note
+that in the inner iteration, we do not have to calculate $M^{-1}$, but only
+the inverse of its diagonal, which is cheap.
+
+To implement something like this, let us first generalize the
+InverseMatrix class so that it can work not only with
+SparseMatrix objects, but with any matrix type. This looks like so:
+@code
+template <class Matrix>
+class InverseMatrix
+{
+  public:
+    InverseMatrix (const Matrix &m);
+
+    void vmult (Vector<double>       &dst,
+                const Vector<double> &src) const;
+
+  private:
+    const SmartPointer<const Matrix> matrix;
+
+    //...
+};
+
+
+template <class Matrix>
+void InverseMatrix<Matrix>::vmult (Vector<double>       &dst,
+                                   const Vector<double> &src) const
+{
+  SolverControl solver_control (src.size(), 1e-8*src.l2_norm());
+  SolverCG<>    cg (solver_control, vector_memory);
+
+  dst = 0;
+
+  cg.solve (*matrix, dst, src, PreconditionIdentity());
+}
+@endcode
+Essentially, the only change we have made is the introduction of a template
+argument that generalizes the use of SparseMatrix.
+
+The next step is to define a class that represents the approximate Schur
+complement. This should look very much like the Schur complement class itself,
+except that it doesn't need the object representing $M^{-1}$ any more:
+@code
+class ApproximateSchurComplement : public Subscriptor
+{
+  public:
+    ApproximateSchurComplement (const BlockSparseMatrix<double> &A);
+
+    void vmult (Vector<double>       &dst,
+                const Vector<double> &src) const;
+
+  private:
+    const SmartPointer<const BlockSparseMatrix<double> > system_matrix;
+
+    mutable Vector<double> tmp1, tmp2;
+};
+
+
+void ApproximateSchurComplement::vmult (Vector<double>       &dst,
+                                        const Vector<double> &src) const
+{
+  system_matrix->block(0,1).vmult (tmp1, src);
+  system_matrix->block(0,0).precondition_Jacobi (tmp2, tmp1);
+  system_matrix->block(1,0).vmult (dst, tmp2);
+}
+@endcode
+Note how the vmult function differs in simply doing one Jacobi sweep
+(i.e. multiplying with the inverse of the diagonal) instead of multiplying
+with the full $M^{-1}$.
+
+With all this, we already have the preconditioner: it should be the inverse of
+the approximate Schur complement, i.e.
we need code like this:
+@code
+  ApproximateSchurComplement
+    approximate_schur_complement (system_matrix);
+
+  InverseMatrix<ApproximateSchurComplement>
+    preconditioner (approximate_schur_complement);
+@endcode
+That's all!
+
+Taken together, the first block of our solve() function will then
+look like this:
+@code
+    Vector<double> schur_rhs (solution.block(1).size());
+
+    m_inverse.vmult (tmp, system_rhs.block(0));
+    system_matrix.block(1,0).vmult (schur_rhs, tmp);
+    schur_rhs -= system_rhs.block(1);
+
+    SchurComplement
+      schur_complement (system_matrix, m_inverse);
+
+    ApproximateSchurComplement
+      approximate_schur_complement (system_matrix);
+
+    InverseMatrix<ApproximateSchurComplement>
+      preconditioner (approximate_schur_complement);
+
+    SolverControl solver_control (system_matrix.block(0,0).m(),
+                                  1e-6*schur_rhs.l2_norm());
+    SolverCG<>    cg (solver_control);
+
+    cg.solve (schur_complement, solution.block(1), schur_rhs,
+              preconditioner);
+@endcode
+Note how we pass the so-defined preconditioner to the solver working on the
+Schur complement matrix.
+
+Obviously, applying this inverse of the approximate Schur complement is a very
+expensive preconditioner, almost as expensive as inverting the Schur
+complement itself. We can expect it to significantly reduce the number of
+outer iterations required for the Schur complement. In fact it does: in a
+typical run on 5 times refined meshes using elements of order 0, the number of
+outer iterations drops from 164 to 12. On the other hand, we now have to apply
+a very expensive preconditioner 12 times. A better measure is therefore simply
+the run-time of the program: on my laptop, it drops from 28 to 23 seconds for
+this test case. That doesn't seem too impressive, but the savings become more
+pronounced on finer meshes and with elements of higher order. For example, a
+six times refined mesh and using elements of order 2 yields an improvement
+from 318 to 12 outer iterations, at a run time that drops from 338 to 229
+seconds. Not earth shattering, but significant.
+
+

A remark on similar functionality in deal.II

+
+As a final remark about solvers and preconditioners, let us note that a
+significant amount of the functionality introduced above is actually also
+present in the library itself. It is probably even more powerful and general,
+but we chose to introduce this material here anyway to demonstrate how to
+work with block matrices and to develop solvers and preconditioners, rather
+than using black box components from the library.
+
+For those interested in looking up the corresponding library classes: the
+InverseMatrix class is roughly equivalent to the
+PreconditionLACSolver class in the library. Likewise, the Schur
+complement class corresponds to the SchurMatrix class.
+
+

Definition of the test case

+
+In this tutorial program, we will solve the Laplace equation in mixed
+formulation as stated above. Since we want to monitor convergence of the
+solution inside the program, we choose right hand side, boundary conditions,
+and the coefficient so that we recover a solution function known to us. In
+particular, we choose the pressure solution
+@f{eqnarray*}
+  p = -\left(\frac \alpha 2 xy^2 + \beta x - \frac \alpha 6 x^3\right),
+@f}
+and for the coefficient we choose the unit matrix $K_{ij}=\delta_{ij}$ for
+simplicity. Consequently, the exact velocity ${\mathbf u}=-K\nabla p$
+satisfies
+@f{eqnarray*}
+  {\mathbf u} =
+  \left(\begin{array}{c}
+    \frac \alpha 2 y^2 + \beta - \frac \alpha 2 x^2 \\
+    \alpha xy
+  \end{array}\right).
+@f}
+This solution was chosen since it is exactly divergence free, making it a
+realistic test case for incompressible fluid flow. As a consequence, the right
+hand side equals $f=0$, and as boundary values we have to choose
+$g=p|_{\partial\Omega}$.
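+That the velocity is indeed divergence free can be verified by a quick
+computation from the formula above:
+@f{eqnarray*}
+  {\textrm{div}}\ {\mathbf u}
+  =
+  \frac{\partial}{\partial x}
+  \left(\frac \alpha 2 y^2 + \beta - \frac \alpha 2 x^2\right)
+  +
+  \frac{\partial}{\partial y}
+  \left(\alpha xy\right)
+  =
+  -\alpha x + \alpha x
+  = 0.
+@f}
+
+For the computations in this program, we choose $\alpha=0.3,\beta=1$. You can
+find the resulting solution in the ``Results'' section below, after the
+commented program.
diff --git a/deal.II/examples/step-20/doc/results.dox b/deal.II/examples/step-20/doc/results.dox
new file mode 100644
index 0000000000..902cfd1e54
--- /dev/null
+++ b/deal.II/examples/step-20/doc/results.dox
@@ -0,0 +1,294 @@
+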

Results

+ +

Output of the program and graphical visualization

+
+
+If we run the program as is, we get this output:
+@code
+examples/step-20> make run
+============================ Remaking Makefile.dep
+==============debug========= step-20.cc
+============================ Linking step-20
+============================ Running step-20
+Number of active cells: 64
+Total number of cells: 85
+Number of degrees of freedom: 208 (144+64)
+10 CG Schur complement iterations to obtain convergence.
+Errors: ||e_p||_L2 = 0.178055, ||e_u||_L2 = 0.0433435
+@endcode
+
+The fact that the number of iterations is so small, of course, is due to the
+good (but expensive!) preconditioner we have developed. To get confidence in
+the solution, let us take a look at it. The following three images show (from
+left to right) the x-velocity, the y-velocity, and the pressure (click on the
+images for larger versions):
+
+@image html step-20.u.png
+@image html step-20.v.png
+@image html step-20.p.png
+
+
+
+Let us start with the pressure: it is highest at the left and lowest at the
+right, so flow will be from left to right. In addition, though hardly visible
+in the graph, we have chosen the pressure field such that the left-right flow
+first channels towards the center and then outward again. Consequently, the
+x-velocity has to increase to get the flow through the narrow part, something
+that can easily be seen in the left image. The middle image represents inward
+flow in y-direction at the left end of the domain, and outward flow in
+y-direction at the right end of the domain.
+
+
+
+As an additional remark, note how the x-velocity in the left image is only
+continuous in x-direction, whereas the y-velocity is continuous in
+y-direction. The flow fields are discontinuous in the other directions. This
+very obviously reflects the continuity properties of the Raviart-Thomas
+elements, whose functions are, in fact, only in the space
+$H({\textrm{div}})$ and not in the space $H^1$. Finally, the pressure field
+is completely discontinuous, but that should not surprise us given that we
+have chosen FE_DGQ(0) as the finite element for that solution component.
+
+
+

Convergence

+
+
+The program offers two obvious parameters with which one can play and observe
+convergence: the degree of the finite elements used (passed to the constructor
+of the MixedLaplaceProblem class from main()), and
+the refinement level (determined in
+MixedLaplaceProblem::make_grid_and_dofs). What one can do is to
+change these values and observe the errors computed later on in the course of
+the program run.
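+
+A minimal sketch of how one might automate this experiment (this loop is not
+part of the program as distributed; it assumes, as described above, that the
+constructor takes the element degree, and uses the usual run() entry point of
+the tutorial programs):
+@code
+// Re-run the whole computation for element degrees 0, 1, and 2. The
+// constructor argument is the order of the Raviart-Thomas element used
+// for the velocities.
+int main ()
+{
+  for (unsigned int degree=0; degree<3; ++degree)
+    {
+      MixedLaplaceProblem<2> mixed_laplace_problem (degree);
+      mixed_laplace_problem.run ();
+    }
+}
+@endcode
+
+If one does this, one finds the following pattern for the $L_2$ error
+in the pressure variable:
+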
+  Refinement level |   order 0   |   order 1    |   order 2
+  -----------------+-------------+--------------+--------------
+         0         |  1.45344    |  0.0831743   |  0.0235186
+         1         |  0.715099   |  0.0245341   |  0.00293983
+         2         |  0.356383   |  0.0063458   |  0.000367478
+         3         |  0.178055   |  0.00159944  |  4.59349e-05
+         4         |  0.0890105  |  0.000400669 |  5.74184e-06
+         5         |  0.0445032  |  0.000100218 |  7.17799e-07
+         6         |  0.0222513  |  2.50576e-05 |  9.0164e-08
+  observed order   |  $O(h)$     |  $O(h^2)$    |  $O(h^3)$
+
+The theoretically expected convergence orders are very nicely reflected by the
+experimentally observed ones indicated in the last row of the table.
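+These observed orders can be extracted from any two consecutive rows; as a
+worked check using the numbers in the table, between refinement levels 5 and
+6 we get
+@f{eqnarray*}
+  \frac{\log (0.0445032/0.0222513)}{\log 2} \approx 1.00,
+  \qquad
+  \frac{\log (0.000100218/2.50576\cdot 10^{-5})}{\log 2} \approx 2.00,
+  \qquad
+  \frac{\log (7.17799\cdot 10^{-7}/9.0164\cdot 10^{-8})}{\log 2} \approx 2.99.
+@f}
+
+One can make the same experiment with the $L_2$ error
+in the velocity variables:
+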
+  Refinement level |   order 0    |   order 1    |   order 2
+  -----------------+--------------+--------------+--------------
+         0         |  0.367423    |  0.127657    |  5.10388e-14
+         1         |  0.175891    |  0.0319142   |  9.04414e-15
+         2         |  0.0869402   |  0.00797856  |  1.23723e-14
+         3         |  0.0433435   |  0.00199464  |  1.86345e-07
+         4         |  0.0216559   |  0.00049866  |  2.72566e-07
+         5         |  0.010826    |  0.000124664 |  3.57141e-07
+         6         |  0.00541274  |  3.1166e-05  |  4.46124e-07
+  observed order   |  $O(h)$      |  $O(h^2)$    |  $O(h^3)$
+The result concerning the convergence order is the same here. Note that for
+elements of order 2, the exact velocity given in the introduction is itself a
+quadratic function and is therefore contained in the finite element space;
+the entries in the last column are consequently not discretization errors at
+all, but round-off on the coarsest meshes and, on the finer ones, the
+tolerance to which we solve the linear systems.
+
+
+
+

Possibilities for extensions

+
+
+Realistic flow computations for ground water or oil reservoir simulations will
+not use a constant permeability. Here's a first, rather simple way to change
+this situation: we use a permeability that decays very rapidly away from a
+central flowline until it hits a background value of 0.001. This is to mimic
+the behavior of fluids in sandstone: in most of the domain, the sandstone is
+homogeneous and, while permeable to fluids, not overly so; on the other hand,
+the stone has cracked, or faulted, along one line, and the fluids flow much
+more easily along this large crack. Here is how we could implement something
+like this:
+@code
+template <int dim>
+void
+KInverse<dim>::value_list (const std::vector<Point<dim> > &points,
+                           std::vector<Tensor<2,dim> >    &values) const
+{
+  Assert (points.size() == values.size(),
+          ExcDimensionMismatch (points.size(), values.size()));
+
+  for (unsigned int p=0; p<points.size(); ++p)
+    {
+      values[p].clear ();
+
+      // distance of the present point to the flowline, here taken to
+      // be the sinusoidal curve y = 0.2 sin(10x):
+      const double distance_to_flowline
+        = std::fabs(points[p][1]-0.2*std::sin(10*points[p][0]));
+
+      // permeability decays exponentially away from the flowline,
+      // down to a background value of 0.001:
+      const double permeability
+        = std::max(std::exp(-(distance_to_flowline*distance_to_flowline)
+                            / (0.1*0.1)),
+                   0.001);
+
+      // remember that this class has to return the *inverse* of the
+      // permeability tensor:
+      for (unsigned int d=0; d<dim; ++d)
+        values[p][d][d] = 1./permeability;
+    }
+}
+@endcode
+
+A second possibility is a random medium: we place a number of exponentials
+with randomly chosen centers in the domain and add up their contributions.
+For this, the class has to store the locations of these centers, which we
+choose in the constructor:
+@code
+template <int dim>
+class KInverse : public TensorFunction<2,dim>
+{
+  public:
+    KInverse ();
+
+    virtual void value_list (const std::vector<Point<dim> > &points,
+                             std::vector<Tensor<2,dim> >    &values) const;
+
+  private:
+    std::vector<Point<dim> > centers;
+};
+
+
+template <int dim>
+KInverse<dim>::KInverse ()
+{
+  // pick 40 random center points in the square [-1,1]^dim:
+  const unsigned int N = 40;
+  centers.resize (N);
+  for (unsigned int i=0; i<N; ++i)
+    for (unsigned int d=0; d<dim; ++d)
+      centers[i][d] = 2.*rand()/RAND_MAX-1;
+}
+
+
+template <int dim>
+void
+KInverse<dim>::value_list (const std::vector<Point<dim> > &points,
+                           std::vector<Tensor<2,dim> >    &values) const
+{
+  Assert (points.size() == values.size(),
+          ExcDimensionMismatch (points.size(), values.size()));
+
+  for (unsigned int p=0; p<points.size(); ++p)
+    {
+      values[p].clear ();
+
+      // sum up the contributions of exponentials centered around the
+      // randomly chosen points ...
+      double permeability = 0;
+      for (unsigned int i=0; i<centers.size(); ++i)
+        {
+          double distance_squared = 0;
+          for (unsigned int d=0; d<dim; ++d)
+            distance_squared += (points[p][d]-centers[i][d]) *
+                                (points[p][d]-centers[i][d]);
+
+          permeability += std::exp(-distance_squared / (0.1*0.1));
+        }
+
+      // ... impose a lower bound, and return the inverse:
+      const double normalized_permeability
+        = std::max(permeability, 0.005);
+
+      for (unsigned int d=0; d<dim; ++d)
+        values[p][d][d] = 1./normalized_permeability;
+    }
+}
+@endcode
diff --git a/deal.II/examples/step-21/doc/intro.dox b/deal.II/examples/step-21/doc/intro.dox
new file mode 100644
--- /dev/null
+++ b/deal.II/examples/step-21/doc/intro.dox
@@ -0,0 +1,2 @@
+

Introduction

diff --git a/deal.II/examples/step-21/doc/results.dox b/deal.II/examples/step-21/doc/results.dox new file mode 100644 index 0000000000..e67fccc1c1 --- /dev/null +++ b/deal.II/examples/step-21/doc/results.dox @@ -0,0 +1,2 @@ + +

Results

diff --git a/deal.II/examples/step-3/doc/intro.dox b/deal.II/examples/step-3/doc/intro.dox new file mode 100644 index 0000000000..222a263a56 --- /dev/null +++ b/deal.II/examples/step-3/doc/intro.dox @@ -0,0 +1,16 @@ + +

Introduction

+
+
+This is the first example where we actually use finite elements to compute
+something. We
+will solve a simple version of Laplace's equation with zero boundary
+values, but a nonzero right hand side. This example is still quite
+simple, but it already shows the basic structure of most finite
+element programs, which are along the following lines (a sketch of this
+structure in code follows the list):
    +
  • Grid generation; +
  • Assembling matrices and vectors of the discrete system; +
  • Solving the linear system of equations; +
  • Writing results to disk. +
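+
+As a rough sketch of how these steps map onto a program (the class and
+function names here are illustrative, not literal quotes from the code
+below):
+@code
+// Hypothetical skeleton of the main driver function of a typical
+// finite element program, executing the four steps above in order:
+void LaplaceProblem::run ()
+{
+  make_grid_and_dofs ();   // grid generation, distributing degrees of freedom
+  assemble_system ();      // assembling matrix and right hand side vector
+  solve ();                // solving the linear system of equations
+  output_results ();       // writing results to disk
+}
+@endcode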
diff --git a/deal.II/examples/step-3/doc/results.dox b/deal.II/examples/step-3/doc/results.dox new file mode 100644 index 0000000000..19f31c400b --- /dev/null +++ b/deal.II/examples/step-3/doc/results.dox @@ -0,0 +1,143 @@ + +

Results

+
+The output of the program looks as follows:
+@code
+Number of active cells: 1024
+Total number of cells: 1365
+Number of degrees of freedom: 1089
+DEAL:cg::Starting value 0.121094
+DEAL:cg::Convergence step 48 value 5.33692e-13
+@endcode
+
+The first three lines are what we wrote to cout. The last
+two lines were generated without our intervention by the CG
+solver. The first of them states the residual at the start of the
+iteration, while the second tells us that the solver needed 48
+iterations to bring the norm of the residual to 5.3e-13, i.e. below
+the threshold 1e-12 which we have set in the `solve' function. We will
+show in the next program how to suppress this output, which is
+sometimes useful for debugging purposes, but often clutters up the
+screen display.
+
+Apart from the output shown above, the program generated the file
+solution.gpl, which is in GNUPLOT format. It can be
+viewed as follows: invoke GNUPLOT and enter the following sequence of
+commands at its prompt:
+@code
+examples/step-3> gnuplot
+
+        G N U P L O T
+        Version 3.7 patchlevel 3
+        last modified Thu Dec 12 13:00:00 GMT 2002
+        System: Linux 2.6.11.4-21.10-default
+
+        Copyright(C) 1986 - 1993, 1998 - 2002
+        Thomas Williams, Colin Kelley and many others
+
+        Type `help` to access the on-line reference manual
+        The gnuplot FAQ is available from
+        http://www.gnuplot.info/gnuplot-faq.html
+
+        Send comments and requests for help to
+        Send bugs, suggestions and mods to
+
+
+Terminal type set to 'x11'
+gnuplot> set data style lines
+gnuplot> splot "solution.gpl"
+@endcode
+This produces the picture of the solution below left. Alternatively,
+you can order GNUPLOT to do some hidden line removal by the command
+@code
+gnuplot> set hidden3d
+@endcode
+to get the result at the right:
+
+
+
+ @image html step-3.solution-1.png + + @image html step-3.solution-2.png +
+ + + + +

Possibilities for extensions

+ +If you want to play around a little bit with this program, here are a few +suggestions: +

+ +
    +
  • + Change the geometry and mesh: In the program, we have generated a square + domain and mesh by using the GridGenerator::hyper_cube + function. However, the GridGenerator has a good number of other + functions as well. Try an L-shaped domain, a ring, or other domains you find + there. +
  •
+ Change the boundary condition: The code uses the ZeroFunction
+ function to generate zero boundary conditions. However, you may want to try
+ non-zero constant boundary values using ConstantFunction<2>(1)
+ instead of ZeroFunction<2>() to have unit
+ Dirichlet boundary values. More exotic functions are described in the
+ documentation of the Functions namespace, and you may pick one
+ to describe your particular boundary values.
  • + Modify the type of boundary condition: Presently, what happens is that we use + Dirichlet boundary values all around, since the default is that all boundary + parts have boundary indicator zero, and then we tell the + VectorTools::interpolate_boundary_values function to interpolate + boundary values to zero on all boundary components with indicator zero. +

    + We can change this behavior if we assign parts of the boundary different + indicators. For example, try this immediately after calling + GridGenerator::hyper_cube: +

    +    triangulation.begin_active()->face(0)->set_boundary_indicator(1);
    +  
  
+ What this does is the following: it first asks the triangulation to return
+ an iterator that points to the first active cell. Of course, this being the
+ coarse mesh for the triangulation of a square, the triangulation has only a
+ single cell at this moment, and it is active. Next, we ask the cell to
+ return an iterator to its first face, and then we ask the face to reset the
+ boundary indicator of that face to 1. What then follows is this: When the
+ mesh is refined, faces of child cells inherit the boundary indicator of
+ their parents, i.e. even on the finest mesh, the faces on one side of the
+ square have boundary indicator 1. Later, when we get to interpolating
+ boundary conditions, the interpolate_boundary_values call will only
+ produce boundary values for those faces that have zero boundary indicator,
+ and leave those faces alone that have a different boundary indicator. In
+ keeping with the theory of the Laplace equation, this will then lead to
+ homogeneous Neumann conditions on this side, i.e. a zero normal derivative
+ of the solution. You will see this if you run the program.
+
  • + A slight variation of the last point would be to set different boundary + values as above, but then use a different boundary value function for + boundary indicator one. In practice, what you have to do is to add a second + call to interpolate_boundary_values for boundary indicator one: + @code + VectorTools::interpolate_boundary_values (dof_handler, + 1, + ConstantFunction<2>(1.), + boundary_values); + @endcode + If you have this call immediately after the first one to this function, then + it will interpolate boundary values on faces with boundary indicator 1 to the + unit value, and merge these interpolated values with those previously + computed for boundary indicator 0. The result will be that we will get + discontinuous boundary values, zero on three sides of the square, and one on + the fourth. +
diff --git a/deal.II/examples/step-4/doc/intro.dox b/deal.II/examples/step-4/doc/intro.dox new file mode 100644 index 0000000000..a21e0340c3 --- /dev/null +++ b/deal.II/examples/step-4/doc/intro.dox @@ -0,0 +1,133 @@ + +

Introduction

+
+
+deal.II has a unique feature which we call
+``dimension independent programming''. You may have noticed in the
+previous examples that many classes had a number in angle brackets
+suffixed to them. This is to indicate that for example the
+triangulations in two and three space dimensions are different, but
+related data types. We could as well have called them
+Triangulation2d and Triangulation3d instead
+of Triangulation@<2@> and
+Triangulation@<3@> to name the two classes, but this
+has an important drawback: assume you have a function which performs
+exactly the same operations, but on 2d or 3d triangulations,
+depending on the dimension in which we presently would like to solve the
+equation (if you don't believe that it is the common case that a
+function does something that is the same in all dimensions, just take
+a look at the code below - there are almost no distinctions between 2d
+and 3d!). We would have to write the same function twice, once
+working on Triangulation2d and once working with a
+Triangulation3d. This is an unnecessary obstacle in
+programming, makes it a nuisance to keep the two functions in sync
+(at best), and leads to hard-to-find errors if the two versions get out
+of sync (at worst; this would probably be the more common case).
+
+
+
+
+Such obstacles can be circumvented by using some template magic as
+provided by the C++ language: templatized classes and functions are
+not really classes or functions but only a pattern depending on an
+as-yet undefined data type parameter or on a numerical value which is
+also unknown at the point of definition. However, the compiler can
+build proper classes or functions from these templates if you provide
+it with the information that is needed for that. Of course, parts of
+the template can depend on the template parameters, and they will be
+resolved at the time of compilation for a specific template
+parameter. For example, consider the following piece of code:
+@code
+  template <int dim>
+  void make_grid (Triangulation<dim> &triangulation)
+  {
+    GridGenerator::hyper_cube (triangulation, -1, 1);
+  };
+@endcode
+
+
+
+At the point where the compiler sees this function, it does not know
+anything about the actual value of dim. The only thing the compiler has is
+a template, i.e. a blueprint, to generate
+the function make_grid if given a particular value of
+dim. Since dim has an unknown value, there is no
+code the compiler can generate for the moment.
+
+
+
+However, if further down the compiler encounters code that looks, for
+example, like this,
+@code
+  Triangulation<2> triangulation;
+  make_grid (triangulation);
+@endcode
+then the compiler will deduce that the function make_grid for
+dim==2 was
+requested and will compile the template above into a function with dim replaced
+by 2 everywhere, i.e. it will compile the function as if it were defined
+as
+@code
+  void make_grid (Triangulation<2> &triangulation)
+  {
+    GridGenerator::hyper_cube (triangulation, -1, 1);
+  };
+@endcode
+
+
+
+However, it is worth noting that the function
+GridGenerator::hyper_cube depends on the dimension as
+well, so in this case, the compiler will call the function
+GridGenerator::hyper_cube@<2@> while if dim were 3,
+it would call GridGenerator::hyper_cube@<3@> which
+might be (and actually is) a totally unrelated function.
+
+
+
+The same mechanism works for variables of dimension-dependent types.
Consider the following
+function, which might in turn call the above one:
+@code
+  template <int dim>
+  void make_grid_and_dofs (Triangulation<dim> &triangulation)
+  {
+    make_grid (triangulation);
+
+    DoFHandler<dim> dof_handler(triangulation);
+    ...
+  };
+@endcode
+This function contains a variable of type
+DoFHandler@<dim@>. Again, the compiler can't
+compile this function until it knows for which dimension. If you call
+this function for a specific dimension as above, the compiler will
+take the template, replace all occurrences of dim by the dimension for
+which it was called, and compile it. If you call the function several
+times for different dimensions, it will compile it several times, each
+time calling the right make_grid function and reserving the right
+amount of memory for the variable; note that the size of a
+DoFHandler might, and indeed does, depend on the space dimension.
+
+
+
+The deal.II library is built around this concept
+of dimension-independent programming, and therefore allows you to program in
+a way that will not need to
+distinguish between the space dimensions. It should be noted that in
+only a very few places is it necessary to actually compare the
+dimension using ifs or switches. However, since the compiler
+has to compile each function for each dimension separately, even there
+it knows the value of dim at the time of compilation and will
+therefore be able to optimize away the if statement along with the
+unused branch.
+
+
+
+In this example program, we will show how to program dimension
+independently (which in fact is even simpler than if you had to take
+care about the dimension) and we will extend the Laplace problem of
+the last example to a program that runs in two and three space
+dimensions at the same time. Other extensions are the use of a
+non-constant right hand side function and of non-zero boundary values.
+
+
diff --git a/deal.II/examples/step-4/doc/results.dox b/deal.II/examples/step-4/doc/results.dox
new file mode 100644
index 0000000000..cfd3be1c1f
--- /dev/null
+++ b/deal.II/examples/step-4/doc/results.dox
@@ -0,0 +1,111 @@
+

Results

+
+
+The output of the program looks as follows (the number of iterations
+may vary by one or two, depending on your computer, since this is
+often dependent on the round-off accuracy of floating point
+operations, which differs between processors):
+@code
+Solving problem in 2 space dimensions.
+   Number of active cells: 256
+   Total number of cells: 341
+   Number of degrees of freedom: 289
+   26 CG iterations needed to obtain convergence.
+Solving problem in 3 space dimensions.
+   Number of active cells: 4096
+   Total number of cells: 4681
+   Number of degrees of freedom: 4913
+   30 CG iterations needed to obtain convergence.
+@endcode
+It is obvious that in three spatial dimensions the number of cells and
+therefore also the number of degrees of freedom is
+much higher. What cannot be seen here is that besides this higher
+number of rows and columns in the matrix, there are also significantly
+more entries per row of the matrix in three space
+dimensions. Together, this leads to a much higher numerical effort for
+solving the system of equations, which you can feel when you actually
+run the program.
+
+
+
+The program produces two files: solution-2d.gmv and
+solution-3d.gmv, which can be viewed using the program
+GMV (in case you do not have that program, you can easily change the
+output format in the program to something which you can view more
+easily). From the two-dimensional output, we have produced the
+following two pictures:
+
+
+
+ @image html step-4.solution-2d.png + + @image html step-4.grid-2d.png +
+
+
+The left one shows the solution of the problem under consideration as
+a 3D plot. As can be seen, the solution is almost flat in the interior
+of the domain and has a higher curvature near the boundary. This, of
+course, is due to the fact that for Laplace's equation the curvature
+of the solution is equal to the right hand side, and that was chosen as
+a quartic polynomial which is nearly zero in the interior and is only
+rising sharply when approaching the boundaries of the domain; the
+maximal values of the right hand side function are at the corners of
+the domain, where the solution also varies most rapidly.
+It is also nice to see that the solution follows the desired quadratic
+boundary values along the boundaries of the domain.
+
+
+
+The right picture shows the two dimensional grid, colorized by the
+values of the solution function. This is not very exciting, but the
+colors are nice.
+
+
+
+
+In three spatial dimensions, visualization is a bit more difficult. To
+the left, you can see the solution at three of the six outer faces of
+the cube in which we solved the equation, and on a plane through the
+origin. On some of the planes, the cut through the grid is also shown.
+
+
+
+ @image html step-4.solution-3d.png + + @image html step-4.grid-3d.png +
+
+
+
+The right picture shows the three dimensional grid, colorized by the
+solution's values. 3D grids are difficult to visualize, which can be
+seen here already, even though the grid is not even locally refined.
+
+
+
+
+

Possibilities for extensions

+
+
+Essentially the possibilities for playing around with the program are the same
+as for the previous one, except that they will now also apply to the 3d
+case. For inspiration, read up on possible extensions in the documentation of
+step-3.
+
diff --git a/deal.II/examples/step-5/doc/intro.dox b/deal.II/examples/step-5/doc/intro.dox
new file mode 100644
index 0000000000..43e24bfd97
--- /dev/null
+++ b/deal.II/examples/step-5/doc/intro.dox
@@ -0,0 +1,37 @@
+

Introduction

+ + +This example does not show revolutionary new things, but it shows many +small improvements over the previous examples, and also many small +things that can usually be found in finite element programs. Among +them are: +
    +
  • Computations on successively refined grids. At least in the
+   mathematical sciences, it is common to compute solutions on
+   a hierarchy of grids, in order to get a feeling for the accuracy
+   of the solution; if you only have one solution on a single grid, you
+   usually can't guess the accuracy of the
+   solution. Furthermore, deal.II is designed to support adaptive
+   algorithms where iterative solution on successively refined
+   grids is at the heart of the method. Although adaptive grids
+   are not used in this example, the foundations for them are laid
+   here.
  • In practical applications, the domains are often subdivided + into triangulations by automatic mesh generators. In order to + use them, it is important to read coarse grids from a file. In + this example, we will read a coarse grid in UCD (unstructured + cell data) format as used by AVS Explorer. +
  • Finite element programs usually use extensive amounts of + computing time, so some optimizations are sometimes + necessary. We will show some of them. +
  • On the other hand, finite element programs tend to be rather
+   complex, so debugging is an important aspect. We support safe
+   programming by using assertions that check the validity of
+   parameters and internal states in a debug mode, but are removed
+   in optimized mode (see the sketch after this list).
  • Regarding the mathematical side, we show how to support a + variable coefficient in the elliptic operator and how to use + preconditioned iterative solvers for the linear systems of + equations. +
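+
+As an illustration of the assertion mechanism just mentioned, here is one
+such check (this exact call appears later in this program; it is reproduced
+here only as a preview):
+@code
+// In debug mode, verify that the two arrays have matching sizes and
+// abort with a descriptive exception if they don't; in optimized mode,
+// the Assert macro expands to nothing.
+Assert (values.size() == points.size(),
+        ExcDimensionMismatch (values.size(), points.size()));
+@endcode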
diff --git a/deal.II/examples/step-5/doc/results.dox b/deal.II/examples/step-5/doc/results.dox new file mode 100644 index 0000000000..3cd03c5bb0 --- /dev/null +++ b/deal.II/examples/step-5/doc/results.dox @@ -0,0 +1,163 @@ + +

Results

+
+
+When the last block in main() is commented in, the output
+of the program looks as follows:
+@code
+Cycle 0:
+   Number of active cells: 20
+   Total number of cells: 20
+   Number of degrees of freedom: 25
+   13 CG iterations needed to obtain convergence.
+Cycle 1:
+   Number of active cells: 80
+   Total number of cells: 100
+   Number of degrees of freedom: 89
+   18 CG iterations needed to obtain convergence.
+Cycle 2:
+   Number of active cells: 320
+   Total number of cells: 420
+   Number of degrees of freedom: 337
+   29 CG iterations needed to obtain convergence.
+Cycle 3:
+   Number of active cells: 1280
+   Total number of cells: 1700
+   Number of degrees of freedom: 1313
+   52 CG iterations needed to obtain convergence.
+Cycle 4:
+   Number of active cells: 5120
+   Total number of cells: 6820
+   Number of degrees of freedom: 5185
+   95 CG iterations needed to obtain convergence.
+Cycle 5:
+   Number of active cells: 20480
+   Total number of cells: 27300
+   Number of degrees of freedom: 20609
+   182 CG iterations needed to obtain convergence.
+--------------------------------------------------------
+An error occurred in line <273> of file <step-5.cc> in function
+    void Coefficient<dim>::value_list(const std::vector<Point<dim>,
+    std::allocator<Point<dim> > >&, std::vector<double,
+    std::allocator<double> >&, unsigned int) const [with int dim = 2]
+The violated condition was:
+    values.size() == points.size()
+The name and call sequence of the exception was:
+    ExcDimensionMismatch (values.size(), points.size())
+Additional Information:
+Dimension 1 not equal to 2
+
+Stacktrace:
+-----------
+#0  ./step-5: Coefficient<2>::value_list(std::vector<Point<2>,
+    std::allocator<Point<2> > > const&, std::vector<double,
+    std::allocator<double> >&, unsigned) const
+#1  ./step-5: main
+--------------------------------------------------------
+make: *** [run] Aborted
+@endcode
+
+
+
+Let's first focus on the things before the error:
+In each cycle, the number of cells quadruples and the number of CG
+iterations roughly doubles.
+Also, in each cycle, the program writes one output graphic file in EPS
+format. They are depicted in the following:
+
+
+
+ @image html step-5.solution-0.png + + @image html step-5.solution-1.png +
+ @image html step-5.solution-2.png + + @image html step-5.solution-3.png +
+ @image html step-5.solution-4.png + + @image html step-5.solution-5.png +
+
+
+
+Due to the variable coefficient (the curvature there is reduced by the
+same factor by which the coefficient is increased), the top region of
+the solution is flattened. The gradient of the solution is
+discontinuous there, although this is not very clearly visible in the
+pictures above. We will look at this in more detail in the next
+example.
+
+
+
+
+As for the error — let's look at it again:
+@code
+--------------------------------------------------------
+An error occurred in line <273> of file <step-5.cc> in function
+    void Coefficient<dim>::value_list(const std::vector<Point<dim>,
+    std::allocator<Point<dim> > >&, std::vector<double,
+    std::allocator<double> >&, unsigned int) const [with int dim = 2]
+The violated condition was:
+    values.size() == points.size()
+The name and call sequence of the exception was:
+    ExcDimensionMismatch (values.size(), points.size())
+Additional Information:
+Dimension 1 not equal to 2
+
+Stacktrace:
+-----------
+#0  ./step-5: Coefficient<2>::value_list(std::vector<Point<2>,
+    std::allocator<Point<2> > > const&, std::vector<double,
+    std::allocator<double> >&, unsigned) const
+#1  ./step-5: main
+--------------------------------------------------------
+make: *** [run] Aborted
+@endcode
+
+
+
+What we see is that the error was triggered in line 273 of the
+step-5.cc file (as we modify tutorial programs over time, these line
+numbers change, so you should check what line number you actually get
+in your output). That's already good information if you want to look up
+in the code what exactly happened. But the text tells you even
+more. First, it prints the function this happens in, and then the
+plain text version of the condition that was violated. This will
+almost always be enough already to let you know what exactly went wrong.
+
+
+
+But that's not all yet. You get to see the name of the exception
+(ExcDimensionMismatch) and this exception even prints the
+values of the two array sizes. If you go back to the code in
+main(), you will remember that we gave the two variables
+sizes 1 and 2, which of course are the ones that you find in the
+output again.
+
+
+
+So now we know pretty precisely where the error happened and what went
+wrong. What we don't know yet is how exactly we got there. The
+stacktrace at the bottom actually tells us what happened: the problem
+happened in
+Coefficient::value_list (stackframe 0) and that it was
+called from main() (stackframe 1). In realistic programs,
+there would be many more functions in between these two. For example,
+we might have made the mistake in the assemble_system
+function, in which case stack frame 1 would be
+LaplaceProblem<2>::assemble_system, stack frame 2
+would be LaplaceProblem<2>::run, and stack frame 3
+would be main() — you get the idea.
+
diff --git a/deal.II/examples/step-6/doc/intro.dox b/deal.II/examples/step-6/doc/intro.dox
new file mode 100644
index 0000000000..990220acd0
--- /dev/null
+++ b/deal.II/examples/step-6/doc/intro.dox
@@ -0,0 +1,57 @@
+

Introduction

+
+
+The main emphasis in this example is the handling of locally refined
+grids. The approach to adaptivity chosen in deal.II is to use grids in which
+neighboring cells may be refined a different number of times. This then
+results in nodes on the interfaces between cells that are vertices of the
+cells on one side of the interface, but lie in the middle of a face or edge
+of the cells on the other side. The common term for these is
+“hanging nodes”.
+
+
+
+To guarantee that the global solution is continuous at these nodes as
+well, we have to state some additional constraints on the values of
+the solution at these nodes. In the program below, we will show how we
+can get these constraints from the library, and how to use them in the
+solution of the linear system of equations.
+
+
+
+The locally refined grids are produced using an error estimator class
+which estimates the energy error with respect to the Laplace
+operator. This error estimator, although developed for Laplace's
+equation, has proven to be a suitable tool to generate locally refined
+meshes for a wide range of equations, not restricted to elliptic
+problems. Although it will create non-optimal meshes for other
+equations, it is often a good way to quickly produce meshes that are
+well adapted to the features of solutions, such as regions of great
+variation or discontinuities. Since it was developed by Kelly and
+co-workers, we often refer to it as the “Kelly refinement
+indicator” in the library, documentation, and mailing list. The
+class that implements it is called
+KellyErrorEstimator. Although the error estimator (and
+its
+implementation in the deal.II library) is capable of handling variable
+coefficients in the equation, we will not use this feature since we
+are only interested in a quick and simple way to generate locally
+refined grids.
+
+
+
+Since the concepts used for locally refined grids are so important,
+we do not show much additional new stuff in this example. The most
+important exception is that we show how to use biquadratic elements
+instead of the bilinear ones which we have used in all previous
+examples. In fact, the use of higher order elements is accomplished by
+only replacing three lines of the program, namely the declaration of
+the fe variable, and the use of an appropriate quadrature formula
+in two places. The rest of the program is unchanged.
+
+
+
+The only other new thing is a method to catch exceptions in the
+main function in order to output some information in case the
+program crashes for some reason.
+
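+
+As a preview of the hanging-node handling discussed above, the sequence of
+library calls will look roughly like the following sketch (the full context
+is in the commented program below):
+@code
+// Compute the constraints resulting from hanging nodes and close the
+// ConstraintMatrix object:
+ConstraintMatrix hanging_node_constraints;
+DoFTools::make_hanging_node_constraints (dof_handler,
+                                         hanging_node_constraints);
+hanging_node_constraints.close ();
+
+// Eliminate the constrained degrees of freedom from the linear system ...
+hanging_node_constraints.condense (system_matrix);
+hanging_node_constraints.condense (system_rhs);
+
+// ... and, once the system is solved, compute the values of the
+// constrained nodes from those of the unconstrained ones:
+hanging_node_constraints.distribute (solution);
+@endcode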

diff --git a/deal.II/examples/step-6/doc/results.dox b/deal.II/examples/step-6/doc/results.dox
new file mode 100644
index 0000000000..95700f9299
--- /dev/null
+++ b/deal.II/examples/step-6/doc/results.dox
@@ -0,0 +1,164 @@
+

Results

+ + +The output of the program looks as follows: +

+Cycle 0:
+   Number of active cells:       20
+   Number of degrees of freedom: 89
+Cycle 1:
+   Number of active cells:       44
+   Number of degrees of freedom: 209
+Cycle 2:
+   Number of active cells:       92
+   Number of degrees of freedom: 449
+Cycle 3:
+   Number of active cells:       200
+   Number of degrees of freedom: 961
+Cycle 4:
+   Number of active cells:       440
+   Number of degrees of freedom: 2033
+Cycle 5:
+   Number of active cells:       932
+   Number of degrees of freedom: 4465
+Cycle 6:
+   Number of active cells:       1916
+   Number of degrees of freedom: 9113
+Cycle 7:
+   Number of active cells:       3884
+   Number of degrees of freedom: 18401
+
+
+
+
+As intended, the number of cells roughly doubles in each cycle. The
+number of degrees of freedom is slightly more than four times the number
+of cells; one would expect a factor of exactly four in two spatial
+dimensions on an infinite grid (since the spacing between the degrees
+of freedom is half the cell width: one additional degree of freedom on
+each edge and one in the middle of each cell), but it is larger than
+that factor due to the finite size of the mesh and due to additional
+degrees of freedom which are introduced by hanging nodes and local
+refinement (in the last cycle, for example, we have 18401 degrees of
+freedom for 3884 cells, whereas $4\times 3884=15536$).
+
+
+
+The final solution, as written by the program at the end of the
+run() function, looks as follows:
+
+
+
+@image html step-6.solution.png
+
+
+
+In each cycle, the program furthermore writes the grid in EPS
+format. These are shown in the following:
+
+
+
+ @image html step-6.grid-0.png + + @image html step-6.grid-1.png +
+ @image html step-6.grid-2.png + + @image html step-6.grid-3.png +
+ @image html step-6.grid-4.png + + @image html step-6.grid-5.png +
+ @image html step-6.grid-6.png + + @image html step-6.grid-7.png +
+
+
+
+It is clearly visible that the region where the solution has a kink,
+i.e. the circle at radial distance 0.5 from the center, is
+refined most. Furthermore, the central region where the solution is
+very smooth and almost flat, is almost not refined at all, but this
+results from the fact that we did not take into account that the
+coefficient is large there. The region outside is refined rather
+randomly, since the second derivative is constant there and refinement
+is therefore mostly based on the size of the cells and their deviation
+from the optimal square.
+
+
+
+
+For completeness, we show what happens if the code we commented about
+in the destructor of the LaplaceProblem class is omitted
+from this example.
+
+@code
+--------------------------------------------------------
+An error occurred in line <79> of file in function
+    virtual Subscriptor::~Subscriptor()
+The violated condition was:
+    counter == 0
+The name and call sequence of the exception was:
+    ExcInUse(counter, object_info->name(), infostring)
+Additional Information:
+Object of class 4FE_QILi2EE is still used by 1 other objects.
+  from Subscriber 10DoFHandlerILi2EE
+
+Stacktrace:
+-----------
+#0  /u/bangerth/p/deal.II/1/deal.II/lib/libbase.g.so: Subscriptor::~Subscriptor()
+#1  /u/bangerth/p/deal.II/1/deal.II/lib/libdeal_II_2d.g.so: FiniteElement<2>::~FiniteElement()
+#2  ./step-6: FE_Poly<TensorProductPolynomials<2>, 2>::~FE_Poly()
+#3  ./step-6: FE_Q<2>::~FE_Q()
+#4  ./step-6: LaplaceProblem<2>::~LaplaceProblem()
+#5  ./step-6: main
+--------------------------------------------------------
+make: *** [run] Aborted
+@endcode
+
+
+
+From the above error message, we conclude that an object of type
+10DoFHandlerILi2EE is still using the object of type
+4FE_QILi2EE. These are of course "mangled" names for
+DoFHandler and FE_Q. The mangling works as
+follows: the first number indicates the number of characters of the
+template class name, i.e. 10 for DoFHandler and 4
+for FE_Q; the rest of the text then encodes the template
+arguments. From this we can already glean a little bit who's the
+culprit here and who's the victim:
+the one object that still uses the finite element is the
+dof_handler object.
+
+
+
+The stacktrace gives an indication of where the problem happened. We
+see that the exception was triggered in the
+destructor of the FiniteElement class that was called
+through a few more functions from the destructor of the
+LaplaceProblem class, exactly where we have commented out
+the call to DoFHandler::clear().
+
diff --git a/deal.II/examples/step-7/doc/intro.dox b/deal.II/examples/step-7/doc/intro.dox
new file mode 100644
index 0000000000..a928a5806f
--- /dev/null
+++ b/deal.II/examples/step-7/doc/intro.dox
@@ -0,0 +1,154 @@
+

Introduction

+ +In this program, we will mainly consider two aspects: +
    +
  1. Verification of correctness of the program and generation of convergence + tables; +
  2. Non-homogeneous Neumann boundary conditions for the Helmholtz equation. +
+Besides these topics, again a variety of improvements and tricks will be +shown. + +

Verification of correctness

+ +There has probably never been a +non-trivial finite element program that worked right from the start. It is +therefore necessary to find ways to verify whether a computed solution is +correct or not. Usually, this is done by choosing the set-up of a simulation +such that we know the exact continuous solution and evaluate the difference +between continuous and computed discrete solution. If this difference +converges to zero with the right order of convergence, this is already a good +indication of correctness, although there may be other sources of error +persisting which have only a small contribution to the total error or are of +higher order. + +In this example, we will not go into the theories of systematic software +verification which is a very complicated problem. Rather we will demonstrate +the tools which deal.II can offer in this respect. This is basically centered +around the functionality of a single function, integrate_difference. +This function computes the difference between a given continuous function and +a finite element field in various norms on each cell. At present, the +supported norms are the following, where $u$ denotes the continuous function +and $u_h$ the finite element field, and $K$ is an element of the +triangulation: +@f{eqnarray*} + {\| u-u_h \|}_{L_1(K)} &=& \int_K |u-u_h| \; dx, + \\ + {\| u-u_h \|}_{L_2(K)} &=& \left( \int_K |u-u_h|^2 \; dx \right)^{1/2}, + \\ + {\| u-u_h \|}_{L_\infty(K)} &=& \max_{x \in K} |u(x) - u_h(x)|, + \\ + {| u-u_h |}_{H^1(K)} &=& \left( \int_K |\nabla(u-u_h)|^2 \; dx \right)^{1/2}, + \\ + {\| u-u_h \|}_{H^1(K)} &=& \left( {\| u-u_h \|}^2_{L_2(K)} + +{| u-u_h |}^2_{H^1(K)} \right)^{1/2}. +@f} +All these norms and semi-norms can also be evaluated with weighting functions, +for example in order to exclude singularities from the determination of the +global error. The function also works for vector-valued functions. It should +be noted that all these quantities are evaluated using quadrature formulas; +the choice of the right quadrature formula is therefore crucial to the +accurate evaluation of the error. This holds in particular for the $L_\infty$ +norm, where we evaluate the maximal deviation of numerical and exact solution +only at the quadrature points; one should then not try to use a quadrature +rule with points only at points where super-convergence might occur. + +The function integrate_difference evaluates the desired norm on each +cell $K$ of the triangulation and returns a vector which holds these +values for each cell. From the local values, we can then obtain the global error. For +example, if the vector $(e_i)$ contains the local $L_2$ norms, then +@f[ + E = \| {\mathbf e} \| = \left( \sum_i e_i^2 \right)^{1/2} +@f] +is the global $L_2$ error. + +In the program, we will show how to evaluate and use these quantities, and we +will monitor their values under mesh refinement. Of course, we have to choose +the problem at hand such that we can explicitly state the solution and its +derivatives, but since we want to evaluate the correctness of the program, +this is only reasonable. If we know that the program produces the correct +solution for one (or, if one wants to be really sure: many) specifically +chosen right hand sides, we can be rather confident that it will also compute +the correct solution for problems where we don't know the exact values. 
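+
+In code, and roughly as it is done in the program below, computing the global
+$L_2$ error then looks like the following sketch (here,
+Solution@<dim@> stands for the class describing the exact solution,
+and the three-point Gauss formula is merely for illustration):
+@code
+// Compute the cellwise L2 norms of u-u_h ...
+Vector<float> difference_per_cell (triangulation.n_active_cells());
+VectorTools::integrate_difference (dof_handler,
+                                   solution,
+                                   Solution<dim>(),
+                                   difference_per_cell,
+                                   QGauss<dim>(3),
+                                   VectorTools::L2_norm);
+
+// ... and accumulate them into the global error as described above:
+const double L2_error = difference_per_cell.l2_norm();
+@endcode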
Beyond simply computing these quantities, we will show how to generate
nicely formatted tables from the data produced by this program that
automatically compute convergence rates etc. In addition, we will compare
different strategies for mesh refinement.
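deal.II provides a ConvergenceTable class for exactly this purpose. As a
rough sketch of how such a table might be built (the column names, the
table_file stream, and the error variables are illustrative assumptions,
not fixed API):
@code
ConvergenceTable convergence_table;

// in each refinement cycle, record the mesh size and the errors
convergence_table.add_value ("cells", triangulation.n_active_cells());
convergence_table.add_value ("L2", L2_error);
convergence_table.add_value ("H1", H1_error);

// after the last cycle: show the errors in scientific notation, and let
// the table compute reduction rates from one row to the next
convergence_table.set_scientific ("L2", true);
convergence_table.set_scientific ("H1", true);
convergence_table.evaluate_convergence_rates ("L2",
                                              ConvergenceTable::reduction_rate_log2);

convergence_table.write_text (std::cout);   // plain text, to the screen
convergence_table.write_tex (table_file);   // LaTeX version, to disk
@endcode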

Non-homogeneous Neumann boundary conditions

The second, totally unrelated, subject of this example program is the use
of non-homogeneous boundary conditions. These are included in the
variational form using boundary integrals which we have to evaluate
numerically when assembling the right hand side vector.

Before we go into programming, let's have a brief look at the mathematical
formulation. The equation which we want to solve is Helmholtz's equation
``with the nice sign'':
@f[
  -\Delta u + u = f,
@f]
on the square $[-1,1]^2$, augmented by boundary conditions
@f[
  u = g_1
@f]
on some part $\Gamma_1$ of the boundary $\Gamma$, and
@f[
  {\mathbf n}\cdot \nabla u = g_2
@f]
on the rest $\Gamma_2 = \Gamma \backslash \Gamma_1$.

We choose the right hand side function $f$ such that the exact solution is
@f[
  u(x) = \sum_{i=1}^3 \exp\left(-\frac{|x-x_i|^2}{\sigma^2}\right)
@f]
where the centers $x_i$ of the exponentials are
  $x_1=(-\frac 12,\frac 12)$,
  $x_2=(-\frac 12,-\frac 12)$, and
  $x_3=(\frac 12,-\frac 12)$.
The half width is set to $\sigma=\frac 13$.

We further choose $\Gamma_1=\Gamma \cap\{\{x=1\} \cup \{y=1\}\}$, and there
set $g_1$ to coincide with the exact values of $u$. Likewise, we choose
$g_2$ on the remaining portion of the boundary to be the exact normal
derivative of the continuous solution.

Using the above definitions, we can state the weak formulation of the
equation, which reads: find $u\in H^1_g=\{v\in H^1: v|_{\Gamma_1}=g_1\}$
such that
@f[
  {(\nabla u, \nabla v)}_\Omega + {(u,v)}_\Omega
  =
  {(f,v)}_\Omega + {(g_2,v)}_{\Gamma_2}
@f]
for all test functions $v\in H^1_0=\{v\in H^1: v|_{\Gamma_1}=0\}$. The
boundary term ${(g_2,v)}_{\Gamma_2}$ has appeared by integration by parts
and by using $\partial_n u=g_2$ on $\Gamma_2$ and $v=0$ on $\Gamma_1$. The
cell matrices and vectors which we use to build the global matrix and
right hand side vector in the discrete formulation therefore look like
this:
@f{eqnarray*}
  A_{ij}^K &=& \left(\nabla \varphi_i, \nabla \varphi_j\right)_K
               + \left(\varphi_i, \varphi_j\right)_K,
  \\
  f_i^K &=& \left(f,\varphi_i\right)_K
            + \left(g_2, \varphi_i\right)_{\partial K\cap \Gamma_2}.
@f}
Since the generation of the domain integrals has been shown several times
in previous examples, only the generation of the contour integral is of
interest here. It basically works along the following lines: for domain
integrals we have the FEValues class that provides values and gradients of
the shape functions, as well as Jacobian determinants and other
information, at specified quadrature points in the cell; likewise, there
is a class FEFaceValues that performs these tasks for integrations on
faces of cells. One provides it with a quadrature formula for a manifold
of dimension one less than that of the domain, and with the cell and the
number of the face on which we want to perform the integration. The class
will then compute the values, gradients, normal vectors, weights, etc. at
the quadrature points on this face, which we can then use in the same way
as for the domain integrals. The details of how this is done are shown in
the following program.
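As a brief preview of how this looks in code, consider the following
sketch. It is not the program itself; the boundary indicator 1 for faces
on $\Gamma_2$ and the exact_solution object used to evaluate
$g_2={\mathbf n}\cdot\nabla u$ are assumptions made for this illustration:
@code
QGauss<dim-1>     face_quadrature (3);
FEFaceValues<dim> fe_face_values (fe, face_quadrature,
                                  update_values         |
                                  update_quadrature_points |
                                  update_normal_vectors |
                                  update_JxW_values);

for (unsigned int face=0; face<GeometryInfo<dim>::faces_per_cell; ++face)
  if (cell->face(face)->at_boundary()
      &&
      (cell->face(face)->boundary_indicator() == 1))  // face lies on Gamma_2
    {
      fe_face_values.reinit (cell, face);

      for (unsigned int q=0; q<face_quadrature.n_quadrature_points; ++q)
        {
          // g_2 = n . grad u at this quadrature point
          const double neumann_value
            = (exact_solution.gradient (fe_face_values.quadrature_point(q))
               *
               fe_face_values.normal_vector(q));

          for (unsigned int i=0; i<fe.dofs_per_cell; ++i)
            cell_rhs(i) += (neumann_value *
                            fe_face_values.shape_value(i,q) *
                            fe_face_values.JxW(q));
        }
    }
@endcode

diff --git a/deal.II/examples/step-7/doc/results.dox b/deal.II/examples/step-7/doc/results.dox new file mode 100644 index 0000000000..cbed6604ef --- /dev/null +++ b/deal.II/examples/step-7/doc/results.dox @@ -0,0 +1,224 @@ +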

Results

The program generates two kinds of output. The first kind consists of the
output files solution-adaptive-q1.gmv, solution-global-q1.gmv, and
solution-global-q2.gmv. We show the last of them in a 3d view here:

@image html step-7.solution.png

Secondly, the program writes tables not only to disk, but also to the
screen while running:

@code
examples/step-7> make run
============================ Running step-7
Solving with Q1 elements, adaptive refinement
=============================================

Cycle 0:
   Number of active cells:       4
   Number of degrees of freedom: 9
Cycle 1:
   Number of active cells:       13
   Number of degrees of freedom: 22
Cycle 2:
   Number of active cells:       31
   Number of degrees of freedom: 46
Cycle 3:
   Number of active cells:       64
   Number of degrees of freedom: 87
Cycle 4:
   Number of active cells:       127
   Number of degrees of freedom: 160
Cycle 5:
   Number of active cells:       244
   Number of degrees of freedom: 297
Cycle 6:
   Number of active cells:       466
   Number of degrees of freedom: 543

cycle cells dofs     L2        H1      Linfty
    0     4    9 1.198e+00 2.732e+00 1.383e+00
    1    13   22 8.795e-02 1.193e+00 1.816e-01
    2    31   46 8.147e-02 1.167e+00 1.654e-01
    3    64   87 7.702e-02 1.077e+00 1.310e-01
    4   127  160 4.643e-02 7.988e-01 6.745e-02
    5   244  297 2.470e-02 5.568e-01 3.668e-02
    6   466  543 1.622e-02 4.107e-01 2.966e-02

Solving with Q1 elements, global refinement
===========================================

Cycle 0:
   Number of active cells:       4
   Number of degrees of freedom: 9
Cycle 1:
   Number of active cells:       16
   Number of degrees of freedom: 25
Cycle 2:
   Number of active cells:       64
   Number of degrees of freedom: 81
Cycle 3:
   Number of active cells:       256
   Number of degrees of freedom: 289
Cycle 4:
   Number of active cells:       1024
   Number of degrees of freedom: 1089
Cycle 5:
   Number of active cells:       4096
   Number of degrees of freedom: 4225
Cycle 6:
   Number of active cells:       16384
   Number of degrees of freedom: 16641

cycle cells  dofs     L2        H1      Linfty
    0     4     9 1.198e+00 2.732e+00 1.383e+00
    1    16    25 8.281e-02 1.190e+00 1.808e-01
    2    64    81 8.142e-02 1.129e+00 1.294e-01
    3   256   289 2.113e-02 5.828e-01 4.917e-02
    4  1024  1089 5.319e-03 2.934e-01 1.359e-02
    5  4096  4225 1.332e-03 1.469e-01 3.482e-03
    6 16384 16641 3.332e-04 7.350e-02 8.758e-04

n cells      H1              L2
0     4 2.732e+00    - 1.198e+00     -    -
1    16 1.190e+00 1.20 8.281e-02 14.47 3.86
2    64 1.129e+00 0.08 8.142e-02  1.02 0.02
3   256 5.828e-01 0.95 2.113e-02  3.85 1.95
4  1024 2.934e-01 0.99 5.319e-03  3.97 1.99
5  4096 1.469e-01 1.00 1.332e-03  3.99 2.00
6 16384 7.350e-02 1.00 3.332e-04  4.00 2.00

Solving with Q2 elements, global refinement
===========================================

Cycle 0:
   Number of active cells:       4
   Number of degrees of freedom: 25
Cycle 1:
   Number of active cells:       16
   Number of degrees of freedom: 81
Cycle 2:
   Number of active cells:       64
   Number of degrees of freedom: 289
Cycle 3:
   Number of active cells:       256
   Number of degrees of freedom: 1089
Cycle 4:
   Number of active cells:       1024
   Number of degrees of freedom: 4225
Cycle 5:
   Number of active cells:       4096
   Number of degrees of freedom: 16641
Cycle 6:
   Number of active cells:       16384
   Number of degrees of freedom: 66049

cycle cells  dofs     L2        H1      Linfty
    0     4    25 1.433e+00 2.445e+00 1.286e+00
    1    16    81 7.912e-02 1.168e+00 1.728e-01
    2    64   289 7.755e-03 2.511e-01 1.991e-02
    3   256  1089 9.969e-04 6.235e-02 2.764e-03
    4  1024  4225 1.265e-04 1.571e-02 3.527e-04
    5  4096 16641 1.587e-05 3.937e-03 4.343e-05
    6 16384 66049 1.986e-06 9.847e-04 5.402e-06

n cells      H1              L2
0     4 2.445e+00    - 1.433e+00     -    -
1    16 1.168e+00 1.07 7.912e-02 18.11 4.18
2    64 2.511e-01 2.22 7.755e-03 10.20 3.35
3   256 6.235e-02 2.01 9.969e-04  7.78 2.96
4  1024 1.571e-02 1.99 1.265e-04  7.88 2.98
5  4096 3.937e-03 2.00 1.587e-05  7.97 2.99
6 16384 9.847e-04 2.00 1.986e-06  7.99 3.00
@endcode

One can see the error reduction upon grid refinement, and for the cases
where global refinement was performed, one can also read off the
convergence rates: the Q1 and Q2 elements converge linearly and
quadratically in the $H^1$ norm, and quadratically and cubically in the
$L_2$ norm, as expected.

Finally, the program generated various LaTeX tables. We show here the
convergence table of the Q2 element with global refinement, after
converting the format to HTML (the columns are the $H^1$ error and its
rate, followed by the $L_2$ error, its reduction factor from row to row,
and the corresponding rate):

<table align="center" border="1">
  <tr>
    <th>n</th> <th>cells</th>
    <th>H1-error</th> <th>rate</th>
    <th>L2-error</th> <th>reduction</th> <th>rate</th>
  </tr>
  <tr> <td>0</td> <td>4</td>     <td>2.445e+00</td> <td>-</td>    <td>1.433e+00</td> <td>-</td>     <td>-</td>    </tr>
  <tr> <td>1</td> <td>16</td>    <td>1.168e+00</td> <td>1.07</td> <td>7.912e-02</td> <td>18.11</td> <td>4.18</td> </tr>
  <tr> <td>2</td> <td>64</td>    <td>2.511e-01</td> <td>2.22</td> <td>7.755e-03</td> <td>10.20</td> <td>3.35</td> </tr>
  <tr> <td>3</td> <td>256</td>   <td>6.235e-02</td> <td>2.01</td> <td>9.969e-04</td> <td>7.78</td>  <td>2.96</td> </tr>
  <tr> <td>4</td> <td>1024</td>  <td>1.571e-02</td> <td>1.99</td> <td>1.265e-04</td> <td>7.88</td>  <td>2.98</td> </tr>
  <tr> <td>5</td> <td>4096</td>  <td>3.937e-03</td> <td>2.00</td> <td>1.587e-05</td> <td>7.97</td>  <td>2.99</td> </tr>
  <tr> <td>6</td> <td>16384</td> <td>9.847e-04</td> <td>2.00</td> <td>1.986e-06</td> <td>7.99</td>  <td>3.00</td> </tr>
</table>

diff --git a/deal.II/examples/step-8/doc/intro.dox b/deal.II/examples/step-8/doc/intro.dox new file mode 100644 index 0000000000..315a5b1d99 --- /dev/null +++ b/deal.II/examples/step-8/doc/intro.dox @@ -0,0 +1,322 @@ +

Introduction

In real life, most partial differential equations are really systems of
equations. Accordingly, the solutions are usually vector-valued. The
deal.II library supports such problems, and we will show that this is
mostly rather simple. The only places where things become more complicated
are the assembly of the matrix and the right hand side, but these are
easily understood as well.

In the example, we will want to solve the elastic equations. They are an
extension to Laplace's equation with a vector-valued solution that
describes the displacement in each space direction of an elastic body
which is subject to a force. Of course, the force is also vector-valued,
meaning that in each point it has a direction and an absolute value. The
elastic equations are the following:
@f[
  -
  \partial_j (c_{ijkl} \partial_k u_l)
  =
  f_i,
  \qquad
  i=1\ldots d,
@f]
where the values $c_{ijkl}$ are the stiffness coefficients and will
usually depend on the space coordinates. In many cases, one knows that the
material under consideration is isotropic, in which case, by introduction
of the two coefficients $\lambda$ and $\mu$, the coefficient tensor
reduces to
@f[
  c_{ijkl}
  =
  \lambda \delta_{ij} \delta_{kl} +
  \mu (\delta_{ik} \delta_{jl} + \delta_{il} \delta_{jk}).
@f]

The elastic equations can then be rewritten in a much simpler form:
@f[
   -
   \nabla \lambda (\nabla\cdot {\mathbf u})
   -
   (\nabla \cdot \mu \nabla) {\mathbf u}
   -
   \nabla\cdot \mu (\nabla {\mathbf u})^T
   =
   {\mathbf f},
@f]
and the respective bilinear form is then
@f[
  a({\mathbf u}, {\mathbf v}) =
  \left(
    \lambda \nabla\cdot {\mathbf u}, \nabla\cdot {\mathbf v}
  \right)_\Omega
  +
  \sum_{i,j}
  \left(
    \mu \partial_i u_j, \partial_i v_j
  \right)_\Omega
  +
  \sum_{i,j}
  \left(
    \mu \partial_i u_j, \partial_j v_i
  \right)_\Omega,
@f]
or, writing the first term as a sum over components as well:
@f[
  a({\mathbf u}, {\mathbf v}) =
  \sum_{k,l}
  \left(
    \lambda \partial_l u_l, \partial_k v_k
  \right)_\Omega
  +
  \sum_{i,j}
  \left(
    \mu \partial_i u_j, \partial_i v_j
  \right)_\Omega
  +
  \sum_{i,j}
  \left(
    \mu \partial_i u_j, \partial_j v_i
  \right)_\Omega.
@f]


How do we now assemble the matrix for such an equation? The first thing we
need is some knowledge about how the shape functions work in the case of
vector-valued finite elements. Basically, this comes down to the
following: let $n$ be the number of shape functions for the scalar finite
element of which we build the vector element (for example, we will use
bilinear functions for each component of the vector-valued finite element,
so the scalar finite element is the FE_Q(1) element which we have used in
previous examples already, and $n=4$ in two space dimensions). Further,
let $N$ be the number of shape functions for the vector element; in two
space dimensions, we need $n$ shape functions for each component of the
vector, so $N=2n$. Then, the $i$th shape function of the vector element
has the form
@f[
  \Phi_i({\mathbf x}) = \varphi_{base(i)}({\mathbf x})\ {\mathbf e}_{comp(i)},
@f]
where ${\mathbf e}_l$ is the $l$th unit vector and $comp(i)$ is the
function that tells us which component of $\Phi_i$ is the one that is
nonzero (for each vector shape function, only one component is nonzero).
$\varphi_{base(i)}(x)$ describes the space dependence of the shape
function, which is taken to be the $base(i)$-th shape function of the
scalar element.
Of course, while $i$ is in the range $0,\ldots,N-1$, the functions
$comp(i)$ and $base(i)$ have the ranges $0,1$ (in 2D) and $0,\ldots,n-1$,
respectively.

For example (though this sequence of shape functions is not guaranteed,
and you should not rely on it), the following layout could be used by the
library:
@f{eqnarray*}
  \Phi_0({\mathbf x}) &=&
  \left(\begin{array}{c}
    \varphi_0({\mathbf x}) \\ 0
  \end{array}\right),
  \\
  \Phi_1({\mathbf x}) &=&
  \left(\begin{array}{c}
    0 \\ \varphi_0({\mathbf x})
  \end{array}\right),
  \\
  \Phi_2({\mathbf x}) &=&
  \left(\begin{array}{c}
    \varphi_1({\mathbf x}) \\ 0
  \end{array}\right),
  \\
  \Phi_3({\mathbf x}) &=&
  \left(\begin{array}{c}
    0 \\ \varphi_1({\mathbf x})
  \end{array}\right),
  \ldots
@f}
where here
@f[
  comp(0)=0, \quad comp(1)=1, \quad comp(2)=0, \quad comp(3)=1, \quad \ldots
@f]
@f[
  base(0)=0, \quad base(1)=0, \quad base(2)=1, \quad base(3)=1, \quad \ldots
@f]

In all but very rare cases, you will not need to know which shape function
$\varphi_{base(i)}$ of the scalar element belongs to a shape function
$\Phi_i$ of the vector element. Let us therefore define
@f[
  \phi_i = \varphi_{base(i)}
@f]
by which we can write the vector shape function as
@f[
  \Phi_i({\mathbf x}) = \phi_{i}({\mathbf x})\ {\mathbf e}_{comp(i)}.
@f]
You can now safely forget about the function $base(i)$, at least for the
rest of this example program.

Now using these vector shape functions, we can write the discrete finite
element solution as
@f[
  {\mathbf u}_h({\mathbf x}) =
  \sum_i \Phi_i({\mathbf x})\ u_i
@f]
with scalar coefficients $u_i$. If we define an analogous function
${\mathbf v}_h$ as test function, we can write the discrete problem as
follows: find coefficients $u_i$ such that
@f[
  a({\mathbf u}_h, {\mathbf v}_h) = ({\mathbf f}, {\mathbf v}_h)
  \qquad
  \forall {\mathbf v}_h.
@f]

Inserting the definition of the bilinear form and the representation of
${\mathbf u}_h$ and ${\mathbf v}_h$ into this formula yields:
@f{eqnarray*}
  \sum_{i,j}
    u_i v_j
  \sum_{k,l}
  \left\{
  \left(
    \lambda \partial_l (\Phi_i)_l, \partial_k (\Phi_j)_k
  \right)_\Omega
  +
  \left(
    \mu \partial_l (\Phi_i)_k, \partial_l (\Phi_j)_k
  \right)_\Omega
  +
  \left(
    \mu \partial_l (\Phi_i)_k, \partial_k (\Phi_j)_l
  \right)_\Omega
  \right\}
\\
=
  \sum_j v_j
  \sum_l
  \left(
    f_l,
    (\Phi_j)_l
  \right)_\Omega.
@f}
We note that here and in the following, the indices $k,l$ run over spatial
directions, i.e. $0\le k,l < d$, and that the indices $i,j$ run over
degrees of freedom.

The local stiffness matrix on cell $K$ therefore has the following
entries:
@f[
  A^K_{ij}
  =
  \sum_{k,l}
  \left\{
  \left(
    \lambda \partial_l (\Phi_i)_l, \partial_k (\Phi_j)_k
  \right)_K
  +
  \left(
    \mu \partial_l (\Phi_i)_k, \partial_l (\Phi_j)_k
  \right)_K
  +
  \left(
    \mu \partial_l (\Phi_i)_k, \partial_k (\Phi_j)_l
  \right)_K
  \right\},
@f]
where $i,j$ now are local degrees of freedom and therefore $0\le i,j < N$.
In these formulas, we always take some component of the vector shape
functions $\Phi_i$, which are of course given as follows (see their
definition):
@f[
  (\Phi_i)_l = \phi_i \delta_{l,comp(i)},
@f]
with the Kronecker symbol $\delta_{nm}$.
Due to this, we can delete some of the sums over $k$ and $l$:
@f{eqnarray*}
  A^K_{ij}
  &=&
  \sum_{k,l}
  \Bigl\{
  \left(
    \lambda \partial_l \phi_i\ \delta_{l,comp(i)},
    \partial_k \phi_j\ \delta_{k,comp(j)}
  \right)_K
\\
  &\qquad\qquad& +
  \left(
    \mu \partial_l \phi_i\ \delta_{k,comp(i)},
    \partial_l \phi_j\ \delta_{k,comp(j)}
  \right)_K
  +
  \left(
    \mu \partial_l \phi_i\ \delta_{k,comp(i)},
    \partial_k \phi_j\ \delta_{l,comp(j)}
  \right)_K
  \Bigr\}
\\
  &=&
  \left(
    \lambda \partial_{comp(i)} \phi_i,
    \partial_{comp(j)} \phi_j
  \right)_K
  +
  \sum_l
  \left(
    \mu \partial_l \phi_i,
    \partial_l \phi_j
  \right)_K
  \ \delta_{comp(i),comp(j)}
  +
  \left(
    \mu \partial_{comp(j)} \phi_i,
    \partial_{comp(i)} \phi_j
  \right)_K
\\
  &=&
  \left(
    \lambda \partial_{comp(i)} \phi_i,
    \partial_{comp(j)} \phi_j
  \right)_K
  +
  \left(
    \mu \nabla \phi_i,
    \nabla \phi_j
  \right)_K
  \ \delta_{comp(i),comp(j)}
  +
  \left(
    \mu \partial_{comp(j)} \phi_i,
    \partial_{comp(i)} \phi_j
  \right)_K.
@f}

Likewise, the contribution of cell $K$ to the right hand side vector is
@f{eqnarray*}
  f^K_j
  &=&
  \sum_l
  \left(
    f_l,
    (\Phi_j)_l
  \right)_K
\\
  &=&
  \sum_l
  \left(
    f_l,
    \phi_j \delta_{l,comp(j)}
  \right)_K
\\
  &=&
  \left(
    f_{comp(j)},
    \phi_j
  \right)_K.
@f}

This is the form in which we will implement the local stiffness matrix and
right hand side vectors.

As a final note: in the @ref step_17 "step-17" example program, we will
revisit the elastic problem laid out here, and will show how to solve it
in parallel on a cluster of computers. The resulting program will thus be
able to solve this problem to significantly higher accuracy, and more
efficiently if this is required. In addition, in @ref step_20 "step-20",
we will revisit some vector-valued problems and show a few techniques that
may make it simpler to actually work through the steps shown above, with
FiniteElement::system_to_component_index etc.
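To connect the final formula for $A^K_{ij}$ to actual code: the function
$comp(i)$ is available in deal.II as fe.system_to_component_index(i).first
(the vector-valued element itself would be built as something like
FESystem&lt;dim&gt; fe(FE_Q&lt;dim&gt;(1), dim)). Here is a rough sketch,
not the program itself, of how the local matrix could then be assembled;
the arrays lambda_values and mu_values holding the coefficient values at
the quadrature points are assumptions for illustration:
@code
for (unsigned int i=0; i<dofs_per_cell; ++i)
  {
    const unsigned int comp_i = fe.system_to_component_index(i).first;
    for (unsigned int j=0; j<dofs_per_cell; ++j)
      {
        const unsigned int comp_j = fe.system_to_component_index(j).first;
        for (unsigned int q=0; q<n_q_points; ++q)
          cell_matrix(i,j)
            += (            // (lambda d_comp(i) phi_i, d_comp(j) phi_j)_K
                fe_values.shape_grad(i,q)[comp_i] *
                fe_values.shape_grad(j,q)[comp_j] *
                lambda_values[q]
                +           // (mu d_comp(j) phi_i, d_comp(i) phi_j)_K
                fe_values.shape_grad(i,q)[comp_j] *
                fe_values.shape_grad(j,q)[comp_i] *
                mu_values[q]
                +           // (mu grad phi_i, grad phi_j)_K, but only
                            // if comp(i) == comp(j)
                ((comp_i == comp_j) ?
                 (fe_values.shape_grad(i,q) *
                  fe_values.shape_grad(j,q) *
                  mu_values[q]) :
                 0))
               * fe_values.JxW(q);
      }
  }
@endcode

diff --git a/deal.II/examples/step-8/doc/results.dox b/deal.II/examples/step-8/doc/results.dox new file mode 100644 index 0000000000..bc6a11423e --- /dev/null +++ b/deal.II/examples/step-8/doc/results.dox @@ -0,0 +1,47 @@ +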

Results

There is not much to be said about the results of this program, apart from
the fact that they look nice. All images were made using GMV from the
output files that the program wrote to disk. The first picture shows the
displacement as a vector field, where one vector is shown at each vertex
of the grid:

@image html step-8.vectors.png

You can clearly see the sources of x-displacement around x=0.5 and x=-0.5,
and of y-displacement at the origin. The next image shows the final grid
after eight steps of refinement:

@image html step-8.grid.png

Finally, the x-displacement and y-displacement are displayed separately:

<table align="center">
  <tr>
    <td>
      @image html step-8.x.png
    </td>
    <td>
      @image html step-8.y.png
    </td>
  </tr>
</table>
It should be noted that intuitively one would have expected the solution
to be symmetric about the x- and y-axes since the x- and y-forces are
symmetric with respect to these axes. However, the force considered as a
vector is not symmetric and consequently neither is the solution.
+ diff --git a/deal.II/examples/step-9/doc/intro.dox b/deal.II/examples/step-9/doc/intro.dox new file mode 100644 index 0000000000..93caff4b43 --- /dev/null +++ b/deal.II/examples/step-9/doc/intro.dox @@ -0,0 +1,286 @@ + +

Introduction

+ + +In this example, our aims are the following: +
    +
  1. solve the advection equation $\beta \cdot \nabla u = f$; +
2. show how we can use multiple threads to arrive at the desired results
   more quickly if we have a multi-processor machine;
  3. develop a simple refinement criterion. +
+While the second aim is difficult to describe in general terms without +reference to the code, we will discuss the other two aims in the +following. The use of multiple threads will then be detailed at the +relevant places within the program. Furthermore, there exists a report on this +subject, which is also available online from the ``Documentation'' section of +the deal.II homepage. + + +

Discretizing the advection equation

In the present example program, we shall numerically approximate the
solution of the advection equation
@f[
  \beta \cdot \nabla u = f,
@f]
where $\beta$ is a vector field that describes the advection direction and
speed (which may be dependent on the space variables), $f$ is a source
function, and $u$ is the solution. The physical process that this equation
describes is that of a given flow field $\beta$, with which another
substance is transported, the density or concentration of which is given
by $u$. The equation does not contain diffusion of this second species
within its carrier substance, but there are source terms.

It is obvious that at the inflow, the above equation needs to be augmented
by boundary conditions:
@f[
  u = g \qquad\qquad \mathrm{on}\ \partial\Omega_-,
@f]
where $\partial\Omega_-$ describes the inflow portion of the boundary and
is formally defined by
@f[
  \partial\Omega_-
  =
  \{{\mathbf x}\in \partial\Omega: \beta\cdot{\mathbf n}({\mathbf x}) < 0\},
@f]
with ${\mathbf n}({\mathbf x})$ being the outward normal to the domain at
point ${\mathbf x}\in\partial\Omega$. This definition is quite intuitive:
since ${\mathbf n}$ points outward, the scalar product with $\beta$ can
only be negative if the transport direction $\beta$ points inward, i.e. at
the inflow boundary. The mathematical theory states that we must not pose
any boundary condition on the outflow part of the boundary.

As stated, however, the transport equation cannot be solved stably with
the standard finite element method. The problem is that solutions to this
equation possess insufficient regularity orthogonal to the transport
direction: while they are smooth parallel to $\beta$, they may be
discontinuous perpendicular to this direction. These discontinuities lead
to numerical instabilities that make a stable solution by a
straightforward discretization impossible. We will thus use the streamline
diffusion stabilized formulation, in which we test the equation with test
functions $v + \delta \beta\cdot\nabla v$ instead of $v$, where $\delta$
is a parameter that is chosen in the range of the (local) mesh width $h$;
good results are usually obtained by setting $\delta=0.1h$. Note that the
modification in the test function vanishes as the mesh size tends to zero.
We will not discuss reasons, pros, and cons of the streamline diffusion
method, but rather use it ``as is'', and refer the interested reader to
the ample literature; every recent good book on finite elements should
have a discussion of that topic.

Using the test functions as defined above, the weak formulation of our
stabilized problem reads: find a discrete function $u_h$ such that for all
discrete test functions $v_h$ there holds
@f[
  (\beta \cdot \nabla u_h, v_h + \delta \beta\cdot\nabla v_h)_\Omega
  -
  (\beta\cdot {\mathbf n} u_h, v_h)_{\partial\Omega_-}
  =
  (f, v_h + \delta \beta\cdot\nabla v_h)_\Omega
  -
  (\beta\cdot {\mathbf n} g, v_h)_{\partial\Omega_-}.
@f]
Note that we have included the inflow boundary values into the weak form,
and that the respective terms in the left hand side operator are positive
definite due to the fact that $\beta\cdot{\mathbf n}<0$ on the inflow
boundary.
One would think that this leads to a system matrix to be inverted of the
form
@f[
  a_{ij} =
  (\beta \cdot \nabla \varphi_i,
   \varphi_j + \delta \beta\cdot\nabla \varphi_j)_\Omega
  -
  (\beta\cdot {\mathbf n} \varphi_i, \varphi_j)_{\partial\Omega_-},
@f]
with basis functions $\varphi_i,\varphi_j$. However, this is a pitfall
that happens to every numerical analyst at least once (including the
author): we have here expanded the solution $u_h = u_i \varphi_i$, but if
we do so, we will have to solve the problem
@f[
  {\mathbf u}^T A = {\mathbf f}^T,
@f]
where ${\mathbf u}=(u_i)$, i.e. we have to solve the transpose problem of
what we might have expected naively. In order to obtain the usual form of
the linear system, it is therefore best to rewrite the weak formulation to
@f[
  (v_h + \delta \beta\cdot\nabla v_h, \beta \cdot \nabla u_h)_\Omega
  -
  (\beta\cdot {\mathbf n} v_h, u_h)_{\partial\Omega_-}
  =
  (v_h + \delta \beta\cdot\nabla v_h, f)_\Omega
  -
  (\beta\cdot {\mathbf n} v_h, g)_{\partial\Omega_-}
@f]
and then to obtain
@f[
  a_{ij} =
  (\varphi_i + \delta \beta \cdot \nabla \varphi_i,
   \beta\cdot\nabla \varphi_j)_\Omega
  -
  (\beta\cdot {\mathbf n} \varphi_i, \varphi_j)_{\partial\Omega_-},
@f]
as system matrix. We will assemble this matrix in the program; a short
sketch of the corresponding assembly loop follows the list of annotations
below.

There remains the solution of this linear system of equations. As the
resulting matrix is no longer symmetric positive definite, we cannot
employ the usual CG method. Suitable for the solution of systems like the
one at hand is the BiCGStab (bi-conjugate gradients stabilized) method,
which is also available in deal.II, so we will use it.


Regarding the exact form of the problem which we will solve, we use the
following domain and functions (in $d=2$ space dimensions):
@f{eqnarray*}
  \Omega &=& [-1,1]^d,
  \\
  \beta({\mathbf x})
  &=&
  \left(
    \begin{array}{c}1 \\ 1+\frac 45 \sin(8\pi x)\end{array}
  \right),
  \\
  f({\mathbf x})
  &=&
  \left\{
    \begin{array}{ll}
      \frac 1{10 s^d} &
      \mathrm{for}\ |{\mathbf x}-{\mathbf x}_0| < s, \\
      0 & \mathrm{otherwise},
    \end{array}
  \right.
  \qquad\qquad
  {\mathbf x}_0
  =
  \left(
    \begin{array}{c}-\frac 34 \\ -\frac 34\end{array}
  \right),
  \\
  g({\mathbf x})
  &=&
  e^{5(1-|{\mathbf x}|^2)} \sin(16\pi|{\mathbf x}|^2),
@f}
where the radius of the source bulb is chosen as $s=\frac 1{10}$. For
$d>2$, we extend $\beta$ and ${\mathbf x}_0$ by the same as the last
component. Regarding these functions, we have the following annotations:
    +
1. The advection field $\beta$ transports the solution roughly in the
   diagonal direction from the lower left to the upper right, but with a
   wiggle structure superimposed.
  2. The right hand side adds to the field generated by the inflow +boundary conditions a bulb in the lower left corner, which is then +transported along. +
  3. The inflow boundary conditions impose a weighted sinusoidal +structure that is transported along with the flow field. Since +$|{\mathbf x}|\ge 1$ on the boundary, the weighting term never gets very large. +
+ + +
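As announced above, here is the promised sketch of the cell contributions
to the system matrix. It is not the program's actual code; advection_field
(an object returning $\beta$ at a given point) and the remaining variable
names are illustrative assumptions, while the choice $\delta=0.1h$ follows
the description above:
@code
// cell contributions to the streamline diffusion form
// (phi_i + delta beta.grad phi_i, beta.grad phi_j)_K
const double delta = 0.1 * cell->diameter();

for (unsigned int q=0; q<n_q_points; ++q)
  {
    const Tensor<1,dim> beta
      = advection_field.value (fe_values.quadrature_point(q));

    for (unsigned int i=0; i<dofs_per_cell; ++i)
      for (unsigned int j=0; j<dofs_per_cell; ++j)
        cell_matrix(i,j)
          += ((fe_values.shape_value(i,q)
               +
               delta * (beta * fe_values.shape_grad(i,q)))
              *
              (beta * fe_values.shape_grad(j,q))
              *
              fe_values.JxW(q));
  }
@endcode
The inflow boundary term $-(\beta\cdot{\mathbf n}\,\varphi_i,
\varphi_j)_{\partial\Omega_-}$ would be treated with an FEFaceValues
object, in the same spirit as the boundary integral shown for the step-7
program above.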

A simple refinement criterion

In all previous examples with adaptive refinement, we have used an error
estimator first developed by Kelly et al., which assigns to each cell $K$
the following indicator:
@f[
  \eta_K =
  \left(
    \frac {h_K}{12}
    \int_{\partial K}
      [\partial_n u_h]^2 \; d\sigma
  \right)^{1/2},
@f]
where $[\partial_n u_h]$ denotes the jump of the normal derivative across
a face $\gamma\subset\partial K$ of the cell $K$. It can be shown that
this error indicator uses a discrete analogue of the second derivatives,
weighted by a power of the cell size that is adjusted to the linear
elements assumed to be in use here:
@f[
  \eta_K \approx
  C h \| \nabla^2 u \|_K,
@f]
which itself is related to the error size in the energy norm.

The problem with this error indicator in the present case is that it
assumes that the exact solution possesses second derivatives. This is
already questionable for solutions to Laplace's problem in some cases,
although there most problems allow solutions in $H^2$. If solutions are
only in $H^1$, then the second derivatives would be singular in some parts
(of lower dimension) of the domain and the error indicators would not
reduce there under mesh refinement. Thus, the algorithm would continuously
refine the cells around these parts, i.e. it would refine towards points
or lines (in 2d).

However, for the present case, solutions are usually not even in $H^1$
(and, unlike for Laplace's equation, this missing regularity is not the
exceptional case), so the error indicator described above is not really
applicable. We will thus develop an indicator that is based on a discrete
approximation of the gradient. Although the gradient often does not exist,
this is the only criterion available to us, at least as long as we use
continuous elements as in the present example. To start with, we note that
given two cells $K$, $K'$ whose centers are connected by the vector
${\mathbf y}_{KK'}$, we can approximate the directional derivative of a
function $u$ as follows:
@f[
  \frac{{\mathbf y}_{KK'}^T}{|{\mathbf y}_{KK'}|} \nabla u
  \approx
  \frac{u(K') - u(K)}{|{\mathbf y}_{KK'}|},
@f]
where $u(K)$ and $u(K')$ denote $u$ evaluated at the centers of the
respective cells. We now multiply the above approximation by
${\mathbf y}_{KK'}/|{\mathbf y}_{KK'}|$ and sum over all neighbors $K'$ of
$K$:
@f[
  \underbrace{
    \left(\sum_{K'} \frac{{\mathbf y}_{KK'} {\mathbf y}_{KK'}^T}
                         {|{\mathbf y}_{KK'}|^2}\right)}_{=:Y}
  \nabla u
  \approx
  \sum_{K'}
  \frac{{\mathbf y}_{KK'}}{|{\mathbf y}_{KK'}|}
  \frac{u(K') - u(K)}{|{\mathbf y}_{KK'}|}.
@f]
If the vectors ${\mathbf y}_{KK'}$ connecting $K$ with its neighbors span
the whole space (i.e. roughly: $K$ has neighbors in all directions), then
the term in parentheses in the left hand side expression forms a regular
matrix, which we can invert to obtain an approximation of the gradient of
$u$ on $K$:
@f[
  \nabla u
  \approx
  Y^{-1}
  \left(
    \sum_{K'}
    \frac{{\mathbf y}_{KK'}}{|{\mathbf y}_{KK'}|}
    \frac{u(K') - u(K)}{|{\mathbf y}_{KK'}|}
  \right).
@f]
We will denote the approximation on the right hand side by
$\nabla_h u(K)$, and we will use the following quantity as refinement
criterion:
@f[
  \eta_K = h^{1+d/2} |\nabla_h u_h(K)|,
@f]
which is inspired by the following (not rigorous) argument:
@f{eqnarray*}
  \|u-u_h\|^2_{L_2}
  &\le&
  C h^2 \|\nabla u\|^2_{L_2}
\\
  &\approx&
  C
  \sum_K
  h_K^2 \|\nabla u\|^2_{L_2(K)}
\\
  &\le&
  C
  \sum_K
  h_K^2 h_K^d \|\nabla u\|^2_{L_\infty(K)}
\\
  &\approx&
  C
  \sum_K
  h_K^{2+d} |\nabla_h u_h(K)|^2.
@f}
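To make the criterion concrete, the following sketch shows how $Y$ and
$\nabla_h u_h(K)$ might be computed on a single cell. It glosses over
details the program has to deal with (hanging nodes, neighbors that are
further refined, parallelization of the loop over cells), and the helper
value_at_center(), which evaluates $u_h$ at a cell's center, is an assumed
name, not deal.II API:
@code
Tensor<2,dim> Y;     // will hold the sum of y y^T / |y|^2
Tensor<1,dim> rhs;   // right hand side of the formula above

for (unsigned int n=0; n<GeometryInfo<dim>::faces_per_cell; ++n)
  if (!cell->at_boundary(n))
    {
      const typename DoFHandler<dim>::cell_iterator
        neighbor = cell->neighbor(n);

      // vector connecting the cell centers, its length, and direction
      Tensor<1,dim> y = neighbor->center() - cell->center();
      const double  distance = std::sqrt(y*y);
      y /= distance;

      for (unsigned int i=0; i<dim; ++i)
        for (unsigned int j=0; j<dim; ++j)
          Y[i][j] += y[i] * y[j];

      rhs += y * ((value_at_center(neighbor) - value_at_center(cell))
                  / distance);
    }

// invert Y to obtain the approximate gradient, then the indicator
const Tensor<1,dim> grad_u = invert(Y) * rhs;
const double eta_K = std::pow(cell->diameter(), 1.+dim/2.)
                     * std::sqrt(grad_u * grad_u);
@endcode

diff --git a/deal.II/examples/step-9/doc/results.dox b/deal.II/examples/step-9/doc/results.dox new file mode 100644 index 0000000000..0217858a78 --- /dev/null +++ b/deal.II/examples/step-9/doc/results.dox @@ -0,0 +1,50 @@ +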

Results

The results of this program are not particularly spectacular. They consist
of the console output, some grid files, and the solution on the finest
grid. First, the console output:
@code
Cycle 0:
   Number of active cells:       256
   Number of degrees of freedom: 289
Cycle 1:
   Number of active cells:       643
   Number of degrees of freedom: 785
Cycle 2:
   Number of active cells:       1618
   Number of degrees of freedom: 1915
Cycle 3:
   Number of active cells:       4090
   Number of degrees of freedom: 4800
Cycle 4:
   Number of active cells:       10324
   Number of degrees of freedom: 11943
Cycle 5:
   Number of active cells:       26002
   Number of degrees of freedom: 30035
@endcode
As can be seen, quite a number of cells are used on the finest level to
resolve the features of the solution. The final grid showing this is
displayed in the following picture:

@image html step-9.grid.png

The structure of the grid becomes understandable by looking at the
solution itself:

@image html step-9.solution.png

Note that the solution is composed of two parts: one that is transported
along the wiggly advection field from the left and lower boundaries to the
top right, and one that is created by the source in the lower left corner
and is then also transported along. The grid shown above is well-adapted
to resolve these features.
-- 2.39.5