From 89c4b3497b2e8c3702876e0177901aefe14605fc Mon Sep 17 00:00:00 2001
From: bangerth
Date: Sat, 22 Jul 2006 01:12:41 +0000
Subject: [PATCH] Remove intro.dox and results.dox files since those are now
 taken from examples/step-XX/doc anyway and should not be duplicated

git-svn-id: https://svn.dealii.org/trunk@13404 0785d39b-7218-0410-832d-ea1e28bc413d
---
 .../step-1.data/intro.dox    |  16 -
 .../step-1.data/results.dox  |  21 -
 .../step-10.data/intro.dox   |  61 --
 .../step-10.data/results.dox | 194 -----
 .../step-11.data/intro.dox   | 110 ---
 .../step-11.data/results.dox |  45 --
 .../step-12.data/intro.dox   | 266 -------
 .../step-12.data/results.dox |  89 ---
 .../step-13.data/intro.dox   | 187 -----
 .../step-13.data/results.dox | 190 -----
 .../step-14.data/intro.dox   | 408 ----------
 .../step-15.data/intro.dox   | 299 --------
 .../step-15.data/results.dox | 119 ---
 .../step-16.data/intro.dox   |  35 -
 .../step-16.data/results.dox |   2 -
 .../step-17.data/intro.dox   |  73 --
 .../step-17.data/results.dox | 208 -----
 .../step-19.data/intro.dox   | 121 ---
 .../step-19.data/results.dox | 260 -------
 .../step-2.data/intro.dox    |  17 -
 .../step-2.data/results.dox  |  58 --
 .../step-20.data/intro.dox   | 715 ------------------
 .../step-20.data/results.dox | 294 -------
 .../step-21.data/intro.dox   |   2 -
 .../step-21.data/results.dox |   2 -
 .../step-3.data/intro.dox    |  16 -
 .../step-3.data/results.dox  | 143 ----
 .../step-4.data/intro.dox    | 133 ----
 .../step-4.data/results.dox  | 111 ---
 .../step-5.data/intro.dox    |  37 -
 .../step-5.data/results.dox  | 163 ----
 .../step-6.data/intro.dox    |  57 --
 .../step-6.data/results.dox  | 164 ----
 .../step-7.data/intro.dox    | 154 ----
 .../step-7.data/results.dox  | 224 ------
 .../step-8.data/intro.dox    | 322 --------
 .../step-8.data/results.dox  |  47 --
 .../step-9.data/intro.dox    | 286 -------
 .../step-9.data/results.dox  |  50 --
 39 files changed, 5699 deletions(-)
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-1.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-1.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-10.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-10.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-11.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-11.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-12.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-12.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-13.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-13.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-14.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-15.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-15.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-16.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-16.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-17.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-17.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-19.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-19.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-2.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-2.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-20.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-20.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-21.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-21.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-3.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-3.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-4.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-4.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-5.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-5.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-6.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-6.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-7.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-7.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-8.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-8.data/results.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-9.data/intro.dox
 delete mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-9.data/results.dox

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-1.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-1.data/intro.dox
deleted file mode 100644
index 9faf332d1c..0000000000
--- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-1.data/intro.dox
+++ /dev/null
@@ -1,16 +0,0 @@

Introduction

In this first example, we don't actually do very much, but show two
techniques: the syntax for generating triangulation objects, and some
elements of simple loops over all cells. We create two grids: one
which is a regularly refined square (not very exciting, but a common
starting grid for some problems), and one which is a more geometric
attempt, a ring-shaped domain that is refined towards the inner
edge. The latter is certainly not very useful and is probably only
rarely used in numerical analysis for PDEs (although, to everyone's
surprise, it has actually found its way into the literature; see the
paper by M. Mu titled "PDE.MART: A network-based problem-solving
environment", ACM Trans. Math. Software, vol. 31, pp. 508-531, 2005
:-), but it looks nice and illustrates how loops over cells are
written and some of the things you can do with cells.
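As a teaser, here is a minimal sketch of what such a cell loop might
look like for the ring-shaped grid. The number of refinement steps,
the radii, and the include files are our assumptions for illustration,
not values prescribed by the program above:
@code
#include <base/point.h>
#include <grid/tria.h>
#include <grid/grid_generator.h>
#include <grid/tria_boundary_lib.h>

#include <cmath>

void make_ring_grid ()
{
  Triangulation<2> triangulation;

  // a ring around the origin
  const Point<2> center;
  const double inner_radius = 0.5,
               outer_radius = 1.0;
  GridGenerator::hyper_shell (triangulation,
                              center, inner_radius, outer_radius);

  // make sure new points on the boundary end up on the circles
  static const HyperShellBoundary<2> boundary_description (center);
  triangulation.set_boundary (0, boundary_description);

  for (unsigned int step=0; step<5; ++step)
    {
      // flag all cells with a vertex on the inner circle
      for (Triangulation<2>::active_cell_iterator
             cell = triangulation.begin_active();
           cell != triangulation.end(); ++cell)
        for (unsigned int v=0;
             v < GeometryInfo<2>::vertices_per_cell; ++v)
          if (std::fabs (center.distance (cell->vertex(v))
                         - inner_radius) < 1e-10)
            {
              cell->set_refine_flag ();
              break;
            }

      triangulation.execute_coarsening_and_refinement ();
    }
}
@endcode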

Results

The program has, after having been run, produced two grids, which look
like this:

@image html step-1.grid-1.png
@image html step-1.grid-2.png

The left one, well, is not very exciting. The right one is — at least
— unconventional.

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-10.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-10.data/intro.dox
deleted file mode 100644
index 2749945848..0000000000
--- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-10.data/intro.dox
+++ /dev/null
@@ -1,61 +0,0 @@

Introduction

This is a rather short example which only shows some aspects of using
higher order mappings. By mapping we mean the transformation between
the unit cell (i.e. the unit line, square, or cube) and the cells in
real space. In all the previous examples, we have implicitly used
linear or d-linear mappings; you will not have noticed this at all,
since this is what happens if you do not do anything special. However,
if your domain has curved boundaries, there are cases where the
piecewise linear approximation of the boundary (i.e. by straight line
segments) is not sufficient, and you want your computational domain to
be an approximation to the real domain using curved boundaries as
well. If the boundary approximation uses piecewise quadratic parabolas
to approximate the true boundary, then we say that this is a quadratic
or $Q_2$ approximation. If we use piecewise graphs of cubic
polynomials, then this is a $Q_3$ approximation, and so on.

For some differential equations, it is known that piecewise linear
approximations of the boundary, i.e. $Q_1$ mappings, are not
sufficient if the boundary of the domain is curved. Examples are the
biharmonic equation using $C^1$ elements, or the Euler equations on
domains with curved reflective boundaries. In these cases, it is
necessary to compute the integrals using a higher order mapping. The
reason, of course, is that if we do not use a higher order mapping,
the order of approximation of the boundary dominates the order of
convergence of the entire numerical scheme, irrespective of the order
of convergence of the discretization in the interior of the domain.

Rather than demonstrating the use of higher order mappings with one of
these more complicated examples, we do only a brief computation:
calculating the value of $\pi=3.141592653589793238462643\ldots$ by two
different methods.

The first method uses a triangulated approximation of the circle with
unit radius and integrates the unit function over it. Of course, if
the domain were the exact unit circle, then the area would be pi, but
since we only use an approximation by piecewise polynomial segments,
the value of the area is not exactly pi. However, it is known that as
we refine the triangulation, a $Q_p$ mapping approximates the boundary
with an order $h^{p+1}$, where $h$ is the mesh width. We will check
the values of the computed area of the circle and their convergence
towards pi under mesh refinement for different mappings. We will also
find a convergence behavior that is surprising at first, but has a
good explanation.

The second method works similarly, but this time does not use the area
of the triangulated unit circle, but rather its perimeter. Pi is then
approximated by half of the perimeter, as the radius is equal to one.
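To make the first method concrete before we dive into the program:
computing the area amounts to summing up the Jacobian-weighted
quadrature weights that an FEValues object returns when it is handed a
higher order mapping. A minimal sketch follows; the mapping degree,
the quadrature order, and all variable names here are our own choices
for illustration:
@code
const MappingQ<2> mapping (2);         // a Q_2 boundary approximation
const FE_Q<2>     fe (1);              // a dummy element; we only
                                       // need the mapped geometry
DoFHandler<2> dof_handler (triangulation);
dof_handler.distribute_dofs (fe);

const QGauss<2> quadrature (4);
FEValues<2> fe_values (mapping, fe, quadrature, update_JxW_values);

double area = 0;
for (DoFHandler<2>::active_cell_iterator
       cell = dof_handler.begin_active();
     cell != dof_handler.end(); ++cell)
  {
    fe_values.reinit (cell);
    // the sum of the Jacobian-times-weight values is the area of
    // the (curved) cell
    for (unsigned int q=0; q<fe_values.n_quadrature_points; ++q)
      area += fe_values.JxW (q);
  }
// for a triangulated unit circle, 'area' now approximates pi
@endcode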

Results

The program performs two tasks, the first being to generate a
visualization of the mapped domain, the second to compute pi by the
two methods described. Let us first take a look at the generated
graphics. They are generated in Gnuplot format, and can be viewed with
the commands
@code
set data style lines
set size 0.721, 1
set nokey
plot [-1:1][-1:1] "ball0_mapping_q1.dat"
@endcode
or using one of the other filenames. The second line makes sure that
the aspect ratio of the generated output is actually 1:1, i.e. a
circle is drawn as a circle on your screen, rather than as an
ellipse. The third line switches off the key in the graphic, as that
will only print information (the filename) which is not that important
right now.

The following table shows the triangulated computational domain for
$Q_1$, $Q_2$, and $Q_3$ mappings, for the original coarse grid (left),
and a once uniformly refined grid (right). If your browser does not
display these pictures in acceptable quality, view them one by one.
@image html step-10.ball_mapping_q1_ref0.png
@image html step-10.ball_mapping_q1_ref1.png

@image html step-10.ball_mapping_q2_ref0.png
@image html step-10.ball_mapping_q2_ref1.png

@image html step-10.ball_mapping_q3_ref0.png
@image html step-10.ball_mapping_q3_ref1.png
These pictures show the obvious advantage of higher order mappings:
they approximate the true boundary quite well even on rather coarse
meshes. To demonstrate this a little further, the following table
shows the upper right quarter of the circle of the coarse mesh, with
the exact circle drawn in dashed lines:
@image html step-10.quarter-q1.png
@image html step-10.quarter-q2.png
@image html step-10.quarter-q3.png
Obviously the quadratic mapping approximates the boundary quite well,
while for the cubic mapping the difference between approximated domain
and true one is hardly visible even for the coarse grid. You can also
see that the mapping only changes something at the outer boundaries of
the triangulation. In the interior, all lines are still represented by
linear functions, resulting in additional computations only on cells
at the boundary. Higher order mappings are therefore usually not
noticeably slower than lower order ones, because the additional
computations are only performed on a small subset of all cells.

The second purpose of the program was to compute the value of pi to
good accuracy. This is the output of this part of the program:
@code
Computation of Pi by the area:
==============================
Degree = 1
cells      eval.pi            error
    5 1.9999999999999998 1.1416e+00    -
   20 2.8284271247461898 3.1317e-01 1.87
   80 3.0614674589207178 8.0125e-02 1.97
  320 3.1214451522580520 2.0148e-02 1.99
 1280 3.1365484905459389 5.0442e-03 2.00
 5120 3.1403311569547521 1.2615e-03 2.00

Degree = 2
cells      eval.pi            error
    5 3.1045694996615869 3.7023e-02    -
   20 3.1391475703122276 2.4451e-03 3.92
   80 3.1414377167038303 1.5494e-04 3.98
  320 3.1415829366419019 9.7169e-06 4.00
 1280 3.1415920457576907 6.0783e-07 4.00
 5120 3.1415926155921126 3.7998e-08 4.00

Degree = 3
cells      eval.pi            error
    5 3.1465390309173475 4.9464e-03    -
   20 3.1419461263297386 3.5347e-04 3.81
   80 3.1416154689089382 2.2815e-05 3.95
  320 3.1415940909713274 1.4374e-06 3.99
 1280 3.1415927436051230 9.0015e-08 4.00
 5120 3.1415926592185492 5.6288e-09 4.00

Degree = 4
cells      eval.pi            error
    5 3.1418185737113964 2.2592e-04    -
   20 3.1415963919525050 3.7384e-06 5.92
   80 3.1415927128397780 5.9250e-08 5.98
  320 3.1415926545188264 9.2903e-10 5.99
 1280 3.1415926536042722 1.4479e-11 6.00
 5120 3.1415926535899668 1.7343e-13 6.38


Computation of Pi by the perimeter:
===================================
Degree = 1
cells      eval.pi            error
    5 2.8284271247461903 3.1317e-01    -
   20 3.0614674589207183 8.0125e-02 1.97
   80 3.1214451522580524 2.0148e-02 1.99
  320 3.1365484905459393 5.0442e-03 2.00
 1280 3.1403311569547525 1.2615e-03 2.00
 5120 3.1412772509327729 3.1540e-04 2.00

Degree = 2
cells      eval.pi            error
    5 3.1248930668550599 1.6700e-02    -
   20 3.1404050605605454 1.1876e-03 3.81
   80 3.1415157631807014 7.6890e-05 3.95
  320 3.1415878042798613 4.8493e-06 3.99
 1280 3.1415923498174538 3.0377e-07 4.00
 5120 3.1415926345932004 1.8997e-08 4.00

Degree = 3
cells      eval.pi            error
    5 3.1442603311164286 2.6677e-03    -
   20 3.1417729561193588 1.8030e-04 3.89
   80 3.1416041192612365 1.1466e-05 3.98
  320 3.1415933731961760 7.1961e-07 3.99
 1280 3.1415926986118001 4.5022e-08 4.00
 5120 3.1415926564043946 2.8146e-09 4.00

Degree = 4
cells      eval.pi            error
    5 3.1417078926581086 1.1524e-04    -
   20 3.1415945317216001 1.8781e-06 5.94
   80 3.1415926832497720 2.9660e-08 5.98
  320 3.1415926540544636 4.6467e-10 6.00
 1280 3.1415926535970535 7.2602e-12 6.00
 5120 3.1415926535899010 1.0805e-13 6.07
@endcode

One of the immediate observations from the output is that in all cases
the values converge quickly to the true value of
$\pi=3.141592653589793238462643$. Note that for the $Q_4$ mapping, the
last number is correct to 13 digits in both computations, which is
already quite a lot. However, also note that for the $Q_1$ mapping,
even on the finest grid the accuracy is significantly worse than on
the coarse grid for a $Q_4$ mapping!
The last column of the output shows the convergence order, in powers
of the mesh width $h$. In the introduction, we had stated that the
convergence order for a $Q_p$ mapping should be $h^{p+1}$. However, in
the example shown, the $Q_2$ and $Q_4$ mappings show a convergence
order of $h^{p+2}$! This fact, surprising at first, is readily
explained by the particular boundary we have chosen in this
example. In fact, the circle is described by the function
$\sqrt{1-x^2}$, which has the series expansion
$1-x^2/2-x^4/8-x^6/16+\ldots$ around $x=0$. Thus, for the quadratic
mapping, where the truncation error of the quadratic approximation
should be cubic, there is no such cubic term but only a quartic one,
which raises the convergence order to 4, instead of 3. The same
happens for the $Q_4$ mapping.

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-11.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-11.data/intro.dox
deleted file mode 100644
index d4671fcffd..0000000000
--- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-11.data/intro.dox
+++ /dev/null
@@ -1,110 +0,0 @@

Introduction

- -The problem we will be considering is the solution of Laplace's problem with -Neumann boundary conditions only: -@f{eqnarray*} - -\Delta u &=& f \qquad \mathrm{in}\ \Omega, - \\ - \partial_n u &=& g \qquad \mathrm{on}\ \partial\Omega. -@f} -It is well known that if this problem is to have a solution, then the forces -need to satisfy the compatibility condition -@f[ - \int_\Omega f\; dx + \int_{\partial\Omega} g\; ds = 0. -@f] -We will consider the special case that $\Omega$ is the circle of radius 1 -around the origin, and $f=-2$, $g=1$. This choice satisfies the compatibility -condition. - -The compatibility condition allows a solution of the above equation, but it -nevertheless retains an ambiguity: since only derivatives of the solution -appear in the equations, the solution is only determined up to a constant. For -this reason, we have to pose another condition for the numerical solution, -which fixes this constant. - -For this, there are various possibilities: -
    -
  1. Fix one node of the discretization to zero or any other fixed value.
     This amounts to an additional condition $u_h(x_0)=0$. Although this is
     common practice, it is not necessarily a good idea, since we know that
     the solutions of Laplace's equation are only in $H^1$, which does not
     allow for the definition of point values because it is not a subset of
     the continuous functions. Therefore, even though fixing one node is
     allowed for discretized functions, it is not for continuous functions,
     and one can often see this in a resulting error spike at this point in
     the numerical solution.
  2. Fixing the mean value over the domain to zero or any other value. This - is allowed on the continuous level, since $H^1(\Omega)\subset L^1(\Omega)$ - by Sobolev's inequality, and thus also on the discrete level since we - there only consider subsets of $H^1$. - -
  3. Fixing the mean value over the boundary of the domain to zero or any - other value. This is also allowed on the continuous level, since - $H^{1/2}(\partial\Omega)\subset L^1(\partial\Omega)$, again by Sobolev's - inequality. -
-We will choose the last possibility, since we want to demonstrate another -technique with it. - -While this describes the problem to be solved, we still have to figure out how -to implement it. Basically, except for the additional mean value constraint, -we have solved this problem several times, using Dirichlet boundary values, -and we only need to drop the treatment of Dirichlet boundary nodes. The use of -higher order mappings is also rather trivial and will be explained at the -various places where we use it; in almost all conceivable cases, you will only -consider the objects describing mappings as a black box which you need not -worry about, because their only uses seem to be to be passed to places deep -inside the library where functions know how to handle them (i.e. in the -FEValues classes and their descendents). - -The tricky point in this program is the use of the mean value -constraint. Fortunately, there is a class in the library which knows how to -handle such constraints, and we have used it quite often already, without -mentioning its generality. Note that if we assume that the boundary nodes are -spaced equally along the boundary, then the mean value constraint -@f[ - \int_{\partial \Omega} u(x) \; ds = 0 -@f] -can be written as -@f[ - \sum_{i\in\partial\Omega_h} u_i = 0, -@f] -where the sum shall run over all degree of freedom indices which are located -on the boundary of the computational domain. Let us denote by $i_0$ that index -on the boundary with the lowest number (or any other conveniently chosen -index), then the constraint can also be represented by -@f[ - u_{i_0} = \sum_{i\in\partial\Omega_h\backslash i_0} -u_i. -@f] -This, luckily, is exactly the form of constraints for which the -ConstraintMatrix class was designed. Note that we have used this -class in several previous examples for the representation of hanging nodes -constraints, which also have this form: there, the middle vertex shall have -the mean of the values of the adjacent vertices. In general, the -ConstraintMatrix class is designed to handle homogeneous constraints -of the form -@f[ - CU = 0 -@f] -where $C$ denotes a matrix, and $U$ the vector of nodal values. - -In this example, the mean value along the boundary allows just such a -representation, with $C$ being a matrix with just one row (i.e. there is only -one constraint). In the implementation, we will create a -ConstraintMatrix object, add one constraint (i.e. add another row to -the matrix) referring to the first boundary node $i_0$, and insert the weights -with which all the other nodes contribute, which in this example happens to be -just $-1$. - -Later, we will use this object to eliminate the first boundary node from the -linear system of equations, reducing it to one which has a solution without -the ambiguity of the constant shift value. One of the problems of the -implementation will be that the explicit elimination of this node results in a -number of additional elements in the matrix, of which we do not know in -advance where they are located and how many additional entries will be in each -of the rows of the matrix. We will show how we can use an intermediate object -to work around this problem. - -But now on to the implementation of the program solving this problem... 
diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-11.data/results.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-11.data/results.dox deleted file mode 100644 index a18a13cb86..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-11.data/results.dox +++ /dev/null @@ -1,45 +0,0 @@ - -

Results

This is what the program outputs:
@code
Using mapping with degree 1:
============================
cells  |u|_1    error
    5 0.680402 0.572912
   20 1.085518 0.167796
   80 1.208981 0.044334
  320 1.242041 0.011273
 1280 1.250482 0.002832
 5120 1.252605 0.000709

Using mapping with degree 2:
============================
cells  |u|_1    error
    5 1.050963 0.202351
   20 1.199642 0.053672
   80 1.239913 0.013401
  320 1.249987 0.003327
 1280 1.252486 0.000828
 5120 1.253108 0.000206

Using mapping with degree 3:
============================
cells  |u|_1    error
    5 1.086161 0.167153
   20 1.204349 0.048965
   80 1.240502 0.012812
  320 1.250059 0.003255
 1280 1.252495 0.000819
 5120 1.253109 0.000205
@endcode
As we expected, the convergence order for each of the different mappings
is clearly quadratic in the mesh size. What is interesting, though, is
that the error for a bilinear mapping (i.e. degree 1) is more than three
times larger than that for the higher order mappings; it is therefore
clearly advantageous in this case to use a higher order mapping, not
because it improves the order of convergence but just to reduce the
constant before the convergence order. On the other hand, using a cubic
mapping improves the result over the quadratic one only insignificantly,
except for the case of very coarse grids.

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-12.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-12.data/intro.dox
deleted file mode 100644
index 6f7bdb7fef..0000000000
--- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-12.data/intro.dox
+++ /dev/null
@@ -1,266 +0,0 @@

Introduction

- - -

Overview

- -This example is devoted to the discontinuous Galerkin method, or -in short: DG method. It includes the following topics. -
    -
  1. Discretization of the linear transport equation with the DG method.
  2. Two different assembling routines for the system matrix based on
     face terms given as a sum of integrals, namely one that
       1. loops over all cells and all their faces, and one that
       2. loops over all faces, where each face is treated only once.
  3. Time comparison of the two assembling routines.
- - -

Problem

- -The DG method was first introduced to discretize simple transport -equations. Over the past years DG methods have been applied to a -variety of problems and many different schemes were introduced -employing a big zoo of different convective and diffusive fluxes. As -this example's purpose is to illustrate some implementational issues -of the DG discretization only, here we simply consider the linear -transport equation -@f[ - \nabla\cdot \left\{{\mathbf \beta} u\right\}=f \qquad\mbox{in }\Omega, -\qquad\qquad\qquad\mathrm{[transport-equation]}@f] -subject to the boundary conditions -@f[ -u=g\quad\mbox{on }\Gamma_-, -@f] -on the inflow part $\Gamma_-$ of the boundary $\Gamma=\partial\Omega$ -of the domain. Here, ${\mathbf \beta}={\mathbf \beta}(x)$ denotes a -vector field, $f$ a source function, $u$ the (scalar) solution -function, $g$ a boundary value function, -@f[ -\Gamma_-:=\{x\in\Gamma, {\mathbf \beta}(x)\cdot{\bf n}(x)<0\} -@f] -the inflow part of the boundary of the domain and ${\bf n}$ denotes -the unit outward normal to the boundary $\Gamma$. Equation -[transport-equation] is the conservative version of the -transport equation already considered in step 9 of this tutorial. - -In particular, we consider problem [transport-equation] on -$\Omega=[0,1]^2$ with ${\mathbf \beta}=\frac{1}{|x|}(-x_2, x_1)$ -representing a circular counterclockwise flow field, $f=0$ and $g=1$ -on $x\in\Gamma_-^1:=[0,0.5]\times\{0\}$ and $g=0$ on $x\in -\Gamma_-\setminus \Gamma_-^1$. - - -
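As a small illustration of how such data enters a deal.II program, the
flow field ${\mathbf \beta}$ could be represented by a class like the
following sketch. It is restricted to 2d, and the interface shown here
is a simplification of what the program below actually uses:
@code
// the circular counterclockwise flow field beta(x) = (-x_2, x_1)/|x|
template <int dim>
class Beta
{
  public:
    void value (const Point<dim> &p,
                Point<dim>       &beta) const
      {
        Assert (dim == 2, ExcNotImplemented());

        beta(0) = -p(1);
        beta(1) =  p(0);
        beta   /= std::sqrt (beta.square());   // normalize to |beta|=1
      }
};
@endcode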

Discretization

- -Following the general paradigm of deriving DG discretizations for -purely hyperbolic equations, we first consider the general hyperbolic -problem -@f[ - \nabla\cdot {\mathcal F}(u)=f \qquad\mbox{in }\Omega, -@f] -subject to appropriate boundary conditions. Here ${\mathcal F}$ -denotes the flux function of the equation under consideration that in -our case, see equation [transport-equation], is represented by -${\mathcal F}(u)={\mathbf \beta} u$. For deriving the DG -discretization we start with a variational, mesh-dependent -formulation of the problem, -@f[ - \sum_\kappa\left\{-({\mathcal F}(u),\nabla v)_\kappa+({\mathcal - F}(u)\cdot{\bf n}, v)_{\partial\kappa}\right\}=(f,v)_\Omega, -@f] -that originates from [transport-equation] by multiplication with -a test function $v$ and integration by parts on each cell $\kappa$ of -the triangulation. Here $(\cdot, \cdot)_\kappa$ and $(\cdot, -\cdot)_{\partial\kappa}$ simply denote the integrals over the cell -$\kappa$ and the boundary $\partial\kappa$ of the cell, -respectively. To discretize the problem, the functions $u$ and $v$ are -replaced by discrete functions $u_h$ and $v_h$ that in the case of -discontinuous Galerkin methods belong to the space $V_h$ of -discontinuous piecewise polynomial functions of some degree $p$. Due -to the discontinuity of the discrete function $u_h$ on interelement -faces, the flux ${\mathcal F}(u)\cdot{\bf n}$ must be replaced by a -numerical flux function ${\mathcal H}(u_h^+, u_h^-, {\bf n})$, -where $u_h^+|_{\partial\kappa}$ denotes the inner trace (w.r.t. the -cell $\kappa$) of $u_h$ and $u_h^-|_{\partial\kappa}$ the outer trace, -i.e. the value of $u_h$ on the neighboring cell. Furthermore the -numerical flux function ${\mathcal H}$, among other things, must be -consistent, i.e. -@f[ -{\mathcal H}(u,u,{\bf n})={\mathcal F}(u)\cdot{\bf n}, -@f] -and conservative, i.e. -@f[ -{\mathcal H}(v,w,{\bf n})=-{\mathcal H}(w,v,-{\bf n}). -\qquad\qquad\qquad\mathrm{[conservative]}@f] -This yields the following discontinuous Galerkin - discretization: find $u_h\in V_h$ such that -@f[ - \sum_\kappa\left\{-({\mathcal F}(u_h),\nabla v_h)_\kappa+({\mathcal H}(u_h^+,u_h^-,{\bf n}), v_h)_{\partial\kappa}\right\}=(f,v_h)_\Omega, \quad\forall v_h\in V_h. -\qquad\qquad\qquad\mathrm{[dg-scheme]}@f] -Boundary conditions are realized by replacing $u_h^-$ on the inflow boundary $\Gamma_-$ by the boundary function $g$. -In the special case of the transport equation -[transport-equation] the numerical flux in its simplest form -is given by -@f[ - {\mathcal H}(u_h^+,u_h^-,{\bf n})(x)=\left\{\begin{array}{ll} - ({\mathbf \beta}\cdot{\bf n}\, u_h^-)(x),&\mbox{for } {\mathbf \beta}(x)\cdot{\bf n}(x)<0,\\ - ({\mathbf \beta}\cdot{\bf n}\, u_h^+)(x),&\mbox{for } {\mathbf \beta}(x)\cdot{\bf n}(x)\geq 0, -\end{array} -\right. -\qquad\qquad\qquad\mathrm{[flux-transport-equation]}@f] -where on the inflow part of the cell the value is taken from the -neighboring cell, $u_h^-$, and on the outflow part the value is -taken from the current cell, $u_h^+$. 
Hence, the discontinuous Galerkin
scheme for the transport equation [transport-equation] is given
by: find $u_h\in V_h$ such that for all $v_h\in V_h$ the following
equation holds:
@f[
  \sum_\kappa\left\{-(u_h,{\mathbf \beta}\cdot\nabla v_h)_\kappa
  +({\mathbf \beta}\cdot{\bf n}\, u_h, v_h)_{\partial\kappa_+}
  +({\mathbf \beta}\cdot{\bf n}\, u_h^-, v_h)_{\partial\kappa_-\setminus\Gamma}\right\}
  =(f,v_h)_\Omega-({\mathbf \beta}\cdot{\bf n}\, g, v_h)_{\Gamma_-},
\qquad\qquad\qquad\mathrm{[dg-transport]}@f]
where $\partial\kappa_-:=\{x\in\partial\kappa,
{\mathbf \beta}(x)\cdot{\bf n}(x)<0\}$ denotes the inflow boundary
and $\partial\kappa_+=\partial\kappa\setminus \partial \kappa_-$ the
outflow part of cell $\kappa$. Below, this equation will be referred
to as the first version of the DG method. We note that after a
second integration by parts, we obtain: find $u_h\in V_h$ such that
@f[
  \sum_\kappa\left\{(\nabla\cdot\{{\mathbf \beta} u_h\},v_h)_\kappa
  -({\mathbf \beta}\cdot{\bf n} [u_h], v_h)_{\partial\kappa_-}\right\}
  =(f,v_h)_\Omega, \quad\forall v_h\in V_h,
@f]
where $[u_h]=u_h^+-u_h^-$ denotes the jump of the discrete function
between two neighboring cells and is defined to be $[u_h]=u_h^+-g$ on
the boundary of the domain. This is the discontinuous Galerkin scheme
for the transport equation given in its original notation.
Nevertheless, we will base the implementation of the scheme on the
form given by [dg-scheme] and [flux-transport-equation],
or [dg-transport], respectively.

Finally, we rewrite [dg-scheme] in terms of a summation over all
faces where each face $e=\partial \kappa\cap\partial \kappa'$
between two neighboring cells $\kappa$ and $\kappa'$ occurs twice:
find $u_h\in V_h$ such that
@f[
  -\sum_\kappa({\mathcal F}(u_h),\nabla v_h)_\kappa+\sum_e\left\{({\mathcal H}(u_h^+,u_h^-,{\bf n}), v_h)_e+({\mathcal H}(u_h^-, u_h^+,-{\bf n}), v_h^-)_{e\setminus\Gamma}\right\}=(f,v_h)_\Omega \quad\forall v_h\in V_h.
\qquad\qquad\qquad\mathrm{[dg-scheme-faces-long]}@f]
By employing conservativity [conservative] of the numerical flux
this equation simplifies to: find $u_h\in V_h$ such that
@f[
  -\sum_\kappa({\mathcal F}(u_h),\nabla v_h)_\kappa+\sum_e({\mathcal H}(u_h^+,u_h^-,{\bf n}), [v_h])_{e\setminus\Gamma}+({\mathcal H}(u_h,g,{\bf n}), v_h)_{\Gamma}=(f,v_h)_\Omega \quad\forall v_h\in V_h.
\qquad\qquad\qquad\mathrm{[dg-scheme-faces]}@f]
Whereas the outer unit normal ${\bf n}|_{\partial\kappa}$ is uniquely
defined, this is not so for ${\bf n}_e$, as the latter might be the
normal from either side of the face. Hence, we need to fix the normal
${\bf n}$ on the face to be one of the two normals and denote the
other normal by $-{\bf n}$. This way we get $-{\bf n}$ in the second
face term in [dg-scheme-faces-long] that finally produces the
minus sign in the jump $[v_h]$ in equation [dg-scheme-faces].

For the linear transport equation [transport-equation]
equation [dg-scheme-faces] simplifies to
@f[
  -\sum_\kappa(u_h,{\mathbf \beta}\cdot\nabla v_h)_\kappa+\sum_e\left\{({\mathbf \beta}\cdot{\bf n}\, u_h, [v_h])_{e_+\setminus\Gamma}+({\mathbf \beta}\cdot{\bf n}\, u_h^-, [v_h])_{e_-\setminus\Gamma}\right\}=(f,v_h)_\Omega-({\mathbf \beta}\cdot{\bf n}\, g, v_h)_{\Gamma_-},
\qquad\qquad\qquad\mathrm{[dg-transport-gamma]}@f]
which will be referred to as the second version of the DG method.
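Written as code, the upwind flux [flux-transport-equation] is nothing
but a two-way branch; schematically (the function and variable names
are ours, not the program's):
@code
// upwind numerical flux H(u^+, u^-, n) for F(u) = beta*u: on inflow
// parts of the cell boundary (beta*n < 0) take the value from the
// neighboring cell, on outflow parts take it from the current cell
double numerical_flux (const double beta_dot_n,
                       const double u_plus,
                       const double u_minus)
{
  if (beta_dot_n < 0)
    return beta_dot_n * u_minus;
  else
    return beta_dot_n * u_plus;
}
@endcode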

Implementation

As already mentioned at the beginning of this example we will
implement the assembling of the system matrix in two different ways.
The first one will be based on the first version [dg-transport]
of the DG method that includes a sum of integrals over all cell
boundaries $\partial\kappa$. This is realized by a loop over all cells and
a nested loop over all faces of each cell. Thereby each inner face
$e=\partial\kappa\cap\partial \kappa'$ is treated twice, the first
time when the outer loop treats cell $\kappa$ and the second time when it
treats cell $\kappa'$. This way some values like the shape function
values at quadrature points on faces need to be computed twice.

To avoid this overhead and for comparison, we also implement the
assembling of the matrix in a second, different way. This will
be based on the second version [dg-transport-gamma] that
includes a sum of integrals over all faces $e$. Here, several
difficulties occur.
    -
  1. As degrees of freedom are associated with cells (and not with faces),
     and as a normal is only defined with respect to a cell adjacent to the
     face, we cannot simply run over all faces of the triangulation but need
     to perform the nested loop over all cells and all faces of each cell
     like in the first implementation. This is because in deal.II
     faces are accessible from cells but not vice versa.
  2. Due to the nested loop we arrive twice at each face. In order to
     assemble face terms only once, we either need to track which
     faces we have treated before, or we introduce a simple rule that
     decides from which of the two adjacent cells a face should be
     accessed and treated (a code sketch of this rule follows after
     this list). Here, we employ the second approach and define the
     following rule:
      -
    1. If the two cells adjacent to a face are of the same refinement level we access and treat the face from the cell with lower index on this level. -
    2. If the two cells are of different refinement levels we access - and treat the face from the coarser cell. -
    -
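Schematically, the rule just stated boils down to a single conditional
inside the nested loop over cells and faces. The following sketch
glosses over the details of how deal.II represents neighbors of
refined cells, which the program below treats carefully:
@code
// inside the loop over all cells and, per cell, over all face numbers:
if (cell->face(face_no)->at_boundary())
  {
    // boundary faces are always treated from the one adjacent cell
  }
else
  {
    const typename DoFHandler<dim>::cell_iterator
      neighbor = cell->neighbor (face_no);

    if (cell->face(face_no)->has_children())
      {
        // rule (b): the neighbor is finer, so the current (coarser)
        // cell treats the face, subface by subface
      }
    else if (neighbor->level() == cell->level()
             &&
             cell->index() < neighbor->index())
      {
        // rule (a): equal refinement level, and we are the cell with
        // the lower index, so the face is treated from here
      }
    // in all remaining cases, the face is treated when the loop
    // visits the other cell
  }
@endcode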
-Before we start with the description of the code we first introduce -its main ingredients. The main class is called -DGMethod. It comprises all basic objects like the -triangulation, the dofhandler, the system matrix and solution vectors. -Furthermore it has got some member functions, the most prominent of -which are the assemble_system1 and assemble_system2 -functions that implement the two different ways mentioned above for -assembling the system matrix. Within these assembling routines several -different cases must be distinguished while performing the nested -loops over all cells and all faces of each cell and assembling the -respective face terms. While sitting on the current cell and looking -at a specific face there are the cases -
    -
  1. face is at boundary, -
  2. neighboring cell is finer, -
  3. neighboring cell is of the same refinement level, and -
  4. neighboring cell is coarser -
-where the `neighboring cell' and the current cell have the mentioned -faces in common. In last three cases the assembling of the face terms -are almost the same. Hence, we can implement the assembling of the -face terms either by `copy and paste' (the lazy way, whose -disadvantages come up when the scheme or the equation might want to be -changed afterwards) or by calling a separate function that covers all -three cases. To be kind of educational within this tutorial we perform -the latter approach, of course. We go even further and encapsulate -this function and everything that is needed for assembling the -specific equation under consideration within a class called -DGTransportEquation. This class includes objects of all -equation--specific functions, the RHS and the -BoundaryValues class, both derived from the Function -class, and the Beta class representing the vector field. -Furthermore, the DGTransportEquation class comprises member -functions assemble_face_terms1 and -assemble_face_terms2 that are invoked by the -assemble_system1 and assemble_system2 functions of the -DGMethod, respectively, and the functions -assemble_cell_term and assemble_boundary_term that -are the same for both assembling routines. Due to the encapsulation of -all equation- and scheme-specific functions, the -DGTransportEquation class can easily be replaced by a similar -class that implements a different equation and a different DG method. -Indeed, the implementation of the assemble_system1 and -assemble_system2 functions of the DGMethod class will -be general enough to serve for different DG methods, different -equations, even for systems of equations (!) and, under small -modifications, for nonlinear problems. Finally, we note that the -program is dimension independent, i.e. after replacing -DGMethod<2> by DGMethod<3> the code runs in 3d. - - - - - - - diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-12.data/results.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-12.data/results.dox deleted file mode 100644 index 63ff406f6f..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-12.data/results.dox +++ /dev/null @@ -1,89 +0,0 @@ - -

Results

The output of this program consists of the console output, the eps
files containing the grids, and the solutions given in gnuplot format.
@code
Cycle 0:
   Number of active cells:       64
   Number of degrees of freedom: 256
Time of assemble_system1: 0.05
Time of assemble_system2: 0.04
solution1 and solution2 coincide.
Writing grid to ...
Writing solution to ...

Cycle 1:
   Number of active cells:       112
   Number of degrees of freedom: 448
Time of assemble_system1: 0.09
Time of assemble_system2: 0.07
solution1 and solution2 coincide.
Writing grid to ...
Writing solution to ...

Cycle 2:
   Number of active cells:       214
   Number of degrees of freedom: 856
Time of assemble_system1: 0.17
Time of assemble_system2: 0.14
solution1 and solution2 coincide.
Writing grid to ...
Writing solution to ...

Cycle 3:
   Number of active cells:       415
   Number of degrees of freedom: 1660
Time of assemble_system1: 0.32
Time of assemble_system2: 0.28
solution1 and solution2 coincide.
Writing grid to ...
Writing solution to ...

Cycle 4:
   Number of active cells:       796
   Number of degrees of freedom: 3184
Time of assemble_system1: 0.62
Time of assemble_system2: 0.52
solution1 and solution2 coincide.
Writing grid to ...
Writing solution to ...

Cycle 5:
   Number of active cells:       1561
   Number of degrees of freedom: 6244
Time of assemble_system1: 1.23
Time of assemble_system2: 1.03
solution1 and solution2 coincide.
Writing grid to ...
Writing solution to ...
@endcode

We see that, as expected, on each refinement step the two solutions
coincide. The difference in time between treating each face only once
(second version of the DG method) and treating each face twice within
a nested loop over all cells and all faces of each cell (first
version) is much smaller than one might have expected: the gain is
less than 20% on the last few refinement steps.

First we show the solutions on the initial mesh, the mesh after two
and after five adaptive refinement steps.

@image html step-12.sol-0.png
@image html step-12.sol-2.png
@image html step-12.sol-5.png

Then we show the final grid (after 5 refinement steps). The
grid is largely concentrated in the vicinity of the jump of the
solution.

@image html step-12.grid-5.png

And finally we show a plot of a 3d computation.

@image html step-12.sol-5-3d.png

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-13.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-13.data/intro.dox
deleted file mode 100644
index 7362602221..0000000000
--- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-13.data/intro.dox
+++ /dev/null
@@ -1,187 +0,0 @@

Introduction

- -

Background and purpose

- - -In this example program, we will not so much be concerned with -describing new ways how to use deal.II and its facilities, but rather -with presenting methods of writing modular and extensible finite -element programs. The main reason for this is the size and complexity -of modern research software: applications implementing modern error -estimation concepts and adaptive solution methods tend to become -rather large. For example, the three largest applications by the main -authors of deal.II, are at the time of writing of this example -program: -
    -
  1. a program for solving hyperbolic conservation equations by the
     Discontinuous Galerkin Finite Element method: 33,775 lines of
     code;
  2. a parameter estimation program: 28,980 lines of code; -
  3. a wave equation solver: 21,020 lines of code. -
-(The library proper - without example programs and -test suite - has slightly more than 150,000 lines of code as of spring 2002.) -In the opinion of the author of this example program, the sizes of these -applications are at the edge of what one person, even an experienced -programmer, can manage. - - - -The numbers above make one thing rather clear: monolithic programs that -are not broken up into smaller, mostly independent pieces have no way -of surviving, since even the author will quickly lose the overview of -the various dependencies between different parts of a program. Only -data encapsulation, for example using object oriented programming -methods, and modularization by defining small but fixed interfaces can -help structure data flow and mutual interdependencies. It is also an -absolute prerequisite if more than one person is developing a program, -since otherwise confusion will quickly prevail as one developer -would need to know if another changed something about the internals of -a different module if they were not cleanly separated. - - - -In previous examples, you have seen how the library itself is broken -up into several complexes each building atop the underying ones, but -relatively independent of the other ones: -
    -
  1. the triangulation class complex, with associated iterator classes; -
  2. the finite element classes; -
  3. the DoFHandler class complex, with associated iterators, built on - the triangulation and finite element classes; -
  4. the classes implementing mappings between unit and real cells; -
  5. the FEValues class complex, built atop the finite elements and - mappings. -
-Besides these, and a large number of smaller classes, there are of -course the following ``tool'' modules: -
    -
  1. output in various graphical formats; -
  2. linear algebra classes. -
- - - - -The goal of this program is now to give an example of how a relatively -simple finite element program could be structured such that we end up -with a set of modules that are as independent of each other as -possible. This allows to change the program at one end, without having to -worry that it might break at the other, as long as we do not touch the -interface through which the two ends communicate. The interface in -C++, of course, is the declaration of abstract base classes. - - - -Here, we will implement (again) a Laplace solver, although with a -number of differences compared to previous example programs: -
    -
  1. The classes that implement the process of numerically solving the
     equation are no longer responsible for driving the process of
     ``solving-estimating error-refining-solving again'', but we delegate
     this to external functions. This first allows us to use it as a
     building block in a larger context, where the solution of a
     Laplace equation might only be one part (for example, in a
     nonlinear problem, where Laplace equations might have to be solved
     in each nonlinear step). It would also allow us to build a framework
     around this class so that solvers for other equations (but with the
     same external interface) could be used instead, in case some
     techniques shall be evaluated for different types of partial
     differential equations.
  2. It splits the process of evaluating the computed solution off into a
     separate set of classes. The reason is that one is usually not
     interested in the solution of a PDE per se, but rather in certain
     aspects of it. For example, one might wish to compute the traction
     at a certain boundary in elastic computations, or the signal of
     a seismic wave at a receiver position at a given
     location. Sometimes, one might have an interest in several of
     these aspects. Since the evaluation of a solution is something
     that does not usually affect the process of solution, we split it
     off into a separate module, to allow for the development of such
     evaluation filters independently of the development of the solver
     classes.
  3. Separate the classes that implement mesh refinement from the - classes that compute the solution. -
  4. Separate the description of the test case with which we will - present the program, from the rest of the program. -
- - - -The things the program does are not new. In fact, this is more like a -melange of previous programs, cannibalizing various parts and -functions from earlier examples. It is the way they are arranged in -this program that should be the focus of the reader, i.e. the software -design techniques used in the program to achieve the goal of -implementing the desired mathematical method. However, we must -stress that software design is in part also a subjective matter: -different persons have different programming backgrounds and have -different opinions about the ``right'' style of programming; this -program therefore expresses only what the author considers useful -practice, and is not necessarily a style that you have to adopt in -order to write successful numerical software if you feel uncomfortable -with the chosen ways. It should serve as a case study, however, -inspiring the reader with ideas to the desired end. - - - -Once you have worked through the program, you will remark that it is -already somewhat complex in its structure. Nevertheless, it -only has about 850 lines of code, without comments. In real -applications, there would of course be comments and class -documentation, which would bring that to maybe 1200 lines. Yet, compared to -the applications listed above, this is still small, as they are 20 to -25 times as large. For programs as large, a proper design right from -the start is thus indispensible. Otherwise, it will have to be -redesigned at one point in its life, once it becomes too large to be -manageable. - - - -Despite of this, all three programs listed above have undergone major -revisions, or even rewrites. The wave program, for example, was once -entirely teared to parts when it was still significantly smaller, just -to assemble it again in a more modular form. By that time, it had -become impossible to add functionality without affecting older parts -of the code (the main problem with the code was the data flow: in time -dependent application, the major concern is when to store data to disk -and when to reload it again; if this is not done in an organized -fashion, then you end up with data released too early, loaded too -late, or not released at all). Although the present example program -thus draws from sevelar years of experience, it is certainly not -without flaws in its design, and in particular might not be suited for -an application where the objective is different. It should serve as an -inspiration for writing your own application in a modular way, to -avoid the pitfalls of too closely coupled codes. - - - -

What the program does

- - -What the program actually does is not even the main point of this -program, the structure of the program is more important. However, in a -few words, a description would be: solve the Laplace equation for a -given right hand side such that the solution is the function -$u(x,t)=\exp(x+\sin(10y+5x^2))$. The goal of the -computation is to get the value of the solution at the point -$x_0=(0.5,0.5)$, and to compare the accuracy with -which we resolve this value for two refinement criteria, namely global -refinement and refinement by the error indicator by Kelly et al. which -we have already used in previous examples. - - - -The results will, as usual, be discussed in the respective section of -this document. In doing so, we will find a slightly irritating -observation about the relative performance of the two refinement -criteria. In a later example program, building atop this one, we will -devise a different method that should hopefully perform better than -the techniques discussed here. - - - -So much now for all the theoretical and anecdotal background. The best -way of learning about a program is to look at it, so here it is: - diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-13.data/results.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-13.data/results.dox deleted file mode 100644 index 45b084700f..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-13.data/results.dox +++ /dev/null @@ -1,190 +0,0 @@ - -

Results

- - - -The results of this program are not that interesting - after all -its purpose was not to demonstrate some new mathematical idea, and -also not how to program with deal.II, but rather to use the material -which we have developed in the previous examples to form something -which demonstrates a way to build modern finite element software in a -modular and extensible way. - - - -Nevertheless, we of course show the results of the program. Of -foremost interest is the point value computation, for which we had -implemented the corresponding evaluation class. The results (i.e. the -output) of the program looks as follows: -@code - Running tests with "global" refinement criterion: - ------------------------------------------------- - Refinement cycle: 0 1 2 3 4 5 6 - DoFs u(x_0) - 25 1.2868 - 81 1.6945 - 289 1.4658 - 1089 1.5679 - 4225 1.5882 - 16641 1.5932 - 66049 1.5945 - - Running tests with "kelly" refinement criterion: - ------------------------------------------------ - Refinement cycle: 0 1 2 3 4 5 6 7 8 9 10 11 - DoFs u(x_0) - 25 1.2868 - 47 0.8775 - 89 1.5365 - 165 1.2974 - 316 1.6442 - 589 1.5221 - 1093 1.5724 - 2042 1.5627 - 3766 1.5916 - 7124 1.5876 - 13111 1.5942 - 24838 1.5932 -@endcode - - -What surprises here is that the the exact value is 1.59491554..., and that -it is obviously suprisingly complicated to compute the solution even to -only one per cent accuracy, although the solution is smooth (in fact -infinite often differentiable). This smoothness is shown in the -graphical output generated by the program, here coarse grid and the -first 9 refinement steps of the Kelly refinement indicator: - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@image html step-13.solution-kelly-0.png
@image html step-13.solution-kelly-1.png
@image html step-13.solution-kelly-2.png
@image html step-13.solution-kelly-3.png
@image html step-13.solution-kelly-4.png
@image html step-13.solution-kelly-5.png
@image html step-13.solution-kelly-6.png
@image html step-13.solution-kelly-7.png
@image html step-13.solution-kelly-8.png
@image html step-13.solution-kelly-9.png
While we're already looking at pictures, this is the eighth grid, as
viewed from top:

@image html step-13.grid-kelly-8.png

However, we are not yet finished with evaluating the point value
computation. In fact, plotting the error
$e=|u(x_0)-u_h(x_0)|$ for the two
refinement criteria yields the following picture:

@image html step-13.error.png

What is disturbing about this picture is that not only is the
adaptive mesh refinement not better than global refinement as one
would usually expect, it is even significantly worse since its
convergence is irregular, preventing all extrapolation techniques when
using the values of subsequent meshes! On the other hand, global
refinement provides a perfect $1/N$, i.e. $h^2$,
convergence history and provides every opportunity to even improve on
the point values by extrapolation. Global mesh refinement must
therefore be considered superior in this example! This is even more
surprising as the evaluation point is not somewhere in the left part
where the mesh is coarse, but rather to the right, and the adaptive
refinement should refine the mesh around the evaluation point as well.

We thus close the discussion of this example program with a question:

What is wrong with adaptivity if it is not better than
global refinement?

Exercise at the end of this example: There is a simple reason
for the bad and irregular behavior of the adapted mesh solutions. It
is simple to find out by looking at the mesh around the evaluation
point in each of the steps; the data for this is in the output files
of the program. An exercise would therefore be to modify the mesh
refinement routine such that the problem (once you spot it) is
avoided. The second exercise is to check whether the results are then
better than global refinement, and if so, whether an even better order
of convergence (in terms of the number of degrees of freedom) is
achieved, or only a better constant.

(Very brief answers for the impatient: at steps with larger
errors, the mesh is not regular at the point of evaluation, i.e. some
of the adjacent cells have hanging nodes; this destroys some
superapproximation effects of which the globally refined mesh can
profit. Answer 2: this quick hack
@code
  bool refinement_indicated = false;
  typename Triangulation<dim>::active_cell_iterator cell;
  for (cell=triangulation->begin_active();
       cell!=triangulation->end(); ++cell)
    for (unsigned int v=0; v<GeometryInfo<dim>::vertices_per_cell; ++v)
      if (cell->vertex(v) == Point<dim>(.5,.5))
        {
          cell->clear_coarsen_flag();
          refinement_indicated |= cell->refine_flag_set();
        };
  if (refinement_indicated)
    for (cell=triangulation->begin_active();
         cell!=triangulation->end(); ++cell)
      for (unsigned int v=0; v<GeometryInfo<dim>::vertices_per_cell; ++v)
        if (cell->vertex(v) == Point<dim>(.5,.5))
          cell->set_refine_flag ();
@endcode
in the refinement function of the Kelly refinement class right before
executing refinement would improve the results (exercise: what does
the code do?), making them consistently better than global
refinement. Behavior is still irregular, though, so no results about
an order of convergence are possible.)

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-14.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-14.data/intro.dox
deleted file mode 100644
index 6a6ffd3abd..0000000000
--- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-14.data/intro.dox
+++ /dev/null
@@ -1,408 +0,0 @@

Introduction

- -

The maths

- -The Heidelberg group of Professor Rolf Rannacher, to which the three main -authors of the deal.II library belonged during their PhD time and partly also -afterwards, has been involved with adaptivity and error estimation for finite -element discretizations since the mid-90ies. The main achievement is the -development of error estimates for arbitrary functionals of the solution, and -of optimal mesh refinement for its computation. - -We will not discuss the derivation of these concepts in too great detail, but -will implement the main ideas in the present example program. For a thorough -introduction into the general idea, we refer to the seminal work of Becker and -Rannacher @ref step_14_BR95 "[BR95]",@ref step_14_BR96r "[BR96r]", and the overview article of the same authors in -Acta Numerica @ref step_14_BR01 "[BR01]"; the first introduces the concept of error -estimation and adaptivity for general functional output for the Laplace -equation, while the second gives many examples of applications of these -concepts to a large number of other, more complicated equations. For -applications to individual types of equations, see also the publications by -Becker @ref step_14_Bec95 "[Bec95]", @ref step_14_Bec98 "[Bec98]", -Kanschat @ref step_14_Kan96 "[Kan96]", @ref step_14_FK97 "[FK97]", -Suttmeier @ref step_14_Sut96 "[Sut96]", @ref step_14_RS97 "[RS97]", @ref step_14_RS98c "[RS98c]", -@ref step_14_RS99 "[RS99]", -Bangerth @ref step_14_BR99b "[BR99b]", @ref step_14_Ban00w "[Ban00w]", -@ref step_14_BR01a "[BR01a]", @ref step_14_Ban02 "[Ban02]", and -Hartmann @ref step_14_Har02 "[Har02]", @ref step_14_HH01 "[HH01]", -@ref step_14_HH01b "[HH01b]". -All of these works, from the original introduction by Becker and Rannacher to -individual contributions to particular equations, have later been summarized -in a book by Bangerth and Rannacher that covers all of these topics, see -@ref step_14_BR03 "[BR03]". - - -The basic idea is the following: in applications, one is not usually -interested in the solution per se, but rather in certain aspects of it. For -example, in simulations of flow problems, one may want to know the lift or -drag of a body immersed in the fluid; it is this quantity that we want to know -to best accuracy, and whether the rest of the solution of the describing -equations is well resolved is not of primary interest. Likewise, in elasticity -one might want to know about values of the stress at certain points to guess -whether maximal load values of joints are safe, for example. Or, in radiative -transfer problems, mean flux intensities are of interest. - -In all the cases just listed, it is the evaluation of a functional $J(u)$ of -the solution which we are interested in, rather than the values of $u$ -everywhere. Since the exact solution $u$ is not available, but only its -numerical approximation $u_h$, it is sensible to ask whether the computed -value $J(u_h)$ is within certain limits of the exact value $J(u)$, i.e. we -want to bound the error with respect to this functional, $J(u)-J(u_h)$. - -For simplicity of exposition, we henceforth assume that both the quantity of -interest $J$, as well as the equation are linear, and we will in particular -show the derivation for the Laplace equation with homogeneous Dirichlet -boundary conditions, although the concept is much more general. For this -general case, we refer to the references listed above. The goal is to obtain -bounds on the error, $J(e)=J(u)-J(u_h)$. 
For this, let us denote by $z$ the
solution of a dual problem, defined as follows:
@f[
  a(\varphi,z) = J(\varphi) \qquad \forall \varphi,
@f]
where $a(\cdot,\cdot)$ is the bilinear form associated with the differential
equation, and the test functions are chosen from the corresponding solution
space. Then, taking the error $\varphi=e$ as a special test function, we have
that
@f[
  J(e) = a(e,z)
@f]
and we can, by Galerkin orthogonality, rewrite this as
@f[
  J(e) = a(e,z-\varphi_h)
@f]
for all possible functions $\varphi_h$ from the discrete test space.

Concretely, for Laplace's equation, the error identity reads
@f[
  J(e) = (\nabla e, \nabla(z-\varphi_h)).
@f]
For reasons that we will not explain, we do not want to use this formula as
is, but rather split the scalar products into terms on all cells, and
integrate by parts on each of them:
@f{eqnarray*}
  J(e)
  &=&
  \sum_K (\nabla (u-u_h), \nabla (z-\varphi_h))_K
  \\
  &=&
  \sum_K (-\Delta (u-u_h), z-\varphi_h)_K
  + (\partial_n (u-u_h), z-\varphi_h)_{\partial K}.
@f}
Next we use that $-\Delta u=f$, and that $\partial_n u$ is a quantity that is
continuous almost everywhere, so the terms involving $\partial_n u$ on one
cell cancel with those on its neighbor, where the normal vector has the
opposite sign. At the boundary of the domain, where there is no neighbor cell
with which this term could cancel, the weight $z-\varphi_h$ can be chosen as
zero, since $z$ has zero boundary values, and $\varphi_h$ can be chosen to
have the same.

Thus, we have
@f{eqnarray*}
  J(e)
  &=&
  \sum_K (f+\Delta u_h, z-\varphi_h)_K
  - (\partial_n u_h, z-\varphi_h)_{\partial K\backslash \partial\Omega}.
@f}
In a final step, note that when taking the normal derivative of $u_h$, we mean
the value of this quantity as taken from this side of the cell (for the usual
Lagrange elements, derivatives are not continuous across edges). We then
rewrite the above formula by exchanging half of the edge integral of cell $K$
with the neighbor cell $K'$, to obtain
@f{eqnarray*}
  J(e)
  &=&
  \sum_K (f+\Delta u_h, z-\varphi_h)_K
  - \frac 12 (\partial_n u_h|_K + \partial_{n'} u_h|_{K'},
              z-\varphi_h)_{\partial K\backslash \partial\Omega}.
@f}
Using that $n'=-n$ holds for the normal vectors, we define the jump of the
normal derivative by
@f[
  [\partial_n u_h] := \partial_n u_h|_K + \partial_{n'} u_h|_{K'}
  =
  \partial_n u_h|_K - \partial_n u_h|_{K'},
@f]
and get the final form after setting the discrete function $\varphi_h$, which
is by now still arbitrary, to the point interpolation of the dual solution,
$\varphi_h=I_h z$:
@f{eqnarray*}
  J(e)
  &=&
  \sum_K (f+\Delta u_h, z-I_h z)_K
  - \frac 12 ([\partial_n u_h],
              z-I_h z)_{\partial K\backslash \partial\Omega}.
@f}

With this, we have obtained an exact representation of the error of the finite
element discretization with respect to arbitrary (linear) functionals
$J(\cdot)$. Its structure is a weighted form of a residual estimator, as both
$f+\Delta u_h$ and $[\partial_n u_h]$ are cell and edge residuals that vanish
on the exact solution, and $z-I_h z$ are weights indicating how important the
residuals on a certain cell are for the evaluation of the given functional.
Furthermore, it is a cell-wise quantity, so we can use it as a mesh refinement
criterion. The question is: how do we evaluate it? After all, the evaluation
requires knowledge of the dual solution $z$, which carries the information
about the quantity we want to know to best accuracy.
In some very special cases, this dual solution is known. For example, if the
functional $J(\cdot)$ is the point evaluation, $J(\varphi)=\varphi(x_0)$, then
the dual solution has to satisfy
@f[
  -\Delta z = \delta(x-x_0),
@f]
with the Dirac delta function on the right hand side, and the dual solution is
the Green's function with respect to the point $x_0$. For simple geometries,
this function is analytically known, and we could insert it into the error
representation formula.

However, we do not want to restrict ourselves to such special cases. Rather,
we will compute the dual solution numerically, and approximate $z$ by some
numerically obtained $\tilde z$. We note that it is not sufficient to compute
this approximation $\tilde z$ using the same method as used for the primal
solution $u_h$, since then $\tilde z-I_h \tilde z=0$, and the overall error
estimate would be zero. Rather, the approximation $\tilde z$ has to be from a
larger space than the primal finite element space. There are various ways to
obtain such an approximation (see the cited literature), and we will choose to
compute it with a higher order finite element space. While this is certainly
not the most efficient way, it is simple since everything we need for it is
already in place, and it also allows for simple experimenting. For more
efficient methods, again refer to the given literature, in particular
@ref step_14_BR95 "[BR95]", @ref step_14_BR03 "[BR03]".

With this, we end the discussion of the mathematical side of this program and
turn to the actual implementation.
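To give the cell term of the error representation a concrete shape in code,
here is a minimal sketch -- not the actual step-14 implementation, whose
structure is described below -- of how $(f+\Delta u_h, z-I_h z)_K$ might be
evaluated once the primal solution and the dual weights $z-I_h z$ are
available as finite element functions on the same mesh. All function and
variable names are invented for this illustration, the deal.II calls follow
current naming (which differs in detail from the library version this text was
written for), and the edge jump terms are omitted:
@code
#include <deal.II/base/function.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

template <int dim>
void cell_residual_terms (const DoFHandler<dim> &dof_handler,
                          const Vector<double>  &primal_solution, // u_h
                          const Vector<double>  &dual_weights,    // z - I_h z
                          const Function<dim>   &rhs_function,    // f
                          Vector<float>         &error_indicators)
{
  const QGauss<dim> quadrature (4);
  FEValues<dim> fe_values (dof_handler.get_fe(), quadrature,
                           update_values | update_hessians |
                           update_quadrature_points | update_JxW_values);

  std::vector<double> laplacians (quadrature.size());
  std::vector<double> weights    (quadrature.size());
  std::vector<double> rhs_values (quadrature.size());

  unsigned int cell_index = 0;
  for (const auto &cell : dof_handler.active_cell_iterators())
    {
      fe_values.reinit (cell);
      // Delta u_h, z - I_h z, and f at the quadrature points:
      fe_values.get_function_laplacians (primal_solution, laplacians);
      fe_values.get_function_values     (dual_weights, weights);
      rhs_function.value_list (fe_values.get_quadrature_points(), rhs_values);

      // (f + Delta u_h, z - I_h z)_K by quadrature:
      double sum = 0;
      for (unsigned int q=0; q<quadrature.size(); ++q)
        sum += (rhs_values[q] + laplacians[q]) * weights[q] * fe_values.JxW(q);

      error_indicators(cell_index++) = sum;
    }
}
@endcode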

The software

The step-14 example program builds heavily on the techniques already used in
the @ref step_13 "step-13" program. Its implementation of the dual weighted residual error
estimator explained above is done by deriving a second class, properly called
DualSolver, from the Solver base class, and having a class
(WeightedResidual) that joins the two again and controls the solution
of the primal and dual problem, and then uses both to compute the error
indicator for mesh refinement.

The program continues the modular concept of the previous example by
implementing the dual functional, which describes the quantity of interest,
through an abstract base class, and providing two different functionals which
implement this interface. Adding a different quantity of interest is thus
simple.

One of the more fundamental differences is the handling of data. A common case
is that you develop a program that solves a certain equation, and test it with
different right hand sides, different domains, different coefficients and
boundary values, etc. Usually, these have to match, so that exact solutions
are known, or so that their combination makes sense at all.

We demonstrate how this can be achieved in a simple, yet very flexible
way. We will put everything that belongs to a certain setup into one class,
and provide a little C++ mortar around it, so that entire setups (domains,
coefficients, right hand sides, etc.) can be exchanged by only changing
something in one place.

Going this way a little further, we have also centralized all the other
parameters that describe how the program is to work in one place, such as the
order of the finite element, the maximal number of degrees of freedom, the
evaluation objects that shall be executed on the computed solutions, and so
on. This allows for simpler configuration of the program, and we will show in
a later program how to use a library class that can handle setting these
parameters by reading an input file. The general aim is to reduce the places
within a program where one may have to look when wanting to change some
parameter, as it has turned out in practice that one forgets where they are as
programs grow. Furthermore, putting all options describing what the program
does in a certain run into a file (that can be stored with the results) helps
repeatability of results more than if the various flags were set somewhere in
the program, where their exact values are forgotten after the next change to
this place.

Unfortunately, the program has become rather long. While this admittedly
reduces its usefulness as an example program, we think that it is a very good
starting point for development of a program for other kinds of problems,
involving different equations than the Laplace equation treated here.
Furthermore, it shows everything that we can show you about our way of a
posteriori error estimation, and its structure should make it simple for you
to adjust this method to other problems, other functionals, other geometries,
coefficients, etc.

The author believes that the present program is his masterpiece among the
example programs, regarding the mathematical complexity as well as the
simplicity with which extensions can be added. If you use this program as a basis
for your own programs, we would kindly like to ask you to state this fact and
the name of the author of the example program, Wolfgang Bangerth, in
publications that arise from it, if your program consists to a considerable
part of the example program.
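To illustrate the abstract-base-class idea for the dual functional, here is a
schematic sketch; the class and member names are made up for illustration and
the actual step-14 declarations may look different. The only thing the dual
solver needs from a quantity of interest is the discrete right hand side of
the dual problem $a(\varphi,z)=J(\varphi)$, so that is what the interface
exposes:
@code
#include <deal.II/base/geometry_info.h>
#include <deal.II/base/point.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

// Abstract description of a quantity of interest J(.).
template <int dim>
class DualFunctionalBase
{
public:
  virtual ~DualFunctionalBase () {}
  virtual void assemble_rhs (const DoFHandler<dim> &dof_handler,
                             Vector<double>        &rhs) const = 0;
};

// One concrete functional: point evaluation J(phi)=phi(x0). For a Lagrange
// element with a node at x0, the dual right hand side is a discrete delta
// function, i.e. a single 1 in the entry belonging to that node. The
// absolute tolerance used for the vertex comparison is a simplification.
template <int dim>
class PointValueEvaluation : public DualFunctionalBase<dim>
{
public:
  PointValueEvaluation (const Point<dim> &p) : evaluation_point (p) {}

  virtual void assemble_rhs (const DoFHandler<dim> &dof_handler,
                             Vector<double>        &rhs) const
  {
    rhs.reinit (dof_handler.n_dofs());
    for (const auto &cell : dof_handler.active_cell_iterators())
      for (unsigned int v=0; v<GeometryInfo<dim>::vertices_per_cell; ++v)
        if (cell->vertex(v).distance (evaluation_point) < 1e-12)
          rhs(cell->vertex_dof_index(v,0)) = 1;
  }

private:
  const Point<dim> evaluation_point;
};
@endcode
A second functional would be another class derived from
DualFunctionalBase, and swapping quantities of interest amounts to
instantiating a different object.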

Bibliography

- -
- -
@anchor step_14_Ban00w [Ban00w]
-
Wolfgang Bangerth. -
Mesh adaptivity and error control for a finite element approximation - of the elastic wave equation. -
In Alfredo Bermudez, Dolores Gomez, Christophe Hazard, Patrick - Joly, and Jean E. Roberts, editors, Proceedings of the Fifth - International Conference on Mathematical and Numerical Aspects of Wave - Propagation (Waves2000), Santiago de Compostela, Spain, 2000, pages - 725–729. SIAM, 2000. - - - -
@anchor step_14_Ban02 [Ban02]
-
Wolfgang Bangerth. -
Adaptive Finite Element Methods for the Identification of
Distributed Coefficients in Partial Differential Equations.
PhD thesis, University of Heidelberg, 2002. - - - -
@anchor step_14_BR99b [BR99b]
-
Wolfgang Bangerth and Rolf Rannacher. -
Finite element approximation of the acoustic wave equation: Error - control and mesh adaptation. -
East–West J. Numer. Math., 7(4):263–282, 1999. - - - -
@anchor step_14_BR03 [BR03]
-
Wolfgang Bangerth and Rolf Rannacher. -
Adaptive Finite Element Methods for Differential Equations. -
Birkhäuser Verlag, Basel, 2003. - - - -
@anchor step_14_BR01a [BR01a]
-
Wolfgang Bangerth and Rolf Rannacher. -
Adaptive finite element techniques for the acoustic wave equation. -
J. Comput. Acoustics, 9(2):575–591, 2001. - - - -
@anchor step_14_BR01 [BR01]
-
R. Becker and R. Rannacher. -
An optimal control approach to error estimation and mesh adaptation - in finite element methods. -
Acta Numerica, 10:1–102, 2001. - - - -
@anchor step_14_Bec95 [Bec95]
-
Roland Becker. -
An Adaptive Finite Element Method for the Incompressible - Navier-Stokes Equations on Time-dependent Domains. -
Dissertation, Universität Heidelberg, 1995. - - - -
@anchor step_14_Bec98 [Bec98]
-
Roland Becker. -
Weighted error estimators for finite element approximations of the - incompressible Navier-Stokes equations. -
Preprint 98-20, SFB 359, Universität Heidelberg, 1998. - - - -
@anchor step_14_BR96r [BR96r]
-
Roland Becker and Rolf Rannacher. -
A feed-back approach to error control in finite element methods: - Basic analysis and examples. -
East–West J. Numer. Math., 4:237–264, 1996. - - - -
@anchor step_14_BR95 [BR95]
-
Roland Becker and Rolf Rannacher. -
Weighted a posteriori error control in FE methods. -
In H. G. Bock et al., eds., ENUMATH 95, pages 621–637,
Paris, September 1998. World Scientific Publ., Singapore.
in @ref step_14_enumath97 "[enumath97]". - - - -
@anchor step_14_enumath97 [enumath97]
-
Hans Georg Bock, Franco Brezzi, Roland Glowinski, Guido Kanschat, Yuri A.
Kuznetsov, Jacques Periaux, and Rolf Rannacher, editors.
ENUMATH 97, Proceedings of the 2nd European Conference on - Numerical Mathematics and Advanced Applications, Singapore, 1998. World - Scientific. - - - -
@anchor step_14_FK97 [FK97]
-
Christian Führer and Guido Kanschat. -
A posteriori error control in radiative transfer. -
Computing, 58(4):317–334, 1997. - - - -
@anchor step_14_Har02 [Har02]
-
Ralf Hartmann. -
Adaptive Finite Element Methods for the Compressible Euler Equations. -
PhD thesis, University of Heidelberg, 2002. - - - -
@anchor step_14_HH01 [HH01]
-
Ralf Hartmann and Paul Houston. -
Adaptive discontinuous Galerkin finite element methods for - nonlinear hyperbolic conservation laws. -
SIAM J. Sci. Comput., 24:979–1004, 2002.


@anchor step_14_HH01b [HH01b]
-
Ralf Hartmann and Paul Houston. -
Adaptive discontinuous Galerkin finite element methods for the - compressible Euler equations. -
J. Comput. Phys., 183:508–532, 2002.


@anchor step_14_Kan96 [Kan96]
-
Guido Kanschat. -
Parallel and Adaptive Galerkin Methods for Radiative Transfer - Problems. -
Dissertation, Universität Heidelberg, 1996. - - - -
@anchor step_14_RS97 [RS97]
-
Rolf Rannacher and Franz-Theo Suttmeier. -
A feed-back approach to error control in finite element methods: - Application to linear elasticity. -
Comp. Mech., 19(5):434–446, 1997. - - - -
@anchor step_14_RS98c [RS98c]
-
Rolf Rannacher and Franz-Theo Suttmeier. -
A posteriori error control in finite element methods via duality - techniques: Application to perfect plasticity. -
Comp. Mech., 21(2):123–133, 1998. - - - -
@anchor step_14_RS99 [RS99]
-
Rolf Rannacher and Franz-Theo Suttmeier. -
A posteriori error control and mesh adaptation for finite element - models in elasticity and elasto-plasticity. -
Comput. Methods Appl. Mech. Engrg., pages 333–361, 1999. - - - -
@anchor step_14_Sut96 [Sut96]
-
Franz-Theo Suttmeier. -
Adaptive Finite Element Approximation of Problems in - Elasto-Plasticity Theory. -
Dissertation, Universität Heidelberg, 1996. - - - -
- - - - - diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-15.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-15.data/intro.dox deleted file mode 100644 index f89450a3ac..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-15.data/intro.dox +++ /dev/null @@ -1,299 +0,0 @@ - -

Introduction

- -

Foreword

This program demonstrates a number of techniques that have not been shown in
previous example programs. In particular, it shows how to program for
one-dimensional problems, and some aspects of what to do with nonlinear
problems, in particular how to transfer the solution from one grid to the next
finer one. Apart from this, however, the program does not attempt to do much
more than to entertain those who sometimes like to play with maths.

The application we chose is, as you will see, not even very well suited for
anything, since the problem it considers is rather impossible to solve. When I
started to write the program, I was not aware of this, and it only turned out
later that the optimization problem we are looking at here is severely plagued
by many, likely even degenerate, minima, and that we cannot really hope to
find a global one. What we do instead is start the optimization from many
initial guesses (which is cheap since the problem is 1d), and hope that the
best among them is a reasonable solution. While the whole thing, as an
application, is not very satisfactory, keep in mind that solving particular
applications is not the goal of the tutorial programs; rather, we would like
to demonstrate techniques of programming with deal.II, which is indeed the
focus here.

The problem

Now for a description of the problem. In the book by Dacorogna on the
Calculus of Variations, I found the following statement, which confused me
tremendously at first (see Section 3.4.3, ``Lavrentiev Phenomenon'', very
slightly edited):

@par Theorem 4.6:

  Let
  @f[
    I(u)=\int_0^1 (x-u^3)^2 (u')^6\; dx.
  @f]
  Let
  @f[
    {\cal W}_1 = \{ u\in W^{1,\infty}(0,1) : u(0)=0, u(1)=1 \}
  @f]
  @f[
    {\cal W}_2 = \{ u\in W^{1,1}(0,1) : u(0)=0, u(1)=1 \}
  @f]


@par

  Then
  @f[
    \inf_{u\in {\cal W}_1} I(u) \ge c_0 > 0 = \inf_{u\in {\cal W}_2} I(u).
  @f]
  Moreover the minimum of $I(u)$ over ${\cal W}_2$ is attained by
  $u(x)=x^{1/3}$.


@par Remarks.
  [...]

@par

  ii) it is interesting to note that if one uses the usual finite element
  methods (by taking piecewise affine functions, which are in $W^{1,\infty}$)
  one will not be able to detect the minimum of some integrals such as the one
  in the theorem.


In other words: minimizing the energy functional over one space
($W^{1,\infty}$) does not give the same value as when minimizing over a larger
space ($W^{1,1}$). Furthermore, they give a rough estimate of the value of the
constant $c_0$, which is $c_0=\frac{7^23^5}{2^{18}5^5}\approx 1.61\cdot
10^{-6}$ (although by their calculation it is obvious that this estimate is
far too small; the point, of course, is just to show that it is strictly
larger than zero).

While the theorem was not surprising, the remark stunned me at first. After
all, we know that we can approximate functions in $W^{1,1}$ to arbitrary
accuracy. Also, although it is true that finite element functions are in
$W^{1,\infty}$, this statement is not really accurate: if the function itself
is bounded pointwise by, say, a constant $C$, then its gradient is bounded by
$2C/h$, and thus $\|u_h\|_{1,\infty} \le 2C/h$. That means that we should be
able to lift this limit just by mesh refinement. Finite element functions are
therefore only in $W^{1,\infty}$ if one considers them on a fixed grid, not on
a sequence of successively finer grids. (Note, we can only lift the
boundedness in $W^{1,1}$ in the same way by considering functions that
oscillate at cell frequency; these, however, do not converge in any reasonable
measure.)

So it took me a while to see where the problem lies. Here it is: While we are
able to approximate functions to arbitrary accuracy in Sobolev
norms, this does not necessarily also hold with respect to the functional
$I(u)$! After all, this functional was made to show exactly these
pathologies.

What happens in this case is actually not so difficult to understand. Let us
look at what happens if we plug the lowest-order (piecewise linear)
interpolant $i_hu$ of the optimal solution $u=x^{1/3}$ into the functional
$I(u)$: on the leftmost cell, the left end of $i_hu$ is tagged to zero by the
boundary condition, and the right end has the value $i_hu(h)=u(h)=h^{1/3}$,
so that $i_hu(x)=h^{-2/3}x$ on this cell. So
let us only consider the contribution of this single cell to $I(u)$:
@f{eqnarray*}
  \int_0^h (x-(i_hu)^3)^2 ((i_hu)')^6 dx
  &=&
  \int_0^h (x-(h^{-2/3}x)^3)^2 (h^{-2/3})^6 dx
  \\
  &=&
  h^{-4} \int_0^h (x^2-2h^{-2}x^4+h^{-4}x^6) dx
  \\
  &=&
  h^{-4} (h^3/3-2h^3/5+h^3/7)
  \\
  &=& {\cal O}(h^{-1}).
@f}
Oops, even the contribution of the first cell blows up under mesh refinement,
and we have not even summed up the contributions of the other cells!
It turns out that the other cells are not really problematic (since the
gradient is bounded there by a constant independent of $h$), but we cannot
really avoid the trouble with the first cell: if instead of the interpolant we
choose some other finite element function that is closer on average to
$x^{1/3}$ than the interpolant above, then we have to increase the slope of
this function, since we have to obey the boundary condition at the left
end. But then we are hit by the weight $(u')^6$. This weight is simply too
strong!

On the other hand, the interpolation of the linear function $\varphi(x)=x$
connecting the boundary values has the finite energy
$I(i_h\varphi)=8/105\approx 0.076$,
independent of the mesh size. Thus, $i_hx^{1/3}$ cannot be the minimizer of the
energy as $h\rightarrow 0$. This is also easy to see by noting that
the minimal value of $I$ cannot increase under mesh
refinement: if it is finite for some function on some mesh, then it must be
less than or equal to that value on a finer mesh, since the original function is
still in the space spanned by the shape functions on the finer grid, as finite
element spaces are nested. However, the computation above shows that we should
not be surprised if the value of the functional does not converge to zero, but
rather to some finite value.

There is one more conclusion to be drawn from the blow-up lesson above: we
cannot expect the finite dimensional approximation to be close to the root
function at the left end of the domain, for any mesh we choose, because if it
were, its energy would have to blow up. And we will see exactly this
in the results section below.
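If you want to see the blow-up without any finite element machinery, the
following self-contained snippet (an illustration added here, not part of the
tutorial program) evaluates $I(i_hu)$ for $u=x^{1/3}$ on a sequence of uniform
meshes using a composite midpoint rule on each cell; the computed values grow
roughly in proportion to the number of cells, i.e. like $1/h$, as derived
above:
@code
#include <cmath>
#include <cstdio>

// Energy I(i_h u) of the piecewise linear interpolant of u(x)=x^{1/3}
// on a uniform mesh with n_cells cells, by midpoint quadrature.
double energy_of_interpolant (const unsigned int n_cells)
{
  const unsigned int q_per_cell = 1000; // the integrand is steep near x=0
  const double h = 1.0 / n_cells;
  double I = 0;
  for (unsigned int k = 0; k < n_cells; ++k)
    {
      const double ul = std::cbrt(k * h), ur = std::cbrt((k+1) * h);
      const double slope = (ur - ul) / h;
      for (unsigned int q = 0; q < q_per_cell; ++q)
        {
          const double x = k*h + (q + 0.5) * h / q_per_cell;
          const double u = ul + slope * (x - k*h);
          const double d = x - u*u*u;
          I += d*d * std::pow(slope, 6) * h / q_per_cell;
        }
    }
  return I;
}

int main ()
{
  for (unsigned int n = 2; n <= 1024; n *= 2)
    std::printf ("n=%4u   I(i_h u) = %g\n", n, energy_of_interpolant(n));
}
@endcode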

What to do?

After this somewhat theoretical introduction, let us, just once in our lives,
have fun with pure mathematics, and actually see what happens in this problem
when we run the finite element method on it. So here it goes: to find the
minimum of $I(u)$, we have to find its stationary point. The condition for
this reads
@f[
  I'(u,\varphi)
  =
  \int_0^1 6 (x-u^3) (u')^5 \{ (x-u^3)\varphi' - u^2 u' \varphi\}\ dx
  = 0
@f]
for all test functions $\varphi$ from the same space as that from which we
take $u$, but with zero boundary conditions. If this space allows us to
integrate by parts, then we could associate this with a two point boundary
value problem
@f{eqnarray*}
  -(x-u^3) u^2(u')^6
  - \frac{d}{dx} \left\{(x-u^3)^2 (u')^5\right\} = 0,
  \qquad\qquad u(0)=0,
  \quad u(1)=1.
@f}
Note that this equation degenerates wherever $u^3=x$, which is at least the
case at $x\in\{0,1\}$ due to the prescribed boundary values for $u$, but
possibly at other places as well. However, for finite elements, we will want
to have the equation in weak form anyway. Since the equation is still
nonlinear, one may be tempted to compute iterates
$u_{k+1}=u_k+\alpha_k\delta u_k$ using a Newton method for updates $\delta
u_k$, like in
@f[
  I''(u_k,\delta u_k,\varphi)
  =
  -I'(u_k, \varphi).
@f]
However, since $I''(u_k,\cdot,\cdot)$ may be an indefinite operator (and, as
numerical experiments indicate, in fact is during typical computations), we
don't want to use this. Instead, we use a gradient method, for which we
compute updates according to the following scheme:
@f{eqnarray*}
  \left<\delta u_k,\varphi\right>
  =
  -I'(u_k, \varphi).
@f}
For the scalar product on the left hand side, there are multiple valid choices;
we choose the mesh dependent definition $\left<u,v\right> = \int_\Omega (uv +
h(x)^2 \nabla u\cdot \nabla v)\; dx$, where the weight $h(x)^2$, i.e. using
the local mesh width, is chosen so that the definition is dimensionally
consistent. It also yields a matrix on the left hand side that is simple to
invert, as it is the sum of the well-conditioned mass matrix, and a Laplace
matrix times a factor that counters the growth of the condition number of the
Laplace matrix.

The step length $\alpha_k$ is then computed using a one-dimensional line search
finding
@f{eqnarray*}
  \alpha_k = \arg\min_\alpha I(u_k+\alpha\delta u_k),
@f}
or at least an approximation to this using a one-dimensional Newton method
which itself has a line search. The details of this can be found in the code.
We iterate the updates and line searches until the change in energy $I(u_k)$
becomes too small to warrant any further iterations.

The basic idea that you should take away from all this is that we formulate the
optimization method in a function space, and will only discretize each step
separately. A number of subsequent steps will be done on the same mesh, before
we refine it and go on to do the same on the next finer mesh.

As for mesh refinement, it is instructive to recall how residual based error
estimates like the one used in the error estimator of Kelly et al. are usually
derived (the Kelly estimator is the one that we have used in most of the
previous example programs).
In a similar way, by looking at the residual of
the strong form of the nonlinear equation we attempt here to solve, we may be
tempted to consider the following expression for refinement of cell $K$:
@f{eqnarray*}
  \eta_K^2 &=&
  h^2 \left\|
  (x-u_h^3) (u_h')^4 \left\{ u_h^2 (u_h')^2 + 5(x-u_h^3)u_h'' + 2u_h'(1-3u_h^2u_h') \right\}
  \right\|^2_K
  \\
  && +
  h \left| (x-u_h^3)^2 [(u_h')^5] \right|^2_{\partial K},
@f}
where $[\cdot]$ is the jump of a quantity across an intercell boundary, and
$|\cdot|_{\partial K}$ is the sum of the quantity evaluated at the two end
points of a cell. Note that in the evaluation of the jump, we have made use of
the fact that $x-u_h^3$ is a continuous quantity, and can therefore be taken
out of the jump operator.

All these details actually matter -- while writing the program I have played
around with many settings and different versions of the code, and the result
is that if you don't have a good line search, good stopping criteria, the
right metric (scalar product) for the gradient method, good initial values,
and a good refinement criterion, then the nonlinear solver gets stuck quite
readily for this highly nonlinear problem. Initially, I was hardly able to
find solutions for which the energy dropped below 0.005, while with the
program as it is now, the energy after the final iteration is usually around
0.0003, and occasionally down to less than 3e-5.

However, this is not enough. In the program, we start the solver on the coarse
mesh many times, with randomly perturbed starting values, and while it
converges every time, it yields a different solution, with a different energy,
each time. One can therefore not say that the solver converges to a certain
energy, and we can't answer the question of what the smallest value of $I(u)$
might be in $W^{1,\infty}$. This is unsatisfactory, but maybe to be expected
for such a contrived and pathological problem. Consider it an example in
programming with deal.II then, and not an example in solving this particular
problem.
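To connect the mesh-dependent scalar product above with actual code, here is a
sketch of how its matrix might be assembled; the names are hypothetical, the
deal.II spellings are the current ones (the version this text was written for
named some of them slightly differently), boundary constraints and the right
hand side $-I'(u_k,\varphi)$ are left out, and the cell diameter serves as the
local mesh width $h(x)$:
@code
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/sparse_matrix.h>

using namespace dealii;

// Assemble the matrix of <u,v> = (u,v) + (h^2 grad u, grad v),
// i.e. mass matrix plus h^2 times Laplace matrix, cell by cell.
template <int dim>
void assemble_step_matrix (const DoFHandler<dim> &dof_handler,
                           SparseMatrix<double>  &matrix)
{
  const QGauss<dim> quadrature (3);
  FEValues<dim> fe_values (dof_handler.get_fe(), quadrature,
                           update_values | update_gradients |
                           update_JxW_values);
  const unsigned int dofs_per_cell = dof_handler.get_fe().dofs_per_cell;
  FullMatrix<double> cell_matrix (dofs_per_cell, dofs_per_cell);
  std::vector<types::global_dof_index> dof_indices (dofs_per_cell);

  for (const auto &cell : dof_handler.active_cell_iterators())
    {
      fe_values.reinit (cell);
      cell_matrix = 0;
      const double h = cell->diameter();
      for (unsigned int q=0; q<quadrature.size(); ++q)
        for (unsigned int i=0; i<dofs_per_cell; ++i)
          for (unsigned int j=0; j<dofs_per_cell; ++j)
            cell_matrix(i,j) += (fe_values.shape_value(i,q) *
                                 fe_values.shape_value(j,q)
                                 +
                                 h * h *
                                 fe_values.shape_grad(i,q) *
                                 fe_values.shape_grad(j,q))
                                * fe_values.JxW(q);

      cell->get_dof_indices (dof_indices);
      for (unsigned int i=0; i<dofs_per_cell; ++i)
        for (unsigned int j=0; j<dofs_per_cell; ++j)
          matrix.add (dof_indices[i], dof_indices[j], cell_matrix(i,j));
    }
}
@endcode
The update $\delta u_k$ would then be obtained by solving a linear system with
this matrix, for example with the conjugate gradient method, since the matrix
is symmetric and positive definite.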

Implementation

The program implements all the steps mentioned above, and we will discuss them
in the commented code below. In general, however, note that formulating the
Newton method in function spaces, and only discretizing afterwards, has
consequences: we have to linearize around $u_k$ when we want to compute
$\delta u_k$, and we have to sum up these two functions afterwards. However,
they may be living on different grids, if we have refined the grid before this
step, so we will have to present a way to actually get a function from one
grid to another. The SolutionTransfer class will help us here. On the
other hand, discretizing every nonlinear step separately has the advantage
that we can do the initial steps, when we are still far away from the
solution, on a coarse mesh, and only go on to more expensive computations when
we home in on a solution. We will use a
very simplistic strategy for when we refine the mesh (every fifth nonlinear
step), though. Realistic programs solving nonlinear problems will have to be more
clever in this respect, but it suffices for the purposes of this program.

We will show some of the things that are really simple in 1d (but sometimes
different from what we are used to in 2d or 3d). Apart from this, the program
does not contain much new stuff, but if it explains a few of the techniques
that are available for nonlinear problems and in particular 1d problems, then
this is not so bad, after all.

Note: As shown below, the program starts the nonlinear solver from 10 different
initial values, and outputs the results. This is not actually very many, but we
did so to keep run-time short (around 1:30 minutes on my laptop). If you want to
increase the number of realizations, you may want to switch to optimized mode
(by setting the ``debug-mode'' flag in the Makefile to ``off''), and increase
the number of realizations to a larger value. On the same machine as above, I
can compute 100 realizations in optimized mode in about 2 minutes. For
this particular program, the difference between debug and optimized mode is
thus about a factor of 7-8, which can be explained in part by the fact that we
ask the compiler to optimize the code only in the latter mode, but for the most
part by the fact that in optimized mode all the ``Assert'' checks are
removed that make sure that function arguments are correct, and that check
the internal consistency of the library. The library contains several
thousands of these checks, and they significantly slow down debug
computations, but we feel that the benefit of finding programming errors
earlier, including an indication of where exactly the problem appeared, is of
significantly greater value than faster run-time. After all, all production
runs of programs should be done in optimized mode anyway.

A slowdown of a factor of 7-8 is unusual, however. For 2d and 3d applications,
a typical value is around 4.

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-15.data/results.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-15.data/results.dox deleted file mode 100644 index 48e0e10f1a..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-15.data/results.dox +++ /dev/null @@ -1,119 +0,0 @@ -

Results

If run, the program generates output like this:


@code
Realization 0:
   Energy: 0.00377302
   Energy: 0.00106138
   Energy: 0.000514363
   Energy: 0.000382105
   Energy: 0.000339017
   Energy: 0.000327948
   Energy: 0.000320299
   Energy: 0.000318016
   Energy: 0.000316735
   Energy: 0.000316536
   Energy: 0.000316463
   Energy: 0.000316285
   Energy: 0.000316227
   Energy: 0.000316221
   Energy: 0.00031622

Realization 1:
   Energy: 0.00279316
   Energy: 0.000896516
   Energy: 0.000504609
   Energy: 0.000392703
   Energy: 0.000317725
   Energy: 0.000291881
   Energy: 0.000288243
   Energy: 0.000283541
   Energy: 0.000282406
   Energy: 0.000281842
   Energy: 0.000281752
   Energy: 0.000281743
   Energy: 0.000281743

....

Realization 9:
   Energy: 0.0103729
   Energy: 0.0082121
   Energy: 0.00733742
   Energy: 0.00728154
   Energy: 0.00725198
   Energy: 0.00724302
   Energy: 0.00724019
   Energy: 0.00723837
   Energy: 0.00723783
   Energy: 0.00723772
   Energy: 0.00690564
   Energy: 0.00690562
@endcode


The lowest energy we have seen yet came from this run (you only get it by
increasing the number of runs):

@code
Realization 18:
   Energy: 0.00200645
   Energy: 0.000638519
   Energy: 0.00022749
   Energy: 9.18962e-05
   Energy: 5.42442e-05
   Energy: 3.94415e-05
   Energy: 3.42307e-05
   Energy: 3.30727e-05
   Energy: 3.19998e-05
   Energy: 3.18104e-05
   Energy: 2.97091e-05
   Energy: 3.5011e-05
@endcode


Apparently something went wrong in the last step (the energy increased, which
it shouldn't - but then this is a strongly nonlinear problem), which is also
why the program aborted after this iteration. Apart from this, the iterations
shown above demonstrate that our program indeed is able to reduce the energy
in the solution in each iteration, as it should.



Since the program did not really deliver the goal we had originally intended
for (the computation of the minimal energy of finite element spaces), the
graphical output is also not very exciting. Here are plots of five of the
first 10 solutions (clicking on a picture gives the unscaled version of the
image):


@image html step-15.solution-1.png



And here are the first 100 solutions, where each node in each solution is
represented as a dot. As can be seen, all the solutions cluster somewhat
around the $x^{1/3}$ curve, here shown in turquoise:


@image html step-15.solution-2.png



Note that this behavior is mostly independent of the choice of starting data
(which we have chosen to be close to this curve), which a posteriori justifies
our choice. Some of the curves actually show a linear behavior of the solution
close to the origin; this is particularly obvious when the curves are viewed
in a log-log plot (not shown here, but rather left as an exercise to the
reader).



Given the almost complete absence of interesting results of this program, we
hope that at least its source code provided some information with respect to
programming with deal.II.

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-16.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-16.data/intro.dox deleted file mode 100644 index 750048aaef..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-16.data/intro.dox +++ /dev/null @@ -1,35 +0,0 @@ -

Introduction

This example shows the basic usage of the multilevel functions in
deal.II. It solves the Helmholtz equation with Neumann boundary conditions,
so as to avoid additional complications due to Dirichlet boundary conditions
(for those, some library functions are still missing). Therefore, the solution
is the constant function with value unity. In all other respects, it is
similar to step-5.

In order to allow sufficient flexibility in conjunction with systems of
differential equations and block preconditioners, quite a few different objects
have to be created before starting the multilevel method (a sketch of how they
are created and combined follows the list below). These are
    -
  • MGTransfer, the object handling transfer between grids -
  • MGCoarse, the solver on the coarsest level -
  • MGSmoother, the smoother on all other levels -
  • MGMatrix, the matrix object having a special level multiplication, i.e. we -basically store one matrix per grid level and allow multiplication with it. -
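Schematically, and using present-day deal.II class names (which differ in
detail from the library version this text describes, so treat this as the
shape of the setup rather than copy-and-paste code), creating and combining
these objects looks roughly as follows; the level matrices and the coarse
matrix are assumed to have been assembled already:
@code
#include <deal.II/base/mg_level_object.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/vector.h>
#include <deal.II/multigrid/mg_coarse.h>
#include <deal.II/multigrid/mg_matrix.h>
#include <deal.II/multigrid/mg_smoother.h>
#include <deal.II/multigrid/mg_transfer.h>
#include <deal.II/multigrid/multigrid.h>

using namespace dealii;

template <int dim>
void setup_multigrid (const DoFHandler<dim> &mg_dof_handler,
                      const MGLevelObject<SparseMatrix<double> > &mg_matrices,
                      const FullMatrix<double> &coarse_matrix)
{
  // MGTransfer: the object handling transfer between grids.
  MGTransferPrebuilt<Vector<double> > mg_transfer;
  mg_transfer.build (mg_dof_handler);

  // MGCoarse: a direct solver on the coarsest level.
  MGCoarseGridHouseholder<double, Vector<double> > mg_coarse;
  mg_coarse.initialize (coarse_matrix);

  // MGSmoother: a few relaxation sweeps on all other levels.
  mg::SmootherRelaxation<PreconditionSOR<SparseMatrix<double> >,
                         Vector<double> > mg_smoother;
  mg_smoother.initialize (mg_matrices);
  mg_smoother.set_steps (2);

  // MGMatrix: level multiplication with one matrix per grid level.
  mg::Matrix<Vector<double> > mg_matrix (mg_matrices);

  // Combine everything into the V-cycle, wrapped as a preconditioner
  // that can be handed to a linear solver such as SolverCG.
  Multigrid<Vector<double> > mg (mg_matrix, mg_coarse, mg_transfer,
                                 mg_smoother, mg_smoother);
  PreconditionMG<dim, Vector<double>, MGTransferPrebuilt<Vector<double> > >
    preconditioner (mg_dof_handler, mg, mg_transfer);
  (void)preconditioner;
}
@endcode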
These objects are combined in an object of type Multigrid, containing the
implementation of the V-cycle, which is in turn used by the preconditioner
PreconditionMG, ready to be plugged into a linear solver of the LAC library.

The multilevel method in deal.II follows in many respects the outlines
of the various publications by James Bramble, Joseph Pasciak and Jinchao Xu. In
order to understand many of the options, a rough familiarity with their work is
quite helpful.

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-16.data/results.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-16.data/results.dox deleted file mode 100644 index e67fccc1c1..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-16.data/results.dox +++ /dev/null @@ -1,2 +0,0 @@ -

Results

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-17.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-17.data/intro.dox deleted file mode 100644 index db168de9c3..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-17.data/intro.dox +++ /dev/null @@ -1,73 +0,0 @@ - -

Introduction

This program does not introduce any new mathematical ideas; in fact, all it
does is to do the exact same computations that @ref step_8 "step-8"
already does, but it does so in a different manner: instead of using deal.II's
own linear algebra classes, we build everything on top of classes deal.II
provides that wrap around the linear algebra implementation of the PETSc library. And
since PETSc allows matrices and vectors to be distributed across several computers
within an MPI network, the resulting code will even be able to solve the
problem in parallel. If you don't know what PETSc is, then this would be a
good time to take a quick glimpse at their homepage.

As a prerequisite of this program, you need to have PETSc installed, and if
you want to run in parallel on a cluster, you also need METIS to partition meshes. The installation of deal.II
together with these two additional libraries is described in the README file.

Now, for the details: as mentioned, the program does not compute anything new,
so the use of finite element classes etc. is exactly the same as before. The
difference to previous programs is that we have replaced almost all uses of
classes Vector and SparseMatrix by their
near-equivalents PETScWrappers::Vector and
PETScWrappers::SparseMatrix (for sequential vectors and matrices,
i.e. objects for which all elements are stored locally on one machine), and
PETScWrappers::MPI::Vector and
PETScWrappers::MPI::SparseMatrix for versions of these classes
where only a part of the matrix or vector is stored on each machine within an
MPI network. These classes provide an interface that is very similar to that
of the deal.II linear algebra classes, but instead of implementing this
functionality themselves, they simply pass on to their corresponding PETSc
functions. The wrappers are therefore only used to give PETSc a more modern,
object oriented interface, and to make the use of PETSc and deal.II objects as
interchangeable as possible.

While the sequential PETSc wrapper classes do not have any advantage over
their deal.II counterparts, the main point of using PETSc is that it can run
in parallel. We will make use of this by partitioning the domain into as many
blocks (``subdomains'') as there are processes in the MPI network. At the same
time, PETSc provides dummy MPI stubs that allow the same program to run on a
single machine if so desired, without any changes.

Note, however, that the only data structures we parallelize are matrices and
vectors. We do, in particular, not split up the Triangulation and
DoFHandler classes: each process still has a complete copy of
these objects, and all processes have exact copies of what the other processes
have. Parallelizing the data structures used in hierarchic and unstructured
triangulations is a very hard problem, and we do not attempt to do so at
present. It would also require changing many more aspects of the application
program, since for example loops over all cells could then only include
locally available cells. We thus went for the path of least resistance and
only parallelized the linear algebra part.

The techniques this program demonstrates are: how to use the PETSc wrapper
classes; how to parallelize operations for jobs running on an MPI network; and
how to partition the domain into subdomains to parallelize the work. Since
all this can only be demonstrated using actual code, let us go straight to the
code without much further ado.
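As a small taste of what that replacement looks like, here is a sketch with
invented sizes and variable names. The include paths and the size-based
constructor and reinit signatures shown are the ones from roughly the era of
this program; newer library versions express the locally owned range through
IndexSet arguments instead, so check the documentation of your version:
@code
#include <lac/petsc_parallel_vector.h>
#include <lac/petsc_parallel_sparse_matrix.h>

void linear_algebra_setup (MPI_Comm           mpi_communicator,
                           const unsigned int n_dofs,
                           const unsigned int n_local_dofs,
                           const unsigned int max_couplings)
{
  using namespace dealii;

  // Instead of Vector<double> solution(n_dofs): a vector of global
  // size n_dofs of which this process stores n_local_dofs elements.
  PETScWrappers::MPI::Vector solution (mpi_communicator,
                                       n_dofs, n_local_dofs);

  // Instead of a SparseMatrix<double> built on a SparsityPattern: the
  // parallel matrix is told its global and local sizes and how many
  // nonzero entries per row to expect.
  PETScWrappers::MPI::SparseMatrix system_matrix;
  system_matrix.reinit (mpi_communicator,
                        n_dofs, n_dofs,
                        n_local_dofs, n_local_dofs,
                        max_couplings);
}
@endcode
Assembly and solver calls then look almost exactly as with the deal.II
classes, which is the whole point of the wrappers.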
- diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-17.data/results.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-17.data/results.dox deleted file mode 100644 index fbcf5be57c..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-17.data/results.dox +++ /dev/null @@ -1,208 +0,0 @@ - -

Results

- - -If the program above is compiled and run on a single processor machine, it -should generate results that are very similar to those that we already got -with step-8. However, it becomes more interesting if we run it on a cluster of -computers. Most clusters have some kind of scheduling system, all of which -have different calling syntaxes - on my system, I have to use the command -bsub with a whole host of options to run a job in parallel - so -that the exact command line syntax varies. If you have found out how to run a -job on your system, you should get output like this for a job on 8 processors, -and with a few more refinement cycles than in the code above: -@code -Cycle 0: - Number of active cells: 64 - Number of degrees of freedom: 162 (by partition: 22+22+20+20+18+16+20+24) - Solver converged in 23 iterations. -Cycle 1: - Number of active cells: 124 - Number of degrees of freedom: 302 (by partition: 38+42+36+34+44+44+36+28) - Solver converged in 35 iterations. -Cycle 2: - Number of active cells: 238 - Number of degrees of freedom: 570 (by partition: 68+80+66+74+58+68+78+78) - Solver converged in 46 iterations. -Cycle 3: - Number of active cells: 454 - Number of degrees of freedom: 1046 (by partition: 120+134+124+130+154+138+122+124) - Solver converged in 55 iterations. -Cycle 4: - Number of active cells: 868 - Number of degrees of freedom: 1926 (by partition: 232+276+214+248+230+224+234+268) - Solver converged in 77 iterations. -Cycle 5: - Number of active cells: 1654 - Number of degrees of freedom: 3550 (by partition: 418+466+432+470+442+474+424+424) - Solver converged in 93 iterations. -Cycle 6: - Number of active cells: 3136 - Number of degrees of freedom: 6702 (by partition: 838+796+828+892+866+798+878+806) - Solver converged in 127 iterations. -Cycle 7: - Number of active cells: 5962 - Number of degrees of freedom: 12446 (by partition: 1586+1484+1652+1552+1556+1576+1560+1480) - Solver converged in 158 iterations. -Cycle 8: - Number of active cells: 11320 - Number of degrees of freedom: 23586 (by partition: 2988+2924+2890+2868+2864+3042+2932+3078) - Solver converged in 225 iterations. -Cycle 9: - Number of active cells: 21424 - Number of degrees of freedom: 43986 (by partition: 5470+5376+5642+5450+5630+5470+5416+5532) - Solver converged in 282 iterations. -Cycle 10: - Number of active cells: 40696 - Number of degrees of freedom: 83754 (by partition: 10660+10606+10364+10258+10354+10322+10586+10604) - Solver converged in 392 iterations. -Cycle 11: - Number of active cells: 76978 - Number of degrees of freedom: 156490 (by partition: 19516+20148+19390+19390+19336+19450+19730+19530) - Solver converged in 509 iterations. -Cycle 12: - Number of active cells: 146206 - Number of degrees of freedom: 297994 (by partition: 37462+37780+37000+37060+37232+37328+36860+37272) - Solver converged in 705 iterations. -Cycle 13: - Number of active cells: 276184 - Number of degrees of freedom: 558766 (by partition: 69206+69404+69882+71266+70348+69616+69796+69248) - Solver converged in 945 iterations. -Cycle 14: - Number of active cells: 523000 - Number of degrees of freedom: 1060258 (by partition: 132928+132296+131626+132172+132170+133588+132252+133226) - Solver converged in 1282 iterations. -Cycle 15: - Number of active cells: 987394 - Number of degrees of freedom: 1994226 (by partition: 253276+249068+247430+248402+248496+251380+248272+247902) - Solver converged in 1760 iterations. 
Cycle 16:
   Number of active cells: 1867477
   Number of degrees of freedom: 3771884 (by partition: 468452+474204+470818+470884+469960+471186+470686+475694)
   Solver converged in 2251 iterations.
@endcode



As can be seen, we can easily get to almost four million unknowns. In fact, the
code's runtime with 8 processes was less than 7 minutes up to (and including)
cycle 14, and 14 minutes including the second to last step. I lost the timing
information for the last step, but you get the idea. All this was with the
debug flag in the Makefile changed to "off", i.e. "optimized", and
with the generation of graphical output switched off for the reasons stated in
the program comments above. The biggest 2d computations we did had roughly 7.1
million unknowns, and were done on 32 processes. It took about 40 minutes.
Not surprisingly, the limiting factor for how far one can go is how much memory
one has, since every process has to hold the entire mesh and DoFHandler objects,
although matrices and vectors are split up. For the 7.1M computation, the memory
consumption was about 600 bytes per unknown, which is not bad, but one has to
consider that this is for every unknown, whether we store the matrix and vector
entries locally or not.



Here is some output generated in the 12th cycle of the program, i.e. with roughly
300,000 unknowns:


@image html step-17.12-ux.png
@image html step-17.12-uy.png



As one would hope for, the x- (left) and y-displacements (right) shown here
closely match what we already saw in step-8. What may be more interesting,
though, is to look at the mesh and partition at this step (to see the picture
in its original size, simply click on it):


@image html step-17.12-grid.png
@image html step-17.12-partition.png


Again, the mesh (left) shows the same refinement pattern as seen
previously. The right panel shows the partitioning of the domain across the 8
processes, each indicated by a different color. The picture shows that the
subdomains are smaller where mesh cells are small, a fact that needs to be
expected given that the partitioning algorithm tries to equilibrate the number
of cells in each subdomain; this equilibration is also easily identified in
the output shown above, where the number of degrees of freedom per subdomain is
roughly the same.



It is worth noting that if we ran the same program with a different number of
processes, we would likely get slightly different output: a different
mesh, a different number of unknowns and iterations to convergence. The reason
for this is that while the matrix and right hand side are the same independent
of the number of processes used, the preconditioner is not: it performs an
ILU(0) on the chunk of the matrix of each processor separately. Thus,
its effectiveness as a preconditioner diminishes as the number of processes
increases, which makes the number of iterations increase. Since a different
preconditioner leads to slight changes in the computed solution, this will
then lead to slightly different mesh cells tagged for refinement, and larger
differences in subsequent steps. The solution will always look very similar,
though.



Finally, here are some results for a 3d simulation. You can repeat these by
first changing
@code
  ElasticProblem<2> elastic_problem;
@endcode
to
@code
  ElasticProblem<3> elastic_problem;
@endcode
in the main function, and then in the Makefile, change the reference to the 2d
libraries to their 3d counterparts.
If you then run the program in parallel,
you get something similar to this (this is for a job with 16 processes):
@code
Cycle 0:
   Number of active cells: 512
   Number of degrees of freedom: 2187 (by partition: 114+156+150+114+114+210+105+102+120+120+96+123+141+183+156+183)
   Solver converged in 27 iterations.
Cycle 1:
   Number of active cells: 1604
   Number of degrees of freedom: 6549 (by partition: 393+291+342+354+414+417+570+366+444+288+543+525+345+387+489+381)
   Solver converged in 42 iterations.
Cycle 2:
   Number of active cells: 4992
   Number of degrees of freedom: 19167 (by partition: 1428+1266+1095+1005+1455+1257+1410+1041+1320+1380+1080+1050+963+1005+1188+1224)
   Solver converged in 65 iterations.
Cycle 3:
   Number of active cells: 15485
   Number of degrees of freedom: 56760 (by partition: 3099+3714+3384+3147+4332+3858+3615+3117+3027+3888+3942+3276+4149+3519+3030+3663)
   Solver converged in 96 iterations.
Cycle 4:
   Number of active cells: 48014
   Number of degrees of freedom: 168762 (by partition: 11043+10752+9846+10752+9918+10584+10545+11433+12393+11289+10488+9885+10056+9771+11031+8976)
   Solver converged in 132 iterations.
Cycle 5:
   Number of active cells: 148828
   Number of degrees of freedom: 492303 (by partition: 31359+30588+34638+32244+30984+28902+33297+31569+29778+29694+28482+28032+32283+30702+31491+28260)
   Solver converged in 179 iterations.
Cycle 6:
   Number of active cells: 461392
   Number of degrees of freedom: 1497951 (by partition: 103587+100827+97611+93726+93429+88074+95892+88296+96882+93000+87864+90915+92232+86931+98091+90594)
   Solver converged in 261 iterations.
@endcode



The last step, going up to 1.5 million unknowns, takes about 55 minutes with
16 processes on 8 dual-processor machines. The graphical output generated by
this job is rather large (cycle 5 already prints around 82 MB of GMV data), so
we content ourselves with showing output from cycle 4 (again, clicking on the
picture gives a version in original size):


@image html step-17.4-3d-partition.png
@image html step-17.4-3d-ux.png


The left picture shows the partitioning of the cube into 16 processes, whereas
the right one shows the x-displacement along two cutplanes through the cube.

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-19.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-19.data/intro.dox deleted file mode 100644 index 9770ba1df4..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-19.data/intro.dox +++ /dev/null @@ -1,121 +0,0 @@ -

Introduction

In @ref step_18 "step-18", we saw a need to write
output files in an intermediate format: in a parallel program, it doesn't scale
well if all processors participate in computing a result, and then only a
single processor generates the graphical output. Rather, each of them should
generate output for its share of the domain, and later on merge all these
output files into a single one.

Thus was the beginning of step-19: it is the program that reads a number of
files written in intermediate format, and merges and converts them into the
final format that one would like to use for visualization. It can also be used
for the following purpose: if you are unsure at the time of a computation what
graphics program you would like to use, write your results in intermediate
format; it can later be converted, using the present program, to any other
format you may want.

While this in itself was not interesting enough to make a tutorial program, we
have used the opportunity to introduce one class that has proven to be
extremely helpful and useful in real application programs, but had not been
covered by any of the previous tutorial programs: the
ParameterHandler class. This class is used in applications that
want to have some of their behavior determined at run time, using input
files. For example, one may want to specify the geometry, or specifics of the
equation to be solved, at run time. Other typical parameters are the number of
nonlinear iterations, the name of output files, or the names of input files
specifying material properties or boundary conditions.

Working with such parameter files is not rocket science. However, it is rather
tedious to write the parsers for such files, in particular if they should be
extensible, be able to group parameters into subsections, perform some error
checks such as that parameters can have only certain kinds of values (for
example, it should only be allowed to have integer values in an input file for
parameters that denote a number of iterations), and similar requirements. The
ParameterHandler class allows for all this: an application program
will declare the parameters it expects (or call a function in the library that
declares a number of parameters for you), the ParameterHandler
class then reads an input file with all these parameters, and the application
program can then get their values back from this class.

In order to perform these three steps, the ParameterHandler offers
three sets of functions: first, the
ParameterHandler::declare_entry function is used to declare the
existence of a named parameter in the present section of the input file (one
can enter and leave subsections in a parameter file just like you would
navigate through a directory tree in a file system, with the functions
ParameterHandler::enter_subsection and
ParameterHandler::leave_subsection taking on the roles of the
commands cd dir and cd ..; the only difference being
that if you enter a subsection that has never been visited before, it is
created: it isn't necessary to "create" subsections explicitly). When declaring
a parameter, one has to specify its name and default value, in case the
parameter isn't later listed explicitly in the parameter file. In addition to
that, there are optional arguments indicating a pattern that a parameter has to
satisfy, such as being an integer (see the discussion above), and a help text
that might later give an explanation of what the parameter stands for.
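In compact form, the declare/read/get cycle looks roughly like the following
sketch. The parameter names here are invented for illustration; the calls are
the ones the text refers to, with read_input being the name used at the time
of this writing (newer library versions call the same functionality
parse_input):
@code
#include <deal.II/base/parameter_handler.h>
#include <fstream>
#include <iostream>

int main ()
{
  using namespace dealii;
  ParameterHandler prm;

  // Step 1: declare the parameters we expect, with default values,
  // patterns, and help texts.
  prm.declare_entry ("Output format", "gnuplot",
                     Patterns::Anything(),
                     "A name for the output format to be used");
  prm.enter_subsection ("Dummy subsection");
  prm.declare_entry ("Dummy iterations", "42",
                     Patterns::Integer (1, 1000),
                     "A dummy parameter asking for an integer");
  prm.leave_subsection ();

  // Step 2: read an input file. Undeclared parameters, or values that
  // do not match the declared pattern, are rejected here.
  std::ifstream input ("parameters.prm");
  prm.read_input (input);

  // Step 3: get the values back, converted to the right types.
  std::cout << prm.get ("Output format") << std::endl;
  prm.enter_subsection ("Dummy subsection");
  const long n_iterations = prm.get_integer ("Dummy iterations");
  prm.leave_subsection ();
  std::cout << n_iterations << std::endl;
}
@endcode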
Once all parameters have been declared, parameters can be read using the
ParameterHandler::read_input family of functions. There are
versions of this function that can read from a file stream, that take a file
name, or that simply take a string and parse it. When reading parameters, the
class makes sure that only parameters are listed in the input that have been
declared before, and that the values of parameters satisfy the pattern that has
been given to describe the kind of values a parameter can have. Input that uses
undeclared parameters, or values for parameters that do not conform to the
pattern, is rejected by raising an exception.

A typical input file will look like this:
@code
set Output format = dx
set Output file = my_output_file.dx

set Maximal number of iterations = 13

subsection Application
  set Color of output = blue
  set Generate output = false
end
@endcode
Note that subsections can be nested.

Finally, the application program can get the values of declared parameters back
by traversing the subsections of the parameter tree and using the
ParameterHandler::get and related functions.
ParameterHandler::get simply returns the value of a parameter as a
string, whereas ParameterHandler::get_integer,
ParameterHandler::get_double, and
ParameterHandler::get_bool already convert them to the indicated
type.

Using the ParameterHandler class therefore provides for a pretty
flexible mechanism to handle all sorts of moderately complex input files without
much effort on the side of the application programmer. We will use this to
provide all sorts of options to the step-19 program in order to convert from
intermediate file format to any other graphical file format.

The rest of the story is probably best told by looking at the source of step-19
itself. Let us, however, end this introduction by pointing the reader at the
extensive class documentation of the ParameterHandler class for
more information on specific details of that class.

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-19.data/results.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-19.data/results.dox deleted file mode 100644 index b13a41c982..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-19.data/results.dox +++ /dev/null @@ -1,260 +0,0 @@ -

Results

With all of the above said, here is first what we get if we just run the program
without any parameters at all:
@code
examples/step-19> ./step-19

Converter from deal.II intermediate format to other graphics formats.

Usage:
    ./step-19 [-p parameter_file] list_of_input_files
              [-x output_format] [-o output_file]

Parameter sequences in brackets can be omitted if a parameter file is
specified on the command line and if it provides values for these
missing parameters.

The parameter file has the following format and allows the following
values (you can cut and paste this and use it for your own parameter
file):

# Listing of Parameters
# ---------------------
# A dummy parameter asking for an integer
set Dummy iterations = 42

# The name of the output file to be generated
set Output file =

# A name for the output format to be used
set Output format = gnuplot


subsection DX output parameters
  # A boolean field indicating whether neighborship information between cells
  # is to be written to the OpenDX output file
  set Write neighbors = true
end


subsection Dummy subsection
  # A dummy parameter that shows how one can define a parameter that can be
  # assigned values from a finite set of values
  set Dummy color of output = red

  # A dummy parameter that can be fed with either 'true' or 'false'
  set Dummy generate output = true
end


subsection Eps output parameters
  # Angle of the viewing position against the vertical axis
  set Azimut angle = 60

  # Name of a color function used to colorize mesh lines and/or cell
  # interiors
  set Color function = default

  # Whether the interior of cells shall be shaded
  set Color shading of interior of cells = true

  # Whether the mesh lines, or only the surface should be drawn
  set Draw mesh lines = true

  # Whether only the mesh lines, or also the interior of cells should be
  # plotted. If this flag is false, then one can see through the mesh
  set Fill interior of cells = true

  # Number of the input vector that is to be used to generate color
  # information
  set Index of vector for color = 0

  # Number of the input vector that is to be used to generate height
  # information
  set Index of vector for height = 0

  # The width in which the postscript renderer is to plot lines
  set Line widths in eps units = 0.5

  # Whether width or height should be scaled to match the given size
  set Scale to width or height = width

  # Scaling for the z-direction relative to the scaling used in x- and
  # y-directions
  set Scaling for z-axis = 1

  # The size (width or height) to which the eps output file is to be scaled
  set Size (width or height) in eps units = 300

  # Angle of the viewing direction against the y-axis
  set Turn angle = 30
end

subsection Povray output parameters
  # Whether camera and lighting information should be put into an external
  # file "data.inc" or into the POVRAY input file
  set Include external file = true

  # Whether POVRAY should use bicubic patches
  set Use bicubic patches = false

  # A flag indicating whether POVRAY should use smoothed triangles instead of
  # the usual ones
  set Use smooth triangles = false
end


subsection UCD output parameters
  # A flag indicating whether a comment should be written to the beginning of
  # the output file indicating date and time of creation as well as the
  # creating program
  set Write preamble = true
end
@endcode

That's a lot of output for such a little program, but then that's also a lot of
output formats that deal.II supports. You will realize that the output consists,
first, of the entries in the top-level section (sorted alphabetically), followed
by a sorted list of subsections. Most of the parameters have been declared by the
DataOutBase class, but there are also the dummy entries and
sections we have added in the declare_parameters() function, along
with their default values and documentation.



Let us try to run this program on a set of input files generated by a modified
@ref step_18 "step-18" run on 32 nodes of a
cluster. The computation was rather big, with more
than 350,000 cells and some 1.2M unknowns. That makes for 32 rather big
intermediate files that we will try to merge using the present program.
Here is -the list of files, totaling some 245MB of data: -@code -examples/step-19> ls -l *d2 --rw-r--r-- 1 bangerth wheeler 7982085 Aug 12 10:11 solution-0005.0000-000.d2 --rw-r--r-- 1 bangerth wheeler 7888316 Aug 12 10:13 solution-0005.0000-001.d2 --rw-r--r-- 1 bangerth wheeler 7715984 Aug 12 10:09 solution-0005.0000-002.d2 --rw-r--r-- 1 bangerth wheeler 7887648 Aug 12 10:06 solution-0005.0000-003.d2 --rw-r--r-- 1 bangerth wheeler 7833291 Aug 12 10:09 solution-0005.0000-004.d2 --rw-r--r-- 1 bangerth wheeler 7536394 Aug 12 10:07 solution-0005.0000-005.d2 --rw-r--r-- 1 bangerth wheeler 7817551 Aug 12 10:06 solution-0005.0000-006.d2 --rw-r--r-- 1 bangerth wheeler 7996660 Aug 12 10:07 solution-0005.0000-007.d2 --rw-r--r-- 1 bangerth wheeler 7761545 Aug 12 10:06 solution-0005.0000-008.d2 --rw-r--r-- 1 bangerth wheeler 7754027 Aug 12 10:07 solution-0005.0000-009.d2 --rw-r--r-- 1 bangerth wheeler 7607545 Aug 12 10:11 solution-0005.0000-010.d2 --rw-r--r-- 1 bangerth wheeler 7728039 Aug 12 10:07 solution-0005.0000-011.d2 --rw-r--r-- 1 bangerth wheeler 7577293 Aug 12 10:14 solution-0005.0000-012.d2 --rw-r--r-- 1 bangerth wheeler 7735626 Aug 12 10:10 solution-0005.0000-013.d2 --rw-r--r-- 1 bangerth wheeler 7629075 Aug 12 10:10 solution-0005.0000-014.d2 --rw-r--r-- 1 bangerth wheeler 7314459 Aug 12 10:09 solution-0005.0000-015.d2 --rw-r--r-- 1 bangerth wheeler 7414738 Aug 12 10:10 solution-0005.0000-016.d2 --rw-r--r-- 1 bangerth wheeler 7330518 Aug 12 10:05 solution-0005.0000-017.d2 --rw-r--r-- 1 bangerth wheeler 7418213 Aug 12 10:11 solution-0005.0000-018.d2 --rw-r--r-- 1 bangerth wheeler 7508715 Aug 12 10:08 solution-0005.0000-019.d2 --rw-r--r-- 1 bangerth wheeler 7747143 Aug 12 10:06 solution-0005.0000-020.d2 --rw-r--r-- 1 bangerth wheeler 7563548 Aug 12 10:05 solution-0005.0000-021.d2 --rw-r--r-- 1 bangerth wheeler 7846767 Aug 12 10:12 solution-0005.0000-022.d2 --rw-r--r-- 1 bangerth wheeler 7479576 Aug 12 10:12 solution-0005.0000-023.d2 --rw-r--r-- 1 bangerth wheeler 7925060 Aug 12 10:12 solution-0005.0000-024.d2 --rw-r--r-- 1 bangerth wheeler 7842034 Aug 12 10:13 solution-0005.0000-025.d2 --rw-r--r-- 1 bangerth wheeler 7585448 Aug 12 10:13 solution-0005.0000-026.d2 --rw-r--r-- 1 bangerth wheeler 7609698 Aug 12 10:10 solution-0005.0000-027.d2 --rw-r--r-- 1 bangerth wheeler 7576053 Aug 12 10:08 solution-0005.0000-028.d2 --rw-r--r-- 1 bangerth wheeler 7682418 Aug 12 10:08 solution-0005.0000-029.d2 --rw-r--r-- 1 bangerth wheeler 7544141 Aug 12 10:05 solution-0005.0000-030.d2 --rw-r--r-- 1 bangerth wheeler 7348899 Aug 12 10:04 solution-0005.0000-031.d2 -@endcode - -So let's see what happens if we attempt to merge all these files into a single -one: -@code -examples/step-19> time ./step-19 solution-0005.0000-*.d2 -x gmv -o solution-0005.gmv -real 2m08.35s -user 1m26.61s -system 0m05.74s - -examples/step-19> ls -l solution-0005.gmv --rw-r--r-- 1 bangerth wheeler 240680494 Sep 9 11:53 solution-0005.gmv -@endcode -So in roughly two minutes we have merged 240MB of data. Counting reading and -writing, that averages a throughput of 3.8MB per second, not so bad. - - - -If visualized, the output looks very much like that shown for -@ref step_18 "step-18". But that's not quite as -important for the moment, rather we are interested in showing how to use the -parameter file. To this end, remember that if no parameter file is given, or if -it is empty, all the default values listed above are used. 
However, whatever we specify in the parameter file is used, unless overridden
again by parameters found later on the command line.



For example, let us use a simple parameter file named
solution-0005.prm that contains only one line:
@code
set Output format = gnuplot
@endcode
If we run step-19 with it again, we obtain this (for simplicity, and because we
don't want to visualize 240MB of data anyway, we only convert a single
intermediate file, the twelfth, to gnuplot format):
@code
examples/step-19> ./step-19 solution-0005.0000-012.d2 -p solution-0005.prm -o solution-0005.gnuplot

examples/step-19> ls -l solution-0005.gnuplot
-rw-r--r-- 1 bangerth wheeler 20281669 Sep 9 12:15 solution-0005.gnuplot
@endcode

We can then visualize this one file with gnuplot, obtaining something like
this:
@image html step-19.solution-0005.png

That's not particularly exciting, but the file we're looking at contains only
one 32nd of the entire domain anyway, so we can't expect much.

In more complicated situations, we would use parameter files that set more of
the values to non-default values. A file for which this is the case could look
like this, generating output for the OpenDX visualization program:
@code
set Output format = dx
set Output file   = my_output_file.dx

set Dummy iterations = -13

subsection Dummy subsection
  set Dummy color of output = blue
  set Dummy generate output = false
end
@endcode
If one wanted to, one could write comments into the file using the
same format as used above in the help text, i.e. everything on a line
following a hash mark (#) is considered a comment.



If one runs step-19 with this input file, this is what is going to happen:
@code
examples/step-19> ./step-19 solution-0005.0000-012.d2 -p solution-0005.prm
Line 4:
    The entry value
        -13
    for the entry named
        Dummy iterations
    does not match the given pattern
        [Integer range 1...1000 (inclusive)]
@endcode
Ah, right: valid values for the iteration parameter needed to be within the
range [1...1000]. We would fix that, then go back to run the program with
correct parameters.



This program should have given some insight into the input parameter file
handling that deal.II provides. The ParameterHandler class has a
few more goodies beyond what has been shown in this program; for those who want
to use this class, it would be useful to read the documentation of that class
to get the full picture.

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-2.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-2.data/intro.dox
deleted file mode 100644
index b9dd70b6e0..0000000000
--- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-2.data/intro.dox
+++ /dev/null
@@ -1,17 +0,0 @@

Introduction

- -After we have created a grid in the previous example, we now show how -to define degrees of freedom on this mesh. For this example, we -will use the lowest order (Q1) finite elements, for which the degrees -of freedom are associated with the vertices of the mesh. Later -examples will demonstrate higher order elements where degrees of freedom are -not necessarily associated with vertices any more, but can be associated -with edges, faces, or cells. - -Defining degrees of freedom ("DoF"s in short) on a mesh is a rather -simple task, since the library does all the work for you. However, for -some algorithms, especially for some linear solvers, it is -advantageous to have the degrees of freedom numbered in a certain -order, and we will use the algorithm of Cuthill and McKee to do -so. The results are written to a file and visualized using GNUPLOT. diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-2.data/results.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-2.data/results.dox deleted file mode 100644 index 300f50bb33..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-2.data/results.dox +++ /dev/null @@ -1,58 +0,0 @@ - -

Results

After it has been run, the program has produced two sparsity
patterns. We can visualize them using GNUPLOT:
@code
examples/step-2> gnuplot

        G N U P L O T
        Version 3.7 patchlevel 3
        last modified Thu Dec 12 13:00:00 GMT 2002
        System: Linux 2.6.11.4-21.10-default

        Copyright(C) 1986 - 1993, 1998 - 2002
        Thomas Williams, Colin Kelley and many others

        Type `help` to access the on-line reference manual
        The gnuplot FAQ is available from
        http://www.gnuplot.info/gnuplot-faq.html

        Send comments and requests for help to
        Send bugs, suggestions and mods to


Terminal type set to 'x11'
gnuplot> set data style points
gnuplot> plot "sparsity_pattern.1"
@endcode

The results then look like this (every cross denotes an entry which
might be nonzero; whether the entry actually is zero or not of course
depends on the equation under consideration, but the
indicated positions in the matrix tell us which shape functions can
and which can't couple, if the equation is a local, i.e. differential,
one):
<table align="center">
  <tr>
    <td>@image html step-2.sparsity-1.png</td>
    <td>@image html step-2.sparsity-2.png</td>
  </tr>
</table>
- -The different regions in the left picture represent the degrees of -freedom on the different refinement levels of the triangulation. As -can be seen in the right picture, the sparsity pattern is much better -clustered around the main diagonal of the matrix after -renumbering. Although this might not be apparent, the number of -nonzero entries is the same in both pictures, of course. - -A common observation is that the more refined the grid is, the better -the clustering around the diagonal will get. - - diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-20.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-20.data/intro.dox deleted file mode 100644 index aa7b812e97..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-20.data/intro.dox +++ /dev/null @@ -1,715 +0,0 @@ - -

Introduction

- -This program is devoted to two aspects: the use of mixed finite elements -- in -particular Raviart-Thomas elements -- and using block matrices to define -solvers, preconditioners, and nested versions of those that use the -substructure of the system matrix. The equation we are going to solve is again -the Laplace equation, though with a matrix-valued coefficient: -@f{eqnarray*} - -\nabla \cdot K({\mathbf x}) \nabla p &=& f \qquad {\textrm{in}\ } \Omega, \\ - p &=& g \qquad {\textrm{on}\ }\partial\Omega. -@f} -$K({\mathbf x})$ is assumed to be uniformly positive definite, i.e. there is -$\alpha>0$ such that the eigenvalues $\lambda_i({\mathbf x})$ of $K(x)$ satisfy -$\lambda_i({\mathbf x})\ge \alpha$. The use of the symbol $p$ instead of the usual -$u$ for the solution variable will become clear in the next section. - -After discussing the equation and the formulation we are going to use to solve -it, this introduction will cover the use of block matrices and vectors, the -definition of solvers and preconditioners, and finally the actual test case we -are going to solve. - -

Formulation, weak form, and discrete problem

In the form above, the Laplace equation is considered a good model equation
for fluid flow in porous media. In particular, if flow is so slow that all
dynamic effects such as the acceleration terms in the Navier-Stokes equation
become irrelevant, and if the flow pattern is stationary, then the Laplace
equation models the pressure that drives the flow reasonably well. Because the
solution variable is a pressure, we here use the name $p$ instead of the
name $u$ more commonly used for the solution of partial differential equations.

Typical applications of this view of the Laplace equation are then modeling
groundwater flow, or the flow of hydrocarbons in oil reservoirs. In these
applications, $K$ is then the permeability tensor, i.e. a measure for how much
resistance the soil or rock matrix exerts on the fluid flow. In the
applications just named, a desirable feature is that the numerical scheme is
locally conservative, i.e. that whatever flows into a cell also flows out of
it (or the difference is equal to the integral over the source terms over each
cell, if the sources are nonzero). However, as it turns out, the usual
discretizations of the Laplace equation do not satisfy this property. On the
other hand, one can achieve this by choosing a different formulation.

To this end, one first introduces a second variable, called the flux,
${\mathbf u}=-K\nabla p$. By its definition, the flux is a vector in the
direction of the negative pressure gradient, multiplied by the permeability
tensor. If the permeability tensor is proportional to the unit matrix, this
equation is easy to understand and intuitive: the higher the permeability, the
higher the flux; and the flux is proportional to the gradient of the pressure,
going from areas of high pressure to areas of low pressure.

With this second variable, one then finds an alternative version of the
Laplace equation, called the mixed formulation:
@f{eqnarray*}
  K^{-1} {\mathbf u} + \nabla p &=& 0 \qquad {\textrm{in}\ } \Omega, \\
  -{\textrm{div}}\ {\mathbf u} &=& -f \qquad {\textrm{in}\ }\Omega, \\
  p &=& g \qquad {\textrm{on}\ } \partial\Omega.
@f}

The weak formulation of this problem is found by multiplying the two
equations with test functions and integrating some terms by parts:
@f{eqnarray*}
  A(\{{\mathbf u},p\},\{{\mathbf v},q\}) = F(\{{\mathbf v},q\}),
@f}
where
@f{eqnarray*}
  A(\{{\mathbf u},p\},\{{\mathbf v},q\})
  &=&
  ({\mathbf v}, K^{-1}{\mathbf u})_\Omega - ({\textrm{div}}\ {\mathbf v}, p)_\Omega
  - (q,{\textrm{div}}\ {\mathbf u})_\Omega
  \\
  F(\{{\mathbf v},q\}) &=& -(g,{\mathbf v}\cdot {\mathbf n})_{\partial\Omega} - (f,q)_\Omega.
@f}
Here, ${\mathbf n}$ is the outward normal vector at the boundary. Note how in this
formulation, Dirichlet boundary values of the original problem are
incorporated in the weak form.

To be well-posed, we have to look for solutions and test functions in the
space $H({\textrm{div}})=\{{\mathbf w}\in L^2(\Omega)^d:\ {\textrm{div}}\ {\mathbf w}\in L^2\}$
for ${\mathbf u},{\mathbf v}$, and in $L^2$ for $p,q$. It is a well-known fact, stated in
almost every book on finite element theory, that if one chooses discrete finite
element spaces for the approximation of ${\mathbf u},p$ inappropriately, then the
resulting discrete saddle-point problem is unstable and the discrete solution
will not converge to the exact solution.
- -To overcome this, a number of different finite element pairs for ${\mathbf u},p$ -have been developed that lead to a stable discrete problem. One such pair is -to use the Raviart-Thomas spaces $RT(k)$ for the velocity ${\mathbf u}$ and -discontinuous elements of class $DQ(k)$ for the pressure $p$. For details -about these spaces, we refer in particular to the book on mixed finite element -methods by Brezzi and Fortin, but many other books on the theory of finite -elements, for example the classic book by Brenner and Scott, also state the -relevant results. - - -
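To make this concrete, and as a preview of the code discussed below: in
deal.II, such a stable velocity-pressure pair can be formed by combining the
FE_RaviartThomas and FE_DGQ classes into a single vector-valued element. The
following is only a minimal sketch of this idea; the variable names are made
up for illustration:
@code
  // one Raviart-Thomas element of degree k for the dim velocity
  // components, and one discontinuous element of degree k for the
  // pressure, combined into a single element with dim+1 vector
  // components:
  const unsigned int degree = 0;
  FESystem<dim> fe (FE_RaviartThomas<dim>(degree), 1,
                    FE_DGQ<dim>(degree),           1);
@endcode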

Assembling the linear system

- -The deal.II library (of course) implements Raviart-Thomas elements $RT(k)$ of -arbitrary order $k$, as well as discontinuous elements $DG(k)$. If we forget -about their particular properties for a second, we then have to solve a -discrete problem -@f{eqnarray*} - A(x_h,w_h) = F(w_h), -@f} -with the bilinear form and right hand side as stated above, and $x_h=\{{\mathbf u}_h,p_h\}$, $w_h=\{{\mathbf v}_h,q_h\}$. Both $x_h$ and $w_h$ are from the space -$X_h=RT(k)\times DQ(k)$, where $RT(k)$ is itself a space of $dim$-dimensional -functions to accommodate for the fact that the flow velocity is vector-valued. -The necessary question then is: how do we do this in a program? - -Vector-valued elements have already been discussed in previous tutorial -programs, the first time and in detail in @ref step_8 "step-8". The main difference there -was that the vector-valued space $V_h$ is uniform in all its components: the -$dim$ components of the displacement vector are all equal and from the same -function space. What we could therefore do was to build $V_h$ as the outer -product of the $dim$ times the usual $Q(1)$ finite element space, and by this -make sure that all our shape functions have only a single non-zero vector -component. Instead of dealing with vector-valued shape functions, all we did -in @ref step_8 "step-8" was therefore to look at the (scalar) only non-zero component and -use the fe.system_to_component_index(i).first call to figure out -which component this actually is. - -This doesn't work with Raviart-Thomas elements: following from their -construction to satisfy certain regularity properties of the space -$H({\textrm{div}})$, the shape functions of $RT(k)$ are usually nonzero in all -their vector components at once. For this reason, were -fe.system_to_component_index(i).first applied to determine the only -nonzero component of shape function $i$, an exception would be generated. What -we really need to do is to get at all vector components of a shape -function. In deal.II diction, we call such finite elements -non-primitive, whereas finite elements that are either scalar or for -which every vector-valued shape function is nonzero only in a single vector -component are called primitive. - -So what do we have to do for non-primitive elements? To figure this out, let -us go back in the tutorial programs, almost to the very beginnings. There, we -learned that we use the FEValues class to determine the values and -gradients of shape functions at quadrature points. For example, we would call -fe_values.shape_value(i,q_point) to obtain the value of the -ith shape function on the quadrature point with number -q_point. Later, in @ref step_8 "step-8" and other tutorial programs, we learned -that this function call also works for vector-valued shape functions (of -primitive finite elements), and that it returned the value of the only -non-zero component of shape function i at quadrature point -q_point. - -For non-primitive shape functions, this is clearly not going to work: there is -no single non-zero vector component of shape function i, and the call -to fe_values.shape_value(i,q_point) would consequently not make -much sense. 
However, deal.II offers a second function call,
fe_values.shape_value_component(i,q_point,comp), that returns the
value of the comp-th vector component of shape function i at
quadrature point q_point, where comp is an index between
zero and the number of vector components of the present finite element; for
example, the element we will use to describe velocities and pressures is going
to have $dim+1$ components. It is worth noting that this function call can
also be used for primitive shape functions: it will simply return zero for all
components except one; for non-primitive shape functions, it will in general
return a non-zero value for more than just one component.

We could now attempt to rewrite the bilinear form above in terms of vector
components. For example, in 2d, the first term could be rewritten like this
(note that $u_0=x_0, u_1=x_1, p=x_2$):
@f{eqnarray*}
  ({\mathbf u}_h^i, K^{-1}{\mathbf u}_h^j)
  =
  &\left((x_h^i)_0, K^{-1}_{00} (x_h^j)_0\right) +
  \left((x_h^i)_0, K^{-1}_{01} (x_h^j)_1\right) + \\
  &\left((x_h^i)_1, K^{-1}_{10} (x_h^j)_0\right) +
  \left((x_h^i)_1, K^{-1}_{11} (x_h^j)_1\right).
@f}
If we implemented this, we would get code like this:
@code
  for (unsigned int q=0; q<n_q_points; ++q)
    for (unsigned int i=0; i<dofs_per_cell; ++i)
      for (unsigned int j=0; j<dofs_per_cell; ++j)
        local_matrix(i,j) += (fe_values.shape_value_component(i,q,0) *
                              k_inverse_values[q][0][0] *
                              fe_values.shape_value_component(j,q,0)
                              +
                              fe_values.shape_value_component(i,q,0) *
                              k_inverse_values[q][0][1] *
                              fe_values.shape_value_component(j,q,1)
                              +
                              fe_values.shape_value_component(i,q,1) *
                              k_inverse_values[q][1][0] *
                              fe_values.shape_value_component(j,q,0)
                              +
                              fe_values.shape_value_component(i,q,1) *
                              k_inverse_values[q][1][1] *
                              fe_values.shape_value_component(j,q,1))
                             * fe_values.JxW(q);
@endcode
This is, at best, lengthy and error prone, and in addition specific to two
space dimensions. It is much more convenient to write small helper functions
that extract the velocity and pressure parts of a shape function. Here is the
first of them:
@code
template <int dim>
Tensor<1,dim>
extract_u (const FEValuesBase<dim> &fe_values,
           const unsigned int i,
           const unsigned int q)
{
  Tensor<1,dim> tmp;

  for (unsigned int d=0; d<dim; ++d)
    tmp[d] = fe_values.shape_value_component (i,q,d);

  return tmp;
}
@endcode
This function uses the given fe_values object to extract
the values of the first $dim$ components of shape function i at
quadrature point q, that is the velocity components of that shape
function. Put differently, if we write shape functions $x_h^i$ as the tuple
$\{{\mathbf u}_h^i,p_h^i\}$, then the function returns the velocity part of this
tuple. Note that the velocity is of course a dim-dimensional tensor, and
that the function returns a corresponding object.

Likewise, we have a function that extracts the pressure component of a shape
function:
@code
template <int dim>
double extract_p (const FEValuesBase<dim> &fe_values,
                  const unsigned int i,
                  const unsigned int q)
{
  return fe_values.shape_value_component (i,q,dim);
}
@endcode
Finally, the bilinear form contains terms involving the gradients of the
velocity component of shape functions. To be more precise, we are not really
interested in the full gradient, but only the divergence of the velocity
components, i.e. ${\textrm{div}}\ {\mathbf u}_h^i = \sum_{d=0}^{dim-1}
\frac{\partial}{\partial x_d} ({\mathbf u}_h^i)_d$. Here's a function that returns
this quantity:
@code
template <int dim>
double
extract_div_u (const FEValuesBase<dim> &fe_values,
               const unsigned int i,
               const unsigned int q)
{
  double divergence = 0;
  for (unsigned int d=0; d<dim; ++d)
    divergence += fe_values.shape_grad_component (i,q,d)[d];

  return divergence;
}
@endcode
With these helper functions, the local contributions to matrix and right hand
side can then be computed like this:
@code
  for (unsigned int q=0; q<n_q_points; ++q)
    for (unsigned int i=0; i<dofs_per_cell; ++i)
      {
        const Tensor<1,dim> phi_i_u = extract_u (fe_values, i, q);
        const double div_phi_i_u = extract_div_u (fe_values, i, q);
        const double phi_i_p = extract_p (fe_values, i, q);

        for (unsigned int j=0; j<dofs_per_cell; ++j)
          {
            const Tensor<1,dim> phi_j_u = extract_u (fe_values, j, q);
            const double div_phi_j_u = extract_div_u (fe_values, j, q);
            const double phi_j_p = extract_p (fe_values, j, q);

            local_matrix(i,j) += (phi_i_u * k_inverse_values[q] * phi_j_u
                                  - div_phi_i_u * phi_j_p
                                  - phi_i_p * div_phi_j_u)
                                 * fe_values.JxW(q);
          }

        local_rhs(i) += -(phi_i_p *
                          rhs_values[q] *
                          fe_values.JxW(q));
      }
@endcode
This very closely resembles the form in which we originally wrote down the
bilinear form and right hand side.
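As a side note, one can also ask a finite element directly whether its shape
functions are primitive in the sense described above. A small sketch, assuming
fe is the finite element object of this program (FiniteElement::is_primitive,
get_name and n_nonzero_components are part of the library's finite element
interface):
@code
  // false for an element containing a Raviart-Thomas part:
  if (fe.is_primitive() == false)
    std::cout << "Element " << fe.get_name()
              << " has non-primitive shape functions." << std::endl;

  // number of vector components in which shape function i is nonzero:
  const unsigned int i = 0;
  std::cout << fe.n_nonzero_components(i) << std::endl;
@endcode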
There is one final term that we have to take care of: the right hand side
contained the term $(g,{\mathbf v}\cdot {\mathbf n})_{\partial\Omega}$, constituting the
weak enforcement of pressure boundary conditions. We have already seen in
@ref step_7 "step-7" how to deal with face integrals: essentially the same as with
domain integrals, except that we have to use the FEFaceValues class
instead of FEValues. To compute the boundary term we then simply have
to loop over all boundary faces and integrate there. If you look closely at
the definitions of the extract_* functions above, you will realize
that it isn't even necessary to write new functions that extract the velocity
and pressure components of shape functions from FEFaceValues objects:
both FEValues and FEFaceValues are derived from a common
base class, FEValuesBase, and the extraction functions above can
therefore deal with both in exactly the same way. Assembling the missing
boundary term then takes on the following form:
@code
for (unsigned int face_no=0;
     face_no<GeometryInfo<dim>::faces_per_cell;
     ++face_no)
  if (cell->at_boundary(face_no))
    {
      fe_face_values.reinit (cell, face_no);

      pressure_boundary_values
        .value_list (fe_face_values.get_quadrature_points(),
                     boundary_values);

      for (unsigned int q=0; q<n_face_q_points; ++q)
        for (unsigned int i=0; i<dofs_per_cell; ++i)
          {
            const Tensor<1,dim>
              phi_i_u = extract_u (fe_face_values, i, q);

            local_rhs(i) += -(phi_i_u *
                              fe_face_values.normal_vector(q) *
                              boundary_values[q] *
                              fe_face_values.JxW(q));
          }
    }
@endcode

You will find the exact same code as above in the sources for the present
program. We will therefore not comment much on it below.

Linear solvers and preconditioners

- -After assembling the linear system we are faced with the task of solving -it. The problem here is: the matrix has a zero block at the bottom right -(there is no term in the bilinear form that couples the pressure $p$ with the -pressure test function $q$), and it is indefinite. At least it is -symmetric. In other words: the Conjugate Gradient method is not going to -work. We would have to resort to other iterative solvers instead, such as -MinRes, SymmLQ, or GMRES, that can deal with indefinite systems. However, then -the next problem immediately surfaces: due to the zero block, there are zeros -on the diagonal and none of the usual preconditioners (Jacobi, SSOR) will work -as they require division by diagonal elements. - - -
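Should one nevertheless want to attack the indefinite system as a whole, the
library does offer solvers for this situation. The following is only a
hypothetical sketch, not the approach taken in this program: SolverMinRes can
work on the block objects of this program directly, but without a good
preconditioner it converges slowly:
@code
  // solve the full indefinite block system with MinRes;
  // system_matrix, solution and system_rhs are the BlockSparseMatrix
  // and BlockVector objects of this program:
  SolverControl solver_control (solution.size(),
                                1e-6*system_rhs.l2_norm());
  SolverMinRes<BlockVector<double> > minres (solver_control);
  minres.solve (system_matrix, solution, system_rhs,
                PreconditionIdentity());
@endcode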

Solving using the Schur complement

In view of this, let us take another look at the matrix. If we sort our
degrees of freedom so that all velocities come before all pressure variables,
then we can subdivide the linear system $AX=B$ into the following blocks:
@f{eqnarray*}
  \left(\begin{array}{cc}
    M & B^T \\ B & 0
  \end{array}\right)
  \left(\begin{array}{c}
    U \\ P
  \end{array}\right)
  =
  \left(\begin{array}{c}
    F \\ G
  \end{array}\right),
@f}
where $U,P$ are the values of velocity and pressure degrees of freedom,
respectively, $M$ is the mass matrix on the velocity space, $B$ corresponds to
the negative divergence operator, and $B^T$ is its transpose and corresponds
to the negative gradient.

By block elimination, we can then re-order this system in the following way
(multiply the first row of the system by $BM^{-1}$ and then subtract the
second row from it):
@f{eqnarray*}
  BM^{-1}B^T P &=& BM^{-1} F - G, \\
  MU &=& F - B^TP.
@f}
Here, the matrix $S=BM^{-1}B^T$ (called the Schur complement of $A$)
is obviously symmetric and, owing to the positive definiteness of $M$ and the
fact that $B^T$ has full column rank, $S$ is also positive
definite.

Consequently, if we could compute $S$, we could apply the Conjugate Gradient
method to it. However, computing $S$ is expensive, and $S$ is most
likely also a full matrix. On the other hand, the CG algorithm doesn't require
us to actually have a representation of $S$: it is sufficient to form
matrix-vector products with it. We can do so in steps: to compute $Sv$, we
    -
  1. form $w = B^T v$; -
  2. solve $My = w$ for $y=M^{-1}w$, using the CG method applied to the - positive definite and symmetric mass matrix $M$; -
  3. form $z=By$ to obtain $Sv=z$. -
We will implement a class that does that in the program. Before showing its
code, let us first note that we need to multiply with $M^{-1}$ in several
places here: in multiplying with the Schur complement $S$, forming the right
hand side of the first equation, and solving in the second equation. From a
coding viewpoint, it is therefore appropriate to relegate such a recurring
operation to a class of its own. We call it InverseMatrix. As far as
linear solvers are concerned, this class will have all operations that solvers
need, which in fact includes only the ability to perform matrix-vector
products; we form them by using a CG solve (this of course requires that the
matrix passed to this class satisfies the requirements of the CG
solvers). Here are the relevant parts of the code that implements this:
@code
class InverseMatrix
{
  public:
    InverseMatrix (const SparseMatrix<double> &m);

    void vmult (Vector<double>       &dst,
                const Vector<double> &src) const;

  private:
    const SmartPointer<const SparseMatrix<double> > matrix;
    // ...
};


void InverseMatrix::vmult (Vector<double>       &dst,
                           const Vector<double> &src) const
{
  SolverControl solver_control (src.size(), 1e-8*src.l2_norm());
  SolverCG<>    cg (solver_control, vector_memory);

  cg.solve (*matrix, dst, src, PreconditionIdentity());
}
@endcode
Once created, objects of this class can act as matrices: they perform
matrix-vector multiplications. How this is actually done is irrelevant to the
outside world.

Using this class, we can then write a class that implements the Schur
complement in much the same way: to act as a matrix, it only needs to offer a
function to perform a matrix-vector multiplication, using the algorithm
above. Here are again the relevant parts of the code:
@code
class SchurComplement
{
  public:
    SchurComplement (const BlockSparseMatrix<double> &A,
                     const InverseMatrix             &Minv);

    void vmult (Vector<double>       &dst,
                const Vector<double> &src) const;

  private:
    const SmartPointer<const BlockSparseMatrix<double> > system_matrix;
    const SmartPointer<const InverseMatrix>              m_inverse;

    mutable Vector<double> tmp1, tmp2;
};


void SchurComplement::vmult (Vector<double>       &dst,
                             const Vector<double> &src) const
{
  system_matrix->block(0,1).vmult (tmp1, src);
  m_inverse->vmult (tmp2, tmp1);
  system_matrix->block(1,0).vmult (dst, tmp2);
}
@endcode

In this code, the constructor takes a reference to a block sparse matrix for
the entire system, and a reference to an object representing the inverse of
the mass matrix. It stores these using SmartPointer objects (see
@ref step_7 "step-7"), and additionally allocates two temporary vectors tmp1 and
tmp2 for the vectors labeled $w,y$ in the list above.

In the matrix-vector multiplication function, the product $Sv$ is performed in
exactly the order outlined above. Note how we access the blocks $B^T$ and $B$
by calling system_matrix->block(0,1) and
system_matrix->block(1,0) respectively, thereby picking out
individual blocks of the block system. Multiplication by $M^{-1}$ happens
using the object introduced above.

With all this, we can go ahead and write down the solver we are going to
use.
Essentially, all we need to do is form the right hand sides of the two
equations defining $P$ and $U$, and then solve them with the Schur complement
matrix and the mass matrix, respectively:
@code
template <int dim>
void MixedLaplaceProblem<dim>::solve ()
{
  const InverseMatrix m_inverse (system_matrix.block(0,0));
  Vector<double> tmp (solution.block(0).size());

  {
    Vector<double> schur_rhs (solution.block(1).size());

    m_inverse.vmult (tmp, system_rhs.block(0));
    system_matrix.block(1,0).vmult (schur_rhs, tmp);
    schur_rhs -= system_rhs.block(1);

    SolverControl solver_control (system_matrix.block(0,0).m(),
                                  1e-6*schur_rhs.l2_norm());
    SolverCG<>    cg (solver_control);

    cg.solve (SchurComplement(system_matrix, m_inverse),
              solution.block(1),
              schur_rhs,
              PreconditionIdentity());
  }
  {
    system_matrix.block(0,1).vmult (tmp, solution.block(1));
    tmp *= -1;
    tmp += system_rhs.block(0);

    m_inverse.vmult (solution.block(0), tmp);
  }
}
@endcode

This code looks more impressive than it actually is. At the beginning, we
declare an object representing $M^{-1}$ and a temporary vector (of the size of
the first block of the solution, i.e. with as many entries as there are
velocity unknowns), and the two blocks surrounded by braces then solve the two
equations for $P$ and $U$, in this order. Most of the code in each of the two
blocks is actually devoted to constructing the proper right hand sides. For
the first equation, this would be $BM^{-1}F-G$, and $-B^TP+F$ for the second
one. The first equation is then solved with the Schur complement matrix, while
for the second one we simply multiply the right hand side with $M^{-1}$. The
code as shown uses no preconditioner (i.e. the identity matrix as
preconditioner) for the Schur complement.

A preconditioner for the Schur complement

One may ask whether it would help if we had a preconditioner for the Schur
complement $S=BM^{-1}B^T$. The general answer, as usual, is: of course. The
problem is only, we don't know anything about this Schur complement matrix. We
do not know its entries, all we know is its action. On the other hand, we have
to realize that our solver is expensive since in each iteration we have to do
one matrix-vector product with the Schur complement, which means that we have
to invert the mass matrix once in each iteration.

There are different approaches to preconditioning such a matrix. On the one
extreme is to use something that is cheap to apply and therefore has no real
impact on the work done in each iteration. The other extreme is a
preconditioner that is itself very expensive, but in return really brings down
the number of iterations required to solve with $S$.

We will try something along the second approach, as much to improve the
performance of the program as to demonstrate some techniques. To this end, let
us recall that the ideal preconditioner is, of course, $S^{-1}$, but that is
unattainable. However, how about
@f{eqnarray*}
  \tilde S^{-1} = [B ({\textrm{diag}\ }M)^{-1}B^T]^{-1}
@f}
as a preconditioner? That would mean that every time we have to do one
preconditioning step, we actually have to solve with $\tilde S$. At first,
this looks almost as expensive as solving with $S$ right away. However, note
that in the inner iteration, we do not have to calculate $M^{-1}$, but only
the inverse of its diagonal, which is cheap.

To implement something like this, let us first generalize the
InverseMatrix class so that it can work not only with
SparseMatrix<double> objects, but with any matrix type. This looks like so:
@code
template <class Matrix>
class InverseMatrix
{
  public:
    InverseMatrix (const Matrix &m);

    void vmult (Vector<double>       &dst,
                const Vector<double> &src) const;

  private:
    const SmartPointer<const Matrix> matrix;

    //...
};


template <class Matrix>
void InverseMatrix<Matrix>::vmult (Vector<double>       &dst,
                                   const Vector<double> &src) const
{
  SolverControl solver_control (src.size(), 1e-8*src.l2_norm());
  SolverCG<>    cg (solver_control, vector_memory);

  dst = 0;

  cg.solve (*matrix, dst, src, PreconditionIdentity());
}
@endcode
Essentially, the only change we have made is the introduction of a template
argument that generalizes the use of SparseMatrix.

The next step is to define a class that represents the approximate Schur
complement. This should look very much like the Schur complement class itself,
except that it doesn't need the object representing $M^{-1}$ any more:
@code
class ApproximateSchurComplement : public Subscriptor
{
  public:
    ApproximateSchurComplement (const BlockSparseMatrix<double> &A);

    void vmult (Vector<double>       &dst,
                const Vector<double> &src) const;

  private:
    const SmartPointer<const BlockSparseMatrix<double> > system_matrix;

    mutable Vector<double> tmp1, tmp2;
};


void ApproximateSchurComplement::vmult (Vector<double>       &dst,
                                        const Vector<double> &src) const
{
  system_matrix->block(0,1).vmult (tmp1, src);
  system_matrix->block(0,0).precondition_Jacobi (tmp2, tmp1);
  system_matrix->block(1,0).vmult (dst, tmp2);
}
@endcode
Note how the vmult function differs in simply doing one Jacobi sweep
(i.e. multiplying with the inverse of the diagonal) instead of multiplying
with the full $M^{-1}$.

With all this, we already have the preconditioner: it should be the inverse of
the approximate Schur complement, i.e.
we need code like this:
@code
  ApproximateSchurComplement
    approximate_schur_complement (system_matrix);

  InverseMatrix<ApproximateSchurComplement>
    preconditioner (approximate_schur_complement);
@endcode
That's all!

Taken together, the first block of our solve() function will then
look like this:
@code
    Vector<double> schur_rhs (solution.block(1).size());

    m_inverse.vmult (tmp, system_rhs.block(0));
    system_matrix.block(1,0).vmult (schur_rhs, tmp);
    schur_rhs -= system_rhs.block(1);

    SchurComplement
      schur_complement (system_matrix, m_inverse);

    ApproximateSchurComplement
      approximate_schur_complement (system_matrix);

    InverseMatrix<ApproximateSchurComplement>
      preconditioner (approximate_schur_complement);

    SolverControl solver_control (system_matrix.block(0,0).m(),
                                  1e-6*schur_rhs.l2_norm());
    SolverCG<>    cg (solver_control);

    cg.solve (schur_complement, solution.block(1), schur_rhs,
              preconditioner);
@endcode
Note how we pass the so-defined preconditioner to the solver working on the
Schur complement matrix.

Obviously, applying this inverse of the approximate Schur complement is a very
expensive preconditioner, almost as expensive as inverting the Schur
complement itself. We can expect it to significantly reduce the number of
outer iterations required for the Schur complement. In fact it does: in a
typical run on 5 times refined meshes using elements of order 0, the number of
outer iterations drops from 164 to 12. On the other hand, we now have to apply
a very expensive preconditioner 12 times. A better measure is therefore simply
the run-time of the program: on my laptop, it drops from 28 to 23 seconds for
this test case. That doesn't seem too impressive, but the savings become more
pronounced on finer meshes and with elements of higher order. For example, a
six times refined mesh and using elements of order 2 yields an improvement of
318 to 12 outer iterations, with the run-time dropping from 338 to 229
seconds. Not earth shattering, but significant.

A remark on similar functionality in deal.II

- -As a final remark about solvers and preconditioners, let us note that a -significant amount of functionality introduced above is actually also present -in the library itself. It probably even is more powerful and general, but we -chose to introduce this material here anyway to demonstrate how to work with -block matrices and to develop solvers and preconditioners, rather than using -black box components from the library. - -For those interested in looking up the corresponding library classes: the -InverseMatrix is roughly equivalent to the -PreconditionLACSolver class in the library. Likewise, the Schur -complement class corresponds to the SchurMatrix class. - - -

Definition of the test case

In this tutorial program, we will solve the Laplace equation in mixed
formulation as stated above. Since we want to monitor convergence of the
solution inside the program, we choose right hand side, boundary conditions,
and the coefficient so that we recover a solution function known to us. In
particular, we choose the pressure solution
@f{eqnarray*}
  p = -\left(\frac \alpha 2 xy^2 + \beta x - \frac \alpha 6 x^3\right),
@f}
and for the coefficient we choose the unit matrix $K_{ij}=\delta_{ij}$ for
simplicity. Consequently, the exact velocity satisfies
@f{eqnarray*}
  {\mathbf u} =
  \left(\begin{array}{c}
    \frac \alpha 2 y^2 + \beta - \frac \alpha 2 x^2 \\
    \alpha xy
  \end{array}\right).
@f}
This solution was chosen since it is exactly divergence free, making it a
realistic test case for incompressible fluid flow. As a consequence, the right
hand side equals $f=0$, and as boundary values we have to choose
$g=p|_{\partial\Omega}$.

For the computations in this program, we choose $\alpha=0.3,\beta=1$. You can
find the resulting solution in the ``Results'' section below, after the
commented program.

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-20.data/results.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-20.data/results.dox
deleted file mode 100644
index 902cfd1e54..0000000000
--- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-20.data/results.dox
+++ /dev/null
@@ -1,294 +0,0 @@

Results

- -

Output of the program and graphical visualization

If we run the program as is, we get this output:
@code
examples/step-20> make run
============================ Remaking Makefile.dep
==============debug========= step-20.cc
============================ Linking step-20
============================ Running step-20
Number of active cells: 64
Total number of cells: 85
Number of degrees of freedom: 208 (144+64)
10 CG Schur complement iterations to obtain convergence.
Errors: ||e_p||_L2 = 0.178055, ||e_u||_L2 = 0.0433435
@endcode

The fact that the number of iterations is so small, of course, is due to the good
(but expensive!) preconditioner we have developed. To get confidence in the
solution, let us take a look at it. The following three images show (from left
to right) the x-velocity, the y-velocity, and the pressure (click on the images
for larger versions):

@image html step-20.u.png
@image html step-20.v.png
@image html step-20.p.png



Let us start with the pressure: it is highest at the left and lowest at the
right, so flow will be from left to right. In addition, though hardly visible
in the graph, we have chosen the pressure field such that the left-right
flow first channels towards the center and then outward again. Consequently,
the x-velocity has to increase to get the flow through the narrow part,
something that can easily be seen in the left image. The middle image
represents inward flow in y-direction at the left end of the domain, and
outward flow in y-direction at the right end of the domain.



As an additional remark, note how the x-velocity in the left image is only
continuous in x-direction, whereas the y-velocity is continuous in
y-direction. The flow fields are discontinuous in the other directions. This
very obviously reflects the continuity properties of the Raviart-Thomas
elements, which are, in fact, only in the space H(div) and not in the space
$H^1$. Finally, the pressure field is completely discontinuous, but
that should not come as a surprise given that we have chosen FE_DGQ(0) as
the finite element for that solution component.

Convergence

The program offers two obvious places where playing and observing convergence
is in order: the degree of the finite elements used (passed to the constructor
of the MixedLaplaceProblem class from main()), and
the refinement level (determined in
MixedLaplaceProblem::make_grid_and_dofs). What one can do is to
change these values and observe the errors computed later on in the course of
the program run.



If one does this, one finds the following pattern for the $L_2$ error
in the pressure variable:

<table align="center" border="1">
  <tr>
    <th></th>
    <th colspan="3">Finite element order</th>
  </tr>
  <tr>
    <th>Refinement level</th> <th>0</th> <th>1</th> <th>2</th>
  </tr>
  <tr> <td>0</td> <td>1.45344</td>   <td>0.0831743</td>   <td>0.0235186</td>   </tr>
  <tr> <td>1</td> <td>0.715099</td>  <td>0.0245341</td>   <td>0.00293983</td>  </tr>
  <tr> <td>2</td> <td>0.356383</td>  <td>0.0063458</td>   <td>0.000367478</td> </tr>
  <tr> <td>3</td> <td>0.178055</td>  <td>0.00159944</td>  <td>4.59349e-05</td> </tr>
  <tr> <td>4</td> <td>0.0890105</td> <td>0.000400669</td> <td>5.74184e-06</td> </tr>
  <tr> <td>5</td> <td>0.0445032</td> <td>0.000100218</td> <td>7.17799e-07</td> </tr>
  <tr> <td>6</td> <td>0.0222513</td> <td>2.50576e-05</td> <td>9.0164e-08</td>  </tr>
  <tr> <td></td>  <td>$O(h)$</td>    <td>$O(h^2)$</td>    <td>$O(h^3)$</td>    </tr>
</table>
The theoretically expected convergence orders are very nicely reflected by the
experimentally observed ones indicated in the last row of the table.



One can make the same experiment with the $L_2$ error
in the velocity variables:

<table align="center" border="1">
  <tr>
    <th></th>
    <th colspan="3">Finite element order</th>
  </tr>
  <tr>
    <th>Refinement level</th> <th>0</th> <th>1</th> <th>2</th>
  </tr>
  <tr> <td>0</td> <td>0.367423</td>   <td>0.127657</td>    <td>5.10388e-14</td> </tr>
  <tr> <td>1</td> <td>0.175891</td>   <td>0.0319142</td>   <td>9.04414e-15</td> </tr>
  <tr> <td>2</td> <td>0.0869402</td>  <td>0.00797856</td>  <td>1.23723e-14</td> </tr>
  <tr> <td>3</td> <td>0.0433435</td>  <td>0.00199464</td>  <td>1.86345e-07</td> </tr>
  <tr> <td>4</td> <td>0.0216559</td>  <td>0.00049866</td>  <td>2.72566e-07</td> </tr>
  <tr> <td>5</td> <td>0.010826</td>   <td>0.000124664</td> <td>3.57141e-07</td> </tr>
  <tr> <td>6</td> <td>0.00541274</td> <td>3.1166e-05</td>  <td>4.46124e-07</td> </tr>
  <tr> <td></td>  <td>$O(h)$</td>     <td>$O(h^2)$</td>    <td>$O(h^3)$</td>    </tr>
</table>
-The result concerning the convergence order is the same here. - - - - -
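As an aside, the orders in the last rows can be verified with a two-line
computation: the observed convergence order is the binary logarithm of the
factor by which the error shrinks from one refinement level to the next. A
minimal sketch, using two values from the pressure table above:
@code
#include <cmath>
#include <iostream>

int main ()
{
  // L2 pressure errors on refinement levels 5 and 6 for order-0
  // elements, taken from the table above:
  const double error_level_5 = 0.0445032,
               error_level_6 = 0.0222513;

  // log2 of the error reduction; the output is approximately 1.0,
  // i.e. O(h) convergence:
  std::cout << std::log(error_level_5/error_level_6) / std::log(2.)
            << std::endl;
}
@endcode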

Possibilities for extensions

Realistic flow computations for ground water or oil reservoir simulations will
not use a constant permeability. Here's a first, rather simple way to change
this situation: we use a permeability that decays very rapidly away from a
central flowline until it hits a background value of 0.001. This is to mimic
the behavior of fluids in sandstone: in most of the domain, the sandstone is
homogeneous and, while permeable to fluids, not overly so; on the other hand, the
stone has cracked, or faulted, along one line, and the fluids flow much
more easily along this large crack. Here is how we could implement something like
this:
@code
template <int dim>
void
KInverse<dim>::value_list (const std::vector<Point<dim> > &points,
                           std::vector<Tensor<2,dim> >    &values) const
{
  Assert (points.size() == values.size(),
          ExcDimensionMismatch (points.size(), values.size()));

  for (unsigned int p=0; p<points.size(); ++p)
    {
      values[p].clear ();

      const double distance_to_flowline
        = std::fabs(points[p][1]-0.2*std::sin(10*points[p][0]));

      const double permeability = std::max(std::exp(-(distance_to_flowline*
                                                      distance_to_flowline)
                                                    / (0.1 * 0.1)),
                                           0.001);

      for (unsigned int d=0; d<dim; ++d)
        values[p][d][d] = 1./permeability;
    }
}
@endcode
Remember that this function has to return the inverse of the
permeability tensor.

Going one step further, one could also imagine a medium where the
permeability is high in the vicinity of a number of randomly placed centers.
A class implementing this idea would store these centers and initialize them
in its constructor:
@code
template <int dim>
class KInverse : public TensorFunction<2,dim>
{
  public:
    KInverse ();

    virtual void value_list (const std::vector<Point<dim> > &points,
                             std::vector<Tensor<2,dim> >    &values) const;

  private:
    std::vector<Point<dim> > centers;
};


template <int dim>
KInverse<dim>::KInverse ()
{
  const unsigned int N = 40;
  centers.resize (N);
  for (unsigned int i=0; i<N; ++i)
    for (unsigned int d=0; d<dim; ++d)
      centers[i][d] = 2.*rand()/RAND_MAX-1;
}


template <int dim>
void
KInverse<dim>::value_list (const std::vector<Point<dim> > &points,
                           std::vector<Tensor<2,dim> >    &values) const
{
  Assert (points.size() == values.size(),
          ExcDimensionMismatch (points.size(), values.size()));

  for (unsigned int p=0; p<points.size(); ++p)
    {
      values[p].clear ();

      double permeability = 0;
      for (unsigned int i=0; i<centers.size(); ++i)
        permeability += std::exp(-(points[p]-centers[i]).square()
                                 / (0.1 * 0.1));

      const double normalized_permeability
        = std::max(permeability, 0.001);

      for (unsigned int d=0; d<dim; ++d)
        values[p][d][d] = 1./normalized_permeability;
    }
}
@endcode

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-21.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-21.data/intro.dox
deleted file mode 100644
--- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-21.data/intro.dox
+++ /dev/null
@@ -1,2 +0,0 @@

Introduction

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-21.data/results.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-21.data/results.dox deleted file mode 100644 index e67fccc1c1..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-21.data/results.dox +++ /dev/null @@ -1,2 +0,0 @@ - -

Results

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-3.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-3.data/intro.dox deleted file mode 100644 index 222a263a56..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-3.data/intro.dox +++ /dev/null @@ -1,16 +0,0 @@ - -

Introduction

- - -This is the first example where we actually use finite elements to compute -something. We -will solve a simple version of Laplace's equation with zero boundary -values, but a nonzero right hand side. This example is still quite -simple, but it already shows the basic structure of most finite -element programs, which are along the following lines: -
    -
  • Grid generation; -
  • Assembling matrices and vectors of the discrete system; -
  • Solving the linear system of equations; -
  • Writing results to disk. -
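In code, this sequence of steps is typically reflected in the member
functions of a single class whose run() function calls them in turn. The
following is only a sketch of this structure; the class and function names
are illustrative here, although they follow the layout of this program:
@code
// the usual structure of a simple finite element program:
class LaplaceProblem
{
  public:
    void run ()
    {
      make_grid_and_dofs ();   // grid generation
      assemble_system ();      // matrices and vectors of the system
      solve ();                // linear solver
      output_results ();       // writing results to disk
    }
  // ...
};
@endcode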
diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-3.data/results.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-3.data/results.dox deleted file mode 100644 index 19f31c400b..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-3.data/results.dox +++ /dev/null @@ -1,143 +0,0 @@ - -

Results

The output of the program looks as follows:
@code
Number of active cells: 1024
Total number of cells: 1365
Number of degrees of freedom: 1089
DEAL:cg::Starting value 0.121094
DEAL:cg::Convergence step 48 value 5.33692e-13
@endcode

The first three lines are what we wrote to cout. The last
two lines were generated without our intervention by the CG
solver. The first of these two lines states the residual at the start of the
iteration, while the second tells us that the solver needed 48
iterations to bring the norm of the residual to 5.3e-13, i.e. below
the threshold 1e-12 which we have set in the `solve' function. We will
show in the next program how to suppress this output, which is
sometimes useful for debugging purposes, but often clutters up the
screen display.

Apart from the output shown above, the program generated the file
solution.gpl, which is in GNUPLOT format. It can be
viewed as follows: invoke GNUPLOT and enter the following sequence of
commands at its prompt:
@code
examples/step-3> gnuplot

        G N U P L O T
        Version 3.7 patchlevel 3
        last modified Thu Dec 12 13:00:00 GMT 2002
        System: Linux 2.6.11.4-21.10-default

        Copyright(C) 1986 - 1993, 1998 - 2002
        Thomas Williams, Colin Kelley and many others

        Type `help` to access the on-line reference manual
        The gnuplot FAQ is available from
        http://www.gnuplot.info/gnuplot-faq.html

        Send comments and requests for help to
        Send bugs, suggestions and mods to


Terminal type set to 'x11'
gnuplot> set data style lines
gnuplot> splot "solution.gpl"
@endcode
This produces the picture of the solution below left. Alternatively,
you can order GNUPLOT to do some hidden line removal by the command
@code
gnuplot> set hidden3d
@endcode
to get the result at the right:
<table align="center">
  <tr>
    <td>@image html step-3.solution-1.png</td>
    <td>@image html step-3.solution-2.png</td>
  </tr>
</table>
- - - - -

Possibilities for extensions

- -If you want to play around a little bit with this program, here are a few -suggestions: -

- -
    -
  • Change the geometry and mesh: In the program, we have generated a square
    domain and mesh by using the GridGenerator::hyper_cube
    function. However, the GridGenerator class has a good number of other
    functions as well. Try an L-shaped domain, a ring, or other domains you find
    there.

  • Change the boundary condition: The code uses the ZeroFunction
    function to generate zero boundary conditions. However, you may want to try
    non-zero constant boundary values using ConstantFunction<2>(1)
    instead of ZeroFunction<2>() to have unit
    Dirichlet boundary values. More exotic functions are described in the
    documentation of the Functions namespace, and you may pick one
    to describe your particular boundary values.

  • Modify the type of boundary condition: Presently, what happens is that we use
    Dirichlet boundary values all around, since the default is that all boundary
    parts have boundary indicator zero, and then we tell the
    VectorTools::interpolate_boundary_values function to interpolate
    boundary values to zero on all boundary components with indicator zero.

    We can change this behavior if we assign parts of the boundary different
    indicators. For example, try this immediately after calling
    GridGenerator::hyper_cube:
    @code
    triangulation.begin_active()->face(0)->set_boundary_indicator(1);
    @endcode
    What this does is the following: it first asks the triangulation to return
    an iterator that
    points to the first active cell. Of course, this being the coarse mesh for
    the triangulation of a square, the triangulation has only a single cell at
    this moment, and it is active. Next, we ask the cell to return an iterator to
    its first face, and then we ask the face to reset the boundary indicator of
    that face to 1. What then follows is this: When the mesh is refined, faces of
    child cells inherit the boundary indicator of their parents, i.e. even on the
    finest mesh, the faces on one side of the square have boundary indicator
    1. Later, when we get to interpolating boundary conditions, the
    interpolate_boundary_values call will only produce boundary values
    for those faces that have zero boundary indicator, and leave those faces
    alone that have a different boundary indicator. Keeping with the theory of
    the Laplace equation, this will then lead to homogeneous Neumann conditions on
    this side, i.e. a zero normal derivative of the solution. You will see this
    if you run the program.
  • - A slight variation of the last point would be to set different boundary - values as above, but then use a different boundary value function for - boundary indicator one. In practice, what you have to do is to add a second - call to interpolate_boundary_values for boundary indicator one: - @code - VectorTools::interpolate_boundary_values (dof_handler, - 1, - ConstantFunction<2>(1.), - boundary_values); - @endcode - If you have this call immediately after the first one to this function, then - it will interpolate boundary values on faces with boundary indicator 1 to the - unit value, and merge these interpolated values with those previously - computed for boundary indicator 0. The result will be that we will get - discontinuous boundary values, zero on three sides of the square, and one on - the fourth. -
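Tying the last two items together, a complete sketch of what the
boundary-value handling could then look like is given below; the variable
names dof_handler, system_matrix, solution and system_rhs are those used in
this program:
@code
  std::map<unsigned int,double> boundary_values;

  // zero values on all faces with boundary indicator 0:
  VectorTools::interpolate_boundary_values (dof_handler,
                                            0,
                                            ZeroFunction<2>(),
                                            boundary_values);

  // unit values on the faces we marked with boundary indicator 1:
  VectorTools::interpolate_boundary_values (dof_handler,
                                            1,
                                            ConstantFunction<2>(1.),
                                            boundary_values);

  // eliminate these degrees of freedom from the linear system:
  MatrixTools::apply_boundary_values (boundary_values,
                                      system_matrix,
                                      solution,
                                      system_rhs);
@endcode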
diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-4.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-4.data/intro.dox deleted file mode 100644 index a21e0340c3..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-4.data/intro.dox +++ /dev/null @@ -1,133 +0,0 @@ - -

Introduction

deal.II has a unique feature which we call
``dimension independent programming''. You may have noticed in the
previous examples that many classes had a number in angle brackets
suffixed to them. This is to indicate that for example the
triangulations in two and three space dimensions are different, but
related data types. We could as well have called them
Triangulation2d and Triangulation3d instead
of Triangulation@<2@> and
Triangulation@<3@> to name the two classes, but this
has an important drawback: assume you have a function which does
exactly the same thing, but on 2d or 3d triangulations,
depending on which dimension we would like to solve the equation in
presently (if you don't believe that it is the common case that a
function does something that is the same in all dimensions, just take
a look at the code below - there are almost no distinctions between 2d
and 3d!). We would have to write the same function twice, once
working on Triangulation2d and once working with a
Triangulation3d. This is an unnecessary obstacle in
programming and makes it a nuisance to keep the two functions in sync
(at best) or difficult to find errors if the two versions get out of
sync (at worst; this would probably be the more common case).




Such obstacles can be circumvented by using some template magic as
provided by the C++ language: templatized classes and functions are
not really classes or functions but only a pattern depending on an
as-yet undefined data type parameter or on a numerical value which is
also unknown at the point of definition. However, the compiler can
build proper classes or functions from these templates if you provide
it with the information that is needed for that. Of course, parts of
the template can depend on the template parameters, and they will be
resolved at the time of compilation for a specific template
parameter. For example, consider the following piece of code:
@code
  template <int dim>
  void make_grid (Triangulation<dim> &triangulation)
  {
    GridGenerator::hyper_cube (triangulation, -1, 1);
  };
@endcode



At the point where the compiler sees this function, it does not know
anything about the actual value of dim. The only thing the compiler has is
a template, i.e. a blueprint, to generate
functions make_grid if given a particular value of
dim. Since dim has an unknown value, there is no
code the compiler can generate for the moment.



However, if later down the compiler would encounter code that looks, for
example, like this,
@code
  Triangulation<2> triangulation;
  make_grid (triangulation);
@endcode
then the compiler will deduce that the function make_grid for
dim==2 was
requested and will compile the template above into a function with dim replaced
by 2 everywhere, i.e. it will compile the function as if it were defined
as
@code
  void make_grid (Triangulation<2> &triangulation)
  {
    GridGenerator::hyper_cube (triangulation, -1, 1);
  };
@endcode



However, it is worth noting that the function
GridGenerator::hyper_cube depends on the dimension as
well, so in this case, the compiler will call the function
GridGenerator::hyper_cube@<2@> while if dim were 3,
it would call GridGenerator::hyper_cube@<3@> which
might be (and actually is) a totally unrelated function.



The same can be done with member variables.
Consider the following
function, which might in turn call the above one:
@code
  template <int dim>
  void make_grid_and_dofs (Triangulation<dim> &triangulation)
  {
    make_grid (triangulation);

    DoFHandler<dim> dof_handler(triangulation);
    ...
  };
@endcode
This function has a local variable of type
DoFHandler@<dim@>. Again, the compiler can't
compile this function until it knows for which dimension. If you call
this function for a specific dimension as above, the compiler will
take the template, replace all occurrences of dim by the dimension for
which it was called, and compile it. If you call the function several
times for different dimensions, it will compile it several times, each
time calling the right make_grid function and reserving the right
amount of memory for this variable; note that the size of a
DoFHandler might, and indeed does, depend on the space dimension.



The deal.II library is built around this concept
of dimension-independent programming, and therefore allows you to program in
a way that will not need to
distinguish between the space dimensions. It should be noted that in
only a very few places is it necessary to actually compare the
dimension using ifs or switches. However, since the compiler
has to compile each function for each dimension separately, even there
it knows the value of dim at the time of compilation and will
therefore be able to optimize away the if statement along with the
unused branch.



In this example program, we will show how to program dimension
independently (which in fact is even simpler than if you had to take
care about the dimension) and we will extend the Laplace problem of
the last example to a program that runs in two and three space
dimensions at the same time. Other extensions are the use of a
non-constant right hand side function and of non-zero boundary values.


diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-4.data/results.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-4.data/results.dox
deleted file mode 100644
index cfd3be1c1f..0000000000
--- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-4.data/results.dox
+++ /dev/null
@@ -1,111 +0,0 @@

Results

- - -The output of the program looks as follows (the number of iterations -may vary by one or two, depending on your computer, since this is -often dependent on the round-off accuracy of floating point -operations, which differs between processors): -@code -Solving problem in 2 space dimensions. - Number of active cells: 256 - Total number of cells: 341 - Number of degrees of freedom: 289 - 26 CG iterations needed to obtain convergence. -Solving problem in 3 space dimensions. - Number of active cells: 4096 - Total number of cells: 4681 - Number of degrees of freedom: 4913 - 30 CG iterations needed to obtain convergence. -@endcode -It is obvious that in three spatial dimensions the number of cells and -therefore also the number of degrees of freedom is -much higher. What cannot be seen here, is that besides this higher -number of rows and columns in the matrix, there are also significantly -more entries per row of the matrix in three space -dimensions. Together, this leads to a much higher numerical effort for -solving the system of equation, which you can feel when you actually -run the program. - - - -The program produces two files: solution-2d.gmv and -solution-3d.gmv, which can be viewed using the program -GMV (in case you do not have that program, you can easily change the -output format in the program to something which you can view more -easily). From the two-dimensional output, we have produced the -following two pictures: - - - - - - - - - -
- @image html step-4.solution-2d.png - - @image html step-4.grid-2d.png -
The left one shows the solution of the problem under consideration as a 3D plot. As can be seen, the solution is almost flat in the interior of the domain and has a higher curvature near the boundary. This, of course, is due to the fact that for Laplace's equation the curvature of the solution equals the right hand side, which was chosen as a quartic polynomial that is nearly zero in the interior and rises sharply only when approaching the boundaries of the domain; the maximal values of the right hand side function are at the corners of the domain, where the solution also changes most rapidly. It is also nice to see that the solution follows the desired quadratic boundary values along the boundaries of the domain.

The right picture shows the two dimensional grid, colorized by the values of the solution function. This is not very exciting, but the colors are nice.

In three spatial dimensions, visualization is a bit more difficult. To the left, you can see the solution at three of the six outer faces of the cube in which we solved the equation, and on a plane through the origin. On some of the planes, the cut through the grid is also shown.
- @image html step-4.solution-3d.png - - @image html step-4.grid-3d.png -
The right picture shows the three dimensional grid, colorized by the solution's values. 3D grids are difficult to visualize, which can be seen here already, even though the grid is not even locally refined.
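As mentioned above, the output format is easy to change if GMV is not available. The following is only a hedged sketch of how the write call might be exchanged for VTK output; the variables dof_handler and solution are assumed to exist as in the program:
@code
// Sketch: write the computed solution in VTK rather than GMV format.
DataOut<dim> data_out;
data_out.attach_dof_handler (dof_handler);
data_out.add_data_vector (solution, "solution");
data_out.build_patches ();

std::ofstream output (dim == 2 ? "solution-2d.vtk" : "solution-3d.vtk");
data_out.write_vtk (output);    // instead of data_out.write_gmv (output)
@endcode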

Possibilities for extensions

Essentially, the possibilities for playing around with the program are the same as for the previous one, except that they will now also apply to the 3d case. For inspiration, read up on possible extensions in the documentation of step-3.

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-5.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-5.data/intro.dox deleted file mode 100644 index 43e24bfd97..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-5.data/intro.dox +++ /dev/null @@ -1,37 +0,0 @@ - -

Introduction

- - -This example does not show revolutionary new things, but it shows many -small improvements over the previous examples, and also many small -things that can usually be found in finite element programs. Among -them are: -
    -
  • Computations on successively refined grids. At least in the mathematical sciences, it is common to compute solutions on a hierarchy of grids in order to get a feeling for the accuracy of the solution; if you only have one solution on a single grid, you usually can't guess the accuracy of the solution. Furthermore, deal.II is designed to support adaptive algorithms, in which iterated solution on successively refined grids is at the heart of the algorithm. Although adaptive grids are not used in this example, the foundations for them are laid here.
  • In practical applications, the domains are often subdivided - into triangulations by automatic mesh generators. In order to - use them, it is important to read coarse grids from a file. In - this example, we will read a coarse grid in UCD (unstructured - cell data) format as used by AVS Explorer. -
  • Finite element programs usually use extensive amounts of - computing time, so some optimizations are sometimes - necessary. We will show some of them. -
  • On the other hand, finite element programs tend to be rather complex, so debugging is an important aspect. We support safe programming by using assertions that check the validity of parameters and internal states in debug mode, but are removed in optimized mode.
  • Regarding the mathematical side, we show how to support a - variable coefficient in the elliptic operator and how to use - preconditioned iterative solvers for the linear systems of - equations. -
diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-5.data/results.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-5.data/results.dox deleted file mode 100644 index 3cd03c5bb0..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-5.data/results.dox +++ /dev/null @@ -1,163 +0,0 @@ - -

Results

When the last block in main() is commented in, the output of the program looks as follows:
@code
Cycle 0:
   Number of active cells: 20
   Total number of cells: 20
   Number of degrees of freedom: 25
   13 CG iterations needed to obtain convergence.
Cycle 1:
   Number of active cells: 80
   Total number of cells: 100
   Number of degrees of freedom: 89
   18 CG iterations needed to obtain convergence.
Cycle 2:
   Number of active cells: 320
   Total number of cells: 420
   Number of degrees of freedom: 337
   29 CG iterations needed to obtain convergence.
Cycle 3:
   Number of active cells: 1280
   Total number of cells: 1700
   Number of degrees of freedom: 1313
   52 CG iterations needed to obtain convergence.
Cycle 4:
   Number of active cells: 5120
   Total number of cells: 6820
   Number of degrees of freedom: 5185
   95 CG iterations needed to obtain convergence.
Cycle 5:
   Number of active cells: 20480
   Total number of cells: 27300
   Number of degrees of freedom: 20609
   182 CG iterations needed to obtain convergence.
--------------------------------------------------------
An error occurred in line <273> of file <step-5.cc> in function
    void Coefficient<dim>::value_list(const std::vector<Point<dim>, std::allocator<Point<dim> > >&, std::vector<double, std::allocator<double> >&, unsigned int) const [with int dim = 2]
The violated condition was:
    values.size() == points.size()
The name and call sequence of the exception was:
    ExcDimensionMismatch (values.size(), points.size())
Additional Information:
Dimension 1 not equal to 2

Stacktrace:
-----------
#0  ./step-5: Coefficient<2>::value_list(std::vector<Point<2>, std::allocator<Point<2> > > const&, std::vector<double, std::allocator<double> >&, unsigned) const
#1  ./step-5: main
--------------------------------------------------------
make: *** [run] Aborted
@endcode

Let's first focus on the things before the error: in each cycle, the number of cells quadruples and the number of CG iterations roughly doubles. Also, in each cycle, the program writes one output graphics file in EPS format. They are depicted in the following:
- @image html step-5.solution-0.png - - @image html step-5.solution-1.png -
- @image html step-5.solution-2.png - - @image html step-5.solution-3.png -
- @image html step-5.solution-4.png - - @image html step-5.solution-5.png -
Due to the variable coefficient (the curvature there is reduced by the same factor by which the coefficient is increased), the top region of the solution is flattened. The gradient of the solution is discontinuous there, although this is not very clearly visible in the pictures above. We will look at this in more detail in the next example.

As for the error — let's look at it again:
@code
--------------------------------------------------------
An error occurred in line <273> of file <step-5.cc> in function
    void Coefficient<dim>::value_list(const std::vector<Point<dim>, std::allocator<Point<dim> > >&, std::vector<double, std::allocator<double> >&, unsigned int) const [with int dim = 2]
The violated condition was:
    values.size() == points.size()
The name and call sequence of the exception was:
    ExcDimensionMismatch (values.size(), points.size())
Additional Information:
Dimension 1 not equal to 2

Stacktrace:
-----------
#0  ./step-5: Coefficient<2>::value_list(std::vector<Point<2>, std::allocator<Point<2> > > const&, std::vector<double, std::allocator<double> >&, unsigned) const
#1  ./step-5: main
--------------------------------------------------------
make: *** [run] Aborted
@endcode

What we see is that the error was triggered in line 273 of the step-5.cc file (as we modify tutorial programs over time, these line numbers change, so you should check what line number you actually get in your output). That's already good information if you want to look up in the code what exactly happened. But the text tells you even more. First, it prints the function in which this happened, and then the plain text version of the condition that was violated. This will almost always be enough to let you know what exactly went wrong.

But that's not all yet. You get to see the name of the exception (ExcDimensionMismatch), and this exception even prints the values of the two array sizes. If you go back to the code in main(), you will remember that we gave the two variables sizes 1 and 2, which of course are the ones that you find in the output again.

So now we know pretty much exactly where the error happened and what went wrong. What we don't know yet is how exactly we got there. The stacktrace at the bottom tells us: the problem occurred in Coefficient::value_list (stack frame 0), and that function was called from main() (stack frame 1). In realistic programs, there would be many more functions in between these two. For example, we might have made the mistake in the assemble_system function, in which case stack frame 1 would be LaplaceProblem<2>::assemble_system, stack frame 2 would be LaplaceProblem<2>::run, and stack frame 3 would be main() — you get the idea.

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-6.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-6.data/intro.dox deleted file mode 100644 index 990220acd0..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-6.data/intro.dox +++ /dev/null @@ -1,57 +0,0 @@ - -

Introduction

The main emphasis in this example is the handling of locally refined grids. The approach to adaptivity chosen in deal.II is to use grids in which neighboring cells may be refined a different number of times. This results in nodes on the interfaces between cells that are regular mesh vertices on the refined side, but sit in the interior of an edge of the coarser neighbor on the other side. The common term for these is “hanging nodes”.

To guarantee that the global solution is continuous at these nodes as well, we have to state some additional constraints on the values of the solution at these nodes. In the program below, we will show how we can get these constraints from the library, and how to use them in the solution of the linear system of equations.

The locally refined grids are produced using an error estimator class which estimates the energy error with respect to the Laplace operator. This error estimator, although developed for Laplace's equation, has proven to be a suitable tool for generating locally refined meshes for a wide range of equations, not restricted to elliptic problems. Although it will create non-optimal meshes for other equations, it is often a good way to quickly produce meshes that are well adapted to the features of solutions, such as regions of great variation or discontinuities. Since it was developed by Kelly and co-workers, we often refer to it as the “Kelly refinement indicator” in the library, documentation, and mailing list. The class that implements it is called KellyErrorEstimator. Although the error estimator (and its implementation in the deal.II library) is capable of handling variable coefficients in the equation, we will not use this feature since we are only interested in a quick and simple way to generate locally refined grids.

Since the concepts used for locally refined grids are so important, we do not show much additional new material in this example. The most important exception is that we show how to use biquadratic elements instead of the bilinear ones which we have used in all previous examples. In fact, the use of higher order elements is accomplished by replacing only three lines of the program, namely the declaration of the fe variable and the use of an appropriate quadrature formula in two places. The rest of the program is unchanged.

The only other new thing is a method to catch exceptions in the main function in order to output some information in case the program crashes for some reason.

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-6.data/results.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-6.data/results.dox deleted file mode 100644 index 95700f9299..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-6.data/results.dox +++ /dev/null @@ -1,164 +0,0 @@ - -

Results

The output of the program looks as follows:
@code
Cycle 0:
   Number of active cells:       20
   Number of degrees of freedom: 89
Cycle 1:
   Number of active cells:       44
   Number of degrees of freedom: 209
Cycle 2:
   Number of active cells:       92
   Number of degrees of freedom: 449
Cycle 3:
   Number of active cells:       200
   Number of degrees of freedom: 961
Cycle 4:
   Number of active cells:       440
   Number of degrees of freedom: 2033
Cycle 5:
   Number of active cells:       932
   Number of degrees of freedom: 4465
Cycle 6:
   Number of active cells:       1916
   Number of degrees of freedom: 9113
Cycle 7:
   Number of active cells:       3884
   Number of degrees of freedom: 18401
@endcode
As intended, the number of cells roughly doubles in each cycle. The number of degrees of freedom is slightly more than four times the number of cells; one would expect a factor of exactly four in two spatial dimensions on an infinite grid (since the spacing between the degrees of freedom is half the cell width: one additional degree of freedom on each edge and one in the middle of each cell), but it is larger than that factor due to the finite size of the mesh and due to additional degrees of freedom which are introduced by hanging nodes and local refinement.

The final solution, as written by the program at the end of the run() function, looks as follows:

@image html step-6.solution.png

In each cycle, the program furthermore writes the grid in EPS format. These are shown in the following:
- @image html step-6.grid-0.png - - @image html step-6.grid-1.png -
- @image html step-6.grid-2.png - - @image html step-6.grid-3.png -
- @image html step-6.grid-4.png - - @image html step-6.grid-5.png -
- @image html step-6.grid-6.png - - @image html step-6.grid-7.png -
It is clearly visible that the region where the solution has a kink, i.e. the circle at radial distance 0.5 from the center, is refined most. Furthermore, the central region where the solution is very smooth and almost flat is hardly refined at all, but this results from the fact that we did not take into account that the coefficient is large there. The region outside is refined rather randomly, since the second derivative is constant there and refinement is therefore mostly based on the size of the cells and their deviation from the optimal square.

For completeness, we show what happens if the code we commented upon in the destructor of the LaplaceProblem class is omitted from this example.

@code
--------------------------------------------------------
An error occurred in line <79> of file in function
    virtual Subscriptor::~Subscriptor()
The violated condition was:
    counter == 0
The name and call sequence of the exception was:
    ExcInUse(counter, object_info->name(), infostring)
Additional Information:
Object of class 4FE_QILi2EE is still used by 1 other objects.
  from Subscriber 10DoFHandlerILi2EE

Stacktrace:
-----------
#0  /u/bangerth/p/deal.II/1/deal.II/lib/libbase.g.so: Subscriptor::~Subscriptor()
#1  /u/bangerth/p/deal.II/1/deal.II/lib/libdeal_II_2d.g.so: FiniteElement<2>::~FiniteElement()
#2  ./step-6: FE_Poly<TensorProductPolynomials<2>, 2>::~FE_Poly()
#3  ./step-6: FE_Q<2>::~FE_Q()
#4  ./step-6: LaplaceProblem<2>::~LaplaceProblem()
#5  ./step-6: main
--------------------------------------------------------
make: *** [run] Aborted
@endcode

From the above error message, we conclude that an object of type 10DoFHandlerILi2EE is still using the object of type 4FE_QILi2EE. These are of course "mangled" names for DoFHandler and FE_Q. The mangling works as follows: the first number indicates the number of characters of the class name, i.e. 10 for DoFHandler and 4 for FE_Q; the rest of the text then encodes the template arguments (ILi2EE stands for the argument <2>). From this we can already glean who's the culprit here and who's the victim: the one object that still uses the finite element is the dof_handler object.

The stacktrace gives an indication of where the problem happened. We see that the exception was triggered in the destructor of the FiniteElement class, which was called through a few more functions from the destructor of the LaplaceProblem class, exactly where we have commented out the call to DoFHandler::clear().

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-7.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-7.data/intro.dox deleted file mode 100644 index a928a5806f..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-7.data/intro.dox +++ /dev/null @@ -1,154 +0,0 @@ - -

Introduction

- -In this program, we will mainly consider two aspects: -
    -
  1. Verification of correctness of the program and generation of convergence - tables; -
  2. Non-homogeneous Neumann boundary conditions for the Helmholtz equation. -
-Besides these topics, again a variety of improvements and tricks will be -shown. - -

Verification of correctness

There has probably never been a non-trivial finite element program that worked right from the start. It is therefore necessary to find ways to verify whether a computed solution is correct or not. Usually, this is done by choosing the set-up of a simulation such that we know the exact continuous solution, and evaluating the difference between the continuous and the computed discrete solution. If this difference converges to zero with the right order of convergence, this is already a good indication of correctness, although there may be other sources of error persisting which have only a small contribution to the total error or are of higher order.

In this example, we will not go into the theory of systematic software verification, which is a very complicated problem. Rather, we will demonstrate the tools which deal.II can offer in this respect. This is basically centered around the functionality of a single function, integrate_difference. This function computes the difference between a given continuous function and a finite element field in various norms on each cell. At present, the supported norms are the following, where $u$ denotes the continuous function, $u_h$ the finite element field, and $K$ is an element of the triangulation:
@f{eqnarray*}
  {\| u-u_h \|}_{L_1(K)} &=& \int_K |u-u_h| \; dx,
  \\
  {\| u-u_h \|}_{L_2(K)} &=& \left( \int_K |u-u_h|^2 \; dx \right)^{1/2},
  \\
  {\| u-u_h \|}_{L_\infty(K)} &=& \max_{x \in K} |u(x) - u_h(x)|,
  \\
  {| u-u_h |}_{H^1(K)} &=& \left( \int_K |\nabla(u-u_h)|^2 \; dx \right)^{1/2},
  \\
  {\| u-u_h \|}_{H^1(K)} &=& \left( {\| u-u_h \|}^2_{L_2(K)}
                                   +{| u-u_h |}^2_{H^1(K)} \right)^{1/2}.
@f}
All these norms and semi-norms can also be evaluated with weighting functions, for example in order to exclude singularities from the determination of the global error. The function also works for vector-valued functions. It should be noted that all these quantities are evaluated using quadrature formulas; the choice of the right quadrature formula is therefore crucial to the accurate evaluation of the error. This holds in particular for the $L_\infty$ norm, where we evaluate the maximal deviation of numerical and exact solution only at the quadrature points; one should then not use a quadrature rule whose points lie only where super-convergence might occur.

The function integrate_difference evaluates the desired norm on each cell $K$ of the triangulation and returns a vector which holds these values for each cell. From the local values, we can then obtain the global error. For example, if the vector $(e_i)$ contains the local $L_2$ norms, then
@f[
  E = \| {\mathbf e} \| = \left( \sum_i e_i^2 \right)^{1/2}
@f]
is the global $L_2$ error.

In the program, we will show how to evaluate and use these quantities, and we will monitor their values under mesh refinement. Of course, we have to choose the problem at hand such that we can explicitly state the solution and its derivatives, but since we want to evaluate the correctness of the program, this is a reasonable restriction. If we know that the program produces the correct solution for one (or, if one wants to be really sure: many) specifically chosen right hand sides, we can be rather confident that it will also compute the correct solution for problems where we don't know the exact values.
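To make this concrete, here is a minimal sketch of such an error computation. It assumes a class Solution<dim> (derived from Function<dim>) representing the exact solution, and the quadrature degree is an arbitrary choice for illustration:
@code
// Sketch: evaluate the cellwise L2 difference between the finite element
// field `solution' and the (assumed) exact solution class Solution<dim>,
// then combine the cell values into the global L2 error.
Vector<float> difference_per_cell (triangulation.n_active_cells());
VectorTools::integrate_difference (dof_handler,
                                   solution,
                                   Solution<dim>(),
                                   difference_per_cell,
                                   QGauss<dim>(3),
                                   VectorTools::L2_norm);
// the global error is the l2 norm of the vector of cellwise norms
const double L2_error = difference_per_cell.l2_norm();
@endcode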
In addition to simply computing these quantities, we will show how to generate nicely formatted tables from the data produced by this program, tables that also automatically compute convergence rates and the like. In addition, we will compare different strategies for mesh refinement.
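A hedged sketch of how such a table might be assembled with the library's ConvergenceTable class (the error variables are assumed to have been computed as above, and the column names are arbitrary):
@code
// Sketch: collect one row of data per refinement cycle; at the end,
// let the table compute reduction rates and print itself.
ConvergenceTable convergence_table;
convergence_table.add_value ("cells", triangulation.n_active_cells());
convergence_table.add_value ("dofs",  dof_handler.n_dofs());
convergence_table.add_value ("L2",    L2_error);
convergence_table.add_value ("H1",    H1_error);
// ...after the last cycle:
convergence_table.set_scientific ("L2", true);
convergence_table.set_scientific ("H1", true);
convergence_table.evaluate_convergence_rates ("L2", ConvergenceTable::reduction_rate_log2);
convergence_table.evaluate_convergence_rates ("H1", ConvergenceTable::reduction_rate_log2);
convergence_table.write_text (std::cout);
@endcode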

Non-homogeneous Neumann boundary conditions

The second, totally unrelated, subject of this example program is the use of non-homogeneous boundary conditions. These are included in the variational form using boundary integrals which we have to evaluate numerically when assembling the right hand side vector.

Before we go into programming, let's have a brief look at the mathematical formulation. The equation which we want to solve is Helmholtz's equation ``with the nice sign'':
@f[
  -\Delta u + u = f,
@f]
on the square $[-1,1]^2$, augmented by boundary conditions
@f[
  u = g_1
@f]
on some part $\Gamma_1$ of the boundary $\Gamma$, and
@f[
  {\mathbf n}\cdot \nabla u = g_2
@f]
on the rest $\Gamma_2 = \Gamma \backslash \Gamma_1$.

We choose the right hand side function $f$ such that the exact solution is
@f[
  u(x) = \sum_{i=1}^3 \exp\left(-\frac{|x-x_i|^2}{\sigma^2}\right)
@f]
where the centers $x_i$ of the exponentials are $x_1=(-\frac 12,\frac 12)$, $x_2=(-\frac 12,-\frac 12)$, and $x_3=(\frac 12,-\frac 12)$. The half width is set to $\sigma=\frac 13$.

We further choose $\Gamma_1=\Gamma \cap\{\{x=1\} \cup \{y=1\}\}$, and there set $g_1$ to the exact values of $u$. Likewise, we choose $g_2$ on the remaining portion of the boundary to be the exact normal derivative of the continuous solution.

Using the above definitions, we can state the weak formulation of the equation, which reads: find $u\in H^1_g=\{v\in H^1: v|_{\Gamma_1}=g_1\}$ such that
@f[
  {(\nabla u, \nabla v)}_\Omega + {(u,v)}_\Omega
  =
  {(f,v)}_\Omega + {(g_2,v)}_{\Gamma_2}
@f]
for all test functions $v\in H^1_0=\{v\in H^1: v|_{\Gamma_1}=0\}$. The boundary term ${(g_2,v)}_{\Gamma_2}$ has appeared by integration by parts and using $\partial_n u=g_2$ on $\Gamma_2$ and $v=0$ on $\Gamma_1$. The cell matrices and vectors which we use to build the global matrices and right hand side vectors in the discrete formulation therefore look like this:
@f{eqnarray*}
  A_{ij}^K &=& \left(\nabla \varphi_i, \nabla \varphi_j\right)_K
               +\left(\varphi_i, \varphi_j\right)_K,
  \\
  f_i^K &=& \left(f,\varphi_i\right)_K
            +\left(g_2, \varphi_i\right)_{\partial K\cap \Gamma_2}.
@f}
Since the generation of the domain integrals has been shown several times in previous examples, only the generation of the contour integral is of interest here. It basically works along the following lines: for domain integrals we have the FEValues class that provides values and gradients of the shape functions, as well as Jacobian determinants and other information, at specified quadrature points in the cell; likewise, there is a class FEFaceValues that performs these tasks for integrations on faces of cells. One provides it with a quadrature formula for a manifold of dimension one less than that of the domain, and with the cell and the number of the face on which we want to perform the integration. The class will then compute the values, gradients, normal vectors, weights, etc. at the quadrature points on this face, which we can then use in the same way as for the domain integrals. The details of how this is done are shown in the following program.

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-7.data/results.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-7.data/results.dox deleted file mode 100644 index cbed6604ef..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-7.data/results.dox +++ /dev/null @@ -1,224 +0,0 @@ - -

Results

The program generates two kinds of output. The first are the output files solution-adaptive-q1.gmv, solution-global-q1.gmv, and solution-global-q2.gmv. We show the latter in a 3d view here:

@image html step-7.solution.png

Secondly, the program writes tables not only to disk, but also to the screen while running:

@code
examples/step-7> make run
============================ Running step-7
Solving with Q1 elements, adaptive refinement
=============================================

Cycle 0:
   Number of active cells:       4
   Number of degrees of freedom: 9
Cycle 1:
   Number of active cells:       13
   Number of degrees of freedom: 22
Cycle 2:
   Number of active cells:       31
   Number of degrees of freedom: 46
Cycle 3:
   Number of active cells:       64
   Number of degrees of freedom: 87
Cycle 4:
   Number of active cells:       127
   Number of degrees of freedom: 160
Cycle 5:
   Number of active cells:       244
   Number of degrees of freedom: 297
Cycle 6:
   Number of active cells:       466
   Number of degrees of freedom: 543

cycle cells dofs    L2        H1      Linfty
    0     4    9 1.198e+00 2.732e+00 1.383e+00
    1    13   22 8.795e-02 1.193e+00 1.816e-01
    2    31   46 8.147e-02 1.167e+00 1.654e-01
    3    64   87 7.702e-02 1.077e+00 1.310e-01
    4   127  160 4.643e-02 7.988e-01 6.745e-02
    5   244  297 2.470e-02 5.568e-01 3.668e-02
    6   466  543 1.622e-02 4.107e-01 2.966e-02

Solving with Q1 elements, global refinement
===========================================

Cycle 0:
   Number of active cells:       4
   Number of degrees of freedom: 9
Cycle 1:
   Number of active cells:       16
   Number of degrees of freedom: 25
Cycle 2:
   Number of active cells:       64
   Number of degrees of freedom: 81
Cycle 3:
   Number of active cells:       256
   Number of degrees of freedom: 289
Cycle 4:
   Number of active cells:       1024
   Number of degrees of freedom: 1089
Cycle 5:
   Number of active cells:       4096
   Number of degrees of freedom: 4225
Cycle 6:
   Number of active cells:       16384
   Number of degrees of freedom: 16641

cycle cells  dofs    L2        H1      Linfty
    0     4     9 1.198e+00 2.732e+00 1.383e+00
    1    16    25 8.281e-02 1.190e+00 1.808e-01
    2    64    81 8.142e-02 1.129e+00 1.294e-01
    3   256   289 2.113e-02 5.828e-01 4.917e-02
    4  1024  1089 5.319e-03 2.934e-01 1.359e-02
    5  4096  4225 1.332e-03 1.469e-01 3.482e-03
    6 16384 16641 3.332e-04 7.350e-02 8.758e-04

n cells      H1                  L2
0     4 2.732e+00    - 1.198e+00     -    -
1    16 1.190e+00 1.20 8.281e-02 14.47 3.86
2    64 1.129e+00 0.08 8.142e-02  1.02 0.02
3   256 5.828e-01 0.95 2.113e-02  3.85 1.95
4  1024 2.934e-01 0.99 5.319e-03  3.97 1.99
5  4096 1.469e-01 1.00 1.332e-03  3.99 2.00
6 16384 7.350e-02 1.00 3.332e-04  4.00 2.00

Solving with Q2 elements, global refinement
===========================================

Cycle 0:
   Number of active cells:       4
   Number of degrees of freedom: 25
Cycle 1:
   Number of active cells:       16
   Number of degrees of freedom: 81
Cycle 2:
   Number of active cells:       64
   Number of degrees of freedom: 289
Cycle 3:
   Number of active cells:       256
   Number of degrees of freedom: 1089
Cycle 4:
   Number of active cells:       1024
   Number of degrees of freedom: 4225
Cycle 5:
   Number of active cells:       4096
   Number of degrees of freedom: 16641
Cycle 6:
   Number of active cells:       16384
   Number of degrees of freedom: 66049

cycle cells  dofs    L2        H1      Linfty
    0     4    25 1.433e+00 2.445e+00 1.286e+00
    1    16    81 7.912e-02 1.168e+00 1.728e-01
    2    64   289 7.755e-03 2.511e-01 1.991e-02
    3   256  1089 9.969e-04 6.235e-02 2.764e-03
    4  1024  4225 1.265e-04 1.571e-02 3.527e-04
    5  4096 16641 1.587e-05 3.937e-03 4.343e-05
    6 16384 66049 1.986e-06 9.847e-04 5.402e-06

n cells      H1                  L2
0     4 2.445e+00    - 1.433e+00     -    -
1    16 1.168e+00 1.07 7.912e-02 18.11 4.18
2    64 2.511e-01 2.22 7.755e-03 10.20 3.35
3   256 6.235e-02 2.01 9.969e-04  7.78 2.96
4  1024 1.571e-02 1.99 1.265e-04  7.88 2.98
5  4096 3.937e-03 2.00 1.587e-05  7.97 2.99
6 16384 9.847e-04 2.00 1.986e-06  7.99 3.00
@endcode

One can see the error reduction upon grid refinement, and for the cases where global refinement was performed, the convergence rates can be seen as well. The linear and quadratic convergence rates of Q1 and Q2 elements in the $H^1$ norm can clearly be seen, as can the quadratic and cubic rates in the $L_2$ norm.

Finally, the program generated various LaTeX tables. We show here the convergence table of the Q2 element with global refinement, after converting the format to HTML:
<table align="center" border="1">
  <tr>
    <th>n</th> <th>cells</th> <th colspan="2">H1-error</th> <th colspan="3">L2-error</th>
  </tr>
  <tr> <td>0</td> <td>4</td>     <td>2.445e+00</td> <td>-</td>    <td>1.433e+00</td> <td>-</td>     <td>-</td>    </tr>
  <tr> <td>1</td> <td>16</td>    <td>1.168e+00</td> <td>1.07</td> <td>7.912e-02</td> <td>18.11</td> <td>4.18</td> </tr>
  <tr> <td>2</td> <td>64</td>    <td>2.511e-01</td> <td>2.22</td> <td>7.755e-03</td> <td>10.20</td> <td>3.35</td> </tr>
  <tr> <td>3</td> <td>256</td>   <td>6.235e-02</td> <td>2.01</td> <td>9.969e-04</td> <td>7.78</td>  <td>2.96</td> </tr>
  <tr> <td>4</td> <td>1024</td>  <td>1.571e-02</td> <td>1.99</td> <td>1.265e-04</td> <td>7.88</td>  <td>2.98</td> </tr>
  <tr> <td>5</td> <td>4096</td>  <td>3.937e-03</td> <td>2.00</td> <td>1.587e-05</td> <td>7.97</td>  <td>2.99</td> </tr>
  <tr> <td>6</td> <td>16384</td> <td>9.847e-04</td> <td>2.00</td> <td>1.986e-06</td> <td>7.99</td>  <td>3.00</td> </tr>
</table>

diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-8.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-8.data/intro.dox deleted file mode 100644 index 315a5b1d99..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-8.data/intro.dox +++ /dev/null @@ -1,322 +0,0 @@ - -

Introduction

In real life, most partial differential equations are really systems of equations. Accordingly, the solutions are usually vector-valued. The deal.II library supports such problems, and we will show that this is mostly rather simple. The only more complicated parts are in assembling matrix and right hand side, but these are easily understood as well.

In this example, we want to solve the elastic equations. They are an extension of Laplace's equation with a vector-valued solution that describes the displacement in each space direction of an elastic body which is subject to a force. Of course, the force is also vector-valued, meaning that in each point it has a direction and a magnitude. The elastic equations are the following:
@f[
  -
  \partial_j (c_{ijkl} \partial_k u_l)
  =
  f_i,
  \qquad
  i=1\ldots d,
@f]
where the values $c_{ijkl}$ are the stiffness coefficients and will usually depend on the space coordinates. In many cases, one knows that the material under consideration is isotropic, in which case by introduction of the two coefficients $\lambda$ and $\mu$ the coefficient tensor reduces to
@f[
  c_{ijkl}
  =
  \lambda \delta_{ij} \delta_{kl} +
  \mu (\delta_{ik} \delta_{jl} + \delta_{il} \delta_{jk}).
@f]

The elastic equations can then be rewritten in a much simpler form:
@f[
   -
   \nabla \lambda (\nabla\cdot {\mathbf u})
   -
   (\nabla \cdot \mu \nabla) {\mathbf u}
   -
   \nabla\cdot \mu (\nabla {\mathbf u})^T
   =
   {\mathbf f},
@f]
and the respective bilinear form is then
@f[
  a({\mathbf u}, {\mathbf v}) =
  \left(
    \lambda \nabla\cdot {\mathbf u}, \nabla\cdot {\mathbf v}
  \right)_\Omega
  +
  \sum_{k,l}
  \left(
    \mu \partial_l u_k, \partial_l v_k
  \right)_\Omega
  +
  \sum_{k,l}
  \left(
    \mu \partial_l u_k, \partial_k v_l
  \right)_\Omega,
@f]
or also writing the first term as a sum over components:
@f[
  a({\mathbf u}, {\mathbf v}) =
  \sum_{k,l}
  \left(
    \lambda \partial_l u_l, \partial_k v_k
  \right)_\Omega
  +
  \sum_{k,l}
  \left(
    \mu \partial_l u_k, \partial_l v_k
  \right)_\Omega
  +
  \sum_{k,l}
  \left(
    \mu \partial_l u_k, \partial_k v_l
  \right)_\Omega.
@f]

How do we now assemble the matrix for such an equation? The first thing we need is some knowledge about how the shape functions work in the case of vector-valued finite elements. Basically, this comes down to the following: let $n$ be the number of shape functions for the scalar finite element of which we build the vector element (for example, we will use bilinear functions for each component of the vector-valued finite element, so the scalar finite element is the FE_Q(1) element which we have used in previous examples already, and $n=4$ in two space dimensions). Further, let $N$ be the number of shape functions for the vector element; in two space dimensions, we need $n$ shape functions for each component of the vector, so $N=2n$. Then, the $i$th shape function of the vector element has the form
@f[
  \Phi_i({\mathbf x}) = \varphi_{base(i)}({\mathbf x})\ {\mathbf e}_{comp(i)},
@f]
where ${\mathbf e}_l$ is the $l$th unit vector, $comp(i)$ is the function that tells us which component of $\Phi_i$ is the one that is nonzero (for each vector shape function, only one component is nonzero, and all others are zero). $\varphi_{base(i)}(x)$ describes the space dependence of the shape function, which is taken to be the $base(i)$-th shape function of the scalar element.
Of course, while $i$ is in the range $0,\ldots,N-1$, the functions $comp(i)$ and $base(i)$ have the ranges $0,1$ (in 2D) and $0,\ldots,n-1$, respectively.

For example (though this sequence of shape functions is not guaranteed, and you should not rely on it), the following layout could be used by the library:
@f{eqnarray*}
  \Phi_0({\mathbf x}) &=&
  \left(\begin{array}{c}
    \varphi_0({\mathbf x}) \\ 0
  \end{array}\right),
  \\
  \Phi_1({\mathbf x}) &=&
  \left(\begin{array}{c}
    0 \\ \varphi_0({\mathbf x})
  \end{array}\right),
  \\
  \Phi_2({\mathbf x}) &=&
  \left(\begin{array}{c}
    \varphi_1({\mathbf x}) \\ 0
  \end{array}\right),
  \\
  \Phi_3({\mathbf x}) &=&
  \left(\begin{array}{c}
    0 \\ \varphi_1({\mathbf x})
  \end{array}\right),
  \ldots
@f}
where here
@f[
  comp(0)=0, \quad comp(1)=1, \quad comp(2)=0, \quad comp(3)=1, \quad \ldots
@f]
@f[
  base(0)=0, \quad base(1)=0, \quad base(2)=1, \quad base(3)=1, \quad \ldots
@f]

In all but very rare cases, you will not need to know which shape function $\varphi_{base(i)}$ of the scalar element belongs to a shape function $\Phi_i$ of the vector element. Let us therefore define
@f[
  \phi_i = \varphi_{base(i)}
@f]
by which we can write the vector shape function as
@f[
  \Phi_i({\mathbf x}) = \phi_{i}({\mathbf x})\ {\mathbf e}_{comp(i)}.
@f]
You can now safely forget about the function $base(i)$, at least for the rest of this example program.

Now using these vector shape functions, we can write the discrete finite element solution as
@f[
  {\mathbf u}_h({\mathbf x}) =
  \sum_i \Phi_i({\mathbf x})\ u_i
@f]
with scalar coefficients $u_i$. If we define an analogous function ${\mathbf v}_h$ as test function, we can write the discrete problem as follows: find coefficients $u_i$ such that
@f[
  a({\mathbf u}_h, {\mathbf v}_h) = ({\mathbf f}, {\mathbf v}_h)
  \qquad
  \forall {\mathbf v}_h.
@f]

If we insert the definition of the bilinear form and the representation of ${\mathbf u}_h$ and ${\mathbf v}_h$ into this formula:
@f{eqnarray*}
  \sum_{i,j}
    u_i v_j
  \sum_{k,l}
  \left\{
  \left(
    \lambda \partial_l (\Phi_i)_l, \partial_k (\Phi_j)_k
  \right)_\Omega
  +
  \left(
    \mu \partial_l (\Phi_i)_k, \partial_l (\Phi_j)_k
  \right)_\Omega
  +
  \left(
    \mu \partial_l (\Phi_i)_k, \partial_k (\Phi_j)_l
  \right)_\Omega
  \right\}
\\
=
  \sum_j v_j
  \sum_l
  \left(
    f_l,
    (\Phi_j)_l
  \right)_\Omega.
@f}
We note that here and in the following, the indices $k,l$ run over spatial directions, i.e. $0\le k,l < d$, and that indices $i,j$ run over degrees of freedom.

The local stiffness matrix on cell $K$ therefore has the following entries:
@f[
  A^K_{ij}
  =
  \sum_{k,l}
  \left\{
  \left(
    \lambda \partial_l (\Phi_i)_l, \partial_k (\Phi_j)_k
  \right)_K
  +
  \left(
    \mu \partial_l (\Phi_i)_k, \partial_l (\Phi_j)_k
  \right)_K
  +
  \left(
    \mu \partial_l (\Phi_i)_k, \partial_k (\Phi_j)_l
  \right)_K
  \right\},
@f]
where $i,j$ now are local degrees of freedom and therefore $0\le i,j < N$. In these formulas, we always take some component of the vector shape functions $\Phi_i$, which are of course given as follows (see their definition):
@f[
  (\Phi_i)_l = \phi_i \delta_{l,comp(i)},
@f]
with the Kronecker symbol $\delta_{nm}$.
Due to this, we can delete some of -the sums over $k$ and $l$: -@f{eqnarray*} - A^K_{ij} - &=& - \sum_{k,l} - \Bigl\{ - \left( - \lambda \partial_l \phi_i\ \delta_{l,comp(i)}, - \partial_k \phi_j\ \delta_{k,comp(j)} - \right)_K -\\ - &\qquad\qquad& + - \left( - \mu \partial_l \phi_i\ \delta_{k,comp(i)}, - \partial_l \phi_j\ \delta_{k,comp(j)} - \right)_K - + - \left( - \mu \partial_l \phi_i\ \delta_{k,comp(i)}, - \partial_k \phi_j\ \delta_{l,comp(j)} - \right)_K - \Bigr\} -\\ - &=& - \left( - \lambda \partial_{comp(i)} \phi_i, - \partial_{comp(j)} \phi_j - \right)_K - + - \sum_l - \left( - \mu \partial_l \phi_i, - \partial_l \phi_j - \right)_K - \ \delta_{comp(i),comp(j)} - + - \left( - \mu \partial_{comp(j)} \phi_i, - \partial_{comp(i)} \phi_j - \right)_K -\\ - &=& - \left( - \lambda \partial_{comp(i)} \phi_i, - \partial_{comp(j)} \phi_j - \right)_K - + - \left( - \mu \nabla \phi_i, - \nabla \phi_j - \right)_K - \ \delta_{comp(i),comp(j)} - + - \left( - \mu \partial_{comp(j)} \phi_i, - \partial_{comp(i)} \phi_j - \right)_K. -@f} - -Likewise, the contribution of cell $K$ to the right hand side vector is -@f{eqnarray*} - f^K_j - &=& - \sum_l - \left( - f_l, - (\Phi_j)_l - \right)_K -\\ - &=& - \sum_l - \left( - f_l, - \phi_j \delta_{l,comp(j)} - \right)_K -\\ - &=& - \left( - f_{comp(j)}, - \phi_j - \right)_K. -@f} - -This is the form in which we will implement the local stiffness matrix and -right hand side vectors. - -As a final note: in the @ref step_17 "step-17" example program, we will revisit the elastic -problem laid out here, and will show how to solve it in parallel on a cluster -of computers. The resulting program will thus be able to solve this problem to -significantly higher accuracy, and more efficiently if this is -required. In addition, in @ref step_20 "step-20", we will revisit some -vector-valued problems and show a few techniques that may make it -simpler to actually go through all the stuff shown above, with -FiniteElement::system_to_component_index etc. - diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-8.data/results.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-8.data/results.dox deleted file mode 100644 index bc6a11423e..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-8.data/results.dox +++ /dev/null @@ -1,47 +0,0 @@ - -

Results

There is not much to be said about the results of this program, apart from the fact that they look nice. All images were made using GMV from the output files that the program wrote to disk. The first picture shows the displacement as a vector field, where one vector is shown at each vertex of the grid:

@image html step-8.vectors.png

You can clearly see the sources of x-displacement around x=0.5 and x=-0.5, and of y-displacement at the origin. The next image shows the final grid after eight steps of refinement:

@image html step-8.grid.png

Finally, the x-displacement and y-displacement are displayed separately:
-@image html step-8.x.png - -@image html step-8.y.png -
- - - -It should be noted that intuitively one would have expected the -solution to be symmetric about the x- and y-axes since the x- and -y-forces are symmetric with respect to these axes. However, the force -considered as a vector is not symmetric and consequently neither is -the solution. - - diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-9.data/intro.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-9.data/intro.dox deleted file mode 100644 index 93caff4b43..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-9.data/intro.dox +++ /dev/null @@ -1,286 +0,0 @@ - -

Introduction

- - -In this example, our aims are the following: -
    -
  1. solve the advection equation $\beta \cdot \nabla u = f$; -
  2. show how we can use multiple threads to arrive at the desired results more quickly if we have a multi-processor machine;
  3. develop a simple refinement criterion. -
While the second aim is difficult to describe in general terms without reference to the code, we will discuss the other two aims in the following. The use of multiple threads will then be detailed at the relevant places within the program (a brief sketch of the pattern follows below). Furthermore, there exists a report on this subject, which is also available online from the ``Documentation'' section of the deal.II homepage.
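As a hedged preview of that pattern: assembling the system matrix is typically split by handing each thread a subrange of cells. The member function name assemble_system_interval and the variable thread_ranges below are illustrative assumptions, not the program's literal interface:
@code
// Sketch: spawn a group of threads, each assembling the contributions
// of one subrange of cells, then wait for all of them to finish.
// `thread_ranges' is assumed to split the cell range evenly.
Threads::ThreadGroup<> threads;
for (unsigned int thread=0; thread<n_threads; ++thread)
  threads += Threads::spawn (*this,
                             &AdvectionProblem<dim>::assemble_system_interval)
             (thread_ranges[thread].first, thread_ranges[thread].second);
threads.join_all ();
@endcode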

Discretizing the advection equation

- -In the present example program, we shall numerically approximate the -solution of the advection equation -@f[ - \beta \cdot \nabla u = f, -@f] -where $\beta$ is a vector field that describes advection direction and -speed (which may be dependent on the space variables), $f$ is a source -function, and $u$ is the solution. The physical process that this -equation describes is that of a given flow field $\beta$, with which -another substance is transported, the density or concentration of -which is given by $u$. The equation does not contain diffusion of this -second species within its carrier substance, but there are source -terms. - -It is obvious that at the inflow, the above equation needs to be -augmented by boundary conditions: -@f[ - u = g \qquad\qquad \mathrm{on}\ \partial\Omega_-, -@f] -where $\partial\Omega_-$ describes the inflow portion of the boundary and is -formally defined by -@f[ - \partial\Omega_- - = - \{{\mathbf x}\in \partial\Omega: \beta\cdot{\mathbf n}({\mathbf x}) < 0\}, -@f] -and ${\mathbf n}({\mathbf x})$ being the outward normal to the domain at point -${\mathbf x}\in\partial\Omega$. This definition is quite intuitive, since -as ${\mathbf n}$ points outward, the scalar product with $\beta$ can only -be negative if the transport direction $\beta$ points inward, i.e. at -the inflow boundary. The mathematical theory states that we must not -pose any boundary condition on the outflow part of the boundary. - -As it is stated, the transport equation is not stably solvable using -the standard finite element method, however. The problem is that -solutions to this equation possess only insufficient regularity -orthogonal to the transport direction: while they are smooth parallel -to $\beta$, they may be discontinuous perpendicular to this -direction. These discontinuities lead to numerical instabilities that -make a stable solution by a straight-forward discretization -impossible. We will thus use the streamline diffusion stabilized -formulation, in which we test the equation with test functions $v + -\delta \beta\cdot\nabla v$ instead of $v$, where $\delta$ is a -parameter that is chosen in the range of the (local) mesh width $h$; -good results are usually obtained by setting $\delta=0.1h$. Note that -the modification in the test function vanishes as the mesh size tends -to zero. We will not discuss reasons, pros, and cons of the streamline -diffusion method, but rather use it ``as is'', and refer the -interested reader to the sufficiently available literature; every -recent good book on finite elements should have a discussion of that -topic. - -Using the test functions as defined above, the weak formulation of -our stabilized problem reads: find a discrete function $u_h$ such that -for all discrete test functions $v_h$ there holds -@f[ - (\beta \cdot \nabla u_h, v_h + \delta \beta\cdot\nabla v_h)_\Omega - - - (\beta\cdot {\mathbf n} u_h, v_h)_{\partial\Omega_-} - = - (f, v_h + \delta \beta\cdot\nabla v_h)_\Omega - - - (\beta\cdot {\mathbf n} g, v_h)_{\partial\Omega_-}. -@f] -Note that we have included the inflow boundary values into the weak -form, and that the respective terms to the left hand side operator are -positive definite due to the fact that $\beta\cdot{\mathbf n}<0$ on the -inflow boundary. 
One would think that this leads to a system matrix to be inverted of the form
@f[
  a_{ij} =
  (\beta \cdot \nabla \varphi_i,
   \varphi_j + \delta \beta\cdot\nabla \varphi_j)_\Omega
  -
  (\beta\cdot {\mathbf n} \varphi_i, \varphi_j)_{\partial\Omega_-},
@f]
with basis functions $\varphi_i,\varphi_j$. However, this is a pitfall that happens to every numerical analyst at least once (including the author): we have here expanded the solution $u_h = u_i \varphi_i$, but if we do so, we will have to solve the problem
@f[
  {\mathbf u}^T A = {\mathbf f}^T,
@f]
where ${\mathbf u}=(u_i)$, i.e. we have to solve the transpose problem of what we might have expected naively. In order to obtain the usual form of the linear system, it is therefore best to rewrite the weak formulation to
@f[
  (v_h + \delta \beta\cdot\nabla v_h, \beta \cdot \nabla u_h)_\Omega
  -
  (\beta\cdot {\mathbf n} v_h, u_h)_{\partial\Omega_-}
  =
  (v_h + \delta \beta\cdot\nabla v_h, f)_\Omega
  -
  (\beta\cdot {\mathbf n} v_h, g)_{\partial\Omega_-}
@f]
and then to obtain
@f[
  a_{ij} =
  (\varphi_i + \delta \beta \cdot \nabla \varphi_i,
   \beta\cdot\nabla \varphi_j)_\Omega
  -
  (\beta\cdot {\mathbf n} \varphi_i, \varphi_j)_{\partial\Omega_-},
@f]
as system matrix. We will assemble this matrix in the program.

There remains the solution of this linear system of equations. As the resulting matrix is no longer symmetric positive definite, we can't employ the usual CG method any more. A method suitable for systems like the one at hand is BiCGStab (bi-conjugate gradients stabilized), which is also available in deal.II, so we will use it.

Regarding the exact form of the problem which we will solve, we use the following domain and functions (in $d=2$ space dimensions):
@f{eqnarray*}
  \Omega &=& [-1,1]^d \\
  \beta({\mathbf x})
  &=&
  \left(
    \begin{array}{c}1 \\ 1+\frac 45 \sin(8\pi x)\end{array}
  \right),
  \\
  f({\mathbf x})
  &=&
  \left\{
    \begin{array}{ll}
        \frac 1{10 s^d} &
        \mathrm{for}\ |{\mathbf x}-{\mathbf x}_0|<s, \\
        0 & \mathrm{else},
    \end{array}
  \right.
  \qquad\qquad
  {\mathbf x}_0
  =
  \left(
    \begin{array}{c}-\frac 34 \\ -\frac 34\end{array}
  \right),
  \\
  g({\mathbf x})
  &=&
  e^{5(1-|{\mathbf x}|^2)} \sin\left(16\pi|{\mathbf x}|^2\right),
@f}
where $s$ denotes the radius of the ball around ${\mathbf x}_0$ in which the right hand side $f$ is nonzero. For $d>2$, we extend $\beta$ and ${\mathbf x}_0$ by duplicating their respective last components. Regarding these functions, we have the following annotations (a small code sketch of the advection field follows the list):
    -
  1. The advection field $\beta$ transports the solution roughly in -diagonal direction from lower left to upper right, but with a wiggle -structure superimposed. -
  2. The right hand side adds to the field generated by the inflow -boundary conditions a bulb in the lower left corner, which is then -transported along. -
  3. The inflow boundary conditions impose a weighted sinusoidal -structure that is transported along with the flow field. Since -$|{\mathbf x}|\ge 1$ on the boundary, the weighting term never gets very large. -
- - -
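To connect these formulas with code, the advection field $\beta$ might be implemented along the following lines. This is only a sketch: the free function advection_field and the use of M_PI from <cmath> are illustrative choices, not the program's actual interface; Point<dim> is the library's point class.
@code
// Sketch: the advection field beta from the formulas above. For dim > 2,
// the extra components repeat the second one, mirroring the extension
// rule stated in the text.
template <int dim>
Point<dim> advection_field (const Point<dim> &p)
{
  Point<dim> value;
  value[0] = 1;
  for (unsigned int d = 1; d < dim; ++d)
    value[d] = 1 + 0.8 * std::sin (8. * M_PI * p[0]);  // 1 + 4/5 sin(8 pi x)
  return value;
}
@endcode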

A simple refinement criterion

In all previous examples with adaptive refinement, we have used an error estimator first developed by Kelly et al., which assigns to each cell $K$ the following indicator:
@f[
  \eta_K =
  \left(
    \frac {h_K}{12}
    \int_{\partial K}
      [\partial_n u_h]^2 \; d\sigma
  \right)^{1/2},
@f]
where $[\partial_n u_h]$ denotes the jump of the normal derivatives across a face $\gamma\subset\partial K$ of the cell $K$. It can be shown that this error indicator uses a discrete analogue of the second derivatives, weighted by a power of the cell size that is adjusted to the linear elements assumed to be in use here:
@f[
  \eta_K \approx
  C h \| \nabla^2 u \|_K,
@f]
which itself is related to the error size in the energy norm.

The problem with this error indicator in the present case is that it assumes that the exact solution possesses second derivatives. This is already questionable for solutions to Laplace's problem in some cases, although most problems there allow solutions in $H^2$. If solutions are only in $H^1$, then the second derivatives would be singular in some parts (of lower dimension) of the domain and the error indicators would not reduce there under mesh refinement. Thus, the algorithm would continuously refine the cells around these parts, i.e. it would refine towards points or lines (in 2d).

However, for the present case, solutions are usually not even in $H^1$ (and, unlike for Laplace's equation, this missing regularity is not the exceptional case), so the error indicator described above is not really applicable. We will thus develop an indicator that is based on a discrete approximation of the gradient. Although the gradient often does not exist, this is the only criterion available to us, at least as long as we use continuous elements as in the present example. To start with, we note that given two cells $K$, $K'$ whose centers are connected by the vector ${\mathbf y}_{KK'}$, we can approximate the directional derivative of a function $u$ as follows:
@f[
  \frac{{\mathbf y}_{KK'}^T}{|{\mathbf y}_{KK'}|} \nabla u
  \approx
  \frac{u(K') - u(K)}{|{\mathbf y}_{KK'}|},
@f]
where $u(K)$ and $u(K')$ denote $u$ evaluated at the centers of the respective cells. We now multiply the above approximation by ${\mathbf y}_{KK'}/|{\mathbf y}_{KK'}|$ and sum over all neighbors $K'$ of $K$:
@f[
  \underbrace{
    \left(\sum_{K'} \frac{{\mathbf y}_{KK'} {\mathbf y}_{KK'}^T}
                         {|{\mathbf y}_{KK'}|^2}\right)}_{=:Y}
  \nabla u
  \approx
  \sum_{K'}
  \frac{{\mathbf y}_{KK'}}{|{\mathbf y}_{KK'}|}
  \frac{u(K') - u(K)}{|{\mathbf y}_{KK'}|}.
@f]
If the vectors ${\mathbf y}_{KK'}$ connecting $K$ with its neighbors span the whole space (i.e. roughly: $K$ has neighbors in all directions), then the term in parentheses in the left hand side expression forms a regular matrix, which we can invert to obtain an approximation of the gradient of $u$ on $K$:
@f[
  \nabla u
  \approx
  Y^{-1}
  \left(
    \sum_{K'}
    \frac{{\mathbf y}_{KK'}}{|{\mathbf y}_{KK'}|}
    \frac{u(K') - u(K)}{|{\mathbf y}_{KK'}|}
  \right).
-@f] -We will denote the approximation on the right hand side by -$\nabla_h u(K)$, and we will use the following quantity as refinement -criterion: -@f[ - \eta_K = h^{1+d/2} |\nabla_h u_h(K)|, -@f] -which is inspired by the following (not rigorous) argument: -@f{eqnarray*} - \|u-u_h\|^2_{L_2} - &\le& - C h^2 \|\nabla u\|^2_{L_2} -\\ - &\approx& - C - \sum_K - h_K^2 \|\nabla u\|^2_{L_2(K)} -\\ - &\le& - C - \sum_K - h_K^2 h_K^d \|\nabla u\|^2_{L_\infty(K)} -\\ - &\approx& - C - \sum_K - h_K^{2+d} |\nabla_h u_h(K)|^2 -@f} diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-9.data/results.dox b/deal.II/doc/tutorial/chapter-2.step-by-step/step-9.data/results.dox deleted file mode 100644 index e750b85c91..0000000000 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-9.data/results.dox +++ /dev/null @@ -1,50 +0,0 @@ - -

Results

The results of this program are not particularly spectacular. They consist of the console output, some grid files, and the solution on the finest grid. First for the console output:
@code
Cycle 0:
   Number of active cells:       256
   Number of degrees of freedom: 289
Cycle 1:
   Number of active cells:       643
   Number of degrees of freedom: 793
Cycle 2:
   Number of active cells:       1663
   Number of degrees of freedom: 1946
Cycle 3:
   Number of active cells:       4219
   Number of degrees of freedom: 4905
Cycle 4:
   Number of active cells:       10708
   Number of degrees of freedom: 12128
Cycle 5:
   Number of active cells:       26908
   Number of degrees of freedom: 29698
@endcode
As can be seen, quite a number of cells are used on the finest level to resolve the features of the solution. The final grid showing this is displayed in the following picture:

@image html step-9.grid.png

The structure of the grid can be understood by looking at the solution itself:

@image html step-9.solution.png

Note that the solution is made up of the part that is transported along the wiggly advection field from the left and lower boundaries to the top right, and the part that is created by the source in the lower left corner, whose contribution is transported along as well. The grid shown above is well-adapted to resolve these features.
-- 
2.39.5