https://gitweb.dealii.org/ - dealii-svn.git/commitdiff
Update with Sven Wetterauer's minimal surface equation problem.
author bangerth <bangerth@0785d39b-7218-0410-832d-ea1e28bc413d>
Thu, 12 Apr 2012 08:09:38 +0000 (08:09 +0000)
committer bangerth <bangerth@0785d39b-7218-0410-832d-ea1e28bc413d>
Thu, 12 Apr 2012 08:09:38 +0000 (08:09 +0000)
git-svn-id: https://svn.dealii.org/trunk@25407 0785d39b-7218-0410-832d-ea1e28bc413d

deal.II/examples/step-15/doc/intro.dox
deal.II/examples/step-15/doc/results.dox
deal.II/examples/step-15/step-15.cc

index beae654f9e8889ea49fc07991d520c1f1080ce62..d10cb2e0aa022604699ea33007b61a1d0caa1022 100644 (file)
 
 <h3>Foreword</h3>
 
-This program demonstrates a number of techniques that have not been shown in
-previous example programs. In particular, it shows how to program for
-one-dimensional problems, and some aspects of what to do with nonlinear
-problems, in particular how to transfer the solution from one grid to the next
-finer one. Apart from this, however, the program does not attempt to do much
-more than to entertain those who sometimes like to play with maths.
-
-The application we chose is, as you will see, not even very well suited for
-anything, since it is rather impossible to solve. When I started to write the
-program, I was not aware of this, and it only turned out later that the
-optimization problem we are looking at here is severely plagued by many,
-likely even degenerate minima, and that we cannot really hope to find a global
-one. What we do instead is to rather start the optimization from many initial
-guesses (which is cheap since the problem is 1d), and hope that we can get a
-reasonable best solution for some of them. While the whole thing, as an
-application, is not very satisfactory, keep in mind that solving particular
-applications is not the goal of the tutorial programs; rather, we would like
-to demonstrate techniques of programming with deal.II, which is indeed the
-focus here.
-
-
-<h3>The problem</h3>
-
-Now for a description of the problem.  In the book by Dacorogna on the
-Calculus of Variations, I found the following statement, which confused me
-tremendously at first (see Section 3.4.3, ``Lavrentiev Phenomenon'', very
-slightly edited):
-
-@par Theorem 4.6:
-  <em>
-  Let 
+This program deals with an example of a non-linear elliptic partial
+differential equation, the minimal surface equation. You can think of its
+solution as a soap film spanned by a closed wire that is not planar but
+curved. The soap film takes the shape of smallest surface area compatible
+with the wire; the solution of the minimal surface equation describes this
+shape, with the wire acting as boundary condition.
+
+Because the equation is non-linear, we cannot solve it directly. Instead, we
+have to use Newton's method to compute the solution iteratively.
+
+
+
+<h3>Classical formulation</h3>
+
+In its classical form, the problem is given as follows:
+
+@par
   @f[
-    I(u)=\int_0^1 (x-u^3)^2 (u')^6\; dx.
+    -\nabla \cdot \left( \frac{1}{\sqrt{1+|\nabla u|^{2}}}\nabla u \right) = 0 \qquad
+    \qquad \textrm{in} ~ \Omega
   @f]
-  Let
   @f[
-    {\cal W}_1 = \{ u\in W^{1,\infty}(0,1) : u(0)=0, u(1)=1 \}
+    u=g \qquad\qquad \textrm{on} ~ \partial \Omega
+  @f]
+
+In this example, we choose the unit ball as our domain $\Omega$.
+
+As mentioned above, we solve this equation using Newton's method; to improve
+its global convergence behaviour, we add a damping parameter $\lambda$:
+
+@par
+  @f[
+    F'(u^{n},\delta u^{n})=- \lambda F(u^{n})
   @f]
   @f[
-    {\cal W}_2 = \{ u\in W^{1,1}(0,1) : u(0)=0, u(1)=1 \}
+    u^{n+1}=u^{n}+\delta u^{n}
   @f]
-  </em>
 
-@par
-  <em>
-  Then
+with:
+
+  @f[
+    F(u):= -\nabla \cdot \left( \frac{1}{\sqrt{1+|\nabla u|^{2}}}\nabla u \right)
+  @f]
+
+and $F'(u,\delta u)$ the directional derivative of $F$ in the direction of $\delta u$:
+
+@f[
+  F'(u,\delta u)=\lim \limits_{\epsilon \rightarrow 0}{\frac{F(u+\epsilon \delta u)-
+  F(u)}{\epsilon}}.
+@f]
+
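+When we carry out this differentiation for the concrete $F$ defined above, the
+only non-trivial part is the derivative of the coefficient
+$\frac{1}{\sqrt{1+|\nabla u|^{2}}}$. A short computation using the chain rule
+gives
+
+@f[
+  \left.\frac{d}{d\epsilon}
+  \frac{1}{\sqrt{1+|\nabla (u+\epsilon \delta u)|^{2}}}\right|_{\epsilon=0}
+  = -\frac{\nabla u \cdot \nabla \delta u}{(1+|\nabla u|^{2})^{\frac{3}{2}}},
+@f]
+
+which is where the second term in the linearized equation below comes from.
+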
+We therefore have to solve a linear elliptic PDE in every Newton step, with the
+update $\delta u^{n}$ as the solution of:
+
   @f[
-  \inf_{u\in {\cal W}_1} I(u) \ge c_0 > 0 = \inf_{u\in {\cal W}_2} I(u).
+  - \nabla \cdot \left( \frac{1}{(1+|\nabla u^{n}|^{2})^{\frac{1}{2}}}\nabla
+  \delta u^{n} \right) +
+  \nabla \cdot \left( \frac{\nabla u^{n} \cdot
+  \nabla \delta u^{n}}{(1+|\nabla u^{n}|^{2})^{\frac{3}{2}}} \nabla u^{n}
+  \right)  =
+  -\left( - \nabla \cdot \left( \frac{1}{(1+|\nabla u^{n}|^{2})^{\frac{1}{2}}}
+  \nabla u^{n} \right) \right)
   @f]
-  Moreover the minimum of $I(u)$ over ${\cal W}_2$ is attained by
-  $u(x)=x^{1/3}$. 
-  </em>
 
-@par Remarks.
-  [...]
+In order to solve the minimal surface equation, this linear equation has to be
+solved in every Newton step. To do so, we have to take a look at the boundary
+conditions of this problem: assuming that $u^{n}$ already has the right boundary
+values, the Newton update $\delta u^{n}$ has to satisfy zero boundary conditions,
+so that the sum of the two again has the right boundary values.
+In the first Newton step, however, we start with the function $u^{0}\equiv 0$,
+and the Newton update then still has to deliver the right boundary values to the
+solution $u^{1}$.
 
 @par
-  <em>
-  ii) it is interesting to note that if one uses the usual finite element
-  methods (by taking piecewise affine functions, which are in $W^{1,\infty}$)
-  one will not be able to detect the minimum of some integrals such as the one
-  in the theorem.
-  </em>
-
-In other words: minimizing the energy functional over one space
-($W^{1,\infty}$) does not give the same value as when minimizing over a larger
-space ($W^{1,1}$). Furthermore, they give a rough estimate of the value of the
-constant $c_0$, which is $c_0=\frac{7^23^5}{2^{18}5^5}\approx 1.61\cdot
-10^{-6}$ (although by their calculation it is obvious that this estimate is
-far too small, but the point of course is just to show that it is strictly
-larger than zero).
-
-While the theorem was not surprising, the remark stunned me at first. After
-all, we know that we can approximate functions in $W^{1,1}$ to arbitrary
-accuracy.  Also, although it is true that finite element functions are in
-$W^{1,\infty}$, this statement is not really accurate: if the function itself
-is bounded pointwise by, say, a constant $C$, then its gradient is bounded by
-$2C/h$, and thus $\|u_h\|_{1,\infty} \le 2C/h$. That means that we should be
-able to lift this limit just by mesh refinement. Finite element functions are
-therefore only in $W^{1,\infty}$ if one considers them on a fixed grid, not on
-a sequence of successively finer grids. (Note, we can only lift the
-boundedness in $W^{1,1}$ in the same way by considering functions that
-oscillate at cell frequency; these, however, do not converge in any reasonable
-measure.)
-
-So it took me a while to see where the problem lies. Here it is: While we are
-able to approximate functions to arbitrary accuracies in <em>Sobolev
-  norms</em>, this does not necessarily also hold with respect to the functional
-$I(u)$!  After all, this functional was made to show exactly these
-pathologies.
-
-What happens in this case is actually not so difficult to understand. Let us
-look at what happens if we plug the lowest-order (piecewise linear)
-interpolant $i_hu$ of the optimal solution $u=x^{1/3}$ into the functional
-$I(u)$: on the leftmost cell, the left end of $i_hu$ is tagged to zero by the
-boundary condition, and the right end has the value $i_hu(h)=u(h)=h^{1/3}$. So
-let us only consider the contribution of this single cell to $I(u)$:
-@f{eqnarray*}
-  \int_0^h (x-(i_hu)^3)^2 ((i_hu)')^6 dx
-  &=&
-  \int_0^h (x-(h^{1/3}x)^3)^2 ((h^{1/3}/h)')^6 dx
-  \\
-  &=&
-  h^{-4} \int_0^h (x^2-2hx^4+h^2x^6) dx
-  \\
-  &=&
-  h^{-4} (h^3/3-2h^5/5+h^9/7)
-  \\
-  &=& {\cal O}(h^{-1}).
-@f}
-Ups, even the contribution of the first cell blows up under mesh refinement,
-and we have not even summed up the contributions of the other cells!
-
-It turns out, that the other cells are not really problematic (since the
-gradient is bounded there by a constant independent of $h$), but we cannot
-really avoid the trouble with the first cell: if instead of the interpolant we
-choose some other finite element function that is closer on average to
-$x^{1/3}$ than the interpolant above, then we have to increase the slope of
-this function, since we have to obey the boundary condition at the left
-end. But then we are hit by the weight $(u')^6$. This weight is simply too
-strong!
-
-On the other hand, the interpolation of the linear function $\varphi(x)=x$
-connecting the boundary values has the finite energy $I(i_h\varphi)=1/10$,
-independent of the mesh size. Thus, $i_hx^{1/3}$ cannot be the minimizer of the
-energy as $h\rightarrow 0$. This is also easy to see by noting that 
-the minimal value of $I$ cannot increase under mesh
-refinement: if it is finite for some function on some mesh, then it must be
-smaller or equal to that value on a finer mesh, since the original function is
-still in the space spanned by the shape functions on the finer grid, as finite
-element spaces are nested. However, the computation above shows that we should
-not be surprised if the value of the functional does not converge to zero, but
-rather some finite value.
-
-There is one more conclusion to be drawn from the blow-up lesson above: we
-cannot expect the finite dimensional approximation to be close to the root
-function at the left end of the domain, for any mesh we choose! Because, if it
-would, then its energy would have to blow up. And we will see exactly this
-in the results section below.
-
-
-<h3>What to do?</h3>
-
-After this somewhat theoretical introduction, let us just once in our life
-have fun with pure mathematics, and actually see what happens in this problem
-when we run the finite element method on it. So here it goes: to find the
-minimum of $I(u)$, we have to find its stationary point. The condition for
-this reads
+
+Summing up, we have to solve the PDE above with the boundary condition
+$\delta u^{0}=g$ on $\partial \Omega$ in the first Newton step and with
+$\delta u^{n}=0$ on $\partial \Omega$ in all following steps.
+
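+In terms of the program below, this distinction could be implemented roughly as
+follows (a minimal sketch using the <code>first_step</code> flag and the
+<code>BoundaryValues</code> class introduced in the code; see the end of the
+<code>assemble_system</code> function for where the program actually handles this):
+
+@code
+  std::map<unsigned int,double> boundary_values;
+  if (first_step)
+    VectorTools::interpolate_boundary_values (dof_handler,
+                                              0,
+                                              BoundaryValues<dim>(),
+                                              boundary_values);
+  else
+    VectorTools::interpolate_boundary_values (dof_handler,
+                                              0,
+                                              ZeroFunction<dim>(),
+                                              boundary_values);
+  MatrixTools::apply_boundary_values (boundary_values,
+                                      system_matrix,
+                                      newton_update,
+                                      system_rhs);
+@endcode
+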
+
+<h3>Weak formulation of the problem</h3>
+
+Starting with the strong formulation above, we obtain the weak formulation by
+multiplying both sides of the PDE with a test function $\varphi$ and integrating
+by parts:
+
+  @f[
+  \left( \nabla \varphi , \frac{1}{(1+|\nabla u^{n}|^{2})^{\frac{1}{2}}}\nabla
+  \delta u^{n} \right)-\left(\nabla \varphi ,\frac{\nabla u^{n} \cdot \nabla
+  \delta u^{n}}{(1+|\nabla u^{n}|^{2})^{\frac{3}{2}}}\nabla u^{n}  \right)
+  = -\left(\nabla \varphi , \frac{1}{(1+|\nabla u^{n}|^{2})^{\frac{1}{2}}} \nabla u^{n}
+   \right)
+  @f]
+
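+Here we have integrated by parts on both sides; for the first term on the left,
+for example,
+
+@f[
+  \left(-\nabla \cdot \left( \frac{1}{(1+|\nabla u^{n}|^{2})^{\frac{1}{2}}}\nabla
+  \delta u^{n} \right), \varphi \right)
+  =
+  \left( \nabla \varphi , \frac{1}{(1+|\nabla u^{n}|^{2})^{\frac{1}{2}}}\nabla
+  \delta u^{n} \right),
+@f]
+
+where the boundary term vanishes because the test function $\varphi$ is chosen
+to be zero on $\partial \Omega$.
+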
+The solution $\delta u^{n}$ is a function in the infinite-dimensional space
+$H^{1}(\Omega)$. Reducing this space to a finite-dimensional space with basis
+$\left\{ \varphi_{0},\dots , \varphi_{N-1}\right\}$, we can write the solution as:
+
 @f[
-  I'(u,\varphi) 
-  = 
-  \int_0^1 6 (x-u^3) (u')^5 \{ (x-u^3)\varphi' - u^2 u' \varphi\}\ dx,
+  \delta u^{n}=\sum_{j=0}^{N-1} U^{n}_{j} \varphi_{j}
 @f]
-for all test functions $\varphi$ from the same space as that from which we
-take $u$, but with zero boundary conditions. If this space allows us to
-integrate by parts, then we could associate this with a two point boundary
-value problem
-@f{eqnarray*}
-  -(x-u^3) u^2(u')^6
-  - \frac{d}{dx} \left\{(x-u^3)^2 (u')^5\right\} = 0,
-  \qquad\qquad u(0)=0,
-  \quad u(1)=1.
-@f}
-Note that this equation degenerates wherever $u^3=x$, which is at least the
-case at $x\in\{0,1\}$ due to the prescribed boundary values for $u$, but
-possibly at other places as well.  However, for finite elements, we will want
-to have the equation in weak form anyway.  Since the equation is still
-nonlinear, one may be tempted to compute iterates
-$u_{k+1}=u_k+\alpha_k\delta u_k$ using a Newton method for updates $\delta
-u_k$, like in
+
+Using the basis functions as test functions and defining $a_{n}:=\frac{1}
+{\sqrt{1+|\nabla u^{n}|^{2}}}$, we can rewrite the weak formulation as:
+
+@f[
+  \sum_{j=0}^{N-1}\left[ \left( \nabla \varphi_{i} , a_{n} \nabla \varphi_{j} \right) -
+  \left(\nabla u^{n}\cdot \nabla \varphi_{i} , a_{n}^{3} \nabla u^{n} \cdot \nabla
+  \varphi_{j} \right) \right] \cdot U^{n}_{j} = -\left( \nabla \varphi_{i} , a_{n}
+  \nabla u^{n}\right) \qquad \forall i=0,\dots ,N-1
+@f]
+
+where the solution $\delta u^{n}$ is given by the coefficients $U^{n}_{j}$.
+This system of linear equations can be rewritten as:
+
 @f[
-  I''(u_k,\delta u_k,\varphi) 
-  = 
-  -I'(u_k, \varphi).
+  A^{n}U^{n}=b^{n}
 @f]
-However, since $I''(u_k,\cdot,\cdot)$ may be an indefinite operator (and, as
-numerical experiments indicate, is in fact during typical computations), we
-don't want to use this. Instead, we use a gradient method, for which we
-compute updates according to the following scheme:
-@f{eqnarray*}
-  \left<\delta u_k,\varphi\right>
-  = 
-  -I'(u_k, \varphi).
-@f}
-For the scalar product on the left hand side, there are multiple valid ways;
-we choose the mesh dependent definition $\left<u,v\right> = \int_\Omega (uv +
-h(x)^2 \nabla u\cdot \nabla v)\; dx$, where the weight $h(x)^2$, i.e. using
-the local mesh width, is chosen so that the definition is dimensionally
-consistent. It also yields a matrix on the left hand side that is simple to
-invert, as it is the sum of the well-conditioned mass matrix, and a Laplace
-matrix times a factor that counters the growth of condition number of the
-Laplace matrix.
-
-The step length $\alpha_k$ is then computed using a one-dimensional line search
-finding
-@f{eqnarray*}
-  \alpha_k = \arg\min_\alpha I(u_k+\alpha\delta u_k),
-@f}
-or at least an approximation to this using a one-dimensional Newton method
-which itself has a line search. The details of this can be found in the code.
-We iterate the updates and line searches until the change in energy $I(u_k)$
-becomes too small to warrant any further iterations.
-
-The basic idea that you should get in all this is that we formulate the
-optimization method in a function space, and will only discretize each step
-separately. A number of subsequent steps will be done on the same mesh, before
-we refine it and go on to do the same on the next finer mesh.
-
-As for mesh refinement, it is instructional to recall how residual based error
-estimates like the one used in the Kelly et al.~error estimator are usually
-derived (the Kelly estimator is the one that we have used in most of the
-previous example programs). In a similar way, by looking at the residual of
-the strong form of the nonlinear equation we attempt here to solve, we may be
-tempted to consider the following expression for refinement of cell $K$:
-@f{eqnarray*}
-  \eta_K^2 &=&
-  h^2 \left\|
-    (x-u_h^3) (u_h')^4 \left\{ u_h^2 (u_h')^2 + 5(x-u_h^3)u_h'' + 2u_h'(1-3u_h^2u_h') \right\}
-  \right\|^2_K
-  \\
-  && +
-  h \left| (x-u_h^3)^2 [(u_h')^5] \right|^2_{\partial K},
-@f}
-where $[\cdot]$ is the jump of a quantity across an intercell boundary, and 
-$|\cdot|_{\partial K}$ is the sum of the quantity evaluated at the two end
-points of a cell. Note that in the evaluation of the jump, we have made use of
-the fact that $x-u_h^3$ is a continuous quantity, and can therefore be taken
-out of the jump operator.
-
-All these details actually matter -- while writing the program I have played
-around with many settings and different versions of the code, and the result
-is that if you don't have a good line search, good stopping criteria, the
-right metric (scalar product) for the gradient method, good initial values,
-and a good refinement criterion, then the nonlinear solver gets stuck quite
-readily for this highly nonlinear problem. Initially, I was hardly able to
-find solutions for which the energy dropped below 0.005, while the energy
-after the final iteration of the program as it is is usually around 0.0003,
-and occasionally down to less than 3e-5.
-
-However, this is not enough. In the program, we start the solver on the coarse
-mesh many times, with randomly perturbed starting values, and while it
-converges it yields a different solution, with a different energy every
-time. One can therefore not say that the solver converges to a certain energy,
-and we can't answer the question what the smallest value of $I(u)$ might be in
-$W^{1,\infty}$. This is unsatisfactory, but may be expected for such a
-contrived and pathological problem. Consider it an example in programming with
-deal.II then, and not an example in solving this particular problem.
-
-
-<h3>Implementation</h3>
-
-The program implements all the steps mentioned above, and we will discuss them
-in the commented code below. In general, however, note that formulating the
-Newton method in function spaces, and only discretizing afterwards has
-consequences: we have to linearize around $u_k$ when we want to compute
-$\delta u_k$, and we have to sum up these two functions afterwards.  However,
-they may be living on different grids, if we have refined the grid before this
-step, so we will have to present a way to actually get a function from one
-grid to another. The <code>SolutionTransfer</code> class will help us here. On the
-other hand, discretizing every nonlinear step separately has the advantage
-that we can do the initial steps, when we are still far away from the
-solution, on a coarse mesh, and only go on to more expensive computations when
-we home in on an solution. We will use a
-very simplistic strategy for when we refine the mesh (every fifth nonlinear
-step), though. Realistic programs solving nonlinear problems will have to be more
-clever in this respect, but it suffices for the purposes of this program.
-
-We will show some of the things that are really simple in 1d (but sometimes
-different from what we are used to in 2d or 3d). Apart from this, the program
-does not contain much new stuff, but if it explains a few of the techniques
-that are available for nonlinear problems and in particular 1d problems, then
-this is not so bad, after all.
-
-<em>Note:</em> As shown below, the program starts the nonlinear solver from 10 different
-initial values, and outputs the results. This is not actually too many, but we
-did so to keep run-time short (around 1:30 minutes on my laptop). If you want to
-increase the number of realizations, you may want to switch to optimized mode
-(by setting the ``debug-mode'' flag in the Makefile to ``off''), and increase
-the number of realizations to a larger value. On the same machine as above, I
-can compute 100 realizations in optimized mode in about 2 minutes. For
-this particular program, the difference between debug and optimized mode is
-thus about a factor of 7-8, which can be explained by the fact that we ask the
-compiler to do optimizations on the code only in the latter mode, but in most
-part due to the fact that in optimized mode all the ``Assert'' checks are
-thrown out that make sure that function arguments are correct, and that check
-the internal consistency of the library. The library contains several
-thousands of these checks, and they significantly slow down debug
-computations, but we feel that the benefit of finding programming errors
-earlier and including where the problem exactly appeared to be of significantly
-greater value than faster run-time. After all, all production runs of programs
-should be done in optimized mode anyway.
-
-A slowdown of a factor of 7-8 is unusual, however. For 2d and 3d applications,
-a typical value is around 4. 
+
+where the entries of the matrix $A^{n}$ are given by:
+
+@f[
+  A^{n}_{ij}:= \left( \nabla \varphi_{i} , a_{n} \nabla \varphi_{j} \right) -
+  \left(\nabla u^{n}\cdot \nabla \varphi_{i} , a_{n}^{3} \nabla u^{n} \cdot \nabla
+  \varphi_{j} \right)
+@f]
+
+and the right hand side $b^{n}$ is given by (in the program, the damping
+parameter $\lambda$ is simply multiplied into this right hand side):
+
+@f[
+  b^{n}_{i}:= - \left( \nabla \varphi_{i} , a_{n} \nabla u^{n}\right)
+@f]
+
+The matrix $A^{n}$ is symmetric but, in general, indefinite, so we have to take
+a closer look at which solver to choose for this linear system. The CG method
+requires positive definiteness of the matrix, which is not given here, so it
+cannot be used. Exploiting the symmetry of the matrix, we can instead choose the
+minimal residual (MINRES) method as a solver, which requires symmetry but no
+definiteness.
+
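+A solve step based on this choice could look like the following sketch using
+deal.II's SolverMinRes class (the tolerance, the maximum number of iterations,
+and the trivial preconditioner are only placeholders here; the concrete choices
+in the program may differ):
+
+@code
+  SolverControl solver_control (1000, 1e-10);
+  SolverMinRes<> solver (solver_control);
+
+  solver.solve (system_matrix, newton_update, system_rhs,
+                PreconditionIdentity());
+
+  hanging_node_constraints.distribute (newton_update);
+@endcode
+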
+<h3>Summary</h3>
+
+Starting with the function $u^{0}\equiv 0$, the first Newton update is computed by
+solving the system $A^{0}U^{0}=b^{0}$ with the boundary condition $\delta u^{0}=g$ on
+$\partial \Omega$. The new approximation of the solution is then given by
+$u^{1}=u^{0}+\delta u^{0}$. All following updates are obtained as solutions of the
+linear systems $A^{n}U^{n}=b^{n}$ with boundary condition $\delta u^{n}=0$ on
+$\partial \Omega$, and the new approximations are given by $u^{n+1}=u^{n}+\delta u^{n}$.
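+
+Schematically, the overall algorithm therefore looks like the following sketch
+(the actual <code>%run()</code> function additionally refines the mesh every few
+Newton steps and decides based on the residual when to stop;
+<code>n_newton_steps</code> is only a placeholder here):
+
+@code
+  setup_system ();                  // includes setting u^0 = 0 the first time
+  for (unsigned int step=0; step<n_newton_steps; ++step)
+    {
+      assemble_system ();           // builds A^n and b^n
+      solve ();                     // computes the Newton update delta u^n
+      present_solution += newton_update;
+      first_step = false;
+    }
+@endcode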
+
index baf0ffa1a69d9337a6920f0f147f78511fac7b62..df6c8220697f2f076687dfdc1d2ce41b3555fbe1 100644 (file)
 <h1>Results</h1>
 
 
-If run, the program generates output like this:
-
-
+The output of the program looks as follows:
 @code
-Realization 0:
-   Energy: 0.00377302
-   Energy: 0.00106138
-   Energy: 0.000514363
-   Energy: 0.000382105
-   Energy: 0.000339017
-   Energy: 0.000327948
-   Energy: 0.000320299
-   Energy: 0.000318016
-   Energy: 0.000316735
-   Energy: 0.000316536
-   Energy: 0.000316463
-   Energy: 0.000316285
-   Energy: 0.000316227
-   Energy: 0.000316221
-   Energy: 0.00031622
-
-Realization 1:
-   Energy: 0.00279316
-   Energy: 0.000896516
-   Energy: 0.000504609
-   Energy: 0.000392703
-   Energy: 0.000317725
-   Energy: 0.000291881
-   Energy: 0.000288243
-   Energy: 0.000283541
-   Energy: 0.000282406
-   Energy: 0.000281842
-   Energy: 0.000281752
-   Energy: 0.000281743
-   Energy: 0.000281743
-
+residual:17.1036
+residual:0.0493566
+residual:0.0437197
+residual:0.0388877
+residual:0.0346918
+mesh-refinement:1
+residual:0.0479423
+residual:0.0427299
+residual:0.0381861
+residual:0.03418
+residual:0.0306245
+mesh-refinement:2
 ....
+residual:0.00354605
+residual:0.00318818
+residual:0.00286721
+residual:0.00257911
+residual:0.00232034
+mesh-refinement:7
+residual:0.00211166
+residual:0.00189985
+residual:0.00170948
+residual:0.00153832
+residual:0.00138438
+mesh-refinement:8
+residual:0.00143114
+residual:0.00128729
+residual:0.00115805
+residual:0.00104189
+residual:0.000937454
+mesh-refinement:9
 
-Realization 9:
-   Energy: 0.0103729
-   Energy: 0.0082121
-   Energy: 0.00733742
-   Energy: 0.00728154
-   Energy: 0.00725198
-   Energy: 0.00724302
-   Energy: 0.00724019
-   Energy: 0.00723837
-   Energy: 0.00723783
-   Energy: 0.00723772
-   Energy: 0.00690564
-   Energy: 0.00690562
 @endcode
 
 
-The lowest energy yet seen is in this run (you only get this by increasing the
-number of runs):
-
-@code
-Realization 18:
-   Energy: 0.00200645
-   Energy: 0.000638519
-   Energy: 0.00022749
-   Energy: 9.18962e-05
-   Energy: 5.42442e-05
-   Energy: 3.94415e-05
-   Energy: 3.42307e-05
-   Energy: 3.30727e-05
-   Energy: 3.19998e-05
-   Energy: 3.18104e-05
-   Energy: 2.97091e-05
-   Energy: 3.5011e-05
-@endcode
 
 
-Apparently something went wrong in the last step (the energy increased, which
-it shouldn't - but then this is a strongly nonlinear problem), which is also
-why the program aborted after this iteration. Apart from this, the iterations
-shown above demonstrate that our program indeed is able to reduce the energy
-in the solution in each iteration, as should be.
 
 
+The final solution, as written by the program at the end of the
+<code>%run()</code> function, looks as follows:
 
-Since the program did not really deliver the goal we had originally intended
-for (the computation of the minimal energy of finite element spaces), the
-graphical output is also not very exciting. Here are plots of five of the
-first 10 solutions:
 
 
-@image html step-15.solution-1.png
+@image html step-6.solution.png
 
 
 
-And here are the first 100 solutions, where each node in each solution is
-represented as a dot. As can be seen, all the solutions cluster somewhat
-around the $x^{1/3}$ curve, here shown in turquoise:
+In each cycle, the program furthermore writes the grid in EPS
+format. These are shown in the following:
 
 
-@image html step-15.solution-2.png
 
+<TABLE WIDTH="100%">
+<tr>
+<td>
+  @image html step-6.grid-0.png
+</td>
+<td>
+  @image html step-6.grid-1.png
+</td>
+</tr>
 
+<tr>
+<td>
+  @image html step-6.grid-2.png
+</td>
+<td>
+  @image html step-6.grid-3.png
+</td>
+</tr>
 
-Note that this behavior is mostly independent of the choice of starting data
-(which we have chosen to be close to this curve), which a posteriori justfies
-our choice. Some of the curves actually show a linear behavior of the solution
-close to the origin; this is particularly obvious when the curves are viewed
-in a log-log plot (not shown here, but rather left as an exercise to the
-reader).
+<tr>
+<td>
+  @image html step-6.grid-4.png
+</td>
+<td>
+  @image html step-6.grid-5.png
+</td>
+</tr>
 
+<tr>
+<td>
+  @image html step-6.grid-6.png
+</td>
+<td>
+  @image html step-6.grid-7.png
+</td>
+</tr>
+</table>
 
 
-Given the almost complete absence of interesting results of this program, we
-hope that at least its source code provided some information with respect to
-programming with deal.II
 
+It is clearly visible that the region where the solution has a kink,
+i.e. the circle at radial distance 0.5 from the center, is
+refined most. Furthermore, the central region where the solution is
+very smooth and almost flat is almost not refined at all, but this
+results from the fact that we did not take into account that the
+coefficient is large there. The region outside is refined rather
+randomly, since the second derivative is constant there and refinement
+is therefore mostly based on the size of the cells and their deviation
+from the optimal square.
index 6e4c008397b2b135bef97f51d544083afae80bb9..0b4c4bd532e55b432be7ee652071480bbbdeeaf8 100644 (file)
-/* Author: Wolfgang Bangerth, University of Heidelberg, 2002 */
+/* $Id$ */
+/* Author: Wolfgang Bangerth, University of Heidelberg, 2000 */
 
 /*    $Id$       */
 /*                                                                */
-/*    Copyright (C) 2002, 2003, 2004, 2006, 2007, 2008, 2009, 2010, 2011, 2012 by the deal.II authors                   */
+/*    Copyright (C) 2000, 2001, 2002, 2003, 2004, 2006, 2007, 2008, 2010, 2011 by the deal.II authors */
 /*                                                                */
 /*    This file is subject to QPL and may not be  distributed     */
 /*    without copyright and license information. Please refer     */
 /*    to the file deal.II/doc/license.html for the  text  and     */
 /*    further information on this license.                        */
 
-                                 // As usual, most of the headers here have
-                                 // already been used and discussed in
-                                 // previous examples:
+                                 // @sect3{Include files}
+
+                                // The first few files have already
+                                // been covered in previous examples
+                                // and will thus not be further
+                                // commented on.
 #include <deal.II/base/quadrature_lib.h>
 #include <deal.II/base/function.h>
 #include <deal.II/base/logstream.h>
 #include <deal.II/lac/vector.h>
 #include <deal.II/lac/full_matrix.h>
 #include <deal.II/lac/sparse_matrix.h>
-#include <deal.II/lac/solver_cg.h>
+#include <deal.II/lac/compressed_sparsity_pattern.h>
 #include <deal.II/lac/precondition.h>
 #include <deal.II/lac/constraint_matrix.h>
 #include <deal.II/grid/tria.h>
+#include <deal.II/dofs/dof_handler.h>
 #include <deal.II/grid/grid_generator.h>
-#include <deal.II/grid/grid_refinement.h>
 #include <deal.II/grid/tria_accessor.h>
 #include <deal.II/grid/tria_iterator.h>
 #include <deal.II/grid/tria_boundary_lib.h>
-#include <deal.II/dofs/dof_handler.h>
+#include <deal.II/grid/grid_refinement.h>
 #include <deal.II/dofs/dof_accessor.h>
 #include <deal.II/dofs/dof_tools.h>
-#include <deal.II/fe/fe_q.h>
 #include <deal.II/fe/fe_values.h>
+#include <deal.II/fe/fe_q.h>
 #include <deal.II/numerics/vectors.h>
 #include <deal.II/numerics/matrices.h>
 #include <deal.II/numerics/data_out.h>
+#include <deal.II/numerics/error_estimator.h>
 
-                                 // This is probably the only new one -- it
-                                 // declares the class that we use to transfer
-                                 // a solution on one grid to the one we
-                                 // obtain after refining/coarsening it:
-#include <deal.II/numerics/solution_transfer.h>
 
-                                 // And here comes the usual assortment of C++
-                                 // header files:
 #include <fstream>
 #include <iostream>
-#include <sstream>
 
-                                 // The last step is as in all
-                                 // previous programs:
-namespace Step15
+                               // We will use adaptive mesh refinement between Newton
+                               // iterations. To do so, we need to be able to work
+                               // with a solution on the new mesh, although it was
+                               // computed on the old one. The SolutionTransfer
+                               // class transfers the solution to the new mesh.
+
+#include <deal.II/numerics/solution_transfer.h>
+
+                               // As described above, we cannot use the CG method as a
+                               // solver in this tutorial. Instead, we use the minimal
+                               // residual method, declared in the following header file.
+
+#include <deal.II/lac/solver_minres.h>
+
+                               // As in previous programs:
+
+using namespace dealii;
+
+
+                                                               // @sect3{The <code>Step15</code> class template}
+
+                               // The class template is basically the same as in step-6.
+                               // Four additions are made: there are two solution vectors,
+                               // one for the Newton update and one for the solution of
+                               // the original PDE. In addition, we need a double for the
+                               // residual of the Newton method, an integer that counts the
+                               // mesh refinements, and a bool for the boundary condition
+                               // in the first Newton step.
+
+template <int dim>
+class Step15
 {
-  using namespace dealii;
-
-                                   // The first thing we have here is a helper
-                                   // function that computes an even power $|v|^n$
-                                   // of a vector $v$, by evaluating
-                                   // $(v\cdot v)^{n/2}$. We need this in the
-                                   // computations below where we do not want to
-                                   // dwell on the fact that the gradient of the
-                                   // solution is actually a scalar in the 1d
-                                   // situation we consider in this program (in
-                                   // 1d, the gradient is a vector with a single
-                                   // element, which is easily extracted). Small
-                                   // tricks like this make it significantly
-                                   // simpler to later extend a program so that
-                                   // it also runs in higher space dimensions.
-                                   //
-                                   // While the implementation of the function
-                                   // is obvious, note the assertion at the
-                                   // beginning of the function body, which
-                                   // makes sure that the exponent is indeed an
-                                   // even number (here, we use that <code>n/2</code> is
-                                   // computed in integer arithmetic, i.e. any
-                                   // remainder of the division is
-                                   // lost). <code>ExcMessage</code> is a pre-defined
-                                   // exception class that takes a string
-                                   // argument explaining what goes wrong. It is
-                                   // a simpler way to declare exceptions than
-                                   // the ones shown in step-9 and step-13/14
-                                   // where we explicitly declared exception
-                                   // classes. However, by using a generic
-                                   // exception class, we lose the ability to
-                                   // attach additional information at run-time
-                                   // to the exception message, such as the
-                                   // value of the variable <code>n</code>. By following
-                                   // the way explained in above example
-                                   // programs, adding this feature is simple,
-                                   // though.
-  template <int dim>
-  inline
-  double gradient_power (const Tensor<1,dim> &v,
-                         const unsigned int n)
-  {
-    Assert ((n/2)*2 == n, ExcMessage ("Value of 'n' must be even"));
-    double p = 1;
-    for (unsigned int k=0; k<n; k+=2)
-      p *= (v*v);
-    return p;
-  }
+  public:
+    Step15 ();
+    ~Step15 ();
 
+    void run ();
 
+  private:
+    void setup_system ();
+    void assemble_system ();
+    void solve ();
+    void refine_grid ();
 
-                                   // Secondly, we declare a class that defines
-                                   // our initial values for the nonlinear
-                                   // iteration. It is a function object,
-                                   // i.e. it has a member operator that returns
-                                   // for a given point the value of the
-                                   // function. The value we return is a random
-                                   // perturbation of the $x^{1/3}$ function
-                                   // which we know is the optimal solution in a
-                                   // larger function space. To make things a
-                                   // little simpler on the optimizer, we return
-                                   // zero if the proposed random value is
-                                   // negative.
-                                   //
-                                   // Note that this class works strictly only
-                                   // for 1d. If the program is to be extended
-                                   // to higher space dimensions, so has to be
-                                   // this class.
-  class InitializationValues : public Function<1>
-  {
-    public:
-      InitializationValues () : Function<1>() {}
-
-      virtual double value (const Point<1>     &p,
-                            const unsigned int  component = 0) const;
-  };
-
-
-
-                                   // So here comes the function that implements
-                                   // the function object. The <code>base</code> value is
-                                   // $x^{1/3}$, while <code>random</code> is a random
-                                   // number between -1 and 1 (note that
-                                   // <code>rand()</code> returns a random integer value
-                                   // between zero and <code>RAND_MAX</code>; to convert
-                                   // it to a floating point value between 0 and
-                                   // 2, we have to divide by <code>RAND_MAX</code> and
-                                   // multiply by two -- note that the first
-                                   // multiplication has to happen in floating
-                                   // point arithmetic, so that the division is
-                                   // done in non-truncating floating point mode
-                                   // as well; the final step is then to shift
-                                   // the interval [0,2] to [-1,1]).
-                                   //
-                                   // In a second step, we add the base value
-                                   // and a random value in [-0.1,0.1] together
-                                   // and return it, unless it is less than
-                                   // zero, in which case we take zero.
-  double InitializationValues::value (const Point<1> &p,
-                                      const unsigned int) const
-  {
-    const double base = std::pow(p(0), 1./3.);
-    const double random = 2.*rand()/RAND_MAX-1;
-    return std::max (base+.1*random, 0.);
-  }
+    Triangulation<dim>   triangulation;
 
+    DoFHandler<dim>      dof_handler;
+    FE_Q<dim>            fe;
 
+    ConstraintMatrix     hanging_node_constraints;
 
-                                   // Next is the declaration of the main
-                                   // class. As in most of the previous example
-                                   // programs, the public interface of the
-                                   // class consists only of a constructor and a
-                                   // <code>run</code> function that does the actual
-                                   // work. The constructor takes an additional
-                                   // argument that indicates the number of the
-                                   // run we are presently performing. This
-                                   // value is only used at the very end when we
-                                   // generate graphical output with a filename
-                                   // that matches this number.
-                                   //
-                                   // The private section of the class has the
-                                   // usual assortment of functions setting up
-                                   // the computations, doing one nonlinear
-                                   // step, refineming the mesh, doing a line
-                                   // search for step length computations,
-                                   // etc. The <code>energy</code> function computes the
-                                   // value of the optimization functional on an
-                                   // arbitrary finite element function with
-                                   // nodal values given on the <code>DoFHandler</code>
-                                   // given as an argument. Since it does not
-                                   // depend on the state of this object, we
-                                   // declare this function as <code>static</code>.
-                                   //
-                                   // The member variables of this class are
-                                   // what we have seen before, and the
-                                   // variables that characterize the linear
-                                   // system to be solved in the next nonlinear
-                                   // step, as well as the present approximation
-                                   // of the solution.
-  template <int dim>
-  class MinimizationProblem
-  {
-    public:
-      MinimizationProblem  (const unsigned int run_number);
-      void run ();
-
-    private:
-      void initialize_solution ();
-      void setup_system_on_mesh ();
-      void assemble_step ();
-      double line_search (const Vector<double> & update) const;
-      void do_step ();
-      void output_results () const;
-      void refine_grid ();
-
-      static double energy (const DoFHandler<dim> &dof_handler,
-                            const Vector<double>  &function);
-
-
-      const unsigned int run_number;
-
-      Triangulation<dim>   triangulation;
-
-      FE_Q<dim>            fe;
-      DoFHandler<dim>      dof_handler;
-
-      ConstraintMatrix     hanging_node_constraints;
-
-      SparsityPattern      sparsity_pattern;
-      SparseMatrix<double> matrix;
-
-      Vector<double>       present_solution;
-      Vector<double>       residual;
-  };
-
-
-
-                                   // The constructor of this class is actually
-                                   // somewhat boring:
-  template <int dim>
-  MinimizationProblem<dim>::MinimizationProblem (const unsigned int run_number)
-                  :
-                  run_number (run_number),
-                  fe (1),
-                  dof_handler (triangulation)
-  {}
-
-
-                                   // Then, here is the function that
-                                   // initializes the solution before the first
-                                   // non-linear iteration, by setting the
-                                   // initial values to the random function
-                                   // described above and making sure that the
-                                   // boundary values are set correctly. We will
-                                   // then only seek updates to this function
-                                   // with zero boundary values, so that the
-                                   // boundary values are always correct.
-                                   //
-                                   // Note how we have specialized this function
-                                   // to 1d only. We do this since the second
-                                   // part of the function, where we deal with
-                                   // boundary values, is only correct if we are
-                                   // in 1d. Not generating a general template
-                                   // for this function prevents the compiler
-                                   // from erroneously compiling this function
-                                   // for other space dimensions, then.
-  template <>
-  void MinimizationProblem<1>::initialize_solution ()
+    SparsityPattern      sparsity_pattern;
+    SparseMatrix<double> system_matrix;
+
+    Vector<double>       present_solution;
+    Vector<double>       newton_update;
+    Vector<double>       system_rhs;
+
+
+
+    double                              res;
+    unsigned int         refinement;
+
+                       // As described in the introduction, the first Newton iteration
+                       // is special because of the boundary condition. To implement
+                       // it correctly, there is a bool which is true in the first
+                       // step and false ever after.
+    bool                                first_step;
+};
+
+                                                               // @sect3{Boundary condition}
+
+                               // The boundary condition is implemented just like in step-4.
+                               // It was chosen as $g(x,y)=\sin(2 \pi (x+y))$ in this example.
+
+template <int dim>
+class BoundaryValues : public Function<dim>
+{
+  public:
+    BoundaryValues () : Function<dim>() {}
+
+    virtual double value (const Point<dim>   &p,
+                         const unsigned int  component = 0) const;
+};
+
+template <int dim>
+double BoundaryValues<dim>::value (const Point<dim> &p,
+                                  const unsigned int /*component*/) const
+{
+  double return_value=sin(2*M_PI*(p[0]+p[1]));
+  return return_value;
+}
+
+                                                               // @sect3{The <code>Step15</code> class implementation}
+
+                                                               // @sect4{Step15::Step15}
+
+                               // The constructor and destructor of the class are the same
+                               // as in the first few tutorials.
+
+template <int dim>
+Step15<dim>::Step15 ()
+               :
+               dof_handler (triangulation),
+                fe (2)
+{}
+
+
+
+                               //
+template <int dim>
+Step15<dim>::~Step15 ()
+{
+  dof_handler.clear ();
+}
+
+                                                               // @sect4{Step15::setup_system}
+
+                               // As always in the setup_system function, we set up the
+                               // variables of the finite element method. There are some
+                               // differences to step-6, because we do not solve one PDE
+                               // once and for all, but one in every Newton step. In
+                               // addition, the starting function has to be set up in the
+                               // first step.
+
+template <int dim>
+void Step15<dim>::setup_system ()
+{
+
+                               // This function is called every time we refine the mesh,
+                               // to resize the system matrix, the Newton update and right
+                               // hand side vectors, and to set the correct values of the
+                               // hanging node constraints so that we get a continuous
+                               // solution.
+                               // Only the first time around does the starting solution have
+                               // to be initialized here; later on, the solution vector is
+                               // resized in the <code>refine_grid</code> function when it
+                               // is transferred to the new mesh.
+
+  if(first_step)
   {
-                                     // The first part is to assign the correct
-                                     // size to the vector, and use library
-                                     // function that takes a function object,
-                                     // and interpolates the given vector living
-                                     // on a <code>DoFHandler</code> to this function
-                                     // object:
-    present_solution.reinit (dof_handler.n_dofs());
-    VectorTools::interpolate (dof_handler,
-                              InitializationValues(),
-                              present_solution);
-
-                                     // Then we still have to make sure that we
-                                     // get the boundary values right. This
-                                     // could have been done inside the
-                                     // <code>InitializationValues</code> class, but it
-                                     // is instructive to see how it can also be
-                                     // done, in particular since it is so
-                                     // simple in 1d. First, start out with an
-                                     // arbitrary cell on level 0, i.e. the
-                                     // coarse mesh:
-    DoFHandler<1>::cell_iterator cell;
-    cell = dof_handler.begin(0);
-                                     // Then move as far to the left as
-                                     // possible. Note that while in two or more
-                                     // space dimensions, there is is no
-                                     // guarantee as to the coordinate
-                                     // directions of a given face number of a
-                                     // cell, in 1d the zeroth face (and
-                                     // neighbor) is always the one to the left,
-                                     // and the first one the one to the
-                                     // right. Similarly, the zeroth child is
-                                     // the left one, the first child is the
-                                     // right one.
-    while (cell->at_boundary(0) == false)
-      cell = cell->neighbor(0);
-                                     // Now that we are at the leftmost coarse
-                                     // grid cell, go recursively through its
-                                     // left children until we find a terminal
-                                     // one:
-    while (cell->has_children() == true)
-      cell = cell->child(0);
-                                     // Then set the value of the solution
-                                     // corresponding to the zeroth degree of
-                                     // freedom and the zeroth vertex of the
-                                     // cell to zero. Note that the zeroth
-                                     // vertex is the left one, and that zero is
-                                     // the only valid second argument to the
-                                     // call to <code>vertex_dof_index</code>, since we
-                                     // have a scalar finite element; thus,
-                                     // there is only a single component.
-    present_solution(cell->vertex_dof_index(0,0)) = 0;
-
-                                     // Now do all the same with the right
-                                     // boundary value, and set it to one:
-    cell = dof_handler.begin(0);
-    while (cell->at_boundary(1) == false)
-      cell = cell->neighbor(1);
-    while (cell->has_children())
-      cell = cell->child(1);
-    present_solution(cell->vertex_dof_index(1,0)) = 1;
+       dof_handler.distribute_dofs (fe);
+       present_solution.reinit (dof_handler.n_dofs());
+       for(unsigned int i=0; i<dof_handler.n_dofs();++i)
+       {
+         present_solution(i)=0;
+       }
+                               // The ConstraintMatrix holding the list of hanging node
+                               // constraints is set up here for the initial mesh; whenever
+                               // the mesh is refined later on, it is rebuilt in the
+                               // <code>refine_grid</code> function.
+
+       hanging_node_constraints.clear ();
+       DoFTools::make_hanging_node_constraints (dof_handler,
+                                  hanging_node_constraints);
+       hanging_node_constraints.close ();
   }
 
 
-                                   // The function that prepares the member
-                                   // variables of this class for assembling the
-                                   // linear system in each nonlinear step is
-                                   // also not very interesting. This has all
-                                   // been shown before in previous example
-                                   // programs. Note, however, that all this
-                                   // works in 1d just as in any other space
-                                   // dimension, and would not require any
-                                   // changes if we were to use the program in
-                                   // another space dimension.
-                                   //
-                                   // Note that this function is only called
-                                   // when the mesh has been changed (or before
-                                   // the first nonlinear step). It only
-                                   // initializes the variables to their right
-                                   // sizes, but since these sizes don't change
-                                   // as long as we don't change the mesh, we
-                                   // can use them for more than just one
-                                   // nonlinear iteration without reinitializing
-                                   // them.
-  template <int dim>
-  void MinimizationProblem<dim>::setup_system_on_mesh ()
-  {
-    hanging_node_constraints.clear ();
-    DoFTools::make_hanging_node_constraints (dof_handler,
-                                             hanging_node_constraints);
-    hanging_node_constraints.close ();
+                           // The remaining parts of the function are the same as in step 6.
 
-    sparsity_pattern.reinit (dof_handler.n_dofs(),
-                             dof_handler.n_dofs(),
-                             dof_handler.max_couplings_between_dofs());
-    DoFTools::make_sparsity_pattern (dof_handler, sparsity_pattern);
+  newton_update.reinit (dof_handler.n_dofs());
+  system_rhs.reinit (dof_handler.n_dofs());
 
-    hanging_node_constraints.condense (sparsity_pattern);
+  CompressedSparsityPattern c_sparsity(dof_handler.n_dofs());
+  DoFTools::make_sparsity_pattern (dof_handler, c_sparsity);
 
-    sparsity_pattern.compress();
-  }
+  hanging_node_constraints.condense (c_sparsity);
+
+  sparsity_pattern.copy_from(c_sparsity);
+  system_matrix.reinit (sparsity_pattern);
+}
 
+                                                               // @sect4{Step15::assemble_system}
+
+                               // This function does the same as in the previous tutorials.
+                               // The only additions are the correct treatment of the
+                               // boundary condition and the use of the gradients of the
+                               // old solution.
+
+template <int dim>
+void Step15<dim>::assemble_system ()
+{
+  const QGauss<dim>  quadrature_formula(3);
 
+  system_matrix = 0;
+  system_rhs = 0;
 
-                                   // Next is the function that assembles the
-                                   // linear system. The first part,
-                                   // initializing various local variables is
-                                   // what we have been doing previously
-                                   // already.
-  template <int dim>
-  void MinimizationProblem<dim>::assemble_step ()
+  FEValues<dim> fe_values (fe, quadrature_formula,
+                          update_values    |  update_gradients |
+                          update_quadrature_points  |  update_JxW_values);
+
+  const unsigned int   dofs_per_cell = fe.dofs_per_cell;
+  const unsigned int   n_q_points    = quadrature_formula.size();
+
+  FullMatrix<double>   cell_matrix (dofs_per_cell, dofs_per_cell);
+  Vector<double>       cell_rhs (dofs_per_cell);
+
+  std::vector<unsigned int> local_dof_indices (dofs_per_cell);
+
+  typename DoFHandler<dim>::active_cell_iterator
+    cell = dof_handler.begin_active(),
+    endc = dof_handler.end();
+  for (; cell!=endc; ++cell)
+    {
+      cell_matrix = 0;
+      cell_rhs = 0;
+
+      fe_values.reinit (cell);
+
+
+                 // To set up the linear system, we need the gradients of the old
+                 // solution at the quadrature points of the current cell. For this
+                 // purpose there is a function that writes these gradients into a
+                 // vector with one entry (itself a rank-1 tensor) per quadrature
+                 // point. Since this only has to be done once per cell, we do it
+                 // before the loop over quadrature points:
+      std::vector<Tensor<1, dim> > gradients(n_q_points);
+      fe_values.get_function_gradients(present_solution, gradients);
+
+      for (unsigned int q_point = 0; q_point < n_q_points; ++q_point) {
+
+                         // Having the gradients of the old solution at the quadrature
+                         // points, we can compute the coefficient $a_{n}$ at each of
+                         // them:
+
+                       const double coeff = 1/sqrt(1 + gradients[q_point] * gradients[q_point]);
+
+                         // The assembly of the system is then the same as always: the two
+                         // terms of the cell matrix correspond to the two scalar products
+                         // in the definition of $A^{n}_{ij}$ above. The only exception is
+                         // the damping parameter of Newton's method, which we set to 0.1
+                         // in this case and which multiplies the right hand side.
+
+                       for (unsigned int i = 0; i < dofs_per_cell; ++i) {
+                               for (unsigned int j = 0; j < dofs_per_cell; ++j) {
+                                       cell_matrix(i, j) += (fe_values.shape_grad(i, q_point)
+                                                       * coeff
+                                                       * (fe_values.shape_grad(j, q_point)
+                                                               - coeff * coeff
+                                                               * (fe_values.shape_grad(j, q_point)
+                                                               * gradients[q_point])
+                                                               * gradients[q_point])
+                                                       * fe_values.JxW(q_point));
+                               }
+
+                               cell_rhs(i) -=0.1 *
+                                               (fe_values.shape_grad(i, q_point) * coeff
+                                                               * gradients[q_point] * fe_values.JxW(q_point));
+                       }
+               }
+
+      cell->get_dof_indices (local_dof_indices);
+      for (unsigned int i=0; i<dofs_per_cell; ++i)
+       {
+         for (unsigned int j=0; j<dofs_per_cell; ++j)
+           system_matrix.add (local_dof_indices[i],
+                              local_dof_indices[j],
+                              cell_matrix(i,j));
+
+         system_rhs(local_dof_indices[i]) += cell_rhs(i);
+       }
+    }
+  hanging_node_constraints.condense (system_matrix);
+  hanging_node_constraints.condense (system_rhs);
+  std::map<unsigned int,double> boundary_values;
+
+                               // As described above, the boundary condition in the first
+                               // Newton step differs from the one in all later steps. This is
+                               // implemented with the help of the boolean
+                               // <code>first_step</code>, which is false from the second step
+                               // on. Since the iteration starts from the zero function, the
+                               // first update has to carry the full boundary data, i.e. we set
+                               // the boundary condition $\delta u^{0}=g$ on $\partial \Omega$
+                               // so that $u^{1}=u^{0}+\delta u^{0}$ already satisfies it:
+
+  if (first_step)
   {
-                                     // The first two lines of the function
-                                     // clear the matrix and right hand side
-                                     // values of their prior content, which
-                                     // could possibly still be there from the
-                                     // previous nonlinear step.
-    matrix.reinit (sparsity_pattern);
-    residual.reinit (dof_handler.n_dofs());
-
-                                     // Then we initialize a <code>FEValues</code> object
-                                     // with a 4-point Gauss quadrature
-                                     // formula. This object will be used to
-                                     // compute the values and gradients of the
-                                     // shape functions at the quadrature
-                                     // points, which we need to assemble the
-                                     // matrix and right hand side of the
-                                     // nonlinear step as outlined in the
-                                     // introduction to this example program. In
-                                     // order to compute values and gradients,
-                                     // we need to pass the <code>update_values</code>
-                                     // and <code>update_gradients</code> flags to the
-                                     // constructor, and the
-                                     // <code>update_JxW_values</code> flag for the
-                                     // Jacobian times the weight at a
-                                     // quadrature point. In addition, we need
-                                     // to have the coordinate values of each
-                                     // quadrature point in real space for the
-                                     // $x-u^3$ terms; to get these from the
-                                     // <code>FEValues</code> object, we need to pass it
-                                     // the <code>update_quadrature_points</code> flag.
-                                     //
-                                     // It is a simple calculation to figure out
-                                     // that for linear elements, the integrals
-                                     // in the right hand side semilinear form
-                                     // is a polynomial of sixth order. Thus,
-                                     // the appropriate quadrature formula is
-                                     // the one we have chosen here.
-    QGauss<dim>  quadrature_formula(4);
-    FEValues<dim> fe_values (fe, quadrature_formula,
-                             update_values   | update_gradients |
-                             update_quadrature_points | update_JxW_values);
-
-                                     // Next, here are the usual two convenience
-                                     // variables, followed by declarations for
-                                     // the local contributions to matrix and
-                                     // right hand side, as well as an array to
-                                     // hold the indices of the local degrees of
-                                     // freedom on each cell:
-    const unsigned int dofs_per_cell = fe.dofs_per_cell;
-    const unsigned int n_q_points    = quadrature_formula.size();
-
-    FullMatrix<double>   cell_matrix (dofs_per_cell, dofs_per_cell);
-    Vector<double>       cell_rhs (dofs_per_cell);
-
-    std::vector<unsigned int> local_dof_indices (dofs_per_cell);
-
-                                     // The next two variables are needed since
-                                     // the problem we consider is nonlinear,
-                                     // and thus the right hand side depends on
-                                     // the previous solution (in a Newton
-                                     // method, for example, the left hand side
-                                     // matrix would also depend on the previous
-                                     // solution, but as explained in the
-                                     // introduction, we only use a simple
-                                     // gradient-type method in which the matrix
-                                     // is a scaled Laplace-type matrix). In
-                                     // order to compute the values of the
-                                     // integrand for the right hand side, we
-                                     // therefore need to have the values and
-                                     // gradients of the previous solution at
-                                     // the quadrature points. We will get them
-                                     // from the <code>FEValues</code> object above, and
-                                     // will put them into the following two
-                                     // variables:
-    std::vector<double>         local_solution_values (n_q_points);
-    std::vector<Tensor<1,dim> > local_solution_grads (n_q_points);
-
-                                     // Now, here comes the main loop over all
-                                     // the cells of the mesh:
-    typename DoFHandler<dim>::active_cell_iterator
-      cell = dof_handler.begin_active(),
-      endc = dof_handler.end();
-    for (; cell!=endc; ++cell)
-      {
-                                         // First, clear the objects that hold
-                                         // the local matrix and right hand side
-                                         // contributions for this cell:
-        cell_matrix = 0;
-        cell_rhs = 0;
-
-                                         // Then initialize the values and
-                                         // gradients of the shape functions at
-                                         // the quadrature points of this cell:
-        fe_values.reinit (cell);
-
-                                         // And get the values and gradients of
-                                         // the previous solution at the
-                                         // quadrature points. To get them, we
-                                         // don't actually have to do much,
-                                         // except for giving the <code>FEValues</code>
-                                         // object the global node vector from
-                                         // which to compute this data, and a
-                                         // reference to the objects into which
-                                         // to put them. After the calls, the
-                                         // <code>local_solution_values</code> and
-                                         // <code>local_solution_values</code> variables
-                                         // will contain values and gradients
-                                         // for each of the quadrature points on
-                                         // this cell.
-        fe_values.get_function_values (present_solution,
-                                       local_solution_values);
-        fe_values.get_function_grads (present_solution,
-                                      local_solution_grads);
-
-                                         // Then loop over all quadrature
-                                         // points:
-        for (unsigned int q_point=0; q_point<n_q_points; ++q_point)
-          {
-                                             // Have convenience variables for
-                                             // the values and gradient of the
-                                             // solution at the present
-                                             // quadrature point, as well as the
-                                             // location in real space of this
-                                             // quadrature point, and of the
-                                             // expression $x-u^3$, since it
-                                             // appears so often:
-            const double u = local_solution_values[q_point],
-                         x = fe_values.quadrature_point(q_point)(0);
-            const double x_minus_u3 = (x-std::pow(u,3));
-            const Tensor<1,dim> u_prime = local_solution_grads[q_point];
-
-                                             // Then do the double loop over all
-                                             // shape functions to compute the
-                                             // local contribution to the
-                                             // matrix. The terms are simple
-                                             // equivalents of the formula
-                                             // stated in the introduction. Note
-                                             // how we extract the size of an
-                                             // element from the iterator to the
-                                             // present cell:
-            for (unsigned int i=0; i<dofs_per_cell; ++i)
-              for (unsigned int j=0; j<dofs_per_cell; ++j)
-                cell_matrix(i,j)
-                  += (fe_values.shape_grad(i,q_point) *
-                      fe_values.shape_grad(j,q_point) *
-                      cell->diameter() *
-                      cell->diameter()
-                      +
-                      fe_values.shape_value(i,q_point) *
-                      fe_values.shape_value(j,q_point)) *
-                  fe_values.JxW(q_point);
-
-                                             // And here comes the loop over all
-                                             // local degrees of freedom to form
-                                             // the right hand side. The formula
-                                             // looks a little convoluted, but
-                                             // is again a simple image of what
-                                             // was given in the introduction:
-            for (unsigned int i=0; i<dofs_per_cell; ++i)
-              cell_rhs(i) += -((6. * x_minus_u3 *
-                                gradient_power (u_prime, 4) *
-                                fe_values.shape_value(i,q_point)
-                                *
-                                (x_minus_u3 *
-                                 (u_prime *
-                                  fe_values.shape_grad(i,q_point))
-                                 -
-                                 (u_prime*u_prime) * u * u *
-                                 fe_values.shape_value(i,q_point))
-                               )
-                               *
-                               fe_values.JxW(q_point));
-          }
-
-                                         // After summing up all the
-                                         // contributions, we have to transfer
-                                         // them to the global objects. This is
-                                         // done in the same way as always
-                                         // before:
-        cell->get_dof_indices (local_dof_indices);
-        for (unsigned int i=0; i<dofs_per_cell; ++i)
-          {
-            for (unsigned int j=0; j<dofs_per_cell; ++j)
-              matrix.add (local_dof_indices[i],
-                          local_dof_indices[j],
-                          cell_matrix(i,j));
-
-            residual(local_dof_indices[i]) += cell_rhs(i);
-          }
-      }
-
-                                     // Now that we have all the local
-                                     // contributions summed up, we have to
-                                     // eliminate hanging node constraints and
-                                     // boundary values. Hanging nodes are
-                                     // simple:
-    hanging_node_constraints.condense (matrix);
-    hanging_node_constraints.condense (residual);
-
-                                     // %Boundary values are, too, but with a
-                                     // twist this time: in all previous example
-                                     // programs, we have used that by default
-                                     // (i.e. unless something else is set), all
-                                     // boundaries have indicator zero. To
-                                     // figure out what boundary indicator a
-                                     // face of a cell had, the library
-                                     // functions would query an iterator
-                                     // designating this face, which would in
-                                     // turn pluck out this value from some of
-                                     // the data structures in the
-                                     // library. Unfortunately, in 1d cells have
-                                     // no faces: these would only be points,
-                                     // and we don't associated anything in the
-                                     // library with points except for their
-                                     // coordinates. Thus there are no face
-                                     // iterators, and no way to figure out
-                                     // which boundary indicator it may have. On
-                                     // the other hand, in 1d, there can only be
-                                     // two boundaries anyway for a connected
-                                     // domain: the left end point and the right
-                                     // end point. And in contrast to the case
-                                     // in higher dimensions, where the
-                                     // (changeable) default is zero for all
-                                     // boundary parts, in 1d the convention is
-                                     // that the left boundary point has
-                                     // indicator zero, while the right boundary
-                                     // point has indicator one. Since there are
-                                     // no face iterators, it is also not
-                                     // possible to change this, but you will
-                                     // hardly ever have to. So in order to
-                                     // assign zero boundary values on both
-                                     // sides, in 1d we not only need to
-                                     // evaluate boundary values for indicator
-                                     // zero, but also for indicator one. If
-                                     // this program is ever going to be run in
-                                     // higher dimensions, then we should only
-                                     // evaluate for indicator zero, which is
-                                     // why we have placed the <code>if</code> statement
-                                     // in front of the second function call.
-                                     //
-                                     // Note that we need zero boundary
-                                     // conditions on both ends, since the space
-                                     // in which search for the solution has
-                                     // fixed boundary conditions zero and one,
-                                     // and we have set the initial values to
-                                     // already satisfy them. Thus, the updates
-                                     // computed in each nonlinear step must
-                                     // have zero boundary values.
-    std::map<unsigned int,double> boundary_values;
+      VectorTools::interpolate_boundary_values (dof_handler,
+                                                0,
+                                                BoundaryValues<dim>(),
+                                                boundary_values);
+  }
+                               // In all later steps, the Newton update has to satisfy
+                               // homogeneous boundary conditions, so that adding it to the
+                               // solution does not destroy the already correct boundary
+                               // values.
+
+  else {
     VectorTools::interpolate_boundary_values (dof_handler,
-                                              0,
-                                              ZeroFunction<dim>(),
-                                              boundary_values);
-    if (dim == 1)
-      VectorTools::interpolate_boundary_values (dof_handler,
-                                                1,
-                                                ZeroFunction<dim>(),
-                                                boundary_values);
-    Vector<double> dummy (residual.size());
-    MatrixTools::apply_boundary_values (boundary_values,
-                                        matrix,
-                                        dummy,
-                                        residual);
+                                             0,
+                                             ZeroFunction<dim>(),
+                                             boundary_values);
   }
 
+  MatrixTools::apply_boundary_values (boundary_values,
+                                     system_matrix,
+                                     newton_update,
+                                     system_rhs);
+}
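+
+                               // To summarize what the two loops above add up to: abbreviating
+                               // the value of <code>coeff</code> as
+                               // $a_{n} = 1/\sqrt{1+|\nabla u^{n}|^{2}}$, the linear system for
+                               // the coefficients $\delta U_{j}$ of the Newton update reads
+                               // @f[
+                               //   \sum_{j} \left( \int_\Omega a_{n} \nabla\varphi_{i} \cdot \nabla\varphi_{j}
+                               //   - a_{n}^{3} \left(\nabla u^{n} \cdot \nabla\varphi_{i}\right)
+                               //     \left(\nabla u^{n} \cdot \nabla\varphi_{j}\right) \, dx \right) \delta U_{j}
+                               //   = -0.1 \int_\Omega a_{n} \nabla\varphi_{i} \cdot \nabla u^{n} \, dx,
+                               // @f]
+                               // which is the damped Newton step described in the introduction,
+                               // with the damping parameter $\lambda=0.1$ and the shape
+                               // functions $\varphi_{i}$.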
 
-                                   // Once we have a search (update) direction,
-                                   // we need to figure out how far to go in
-                                   // this direction. This is what line search
-                                   // is good for, and this function does
-                                   // exactly this: compute and return the
-                                   // length of the update step.
-                                   //
-                                   // Since we already know the direction, we
-                                   // only have to solve the one-dimensional
-                                   // problem of minimizing the energy along
-                                   // this direction. Note, however, that in
-                                   // general we do not have the gradient of the
-                                   // energy functional in this direction, so we
-                                   // have to approximate it (and the second
-                                   // derivatives) using finite differences.
-                                   //
-                                   // In most applications, it is sufficient to
-                                   // find an approximate minimizer of this
-                                   // one-dimensional problem, or even just a
-                                   // point that may not be a minimizer but
-                                   // instead just satisfies a few conditions
-                                   // like those of Armijo and Goldstein. The
-                                   // rational for this is generally that
-                                   // evaluating the objective function too
-                                   // often is too expensive. However, here, we
-                                   // are a little more lenient, since the
-                                   // overall run-time is dominated by inverting
-                                   // the system matrix in each nonlinear
-                                   // step. Thus, we will do this minimization
-                                   // by using a fixed number of five Newton
-                                   // steps in this one-dimensional problem, and
-                                   // using a bisection algorithm as a substep
-                                   // in it.
-                                   //
-                                   // As is quite common in step length
-                                   // procedures, this function contains a fair
-                                   // number of heuristics and strategies that
-                                   // might not be obvious at first. Step length
-                                   // determination is notorious for its
-                                   // complications, and this implementation is
-                                   // not an exception. Note that if one tries
-                                   // to omit the special-casing, then one
-                                   // oftentimes encounters situations where the
-                                   // found step length is really not very good.
-  template <int dim>
-  double
-  MinimizationProblem<dim>::line_search (const Vector<double> &update) const
-  {
-                                     // Start out with a zero step length:
-    double alpha = 0.;
-    Vector<double> tmp (present_solution.size());
-
-                                     // Then do at most five Newton steps:
-    for (unsigned int step=0; step<5; ++step)
-      {
-                                         // At the present location, which is
-                                         // <code>present_solution+alpha*update</code>,
-                                         // evaluate the energy
-        tmp = present_solution;
-        tmp.add (alpha, update);
-        const double f_a = energy (dof_handler, tmp);
-
-                                         // Then determine a finite difference
-                                         // step length <code>dalpha</code>, and also
-                                         // evaluate the energy functional at
-                                         // positions <code>alpha+dalpha</code> and
-                                         // <code>alpha-dalpha</code> along the search
-                                         // direction:
-        const double dalpha = (alpha != 0 ? alpha/100 : 0.01);
-
-        tmp = present_solution;
-        tmp.add (alpha+dalpha, update);
-        const double f_a_plus = energy (dof_handler, tmp);
-
-        tmp = present_solution;
-        tmp.add (alpha-dalpha, update);
-        const double f_a_minus = energy (dof_handler, tmp);
-
-                                         // From these three data points, we can
-                                         // compute a finite difference
-                                         // approximation of the first and
-                                         // second derivatives:
-        const double f_a_prime       = (f_a_plus-f_a_minus) / (2*dalpha);
-        const double f_a_doubleprime = ((f_a_plus-2*f_a+f_a_minus) /
-                                        (dalpha*dalpha));
-
-                                         // If the gradient is (relative to the
-                                         // energy value) too small, then this
-                                         // means that we have found a minimum
-                                         // of the energy functional along the
-                                         // search direction. In this case,
-                                         // abort here and return the found step
-                                         // length value:
-        if (std::fabs(f_a_prime) < 1e-7*std::fabs(f_a))
-          break;
-
-                                         // Alternatively, also abort if the
-                                         // curvature is too small, because we
-                                         // can't compute a Newton step
-                                         // then. This is somewhat
-                                         // unsatisfactory, since we are not at
-                                         // a minimum, and can certainly be
-                                         // improved. There are a number of
-                                         // other strategies for this case,
-                                         // which we leave for interested
-                                         // readers:
-        if (std::fabs(f_a_doubleprime) < 1e-7*std::fabs(f_a_prime))
-          break;
-
-                                         // Then compute the Newton step as the
-                                         // negative of the inverse Hessian
-                                         // applied to the gradient.
-        double step_length = -f_a_prime / f_a_doubleprime;
-
-                                         // And do a number of correcting steps:
-                                         // if the energy at the predicted new
-                                         // position would be larger than at the
-                                         // present position, then halve the
-                                         // step length and try again. If this
-                                         // does not help after three such
-                                         // cycles, then simply give up and use
-                                         // the value we have.
-        for (unsigned int i=0; i<3; ++i)
-          {
-            tmp = present_solution;
-            tmp.add (alpha+step_length, update);
-            const double e = energy (dof_handler, tmp);
-
-            if (e >= f_a)
-              step_length /= 2;
-            else
-              break;
-          }
-
-                                         // After all this, update alpha and go
-                                         // on to the next Newton step.
-        alpha += step_length;
-      }
-
-                                     // Finally, return with the computed step length.
-    return alpha;
-  }
+                                                               // @sect4{Step15::solve}
 
+                               // The solve function is the same as always. We use the minimal
+                               // residual method as the linear solver and, at the end, add the
+                               // computed Newton update to the current solution.
 
+template <int dim>
+void Step15<dim>::solve ()
+{
+  res = system_rhs.l2_norm();
+  SolverControl           solver_control (1000, res*1e-6);
+  SolverMinRes<>              solver (solver_control);
 
-                                   // The next function is again a rather boring
-                                   // one: it does one nonlinear step, by
-                                   // calling the function that assembles the
-                                   // linear system, then solving it, computing
-                                   // a step length, and finally updating the
-                                   // solution vector. This should all be mostly
-                                   // self-explanatory, given that we have shown
-                                   // the solution of a linear system before.
-  template <int dim>
-  void MinimizationProblem<dim>::do_step ()
-  {
-    assemble_step ();
+  PreconditionSSOR<> preconditioner;
+  preconditioner.initialize(system_matrix, 1.2);
 
-    Vector<double> update (present_solution.size());
-    {
-      SolverControl           solver_control (residual.size(),
-                                              1e-2*residual.l2_norm());
-      SolverCG<>              solver (solver_control);
+  solver.solve (system_matrix, newton_update, system_rhs,
+               preconditioner);
 
-      PreconditionSSOR<> preconditioner;
-      preconditioner.initialize(matrix);
+  hanging_node_constraints.distribute (newton_update);
 
-      solver.solve (matrix, update, residual,
-                    preconditioner);
-      hanging_node_constraints.distribute (update);
-    }
+                               // At the end, the current solution is updated by adding the
+                               // Newton update to it:
 
-    const double step_length = line_search (update);
-    present_solution.add (step_length, update);
-  }
+  present_solution += newton_update;
+}
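+
+                               // As a side remark (the program itself does not need this
+                               // observation): the bilinear form assembled in
+                               // <code>assemble_system</code> is symmetric in $\varphi_{i}$ and
+                               // $\varphi_{j}$, and by the Cauchy-Schwarz inequality it is even
+                               // positive definite, since for every non-zero direction $v$ with
+                               // homogeneous boundary values
+                               // @f[
+                               //   \int_\Omega a_{n} |\nabla v|^{2}
+                               //   - a_{n}^{3} \left(\nabla u^{n} \cdot \nabla v\right)^{2} \, dx
+                               //   \ge \int_\Omega \frac{|\nabla v|^{2}}{\left(1+|\nabla u^{n}|^{2}\right)^{3/2}} \, dx
+                               //   > 0.
+                               // @f]
+                               // Krylov solvers for symmetric matrices, such as the minimal
+                               // residual method used above, are therefore applicable.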
 
+                                                               // @sect4{Step15::refine_grid}
 
+                               // The first part of this function is the same as in step-6.
+                               // But after refining the mesh we also have to transfer the old
+                               // solution to the new mesh, which is done with the help of the
+                               // SolutionTransfer class.
 
-                                   // The same holds for the function that
-                                   // outputs the solution in gnuplot format
-                                   // into a file with a name that includes the
-                                   // number of the run we are presently
-                                   // performing.
-  template <int dim>
-  void
-  MinimizationProblem<dim>::output_results () const
-  {
-    DataOut<dim> data_out;
-    data_out.attach_dof_handler (dof_handler);
-    data_out.add_data_vector (present_solution, "solution");
-    data_out.build_patches ();
-
-    std::ostringstream filename;
-    filename << "solution-"
-             << run_number
-             << ".gnuplot"
-             << std::ends;
-
-    std::ofstream out (filename.str().c_str());
-    data_out.write_gnuplot (out);
-  }
 
+template <int dim>
+void Step15<dim>::refine_grid ()
+{
+  Vector<float> estimated_error_per_cell (triangulation.n_active_cells());
+
+  KellyErrorEstimator<dim>::estimate (dof_handler,
+                                     QGauss<dim-1>(3),
+                                     typename FunctionMap<dim>::type(),
+                                     present_solution,
+                                     estimated_error_per_cell);
+
+  GridRefinement::refine_and_coarsen_fixed_number (triangulation,
+                                                  estimated_error_per_cell,
+                                                  0.3, 0.03);
+
+                               // Then we need an additional step: if, for example, you flag a
+                               // cell that is once more refined than its neighbor, and that
+                               // neighbor is not flagged for refinement, we would end up with
+                               // a jump of two refinement levels across a cell interface. To
+                               // avoid such situations, the library silently also has to
+                               // refine the neighbor cell once. It does so by calling the
+                               // Triangulation::prepare_coarsening_and_refinement function
+                               // before actually doing the refinement and coarsening. This
+                               // function flags a set of additional cells for refinement or
+                               // coarsening, to enforce rules like the one-hanging-node rule.
+                               // The cells that are flagged for refinement and coarsening
+                               // after calling this function are exactly the ones that will
+                               // actually be refined or coarsened. Because the
+                               // SolutionTransfer class needs this information to store the
+                               // data from the old mesh and transfer it to the new one, we
+                               // call this function explicitly here:
+
+  triangulation.prepare_coarsening_and_refinement ();
+
+                               // With this out of the way, we initialize a SolutionTransfer
+                               // object with the present DoFHandler and attach the solution
+                               // vector to it:
+
+  SolutionTransfer<dim> solution_transfer(dof_handler);
+  solution_transfer.prepare_for_coarsening_and_refinement(present_solution);
+
+                               // Then we do the actual refinement, and distribute degrees
+                               // of freedom on the new mesh:
+
+  triangulation.execute_coarsening_and_refinement();
+  dof_handler.distribute_dofs(fe);
+
+                               // Finally, we retrieve the old solution interpolated to the new
+                               // mesh. Since the SolutionTransfer function does not actually
+                               // store the values of the old solution, but rather indices, we
+                               // need to preserve the old solution vector until we have gotten
+                               // the new interpolated values. Thus, we have the new values
+                               // written into a temporary vector, and only afterwards write
+                               // them into the solution vector object:
+
+  Vector<double> tmp(dof_handler.n_dofs());
+  solution_transfer.interpolate(present_solution,tmp);
+  present_solution=tmp;
+
+                               // After the mesh has been refined, there may be new nodal
+                               // points on the boundary. Their values are merely interpolated
+                               // from the old mesh, and are therefore not the correct boundary
+                               // values. We fix this by explicitly setting all degrees of
+                               // freedom on the boundary to their correct values:
+
+  std::map<unsigned int, double> boundary_values2;
+  VectorTools::interpolate_boundary_values (dof_handler,
+                                            0,
+                                            BoundaryValues<dim>(),
+                                            boundary_values2);
+  for (std::map<unsigned int, double>::const_iterator
+         p = boundary_values2.begin();
+       p != boundary_values2.end();
+       ++p)
+    present_solution(p->first) = p->second;
+
+                               // On the new mesh there are different hanging nodes, for which
+                               // we have to compute constraints again, as before. To make sure
+                               // that no constraints from the old mesh are left over, the
+                               // constraints object is first cleared:
+  hanging_node_constraints.clear();
+
+                               // After doing so, the constraints for the hanging nodes of the
+                               // new mesh can be computed, as before. Because we do this here,
+                               // the hanging node constraints do not have to be set up again
+                               // when the <code>setup_system</code> function is called from
+                               // <code>run</code> after this refinement step.
+
+  DoFTools::make_hanging_node_constraints(dof_handler, hanging_node_constraints);
+  hanging_node_constraints.close();
+  hanging_node_constraints.distribute(present_solution);
+}
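+
+                               // Before turning to the driver, here is a rough sketch of how
+                               // the functions defined so far can be combined. This is only
+                               // meant as an orientation; the authoritative sequence of calls
+                               // is the <code>run</code> function documented next:
+                               // @code
+                               //   assemble_system ();  // build the damped Newton system
+                               //   solve ();            // add the update to present_solution
+                               //   refine_grid ();      // adapt the mesh, carry the solution along
+                               //   // ...then set up matrices and vectors on the new mesh and
+                               //   // repeat until the residual is sufficiently small.
+                               // @endcode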
 
+                                                               // @sect4{Step15::run}
 
-                                   // The function to compute error indicator
-                                   // and refine the mesh accordingly is a
-                                   // little more interesting. In particular, it
-                                   // shows some more of the techniques usually
-                                   // used in 1d applications. First, note that
-                                   // this again is a specialization that only
-                                   // works in 1d. However, to make later
-                                   // extension to higher space dimensions
-                                   // simpler, we define a constant integer
-                                   // <code>dim</code> at the beginning of the function;
-                                   // by using this constant as template
-                                   // argument in all places, we are actually
-                                   // able to write most of the code as if it
-                                   // were dimension independent, thus
-                                   // minimizing the amount of later changes.
-  template <>
-  void MinimizationProblem<1>::refine_grid ()
-  {
-    const unsigned int dim = 1;
-
-    Vector<float> error_indicators (triangulation.n_active_cells());
-
-                                     // Then define the quadrature formula, and
-                                     // what values we will want to extract from
-                                     // the solution. Here, we use the two-point
-                                     // trapezoidal rule, i.e. we evaluate the
-                                     // residual only at the end points of the
-                                     // cells. Incidentally, this also makes
-                                     // evaluating the jump terms between cells
-                                     // simpler. Note that for the error
-                                     // indicators, we not only need values and
-                                     // gradients of the solution, but also its
-                                     // second derivatives, as well as the
-                                     // physical location of quadrature points.
-    QTrapez<dim> quadrature;
-    FEValues<dim> fe_values (fe, quadrature,
-                             update_values   | update_gradients |
-                             update_hessians |
-                             update_quadrature_points | update_JxW_values);
-
-                                     // The error indicator formula presented in
-                                     // the introduction requires us to compute
-                                     // jumps of the solution and gradient
-                                     // across cell boundaries. Since the
-                                     // solution itself is continuous, we only
-                                     // need to evaluate the gradient on the
-                                     // neighbor cells. To avoid some of the
-                                     // work needed to reinitialize a
-                                     // <code>FEValues</code> object on a cell, we define
-                                     // another such object here that we will
-                                     // only use for the neighbor cells. The
-                                     // data we need from the side of the
-                                     // present cell is provided by above
-                                     // object.
-    FEValues<dim> neighbor_fe_values (fe, quadrature,
-                                      update_gradients);
-
-                                     // Then, as before, we need objects holding
-                                     // values and derivatives of the solution
-                                     // at quadrature points. Here, we also need
-                                     // second derivatives, which is simple,
-                                     // however:
-    std::vector<double> local_values (quadrature.size());
-    std::vector<Tensor<1,dim> > local_gradients (quadrature.size());
-    std::vector<Tensor<2,dim> > local_2nd_derivs (quadrature.size());
-
-                                     // With all this, we can start the loop
-                                     // over all cells. Since we need to write
-                                     // the result for each cell into
-                                     // consecutive elements of a vector, we
-                                     // also keep a running index <code>cell_index</code>
-                                     // that we increase with each cell treated.
-    DoFHandler<dim>::active_cell_iterator
-      cell = dof_handler.begin_active (),
-      endc = dof_handler.end ();
-    for (unsigned int cell_index = 0; cell!=endc; ++cell, ++cell_index)
-      {
-                                         // After initializing the <code>FEValues</code>
-                                         // object on each cell, use it to
-                                         // evaluate solution and first and
-                                         // second derivatives of it at the
-                                         // quadrature points:
-        fe_values.reinit (cell);
-        fe_values.get_function_values (present_solution, local_values);
-        fe_values.get_function_grads (present_solution, local_gradients);
-        fe_values.get_function_2nd_derivatives (present_solution, local_2nd_derivs);
-
-                                         // Given the formula in the
-                                         // introduction, the computation of the
-                                         // cell residuals should actually be
-                                         // relatively obvious. The result,
-                                         // multiplied by the appropriate power
-                                         // of the cell's size is then written
-                                         // into the vector of error indicators.
-                                         //
-                                         // Note that in the following
-                                         // computations, we have already made
-                                         // use of the fact that we are in 1d,
-                                         // since we extract the gradient as a
-                                         // scalar value.
-        double cell_residual_norm = 0;
-        for (unsigned int q=0; q<quadrature.size(); ++q)
-          {
-            const double x             = fe_values.quadrature_point(q)[0];
-            const double u             = local_values[q];
-            const double u_prime       = local_gradients[q][0];
-            const double u_doubleprime = local_2nd_derivs[q][0][0];
-            const double local_residual_value
-              = ((x-u*u*u) * std::pow(u_prime, 4) *
-                 (u*u*u_prime*u_prime
-                  +
-                  5*(x-u*u*u)*u_doubleprime
-                  +
-                  2*u_prime*(1-3*u*u*u_prime)));
-
-            cell_residual_norm += (local_residual_value * local_residual_value *
-                                   fe_values.JxW(q));
-          }
-        error_indicators(cell_index) = cell_residual_norm *
-                                       cell->diameter() * cell->diameter();
-
-                                         // The next step is to evaluate the
-                                         // jump terms. To make computations
-                                         // somewhat simpler (and to free up the
-                                         // <code>local_*</code> variables for use on
-                                         // neighboring elements), we define
-                                         // some convenience variables for the
-                                         // positions of the left and right cell
-                                         // boundary point, as well as the
-                                         // values and gradients at these
-                                         // points.
-                                         //
-                                         // To be cautious, we don't blindly
-                                         // trust that the trapezoidal rule has
-                                         // its evaluation points as the left
-                                         // and right end point of the cell (it
-                                         // could in principle have them in the
-                                         // reverse order, i.e. the zeroth point
-                                         // is at x=1, and the first one at
-                                         // x=0), and use an assertion to
-                                         // actually check for this. If this
-                                         // would not be the case, an exception
-                                         // of the (predefined) class
-                                         // <code>ExcInternalError</code> would be
-                                         // thrown. Of course, this does not
-                                         // happen in this program, but it shows
-                                         // a way of defensive coding: if you
-                                         // are not sure of an assumption, guard
-                                         // it by a test. This also guards us
-                                         // against possible future changes in
-                                         // the library: the quadrature classes
-                                         // do not promise any particular order
-                                         // of their quadrature points, so the
-                                         // <code>QTrapez</code> class could in principle
-                                         // change the order of its two
-                                         // evaluation points. In that case,
-                                         // your code would tell you that
-                                         // something changed, rather than
-                                         // computing a wrong result when you
-                                         // upgrade to a new version of the
-                                         // library. (The point made here is
-                                         // theoretical: we are not going to
-                                         // change the order of evaluation
-                                         // points; the intent is simply how to
-                                         // add some defensive touches to a
-                                         // program that make sure that it
-                                         // really does what it is hoped to do.)
-                                         //
-                                         // Given that we are now sure that
-                                         // <code>x_left</code> and <code>x_right</code>,
-                                         // extracted from the zeroth and first
-                                         // quadrature point, are indeed the
-                                         // left and right vertex of the cell,
-                                         // we can also be sure that the values
-                                         // we extract for <code>u_left</code> et al. are
-                                         // the ones we expect them to be, since
-                                         // the order of these values must of
-                                         // course match the order of the
-                                         // quadrature points.
-        const double x_left  = fe_values.quadrature_point(0)[0];
-        const double x_right = fe_values.quadrature_point(1)[0];
-
-        Assert (x_left  == cell->vertex(0)[0], ExcInternalError());
-        Assert (x_right == cell->vertex(1)[0], ExcInternalError());
-
-        const double u_left  = local_values[0];
-        const double u_right = local_values[1];
-
-        const double u_prime_left  = local_gradients[0][0];
-        const double u_prime_right = local_gradients[1][0];
-
-                                         // Next, we have to check whether this
-                                         // cell has a left neighbor:
-        if (cell->at_boundary(0) == false)
-          {
-                                             // If so, find its left
-                                             // neighbor. We do so by asking for
-                                             // the cell that is immediately
-                                             // adjacent to the left (the zeroth
-                                             // neighbor in 1d). However, this
-                                             // may be a cell that in itself has
-                                             // children, so to get to the
-                                             // active left neighbor, we have to
-                                             // recursively check whether that
-                                             // cell has children, and if so
-                                             // take its right child, since that
-                                             // is adjacent to the left of the
-                                             // present cell. Note that unless
-                                             // you are in 1d, there is no safe
-                                             // way to assume that the first
-                                             // child of the zeroth neighbor is
-                                             // indeed adjacent to the present
-                                             // cell. Rather, more than one of
-                                             // the children of a neighbor may
-                                             // be adjacent to the present
-                                             // cell. Also note that in two or
-                                             // higher space dimensions, a
-                                             // neighbor of an active cell may
-                                             // only be at most once refined,
-                                             // since we have the rule that
-                                             // there can only be one hanging
-                                             // node per face. This rule does
-                                             // not exist in 1d: neighboring
-                                             // cells may have totally
-                                             // independent refinement
-                                             // levels. Thus, we really need the
-                                             // <code>while</code> loop, not only an
-                                             // <code>if</code> clause.
-            DoFHandler<dim>::cell_iterator left_neighbor = cell->neighbor(0);
-            while (left_neighbor->has_children())
-              left_neighbor = left_neighbor->child(1);
-
-                                             // With the so-found neighbor,
-                                             // initialize the second
-                                             // <code>FEValues</code> object to it,
-                                             // extract the gradients of the
-                                             // solution there, and from this
-                                             // get the gradient at the
-                                             // interface (this is the first
-                                             // element of <code>local_gradients</code>,
-                                             // since the right end point of the
-                                             // neighbor cell has index 1) as a
-                                             // scalar value (this is the zeroth
-                                             // component of
-                                             // <code>local_gradients[1]</code>.
-            neighbor_fe_values.reinit (left_neighbor);
-            neighbor_fe_values.get_function_grads (present_solution, local_gradients);
-
-            const double neighbor_u_prime_left = local_gradients[1][0];
-
-                                             // Then compute the jump, and add a
-                                             // suitable multiple to the error
-                                             // indicator for this cell:
-            const double left_jump = std::pow(x_left-std::pow(u_left,3), 2) *
-                                     (std::pow(neighbor_u_prime_left,5) -
-                                      std::pow(u_prime_left,5));
-            error_indicators(cell_index) += left_jump * left_jump *
-                                            cell->diameter();
-          }
-
-                                         // Once we have done the left neighbor,
-                                         // we can play exactly the same game
-                                         // with the right neighbor:
-        if (cell->at_boundary(1) == false)
-          {
-            DoFHandler<dim>::cell_iterator right_neighbor = cell->neighbor(1);
-            while (right_neighbor->has_children())
-              right_neighbor = right_neighbor->child(0);
-
-            neighbor_fe_values.reinit (right_neighbor);
-            neighbor_fe_values.get_function_grads (present_solution, local_gradients);
-
-            const double neighbor_u_prime_right = local_gradients[0][0];
-
-            const double right_jump = std::pow(x_right-std::pow(u_right,3), 2) *
-                                      (std::pow(neighbor_u_prime_right,5) -
-                                       std::pow(u_prime_right,5));
-            error_indicators(cell_index) += right_jump * right_jump *
-                                            cell->diameter();
-          }
-      }
-
-                                     // Now we have all the refinement
-                                     // indicators computed, and want to refine
-                                     // the grid. In contrast to previous
-                                     // examples, however, we would like to
-                                     // transfer the solution vector from the
-                                     // old to the new grid. This is what the
-                                     // <code>SolutionTransfer</code> class is good for,
-                                     // but it requires some preliminary
-                                     // work. First, we need to tag the cells
-                                     // that we want to refine or coarsen, as
-                                     // usual:
-    GridRefinement::refine_and_coarsen_fixed_number (triangulation,
-                                                     error_indicators,
-                                                     0.3, 0.03);
-                                     // Then, however, we need an additional
-                                     // step: if, for example, you flag a cell
-                                     // that is once more refined than its
-                                     // neighbor, and that neighbor is not
-                                     // flagged for refinement, we would end up
-                                     // with a jump of two refinement levels
-                                     // across a cell interface. In 1d, this
-                                     // would in general be allowed, but not in
-                                     // higher space dimensions, and some mesh
-                                     // smoothing algorithms in 1d may also
-                                     // disallow this. To avoid these
-                                     // situations, the library will silently
-                                     // also have to refine the neighbor cell
-                                     // once. It does so by calling the
-                                     // <code>Triangulation::prepare_coarsening_and_refinement</code>
-                                     // function before actually doing the
-                                     // refinement and coarsening. This function
-                                     // flags a set of additional cells for
-                                     // refinement or coarsening, to enforce
-                                     // rules like the one-hanging-node
-                                     // rule. The cells that are flagged for
-                                     // refinement and coarsening after calling
-                                     // this function are exactly the ones that
-                                     // will actually be refined or
-                                     // coarsened. This matters because the
-                                     // <code>SolutionTransfer</code> class needs this
-                                     // information in order to store the data
-                                     // from the old mesh and transfer it to the
-                                     // new one.
-    triangulation.prepare_coarsening_and_refinement();
-
-                                     // With this out of the way, we initialize
-                                     // a <code>SolutionTransfer</code> object with the
-                                     // present <code>DoFHandler</code> and attach the
-                                     // solution vector to it:
-    SolutionTransfer<dim> solution_transfer(dof_handler);
-    solution_transfer.prepare_for_coarsening_and_refinement (present_solution);
-
-                                     // Then we do the actual refinement, and
-                                     // distribute degrees of freedom on the new
-                                     // mesh:
-    triangulation.execute_coarsening_and_refinement ();
-    dof_handler.distribute_dofs (fe);
-
-                                     // Finally, we retrieve the old solution
-                                     // interpolated to the new mesh. Since the
-                                     // <code>SolutionTransfer</code> function does not
-                                     // actually store the values of the old
-                                     // solution, but rather indices, we need to
-                                     // preserve the old solution vector until
-                                     // we have gotten the new interpolated
-                                     // values. Thus, we have the new values
-                                     // written into a temporary vector, and
-                                     // only afterwards write them into the
-                                     // solution vector object:
-    Vector<double> tmp (dof_handler.n_dofs());
-    solution_transfer.interpolate (present_solution, tmp);
-    present_solution = tmp;
-
-                                     // Here is some final thing, that is
-                                     // actually unnecessary in 1d, but
-                                     // necessary for higher space dimensions,
-                                     // so we show it anyway: the result of what
-                                     // the <code>SolutionTransfer</code> class provides
-                                     // is a vector that is interpolated from
-                                     // the old to the new mesh. Unfortunately,
-                                     // it does not necessarily have the right
-                                     // values at constrained (hanging) nodes,
-                                     // so we have to fix this up to make the
-                                     // solution conforming again. The simplest
-                                     // way to do this is this:
-    hanging_node_constraints.clear ();
-    DoFTools::make_hanging_node_constraints (dof_handler,
-                                             hanging_node_constraints);
-    hanging_node_constraints.close ();
-    hanging_node_constraints.distribute (present_solution);
-                                     // This is wasteful, since we create a
-                                     // <code>ConstraintMatrix</code> object that will be
-                                     // recreated again in the next call to
-                                     // <code>setup_system_on_mesh</code> immediately
-                                     // afterwards. A more efficient
-                                     // implementation would make sure that it
-                                     // is created only once. We don't care so
-                                     // much here, since in 1d there are no
-                                     // constraints, so all of these operations
-                                     // are really cheap, but we do not
-                                     // recommend this as general programming
-                                     // strategy.
-  }
+                               // In the run function, the initial grid is built. The Newton
+                               // iteration is also implemented in this function.
 
+template <int dim>
+void Step15<dim>::run ()
+{
 
+                               // The integer <code>refinement</code> counts the number of mesh
+                               // refinements. At the start of the program, it is of course zero.
+       refinement=0;
+       first_step=true;
 
-                                   // Before going over to the framework
-                                   // functions, we still need to look at the
-                                   // implementation of the function that
-                                   // computes the energy of a nodal vector in
-                                   // the functional considered in this example
-                                   // program. Its idea is simple: take a nodal
-                                   // vector and the <code>DoFHandler</code> object it is
-                                   // living on, then loop over all cells and
-                                   // add up the local contributions to the
-                                   // energy:
-  template <int dim>
-  double
-  MinimizationProblem<dim>::energy (const DoFHandler<dim> &dof_handler,
-                                    const Vector<double>  &function)
-  {
-                                     // First define the quadrature formula and
-                                     // a <code>FEValues</code> object with which to
-                                     // compute the values of the input function
-                                     // at the quadrature points. Note again
-                                     // that the integrand is a polynomial of
-                                     // degree six, so a 4-point Gauss formula
-                                     // is appropriate:
-    QGauss<dim>  quadrature_formula(4);
-    FEValues<dim> fe_values (dof_handler.get_fe(), quadrature_formula,
-                             update_values   | update_gradients |
-                             update_quadrature_points | update_JxW_values);
-
-    const unsigned int   n_q_points    = quadrature_formula.size();
-
-                                     // Then, just as when we integrated the
-                                     // linear system, we need two variables
-                                     // that will hold the values and gradients
-                                     // of the given function at the quadrature
-                                     // points:
-    std::vector<double>         local_solution_values (n_q_points);
-    std::vector<Tensor<1,dim> > local_solution_grads (n_q_points);
-
-                                     // With this, define an energy variable,
-                                     // and loop over all the cells:
-    double energy = 0.;
-
-    typename DoFHandler<dim>::active_cell_iterator
-      cell = dof_handler.begin_active(),
-      endc = dof_handler.end();
-    for (; cell!=endc; ++cell)
-      {
-                                         // On each cell, initialize the
-                                         // <code>FEValues</code> object, and extract
-                                         // values and gradients of the given
-                                         // function:
-        fe_values.reinit (cell);
-        fe_values.get_function_values (function,
-                                       local_solution_values);
-        fe_values.get_function_grads (function,
-                                      local_solution_grads);
-
-                                         // Then loop over all quadrature points
-                                         // on this cell, and add up the
-                                         // contribution of each to the global
-                                         // energy:
-        for (unsigned int q_point=0; q_point<n_q_points; ++q_point)
-          energy += (std::pow (fe_values.quadrature_point(q_point)(0)
-                               -
-                               std::pow (local_solution_values[q_point], 3),
-                               2) *
-                     gradient_power (local_solution_grads[q_point], 6) *
-                     fe_values.JxW (q_point));
-      }
-
-                                     // Once we have done this, return the
-                                     // integrated value.
-    return energy;
-  }
+                               // As described in the introduction, the domain is the unit ball
+                               // around the origin. The mesh is globally refined twice so that we
+                               // do not start on the coarse mesh, which consists of only five cells.
 
+       GridGenerator::hyper_ball (triangulation);
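+                               // By attaching a boundary object to the triangulation, new
+                               // boundary points created during refinement are placed on the
+                               // circle rather than on the straight edges of the coarse mesh: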
+       static const HyperBallBoundary<dim> boundary;
+       triangulation.set_boundary (0, boundary);
+       triangulation.refine_global(2);
 
-                                   // So here is the driver function,
-                                   // <code>run()</code>. It generates a coarse mesh,
-                                   // refines it a couple of times, and
-                                   // initializes the starting values. It then
-                                   // goes into a loop in which we first set up
-                                   // the member variables for the new mesh, and
-                                   // then do a fixed number of five gradient
-                                   // steps. If after this the energy has not
-                                   // significantly decreased compared to the
-                                   // last time we checked, we assume that we
-                                   // have converged and exit, otherwise we
-                                   // refine the mesh and start over. Once we
-                                   // have determined that the computations have
-                                   // converged somewhere, we output the
-                                   // results.
-  template <int dim>
-  void MinimizationProblem<dim>::run ()
-  {
-    GridGenerator::hyper_cube (triangulation, 0., 1.);
-    triangulation.refine_global (4);
-    dof_handler.distribute_dofs (fe);
-    initialize_solution ();
+                               // The Newton iteration starts here. In the very first step, no
+                               // residual has been computed yet, so the boolean
+                               // <code>first_step</code> is needed to enter the loop. After that,
+                               // the Newton method continues until the residual is less than
+                               // $10^{-3}$.
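+                               // (We assume here that the member variable <code>res</code> holds
+                               // the l2 norm of the residual vector computed in the most recent
+                               // Newton step, so the loop below runs until
+                               // $\|F(u^{n})\| \le 10^{-3}$.)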
 
-    double last_energy = energy (dof_handler, present_solution);
+       while(first_step || (res>1e-3))
+       {
 
-    while (true)
-      {
-        setup_system_on_mesh ();
+                               // The first thing to do after refining the mesh is to set up the
+                               // vectors, matrices, etc., which is done in the
+                               // <code>setup_system</code> function.
 
-        for (unsigned int iteration=0; iteration<5; ++iteration)
-          do_step ();
+               setup_system();
 
-        const double this_energy = energy (dof_handler, present_solution);
-        std::cout << "   Energy: " << this_energy << std::endl;
+                               // On every mesh, five Newton steps are performed in order to obtain
+                               // a better solution before the mesh becomes so fine that the
+                               // computations take too much time.
 
-        if ((last_energy-this_energy) < 1e-5*last_energy)
-          break;
+               for(unsigned int i=0; i<5;++i)
+               {
 
-        last_energy = this_energy;
+                               // In every Newton step the system matrix and the right hand side
+                               // have to be computed.
 
-        refine_grid ();
-      }
+                       assemble_system ();
+                       solve ();
+                       first_step=false;
+                       std::cout << "residual:" << res << std::endl;
+               }
 
-    output_results ();
+                               // On every mesh, the fifth solution, together with the
+                               // corresponding Newton update, is written to a vtk file in order
+                               // to show the convergence of the solution.
 
-    std::cout << std::endl;
-  }
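+                               // Since the file name below is composed of at most two decimal
+                               // digits, we first make sure that we do not run more than 99
+                               // refinement cycles: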
+               Assert (refinement < 100, ExcNotImplemented());
+
+               DataOutBase::VtkFlags vtk_flags;
+
+               DataOut<dim> data_out;
+               data_out.set_flags (vtk_flags);
+
+               data_out.attach_dof_handler (dof_handler);
+               data_out.add_data_vector (newton_update, "update");
+               data_out.add_data_vector (present_solution, "solution");
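+                               // Giving <code>build_patches</code> the argument 6 subdivides
+                               // each cell six times in every coordinate direction for output
+                               // purposes, which yields a finer picture of the solution in the
+                               // vtk file than one patch per cell would: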
+               data_out.build_patches (6);
+               std::string filename = "solution-";
+               if(refinement<10)
+               {
+                 filename += ('0' + refinement);
+               }
+               else{
+                 filename += ('0' + refinement/10);
+                 filename += ('0' + refinement%10);
+               }
+               filename += ".vtk";
+               std::ofstream output (filename.c_str());
+               data_out.write_vtk (output);
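+                               // As an aside, the two-digit file name above could also be
+                               // obtained with deal.II's <code>Utilities::int_to_string</code>
+                               // function (declared in <deal.II/base/utilities.h>), which pads
+                               // the number with leading zeros -- a sketch only, assuming that
+                               // header is included:
+                               //
+                               //   const std::string filename
+                               //     = "solution-" + Utilities::int_to_string (refinement, 2) + ".vtk";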
+
+                               // The last thing to do is to refine the mesh:
+
+               refine_grid();
+
+               std::cout << "********mesh-refinement:" << refinement+1 << " ********" << std::endl;
+               refinement++;
+
+       }
+
+                               // Once the residual is less than $10^{-3}$, the final solution is
+                               // written to a vtk file:
+
+  DataOutBase::VtkFlags vtk_flags;
+
+  DataOut<dim> data_out;
+  data_out.set_flags (vtk_flags);
+
+  data_out.attach_dof_handler (dof_handler);
+  data_out.add_data_vector (present_solution, "solution");
+  data_out.build_patches (6);
+
+  std::ofstream output ("final-solution.vtk");
+  data_out.write_vtk (output);
 }
 
+                               // @sect4{The main function}
+
+                               // Finally the main function. It follows the scheme of all other
+                               // main functions:
 
-                                 // Finally: <code>main()</code>. This function does
-                                 // what all its counterparts in previous
-                                 // examples already did, i.e. create an
-                                 // object of the main class, and hand off
-                                 // work to it, only retaining its role as a
-                                 // guard to catch exceptions and print some
-                                 // information if we get one. The only
-                                 // difference is that it generates objects
-                                 // multiple times and runs them. Since the
-                                 // initial value for each run is a random
-                                 // function, where the random number
-                                 // generator returns different values each
-                                 // time, all these runs are actually
-                                 // different, although it may seem that they
-                                 // are independent of each other.
 int main ()
 {
+
   try
     {
-      using namespace dealii;
-      using namespace Step15;
-
       deallog.depth_console (0);
 
-      const unsigned int n_realizations = 10;
-      for (unsigned int realization=0; realization<n_realizations; ++realization)
-        {
-          std::cout << "Realization " << realization << ":" << std::endl;
-
-          MinimizationProblem<1> minimization_problem_1d (realization);
-          minimization_problem_1d.run ();
-        }
+      Step15<2> minimal_surface_problem_2d;
+      minimal_surface_problem_2d.run ();
     }
   catch (std::exception &exc)
     {
       std::cerr << std::endl << std::endl
-                << "----------------------------------------------------"
-                << std::endl;
+               << "----------------------------------------------------"
+               << std::endl;
       std::cerr << "Exception on processing: " << std::endl
-                << exc.what() << std::endl
-                << "Aborting!" << std::endl
-                << "----------------------------------------------------"
-                << std::endl;
+               << exc.what() << std::endl
+               << "Aborting!" << std::endl
+               << "----------------------------------------------------"
+               << std::endl;
+
       return 1;
     }
   catch (...)
     {
       std::cerr << std::endl << std::endl
-                << "----------------------------------------------------"
-                << std::endl;
+               << "----------------------------------------------------"
+               << std::endl;
       std::cerr << "Unknown exception!" << std::endl
-                << "Aborting!" << std::endl
-                << "----------------------------------------------------"
-                << std::endl;
+               << "Aborting!" << std::endl
+               << "----------------------------------------------------"
+               << std::endl;
       return 1;
     }
   return 0;
