From 4f6aea69efebb8ce5295fbb00cf9d75c49ad5ec8 Mon Sep 17 00:00:00 2001
From: wolf
Date: Thu, 2 May 2002 11:26:19 +0000
Subject: [PATCH] .
git-svn-id: https://svn.dealii.org/trunk@5783 0785d39b-7218-0410-832d-ea1e28bc413d
---
.../step-14.data/intro.tex | 179 ++++++++++++++++++
.../step-14.data/results.html | 43 ++++-
2 files changed, 216 insertions(+), 6 deletions(-)
create mode 100644 deal.II/doc/tutorial/chapter-2.step-by-step/step-14.data/intro.tex
diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-14.data/intro.tex b/deal.II/doc/tutorial/chapter-2.step-by-step/step-14.data/intro.tex
new file mode 100644
index 0000000000..65910d47f0
--- /dev/null
+++ b/deal.II/doc/tutorial/chapter-2.step-by-step/step-14.data/intro.tex
@@ -0,0 +1,179 @@
+\documentclass{article}
+\usepackage{amsmath}
+\begin{document}
+
+\section{The maths}
+
+The Heidelberg group of Professor Rolf Rannacher, to which the three main
+authors of the deal.II library belonged during their PhD studies and partly
+also afterwards, has been involved with adaptivity and error estimation for
+finite element discretizations since the mid-1990s. The main achievement is
+the development of error estimates for arbitrary functionals of the solution,
+and of optimal mesh refinement strategies for their computation.
+
+We will not discuss the derivation of these concepts in too much detail, but
+will implement the main ideas in the present example program. For a thorough
+introduction to the general idea, we refer to the seminal work of Becker and
+Rannacher \cite{BR95,BR96r}, and the overview article of the same authors in
+Acta Numerica \cite{BR01}; the former introduces the concept of error
+estimation and adaptivity for general functional outputs of the Laplace
+equation, while the latter gives many examples of applications of these
+concepts to a large number of other, more complicated equations. For
+applications to individual types of equations, see also the publications by
+Becker \cite{Bec95,Bec98}, Kanschat \cite{Kan96,FK97}, Suttmeier
+\cite{Sut96,RS97,RS98c,RS99}, Bangerth \cite{BR99b,Ban00w,BR01a,Ban02}, and
+Hartmann \cite{HH01,HH01a,HH01b}.
+
+The basic idea is the following: in applications, one is usually not
+interested in the solution per se, but rather in certain aspects of it. For
+example, in simulations of flow problems, one may want to know the lift or
+drag of a body immersed in the fluid; it is this quantity that we want to
+know to the best possible accuracy, and whether the rest of the solution of
+the describing equations is well resolved is not of primary interest.
+Likewise, in elasticity one might want to know the values of the stress at
+certain points in order to judge whether the maximal load on a joint is
+safe. Or, in radiative transfer problems, mean flux intensities are of
+interest.
+
+In all the cases just listed, it is the evaluation of a functional $J(u)$ of
+the solution which we are interested in, rather than the values of $u$
+everywhere. Since the exact solution $u$ is not available, but only its
+numerical approximation $u_h$, it is sensible to ask whether the computed
+value $J(u_h)$ is within certain limits of the exact value $J(u)$, i.e. we
+want to bound the error with respect to this functional, $J(u)-J(u_h)$.
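+
+Two simple examples of such (linear) functionals are, for instance, the
+point value, which we will meet again below, and a weighted mean value of
+the solution:
+\begin{gather*}
+  J(\varphi) = \varphi(x_0),
+  \qquad
+  J(\varphi) = \int_\Omega \varphi\, \omega\, dx,
+\end{gather*}
+with a fixed evaluation point $x_0$ and a fixed weighting function $\omega$.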
+
+For simplicity of exposition, we henceforth assume that both the quantity of
+interest $J$ and the equation are linear, and we will in particular
+show the derivation for the Laplace equation with homogeneous Dirichlet
+boundary conditions, although the concept is much more general. For this
+general case, we refer to the references listed above. The goal is to obtain
+bounds on the error, $J(e)=J(u)-J(u_h)$. For this, let us denote by $z$ the
+solution of a dual problem, defined as follows:
+\begin{gather}
+ a(\varphi,z) = J(\varphi) \qquad \forall \varphi,
+\end{gather}
+where $a(\cdot,\cdot)$ is the bilinear form associated with the differential
+equation, and the test functions are chosen from the corresponding solution
+space. Then, taking as special test function $\varphi=e$ the error, we have
+that
+\begin{gather}
+ J(e) = a(e,z)
+\end{gather}
+and we can, by Galerkin orthogonality, rewrite this as
+\begin{gather}
+ J(e) = a(e,z-\varphi_h)
+\end{gather}
+for all possible functions $\varphi_h$ from the discrete test space.
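+Recall for completeness that Galerkin orthogonality is nothing but the
+statement that the error is orthogonal, in the sense of the bilinear form,
+to the discrete test space: since $u$ and $u_h$ satisfy the same variational
+equation when tested with discrete functions, we have
+\begin{gather*}
+  a(e,\varphi_h)
+  = a(u,\varphi_h) - a(u_h,\varphi_h)
+  = (f,\varphi_h) - (f,\varphi_h)
+  = 0
+\end{gather*}
+for all $\varphi_h$ from the discrete test space.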
+
+Concretely, for Laplace's equation, the error identity reads
+\begin{gather}
+ J(e) = (\nabla e, \nabla(z-\varphi_h)).
+\end{gather}
+For reasons that we will not explain here, we do not want to use this formula
+as is, but rather split the scalar product into contributions from the
+individual cells, and integrate by parts on each of them:
+\begin{align*}
+ J(e)
+ &=
+ \sum_K (\nabla (u-u_h), \nabla (z-\varphi_h))_K
+ \\
+ &=
+ \sum_K (-\Delta (u-u_h), z-\varphi_h)_K
+ + (\partial_n (u-u_h), z-\varphi_h)_{\partial K}.
+\end{align*}
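+Here we have used, on each cell $K$, the standard integration by parts
+formula
+\begin{gather*}
+  (\nabla v, \nabla w)_K = (-\Delta v, w)_K + (\partial_n v, w)_{\partial K}.
+\end{gather*}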
+Next we use that $-\Delta u=f$, and that $\partial_n u$ is continuous almost
+everywhere, so the term involving $\partial_n u$ on one cell cancels with
+that on its neighbor, where the normal vector has the opposite sign. At the
+boundary of the domain, where there is no neighbor cell with which this term
+could cancel, the weight $z-\varphi_h$ can be chosen as zero, since $z$ has
+zero boundary values and $\varphi_h$ can be chosen to have the same.
+
+Thus, we have
+\begin{align*}
+ J(e)
+ &=
+ \sum_K (f+\Delta u_h, z-\varphi_h)_K
+ - (\partial_n u_h, z-\varphi_h)_{\partial K\backslash \partial\Omega}.
+\end{align*}
+In a final step, note that when taking the normal derivative of $u_h$, we mean
+the value of this quantity as taken from this side of the cell (for the usual
+Lagrange elements, derivatives are not continuous across edges). We then
+rewrite the above formula by exchanging half of the edge integral of cell $K$
+with the neighbor cell $K'$, to obtain
+\begin{align*}
+ J(e)
+ &=
+ \sum_K (f+\Delta u_h, z-\varphi_h)_K
+ - \frac 12 (\partial_n u_h|_K + \partial_{n'} u_h|_{K'},
+ z-\varphi_h)_{\partial K\backslash \partial\Omega}.
+\end{align*}
+Using that the two normal vectors satisfy $n'=-n$, we define the jump of the
+normal derivative by
+\begin{gather*}
+ [\partial_n u_h] := \partial_n u_h|_K + \partial_{n'} u_h|_{K'}
+ =
+ \partial_n u_h|_K - \partial_n u_h|_{K'},
+\end{gather*}
+and obtain the final form by setting the discrete function $\varphi_h$, which
+so far has been arbitrary, to the point interpolation of the dual solution,
+$\varphi_h=I_h z$:
+\begin{align*}
+ J(e)
+ &=
+ \sum_K (f+\Delta u_h, z-I_h z)_K
+ - \frac 12 ([\partial_n u_h],
+ z-I_h z)_{\partial K\backslash \partial\Omega}.
+\end{align*}
+
+With this, we have obtained an exact representation of the error of the
+finite element discretization with respect to arbitrary (linear) functionals
+$J(\cdot)$. Its structure is a weighted form of a residual estimator, as both
+$f+\Delta u_h$ and $[\partial_n u_h]$ are cell and edge residuals that vanish
+on the exact solution, and $z-I_h z$ are weights indicating how important the
+residual on a certain cell is for the evaluation of the given functional.
+Furthermore, it is a cell-wise quantity, so we can use it as a mesh
+refinement criterion. The question is: how do we evaluate it? After all, the
+evaluation requires knowledge of the dual solution $z$, which carries the
+information about the quantity we want to compute to best accuracy.
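+
+One common way of turning this identity into a refinement criterion (a
+standard choice, though by no means the only possible one) is to collect the
+contributions of each cell into an indicator
+\begin{gather*}
+  \eta_K = \left| (f+\Delta u_h, z-I_h z)_K
+  - \frac 12 ([\partial_n u_h],
+  z-I_h z)_{\partial K\backslash \partial\Omega} \right|,
+\end{gather*}
+so that $|J(e)| \le \sum_K \eta_K$ by the triangle inequality, and to refine
+those cells with the largest $\eta_K$.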
+
+In some, very special cases, this dual solution is known. For example, if the
+functional $J(\cdot)$ is the point evaluation, $J(\varphi)=\varphi(x_0)$, then
+the dual solution has to satisfy
+\begin{gather*}
+ -\Delta z = \delta(x-x_0),
+\end{gather*}
+with the Dirac delta function on the right hand side, and the dual solution is
+the Green's function with respect to the point $x_0$. For simple geometries,
+this function is analytically known, and we could insert it into the error
+representation formula.
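+
+For illustration, recall that in two space dimensions the fundamental
+solution of the Laplace operator is
+\begin{gather*}
+  \Phi(x) = -\frac{1}{2\pi} \ln |x-x_0|,
+\end{gather*}
+from which the Green's function on simple domains such as circles or half
+planes can be constructed by the method of images; the dual solution $z$
+then coincides with this Green's function.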
+
+However, we do not want to restrict ourselves to such special cases. Rather,
+we will compute the dual solution numerically, and approximate $z$ by some
+numerically obtained $\tilde z$. We note that it is not sufficient to compute
+this approximation $\tilde z$ using the same method as used for the primal
+solution $u_h$, since then $\tilde z-I_h \tilde z=0$, and the overall error
+estimate would be zero. Rather, the approximation $\tilde z$ has to be from a
+larger space than the primal finite element space. There are various ways to
+obtain such an approximation (see the cited literature), and we will choose to
+compute it with a higher order finite element space. While this is certainly
+not the most efficient choice, it is simple since all the machinery we need
+is already in place, and it also allows for easy experimentation. For more
+efficient methods, we again refer to the literature cited above, in
+particular \cite{BR95,BR96r,BR01}.
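+
+As a rough sketch of what this choice amounts to (the header paths, class
+names and namespace follow the current deal.II interface and are assumptions
+on our part, not an excerpt from the program discussed below), one simply
+sets up a second, higher order finite element and an associated
+\texttt{DoFHandler} for the dual problem on the same triangulation:
+\begin{verbatim}
+#include <deal.II/grid/tria.h>
+#include <deal.II/grid/grid_generator.h>
+#include <deal.II/fe/fe_q.h>
+#include <deal.II/dofs/dof_handler.h>
+
+int main ()
+{
+  // One common mesh for both the primal and the dual problem.
+  dealii::Triangulation<2> triangulation;
+  dealii::GridGenerator::hyper_cube (triangulation);
+  triangulation.refine_global (4);
+
+  // Bilinear elements for the primal solution u_h, biquadratic
+  // elements for the approximate dual solution.
+  dealii::FE_Q<2>       primal_fe (1);
+  dealii::FE_Q<2>       dual_fe (2);
+
+  dealii::DoFHandler<2> primal_dof_handler (triangulation);
+  dealii::DoFHandler<2> dual_dof_handler (triangulation);
+  primal_dof_handler.distribute_dofs (primal_fe);
+  dual_dof_handler.distribute_dofs (dual_fe);
+
+  // ...assemble and solve the primal and dual problems on these
+  // two spaces, then evaluate the error representation cell by
+  // cell using the two solutions...
+}
+\end{verbatim}
+The essential point is only that the primal and the approximate dual
+solution live on the same mesh but use different polynomial degrees, so that
+$\tilde z-I_h \tilde z$ does not vanish identically.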
+
+With this, we end the discussion of the mathematical side of this program and
+turn to the actual implementation.
+
+
+\section{The software}
+
+
+\bibliographystyle{plain}
+\bibliography{own}
+
+\end{document}
+
+
+
+
+
diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-14.data/results.html b/deal.II/doc/tutorial/chapter-2.step-by-step/step-14.data/results.html
index c4119d3fff..36bdcab564 100644
--- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-14.data/results.html
+++ b/deal.II/doc/tutorial/chapter-2.step-by-step/step-14.data/results.html
@@ -443,6 +443,7 @@ refinement process should have taken care that these places are not
important for computing the point value.
+
The next point is to compare the new (duality based) mesh refinement
criterion with the old ones. These are the results:
@@ -458,11 +459,41 @@ TODO
-Outlook
+
+Conclusions and outlook
-As stated, the program is quite modular, and implementing another test
-case, or another evaluation and dual functional is simple. You are
-encouraged to take the program as a basis for your own experiments,
-and to play a little.
-
\ No newline at end of file
+The results shown here do not demonstrate very clearly the superiority of
+the dual weighted error estimation approach to mesh refinement over other
+mesh refinement criteria, such as the Kelly indicator. This is due to the
+relative simplicity of the application shown here. If you are not yet
+convinced that this approach is indeed superior, you are invited to browse
+through the literature indicated in the introduction, which provides plenty
+of examples where the dual weighted approach reduces the necessary numerical
+work by orders of magnitude, making it the only feasible way to compute
+certain quantities to reasonable accuracy at all.
+
+
+
+Even if you have objections against its use as a mesh refinement criterion,
+consider that an accurate estimate of the error in the quantity one wants to
+compute is of great value in itself, since we can stop a computation as soon
+as we are satisfied with the accuracy. With more traditional approaches, it
+is very difficult to obtain accurate error estimates for arbitrary
+quantities, except perhaps for the error in the energy norm, and we then
+have no guarantee that the computed result satisfies any requirements on its
+accuracy.
+
+
+
+Leaving these mathematical considerations aside, we have tried to write the
+program in a modular way, so that implementing another test case, or another
+evaluation and dual functional, is simple. You are encouraged to take the
+program as a basis for your own experiments, and to play a little.
+
+
+
+
+
--
2.39.5