320 3.1415926535897576 3.5527e-14 7.94
1280 3.1415926535897896 3.5527e-15 3.32
5120 3.1415926535897940 8.8818e-16 2.00
- @endcode
+@endcode
@note Once the error reaches a level on the
order of $10^{-13}$ to $10^{-15}$, it is essentially dominated by round-off
errors incurred in internal computations. Since these internal details
change, the precise values
and errors change from release to release at these round-off levels,
though the overall order of errors should of course remain the same.
+ See also the comment below in the section on
+ <a href="#extensions">Possibilities for extensions</a> about how to compute
+ these results more accurately.
One of the immediate observations from the output above is that in all cases
the values converge quickly to the true value of
publication: A. Bonito, A. Demlow, and J. Owen: "A priori error
estimates for finite element approximations to eigenvalues and
eigenfunctions of the Laplace-Beltrami operator", submitted, 2018.)
+
+
+
+<a name="extensions"></a>
+<h3>Possibilities for extensions</h3>
+
+As the table of numbers copied above from the output of the program shows,
+it is not very difficult to compute the value of $\pi$ to 13 or 15 digits. But
+the output also shows that once we approach the level of accuracy with which
+`double` precision numbers store information (namely, with roughly 16 digits
+of accuracy), we no longer see the expected convergence order and the error
+no longer decreases with mesh refinement as anticipated. This is because both
+in this code and in the many computations that happen inside deal.II itself,
+each floating point operation incurs a relative error on the order of
+$10^{-16}$; adding up many such errors then results in a total error that may
+be on the order of $10^{-14}$, which dominates the discretization error after
+a number of refinement steps and consequently destroys the convergence rate.
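+
+To make this effect concrete, here is a small, self-contained C++ snippet
+(not part of the tutorial program, and with arbitrarily chosen numbers) that
+adds up $N=10^6$ copies of $\pi/N$ and prints how far the result is from
+$\pi$:
+@code
+#include <cstdio>
+
+int main()
+{
+  const double       pi   = 3.14159265358979323846;
+  const unsigned int N    = 1000000;
+  const double       term = pi / N;
+
+  double sum = 0;
+  for (unsigned int i = 0; i < N; ++i)
+    sum += term; // each addition is rounded to roughly 16 significant digits
+
+  // The accumulated difference is typically much larger than the roughly
+  // 1e-16 relative accuracy of any single operation.
+  std::printf("error = %.3e\n", sum - pi);
+}
+@endcode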
+
+The question is whether one can do anything about this. One thought is to
+use a higher-precision data type. For example, one could think of declaring
+both the `area` and `perimeter` variables in `compute_pi_by_area()` and
+`compute_pi_by_perimeter()` with data type `long double`. `long double`
+is a data type that is not well specified in the C++ standard but at least
+on Intel processors has around 19, instead of around 16, digits of accuracy.
+If we were to do that (a sketch of this change is shown after the tables
+below), we would get results that differ from the ones shown above. However,
+perhaps counter-intuitively, they are not uniformly better.
+For example, when computing $\pi$ by the area, at the time of writing
+these sentences we get these values with `double` precision for degree 4:
+@code
+  cells eval.pi error
+  5 3.1415871927401144 5.4608e-06 -
+ 20 3.1415926314742491 2.2116e-08 7.95
+ 80 3.1415926535026268 8.7166e-11 7.99
+ 320 3.1415926535894005 3.9257e-13 7.79
+ 1280 3.1415926535899774 1.8430e-13 1.09
+ 5120 3.1415926535897669 2.6201e-14 2.81
+@endcode
+On the other hand, the results are as follows when using `long double`:
+@code
+ cells eval.pi error
+ 5 3.1415871927401136 5.4608e-06 -
+ 20 3.1415926314742446 2.2116e-08 7.95
+ 80 3.1415926535026215 8.7172e-11 7.99
+ 320 3.1415926535894516 3.4157e-13 8.00
+ 1280 3.1415926535897918 1.5339e-15 7.80
+ 5120 3.1415926535897927 5.2649e-16 1.54
+@endcode
+Indeed, here we get results that are approximately 50 times as accurate.
+On the other hand, when computing $\pi$ by the perimeter, we get this with
+`double` precision:
+@code
+  cells eval.pi error
+  5 3.1415921029432572 5.5065e-07 -
+ 20 3.1415926513737582 2.2160e-09 7.96
+ 80 3.1415926535810699 8.7232e-12 7.99
+ 320 3.1415926535897576 3.5527e-14 7.94
+ 1280 3.1415926535897896 3.5527e-15 3.32
+ 5120 3.1415926535897940 8.8818e-16 2.00
+@endcode
+Whereas we get the following with `long double`:
+@code
+  cells eval.pi error
+  5 3.1415921029432572 5.5065e-07 -
+ 20 3.1415926513737595 2.2160e-09 7.96
+ 80 3.1415926535810703 8.7230e-12 7.99
+ 320 3.1415926535897576 3.5705e-14 7.93
+ 1280 3.1415926535897918 1.3785e-15 4.70
+ 5120 3.1415926535897944 1.3798e-15 -0.00
+@endcode
+Here, using `double` precision is more accurate by about a factor of
+two. (Of course, in all cases, we have computed $\pi$ with more
+accuracy than any engineer would ever want to know.)
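+
+For reference, the change discussed above amounts to nothing more than
+accumulating into a `long double` variable. The following is a minimal sketch
+of what this might look like for the area computation; the loop structure and
+variable names are only meant to illustrate the kind of code found in
+`compute_pi_by_area()`, and the essential point is solely the type of the
+`area` variable:
+@code
+  long double area = 0;
+
+  for (const auto &cell : dof_handler.active_cell_iterators())
+    {
+      fe_values.reinit(cell);
+
+      // FEValues::JxW() returns a `double`; all we can change here is the
+      // precision with which these contributions are accumulated.
+      for (unsigned int q = 0; q < fe_values.n_quadrature_points; ++q)
+        area += static_cast<long double>(fe_values.JxW(q));
+    }
+@endcode
+The analogous change in `compute_pi_by_perimeter()` would accumulate the
+FEFaceValues::JxW() contributions into a `long double` `perimeter` variable.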
+
+What explains this unpredictability? In general, round-off errors can
+be thought of as random, and add up in ways that are not worth thinking
+too much about; we should therefore always treat any accuracy beyond, say,
+thirteen digits as suspect. Thus, it is probably not worth spending
+too much time on wondering why we get different winners and losers when
+exchanging the data type from `double` to `long double`. The accuracy of the
+results is also largely not determined by the precision of the data type
+in which we accumulate each cell's (or face's) contributions, but by the
+accuracy of what deal.II gives us via FEValues::JxW() and FEFaceValues::JxW(),
+which always return `double` precision values and which we cannot directly
+affect.
+
+But there are cases where one can do something about the precision, and it
+is worth at least mentioning the name of the best-known algorithm in
+this area. Specifically, when we add contributions into the `area` and
+`perimeter` variables, we are adding together many *positive* numbers. In
+general, the round-off errors associated with each of these numbers are
+random, and if we add up contributions of substantially different sizes,
+then the result will likely be dominated by the error in the largest
+contributions. One can reduce this effect by adding up the numbers sorted by
+their size or, better, by keeping track of the digits lost in each addition
+and compensating for them in subsequent additions; the latter may result in
+marginally more accurate end results and is typically called
+<a href="https://en.wikipedia.org/wiki/Kahan_summation_algorithm">Kahan's summation algorithm</a>.
+While one could play with it in the current context, it is likely not going
+to improve the accuracy in ways that will truly matter.
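+
+For completeness, the following is a minimal, self-contained sketch of Kahan
+(compensated) summation, independent of deal.II; the function name and the
+use of a `std::vector` are merely illustrative choices:
+@code
+#include <vector>
+
+double kahan_sum(const std::vector<double> &contributions)
+{
+  double sum          = 0;
+  double compensation = 0; // rounding error of the previous addition
+
+  for (const double value : contributions)
+    {
+      const double y = value - compensation; // apply the stored correction
+      const double t = sum + y;              // low-order digits of y may be lost here
+      compensation   = (t - sum) - y;        // recover the error of that addition
+      sum            = t;
+    }
+
+  return sum;
+}
+@endcode
+In the present program, one would apply this idea to the quadrature point
+contributions accumulated into `area` and `perimeter`; as stated above,
+though, the gain is unlikely to matter here.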