From 41572c9c90b9e2dea3fa13877313ee2e091a21f6 Mon Sep 17 00:00:00 2001
From: Wolfgang Bangerth
Date: Thu, 4 Mar 2021 02:38:45 +0100
Subject: [PATCH] Some more additions to step-7.

---
 examples/step-7/doc/intro.dox   |  22 ++++---
 examples/step-7/doc/results.dox | 106 ++++++++++++++++++++++++++++----
 2 files changed, 108 insertions(+), 20 deletions(-)

diff --git a/examples/step-7/doc/intro.dox b/examples/step-7/doc/intro.dox
index a39af2564c..3641bb7919 100644
--- a/examples/step-7/doc/intro.dox
+++ b/examples/step-7/doc/intro.dox
@@ -23,6 +23,8 @@
 converges to zero with the right order of convergence, this is already a good indication of correctness, although there may be other sources of error persisting which have only a small contribution to the total error or are of higher order. In the context of finite element simulations, this technique
+of picking the solution by choosing appropriate right hand sides and
+boundary conditions is often called the Method of Manufactured Solutions.

 In this example, we will not go into the theories of systematic software
@@ -36,7 +38,8 @@
 the choice of the right quadrature formula is therefore crucial to the accurate evaluation of the error. This holds in particular for the $L_\infty$ norm, where we evaluate the maximal deviation of numerical and exact solution only at the quadrature points; one should then not try to use a quadrature
-rule whose evaluation occurs only at points where super-convergence might occur, such as
+rule whose evaluation occurs only at points where
+[super-convergence](https://en.wikipedia.org/wiki/Superconvergence) might occur, such as
 the Gauss points of the lowest-order Gauss quadrature formula for which the integrals in the assembly of the matrix are correct (e.g., for linear elements, do not use the QGauss(2) quadrature formula). In fact, this is generally good
@@ -50,11 +53,12 @@
 error norms than for the assembly of the linear system. The function VectorTools::integrate_difference() evaluates the desired norm on each cell $K$ of the triangulation and returns a vector which holds these values for each cell. From the local values, we can then obtain the global error. For
-example, if the vector $(e_i)$ contains the local $L_2$ norms, then
+example, if the vector $\mathbf e$ with elements $e_K$ for all cells
+$K$ contains the local $L_2$ norms $\|u-u_h\|_K$, then
 @f[
-  E = \| {\mathbf e} \| = \left( \sum_i e_i^2 \right)^{1/2}
+  E = \| {\mathbf e} \| = \left( \sum_K e_K^2 \right)^{1/2}
 @f]
-is the global $L_2$ error.
+is the global $L_2$ error $E=\|u-u_h\|_\Omega$.

 In the program, we will show how to evaluate and use these quantities, and we will monitor their values under mesh refinement. Of course, we have to choose
@@ -97,13 +101,15 @@
 on the rest $\Gamma_2 = \Gamma \backslash \Gamma_1$. In our particular testcase, we will use $\Gamma_1=\Gamma \cap\{\{x=1\} \cup \{y=1\}\}$.

 (We say that this equation has the "nice sign" because the operator
-$-\Delta + \alpha I$ with the identity $I$ is a positive definite
+$-\Delta + \alpha I$ with the identity $I$ and $\alpha>0$ is a positive definite
 operator; the equation with the "bad sign" is $-\Delta u - \alpha u$ and results from modeling
-time-harmonic processes. The operator is not necessarily positive
-definite, and this leads to all sorts of issues we need not discuss
-here.)
+time-harmonic processes. The operator is not positive
+definite if $\alpha>0$ is large, and this leads to all sorts of issues
+we need not discuss here.
+The operator may also not be invertible --
+i.e., the equation does not have a unique solution -- if $\alpha$
+happens to be one of the eigenvalues of $-\Delta$.)

 Because we want to verify the convergence of our numerical solution $u_h$, we want a setup so that we know the exact solution $u$. This is where

diff --git a/examples/step-7/doc/results.dox b/examples/step-7/doc/results.dox
index 6b806a1dce..7c8caa4829 100644
--- a/examples/step-7/doc/results.dox
+++ b/examples/step-7/doc/results.dox
@@ -183,14 +183,88 @@
 are the quadratic and cubic rates in the $L_2$ norm.

 Finally, the program also generated LaTeX versions of the tables (not shown
-here).
+here) that are written into a file in a way that allows them to be
+copy-pasted into a LaTeX document.
+
+
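As a minimal sketch of how such a file can be produced with deal.II's ConvergenceTable class -- assuming the table object is called `convergence_table` as elsewhere in this program, and with a file name chosen purely for illustration -- one could write:
@code
#include <deal.II/base/convergence_table.h>
#include <fstream>

// Assumption: 'convergence_table' is the ConvergenceTable object that the
// program fills with one row per refinement cycle. The file name below is
// only illustrative.
std::ofstream table_file("convergence.tex");
convergence_table.write_tex(table_file);
@endcode
The same object can also write a plain-text version of the table via TableHandler::write_text().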

+<h3> When is the error "small"? </h3>

+
+What we showed above is how to determine the size of the error
+$\|u-u_h\|$ in a number of different norms. We did this primarily
+because we were interested in testing that our solutions *converge*.
+But from an engineering perspective, the question is often more
+practical: How fine do I have to make my mesh so that the error is
+"small enough"? In other words, if in the table above the $H^1$
+semi-norm has been reduced to `4.121e-03`, is this good enough for me
+to sign the blueprint and declare that our numerical simulation showed
+that the bridge is strong enough?
+
+In practice, we are rarely in this situation because we cannot
+typically compare the numerical solution $u_h$ against the exact
+solution $u$ in situations that matter -- if we knew $u$, we would not
+have to compute $u_h$. But even if we could, the question to ask in
+general is then: `4.121e-03` *what*? The solution will have physical
+units, say kg-times-meter-squared, and we are integrating a function
+whose units are the square of the above over the domain, and then
+taking the square root. So if the domain is two-dimensional, the units
+of $\|u-u_h\|_{L_2}$ are kg-times-meter-cubed. The question is then: Is
+$4.121\times 10^{-3}$ kg-times-meter-cubed small? That depends on what
+you're trying to simulate: If you're an astronomer used to masses
+measured in solar masses and distances in light years, then yes, this
+is a fantastically small number. But if you're doing atomic physics,
+then no: That's not small, and your error is most certainly not
+sufficiently small; you need a finer mesh.
+
+In other words, when we look at these sorts of numbers, we generally
+need to compare them against a "scale". One way to do that is to not
+look at the *absolute* error $\|u-u_h\|$ in whatever norm, but at the
+*relative* error $\|u-u_h\|/\|u\|$. If this ratio is $10^{-5}$, then
+you know that *on average*, the difference between $u$ and $u_h$ is
+0.001 per cent of the size of $u$ -- probably small enough for
+engineering purposes.
+
+How do we compute $\|u\|$? We just need to loop over all cells and the
+quadrature points on these cells, and then sum things up and take the
+square root at the end. But there is a simpler way often used: You can
+call
+@code
+  Vector<double> zero_vector(dof_handler.n_dofs());
+  Vector<float>  norm_per_cell(triangulation.n_active_cells());
+  VectorTools::integrate_difference(dof_handler,
+                                    zero_vector,
+                                    Solution<dim>(),
+                                    norm_per_cell,
+                                    QGauss<dim>(fe->degree + 1),
+                                    VectorTools::L2_norm);
+@endcode
+which computes $\|u-0\|_{L_2}$. Alternatively, if you're particularly
+lazy and don't feel like creating the `zero_vector`, you could use the
+fact that if the mesh is not too coarse, then $\|u\| \approx \|u_h\|$,
+and we can compute $\|u\| \approx \|u_h\|=\|0-u_h\|$ by calling
+@code
+  Vector<float> norm_per_cell(triangulation.n_active_cells());
+  VectorTools::integrate_difference(dof_handler,
+                                    solution,
+                                    Functions::ZeroFunction<dim>(),
+                                    norm_per_cell,
+                                    QGauss<dim>(fe->degree + 1),
+                                    VectorTools::L2_norm);
+@endcode
+In both cases, one then only has to combine the vector of cellwise
+norms into one global norm as we already do in the program, by calling
+@code
+  const double L2_norm =
+    VectorTools::compute_global_error(triangulation,
+                                      norm_per_cell,
+                                      VectorTools::L2_norm);
+@endcode
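To connect this back to the *relative* error discussed above, here is a sketch -- not code taken from step-7 itself, but assembled from the same calls and the program's `dof_handler`, `solution`, `triangulation`, `fe`, and `Solution` objects -- of how one could compute the dimensionless ratio $\|u-u_h\|_{L_2}/\|u\|_{L_2}$:
@code
  // Cell-wise L2 norms of the error u - u_h:
  Vector<float> error_per_cell(triangulation.n_active_cells());
  VectorTools::integrate_difference(dof_handler,
                                    solution,
                                    Solution<dim>(),
                                    error_per_cell,
                                    QGauss<dim>(fe->degree + 1),
                                    VectorTools::L2_norm);
  const double L2_error =
    VectorTools::compute_global_error(triangulation,
                                      error_per_cell,
                                      VectorTools::L2_norm);

  // Cell-wise L2 norms of u_h itself (i.e., of 0 - u_h), used as an
  // approximation of the norm of the exact solution u:
  Vector<float> norm_per_cell(triangulation.n_active_cells());
  VectorTools::integrate_difference(dof_handler,
                                    solution,
                                    Functions::ZeroFunction<dim>(),
                                    norm_per_cell,
                                    QGauss<dim>(fe->degree + 1),
                                    VectorTools::L2_norm);
  const double L2_norm =
    VectorTools::compute_global_error(triangulation,
                                      norm_per_cell,
                                      VectorTools::L2_norm);

  // The dimensionless quantity one would compare against a tolerance:
  const double relative_L2_error = L2_error / L2_norm;
@endcode
Because the units of numerator and denominator cancel, `relative_L2_error` can be compared against a tolerance such as $10^{-5}$ independently of the physical units of $u$.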

 <h3>Possible extensions</h3>

 <h4>Higher Order Elements</h4>

-Go ahead and run the program with higher order elements (Q3, Q4, ...). You
+Go ahead and run the program with higher order elements ($Q_3$, $Q_4$, ...). You
 will notice that assertions in several parts of the code will trigger (for example in the generation of the filename for the data output). You might have to address these, but it should not be very hard to get the program to work!
@@ -202,14 +276,22 @@
 unfair but typical) metric to compare them is to look at the error as a function of the number of unknowns. To see this, create a plot in log-log style with the number of unknowns on the
-x axis and the L2 error on the y axis. You can add reference lines for
+$x$ axis and the $L_2$ error on the $y$ axis. You can add reference lines for
 $h^2=N^{-1}$ and $h^3=N^{-3/2}$ and check that global and adaptive refinement
-follow those.
-
-Note that changing the half width of the peaks influences if adaptive or
-global refinement is more efficient (if the solution is very smooth, local
-refinement does not give any advantage over global refinement). Verify this.
-
-Finally, a more fair comparison would be to plot runtime (switch to release
-mode first!) instead of number of unknowns on the x axis. Picking a better
-linear solver might be appropriate though.
+follow those. If one makes the (not completely unreasonable)
+assumption that, with a good linear solver, the computational effort is
+proportional to the number of unknowns $N$, then it is clear that an
+error reduction of ${\cal O}(N^{-3/2})$ is substantially better than a
+reduction of the form ${\cal O}(N^{-1})$: That is, adaptive
+refinement gives us the desired error level with less computational
+work than if we used global refinement. This is not a particularly
+surprising conclusion, but it's worth checking these sorts of
+assumptions in practice.
+
+Of course, a fairer comparison would be to plot run time (switch to
+release mode first!) instead of the number of unknowns on the $x$ axis.
+If you plot run time against the number of unknowns by timing each
+refinement step (e.g., using the Timer class), you will notice that
+the linear solver is not perfect -- its run time grows faster than
+linearly with the size of the linear system -- and picking a better
+linear solver might be appropriate for this kind of comparison.
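As a sketch of the timing suggested in the last paragraph -- the loop skeleton and the column name below are only illustrative, not part of step-7 -- one could wrap each refinement cycle in a Timer object and record the wall time next to the error columns:
@code
#include <deal.II/base/timer.h>

for (unsigned int cycle = 0; cycle < n_cycles; ++cycle)
  {
    Timer timer; // starts running upon construction

    // ... refine the mesh, set up, assemble, and solve as usual ...

    timer.stop();
    convergence_table.add_value("wall time [s]", timer.wall_time());
  }
@endcode
Plotting the error columns against this wall-time column instead of against the number of unknowns then yields the fairer comparison described above.
--
2.39.5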