From: Wolfgang Bangerth
Date: Mon, 7 Jun 2021 16:08:06 +0000 (-0600)
Subject: Minor edits to step-66.
X-Git-Tag: v9.4.0-rc1~1267^2
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=00dafc691f8e4e4f663c1c78386dded93665cb38;p=dealii.git

Minor edits to step-66.
---

diff --git a/examples/step-66/doc/intro.dox b/examples/step-66/doc/intro.dox
index d65f785336..7013ad2dca 100644
--- a/examples/step-66/doc/intro.dox
+++ b/examples/step-66/doc/intro.dox
@@ -103,18 +103,18 @@ identified with a vector $U\in\mathbb{R}^N$ via the representation formula:
 $u_h = \sum_{i=1}^N U_i \varphi_i$. So using this we can give an expression for
 the discrete Jacobian and the residual:
 @f{align*}{
- A_{i,j} = \bigl( F'(u_h^n) \bigr)_{i,j}
+ A_{ij} = \bigl( F'(u_h^n) \bigr)_{ij}
  &=
  \int_\Omega \nabla\varphi_i \cdot \nabla \varphi_j \,\mathrm{d} x
  -
- \int_\Omega \varphi_i \, \exp( u_h ) \varphi_j \,\mathrm{d} x,\\
+ \int_\Omega \varphi_i \, \exp( u_h^n ) \varphi_j \,\mathrm{d} x,\\
  b_{i} = \bigl( F(u_h^n) \bigr)_{i}
  &=
  \int_\Omega \nabla\varphi_i \cdot \nabla u_h^n \,\mathrm{d} x
  -
  \int_\Omega \varphi_i \, \exp( u_h^n ) \,\mathrm{d} x.
 @f}
-Compared to step-15 we could also have formed the Frech{\'e}t derivative of the
+Compared to step-15 we could also have formed the Fréchet derivative of the
 nonlinear function corresponding to the strong formulation of the problem and
 discretized it afterwards. However, in the end we would get the same set of
 discrete equations.
@@ -127,7 +127,9 @@ the system matrix about the solution at the last Newton step.
 In an implementation with a classical assemble_system() function we would
 gather this information from the last Newton step during assembly by the use
 of the member functions FEValuesBase::get_function_values() and
-FEValuesBase::get_function_gradients(). The assemble_system()
+FEValuesBase::get_function_gradients(). This is how step-15, for
+example, does things.
+The assemble_system()
 function would then look like:
 @code
 template
@@ -272,8 +274,8 @@ void JacobianOperator::evaluate_newton_step(
 @endcode
 
-<h3>Triangulation</h3>
-
-As said in step-37 the matrix-free method gets more efficient if we choose a
+<h3>%Triangulation</h3>
+
+As said in step-37, the matrix-free method gets more efficient if we choose a
 higher order finite element space. Since we want to solve the problem on the
 $d$-dimensional unit ball, it would be good to have an appropriate boundary
 approximation to overcome convergence issues. For this reason we use an
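The header of the last hunk references JacobianOperator::evaluate_newton_step(), whose body the diff does not show. The following is a minimal sketch of how such a function can precompute and cache the nonlinearity $\exp(u_h^n)$ at all quadrature points in a matrix-free fashion. It assumes the operator is templated on dim, fe_degree, and the number type as in step-37, derives from MatrixFreeOperators::Base (so that this->data is the underlying MatrixFree object), and owns a hypothetical member nonlinear_values of type Table<2, VectorizedArray<number>>; it is not necessarily the tutorial's verbatim code:
@code
template <int dim, int fe_degree, typename number>
void JacobianOperator<dim, fe_degree, number>::evaluate_newton_step(
  const LinearAlgebra::distributed::Vector<number> &newton_step)
{
  // One FEEvaluation object processes a whole batch of cells at once.
  FEEvaluation<dim, fe_degree, fe_degree + 1, 1, number> phi(*this->data);

  // Resize the (hypothetical) member table that caches exp(u_h^n) at every
  // quadrature point of every cell batch.
  nonlinear_values.reinit(this->data->n_cell_batches(), phi.n_q_points);

  for (unsigned int cell = 0; cell < this->data->n_cell_batches(); ++cell)
    {
      // Read the current Newton iterate on this cell batch and evaluate it
      // at the quadrature points.
      phi.reinit(cell);
      phi.read_dof_values_plain(newton_step);
      phi.evaluate(EvaluationFlags::values);

      // Store exp(u_h^n) so that later operator applications only look it up.
      for (unsigned int q = 0; q < phi.n_q_points; ++q)
        nonlinear_values(cell, q) = std::exp(phi.get_value(q));
    }
}
@endcode
Caching these values once per Newton step means every subsequent matrix-vector product with the Jacobian can read the table instead of re-evaluating the finite element solution.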

diff --git a/examples/step-66/doc/results.dox b/examples/step-66/doc/results.dox
index 89d5f5309d..28a7f95b28 100644
--- a/examples/step-66/doc/results.dox
+++ b/examples/step-66/doc/results.dox
@@ -196,11 +196,12 @@ present code as well as a deeper numerical investigation of the Gelfand
 problem.
 
 <h4>More sophisticated Newton iteration</h4>
 
 Besides a step size controlled version of the Newton iteration as mentioned
-already in step-15, one could also implement a more flexible stopping criterion
+already in step-15 (and actually implemented, with many more bells and
+whistles, in step-77), one could also implement a more flexible stopping criterion
 for the Newton iteration. For example, one could replace the fixed tolerances
 for the residual TOLf and for the Newton update TOLx
 and implement a mixed error control with a given absolute and relative
-tolerance, such that the Newton iteration exists with success as, e.g.,
+tolerance, such that the Newton iteration exits with success as, e.g.,
 @f{align*}{
  \|F(u_h^{n+1})\| \leq \texttt{RelTol} \|u_h^{n+1}\| + \texttt{AbsTol}.
 @f}
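In code, this mixed error control amounts to replacing the fixed-tolerance test at the end of each Newton step. A minimal sketch, assuming helper functions compute_update() and assemble_rhs() that solve the linearized system and reassemble the residual (the helper names and tolerance values here are illustrative, not the tutorial's exact interface):
@code
// Illustrative values for the relative and absolute tolerances.
const double rel_tol = 1e-6;
const double abs_tol = 1e-12;

for (unsigned int newton_step = 0; newton_step < max_newton_steps; ++newton_step)
  {
    compute_update();                 // solve the linearized system for dx
    solution.add(1.0, newton_update); // u^{n+1} = u^n + dx
    assemble_rhs();                   // recompute F(u^{n+1}) into system_rhs

    // Exit with success once ||F(u)|| <= RelTol * ||u|| + AbsTol.
    if (system_rhs.l2_norm() <= rel_tol * solution.l2_norm() + abs_tol)
      break;
  }
@endcode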

@@ -237,13 +238,16 @@ Analogously to step-50 and the mentioned possible extension of step-75, you
 can convince yourself which method is faster.
 
 <h4>Eigenvalue problem</h4>
 
-One can consider the corresponding eigenvalue problem, which is called Bratu
-problem. For example, if we define a fixed eigenvalue $\lambda\in[0,6]$, we can
+One can consider the corresponding eigenvalue problem, which is called the
+Bratu
+problem. For example, if we define a fixed eigenvalue $\lambda\in[0,6]$, we can
 compute the corresponding discrete eigenfunction. You will notice that the
 number of Newton steps will increase with increasing $\lambda$. To reduce the
 number of Newton steps you can use the following trick: start from a certain
 $\lambda$, compute the eigenfunction, increase $\lambda=\lambda +
 \delta_\lambda$, and then use the previous solution as an initial guess for the
-Newton iteration. In the end you can plot the $H^1(\Omega)$-norm over the
+Newton iteration -- this approach is called a "continuation
+method". In the end you can plot the $H^1(\Omega)$-norm over the
 eigenvalue $\lambda \mapsto \|u_h\|_{H^1(\Omega)}$. What do you observe for
 further increasing $\lambda>7$?
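The continuation method added in this hunk can be written as a short outer loop around the Newton solver. A sketch under the assumption of two hypothetical helpers, solve_newton(lambda) (the Newton iteration for a fixed $\lambda$, warm-started from the current solution) and compute_h1_norm():
@code
// Walk the eigenvalue parameter upward in small steps; all names and step
// sizes here are illustrative.
double       lambda       = 0.1;
const double delta_lambda = 0.1;

while (lambda <= 6.0)
  {
    // `solution` still holds the converged result for the previous lambda
    // and therefore serves as the initial guess for this Newton iteration.
    solve_newton(lambda);

    std::cout << "lambda = " << lambda
              << "   H1 norm = " << compute_h1_norm() << std::endl;

    lambda += delta_lambda;
  }
@endcode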