$u_h = \sum_{i=1}^N U_i \varphi_i$. So using this we can give an expression for
the discrete Jacobian and the residual:
@f{align*}{
A_{ij} = \bigl( F'(u_h^n) \bigr)_{ij}
&=
\int_\Omega \nabla\varphi_i \cdot \nabla \varphi_j \,\mathrm{d} x
-
\int_\Omega \varphi_i \, \exp( u_h^n ) \varphi_j \,\mathrm{d} x,\\
b_{i} = \bigl( F(u_h^n) \bigr)_{i}
&=
\int_\Omega \nabla\varphi_i \cdot \nabla u_h^n \,\mathrm{d} x
-
\int_\Omega \varphi_i \, \exp( u_h^n ) \,\mathrm{d} x.
@f}
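With the matrix $A$ and the vector $b$ evaluated at the last iterate $u_h^n$,
one Newton step then amounts to solving a linear system for the update $s$ and
adding it to the solution, i.e.,
@f{align*}{
A \, s = -b, \qquad U^{n+1} = U^n + \alpha \, s,
@f}
where $U^n$ denotes the coefficient vector of $u_h^n$ and $\alpha\in(0,1]$ is a
possible step size (damping) parameter as used in step-15.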
Compared to step-15, we could also have formed the Fréchet derivative of the
nonlinear function corresponding to the strong formulation of the problem and
discretized it afterwards. However, in the end we would get the same set of
discrete equations.
In an implementation with a classical <code>assemble_system()</code> function we
would gather the values and gradients of the last Newton step during assembly by the
use of the member functions FEValuesBase::get_function_values() and
FEValuesBase::get_function_gradients(). This is how step-15, for
example, does things.
The <code>assemble_system()</code>
function would then look like the following sketch, in which the class name
<code>GelfandProblem</code> and the member variables (<code>fe</code>,
<code>dof_handler</code>, <code>constraints</code>, <code>solution</code>,
<code>system_matrix</code>, <code>system_rhs</code>) are assumed names rather
than a verbatim excerpt:
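@code
template <int dim>
void GelfandProblem<dim>::assemble_system()
{
  system_matrix = 0;
  system_rhs    = 0;

  const QGauss<dim> quadrature_formula(fe.degree + 1);
  FEValues<dim>     fe_values(fe,
                              quadrature_formula,
                              update_values | update_gradients |
                                update_JxW_values);

  const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
  const unsigned int n_q_points    = fe_values.n_quadrature_points;

  FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
  Vector<double>     cell_rhs(dofs_per_cell);

  std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);

  // Values and gradients of the last Newton step u_h^n at the quadrature
  // points of the current cell, queried via the FEValuesBase interface.
  std::vector<double>         newton_step_values(n_q_points);
  std::vector<Tensor<1, dim>> newton_step_gradients(n_q_points);

  for (const auto &cell : dof_handler.active_cell_iterators())
    {
      cell_matrix = 0.0;
      cell_rhs    = 0.0;

      fe_values.reinit(cell);

      fe_values.get_function_values(solution, newton_step_values);
      fe_values.get_function_gradients(solution, newton_step_gradients);

      for (unsigned int q = 0; q < n_q_points; ++q)
        {
          const double nonlinearity = std::exp(newton_step_values[q]);
          const double dx           = fe_values.JxW(q);

          for (unsigned int i = 0; i < dofs_per_cell; ++i)
            {
              const double         phi_i      = fe_values.shape_value(i, q);
              const Tensor<1, dim> grad_phi_i = fe_values.shape_grad(i, q);

              for (unsigned int j = 0; j < dofs_per_cell; ++j)
                {
                  const double         phi_j      = fe_values.shape_value(j, q);
                  const Tensor<1, dim> grad_phi_j = fe_values.shape_grad(j, q);

                  // A_ij = (grad phi_i, grad phi_j)
                  //        - (phi_i, exp(u_h^n) phi_j)
                  cell_matrix(i, j) +=
                    (grad_phi_i * grad_phi_j - phi_i * nonlinearity * phi_j) *
                    dx;
                }

              // b_i = (grad phi_i, grad u_h^n) - (phi_i, exp(u_h^n))
              cell_rhs(i) += (grad_phi_i * newton_step_gradients[q] -
                              phi_i * nonlinearity) *
                             dx;
            }
        }

      cell->get_dof_indices(local_dof_indices);
      constraints.distribute_local_to_global(
        cell_matrix, cell_rhs, local_dof_indices, system_matrix, system_rhs);
    }
}
@endcode
In the matrix-free approach the Jacobian is never assembled into a sparse
matrix at all; only its action on a vector is evaluated. However, the values of
the last Newton step are still needed at the quadrature points in exactly the
same way.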
<h3>%Triangulation</h3>
As said in step-37, the matrix-free method gets more efficient if we choose a
higher order finite element space. Since we want to solve the problem on the
$d$-dimensional unit ball, it would be good to have an appropriate boundary
approximation to overcome convergence issues. For this reason we use an
appropriate higher-order mapping that resolves the curved boundary of the ball.
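A minimal sketch of such a setup, assuming the finite element object
<code>fe</code> from above and with an illustrative refinement level, could
look like this:
@code
// Sketch: mesh the d-dimensional unit ball. GridGenerator::hyper_ball()
// attaches a SphericalManifold to the boundary, so mesh refinement and
// higher-order mappings can follow the curved geometry.
Triangulation<dim> triangulation;
GridGenerator::hyper_ball(triangulation);
triangulation.refine_global(3);

// A higher-order mapping improves the boundary approximation; matching the
// mapping degree to the finite element degree gives an isoparametric setup.
const MappingQ<dim> mapping(fe.degree);
@endcode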
<h4>More sophisticated Newton iteration</h4>
Besides a step size controlled version of the Newton iteration as mentioned
already in step-15 (and actually implemented, with many more bells and
whistles, in step-77), one could also implement a more flexible stopping criterion
for the Newton iteration. For example, one could replace the fixed tolerances
for the residual <code>TOLf</code> and for the Newton update <code>TOLx</code>
by a mixed error control with a given absolute and relative
tolerance, such that the Newton iteration exits with success if, e.g.,
@f{align*}{
\|F(u_h^{n+1})\| \leq \texttt{RelTol} \|u_h^{n+1}\| + \texttt{AbsTol}.
@f}
Try both stopping criteria and convince yourself which method is faster.
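Inside the Newton loop, such a mixed error control could be realized by a test
like in the following sketch, where the vectors <code>residual</code> and
<code>solution</code> and the parameters <code>RelTol</code> and
<code>AbsTol</code> are assumed to be defined:
@code
// Sketch of a mixed absolute/relative stopping test for the Newton
// iteration: exit as soon as the residual is small relative to the current
// solution, with AbsTol acting as an absolute floor. All names here are
// assumptions of this sketch.
const double residual_norm = residual.l2_norm();
const double solution_norm = solution.l2_norm();

if (residual_norm <= RelTol * solution_norm + AbsTol)
  break; // success: the criterion above is fulfilled
@endcode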
<h4>Eigenvalue problem</h4>
One can consider the corresponding eigenvalue problem, which is called the
<a
href="https://en.wikipedia.org/wiki/Liouville%E2%80%93Bratu%E2%80%93Gelfand_equation">Bratu
problem</a>. For example, if we define a fixed eigenvalue $\lambda\in[0,6]$, we can
compute the corresponding discrete eigenfunction. You will notice that the
number of Newton steps will increase with increasing $\lambda$. To reduce the
number of Newton steps you can use the following trick: start from a certain
$\lambda$, compute the eigenfunction, increase $\lambda=\lambda +
\delta_\lambda$, and then use the previous solution as an initial guess for the
Newton iteration -- this approach is called a "continuation
method". In the end you can plot the $H^1(\Omega)$-norm over the
eigenvalue $\lambda \mapsto \|u_h\|_{H^1(\Omega)}$. What do you observe for
further increasing $\lambda>7$?
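Written out, the Bratu problem asks for pairs $(\lambda, u)$ satisfying
$-\Delta u = \lambda \exp(u)$, so the parameter enters the nonlinearity
multiplicatively. The continuation idea described above then becomes a simple
loop; the names <code>set_lambda()</code>, <code>solve()</code>, and
<code>compute_H1_norm()</code> are hypothetical here:
@code
// Sketch of a continuation method: slowly increase lambda and warm-start
// each Newton iteration with the solution of the previous parameter value.
// GelfandProblem, set_lambda(), solve(), and compute_H1_norm() are
// hypothetical names for this sketch.
GelfandProblem<dim> problem;

const double delta_lambda = 0.1;
for (double lambda = 0.0; lambda <= 6.0; lambda += delta_lambda)
  {
    problem.set_lambda(lambda); // fix the eigenvalue parameter
    problem.solve();            // Newton iteration, starting from the
                                // solution for the previous lambda
    std::cout << "lambda = " << lambda
              << ", H1 norm = " << problem.compute_H1_norm() << std::endl;
  }
@endcode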