From: bangerth
Date: Sun, 22 Jun 2014 11:27:56 +0000 (+0000)
Subject: Start to look over step-52. Adjust step-26 accordingly.
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=02d99a2a58454ad764c0c0fa379c95f9e8e1c2b5;p=dealii-svn.git

Start to look over step-52. Adjust step-26 accordingly.

git-svn-id: https://svn.dealii.org/trunk@33071 0785d39b-7218-0410-832d-ea1e28bc413d
---

diff --git a/deal.II/examples/step-26/doc/intro.dox b/deal.II/examples/step-26/doc/intro.dox
index 7aa1ab6697..11e9fe0e42 100644
--- a/deal.II/examples/step-26/doc/intro.dox
+++ b/deal.II/examples/step-26/doc/intro.dox
@@ -47,7 +47,8 @@
Here, $k_n=t_n-t_{n-1}$ is the time step size. The theta-scheme generalizes
the explicit Euler ($\theta=0$), implicit Euler ($\theta=1$) and
Crank-Nicolson ($\theta=\frac 12$) time discretizations. Since the latter has
the highest convergence order, we will choose $\theta=\frac 12$ in the program
-below, but make it so that playing with this parameter remains simple.
+below, but make it so that playing with this parameter remains simple. (If you
+are interested in playing with higher order methods, take a look at step-52.)

Given this time discretization, space discretization happens as it always
does, by multiplying with test functions, integrating by parts, and then

diff --git a/deal.II/examples/step-26/doc/results.dox b/deal.II/examples/step-26/doc/results.dox
index 798cc8f298..58a105902d 100644
--- a/deal.II/examples/step-26/doc/results.dox
+++ b/deal.II/examples/step-26/doc/results.dox
@@ -83,6 +83,15 @@
for example, in the PhD thesis of a former principal developer of deal.II,
Ralf Hartmann, published by the University of Heidelberg, Germany, in 2002.

+

+<h4>Better time stepping methods</h4>

+We here use one of the simpler time stepping methods, namely the second order
+in time Crank-Nicolson method. However, more accurate methods such as
+Runge-Kutta methods are available and should be used as they do not represent
+much additional effort. It is not difficult to implement this for the current
+program, but a more systematic treatment is also given in step-52.
+
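To give a sense of how little extra effort this is, the following sketch shows what one time step of the present program might look like if, for instance, the classical fourth order Runge-Kutta method from the deal.II TimeStepping namespace were used. This is only a sketch: `evaluate_rhs` is a hypothetical helper that would return $M^{-1}(-AU + F(t))$, i.e., the spatially discretized right hand side of the heat equation, and `time`, `time_step`, and `solution` are assumed to be the corresponding variables of this program:

@code
#include <deal.II/base/time_stepping.h>

// ...then, inside the time loop, instead of assembling and solving the
// theta-scheme system:
TimeStepping::ExplicitRungeKutta<Vector<double>> runge_kutta(
  TimeStepping::RK_CLASSIC_FOURTH_ORDER);

time = runge_kutta.evolve_one_time_step(
  [&](const double t, const Vector<double> &y) {
    return evaluate_rhs(t, y);   // hypothetical helper: M^{-1} (-A y + F(t))
  },
  time,
  time_step,
  solution);
@endcode

The call looks essentially the same for the implicit and embedded variants discussed in step-52; only the class that is constructed and, for implicit methods, an additional callback change.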

<h4>Better refinement criteria</h4>

If you look at the meshes in the movie above, it is clear that they are not

diff --git a/deal.II/examples/step-52/doc/intro.dox b/deal.II/examples/step-52/doc/intro.dox
index 926c81cdb4..d552ba90a8 100644
--- a/deal.II/examples/step-52/doc/intro.dox
+++ b/deal.II/examples/step-52/doc/intro.dox
@@ -12,46 +12,63 @@ problem.
In this example, we solve the one-group time-dependent diffusion
approximation of the neutron transport equation (see step-28 for the
-time-independent multigroup diffusion). We assume that the medium is not
+time-independent multigroup diffusion). This is a model for how neutrons move
+around highly scattering media, and consequently it is a variant of the
+time-dependent diffusion equation -- which is just a different name for the
+heat equation discussed in step-26, plus some extra terms.
+We assume that the medium is not
fissible and therefore, the neutron flux satisfies the following equation:
@f{eqnarray*}
-\frac{1}{v}\frac{\partial \phi(x,t)}{\partial t} = \nabla D(x) \nabla \phi(x,t)
+\frac{1}{v}\frac{\partial \phi(x,t)}{\partial t} = \nabla \cdot D(x) \nabla \phi(x,t)
- \Sigma_a(x) \phi(x,t) + S(x,t)
@f}
augmented by appropriate boundary conditions. Here, $v$ is the velocity of
-neutrons (for simplicity we assume it is equal to 1), $D$ is the diffusion coefficient,
+neutrons (for simplicity we assume it is equal to 1 which can be achieved by
+simply scaling the time variable), $D$ is the diffusion coefficient,
$\Sigma_a$ is the absorption cross section, and $S$ is a source. Because we are
only interested in the time dependence, we assume that $D$ and $\Sigma_a$ are
-constant. We are looking for a solution on a square domain $[0,b]\times[0,b]$ of
-the form:
+constant.
+
+Since this program only intends to demonstrate how to use advanced time
+stepping algorithms, we will only look for the solutions of relatively simple
+problems. Specifically, we are looking for a solution on a square domain
+$[0,b]\times[0,b]$ of the form
@f{eqnarray*}
\phi(x,t) = A\sin(\omega t)(bx-x^2).
@f}
-By using quadratic finite elements, all the error will be due to the time discretization. We
+By using quadratic finite elements, we can represent this function exactly at
+any particular time, and all the error will be due to the time discretization. We
impose the following boundary conditions: homogeneous Dirichlet for $x=0$ and
-$x=b$ and homogeneous Neumann conditions for $y=0$ and $y=b$. The source is
-given by:
+$x=b$ and homogeneous Neumann conditions for $y=0$ and $y=b$. We choose the
+source term so that the corresponding solution is
+in fact of the form stated above:
@f{eqnarray*}
S=A\left(\frac{1}{v}\omega \cos(\omega t)(bx -x^2) + \sin(\omega t)
\left(\Sigma_a (bx-x^2)+2D\right) \right).
@f}
Because the solution is a sine, we know that $\phi\left(x,\pi\right) = 0$.
-Therefore, the error at this time is simply the norm of the numerical solution.
+Therefore, the error at time $t=\pi$ is simply the norm of the numerical
+solution and is particularly easily evaluated.
+
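For reference, the source above is not arbitrary: it results from substituting the ansatz $\phi(x,t) = A\sin(\omega t)(bx-x^2)$ into the equation and solving for $S$, using that $-\nabla \cdot D \nabla (bx-x^2) = 2D$ for constant $D$:
@f{eqnarray*}
S = \frac{1}{v}\frac{\partial \phi}{\partial t}
    - \nabla \cdot D \nabla \phi + \Sigma_a \phi
  = A\left(\frac{1}{v}\omega \cos(\omega t)(bx-x^2)
    + \sin(\omega t)\left(\Sigma_a (bx-x^2)+2D\right)\right).
@f}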

<h3>Runge-Kutta</h3>

The Runge-Kutta methods implemented in deal.II assume that the equation to be
solved can be written as:
@f{eqnarray*}
-\frac{dy}{dt} = f(t,y).
+\frac{dy}{dt} = g(t,y).
@f}
-When using finite elements, the previous equation becomes:
+When using finite elements, discretized time derivatives always result in the
+presence of a mass matrix. If the solution vector $y(t)$ is given by the vector
+of nodal coefficients $U(t)$, then the spatially discretized equations always
+have the form
@f{eqnarray*}
-M\frac{dy}{dt} = f(t,y),
+M\frac{dU}{dt} = f(t,U),
@f}
-where $M$ is the mass matrix. Therefore, we have:
+where $M$ is the mass matrix. In other words, this fits the scheme above if we
+write
@f{eqnarray*}
-\frac{dy}{dt} = M^{-1}f(t,y).
+\frac{dy}{dt} = g(t,y) = M^{-1}f(t,y).
@f}
Runge-Kutta methods can be written as:
@f{eqnarray*}

@@ -62,7 +79,12 @@ where
k_i = h M^{-1} f\left(t_n+c_ih,y_n+\sum_{j=1}^sa_{ij}k_j\right)
@f}
where $a_{ij}$, $b_i$, and $c_i$ are known coefficients and $h$ is the time step
-used. At the time of the writing of this tutorial, the methods implemented in
+used. Different time stepping methods of the Runge-Kutta class differ in the
+number of stages $s$ and the values they use for the coefficients $a_{ij}$,
+$b_i$, and $c_i$ but are otherwise easy to implement since one can look up
+tabulated values for these coefficients.
+
+At the time of the writing of this tutorial, the methods implemented in
deal.II can be divided into three categories:
  1. explicit Runge-Kutta
  2. embedded Runge-Kutta
  3. implicit Runge-Kutta
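To make these formulas concrete before going through the three categories, consider the simplest member of the family: the forward (explicit) Euler method corresponds to a single stage, $s=1$, with coefficients $a_{11}=0$, $b_1=1$, $c_1=0$, so that one step reduces to
@f{eqnarray*}
k_1 = h M^{-1} f(t_n, y_n), \qquad y_{n+1} = y_n + k_1,
@f}
which is exactly the explicit Euler scheme already mentioned in step-26 (the case $\theta=0$ there).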

-<h4>Explicit Runge-Kutta</h4>

-These methods that include forward Euler, third order Runge-Kutta, and
-fourth order Runge-Kutta, require a function to evaluate $M^{-1}f(t,y)$. These
-methods become unstable when the time step chosen is too large.
+

+<h4>Explicit Runge-Kutta</h4>

+These methods, which include forward Euler, third order Runge-Kutta, and fourth
+order Runge-Kutta, only require a function to evaluate $M^{-1}f(t,y)$ but not
+(as implicit methods do) to solve an equation for $y$ that involves
+$f(t,y)$. These methods become unstable when the time step chosen is too
+large.
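In the TimeStepping namespace these methods are, at the time of writing, selected through an enumeration passed to the constructor of the time stepping object. The following lines are only a sketch of how one would pick among the three methods just mentioned:

@code
// Each constructor argument selects the Butcher tableau of the named method:
TimeStepping::ExplicitRungeKutta<Vector<double>> euler(TimeStepping::FORWARD_EULER);
TimeStepping::ExplicitRungeKutta<Vector<double>> rk3(TimeStepping::RK_THIRD_ORDER);
TimeStepping::ExplicitRungeKutta<Vector<double>> rk4(TimeStepping::RK_CLASSIC_FOURTH_ORDER);
@endcode

For the diffusion problem at hand, "too large" means in practice that the time step has to satisfy a restriction of the form $\Delta t \lesssim C\,(\Delta x)^2/D$, where $\Delta x$ is the mesh size; this is what makes explicit methods expensive on fine meshes.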

<h4>Embedded Runge-Kutta</h4>

These methods include Heun-Euler, Bogacki-Shampine, Dormand-Prince (ode45 in
Matlab), Fehlberg, and Cash-Karp. These methods use a low order method to
-estimate the error and decide if the time step needs to be refined or coarsen.
+estimate the error and decide if the time step needs to be refined or coarsened.
Only embedded explicit methods have been implemented at the time of the writing.
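As a sketch of how this looks in practice (using the class and enumeration names of the TimeStepping namespace at the time of writing, and assuming `f` is the same $M^{-1}f(t,y)$ callback as for the explicit methods):

@code
// Dormand-Prince, i.e. the method behind Matlab's ode45; the constructor
// also accepts parameters that control how the step size may be refined
// and coarsened, for which we rely on the defaults here:
TimeStepping::EmbeddedExplicitRungeKutta<Vector<double>> dopri(TimeStepping::DOPRI);

// Take one step; internally, the higher and lower order solutions are
// compared to estimate the error:
time = dopri.evolve_one_time_step(f, time, time_step, solution);

// Use the step size the method suggests for the next step:
time_step = dopri.get_status().delta_t_guess;
@endcode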

<h4>Implicit Runge-Kutta</h4>

These methods include backward Euler, implicit midpoint, Crank-Nicolson, and a
two-stage SDIRK. These methods require evaluating $M^{-1}f(t,y)$ and
$\left(I-\Delta t M^{-1} \frac{\partial f}{\partial y}\right)^{-1}$ or equivalently
$\left(M - \Delta t \frac{\partial f}{\partial y}\right)^{-1} M$.
-These methods are always stable.
+This is necessary in order to solve for the solution of (possibly nonlinear)
+systems in every time step, where each Newton step requires the solution of an
+equation of the form
+@f{align*}
+  \left(M - \Delta t \frac{\partial f}{\partial y}\right) \Delta y
+  = -M h(t,y)
+@f}
+for some (given) $h(t,y)$. These methods are always stable.
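In terms of the TimeStepping classes, this means providing a second callback in addition to the one for $M^{-1}f(t,y)$, namely one that applies $\left(I-\tau M^{-1}\frac{\partial f}{\partial y}\right)^{-1}$ to a given vector. The following is only a sketch; it assumes that `mass_matrix` and `jacobian` (a matrix representing $\frac{\partial f}{\partial y}$) have been assembled on the same `sparsity_pattern`, and that `f` is the right hand side callback from before:

@code
TimeStepping::ImplicitRungeKutta<Vector<double>> sdirk(TimeStepping::SDIRK_TWO_STAGES);

auto id_minus_tau_J_inverse =
  [&](const double /*t*/, const double tau, const Vector<double> &y) {
    // Form M - tau*J (in practice one would cache this matrix and its
    // factorization rather than rebuilding them in every call):
    SparseMatrix<double> system(sparsity_pattern);
    system.copy_from(mass_matrix);
    system.add(-tau, jacobian);

    // Return (M - tau*J)^{-1} M y, which equals (I - tau*M^{-1}*J)^{-1} y:
    Vector<double> result(y.size());
    mass_matrix.vmult(result, y);
    SparseDirectUMFPACK direct;
    direct.initialize(system);
    direct.solve(result);
    return result;
  };

time = sdirk.evolve_one_time_step(f, id_minus_tau_J_inverse,
                                  time, time_step, solution);
@endcode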

-<h3>Weak form</h3>

-To use the Runge-Kutta methods, we need to be able to evaluate:
+

+<h3>Spatially discrete formulation</h3>

+
+By expanding the solution as always using shape functions $\psi_j$ and writing
+@f{eqnarray*}
+\phi_h(x,t) = \sum_j U_j(t) \psi_j(x),
+@f}
+we can then get the spatially discretized version of the diffusion equation as
+@f{eqnarray*}
+  M \frac{dU(t)}{dt}
+  = -{\cal D} U(t) - {\cal A} U(t) + {\cal S}(t)
+@f}
+where
@f{eqnarray*}
-f = \oint D b_i \frac{\partial b_j}{\partial n} d\boldsymbol{r} - \int D \nabla
-b_i \nabla b_j \phi_j d\boldsymbol{r} -\int \Sigma_a b_i b_j \phi_j
-d\boldsymbol{r} + \int b_j S d\boldsymbol{r}
+  M_{ij} &= (\psi_i,\psi_j), \\
+  {\cal D}_{ij} &= (D\nabla\psi_i,\nabla\psi_j)_\Omega, \\
+  {\cal A}_{ij} &= (\Sigma_a\psi_i,\psi_j)_\Omega, \\
+  {\cal S}_{i}(t) &= (\psi_i,S(x,t))_\Omega.
@f}
-and $\frac{\partial f}{\partial y}$. Because $f$ is linear in $y$ (or $\phi$ in
-this case) $\frac{\partial f}{\partial y} y = f$.
+Boundary terms are not necessary due to the chosen boundary conditions.
+To use the Runge-Kutta methods, we can then recast this as follows:
+@f{eqnarray*}
+f(y) = -{\cal D}y - {\cal A}y + {\cal S}
+@f}
+In the code, we will need to be able to evaluate this function $f(U)$ along
+with its derivative. However, in view of the linearity of $f$, we will be able
+to use $\frac{\partial f}{\partial y} y = f(y)$.
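In the code, this function will look roughly like the following sketch. The names are placeholders: `system_matrix` is assumed to store $-{\cal D}-{\cal A}$ (both matrices are constant in time and can be assembled once), `inverse_mass_matrix` could for example be a SparseDirectUMFPACK object initialized with $M$, and `assemble_source` is a hypothetical helper that builds ${\cal S}(t)$:

@code
// Compute f(U) = M^{-1} ( -D U - A U + S(t) ) for a given time and a
// given vector of nodal coefficients U:
Vector<double> evaluate_diffusion(const double time, const Vector<double> &U)
{
  Vector<double> tmp(U.size());
  system_matrix.vmult(tmp, U);             // tmp = -(D + A) U
  tmp += assemble_source(time);            // tmp = -(D + A) U + S(t)

  Vector<double> value(U.size());
  inverse_mass_matrix.vmult(value, tmp);   // value = M^{-1} tmp
  return value;
}
@endcode

This is exactly the kind of function that is handed to the Runge-Kutta classes described above.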

<h3>Remarks</h3>

To simplify the problem, the domain is two dimensional and the mesh is
uniform (there is no need to adapt the mesh since we use quadratic finite
elements and the exact solution is quadratic). Going from a two dimensional
domain to a three dimensional domain is not very challenging. However, if the
-mesh must be adapted, it is important to remember to:
+mesh must be adapted, it is important to remember to do the following:
-  1. project the solution to the new mesh when the mesh is changed. The mesh
-used should be the same at the beginning and at the end of the time step.
+  2. Project the solution to the new mesh when the mesh is changed. The mesh
+     used should be the same at the beginning and at the end of the time step.
  3. update the mass matrix and its inverse.
+The techniques to do all of this are available in step-26.
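For the projection step and the subsequent rebuilding of the mass matrix, the same mechanism as in step-26 can be used. A minimal sketch with the SolutionTransfer class, assuming the usual `triangulation`, `dof_handler`, `fe`, and `solution` objects, could look like this:

@code
#include <deal.II/numerics/solution_transfer.h>

SolutionTransfer<dim> solution_transfer(dof_handler);

// After refinement flags have been set:
triangulation.prepare_coarsening_and_refinement();
solution_transfer.prepare_for_coarsening_and_refinement(solution);
triangulation.execute_coarsening_and_refinement();
dof_handler.distribute_dofs(fe);

// Interpolate the old solution onto the new mesh:
Vector<double> new_solution(dof_handler.n_dofs());
solution_transfer.interpolate(solution, new_solution);
solution = new_solution;

// The mass matrix (and whatever represents its inverse) now has to be
// reassembled on the new mesh before the next time step is taken.
@endcode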