From: bangerth
Date: Tue, 14 Nov 2006 23:58:03 +0000 (+0000)
Subject: Add Ivan Christov's many suggestions
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=901dd9a5eaaf8ca85d9068fbbf19ee4ae395657e;p=dealii-svn.git

Add Ivan Christov's many suggestions

git-svn-id: https://svn.dealii.org/trunk@14190 0785d39b-7218-0410-832d-ea1e28bc413d
---

diff --git a/deal.II/examples/step-25/doc/intro.dox b/deal.II/examples/step-25/doc/intro.dox
index c7c692f385..48a5da4e6a 100644
--- a/deal.II/examples/step-25/doc/intro.dox
+++ b/deal.II/examples/step-25/doc/intro.dox
@@ -32,15 +32,21 @@ Nonlinear Waves (Chapter 17, Sections 10-13). <br>

Statement of the problem

The sine-Gordon initial-boundary-value problem (IBVP) we wish to solve
-is the following nonlinear equation similar to the wave equation we
-discussed in @ref step_23 "step-23" and @ref step_24 "step-24":
+consists of the following equations:
\f{eqnarray*}
  u_{tt}-\Delta u &=& -\sin(u) \quad\mbox{for}\quad (x,t) \in \Omega \times (t_0,t_f],\\
  {\mathbf n} \cdot \nabla u &=& 0 \quad\mbox{for}\quad (x,t) \in \partial\Omega
-  \times (t_0,t_f],\\
+  \times (t_0,t_f],\\
  u(x,t_0) &=& u_0(x).
\f}
-We have chosen to enforce zero Neumann boundary conditions in order for waves to reflect off the boundaries of our domain. It should be noted, however, that Dirichlet boundary conditions are not appropriate for this problem. Even though the solutions to the sine-Gordon equation are localized, it only makes sense to specify (Dirichlet) boundary conditions at $x=\pm\infty$, otherwise either a solution does not exist or only the trivial solution $u=0$ exists.
+It is a nonlinear equation similar to the wave equation we
+discussed in @ref step_23 "step-23" and @ref step_24 "step-24".
+We have chosen to enforce zero Neumann boundary conditions in order for waves
+to reflect off the boundaries of our domain. It should be noted, however, that
+Dirichlet boundary conditions are not appropriate for this problem. Even
+though the solutions to the sine-Gordon equation are localized, it only makes
+sense to specify (Dirichlet) boundary conditions at $x=\pm\infty$, otherwise
+either a solution does not exist or only the trivial solution $u=0$ exists.

However, the form of the equation above is not ideal for numerical
discretization. If we were to discretize the second-order time
@@ -67,7 +73,7 @@ or implicit Euler method, respectively. Another important choice is
$\theta=\frac{1}{2}$, which gives the second-order accurate Crank-Nicolson
scheme. Henceforth, a superscript $n$ denotes the values of the
variables at the $n^{\mathrm{th}}$ time step, i.e. at
-$t=t_n\equiv n k$, where $k$ is the (fixed) the time step size. Thus,
+$t=t_n:= n k$, where $k$ is the (fixed) time step size. Thus,
the split formulation of the time-discretized sine-Gordon equation becomes
\f{eqnarray*}
  \frac{u^n - u^{n-1}}{k} - \left[\theta v^n + (1-\theta) v^{n-1}\right] &=& 0,\\
@@ -78,28 +84,33 @@ the split formulation of the time-discretized sine-Gordon equation becomes
We can simplify the latter via a bit of algebra. Eliminating $v^n$ from
the first equation and rearranging, we obtain
\f{eqnarray*}
  \left[ 1-k^2\theta^2\Delta \right] u^n &=&
-  \left[ 1+k^2\theta(1-\theta)\Delta\right] u^{n-1} + k v^{n-1}
-  - k^2\theta\sin\left[\theta u^n + (1-\theta) u^{n-1}\right],\\
+  \left[ 1+k^2\theta(1-\theta)\Delta\right] u^{n-1} + k v^{n-1}
+  - k^2\theta\sin\left[\theta u^n + (1-\theta) u^{n-1}\right],\\
  v^n &=& v^{n-1} + k\Delta\left[ \theta u^n + (1-\theta) u^{n-1}\right]
-  - k\sin\left[ \theta u^n + (1-\theta) u^{n-1} \right].
+  - k\sin\left[ \theta u^n + (1-\theta) u^{n-1} \right].
\f}
It may seem as though we can just proceed to discretize the equations
in space at this point. While this is true for the second equation
-(which is linear in $v^n$), this would not work for all $\theta$ since the first equation above is nonlinear. Therefore, a nonlinear solver must be implemented, then equations can be discretized in space and solved.
+(which is linear in $v^n$), this would not work for all $\theta$ since the
+first equation above is nonlinear. Therefore, a nonlinear solver must be
+implemented, after which the equations can be discretized in space and solved.
To this end, we can use Newton's method. Given the nonlinear equation
$F(u^n) = 0$, we produce successive approximations to $u^n$ as follows:
\f{eqnarray*}
  \mbox{ Find } \delta u^{n,l} \mbox{ s.t. } F'(u^{n,l})\delta u^{n,l} =
  -F(u^{n,l}) \mbox{, set } u^{n,l+1} = u^{n,l} + \delta u^{n,l}.
\f}
-The iteration can be initialized with the old time step, i.e. $u^n_0 = u^{n-1}$, and eventually it will produce a solution to the first equation of the split formulation (see above). For the time discretizaion of the sine-Gordon under consideration here, we have that
+The iteration can be initialized with the old time step, i.e. $u^{n,0} =
+u^{n-1}$, and eventually it will produce a solution to the first equation of
+the split formulation (see above). For the time discretization of the
+sine-Gordon equation under consideration here, we have that
\f{eqnarray*}
  F(u^{n,l}) &=& \left[ 1-k^2\theta^2\Delta \right] u^{n,l} -
-  \left[ 1+k^2\theta(1-\theta)\Delta\right] u^{n-1} - k v^{n-1}
-  + k^2\theta\sin\left[\theta u^n_l + (1-\theta) u^{n-1}\right],\\
+  \left[ 1+k^2\theta(1-\theta)\Delta\right] u^{n-1} - k v^{n-1}
+  + k^2\theta\sin\left[\theta u^{n,l} + (1-\theta) u^{n-1}\right],\\
  F'(u^{n,l}) &=& 1-k^2\theta^2\Delta - k^2\theta^2\cos\left[\theta u^{n,l}
-  + (1-\theta) u^{n-1}\right].
+  + (1-\theta) u^{n-1}\right].
\f}
Notice that while $F(u^{n,l})$ is a function, $F'(u^{n,l})$ is an operator.
@@ -109,13 +120,13 @@ With hindsight, we choose both the solution and the test space to be $H^1(\Omega
  &\mbox{ Find}& \delta u^{n,l} \in H^1(\Omega) \mbox{ s.t. } \left(
  F'(u^{n,l})\delta u^{n,l}, \varphi \right)_{\Omega} =
  -\left(F(u^{n,l}), \varphi \right)_{\Omega} \;\forall\varphi\in H^1(\Omega),
-  \mbox{ set } u^n_{l+1} = u^n_l + \delta u^n_l,\; u^n_0 = u^{n-1}.\\
+  \mbox{ set } u^{n,l+1} = u^{n,l} + \delta u^{n,l},\; u^{n,0} = u^{n-1}.\\
  &\mbox{ Find}& v^n \in H^1(\Omega) \mbox{ s.t. } \left( v^n, \varphi
  \right)_{\Omega} = \left( v^{n-1}, \varphi \right)_{\Omega}
-  - k\theta\left( \nabla u^n, \nabla\varphi \right)_{\Omega}
-  - k (1-\theta)\left( \nabla u^{n-1}, \nabla\varphi \right)_{\Omega}
-  - k\left(\sin\left[ \theta u^n + (1-\theta) u^{n-1} \right],
-  \varphi \right)_{\Omega} \;\forall\varphi\in H^1(\Omega).
+  - k\theta\left( \nabla u^n, \nabla\varphi \right)_{\Omega}
+  - k (1-\theta)\left( \nabla u^{n-1}, \nabla\varphi \right)_{\Omega}
+  - k\left(\sin\left[ \theta u^n + (1-\theta) u^{n-1} \right],
+  \varphi \right)_{\Omega} \;\forall\varphi\in H^1(\Omega).
\f}
Note that we have used integration by parts and the zero Neumann
boundary conditions on all terms involving the Laplacian
@@ -124,13 +135,13 @@ and $(\cdot,\cdot)_{\Omega}$ denotes the usual $L^2$ inner product
over the domain $\Omega$, i.e. $(f,g)_{\Omega} = \int_\Omega fg
\,\mathrm{d}x$. Finally, notice that the first equation is, in fact,
the definition of an iterative procedure, so it is solved multiple
-times in each time step until a stopping criterion is met.
+times during each time step until a stopping criterion is met.
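Written out as code, one such Newton cycle has the following shape. This is a
schematic sketch, not a verbatim excerpt from the program: the names
assemble_system(), solve(), solution, solution_update and system_rhs, as well
as the reduction factor $10^{-6}$, anticipate what is used further down in
this patch:

@code
// Schematic Newton iteration for a single time step: assemble
// F_h'(U^{n,l}) and the right-hand side -F_h(U^{n,l}), solve for the
// update, and repeat until the residual has dropped by a fixed
// factor relative to the first iterate. The names are placeholders
// that mirror the program's own members.
double initial_rhs_norm = 0.;
bool   first_iteration  = true;
do
  {
    assemble_system ();

    if (first_iteration)
      {
        initial_rhs_norm = system_rhs.l2_norm ();
        first_iteration  = false;
      }

    solve ();                     // computes delta U^{n,l}
    solution += solution_update;  // U^{n,l+1} = U^{n,l} + delta U^{n,l}
  }
while (system_rhs.l2_norm () > 1e-6 * initial_rhs_norm);
@endcode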

Discretization of the weak formulation in space

Using the Finite Element Method, we discretize the variational
formulation in space. To this end, let $V_h$ be a finite-dimensional
$H^1(\Omega)$-conforming finite element space ($\mathrm{dim}\, V_h = N
-< \infty$) with nodal basis $\{\varphi_1,\ldots,\varphi_N\}$. Hence,
+< \infty$) with nodal basis $\{\varphi_1,\ldots,\varphi_N\}$. Now,
we can expand all functions in the weak formulation (see above) in
terms of the nodal basis. Henceforth, we shall denote by a capital
letter the vector of coefficients (in the nodal basis) of a function
denoted by the same letter in lower case; e.g., $u^n = \sum_{i=1}^N
U^n_i \varphi_i$ where $U^n \in {R}^N$ and $u^n \in
H^1(\Omega)$. Thus, the finite-dimensional version of the variational
formulation requires that we solve the following matrix equations at each
time step:
@f{eqnarray*}
-  F_h'(U^n_l)\delta U^n_l &=& -F_h(U^n_l), \qquad
-  U^n_{l+1} = U^n_l + \delta U^n_l, \qquad U^n_0 = U^{n-1}; \\
+  F_h'(U^{n,l})\delta U^{n,l} &=& -F_h(U^{n,l}), \qquad
+  U^{n,l+1} = U^{n,l} + \delta U^{n,l}, \qquad U^{n,0} = U^{n-1}; \\
  MV^n &=& MV^{n-1} - k \theta AU^n - k (1-\theta) AU^{n-1} - k S(u^n,u^{n-1}).
@f}
Above, the matrix $F_h'(\cdot)$ and the vector $F_h(\cdot)$ denote the
discrete versions of the gadgets discussed above, i.e.
\f{eqnarray*}
-  F_h(U^n_l) &=& \left[ M+k^2\theta^2A \right] U^n_l -
-  \left[ M-k^2\theta(1-\theta)A \right] U^{n-1} - k MV^{n-1}
-  + k^2\theta S(u^n_l, u^{n-1}),\\
-  F_h'(U^n_l) &=& M+k^2\theta^2A
-  - k^2\theta^2N(u^n_l,u^{n-1})
+  F_h(U^{n,l}) &=& \left[ M+k^2\theta^2A \right] U^{n,l} -
+  \left[ M-k^2\theta(1-\theta)A \right] U^{n-1} - k MV^{n-1}
+  + k^2\theta S(u^{n,l}, u^{n-1}),\\
+  F_h'(U^{n,l}) &=& M+k^2\theta^2A
+  - k^2\theta^2N(u^{n,l},u^{n-1}).
\f}
Again, note that the first matrix equation above is, in fact, the
definition of an iterative procedure, so it is solved multiple times
until a stopping criterion is met. Moreover, $M$ is the mass matrix,
i.e. $M_{ij} = \left( \varphi_i,\varphi_j \right)_{\Omega}$, $A$ is the
Laplace matrix, i.e. $A_{ij} = \left( \nabla \varphi_i, \nabla
\varphi_j \right)_{\Omega}$, $S$ is the nonlinear term in the
-auxilliary equation, i.e. $S_j(f,g) = \left( \sin\left[ \theta f +
-(1-\theta) g\right], \varphi_j \right)_{\Omega}$, and $N$ is the
-nonlinear term in the Jacobian matrix of $F(\cdot)$, i.e. $N_{ij}(f,g)
-= \left( \cos\left[ \theta f + (1-\theta) g\right]\varphi_i, \varphi_j
-\right)_{\Omega}$.
+equation that defines our auxiliary velocity variable, i.e. $S_j(f,g) = \left(
+  \sin\left[ \theta f + (1-\theta) g\right], \varphi_j \right)_{\Omega}$, and
+$N$ is the nonlinear term in the Jacobian matrix of $F(\cdot)$,
+i.e. $N_{ij}(f,g) = \left( \cos\left[ \theta f + (1-\theta) g\right]\varphi_i,
+  \varphi_j \right)_{\Omega}$.

What solvers can we use for the first equation? Let's look at the matrix we
have to invert:
@f[
-  M-k^2\theta^2N)_{ij} =
+  (M-k^2\theta^2N)_{ij} =
  \int_\Omega (1-k^2\theta^2 \cos \alpha)
  \varphi_i\varphi_j \; dx,
@f]
for some $\alpha$ that depends on the present and previous solution. First,
note that the matrix is symmetric. In addition, if the time step $k$ is small
-enough, i.e. if $k\theta<1$ then the matrix is also going to be positive
+enough, i.e. if $k\theta<1$, then the matrix is also going to be positive
definite. In the program below, this will always be the case, so we will use
the Conjugate Gradient method together with the SSOR method as
-preconditioner.
We should keep in mind, however, that this is a point that
-will break if we happen to use a bigger time step. Fortunately, in that case
+preconditioner. We should keep in mind, however, that this will fail
+if we happen to use a bigger time step. Fortunately, in that case
the solver will just throw an exception indicating a failure to converge,
rather than silently producing a wrong result. If that happens, then we can
simply replace the CG method by something that can handle indefinite
symmetric systems. The GMRES solver is typically the standard method for
all "bad" linear systems, but it is also a slow one. Possibly better would be a solver
-that utilizes the symmetry, such as for example SymmLQ, which is also
+that utilizes the symmetry, such as, for example, SymmLQ, which is also
implemented in deal.II.

This program uses a clever optimization over @ref step_23 "step-23" and
@ref step_24 "step-24": If you read the above formulas closely, it becomes
clear that the velocity $V$ only ever appears in products with the mass
matrix. In
-@ref step_23 "step-23" and @ref step_24 "step-24", we we therefore a bit
+@ref step_23 "step-23" and @ref step_24 "step-24", we were, therefore, a bit
wasteful: in each time step, we would solve a linear system with the mass
matrix, only to multiply the solution of that system by $M$ again in the
next time step. This can, of course, be avoided, and we do so in this program.
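In code, this solver choice looks essentially as follows; the SSOR relaxation
parameter 1.2 is the value used further down in this patch, while the
iteration limit and tolerance shown here are illustrative rather than
authoritative:

@code
// Sketch of the linear solve discussed above: CG on the symmetric
// positive definite matrix, preconditioned with SSOR. The iteration
// limit and the tolerance are illustrative choices.
SolverControl solver_control (1000, 1e-12 * system_rhs.l2_norm ());
SolverCG<>    cg (solver_control);

PreconditionSSOR<> preconditioner;
preconditioner.initialize (system_matrix, 1.2);

solution_update = 0;
cg.solve (system_matrix, solution_update,
          system_rhs, preconditioner);
@endcode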

The test case

-There are a few analytical solutions for the sine-Gordon equation, both in 1d
-and 2d. In particular, the program as is computes the solution to a single
-kink-like solitary wave problem. This solution is given by Leibbrandt in \e
-Phys. \e Rev. \e Lett. \b 41(7), and is implemented in the
-ExactSolution class.
+There are a few analytical solutions for the sine-Gordon equation, both in 1D
+and 2D. In particular, the program, as is, computes the solution to a problem
+with a single kink-like solitary wave initial condition. This solution is
+given by Leibbrandt in \e Phys. \e Rev. \e Lett. \b 41(7), and is implemented
+in the ExactSolution class.

-It should be noted that this closed-form solution strictly speaking only holds
+It should be noted that this closed-form solution, strictly speaking, only holds
for the infinite-space initial-value problem (not the Neumann
initial-boundary-value problem under consideration here). However, given that
we impose \e zero Neumann boundary conditions, we expect that the solution to
our initial-boundary-value problem would be close to the solution of the
infinite-space initial-value problem, if reflections of waves off the
-boundaries of our domain do \e not occur.
+boundaries of our domain do \e not occur. In practice, this is of course not
+the case, but we can at least act as if this were so.

The constants $\vartheta$ and $\lambda$ in the 2D solution and $\vartheta$,
$\phi$ and $\tau$ in the 3D solution are called the Bäcklund
@@ -217,40 +229,40 @@ solution, one should choose the parameters so that the kink is aligned with
the grid.

The solutions that we implement in the ExactSolution class are
-this:
+these:
@@ -259,5 +271,5 @@ this:
Since it makes it easier to play around, the InitialValues class
that is used to set — surprise! — the initial values of our
simulation simply queries the class that describes the exact solution for the
-value at the start time, rather than duplicating the effort to implement a
+value at the initial time, rather than duplicating the effort to implement a
solution function.

diff --git a/deal.II/examples/step-25/doc/results.dox b/deal.II/examples/step-25/doc/results.dox
index 222badfccf..c8198185c8 100644
--- a/deal.II/examples/step-25/doc/results.dox
+++ b/deal.II/examples/step-25/doc/results.dox
@@ -1,9 +1,9 @@

Results

The explicit Euler time stepping scheme ($\theta=0$) performs adequately for
the problems we wish to solve. Unfortunately, a rather small time step has to
be chosen due to stability issues --- $k\sim h/10$ appears to work for most
of the simulations we performed. On the other hand, the Crank-Nicolson scheme
($\theta=\frac{1}{2}$) is unconditionally stable, and (at least for the case
of the 1D breather) we can pick the time step to be as large as $25h$ without
any ill effects on the solution. The implicit Euler scheme ($\theta=1$) is
"exponentially damped," so it is not a good choice for solving the
sine-Gordon equation, which is conservative. However, some of the damped
schemes in the continuum offered by the $\theta$-method were useful for
eliminating spurious oscillations due to boundary effects.

-In the simulations below, we solve the sine-Gordon on the interval $\Omega =
+In the simulations below, we solve the sine-Gordon equation on the interval $\Omega =
[-10,10]$ in 1D and on the square $\Omega = [-10,10]\times [-10,10]$ in 2D. In
-each case, the respective grid is refined uniformly 6 times, i.e. with $h\sim
+each case, the respective grid is refined uniformly 6 times, i.e. $h\sim
2^{-6}$.
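Expressed in code, these empirical constraints amount to something like the
following sketch; the constants are the values quoted above, and the variable
names are ours rather than the program's:

@code
// Illustrative time step choices based on the observations above;
// h is the mesh width quoted in the text, and the factors 1/10 and
// 25 are the empirical values for the explicit Euler and
// Crank-Nicolson schemes. Not part of step-25 itself.
const double h = 1. / 64;                // h ~ 2^{-6}, as quoted above

const double k_explicit_euler = h / 10;  // theta = 0: k ~ h/10 for stability
const double k_crank_nicolson = 25 * h;  // theta = 1/2: k = 25h worked in 1D
@endcode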

A (1+1)D Solution

@@ -20,7 +20,7 @@ where $c_1$, $c_2$ and $m<1$ are constants. In the simulation below, we have cho
Though it is not shown how to do this in the program, another way to
visualize the (1+1)D solution is to use output generated by the
DataOutStack class; it allows us to "stack" the solutions of individual
time steps, so that we get
-2-dimensional space-time graphs from 1-dimensional time dependent
+2D space-time graphs from 1D time-dependent
solutions. This produces the space-time plot below instead of the animation
above. (A sketch of how this might be done follows at the end of this
section.)

@@ -39,9 +39,9 @@ $10^{-2}$. Hence, we can
conclude that the numerical method has been implemented correctly in the
program.
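Though the program itself does not use DataOutStack, the call sequence for
the space-time output mentioned above would look roughly as follows. This is
a sketch based on our reading of the DataOutStack class documentation, not
code from step-25; dof_handler, solution, time and time_step stand for the
corresponding members of the SineGordonProblem class, and dim for its
template parameter:

@code
// Hedged sketch: stacking 1D solutions into a 2D space-time plot.
DataOutStack<dim> data_out_stack;
data_out_stack.declare_data_vector ("solution",
                                    DataOutStack<dim>::dof_vector);

// ...then, once per time step:
data_out_stack.new_parameter_value (time, time_step);
data_out_stack.attach_dof_handler (dof_handler);
data_out_stack.add_data_vector (solution, "solution");
data_out_stack.build_patches ();
data_out_stack.finish_parameter_value ();

// ...and once at the end of the run:
std::ofstream output ("solution.gnuplot");
data_out_stack.write_gnuplot (output);
@endcode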

A few (2+1)D Solutions

-The only analytical solution to the sine-Gordon equation in (2+1)-d that can be found in the literature is the so-called kink solitary wave. It has the following closed-form expression:
+The only analytical solution to the sine-Gordon equation in (2+1)D that can be found in the literature is the so-called kink solitary wave. It has the following closed-form expression:
@f[
  u(x,y,t) = 4 \arctan \left[a_0 e^{s\xi}\right]
@f]
(A short sketch of evaluating this formula in code is given at the end of
this section.)
@@ -59,11 +59,21 @@ The simulation shown below was performed with $u_0(x) = u_{\mathrm{kink}}(x,t_0)

\image html step-25.2d-kink.png "Stationary 2D kink." width=5cm

-Now that we have validated the code in 1D and 2D, we move to a problem where an analytical solution is unknown.
+Now that we have validated the code in 1D and 2D, we move to a problem where the analytical solution is unknown.

-To this end, we can rotate the kink solution discussed above about the $z$ axis, e.g. let $\vartheta=\frac{\pi}{4}$. The latter results in a solitary wave that is not aligned with the grid, so reflections occur at the boundaries of the domain immediately. For the simulation shown below, we have taken $u_0(x)=u_{\mathrm{kink}}(x,t_0)$, $\theta=\frac{2}{3}$, $k=20h$, $t_0=0$ and $t_f=20$. Moreover, we had to pick $\theta=\frac{2}{3}$ because for any $\theta\le\frac{1}{2}$ oscillations arose at the boundary, which are likely due to the scheme and not the equation, thus picking a value of $\theta$ a good bit into the "exponentially damped" spectrum of the time stepping schemes assures these oscillations are not created.
+To this end, we rotate the kink solution discussed above about the $z$
+axis: we let $\vartheta=\frac{\pi}{4}$. This results in a
+solitary wave that is not aligned with the grid, so reflections occur
+at the boundaries of the domain immediately. For the simulation shown
+below, we have taken $u_0(x)=u_{\mathrm{kink}}(x,t_0)$,
+$\theta=\frac{2}{3}$, $k=20h$, $t_0=0$ and $t_f=20$. Moreover, we had
+to pick $\theta=\frac{2}{3}$ because for any $\theta\le\frac{1}{2}$
+oscillations arose at the boundary, which are likely due to the scheme
+and not the equation. Picking a value of $\theta$ a good bit into
+the "exponentially damped" spectrum of the time stepping schemes
+therefore assures that these oscillations are not created.

-\image html step-25.2d-angled_kink.gif "Animation of a moving 2D kink, at 45 degrees to the axis of the grid, showing boundary effects." width=5cm
+\image html step-25.2d-angled_kink.gif "Animation of a moving 2D kink, at 45 degrees to the axes of the grid, showing boundary effects." width=5cm

Another interesting solution to the sine-Gordon equation (which cannot be
obtained analytically) can be produced by using two 1D breathers to construct
@@ -91,7 +101,9 @@ it appears to break up and reassemble, rather than just oscillate.
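Evaluating the kink formula above is straightforward; the following
self-contained sketch (not taken from step-25) computes $u$ from the
transformed coordinate $\xi$ and the constants $a_0$ and $s$ of the solution:

@code
#include <cmath>

// Hedged sketch: the (2+1)D kink u = 4 arctan(a_0 exp(s xi)) quoted
// above. xi is the Backlund-transformed coordinate whose definition
// is given in the introduction; a0 and s are the solution's constants.
double kink_value (const double xi, const double a0, const double s)
{
  return 4. * std::atan (a0 * std::exp (s * xi));
}
@endcode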

Possibilities for extensions

It is instructive to change the initial conditions. Most choices will not lead
-to solutions that stay localized, but lead to solutions where the wave-like
+to solutions that stay localized (in the soliton community, such
+solutions are called "stationary", though the solution does change
+with time), but lead to solutions where the wave-like
character of the equation dominates and a wave travels away from the location
of a localized initial condition. For example, it is worth playing around with
the InitialValues class, by replacing the call to the
@@ -101,7 +113,19 @@ the InitialValues class, by replacing the call to the
@f[
@f]
if $|x|,|y|\le \frac\pi 2$, and $u_0(x,y)=0$ outside this region.

-Beyond this, clearly adaptivity (i.e. time-adaptive grids) would be of
+A second area would be to investigate whether the scheme is
+energy-preserving. For the pure wave equation, discussed in @ref
+step_23 "step-23", this is the case if we choose the time stepping
+parameter such that we get the Crank-Nicolson scheme. One could do a
+similar thing here, noting that the energy in the sine-Gordon solution
+is defined as
+@f[
+  E(t) = \frac 12 \int_\Omega \left(\frac{\partial u}{\partial
+  t}\right)^2
+  + \left(\nabla u\right)^2 + 2 (1-\cos u) \; dx.
+@f]
+
+Beyond this, clearly, adaptivity (i.e. time-adaptive grids) would be of
interest to problems like these. Their complexity leads us to leave this
out of this program again, though the general comments in the introduction
of @ref step_23 "step-23" remain true.

diff --git a/deal.II/examples/step-25/step-25.cc b/deal.II/examples/step-25/step-25.cc
index cd65ca9b1c..3b2645c06d 100644
--- a/deal.II/examples/step-25/step-25.cc
+++ b/deal.II/examples/step-25/step-25.cc
@@ -23,7 +23,7 @@
 // numerics (since each
 // of these categories roughly builds
 // upon previous ones), then a few
-// C++ headers for file, input/output
+// C++ headers for file input/output
 // and string streams.
#include <fstream>
#include <sstream>
@@ -71,18 +71,18 @@ using namespace dealii;
 // reader should consult step-3 and step-4.
 //
 // Compared to step-23 and step-24, there
-// isn't much newsworthy in the general
+// isn't anything newsworthy in the general
 // structure of the program (though there is
-// of course in the inner working of the
+// of course in the inner workings of the
 // various functions!). The most notable
 // difference is the presence of the two new
 // functions compute_nl_term and
 // compute_nl_matrix that
 // compute the nonlinear contributions to the
-// matrix and right hand sides of the first
+// system matrix and right-hand side of the first
 // equation, as discussed in the
 // Introduction. In addition, we have to have
-// a vector update_solution that
+// a vector solution_update that
 // contains the nonlinear update to the
 // solution vector in each Newton step.
 //
@@ -91,25 +91,29 @@ using namespace dealii;
 // program, but the mass matrix times the
 // velocity. This is done in the
 // M_x_velocity variable (the
-// "x" is intended to stand for
+// "x" is intended to stand for
 // "times").
 //
 // Finally, the
-// output_timestep_skip variable
-// stores every how many time steps graphical
-// output is to be generated.
This is of
-// importance when using fine meshes (and
-// consequently small time steps) where we
-// would run lots of time steps and create
-// lots of output files of solutions that
-// look almost the same in subsequent
+// output_timestep_skip
+// variable stores the number of time
+// steps to be taken each time before
+// graphical output is to be
+// generated. This is of importance
+// when using fine meshes (and
+// consequently small time steps)
+// where we would run lots of time
+// steps and create lots of output
+// files of solutions that look
+// almost the same in subsequent
 // files. This only clogs up our
-// visualization procedures and we should
-// avoid creating more output than we are
-// really interested in. Therefore, if this
-// variable is to a value $n$ bigger than
-// one, output is generated only every $n$th
-// time step.
+// visualization procedures and we
+// should avoid creating more output
+// than we are really interested
+// in. Therefore, if this variable is
+// set to a value $n$ bigger than one,
+// output is generated only every
+// $n$th time step.

template <int dim>
class SineGordonProblem
{
@@ -144,7 +148,7 @@ class SineGordonProblem
    const double final_time, time_step;
    const double theta;

-    Vector<double> solution, update_solution, old_solution;
+    Vector<double> solution, solution_update, old_solution;
    Vector<double> M_x_velocity;
    Vector<double> system_rhs;

@@ -155,8 +159,8 @@
 // @sect3{Initial conditions}

 // In the following two classes, we first
-// implement the exact solution for 1d, 2d,
-// and 3d mentioned in the introduction to
+// implement the exact solution for 1D, 2D,
+// and 3D mentioned in the introduction to
 // this program. This space-time solution may
 // be of independent interest if one wanted
 // to test the accuracy of the program by
@@ -167,8 +171,8 @@ class SineGordonProblem
 // unbounded domain). This may, for example,
 // be done using the
 // VectorTools::integrate_difference
-// function. Note again (as was already
-// discussed in step-23) how we describe
+// function. Note, again (as was already
+// discussed in step-23), how we describe
 // space-time functions as spatial functions
 // that depend on a time variable that can be
 // set and queried using the
@@ -238,8 +242,8 @@ double ExactSolution<dim>::value (const Point<dim> &p,
      }
  }

-// The second part of this section is that we
-// provide initial conditions. We are lazy
+// In the second part of this section, we
+// provide the initial conditions. We are lazy
 // (and cautious) and don't want to implement
 // the same functions as above a second
 // time. Rather, if we are queried for
@@ -380,7 +384,7 @@ void SineGordonProblem<dim>::make_grid_and_dofs ()
                                        laplace_matrix);

    solution.reinit (dof_handler.n_dofs());
-    update_solution.reinit (dof_handler.n_dofs());
+    solution_update.reinit (dof_handler.n_dofs());
    old_solution.reinit (dof_handler.n_dofs());
    M_x_velocity.reinit (dof_handler.n_dofs());
    system_rhs.reinit (dof_handler.n_dofs());
@@ -395,7 +399,7 @@ void SineGordonProblem<dim>::make_grid_and_dofs ()
 // explicit formulas for the system matrix
 // and right-hand side.
 //
-// Note that in each time step, we have to
+// Note that during each time step, we have to
 // add up the various contributions to the
 // matrix and right hand sides.
In contrast
 // to step-23 and step-24, this requires
@@ -411,8 +415,8 @@ template <int dim>
 void SineGordonProblem<dim>::assemble_system ()
 {
   // First we assemble the Jacobian
-  // matrix $F'_h(U^n_l)$, where
-  // $U^n_l$ is stored in the vector
+  // matrix $F'_h(U^{n,l})$, where
+  // $U^{n,l}$ is stored in the vector
   // solution for
   // convenience.
   system_matrix = 0;
@@ -424,7 +428,7 @@ void SineGordonProblem<dim>::assemble_system ()
   system_matrix.add (-std::pow(time_step*theta,2), tmp_matrix);

   // Then, we compute the right-hand
-  // side vector $-F_h(U^n_l)$.
+  // side vector $-F_h(U^{n,l})$.
   system_rhs = 0;

   tmp_matrix = 0;
@@ -469,7 +473,7 @@ void SineGordonProblem<dim>::assemble_system ()
   // problem stored in
   // old_solution and
   // solution, but are simply the
-  // two functions we linearize around. For the
+  // two functions we linearize about. For the
   // purposes of this function, let us call the
   // first two arguments $w_{\mathrm{old}}$ and
   // $w_{\mathrm{new}}$ in the documentation of
@@ -555,9 +559,9 @@ void SineGordonProblem<dim>::compute_nl_term (const Vector<double> &old_data,

   // @sect4{SineGordonProblem::compute_nl_matrix}

-  // This second function dealing with the
-  // nonlinear scheme computes the matrix
-  // $N(\cdot,\cdot)$ appearing in the
+  // This is the second function dealing with the
+  // nonlinear scheme. It computes the matrix
+  // $N(\cdot,\cdot)$, which appears in the
   // nonlinear term in the Jacobian of
   // $F(\cdot)$. Just as
   // compute_nl_term, we must
@@ -642,8 +646,8 @@ void SineGordonProblem<dim>::compute_nl_matrix (const Vector<double> &old_data,
   // Newton's method for the (nonlinear) first
   // equation of the split formulation. The
   // solution to the system is, in fact,
-  // $\delta U^n_l$ so it is stored in
-  // update_solution and used to update
+  // $\delta U^{n,l}$ so it is stored in
+  // solution_update and used to update
   // solution in the
   // run function.
   //
@@ -656,11 +660,11 @@ void SineGordonProblem<dim>::compute_nl_matrix (const Vector<double> &old_data,
   // worthwhile to start from that vector, but
   // as a general observation it is a fact that
   // the starting point doesn't matter very
-  // much: it has to be a very very good guess
+  // much: it has to be a very, very good guess
   // to reduce the number of iterations by more
-  // than a few. It turns out that here, it
+  // than a few. It turns out that for this problem,
   // using the previous nonlinear update as a
-  // starting point actually hurts and
+  // starting point actually hurts convergence and
   // increases the number of iterations needed,
   // so we simply set it to zero.
   //
@@ -680,8 +684,8 @@ SineGordonProblem<dim>::solve ()
   PreconditionSSOR<> preconditioner;
   preconditioner.initialize(system_matrix, 1.2);

-  update_solution = 0;
-  cg.solve (system_matrix, update_solution,
+  solution_update = 0;
+  cg.solve (system_matrix, solution_update,
             system_rhs, preconditioner);

@@ -725,16 +729,19 @@ void SineGordonProblem<dim>::run ()
 {
   make_grid_and_dofs ();

-  // To aknowledge the initial condition, we
-  // must use the function $u_0(x)$. To this
-  // end, below we will create an object of
-  // type InitialValues; ote
-  // that when we create this object (which
-  // is derived from the
-  // Function class), we set its
-  // internal time variable to $t_0$, to
-  // indicate that the initial condition is a
-  // function of space and time evaluated at
+  // To acknowledge the initial
+  // condition, we must use the
+  // function $u_0(x)$ to compute
+  // $U^0$.
To this end, below we
+  // will create an object of type
+  // InitialValues; note
+  // that when we create this object
+  // (which is derived from the
+  // Function class), we
+  // set its internal time variable
+  // to $t_0$, to indicate that the
+  // initial condition is a function
+  // of space and time evaluated at
   // $t=t_0$.
   //
   // Then we produce $U^0$ by projecting
@@ -780,18 +787,21 @@ void SineGordonProblem<dim>::run ()
                << "advancing to t = " << time << "."
                << std::endl;

-  // The first step in each time step is
-  // that we must solve the nonlinear
-  // equation in the split formulation
-  // via Newton's method --- i.e. solve
-  // for $\delta U^n_l$ then compute
-  // $U^n_{l+1}$ and so on. As stopping
-  // criterion for this nonlinear
-  // iteration we choose that
-  // $\|F_h(U^n_l)\|_2 \le 10^{-6}
-  // \|F_h(U^n_0)\|_2$. To this end, we
-  // need to record the norm of the
-  // residual in the first
+  // At the beginning of each
+  // time step we must solve the
+  // nonlinear equation in the
+  // split formulation via
+  // Newton's method ---
+  // i.e. solve for $\delta
+  // U^{n,l}$ then compute
+  // $U^{n,l+1}$ and so on. The
+  // stopping criterion for this
+  // nonlinear iteration is that
+  // $\|F_h(U^{n,l})\|_2 \le
+  // 10^{-6}
+  // \|F_h(U^{n,0})\|_2$. Consequently,
+  // we need to record the norm
+  // of the residual in the first
   // iteration.
   //
   // At the end of each iteration, we
@@ -811,7 +821,7 @@ void SineGordonProblem<dim>::run ()
           const unsigned int n_iterations = solve ();

-          solution += update_solution;
+          solution += solution_update;

           if (first_iteration == true)
             std::cout << " " << n_iterations;
@@ -827,7 +837,7 @@ void SineGordonProblem<dim>::run ()
   // Upon obtaining the solution to the
   // first equation of the problem at
   // $t=t_n$, we must update the
-  // auxilliary velocity variable
+  // auxiliary velocity variable
   // $V^n$. However, we do not compute
   // and store $V^n$ since it is not a
   // quantity we use directly in the
@@ -845,18 +855,22 @@ void SineGordonProblem<dim>::run ()
   compute_nl_term (old_solution, solution, tmp_vector);
   M_x_velocity.add (-time_step, tmp_vector);

-  // Oftentimes, in particular for fine
-  // meshes, we must pick the time step
-  // to be quite small in order for the
-  // scheme to be stable. Therefore,
-  // there are a lot of time steps during
-  // which "nothing interesting happens"
-  // in the solution. To improve overall
-  // efficiency --- in particular, speed
-  // up the program and save disk space
-  // --- we only output the solution
-  // every
-  // output_timestep_skip:
+  // Oftentimes, in particular
+  // for fine meshes, we must
+  // pick the time step to be
+  // quite small in order for the
+  // scheme to be
+  // stable. Therefore, there are
+  // a lot of time steps during
+  // which "nothing interesting
+  // happens" in the solution. To
+  // improve overall efficiency
+  // -- in particular, speed up
+  // the program and save disk
+  // space -- we only output the
+  // solution every
+  // output_timestep_skip
+  // time steps:
   if (timestep_number % output_timestep_skip == 0)
     output_results (timestep_number);
 }
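// For context, a driver for the SineGordonProblem class above, in the
// style of the other deal.II example programs, might look like the
// following sketch. It is not part of this patch, and step-25's actual
// main() (unchanged by the commit and therefore not shown) may differ,
// e.g. in the space dimension it instantiates and in the details of
// its error handling. <base/logstream.h> provides deallog.
#include <base/logstream.h>
#include <iostream>

int main ()
{
  try
    {
      // Suppress solver output on the console, as the other
      // tutorial programs do.
      deallog.depth_console (0);

      SineGordonProblem<1> sg_problem;
      sg_problem.run ();
    }
  catch (std::exception &exc)
    {
      std::cerr << "Exception on processing: "
                << exc.what () << std::endl;
      return 1;
    }

  return 0;
}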