From c82794834e028fca7d295e64b98b6d83b5df73f4 Mon Sep 17 00:00:00 2001 From: bangerth Date: Thu, 2 Nov 2006 01:23:19 +0000 Subject: [PATCH] A bit forward git-svn-id: https://svn.dealii.org/trunk@14139 0785d39b-7218-0410-832d-ea1e28bc413d --- deal.II/examples/step-25/doc/intro.dox | 94 ++- deal.II/examples/step-25/step-25.cc | 833 +++++++++++++++---------- 2 files changed, 605 insertions(+), 322 deletions(-) diff --git a/deal.II/examples/step-25/doc/intro.dox b/deal.II/examples/step-25/doc/intro.dox index 0aa5ab59f6..46081e898b 100644 --- a/deal.II/examples/step-25/doc/intro.dox +++ b/deal.II/examples/step-25/doc/intro.dox @@ -1,10 +1,39 @@

Introduction

-The goal of this program is to solve the sine-Gordon soliton equation in 1, 2 or 3 spatial dimensions. The motivation for solving this equation is that very little is known about the nature of the solutions in 2D and 3D, even though the 1D case has been studied extensively. -Rather facetiously, the sine-Gordon equation's moniker is a pun on the so-called Klein-Gordon equation, which is the relativistic version of the Schrödinger equation. The resemblence is not just superficial, the sine-Gordon equation has been shown to model some unified-field phenomena such as interaction of subatomic particles (see, e.g., Perring & Skyrme in Nuclear Physics 31) and the Josephson (quantum) effect in semiconductors junctions (see, e.g., http://en.wikipedia.org/wiki/Long_Josephson_junction). Furthermore, from the mathematical standpoint, since the sine-Gordon equation is "completely integrable," it is a candidate for study using the usual methods such as the inverse scattering transform. Consequently, over the years, many interesting solitary-wave, and even stationary, solutions to the sine-Gordon equation have been found. For more on the sine-Gordon equation, the inverse scattering transform and other methods for finding analyical soliton equations, the reader should consult the following "classical" references on the subject: G. L. Lamb's Elements of Soliton Theory (Chapter 5, Section 2) and G. B. Whitham's Linear and Nonlinear Waves (Chapter 17, Sections 10-13). +This program grew out of a student project by Ivan Christov at Texas A&M +University. Most of the work for this program is by him. + +The goal of this program is to solve the sine-Gordon soliton equation +in 1, 2 or 3 spatial dimensions. The motivation for solving this +equation is that very little is known about the nature of the +solutions in 2D and 3D, even though the 1D case has been studied +extensively. + +Rather facetiously, the sine-Gordon equation's moniker is a pun on the +so-called Klein-Gordon equation, which is a relativistic version of +the Schrödinger equation for particles with non-zero mass. The resemblance is not just +superficial: the sine-Gordon equation has been shown to model some +unified-field phenomena such as the interaction of subatomic particles +(see, e.g., Perring & Skyrme in Nuclear Physics 31) and the +Josephson (quantum) effect in semiconductor junctions (see, e.g., http://en.wikipedia.org/wiki/Long_Josephson_junction). +Furthermore, from the mathematical standpoint, since the sine-Gordon +equation is "completely integrable," it is a candidate for study using +the usual methods such as the inverse scattering +transform. Consequently, over the years, many interesting +solitary-wave, and even stationary, solutions to the sine-Gordon +equation have been found. In these solutions, particles correspond to +localized features. For more on the sine-Gordon equation, the +inverse scattering transform and other methods for finding analytical +soliton solutions, the reader should consult the following "classical" +references on the subject: G. L. Lamb's Elements of Soliton +Theory (Chapter 5, Section 2) and G. B. Whitham's Linear and +Nonlinear Waves (Chapter 17, Sections 10-13).

Statement of the problem

-The sine-Gordon initial-boundary-value problem (IBVP) we wish to solve is +The sine-Gordon initial-boundary-value problem (IBVP) we wish to solve +is the following nonlinear equation similar to the wave equation we +discussed in @ref step_23 "step-23" and @ref step_24 "step-24": \f{eqnarray*} u_{tt}-\Delta u &=& -\sin(u) \quad\mbox{for}\quad (x,t) \in \Omega \times (t_0,t_f],\\ {\mathbf n} \cdot \nabla u &=& 0 \quad\mbox{for}\quad (x,t) \in \partial\Omega @@ -13,14 +42,33 @@ The sine-Gordon initial-boundary-value problem (IBVP) we wish to solve is \f} We have chosen to enforce zero Neumann boundary conditions in order for waves to reflect off the boundaries of our domain. It should be noted, however, that Dirichlet boundary conditions are not appropriate for this problem. Even though the solutions to the sine-Gordon equation are localized, it only makes sense to specify (Dirichlet) boundary conditions at $x=\pm\infty$, otherwise either a solution does not exist or only the trivial solution $u=0$ exists. -However, the form of the equation above is not ideal for numerical discretization. If we were to discretize the second-order time derivative directly and accurately, then we would need a large stencil (i.e., several time steps would need to kept in the memory), which could become expensive. Therefore, we split the second-order (in time) sine-Gordon equation into a system of two first-order (in time) equations, which we call the split, or velocity, formulation. To this end, by setting $v = u_t$, it is easy to see that the sine-Gordon equation is equivalent to +However, the form of the equation above is not ideal for numerical +discretization. If we were to discretize the second-order time +derivative directly and accurately, then we would need a large +stencil (i.e., several time steps would need to be kept in the +memory), which could become expensive. Therefore, in complete analogy +to what we did in @ref step_23 "step-23" and @ref step_24 "step-24", +we split the +second-order (in time) sine-Gordon equation into a system of two +first-order (in time) equations, which we call the split, or velocity, +formulation. To this end, by setting $v = u_t$, it is easy to see that the sine-Gordon equation is equivalent to \f{eqnarray*} u_t - v &=& 0,\\ v_t - \Delta u &=& -\sin(u). \f}
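As a quick check that nothing has been lost in the splitting, substituting the definition $v=u_t$ from the first equation into the second recovers the original equation:
\f{eqnarray*}
  v_t - \Delta u \;=\; u_{tt} - \Delta u \;=\; -\sin(u).
\f}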

Discretization of the equations in time

-Now, we can discretize the split formulation in time using the the $\theta$-method, which has a stencil of only two time steps. By choosing a $\theta\in [0,1]$, the latter discretization allows us to choose from a continuum of schemes. In particular, if we pick $\theta=0$ or $\theta=1$, we obtain the first-order accurate explicit or implicit Euler method, respectively. Another important choice is $\theta=\frac{1}{2}$, which gives the second-order accurate Crank-Nicolson scheme. Henceforth, a superscript $n$ denotes the values of the variables at the $n^{\mathrm{th}}$ time step, i.e. at $t=t_n\equiv n k$, where $k$ is (fixed) the time step size. Thus, the split formulation of the sine-Gordon equation becomes +Now, we can discretize the split formulation in time using the +$\theta$-method, which has a stencil of only two time steps. By +choosing a $\theta\in [0,1]$, the latter discretization allows us to +choose from a continuum of schemes. In particular, if we pick +$\theta=0$ or $\theta=1$, we obtain the first-order accurate explicit +or implicit Euler method, respectively. Another important choice is +$\theta=\frac{1}{2}$, which gives the second-order accurate +Crank-Nicolson scheme. Henceforth, a superscript $n$ denotes the +values of the variables at the $n^{\mathrm{th}}$ time step, i.e. at +$t=t_n\equiv n k$, where $k$ is the (fixed) time step size. Thus, +the split formulation of the time-discretized sine-Gordon equation becomes \f{eqnarray*} \frac{u^n - u^{n-1}}{k} - \left[\theta v^n + (1-\theta) v^{n-1}\right] &=& 0,\\ \frac{v^n - v^{n-1}}{k} - \Delta\left[\theta u^n + (1-\theta) u^{n-1}\right] @@ -36,7 +84,9 @@ We can simplify the latter via a bit of algebra. Eliminating $v^n$ from the firs - k\sin\left[ \theta u^n + (1-\theta) u^{n-1} \right]. \f} -It may seem as though we can just proceed to discretize the equations in space at this point, however, that would not work for all $\theta$ since the first equation above is nonlinear. Therefore, a nonlinear solver must be implemented, then equations can be discretized in space and solved. +It may seem as though we can just proceed to discretize the equations +in space at this point. While this is true for the second equation +(which is linear in $v^n$), this would not work for all $\theta$ since the first equation above is nonlinear. Therefore, a nonlinear solver must be implemented, then equations can be discretized in space and solved. To this end, we can use Newton's method. Given the nonlinear equation $F(u^n) = 0$, we produce successive approximations to $u^n$ as follows: \f{eqnarray*} @@ -67,10 +117,26 @@ With hindsight, we choose both the solution and the test space to be $H^1(\Omega - k\left(\sin\left[ \theta u^n + (1-\theta) u^{n-1} \right], \varphi \right)_{\Omega} \;\forall\varphi\in H^1(\Omega). \f} -Note that the we have used integration by parts and the zero Neumann boundary conditions on all terms involving the Laplacian operator ($\Delta$). Moreover, $F(\cdot)$ and $F'(\cdot)$ are as defined above, and $(\cdot,\cdot)_{\Omega}$ denotes the usual $L^2$ inner product over the domain $\Omega$, i.e. $(f,g)_{\Omega} = \int_\Omega fg \,\mathrm{d}x$. Finally, notice that the first equation is, in fact, the definition of an interative procedure, so it solved multiple times until a stopping criterion is met. +Note that we have used integration by parts and the zero Neumann +boundary conditions on all terms involving the Laplacian +operator.
Moreover, $F(\cdot)$ and $F'(\cdot)$ are as defined above, +and $(\cdot,\cdot)_{\Omega}$ denotes the usual $L^2$ inner product +over the domain $\Omega$, i.e. $(f,g)_{\Omega} = \int_\Omega fg +\,\mathrm{d}x$. Finally, notice that the first equation is, in fact, +the definition of an iterative procedure, so it is solved multiple +times in each time step until a stopping criterion is met.
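Before we discretize in space, it may help to see the nested iteration we have just described written out for the simplest possible case. The following is not part of the tutorial program; it is a minimal, self-contained sketch of the $\theta$-scheme combined with Newton's method, applied to the scalar analogue $u_{tt}=-\sin(u)$ (the pendulum equation, i.e. the sine-Gordon equation with the Laplacian dropped), with $F$ and $F'$ re-derived for this reduced case and all variable names made up for the purpose of illustration. The finite element program below follows the same loop structure, with vectors and matrices in place of scalars.
@code
#include <cmath>
#include <iostream>

int main ()
{
  const double theta = 0.5;          // Crank-Nicolson
  const double k     = 0.1;          // time step size
  double u = 1.0;                    // initial value u(t_0)
  double v = 0.0;                    // initial velocity v(t_0) = u_t(t_0)

  for (double t = k; t <= 50.; t += k)
    {
      const double u_old = u;
      const double v_old = v;

      // Newton iteration for F(u^n)=0. Eliminating v^n from the
      // theta-scheme of the scalar split system u_t = v, v_t = -sin(u)
      // gives F(u) = u - u_old - k v_old
      //              + k^2 theta sin(theta u + (1-theta) u_old).
      for (unsigned int l = 0; l < 100; ++l)
        {
          const double blend  = theta*u + (1-theta)*u_old;
          const double F      = u - u_old - k*v_old
                                + k*k*theta*std::sin(blend);
          if (std::fabs(F) < 1e-12)
            break;
          const double Fprime = 1. + k*k*theta*theta*std::cos(blend);
          u -= F/Fprime;             // u^n_{l+1} = u^n_l - F/F'
        }

      // The second equation of the split formulation is linear in v^n,
      // so it can be evaluated directly once u^n is known:
      v = v_old - k*std::sin(theta*u + (1-theta)*u_old);

      std::cout << t << ' ' << u << ' ' << v << std::endl;
    }
}
@endcode
A convenient sanity check for this toy problem is to monitor the discrete energy $\frac{1}{2}v^2-\cos(u)$, which should stay nearly constant for small $k$ when $\theta=\frac{1}{2}$.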

Discretization of the weak formulation in space

-Using the Finite Element Method, we discretize the variation formulation in space. To this end, let $V_h$ be a finite-dimensional $H^1(\Omega)$-conforming finite element space ($\mathrm{dim}\, V_h = N < \infty$) with nodal basis $\{\varphi_1,\ldots,\varphi_N\}$. Hence, we can expand all functions in the weak formulation (see above) in terms of the nodal basis. Henceforth, we shall denote by a capital letter the vector of coefficients (in the nodal basis) of a function denoted by the same letter in lower case; e.g., $u^n = \sum_{i=1}^N U^n_i \varphi_i$ where $U^n \in {R}^N$ and $u^n \in H^1(\Omega)$. Thus, the finite-dimensional version of variation fomulation requires that we solve the following matrix equations at each time step: +Using the Finite Element Method, we discretize the variational +formulation in space. To this end, let $V_h$ be a finite-dimensional +$H^1(\Omega)$-conforming finite element space ($\mathrm{dim}\, V_h = N +< \infty$) with nodal basis $\{\varphi_1,\ldots,\varphi_N\}$. Hence, +we can expand all functions in the weak formulation (see above) in +terms of the nodal basis. Henceforth, we shall denote by a capital +letter the vector of coefficients (in the nodal basis) of a function +denoted by the same letter in lower case; e.g., $u^n = \sum_{i=1}^N +U^n_i \varphi_i$ where $U^n \in {R}^N$ and $u^n \in +H^1(\Omega)$. Thus, the finite-dimensional version of the variational formulation requires that we solve the following matrix equations at each time step: @f{eqnarray*} F_h'(U^n_l)\delta U^n_l &=& -F_h(U^n_l), \quad U^n_{l+1} = U^n_l + \delta U^n_l, \quad U^n_0 = U^{n-1}; \\ @@ -84,4 +150,14 @@ Above, the matrix $F_h'(\cdot)$ and the vector $F_h(\cdot)$ denote the discrete F_h'(U^n_l) &=& M+k^2\theta^2A - k^2\theta^2N(u^n_l,u^{n-1}) \f} -Again, note that the first matrix equation above is, in fact, the defition of an iterative procedure, so it is solve multiple times until a stopping criterion is met. Moreover, $M$ is the mass matrix, i.e. $M_{ij} = \left( \varphi_i,\varphi_j \right)_{\Omega}$, $A$ is the Laplace matrix, i.e. $A_{ij} = \left( \nabla \varphi_i, \nabla \varphi_j \right)_{\Omega}$, $S$ is the nonlinear term in the auxilliary equation, i.e. $S_j(f,g) = \left( \sin\left[ \theta f + (1-\theta) g\right], \varphi_j \right)_{\Omega}$, and $N$ is the nonlinear term in the Jacobian matrix of $F(\cdot)$, i.e. $N_{ij}(f,g) = \left( \cos\left[ \theta f + (1-\theta) g\right]\varphi_i, \varphi_j \right)_{\Omega}$. +Again, note that the first matrix equation above is, in fact, the +definition of an iterative procedure, so it is solved multiple times +until a stopping criterion is met. Moreover, $M$ is the mass matrix, +i.e. $M_{ij} = \left( \varphi_i,\varphi_j \right)_{\Omega}$, $A$ is +the Laplace matrix, i.e. $A_{ij} = \left( \nabla \varphi_i, \nabla +\varphi_j \right)_{\Omega}$, $S$ is the nonlinear term in the +auxiliary equation, i.e. $S_j(f,g) = \left( \sin\left[ \theta f + +(1-\theta) g\right], \varphi_j \right)_{\Omega}$, and $N$ is the +nonlinear term in the Jacobian matrix of $F(\cdot)$, i.e. $N_{ij}(f,g) += \left( \cos\left[ \theta f + (1-\theta) g\right]\varphi_i, \varphi_j +\right)_{\Omega}$. diff --git a/deal.II/examples/step-25/step-25.cc b/deal.II/examples/step-25/step-25.cc index 44c7c871b8..4de0038734 100644 --- a/deal.II/examples/step-25/step-25.cc +++ b/deal.II/examples/step-25/step-25.cc @@ -1,11 +1,6 @@ -/* $Id: project.cc descends from heat-equation.cc, which */ -/* descends from step-4.cc (2006/03/01).
*/ -/* Author: Ivan Christov, Texas A&M University, 2006 */ - -/* $Id: step-4.cc,v 1.34 2006/02/06 21:33:10 wolf Exp $ */ -/* Version: $Name: $ */ -/* */ -/* Copyright (C) 2006 by the deal.II authors */ +/* $Id: $. */ +/* Copyright (C) 2006 by the deal.II authors */ +/* Author: Ivan Christov, Wolfgang Bangerth, Texas A&M University, 2006 */ /* */ /* This file is subject to QPL and may not be distributed */ /* without copyright and license information. Please refer */ @@ -13,14 +8,23 @@ /* further information on this license. */ - // @sect3{Include files and global variables} - -// For an explanation of the include files, the reader should refer to -// the example programs step-1 through step-4. They are in the -// standard order, which is base -- lac -- grid -- -// dofs -- fe -- numerics (since each of these categories -// roughly builds upon previous ones), then a few C++ headers for -// file, input/output and string streams. + // @sect3{Include files and global variables} + + // For an explanation of the include + // files, the reader should refer to + // the example programs step-1 + // through step-4. They are in the + // standard order, which is + // base -- + // lac -- + // grid -- + // dofs -- + // fe -- + // numerics (since each + // of these categories roughly builds + // upon previous ones), then a few + // C++ headers for file, input/output + // and string streams. #include #include #include @@ -51,23 +55,22 @@ // previous programs: using namespace dealii; -// The following global variable is used to determine whether the -// problem being solved is one for which an exact solution is known, -// e.g. we are using the exact solution as the initial condition. It -// is set to zero by default, and modified by InitialValues::value -// (see below). Things such as the computation of the error between -// the numerical and exact solutions depend on the value of this -// variable. -bool exact_solution_known = false; - -// @sect3{The SineGordonProblem class template} - -// The entire algorithm for solving the problem is encapsulated in -// this class. Also, note that the class is declared with a template -// parameter, which is the spatial dimension, so that we can solve the -// sine-Gordon equation in one, two or three spatial dimension. For -// more on the dimension-independent class-encapsulation of the -// problem, the reader should consult step-3 and step-4. + + // @sect3{The SineGordonProblem class template} + + // The entire algorithm for solving + // the problem is encapsulated in + // this class. Also, note that the + // class is declared with a template + // parameter, which is the spatial + // dimension, so that we can solve + // the sine-Gordon equation in one, + // two or three spatial + // dimension. For more on the + // dimension-independent + // class-encapsulation of the + // problem, the reader should consult + // step-3 and step-4. template class SineGordonProblem { @@ -85,7 +88,6 @@ class SineGordonProblem const Vector &new_data, SparseMatrix &nl_matrix) const; void solve (); - void compute_error (const unsigned int timestep_number); void output_results (const unsigned int timestep_number); Triangulation triangulation; @@ -111,35 +113,61 @@ class SineGordonProblem static const int n_global_refinements = 6; }; -// @sect3{Exact solitary wave solutions of the sine-Gordon equation} - -// A kink-like solitary wave solution to the (dim+1) dimensional -// sine-Gordon equation, which we can test our code against, is given -// by Leibbrandt in \e Phys. \e Rev. \e Lett. 
\b 41(7), and is -// implemented in the ExactSolution class. However, it should be -// noted that a closed-form solution can only be obtained for the -// infinite-line initial-value problem (not the Neumann -// initial-boundary-value problem under consideration here). However, -// given that we impose \e zero Neumann boundary conditions, we expect -// that the solution to our initial-boundary-value problem would be -// close (in fact, equal) to the solution infinite-line initial-value -// problem, if reflections of waves off the boundaries of our domain -// do \e not occur. -// -// The constants $\vartheta$ (th) and $\lambda$ (ld) in the 2D -// solution and $\vartheta$ (th), $\phi$ (phi) and $\tau$ -// (tau) in the 3D solution are called the Bäcklund -// transformation parameters. They control such things as the -// orientation and steepness of the kink. For the purposes of testing -// the code against the exact solution, one should choose the -// parameters so that the kink is aligned with the grid, e.g. $\vartheta -// = \phi = \pi$. -// -// In 1D, more interesting analytical solutions are known. Many of -// them are listed on -// http://mathworld.wolfram.com/Sine-GordonEquation.html . We have -// implemented the one kink, two kink, kink-antikink and stationary -// breather solitary-wave solutions. + // @sect3{Exact solitary wave solutions of the sine-Gordon equation} + + // A kink-like solitary wave solution + // to the (dim+1) + // dimensional sine-Gordon equation, + // which we can test our code + // against, is given by Leibbrandt in + // \e Phys. \e Rev. \e Lett. \b + // 41(7), and is implemented in the + // ExactSolution class. + // However, it should be noted that a + // closed-form solution can only be + // obtained for the infinite-line + // initial-value problem (not the + // Neumann initial-boundary-value + // problem under consideration + // here). However, given that we + // impose \e zero Neumann boundary + // conditions, we expect that the + // solution to our + // initial-boundary-value problem + // would be close (in fact, equal) to + // the solution infinite-line + // initial-value problem, if + // reflections of waves off the + // boundaries of our domain do \e not + // occur. + // + // The constants $\vartheta$ + // (th) and $\lambda$ + // (ld) in the 2D + // solution and $\vartheta$ + // (th), $\phi$ + // (phi) and $\tau$ + // (tau) in the 3D + // solution are called the + // Bäcklund transformation + // parameters. They control such + // things as the orientation and + // steepness of the kink. For the + // purposes of testing the code + // against the exact solution, one + // should choose the parameters so + // that the kink is aligned with the + // grid, e.g. $\vartheta = \phi = + // \pi$. + // + // In 1D, more interesting analytical + // solutions are known. Many of them + // are listed on + // http://mathworld.wolfram.com/Sine-GordonEquation.html + // . We have implemented the one + // kink, two kink, kink-antikink and + // stationary breather solitary-wave + // solutions. 
template class ExactSolution : public Function { @@ -161,11 +189,12 @@ double ExactSolution::value (const Point &p, case 1: { double m = 0.5; -// double beta = std::sqrt(m*m-1.)/m; + // double beta = + // std::sqrt(m*m-1.)/m; double c1 = 0.; double c2 = 0.; -// double s1 = 1.; -// double s2 = -1.; + // double s1 = 1.; + // double s2 = -1.; /* one kink (m>1) */ /* return 4.*std::atan(std::exp(s1*(p[0]+s2*beta*t)/std::sqrt(1.-beta*beta))); */ @@ -213,14 +242,20 @@ double ExactSolution::value (const Point &p, } } -// @sect3{Boundary values and initial values} - -// For our problem, we do not enforce Dirichlet boundary conditions -// and the Neumann boundary conditions are enforced directly through -// the variational formulation. However, since our problem is time -// dependent, we must specify the value of the independent variable -// $u$ at the initial time $t_0$. We do so via the InitialValues -// class below. + // @sect3{Boundary values and initial values} + + // For our problem, we do not enforce + // Dirichlet boundary conditions and + // the Neumann boundary conditions + // are enforced directly through the + // variational formulation. However, + // since our problem is time + // dependent, we must specify the + // value of the independent variable + // $u$ at the initial time $t_0$. We + // do so via the + // InitialValues class + // below. template class InitialValues : public Function { @@ -236,61 +271,84 @@ template double InitialValues::value (const Point &p, const unsigned int /*component*/) const { - // We could also use a localized wave form for our initial - // condition, and see how it evolves when governed by the - // sine-Gordon equation. An example of such an initial condition is - // the following: - /* - exact_solution_known = false; - if ((p[0]>=-M_PI) && (p[0]<=M_PI) && (p[1]>=-M_PI) && (p[1]<=M_PI)) { - return std::cos(p[0]/2.)*std::cos(p[1]/2.); - } else { - return 0.; - } - */ - - // In 2D, another possibility for a localized-wave initial condition - // is a separable solution composed of two 1D breathers: - exact_solution_known = false; + // We could also use a localized + // wave form for our initial + // condition, and see how it + // evolves when governed by the + // sine-Gordon equation. An example + // of such an initial condition is + // the following: + /* + if ((p[0]>=-M_PI) && (p[0]<=M_PI) && (p[1]>=-M_PI) && (p[1]<=M_PI)) { + return std::cos(p[0]/2.)*std::cos(p[1]/2.); + } else { + return 0.; + } + */ + + // In 2D, another possibility for a + // localized-wave initial condition + // is a separable solution composed + // of two 1D breathers: double m = 0.5; double t = this->get_time(); double argx = m/std::sqrt(1-m*m)*std::sin(std::sqrt(1-m*m)*t)/std::cosh(m*p[0]); double argy = m/std::sqrt(1-m*m)*std::sin(std::sqrt(1-m*m)*t)/std::cosh(m*p[1]); return 16.*std::atan(argx)*std::atan(argy); - // For the purposes of validating the program, we can use an exact - // solution of the sine-Gordon equation, at $t=t_0$, as the initial - // condition for our problem. Though, perhaps, this is not the most - // efficient way to implement the exact solution as the initial - // conditons, it is instuctive. - /* - exact_solution_known = false; - ExactSolution exact_solution (1, this->get_time()); - return exact_solution.value (p); - */ + // For the purposes of validating + // the program, we can use an exact + // solution of the sine-Gordon + // equation, at $t=t_0$, as the + // initial condition for our + // problem. 
Though, perhaps, this + // is not the most efficient way to + // implement the exact solution as + // the initial conditions, it is + // instructive. + /* + ExactSolution exact_solution (1, this->get_time()); + return exact_solution.value (p); + */ } -// @sect3{Implementation of the SineGordonProblem class} - -// \b TO \b DO: present the big picture here? - -// @sect4{SineGordonProblem::SineGordonProblem} - -// This is the constructor of the SineGordonProblem class. It -// specifies the desired polynomial degree of the finite elements, -// associates a DoFHandler to the triangulation object (just -// as in the example programs step-3 and step-4), initializes the -// current or initial time, the final time, the time step size, and -// the value of $\theta$ for the time stepping scheme. -// -// Note that if we were to chose the explicit Euler time stepping -// scheme ($\theta = 0$), then we must pick a time step $k \le h$, -// otherwise the scheme is not stable and oscillations might arise in -// the solution. The Crank-Nicolson scheme ($\theta = \frac{1}{2}$) -// and the implicit Euler scheme ($\theta=1$) do not suffer from this -// deficiency, since they are unconditionally stable. However, even -// then the time step should be chosen to be on the order of $h$ in -// order to obtain a good solution. + // @sect3{Implementation of the SineGordonProblem class} + + // \b TO \b DO: present the big + // picture here? + + // @sect4{SineGordonProblem::SineGordonProblem} + + // This is the constructor of the + // SineGordonProblem + // class. It specifies the desired + // polynomial degree of the finite + // elements, associates a + // DoFHandler to the + // triangulation object + // (just as in the example programs + // step-3 and step-4), initializes + // the current or initial time, the + // final time, the time step size, + // and the value of $\theta$ for the + // time stepping scheme. + // + // Note that if we were to choose the + // explicit Euler time stepping + // scheme ($\theta = 0$), then we + // must pick a time step $k \le h$, + // otherwise the scheme is not stable + // and oscillations might arise in + // the solution. The Crank-Nicolson + // scheme ($\theta = \frac{1}{2}$) + // and the implicit Euler scheme + // ($\theta=1$) do not suffer from + // this deficiency, since they are + // unconditionally stable. However, + // even then the time step should be + // chosen to be on the order of $h$ + // in order to obtain a good + // solution. template SineGordonProblem::SineGordonProblem () : fe (1), @@ -301,16 +359,25 @@ SineGordonProblem::SineGordonProblem () : theta (0.5) {} -// @sect4{SineGordonProblem::make_grid_and_dofs} - -// This function creates a rectangular grid in dim dimensions and -// refines it several times. Also, all matrix and vector members of -// the SineGordonProblem class are initialized to their -// approrpiate sizes once the degrees of freedom have been -// assembled. Unlike its analogue in step-3 (and step-4) this function -// uses MatrixCreator class to generate a mass matrix $M$ and a -// Laplace matrix $A$ and store them in the appropriate variables -// for the remainder of the program's life. + // @sect4{SineGordonProblem::make_grid_and_dofs} + + // This function creates a + // rectangular grid in + // dim dimensions and + // refines it several times. Also, + // all matrix and vector members of + // the SineGordonProblem + // class are initialized to their + // appropriate sizes once the degrees + // of freedom have been + // assembled.
Unlike its analogue in + // step-3 (and step-4) this function + // uses MatrixCreator + // class to generate a mass matrix + // $M$ and a Laplace matrix $A$ and + // store them in the appropriate + // variables for the remainder of the + // program's life. template void SineGordonProblem::make_grid_and_dofs () { @@ -351,29 +418,43 @@ void SineGordonProblem::make_grid_and_dofs () massmatxvel.reinit (dof_handler.n_dofs()); system_rhs.reinit (dof_handler.n_dofs()); - // We will use the fem_errors vector, which is of size equal to - // the number of time steps, to store the errors in the finite - // element solution after each time step. Note that we must make the - // first element of the vector equal to zero, since there is no - // error in the solution after zeroth time step because the solution - // is just the initial condition. + // We will use the + // fem_errors vector, + // which is of size equal to the + // number of time steps, to store + // the errors in the finite element + // solution after each time + // step. Note that we must make the + // first element of the vector + // equal to zero, since there is no + // error in the solution after + // zeroth time step because the + // solution is just the initial + // condition. const unsigned int n_time_steps = static_cast(std::ceil(std::fabs(final_time-time)/time_step)); fem_errors.reinit (n_time_steps); fem_errors(0) = 0.; } -// @sect4{SineGordonProblem::assemble_system} + // @sect4{SineGordonProblem::assemble_system} -// This functions assembles the system matrix and right-hand side -// vector for each iteration of Newton's method. The reader should -// refer to the last section of the Introduction for the explicit -// formulas for the system matrix and right-hand side. + // This functions assembles the + // system matrix and right-hand side + // vector for each iteration of + // Newton's method. The reader should + // refer to the last section of the + // Introduction for the explicit + // formulas for the system matrix and + // right-hand side. template void SineGordonProblem::assemble_system () { - // First we assemble the Jacobian matrix $F'_h(U^n_l)$, where - // $U^n_l$ is stored in the vector solution for convenience. + // First we assemble the Jacobian + // matrix $F'_h(U^n_l)$, where + // $U^n_l$ is stored in the vector + // solution for + // convenience. system_matrix = 0; system_matrix.copy_from (mass_matrix); system_matrix.add (std::pow(time_step*theta,2), laplace_matrix); @@ -381,7 +462,8 @@ void SineGordonProblem::assemble_system () compute_nl_matrix (old_solution, solution, tmp_matrix); system_matrix.add (-std::pow(time_step*theta,2), tmp_matrix); - // Then, we compute the right-hand side vector $-F_h(U^n_l)$. + // Then, we compute the right-hand + // side vector $-F_h(U^n_l)$. system_rhs = 0; tmp_matrix = 0; @@ -407,23 +489,36 @@ void SineGordonProblem::assemble_system () system_rhs *= -1; } -// @sect4{SineGordonProblem::compute_nl_term} - -// This function computes the vector $S(\cdot,\cdot)$ corresponding to the -// nonlinear term in the auxilliary (second) equation of the split -// formulation. This function not only simplifies the repeated -// computation of this term, but it is also a fundamental part of -// nonlinear iterative solver that we use when the time stepping is -// implicit (i.e. $\theta\ne 0$). Moreover, we must allow the function -// to receive as input an "old" and a "new" solution, which may not be -// the actual solutions of the problem stored in old_solution and -// solution. 
For the purposes of this function, let us call the -// first two arguments $w_{\mathrm{old}}$ and $w_{\mathrm{new}}$, -// respectively. -// -// It is perhaps worth investigating what order quadrature formula is -// best suited for this type of integration, since $\sin(\cdot)$ is an -// oscillatory function. + // @sect4{SineGordonProblem::compute_nl_term} + + // This function computes the vector + // $S(\cdot,\cdot)$ corresponding to + // the nonlinear term in the + // auxilliary (second) equation of + // the split formulation. This + // function not only simplifies the + // repeated computation of this term, + // but it is also a fundamental part + // of nonlinear iterative solver that + // we use when the time stepping is + // implicit (i.e. $\theta\ne + // 0$). Moreover, we must allow the + // function to receive as input an + // "old" and a "new" solution, which + // may not be the actual solutions of + // the problem stored in + // old_solution and + // solution. For the + // purposes of this function, let us + // call the first two arguments + // $w_{\mathrm{old}}$ and + // $w_{\mathrm{new}}$, respectively. + // + // It is perhaps worth investigating + // what order quadrature formula is + // best suited for this type of + // integration, since $\sin(\cdot)$ + // is an oscillatory function. template void SineGordonProblem::compute_nl_term (const Vector &old_data, const Vector &new_data, @@ -447,18 +542,29 @@ void SineGordonProblem::compute_nl_term (const Vector &old_data, for (; cell!=endc; ++cell) { - // Once we re-initialize our FEValues instantiation to the - // current cell, we make use of the get_function_values - // routine to get the obtain the values of the "old" data - // (presumably at $t=t_{n-1}$) and the "new" data (presumably at - // $t=t_n$) at the nodes of the chosen quadrature formula. + // Once we re-initialize our + // FEValues + // instantiation to the current + // cell, we make use of the + // get_function_values + // routine to get the obtain + // the values of the "old" data + // (presumably at $t=t_{n-1}$) + // and the "new" data + // (presumably at $t=t_n$) at + // the nodes of the chosen + // quadrature formula. fe_values.reinit (cell); fe_values.get_function_values (old_data, old_data_values); fe_values.get_function_values (new_data, new_data_values); - // Now, we can evaluate $\int_K \sin\left[\theta w_{\mathrm{new}} + - // (1-\theta) w_{\mathrm{old}}\right]\,\varphi_j\,\mathrm{d}x$ using - // the desired quadrature formula. + // Now, we can evaluate $\int_K + // \sin\left[\theta + // w_{\mathrm{new}} + + // (1-\theta) + // w_{\mathrm{old}}\right]\,\varphi_j\,\mathrm{d}x$ + // using the desired quadrature + // formula. for (unsigned int q_point=0; q_point::compute_nl_term (const Vector &old_data, fe_values.shape_value (i, q_point) * fe_values.JxW (q_point)); - // We conclude by adding up the contributions of the - // integrals over the cells to the global integral. + // We conclude by adding up the + // contributions of the + // integrals over the cells to + // the global integral. cell->get_dof_indices (local_dof_indices); for (unsigned int i=0; i::compute_nl_term (const Vector &old_data, } } -// @sect4{SineGordonProblem::compute_nl_matrix} - -// This function computes the matrix $N(\cdot,\cdot)$ corresponding to -// the nonlinear term in the Jacobian of $F(\cdot)$. It is also a -// fundamental part of nonlinear iterative solver. 
Just as -// compute_nl_term, we must allow this function to receive -// as input an "old" and a "new" solution, which we call the -// $w_{\mathrm{old}}$ and $w_{\mathrm{new}}$, respectively. + // @sect4{SineGordonProblem::compute_nl_matrix} + + // This function computes the matrix + // $N(\cdot,\cdot)$ corresponding to + // the nonlinear term in the Jacobian + // of $F(\cdot)$. It is also a + // fundamental part of nonlinear + // iterative solver. Just as + // compute_nl_term, we + // must allow this function to + // receive as input an "old" and a + // "new" solution, which we call the + // $w_{\mathrm{old}}$ and + // $w_{\mathrm{new}}$, respectively. template void SineGordonProblem::compute_nl_matrix (const Vector &old_data, const Vector &new_data, @@ -508,15 +622,24 @@ void SineGordonProblem::compute_nl_matrix (const Vector &old_data, for (; cell!=endc; ++cell) { - // Again, first we re-initialize our FEValues instantiation - // to the current cell. + // Again, first we + // re-initialize our + // FEValues + // instantiation to the current + // cell. fe_values.reinit (cell); fe_values.get_function_values (old_data, old_data_values); fe_values.get_function_values (new_data, new_data_values); - // Then, we evaluate $\int_K \cos\left[\theta w_{\mathrm{new}} + - // (1-\theta) w_{\mathrm{old}}\right]\, \varphi_i\, - // \varphi_j\,\mathrm{d}x$ using the desired quadrature formula. + // Then, we evaluate $\int_K + // \cos\left[\theta + // w_{\mathrm{new}} + + // (1-\theta) + // w_{\mathrm{old}}\right]\, + // \varphi_i\, + // \varphi_j\,\mathrm{d}x$ + // using the desired quadrature + // formula. for (unsigned int q_point=0; q_point::compute_nl_matrix (const Vector &old_data, fe_values.shape_value (j, q_point) * fe_values.JxW (q_point)); - // Finally, we add up the contributions of the integrals over - // the cells to the global integral. + // Finally, we add up the + // contributions of the + // integrals over the cells to + // the global integral. cell->get_dof_indices (local_dof_indices); for (unsigned int i=0; i::compute_nl_matrix (const Vector &old_data, } } -// @sect4{SineGordonProblem::compute_error} - -// This function computes the norm of the difference between the -// computed (i.e., finite element) solution after time step -// timestep_number and the exact solution to see how well we are -// doing. There are several choices for norms available to us in the -// VectorTools class. We use the $L^2$ norm because it is a -// natural choice for our problem, since the solutions to the -// sine-Gordon equation have finite energy or, equivalently, are $L^2$ -// functions. Given our weak formulation of the sine-Gordon equation, -// we are computing a solution $u\in H^1(\Omega)$, hence we could also -// use the $H^1$ norm to compute the error of the spatial -// discretization. For more information on the details behind this -// computation, the reader should refer to step-7. + // @sect4{SineGordonProblem::compute_error} + + // This function computes the norm of + // the difference between the + // computed (i.e., finite element) + // solution after time step + // timestep_number and + // the exact solution to see how well + // we are doing. There are several + // choices for norms available to us + // in the VectorTools + // class. We use the $L^2$ norm + // because it is a natural choice for + // our problem, since the solutions + // to the sine-Gordon equation have + // finite energy or, equivalently, + // are $L^2$ functions. 
Given our + // weak formulation of the + // sine-Gordon equation, we are + // computing a solution $u\in + // H^1(\Omega)$, hence we could also + // use the $H^1$ norm to compute the + // error of the spatial + // discretization. For more + // information on the details behind + // this computation, the reader + // should refer to step-7. +/* template void SineGordonProblem::compute_error (const unsigned int timestep_number) { +//TODO: do we need this still now? And do we still need fem_errors? We never call this function since exact_solution_known was always false... ExactSolution exact_solution (1, time); Vector difference_per_cell (triangulation.n_active_cells()); @@ -571,22 +711,36 @@ void SineGordonProblem::compute_error (const unsigned int timestep_number) << fem_errors(timestep_number) << "." << std::endl; } - -// @sect4{SineGordonProblem::solve} - -// This function uses the GMRES iterative solver on the linear system -// of equations resulting from the finite element spatial -// discretization of each iteration of Newton's method for the -// (nonlinear) first equation in the split formulation we derived in -// the Introduction. The solution to the system is, in fact, $\delta -// U^n_l$ so it is stored in d_solution and used to update -// solution in the run function. We cannot use the Conjugate -// Gradient solver because the nonlinear term in the Jacobian matrix -// results in a non-positive-definite matrix to invert. Moreover, we -// would like the solver to quit when the \e relative error is -// $10^{-12}$. This function is similar to its analogue in step-3 (and -// step-4); the only difference is the choice of iterative solver and -// the new stopping criterion. +*/ + + // @sect4{SineGordonProblem::solve} + + // This function uses the GMRES + // iterative solver on the linear + // system of equations resulting from + // the finite element spatial + // discretization of each iteration + // of Newton's method for the + // (nonlinear) first equation in the + // split formulation we derived in + // the Introduction. The solution to + // the system is, in fact, $\delta + // U^n_l$ so it is stored in + // d_solution and used + // to update solution in + // the run function. We + // cannot use the Conjugate Gradient + // solver because the nonlinear term + // in the Jacobian matrix results in + // a non-positive-definite matrix to + // invert. Moreover, we would like + // the solver to quit when the \e + // relative error is $10^{-12}$. This + // function is similar to its + // analogue in step-3 (and step-4); + // the only difference is the choice + // of iterative solver and the new + // stopping criterion. template void SineGordonProblem::solve () { @@ -600,13 +754,17 @@ void SineGordonProblem::solve () << std::endl; } -// @sect4{SineGordonProblem::output_results} - -// This function outputs the results to a file. It is almost identical -// to its counterpart in step-3 (and step-4). The only new thing is -// that the function now takes a parameter --- the time step number -// --- so that it can append it to the name of the file, which the -// current solution is output to. + // @sect4{SineGordonProblem::output_results} + + // This function outputs the results + // to a file. It is almost identical + // to its counterpart in step-3 (and + // step-4). The only new thing is + // that the function now takes a + // parameter --- the time step number + // --- so that it can append it to + // the name of the file, which the + // current solution is output to. 
template void SineGordonProblem::output_results (const unsigned int timestep_number) { @@ -619,9 +777,12 @@ void SineGordonProblem::output_results (const unsigned int timestep_number) std::ostringstream filename; filename << "solution-" << dim << "d-"; - // Pad the time step number in filename with zeros in the beginning - // so that the files are ordered correctly in the shell and we can - // generate a good animation using convert. + // Pad the time step number in + // filename with zeros in the + // beginning so that the files are + // ordered correctly in the shell + // and we can generate a good + // animation using convert. if (timestep_number<10) filename << "0000" << timestep_number; else if (timestep_number<100) @@ -633,15 +794,20 @@ void SineGordonProblem::output_results (const unsigned int timestep_number) else filename << timestep_number; - // We output the solution at the desired times in vtk format, so - // that we can use VisIt to make plots and/or animations. + // We output the solution at the + // desired times in + // vtk format, so that + // we can use VisIt to make plots + // and/or animations. filename << ".vtk"; std::ofstream output (filename.str().c_str()); data_out.write_vtk (output); - // We also store the current solution in our instantiation of a - // DataOutStack object, so that we can make a space-time plot of - // the solution. + // We also store the current + // solution in our instantiation of + // a DataOutStack + // object, so that we can make a + // space-time plot of the solution. data_out_stack.new_parameter_value (time, time_step*output_timestep_skip); data_out_stack.attach_dof_handler (dof_handler); data_out_stack.add_data_vector (solution, "solution"); @@ -649,13 +815,17 @@ void SineGordonProblem::output_results (const unsigned int timestep_number) data_out_stack.finish_parameter_value (); } -// @sect4{SineGordonProblem::run} - -// This function has the top-level control over everything: it runs -// the (outer) time-stepping loop, the (inner) nonlinear-solver loop, -// outputs the solution after each time step and calls the -// compute_error routine after each time step if an exact solution -// is known. + // @sect4{SineGordonProblem::run} + + // This function has the top-level + // control over everything: it runs + // the (outer) time-stepping loop, + // the (inner) nonlinear-solver loop, + // outputs the solution after each + // time step and calls the + // compute_error routine + // after each time step if an exact + // solution is known. template void SineGordonProblem::run () { @@ -667,21 +837,34 @@ void SineGordonProblem::run () make_grid_and_dofs (); - // To aknowledge the initial condition, we must use the function - // $u_0(x)$ to compute the zeroth time step solution $U^0$. Note - // that when we create the InitialValues Function object, we - // set its internal time variable to $t_0$, in case our initial - // condition is a function of space and time evaluated at $t=t_0$. + // To aknowledge the initial + // condition, we must use the + // function $u_0(x)$ to compute the + // zeroth time step solution + // $U^0$. Note that when we create + // the InitialValues + // Function object, we + // set its internal time variable + // to $t_0$, in case our initial + // condition is a function of space + // and time evaluated at $t=t_0$. InitialValues initial_condition (1, time); - // Then, in 2D and 3D, we produce $U^0$ by projecting $u_0(x)$ onto - // the grid using VectorTools::project. 
In 1D, however, we - // obtain the zeroth time step solution by interpolating $u_0(x)$ at - // the global degrees of freedom using - // VectorTools::interpolate. We must make an exception for the - // 1D case because the projection algorithm computes integrals over - // the boundary of the domain, which do not make sense in 1D, so we - // cannot use it. + // Then, in 2D and 3D, we produce + // $U^0$ by projecting $u_0(x)$ + // onto the grid using + // VectorTools::project. In + // 1D, however, we obtain the + // zeroth time step solution by + // interpolating $u_0(x)$ at the + // global degrees of freedom using + // VectorTools::interpolate. We + // must make an exception for the + // 1D case because the projection + // algorithm computes integrals + // over the boundary of the domain, + // which do not make sense in 1D, + // so we cannot use it. if (dim == 1) { VectorTools::interpolate (dof_handler, initial_condition, solution); @@ -694,15 +877,20 @@ void SineGordonProblem::run () initial_condition, solution); } - // For completeness, we output the zeroth time step to a file just - // like any other other time step. + // For completeness, we output the + // zeroth time step to a file just + // like any other other time step. output_results (0); - // Now we perform the time stepping: at every time step we solve the - // matrix equation(s) corresponding to the finite element - // discretization of the problem, and then advance our solution - // according to the time stepping formulas we discussed in the - // Introduction. + // Now we perform the time + // stepping: at every time step we + // solve the matrix equation(s) + // corresponding to the finite + // element discretization of the + // problem, and then advance our + // solution according to the time + // stepping formulas we discussed + // in the Introduction. unsigned int timestep_number = 1; for (time+=time_step; time<=final_time; time+=time_step, ++timestep_number) { @@ -713,12 +901,19 @@ void SineGordonProblem::run () << "advancing to t = " << time << "." << std::endl; - // First we must solve the nonlinear equation in the split - // formulation via Newton's method --- i.e. solve for $\delta - // U^n_l$ then compute $U^n_{l+1}$ and so on. The stopping - // criterion is that $\|F_h(U^n_l)\|_2 \le 10^{-6} - // \|F_h(U^n_0)\|_2$. When the loop below is done, we have (an - // approximation of) $U^n$. + // First we must solve the + // nonlinear equation in the + // split formulation via + // Newton's method --- + // i.e. solve for $\delta + // U^n_l$ then compute + // $U^n_{l+1}$ and so on. The + // stopping criterion is that + // $\|F_h(U^n_l)\|_2 \le + // 10^{-6} + // \|F_h(U^n_0)\|_2$. When the + // loop below is done, we have + // (an approximation of) $U^n$. double initial_rhs_norm = 0.; unsigned int nliter = 1; do @@ -733,23 +928,39 @@ void SineGordonProblem::run () } while (system_rhs.l2_norm() > 1e-6 * initial_rhs_norm); - // In the case of the explicit Euler time stepping scheme, we - // must pick the time step to be quite small in order for the - // scheme to be stable. Therefore, there are a lot of time steps - // during which "nothing interesting happens" in the - // solution. To improve overall efficiency --- in particular, - // speed up the program and save disk space --- we only output - // the solution after output_timestep_skip time steps have - // been taken. 
+ // In the case of the explicit + // Euler time stepping scheme, + // we must pick the time step + // to be quite small in order + // for the scheme to be + // stable. Therefore, there are + // a lot of time steps during + // which "nothing interesting + // happens" in the solution. To + // improve overall efficiency + // --- in particular, speed up + // the program and save disk + // space --- we only output the + // solution after + // output_timestep_skip + // time steps have been taken. if (timestep_number % output_timestep_skip == 0) - output_results (timestep_number); + output_results (timestep_number); - // Upon obtaining the solution to the problem at $t=t_n$, we - // must update the auxilliary velocity variable $V^n$. However, - // we do not compute and store $V^n$ since it is not a quantity - // we use directly in the problem. Hence, for simplicity, we - // update $MV^n$ directly using the second equation in the last - // subsection of the Introduction. + // Upon obtaining the solution + // to the problem at $t=t_n$, + // we must update the + // auxilliary velocity variable + // $V^n$. However, we do not + // compute and store $V^n$ + // since it is not a quantity + // we use directly in the + // problem. Hence, for + // simplicity, we update $MV^n$ + // directly using the second + // equation in the last + // subsection of the + // Introduction. Vector tmp_vector (solution.size()); laplace_matrix.vmult (tmp_vector, solution); massmatxvel.add (-time_step*theta, tmp_vector); @@ -761,39 +972,35 @@ void SineGordonProblem::run () tmp_vector = 0; compute_nl_term (old_solution, solution, tmp_vector); massmatxvel.add (-time_step, tmp_vector); - - // Before concluding the $n^{\mathrm{th}}$ time step, we compute - // the error in the finite element solution at $t=t_n$ if the - // exact solution to the problem being solved is known. - if (exact_solution_known) - compute_error (timestep_number); } - // After the time stepping is complete, we report the maximum (over - // all time steps) of the errors in the finite element solution if - // the exact solution of the problem being solved is know. - if (exact_solution_known) - std::cout << " The maximum L^2 error in the solution was " - << fem_errors.linfty_norm() << "." - << std::endl << std::endl; - - // Finally, we output the sequence of solutions stored - // data_out_stack to a file of the appropriate format. + // Finally, we output the sequence + // of solutions stored + // data_out_stack to a + // file of the appropriate format. std::ostringstream filename; filename << "solution-" << dim << "d-" << "stacked" << ".vtk"; std::ofstream output (filename.str().c_str()); data_out_stack.write_vtk (output); } -// @sect3{The main function} - -// This is the main function of the program. It creates an object of -// top-level class and calls its principal function. Also, we supress -// some of the library output by setting deallog.depth_console to -// zero. Furthermore, if exceptions are thrown during the execution of -// the run method of the SineGordonProblem class, we catch and -// report them here. For more information about exceptions the reader -// should consult step-6. + // @sect3{The main function} + + // This is the main function of the + // program. It creates an object of + // top-level class and calls its + // principal function. Also, we + // supress some of the library output + // by setting + // deallog.depth_console + // to zero. 
Furthermore, if + // exceptions are thrown during the + // execution of the run method of the + // SineGordonProblem + // class, we catch and report them + // here. For more information about + // exceptions the reader should + // consult step-6. int main () { try @@ -818,14 +1025,14 @@ int main () } catch (...) { - std::cerr << std::endl << std::endl - << "----------------------------------------------------" - << std::endl; - std::cerr << "Unknown exception!" << std::endl - << "Aborting!" << std::endl - << "----------------------------------------------------" - << std::endl; - return 1; + std::cerr << std::endl << std::endl + << "----------------------------------------------------" + << std::endl; + std::cerr << "Unknown exception!" << std::endl + << "Aborting!" << std::endl + << "----------------------------------------------------" + << std::endl; + return 1; } return 0; -- 2.39.5