From: kronbichler Date: Tue, 28 Oct 2008 12:17:56 +0000 (+0000) Subject: Now the Stokes solution is computed at the old time level with the old temperature... X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=1e2e11a01539f748a9ba2ba130ecaac803618f2c;p=dealii-svn.git Now the Stokes solution is computed at the old time level with the old temperature, instead of the new one. This makes the discretization O(h^2) again. git-svn-id: https://svn.dealii.org/trunk@17359 0785d39b-7218-0410-832d-ea1e28bc413d --- diff --git a/deal.II/examples/step-31/doc/intro.dox b/deal.II/examples/step-31/doc/intro.dox index 96174b0ae5..ca22b36209 100644 --- a/deal.II/examples/step-31/doc/intro.dox +++ b/deal.II/examples/step-31/doc/intro.dox @@ -178,29 +178,33 @@ in the top-left corner of the differential operator.

Time stepping

-The structure of the problem as a DAE allows us to use the same -strategy as we have already used in @ref step_21 "step-21", i.e. we -use a time lag scheme: first solve the Stokes equations for velocity and -pressure using the temperature field from the previous time step, then -with the new velocities update the temperature field for the current -time step. In other words, in time step n we first solve the Stokes -system +The structure of the problem as a DAE allows us to use the same strategy as +we have already used in @ref step_21 "step-21", i.e. we use a time lag +scheme: we first solve the temperature equation (using an extrapolated +velocity field), and then insert the new temperature solution into the right +hand side of the velocity equation. The way we implement this in our code +looks at things from a slightly different perspective, though. We first +solve the Stokes equations for velocity and pressure using the temperature +field from the previous time step, which means that we get the velocity for +the previous time step. In other words, we first solve the Stokes system for +time step n-1 as @f{eqnarray*} - -\nabla \cdot \eta \varepsilon ({\mathbf u}^n) + \nabla p^n &=& + -\nabla \cdot \eta \varepsilon ({\mathbf u}^{n-1}) + \nabla p^{n-1} &=& \mathrm{Ra} \; T^{n-1} \mathbf{g}, \\ - \nabla \cdot {\mathbf u}^n &=& 0, + \nabla \cdot {\mathbf u}^{n-1} &=& 0, @f} -and then the temperature equation with the so-computed velocity field -${\mathbf u}^n$. In contrast to @ref step_21 "step-21", we'll use a -higher order time stepping scheme here, namely the Backward -Differentiation Formula scheme of order 2 (BDF-2 in short) that -replaces the time derivative $\frac{\partial T}{\partial t}$ by the (one-sided) -difference quotient $\frac{\frac 32 T^{n}-2T^{n-1}+\frac 12 T^{n-2}}{k}$ with -k the time step size. +and then the temperature equation with an extrapolated velocity field to +time n. -This gives the discretized-in-time temperature equation +In contrast to @ref step_21 "step-21", we'll use a higher order time +stepping scheme here, namely the Backward +Differentiation Formula scheme of order 2 (BDF-2 in short) that replaces +the time derivative $\frac{\partial T}{\partial t}$ by the (one-sided) +difference quotient $\frac{\frac 32 T^{n}-2T^{n-1}+\frac 12 T^{n-2}}{k}$ +with k the time step size. This gives the discretized-in-time +temperature equation @f{eqnarray*} \frac 32 T^n - @@ -210,38 +214,35 @@ This gives the discretized-in-time temperature equation - \frac 12 T^{n-2} - - k{\mathbf u}^n \cdot \nabla (2T^{n-1}-T^{n-2}) + k(2{\mathbf u}^{n-1} - {\mathbf u}^{n-2} ) \cdot \nabla (2T^{n-1}-T^{n-2}) + k\gamma. @f} -Note how the temperature equation is -solved semi-explicitly: diffusion is treated implicitly whereas -advection is treated explicitly using the just-computed velocity -field but only previously computed temperature fields. The -temperature terms appearing in the advection term are forward -projected to the current time: -$T^n \approx T^{n-1} + k_n -\frac{\partial T}{\partial t} \approx T^{n-1} + k_n -\frac{T^{n-1}-T^{n-2}}{k_n} = 2T^{n-1}-T^{n-2}$. We need this projection -for maintaining the order of accuracy of the BDF-2 scheme. In other words, the -temperature fields we use in the explicit right hand side are first -order approximations of the current temperature field — not -quite an explicit time stepping scheme, but by character not too far -away either. 
- -The introduction of the temperature extrapolation limits the time step -by a -Courant-Friedrichs-Lewy (CFL) condition just like it was in -@ref step_21 "step-21". (We wouldn't have had that stability condition if -we treated the advection term implicitly since the BDF-2 scheme is A-stable, -at the price that we needed to build a new temperature matrix at each time -step.) We will discuss the exact choice of time step in the results section, but for the moment of importance is that -this CFL condition means that the time step -size k may change from time step to time step, and that we have to -modify the above formula slightly. If $k_n,k_{n-1}$ are the time steps -sizes of the current and previous time step, then we use the -approximations +Note how the temperature equation is solved semi-explicitly: diffusion is +treated implicitly whereas advection is treated explicitly using an +extrapolation (or forward-projection) of temperature and velocity, including +the just-computed velocity ${\mathbf u}^{n-1}$. The forward-projection to +the current time level n is derived from a Taylor expansion, $T^n +\approx T^{n-1} + k_n \frac{\partial T}{\partial t} \approx T^{n-1} + k_n +\frac{T^{n-1}-T^{n-2}}{k_n} = 2T^{n-1}-T^{n-2}$. We need this projection for +maintaining the order of accuracy of the BDF-2 scheme. In other words, the +temperature fields we use in the explicit right hand side are second order +approximations of the current temperature field — not quite an +explicit time stepping scheme, but by character not too far away either. + +The introduction of the temperature extrapolation limits the time step by a + +Courant-Friedrichs-Lewy (CFL) condition just like it was in @ref step_21 +"step-21". (We wouldn't have had that stability condition if we treated the +advection term implicitly since the BDF-2 scheme is A-stable, at the price +that we needed to build a new temperature matrix at each time step.) We will +discuss the exact choice of time step in the results +section, but for the moment of importance is that this CFL condition +means that the time step size k may change from time step to time +step, and that we have to modify the above formula slightly. If +$k_n,k_{n-1}$ are the time steps sizes of the current and previous time +step, then we use the approximations + $\frac{\partial T}{\partial t} \approx \frac 1{k_n} \left( @@ -269,15 +270,18 @@ and above equation is generalized as follows: - \frac{k_n^2}{k_{n-1}(k_n+k_{n-1})} T^{n-2} - - k_n{\mathbf u}^n \cdot \nabla \left[ - \left(1+\frac{k_n}{k_{n-1}}\right)T^{n-1}-\frac{k_n}{k_{n-1}}T^{n-2} - \right] + k_n{\mathbf u}^{*,n} \cdot \nabla T^{*,n} + - k_n\gamma. + k_n\gamma, @f} -That's not an easy to read equation, but will provide us with the -desired higher order accuracy. As a consistency check, it is easy to -verify that it reduces to the same equation as above if $k_n=k_{n-1}$. + +where ${(\cdot)}^{*,n} = \left(1+\frac{k_n}{k_{n-1}}\right)(\cdot)^{n-1} - +\frac{k_n}{k_{n-1}}(\cdot)^{n-2}$ denotes the extrapolation of velocity +u and temperature T to time level n, using the values +at the two previous time steps. That's not an easy to read equation, but +will provide us with the desired higher order accuracy. As a consistency +check, it is easy to verify that it reduces to the same equation as above if +$k_n=k_{n-1}$. 
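The variable step size coefficients above are easy to get wrong in an implementation, so the following self-contained snippet (purely illustrative, not part of the tutorial program, and with invented names) collects them in one place. It performs the consistency check just mentioned, namely that for $k_n=k_{n-1}$ the weights reduce to $\frac 32$, $2$, $\frac 12$ and the extrapolation weights to $2$ and $-1$, and it confirms the second order accuracy of BDF-2 on the scalar model problem $\frac{dT}{dt}=-T$ with the right hand side treated implicitly, in the same way the diffusion term is treated in the temperature equation: halving the step size reduces the error at the final time by roughly a factor of four.
@code
#include <cassert>
#include <cmath>
#include <cstdio>

struct BDF2Coefficients
{
  double lhs;        // factor in front of T^n on the left hand side
  double w_old;      // weight of T^{n-1} on the right hand side
  double w_old_old;  // weight of T^{n-2} on the right hand side
  double e_old;      // extrapolation weight of (.)^{n-1}
  double e_old_old;  // extrapolation weight of (.)^{n-2}
};

BDF2Coefficients bdf2_coefficients(const double k_n, const double k_nm1)
{
  return {(2 * k_n + k_nm1) / (k_n + k_nm1),
          (k_n + k_nm1) / k_nm1,
          k_n * k_n / (k_nm1 * (k_n + k_nm1)),
          1 + k_n / k_nm1,
          -k_n / k_nm1};
}

// BDF-2 for dT/dt = -a*T with uniform step size k and the right hand side
// treated implicitly; returns the approximation of T at t_end for T(0)=1.
double bdf2_integrate(const double a, const double k, const double t_end)
{
  const BDF2Coefficients c = bdf2_coefficients(k, k);
  double T_old_old = 1.0;              // T^0
  double T_old     = std::exp(-a * k); // bootstrap T^1 with the exact value
  for (double t = 2 * k; t <= t_end + 1e-12; t += k)
    {
      // lhs*T^n = w_old*T^{n-1} - w_old_old*T^{n-2} + k*(-a*T^n)
      const double T_new =
        (c.w_old * T_old - c.w_old_old * T_old_old) / (c.lhs + k * a);
      T_old_old = T_old;
      T_old     = T_new;
    }
  return T_old;
}

int main()
{
  const BDF2Coefficients c = bdf2_coefficients(0.1, 0.1); // equal step sizes
  assert(std::fabs(c.lhs - 1.5) < 1e-14 && std::fabs(c.w_old - 2.0) < 1e-14 &&
         std::fabs(c.w_old_old - 0.5) < 1e-14);
  assert(std::fabs(c.e_old - 2.0) < 1e-14 && std::fabs(c.e_old_old + 1.0) < 1e-14);

  // Second order convergence: each halving of k reduces the error by about 4.
  for (const double k : {0.1, 0.05, 0.025})
    std::printf("k = %5.3f   error at t=1: %.3e\n",
                k, std::fabs(bdf2_integrate(1.0, k, 1.0) - std::exp(-1.0)));
}
@endcode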
As a final remark we note that the choice of a higher order time stepping scheme of course forces us to keep more time steps in memory; @@ -302,32 +306,32 @@ elements, so we can form the weak form of the Stokes equation without problem by integrating by parts and substituting continuous functions by their discrete counterparts: @f{eqnarray*} - (\nabla {\mathbf v}_h, \eta \varepsilon ({\mathbf u}^n_h)) + (\nabla {\mathbf v}_h, \eta \varepsilon ({\mathbf u}^{n-1}_h)) - - (\nabla \cdot {\mathbf v}_h, p^n_h) + (\nabla \cdot {\mathbf v}_h, p^{n-1}_h) &=& ({\mathbf v}_h, \mathrm{Ra} \; T^{n-1}_h \mathbf{g}), \\ - (q_h, \nabla \cdot {\mathbf u}^n_h) &=& 0, + (q_h, \nabla \cdot {\mathbf u}^{n-1}_h) &=& 0, @f} for all test functions $\mathbf v_h, q_h$. The first term of the first equation is considered as the inner product between tensors, i.e. -$(\nabla {\mathbf v}_h, \eta \varepsilon ({\mathbf u}^n_h))_\Omega +$(\nabla {\mathbf v}_h, \eta \varepsilon ({\mathbf u}^{n-1}_h))_\Omega = \int_\Omega \sum_{i,j=1}^d [\nabla {\mathbf v}_h]_{ij} - \eta [\varepsilon ({\mathbf u}^n_h)]_{ij}\, dx$. + \eta [\varepsilon ({\mathbf u}^{n-1}_h)]_{ij}\, dx$. Because the second tensor in this product is symmetric, the anti-symmetric component of $\nabla {\mathbf v}_h$ plays no role and it leads to the entirely same form if we use the symmetric gradient of $\mathbf v_h$ instead. Consequently, the formulation we consider and that we implement is @f{eqnarray*} - (\varepsilon({\mathbf v}_h), \eta \varepsilon ({\mathbf u}^n_h)) + (\varepsilon({\mathbf v}_h), \eta \varepsilon ({\mathbf u}^{n-1}_h)) - - (\nabla \cdot {\mathbf v}_h, p^n_h) + (\nabla \cdot {\mathbf v}_h, p^{n-1}_h) &=& ({\mathbf v}_h, \mathrm{Ra} \; T^{n-1}_h \mathbf{g}), \\ - (q_h, \nabla \cdot {\mathbf u}^n_h) &=& 0. + (q_h, \nabla \cdot {\mathbf u}^{n-1}_h) &=& 0. @f} This is exactly the same as what we already discussed in @@ -529,7 +533,7 @@ this yields for the simpler case of uniform time steps of size k: \\ && - - k{\mathbf u}^n \cdot \nabla (2T^{n-1}-T^{n-2}) + k(2{\mathbf u}^{n-1}-{\mathbf u}^{n-2}) \cdot \nabla (2T^{n-1}-T^{n-2}) \\ && + @@ -573,7 +577,13 @@ form above first) and reads: \\ && - - k_n{\mathbf u}^n \cdot \nabla \left[ + k_n + \left[ + \left(1+\frac{k_n}{k_{n-1}}\right){\mathbf u}^{n-1} - + \frac{k_n}{k_{n-1}}{\mathbf u}^{n-2} + \right] + \cdot \nabla + \left[ \left(1+\frac{k_n}{k_{n-1}}\right)T^{n-1}-\frac{k_n}{k_{n-1}}T^{n-2} \right] \\ @@ -595,9 +605,15 @@ at the weak form of the discretized equations: - \frac{k_n^2}{k_{n-1}(k_n+k_{n-1})} T_h^{n-2} \\ - &&\qquad\qquad + &&\qquad - - k_n{\mathbf u}_h^n \cdot \nabla \left[ + k_n + \left[ + \left(1+\frac{k_n}{k_{n-1}}\right){\mathbf u}^{n-1} - + \frac{k_n}{k_{n-1}}{\mathbf u}^{n-2} + \right] + \cdot \nabla + \left[ \left(1+\frac{k_n}{k_{n-1}}\right)T^{n-1}-\frac{k_n}{k_{n-1}}T^{n-2} \right] + @@ -619,7 +635,8 @@ $\mathbf{n}\cdot\kappa\nabla T|_{\partial\Omega}=0$. 
This then results in a matrix equation of form @f{eqnarray*} - \left( \frac{2k_n+k_{n-1}}{k_n+k_{n-1}} M+k_n A_T\right) T_h^n = F(U_h^n,T_h^{n-1},T_h^{n-2}), + \left( \frac{2k_n+k_{n-1}}{k_n+k_{n-1}} M+k_n A_T\right) T_h^n + = F(U_h^{n-1}, U_h^{n-2},T_h^{n-1},T_h^{n-2}), @f} which given the structure of matrix on the left (the sum of two positive definite matrices) is easily solved using the Conjugate @@ -835,11 +852,11 @@ look like this: A & B^T & 0 \\ B & 0 &0 \\ C & 0 & K \end{array}\right) \left(\begin{array}{ccc} - U^n \\ P^n \\ T^n + U^{n-1} \\ P^{n-1} \\ T^n \end{array}\right) = \left(\begin{array}{ccc} - F_U(T^{n-1}) \\ 0 \\ F_T(U^n,T^{n-1},T^{n-1}) + F_U(T^{n-1}) \\ 0 \\ F_T(U^{n-1},U^{n-2},T^{n-1},T^{n-2}) \end{array}\right). @f} The problem with this is: We never use the whole matrix at the same time. In diff --git a/deal.II/examples/step-31/step-31.cc b/deal.II/examples/step-31/step-31.cc index 35a0cf3b19..dcbdedafee 100644 --- a/deal.II/examples/step-31/step-31.cc +++ b/deal.II/examples/step-31/step-31.cc @@ -364,35 +364,33 @@ namespace LinearSolvers // @sect4{Schur complement preconditioner} - // This is the implementation of - // the Schur complement - // preconditioner as described in - // detail in the introduction. As - // opposed to step-20 and step-22, - // we solve the block system - // all-at-once using GMRES, and use - // the Schur complement of the - // block structured matrix to build - // a good preconditioner instead. + // This is the implementation of the + // Schur complement preconditioner as + // described in detail in the + // introduction. As opposed to step-20 + // and step-22, we solve the block system + // all-at-once using GMRES, and use the + // Schur complement of the block + // structured matrix to build a good + // preconditioner instead. // // Let's have a look at the ideal // preconditioner matrix - // $P=\left(\begin{array}{cc} A & 0 \\ B & - // -S \end{array}\right)$ - // described in the introduction. If - // we apply this matrix in the - // solution of a linear system, - // convergence of an iterative - // GMRES solver will be - // governed by the matrix + // $P=\left(\begin{array}{cc} A & 0 \\ B + // & -S \end{array}\right)$ described in + // the introduction. If we apply this + // matrix in the solution of a linear + // system, convergence of an iterative + // GMRES solver will be governed by the + // matrix // @f{eqnarray*} // P^{-1}\left(\begin{array}{cc} A // & B^T \\ B & 0 // \end{array}\right) = // \left(\begin{array}{cc} I & // A^{-1} B^T \\ 0 & 0 - // \end{array}\right), @f} - // + // \end{array}\right), + // @f} // which indeed is very simple. A // GMRES solver based on exact // matrices would converge in two @@ -407,59 +405,56 @@ namespace LinearSolvers // SIAM J. Numer. Anal., 31 (1994), // pp. 1352-1367). // - // Replacing P by - // $\tilde{P}$ does not change the - // situation dramatically. The - // product $P^{-1} A$ will still be - // close to a matrix with - // eigenvalues 0 and 1, which lets - // us hope to be able to get a - // number of GMRES iterations that - // does not depend on the problem - // size. + // Replacing P by $\tilde{P}$ does + // not change the situation + // dramatically. The product $P^{-1} A$ + // will still be close to a matrix with + // eigenvalues 0 and 1, which lets us + // hope to be able to get a number of + // GMRES iterations that does not depend + // on the problem size. 
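To see what applying this block triangular preconditioner amounts to algorithmically, here is a minimal, library-independent sketch. It is not the BlockSchurPreconditioner class of this program; the three callbacks are placeholders standing in for one application of the AMG preconditioner, a multiplication with the divergence matrix $B$, and a cheap solve with the pressure mass matrix that approximates the Schur complement. Solving $Pz=r$ with $P=\left(\begin{array}{cc} A & 0 \\ B & -S \end{array}\right)$ means one (approximate) solve with the velocity block, followed by one (approximate) Schur complement solve that uses the already computed velocity component:
@code
#include <cstddef>
#include <functional>
#include <vector>

using Vec   = std::vector<double>;
using Apply = std::function<Vec(const Vec &)>;

// One application of P^{-1} to the residual (r_u, r_p).  The callbacks are
// placeholders: approx_A_inverse could be a single AMG V-cycle, apply_B the
// multiplication with the divergence matrix, and approx_S_inverse a solve
// with the pressure mass matrix approximating the Schur complement.
void block_schur_vmult(Vec &z_u, Vec &z_p,
                       const Vec &r_u, const Vec &r_p,
                       const Apply &approx_A_inverse,
                       const Apply &apply_B,
                       const Apply &approx_S_inverse)
{
  z_u = approx_A_inverse(r_u);          // z_u ~= A^{-1} r_u

  Vec tmp = apply_B(z_u);               // tmp  = B z_u
  for (std::size_t i = 0; i < tmp.size(); ++i)
    tmp[i] -= r_p[i];                   // tmp  = B z_u - r_p

  z_p = approx_S_inverse(tmp);          // z_p ~= S^{-1} (B z_u - r_p)
}
@endcode
Replacing the exact solves by approximations only changes how many outer GMRES iterations are needed, not the solution of the linear system itself, which is why the comparatively crude approximations described next are acceptable.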
// - // The deal.II users who have already gone - // through the step-20 and step-22 + // The deal.II users who have already + // gone through the step-20 and step-22 // tutorials can certainly imagine how // we're going to implement this. We // replace the exact inverse matrices in // $P^{-1}$ by some approximate inverses - // built from the InverseMatrix class, and - // the inverse Schur complement will be - // approximated by the pressure mass matrix - // $M_p$ (weighted by $\eta^{-1}$ as - // mentioned in the introduction). As + // built from the InverseMatrix class, + // and the inverse Schur complement will + // be approximated by the pressure mass + // matrix $M_p$ (weighted by $\eta^{-1}$ + // as mentioned in the introduction). As // pointed out in the results section of // step-22, we can replace the exact // inverse of A by just the - // application of a preconditioner, in this - // case on a vector Laplace matrix as was - // explained in the introduction. This does - // increase the number of (outer) GMRES - // iterations, but is still significantly - // cheaper than an exact inverse, which - // would require between 20 and 35 CG + // application of a preconditioner, in + // this case on a vector Laplace matrix + // as was explained in the + // introduction. This does increase the + // number of (outer) GMRES iterations, + // but is still significantly cheaper + // than an exact inverse, which would + // require between 20 and 35 CG // iterations for each outer // solver step (using the AMG // preconditioner). // - // Having the above explanations in - // mind, we define a preconditioner - // class with a vmult - // functionality, which is all we - // need for the interaction with - // the usual solver functions - // further below in the program + // Having the above explanations in mind, + // we define a preconditioner class with + // a vmult functionality, + // which is all we need for the + // interaction with the usual solver + // functions further below in the program // code. // - // First the declarations. These - // are similar to the definition of - // the Schur complement in step-20, - // with the difference that we need - // some more preconditioners in the - // constructor and that the - // matrices we use here are built - // upon Trilinos: + // First the declarations. 
These are + // similar to the definition of the Schur + // complement in step-20, with the + // difference that we need some more + // preconditioners in the constructor and + // that the matrices we use here are + // built upon Trilinos: template class BlockSchurPreconditioner : public Subscriptor { @@ -603,7 +598,8 @@ class BoussinesqFlowProblem const std::vector > &old_old_temperature_grads, const std::vector > &old_temperature_hessians, const std::vector > &old_old_temperature_hessians, - const std::vector > &present_stokes_values, + const std::vector > &old_stokes_values, + const std::vector > &old_old_stokes_values, const std::vector &gamma_values, const double global_u_infty, const double global_T_variation, @@ -624,6 +620,7 @@ class BoussinesqFlowProblem TrilinosWrappers::BlockSparseMatrix stokes_preconditioner_matrix; TrilinosWrappers::BlockVector stokes_solution; + TrilinosWrappers::BlockVector old_stokes_solution; TrilinosWrappers::BlockVector stokes_rhs; @@ -931,43 +928,39 @@ BoussinesqFlowProblem::get_extrapolated_temperature_range () const // The last of the tool functions computes // the artificial viscosity parameter // $\nu|_K$ on a cell $K$ as a function of - // the extrapolated temperature, its gradient - // and Hessian (second derivatives), the - // velocity, the right hand side $\gamma$ all - // on the quadrature points of the current - // cell, and various other parameters as - // described in detail in the introduction. + // the extrapolated temperature, its + // gradient and Hessian (second + // derivatives), the velocity, the right + // hand side $\gamma$ all on the quadrature + // points of the current cell, and various + // other parameters as described in detail + // in the introduction. // - // There are some universal constants - // worth mentioning here. First, we - // need to fix $\beta$; we choose - // $\beta=0.015\cdot dim$, a choice - // discussed in detail in the results - // section of this tutorial - // program. The second is the - // exponent $\alpha$; $\alpha=1$ - // appears to work fine for the - // current program, even though some - // additional benefit might be + // There are some universal constants worth + // mentioning here. First, we need to fix + // $\beta$; we choose $\beta=0.015\cdot + // dim$, a choice discussed in detail in + // the results section of this tutorial + // program. The second is the exponent + // $\alpha$; $\alpha=1$ appears to work + // fine for the current program, even + // though some additional benefit might be // expected from chosing $\alpha = - // 2$. Finally, there is one thing - // that requires special casing: In - // the first time step, the velocity - // equals zero, and the formula for - // $\nu|_K$ is not defined. In that - // case, we return $\nu|_K=5\cdot - // 10^3 \cdot h_K$, a choice - // admittedly more motivated by - // heuristics than anything else (it - // is in the same order of magnitude, - // however, as the value returned for - // most cells on the second time - // step). + // 2$. Finally, there is one thing that + // requires special casing: In the first + // time step, the velocity equals zero, and + // the formula for $\nu|_K$ is not + // defined. In that case, we return + // $\nu|_K=5\cdot 10^3 \cdot h_K$, a choice + // admittedly more motivated by heuristics + // than anything else (it is in the same + // order of magnitude, however, as the + // value returned for most cells on the + // second time step). 
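Since the function now receives the Stokes solutions of the two previous time steps instead of a single current one, the velocity that enters the viscosity formula has to be put together from them. The patch's own code appears further down in this diff; the small stand-alone sketch below (invented names, uniform step size assumed) only spells out the extrapolation $2{\mathbf u}^{n-1}-{\mathbf u}^{n-2}$ at the quadrature points of one cell, the maximal velocity magnitude derived from it, and the first time step fallback just described:
@code
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

constexpr int dim = 2;
// One entry per quadrature point, dim velocity components each.
using VelocityValues = std::vector<std::array<double, dim>>;

double max_extrapolated_velocity(const VelocityValues &u_old,
                                 const VelocityValues &u_old_old)
{
  double max_velocity = 0;
  for (std::size_t q = 0; q < u_old.size(); ++q)
    {
      double norm_sq = 0;
      for (int d = 0; d < dim; ++d)
        {
          // extrapolation to the current time level (uniform step size)
          const double u_d = 2. * u_old[q][d] - u_old_old[q][d];
          norm_sq += u_d * u_d;
        }
      max_velocity = std::max(max_velocity, std::sqrt(norm_sq));
    }
  return max_velocity;
}

int main()
{
  const VelocityValues u_old(4), u_old_old(4); // first time step: all zero
  const double cell_diameter = 0.01;

  if (max_extrapolated_velocity(u_old, u_old_old) == 0)
    std::printf("nu|_K = %g (heuristic first-step fallback 5e3 * h_K)\n",
                5e3 * cell_diameter);
}
@endcode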
// // The rest of the function should be - // mostly obvious based on the - // material discussed in the - // introduction: + // mostly obvious based on the material + // discussed in the introduction: template double BoussinesqFlowProblem:: @@ -977,7 +970,8 @@ compute_viscosity (const std::vector &old_temperature, const std::vector > &old_old_temperature_grads, const std::vector > &old_temperature_hessians, const std::vector > &old_old_temperature_hessians, - const std::vector > &present_stokes_values, + const std::vector > &old_stokes_values, + const std::vector > &old_old_stokes_values, const std::vector &gamma_values, const double global_u_infty, const double global_T_variation, @@ -1000,7 +994,7 @@ compute_viscosity (const std::vector &old_temperature, { Tensor<1,dim> u; for (unsigned int d=0; d::setup_dofs () } // Lastly, we set the vectors for the - // solution $\mathbf u$ and $T^k$, the old - // solutions $T^{k-1}$ and $T^{k-2}$ - // (required for time stepping) and the - // system right hand sides to their correct - // sizes and block structure: + // Stokes solutions $\mathbf u^{n-1}$ and + // $\mathbf u^{n-2}$, as well as for the + // temperatures $T^{n}$, $T^{n-1}$ and + // $T^{n-2}$ (required for time stepping) + // and all the system right hand sides to + // their correct sizes and block + // structure: stokes_solution.reinit (stokes_block_sizes); + old_stokes_solution.reinit (stokes_block_sizes); stokes_rhs.reinit (stokes_block_sizes); temperature_solution.reinit (temperature_dof_handler.n_dofs()); @@ -1961,21 +1958,26 @@ void BoussinesqFlowProblem::assemble_temperature_system () std::vector local_dof_indices (dofs_per_cell); - // Next comes the declaration of vectors to - // hold the old and present solution values - // and gradients at quadrature points of - // the current cell. We also declarate an - // object to hold the temperature right - // hande side values - // (gamma_values), and we - // again use shortcuts for the temperature - // basis functions. Eventually, we need to - // find the maximum of velocity, - // temperature and the diameter of the - // computational domain which will be used - // for the definition of the stabilization + // Next comes the declaration of vectors + // to hold the old and older solution + // values (as a notation for time levels + // n-1 and n-2, + // respectively) and gradients at + // quadrature points of the current + // cell. We also declarate an object to + // hold the temperature right hande side + // values (gamma_values), + // and we again use shortcuts for the + // temperature basis + // functions. Eventually, we need to find + // the maximum of velocity, temperature + // and the diameter of the computational + // domain which will be used for the + // definition of the stabilization // parameter. - std::vector > present_stokes_values (n_q_points, + std::vector > old_stokes_values (n_q_points, + Vector(dim+1)); + std::vector > old_old_stokes_values (n_q_points, Vector(dim+1)); std::vector old_temperature_values (n_q_points); std::vector old_old_temperature_values(n_q_points); @@ -1995,21 +1997,20 @@ void BoussinesqFlowProblem::assemble_temperature_system () global_T_range = get_extrapolated_temperature_range(); const double global_Omega_diameter = GridTools::diameter (triangulation); - // Now, let's start the loop over all cells - // in the triangulation. Again, we need two - // cell iterators that walk in parallel - // through the cells of the two involved - // DoFHandler objects for the Stokes and - // temperature part. 
Within the loop, we - // first set the local rhs to zero, and - // then get the values and derivatives of - // the old solution functions (and the - // current velocity) at the quadrature - // points, since they are going to be - // needed for the definition of the - // stabilization parameters and as - // coefficients in the equation, - // respectively. + // Now, let's start the loop over all + // cells in the triangulation. Again, we + // need two cell iterators that walk in + // parallel through the cells of the two + // involved DoFHandler objects for the + // Stokes and temperature part. Within + // the loop, we first set the local rhs + // to zero, and then get the values and + // derivatives of the old solution + // functions at the quadrature points, + // since they are going to be needed for + // the definition of the stabilization + // parameters and as coefficients in the + // equation, respectively. typename DoFHandler::active_cell_iterator cell = temperature_dof_handler.begin_active(), endc = temperature_dof_handler.end(); @@ -2042,7 +2043,9 @@ void BoussinesqFlowProblem::assemble_temperature_system () gamma_values); stokes_fe_values.get_function_values (stokes_solution, - present_stokes_values); + old_stokes_values); + stokes_fe_values.get_function_values (old_stokes_solution, + old_old_stokes_values); // Next, we calculate the // artificial viscosity for @@ -2074,7 +2077,8 @@ void BoussinesqFlowProblem::assemble_temperature_system () old_old_temperature_grads, old_temperature_hessians, old_old_temperature_hessians, - present_stokes_values, + old_stokes_values, + old_old_stokes_values, gamma_values, global_u_infty, global_T_range.second - global_T_range.first, @@ -2096,9 +2100,16 @@ void BoussinesqFlowProblem::assemble_temperature_system () const Tensor<1,dim> old_old_grad_T = old_old_temperature_grads[q]; - Tensor<1,dim> present_u; + Tensor<1,dim> extrapolated_u; for (unsigned int d=0; d::assemble_temperature_system () old_old_T * phi_T[i] - time_step * - present_u * + extrapolated_u * ((1+time_step/old_time_step) * old_grad_T - time_step / old_time_step * old_old_grad_T) * @@ -2135,7 +2146,7 @@ void BoussinesqFlowProblem::assemble_temperature_system () local_rhs(i) += (old_T * phi_T[i] - time_step * - present_u * old_grad_T * phi_T[i] + extrapolated_u * old_grad_T * phi_T[i] - time_step * nu * @@ -2160,59 +2171,48 @@ void BoussinesqFlowProblem::assemble_temperature_system () // @sect4{BoussinesqFlowProblem::solve} // - // This function solves the linear - // systems of equations. Following to - // the introduction, we start with - // the Stokes system, where we need - // to generate our block Schur - // preconditioner. Since all the - // relevant actions are implemented - // in the class + // This function solves the linear systems + // of equations. Following to the + // introduction, we start with the Stokes + // system, where we need to generate our + // block Schur preconditioner. Since all + // the relevant actions are implemented in + // the class // BlockSchurPreconditioner, - // all we have to do is to - // initialize the class - // appropriately. What we need to + // all we have to do is to initialize the + // class appropriately. What we need to // pass down is an - // InverseMatrix object - // for the pressure mass matrix, - // which we set up using the - // respective class together with - // the IC preconditioner we already - // generated, and the AMG - // preconditioner for the - // velocity-velocity matrix. 
Note - // that both - // Mp_preconditioner and - // Amg_preconditioner are - // only pointers, so we use - // * to pass down the - // actual preconditioner objects. + // InverseMatrix object for + // the pressure mass matrix, which we set + // up using the respective class together + // with the IC preconditioner we already + // generated, and the AMG preconditioner + // for the velocity-velocity matrix. Note + // that both Mp_preconditioner + // and Amg_preconditioner are + // only pointers, so we use * + // to pass down the actual preconditioner + // objects. // - // Once the preconditioner is - // ready, we create a GMRES solver - // for the block system. Since we - // are working with Trilinos data - // structures, we have to set the - // respective template argument in - // the solver. GMRES needs to - // internally store temporary - // vectors for each iteration (see - // the discussion in the - // results section of step-22) - // – the more vectors it can - // use, the better it will + // Once the preconditioner is ready, we + // create a GMRES solver for the block + // system. Since we are working with + // Trilinos data structures, we have to set + // the respective template argument in the + // solver. GMRES needs to internally store + // temporary vectors for each iteration + // (see the discussion in the results + // section of step-22) – the more + // vectors it can use, the better it will // generally perform. To keep memory - // demands in check, we - // set the number of vectors to - // 100. This means that up to 100 - // solver iterations, every - // temporary vector can be - // stored. If the solver needs to - // iterate more often to get the - // specified tolerance, it will - // work on a reduced set of vectors - // by restarting at every 100 - // iterations. + // demands in check, we set the number of + // vectors to 100. This means that up to + // 100 solver iterations, every temporary + // vector can be stored. If the solver + // needs to iterate more often to get the + // specified tolerance, it will work on a + // reduced set of vectors by restarting at + // every 100 iterations. // // With this all set up, we solve the system // and distribute the constraints in the @@ -2595,37 +2595,31 @@ void BoussinesqFlowProblem::refine_mesh (const unsigned int max_grid_level) cell->clear_refine_flag (); // Before we can apply the mesh - // refinement, we have to prepare - // the solution vectors that should - // be transfered to the new grid - // (we will lose the old grid once - // we have done the - // refinement). What we definetely + // refinement, we have to prepare the + // solution vectors that should be + // transfered to the new grid (we will + // lose the old grid once we have done + // the refinement). What we definetely // need are the current and the old // temperature (BDF-2 time stepping - // requires two old - // solutions). Since the - // SolutionTransfer objects only - // support to transfer one object - // per dof handler, we need to - // collect the two temperature - // solutions in one data - // structure. Moreover, we choose - // to transfer the Stokes solution, - // too. The reason for doing so is - // that the Stokes solution will - // not change dramatically from - // step to step, so we get a good - // initial guess for the linear - // solver when we reuse old data, - // which reduces the number of - // needed solver iterations. Next, - // we initialize the - // SolutionTransfer objects, by - // attaching them to the old dof - // handler. 
With this at place, we - // can prepare the triangulation - // and the data vectors for + // requires two old solutions). Since the + // SolutionTransfer objects only support + // to transfer one object per dof + // handler, we need to collect the two + // temperature solutions in one data + // structure. Moreover, we choose to + // transfer the Stokes solution, too. The + // reason for doing so is that the Stokes + // solution will not change dramatically + // from step to step, so we get a good + // initial guess for the linear solver + // when we reuse old data, which reduces + // the number of needed solver + // iterations. Next, we initialize the + // SolutionTransfer objects, by attaching + // them to the old dof handler. With this + // at place, we can prepare the + // triangulation and the data vectors for // refinement (in this order). std::vector x_temperature (2); x_temperature[0].reinit (temperature_solution); @@ -2761,19 +2755,18 @@ void BoussinesqFlowProblem::run () // change in case we've remeshed // before), and then do the // solve. The solution is then - // written to screen. Before going - // on with the next time step, we - // have to check whether we should - // first finish the pre-refinement - // steps or if we should remesh - // (every fifth time step), - // refining up to a level that is - // consistent with initial + // written to screen. Before going on + // with the next time step, we have + // to check whether we should first + // finish the pre-refinement steps or + // if we should remesh (every fifth + // time step), refining up to a level + // that is consistent with initial // refinement and pre-refinement // steps. Last in the loop is to // advance the solutions, i.e. to - // copy the temperature solution to - // the next "older" time level. + // copy the solutions to the next + // "older" time level. assemble_stokes_system (); build_stokes_preconditioner (); assemble_temperature_matrix (); @@ -2798,11 +2791,12 @@ void BoussinesqFlowProblem::run () time += time_step; ++timestep_number; + old_stokes_solution = stokes_solution; old_old_temperature_solution = old_temperature_solution; - old_temperature_solution = temperature_solution; + old_temperature_solution = temperature_solution; } - // Do all the above until we arrive - // at time 100. + // Do all the above until we arrive at + // time 100. while (time <= 100); } @@ -2810,17 +2804,15 @@ void BoussinesqFlowProblem::run () // @sect3{The main function} // - // The main function looks almost - // the same as in all other - // programs. The only difference is - // that Trilinos wants to get the - // arguments from calling the - // function (argc and argv) in - // order to correctly set up the - // MPI system in case we use those - // compilers (even though this - // program is only meant to be run - // in serial). + // The main function looks almost the same + // as in all other programs. The only + // difference is that Trilinos wants to get + // the arguments from calling the function + // (argc and argv) in order to correctly + // set up the MPI system in case those + // compilers are in use (even though this + // program is only meant to be run on one + // processor). int main (int argc, char *argv[]) { try