From 24f7b734d276d61c81e75155c208fbee55dcd14c Mon Sep 17 00:00:00 2001 From: kronbichler Date: Wed, 18 Sep 2013 12:10:58 +0000 Subject: [PATCH] Went through the complete tutorial and made the presentation consistent. git-svn-id: https://svn.dealii.org/trunk@30785 0785d39b-7218-0410-832d-ea1e28bc413d --- deal.II/examples/step-51/doc/intro.dox | 121 ++++++------ deal.II/examples/step-51/doc/results.dox | 144 +++++++------- deal.II/examples/step-51/step-51.cc | 242 ++++++++++++----------- 3 files changed, 265 insertions(+), 242 deletions(-) diff --git a/deal.II/examples/step-51/doc/intro.dox b/deal.II/examples/step-51/doc/intro.dox index ec87ce51db..6679ad0a77 100644 --- a/deal.II/examples/step-51/doc/intro.dox +++ b/deal.II/examples/step-51/doc/intro.dox @@ -61,13 +61,13 @@ first block and the skeleton variables $\Lambda$ as the second block: \begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} U \\ \Lambda \end{pmatrix} = -\begin{pmatrix} F \\ G \end{pmatrix} +\begin{pmatrix} F \\ G \end{pmatrix}. @f} Our aim is now to eliminate the U block with a Schur complement approach similar to step-20, which results in the following two steps: @f{eqnarray*} -(D - C A^{-1} B) \Lambda &=& G - C A^{-1} F \\ -A U &=& F - B \Lambda +(D - C A^{-1} B) \Lambda &=& G - C A^{-1} F, \\ +A U &=& F - B \Lambda. @f} The point is that the presence of $A^{-1}$ is not a problem because $A$ is a block diagonal matrix where each block corresponds to one cell and is @@ -78,9 +78,9 @@ The coupling to other cells is introduced by the matrices matrix A element by element (the local solution of the Dirichlet problem) and subtract $CA^{-1}B$ from $D$. The steps in the Dirichlet-to-Neumann map concept hence correspond to
-  1. constructing the Schur complement matrix $D-C A^{-1} B$ and right hand side $G - C A^{-1} F$,
+  1. constructing the Schur complement matrix $D-C A^{-1} B$ and right hand side $G - C A^{-1} F$ locally on each cell and insert the contribution into the global trace matrix in the usual way,
   2. solving the Schur complement system for $\Lambda$, and
-  3. solving the equation for U using the second equation which uses $\Lambda$.
+  3. solving the equation for U using the second equation, given $\Lambda$.
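To make the cell-by-cell condensation from the first step concrete, the two condensed quantities could be formed with deal.II's dense linear algebra roughly as below. This is only a sketch with illustrative names (A, B, C, D, F, G are assumed to hold the already assembled local blocks); the tutorial itself keeps these objects in its scratch data.

@code
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

// Form the local Schur complement D - C A^{-1} B and the condensed right
// hand side G - C A^{-1} F for one cell; D and G are overwritten.
void condense_local_system(const FullMatrix<double> &A,
                           const FullMatrix<double> &B,
                           const FullMatrix<double> &C,
                           const Vector<double>     &F,
                           FullMatrix<double>       &D,
                           Vector<double>           &G)
{
  FullMatrix<double> A_inverse(A);
  A_inverse.gauss_jordan();                  // invert the small local block A

  FullMatrix<double> A_inverse_B(A.m(), B.n());
  A_inverse.mmult(A_inverse_B, B);           // A^{-1} B

  FullMatrix<double> C_A_inverse_B(C.m(), B.n());
  C.mmult(C_A_inverse_B, A_inverse_B);       // C A^{-1} B
  D.add(-1., C_A_inverse_B);                 // D - C A^{-1} B

  Vector<double> A_inverse_F(F.size());
  A_inverse.vmult(A_inverse_F, F);           // A^{-1} F
  Vector<double> C_A_inverse_F(G.size());
  C.vmult(C_A_inverse_F, A_inverse_F);       // C A^{-1} F
  G.add(-1., C_A_inverse_F);                 // G - C A^{-1} F
}
@endcode

The condensed matrix and vector are then inserted into the global trace system in the usual way, as the program does through ConstraintMatrix::distribute_local_to_global in its copier function further down.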
@@ -135,7 +135,7 @@ and integrate by parts over every element $K$ to obtain: @f} The terms decorated with a hat denote the numerical traces (also commonly referred -to as numerical fluxes). They are approximations +to as numerical fluxes). They are approximations to the interior values on the boundary of the element. To ensure conservation, these terms must be single-valued on any given element edge $\partial K$ even though, with discontinuous shape functions, there may of course be multiple @@ -145,71 +145,77 @@ We eliminate the numerical trace $\hat{\mathbf{q}}$ by using traces of the form: \hat{\mathbf{c} u}+\hat{\mathbf{q}} = \mathbf{c}\hat{u} + \mathbf{q} + \tau(u - \hat{u})\mathbf{n} \quad \text{ on } \partial K. @f} + +The variable $\hat {u}$ is introduced as an additional independent variable +and is the one for which we finally set up a globally coupled linear +system. As mentioned above, it is defined on the element faces and +discontinuous from one face to another. + The local stabilization parameter $\tau$ has effects on stability and accuracy of HDG solutions; see the literature for a further discussion. A stabilization parameter of unity is reported to be the choice which gives best results. A stabilization parameter $\tau$ that tends to infinity prohibits jumps in the solution over the element boundaries, making the HDG solution approach the -approximation of continuous finite elements. In the program below, we choose +approximation with continuous finite elements. In the program below, we choose the stabilization parameter as @f{eqnarray*} \tau = \frac{\kappa}{\ell} + |\mathbf{c} \cdot \mathbf{n}| @f} where we set the diffusion length scale to $\ell = \frac{1}{5}$. -The trace/skeleton variables in HDG methods are single-valued on element faces. As such, -they must strongly represent the Dirichlet data on $\partial\Omega_D$. We introduce -a new variable $\lambda$ such that +The trace/skeleton variables in HDG methods are single-valued on element +faces. As such, they must strongly represent the Dirichlet data on +$\partial\Omega_D$. This means that @f{equation*} - \hat{u} = - \begin{cases} - g_D & \text{ on } \partial \Omega_D, \\ - \lambda & \text{otherwise}. -\end{cases} + \hat{u}|_{\partial \Omega_D} = g_D, @f} +where the equal sign actually means an $L_2$ projection of the boundary +function $g$ onto the space of the face variables (e.g. linear functions on +the faces). This constraint is then applied to the skeleton variable $\hat{u}$ +using inhomogeneous constraints by the method +VectorTools::project_boundary_values. 
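As a rough illustration of that last point, imposing the Dirichlet data on the skeleton unknowns could be wrapped up along the following lines. The helper and its argument names are assumptions made for this sketch, mirroring the description above rather than quoting the program.

@code
#include <deal.II/base/function.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/constraint_matrix.h>
#include <deal.II/numerics/vector_tools.h>

#include <map>

using namespace dealii;

// Project the Dirichlet data g_D onto the trace space (the FE_FaceQ field
// behind 'dof_handler') on the boundary parts with indicator 0 and store the
// result as inhomogeneous constraints on the skeleton unknowns.
template <int dim>
void constrain_skeleton_dirichlet_values(const DoFHandler<dim> &dof_handler,
                                         const unsigned int     face_degree,
                                         const Function<dim>   &dirichlet_function,
                                         ConstraintMatrix      &constraints)
{
  std::map<types::boundary_id, const Function<dim> *> boundary_functions;
  boundary_functions[0] = &dirichlet_function;
  VectorTools::project_boundary_values(dof_handler,
                                       boundary_functions,
                                       QGauss<dim - 1>(face_degree + 1),
                                       constraints);
}
@endcode

The constraints object would be closed afterwards and used when distributing the local contributions into the global matrix, as the program does in its setup and assembly routines.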
-Eliminating $\hat{u}$ from the weak form in favor of $\lambda$, summing the elemental +Summing the elemental contributions across all elements in the triangulation, enforcing the normal component of the numerical flux, and integrating by parts on the equation weighted by $w$, we arrive at the final form of the problem: -Find $(\mathbf{q}_h, u_h, \lambda_h) \in +Find $(\mathbf{q}_h, u_h, \hat{u}_h) \in \mathcal{V}_h^p \times \mathcal{W}_h^p \times \mathcal{M}_h^p$ such that @f{eqnarray*} (\mathbf{v}, \kappa^{-1} \mathbf{q}_h)_{\mathcal{T}} - ( \nabla\cdot\mathbf{v}, u_h)_{\mathcal{T}} - + \left<\mathbf{v}\cdot\mathbf{n}, \lambda_h\right>_{\partial\mathcal{T}} - &=& - - \left<\mathbf{v}\cdot\mathbf{n}, g_D\right>_{\partial\Omega_D}, + + \left<\mathbf{v}\cdot\mathbf{n}, \hat{u}_h\right>_{\partial\mathcal{T}} + &=& 0, \quad \forall \mathbf{v} \in \mathcal{V}_h^p, \\ (w, \mathbf{c}\nabla u_h + \nabla \cdot \mathbf{q}_h)_{\mathcal{T}} - + \left<w, \mathbf{c}\cdot\mathbf{n}(\lambda_h - u_h) + \tau (u_h - \lambda_h)\right>_{\partial \mathcal{T}} + + \left<w, \mathbf{c}\cdot\mathbf{n}(\hat{u}_h - u_h) + \tau (u_h - \hat{u}_h)\right>_{\partial \mathcal{T}} &=& (w, f)_{\mathcal{T}}, \quad \forall w \in \mathcal{W}_h^p, \\ - \left< \mu, \mathbf{c} \lambda_h\cdot \mathbf{n} + \left< \mu, \hat{u}_h\mathbf{c} \cdot \mathbf{n} + \mathbf{q}_h\cdot \mathbf{n} - + \tau (u_h - \lambda_h)\right>_{\partial \mathcal{T}} + + \tau (u_h - \hat{u}_h)\right>_{\partial \mathcal{T}} &=& \left<\mu, g_N\right>_{\partial\Omega_N}, \quad \forall \mu \in \mathcal{M}_h^p. @f} The unknowns $(\mathbf{q}_h, u_h)$ are referred to as local variables; they are -represented as standard DG variables. The unknown $\lambda_h$ is the skeleton +represented as standard DG variables. The unknown $\hat{u}_h$ is the skeleton variable which has support on the codimension-1 surfaces (faces) of the mesh. In the equation above, the space $\mathcal {W}_h^{p}$ for the scalar variable $u_h$ is defined as the space of functions that are tensor -product polynomials of degree $p$ on each cell and discontinuous over the +product polynomials of degree p on each cell and discontinuous over the element boundaries $\mathcal Q_{-p}$, i.e., the space described by FE_DGQ(p). The space for the gradient or flux variable $\mathbf{q}_h$ is a vector element space where each component is locally a polynomial and discontinuous $\mathcal Q_{-p}$. In the code below, we collect these two local parts together in one FESystem where the first @p dim components denote the gradient part and the last scalar component -corresponds to the scalar variable. For the skeleton component $\lambda_h$, we +corresponds to the scalar variable. For the skeleton component $\hat{u}_h$, we define a space that consists of discontinuous tensor product polynomials that live on the element faces, which in deal.II is implemented by the class FE_FaceQ. This space is otherwise similar to FE_DGQ, i.e., the solution @@ -223,11 +229,11 @@ In the weak form given above, we can note the following coupling patterns: • The matrix $A$ represents the local-local coupling. These are the terms with weighting functions $(\mathbf{v}, w)$ multiplying the local solutions $(\mathbf{q}_h, u_h)$.
  • The matrix $B$ represents the local-face coupling. These are the terms with weighting functions $(\mathbf{v}, w)$ multiplying the skeleton variable - $\lambda_h$. + $\hat{u}_h$.
  • The matrix $C$ represents the face-local coupling, which involves the weighting function $\mu$ multiplying the local solutions $(\mathbf{q}_h, u_h)$.
  • The matrix $D$ is the face-face coupling; - terms involve both $\mu$ and $\lambda_h$. + terms involve both $\mu$ and $\hat{u}_h$.
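With these blocks in hand, the second elimination step listed at the beginning of the introduction takes a simple algebraic form per cell. The following is a minimal sketch of recovering the interior unknowns once the skeleton values restricted to the cell are known; the names are illustrative and do not match the tutorial's scratch data.

@code
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

// Given the already inverted local block A^{-1}, the local-face block B, the
// local right hand side F, and the cell-local trace values Lambda, recover
// the interior unknowns U = A^{-1} (F - B Lambda).
void reconstruct_local_solution(const FullMatrix<double> &A_inverse,
                                const FullMatrix<double> &B,
                                const Vector<double>     &F,
                                const Vector<double>     &Lambda,
                                Vector<double>           &U)
{
  Vector<double> rhs(F);              // start from F
  Vector<double> B_Lambda(F.size());
  B.vmult(B_Lambda, Lambda);          // B Lambda
  rhs -= B_Lambda;                    // F - B Lambda
  U.reinit(F.size());
  A_inverse.vmult(U, rhs);            // U = A^{-1} (F - B Lambda)
}
@endcode

In the program below this reconstruction is triggered by the @p trace_reconstruct flag in the assembly routine.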

    Post-processing and super-convergence

@@ -245,42 +251,45 @@ ingredients: on each cell K super-converges at rate $\mathcal{O}(h^{p+2})$. -We now introduce a new variable $u_h^* \in \mathcal{V}_h^{p+1}$. With the two -ingredients above, we immediately deduce the following layout for -post-processing on each element: +We now introduce a new variable $u_h^* \in \mathcal{V}_h^{p+1}$, which we find +by minimizing the expression $|\kappa \nabla u_h^* + \mathbf{q}_h|^2$ over the cell +K under the constraint $\left(1, u_h^*\right)_K = \left(1, +u_h\right)_K$. This translates to the following system of equations: @f{eqnarray*} \left(1, u_h^*\right)_K &=& \left(1, u_h\right)_K\\ -\left(\nabla w_h^*, \kappa \nabla u_h^*\right)_K &=& -\left(\nabla w_h^*, \mathbf{q}_h\right)_K \quad \text{for all } w_h^* \in \mathcal Q^{p+1}. +\left(\nabla w_h^*, \kappa \nabla u_h^*\right)_K &=& +-\left(\nabla w_h^*, \mathbf{q}_h\right)_K +\quad \text{for all } w_h^* \in \mathcal Q^{p+1}. @f} -Since the second set of equations is already equal to the cell-wise dimension -of the new function space $|\mathcal Q^{p+1}|$, there is one more equation in -the above than unknown, which we fix in the code below by omitting one of -these equations (since the Laplacian is singular on the constant function). As -we will see below, this form of the post-processing already gives the desired -super-convergence result with rate $\mathcal {O}(h^{p+2})$. The motivation for -the above equation is the minimization of the expression $|\kappa \nabla u_h^* -+ \mathbf{q}_h|^2$ over the cell K. It should be noted that there is -some freedom in constructing $u_h^*$ and this approach to extract the -information from the gradient is not the only one. In particular, the -post-processed solution defined here does not satisfy the convection-diffusion -equation in any sense. As an alternative, the paper by Nguyen, Peraire and -Cockburn cited above suggests another somewhat more involved formula for -convection-diffusion that can also post-process the flux variable into an -$H(\Omega,\mathrm{div})$-conforming variant and better represents the local -convection-diffusion operator when the diffusion is small. We leave the -implementation of a more sophisticated post-processing as a possible extension -to the interested reader. - -Note that for vector-valued problem, the approach is very similar. One simply -sets the constraint for the mean value of each vector component separately and -uses the gradient for the main work. +Since we test with the whole set of basis functions in the space of tensor +product polynomials of degree p+1 in the second set of equations, this +is an overdetermined system with one more equation than unknowns. We fix this +in the code below by omitting one of these equations (since the rows in the +Laplacian are linearly dependent when representing a constant function). As we +will see below, this form of the post-processing gives the desired +super-convergence result with rate $\mathcal {O}(h^{p+2})$. It should be +noted that there is some freedom in constructing $u_h^*$ and this minimization +approach to extract the information from the gradient is not the only one. In +particular, the post-processed solution defined here does not satisfy the +convection-diffusion equation in any sense.
As an alternative, the paper by +Nguyen, Peraire and Cockburn cited above suggests another somewhat more +involved formula for convection-diffusion that can also post-process the flux +variable into an $H(\Omega,\mathrm{div})$-conforming variant and better +represents the local convection-diffusion operator when the diffusion is +small. We leave the implementation of a more sophisticated post-processing as +a possible extension to the interested reader. + +Note that for vector-valued problems, the post-processing works similarly. One +simply sets the constraint for the mean value of each vector component +separately and uses the gradient as the main source of information.
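To make the element-by-element post-processing more tangible, a cell-local version could be assembled and solved along the following lines. This is a sketch under assumptions: the FEValues object belongs to the degree p+1 space and has already been reinitialized on the current cell, the input vectors hold the values of $\mathbf{q}_h$ and $u_h$ at the quadrature points, and all names are made up for the illustration rather than taken from the program.

@code
#include <deal.II/base/tensor.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/vector.h>

#include <vector>

using namespace dealii;

template <int dim>
void postprocess_one_cell(const FEValues<dim>                &fe_values_post,
                          const std::vector<Tensor<1, dim> > &q_values,
                          const std::vector<double>          &u_values,
                          const double                        kappa,
                          Vector<double>                     &u_post)
{
  const unsigned int dofs_per_cell = fe_values_post.dofs_per_cell;
  const unsigned int n_q_points    = fe_values_post.n_quadrature_points;

  FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
  Vector<double>     cell_rhs(dofs_per_cell);

  // (nabla w, kappa nabla u*)_K = -(nabla w, q_h)_K for all test functions w
  for (unsigned int i = 0; i < dofs_per_cell; ++i)
    for (unsigned int q = 0; q < n_q_points; ++q)
      {
        for (unsigned int j = 0; j < dofs_per_cell; ++j)
          cell_matrix(i, j) += kappa *
                               (fe_values_post.shape_grad(i, q) *
                                fe_values_post.shape_grad(j, q)) *
                               fe_values_post.JxW(q);
        cell_rhs(i) -= (q_values[q] * fe_values_post.shape_grad(i, q)) *
                       fe_values_post.JxW(q);
      }

  // The gradient equations are singular on constants, so overwrite one of
  // them by the mean value constraint (1, u*)_K = (1, u_h)_K.
  for (unsigned int j = 0; j < dofs_per_cell; ++j)
    {
      cell_matrix(0, j) = 0.;
      for (unsigned int q = 0; q < n_q_points; ++q)
        cell_matrix(0, j) += fe_values_post.shape_value(j, q) *
                             fe_values_post.JxW(q);
    }
  cell_rhs(0) = 0.;
  for (unsigned int q = 0; q < n_q_points; ++q)
    cell_rhs(0) += u_values[q] * fe_values_post.JxW(q);

  // The system is small and dense, so we simply invert it and apply the
  // inverse, as the tutorial does for the other local systems.
  cell_matrix.gauss_jordan();
  u_post.reinit(dofs_per_cell);
  cell_matrix.vmult(u_post, cell_rhs);
}
@endcode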

    Problem specific data

    For this tutorial program, we consider almost the same test case as in step-7. The computational domain is $\Omega := [-1,1]^d$ and the exact -solution corresponds to the one in step-7, except for a scaling. We use the following source centers xi for the exponentials +solution corresponds to the one in step-7, except for a scaling. We use the +following source centers xi for the exponentials
    • 1D: $\{x_i\}^1 = \{ -\frac{1}{3}, 0, \frac{1}{3} \}$,
    • 2D: $\{\mathbf{x}_i\}^2 = \{ (-\frac{1}{2},\frac{1}{2}), @@ -312,7 +321,7 @@ Besides implementing the above equations, the implementation below provides the
  • WorkStream to parallelize local solvers. WorkStream is already used in step-32, step-44.
-  • Reconstruct the local DG solution from the trace trace.
+  • Reconstruct the local DG solution from the trace.
      • Post-processing the solution for superconvergence.
      • DataOutFaces for direct output of the global skeleton solution.
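For the first item in this list, the hand-off to WorkStream follows the usual worker/copier pattern. A sketch of the call as it might appear inside the assembly routine is shown below; the member function names mirror those discussed later in the program, but the exact call site here is illustrative rather than quoted.

@code
// Loop over all cells in parallel (requires deal.II/base/work_stream.h):
// assemble_system_one_cell fills the scratch/per-task data for one cell,
// copy_local_to_global writes the condensed result into the global skeleton
// matrix and right hand side.
WorkStream::run(dof_handler.begin_active(),
                dof_handler.end(),
                *this,
                &HDG<dim>::assemble_system_one_cell,
                &HDG<dim>::copy_local_to_global,
                scratch,
                task_data);
@endcode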
diff --git a/deal.II/examples/step-51/doc/results.dox b/deal.II/examples/step-51/doc/results.dox
index 153b3317d1..9f0e2bfbc0 100644
--- a/deal.II/examples/step-51/doc/results.dox
+++ b/deal.II/examples/step-51/doc/results.dox
@@ -1,26 +1,29 @@

      Results

      +

      Program output

      + We first have a look at the output generated by the program when run in 2D. In the four images below, we show the solution for polynomial degree p=1 -and the cycle 2, 3, 4, and 8 of the program. In the plots, we overlay the data -generated from the internal data (DG part) with the skeleton part into the -same plot. We had to generate two different data sets because cells and faces -represent different geometric entities, the combination of which in the same -file are not supported in the VTK output of deal.II. +and cycles 2, 3, 4, and 8 of the program. In the plots, we overlay the data +generated from the internal data (DG part) with the skeleton part (lambda) +into the same plot. We had to generate two different data sets because cells +and faces represent different geometric entities, the combination of which in +the same file are not supported in the VTK output of deal.II. The images show the distinctive features of HDG: The cell solution (colored surfaces) is discontinuous between the cells. The solution on the skeleton variable sits on the faces and ties together the local parts. The skeleton solution is not continuous on the vertices where the faces meet, even though -its values are quite close within lines in one direction. It can be -interpreted as a string between the two sides that balances the jumps in the -solution (or rather, the flux $\kappa \nabla u + \mathbf{c} u$). As the mesh -is refined, the jumps between the cells get small as we represent a smooth -solution, and the skeleton solution approaches the cell parts. For cycle 8, -there is no visible difference in the two variables. We also see how boundary -conditions are implemented weakly. On the lower and left boundaries, we set -Neumann boundary conditions, whereas we set Dirichlet conditions on the right -and top boundaries. +its values are quite close along lines in the same coordinate direction. The +skeleton solution can be interpreted as a rubber spring between the two sides +that balances the jumps in the solution (or rather, the flux $\kappa \nabla u ++ \mathbf{c} u$). As the mesh is refined, the jumps between the cells get +small (we represent a smooth solution), and the skeleton solution approaches +the interior parts. For cycle 8, there is no visible difference in the two +variables. We also see how boundary conditions are implemented weakly and that +the interior variables do not exactly satisfy boundary conditions. On the +lower and left boundaries, we set Neumann boundary conditions, whereas we set +Dirichlet conditions on the right and top boundaries. @@ -68,10 +71,10 @@ analytical solution.
Finally, we look at the solution for p=3 at cycle 2. Despite the coarse -mesh with only 64 cells, the solution looks quite good. And the -post-processed solution is similar in quality than the linear solution (not -post-processed) at cycle 8 with 4,096 cells. This clearly shows the -superiority of high order methods for smooth solutions. +mesh with only 64 cells, the post-processed solution is similar in quality +to the linear solution (not post-processed) at cycle 8 with 4,096 +cells. This clearly shows the superiority of high order methods for smooth +solutions. @@ -137,13 +140,13 @@ global refinement was performed, also the convergence rates. The quadratic convergence rates of Q1 elements in the $L_2$ norm for both the scalar variable and the gradient variable are apparent, as is the cubic rate for the postprocessed scalar variable in the $L_2$ norm. Note that it is a distinctive -feature of an HDG solution. In typical continuous finite element, the gradient -of the solution of order p converges at rate p only, as opposed -to p+1 for the actual solution. Even though superconvergence results -for finite elements are also available (e.g. superconvergent patch recovery -first introduced by Zienkiewicz and Zhu), these are typically limited to -structured meshes and other special cases. Likewise, the scalar variable and -gradient for Q3 elements converge at fourth order and the postprocessed scalar +feature of an HDG solution. In typical continuous finite elements, the +gradient of the solution of order p converges at rate p only, as +opposed to p+1 for the actual solution. Even though superconvergence +results for finite elements are also available (e.g. superconvergent patch +recovery first introduced by Zienkiewicz and Zhu), these are typically limited +to structured meshes and other special cases. For Q3 HDG variables, the scalar +variable and gradient converge at fourth order and the postprocessed scalar variable at fifth order. The same convergence rates are observed in 3d. @@ -188,24 +191,26 @@ cells dofs val L2 grad L2 val L2-post 110592 5419008 3.482e-05 3.94 3.055e-04 3.95 7.374e-07 5.00 @endcode -

      Comparison with continuous finite elements in 2D

      +

      Comparison with continuous finite elements

      -From the convergence tables, we see the expected convergence rates as -mentioned in the introduction. Now, we want to compare the computational +

      Results for 2D

+ +The convergence tables verify the expected convergence rates stated in the +introduction. Now, we want to show a quick comparison of the computational efficiency of the HDG method compared to a usual finite element (continuous Galerkin) method on the problem of this tutorial. Of course, stability aspects of the HDG method compared to continuous finite elements for transport-dominated problems are also important in practice, which is an -aspect not present on a problem with smooth analytic solution. In the picture +aspect not seen on a problem with smooth analytic solution. In the picture below, we compare the $L_2$ error as a function of the number of degrees of -freedom (left) and of the computing time spent in the linear solver for two -space dimensions for continuous finite elements (CG) and the hybridized +freedom (left) and of the computing time spent in the linear solver (right) +for two space dimensions of continuous finite elements (CG) and the hybridized discontinuous Galerkin method presented in this tutorial. As opposed to the tutorial where we only use unpreconditioned BiCGStab, the times shown in the figures below use the Trilinos algebraic multigrid preconditioner in -TrilinosWrappers::PreconditionAMG for the CG part and a wrapper around -ChunkSparseMatrix for the trace variable (in order to utilize the block -structure in the matrix), respectively. +TrilinosWrappers::PreconditionAMG. For the HDG part, a wrapper around +ChunkSparseMatrix for the trace variable has been used in order to utilize the +block structure in the matrix on the finest level.
      @@ -218,14 +223,15 @@ structure in the matrix), respectively.
      -The results in the table show that the HDG method is slower than continuous +The results in the graphs show that the HDG method is slower than continuous finite elements at p=1, about equally fast for cubic elements and faster for sixth order elements. However, we have seen above that the HDG method actually produces solutions which are more accurate than what is represented in the original variables. Therefore, in the next two plots below -we show how the post-processed solution for HDG performs (denoted by $p=1^*$ -for example). We now see a clear advantage of HDG for the same amount of work -for both p=3 and p=6, and about the same quality for p=1. +we instead display the error of the post-processed solution for HDG (denoted +by $p=1^*$ for example). We now see a clear advantage of HDG for the same +amount of work for both p=3 and p=6, and about the same quality +for p=1. @@ -241,9 +247,9 @@ for both p=3 and p=6, and about the same quality for p=1. Since the HDG method actually produces results converging as hp+2, we should compare it to a continuous Galerkin solution with the same asymptotic convergence behavior, i.e., FE_Q with degree -p+1. If we do this, we get the convergence curves as below. We see that +p+1. If we do this, we get the convergence curves below. We see that CG with second order polynomials is again clearly better than HDG with -linears. However, for higher orders the advantage of HDG remains. +linears. However, the advantage of HDG for higher orders remains.
      @@ -257,15 +263,15 @@ linears. However, for higher orders the advantage of HDG remains.
      The results are in line with properties of DG methods in general: Best -performance is typically not achieved for linear elements, but rather around -p=3. This is because of a volume-to-surface effect for discontinuous -solutions with too much of the solution living on the surfaces and hence -duplicating work when the elements are linear. Put in other words, DG methods -are often most efficient when used at relatively high order, despite their -focus on discontinuous (and hence, seemingly low accurate) representation of -solutions. +performance is typically not achieved for linear elements, but rather at +somewhat higher order, usually around p=3. This is because of a +volume-to-surface effect for discontinuous solutions with too much of the +solution living on the surfaces and hence duplicating work when the elements +are linear. Put in other words, DG methods are often most efficient when used +at relatively high order, despite their focus on discontinuous (and hence, +seemingly low accurate) representation of solutions. -

      Comparison with continuous finite elements in 3D

      +

Results for 3D

      We now show the same figures in 3D: The first row shows the number of degrees of freedom and computing time versus the $L_2$ error in the scalar variable @@ -273,14 +279,14 @@ of freedom and computing time versus the $L_2$ error in the scalar variable post-processed HDG solution instead of the original one, and the third row compares the post-processed HDG solution with CG at order p+1. In 3D, the volume-to-surface effect makes the cost of HDG somewhat higher and the CG -solution is clearly better than HDG for linears in any metric. For cubics, HDG -and CG are again of similar quality, whereas HDG is again more efficient for -sixth order polynomials. One can alternatively also use the combination of -FE_DGP and FE_FaceP instead of (FE_DGQ, FE_FaceQ), which do not use tensor -product polynomials of degree p but Legendre polynomials of -complete degree p. While there are less degrees of freedom on -the skeleton variable for FE_FaceP for a given mesh size, the solution quality -(error vs. number of DoFs) is very similar between the two. +solution is clearly better than HDG for linears by any metric. For cubics, HDG +and CG are of similar quality, whereas HDG is again more efficient for sixth +order polynomials. One can alternatively also use the combination of FE_DGP +and FE_FaceP instead of (FE_DGQ, FE_FaceQ), which do not use tensor product +polynomials of degree p but Legendre polynomials of complete +degree p. There are less degrees of freedom on the skeleton variable +for FE_FaceP for a given mesh size, but the solution quality (error vs. number +of DoFs) is very similar to the results for FE_FaceQ. @@ -309,17 +315,17 @@ the skeleton variable for FE_FaceP for a given mesh size, the solution quality
      -One final note on the efficiency comparison: We tried to use similar solvers -(optimal AMG preconditioners for both without particular tuning of the AMG -parameters on any of the two) to give a fair picture of the two methods on a -toy example. It should be noted however that GMG for continuous finite -elements is about a factor four to five faster on this (easy) problem for -p=3 and p=6. The authors of this tutorial have not seen similar -solvers for the HDG linear system. Also, there are other implementation -aspects for CG available such as fast matrix-free approaches as shown in -step-37 that make higher order continuous elements more competitive. Again, it -is not clear to the authors of the tutorial whether similar improvements could -be made for HDG. +One final note on the efficiency comparison: We tried to use general-purpose +sparse matrix structures and similar solvers (optimal AMG preconditioners for +both without particular tuning of the AMG parameters on any of them) to give a +fair picture of the cost versus accuracy of two methods, on a toy example. It +should be noted however that GMG for continuous finite elements is about a +factor four to five faster for p=3 and p=6. The authors of this +tutorial have not seen similarly advanced solvers for the HDG linear +systems. Also, there are other implementation aspects for CG available such as +fast matrix-free approaches as shown in step-37 that make higher order +continuous elements more competitive. Again, it is not clear to the authors of +the tutorial whether similar improvements could be made for HDG.

      Possibilities for improvements

      @@ -404,7 +410,9 @@ improvement makes most sense. multigrid preconditioner from Trilinos. For diffusion-dominated problems as the problem at hand with finer meshes, such a solver can be designed that uses the matrix-vector products from the more efficient ChunkSparseMatrix on - the finest level, as long as we are not working in parallel with MPI. + the finest level, as long as we are not working in parallel with MPI. For + MPI-parallelized computation, a standard TrilinosWrappers::SparseMatrix can + be used.
    • Speed up assembly by pre-assembling parts that do not change from one cell to another (those that do neither contain variable coefficients nor diff --git a/deal.II/examples/step-51/step-51.cc b/deal.II/examples/step-51/step-51.cc index 522e9ce2b3..ad9c58ff03 100644 --- a/deal.II/examples/step-51/step-51.cc +++ b/deal.II/examples/step-51/step-51.cc @@ -328,25 +328,24 @@ namespace Step51 private: -// Data for the assembly and solution of the primal variables. - struct PerTaskData; - struct ScratchData; - -// Post-processing the solution to obtain $u^*$ is an element-by-element -// procedure; as such, we do not need to assemble any global data and do -// not declare any 'task data' for WorkStream to use. - struct PostProcessScratchData; - void setup_system (); void assemble_system (const bool reconstruct_trace = false); void solve (); void postprocess (); - void refine_grid (const unsigned int cylce); void output_results (const unsigned int cycle); -// The following three functions are used by WorkStream to do the actual work of -// the program. + // Data for the assembly and solution of the primal variables. + struct PerTaskData; + struct ScratchData; + + // Post-processing the solution to obtain $u^*$ is an element-by-element + // procedure; as such, we do not need to assemble any global data and do + // not declare any 'task data' for WorkStream to use. + struct PostProcessScratchData; + + // The following three functions are used by WorkStream to do the actual + // work of the program. void assemble_system_one_cell (const typename DoFHandler::active_cell_iterator &cell, ScratchData &scratch, PerTaskData &task_data); @@ -407,14 +406,14 @@ namespace Step51 ConvergenceTable convergence_table; }; -// @sect3{The HDG class implementation} + // @sect3{The HDG class implementation} -// @sect4{Constructor} -// The constructor is similar to those in other examples, -// with the exception of handling multiple DoFHandler and -// FiniteElement objects. Note that we create a system of finite -// elements for the local DG part, including the gradient/flux part and the -// scalar part. + // @sect4{Constructor} + // The constructor is similar to those in other examples, + // with the exception of handling multiple DoFHandler and + // FiniteElement objects. Note that we create a system of finite + // elements for the local DG part, including the gradient/flux part and the + // scalar part. template HDG::HDG (const unsigned int degree, const RefinementMode refinement_mode) : @@ -430,11 +429,11 @@ namespace Step51 -// @sect4{HDG::setup_system} -// The system for an HDG solution is setup in an analogous manner to most -// of the other tutorial programs. We are careful to distribute dofs with -// all of our DoFHandler objects. The @p solution and @p system_matrix -// objects go with the global skeleton solution. + // @sect4{HDG::setup_system} + // The system for an HDG solution is setup in an analogous manner to most + // of the other tutorial programs. We are careful to distribute dofs with + // all of our DoFHandler objects. The @p solution and @p system_matrix + // objects go with the global skeleton solution. template void HDG::setup_system () @@ -464,6 +463,10 @@ namespace Step51 constraints); constraints.close (); + // When creating the chunk sparsity pattern, we first create the usual + // compressed sparsity pattern and then set the chunk size, which is equal + // to the number of dofs on a face, when copying this into the final + // sparsity pattern. 
{ CompressedSimpleSparsityPattern csp (dof_handler.n_dofs()); DoFTools::make_sparsity_pattern (dof_handler, csp, @@ -475,21 +478,22 @@ namespace Step51 -// @sect4{HDG::PerTaskData} -// Next come the definition of the local data -// structures for the parallel assembly. The first structure @p PerTaskData -// contains the local vector and matrix that are written into the global -// matrix, whereas the ScratchData contains all data that we need for the -// local assembly. There is one variable worth noting here, namely the boolean -// variable @p trace_reconstruct. As mentioned introdution, we solve the HDG -// system in two steps. First, we create a linear system for the skeleton -// system where we condense the local part into it by $D-CA^{-1}B$. Then, we -// solve for the local part using the skeleton solution. For these two steps, -// we need the same matrices on the elements twice, which we want to compute -// by two assembly steps. Since most of the code is similar, we do this with -// the same function but only switch between the two based on a flag that we -// set when starting the assembly. Since we need to pass this information on -// to the local worker routines, we store it once in the task data. + // @sect4{HDG::PerTaskData} + // Next come the definition of the local data structures for the parallel + // assembly. The first structure @p PerTaskData contains the local vector + // and matrix that are written into the global matrix, whereas the + // ScratchData contains all data that we need for the local assembly. There + // is one variable worth noting here, namely the boolean variable @p + // trace_reconstruct. As mentioned in the introdution, we solve the HDG + // system in two steps. First, we create a linear system for the skeleton + // system where we condense the local part into it via the Schur complement + // $D-CA^{-1}B$. Then, we solve for the local part using the skeleton + // solution. For these two steps, we need the same matrices on the elements + // twice, which we want to compute by two assembly steps. Since most of the + // code is similar, we do this with the same function but only switch + // between the two based on a flag that we set when starting the + // assembly. Since we need to pass this information on to the local worker + // routines, we store it once in the task data. template struct HDG::PerTaskData { @@ -509,19 +513,19 @@ namespace Step51 -// @sect4{HDG::ScratchData} -// @p ScratchData contains persistent data for each -// thread within WorkStream. The FEValues, matrix, -// and vector objects should be familiar by now. There are two objects that -// need to be discussed: @p std::vector > -// fe_local_support_on_face and @p std::vector > -// fe_support_on_face. These are used to indicate whether or not the finite -// elements chosen have support (non-zero values) on a given face of the -// reference cell for the local part associated to @p fe_local and the -// skeleton part @p f, which is why we can extract this information in the -// constructor and store it once for all cells that we work on. Had we not -// stored this information, we would be forced to assemble a large number of -// zero terms on each cell, which would significantly slow the program. + // @sect4{HDG::ScratchData} + // @p ScratchData contains persistent data for each + // thread within WorkStream. The FEValues, matrix, + // and vector objects should be familiar by now. 
There are two objects that + // need to be discussed: @p std::vector > + // fe_local_support_on_face and @p std::vector > + // fe_support_on_face. These are used to indicate whether or not the finite + // elements chosen have support (non-zero values) on a given face of the + // reference cell for the local part associated to @p fe_local and the + // skeleton part @p fe. We extract this information in the + // constructor and store it once for all cells that we work on. Had we not + // stored this information, we would be forced to assemble a large number of + // zero terms on each cell, which would significantly slow the program. template struct HDG::ScratchData { @@ -621,10 +625,10 @@ namespace Step51 -// @sect4{HDG::PostProcessScratchData} -// @p PostProcessScratchData contains the data used by WorkStream -// when post-processing the local solution $u^*$. It is similar, but much -// simpler, than @p ScratchData. + // @sect4{HDG::PostProcessScratchData} + // @p PostProcessScratchData contains the data used by WorkStream + // when post-processing the local solution $u^*$. It is similar, but much + // simpler, than @p ScratchData. template struct HDG::PostProcessScratchData { @@ -671,28 +675,13 @@ namespace Step51 -// @sect4{HDG::copy_local_to_global} -// If we are in the first step of the solution, i.e. @p trace_reconstruct=false, -// then we assemble the global system. - template - void HDG::copy_local_to_global(const PerTaskData &data) - { - if (data.trace_reconstruct == false) - constraints.distribute_local_to_global (data.cell_matrix, - data.cell_vector, - data.dof_indices, - system_matrix, system_rhs); - } - - - -// @sect4{HDG::assemble_system} -// The @p assemble_system function is similar to Step-32, where -// the quadrature formula and the update flags are set up, and then -// WorkStream is used to do the work in a multi-threaded manner. -// The @p trace_reconstruct input parameter is used to decide whether we are -// solving for the local solution (true) or the global skeleton solution -// (false). + // @sect4{HDG::assemble_system} + // The @p assemble_system function is similar to Step-32, where + // the quadrature formula and the update flags are set up, and then + // WorkStream is used to do the work in a multi-threaded + // manner. The @p trace_reconstruct input parameter is used to decide + // whether we are solving for the global skeleton solution (false) or the + // local solution (true). template void HDG::assemble_system (const bool trace_reconstruct) @@ -729,17 +718,17 @@ namespace Step51 -// @sect4{HDG::assemble_system_one_cell} -// The real work of the HDG program is done by @p assemble_system_one_cell. -// Assembling the local matrices $A, B, C$ is done here, along with the -// local contributions of the global matrix $D$. + // @sect4{HDG::assemble_system_one_cell} + // The real work of the HDG program is done by @p assemble_system_one_cell. + // Assembling the local matrices $A, B, C$ is done here, along with the + // local contributions of the global matrix $D$. template void HDG::assemble_system_one_cell (const typename DoFHandler::active_cell_iterator &cell, ScratchData &scratch, PerTaskData &task_data) { -// Construct iterator for dof_handler_local for FEValues reinit function. + // Construct iterator for dof_handler_local for FEValues reinit function. 
typename DoFHandler::active_cell_iterator loc_cell (&triangulation, cell->level(), @@ -769,7 +758,8 @@ namespace Step51 // (referred to as matrix $A$ in the introduction) corresponding to // local-local coupling, as well as the local right-hand-side vector. We // store the values at each quadrature point for the basis functions, the - // right-hand-side value, and the convection velocity. + // right-hand-side value, and the convection velocity, in order to have + // quick access to these fields. for (unsigned int q=0; q + void HDG::copy_local_to_global(const PerTaskData &data) + { + if (data.trace_reconstruct == false) + constraints.distribute_local_to_global (data.cell_matrix, + data.cell_vector, + data.dof_indices, + system_matrix, system_rhs); + } + + + + // @sect4{HDG::solve} + // The skeleton solution is solved for by using a BiCGStab solver with + // identity preconditioner. template void HDG::solve () { @@ -1152,7 +1157,7 @@ namespace Step51 } // Having assembled all terms, we can again go on and solve the linear - // system. We again invert the matrix and then multiply the inverse by the + // system. We invert the matrix and then multiply the inverse by the // right hand side. An alternative (and more numerically stable) would have // been to only factorize the matrix and apply the factorization. scratch.cell_matrix.gauss_jordan(); @@ -1162,14 +1167,14 @@ namespace Step51 -// @sect4{HDG::output_results} -// We have 3 sets of results that we would like to output: the local solution, -// the post-processed local solution, and the skeleton solution. The former 2 -// both `live' on element volumes, wheras the latter lives on codimention-1 surfaces -// of the triangulation. Our @p output_results function writes all local solutions -// to the same vtk file, even though they correspond to different DoFHandler -// objects. The graphical output for the skeleton variable is done through -// use of the DataOutFaces class. + // @sect4{HDG::output_results} + // We have 3 sets of results that we would like to output: the local solution, + // the post-processed local solution, and the skeleton solution. The former 2 + // both 'live' on element volumes, wheras the latter lives on codimention-1 surfaces + // of the triangulation. Our @p output_results function writes all local solutions + // to the same vtk file, even though they correspond to different DoFHandler + // objects. The graphical output for the skeleton variable is done through + // use of the DataOutFaces class. template void HDG::output_results (const unsigned int cycle) { @@ -1196,8 +1201,8 @@ namespace Step51 DataOut data_out; -// We first define the names and types of the local solution, -// and add the data to @p data_out. + // We first define the names and types of the local solution, + // and add the data to @p data_out. std::vector names (dim, "gradient"); names.push_back ("solution"); std::vector @@ -1208,9 +1213,9 @@ namespace Step51 data_out.add_data_vector (dof_handler_local, solution_local, names, component_interpretation); -// The second data item we add is the post-processed solution. -// In this case, it is a single scalar variable belonging to -// a different DoFHandler. + // The second data item we add is the post-processed solution. + // In this case, it is a single scalar variable belonging to + // a different DoFHandler. 
std::vector post_name(1,"u_post"); std::vector post_comp_type(1, DataComponentInterpretation::component_is_scalar); @@ -1302,7 +1307,7 @@ namespace Step51 } } - // Just as in step-7, we set the boundary indicator of one of the faces to 1 + // Just as in step-7, we set the boundary indicator of two of the faces to 1 // where we want to specify Neumann boundary conditions instead of Dirichlet // conditions. Since we re-create the triangulation every time for global // refinement, the flags are set in every refinement step, not just at the @@ -1319,10 +1324,10 @@ namespace Step51 cell->face(face)->set_boundary_indicator (1); } -// @sect4{HDG::run} -// The functionality here is basically the same as Step-7. -// We loop over 10 cycles, refining the grid on each one. At the end, -// convergence tables are created. + // @sect4{HDG::run} + // The functionality here is basically the same as Step-7. + // We loop over 10 cycles, refining the grid on each one. At the end, + // convergence tables are created. template void HDG::run () { @@ -1350,11 +1355,10 @@ namespace Step51 // There is one minor change for the convergence table compared to step-7: // Since we did not refine our mesh by a factor two in each cycle (but // rather used the sequence 2, 3, 4, 6, 8, 12, ...), we need to tell the - // convergence rate evaluation about this. We do this by setting the number - // of cells as a reference column and additionally specifying the dimension - // of the problem, which gives the computation the necessary information for - // how much the mesh was refinement given a certain increase in the number - // of cells. + // convergence rate evaluation about this. We do this by setting the + // number of cells as a reference column and additionally specifying the + // dimension of the problem, which gives the necessary information for the + // relation between number of cells and mesh size. if (refinement_mode == global_refinement) { convergence_table @@ -1369,6 +1373,8 @@ namespace Step51 } // end of namespace Step51 + + int main (int argc, char **argv) { const unsigned int dim = 2; -- 2.39.5