must solve in an implicit system. This is because, unlike continuous finite
elements, in typical discontinuous elements there is one degree of freedom at
each vertex <i>for each of the adjacent elements</i>, rather than just one,
-and similarly for edges and faces. As another example,
-for the FE_DGP_Monomial basis, each
+and similarly for edges and faces. As an example of how fast the number of
+unknowns grows,
+consider the <code>FE_DGP_Monomial</code> basis: each
scalar solution component is represented by polynomials of degree $p$
-which yields $(1/dim!)*\prod_{i=1}^{dim}(p+i)$ degrees of freedom per
-element. Typically, all degrees of freedom in an element are coupled
+with $\frac{1}{\text{dim}!}\prod_{i=1}^{\text{dim}}(p+i)$ degrees of freedom per
+element. Typically, all degrees of freedom in an element are coupled
to all of the degrees of freedom in the adjacent elements. The resulting
discrete equations yield very large linear systems very quickly, especially
-for systems of equations in dim=2 or dim=3.
+for systems of equations in 2 or 3 dimensions.
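For example, for a single scalar field of degree $p=3$ in three space
dimensions this formula already gives
@f[
  \frac{1}{3!}\,(3+1)(3+2)(3+3) = 20
@f]
degrees of freedom per element for that single field alone.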
<h4> Reducing the size of the linear system </h4>
To alleviate the computational cost of solving such large linear systems,
matrix <i>A</i> element by element (the local solution of the Dirichlet
problem) and subtract $CA^{-1}B$ from $D$. The steps in the Dirichlet-to-Neumann map concept hence correspond to
<ol>
- <li> constructing the Schur complement matrix $D-C A^{-1} B$ and right hand side $G - C A^{-1} F$ <i>locally on each cell</i> and insert the contribution into the global trace matrix in the usual way,
+ <li> constructing the Schur complement matrix $D-C A^{-1} B$ and right hand side $G - C A^{-1} F$ <i>locally on each cell</i>
+ and inserting the contribution into the global trace matrix in the usual way,
<li> solving the Schur complement system for $\Lambda$, and
- <li> solving the equation for <i>U</i> using the second equation, given $\Lambda$.
+ <li> solving for <i>U</i> using the second equation, given $\Lambda$.
</ol>
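As a rough sketch of the first and third of these steps (not the actual
implementation of this program, which performs them inside its assembly and
solver routines), the local condensation and recovery could be written with
deal.II's dense linear algebra classes as follows; the matrix and vector names
follow the notation above, and the function names are purely illustrative:
@code
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

// Step 1: replace A by its inverse and form the local Schur complement
// D - C A^{-1} B as well as the condensed right hand side G - C A^{-1} F.
void condense_local_system(FullMatrix<double>       &A, // overwritten by A^{-1}
                           const FullMatrix<double> &B,
                           const FullMatrix<double> &C,
                           FullMatrix<double>       &D, // overwritten by D - C A^{-1} B
                           const Vector<double>     &F,
                           Vector<double>           &G) // overwritten by G - C A^{-1} F
{
  A.gauss_jordan();                 // invert the small local matrix in place

  FullMatrix<double> A_inv_B(A.m(), B.n());
  A.mmult(A_inv_B, B);              // A^{-1} B

  FullMatrix<double> C_A_inv_B(C.m(), B.n());
  C.mmult(C_A_inv_B, A_inv_B);      // C A^{-1} B
  D.add(-1., C_A_inv_B);            // D - C A^{-1} B

  Vector<double> A_inv_F(F.size());
  A.vmult(A_inv_F, F);              // A^{-1} F
  Vector<double> C_A_inv_F(G.size());
  C.vmult(C_A_inv_F, A_inv_F);      // C A^{-1} F
  G.add(-1., C_A_inv_F);            // G - C A^{-1} F
}

// Step 3: once the trace system has been solved for Lambda, recover the
// element unknowns from U = A^{-1} (F - B Lambda).
void recover_local_solution(const FullMatrix<double> &A_inverse,
                            const FullMatrix<double> &B,
                            const Vector<double>     &F,
                            const Vector<double>     &Lambda,
                            Vector<double>           &U)
{
  Vector<double> B_Lambda(F.size());
  B.vmult(B_Lambda, Lambda);        // B Lambda
  Vector<double> rhs(F);
  rhs.add(-1., B_Lambda);           // F - B Lambda
  A_inverse.vmult(U, rhs);          // U = A^{-1} (F - B Lambda)
}
@endcode
The condensed matrix and right hand side computed in the first function are
what gets assembled into the global trace system, and the second function is
applied cell by cell once $\Lambda$ is known.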
(\mathbf{v}, \kappa^{-1} \mathbf{q})_K - (\nabla\cdot\mathbf{v}, u)_K
+ \left<\mathbf{v}\cdot\mathbf{n}, \hat{u}\right>_{\partial K} &=& 0, \\
- (\nabla w, \mathbf{c} u + \mathbf{q})_K
- + \left<(w, \hat{\mathbf{c} u}+\hat{\mathbf{q}})\cdot\mathbf{n}\right>_{\partial K}
+ + \left<w, (\widehat{\mathbf{c} u}+\hat{\mathbf{q}})\cdot\mathbf{n}\right>_{\partial K}
&=& (w,f)_K.
@f}
values coming from the cells adjacent to an interface.
We eliminate the numerical trace $\hat{\mathbf{q}}$ by using traces of the form:
@f{eqnarray*}
- \hat{\mathbf{c} u}+\hat{\mathbf{q}} = \mathbf{c}\hat{u} + \mathbf{q}
+ \widehat{\mathbf{c} u}+\hat{\mathbf{q}} = \mathbf{c}\hat{u} + \mathbf{q}
+ \tau(u - \hat{u})\mathbf{n} \quad \text{ on } \partial K.
@f}
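Inserting this expression into the boundary term of the second equation above
and using $\mathbf{n}\cdot\mathbf{n}=1$ gives, written out for clarity, the
intermediate form
@f{eqnarray*}
  \left<w, (\widehat{\mathbf{c} u}+\hat{\mathbf{q}})\cdot\mathbf{n}\right>_{\partial K}
  &=&
  \left<w, (\mathbf{c}\hat{u} + \mathbf{q})\cdot\mathbf{n}\right>_{\partial K}
  + \left<w, \tau (u - \hat{u})\right>_{\partial K},
@f}
so that only the unknowns $u$, $\mathbf{q}$, and $\hat{u}$ remain in the local
equations.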
and cycles 2, 3, 4, and 8 of the program. In the plots, we overlay the data
generated on the cell interiors (DG part) with the skeleton part ($\hat{u}$)
in the same plot. We had to generate two different data sets because cells
-and faces represent different geometric entities, the combination of which in
-the same file are not supported in the VTK output of deal.II.
+and faces represent different geometric entities, and combining them in the
+same file is not supported by the VTK output of deal.II.
The images show the distinctive features of HDG: The cell solution (colored
surfaces) is discontinuous between the cells. The solution on the skeleton
order polynomials. One can alternatively use the combination of FE_DGP
and FE_FaceP instead of (FE_DGQ, FE_FaceQ); these elements do not use tensor
product polynomials of degree <i>p</i> but Legendre polynomials of <i>complete</i>
-degree <i>p</i>. There are less degrees of freedom on the skeleton variable
+degree <i>p</i>. There are fewer degrees of freedom on the skeleton variable
for FE_FaceP for a given mesh size, but the solution quality (error vs. number
of DoFs) is very similar to the results for FE_FaceQ.
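As a sketch of what this alternative element choice could look like in code
(the variable names and the grouping of the cell variables into one
<code>FESystem</code> mirror the FE_DGQ/FE_FaceQ setup of this program; the
space dimension and polynomial degree are only examples):
@code
#include <deal.II/fe/fe_dgp.h>
#include <deal.II/fe/fe_face.h>
#include <deal.II/fe/fe_system.h>

using namespace dealii;

const unsigned int dim    = 2;
const unsigned int degree = 3;

// Cell variables (q, u): dim+1 components, each using Legendre polynomials
// of complete degree p on the cells.
FESystem<dim> fe_local(FE_DGP<dim>(degree), dim + 1);

// Skeleton variable u_hat, living only on the mesh faces.
FE_FaceP<dim> fe(degree);
@endcode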
As already mentioned in the introduction, one possibility is to implement
another post-processing technique as discussed in the literature.
-A second thing that is not done optimally relates to the performance of this
-program, which is of course an issue in practical applications (weighting in
+A second item that is not done optimally relates to the performance of this
+program, which is of course an issue in practical applications (weighing in
also the better solution quality of (H)DG methods for transport-dominated
problems). Let us look at
the computing time of the tutorial program and the share of the individual
// Dirichlet boundary conditions, just as in a continuous Galerkin finite
// element method. We can enforce the boundary conditions in an analogous
// manner through the use of <code>ConstraintMatrix</code> constructs. In
- // addition, hanging nodes where cells of different refinement levels meet
- // are set as for continuous finite elements: For the face elements which
+ // addition, hanging nodes are handled in the same way as for
+ // continuous finite elements: For the face elements which
// only define degrees of freedom on the face, this process sets the
// solution on the refined side to be the one from the coarse side.
ConstraintMatrix constraints;
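// As a sketch of how such a constraints object is typically filled during
// setup (assuming the usual DoFHandler member, here called
// <code>dof_handler</code>; the boundary indicator, quadrature degree, and
// the <code>solution_function</code> object are only illustrative, since the
// program defines its actual boundary data elsewhere):
// @code
//   constraints.clear();
//   DoFTools::make_hanging_node_constraints(dof_handler, constraints);
//
//   std::map<types::boundary_id, const Function<dim> *> boundary_functions;
//   boundary_functions[0] = &solution_function;
//   VectorTools::project_boundary_values(dof_handler,
//                                        boundary_functions,
//                                        QGauss<dim - 1>(fe.degree + 1),
//                                        constraints);
//   constraints.close();
// @endcode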
// @sect4{HDG::PerTaskData}
- // Next come the definition of the local data structures for the parallel
+ // Next comes the definition of the local data structures for the parallel
// assembly. The first structure @p PerTaskData contains the local vector
// and matrix that are written into the global matrix, whereas the
// @p ScratchData contains all data that we need for the local assembly. There
scratch.u_phi[k] = scratch.fe_face_values_local[scalar].value(kk,q);
}
- // When @p trace_reconstruct=false, we are preparing assembling the
+ // When @p trace_reconstruct=false, we are preparing to assemble the
// system for the skeleton variable $\hat{u}$. If this is the case,
// we must assemble all local matrices associated with the problem:
// local-local, local-face, face-local, and face-face. The
// discussion in the introduction, we need to set up a system that projects
// the gradient part of the DG solution onto the gradient of the
// post-processed variable. Moreover, we need to set the average of the new
- // post-processed variable to be equal the average of the scalar DG solution
+ // post-processed variable to equal the average of the scalar DG solution
// on the cell.
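// Stated as formulas (a sketch consistent with the introduction; recall that
// $\mathbf{q} = -\kappa \nabla u$, which fixes the sign below): on each cell
// $K$ we look for a post-processed solution $u^*$ of polynomial degree $p+1$
// such that
// @f{eqnarray*}
//   \left(1, u^*\right)_K &=& \left(1, u\right)_K, \\
//   \left(\nabla w, \kappa \nabla u^*\right)_K &=&
//     -\left(\nabla w, \mathbf{q}\right)_K
//     \quad \text{for all } w \text{ of degree } p+1.
// @f}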
//
// More technically speaking, the projection of the gradient is a system
// Having assembled all terms, we can again go on and solve the linear
// system. We invert the matrix and then multiply the inverse by the
- // right hand side. An alternative (and more numerically stable) would have
+ // right hand side. An alternative (and more numerically stable) method would have
// been to only factorize the matrix and apply the factorization.
scratch.cell_matrix.gauss_jordan();
scratch.cell_matrix.vmult(scratch.cell_sol, scratch.cell_rhs);
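// As a sketch of that alternative (not what this program does): one could
// copy the local matrix into a LAPACKFullMatrix <i>before</i> the call to
// gauss_jordan() above, compute its LU factorization once, and apply the
// factorization to the right hand side, for example
// @code
//   LAPACKFullMatrix<double> factorized(scratch.cell_matrix.m(),
//                                       scratch.cell_matrix.n());
//   factorized = scratch.cell_matrix;
//   factorized.compute_lu_factorization();
//   scratch.cell_sol = scratch.cell_rhs;
//   factorized.apply_lu_factorization(scratch.cell_sol, false);
// @endcode
// (this needs the <deal.II/lac/lapack_full_matrix.h> header).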