The above procedure also has a linear algebra interpretation and is referred to
as static condensation. Let us write the complete linear system associated to
the HDG problem as a block system with the discrete DG variables $U$ as
first block and the skeleton variables $\Lambda$ as the second block:
@f{eqnarray*}
\begin{pmatrix} A & B \\ C & D \end{pmatrix}
\begin{pmatrix} U \\ \Lambda \end{pmatrix}
=
\begin{pmatrix} F \\ G \end{pmatrix}.
@f}
Our aim is now to eliminate the $U$ block with a Schur complement
approach similar to step-20, which results in the following two steps:
@f{eqnarray*}
(D - C A^{-1} B) \Lambda &=& G - C A^{-1} F, \\
A U &=& F - B \Lambda.
@f}
The presence of $A^{-1}$ in the first equation is not a problem because $A$ is a
block diagonal matrix where each block corresponds to one cell and is
therefore easy enough to invert.
The coupling to other cells is introduced by the matrices
$B$ and $C$ over the skeleton variable. The block-diagonality of
$A$ and the structure in $B$ and $C$ allow us to invert the
matrix $A$ element by element (the local solution of the Dirichlet
problem) and subtract $CA^{-1}B$ from $D$. The steps in the Dirichlet-to-Neumann map concept hence correspond to
<ol>
<li> constructing the Schur complement matrix $D-C A^{-1} B$ and right hand side $G - C A^{-1} F$ <i>locally on each cell</i>
and inserting the contribution into the global trace matrix in the usual way (see the code sketch after this list),
<li> solving the Schur complement system for $\Lambda$, and
<li> solving for $U$ using the second equation, given $\Lambda$.
</ol>
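A hedged sketch of the first of these steps on a single cell, using deal.II's dense linear algebra for the cell-local blocks (function and variable names are illustrative, not the tutorial's actual implementation):
@code
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

// Form the cell-local contribution to the condensed trace system:
// D <- D - C A^{-1} B  and  G <- G - C A^{-1} F.
void condense_local_system(FullMatrix<double> &      A, // cell-local block, overwritten by A^{-1}
                           const FullMatrix<double> &B,
                           const FullMatrix<double> &C,
                           FullMatrix<double> &      D, // overwritten by D - C A^{-1} B
                           const Vector<double> &    F,
                           Vector<double> &          G) // overwritten by G - C A^{-1} F
{
  A.gauss_jordan(); // A now stores A^{-1}; cheap because the block is small and dense

  FullMatrix<double> A_inv_B(A.m(), B.n());
  A.mmult(A_inv_B, B); // A^{-1} B

  FullMatrix<double> C_Ainv_B(C.m(), B.n());
  C.mmult(C_Ainv_B, A_inv_B); // C A^{-1} B
  D.add(-1., C_Ainv_B);       // D - C A^{-1} B

  Vector<double> A_inv_F(F.size());
  A.vmult(A_inv_F, F); // A^{-1} F
  Vector<double> C_Ainv_F(G.size());
  C.vmult(C_Ainv_F, A_inv_F); // C A^{-1} F
  G.add(-1., C_Ainv_F);       // G - C A^{-1} F
}
@endcode
The condensed matrix and right hand side computed this way are distributed into the global trace system in the usual way; once $\Lambda$ is known, the third step reuses $A^{-1}$ cell by cell to recover $U$.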
from the DG method, with jumps of the solution over the cell boundaries.
In the equation above, the space $\mathcal {W}_h^{p}$ for the scalar variable
$u_h$ is defined as the space of functions that are tensor
product polynomials of degree $p$ on each cell and discontinuous over the
element boundaries $\mathcal Q_{-p}$, i.e., the space described by
<code>FE_DGQ<dim>(p)</code>. The space for the gradient or flux variable
$\mathbf{q}_h$ is a vector element space where each component is
a locally polynomial and discontinuous $\mathcal Q_{-p}$. In the code below,
we collect these two local parts together in one FESystem where the first @p
dim components denote the gradient part and the last scalar component corresponds to the scalar variable $u_h$.
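As a minimal sketch of how these spaces can be declared (variable names are illustrative, and the dimension and degree chosen here are just example values; in the program these objects are typically class members):
@code
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/fe_face.h>
#include <deal.II/fe/fe_system.h>

using namespace dealii;

constexpr unsigned int dim    = 2;
constexpr unsigned int degree = 1; // polynomial degree p

// dim discontinuous Q_{-p} components for the flux q_h plus one for u_h ...
FESystem<dim> fe_local(FE_DGQ<dim>(degree), dim + 1);

// ... and the skeleton space for the single-valued trace variable on the faces.
FE_FaceQ<dim> fe_skeleton(degree);
@endcode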
One special feature of the HDG methods is that they allow for constructing an enriched solution that gains accuracy. This post-processing
takes the HDG solution in an element-by-element fashion and combines it such
that one can get $\mathcal O(h^{p+2})$ order of accuracy when using
polynomials of degree $p$. For this to happen, there are two necessary
ingredients:
<ol>
<li> The computed solution gradient $\mathbf{q}_h$ converges at optimal rate,
i.e., $\mathcal{O}(h^{p+1})$.
<li> The average of the scalar part of the solution, $u_h$,
  on each cell $K$ super-converges at rate $\mathcal{O}(h^{p+2})$.
</ol>
We now introduce a new variable $u_h^* \in \mathcal{V}_h^{p+1}$, which we find
by minimizing the expression $|\kappa \nabla u_h^* + \mathbf{q}_h|^2$ over the cell
$K$ under the constraint $\left(1, u_h^*\right)_K = \left(1,
u_h\right)_K$. The constraint is necessary because the minimization
functional does not determine the constant part of $u_h^*$. This
translates to the following system of equations:
@f{eqnarray*}
\left(1, u_h^*\right)_K &=& \left(1, u_h\right)_K, \\
\left(\nabla w_h^*, \kappa \nabla u_h^*\right)_K &=&
-\left(\nabla w_h^*, \mathbf{q}_h\right)_K
\quad \text{for all } w_h^* \in \mathcal V_h^{p+1}.
@f}
Since we test by the whole set of basis functions in the space of tensor
product polynomials of degree $p+1$ in the second set of equations, this
is an overdetermined system with one more equation than unknowns. We fix this
in the code below by omitting one of these equations (since the rows in the
Laplacian are linearly dependent when representing a constant function). As we will see below, this form of the post-processing gives the desired super-convergence result with rate $\mathcal O(h^{p+2})$.
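The following is a hedged sketch of this local solve on one cell, assuming $\kappa=1$ and an FEValues object built with the degree $p+1$ element; the function and parameter names are illustrative and not the tutorial's actual implementation:
@code
#include <deal.II/base/tensor.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/vector.h>

#include <vector>

using namespace dealii;

template <int dim>
void postprocess_one_cell(const FEValues<dim> &fe_values, // built with the degree p+1 DG element
                          const std::vector<Tensor<1, dim>> &q_values, // q_h at quadrature points
                          const std::vector<double> &        u_values, // u_h at quadrature points
                          Vector<double> &                   u_star)   // coefficients of u_h^*
{
  const unsigned int dofs = fe_values.dofs_per_cell;
  const unsigned int n_q  = fe_values.n_quadrature_points;

  FullMatrix<double> cell_matrix(dofs, dofs);
  Vector<double>     cell_rhs(dofs);

  // Rows 1..dofs-1: (grad w, kappa grad u*)_K = -(grad w, q_h)_K, with kappa = 1 here.
  for (unsigned int i = 1; i < dofs; ++i)
    for (unsigned int q = 0; q < n_q; ++q)
      {
        for (unsigned int j = 0; j < dofs; ++j)
          cell_matrix(i, j) += fe_values.shape_grad(i, q) *
                               fe_values.shape_grad(j, q) * fe_values.JxW(q);
        cell_rhs(i) -= fe_values.shape_grad(i, q) * q_values[q] * fe_values.JxW(q);
      }

  // Row 0 replaces the linearly dependent equation by the mean value
  // constraint (1, u*)_K = (1, u_h)_K.
  for (unsigned int q = 0; q < n_q; ++q)
    {
      for (unsigned int j = 0; j < dofs; ++j)
        cell_matrix(0, j) += fe_values.shape_value(j, q) * fe_values.JxW(q);
      cell_rhs(0) += u_values[q] * fe_values.JxW(q);
    }

  cell_matrix.gauss_jordan();        // small dense system: invert ...
  cell_matrix.vmult(u_star, cell_rhs); // ... and apply to obtain u_h^*
}
@endcode
In the program, a loop over all cells would call such a routine after the HDG solve and write the coefficients into a global vector of the degree $p+1$ DG space.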
For this tutorial program, we consider almost the same test case as in
step-7. The computational domain is $\Omega := [-1,1]^d$ and the exact
solution corresponds to the one in step-7, except for a scaling. We use the
following source centers $x_i$ for the exponentials
<ul>
<li> 1D: $\{x_i\}^1 = \{ -\frac{1}{3}, 0, \frac{1}{3} \}$,
<li> 2D: $\{\mathbf{x}_i\}^2 = \{ (-\frac{1}{2},\frac{1}{2}),
<h3>Program output</h3>
We first have a look at the output generated by the program when run in 2D. In
the four images below, we show the solution for polynomial degree $p=1$
and cycles 2, 3, 4, and 8 of the program. In the plots, we overlay the data
generated from the internal data (DG part) with the skeleton part ($\hat{u}$)
into the same plot. We had to generate two different data sets because cells carry the interior (DG) part of the solution, whereas the skeleton part lives only on the faces.
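As a hedged sketch of how the two data sets can be written (illustrative names; assuming, for simplicity, a DoFHandler that holds only the scalar part $u_h$), the cell-interior DG solution goes through DataOut while the skeleton variable, which only lives on the faces, goes through DataOutFaces:
@code
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/data_out_faces.h>

#include <fstream>

using namespace dealii;

template <int dim>
void write_solution_and_skeleton(const DoFHandler<dim> &dof_handler_local,
                                 const Vector<double> & solution_local,
                                 const DoFHandler<dim> &dof_handler_trace,
                                 const Vector<double> & solution_trace)
{
  // Cell-interior (discontinuous) part of the solution.
  DataOut<dim> data_out;
  data_out.add_data_vector(dof_handler_local, solution_local, "u");
  data_out.build_patches();
  std::ofstream cell_output("solution_cells.vtk");
  data_out.write_vtk(cell_output);

  // Skeleton part: the trace variable only exists on faces, so it is written
  // with DataOutFaces; 'false' includes interior faces, not just the boundary.
  DataOutFaces<dim> data_out_faces(false);
  data_out_faces.add_data_vector(dof_handler_trace, solution_trace, "u_hat");
  data_out_faces.build_patches();
  std::ofstream face_output("solution_skeleton.vtk");
  data_out_faces.write_vtk(face_output);
}
@endcode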
Finally, we look at the solution for $p=3$ at cycle 2. Despite the coarse
mesh with only 64 cells, the post-processed solution is similar in quality
to the linear solution (not post-processed) at cycle 8 with 4,096
cells. This clearly shows the superiority of high order methods for smooth solutions.
For $Q_1$ elements, the expected second order convergence rate of both the scalar
variable and the gradient variable is apparent, as is the cubic rate for the
postprocessed scalar variable in the $L_2$ norm. Note this distinctive
feature of an HDG solution. In typical continuous finite elements, the
gradient of the solution of order $p$ converges at rate $p$ only, as
opposed to $p+1$ for the actual solution. Even though superconvergence
results for finite elements are also available (e.g. superconvergent patch
recovery first introduced by Zienkiewicz and Zhu), these are typically limited
to structured meshes and other special cases. For Q3 HDG variables, the scalar variable and the gradient converge at fourth order and the post-processed scalar variable at fifth order.
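All of these rates are measured in the $L_2$ norm of the difference to the exact solution. A hedged sketch of how such an error can be computed (illustrative helper function, not the tutorial's exact code):
@code
#include <deal.II/base/function.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/vector_tools.h>

using namespace dealii;

// L2 error of a discrete field against the exact solution, accumulated
// from cell-wise contributions.
template <int dim>
double compute_l2_error(const DoFHandler<dim> &dof_handler,
                        const Vector<double> & solution,
                        const Function<dim> &  exact_solution,
                        const unsigned int     degree)
{
  Vector<float> difference_per_cell(
    dof_handler.get_triangulation().n_active_cells());
  VectorTools::integrate_difference(dof_handler,
                                    solution,
                                    exact_solution,
                                    difference_per_cell,
                                    QGauss<dim>(degree + 2),
                                    VectorTools::L2_norm);
  return VectorTools::compute_global_error(dof_handler.get_triangulation(),
                                           difference_per_cell,
                                           VectorTools::L2_norm);
}
@endcode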
The results in the graphs show that the HDG method is slower than continuous
finite elements at $p=1$, about equally fast for cubic elements and
faster for sixth order elements. However, we have seen above that the HDG
method actually produces solutions which are more accurate than what is
represented in the original variables. Therefore, in the next two plots below
we instead display the error of the post-processed solution for HDG (denoted
by $p=1^*$ for example). We now see a clear advantage of HDG for the same
amount of work for both $p=3$ and $p=6$, and about the same quality
for $p=1$.
Since the HDG method actually produces results converging as
$h^{p+2}$, we should compare it to a continuous Galerkin
solution with the same asymptotic convergence behavior, i.e., FE_Q with degree
$p+1$. If we do this, we get the convergence curves below. We see that
CG with second order polynomials is again clearly better than HDG with
linears. However, the advantage of HDG for higher orders remains.
The results are in line with properties of DG methods in general: Best
performance is typically not achieved for linear elements, but rather at
somewhat higher order, usually around $p=3$. This is because of a
volume-to-surface effect for discontinuous solutions with too much of the
solution living on the surfaces and hence duplicating work when the elements
are linear. Put in other words, DG methods are often most efficient when used at relatively high order, despite their focus on a discontinuous (and hence, seemingly low accuracy) representation of solutions.
We now show the same figures in 3D: The first row shows the number of degrees
of freedom and computing time versus the $L_2$ error in the scalar variable
$u$ for CG and HDG at order $p$, the second row shows the
post-processed HDG solution instead of the original one, and the third row
compares the post-processed HDG solution with CG at order $p+1$. In 3D,
the volume-to-surface effect makes the cost of HDG somewhat higher and the CG
solution is clearly better than HDG for linears by any metric. For cubics, HDG
and CG are of similar quality, whereas HDG is again more efficient for sixth
order polynomials. One can alternatively also use the combination of FE_DGP
and FE_FaceP instead of (FE_DGQ, FE_FaceQ), which do not use tensor product
polynomials of degree $p$ but Legendre polynomials of <i>complete</i>
degree $p$. There are fewer degrees of freedom on the skeleton variable
for FE_FaceP for a given mesh size, but the solution quality (error vs. number
of DoFs) is very similar to the results for FE_FaceQ.
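A minimal sketch of this alternative element combination (variable names are illustrative; the dimension and degree are just example choices):
@code
#include <deal.II/fe/fe_dgp.h>
#include <deal.II/fe/fe_face.h>
#include <deal.II/fe/fe_system.h>

using namespace dealii;

constexpr unsigned int dim    = 3;
constexpr unsigned int degree = 3; // complete polynomial degree p

// Legendre polynomials of complete degree p for the flux and scalar variable ...
FESystem<dim> fe_local_dgp(FE_DGP<dim>(degree), dim + 1);

// ... and the matching complete-degree trace space on the faces.
FE_FaceP<dim> fe_skeleton_facep(degree);
@endcode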
We tried to use general-purpose sparse matrix structures and similar solvers (optimal AMG preconditioners for both without particular tuning of the AMG parameters on any of them) to give a
fair picture of the cost versus accuracy of two methods, on a toy example. It
should be noted however that geometric multigrid (GMG) for continuous finite elements is about a
factor four to five faster for $p=3$ and $p=6$. The authors of this
tutorial have not seen similarly advanced solvers for the HDG linear
systems. Also, there are other implementation aspects for CG available such as
fast matrix-free approaches as shown in step-37 that make higher order continuous elements more competitive.