In short, the question does not appear to be settled in the literature at this
time. There are a number of more or less complicated schemes that address it,
but there is nothing like the KellyErrorEstimator that is universally accepted
-as a good indicator of the error. Most proposals use the fact that it is
-beneficial to increase the polynomial degree whenever the solution is locally
-smooth whereas it is better to refine the mesh wherever it is rough. However,
-the questions of how to determine the local smoothness of the solution as well
-as the decision when a solution is smooth enough to allow for an increase in
-$p$ are certainly big and important ones.
+as a good, even if not optimal, indicator of the error. Most proposals use the
+fact that it is beneficial to increase the polynomial degree whenever the
+solution is locally smooth whereas it is better to refine the mesh wherever it
+is rough. However, how to determine the local smoothness of the solution, and
+when a solution is smooth enough to allow for an increase in $p$, are
+certainly big and important questions.
+
+In the following, we propose a simple estimator of the local smoothness of a
+solution. As we will see in the results section, this estimator has flaws, in
+particular as far as cells with hanging nodes are concerned. We
+therefore do not intend to present the following ideas as a complete solution
+to the problem. Rather, we present it as one way to approach the problem that
+merits further research and investigation. In other words, we do not intend to
+enter a sophisticated proposal into the fray about answers to the general
+question. However, to demonstrate our approach to $hp$ finite elements, we
+need a simple indicator that generates some useful information able to drive
+the simple calculations this tutorial program will perform.
<h4>The idea</h4>
-We do not intend to enter a sophisticated proposal into the fray about answers
-to the general question. However, to demonstrate our approach to hp finite
-elements, we need a simple indicator that does generate some useful
-information. Our approach here is simple: for a function $u(x)$ to be in the
+Our approach here is simple: for a function $u(x)$ to be in the
Sobolev space $H^s(K)$ on a cell $K$, it has to satisfy the condition
@f[
\int_K |\nabla^s u(x)|^2 \; dx < \infty.
cell. In other words, we can write it as a matrix-vector product
@f[
\hat U_{\vec k}
- = {\cal F}_{\vec k,i} u_i,
+ = {\cal F}_{\vec k,j} u_j,
@f]
with the matrix
@f[
- {\cal F}_{\vec k,i}
+ {\cal F}_{\vec k,j}
= \frac 1{(2\pi)^{d/2}}
- \int_{\hat K} e^{i\vec k \cdot \vec x} \hat \varphi_i(\hat x) dx.
+ \int_{\hat K} e^{i\vec k \cdot \vec x} \hat \varphi_j(\hat x) dx.
@f]
This matrix is easily computed for a given number of shape functions
-$\varphi_i$ and Fourier modes $N$. Consequently, finding the
+$\varphi_j$ and Fourier modes $N$. Consequently, finding the
coefficients $\hat U_{\vec k}$ is a rather trivial job.
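+
+As a small illustration (an editor's sketch only, using the same variable
+names the implementation below will use), this matrix-vector product amounts
+to the following loop on each cell:
+@code
+// \hat U_k = sum_j F(k,j) u_j: multiply the precomputed complex matrix F
+// for this cell's finite element with the vector of local nodal values
+for (unsigned int k=0; k<n_fourier_modes; ++k)
+  {
+    fourier_coefficients[k] = 0;
+    for (unsigned int j=0; j<dofs_per_cell; ++j)
+      fourier_coefficients[k] += fourier_transform_matrix(k,j) *
+                                 local_dof_values(j);
+  }
+@endcode
+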
The next task is that we have to estimate how fast these coefficients
x)$ is in $H^s(\hat K)$ with $s=\mu-\frac d2$.
+<h4>Compensating for anisotropy</h4>
+
+In the formulas above, we have derived the Fourier coefficients $\hat U_{\vec
+k}$. Because $\vec k$ is a vector, we will get a number of Fourier
+coefficients $\hat U_{\vec k}$ for the same absolute value $|\vec k|$,
+corresponding to the Fourier transform in different directions. If we now
+consider a function like $|x|y^2$ then we will find lots of large Fourier
+coefficients in $x$-direction because the function is non-smooth in this
+direction, but fast-decaying Fourier coefficients in $y$-direction because the
+function is smooth there. The question that arises is this: if we simply fit
+our polynomial decay $\alpha |\vec k|^{-\mu}$ to <i>all</i> Fourier coefficients,
+we will fit it to a smoothness <i>averaged in all spatial directions</i>. Is
+this what we want? Or would it be better to only consider the largest
+coefficient $\hat U_{\vec k}$ for all $\vec k$ with the same magnitude,
+essentially trying to determine the smoothness of the solution in that spatial
+direction in which the solution appears to be roughest?
+
+One can probably argue for either case. The issue would be of more interest if
+deal.II had the ability to use anisotropic finite elements, i.e. ones that use
+different polynomial degrees in different spatial directions, as they would be
+able to exploit the directionally variable smoothness much better. Alas, this
+capability does not exist at the time of writing this tutorial program.
+
+Either way, because we only have isotropic finite element classes, we adopt
+the viewpoint that we should tailor the polynomial degree to the lowest amount
+of regularity, in order to keep numerical efforts low. Consequently, instead of
+using the formula
+@f[
+ \mu =
+ \frac 1{\left(\sum_{\vec k, |\vec k|\le N} 1\right)
+ \left(\sum_{\vec k, |\vec k|\le N} (\ln |\vec k|)^2\right)
+ -\left(\sum_{\vec k, |\vec k|\le N} \ln |\vec k|\right)^2}
+ \left[
+ \left(\sum_{\vec k, |\vec k|\le N} \ln |\vec k|\right)
+ \left(\sum_{\vec k, |\vec k|\le N} \ln |\hat U_{\vec k}|\right)
+ -
+ \left(\sum_{\vec k, |\vec k|\le N} 1\right)
+ \left(\sum_{\vec k, |\vec k|\le N} \ln |\hat U_{\vec k}| \ln |\vec k| \right)
+  \right]
+@f]
+to calculate $\mu$ as shown above, we have to slightly modify all sums:
+instead of summing over all Fourier modes, we only sum over those for which
+the Fourier coefficient is the largest one among all $\hat U_{\vec k}$ with
+the same magnitude $|\vec k|$, i.e. all sums above have to be replaced by the
+following sums:
+@f[
+ \sum_{\vec k, |\vec k|\le N}
+ \longrightarrow
+ \sum_{{\vec k, |\vec k|\le N} \atop {|\hat U_{\vec k}| \ge |\hat U_{\vec k'}|
+ \ \textrm{for all}\ \vec k'\ \textrm{with}\ |\vec k'|=|\vec k|}}
+@f]
+This is the form we will implement in the program.
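+
+In the program, this selection is conveniently done with a map from the
+(integer) squared magnitudes $|\vec k|^2$ to the largest coefficient
+magnitude found so far; the sums in the formula for $\mu$ then run over the
+entries of this map only. As a preview of the implementation below (the
+surrounding variables are assumed to exist), the selection loop looks like
+this:
+@code
+// for each value of |k|^2, keep only the largest magnitude |U_k|
+std::map<unsigned int, double> k_to_max_U_map;
+for (unsigned int f=0; f<n_fourier_modes; ++f)
+  if ((k_to_max_U_map.find (k_vectors_magnitude[f]) ==
+       k_to_max_U_map.end())
+      ||
+      (k_to_max_U_map[k_vectors_magnitude[f]] <
+       std::abs (fourier_coefficients[f])))
+    k_to_max_U_map[k_vectors_magnitude[f]]
+      = std::abs (fourier_coefficients[f]);
+@endcode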
+
+
+<h4>Questions about cell sizes</h4>
+
+One may ask whether it is a problem that we only compute the Fourier transform
+of the solution on the <i>reference cell</i> (rather than the real
+cell). After all, we stretch the solution by a factor $\frac 1h$ during the
+transformation, thereby shifting the Fourier frequencies by a factor of
+$h$. This is of particular concern since we may have neighboring cells with
+mesh sizes $h$ that differ by a factor of 2 if one of them is more refined
+than the other. The concern is also motivated by the fact that, as we will see
+in the results section below, the estimated smoothness of the solution should
+be a more or less continuous function, but exhibits jumps at locations where
+the mesh size jumps. It therefore seems natural to ask whether we have to
+compensate for the transformation.
+
+The short answer is "no". In the process outlined above, we attempt to find
+coefficients $\beta,\mu$ that minimize the sum of squares of the terms
+@f[
+ \ln |\hat U_{\vec k}| - \beta + \mu \ln |\vec k|.
+@f]
+To compensate for the transformation means not attempting to fit a decay
+$|\vec k|^{-\mu}$ with respect to the Fourier frequencies $\vec k$ <i>on the unit
+cell</i>, but to fit the coefficients $\hat U_{\vec k}$ computed on the
+reference cell <i>to the Fourier frequencies on the real cell $|\vec
+k|h$</i>, where $h$ is the norm of the transformation operator (i.e. something
+like the diameter of the cell). In other words, we would have to minimize the
+sum of squares of the terms
+@f[
+  \ln |\hat U_{\vec k}| - \beta + \mu \ln (|\vec k|h)
+@f]
+instead. However, since $\ln (|\vec k|h) = \ln |\vec k| + \ln h$, this is
+simply equivalent to minimizing
+@f[
+ \ln |\hat U_{\vec k}| - (\beta - \mu \ln h) + \mu \ln (|\vec k|).
+@f]
+In other words, this and the original least squares problem will produce the
+same best-fit exponent $\mu$, though the offset will in one case be $\beta$
+and in the other $\beta-\mu \ln h$. However, since we are not interested in
+the offset at all but only in the exponent, it doesn't matter whether we scale
+the Fourier frequencies to account for mesh size effects or not: the
+estimated smoothness exponent will be the same in either case.
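+
+This equivalence is also easy to check numerically. The following
+self-contained program (an editor's sketch, not part of the tutorial itself)
+evaluates the least squares formula for $\mu$ from above twice for synthetic
+coefficients $\hat U_{\vec k}=|\vec k|^{-2}$: once against $\ln |\vec k|$ and
+once against $\ln (|\vec k|h)$ with $h=\frac 12$. Both fits yield the
+exponent 2:
+@code
+#include <cmath>
+#include <iostream>
+
+// least squares fit of ln U = beta - mu*ln(k*scale); scale=1 corresponds
+// to frequencies on the unit cell, scale=h to frequencies on a real cell
+double fit_mu (const unsigned int N, const double scale)
+{
+  double s1 = 0, sk = 0, skk = 0, su = 0, suk = 0;
+  for (unsigned int k = 1; k <= N; ++k)
+    {
+      const double ln_k = std::log (k * scale);
+      const double ln_U = -2. * std::log (1.*k);  // U_k = |k|^{-2}
+      s1  += 1;
+      sk  += ln_k;
+      skk += ln_k * ln_k;
+      su  += ln_U;
+      suk += ln_U * ln_k;
+    }
+  return (sk*su - s1*suk) / (s1*skk - sk*sk);
+}
+
+int main ()
+{
+  std::cout << fit_mu (10, 1.0) << ' '   // fit on the unit cell
+            << fit_mu (10, 0.5)          // frequencies scaled by h=1/2
+            << std::endl;                // prints the same exponent twice
+}
+@endcode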
+
+
<h3>Complications with linear systems for hp discretizations</h3>
The first thing we learn from this is that the number of constrained
degrees of freedom is on the order of 20-25% of the total number of
degrees of freedom, at least on the later grids when we have elements
-of relatively high order. This is, in fact, on the same order of
+of relatively high order (in 3d, the fraction of constrained degrees of
+freedom can be up to 30%). This is, in fact, on the same order of
magnitude as for non-$hp$ discretizations. For example, in the last
step of the @ref step_6 "step-6" program, we have 18401 degrees of
freedom, 4104 of which are constrained. The difference is that in the
only the two adjacent degrees of freedom, whereas in the $hp$ case,
constrained nodes are constrained against many more degrees of
freedom.
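+
+As an aside, numbers like these are easily generated. Assuming, as in this
+program, a ConstraintMatrix object named
+<code>hanging_node_constraints</code>, a snippet along the following lines (a
+sketch, not necessarily the literal output code of the program) prints both
+quantities after setting up the linear system:
+@code
+std::cout << "   Number of degrees of freedom: "
+          << dof_handler.n_dofs()
+          << std::endl
+          << "   Number of constraints       : "
+          << hanging_node_constraints.n_constraints()
+          << std::endl;
+@endcode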
+
+Of perhaps greater interest is the graphical output. First, here is the
+solution of the problem:
+
+@image html step-27.solution.png
+
+Second, let us look at the sequence of meshes generated:
+
+<table align="center" border="1" cellspacing="3" cellpadding="3">
+ <tr>
+ <td>
+ @image html step-27.mesh-0.png
+ </td>
+ <td>
+ @image html step-27.mesh-1.png
+ </td>
+ <td>
+ @image html step-27.mesh-2.png
+ </td>
+ </tr>
+
+ <tr>
+ <td>
+ @image html step-27.mesh-3.png
+ </td>
+ <td>
+ @image html step-27.mesh-4.png
+ </td>
+ <td>
+ @image html step-27.mesh-5.png
+ </td>
+ </tr>
+</table>
+It is clearly visible how the mesh is refined near the corner singularities,
+as one would expect. More interesting is the distribution of finite element
+polynomial degrees across these mesh cells:
+<table align="center" border="1" cellspacing="3" cellpadding="3">
+ <tr>
+ <td>
+ @image html step-27.fe_degree-0.png
+ </td>
+ <td>
+ @image html step-27.fe_degree-1.png
+ </td>
+ <td>
+ @image html step-27.fe_degree-2.png
+ </td>
+ </tr>
+
+ <tr>
+ <td>
+ @image html step-27.fe_degree-3.png
+ </td>
+ <td>
+ @image html step-27.fe_degree-4.png
+ </td>
+ <td>
+ @image html step-27.fe_degree-5.png
+ </td>
+ </tr>
+</table>
+
+While this is certainly not a perfect arrangement, it does make some sense: we
+use low order elements close to boundaries and corners where regularity is
+low. On the other hand, higher order elements are used where (i) the error was
+at one point fairly large, i.e. mainly in the general area around the corner
+singularities and in the top right corner where the solution is large, and
+(ii) where the solution is smooth, i.e. far away from the boundary.
+
+This arrangement of polynomial degrees of course follows from our smoothness
+estimator. Here is the estimated smoothness of the solution, with blue colors
+indicating least smoothness and red indicating the smoothest areas:
+
+<table align="center" border="1" cellspacing="3" cellpadding="3">
+ <tr>
+ <td>
+ @image html step-27.smoothness-0.png
+ </td>
+ <td>
+ @image html step-27.smoothness-1.png
+ </td>
+ <td>
+ @image html step-27.smoothness-2.png
+ </td>
+ </tr>
+
+ <tr>
+ <td>
+ @image html step-27.smoothness-3.png
+ </td>
+ <td>
+ @image html step-27.smoothness-4.png
+ </td>
+ <td>
+ @image html step-27.smoothness-5.png
+ </td>
+ </tr>
+</table>
+
+The first conclusion one can draw from these images is that apparently the
+estimated smoothness is a fairly stable quantity under mesh refinement: what
+we get on the coarsest mesh is pretty close to what we get on the finest mesh.
+It is also obvious that the smoothness estimates are independent of the actual
+size of the solution (see the picture of the solution above), as it should be.
+A point of larger concern, however, is that closer inspection reveals that
+our estimator overestimates the smoothness of the solution on cells with
+hanging nodes. This in turn leads to higher polynomial degrees in
+these areas, skewing the allocation of finite elements onto cells.
+
+We have no good explanation for this effect at the moment. One theory is that
+the numerical solution on cells with hanging nodes is, of course, constrained
+and therefore not entirely free to explore the function space to get close to
+the exact solution. This lack of degrees of freedom may manifest itself by
+yielding numerical solutions on these cells with suppressed oscillation,
+meaning a higher degree of smoothness. The estimator picks this signal up and
+the estimated smoothness overestimates the actual value. However, a definite
+answer to what is going on currently eludes the authors of this program.
+
+The bigger question is, of course, how to avoid this problem. Possibilities
+include estimating the smoothness not on single cells, but on assemblies or
+patches of cells surrounding each cell. It may also be possible to find simple
+correction factors for each cell depending on the number of constrained
+degrees of freedom it has. In either case, there are ample opportunities for
+further research on finding good $hp$ refinement criteria. On the other hand,
+the main point of the current program was to demonstrate using the $hp$
+technology in deal.II, which is unaffected by our use of a possibly
+sub-optimal refinement criterion.
-// VTK format):
+// GMV format):
const std::string filename = "solution-" +
Utilities::int_to_string (cycle, 2) +
- ".vtk";
+ ".gmv";
std::ofstream output (filename.c_str());
- data_out.write_vtk (output);
+ data_out.write_gmv (output);
}
// After this, we would like to actually
}
+ // @sect4{LaplaceProblem::estimate_smoothness}
+ // This last function of significance
+ // implements the algorithm to estimate the
+  // smoothness exponent using the strategy
+ // explained in detail in the
+ // introduction. We will therefore only
+ // comment on those points that are of
+ // implementational importance.
template <int dim>
void
LaplaceProblem<dim>::
estimate_smoothness (Vector<float> &smoothness_indicators) const
{
+ // The first thing we need to do is to
+ // define the Fourier vectors $\vec k$ for
+ // which we want to compute Fourier
+ // coefficients of the solution on each
+ // cell. In 2d, we pick those vectors $\vec
+ // k=(\pi i, \pi j)^T$ for which
+ // $\sqrt{i^2+j^2}\le N$, with $i,j$
+ // integers and $N$ being the maximal
+ // polynomial degree we use for the finite
+ // elements in this program. The 3d case is
+ // handled analogously. 1d and dimensions
+ // higher than 3 are not implemented, and
+ // we guard our implementation by making
+  // sure that we abort with an exception
+  // if someone tries to run the program
+  // for any of these dimensions.
+ //
+ // We exclude $\vec k=0$ to avoid problems
+  // computing $|\vec k|^{-\mu}$ and $\ln
+ // |\vec k|$. The other vectors are stored
+ // in the field <code>k_vectors</code>. In
+ // addition, we store the square of the
+ // magnitude of each of these vectors (up
+ // to a factor $\pi^2$) in the
+ // <code>k_vectors_magnitude</code> array
+ // -- we will need that when we attempt to
+ // find out which of those Fourier
+ // coefficients corresponding to Fourier
+ // vectors of the same magnitude is the
+ // largest:
const unsigned int N = max_degree;
- // form all the Fourier vectors
- // that we want to
- // consider. exclude k=0 to avoid
- // problems with |k|^{-mu} and also
- // logarithms of |k|
std::vector<Tensor<1,dim> > k_vectors;
std::vector<unsigned int> k_vectors_magnitude;
switch (dim)
Assert (false, ExcNotImplemented());
}
+ // After we have set up the Fourier
+ // vectors, we also store their total
+ // number for simplicity, and compute the
+ // logarithm of the magnitude of each of
+ // these vectors since we will need it many
+ // times over further down below:
const unsigned n_fourier_modes = k_vectors.size();
std::vector<double> ln_k (n_fourier_modes);
for (unsigned int i=0; i<n_fourier_modes; ++i)
ln_k[i] = std::log (k_vectors[i].norm());
- // assemble the matrices that do
- // the Fourier transforms for each
- // of the finite elements we deal
- // with. note that these matrices
- // are complex-valued, so we can't
- // use FullMatrix
- QGauss<1> base_quadrature (2);
- QIterated<dim> quadrature (base_quadrature, N);
-
+ // Next, we need to assemble the matrices
+ // that do the Fourier transforms for each
+ // of the finite elements we deal with,
+ // i.e. the matrices ${\cal F}_{\vec k,j}$
+ // defined in the introduction. We have to
+ // do that for each of the finite elements
+ // in use. Note that these matrices are
+ // complex-valued, so we can't use the
+ // FullMatrix class. Instead, we use the
+ // Table class template.
std::vector<Table<2,std::complex<double> > >
fourier_transform_matrices (fe_collection.size());
+
+ // In order to compute them, we of course
+ // can't perform the Fourier transform
+ // analytically, but have to approximate it
+ // using quadrature. To this end, we use a
+ // quadrature formula that is obtained by
+ // iterating a 2-point Gauss formula as
+ // many times as the maximal exponent we
+ // use for the term $e^{i\vec k\cdot \vec
+ // x}$:
+ QGauss<1> base_quadrature (2);
+ QIterated<dim> quadrature (base_quadrature, N);
+
+ // With this, we then loop over all finite
+ // elements in use, reinitialize the
+ // respective matrix ${\cal F}$ to the
+ // right size, and integrate each entry of
+ // the matrix numerically as ${\cal
+  // F}_{\vec k,j}=\sum_q e^{i\vec k\cdot\vec
+  // x_q}\varphi_j(\vec x_q) w_q$, where $\vec x_q$
+ // are the quadrature points and $w_q$ are
+ // the quadrature weights. Note that the
+ // imaginary unit $i=\sqrt{-1}$ is obtained
+ // from the standard C++ classes using
+ // <code>std::complex@<double@>(0,1)</code>.
+
+ // Because we work on the unit cell, we can
+ // do all this work without a mapping from
+ // reference to real cell and consequently
+ // do not need the FEValues class.
for (unsigned int fe=0; fe<fe_collection.size(); ++fe)
{
fourier_transform_matrices[fe].reinit (n_fourier_modes,
fe_collection[fe].dofs_per_cell);
for (unsigned int k=0; k<n_fourier_modes; ++k)
- for (unsigned int i=0; i<fe_collection[fe].dofs_per_cell; ++i)
+ for (unsigned int j=0; j<fe_collection[fe].dofs_per_cell; ++j)
{
std::complex<double> sum = 0;
for (unsigned int q=0; q<quadrature.n_quadrature_points; ++q)
const Point<dim> x_q = quadrature.point(q);
sum += std::exp(std::complex<double>(0,1) *
(k_vectors[k] * x_q)) *
- fe_collection[fe].shape_value(i,x_q) *
+ fe_collection[fe].shape_value(j,x_q) *
quadrature.weight(q);
}
- fourier_transform_matrices[fe](k,i)
+ fourier_transform_matrices[fe](k,j)
= sum / std::pow(2*deal_II_numbers::PI, 1.*dim/2);
}
}
- // the next thing is to loop over
- // all cells and do our work there,
- // i.e. to locally do the Fourier
- // transform and estimate the decay
- // coefficient
+ // The next thing is to loop over all cells
+ // and do our work there, i.e. to locally
+ // do the Fourier transform and estimate
+ // the decay coefficient. We will use the
+ // following two arrays as scratch arrays
+ // in the loop and allocate them here to
+ // avoid repeated memory allocations:
std::vector<std::complex<double> > fourier_coefficients (n_fourier_modes);
Vector<double> local_dof_values;
-
+
+ // Then here is the loop:
typename hp::DoFHandler<dim>::active_cell_iterator
cell = dof_handler.begin_active(),
endc = dof_handler.end();
for (unsigned int index=0; cell!=endc; ++cell, ++index)
{
+ // Inside the loop, we first need to
+ // get the values of the local degrees
+ // of freedom (which we put into the
+ // <code>local_dof_values</code> array
+ // after setting it to the right size)
+ // and then need to compute the Fourier
+ // transform by multiplying this vector
+ // with the matrix ${\cal F}$
+ // corresponding to this finite
+ // element. We need to write out the
+ // multiplication by hand because the
+ // objects holding the data do not have
+ // <code>vmult</code>-like functions
+ // declared:
local_dof_values.reinit (cell->get_fe().dofs_per_cell);
cell->get_dof_values (solution, local_dof_values);
- // first compute the Fourier
- // transform of the local
- // solution
- std::fill (fourier_coefficients.begin(), fourier_coefficients.end(), 0);
for (unsigned int f=0; f<n_fourier_modes; ++f)
- for (unsigned int i=0; i<cell->get_fe().dofs_per_cell; ++i)
- fourier_coefficients[f] +=
- fourier_transform_matrices[cell->active_fe_index()](f,i)
- *
- local_dof_values(i);
-
- // enter the Fourier
- // coefficients into a map with
- // the k-magnitudes, to make
- // sure that we get only the
- // largest magnitude for each
- // value of |k|
+ {
+ fourier_coefficients[f] = 0;
+
+ for (unsigned int i=0; i<cell->get_fe().dofs_per_cell; ++i)
+ fourier_coefficients[f] +=
+ fourier_transform_matrices[cell->active_fe_index()](f,i)
+ *
+ local_dof_values(i);
+ }
+
+ // The next thing, as explained in the
+ // introduction, is that we wanted to
+  // only fit our polynomial decay of
+ // Fourier coefficients to the largest
+ // coefficients for each possible value
+ // of $|\vec k|$. To this end, we
+ // create a map that for each magnitude
+ // $|\vec k|$ stores the largest $|\hat
+ // U_{\vec k}|$ found so far, i.e. we
+ // overwrite the existing value (or add
+ // it to the map) if no value for the
+ // current $|\vec k|$ exists yet, or if
+ // the current value is larger than the
+ // previously stored one:
std::map<unsigned int, double> k_to_max_U_map;
for (unsigned int f=0; f<n_fourier_modes; ++f)
if ((k_to_max_U_map.find (k_vectors_magnitude[f]) ==
std::abs (fourier_coefficients[f])))
k_to_max_U_map[k_vectors_magnitude[f]]
= std::abs (fourier_coefficients[f]);
-
- // now we have to calculate the
- // various contributions to the
- // formula for mu. we'll only
- // take those fourier
- // coefficients with the
- // largest value for a given
- // |k|
+ // Note that it comes in handy here
+  // that we have stored the squared
+  // magnitudes of the vectors as integers, since this
+ // way we do not have to deal with
+ // round-off-sized differences between
+ // different values of $|\vec k|$.
+
+ // As the final task, we have to
+ // calculate the various contributions
+ // to the formula for $\mu$. We'll only
+ // take those Fourier coefficients with
+ // the largest magnitude for a given
+ // value of $|\vec k|$ as explained
+ // above:
double sum_1 = 0,
sum_ln_k = 0,
sum_ln_k_square = 0,
sum_ln_k += ln_k[f];
sum_ln_k_square += ln_k[f]*ln_k[f];
sum_ln_U += std::log (std::abs (fourier_coefficients[f]));
- sum_ln_U_ln_k += std::log (std::abs (fourier_coefficients[f])) * ln_k[f];
+ sum_ln_U_ln_k += std::log (std::abs (fourier_coefficients[f])) *
+ ln_k[f];
}
+ // With these so-computed sums, we can
+ // now evaluate the formula for $\mu$
+ // derived in the introduction:
const double mu
= (1./(sum_1*sum_ln_k_square - sum_ln_k*sum_ln_k)
*
(sum_ln_k*sum_ln_U - sum_1*sum_ln_U_ln_k));
-
+
+ // The final step is to compute the
+ // Sobolev index $s=\mu-\frac d2$ and
+ // store it in the vector of estimated
+ // values for each cell:
smoothness_indicators(index) = mu - 1.*dim/2;
}
}