pages = {1352--1367}}
+% ------------------------------------
+% Step 27
+% ------------------------------------
+
+@phdthesis{fehling2020,
+ author = {Marc Fehling},
+ title = {Algorithms for massively parallel generic hp-adaptive finite element methods},
+ publisher = {Forschungszentrum Jülich GmbH Zentralbibliothek, Verlag},
+ school = {Bergische Universität Wuppertal},
+ year = {2020},
+ volume = {43},
+ pages = {vii, 78 pp},
+ url = {https://juser.fz-juelich.de/record/878206}
+}
+
+
% ------------------------------------
% Step 47
% ------------------------------------
--- /dev/null
+Changed: Tutorial step-27 has been simplified and now uses the recently
+introduced SmoothnessEstimator namespace.
+<br>
+(Marc Fehling, 2020/12/24)
<a name="Intro"></a>
<h1>Introduction</h1>
-This tutorial program attempts to show how to use $hp$ finite element methods
+This tutorial program attempts to show how to use $hp$-finite element methods
with deal.II. It solves the Laplace equation and so builds only on the first
few tutorial programs, in particular on step-4 for dimension
-independent programming and step-6 for adaptive mesh
-refinement.
+independent programming and step-6 for adaptive mesh refinement.
-The $hp$ finite element method was proposed in the early 1980s by
-Babuska and Guo as an alternative to either
-(i) mesh refinement (i.e. decreasing the mesh parameter $h$ in a finite
+The $hp$-finite element method was proposed in the early 1980s by
+Babuška and Guo as an alternative to either
+(i) mesh refinement (i.e., decreasing the mesh parameter $h$ in a finite
element computation) or (ii) increasing the polynomial degree $p$ used for
shape functions. It is based on the observation that increasing the polynomial
degree of the shape functions reduces the approximation error if the solution
-is sufficiently smooth. On the other hand, it is well known
+is sufficiently smooth. On the other hand, it is well known
that even for the generally well-behaved class of elliptic problems, higher
degrees of regularity can not be guaranteed in the vicinity of boundaries,
corners, or where coefficients are discontinuous; consequently, the
approximation can not be improved in these areas by increasing the polynomial
degree $p$ but only by refining the mesh, i.e., by reducing the mesh size
$h$. These differing means to reduce the
-error have led to the notion of $hp$ finite elements, where the approximating
+error have led to the notion of $hp$-finite elements, where the approximating
finite element spaces are adapted to have a high polynomial degree $p$
wherever the solution is sufficiently smooth, while the mesh width $h$ is
reduced at places wherever the solution lacks regularity. It was
-already realized in the first papers on this method that $hp$ finite elements
+already realized in the first papers on this method that $hp$-finite elements
can be a powerful tool that can guarantee that the error is reduced not only
with some negative power of the number of degrees of freedom, but in fact
exponentially.
<li>Degrees of freedom will then have to be allocated on each cell depending
on what finite element is associated with this particular cell. Constraints
- will have to generated in the same way as for hanging nodes, but now also
- including the case where two neighboring cells.</li>
+ will have to be generated in the same way as for hanging nodes, but we now
+ also have to deal with the case where two neighboring cells have different
+ finite elements assigned.</li>
<li>We will need to be able to assemble cell and face contributions
to global matrices and right hand side vectors.</li>
deal.II, and that we will only have to provide the logic of what the
program should do, not exactly how all this is going to happen.
-In deal.II, the $hp$ functionality is largely packaged into
+In deal.II, the $hp$-functionality is largely packaged into
the hp-namespace. This namespace provides classes that handle
-$hp$ discretizations, assembling matrices and vectors, and other
+$hp$-discretizations, assembling matrices and vectors, and other
tasks. We will get to know many of them further down below. In
addition, most of the functions in the DoFTools, and VectorTools
-namespaces accept $hp$ objects in addition to the non-$hp$ ones. Much of
-the $hp$ implementation is also discussed in the @ref hp documentation
+namespaces accept $hp$-objects in addition to the non-$hp$-ones. Much of
+the $hp$-implementation is also discussed in the @ref hp documentation
module and the links found there.
It may be worth giving a slightly larger perspective at the end of
-this first part of the introduction. $hp$ functionality has been
+this first part of the introduction. $hp$-functionality has been
implemented in a number of different finite element packages (see, for
example, the list of references cited in the @ref hp_paper "hp-paper").
However, by and large, most of these packages have implemented it only
across faces between cells and therefore do not require the special
treatment otherwise necessary whenever finite elements of different
polynomial degree meet at a common face. In contrast, deal.II
-implements the most general case, i.e. it allows for continuous and
+implements the most general case, i.e., it allows for continuous and
discontinuous elements in 1d, 2d, and 3d, and automatically handles
the resulting complexity. In particular, it handles computing the
constraints (similar to hanging node constraints) of elements of
@ref hp_paper "hp-paper" for those interested in such detail.
We hope that providing such a general implementation will help explore
-the potential of $hp$ methods further.
+the potential of $hp$-methods further.
+
<h3>Finite element collections</h3>
-Now on again to the details of how to use the $hp$ functionality in
+Now on again to the details of how to use the $hp$-functionality in
deal.II. The first aspect we have to deal with is that now we do not
have only a single finite element any more that is used on all cells,
but a number of different elements that cells can choose to use. For
a single finite element object, but rather may want to use different
elements on different cells. We therefore need two things: (i) a
version of the DoFHandler class that can deal with this situation, and
-(ii) a way to tell the DoF handler which element to use on which cell.
+(ii) a way to tell the DoFHandler which element to use on which cell.
The first of these two things is implemented in the <i>hp</i>-mode of
the DoFHandler class: rather than associating it with a triangulation
we will have to have some sort of strategy later on to decide which
element to use on which cell; we will come back to this later. The
main point here is that the first and last line of this code snippet
-is pretty much exactly the same as for the non-$hp$ case.
+is pretty much exactly the same as for the non-$hp$-case.
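+
+As a brief illustration, the collection of elements itself could be put
+together roughly as follows (a sketch only; the program below builds its
+collection in the constructor of its main class, where <code>max_degree</code>
+is a member variable):
+@code
+  hp::FECollection<dim> fe_collection;
+  for (unsigned int degree = 2; degree <= max_degree; ++degree)
+    fe_collection.push_back(FE_Q<dim>(degree));
+@endcode
+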
Another complication arises from the fact that this time we do not
simply have hanging nodes from local mesh refinement, but we also have
DoFTools::make_hanging_node_constraints(dof_handler, constraints);
@endcode
In other words, the DoFTools::make_hanging_node_constraints deals not
-only with hanging node constraints, but also with $hp$ constraints at
+only with hanging node constraints, but also with $hp$-constraints at
the same time.
Following this, we have to set up matrices and vectors for the linear system
of the correct size and assemble them. Setting them up works in exactly the
-same way as for the non-$hp$ case. Assembling requires a bit more thought.
+same way as for the non-$hp$-case. Assembling requires a bit more thought.
The main idea is of course unchanged: we have to loop over all cells, assemble
local contributions, and then copy them into the global objects. As discussed
that changes from cell to cell. It can then be used to sum up local
contributions to bilinear form and right hand side.
-In the context of $hp$ finite element methods, we have to deal with the fact
+In the context of $hp$-finite element methods, we have to deal with the fact
that we do not use the same finite element object on each cell. In fact, we
should not even use the same quadrature object for all cells, but rather
higher order quadrature formulas for cells where we use higher order finite
what we need in the current context. The difference is that instead of a
single finite element, quadrature formula, and mapping, it takes collections
of these objects. Its use is very much like the regular FEValues class,
-i.e. the interesting part of the loop over all cells would look like this:
+i.e., the interesting part of the loop over all cells would look like this:
@code
hp::FEValues<dim> hp_fe_values(mapping_collection,
fe_collection,
quadrature_collection,
- update_values | update_gradients |
+ update_values | update_gradients |
update_quadrature_points | update_JxW_values);
for (const auto &cell : dof_handler.active_cell_iterators())
for this index. The order of these arguments is chosen in this way because one
may sometimes want to pick a different quadrature or mapping object from their
respective collections, but hardly ever a different finite element than the
-one in use on this cell, i.e. one with an index different from
+one in use on this cell, i.e., one with an index different from
<code>cell-@>active_fe_index()</code>. The finite element collection index is
therefore the last default argument so that it can be conveniently omitted.
What this <code>reinit</code> call does is the following: the
hp::FEValues class checks whether it has previously already allocated a
-non-$hp$ FEValues object for this combination of finite element, quadrature,
+non-$hp$-FEValues object for this combination of finite element, quadrature,
and mapping objects. If not, it allocates one. It then re-initializes this
object for the current cell, after which there is now a FEValues object for
the selected finite element, quadrature and mapping usable on the current
In any case, as long as the decision is only "refine this cell" or "do not
refine this cell", the actual refinement step is not particularly
challenging. However, here we have a code that is capable of hp-refinement,
-i.e. we suddenly have two choices whenever we detect that the error on a
+i.e., we suddenly have two choices whenever we detect that the error on a
certain cell is too large for our liking: we can refine the cell by splitting
it into several smaller ones, or we can increase the polynomial degree of the
shape functions used on it. How do we know which is the more promising
-strategy? Answering this question is the central problem in $hp$ finite
+strategy? Answering this question is the central problem in $hp$-finite
element research at the time of this writing.
In short, the question does not appear to be settled in the literature at this
to the problem. Rather, it is intended as an idea to approach it that merits
further research and investigation. In other words, we do not intend to enter
a sophisticated proposal into the fray about answers to the general
-question. However, to demonstrate our approach to $hp$ finite elements, we
+question. However, to demonstrate our approach to $hp$-finite elements, we
need a simple indicator that does generate some useful information that is
able to drive the simple calculations this tutorial program will perform.
@f[
\int_K |\nabla^s u({\bf x})|^2 \; d{\bf x} < \infty.
@f]
-Assuming that the cell $K$ is not degenerate, i.e. that the mapping from the
+Assuming that the cell $K$ is not degenerate, i.e., that the mapping from the
unit cell to cell $K$ is sufficiently regular, above condition is of course
equivalent to
@f[
|{\bf k}|^{2s}
|\hat U_{\bf k}|^2.
@f]
-In other words, if this norm is to be finite (i.e. for $\hat u(\hat{\bf x})$ to be in $H^s(\hat K)$), we need that
+In other words, if this norm is to be finite (i.e., for $\hat u(\hat{\bf x})$ to be in $H^s(\hat K)$), we need that
@f[
|\hat U_{\bf k}| = {\cal O}\left(|{\bf k}|^{-\left(s+1/2+\frac{d-1}{2}+\epsilon\right)}\right).
@f]
@f]
with the matrix
@f[
- {\cal F}_{{\bf k},j}
- =
- \int_{\hat K} e^{i {\bf k}\cdot \hat{\bf x}} \hat \varphi_j(\hat{\bf x}) d\hat{\bf x}.
+ {\cal F}_{{\bf k},j}
+ =
+ \int_{\hat K} e^{i {\bf k}\cdot \hat{\bf x}} \hat \varphi_j(\hat{\bf x}) d\hat{\bf x}.
@f]
This matrix is easily computed for a given number of shape functions
$\varphi_j$ and Fourier modes $N$. Consequently, finding the
the exponent $\mu$ that we can then use to determine that
$\hat u(\hat{\bf x})$ is in $H^s(\hat K)$ with $s=\mu-\frac d2$.
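+
+To make this last step concrete (a sketch of the fit that is eventually
+performed, assuming the power-law ansatz
+$|\hat U_{\bf k}| \simeq C |{\bf k}|^{-\mu}$ with $\beta = \ln C$):
+taking logarithms, the exponent $\mu$ is obtained from a least-squares fit,
+i.e., by minimizing
+@f[
+   \sum_{{\bf k}, |{\bf k}|\le N}
+   \left( \ln |\hat U_{\bf k}| - \beta + \mu \ln |{\bf k}| \right)^2
+@f]
+with respect to $\beta$ and $\mu$.
+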
+The steps outlined above are applicable to many different scenarios, which
+motivated the introduction of a generic function
+SmoothnessEstimator::Fourier::coefficient_decay() in deal.II that combines all
+the tasks described in this section in one simple function call. We will use it
+in the implementation of this program.
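+
+As a preview, the call could look roughly as follows (the complete version,
+including the declaration of the <code>smoothness_indicators</code> vector,
+appears in the program below):
+@code
+  FESeries::Fourier<dim> fourier =
+    SmoothnessEstimator::Fourier::default_fe_series(fe_collection);
+
+  SmoothnessEstimator::Fourier::coefficient_decay(fourier,
+                                                  dof_handler,
+                                                  solution,
+                                                  smoothness_indicators);
+@endcode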
+
<h4>Compensating for anisotropy</h4>
direction in which the solution appears to be roughest?
One can probably argue for either case. The issue would be of more interest if
-deal.II had the ability to use anisotropic finite elements, i.e. ones that use
+deal.II had the ability to use anisotropic finite elements, i.e., ones that use
different polynomial degrees in different spatial directions, as they would be
able to exploit the directionally variable smoothness much better. Alas, this
capability does not exist at the time of writing this tutorial program.
To calculate $\mu$ as shown above, we have to slightly modify all sums:
instead of summing over all Fourier modes, we only sum over those for which
the Fourier coefficient is the largest one among all $\hat U_{{\bf k}}$ with
-the same magnitude $|{\bf k}|$, i.e. all sums above have to replaced by the
+the same magnitude $|{\bf k}|$, i.e., all sums above have to be replaced by the
following sums:
@f[
\sum_{{\bf k}, |{\bf k}|\le N}
$|{\bf k}|^\mu$ with respect to the Fourier frequencies ${\bf k}$ <i>on the unit
cell</i>, but to fit the coefficients $\hat U_{{\bf k}}$ computed on the
reference cell <i>to the Fourier frequencies on the real cell $|\bf
-k|h$</i>, where $h$ is the norm of the transformation operator (i.e. something
+k|h$</i>, where $h$ is the norm of the transformation operator (i.e., something
like the diameter of the cell). In other words, we would have to minimize the
sum of squares of the terms
@f[
<h4>Creating the sparsity pattern</h4>
-One of the problems with $hp$ methods is that the high polynomial degree of
+One of the problems with $hp$-methods is that the high polynomial degree of
shape functions together with the large number of constrained degrees of
freedom leads to matrices with large numbers of nonzero entries in some
rows. At the same time, because there are areas where we use low polynomial
resulting matrix will be much sparser (and, therefore, matrix-vector products or
factorizations will be substantially faster too).
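+
+If the constraints are taken into account already when building the sparsity
+pattern, the entries that belong to constrained degrees of freedom can be
+dropped right away. A sketch of this, using the <code>dof_handler</code>,
+<code>constraints</code>, and <code>sparsity_pattern</code> member variables
+of the program below, could look like this:
+@code
+  DynamicSparsityPattern dsp(dof_handler.n_dofs());
+  DoFTools::make_sparsity_pattern(dof_handler,
+                                  dsp,
+                                  constraints,
+                                  /*keep_constrained_dofs=*/false);
+  sparsity_pattern.copy_from(dsp);
+@endcode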
+
<h4>Eliminating constrained degrees of freedom</h4>
-A second problem particular to $hp$ methods arises because we have so
+A second problem particular to $hp$-methods arises because we have so
many constrained degrees of freedom: typically up to about one third
of all degrees of freedom (in 3d) are constrained because they either
belong to cells with hanging nodes or because they are on cells
adjacent to cells with a higher or lower polynomial degree. This is,
in fact, not much more than the fraction of constrained degrees of
-freedom in non-$hp$ mode, but the difference is that each constrained
+freedom in non-$hp$-mode, but the difference is that each constrained
hanging node is constrained not only against the two adjacent degrees
of freedom, but is constrained against many more degrees of freedom.
is exactly what we've shown in step-6.
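+
+In code, this on-the-fly elimination is the familiar pattern from step-6: when
+copying local contributions into the global objects, we let the constraints
+object do the work (a sketch, with <code>cell_matrix</code>,
+<code>cell_rhs</code>, and <code>local_dof_indices</code> named as in step-6):
+@code
+  constraints.distribute_local_to_global(
+    cell_matrix, cell_rhs, local_dof_indices, system_matrix, system_rhs);
+@endcode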
+
<h3>The test case</h3>
The test case we will solve with this program is a re-take of the one we
-\Delta u = f
@f]
in 2d, with $f=(x+1)(y+1)$, and with zero Dirichlet boundary values for
-$u$. We do so on the domain $[-1,1]^2\backslash[-\frac 12,\frac 12]^2$, i.e. a
-square with a square hole in the middle.
+$u$. We do so on the domain $[-1,1]^2\backslash[-\frac 12,\frac 12]^2$,
+i.e., a square with a square hole in the middle.
-The difference to step-14 is of course that we use $hp$ finite
+The difference to step-14 is of course that we use $hp$-finite
elements for the solution. The test case is of interest because it has
re-entrant corners in the corners of the hole, at which the solution has
singularities. We therefore expect that the solution will be smooth in the
In this section, we discuss a few results produced from running the
current tutorial program. More results, in particular the extension to
3d calculations and determining how much compute time the individual
-components of the program take, are given in the @ref hp_paper .
+components of the program take, are given in the @ref hp_paper "hp-paper".
When run, this is what the program produces:
of freedom is on the order of 20-25% of the total number of degrees of
freedom, at least on the later grids when we have elements of relatively
high order (in 3d, the fraction of constrained degrees of freedom can be up
-to 30%). This is, in fact, on the same order of magnitude as for non-$hp$
-discretizations. For example, in the last step of the step-6
+to 30%). This is, in fact, on the same order of magnitude as for
+non-$hp$-discretizations. For example, in the last step of the step-6
program, we have 18353 degrees of freedom, 4432 of which are
constrained. The difference is that in the latter program, each constrained
hanging node is constrained against only the two adjacent degrees of
-freedom, whereas in the $hp$ case, constrained nodes are constrained against
+freedom, whereas in the $hp$-case, constrained nodes are constrained against
many more degrees of freedom. Note also that the current program also
includes nodes subject to Dirichlet boundary conditions in the list of
constraints. In cycle 0, all the constraints are actually because of
While this is certainly not a perfect arrangement, it does make some sense: we
use low order elements close to boundaries and corners where regularity is
low. On the other hand, higher order elements are used where (i) the error was
-at one point fairly large, i.e. mainly in the general area around the corner
+at one point fairly large, i.e., mainly in the general area around the corner
singularities and in the top right corner where the solution is large, and
-(ii) where the solution is smooth, i.e. far away from the boundary.
+(ii) where the solution is smooth, i.e., far away from the boundary.
This arrangement of polynomial degrees of course follows from our smoothness
estimator. Here is the estimated smoothness of the solution, with darker colors
patches surrounding each cell. It may also be possible to find simple
correction factors for each cell depending on the number of constrained
degrees of freedom it has. In either case, there are ample opportunities for
-further research on finding good $hp$ refinement criteria. On the other hand,
-the main point of the current program was to demonstrate using the $hp$
-technology in deal.II, which is unaffected by our use of a possible
+further research on finding good $hp$-refinement criteria. On the other hand,
+the main point of the current program was to demonstrate using the
+$hp$-technology in deal.II, which is unaffected by our use of a possible
sub-optimal refinement criterion.
+
+
+
+<a name="extensions"></a>
+<h3>Possibilities for extensions</h3>
+
+<h4>Different hp-decision strategies</h4>
+
+This tutorial demonstrates only one particular strategy to decide between $h$- and
+$p$-adaptation. In fact, there are many more ways to automatically decide on the
+adaptation type, of which a few are already implemented in deal.II:
+<ul>
+ <li><i>Fourier coefficient decay:</i> This is the strategy currently
+ implemented in this tutorial. For more information on this strategy, see
+ the general documentation of the SmoothnessEstimator::Fourier namespace.</li>
+
+ <li><i>Legendre coefficient decay:</i> This strategy is quite similar
+ to the current one, but uses Legendre series expansion rather than the
+ Fourier one: instead of sinusoids as basis functions, this strategy uses
+ Legendre polynomials. Of course, since we approximate the solution using a
+ finite-dimensional polynomial on each cell, the expansion of the solution in
+ Legendre polynomials is also finite and, consequently, when we talk about the
+ "decay" of this expansion, we can only consider the finitely many nonzero
+ coefficients of this expansion, rather than think about it in asymptotic terms.
+ But, if we have enough of these coefficients, we can certainly think of the
+ decay of these coefficients as characteristic of the decay of the coefficients
+ of the exact solution (which is, in general, not polynomial and so will have an
+ infinite Legendre expansion), and considering the coefficients we have should
+ reveal something about the properties of the exact solution.
+
+ The transition from the Fourier strategy to the Legendre one is quite simple:
+ you just need to replace the series expansion class and the corresponding
+ smoothness estimation function by their counterparts in the namespaces
+ FESeries::Legendre and SmoothnessEstimator::Legendre, as sketched after this
+ list. For the theoretical
+ background of this strategy, consult the general documentation of the
+ SmoothnessEstimator::Legendre namespace, as well as @cite mavriplis1994hp ,
+ @cite eibner2007hp and @cite davydov2017hp.</li>
+
+ <li><i>Refinement history:</i> The last strategy is quite different
+ from the other two. In theory, we know how the error will converge
+ after changing the discretization of the function space. With
+ $h$-refinement the solution converges algebraically as already pointed
+ out in step-7. If the solution is sufficiently smooth, though, we
+ expect that the solution will converge exponentially with increasing
+ polynomial degree of the finite element. We can compare a proper
+ prediction of the error with the actual error in the following step to
+ see if our choice of adaptation type was justified.
+
+ The transition to this strategy is a bit more complicated. For this, we need
+ an initialization step with pure $h$- or $p$-refinement and we need to
+ transfer the predicted errors over adapted meshes. The extensive
+ documentation of the hp::Refinement::predict_error() function describes not
+ only the theoretical details of this approach, but also presents a blueprint
+ on how to implement this strategy in your code. For more information, see
+ @cite melenk2001hp .
+
+ Note that with this particular function you cannot predict the error for
+ the next time step in time-dependent problems. Therefore, this strategy
+ cannot be applied to this type of problem without further ado. Alternatively,
+ the following approach could be used, which works for all the other
+ strategies as well: start each time step with a coarse mesh, keep refining
+ until happy with the result, and only then move on to the next time step.</li>
+</ul>
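+
+For the Legendre strategy, the replacement in the <code>postprocess()</code>
+function could look roughly like this (a sketch; apart from the two
+namespaces, the names are the same as in the Fourier version used by the
+program):
+@code
+  FESeries::Legendre<dim> legendre =
+    SmoothnessEstimator::Legendre::default_fe_series(fe_collection);
+
+  SmoothnessEstimator::Legendre::coefficient_decay(legendre,
+                                                   dof_handler,
+                                                   solution,
+                                                   smoothness_indicators);
+@endcode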
+
+Try implementing one of these strategies into this tutorial and observe the
+subtle changes to the results. You will notice that all strategies are
+capable of identifying the singularities near the reentrant corners and
+will perform $h$-refinement in these regions, while preferring $p$-refinement
+in the bulk domain. A detailed comparison of these strategies is presented
+in @cite fehling2020 .
+
+
+<h4>Parallel hp-adaptive finite elements</h4>
+
+All functionality presented in this tutorial already works for both
+sequential and parallel applications. It is possible without too much
+effort to change to either the parallel::shared::Triangulation or the
+parallel::distributed::Triangulation classes. If you feel eager to try
+it, we recommend reading step-18 for the former and step-40 for the
+latter case first for further background information on the topic, and
+then come back to this tutorial to try out your newly acquired skills.
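+
+As a very rough sketch of the first step of such a conversion (only a sketch
+on our part; distributed vectors, solvers, and graphical output need
+corresponding changes as well, as discussed in step-40), the triangulation
+would be constructed with an MPI communicator:
+@code
+#include <deal.II/distributed/tria.h>
+
+parallel::distributed::Triangulation<dim> triangulation(MPI_COMM_WORLD);
+@endcode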
*
* Authors: Wolfgang Bangerth, Texas A&M University, 2006, 2007;
- * Denis Davydov, University of Erlangen-Nuremberg, 2016.
+ * Denis Davydov, University of Erlangen-Nuremberg, 2016;
+ * Marc Fehling, Colorado State University, 2020.
*/
// These are the new files we need. The first and second provide the
// FECollection and the <i>hp</i> version of the FEValues class as described in
-// the introduction of this program. The last one provides Fourier
-// transformation class on the unit cell.
+// the introduction of this program. The next one provides the functionality
+// for automatic $hp$-adaptation, for which we will use the estimation
+// algorithms based on decaying series expansion coefficients that are part of
+// the last two files.
#include <deal.II/hp/fe_collection.h>
#include <deal.II/hp/fe_values.h>
+#include <deal.II/hp/refinement.h>
#include <deal.II/fe/fe_series.h>
+#include <deal.II/numerics/smoothness_estimator.h>
-// The last set of include files are standard C++ headers. We need support for
-// complex numbers when we compute the Fourier transform.
+// The last set of include files are standard C++ headers.
#include <fstream>
#include <iostream>
-#include <complex>
// Finally, this is as in previous programs:
// main difference is that we have merged the refine_grid and output_results
// functions into one since we will also want to output some of the
// quantities used in deciding how to refine the mesh (in particular the
- // estimated smoothness of the solution). There is also a function that
- // computes this estimated smoothness, as discussed in the introduction.
+ // estimated smoothness of the solution).
//
// As far as member variables are concerned, we use the same structure as
// already used in step-6, but we need collections instead of
void assemble_system();
void solve();
void create_coarse_grid();
- void estimate_smoothness(Vector<float> &smoothness_indicators);
void postprocess(const unsigned int cycle);
- std::pair<bool, unsigned int> predicate(const TableIndices<dim> &indices);
Triangulation<dim> triangulation;
hp::QCollection<dim> quadrature_collection;
hp::QCollection<dim - 1> face_quadrature_collection;
- hp::QCollection<dim> fourier_q_collection;
- std::unique_ptr<FESeries::Fourier<dim>> fourier;
- std::vector<double> ln_k;
- Table<dim, std::complex<double>> fourier_coefficients;
-
AffineConstraints<double> constraints;
SparsityPattern sparsity_pattern;
// face quadrature objects. We start with quadratic elements, and each
// quadrature formula is chosen so that it is appropriate for the matching
// finite element in the hp::FECollection object.
- //
- // Finally, we initialize FESeries::Fourier object which will be used to
- // calculate coefficient in Fourier series as described in the introduction.
- // In addition to the hp::FECollection, we need to provide quadrature rules
- // hp::QCollection for integration on the reference cell.
- //
- // In order to resize fourier_coefficients Table, we use the following
- // auxiliary function
- template <int dim, typename T>
- void resize(Table<dim, T> &coeff, const unsigned int N)
- {
- TableIndices<dim> size;
- for (unsigned int d = 0; d < dim; d++)
- size[d] = N;
- coeff.reinit(size);
- }
-
template <int dim>
LaplaceProblem<dim>::LaplaceProblem()
: dof_handler(triangulation)
quadrature_collection.push_back(QGauss<dim>(degree + 1));
face_quadrature_collection.push_back(QGauss<dim - 1>(degree + 1));
}
-
- // As described in the introduction, we define the Fourier vectors ${\bf
- // k}$ for which we want to compute Fourier coefficients of the solution
- // on each cell as follows. In 2d, we will need coefficients corresponding
- // to vectors ${\bf k}=(2 \pi i, 2\pi j)^T$ for which $\sqrt{i^2+j^2}\le N$,
- // with $i,j$ integers and $N$ being the maximal polynomial degree we use
- // for the finite elements in this program. The FESeries::Fourier class'
- // constructor first parameter $N$ defines the number of coefficients in 1D
- // with the total number of coefficients being $N^{dim}$. Although we will
- // not use coefficients corresponding to
- // $\sqrt{i^2+j^2}> N$ and $i+j==0$, the overhead of their calculation is
- // minimal. The transformation matrices for each FiniteElement will be
- // calculated only once the first time they are required in the course of
- // hp-adaptive refinement. Because we work on the unit cell, we can do all
- // this work without a mapping from reference to real cell and consequently
- // can precalculate these matrices. The calculation of expansion
- // coefficients for a particular set of local degrees of freedom on a given
- // cell then follows as a simple matrix-vector product.
- // The 3d case is handled analogously.
- const unsigned int N = max_degree;
-
- // We will need to assemble the matrices that do the Fourier transforms
- // for each of the finite elements we deal with, i.e. the matrices ${\cal
- // F}_{{\bf k},j}$ defined in the introduction. We have to do that for
- // each of the finite elements in use. To that end we need a quadrature
- // rule. In this example we use the same quadrature formula for each
- // finite element, namely that is obtained by iterating a
- // 2-point Gauss formula as many times as the maximal exponent we use for
- // the term $e^{i{\bf k}\cdot{\bf x}}$:
- QGauss<1> base_quadrature(2);
- QIterated<dim> quadrature(base_quadrature, N);
- for (unsigned int i = 0; i < fe_collection.size(); i++)
- fourier_q_collection.push_back(quadrature);
-
- // Now we are ready to set-up the FESeries::Fourier object
- const std::vector<unsigned int> n_coefficients_per_direction(
- fe_collection.size(), N);
- fourier =
- std::make_unique<FESeries::Fourier<dim>>(n_coefficients_per_direction,
- fe_collection,
- fourier_q_collection);
-
- // We need to resize the matrix of fourier coefficients according to the
- // number of modes N.
- resize(fourier_coefficients, N);
}
// This function is again a verbatim copy of what we already did in
// step-6. Despite function calls with exactly the same names and arguments,
// the algorithms used internally are different in some aspect since the
- // dof_handler variable here is an hp-object.
+ // dof_handler variable here is in $hp$-mode.
template <int dim>
void LaplaceProblem<dim>::setup_system()
{
// polynomial degrees on different cells, the matrices and vectors holding
// local contributions do not have the same size on all cells. At the
// beginning of the loop over all cells, we therefore each time have to
- // resize them to the correct size (given by
- // <code>dofs_per_cell</code>). Because these classes are implement in such
- // a way that reducing the size of a matrix or vector does not release the
- // currently allocated memory (unless the new size is zero), the process of
- // resizing at the beginning of the loop will only require re-allocation of
- // memory during the first few iterations. Once we have found in a cell with
- // the maximal finite element degree, no more re-allocations will happen
- // because all subsequent <code>reinit</code> calls will only set the size
- // to something that fits the currently allocated memory. This is important
- // since allocating memory is expensive, and doing so every time we visit a
- // new cell would take significant compute time.
+ // resize them to the correct size (given by <code>dofs_per_cell</code>).
+ // Because these classes are implemented in such a way that reducing the size
+ // of a matrix or vector does not release the currently allocated memory
+ // (unless the new size is zero), the process of resizing at the beginning of
+ // the loop will only require re-allocation of memory during the first few
+ // iterations. Once we have found a cell with the maximal finite element
+ // degree, no more re-allocations will happen because all subsequent
+ // <code>reinit</code> calls will only set the size to something that fits the
+ // currently allocated memory. This is important since allocating memory is
+ // expensive, and doing so every time we visit a new cell would take
+ // significant compute time.
template <int dim>
void LaplaceProblem<dim>::assemble_system()
{
// Let us start with computing estimated error and smoothness indicators,
// which each are one number for each active cell of our
// triangulation. For the error indicator, we use the KellyErrorEstimator
- // class as always. Estimating the smoothness is done in the respective
- // function of this class; that function is discussed further down below:
+ // class as always.
Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
KellyErrorEstimator<dim>::estimate(
dof_handler,
solution,
estimated_error_per_cell);
-
+ // Estimating the smoothness is performed with the method of decaying
+ // expansion coefficients as outlined in the introduction. We will first
+ // need to create an object capable of transforming the finite element
+ // solution on every single cell into a sequence of Fourier series
+ // coefficients. The SmoothnessEstimator namespace offers a factory function
+ // for such a FESeries::Fourier object that is optimized for the process of
+ // estimating smoothness. The actual determination of the decay of Fourier
+ // coefficients on every individual cell then happens in the last function.
Vector<float> smoothness_indicators(triangulation.n_active_cells());
- estimate_smoothness(smoothness_indicators);
+ FESeries::Fourier<dim> fourier =
+ SmoothnessEstimator::Fourier::default_fe_series(fe_collection);
+ SmoothnessEstimator::Fourier::coefficient_decay(fourier,
+ dof_handler,
+ solution,
+ smoothness_indicators);
// Next we want to generate graphical output. In addition to the two
// estimated quantities derived above, we would also like to output the
// that element. The result we put into a vector with one element per
// cell. The DataOut class requires this to be a vector of
// <code>float</code> or <code>double</code>, even though our values are
- // all integers, so that it what we use:
+ // all integers, so that is what we use:
{
Vector<float> fe_degrees(triangulation.n_active_cells());
for (const auto &cell : dof_handler.active_cell_iterators())
// $h$ decreased. The strategy we choose here is that we look at the
// smoothness indicators of those cells that are flagged for refinement,
// and increase $p$ for those with a smoothness larger than a certain
- // threshold. For this, we first have to determine the maximal and
- // minimal values of the smoothness indicators of all flagged cells,
- // which we do using a loop over all cells and comparing current minimal
- // and maximal values. (We start with the minimal and maximal values of
- // <i>all</i> cells, a range within which the minimal and maximal values
- // on cells flagged for refinement must surely lie.) Absent any better
- // strategies, we will then set the threshold above which will increase
- // $p$ instead of reducing $h$ as the mean value between minimal and
- // maximal smoothness indicators on cells flagged for refinement:
- float max_smoothness = *std::min_element(smoothness_indicators.begin(),
- smoothness_indicators.end()),
- min_smoothness = *std::max_element(smoothness_indicators.begin(),
- smoothness_indicators.end());
- for (const auto &cell : dof_handler.active_cell_iterators())
- if (cell->refine_flag_set())
- {
- max_smoothness =
- std::max(max_smoothness,
- smoothness_indicators(cell->active_cell_index()));
- min_smoothness =
- std::min(min_smoothness,
- smoothness_indicators(cell->active_cell_index()));
- }
- const float threshold_smoothness = (max_smoothness + min_smoothness) / 2;
-
- // With this, we can go back, loop over all cells again, and for those
- // cells for which (i) the refinement flag is set, (ii) the smoothness
- // indicator is larger than the threshold, and (iii) we still have a
- // finite element with a polynomial degree higher than the current one
- // in the finite element collection, we then increase the polynomial
- // degree and in return remove the flag indicating that the cell should
- // undergo bisection. For all other cells, the refinement flags remain
- // untouched:
- for (const auto &cell : dof_handler.active_cell_iterators())
- if (cell->refine_flag_set() &&
- (smoothness_indicators(cell->active_cell_index()) >
- threshold_smoothness) &&
- (cell->active_fe_index() + 1 < fe_collection.size()))
- {
- cell->clear_refine_flag();
- cell->set_active_fe_index(cell->active_fe_index() + 1);
- }
+ // relative threshold. In other words, for every cell for which (i) the
+ // refinement flag is set, (ii) the smoothness indicator is larger than
+ // the threshold, and (iii) we still have a finite element with a
+ // polynomial degree higher than the current one in the finite element
+ // collection, we will assign a future FE index that corresponds to a
+ // polynomial with degree one higher than it currently is. The following
+ // function is capable of doing exactly this. Absent any better
+ // strategies, we will set the threshold as the mean value between minimal
+ // and maximal smoothness indicators on cells flagged for refinement. This
+ // is achieved by setting the corresponding fraction parameter to a value
+ // of 0.5. In the same way, we deal with cells that are going to be
+ // coarsened and decrease their polynomial degree when their smoothness
+ // indicator is below the corresponding threshold determined on cells to
+ // be coarsened.
+ hp::Refinement::p_adaptivity_from_relative_threshold(
+ dof_handler, smoothness_indicators, 0.5, 0.5);
+
+ // The above function only determines whether the polynomial degree will
+ // change via future FE indices, but does not manipulate the
+ // $h$-refinement flags. So for cells that are flagged for both refinement
+ // categories, we prefer $p$- over $h$-refinement. The following function
+ // call ensures that only one of $p$- or $h$-refinement is imposed, and
+ // not both at once.
+ hp::Refinement::choose_p_over_h(dof_handler);
// At the end of this procedure, we then refine the mesh. During this
// process, children of cells undergoing bisection inherit their mother
- // cell's finite element index:
+ // cell's finite element index. Further, future finite element indices
+ // will turn into active ones, so that the new finite elements will be
+ // assigned to cells after the next call of DoFHandler::distribute_dofs().
triangulation.execute_coarsening_and_refinement();
}
}
// @sect4{LaplaceProblem::create_coarse_grid}
- // The following function is used when creating the initial grid. It is a
- // specialization for the 2d case, i.e. a corresponding function needs to be
- // implemented if the program is run in anything other then 2d. The function
- // is actually stolen from step-14 and generates the same mesh used already
- // there, i.e. the square domain with the square hole in the middle. The
- // meaning of the different parts of this function are explained in the
- // documentation of step-14:
- template <>
- void LaplaceProblem<2>::create_coarse_grid()
+ // The following function is used when creating the initial grid. The grid we
+ // would like to create is actually similar to the one from step-14, i.e., the
+ // square domain with the square hole in the middle. It can be generated by
+ // exactly the same function. However, since its implementation is only a
+ // specialization of the 2d case, we will present a different way of creating
+ // this domain which is dimension independent.
+ //
+ // We first create a hypercube triangulation with enough cells so that it
+ // already holds our desired domain $[-1,1]^d$, subdivided into $4^d$ cells.
+ // We then remove those cells in the center of the domain by testing the
+ // coordinate values of the vertices on each cell. In the end, we refine
+ // the resulting grid globally as usual.
+ template <int dim>
+ void LaplaceProblem<dim>::create_coarse_grid()
{
- const unsigned int dim = 2;
-
- const std::vector<Point<2>> vertices = {
- {-1.0, -1.0}, {-0.5, -1.0}, {+0.0, -1.0}, {+0.5, -1.0}, {+1.0, -1.0}, //
- {-1.0, -0.5}, {-0.5, -0.5}, {+0.0, -0.5}, {+0.5, -0.5}, {+1.0, -0.5}, //
- {-1.0, +0.0}, {-0.5, +0.0}, {+0.5, +0.0}, {+1.0, +0.0}, //
- {-1.0, +0.5}, {-0.5, +0.5}, {+0.0, +0.5}, {+0.5, +0.5}, {+1.0, +0.5}, //
- {-1.0, +1.0}, {-0.5, +1.0}, {+0.0, +1.0}, {+0.5, +1.0}, {+1.0, +1.0}};
-
- const std::vector<std::array<int, GeometryInfo<dim>::vertices_per_cell>>
- cell_vertices = {{{0, 1, 5, 6}},
- {{1, 2, 6, 7}},
- {{2, 3, 7, 8}},
- {{3, 4, 8, 9}},
- {{5, 6, 10, 11}},
- {{8, 9, 12, 13}},
- {{10, 11, 14, 15}},
- {{12, 13, 17, 18}},
- {{14, 15, 19, 20}},
- {{15, 16, 20, 21}},
- {{16, 17, 21, 22}},
- {{17, 18, 22, 23}}};
-
- const unsigned int n_cells = cell_vertices.size();
-
- std::vector<CellData<dim>> cells(n_cells, CellData<dim>());
- for (unsigned int i = 0; i < n_cells; ++i)
- {
- for (unsigned int j = 0; j < cell_vertices[i].size(); ++j)
- cells[i].vertices[j] = cell_vertices[i][j];
- cells[i].material_id = 0;
- }
+ Triangulation<dim> cube;
+ GridGenerator::subdivided_hyper_cube(cube, 4, -1., 1.);
+
+ std::set<typename Triangulation<dim>::active_cell_iterator> cells_to_remove;
+ for (const auto &cell : cube.active_cell_iterators())
+ for (unsigned int v = 0; v < GeometryInfo<dim>::vertices_per_cell; ++v)
+ if (cell->vertex(v).square() < .1)
+ cells_to_remove.insert(cell);
+
+ GridGenerator::create_triangulation_with_removed_cells(cube,
+ cells_to_remove,
+ triangulation);
- triangulation.create_triangulation(vertices, cells, SubCellData());
triangulation.refine_global(3);
}
// @sect4{LaplaceProblem::run}
// This function implements the logic of the program, as did the respective
- // function in most of the previous programs already, see for example
- // step-6.
+ // function in most of the previous programs already, see for example step-6.
//
// Basically, it contains the adaptive loop: in the first iteration create a
// coarse grid, and then set up the linear system, assemble it, solve, and
postprocess(cycle);
}
}
-
-
- // @sect4{LaplaceProblem::estimate_smoothness}
-
- // As described in the introduction, we will need to take the maximum
- // absolute value of fourier coefficients which correspond to $k$-vector
- // $|{\bf k}|= const$. To filter the coefficients Table we
- // will use the FESeries::process_coefficients() which requires a predicate
- // to be specified. The predicate should operate on TableIndices and return
- // a pair of <code>bool</code> and <code>unsigned int</code>. The latter
- // is the value of the map from TableIndicies to unsigned int. It is
- // used to define subsets of coefficients from which we search for the one
- // with highest absolute value, i.e. $l^\infty$-norm. The <code>bool</code>
- // parameter defines which indices should be used in processing. In the
- // current case we are interested in coefficients which correspond to
- // $0 < i*i+j*j < N*N$ and $0 < i*i+j*j+k*k < N*N$ in 2D and 3D, respectively.
- template <int dim>
- std::pair<bool, unsigned int>
- LaplaceProblem<dim>::predicate(const TableIndices<dim> &ind)
- {
- unsigned int v = 0;
- for (unsigned int i = 0; i < dim; i++)
- v += ind[i] * ind[i];
- if (v > 0 && v < max_degree * max_degree)
- return std::make_pair(true, v);
- else
- return std::make_pair(false, v);
- }
-
- // This last function of significance implements the algorithm to estimate
- // the smoothness exponent using the algorithms explained in detail in the
- // introduction. We will therefore only comment on those points that are of
- // implementational importance.
- template <int dim>
- void
- LaplaceProblem<dim>::estimate_smoothness(Vector<float> &smoothness_indicators)
- {
- // Since most of the hard work is done for us in FESeries::Fourier and
- // we set up the object of this class in the constructor, what we are left
- // to do here is apply this class to calculate coefficients and then
- // perform linear regression to fit their decay slope.
-
-
- // First thing to do is to loop over all cells and do our work there, i.e.
- // to locally do the Fourier transform and estimate the decay coefficient.
- // We will use the following array as a scratch array in the loop to store
- // local DoF values:
- Vector<double> local_dof_values;
-
- // Then here is the loop:
- for (const auto &cell : dof_handler.active_cell_iterators())
- {
- // Inside the loop, we first need to get the values of the local
- // degrees of freedom (which we put into the
- // <code>local_dof_values</code> array after setting it to the right
- // size) and then need to compute the Fourier transform by multiplying
- // this vector with the matrix ${\cal F}$ corresponding to this finite
- // element. This is done by calling FESeries::Fourier::calculate(),
- // that has to be provided with the <code>local_dof_values</code>,
- // <code>cell->active_fe_index()</code> and a Table to store
- // coefficients.
- local_dof_values.reinit(cell->get_fe().n_dofs_per_cell());
- cell->get_dof_values(solution, local_dof_values);
-
- fourier->calculate(local_dof_values,
- cell->active_fe_index(),
- fourier_coefficients);
-
- // The next thing, as explained in the introduction, is that we wanted
- // to only fit our exponential decay of Fourier coefficients to the
- // largest coefficients for each possible value of $|{\bf k}|$. To
- // this end, we use FESeries::process_coefficients() to rework
- // coefficients into the desired format. We'll only take those Fourier
- // coefficients with the largest magnitude for a given value of $|{\bf
- // k}|$ and thereby need to use VectorTools::Linfty_norm:
- std::pair<std::vector<unsigned int>, std::vector<double>> res =
- FESeries::process_coefficients<dim>(
- fourier_coefficients,
- [this](const TableIndices<dim> &indices) {
- return this->predicate(indices);
- },
- VectorTools::Linfty_norm);
-
- Assert(res.first.size() == res.second.size(), ExcInternalError());
-
- // The first vector in the <code>std::pair</code> will store values of
- // the predicate, that is $i*i+j*j= const$ or $i*i+j*j+k*k = const$ in
- // 2D or 3D respectively. This vector will be the same for all the cells
- // so we can calculate logarithms of the corresponding Fourier vectors
- // $|{\bf k}|$ only once in the whole hp-refinement cycle:
- if (ln_k.size() == 0)
- {
- ln_k.resize(res.first.size(), 0);
- for (unsigned int f = 0; f < ln_k.size(); f++)
- ln_k[f] =
- std::log(2.0 * numbers::PI * std::sqrt(1. * res.first[f]));
- }
-
- // We have to calculate the logarithms of absolute values of
- // coefficients and use it in a linear regression fit to obtain $\mu$.
- for (double &residual_element : res.second)
- residual_element = std::log(residual_element);
-
- std::pair<double, double> fit =
- FESeries::linear_regression(ln_k, res.second);
-
- // The final step is to compute the Sobolev index $s=\mu-\frac d2$ and
- // store it in the vector of estimated values for each cell:
- smoothness_indicators(cell->active_cell_index()) =
- -fit.first - 1. * dim / 2;
- }
- }
} // namespace Step27
* existing triangulation. A prototypical case is a 2d domain with
* rectangular holes. This can be achieved by first meshing the entire
* domain and then using this function to get rid of the cells that are
- * located at the holes. Likewise, you could create the mesh that
+ * located at the holes. A demonstration of this particular use case is part
+ * of step-27. Likewise, you could create the mesh that
* GridGenerator::hyper_L() produces by starting with a
* GridGenerator::hyper_cube(), refining it once, and then calling the
* current function with a single cell in the second argument.
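+ *
+ * As an illustration of the latter recipe, a sketch (one possible variant;
+ * which child is removed only determines the orientation of the L) could
+ * look like this:
+ * @code
+ *   Triangulation<2> cube, result;
+ *   GridGenerator::hyper_cube(cube, -1., 1.);
+ *   cube.refine_global(1);
+ *
+ *   std::set<Triangulation<2>::active_cell_iterator> cells_to_remove;
+ *   cells_to_remove.insert(cube.begin_active());
+ *
+ *   GridGenerator::create_triangulation_with_removed_cells(cube,
+ *                                                          cells_to_remove,
+ *                                                          result);
+ * @endcode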