From: Marc Fehling Date: Mon, 9 Sep 2019 16:53:14 +0000 (+0200) Subject: step-27: Updated to new interface. X-Git-Tag: v9.3.0-rc1~668^2 X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=9ad3b8d3e24f6085f170448f5e79df4a2eea74f9;p=dealii.git step-27: Updated to new interface. --- diff --git a/doc/doxygen/references.bib b/doc/doxygen/references.bib index 0325f6305c..4aa9c31852 100644 --- a/doc/doxygen/references.bib +++ b/doc/doxygen/references.bib @@ -290,6 +290,22 @@ pages = {1352--1367}} +% ------------------------------------ +% Step 27 +% ------------------------------------ + +@phdthesis{fehling2020, + author = {Marc Fehling}, + title = {Algorithms for massively parallel generic hp-adaptive finite element methods}, + publisher = {Forschungszentrum Jülich GmbH Zentralbibliothek, Verlag}, + school = {Bergische Universität Wuppertal}, + year = {2020}, + volume = {43}, + pages = {vii, 78 pp}, + url = {https://juser.fz-juelich.de/record/878206} +} + + % ------------------------------------ % Step 47 % ------------------------------------ diff --git a/doc/news/changes/minor/20201224Fehling b/doc/news/changes/minor/20201224Fehling new file mode 100644 index 0000000000..92abe3fb3d --- /dev/null +++ b/doc/news/changes/minor/20201224Fehling @@ -0,0 +1,4 @@ +Changed: Tutorial step-27 has been simplified and now uses the recently +introduced SmoothnessEstimator namespace. +
+(Marc Fehling, 2020/12/24) diff --git a/examples/step-27/doc/intro.dox b/examples/step-27/doc/intro.dox index f591b60bee..234abe6878 100644 --- a/examples/step-27/doc/intro.dox +++ b/examples/step-27/doc/intro.dox @@ -1,30 +1,29 @@

Introduction

-This tutorial program attempts to show how to use $hp$ finite element methods +This tutorial program attempts to show how to use $hp$-finite element methods with deal.II. It solves the Laplace equation and so builds only on the first few tutorial programs, in particular on step-4 for dimension -independent programming and step-6 for adaptive mesh -refinement. +independent programming and step-6 for adaptive mesh refinement. -The $hp$ finite element method was proposed in the early 1980s by -Babuska and Guo as an alternative to either -(i) mesh refinement (i.e. decreasing the mesh parameter $h$ in a finite +The $hp$-finite element method was proposed in the early 1980s by +Babuška and Guo as an alternative to either +(i) mesh refinement (i.e., decreasing the mesh parameter $h$ in a finite element computation) or (ii) increasing the polynomial degree $p$ used for shape functions. It is based on the observation that increasing the polynomial degree of the shape functions reduces the approximation error if the solution -is sufficiently smooth. On the other hand, it is well known +is sufficiently smooth. On the other hand, it is well known that even for the generally well-behaved class of elliptic problems, higher degrees of regularity can not be guaranteed in the vicinity of boundaries, corners, or where coefficients are discontinuous; consequently, the approximation can not be improved in these areas by increasing the polynomial degree $p$ but only by refining the mesh, i.e., by reducing the mesh size $h$. These differing means to reduce the -error have led to the notion of $hp$ finite elements, where the approximating +error have led to the notion of $hp$-finite elements, where the approximating finite element spaces are adapted to have a high polynomial degree $p$ wherever the solution is sufficiently smooth, while the mesh width $h$ is reduced at places wherever the solution lacks regularity. It was -already realized in the first papers on this method that $hp$ finite elements +already realized in the first papers on this method that $hp$-finite elements can be a powerful tool that can guarantee that the error is reduced not only with some negative power of the number of degrees of freedom, but in fact exponentially. @@ -40,8 +39,9 @@ we will have to discuss the following aspects:
  • Degrees of freedom will then have to be allocated on each cell depending on what finite element is associated with this particular cell. Constraints - will have to generated in the same way as for hanging nodes, but now also - including the case where two neighboring cells. + will have to be generated in the same way as for hanging nodes, but we now + also have to deal with the case where two neighboring cells have different + finite elements assigned.
  • We will need to be able to assemble cell and face contributions to global matrices and right hand side vectors.
  • @@ -59,17 +59,17 @@ tasks are already well supported by functionality provided by the deal.II, and that we will only have to provide the logic of what the program should do, not exactly how all this is going to happen. -In deal.II, the $hp$ functionality is largely packaged into +In deal.II, the $hp$-functionality is largely packaged into the hp-namespace. This namespace provides classes that handle -$hp$ discretizations, assembling matrices and vectors, and other +$hp$-discretizations, assembling matrices and vectors, and other tasks. We will get to know many of them further down below. In addition, most of the functions in the DoFTools, and VectorTools -namespaces accept $hp$ objects in addition to the non-$hp$ ones. Much of -the $hp$ implementation is also discussed in the @ref hp documentation +namespaces accept $hp$-objects in addition to the non-$hp$-ones. Much of +the $hp$-implementation is also discussed in the @ref hp documentation module and the links found there. It may be worth giving a slightly larger perspective at the end of -this first part of the introduction. $hp$ functionality has been +this first part of the introduction. $hp$-functionality has been implemented in a number of different finite element packages (see, for example, the list of references cited in the @ref hp_paper "hp-paper"). However, by and large, most of these packages have implemented it only @@ -79,7 +79,7 @@ discontinuous finite elements by definition do not require continuity across faces between cells and therefore do not require the special treatment otherwise necessary whenever finite elements of different polynomial degree meet at a common face. In contrast, deal.II -implements the most general case, i.e. it allows for continuous and +implements the most general case, i.e., it allows for continuous and discontinuous elements in 1d, 2d, and 3d, and automatically handles the resulting complexity. In particular, it handles computing the constraints (similar to hanging node constraints) of elements of @@ -88,12 +88,13 @@ data structure techniques necessary for this are described in the @ref hp_paper "hp-paper" for those interested in such detail. We hope that providing such a general implementation will help explore -the potential of $hp$ methods further. +the potential of $hp$-methods further. +
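As a quick preview of these classes, the following sketch shows how the central collection objects of this program are filled; it mirrors the constructor of the tutorial's main class further down, with max_degree denoting the largest polynomial degree in use:
@code
hp::FECollection<dim>    fe_collection;
hp::QCollection<dim>     quadrature_collection;
hp::QCollection<dim - 1> face_quadrature_collection;

for (unsigned int degree = 2; degree <= max_degree; ++degree)
  {
    fe_collection.push_back(FE_Q<dim>(degree));
    quadrature_collection.push_back(QGauss<dim>(degree + 1));
    face_quadrature_collection.push_back(QGauss<dim - 1>(degree + 1));
  }
@endcode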

    Finite element collections

    -Now on again to the details of how to use the $hp$ functionality in +Now on again to the details of how to use the $hp$-functionality in deal.II. The first aspect we have to deal with is that now we do not have only a single finite element any more that is used on all cells, but a number of different elements that cells can choose to use. For @@ -132,7 +133,7 @@ The situation here is a bit more complicated since we do not just have a single finite element object, but rather may want to use different elements on different cells. We therefore need two things: (i) a version of the DoFHandler class that can deal with this situation, and -(ii) a way to tell the DoF handler which element to use on which cell. +(ii) a way to tell the DoFHandler which element to use on which cell. The first of these two things is implemented in the hp-mode of the DoFHandler class: rather than associating it with a triangulation @@ -157,7 +158,7 @@ Dots in the call to set_active_fe_index() indicate that we will have to have some sort of strategy later on to decide which element to use on which cell; we will come back to this later. The main point here is that the first and last line of this code snippet -is pretty much exactly the same as for the non-$hp$ case. +is pretty much exactly the same as for the non-$hp$-case. Another complication arises from the fact that this time we do not simply have hanging nodes from local mesh refinement, but we also have @@ -172,7 +173,7 @@ and in fact the code looks exactly the same: DoFTools::make_hanging_node_constraints(dof_handler, constraints); @endcode In other words, the DoFTools::make_hanging_node_constraints deals not -only with hanging node constraints, but also with $hp$ constraints at +only with hanging node constraints, but also with $hp$-constraints at the same time. @@ -181,7 +182,7 @@ the same time. Following this, we have to set up matrices and vectors for the linear system of the correct size and assemble them. Setting them up works in exactly the -same way as for the non-$hp$ case. Assembling requires a bit more thought. +same way as for the non-$hp$-case. Assembling requires a bit more thought. The main idea is of course unchanged: we have to loop over all cells, assemble local contributions, and then copy them into the global objects. As discussed @@ -194,7 +195,7 @@ FEValues object, thereby asking it to re-compute that part of the information that changes from cell to cell. It can then be used to sum up local contributions to bilinear form and right hand side. -In the context of $hp$ finite element methods, we have to deal with the fact +In the context of $hp$-finite element methods, we have to deal with the fact that we do not use the same finite element object on each cell. In fact, we should not even use the same quadrature object for all cells, but rather higher order quadrature formulas for cells where we use higher order finite @@ -205,13 +206,13 @@ To facilitate these considerations, deal.II has a class hp::FEValues that does what we need in the current context. The difference is that instead of a single finite element, quadrature formula, and mapping, it takes collections of these objects. It's use is very much like the regular FEValues class, -i.e. 
the interesting part of the loop over all cells would look like this: +i.e., the interesting part of the loop over all cells would look like this: @code hp::FEValues hp_fe_values(mapping_collection, fe_collection, quadrature_collection, - update_values | update_gradients | + update_values | update_gradients | update_quadrature_points | update_JxW_values); for (const auto &cell : dof_handler.active_cell_iterators()) @@ -234,13 +235,13 @@ program below), in which case cell-@>active_fe_index() is used for this index. The order of these arguments is chosen in this way because one may sometimes want to pick a different quadrature or mapping object from their respective collections, but hardly ever a different finite element than the -one in use on this cell, i.e. one with an index different from +one in use on this cell, i.e., one with an index different from cell-@>active_fe_index(). The finite element collection index is therefore the last default argument so that it can be conveniently omitted. What this reinit call does is the following: the hp::FEValues class checks whether it has previously already allocated a -non-$hp$ FEValues object for this combination of finite element, quadrature, +non-$hp$-FEValues object for this combination of finite element, quadrature, and mapping objects. If not, it allocates one. It then re-initializes this object for the current cell, after which there is now a FEValues object for the selected finite element, quadrature and mapping usable on the current @@ -262,11 +263,11 @@ complicated strategies in some programs, most importantly in step-14. In any case, as long as the decision is only "refine this cell" or "do not refine this cell", the actual refinement step is not particularly challenging. However, here we have a code that is capable of hp-refinement, -i.e. we suddenly have two choices whenever we detect that the error on a +i.e., we suddenly have two choices whenever we detect that the error on a certain cell is too large for our liking: we can refine the cell by splitting it into several smaller ones, or we can increase the polynomial degree of the shape functions used on it. How do we know which is the more promising -strategy? Answering this question is the central problem in $hp$ finite +strategy? Answering this question is the central problem in $hp$-finite element research at the time of this writing. In short, the question does not appear to be settled in the literature at this @@ -286,7 +287,7 @@ therefore do not intend to present the following ideas as a complete solution to the problem. Rather, it is intended as an idea to approach it that merits further research and investigation. In other words, we do not intend to enter a sophisticated proposal into the fray about answers to the general -question. However, to demonstrate our approach to $hp$ finite elements, we +question. However, to demonstrate our approach to $hp$-finite elements, we need a simple indicator that does generate some useful information that is able to drive the simple calculations this tutorial program will perform. @@ -298,7 +299,7 @@ Sobolev space $H^s(K)$ on a cell $K$, it has to satisfy the condition @f[ \int_K |\nabla^s u({\bf x})|^2 \; d{\bf x} < \infty. @f] -Assuming that the cell $K$ is not degenerate, i.e. 
that the mapping from the +Assuming that the cell $K$ is not degenerate, i.e., that the mapping from the unit cell to cell $K$ is sufficiently regular, above condition is of course equivalent to @f[ @@ -335,7 +336,7 @@ It becomes clear that we can then write the $H^s$ norm of $\hat u$ as |{\bf k}|^{2s} |\hat U_{\bf k}|^2. @f] -In other words, if this norm is to be finite (i.e. for $\hat u(\hat{\bf x})$ to be in $H^s(\hat K)$), we need that +In other words, if this norm is to be finite (i.e., for $\hat u(\hat{\bf x})$ to be in $H^s(\hat K)$), we need that @f[ |\hat U_{\bf k}| = {\cal O}\left(|{\bf k}|^{-\left(s+1/2+\frac{d-1}{2}+\epsilon\right)}\right). @f] @@ -403,9 +404,9 @@ cell. In other words, we can write it as a matrix-vector product @f] with the matrix @f[ - {\cal F}_{{\bf k},j} - = - \int_{\hat K} e^{i {\bf k}\cdot \hat{\bf x}} \hat \varphi_j(\hat{\bf x}) d\hat{\bf x}. + {\cal F}_{{\bf k},j} + = + \int_{\hat K} e^{i {\bf k}\cdot \hat{\bf x}} \hat \varphi_j(\hat{\bf x}) d\hat{\bf x}. @f] This matrix is easily computed for a given number of shape functions $\varphi_j$ and Fourier modes $N$. Consequently, finding the @@ -509,6 +510,12 @@ $\beta$, the formula above gives us a mean to calculate the value of the exponent $\mu$ that we can then use to determine that $\hat u(\hat{\bf x})$ is in $H^s(\hat K)$ with $s=\mu-\frac d2$. +These steps outlined above are applicable to many different scenarios, which +motivated the introduction of a generic function +SmoothnessEstimator::Fourier::coefficient_decay() in deal.II, that combines all +the tasks described in this section in one simple function call. We will use it +in the implementation of this program. +
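In code, using this function requires little more than setting up the series expansion object and making one call; the following sketch corresponds to what the postprocess() function of this program does further down:
@code
Vector<float> smoothness_indicators(triangulation.n_active_cells());

FESeries::Fourier<dim> fourier =
  SmoothnessEstimator::Fourier::default_fe_series(fe_collection);

SmoothnessEstimator::Fourier::coefficient_decay(fourier,
                                                dof_handler,
                                                solution,
                                                smoothness_indicators);
@endcode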

    Compensating for anisotropy

    @@ -528,7 +535,7 @@ essentially trying to determine the smoothness of the solution in that spatial direction in which the solution appears to be roughest? One can probably argue for either case. The issue would be of more interest if -deal.II had the ability to use anisotropic finite elements, i.e. ones that use +deal.II had the ability to use anisotropic finite elements, i.e., ones that use different polynomial degrees in different spatial directions, as they would be able to exploit the directionally variable smoothness much better. Alas, this capability does not exist at the time of writing this tutorial program. @@ -557,7 +564,7 @@ using the formula To calculate $\mu$ as shown above, we have to slightly modify all sums: instead of summing over all Fourier modes, we only sum over those for which the Fourier coefficient is the largest one among all $\hat U_{{\bf k}}$ with -the same magnitude $|{\bf k}|$, i.e. all sums above have to replaced by the +the same magnitude $|{\bf k}|$, i.e., all sums above have to replaced by the following sums: @f[ \sum_{{\bf k}, |{\bf k}|\le N} @@ -591,7 +598,7 @@ To compensate for the transformation means not attempting to fit a decay $|{\bf k}|^\mu$ with respect to the Fourier frequencies ${\bf k}$ on the unit cell, but to fit the coefficients $\hat U_{{\bf k}}$ computed on the reference cell to the Fourier frequencies on the real cell $|\bf -k|h$, where $h$ is the norm of the transformation operator (i.e. something +k|h$, where $h$ is the norm of the transformation operator (i.e., something like the diameter of the cell). In other words, we would have to minimize the sum of squares of the terms @f[ @@ -615,7 +622,7 @@ estimated smoothness exponent will be the same in either case.
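To make this concrete, here is a sketch of how such a fit could be carried out by hand on a single cell with the FESeries tools, mirroring what older versions of this program did explicitly; in the current program, SmoothnessEstimator::Fourier::coefficient_decay() performs this kind of work for us. The objects fourier, fourier_coefficients, and max_degree are assumed to be set up accordingly:
@code
// Expand the local solution into Fourier coefficients on the reference cell.
Vector<double> local_dof_values(cell->get_fe().n_dofs_per_cell());
cell->get_dof_values(solution, local_dof_values);
fourier.calculate(local_dof_values,
                  cell->active_fe_index(),
                  fourier_coefficients);

// Keep, for every frequency magnitude k^2 = i^2+j^2(+l^2), only the largest
// coefficient, as discussed above.
const auto res = FESeries::process_coefficients<dim>(
  fourier_coefficients,
  [max_degree](const TableIndices<dim> &indices) {
    unsigned int k_square = 0;
    for (unsigned int d = 0; d < dim; ++d)
      k_square += indices[d] * indices[d];
    return std::make_pair(k_square > 0 && k_square < max_degree * max_degree,
                          k_square);
  },
  VectorTools::Linfty_norm);

// Least-squares fit of ln|U_k| against ln|k|: the negative slope is mu, and
// the estimated Sobolev index follows as s = mu - dim/2.
std::vector<double> ln_k(res.first.size()), ln_U(res.second.size());
for (unsigned int f = 0; f < res.first.size(); ++f)
  {
    ln_k[f] = std::log(2. * numbers::PI * std::sqrt(1. * res.first[f]));
    ln_U[f] = std::log(res.second[f]);
  }
const std::pair<double, double> fit = FESeries::linear_regression(ln_k, ln_U);

smoothness_indicators(cell->active_cell_index()) = -fit.first - 1. * dim / 2;
@endcode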

    Creating the sparsity pattern

    -One of the problems with $hp$ methods is that the high polynomial degree of +One of the problems with $hp$-methods is that the high polynomial degree of shape functions together with the large number of constrained degrees of freedom leads to matrices with large numbers of nonzero entries in some rows. At the same time, because there are areas where we use low polynomial @@ -649,15 +656,16 @@ DoFs), it is worthwhile to take these constraints into consideration since the resulting matrix will be much sparser (and, therefore, matrix-vector products or factorizations will be substantially faster too). +
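In deal.II, passing the constraints object when building the sparsity pattern achieves exactly this. A minimal sketch, assuming the DoFHandler and AffineConstraints objects have been set up as described above; the last argument states that entries for constrained degrees of freedom need not be stored:
@code
DynamicSparsityPattern dsp(dof_handler.n_dofs(), dof_handler.n_dofs());
DoFTools::make_sparsity_pattern(dof_handler,
                                dsp,
                                constraints,
                                /*keep_constrained_dofs=*/false);
sparsity_pattern.copy_from(dsp);
system_matrix.reinit(sparsity_pattern);
@endcode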

    Eliminating constrained degrees of freedom

    -A second problem particular to $hp$ methods arises because we have so +A second problem particular to $hp$-methods arises because we have so many constrained degrees of freedom: typically up to about one third of all degrees of freedom (in 3d) are constrained because they either belong to cells with hanging nodes or because they are on cells adjacent to cells with a higher or lower polynomial degree. This is, in fact, not much more than the fraction of constrained degrees of -freedom in non-$hp$ mode, but the difference is that each constrained +freedom in non-$hp$-mode, but the difference is that each constrained hanging node is constrained not only against the two adjacent degrees of freedom, but is constrained against many more degrees of freedom. @@ -682,6 +690,7 @@ transfer from local to global data on matrix and vector simultaneously. This is exactly what we've shown in step-6. +
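As a reminder, the relevant call inside the assembly loop looks like the following sketch, with cell_matrix, cell_rhs, and local_dof_indices computed on the current cell as usual:
@code
constraints.distribute_local_to_global(cell_matrix,
                                       cell_rhs,
                                       local_dof_indices,
                                       system_matrix,
                                       system_rhs);
@endcode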

    The test case

    The test case we will solve with this program is a re-take of the one we @@ -690,10 +699,10 @@ already look at in step-14: we solve the Laplace equation -\Delta u = f @f] in 2d, with $f=(x+1)(y+1)$, and with zero Dirichlet boundary values for -$u$. We do so on the domain $[-1,1]^2\backslash[-\frac 12,\frac 12]^2$, i.e. a -square with a square hole in the middle. +$u$. We do so on the domain $[-1,1]^2\backslash[-\frac 12,\frac 12]^2$, +i.e., a square with a square hole in the middle. -The difference to step-14 is of course that we use $hp$ finite +The difference to step-14 is of course that we use $hp$-finite elements for the solution. The test case is of interest because it has re-entrant corners in the corners of the hole, at which the solution has singularities. We therefore expect that the solution will be smooth in the diff --git a/examples/step-27/doc/results.dox b/examples/step-27/doc/results.dox index c4629af907..1347c087b5 100644 --- a/examples/step-27/doc/results.dox +++ b/examples/step-27/doc/results.dox @@ -3,7 +3,7 @@ In this section, we discuss a few results produced from running the current tutorial program. More results, in particular the extension to 3d calculations and determining how much compute time the individual -components of the program take, are given in the @ref hp_paper . +components of the program take, are given in the @ref hp_paper "hp-paper". When run, this is what the program produces: @@ -41,12 +41,12 @@ The first thing we learn from this is that the number of constrained degrees of freedom is on the order of 20-25% of the total number of degrees of freedom, at least on the later grids when we have elements of relatively high order (in 3d, the fraction of constrained degrees of freedom can be up -to 30%). This is, in fact, on the same order of magnitude as for non-$hp$ -discretizations. For example, in the last step of the step-6 +to 30%). This is, in fact, on the same order of magnitude as for +non-$hp$-discretizations. For example, in the last step of the step-6 program, we have 18353 degrees of freedom, 4432 of which are constrained. The difference is that in the latter program, each constrained hanging node is constrained against only the two adjacent degrees of -freedom, whereas in the $hp$ case, constrained nodes are constrained against +freedom, whereas in the $hp$-case, constrained nodes are constrained against many more degrees of freedom. Note also that the current program also includes nodes subject to Dirichlet boundary conditions in the list of constraints. In cycle 0, all the constraints are actually because of @@ -141,9 +141,9 @@ grey corresponds to degree two and pink corresponds to degree seven: While this is certainly not a perfect arrangement, it does make some sense: we use low order elements close to boundaries and corners where regularity is low. On the other hand, higher order elements are used where (i) the error was -at one point fairly large, i.e. mainly in the general area around the corner +at one point fairly large, i.e., mainly in the general area around the corner singularities and in the top right corner where the solution is large, and -(ii) where the solution is smooth, i.e. far away from the boundary. +(ii) where the solution is smooth, i.e., far away from the boundary. This arrangement of polynomial degrees of course follows from our smoothness estimator. 
Here is the estimated smoothness of the solution, with darker colors @@ -209,7 +209,88 @@ include estimating the smoothness not on single cells, but cell assemblies or patches surrounding each cell. It may also be possible to find simple correction factors for each cell depending on the number of constrained degrees of freedom it has. In either case, there are ample opportunities for -further research on finding good $hp$ refinement criteria. On the other hand, -the main point of the current program was to demonstrate using the $hp$ -technology in deal.II, which is unaffected by our use of a possible +further research on finding good $hp$-refinement criteria. On the other hand, +the main point of the current program was to demonstrate using the +$hp$-technology in deal.II, which is unaffected by our use of a possible sub-optimal refinement criterion. + + + + +

    Possibilities for extensions

    + +

    Different hp-decision strategies

    + +This tutorial demonstrates only one particular strategy to decide between $h$- and +$p$-adaptation. In fact, there are many more ways to automatically decide on the +adaptation type, of which a few are already implemented in deal.II: + + +Try implementing one of these strategies into this tutorial and observe the +subtle changes to the results. You will notice that all strategies are +capable of identifying the singularities near the reentrant corners and +will perform $h$-refinement in these regions, while preferring $p$-refinement +in the bulk domain. A detailed comparison of these strategies is presented +in @cite fehling2020 . + + +
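As a concrete example of such a swap, the Fourier-based estimator used in this tutorial could be replaced by its Legendre-based counterpart with a local change in postprocess(). This is only a sketch and assumes that the deal.II version in use provides the SmoothnessEstimator::Legendre namespace:
@code
FESeries::Legendre<dim> legendre =
  SmoothnessEstimator::Legendre::default_fe_series(fe_collection);

SmoothnessEstimator::Legendre::coefficient_decay(legendre,
                                                 dof_handler,
                                                 solution,
                                                 smoothness_indicators);
@endcode
The hp::Refinement calls acting on the resulting smoothness_indicators would remain unchanged.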

    Parallel hp-adaptive finite elements

    + +All functionality presented in this tutorial already works for both +sequential and parallel applications. It is possible without too much +effort to change to either the parallel::shared::Triangulation or the +parallel::distributed::Triangulation classes. If you feel eager to try +it, we recommend reading step-18 for the former and step-40 for the +latter case first for further background information on the topic, and +then come back to this tutorial to try out your newly acquired skills. diff --git a/examples/step-27/step-27.cc b/examples/step-27/step-27.cc index fb5664b89c..dce22f0a9d 100644 --- a/examples/step-27/step-27.cc +++ b/examples/step-27/step-27.cc @@ -15,7 +15,8 @@ * * Authors: Wolfgang Bangerth, Texas A&M University, 2006, 2007; - * Denis Davydov, University of Erlangen-Nuremberg, 2016. + * Denis Davydov, University of Erlangen-Nuremberg, 2016; + * Marc Fehling, Colorado State University, 2020. */ @@ -49,17 +50,19 @@ // These are the new files we need. The first and second provide the // FECollection and the hp version of the FEValues class as described in -// the introduction of this program. The last one provides Fourier -// transformation class on the unit cell. +// the introduction of this program. The next one provides the functionality +// for automatic $hp$-adaptation, for which we will use the estimation +// algorithms based on decaying series expansion coefficients that are part of +// the last two files. #include #include +#include #include +#include -// The last set of include files are standard C++ headers. We need support for -// complex numbers when we compute the Fourier transform. +// The last set of include files are standard C++ headers. #include #include -#include // Finally, this is as in previous programs: @@ -75,8 +78,7 @@ namespace Step27 // main difference is that we have merged the refine_grid and output_results // functions into one since we will also want to output some of the // quantities used in deciding how to refine the mesh (in particular the - // estimated smoothness of the solution). There is also a function that - // computes this estimated smoothness, as discussed in the introduction. + // estimated smoothness of the solution). // // As far as member variables are concerned, we use the same structure as // already used in step-6, but we need collections instead of @@ -98,9 +100,7 @@ namespace Step27 void assemble_system(); void solve(); void create_coarse_grid(); - void estimate_smoothness(Vector &smoothness_indicators); void postprocess(const unsigned int cycle); - std::pair predicate(const TableIndices &indices); Triangulation triangulation; @@ -109,11 +109,6 @@ namespace Step27 hp::QCollection quadrature_collection; hp::QCollection face_quadrature_collection; - hp::QCollection fourier_q_collection; - std::unique_ptr> fourier; - std::vector ln_k; - Table> fourier_coefficients; - AffineConstraints constraints; SparsityPattern sparsity_pattern; @@ -166,23 +161,6 @@ namespace Step27 // face quadrature objects. We start with quadratic elements, and each // quadrature formula is chosen so that it is appropriate for the matching // finite element in the hp::FECollection object. - // - // Finally, we initialize FESeries::Fourier object which will be used to - // calculate coefficient in Fourier series as described in the introduction. - // In addition to the hp::FECollection, we need to provide quadrature rules - // hp::QCollection for integration on the reference cell. 
- // - // In order to resize fourier_coefficients Table, we use the following - // auxiliary function - template - void resize(Table &coeff, const unsigned int N) - { - TableIndices size; - for (unsigned int d = 0; d < dim; d++) - size[d] = N; - coeff.reinit(size); - } - template LaplaceProblem::LaplaceProblem() : dof_handler(triangulation) @@ -194,51 +172,6 @@ namespace Step27 quadrature_collection.push_back(QGauss(degree + 1)); face_quadrature_collection.push_back(QGauss(degree + 1)); } - - // As described in the introduction, we define the Fourier vectors ${\bf - // k}$ for which we want to compute Fourier coefficients of the solution - // on each cell as follows. In 2d, we will need coefficients corresponding - // to vectors ${\bf k}=(2 \pi i, 2\pi j)^T$ for which $\sqrt{i^2+j^2}\le N$, - // with $i,j$ integers and $N$ being the maximal polynomial degree we use - // for the finite elements in this program. The FESeries::Fourier class' - // constructor first parameter $N$ defines the number of coefficients in 1D - // with the total number of coefficients being $N^{dim}$. Although we will - // not use coefficients corresponding to - // $\sqrt{i^2+j^2}> N$ and $i+j==0$, the overhead of their calculation is - // minimal. The transformation matrices for each FiniteElement will be - // calculated only once the first time they are required in the course of - // hp-adaptive refinement. Because we work on the unit cell, we can do all - // this work without a mapping from reference to real cell and consequently - // can precalculate these matrices. The calculation of expansion - // coefficients for a particular set of local degrees of freedom on a given - // cell then follows as a simple matrix-vector product. - // The 3d case is handled analogously. - const unsigned int N = max_degree; - - // We will need to assemble the matrices that do the Fourier transforms - // for each of the finite elements we deal with, i.e. the matrices ${\cal - // F}_{{\bf k},j}$ defined in the introduction. We have to do that for - // each of the finite elements in use. To that end we need a quadrature - // rule. In this example we use the same quadrature formula for each - // finite element, namely that is obtained by iterating a - // 2-point Gauss formula as many times as the maximal exponent we use for - // the term $e^{i{\bf k}\cdot{\bf x}}$: - QGauss<1> base_quadrature(2); - QIterated quadrature(base_quadrature, N); - for (unsigned int i = 0; i < fe_collection.size(); i++) - fourier_q_collection.push_back(quadrature); - - // Now we are ready to set-up the FESeries::Fourier object - const std::vector n_coefficients_per_direction( - fe_collection.size(), N); - fourier = - std::make_unique>(n_coefficients_per_direction, - fe_collection, - fourier_q_collection); - - // We need to resize the matrix of fourier coefficients according to the - // number of modes N. - resize(fourier_coefficients, N); } @@ -257,7 +190,7 @@ namespace Step27 // This function is again a verbatim copy of what we already did in // step-6. Despite function calls with exactly the same names and arguments, // the algorithms used internally are different in some aspect since the - // dof_handler variable here is an hp-object. + // dof_handler variable here is in $hp$-mode. template void LaplaceProblem::setup_system() { @@ -299,17 +232,17 @@ namespace Step27 // polynomial degrees on different cells, the matrices and vectors holding // local contributions do not have the same size on all cells. 
At the // beginning of the loop over all cells, we therefore each time have to - // resize them to the correct size (given by - // dofs_per_cell). Because these classes are implement in such - // a way that reducing the size of a matrix or vector does not release the - // currently allocated memory (unless the new size is zero), the process of - // resizing at the beginning of the loop will only require re-allocation of - // memory during the first few iterations. Once we have found in a cell with - // the maximal finite element degree, no more re-allocations will happen - // because all subsequent reinit calls will only set the size - // to something that fits the currently allocated memory. This is important - // since allocating memory is expensive, and doing so every time we visit a - // new cell would take significant compute time. + // resize them to the correct size (given by dofs_per_cell). + // Because these classes are implemented in such a way that reducing the size + // of a matrix or vector does not release the currently allocated memory + // (unless the new size is zero), the process of resizing at the beginning of + // the loop will only require re-allocation of memory during the first few + // iterations. Once we have found in a cell with the maximal finite element + // degree, no more re-allocations will happen because all subsequent + // reinit calls will only set the size to something that fits the + // currently allocated memory. This is important since allocating memory is + // expensive, and doing so every time we visit a new cell would take + // significant compute time. template void LaplaceProblem::assemble_system() { @@ -405,8 +338,7 @@ namespace Step27 // Let us start with computing estimated error and smoothness indicators, // which each are one number for each active cell of our // triangulation. For the error indicator, we use the KellyErrorEstimator - // class as always. Estimating the smoothness is done in the respective - // function of this class; that function is discussed further down below: + // class as always. Vector estimated_error_per_cell(triangulation.n_active_cells()); KellyErrorEstimator::estimate( dof_handler, @@ -415,9 +347,21 @@ namespace Step27 solution, estimated_error_per_cell); - + // Estimating the smoothness is performed with the method of decaying + // expansion coefficients as outlined in the introduction. We will first + // need to create an object capable of transforming the finite element + // solution on every single cell into a sequence of Fourier series + // coefficients. The SmoothnessEstimator namespace offers a factory function + // for such a FESeries::Fourier object that is optimized for the process of + // estimating smoothness. The actual determination of the decay of Fourier + // coefficients on every individual cell then happens in the last function. Vector smoothness_indicators(triangulation.n_active_cells()); - estimate_smoothness(smoothness_indicators); + FESeries::Fourier fourier = + SmoothnessEstimator::Fourier::default_fe_series(fe_collection); + SmoothnessEstimator::Fourier::coefficient_decay(fourier, + dof_handler, + solution, + smoothness_indicators); // Next we want to generate graphical output. In addition to the two // estimated quantities derived above, we would also like to output the @@ -432,7 +376,7 @@ namespace Step27 // that element. The result we put into a vector with one element per // cell. 
The DataOut class requires this to be a vector of // float or double, even though our values are - // all integers, so that it what we use: + // all integers, so that is what we use: { Vector fe_degrees(triangulation.n_active_cells()); for (const auto &cell : dof_handler.active_cell_iterators()) @@ -474,52 +418,36 @@ namespace Step27 // $h$ decreased. The strategy we choose here is that we look at the // smoothness indicators of those cells that are flagged for refinement, // and increase $p$ for those with a smoothness larger than a certain - // threshold. For this, we first have to determine the maximal and - // minimal values of the smoothness indicators of all flagged cells, - // which we do using a loop over all cells and comparing current minimal - // and maximal values. (We start with the minimal and maximal values of - // all cells, a range within which the minimal and maximal values - // on cells flagged for refinement must surely lie.) Absent any better - // strategies, we will then set the threshold above which will increase - // $p$ instead of reducing $h$ as the mean value between minimal and - // maximal smoothness indicators on cells flagged for refinement: - float max_smoothness = *std::min_element(smoothness_indicators.begin(), - smoothness_indicators.end()), - min_smoothness = *std::max_element(smoothness_indicators.begin(), - smoothness_indicators.end()); - for (const auto &cell : dof_handler.active_cell_iterators()) - if (cell->refine_flag_set()) - { - max_smoothness = - std::max(max_smoothness, - smoothness_indicators(cell->active_cell_index())); - min_smoothness = - std::min(min_smoothness, - smoothness_indicators(cell->active_cell_index())); - } - const float threshold_smoothness = (max_smoothness + min_smoothness) / 2; - - // With this, we can go back, loop over all cells again, and for those - // cells for which (i) the refinement flag is set, (ii) the smoothness - // indicator is larger than the threshold, and (iii) we still have a - // finite element with a polynomial degree higher than the current one - // in the finite element collection, we then increase the polynomial - // degree and in return remove the flag indicating that the cell should - // undergo bisection. For all other cells, the refinement flags remain - // untouched: - for (const auto &cell : dof_handler.active_cell_iterators()) - if (cell->refine_flag_set() && - (smoothness_indicators(cell->active_cell_index()) > - threshold_smoothness) && - (cell->active_fe_index() + 1 < fe_collection.size())) - { - cell->clear_refine_flag(); - cell->set_active_fe_index(cell->active_fe_index() + 1); - } + // relative threshold. In other words, for every cell for which (i) the + // refinement flag is set, (ii) the smoothness indicator is larger than + // the threshold, and (iii) we still have a finite element with a + // polynomial degree higher than the current one in the finite element + // collection, we will assign a future FE index that corresponds to a + // polynomial with degree one higher than it currently is. The following + // function is capable of doing exactly this. Absent any better + // strategies, we will set the threshold as the mean value between minimal + // and maximal smoothness indicators on cells flagged for refinement. This + // is achieved by setting the corresponding fraction parameter to a value + // of 0.5. 
In the same way, we deal with cells that are going to be + // coarsened and decrease their polynomial degree when their smoothness + // indicator is below the corresponding threshold determined on cells to + // be coarsened. + hp::Refinement::p_adaptivity_from_relative_threshold( + dof_handler, smoothness_indicators, 0.5, 0.5); + + // The above function only determines whether the polynomial degree will + // change via future FE indices, but does not manipulate the + // $h$-refinement flags. So for cells that are flagged for both refinement + // categories, we prefer $p$- over $h$-refinement. The following function + // call ensures that only one of $p$- or $h$-refinement is imposed, and + // not both at once. + hp::Refinement::choose_p_over_h(dof_handler); // At the end of this procedure, we then refine the mesh. During this // process, children of cells undergoing bisection inherit their mother - // cell's finite element index: + // cell's finite element index. Further, future finite element indices + // will turn into active ones, so that the new finite elements will be + // assigned to cells after the next call of DoFHandler::distribute_dofs(). triangulation.execute_coarsening_and_refinement(); } } @@ -527,50 +455,34 @@ namespace Step27 // @sect4{LaplaceProblem::create_coarse_grid} - // The following function is used when creating the initial grid. It is a - // specialization for the 2d case, i.e. a corresponding function needs to be - // implemented if the program is run in anything other then 2d. The function - // is actually stolen from step-14 and generates the same mesh used already - // there, i.e. the square domain with the square hole in the middle. The - // meaning of the different parts of this function are explained in the - // documentation of step-14: - template <> - void LaplaceProblem<2>::create_coarse_grid() + // The following function is used when creating the initial grid. The grid we + // would like to create is actually similar to the one from step-14, i.e., the + // square domain with the square hole in the middle. It can be generated by + // excatly the same function. However, since its implementation is only a + // specialization of the 2d case, we will present a different way of creating + // this domain which is dimension independent. + // + // We first create a hypercube triangulation with enough cells so that it + // already holds our desired domain $[-1,1]^d$, subdivided into $4^d$ cells. + // We then remove those cells in the center of the domain by testing the + // coordinate values of the vertices on each cell. In the end, we refine the + // so created grid globally as usual. 
+ template + void LaplaceProblem::create_coarse_grid() { - const unsigned int dim = 2; - - const std::vector> vertices = { - {-1.0, -1.0}, {-0.5, -1.0}, {+0.0, -1.0}, {+0.5, -1.0}, {+1.0, -1.0}, // - {-1.0, -0.5}, {-0.5, -0.5}, {+0.0, -0.5}, {+0.5, -0.5}, {+1.0, -0.5}, // - {-1.0, +0.0}, {-0.5, +0.0}, {+0.5, +0.0}, {+1.0, +0.0}, // - {-1.0, +0.5}, {-0.5, +0.5}, {+0.0, +0.5}, {+0.5, +0.5}, {+1.0, +0.5}, // - {-1.0, +1.0}, {-0.5, +1.0}, {+0.0, +1.0}, {+0.5, +1.0}, {+1.0, +1.0}}; - - const std::vector::vertices_per_cell>> - cell_vertices = {{{0, 1, 5, 6}}, - {{1, 2, 6, 7}}, - {{2, 3, 7, 8}}, - {{3, 4, 8, 9}}, - {{5, 6, 10, 11}}, - {{8, 9, 12, 13}}, - {{10, 11, 14, 15}}, - {{12, 13, 17, 18}}, - {{14, 15, 19, 20}}, - {{15, 16, 20, 21}}, - {{16, 17, 21, 22}}, - {{17, 18, 22, 23}}}; - - const unsigned int n_cells = cell_vertices.size(); - - std::vector> cells(n_cells, CellData()); - for (unsigned int i = 0; i < n_cells; ++i) - { - for (unsigned int j = 0; j < cell_vertices[i].size(); ++j) - cells[i].vertices[j] = cell_vertices[i][j]; - cells[i].material_id = 0; - } + Triangulation cube; + GridGenerator::subdivided_hyper_cube(cube, 4, -1., 1.); + + std::set::active_cell_iterator> cells_to_remove; + for (const auto &cell : cube.active_cell_iterators()) + for (unsigned int v = 0; v < GeometryInfo::vertices_per_cell; ++v) + if (cell->vertex(v).square() < .1) + cells_to_remove.insert(cell); + + GridGenerator::create_triangulation_with_removed_cells(cube, + cells_to_remove, + triangulation); - triangulation.create_triangulation(vertices, cells, SubCellData()); triangulation.refine_global(3); } @@ -579,8 +491,7 @@ namespace Step27 // @sect4{LaplaceProblem::run} // This function implements the logic of the program, as did the respective - // function in most of the previous programs already, see for example - // step-6. + // function in most of the previous programs already, see for example step-6. // // Basically, it contains the adaptive loop: in the first iteration create a // coarse grid, and then set up the linear system, assemble it, solve, and @@ -611,118 +522,6 @@ namespace Step27 postprocess(cycle); } } - - - // @sect4{LaplaceProblem::estimate_smoothness} - - // As described in the introduction, we will need to take the maximum - // absolute value of fourier coefficients which correspond to $k$-vector - // $|{\bf k}|= const$. To filter the coefficients Table we - // will use the FESeries::process_coefficients() which requires a predicate - // to be specified. The predicate should operate on TableIndices and return - // a pair of bool and unsigned int. The latter - // is the value of the map from TableIndicies to unsigned int. It is - // used to define subsets of coefficients from which we search for the one - // with highest absolute value, i.e. $l^\infty$-norm. The bool - // parameter defines which indices should be used in processing. In the - // current case we are interested in coefficients which correspond to - // $0 < i*i+j*j < N*N$ and $0 < i*i+j*j+k*k < N*N$ in 2D and 3D, respectively. - template - std::pair - LaplaceProblem::predicate(const TableIndices &ind) - { - unsigned int v = 0; - for (unsigned int i = 0; i < dim; i++) - v += ind[i] * ind[i]; - if (v > 0 && v < max_degree * max_degree) - return std::make_pair(true, v); - else - return std::make_pair(false, v); - } - - // This last function of significance implements the algorithm to estimate - // the smoothness exponent using the algorithms explained in detail in the - // introduction. 
We will therefore only comment on those points that are of - // implementational importance. - template - void - LaplaceProblem::estimate_smoothness(Vector &smoothness_indicators) - { - // Since most of the hard work is done for us in FESeries::Fourier and - // we set up the object of this class in the constructor, what we are left - // to do here is apply this class to calculate coefficients and then - // perform linear regression to fit their decay slope. - - - // First thing to do is to loop over all cells and do our work there, i.e. - // to locally do the Fourier transform and estimate the decay coefficient. - // We will use the following array as a scratch array in the loop to store - // local DoF values: - Vector local_dof_values; - - // Then here is the loop: - for (const auto &cell : dof_handler.active_cell_iterators()) - { - // Inside the loop, we first need to get the values of the local - // degrees of freedom (which we put into the - // local_dof_values array after setting it to the right - // size) and then need to compute the Fourier transform by multiplying - // this vector with the matrix ${\cal F}$ corresponding to this finite - // element. This is done by calling FESeries::Fourier::calculate(), - // that has to be provided with the local_dof_values, - // cell->active_fe_index() and a Table to store - // coefficients. - local_dof_values.reinit(cell->get_fe().n_dofs_per_cell()); - cell->get_dof_values(solution, local_dof_values); - - fourier->calculate(local_dof_values, - cell->active_fe_index(), - fourier_coefficients); - - // The next thing, as explained in the introduction, is that we wanted - // to only fit our exponential decay of Fourier coefficients to the - // largest coefficients for each possible value of $|{\bf k}|$. To - // this end, we use FESeries::process_coefficients() to rework - // coefficients into the desired format. We'll only take those Fourier - // coefficients with the largest magnitude for a given value of $|{\bf - // k}|$ and thereby need to use VectorTools::Linfty_norm: - std::pair, std::vector> res = - FESeries::process_coefficients( - fourier_coefficients, - [this](const TableIndices &indices) { - return this->predicate(indices); - }, - VectorTools::Linfty_norm); - - Assert(res.first.size() == res.second.size(), ExcInternalError()); - - // The first vector in the std::pair will store values of - // the predicate, that is $i*i+j*j= const$ or $i*i+j*j+k*k = const$ in - // 2D or 3D respectively. This vector will be the same for all the cells - // so we can calculate logarithms of the corresponding Fourier vectors - // $|{\bf k}|$ only once in the whole hp-refinement cycle: - if (ln_k.size() == 0) - { - ln_k.resize(res.first.size(), 0); - for (unsigned int f = 0; f < ln_k.size(); f++) - ln_k[f] = - std::log(2.0 * numbers::PI * std::sqrt(1. * res.first[f])); - } - - // We have to calculate the logarithms of absolute values of - // coefficients and use it in a linear regression fit to obtain $\mu$. - for (double &residual_element : res.second) - residual_element = std::log(residual_element); - - std::pair fit = - FESeries::linear_regression(ln_k, res.second); - - // The final step is to compute the Sobolev index $s=\mu-\frac d2$ and - // store it in the vector of estimated values for each cell: - smoothness_indicators(cell->active_cell_index()) = - -fit.first - 1. 
* dim / 2; - } - } } // namespace Step27 diff --git a/include/deal.II/grid/grid_generator.h b/include/deal.II/grid/grid_generator.h index 44ee503fd7..cf82d01569 100644 --- a/include/deal.II/grid/grid_generator.h +++ b/include/deal.II/grid/grid_generator.h @@ -1712,7 +1712,8 @@ namespace GridGenerator * existing triangulation. A prototypical case is a 2d domain with * rectangular holes. This can be achieved by first meshing the entire * domain and then using this function to get rid of the cells that are - * located at the holes. Likewise, you could create the mesh that + * located at the holes. A demonstration of this particular use case is part + * of step-27. Likewise, you could create the mesh that * GridGenerator::hyper_L() produces by starting with a * GridGenerator::hyper_cube(), refining it once, and then calling the * current function with a single cell in the second argument.
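For instance, the hyper_L()-like construction mentioned in the last sentence could be sketched as follows (the choice of which corner cell to remove is illustrative; the same pattern is used by the new create_coarse_grid() of step-27 above for the square hole):
@code
// Build an L-shaped domain by removing one corner cell of a once-refined
// hyper cube, as described in the documentation above.
Triangulation<2> cube;
GridGenerator::hyper_cube(cube, -1., 1.);
cube.refine_global(1);

std::set<Triangulation<2>::active_cell_iterator> cells_to_remove;
for (const auto &cell : cube.active_cell_iterators())
  if (cell->center()[0] > 0 && cell->center()[1] > 0)
    cells_to_remove.insert(cell);

Triangulation<2> L_domain;
GridGenerator::create_triangulation_with_removed_cells(cube,
                                                       cells_to_remove,
                                                       L_domain);
@endcode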