From: Wolfgang Bangerth
Date: Tue, 5 Jan 2021 00:36:48 +0000 (-0700)
Subject: Update text and code of step-74.
X-Git-Tag: v9.3.0-rc1~684^2
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=refs%2Fpull%2F11467%2Fhead;p=dealii.git

Update text and code of step-74.
---

diff --git a/examples/step-74/doc/intro.dox b/examples/step-74/doc/intro.dox
index 20b2e24b38..5092b17524 100644
--- a/examples/step-74/doc/intro.dox
+++ b/examples/step-74/doc/intro.dox
@@ -16,7 +16,8 @@ EAR-0949446 and EAR-1550901 and The University of California -- Davis.

 <h3>Overview</h3>

 In this tutorial, we display the usage of the FEInterfaceValues class,
 which is designed for assembling face terms arising from discontinuous Galerkin (DG) methods.
-The FEInterfaceValues class provides an easy way to obtain the jump and the average of the solution across cell faces.
+The FEInterfaceValues class provides an easy way to obtain the jump
+and the average of shape functions and of the solution across cell faces.
 This tutorial includes the following topics.
   1. The SIPG method for Poisson's equation, which has already been used in step-39 and step-59.
@@ -41,12 +42,12 @@ on cell faces.
 We denote the mesh by ${\mathbb T}_h$, and $K\in{\mathbb T}_h$ is a mesh cell.
 The sets of interior and boundary faces are denoted by ${\mathbb F}^i_h$ and ${\mathbb F}^b_h$
 respectively. Let $K^0$ and $K^1$ be the two cells sharing a face $f\in {\mathbb F}^i_h$,
-and $\mathbf n$ be the outer normal vector of $K^0$. Then the jump and average
-operators are given by
+and $\mathbf n$ be the outer normal vector of $K^0$. Then the jump
+operator is given by the "here minus there" formula,
 @f[
 \jump{v} = v^0 - v^1
 @f]
-and
+and the averaging operator as
 @f[
 \average{v} = \frac{v^0 + v^1}{2}
 @f]
@@ -72,58 +73,63 @@ The discretization using the SIPG is given by the following weak formula
 \right\}.
 @f}
+
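In the program, the interior-face integrals of this bilinear form are assembled with the
FEInterfaceValues class inside the `face_worker` lambda that appears further down in this
patch. The following is only a sketch of what that contribution looks like, written against
the FEInterfaceValues accessors of this deal.II version (`jump()`, `average_gradient()`,
`get_normal_vectors()`, `get_JxW_values()`); the names `nu`, `penalty`, and `cell_matrix`
stand for the diffusion coefficient $\nu$, the penalty factor $\sigma$ discussed below, and
the local interface matrix, respectively:
@code
// Sketch of the interior-face part of the SIPG form, assuming fe_iv is an
// FEInterfaceValues<dim> object that has been reinit()ed on the current
// interface between two cells:
const unsigned int                 n_dofs  = fe_iv.n_current_interface_dofs();
const std::vector<double> &        JxW     = fe_iv.get_JxW_values();
const std::vector<Tensor<1, dim>> &normals = fe_iv.get_normal_vectors();

for (unsigned int q = 0; q < fe_iv.get_quadrature_points().size(); ++q)
  for (unsigned int i = 0; i < n_dofs; ++i)
    for (unsigned int j = 0; j < n_dofs; ++j)
      cell_matrix(i, j) +=
        (-nu * fe_iv.jump(i, q) *                              // - nu [v_h]
           (fe_iv.average_gradient(j, q) * normals[q])         //   ({grad u_h}.n)
         - nu * (fe_iv.average_gradient(i, q) * normals[q]) *  // - nu ({grad v_h}.n)
             fe_iv.jump(j, q)                                  //   [u_h]
         + nu * penalty * fe_iv.jump(i, q) * fe_iv.jump(j, q)) // + nu sigma [v_h][u_h]
        * JxW[q];                                              // dx
@endcode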

 <h3>The penalty parameter</h3>

 The penalty parameter is defined as $\sigma = \gamma/h_f$, where $h_f$ is a local length scale associated
-with the cell face; here we choose the approximation of the length of the cell in the direction normal to the face,
-and $\gamma$ is the penalization constant.
+with the cell face; here we choose an approximation of the length of the cell in the direction normal to the face:
+$\frac 1{h_f} = \frac 12 \left(\frac 1{h_K} + \frac 1{h_{K'}}\right)$,
+where $K,K'$ are the two cells adjacent to the face $f$ and we
+compute $h_K = \frac{|K|}{|f|}$.
+
+In the formula above, $\gamma$ is the penalization constant.
 To ensure discrete coercivity, the penalization constant has to be large enough @cite ainsworth2007posteriori.
-People do not really have consensus on which precise formula to choose, among what was proposed in the literature.
+People do not really have consensus on which of the formulas proposed
+in the literature should be used. (This is similar to the situation
+discussed in the "Results" section of step-47.)
 One can just pick a large constant, while other options could be multiples of $(p+1)^2$ or $p(p+1)$.
 In this code, we follow step-39 and use $\gamma = p(p+1)$.
+
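In the code further down, this definition corresponds to the small helper function that this
patch renames from `compute_penalty()` to `get_penalty_factor()`. A minimal sketch of it, with
the return expression written out from the formula above (the two arguments are the cell
extents $h_K=\frac{|K|}{|f|}$ and $h_{K'}=\frac{|K'|}{|f|}$ of the cells adjacent to the face;
on a boundary face the same extent is simply passed twice):
@code
// gamma / h_f  with  gamma = p(p+1)  and  1/h_f = (1/h_K + 1/h_K')/2 :
double get_penalty_factor(const unsigned int fe_degree,
                          const double       cell_extent_left,
                          const double       cell_extent_right)
{
  const unsigned int degree = std::max(1U, fe_degree);
  return degree * (degree + 1.) * 0.5 *
         (1. / cell_extent_left + 1. / cell_extent_right);
}
@endcode
The `std::max(1U, fe_degree)` guard keeps the penalty strictly positive even in the
lowest-order ($p=0$) case.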

 <h3>A posteriori error estimator</h3>

-In this example, we use the error estimator by Karakashian and Pascal @cite karakashian2003posteriori with a slight modification
+In this example, with a slight modification, we use the error estimator by Karakashian and Pascal @cite karakashian2003posteriori
 @f[
 \eta^2 = \sum_{K \in {\mathbb T}_h} \eta^2_{K} + \sum_{f_i \in {\mathbb F}^i_h} \eta^2_{f_i} + \sum_{f_b \in {\mathbb F}^b_h}\eta^2_{f_b}
 @f]
 where
-@f[
-\eta^2_{K} = h_K^2 \left\| f + \nu \Delta u_h \right\|_K^2
-@f]
-@f[
-\eta^2_{f_i} = \sigma \left\| \jump{u_h} \right\|_f^2 + h_f \left\| \jump{\nu \nabla u_h} \cdot \mathbf n \right\|_f^2
-@f]
-@f[
-\eta_{f_b}^2 = \sigma \left\| u_h-g_D \right\|_f^2
-@f]
+@f{align*}{
+\eta^2_{K} &= h_K^2 \left\| f + \nu \Delta u_h \right\|_K^2,
+\\
+\eta^2_{f_i} &= \sigma \left\| \jump{u_h} \right\|_f^2 + h_f \left\| \jump{\nu \nabla u_h} \cdot \mathbf n \right\|_f^2,
+\\
+\eta_{f_b}^2 &= \sigma \left\| u_h-g_D \right\|_f^2.
+@f}
 Here we use $\sigma = \gamma/h_f$ instead of $\gamma^2/h_f$ for the jump terms of $u_h$
 (the first term in $\eta^2_{f_i}$ and $\eta_{f_b}^2$).
-In each cell $K$, we compute
-@f[
-\eta_{c}^2 = h_K^2 \left\| f + \nu \Delta u_h \right\|_K^2,
-@f]
-@f[
-\eta_{f}^2 = \sum_{f\in \partial K}\lbrace \sigma \left\| \jump{u_h} \right\|_f^2 + h_f \left\| \jump{\nu \nabla u_h} \cdot \mathbf n \right\|_f^2 \rbrace,
-@f]
-@f[
-\eta_{b}^2 = \sum_{f\in \partial K \cap \partial \Omega} \sigma \left\| (u_h -g_D) \right\|_f^2.
-@f]
-Then the error estimate square per cell is
-@f[
-\eta_{local}^2 =\eta_{c}^2+0.5\eta_{f}^2+\eta_{b}^2.
-@f]
-Note that we compute $\eta_{local}^2$ instead of $\eta_{local}$ to simplify the implementation.
-The error estimate square per cell is stored in a global vector, whose $L_1$ norm is equal to $\eta^2$.
+In order to compute this estimator, in each cell $K$ we compute
+@f{align*}{
+\eta_{c}^2 &= h_K^2 \left\| f + \nu \Delta u_h \right\|_K^2,
+\\
+\eta_{f}^2 &= \sum_{f\in \partial K}\lbrace \sigma \left\| \jump{u_h} \right\|_f^2 + h_f \left\| \jump{\nu \nabla u_h} \cdot \mathbf n \right\|_f^2 \rbrace,
+\\
+\eta_{b}^2 &= \sum_{f\in \partial K \cap \partial \Omega} \sigma \left\| (u_h -g_D) \right\|_f^2.
+@f}
+Then the square of the error estimate per cell is
+@f[
+\eta_\text{local}^2 =\eta_{c}^2+0.5\eta_{f}^2+\eta_{b}^2.
+@f]
+The factor of $0.5$ results from the fact that the overall error
+estimator includes each interior face only once, and so the estimators per cell
+count it with a factor of one half for each of the two adjacent cells.
+Note that we compute $\eta_\text{local}^2$ instead of $\eta_\text{local}$ to simplify the implementation.
+The error estimate square per cell is then stored in a global vector, whose $l_1$ norm is equal to $\eta^2$.
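To see why the $l_1$ norm of this vector is indeed $\eta^2$, one can sum the cell-local
quantities over all cells (a short check; as in the implementation, take the face sum in
$\eta_{f}^2$ to run over the interior faces of $K$ only): every interior face appears in the
sums of exactly two cells, and every boundary face in exactly one, so that
@f{align*}{
\sum_{K\in{\mathbb T}_h} \eta_\text{local}^2
&= \sum_{K\in{\mathbb T}_h} \eta_{c}^2
 + \frac 12 \sum_{K\in{\mathbb T}_h}\sum_{f\in \partial K\cap{\mathbb F}^i_h}
   \lbrace \sigma \left\| \jump{u_h} \right\|_f^2
   + h_f \left\| \jump{\nu \nabla u_h} \cdot \mathbf n \right\|_f^2 \rbrace
 + \sum_{K\in{\mathbb T}_h}\sum_{f\in \partial K \cap \partial \Omega}
   \sigma \left\| u_h-g_D \right\|_f^2
\\
&= \sum_{K \in {\mathbb T}_h} \eta^2_{K}
 + \sum_{f_i \in {\mathbb F}^i_h} \eta^2_{f_i}
 + \sum_{f_b \in {\mathbb F}^b_h} \eta^2_{f_b}
 = \eta^2.
@f}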

 <h3>The test case</h3>

 In the first test problem, we run a convergence test using a smooth manufactured solution with $\nu =1$ in 2D
-@f[
-u=\sin(2\pi x)\sin(2\pi y), (x,y)\in (0,1)\times (0,1),
-@f]
-correspondingly,
-@f[
-u=0, \qquad \mbox{on } \partial \Omega.
-@f]
+@f{align*}
+u&=\sin(2\pi x)\sin(2\pi y), &\qquad\qquad &(x,y)\in\Omega=(0,1)\times (0,1),
+\\
+u&=0, &\qquad\qquad &\text{on } \partial \Omega,
+@f}
 and $f= 8\pi^2 u$. We compute errors against the manufactured solution and evaluate the convergence rate.
 In the second test, we choose Functions::LSingularityFunction on an L-shaped domain (GridGenerator::hyper_L) in 2D.
diff --git a/examples/step-74/doc/results.dox b/examples/step-74/doc/results.dox
index b14ec3acfd..e2bc069733 100644
--- a/examples/step-74/doc/results.dox
+++ b/examples/step-74/doc/results.dox
@@ -24,7 +24,9 @@ Cycle 2
 .
 .
 @endcode
-Convergence rate for the smooth case with polynomial degree 3:
+
+When using the smooth case with polynomial degree 3, the convergence
+table will look like this:
@@ -99,7 +101,7 @@ Convergence rate for the smooth case with polynomial degree 3:
 [HTML convergence table omitted: for each refinement cycle it lists the $L_2$, $H^1$ seminorm, and energy errors, along with the computed convergence rates for the $L_2$ and $H^1$ errors.]
    Theoretically, for polynomial degree $p$, the order of convergence in $L_2$ -norm and $H_1$ seminorm should be $p+1$ and $p$, respectively. Our numerical +norm and $H^1$ seminorm should be $p+1$ and $p$, respectively. Our numerical results are in good agreement with theory. In the second test case, when you run the program, the screen output should look like the following: @@ -128,22 +130,23 @@ Cycle 2 The following figure provides a log-log plot of the errors versus the number of degrees of freedom. Let $n$ be the number of degrees of -freedom, then $h$ is of order $1/\sqrt{n}$ in 2D. Combining the theoretical -results in the previous case, -we see that the error in $L_2$ norm is of order $O(n^{-\frac{p+1}{2}})$ -and in $H_1$ seminorm is $O(n^{-\frac{p}{2}})$. From the figure, we see +freedom, then on uniformly refined meshes, $h$ is of order +$1/\sqrt{n}$ in 2D. Combining the theoretical results in the previous case, +we see that if the solution is sufficiently smooth, +we can expect the error in the $L_2$ norm to be of order $O(n^{-\frac{p+1}{2}})$ +and in $H^1$ seminorm to be $O(n^{-\frac{p}{2}})$. From the figure, we see that the SIPG with adaptive mesh refinement produces desirable results -that match theoretical ones. +that match theoretical ones: + + In addition, we observe that the error estimator decreases -in almost the same rate as the errors in the energy norm and $H_1$ seminorm, -and one order lower than $L_2$ error, which illustrates +at almost the same rate as the errors in the energy norm and $H^1$ seminorm, +and one order lower than the $L_2$ error. This suggests its ability to predict regions with large errors. - - While this tutorial is focused on the implementation, the step-59 tutorial program achieves an efficient large-scale solver in terms of computing time with matrix-free solution techniques. Note that the step-59 tutorial does not work with meshes containing hanging nodes at this moment, because the multigrid interface matrices are not as easily determined, -but that is merely the lack of some interfaces in deal.II, nothing fundamental. \ No newline at end of file +but that is merely the lack of some interfaces in deal.II, nothing fundamental. diff --git a/examples/step-74/step-74.cc b/examples/step-74/step-74.cc index 5182d62866..61e2012c5e 100644 --- a/examples/step-74/step-74.cc +++ b/examples/step-74/step-74.cc @@ -57,13 +57,15 @@ namespace Step74 // @sect3{Equation data} // Here we define two test cases: convergence_rate for a smooth function // and l_singularity for the Functions::LSingularityFunction. - enum class Test_Case + enum class TestCase { convergence_rate, l_singularity }; - // A smooth solution for the convergence test. + + + // A smooth solution for the convergence test: template class SmoothSolution : public Function { @@ -71,14 +73,18 @@ namespace Step74 SmoothSolution() : Function() {} + virtual void value_list(const std::vector> &points, std::vector & values, const unsigned int component = 0) const override; + virtual Tensor<1, dim> gradient(const Point & point, const unsigned int component = 0) const override; }; + + template void SmoothSolution::value_list(const std::vector> &points, std::vector & values, @@ -90,6 +96,8 @@ namespace Step74 std::sin(2. * PI * points[i][0]) * std::sin(2. * PI * points[i][1]); } + + template Tensor<1, dim> SmoothSolution::gradient(const Point &point, @@ -104,7 +112,9 @@ namespace Step74 return return_value; } - // The corresponding right-hand side of the smooth function. 
+ + + // The corresponding right-hand side of the smooth function: template class SmoothRightHandSide : public Function { @@ -112,11 +122,14 @@ namespace Step74 SmoothRightHandSide() : Function() {} + virtual void value_list(const std::vector> &points, std::vector & values, const unsigned int /*component*/) const override; }; + + template void SmoothRightHandSide::value_list(const std::vector> &points, @@ -129,8 +142,11 @@ namespace Step74 std::sin(2. * PI * points[i][1]); } - // The right-hand side corresponds to the function - // Functions::LSingularityFunction. + + + // The right-hand side that corresponds to the function + // Functions::LSingularityFunction, where we + // assume that the diffusion coefficient $\nu = 1$: template class SingularRightHandSide : public Function { @@ -138,14 +154,17 @@ namespace Step74 SingularRightHandSide() : Function() {} + virtual void value_list(const std::vector> &points, std::vector & values, const unsigned int /*component*/) const override; private: - Functions::LSingularityFunction ref; + const Functions::LSingularityFunction ref; }; + + template void SingularRightHandSide::value_list(const std::vector> &points, @@ -153,14 +172,15 @@ namespace Step74 const unsigned int /*component*/) const { for (unsigned int i = 0; i < values.size(); ++i) - // We assume that the diffusion coefficient $\nu$ = 1. values[i] = -ref.laplacian(points[i]); } + + // @sect3{Auxiliary functions} // The following two auxiliary functions are used to compute - // jump terms for $u_h$ and $\nabla u_h$ on the - // interface, respectively. + // jump terms for $u_h$ and $\nabla u_h$ on a face, + // respectively. template void get_function_jump(const FEInterfaceValues &fe_iv, const Vector & solution, @@ -179,6 +199,8 @@ namespace Step74 jump[q] = face_values[0][q] - face_values[1][q]; } + + template void get_function_gradient_jump(const FEInterfaceValues &fe_iv, const Vector & solution, @@ -198,9 +220,9 @@ namespace Step74 } // This function computes the penalty $\sigma$. - double compute_penalty(const unsigned int fe_degree, - const double cell_extent_left, - const double cell_extent_right) + double get_penalty_factor(const unsigned int fe_degree, + const double cell_extent_left, + const double cell_extent_right) { const unsigned int degree = std::max(1U, fe_degree); return degree * (degree + 1.) * 0.5 * @@ -209,10 +231,11 @@ namespace Step74 // @sect3{The CopyData} - // Here we define Copy objects for the MeshWorker::mesh_loop(), + // In the following, we define "Copy" objects for the MeshWorker::mesh_loop(), // which is essentially the same as step-12. Note that the - // Scratch object is not defined here because we use - // MeshWorker::ScratchData instead. + // "Scratch" object is not defined here because we use + // MeshWorker::ScratchData instead. (The use of "Copy" and "Scratch" + // objects is extensively explained in the WorkStream namespace documentation. 
struct CopyDataFace { FullMatrix cell_matrix; @@ -221,6 +244,8 @@ namespace Step74 std::array cell_indices; }; + + struct CopyData { FullMatrix cell_matrix; @@ -229,8 +254,10 @@ namespace Step74 std::vector face_data; double value; unsigned int cell_index; + + template - void reinit(const Iterator &cell, unsigned int dofs_per_cell) + void reinit(const Iterator &cell, const unsigned int dofs_per_cell) { cell_matrix.reinit(dofs_per_cell, dofs_per_cell); cell_rhs.reinit(dofs_per_cell); @@ -239,16 +266,19 @@ namespace Step74 } }; + + // @sect3{The SIPGLaplace class} - // After this preparations, we proceed with the main class of this program - // called SIPGLaplace. Major differences will only come up in the + // After these preparations, we proceed with the main class of this program, + // called `SIPGLaplace`. The overall structure of the class is as in many + // of the other tutorial programs. Major differences will only come up in the // implementation of the assemble functions, since we use FEInterfaceValues to // assemble face terms. template class SIPGLaplace { public: - SIPGLaplace(const Test_Case &test_case); + SIPGLaplace(const TestCase &test_case); void run(); private: @@ -260,7 +290,7 @@ namespace Step74 void compute_errors(); void compute_error_estimate(); - double compute_energy_norm(); + double compute_energy_norm_error(); Triangulation triangulation; const unsigned int degree; @@ -280,28 +310,29 @@ namespace Step74 Vector solution; Vector system_rhs; - // Vectors to store error estimator square and energy norm square per cell. + // The remainder of the class's members are used for the following: + // - Vectors to store error estimator square and energy norm square per + // cell. + // - Print convergence rate and errors on the screen. + // - The fiffusion coefficient $\nu$ is set to 1. + // - Members that store information about the test case to be computed. Vector estimated_error_square_per_cell; Vector energy_norm_square_per_cell; - // Print convergence rate and errors on the screen. ConvergenceTable convergence_table; - // Diffusion coefficient $\nu$ is set to 1. const double diffusion_coefficient = 1.; - const Test_Case test_case; - - // Pointers that point to the correct classes of solution and right-hand - // side according to test_case. - std::unique_ptr> exact_solution; - std::unique_ptr> rhs_function; + const TestCase test_case; + std::unique_ptr> exact_solution; + std::unique_ptr> rhs_function; }; - // The constructor here reads the test case as an input and then determines - // the correct solution and right-hand side classes. + // The constructor here takes the test case as input and then + // determines the correct solution and right-hand side classes. The + // remaining member variables are initialized in the obvious way. 
template - SIPGLaplace::SIPGLaplace(const Test_Case &test_case) + SIPGLaplace::SIPGLaplace(const TestCase &test_case) : degree(3) , quadrature(degree + 1) , face_quadrature(degree + 1) @@ -312,21 +343,24 @@ namespace Step74 , dof_handler(triangulation) , test_case(test_case) { - if (test_case == Test_Case::convergence_rate) + if (test_case == TestCase::convergence_rate) { - exact_solution = std::make_unique>(); - rhs_function = std::make_unique>(); + exact_solution = std::make_unique>(); + rhs_function = std::make_unique>(); } - else if (test_case == Test_Case::l_singularity) + else if (test_case == TestCase::l_singularity) { - exact_solution = std::make_unique(); - rhs_function = std::make_unique>(); + exact_solution = + std::make_unique(); + rhs_function = std::make_unique>(); } else AssertThrow(false, ExcNotImplemented()); } + + template void SIPGLaplace::setup_system() { @@ -340,16 +374,20 @@ namespace Step74 system_rhs.reinit(dof_handler.n_dofs()); } + + // @sect3{The assemble_system function} - // The assemble function here is similar to that in step-12. + // The assemble function here is similar to that in step-12 and step-47. // Different from assembling by hand, we just need to focus // on assembling on each cell, each boundary face, and each // interior face. The loops over cells and faces are handled // automatically by MeshWorker::mesh_loop(). + // + // The function starts by defining a local (lambda) function that is + // used to integrate the cell terms: template void SIPGLaplace::assemble_system() { - // This function assembles the cell integrals. const auto cell_worker = [&](const auto &cell, auto &scratch_data, auto ©_data) { const FEValues &fe_v = scratch_data.reinit(cell); @@ -379,7 +417,7 @@ namespace Step74 } }; - // This function assembles face integrals on the boundary. + // Next, we need a function that assembles face integrals on the boundary: const auto boundary_worker = [&](const auto & cell, const unsigned int &face_no, auto & scratch_data, @@ -398,7 +436,7 @@ namespace Step74 exact_solution->value_list(q_points, g); const double extent1 = cell->measure() / cell->face(face_no)->measure(); - const double penalty = compute_penalty(degree, extent1, extent1); + const double penalty = get_penalty_factor(degree, extent1, extent1); for (unsigned int point = 0; point < n_q_points; ++point) { @@ -438,10 +476,10 @@ namespace Step74 } }; - // This function assembles face integrals on interior faces. - // To reinitialize FEInterfaceValues, we need to pass cells, - // face and subface indices (for adaptive refinement) - // to the reinit() function of FEInterfaceValues. + // Finally, a function that assembles face integrals on interior + // faces. To reinitialize FEInterfaceValues, we need to pass + // cells, face and subface indices (for adaptive refinement) to + // the reinit() function of FEInterfaceValues: const auto face_worker = [&](const auto & cell, const unsigned int &f, const unsigned int &sf, @@ -467,7 +505,7 @@ namespace Step74 const double extent1 = cell->measure() / cell->face(f)->measure(); const double extent2 = ncell->measure() / ncell->face(nf)->measure(); - const double penalty = compute_penalty(degree, extent1, extent2); + const double penalty = get_penalty_factor(degree, extent1, extent2); for (unsigned int point = 0; point < n_q_points; ++point) { @@ -493,11 +531,11 @@ namespace Step74 } }; - // The following lambda function will copy data to - // the global matrix and right-hand side. 
- // Though there are no hanging node constraints in DG discretization, - // we define an empty AffineConstraints oject that - // allows us to use distribute_local_to_global functionality. + // The following lambda function will then copy data into the + // global matrix and right-hand side. Though there are no hanging + // node constraints in DG discretization, we define an empty + // AffineConstraints oject that allows us to use the + // AffineConstraints::distribute_local_to_global() functionality. AffineConstraints constraints; constraints.close(); const auto copier = [&](const auto &c) { @@ -516,26 +554,28 @@ namespace Step74 } }; - // Here we define ScratchData and CopyData objects, - // and pass them together with the lambda functions - // above to MeshWorker::mesh_loop. In addition, we - // need to specify that we want to assemble interior faces once. - UpdateFlags cell_flags = update_values | update_gradients | - update_quadrature_points | update_JxW_values; - UpdateFlags face_flags = update_values | update_gradients | - update_quadrature_points | update_normal_vectors | - update_JxW_values; + // With the assembly functions defined, we can now create + // ScratchData and CopyData objects, and pass them together with + // the lambda functions above to MeshWorker::mesh_loop(). In + // addition, we need to specify that we want to assemble on + // interior faces exactly once. + const UpdateFlags cell_flags = update_values | update_gradients | + update_quadrature_points | update_JxW_values; + const UpdateFlags face_flags = update_values | update_gradients | + update_quadrature_points | + update_normal_vectors | update_JxW_values; ScratchData scratch_data( mapping, fe, quadrature, cell_flags, face_quadrature, face_flags); - CopyData cd; + CopyData copy_data; + MeshWorker::mesh_loop(dof_handler.begin_active(), dof_handler.end(), cell_worker, copier, scratch_data, - cd, + copy_data, MeshWorker::assemble_own_cells | MeshWorker::assemble_boundary_faces | MeshWorker::assemble_own_interior_faces_once, @@ -543,6 +583,10 @@ namespace Step74 face_worker); } + + + // @sect3{The solve() and output_results() function} + // The following two functions are entirely standard and without difficulty. template void SIPGLaplace::solve() { @@ -551,11 +595,14 @@ namespace Step74 A_direct.vmult(solution, system_rhs); } + + template void SIPGLaplace::output_results(const unsigned int cycle) const { - std::string filename = "sol_Q" + Utilities::int_to_string(degree, 1) + "-" + - Utilities::int_to_string(cycle, 2) + ".vtu"; + const std::string filename = "sol_Q" + Utilities::int_to_string(degree, 1) + + "-" + Utilities::int_to_string(cycle, 2) + + ".vtu"; std::ofstream output(filename); DataOut data_out; @@ -565,14 +612,17 @@ namespace Step74 data_out.write_vtu(output); } + + // @sect3{The compute_error_estimate() function} // The assembly of the error estimator here is quite similar to - // that of the global matrix and right-had side. + // that of the global matrix and right-had side and can be handled + // by the MeshWorker::mesh_loop() framework. To understand what + // each of the local (lambda) functions is doing, recall first that + // the local cell residual is defined as + // $h_K^2 \left\| f + \nu \Delta u_h \right\|_K^2$: template void SIPGLaplace::compute_error_estimate() { - estimated_error_square_per_cell.reinit(triangulation.n_active_cells()); - - // Assemble cell residual $h_K^2 \left\| f + \nu \Delta u_h \right\|_K^2$. 
const auto cell_worker = [&](const auto &cell, auto &scratch_data, auto ©_data) { const FEValues &fe_v = scratch_data.reinit(cell); @@ -601,8 +651,8 @@ namespace Step74 copy_data.value = hk * hk * residual_norm_square; }; - // Assemble boundary terms $\sum_{f\in \partial K \cap \partial \Omega} - // \sigma \left\| [ u_h-g_D ] \right\|_f^2 $. + // Next compute boundary terms $\sum_{f\in \partial K \cap \partial \Omega} + // \sigma \left\| [ u_h-g_D ] \right\|_f^2 $: const auto boundary_worker = [&](const auto & cell, const unsigned int &face_no, auto & scratch_data, @@ -621,7 +671,7 @@ namespace Step74 fe_fv.get_function_values(solution, sol_u); const double extent1 = cell->measure() / cell->face(face_no)->measure(); - const double penalty = compute_penalty(degree, extent1, extent1); + const double penalty = get_penalty_factor(degree, extent1, extent1); double difference_norm_square = 0.; for (unsigned int point = 0; point < q_points.size(); ++point) @@ -632,9 +682,9 @@ namespace Step74 copy_data.value += penalty * difference_norm_square; }; - // Assemble interior face terms $\sum_{f\in \partial K}\lbrace \sigma + // And finally interior face terms $\sum_{f\in \partial K}\lbrace \sigma // \left\| [u_h] \right\|_f^2 + h_f \left\| [\nu \nabla u_h \cdot - // \mathbf n ] \right\|_f^2 \rbrace$. + // \mathbf n ] \right\|_f^2 \rbrace$: const auto face_worker = [&](const auto & cell, const unsigned int &f, const unsigned int &sf, @@ -668,7 +718,7 @@ namespace Step74 const double extent1 = cell->measure() / cell->face(f)->measure(); const double extent2 = ncell->measure() / ncell->face(nf)->measure(); - const double penalty = compute_penalty(degree, extent1, extent2); + const double penalty = get_penalty_factor(degree, extent1, extent2); double flux_jump_square = 0; double u_jump_square = 0; @@ -684,6 +734,9 @@ namespace Step74 copy_data_face.values[1] = copy_data_face.values[0]; }; + // Having computed local contributions for each cell, we still + // need a way to copy these into the global vector that will hold + // the error estimators for all cells: const auto copier = [&](const auto ©_data) { if (copy_data.cell_index != numbers::invalid_unsigned_int) estimated_error_square_per_cell[copy_data.cell_index] += @@ -693,22 +746,28 @@ namespace Step74 estimated_error_square_per_cell[cdf.cell_indices[j]] += cdf.values[j]; }; - UpdateFlags cell_flags = + // After all of this set-up, let's do the actual work: We resize + // the vector into which the results will be written, and then + // drive the whole process using the MeshWorker::mesh_loop() + // function. 
+ estimated_error_square_per_cell.reinit(triangulation.n_active_cells()); + + const UpdateFlags cell_flags = update_hessians | update_quadrature_points | update_JxW_values; - UpdateFlags face_flags = update_values | update_gradients | - update_quadrature_points | update_JxW_values | - update_normal_vectors; + const UpdateFlags face_flags = update_values | update_gradients | + update_quadrature_points | + update_JxW_values | update_normal_vectors; ScratchData scratch_data( mapping, fe, quadrature, cell_flags, face_quadrature, face_flags); - CopyData cd; + CopyData copy_data; MeshWorker::mesh_loop(dof_handler.begin_active(), dof_handler.end(), cell_worker, copier, scratch_data, - cd, + copy_data, MeshWorker::assemble_own_cells | MeshWorker::assemble_own_interior_faces_once | MeshWorker::assemble_boundary_faces, @@ -716,10 +775,12 @@ namespace Step74 face_worker); } - // Here we compute the error in the energy norm, which - // is similar to the assembling of the error estimator. + + // @sect3{The compute_energy_norm_error() function} + // Next, we compute the error in the energy norm, which + // is similar to the assembling of the error estimator above. template - double SIPGLaplace::compute_energy_norm() + double SIPGLaplace::compute_energy_norm_error() { energy_norm_square_per_cell.reinit(triangulation.n_active_cells()); @@ -766,7 +827,7 @@ namespace Step74 fe_fv.get_function_values(solution, sol_u); const double extent1 = cell->measure() / cell->face(face_no)->measure(); - const double penalty = compute_penalty(degree, extent1, extent1); + const double penalty = get_penalty_factor(degree, extent1, extent1); double difference_norm_square = 0.; for (unsigned int point = 0; point < q_points.size(); ++point) @@ -804,7 +865,7 @@ namespace Step74 const double extent1 = cell->measure() / cell->face(f)->measure(); const double extent2 = ncell->measure() / ncell->face(nf)->measure(); - const double penalty = compute_penalty(degree, extent1, extent2); + const double penalty = get_penalty_factor(degree, extent1, extent2); double u_jump_square = 0; for (unsigned int point = 0; point < n_q_points; ++point) @@ -823,25 +884,25 @@ namespace Step74 energy_norm_square_per_cell[cdf.cell_indices[j]] += cdf.values[j]; }; - UpdateFlags cell_flags = + const UpdateFlags cell_flags = update_gradients | update_quadrature_points | update_JxW_values; UpdateFlags face_flags = update_values | update_quadrature_points | update_JxW_values; - ScratchData scratch_data(mapping, - fe, - quadrature_overintegration, - cell_flags, - face_quadrature_overintegration, - face_flags); + const ScratchData scratch_data(mapping, + fe, + quadrature_overintegration, + cell_flags, + face_quadrature_overintegration, + face_flags); - CopyData cd; + CopyData copy_data; MeshWorker::mesh_loop(dof_handler.begin_active(), dof_handler.end(), cell_worker, copier, scratch_data, - cd, + copy_data, MeshWorker::assemble_own_cells | MeshWorker::assemble_own_interior_faces_once | MeshWorker::assemble_boundary_faces, @@ -852,6 +913,9 @@ namespace Step74 return energy_error; } + + + // @sect3{The refine_grid() function} template void SIPGLaplace::refine_grid() { @@ -863,12 +927,18 @@ namespace Step74 triangulation.execute_coarsening_and_refinement(); } - // We compute three errors in $L_2$ norm, $H_1$ seminorm, and the energy norm, - // respectively. + + + // @sect3{The compute_errors() function} + // We compute three errors in the $L_2$ norm, $H_1$ seminorm, and + // the energy norm, respectively. 
These are then printed to screen, + // but also stored in a table that records how these errors decay + // with mesh refinement and which can be output in one step at the + // end of the program. template void SIPGLaplace::compute_errors() { - double L2_error, H1_error; + double L2_error, H1_error, energy_error; { Vector difference_per_cell(triangulation.n_active_cells()); @@ -883,6 +953,7 @@ namespace Step74 L2_error = VectorTools::compute_global_error(triangulation, difference_per_cell, VectorTools::L2_norm); + convergence_table.add_value("L2", L2_error); } { @@ -898,12 +969,13 @@ namespace Step74 H1_error = VectorTools::compute_global_error(triangulation, difference_per_cell, VectorTools::H1_seminorm); + convergence_table.add_value("H1", H1_error); } - convergence_table.add_value("L2", L2_error); - convergence_table.add_value("H1", H1_error); - const double energy_error = compute_energy_norm(); - convergence_table.add_value("Energy", energy_error); + { + energy_error = compute_energy_norm_error(); + convergence_table.add_value("Energy", energy_error); + } std::cout << " Error in the L2 norm : " << L2_error << std::endl << " Error in the H1 seminorm : " << H1_error << std::endl @@ -911,17 +983,21 @@ namespace Step74 << std::endl; } + + + // @sect3{The run() function} template void SIPGLaplace::run() { - unsigned int max_cycle = test_case == Test_Case::convergence_rate ? 6 : 20; + const unsigned int max_cycle = + (test_case == TestCase::convergence_rate ? 6 : 20); for (unsigned int cycle = 0; cycle < max_cycle; ++cycle) { std::cout << "Cycle " << cycle << std::endl; switch (test_case) { - case Test_Case::convergence_rate: + case TestCase::convergence_rate: { if (cycle == 0) { @@ -935,7 +1011,8 @@ namespace Step74 } break; } - case Test_Case::l_singularity: + + case TestCase::l_singularity: { if (cycle == 0) { @@ -948,11 +1025,13 @@ namespace Step74 } break; } + default: { Assert(false, ExcNotImplemented()); } } + std::cout << " Number of active cells : " << triangulation.n_active_cells() << std::endl; setup_system(); @@ -970,7 +1049,7 @@ namespace Step74 } compute_errors(); - if (test_case == Test_Case::l_singularity) + if (test_case == TestCase::l_singularity) { compute_error_estimate(); std::cout << " Estimated error : " @@ -983,35 +1062,39 @@ namespace Step74 } std::cout << std::endl; } - { - convergence_table.set_precision("L2", 3); - convergence_table.set_precision("H1", 3); - convergence_table.set_precision("Energy", 3); - convergence_table.set_scientific("L2", true); - convergence_table.set_scientific("H1", true); - convergence_table.set_scientific("Energy", true); + // Having run all of our computations, let us tell the convergence + // table how to format its data and output it to screen: + convergence_table.set_precision("L2", 3); + convergence_table.set_precision("H1", 3); + convergence_table.set_precision("Energy", 3); - if (test_case == Test_Case::l_singularity) - { - convergence_table.set_precision("Estimator", 3); - convergence_table.set_scientific("Estimator", true); - } - if (test_case == Test_Case::convergence_rate) - { - convergence_table.evaluate_convergence_rates( - "L2", ConvergenceTable::reduction_rate_log2); - convergence_table.evaluate_convergence_rates( - "H1", ConvergenceTable::reduction_rate_log2); - } - std::cout << "degree = " << degree << std::endl; - convergence_table.write_text( - std::cout, TableHandler::TextOutputFormat::org_mode_table); - } + convergence_table.set_scientific("L2", true); + convergence_table.set_scientific("H1", true); + 
convergence_table.set_scientific("Energy", true); + + if (test_case == TestCase::convergence_rate) + { + convergence_table.evaluate_convergence_rates( + "L2", ConvergenceTable::reduction_rate_log2); + convergence_table.evaluate_convergence_rates( + "H1", ConvergenceTable::reduction_rate_log2); + } + if (test_case == TestCase::l_singularity) + { + convergence_table.set_precision("Estimator", 3); + convergence_table.set_scientific("Estimator", true); + } + + std::cout << "degree = " << degree << std::endl; + convergence_table.write_text( + std::cout, TableHandler::TextOutputFormat::org_mode_table); } } // namespace Step74 + +// @sect3{The main() function} // The following main function is similar to previous examples as // well, and need not be commented on. int main() @@ -1020,7 +1103,9 @@ int main() { using namespace dealii; using namespace Step74; - Test_Case test_case = Test_Case::l_singularity; + + const TestCase test_case = TestCase::l_singularity; + SIPGLaplace<2> problem(test_case); problem.run(); }
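The program as committed selects the adaptive L-shaped test in `main()`. To reproduce the
smooth convergence study from the first part of the results section, only that selection
needs to change; a minimal sketch:
@code
// Select the smooth manufactured-solution test (uniform refinement and a
// convergence-rate table) instead of the adaptive L-shaped case:
const TestCase test_case = TestCase::convergence_rate;

SIPGLaplace<2> problem(test_case);
problem.run();
@endcode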