From 7a1cb4ea6d19cc7f24253373f5cd05bba04d9300 Mon Sep 17 00:00:00 2001 From: David Wells Date: Sun, 15 Jul 2018 12:45:34 -0400 Subject: [PATCH] step-9: Improve the documentation and typography. 1. Remove an inaccurate remark on SUPG: The gradient of test functions scales like 1/h, so scaling it by h makes the term O(1), not O(h). 2. General improvements: Reword some sentences and fix some typos. 3. Remark on the choice of diffusion coefficient. 4. Notate floating point numbers as, e.g., '0.'. 5. Clarify text on TensorFunction. AFAIK this class never returned anything but Tensors so this documentation was incorrect. 6. Use face_n instead of face or face_no to indicate the face number. 7. Move the Tensor projected_gradient closer to where it is actually used and remove a remark about allocating it. 8. Mark an unchanged Tensor (gradient) as const. 9. Prefer p.norm() to std::sqrt(p.norm_square()). 10. Add 'using namespace dealii;' to improve the doxygen result. Since we strip all preceding dealii namespace qualifiers this prevents doxygen from writing ::MultithreadInfo::set_thread_limit(); which looks odd. 11. Clarify that Assert aborts (it does not terminate). --- examples/step-9/doc/intro.dox | 34 ++--- examples/step-9/step-9.cc | 245 +++++++++++++++------------------- 2 files changed, 128 insertions(+), 151 deletions(-) diff --git a/examples/step-9/doc/intro.dox b/examples/step-9/doc/intro.dox index 14d1e46a9f..0bd0a51c54 100644 --- a/examples/step-9/doc/intro.dox +++ b/examples/step-9/doc/intro.dox @@ -4,14 +4,14 @@ In this example, our aims are the following:
 <ol>
-  <li>solve the advection equation $\beta \cdot \nabla u = f$;
-  <li>show how we can use multiple threads to get quicker to
-      the desired results if we have a multi-processor machine;
-  <li>develop a simple refinement criterion.
+  <li>solve the advection equation $\beta \cdot \nabla u = f$;
+  <li>show how we can use multiple threads to get results quicker if we have a
+      multi-processor machine;
+  <li>develop a simple refinement criterion.
 </ol>
While the second aim is difficult to describe in general terms without reference to the code, we will discuss the other two aims in the -following. The use of multiple threads will then be detailed at the +following text. The use of multiple threads will then be detailed at the relevant places within the program. We will, however, follow the general discussion of the WorkStream approach detailed in the @ref threads "Parallel computing with multiple processors accessing shared memory" @@ -25,7 +25,7 @@ solution of the advection equation @f[ \beta \cdot \nabla u = f, @f] -where $\beta$ is a vector field that describes advection direction and +where $\beta$ is a vector field that describes the advection direction and speed (which may be dependent on the space variables), $f$ is a source function, and $u$ is the solution. The physical process that this equation describes is that of a given flow field $\beta$, with which @@ -55,7 +55,7 @@ pose any boundary condition on the outflow part of the boundary. As stated, the transport equation cannot be solved in a stable way using the standard finite element method, however. The problem is that -solutions to this equation possess only insufficient regularity +solutions to this equation possess insufficient regularity perpendicular to the transport direction: while they are smooth along the streamlines defined by the "wind field" $\beta$, they may be discontinuous perpendicular to this @@ -68,16 +68,17 @@ discontinuity will simply be transported along the streamline of the wind field that starts at this boundary point. These discontinuities lead to numerical instabilities that -make a stable solution by a straight-forward discretization +make a stable solution by a standard continuous finite element discretization impossible. We will thus use the streamline diffusion stabilized formulation, in which we test the equation with test functions $v + \delta \beta\cdot\nabla v$ instead of $v$, where $\delta$ is a parameter that is chosen in the range of the (local) mesh width $h$; -good results are usually obtained by setting $\delta=0.1h$. Note that -the modification in the test function vanishes as the mesh size tends -to zero. The modification in the test function can be thought as +good results are usually obtained by setting $\delta=0.1h$. +The modification in the test function can be thought as adding a small amount of diffusion along the direction $\beta$ of the -wind field, i.e., along "stream lines". +wind field, i.e., along "stream lines". The value added here is small enough +that we do not introduce excessive diffusion and also large enough that the +linear system can be solved in a reasonable amount of time. We will not discuss reasons, pros, and cons of the streamline diffusion method, but rather use it "as is", and refer the interested reader to the sufficiently available literature; every @@ -177,11 +178,10 @@ as system matrix. We will assemble this matrix in the program.
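For reference, the stabilized weak formulation behind this system matrix can be written compactly as follows (a restatement of what the full introduction derives, with $\partial\Omega_-$ denoting the inflow part of the boundary, ${\mathbf n}$ the outward normal, and $g$ the prescribed inflow values):
@f[
  (\beta \cdot \nabla u_h,\; v_h + \delta\, \beta\cdot\nabla v_h)_\Omega
  \;-\;
  (\beta\cdot {\mathbf n}\, u_h,\; v_h)_{\partial\Omega_-}
  \;=\;
  (f,\; v_h + \delta\, \beta\cdot\nabla v_h)_\Omega
  \;-\;
  (\beta\cdot {\mathbf n}\, g,\; v_h)_{\partial\Omega_-}.
@f]
The left hand side is not symmetric in $u_h$ and $v_h$, so the matrix we obtain from it is nonsymmetric as well; this is what the next section is concerned with.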

 <h3>Solving the linear system that corresponds to the advection equation</h3>

-As the -resulting matrix is no longer symmetric positive definite, we can't -employ the usual Conjugate Gradient method (implemented in the -SolverCG class) any more. Instead, we use the BiCGStab (bi-conjugate gradients -stabilized) method (implemented in SolverBicgstab) that is suitable +As the resulting matrix is no longer symmetric positive definite, we cannot +use the usual Conjugate Gradient method (implemented in the +SolverCG class) to solve the system. Instead, we use the BiCGStab (bi-conjugate +gradient stabilized) method (implemented in SolverBicgstab) that is suitable for problems of the kind we have here. diff --git a/examples/step-9/step-9.cc b/examples/step-9/step-9.cc index ac32ace6ef..d10f1583b5 100644 --- a/examples/step-9/step-9.cc +++ b/examples/step-9/step-9.cc @@ -57,7 +57,7 @@ // The next new include file declares a base class TensorFunction // not unlike the Function class, but with the difference that -// the return value is tensor-valued rather than scalar of vector-valued. +// TensorFunction::value returns a Tensor instead of a scalar. #include #include @@ -74,9 +74,9 @@ namespace Step9 // @sect3{AdvectionProblem class declaration} - // Following we declare the main class of this program. It is very much - // like the main classes of previous examples, so we again only comment on - // the differences. + // Here comes the main class of this program. It is very much like the main + // classes of previous examples, so we again only comment on the + // differences. template class AdvectionProblem { @@ -147,8 +147,8 @@ namespace Step9 void copy_local_to_global(const AssemblyCopyData ©_data); - // The following functions again are as in previous examples, as are the - // subsequent variables. + // The following functions again are the same as they were in previous + // examples, as are the subsequent variables: void solve(); void refine_grid(); void output_results(const unsigned int cycle) const; @@ -177,20 +177,10 @@ namespace Step9 // Function base class, as we have done for boundary values and // coefficients in previous examples, but there is another possibility in // the library, namely a base class that describes tensor valued - // functions. In contrast to the usual Function objects, we - // provide the compiler with knowledge on the size of the objects of the - // return type. This enables the compiler to generate efficient code, which - // is not so simple for usual vector-valued functions where memory has to be - // allocated on the heap (thus, the Function::vector_value - // function has to be given the address of an object into which the result - // is to be written, in order to avoid copying and memory allocation and - // deallocation on the heap). In addition to the known size, it is possible - // not only to return vectors, but also tensors of higher rank; however, - // this is not very often requested by applications, to be honest... - // - // The interface of the TensorFunction class is relatively - // close to that of the Function class, so there is probably no - // need to comment in detail the following declaration: + // functions. This is more convenient than overriding Function::value() with + // a method that knows about multiple function components: at the end of the + // day we need a Tensor, so we may as well just use a class that returns a + // Tensor. template class AdvectionField : public TensorFunction<1, dim> { @@ -223,14 +213,12 @@ namespace Step9 // respective macro DeclExceptionN. 
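To make the macro mechanism concrete, here is a sketch of what such a declaration looks like inside a class body (the message text is illustrative and need not match the program word for word):
@code
  // Declare an exception class that takes two integer arguments and
  // prints them as part of its error message:
  DeclException2(ExcDimensionMismatch,
                 unsigned int,
                 unsigned int,
                 << "The vector has size " << arg1 << " but should have "
                 << arg2 << " elements.");
@endcode
A declaration like this is then used in checks such as
<code>Assert(values.size() == points.size(), ExcDimensionMismatch(values.size(), points.size()));</code>.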
// // To learn how the preprocessor expands this macro into actual code, - // please refer to the documentation of the exception classes in the base - // library. Suffice it to say that by this macro call, the respective - // exception class is declared, which also has error output functions - // already implemented. + // please refer to the documentation of the exception classes. In brief, + // this macro call declares and defines a class + // ExcDimensionMismatch inheriting from ExceptionBase which + // implements all necessary error output functions. }; - - // The following two functions implement the interface described above. The // first simply implements the function as described in the introduction, // while the second uses the same trick to avoid calling a virtual function @@ -246,19 +234,18 @@ namespace Step9 Point value; value[0] = 2; for (unsigned int i = 1; i < dim; ++i) - value[i] = 1 + 0.8 * std::sin(8 * numbers::PI * p[0]); + value[i] = 1 + 0.8 * std::sin(8. * numbers::PI * p[0]); return value; } // Besides the advection field, we need two functions describing the source - // terms (right hand side) and the boundary values. First for - // the right hand side, which follows the same pattern as in previous - // examples. As described in the introduction, the source is a constant - // function in the vicinity of a source point, which we denote by the - // constant static variable center_point. We set the values of - // this center using the same template tricks as we have shown in the step-7 - // example program. The rest is simple and has been shown previously. + // terms (right hand side) and the boundary values. As + // described in the introduction, the source is a constant function in the + // vicinity of a source point, which we denote by the constant static + // variable center_point. We set the values of this center + // using the same template tricks as we have shown in the step-7 example + // program. The rest is simple and has been shown previously. template class RightHandSide : public Function { @@ -300,8 +287,8 @@ namespace Step9 Assert(component == 0, ExcIndexRange(component, 0, 1)); const double diameter = 0.1; return ((p - center_point).norm_square() < diameter * diameter ? - .1 / std::pow(diameter, dim) : - 0); + 0.1 / std::pow(diameter, dim) : + 0.0); } @@ -325,9 +312,8 @@ namespace Step9 (void)component; Assert(component == 0, ExcIndexRange(component, 0, 1)); - const double sine_term = - std::sin(16 * numbers::PI * std::sqrt(p.norm_square())); - const double weight = std::exp(-5 * p.norm_square()) / std::exp(-5.); + const double sine_term = std::sin(16. * numbers::PI * p.norm()); + const double weight = std::exp(-5. * p.norm_square()) / std::exp(-5.); return sine_term * weight; } @@ -335,7 +321,7 @@ namespace Step9 // Now, finally, here comes the class that will compute the difference // approximation of the gradient on each cell and weighs that with a power - // of the mesh size, as described in the introduction. This class is a + // of the mesh size, as described in the introduction. This class is a // simple version of the DerivativeApproximation class in the // library, that uses similar techniques to obtain finite difference // approximations of the gradient of a finite element field, or of higher @@ -370,30 +356,33 @@ namespace Step9 // to the function rather than to the class itself. 
This way, you don't have // to specify the template parameter yourself as in most other cases, but // the compiler can figure its value out itself from the dimension of the - // DoF handler object that one passes as first argument. + // DoFHandler object that one passes as first argument. // // Before jumping into the fray with the implementation, let us also comment // on the parallelization strategy. We have already introduced the necessary // framework for using the WorkStream concept in the declaration of the main // class of this program above. We will use it again here. In the current - // context, this means that we have to define (i) classes for scratch and - // copy objects, (ii) a function that does the local computation on one - // cell, and (iii) a function that copies the local result into a global - // object. Given this general framework, we will, however, deviate from it a + // context, this means that we have to define + //
+  // <ol>
+  //   <li>classes for scratch and copy objects,</li>
+  //   <li>a function that does the local computation on one cell, and</li>
+  //   <li>a function that copies the local result into a global object.</li>
+  // </ol>
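The three pieces just listed are handed to a single call to WorkStream::run(). In this program that call looks roughly like the following sketch (the class and function names are the ones declared above; consult the WorkStream documentation for the exact overload and argument order):
@code
  // Let WorkStream iterate over all active cells, calling
  // local_assemble_system() on each cell with its own scratch and copy
  // objects, and copy_local_to_global() afterwards in serialized order:
  WorkStream::run(dof_handler.begin_active(),
                  dof_handler.end(),
                  *this,
                  &AdvectionProblem::local_assemble_system,
                  &AdvectionProblem::copy_local_to_global,
                  AssemblyScratchData(fe),
                  AssemblyCopyData());
@endcode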
+ // Given this general framework, we will, however, deviate from it a // bit. In particular, WorkStream was generally invented for cases where // each local computation on a cell adds to a global object -- for // example, when assembling linear systems where we add local contributions // into a global matrix and right hand side. WorkStream is designed to handle // the potential conflict of multiple threads trying to do this addition at // the same time, and consequently has to provide for some way to ensure that - // only thread gets to do this at a time. Here, however, the situation is + // only one thread gets to do this at a time. Here, however, the situation is // slightly different: we compute contributions from every cell // individually, but then all we need to do is put them into an element of // an output vector that is unique to each cell. Consequently, there is no // risk that the write operations from two cells might conflict, and the // elaborate machinery of WorkStream to avoid conflicting writes is not // necessary. Consequently, what we will do is this: We still need a scratch - // object that holds, for example, the FEValues object. However, we only + // object that holds, for example, the FEValues object. However, we only // create a fake, empty copy data structure. Likewise, we do need the // function that computes local contributions, but since it can already put // the result into its final location, we do not need a copy-local-to-global @@ -521,8 +510,8 @@ namespace Step9 // contain FEValues and FEFaceValues objects, and so we will need to // have constructors and copy constructors that allow us to create // them. In initializing them, note first that we use bilinear - // elements, soGauss formulae with two points in each space - // direction are sufficient. For the cell terms we need the values + // elements, so Gauss formulae with two points in each space + // direction are sufficient. For the cell terms we need the values // and gradients of the shape functions, the quadrature points in // order to determine the source density and the advection field at // a given point, and the weights of the quadrature points times the @@ -564,7 +553,7 @@ namespace Step9 // Now, this is the function that does the actual work. It is not very // different from the assemble_system functions of previous // example programs, so we will again only comment on the differences. The - // mathematical stuff follows closely what we have said in the introduction. + // mathematical stuff closely follows what we have said in the introduction. // // There are a number of points worth mentioning here, though. The // first one is that we have moved the FEValues and FEFaceValues @@ -680,15 +669,15 @@ namespace Step9 // the direction of flow at this point; we obtain this information // using the FEFaceValues object and only decide within the main loop // whether a quadrature point is on the inflow boundary. - for (unsigned int face = 0; face < GeometryInfo::faces_per_cell; - ++face) - if (cell->face(face)->at_boundary()) + for (unsigned int face_n = 0; face_n < GeometryInfo::faces_per_cell; + ++face_n) + if (cell->face(face_n)->at_boundary()) { // Ok, this face of the present cell is on the boundary of the // domain. 
Just as for the usual FEValues object which we have // used in previous examples and also above, we have to // reinitialize the FEFaceValues object for the present face: - scratch_data.fe_face_values.reinit(cell, face); + scratch_data.fe_face_values.reinit(cell, face_n); // For the quadrature points at hand, we ask for the values of // the inflow function and for the direction of flow: @@ -699,19 +688,19 @@ namespace Step9 scratch_data.fe_face_values.get_quadrature_points(), face_advection_directions); - // Now loop over all quadrature points and see whether it is on - // the inflow or outflow part of the boundary. This is - // determined by a test whether the advection direction points - // inwards or outwards of the domain (note that the normal - // vector points outwards of the cell, and since the cell is at - // the boundary, the normal vector points outward of the domain, + // Now loop over all quadrature points and see whether this face is on + // the inflow or outflow part of the boundary. The normal + // vector points out of the cell: since the face is at + // the boundary, the normal vector points out of the domain, // so if the advection direction points into the domain, its - // scalar product with the normal vector must be negative): + // scalar product with the normal vector must be negative (to see why + // this is true, consider the scalar product definition that uses a + // cosine): for (unsigned int q_point = 0; q_point < n_face_q_points; ++q_point) if (scratch_data.fe_face_values.normal_vector(q_point) * face_advection_directions[q_point] < - 0) - // If the is part of the inflow boundary, then compute the + 0.) + // If the face is part of the inflow boundary, then compute the // contributions of this face to the global matrix and right // hand side, using the values obtained from the // FEFaceValues object and the formulae discussed in the @@ -735,18 +724,17 @@ namespace Step9 } } - - // Now go on by transferring the local contributions to the system of - // equations into the global objects. The first step was to obtain the - // global indices of the degrees of freedom on this cell. + // The final piece of information the copy routine needs is the global + // indices of the degrees of freedom on this cell, so we end by writing + // them to the local array: cell->get_dof_indices(copy_data.local_dof_indices); } // The second function we needed to write was the one that copies - // the local contributions the previous function has computed and - // put into the copy data object, into the global matrix and right + // the local contributions the previous function computed (and + // put into the AssemblyCopyData object) into the global matrix and right // hand side vector objects. This is essentially what we always had // as the last block of code when assembling something on every // cell. The following should therefore be pretty obvious: @@ -765,13 +753,11 @@ namespace Step9 } } - - - // Following is the function that solves the linear system of equations. As - // the system is no more symmetric positive definite as in all the previous - // examples, we can't use the Conjugate Gradients method anymore. Rather, we - // use a solver that is tailored to nonsymmetric systems like the one at - // hand, the BiCGStab method. As preconditioner, we use the Jacobi method. + // Here comes the linear solver routine. As the system is no longer + // symmetric positive definite as in all the previous examples, we cannot + // use the Conjugate Gradient method anymore. 
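The body of solve() is untouched by this patch; for orientation, a BiCGStab solve with Jacobi preconditioning looks roughly like the following sketch (the member names are the ones used throughout this program, while the iteration limit and tolerance are merely illustrative):
@code
  // Solve A x = b with BiCGStab, preconditioned by Jacobi, and finally
  // make the solution conforming across hanging nodes:
  SolverControl        solver_control(1000, 1e-10 * system_rhs.l2_norm());
  SolverBicgstab<>     bicgstab(solver_control);

  PreconditionJacobi<> preconditioner;
  preconditioner.initialize(system_matrix);

  bicgstab.solve(system_matrix, solution, system_rhs, preconditioner);

  hanging_node_constraints.distribute(solution);
@endcode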
Rather, we use a solver that + // is tailored to nonsymmetric systems like the one at hand, the BiCGStab + // method. As preconditioner, we use the Jacobi method. template void AdvectionProblem::solve() { @@ -786,7 +772,6 @@ namespace Step9 hanging_node_constraints.distribute(solution); } - // The following function refines the grid according to the quantity // described in the introduction. The respective computations are made in // the class GradientEstimation. The only difference to @@ -809,8 +794,6 @@ namespace Step9 triangulation.execute_coarsening_and_refinement(); } - - // Writing output to disk is done in the same way as in the previous // examples. Indeed, the function is identical to the one in step-6. template @@ -834,7 +817,8 @@ namespace Step9 } - // ... as is the main loop (setup -- solve -- refine) + // ... as is the main loop (setup -- solve -- refine), aside from the number + // of cycles and the initial grid: template void AdvectionProblem::run() { @@ -899,14 +883,14 @@ namespace Step9 {} - // Next for the implementation of the GradientEstimation + // Next comes the implementation of the GradientEstimation // class. The first function does not much except for delegating work to the // other function, but there is a bit of setup at the top. // // Before starting with the work, we check that the vector into // which the results are written has the right size. Programming // mistakes in which one forgets to size arguments correctly at the - // calling site are quite common. Because the resulting damage from + // calling site are quite common. Because the resulting damage from // not catching such errors is often subtle (e.g., corruption of // data somewhere in memory, or non-reproducible results), it is // well worth the effort to check for such things. @@ -931,12 +915,12 @@ namespace Step9 } - // Following now the function that actually computes the finite difference - // approximation to the gradient. The general outline of the function is to - // first compute the list of active neighbors of the present cell and then - // compute the quantities described in the introduction for each of the - // neighbors. The reason for this order is that it is not a one-liner to - // find a given neighbor with locally refined meshes. In principle, an + // Here comes the function that estimates the local error by computing the + // finite difference approximation of the gradient. The function first + // computes the list of active neighbors of the present cell and then + // computes the quantities described in the introduction for each of + // the neighbors. The reason for this order is that it is not a one-liner + // to find a given neighbor with locally refined meshes. In principle, an // optimized implementation would find neighbors and the quantities // depending on them in one step, rather than first building a list of // neighbors and in a second step their contributions but we will gladly @@ -944,14 +928,14 @@ namespace Step9 // passed to WorkStream::run works on "scratch" objects that keep all // temporary objects. This way, we do not need to create and initialize // objects that are expensive to initialize within the function that does - // the work, every time it is called for a given cell. Such an argument is + // the work every time it is called for a given cell. Such an argument is // passed as the second argument. The third argument would be a "copy-data" // object (see @ref threads for more information) but we do not actually use - // any of these here. 
Because WorkStream::run() insists on passing three + // any of these here. Since WorkStream::run() insists on passing three // arguments, we declare this function with three arguments, but simply // ignore the last one. // - // (This is unsatisfactory from an esthetic perspective. It can be avoided, + // (This is unsatisfactory from an aesthetic perspective. It can be avoided, // at the cost of some other trickery. If you allow, let us here show // how. First, assume that we had declared this function to only take two // arguments by omitting the unused last one. Now, WorkStream::run still @@ -959,7 +943,7 @@ namespace Step9 // way to "forget" the third argument in the call. Simply passing // WorkStream::run the pointer to the function as we do above will not do // this -- the compiler will complain that a function declared to have two - // arguments is called with three arguments. However, we can do this by + // arguments is called with three arguments. However, we can do this by // passing the following as the third argument when calling WorkStream::run() // above: // @code @@ -1002,13 +986,8 @@ namespace Step9 // Y tensor: scratch_data.fe_midpoint_value.reinit(cell); - // Then allocate the vector that will be the sum over the y-vectors - // times the approximate directional derivative: - Tensor<1, dim> projected_gradient; - - - // Now before going on first compute a list of all active neighbors of - // the present cell. We do so by first looping over all faces and see + // Now, before we go on, we first compute a list of all active neighbors + // of the present cell. We do so by first looping over all faces and see // whether the neighbor there is active, which would be the case if it // is on the same level as the present cell or one level coarser (note // that a neighbor can only be once coarser than the present cell, as @@ -1030,16 +1009,16 @@ namespace Step9 // have to clear the array storing the iterators to the active // neighbors, of course. active_neighbors.clear(); - for (unsigned int face_no = 0; face_no < GeometryInfo::faces_per_cell; - ++face_no) - if (!cell->at_boundary(face_no)) + for (unsigned int face_n = 0; face_n < GeometryInfo::faces_per_cell; + ++face_n) + if (!cell->at_boundary(face_n)) { // First define an abbreviation for the iterator to the face and // the neighbor const typename DoFHandler::face_iterator face = - cell->face(face_no); + cell->face(face_n); const typename DoFHandler::cell_iterator neighbor = - cell->neighbor(face_no); + cell->neighbor(face_n); // Then check whether the neighbor is active. If it is, then it // is on the same level or one level coarser (if we are not in @@ -1059,12 +1038,11 @@ namespace Step9 typename DoFHandler::cell_iterator neighbor_child = neighbor; while (neighbor_child->has_children()) - neighbor_child = - neighbor_child->child(face_no == 0 ? 1 : 0); + neighbor_child = neighbor_child->child(face_n == 0 ? 1 : 0); // As this used some non-trivial geometrical intuition, // we might want to check whether we did it right, - // i.e. check whether the neighbor of the cell we found + // i.e., check whether the neighbor of the cell we found // is indeed the cell we are presently working // on. Checks like this are often useful and have // frequently uncovered errors both in algorithms like @@ -1082,7 +1060,7 @@ namespace Step9 // an error that is irrecoverable and probably qualifies // as an internal error. We therefore use a predefined // exception class to throw here. - Assert(neighbor_child->neighbor(face_no == 0 ? 
1 : 0) == cell, + Assert(neighbor_child->neighbor(face_n == 0 ? 1 : 0) == cell, ExcInternalError()); // If the check succeeded, we push the active neighbor @@ -1091,12 +1069,11 @@ namespace Step9 } else // If we are not in 1d, we collect all neighbor children - // `behind' the subfaces of the current face - for (unsigned int subface_no = 0; - subface_no < face->n_children(); - ++subface_no) + // `behind' the subfaces of the current face and move on: + for (unsigned int subface_n = 0; subface_n < face->n_children(); + ++subface_n) active_neighbors.push_back( - cell->neighbor_child_on_subface(face_no, subface_no)); + cell->neighbor_child_on_subface(face_n, subface_n)); } } @@ -1114,7 +1091,6 @@ namespace Step9 scratch_data.fe_midpoint_value.get_function_values(scratch_data.solution, this_midpoint_value); - // Now loop over all active neighbors and collect the data we // need. Allocate a vector just like this_midpoint_value // which we will use to store the value of the solution in the @@ -1123,6 +1099,7 @@ namespace Step9 // iteration of this inner loop (memory allocation is a rather // expensive operation): std::vector neighbor_midpoint_value(1); + Tensor<1, dim> projected_gradient; typename std::vector::active_cell_iterator>:: const_iterator neighbor_ptr = active_neighbors.begin(); for (; neighbor_ptr != active_neighbors.end(); ++neighbor_ptr) @@ -1133,9 +1110,9 @@ namespace Step9 *neighbor_ptr; // Then get the center of the neighbor cell and the value of the - // finite element function thereon. Note that for this information - // we have to reinitialize the FEValues object for - // the neighbor cell. + // finite element function at that point. Note that for this + // information we have to reinitialize the FEValues + // object for the neighbor cell. scratch_data.fe_midpoint_value.reinit(neighbor); const Point neighbor_center = scratch_data.fe_midpoint_value.quadrature_point(0); @@ -1167,16 +1144,16 @@ namespace Step9 // span the whole space, otherwise we would not have all components of // the gradient. This is indicated by the invertibility of the matrix. // - // If the matrix should not be invertible, this means that the present + // If the matrix is not invertible, then the present // cell had an insufficient number of active neighbors. In contrast to - // all previous cases, where we raised exceptions, this is, however, + // all previous cases (where we raised exceptions) this is, however, // not a programming error: it is a runtime error that can happen in // optimized mode even if it ran well in debug mode, so it is // reasonable to try to catch this error also in optimized mode. For // this case, there is the AssertThrow macro: it checks // the condition like the Assert macro, but not only in // debug mode; it then outputs an error message, but instead of - // terminating the program as in the case of the Assert + // aborting the program as in the case of the Assert // macro, the exception is thrown using the throw command // of C++. This way, one has the possibility to catch this error and // take reasonable counter actions. One such measure would be to @@ -1185,12 +1162,12 @@ namespace Step9 // least once. 
AssertThrow(determinant(Y) != 0, ExcInsufficientDirections()); - // If, on the other hand the matrix is invertible, then invert it, - // multiply the other quantity with it and compute the estimated error - // using this quantity and the right powers of the mesh width: + // If, on the other hand, the matrix is invertible, then invert it, + // multiply the other quantity with it, and compute the estimated error + // using this quantity and the correct powers of the mesh width: const Tensor<2, dim> Y_inverse = invert(Y); - Tensor<1, dim> gradient = Y_inverse * projected_gradient; + const Tensor<1, dim> gradient = Y_inverse * projected_gradient; // The last part of this function is the one where we write into // the element of the output vector what we have just @@ -1199,30 +1176,30 @@ namespace Step9 // at the correct element inside this vector -- but we can ask the // cell we're on the how-manyth active cell it is for this: scratch_data.error_per_cell(cell->active_cell_index()) = - (std::pow(cell->diameter(), 1 + 1.0 * dim / 2) * - std::sqrt(gradient.norm_square())); + (std::pow(cell->diameter(), 1 + 1.0 * dim / 2) * gradient.norm()); } } // namespace Step9 // @sect3{Main function} -// The main function is similar to the previous examples. The main -// difference is that we use MultithreadInfo to set the maximum -// number of threads (see @ref threads "Parallel computing with multiple -// processors accessing shared memory" documentation module for more -// explanation). The number of threads used is the minimum of the environment -// variable DEAL_II_NUM_THREADS and the parameter of +// The main function is similar to the previous examples. The +// primary difference is that we use MultithreadInfo to set the maximum +// number of threads (see the documentation module @ref threads +// "Parallel computing with multiple processors accessing shared memory" +// for more information). The number of threads used is the minimum of the +// environment variable DEAL_II_NUM_THREADS and the parameter of // set_thread_limit. If no value is given to // set_thread_limit, the default value from the Intel Threading // Building Blocks (TBB) library is used. If the call to // set_thread_limit is omitted, the number of threads will be -// chosen by TBB indepently of DEAL_II_NUM_THREADS. +// chosen by TBB independently of DEAL_II_NUM_THREADS. int main() { + using namespace dealii; try { - dealii::MultithreadInfo::set_thread_limit(); + MultithreadInfo::set_thread_limit(); Step9::AdvectionProblem<2> advection_problem_2d; advection_problem_2d.run(); -- 2.39.5