From: bangerth
Date: Wed, 21 Nov 2012 13:12:20 +0000 (+0000)
Subject: Reformat all tutorial programs using the not-so-far indented comment style.
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=bbc38f4fa18578e60a290725e962d7badf687895;p=dealii-svn.git

Reformat all tutorial programs using the not-so-far indented comment style.

git-svn-id: https://svn.dealii.org/trunk@27656 0785d39b-7218-0410-832d-ea1e28bc413d
---

diff --git a/deal.II/examples/doxygen/block_matrix_array.cc b/deal.II/examples/doxygen/block_matrix_array.cc
index 624264773a..1ab3d528f8 100644
--- a/deal.II/examples/doxygen/block_matrix_array.cc
+++ b/deal.II/examples/doxygen/block_matrix_array.cc
@@ -3,10 +3,10 @@
 //
 // Copyright (C) 2005, 2006, 2012 by the deal.II authors
 //
-// This file is subject to QPL and may not be distributed
-// without copyright and license information. Please refer
-// to the file deal.II/doc/license.html for the text and
-// further information on this license.
+// This file is subject to QPL and may not be distributed without copyright
+// and license information. Please refer to the file
+// deal.II/doc/license.html for the text and further information on this
+// license.
 //
 //---------------------------------------------------------------------------

diff --git a/deal.II/examples/doxygen/product_matrix.cc b/deal.II/examples/doxygen/product_matrix.cc
index d0c2f4abc1..9f314682af 100644
--- a/deal.II/examples/doxygen/product_matrix.cc
+++ b/deal.II/examples/doxygen/product_matrix.cc
@@ -1,12 +1,12 @@
 //---------------------------------------------------------------------------
 // $Id$
 //
-// Copyright (C) 2005, 2006, 2010 by the deal.II authors
+// Copyright (C) 2005, 2006, 2010, 2012 by the deal.II authors
 //
-// This file is subject to QPL and may not be distributed
-// without copyright and license information. Please refer
-// to the file deal.II/doc/license.html for the text and
-// further information on this license.
+// This file is subject to QPL and may not be distributed without copyright
+// and license information. Please refer to the file
+// deal.II/doc/license.html for the text and further information on this
+// license.
// //--------------------------------------------------------------------------- diff --git a/deal.II/examples/step-1/step-1.cc b/deal.II/examples/step-1/step-1.cc index 9c66c0295c..515cabd366 100644 --- a/deal.II/examples/step-1/step-1.cc +++ b/deal.II/examples/step-1/step-1.cc @@ -10,97 +10,60 @@ // @sect3{Include files} -// The most fundamental class in the -// library is the Triangulation -// class, which is declared here: +// The most fundamental class in the library is the Triangulation class, which +// is declared here: #include -// We need the following two includes -// for loops over cells and/or faces: +// We need the following two includes for loops over cells and/or faces: #include #include -// Here are some functions to -// generate standard grids: +// Here are some functions to generate standard grids: #include -// We would like to use boundaries -// which are not straight lines, so -// we import some classes which -// predefine some boundary -// descriptions: +// We would like to use boundaries which are not straight lines, so we import +// some classes which predefine some boundary descriptions: #include -// Output of grids in various -// graphics formats: +// Output of grids in various graphics formats: #include // This is needed for C++ output: #include -// And this for the declarations of the -// `sqrt' and `fabs' functions: +// And this for the declarations of the `sqrt' and `fabs' functions: #include -// The final step in importing -// deal.II is this: All deal.II -// functions and classes are in a -// namespace dealii, to -// make sure they don't clash with -// symbols from other libraries you -// may want to use in conjunction -// with deal.II. One could use these -// functions and classes by prefixing -// every use of these names by -// dealii::, but that -// would quickly become cumbersome -// and annoying. Rather, we simply -// import the entire deal.II +// The final step in importing deal.II is this: All deal.II functions and +// classes are in a namespace dealii, to make sure they don't +// clash with symbols from other libraries you may want to use in conjunction +// with deal.II. One could use these functions and classes by prefixing every +// use of these names by dealii::, but that would quickly become +// cumbersome and annoying. Rather, we simply import the entire deal.II // namespace for general use: using namespace dealii; // @sect3{Creating the first mesh} -// In the following, first function, we -// simply use the unit square as -// domain and produce a globally -// refined grid from it. +// In the following, first function, we simply use the unit square as domain +// and produce a globally refined grid from it. void first_grid () { - // The first thing to do is to - // define an object for a - // triangulation of a + // The first thing to do is to define an object for a triangulation of a // two-dimensional domain: Triangulation<2> triangulation; - // Here and in many following - // cases, the string "<2>" after a - // class name indicates that this - // is an object that shall work in - // two space dimensions. Likewise, - // there are versions of the - // triangulation class that are - // working in one ("<1>") and three - // ("<3>") space dimensions. The - // way this works is through some - // template magic that we will - // investigate in some more detail - // in later example programs; - // there, we will also see how to - // write programs in an essentially - // dimension independent way. 
- - // Next, we want to fill the - // triangulation with a single cell - // for a square domain. The - // triangulation is the refined - // four times, to yield 4^4=256 + // Here and in many following cases, the string "<2>" after a class name + // indicates that this is an object that shall work in two space + // dimensions. Likewise, there are versions of the triangulation class that + // are working in one ("<1>") and three ("<3>") space dimensions. The way + // this works is through some template magic that we will investigate in + // some more detail in later example programs; there, we will also see how + // to write programs in an essentially dimension independent way. + + // Next, we want to fill the triangulation with a single cell for a square + // domain. The triangulation is the refined four times, to yield 4^4=256 // cells in total: GridGenerator::hyper_cube (triangulation); triangulation.refine_global (4); - // Now we want to write a graphical - // representation of the mesh to an - // output file. The GridOut - // class of deal.II can do that in - // a number of different output - // formats; here, we choose - // encapsulated postscript (eps) - // format: + // Now we want to write a graphical representation of the mesh to an output + // file. The GridOut class of deal.II can do that in a number of different + // output formats; here, we choose encapsulated postscript (eps) format: std::ofstream out ("grid-1.eps"); GridOut grid_out; grid_out.write_eps (triangulation, out); @@ -110,163 +73,86 @@ void first_grid () // @sect3{Creating the second mesh} -// The grid in the following, second -// function is slightly more -// complicated in that we use a ring -// domain and refine the result once -// globally. +// The grid in the following, second function is slightly more complicated in +// that we use a ring domain and refine the result once globally. void second_grid () { - // We start again by defining an - // object for a triangulation of a + // We start again by defining an object for a triangulation of a // two-dimensional domain: Triangulation<2> triangulation; - // We then fill it with a ring - // domain. The center of the ring - // shall be the point (1,0), and - // inner and outer radius shall be - // 0.5 and 1. The number of - // circumferential cells could be - // adjusted automatically by this - // function, but we choose to set - // it explicitely to 10 as the last - // argument: + // We then fill it with a ring domain. The center of the ring shall be the + // point (1,0), and inner and outer radius shall be 0.5 and 1. The number of + // circumferential cells could be adjusted automatically by this function, + // but we choose to set it explicitely to 10 as the last argument: const Point<2> center (1,0); const double inner_radius = 0.5, outer_radius = 1.0; GridGenerator::hyper_shell (triangulation, center, inner_radius, outer_radius, 10); - // By default, the triangulation - // assumes that all boundaries are - // straight and given by the cells - // of the coarse grid (which we - // just created). It uses this - // information when cells at the - // boundary are refined and new - // points need to be introduced on - // the boundary; if the boundary is - // assumed to be straight, then new - // points will simply be in the - // middle of the surrounding ones. + // By default, the triangulation assumes that all boundaries are straight + // and given by the cells of the coarse grid (which we just created). 
It + // uses this information when cells at the boundary are refined and new + // points need to be introduced on the boundary; if the boundary is assumed + // to be straight, then new points will simply be in the middle of the + // surrounding ones. // - // Here, however, we would like to - // have a curved - // boundary. Fortunately, some good - // soul implemented an object which - // describes the boundary of a ring - // domain; it only needs the center - // of the ring and automatically - // figures out the inner and outer - // radius when needed. Note that we - // associate this boundary object - // with that part of the boundary - // that has the "boundary - // indicator" zero. By default (at - // least in 2d and 3d, the 1d case - // is slightly different), all - // boundary parts have this number, - // but you can change this number - // for some parts of the - // boundary. In that case, the - // curved boundary thus associated - // with number zero will not apply - // on those parts with a non-zero - // boundary indicator, but other - // boundary description objects can - // be associated with those - // non-zero indicators. If no - // boundary description is - // associated with a particular - // boundary indicator, a straight - // boundary is implied. + // Here, however, we would like to have a curved boundary. Fortunately, some + // good soul implemented an object which describes the boundary of a ring + // domain; it only needs the center of the ring and automatically figures + // out the inner and outer radius when needed. Note that we associate this + // boundary object with that part of the boundary that has the "boundary + // indicator" zero. By default (at least in 2d and 3d, the 1d case is + // slightly different), all boundary parts have this number, but you can + // change this number for some parts of the boundary. In that case, the + // curved boundary thus associated with number zero will not apply on those + // parts with a non-zero boundary indicator, but other boundary description + // objects can be associated with those non-zero indicators. If no boundary + // description is associated with a particular boundary indicator, a + // straight boundary is implied. const HyperShellBoundary<2> boundary_description(center); triangulation.set_boundary (0, boundary_description); - // In order to demonstrate how to - // write a loop over all cells, we - // will refine the grid in five - // steps towards the inner circle - // of the domain: + // In order to demonstrate how to write a loop over all cells, we will + // refine the grid in five steps towards the inner circle of the domain: for (unsigned int step=0; step<5; ++step) { - // Next, we need an iterator - // which points to a cell and - // which we will move over all - // active cells one by one - // (active cells are those that - // are not further refined, and - // the only ones that can be - // marked for further - // refinement, obviously). By - // convention, we almost always - // use the names cell and - // endc for the iterator - // pointing to the present cell - // and to the - // one-past-the-end + // Next, we need an iterator which points to a cell and which we will + // move over all active cells one by one (active cells are those that + // are not further refined, and the only ones that can be marked for + // further refinement, obviously). 
By convention, we almost always use + // the names cell and endc for the iterator + // pointing to the present cell and to the one-past-the-end // iterator: Triangulation<2>::active_cell_iterator cell = triangulation.begin_active(), endc = triangulation.end(); - // The loop over all cells is - // then rather trivial, and - // looks like any loop - // involving pointers instead - // of iterators: + // The loop over all cells is then rather trivial, and looks like any + // loop involving pointers instead of iterators: for (; cell!=endc; ++cell) - // Next, we want to loop over - // all vertices of the - // cells. Since we are in 2d, - // we know that each cell has - // exactly four - // vertices. However, instead - // of penning down a 4 in the - // loop bound, we make a - // first attempt at writing - // it in a - // dimension-independent way - // by which we find out about - // the number of vertices of - // a cell. Using the - // GeometryInfo class, we - // will later have an easier - // time getting the program - // to also run in 3d: we only - // have to change all - // occurrences of <2> to - // <3>, and do not have - // to audit our code for the - // hidden appearance of magic - // numbers like a 4 that - // needs to be replaced by an - // 8: + // Next, we want to loop over all vertices of the cells. Since we are + // in 2d, we know that each cell has exactly four vertices. However, + // instead of penning down a 4 in the loop bound, we make a first + // attempt at writing it in a dimension-independent way by which we + // find out about the number of vertices of a cell. Using the + // GeometryInfo class, we will later have an easier time getting the + // program to also run in 3d: we only have to change all occurrences + // of <2> to <3>, and do not + // have to audit our code for the hidden appearance of magic numbers + // like a 4 that needs to be replaced by an 8: for (unsigned int v=0; v < GeometryInfo<2>::vertices_per_cell; ++v) { - // If this cell is at the - // inner boundary, then - // at least one of its - // vertices must sit on - // the inner ring and - // therefore have a - // radial distance from - // the center of exactly - // 0.5, up to floating - // point - // accuracy. Compute this - // distance, and if we - // have found a vertex - // with this property - // flag this cell for - // later refinement. We - // can then also break - // the loop over all - // vertices and move on - // to the next cell. + // If this cell is at the inner boundary, then at least one of its + // vertices must sit on the inner ring and therefore have a radial + // distance from the center of exactly 0.5, up to floating point + // accuracy. Compute this distance, and if we have found a vertex + // with this property flag this cell for later refinement. We can + // then also break the loop over all vertices and move on to the + // next cell. const double distance_from_center = center.distance (cell->vertex(v)); @@ -277,73 +163,47 @@ void second_grid () } } - // Now that we have marked all - // the cells that we want - // refined, we let the - // triangulation actually do - // this refinement. The - // function that does so owes - // its long name to the fact - // that one can also mark cells - // for coarsening, and the - // function does coarsening and - // refinement all at once: + // Now that we have marked all the cells that we want refined, we let + // the triangulation actually do this refinement. 
The function that does + // so owes its long name to the fact that one can also mark cells for + // coarsening, and the function does coarsening and refinement all at + // once: triangulation.execute_coarsening_and_refinement (); } - // Finally, after these five - // iterations of refinement, we - // want to again write the - // resulting mesh to a file, again - // in eps format. This works just + // Finally, after these five iterations of refinement, we want to again + // write the resulting mesh to a file, again in eps format. This works just // as above: std::ofstream out ("grid-2.eps"); GridOut grid_out; grid_out.write_eps (triangulation, out); - // At this point, all objects - // created in this function will be - // destroyed in reverse - // order. Unfortunately, we defined - // the boundary object after the - // triangulation, which still has a - // pointer to it and the library - // will produce an error if the - // boundary object is destroyed - // before the triangulation. We - // therefore have to release it, - // which can be done as - // follows. Note that this sets the - // boundary object used for part - // "0" of the boundary back to a - // default object, over which the - // triangulation has full control. + // At this point, all objects created in this function will be destroyed in + // reverse order. Unfortunately, we defined the boundary object after the + // triangulation, which still has a pointer to it and the library will + // produce an error if the boundary object is destroyed before the + // triangulation. We therefore have to release it, which can be done as + // follows. Note that this sets the boundary object used for part "0" of the + // boundary back to a default object, over which the triangulation has full + // control. triangulation.set_boundary (0); - // An alternative to doing so, and - // one that is frequently more - // convenient, would have been to - // declare the boundary object - // before the triangulation - // object. In that case, the - // triangulation would have let - // lose of the boundary object upon - // its destruction, and everything - // would have been fine. + // An alternative to doing so, and one that is frequently more convenient, + // would have been to declare the boundary object before the triangulation + // object. In that case, the triangulation would have let lose of the + // boundary object upon its destruction, and everything would have been + // fine. } // @sect3{The main function} -// Finally, the main function. There -// isn't much to do here, only to -// call the two subfunctions, which -// produce the two grids. +// Finally, the main function. There isn't much to do here, only to call the +// two subfunctions, which produce the two grids. int main () { first_grid (); second_grid (); } - diff --git a/deal.II/examples/step-10/step-10.cc b/deal.II/examples/step-10/step-10.cc index dc39cdca28..2406a2ef97 100644 --- a/deal.II/examples/step-10/step-10.cc +++ b/deal.II/examples/step-10/step-10.cc @@ -9,10 +9,8 @@ /* to the file deal.II/doc/license.html for the text and */ /* further information on this license. */ -// The first of the following include -// files are probably well-known by -// now and need no further -// explanation. +// The first of the following include files are probably well-known by now and +// need no further explanation. 
#include #include #include @@ -26,10 +24,8 @@ #include #include -// This is the only new one: in it, -// we declare the MappingQ class -// which we will use for polynomial -// mappings of arbitrary order: +// This is the only new one: in it, we declare the MappingQ class +// which we will use for polynomial mappings of arbitrary order: #include // And this again is C++: @@ -37,315 +33,175 @@ #include #include -// The last step is as in previous -// programs: +// The last step is as in previous programs: namespace Step10 { using namespace dealii; - // Now, as we want to compute the - // value of $\pi$, we have to compare to - // somewhat. These are the first few - // digits of $\pi$, which we define - // beforehand for later use. Since we - // would like to compute the - // difference between two numbers - // which are quite accurate, with the - // accuracy of the computed - // approximation to $\pi$ being in the - // range of the number of digits - // which a double variable can hold, - // we rather declare the reference - // value as a long double and - // give it a number of extra digits: + // Now, as we want to compute the value of $\pi$, we have to compare to + // somewhat. These are the first few digits of $\pi$, which we define + // beforehand for later use. Since we would like to compute the difference + // between two numbers which are quite accurate, with the accuracy of the + // computed approximation to $\pi$ being in the range of the number of + // digits which a double variable can hold, we rather declare the reference + // value as a long double and give it a number of extra digits: const long double pi = 3.141592653589793238462643; - // Then, the first task will be to - // generate some output. Since this - // program is so small, we do not - // employ object oriented techniques - // in it and do not declare classes - // (although, of course, we use the - // object oriented features of the - // library). Rather, we just pack the - // functionality into separate - // functions. We make these functions - // templates on the number of space - // dimensions to conform to usual - // practice when using deal.II, - // although we will only use them for - // two space dimensions. + // Then, the first task will be to generate some output. Since this program + // is so small, we do not employ object oriented techniques in it and do not + // declare classes (although, of course, we use the object oriented features + // of the library). Rather, we just pack the functionality into separate + // functions. We make these functions templates on the number of space + // dimensions to conform to usual practice when using deal.II, although we + // will only use them for two space dimensions. // - // The first of these functions just - // generates a triangulation of a - // circle (hyperball) and outputs the - // Qp mapping of its cells for - // different values of p. Then, - // we refine the grid once and do so - // again. + // The first of these functions just generates a triangulation of a circle + // (hyperball) and outputs the Qp mapping of its cells for different values + // of p. Then, we refine the grid once and do so again. template void gnuplot_output() { std::cout << "Output of grids into gnuplot files:" << std::endl << "===================================" << std::endl; - // So first generate a coarse - // triangulation of the circle and - // associate a suitable boundary - // description to it. 
Note that the - // default values of the - // HyperBallBoundary constructor - // are a center at the origin and a + // So first generate a coarse triangulation of the circle and associate a + // suitable boundary description to it. Note that the default values of + // the HyperBallBoundary constructor are a center at the origin and a // radius equals one. Triangulation triangulation; GridGenerator::hyper_ball (triangulation); static const HyperBallBoundary boundary; triangulation.set_boundary (0, boundary); - // Next generate output for this - // grid and for a once refined - // grid. Note that we have hidden - // the mesh refinement in the loop - // header, which might be uncommon - // but nevertheless works. Also it - // is strangely consistent with - // incrementing the loop index - // denoting the refinement level. + // Next generate output for this grid and for a once refined grid. Note + // that we have hidden the mesh refinement in the loop header, which might + // be uncommon but nevertheless works. Also it is strangely consistent + // with incrementing the loop index denoting the refinement level. for (unsigned int refinement=0; refinement<2; ++refinement, triangulation.refine_global(1)) { std::cout << "Refinement level: " << refinement << std::endl; - // Then have a string which - // denotes the base part of the - // names of the files into - // which we write the - // output. Note that in the - // parentheses in the - // initializer we do arithmetic - // on characters, which assumes - // that first the characters - // denoting numbers are placed - // consecutively (which is - // probably true for all - // reasonable character sets - // nowadays), but also assumes - // that the increment - // refinement is less than - // ten. This is therefore more - // a quick hack if we know - // exactly the values which the - // increment can assume. A - // better implementation would - // use the - // std::istringstream - // class to generate a name. + // Then have a string which denotes the base part of the names of the + // files into which we write the output. Note that in the parentheses + // in the initializer we do arithmetic on characters, which assumes + // that first the characters denoting numbers are placed consecutively + // (which is probably true for all reasonable character sets + // nowadays), but also assumes that the increment + // refinement is less than ten. This is therefore more a + // quick hack if we know exactly the values which the increment can + // assume. A better implementation would use the + // std::istringstream class to generate a name. std::string filename_base = "ball"; filename_base += '0'+refinement; - // Then output the present grid - // for Q1, Q2, and Q3 mappings: + // Then output the present grid for Q1, Q2, and Q3 mappings: for (unsigned int degree=1; degree<4; ++degree) { std::cout << "Degree = " << degree << std::endl; - // For this, first set up - // an object describing the - // mapping. This is done - // using the MappingQ - // class, which takes as - // argument to the - // constructor the - // polynomial degree which - // it shall use. + // For this, first set up an object describing the mapping. This + // is done using the MappingQ class, which takes as + // argument to the constructor the polynomial degree which it + // shall use. const MappingQ mapping (degree); - // We note one interesting - // fact: if you want a - // piecewise linear - // mapping, then you could - // give a value of 1 to - // the - // constructor. 
However, - // for linear mappings, so - // many things can be - // generated simpler that - // there is another class, - // called MappingQ1 - // which does exactly the - // same is if you gave an - // degree of 1 to the - // MappingQ class, but - // does so significantly - // faster. MappingQ1 is - // also the class that is - // implicitly used - // throughout the library - // in many functions and - // classes if you do not - // specify another mapping - // explicitly. - - - // In degree to actually - // write out the present - // grid with this mapping, - // we set up an object - // which we will use for - // output. We will generate - // Gnuplot output, which - // consists of a set of - // lines describing the - // mapped triangulation. By - // default, only one line - // is drawn for each face - // of the triangulation, - // but since we want to - // explicitely see the - // effect of the mapping, - // we want to have the - // faces in more - // detail. This can be done - // by passing the output - // object a structure which - // contains some flags. In - // the present case, since - // Gnuplot can only draw - // straight lines, we - // output a number of - // additional points on the - // faces so that each face - // is drawn by 30 small - // lines instead of only - // one. This is sufficient - // to give us the - // impression of seeing a - // curved line, rather than - // a set of straight lines. + // We note one interesting fact: if you want a piecewise linear + // mapping, then you could give a value of 1 to the + // constructor. However, for linear mappings, so many things can + // be generated simpler that there is another class, called + // MappingQ1 which does exactly the same is if you + // gave an degree of 1 to the MappingQ + // class, but does so significantly faster. MappingQ1 + // is also the class that is implicitly used throughout the + // library in many functions and classes if you do not specify + // another mapping explicitly. + + + // In degree to actually write out the present grid with this + // mapping, we set up an object which we will use for output. We + // will generate Gnuplot output, which consists of a set of lines + // describing the mapped triangulation. By default, only one line + // is drawn for each face of the triangulation, but since we want + // to explicitely see the effect of the mapping, we want to have + // the faces in more detail. This can be done by passing the + // output object a structure which contains some flags. In the + // present case, since Gnuplot can only draw straight lines, we + // output a number of additional points on the faces so that each + // face is drawn by 30 small lines instead of only one. This is + // sufficient to give us the impression of seeing a curved line, + // rather than a set of straight lines. GridOut grid_out; GridOutFlags::Gnuplot gnuplot_flags(false, 30); grid_out.set_flags(gnuplot_flags); - // Finally, generate a - // filename and a file for - // output using the same - // evil hack as above: + // Finally, generate a filename and a file for output using the + // same evil hack as above: std::string filename = filename_base+"_mapping_q"; filename += ('0'+degree); filename += ".dat"; std::ofstream gnuplot_file (filename.c_str()); - // Then write out the - // triangulation to this - // file. The last argument - // of the function is a - // pointer to a mapping - // object. 
This argument - // has a default value, and - // if no value is given a - // simple MappingQ1 - // object is taken, which - // we briefly described - // above. This would then - // result in a piecewise - // linear approximation of - // the true boundary in the - // output. + // Then write out the triangulation to this file. The last + // argument of the function is a pointer to a mapping object. This + // argument has a default value, and if no value is given a simple + // MappingQ1 object is taken, which we briefly + // described above. This would then result in a piecewise linear + // approximation of the true boundary in the output. grid_out.write_gnuplot (triangulation, gnuplot_file, &mapping); } std::cout << std::endl; } } - // Now we proceed with the main part - // of the code, the approximation of - // $\pi$. The area of a circle is of - // course given by $\pi r^2$, so - // having a circle of radius 1, the - // area represents just the number - // that is searched for. The - // numerical computation of the area - // is performed by integrating the - // constant function of value 1 over - // the whole computational domain, - // i.e. by computing the areas - // $\int_K 1 dx=\int_{\hat K} 1 - // \ \textrm{det}\ J(\hat x) d\hat x - // \approx \sum_i \textrm{det} - // \ J(\hat x_i)w(\hat x_i)$, where the - // sum extends over all quadrature - // points on all active cells in the - // triangulation, with $w(x_i)$ being - // the weight of quadrature point - // $x_i$. The integrals on each cell - // are approximated by numerical - // quadrature, hence the only - // additional ingredient we need is - // to set up a FEValues object that - // provides the corresponding `JxW' - // values of each cell. (Note that - // `JxW' is meant to abbreviate - // Jacobian determinant times - // weight; since in numerical - // quadrature the two factors always - // occur at the same places, we only - // offer the combined quantity, - // rather than two separate ones.) We - // note that here we won't use the - // FEValues object in its original - // purpose, i.e. for the computation - // of values of basis functions of a - // specific finite element at certain - // quadrature points. Rather, we use - // it only to gain the `JxW' at the - // quadrature points, irrespective of - // the (dummy) finite element we will - // give to the constructor of the - // FEValues object. The actual finite - // element given to the FEValues - // object is not used at all, so we - // could give any. + // Now we proceed with the main part of the code, the approximation of + // $\pi$. The area of a circle is of course given by $\pi r^2$, so having a + // circle of radius 1, the area represents just the number that is searched + // for. The numerical computation of the area is performed by integrating + // the constant function of value 1 over the whole computational domain, + // i.e. by computing the areas $\int_K 1 dx=\int_{\hat K} 1 \ \textrm{det}\ + // J(\hat x) d\hat x \approx \sum_i \textrm{det} \ J(\hat x_i)w(\hat x_i)$, + // where the sum extends over all quadrature points on all active cells in + // the triangulation, with $w(x_i)$ being the weight of quadrature point + // $x_i$. The integrals on each cell are approximated by numerical + // quadrature, hence the only additional ingredient we need is to set up a + // FEValues object that provides the corresponding `JxW' values of each + // cell. 
(Note that `JxW' is meant to abbreviate Jacobian determinant + // times weight; since in numerical quadrature the two factors always + // occur at the same places, we only offer the combined quantity, rather + // than two separate ones.) We note that here we won't use the FEValues + // object in its original purpose, i.e. for the computation of values of + // basis functions of a specific finite element at certain quadrature + // points. Rather, we use it only to gain the `JxW' at the quadrature + // points, irrespective of the (dummy) finite element we will give to the + // constructor of the FEValues object. The actual finite element given to + // the FEValues object is not used at all, so we could give any. template void compute_pi_by_area () { std::cout << "Computation of Pi by the area:" << std::endl << "==============================" << std::endl; - // For the numerical quadrature on - // all cells we employ a quadrature - // rule of sufficiently high - // degree. We choose QGauss that - // is of order 8 (4 points), to be sure that - // the errors due to numerical - // quadrature are of higher order - // than the order (maximal 6) that - // will occur due to the order of - // the approximation of the - // boundary, i.e. the order of the - // mappings employed. Note that the - // integrand, the Jacobian - // determinant, is not a polynomial - // function (rather, it is a - // rational one), so we do not use - // Gauss quadrature in order to get - // the exact value of the integral - // as done often in finite element - // computations, but could as well - // have used any quadrature formula - // of like order instead. + // For the numerical quadrature on all cells we employ a quadrature rule + // of sufficiently high degree. We choose QGauss that is of order 8 (4 + // points), to be sure that the errors due to numerical quadrature are of + // higher order than the order (maximal 6) that will occur due to the + // order of the approximation of the boundary, i.e. the order of the + // mappings employed. Note that the integrand, the Jacobian determinant, + // is not a polynomial function (rather, it is a rational one), so we do + // not use Gauss quadrature in order to get the exact value of the + // integral as done often in finite element computations, but could as + // well have used any quadrature formula of like order instead. const QGauss quadrature(4); - // Now start by looping over - // polynomial mapping degrees=1..4: + // Now start by looping over polynomial mapping degrees=1..4: for (unsigned int degree=1; degree<5; ++degree) { std::cout << "Degree = " << degree << std::endl; - // First generate the - // triangulation, the boundary - // and the mapping object as - // already seen. + // First generate the triangulation, the boundary and the mapping + // object as already seen. Triangulation triangulation; GridGenerator::hyper_ball (triangulation); @@ -354,127 +210,69 @@ namespace Step10 const MappingQ mapping (degree); - // We now create a dummy finite - // element. Here we could - // choose any finite element, - // as we are only interested in - // the `JxW' values provided by - // the FEValues object - // below. 
Nevertheless, we have - // to provide a finite element - // since in this example we - // abuse the FEValues class a - // little in that we only ask - // it to provide us with the - // weights of certain - // quadrature points, in - // contrast to the usual - // purpose (and name) of the - // FEValues class which is to - // provide the values of finite - // elements at these points. + // We now create a dummy finite element. Here we could choose any + // finite element, as we are only interested in the `JxW' values + // provided by the FEValues object below. Nevertheless, we have to + // provide a finite element since in this example we abuse the + // FEValues class a little in that we only ask it to provide us with + // the weights of certain quadrature points, in contrast to the usual + // purpose (and name) of the FEValues class which is to provide the + // values of finite elements at these points. const FE_Q dummy_fe (1); - // Likewise, we need to create - // a DoFHandler object. We do - // not actually use it, but it - // will provide us with - // `active_cell_iterators' that - // are needed to reinitialize - // the FEValues object on each - // cell of the triangulation. + // Likewise, we need to create a DoFHandler object. We do not actually + // use it, but it will provide us with `active_cell_iterators' that + // are needed to reinitialize the FEValues object on each cell of the + // triangulation. DoFHandler dof_handler (triangulation); - // Now we set up the FEValues - // object, giving the Mapping, - // the dummy finite element and - // the quadrature object to the - // constructor, together with - // the update flags asking for - // the `JxW' values at the - // quadrature points only. This - // tells the FEValues object - // that it needs not compute - // other quantities upon - // calling the reinit - // function, thus saving - // computation time. + // Now we set up the FEValues object, giving the Mapping, the dummy + // finite element and the quadrature object to the constructor, + // together with the update flags asking for the `JxW' values at the + // quadrature points only. This tells the FEValues object that it + // needs not compute other quantities upon calling the + // reinit function, thus saving computation time. // - // The most important - // difference in the - // construction of the FEValues - // object compared to previous - // example programs is that we - // pass a mapping object as - // first argument, which is to - // be used in the computation - // of the mapping from unit to - // real cell. In previous - // examples, this argument was - // omitted, resulting in the - // implicit use of an object of - // type MappingQ1. + // The most important difference in the construction of the FEValues + // object compared to previous example programs is that we pass a + // mapping object as first argument, which is to be used in the + // computation of the mapping from unit to real cell. In previous + // examples, this argument was omitted, resulting in the implicit use + // of an object of type MappingQ1. FEValues fe_values (mapping, dummy_fe, quadrature, update_JxW_values); - // We employ an object of the - // ConvergenceTable class to - // store all important data - // like the approximated values - // for $\pi$ and the error with - // respect to the true value of - // $\pi$. We will also use - // functions provided by the - // ConvergenceTable class to - // compute convergence rates of - // the approximations to $\pi$. 
+ // We employ an object of the ConvergenceTable class to store all + // important data like the approximated values for $\pi$ and the error + // with respect to the true value of $\pi$. We will also use functions + // provided by the ConvergenceTable class to compute convergence rates + // of the approximations to $\pi$. ConvergenceTable table; - // Now we loop over several - // refinement steps of the - // triangulation. + // Now we loop over several refinement steps of the triangulation. for (unsigned int refinement=0; refinement<6; ++refinement, triangulation.refine_global (1)) { - // In this loop we first - // add the number of active - // cells of the current - // triangulation to the - // table. This function - // automatically creates a - // table column with - // superscription `cells', - // in case this column was - // not created before. + // In this loop we first add the number of active cells of the + // current triangulation to the table. This function automatically + // creates a table column with superscription `cells', in case + // this column was not created before. table.add_value("cells", triangulation.n_active_cells()); - // Then we distribute the - // degrees of freedom for - // the dummy finite - // element. Strictly - // speaking we do not need - // this function call in - // our special case but we - // call it to make the - // DoFHandler happy -- - // otherwise it would throw - // an assertion in the - // FEValues::reinit + // Then we distribute the degrees of freedom for the dummy finite + // element. Strictly speaking we do not need this function call in + // our special case but we call it to make the DoFHandler happy -- + // otherwise it would throw an assertion in the FEValues::reinit // function below. dof_handler.distribute_dofs (dummy_fe); - // We define the variable - // area as `long double' - // like we did for the pi - // variable before. + // We define the variable area as `long double' like we did for + // the pi variable before. long double area = 0; - // Now we loop over all - // cells, reinitialize the - // FEValues object for each - // cell, and add up all the - // `JxW' values for this - // cell to `area'... + // Now we loop over all cells, reinitialize the FEValues object + // for each cell, and add up all the `JxW' values for this cell to + // `area'... typename DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); @@ -485,50 +283,33 @@ namespace Step10 area += fe_values.JxW (i); }; - // ...and store the - // resulting area values - // and the errors in the - // table. We need a static - // cast to double as there - // is no add_value(string, - // long double) function - // implemented. Note that - // this also concerns the - // second call as the fabs - // function in the std - // namespace is overloaded on - // its argument types, so there - // exists a version taking - // and returning a long double, - // in contrast to the global - // namespace where only one such - // function is declared (which - // takes and returns a double). + // ...and store the resulting area values and the errors in the + // table. We need a static cast to double as there is no + // add_value(string, long double) function implemented. 
Note that + // this also concerns the second call as the fabs + // function in the std namespace is overloaded on its + // argument types, so there exists a version taking and returning + // a long double, in contrast to the global namespace + // where only one such function is declared (which takes and + // returns a double). table.add_value("eval.pi", static_cast (area)); table.add_value("error", static_cast (std::fabs(area-pi))); }; - // We want to compute - // the convergence rates of the - // `error' column. Therefore we - // need to omit the other - // columns from the convergence - // rate evaluation before - // calling + // We want to compute the convergence rates of the `error' + // column. Therefore we need to omit the other columns from the + // convergence rate evaluation before calling // `evaluate_all_convergence_rates' table.omit_column_from_convergence_rate_evaluation("cells"); table.omit_column_from_convergence_rate_evaluation("eval.pi"); table.evaluate_all_convergence_rates(ConvergenceTable::reduction_rate_log2); - // Finally we set the precision - // and scientific mode for - // output of some of the - // quantities... + // Finally we set the precision and scientific mode for output of some + // of the quantities... table.set_precision("eval.pi", 16); table.set_scientific("error", true); - // ...and write the whole table - // to std::cout. + // ...and write the whole table to std::cout. table.write_text(std::cout); std::cout << std::endl; @@ -536,33 +317,24 @@ namespace Step10 } - // The following, second function also - // computes an approximation of $\pi$ - // but this time via the perimeter - // $2\pi r$ of the domain instead - // of the area. This function is only - // a variation of the previous - // function. So we will mainly give - // documentation for the differences. + // The following, second function also computes an approximation of $\pi$ + // but this time via the perimeter $2\pi r$ of the domain instead of the + // area. This function is only a variation of the previous function. So we + // will mainly give documentation for the differences. template void compute_pi_by_perimeter () { std::cout << "Computation of Pi by the perimeter:" << std::endl << "===================================" << std::endl; - // We take the same order of - // quadrature but this time a - // `dim-1' dimensional quadrature - // as we will integrate over - // (boundary) lines rather than - // over cells. + // We take the same order of quadrature but this time a `dim-1' + // dimensional quadrature as we will integrate over (boundary) lines + // rather than over cells. const QGauss quadrature(4); - // We loop over all degrees, create - // the triangulation, the boundary, - // the mapping, the dummy - // finite element and the DoFHandler - // object as seen before. + // We loop over all degrees, create the triangulation, the boundary, the + // mapping, the dummy finite element and the DoFHandler object as seen + // before. for (unsigned int degree=1; degree<5; ++degree) { std::cout << "Degree = " << degree << std::endl; @@ -577,12 +349,9 @@ namespace Step10 DoFHandler dof_handler (triangulation); - // Then we create a - // FEFaceValues object instead - // of a FEValues object as in - // the previous - // function. Again, we pass a - // mapping as first argument. + // Then we create a FEFaceValues object instead of a FEValues object + // as in the previous function. Again, we pass a mapping as first + // argument. 
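// A condensed, self-contained sketch of the perimeter computation that the
// hunks above and below describe, assuming the 2012-era deal.II API. The
// include paths and the explicit <2> template arguments do not survive in
// the hunks of this patch, so they are filled in here from memory; treat
// this as an illustration rather than a verbatim excerpt of step-10.

#include <deal.II/base/geometry_info.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria_boundary_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/mapping_q.h>
#include <deal.II/fe/fe_values.h>

using namespace dealii;

// Approximate the perimeter 2*pi*r of the unit circle by summing the `JxW'
// values of a face quadrature formula over all boundary faces, using a
// mapping of the given polynomial degree for the curved boundary.
long double circle_perimeter (const unsigned int mapping_degree)
{
  Triangulation<2> triangulation;
  GridGenerator::hyper_ball (triangulation);
  static const HyperBallBoundary<2> boundary;
  triangulation.set_boundary (0, boundary);

  const MappingQ<2> mapping (mapping_degree);
  const FE_Q<2>     dummy_fe (1);      // only needed to satisfy FEFaceValues
  const QGauss<1>   quadrature (4);    // dim-1 == 1: we integrate over lines

  DoFHandler<2> dof_handler (triangulation);
  dof_handler.distribute_dofs (dummy_fe);

  FEFaceValues<2> fe_face_values (mapping, dummy_fe, quadrature,
                                  update_JxW_values);

  long double perimeter = 0;
  for (DoFHandler<2>::active_cell_iterator cell = dof_handler.begin_active();
       cell != dof_handler.end(); ++cell)
    for (unsigned int f = 0; f < GeometryInfo<2>::faces_per_cell; ++f)
      if (cell->face(f)->at_boundary())
        {
          // Reinitialize on this boundary face and add up its JxW values,
          // which together approximate the length of the mapped face.
          fe_face_values.reinit (cell, f);
          for (unsigned int i = 0; i < fe_face_values.n_quadrature_points; ++i)
            perimeter += fe_face_values.JxW (i);
        }

  return perimeter;   // perimeter/2 is then an approximation of pi
}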
FEFaceValues fe_face_values (mapping, fe, quadrature, update_JxW_values); ConvergenceTable table; @@ -594,14 +363,9 @@ namespace Step10 dof_handler.distribute_dofs (fe); - // Now we run over all - // cells and over all faces - // of each cell. Only the - // contributions of the - // `JxW' values on boundary - // faces are added to the - // long double variable - // `perimeter'. + // Now we run over all cells and over all faces of each cell. Only + // the contributions of the `JxW' values on boundary faces are + // added to the long double variable `perimeter'. typename DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); @@ -610,24 +374,18 @@ namespace Step10 for (unsigned int face_no=0; face_no::faces_per_cell; ++face_no) if (cell->face(face_no)->at_boundary()) { - // We reinit the - // FEFaceValues - // object with the - // cell iterator - // and the number - // of the face. + // We reinit the FEFaceValues object with the cell + // iterator and the number of the face. fe_face_values.reinit (cell, face_no); for (unsigned int i=0; i (perimeter/2.)); table.add_value("error", static_cast (std::fabs(perimeter/2.-pi))); }; - // ...and end this function as - // we did in the previous one: + // ...and end this function as we did in the previous one: table.omit_column_from_convergence_rate_evaluation("cells"); table.omit_column_from_convergence_rate_evaluation("eval.pi"); table.evaluate_all_convergence_rates(ConvergenceTable::reduction_rate_log2); @@ -643,11 +401,9 @@ namespace Step10 } -// The following main function just calls the -// above functions in the order of their -// appearance. Apart from this, it looks just -// like the main functions of previous -// tutorial programs. +// The following main function just calls the above functions in the order of +// their appearance. Apart from this, it looks just like the main functions of +// previous tutorial programs. int main () { try diff --git a/deal.II/examples/step-11/step-11.cc b/deal.II/examples/step-11/step-11.cc index 3d79aeb487..0fe9793fad 100644 --- a/deal.II/examples/step-11/step-11.cc +++ b/deal.II/examples/step-11/step-11.cc @@ -9,10 +9,8 @@ /* to the file deal.II/doc/license.html for the text and */ /* further information on this license. */ -// As usual, the program starts with -// a rather long list of include -// files which you are probably -// already used to by now: +// As usual, the program starts with a rather long list of include files which +// you are probably already used to by now: #include #include #include @@ -36,55 +34,36 @@ #include #include -// Just this one is new: it declares -// a class -// CompressedSparsityPattern, -// which we will use and explain +// Just this one is new: it declares a class +// CompressedSparsityPattern, which we will use and explain // further down below. #include -// We will make use of the std::find -// algorithm of the C++ standard -// library, so we have to include the -// following file for its -// declaration: +// We will make use of the std::find algorithm of the C++ standard library, so +// we have to include the following file for its declaration: #include #include #include #include -// The last step is as in all -// previous programs: +// The last step is as in all previous programs: namespace Step11 { using namespace dealii; - // Then we declare a class which - // represents the solution of a - // Laplace problem. 
As this example - // program is based on step-5, the - // class looks rather the same, with - // the sole structural difference - // that the functions - // assemble_system now calls - // solve itself, and is thus - // called assemble_and_solve, and - // that the output function was - // dropped since the solution - // function is so boring that it is - // not worth being viewed. + // Then we declare a class which represents the solution of a Laplace + // problem. As this example program is based on step-5, the class looks + // rather the same, with the sole structural difference that the functions + // assemble_system now calls solve itself, and is + // thus called assemble_and_solve, and that the output function + // was dropped since the solution function is so boring that it is not worth + // being viewed. // - // The only other noteworthy change - // is that the constructor takes a - // value representing the polynomial - // degree of the mapping to be used - // later on, and that it has another - // member variable representing - // exactly this mapping. In general, - // this variable will occur in real - // applications at the same places - // where the finite element is - // declared or used. + // The only other noteworthy change is that the constructor takes a value + // representing the polynomial degree of the mapping to be used later on, + // and that it has another member variable representing exactly this + // mapping. In general, this variable will occur in real applications at the + // same places where the finite element is declared or used. template class LaplaceProblem { @@ -114,13 +93,10 @@ namespace Step11 - // Construct such an object, by - // initializing the variables. Here, - // we use linear finite elements (the - // argument to the fe variable - // denotes the polynomial degree), - // and mappings of given order. Print - // to screen what we are about to do. + // Construct such an object, by initializing the variables. Here, we use + // linear finite elements (the argument to the fe variable + // denotes the polynomial degree), and mappings of given order. Print to + // screen what we are about to do. template LaplaceProblem::LaplaceProblem (const unsigned int mapping_degree) : fe (1), @@ -135,101 +111,61 @@ namespace Step11 - // The first task is to set up the - // variables for this problem. This - // includes generating a valid - // DoFHandler object, as well as - // the sparsity patterns for the - // matrix, and the object - // representing the constraints that - // the mean value of the degrees of - // freedom on the boundary be zero. + // The first task is to set up the variables for this problem. This includes + // generating a valid DoFHandler object, as well as the + // sparsity patterns for the matrix, and the object representing the + // constraints that the mean value of the degrees of freedom on the boundary + // be zero. 
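// Summarized in one place, the boundary mean value constraint that the
// following setup_system() hunks construct amounts to the calls below. All
// functions used here appear in the patch itself
// (DoFTools::extract_boundary_dofs, ConstraintMatrix::add_line, add_entry,
// close); the include paths, the <bool> and <dim> template arguments, and
// the helper's name are reconstructed, so this is a sketch, not a verbatim
// copy of step-11.

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/component_mask.h>
#include <deal.II/lac/constraint_matrix.h>

#include <algorithm>
#include <vector>

using namespace dealii;

// Constrain the first DoF on the boundary to equal minus the sum of all
// other boundary DoFs, i.e. force the mean value over the boundary to zero.
template <int dim>
void constrain_boundary_mean_value (const DoFHandler<dim> &dof_handler,
                                    ConstraintMatrix      &constraints)
{
  // Mark which DoFs sit on the boundary (one flag per global DoF).
  std::vector<bool> boundary_dofs (dof_handler.n_dofs(), false);
  DoFTools::extract_boundary_dofs (dof_handler, ComponentMask(), boundary_dofs);

  // Index of the first boundary DoF in that list.
  const unsigned int first_boundary_dof
    = std::distance (boundary_dofs.begin(),
                     std::find (boundary_dofs.begin(),
                                boundary_dofs.end(),
                                true));

  // One constraint line: x_first + sum of the other boundary DoFs = 0.
  constraints.clear ();
  constraints.add_line (first_boundary_dof);
  for (unsigned int i = first_boundary_dof + 1; i < dof_handler.n_dofs(); ++i)
    if (boundary_dofs[i] == true)
      constraints.add_entry (first_boundary_dof, i, -1);
  constraints.close ();
}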
template void LaplaceProblem::setup_system () { - // The first task is trivial: - // generate an enumeration of the - // degrees of freedom, and - // initialize solution and right - // hand side vector to their + // The first task is trivial: generate an enumeration of the degrees of + // freedom, and initialize solution and right hand side vector to their // correct sizes: dof_handler.distribute_dofs (fe); solution.reinit (dof_handler.n_dofs()); system_rhs.reinit (dof_handler.n_dofs()); - // Next task is to construct the - // object representing the - // constraint that the mean value - // of the degrees of freedom on the - // boundary shall be zero. For - // this, we first want a list of - // those nodes which are actually - // at the boundary. The - // DoFTools class has a - // function that returns an array - // of boolean values where true - // indicates that the node is at - // the boundary. The second - // argument denotes a mask - // selecting which components of - // vector valued finite elements we - // want to be considered. This sort - // of information is encoded using - // the ComponentMask class (see also - // @ref GlossComponentMask). Since we - // have a scalar finite element - // anyway, this mask in reality should - // have only one entry with a - // true value. However, - // the ComponentMask class has - // semantics that allow it to - // represents a mask of indefinite - // size whose every element equals - // true when one just - // default constructs such an object, - // so this is what we'll do here. + // Next task is to construct the object representing the constraint that + // the mean value of the degrees of freedom on the boundary shall be + // zero. For this, we first want a list of those nodes which are actually + // at the boundary. The DoFTools class has a function that + // returns an array of boolean values where true indicates + // that the node is at the boundary. The second argument denotes a mask + // selecting which components of vector valued finite elements we want to + // be considered. This sort of information is encoded using the + // ComponentMask class (see also @ref GlossComponentMask). Since we have a + // scalar finite element anyway, this mask in reality should have only one + // entry with a true value. However, the ComponentMask class + // has semantics that allow it to represents a mask of indefinite size + // whose every element equals true when one just default + // constructs such an object, so this is what we'll do here. std::vector boundary_dofs (dof_handler.n_dofs(), false); DoFTools::extract_boundary_dofs (dof_handler, ComponentMask(), boundary_dofs); - // Now first for the generation of - // the constraints: as mentioned in - // the introduction, we constrain - // one of the nodes on the boundary - // by the values of all other DoFs - // on the boundary. So, let us - // first pick out the first - // boundary node from this list. We - // do that by searching for the - // first true value in the - // array (note that std::find - // returns an iterator to this - // element), and computing its - // distance to the overall first - // element in the array to get its - // index: + // Now first for the generation of the constraints: as mentioned in the + // introduction, we constrain one of the nodes on the boundary by the + // values of all other DoFs on the boundary. So, let us first pick out the + // first boundary node from this list. 
We do that by searching for the + // first true value in the array (note that + // std::find returns an iterator to this element), and + // computing its distance to the overall first element in the array to get + // its index: const unsigned int first_boundary_dof = std::distance (boundary_dofs.begin(), std::find (boundary_dofs.begin(), boundary_dofs.end(), true)); - // Then generate a constraints - // object with just this one - // constraint. First clear all - // previous content (which might - // reside there from the previous - // computation on a once coarser - // grid), then add this one line - // constraining the - // first_boundary_dof to the - // sum of other boundary DoFs each - // with weight -1. Finally, close - // the constraints object, i.e. do - // some internal bookkeeping on it - // for faster processing of what is - // to come later: + // Then generate a constraints object with just this one constraint. First + // clear all previous content (which might reside there from the previous + // computation on a once coarser grid), then add this one line + // constraining the first_boundary_dof to the sum of other + // boundary DoFs each with weight -1. Finally, close the constraints + // object, i.e. do some internal bookkeeping on it for faster processing + // of what is to come later: mean_value_constraints.clear (); mean_value_constraints.add_line (first_boundary_dof); for (unsigned int i=first_boundary_dof+1; iDoFTools::make_sparsity_pattern - // and condense the result using - // the hanging node constraints. We - // have no hanging node constraints - // here (since we only refine - // globally in this example), but - // we have this global constraint - // on the boundary. This poses one - // severe problem in this context: - // the SparsityPattern class - // wants us to state beforehand the - // maximal number of entries per - // row, either for all rows or for - // each row separately. There are - // functions in the library which - // can tell you this number in case - // you just have hanging node - // constraints (namely - // DoFHandler::max_coupling_between_dofs), - // but how is this for the present - // case? The difficulty arises - // because the elimination of the - // constrained degree of freedom - // requires a number of additional - // entries in the matrix at places - // that are not so simple to - // determine. We would therefore - // have a problem had we to give a - // maximal number of entries per - // row here. + // Next task is to generate a sparsity pattern. This is indeed a tricky + // task here. Usually, we just call + // DoFTools::make_sparsity_pattern and condense the result + // using the hanging node constraints. We have no hanging node constraints + // here (since we only refine globally in this example), but we have this + // global constraint on the boundary. This poses one severe problem in + // this context: the SparsityPattern class wants us to state + // beforehand the maximal number of entries per row, either for all rows + // or for each row separately. There are functions in the library which + // can tell you this number in case you just have hanging node constraints + // (namely DoFHandler::max_coupling_between_dofs), but how is + // this for the present case? The difficulty arises because the + // elimination of the constrained degree of freedom requires a number of + // additional entries in the matrix at places that are not so simple to + // determine. 
We would therefore have a problem had we to give a maximal + // number of entries per row here. // - // Since this can be so difficult - // that no reasonable answer can be - // given that allows allocation of - // only a reasonable amount of - // memory, there is a class - // CompressedSparsityPattern, - // that can help us out here. It - // does not require that we know in - // advance how many entries rows - // could have, but allows just - // about any length. It is thus - // significantly more flexible in - // case you do not have good - // estimates of row lengths, - // however at the price that - // building up such a pattern is - // also significantly more - // expensive than building up a - // pattern for which you had - // information in - // advance. Nevertheless, as we - // have no other choice here, we'll - // just build such an object by - // initializing it with the - // dimensions of the matrix and - // calling another function - // DoFTools::make_sparsity_pattern - // to get the sparsity pattern due - // to the differential operator, - // then condense it with the - // constraints object which adds - // those positions in the sparsity - // pattern that are required for - // the elimination of the - // constraint. + // Since this can be so difficult that no reasonable answer can be given + // that allows allocation of only a reasonable amount of memory, there is + // a class CompressedSparsityPattern, that can help us out + // here. It does not require that we know in advance how many entries rows + // could have, but allows just about any length. It is thus significantly + // more flexible in case you do not have good estimates of row lengths, + // however at the price that building up such a pattern is also + // significantly more expensive than building up a pattern for which you + // had information in advance. Nevertheless, as we have no other choice + // here, we'll just build such an object by initializing it with the + // dimensions of the matrix and calling another function + // DoFTools::make_sparsity_pattern to get the sparsity + // pattern due to the differential operator, then condense it with the + // constraints object which adds those positions in the sparsity pattern + // that are required for the elimination of the constraint. CompressedSparsityPattern csp (dof_handler.n_dofs(), dof_handler.n_dofs()); DoFTools::make_sparsity_pattern (dof_handler, csp); mean_value_constraints.condense (csp); - // Finally, once we have the full - // pattern, we can initialize an - // object of type - // SparsityPattern from it and - // in turn initialize the matrix - // with it. Note that this is - // actually necessary, since the - // CompressedSparsityPattern is - // so inefficient compared to the - // SparsityPattern class due to - // the more flexible data - // structures it has to use, that - // we can impossibly base the - // sparse matrix class on it, but - // rather need an object of type - // SparsityPattern, which we - // generate by copying from the + // Finally, once we have the full pattern, we can initialize an object of + // type SparsityPattern from it and in turn initialize the + // matrix with it. 
Note that this is actually necessary, since the + // CompressedSparsityPattern is so inefficient compared to + // the SparsityPattern class due to the more flexible data + // structures it has to use, that we can impossibly base the sparse matrix + // class on it, but rather need an object of type + // SparsityPattern, which we generate by copying from the // intermediate object. // - // As a further sidenote, you will - // notice that we do not explicitly - // have to compress the - // sparsity pattern here. This, of - // course, is due to the fact that - // the copy_from function - // generates a compressed object - // right from the start, to which - // you cannot add new entries - // anymore. The compress call - // is therefore implicit in the - // copy_from call. + // As a further sidenote, you will notice that we do not explicitly have + // to compress the sparsity pattern here. This, of course, is + // due to the fact that the copy_from function generates a + // compressed object right from the start, to which you cannot add new + // entries anymore. The compress call is therefore implicit + // in the copy_from call. sparsity_pattern.copy_from (csp); system_matrix.reinit (sparsity_pattern); } - // The next function then assembles - // the linear system of equations, - // solves it, and evaluates the - // solution. This then makes three - // actions, and we will put them into - // eight true statements (excluding - // declaration of variables, and - // handling of temporary - // vectors). Thus, this function is - // something for the very - // lazy. Nevertheless, the functions - // called are rather powerful, and - // through them this function uses a - // good deal of the whole - // library. But let's look at each of - // the steps. + // The next function then assembles the linear system of equations, solves + // it, and evaluates the solution. This then makes three actions, and we + // will put them into eight true statements (excluding declaration of + // variables, and handling of temporary vectors). Thus, this function is + // something for the very lazy. Nevertheless, the functions called are + // rather powerful, and through them this function uses a good deal of the + // whole library. But let's look at each of the steps. template void LaplaceProblem::assemble_and_solve () { - // First, we have to assemble the - // matrix and the right hand - // side. In all previous examples, - // we have investigated various - // ways how to do this - // manually. However, since the - // Laplace matrix and simple right - // hand sides appear so frequently - // in applications, the library - // provides functions for actually - // doing this for you, i.e. they - // perform the loop over all cells, - // setting up the local matrices - // and vectors, and putting them + // First, we have to assemble the matrix and the right hand side. In all + // previous examples, we have investigated various ways how to do this + // manually. However, since the Laplace matrix and simple right hand sides + // appear so frequently in applications, the library provides functions + // for actually doing this for you, i.e. they perform the loop over all + // cells, setting up the local matrices and vectors, and putting them // together for the end result. // - // The following are the two most - // commonly used ones: creation of - // the Laplace matrix and creation - // of a right hand side vector from - // body or boundary forces. 
They - // take the mapping object, the - // DoFHandler object - // representing the degrees of - // freedom and the finite element - // in use, a quadrature formula to - // be used, and the output - // object. The function that - // creates a right hand side vector - // also has to take a function - // object describing the - // (continuous) right hand side - // function. + // The following are the two most commonly used ones: creation of the + // Laplace matrix and creation of a right hand side vector from body or + // boundary forces. They take the mapping object, the + // DoFHandler object representing the degrees of freedom and + // the finite element in use, a quadrature formula to be used, and the + // output object. The function that creates a right hand side vector also + // has to take a function object describing the (continuous) right hand + // side function. // - // Let us look at the way the - // matrix and body forces are - // integrated: + // Let us look at the way the matrix and body forces are integrated: const unsigned int gauss_degree = std::max (static_cast<unsigned int>(std::ceil(1.*(mapping.get_degree()+1)/2)), 2U); @@ -419,150 +274,81 @@ namespace Step11 system_rhs); // That's quite simple, right? // - // Two remarks are in order, - // though: First, these functions - // are used in a lot of - // contexts. Maybe you want to - // create a Laplace or mass matrix - // for a vector values finite - // element; or you want to use the - // default Q1 mapping; or you want - // to assembled the matrix with a - // coefficient in the Laplace - // operator. For this reason, there - // are quite a large number of - // variants of these functions in - // the MatrixCreator and - // MatrixTools - // classes. Whenever you need a - // slightly different version of - // these functions than the ones - // called above, it is certainly - // worthwhile to take a look at the - // documentation and to check - // whether something fits your - // needs. + // Two remarks are in order, though: First, these functions are used in a + // lot of contexts. Maybe you want to create a Laplace or mass matrix for + // a vector-valued finite element; or you want to use the default Q1 + // mapping; or you want to assemble the matrix with a coefficient in the + // Laplace operator. For this reason, there are quite a large number of + // variants of these functions in the MatrixCreator and + // MatrixTools classes. Whenever you need a slightly + // different version of these functions than the ones called above, it is + // certainly worthwhile to take a look at the documentation and to check + // whether something fits your needs. // - // The second remark concerns the - // quadrature formula we use: we - // want to integrate over bilinear - // shape functions, so we know that - // we have to use at least a Gauss2 - // quadrature formula. On the other - // hand, we want to have the - // quadrature rule to have at least - // the order of the boundary - // approximation. Since the order - // of Gauss-r is 2r, and the order - // of the boundary approximation - // using polynomials of degree p is - // p+1, we know that 2r@>=p+1. Since - // r has to be an integer and (as - // mentioned above) has to be at - // least 2, this makes up for the - // formula above computing + // The second remark concerns the quadrature formula we use: we want to + // integrate over bilinear shape functions, so we know that we have to use + // at least a Gauss2 quadrature formula.
On the other hand, we want to + // have the quadrature rule to have at least the order of the boundary + // approximation. Since the order of Gauss-r is 2r, and the order of the + // boundary approximation using polynomials of degree p is p+1, we know + // that 2r@>=p+1. Since r has to be an integer and (as mentioned above) + // has to be at least 2, this makes up for the formula above computing // gauss_degree. // - // Since the generation of the body - // force contributions to the right - // hand side vector was so simple, - // we do that all over again for - // the boundary forces as well: - // allocate a vector of the right - // size and call the right - // function. The boundary function - // has constant values, so we can - // generate an object from the - // library on the fly, and we use - // the same quadrature formula as - // above, but this time of lower - // dimension since we integrate + // Since the generation of the body force contributions to the right hand + // side vector was so simple, we do that all over again for the boundary + // forces as well: allocate a vector of the right size and call the right + // function. The boundary function has constant values, so we can generate + // an object from the library on the fly, and we use the same quadrature + // formula as above, but this time of lower dimension since we integrate // over faces now instead of cells: Vector tmp (system_rhs.size()); VectorTools::create_boundary_right_hand_side (mapping, dof_handler, QGauss(gauss_degree), ConstantFunction(1), tmp); - // Then add the contributions from - // the boundary to those from the - // interior of the domain: + // Then add the contributions from the boundary to those from the interior + // of the domain: system_rhs += tmp; - // For assembling the right hand - // side, we had to use two - // different vector objects, and - // later add them together. The - // reason we had to do so is that - // the - // VectorTools::create_right_hand_side - // and - // VectorTools::create_boundary_right_hand_side - // functions first clear the output - // vector, rather than adding up - // their results to previous - // contents. This can reasonably be - // called a design flaw in the - // library made in its infancy, but - // unfortunately things are as they - // are for some time now and it is - // difficult to change such things - // that silently break existing - // code, so we have to live with - // that. - - // Now, the linear system is set - // up, so we can eliminate the one - // degree of freedom which we - // constrained to the other DoFs on - // the boundary for the mean value - // constraint from matrix and right - // hand side vector, and solve the - // system. After that, distribute - // the constraints again, which in - // this case means setting the - // constrained degree of freedom to - // its proper value + // For assembling the right hand side, we had to use two different vector + // objects, and later add them together. The reason we had to do so is + // that the VectorTools::create_right_hand_side and + // VectorTools::create_boundary_right_hand_side functions + // first clear the output vector, rather than adding up their results to + // previous contents. This can reasonably be called a design flaw in the + // library made in its infancy, but unfortunately things are as they are + // for some time now and it is difficult to change such things that + // silently break existing code, so we have to live with that. 
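As a brief aside on the quadrature degree chosen further up: the relation $2r\ge p+1$ together with the lower bound $r\ge 2$ is easy to tabulate. The following plain, self-contained C++ lines (independent of deal.II; the range of mapping degrees is arbitrary) evaluate exactly the expression used for gauss_degree:

  #include <algorithm>
  #include <cmath>
  #include <iostream>

  int main ()
  {
    // For a boundary approximation of polynomial degree p we need a
    // Gauss formula with r points, where 2r >= p+1 and, in addition, r >= 2:
    for (unsigned int p = 1; p <= 4; ++p)
      {
        const unsigned int r
          = std::max (static_cast<unsigned int>(std::ceil (1.*(p+1)/2)), 2U);
        std::cout << "mapping degree " << p << " -> Gauss-" << r << std::endl;
      }
    // Output: mapping degrees 1, 2 and 3 all end up with Gauss-2,
    // degree 4 with Gauss-3.
  }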
+ + // Now, the linear system is set up, so we can eliminate the one degree of + // freedom which we constrained to the other DoFs on the boundary for the + // mean value constraint from matrix and right hand side vector, and solve + // the system. After that, distribute the constraints again, which in this + // case means setting the constrained degree of freedom to its proper + // value mean_value_constraints.condense (system_matrix); mean_value_constraints.condense (system_rhs); solve (); mean_value_constraints.distribute (solution); - // Finally, evaluate what we got as - // solution. As stated in the - // introduction, we are interested - // in the H1 semi-norm of the - // solution. Here, as well, we have - // a function in the library that - // does this, although in a - // slightly non-obvious way: the - // VectorTools::integrate_difference - // function integrates the norm of - // the difference between a finite - // element function and a - // continuous function. If we - // therefore want the norm of a - // finite element field, we just - // put the continuous function to - // zero. Note that this function, - // just as so many other ones in - // the library as well, has at - // least two versions, one which - // takes a mapping as argument - // (which we make us of here), and - // the one which we have used in - // previous examples which - // implicitly uses MappingQ1. - // Also note that we take a - // quadrature formula of one degree - // higher, in order to avoid - // superconvergence effects where - // the solution happens to be - // especially close to the exact - // solution at certain points (we - // don't know whether this might be - // the case here, but there are - // cases known of this, and we just - // want to make sure): + // Finally, evaluate what we got as solution. As stated in the + // introduction, we are interested in the H1 semi-norm of the + // solution. Here, as well, we have a function in the library that does + // this, although in a slightly non-obvious way: the + // VectorTools::integrate_difference function integrates the + // norm of the difference between a finite element function and a + // continuous function. If we therefore want the norm of a finite element + // field, we just put the continuous function to zero. Note that this + // function, just as so many other ones in the library as well, has at + // least two versions, one which takes a mapping as argument (which we + // make us of here), and the one which we have used in previous examples + // which implicitly uses MappingQ1. Also note that we take a + // quadrature formula of one degree higher, in order to avoid + // superconvergence effects where the solution happens to be especially + // close to the exact solution at certain points (we don't know whether + // this might be the case here, but there are cases known of this, and we + // just want to make sure): Vector norm_per_cell (triangulation.n_active_cells()); VectorTools::integrate_difference (mapping, dof_handler, solution, @@ -570,14 +356,10 @@ namespace Step11 norm_per_cell, QGauss(gauss_degree+1), VectorTools::H1_seminorm); - // Then, the function just called - // returns its results as a vector - // of values each of which denotes - // the norm on one cell. To get the - // global norm, a simple - // computation shows that we have - // to take the l2 norm of the - // vector: + // Then, the function just called returns its results as a vector of + // values each of which denotes the norm on one cell. 
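Spelled out, the step that follows rests on the additivity of the squared seminorm over cells: since $|u_h|_{H^1(\Omega)}^2 = \int_\Omega |\nabla u_h|^2\,dx = \sum_K \int_K |\nabla u_h|^2\,dx = \sum_K |u_h|_{H^1(K)}^2$, the $l_2$ norm of the vector of per-cell values $|u_h|_{H^1(K)}$ just computed (with the exact solution replaced by zero, as described above) is exactly the global seminorm $|u_h|_{H^1(\Omega)}$.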
To get the global + // norm, a simple computation shows that we have to take the l2 norm of + // the vector: const double norm = norm_per_cell.l2_norm(); // Last task -- generate output: @@ -588,10 +370,8 @@ namespace Step11 - // The following function solving the - // linear system of equations is - // copied from step-5 and is - // explained there in some detail: + // The following function solving the linear system of equations is copied + // from step-5 and is explained there in some detail: template void LaplaceProblem::solve () { @@ -607,34 +387,19 @@ namespace Step11 - // Finally the main function - // controlling the different steps to - // be performed. Its content is - // rather straightforward, generating - // a triangulation of a circle, - // associating a boundary to it, and - // then doing several cycles on - // subsequently finer grids. Note - // again that we have put mesh - // refinement into the loop header; - // this may be something for a test - // program, but for real applications - // you should consider that this - // implies that the mesh is refined - // after the loop is executed the - // last time since the increment - // clause (the last part of the - // three-parted loop header) is - // executed before the comparison - // part (the second one), which may - // be rather costly if the mesh is - // already quite refined. In that - // case, you should arrange code such - // that the mesh is not further - // refined after the last loop run - // (or you should do it at the - // beginning of each run except for - // the first one). + // Finally the main function controlling the different steps to be + // performed. Its content is rather straightforward, generating a + // triangulation of a circle, associating a boundary to it, and then doing + // several cycles on subsequently finer grids. Note again that we have put + // mesh refinement into the loop header; this may be something for a test + // program, but for real applications you should consider that this implies + // that the mesh is refined after the loop is executed the last time since + // the increment clause (the last part of the three-parted loop header) is + // executed before the comparison part (the second one), which may be rather + // costly if the mesh is already quite refined. In that case, you should + // arrange code such that the mesh is not further refined after the last + // loop run (or you should do it at the beginning of each run except for the + // first one). template void LaplaceProblem::run () { @@ -648,8 +413,7 @@ namespace Step11 assemble_and_solve (); }; - // After all the data is generated, - // write a table of results to the + // After all the data is generated, write a table of results to the // screen: output_table.set_precision("|u|_1", 6); output_table.set_precision("error", 6); @@ -660,11 +424,8 @@ namespace Step11 -// Finally the main function. It's -// structure is the same as that used -// in several of the previous -// examples, so probably needs no -// more explanation. +// Finally the main function. It's structure is the same as that used in +// several of the previous examples, so probably needs no more explanation. int main () { try @@ -672,18 +433,12 @@ int main () dealii::deallog.depth_console (0); std::cout.precision(5); - // This is the main loop, doing - // the computations with - // mappings of linear through - // cubic mappings. 
Note that - // since we need the object of - // type LaplaceProblem@<2@> - // only once, we do not even - // name it, but create an - // unnamed such object and call - // the run function of it, - // subsequent to which it is - // immediately destroyed again. + // This is the main loop, doing the computations with mappings of linear + // through cubic mappings. Note that since we need the object of type + // LaplaceProblem@<2@> only once, we do not even name it, + // but create an unnamed such object and call the run + // function of it, subsequent to which it is immediately destroyed + // again. for (unsigned int mapping_degree=1; mapping_degree<=3; ++mapping_degree) Step11::LaplaceProblem<2>(mapping_degree).run (); } diff --git a/deal.II/examples/step-12/step-12.cc b/deal.II/examples/step-12/step-12.cc index 392eacf1be..e4afc17a6b 100644 --- a/deal.II/examples/step-12/step-12.cc +++ b/deal.II/examples/step-12/step-12.cc @@ -9,10 +9,8 @@ /* to the file deal.II/doc/license.html for the text and */ /* further information on this license. */ -// The first few files have already -// been covered in previous examples -// and will thus not be further -// commented on: +// The first few files have already been covered in previous examples and will +// thus not be further commented on: #include #include #include @@ -30,66 +28,40 @@ #include #include #include -// Here the discontinuous finite elements are -// defined. They are used in the same way as -// all other finite elements, though -- as -// you have seen in previous tutorial -// programs -- there isn't much user -// interaction with finite element classes at -// all: the are passed to -// DoFHandler and -// FEValues objects, and that is -// about it. +// Here the discontinuous finite elements are defined. They are used in the +// same way as all other finite elements, though -- as you have seen in +// previous tutorial programs -- there isn't much user interaction with finite +// element classes at all: they are passed to DoFHandler and +// FEValues objects, and that is about it. #include -// We are going to use the simplest -// possible solver, called Richardson -// iteration, that represents a -// simple defect correction. This, in -// combination with a block SSOR -// preconditioner (defined in -// precondition_block.h), that uses -// the special block matrix structure -// of system matrices arising from DG +// We are going to use the simplest possible solver, called Richardson +// iteration, that represents a simple defect correction. This, in combination +// with a block SSOR preconditioner (defined in precondition_block.h), that +// uses the special block matrix structure of system matrices arising from DG // discretizations. #include #include -// We are going to use gradients as -// refinement indicator. +// We are going to use gradients as refinement indicator. #include -// Here come the new include files -// for using the MeshWorker -// framework. The first contains the -// class -// MeshWorker::DoFInfo, -// which provides local integrators -// with a mapping between local and -// global degrees of freedom. It -// stores the results of local -// integrals as well in its base -// class Meshworker::LocalResults. -// In the second of these files, we -// find an object of type -// MeshWorker::IntegrationInfo, which -// is mostly a wrapper around a group -// of FEValues objects. The file -// meshworker/simple.h -// contains classes assembling -// locally integrated data into a -// global system containing only a -// single matrix. 
Finally, we will -// need the file that runs the loop +// Here come the new include files for using the MeshWorker framework. The +// first contains the class MeshWorker::DoFInfo, which provides local +// integrators with a mapping between local and global degrees of freedom. It +// stores the results of local integrals as well in its base class +// Meshworker::LocalResults. In the second of these files, we find an object +// of type MeshWorker::IntegrationInfo, which is mostly a wrapper around a +// group of FEValues objects. The file meshworker/simple.h contains +// classes assembling locally integrated data into a global system containing +// only a single matrix. Finally, we will need the file that runs the loop // over all mesh cells and faces. #include #include #include #include -// Like in all programs, we finish -// this section by including the -// needed C++ headers and declaring -// we want to use objects in the -// dealii namespace without prefix. +// Like in all programs, we finish this section by including the needed C++ +// headers and declaring we want to use objects in the dealii namespace +// without prefix. #include #include @@ -100,12 +72,9 @@ namespace Step12 // @sect3{Equation data} // - // First, we define a class - // describing the inhomogeneous - // boundary data. Since only its - // values are used, we implement - // value_list(), but leave all other - // functions of Function undefined. + // First, we define a class describing the inhomogeneous boundary + // data. Since only its values are used, we implement value_list(), but + // leave all other functions of Function undefined. template class BoundaryValues: public Function { @@ -116,14 +85,11 @@ namespace Step12 const unsigned int component=0) const; }; - // Given the flow direction, the inflow - // boundary of the unit square $[0,1]^2$ are - // the right and the lower boundaries. We - // prescribe discontinuous boundary values 1 - // and 0 on the x-axis and value 0 on the - // right boundary. The values of this - // function on the outflow boundaries will - // not be used within the DG scheme. + // Given the flow direction, the inflow boundary of the unit square + // $[0,1]^2$ are the right and the lower boundaries. We prescribe + // discontinuous boundary values 1 and 0 on the x-axis and value 0 on the + // right boundary. The values of this function on the outflow boundaries + // will not be used within the DG scheme. template void BoundaryValues::value_list(const std::vector > &points, std::vector &values, @@ -142,22 +108,15 @@ namespace Step12 } // @sect3{The AdvectionProblem class} // - // After this preparations, we - // proceed with the main class of - // this program, - // called AdvectionProblem. It is basically - // the main class of step-6. We do - // not have a ConstraintMatrix, - // because there are no hanging node + // After this preparations, we proceed with the main class of this program, + // called AdvectionProblem. It is basically the main class of step-6. We do + // not have a ConstraintMatrix, because there are no hanging node // constraints in DG discretizations. - // Major differences will only come - // up in the implementation of the - // assemble functions, since here, we - // not only need to cover the flux - // integrals over faces, we also use - // the MeshWorker interface to - // simplify the loops involved. 
+ // Major differences will only come up in the implementation of the assemble + // functions, since here, we not only need to cover the flux integrals over + // faces, we also use the MeshWorker interface to simplify the loops + // involved. template class AdvectionProblem { @@ -175,29 +134,18 @@ namespace Step12 Triangulation triangulation; const MappingQ1 mapping; - // Furthermore we want to use DG - // elements of degree 1 (but this - // is only specified in the - // constructor). If you want to - // use a DG method of a different - // degree the whole program stays - // the same, only replace 1 in - // the constructor by the desired - // polynomial degree. + // Furthermore we want to use DG elements of degree 1 (but this is only + // specified in the constructor). If you want to use a DG method of a + // different degree the whole program stays the same, only replace 1 in + // the constructor by the desired polynomial degree. FE_DGQ fe; DoFHandler dof_handler; - // The next four members represent the - // linear system to be - // solved. system_matrix and - // right_hand_side are - // generated by - // assemble_system(), the - // solution is computed in - // solve(). The - // sparsity_pattern is used - // to determine the location of nonzero - // elements in + // The next four members represent the linear system to be + // solved. system_matrix and right_hand_side are + // generated by assemble_system(), the solution + // is computed in solve(). The sparsity_pattern + // is used to determine the location of nonzero elements in // system_matrix. SparsityPattern sparsity_pattern; SparseMatrix system_matrix; @@ -205,54 +153,29 @@ namespace Step12 Vector solution; Vector right_hand_side; - // Finally, we have to provide - // functions that assemble the - // cell, boundary, and inner face - // terms. Within the MeshWorker - // framework, the loop over all - // cells and much of the setup of - // operations will be done - // outside this class, so all we - // have to provide are these - // three operations. They will - // then work on intermediate - // objects for which first, we - // here define typedefs to the - // info objects handed to the - // local integration functions in - // order to make our life easier - // below. + // Finally, we have to provide functions that assemble the cell, boundary, + // and inner face terms. Within the MeshWorker framework, the loop over + // all cells and much of the setup of operations will be done outside this + // class, so all we have to provide are these three operations. They will + // then work on intermediate objects for which first, we here define + // typedefs to the info objects handed to the local integration functions + // in order to make our life easier below. typedef MeshWorker::DoFInfo DoFInfo; typedef MeshWorker::IntegrationInfo CellInfo; - // The following three functions - // are then the ones that get called - // inside the generic loop over all - // cells and faces. They are the - // ones doing the actual - // integration. + // The following three functions are then the ones that get called inside + // the generic loop over all cells and faces. They are the ones doing the + // actual integration. // - // In our code below, these - // functions do not access member - // variables of the current - // class, so we can mark them as - // static and simply - // pass pointers to these - // functions to the MeshWorker - // framework. 
If, however, these - // functions would want to access - // member variables (or needed - // additional arguments beyond - // the ones specified below), we - // could use the facilities of - // boost::bind (or std::bind, - // respectively) to provide the - // MeshWorker framework with - // objects that act as if they - // had the required number and - // types of arguments, but have - // in fact other arguments - // already bound. + // In our code below, these functions do not access member variables of + // the current class, so we can mark them as static and + // simply pass pointers to these functions to the MeshWorker + // framework. If, however, these functions would want to access member + // variables (or needed additional arguments beyond the ones specified + // below), we could use the facilities of boost::bind (or std::bind, + // respectively) to provide the MeshWorker framework with objects that act + // as if they had the required number and types of arguments, but have in + // fact other arguments already bound. static void integrate_cell_term (DoFInfo &dinfo, CellInfo &info); static void integrate_boundary_term (DoFInfo &dinfo, @@ -264,9 +187,8 @@ namespace Step12 }; - // We start with the constructor. The 1 in - // the constructor call of fe is - // the polynomial degree. + // We start with the constructor. The 1 in the constructor call of + // fe is the polynomial degree. template AdvectionProblem::AdvectionProblem () : @@ -279,32 +201,24 @@ namespace Step12 template void AdvectionProblem::setup_system () { - // In the function that sets up the usual - // finite element data structures, we first - // need to distribute the DoFs. + // In the function that sets up the usual finite element data structures, + // we first need to distribute the DoFs. dof_handler.distribute_dofs (fe); - // We start by generating the sparsity - // pattern. To this end, we first fill an - // intermediate object of type - // CompressedSparsityPattern with the - // couplings appearing in the system. After - // building the pattern, this object is - // copied to sparsity_pattern - // and can be discarded. - - // To build the sparsity pattern for DG - // discretizations, we can call the - // function analogue to - // DoFTools::make_sparsity_pattern, which - // is called + // We start by generating the sparsity pattern. To this end, we first fill + // an intermediate object of type CompressedSparsityPattern with the + // couplings appearing in the system. After building the pattern, this + // object is copied to sparsity_pattern and can be discarded. + + // To build the sparsity pattern for DG discretizations, we can call the + // function analogue to DoFTools::make_sparsity_pattern, which is called // DoFTools::make_flux_sparsity_pattern: CompressedSparsityPattern c_sparsity(dof_handler.n_dofs()); DoFTools::make_flux_sparsity_pattern (dof_handler, c_sparsity); sparsity_pattern.copy_from(c_sparsity); - // Finally, we set up the structure - // of all components of the linear system. + // Finally, we set up the structure of all components of the linear + // system. system_matrix.reinit (sparsity_pattern); solution.reinit (dof_handler.n_dofs()); right_hand_side.reinit (dof_handler.n_dofs()); @@ -312,58 +226,36 @@ namespace Step12 // @sect4{The assemble_system function} - // Here we see the major difference to - // assembling by hand. Instead of writing - // loops over cells and faces, we leave all - // this to the MeshWorker framework. 
In order - // to do so, we just have to define local - // integration functions and use one of the - // classes in namespace MeshWorker::Assembler + // Here we see the major difference to assembling by hand. Instead of + // writing loops over cells and faces, we leave all this to the MeshWorker + // framework. In order to do so, we just have to define local integration + // functions and use one of the classes in namespace MeshWorker::Assembler // to build the global system. template void AdvectionProblem::assemble_system () { - // This is the magic object, which - // knows everything about the data - // structures and local - // integration. This is the object - // doing the work in the function - // MeshWorker::loop(), which is - // implicitly called by - // MeshWorker::integration_loop() - // below. After the functions to - // which we provide pointers did - // the local integration, the - // MeshWorker::Assembler::SystemSimple - // object distributes these into - // the global sparse matrix and the - // right hand side vector. + // This is the magic object, which knows everything about the data + // structures and local integration. This is the object doing the work in + // the function MeshWorker::loop(), which is implicitly called by + // MeshWorker::integration_loop() below. After the functions to which we + // provide pointers did the local integration, the + // MeshWorker::Assembler::SystemSimple object distributes these into the + // global sparse matrix and the right hand side vector. MeshWorker::IntegrationInfoBox info_box; - // First, we initialize the - // quadrature formulae and the - // update flags in the worker base - // class. For quadrature, we play - // safe and use a QGauss formula - // with number of points one higher - // than the polynomial degree - // used. Since the quadratures for - // cells, boundary and interior - // faces can be selected - // independently, we have to hand - // over this value three times. + // First, we initialize the quadrature formulae and the update flags in + // the worker base class. For quadrature, we play safe and use a QGauss + // formula with number of points one higher than the polynomial degree + // used. Since the quadratures for cells, boundary and interior faces can + // be selected independently, we have to hand over this value three times. const unsigned int n_gauss_points = dof_handler.get_fe().degree+1; info_box.initialize_gauss_quadrature(n_gauss_points, n_gauss_points, n_gauss_points); - // These are the types of values we - // need for integrating our - // system. They are added to the - // flags used on cells, boundary - // and interior faces, as well as - // interior neighbor faces, which is - // forced by the four @p true + // These are the types of values we need for integrating our system. They + // are added to the flags used on cells, boundary and interior faces, as + // well as interior neighbor faces, which is forced by the four @p true // values. info_box.initialize_update_flags(); UpdateFlags update_flags = update_quadrature_points | @@ -371,51 +263,33 @@ namespace Step12 update_gradients; info_box.add_update_flags(update_flags, true, true, true, true); - // After preparing all data in - // info_box, we initialize - // the FEValues objects in there. + // After preparing all data in info_box, we initialize the + // FEValues objects in there. info_box.initialize(fe, mapping); - // The object created so far helps - // us do the local integration on - // each cell and face. 
Now, we need - // an object which receives the - // integrated (local) data and - // forwards them to the assembler. + // The object created so far helps us do the local integration on each + // cell and face. Now, we need an object which receives the integrated + // (local) data and forwards them to the assembler. MeshWorker::DoFInfo dof_info(dof_handler); - // Now, we have to create the - // assembler object and tell it, - // where to put the local - // data. These will be our system - // matrix and the right hand side. + // Now, we have to create the assembler object and tell it, where to put + // the local data. These will be our system matrix and the right hand + // side. MeshWorker::Assembler::SystemSimple, Vector > assembler; assembler.initialize(system_matrix, right_hand_side); - // Finally, the integration loop - // over all active cells - // (determined by the first - // argument, which is an active - // iterator). + // Finally, the integration loop over all active cells (determined by the + // first argument, which is an active iterator). // - // As noted in the discussion when - // declaring the local integration - // functions in the class - // declaration, the arguments - // expected by the assembling - // integrator class are not - // actually function - // pointers. Rather, they are - // objects that can be called like - // functions with a certain number - // of arguments. Consequently, we - // could also pass objects with - // appropriate operator() - // implementations here, or the - // result of std::bind if the local - // integrators were, for example, - // non-static member functions. + // As noted in the discussion when declaring the local integration + // functions in the class declaration, the arguments expected by the + // assembling integrator class are not actually function pointers. Rather, + // they are objects that can be called like functions with a certain + // number of arguments. Consequently, we could also pass objects with + // appropriate operator() implementations here, or the result of std::bind + // if the local integrators were, for example, non-static member + // functions. MeshWorker::integration_loop (dof_handler.begin_active(), dof_handler.end(), dof_info, info_box, @@ -428,33 +302,23 @@ namespace Step12 // @sect4{The local integrators} - // These are the functions given to - // the MeshWorker::integration_loop() - // called just above. They compute - // the local contributions to the - // system matrix and right hand side - // on cells and faces. + // These are the functions given to the MeshWorker::integration_loop() + // called just above. They compute the local contributions to the system + // matrix and right hand side on cells and faces. template void AdvectionProblem::integrate_cell_term (DoFInfo &dinfo, CellInfo &info) { - // First, let us retrieve some of - // the objects used here from - // @p info. Note that these objects - // can handle much more complex - // structures, thus the access here - // looks more complicated than - // might seem necessary. + // First, let us retrieve some of the objects used here from @p info. Note + // that these objects can handle much more complex structures, thus the + // access here looks more complicated than might seem necessary. const FEValuesBase &fe_v = info.fe_values(); FullMatrix &local_matrix = dinfo.matrix(0).matrix; const std::vector &JxW = fe_v.get_JxW_values (); - // With these objects, we continue - // local integration like - // always. 
First, we loop over the - // quadrature points and compute - // the advection vector in the - // current point. + // With these objects, we continue local integration like always. First, + // we loop over the quadrature points and compute the advection vector in + // the current point. for (unsigned int point=0; point beta; @@ -462,12 +326,8 @@ namespace Step12 beta(1) = fe_v.quadrature_point(point)(0); beta /= beta.norm(); - // We solve a homogeneous - // equation, thus no right - // hand side shows up in - // the cell term. - // What's left is - // integrating the matrix entries. + // We solve a homogeneous equation, thus no right hand side shows up + // in the cell term. What's left is integrating the matrix entries. for (unsigned int i=0; i void AdvectionProblem::integrate_boundary_term (DoFInfo &dinfo, CellInfo &info) @@ -521,55 +379,37 @@ namespace Step12 } } - // Finally, the interior face - // terms. The difference here is that - // we receive two info objects, one - // for each cell adjacent to the face - // and we assemble four matrices, one - // for each cell and two for coupling - // back and forth. + // Finally, the interior face terms. The difference here is that we receive + // two info objects, one for each cell adjacent to the face and we assemble + // four matrices, one for each cell and two for coupling back and forth. template void AdvectionProblem::integrate_face_term (DoFInfo &dinfo1, DoFInfo &dinfo2, CellInfo &info1, CellInfo &info2) { - // For quadrature points, weights, - // etc., we use the - // FEValuesBase object of the - // first argument. + // For quadrature points, weights, etc., we use the FEValuesBase object of + // the first argument. const FEValuesBase &fe_v = info1.fe_values(); - // For additional shape functions, - // we have to ask the neighbors + // For additional shape functions, we have to ask the neighbors // FEValuesBase. const FEValuesBase &fe_v_neighbor = info2.fe_values(); - // Then we get references to the - // four local matrices. The letters - // u and v refer to trial and test - // functions, respectively. The - // %numbers indicate the cells - // provided by info1 and info2. By - // convention, the two matrices in - // each info object refer to the - // test functions on the respective - // cell. The first matrix contains the - // interior couplings of that cell, - // while the second contains the - // couplings between cells. + // Then we get references to the four local matrices. The letters u and v + // refer to trial and test functions, respectively. The %numbers indicate + // the cells provided by info1 and info2. By convention, the two matrices + // in each info object refer to the test functions on the respective + // cell. The first matrix contains the interior couplings of that cell, + // while the second contains the couplings between cells. FullMatrix &u1_v1_matrix = dinfo1.matrix(0,false).matrix; FullMatrix &u2_v1_matrix = dinfo1.matrix(0,true).matrix; FullMatrix &u1_v2_matrix = dinfo2.matrix(0,true).matrix; FullMatrix &u2_v2_matrix = dinfo2.matrix(0,false).matrix; - // Here, following the previous - // functions, we would have the - // local right hand side - // vectors. Fortunately, the - // interface terms only involve the - // solution and the right hand side - // does not receive any contributions. + // Here, following the previous functions, we would have the local right + // hand side vectors. Fortunately, the interface terms only involve the + // solution and the right hand side does not receive any contributions. 
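Written out schematically (with $n$ the normal oriented from the first towards the second cell, and $u^\uparrow$ the upwind value, i.e. the trace taken from the first cell if $\beta\cdot n>0$ and from the second cell otherwise), the face contributes $\int_F (\beta\cdot n)\,u^\uparrow v_1\,ds$ to the equations tested with $v_1$ and, because the outward normal of the second cell is $-n$, the same integral with opposite sign to the equations tested with $v_2$. Since $u^\uparrow$ may involve the trial functions of either cell, all four matrices above receive contributions, which is why all four are needed.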
const std::vector &JxW = fe_v.get_JxW_values (); const std::vector > &normals = fe_v.get_normal_vectors (); @@ -584,8 +424,7 @@ namespace Step12 const double beta_n=beta * normals[point]; if (beta_n>0) { - // This term we've already - // seen: + // This term we've already seen: for (unsigned int i=0; i void AdvectionProblem::solve (Vector &solution) { SolverControl solver_control (1000, 1e-12); SolverRichardson<> solver (solver_control); - // Here we create the - // preconditioner, + // Here we create the preconditioner, PreconditionBlockSSOR > preconditioner; - // then assign the matrix to it and - // set the right block size: + // then assign the matrix to it and set the right block size: preconditioner.initialize(system_matrix, fe.dofs_per_cell); - // After these preparations we are - // ready to start the linear solver. + // After these preparations we are ready to start the linear solver. solver.solve (system_matrix, solution, right_hand_side, preconditioner); } - // We refine the grid according to a - // very simple refinement criterion, - // namely an approximation to the - // gradient of the solution. As here - // we consider the DG(1) method - // (i.e. we use piecewise bilinear - // shape functions) we could simply - // compute the gradients on each - // cell. But we do not want to base - // our refinement indicator on the - // gradients on each cell only, but - // want to base them also on jumps of - // the discontinuous solution - // function over faces between - // neighboring cells. The simplest - // way of doing that is to compute - // approximative gradients by - // difference quotients including the - // cell under consideration and its - // neighbors. This is done by the - // DerivativeApproximation class - // that computes the approximate - // gradients in a way similar to the - // GradientEstimation described - // in step-9 of this tutorial. In - // fact, the - // DerivativeApproximation class - // was developed following the - // GradientEstimation class of - // step-9. Relating to the - // discussion in step-9, here we - // consider $h^{1+d/2}|\nabla_h - // u_h|$. Furthermore we note that we - // do not consider approximate second - // derivatives because solutions to - // the linear advection equation are - // in general not in $H^2$ but in $H^1$ - // (to be more precise, in $H^1_\beta$) + // We refine the grid according to a very simple refinement criterion, + // namely an approximation to the gradient of the solution. As here we + // consider the DG(1) method (i.e. we use piecewise bilinear shape + // functions) we could simply compute the gradients on each cell. But we do + // not want to base our refinement indicator on the gradients on each cell + // only, but want to base them also on jumps of the discontinuous solution + // function over faces between neighboring cells. The simplest way of doing + // that is to compute approximative gradients by difference quotients + // including the cell under consideration and its neighbors. This is done by + // the DerivativeApproximation class that computes the + // approximate gradients in a way similar to the + // GradientEstimation described in step-9 of this tutorial. In + // fact, the DerivativeApproximation class was developed + // following the GradientEstimation class of step-9. Relating + // to the discussion in step-9, here we consider $h^{1+d/2}|\nabla_h + // u_h|$. 
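For the two-dimensional computations of this program the exponent evaluates to $1+d/2=2$, i.e. each cell's gradient value is multiplied by $h_K^2$; this is exactly what the scaling by std::pow(cell->diameter(), 1+1.0*dim/2) further down computes.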
Furthermore we note that we do not consider approximate second + // derivatives because solutions to the linear advection equation are in + // general not in $H^2$ but in $H^1$ (to be more precise, in $H^1_\beta$) // only. template void AdvectionProblem::refine_grid () { - // The DerivativeApproximation - // class computes the gradients to - // float precision. This is - // sufficient as they are - // approximate and serve as - // refinement indicators only. + // The DerivativeApproximation class computes the gradients + // to float precision. This is sufficient as they are approximate and + // serve as refinement indicators only. Vector gradient_indicator (triangulation.n_active_cells()); - // Now the approximate gradients - // are computed + // Now the approximate gradients are computed DerivativeApproximation::approximate_gradient (mapping, dof_handler, solution, gradient_indicator); - // and they are cell-wise scaled by - // the factor $h^{1+d/2}$ + // and they are cell-wise scaled by the factor $h^{1+d/2}$ typename DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); for (unsigned int cell_no=0; cell!=endc; ++cell, ++cell_no) gradient_indicator(cell_no)*=std::pow(cell->diameter(), 1+1.0*dim/2); - // Finally they serve as refinement - // indicator. + // Finally they serve as refinement indicator. GridRefinement::refine_and_coarsen_fixed_number (triangulation, gradient_indicator, 0.3, 0.1); @@ -745,13 +543,9 @@ namespace Step12 } - // The output of this program - // consists of eps-files of the - // adaptively refined grids and the - // numerical solutions given in - // gnuplot format. This was covered - // in previous examples and will not - // be further commented on. + // The output of this program consists of eps-files of the adaptively + // refined grids and the numerical solutions given in gnuplot format. This + // was covered in previous examples and will not be further commented on. template void AdvectionProblem::output_results (const unsigned int cycle) const { @@ -767,8 +561,7 @@ namespace Step12 GridOut grid_out; grid_out.write_eps (triangulation, eps_output); - // Output of the solution in - // gnuplot format. + // Output of the solution in gnuplot format. filename = "sol-"; filename += ('0' + cycle); Assert (cycle < 10, ExcInternalError()); @@ -787,8 +580,7 @@ namespace Step12 } - // The following run function is - // similar to previous examples. + // The following run function is similar to previous examples. template void AdvectionProblem::run () { @@ -825,9 +617,8 @@ namespace Step12 } -// The following main function is -// similar to previous examples as well, and -// need not be commented on. +// The following main function is similar to previous examples as +// well, and need not be commented on. int main () { try @@ -861,5 +652,3 @@ int main () return 0; } - - diff --git a/deal.II/examples/step-13/step-13.cc b/deal.II/examples/step-13/step-13.cc index 8e527273d0..5d3ef167e3 100644 --- a/deal.II/examples/step-13/step-13.cc +++ b/deal.II/examples/step-13/step-13.cc @@ -10,15 +10,11 @@ /* further information on this license. 
*/ -// As in all programs, we start with -// a list of include files from the -// library, and as usual they are in -// the standard order which is -// base -- lac -- grid -- -// dofs -- fe -- numerics -// (as each of these categories -// roughly builds upon previous -// ones), then C++ standard headers: +// As in all programs, we start with a list of include files from the library, +// and as usual they are in the standard order which is base -- +// lac -- grid -- dofs -- +// fe -- numerics (as each of these categories +// roughly builds upon previous ones), then C++ standard headers: #include #include #include @@ -51,120 +47,66 @@ #include #include -// The last step is as in all -// previous programs: +// The last step is as in all previous programs: namespace Step13 { using namespace dealii; // @sect3{Evaluation of the solution} - // As for the program itself, we - // first define classes that evaluate - // the solutions of a Laplace - // equation. In fact, they can - // evaluate every kind of solution, - // as long as it is described by a - // DoFHandler object, and a - // solution vector. We define them - // here first, even before the - // classes that actually generate the - // solution to be evaluated, since we - // need to declare an abstract base - // class that the solver classes can - // refer to. + // As for the program itself, we first define classes that evaluate the + // solutions of a Laplace equation. In fact, they can evaluate every kind of + // solution, as long as it is described by a DoFHandler object, + // and a solution vector. We define them here first, even before the classes + // that actually generate the solution to be evaluated, since we need to + // declare an abstract base class that the solver classes can refer to. // - // From an abstract point of view, we - // declare a pure base class - // that provides an evaluation - // operator() which will - // do the evaluation of the solution - // (whatever derived classes might - // consider an evaluation). Since - // this is the only real function of - // this base class (except for some - // bookkeeping machinery), one - // usually terms such a class that - // only has an operator() a - // functor in C++ terminology, - // since it is used just like a + // From an abstract point of view, we declare a pure base class that + // provides an evaluation operator() which will do the evaluation of the + // solution (whatever derived classes might consider an + // evaluation). Since this is the only real function of this + // base class (except for some bookkeeping machinery), one usually terms + // such a class that only has an operator() a + // functor in C++ terminology, since it is used just like a // function object. // - // Objects of this functor type will - // then later be passed to the solver - // object, which applies it to the - // solution just computed. The - // evaluation objects may then - // extract any quantity they like - // from the solution. The advantage - // of putting these evaluation - // functions into a separate - // hierarchy of classes is that by - // design they cannot use the - // internals of the solver object and - // are therefore independent of - // changes to the way the solver - // works. 
Furthermore, it is trivial - // to write another evaluation class - // without modifying the solver - // class, which speeds up programming - // (not being able to use internals - // of another class also means that - // you do not have to worry about - // them -- programming evaluators is - // usually a rather quickly done - // task), as well as compilation (if - // solver and evaluation classes are - // put into different files: the - // solver only needs to see the - // declaration of the abstract base - // class, and therefore does not need - // to be recompiled upon addition of - // a new evaluation class, or - // modification of an old one). - // On a related note, you can reuse - // the evaluation classes for other - // projects, solving different - // equations. + // Objects of this functor type will then later be passed to the solver + // object, which applies it to the solution just computed. The evaluation + // objects may then extract any quantity they like from the solution. The + // advantage of putting these evaluation functions into a separate hierarchy + // of classes is that by design they cannot use the internals of the solver + // object and are therefore independent of changes to the way the solver + // works. Furthermore, it is trivial to write another evaluation class + // without modifying the solver class, which speeds up programming (not + // being able to use internals of another class also means that you do not + // have to worry about them -- programming evaluators is usually a rather + // quickly done task), as well as compilation (if solver and evaluation + // classes are put into different files: the solver only needs to see the + // declaration of the abstract base class, and therefore does not need to be + // recompiled upon addition of a new evaluation class, or modification of an + // old one). On a related note, you can reuse the evaluation classes for + // other projects, solving different equations. // - // In order to improve separation of - // code into different modules, we - // put the evaluation classes into a - // namespace of their own. This makes - // it easier to actually solve - // different equations in the same - // program, by assembling it from - // existing building blocks. The - // reason for this is that classes - // for similar purposes tend to have - // the same name, although they were - // developed in different - // contexts. In order to be able to - // use them together in one program, - // it is necessary that they are - // placed in different + // In order to improve separation of code into different modules, we put the + // evaluation classes into a namespace of their own. This makes it easier to + // actually solve different equations in the same program, by assembling it + // from existing building blocks. The reason for this is that classes for + // similar purposes tend to have the same name, although they were developed + // in different contexts. In order to be able to use them together in one + // program, it is necessary that they are placed in different // namespaces. This we do here: namespace Evaluation { - // Now for the abstract base class - // of evaluation classes: its main - // purpose is to declare a pure - // virtual function operator() - // taking a DoFHandler object, - // and the solution vector. In - // order to be able to use pointers - // to this base class only, it also - // has to declare a virtual - // destructor, which however does - // nothing. 
Besides this, it only - // provides for a little bit of - // bookkeeping: since we usually - // want to evaluate solutions on - // subsequent refinement levels, we - // store the number of the present - // refinement cycle, and provide a - // function to change this number. + // Now for the abstract base class of evaluation classes: its main purpose + // is to declare a pure virtual function operator() taking a + // DoFHandler object, and the solution vector. In order to be + // able to use pointers to this base class only, it also has to declare a + // virtual destructor, which however does nothing. Besides this, it only + // provides for a little bit of bookkeeping: since we usually want to + // evaluate solutions on subsequent refinement levels, we store the number + // of the present refinement cycle, and provide a function to change this + // number. template class EvaluationBase { @@ -180,10 +122,8 @@ namespace Step13 }; - // After the declaration has been - // discussed above, the - // implementation is rather - // straightforward: + // After the declaration has been discussed above, the implementation is + // rather straightforward: template EvaluationBase::~EvaluationBase () {} @@ -200,55 +140,30 @@ namespace Step13 // @sect4{%Point evaluation} - // The next thing is to implement - // actual evaluation classes. As - // noted in the introduction, we'd - // like to extract a point value - // from the solution, so the first - // class does this in its - // operator(). The actual point - // is given to this class through - // the constructor, as well as a - // table object into which it will - // put its findings. + // The next thing is to implement actual evaluation classes. As noted in + // the introduction, we'd like to extract a point value from the solution, + // so the first class does this in its operator(). The actual + // point is given to this class through the constructor, as well as a + // table object into which it will put its findings. // - // Finding out the value of a - // finite element field at an - // arbitrary point is rather - // difficult, if we cannot rely on - // knowing the actual finite - // element used, since then we - // cannot, for example, interpolate - // between nodes. For simplicity, - // we therefore assume here that - // the point at which we want to - // evaluate the field is actually a - // node. If, in the process of - // evaluating the solution, we find - // that we did not encounter this - // point upon looping over all - // vertices, we then have to throw - // an exception in order to signal - // to the calling functions that - // something has gone wrong, rather - // than silently ignore this error. + // Finding out the value of a finite element field at an arbitrary point + // is rather difficult, if we cannot rely on knowing the actual finite + // element used, since then we cannot, for example, interpolate between + // nodes. For simplicity, we therefore assume here that the point at which + // we want to evaluate the field is actually a node. If, in the process of + // evaluating the solution, we find that we did not encounter this point + // upon looping over all vertices, we then have to throw an exception in + // order to signal to the calling functions that something has gone wrong, + // rather than silently ignore this error. // - // In the step-9 example program, - // we have already seen how such an - // exception class can be declared, - // using the DeclExceptionN - // macros. We use this mechanism - // here again. 
+ // In the step-9 example program, we have already seen how such an + // exception class can be declared, using the DeclExceptionN + // macros. We use this mechanism here again. // - // From this, the actual - // declaration of this class should - // be evident. Note that of course - // even if we do not list a - // destructor explicitely, an - // implicit destructor is generated - // from the compiler, and it is - // virtual just as the one of the - // base class. + // From this, the actual declaration of this class should be evident. Note + // that of course even if we do not list a destructor explicitely, an + // implicit destructor is generated from the compiler, and it is virtual + // just as the one of the base class. template class PointValueEvaluation : public EvaluationBase { @@ -269,10 +184,8 @@ namespace Step13 }; - // As for the definition, the - // constructor is trivial, just - // taking data and storing it in - // object-local ones: + // As for the definition, the constructor is trivial, just taking data and + // storing it in object-local ones: template PointValueEvaluation:: PointValueEvaluation (const Point &evaluation_point, @@ -284,40 +197,25 @@ namespace Step13 - // Now for the function that is - // mainly of interest in this - // class, the computation of the - // point value: + // Now for the function that is mainly of interest in this class, the + // computation of the point value: template void PointValueEvaluation:: operator () (const DoFHandler &dof_handler, const Vector &solution) const { - // First allocate a variable that - // will hold the point - // value. Initialize it with a - // value that is clearly bogus, - // so that if we fail to set it - // to a reasonable value, we will - // note at once. This may not be - // necessary in a function as - // small as this one, since we - // can easily see all possible - // paths of execution here, but - // it proved to be helpful for - // more complex cases, and so we - // employ this strategy here as - // well. + // First allocate a variable that will hold the point value. Initialize + // it with a value that is clearly bogus, so that if we fail to set it + // to a reasonable value, we will note at once. This may not be + // necessary in a function as small as this one, since we can easily see + // all possible paths of execution here, but it proved to be helpful for + // more complex cases, and so we employ this strategy here as well. double point_value = 1e20; - // Then loop over all cells and - // all their vertices, and check - // whether a vertex matches the - // evaluation point. If this is - // the case, then extract the - // point value, set a flag that - // we have found the point of + // Then loop over all cells and all their vertices, and check whether a + // vertex matches the evaluation point. If this is the case, then + // extract the point value, set a flag that we have found the point of // interest, and exit the loop. 
typename DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), @@ -329,172 +227,79 @@ namespace Step13 ++vertex) if (cell->vertex(vertex) == evaluation_point) { - // In order to extract - // the point value from - // the global solution - // vector, pick that - // component that belongs - // to the vertex of - // interest, and, in case - // the solution is - // vector-valued, take - // the first component of - // it: + // In order to extract the point value from the global solution + // vector, pick that component that belongs to the vertex of + // interest, and, in case the solution is vector-valued, take + // the first component of it: point_value = solution(cell->vertex_dof_index(vertex,0)); - // Note that by this we - // have made an - // assumption that is not - // valid always and - // should be documented - // in the class - // declaration if this - // were code for a real - // application rather - // than a tutorial - // program: we assume - // that the finite - // element used for the - // solution we try to - // evaluate actually has - // degrees of freedom - // associated with - // vertices. This, for - // example, does not hold - // for discontinuous - // elements, were the - // support points for the - // shape functions - // happen to be located - // at the vertices, but - // are not associated - // with the vertices but - // rather with the cell - // interior, since - // association with - // vertices would imply - // continuity there. It - // would also not hold - // for edge oriented - // elements, and the - // like. + // Note that by this we have made an assumption that is not + // valid always and should be documented in the class + // declaration if this were code for a real application rather + // than a tutorial program: we assume that the finite element + // used for the solution we try to evaluate actually has degrees + // of freedom associated with vertices. This, for example, does + // not hold for discontinuous elements, were the support points + // for the shape functions happen to be located at the vertices, + // but are not associated with the vertices but rather with the + // cell interior, since association with vertices would imply + // continuity there. It would also not hold for edge oriented + // elements, and the like. // - // Ideally, we would - // check this at the - // beginning of the - // function, for example - // by a statement like - // Assert - // (dof_handler.get_fe().dofs_per_vertex - // @> 0, - // ExcNotImplemented()), - // which should make it - // quite clear what is - // going wrong when the - // exception is - // triggered. In this - // case, we omit it - // (which is indeed bad - // style), but knowing - // that that does not - // hurt here, since the - // statement - // cell-@>vertex_dof_index(vertex,0) - // would fail if we asked - // it to give us the DoF - // index of a vertex if - // there were none. + // Ideally, we would check this at the beginning of the + // function, for example by a statement like Assert + // (dof_handler.get_fe().dofs_per_vertex @> 0, + // ExcNotImplemented()), which should make it quite clear + // what is going wrong when the exception is triggered. In this + // case, we omit it (which is indeed bad style), but knowing + // that that does not hurt here, since the statement + // cell-@>vertex_dof_index(vertex,0) would fail if + // we asked it to give us the DoF index of a vertex if there + // were none. 
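The guard suggested in this comment can be sketched as a small free function; it assumes deal.II's usual DoFHandler, Vector and Point classes, and the function itself is illustrative rather than part of the program:

#include <deal.II/base/exceptions.h>
#include <deal.II/base/geometry_info.h>
#include <deal.II/base/point.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/vector.h>

// Sketch of the guarded vertex lookup: refuse to run for finite elements
// without vertex degrees of freedom, then return the value attached to the
// matching vertex.
template <int dim>
double vertex_value (const dealii::DoFHandler<dim> &dof_handler,
                     const dealii::Vector<double>  &solution,
                     const dealii::Point<dim>      &evaluation_point)
{
  Assert (dof_handler.get_fe().dofs_per_vertex > 0,
          dealii::ExcNotImplemented());

  typename dealii::DoFHandler<dim>::active_cell_iterator
    cell = dof_handler.begin_active();
  for (; cell != dof_handler.end(); ++cell)
    for (unsigned int v=0; v<dealii::GeometryInfo<dim>::vertices_per_cell; ++v)
      if (cell->vertex(v) == evaluation_point)
        return solution(cell->vertex_dof_index(v,0));

  AssertThrow (false,
               dealii::ExcMessage ("Evaluation point not found among the "
                                   "vertices of the present grid."));
  return 0;
}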
// - // We stress again that - // this restriction on - // the allowed finite - // elements should be - // stated in the class - // documentation. - - // Since we found the - // right point, we now - // set the respective - // flag and exit the - // innermost loop. The - // outer loop will the - // also be terminated due - // to the set flag. + // We stress again that this restriction on the allowed finite + // elements should be stated in the class documentation. + + // Since we found the right point, we now set the respective + // flag and exit the innermost loop. The outer loop will the + // also be terminated due to the set flag. evaluation_point_found = true; break; }; - // Finally, we'd like to make - // sure that we have indeed found - // the evaluation point, since if - // that were not so we could not - // give a reasonable value of the - // solution there and the rest of - // the computations were useless - // anyway. So make sure through - // the AssertThrow macro - // already used in the step-9 - // program that we have indeed - // found this point. If this is - // not so, the macro throws an - // exception of the type that is - // given to it as second - // argument, but compared to a - // straightforward throw - // statement, it fills the - // exception object with a set of - // additional information, for - // example the source file and - // line number where the - // exception was generated, and - // the condition that failed. If - // you have a catch clause in - // your main function (as this - // program has), you will catch - // all exceptions that are not - // caught somewhere in between - // and thus already handled, and - // this additional information - // will help you find out what - // happened and where it went - // wrong. + // Finally, we'd like to make sure that we have indeed found the + // evaluation point, since if that were not so we could not give a + // reasonable value of the solution there and the rest of the + // computations were useless anyway. So make sure through the + // AssertThrow macro already used in the step-9 program + // that we have indeed found this point. If this is not so, the macro + // throws an exception of the type that is given to it as second + // argument, but compared to a straightforward throw + // statement, it fills the exception object with a set of additional + // information, for example the source file and line number where the + // exception was generated, and the condition that failed. If you have a + // catch clause in your main function (as this program + // has), you will catch all exceptions that are not caught somewhere in + // between and thus already handled, and this additional information + // will help you find out what happened and where it went wrong. AssertThrow (evaluation_point_found, ExcEvaluationPointNotFound(evaluation_point)); - // Note that we have used the - // Assert macro in other - // example programs as well. It - // differed from the - // AssertThrow macro used - // here in that it simply aborts - // the program, rather than - // throwing an exception, and - // that it did so only in debug - // mode. It was the right macro - // to use to check about the size - // of vectors passed as arguments + // Note that we have used the Assert macro in other example + // programs as well. It differed from the AssertThrow macro + // used here in that it simply aborts the program, rather than throwing + // an exception, and that it did so only in debug mode. 
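A compact, hedged illustration of the difference between the two macros (the size check is a made-up example): Assert vanishes in optimized mode and aborts the program in debug mode, while AssertThrow is always evaluated and throws an exception that a catch clause, for example in main(), can report.

#include <deal.II/base/exceptions.h>
#include <deal.II/lac/vector.h>

void check_size (const dealii::Vector<double> &rhs,
                 const unsigned int            expected_size)
{
  // Debug-mode-only check: aborts with a detailed message if it fails.
  Assert (rhs.size() == expected_size,
          dealii::ExcDimensionMismatch (rhs.size(), expected_size));

  // Always-on check: throws an exception carrying the same information.
  AssertThrow (rhs.size() == expected_size,
               dealii::ExcDimensionMismatch (rhs.size(), expected_size));
}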
It was the right + // macro to use to check about the size of vectors passed as arguments // to functions, and the like. // - // However, here the situation is - // different: whether we find the - // evaluation point or not may - // change from refinement to - // refinement (for example, if - // the four cells around point - // are coarsened away, then the - // point may vanish after - // refinement and - // coarsening). This is something - // that cannot be predicted from - // a few number of runs of the - // program in debug mode, but - // should be checked always, also - // in production runs. Thus the - // use of the AssertThrow - // macro here. - - // Now, if we are sure that we - // have found the evaluation - // point, we can add the results - // into the table of results: + // However, here the situation is different: whether we find the + // evaluation point or not may change from refinement to refinement (for + // example, if the four cells around point are coarsened away, then the + // point may vanish after refinement and coarsening). This is something + // that cannot be predicted from a few number of runs of the program in + // debug mode, but should be checked always, also in production + // runs. Thus the use of the AssertThrow macro here. + + // Now, if we are sure that we have found the evaluation point, we can + // add the results into the table of results: results_table.add_value ("DoFs", dof_handler.n_dofs()); results_table.add_value ("u(x_0)", point_value); } @@ -504,109 +309,55 @@ namespace Step13 // @sect4{Generating output} - // A different, maybe slightly odd - // kind of evaluation of a - // solution is to output it to a - // file in a graphical - // format. Since in the evaluation - // functions we are given a - // DoFHandler object and the - // solution vector, we have all we - // need to do this, so we can do it - // in an evaluation class. The - // reason for actually doing so - // instead of putting it into the - // class that computed the solution - // is that this way we have more - // flexibility: if we choose to - // only output certain aspects of - // it, or not output it at all. In - // any case, we do not need to - // modify the solver class, we just - // have to modify one of the - // modules out of which we build - // this program. This form of - // encapsulation, as above, helps - // us to keep each part of the - // program rather simple as the - // interfaces are kept simple, and - // no access to hidden data is - // possible. + // A different, maybe slightly odd kind of evaluation of a + // solution is to output it to a file in a graphical format. Since in the + // evaluation functions we are given a DoFHandler object and + // the solution vector, we have all we need to do this, so we can do it in + // an evaluation class. The reason for actually doing so instead of + // putting it into the class that computed the solution is that this way + // we have more flexibility: if we choose to only output certain aspects + // of it, or not output it at all. In any case, we do not need to modify + // the solver class, we just have to modify one of the modules out of + // which we build this program. This form of encapsulation, as above, + // helps us to keep each part of the program rather simple as the + // interfaces are kept simple, and no access to hidden data is possible. // - // Since this class which generates - // the output is derived from the - // common EvaluationBase base - // class, its main interface is the - // operator() - // function. 
Furthermore, it has a - // constructor taking a string that - // will be used as the base part of - // the file name to which output - // will be sent (we will augment it - // by a number indicating the - // number of the refinement cycle - // -- the base class has this - // information at hand --, and a - // suffix), and the constructor - // also takes a value that - // indicates which format is - // requested, i.e. for which - // graphics program we shall - // generate output (from this we - // will then also generate the - // suffix of the filename to which - // we write). + // Since this class which generates the output is derived from the common + // EvaluationBase base class, its main interface is the + // operator() function. Furthermore, it has a constructor + // taking a string that will be used as the base part of the file name to + // which output will be sent (we will augment it by a number indicating + // the number of the refinement cycle -- the base class has this + // information at hand --, and a suffix), and the constructor also takes a + // value that indicates which format is requested, i.e. for which graphics + // program we shall generate output (from this we will then also generate + // the suffix of the filename to which we write). // - // Regarding the output format, the - // DataOutInterface class - // (which is a base class of - // DataOut through which we - // will access its fields) provides - // an enumeration field - // OutputFormat, which lists - // names for all supported output - // formats. At the time of writing - // of this program, the supported - // graphics formats are represented - // by the enum values ucd, - // gnuplot, povray, - // eps, gmv, tecplot, - // tecplot_binary, dx, and - // vtk, but this list will - // certainly grow over time. Now, - // within various functions of that - // base class, you can use values - // of this type to get information - // about these graphics formats - // (for example the default suffix - // used for files of each format), - // and you can call a generic - // write function, which then - // branches to the - // write_gnuplot, - // write_ucd, etc functions - // which we have used in previous - // examples already, based on the - // value of a second argument given - // to it denoting the required - // output format. This mechanism - // makes it simple to write an - // extensible program that can - // decide which output format to - // use at runtime, and it also - // makes it rather simple to write - // the program in a way such that - // it takes advantage of newly - // implemented output formats, - // without the need to change the - // application program. + // Regarding the output format, the DataOutInterface class + // (which is a base class of DataOut through which we will + // access its fields) provides an enumeration field + // OutputFormat, which lists names for all supported output + // formats. At the time of writing of this program, the supported graphics + // formats are represented by the enum values ucd, + // gnuplot, povray, eps, + // gmv, tecplot, tecplot_binary, + // dx, and vtk, but this list will certainly + // grow over time. 
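As a rough sketch of how such a format value is put to work, using the generic write mechanism that the next sentences describe (the file name stem is a placeholder and the DataOut object is assumed to be filled already):

#include <deal.II/numerics/data_out.h>
#include <fstream>
#include <string>

// Write an already assembled DataOut object in a format that is only known
// at run time; default_suffix() supplies ".eps", ".gnuplot", and so on.
template <int dim>
void write_in_runtime_format (const dealii::DataOut<dim>              &data_out,
                              const dealii::DataOutBase::OutputFormat  format)
{
  const std::string filename = "solution-1" +
                               data_out.default_suffix (format);
  std::ofstream out (filename.c_str());
  data_out.write (out, format);   // dispatches to write_eps, write_gnuplot, ...
}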
Now, within various functions of that base class, you + // can use values of this type to get information about these graphics + // formats (for example the default suffix used for files of each format), + // and you can call a generic write function, which then + // branches to the write_gnuplot, write_ucd, etc + // functions which we have used in previous examples already, based on the + // value of a second argument given to it denoting the required output + // format. This mechanism makes it simple to write an extensible program + // that can decide which output format to use at runtime, and it also + // makes it rather simple to write the program in a way such that it takes + // advantage of newly implemented output formats, without the need to + // change the application program. // - // Of these two fields, the base - // name and the output format - // descriptor, the constructor - // takes values and stores them for - // later use by the actual - // evaluation function. + // Of these two fields, the base name and the output format descriptor, + // the constructor takes values and stores them for later use by the + // actual evaluation function. template class SolutionOutput : public EvaluationBase { @@ -632,37 +383,22 @@ namespace Step13 {} - // After the description above, the - // function generating the actual - // output is now relatively - // straightforward. The only - // particularly interesting feature - // over previous example programs - // is the use of the - // DataOut::default_suffix - // function, returning the usual - // suffix for files of a given - // format (e.g. ".eps" for - // encapsulated postscript files, - // ".gnuplot" for Gnuplot files), - // and of the generic - // DataOut::write function with - // a second argument, which - // branches to the actual output - // functions for the different - // graphics formats, based on the - // value of the format descriptor - // passed as second argument. + // After the description above, the function generating the actual output + // is now relatively straightforward. The only particularly interesting + // feature over previous example programs is the use of the + // DataOut::default_suffix function, returning the usual + // suffix for files of a given format (e.g. ".eps" for encapsulated + // postscript files, ".gnuplot" for Gnuplot files), and of the generic + // DataOut::write function with a second argument, which + // branches to the actual output functions for the different graphics + // formats, based on the value of the format descriptor passed as second + // argument. // - // Also note that we have to prefix - // this-@> to access a member - // variable of the template - // dependent base class. The reason - // here, and further down in the - // program is the same as the one - // described in the step-7 example - // program (look for two-stage - // name lookup there). + // Also note that we have to prefix this-@> to access a + // member variable of the template dependent base class. The reason here, + // and further down in the program is the same as the one described in the + // step-7 example program (look for two-stage name lookup + // there). template void SolutionOutput::operator () (const DoFHandler &dof_handler, @@ -687,158 +423,83 @@ namespace Step13 // @sect4{Other evaluations} - // In practical applications, one - // would add here a list of other - // possible evaluation classes, - // representing quantities that one - // may be interested in. 
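For instance, a further evaluation class in this spirit might record the mean of the nodal values; the following is purely hypothetical code, not part of the tutorial, and omits the EvaluationBase base class to stay self-contained:

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/vector.h>
#include <iostream>

// Hypothetical further evaluator: report the mean of the nodal values. A
// real version would derive from EvaluationBase like PointValueEvaluation.
template <int dim>
class MeanNodalValueEvaluation
{
public:
  void operator() (const dealii::DoFHandler<dim> &dof_handler,
                   const dealii::Vector<double>  &solution) const
  {
    std::cout << "DoFs: " << dof_handler.n_dofs()
              << "   mean nodal value: " << solution.mean_value()
              << std::endl;
  }
};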
For this - // example, that much shall be - // sufficient, so we close the + // In practical applications, one would add here a list of other possible + // evaluation classes, representing quantities that one may be interested + // in. For this example, that much shall be sufficient, so we close the // namespace. } // @sect3{The Laplace solver classes} - // After defining what we want to - // know of the solution, we should - // now care how to get at it. We will - // pack everything we need into a - // namespace of its own, for much the - // same reasons as for the - // evaluations above. + // After defining what we want to know of the solution, we should now care + // how to get at it. We will pack everything we need into a namespace of its + // own, for much the same reasons as for the evaluations above. // - // Since we have discussed Laplace - // solvers already in considerable - // detail in previous examples, there - // is not much new stuff - // following. Rather, we have to a - // great extent cannibalized previous - // examples and put them, in slightly - // different form, into this example - // program. We will therefore mostly - // be concerned with discussing the - // differences to previous examples. + // Since we have discussed Laplace solvers already in considerable detail in + // previous examples, there is not much new stuff following. Rather, we have + // to a great extent cannibalized previous examples and put them, in + // slightly different form, into this example program. We will therefore + // mostly be concerned with discussing the differences to previous examples. // - // Basically, as already said in the - // introduction, the lack of new - // stuff in this example is - // deliberate, as it is more to - // demonstrate software design - // practices, rather than - // mathematics. The emphasis in - // explanations below will therefore - // be more on the actual - // implementation. + // Basically, as already said in the introduction, the lack of new stuff in + // this example is deliberate, as it is more to demonstrate software design + // practices, rather than mathematics. The emphasis in explanations below + // will therefore be more on the actual implementation. namespace LaplaceSolver { // @sect4{An abstract base class} - // In defining a Laplace solver, we - // start out by declaring an - // abstract base class, that has no - // functionality itself except for - // taking and storing a pointer to - // the triangulation to be used - // later. + // In defining a Laplace solver, we start out by declaring an abstract + // base class, that has no functionality itself except for taking and + // storing a pointer to the triangulation to be used later. // - // This base class is very general, - // and could as well be used for - // any other stationary problem. It - // provides declarations of - // functions that shall, in derived - // classes, solve a problem, - // postprocess the solution with a - // list of evaluation objects, and - // refine the grid, - // respectively. None of these - // functions actually does - // something itself in the base - // class. + // This base class is very general, and could as well be used for any + // other stationary problem. It provides declarations of functions that + // shall, in derived classes, solve a problem, postprocess the solution + // with a list of evaluation objects, and refine the grid, + // respectively. None of these functions actually does something itself in + // the base class. 
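To make the intended division of labor concrete, a driver would only ever talk to such an interface, roughly as in this sketch (all names and the stopping criterion are invented for illustration):

// Minimal stand-in for the abstract interface described above; the real
// classes also take a Triangulation, evaluation objects, and so on.
class StationarySolverBase
{
public:
  virtual ~StationarySolverBase () {}
  virtual void solve_problem () = 0;
  virtual void postprocess () const = 0;
  virtual void refine_grid () = 0;
  virtual unsigned int n_dofs () const = 0;
};

// A driver needs nothing but this interface: solve, evaluate, refine, and
// repeat until some stopping criterion (here an arbitrary DoF limit) holds.
void run_simulation (StationarySolverBase &solver)
{
  while (true)
    {
      solver.solve_problem ();
      solver.postprocess ();
      if (solver.n_dofs () > 10000)
        break;
      solver.refine_grid ();
    }
}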
// - // Due to the lack of actual - // functionality, the programming - // style of declaring very abstract - // base classes reminds of the - // style used in Smalltalk or Java - // programs, where all classes are - // derived from entirely abstract - // classes Object, even number - // representations. The author - // admits that he does not - // particularly like the use of - // such a style in C++, as it puts - // style over reason. Furthermore, - // it promotes the use of virtual - // functions for everything (for - // example, in Java, all functions - // are virtual per se), which, - // however, has proven to be rather - // inefficient in many applications - // where functions are often only - // accessing data, not doing - // computations, and therefore - // quickly return; the overhead of - // virtual functions can then be - // significant. The opinion of the - // author is to have abstract base - // classes wherever at least some - // part of the code of actual - // implementations can be shared - // and thus separated into the base - // class. + // Due to the lack of actual functionality, the programming style of + // declaring very abstract base classes reminds of the style used in + // Smalltalk or Java programs, where all classes are derived from entirely + // abstract classes Object, even number representations. The + // author admits that he does not particularly like the use of such a + // style in C++, as it puts style over reason. Furthermore, it promotes + // the use of virtual functions for everything (for example, in Java, all + // functions are virtual per se), which, however, has proven to be rather + // inefficient in many applications where functions are often only + // accessing data, not doing computations, and therefore quickly return; + // the overhead of virtual functions can then be significant. The opinion + // of the author is to have abstract base classes wherever at least some + // part of the code of actual implementations can be shared and thus + // separated into the base class. // - // Besides all these theoretical - // questions, we here have a good - // reason, which will become - // clearer to the reader - // below. Basically, we want to be - // able to have a family of - // different Laplace solvers that - // differ so much that no larger - // common subset of functionality - // could be found. We therefore - // just declare such an abstract - // base class, taking a pointer to - // a triangulation in the - // constructor and storing it - // henceforth. Since this - // triangulation will be used - // throughout all computations, we - // have to make sure that the - // triangulation exists until the - // destructor exits. We do this by - // keeping a SmartPointer to - // this triangulation, which uses a - // counter in the triangulation - // class to denote the fact that - // there is still an object out - // there using this triangulation, - // thus leading to an abort in case - // the triangulation is attempted - // to be destructed while this - // object still uses it. + // Besides all these theoretical questions, we here have a good reason, + // which will become clearer to the reader below. Basically, we want to be + // able to have a family of different Laplace solvers that differ so much + // that no larger common subset of functionality could be found. We + // therefore just declare such an abstract base class, taking a pointer to + // a triangulation in the constructor and storing it henceforth. 
Since + // this triangulation will be used throughout all computations, we have to + // make sure that the triangulation exists until the destructor exits. We + // do this by keeping a SmartPointer to this triangulation, + // which uses a counter in the triangulation class to denote the fact that + // there is still an object out there using this triangulation, thus + // leading to an abort in case the triangulation is attempted to be + // destructed while this object still uses it. // - // Note that while the pointer - // itself is declared constant - // (i.e. throughout the lifetime of - // this object, the pointer points - // to the same object), it is not - // declared as a pointer to a - // constant triangulation. In fact, - // by this we allow that derived - // classes refine or coarsen the - // triangulation within the - // refine_grid function. + // Note that while the pointer itself is declared constant + // (i.e. throughout the lifetime of this object, the pointer points to the + // same object), it is not declared as a pointer to a constant + // triangulation. In fact, by this we allow that derived classes refine or + // coarsen the triangulation within the refine_grid function. // - // Finally, we have a function - // n_dofs is only a tool for - // the driver functions to decide - // whether we want to go on with - // mesh refinement or not. It - // returns the number of degrees of - // freedom the present simulation - // has. + // Finally, we have a function n_dofs is only a tool for the + // driver functions to decide whether we want to go on with mesh + // refinement or not. It returns the number of degrees of freedom the + // present simulation has. template class Base { @@ -856,9 +517,8 @@ namespace Step13 }; - // The implementation of the only - // two non-abstract functions is - // then rather boring: + // The implementation of the only two non-abstract functions is then + // rather boring: template Base::Base (Triangulation &coarse_grid) : @@ -873,76 +533,42 @@ namespace Step13 // @sect4{A general solver class} - // Following now the main class - // that implements assembling the - // matrix of the linear system, - // solving it, and calling the - // postprocessor objects on the - // solution. It implements the - // solve_problem and - // postprocess functions - // declared in the base class. It - // does not, however, implement the - // refine_grid method, as mesh - // refinement will be implemented - // in a number of derived classes. + // Following now the main class that implements assembling the matrix of + // the linear system, solving it, and calling the postprocessor objects on + // the solution. It implements the solve_problem and + // postprocess functions declared in the base class. It does + // not, however, implement the refine_grid method, as mesh + // refinement will be implemented in a number of derived classes. // - // It also declares a new abstract - // virtual function, - // assemble_rhs, that needs to - // be overloaded in subclasses. The - // reason is that we will implement - // two different classes that will - // implement different methods to - // assemble the right hand side - // vector. This function might also - // be interesting in cases where - // the right hand side depends not - // simply on a continuous function, - // but on something else as well, - // for example the solution of - // another discretized problem, - // etc. The latter happens - // frequently in non-linear - // problems. 
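Coming back to the SmartPointer guard mentioned above, its effect can be sketched with a small, invented class that stores the triangulation the same way the Base class below does:

#include <deal.II/base/smartpointer.h>
#include <deal.II/grid/tria.h>

// Illustration only: as long as an object of this class exists, destroying
// the triangulation it points to trips an assertion in debug mode instead
// of silently leaving a dangling pointer behind.
template <int dim>
class GuardedBase
{
public:
  GuardedBase (dealii::Triangulation<dim> &coarse_grid)
    : triangulation (&coarse_grid)
  {}

  unsigned int n_active_cells () const
  {
    return triangulation->n_active_cells ();
  }

protected:
  const dealii::SmartPointer<dealii::Triangulation<dim> > triangulation;
};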
+ // It also declares a new abstract virtual function, + // assemble_rhs, that needs to be overloaded in + // subclasses. The reason is that we will implement two different classes + // that will implement different methods to assemble the right hand side + // vector. This function might also be interesting in cases where the + // right hand side depends not simply on a continuous function, but on + // something else as well, for example the solution of another discretized + // problem, etc. The latter happens frequently in non-linear problems. // - // As we mentioned previously, the - // actual content of this class is - // not new, but a mixture of - // various techniques already used - // in previous examples. We will - // therefore not discuss them in - // detail, but refer the reader to - // these programs. + // As we mentioned previously, the actual content of this class is not + // new, but a mixture of various techniques already used in previous + // examples. We will therefore not discuss them in detail, but refer the + // reader to these programs. // - // Basically, in a few words, the - // constructor of this class takes - // pointers to a triangulation, a - // finite element, and a function - // object representing the boundary - // values. These are either passed - // down to the base class's - // constructor, or are stored and - // used to generate a - // DoFHandler object - // later. Since finite elements and - // quadrature formula should match, - // it is also passed a quadrature - // object. + // Basically, in a few words, the constructor of this class takes pointers + // to a triangulation, a finite element, and a function object + // representing the boundary values. These are either passed down to the + // base class's constructor, or are stored and used to generate a + // DoFHandler object later. Since finite elements and + // quadrature formula should match, it is also passed a quadrature object. // - // The solve_problem sets up - // the data structures for the - // actual solution, calls the - // functions to assemble the linear - // system, and solves it. + // The solve_problem sets up the data structures for the + // actual solution, calls the functions to assemble the linear system, and + // solves it. // - // The postprocess function - // finally takes an evaluation - // object and applies it to the - // computed solution. + // The postprocess function finally takes an evaluation + // object and applies it to the computed solution. // - // The n_dofs function finally - // implements the pure virtual + // The n_dofs function finally implements the pure virtual // function of the base class. template class Solver : public virtual Base @@ -967,11 +593,8 @@ namespace Step13 unsigned int n_dofs () const; - // In the protected section of - // this class, we first have a - // number of member variables, - // of which the use should be - // clear from the previous + // In the protected section of this class, we first have a number of + // member variables, of which the use should be clear from the previous // examples: protected: const SmartPointer > fe; @@ -980,32 +603,18 @@ namespace Step13 Vector solution; const SmartPointer > boundary_values; - // Then we declare an abstract - // function that will be used - // to assemble the right hand - // side. 
As explained above, - // there are various cases for - // which this action differs - // strongly in what is - // necessary, so we defer this - // to derived classes: + // Then we declare an abstract function that will be used to assemble + // the right hand side. As explained above, there are various cases for + // which this action differs strongly in what is necessary, so we defer + // this to derived classes: virtual void assemble_rhs (Vector &rhs) const = 0; - // Next, in the private - // section, we have a small - // class which represents an - // entire linear system, i.e. a - // matrix, a right hand side, - // and a solution vector, as - // well as the constraints that - // are applied to it, such as - // those due to hanging - // nodes. Its constructor - // initializes the various - // subobjects, and there is a - // function that implements a - // conjugate gradient method as - // solver. + // Next, in the private section, we have a small class which represents + // an entire linear system, i.e. a matrix, a right hand side, and a + // solution vector, as well as the constraints that are applied to it, + // such as those due to hanging nodes. Its constructor initializes the + // various subobjects, and there is a function that implements a + // conjugate gradient method as solver. private: struct LinearSystem { @@ -1019,19 +628,11 @@ namespace Step13 Vector rhs; }; - // Finally, there is a pair of - // functions which will be used - // to assemble the actual - // system matrix. It calls the - // virtual function assembling - // the right hand side, and - // installs a number threads - // each running the second - // function which assembles - // part of the system - // matrix. The mechanism for - // doing so is the same as in - // the step-9 example program. + // Finally, there is a pair of functions which will be used to assemble + // the actual system matrix. It calls the virtual function assembling + // the right hand side, and installs a number threads each running the + // second function which assembles part of the system matrix. The + // mechanism for doing so is the same as in the step-9 example program. void assemble_linear_system (LinearSystem &linear_system); @@ -1044,19 +645,12 @@ namespace Step13 - // Now here comes the constructor - // of the class. It does not do - // much except store pointers to - // the objects given, and generate - // DoFHandler object - // initialized with the given - // pointer to a triangulation. This - // causes the DoF handler to store - // that pointer, but does not - // already generate a finite - // element numbering (we only ask - // for that in the - // solve_problem function). + // Now here comes the constructor of the class. It does not do much except + // store pointers to the objects given, and generate + // DoFHandler object initialized with the given pointer to a + // triangulation. This causes the DoF handler to store that pointer, but + // does not already generate a finite element numbering (we only ask for + // that in the solve_problem function). template Solver::Solver (Triangulation &triangulation, const FiniteElement &fe, @@ -1071,10 +665,8 @@ namespace Step13 {} - // The destructor is simple, it - // only clears the information - // stored in the DoF handler object - // to release the memory. + // The destructor is simple, it only clears the information stored in the + // DoF handler object to release the memory. 
template Solver::~Solver () { @@ -1082,19 +674,12 @@ namespace Step13 } - // The next function is the one - // which delegates the main work in - // solving the problem: it sets up - // the DoF handler object with the - // finite element given to the - // constructor of this object, the - // creates an object that denotes - // the linear system (i.e. the - // matrix, the right hand side - // vector, and the solution - // vector), calls the function to - // assemble it, and finally solves - // it: + // The next function is the one which delegates the main work in solving + // the problem: it sets up the DoF handler object with the finite element + // given to the constructor of this object, the creates an object that + // denotes the linear system (i.e. the matrix, the right hand side vector, + // and the solution vector), calls the function to assemble it, and + // finally solves it: template void Solver::solve_problem () @@ -1108,14 +693,10 @@ namespace Step13 } - // As stated above, the - // postprocess function takes - // an evaluation object, and - // applies it to the computed - // solution. This function may be - // called multiply, once for each - // evaluation of the solution which - // the user required. + // As stated above, the postprocess function takes an + // evaluation object, and applies it to the computed solution. This + // function may be called multiply, once for each evaluation of the + // solution which the user required. template void Solver:: @@ -1125,8 +706,7 @@ namespace Step13 } - // The n_dofs function should - // be self-explanatory: + // The n_dofs function should be self-explanatory: template unsigned int Solver::n_dofs () const @@ -1135,36 +715,25 @@ namespace Step13 } - // The following function assembles matrix - // and right hand side of the linear system - // to be solved in each step. It goes along - // the same lines as used in previous - // examples, so we explain it only - // briefly. Note that we do a number of - // things in parallel, a process described - // in more detail in the @ref threads - // module. + // The following function assembles matrix and right hand side of the + // linear system to be solved in each step. It goes along the same lines + // as used in previous examples, so we explain it only briefly. Note that + // we do a number of things in parallel, a process described in more + // detail in the @ref threads module. template void Solver::assemble_linear_system (LinearSystem &linear_system) { - // First define a convenience - // abbreviation for these lengthy - // iterator names... + // First define a convenience abbreviation for these lengthy iterator + // names... typedef typename DoFHandler::active_cell_iterator active_cell_iterator; - // ... and use it to split up the - // set of cells into a number of - // pieces of equal size. The - // number of blocks is set to the - // default number of threads to - // be used, which by default is - // set to the number of - // processors found in your - // computer at startup of the - // program: + // ... and use it to split up the set of cells into a number of pieces + // of equal size. 
The number of blocks is set to the default number of + // threads to be used, which by default is set to the number of + // processors found in your computer at startup of the program: const unsigned int n_threads = multithread_info.n_default_threads; std::vector > thread_ranges @@ -1172,17 +741,11 @@ namespace Step13 dof_handler.end (), n_threads); - // These ranges are then assigned - // to a number of threads which - // we create next. Each will - // assemble the local cell - // matrices on the assigned - // cells, and fill the matrix - // object with it. Since there is - // need for synchronization when - // filling the same matrix from - // different threads, we need a - // mutex here: + // These ranges are then assigned to a number of threads which we create + // next. Each will assemble the local cell matrices on the assigned + // cells, and fill the matrix object with it. Since there is need for + // synchronization when filling the same matrix from different threads, + // we need a mutex here: Threads::ThreadMutex mutex; Threads::ThreadGroup<> threads; for (unsigned int thread=0; thread boundary_value_map; VectorTools::interpolate_boundary_values (dof_handler, 0, @@ -1216,19 +772,14 @@ namespace Step13 boundary_value_map); - // If this is done, wait for the - // matrix assembling threads, and - // condense the constraints in - // the matrix as well: + // If this is done, wait for the matrix assembling threads, and condense + // the constraints in the matrix as well: threads.join_all (); linear_system.hanging_node_constraints.condense (linear_system.matrix); - // Now that we have the linear - // system, we can also treat - // boundary values, which need to - // be eliminated from both the - // matrix and the right hand - // side: + // Now that we have the linear system, we can also treat boundary + // values, which need to be eliminated from both the matrix and the + // right hand side: MatrixTools::apply_boundary_values (boundary_value_map, linear_system.matrix, solution, @@ -1237,15 +788,10 @@ namespace Step13 } - // The second of this pair of - // functions takes a range of cell - // iterators, and assembles the - // system matrix on this part of - // the domain. Since it's actions - // have all been explained in - // previous programs, we do not - // comment on it any more, except - // for one pointe below. + // The second of this pair of functions takes a range of cell iterators, + // and assembles the system matrix on this part of the domain. Since it's + // actions have all been explained in previous programs, we do not comment + // on it any more, except for one pointe below. template void Solver::assemble_matrix (LinearSystem &linear_system, @@ -1280,164 +826,80 @@ namespace Step13 cell->get_dof_indices (local_dof_indices); - // In the step-9 program, we - // have shown that you have - // to use the mutex to lock - // the matrix when copying - // the elements from the - // local to the global - // matrix. This was necessary - // to avoid that two threads - // access it at the same - // time, eventually - // overwriting their - // respective - // work. Previously, we have - // used the acquire and - // release functions of - // the mutex to lock and - // unlock the mutex, - // respectively. While this - // is valid, there is one - // possible catch: if between - // the locking operation and - // the unlocking operation an - // exception is thrown, the - // mutex remains in the - // locked state, and in some - // cases this might lead to - // deadlocks. 
A similar - // situation arises, when one - // changes the code to have a - // return statement somewhere - // in the middle of the - // locked block, and forgets - // that before we call - // return, we also have - // to unlock the mutex. This - // all is not be a problem - // here, but we want to show - // the general technique to - // cope with these problems - // nevertheless: have an - // object that upon - // initialization (i.e. in - // its constructor) locks the - // mutex, and on running the - // destructor unlocks it - // again. This is called the - // scoped lock pattern - // (apparently invented by - // Doug Schmidt originally), - // and it works because - // destructors of local - // objects are also run when - // we exit the function - // either through a - // return statement, or - // when an exception is - // raised. Thus, it is - // guaranteed that the mutex - // will always be unlocked - // when we exit this part of - // the program, whether the - // operation completed - // successfully or not, - // whether the exit path was - // something we implemented - // willfully or whether the - // function was exited by an - // exception that we did not - // forsee. + // In the step-9 program, we have shown that you have to use the + // mutex to lock the matrix when copying the elements from the local + // to the global matrix. This was necessary to avoid that two + // threads access it at the same time, eventually overwriting their + // respective work. Previously, we have used the + // acquire and release functions of the + // mutex to lock and unlock the mutex, respectively. While this is + // valid, there is one possible catch: if between the locking + // operation and the unlocking operation an exception is thrown, the + // mutex remains in the locked state, and in some cases this might + // lead to deadlocks. A similar situation arises, when one changes + // the code to have a return statement somewhere in the middle of + // the locked block, and forgets that before we call + // return, we also have to unlock the mutex. This all + // is not be a problem here, but we want to show the general + // technique to cope with these problems nevertheless: have an + // object that upon initialization (i.e. in its constructor) locks + // the mutex, and on running the destructor unlocks it again. This + // is called the scoped lock pattern (apparently + // invented by Doug Schmidt originally), and it works because + // destructors of local objects are also run when we exit the + // function either through a return statement, or when + // an exception is raised. Thus, it is guaranteed that the mutex + // will always be unlocked when we exit this part of the program, + // whether the operation completed successfully or not, whether the + // exit path was something we implemented willfully or whether the + // function was exited by an exception that we did not forsee. // - // deal.II implements the - // scoped locking pattern in - // the - // ThreadMutex::ScopedLock - // class: it takes the mutex - // in the constructor and - // locks it; in its - // destructor, it unlocks it - // again. So here is how it - // is used: + // deal.II implements the scoped locking pattern in the + // ThreadMutex::ScopedLock class: it takes the mutex in the + // constructor and locks it; in its destructor, it unlocks it + // again. 
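The pattern is not specific to deal.II; a generic, self-contained version looks like the following sketch (std::lock_guard from C++11 plays the same role in the standard library), and deal.II's own ScopedLock, used right below, behaves the same way:

#include <mutex>

// A generic scoped lock, independent of the library classes: the
// constructor acquires the mutex and the destructor releases it, so every
// exit path (normal return, early return, exception) unlocks correctly.
class ScopedLock
{
public:
  explicit ScopedLock (std::mutex &m) : mutex (m) { mutex.lock (); }
  ~ScopedLock ()                                  { mutex.unlock (); }

private:
  std::mutex &mutex;

  // Copying a lock object would unlock the same mutex twice; forbid it.
  ScopedLock (const ScopedLock &);
  ScopedLock &operator= (const ScopedLock &);
};

double shared_sum = 0;
std::mutex sum_mutex;

void add_to_shared_sum (const double value)
{
  ScopedLock lock (sum_mutex);   // locked from here ...
  shared_sum += value;
}                                // ... to the closing brace, even on exceptions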
So here is how it is used: Threads::ThreadMutex::ScopedLock lock (mutex); for (unsigned int i=0; ilock variable goes out - // of existence and its - // destructor the mutex is - // unlocked. + // Here, at the brace, the current scope ends, so the + // lock variable goes out of existence and its + // destructor the mutex is unlocked. }; } - // Now for the functions that - // implement actions in the linear - // system class. First, the - // constructor initializes all data - // elements to their correct sizes, - // and sets up a number of - // additional data structures, such - // as constraints due to hanging - // nodes. Since setting up the - // hanging nodes and finding out - // about the nonzero elements of - // the matrix is independent, we do - // that in parallel (if the library - // was configured to use - // concurrency, at least; - // otherwise, the actions are - // performed sequentially). Note - // that we start only one thread, - // and do the second action in the - // main thread. Since only one - // thread is generated, we don't - // use the Threads::ThreadGroup - // class here, but rather use the - // one created thread object - // directly to wait for this - // particular thread's exit. + // Now for the functions that implement actions in the linear system + // class. First, the constructor initializes all data elements to their + // correct sizes, and sets up a number of additional data structures, such + // as constraints due to hanging nodes. Since setting up the hanging nodes + // and finding out about the nonzero elements of the matrix is + // independent, we do that in parallel (if the library was configured to + // use concurrency, at least; otherwise, the actions are performed + // sequentially). Note that we start only one thread, and do the second + // action in the main thread. Since only one thread is generated, we don't + // use the Threads::ThreadGroup class here, but rather use + // the one created thread object directly to wait for this particular + // thread's exit. // - // Note that taking up the address - // of the - // DoFTools::make_hanging_node_constraints - // function is a little tricky, - // since there are actually three - // of them, one for each supported - // space dimension. Taking - // addresses of overloaded - // functions is somewhat - // complicated in C++, since the - // address-of operator & in - // that case returns more like a - // set of values (the addresses of - // all functions with that name), - // and selecting the right one is - // then the next step. If the - // context dictates which one to - // take (for example by assigning - // to a function pointer of known - // type), then the compiler can do - // that by itself, but if this set - // of pointers shall be given as - // the argument to a function that - // takes a template, the compiler - // could choose all without having - // a preference for one. We - // therefore have to make it clear - // to the compiler which one we - // would like to have; for this, we - // could use a cast, but for more - // clarity, we assign it to a - // temporary mhnc_p (short for - // pointer to - // make_hanging_node_constraints) - // with the right type, and using + // Note that taking up the address of the + // DoFTools::make_hanging_node_constraints function is a + // little tricky, since there are actually three of them, one for each + // supported space dimension. 
Taking addresses of overloaded functions is + // somewhat complicated in C++, since the address-of operator + // & in that case returns more like a set of values (the + // addresses of all functions with that name), and selecting the right one + // is then the next step. If the context dictates which one to take (for + // example by assigning to a function pointer of known type), then the + // compiler can do that by itself, but if this set of pointers shall be + // given as the argument to a function that takes a template, the compiler + // could choose all without having a preference for one. We therefore have + // to make it clear to the compiler which one we would like to have; for + // this, we could use a cast, but for more clarity, we assign it to a + // temporary mhnc_p (short for pointer to + // make_hanging_node_constraints) with the right type, and using // this pointer instead. template Solver::LinearSystem:: @@ -1459,19 +921,14 @@ namespace Step13 dof_handler.max_couplings_between_dofs()); DoFTools::make_sparsity_pattern (dof_handler, sparsity_pattern); - // Wait until the - // hanging_node_constraints - // object is fully set up, then - // close it and use it to - // condense the sparsity pattern: + // Wait until the hanging_node_constraints object is fully + // set up, then close it and use it to condense the sparsity pattern: mhnc_thread.join (); hanging_node_constraints.close (); hanging_node_constraints.condense (sparsity_pattern); - // Finally, close the sparsity - // pattern, initialize the - // matrix, and set the right hand - // side vector to the right size. + // Finally, close the sparsity pattern, initialize the matrix, and set + // the right hand side vector to the right size. sparsity_pattern.compress(); matrix.reinit (sparsity_pattern); rhs.reinit (dof_handler.n_dofs()); @@ -1479,13 +936,9 @@ namespace Step13 - // The second function of this - // class simply solves the linear - // system by a preconditioned - // conjugate gradient method. This - // has been extensively discussed - // before, so we don't dwell into - // it any more. + // The second function of this class simply solves the linear system by a + // preconditioned conjugate gradient method. This has been extensively + // discussed before, so we don't dwell into it any more. template void Solver::LinearSystem::solve (Vector &solution) const @@ -1506,39 +959,23 @@ namespace Step13 // @sect4{A primal solver} - // In the previous section, a base - // class for Laplace solvers was - // implemented, that lacked the - // functionality to assemble the - // right hand side vector, however, - // for reasons that were explained - // there. Now we implement a - // corresponding class that can do - // this for the case that the right - // hand side of a problem is given - // as a function object. + // In the previous section, a base class for Laplace solvers was + // implemented, that lacked the functionality to assemble the right hand + // side vector, however, for reasons that were explained there. Now we + // implement a corresponding class that can do this for the case that the + // right hand side of a problem is given as a function object. 
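The same disambiguation trick, reduced to plain C++ with invented overloads:

#include <iostream>

// Two overloads; taking the address of "scale" on its own is ambiguous.
double scale (double x) { return 2*x; }
int    scale (int    x) { return 2*x; }

// A function template that accepts any callable: here the compiler cannot
// pick one overload for us, so the caller has to disambiguate.
template <typename F>
void apply_to_three (F f)
{
  std::cout << f(3) << std::endl;
}

int main ()
{
  // Assigning to a pointer of known type selects the overload we want,
  // exactly like the temporary mhnc_p pointer described above.
  double (*scale_p) (double) = &scale;
  apply_to_three (scale_p);          // prints 6

  // apply_to_three (&scale);        // would not compile: ambiguous overload
}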
// - // The actions of the class are - // rather what you have seen - // already in previous examples - // already, so a brief explanation - // should suffice: the constructor - // takes the same data as does that - // of the underlying class (to - // which it passes all information) - // except for one function object - // that denotes the right hand side - // of the problem. A pointer to - // this object is stored (again as - // a SmartPointer, in order to - // make sure that the function - // object is not deleted as long as - // it is still used by this class). + // The actions of the class are rather what you have seen already in + // previous examples already, so a brief explanation should suffice: the + // constructor takes the same data as does that of the underlying class + // (to which it passes all information) except for one function object + // that denotes the right hand side of the problem. A pointer to this + // object is stored (again as a SmartPointer, in order to + // make sure that the function object is not deleted as long as it is + // still used by this class). // - // The only functional part of this - // class is the assemble_rhs - // method that does what its name - // suggests. + // The only functional part of this class is the assemble_rhs + // method that does what its name suggests. template class PrimalSolver : public Solver { @@ -1554,9 +991,8 @@ namespace Step13 }; - // The constructor of this class - // basically does what it is - // announced to do above... + // The constructor of this class basically does what it is announced to do + // above... template PrimalSolver:: PrimalSolver (Triangulation &triangulation, @@ -1573,11 +1009,9 @@ namespace Step13 - // ... as does the assemble_rhs - // function. Since this is - // explained in several of the - // previous example programs, we - // leave it at that. + // ... as does the assemble_rhs function. Since this is + // explained in several of the previous example programs, we leave it at + // that. template void PrimalSolver:: @@ -1619,34 +1053,23 @@ namespace Step13 // @sect4{Global refinement} - // By now, all functions of the - // abstract base class except for - // the refine_grid function - // have been implemented. We will - // now have two classes that - // implement this function for the - // PrimalSolver class, one - // doing global refinement, one a + // By now, all functions of the abstract base class except for the + // refine_grid function have been implemented. We will now + // have two classes that implement this function for the + // PrimalSolver class, one doing global refinement, one a // form of local refinement. // - // The first, doing global - // refinement, is rather simple: - // its main function just calls - // triangulation-@>refine_global - // (1);, which does all the work. + // The first, doing global refinement, is rather simple: its main function + // just calls triangulation-@>refine_global (1);, which does + // all the work. // - // Note that since the Base - // base class of the Solver - // class is virtual, we have to - // declare a constructor that - // initializes the immediate base - // class as well as the abstract + // Note that since the Base base class of the + // Solver class is virtual, we have to declare a constructor + // that initializes the immediate base class as well as the abstract // virtual one. // - // Apart from this technical - // complication, the class is - // probably simple enough to be - // left without further comments. 
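The remark about the virtual Base class deserves a small standalone illustration (the names mirror the program, but this is only a sketch of the language rule, not the actual class hierarchy): with virtual inheritance, the most derived class initializes the virtual base itself, so every constructor down the chain has to name it explicitly.

#include <iostream>

struct Base                        // stand-in for the abstract base; no default constructor
{
  explicit Base (int label) : label (label) {}
  int label;
};

struct Solver : public virtual Base        // virtual inheritance, as in the program
{
  explicit Solver (int label) : Base (label) {}
};

struct RefinementGlobal : public Solver
{
  // The most derived class must initialize the virtual Base itself;
  // the initializer written in Solver's constructor is ignored here.
  explicit RefinementGlobal (int label) : Base (label), Solver (label) {}
};

int main ()
{
  RefinementGlobal r (42);
  std::cout << r.label << '\n';    // prints 42
}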
+ // Apart from this technical complication, the class is probably simple + // enough to be left without further comments. template class RefinementGlobal : public PrimalSolver { @@ -1687,25 +1110,15 @@ namespace Step13 // @sect4{Local refinement by the Kelly error indicator} - // The second class implementing - // refinement strategies uses the - // Kelly refinemet indicator used - // in various example programs - // before. Since this indicator is - // already implemented in a class - // of its own inside the deal.II - // library, there is not much t do - // here except cal the function - // computing the indicator, then - // using it to select a number of - // cells for refinement and - // coarsening, and refinement the - // mesh accordingly. + // The second class implementing refinement strategies uses the Kelly + // refinemet indicator used in various example programs before. Since this + // indicator is already implemented in a class of its own inside the + // deal.II library, there is not much t do here except cal the function + // computing the indicator, then using it to select a number of cells for + // refinement and coarsening, and refinement the mesh accordingly. // - // Again, this should now be - // sufficiently standard to allow - // the omission of further - // comments. + // Again, this should now be sufficiently standard to allow the omission + // of further comments. template class RefinementKelly : public PrimalSolver { @@ -1759,40 +1172,24 @@ namespace Step13 // @sect3{Equation data} - // As this is one more academic - // example, we'd like to compare - // exact and computed solution - // against each other. For this, we - // need to declare function classes - // representing the exact solution - // (for comparison and for the - // Dirichlet boundary values), as - // well as a class that denotes the - // right hand side of the equation - // (this is simply the Laplace - // operator applied to the exact - // solution we'd like to recover). + // As this is one more academic example, we'd like to compare exact and + // computed solution against each other. For this, we need to declare + // function classes representing the exact solution (for comparison and for + // the Dirichlet boundary values), as well as a class that denotes the right + // hand side of the equation (this is simply the Laplace operator applied to + // the exact solution we'd like to recover). // - // For this example, let us choose as - // exact solution the function - // $u(x,y)=exp(x+sin(10y+5x^2))$. In more - // than two dimensions, simply repeat - // the sine-factor with y - // replaced by z and so on. Given - // this, the following two classes - // are probably straightforward from - // the previous examples. + // For this example, let us choose as exact solution the function + // $u(x,y)=exp(x+sin(10y+5x^2))$. In more than two dimensions, simply repeat + // the sine-factor with y replaced by z and so + // on. Given this, the following two classes are probably straightforward + // from the previous examples. // - // As in previous examples, the C++ - // language forces us to declare and - // define a constructor to the - // following classes even though they - // are empty. This is due to the fact - // that the base class has no default - // constructor (i.e. one without - // arguments), even though it has a - // constructor which has default - // values for all arguments. 
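As a concrete rendering of that exact solution, the value function of the Solution class can be written dimension-independently roughly as follows (a sketch assuming deal.II's Function and Point interfaces and the <cmath> header; the program's actual implementation may differ in minor details):

template <int dim>
double Solution<dim>::value (const Point<dim> &p,
                             const unsigned int /*component*/) const
{
  // u = exp(x + sin(10 y + 5 x^2)), with the sine factor repeated for
  // every further coordinate direction (y replaced by z, and so on):
  double q = p(0);
  for (unsigned int i = 1; i < dim; ++i)
    q += std::sin (10 * p(i) + 5 * p(0) * p(0));
  return std::exp (q);
}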
+ // As in previous examples, the C++ language forces us to declare and define + // a constructor to the following classes even though they are empty. This + // is due to the fact that the base class has no default constructor + // (i.e. one without arguments), even though it has a constructor which has + // default values for all arguments. template class Solution : public Function { @@ -1858,62 +1255,38 @@ namespace Step13 // @sect3{The driver routines} - // What is now missing are only the - // functions that actually select the - // various options, and run the - // simulation on successively finer - // grids to monitor the progress as - // the mesh is refined. + // What is now missing are only the functions that actually select the + // various options, and run the simulation on successively finer grids to + // monitor the progress as the mesh is refined. // - // This we do in the following - // function: it takes a solver - // object, and a list of - // postprocessing (evaluation) - // objects, and runs them with + // This we do in the following function: it takes a solver object, and a + // list of postprocessing (evaluation) objects, and runs them with // intermittent mesh refinement: template void run_simulation (LaplaceSolver::Base &solver, const std::list *> &postprocessor_list) { - // We will give an indicator of the - // step we are presently computing, - // in order to keep the user - // informed that something is still - // happening, and that the program - // is not in an endless loop. This - // is the head of this status line: + // We will give an indicator of the step we are presently computing, in + // order to keep the user informed that something is still happening, and + // that the program is not in an endless loop. This is the head of this + // status line: std::cout << "Refinement cycle: "; - // Then start a loop which only - // terminates once the number of - // degrees of freedom is larger - // than 20,000 (you may of course - // change this limit, if you need - // more -- or less -- accuracy from - // your program). + // Then start a loop which only terminates once the number of degrees of + // freedom is larger than 20,000 (you may of course change this limit, if + // you need more -- or less -- accuracy from your program). for (unsigned int step=0; true; ++step) { - // Then give the alive - // indication for this - // iteration. Note that the - // std::flush is needed to - // have the text actually - // appear on the screen, rather - // than only in some buffer - // that is only flushed the - // next time we issue an - // end-line. + // Then give the alive indication for this + // iteration. Note that the std::flush is needed to have + // the text actually appear on the screen, rather than only in some + // buffer that is only flushed the next time we issue an end-line. std::cout << step << " " << std::flush; - // Now solve the problem on the - // present grid, and run the - // evaluators on it. The long - // type name of iterators into - // the list is a little - // annoying, but could be - // shortened by a typedef, if - // so desired. + // Now solve the problem on the present grid, and run the evaluators + // on it. The long type name of iterators into the list is a little + // annoying, but could be shortened by a typedef, if so desired. 
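Such a typedef could look like this (purely illustrative, not part of the program, which keeps the long spelling):

typedef typename std::list<Evaluation::EvaluationBase<dim> *>::const_iterator
  evaluator_iterator;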
solver.solve_problem (); for (typename std::list *>::const_iterator @@ -1925,58 +1298,42 @@ namespace Step13 }; - // Now check whether more - // iterations are required, or - // whether the loop shall be - // ended: + // Now check whether more iterations are required, or whether the loop + // shall be ended: if (solver.n_dofs() < 20000) solver.refine_grid (); else break; }; - // Finally end the line in which we - // displayed status reports: + // Finally end the line in which we displayed status reports: std::cout << std::endl; } - // The final function is one which - // takes the name of a solver - // (presently "kelly" and "global" - // are allowed), creates a solver - // object out of it using a coarse - // grid (in this case the ubiquitous - // unit square) and a finite element - // object (here the likewise - // ubiquitous bilinear one), and uses - // that solver to ask for the - // solution of the problem on a - // sequence of successively refined - // grids. + // The final function is one which takes the name of a solver (presently + // "kelly" and "global" are allowed), creates a solver object out of it + // using a coarse grid (in this case the ubiquitous unit square) and a + // finite element object (here the likewise ubiquitous bilinear one), and + // uses that solver to ask for the solution of the problem on a sequence of + // successively refined grids. // - // The function also sets up two of - // evaluation functions, one - // evaluating the solution at the - // point (0.5,0.5), the other writing - // out the solution to a file. + // The function also sets up two of evaluation functions, one evaluating the + // solution at the point (0.5,0.5), the other writing out the solution to a + // file. template void solve_problem (const std::string &solver_name) { - // First minor task: tell the user - // what is going to happen. Thus - // write a header line, and a line - // with all '-' characters of the - // same length as the first one - // right below. + // First minor task: tell the user what is going to happen. Thus write a + // header line, and a line with all '-' characters of the same length as + // the first one right below. const std::string header = "Running tests with \"" + solver_name + "\" refinement criterion:"; std::cout << header << std::endl << std::string (header.size(), '-') << std::endl; - // Then set up triangulation, - // finite element, etc. + // Then set up triangulation, finite element, etc. Triangulation triangulation; GridGenerator::hyper_cube (triangulation, -1, 1); triangulation.refine_global (2); @@ -1985,11 +1342,8 @@ namespace Step13 const RightHandSide rhs_function; const Solution boundary_values; - // Create a solver object of the - // kind indicated by the argument - // to this function. If the name is - // not recognized, throw an - // exception! + // Create a solver object of the kind indicated by the argument to this + // function. If the name is not recognized, throw an exception! 
LaplaceSolver::Base *solver = 0; if (solver_name == "global") solver = new LaplaceSolver::RefinementGlobal (triangulation, fe, @@ -2004,58 +1358,43 @@ namespace Step13 else AssertThrow (false, ExcNotImplemented()); - // Next create a table object in - // which the values of the - // numerical solution at the point - // (0.5,0.5) will be stored, and - // create a respective evaluation - // object: + // Next create a table object in which the values of the numerical + // solution at the point (0.5,0.5) will be stored, and create a respective + // evaluation object: TableHandler results_table; Evaluation::PointValueEvaluation postprocessor1 (Point(0.5,0.5), results_table); - // Also generate an evaluator which - // writes out the solution: + // Also generate an evaluator which writes out the solution: Evaluation::SolutionOutput postprocessor2 (std::string("solution-")+solver_name, DataOut::gnuplot); - // Take these two evaluation - // objects and put them in a - // list... + // Take these two evaluation objects and put them in a list... std::list *> postprocessor_list; postprocessor_list.push_back (&postprocessor1); postprocessor_list.push_back (&postprocessor2); - // ... which we can then pass on to - // the function that actually runs - // the simulation on successively - // refined grids: + // ... which we can then pass on to the function that actually runs the + // simulation on successively refined grids: run_simulation (*solver, postprocessor_list); - // When this all is done, write out - // the results of the point - // evaluations, and finally delete - // the solver object: + // When this all is done, write out the results of the point evaluations, + // and finally delete the solver object: results_table.write_text (std::cout); delete solver; - // And one blank line after all - // results: + // And one blank line after all results: std::cout << std::endl; } } -// There is not much to say about the -// main function. It follows the same -// pattern as in all previous -// examples, with attempts to catch -// thrown exceptions, and displaying -// as much information as possible if -// we should get some. The rest is -// self-explanatory. +// There is not much to say about the main function. It follows the same +// pattern as in all previous examples, with attempts to catch thrown +// exceptions, and displaying as much information as possible if we should get +// some. The rest is self-explanatory. int main () { try diff --git a/deal.II/examples/step-14/step-14.cc b/deal.II/examples/step-14/step-14.cc index 9c58a26932..340368d0d4 100644 --- a/deal.II/examples/step-14/step-14.cc +++ b/deal.II/examples/step-14/step-14.cc @@ -45,26 +45,19 @@ #include #include -// The last step is as in all -// previous programs: +// The last step is as in all previous programs: namespace Step14 { using namespace dealii; // @sect3{Evaluating the solution} - // As mentioned in the introduction, - // significant parts of the program - // have simply been taken over from - // the step-13 example program. We - // therefore only comment on those - // things that are new. + // As mentioned in the introduction, significant parts of the program have + // simply been taken over from the step-13 example program. We therefore + // only comment on those things that are new. // - // First, the framework for - // evaluation of solutions is - // unchanged, i.e. the base class is - // the same, and the class to - // evaluate the solution at a grid + // First, the framework for evaluation of solutions is unchanged, i.e. 
the + // base class is the same, and the class to evaluate the solution at a grid // point is unchanged: namespace Evaluation { @@ -162,27 +155,17 @@ namespace Step14 // @sect4{The PointXDerivativeEvaluation class} - // Besides the class implementing - // the evaluation of the solution - // at one point, we here provide - // one which evaluates the gradient - // at a grid point. Since in - // general the gradient of a finite - // element function is not - // continuous at a vertex, we have - // to be a little bit more careful - // here. What we do is to loop over - // all cells, even if we have found - // the point already on one cell, - // and use the mean value of the - // gradient at the vertex taken - // from all adjacent cells. + // Besides the class implementing the evaluation of the solution at one + // point, we here provide one which evaluates the gradient at a grid + // point. Since in general the gradient of a finite element function is + // not continuous at a vertex, we have to be a little bit more careful + // here. What we do is to loop over all cells, even if we have found the + // point already on one cell, and use the mean value of the gradient at + // the vertex taken from all adjacent cells. // - // Given the interface of the - // PointValueEvaluation class, - // the declaration of this class - // provides little surprise, and - // neither does the constructor: + // Given the interface of the PointValueEvaluation class, the + // declaration of this class provides little surprise, and neither does + // the constructor: template class PointXDerivativeEvaluation : public EvaluationBase { @@ -209,26 +192,21 @@ namespace Step14 {} - // The more interesting things - // happen inside the function doing - // the actual evaluation: + // The more interesting things happen inside the function doing the actual + // evaluation: template void PointXDerivativeEvaluation:: operator () (const DoFHandler &dof_handler, const Vector &solution) const { - // This time initialize the - // return value with something - // useful, since we will have to - // add up a number of - // contributions and take the - // mean value afterwards... + // This time initialize the return value with something useful, since we + // will have to add up a number of contributions and take the mean value + // afterwards... double point_derivative = 0; - // ...then have some objects of - // which the meaning wil become - // clear below... + // ...then have some objects of which the meaning wil become clear + // below... QTrapez vertex_quadrature; FEValues fe_values (dof_handler.get_fe(), vertex_quadrature, @@ -236,10 +214,8 @@ namespace Step14 std::vector > solution_gradients (vertex_quadrature.size()); - // ...and next loop over all cells - // and their vertices, and count - // how often the vertex has been - // found: + // ...and next loop over all cells and their vertices, and count how + // often the vertex has been found: typename DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); @@ -250,72 +226,36 @@ namespace Step14 ++vertex) if (cell->vertex(vertex) == evaluation_point) { - // Things are now no more - // as simple, since we - // can't get the gradient - // of the finite element - // field as before, where - // we simply had to pick - // one degree of freedom - // at a vertex. + // Things are now no more as simple, since we can't get the + // gradient of the finite element field as before, where we + // simply had to pick one degree of freedom at a vertex. 
// - // Rather, we have to - // evaluate the finite - // element field on this - // cell, and at a certain - // point. As you know, - // evaluating finite - // element fields at - // certain points is done - // through the - // FEValues class, so - // we use that. The - // question is: the - // FEValues object - // needs to be a given a - // quadrature formula and - // can then compute the - // values of finite - // element quantities at - // the quadrature - // points. Here, we don't - // want to do quadrature, - // we simply want to - // specify some points! + // Rather, we have to evaluate the finite element field on this + // cell, and at a certain point. As you know, evaluating finite + // element fields at certain points is done through the + // FEValues class, so we use that. The question is: + // the FEValues object needs to be a given a + // quadrature formula and can then compute the values of finite + // element quantities at the quadrature points. Here, we don't + // want to do quadrature, we simply want to specify some points! // - // Nevertheless, the same - // way is chosen: use a - // special quadrature - // rule with points at - // the vertices, since - // these are what we are - // interested in. The - // appropriate rule is - // the trapezoidal rule, - // so that is the reason - // why we used that one + // Nevertheless, the same way is chosen: use a special + // quadrature rule with points at the vertices, since these are + // what we are interested in. The appropriate rule is the + // trapezoidal rule, so that is the reason why we used that one // above. // - // Thus: initialize the - // FEValues object on - // this cell, + // Thus: initialize the FEValues object on this + // cell, fe_values.reinit (cell); - // and extract the - // gradients of the - // solution vector at the + // and extract the gradients of the solution vector at the // vertices: fe_values.get_function_grads (solution, solution_gradients); - // Now we have the - // gradients at all - // vertices, so pick out - // that one which belongs - // to the evaluation - // point (note that the - // order of vertices is - // not necessarily the - // same as that of the + // Now we have the gradients at all vertices, so pick out that + // one which belongs to the evaluation point (note that the + // order of vertices is not necessarily the same as that of the // quadrature points): unsigned int q_point = 0; for (; q_point 0, ExcEvaluationPointNotFound(evaluation_point)); - // We have simply summed up the - // contributions of all adjacent - // cells, so we still have to - // compute the mean value. Once - // this is done, report the status: + // We have simply summed up the contributions of all adjacent cells, so + // we still have to compute the mean value. Once this is done, report + // the status: point_derivative /= evaluation_point_hits; std::cout << " Point x-derivative=" << point_derivative << std::endl; @@ -373,29 +296,17 @@ namespace Step14 // @sect4{The GridOutput class} - // Since this program has a more - // difficult structure (it computed - // a dual solution in addition to a - // primal one), writing out the - // solution is no more done by an - // evaluation object since we want - // to write both solutions at once - // into one file, and that requires - // some more information than - // available to the evaluation - // classes. 
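Since the relevant statements are scattered over several hunks above, here is the heart of the vertex-gradient evaluation gathered into one condensed sketch (using only calls that already appear above; error checking is omitted and variable names are those of the surrounding code): a trapezoidal rule places its quadrature points on the vertices, so an FEValues object built with it delivers the solution gradient exactly at the vertex of interest, which is then averaged over all adjacent cells.

QTrapez<dim> vertex_quadrature;                       // quadrature points = cell vertices
FEValues<dim> fe_values (dof_handler.get_fe(), vertex_quadrature,
                         update_gradients | update_quadrature_points);
std::vector<Tensor<1,dim> > solution_gradients (vertex_quadrature.size());

double point_derivative = 0;
unsigned int evaluation_point_hits = 0;

for (typename DoFHandler<dim>::active_cell_iterator cell = dof_handler.begin_active();
     cell != dof_handler.end(); ++cell)
  for (unsigned int v = 0; v < GeometryInfo<dim>::vertices_per_cell; ++v)
    if (cell->vertex(v) == evaluation_point)
      {
        fe_values.reinit (cell);
        fe_values.get_function_grads (solution, solution_gradients);

        // find the quadrature point sitting on this vertex and take the
        // x-component of the gradient there
        for (unsigned int q = 0; q < solution_gradients.size(); ++q)
          if (fe_values.quadrature_point(q) == evaluation_point)
            {
              point_derivative += solution_gradients[q][0];
              ++evaluation_point_hits;
              break;
            }
      }

point_derivative /= evaluation_point_hits;            // mean over all adjacent cells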
+ // Since this program has a more difficult structure (it computed a dual + // solution in addition to a primal one), writing out the solution is no + // more done by an evaluation object since we want to write both solutions + // at once into one file, and that requires some more information than + // available to the evaluation classes. // - // However, we also want to look at - // the grids generated. This again - // can be done with one such - // class. Its structure is analog - // to the SolutionOutput class - // of the previous example program, - // so we do not discuss it here in - // more detail. Furthermore, - // everything that is used here has - // already been used in previous - // example programs. + // However, we also want to look at the grids generated. This again can be + // done with one such class. Its structure is analog to the + // SolutionOutput class of the previous example program, so + // we do not discuss it here in more detail. Furthermore, everything that + // is used here has already been used in previous example programs. template class GridOutput : public EvaluationBase { @@ -436,39 +347,25 @@ namespace Step14 // @sect3{The Laplace solver classes} - // Next are the actual solver - // classes. Again, we discuss only - // the differences to the previous - // program. + // Next are the actual solver classes. Again, we discuss only the + // differences to the previous program. namespace LaplaceSolver { - // Before everything else, - // forward-declare one class that - // we will have later, since we - // will want to make it a friend of - // some of the classes that follow, - // which requires the class to be - // known: + // Before everything else, forward-declare one class that we will have + // later, since we will want to make it a friend of some of the classes + // that follow, which requires the class to be known: template class WeightedResidual; // @sect4{The Laplace solver base class} - // This class is almost unchanged, - // with the exception that it - // declares two more functions: - // output_solution will be used - // to generate output files from - // the actual solutions computed by - // derived classes, and the - // set_refinement_cycle - // function by which the testing - // framework sets the number of the - // refinement cycle to a local - // variable in this class; this - // number is later used to generate - // filenames for the solution - // output. + // This class is almost unchanged, with the exception that it declares two + // more functions: output_solution will be used to generate + // output files from the actual solutions computed by derived classes, and + // the set_refinement_cycle function by which the testing + // framework sets the number of the refinement cycle to a local variable + // in this class; this number is later used to generate filenames for the + // solution output. template class Base { @@ -515,8 +412,7 @@ namespace Step14 // @sect4{The Laplace Solver class} - // Likewise, the Solver class - // is entirely unchanged and will + // Likewise, the Solver class is entirely unchanged and will // thus not be discussed. template class Solver : public virtual Base @@ -768,62 +664,35 @@ namespace Step14 // @sect4{The PrimalSolver class} - // The PrimalSolver class is - // also mostly unchanged except for - // overloading the functions - // solve_problem, n_dofs, - // and postprocess of the base - // class, and implementing the - // output_solution - // function. 
These overloaded - // functions do nothing particular - // besides calling the functions of - // the base class -- that seems - // superfluous, but works around a - // bug in a popular compiler which - // requires us to write such - // functions for the following - // scenario: Besides the - // PrimalSolver class, we will - // have a DualSolver, both - // derived from Solver. We will - // then have a final classes which - // derived from these two, which - // will then have two instances of - // the Solver class as its base - // classes. If we want, for - // example, the number of degrees - // of freedom of the primal solver, - // we would have to indicate this - // like so: - // PrimalSolver::n_dofs(). - // However, the compiler does not - // accept this since the n_dofs - // function is actually from a base - // class of the PrimalSolver - // class, so we have to inject the - // name from the base to the - // derived class using these - // additional functions. + // The PrimalSolver class is also mostly unchanged except for + // overloading the functions solve_problem, + // n_dofs, and postprocess of the base class, + // and implementing the output_solution function. These + // overloaded functions do nothing particular besides calling the + // functions of the base class -- that seems superfluous, but works around + // a bug in a popular compiler which requires us to write such functions + // for the following scenario: Besides the PrimalSolver + // class, we will have a DualSolver, both derived from + // Solver. We will then have a final classes which derived + // from these two, which will then have two instances of the + // Solver class as its base classes. If we want, for example, + // the number of degrees of freedom of the primal solver, we would have to + // indicate this like so: PrimalSolver::n_dofs(). However, + // the compiler does not accept this since the n_dofs + // function is actually from a base class of the PrimalSolver + // class, so we have to inject the name from the base to the derived class + // using these additional functions. // - // Regarding the implementation of - // the output_solution - // function, we keep the - // GlobalRefinement and - // RefinementKelly classes in - // this program, and they can then - // rely on the default - // implementation of this function - // which simply outputs the primal - // solution. The class implementing - // dual weighted error estimators - // will overload this function - // itself, to also output the dual + // Regarding the implementation of the output_solution + // function, we keep the GlobalRefinement and + // RefinementKelly classes in this program, and they can then + // rely on the default implementation of this function which simply + // outputs the primal solution. The class implementing dual weighted error + // estimators will overload this function itself, to also output the dual // solution. // - // Except for this, the class is - // unchanged with respect to the - // previous example. + // Except for this, the class is unchanged with respect to the previous + // example. template class PrimalSolver : public Solver { @@ -851,17 +720,10 @@ namespace Step14 const SmartPointer > rhs_function; virtual void assemble_rhs (Vector &rhs) const; - // Now, in order to work around - // some problems in one of the - // compilers this library can - // be compiled with, we will - // have to declare a - // class that is actually - // derived from the present - // one, as a friend (strange as - // that seems). 
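The name-injection workaround described above can be illustrated with a small, self-contained example (the constant and the bodies are invented; only the language mechanism is the same as in the program):

#include <iostream>

struct Solver
{
  unsigned int n_dofs () const { return 64; }
};

struct PrimalSolver : public Solver
{
  // forwarding function that "injects" the base-class name into this class
  unsigned int n_dofs () const { return Solver::n_dofs(); }
};

struct DualSolver : public Solver
{
  unsigned int n_dofs () const { return Solver::n_dofs(); }
};

struct WeightedResidual : public PrimalSolver, public DualSolver
{
  void report () const
  {
    // A plain n_dofs() would be ambiguous here (there are two Solver
    // sub-objects); qualifying with the class that re-declares it is not.
    std::cout << "primal dofs: " << PrimalSolver::n_dofs() << '\n';
  }
};

int main ()
{
  WeightedResidual w;
  w.report ();
}

Without the forwarding declarations, the buggy compiler mentioned above rejected the qualified call; with them, PrimalSolver::n_dofs() names a member of PrimalSolver itself.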
The full - // rationale will be explained - // below. + // Now, in order to work around some problems in one of the compilers + // this library can be compiled with, we will have to declare a class + // that is actually derived from the present one, as a friend (strange + // as that seems). The full rationale will be explained below. friend class WeightedResidual; }; @@ -973,10 +835,8 @@ namespace Step14 // @sect4{The RefinementGlobal and RefinementKelly classes} - // For the following two classes, - // the same applies as for most of - // the above: the class is taken - // from the previous example as-is: + // For the following two classes, the same applies as for most of the + // above: the class is taken from the previous example as-is: template class RefinementGlobal : public PrimalSolver { @@ -1072,28 +932,16 @@ namespace Step14 // @sect4{The RefinementWeightedKelly class} - // This class is a variant of the - // previous one, in that it allows - // to weight the refinement - // indicators we get from the - // library's Kelly indicator by - // some function. We include this - // class since the goal of this - // example program is to - // demonstrate automatic refinement - // criteria even for complex output - // quantities such as point values - // or stresses. If we did not solve - // a dual problem and compute the - // weights thereof, we would - // probably be tempted to give a - // hand-crafted weighting to the - // indicators to account for the - // fact that we are going to - // evaluate these quantities. This - // class accepts such a weighting - // function as argument to its - // constructor: + // This class is a variant of the previous one, in that it allows to + // weight the refinement indicators we get from the library's Kelly + // indicator by some function. We include this class since the goal of + // this example program is to demonstrate automatic refinement criteria + // even for complex output quantities such as point values or stresses. If + // we did not solve a dual problem and compute the weights thereof, we + // would probably be tempted to give a hand-crafted weighting to the + // indicators to account for the fact that we are going to evaluate these + // quantities. This class accepts such a weighting function as argument to + // its constructor: template class RefinementWeightedKelly : public PrimalSolver { @@ -1133,20 +981,14 @@ namespace Step14 - // Now, here comes the main - // function, including the - // weighting: + // Now, here comes the main function, including the weighting: template void RefinementWeightedKelly::refine_grid () { - // First compute some residual - // based error indicators for all - // cells by a method already - // implemented in the - // library. What exactly is - // computed can be read in the - // documentation of that class. + // First compute some residual based error indicators for all cells by a + // method already implemented in the library. What exactly is computed + // can be read in the documentation of that class. 
Vector estimated_error (this->triangulation->n_active_cells()); KellyErrorEstimator::estimate (this->dof_handler, *this->face_quadrature, @@ -1154,10 +996,8 @@ namespace Step14 this->solution, estimated_error); - // Now we are going to weight - // these indicators by the value - // of the function given to the - // constructor: + // Now we are going to weight these indicators by the value of the + // function given to the constructor: typename DoFHandler::active_cell_iterator cell = this->dof_handler.begin_active(), endc = this->dof_handler.end(); @@ -1176,146 +1016,79 @@ namespace Step14 // @sect3{Equation data} // - // In this example program, we work - // with the same data sets as in the - // previous one, but as it may so - // happen that someone wants to run - // the program with different - // boundary values and right hand side - // functions, or on a different grid, - // we show a simple technique to do - // exactly that. For more clarity, we - // furthermore pack everything that - // has to do with equation data into - // a namespace of its own. + // In this example program, we work with the same data sets as in the + // previous one, but as it may so happen that someone wants to run the + // program with different boundary values and right hand side functions, or + // on a different grid, we show a simple technique to do exactly that. For + // more clarity, we furthermore pack everything that has to do with equation + // data into a namespace of its own. // - // The underlying assumption is that - // this is a research program, and - // that there we often have a number - // of test cases that consist of a - // domain, a right hand side, - // boundary values, possibly a - // specified coefficient, and a - // number of other parameters. They - // often vary all at the same time - // when shifting from one example to - // another. To make handling such - // sets of problem description - // parameters simple is the goal of - // the following. + // The underlying assumption is that this is a research program, and that + // there we often have a number of test cases that consist of a domain, a + // right hand side, boundary values, possibly a specified coefficient, and a + // number of other parameters. They often vary all at the same time when + // shifting from one example to another. To make handling such sets of + // problem description parameters simple is the goal of the following. // - // Basically, the idea is this: let - // us have a structure for each set - // of data, in which we pack - // everything that describes a test - // case: here, these are two - // subclasses, one called - // BoundaryValues for the - // boundary values of the exact - // solution, and one called - // RightHandSide, and then a way - // to generate the coarse grid. Since - // the solution of the previous - // example program looked like curved - // ridges, we use this name here for - // the enclosing class. Note that the - // names of the two inner classes - // have to be the same for all - // enclosing test case classes, and - // also that we have attached the - // dimension template argument to the - // enclosing class rather than to the - // inner ones, to make further - // processing simpler. (From a - // language viewpoint, a namespace - // would be better to encapsulate - // these inner classes, rather than a - // structure. However, namespaces - // cannot be given as template - // arguments, so we use a structure - // to allow a second object to select - // from within its given - // argument. 
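Returning briefly to the weighting step set up just before the cell loop above: the loop body, which the diff context cuts off, essentially scales each cell's Kelly indicator by the weighting function evaluated at the cell center, along these lines (a sketch; the member name weighting_function is assumed from the constructor description and may differ):

unsigned int cell_index = 0;
for (typename DoFHandler<dim>::active_cell_iterator
       cell = this->dof_handler.begin_active();
     cell != this->dof_handler.end(); ++cell, ++cell_index)
  estimated_error(cell_index) *= weighting_function->value (cell->center());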
The enclosing structure, - // of course, has no member variables - // apart from the classes it - // declares, and a static function to - // generate the coarse mesh; it will - // in general never be instantiated.) + // Basically, the idea is this: let us have a structure for each set of + // data, in which we pack everything that describes a test case: here, these + // are two subclasses, one called BoundaryValues for the + // boundary values of the exact solution, and one called + // RightHandSide, and then a way to generate the coarse + // grid. Since the solution of the previous example program looked like + // curved ridges, we use this name here for the enclosing class. Note that + // the names of the two inner classes have to be the same for all enclosing + // test case classes, and also that we have attached the dimension template + // argument to the enclosing class rather than to the inner ones, to make + // further processing simpler. (From a language viewpoint, a namespace + // would be better to encapsulate these inner classes, rather than a + // structure. However, namespaces cannot be given as template arguments, so + // we use a structure to allow a second object to select from within its + // given argument. The enclosing structure, of course, has no member + // variables apart from the classes it declares, and a static function to + // generate the coarse mesh; it will in general never be instantiated.) // - // The idea is then the following - // (this is the right time to also - // take a brief look at the code - // below): we can generate objects - // for boundary values and - // right hand side by simply giving - // the name of the outer class as a - // template argument to a class which - // we call here Data::SetUp, and - // it then creates objects for the - // inner classes. In this case, to - // get all that characterizes the - // curved ridge solution, we would - // simply generate an instance of - // Data::SetUp@, - // and everything we need to know - // about the solution would be static - // member variables and functions of + // The idea is then the following (this is the right time to also take a + // brief look at the code below): we can generate objects for boundary + // values and right hand side by simply giving the name of the outer class + // as a template argument to a class which we call here + // Data::SetUp, and it then creates objects for the inner + // classes. In this case, to get all that characterizes the curved ridge + // solution, we would simply generate an instance of + // Data::SetUp@, and everything we need to + // know about the solution would be static member variables and functions of // that object. // - // This approach might seem like - // overkill in this case, but will - // become very handy once a certain - // set up is not only characterized - // by Dirichlet boundary values and a - // right hand side function, but in - // addition by material properties, - // Neumann values, different boundary - // descriptors, etc. In that case, - // the SetUp class might consist - // of a dozen or more objects, and - // each descriptor class (like the - // CurvedRidges class below) - // would have to provide them. Then, - // you will be happy to be able to - // change from one set of data to - // another by only changing the - // template argument to the SetUp - // class at one place, rather than at - // many. 
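A short usage sketch may make this more tangible (hypothetical lines, not taken from the program; the precise template signature of SetUp appears further below): switching the whole test case is then literally a one-token change.

// Select the curved-ridges test case ...
Data::SetUp<Data::CurvedRidges<dim>, dim> descriptor;

// ... or exchange every boundary value, right hand side, and coarse grid
// at once by changing only the first template argument:
//   Data::SetUp<Data::SomeOtherTestCase<dim>, dim> descriptor;

const Function<dim> &boundary_values = descriptor.get_boundary_values ();
const Function<dim> &right_hand_side = descriptor.get_right_hand_side ();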
+ // This approach might seem like overkill in this case, but will become very + // handy once a certain set up is not only characterized by Dirichlet + // boundary values and a right hand side function, but in addition by + // material properties, Neumann values, different boundary descriptors, + // etc. In that case, the SetUp class might consist of a dozen + // or more objects, and each descriptor class (like the + // CurvedRidges class below) would have to provide them. Then, + // you will be happy to be able to change from one set of data to another by + // only changing the template argument to the SetUp class at + // one place, rather than at many. // - // With this framework for different - // test cases, we are almost - // finished, but one thing remains: - // by now we can select statically, - // by changing one template argument, - // which data set to choose. In order - // to be able to do that dynamically, - // i.e. at run time, we need a base - // class. This we provide in the - // obvious way, see below, with - // virtual abstract functions. It - // forces us to introduce a second - // template parameter dim which - // we need for the base class (which - // could be avoided using some - // template magic, but we omit that), - // but that's all. + // With this framework for different test cases, we are almost finished, but + // one thing remains: by now we can select statically, by changing one + // template argument, which data set to choose. In order to be able to do + // that dynamically, i.e. at run time, we need a base class. This we provide + // in the obvious way, see below, with virtual abstract functions. It forces + // us to introduce a second template parameter dim which we + // need for the base class (which could be avoided using some template + // magic, but we omit that), but that's all. // - // Adding new testcases is now - // simple, you don't have to touch - // the framework classes, only a - // structure like the - // CurvedRidges one is needed. + // Adding new testcases is now simple, you don't have to touch the framework + // classes, only a structure like the CurvedRidges one is + // needed. namespace Data { // @sect4{The SetUpBase and SetUp classes} - // Based on the above description, - // the SetUpBase class then - // looks as follows. To allow using - // the SmartPointer class with - // this class, we derived from the - // Subscriptor class. + // Based on the above description, the SetUpBase class then + // looks as follows. To allow using the SmartPointer class + // with this class, we derived from the Subscriptor class. template struct SetUpBase : public Subscriptor { @@ -1330,19 +1103,13 @@ namespace Step14 }; - // And now for the derived class - // that takes the template argument - // as explained above. For some - // reason, C++ requires us to - // define a constructor (which - // maybe empty), as otherwise a - // warning is generated that some - // data is not initialized. + // And now for the derived class that takes the template argument as + // explained above. For some reason, C++ requires us to define a + // constructor (which maybe empty), as otherwise a warning is generated + // that some data is not initialized. // - // Here we pack the data elements - // into private variables, and - // allow access to them through the - // methods of the base class. + // Here we pack the data elements into private variables, and allow access + // to them through the methods of the base class. 
template struct SetUp : public SetUpBase { @@ -1363,16 +1130,14 @@ namespace Step14 static const typename Traits::RightHandSide right_hand_side; }; - // We have to provide definitions - // for the static member variables - // of the above class: + // We have to provide definitions for the static member variables of the + // above class: template const typename Traits::BoundaryValues SetUp::boundary_values; template const typename Traits::RightHandSide SetUp::right_hand_side; - // And definitions of the member - // functions: + // And definitions of the member functions: template const Function & SetUp::get_boundary_values () const @@ -1400,12 +1165,9 @@ namespace Step14 // @sect4{The CurvedRidges class} - // The class that is used to - // describe the boundary values and - // right hand side of the curved - // ridge problem already used in - // the step-13 example program is - // then like so: + // The class that is used to describe the boundary values and right hand + // side of the curved ridge problem already used in the + // step-13 example program is then like so: template struct CurvedRidges { @@ -1487,41 +1249,24 @@ namespace Step14 // @sect4{The Exercise_2_3 class} - // This example program was written - // while giving practical courses - // for a lecture on adaptive finite - // element methods and duality - // based error estimates. For these - // courses, we had one exercise, - // which required to solve the - // Laplace equation with constant - // right hand side on a square - // domain with a square hole in the - // center, and zero boundary - // values. Since the implementation - // of the properties of this - // problem is so particularly - // simple here, lets do it. As the - // number of the exercise was 2.3, - // we take the liberty to retain - // this name for the class as well. + // This example program was written while giving practical courses for a + // lecture on adaptive finite element methods and duality based error + // estimates. For these courses, we had one exercise, which required to + // solve the Laplace equation with constant right hand side on a square + // domain with a square hole in the center, and zero boundary + // values. Since the implementation of the properties of this problem is + // so particularly simple here, lets do it. As the number of the exercise + // was 2.3, we take the liberty to retain this name for the class as well. template struct Exercise_2_3 { - // We need a class to denote - // the boundary values of the - // problem. In this case, this - // is simple: it's the zero - // function, so don't even - // declare a class, just a - // typedef: + // We need a class to denote the boundary values of the problem. In this + // case, this is simple: it's the zero function, so don't even declare a + // class, just a typedef: typedef ZeroFunction BoundaryValues; - // Second, a class that denotes - // the right hand side. Since - // they are constant, just - // subclass the corresponding - // class of the library and be + // Second, a class that denotes the right hand side. Since they are + // constant, just subclass the corresponding class of the library and be // done: class RightHandSide : public ConstantFunction { @@ -1529,72 +1274,44 @@ namespace Step14 RightHandSide () : ConstantFunction (1.) {} }; - // Finally a function to - // generate the coarse - // grid. This is somewhat more - // complicated here, see - // immediately below. + // Finally a function to generate the coarse grid. 
This is somewhat more + // complicated here, see immediately below. static void create_coarse_grid (Triangulation &coarse_grid); }; - // As stated above, the grid for - // this example is the square - // [-1,1]^2 with the square - // [-1/2,1/2]^2 as hole in it. We - // create the coarse grid as 4 - // times 4 cells with the middle - // four ones missing. + // As stated above, the grid for this example is the square [-1,1]^2 with + // the square [-1/2,1/2]^2 as hole in it. We create the coarse grid as 4 + // times 4 cells with the middle four ones missing. // - // Of course, the example has an - // extension to 3d, but since this - // function cannot be written in a - // dimension independent way we - // choose not to implement this - // here, but rather only specialize - // the template for dim=2. If you - // compile the program for 3d, - // you'll get a message from the - // linker that this function is not - // implemented for 3d, and needs to - // be provided. + // Of course, the example has an extension to 3d, but since this function + // cannot be written in a dimension independent way we choose not to + // implement this here, but rather only specialize the template for + // dim=2. If you compile the program for 3d, you'll get a message from the + // linker that this function is not implemented for 3d, and needs to be + // provided. // - // For the creation of this - // geometry, the library has no - // predefined method. In this case, - // the geometry is still simple - // enough to do the creation by - // hand, rather than using a mesh - // generator. + // For the creation of this geometry, the library has no predefined + // method. In this case, the geometry is still simple enough to do the + // creation by hand, rather than using a mesh generator. template <> void Exercise_2_3<2>:: create_coarse_grid (Triangulation<2> &coarse_grid) { - // First define the space - // dimension, to allow those - // parts of the function that are - // actually dimension independent - // to use this variable. That - // makes it simpler if you later - // takes this as a starting point - // to implement the 3d version. + // First define the space dimension, to allow those parts of the + // function that are actually dimension independent to use this + // variable. That makes it simpler if you later takes this as a starting + // point to implement the 3d version. const unsigned int dim = 2; - // Then have a list of - // vertices. Here, they are 24 (5 - // times 5, with the middle one - // omitted). It is probably best - // to draw a sketch here. Note - // that we leave the number of - // vertices open at first, but - // then let the compiler compute - // this number afterwards. This - // reduces the possibility of - // having the dimension to large - // and leaving the last ones + // Then have a list of vertices. Here, they are 24 (5 times 5, with the + // middle one omitted). It is probably best to draw a sketch here. Note + // that we leave the number of vertices open at first, but then let the + // compiler compute this number afterwards. This reduces the possibility + // of having the dimension to large and leaving the last ones // uninitialized. static const Point<2> vertices_1[] = { Point<2> (-1., -1.), @@ -1629,20 +1346,14 @@ namespace Step14 const unsigned int n_vertices = sizeof(vertices_1) / sizeof(vertices_1[0]); - // From this static list of - // vertices, we generate an STL - // vector of the vertices, as - // this is the data type the - // library wants to see. 
+ // From this static list of vertices, we generate an STL vector of the + // vertices, as this is the data type the library wants to see. const std::vector > vertices (&vertices_1[0], &vertices_1[n_vertices]); - // Next, we have to define the - // cells and the vertices they - // contain. Here, we have 8 - // vertices, but leave the number - // open and let it be computed - // afterwards: + // Next, we have to define the cells and the vertices they + // contain. Here, we have 8 vertices, but leave the number open and let + // it be computed afterwards: static const int cell_vertices[][GeometryInfo::vertices_per_cell] = {{0, 1, 5, 6}, {1, 2, 6, 7}, @@ -1660,13 +1371,9 @@ namespace Step14 const unsigned int n_cells = sizeof(cell_vertices) / sizeof(cell_vertices[0]); - // Again, we generate a C++ - // vector type from this, but - // this time by looping over the - // cells (yes, this is - // boring). Additionally, we set - // the material indicator to zero - // for all the cells: + // Again, we generate a C++ vector type from this, but this time by + // looping over the cells (yes, this is boring). Additionally, we set + // the material indicator to zero for all the cells: std::vector > cells (n_cells, CellData()); for (unsigned int i=0; iCurvedRidges class) directly - // as classes derived from - // Data::SetUpBase. Indeed, we - // could have done very well so. The - // only reason is that then we would - // have to have member variables for - // the solution and right hand side - // classes in the CurvedRidges - // class, as well as member functions - // overloading the abstract functions - // of the base class giving access to - // these member variables. The - // SetUp class has the sole - // reason to relieve us from the need - // to reiterate these member - // variables and functions that would - // be necessary in all such - // classes. In some way, the template - // mechanism here only provides a way - // to have default implementations - // for a number of functions that - // depend on external quantities and - // can thus not be provided using - // normal virtual functions, at least - // not without the help of templates. + // As you have now read through this framework, you may be wondering why we + // have not chosen to implement the classes implementing a certain setup + // (like the CurvedRidges class) directly as classes derived + // from Data::SetUpBase. Indeed, we could have done very well + // so. The only reason is that then we would have to have member variables + // for the solution and right hand side classes in the + // CurvedRidges class, as well as member functions overloading + // the abstract functions of the base class giving access to these member + // variables. The SetUp class has the sole reason to relieve us + // from the need to reiterate these member variables and functions that + // would be necessary in all such classes. In some way, the template + // mechanism here only provides a way to have default implementations for a + // number of functions that depend on external quantities and can thus not + // be provided using normal virtual functions, at least not without the help + // of templates. // - // However, there might be good - // reasons to actually implement - // classes derived from - // Data::SetUpBase, for example - // if the solution or right hand side - // classes require constructors that - // take arguments, which the - // Data::SetUpBase class cannot - // provide. In that case, subclassing - // is a worthwhile strategy. 
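The diff context above cuts off the end of this function, which presumably fills the CellData objects and hands everything to Triangulation::create_triangulation. The same pattern, stripped down to a single unit-square cell, looks like this (illustrative only; the real function uses the 24 vertices and 12 cells listed above):

const Point<2> vertices_1[4] = { Point<2> (0., 0.), Point<2> (1., 0.),
                                 Point<2> (0., 1.), Point<2> (1., 1.) };
const std::vector<Point<2> > vertices (&vertices_1[0], &vertices_1[4]);

std::vector<CellData<2> > cells (1, CellData<2>());
const unsigned int cell_vertices[4] = {0, 1, 2, 3};      // lexicographic vertex order
for (unsigned int j = 0; j < 4; ++j)
  cells[0].vertices[j] = cell_vertices[j];
cells[0].material_id = 0;

Triangulation<2> coarse_grid;
coarse_grid.create_triangulation (vertices, cells, SubCellData());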
Other - // possibilities for special cases - // are to derive from - // Data::SetUp@ where - // SomeSetUp denotes a class, or - // even to explicitly specialize - // Data::SetUp@. The - // latter allows to transparently use - // the way the SetUp class is - // used for other set-ups, but with - // special actions taken for special - // arguments. + // However, there might be good reasons to actually implement classes + // derived from Data::SetUpBase, for example if the solution or + // right hand side classes require constructors that take arguments, which + // the Data::SetUpBase class cannot provide. In that case, + // subclassing is a worthwhile strategy. Other possibilities for special + // cases are to derive from Data::SetUp@ where + // SomeSetUp denotes a class, or even to explicitly specialize + // Data::SetUp@. The latter allows to transparently + // use the way the SetUp class is used for other set-ups, but + // with special actions taken for special arguments. // - // A final observation favoring the - // approach taken here is the - // following: we have found numerous - // times that when starting a - // project, the number of parameters - // (usually boundary values, right - // hand side, coarse grid, just as - // here) was small, and the number of - // test cases was small as well. One - // then starts out by handcoding them - // into a number of switch - // statements. Over time, projects - // grow, and so does the number of - // test cases. The number of - // switch statements grows with - // that, and their length as well, - // and one starts to find ways to - // consider impossible examples where - // domains, boundary values, and - // right hand sides do not fit - // together any more, and starts - // losing the overview over the - // whole structure. Encapsulating - // everything belonging to a certain - // test case into a structure of its - // own has proven worthwhile for - // this, as it keeps everything that - // belongs to one test case in one - // place. Furthermore, it allows to - // put these things all in one or - // more files that are only devoted - // to test cases and their data, - // without having to bring their - // actual implementation into contact - // with the rest of the program. + // A final observation favoring the approach taken here is the following: we + // have found numerous times that when starting a project, the number of + // parameters (usually boundary values, right hand side, coarse grid, just + // as here) was small, and the number of test cases was small as well. One + // then starts out by handcoding them into a number of switch + // statements. Over time, projects grow, and so does the number of test + // cases. The number of switch statements grows with that, and + // their length as well, and one starts to find ways to consider impossible + // examples where domains, boundary values, and right hand sides do not fit + // together any more, and starts losing the overview over the whole + // structure. Encapsulating everything belonging to a certain test case into + // a structure of its own has proven worthwhile for this, as it keeps + // everything that belongs to one test case in one place. Furthermore, it + // allows to put these things all in one or more files that are only devoted + // to test cases and their data, without having to bring their actual + // implementation into contact with the rest of the program. 
// @sect3{Dual functionals} - // As with the other components of - // the program, we put everything we - // need to describe dual functionals - // into a namespace of its own, and - // define an abstract base class that - // provides the interface the class - // solving the dual problem needs for - // its work. + // As with the other components of the program, we put everything we need to + // describe dual functionals into a namespace of its own, and define an + // abstract base class that provides the interface the class solving the + // dual problem needs for its work. // - // We will then implement two such - // classes, for the evaluation of a - // point value and of the derivative - // of the solution at that point. For - // these functionals we already have - // the corresponding evaluation - // objects, so they are comlementary. + // We will then implement two such classes, for the evaluation of a point + // value and of the derivative of the solution at that point. For these + // functionals we already have the corresponding evaluation objects, so they + // are comlementary. namespace DualFunctional { // @sect4{The DualFunctionalBase class} - // First start with the base class - // for dual functionals. Since for - // linear problems the - // characteristics of the dual - // problem play a role only in the - // right hand side, we only need to - // provide for a function that - // assembles the right hand side - // for a given discretization: + // First start with the base class for dual functionals. Since for linear + // problems the characteristics of the dual problem play a role only in + // the right hand side, we only need to provide for a function that + // assembles the right hand side for a given discretization: template class DualFunctionalBase : public Subscriptor { @@ -1835,17 +1478,11 @@ namespace Step14 // @sect4{The PointValueEvaluation class} - // As a first application, we - // consider the functional - // corresponding to the evaluation - // of the solution's value at a - // given point which again we - // assume to be a vertex. Apart - // from the constructor that takes - // and stores the evaluation point, - // this class consists only of the - // function that implements - // assembling the right hand side. + // As a first application, we consider the functional corresponding to the + // evaluation of the solution's value at a given point which again we + // assume to be a vertex. Apart from the constructor that takes and stores + // the evaluation point, this class consists only of the function that + // implements assembling the right hand side. template class PointValueEvaluation : public DualFunctionalBase { @@ -1875,32 +1512,18 @@ namespace Step14 {} - // As for doing the main purpose of - // the class, assembling the right - // hand side, let us first consider - // what is necessary: The right - // hand side of the dual problem is - // a vector of values J(phi_i), - // where J is the error functional, - // and phi_i is the i-th shape - // function. Here, J is the - // evaluation at the point x0, - // i.e. J(phi_i)=phi_i(x0). + // As for doing the main purpose of the class, assembling the right hand + // side, let us first consider what is necessary: The right hand side of + // the dual problem is a vector of values J(phi_i), where J is the error + // functional, and phi_i is the i-th shape function. Here, J is the + // evaluation at the point x0, i.e. J(phi_i)=phi_i(x0). // - // Now, we have assumed that the - // evaluation point is a - // vertex. 
Thus, for the usual - // finite elements we might be - // using in this program, we can - // take for granted that at such a - // point exactly one shape function - // is nonzero, and in particular - // has the value one. Thus, we set - // the right hand side vector to - // all-zeros, then seek for the - // shape function associated with - // that point and set the - // corresponding value of the right + // Now, we have assumed that the evaluation point is a vertex. Thus, for + // the usual finite elements we might be using in this program, we can + // take for granted that at such a point exactly one shape function is + // nonzero, and in particular has the value one. Thus, we set the right + // hand side vector to all-zeros, then seek for the shape function + // associated with that point and set the corresponding value of the right // hand side vector to one: template void @@ -1908,16 +1531,12 @@ namespace Step14 assemble_rhs (const DoFHandler &dof_handler, Vector &rhs) const { - // So, first set everything to - // zeros... + // So, first set everything to zeros... rhs.reinit (dof_handler.n_dofs()); - // ...then loop over cells and - // find the evaluation point - // among the vertices (or very - // close to a vertex, which may - // happen due to floating point - // round-off): + // ...then loop over cells and find the evaluation point among the + // vertices (or very close to a vertex, which may happen due to floating + // point round-off): typename DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); @@ -1928,32 +1547,23 @@ namespace Step14 if (cell->vertex(vertex).distance(evaluation_point) < cell->diameter()*1e-8) { - // Ok, found, so set - // corresponding entry, - // and leave function + // Ok, found, so set corresponding entry, and leave function // since we are finished: rhs(cell->vertex_dof_index(vertex,0)) = 1; return; } - // Finally, a sanity check: if we - // somehow got here, then we must - // have missed the evaluation - // point, so raise an exception - // unconditionally: + // Finally, a sanity check: if we somehow got here, then we must have + // missed the evaluation point, so raise an exception unconditionally: AssertThrow (false, ExcEvaluationPointNotFound(evaluation_point)); } // @sect4{The PointXDerivativeEvaluation class} - // As second application, we again - // consider the evaluation of the - // x-derivative of the solution at - // one point. Again, the - // declaration of the class, and - // the implementation of its - // constructor is not too + // As second application, we again consider the evaluation of the + // x-derivative of the solution at one point. Again, the declaration of + // the class, and the implementation of its constructor is not too // interesting: template class PointXDerivativeEvaluation : public DualFunctionalBase @@ -1984,39 +1594,22 @@ namespace Step14 {} - // What is interesting is the - // implementation of this - // functional: here, + // What is interesting is the implementation of this functional: here, // J(phi_i)=d/dx phi_i(x0). // - // We could, as in the - // implementation of the respective - // evaluation object take the - // average of the gradients of each - // shape function phi_i at this - // evaluation point. However, we - // take a slightly different - // approach: we simply take the - // average over all cells that - // surround this point. 
The - // question which cells - // surrounds the evaluation - // point is made dependent on the - // mesh width by including those - // cells for which the distance of - // the cell's midpoint to the - // evaluation point is less than + // We could, as in the implementation of the respective evaluation object + // take the average of the gradients of each shape function phi_i at this + // evaluation point. However, we take a slightly different approach: we + // simply take the average over all cells that surround this point. The + // question which cells surrounds the evaluation point is + // made dependent on the mesh width by including those cells for which the + // distance of the cell's midpoint to the evaluation point is less than // the cell's diameter. // - // Taking the average of the - // gradient over the area/volume of - // these cells leads to a dual - // solution which is very close to - // the one which would result from - // the point evaluation of the - // gradient. It is simple to - // justify theoretically that this - // does not change the method + // Taking the average of the gradient over the area/volume of these cells + // leads to a dual solution which is very close to the one which would + // result from the point evaluation of the gradient. It is simple to + // justify theoretically that this does not change the method // significantly. template void @@ -2024,15 +1617,12 @@ namespace Step14 assemble_rhs (const DoFHandler &dof_handler, Vector &rhs) const { - // Again, first set all entries - // to zero: + // Again, first set all entries to zero: rhs.reinit (dof_handler.n_dofs()); - // Initialize a FEValues - // object with a quadrature - // formula, have abbreviations - // for the number of quadrature - // points and shape functions... + // Initialize a FEValues object with a quadrature formula, + // have abbreviations for the number of quadrature points and shape + // functions... 
QGauss quadrature(4); FEValues fe_values (dof_handler.get_fe(), quadrature, update_gradients | @@ -2041,28 +1631,19 @@ namespace Step14 const unsigned int n_q_points = fe_values.n_quadrature_points; const unsigned int dofs_per_cell = dof_handler.get_fe().dofs_per_cell; - // ...and have two objects that - // are used to store the global - // indices of the degrees of - // freedom on a cell, and the - // values of the gradients of the - // shape functions at the - // quadrature points: + // ...and have two objects that are used to store the global indices of + // the degrees of freedom on a cell, and the values of the gradients of + // the shape functions at the quadrature points: Vector cell_rhs (dofs_per_cell); std::vector local_dof_indices (dofs_per_cell); - // Finally have a variable in - // which we will sum up the - // area/volume of the cells over - // which we integrate, by - // integrating the unit functions + // Finally have a variable in which we will sum up the area/volume of + // the cells over which we integrate, by integrating the unit functions // on these cells: double total_volume = 0; - // Then start the loop over all - // cells, and select those cells - // which are close enough to the - // evaluation point: + // Then start the loop over all cells, and select those cells which are + // close enough to the evaluation point: typename DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); @@ -2070,15 +1651,10 @@ namespace Step14 if (cell->center().distance(evaluation_point) <= cell->diameter()) { - // If we have found such a - // cell, then initialize - // the FEValues object - // and integrate the - // x-component of the - // gradient of each shape - // function, as well as the - // unit function for the - // total area/volume. + // If we have found such a cell, then initialize the + // FEValues object and integrate the x-component of + // the gradient of each shape function, as well as the unit + // function for the total area/volume. fe_values.reinit (cell); cell_rhs = 0; @@ -2090,34 +1666,23 @@ namespace Step14 total_volume += fe_values.JxW (q); } - // If we have the local - // contributions, - // distribute them to the + // If we have the local contributions, distribute them to the // global vector: cell->get_dof_indices (local_dof_indices); for (unsigned int i=0; i 0, ExcEvaluationPointNotFound(evaluation_point)); - // Finally, we have by now only - // integrated the gradients of - // the shape functions, not - // taking their mean value. We - // fix this by dividing by the - // measure of the volume over - // which we have integrated: + // Finally, we have by now only integrated the gradients of the shape + // functions, not taking their mean value. We fix this by dividing by + // the measure of the volume over which we have integrated: rhs.scale (1./total_volume); } @@ -2131,41 +1696,22 @@ namespace Step14 // @sect4{The DualSolver class} - // In the same way as the - // PrimalSolver class above, we - // now implement a - // DualSolver. It has all the - // same features, the only - // difference is that it does not - // take a function object denoting - // a right hand side object, but - // now takes a - // DualFunctionalBase object - // that will assemble the right - // hand side vector of the dual - // problem. The rest of the class - // is rather trivial. + // In the same way as the PrimalSolver class above, we now + // implement a DualSolver. 
It has all the same features, the + // only difference is that it does not take a function object denoting a + // right hand side object, but now takes a DualFunctionalBase + // object that will assemble the right hand side vector of the dual + // problem. The rest of the class is rather trivial. // - // Since both primal and dual - // solver will use the same - // triangulation, but different - // discretizations, it now becomes - // clear why we have made the - // Base class a virtual one: - // since the final class will be - // derived from both - // PrimalSolver as well as - // DualSolver, it would have - // two Base instances, would we - // not have marked the inheritance - // as virtual. Since in many - // applications the base class - // would store much more - // information than just the - // triangulation which needs to be - // shared between primal and dual - // solvers, we do not usually want - // to use two such base classes. + // Since both primal and dual solver will use the same triangulation, but + // different discretizations, it now becomes clear why we have made the + // Base class a virtual one: since the final class will be + // derived from both PrimalSolver as well as + // DualSolver, it would have two Base instances, + // would we not have marked the inheritance as virtual. Since in many + // applications the base class would store much more information than just + // the triangulation which needs to be shared between primal and dual + // solvers, we do not usually want to use two such base classes. template class DualSolver : public Solver { @@ -2194,9 +1740,7 @@ namespace Step14 static const ZeroFunction boundary_values; - // Same as above -- make a - // derived class a friend of - // this one: + // Same as above -- make a derived class a friend of this one: friend class WeightedResidual; }; @@ -2257,23 +1801,14 @@ namespace Step14 // @sect4{The WeightedResidual class} - // Here finally comes the main - // class of this program, the one - // that implements the dual - // weighted residual error - // estimator. It joins the primal - // and dual solver classes to use - // them for the computation of - // primal and dual solutions, and - // implements the error - // representation formula for use - // as error estimate and mesh - // refinement. + // Here finally comes the main class of this program, the one that + // implements the dual weighted residual error estimator. It joins the + // primal and dual solver classes to use them for the computation of + // primal and dual solutions, and implements the error representation + // formula for use as error estimate and mesh refinement. // - // The first few of the functions - // of this class are mostly - // overriders of the respective - // functions of the base class: + // The first few of the functions of this class are mostly overriders of + // the respective functions of the base class: template class WeightedResidual : public PrimalSolver, public DualSolver @@ -2307,144 +1842,71 @@ namespace Step14 output_solution () const; private: - // In the private section, we - // have two functions that are - // used to call the - // solve_problem functions - // of the primal and dual base - // classes. These two functions - // will be called in parallel - // by the solve_problem - // function of this class. + // In the private section, we have two functions that are used to call + // the solve_problem functions of the primal and dual base + // classes. 
These two functions will be called in parallel by the + // solve_problem function of this class. void solve_primal_problem (); void solve_dual_problem (); - // Then declare abbreviations - // for active cell iterators, - // to avoid that we have to - // write this lengthy name - // over and over again: + // Then declare abbreviations for active cell iterators, to avoid that + // we have to write this lengthy name over and over again: typedef typename DoFHandler::active_cell_iterator active_cell_iterator; - // Next, declare a data type - // that we will us to store the - // contribution of faces to the - // error estimator. The idea is - // that we can compute the face - // terms from each of the two - // cells to this face, as they - // are the same when viewed - // from both sides. What we - // will do is to compute them - // only once, based on some - // rules explained below which - // of the two adjacent cells - // will be in charge to do - // so. We then store the - // contribution of each face in - // a map mapping faces to their - // values, and only collect the - // contributions for each cell - // by looping over the cells a - // second time and grabbing the - // values from the map. + // Next, declare a data type that we will us to store the contribution + // of faces to the error estimator. The idea is that we can compute the + // face terms from each of the two cells to this face, as they are the + // same when viewed from both sides. What we will do is to compute them + // only once, based on some rules explained below which of the two + // adjacent cells will be in charge to do so. We then store the + // contribution of each face in a map mapping faces to their values, and + // only collect the contributions for each cell by looping over the + // cells a second time and grabbing the values from the map. // - // The data type of this map is - // declared here: + // The data type of this map is declared here: typedef typename std::map::face_iterator,double> FaceIntegrals; - // In the computation of the - // error estimates on cells and - // faces, we need a number of - // helper objects, such as - // FEValues and - // FEFaceValues functions, - // but also temporary objects - // storing the values and - // gradients of primal and dual - // solutions, for - // example. These fields are - // needed in the three - // functions that do the - // integration on cells, and - // regular and irregular faces, - // respectively. + // In the computation of the error estimates on cells and faces, we need + // a number of helper objects, such as FEValues and + // FEFaceValues functions, but also temporary objects + // storing the values and gradients of primal and dual solutions, for + // example. These fields are needed in the three functions that do the + // integration on cells, and regular and irregular faces, respectively. // - // There are three reasonable - // ways to provide these - // fields: first, as local - // variables in the function - // that needs them; second, as - // member variables of this - // class; third, as arguments - // passed to that function. + // There are three reasonable ways to provide these fields: first, as + // local variables in the function that needs them; second, as member + // variables of this class; third, as arguments passed to that function. // - // These three alternatives all - // have drawbacks: the third - // that their number is not - // neglectable and would make - // calling these functions a - // lengthy enterprise. 
The - // second has the drawback that - // it disallows - // parallelization, since the - // threads that will compute - // the error estimate have to - // have their own copies of - // these variables each, so - // member variables of the - // enclosing class will not - // work. The first approach, - // although straightforward, - // has a subtle but important - // drawback: we will call these - // functions over and over - // again, many thousands of times - // maybe; it has now turned out - // that allocating vectors and - // other objects that need - // memory from the heap is an - // expensive business in terms - // of run-time, since memory - // allocation is expensive when - // several threads are - // involved. In our experience, - // more than 20 per cent of the - // total run time of error - // estimation functions are due - // to memory allocation, if - // done on a per-call level. It - // is thus significantly better - // to allocate the memory only - // once, and recycle the - // objects as often as - // possible. + // These three alternatives all have drawbacks: the third that their + // number is not neglectable and would make calling these functions a + // lengthy enterprise. The second has the drawback that it disallows + // parallelization, since the threads that will compute the error + // estimate have to have their own copies of these variables each, so + // member variables of the enclosing class will not work. The first + // approach, although straightforward, has a subtle but important + // drawback: we will call these functions over and over again, many + // thousands of times maybe; it has now turned out that allocating + // vectors and other objects that need memory from the heap is an + // expensive business in terms of run-time, since memory allocation is + // expensive when several threads are involved. In our experience, more + // than 20 per cent of the total run time of error estimation functions + // are due to memory allocation, if done on a per-call level. It is thus + // significantly better to allocate the memory only once, and recycle + // the objects as often as possible. // - // What to do? Our answer is to - // use a variant of the third - // strategy, namely generating - // these variables once in the - // main function of each - // thread, and passing them - // down to the functions that - // do the actual work. To avoid - // that we have to give these - // functions a dozen or so - // arguments, we pack all these - // variables into two - // structures, one which is - // used for the computations on - // cells, the other doing them - // on the faces. Instead of - // many individual objects, we - // will then only pass one such - // object to these functions, - // making their calling - // sequence simpler. + // What to do? Our answer is to use a variant of the third strategy, + // namely generating these variables once in the main function of each + // thread, and passing them down to the functions that do the actual + // work. To avoid that we have to give these functions a dozen or so + // arguments, we pack all these variables into two structures, one which + // is used for the computations on cells, the other doing them on the + // faces. Instead of many individual objects, we will then only pass one + // such object to these functions, making their calling sequence + // simpler. 
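// To make the chosen strategy a bit more tangible before looking at the
// actual CellData and FaceData structures, here is a minimal,
// library-independent sketch of the same idea: scratch objects are
// allocated once per thread and then handed by reference to the worker
// function, which itself never allocates. The sketch uses plain std::thread
// and invented names (ScratchData, integrate_on_cell) purely to stay
// self-contained -- the program itself of course uses the Threads functions
// of the library -- and the interleaved assignment of cells to threads is
// the same one used further down in estimate_some:

#include <numeric>
#include <thread>
#include <vector>

struct ScratchData
{
  std::vector<double> values;                    // reused on every cell
  explicit ScratchData (const unsigned int n) : values (n) {}
};

// The worker receives its scratch space by reference and never allocates:
void integrate_on_cell (const unsigned int cell,
                        ScratchData       &scratch,
                        double            &result)
{
  for (unsigned int q = 0; q < scratch.values.size (); ++q)
    scratch.values[q] = 1.0 * cell;              // stand-in for real work
  result += std::accumulate (scratch.values.begin (),
                             scratch.values.end (),
                             0.0);
}

int main ()
{
  const unsigned int n_threads = 4;
  const unsigned int n_cells   = 1000;

  std::vector<double>      partial_sums (n_threads, 0.0);
  std::vector<std::thread> threads;

  for (unsigned int t = 0; t < n_threads; ++t)
    threads.push_back (std::thread ([t, n_threads, n_cells, &partial_sums] ()
    {
      // One allocation per thread, then reuse on every interleaved cell:
      ScratchData scratch (8);
      for (unsigned int cell = t; cell < n_cells; cell += n_threads)
        integrate_on_cell (cell, scratch, partial_sums[t]);
    }));

  for (unsigned int t = 0; t < n_threads; ++t)
    threads[t].join ();
}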
struct CellData { FEValues fe_values; @@ -2475,18 +1937,11 @@ namespace Step14 - // Regarding the evaluation of - // the error estimator, we have - // two driver functions that do - // this: the first is called to - // generate the cell-wise - // estimates, and splits up the - // task in a number of threads - // each of which work on a - // subset of the cells. The - // first function will run the - // second for each of these - // threads: + // Regarding the evaluation of the error estimator, we have two driver + // functions that do this: the first is called to generate the cell-wise + // estimates, and splits up the task in a number of threads each of + // which work on a subset of the cells. The first function will run the + // second for each of these threads: void estimate_error (Vector &error_indicators) const; void estimate_some (const Vector &primal_solution, @@ -2496,15 +1951,10 @@ namespace Step14 Vector &error_indicators, FaceIntegrals &face_integrals) const; - // Then we have functions that - // do the actual integration of - // the error representation - // formula. They will treat the - // terms on the cell interiors, - // on those faces that have no - // hanging nodes, and on those - // faces with hanging nodes, - // respectively: + // Then we have functions that do the actual integration of the error + // representation formula. They will treat the terms on the cell + // interiors, on those faces that have no hanging nodes, and on those + // faces with hanging nodes, respectively: void integrate_over_cell (const active_cell_iterator &cell, const unsigned int cell_index, @@ -2531,15 +1981,11 @@ namespace Step14 - // In the implementation of this - // class, we first have the - // constructors of the CellData - // and FaceData member classes, - // and the WeightedResidual - // constructor. They only - // initialize fields to their - // correct lengths, so we do not - // have to discuss them to length. + // In the implementation of this class, we first have the constructors of + // the CellData and FaceData member classes, and + // the WeightedResidual constructor. They only initialize + // fields to their correct lengths, so we do not have to discuss them to + // length. template WeightedResidual::CellData:: CellData (const FiniteElement &fe, @@ -2611,15 +2057,10 @@ namespace Step14 {} - // The next five functions are - // boring, as they simply relay - // their work to the base - // classes. The first calls the - // primal and dual solvers in - // parallel, while postprocessing - // the solution and retrieving the - // number of degrees of freedom is - // done by the primal class. + // The next five functions are boring, as they simply relay their work to + // the base classes. The first calls the primal and dual solvers in + // parallel, while postprocessing the solution and retrieving the number + // of degrees of freedom is done by the primal class. template void WeightedResidual::solve_problem () @@ -2666,44 +2107,29 @@ namespace Step14 - // Now, it is becoming more - // interesting: the refine_grid - // function asks the error - // estimator to compute the - // cell-wise error indicators, then - // uses their absolute values for - // mesh refinement. + // Now, it is becoming more interesting: the refine_grid + // function asks the error estimator to compute the cell-wise error + // indicators, then uses their absolute values for mesh refinement. 
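// As a reminder, and only schematically (signs, constants, and the precise
// treatment of hanging nodes are as derived in the introduction of this
// program), the cell-wise quantities computed by the error estimator are the
// terms of the dual weighted residual error representation
// @f[
//   J(u)-J(u_h) \approx \sum_K \left[ \left(f+\Delta u_h,\; z-I_h z\right)_K
//     - \frac 12 \left(\left[\partial_n u_h\right],\; z-I_h z\right)_{\partial K}
//   \right],
// @f]
// where z denotes the dual solution and I_h z its interpolation into the
// primal finite element space. The factor one half in front of the jump
// terms is related to the factor -1/2 mentioned further down, when the face
// contributions stored in the face_integrals map are collected for each
// cell.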
template void WeightedResidual::refine_grid () { - // First call the function that - // computes the cell-wise and - // global error: + // First call the function that computes the cell-wise and global error: Vector error_indicators (this->triangulation->n_active_cells()); estimate_error (error_indicators); - // Then note that marking cells - // for refinement or coarsening - // only works if all indicators - // are positive, to allow their - // comparison. Thus, drop the - // signs on all these indicators: + // Then note that marking cells for refinement or coarsening only works + // if all indicators are positive, to allow their comparison. Thus, drop + // the signs on all these indicators: for (Vector::iterator i=error_indicators.begin(); i != error_indicators.end(); ++i) *i = std::fabs (*i); - // Finally, we can select between - // different strategies for - // refinement. The default here - // is to refine those cells with - // the largest error indicators - // that make up for a total of 80 - // per cent of the error, while - // we coarsen those with the - // smallest indicators that make - // up for the bottom 2 per cent - // of the error. + // Finally, we can select between different strategies for + // refinement. The default here is to refine those cells with the + // largest error indicators that make up for a total of 80 per cent of + // the error, while we coarsen those with the smallest indicators that + // make up for the bottom 2 per cent of the error. GridRefinement::refine_and_coarsen_fixed_fraction (*this->triangulation, error_indicators, 0.8, 0.02); @@ -2711,90 +2137,48 @@ namespace Step14 } - // Since we want to output both the - // primal and the dual solution, we - // overload the output_solution - // function. The only interesting - // feature of this function is that - // the primal and dual solutions - // are defined on different finite - // element spaces, which is not the - // format the DataOut class - // expects. Thus, we have to - // transfer them to a common finite - // element space. Since we want the - // solutions only to see them - // qualitatively, we contend - // ourselves with interpolating the - // dual solution to the (smaller) - // primal space. For the - // interpolation, there is a - // library function, that takes a - // ConstraintMatrix object - // including the hanging node - // constraints. The rest is - // standard. + // Since we want to output both the primal and the dual solution, we + // overload the output_solution function. The only + // interesting feature of this function is that the primal and dual + // solutions are defined on different finite element spaces, which is not + // the format the DataOut class expects. Thus, we have to + // transfer them to a common finite element space. Since we want the + // solutions only to see them qualitatively, we contend ourselves with + // interpolating the dual solution to the (smaller) primal space. For the + // interpolation, there is a library function, that takes a + // ConstraintMatrix object including the hanging node + // constraints. The rest is standard. // - // There is, however, one - // work-around worth mentioning: in - // this function, as in a couple of - // following ones, we have to - // access the DoFHandler - // objects and solutions of both - // the primal as well as of the - // dual solver. 
Since these are - // members of the Solver base - // class which exists twice in the - // class hierarchy leading to the - // present class (once as base - // class of the PrimalSolver - // class, once as base class of the - // DualSolver class), we have - // to disambiguate accesses to them - // by telling the compiler a member - // of which of these two instances - // we want to access. The way to do - // this would be identify the - // member by pointing a path - // through the class hierarchy - // which disambiguates the base - // class, for example writing - // PrimalSolver::dof_handler to - // denote the member variable - // dof_handler from the - // Solver base class of the - // PrimalSolver - // class. Unfortunately, this - // confuses gcc's version 2.96 (a - // version that was intended as a - // development snapshot, but - // delivered as system compiler by - // Red Hat in their 7.x releases) - // so much that it bails out and - // refuses to compile the code. + // There is, however, one work-around worth mentioning: in this function, + // as in a couple of following ones, we have to access the + // DoFHandler objects and solutions of both the primal as + // well as of the dual solver. Since these are members of the + // Solver base class which exists twice in the class + // hierarchy leading to the present class (once as base class of the + // PrimalSolver class, once as base class of the + // DualSolver class), we have to disambiguate accesses to + // them by telling the compiler a member of which of these two instances + // we want to access. The way to do this would be identify the member by + // pointing a path through the class hierarchy which disambiguates the + // base class, for example writing PrimalSolver::dof_handler + // to denote the member variable dof_handler from the + // Solver base class of the PrimalSolver + // class. Unfortunately, this confuses gcc's version 2.96 (a version that + // was intended as a development snapshot, but delivered as system + // compiler by Red Hat in their 7.x releases) so much that it bails out + // and refuses to compile the code. // - // Thus, we have to work around - // this problem. We do this by - // introducing references to the - // PrimalSolver and - // DualSolver components of the - // WeightedResidual object at - // the beginning of the - // function. Since each of these - // has an unambiguous base class - // Solver, we can access the - // member variables we want through - // these references. However, we - // are now accessing protected - // member variables of these - // classes through a pointer other - // than the this pointer (in - // fact, this is of course the - // this pointer, but not - // explicitly). This finally is the - // reason why we had to declare the - // present class a friend of the - // classes we so access. + // Thus, we have to work around this problem. We do this by introducing + // references to the PrimalSolver and DualSolver + // components of the WeightedResidual object at the beginning + // of the function. Since each of these has an unambiguous base class + // Solver, we can access the member variables we want through + // these references. However, we are now accessing protected member + // variables of these classes through a pointer other than the + // this pointer (in fact, this is of course the + // this pointer, but not explicitly). This finally is the + // reason why we had to declare the present class a friend of the classes + // we so access. 
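// Since virtual base classes and this kind of disambiguation are easy to
// get wrong, the following stand-alone sketch condenses the situation to a
// few lines. The Sketch* class names are invented and the classes are
// reduced to single integer members; only the inheritance structure mirrors
// that of this program:

#include <iostream>

struct SketchBase
{
  SketchBase () : refinement_cycle (0) {}
  int refinement_cycle;                    // data shared by both solvers
};

// Inheriting virtually makes sure the final class contains only one
// SketchBase sub-object:
struct SketchSolver : public virtual SketchBase
{
  SketchSolver () : dof_handler (0) {}
  int dof_handler;                         // exists once per solver branch
};

struct SketchPrimalSolver : public SketchSolver {};
struct SketchDualSolver   : public SketchSolver {};

struct SketchWeightedResidual : public SketchPrimalSolver,
                                public SketchDualSolver
{
  void output_solution () const
  {
    // Writing plain 'dof_handler' here would not compile: there are two
    // SketchSolver sub-objects, one from each branch. References to the
    // two branches resolve the ambiguity, just as in the function below:
    const SketchPrimalSolver &primal = *this;
    const SketchDualSolver   &dual   = *this;
    std::cout << primal.dof_handler << ' ' << dual.dof_handler << std::endl;

    // The virtually inherited base, on the other hand, exists only once:
    std::cout << (&primal.refinement_cycle == &dual.refinement_cycle)
              << std::endl;                // prints 1
  }
};

int main ()
{
  SketchWeightedResidual wr;
  wr.output_solution ();
}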
template void WeightedResidual::output_solution () const @@ -2816,11 +2200,9 @@ namespace Step14 DataOut data_out; data_out.attach_dof_handler (primal_solver.dof_handler); - // Add the data vectors for which - // we want output. Add them both, - // the DataOut functions can - // handle as many data vectors as - // you wish to write to output: + // Add the data vectors for which we want output. Add them both, the + // DataOut functions can handle as many data vectors as you + // wish to write to output: data_out.add_data_vector (primal_solver.solution, "primal_solution"); data_out.add_data_vector (dual_solution, @@ -2843,13 +2225,9 @@ namespace Step14 // @sect4{Error estimation driver functions} // - // As for the actual computation of - // error estimates, let's start - // with the function that drives - // all this, i.e. calls those - // functions that actually do the - // work, and finally collects the - // results. + // As for the actual computation of error estimates, let's start with the + // function that drives all this, i.e. calls those functions that actually + // do the work, and finally collects the results. template void @@ -2859,39 +2237,22 @@ namespace Step14 const PrimalSolver &primal_solver = *this; const DualSolver &dual_solver = *this; - // The first task in computing - // the error is to set up vectors - // that denote the primal - // solution, and the weights - // (z-z_h)=(z-I_hz), both in the - // finite element space for which - // we have computed the dual - // solution. For this, we have to - // interpolate the primal - // solution to the dual finite - // element space, and to subtract - // the interpolation of the - // computed dual solution to the - // primal finite element - // space. Fortunately, the - // library provides functions for - // the interpolation into larger - // or smaller finite element - // spaces, so this is mostly - // obvious. + // The first task in computing the error is to set up vectors that + // denote the primal solution, and the weights (z-z_h)=(z-I_hz), both in + // the finite element space for which we have computed the dual + // solution. For this, we have to interpolate the primal solution to the + // dual finite element space, and to subtract the interpolation of the + // computed dual solution to the primal finite element + // space. Fortunately, the library provides functions for the + // interpolation into larger or smaller finite element spaces, so this + // is mostly obvious. // - // First, let's do that for the - // primal solution: it is - // cell-wise interpolated into - // the finite element space in - // which we have solved the dual - // problem: But, again as in the - // WeightedResidual::output_solution - // function we first need to - // create a ConstraintMatrix - // including the hanging node - // constraints, but this time of - // the dual finite element space. + // First, let's do that for the primal solution: it is cell-wise + // interpolated into the finite element space in which we have solved + // the dual problem: But, again as in the + // WeightedResidual::output_solution function we first need + // to create a ConstraintMatrix including the hanging node constraints, + // but this time of the dual finite element space. 
ConstraintMatrix dual_hanging_node_constraints; DoFTools::make_hanging_node_constraints (dual_solver.dof_handler, dual_hanging_node_constraints); @@ -2903,17 +2264,11 @@ namespace Step14 dual_hanging_node_constraints, primal_solution); - // Then for computing the - // interpolation of the - // numerically approximated dual - // solution z into the finite - // element space of the primal - // solution and subtracting it - // from z: use the - // interpolate_difference - // function, that gives (z-I_hz) - // in the element space of the - // dual solution. + // Then for computing the interpolation of the numerically approximated + // dual solution z into the finite element space of the primal solution + // and subtracting it from z: use the + // interpolate_difference function, that gives (z-I_hz) in + // the element space of the dual solution. ConstraintMatrix primal_hanging_node_constraints; DoFTools::make_hanging_node_constraints (primal_solver.dof_handler, primal_hanging_node_constraints); @@ -2926,51 +2281,27 @@ namespace Step14 primal_hanging_node_constraints, dual_weights); - // Note that this could probably - // have been more efficient since - // those constraints have been - // used previously when - // assembling matrix and right - // hand side for the primal - // problem and writing out the - // dual solution. We leave the - // optimization of the program in - // this respect as an exercise. - - // Having computed the dual - // weights we now proceed with - // computing the cell and face - // residuals of the primal - // solution. First we set up a - // map between face iterators and - // their jump term contributions - // of faces to the error - // estimator. The reason is that - // we compute the jump terms only - // once, from one side of the - // face, and want to collect them - // only afterwards when looping - // over all cells a second time. + // Note that this could probably have been more efficient since those + // constraints have been used previously when assembling matrix and + // right hand side for the primal problem and writing out the dual + // solution. We leave the optimization of the program in this respect as + // an exercise. + + // Having computed the dual weights we now proceed with computing the + // cell and face residuals of the primal solution. First we set up a map + // between face iterators and their jump term contributions of faces to + // the error estimator. The reason is that we compute the jump terms + // only once, from one side of the face, and want to collect them only + // afterwards when looping over all cells a second time. // - // We initialize this map already - // with a value of -1e20 for all - // faces, since this value will - // strike in the results if - // something should go wrong and - // we fail to compute the value - // for a face for some - // reason. Secondly, we - // initialize the map once before - // we branch to different threads - // since this way the map's - // structure is no more modified - // by the individual threads, - // only existing entries are set - // to new values. This relieves - // us from the necessity to - // synchronise the threads - // through a mutex each time they - // write to (and modify the + // We initialize this map already with a value of -1e20 for all faces, + // since this value will strike in the results if something should go + // wrong and we fail to compute the value for a face for some + // reason. 
Secondly, we initialize the map once before we branch to + // different threads since this way the map's structure is no more + // modified by the individual threads, only existing entries are set to + // new values. This relieves us from the necessity to synchronise the + // threads through a mutex each time they write to (and modify the // structure of) this map. FaceIntegrals face_integrals; for (active_cell_iterator cell=dual_solver.dof_handler.begin_active(); @@ -2981,19 +2312,14 @@ namespace Step14 ++face_no) face_integrals[cell->face(face_no)] = -1e20; - // Then set up a vector with - // error indicators. Reserve one - // slot for each cell and set it - // to zero. + // Then set up a vector with error indicators. Reserve one slot for + // each cell and set it to zero. error_indicators.reinit (dual_solver.dof_handler .get_tria().n_active_cells()); - // Now start a number of threads - // which compute the error - // formula on parts of all the - // cells, and once they are all - // started wait until they have - // all finished: + // Now start a number of threads which compute the error formula on + // parts of all the cells, and once they are all started wait until they + // have all finished: const unsigned int n_threads = multithread_info.n_default_threads; Threads::ThreadGroup<> threads; for (unsigned int i=0; i void WeightedResidual:: @@ -3062,21 +2376,12 @@ namespace Step14 const PrimalSolver &primal_solver = *this; const DualSolver &dual_solver = *this; - // At the beginning, we - // initialize two variables for - // each thread which may be - // running this function. The - // reason for these functions was - // discussed above, when the - // respective classes were - // discussed, so we here only - // point out that since they are - // local to the function that is - // spawned when running more than - // one thread, the data of these - // objects exists actually once - // per thread, so we don't have - // to take care about + // At the beginning, we initialize two variables for each thread which + // may be running this function. The reason for these functions was + // discussed above, when the respective classes were discussed, so we + // here only point out that since they are local to the function that is + // spawned when running more than one thread, the data of these objects + // exists actually once per thread, so we don't have to take care about // synchronising access to them. CellData cell_data (*dual_solver.fe, *dual_solver.quadrature, @@ -3084,159 +2389,90 @@ namespace Step14 FaceData face_data (*dual_solver.fe, *dual_solver.face_quadrature); - // Then calculate the start cell - // for this thread. We let the - // different threads run on - // interleaved cells, i.e. for - // example if we have 4 threads, - // then the first thread treates - // cells 0, 4, 8, etc, while the - // second threads works on cells 1, - // 5, 9, and so on. The reason is - // that it takes vastly more time - // to work on cells with hanging - // nodes than on regular cells, but - // such cells are not evenly - // distributed across the range of - // cell iterators, so in order to - // have the different threads do - // approximately the same amount of - // work, we have to let them work - // interleaved to the effect of a - // pseudorandom distribution of the - // `hard' cells to the different - // threads. + // Then calculate the start cell for this thread. We let the different + // threads run on interleaved cells, i.e. 
for example if we have 4 + // threads, then the first thread treates cells 0, 4, 8, etc, while the + // second threads works on cells 1, 5, 9, and so on. The reason is that + // it takes vastly more time to work on cells with hanging nodes than on + // regular cells, but such cells are not evenly distributed across the + // range of cell iterators, so in order to have the different threads do + // approximately the same amount of work, we have to let them work + // interleaved to the effect of a pseudorandom distribution of the + // `hard' cells to the different threads. active_cell_iterator cell=dual_solver.dof_handler.begin_active(); for (unsigned int t=0; (terror_indicators - // variable: + // First task on each cell is to compute the cell residual + // contributions of this cell, and put them into the + // error_indicators variable: integrate_over_cell (cell, cell_index, primal_solution, dual_weights, cell_data, error_indicators); - // After computing the cell - // terms, turn to the face - // terms. For this, loop over - // all faces of the present - // cell, and see whether - // something needs to be - // computed on it: + // After computing the cell terms, turn to the face terms. For this, + // loop over all faces of the present cell, and see whether + // something needs to be computed on it: for (unsigned int face_no=0; face_no::faces_per_cell; ++face_no) { - // First, if this face is - // part of the boundary, - // then there is nothing - // to do. However, to - // make things easier - // when summing up the - // contributions of the - // faces of cells, we - // enter this face into - // the list of faces with - // a zero contribution to - // the error. + // First, if this face is part of the boundary, then there is + // nothing to do. However, to make things easier when summing up + // the contributions of the faces of cells, we enter this face + // into the list of faces with a zero contribution to the error. if (cell->face(face_no)->at_boundary()) { face_integrals[cell->face(face_no)] = 0; continue; } - // Next, note that since - // we want to compute the - // jump terms on each - // face only once - // although we access it - // twice (if it is not at - // the boundary), we have - // to define some rules - // who is responsible for - // computing on a face: + // Next, note that since we want to compute the jump terms on + // each face only once although we access it twice (if it is not + // at the boundary), we have to define some rules who is + // responsible for computing on a face: // - // First, if the - // neighboring cell is on - // the same level as this - // one, i.e. neither - // further refined not - // coarser, then the one - // with the lower index - // within this level does - // the work. In other - // words: if the other - // one has a lower index, - // then skip work on this - // face: + // First, if the neighboring cell is on the same level as this + // one, i.e. neither further refined not coarser, then the one + // with the lower index within this level does the work. In + // other words: if the other one has a lower index, then skip + // work on this face: if ((cell->neighbor(face_no)->has_children() == false) && (cell->neighbor(face_no)->level() == cell->level()) && (cell->neighbor(face_no)->index() < cell->index())) continue; - // Likewise, we always - // work from the coarser - // cell if this and its - // neighbor differ in - // refinement. 
Thus, if - // the neighboring cell - // is less refined than - // the present one, then - // do nothing since we - // integrate over the - // subfaces when we visit - // the coarse cell. + // Likewise, we always work from the coarser cell if this and + // its neighbor differ in refinement. Thus, if the neighboring + // cell is less refined than the present one, then do nothing + // since we integrate over the subfaces when we visit the coarse + // cell. if (cell->at_boundary(face_no) == false) if (cell->neighbor(face_no)->level() < cell->level()) continue; - // Now we know that we - // are in charge here, so - // actually compute the - // face jump terms. If - // the face is a regular - // one, i.e. the other - // side's cell is neither - // coarser not finer than - // this cell, then call - // one function, and if - // the cell on the other - // side is further - // refined, then use - // another function. Note - // that the case that the - // cell on the other side - // is coarser cannot - // happen since we have - // decided above that we - // handle this case when - // we pass over that - // other cell. + // Now we know that we are in charge here, so actually compute + // the face jump terms. If the face is a regular one, i.e. the + // other side's cell is neither coarser not finer than this + // cell, then call one function, and if the cell on the other + // side is further refined, then use another function. Note that + // the case that the cell on the other side is coarser cannot + // happen since we have decided above that we handle this case + // when we pass over that other cell. if (cell->face(face_no)->has_children() == false) integrate_over_regular_face (cell, face_no, primal_solution, @@ -3251,16 +2487,10 @@ namespace Step14 face_integrals); } - // After computing the cell - // contributions and looping - // over the faces, go to the - // next cell for this - // thread. Note again that - // the cells for each of the - // threads are interleaved. - // If we are at the end of - // our workload, jump out - // of the loop. + // After computing the cell contributions and looping over the + // faces, go to the next cell for this thread. Note again that the + // cells for each of the threads are interleaved. If we are at the + // end of our workload, jump out of the loop. 
for (unsigned int t=0; ((t void WeightedResidual:: integrate_over_cell (const active_cell_iterator &cell, @@ -3286,14 +2515,10 @@ namespace Step14 CellData &cell_data, Vector &error_indicators) const { - // The tasks to be done are what - // appears natural from looking - // at the error estimation - // formula: first get the - // right hand side and - // Laplacian of the numerical - // solution at the quadrature - // points for the cell residual, + // The tasks to be done are what appears natural from looking at the + // error estimation formula: first get the right hand side and Laplacian + // of the numerical solution at the quadrature points for the cell + // residual, cell_data.fe_values.reinit (cell); cell_data.right_hand_side ->value_list (cell_data.fe_values.get_quadrature_points(), @@ -3305,10 +2530,8 @@ namespace Step14 cell_data.fe_values.get_function_values (dual_weights, cell_data.dual_weights); - // ...and finally build the sum - // over all quadrature points and - // store it with the present - // cell: + // ...and finally build the sum over all quadrature points and store it + // with the present cell: double sum = 0; for (unsigned int p=0; p void WeightedResidual:: @@ -3343,115 +2560,74 @@ namespace Step14 const unsigned int n_q_points = face_data.fe_face_values_cell.n_quadrature_points; - // The first step is to get the - // values of the gradients at the - // quadrature points of the - // finite element field on the - // present cell. For this, - // initialize the - // FEFaceValues object - // corresponding to this side of - // the face, and extract the - // gradients using that - // object. + // The first step is to get the values of the gradients at the + // quadrature points of the finite element field on the present + // cell. For this, initialize the FEFaceValues object + // corresponding to this side of the face, and extract the gradients + // using that object. face_data.fe_face_values_cell.reinit (cell, face_no); face_data.fe_face_values_cell.get_function_grads (primal_solution, face_data.cell_grads); - // The second step is then to - // extract the gradients of the - // finite element solution at the - // quadrature points on the other - // side of the face, i.e. from - // the neighboring cell. + // The second step is then to extract the gradients of the finite + // element solution at the quadrature points on the other side of the + // face, i.e. from the neighboring cell. // - // For this, do a sanity check - // before: make sure that the - // neigbor actually exists (yes, - // we should not have come here - // if the neighbor did not exist, - // but in complicated software - // there are bugs, so better - // check this), and if this is - // not the case throw an error. + // For this, do a sanity check before: make sure that the neigbor + // actually exists (yes, we should not have come here if the neighbor + // did not exist, but in complicated software there are bugs, so better + // check this), and if this is not the case throw an error. Assert (cell->neighbor(face_no).state() == IteratorState::valid, ExcInternalError()); - // If we have that, then we need - // to find out with which face of - // the neighboring cell we have - // to work, i.e. the - // home-manythe neighbor the - // present cell is of the cell - // behind the present face. 
For - // this, there is a function, and - // we put the result into a - // variable with the name - // neighbor_neighbor: + // If we have that, then we need to find out with which face of the + // neighboring cell we have to work, i.e. the home-manythe + // neighbor the present cell is of the cell behind the present face. For + // this, there is a function, and we put the result into a variable with + // the name neighbor_neighbor: const unsigned int neighbor_neighbor = cell->neighbor_of_neighbor (face_no); - // Then define an abbreviation - // for the neigbor cell, - // initialize the - // FEFaceValues object on - // that cell, and extract the + // Then define an abbreviation for the neigbor cell, initialize the + // FEFaceValues object on that cell, and extract the // gradients on that cell: const active_cell_iterator neighbor = cell->neighbor(face_no); face_data.fe_face_values_neighbor.reinit (neighbor, neighbor_neighbor); face_data.fe_face_values_neighbor.get_function_grads (primal_solution, face_data.neighbor_grads); - // Now that we have the gradients - // on this and the neighboring - // cell, compute the jump - // residual by multiplying the - // jump in the gradient with the - // normal vector: + // Now that we have the gradients on this and the neighboring cell, + // compute the jump residual by multiplying the jump in the gradient + // with the normal vector: for (unsigned int p=0; pface(face_no)) != face_integrals.end(), ExcInternalError()); Assert (face_integrals[cell->face(face_no)] == -1e20, ExcInternalError()); - // ...then store computed value - // at assigned location. Note - // that the stored value does not - // contain the factor 1/2 that - // appears in the error - // representation. The reason is - // that the term actually does - // not have this factor if we - // loop over all faces in the - // triangulation, but only - // appears if we write it as a - // sum over all cells and all - // faces of each cell; we thus - // visit the same face twice. We - // take account of this by using - // this factor -1/2 later, when we - // sum up the contributions for + // ...then store computed value at assigned location. Note that the + // stored value does not contain the factor 1/2 that appears in the + // error representation. The reason is that the term actually does not + // have this factor if we loop over all faces in the triangulation, but + // only appears if we write it as a sum over all cells and all faces of + // each cell; we thus visit the same face twice. We take account of this + // by using this factor -1/2 later, when we sum up the contributions for // each cell individually. face_integrals[cell->face(face_no)] = face_integral; } @@ -3459,10 +2635,8 @@ namespace Step14 // @sect4{Computing edge term error contributions -- 2} - // We are still missing the case of - // faces with hanging nodes. This - // is what is covered in this - // function: + // We are still missing the case of faces with hanging nodes. 
This is what + // is covered in this function: template void WeightedResidual:: integrate_over_irregular_face (const active_cell_iterator &cell, @@ -3472,11 +2646,9 @@ namespace Step14 FaceData &face_data, FaceIntegrals &face_integrals) const { - // First again two abbreviations, - // and some consistency checks - // whether the function is called - // only on faces for which it is - // supposed to be called: + // First again two abbreviations, and some consistency checks whether + // the function is called only on faces for which it is supposed to be + // called: const unsigned int n_q_points = face_data.fe_face_values_cell.n_quadrature_points; @@ -3489,61 +2661,36 @@ namespace Step14 Assert (neighbor->has_children(), ExcInternalError()); - // Then find out which neighbor - // the present cell is of the - // adjacent cell. Note that we - // will operator on the children - // of this adjacent cell, but - // that their orientation is the - // same as that of their mother, - // i.e. the neigbor direction is - // the same. + // Then find out which neighbor the present cell is of the adjacent + // cell. Note that we will operator on the children of this adjacent + // cell, but that their orientation is the same as that of their mother, + // i.e. the neigbor direction is the same. const unsigned int neighbor_neighbor = cell->neighbor_of_neighbor (face_no); - // Then simply do everything we - // did in the previous function - // for one face for all the - // sub-faces now: + // Then simply do everything we did in the previous function for one + // face for all the sub-faces now: for (unsigned int subface_no=0; subface_non_children(); ++subface_no) { - // Start with some checks - // again: get an iterator - // pointing to the cell - // behind the present subface - // and check whether its face - // is a subface of the one we - // are considering. If that - // were not the case, then - // there would be either a - // bug in the - // neighbor_neighbor - // function called above, or - // -- worse -- some function - // in the library did not - // keep to some underlying - // assumptions about cells, - // their children, and their - // faces. In any case, even - // though this assertion - // should not be triggered, - // it does not harm to be - // cautious, and in optimized - // mode computations the - // assertion will be removed - // anyway. + // Start with some checks again: get an iterator pointing to the + // cell behind the present subface and check whether its face is a + // subface of the one we are considering. If that were not the case, + // then there would be either a bug in the + // neighbor_neighbor function called above, or -- worse + // -- some function in the library did not keep to some underlying + // assumptions about cells, their children, and their faces. In any + // case, even though this assertion should not be triggered, it does + // not harm to be cautious, and in optimized mode computations the + // assertion will be removed anyway. 
const active_cell_iterator neighbor_child = cell->neighbor_child_on_subface (face_no, subface_no); Assert (neighbor_child->face(neighbor_neighbor) == cell->face(face_no)->child(subface_no), ExcInternalError()); - // Now start the work by - // again getting the gradient - // of the solution first at - // this side of the - // interface, + // Now start the work by again getting the gradient of the solution + // first at this side of the interface, face_data.fe_subface_values_cell.reinit (cell, face_no, subface_no); face_data.fe_subface_values_cell.get_function_grads (primal_solution, face_data.cell_grads); @@ -3553,13 +2700,9 @@ namespace Step14 face_data.fe_face_values_neighbor.get_function_grads (primal_solution, face_data.neighbor_grads); - // and finally building the - // jump residuals. Since we - // take the normal vector - // from the other cell this - // time, revert the sign of - // the first term compared to - // the other function: + // and finally building the jump residuals. Since we take the normal + // vector from the other cell this time, revert the sign of the + // first term compared to the other function: for (unsigned int p=0; pn_children(); ++subface_no) @@ -3605,8 +2740,7 @@ namespace Step14 sum += face_integrals[face->child(subface_no)]; } - // Finally store the value with - // the parent face. + // Finally store the value with the parent face. face_integrals[face] = sum; } @@ -3615,102 +2749,55 @@ namespace Step14 // @sect3{A simulation framework} - // In the previous example program, - // we have had two functions that - // were used to drive the process of - // solving on subsequently finer - // grids. We extend this here to - // allow for a number of parameters - // to be passed to these functions, - // and put all of that into framework - // class. + // In the previous example program, we have had two functions that were used + // to drive the process of solving on subsequently finer grids. We extend + // this here to allow for a number of parameters to be passed to these + // functions, and put all of that into framework class. // - // You will have noted that this - // program is built up of a number of - // small parts (evaluation functions, - // solver classes implementing - // various refinement methods, - // different dual functionals, - // different problem and data - // descriptions), which makes the - // program relatively simple to - // extend, but also allows to solve a - // large number of different problems - // by replacing one part by - // another. We reflect this - // flexibility by declaring a - // structure in the following - // framework class that holds a - // number of parameters that may be - // set to test various combinations - // of the parts of this program, and - // which can be used to test it at - // various problems and + // You will have noted that this program is built up of a number of small + // parts (evaluation functions, solver classes implementing various + // refinement methods, different dual functionals, different problem and + // data descriptions), which makes the program relatively simple to extend, + // but also allows to solve a large number of different problems by + // replacing one part by another. We reflect this flexibility by declaring a + // structure in the following framework class that holds a number of + // parameters that may be set to test various combinations of the parts of + // this program, and which can be used to test it at various problems and // discretizations in a simple way. 
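As a quick preview of how the framework class declared next is meant to be driven, here is a minimal sketch; the parameter values are the ones used in the main() function at the bottom of this program, with global refinement chosen so that no dual functional needs to be set:

  const unsigned int dim = 2;
  Framework<dim>::ProblemDescription descriptor;
  descriptor.refinement_criterion
    = Framework<dim>::ProblemDescription::global_refinement;
  descriptor.primal_fe_degree = 1;
  descriptor.dual_fe_degree   = 2;
  descriptor.data = new Data::SetUp<Data::Exercise_2_3<dim>,dim> ();
  descriptor.max_degrees_of_freedom = 20000;
  Framework<dim>::run (descriptor);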
template struct Framework { public: - // First, we declare two - // abbreviations for simple use - // of the respective data types: + // First, we declare two abbreviations for simple use of the respective + // data types: typedef Evaluation::EvaluationBase Evaluator; typedef std::list EvaluatorList; - // Then we have the structure - // which declares all the - // parameters that may be set. In - // the default constructor of the - // structure, these values are - // all set to default values, for - // simple use. + // Then we have the structure which declares all the parameters that may + // be set. In the default constructor of the structure, these values are + // all set to default values, for simple use. struct ProblemDescription { - // First allow for the - // degrees of the piecewise - // polynomials by which the - // primal and dual problems - // will be discretized. They - // default to (bi-, - // tri-)linear ansatz - // functions for the primal, - // and (bi-, tri-)quadratic - // ones for the dual - // problem. If a refinement - // criterion is chosen that - // does not need the solution - // of a dual problem, the - // value of the dual finite - // element degree is of - // course ignored. + // First allow for the degrees of the piecewise polynomials by which the + // primal and dual problems will be discretized. They default to (bi-, + // tri-)linear ansatz functions for the primal, and (bi-, tri-)quadratic + // ones for the dual problem. If a refinement criterion is chosen that + // does not need the solution of a dual problem, the value of the dual + // finite element degree is of course ignored. unsigned int primal_fe_degree; unsigned int dual_fe_degree; - // Then have an object that - // describes the problem - // type, i.e. right hand - // side, domain, boundary - // values, etc. The pointer - // needed here defaults to - // the Null pointer, i.e. you - // will have to set it in - // actual instances of this - // object to make it useful. + // Then have an object that describes the problem type, i.e. right hand + // side, domain, boundary values, etc. The pointer needed here defaults + // to the Null pointer, i.e. you will have to set it in actual instances + // of this object to make it useful. SmartPointer > data; - // Since we allow to use - // different refinement - // criteria (global - // refinement, refinement by - // the Kelly error indicator, - // possibly with a weight, - // and using the dual - // estimator), define a - // number of enumeration - // values, and subsequently a - // variable of that type. It - // will default to + // Since we allow to use different refinement criteria (global + // refinement, refinement by the Kelly error indicator, possibly with a + // weight, and using the dual estimator), define a number of enumeration + // values, and subsequently a variable of that type. It will default to // dual_weighted_error_estimator. enum RefinementCriterion { @@ -3722,70 +2809,42 @@ namespace Step14 RefinementCriterion refinement_criterion; - // Next, an object that - // describes the dual - // functional. It is only - // needed if the dual - // weighted residual - // refinement is chosen, and - // also defaults to a Null - // pointer. + // Next, an object that describes the dual functional. It is only needed + // if the dual weighted residual refinement is chosen, and also defaults + // to a Null pointer. SmartPointer > dual_functional; - // Then a list of evaluation - // objects. Its default value - // is empty, i.e. no - // evaluation objects. 
+ // Then a list of evaluation objects. Its default value is empty, + // i.e. no evaluation objects. EvaluatorList evaluator_list; - // Next to last, a function - // that is used as a weight - // to the - // RefinementWeightedKelly - // class. The default value - // of this pointer is zero, - // but you have to set it to - // some other value if you - // want to use the - // weighted_kelly_indicator - // refinement criterion. + // Next to last, a function that is used as a weight to the + // RefinementWeightedKelly class. The default value of this + // pointer is zero, but you have to set it to some other value if you + // want to use the weighted_kelly_indicator refinement + // criterion. SmartPointer > kelly_weight; - // Finally, we have a - // variable that denotes the - // maximum number of degrees - // of freedom we allow for - // the (primal) - // discretization. If it is - // exceeded, we stop the - // process of solving and - // intermittend mesh - // refinement. Its default - // value is 20,000. + // Finally, we have a variable that denotes the maximum number of + // degrees of freedom we allow for the (primal) discretization. If it is + // exceeded, we stop the process of solving and intermittend mesh + // refinement. Its default value is 20,000. unsigned int max_degrees_of_freedom; - // Finally the default - // constructor of this class: + // Finally the default constructor of this class: ProblemDescription (); }; - // The driver framework class - // only has one method which - // calls solver and mesh - // refinement intermittently, and - // does some other small tasks in - // between. Since it does not - // need data besides the - // parameters given to it, we - // make it static: + // The driver framework class only has one method which calls solver and + // mesh refinement intermittently, and does some other small tasks in + // between. Since it does not need data besides the parameters given to + // it, we make it static: static void run (const ProblemDescription &descriptor); }; - // As for the implementation, first - // the constructor of the parameter - // object, setting all values to - // their defaults: + // As for the implementation, first the constructor of the parameter object, + // setting all values to their defaults: template Framework::ProblemDescription::ProblemDescription () : @@ -3797,28 +2856,23 @@ namespace Step14 - // Then the function which drives the - // whole process: + // Then the function which drives the whole process: template void Framework::run (const ProblemDescription &descriptor) { - // First create a triangulation - // from the given data object, + // First create a triangulation from the given data object, Triangulation triangulation (Triangulation::smoothing_on_refinement); descriptor.data->create_coarse_grid (triangulation); - // then a set of finite elements - // and appropriate quadrature - // formula: + // then a set of finite elements and appropriate quadrature formula: const FE_Q primal_fe(descriptor.primal_fe_degree); const FE_Q dual_fe(descriptor.dual_fe_degree); const QGauss quadrature(descriptor.dual_fe_degree+1); const QGauss face_quadrature(descriptor.dual_fe_degree+1); - // Next, select one of the classes - // implementing different - // refinement criteria. + // Next, select one of the classes implementing different refinement + // criteria. 
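Before moving on to the solver selection, for reference the finite element and quadrature objects just created read as follows with their spatial dimension template arguments written out; note that the face quadrature is a formula of one dimension lower:

  const FE_Q<dim>     primal_fe (descriptor.primal_fe_degree);
  const FE_Q<dim>     dual_fe (descriptor.dual_fe_degree);
  const QGauss<dim>   quadrature (descriptor.dual_fe_degree+1);
  const QGauss<dim-1> face_quadrature (descriptor.dual_fe_degree+1);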
LaplaceSolver::Base *solver = 0; switch (descriptor.refinement_criterion) { @@ -3877,20 +2931,13 @@ namespace Step14 AssertThrow (false, ExcInternalError()); } - // Now that all objects are in - // place, run the main loop. The - // stopping criterion is - // implemented at the bottom of the - // loop. + // Now that all objects are in place, run the main loop. The stopping + // criterion is implemented at the bottom of the loop. // - // In the loop, first set the new - // cycle number, then solve the - // problem, output its solution(s), - // apply the evaluation objects to - // it, then decide whether we want - // to refine the mesh further and - // solve again on this mesh, or - // jump out of the loop. + // In the loop, first set the new cycle number, then solve the problem, + // output its solution(s), apply the evaluation objects to it, then decide + // whether we want to refine the mesh further and solve again on this + // mesh, or jump out of the loop. for (unsigned int step=0; true; ++step) { std::cout << "Refinement cycle: " << step @@ -3918,9 +2965,8 @@ namespace Step14 break; } - // After the loop has run, clean up - // the screen, and delete objects - // no more needed: + // After the loop has run, clean up the screen, and delete objects no more + // needed: std::cout << std::endl; delete solver; solver = 0; @@ -3932,15 +2978,10 @@ namespace Step14 // @sect3{The main function} -// Here finally comes the main -// function. It drives the whole -// process by specifying a set of -// parameters to be used for the -// simulation (polynomial degrees, -// evaluation and dual functionals, -// etc), and passes them packed into -// a structure to the frame work -// class above. +// Here finally comes the main function. It drives the whole process by +// specifying a set of parameters to be used for the simulation (polynomial +// degrees, evaluation and dual functionals, etc), and passes them packed into +// a structure to the frame work class above. int main () { try @@ -3949,78 +2990,44 @@ int main () using namespace Step14; deallog.depth_console (0); - // Describe the problem we want - // to solve here by passing a - // descriptor object to the - // function doing the rest of - // the work: + // Describe the problem we want to solve here by passing a descriptor + // object to the function doing the rest of the work: const unsigned int dim = 2; Framework::ProblemDescription descriptor; - // First set the refinement - // criterion we wish to use: + // First set the refinement criterion we wish to use: descriptor.refinement_criterion = Framework::ProblemDescription::dual_weighted_error_estimator; - // Here, we could as well have - // used global_refinement - // or - // weighted_kelly_indicator. Note - // that the information given - // about dual finite elements, - // dual functional, etc is only - // important for the given - // choice of refinement - // criterion, and is ignored - // otherwise. - - // Then set the polynomial - // degrees of primal and dual - // problem. We choose here - // bi-linear and bi-quadratic - // ones: + // Here, we could as well have used global_refinement or + // weighted_kelly_indicator. Note that the information + // given about dual finite elements, dual functional, etc is only + // important for the given choice of refinement criterion, and is + // ignored otherwise. + + // Then set the polynomial degrees of primal and dual problem. 
We choose + // here bi-linear and bi-quadratic ones: descriptor.primal_fe_degree = 1; descriptor.dual_fe_degree = 2; - // Then set the description of - // the test case, i.e. domain, - // boundary values, and right - // hand side. These are - // prepackaged in classes. We - // take here the description of - // Exercise_2_3, but you - // can also use - // CurvedRidges@: + // Then set the description of the test case, i.e. domain, boundary + // values, and right hand side. These are prepackaged in classes. We + // take here the description of Exercise_2_3, but you can + // also use CurvedRidges@: descriptor.data = new Data::SetUp,dim> (); - // Next set first a dual - // functional, then a list of - // evaluation objects. We - // choose as default the - // evaluation of the - // value at an - // evaluation point, - // represented by the classes - // PointValueEvaluation - // in the namespaces of - // evaluation and dual - // functional classes. You can - // also set the - // PointXDerivativeEvaluation - // classes for the x-derivative - // instead of the value - // at the evaluation point. + // Next set first a dual functional, then a list of evaluation + // objects. We choose as default the evaluation of the value at an + // evaluation point, represented by the classes + // PointValueEvaluation in the namespaces of evaluation and + // dual functional classes. You can also set the + // PointXDerivativeEvaluation classes for the x-derivative + // instead of the value at the evaluation point. // - // Note that dual functional - // and evaluation objects - // should match. However, you - // can give as many evaluation - // functionals as you want, so - // you can have both point - // value and derivative - // evaluated after each step. - // One such additional - // evaluation is to output the - // grid in each step. + // Note that dual functional and evaluation objects should + // match. However, you can give as many evaluation functionals as you + // want, so you can have both point value and derivative evaluated after + // each step. One such additional evaluation is to output the grid in + // each step. const Point evaluation_point (0.75, 0.75); descriptor.dual_functional = new DualFunctional::PointValueEvaluation (evaluation_point); @@ -4033,23 +3040,16 @@ int main () descriptor.evaluator_list.push_back (&postprocessor1); descriptor.evaluator_list.push_back (&postprocessor2); - // Set the maximal number of - // degrees of freedom after - // which we want the program to - // stop refining the mesh - // further: + // Set the maximal number of degrees of freedom after which we want the + // program to stop refining the mesh further: descriptor.max_degrees_of_freedom = 20000; - // Finally pass the descriptor - // object to a function that - // runs the entire solution - // with it: + // Finally pass the descriptor object to a function that runs the entire + // solution with it: Framework::run (descriptor); } - // Catch exceptions to give - // information about things that - // failed: + // Catch exceptions to give information about things that failed: catch (std::exception &exc) { std::cerr << std::endl << std::endl diff --git a/deal.II/examples/step-15/step-15.cc b/deal.II/examples/step-15/step-15.cc index 655090b9ad..098be10b69 100644 --- a/deal.II/examples/step-15/step-15.cc +++ b/deal.II/examples/step-15/step-15.cc @@ -12,10 +12,8 @@ // @sect3{Include files} -// The first few files have already -// been covered in previous examples -// and will thus not be further -// commented on. 
+// The first few files have already been covered in previous examples and will +// thus not be further commented on. #include #include #include @@ -52,20 +50,15 @@ #include #include -// We will use adaptive mesh refinement -// between Newton interations. To do so, we -// need to be able to work with a solution on -// the new mesh, although it was computed on -// the old one. The SolutionTransfer class -// transfers the solution from the old to the -// new mesh: +// We will use adaptive mesh refinement between Newton interations. To do so, +// we need to be able to work with a solution on the new mesh, although it was +// computed on the old one. The SolutionTransfer class transfers the solution +// from the old to the new mesh: #include -// We then open a namepsace for this program -// and import everything from the dealii -// namespace into it, as in previous -// programs: +// We then open a namepsace for this program and import everything from the +// dealii namespace into it, as in previous programs: namespace Step15 { using namespace dealii; @@ -73,47 +66,29 @@ namespace Step15 // @sect3{The MinimalSurfaceProblem class template} - // The class template is basically the same - // as in step-6. Four additions are made: - // - There are two solution vectors, one for - // the Newton update $\delta u^n$, and one - // for the current iterate $u^n$. - // - The setup_system function - // takes an argument that denotes whether - // this is the first time it is called or - // not. The difference is that the first - // time around we need to distributed - // degrees of freedom and set the - // solution vector for $u^n$ to the - // correct size. The following times, the - // function is called after we have - // already done these steps as part of - // refining the mesh in - // refine_mesh. - // - We then also need new functions: - // set_boundary_values() - // takes care of setting the boundary - // values on the solution vector - // correctly, as discussed at the end of - // the - // introduction. compute_residual() - // is a function that computes the norm - // of the nonlinear (discrete) - // residual. We use this function to - // monitor convergence of the Newton - // iteration. The function takes a step - // length $\alpha^n$ as argument to - // compute the residual of $u^n + - // \alpha^n \; \delta u^n$. This is - // something one typically needs for step - // length control, although we will not - // use this feature here. Finally, - // determine_step_length() - // computes the step length $\alpha^n$ in - // each Newton iteration. As discussed in - // the introduction, we here use a fixed - // step length and leave implementing a - // better strategy as an exercise. + // The class template is basically the same as in step-6. Three additions + // are made: + // - There are two solution vectors, one for the Newton update + // $\delta u^n$, and one for the current iterate $u^n$. + // - The setup_system function takes an argument that denotes whether + // this is the first time it is called or not. The difference is that the + // first time around we need to distributed degrees of freedom and set the + // solution vector for $u^n$ to the correct size. The following times, the + // function is called after we have already done these steps as part of + // refining the mesh in refine_mesh. + // - We then also need new functions: set_boundary_values() + // takes care of setting the boundary values on the solution vector + // correctly, as discussed at the end of the + // introduction. 
compute_residual() is a function that computes + // the norm of the nonlinear (discrete) residual. We use this function to + // monitor convergence of the Newton iteration. The function takes a step + // length $\alpha^n$ as argument to compute the residual of $u^n + \alpha^n + // \; \delta u^n$. This is something one typically needs for step length + // control, although we will not use this feature here. Finally, + // determine_step_length() computes the step length $\alpha^n$ + // in each Newton iteration. As discussed in the introduction, we here use a + // fixed step length and leave implementing a better strategy as an + // exercise. template class MinimalSurfaceProblem @@ -150,10 +125,8 @@ namespace Step15 // @sect3{Boundary condition} - // The boundary condition is - // implemented just like in step-4. - // It is chosen as $g(x,y)=\sin(2 - // \pi (x+y))$: + // The boundary condition is implemented just like in step-4. It is chosen + // as $g(x,y)=\sin(2 \pi (x+y))$: template class BoundaryValues : public Function @@ -177,9 +150,8 @@ namespace Step15 // @sect4{MinimalSurfaceProblem::MinimalSurfaceProblem} - // The constructor and destructor - // of the class are the same as in - // the first few tutorials. + // The constructor and destructor of the class are the same as in the first + // few tutorials. template MinimalSurfaceProblem::MinimalSurfaceProblem () @@ -198,23 +170,15 @@ namespace Step15 // @sect4{MinimalSurfaceProblem::setup_system} - // As always in the setup-system function, - // we setup the variables of the finite - // element method. There are same - // differences to step-6, because there we - // start solving the PDE from scratch in - // every refinement cycle whereas here we - // need to take the solution from the - // previous mesh onto the current - // mesh. Consequently, we can't just reset - // solution vectors. The argument passed to - // this function thus indicates whether we - // can distributed degrees of freedom (plus - // compute constraints) and set the - // solution vector to zero or whether this - // has happened elsewhere already - // (specifically, in - // refine_mesh()). + // As always in the setup-system function, we setup the variables of the + // finite element method. There are same differences to step-6, because + // there we start solving the PDE from scratch in every refinement cycle + // whereas here we need to take the solution from the previous mesh onto the + // current mesh. Consequently, we can't just reset solution vectors. The + // argument passed to this function thus indicates whether we can + // distributed degrees of freedom (plus compute constraints) and set the + // solution vector to zero or whether this has happened elsewhere already + // (specifically, in refine_mesh()). template void MinimalSurfaceProblem::setup_system (const bool initial_step) @@ -231,9 +195,7 @@ namespace Step15 } - // The remaining parts of the - // function are the same as in - // step-6. + // The remaining parts of the function are the same as in step-6. newton_update.reinit (dof_handler.n_dofs()); system_rhs.reinit (dof_handler.n_dofs()); @@ -249,25 +211,17 @@ namespace Step15 // @sect4{MinimalSurfaceProblem::assemble_system} - // This function does the same as in the - // previous tutorials except that now, of - // course, the matrix and right hand side - // functions depend on the previous - // iteration's solution. 
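Returning briefly to the boundary function $g(x,y)=\sin(2\pi (x+y))$ introduced above, its value() member is implemented in the usual way. The following is a sketch along the lines of step-4, not a verbatim copy of the definition used here:

  template <int dim>
  double BoundaryValues<dim>::value (const Point<dim> &p,
                                     const unsigned int /*component*/) const
  {
    return std::sin (2 * numbers::PI * (p(0) + p(1)));
  }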
As discussed in - // the introduction, we need to use zero - // boundary values for the Newton updates; - // we compute them at the end of this - // function. + // This function does the same as in the previous tutorials except that now, + // of course, the matrix and right hand side functions depend on the + // previous iteration's solution. As discussed in the introduction, we need + // to use zero boundary values for the Newton updates; we compute them at + // the end of this function. // - // The top of the function contains the - // usual boilerplate code, setting up the - // objects that allow us to evaluate shape - // functions at quadrature points and - // temporary storage locations for the - // local matrices and vectors, as well as - // for the gradients of the previous - // solution at the quadrature points. We - // then start the loop over all cells: + // The top of the function contains the usual boilerplate code, setting up + // the objects that allow us to evaluate shape functions at quadrature + // points and temporary storage locations for the local matrices and + // vectors, as well as for the gradients of the previous solution at the + // quadrature points. We then start the loop over all cells: template void MinimalSurfaceProblem::assemble_system () { @@ -301,40 +255,25 @@ namespace Step15 fe_values.reinit (cell); - // For the assembly of the linear - // system, we have to obtain the - // values of the previous solution's - // gradients at the quadrature - // points. There is a standard way of - // doing this: the - // FEValues::get_function function - // takes a vector that represents a - // finite element field defined on a - // DoFHandler, and evaluates the - // gradients of this field at the - // quadrature points of the cell with - // which the FEValues object has last - // been reinitialized. The values of - // the gradients at all quadrature - // points are then written into the + // For the assembly of the linear system, we have to obtain the values + // of the previous solution's gradients at the quadrature + // points. There is a standard way of doing this: the + // FEValues::get_function function takes a vector that represents a + // finite element field defined on a DoFHandler, and evaluates the + // gradients of this field at the quadrature points of the cell with + // which the FEValues object has last been reinitialized. The values + // of the gradients at all quadrature points are then written into the // second argument: fe_values.get_function_gradients(present_solution, old_solution_gradients); - // With this, we can then do the - // integration loop over all - // quadrature points and shape - // functions. Having just computed - // the gradients of the old solution - // in the quadrature points, we are - // able to compute the coefficients - // $a_{n}$ in these points. The - // assembly of the system itself then - // looks similar to what we always do - // with the exception of the - // nonlinear terms, as does copying - // the results from the local objects - // into the global ones: + // With this, we can then do the integration loop over all quadrature + // points and shape functions. Having just computed the gradients of + // the old solution in the quadrature points, we are able to compute + // the coefficients $a_{n}$ in these points. 
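For reference, the coefficient evaluated at each quadrature point below is $a_n = \frac{1}{\sqrt{1+|\nabla u^n|^{2}}}$, i.e. the same factor that appears in the definition of $F(u)$ further down in compute_residual().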
The assembly of the + // system itself then looks similar to what we always do with the + // exception of the nonlinear terms, as does copying the results from + // the local objects into the global ones: for (unsigned int q_point = 0; q_point < n_q_points; ++q_point) { const double coeff @@ -378,11 +317,9 @@ namespace Step15 } } - // Finally, we remove hanging nodes from - // the system and apply zero boundary - // values to the linear system that - // defines the Newton updates $\delta - // u^n$: + // Finally, we remove hanging nodes from the system and apply zero + // boundary values to the linear system that defines the Newton updates + // $\delta u^n$: hanging_node_constraints.condense (system_matrix); hanging_node_constraints.condense (system_rhs); @@ -401,11 +338,9 @@ namespace Step15 // @sect4{MinimalSurfaceProblem::solve} - // The solve function is the same as - // always. At the end of the solution - // process we update the current solution - // by setting $u^{n+1}=u^n+\alpha^n\;\delta - // u^n$. + // The solve function is the same as always. At the end of the solution + // process we update the current solution by setting + // $u^{n+1}=u^n+\alpha^n\;\delta u^n$. template void MinimalSurfaceProblem::solve () { @@ -428,14 +363,10 @@ namespace Step15 // @sect4{MinimalSurfaceProblem::refine_mesh} - // The first part of this function is the - // same as in step-6... However, after - // refining the mesh we have to transfer - // the old solution to the new one which we - // do with the help of the SolutionTransfer - // class. The process is slightly - // convoluted, so let us describe it in - // detail: + // The first part of this function is the same as in step-6... However, + // after refining the mesh we have to transfer the old solution to the new + // one which we do with the help of the SolutionTransfer class. The process + // is slightly convoluted, so let us describe it in detail: template void MinimalSurfaceProblem::refine_mesh () { @@ -451,47 +382,29 @@ namespace Step15 estimated_error_per_cell, 0.3, 0.03); - // Then we need an additional step: if, - // for example, you flag a cell that is - // once more refined than its neighbor, - // and that neighbor is not flagged for - // refinement, we would end up with a - // jump of two refinement levels across a - // cell interface. To avoid these - // situations, the library will silently - // also have to refine the neighbor cell - // once. It does so by calling the - // Triangulation::prepare_coarsening_and_refinement - // function before actually doing the - // refinement and coarsening. This - // function flags a set of additional - // cells for refinement or coarsening, to - // enforce rules like the - // one-hanging-node rule. The cells that - // are flagged for refinement and - // coarsening after calling this function - // are exactly the ones that will - // actually be refined or - // coarsened. Usually, you don't have to - // do this by hand - // (Triangulation::execute_coarsening_and_refinement - // does this for you). However, we need - // to initialize the SolutionTransfer - // class and it needs to know the final - // set of cells that will be coarsened or - // refined in order to store the data - // from the old mesh and transfer to the - // new one. 
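Since the solve() function described a little further up is only summarized in words, here is a minimal sketch of what such a function typically looks like in these tutorials. The solver tolerance and the SSOR relaxation parameter are assumptions; the essential part is the final update of the current iterate:

  template <int dim>
  void MinimalSurfaceProblem<dim>::solve ()
  {
    SolverControl solver_control (system_rhs.size(),
                                  system_rhs.l2_norm()*1e-6);
    SolverCG<>    solver (solver_control);

    PreconditionSSOR<> preconditioner;
    preconditioner.initialize (system_matrix, 1.2);

    solver.solve (system_matrix, newton_update, system_rhs,
                  preconditioner);

    hanging_node_constraints.distribute (newton_update);

    // Update the current iterate with the damped Newton step:
    const double alpha = determine_step_length ();
    present_solution.add (alpha, newton_update);
  }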
Thus, we call the function by - // hand: + // Then we need an additional step: if, for example, you flag a cell that + // is once more refined than its neighbor, and that neighbor is not + // flagged for refinement, we would end up with a jump of two refinement + // levels across a cell interface. To avoid these situations, the library + // will silently also have to refine the neighbor cell once. It does so by + // calling the Triangulation::prepare_coarsening_and_refinement function + // before actually doing the refinement and coarsening. This function + // flags a set of additional cells for refinement or coarsening, to + // enforce rules like the one-hanging-node rule. The cells that are + // flagged for refinement and coarsening after calling this function are + // exactly the ones that will actually be refined or coarsened. Usually, + // you don't have to do this by hand + // (Triangulation::execute_coarsening_and_refinement does this for + // you). However, we need to initialize the SolutionTransfer class and it + // needs to know the final set of cells that will be coarsened or refined + // in order to store the data from the old mesh and transfer to the new + // one. Thus, we call the function by hand: triangulation.prepare_coarsening_and_refinement (); - // With this out of the way, we - // initialize a SolutionTransfer object - // with the present DoFHandler and attach - // the solution vector to it, followed by - // doing the actual refinement and - // distribution of degrees of freedom on - // the new mesh + // With this out of the way, we initialize a SolutionTransfer object with + // the present DoFHandler and attach the solution vector to it, followed + // by doing the actual refinement and distribution of degrees of freedom + // on the new mesh SolutionTransfer solution_transfer(dof_handler); solution_transfer.prepare_for_coarsening_and_refinement(present_solution); @@ -499,42 +412,28 @@ namespace Step15 dof_handler.distribute_dofs(fe); - // Finally, we retrieve the old solution - // interpolated to the new mesh. Since - // the SolutionTransfer function does not - // actually store the values of the old - // solution, but rather indices, we need - // to preserve the old solution vector - // until we have gotten the new - // interpolated values. Thus, we have the - // new values written into a temporary - // vector, and only afterwards write them - // into the solution vector object. Once - // we have this solution we have to make - // sure that the $u^n$ we now have - // actually has the correct boundary - // values. As explained at the end of the - // introduction, this is not - // automatically the case even if the - // solution before refinement had the - // correct boundary values, and so we - // have to explicitly make sure that it - // now has: + // Finally, we retrieve the old solution interpolated to the new + // mesh. Since the SolutionTransfer function does not actually store the + // values of the old solution, but rather indices, we need to preserve the + // old solution vector until we have gotten the new interpolated + // values. Thus, we have the new values written into a temporary vector, + // and only afterwards write them into the solution vector object. Once we + // have this solution we have to make sure that the $u^n$ we now have + // actually has the correct boundary values. 
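The call to set_boundary_values() used for this purpose does essentially the following. This is a sketch using the member names of this class; the boundary indicator zero is an assumption consistent with the rest of the program:

  std::map<unsigned int,double> boundary_values;
  VectorTools::interpolate_boundary_values (dof_handler,
                                            0,
                                            BoundaryValues<dim>(),
                                            boundary_values);
  for (std::map<unsigned int,double>::const_iterator
       p = boundary_values.begin(); p != boundary_values.end(); ++p)
    present_solution(p->first) = p->second;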
As explained at the end of + // the introduction, this is not automatically the case even if the + // solution before refinement had the correct boundary values, and so we + // have to explicitly make sure that it now has: Vector tmp(dof_handler.n_dofs()); solution_transfer.interpolate(present_solution, tmp); present_solution = tmp; set_boundary_values (); - // On the new mesh, there are different - // hanging nodes, which we have to - // compute again. To ensure there are no - // hanging nodes of the old mesh in the - // object, it's first cleared. To be on - // the safe side, we then also make sure - // that the current solution's vector - // entries satisfy the hanging node - // constraints: + // On the new mesh, there are different hanging nodes, which we have to + // compute again. To ensure there are no hanging nodes of the old mesh in + // the object, it's first cleared. To be on the safe side, we then also + // make sure that the current solution's vector entries satisfy the + // hanging node constraints: hanging_node_constraints.clear(); @@ -544,13 +443,10 @@ namespace Step15 hanging_node_constraints.distribute (present_solution); - // We end the function by updating all - // the remaining data structures, - // indicating to - // setup_dofs() that this is - // not the first go-around and that it - // needs to preserve the content of the - // solution vector: + // We end the function by updating all the remaining data structures, + // indicating to setup_dofs() that this is not the first + // go-around and that it needs to preserve the content of the solution + // vector: setup_system (false); } @@ -558,18 +454,13 @@ namespace Step15 // @sect4{MinimalSurfaceProblem::set_boundary_values} - // The next function ensures that the - // solution vector's entries respect the - // boundary values for our problem. Having - // refined the mesh (or just started - // computations), there might be new nodal - // points on the boundary. These have - // values that are simply interpolated from - // the previous mesh (or are just zero), - // instead of the correct boundary - // values. This is fixed up by setting all - // boundary nodes explicit to the right - // value: + // The next function ensures that the solution vector's entries respect the + // boundary values for our problem. Having refined the mesh (or just + // started computations), there might be new nodal points on the + // boundary. These have values that are simply interpolated from the + // previous mesh (or are just zero), instead of the correct boundary + // values. This is fixed up by setting all boundary nodes explicit to the + // right value: template void MinimalSurfaceProblem::set_boundary_values () { @@ -587,32 +478,21 @@ namespace Step15 // @sect4{MinimalSurfaceProblem::compute_residual} - // In order to monitor convergence, we need - // a way to compute the norm of the - // (discrete) residual, i.e., the norm of - // the vector - // $\left$ with - // $F(u)=-\nabla \cdot \left( - // \frac{1}{\sqrt{1+|\nabla u|^{2}}}\nabla - // u \right)$ as discussed in the - // introduction. It turns out that - // (although we don't use this feature in - // the current version of the program) one - // needs to compute the residual - // $\left$ - // when determining optimal step lengths, - // and so this is what we implement here: - // the function takes the step length - // $\alpha^n$ as an argument. The original - // functionality is of course obtained by - // passing a zero as argument. 
+ // In order to monitor convergence, we need a way to compute the norm of the + // (discrete) residual, i.e., the norm of the vector + // $\left$ with $F(u)=-\nabla \cdot \left( + // \frac{1}{\sqrt{1+|\nabla u|^{2}}}\nabla u \right)$ as discussed in the + // introduction. It turns out that (although we don't use this feature in + // the current version of the program) one needs to compute the residual + // $\left$ when determining + // optimal step lengths, and so this is what we implement here: the function + // takes the step length $\alpha^n$ as an argument. The original + // functionality is of course obtained by passing a zero as argument. // - // In the function below, we first set up a - // vector for the residual, and then a - // vector for the evaluation point - // $u^n+\alpha^n\;\delta u^n$. This is - // followed by the same boilerplate code we - // use for all integration operations: + // In the function below, we first set up a vector for the residual, and + // then a vector for the evaluation point $u^n+\alpha^n\;\delta u^n$. This + // is followed by the same boilerplate code we use for all integration + // operations: template double MinimalSurfaceProblem::compute_residual (const double alpha) const { @@ -644,14 +524,10 @@ namespace Step15 cell_rhs = 0; fe_values.reinit (cell); - // The actual computation is much as - // in - // assemble_system(). We - // first evaluate the gradients of - // $u^n+\alpha^n\,\delta u^n$ at the - // quadrature points, then compute - // the coefficient $a_n$, and then - // plug it all into the formula for + // The actual computation is much as in + // assemble_system(). We first evaluate the gradients of + // $u^n+\alpha^n\,\delta u^n$ at the quadrature points, then compute + // the coefficient $a_n$, and then plug it all into the formula for // the residual: fe_values.get_function_gradients (evaluation_point, gradients); @@ -675,35 +551,21 @@ namespace Step15 residual(local_dof_indices[i]) += cell_rhs(i); } - // At the end of this function we also - // have to deal with the hanging node - // constraints and with the issue of - // boundary values. With regard to the - // latter, we have to set to zero the - // elements of the residual vector for - // all entries that correspond to degrees - // of freedom that sit at the - // boundary. The reason is that because - // the value of the solution there is - // fixed, they are of course no "real" - // degrees of freedom and so, strictly - // speaking, we shouldn't have assembled - // entries in the residual vector for - // them. However, as we always do, we - // want to do exactly the same thing on - // every cell and so we didn't not want - // to deal with the question of whether a - // particular degree of freedom sits at - // the boundary in the integration - // above. Rather, we will simply set to - // zero these entries after the fact. To - // this end, we first need to determine - // which degrees of freedom do in fact - // belong to the boundary and then loop - // over all of those and set the residual - // entry to zero. This happens in the - // following lines which we have already - // seen used in step-11: + // At the end of this function we also have to deal with the hanging node + // constraints and with the issue of boundary values. With regard to the + // latter, we have to set to zero the elements of the residual vector for + // all entries that correspond to degrees of freedom that sit at the + // boundary. 
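For completeness, the residual vector and the evaluation point mentioned at the top of this function are set up along the following lines; this is a sketch using the member and argument names of this class:

  Vector<double> residual (dof_handler.n_dofs());

  Vector<double> evaluation_point (dof_handler.n_dofs());
  evaluation_point = present_solution;
  evaluation_point.add (alpha, newton_update);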
The reason is that because the value of the solution there is + // fixed, they are of course no "real" degrees of freedom and so, strictly + // speaking, we shouldn't have assembled entries in the residual vector + // for them. However, as we always do, we want to do exactly the same + // thing on every cell and so we didn't not want to deal with the question + // of whether a particular degree of freedom sits at the boundary in the + // integration above. Rather, we will simply set to zero these entries + // after the fact. To this end, we first need to determine which degrees + // of freedom do in fact belong to the boundary and then loop over all of + // those and set the residual entry to zero. This happens in the following + // lines which we have already seen used in step-11: hanging_node_constraints.condense (residual); std::vector boundary_dofs (dof_handler.n_dofs()); @@ -714,8 +576,7 @@ namespace Step15 if (boundary_dofs[i] == true) residual(i) = 0; - // At the end of the function, we return - // the norm of the residual: + // At the end of the function, we return the norm of the residual: return residual.l2_norm(); } @@ -723,24 +584,17 @@ namespace Step15 // @sect4{MinimalSurfaceProblem::determine_step_length} - // As discussed in the introduction, - // Newton's method frequently does not - // converge if we always take full steps, - // i.e., compute $u^{n+1}=u^n+\delta - // u^n$. Rather, one needs a damping - // parameter (step length) $\alpha^n$ and - // set $u^{n+1}=u^n+\alpha^n\; delta - // u^n$. This function is the one called to - // compute $\alpha^n$. + // As discussed in the introduction, Newton's method frequently does not + // converge if we always take full steps, i.e., compute $u^{n+1}=u^n+\delta + // u^n$. Rather, one needs a damping parameter (step length) $\alpha^n$ and + // set $u^{n+1}=u^n+\alpha^n\; delta u^n$. This function is the one called + // to compute $\alpha^n$. // - // Here, we simply always return 0.1. This - // is of course a sub-optimal choice: - // ideally, what one wants is that the step - // size goes to one as we get closer to the - // solution, so that we get to enjoy the - // rapid quadratic convergence of Newton's - // method. We will discuss better - // strategies below in the results section. + // Here, we simply always return 0.1. This is of course a sub-optimal + // choice: ideally, what one wants is that the step size goes to one as we + // get closer to the solution, so that we get to enjoy the rapid quadratic + // convergence of Newton's method. We will discuss better strategies below + // in the results section. template double MinimalSurfaceProblem::determine_step_length() const { @@ -751,47 +605,33 @@ namespace Step15 // @sect4{MinimalSurfaceProblem::run} - // In the run function, we build the first - // grid and then have the top-level logic - // for the Newton iteration. The function - // has two variables, one that indicates - // whether this is the first time we solve - // for a Newton update and one that - // indicates the refinement level of the - // mesh: + // In the run function, we build the first grid and then have the top-level + // logic for the Newton iteration. 
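As an illustration of the kind of better step-length strategy alluded to above (not part of this program), a very simple backtracking rule could start from a full step and halve it until the residual decreases. The sketch below uses only members that already exist in this class:

  template <int dim>
  double MinimalSurfaceProblem<dim>::determine_step_length () const
  {
    // Residual of the current iterate, i.e. at step length zero:
    const double old_residual = compute_residual (0);

    // Halve the step length until the residual goes down (or we give up):
    double alpha = 1.0;
    while ((alpha > 1./64) && (compute_residual (alpha) >= old_residual))
      alpha /= 2;

    return alpha;
  }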
The function has two variables, one that + // indicates whether this is the first time we solve for a Newton update and + // one that indicates the refinement level of the mesh: template void MinimalSurfaceProblem::run () { unsigned int refinement = 0; bool first_step = true; - // As described in the introduction, the - // domain is the unit disk around the - // origin, created in the same way as - // shown in step-6. The mesh is globally - // refined twice followed later on by - // several adaptive cycles: + // As described in the introduction, the domain is the unit disk around + // the origin, created in the same way as shown in step-6. The mesh is + // globally refined twice followed later on by several adaptive cycles: GridGenerator::hyper_ball (triangulation); static const HyperBallBoundary boundary; triangulation.set_boundary (0, boundary); triangulation.refine_global(2); - // The Newton iteration starts - // next. During the first step we do not - // have information about the residual - // prior to this step and so we continue - // the Newton iteration until we have - // reached at least one iteration and + // The Newton iteration starts next. During the first step we do not have + // information about the residual prior to this step and so we continue + // the Newton iteration until we have reached at least one iteration and // until residual is less than $10^{-3}$. // - // At the beginning of the loop, we do a - // bit of setup work. In the first go - // around, we compute the solution on the - // twice globally refined mesh after - // setting up the basic data - // structures. In all following mesh - // refinement loops, the mesh will be - // refined adaptively. + // At the beginning of the loop, we do a bit of setup work. In the first + // go around, we compute the solution on the twice globally refined mesh + // after setting up the basic data structures. In all following mesh + // refinement loops, the mesh will be refined adaptively. double previous_res = 0; while (first_step || (previous_res>1e-3)) { @@ -814,23 +654,15 @@ namespace Step15 refine_mesh(); } - // On every mesh we do exactly five - // Newton steps. We print the initial - // residual here and then start the - // iterations on this mesh. + // On every mesh we do exactly five Newton steps. We print the initial + // residual here and then start the iterations on this mesh. // - // In every Newton step the system - // matrix and the right hand side - // have to be computed first, after - // which we store the norm of the - // right hand side as the residual to - // check against when deciding - // whether to stop the iterations. We - // then solve the linear system (the - // function also updates - // $u^{n+1}=u^n+\alpha^n\;\delta - // u^n$) and output the residual at - // the end of this Newton step: + // In every Newton step the system matrix and the right hand side have + // to be computed first, after which we store the norm of the right + // hand side as the residual to check against when deciding whether to + // stop the iterations. We then solve the linear system (the function + // also updates $u^{n+1}=u^n+\alpha^n\;\delta u^n$) and output the + // residual at the end of this Newton step: std::cout << " Initial residual: " << compute_residual(0) << std::endl; @@ -848,11 +680,9 @@ namespace Step15 << std::endl; } - // Every fifth iteration, i.e., just - // before we refine the mesh again, - // we output the solution as well as - // the Newton update. 
This happens as - // in all programs before: + // Every fifth iteration, i.e., just before we refine the mesh again, + // we output the solution as well as the Newton update. This happens + // as in all programs before: DataOut data_out; data_out.attach_dof_handler (dof_handler); @@ -871,9 +701,8 @@ namespace Step15 // @sect4{The main function} -// Finally the main function. This -// follows the scheme of all other -// main functions: +// Finally the main function. This follows the scheme of all other main +// functions: int main () { try @@ -912,4 +741,3 @@ int main () } return 0; } - diff --git a/deal.II/examples/step-16/step-16.cc b/deal.II/examples/step-16/step-16.cc index 3a45b37e73..4d75a89c6a 100644 --- a/deal.II/examples/step-16/step-16.cc +++ b/deal.II/examples/step-16/step-16.cc @@ -11,25 +11,18 @@ /* to the file deal.II/doc/license.html for the text and */ /* further information on this license. */ -// As discussed in the introduction, most of -// this program is copied almost verbatim -// from step-6, which itself is only a slight -// modification of step-5. Consequently, a -// significant part of this program is not -// new if you've read all the material up to -// step-6, and we won't comment on that part -// of the functionality that is -// unchanged. Rather, we will focus on those -// aspects of the program that have to do -// with the multigrid functionality which -// forms the new aspect of this tutorial -// program. +// As discussed in the introduction, most of this program is copied almost +// verbatim from step-6, which itself is only a slight modification of +// step-5. Consequently, a significant part of this program is not new if +// you've read all the material up to step-6, and we won't comment on that +// part of the functionality that is unchanged. Rather, we will focus on those +// aspects of the program that have to do with the multigrid functionality +// which forms the new aspect of this tutorial program. // @sect3{Include files} -// Again, the first few include files -// are already known, so we won't -// comment on them: +// Again, the first few include files are already known, so we won't comment +// on them: #include #include #include @@ -59,19 +52,14 @@ #include #include -// These, now, are the include necessary for -// the multi-level methods. The first two -// declare classes that allow us to enumerate -// degrees of freedom not only on the finest -// mesh level, but also on intermediate -// levels (that's what the MGDoFHandler class -// does) as well as allow to access this -// information (iterators and accessors over -// these cells). +// These, now, are the include necessary for the multi-level methods. The +// first two declare classes that allow us to enumerate degrees of freedom not +// only on the finest mesh level, but also on intermediate levels (that's what +// the MGDoFHandler class does) as well as allow to access this information +// (iterators and accessors over these cells). // -// The rest of the include files deals with -// the mechanics of multigrid as a linear -// operator (solver or preconditioner). +// The rest of the include files deals with the mechanics of multigrid as a +// linear operator (solver or preconditioner). 
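The multigrid headers the preceding comment refers to are, in this era of the library, the ones listed below. Treat this list as a reconstruction rather than an exact copy of the include block of this program:

  #include <deal.II/multigrid/mg_dof_handler.h>
  #include <deal.II/multigrid/mg_dof_accessor.h>
  #include <deal.II/multigrid/mg_constrained_dofs.h>
  #include <deal.II/multigrid/multigrid.h>
  #include <deal.II/multigrid/mg_transfer.h>
  #include <deal.II/multigrid/mg_tools.h>
  #include <deal.II/multigrid/mg_coarse.h>
  #include <deal.II/multigrid/mg_smoother.h>
  #include <deal.II/multigrid/mg_matrix.h>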
#include #include #include @@ -86,8 +74,7 @@ #include #include -// The last step is as in all -// previous programs: +// The last step is as in all previous programs: namespace Step16 { using namespace dealii; @@ -95,13 +82,10 @@ namespace Step16 // @sect3{The LaplaceProblem class template} - // This main class is basically the same - // class as in step-6. As far as member - // functions is concerned, the only addition - // is the assemble_multigrid - // function that assembles the matrices that - // correspond to the discrete operators on - // intermediate levels: + // This main class is basically the same class as in step-6. As far as + // member functions is concerned, the only addition is the + // assemble_multigrid function that assembles the matrices that + // correspond to the discrete operators on intermediate levels: template class LaplaceProblem { @@ -124,14 +108,10 @@ namespace Step16 SparsityPattern sparsity_pattern; SparseMatrix system_matrix; - // We need an additional object for the - // hanging nodes constraints. They are - // handed to the transfer object in the - // multigrid. Since we call a compress - // inside the multigrid these constraints - // are not allowed to be inhomogeneous so - // we store them in different ConstraintMatrix - // objects. + // We need an additional object for the hanging nodes constraints. They + // are handed to the transfer object in the multigrid. Since we call a + // compress inside the multigrid these constraints are not allowed to be + // inhomogeneous so we store them in different ConstraintMatrix objects. ConstraintMatrix hanging_node_constraints; ConstraintMatrix constraints; @@ -140,43 +120,26 @@ namespace Step16 const unsigned int degree; - // The following four objects are the - // only additional member variables, - // compared to step-6. They first three - // represent the - // operators that act on individual - // levels of the multilevel hierarchy, - // rather than on the finest mesh as do - // the objects above while the last object - // stores information about the boundary - // indices on each level and information - // about indices lying on a refinement - // edge between two different refinement - // levels. + // The following four objects are the only additional member variables, + // compared to step-6. They first three represent the operators that act + // on individual levels of the multilevel hierarchy, rather than on the + // finest mesh as do the objects above while the last object stores + // information about the boundary indices on each level and information + // about indices lying on a refinement edge between two different + // refinement levels. // - // To facilitate having objects on each - // level of a multilevel hierarchy, - // deal.II has the MGLevelObject class - // template that provides storage for - // objects on each level. What we need - // here are matrices on each level, which - // implies that we also need sparsity - // patterns on each level. As outlined in - // the @ref mg_paper, the operators - // (matrices) that we need are actually - // twofold: one on the interior of each - // level, and one at the interface - // between each level and that part of - // the domain where the mesh is - // coarser. In fact, we will need the - // latter in two versions: for the - // direction from coarse to fine mesh and - // from fine to coarse. 
Fortunately, - // however, we here have a self-adjoint - // problem for which one of these is the - // transpose of the other, and so we only - // have to build one; we choose the one - // from coarse to fine. + // To facilitate having objects on each level of a multilevel hierarchy, + // deal.II has the MGLevelObject class template that provides storage for + // objects on each level. What we need here are matrices on each level, + // which implies that we also need sparsity patterns on each level. As + // outlined in the @ref mg_paper, the operators (matrices) that we need + // are actually twofold: one on the interior of each level, and one at the + // interface between each level and that part of the domain where the mesh + // is coarser. In fact, we will need the latter in two versions: for the + // direction from coarse to fine mesh and from fine to + // coarse. Fortunately, however, we here have a self-adjoint problem for + // which one of these is the transpose of the other, and so we only have + // to build one; we choose the one from coarse to fine. MGLevelObject mg_sparsity_patterns; MGLevelObject > mg_matrices; MGLevelObject > mg_interface_matrices; @@ -187,9 +150,8 @@ namespace Step16 // @sect3{Nonconstant coefficients} - // The implementation of nonconstant - // coefficients is copied verbatim - // from step-5 and step-6: + // The implementation of nonconstant coefficients is copied verbatim from + // step-5 and step-6: template class Coefficient : public Function @@ -241,29 +203,22 @@ namespace Step16 // @sect4{LaplaceProblem::LaplaceProblem} - // The constructor is left mostly - // unchanged. We take the polynomial degree - // of the finite elements to be used as a - // constructor argument and store it in a - // member variable. + // The constructor is left mostly unchanged. We take the polynomial degree + // of the finite elements to be used as a constructor argument and store it + // in a member variable. // - // By convention, all adaptively refined - // triangulations in deal.II never change by - // more than one level across a face between - // cells. For our multigrid algorithms, - // however, we need a slightly stricter - // guarantee, namely that the mesh also does - // not change by more than refinement level - // across vertices that might connect two - // cells. In other words, we must prevent the - // following situation: + // By convention, all adaptively refined triangulations in deal.II never + // change by more than one level across a face between cells. For our + // multigrid algorithms, however, we need a slightly stricter guarantee, + // namely that the mesh also does not change by more than refinement level + // across vertices that might connect two cells. In other words, we must + // prevent the following situation: // // @image html limit_level_difference_at_vertices.png "" // // This is achieved by passing the - // Triangulation::limit_level_difference_at_vertices - // flag to the constructor of the - // triangulation class. + // Triangulation::limit_level_difference_at_vertices flag to the constructor + // of the triangulation class. template LaplaceProblem::LaplaceProblem (const unsigned int degree) : @@ -278,19 +233,15 @@ namespace Step16 // @sect4{LaplaceProblem::setup_system} - // The following function extends what the - // corresponding one in step-6 did. The top - // part, apart from the additional output, - // does the same: + // The following function extends what the corresponding one in step-6 + // did. 
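Looking back at the constructor for a moment, the initializer list in which the smoothing flag discussed above is passed reads roughly as follows; this is a sketch, with the member names taken from the class declaration earlier in this program:

  template <int dim>
  LaplaceProblem<dim>::LaplaceProblem (const unsigned int degree)
    :
    triangulation (Triangulation<dim>::limit_level_difference_at_vertices),
    fe (degree),
    mg_dof_handler (triangulation),
    degree (degree)
  {}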
The top part, apart from the additional output, does the same: template void LaplaceProblem::setup_system () { mg_dof_handler.distribute_dofs (fe); - // Here we output not only the - // degrees of freedom on the finest - // level, but also in the - // multilevel structure + // Here we output not only the degrees of freedom on the finest level, but + // also in the multilevel structure deallog << "Number of degrees of freedom: " << mg_dof_handler.n_dofs(); @@ -307,26 +258,17 @@ namespace Step16 solution.reinit (mg_dof_handler.n_dofs()); system_rhs.reinit (mg_dof_handler.n_dofs()); - // But it starts to be a wee bit different - // here, although this still doesn't have - // anything to do with multigrid - // methods. step-6 took care of boundary - // values and hanging nodes in a separate - // step after assembling the global matrix - // from local contributions. This works, - // but the same can be done in a slightly - // simpler way if we already take care of - // these constraints at the time of copying - // local contributions into the global - // matrix. To this end, we here do not just - // compute the constraints do to hanging - // nodes, but also due to zero boundary - // conditions. We will - // use this set of constraints later on to - // help us copy local contributions - // correctly into the global linear system - // right away, without the need for a later - // clean-up stage: + // But it starts to be a wee bit different here, although this still + // doesn't have anything to do with multigrid methods. step-6 took care of + // boundary values and hanging nodes in a separate step after assembling + // the global matrix from local contributions. This works, but the same + // can be done in a slightly simpler way if we already take care of these + // constraints at the time of copying local contributions into the global + // matrix. To this end, we here do not just compute the constraints do to + // hanging nodes, but also due to zero boundary conditions. We will use + // this set of constraints later on to help us copy local contributions + // correctly into the global linear system right away, without the need + // for a later clean-up stage: constraints.clear (); hanging_node_constraints.clear (); DoFTools::make_hanging_node_constraints (mg_dof_handler, hanging_node_constraints); @@ -344,30 +286,22 @@ namespace Step16 sparsity_pattern.compress(); system_matrix.reinit (sparsity_pattern); - // The multigrid constraints have to be - // initialized. They need to know about - // the boundary values as well, so we - // pass the dirichlet_boundary - // here as well. + // The multigrid constraints have to be initialized. They need to know + // about the boundary values as well, so we pass the + // dirichlet_boundary here as well. mg_constrained_dofs.clear(); mg_constrained_dofs.initialize(mg_dof_handler, dirichlet_boundary); - // Now for the things that concern the - // multigrid data structures. First, we - // resize the multi-level objects to hold - // matrices and sparsity patterns for every - // level. The coarse level is zero (this is - // mandatory right now but may change in a - // future revision). Note that these - // functions take a complete, inclusive - // range here (not a starting index and - // size), so the finest level is - // n_levels-1. We first have - // to resize the container holding the - // SparseMatrix classes, since they have to - // release their SparsityPattern before the - // can be destroyed upon resizing. 
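To make the constraints discussion above concrete, the following is a hedged sketch of how both kinds of constraints can be collected in one pass. The dirichlet_boundary function map is the same object later handed to mg_constrained_dofs.initialize(); ZeroFunction and the interpolate_boundary_values overload taking a ConstraintMatrix are standard deal.II facilities, but treat the exact lines as a sketch rather than a verbatim quote of the tutorial:

constraints.clear ();
hanging_node_constraints.clear ();
DoFTools::make_hanging_node_constraints (mg_dof_handler, hanging_node_constraints);
DoFTools::make_hanging_node_constraints (mg_dof_handler, constraints);

// Zero Dirichlet values on boundary indicator 0 also go into 'constraints',
// so boundary and hanging-node constraints are applied in a single sweep
// when local contributions are copied into the global system.
typename FunctionMap<dim>::type dirichlet_boundary;
ZeroFunction<dim>               homogeneous_dirichlet_bc;
dirichlet_boundary[0] = &homogeneous_dirichlet_bc;
VectorTools::interpolate_boundary_values (mg_dof_handler,
                                          dirichlet_boundary,
                                          constraints);
constraints.close ();
hanging_node_constraints.close ();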
+ // Now for the things that concern the multigrid data structures. First, + // we resize the multi-level objects to hold matrices and sparsity + // patterns for every level. The coarse level is zero (this is mandatory + // right now but may change in a future revision). Note that these + // functions take a complete, inclusive range here (not a starting index + // and size), so the finest level is n_levels-1. We first + // have to resize the container holding the SparseMatrix classes, since + // they have to release their SparsityPattern before the can be destroyed + // upon resizing. const unsigned int n_levels = triangulation.n_levels(); mg_interface_matrices.resize(0, n_levels-1); @@ -376,31 +310,20 @@ namespace Step16 mg_matrices.clear (); mg_sparsity_patterns.resize(0, n_levels-1); - // Now, we have to provide a matrix on each - // level. To this end, we first use the - // MGTools::make_sparsity_pattern function - // to first generate a preliminary - // compressed sparsity pattern on each - // level (see the @ref Sparsity module for - // more information on this topic) and then - // copy it over to the one we really - // want. The next step is to initialize - // both kinds of level matrices with these - // sparsity patterns. + // Now, we have to provide a matrix on each level. To this end, we first + // use the MGTools::make_sparsity_pattern function to first generate a + // preliminary compressed sparsity pattern on each level (see the @ref + // Sparsity module for more information on this topic) and then copy it + // over to the one we really want. The next step is to initialize both + // kinds of level matrices with these sparsity patterns. // - // It may be worth pointing out that the - // interface matrices only have entries for - // degrees of freedom that sit at or next - // to the interface between coarser and - // finer levels of the mesh. They are - // therefore even sparser than the matrices - // on the individual levels of our - // multigrid hierarchy. If we were more - // concerned about memory usage (and - // possibly the speed with which we can - // multiply with these matrices), we should - // use separate and different sparsity - // patterns for these two kinds of + // It may be worth pointing out that the interface matrices only have + // entries for degrees of freedom that sit at or next to the interface + // between coarser and finer levels of the mesh. They are therefore even + // sparser than the matrices on the individual levels of our multigrid + // hierarchy. If we were more concerned about memory usage (and possibly + // the speed with which we can multiply with these matrices), we should + // use separate and different sparsity patterns for these two kinds of // matrices. for (unsigned int level=0; level void LaplaceProblem::assemble_system () @@ -492,21 +410,16 @@ namespace Step16 // @sect4{LaplaceProblem::assemble_multigrid} - // The next function is the one that builds - // the linear operators (matrices) that - // define the multigrid method on each level - // of the mesh. The integration core is the - // same as above, but the loop below will go - // over all existing cells instead of just - // the active ones, and the results must be - // entered into the correct matrix. Note also - // that since we only do multi-level - // preconditioning, no right-hand side needs - // to be assembled here. + // The next function is the one that builds the linear operators (matrices) + // that define the multigrid method on each level of the mesh. 
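Before diving into assemble_multigrid, the per-level loop from setup_system described above deserves a concrete sketch; the hunk only hints at its body. CompressedSparsityPattern and MGTools::make_sparsity_pattern are the tools meant in the comment, but the details below are a sketch under that assumption:

for (unsigned int level=0; level<n_levels; ++level)
  {
    // Build a preliminary (compressed) sparsity pattern on this level ...
    CompressedSparsityPattern csp (mg_dof_handler.n_dofs(level),
                                   mg_dof_handler.n_dofs(level));
    MGTools::make_sparsity_pattern (mg_dof_handler, csp, level);

    // ... copy it into the static pattern we really want, and initialize
    // both kinds of level matrices with it.
    mg_sparsity_patterns[level].copy_from (csp);
    mg_matrices[level].reinit (mg_sparsity_patterns[level]);
    mg_interface_matrices[level].reinit (mg_sparsity_patterns[level]);
  }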
The + // integration core is the same as above, but the loop below will go over + // all existing cells instead of just the active ones, and the results must + // be entered into the correct matrix. Note also that since we only do + // multi-level preconditioning, no right-hand side needs to be assembled + // here. // - // Before we go there, however, we have to - // take care of a significant amount of book - // keeping: + // Before we go there, however, we have to take care of a significant amount + // of book keeping: template void LaplaceProblem::assemble_multigrid () { @@ -526,63 +439,40 @@ namespace Step16 const Coefficient coefficient; std::vector coefficient_values (n_q_points); - // Next a few things that are specific to - // building the multigrid data structures - // (since we only need them in the current - // function, rather than also elsewhere, we - // build them here instead of the - // setup_system - // function). Some of the following may be - // a bit obscure if you're not familiar - // with the algorithm actually implemented - // in deal.II to support multilevel - // algorithms on adaptive meshes; if some - // of the things below seem strange, take a - // look at the @ref mg_paper. + // Next a few things that are specific to building the multigrid data + // structures (since we only need them in the current function, rather + // than also elsewhere, we build them here instead of the + // setup_system function). Some of the following may be a bit + // obscure if you're not familiar with the algorithm actually implemented + // in deal.II to support multilevel algorithms on adaptive meshes; if some + // of the things below seem strange, take a look at the @ref mg_paper. // - // Our first job is to identify those - // degrees of freedom on each level that - // are located on interfaces between - // adaptively refined levels, and those - // that lie on the interface but also on - // the exterior boundary of the domain. As - // in many other parts of the library, we - // do this by using boolean masks, - // i.e. vectors of booleans each element of - // which indicates whether the - // corresponding degree of freedom index is - // an interface DoF or not. The MGConstraints - // already computed the information for us - // when we called initialize in setup_system(). + // Our first job is to identify those degrees of freedom on each level + // that are located on interfaces between adaptively refined levels, and + // those that lie on the interface but also on the exterior boundary of + // the domain. As in many other parts of the library, we do this by using + // boolean masks, i.e. vectors of booleans each element of which indicates + // whether the corresponding degree of freedom index is an interface DoF + // or not. The MGConstraints already computed the information + // for us when we called initialize in setup_system(). std::vector > interface_dofs = mg_constrained_dofs.get_refinement_edge_indices (); std::vector > boundary_interface_dofs = mg_constrained_dofs.get_refinement_edge_boundary_indices (); - // The indices just identified will later - // be used to decide where the assembled value - // has to be added into on each level. - // On the other hand, - // we also have to impose zero boundary - // conditions on the external boundary of - // each level. But this the MGConstraints - // knows it. So we simply ask for them by calling - // get_boundary_indices (). 
- // The third step is to construct - // constraints on all those degrees of - // freedom: their value should be zero - // after each application of the level - // operators. To this end, we construct - // ConstraintMatrix objects for each level, - // and add to each of these constraints for - // each degree of freedom. Due to the way - // the ConstraintMatrix stores its data, - // the function to add a constraint on a - // single degree of freedom and force it to - // be zero is called - // Constraintmatrix::add_line(); doing so - // for several degrees of freedom at once - // can be done using + // The indices just identified will later be used to decide where the + // assembled value has to be added into on each level. On the other hand, + // we also have to impose zero boundary conditions on the external + // boundary of each level. But this the MGConstraints knows + // it. So we simply ask for them by calling get_boundary_indices + // (). The third step is to construct constraints on all those + // degrees of freedom: their value should be zero after each application + // of the level operators. To this end, we construct ConstraintMatrix + // objects for each level, and add to each of these constraints for each + // degree of freedom. Due to the way the ConstraintMatrix stores its data, + // the function to add a constraint on a single degree of freedom and + // force it to be zero is called Constraintmatrix::add_line(); doing so + // for several degrees of freedom at once can be done using // Constraintmatrix::add_lines(): std::vector boundary_constraints (triangulation.n_levels()); std::vector boundary_interface_constraints (triangulation.n_levels()); @@ -597,20 +487,13 @@ namespace Step16 boundary_interface_constraints[level].close (); } - // Now that we're done with most of our - // preliminaries, let's start the - // integration loop. It looks mostly like - // the loop in - // assemble_system, with two - // exceptions: (i) we don't need a right - // hand side, and more significantly (ii) we - // don't just loop over all active cells, - // but in fact all cells, active or - // not. Consequently, the correct iterator - // to use is MGDoFHandler::cell_iterator - // rather than - // MGDoFHandler::active_cell_iterator. Let's - // go about it: + // Now that we're done with most of our preliminaries, let's start the + // integration loop. It looks mostly like the loop in + // assemble_system, with two exceptions: (i) we don't need a + // right hand side, and more significantly (ii) we don't just loop over + // all active cells, but in fact all cells, active or not. Consequently, + // the correct iterator to use is MGDoFHandler::cell_iterator rather than + // MGDoFHandler::active_cell_iterator. Let's go about it: typename MGDoFHandler::cell_iterator cell = mg_dof_handler.begin(), endc = mg_dof_handler.end(); @@ -630,85 +513,54 @@ namespace Step16 fe_values.shape_grad(j,q_point) * fe_values.JxW(q_point)); - // The rest of the assembly is again - // slightly different. This starts with - // a gotcha that is easily forgotten: - // The indices of global degrees of - // freedom we want here are the ones - // for current level, not for the - // global matrix. We therefore need the - // function - // MGDoFAccessorLLget_mg_dof_indices, - // not MGDoFAccessor::get_dof_indices - // as used in the assembly of the + // The rest of the assembly is again slightly different. 
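Stepping back to the per-level constraint objects set up a few lines above: they can be filled roughly as follows. The add_lines overloads for boolean masks and index sets, together with get_boundary_indices(), are the facilities the comment refers to, though the exact body here is a sketch:

std::vector<ConstraintMatrix> boundary_constraints (triangulation.n_levels());
std::vector<ConstraintMatrix> boundary_interface_constraints (triangulation.n_levels());
for (unsigned int level=0; level<triangulation.n_levels(); ++level)
  {
    // Force interface DoFs and boundary DoFs on this level to zero ...
    boundary_constraints[level].add_lines (interface_dofs[level]);
    boundary_constraints[level].add_lines (mg_constrained_dofs.get_boundary_indices()[level]);
    boundary_constraints[level].close ();

    // ... and, separately, those DoFs that are both on the refinement
    // interface and on the external boundary.
    boundary_interface_constraints[level].add_lines (boundary_interface_dofs[level]);
    boundary_interface_constraints[level].close ();
  }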
This starts + // with a gotcha that is easily forgotten: The indices of global + // degrees of freedom we want here are the ones for current level, not + // for the global matrix. We therefore need the function + // MGDoFAccessorLLget_mg_dof_indices, not + // MGDoFAccessor::get_dof_indices as used in the assembly of the // global system: cell->get_mg_dof_indices (local_dof_indices); - // Next, we need to copy local - // contributions into the level - // objects. We can do this in the same - // way as in the global assembly, using - // a constraint object that takes care - // of constrained degrees (which here - // are only boundary nodes, as the - // individual levels have no hanging - // node constraints). Note that the - // boundary_constraints - // object makes sure that the level - // matrices contains no contributions - // from degrees of freedom at the - // interface between cells of different - // refinement level. + // Next, we need to copy local contributions into the level + // objects. We can do this in the same way as in the global assembly, + // using a constraint object that takes care of constrained degrees + // (which here are only boundary nodes, as the individual levels have + // no hanging node constraints). Note that the + // boundary_constraints object makes sure that the level + // matrices contains no contributions from degrees of freedom at the + // interface between cells of different refinement level. boundary_constraints[cell->level()] .distribute_local_to_global (cell_matrix, local_dof_indices, mg_matrices[cell->level()]); - // The next step is again slightly more - // obscure (but explained in the @ref - // mg_paper): We need the remainder of - // the operator that we just copied - // into the mg_matrices - // object, namely the part on the - // interface between cells at the - // current level and cells one level - // coarser. This matrix exists in two - // directions: for interior DoFs (index - // $i$) of the current level to those - // sitting on the interface (index - // $j$), and the other way around. Of - // course, since we have a symmetric - // operator, one of these matrices is - // the transpose of the other. + // The next step is again slightly more obscure (but explained in the + // @ref mg_paper): We need the remainder of the operator that we just + // copied into the mg_matrices object, namely the part on + // the interface between cells at the current level and cells one + // level coarser. This matrix exists in two directions: for interior + // DoFs (index $i$) of the current level to those sitting on the + // interface (index $j$), and the other way around. Of course, since + // we have a symmetric operator, one of these matrices is the + // transpose of the other. // - // The way we assemble these matrices - // is as follows: since the are formed - // from parts of the local - // contributions, we first delete all - // those parts of the local - // contributions that we are not - // interested in, namely all those - // elements of the local matrix for - // which not $i$ is an interface DoF - // and $j$ is not. The result is one of - // the two matrices that we are - // interested in, and we then copy it - // into the - // mg_interface_matrices - // object. The - // boundary_interface_constraints - // object at the same time makes sure - // that we delete contributions from - // all degrees of freedom that are not - // only on the interface but also on - // the external boundary of the domain. 
+ // The way we assemble these matrices is as follows: since the are + // formed from parts of the local contributions, we first delete all + // those parts of the local contributions that we are not interested + // in, namely all those elements of the local matrix for which not $i$ + // is an interface DoF and $j$ is not. The result is one of the two + // matrices that we are interested in, and we then copy it into the + // mg_interface_matrices object. The + // boundary_interface_constraints object at the same time + // makes sure that we delete contributions from all degrees of freedom + // that are not only on the interface but also on the external + // boundary of the domain. // - // The last part to remember is how to - // get the other matrix. Since it is - // only the transpose, we will later - // (in the solve() - // function) be able to just pass the - // transpose matrix where necessary. + // The last part to remember is how to get the other matrix. Since it + // is only the transpose, we will later (in the solve() + // function) be able to just pass the transpose matrix where + // necessary. for (unsigned int i=0; ilevel()][local_dof_indices[i]]==true && @@ -726,51 +578,36 @@ namespace Step16 // @sect4{LaplaceProblem::solve} - // This is the other function that is - // significantly different in support of the - // multigrid solver (or, in fact, the - // preconditioner for which we use the - // multigrid method). + // This is the other function that is significantly different in support of + // the multigrid solver (or, in fact, the preconditioner for which we use + // the multigrid method). // - // Let us start out by setting up two of the - // components of multilevel methods: transfer - // operators between levels, and a solver on - // the coarsest level. In finite element - // methods, the transfer operators are - // derived from the finite element function - // spaces involved and can often be computed - // in a generic way independent of the - // problem under consideration. In that case, - // we can use the MGTransferPrebuilt class - // that, given the constraints on the global - // level and an MGDoFHandler object computes - // the matrices corresponding to these - // transfer operators. + // Let us start out by setting up two of the components of multilevel + // methods: transfer operators between levels, and a solver on the coarsest + // level. In finite element methods, the transfer operators are derived from + // the finite element function spaces involved and can often be computed in + // a generic way independent of the problem under consideration. In that + // case, we can use the MGTransferPrebuilt class that, given the constraints + // on the global level and an MGDoFHandler object computes the matrices + // corresponding to these transfer operators. // - // The second part of the following lines - // deals with the coarse grid solver. Since - // our coarse grid is very coarse indeed, we - // decide for a direct solver (a Householder - // decomposition of the coarsest level - // matrix), even if its implementation is not - // particularly sophisticated. If our coarse - // mesh had many more cells than the five we - // have here, something better suited would - // obviously be necessary here. + // The second part of the following lines deals with the coarse grid + // solver. 
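Before turning to the solver, the interface-matrix filtering described above is worth spelling out. The condition is the one stated in the comment (keep only entries where $i$ is an interface DoF and $j$ is not), and the names match those used earlier in this function:

// Keep only those entries for which i is an interface DoF and j is not;
// everything else already went into the level matrix above.
for (unsigned int i=0; i<dofs_per_cell; ++i)
  for (unsigned int j=0; j<dofs_per_cell; ++j)
    if ( !(interface_dofs[cell->level()][local_dof_indices[i]] == true &&
           interface_dofs[cell->level()][local_dof_indices[j]] == false))
      cell_matrix(i,j) = 0;

// The filtered local matrix goes into the interface object; the
// boundary_interface_constraints object drops contributions from DoFs
// that also sit on the external boundary of the domain.
boundary_interface_constraints[cell->level()]
  .distribute_local_to_global (cell_matrix,
                               local_dof_indices,
                               mg_interface_matrices[cell->level()]);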
Since our coarse grid is very coarse indeed, we decide for a + // direct solver (a Householder decomposition of the coarsest level matrix), + // even if its implementation is not particularly sophisticated. If our + // coarse mesh had many more cells than the five we have here, something + // better suited would obviously be necessary here. template void LaplaceProblem::solve () { - // Create the object that deals with the transfer - // between different refinement levels. We need to - // pass it the hanging node constraints. + // Create the object that deals with the transfer between different + // refinement levels. We need to pass it the hanging node constraints. MGTransferPrebuilt > mg_transfer(hanging_node_constraints, mg_constrained_dofs); - // Now the prolongation matrix has to be built. - // This matrix needs to take the boundary values on - // each level into account and needs to know about - // the indices at the refinement egdes. The - // MGConstraints knows about that so - // pass it as an argument. + // Now the prolongation matrix has to be built. This matrix needs to take + // the boundary values on each level into account and needs to know about + // the indices at the refinement egdes. The MGConstraints + // knows about that so pass it as an argument. mg_transfer.build_matrices(mg_dof_handler); FullMatrix coarse_matrix; @@ -778,60 +615,37 @@ namespace Step16 MGCoarseGridHouseholder<> coarse_grid_solver; coarse_grid_solver.initialize (coarse_matrix); - // The next component of a multilevel - // solver or preconditioner is that we need - // a smoother on each level. A common - // choice for this is to use the - // application of a relaxation method (such - // as the SOR, Jacobi or Richardson method) - // or a small number of iterations of a - // solver method (such as CG or GMRES). The - // MGSmootherRelaxation and - // MGSmootherPrecondition classes provide - // support for these two kinds of - // smoothers. Here, we opt for the - // application of a single SOR - // iteration. To this end, we define an - // appropriate typedef and - // then setup a smoother object. + // The next component of a multilevel solver or preconditioner is that we + // need a smoother on each level. A common choice for this is to use the + // application of a relaxation method (such as the SOR, Jacobi or + // Richardson method) or a small number of iterations of a solver method + // (such as CG or GMRES). The MGSmootherRelaxation and + // MGSmootherPrecondition classes provide support for these two kinds of + // smoothers. Here, we opt for the application of a single SOR + // iteration. To this end, we define an appropriate typedef + // and then setup a smoother object. // - // Since this smoother needs temporary - // vectors to store intermediate results, - // we need to provide a VectorMemory - // object. Since these vectors will be - // reused over and over, the - // GrowingVectorMemory is more time - // efficient than the PrimitiveVectorMemory - // class in the current case. + // Since this smoother needs temporary vectors to store intermediate + // results, we need to provide a VectorMemory object. Since these vectors + // will be reused over and over, the GrowingVectorMemory is more time + // efficient than the PrimitiveVectorMemory class in the current case. // - // The last step is to initialize the - // smoother object with our level matrices - // and to set some smoothing parameters. 
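For reference, the transfer and coarse-solver setup discussed at the top of solve() reads roughly as follows once the vector and matrix template arguments are spelled out; this is a sketch assuming double-precision deal.II vectors:

// Transfer between levels, aware of the hanging-node constraints and of
// the boundary/refinement-edge information stored in mg_constrained_dofs.
MGTransferPrebuilt<Vector<double> > mg_transfer (hanging_node_constraints,
                                                 mg_constrained_dofs);
mg_transfer.build_matrices (mg_dof_handler);

// A direct (Householder) solve on the coarsest level.
FullMatrix<double> coarse_matrix;
coarse_matrix.copy_from (mg_matrices[0]);
MGCoarseGridHouseholder<> coarse_grid_solver;
coarse_grid_solver.initialize (coarse_matrix);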
- // The initialize() function - // can optionally take additional arguments - // that will be passed to the smoother - // object on each level. In the current - // case for the SOR smoother, this could, - // for example, include a relaxation - // parameter. However, we here leave these - // at their default values. The call to - // set_steps() indicates that - // we will use two pre- and two - // post-smoothing steps on each level; to - // use a variable number of smoother steps - // on different levels, more options can be - // set in the constructor call to the - // mg_smoother object. + // The last step is to initialize the smoother object with our level + // matrices and to set some smoothing parameters. The + // initialize() function can optionally take additional + // arguments that will be passed to the smoother object on each level. In + // the current case for the SOR smoother, this could, for example, include + // a relaxation parameter. However, we here leave these at their default + // values. The call to set_steps() indicates that we will use + // two pre- and two post-smoothing steps on each level; to use a variable + // number of smoother steps on different levels, more options can be set + // in the constructor call to the mg_smoother object. // - // The last step results from the fact that - // we use the SOR method as a smoother - - // which is not symmetric - but we use the - // conjugate gradient iteration (which - // requires a symmetric preconditioner) - // below, we need to let the multilevel - // preconditioner make sure that we get a - // symmetric operator even for nonsymmetric - // smoothers: + // The last step results from the fact that we use the SOR method as a + // smoother - which is not symmetric - but we use the conjugate gradient + // iteration (which requires a symmetric preconditioner) below, we need to + // let the multilevel preconditioner make sure that we get a symmetric + // operator even for nonsymmetric smoothers: typedef PreconditionSOR > Smoother; GrowingVectorMemory<> vector_memory; MGSmootherRelaxation, Smoother, Vector > @@ -840,26 +654,19 @@ namespace Step16 mg_smoother.set_steps(2); mg_smoother.set_symmetric(true); - // The next preparatory step is that we - // must wrap our level and interface - // matrices in an object having the - // required multiplication functions. We - // will create two objects for the - // interface objects going from coarse to - // fine and the other way around; the - // multigrid algorithm will later use the - // transpose operator for the latter - // operation, allowing us to initialize - // both up and down versions of the - // operator with the matrices we already - // built: + // The next preparatory step is that we must wrap our level and interface + // matrices in an object having the required multiplication functions. We + // will create two objects for the interface objects going from coarse to + // fine and the other way around; the multigrid algorithm will later use + // the transpose operator for the latter operation, allowing us to + // initialize both up and down versions of the operator with the matrices + // we already built: MGMatrix<> mg_matrix(&mg_matrices); MGMatrix<> mg_interface_up(&mg_interface_matrices); MGMatrix<> mg_interface_down(&mg_interface_matrices); - // Now, we are ready to set up the - // V-cycle operator and the - // multilevel preconditioner. + // Now, we are ready to set up the V-cycle operator and the multilevel + // preconditioner. 
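Spelled out with its template arguments, that setup looks roughly like the following sketch; Multigrid, set_edge_matrices and PreconditionMG are the classes and calls named in the surrounding comments, with double-precision vectors assumed:

Multigrid<Vector<double> > mg (mg_dof_handler,
                               mg_matrix,
                               coarse_grid_solver,
                               mg_transfer,
                               mg_smoother,
                               mg_smoother);
// Tell the V-cycle about the interface operators; the "down" version is
// used as the transpose of the "up" one, as discussed above.
mg.set_edge_matrices (mg_interface_down, mg_interface_up);

// Wrap everything into a preconditioner usable by the CG solver below.
PreconditionMG<dim, Vector<double>, MGTransferPrebuilt<Vector<double> > >
  preconditioner (mg_dof_handler, mg, mg_transfer);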
Multigrid > mg(mg_dof_handler, mg_matrix, coarse_grid_solver, @@ -871,9 +678,8 @@ namespace Step16 PreconditionMG, MGTransferPrebuilt > > preconditioner(mg_dof_handler, mg, mg_transfer); - // With all this together, we can finally - // get about solving the linear system in - // the usual way: + // With all this together, we can finally get about solving the linear + // system in the usual way: SolverControl solver_control (1000, 1e-12); SolverCG<> cg (solver_control); @@ -892,22 +698,15 @@ namespace Step16 // @sect4{Postprocessing} - // The following two functions postprocess a - // solution once it is computed. In - // particular, the first one refines the mesh - // at the beginning of each cycle while the - // second one outputs results at the end of - // each such cycle. The functions are almost - // unchanged from those in step-6, with the - // exception of two minor differences: The - // KellyErrorEstimator::estimate function - // wants an argument of type DoFHandler, not - // MGDoFHandler, and so we have to cast from - // derived to base class; and we generate - // output in VTK format, to use the more - // modern visualization programs available - // today compared to those that were - // available when step-6 was written. + // The following two functions postprocess a solution once it is + // computed. In particular, the first one refines the mesh at the beginning + // of each cycle while the second one outputs results at the end of each + // such cycle. The functions are almost unchanged from those in step-6, with + // the exception of two minor differences: The KellyErrorEstimator::estimate + // function wants an argument of type DoFHandler, not MGDoFHandler, and so + // we have to cast from derived to base class; and we generate output in VTK + // format, to use the more modern visualization programs available today + // compared to those that were available when step-6 was written. template void LaplaceProblem::refine_grid () { @@ -947,14 +746,10 @@ namespace Step16 // @sect4{LaplaceProblem::run} - // Like several of the functions above, this - // is almost exactly a copy of of the - // corresponding function in step-6. The only - // difference is the call to - // assemble_multigrid that takes - // care of forming the matrices on every - // level that we need in the multigrid - // method. + // Like several of the functions above, this is almost exactly a copy of of + // the corresponding function in step-6. The only difference is the call to + // assemble_multigrid that takes care of forming the matrices + // on every level that we need in the multigrid method. template void LaplaceProblem::run () { @@ -1002,8 +797,7 @@ namespace Step16 // @sect3{The main() function} // -// This is again the same function as -// in step-6: +// This is again the same function as in step-6: int main () { try diff --git a/deal.II/examples/step-17/step-17.cc b/deal.II/examples/step-17/step-17.cc index 46a7ea35eb..a335882eb8 100644 --- a/deal.II/examples/step-17/step-17.cc +++ b/deal.II/examples/step-17/step-17.cc @@ -10,9 +10,8 @@ /* further information on this license. */ -// First the usual assortment of header files -// we have already used in previous example -// programs: +// First the usual assortment of header files we have already used in previous +// example programs: #include #include #include @@ -36,58 +35,40 @@ #include #include -// And here come the things that we -// need particularly for this example -// program and that weren't in -// step-8. 
First, we replace the -// standard output std::cout by a -// new stream pcout which is used -// in %parallel computations for -// generating output only on one of -// the MPI processes. +// And here come the things that we need particularly for this example program +// and that weren't in step-8. First, we replace the standard output +// std::cout by a new stream pcout which is used in +// %parallel computations for generating output only on one of the MPI +// processes. #include -// We are going to query the number -// of processes and the number of the -// present process by calling the -// respective functions in the -// Utilities::MPI namespace. +// We are going to query the number of processes and the number of the present +// process by calling the respective functions in the Utilities::MPI +// namespace. #include -// Then, we are -// going to replace all linear algebra -// components that involve the (global) -// linear system by classes that wrap -// interfaces similar to our own linear -// algebra classes around what PETSc offers -// (PETSc is a library written in C, and -// deal.II comes with wrapper classes that -// provide the PETSc functionality with an -// interface that is similar to the interface -// we already had for our own linear algebra -// classes). In particular, we need vectors -// and matrices that are distributed across -// several processes in MPI programs (and -// simply map to sequential, local vectors -// and matrices if there is only a single -// process, i.e. if you are running on only -// one machine, and without MPI support): +// Then, we are going to replace all linear algebra components that involve +// the (global) linear system by classes that wrap interfaces similar to our +// own linear algebra classes around what PETSc offers (PETSc is a library +// written in C, and deal.II comes with wrapper classes that provide the PETSc +// functionality with an interface that is similar to the interface we already +// had for our own linear algebra classes). In particular, we need vectors and +// matrices that are distributed across several processes in MPI programs (and +// simply map to sequential, local vectors and matrices if there is only a +// single process, i.e. if you are running on only one machine, and without +// MPI support): #include #include #include -// Then we also need interfaces for solvers -// and preconditioners that PETSc provides: +// Then we also need interfaces for solvers and preconditioners that PETSc +// provides: #include #include -// And in addition, we need some algorithms -// for partitioning our meshes so that they -// can be efficiently distributed across an -// MPI network. The partitioning algorithm is -// implemented in the GridTools class, -// and we need an additional include file for -// a function in DoFRenumbering that -// allows to sort the indices associated with -// degrees of freedom so that they are -// numbered according to the subdomain they -// are associated with: +// And in addition, we need some algorithms for partitioning our meshes so +// that they can be efficiently distributed across an MPI network. 
The +// partitioning algorithm is implemented in the GridTools class, +// and we need an additional include file for a function in +// DoFRenumbering that allows to sort the indices associated with +// degrees of freedom so that they are numbered according to the subdomain +// they are associated with: #include #include @@ -96,25 +77,19 @@ #include #include -// The last step is as in all -// previous programs: +// The last step is as in all previous programs: namespace Step17 { using namespace dealii; - // Now, here comes the declaration of the - // main class and of various other things - // below it. As mentioned in the - // introduction, almost all of this has been - // copied verbatim from step-8, so we only - // comment on the few things that are - // different. There is one (cosmetic) change - // in that we let solve return a value, - // namely the number of iterations it took to - // converge, so that we can output this to - // the screen at the appropriate place. In - // addition, we introduce a stream-like - // variable pcout, explained below: + // Now, here comes the declaration of the main class and of various other + // things below it. As mentioned in the introduction, almost all of this has + // been copied verbatim from step-8, so we only comment on the few things + // that are different. There is one (cosmetic) change in that we let + // solve return a value, namely the number of iterations it + // took to converge, so that we can output this to the screen at the + // appropriate place. In addition, we introduce a stream-like variable + // pcout, explained below: template class ElasticProblem { @@ -130,33 +105,23 @@ namespace Step17 void refine_grid (); void output_results (const unsigned int cycle) const; - // The first variable is basically only - // for convenience: in %parallel program, - // if each process outputs status - // information, then there quickly is a - // lot of clutter. Rather, we would want - // to only have one process output - // everything once, for example the one - // with process number - // zero. ConditionalOStream does - // exactly this: it acts as if it were a - // stream, but only forwards to a real, - // underlying stream if a flag is set. By - // setting this condition to - // this_mpi_process==0, we make sure - // that output is only generated from the - // first process and that we don't get - // the same lines of output over and over - // again, once per process. + // The first variable is basically only for convenience: in %parallel + // program, if each process outputs status information, then there quickly + // is a lot of clutter. Rather, we would want to only have one process + // output everything once, for example the one with process number + // zero. ConditionalOStream does exactly this: it acts as if + // it were a stream, but only forwards to a real, underlying stream if a + // flag is set. By setting this condition to + // this_mpi_process==0, we make sure that output is only + // generated from the first process and that we don't get the same lines + // of output over and over again, once per process. // - // With this simple trick, we make sure - // that we don't have to guard each and - // every write to std::cout by a - // prefixed if(this_mpi_process==0). + // With this simple trick, we make sure that we don't have to guard each + // and every write to std::cout by a prefixed + // if(this_mpi_process==0). 
ConditionalOStream pcout; - // The next few variables are taken - // verbatim from step-8: + // The next few variables are taken verbatim from step-8: Triangulation triangulation; DoFHandler dof_handler; @@ -164,64 +129,43 @@ namespace Step17 ConstraintMatrix hanging_node_constraints; - // In step-8, this would have been the - // place where we would have declared the - // member variables for the sparsity - // pattern, the system matrix, right - // hand, and solution vector. We change - // these declarations to use %parallel - // PETSc objects instead (note that the - // fact that we use the %parallel versions - // is denoted the fact that we use the - // classes from the - // PETScWrappers::MPI namespace; - // sequential versions of these classes - // are in the PETScWrappers - // namespace, i.e. without the MPI - // part). Note also that we do not use a - // separate sparsity pattern, since PETSc - // manages that as part of its matrix - // data structures. + // In step-8, this would have been the place where we would have declared + // the member variables for the sparsity pattern, the system matrix, right + // hand, and solution vector. We change these declarations to use + // %parallel PETSc objects instead (note that the fact that we use the + // %parallel versions is denoted the fact that we use the classes from the + // PETScWrappers::MPI namespace; sequential versions of these + // classes are in the PETScWrappers namespace, i.e. without + // the MPI part). Note also that we do not use a separate + // sparsity pattern, since PETSc manages that as part of its matrix data + // structures. PETScWrappers::MPI::SparseMatrix system_matrix; PETScWrappers::MPI::Vector solution; PETScWrappers::MPI::Vector system_rhs; - // The next change is that we have to - // declare a variable that indicates the - // MPI communicator over which we are - // supposed to distribute our - // computations. Note that if this is a - // sequential job without support by MPI, - // then PETSc provides some dummy type - // for MPI_Comm, so we do not have to - // care here whether the job is really a - // %parallel one: + // The next change is that we have to declare a variable that indicates + // the MPI communicator over which we are supposed to distribute our + // computations. Note that if this is a sequential job without support by + // MPI, then PETSc provides some dummy type for MPI_Comm, so + // we do not have to care here whether the job is really a %parallel one: MPI_Comm mpi_communicator; - // Then we have two variables that tell - // us where in the %parallel world we - // are. The first of the following - // variables, n_mpi_processes tells - // us how many MPI processes there exist - // in total, while the second one, - // this_mpi_process, indicates which - // is the number of the present process - // within this space of processes. The - // latter variable will have a unique - // value for each process between zero - // and (less than) - // n_mpi_processes. If this program - // is run on a single machine without MPI - // support, then their values are 1 - // and 0, respectively. + // Then we have two variables that tell us where in the %parallel world we + // are. The first of the following variables, n_mpi_processes + // tells us how many MPI processes there exist in total, while the second + // one, this_mpi_process, indicates which is the number of + // the present process within this space of processes. 
The latter variable + // will have a unique value for each process between zero and (less than) + // n_mpi_processes. If this program is run on a single + // machine without MPI support, then their values are 1 and + // 0, respectively. const unsigned int n_mpi_processes; const unsigned int this_mpi_process; }; - // The following is again taken from step-8 - // without change: + // The following is again taken from step-8 without change: template class RightHandSide : public Function { @@ -284,28 +228,18 @@ namespace Step17 } - // The first step in the actual - // implementation of things is the - // constructor of the main class. Apart from - // initializing the same member variables - // that we already had in step-8, we here - // initialize the MPI communicator variable - // we shall use with the global MPI - // communicator linking all processes - // together (in more complex applications, - // one could here use a communicator object - // that only links a subset of all - // processes), and call the Utilities helper - // functions to determine the number of - // processes and where the present one fits - // into this picture. In addition, we make - // sure that output is only generated by the - // (globally) first process. As, - // this_mpi_process is determined after - // creation of pcout, we cannot set the - // condition through the constructor, i.e. by - // pcout(std::cout, this_mpi_process==0), but - // set the condition separately. + // The first step in the actual implementation of things is the constructor + // of the main class. Apart from initializing the same member variables that + // we already had in step-8, we here initialize the MPI communicator + // variable we shall use with the global MPI communicator linking all + // processes together (in more complex applications, one could here use a + // communicator object that only links a subset of all processes), and call + // the Utilities helper functions to determine the number of processes and + // where the present one fits into this picture. In addition, we make sure + // that output is only generated by the (globally) first process. As, + // this_mpi_process is determined after creation of pcout, we cannot set the + // condition through the constructor, i.e. by pcout(std::cout, + // this_mpi_process==0), but set the condition separately. template ElasticProblem::ElasticProblem () : @@ -328,88 +262,59 @@ namespace Step17 } - // The second step is the function in which - // we set up the various variables for the - // global linear system to be solved. + // The second step is the function in which we set up the various variables + // for the global linear system to be solved. template void ElasticProblem::setup_system () { - // Before we even start out setting up the - // system, there is one thing to do for a - // %parallel program: we need to assign - // cells to each of the processes. We do - // this by splitting (partitioning) the - // mesh cells into as many chunks - // (subdomains) as there are processes - // in this MPI job (if this is a sequential - // job, then there is only one job and all - // cells will get a zero as subdomain - // indicator). This is done using an - // interface to the METIS library that does - // this in a very efficient way, trying to - // minimize the number of nodes on the - // interfaces between subdomains. 
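A sketch of the constructor described a few paragraphs above may help here. The member names are the ones declared earlier; the FESystem of dim FE_Q(1) components is the step-8 choice and is an assumption on our part, and set_condition() is how a ConditionalOStream's output condition can be set after construction:

template <int dim>
ElasticProblem<dim>::ElasticProblem ()
  :
  pcout (std::cout),
  dof_handler (triangulation),
  fe (FE_Q<dim>(1), dim),
  mpi_communicator (MPI_COMM_WORLD),
  n_mpi_processes (Utilities::MPI::n_mpi_processes(mpi_communicator)),
  this_mpi_process (Utilities::MPI::this_mpi_process(mpi_communicator))
{
  // Only the (globally) first process is allowed to produce screen output.
  pcout.set_condition (this_mpi_process == 0);
}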
All this - // is hidden behind the following call to a - // deal.II library function: + // Before we even start out setting up the system, there is one thing to + // do for a %parallel program: we need to assign cells to each of the + // processes. We do this by splitting (partitioning) the mesh + // cells into as many chunks (subdomains) as there are + // processes in this MPI job (if this is a sequential job, then there is + // only one job and all cells will get a zero as subdomain + // indicator). This is done using an interface to the METIS library that + // does this in a very efficient way, trying to minimize the number of + // nodes on the interfaces between subdomains. All this is hidden behind + // the following call to a deal.II library function: GridTools::partition_triangulation (n_mpi_processes, triangulation); - // As for the linear system: First, we need - // to generate an enumeration for the - // degrees of freedom in our - // problem. Further below, we will show how - // we assign each cell to one of the MPI - // processes before we even get here. What - // we then need to do is to enumerate the - // degrees of freedom in a way so that all - // degrees of freedom associated with cells - // in subdomain zero (which resides on - // process zero) come before all DoFs - // associated with cells on subdomain one, - // before those on cells on process two, - // and so on. We need this since we have to - // split the global vectors for right hand - // side and solution, as well as the matrix - // into contiguous chunks of rows that live - // on each of the processors, and we will - // want to do this in a way that requires - // minimal communication. This is done - // using the following two functions, which - // first generates an initial ordering of - // all degrees of freedom, and then re-sort - // them according to above criterion: + // As for the linear system: First, we need to generate an enumeration for + // the degrees of freedom in our problem. Further below, we will show how + // we assign each cell to one of the MPI processes before we even get + // here. What we then need to do is to enumerate the degrees of freedom in + // a way so that all degrees of freedom associated with cells in subdomain + // zero (which resides on process zero) come before all DoFs associated + // with cells on subdomain one, before those on cells on process two, and + // so on. We need this since we have to split the global vectors for right + // hand side and solution, as well as the matrix into contiguous chunks of + // rows that live on each of the processors, and we will want to do this + // in a way that requires minimal communication. This is done using the + // following two functions, which first generates an initial ordering of + // all degrees of freedom, and then re-sort them according to above + // criterion: dof_handler.distribute_dofs (fe); DoFRenumbering::subdomain_wise (dof_handler); - // While we're at it, let us also count how - // many degrees of freedom there exist on - // the present process: + // While we're at it, let us also count how many degrees of freedom there + // exist on the present process: const unsigned int n_local_dofs = DoFTools::count_dofs_with_subdomain_association (dof_handler, this_mpi_process); - // Then we initialize the system matrix, - // solution, and right hand side - // vectors. 
Since they all need to work in - // %parallel, we have to pass them an MPI - // communication object, as well as their - // global sizes (both dimensions are equal - // to the number of degrees of freedom), - // and also how many rows out of this - // global size are to be stored locally - // (n_local_dofs). In addition, PETSc - // needs to know how to partition the - // columns in the chunk of the matrix that - // is stored locally; for square matrices, - // the columns should be partitioned in the - // same way as the rows (indicated by the - // second n_local_dofs in the call) but - // in the case of rectangular matrices one - // has to partition the columns in the same - // way as vectors are partitioned with - // which the matrix is multiplied, while - // rows have to partitioned in the same way - // as destination vectors of matrix-vector - // multiplications: + // Then we initialize the system matrix, solution, and right hand side + // vectors. Since they all need to work in %parallel, we have to pass them + // an MPI communication object, as well as their global sizes (both + // dimensions are equal to the number of degrees of freedom), and also how + // many rows out of this global size are to be stored locally + // (n_local_dofs). In addition, PETSc needs to know how to + // partition the columns in the chunk of the matrix that is stored + // locally; for square matrices, the columns should be partitioned in the + // same way as the rows (indicated by the second n_local_dofs + // in the call) but in the case of rectangular matrices one has to + // partition the columns in the same way as vectors are partitioned with + // which the matrix is multiplied, while rows have to partitioned in the + // same way as destination vectors of matrix-vector multiplications: system_matrix.reinit (mpi_communicator, dof_handler.n_dofs(), dof_handler.n_dofs(), @@ -420,15 +325,11 @@ namespace Step17 solution.reinit (mpi_communicator, dof_handler.n_dofs(), n_local_dofs); system_rhs.reinit (mpi_communicator, dof_handler.n_dofs(), n_local_dofs); - // Finally, we need to initialize the - // objects denoting hanging node - // constraints for the present grid. Note - // that since PETSc handles the sparsity - // pattern internally to the matrix, there - // is no need to set up an independent - // sparsity pattern here, and to condense - // it for constraints, as we have done in - // all other example programs. + // Finally, we need to initialize the objects denoting hanging node + // constraints for the present grid. Note that since PETSc handles the + // sparsity pattern internally to the matrix, there is no need to set up + // an independent sparsity pattern here, and to condense it for + // constraints, as we have done in all other example programs. hanging_node_constraints.clear (); DoFTools::make_hanging_node_constraints (dof_handler, hanging_node_constraints); @@ -436,82 +337,48 @@ namespace Step17 } - // The third step is to actually assemble the - // matrix and right hand side of the - // problem. There are some things worth - // mentioning before we go into - // detail. First, we will be assembling the - // system in %parallel, i.e. each process will - // be responsible for assembling on cells - // that belong to this particular - // processor. Note that the degrees of - // freedom are split in a way such that all - // DoFs in the interior of cells and between - // cells belonging to the same subdomain - // belong to the process that owns the - // cell. 
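Going back to the reinitialization calls shown above: with the local size obtained from count_dofs_with_subdomain_association, they look roughly like the sketch below. The last argument of the matrix reinit, an estimate of the number of couplings per row, is an assumption about what such a call typically passes:

const unsigned int n_local_dofs
  = DoFTools::count_dofs_with_subdomain_association (dof_handler,
                                                     this_mpi_process);

// Square matrix: rows and columns are partitioned the same way.
system_matrix.reinit (mpi_communicator,
                      dof_handler.n_dofs(),
                      dof_handler.n_dofs(),
                      n_local_dofs,
                      n_local_dofs,
                      dof_handler.max_couplings_between_dofs());

solution.reinit (mpi_communicator, dof_handler.n_dofs(), n_local_dofs);
system_rhs.reinit (mpi_communicator, dof_handler.n_dofs(), n_local_dofs);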
However, even then we sometimes need - // to assemble on a cell with a neighbor that - // belongs to a different process, and in - // these cases when we write the local - // contributions into the global matrix or - // right hand side vector, we actually have - // to transfer these entries to the other - // process. Fortunately, we don't have to do - // this by hand, PETSc does all this for us - // by caching these elements locally, and - // sending them to the other processes as - // necessary when we call the compress() - // functions on the matrix and vector at the - // end of this function. + // The third step is to actually assemble the matrix and right hand side of + // the problem. There are some things worth mentioning before we go into + // detail. First, we will be assembling the system in %parallel, i.e. each + // process will be responsible for assembling on cells that belong to this + // particular processor. Note that the degrees of freedom are split in a way + // such that all DoFs in the interior of cells and between cells belonging + // to the same subdomain belong to the process that owns the + // cell. However, even then we sometimes need to assemble on a cell with a + // neighbor that belongs to a different process, and in these cases when we + // write the local contributions into the global matrix or right hand side + // vector, we actually have to transfer these entries to the other + // process. Fortunately, we don't have to do this by hand, PETSc does all + // this for us by caching these elements locally, and sending them to the + // other processes as necessary when we call the compress() + // functions on the matrix and vector at the end of this function. // - // The second point is that once we - // have handed over matrix and vector - // contributions to PETSc, it is a) - // hard, and b) very inefficient to - // get them back for - // modifications. This is not only - // the fault of PETSc, it is also a - // consequence of the distributed - // nature of this program: if an - // entry resides on another - // processor, then it is necessarily - // expensive to get it. The - // consequence of this is that where - // we previously first assembled the - // matrix and right hand side as if - // there were no hanging node - // constraints and boundary values, - // and then eliminated these in a - // second step, we should now try to - // do that while still assembling the - // local systems, and before handing - // these entries over to PETSc. At - // least as far as eliminating - // hanging nodes is concerned, this - // is actually possible, though - // removing boundary nodes isn't that - // simple. deal.II provides functions - // to do this first part: instead of - // copying elements by hand into the - // global matrix, we use the - // distribute_local_to_global - // functions below to take care of - // hanging nodes at the same - // time. The second step, elimination - // of boundary nodes, is then done in - // exactly the same way as in all - // previous example programs. + // The second point is that once we have handed over matrix and vector + // contributions to PETSc, it is a) hard, and b) very inefficient to get + // them back for modifications. This is not only the fault of PETSc, it is + // also a consequence of the distributed nature of this program: if an entry + // resides on another processor, then it is necessarily expensive to get + // it. 
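As mentioned above, the off-process entries PETSc has cached are exchanged by compressing matrix and vector once all cells have been visited; a minimal sketch of that final step at the end of assemble_system is:

// Exchange cached off-process entries; after this, matrix and right hand
// side are in a consistent state on all processes.
system_matrix.compress ();
system_rhs.compress ();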
The consequence of this is that where we previously first assembled + // the matrix and right hand side as if there were no hanging node + // constraints and boundary values, and then eliminated these in a second + // step, we should now try to do that while still assembling the local + // systems, and before handing these entries over to PETSc. At least as far + // as eliminating hanging nodes is concerned, this is actually possible, + // though removing boundary nodes isn't that simple. deal.II provides + // functions to do this first part: instead of copying elements by hand into + // the global matrix, we use the distribute_local_to_global + // functions below to take care of hanging nodes at the same time. The + // second step, elimination of boundary nodes, is then done in exactly the + // same way as in all previous example programs. // // So, here is the actual implementation: template void ElasticProblem::assemble_system () { - // The infrastructure to assemble linear - // systems is the same as in all the other - // programs, and in particular unchanged - // from step-8. Note that we still use the - // deal.II full matrix and vector types for - // the local systems. + // The infrastructure to assemble linear systems is the same as in all the + // other programs, and in particular unchanged from step-8. Note that we + // still use the deal.II full matrix and vector types for the local + // systems. QGauss quadrature_formula(2); FEValues fe_values (fe, quadrature_formula, update_values | update_gradients | @@ -535,35 +402,24 @@ namespace Step17 Vector(dim)); - // The next thing is the loop over all - // elements. Note that we do not have to do - // all the work: our job here is only to - // assemble the system on cells that - // actually belong to this MPI process, all - // other cells will be taken care of by - // other processes. This is what the - // if-clause immediately after the for-loop - // takes care of: it queries the subdomain - // identifier of each cell, which is a - // number associated with each cell that - // tells which process handles it. In more - // generality, the subdomain id is used to - // split a domain into several parts (we do - // this above, at the beginning of - // setup_system), and which allows to - // identify which subdomain a cell is - // living on. In this application, we have - // each process handle exactly one - // subdomain, so we identify the terms - // subdomain and MPI process with - // each other. + // The next thing is the loop over all elements. Note that we do not have + // to do all the work: our job here is only to assemble the system on + // cells that actually belong to this MPI process, all other cells will be + // taken care of by other processes. This is what the if-clause + // immediately after the for-loop takes care of: it queries the subdomain + // identifier of each cell, which is a number associated with each cell + // that tells which process handles it. In more generality, the subdomain + // id is used to split a domain into several parts (we do this above, at + // the beginning of setup_system), and which allows to + // identify which subdomain a cell is living on. In this application, we + // have each process handle exactly one subdomain, so we identify the + // terms subdomain and MPI process with each + // other. 
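The if-clause the comment refers to looks like the following sketch, using the iterator and variable names of this function:

typename DoFHandler<dim>::active_cell_iterator
  cell = dof_handler.begin_active(),
  endc = dof_handler.end();
for (; cell != endc; ++cell)
  // Only assemble on cells this MPI process owns; every other cell is
  // some other process's responsibility.
  if (cell->subdomain_id() == this_mpi_process)
    {
      fe_values.reinit (cell);
      // ... local assembly as in step-8 ...
    }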
// - // Apart from this, assembling the local - // system is relatively uneventful if you - // have understood how this is done in - // step-8, and only becomes interesting - // again once we start distributing it into - // the global matrix and right hand sides. + // Apart from this, assembling the local system is relatively uneventful + // if you have understood how this is done in step-8, and only becomes + // interesting again once we start distributing it into the global matrix + // and right hand sides. typename DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); @@ -628,38 +484,22 @@ namespace Step17 fe_values.JxW(q_point); } - // Now we have the local system, and - // need to transfer it into the - // global objects. However, as - // described in the introduction to - // this function, we want to avoid - // any operations to matrix and - // vector entries after handing them - // off to PETSc (i.e. after - // distributing to the global - // objects). Therefore, we will take - // care of hanging node constraints - // already here. This is not quite - // trivial since the rows and columns - // of constrained nodes have to be - // distributed to the rows and - // columns of those nodes to which - // they are constrained. This can't - // be done on a purely local basis - // (because the degrees of freedom to - // which hanging nodes are - // constrained may not be associated - // with the cell we are presently - // treating, and are therefore not - // represented in the local matrix - // and vector), but it can be done - // while distributing the local - // system to the global one. This is - // what the following two calls do, - // i.e. they distribute to the global - // objects and at the same time make - // sure that hanging node constraints - // are taken care of: + // Now we have the local system, and need to transfer it into the + // global objects. However, as described in the introduction to this + // function, we want to avoid any operations to matrix and vector + // entries after handing them off to PETSc (i.e. after distributing + // to the global objects). Therefore, we will take care of hanging + // node constraints already here. This is not quite trivial since + // the rows and columns of constrained nodes have to be distributed + // to the rows and columns of those nodes to which they are + // constrained. This can't be done on a purely local basis (because + // the degrees of freedom to which hanging nodes are constrained may + // not be associated with the cell we are presently treating, and + // are therefore not represented in the local matrix and vector), + // but it can be done while distributing the local system to the + // global one. This is what the following two calls do, i.e. they + // distribute to the global objects and at the same time make sure + // that hanging node constraints are taken care of: cell->get_dof_indices (local_dof_indices); hanging_node_constraints .distribute_local_to_global (cell_matrix, @@ -672,14 +512,12 @@ namespace Step17 system_rhs); } - // The global matrix and right hand side - // vectors have now been formed. Note that - // since we took care of this already - // above, we do not have to condense away - // hanging node constraints any more. + // The global matrix and right hand side vectors have now been + // formed. Note that since we took care of this already above, we do not + // have to condense away hanging node constraints any more. 
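Condensed into one place (the actual calls are spread over the hunks above), the copy-local-to-global step that eliminates hanging node constraints on the fly consists of the two calls the comment refers to:

cell->get_dof_indices (local_dof_indices);
hanging_node_constraints
  .distribute_local_to_global (cell_matrix, local_dof_indices, system_matrix);
hanging_node_constraints
  .distribute_local_to_global (cell_rhs, local_dof_indices, system_rhs);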
// - // However, we still have to apply boundary - // values, in the same way as we always do: + // However, we still have to apply boundary values, in the same way as we + // always do: std::map boundary_values; VectorTools::interpolate_boundary_values (dof_handler, 0, @@ -688,77 +526,46 @@ namespace Step17 MatrixTools::apply_boundary_values (boundary_values, system_matrix, solution, system_rhs, false); - // The last argument to the call just - // performed allows for some - // optimizations. It controls - // whether we should also delete the - // column corresponding to a boundary - // node, or keep it (and passing - // true as above means: yes, do - // eliminate the column). If we do, - // then the resulting matrix will be - // symmetric again if it was before; - // if we don't, then it won't. The - // solution of the resulting system - // should be the same, though. The - // only reason why we may want to - // make the system symmetric again is - // that we would like to use the CG - // method, which only works with - // symmetric matrices. Experience - // tells that CG also works (and - // works almost as well) if we don't - // remove the columns associated with - // boundary nodes, which can be - // easily explained by the special - // structure of the - // non-symmetry. Since eliminating - // columns from dense matrices is not - // expensive, though, we let the - // function do it; not doing so is - // more important if the linear - // system is either non-symmetric - // anyway, or we are using the - // non-local version of this function - // (as in all the other example - // programs before) and want to save - // a few cycles during this - // operation. + // The last argument to the call just performed allows for some + // optimizations. It controls whether we should also delete the column + // corresponding to a boundary node, or keep it (and passing + // true as above means: yes, do eliminate the column). If we + // do, then the resulting matrix will be symmetric again if it was before; + // if we don't, then it won't. The solution of the resulting system should + // be the same, though. The only reason why we may want to make the system + // symmetric again is that we would like to use the CG method, which only + // works with symmetric matrices. Experience tells that CG also works + // (and works almost as well) if we don't remove the columns associated + // with boundary nodes, which can be easily explained by the special + // structure of the non-symmetry. Since eliminating columns from dense + // matrices is not expensive, though, we let the function do it; not doing + // so is more important if the linear system is either non-symmetric + // anyway, or we are using the non-local version of this function (as in + // all the other example programs before) and want to save a few cycles + // during this operation. } - // The fourth step is to solve the linear - // system, with its distributed matrix and - // vector objects. Fortunately, PETSc offers - // a variety of sequential and %parallel - // solvers, for which we have written - // wrappers that have almost the same - // interface as is used for the deal.II - // solvers used in all previous example - // programs. + // The fourth step is to solve the linear system, with its distributed + // matrix and vector objects. 
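For reference, with the pieces that the hunks elide filled in as assumptions (in particular the boundary function, for which a ZeroFunction<dim>(dim) is a plausible choice for this problem, and the template arguments of the map, which were stripped above), the boundary value treatment just discussed reads:

std::map<unsigned int,double> boundary_values;
VectorTools::interpolate_boundary_values (dof_handler,
                                          0,
                                          ZeroFunction<dim>(dim),
                                          boundary_values);
MatrixTools::apply_boundary_values (boundary_values,
                                    system_matrix, solution, system_rhs,
                                    false);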
Fortunately, PETSc offers a variety of + // sequential and %parallel solvers, for which we have written wrappers that + // have almost the same interface as is used for the deal.II solvers used in + // all previous example programs. template unsigned int ElasticProblem::solve () { - // First, we have to set up a convergence - // monitor, and assign it the accuracy to - // which we would like to solve the linear - // system. Next, an actual solver object - // using PETSc's CG solver which also works - // with %parallel (distributed) vectors and - // matrices. And finally a preconditioner; - // we choose to use a block Jacobi - // preconditioner which works by computing - // an incomplete LU decomposition on each - // block (i.e. the chunk of matrix that is - // stored on each MPI process). That means - // that if you run the program with only - // one process, then you will use an ILU(0) - // as a preconditioner, while if it is run - // on many processes, then we will have a - // number of blocks on the diagonal and the - // preconditioner is the ILU(0) of each of - // these blocks. + // First, we have to set up a convergence monitor, and assign it the + // accuracy to which we would like to solve the linear system. Next, an + // actual solver object using PETSc's CG solver which also works with + // %parallel (distributed) vectors and matrices. And finally a + // preconditioner; we choose to use a block Jacobi preconditioner which + // works by computing an incomplete LU decomposition on each block + // (i.e. the chunk of matrix that is stored on each MPI process). That + // means that if you run the program with only one process, then you will + // use an ILU(0) as a preconditioner, while if it is run on many + // processes, then we will have a number of blocks on the diagonal and the + // preconditioner is the ILU(0) of each of these blocks. SolverControl solver_control (solution.size(), 1e-8*system_rhs.l2_norm()); PETScWrappers::SolverCG cg (solver_control, @@ -770,178 +577,109 @@ namespace Step17 cg.solve (system_matrix, solution, system_rhs, preconditioner); - // The next step is to distribute hanging - // node constraints. This is a little - // tricky, since to fill in the value of a - // constrained node you need access to the - // values of the nodes to which it is - // constrained (for example, for a Q1 - // element in 2d, we need access to the two - // nodes on the big side of a hanging node - // face, to compute the value of the - // constrained node in the middle). Since - // PETSc (and, for that matter, the MPI - // model on which it is built) does not - // allow to query the value of another node - // in a simple way if we should need it, - // what we do here is to get a copy of the - // distributed vector where we keep all - // elements locally. This is simple, since - // the deal.II wrappers have a conversion - // constructor for the non-MPI vector - // class: + // The next step is to distribute hanging node constraints. This is a + // little tricky, since to fill in the value of a constrained node you + // need access to the values of the nodes to which it is constrained (for + // example, for a Q1 element in 2d, we need access to the two nodes on the + // big side of a hanging node face, to compute the value of the + // constrained node in the middle). 
Since PETSc (and, for that matter, the + // MPI model on which it is built) does not allow to query the value of + // another node in a simple way if we should need it, what we do here is + // to get a copy of the distributed vector where we keep all elements + // locally. This is simple, since the deal.II wrappers have a conversion + // constructor for the non-MPI vector class: PETScWrappers::Vector localized_solution (solution); - // Then we distribute hanging node - // constraints on this local copy, i.e. we - // compute the values of all constrained - // nodes: + // Then we distribute hanging node constraints on this local copy, i.e. we + // compute the values of all constrained nodes: hanging_node_constraints.distribute (localized_solution); - // Then transfer everything back - // into the global vector. The - // following operation copies those - // elements of the localized - // solution that we store locally - // in the distributed solution, and - // does not touch the other - // ones. Since we do the same - // operation on all processors, we - // end up with a distributed vector - // that has all the constrained - // nodes fixed. + // Then transfer everything back into the global vector. The following + // operation copies those elements of the localized solution that we store + // locally in the distributed solution, and does not touch the other + // ones. Since we do the same operation on all processors, we end up with + // a distributed vector that has all the constrained nodes fixed. solution = localized_solution; - // After this has happened, flush the PETSc - // buffers. This may or may not be strictly - // necessary here (the PETSc documentation - // is not very verbose on these things), - // but certainly doesn't hurt either. + // After this has happened, flush the PETSc buffers. This may or may not + // be strictly necessary here (the PETSc documentation is not very verbose + // on these things), but certainly doesn't hurt either. solution.compress (); - // Finally return the number of iterations - // it took to converge, to allow for some - // output: + // Finally return the number of iterations it took to converge, to allow + // for some output: return solver_control.last_step(); } - // Step five is to output the results we - // computed in this iteration. This is - // actually the same as done in step-8 - // before, with two small differences. First, - // all processes call this function, but not - // all of them need to do the work associated - // with generating output. In fact, they - // shouldn't, since we would try to write to - // the same file multiple times at once. So - // we let only the first job do this, and all - // the other ones idle around during this - // time (or start their work for the next - // iteration, or simply yield their CPUs to - // other jobs that happen to run at the same - // time). The second thing is that we not - // only output the solution vector, but also - // a vector that indicates which subdomain - // each cell belongs to. This will make for - // some nice pictures of partitioned domains. + // Step five is to output the results we computed in this iteration. This is + // actually the same as done in step-8 before, with two small + // differences. First, all processes call this function, but not all of them + // need to do the work associated with generating output. In fact, they + // shouldn't, since we would try to write to the same file multiple times at + // once. 
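To summarize the solve() function above before going further, here is its skeleton with the pieces from the hunks pulled together. This is a sketch only: the preconditioner type and the communicator argument sit in elided lines, so PETScWrappers::PreconditionBlockJacobi and mpi_communicator are assumptions here rather than quotes from the patch.

template <int dim>
unsigned int ElasticProblem<dim>::solve ()
{
  SolverControl solver_control (solution.size(),
                                1e-8*system_rhs.l2_norm());
  PETScWrappers::SolverCG cg (solver_control, mpi_communicator);

  // Block Jacobi: one ILU(0) on each per-process diagonal block.
  PETScWrappers::PreconditionBlockJacobi preconditioner (system_matrix);

  cg.solve (system_matrix, solution, system_rhs, preconditioner);

  // Fix up constrained degrees of freedom on a localized copy, copy the
  // result back into the distributed vector, and flush PETSc's buffers.
  PETScWrappers::Vector localized_solution (solution);
  hanging_node_constraints.distribute (localized_solution);
  solution = localized_solution;
  solution.compress ();

  return solver_control.last_step();
}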
So we let only the first job do this, and all the other ones idle + // around during this time (or start their work for the next iteration, or + // simply yield their CPUs to other jobs that happen to run at the same + // time). The second thing is that we not only output the solution vector, + // but also a vector that indicates which subdomain each cell belongs + // to. This will make for some nice pictures of partitioned domains. // - // In practice, the present implementation of - // the output function is a major bottleneck - // of this program, since generating - // graphical output is expensive and doing so - // only on one process does, of course, not - // scale if we significantly increase the - // number of processes. In effect, this - // function will consume most of the run-time - // if you go to very large numbers of - // unknowns and processes, and real - // applications should limit the number of - // times they generate output through this - // function. + // In practice, the present implementation of the output function is a major + // bottleneck of this program, since generating graphical output is + // expensive and doing so only on one process does, of course, not scale if + // we significantly increase the number of processes. In effect, this + // function will consume most of the run-time if you go to very large + // numbers of unknowns and processes, and real applications should limit the + // number of times they generate output through this function. // - // The solution to this is to have - // each process generate output data - // only for it's own local cells, and - // write them to separate files, one - // file per process. This would - // distribute the work of generating - // the output to all processes - // equally. In a second step, - // separate from running this - // program, we would then take all - // the output files for a given cycle - // and merge these parts into one - // single output file. This has to be - // done sequentially, but can be done - // on a different machine, and should - // be relatively cheap. However, the - // necessary functionality for this - // is not yet implemented in the - // library, and since we are too - // close to the next release, we do - // not want to do such major - // destabilizing changes any - // more. This has been fixed in the - // meantime, though, and a better way - // to do things is explained in the - // step-18 example program. + // The solution to this is to have each process generate output data only + // for it's own local cells, and write them to separate files, one file per + // process. This would distribute the work of generating the output to all + // processes equally. In a second step, separate from running this program, + // we would then take all the output files for a given cycle and merge these + // parts into one single output file. This has to be done sequentially, but + // can be done on a different machine, and should be relatively + // cheap. However, the necessary functionality for this is not yet + // implemented in the library, and since we are too close to the next + // release, we do not want to do such major destabilizing changes any + // more. This has been fixed in the meantime, though, and a better way to do + // things is explained in the step-18 example program. 
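To make the idea of one output file per process concrete: a hypothetical variant (not what this program does, and the file naming is made up for illustration) would encode the process number in the file name, so that no two processes ever write to the same file:

std::ostringstream filename;
filename << "solution-" << cycle
         << "." << this_mpi_process
         << ".gmv";
std::ofstream output (filename.str().c_str());
// ...then let each process write only the cells it owns into 'output',
// and merge the per-process files in a separate postprocessing step.

Restricting output to locally owned cells is exactly what the iterator filters included in the step-18 part of this patch are used for.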
template void ElasticProblem::output_results (const unsigned int cycle) const { - // One point to realize is that when we - // want to generate output on process zero - // only, we need to have access to all - // elements of the solution vector. So we - // need to get a local copy of the - // distributed vector, which is in fact - // simple: + // One point to realize is that when we want to generate output on process + // zero only, we need to have access to all elements of the solution + // vector. So we need to get a local copy of the distributed vector, which + // is in fact simple: const PETScWrappers::Vector localized_solution (solution); - // The thing to notice, however, is that - // we do this localization operation on all - // processes, not only the one that - // actually needs the data. This can't be - // avoided, however, with the communication - // model of MPI: MPI does not have a way to - // query data on another process, both - // sides have to initiate a communication - // at the same time. So even though most of - // the processes do not need the localized - // solution, we have to place the call here - // so that all processes execute it. + // The thing to notice, however, is that we do this localization operation + // on all processes, not only the one that actually needs the data. This + // can't be avoided, however, with the communication model of MPI: MPI + // does not have a way to query data on another process, both sides have + // to initiate a communication at the same time. So even though most of + // the processes do not need the localized solution, we have to place the + // call here so that all processes execute it. // - // (In reality, part of this work can in - // fact be avoided. What we do is send the - // local parts of all processes to all - // other processes. What we would really - // need to do is to initiate an operation - // on all processes where each process - // simply sends its local chunk of data to - // process zero, since this is the only one - // that actually needs it, i.e. we need - // something like a gather operation. PETSc - // can do this, but for simplicity's sake - // we don't attempt to make use of this - // here. We don't, since what we do is not - // very expensive in the grand scheme of - // things: it is one vector communication - // among all processes , which has to be - // compared to the number of communications - // we have to do when solving the linear - // system, setting up the block-ILU for the - // preconditioner, and other operations.) - - // This being done, process zero goes ahead - // with setting up the output file as in - // step-8, and attaching the (localized) - // solution vector to the output - // object:. (The code to generate the output - // file name is stolen and slightly - // modified from step-5, since we expect - // that we can do a number of cycles - // greater than 10, which is the maximum of - // what the code in step-8 could handle.) + // (In reality, part of this work can in fact be avoided. What we do is + // send the local parts of all processes to all other processes. What we + // would really need to do is to initiate an operation on all processes + // where each process simply sends its local chunk of data to process + // zero, since this is the only one that actually needs it, i.e. we need + // something like a gather operation. PETSc can do this, but for + // simplicity's sake we don't attempt to make use of this here. 
We don't, + // since what we do is not very expensive in the grand scheme of things: + // it is one vector communication among all processes , which has to be + // compared to the number of communications we have to do when solving the + // linear system, setting up the block-ILU for the preconditioner, and + // other operations.) + + // This being done, process zero goes ahead with setting up the output + // file as in step-8, and attaching the (localized) solution vector to the + // output object:. (The code to generate the output file name is stolen + // and slightly modified from step-5, since we expect that we can do a + // number of cycles greater than 10, which is the maximum of what the code + // in step-8 could handle.) if (this_mpi_process == 0) { std::ostringstream filename; @@ -973,33 +711,26 @@ namespace Step17 data_out.add_data_vector (localized_solution, solution_names); - // The only thing we do here - // additionally is that we also output - // one value per cell indicating which - // subdomain (i.e. MPI process) it - // belongs to. This requires some - // conversion work, since the data the - // library provides us with is not the - // one the output class expects, but - // this is not difficult. First, set up - // a vector of integers, one per cell, - // that is then filled by the number of - // subdomain each cell is in: + // The only thing we do here additionally is that we also output one + // value per cell indicating which subdomain (i.e. MPI process) it + // belongs to. This requires some conversion work, since the data the + // library provides us with is not the one the output class expects, + // but this is not difficult. First, set up a vector of integers, one + // per cell, that is then filled by the number of subdomain each cell + // is in: std::vector partition_int (triangulation.n_active_cells()); GridTools::get_subdomain_association (triangulation, partition_int); - // Then convert this integer vector - // into a floating point vector just as - // the output functions want to see: + // Then convert this integer vector into a floating point vector just + // as the output functions want to see: const Vector partitioning(partition_int.begin(), partition_int.end()); // And finally add this vector as well: data_out.add_data_vector (partitioning, "partitioning"); - // This all being done, generate the - // intermediate format and write it out - // in GMV output format: + // This all being done, generate the intermediate format and write it + // out in GMV output format: data_out.build_patches (); data_out.write_gmv (output); } @@ -1007,55 +738,36 @@ namespace Step17 - // The sixth step is to take the solution - // just computed, and evaluate some kind of - // refinement indicator to refine the - // mesh. The problem is basically the same as - // with distributing hanging node - // constraints: in order to compute the error - // indicator, we need access to all elements - // of the solution vector. We then compute - // the indicators for the cells that belong - // to the present process, but then we need - // to distribute the refinement indicators - // into a distributed vector so that all - // processes have the values of the - // refinement indicator for all cells. But - // then, in order for each process to refine - // its copy of the mesh, they need to have - // acces to all refinement indicators - // locally, so they have to copy the global - // vector back into a local one. 
That's a - // little convoluted, but thinking about it - // quite straightforward nevertheless. So - // here's how we do it: + // The sixth step is to take the solution just computed, and evaluate some + // kind of refinement indicator to refine the mesh. The problem is basically + // the same as with distributing hanging node constraints: in order to + // compute the error indicator, we need access to all elements of the + // solution vector. We then compute the indicators for the cells that belong + // to the present process, but then we need to distribute the refinement + // indicators into a distributed vector so that all processes have the + // values of the refinement indicator for all cells. But then, in order for + // each process to refine its copy of the mesh, they need to have acces to + // all refinement indicators locally, so they have to copy the global vector + // back into a local one. That's a little convoluted, but thinking about it + // quite straightforward nevertheless. So here's how we do it: template void ElasticProblem::refine_grid () { - // So, first part: get a local copy of the - // distributed solution vector. This is - // necessary since the error estimator - // needs to get at the value of neighboring - // cells even if they do not belong to the - // subdomain associated with the present - // MPI process: + // So, first part: get a local copy of the distributed solution + // vector. This is necessary since the error estimator needs to get at the + // value of neighboring cells even if they do not belong to the subdomain + // associated with the present MPI process: const PETScWrappers::Vector localized_solution (solution); - // Second part: set up a vector of error - // indicators for all cells and let the - // Kelly class compute refinement - // indicators for all cells belonging to - // the present subdomain/process. Note that - // the last argument of the call indicates - // which subdomain we are interested - // in. The three arguments before it are - // various other default arguments that one - // usually doesn't need (and doesn't state - // values for, but rather uses the - // defaults), but which we have to state - // here explicitly since we want to modify - // the value of a following argument - // (i.e. the one indicating the subdomain): + // Second part: set up a vector of error indicators for all cells and let + // the Kelly class compute refinement indicators for all cells belonging + // to the present subdomain/process. Note that the last argument of the + // call indicates which subdomain we are interested in. The three + // arguments before it are various other default arguments that one + // usually doesn't need (and doesn't state values for, but rather uses the + // defaults), but which we have to state here explicitly since we want to + // modify the value of a following argument (i.e. the one indicating the + // subdomain): Vector local_error_per_cell (triangulation.n_active_cells()); KellyErrorEstimator::estimate (dof_handler, QGauss(2), @@ -1067,68 +779,43 @@ namespace Step17 multithread_info.n_default_threads, this_mpi_process); - // Now all processes have computed error - // indicators for their own cells and - // stored them in the respective elements - // of the local_error_per_cell - // vector. The elements of this vector for - // cells not on the present process are - // zero. 
However, since all processes have - // a copy of a copy of the entire - // triangulation and need to keep these - // copies in synch, they need the values of - // refinement indicators for all cells of - // the triangulation. Thus, we need to - // distribute our results. We do this by - // creating a distributed vector where each - // process has its share, and sets the - // elements it has computed. We will then - // later generate a local sequential copy - // of this distributed vector to allow each - // process to access all elements of this + // Now all processes have computed error indicators for their own cells + // and stored them in the respective elements of the + // local_error_per_cell vector. The elements of this vector + // for cells not on the present process are zero. However, since all + // processes have a copy of a copy of the entire triangulation and need to + // keep these copies in synch, they need the values of refinement + // indicators for all cells of the triangulation. Thus, we need to + // distribute our results. We do this by creating a distributed vector + // where each process has its share, and sets the elements it has + // computed. We will then later generate a local sequential copy of this + // distributed vector to allow each process to access all elements of this // vector. // - // So in the first step, we need to set up - // a %parallel vector. For simplicity, every - // process will own a chunk with as many - // elements as this process owns cells, so - // that the first chunk of elements is - // stored with process zero, the next chunk - // with process one, and so on. It is - // important to remark, however, that these - // elements are not necessarily the ones we - // will write to. This is so, since the - // order in which cells are arranged, - // i.e. the order in which the elements of - // the vector correspond to cells, is not - // ordered according to the subdomain these - // cells belong to. In other words, if on - // this process we compute indicators for - // cells of a certain subdomain, we may - // write the results to more or less random - // elements if the distributed vector, that - // do not necessarily lie within the chunk - // of vector we own on the present - // process. They will subsequently have to - // be copied into another process's memory - // space then, an operation that PETSc does - // for us when we call the compress - // function. This inefficiency could be - // avoided with some more code, but we - // refrain from it since it is not a major - // factor in the program's total runtime. + // So in the first step, we need to set up a %parallel vector. For + // simplicity, every process will own a chunk with as many elements as + // this process owns cells, so that the first chunk of elements is stored + // with process zero, the next chunk with process one, and so on. It is + // important to remark, however, that these elements are not necessarily + // the ones we will write to. This is so, since the order in which cells + // are arranged, i.e. the order in which the elements of the vector + // correspond to cells, is not ordered according to the subdomain these + // cells belong to. In other words, if on this process we compute + // indicators for cells of a certain subdomain, we may write the results + // to more or less random elements if the distributed vector, that do not + // necessarily lie within the chunk of vector we own on the present + // process. 
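The next sentences describe how these entries travel between processes; condensed into code (the element type and the distributed vector's constructor arguments sit in elided hunks and are assumptions, while local_error_per_cell is the vector filled by the Kelly estimator above), the whole exchange looks like this:

const unsigned int n_local_cells
  = GridTools::count_cells_with_subdomain_association (triangulation,
                                                       this_mpi_process);
PETScWrappers::MPI::Vector
  distributed_all_errors (mpi_communicator,
                          triangulation.n_active_cells(),
                          n_local_cells);

// Copy the locally computed (nonzero) indicators into the distributed
// vector, then let PETSc ship them to whichever process owns them.
for (unsigned int i=0; i<local_error_per_cell.size(); ++i)
  if (local_error_per_cell(i) != 0)
    distributed_all_errors(i) = local_error_per_cell(i);
distributed_all_errors.compress ();

// Finally, obtain a sequential copy that every process can read in full:
const Vector<float> localized_all_errors (distributed_all_errors);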
They will subsequently have to be copied into another + // process's memory space then, an operation that PETSc does for us when + // we call the compress function. This inefficiency could be + // avoided with some more code, but we refrain from it since it is not a + // major factor in the program's total runtime. // - // So here's how we do it: count how many - // cells belong to this process, set up a - // distributed vector with that many - // elements to be stored locally, and copy - // over the elements we computed locally, - // then compress the result. In fact, we - // really only copy the elements that are - // nonzero, so we may miss a few that we - // computed to zero, but this won't hurt - // since the original values of the vector - // is zero anyway. + // So here's how we do it: count how many cells belong to this process, + // set up a distributed vector with that many elements to be stored + // locally, and copy over the elements we computed locally, then compress + // the result. In fact, we really only copy the elements that are nonzero, + // so we may miss a few that we computed to zero, but this won't hurt + // since the original values of the vector are zero anyway. const unsigned int n_local_cells = GridTools::count_cells_with_subdomain_association (triangulation, this_mpi_process); @@ -1143,14 +830,12 @@ namespace Step17 distributed_all_errors.compress (); - // So now we have this distributed vector - // out there that contains the refinement - // indicators for all cells. To use it, we - // need to obtain a local copy... + // So now we have this distributed vector out there that contains the + // refinement indicators for all cells. To use it, we need to obtain a + // local copy... const Vector localized_all_errors (distributed_all_errors); - // ...which we can the subsequently use to - // finally refine the grid: + // ...which we can then subsequently use to finally refine the grid: GridRefinement::refine_and_coarsen_fixed_number (triangulation, localized_all_errors, 0.3, 0.03); @@ -1159,14 +844,12 @@ namespace Step17 - // Lastly, here is the driver function. It is - // almost unchanged from step-8, with the - // exception that we replace std::cout by - // the pcout stream. Apart from this, the - // only other cosmetic change is that we - // output how many degrees of freedom there - // are per process, and how many iterations - // it took for the linear solver to converge: + // Lastly, here is the driver function. It is almost unchanged from step-8, + // with the exception that we replace std::cout by the + // pcout stream. Apart from this, the only other cosmetic + // change is that we output how many degrees of freedom there are per + // process, and how many iterations it took for the linear solver to + // converge: template void ElasticProblem::run () { @@ -1210,11 +893,9 @@ namespace Step17 } -// So that's it, almost. main() works the -// same way as most of the main functions in -// the other example programs, i.e. it -// delegates work to the run function of -// a master object, and only wraps everything +// So that's it, almost. main() works the same way as most of the +// main functions in the other example programs, i.e.
it delegates work to the +// run function of a master object, and only wraps everything // into some code to catch exceptions: int main (int argc, char **argv) { @@ -1223,24 +904,16 @@ int main (int argc, char **argv) using namespace dealii; using namespace Step17; - // Here is the only real difference: - // PETSc requires that we initialize it - // at the beginning of the program, and - // un-initialize it at the end. The - // class MPI_InitFinalize takes care - // of that. The original code - // sits in between, enclosed in braces - // to make sure that the - // elastic_problem variable goes - // out of scope (and is destroyed) - // before PETSc is closed with - // PetscFinalize. (If we wouldn't - // use braces, the destructor of - // elastic_problem would run after - // PetscFinalize; since the - // destructor involves calls to PETSc - // functions, we would get strange - // error messages from PETSc.) + // Here is the only real difference: PETSc requires that we initialize + // it at the beginning of the program, and un-initialize it at the + // end. The class MPI_InitFinalize takes care of that. The original code + // sits in between, enclosed in braces to make sure that the + // elastic_problem variable goes out of scope (and is + // destroyed) before PETSc is closed with + // PetscFinalize. (If we wouldn't use braces, the + // destructor of elastic_problem would run after + // PetscFinalize; since the destructor involves calls to + // PETSc functions, we would get strange error messages from PETSc.) Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv); { diff --git a/deal.II/examples/step-18/step-18.cc b/deal.II/examples/step-18/step-18.cc index e1528caebb..463c776187 100644 --- a/deal.II/examples/step-18/step-18.cc +++ b/deal.II/examples/step-18/step-18.cc @@ -10,9 +10,8 @@ /* further information on this license. */ -// First the usual list of header files that -// have already been used in previous example -// programs: +// First the usual list of header files that have already been used in +// previous example programs: #include #include #include @@ -46,18 +45,14 @@ #include #include -// And here the only two new things among the -// header files: an include file in which -// symmetric tensors of rank 2 and 4 are -// implemented, as introduced in the -// introduction: +// And here the only two new things among the header files: an include file in +// which symmetric tensors of rank 2 and 4 are implemented, as introduced in +// the introduction: #include -// And a header that implements filters for -// iterators looping over all cells. We will -// use this when selecting only those cells -// for output that are owned by the present -// process in a %parallel program: +// And a header that implements filters for iterators looping over all +// cells. We will use this when selecting only those cells for output that are +// owned by the present process in a %parallel program: #include // This is then simply C++ again: @@ -66,40 +61,28 @@ #include #include -// The last step is as in all -// previous programs: +// The last step is as in all previous programs: namespace Step18 { using namespace dealii; // @sect3{The PointHistory class} - // As was mentioned in the introduction, we - // have to store the old stress in - // quadrature point so that we can compute - // the residual forces at this point during - // the next time step. 
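Returning for a moment to step-17's main() shown above: its scoping pattern can be sketched as follows (the try/catch blocks of the real program are omitted, and the template argument of ElasticProblem is filled in purely as an illustration):

int main (int argc, char **argv)
{
  using namespace dealii;
  using namespace Step17;

  Utilities::MPI::MPI_InitFinalize mpi_initialization (argc, argv);
  {
    // The braces make sure elastic_problem is destroyed -- and with it all
    // PETSc objects it owns -- before MPI_InitFinalize calls PetscFinalize.
    ElasticProblem<2> elastic_problem;
    elastic_problem.run ();
  }
}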
This alone would not - // warrant a structure with only one - // member, but in more complicated - // applications, we would have to store - // more information in quadrature points as - // well, such as the history variables of - // plasticity, etc. In essence, we have to - // store everything that affects the - // present state of the material here, - // which in plasticity is determined by the - // deformation history variables. + // As was mentioned in the introduction, we have to store the old stress in + // quadrature point so that we can compute the residual forces at this point + // during the next time step. This alone would not warrant a structure with + // only one member, but in more complicated applications, we would have to + // store more information in quadrature points as well, such as the history + // variables of plasticity, etc. In essence, we have to store everything + // that affects the present state of the material here, which in plasticity + // is determined by the deformation history variables. // - // We will not give this class any - // meaningful functionality beyond being - // able to store data, i.e. there are no - // constructors, destructors, or other - // member functions. In such cases of - // `dumb' classes, we usually opt to - // declare them as struct rather than - // class, to indicate that they are - // closer to C-style structures than - // C++-style classes. + // We will not give this class any meaningful functionality beyond being + // able to store data, i.e. there are no constructors, destructors, or other + // member functions. In such cases of `dumb' classes, we usually opt to + // declare them as struct rather than class, to + // indicate that they are closer to C-style structures than C++-style + // classes. template struct PointHistory { @@ -109,18 +92,13 @@ namespace Step18 // @sect3{The stress-strain tensor} - // Next, we define the linear relationship - // between the stress and the strain in - // elasticity. It is given by a tensor of - // rank 4 that is usually written in the - // form $C_{ijkl} = \mu (\delta_{ik} - // \delta_{jl} + \delta_{il} \delta_{jk}) + - // \lambda \delta_{ij} \delta_{kl}$. This - // tensor maps symmetric tensor of rank 2 - // to symmetric tensors of rank 2. A - // function implementing its creation for - // given values of the Lame constants - // lambda and mu is straightforward: + // Next, we define the linear relationship between the stress and the strain + // in elasticity. It is given by a tensor of rank 4 that is usually written + // in the form $C_{ijkl} = \mu (\delta_{ik} \delta_{jl} + \delta_{il} + // \delta_{jk}) + \lambda \delta_{ij} \delta_{kl}$. This tensor maps + // symmetric tensor of rank 2 to symmetric tensors of rank 2. A function + // implementing its creation for given values of the Lame constants lambda + // and mu is straightforward: template SymmetricTensor<4,dim> get_stress_strain_tensor (const double lambda, const double mu) @@ -136,107 +114,61 @@ namespace Step18 return tmp; } - // With this function, we will - // define a static member variable - // of the main class below that - // will be used throughout the - // program as the stress-strain - // tensor. Note that - // in more elaborate programs, this will - // probably be a member variable of some - // class instead, or a function that - // returns the stress-strain relationship - // depending on other input. 
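The body of get_stress_strain_tensor falls into the elided part of the hunk above; a straightforward implementation of the formula quoted in the comment, with the Kronecker deltas written out as comparisons, would look like this (a sketch, not a quote from the patch):

template <int dim>
SymmetricTensor<4,dim>
get_stress_strain_tensor (const double lambda, const double mu)
{
  SymmetricTensor<4,dim> tmp;
  for (unsigned int i=0; i<dim; ++i)
    for (unsigned int j=0; j<dim; ++j)
      for (unsigned int k=0; k<dim; ++k)
        for (unsigned int l=0; l<dim; ++l)
          tmp[i][j][k][l] = (((i==k) && (j==l) ? mu : 0.0) +
                             ((i==l) && (j==k) ? mu : 0.0) +
                             ((i==j) && (k==l) ? lambda : 0.0));
  return tmp;
}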
For example in - // damage theory models, the Lame constants - // are considered a function of the prior - // stress/strain history of a - // point. Conversely, in plasticity the - // form of the stress-strain tensor is - // modified if the material has reached the - // yield stress in a certain point, and - // possibly also depending on its prior - // history. + // With this function, we will define a static member variable of the main + // class below that will be used throughout the program as the stress-strain + // tensor. Note that in more elaborate programs, this will probably be a + // member variable of some class instead, or a function that returns the + // stress-strain relationship depending on other input. For example in + // damage theory models, the Lame constants are considered a function of the + // prior stress/strain history of a point. Conversely, in plasticity the + // form of the stress-strain tensor is modified if the material has reached + // the yield stress in a certain point, and possibly also depending on its + // prior history. // - // In the present program, however, we - // assume that the material is completely - // elastic and linear, and a constant - // stress-strain tensor is sufficient for - // our present purposes. + // In the present program, however, we assume that the material is + // completely elastic and linear, and a constant stress-strain tensor is + // sufficient for our present purposes. // @sect3{Auxiliary functions} - // Before the rest of the program, - // here are a few functions that we - // need as tools. These are small - // functions that are called in - // inner loops, so we mark them as - // inline. + // Before the rest of the program, here are a few functions that we need as + // tools. These are small functions that are called in inner loops, so we + // mark them as inline. // - // The first one computes the - // symmetric strain tensor for - // shape function shape_func at - // quadrature point q_point by - // forming the symmetric gradient - // of this shape function. We need - // that when we want to form the - // matrix, for example. + // The first one computes the symmetric strain tensor for shape function + // shape_func at quadrature point q_point by + // forming the symmetric gradient of this shape function. We need that when + // we want to form the matrix, for example. // - // We should note that in previous - // examples where we have treated - // vector-valued problems, we have - // always asked the finite element - // object in which of the vector - // component the shape function is - // actually non-zero, and thereby - // avoided to compute any terms - // that we could prove were zero - // anyway. For this, we used the - // fe.system_to_component_index - // function that returns in which - // component a shape function was - // zero, and also that the - // fe_values.shape_value and - // fe_values.shape_grad - // functions only returned the - // value and gradient of the single - // non-zero component of a shape - // function if this is a - // vector-valued element. + // We should note that in previous examples where we have treated + // vector-valued problems, we have always asked the finite element object in + // which of the vector component the shape function is actually non-zero, + // and thereby avoided to compute any terms that we could prove were zero + // anyway. 
For this, we used the fe.system_to_component_index + // function that returns in which component a shape function was zero, and + // also that the fe_values.shape_value and + // fe_values.shape_grad functions only returned the value and + // gradient of the single non-zero component of a shape function if this is + // a vector-valued element. // - // This was an optimization, and if - // it isn't terribly time critical, - // we can get away with a simpler - // technique: just ask the - // fe_values for the value or - // gradient of a given component of - // a given shape function at a - // given quadrature point. This is - // what the - // fe_values.shape_grad_component(shape_func,q_point,i) - // call does: return the full - // gradient of the ith - // component of shape function - // shape_func at quadrature - // point q_point. If a certain - // component of a certain shape - // function is always zero, then - // this will simply always return - // zero. + // This was an optimization, and if it isn't terribly time critical, we can + // get away with a simpler technique: just ask the fe_values + // for the value or gradient of a given component of a given shape function + // at a given quadrature point. This is what the + // fe_values.shape_grad_component(shape_func,q_point,i) call + // does: return the full gradient of the ith component of shape + // function shape_func at quadrature point + // q_point. If a certain component of a certain shape function + // is always zero, then this will simply always return zero. // - // As mentioned, using - // fe_values.shape_grad_component - // instead of the combination of - // fe.system_to_component_index - // and fe_values.shape_grad may - // be less efficient, but its - // implementation is optimized for - // such cases and shouldn't be a - // big slowdown. We demonstrate the - // technique here since it is so - // much simpler and - // straightforward. + // As mentioned, using fe_values.shape_grad_component instead + // of the combination of fe.system_to_component_index and + // fe_values.shape_grad may be less efficient, but its + // implementation is optimized for such cases and shouldn't be a big + // slowdown. We demonstrate the technique here since it is so much simpler + // and straightforward. template inline SymmetricTensor<2,dim> @@ -244,36 +176,23 @@ namespace Step18 const unsigned int shape_func, const unsigned int q_point) { - // Declare a temporary that will - // hold the return value: + // Declare a temporary that will hold the return value: SymmetricTensor<2,dim> tmp; - // First, fill diagonal terms - // which are simply the - // derivatives in direction i - // of the i component of the - // vector-valued shape - // function: + // First, fill diagonal terms which are simply the derivatives in + // direction i of the i component of the + // vector-valued shape function: for (unsigned int i=0; iSymmetricTensor class - // makes sure that at least to - // the outside the symmetric - // entries are also filled (in - // practice, the class of course - // stores only one copy). Here, - // we have picked the upper right - // half of the tensor, but the - // lower left one would have been - // just as good: + // Then fill the rest of the strain tensor. 
Note that since the tensor is + // symmetric, we only have to compute one half (here: the upper right + // corner) of the off-diagonal elements, and the implementation of the + // SymmetricTensor class makes sure that at least to the + // outside the symmetric entries are also filled (in practice, the class + // of course stores only one copy). Here, we have picked the upper right + // half of the tensor, but the lower left one would have been just as + // good: for (unsigned int i=0; ife_values.get_function_grads - // function allows you to extract - // the gradients of each component - // of your solution field at a - // quadrature point. It returns - // this as a vector of rank-1 - // tensors: one rank-1 tensor - // (gradient) per vector component - // of the solution. From this we - // have to reconstruct the - // (symmetric) strain tensor by - // transforming the data storage - // format and symmetrization. We do - // this in the same way as above, - // i.e. we avoid a few computations - // by filling first the diagonal - // and then only one half of the - // symmetric tensor (the - // SymmetricTensor class makes - // sure that it is sufficient to - // write only one of the two + // The second function does something very similar (and therefore is given + // the same name): compute the symmetric strain tensor from the gradient of + // a vector-valued field. If you already have a solution field, the + // fe_values.get_function_grads function allows you to extract + // the gradients of each component of your solution field at a quadrature + // point. It returns this as a vector of rank-1 tensors: one rank-1 tensor + // (gradient) per vector component of the solution. From this we have to + // reconstruct the (symmetric) strain tensor by transforming the data + // storage format and symmetrization. We do this in the same way as above, + // i.e. we avoid a few computations by filling first the diagonal and then + // only one half of the symmetric tensor (the SymmetricTensor + // class makes sure that it is sufficient to write only one of the two // symmetric components). // - // Before we do this, though, we - // make sure that the input has the - // kind of structure we expect: - // that is that there are dim - // vector components, i.e. one - // displacement component for each - // coordinate direction. We test - // this with the Assert macro - // that will simply abort our - // program if the condition is not - // met. + // Before we do this, though, we make sure that the input has the kind of + // structure we expect: that is that there are dim vector + // components, i.e. one displacement component for each coordinate + // direction. We test this with the Assert macro that will + // simply abort our program if the condition is not met. template inline SymmetricTensor<2,dim> @@ -345,53 +241,35 @@ namespace Step18 } - // Finally, below we will need a function - // that computes the rotation matrix - // induced by a displacement at a given - // point. In fact, of course, the - // displacement at a single point only has - // a direction and a magnitude, it is the - // change in direction and magnitude that - // induces rotations. In effect, the - // rotation matrix can be computed from the - // gradients of a displacement, or, more - // specifically, from the curl. + // Finally, below we will need a function that computes the rotation matrix + // induced by a displacement at a given point. 
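Before that rotation matrix, here is a clean sketch of the shape-function variant of the strain computation described above, whose loop bodies were garbled in the hunk: first the diagonal, then the upper-right half of the tensor. The function name, get_strain, sits in an elided line and is used here only for illustration.

template <int dim>
inline
SymmetricTensor<2,dim>
get_strain (const FEValues<dim> &fe_values,
            const unsigned int   shape_func,
            const unsigned int   q_point)
{
  SymmetricTensor<2,dim> tmp;

  // Diagonal entries: the derivative in direction i of the i-th component.
  for (unsigned int i=0; i<dim; ++i)
    tmp[i][i] = fe_values.shape_grad_component (shape_func, q_point, i)[i];

  // Upper-right half; SymmetricTensor mirrors it to the lower-left half.
  for (unsigned int i=0; i<dim; ++i)
    for (unsigned int j=i+1; j<dim; ++j)
      tmp[i][j] = (fe_values.shape_grad_component (shape_func, q_point, i)[j]
                   +
                   fe_values.shape_grad_component (shape_func, q_point, j)[i]) / 2;

  return tmp;
}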
In fact, of course, the + // displacement at a single point only has a direction and a magnitude, it + // is the change in direction and magnitude that induces rotations. In + // effect, the rotation matrix can be computed from the gradients of a + // displacement, or, more specifically, from the curl. // - // The formulas by which the rotation - // matrices are determined are a little - // awkward, especially in 3d. For 2d, there - // is a simpler way, so we implement this - // function twice, once for 2d and once for - // 3d, so that we can compile and use the - // program in both space dimensions if so - // desired -- after all, deal.II is all - // about dimension independent programming - // and reuse of algorithm thoroughly tested - // with cheap computations in 2d, for the - // more expensive computations in 3d. Here - // is one case, where we have to implement - // different algorithms for 2d and 3d, but - // then can write the rest of the program - // in a way that is independent of the - // space dimension. + // The formulas by which the rotation matrices are determined are a little + // awkward, especially in 3d. For 2d, there is a simpler way, so we + // implement this function twice, once for 2d and once for 3d, so that we + // can compile and use the program in both space dimensions if so desired -- + // after all, deal.II is all about dimension independent programming and + // reuse of algorithm thoroughly tested with cheap computations in 2d, for + // the more expensive computations in 3d. Here is one case, where we have to + // implement different algorithms for 2d and 3d, but then can write the rest + // of the program in a way that is independent of the space dimension. // - // So, without further ado to the 2d - // implementation: + // So, without further ado to the 2d implementation: Tensor<2,2> get_rotation_matrix (const std::vector > &grad_u) { - // First, compute the curl of the - // velocity field from the - // gradients. Note that we are in 2d, so - // the rotation is a scalar: + // First, compute the curl of the velocity field from the gradients. Note + // that we are in 2d, so the rotation is a scalar: const double curl = (grad_u[1][0] - grad_u[0][1]); - // From this, compute the angle of - // rotation: + // From this, compute the angle of rotation: const double angle = std::atan (curl); - // And from this, build the antisymmetric - // rotation matrix: + // And from this, build the antisymmetric rotation matrix: const double t[2][2] = {{ cos(angle), sin(angle) }, {-sin(angle), cos(angle) } }; @@ -403,37 +281,27 @@ namespace Step18 Tensor<2,3> get_rotation_matrix (const std::vector > &grad_u) { - // Again first compute the curl of the - // velocity field. This time, it is a + // Again first compute the curl of the velocity field. This time, it is a // real vector: const Point<3> curl (grad_u[2][1] - grad_u[1][2], grad_u[0][2] - grad_u[2][0], grad_u[1][0] - grad_u[0][1]); - // From this vector, using its magnitude, - // compute the tangent of the angle of - // rotation, and from it the actual - // angle: + // From this vector, using its magnitude, compute the tangent of the angle + // of rotation, and from it the actual angle: const double tan_angle = std::sqrt(curl*curl); const double angle = std::atan (tan_angle); - // Now, here's one problem: if the angle - // of rotation is too small, that means - // that there is no rotation going on - // (for example a translational - // motion). In that case, the rotation - // matrix is the identity matrix. 
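Assembled in one piece, the 2d variant shown a few lines up is short enough to restate as a sketch; the element type of grad_u and the return statement fall into stripped or elided parts of the patch and are assumptions here:

Tensor<2,2>
get_rotation_matrix (const std::vector<Tensor<1,2> > &grad_u)
{
  // In 2d the curl of the displacement increment is a scalar...
  const double curl  = (grad_u[1][0] - grad_u[0][1]);
  // ...from which we get the rotation angle and the rotation matrix:
  const double angle = std::atan (curl);

  const double t[2][2] = {{  std::cos(angle), std::sin(angle) },
                          { -std::sin(angle), std::cos(angle) }};
  return Tensor<2,2>(t);
}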
+ // Now, here's one problem: if the angle of rotation is too small, that + // means that there is no rotation going on (for example a translational + // motion). In that case, the rotation matrix is the identity matrix. // - // The reason why we stress that is that - // in this case we have that - // tan_angle==0. Further down, we - // need to divide by that number in the - // computation of the axis of rotation, - // and we would get into trouble when - // dividing doing so. Therefore, let's - // shortcut this and simply return the - // identity matrix if the angle of - // rotation is really small: + // The reason why we stress that is that in this case we have that + // tan_angle==0. Further down, we need to divide by that + // number in the computation of the axis of rotation, and we would get + // into trouble when dividing doing so. Therefore, let's shortcut this and + // simply return the identity matrix if the angle of rotation is really + // small: if (angle < 1e-9) { static const double rotation[3][3] @@ -442,16 +310,12 @@ namespace Step18 return rot; } - // Otherwise compute the real rotation - // matrix. The algorithm for this is not - // exactly obvious, but can be found in a - // number of books, particularly on - // computer games where rotation is a - // very frequent operation. Online, you - // can find a description at - // http://www.makegames.com/3drotation/ - // and (this particular form, with the - // signs as here) at + // Otherwise compute the real rotation matrix. The algorithm for this is + // not exactly obvious, but can be found in a number of books, + // particularly on computer games where rotation is a very frequent + // operation. Online, you can find a description at + // http://www.makegames.com/3drotation/ and (this particular form, with + // the signs as here) at // http://www.gamedev.net/reference/articles/article1199.asp: const double c = std::cos(angle); const double s = std::sin(angle); @@ -482,27 +346,18 @@ namespace Step18 // @sect3{The TopLevel class} - // This is the main class of the - // program. Since the namespace already - // indicates what problem we are solving, - // let's call it by what it does: it - // directs the flow of the program, i.e. it - // is the toplevel driver. + // This is the main class of the program. Since the namespace already + // indicates what problem we are solving, let's call it by what it does: it + // directs the flow of the program, i.e. it is the toplevel driver. // - // The member variables of this class are - // essentially as before, i.e. it has to - // have a triangulation, a DoF handler and - // associated objects such as constraints, - // variables that describe the linear - // system, etc. There are a good number of - // more member functions now, which we will - // explain below. + // The member variables of this class are essentially as before, i.e. it has + // to have a triangulation, a DoF handler and associated objects such as + // constraints, variables that describe the linear system, etc. There are a + // good number of more member functions now, which we will explain below. // - // The external interface of the class, - // however, is unchanged: it has a public - // constructor and desctructor, and it has - // a run function that initiated all - // the work. + // The external interface of the class, however, is unchanged: it has a + // public constructor and desctructor, and it has a run + // function that initiated all the work. 
template class TopLevel { @@ -512,22 +367,14 @@ namespace Step18 void run (); private: - // The private interface is more - // extensive than in step-17. First, we - // obviously need functions that create - // the initial mesh, set up the - // variables that describe the linear - // system on the present mesh - // (i.e. matrices and vectors), and - // then functions that actually - // assemble the system, direct what has - // to be solved in each time step, a - // function that solves the linear - // system that arises in each timestep - // (and returns the number of - // iterations it took), and finally - // output the solution vector on the - // currect mesh: + // The private interface is more extensive than in step-17. First, we + // obviously need functions that create the initial mesh, set up the + // variables that describe the linear system on the present mesh + // (i.e. matrices and vectors), and then functions that actually assemble + // the system, direct what has to be solved in each time step, a function + // that solves the linear system that arises in each timestep (and returns + // the number of iterations it took), and finally output the solution + // vector on the currect mesh: void create_coarse_grid (); void setup_system (); @@ -540,65 +387,42 @@ namespace Step18 void output_results () const; - // All, except for the first two, of - // these functions are called in each - // timestep. Since the first time step - // is a little special, we have - // separate functions that describe - // what has to happen in a timestep: - // one for the first, and one for all - // following timesteps: + // All, except for the first two, of these functions are called in each + // timestep. Since the first time step is a little special, we have + // separate functions that describe what has to happen in a timestep: one + // for the first, and one for all following timesteps: void do_initial_timestep (); void do_timestep (); - // Then we need a whole bunch of - // functions that do various - // things. The first one refines the - // initial grid: we start on the coarse - // grid with a pristine state, solve - // the problem, then look at it and - // refine the mesh accordingly, and - // start the same process over again, - // again with a pristine state. Thus, - // refining the initial mesh is - // somewhat simpler than refining a - // grid between two successive time - // steps, since it does not involve - // transferring data from the old to - // the new triangulation, in particular - // the history data that is stored in - // each quadrature point. + // Then we need a whole bunch of functions that do various things. The + // first one refines the initial grid: we start on the coarse grid with a + // pristine state, solve the problem, then look at it and refine the mesh + // accordingly, and start the same process over again, again with a + // pristine state. Thus, refining the initial mesh is somewhat simpler + // than refining a grid between two successive time steps, since it does + // not involve transferring data from the old to the new triangulation, in + // particular the history data that is stored in each quadrature point. void refine_initial_grid (); - // At the end of each time step, we - // want to move the mesh vertices - // around according to the incremental - // displacement computed in this time - // step. 
This is the function in which - // this is done: + // At the end of each time step, we want to move the mesh vertices around + // according to the incremental displacement computed in this time + // step. This is the function in which this is done: void move_mesh (); - // Next are two functions that handle - // the history variables stored in each - // quadrature point. The first one is - // called before the first timestep to - // set up a pristine state for the - // history variables. It only works on - // those quadrature points on cells - // that belong to the present - // processor: + // Next are two functions that handle the history variables stored in each + // quadrature point. The first one is called before the first timestep to + // set up a pristine state for the history variables. It only works on + // those quadrature points on cells that belong to the present processor: void setup_quadrature_point_history (); - // The second one updates the history - // variables at the end of each + // The second one updates the history variables at the end of each // timestep: void update_quadrature_point_history (); - // After the member functions, here are - // the member variables. The first ones - // have all been discussed in more - // detail in previous example programs: + // After the member functions, here are the member variables. The first + // ones have all been discussed in more detail in previous example + // programs: Triangulation triangulation; FESystem fe; @@ -607,126 +431,77 @@ namespace Step18 ConstraintMatrix hanging_node_constraints; - // One difference of this program is - // that we declare the quadrature - // formula in the class - // declaration. The reason is that in - // all the other programs, it didn't do - // much harm if we had used different - // quadrature formulas when computing - // the matrix and the righ hand side, - // for example. However, in the present - // case it does: we store information - // in the quadrature points, so we - // have to make sure all parts of the - // program agree on where they are and - // how many there are on each - // cell. Thus, let us first declare the - // quadrature formula that will be used - // throughout... + // One difference of this program is that we declare the quadrature + // formula in the class declaration. The reason is that in all the other + // programs, it didn't do much harm if we had used different quadrature + // formulas when computing the matrix and the righ hand side, for + // example. However, in the present case it does: we store information in + // the quadrature points, so we have to make sure all parts of the program + // agree on where they are and how many there are on each cell. Thus, let + // us first declare the quadrature formula that will be used throughout... const QGauss quadrature_formula; - // ... and then also have a vector of - // history objects, one per quadrature - // point on those cells for which we - // are responsible (i.e. we don't store - // history data for quadrature points - // on cells that are owned by other + // ... and then also have a vector of history objects, one per quadrature + // point on those cells for which we are responsible (i.e. we don't store + // history data for quadrature points on cells that are owned by other // processors). 
std::vector > quadrature_point_history; - // The way this object is accessed is - // through a user pointer that each - // cell, face, or edge holds: it is a - // void* pointer that can be used - // by application programs to associate - // arbitrary data to cells, faces, or - // edges. What the program actually - // does with this data is within its - // own responsibility, the library just - // allocates some space for these - // pointers, and application programs - // can set and read the pointers for - // each of these objects. - - - // Further: we need the objects of - // linear systems to be solved, - // i.e. matrix, right hand side vector, - // and the solution vector. Since we - // anticipate solving big problems, we - // use the same types as in step-17, - // i.e. distributed %parallel matrices - // and vectors built on top of the - // PETSc library. Conveniently, they - // can also be used when running on - // only a single machine, in which case - // this machine happens to be the only - // one in our %parallel universe. + // The way this object is accessed is through a user pointer + // that each cell, face, or edge holds: it is a void* pointer + // that can be used by application programs to associate arbitrary data to + // cells, faces, or edges. What the program actually does with this data + // is within its own responsibility, the library just allocates some space + // for these pointers, and application programs can set and read the + // pointers for each of these objects. + + + // Further: we need the objects of linear systems to be solved, + // i.e. matrix, right hand side vector, and the solution vector. Since we + // anticipate solving big problems, we use the same types as in step-17, + // i.e. distributed %parallel matrices and vectors built on top of the + // PETSc library. Conveniently, they can also be used when running on only + // a single machine, in which case this machine happens to be the only one + // in our %parallel universe. // - // However, as a difference to step-17, - // we do not store the solution vector - // -- which here is the incremental - // displacements computed in each time - // step -- in a distributed - // fashion. I.e., of course it must be - // a distributed vector when computing - // it, but immediately after that we - // make sure each processor has a - // complete copy. The reason is that we - // had already seen in step-17 that - // many functions needed a complete - // copy. While it is not hard to get - // it, this requires communication on - // the network, and is thus slow. In - // addition, these were repeatedly the - // same operations, which is certainly - // undesirable unless the gains of not - // always having to store the entire - // vector outweighs it. When writing - // this program, it turned out that we - // need a complete copy of the solution - // in so many places that it did not - // seem worthwhile to only get it when - // necessary. Instead, we opted to - // obtain the complete copy once and - // for all, and instead get rid of the - // distributed copy immediately. Thus, - // note that the declaration of - // inremental_displacement does not - // denote a distribute vector as would - // be indicated by the middle namespace - // MPI: + // However, as a difference to step-17, we do not store the solution + // vector -- which here is the incremental displacements computed in each + // time step -- in a distributed fashion. 
I.e., of course it must be a + // distributed vector when computing it, but immediately after that we + // make sure each processor has a complete copy. The reason is that we had + // already seen in step-17 that many functions needed a complete + // copy. While it is not hard to get it, this requires communication on + // the network, and is thus slow. In addition, these were repeatedly the + // same operations, which is certainly undesirable unless the gains of not + // always having to store the entire vector outweighs it. When writing + // this program, it turned out that we need a complete copy of the + // solution in so many places that it did not seem worthwhile to only get + // it when necessary. Instead, we opted to obtain the complete copy once + // and for all, and instead get rid of the distributed copy + // immediately. Thus, note that the declaration of + // inremental_displacement does not denote a distribute + // vector as would be indicated by the middle namespace MPI: PETScWrappers::MPI::SparseMatrix system_matrix; PETScWrappers::MPI::Vector system_rhs; PETScWrappers::Vector incremental_displacement; - // The next block of variables is then - // related to the time dependent nature - // of the problem: they denote the - // length of the time interval which we - // want to simulate, the present time - // and number of time step, and length - // of present timestep: + // The next block of variables is then related to the time dependent + // nature of the problem: they denote the length of the time interval + // which we want to simulate, the present time and number of time step, + // and length of present timestep: double present_time; double present_timestep; double end_time; unsigned int timestep_no; - // Then a few variables that have to do - // with %parallel processing: first, a - // variable denoting the MPI - // communicator we use, and then two - // numbers telling us how many - // participating processors there are, - // and where in this world we - // are. Finally, a stream object that - // makes sure only one processor is - // actually generating output to the - // console. This is all the same as in - // step-17: + // Then a few variables that have to do with %parallel processing: first, + // a variable denoting the MPI communicator we use, and then two numbers + // telling us how many participating processors there are, and where in + // this world we are. Finally, a stream object that makes sure only one + // processor is actually generating output to the console. This is all the + // same as in step-17: MPI_Comm mpi_communicator; const unsigned int n_mpi_processes; @@ -735,95 +510,58 @@ namespace Step18 ConditionalOStream pcout; - // Here is a vector where each entry - // denotes the numbers of degrees of - // freedom that are stored on the - // processor with that particular - // number: + // Here is a vector where each entry denotes the numbers of degrees of + // freedom that are stored on the processor with that particular number: std::vector local_dofs_per_process; - // Next, how many degrees of freedom - // the present processor stores. This + // Next, how many degrees of freedom the present processor stores. This // is, of course, an abbreviation to // local_dofs_per_process[this_mpi_process]. unsigned int n_local_dofs; - // In the same direction, also - // cache how many cells the - // present processor owns. 
Note - // that the cells that belong - // to a processor are not - // necessarily contiguously - // numbered (when iterating - // over them using + // In the same direction, also cache how many cells the present processor + // owns. Note that the cells that belong to a processor are not + // necessarily contiguously numbered (when iterating over them using // active_cell_iterator). unsigned int n_local_cells; - // Finally, we have a - // static variable that denotes - // the linear relationship - // between the stress and - // strain. Since it is a - // constant object that does - // not depend on any input (at - // least not in this program), - // we make it a static variable - // and will initialize it in - // the same place where we - // define the constructor of - // this class: + // Finally, we have a static variable that denotes the linear relationship + // between the stress and strain. Since it is a constant object that does + // not depend on any input (at least not in this program), we make it a + // static variable and will initialize it in the same place where we + // define the constructor of this class: static const SymmetricTensor<4,dim> stress_strain_tensor; }; // @sect3{The BodyForce class} - // Before we go on to the main - // functionality of this program, we have - // to define what forces will act on the - // body whose deformation we want to - // study. These may either be body forces - // or boundary forces. Body forces are - // generally mediated by one of the four - // basic physical types of forces: gravity, - // strong and weak interaction, and - // electromagnetism. Unless one wants to - // consider subatomic objects (for which - // quasistatic deformation is irrelevant - // and an inappropriate description - // anyway), only gravity and - // electromagnetic forces need to be - // considered. Let us, for simplicity - // assume that our body has a certain mass - // density, but is either non-magnetic and - // not electrically conducting or that - // there are no significant electromagnetic - // fields around. In that case, the body - // forces are simply rho g, where - // rho is the material density and - // g is a vector in negative - // z-direction with magnitude 9.81 m/s^2. - // Both the density and g are defined - // in the function, and we take as the - // density 7700 kg/m^3, a value commonly + // Before we go on to the main functionality of this program, we have to + // define what forces will act on the body whose deformation we want to + // study. These may either be body forces or boundary forces. Body forces + // are generally mediated by one of the four basic physical types of forces: + // gravity, strong and weak interaction, and electromagnetism. Unless one + // wants to consider subatomic objects (for which quasistatic deformation is + // irrelevant and an inappropriate description anyway), only gravity and + // electromagnetic forces need to be considered. Let us, for simplicity + // assume that our body has a certain mass density, but is either + // non-magnetic and not electrically conducting or that there are no + // significant electromagnetic fields around. In that case, the body forces + // are simply rho g, where rho is the material + // density and g is a vector in negative z-direction with + // magnitude 9.81 m/s^2. Both the density and g are defined in + // the function, and we take as the density 7700 kg/m^3, a value commonly // assumed for steel. 
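// (Numerically, the comment above amounts to a force density of about
// 7700 kg/m^3 * 9.81 m/s^2, i.e. roughly 7.6e4 N/m^3, pointing in the
// negative z-direction. The program's own BodyForce::vector_value does this
// for real; purely to make the numbers concrete, a hypothetical helper with
// the same content might read:)
template <int dim>
void gravity_force_density (Vector<double> &values)
{
  Assert (values.size() == dim,
          ExcDimensionMismatch (values.size(), dim));

  const double g   = 9.81;     // m/s^2
  const double rho = 7700;     // kg/m^3, a value commonly assumed for steel

  values        = 0;
  values(dim-1) = -rho * g;    // gravity along the negative last coordinate
}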
// - // To be a little more general and to be - // able to do computations in 2d as well, - // we realize that the body force is always - // a function returning a dim - // dimensional vector. We assume that - // gravity acts along the negative - // direction of the last, i.e. dim-1th - // coordinate. The rest of the - // implementation of this function should - // be mostly self-explanatory given similar - // definitions in previous example - // programs. Note that the body force is - // independent of the location; to avoid - // compiler warnings about unused function - // arguments, we therefore comment out the - // name of the first argument of the + // To be a little more general and to be able to do computations in 2d as + // well, we realize that the body force is always a function returning a + // dim dimensional vector. We assume that gravity acts along + // the negative direction of the last, i.e. dim-1th + // coordinate. The rest of the implementation of this function should be + // mostly self-explanatory given similar definitions in previous example + // programs. Note that the body force is independent of the location; to + // avoid compiler warnings about unused function arguments, we therefore + // comment out the name of the first argument of the // vector_value function: template class BodyForce : public Function @@ -887,54 +625,31 @@ namespace Step18 // @sect3{The IncrementalBoundaryValue class} - // In addition to body forces, movement can - // be induced by boundary forces and forced - // boundary displacement. The latter case - // is equivalent to forces being chosen in - // such a way that they induce certain - // displacement. + // In addition to body forces, movement can be induced by boundary forces + // and forced boundary displacement. The latter case is equivalent to forces + // being chosen in such a way that they induce certain displacement. // - // For quasistatic displacement, typical - // boundary forces would be pressure on a - // body, or tangential friction against - // another body. We chose a somewhat - // simpler case here: we prescribe a - // certain movement of (parts of) the - // boundary, or at least of certain - // components of the displacement - // vector. We describe this by another - // vector-valued function that, for a given - // point on the boundary, returns the - // prescribed displacement. + // For quasistatic displacement, typical boundary forces would be pressure + // on a body, or tangential friction against another body. We chose a + // somewhat simpler case here: we prescribe a certain movement of (parts of) + // the boundary, or at least of certain components of the displacement + // vector. We describe this by another vector-valued function that, for a + // given point on the boundary, returns the prescribed displacement. // - // Since we have a time-dependent problem, - // the displacement increment of the - // boundary equals the displacement - // accumulated during the length of the - // timestep. The class therefore has to - // know both the present time and the - // length of the present time step, and can - // then approximate the incremental - // displacement as the present velocity - // times the present timestep. + // Since we have a time-dependent problem, the displacement increment of the + // boundary equals the displacement accumulated during the length of the + // timestep. 
The class therefore has to know both the present time and the + // length of the present time step, and can then approximate the incremental + // displacement as the present velocity times the present timestep. // - // For the purposes of this - // program, we choose a simple form - // of boundary displacement: we - // displace the top boundary with - // constant velocity downwards. The - // rest of the boundary is either - // going to be fixed (and is then - // described using an object of - // type ZeroFunction) or free - // (Neumann-type, in which case - // nothing special has to be done). - // The implementation of the - // class describing the constant - // downward motion should then be - // obvious using the knowledge we - // gained through all the previous - // example programs: + // For the purposes of this program, we choose a simple form of boundary + // displacement: we displace the top boundary with constant velocity + // downwards. The rest of the boundary is either going to be fixed (and is + // then described using an object of type ZeroFunction) or free + // (Neumann-type, in which case nothing special has to be done). The + // implementation of the class describing the constant downward motion + // should then be obvious using the knowledge we gained through all the + // previous example programs: template class IncrementalBoundaryValues : public Function { @@ -1006,13 +721,9 @@ namespace Step18 // @sect3{Implementation of the TopLevel class} - // Now for the implementation of the main - // class. First, we initialize the - // stress-strain tensor, which we - // have declared as a static const - // variable. We chose Lame - // constants that are appropriate - // for steel: + // Now for the implementation of the main class. First, we initialize the + // stress-strain tensor, which we have declared as a static const + // variable. We chose Lame constants that are appropriate for steel: template const SymmetricTensor<4,dim> TopLevel::stress_strain_tensor @@ -1023,15 +734,11 @@ namespace Step18 // @sect4{The public interface} - // The next step is the definition of - // constructors and descructors. There are - // no surprises here: we choose linear and - // continuous finite elements for each of - // the dim vector components of the - // solution, and a Gaussian quadrature - // formula with 2 points in each coordinate - // direction. The destructor should be - // obvious: + // The next step is the definition of constructors and descructors. There + // are no surprises here: we choose linear and continuous finite elements + // for each of the dim vector components of the solution, and a + // Gaussian quadrature formula with 2 points in each coordinate + // direction. The destructor should be obvious: template TopLevel::TopLevel () : @@ -1054,17 +761,12 @@ namespace Step18 - // The last of the public functions is the - // one that directs all the work, - // run(). It initializes the variables - // that describe where in time we presently - // are, then runs the first time step, then - // loops over all the other time - // steps. Note that for simplicity we use a - // fixed time step, whereas a more - // sophisticated program would of course - // have to choose it in some more - // reasonable way adaptively: + // The last of the public functions is the one that directs all the work, + // run(). It initializes the variables that describe where in + // time we presently are, then runs the first time step, then loops over all + // the other time steps. 
Note that for simplicity we use a fixed time step, + // whereas a more sophisticated program would of course have to choose it in + // some more reasonable way adaptively: template void TopLevel::run () { @@ -1082,33 +784,19 @@ namespace Step18 // @sect4{TopLevel::create_coarse_grid} - // The next function in the order - // in which they were declared - // above is the one that creates - // the coarse grid from which we - // start. For this example program, - // we want to compute the - // deformation of a cylinder under - // axial compression. The first - // step therefore is to generate a - // mesh for a cylinder of length 3 - // and with inner and outer radii - // of 0.8 and 1, - // respectively. Fortunately, there - // is a library function for such a - // mesh. + // The next function in the order in which they were declared above is the + // one that creates the coarse grid from which we start. For this example + // program, we want to compute the deformation of a cylinder under axial + // compression. The first step therefore is to generate a mesh for a + // cylinder of length 3 and with inner and outer radii of 0.8 and 1, + // respectively. Fortunately, there is a library function for such a mesh. // - // In a second step, we have to associated - // boundary conditions with the upper and - // lower faces of the cylinder. We choose a - // boundary indicator of 0 for the boundary - // faces that are characterized by their - // midpoints having z-coordinates of either - // 0 (bottom face), an indicator of 1 for - // z=3 (top face); finally, we use boundary - // indicator 2 for all faces on the inside - // of the cylinder shell, and 3 for the - // outside. + // In a second step, we have to associated boundary conditions with the + // upper and lower faces of the cylinder. We choose a boundary indicator of + // 0 for the boundary faces that are characterized by their midpoints having + // z-coordinates of either 0 (bottom face), an indicator of 1 for z=3 (top + // face); finally, we use boundary indicator 2 for all faces on the inside + // of the cylinder shell, and 3 for the outside. template void TopLevel::create_coarse_grid () { @@ -1137,84 +825,49 @@ namespace Step18 cell->face(f)->set_boundary_indicator (3); } - // In order to make sure that new - // vertices are placed correctly on mesh - // refinement, we have to associate - // objects describing those parts of the - // boundary that do not consist of - // straight parts. Corresponding to the - // cylinder shell generator function used - // above, there are classes that can be - // used to describe the geometry of - // cylinders. We need to use different - // objects for the inner and outer parts - // of the cylinder, with different radii; - // the second argument to the constructor - // indicates the axis around which the - // cylinder revolves -- in this case the - // z-axis. Note that the boundary objects - // need to live as long as the - // triangulation does; we can achieve - // this by making the objects static, - // which means that they live as long as - // the program runs: + // In order to make sure that new vertices are placed correctly on mesh + // refinement, we have to associate objects describing those parts of the + // boundary that do not consist of straight parts. Corresponding to the + // cylinder shell generator function used above, there are classes that + // can be used to describe the geometry of cylinders. 
We need to use + // different objects for the inner and outer parts of the cylinder, with + // different radii; the second argument to the constructor indicates the + // axis around which the cylinder revolves -- in this case the + // z-axis. Note that the boundary objects need to live as long as the + // triangulation does; we can achieve this by making the objects static, + // which means that they live as long as the program runs: static const CylinderBoundary inner_cylinder (inner_radius, 2); static const CylinderBoundary outer_cylinder (outer_radius, 2); - // We then attach these two objects to - // the triangulation, and make them - // correspond to boundary indicators 2 - // and 3: + // We then attach these two objects to the triangulation, and make them + // correspond to boundary indicators 2 and 3: triangulation.set_boundary (2, inner_cylinder); triangulation.set_boundary (3, outer_cylinder); - // There's one more thing we have to take - // care of (we should have done so above - // already, but for didactic reasons it - // was more appropriate to handle it - // after discussing boundary - // objects). %Boundary indicators in - // deal.II, for mostly historic reasons, - // serve a dual purpose: they describe - // the type of a boundary for other - // places in a program where different - // boundary conditions are implemented; - // and they describe which boundary - // object (as the ones associated above) - // should be queried when new boundary - // points need to be placed upon mesh - // refinement. In the prefix to this - // function, we have discussed the - // boundary condition issue, and the - // boundary geometry issue was mentioned - // just above. But there is a case where - // we have to be careful with geometry: - // what happens if a cell is refined that - // has two faces with different boundary - // indicators? For example one at the - // edges of the cylinder? In that case, - // the library wouldn't know where to put - // new points in the middle of edges (one - // of the twelve lines of a - // hexahedron). In fact, the library - // doesn't even care about the boundary - // indicator of adjacent faces when - // refining edges: it considers the - // boundary indicators associated with - // the edges themselves. So what do we - // want to happen with the edges of the - // cylinder shell: they sit on both faces - // with boundary indicators 2 or 3 (inner - // or outer shell) and 0 or 1 (for which - // no boundary objects have been - // specified, and for which the library - // therefore assumes straight - // lines). Obviously, we want these lines - // to follow the curved shells, so we - // have to assign all edges along faces - // with boundary indicators 2 or 3 these - // same boundary indicators to make sure - // they are refined using the appropriate - // geometry objects. This is easily done: + // There's one more thing we have to take care of (we should have done so + // above already, but for didactic reasons it was more appropriate to + // handle it after discussing boundary objects). %Boundary indicators in + // deal.II, for mostly historic reasons, serve a dual purpose: they + // describe the type of a boundary for other places in a program where + // different boundary conditions are implemented; and they describe which + // boundary object (as the ones associated above) should be queried when + // new boundary points need to be placed upon mesh refinement. 
In the + // prefix to this function, we have discussed the boundary condition + // issue, and the boundary geometry issue was mentioned just above. But + // there is a case where we have to be careful with geometry: what happens + // if a cell is refined that has two faces with different boundary + // indicators? For example one at the edges of the cylinder? In that case, + // the library wouldn't know where to put new points in the middle of + // edges (one of the twelve lines of a hexahedron). In fact, the library + // doesn't even care about the boundary indicator of adjacent faces when + // refining edges: it considers the boundary indicators associated with + // the edges themselves. So what do we want to happen with the edges of + // the cylinder shell: they sit on both faces with boundary indicators 2 + // or 3 (inner or outer shell) and 0 or 1 (for which no boundary objects + // have been specified, and for which the library therefore assumes + // straight lines). Obviously, we want these lines to follow the curved + // shells, so we have to assign all edges along faces with boundary + // indicators 2 or 3 these same boundary indicators to make sure they are + // refined using the appropriate geometry objects. This is easily done: for (typename Triangulation::active_face_iterator face=triangulation.begin_active_face(); face!=triangulation.end_face(); ++face) @@ -1227,21 +880,14 @@ namespace Step18 face->line(edge) ->set_boundary_indicator (face->boundary_indicator()); - // Once all this is done, we can refine - // the mesh once globally: + // Once all this is done, we can refine the mesh once globally: triangulation.refine_global (1); - // As the final step, we need to - // set up a clean state of the - // data that we store in the - // quadrature points on all cells - // that are treated on the - // present processor. To do so, - // we also have to know which - // processors are ours in the - // first place. This is done in - // the following two function + // As the final step, we need to set up a clean state of the data that we + // store in the quadrature points on all cells that are treated on the + // present processor. To do so, we also have to know which processors are + // ours in the first place. This is done in the following two function // calls: GridTools::partition_triangulation (n_mpi_processes, triangulation); setup_quadrature_point_history (); @@ -1252,195 +898,123 @@ namespace Step18 // @sect4{TopLevel::setup_system} - // The next function is the one - // that sets up the data structures - // for a given mesh. This is done - // in most the same way as in - // step-17: distribute the degrees - // of freedom, then sort these - // degrees of freedom in such a way - // that each processor gets a - // contiguous chunk of them. Note - // that subdivions into chunks for - // each processor is handled in the - // functions that create or refine - // grids, unlike in the previous - // example program (the point where - // this happens is mostly a matter - // of taste; here, we chose to do - // it when grids are created since - // in the do_initial_timestep - // and do_timestep functions we - // want to output the number of - // cells on each processor at a - // point where we haven't called - // the present function yet). + // The next function is the one that sets up the data structures for a given + // mesh. 
This is done in most the same way as in step-17: distribute the + // degrees of freedom, then sort these degrees of freedom in such a way that + // each processor gets a contiguous chunk of them. Note that subdivions into + // chunks for each processor is handled in the functions that create or + // refine grids, unlike in the previous example program (the point where + // this happens is mostly a matter of taste; here, we chose to do it when + // grids are created since in the do_initial_timestep and + // do_timestep functions we want to output the number of cells + // on each processor at a point where we haven't called the present function + // yet). template void TopLevel::setup_system () { dof_handler.distribute_dofs (fe); DoFRenumbering::subdomain_wise (dof_handler); - // The next thing is to store some - // information for later use on how many - // cells or degrees of freedom the - // present processor, or any of the - // processors has to work on. First the - // cells local to this processor... + // The next thing is to store some information for later use on how many + // cells or degrees of freedom the present processor, or any of the + // processors has to work on. First the cells local to this processor... n_local_cells = GridTools::count_cells_with_subdomain_association (triangulation, this_mpi_process); - // ...and then a list of numbers of how - // many degrees of freedom each processor - // has to handle: + // ...and then a list of numbers of how many degrees of freedom each + // processor has to handle: local_dofs_per_process.resize (n_mpi_processes); for (unsigned int i=0; iCompressedSparsityPattern class - // here that was already introduced in - // step-11, rather than the - // SparsityPattern class that we have - // used in all other cases. The reason - // for this is that for the latter class - // to work we have to give an initial - // upper bound for the number of entries - // in each row, a task that is - // traditionally done by - // DoFHandler::max_couplings_between_dofs(). However, - // this function suffers from a serious - // problem: it has to compute an upper - // bound to the number of nonzero entries - // in each row, and this is a rather - // complicated task, in particular in - // 3d. In effect, while it is quite - // accurate in 2d, it often comes up with - // much too large a number in 3d, and in - // that case the SparsityPattern - // allocates much too much memory at - // first, often several 100 MBs. This is - // later corrected when - // DoFTools::make_sparsity_pattern is - // called and we realize that we don't - // need all that much memory, but at time - // it is already too late: for large - // problems, the temporary allocation of - // too much memory can lead to - // out-of-memory situations. + // Note that we have used the CompressedSparsityPattern class + // here that was already introduced in step-11, rather than the + // SparsityPattern class that we have used in all other + // cases. The reason for this is that for the latter class to work we have + // to give an initial upper bound for the number of entries in each row, a + // task that is traditionally done by + // DoFHandler::max_couplings_between_dofs(). However, this + // function suffers from a serious problem: it has to compute an upper + // bound to the number of nonzero entries in each row, and this is a + // rather complicated task, in particular in 3d. 
In effect, while it is + // quite accurate in 2d, it often comes up with much too large a number in + // 3d, and in that case the SparsityPattern allocates much + // too much memory at first, often several 100 MBs. This is later + // corrected when DoFTools::make_sparsity_pattern is called + // and we realize that we don't need all that much memory, but at time it + // is already too late: for large problems, the temporary allocation of + // too much memory can lead to out-of-memory situations. // - // In order to avoid this, we resort to - // the CompressedSparsityPattern - // class that is slower but does not - // require any up-front estimate on the - // number of nonzero entries per row. It - // therefore only ever allocates as much - // memory as it needs at any given time, - // and we can build it even for large 3d - // problems. + // In order to avoid this, we resort to the + // CompressedSparsityPattern class that is slower but does + // not require any up-front estimate on the number of nonzero entries per + // row. It therefore only ever allocates as much memory as it needs at any + // given time, and we can build it even for large 3d problems. // - // It is also worth noting that the - // sparsity pattern we construct is - // global, i.e. comprises all degrees of - // freedom whether they will be owned by - // the processor we are on or another one - // (in case this program is run in - // %parallel via MPI). This of course is - // not optimal -- it limits the size of - // the problems we can solve, since - // storing the entire sparsity pattern - // (even if only for a short time) on - // each processor does not scale - // well. However, there are several more - // places in the program in which we do - // this, for example we always keep the - // global triangulation and DoF handler - // objects around, even if we only work - // on part of them. At present, deal.II - // does not have the necessary facilities - // to completely distribute these objects - // (a task that, indeed, is very hard to - // achieve with adaptive meshes, since - // well-balanced subdivisions of a domain - // tend to become unbalanced as the mesh - // is adaptively refined). + // It is also worth noting that the sparsity pattern we construct is + // global, i.e. comprises all degrees of freedom whether they will be + // owned by the processor we are on or another one (in case this program + // is run in %parallel via MPI). This of course is not optimal -- it + // limits the size of the problems we can solve, since storing the entire + // sparsity pattern (even if only for a short time) on each processor does + // not scale well. However, there are several more places in the program + // in which we do this, for example we always keep the global + // triangulation and DoF handler objects around, even if we only work on + // part of them. At present, deal.II does not have the necessary + // facilities to completely distribute these objects (a task that, indeed, + // is very hard to achieve with adaptive meshes, since well-balanced + // subdivisions of a domain tend to become unbalanced as the mesh is + // adaptively refined). 
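// (For reference: the construction of this compressed pattern is unchanged
// code and therefore not visible in this part of the diff. Schematically it
// amounts to something like the following, before the matrix is initialized
// from it below:)
CompressedSparsityPattern sparsity_pattern (dof_handler.n_dofs(),
                                            dof_handler.n_dofs());
DoFTools::make_sparsity_pattern (dof_handler, sparsity_pattern);
hanging_node_constraints.condense (sparsity_pattern);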
// - // With this data structure, we can then - // go to the PETSc sparse matrix and tell - // it to pre-allocate all the entries we - // will later want to write to: + // With this data structure, we can then go to the PETSc sparse matrix and + // tell it to pre-allocate all the entries we will later want to write to: system_matrix.reinit (mpi_communicator, sparsity_pattern, local_dofs_per_process, local_dofs_per_process, this_mpi_process); - // After this point, no further explicit - // knowledge of the sparsity pattern is - // required any more and we can let the - // sparsity_pattern variable go out - // of scope without any problem. - - // The last task in this function - // is then only to reset the - // right hand side vector as well - // as the solution vector to its - // correct size; remember that - // the solution vector is a local - // one, unlike the right hand - // side that is a distributed - // %parallel one and therefore - // needs to know the MPI - // communicator over which it is - // supposed to transmit messages: + // After this point, no further explicit knowledge of the sparsity pattern + // is required any more and we can let the sparsity_pattern + // variable go out of scope without any problem. + + // The last task in this function is then only to reset the right hand + // side vector as well as the solution vector to its correct size; + // remember that the solution vector is a local one, unlike the right hand + // side that is a distributed %parallel one and therefore needs to know + // the MPI communicator over which it is supposed to transmit messages: system_rhs.reinit (mpi_communicator, dof_handler.n_dofs(), n_local_dofs); incremental_displacement.reinit (dof_handler.n_dofs()); } @@ -1449,30 +1023,17 @@ namespace Step18 // @sect4{TopLevel::assemble_system} - // Again, assembling the system - // matrix and right hand side - // follows the same structure as in - // many example programs before. In - // particular, it is mostly - // equivalent to step-17, except - // for the different right hand - // side that now only has to take - // into account internal - // stresses. In addition, - // assembling the matrix is made - // significantly more transparent - // by using the SymmetricTensor - // class: note the elegance of - // forming the scalar products of - // symmetric tensors of rank 2 and - // 4. The implementation is also - // more general since it is - // independent of the fact that we - // may or may not be using an - // isotropic elasticity tensor. + // Again, assembling the system matrix and right hand side follows the same + // structure as in many example programs before. In particular, it is mostly + // equivalent to step-17, except for the different right hand side that now + // only has to take into account internal stresses. In addition, assembling + // the matrix is made significantly more transparent by using the + // SymmetricTensor class: note the elegance of forming the + // scalar products of symmetric tensors of rank 2 and 4. The implementation + // is also more general since it is independent of the fact that we may or + // may not be using an isotropic elasticity tensor. 
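// (Schematically, and assuming the get_strain helper defined
// earlier in the program together with the usual local objects of the
// assembly loop -- here called cell_matrix and fe_values -- the
// matrix contribution the text alludes to has the form)
const SymmetricTensor<2,dim> eps_phi_i = get_strain (fe_values, i, q_point),
                             eps_phi_j = get_strain (fe_values, j, q_point);
cell_matrix(i,j) += (eps_phi_i * stress_strain_tensor * eps_phi_j
                     * fe_values.JxW (q_point));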
// - // The first part of the assembly routine - // is as always: + // The first part of the assembly routine is as always: template void TopLevel::assemble_system () { @@ -1495,8 +1056,7 @@ namespace Step18 std::vector > body_force_values (n_q_points, Vector(dim)); - // As in step-17, we only need to loop - // over all cells that belong to the + // As in step-17, we only need to loop over all cells that belong to the // present processor: typename DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), @@ -1509,23 +1069,15 @@ namespace Step18 fe_values.reinit (cell); - // Then loop over all indices i,j - // and quadrature points and - // assemble the system matrix - // contributions from this cell. - // Note how we extract the - // symmetric gradients (strains) of - // the shape functions at a given - // quadrature point from the - // FEValues object, and the - // elegance with which we form the - // triple contraction eps_phi_i : - // C : eps_phi_j; the latter - // needs to be compared to the - // clumsy computations needed in - // step-17, both in the - // introduction as well as in the - // respective place in the program: + // Then loop over all indices i,j and quadrature points and assemble + // the system matrix contributions from this cell. Note how we + // extract the symmetric gradients (strains) of the shape functions + // at a given quadrature point from the FEValues + // object, and the elegance with which we form the triple + // contraction eps_phi_i : C : eps_phi_j; the latter + // needs to be compared to the clumsy computations needed in + // step-17, both in the introduction as well as in the respective + // place in the program: for (unsigned int i=0; i *local_quadrature_points_data = reinterpret_cast*>(cell->user_pointer()); - // In addition, we need the values - // of the external body forces at - // the quadrature points on this - // cell: + // In addition, we need the values of the external body forces at + // the quadrature points on this cell: body_force.vector_value_list (fe_values.get_quadrature_points(), body_force_values); - // Then we can loop over all - // degrees of freedom on this cell - // and compute local contributions - // to the right hand side: + // Then we can loop over all degrees of freedom on this cell and + // compute local contributions to the right hand side: for (unsigned int i=0; iget_dof_indices (local_dof_indices); hanging_node_constraints @@ -1606,89 +1145,42 @@ namespace Step18 system_rhs); } - // The last step is to again fix - // up boundary values, just as we - // already did in previous - // programs. A slight - // complication is that the - // apply_boundary_values - // function wants to have a - // solution vector compatible - // with the matrix and right hand - // side (i.e. here a distributed - // %parallel vector, rather than - // the sequential vector we use - // in this program) in order to - // preset the entries of the - // solution vector with the - // correct boundary values. We - // provide such a compatible - // vector in the form of a - // temporary vector which we then - // copy into the sequential one. - - // We make up for this - // complication by showing how - // boundary values can be used - // flexibly: following the way we - // create the triangulation, - // there are three distinct - // boundary indicators used to - // describe the domain, - // corresponding to the bottom - // and top faces, as well as the - // inner/outer surfaces. 
We would - // like to impose boundary - // conditions of the following - // type: The inner and outer - // cylinder surfaces are free of - // external forces, a fact that - // corresponds to natural - // (Neumann-type) boundary - // conditions for which we don't - // have to do anything. At the - // bottom, we want no movement at - // all, corresponding to the - // cylinder being clamped or - // cemented in at this part of - // the boundary. At the top, - // however, we want a prescribed - // vertical downward motion - // compressing the cylinder; in - // addition, we only want to - // restrict the vertical - // movement, but not the - // horizontal ones -- one can - // think of this situation as a - // well-greased plate sitting on - // top of the cylinder pushing it - // downwards: the atoms of the - // cylinder are forced to move - // downward, but they are free to - // slide horizontally along the - // plate. - - // The way to describe this is as - // follows: for boundary - // indicator zero (bottom face) - // we use a dim-dimensional zero - // function representing no - // motion in any coordinate - // direction. For the boundary - // with indicator 1 (top - // surface), we use the - // IncrementalBoundaryValues - // class, but we specify an - // additional argument to the - // VectorTools::interpolate_boundary_values - // function denoting which vector - // components it should apply to; - // this is a vector of bools for - // each vector component and - // because we only want to - // restrict vertical motion, it - // has only its last component - // set: + // The last step is to again fix up boundary values, just as we already + // did in previous programs. A slight complication is that the + // apply_boundary_values function wants to have a solution + // vector compatible with the matrix and right hand side (i.e. here a + // distributed %parallel vector, rather than the sequential vector we use + // in this program) in order to preset the entries of the solution vector + // with the correct boundary values. We provide such a compatible vector + // in the form of a temporary vector which we then copy into the + // sequential one. + + // We make up for this complication by showing how boundary values can be + // used flexibly: following the way we create the triangulation, there are + // three distinct boundary indicators used to describe the domain, + // corresponding to the bottom and top faces, as well as the inner/outer + // surfaces. We would like to impose boundary conditions of the following + // type: The inner and outer cylinder surfaces are free of external + // forces, a fact that corresponds to natural (Neumann-type) boundary + // conditions for which we don't have to do anything. At the bottom, we + // want no movement at all, corresponding to the cylinder being clamped or + // cemented in at this part of the boundary. At the top, however, we want + // a prescribed vertical downward motion compressing the cylinder; in + // addition, we only want to restrict the vertical movement, but not the + // horizontal ones -- one can think of this situation as a well-greased + // plate sitting on top of the cylinder pushing it downwards: the atoms of + // the cylinder are forced to move downward, but they are free to slide + // horizontally along the plate. + + // The way to describe this is as follows: for boundary indicator zero + // (bottom face) we use a dim-dimensional zero function representing no + // motion in any coordinate direction. 
For the boundary with indicator 1 + // (top surface), we use the IncrementalBoundaryValues class, + // but we specify an additional argument to the + // VectorTools::interpolate_boundary_values function denoting + // which vector components it should apply to; this is a vector of bools + // for each vector component and because we only want to restrict vertical + // motion, it has only its last component set: FEValuesExtractors::Scalar z_component (dim-1); std::map boundary_values; VectorTools:: @@ -1716,11 +1208,9 @@ namespace Step18 // @sect4{TopLevel::solve_timestep} - // The next function is the one that - // controls what all has to happen within a - // timestep. The order of things should be - // relatively self-explanatory from the - // function names: + // The next function is the one that controls what all has to happen within + // a timestep. The order of things should be relatively self-explanatory + // from the function names: template void TopLevel::solve_timestep () { @@ -1743,32 +1233,18 @@ namespace Step18 // @sect4{TopLevel::solve_linear_problem} - // Solving the linear system again - // works mostly as before. The only - // difference is that we want to - // only keep a complete local copy - // of the solution vector instead - // of the distributed one that we - // get as output from PETSc's - // solver routines. To this end, we - // declare a local temporary - // variable for the distributed - // vector and initialize it with - // the contents of the local - // variable (remember that the - // apply_boundary_values - // function called in - // assemble_system preset the - // values of boundary nodes in this - // vector), solve with it, and at - // the end of the function copy it - // again into the complete local - // vector that we declared as a - // member variable. Hanging node - // constraints are then distributed - // only on the local copy, - // i.e. independently of each other - // on each of the processors: + // Solving the linear system again works mostly as before. The only + // difference is that we want to only keep a complete local copy of the + // solution vector instead of the distributed one that we get as output from + // PETSc's solver routines. To this end, we declare a local temporary + // variable for the distributed vector and initialize it with the contents + // of the local variable (remember that the + // apply_boundary_values function called in + // assemble_system preset the values of boundary nodes in this + // vector), solve with it, and at the end of the function copy it again into + // the complete local vector that we declared as a member variable. Hanging + // node constraints are then distributed only on the local copy, + // i.e. independently of each other on each of the processors: template unsigned int TopLevel::solve_linear_problem () { @@ -1799,74 +1275,43 @@ namespace Step18 // @sect4{TopLevel::output_results} - // This function generates the - // graphical output in intermediate - // format as explained in the - // introduction. Each process will - // only work on the cells it owns, - // and then write the result into a - // file of its own. These files may - // later be merged to get a single - // file in any of the supported - // output files, as mentioned in - // the introduction. + // This function generates the graphical output in intermediate format as + // explained in the introduction. Each process will only work on the cells + // it owns, and then write the result into a file of its own. 
These files + // may later be merged to get a single file in any of the supported output + // files, as mentioned in the introduction. // - // The crucial part of this function is to - // give the DataOut class a way to only - // work on the cells that the present - // process owns. This class is already - // well-equipped for that: it has two - // virtual functions first_cell and - // next_cell that return the first cell - // to be worked on, and given one cell - // return the next cell to be worked on. By - // default, these functions return the - // first active cell (i.e. the first one - // that has no children) and the next - // active cell. What we have to do here is - // derive a class from DataOut that - // overloads these two functions to only - // iterate over those cells with the right - // subdomain indicator. + // The crucial part of this function is to give the DataOut + // class a way to only work on the cells that the present process owns. This + // class is already well-equipped for that: it has two virtual functions + // first_cell and next_cell that return the first + // cell to be worked on, and given one cell return the next cell to be + // worked on. By default, these functions return the first active cell + // (i.e. the first one that has no children) and the next active cell. What + // we have to do here is derive a class from DataOut that + // overloads these two functions to only iterate over those cells with the + // right subdomain indicator. // - // We do this at the beginning of this - // function. The first_cell function - // just starts with the first active cell, - // and then iterates to the next cells - // while the cell presently under - // consideration does not yet have the - // correct subdomain id. The only thing - // that needs to be taken care of is that - // we don't try to keep iterating when we - // have hit the end iterator. + // We do this at the beginning of this function. The first_cell + // function just starts with the first active cell, and then iterates to the + // next cells while the cell presently under consideration does not yet have + // the correct subdomain id. The only thing that needs to be taken care of + // is that we don't try to keep iterating when we have hit the end iterator. // - // The next_cell function could be - // implemented in a similar way. However, - // we use this occasion as a pretext to - // introduce one more thing that the - // library offers: filtered - // iterators. These are wrappers for the - // iterator classes that just skip all - // cells (or faces, lines, etc) that do not - // satisfy a certain predicate (a predicate - // in computer-lingo is a function that - // when applied to a data element either - // returns true or false). In the present - // case, the predicate is that the cell has - // to have a certain subdomain id, and the - // library already has this predicate built - // in. If the cell iterator is not the end - // iterator, what we then have to do is to - // initialize such a filtered iterator with - // the present cell and the predicate, and - // then increase the iterator exactly - // once. While the more conventional loop - // would probably not have been much - // longer, this is definitely the more - // elegant way -- and then, these example - // programs also serve the purpose of - // introducing what is available in - // deal.II. + // The next_cell function could be implemented in a similar + // way. 
However, we use this occasion as a pretext to introduce one more + // thing that the library offers: filtered iterators. These are wrappers for + // the iterator classes that just skip all cells (or faces, lines, etc) that + // do not satisfy a certain predicate (a predicate in computer-lingo is a + // function that when applied to a data element either returns true or + // false). In the present case, the predicate is that the cell has to have a + // certain subdomain id, and the library already has this predicate built + // in. If the cell iterator is not the end iterator, what we then have to do + // is to initialize such a filtered iterator with the present cell and the + // predicate, and then increase the iterator exactly once. While the more + // conventional loop would probably not have been much longer, this is + // definitely the more elegant way -- and then, these example programs also + // serve the purpose of introducing what is available in deal.II. template class FilteredDataOut : public DataOut { @@ -1914,24 +1359,17 @@ namespace Step18 template void TopLevel::output_results () const { - // With this newly defined class, declare - // an object that is going to generate - // the graphical output and attach the - // dof handler with it from which to get - // the solution vector: + // With this newly defined class, declare an object that is going to + // generate the graphical output and attach the dof handler with it from + // which to get the solution vector: FilteredDataOut data_out(this_mpi_process); data_out.attach_dof_handler (dof_handler); - // Then, just as in step-17, define the - // names of solution variables (which - // here are the displacement increments) - // and queue the solution vector for - // output. Note in the following switch - // how we make sure that if the space - // dimension should be unhandled that we - // throw an exception saying that we - // haven't implemented this case yet - // (another case of defensive + // Then, just as in step-17, define the names of solution variables (which + // here are the displacement increments) and queue the solution vector for + // output. Note in the following switch how we make sure that if the space + // dimension should be unhandled that we throw an exception saying that we + // haven't implemented this case yet (another case of defensive // programming): std::vector solution_names; switch (dim) @@ -1956,29 +1394,18 @@ namespace Step18 solution_names); - // The next thing is that we wanted to - // output something like the average norm - // of the stresses that we have stored in - // each cell. This may seem complicated, - // since on the present processor we only - // store the stresses in quadrature - // points on those cells that actually - // belong to the present process. In - // other words, it seems as if we can't - // compute the average stresses for all - // cells. However, remember that our - // class derived from DataOut only - // iterates over those cells that - // actually do belong to the present - // processor, i.e. we don't have to - // compute anything for all the other - // cells as this information would not be - // touched. The following little loop - // does this. We enclose the entire block - // into a pair of braces to make sure - // that the iterator variables do not - // remain accidentally visible beyond the - // end of the block in which they are + // The next thing is that we wanted to output something like the average + // norm of the stresses that we have stored in each cell. 
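Coming back for a moment to the next_cell function mentioned above: with the filtered iterators just introduced, a sketch of it could read as follows (again assuming the dofs and subdomain_id members named before; IteratorFilters::SubdomainEqualTo is the predicate the library provides for this purpose):

template <int dim>
typename DataOut<dim>::cell_iterator
FilteredDataOut<dim>::next_cell (const typename DataOut<dim>::cell_iterator &old_cell)
{
  if (old_cell != this->dofs->end())
    {
      // Wrap the present cell into a filtered iterator that only visits
      // cells with the right subdomain id, and advance it exactly once:
      const IteratorFilters::SubdomainEqualTo predicate (subdomain_id);
      return
        ++(FilteredIterator<typename DataOut<dim>::active_cell_iterator>
           (predicate, old_cell));
    }
  else
    return old_cell;
}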
This may seem + // complicated, since on the present processor we only store the stresses + // in quadrature points on those cells that actually belong to the present + // process. In other words, it seems as if we can't compute the average + // stresses for all cells. However, remember that our class derived from + // DataOut only iterates over those cells that actually do + // belong to the present processor, i.e. we don't have to compute anything + // for all the other cells as this information would not be touched. The + // following little loop does this. We enclose the entire block into a + // pair of braces to make sure that the iterator variables do not remain + // accidentally visible beyond the end of the block in which they are // used: Vector norm_of_stress (triangulation.n_active_cells()); { @@ -1987,12 +1414,10 @@ namespace Step18 cell = triangulation.begin_active(), endc = triangulation.end(); for (unsigned int index=0; cell!=endc; ++cell, ++index) - // ... and pick those that are - // relevant to us: + // ... and pick those that are relevant to us: if (cell->subdomain_id() == this_mpi_process) { - // On these cells, add up the - // stresses over all quadrature + // On these cells, add up the stresses over all quadrature // points... SymmetricTensor<2,dim> accumulated_stress; for (unsigned int q=0; @@ -2002,107 +1427,64 @@ namespace Step18 reinterpret_cast*>(cell->user_pointer())[q] .old_stress; - // ...then write the norm of the - // average to their destination: + // ...then write the norm of the average to their destination: norm_of_stress(index) = (accumulated_stress / quadrature_formula.size()).norm(); } - // And on the cells that we are not - // interested in, set the respective - // value in the vector to a bogus value - // (norms must be positive, and a large - // negative value should catch your - // eye) in order to make sure that if - // we were somehow wrong about our - // assumption that these elements would - // not appear in the output file, that - // we would find out by looking at the - // graphical output: + // And on the cells that we are not interested in, set the respective + // value in the vector to a bogus value (norms must be positive, and a + // large negative value should catch your eye) in order to make sure + // that if we were somehow wrong about our assumption that these + // elements would not appear in the output file, that we would find out + // by looking at the graphical output: else norm_of_stress(index) = -1e+20; } - // Finally attach this vector as well to - // be treated for output: + // Finally attach this vector as well to be treated for output: data_out.add_data_vector (norm_of_stress, "norm_of_stress"); - // As a last piece of data, let - // us also add the partitioning - // of the domain into subdomains - // associated with the processors - // if this is a parallel - // job. This works in the exact - // same way as in the step-17 - // program: + // As a last piece of data, let us also add the partitioning of the domain + // into subdomains associated with the processors if this is a parallel + // job. 
This works in the exact same way as in the step-17 program: std::vector partition_int (triangulation.n_active_cells()); GridTools::get_subdomain_association (triangulation, partition_int); const Vector partitioning(partition_int.begin(), partition_int.end()); data_out.add_data_vector (partitioning, "partitioning"); - // Finally, with all this data, - // we can instruct deal.II to - // munge the information and - // produce some intermediate data - // structures that contain all - // these solution and other data - // vectors: + // Finally, with all this data, we can instruct deal.II to munge the + // information and produce some intermediate data structures that contain + // all these solution and other data vectors: data_out.build_patches (); - // Now that we have generated the - // intermediate format, let us - // determine the name of the file - // we will want to write it - // to. We compose it of the - // prefix solution-, followed - // by a representation of the - // present time written as a - // fixed point number so that - // file names sort naturally: + // Now that we have generated the intermediate format, let us determine + // the name of the file we will want to write it to. We compose it of the + // prefix solution-, followed by a representation of the + // present time written as a fixed point number so that file names sort + // naturally: std::ostringstream filename; filename << "solution-"; filename << std::setfill('0'); filename.setf(std::ios::fixed, std::ios::floatfield); filename << std::setw(9) << std::setprecision(4) << present_time; - // Next, in case there are - // multiple processes working - // together, we have to generate - // different file names for the - // output of each process. In our - // case, we encode the process - // number as a three-digit - // integer, padded with - // zeros. The assertion in the - // first line of the block makes - // sure that there are less than - // 1000 processes (a very - // conservative check, but worth - // having anyway) as our scheme - // of generating process numbers - // would overflow if there were - // 1000 processes or more. Note - // that we choose to use - // AssertThrow rather than - // Assert since the number of - // processes is a variable that - // depends on input files or the - // way the process is started, - // rather than static assumptions - // in the program - // code. Therefore, it is - // inappropriate to use - // Assert that is optimized - // away in optimized mode, - // whereas here we actually can - // assume that users will run the - // largest computations with the - // most processors in optimized - // mode, and we should check our - // assumptions in this particular - // case, and not only when - // running in debug mode: + // Next, in case there are multiple processes working together, we have to + // generate different file names for the output of each process. In our + // case, we encode the process number as a three-digit integer, padded + // with zeros. The assertion in the first line of the block makes sure + // that there are less than 1000 processes (a very conservative check, but + // worth having anyway) as our scheme of generating process numbers would + // overflow if there were 1000 processes or more. Note that we choose to + // use AssertThrow rather than Assert since the + // number of processes is a variable that depends on input files or the + // way the process is started, rather than static assumptions in the + // program code. 
Therefore, it is inappropriate to use Assert + // that is optimized away in optimized mode, whereas here we actually can + // assume that users will run the largest computations with the most + // processors in optimized mode, and we should check our assumptions in + // this particular case, and not only when running in debug mode: if (n_mpi_processes != 1) { AssertThrow (n_mpi_processes < 1000, ExcNotImplemented()); @@ -2112,19 +1494,13 @@ namespace Step18 filename << std::setw(3) << this_mpi_process; } - // To the file name, attach the - // file name suffix usually used - // for the deal.II intermediate - // format. To determine it, we - // use the same function that has - // already been used in step-13: + // To the file name, attach the file name suffix usually used for the + // deal.II intermediate format. To determine it, we use the same function + // that has already been used in step-13: filename << data_out.default_suffix(DataOut::deal_II_intermediate); - // With the so-completed - // filename, let us open a file - // and write the data we have - // generated into it, using the - // intermediate format: + // With the so-completed filename, let us open a file and write the data + // we have generated into it, using the intermediate format: std::ofstream output (filename.str().c_str()); data_out.write_deal_II_intermediate (output); } @@ -2133,33 +1509,22 @@ namespace Step18 // @sect4{TopLevel::do_initial_timestep} - // This and the next function handle the - // overall structure of the first and - // following timesteps, respectively. The - // first timestep is slightly more involved - // because we want to compute it multiple - // times on successively refined meshes, - // each time starting from a clean - // state. At the end of these computations, - // in which we compute the incremental - // displacements each time, we use the last - // results obtained for the incremental - // displacements to compute the resulting - // stress updates and move the mesh - // accordingly. On this new mesh, we then - // output the solution and any additional - // data we consider important. + // This and the next function handle the overall structure of the first and + // following timesteps, respectively. The first timestep is slightly more + // involved because we want to compute it multiple times on successively + // refined meshes, each time starting from a clean state. At the end of + // these computations, in which we compute the incremental displacements + // each time, we use the last results obtained for the incremental + // displacements to compute the resulting stress updates and move the mesh + // accordingly. On this new mesh, we then output the solution and any + // additional data we consider important. // - // All this is interspersed by generating - // output to the console to update the - // person watching the screen on what is - // going on. As in step-17, the use of - // pcout instead of std::cout makes - // sure that only one of the parallel - // processes is actually writing to the - // console, without having to explicitly - // code an if-statement in each place where - // we generate output: + // All this is interspersed by generating output to the console to update + // the person watching the screen on what is going on. 
As in step-17, the + // use of pcout instead of std::cout makes sure + // that only one of the parallel processes is actually writing to the + // console, without having to explicitly code an if-statement in each place + // where we generate output: template void TopLevel::do_initial_timestep () { @@ -2210,10 +1575,8 @@ namespace Step18 // @sect4{TopLevel::do_timestep} - // Subsequent timesteps are simpler, and - // probably do not require any more - // documentation given the explanations for - // the previous function above: + // Subsequent timesteps are simpler, and probably do not require any more + // documentation given the explanations for the previous function above: template void TopLevel::do_timestep () { @@ -2239,18 +1602,14 @@ namespace Step18 // @sect4{TopLevel::refine_initial_grid} - // The following function is called when - // solving the first time step on - // successively refined meshes. After each - // iteration, it computes a refinement - // criterion, refines the mesh, and sets up - // the history variables in each quadrature - // point again to a clean state. + // The following function is called when solving the first time step on + // successively refined meshes. After each iteration, it computes a + // refinement criterion, refines the mesh, and sets up the history variables + // in each quadrature point again to a clean state. template void TopLevel::refine_initial_grid () { - // First, let each process compute error - // indicators for the cells it owns: + // First, let each process compute error indicators for the cells it owns: Vector error_per_cell (triangulation.n_active_cells()); KellyErrorEstimator::estimate (dof_handler, QGauss(2), @@ -2262,9 +1621,8 @@ namespace Step18 multithread_info.n_default_threads, this_mpi_process); - // Then set up a global vector into which - // we merge the local indicators from - // each of the %parallel processes: + // Then set up a global vector into which we merge the local indicators + // from each of the %parallel processes: const unsigned int n_local_cells = GridTools::count_cells_with_subdomain_association (triangulation, this_mpi_process); @@ -2278,8 +1636,7 @@ namespace Step18 distributed_error_per_cell(i) = error_per_cell(i); distributed_error_per_cell.compress (); - // Once we have that, copy it back into - // local copies on all processors and + // Once we have that, copy it back into local copies on all processors and // refine the mesh accordingly: error_per_cell = distributed_error_per_cell; GridRefinement::refine_and_coarsen_fixed_number (triangulation, @@ -2287,11 +1644,8 @@ namespace Step18 0.35, 0.03); triangulation.execute_coarsening_and_refinement (); - // Finally, set up quadrature - // point data again on the new - // mesh, and only on those cells - // that we have determined to be - // ours: + // Finally, set up quadrature point data again on the new mesh, and only + // on those cells that we have determined to be ours: GridTools::partition_triangulation (n_mpi_processes, triangulation); setup_quadrature_point_history (); } @@ -2300,136 +1654,81 @@ namespace Step18 // @sect4{TopLevel::move_mesh} - // At the end of each time step, we move - // the nodes of the mesh according to the - // incremental displacements computed in - // this time step. To do this, we keep a - // vector of flags that indicate for each - // vertex whether we have already moved it - // around, and then loop over all cells and - // move those vertices of the cell that - // have not been moved yet. 
It is worth - // noting that it does not matter from - // which of the cells adjacent to a vertex - // we move this vertex: since we compute - // the displacement using a continuous - // finite element, the displacement field - // is continuous as well and we can compute - // the displacement of a given vertex from - // each of the adjacent cells. We only have - // to make sure that we move each node - // exactly once, which is why we keep the - // vector of flags. + // At the end of each time step, we move the nodes of the mesh according to + // the incremental displacements computed in this time step. To do this, we + // keep a vector of flags that indicate for each vertex whether we have + // already moved it around, and then loop over all cells and move those + // vertices of the cell that have not been moved yet. It is worth noting + // that it does not matter from which of the cells adjacent to a vertex we + // move this vertex: since we compute the displacement using a continuous + // finite element, the displacement field is continuous as well and we can + // compute the displacement of a given vertex from each of the adjacent + // cells. We only have to make sure that we move each node exactly once, + // which is why we keep the vector of flags. // - // There are two noteworthy things in this - // function. First, how we get the - // displacement field at a given vertex - // using the - // cell-@>vertex_dof_index(v,d) function - // that returns the index of the dth - // degree of freedom at vertex v of the - // given cell. In the present case, - // displacement in the k-th coordinate - // direction corresonds to the kth - // component of the finite element. Using a - // function like this bears a certain risk, - // because it uses knowledge of the order - // of elements that we have taken together - // for this program in the FESystem - // element. If we decided to add an - // additional variable, for example a - // pressure variable for stabilization, and - // happened to insert it as the first - // variable of the element, then the - // computation below will start to produce - // non-sensical results. In addition, this - // computation rests on other assumptions: - // first, that the element we use has, - // indeed, degrees of freedom that are - // associated with vertices. This is indeed - // the case for the present Q1 element, as - // would be for all Qp elements of - // polynomial order p. However, it - // would not hold for discontinuous - // elements, or elements for mixed - // formulations. Secondly, it also rests on - // the assumption that the displacement at - // a vertex is determined solely by the - // value of the degree of freedom - // associated with this vertex; in other - // words, all shape functions corresponding - // to other degrees of freedom are zero at - // this particular vertex. Again, this is - // the case for the present element, but is - // not so for all elements that are - // presently available in deal.II. Despite - // its risks, we choose to use this way in - // order to present a way to query - // individual degrees of freedom associated - // with vertices. + // There are two noteworthy things in this function. First, how we get the + // displacement field at a given vertex using the + // cell-@>vertex_dof_index(v,d) function that returns the index + // of the dth degree of freedom at vertex v of the + // given cell. In the present case, displacement in the k-th coordinate + // direction corresonds to the kth component of the finite element. 
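Spelled out, the loop just described might look roughly like the sketch below (the real implementation appears further down in the file; triangulation, dof_handler and incremental_displacement are the member variables used elsewhere in this program):

std::vector<bool> vertex_touched (triangulation.n_vertices(), false);

for (typename DoFHandler<dim>::active_cell_iterator
       cell = dof_handler.begin_active ();
     cell != dof_handler.end(); ++cell)
  for (unsigned int v=0; v<GeometryInfo<dim>::vertices_per_cell; ++v)
    if (vertex_touched[cell->vertex_index(v)] == false)
      {
        // Mark this vertex as handled so that we move it exactly once:
        vertex_touched[cell->vertex_index(v)] = true;

        // Gather the dim displacement components stored in the degrees
        // of freedom associated with this vertex...
        Point<dim> vertex_displacement;
        for (unsigned int d=0; d<dim; ++d)
          vertex_displacement[d]
            = incremental_displacement (cell->vertex_dof_index(v,d));

        // ...and shift the (writable) vertex location by that amount:
        cell->vertex(v) += vertex_displacement;
      }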
Using a + // function like this bears a certain risk, because it uses knowledge of the + // order of elements that we have taken together for this program in the + // FESystem element. If we decided to add an additional + // variable, for example a pressure variable for stabilization, and happened + // to insert it as the first variable of the element, then the computation + // below will start to produce non-sensical results. In addition, this + // computation rests on other assumptions: first, that the element we use + // has, indeed, degrees of freedom that are associated with vertices. This + // is indeed the case for the present Q1 element, as would be for all Qp + // elements of polynomial order p. However, it would not hold + // for discontinuous elements, or elements for mixed formulations. Secondly, + // it also rests on the assumption that the displacement at a vertex is + // determined solely by the value of the degree of freedom associated with + // this vertex; in other words, all shape functions corresponding to other + // degrees of freedom are zero at this particular vertex. Again, this is the + // case for the present element, but is not so for all elements that are + // presently available in deal.II. Despite its risks, we choose to use this + // way in order to present a way to query individual degrees of freedom + // associated with vertices. // - // In this context, it is instructive to - // point out what a more general way would - // be. For general finite elements, the way - // to go would be to take a quadrature - // formula with the quadrature points in - // the vertices of a cell. The QTrapez - // formula for the trapezoidal rule does - // exactly this. With this quadrature - // formula, we would then initialize an - // FEValues object in each cell, and - // use the - // FEValues::get_function_values - // function to obtain the values of the - // solution function in the quadrature - // points, i.e. the vertices of the - // cell. These are the only values that we - // really need, i.e. we are not at all - // interested in the weights (or the - // JxW values) associated with this - // particular quadrature formula, and this - // can be specified as the last argument in - // the constructor to FEValues. The - // only point of minor inconvenience in - // this scheme is that we have to figure - // out which quadrature point corresponds - // to the vertex we consider at present, as - // they may or may not be ordered in the - // same order. + // In this context, it is instructive to point out what a more general way + // would be. For general finite elements, the way to go would be to take a + // quadrature formula with the quadrature points in the vertices of a + // cell. The QTrapez formula for the trapezoidal rule does + // exactly this. With this quadrature formula, we would then initialize an + // FEValues object in each cell, and use the + // FEValues::get_function_values function to obtain the values + // of the solution function in the quadrature points, i.e. the vertices of + // the cell. These are the only values that we really need, i.e. we are not + // at all interested in the weights (or the JxW values) + // associated with this particular quadrature formula, and this can be + // specified as the last argument in the constructor to + // FEValues. 
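For illustration, such a vertex-quadrature based evaluation might be sketched as follows; this is hypothetical code, not part of step-18, and it reuses the fe, dof_handler and incremental_displacement members of the surrounding program:

QTrapez<dim>  vertex_quadrature;
FEValues<dim> fe_values (fe, vertex_quadrature, update_values);

std::vector<Vector<double> > vertex_values (vertex_quadrature.size(),
                                            Vector<double>(dim));

for (typename DoFHandler<dim>::active_cell_iterator
       cell = dof_handler.begin_active ();
     cell != dof_handler.end(); ++cell)
  {
    fe_values.reinit (cell);
    fe_values.get_function_values (incremental_displacement, vertex_values);
    // vertex_values[q] now holds the dim displacement components at the
    // q-th quadrature point, i.e. at one of the vertices of this cell;
    // which quadrature point belongs to which vertex still needs to be
    // matched up, as discussed next.
  }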
The only point of minor inconvenience in this + // scheme is that we have to figure out which quadrature point corresponds + // to the vertex we consider at present, as they may or may not be ordered + // in the same order. // - // Another point worth explaining about - // this short function is the way in which - // the triangulation class exports - // information about its vertices: through - // the Triangulation::n_vertices - // function, it advertises how many - // vertices there are in the - // triangulation. Not all of them are - // actually in use all the time -- some are - // left-overs from cells that have been - // coarsened previously and remain in - // existence since deal.II never changes - // the number of a vertex once it has come - // into existence, even if vertices with - // lower number go away. Secondly, the - // location returned by cell-@>vertex(v) - // is not only a read-only object of type - // Point@, but in fact a reference - // that can be written to. This allows to - // move around the nodes of a mesh with - // relative ease, but it is worth pointing - // out that it is the responsibility of an - // application program using this feature - // to make sure that the resulting cells - // are still useful, i.e. are not distorted - // so much that the cell is degenerated - // (indicated, for example, by negative - // Jacobians). Note that we do not have any - // provisions in this function to actually - // ensure this, we just have faith. + // Another point worth explaining about this short function is the way in + // which the triangulation class exports information about its vertices: + // through the Triangulation::n_vertices function, it + // advertises how many vertices there are in the triangulation. Not all of + // them are actually in use all the time -- some are left-overs from cells + // that have been coarsened previously and remain in existence since deal.II + // never changes the number of a vertex once it has come into existence, + // even if vertices with lower number go away. Secondly, the location + // returned by cell-@>vertex(v) is not only a read-only object + // of type Point@, but in fact a reference that can be + // written to. This allows to move around the nodes of a mesh with relative + // ease, but it is worth pointing out that it is the responsibility of an + // application program using this feature to make sure that the resulting + // cells are still useful, i.e. are not distorted so much that the cell is + // degenerated (indicated, for example, by negative Jacobians). Note that we + // do not have any provisions in this function to actually ensure this, we + // just have faith. // - // After this lengthy introduction, here - // are the full 20 or so lines of code: + // After this lengthy introduction, here are the full 20 or so lines of + // code: template void TopLevel::move_mesh () { @@ -2457,52 +1756,32 @@ namespace Step18 // @sect4{TopLevel::setup_quadrature_point_history} - // At the beginning of our computations, we - // needed to set up initial values of the - // history variables, such as the existing - // stresses in the material, that we store - // in each quadrature point. As mentioned - // above, we use the user_pointer for - // this that is available in each cell. + // At the beginning of our computations, we needed to set up initial values + // of the history variables, such as the existing stresses in the material, + // that we store in each quadrature point. 
As mentioned above, we use the + // user_pointer for this that is available in each cell. // - // To put this into larger perspective, we - // note that if we had previously available - // stresses in our model (which we assume - // do not exist for the purpose of this - // program), then we would need to - // interpolate the field of pre-existing - // stresses to the quadrature - // points. Likewise, if we were to simulate - // elasto-plastic materials with - // hardening/softening, then we would have - // to store additional history variables - // like the present yield stress of the - // accumulated plastic strains in each - // quadrature points. Pre-existing - // hardening or weakening would then be - // implemented by interpolating these - // variables in the present function as - // well. + // To put this into larger perspective, we note that if we had previously + // available stresses in our model (which we assume do not exist for the + // purpose of this program), then we would need to interpolate the field of + // pre-existing stresses to the quadrature points. Likewise, if we were to + // simulate elasto-plastic materials with hardening/softening, then we would + // have to store additional history variables like the present yield stress + // of the accumulated plastic strains in each quadrature + // points. Pre-existing hardening or weakening would then be implemented by + // interpolating these variables in the present function as well. template void TopLevel::setup_quadrature_point_history () { - // What we need to do here is to first - // count how many quadrature points are - // within the responsibility of this - // processor. This, of course, equals the - // number of cells that belong to this - // processor times the number of - // quadrature points our quadrature - // formula has on each cell. + // What we need to do here is to first count how many quadrature points + // are within the responsibility of this processor. This, of course, + // equals the number of cells that belong to this processor times the + // number of quadrature points our quadrature formula has on each cell. // - // For good measure, we also set all user - // pointers of all cells, whether ours of - // not, to the null pointer. This way, if - // we ever access the user pointer of a - // cell which we should not have - // accessed, a segmentation fault will - // let us know that this should not have - // happened: + // For good measure, we also set all user pointers of all cells, whether + // ours or not, to the null pointer. This way, if we ever access the user + // pointer of a cell which we should not have accessed, a segmentation + // fault will let us know that this should not have happened: unsigned int our_cells = 0; for (typename Triangulation::active_cell_iterator cell = triangulation.begin_active(); @@ -2512,29 +1791,19 @@ namespace Step18 triangulation.clear_user_data(); - // Next, allocate as many quadrature - // objects as we need. 
Since the + // resize function does not actually shrink the amount of + // allocated memory if the requested new size is smaller than the old + // size, we resort to a trick to first free all memory, and then + // reallocate it: we declare an empty vector as a temporary variable and + // then swap the contents of the old vector and this temporary // variable. This makes sure that the - // quadrature_point_history is now - // really empty, and we can let the - // temporary variable that now holds the - // previous contents of the vector go out - // of scope and be destroyed. In the next - // step. we can then re-allocate as many - // elements as we need, with the vector - // default-initializing the - // PointHistory objects, which - // includes setting the stress variables - // to zero. + // quadrature_point_history is now really empty, and we can + // let the temporary variable that now holds the previous contents of the + // vector go out of scope and be destroyed. In the next step. we can then + // re-allocate as many elements as we need, with the vector + // default-initializing the PointHistory objects, which + // includes setting the stress variables to zero. { std::vector > tmp; tmp.swap (quadrature_point_history); @@ -2542,12 +1811,10 @@ namespace Step18 quadrature_point_history.resize (our_cells * quadrature_formula.size()); - // Finally loop over all cells again and - // set the user pointers from the cells - // that belong to the present processor - // to point to the first quadrature point - // objects corresponding to this cell in - // the vector of such objects: + // Finally loop over all cells again and set the user pointers from the + // cells that belong to the present processor to point to the first + // quadrature point objects corresponding to this cell in the vector of + // such objects: unsigned int history_index = 0; for (typename Triangulation::active_cell_iterator cell = triangulation.begin_active(); @@ -2558,21 +1825,15 @@ namespace Step18 history_index += quadrature_formula.size(); } - // At the end, for good measure make sure - // that our count of elements was correct - // and that we have both used up all - // objects we allocated previously, and - // not point to any objects beyond the - // end of the vector. Such defensive - // programming strategies are always good - // checks to avoid accidental errors and - // to guard against future changes to - // this function that forget to update - // all uses of a variable at the same - // time. Recall that constructs using the - // Assert macro are optimized away in - // optimized mode, so do not affect the - // run time of optimized runs: + // At the end, for good measure make sure that our count of elements was + // correct and that we have both used up all objects we allocated + // previously, and not point to any objects beyond the end of the + // vector. Such defensive programming strategies are always good checks to + // avoid accidental errors and to guard against future changes to this + // function that forget to update all uses of a variable at the same + // time. 
Recall that constructs using the Assert macro are + // optimized away in optimized mode, so do not affect the run time of + // optimized runs: Assert (history_index == quadrature_point_history.size(), ExcInternalError()); } @@ -2582,125 +1843,72 @@ namespace Step18 // @sect4{TopLevel::update_quadrature_point_history} - // At the end of each time step, we - // should have computed an - // incremental displacement update - // so that the material in its new - // configuration accommodates for - // the difference between the - // external body and boundary - // forces applied during this time - // step minus the forces exerted - // through pre-existing internal - // stresses. In order to have the - // pre-existing stresses available - // at the next time step, we - // therefore have to update the - // pre-existing stresses with the - // stresses due to the incremental - // displacement computed during the - // present time step. Ideally, the - // resulting sum of internal - // stresses would exactly counter - // all external forces. Indeed, a - // simple experiment can make sure - // that this is so: if we choose - // boundary conditions and body - // forces to be time independent, - // then the forcing terms (the sum - // of external forces and internal - // stresses) should be exactly - // zero. If you make this - // experiment, you will realize - // from the output of the norm of - // the right hand side in each time - // step that this is almost the - // case: it is not exactly zero, - // since in the first time step the - // incremental displacement and - // stress updates were computed - // relative to the undeformed mesh, - // which was then deformed. In the - // second time step, we again - // compute displacement and stress - // updates, but this time in the - // deformed mesh -- there, the - // resulting updates are very small - // but not quite zero. This can be - // iterated, and in each such - // iteration the residual, i.e. the - // norm of the right hand side - // vector, is reduced; if one makes - // this little experiment, one - // realizes that the norm of this - // residual decays exponentially - // with the number of iterations, - // and after an initial very rapid - // decline is reduced by roughly a - // factor of about 3.5 in each - // iteration (for one testcase I - // looked at, other testcases, and - // other numbers of unknowns change - // the factor, but not the - // exponential decay). - - // In a sense, this can then be considered - // as a quasi-timestepping scheme to - // resolve the nonlinear problem of solving - // large-deformation elasticity on a mesh - // that is moved along in a Lagrangian - // manner. + // At the end of each time step, we should have computed an incremental + // displacement update so that the material in its new configuration + // accommodates for the difference between the external body and boundary + // forces applied during this time step minus the forces exerted through + // pre-existing internal stresses. In order to have the pre-existing + // stresses available at the next time step, we therefore have to update the + // pre-existing stresses with the stresses due to the incremental + // displacement computed during the present time step. Ideally, the + // resulting sum of internal stresses would exactly counter all external + // forces. 
Indeed, a simple experiment can make sure that this is so: if we + // choose boundary conditions and body forces to be time independent, then + // the forcing terms (the sum of external forces and internal stresses) + // should be exactly zero. If you make this experiment, you will realize + // from the output of the norm of the right hand side in each time step that + // this is almost the case: it is not exactly zero, since in the first time + // step the incremental displacement and stress updates were computed + // relative to the undeformed mesh, which was then deformed. In the second + // time step, we again compute displacement and stress updates, but this + // time in the deformed mesh -- there, the resulting updates are very small + // but not quite zero. This can be iterated, and in each such iteration the + // residual, i.e. the norm of the right hand side vector, is reduced; if one + // makes this little experiment, one realizes that the norm of this residual + // decays exponentially with the number of iterations, and after an initial + // very rapid decline is reduced by roughly a factor of about 3.5 in each + // iteration (for one testcase I looked at, other testcases, and other + // numbers of unknowns change the factor, but not the exponential decay). + + // In a sense, this can then be considered as a quasi-timestepping scheme to + // resolve the nonlinear problem of solving large-deformation elasticity on + // a mesh that is moved along in a Lagrangian manner. // - // Another complication is that the - // existing (old) stresses are defined on - // the old mesh, which we will move around - // after updating the stresses. If this - // mesh update involves rotations of the - // cell, then we need to also rotate the - // updated stress, since it was computed - // relative to the coordinate system of the - // old cell. + // Another complication is that the existing (old) stresses are defined on + // the old mesh, which we will move around after updating the stresses. If + // this mesh update involves rotations of the cell, then we need to also + // rotate the updated stress, since it was computed relative to the + // coordinate system of the old cell. // - // Thus, what we need is the following: on - // each cell which the present processor - // owns, we need to extract the old stress - // from the data stored with each - // quadrature point, compute the stress - // update, add the two together, and then - // rotate the result together with the - // incremental rotation computed from the - // incremental displacement at the present - // quadrature point. We will detail these - // steps below: + // Thus, what we need is the following: on each cell which the present + // processor owns, we need to extract the old stress from the data stored + // with each quadrature point, compute the stress update, add the two + // together, and then rotate the result together with the incremental + // rotation computed from the incremental displacement at the present + // quadrature point. 
We will detail these steps below: template void TopLevel::update_quadrature_point_history () { - // First, set up an FEValues object - // by which we will evaluate the - // incremental displacements and the - // gradients thereof at the quadrature - // points, together with a vector that - // will hold this information: + // First, set up an FEValues object by which we will evaluate + // the incremental displacements and the gradients thereof at the + // quadrature points, together with a vector that will hold this + // information: FEValues fe_values (fe, quadrature_formula, update_values | update_gradients); std::vector > > displacement_increment_grads (quadrature_formula.size(), std::vector >(dim)); - // Then loop over all cells and do the - // job in the cells that belong to our + // Then loop over all cells and do the job in the cells that belong to our // subdomain: for (typename DoFHandler::active_cell_iterator cell = dof_handler.begin_active(); cell != dof_handler.end(); ++cell) if (cell->subdomain_id() == this_mpi_process) { - // Next, get a pointer to the - // quadrature point history data - // local to the present cell, and, - // as a defensive measure, make - // sure that this pointer is within - // the bounds of the global array: + // Next, get a pointer to the quadrature point history data local to + // the present cell, and, as a defensive measure, make sure that + // this pointer is within the bounds of the global array: PointHistory *local_quadrature_points_history = reinterpret_cast *>(cell->user_pointer()); Assert (local_quadrature_points_history >= @@ -2710,115 +1918,67 @@ namespace Step18 &quadrature_point_history.back(), ExcInternalError()); - // Then initialize the FEValues - // object on the present cell, and - // extract the gradients of the - // displacement at the quadrature - // points for later computation of - // the strains + // Then initialize the FEValues object on the present + // cell, and extract the gradients of the displacement at the + // quadrature points for later computation of the strains fe_values.reinit (cell); fe_values.get_function_grads (incremental_displacement, displacement_increment_grads); - // Then loop over the quadrature - // points of this cell: + // Then loop over the quadrature points of this cell: for (unsigned int q=0; q new_stress = (local_quadrature_points_history[q].old_stress + (stress_strain_tensor * get_strain (displacement_increment_grads[q]))); - // Finally, we have to rotate - // the result. For this, we - // first have to compute a - // rotation matrix at the - // present quadrature point - // from the incremental - // displacements. In fact, it - // can be computed from the - // gradients, and we already - // have a function for that - // purpose: + // Finally, we have to rotate the result. For this, we first + // have to compute a rotation matrix at the present quadrature + // point from the incremental displacements. In fact, it can be + // computed from the gradients, and we already have a function + // for that purpose: const Tensor<2,dim> rotation = get_rotation_matrix (displacement_increment_grads[q]); - // Note that the result, a - // rotation matrix, is in - // general an antisymmetric - // tensor of rank 2, so we must - // store it as a full tensor. 
- - // With this rotation matrix, - // we can compute the rotated - // tensor by contraction from - // the left and right, after we - // expand the symmetric tensor - // new_stress into a full - // tensor: + // Note that the result, a rotation matrix, is in general an + // antisymmetric tensor of rank 2, so we must store it as a full + // tensor. + + // With this rotation matrix, we can compute the rotated tensor + // by contraction from the left and right, after we expand the + // symmetric tensor new_stress into a full tensor: const SymmetricTensor<2,dim> rotated_new_stress = symmetrize(transpose(rotation) * static_cast >(new_stress) * rotation); - // Note that while the - // result of the - // multiplication of - // these three matrices - // should be symmetric, - // it is not due to - // floating point round - // off: we get an - // asymmetry on the - // order of 1e-16 of - // the off-diagonal - // elements of the - // result. When - // assigning the result - // to a - // SymmetricTensor, - // the constuctor of - // that class checks - // the symmetry and - // realizes that it - // isn't exactly - // symmetric; it will - // then raise an - // exception. To avoid - // that, we explicitly - // symmetrize the - // result to make it - // exactly symmetric. - - // The result of all these - // operations is then written - // back into the original - // place: + // Note that while the result of the multiplication of these + // three matrices should be symmetric, it is not due to floating + // point round off: we get an asymmetry on the order of 1e-16 of + // the off-diagonal elements of the result. When assigning the + // result to a SymmetricTensor, the constructor of + // that class checks the symmetry and realizes that it isn't + // exactly symmetric; it will then raise an exception. To avoid + // that, we explicitly symmetrize the result to make it exactly + // symmetric. + + // The result of all these operations is then written back into + // the original place: local_quadrature_points_history[q].old_stress = rotated_new_stress; } } } - // This ends the project specific - // namespace - // Step18. The - // rest is as usual and as already - // shown in step-17: A main() - // function that initializes and - // terminates PETSc, calls the - // classes that do the actual work, - // and makes sure that we catch all - // exceptions that propagate up to - // this point: + // This ends the project specific namespace Step18. The rest is + // as usual and as already shown in step-17: A main() function + // that initializes and terminates PETSc, calls the classes that do the + // actual work, and makes sure that we catch all exceptions that propagate + // up to this point: } diff --git a/deal.II/examples/step-19/step-19.cc b/deal.II/examples/step-19/step-19.cc index c0a2f21da0..8c065bccd4 100644 --- a/deal.II/examples/step-19/step-19.cc +++ b/deal.II/examples/step-19/step-19.cc @@ -12,12 +12,10 @@ // @sect4{Preliminaries} -// As usual, we start with include -// files. This program is content with really -// few of these -- we only need two files -// from the library (one for input and output -// of graphical data, one for parameter -// handling), and a few C++ standard headers: +// As usual, we start with include files. 
This program is content with really +// few of these -- we only need two files from the library (one for input and +// output of graphical data, one for parameter handling), and a few C++ +// standard headers: #include #include @@ -25,61 +23,42 @@ #include #include -// As mentioned in the first few tutorial -// programs, all names in deal.II are -// declared in a namespace -// dealii. To make using these -// function and class names simpler, we -// import the entire content of that -// namespace into the global scope. As done -// for all previous programs already, we'll -// also place everything we do here into a -// namespace of its own: +// As mentioned in the first few tutorial programs, all names in deal.II are +// declared in a namespace dealii. To make using these function +// and class names simpler, we import the entire content of that namespace +// into the global scope. As done for all previous programs already, we'll +// also place everything we do here into a namespace of its own: namespace Step19 { using namespace dealii; - // Before we start with the actual program, - // let us declare a few global variables that - // will be used to hold the parameters this - // program is going to use. Usually, global - // variables are frowned upon for a good - // reason, but since we have such a short - // program here that does only a single - // thing, we may stray from our usual line - // and make these variables global, rather - // than passing them around to all functions - // or encapsulating them into a class. + // Before we start with the actual program, let us declare a few global + // variables that will be used to hold the parameters this program is going + // to use. Usually, global variables are frowned upon for a good reason, but + // since we have such a short program here that does only a single thing, we + // may stray from our usual line and make these variables global, rather + // than passing them around to all functions or encapsulating them into a + // class. // - // The variables we have are: first, an - // object that will hold parameters of - // operation, such as output format (unless - // given on the command line); second, the - // names of input and output files; and third, - // the format in which the output is to be - // written: + // The variables we have are: first, an object that will hold parameters of + // operation, such as output format (unless given on the command line); + // second, the names of input and output files; and third, the format in + // which the output is to be written: ParameterHandler prm; std::vector input_file_names; std::string output_file; std::string output_format; - // All the stuff this program does can be - // done from here on. As described in the - // introduction, what we have to do is - // declare what values the parameter file can - // have, parse the command line, read the - // input files, then write the output. We - // will do this in this order of operation, - // but before that let us declare a function - // that prints a message about how this - // program is to be used; the function first - // prints a general message, and then goes on - // to list the parameters that are allowed in - // the parameter file (the - // ParameterHandler class has a function - // to do exactly this; see the results - // section for what it prints): + // All the stuff this program does can be done from here on. 
As described in + // the introduction, what we have to do is declare what values the parameter + // file can have, parse the command line, read the input files, then write + // the output. We will do this in this order of operation, but before that + // let us declare a function that prints a message about how this program is + // to be used; the function first prints a general message, and then goes on + // to list the parameters that are allowed in the parameter file (the + // ParameterHandler class has a function to do exactly this; + // see the results section for what it prints): void print_usage_message () { @@ -108,123 +87,76 @@ namespace Step19 // @sect4{Declaring parameters for the input file} - // The second function is used to declare the - // parameters this program accepts from the - // input file. While we don't actually take - // many parameters from the input file except - // for, possibly, the output file name and - // format, we nevertheless want to show how - // to work with parameter files. + // The second function is used to declare the parameters this program + // accepts from the input file. While we don't actually take many parameters + // from the input file except for, possibly, the output file name and + // format, we nevertheless want to show how to work with parameter files. // - // In short, the ParameterHandler class - // works as follows: one declares the entries - // of parameters that can be given in input - // files together, and later on one can read - // an input file in which these parameters - // are set to their values. If a parameter is - // not listed in the input file, the default - // value specified in the declaration of that - // parameter is used. After that, the program - // can query the values assigned to certain - // parameters from the ParameterHandler - // object. + // In short, the ParameterHandler class works as follows: one + // declares the entries of parameters that can be given in input files + // together, and later on one can read an input file in which these + // parameters are set to their values. If a parameter is not listed in the + // input file, the default value specified in the declaration of that + // parameter is used. After that, the program can query the values assigned + // to certain parameters from the ParameterHandler object. // // Declaring parameters can be done using the - // ParameterHandler::declare_entry - // function. It's arguments are the name of a - // parameter, a default value (given as a - // string, even if the parameter is numeric - // in nature, and thirdly an object that - // describes constraints on values that may - // be passed to this parameter. In the - // example below, we use an object of type - // Patterns::Anything to denote that - // there are no constraints on file names - // (this is, of course, not true -- the - // operating system does have constraints, - // but from an application standpoint, almost - // all names are valid). In other cases, one - // may, for example, use - // Patterns::Integer to make sure that - // only parameters are accepted that can be - // interpreted as integer values (it is also - // possible to specify bounds for integer - // values, and all values outside this range - // are rejected), Patterns::Double for - // floating point values, classes that make - // sure that the given parameter value is a - // comma separated list of things, etc. Take - // a look at the Patterns namespace to - // see what is possible. + // ParameterHandler::declare_entry function. 
Its arguments are + // the name of a parameter, a default value (given as a string, even if the + // parameter is numeric in nature), and thirdly an object that describes + // constraints on values that may be passed to this parameter. In the + // example below, we use an object of type Patterns::Anything + // to denote that there are no constraints on file names (this is, of + // course, not true -- the operating system does have constraints, but from + // an application standpoint, almost all names are valid). In other cases, + // one may, for example, use Patterns::Integer to make sure + // that only parameters are accepted that can be interpreted as integer + // values (it is also possible to specify bounds for integer values, and all + // values outside this range are rejected), Patterns::Double + // for floating point values, classes that make sure that the given + // parameter value is a comma separated list of things, etc. Take a look at + // the Patterns namespace to see what is possible. // - // The fourth argument to declare_entry - // is a help string that can be printed to - // document what this parameter is meant to - // be used for and other information you may - // consider important when declaring this - // parameter. The default value of this - // fourth argument is the empty string. + // The fourth argument to declare_entry is a help string that + // can be printed to document what this parameter is meant to be used for + // and other information you may consider important when declaring this + // parameter. The default value of this fourth argument is the empty string. // - // I always wanted to have an example program - // describing the ParameterHandler class, - // because it is so particularly useful. It - // would have been useful in a number of - // previous example programs (for example, in - // order to let the tolerance for linear - // solvers, or the number of refinement steps - // be determined by a run-time parameter, - // rather than hard-coding them into the - // program), but it turned out that trying to - // explain this class there would have - // overloaded them with things that would - // have distracted from the main - // purpose. However, while writing this - // program, I realized that there aren't all - // that many parameters this program can - // usefully ask for, or better, it turned - // out: declaring and querying these - // parameters was already done centralized in - // one place of the libray, namely the - // DataOutInterface class that handles - // exactly this -- managing parameters for - // input and output. + // I always wanted to have an example program describing the + // ParameterHandler class, because it is so particularly + // useful. It would have been useful in a number of previous example + // programs (for example, in order to let the tolerance for linear solvers, + // or the number of refinement steps be determined by a run-time parameter, + // rather than hard-coding them into the program), but it turned out that + // trying to explain this class there would have overloaded them with things + // that would have distracted from the main purpose. However, while writing + // this program, I realized that there aren't all that many parameters this + // program can usefully ask for, or better, it turned out: declaring and + // querying these parameters was already done centralized in one place of + // the library, namely the DataOutInterface class that handles + // exactly this -- managing parameters for input and output. 
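As a small, self-contained illustration of the declare/read/query cycle described above -- with a made-up parameter file name and made-up entries, not the ones this program uses -- consider the following sketch:

#include <deal.II/base/parameter_handler.h>

#include <fstream>
#include <iostream>
#include <string>

using namespace dealii;

void read_parameters_sketch ()
{
  ParameterHandler prm;

  prm.declare_entry ("Output file", "out.vtk",
                     Patterns::Anything(),
                     "Name of the file to which output is written");
  prm.declare_entry ("Refinement steps", "3",
                     Patterns::Integer (0, 10),
                     "How often the mesh is to be refined globally");

  // Read a parameter file; entries not set in the file keep the default
  // values declared above:
  std::ifstream parameter_file ("parameters.prm");
  prm.read_input (parameter_file);

  // Query the (possibly overridden) values afterwards:
  const std::string  output_file = prm.get ("Output file");
  const unsigned int n_steps     = prm.get_integer ("Refinement steps");
  std::cout << output_file << " " << n_steps << std::endl;
}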
// - // So the second function call in this - // function is to let the - // DataOutInterface declare a good number - // of parameters that control everything from - // the output format to what kind of output - // should be generated if output is written - // in a specific graphical format. For - // example, when writing data in encapsulated - // postscript (EPS) format, the result is - // just a 2d projection, not data that can be - // viewed and rotated with a - // viewer. Therefore, one has to choose the - // viewing angle and a number of other - // options up front, when output is - // generated, rather than playing around with - // them later on. The call to - // DataOutInterface::declare_parameters - // declares entries that allow to specify - // them in the parameter input file during - // run-time. If the parameter file does not - // contain entries for them, defaults are - // taken. + // So the second function call in this function is to let the + // DataOutInterface declare a good number of parameters that + // control everything from the output format to what kind of output should + // be generated if output is written in a specific graphical format. For + // example, when writing data in encapsulated postscript (EPS) format, the + // result is just a 2d projection, not data that can be viewed and rotated + // with a viewer. Therefore, one has to choose the viewing angle and a + // number of other options up front, when output is generated, rather than + // playing around with them later on. The call to + // DataOutInterface::declare_parameters declares entries that + // allow to specify them in the parameter input file during run-time. If the + // parameter file does not contain entries for them, defaults are taken. // - // As a final note: DataOutInterface is a - // template, because it is usually used to - // write output for a specific space - // dimension. However, this program is - // supposed to be used for all dimensions at - // the same time, so we don't know at compile - // time what the right dimension is when - // specifying the template - // parameter. Fortunately, declaring - // parameters is something that is space - // dimension independent, so we can just pick - // one arbitrarily. We pick 1, but it - // could have been any other number as well. + // As a final note: DataOutInterface is a template, because it + // is usually used to write output for a specific space dimension. However, + // this program is supposed to be used for all dimensions at the same time, + // so we don't know at compile time what the right dimension is when + // specifying the template parameter. Fortunately, declaring parameters is + // something that is space dimension independent, so we can just pick one + // arbitrarily. We pick 1, but it could have been any other + // number as well. void declare_parameters () { prm.declare_entry ("Output file", "", @@ -233,44 +165,30 @@ namespace Step19 DataOutInterface<1>::declare_parameters (prm); - // Since everything that this program can - // usefully request in terms of input - // parameters is already handled by now, - // let us nevertheless show how to use - // input parameters in other - // circumstances. 
First, parameters are - // like files in a directory tree: they can - // be in the top-level directory, but you - // can also group them into subdirectories - // to make it easier to find them or to be - // able to use the same parameter name in + // Since everything that this program can usefully request in terms of + // input parameters is already handled by now, let us nevertheless show + // how to use input parameters in other circumstances. First, parameters + // are like files in a directory tree: they can be in the top-level + // directory, but you can also group them into subdirectories to make it + // easier to find them or to be able to use the same parameter name in // different contexts. // - // Let us first declare a dummy parameter - // in the top-level section; we assume that - // it will denote the number of iterations, - // and that useful numbers of iterations - // that a user should be able to specify - // are in the range 1...1000, with a - // default value of 42: + // Let us first declare a dummy parameter in the top-level section; we + // assume that it will denote the number of iterations, and that useful + // numbers of iterations that a user should be able to specify are in the + // range 1...1000, with a default value of 42: prm.declare_entry ("Dummy iterations", "42", Patterns::Integer (1,1000), "A dummy parameter asking for an integer"); - // Next, let us declare a sub-section (the - // equivalent to a subdirectory). When - // entered, all following parameter - // declarations will be within this - // subsection. To also visually group these - // declarations with the subsection name, I - // like to use curly braces to force my - // editor to indent everything that goes - // into this sub-section by one level of - // indentation. In this sub-section, we - // shall have two entries, one that takes a - // boolean parameter and one that takes a - // selection list of values, separated by - // the '|' character: + // Next, let us declare a sub-section (the equivalent to a + // subdirectory). When entered, all following parameter declarations will + // be within this subsection. To also visually group these declarations + // with the subsection name, I like to use curly braces to force my editor + // to indent everything that goes into this sub-section by one level of + // indentation. In this sub-section, we shall have two entries, one that + // takes a boolean parameter and one that takes a selection list of + // values, separated by the '|' character: prm.enter_subsection ("Dummy subsection"); { prm.declare_entry ("Dummy generate output", "true", @@ -284,37 +202,26 @@ namespace Step19 "set of values"); } prm.leave_subsection (); - // After this, we have left the subsection - // again. You should have gotten the idea - // by now how one can nest subsections to - // separate parameters. There are a number - // of other possible patterns describing - // possible values of parameters; in all - // cases, if you try to pass a parameter to - // the program that does not match the - // expectations of the pattern, it will - // reject the parameter file and ask you to - // fix it. After all, it does not make much - // sense if you had an entry that contained - // the entry "red" for the parameter - // "Generate output". + // After this, we have left the subsection again. You should have gotten + // the idea by now how one can nest subsections to separate + // parameters. 
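For reference, an input file addressing the entries just declared would use ParameterHandler's set / subsection / end text syntax. The sketch below shows the declaration, the corresponding file contents (quoted in a comment), and how the value is read back afterwards; the Patterns::Bool pattern and the documentation string are assumptions, since the full declare_entry call is not visible here:

#include <deal.II/base/parameter_handler.h>

#include <iostream>

using namespace dealii;

int main ()
{
  ParameterHandler prm;

  // A top-level entry and one entry inside a subsection, as above:
  prm.declare_entry ("Dummy iterations", "42",
                     Patterns::Integer (1, 1000),
                     "A dummy parameter asking for an integer");
  prm.enter_subsection ("Dummy subsection");
  {
    prm.declare_entry ("Dummy generate output", "true",
                       Patterns::Bool (),
                       "A flag deciding whether any output is generated");
  }
  prm.leave_subsection ();

  // A matching input file would look like this:
  //
  //   set Dummy iterations = 100
  //
  //   subsection Dummy subsection
  //     set Dummy generate output = false
  //   end
  //
  // After such a file has been parsed (or, as here, using the defaults),
  // the value is retrieved by entering the subsection again:
  prm.enter_subsection ("Dummy subsection");
  const bool generate_output = prm.get_bool ("Dummy generate output");
  prm.leave_subsection ();

  std::cout << std::boolalpha << generate_output << std::endl;
}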
There are a number of other possible patterns describing + // possible values of parameters; in all cases, if you try to pass a + // parameter to the program that does not match the expectations of the + // pattern, it will reject the parameter file and ask you to fix it. After + // all, it does not make much sense if you had an entry that contained the + // entry "red" for the parameter "Generate output". } // @sect4{Parsing the command line} - // Our next task is to see what information - // has been provided on the command - // line. First, we need to be sure that there - // is at least one parameter: an input - // file. The format and the output file can - // be specified in the parameter file, but - // the list of input files can't, so at least - // one parameter needs to be there. Together - // with the name of the program (the zeroth - // parameter), argc must therefore be at - // least 2. If this is not the case, we print - // an error message and exit: + // Our next task is to see what information has been provided on the command + // line. First, we need to be sure that there is at least one parameter: an + // input file. The format and the output file can be specified in the + // parameter file, but the list of input files can't, so at least one + // parameter needs to be there. Together with the name of the program (the + // zeroth parameter), argc must therefore be at least 2. If + // this is not the case, we print an error message and exit: void parse_command_line (const int argc, char *const *argv) @@ -325,27 +232,19 @@ namespace Step19 exit (1); } - // Next, collect all parameters in a list - // that will be somewhat simpler to handle - // than the argc/argv mechanism. We - // omit the name of the executable at the - // zeroth index: + // Next, collect all parameters in a list that will be somewhat simpler to + // handle than the argc/argv mechanism. We omit + // the name of the executable at the zeroth index: std::list args; for (int i=1; i-p, then there must be a - // parameter file following (which - // we should then read), in case of - // -x it is the name of an - // output format. Finally, for - // -o it is the name of the - // output file. In all cases, once - // we've treated a parameter, we - // remove it from the list of - // parameters: + // Then process all these parameters. If the parameter is -p, + // then there must be a parameter file following (which we should then + // read), in case of -x it is the name of an output + // format. Finally, for -o it is the name of the output + // file. In all cases, once we've treated a parameter, we remove it from + // the list of parameters: while (args.size()) { if (args.front() == std::string("-p")) @@ -365,49 +264,35 @@ namespace Step19 // Now read the input file: prm.read_input (parameter_file); - // Both the output file name as - // well as the format can be - // specified on the command - // line. We have therefore given - // them global variables that hold - // their values, but they can also - // be set in the parameter file. We - // therefore need to extract them - // from the parameter file here, - // because they may be overridden - // by later command line - // parameters: + // Both the output file name as well as the format can be + // specified on the command line. We have therefore given them + // global variables that hold their values, but they can also be + // set in the parameter file. 
We therefore need to extract them + // from the parameter file here, because they may be overridden by + // later command line parameters: if (output_file == "") output_file = prm.get ("Output file"); if (output_format == "") output_format = prm.get ("Output format"); - // Finally, let us note that if we - // were interested in the values of - // the parameters declared above in - // the dummy subsection, we would - // write something like this to - // extract the value of the boolean - // flag (the prm.get function - // returns the value of a parameter - // as a string, whereas the - // prm.get_X functions return a - // value already converted to a - // different type): + // Finally, let us note that if we were interested in the values + // of the parameters declared above in the dummy subsection, we + // would write something like this to extract the value of the + // boolean flag (the prm.get function returns the + // value of a parameter as a string, whereas the + // prm.get_X functions return a value already + // converted to a different type): prm.enter_subsection ("Dummy subsection"); { prm.get_bool ("Dummy generate output"); } prm.leave_subsection (); - // We would assign the result to a - // variable, or course, but don't - // here in order not to generate an - // unused variable that the + // We would assign the result to a variable, or course, but don't + // here in order not to generate an unused variable that the // compiler might warn about. // - // Alas, let's move on to handling - // of output formats: + // Alas, let's move on to handling of output formats: } else if (args.front() == std::string("-x")) { @@ -438,12 +323,9 @@ namespace Step19 args.pop_front (); } - // Otherwise, this is not a parameter - // that starts with a known minus - // sequence, and we should consider it - // to be the name of an input file. Let - // us therefore add this file to the - // list of input files: + // Otherwise, this is not a parameter that starts with a known minus + // sequence, and we should consider it to be the name of an input + // file. Let us therefore add this file to the list of input files: else { input_file_names.push_back (args.front()); @@ -451,8 +333,7 @@ namespace Step19 } } - // Next check a few things and create - // errors if the checks fail. Firstly, + // Next check a few things and create errors if the checks fail. Firstly, // there must be at least one input file if (input_file_names.size() == 0) { @@ -465,24 +346,18 @@ namespace Step19 // @sect4{Generating output} - // Now that we have all the information, we - // need to read all the input files, merge - // them, and generate a single output - // file. This, after all, was the motivation, - // borne from the necessity encountered in - // the step-18 tutorial program, to write - // this program in the first place. + // Now that we have all the information, we need to read all the input + // files, merge them, and generate a single output file. This, after all, + // was the motivation, borne from the necessity encountered in the step-18 + // tutorial program, to write this program in the first place. // - // So what we do first is to declare an - // object into which we will merge the data - // from all the input file, and read in the - // first file through a stream. 
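Before turning to the output generation, the command line handling above can be summarized in a small stand-alone program: copy argv into a std::list of strings and consume it flag by flag. This is plain C++ without any deal.II involvement; the flag names -p, -x and -o are the ones used here, while the error handling is deliberately simplified:

#include <iostream>
#include <list>
#include <string>
#include <vector>

int main (int argc, char **argv)
{
  // Copy everything but the executable name into a list that is easier
  // to manipulate than the raw argc/argv pair.
  std::list<std::string> args (argv + 1, argv + argc);

  std::string parameter_file, output_format, output_file;
  std::vector<std::string> input_file_names;

  // Consume the list front to back, removing each argument once handled.
  while (!args.empty ())
    {
      const std::string arg = args.front ();
      args.pop_front ();

      if ((arg == "-p") || (arg == "-x") || (arg == "-o"))
        {
          if (args.empty ())
            {
              std::cerr << "Error: flag " << arg << " lacks an argument."
                        << std::endl;
              return 1;
            }

          if (arg == "-p")
            parameter_file = args.front ();
          else if (arg == "-x")
            output_format = args.front ();
          else
            output_file = args.front ();
          args.pop_front ();
        }
      else
        // Everything that is not a recognized flag is treated as the
        // name of an input file.
        input_file_names.push_back (arg);
    }

  std::cout << "Got " << input_file_names.size ()
            << " input file name(s)." << std::endl;
}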
Note that - // every time we open a file, we use the - // AssertThrow macro to check whether the - // file is really readable -- if it isn't - // then this will trigger an exception and - // corresponding output will be generated - // from the exception handler in main(): + // So what we do first is to declare an object into which we will merge the + // data from all the input file, and read in the first file through a + // stream. Note that every time we open a file, we use the + // AssertThrow macro to check whether the file is really + // readable -- if it isn't then this will trigger an exception and + // corresponding output will be generated from the exception handler in + // main(): template void do_convert () { @@ -495,10 +370,8 @@ namespace Step19 merged_data.read (input); } - // For all the other input files, we read - // their data into an intermediate object, - // and then merge that into the first - // object declared above: + // For all the other input files, we read their data into an intermediate + // object, and then merge that into the first object declared above: for (unsigned int i=1; iDataOutBase class has a function - // that does this parsing for us, i.e. it - // knows about all the presently supported - // output formats and makes sure that they - // can be specified in the parameter file - // or on the command line. Note that this - // ensures that if the library acquires the - // ability to output in other output - // formats, this program will be able to - // make use of this ability without having - // to be changed! + // Once we have this, let us open an output stream, and parse what we got + // as the name of the output format into an identifier. Fortunately, the + // DataOutBase class has a function that does this parsing + // for us, i.e. it knows about all the presently supported output formats + // and makes sure that they can be specified in the parameter file or on + // the command line. Note that this ensures that if the library acquires + // the ability to output in other output formats, this program will be + // able to make use of this ability without having to be changed! std::ofstream output_stream (output_file.c_str()); AssertThrow (output_stream, ExcIO()); const DataOutBase::OutputFormat format = DataOutBase::parse_output_format (output_format); - // Finally, write the merged data to the - // output: + // Finally, write the merged data to the output: merged_data.write(output_stream, format); } // @sect4{Dispatching output generation} - // The function above takes template - // parameters relating to the space dimension - // of the output, and the dimension of the - // objects to be output. (For example, when - // outputting whole cells, these two - // dimensions are the same, but the - // intermediate files may contain only data - // pertaining to the faces of cells, in which - // case the first parameter will be one less + // The function above takes template parameters relating to the space + // dimension of the output, and the dimension of the objects to be + // output. (For example, when outputting whole cells, these two dimensions + // are the same, but the intermediate files may contain only data pertaining + // to the faces of cells, in which case the first parameter will be one less // than the space dimension.) // - // The problem is: at compile time, we of - // course don't know the dimensions used in - // the input files. We have to plan for all - // cases, therefore. 
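Condensed to its core, and before looking at how the program picks the right dimensions, the read/merge/write sequence of do_convert looks roughly like the sketch below, with the space dimensions fixed to <2,2> for brevity. It assumes the intermediate-format reader class DataOutReader with read and merge members; the calls are written from memory and meant as an illustration rather than a drop-in replacement:

#include <deal.II/base/data_out_base.h>
#include <deal.II/base/exceptions.h>

#include <fstream>
#include <string>
#include <vector>

using namespace dealii;

// Read deal.II intermediate-format files, merge them, and write the
// result as a single graphical output file.
void merge_intermediate_files (const std::vector<std::string> &input_file_names,
                               const std::string              &output_file,
                               const std::string              &output_format)
{
  // Read the first file into the object that will hold the merged data:
  DataOutReader<2,2> merged_data;
  {
    std::ifstream input (input_file_names[0].c_str ());
    AssertThrow (input, ExcIO ());
    merged_data.read (input);
  }

  // Read every further file into a temporary object and merge it in:
  for (unsigned int i = 1; i < input_file_names.size (); ++i)
    {
      std::ifstream input (input_file_names[i].c_str ());
      AssertThrow (input, ExcIO ());

      DataOutReader<2,2> additional_data;
      additional_data.read (input);
      merged_data.merge (additional_data);
    }

  // Translate the format string into an identifier and write the result:
  std::ofstream output (output_file.c_str ());
  AssertThrow (output, ExcIO ());

  const DataOutBase::OutputFormat format
    = DataOutBase::parse_output_format (output_format);
  merged_data.write (output, format);
}

int main (int argc, char **argv)
{
  // Usage: ./merge output_file format input_1 [input_2 ...]
  AssertThrow (argc >= 4, ExcMessage ("Too few arguments."));

  const std::vector<std::string> inputs (argv + 3, argv + argc);
  merge_intermediate_files (inputs, argv[1], argv[2]);
}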
This is a little clumsy, - // since we need to specify the dimensions - // statically at compile time, even though we - // will only know about them at run time. + // The problem is: at compile time, we of course don't know the dimensions + // used in the input files. We have to plan for all cases, therefore. This + // is a little clumsy, since we need to specify the dimensions statically at + // compile time, even though we will only know about them at run time. // - // So here is what we do: from the first - // input file, we determine (using a function - // in DataOutBase that exists for this - // purpose) these dimensions. We then have a - // series of switches that dispatch, - // statically, to the do_convert - // functions with different template - // arguments. Not pretty, but works. Apart - // from this, the function does nothing -- - // except making sure that it covered the - // dimensions for which it was called, using - // the AssertThrow macro at places in the + // So here is what we do: from the first input file, we determine (using a + // function in DataOutBase that exists for this purpose) these + // dimensions. We then have a series of switches that dispatch, statically, + // to the do_convert functions with different template + // arguments. Not pretty, but works. Apart from this, the function does + // nothing -- except making sure that it covered the dimensions for which it + // was called, using the AssertThrow macro at places in the // code that shouldn't be reached: void convert () { @@ -627,16 +480,12 @@ namespace Step19 // @sect4{main()} -// Finally, the main program. There is not -// much more to do than to make sure -// parameters are declared, the command line -// is parsed (which includes reading -// parameter files), and finally making sure -// the input files are read and output is -// generated. Everything else just has to do -// with handling exceptions and making sure -// that appropriate output is generated if -// one is thrown. +// Finally, the main program. There is not much more to do than to make sure +// parameters are declared, the command line is parsed (which includes reading +// parameter files), and finally making sure the input files are read and +// output is generated. Everything else just has to do with handling +// exceptions and making sure that appropriate output is generated if one is +// thrown. int main (int argc, char **argv) { try diff --git a/deal.II/examples/step-2/step-2.cc b/deal.II/examples/step-2/step-2.cc index d018ba2c78..4cf5d591d7 100644 --- a/deal.II/examples/step-2/step-2.cc +++ b/deal.II/examples/step-2/step-2.cc @@ -9,91 +9,66 @@ /* to the file deal.II/doc/license.html for the text and */ /* further information on this license. */ -// The first few includes are just -// like in the previous program, so -// do not require additional comments: +// The first few includes are just like in the previous program, so do not +// require additional comments: #include #include #include #include #include -// However, the next file is new. We need -// this include file for the association of -// degrees of freedom ("DoF"s) to vertices, -// lines, and cells: +// However, the next file is new. 
We need this include file for the +// association of degrees of freedom ("DoF"s) to vertices, lines, and cells: #include -// The following include contains the -// description of the bilinear finite -// element, including the facts that -// it has one degree of freedom on -// each vertex of the triangulation, -// but none on faces and none in the -// interior of the cells. +// The following include contains the description of the bilinear finite +// element, including the facts that it has one degree of freedom on each +// vertex of the triangulation, but none on faces and none in the interior of +// the cells. // -// (In fact, the file contains the -// description of Lagrange elements in -// general, i.e. also the quadratic, cubic, -// etc versions, and not only for 2d but also -// 1d and 3d.) +// (In fact, the file contains the description of Lagrange elements in +// general, i.e. also the quadratic, cubic, etc versions, and not only for 2d +// but also 1d and 3d.) #include -// In the following file, several -// tools for manipulating degrees of -// freedom can be found: +// In the following file, several tools for manipulating degrees of freedom +// can be found: #include -// We will use a sparse matrix to -// visualize the pattern of nonzero -// entries resulting from the -// distribution of degrees of freedom -// on the grid. That class can be -// found here: +// We will use a sparse matrix to visualize the pattern of nonzero entries +// resulting from the distribution of degrees of freedom on the grid. That +// class can be found here: #include -// We will also need to use an -// intermediate sparsity patter -// structure, which is found in this -// file: +// We will also need to use an intermediate sparsity patter structure, which +// is found in this file: #include -// We will want to use a special -// algorithm to renumber degrees of -// freedom. It is declared here: +// We will want to use a special algorithm to renumber degrees of freedom. It +// is declared here: #include // And this is again needed for C++ output: #include -// Finally, as in step-1, we import -// the deal.II namespace into the -// global scope: +// Finally, as in step-1, we import the deal.II namespace into the global +// scope: using namespace dealii; // @sect3{Mesh generation} -// This is the function that produced the -// circular grid in the previous step-1 -// example program. The sole difference is -// that it returns the grid it produces via -// its argument. +// This is the function that produced the circular grid in the previous step-1 +// example program. The sole difference is that it returns the grid it +// produces via its argument. // -// The details of what the function does are -// explained in step-1. The only thing we -// would like to comment on is this: +// The details of what the function does are explained in step-1. The only +// thing we would like to comment on is this: // -// Since we want to export the triangulation -// through this function's parameter, we need -// to make sure that the boundary object -// lives at least as long as the -// triangulation does. However, in step-1, -// the boundary object is a local variable, -// and it would be deleted at the end of the -// function, which is too early. We avoid the -// problem by declaring it 'static' which -// makes sure that the object is initialized -// the first time control the program passes -// this point, but at the same time assures -// that it lives until the end of the -// program. 
+// Since we want to export the triangulation through this function's +// parameter, we need to make sure that the boundary object lives at least as +// long as the triangulation does. However, in step-1, the boundary object is +// a local variable, and it would be deleted at the end of the function, which +// is too early. We avoid the problem by declaring it 'static' which makes +// sure that the object is initialized the first time control the program +// passes this point, but at the same time assures that it lives until the end +// of the program. void make_grid (Triangulation<2> &triangulation) { const Point<2> center (1,0); @@ -133,257 +108,149 @@ void make_grid (Triangulation<2> &triangulation) // @sect3{Creation of a DoFHandler} -// Up to now, we only have a grid, i.e. some -// geometrical (the position of the vertices) -// and some topological information (how -// vertices are connected to lines, and lines -// to cells, as well as which cells neighbor -// which other cells). To use numerical -// algorithms, one needs some logic -// information in addition to that: we would -// like to associate degree of freedom -// numbers to each vertex (or line, or cell, -// in case we were using higher order -// elements) to later generate matrices and -// vectors which describe a finite element +// Up to now, we only have a grid, i.e. some geometrical (the position of the +// vertices) and some topological information (how vertices are connected to +// lines, and lines to cells, as well as which cells neighbor which other +// cells). To use numerical algorithms, one needs some logic information in +// addition to that: we would like to associate degree of freedom numbers to +// each vertex (or line, or cell, in case we were using higher order elements) +// to later generate matrices and vectors which describe a finite element // field on the triangulation. // -// This function shows how to do this. The -// object to consider is the DoFHandler -// class template. Before we do so, however, -// we first need something that describes how -// many degrees of freedom are to be -// associated to each of these objects. Since -// this is one aspect of the definition of a -// finite element space, the finite element -// base class stores this information. In the -// present context, we therefore create an -// object of the derived class FE_Q that -// describes Lagrange elements. Its -// constructor takes one argument that states -// the polynomial degree of the element, -// which here is one (indicating a bi-linear -// element); this then corresponds to one -// degree of freedom for each vertex, while -// there are none on lines and inside the -// quadrilateral. A value of, say, three -// given to the constructor would instead -// give us a bi-cubic element with one degree -// of freedom per vertex, two per line, and -// four inside the cell. In general, FE_Q -// denotes the family of continuous elements -// with complete polynomials -// (i.e. tensor-product polynomials) up to -// the specified order. +// This function shows how to do this. The object to consider is the +// DoFHandler class template. Before we do so, however, we first +// need something that describes how many degrees of freedom are to be +// associated to each of these objects. Since this is one aspect of the +// definition of a finite element space, the finite element base class stores +// this information. In the present context, we therefore create an object of +// the derived class FE_Q that describes Lagrange elements. 
Its +// constructor takes one argument that states the polynomial degree of the +// element, which here is one (indicating a bi-linear element); this then +// corresponds to one degree of freedom for each vertex, while there are none +// on lines and inside the quadrilateral. A value of, say, three given to the +// constructor would instead give us a bi-cubic element with one degree of +// freedom per vertex, two per line, and four inside the cell. In general, +// FE_Q denotes the family of continuous elements with complete +// polynomials (i.e. tensor-product polynomials) up to the specified order. // -// We first need to create an object of this -// class and then pass it on to the -// DoFHandler object to allocate storage -// for the degrees of freedom (in deal.II -// lingo: we distribute degrees of -// freedom). Note that the DoFHandler -// object will store a reference to this -// finite element object, so we have to -// make sure its lifetime is at least as long -// as that of the DoFHandler; one way to -// make sure this is so is to make it static -// as well, in order to prevent its -// preemptive destruction. (However, the -// library would warn us if we forgot about -// this and abort the program if that -// occured. You can check this, if you want, -// by removing the 'static' declaration.) +// We first need to create an object of this class and then pass it on to the +// DoFHandler object to allocate storage for the degrees of +// freedom (in deal.II lingo: we distribute degrees of +// freedom). Note that the DoFHandler object will store a reference to +// this finite element object, so we have to make sure its lifetime is at +// least as long as that of the DoFHandler; one way to make sure +// this is so is to make it static as well, in order to prevent its preemptive +// destruction. (However, the library would warn us if we forgot about this +// and abort the program if that occured. You can check this, if you want, by +// removing the 'static' declaration.) void distribute_dofs (DoFHandler<2> &dof_handler) { - // As described above, let us first create - // a finite element object, and then use it - // to allocate degrees of freedom on the - // triangulation with which the dof_handler - // object is associated: + // As described above, let us first create a finite element object, and then + // use it to allocate degrees of freedom on the triangulation with which the + // dof_handler object is associated: static const FE_Q<2> finite_element(1); dof_handler.distribute_dofs (finite_element); - // Now that we have associated a degree of - // freedom with a global number to each - // vertex, we wonder how to visualize this? - // There is no simple way to directly - // visualize the DoF number associated with - // each vertex. However, such information - // would hardly ever be truly important, - // since the numbering itself is more or - // less arbitrary. There are more important - // factors, of which we will demonstrate - // one in the following. + // Now that we have associated a degree of freedom with a global number to + // each vertex, we wonder how to visualize this? There is no simple way to + // directly visualize the DoF number associated with each vertex. However, + // such information would hardly ever be truly important, since the + // numbering itself is more or less arbitrary. There are more important + // factors, of which we will demonstrate one in the following. // - // Associated with each vertex of the - // triangulation is a shape - // function. 
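As a quick check of the degree-of-freedom counts claimed above, one can print the per-object counters that every finite element object carries (a stand-alone sketch; the member names dofs_per_vertex and friends are those of the FiniteElement base class):

#include <deal.II/fe/fe_q.h>

#include <iostream>

using namespace dealii;

int main ()
{
  // A bi-linear and a bi-cubic Lagrange element in two space dimensions:
  const FE_Q<2> fe_linear (1);
  const FE_Q<2> fe_cubic (3);

  const FiniteElement<2> *elements[2] = { &fe_linear, &fe_cubic };

  for (unsigned int i=0; i<2; ++i)
    std::cout << elements[i]->get_name ()
              << ": dofs per vertex = " << elements[i]->dofs_per_vertex
              << ", per line = "        << elements[i]->dofs_per_line
              << ", per quad = "        << elements[i]->dofs_per_quad
              << ", per cell = "        << elements[i]->dofs_per_cell
              << std::endl;
}

For FE_Q<2>(1) this reports one degree of freedom per vertex and none on lines or in the cell interior; for FE_Q<2>(3) it reports one per vertex, two per line and four in the cell interior, with dofs_per_cell giving the total of sixteen, matching the bi-cubic example described above.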
Assume we want to solve - // something like Laplace's equation, then - // the different matrix entries will be the - // integrals over the gradient of each pair - // of such shape functions. Obviously, - // since the shape functions are nonzero - // only on the cells adjacent to the vertex - // they are associated with, matrix entries - // will be nonzero only if the supports of - // the shape functions associated to that - // column and row %numbers intersect. This - // is only the case for adjacent shape - // functions, and therefore only for - // adjacent vertices. Now, since the - // vertices are numbered more or less - // randomly by the above function - // (DoFHandler::distribute_dofs), the - // pattern of nonzero entries in the matrix - // will be somewhat ragged, and we will + // Associated with each vertex of the triangulation is a shape + // function. Assume we want to solve something like Laplace's equation, then + // the different matrix entries will be the integrals over the gradient of + // each pair of such shape functions. Obviously, since the shape functions + // are nonzero only on the cells adjacent to the vertex they are associated + // with, matrix entries will be nonzero only if the supports of the shape + // functions associated to that column and row %numbers intersect. This is + // only the case for adjacent shape functions, and therefore only for + // adjacent vertices. Now, since the vertices are numbered more or less + // randomly by the above function (DoFHandler::distribute_dofs), the pattern + // of nonzero entries in the matrix will be somewhat ragged, and we will // take a look at it now. // - // First we have to create a - // structure which we use to store - // the places of nonzero - // elements. This can then later be - // used by one or more sparse - // matrix objects that store the - // values of the entries in the - // locations stored by this - // sparsity pattern. The class that - // stores the locations is the - // SparsityPattern class. As it - // turns out, however, this class - // has some drawbacks when we try - // to fill it right away: its data - // structures are set up in such a - // way that we need to have an - // estimate for the maximal number - // of entries we may wish to have - // in each row. In two space - // dimensions, reasonable values - // for this estimate are available - // through the - // DoFHandler::max_couplings_between_dofs() - // function, but in three - // dimensions the function almost - // always severely overestimates - // the true number, leading to a - // lot of wasted memory, sometimes - // too much for the machine used, - // even if the unused memory can be - // released immediately after - // computing the sparsity - // pattern. In order to avoid this, - // we use an intermediate object of - // type CompressedSparsityPattern - // that uses a different %internal - // data structure and that we can - // later copy into the - // SparsityPattern object without - // much overhead. (Some more - // information on these data - // structures can be found in the - // @ref Sparsity module.) In order - // to initialize this intermediate - // data structure, we have to give - // it the size of the matrix, which - // in our case will be square with - // as many rows and columns as - // there are degrees of freedom on - // the grid: + // First we have to create a structure which we use to store the places of + // nonzero elements. 
This can then later be used by one or more sparse + // matrix objects that store the values of the entries in the locations + // stored by this sparsity pattern. The class that stores the locations is + // the SparsityPattern class. As it turns out, however, this class has some + // drawbacks when we try to fill it right away: its data structures are set + // up in such a way that we need to have an estimate for the maximal number + // of entries we may wish to have in each row. In two space dimensions, + // reasonable values for this estimate are available through the + // DoFHandler::max_couplings_between_dofs() function, but in three + // dimensions the function almost always severely overestimates the true + // number, leading to a lot of wasted memory, sometimes too much for the + // machine used, even if the unused memory can be released immediately after + // computing the sparsity pattern. In order to avoid this, we use an + // intermediate object of type CompressedSparsityPattern that uses a + // different %internal data structure and that we can later copy into the + // SparsityPattern object without much overhead. (Some more information on + // these data structures can be found in the @ref Sparsity module.) In order + // to initialize this intermediate data structure, we have to give it the + // size of the matrix, which in our case will be square with as many rows + // and columns as there are degrees of freedom on the grid: CompressedSparsityPattern compressed_sparsity_pattern(dof_handler.n_dofs(), dof_handler.n_dofs()); - // We then fill this object with the - // places where nonzero elements will be - // located given the present numbering of - // degrees of freedom: + // We then fill this object with the places where nonzero elements will be + // located given the present numbering of degrees of freedom: DoFTools::make_sparsity_pattern (dof_handler, compressed_sparsity_pattern); - // Now we are ready to create the actual - // sparsity pattern that we could later use - // for our matrix. It will just contain the - // data already assembled in the - // CompressedSparsityPattern. + // Now we are ready to create the actual sparsity pattern that we could + // later use for our matrix. It will just contain the data already assembled + // in the CompressedSparsityPattern. SparsityPattern sparsity_pattern; sparsity_pattern.copy_from (compressed_sparsity_pattern); - // With this, we can now write the results - // to a file: + // With this, we can now write the results to a file: std::ofstream out ("sparsity_pattern.1"); sparsity_pattern.print_gnuplot (out); - // The result is in GNUPLOT format, - // where in each line of the output - // file, the coordinates of one - // nonzero entry are listed. The - // output will be shown below. + // The result is in GNUPLOT format, where in each line of the output file, + // the coordinates of one nonzero entry are listed. The output will be shown + // below. // - // If you look at it, you will note that - // the sparsity pattern is symmetric. This - // should not come as a surprise, since we - // have not given the - // DoFTools::make_sparsity_pattern any - // information that would indicate that our - // bilinear form may couple shape functions - // in a non-symmetric way. 
You will also - // note that it has several distinct - // region, which stem from the fact that - // the numbering starts from the coarsest - // cells and moves on to the finer ones; - // since they are all distributed - // symmetrically around the origin, this - // shows up again in the sparsity pattern. + // If you look at it, you will note that the sparsity pattern is + // symmetric. This should not come as a surprise, since we have not given + // the DoFTools::make_sparsity_pattern any information that + // would indicate that our bilinear form may couple shape functions in a + // non-symmetric way. You will also note that it has several distinct + // region, which stem from the fact that the numbering starts from the + // coarsest cells and moves on to the finer ones; since they are all + // distributed symmetrically around the origin, this shows up again in the + // sparsity pattern. } // @sect3{Renumbering of DoFs} -// In the sparsity pattern produced above, -// the nonzero entries extended quite far off -// from the diagonal. For some algorithms, -// for example for incomplete LU -// decompositions or Gauss-Seidel -// preconditioners, this is unfavorable, and -// we will show a simple way how to improve -// this situation. +// In the sparsity pattern produced above, the nonzero entries extended quite +// far off from the diagonal. For some algorithms, for example for incomplete +// LU decompositions or Gauss-Seidel preconditioners, this is unfavorable, and +// we will show a simple way how to improve this situation. // -// Remember that for an entry $(i,j)$ -// in the matrix to be nonzero, the -// supports of the shape functions i -// and j needed to intersect -// (otherwise in the integral, the -// integrand would be zero everywhere -// since either the one or the other -// shape function is zero at some -// point). However, the supports of -// shape functions intersected only -// if they were adjacent to each -// other, so in order to have the -// nonzero entries clustered around -// the diagonal (where $i$ equals $j$), -// we would like to have adjacent -// shape functions to be numbered -// with indices (DoF numbers) that -// differ not too much. +// Remember that for an entry $(i,j)$ in the matrix to be nonzero, the +// supports of the shape functions i and j needed to intersect (otherwise in +// the integral, the integrand would be zero everywhere since either the one +// or the other shape function is zero at some point). However, the supports +// of shape functions intersected only if they were adjacent to each other, so +// in order to have the nonzero entries clustered around the diagonal (where +// $i$ equals $j$), we would like to have adjacent shape functions to be +// numbered with indices (DoF numbers) that differ not too much. // -// This can be accomplished by a -// simple front marching algorithm, -// where one starts at a given vertex -// and gives it the index zero. Then, -// its neighbors are numbered -// successively, making their indices -// close to the original one. Then, -// their neighbors, if not yet -// numbered, are numbered, and so -// on. +// This can be accomplished by a simple front marching algorithm, where one +// starts at a given vertex and gives it the index zero. Then, its neighbors +// are numbered successively, making their indices close to the original +// one. Then, their neighbors, if not yet numbered, are numbered, and so on. 
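One way to make "clustered around the diagonal" quantitative is the bandwidth of the sparsity pattern, that is, the largest distance of any nonzero entry from the diagonal. A possible sketch, assuming a DoFHandler<2> named dof_handler that already has degrees of freedom distributed as in the functions above, and assuming SparsityPattern offers a bandwidth() query:

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/compressed_sparsity_pattern.h>
#include <deal.II/lac/sparsity_pattern.h>

#include <iostream>

using namespace dealii;

// Print how far the nonzero entries spread away from the diagonal for
// the current DoF numbering of the given DoFHandler.
void print_bandwidth (const DoFHandler<2> &dof_handler)
{
  CompressedSparsityPattern compressed_pattern (dof_handler.n_dofs (),
                                                dof_handler.n_dofs ());
  DoFTools::make_sparsity_pattern (dof_handler, compressed_pattern);

  SparsityPattern sparsity_pattern;
  sparsity_pattern.copy_from (compressed_pattern);

  // The bandwidth is the largest |i-j| over all nonzero entries (i,j);
  // a renumbering scheme such as the one discussed next tries to make
  // this number as small as possible.
  std::cout << "Bandwidth: " << sparsity_pattern.bandwidth () << std::endl;
}

Calling this function once before and once after renumbering gives a single number that summarizes the visual difference between the two sparsity pattern plots.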
// -// One algorithm that adds a little bit of -// sophistication along these lines is the -// one by Cuthill and McKee. We will use it -// in the following function to renumber the -// degrees of freedom such that the resulting -// sparsity pattern is more localized around -// the diagonal. The only interesting part of -// the function is the first call to -// DoFRenumbering::Cuthill_McKee, the -// rest is essentially as before: +// One algorithm that adds a little bit of sophistication along these lines is +// the one by Cuthill and McKee. We will use it in the following function to +// renumber the degrees of freedom such that the resulting sparsity pattern is +// more localized around the diagonal. The only interesting part of the +// function is the first call to DoFRenumbering::Cuthill_McKee, +// the rest is essentially as before: void renumber_dofs (DoFHandler<2> &dof_handler) { DoFRenumbering::Cuthill_McKee (dof_handler); @@ -399,44 +266,29 @@ void renumber_dofs (DoFHandler<2> &dof_handler) sparsity_pattern.print_gnuplot (out); } -// Again, the output is shown -// below. Note that the nonzero -// entries are clustered far better -// around the diagonal than -// before. This effect is even more -// distinguished for larger -// matrices (the present one has -// 1260 rows and columns, but large -// matrices often have several -// 100,000s). - -// It is worth noting that the -// DoFRenumbering class offers a number -// of other algorithms as well to renumber -// degrees of freedom. For example, it would -// of course be ideal if all couplings were -// in the lower or upper triangular part of a -// matrix, since then solving the linear -// system would among to only forward or -// backward substitution. This is of course -// unachievable for symmetric sparsity -// patterns, but in some special situations -// involving transport equations, this is -// possible by enumerating degrees of freedom -// from the inflow boundary along streamlines -// to the outflow boundary. Not surprisingly, -// DoFRenumbering also has algorithms for -// this. +// Again, the output is shown below. Note that the nonzero entries are +// clustered far better around the diagonal than before. This effect is even +// more distinguished for larger matrices (the present one has 1260 rows and +// columns, but large matrices often have several 100,000s). + +// It is worth noting that the DoFRenumbering class offers a +// number of other algorithms as well to renumber degrees of freedom. For +// example, it would of course be ideal if all couplings were in the lower or +// upper triangular part of a matrix, since then solving the linear system +// would among to only forward or backward substitution. This is of course +// unachievable for symmetric sparsity patterns, but in some special +// situations involving transport equations, this is possible by enumerating +// degrees of freedom from the inflow boundary along streamlines to the +// outflow boundary. Not surprisingly, DoFRenumbering also has +// algorithms for this. // @sect3{The main function} -// Finally, this is the main program. The -// only thing it does is to allocate and -// create the triangulation, then create a -// DoFHandler object and associate it to -// the triangulation, and finally call above -// two functions on it: +// Finally, this is the main program. 
The only thing it does is to allocate +// and create the triangulation, then create a DoFHandler object +// and associate it to the triangulation, and finally call above two functions +// on it: int main () { Triangulation<2> triangulation; diff --git a/deal.II/examples/step-20/step-20.cc b/deal.II/examples/step-20/step-20.cc index 01303b4d7e..3424e10edf 100644 --- a/deal.II/examples/step-20/step-20.cc +++ b/deal.II/examples/step-20/step-20.cc @@ -11,14 +11,10 @@ // @sect3{Include files} -// Since this program is only an -// adaptation of step-4, there is not -// much new stuff in terms of header -// files. In deal.II, we usually list -// include files in the order -// base-lac-grid-dofs-fe-numerics, -// followed by C++ standard include -// files: +// Since this program is only an adaptation of step-4, there is not much new +// stuff in terms of header files. In deal.II, we usually list include files +// in the order base-lac-grid-dofs-fe-numerics, followed by C++ standard +// include files: #include #include #include @@ -27,11 +23,9 @@ #include #include #include -// For our Schur complement solver, -// we need two new objects. One is a -// matrix object which acts as the -// inverse of a matrix by calling an -// iterative solver. +// For our Schur complement solver, we need two new objects. One is a matrix +// object which acts as the inverse of a matrix by calling an iterative +// solver. #include #include @@ -52,47 +46,31 @@ #include #include -// This is the only significant new -// header, namely the one in which -// the Raviart-Thomas finite element -// is declared: +// This is the only significant new header, namely the one in which the +// Raviart-Thomas finite element is declared: #include -// Finally, as a bonus in this -// program, we will use a tensorial -// coefficient. Since it may have a -// spatial dependence, we consider it -// a tensor-valued function. The -// following include file provides -// the TensorFunction class that -// offers such functionality: +// Finally, as a bonus in this program, we will use a tensorial +// coefficient. Since it may have a spatial dependence, we consider it a +// tensor-valued function. The following include file provides the +// TensorFunction class that offers such functionality: #include -// The last step is as in all -// previous programs: +// The last step is as in all previous programs: namespace Step20 { using namespace dealii; // @sect3{The MixedLaplaceProblem class template} - // Again, since this is an adaptation - // of step-6, the main class is - // almost the same as the one in that - // tutorial program. In terms of - // member functions, the main - // differences are that the - // constructor takes the degree of - // the Raviart-Thomas element as an - // argument (and that there is a - // corresponding member variable to - // store this value) and the addition - // of the compute_error function - // in which, no surprise, we will - // compute the difference between the - // exact and the numerical solution - // to determine convergence of our - // computations: + // Again, since this is an adaptation of step-6, the main class is almost + // the same as the one in that tutorial program. 
In terms of member + // functions, the main differences are that the constructor takes the degree + // of the Raviart-Thomas element as an argument (and that there is a + // corresponding member variable to store this value) and the addition of + // the compute_error function in which, no surprise, we will + // compute the difference between the exact and the numerical solution to + // determine convergence of our computations: template class MixedLaplaceProblem { @@ -113,19 +91,11 @@ namespace Step20 FESystem fe; DoFHandler dof_handler; - // The second difference is that - // the sparsity pattern, the - // system matrix, and solution - // and right hand side vectors - // are now blocked. What this - // means and what one can do with - // such objects is explained in - // the introduction to this - // program as well as further - // down below when we explain the - // linear solvers and - // preconditioners for this - // problem: + // The second difference is that the sparsity pattern, the system matrix, + // and solution and right hand side vectors are now blocked. What this + // means and what one can do with such objects is explained in the + // introduction to this program as well as further down below when we + // explain the linear solvers and preconditioners for this problem: BlockSparsityPattern sparsity_pattern; BlockSparseMatrix system_matrix; @@ -136,27 +106,16 @@ namespace Step20 // @sect3{Right hand side, boundary values, and exact solution} - // Our next task is to define the - // right hand side of our problem - // (i.e., the scalar right hand side - // for the pressure in the original - // Laplace equation), boundary values - // for the pressure, as well as a - // function that describes both the - // pressure and the velocity of the - // exact solution for later - // computations of the error. Note - // that these functions have one, - // one, and dim+1 components, - // respectively, and that we pass the - // number of components down to the - // Function@ base class. For - // the exact solution, we only - // declare the function that actually - // returns the entire solution vector - // (i.e. all components of it) at - // once. Here are the respective - // declarations: + // Our next task is to define the right hand side of our problem (i.e., the + // scalar right hand side for the pressure in the original Laplace + // equation), boundary values for the pressure, as well as a function that + // describes both the pressure and the velocity of the exact solution for + // later computations of the error. Note that these functions have one, one, + // and dim+1 components, respectively, and that we pass the + // number of components down to the Function@ base + // class. For the exact solution, we only declare the function that actually + // returns the entire solution vector (i.e. all components of it) at + // once. Here are the respective declarations: template class RightHandSide : public Function { @@ -191,13 +150,9 @@ namespace Step20 }; - // And then we also have to define - // these respective functions, of - // course. Given our discussion in - // the introduction of how the - // solution should look like, the - // following computations should be - // straightforward: + // And then we also have to define these respective functions, of + // course. 
Given our discussion in the introduction of how the solution + // should look like, the following computations should be straightforward: template double RightHandSide::value (const Point & /*p*/, const unsigned int /*component*/) const @@ -238,52 +193,30 @@ namespace Step20 // @sect3{The inverse permeability tensor} - // In addition to the other equation - // data, we also want to use a - // permeability tensor, or better -- - // because this is all that appears - // in the weak form -- the inverse of - // the permeability tensor, - // KInverse. For the purpose of - // verifying the exactness of the - // solution and determining - // convergence orders, this tensor is - // more in the way than helpful. We - // will therefore simply set it to - // the identity matrix. + // In addition to the other equation data, we also want to use a + // permeability tensor, or better -- because this is all that appears in the + // weak form -- the inverse of the permeability tensor, + // KInverse. For the purpose of verifying the exactness of the + // solution and determining convergence orders, this tensor is more in the + // way than helpful. We will therefore simply set it to the identity matrix. // - // However, a spatially varying - // permeability tensor is - // indispensable in real-life porous - // media flow simulations, and we - // would like to use the opportunity - // to demonstrate the technique to - // use tensor valued functions. + // However, a spatially varying permeability tensor is indispensable in + // real-life porous media flow simulations, and we would like to use the + // opportunity to demonstrate the technique to use tensor valued functions. // - // Possibly unsurprising, deal.II - // also has a base class not only for - // scalar and generally vector-valued - // functions (the Function base - // class) but also for functions that - // return tensors of fixed dimension - // and rank, the TensorFunction - // template. Here, the function under - // consideration returns a dim-by-dim - // matrix, i.e. a tensor of rank 2 - // and dimension dim. We then - // choose the template arguments of - // the base class appropriately. + // Possibly unsurprising, deal.II also has a base class not only for scalar + // and generally vector-valued functions (the Function base + // class) but also for functions that return tensors of fixed dimension and + // rank, the TensorFunction template. Here, the function under + // consideration returns a dim-by-dim matrix, i.e. a tensor of rank 2 and + // dimension dim. We then choose the template arguments of the + // base class appropriately. // - // The interface that the - // TensorFunction class provides - // is essentially equivalent to the - // Function class. In particular, - // there exists a value_list - // function that takes a list of - // points at which to evaluate the - // function, and returns the values - // of the function in the second - // argument, a list of tensors: + // The interface that the TensorFunction class provides is + // essentially equivalent to the Function class. In particular, + // there exists a value_list function that takes a list of + // points at which to evaluate the function, and returns the values of the + // function in the second argument, a list of tensors: template class KInverse : public TensorFunction<2,dim> { @@ -295,20 +228,12 @@ namespace Step20 }; - // The implementation is less - // interesting. 
As in previous - // examples, we add a check to the - // beginning of the class to make - // sure that the sizes of input and - // output parameters are the same - // (see step-5 for a discussion of - // this technique). Then we loop over - // all evaluation points, and for - // each one first clear the output - // tensor and then set all its - // diagonal elements to one - // (i.e. fill the tensor with the - // identity matrix): + // The implementation is less interesting. As in previous examples, we add a + // check to the beginning of the class to make sure that the sizes of input + // and output parameters are the same (see step-5 for a discussion of this + // technique). Then we loop over all evaluation points, and for each one + // first clear the output tensor and then set all its diagonal elements to + // one (i.e. fill the tensor with the identity matrix): template void KInverse::value_list (const std::vector > &points, @@ -332,58 +257,33 @@ namespace Step20 // @sect4{MixedLaplaceProblem::MixedLaplaceProblem} - // In the constructor of this class, - // we first store the value that was - // passed in concerning the degree of - // the finite elements we shall use - // (a degree of zero, for example, - // means to use RT(0) and DG(0)), and - // then construct the vector valued - // element belonging to the space X_h - // described in the introduction. The - // rest of the constructor is as in - // the early tutorial programs. + // In the constructor of this class, we first store the value that was + // passed in concerning the degree of the finite elements we shall use (a + // degree of zero, for example, means to use RT(0) and DG(0)), and then + // construct the vector valued element belonging to the space X_h described + // in the introduction. The rest of the constructor is as in the early + // tutorial programs. // - // The only thing worth describing - // here is the constructor call of - // the fe variable. The - // FESystem class to which this - // variable belongs has a number of - // different constructors that all - // refer to binding simpler elements - // together into one larger - // element. In the present case, we - // want to couple a single RT(degree) - // element with a single DQ(degree) - // element. The constructor to - // FESystem that does this - // requires us to specity first the - // first base element (the - // FE_RaviartThomas object of - // given degree) and then the number - // of copies for this base element, - // and then similarly the kind and - // number of FE_DGQ - // elements. Note that the Raviart - // Thomas element already has dim - // vector components, so that the - // coupled element will have - // dim+1 vector components, the - // first dim of which correspond - // to the velocity variable whereas the - // last one corresponds to the - // pressure. + // The only thing worth describing here is the constructor call of the + // fe variable. The FESystem class to which this + // variable belongs has a number of different constructors that all refer to + // binding simpler elements together into one larger element. In the present + // case, we want to couple a single RT(degree) element with a single + // DQ(degree) element. The constructor to FESystem that does + // this requires us to specity first the first base element (the + // FE_RaviartThomas object of given degree) and then the number + // of copies for this base element, and then similarly the kind and number + // of FE_DGQ elements. 
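In code, the constructor call just described amounts to something like the following sketch; the member initializer is reconstructed from the description here rather than copied from this patch, and the comparison element from step-8 is included for contrast:

#include <deal.II/fe/fe_system.h>
#include <deal.II/fe/fe_raviart_thomas.h>
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/fe_q.h>

#include <iostream>

using namespace dealii;

int main ()
{
  const unsigned int dim    = 2;
  const unsigned int degree = 0;

  // One copy of RT(degree) for the velocity, one copy of DGQ(degree) for
  // the pressure, glued together into a vector-valued element with dim+1
  // components:
  FESystem<dim> fe (FE_RaviartThomas<dim>(degree), 1,
                    FE_DGQ<dim>(degree), 1);

  // For comparison, the step-8 style element built from dim copies of
  // FE_Q(1), one copy per displacement component:
  FESystem<dim> displacement_fe (FE_Q<dim>(1), dim);

  std::cout << fe.get_name () << " has "
            << fe.n_components () << " components." << std::endl;
  std::cout << displacement_fe.get_name () << " has "
            << displacement_fe.n_components () << " components." << std::endl;
}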
Note that the Raviart Thomas element + // already has dim vector components, so that the coupled + // element will have dim+1 vector components, the first + // dim of which correspond to the velocity variable whereas the + // last one corresponds to the pressure. // - // It is also worth comparing the way - // we constructed this element from - // its base elements, with the way we - // have done so in step-8: there, we - // have built it as fe - // (FE_Q@(1), dim), i.e. we - // have simply used dim copies of - // the FE_Q(1) element, one copy - // for the displacement in each - // coordinate direction. + // It is also worth comparing the way we constructed this element from its + // base elements, with the way we have done so in step-8: there, we have + // built it as fe (FE_Q@(1), dim), i.e. we have simply + // used dim copies of the FE_Q(1) element, one + // copy for the displacement in each coordinate direction. template MixedLaplaceProblem::MixedLaplaceProblem (const unsigned int degree) : @@ -397,11 +297,8 @@ namespace Step20 // @sect4{MixedLaplaceProblem::make_grid_and_dofs} - // This next function starts out with - // well-known functions calls that - // create and refine a mesh, and then - // associate degrees of freedom with - // it: + // This next function starts out with well-known functions calls that create + // and refine a mesh, and then associate degrees of freedom with it: template void MixedLaplaceProblem::make_grid_and_dofs () { @@ -410,55 +307,29 @@ namespace Step20 dof_handler.distribute_dofs (fe); - // However, then things become - // different. As mentioned in the - // introduction, we want to - // subdivide the matrix into blocks - // corresponding to the two - // different kinds of variables, - // velocity and pressure. To this end, - // we first have to make sure that - // the indices corresponding to - // velocities and pressures are not - // intermingled: First all velocity - // degrees of freedom, then all - // pressure DoFs. This way, the - // global matrix separates nicely - // into a 2x2 system. To achieve - // this, we have to renumber - // degrees of freedom base on their - // vector component, an operation - // that conveniently is already - // implemented: + // However, then things become different. As mentioned in the + // introduction, we want to subdivide the matrix into blocks corresponding + // to the two different kinds of variables, velocity and pressure. To this + // end, we first have to make sure that the indices corresponding to + // velocities and pressures are not intermingled: First all velocity + // degrees of freedom, then all pressure DoFs. This way, the global matrix + // separates nicely into a 2x2 system. To achieve this, we have to + // renumber degrees of freedom base on their vector component, an + // operation that conveniently is already implemented: DoFRenumbering::component_wise (dof_handler); - // The next thing is that we want - // to figure out the sizes of these - // blocks, so that we can allocate - // an appropriate amount of - // space. To this end, we call the - // DoFTools::count_dofs_per_component - // function that counts how many - // shape functions are non-zero for - // a particular vector - // component. We have dim+1 - // vector components, and we have - // to use the knowledge that for - // Raviart-Thomas elements all - // shape functions are nonzero in - // all components. 
In other words, - // the number of velocity shape - // functions equals the number of - // overall shape functions that are - // nonzero in the zeroth vector - // component. On the other hand, - // the number of pressure variables - // equals the number of shape - // functions that are nonzero in - // the dim-th component. Let us - // compute these numbers and then - // create some nice output with - // that: + // The next thing is that we want to figure out the sizes of these blocks, + // so that we can allocate an appropriate amount of space. To this end, we + // call the DoFTools::count_dofs_per_component function that + // counts how many shape functions are non-zero for a particular vector + // component. We have dim+1 vector components, and we have to + // use the knowledge that for Raviart-Thomas elements all shape functions + // are nonzero in all components. In other words, the number of velocity + // shape functions equals the number of overall shape functions that are + // nonzero in the zeroth vector component. On the other hand, the number + // of pressure variables equals the number of shape functions that are + // nonzero in the dim-th component. Let us compute these numbers and then + // create some nice output with that: std::vector dofs_per_component (dim+1); DoFTools::count_dofs_per_component (dof_handler, dofs_per_component); const unsigned int n_u = dofs_per_component[0], @@ -475,35 +346,19 @@ namespace Step20 << " (" << n_u << '+' << n_p << ')' << std::endl; - // The next task is to allocate a - // sparsity pattern for the matrix - // that we will create. The way - // this works is that we first - // obtain a guess for the maximal - // number of nonzero entries per - // row (this could be done more - // efficiently in this case, but we - // only want to solve relatively - // small problems for which this is - // not so important). In the second - // step, we allocate a 2x2 block - // pattern and then reinitialize - // each of the blocks to its - // correct size using the n_u - // and n_p variables defined - // above that hold the number of - // velocity and pressure - // variables. In this second step, - // we only operate on the - // individual blocks of the - // system. In the third step, we - // therefore have to instruct the - // overlying block system to update - // its knowledge about the sizes of - // the blocks it manages; this - // happens with the - // sparsity_pattern.collect_sizes() - // call: + // The next task is to allocate a sparsity pattern for the matrix that we + // will create. The way this works is that we first obtain a guess for the + // maximal number of nonzero entries per row (this could be done more + // efficiently in this case, but we only want to solve relatively small + // problems for which this is not so important). In the second step, we + // allocate a 2x2 block pattern and then reinitialize each of the blocks + // to its correct size using the n_u and n_p + // variables defined above that hold the number of velocity and pressure + // variables. In this second step, we only operate on the individual + // blocks of the system. 
In the third step, we therefore have to instruct + // the overlying block system to update its knowledge about the sizes of + // the blocks it manages; this happens with the + // sparsity_pattern.collect_sizes() call: const unsigned int n_couplings = dof_handler.max_couplings_between_dofs(); @@ -514,22 +369,17 @@ namespace Step20 sparsity_pattern.block(1,1).reinit (n_p, n_p, n_couplings); sparsity_pattern.collect_sizes(); - // Now that the sparsity pattern - // and its blocks have the correct - // sizes, we actually need to - // construct the content of this - // pattern, and as usual compress - // it, before we also initialize a - // block matrix with this block + // Now that the sparsity pattern and its blocks have the correct sizes, we + // actually need to construct the content of this pattern, and as usual + // compress it, before we also initialize a block matrix with this block // sparsity pattern: DoFTools::make_sparsity_pattern (dof_handler, sparsity_pattern); sparsity_pattern.compress(); system_matrix.reinit (sparsity_pattern); - // Then we have to resize the - // solution and right hand side - // vectors in exactly the same way: + // Then we have to resize the solution and right hand side vectors in + // exactly the same way: solution.reinit (2); solution.block(0).reinit (n_u); solution.block(1).reinit (n_p); @@ -542,25 +392,15 @@ namespace Step20 } - // @sect4{MixedLaplaceProblem::assemble_system} - // Similarly, the function that - // assembles the linear system has - // mostly been discussed already in - // the introduction to this - // example. At its top, what happens - // are all the usual steps, with the - // addition that we do not only - // allocate quadrature and - // FEValues objects for the cell - // terms, but also for face - // terms. After that, we define the - // usual abbreviations for variables, - // and the allocate space for the - // local matrix and right hand side - // contributions, and the array that - // holds the global numbers of the - // degrees of freedom local to the - // present cell. + // @sect4{MixedLaplaceProblem::assemble_system} Similarly, the function that + // assembles the linear system has mostly been discussed already in the + // introduction to this example. At its top, what happens are all the usual + // steps, with the addition that we do not only allocate quadrature and + // FEValues objects for the cell terms, but also for face + // terms. After that, we define the usual abbreviations for variables, and + // the allocate space for the local matrix and right hand side + // contributions, and the array that holds the global numbers of the degrees + // of freedom local to the present cell. template void MixedLaplaceProblem::assemble_system () { @@ -583,20 +423,12 @@ namespace Step20 std::vector local_dof_indices (dofs_per_cell); - // The next step is to declare - // objects that represent the - // source term, pressure boundary - // value, and coefficient in the - // equation. In addition to these - // objects that represent - // continuous functions, we also - // need arrays to hold their values - // at the quadrature points of - // individual cells (or faces, for - // the boundary values). Note that - // in the case of the coefficient, - // the array has to be one of - // matrices. + // The next step is to declare objects that represent the source term, + // pressure boundary value, and coefficient in the equation. 
In addition + // to these objects that represent continuous functions, we also need + // arrays to hold their values at the quadrature points of individual + // cells (or faces, for the boundary values). Note that in the case of the + // coefficient, the array has to be one of matrices. const RightHandSide right_hand_side; const PressureBoundaryValues pressure_boundary_values; const KInverse k_inverse; @@ -605,30 +437,21 @@ namespace Step20 std::vector boundary_values (n_face_q_points); std::vector > k_inverse_values (n_q_points); - // Finally, we need a couple of extractors - // that we will use to get at the velocity - // and pressure components of vector-valued - // shape functions. Their function and use - // is described in detail in the @ref - // vector_valued report. Essentially, we - // will use them as subscripts on the - // FEValues objects below: the FEValues - // object describes all vector components - // of shape functions, while after - // subscription, it will only refer to the - // velocities (a set of dim - // components starting at component zero) - // or the pressure (a scalar component - // located at position dim): + // Finally, we need a couple of extractors that we will use to get at the + // velocity and pressure components of vector-valued shape + // functions. Their function and use is described in detail in the @ref + // vector_valued report. Essentially, we will use them as subscripts on + // the FEValues objects below: the FEValues object describes all vector + // components of shape functions, while after subscription, it will only + // refer to the velocities (a set of dim components starting + // at component zero) or the pressure (a scalar component located at + // position dim): const FEValuesExtractors::Vector velocities (0); const FEValuesExtractors::Scalar pressure (dim); - // With all this in place, we can - // go on with the loop over all - // cells. The body of this loop has - // been discussed in the - // introduction, and will not be - // commented any further here: + // With all this in place, we can go on with the loop over all cells. The + // body of this loop has been discussed in the introduction, and will not + // be commented any further here: typename DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); @@ -686,23 +509,13 @@ namespace Step20 fe_face_values.JxW(q)); } - // The final step in the loop - // over all cells is to - // transfer local contributions - // into the global matrix and - // right hand side vector. Note - // that we use exactly the same - // interface as in previous - // examples, although we now - // use block matrices and - // vectors instead of the - // regular ones. In other - // words, to the outside world, - // block objects have the same - // interface as matrices and - // vectors, but they - // additionally allow to access - // individual blocks. + // The final step in the loop over all cells is to transfer local + // contributions into the global matrix and right hand side + // vector. Note that we use exactly the same interface as in previous + // examples, although we now use block matrices and vectors instead of + // the regular ones. In other words, to the outside world, block + // objects have the same interface as matrices and vectors, but they + // additionally allow to access individual blocks. cell->get_dof_indices (local_dof_indices); for (unsigned int i=0; iSchurComplement class template} - // The next class is the Schur - // complement class. 
Its rationale - // has also been discussed at length - // in the introduction. The only - // thing we would like to note is - // that the class, too, is derived - // from the Subscriptor class and - // that as mentioned above it stores - // pointers to the entire block - // matrix and the inverse of the mass - // matrix block using + // The next class is the Schur complement class. Its rationale has also been + // discussed at length in the introduction. The only thing we would like to + // note is that the class, too, is derived from the Subscriptor + // class and that as mentioned above it stores pointers to the entire block + // matrix and the inverse of the mass matrix block using // SmartPointer objects. // - // The vmult function requires - // two temporary vectors that we do - // not want to re-allocate and free - // every time we call this - // function. Since here, we have full - // control over the use of these - // vectors (unlike above, where a - // class called by the vmult - // function required these vectors, - // not the vmult function - // itself), we allocate them - // directly, rather than going - // through the VectorMemory - // mechanism. However, again, since these - // member variables do not carry any - // state between successive calls to - // the member functions of this class - // (i.e., we never care what values - // they were set to the last time a - // member function was called), we - // mark these vectors as mutable. + // The vmult function requires two temporary vectors that we do + // not want to re-allocate and free every time we call this function. Since + // here, we have full control over the use of these vectors (unlike above, + // where a class called by the vmult function required these + // vectors, not the vmult function itself), we allocate them + // directly, rather than going through the VectorMemory + // mechanism. However, again, since these member variables do not carry any state + // between successive calls to the member functions of this class (i.e., we + // never care what values they were set to the last time a member function + // was called), we mark these vectors as mutable. // - // The rest of the (short) - // implementation of this class is - // straightforward if you know the - // order of matrix-vector - // multiplications performed by the + // The rest of the (short) implementation of this class is straightforward + // if you know the order of matrix-vector multiplications performed by the // vmult function: class SchurComplement : public Subscriptor { @@ -809,28 +597,17 @@ namespace Step20 // @sect4{The ApproximateSchurComplement class template} - // The third component of our solver - // and preconditioner system is the - // class that approximates the Schur - // complement so we can form - // an InverseIterate - // object that approximates the - // inverse of the Schur - // complement. It follows the same - // pattern as the Schur complement - // class, with the only exception - // that we do not multiply with the - // inverse mass matrix in vmult, - // but rather just do a single Jacobi - // step. Consequently, the class also - // does not have to store a pointer - // to an inverse mass matrix object. + // The third component of our solver and preconditioner system is the class + // that approximates the Schur complement so we can form an InverseIterate + // object that approximates the inverse of the Schur complement.
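// (For reference, the order of matrix-vector products in the vmult
// function discussed above is, roughly, the following; tmp1 and tmp2 are the
// two mutable temporary vectors, and the block indices follow the 2x2
// structure of the system matrix:)
void SchurComplement::vmult (Vector<double>       &dst,
                             const Vector<double> &src) const
{
  system_matrix->block(0,1).vmult (tmp1, src);  // multiply by the top-right block
  m_inverse->vmult (tmp2, tmp1);                // apply the inverse mass matrix
  system_matrix->block(1,0).vmult (dst, tmp2);  // multiply by the bottom-left block
}
// The ApproximateSchurComplement class described next replaces the expensive
// m_inverse step above by a much cheaper operation.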
It follows + // the same pattern as the Schur complement class, with the only exception + // that we do not multiply with the inverse mass matrix in + // vmult, but rather just do a single Jacobi + // step. Consequently, the class also does not have to store a pointer to an + // inverse mass matrix object. // - // Since InverseIterate follows the - // standard convention for matrices, - // we need to provide a - // Tvmult function here as - // well. + // Since InverseIterate follows the standard convention for matrices, we + // need to provide a Tvmult function here as well. class ApproximateSchurComplement : public Subscriptor { public: @@ -877,21 +654,13 @@ namespace Step20 // @sect4{MixedLaplace::solve} - // After all these preparations, we - // can finally write the function - // that actually solves the linear - // problem. We will go through the - // two parts it has that each solve - // one of the two equations, the - // first one for the pressure - // (component 1 of the solution), - // then the velocities (component 0 - // of the solution). Both parts need - // an object representing the inverse - // mass matrix and an auxiliary - // vector, and we therefore declare - // these objects at the beginning of - // this function. + // After all these preparations, we can finally write the function that + // actually solves the linear problem. We will go through the two parts it + // has that each solve one of the two equations, the first one for the + // pressure (component 1 of the solution), then the velocities (component 0 + // of the solution). Both parts need an object representing the inverse mass + // matrix and an auxiliary vector, and we therefore declare these objects at + // the beginning of this function. template void MixedLaplaceProblem::solve () { @@ -905,19 +674,12 @@ namespace Step20 Vector tmp (solution.block(0).size()); - // Now on to the first - // equation. The right hand side of - // it is BM^{-1}F-G, which is what - // we compute in the first few - // lines. We then declare the - // objects representing the Schur - // complement, its approximation, - // and the inverse of the - // approximation. Finally, we - // declare a solver object and hand - // off all these matrices and - // vectors to it to compute block 1 - // (the pressure) of the solution: + // Now on to the first equation. The right hand side of it is BM^{-1}F-G, + // which is what we compute in the first few lines. We then declare the + // objects representing the Schur complement, its approximation, and the + // inverse of the approximation. Finally, we declare a solver object and + // hand off all these matrices and vectors to it to compute block 1 (the + // pressure) of the solution: { Vector schur_rhs (solution.block(1).size()); @@ -951,14 +713,10 @@ namespace Step20 << std::endl; } - // After we have the pressure, we - // can compute the velocity. The - // equation reads MU=-B^TP+F, and - // we solve it by first computing - // the right hand side, and then - // multiplying it with the object - // that represents the inverse of - // the mass matrix: + // After we have the pressure, we can compute the velocity. 
The equation + // reads MU=-B^TP+F, and we solve it by first computing the right hand + // side, and then multiplying it with the object that represents the + // inverse of the mass matrix: { system_matrix.block(0,1).vmult (tmp, solution.block(1)); tmp *= -1; @@ -973,73 +731,38 @@ namespace Step20 // @sect4{MixedLaplace::compute_errors} - // After we have dealt with the - // linear solver and preconditioners, - // we continue with the - // implementation of our main - // class. In particular, the next - // task is to compute the errors in - // our numerical solution, in both - // the pressures as well as - // velocities. + // After we have dealt with the linear solver and preconditioners, we + // continue with the implementation of our main class. In particular, the + // next task is to compute the errors in our numerical solution, in both the + // pressures as well as velocities. // - // To compute errors in the solution, - // we have already introduced the - // VectorTools::integrate_difference - // function in step-7 and - // step-11. However, there we only - // dealt with scalar solutions, - // whereas here we have a - // vector-valued solution with - // components that even denote - // different quantities and may have - // different orders of convergence - // (this isn't the case here, by - // choice of the used finite - // elements, but is frequently the - // case in mixed finite element - // applications). What we therefore - // have to do is to `mask' the - // components that we are interested + // To compute errors in the solution, we have already introduced the + // VectorTools::integrate_difference function in step-7 and + // step-11. However, there we only dealt with scalar solutions, whereas here + // we have a vector-valued solution with components that even denote + // different quantities and may have different orders of convergence (this + // isn't the case here, by choice of the used finite elements, but is + // frequently the case in mixed finite element applications). What we + // therefore have to do is to `mask' the components that we are interested // in. This is easily done: the - // VectorTools::integrate_difference - // function takes as its last - // argument a pointer to a weight - // function (the parameter defaults - // to the null pointer, meaning unit - // weights). What we simply have to - // do is to pass a function object - // that equals one in the components - // we are interested in, and zero in - // the other ones. For example, to - // compute the pressure error, we - // should pass a function that - // represents the constant vector - // with a unit value in component - // dim, whereas for the velocity - // the constant vector should be one - // in the first dim components, - // and zero in the location of the - // pressure. + // VectorTools::integrate_difference function takes as its last + // argument a pointer to a weight function (the parameter defaults to the + // null pointer, meaning unit weights). What we simply have to do is to pass + // a function object that equals one in the components we are interested in, + // and zero in the other ones. For example, to compute the pressure error, + // we should pass a function that represents the constant vector with a unit + // value in component dim, whereas for the velocity the + // constant vector should be one in the first dim components, + // and zero in the location of the pressure. 
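// (A sketch of what these two weight functions will look like -- the class
// used for this purpose is introduced in the next paragraph, and dim+1 is the
// total number of vector components of the finite element:)
const ComponentSelectFunction<dim> pressure_mask (dim, dim+1);
const ComponentSelectFunction<dim> velocity_mask (std::make_pair(0, dim), dim+1);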
// - // In deal.II, the - // ComponentSelectFunction does - // exactly this: it wants to know how - // many vector components the - // function it is to represent should - // have (in our case this would be - // dim+1, for the joint - // velocity-pressure space) and which - // individual or range of components - // should be equal to one. We - // therefore define two such masks at - // the beginning of the function, - // followed by an object - // representing the exact solution - // and a vector in which we will - // store the cellwise errors as - // computed by - // integrate_difference: + // In deal.II, the ComponentSelectFunction does exactly this: + // it wants to know how many vector components the function it is to + // represent should have (in our case this would be dim+1, for + // the joint velocity-pressure space) and which individual or range of + // components should be equal to one. We therefore define two such masks at + // the beginning of the function, followed by an object representing the + // exact solution and a vector in which we will store the cellwise errors as + // computed by integrate_difference: template void MixedLaplaceProblem::compute_errors () const { @@ -1051,43 +774,25 @@ ExactSolution exact_solution; Vector cellwise_errors (triangulation.n_active_cells()); - // As already discussed in step-7, - // we have to realize that it is - // impossible to integrate the - // errors exactly. All we can do is - // approximate this integral using - // quadrature. This actually - // presents a slight twist here: if - // we naively chose an object of - // type QGauss@(degree+1) - // as one may be inclined to do - // (this is what we used for - // integrating the linear system), - // one realizes that the error is - // very small and does not follow - // the expected convergence curves - // at all. What is happening is - // that for the mixed finite - // elements used here, the Gauss - // points happen to be - // superconvergence points in which - // the pointwise error is much - // smaller (and converges with - // higher order) than anywhere - // else. These are therefore not - // particularly good points for - // integration. To avoid this - // problem, we simply use a - // trapezoidal rule and iterate it - // degree+2 times in each - // coordinate direction (again as - // explained in step-7): + // As already discussed in step-7, we have to realize that it is + // impossible to integrate the errors exactly. All we can do is + // approximate this integral using quadrature. This actually presents a + // slight twist here: if we naively chose an object of type + // QGauss@(degree+1) as one may be inclined to do (this + // is what we used for integrating the linear system), one realizes that + // the error is very small and does not follow the expected convergence + // curves at all. What is happening is that for the mixed finite elements + // used here, the Gauss points happen to be superconvergence points in + // which the pointwise error is much smaller (and converges with higher + // order) than anywhere else. These are therefore not particularly good + // points for integration.
To avoid this problem, we simply use a + // trapezoidal rule and iterate it degree+2 times in each + // coordinate direction (again as explained in step-7): QTrapez<1> q_trapez; QIterated quadrature (q_trapez, degree+2); - // With this, we can then let the - // library compute the errors and - // output them to the screen: + // With this, we can then let the library compute the errors and output + // them to the screen: VectorTools::integrate_difference (dof_handler, solution, exact_solution, cellwise_errors, quadrature, VectorTools::L2_norm, @@ -1108,45 +813,27 @@ namespace Step20 // @sect4{MixedLaplace::output_results} - // The last interesting function is - // the one in which we generate - // graphical output. Everything here - // looks obvious and familiar. Note - // how we construct unique names for - // all the solution variables at the - // beginning, like we did in step-8 - // and other programs later on. The - // only thing worth mentioning is - // that for higher order elements, in - // seems inappropriate to only show a - // single bilinear quadrilateral per - // cell in the graphical output. We - // therefore generate patches of size - // (degree+1)x(degree+1) to capture - // the full information content of - // the solution. See the step-7 - // tutorial program for more - // information on this. + // The last interesting function is the one in which we generate graphical + // output. Everything here looks obvious and familiar. Note how we construct + // unique names for all the solution variables at the beginning, like we did + // in step-8 and other programs later on. The only thing worth mentioning is + // that for higher order elements, in seems inappropriate to only show a + // single bilinear quadrilateral per cell in the graphical output. We + // therefore generate patches of size (degree+1)x(degree+1) to capture the + // full information content of the solution. See the step-7 tutorial program + // for more information on this. // - // Note that we output the dim+1 - // components of the solution vector as a - // collection of individual scalars - // here. Most visualization programs will - // then only offer to visualize them - // individually, rather than allowing us to - // plot the flow field as a vector - // field. However, as explained in the - // corresponding function of step-22 or the - // @ref VVOutput "Generating graphical output" - // section of the @ref vector_valued module, - // instructing the DataOut class to identify - // components of the FESystem object as - // elements of a dim-dimensional - // vector is not actually very difficult and - // will then allow us to show results as - // vector plots. We skip this here for - // simplicity and refer to the links above - // for more information. + // Note that we output the dim+1 components of the solution + // vector as a collection of individual scalars here. Most visualization + // programs will then only offer to visualize them individually, rather than + // allowing us to plot the flow field as a vector field. However, as + // explained in the corresponding function of step-22 or the @ref VVOutput + // "Generating graphical output" section of the @ref vector_valued module, + // instructing the DataOut class to identify components of the FESystem + // object as elements of a dim-dimensional vector is not + // actually very difficult and will then allow us to show results as vector + // plots. We skip this here for simplicity and refer to the links above for + // more information. 
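// (In code, generating such subdivided patches is, roughly, a matter of the
// following calls; solution_names is the vector of component names assembled
// at the top of the function:)
DataOut<dim> data_out;
data_out.attach_dof_handler (dof_handler);
data_out.add_data_vector (solution, solution_names);
data_out.build_patches (degree+1);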
template void MixedLaplaceProblem::output_results () const { @@ -1186,10 +873,8 @@ namespace Step20 // @sect4{MixedLaplace::run} - // This is the final function of our - // main class. It's only job is to - // call the other functions in their - // natural order: + // This is the final function of our main class. It's only job is to call + // the other functions in their natural order: template void MixedLaplaceProblem::run () { @@ -1204,16 +889,11 @@ namespace Step20 // @sect3{The main function} -// The main function we stole from -// step-6 instead of step-4. It is -// almost equal to the one in step-6 -// (apart from the changed class -// names, of course), the only -// exception is that we pass the -// degree of the finite element space -// to the constructor of the mixed -// laplace problem (here, we use -// zero-th order elements). +// The main function we stole from step-6 instead of step-4. It is almost +// equal to the one in step-6 (apart from the changed class names, of course), +// the only exception is that we pass the degree of the finite element space +// to the constructor of the mixed laplace problem (here, we use zero-th order +// elements). int main () { try diff --git a/deal.II/examples/step-21/step-21.cc b/deal.II/examples/step-21/step-21.cc index 14bedbbb04..2136c76463 100644 --- a/deal.II/examples/step-21/step-21.cc +++ b/deal.II/examples/step-21/step-21.cc @@ -9,18 +9,14 @@ /* to the file deal.II/doc/license.html for the text and */ /* further information on this license. */ -// This program is an adaptation of step-20 -// and includes some technique of DG methods -// from step-12. A good part of the program -// is therefore very similar to step-20 and -// we will not comment again on these -// parts. Only the new stuff will be -// discussed in more detail. +// This program is an adaptation of step-20 and includes some technique of DG +// methods from step-12. A good part of the program is therefore very similar +// to step-20 and we will not comment again on these parts. Only the new stuff +// will be discussed in more detail. // @sect3{Include files} -// All of these include files have been used -// before: +// All of these include files have been used before: #include #include #include @@ -55,16 +51,13 @@ #include #include -// In this program, we use a tensor-valued -// coefficient. Since it may have a spatial -// dependence, we consider it a tensor-valued -// function. The following include file -// provides the TensorFunction -// class that offers such functionality: +// In this program, we use a tensor-valued coefficient. Since it may have a +// spatial dependence, we consider it a tensor-valued function. The following +// include file provides the TensorFunction class that offers +// such functionality: #include -// The last step is as in all -// previous programs: +// The last step is as in all previous programs: namespace Step21 { using namespace dealii; @@ -72,35 +65,24 @@ namespace Step21 // @sect3{The TwoPhaseFlowProblem class} - // This is the main class of the program. It - // is close to the one of step-20, but with a - // few additional functions: + // This is the main class of the program. It is close to the one of step-20, + // but with a few additional functions: // - //
- //   • assemble_rhs_S assembles the
- //     right hand side of the saturation
- //     equation. As explained in the
- //     introduction, this can't be integrated
- //     into assemble_rhs since it depends
- //     on the velocity that is computed in the
- //     first part of the time step.
+ //   • assemble_rhs_S assembles the right hand side of the
+ //     saturation equation. As explained in the introduction, this can't be
+ //     integrated into assemble_rhs since it depends on the
+ //     velocity that is computed in the first part of the time step.
  //
- //   • get_maximal_velocity does as its
- //     name suggests. This function is used in
- //     the computation of the time step size.
+ //   • get_maximal_velocity does as its name suggests. This
+ //     function is used in the computation of the time step size.
  //
- //   • project_back_saturation resets
- //     all saturation degrees of freedom with
- //     values less than zero to zero, and all
- //     those with saturations greater than one
- //     to one.
+ //   • project_back_saturation resets all saturation degrees
+ //     of freedom with values less than zero to zero, and all those with
+ //     saturations greater than one to one.
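// (A sketch of the clipping that the last of these functions performs; the
// saturation values live in block 2 of the solution vector, as explained
// further below:)
for (unsigned int i=0; i<solution.block(2).size(); ++i)
  if (solution.block(2)(i) < 0)
    solution.block(2)(i) = 0;
  else if (solution.block(2)(i) > 1)
    solution.block(2)(i) = 1;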
// - // The rest of the class should be pretty - // much obvious. The viscosity variable - // stores the viscosity $\mu$ that enters - // several of the formulas in the nonlinear - // equations. + // The rest of the class should be pretty much obvious. The + // viscosity variable stores the viscosity $\mu$ that enters + // several of the formulas in the nonlinear equations. template class TwoPhaseFlowProblem { @@ -140,12 +122,9 @@ namespace Step21 // @sect3{Equation data} - // @sect4{Pressure right hand side} - // At present, the right hand side of the - // pressure equation is simply the zero - // function. However, the rest of the program - // is fully equipped to deal with anything - // else, if this is desired: + // @sect4{Pressure right hand side} At present, the right hand side of the + // pressure equation is simply the zero function. However, the rest of the + // program is fully equipped to deal with anything else, if this is desired: template class PressureRightHandSide : public Function { @@ -167,10 +146,9 @@ namespace Step21 } - // @sect4{Pressure boundary values} - // The next are pressure boundary values. As - // mentioned in the introduction, we choose a - // linear pressure field: + // @sect4{Pressure boundary values} The next are pressure boundary + // values. As mentioned in the introduction, we choose a linear pressure + // field: template class PressureBoundaryValues : public Function { @@ -193,13 +171,10 @@ namespace Step21 // @sect4{Saturation boundary values} - // Then we also need boundary values on the - // inflow portions of the boundary. The - // question whether something is an inflow - // part is decided when assembling the right - // hand side, we only have to provide a - // functional description of the boundary - // values. This is as explained in the + // Then we also need boundary values on the inflow portions of the + // boundary. The question whether something is an inflow part is decided + // when assembling the right hand side, we only have to provide a functional + // description of the boundary values. This is as explained in the // introduction: template class SaturationBoundaryValues : public Function @@ -228,22 +203,16 @@ namespace Step21 // @sect4{Initial data} - // Finally, we need initial data. In reality, - // we only need initial data for the - // saturation, but we are lazy, so we will - // later, before the first time step, simply - // interpolate the entire solution for the - // previous time step from a function that - // contains all vector components. + // Finally, we need initial data. In reality, we only need initial data for + // the saturation, but we are lazy, so we will later, before the first time + // step, simply interpolate the entire solution for the previous time step + // from a function that contains all vector components. // - // We therefore simply create a function that - // returns zero in all components. We do that - // by simply forward every function to the - // ZeroFunction class. Why not use that right - // away in the places of this program where - // we presently use the InitialValues - // class? Because this way it is simpler to - // later go back and choose a different + // We therefore simply create a function that returns zero in all + // components. We do that by simply forward every function to the + // ZeroFunction class. Why not use that right away in the places of this + // program where we presently use the InitialValues class? 
+ // Because this way it is simpler to later go back and choose a different // function for initial values. template class InitialValues : public Function @@ -282,27 +251,18 @@ namespace Step21 // @sect3{The inverse permeability tensor} - // As announced in the introduction, we - // implement two different permeability - // tensor fields. Each of them we put into a - // namespace of its own, so that it will be - // easy later to replace use of one by the - // other in the code. + // As announced in the introduction, we implement two different permeability + // tensor fields. Each of them we put into a namespace of its own, so that + // it will be easy later to replace use of one by the other in the code. // @sect4{Single curving crack permeability} - // The first function for the - // permeability was the one that - // models a single curving crack. It - // was already used at the end of - // step-20, and its functional form - // is given in the introduction of - // the present tutorial program. As - // in some previous programs, we have - // to declare a (seemingly - // unnecessary) default constructor - // of the KInverse class to avoid - // warnings from some compilers: + // The first function for the permeability was the one that models a single + // curving crack. It was already used at the end of step-20, and its + // functional form is given in the introduction of the present tutorial + // program. As in some previous programs, we have to declare a (seemingly + // unnecessary) default constructor of the KInverse class to avoid warnings + // from some compilers: namespace SingleCurvingCrack { template @@ -348,54 +308,35 @@ namespace Step21 // @sect4{Random medium permeability} - // This function does as announced in the - // introduction, i.e. it creates an overlay - // of exponentials at random places. There is - // one thing worth considering for this - // class. The issue centers around the - // problem that the class creates the centers - // of the exponentials using a random - // function. If we therefore created the - // centers each time we create an object of - // the present type, we would get a different - // list of centers each time. That's not what - // we expect from classes of this type: they - // should reliably represent the same - // function. + // This function does as announced in the introduction, i.e. it creates an + // overlay of exponentials at random places. There is one thing worth + // considering for this class. The issue centers around the problem that the + // class creates the centers of the exponentials using a random function. If + // we therefore created the centers each time we create an object of the + // present type, we would get a different list of centers each time. That's + // not what we expect from classes of this type: they should reliably + // represent the same function. // - // The solution to this problem is to make - // the list of centers a static member - // variable of this class, i.e. there exists - // exactly one such variable for the entire - // program, rather than for each object of - // this type. That's exactly what we are - // going to do. + // The solution to this problem is to make the list of centers a static + // member variable of this class, i.e. there exists exactly one such + // variable for the entire program, rather than for each object of this + // type. That's exactly what we are going to do. // - // The next problem, however, is that we need - // a way to initialize this variable. 
Since - // this variable is initialized at the - // beginning of the program, we can't use a - // regular member function for that since - // there may not be an object of this type - // around at the time. The C++ standard - // therefore says that only non-member and - // static member functions can be used to - // initialize a static variable. We use the - // latter possibility by defining a function - // get_centers that computes the list of + // The next problem, however, is that we need a way to initialize this + // variable. Since this variable is initialized at the beginning of the + // program, we can't use a regular member function for that since there may + // not be an object of this type around at the time. The C++ standard + // therefore says that only non-member and static member functions can be + // used to initialize a static variable. We use the latter possibility by + // defining a function get_centers that computes the list of // center points when called. // - // Note that this class works just fine in - // both 2d and 3d, with the only difference - // being that we use more points in 3d: by - // experimenting we find that we need more - // exponentials in 3d than in 2d (we have - // more ground to cover, after all, if we - // want to keep the distance between centers - // roughly equal), so we choose 40 in 2d and - // 100 in 3d. For any other dimension, the - // function does presently not know what to - // do so simply throws an exception + // Note that this class works just fine in both 2d and 3d, with the only + // difference being that we use more points in 3d: by experimenting we find + // that we need more exponentials in 3d than in 2d (we have more ground to + // cover, after all, if we want to keep the distance between centers roughly + // equal), so we choose 40 in 2d and 100 in 3d. For any other dimension, the + // function does presently not know what to do so simply throws an exception // indicating exactly this. namespace RandomMedium { @@ -474,11 +415,9 @@ namespace Step21 // @sect3{The inverse mobility and saturation functions} - // There are two more pieces of data that we - // need to describe, namely the inverse - // mobility function and the saturation - // curve. Their form is also given in the - // introduction: + // There are two more pieces of data that we need to describe, namely the + // inverse mobility function and the saturation curve. Their form is also + // given in the introduction: double mobility_inverse (const double S, const double viscosity) { @@ -497,28 +436,16 @@ namespace Step21 // @sect3{Linear solvers and preconditioners} - // The linear solvers we use are also - // completely analogous to the ones - // used in step-20. The following - // classes are therefore copied - // verbatim from there. There is a - // single change: if the size of a - // linear system is small, i.e. when - // the mesh is very coarse, then it - // is sometimes not sufficient to set - // a maximum of - // src.size() CG - // iterations before the solver in - // the vmult() function - // converges. (This is, of course, a - // result of numerical round-off, - // since we know that on paper, the - // CG method converges in at most - // src.size() steps.) As - // a consequence, we set the maximum - // number of iterations equal to the - // maximum of the size of the linear - // system and 200. + // The linear solvers we use are also completely analogous to the ones used + // in step-20. The following classes are therefore copied verbatim from + // there. 
There is a single change: if the size of a linear system is small, + // i.e. when the mesh is very coarse, then it is sometimes not sufficient to + // set a maximum of src.size() CG iterations before the solver + // in the vmult() function converges. (This is, of course, a + // result of numerical round-off, since we know that on paper, the CG method + // converges in at most src.size() steps.) As a consequence, we + // set the maximum number of iterations equal to the maximum of the size of + // the linear system and 200. template class InverseMatrix : public Subscriptor { @@ -633,20 +560,15 @@ namespace Step21 // @sect3{TwoPhaseFlowProblem class implementation} - // Here now the implementation of the main - // class. Much of it is actually copied from - // step-20, so we won't comment on it in much - // detail. You should try to get familiar - // with that program first, then most of what - // is happening here should be mostly clear. - - // @sect4{TwoPhaseFlowProblem::TwoPhaseFlowProblem} - // First for the constructor. We use $RT_k - // \times DQ_k \times DQ_k$ spaces. The time - // step is set to zero initially, but will be - // computed before it is needed first, as - // described in a subsection of the - // introduction. + // Here now the implementation of the main class. Much of it is actually + // copied from step-20, so we won't comment on it in much detail. You should + // try to get familiar with that program first, then most of what is + // happening here should be mostly clear. + + // @sect4{TwoPhaseFlowProblem::TwoPhaseFlowProblem} First for the + // constructor. We use $RT_k \times DQ_k \times DQ_k$ spaces. The time step + // is set to zero initially, but will be computed before it is needed first, + // as described in a subsection of the introduction. template TwoPhaseFlowProblem::TwoPhaseFlowProblem (const unsigned int degree) : @@ -664,12 +586,10 @@ namespace Step21 // @sect4{TwoPhaseFlowProblem::make_grid_and_dofs} - // This next function starts out with - // well-known functions calls that create and - // refine a mesh, and then associate degrees - // of freedom with it. It does all the same - // things as in step-20, just now for three - // components instead of two. + // This next function starts out with well-known functions calls that create + // and refine a mesh, and then associate degrees of freedom with it. It does + // all the same things as in step-20, just now for three components instead + // of two. template void TwoPhaseFlowProblem::make_grid_and_dofs () { @@ -739,28 +659,20 @@ namespace Step21 // @sect4{TwoPhaseFlowProblem::assemble_system} - // This is the function that assembles the - // linear system, or at least everything - // except the (1,3) block that depends on the - // still-unknown velocity computed during - // this time step (we deal with this in - // assemble_rhs_S). Much of it - // is again as in step-20, but we have to - // deal with some nonlinearity this time. - // However, the top of the function is pretty - // much as usual (note that we set matrix and - // right hand side to zero at the beginning - // — something we didn't have to do for - // stationary problems since there we use - // each matrix object only once and it is - // empty at the beginning anyway). + // This is the function that assembles the linear system, or at least + // everything except the (1,3) block that depends on the still-unknown + // velocity computed during this time step (we deal with this in + // assemble_rhs_S). 
Much of it is again as in step-20, but we + // have to deal with some nonlinearity this time. However, the top of the + // function is pretty much as usual (note that we set matrix and right hand + // side to zero at the beginning — something we didn't have to do for + // stationary problems since there we use each matrix object only once and + // it is empty at the beginning anyway). // - // Note that in its present form, the - // function uses the permeability implemented - // in the RandomMedium::KInverse - // class. Switching to the single curved - // crack permeability function is as simple - // as just changing the namespace name. + // Note that in its present form, the function uses the permeability + // implemented in the RandomMedium::KInverse class. Switching to the single + // curved crack permeability function is as simple as just changing the + // namespace name. template void TwoPhaseFlowProblem::assemble_system () { @@ -812,44 +724,30 @@ namespace Step21 local_matrix = 0; local_rhs = 0; - // Here's the first significant - // difference: We have to get the - // values of the saturation function of - // the previous time step at the - // quadrature points. To this end, we - // can use the - // FEValues::get_function_values - // (previously already used in step-9, - // step-14 and step-15), a function - // that takes a solution vector and - // returns a list of function values at - // the quadrature points of the present - // cell. In fact, it returns the - // complete vector-valued solution at - // each quadrature point, i.e. not only - // the saturation but also the - // velocities and pressure: + // Here's the first significant difference: We have to get the values + // of the saturation function of the previous time step at the + // quadrature points. To this end, we can use the + // FEValues::get_function_values (previously already used in step-9, + // step-14 and step-15), a function that takes a solution vector and + // returns a list of function values at the quadrature points of the + // present cell. In fact, it returns the complete vector-valued + // solution at each quadrature point, i.e. not only the saturation but + // also the velocities and pressure: fe_values.get_function_values (old_solution, old_solution_values); - // Then we also have to get the values - // of the pressure right hand side and - // of the inverse permeability tensor - // at the quadrature points: + // Then we also have to get the values of the pressure right hand side + // and of the inverse permeability tensor at the quadrature points: pressure_right_hand_side.value_list (fe_values.get_quadrature_points(), pressure_rhs_values); k_inverse.value_list (fe_values.get_quadrature_points(), k_inverse_values); - // With all this, we can now loop over - // all the quadrature points and shape - // functions on this cell and assemble - // those parts of the matrix and right - // hand side that we deal with in this - // function. The individual terms in - // the contributions should be - // self-explanatory given the explicit - // form of the bilinear form stated in - // the introduction: + // With all this, we can now loop over all the quadrature points and + // shape functions on this cell and assemble those parts of the matrix + // and right hand side that we deal with in this function. 
The + // individual terms in the contributions should be self-explanatory + // given the explicit form of the bilinear form stated in the + // introduction: for (unsigned int q=0; q::faces_per_cell; @@ -907,11 +804,8 @@ namespace Step21 } } - // The final step in the loop - // over all cells is to - // transfer local contributions - // into the global matrix and - // right hand side vector: + // The final step in the loop over all cells is to transfer local + // contributions into the global matrix and right hand side vector: cell->get_dof_indices (local_dof_indices); for (unsigned int i=0; i void TwoPhaseFlowProblem::assemble_rhs_S () { @@ -983,12 +873,10 @@ namespace Step21 fe_values.get_function_values (old_solution, old_solution_values); fe_values.get_function_values (solution, present_solution_values); - // First for the cell terms. These are, - // following the formulas in the - // introduction, $(S^n,\sigma)-(F(S^n) - // \mathbf{v}^{n+1},\nabla \sigma)$, - // where $\sigma$ is the saturation - // component of the test function: + // First for the cell terms. These are, following the formulas in the + // introduction, $(S^n,\sigma)-(F(S^n) \mathbf{v}^{n+1},\nabla + // \sigma)$, where $\sigma$ is the saturation component of the test + // function: for (unsigned int q=0; q::faces_per_cell; ++face_no) { @@ -1091,11 +972,9 @@ namespace Step21 // @sect4{TwoPhaseFlowProblem::solve} - // After all these preparations, we finally - // solve the linear system for velocity and - // pressure in the same way as in - // step-20. After that, we have to deal with - // the saturation equation (see below): + // After all these preparations, we finally solve the linear system for + // velocity and pressure in the same way as in step-20. After that, we have + // to deal with the saturation equation (see below): template void TwoPhaseFlowProblem::solve () { @@ -1106,9 +985,8 @@ namespace Step21 Vector tmp2 (solution.block(2).size()); - // First the pressure, using the pressure - // Schur complement of the first two - // equations: + // First the pressure, using the pressure Schur complement of the first + // two equations: { m_inverse.vmult (tmp, system_rhs.block(0)); system_matrix.block(1,0).vmult (schur_rhs, tmp); @@ -1147,36 +1025,24 @@ namespace Step21 m_inverse.vmult (solution.block(0), tmp); } - // Finally, we have to take care of the - // saturation equation. The first business - // we have here is to determine the time - // step using the formula in the - // introduction. Knowing the shape of our - // domain and that we created the mesh by - // regular subdivision of cells, we can - // compute the diameter of each of our - // cells quite easily (in fact we use the - // linear extensions in coordinate - // directions of the cells, not the - // diameter). Note that we will learn a - // more general way to do this in step-24, - // where we use the - // GridTools::minimal_cell_diameter - // function. + // Finally, we have to take care of the saturation equation. The first + // business we have here is to determine the time step using the formula + // in the introduction. Knowing the shape of our domain and that we + // created the mesh by regular subdivision of cells, we can compute the + // diameter of each of our cells quite easily (in fact we use the linear + // extensions in coordinate directions of the cells, not the + // diameter). Note that we will learn a more general way to do this in + // step-24, where we use the GridTools::minimal_cell_diameter function. 
// - // The maximal velocity we compute using a - // helper function to compute the maximal - // velocity defined below, and with all - // this we can evaluate our new time step - // length: + // The maximal velocity we compute using a helper function to compute the + // maximal velocity defined below, and with all this we can evaluate our + // new time step length: time_step = std::pow(0.5, double(n_refinement_steps)) / get_maximal_velocity(); - // The next step is to assemble the right - // hand side, and then to pass everything - // on for solution. At the end, we project - // back saturations onto the physically - // reasonable range: + // The next step is to assemble the right hand side, and then to pass + // everything on for solution. At the end, we project back saturations + // onto the physically reasonable range: assemble_rhs_S (); { @@ -1201,10 +1067,8 @@ namespace Step21 // @sect4{TwoPhaseFlowProblem::output_results} - // There is nothing surprising here. Since - // the program will do a lot of time steps, - // we create an output file only every fifth - // time step. + // There is nothing surprising here. Since the program will do a lot of time + // steps, we create an output file only every fifth time step. template void TwoPhaseFlowProblem::output_results () const { @@ -1251,30 +1115,21 @@ namespace Step21 // @sect4{TwoPhaseFlowProblem::project_back_saturation} - // In this function, we simply run over all - // saturation degrees of freedom and make - // sure that if they should have left the - // physically reasonable range, that they be - // reset to the interval $[0,1]$. To do this, - // we only have to loop over all saturation - // components of the solution vector; these - // are stored in the block 2 (block 0 are the - // velocities, block 1 are the pressures). + // In this function, we simply run over all saturation degrees of freedom + // and make sure that if they should have left the physically reasonable + // range, that they be reset to the interval $[0,1]$. To do this, we only + // have to loop over all saturation components of the solution vector; these + // are stored in the block 2 (block 0 are the velocities, block 1 are the + // pressures). // - // It may be instructive to note that this - // function almost never triggers when the - // time step is chosen as mentioned in the - // introduction. However, if we choose the - // timestep only slightly larger, we get - // plenty of values outside the proper - // range. Strictly speaking, the function is - // therefore unnecessary if we choose the - // time step small enough. In a sense, the - // function is therefore only a safety device - // to avoid situations where our entire - // solution becomes unphysical because - // individual degrees of freedom have become - // unphysical a few time steps earlier. + // It may be instructive to note that this function almost never triggers + // when the time step is chosen as mentioned in the introduction. However, + // if we choose the timestep only slightly larger, we get plenty of values + // outside the proper range. Strictly speaking, the function is therefore + // unnecessary if we choose the time step small enough. In a sense, the + // function is therefore only a safety device to avoid situations where our + // entire solution becomes unphysical because individual degrees of freedom + // have become unphysical a few time steps earlier. 
template void TwoPhaseFlowProblem::project_back_saturation () @@ -1289,12 +1144,9 @@ namespace Step21 // @sect4{TwoPhaseFlowProblem::get_maximal_velocity} - // The following function is used in - // determining the maximal allowable time - // step. What it does is to loop over all - // quadrature points in the domain and find - // what the maximal magnitude of the velocity - // is. + // The following function is used in determining the maximal allowable time + // step. What it does is to loop over all quadrature points in the domain + // and find what the maximal magnitude of the velocity is. template double TwoPhaseFlowProblem::get_maximal_velocity () const @@ -1334,36 +1186,24 @@ namespace Step21 // @sect4{TwoPhaseFlowProblem::run} - // This is the final function of our main - // class. Its brevity speaks for - // itself. There are only two points worth - // noting: First, the function projects the - // initial values onto the finite element - // space at the beginning; the - // VectorTools::project function doing this - // requires an argument indicating the - // hanging node constraints. We have none in - // this program (we compute on a uniformly - // refined mesh), but the function requires - // the argument anyway, of course. So we have - // to create a constraint object. In its - // original state, constraint objects are - // unsorted, and have to be sorted (using the - // ConstraintMatrix::close function) before - // they can be used. This is what we do here, - // and which is why we can't simply call the - // VectorTools::project function with an - // anonymous temporary object - // ConstraintMatrix() as the - // second argument. + // This is the final function of our main class. Its brevity speaks for + // itself. There are only two points worth noting: First, the function + // projects the initial values onto the finite element space at the + // beginning; the VectorTools::project function doing this requires an + // argument indicating the hanging node constraints. We have none in this + // program (we compute on a uniformly refined mesh), but the function + // requires the argument anyway, of course. So we have to create a + // constraint object. In its original state, constraint objects are + // unsorted, and have to be sorted (using the ConstraintMatrix::close + // function) before they can be used. This is what we do here, and which is + // why we can't simply call the VectorTools::project function with an + // anonymous temporary object ConstraintMatrix() as the second + // argument. // - // The second point worth mentioning is that - // we only compute the length of the present - // time step in the middle of solving the - // linear system corresponding to each time - // step. We can therefore output the present - // end time of a time step only at the end of - // the time step. + // The second point worth mentioning is that we only compute the length of + // the present time step in the middle of solving the linear system + // corresponding to each time step. We can therefore output the present end + // time of a time step only at the end of the time step. template void TwoPhaseFlowProblem::run () { @@ -1408,13 +1248,10 @@ namespace Step21 // @sect3{The main function} -// That's it. In the main function, we pass -// the degree of the finite element space to -// the constructor of the TwoPhaseFlowProblem -// object. Here, we use zero-th degree -// elements, i.e. $RT_0\times DQ_0 \times -// DQ_0$. The rest is as in all the other -// programs. +// That's it. 
In the main function, we pass the degree of the finite element +// space to the constructor of the TwoPhaseFlowProblem object. Here, we use +// zero-th degree elements, i.e. $RT_0\times DQ_0 \times DQ_0$. The rest is as +// in all the other programs. int main () { try diff --git a/deal.II/examples/step-22/step-22.cc b/deal.II/examples/step-22/step-22.cc index c010983377..1f6f29d016 100644 --- a/deal.II/examples/step-22/step-22.cc +++ b/deal.II/examples/step-22/step-22.cc @@ -12,8 +12,7 @@ // @sect3{Include files} -// As usual, we start by including -// some well-known files: +// As usual, we start by including some well-known files: #include #include #include @@ -48,52 +47,43 @@ #include #include -// Then we need to include the header file -// for the sparse direct solver UMFPACK: +// Then we need to include the header file for the sparse direct solver +// UMFPACK: #include -// This includes the library for the -// incomplete LU factorization that will -// be used as a preconditioner in 3D: +// This includes the library for the incomplete LU factorization that will be +// used as a preconditioner in 3D: #include // This is C++: #include #include -// As in all programs, the namespace dealii -// is included: +// As in all programs, the namespace dealii is included: namespace Step22 { using namespace dealii; // @sect3{Defining the inner preconditioner type} - // As explained in the introduction, we are - // going to use different preconditioners for - // two and three space dimensions, - // respectively. We distinguish between - // them by the use of the spatial dimension - // as a template parameter. See step-4 for - // details on templates. We are not going to - // create any preconditioner object here, all - // we do is to create class that holds a - // local typedef determining the - // preconditioner class so we can write our - // program in a dimension-independent way. + // As explained in the introduction, we are going to use different + // preconditioners for two and three space dimensions, respectively. We + // distinguish between them by the use of the spatial dimension as a + // template parameter. See step-4 for details on templates. We are not going + // to create any preconditioner object here, all we do is to create class + // that holds a local typedef determining the preconditioner class so we can + // write our program in a dimension-independent way. template struct InnerPreconditioner; - // In 2D, we are going to use a sparse direct - // solver as preconditioner: + // In 2D, we are going to use a sparse direct solver as preconditioner: template <> struct InnerPreconditioner<2> { typedef SparseDirectUMFPACK type; }; - // And the ILU preconditioning in 3D, called - // by SparseILU: + // And the ILU preconditioning in 3D, called by SparseILU: template <> struct InnerPreconditioner<3> { @@ -103,18 +93,13 @@ namespace Step22 // @sect3{The StokesProblem class template} - // This is an adaptation of step-20, so the - // main class and the data types are the - // same as used there. In this example we - // also use adaptive grid refinement, which - // is handled in analogy to - // step-6. According to the discussion in - // the introduction, we are also going to - // use the ConstraintMatrix for - // implementing Dirichlet boundary - // conditions. Hence, we change the name - // hanging_node_constraints - // into constraints. + // This is an adaptation of step-20, so the main class and the data types + // are the same as used there. 
In this example we also use adaptive grid + // refinement, which is handled in analogy to step-6. According to the + // discussion in the introduction, we are also going to use the + // ConstraintMatrix for implementing Dirichlet boundary conditions. Hence, + // we change the name hanging_node_constraints into + // constraints. template class StokesProblem { @@ -143,68 +128,40 @@ namespace Step22 BlockVector solution; BlockVector system_rhs; - // This one is new: We shall use a - // so-called shared pointer structure to - // access the preconditioner. Shared - // pointers are essentially just a - // convenient form of pointers. Several - // shared pointers can point to the same - // object (just like regular pointers), - // but when the last shared pointer - // object to point to a preconditioner - // object is deleted (for example if a - // shared pointer object goes out of - // scope, if the class of which it is a - // member is destroyed, or if the pointer - // is assigned a different preconditioner - // object) then the preconditioner object - // pointed to is also destroyed. This - // ensures that we don't have to manually - // track in how many places a - // preconditioner object is still - // referenced, it can never create a - // memory leak, and can never produce a - // dangling pointer to an already - // destroyed object: + // This one is new: We shall use a so-called shared pointer structure to + // access the preconditioner. Shared pointers are essentially just a + // convenient form of pointers. Several shared pointers can point to the + // same object (just like regular pointers), but when the last shared + // pointer object to point to a preconditioner object is deleted (for + // example if a shared pointer object goes out of scope, if the class of + // which it is a member is destroyed, or if the pointer is assigned a + // different preconditioner object) then the preconditioner object pointed + // to is also destroyed. This ensures that we don't have to manually track + // in how many places a preconditioner object is still referenced, it can + // never create a memory leak, and can never produce a dangling pointer to + // an already destroyed object: std_cxx1x::shared_ptr::type> A_preconditioner; }; // @sect3{Boundary values and right hand side} - // As in step-20 and most other - // example programs, the next task is - // to define the data for the PDE: - // For the Stokes problem, we are - // going to use natural boundary - // values on parts of the boundary - // (i.e. homogenous Neumann-type) for - // which we won't have to do anything - // special (the homogeneity implies - // that the corresponding terms in - // the weak form are simply zero), - // and boundary conditions on the - // velocity (Dirichlet-type) on the - // rest of the boundary, as described - // in the introduction. + // As in step-20 and most other example programs, the next task is to define + // the data for the PDE: For the Stokes problem, we are going to use natural + // boundary values on parts of the boundary (i.e. homogenous Neumann-type) + // for which we won't have to do anything special (the homogeneity implies + // that the corresponding terms in the weak form are simply zero), and + // boundary conditions on the velocity (Dirichlet-type) on the rest of the + // boundary, as described in the introduction. 
// - // In order to enforce the Dirichlet - // boundary values on the velocity, - // we will use the - // VectorTools::interpolate_boundary_values - // function as usual which requires - // us to write a function object with - // as many components as the finite - // element has. In other words, we - // have to define the function on the - // $(u,p)$-space, but we are going to - // filter out the pressure component - // when interpolating the boundary - // values. - - // The following function object is a - // representation of the boundary - // values described in the - // introduction: + // In order to enforce the Dirichlet boundary values on the velocity, we + // will use the VectorTools::interpolate_boundary_values function as usual + // which requires us to write a function object with as many components as + // the finite element has. In other words, we have to define the function on + // the $(u,p)$-space, but we are going to filter out the pressure component + // when interpolating the boundary values. + + // The following function object is a representation of the boundary values + // described in the introduction: template class BoundaryValues : public Function { @@ -244,8 +201,7 @@ namespace Step22 - // We implement similar functions for - // the right hand side which for the + // We implement similar functions for the right hand side which for the // current example is simply zero: template class RightHandSide : public Function @@ -283,38 +239,22 @@ namespace Step22 // @sect3{Linear solvers and preconditioners} - // The linear solvers and preconditioners are - // discussed extensively in the - // introduction. Here, we create the - // respective objects that will be used. + // The linear solvers and preconditioners are discussed extensively in the + // introduction. Here, we create the respective objects that will be used. // @sect4{The InverseMatrix class template} - // The InverseMatrix - // class represents the data - // structure for an inverse - // matrix. It is derived from the one - // in step-20. The only difference is - // that we now do include a - // preconditioner to the matrix since - // we will apply this class to - // different kinds of matrices that - // will require different - // preconditioners (in step-20 we did - // not use a preconditioner in this - // class at all). The types of matrix - // and preconditioner are passed to - // this class via template - // parameters, and matrix and - // preconditioner objects of these - // types will then be passed to the - // constructor when an - // InverseMatrix object - // is created. The member function - // vmult is, as in - // step-20, a multiplication with a - // vector, obtained by solving a - // linear system: + // The InverseMatrix class represents the data structure for an + // inverse matrix. It is derived from the one in step-20. The only + // difference is that we now do include a preconditioner to the matrix since + // we will apply this class to different kinds of matrices that will require + // different preconditioners (in step-20 we did not use a preconditioner in + // this class at all). The types of matrix and preconditioner are passed to + // this class via template parameters, and matrix and preconditioner objects + // of these types will then be passed to the constructor when an + // InverseMatrix object is created. 
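// To illustrate how matrix and preconditioner end up in the constructor,
// here is a sketch of how such an object is created later on, in the
// solve() function (system_matrix and A_preconditioner are the class
// members introduced above):
//
//   const InverseMatrix<SparseMatrix<double>,
//                       typename InnerPreconditioner<dim>::type>
//     A_inverse (system_matrix.block(0,0), *A_preconditioner);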
The member function + // vmult is, as in step-20, a multiplication with a vector, + // obtained by solving a linear system: template class InverseMatrix : public Subscriptor { @@ -340,23 +280,16 @@ namespace Step22 {} - // This is the implementation of the - // vmult function. - - // In this class we use a rather large - // tolerance for the solver control. The - // reason for this is that the function is - // used very frequently, and hence, any - // additional effort to make the residual - // in the CG solve smaller makes the - // solution more expensive. Note that we do - // not only use this class as a - // preconditioner for the Schur complement, - // but also when forming the inverse of the - // Laplace matrix – which is hence - // directly responsible for the accuracy of - // the solution itself, so we can't choose - // a too large tolerance, either. + // This is the implementation of the vmult function. + + // In this class we use a rather large tolerance for the solver control. The + // reason for this is that the function is used very frequently, and hence, + // any additional effort to make the residual in the CG solve smaller makes + // the solution more expensive. Note that we do not only use this class as a + // preconditioner for the Schur complement, but also when forming the + // inverse of the Laplace matrix – which is hence directly responsible + // for the accuracy of the solution itself, so we can't choose a too large + // tolerance, either. template void InverseMatrix::vmult (Vector &dst, const Vector &src) const @@ -372,19 +305,14 @@ namespace Step22 // @sect4{The SchurComplement class template} - // This class implements the Schur complement - // discussed in the introduction. It is in - // analogy to step-20. Though, we now call - // it with a template parameter - // Preconditioner in order to - // access that when specifying the respective - // type of the inverse matrix class. As a - // consequence of the definition above, the - // declaration InverseMatrix now - // contains the second template parameter - // for a preconditioner class as above, which - // affects the SmartPointer - // object m_inverse as well. + // This class implements the Schur complement discussed in the introduction. + // It is in analogy to step-20. Though, we now call it with a template + // parameter Preconditioner in order to access that when + // specifying the respective type of the inverse matrix class. As a + // consequence of the definition above, the declaration + // InverseMatrix now contains the second template parameter for + // a preconditioner class as above, which affects the + // SmartPointer object m_inverse as well. template class SchurComplement : public Subscriptor { @@ -430,32 +358,20 @@ namespace Step22 // @sect4{StokesProblem::StokesProblem} - // The constructor of this class - // looks very similar to the one of - // step-20. The constructor - // initializes the variables for the - // polynomial degree, triangulation, - // finite element system and the dof - // handler. The underlying polynomial - // functions are of order - // degree+1 for the - // vector-valued velocity components - // and of order degree - // for the pressure. This gives the - // LBB-stable element pair - // $Q_{degree+1}^d\times Q_{degree}$, - // often referred to as the - // Taylor-Hood element. + // The constructor of this class looks very similar to the one of + // step-20. 
The constructor initializes the variables for the polynomial + // degree, triangulation, finite element system and the dof handler. The + // underlying polynomial functions are of order degree+1 for + // the vector-valued velocity components and of order degree + // for the pressure. This gives the LBB-stable element pair + // $Q_{degree+1}^d\times Q_{degree}$, often referred to as the Taylor-Hood + // element. // - // Note that we initialize the triangulation - // with a MeshSmoothing argument, which - // ensures that the refinement of cells is - // done in a way that the approximation of - // the PDE solution remains well-behaved - // (problems arise if grids are too - // unstructered), see the documentation of - // Triangulation::MeshSmoothing - // for details. + // Note that we initialize the triangulation with a MeshSmoothing argument, + // which ensures that the refinement of cells is done in a way that the + // approximation of the PDE solution remains well-behaved (problems arise if + // grids are too unstructered), see the documentation of + // Triangulation::MeshSmoothing for details. template StokesProblem::StokesProblem (const unsigned int degree) : @@ -469,94 +385,47 @@ namespace Step22 // @sect4{StokesProblem::setup_dofs} - // Given a mesh, this function - // associates the degrees of freedom - // with it and creates the - // corresponding matrices and - // vectors. At the beginning it also - // releases the pointer to the - // preconditioner object (if the - // shared pointer pointed at anything - // at all at this point) since it - // will definitely not be needed any - // more after this point and will - // have to be re-computed after - // assembling the matrix, and unties - // the sparse matrix from its - // sparsity pattern object. + // Given a mesh, this function associates the degrees of freedom with it and + // creates the corresponding matrices and vectors. At the beginning it also + // releases the pointer to the preconditioner object (if the shared pointer + // pointed at anything at all at this point) since it will definitely not be + // needed any more after this point and will have to be re-computed after + // assembling the matrix, and unties the sparse matrix from its sparsity + // pattern object. // - // We then proceed with distributing - // degrees of freedom and renumbering - // them: In order to make the ILU - // preconditioner (in 3D) work - // efficiently, it is important to - // enumerate the degrees of freedom - // in such a way that it reduces the - // bandwidth of the matrix, or maybe - // more importantly: in such a way - // that the ILU is as close as - // possible to a real LU - // decomposition. On the other hand, - // we need to preserve the block - // structure of velocity and pressure - // already seen in in step-20 and - // step-21. This is done in two - // steps: First, all dofs are - // renumbered to improve the ILU and - // then we renumber once again by - // components. Since - // DoFRenumbering::component_wise - // does not touch the renumbering - // within the individual blocks, the - // basic renumbering from the first - // step remains. As for how the - // renumber degrees of freedom to - // improve the ILU: deal.II has a - // number of algorithms that attempt - // to find orderings to improve ILUs, - // or reduce the bandwidth of - // matrices, or optimize some other - // aspect. 
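// Condensed into code, the two renumbering steps described above look
// roughly as follows (a sketch; the Cuthill-McKee call sits in a part of
// the function that the hunk below does not show, and block_component
// reappears there as well):
//
//   DoFRenumbering::Cuthill_McKee (dof_handler);
//
//   std::vector<unsigned int> block_component (dim+1, 0); // velocities -> block 0
//   block_component[dim] = 1;                             // pressure   -> block 1
//   DoFRenumbering::component_wise (dof_handler, block_component);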
The DoFRenumbering - // namespace shows a comparison of - // the results we obtain with several - // of these algorithms based on the - // testcase discussed here in this - // tutorial program. Here, we will - // use the traditional Cuthill-McKee - // algorithm already used in some of - // the previous tutorial programs. - // In the - // section on improved ILU - // we're going to discuss this issue - // in more detail. - - // There is one more change compared - // to previous tutorial programs: - // There is no reason in sorting the - // dim velocity - // components individually. In fact, - // rather than first enumerating all - // $x$-velocities, then all - // $y$-velocities, etc, we would like - // to keep all velocities at the same - // location together and only - // separate between velocities (all - // components) and pressures. By - // default, this is not what the - // DoFRenumbering::component_wise - // function does: it treats each - // vector component separately; what - // we have to do is group several - // components into "blocks" and pass - // this block structure to that - // function. Consequently, we - // allocate a vector - // block_component with - // as many elements as there are - // components and describe all - // velocity components to correspond - // to block 0, while the pressure - // component will form block 1: + // We then proceed with distributing degrees of freedom and renumbering + // them: In order to make the ILU preconditioner (in 3D) work efficiently, + // it is important to enumerate the degrees of freedom in such a way that it + // reduces the bandwidth of the matrix, or maybe more importantly: in such a + // way that the ILU is as close as possible to a real LU decomposition. On + // the other hand, we need to preserve the block structure of velocity and + // pressure already seen in in step-20 and step-21. This is done in two + // steps: First, all dofs are renumbered to improve the ILU and then we + // renumber once again by components. Since + // DoFRenumbering::component_wise does not touch the + // renumbering within the individual blocks, the basic renumbering from the + // first step remains. As for how the renumber degrees of freedom to improve + // the ILU: deal.II has a number of algorithms that attempt to find + // orderings to improve ILUs, or reduce the bandwidth of matrices, or + // optimize some other aspect. The DoFRenumbering namespace shows a + // comparison of the results we obtain with several of these algorithms + // based on the testcase discussed here in this tutorial program. Here, we + // will use the traditional Cuthill-McKee algorithm already used in some of + // the previous tutorial programs. In the section + // on improved ILU we're going to discuss this issue in more detail. + + // There is one more change compared to previous tutorial programs: There is + // no reason in sorting the dim velocity components + // individually. In fact, rather than first enumerating all $x$-velocities, + // then all $y$-velocities, etc, we would like to keep all velocities at the + // same location together and only separate between velocities (all + // components) and pressures. By default, this is not what the + // DoFRenumbering::component_wise function does: it treats each vector + // component separately; what we have to do is group several components into + // "blocks" and pass this block structure to that function. 
Consequently, we + // allocate a vector block_component with as many elements as + // there are components and describe all velocity components to correspond + // to block 0, while the pressure component will form block 1: template void StokesProblem::setup_dofs () { @@ -570,47 +439,27 @@ namespace Step22 block_component[dim] = 1; DoFRenumbering::component_wise (dof_handler, block_component); - // Now comes the implementation of - // Dirichlet boundary conditions, which - // should be evident after the discussion - // in the introduction. All that changed - // is that the function already appears - // in the setup functions, whereas we - // were used to see it in some assembly - // routine. Further down below where we - // set up the mesh, we will associate the - // top boundary where we impose Dirichlet - // boundary conditions with boundary - // indicator 1. We will have to pass - // this boundary indicator as second - // argument to the function below - // interpolating boundary values. There - // is one more thing, though. The - // function describing the Dirichlet - // conditions was defined for all - // components, both velocity and - // pressure. However, the Dirichlet - // conditions are to be set for the - // velocity only. To this end, we use a - // ComponentMask that only selects the - // velocity components. The component - // mask is obtained from the finite - // element by specifying the particular - // components we want. Since we use - // adaptively refined grids the - // constraint matrix needs to be first - // filled with hanging node constraints - // generated from the DoF handler. Note - // the order of the two functions — - // we first compute the hanging node - // constraints, and then insert the - // boundary values into the constraint - // matrix. This makes sure that we - // respect H1 conformity on - // boundaries with hanging nodes (in - // three space dimensions), where the - // hanging node needs to dominate the - // Dirichlet boundary values. + // Now comes the implementation of Dirichlet boundary conditions, which + // should be evident after the discussion in the introduction. All that + // changed is that the function already appears in the setup functions, + // whereas we were used to see it in some assembly routine. Further down + // below where we set up the mesh, we will associate the top boundary + // where we impose Dirichlet boundary conditions with boundary indicator + // 1. We will have to pass this boundary indicator as second argument to + // the function below interpolating boundary values. There is one more + // thing, though. The function describing the Dirichlet conditions was + // defined for all components, both velocity and pressure. However, the + // Dirichlet conditions are to be set for the velocity only. To this end, + // we use a ComponentMask that only selects the velocity components. The + // component mask is obtained from the finite element by specifying the + // particular components we want. Since we use adaptively refined grids + // the constraint matrix needs to be first filled with hanging node + // constraints generated from the DoF handler. Note the order of the two + // functions — we first compute the hanging node constraints, and + // then insert the boundary values into the constraint matrix. This makes + // sure that we respect H1 conformity on boundaries with + // hanging nodes (in three space dimensions), where the hanging node needs + // to dominate the Dirichlet boundary values. 
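// A sketch of the order just described -- hanging node constraints
// first, then the Dirichlet values restricted to the velocities via a
// component mask. The extractor declared here is one way to obtain that
// mask; the hunk below only shows the beginning and end of this block:
{
  constraints.clear ();

  FEValuesExtractors::Vector velocities(0);
  DoFTools::make_hanging_node_constraints (dof_handler,
                                           constraints);
  VectorTools::interpolate_boundary_values (dof_handler,
                                            1,
                                            BoundaryValues<dim>(),
                                            constraints,
                                            fe.component_mask(velocities));
}
constraints.close ();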
{ constraints.clear (); @@ -626,17 +475,12 @@ namespace Step22 constraints.close (); - // In analogy to step-20, we count the dofs - // in the individual components. We could - // do this in the same way as there, but we - // want to operate on the block structure - // we used already for the renumbering: The - // function - // DoFTools::count_dofs_per_block - // does the same as - // DoFTools::count_dofs_per_component, - // but now grouped as velocity and pressure - // block via block_component. + // In analogy to step-20, we count the dofs in the individual components. + // We could do this in the same way as there, but we want to operate on + // the block structure we used already for the renumbering: The function + // DoFTools::count_dofs_per_block does the same as + // DoFTools::count_dofs_per_component, but now grouped as + // velocity and pressure block via block_component. std::vector dofs_per_block (2); DoFTools::count_dofs_per_block (dof_handler, dofs_per_block, block_component); const unsigned int n_u = dofs_per_block[0], @@ -650,84 +494,48 @@ namespace Step22 << " (" << n_u << '+' << n_p << ')' << std::endl; - // The next task is to allocate a - // sparsity pattern for the system matrix - // we will create. We could do this in - // the same way as in step-20, - // i.e. directly build an object of type - // SparsityPattern through - // DoFTools::make_sparsity_pattern. However, - // there is a major reason not to do so: - // In 3D, the function - // DoFTools::max_couplings_between_dofs - // yields a conservative but rather large - // number for the coupling between the - // individual dofs, so that the memory - // initially provided for the creation of - // the sparsity pattern of the matrix is - // far too much -- so much actually that - // the initial sparsity pattern won't - // even fit into the physical memory of - // most systems already for - // moderately-sized 3D problems, see also - // the discussion in step-18. Instead, - // we first build a temporary object that - // uses a different data structure that - // doesn't require allocating more memory - // than necessary but isn't suitable for - // use as a basis of SparseMatrix or - // BlockSparseMatrix objects; in a second - // step we then copy this object into an - // object of BlockSparsityPattern. This - // is entirely analgous to what we - // already did in step-11 and step-18. + // The next task is to allocate a sparsity pattern for the system matrix + // we will create. We could do this in the same way as in step-20, + // i.e. directly build an object of type SparsityPattern through + // DoFTools::make_sparsity_pattern. However, there is a major reason not + // to do so: In 3D, the function DoFTools::max_couplings_between_dofs + // yields a conservative but rather large number for the coupling between + // the individual dofs, so that the memory initially provided for the + // creation of the sparsity pattern of the matrix is far too much -- so + // much actually that the initial sparsity pattern won't even fit into the + // physical memory of most systems already for moderately-sized 3D + // problems, see also the discussion in step-18. Instead, we first build + // a temporary object that uses a different data structure that doesn't + // require allocating more memory than necessary but isn't suitable for + // use as a basis of SparseMatrix or BlockSparseMatrix objects; in a + // second step we then copy this object into an object of + // BlockSparsityPattern. 
This is entirely analgous to what we already did + // in step-11 and step-18. // - // There is one snag again here, though: - // it turns out that using the - // CompressedSparsityPattern (or the - // block version - // BlockCompressedSparsityPattern we - // would use here) has a bottleneck that - // makes the algorithm to build the - // sparsity pattern be quadratic in the - // number of degrees of freedom. This - // doesn't become noticeable until we get - // well into the range of several 100,000 - // degrees of freedom, but eventually - // dominates the setup of the linear - // system when we get to more than a - // million degrees of freedom. This is - // due to the data structures used in the - // CompressedSparsityPattern class, - // nothing that can easily be - // changed. Fortunately, there is an easy - // solution: the - // CompressedSimpleSparsityPattern class - // (and its block variant - // BlockCompressedSimpleSparsityPattern) - // has exactly the same interface, uses a - // different %internal data structure and - // is linear in the number of degrees of - // freedom and therefore much more - // efficient for large problems. As - // another alternative, we could also - // have chosen the class - // BlockCompressedSetSparsityPattern that - // uses yet another strategy for %internal - // memory management. Though, that class - // turns out to be more memory-demanding - // than - // BlockCompressedSimpleSparsityPattern - // for this example. + // There is one snag again here, though: it turns out that using the + // CompressedSparsityPattern (or the block version + // BlockCompressedSparsityPattern we would use here) has a bottleneck that + // makes the algorithm to build the sparsity pattern be quadratic in the + // number of degrees of freedom. This doesn't become noticeable until we + // get well into the range of several 100,000 degrees of freedom, but + // eventually dominates the setup of the linear system when we get to more + // than a million degrees of freedom. This is due to the data structures + // used in the CompressedSparsityPattern class, nothing that can easily be + // changed. Fortunately, there is an easy solution: the + // CompressedSimpleSparsityPattern class (and its block variant + // BlockCompressedSimpleSparsityPattern) has exactly the same interface, + // uses a different %internal data structure and is linear in the number + // of degrees of freedom and therefore much more efficient for large + // problems. As another alternative, we could also have chosen the class + // BlockCompressedSetSparsityPattern that uses yet another strategy for + // %internal memory management. Though, that class turns out to be more + // memory-demanding than BlockCompressedSimpleSparsityPattern for this + // example. // - // Consequently, this is the class that - // we will use for our intermediate - // sparsity representation. All this is - // done inside a new scope, which means - // that the memory of csp - // will be released once the information - // has been copied to - // sparsity_pattern. + // Consequently, this is the class that we will use for our intermediate + // sparsity representation. All this is done inside a new scope, which + // means that the memory of csp will be released once the + // information has been copied to sparsity_pattern. 
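// The whole scoped block described above fits into a few lines; roughly
// (n_u and n_p are the block sizes computed earlier, and the middle of
// this block is elided in the hunk below):
{
  BlockCompressedSimpleSparsityPattern csp (2,2);

  csp.block(0,0).reinit (n_u, n_u);
  csp.block(1,0).reinit (n_p, n_u);
  csp.block(0,1).reinit (n_u, n_p);
  csp.block(1,1).reinit (n_p, n_p);
  csp.collect_sizes();

  DoFTools::make_sparsity_pattern (dof_handler, csp, constraints, false);
  sparsity_pattern.copy_from (csp);
}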
{ BlockCompressedSimpleSparsityPattern csp (2,2); @@ -742,10 +550,8 @@ namespace Step22 sparsity_pattern.copy_from (csp); } - // Finally, the system matrix, - // solution and right hand side are - // created from the block - // structure as in step-20: + // Finally, the system matrix, solution and right hand side are created + // from the block structure as in step-20: system_matrix.reinit (sparsity_pattern); solution.reinit (2); @@ -762,14 +568,10 @@ namespace Step22 // @sect4{StokesProblem::assemble_system} - // The assembly process follows the - // discussion in step-20 and in the - // introduction. We use the well-known - // abbreviations for the data structures - // that hold the local matrix, right - // hand side, and global - // numbering of the degrees of freedom - // for the present cell. + // The assembly process follows the discussion in step-20 and in the + // introduction. We use the well-known abbreviations for the data structures + // that hold the local matrix, right hand side, and global numbering of the + // degrees of freedom for the present cell. template void StokesProblem::assemble_system () { @@ -797,63 +599,35 @@ namespace Step22 std::vector > rhs_values (n_q_points, Vector(dim+1)); - // Next, we need two objects that work as - // extractors for the FEValues - // object. Their use is explained in detail - // in the report on @ref vector_valued : + // Next, we need two objects that work as extractors for the FEValues + // object. Their use is explained in detail in the report on @ref + // vector_valued : const FEValuesExtractors::Vector velocities (0); const FEValuesExtractors::Scalar pressure (dim); - // As an extension over step-20 and - // step-21, we include a few - // optimizations that make assembly - // much faster for this particular - // problem. The improvements are - // based on the observation that we - // do a few calculations too many - // times when we do as in step-20: - // The symmetric gradient actually - // has dofs_per_cell - // different values per quadrature - // point, but we extract it - // dofs_per_cell*dofs_per_cell - // times from the FEValues object - - // for both the loop over - // i and the inner - // loop over j. In 3d, - // that means evaluating it - // $89^2=7921$ instead of $89$ - // times, a not insignificant - // difference. + // As an extension over step-20 and step-21, we include a few + // optimizations that make assembly much faster for this particular + // problem. The improvements are based on the observation that we do a + // few calculations too many times when we do as in step-20: The symmetric + // gradient actually has dofs_per_cell different values per + // quadrature point, but we extract it + // dofs_per_cell*dofs_per_cell times from the FEValues object + // - for both the loop over i and the inner loop over + // j. In 3d, that means evaluating it $89^2=7921$ instead of + // $89$ times, a not insignificant difference. // - // So what we're - // going to do here is to avoid - // such repeated calculations by - // getting a vector of rank-2 - // tensors (and similarly for - // the divergence and the basis - // function value on pressure) - // at the quadrature point prior - // to starting the loop over the - // dofs on the cell. First, we - // create the respective objects - // that will hold these - // values. Then, we start the - // loop over all cells and the loop - // over the quadrature points, - // where we first extract these - // values. 
There is one more - // optimization we implement here: - // the local matrix (as well as - // the global one) is going to - // be symmetric, since all - // the operations involved are - // symmetric with respect to $i$ - // and $j$. This is implemented by - // simply running the inner loop - // not to dofs_per_cell, - // but only up to i, - // the index of the outer loop. + // So what we're going to do here is to avoid such repeated calculations + // by getting a vector of rank-2 tensors (and similarly for the divergence + // and the basis function value on pressure) at the quadrature point prior + // to starting the loop over the dofs on the cell. First, we create the + // respective objects that will hold these values. Then, we start the loop + // over all cells and the loop over the quadrature points, where we first + // extract these values. There is one more optimization we implement here: + // the local matrix (as well as the global one) is going to be symmetric, + // since all the operations involved are symmetric with respect to $i$ and + // $j$. This is implemented by simply running the inner loop not to + // dofs_per_cell, but only up to i, the index of + // the outer loop. std::vector > symgrad_phi_u (dofs_per_cell); std::vector div_phi_u (dofs_per_cell); std::vector phi_p (dofs_per_cell); @@ -899,47 +673,27 @@ namespace Step22 } } - // Note that in the above computation - // of the local matrix contribution - // we added the term phi_p[i] * - // phi_p[j] , yielding a - // pressure mass matrix in the - // $(1,1)$ block of the matrix as - // discussed in the - // introduction. That this term only - // ends up in the $(1,1)$ block stems - // from the fact that both of the - // factors in phi_p[i] * - // phi_p[j] are only non-zero - // when all the other terms vanish - // (and the other way around). + // Note that in the above computation of the local matrix contribution + // we added the term phi_p[i] * phi_p[j] , yielding a + // pressure mass matrix in the $(1,1)$ block of the matrix as + // discussed in the introduction. That this term only ends up in the + // $(1,1)$ block stems from the fact that both of the factors in + // phi_p[i] * phi_p[j] are only non-zero when all the + // other terms vanish (and the other way around). // - // Note also that operator* is - // overloaded for symmetric - // tensors, yielding the scalar - // product between the two - // tensors in the first line of - // the local matrix - // contribution. - - // Before we can write the local data - // into the global matrix (and - // simultaneously use the - // ConstraintMatrix object to apply - // Dirichlet boundary conditions and - // eliminate hanging node - // constraints, as we discussed in - // the introduction), we have to be - // careful about one thing, - // though. We have only build up half - // of the local matrix because of - // symmetry, but we're going to save - // the full system matrix in order to - // use the standard functions for - // solution. This is done by flipping - // the indices in case we are - // pointing into the empty part of - // the local matrix. + // Note also that operator* is overloaded for symmetric tensors, + // yielding the scalar product between the two tensors in the first + // line of the local matrix contribution. 
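// For reference, the loop structure just described boils down to the
// following sketch (the right hand side contribution is omitted for
// brevity; the variable names match the declarations above):
//
//   for (unsigned int q=0; q<n_q_points; ++q)
//     {
//       for (unsigned int k=0; k<dofs_per_cell; ++k)
//         {
//           symgrad_phi_u[k] = fe_values[velocities].symmetric_gradient (k, q);
//           div_phi_u[k]     = fe_values[velocities].divergence (k, q);
//           phi_p[k]         = fe_values[pressure].value (k, q);
//         }
//
//       for (unsigned int i=0; i<dofs_per_cell; ++i)
//         for (unsigned int j=0; j<=i; ++j)
//           local_matrix(i,j) += (2 * (symgrad_phi_u[i] * symgrad_phi_u[j])
//                                 - div_phi_u[i] * phi_p[j]
//                                 - phi_p[i] * div_phi_u[j]
//                                 + phi_p[i] * phi_p[j])
//                                * fe_values.JxW(q);
//     }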
+ + // Before we can write the local data into the global matrix (and + // simultaneously use the ConstraintMatrix object to apply Dirichlet + // boundary conditions and eliminate hanging node constraints, as we + // discussed in the introduction), we have to be careful about one + // thing, though. We have only build up half of the local matrix + // because of symmetry, but we're going to save the full system matrix + // in order to use the standard functions for solution. This is done + // by flipping the indices in case we are pointing into the empty part + // of the local matrix. for (unsigned int i=0; iblock(0,0) in the - // system matrix. As mentioned - // above, this depends on the - // spatial dimension. Since the two - // classes described by the - // InnerPreconditioner::type - // typedef have the same interface, - // we do not have to do anything - // different whether we want to use - // a sparse direct solver or an - // ILU: + // Before we're going to solve this linear system, we generate a + // preconditioner for the velocity-velocity matrix, i.e., + // block(0,0) in the system matrix. As mentioned above, this + // depends on the spatial dimension. Since the two classes described by + // the InnerPreconditioner::type typedef have the same + // interface, we do not have to do anything different whether we want to + // use a sparse direct solver or an ILU: std::cout << " Computing preconditioner..." << std::endl << std::flush; A_preconditioner @@ -978,18 +724,13 @@ namespace Step22 // @sect4{StokesProblem::solve} - // After the discussion in the introduction - // and the definition of the respective - // classes above, the implementation of the - // solve function is rather - // straigt-forward and done in a similar way - // as in step-20. To start with, we need an - // object of the InverseMatrix - // class that represents the inverse of the - // matrix A. As described in the - // introduction, the inverse is generated - // with the help of an inner preconditioner - // of type + // After the discussion in the introduction and the definition of the + // respective classes above, the implementation of the solve + // function is rather straigt-forward and done in a similar way as in + // step-20. To start with, we need an object of the + // InverseMatrix class that represents the inverse of the + // matrix A. As described in the introduction, the inverse is generated with + // the help of an inner preconditioner of type // InnerPreconditioner::type. template void StokesProblem::solve () @@ -999,14 +740,11 @@ namespace Step22 A_inverse (system_matrix.block(0,0), *A_preconditioner); Vector tmp (solution.block(0).size()); - // This is as in step-20. We generate the - // right hand side $B A^{-1} F - G$ for the - // Schur complement and an object that - // represents the respective linear - // operation $B A^{-1} B^T$, now with a - // template parameter indicating the - // preconditioner - in accordance with the - // definition of the class. + // This is as in step-20. We generate the right hand side $B A^{-1} F - G$ + // for the Schur complement and an object that represents the respective + // linear operation $B A^{-1} B^T$, now with a template parameter + // indicating the preconditioner - in accordance with the definition of + // the class. 
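// For reference, both objects come from block elimination of the saddle
// point system (using the notation of the introduction): from
// $\left(\begin{array}{cc} A & B^T \\ B & 0 \end{array}\right)
//  \left(\begin{array}{c} U \\ P \end{array}\right) =
//  \left(\begin{array}{c} F \\ G \end{array}\right)$,
// the first row gives $U = A^{-1}(F - B^T P)$, and substituting this into
// the second row yields the Schur complement equation
// $B A^{-1} B^T \, P = B A^{-1} F - G$ -- exactly the operator and right
// hand side assembled below.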
{ Vector schur_rhs (solution.block(1).size()); A_inverse.vmult (tmp, system_rhs.block(0)); @@ -1016,56 +754,34 @@ namespace Step22 SchurComplement::type> schur_complement (system_matrix, A_inverse); - // The usual control structures for - // the solver call are created... + // The usual control structures for the solver call are created... SolverControl solver_control (solution.block(1).size(), 1e-6*schur_rhs.l2_norm()); SolverCG<> cg (solver_control); - // Now to the preconditioner to the - // Schur complement. As explained in - // the introduction, the - // preconditioning is done by a mass - // matrix in the pressure variable. It - // is stored in the $(1,1)$ block of - // the system matrix (that is not used - // anywhere else but in - // preconditioning). + // Now to the preconditioner to the Schur complement. As explained in + // the introduction, the preconditioning is done by a mass matrix in the + // pressure variable. It is stored in the $(1,1)$ block of the system + // matrix (that is not used anywhere else but in preconditioning). // - // Actually, the solver needs to have - // the preconditioner in the form - // $P^{-1}$, so we need to create an - // inverse operation. Once again, we - // use an object of the class - // InverseMatrix, which - // implements the vmult - // operation that is needed by the - // solver. In this case, we have to - // invert the pressure mass matrix. As - // it already turned out in earlier - // tutorial programs, the inversion of - // a mass matrix is a rather cheap and - // straight-forward operation (compared - // to, e.g., a Laplace matrix). The CG - // method with ILU preconditioning - // converges in 5-10 steps, - // independently on the mesh size. - // This is precisely what we do here: - // We choose another ILU preconditioner - // and take it along to the - // InverseMatrix object via the - // corresponding template parameter. A - // CG solver is then called within the - // vmult operation of the inverse - // matrix. + // Actually, the solver needs to have the preconditioner in the form + // $P^{-1}$, so we need to create an inverse operation. Once again, we + // use an object of the class InverseMatrix, which + // implements the vmult operation that is needed by the + // solver. In this case, we have to invert the pressure mass matrix. As + // it already turned out in earlier tutorial programs, the inversion of + // a mass matrix is a rather cheap and straight-forward operation + // (compared to, e.g., a Laplace matrix). The CG method with ILU + // preconditioning converges in 5-10 steps, independently on the mesh + // size. This is precisely what we do here: We choose another ILU + // preconditioner and take it along to the InverseMatrix object via the + // corresponding template parameter. A CG solver is then called within + // the vmult operation of the inverse matrix. // - // An alternative that is cheaper to - // build, but needs more iterations - // afterwards, would be to choose a - // SSOR preconditioner with factor - // 1.2. It needs about twice the number - // of iterations, but the costs for its - // generation are almost neglible. + // An alternative that is cheaper to build, but needs more iterations + // afterwards, would be to choose a SSOR preconditioner with factor + // 1.2. It needs about twice the number of iterations, but the costs for + // its generation are almost neglible. 
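// The SSOR alternative mentioned above would, as a hypothetical
// replacement for the ILU that is actually used next, look roughly like
// this:
//
//   PreconditionSSOR<> preconditioner;
//   preconditioner.initialize (system_matrix.block(1,1), 1.2);
//
//   InverseMatrix<SparseMatrix<double>, PreconditionSSOR<> >
//     m_inverse (system_matrix.block(1,1), preconditioner);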
SparseILU preconditioner; preconditioner.initialize (system_matrix.block(1,1), SparseILU::AdditionalData()); @@ -1073,20 +789,15 @@ namespace Step22 InverseMatrix,SparseILU > m_inverse (system_matrix.block(1,1), preconditioner); - // With the Schur complement and an - // efficient preconditioner at hand, we - // can solve the respective equation - // for the pressure (i.e. block 0 in - // the solution vector) in the usual - // way: + // With the Schur complement and an efficient preconditioner at hand, we + // can solve the respective equation for the pressure (i.e. block 0 in + // the solution vector) in the usual way: cg.solve (schur_complement, solution.block(1), schur_rhs, m_inverse); - // After this first solution step, the - // hanging node constraints have to be - // distributed to the solution in order - // to achieve a consistent pressure - // field. + // After this first solution step, the hanging node constraints have to + // be distributed to the solution in order to achieve a consistent + // pressure field. constraints.distribute (solution); std::cout << " " @@ -1095,17 +806,12 @@ namespace Step22 << std::endl; } - // As in step-20, we finally need to - // solve for the velocity equation where - // we plug in the solution to the - // pressure equation. This involves only - // objects we already know - so we simply - // multiply $p$ by $B^T$, subtract the - // right hand side and multiply by the - // inverse of $A$. At the end, we need to - // distribute the constraints from - // hanging nodes in order to obtain a - // constistent flow field: + // As in step-20, we finally need to solve for the velocity equation where + // we plug in the solution to the pressure equation. This involves only + // objects we already know - so we simply multiply $p$ by $B^T$, subtract + // the right hand side and multiply by the inverse of $A$. At the end, we + // need to distribute the constraints from hanging nodes in order to + // obtain a constistent flow field: { system_matrix.block(0,1).vmult (tmp, solution.block(1)); tmp *= -1; @@ -1120,46 +826,29 @@ namespace Step22 // @sect4{StokesProblem::output_results} - // The next function generates graphical - // output. In this example, we are going to - // use the VTK file format. We attach - // names to the individual variables in the - // problem: velocity to the - // dim components of velocity - // and pressure to the - // pressure. + // The next function generates graphical output. In this example, we are + // going to use the VTK file format. We attach names to the individual + // variables in the problem: velocity to the dim + // components of velocity and pressure to the pressure. // - // Not all visualization programs have the - // ability to group individual vector - // components into a vector to provide - // vector plots; in particular, this holds - // for some VTK-based visualization - // programs. In this case, the logical - // grouping of components into vectors - // should already be described in the file - // containing the data. In other words, - // what we need to do is provide our output - // writers with a way to know which of the - // components of the finite element - // logically form a vector (with $d$ - // components in $d$ space dimensions) - // rather than letting them assume that we - // simply have a bunch of scalar fields. 
- // This is achieved using the members of - // the - // DataComponentInterpretation - // namespace: as with the filename, we - // create a vector in which the first - // dim components refer to the - // velocities and are given the tag - // DataComponentInterpretation::component_is_part_of_vector; - // we finally push one tag - // DataComponentInterpretation::component_is_scalar - // to describe the grouping of the pressure - // variable. - - // The rest of the function is then - // the same as in step-20. + // Not all visualization programs have the ability to group individual + // vector components into a vector to provide vector plots; in particular, + // this holds for some VTK-based visualization programs. In this case, the + // logical grouping of components into vectors should already be described + // in the file containing the data. In other words, what we need to do is + // provide our output writers with a way to know which of the components of + // the finite element logically form a vector (with $d$ components in $d$ + // space dimensions) rather than letting them assume that we simply have a + // bunch of scalar fields. This is achieved using the members of the + // DataComponentInterpretation namespace: as with the filename, + // we create a vector in which the first dim components refer + // to the velocities and are given the tag + // DataComponentInterpretation::component_is_part_of_vector; we + // finally push one tag + // DataComponentInterpretation::component_is_scalar to describe + // the grouping of the pressure variable. + + // The rest of the function is then the same as in step-20. template void StokesProblem::output_results (const unsigned int refinement_cycle) const @@ -1192,23 +881,15 @@ namespace Step22 // @sect4{StokesProblem::refine_mesh} - // This is the last interesting function of - // the StokesProblem class. - // As indicated by its name, it takes the - // solution to the problem and refines the - // mesh where this is needed. The procedure - // is the same as in the respective step in - // step-6, with the exception that we base - // the refinement only on the change in - // pressure, i.e., we call the Kelly error - // estimator with a mask object of type - // ComponentMask that selects the single - // scalar component for the pressure that - // we are interested in (we get such a mask - // from the finite element class by - // specifying the component we - // want). Additionally, we do not coarsen - // the grid again: + // This is the last interesting function of the StokesProblem + // class. As indicated by its name, it takes the solution to the problem + // and refines the mesh where this is needed. The procedure is the same as + // in the respective step in step-6, with the exception that we base the + // refinement only on the change in pressure, i.e., we call the Kelly error + // estimator with a mask object of type ComponentMask that selects the + // single scalar component for the pressure that we are interested in (we + // get such a mask from the finite element class by specifying the component + // we want). Additionally, we do not coarsen the grid again: template void StokesProblem::refine_mesh () @@ -1232,25 +913,18 @@ namespace Step22 // @sect4{StokesProblem::run} - // The last step in the Stokes class is, as - // usual, the function that generates the - // initial grid and calls the other - // functions in the respective order. 
+ // The last step in the Stokes class is, as usual, the function that + // generates the initial grid and calls the other functions in the + // respective order. // - // We start off with a rectangle of size $4 - // \times 1$ (in 2d) or $4 \times 1 \times - // 1$ (in 3d), placed in $R^2/R^3$ as - // $(-2,2)\times(-1,0)$ or - // $(-2,2)\times(0,1)\times(-1,0)$, - // respectively. It is natural to start - // with equal mesh size in each direction, - // so we subdivide the initial rectangle - // four times in the first coordinate - // direction. To limit the scope of the - // variables involved in the creation of - // the mesh to the range where we actually - // need them, we put the entire block - // between a pair of braces: + // We start off with a rectangle of size $4 \times 1$ (in 2d) or $4 \times 1 + // \times 1$ (in 3d), placed in $R^2/R^3$ as $(-2,2)\times(-1,0)$ or + // $(-2,2)\times(0,1)\times(-1,0)$, respectively. It is natural to start + // with equal mesh size in each direction, so we subdivide the initial + // rectangle four times in the first coordinate direction. To limit the + // scope of the variables involved in the creation of the mesh to the range + // where we actually need them, we put the entire block between a pair of + // braces: template void StokesProblem::run () { @@ -1271,12 +945,10 @@ namespace Step22 top_right); } - // A boundary indicator of 1 is set to all - // boundaries that are subject to Dirichlet - // boundary conditions, i.e. to faces that - // are located at 0 in the last coordinate - // direction. See the example description - // above for details. + // A boundary indicator of 1 is set to all boundaries that are subject to + // Dirichlet boundary conditions, i.e. to faces that are located at 0 in + // the last coordinate direction. See the example description above for + // details. for (typename Triangulation::active_cell_iterator cell = triangulation.begin_active(); cell != triangulation.end(); ++cell) @@ -1285,18 +957,14 @@ namespace Step22 cell->face(f)->set_all_boundary_indicators(1); - // We then apply an initial refinement - // before solving for the first time. In - // 3D, there are going to be more degrees - // of freedom, so we refine less there: + // We then apply an initial refinement before solving for the first + // time. In 3D, there are going to be more degrees of freedom, so we + // refine less there: triangulation.refine_global (4-dim); - // As first seen in step-6, we cycle over - // the different refinement levels and - // refine (except for the first cycle), - // setup the degrees of freedom and - // matrices, assemble, solve and create - // output: + // As first seen in step-6, we cycle over the different refinement levels + // and refine (except for the first cycle), setup the degrees of freedom + // and matrices, assemble, solve and create output: for (unsigned int refinement_cycle = 0; refinement_cycle<6; ++refinement_cycle) { @@ -1323,10 +991,8 @@ namespace Step22 // @sect3{The main function} -// The main function is the same as in -// step-20. We pass the element degree as a -// parameter and choose the space dimension -// at the well-known template slot. +// The main function is the same as in step-20. We pass the element degree as +// a parameter and choose the space dimension at the well-known template slot. 
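// As a sketch, for the two-dimensional case with the Taylor-Hood pair
// $Q_2 \times Q_1$ (i.e. degree 1 passed to the constructor), the body of
// the main function essentially reads:
//
//   StokesProblem<2> flow_problem (1);
//   flow_problem.run ();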
int main () { try diff --git a/deal.II/examples/step-23/step-23.cc b/deal.II/examples/step-23/step-23.cc index 07538397b4..037a02a82b 100644 --- a/deal.II/examples/step-23/step-23.cc +++ b/deal.II/examples/step-23/step-23.cc @@ -13,9 +13,8 @@ // @sect3{Include files} -// We start with the usual assortment -// of include files that we've seen -// in so many of the previous tests: +// We start with the usual assortment of include files that we've seen in so +// many of the previous tests: #include #include #include @@ -44,72 +43,39 @@ #include #include -// Here are the only three include -// files of some new interest: The -// first one is already used, for -// example, for the -// VectorTools::interpolate_boundary_values -// and -// VectorTools::apply_boundary_values -// functions. However, we here use -// another function in that class, -// VectorTools::project to compute -// our initial values as the $L^2$ -// projection of the continuous -// initial values. Furthermore, we -// use -// VectorTools::create_right_hand_side -// to generate the integrals -// $(f^n,\phi^n_i)$. These were -// previously always generated by -// hand in -// assemble_system or -// similar functions in application -// code. However, we're too lazy to -// do that here, so simply use a -// library function: +// Here are the only three include files of some new interest: The first one +// is already used, for example, for the +// VectorTools::interpolate_boundary_values and +// VectorTools::apply_boundary_values functions. However, we here use another +// function in that class, VectorTools::project to compute our initial values +// as the $L^2$ projection of the continuous initial values. Furthermore, we +// use VectorTools::create_right_hand_side to generate the integrals +// $(f^n,\phi^n_i)$. These were previously always generated by hand in +// assemble_system or similar functions in application +// code. However, we're too lazy to do that here, so simply use a library +// function: #include -// In a very similar vein, we are -// also too lazy to write the code to -// assemble mass and Laplace -// matrices, although it would have -// only taken copying the relevant -// code from any number of previous -// tutorial programs. Rather, we want -// to focus on the things that are -// truly new to this program and -// therefore use the -// MatrixTools::create_mass_matrix -// and -// MatrixTools::create_laplace_matrix -// functions. They are declared here: +// In a very similar vein, we are also too lazy to write the code to assemble +// mass and Laplace matrices, although it would have only taken copying the +// relevant code from any number of previous tutorial programs. Rather, we +// want to focus on the things that are truly new to this program and +// therefore use the MatrixTools::create_mass_matrix and +// MatrixTools::create_laplace_matrix functions. They are declared here: #include -// Finally, here is an include file -// that contains all sorts of tool -// functions that one sometimes -// needs. In particular, we need the -// Utilities::int_to_string class -// that, given an integer argument, -// returns a string representation of -// it. It is particularly useful -// since it allows for a second -// parameter indicating the number of -// digits to which we want the result -// padded with leading zeros. 
We will -// use this to write output files -// that have the form -// solution-XXX.gnuplot -// where XXX denotes the -// number of the time step and always -// consists of three digits even if -// we are still in the single or -// double digit time steps. +// Finally, here is an include file that contains all sorts of tool functions +// that one sometimes needs. In particular, we need the +// Utilities::int_to_string class that, given an integer argument, returns a +// string representation of it. It is particularly useful since it allows for +// a second parameter indicating the number of digits to which we want the +// result padded with leading zeros. We will use this to write output files +// that have the form solution-XXX.gnuplot where XXX +// denotes the number of the time step and always consists of three digits +// even if we are still in the single or double digit time steps. #include -// The last step is as in all -// previous programs: +// The last step is as in all previous programs: namespace Step23 { using namespace dealii; @@ -117,40 +83,26 @@ namespace Step23 // @sect3{The WaveEquation class} - // Next comes the declaration of the main - // class. It's public interface of functions - // is like in most of the other tutorial - // programs. Worth mentioning is that we now - // have to store four matrices instead of - // one: the mass matrix $M$, the Laplace - // matrix $A$, the matrix $M+k^2\theta^2A$ - // used for solving for $U^n$, and a copy of - // the mass matrix with boundary conditions - // applied used for solving for $V^n$. Note - // that it is a bit wasteful to have an - // additional copy of the mass matrix - // around. We will discuss strategies for how - // to avoid this in the section on possible + // Next comes the declaration of the main class. It's public interface of + // functions is like in most of the other tutorial programs. Worth + // mentioning is that we now have to store four matrices instead of one: the + // mass matrix $M$, the Laplace matrix $A$, the matrix $M+k^2\theta^2A$ used + // for solving for $U^n$, and a copy of the mass matrix with boundary + // conditions applied used for solving for $V^n$. Note that it is a bit + // wasteful to have an additional copy of the mass matrix around. We will + // discuss strategies for how to avoid this in the section on possible // improvements. // - // Likewise, we need solution vectors for - // $U^n,V^n$ as well as for the corresponding - // vectors at the previous time step, - // $U^{n-1},V^{n-1}$. The - // system_rhs will be used for - // whatever right hand side vector we have - // when solving one of the two linear systems - // in each time step. These will be solved in - // the two functions solve_u and + // Likewise, we need solution vectors for $U^n,V^n$ as well as for the + // corresponding vectors at the previous time step, $U^{n-1},V^{n-1}$. The + // system_rhs will be used for whatever right hand side vector + // we have when solving one of the two linear systems in each time + // step. These will be solved in the two functions solve_u and // solve_v. // - // Finally, the variable - // theta is used to - // indicate the parameter $\theta$ - // that is used to define which time - // stepping scheme to use, as - // explained in the introduction. The - // rest is self-explanatory. + // Finally, the variable theta is used to indicate the + // parameter $\theta$ that is used to define which time stepping scheme to + // use, as explained in the introduction. The rest is self-explanatory. 
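The Utilities::int_to_string based file naming mentioned above boils down to very little code. A sketch, assuming the time step counter is called `timestep_number' and a DataOut object `data_out' has already been filled (plus #include <fstream> for the stream):

const std::string filename = "solution-" +
                             Utilities::int_to_string (timestep_number, 3) +
                             ".gnuplot";
std::ofstream output (filename.c_str());
data_out.write_gnuplot (output);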
template class WaveEquation { @@ -189,24 +141,15 @@ namespace Step23 // @sect3{Equation data} - // Before we go on filling in the - // details of the main class, let us - // define the equation data - // corresponding to the problem, - // i.e. initial and boundary values - // for both the solution $u$ and its - // time derivative $v$, as well as a - // right hand side class. We do so - // using classes derived from the - // Function class template that has - // been used many times before, so - // the following should not be a - // surprise. + // Before we go on filling in the details of the main class, let us define + // the equation data corresponding to the problem, i.e. initial and boundary + // values for both the solution $u$ and its time derivative $v$, as well as + // a right hand side class. We do so using classes derived from the Function + // class template that has been used many times before, so the following + // should not be a surprise. // - // Let's start with initial values - // and choose zero for both the value - // $u$ as well as its time - // derivative, the velocity $v$: + // Let's start with initial values and choose zero for both the value $u$ as + // well as its time derivative, the velocity $v$: template class InitialValuesU : public Function { @@ -250,9 +193,8 @@ namespace Step23 - // Secondly, we have the right hand - // side forcing term. Boring as we - // are, we choose zero here as well: + // Secondly, we have the right hand side forcing term. Boring as we are, we + // choose zero here as well: template class RightHandSide : public Function { @@ -275,10 +217,8 @@ namespace Step23 - // Finally, we have boundary values for $u$ - // and $v$. They are as described in the - // introduction, one being the time - // derivative of the other: + // Finally, we have boundary values for $u$ and $v$. They are as described + // in the introduction, one being the time derivative of the other: template class BoundaryValuesU : public Function { @@ -343,22 +283,16 @@ namespace Step23 // @sect3{Implementation of the WaveEquation class} - // The implementation of the actual logic is - // actually fairly short, since we relegate - // things like assembling the matrices and - // right hand side vectors to the - // library. The rest boils down to not much - // more than 130 lines of actual code, a - // significant fraction of which is - // boilerplate code that can be taken from - // previous example programs (e.g. the - // functions that solve linear systems, or - // that generate output). + // The implementation of the actual logic is actually fairly short, since we + // relegate things like assembling the matrices and right hand side vectors + // to the library. The rest boils down to not much more than 130 lines of + // actual code, a significant fraction of which is boilerplate code that can + // be taken from previous example programs (e.g. the functions that solve + // linear systems, or that generate output). 
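The zero initial values mentioned above need nothing more than a Function subclass whose value() returns zero; the following sketch shows the pattern (the program's own classes look essentially like this):

template <int dim>
class InitialValuesU : public Function<dim>
{
public:
  InitialValuesU () : Function<dim>() {}

  // The solution is scalar, so only component 0 may ever be requested:
  virtual double value (const Point<dim> & /*p*/,
                        const unsigned int component = 0) const
  {
    Assert (component == 0, ExcInternalError());
    return 0;
  }
};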
// - // Let's start with the constructor (for an - // explanation of the choice of time step, - // see the section on Courant, Friedrichs, - // and Lewy in the introduction): + // Let's start with the constructor (for an explanation of the choice of + // time step, see the section on Courant, Friedrichs, and Lewy in the + // introduction): template WaveEquation::WaveEquation () : fe (1), @@ -370,15 +304,10 @@ namespace Step23 // @sect4{WaveEquation::setup_system} - // The next function is the one that - // sets up the mesh, DoFHandler, and - // matrices and vectors at the - // beginning of the program, - // i.e. before the first time - // step. The first few lines are - // pretty much standard if you've - // read through the tutorial programs - // at least up to step-6: + // The next function is the one that sets up the mesh, DoFHandler, and + // matrices and vectors at the beginning of the program, i.e. before the + // first time step. The first few lines are pretty much standard if you've + // read through the tutorial programs at least up to step-6: template void WaveEquation::setup_system () { @@ -402,43 +331,28 @@ namespace Step23 DoFTools::make_sparsity_pattern (dof_handler, sparsity_pattern); sparsity_pattern.compress(); - // Then comes a block where we have to - // initialize the 3 matrices we need in the - // course of the program: the mass matrix, - // the laplace matrix, and the matrix - // $M+k^2\theta^2A$ used when solving for - // $U^n$ in each time step. + // Then comes a block where we have to initialize the 3 matrices we need + // in the course of the program: the mass matrix, the laplace matrix, and + // the matrix $M+k^2\theta^2A$ used when solving for $U^n$ in each time + // step. // - // When setting up these matrices, note - // that they all make use of the same - // sparsity pattern object. Finally, the - // reason why matrices and sparsity - // patterns are separate objects in deal.II - // (unlike in many other finite element or - // linear algebra classes) becomes clear: - // in a significant fraction of - // applications, one has to hold several - // matrices that happen to have the same - // sparsity pattern, and there is no reason - // for them not to share this information, - // rather than re-building and wasting - // memory on it several times. + // When setting up these matrices, note that they all make use of the same + // sparsity pattern object. Finally, the reason why matrices and sparsity + // patterns are separate objects in deal.II (unlike in many other finite + // element or linear algebra classes) becomes clear: in a significant + // fraction of applications, one has to hold several matrices that happen + // to have the same sparsity pattern, and there is no reason for them not + // to share this information, rather than re-building and wasting memory + // on it several times. // - // After initializing all of these - // matrices, we call library functions that - // build the Laplace and mass matrices. All - // they need is a DoFHandler object and a - // quadrature formula object that is to be - // used for numerical integration. Note - // that in many respects these functions - // are better than what we would usually do - // in application programs, for example - // because they automatically parallelize - // building the matrices if multiple - // processors are available in a - // machine. 
The matrices for solving linear - // systems will be filled in the run() - // method because we need to re-apply + // After initializing all of these matrices, we call library functions + // that build the Laplace and mass matrices. All they need is a DoFHandler + // object and a quadrature formula object that is to be used for numerical + // integration. Note that in many respects these functions are better than + // what we would usually do in application programs, for example because + // they automatically parallelize building the matrices if multiple + // processors are available in a machine. The matrices for solving linear + // systems will be filled in the run() method because we need to re-apply // boundary conditions every time step. mass_matrix.reinit (sparsity_pattern); laplace_matrix.reinit (sparsity_pattern); @@ -450,17 +364,12 @@ namespace Step23 MatrixCreator::create_laplace_matrix (dof_handler, QGauss(3), laplace_matrix); - // The rest of the function is spent on - // setting vector sizes to the correct - // value. The final line closes the hanging - // node constraints object. Since we work - // on a uniformly refined mesh, no - // constraints exist or have been computed - // (i.e. there was no need to call - // DoFTools::make_hanging_node_constraints - // as in other programs), but we need a - // constraints object in one place further - // down below anyway. + // The rest of the function is spent on setting vector sizes to the + // correct value. The final line closes the hanging node constraints + // object. Since we work on a uniformly refined mesh, no constraints exist + // or have been computed (i.e. there was no need to call + // DoFTools::make_hanging_node_constraints as in other programs), but we + // need a constraints object in one place further down below anyway. solution_u.reinit (dof_handler.n_dofs()); solution_v.reinit (dof_handler.n_dofs()); old_solution_u.reinit (dof_handler.n_dofs()); @@ -473,24 +382,18 @@ namespace Step23 // @sect4{WaveEquation::solve_u and WaveEquation::solve_v} - // The next two functions deal with solving - // the linear systems associated with the - // equations for $U^n$ and $V^n$. Both are - // not particularly interesting as they - // pretty much follow the scheme used in all - // the previous tutorial programs. + // The next two functions deal with solving the linear systems associated + // with the equations for $U^n$ and $V^n$. Both are not particularly + // interesting as they pretty much follow the scheme used in all the + // previous tutorial programs. // - // One can make little experiments with - // preconditioners for the two matrices we - // have to invert. As it turns out, however, - // for the matrices at hand here, using - // Jacobi or SSOR preconditioners reduces the - // number of iterations necessary to solve - // the linear system slightly, but due to the - // cost of applying the preconditioner it is - // no win in terms of run-time. It is not - // much of a loss either, but let's keep it - // simple and just do without: + // One can make little experiments with preconditioners for the two matrices + // we have to invert. As it turns out, however, for the matrices at hand + // here, using Jacobi or SSOR preconditioners reduces the number of + // iterations necessary to solve the linear system slightly, but due to the + // cost of applying the preconditioner it is no win in terms of run-time. 
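The point about a single sparsity pattern serving several matrices, and about letting MatrixCreator do the assembly, can be condensed into a few lines. A sketch, using the member names introduced above and with the quadrature degree as a placeholder:

// One sparsity pattern, several matrices built on it:
mass_matrix.reinit (sparsity_pattern);
laplace_matrix.reinit (sparsity_pattern);
matrix_u.reinit (sparsity_pattern);
matrix_v.reinit (sparsity_pattern);

// Let the library assemble the mass and Laplace matrices:
MatrixCreator::create_mass_matrix (dof_handler, QGauss<dim>(3),
                                   mass_matrix);
MatrixCreator::create_laplace_matrix (dof_handler, QGauss<dim>(3),
                                      laplace_matrix);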
It + // is not much of a loss either, but let's keep it simple and just do + // without: template void WaveEquation::solve_u () { @@ -524,13 +427,10 @@ namespace Step23 // @sect4{WaveEquation::output_results} - // Likewise, the following function is pretty - // much what we've done before. The only - // thing worth mentioning is how here we - // generate a string representation of the - // time step number padded with leading zeros - // to 3 character length using the - // Utilities::int_to_string function's second + // Likewise, the following function is pretty much what we've done + // before. The only thing worth mentioning is how here we generate a string + // representation of the time step number padded with leading zeros to 3 + // character length using the Utilities::int_to_string function's second // argument. template void WaveEquation::output_results () const @@ -555,20 +455,14 @@ namespace Step23 // @sect4{WaveEquation::run} - // The following is really the only - // interesting function of the program. It - // contains the loop over all time steps, but - // before we get to that we have to set up - // the grid, DoFHandler, and matrices. In - // addition, we have to somehow get started - // with initial values. To this end, we use - // the VectorTools::project function that - // takes an object that describes a - // continuous function and computes the $L^2$ - // projection of this function onto the - // finite element space described by the - // DoFHandler object. Can't be any simpler - // than that: + // The following is really the only interesting function of the program. It + // contains the loop over all time steps, but before we get to that we have + // to set up the grid, DoFHandler, and matrices. In addition, we have to + // somehow get started with initial values. To this end, we use the + // VectorTools::project function that takes an object that describes a + // continuous function and computes the $L^2$ projection of this function + // onto the finite element space described by the DoFHandler object. Can't + // be any simpler than that: template void WaveEquation::run () { @@ -581,45 +475,28 @@ namespace Step23 InitialValuesV(), old_solution_v); - // The next thing is to loop over all the - // time steps until we reach the end time - // ($T=5$ in this case). In each time step, - // we first have to solve for $U^n$, using - // the equation $(M^n + k^2\theta^2 A^n)U^n - // =$ $(M^{n,n-1} - k^2\theta(1-\theta) - // A^{n,n-1})U^{n-1} + kM^{n,n-1}V^{n-1} +$ - // $k\theta \left[k \theta F^n + k(1-\theta) - // F^{n-1} \right]$. Note that we use the - // same mesh for all time steps, so that - // $M^n=M^{n,n-1}=M$ and - // $A^n=A^{n,n-1}=A$. What we therefore - // have to do first is to add up $MU^{n-1} - // - k^2\theta(1-\theta) AU^{n-1} + kMV^{n-1}$ and - // the forcing terms, and put the result - // into the system_rhs - // vector. (For these additions, we need a - // temporary vector that we declare before - // the loop to avoid repeated memory - // allocations in each time step.) + // The next thing is to loop over all the time steps until we reach the + // end time ($T=5$ in this case). In each time step, we first have to + // solve for $U^n$, using the equation $(M^n + k^2\theta^2 A^n)U^n =$ + // $(M^{n,n-1} - k^2\theta(1-\theta) A^{n,n-1})U^{n-1} + kM^{n,n-1}V^{n-1} + // +$ $k\theta \left[k \theta F^n + k(1-\theta) F^{n-1} \right]$. Note + // that we use the same mesh for all time steps, so that $M^n=M^{n,n-1}=M$ + // and $A^n=A^{n,n-1}=A$. 
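The preconditioner experiment alluded to above is easy to set up. A sketch of solve_u(): the first variant is the unpreconditioned CG solve the program settles on; the commented-out variant shows what trying SSOR would look like (the relaxation parameter 1.2 is just a typical value, not a tuned one):

SolverControl solver_control (1000, 1e-8 * system_rhs.l2_norm());
SolverCG<>    cg (solver_control);

// No preconditioning at all:
cg.solve (matrix_u, solution_u, system_rhs,
          PreconditionIdentity());

// The experiment would instead read:
//   PreconditionSSOR<> preconditioner;
//   preconditioner.initialize (matrix_u, 1.2);
//   cg.solve (matrix_u, solution_u, system_rhs, preconditioner);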
What we therefore have to do first is to add up + // $MU^{n-1} - k^2\theta(1-\theta) AU^{n-1} + kMV^{n-1}$ and the forcing + // terms, and put the result into the system_rhs vector. (For + // these additions, we need a temporary vector that we declare before the + // loop to avoid repeated memory allocations in each time step.) // - // The one thing to realize here is how we - // communicate the time variable to the - // object describing the right hand side: - // each object derived from the Function - // class has a time field that can be set - // using the Function::set_time and read by - // Function::get_time. In essence, using - // this mechanism, all functions of space - // and time are therefore considered - // functions of space evaluated at a - // particular time. This matches well what - // we typically need in finite element - // programs, where we almost always work on - // a single time step at a time, and where - // it never happens that, for example, one - // would like to evaluate a space-time - // function for all times at any given - // spatial location. + // The one thing to realize here is how we communicate the time variable + // to the object describing the right hand side: each object derived from + // the Function class has a time field that can be set using the + // Function::set_time and read by Function::get_time. In essence, using + // this mechanism, all functions of space and time are therefore + // considered functions of space evaluated at a particular time. This + // matches well what we typically need in finite element programs, where + // we almost always work on a single time step at a time, and where it + // never happens that, for example, one would like to evaluate a + // space-time function for all times at any given spatial location. Vector tmp (solution_u.size()); Vector forcing_terms (solution_u.size()); @@ -654,18 +531,13 @@ namespace Step23 system_rhs.add (theta * time_step, forcing_terms); - // After so constructing the right hand - // side vector of the first equation, - // all we have to do is apply the - // correct boundary values. As for the - // right hand side, this is a - // space-time function evaluated at a - // particular time, which we - // interpolate at boundary nodes and - // then use the result to apply - // boundary values as we usually - // do. The result is then handed off to - // the solve_u() function: + // After so constructing the right hand side vector of the first + // equation, all we have to do is apply the correct boundary + // values. As for the right hand side, this is a space-time function + // evaluated at a particular time, which we interpolate at boundary + // nodes and then use the result to apply boundary values as we + // usually do. The result is then handed off to the solve_u() + // function: { BoundaryValuesU boundary_values_u_function; boundary_values_u_function.set_time (time); @@ -676,20 +548,14 @@ namespace Step23 boundary_values_u_function, boundary_values); - // The matrix for solve_u() is the same in - // every time steps, so one could think - // that it is enough to do this only once - // at the beginning of the - // simulation. However, since we need to - // apply boundary values to the linear - // system (which eliminate some matrix rows - // and columns and give contributions to - // the right hand side), we have to refill - // the matrix in every time steps before we - // actually apply boundary data. 
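To make the set_time mechanism just described concrete, here is a sketch of how the forcing terms are typically accumulated with it; the member names follow the ones used above, and the quadrature degree is a placeholder:

RightHandSide<dim> rhs_function;

// Evaluate f at the current time and weight it with k*theta...
rhs_function.set_time (time);
VectorTools::create_right_hand_side (dof_handler, QGauss<dim>(2),
                                     rhs_function, tmp);
forcing_terms = tmp;
forcing_terms *= theta * time_step;

// ...then at the previous time, weighted with k*(1-theta):
rhs_function.set_time (time - time_step);
VectorTools::create_right_hand_side (dof_handler, QGauss<dim>(2),
                                     rhs_function, tmp);
forcing_terms.add ((1 - theta) * time_step, tmp);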
The actual - // content is very simple: it is the sum of - // the mass matrix and a weighted Laplace - // matrix: + // The matrix for solve_u() is the same in every time steps, so one + // could think that it is enough to do this only once at the + // beginning of the simulation. However, since we need to apply + // boundary values to the linear system (which eliminate some matrix + // rows and columns and give contributions to the right hand side), + // we have to refill the matrix in every time steps before we + // actually apply boundary data. The actual content is very simple: + // it is the sum of the mass matrix and a weighted Laplace matrix: matrix_u.copy_from (mass_matrix); matrix_u.add (theta * theta * time_step * time_step, laplace_matrix); MatrixTools::apply_boundary_values (boundary_values, @@ -700,19 +566,13 @@ namespace Step23 solve_u (); - // The second step, i.e. solving for - // $V^n$, works similarly, except that - // this time the matrix on the left is - // the mass matrix (which we copy again - // in order to be able to apply - // boundary conditions, and the right - // hand side is $MV^{n-1} - k\left[ - // \theta A U^n + (1-\theta) - // AU^{n-1}\right]$ plus forcing - // terms. %Boundary values are applied - // in the same way as before, except - // that now we have to use the - // BoundaryValuesV class: + // The second step, i.e. solving for $V^n$, works similarly, except + // that this time the matrix on the left is the mass matrix (which we + // copy again in order to be able to apply boundary conditions, and + // the right hand side is $MV^{n-1} - k\left[ \theta A U^n + + // (1-\theta) AU^{n-1}\right]$ plus forcing terms. %Boundary values + // are applied in the same way as before, except that now we have to + // use the BoundaryValuesV class: laplace_matrix.vmult (system_rhs, solution_u); system_rhs *= -theta * time_step; @@ -741,21 +601,14 @@ namespace Step23 } solve_v (); - // Finally, after both solution - // components have been computed, we - // output the result, compute the - // energy in the solution, and go on to - // the next time step after shifting - // the present solution into the - // vectors that hold the solution at - // the previous time step. Note the - // function - // SparseMatrix::matrix_norm_square - // that can compute - // $\left$ and - // $\left$ in one step, - // saving us the expense of a temporary - // vector and several lines of code: + // Finally, after both solution components have been computed, we + // output the result, compute the energy in the solution, and go on to + // the next time step after shifting the present solution into the + // vectors that hold the solution at the previous time step. Note the + // function SparseMatrix::matrix_norm_square that can compute + // $\left$ and $\left$ in one step, + // saving us the expense of a temporary vector and several lines of + // code: output_results (); std::cout << " Total energy: " @@ -772,10 +625,8 @@ namespace Step23 // @sect3{The main function} -// What remains is the main function of the -// program. There is nothing here that hasn't -// been shown in several of the previous -// programs: +// What remains is the main function of the program. 
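The energy computation mentioned above is a one-liner once SparseMatrix::matrix_norm_square is available; a sketch, assuming the energy is the usual combination of the kinetic term $V^TMV$ and the potential term $U^TAU$:

const double energy = (mass_matrix.matrix_norm_square (solution_v) +
                       laplace_matrix.matrix_norm_square (solution_u)) / 2;
std::cout << "   Total energy: " << energy << std::endl;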
There is nothing here +// that hasn't been shown in several of the previous programs: int main () { try diff --git a/deal.II/examples/step-24/step-24.cc b/deal.II/examples/step-24/step-24.cc index dc525e955c..b4d2e3833d 100644 --- a/deal.II/examples/step-24/step-24.cc +++ b/deal.II/examples/step-24/step-24.cc @@ -11,8 +11,7 @@ // @sect3{Include files} -// The following have all been covered -// previously: +// The following have all been covered previously: #include #include #include @@ -45,23 +44,19 @@ #include #include -// This is the only new one: We will need a -// library function defined in a class -// GridTools that computes the minimal cell -// diameter. +// This is the only new one: We will need a library function defined in a +// class GridTools that computes the minimal cell diameter. #include -// The last step is as in all -// previous programs: +// The last step is as in all previous programs: namespace Step24 { using namespace dealii; // @sect3{The "forward problem" class template} - // The first part of the main class is - // exactly as in step-23 - // (except for the name): + // The first part of the main class is exactly as in step-23 (except for the + // name): template class TATForwardProblem { @@ -94,67 +89,42 @@ namespace Step24 unsigned int timestep_number; const double theta; - // Here's what's new: first, we need - // that boundary mass matrix $B$ that - // came out of the absorbing boundary - // condition. Likewise, since this time - // we consider a realistic medium, we - // must have a measure of the wave speed - // $c_0$ that will enter all the - // formulas with the Laplace matrix - // (which we still define as $(\nabla - // \phi_i,\nabla \phi_j)$): + // Here's what's new: first, we need that boundary mass matrix $B$ that + // came out of the absorbing boundary condition. Likewise, since this + // time we consider a realistic medium, we must have a measure of the + // wave speed $c_0$ that will enter all the formulas with the Laplace + // matrix (which we still define as $(\nabla \phi_i,\nabla \phi_j)$): SparseMatrix boundary_matrix; const double wave_speed; - // The last thing we have to take care of - // is that we wanted to evaluate the - // solution at a certain number of - // detector locations. We need an array - // to hold these locations, declared here - // and filled in the constructor: + // The last thing we have to take care of is that we wanted to evaluate + // the solution at a certain number of detector locations. We need an + // array to hold these locations, declared here and filled in the + // constructor: std::vector > detector_locations; }; // @sect3{Equation data} - // As usual, we have to define our - // initial values, boundary - // conditions, and right hand side - // functions. Except things are a bit - // simpler this time: we are to - // consider a problem that is driven - // by initial conditions, so there is - // no right hand side function - // (though you could look up in - // step-23 to see how this can be - // done. Secondly, there are no - // boundary conditions: the entire - // boundary of the domain consists of - // absorbing boundary - // conditions. That only leaves - // initial conditions, and there - // things are simple too since for - // this particular application only - // nonzero initial conditions for the - // pressure are prescribed, not for - // the velocity (which is zero at the - // initial time). + // As usual, we have to define our initial values, boundary conditions, and + // right hand side functions. 
Except things are a bit simpler this time: we + // are to consider a problem that is driven by initial conditions, so there + // is no right hand side function (though you could look up in step-23 to + // see how this can be done. Secondly, there are no boundary conditions: the + // entire boundary of the domain consists of absorbing boundary + // conditions. That only leaves initial conditions, and there things are + // simple too since for this particular application only nonzero initial + // conditions for the pressure are prescribed, not for the velocity (which + // is zero at the initial time). // - // So this is all we need: a class that - // specifies initial conditions for the - // pressure. In the physical setting - // considered in this program, these are - // small absorbers, which we model as a - // series of little circles where we assume - // that the pressure surplus is one, whereas - // no absorption and therefore no pressure - // surplus is anywhere else. This is how we - // do things (note that if we wanted to - // expand this program to not only compile - // but also to run, we would have to - // initialize the sources with + // So this is all we need: a class that specifies initial conditions for the + // pressure. In the physical setting considered in this program, these are + // small absorbers, which we model as a series of little circles where we + // assume that the pressure surplus is one, whereas no absorption and + // therefore no pressure surplus is anywhere else. This is how we do things + // (note that if we wanted to expand this program to not only compile but + // also to run, we would have to initialize the sources with // three-dimensional source locations): template class InitialValuesP : public Function @@ -206,17 +176,12 @@ namespace Step24 // @sect3{Implementation of the TATForwardProblem class} - // Let's start again with the - // constructor. Setting the member variables - // is straightforward. We use the acoustic - // wave speed of mineral oil (in millimeters - // per microsecond, a common unit in - // experimental biomedical imaging) since - // this is where many of the experiments we - // want to compare the output with are made - // in. The Crank-Nicolson scheme is used - // again, i.e. theta is set to 0.5. The time - // step is later selected to satisfy $k = + // Let's start again with the constructor. Setting the member variables is + // straightforward. We use the acoustic wave speed of mineral oil (in + // millimeters per microsecond, a common unit in experimental biomedical + // imaging) since this is where many of the experiments we want to compare + // the output with are made in. The Crank-Nicolson scheme is used again, + // i.e. theta is set to 0.5. The time step is later selected to satisfy $k = // \frac hc$ template TATForwardProblem::TATForwardProblem () @@ -226,29 +191,20 @@ namespace Step24 theta (0.5), wave_speed (1.437) { - // The second task in the constructor is to - // initialize the array that holds the - // detector locations. The results of this - // program were compared with experiments - // in which the step size of the detector - // spacing is 2.25 degree, corresponding to - // 160 detector locations. The radius of - // the scanning circle is selected to be - // half way between the center and the - // boundary to avoid that the remaining - // reflections from the imperfect boundary - // condition spoils our numerical results. 
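A dimension independent sketch of the kind of initial pressure just described, with a single absorber of radius 0.1 centered at the origin standing in for the program's actual list of source locations:

template <int dim>
class InitialValuesP : public Function<dim>
{
public:
  virtual double value (const Point<dim> &p,
                        const unsigned int /*component*/ = 0) const
  {
    // Pressure surplus of one inside the little circle, zero elsewhere:
    return (p.norm() < 0.1 ? 1. : 0.);
  }
};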
+ // The second task in the constructor is to initialize the array that + // holds the detector locations. The results of this program were compared + // with experiments in which the step size of the detector spacing is 2.25 + // degree, corresponding to 160 detector locations. The radius of the + // scanning circle is selected to be half way between the center and the + // boundary to avoid that the remaining reflections from the imperfect + // boundary condition spoils our numerical results. // - // The locations of the detectors are then - // calculated in clockwise order. Note that - // the following of course only works if we - // are computing in 2d, a condition that we - // guard with an assertion. If we later - // wanted to run the same program in 3d, we - // would have to add code here for the - // initialization of detector locations in - // 3d. Due to the assertion, there is no - // way we can forget to do this. + // The locations of the detectors are then calculated in clockwise + // order. Note that the following of course only works if we are computing + // in 2d, a condition that we guard with an assertion. If we later wanted + // to run the same program in 3d, we would have to add code here for the + // initialization of detector locations in 3d. Due to the assertion, there + // is no way we can forget to do this. Assert (dim == 2, ExcNotImplemented()); const double detector_step_angle = 2.25; @@ -266,67 +222,40 @@ namespace Step24 // @sect4{TATForwardProblem::setup_system} - // The following system is pretty much what - // we've already done in - // step-23, but with two important - // differences. First, we have to create a - // circular (or spherical) mesh around the - // origin, with a radius of 1. This nothing - // new: we've done so before in - // step-6, step-10, and - // step-11, where we also explain - // how to attach a boundary object to a - // triangulation to be used whenever the - // triangulation needs to know where new - // boundary points lie when a cell is - // refined. Following this, the mesh is - // refined a number of times. + // The following system is pretty much what we've already done in step-23, + // but with two important differences. First, we have to create a circular + // (or spherical) mesh around the origin, with a radius of 1. This nothing + // new: we've done so before in step-6, step-10, and step-11, where we also + // explain how to attach a boundary object to a triangulation to be used + // whenever the triangulation needs to know where new boundary points lie + // when a cell is refined. Following this, the mesh is refined a number of + // times. // - // One thing we had to make sure is that the - // time step satisfies the CFL condition - // discussed in the introduction of - // step-23. Back in that program, - // we ensured this by hand by setting a - // timestep that matches the mesh width, but - // that was error prone because if we refined - // the mesh once more we would also have to - // make sure the time step is changed. Here, - // we do that automatically: we ask a library - // function for the minimal diameter of any - // cell. Then we set $k=\frac h{c_0}$. The - // only problem is: what exactly is $h$? The - // point is that there is really no good - // theory on this question for the wave - // equation. It is known that for uniformly - // refined meshes consisting of rectangles, - // $h$ is the minimal edge length. But for - // meshes on general quadrilaterals, the - // exact relationship appears to be unknown, - // i.e. 
it is unknown what properties of - // cells are relevant for the CFL - // condition. The problem is that the CFL - // condition follows from knowledge of the - // smallest eigenvalue of the Laplace matrix, - // and that can only be computed analytically - // for simply structured meshes. + // One thing we had to make sure is that the time step satisfies the CFL + // condition discussed in the introduction of step-23. Back in that program, + // we ensured this by hand by setting a timestep that matches the mesh + // width, but that was error prone because if we refined the mesh once more + // we would also have to make sure the time step is changed. Here, we do + // that automatically: we ask a library function for the minimal diameter of + // any cell. Then we set $k=\frac h{c_0}$. The only problem is: what exactly + // is $h$? The point is that there is really no good theory on this question + // for the wave equation. It is known that for uniformly refined meshes + // consisting of rectangles, $h$ is the minimal edge length. But for meshes + // on general quadrilaterals, the exact relationship appears to be unknown, + // i.e. it is unknown what properties of cells are relevant for the CFL + // condition. The problem is that the CFL condition follows from knowledge + // of the smallest eigenvalue of the Laplace matrix, and that can only be + // computed analytically for simply structured meshes. // - // The upshot of all this is that we're not - // quite sure what exactly we should take for - // $h$. The function - // GridTools::minimal_cell_diameter computes - // the minimal diameter of all cells. If the - // cells were all squares or cubes, then the - // minimal edge length would be the minimal - // diameter divided by - // std::sqrt(dim). We simply - // generalize this, without theoretical - // justification, to the case of non-uniform - // meshes. + // The upshot of all this is that we're not quite sure what exactly we + // should take for $h$. The function GridTools::minimal_cell_diameter + // computes the minimal diameter of all cells. If the cells were all squares + // or cubes, then the minimal edge length would be the minimal diameter + // divided by std::sqrt(dim). We simply generalize this, + // without theoretical justification, to the case of non-uniform meshes. // - // The only other significant change is that - // we need to build the boundary mass - // matrix. We will comment on this further - // down below. + // The only other significant change is that we need to build the boundary + // mass matrix. We will comment on this further down below. template void TATForwardProblem::setup_system () { @@ -366,74 +295,47 @@ namespace Step24 MatrixCreator::create_laplace_matrix (dof_handler, QGauss(3), laplace_matrix); - // The second difference, as mentioned, to - // step-23 is that we need - // to build the boundary mass matrix that - // grew out of the absorbing boundary + // The second difference, as mentioned, to step-23 is that we need to + // build the boundary mass matrix that grew out of the absorbing boundary // conditions. // - // A first observation would be that this - // matrix is much sparser than the regular - // mass matrix, since none of the shape - // functions with purely interior support - // contributes to this matrix. We could - // therefore optimize the storage pattern - // to this situation and build up a second - // sparsity pattern that only contains the - // nonzero entries that we need. 
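The resulting time step choice fits in one statement. A sketch, using the `triangulation' and `wave_speed' members introduced above:

time_step = GridTools::minimal_cell_diameter (triangulation) /
            wave_speed /
            std::sqrt (1.*dim);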
There is a - // trade-off to make here: first, we would - // have to have a second sparsity pattern - // object, so that costs memory. Secondly, - // the matrix attached to this sparsity - // pattern is going to be smaller and - // therefore requires less memory; it would - // also be faster to perform matrix-vector - // multiplications with it. The final - // argument, however, is the one that tips - // the scale: we are not primarily - // interested in performing matrix-vector - // with the boundary matrix alone (though - // we need to do that for the right hand - // side vector once per time step), but - // mostly wish to add it up to the other - // matrices used in the first of the two - // equations since this is the one that is - // going to be multiplied with once per - // iteration of the CG method, - // i.e. significantly more often. It is now - // the case that the SparseMatrix::add - // class allows to add one matrix to - // another, but only if they use the same - // sparsity pattern (the reason being that - // we can't add nonzero entries to a matrix - // after the sparsity pattern has been - // created, so we simply require that the - // two matrices have the same sparsity - // pattern). + // A first observation would be that this matrix is much sparser than the + // regular mass matrix, since none of the shape functions with purely + // interior support contributes to this matrix. We could therefore + // optimize the storage pattern to this situation and build up a second + // sparsity pattern that only contains the nonzero entries that we + // need. There is a trade-off to make here: first, we would have to have a + // second sparsity pattern object, so that costs memory. Secondly, the + // matrix attached to this sparsity pattern is going to be smaller and + // therefore requires less memory; it would also be faster to perform + // matrix-vector multiplications with it. The final argument, however, is + // the one that tips the scale: we are not primarily interested in + // performing matrix-vector with the boundary matrix alone (though we need + // to do that for the right hand side vector once per time step), but + // mostly wish to add it up to the other matrices used in the first of the + // two equations since this is the one that is going to be multiplied with + // once per iteration of the CG method, i.e. significantly more often. It + // is now the case that the SparseMatrix::add class allows to add one + // matrix to another, but only if they use the same sparsity pattern (the + // reason being that we can't add nonzero entries to a matrix after the + // sparsity pattern has been created, so we simply require that the two + // matrices have the same sparsity pattern). // // So let's go with that: boundary_matrix.reinit (sparsity_pattern); - // The second thing to do is to actually - // build the matrix. Here, we need to - // integrate over faces of cells, so first - // we need a quadrature object that works - // on dim-1 dimensional - // objects. Secondly, the FEFaceValues - // variant of FEValues that works on faces, - // as its name suggest. And finally, the - // other variables that are part of the - // assembly machinery. All of this we put - // between curly braces to limit the scope - // of these variables to where we actually - // need them. + // The second thing to do is to actually build the matrix. Here, we need + // to integrate over faces of cells, so first we need a quadrature object + // that works on dim-1 dimensional objects. 
Secondly, the + // FEFaceValues variant of FEValues that works on faces, as its name + // suggest. And finally, the other variables that are part of the assembly + // machinery. All of this we put between curly braces to limit the scope + // of these variables to where we actually need them. // - // The actual act of assembling the matrix - // is then fairly straightforward: we loop - // over all cells, over all faces of each - // of these cells, and then do something - // only if that particular face is at the - // boundary of the domain. Like this: + // The actual act of assembling the matrix is then fairly straightforward: + // we loop over all cells, over all faces of each of these cells, and then + // do something only if that particular face is at the boundary of the + // domain. Like this: { const QGauss quadrature_formula(3); FEFaceValues fe_values (fe, quadrature_formula, @@ -497,12 +399,10 @@ namespace Step24 // @sect4{TATForwardProblem::solve_p and TATForwardProblem::solve_v} - // The following two functions, solving the - // linear systems for the pressure and the - // velocity variable, are taken pretty much - // verbatim (with the exception of the change - // of name from $u$ to $p$ of the primary - // variable) from step-23: + // The following two functions, solving the linear systems for the pressure + // and the velocity variable, are taken pretty much verbatim (with the + // exception of the change of name from $u$ to $p$ of the primary variable) + // from step-23: template void TATForwardProblem::solve_p () { @@ -536,8 +436,7 @@ namespace Step24 // @sect4{TATForwardProblem::output_results} - // The same holds here: the function is from - // step-23. + // The same holds here: the function is from step-23. template void TATForwardProblem::output_results () const { @@ -560,25 +459,18 @@ namespace Step24 // @sect4{TATForwardProblem::run} - // This function that does most of the work - // is pretty much again like in step-23, - // though we make things a bit clearer by - // using the vectors G1 and G2 mentioned in - // the introduction. Compared to the overall - // memory consumption of the program, the - // introduction of a few temporary vectors + // This function that does most of the work is pretty much again like in + // step-23, though we make things a bit clearer by using the vectors G1 and + // G2 mentioned in the introduction. Compared to the overall memory + // consumption of the program, the introduction of a few temporary vectors // isn't doing much harm. // - // The only changes to this function are: - // First, that we do not have to project - // initial values for the velocity $v$, since - // we know that it is zero. And second that - // we evaluate the solution at the detector - // locations computed in the - // constructor. This is done using the - // VectorTools::point_value function. These - // values are then written to a file that we - // open at the beginning of the function. + // The only changes to this function are: First, that we do not have to + // project initial values for the velocity $v$, since we know that it is + // zero. And second that we evaluate the solution at the detector locations + // computed in the constructor. This is done using the + // VectorTools::point_value function. These values are then written to a + // file that we open at the beginning of the function. template void TATForwardProblem::run () { @@ -653,10 +545,8 @@ namespace Step24 // @sect3{The main function} -// What remains is the main function of the -// program. 
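The face-based assembly just outlined follows the same pattern as every other assembly loop in these programs. A sketch with the usual member names; the quadrature degree and update flags are placeholders rather than the program's literal choices:

const QGauss<dim-1> quadrature_formula (3);
FEFaceValues<dim>   fe_values (fe, quadrature_formula,
                               update_values | update_JxW_values);

const unsigned int dofs_per_cell = fe.dofs_per_cell;
const unsigned int n_q_points    = quadrature_formula.size();

FullMatrix<double>        cell_matrix (dofs_per_cell, dofs_per_cell);
std::vector<unsigned int> local_dof_indices (dofs_per_cell);

typename DoFHandler<dim>::active_cell_iterator
  cell = dof_handler.begin_active(),
  endc = dof_handler.end();
for (; cell!=endc; ++cell)
  for (unsigned int f=0; f<GeometryInfo<dim>::faces_per_cell; ++f)
    if (cell->at_boundary(f))
      {
        cell_matrix = 0;
        fe_values.reinit (cell, f);

        // (phi_i, phi_j) integrated over this boundary face only:
        for (unsigned int q=0; q<n_q_points; ++q)
          for (unsigned int i=0; i<dofs_per_cell; ++i)
            for (unsigned int j=0; j<dofs_per_cell; ++j)
              cell_matrix(i,j) += (fe_values.shape_value (i, q) *
                                   fe_values.shape_value (j, q) *
                                   fe_values.JxW (q));

        cell->get_dof_indices (local_dof_indices);
        for (unsigned int i=0; i<dofs_per_cell; ++i)
          for (unsigned int j=0; j<dofs_per_cell; ++j)
            boundary_matrix.add (local_dof_indices[i],
                                 local_dof_indices[j],
                                 cell_matrix(i,j));
      }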
There is nothing here that hasn't -// been shown in several of the previous -// programs: +// What remains is the main function of the program. There is nothing here +// that hasn't been shown in several of the previous programs: int main () { try diff --git a/deal.II/examples/step-25/step-25.cc b/deal.II/examples/step-25/step-25.cc index 17aa23a7d4..a051f72452 100644 --- a/deal.II/examples/step-25/step-25.cc +++ b/deal.II/examples/step-25/step-25.cc @@ -11,21 +11,12 @@ // @sect3{Include files and global variables} -// For an explanation of the include -// files, the reader should refer to -// the example programs step-1 -// through step-4. They are in the -// standard order, which is -// base -- -// lac -- -// grid -- -// dofs -- -// fe -- -// numerics (since each -// of these categories roughly builds -// upon previous ones), then a few -// C++ headers for file input/output -// and string streams. +// For an explanation of the include files, the reader should refer to the +// example programs step-1 through step-4. They are in the standard order, +// which is base -- lac -- grid -- +// dofs -- fe -- numerics (since each +// of these categories roughly builds upon previous ones), then a few C++ +// headers for file input/output and string streams. #include #include #include @@ -53,8 +44,7 @@ #include -// The last step is as in all -// previous programs: +// The last step is as in all previous programs: namespace Step25 { using namespace dealii; @@ -62,61 +52,37 @@ namespace Step25 // @sect3{The SineGordonProblem class template} - // The entire algorithm for solving the - // problem is encapsulated in this class. As - // in previous example programs, the class is - // declared with a template parameter, which - // is the spatial dimension, so that we can - // solve the sine-Gordon equation in one, two - // or three spatial dimensions. For more on - // the dimension-independent - // class-encapsulation of the problem, the + // The entire algorithm for solving the problem is encapsulated in this + // class. As in previous example programs, the class is declared with a + // template parameter, which is the spatial dimension, so that we can solve + // the sine-Gordon equation in one, two or three spatial dimensions. For + // more on the dimension-independent class-encapsulation of the problem, the // reader should consult step-3 and step-4. // - // Compared to step-23 and step-24, there - // isn't anything newsworthy in the general - // structure of the program (though there is - // of course in the inner workings of the - // various functions!). The most notable - // difference is the presence of the two new - // functions compute_nl_term and - // compute_nl_matrix that - // compute the nonlinear contributions to the - // system matrix and right-hand side of the first - // equation, as discussed in the - // Introduction. In addition, we have to have - // a vector solution_update that - // contains the nonlinear update to the + // Compared to step-23 and step-24, there isn't anything newsworthy in the + // general structure of the program (though there is of course in the inner + // workings of the various functions!). The most notable difference is the + // presence of the two new functions compute_nl_term and + // compute_nl_matrix that compute the nonlinear contributions + // to the system matrix and right-hand side of the first equation, as + // discussed in the Introduction. 
In addition, we have to have a vector + // solution_update that contains the nonlinear update to the // solution vector in each Newton step. // - // As also mentioned in the introduction, we - // do not store the velocity variable in this - // program, but the mass matrix times the - // velocity. This is done in the - // M_x_velocity variable (the - // "x" is intended to stand for - // "times"). + // As also mentioned in the introduction, we do not store the velocity + // variable in this program, but the mass matrix times the velocity. This is + // done in the M_x_velocity variable (the "x" is intended to + // stand for "times"). // - // Finally, the - // output_timestep_skip - // variable stores the number of time - // steps to be taken each time before - // graphical output is to be - // generated. This is of importance - // when using fine meshes (and - // consequently small time steps) - // where we would run lots of time - // steps and create lots of output - // files of solutions that look - // almost the same in subsequent - // files. This only clogs up our - // visualization procedures and we - // should avoid creating more output - // than we are really interested - // in. Therefore, if this variable is - // set to a value $n$ bigger than one, - // output is generated only every - // $n$th time step. + // Finally, the output_timestep_skip variable stores the number + // of time steps to be taken each time before graphical output is to be + // generated. This is of importance when using fine meshes (and consequently + // small time steps) where we would run lots of time steps and create lots + // of output files of solutions that look almost the same in subsequent + // files. This only clogs up our visualization procedures and we should + // avoid creating more output than we are really interested in. Therefore, + // if this variable is set to a value $n$ bigger than one, output is + // generated only every $n$th time step. template class SineGordonProblem { @@ -161,28 +127,18 @@ namespace Step25 // @sect3{Initial conditions} - // In the following two classes, we first - // implement the exact solution for 1D, 2D, - // and 3D mentioned in the introduction to - // this program. This space-time solution may - // be of independent interest if one wanted - // to test the accuracy of the program by - // comparing the numerical against the - // analytic solution (note however that the - // program uses a finite domain, whereas - // these are analytic solutions for an - // unbounded domain). This may, for example, - // be done using the - // VectorTools::integrate_difference - // function. Note, again (as was already - // discussed in step-23), how we describe - // space-time functions as spatial functions - // that depend on a time variable that can be - // set and queried using the - // FunctionTime::set_time() and - // FunctionTime::get_time() member functions - // of the FunctionTime base class of the - // Function class. + // In the following two classes, we first implement the exact solution for + // 1D, 2D, and 3D mentioned in the introduction to this program. This + // space-time solution may be of independent interest if one wanted to test + // the accuracy of the program by comparing the numerical against the + // analytic solution (note however that the program uses a finite domain, + // whereas these are analytic solutions for an unbounded domain). This may, + // for example, be done using the VectorTools::integrate_difference + // function. 
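The effect of output_timestep_skip amounts to a single check inside the time stepping loop. A sketch, assuming the loop counter is called `timestep_number' and an output routine that takes the time step number:

if (timestep_number % output_timestep_skip == 0)
  output_results (timestep_number);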
Note, again (as was already discussed in step-23), how we + // describe space-time functions as spatial functions that depend on a time + // variable that can be set and queried using the FunctionTime::set_time() + // and FunctionTime::get_time() member functions of the FunctionTime base + // class of the Function class. template class ExactSolution : public Function { @@ -245,16 +201,11 @@ namespace Step25 } } - // In the second part of this section, we - // provide the initial conditions. We are lazy - // (and cautious) and don't want to implement - // the same functions as above a second - // time. Rather, if we are queried for - // initial conditions, we create an object - // ExactSolution, set it to the - // correct time, and let it compute whatever - // values the exact solution has at that - // time: + // In the second part of this section, we provide the initial conditions. We + // are lazy (and cautious) and don't want to implement the same functions as + // above a second time. Rather, if we are queried for initial conditions, we + // create an object ExactSolution, set it to the correct time, + // and let it compute whatever values the exact solution has at that time: template class InitialValues : public Function { @@ -280,48 +231,31 @@ namespace Step25 // @sect3{Implementation of the SineGordonProblem class} - // Let's move on to the implementation of the - // main class, as it implements the algorithm - // outlined in the introduction. + // Let's move on to the implementation of the main class, as it implements + // the algorithm outlined in the introduction. // @sect4{SineGordonProblem::SineGordonProblem} - // This is the constructor of the - // SineGordonProblem class. It - // specifies the desired polynomial degree of - // the finite elements, associates a - // DoFHandler to the - // triangulation object (just as - // in the example programs step-3 and - // step-4), initializes the current or - // initial time, the final time, the time - // step size, and the value of $\theta$ for - // the time stepping scheme. Since the - // solutions we compute here are - // time-periodic, the actual value of the - // start-time doesn't matter, and we choose - // it so that we start at an interesting - // time. + // This is the constructor of the SineGordonProblem class. It + // specifies the desired polynomial degree of the finite elements, + // associates a DoFHandler to the triangulation + // object (just as in the example programs step-3 and step-4), initializes + // the current or initial time, the final time, the time step size, and the + // value of $\theta$ for the time stepping scheme. Since the solutions we + // compute here are time-periodic, the actual value of the start-time + // doesn't matter, and we choose it so that we start at an interesting time. // - // Note that if we were to chose the explicit - // Euler time stepping scheme ($\theta = 0$), - // then we must pick a time step $k \le h$, - // otherwise the scheme is not stable and - // oscillations might arise in the - // solution. The Crank-Nicolson scheme - // ($\theta = \frac{1}{2}$) and the implicit - // Euler scheme ($\theta=1$) do not suffer - // from this deficiency, since they are - // unconditionally stable. However, even then - // the time step should be chosen to be on - // the order of $h$ in order to obtain a good - // solution. 
Since we know that our mesh - // results from the uniform subdivision of a - // rectangle, we can compute that time step - // easily; if we had a different domain, the - // technique in step-24 using - // GridTools::minimal_cell_diameter would - // work as well. + // Note that if we were to choose the explicit Euler time stepping scheme + // ($\theta = 0$), then we must pick a time step $k \le h$, otherwise the + // scheme is not stable and oscillations might arise in the solution. The + // Crank-Nicolson scheme ($\theta = \frac{1}{2}$) and the implicit Euler + // scheme ($\theta=1$) do not suffer from this deficiency, since they are + // unconditionally stable. However, even then the time step should be chosen + // to be on the order of $h$ in order to obtain a good solution. Since we + // know that our mesh results from the uniform subdivision of a rectangle, + // we can compute that time step easily; if we had a different domain, the + // technique in step-24 using GridTools::minimal_cell_diameter would work as + // well. template SineGordonProblem::SineGordonProblem () : @@ -337,19 +271,13 @@ namespace Step25 // @sect4{SineGordonProblem::make_grid_and_dofs} - // This function creates a rectangular grid - // in dim dimensions and refines - // it several times. Also, all matrix and - // vector members of the - // SineGordonProblem class are - // initialized to their appropriate sizes - // once the degrees of freedom have been - // assembled. Like step-24, we use the - // MatrixCreator class to - // generate a mass matrix $M$ and a Laplace - // matrix $A$ and store them in the - // appropriate variables for the remainder of - // the program's life. + // This function creates a rectangular grid in dim dimensions + // and refines it several times. Also, all matrix and vector members of the + // SineGordonProblem class are initialized to their appropriate + // sizes once the degrees of freedom have been assembled. Like step-24, we + // use the MatrixCreator class to generate a mass matrix $M$ + // and a Laplace matrix $A$ and store them in the appropriate variables for + // the remainder of the program's life. template void SineGordonProblem::make_grid_and_dofs () { @@ -395,33 +323,23 @@ namespace Step25 // @sect4{SineGordonProblem::assemble_system} - // This functions assembles the system matrix - // and right-hand side vector for each - // iteration of Newton's method. The reader - // should refer to the Introduction for the - // explicit formulas for the system matrix - // and right-hand side. + // This function assembles the system matrix and right-hand side vector for + // each iteration of Newton's method. The reader should refer to the + // Introduction for the explicit formulas for the system matrix and + // right-hand side. // - // Note that during each time step, we have to - // add up the various contributions to the - // matrix and right hand sides. In contrast - // to step-23 and step-24, this requires - // assembling a few more terms, since they - // depend on the solution of the previous - // time step or previous nonlinear step. We - // use the functions - // compute_nl_matrix and - // compute_nl_term to do this, - // while the present function provides the - // top-level logic. + // Note that during each time step, we have to add up the various + // contributions to the matrix and right hand sides. In contrast to step-23 + // and step-24, this requires assembling a few more terms, since they depend + // on the solution of the previous time step or previous nonlinear step.
We + // use the functions compute_nl_matrix and + // compute_nl_term to do this, while the present function + // provides the top-level logic. template void SineGordonProblem::assemble_system () { - // First we assemble the Jacobian - // matrix $F'_h(U^{n,l})$, where - // $U^{n,l}$ is stored in the vector - // solution for - // convenience. + // First we assemble the Jacobian matrix $F'_h(U^{n,l})$, where $U^{n,l}$ + // is stored in the vector solution for convenience. system_matrix = 0; system_matrix.copy_from (mass_matrix); system_matrix.add (std::pow(time_step*theta,2), laplace_matrix); @@ -430,8 +348,7 @@ namespace Step25 compute_nl_matrix (old_solution, solution, tmp_matrix); system_matrix.add (-std::pow(time_step*theta,2), tmp_matrix); - // Then, we compute the right-hand - // side vector $-F_h(U^{n,l})$. + // Then, we compute the right-hand side vector $-F_h(U^{n,l})$. system_rhs = 0; tmp_matrix = 0; @@ -461,42 +378,26 @@ namespace Step25 // @sect4{SineGordonProblem::compute_nl_term} - // This function computes the vector - // $S(\cdot,\cdot)$, which appears in the - // nonlinear term in the both equations of - // the split formulation. This function not - // only simplifies the repeated computation - // of this term, but it is also a fundamental - // part of the nonlinear iterative solver - // that we use when the time stepping is - // implicit (i.e. $\theta\ne 0$). Moreover, - // we must allow the function to receive as - // input an "old" and a "new" solution. These - // may not be the actual solutions of the - // problem stored in - // old_solution and - // solution, but are simply the - // two functions we linearize about. For the - // purposes of this function, let us call the - // first two arguments $w_{\mathrm{old}}$ and - // $w_{\mathrm{new}}$ in the documentation of - // this class below, respectively. + // This function computes the vector $S(\cdot,\cdot)$, which appears in the + // nonlinear term in the both equations of the split formulation. This + // function not only simplifies the repeated computation of this term, but + // it is also a fundamental part of the nonlinear iterative solver that we + // use when the time stepping is implicit (i.e. $\theta\ne 0$). Moreover, we + // must allow the function to receive as input an "old" and a "new" + // solution. These may not be the actual solutions of the problem stored in + // old_solution and solution, but are simply the + // two functions we linearize about. For the purposes of this function, let + // us call the first two arguments $w_{\mathrm{old}}$ and $w_{\mathrm{new}}$ + // in the documentation of this class below, respectively. // - // As a side-note, it is perhaps worth - // investigating what order quadrature - // formula is best suited for this type of - // integration. Since $\sin(\cdot)$ is not a - // polynomial, there are probably no - // quadrature formulas that can integrate - // these terms exactly. It is usually - // sufficient to just make sure that the - // right hand side is integrated up to the - // same order of accuracy as the - // discretization scheme is, but it may be - // possible to improve on the constant in the - // asympotitic statement of convergence by - // choosing a more accurate quadrature - // formula. + // As a side-note, it is perhaps worth investigating what order quadrature + // formula is best suited for this type of integration. Since $\sin(\cdot)$ + // is not a polynomial, there are probably no quadrature formulas that can + // integrate these terms exactly. 
It is usually sufficient to just make sure + // that the right hand side is integrated up to the same order of accuracy + // as the discretization scheme is, but it may be possible to improve on the + // constant in the asymptotic statement of convergence by choosing a more + // accurate quadrature formula. template void SineGordonProblem::compute_nl_term (const Vector &old_data, const Vector &new_data, @@ -522,24 +423,19 @@ namespace Step25 for (; cell!=endc; ++cell) { - // Once we re-initialize our - // FEValues instantiation - // to the current cell, we make use of - // the get_function_values - // routine to get the values of the - // "old" data (presumably at - // $t=t_{n-1}$) and the "new" data - // (presumably at $t=t_n$) at the nodes - // of the chosen quadrature formula. + // Once we re-initialize our FEValues instantiation to + // the current cell, we make use of the + // get_function_values routine to get the values of the + // "old" data (presumably at $t=t_{n-1}$) and the "new" data + // (presumably at $t=t_n$) at the nodes of the chosen quadrature + // formula. fe_values.reinit (cell); fe_values.get_function_values (old_data, old_data_values); fe_values.get_function_values (new_data, new_data_values); - // Now, we can evaluate $\int_K - // \sin\left[\theta w_{\mathrm{new}} + - // (1-\theta) w_{\mathrm{old}}\right] - // \,\varphi_j\,\mathrm{d}x$ using the - // desired quadrature formula. + // Now, we can evaluate $\int_K \sin\left[\theta w_{\mathrm{new}} + + // (1-\theta) w_{\mathrm{old}}\right] \,\varphi_j\,\mathrm{d}x$ using + // the desired quadrature formula. for (unsigned int q_point=0; q_pointget_dof_indices (local_dof_indices); for (unsigned int i=0; icompute_nl_term, we must - // allow this function to receive as input an - // "old" and a "new" solution, which we again - // call $w_{\mathrm{old}}$ and - // $w_{\mathrm{new}}$ below, respectively. + // This is the second function dealing with the nonlinear scheme. It + // computes the matrix $N(\cdot,\cdot)$, which appears in the nonlinear + // term in the Jacobian of $F(\cdot)$. Just as compute_nl_term, + // we must allow this function to receive as input an "old" and a "new" + // solution, which we again call $w_{\mathrm{old}}$ and $w_{\mathrm{new}}$ + // below, respectively. template void SineGordonProblem::compute_nl_matrix (const Vector &old_data, const Vector &new_data, @@ -595,24 +485,15 @@ namespace Step25 for (; cell!=endc; ++cell) { - // Again, first we - // re-initialize our - // FEValues - // instantiation to the current - // cell. + // Again, first we re-initialize our FEValues + // instantiation to the current cell. fe_values.reinit (cell); fe_values.get_function_values (old_data, old_data_values); fe_values.get_function_values (new_data, new_data_values); - // Then, we evaluate $\int_K - // \cos\left[\theta - // w_{\mathrm{new}} + - // (1-\theta) - // w_{\mathrm{old}}\right]\, - // \varphi_i\, - // \varphi_j\,\mathrm{d}x$ - // using the desired quadrature - // formula. + // Then, we evaluate $\int_K \cos\left[\theta w_{\mathrm{new}} + + // (1-\theta) w_{\mathrm{old}}\right]\, \varphi_i\, + // \varphi_j\,\mathrm{d}x$ using the desired quadrature formula. for (unsigned int q_point=0; q_pointget_dof_indices (local_dof_indices); for (unsigned int i=0; isolution_update and used to update - // solution in the - // run function.
+ // As discussed in the Introduction, this function uses the CG iterative + // solver on the linear system of equations resulting from the finite + // element spatial discretization of each iteration of Newton's method for + // the (nonlinear) first equation of the split formulation. The solution to + // the system is, in fact, $\delta U^{n,l}$ so it is stored in + // solution_update and used to update solution in + // the run function. // - // Note that we re-set the solution update to - // zero before solving for it. This is not - // necessary: iterative solvers can start - // from any point and converge to the correct - // solution. If one has a good estimate about - // the solution of a linear system, it may be - // worthwhile to start from that vector, but - // as a general observation it is a fact that - // the starting point doesn't matter very - // much: it has to be a very, very good guess - // to reduce the number of iterations by more - // than a few. It turns out that for this problem, - // using the previous nonlinear update as a - // starting point actually hurts convergence and - // increases the number of iterations needed, - // so we simply set it to zero. + // Note that we re-set the solution update to zero before solving for + // it. This is not necessary: iterative solvers can start from any point and + // converge to the correct solution. If one has a good estimate about the + // solution of a linear system, it may be worthwhile to start from that + // vector, but as a general observation it is a fact that the starting point + // doesn't matter very much: it has to be a very, very good guess to reduce + // the number of iterations by more than a few. It turns out that for this + // problem, using the previous nonlinear update as a starting point actually + // hurts convergence and increases the number of iterations needed, so we + // simply set it to zero. // - // The function returns the number of - // iterations it took to converge to a - // solution. This number will later be used - // to generate output on the screen showing - // how many iterations were needed in each - // nonlinear iteration. + // The function returns the number of iterations it took to converge to a + // solution. This number will later be used to generate output on the screen + // showing how many iterations were needed in each nonlinear iteration. template unsigned int SineGordonProblem::solve () @@ -697,10 +562,8 @@ namespace Step25 // @sect4{SineGordonProblem::output_results} - // This function outputs the results to a - // file. It is pretty much identical to the - // respective functions in step-23 and - // step-24: + // This function outputs the results to a file. It is pretty much identical + // to the respective functions in step-23 and step-24: template void SineGordonProblem::output_results (const unsigned int timestep_number) const @@ -721,41 +584,26 @@ namespace Step25 // @sect4{SineGordonProblem::run} - // This function has the top-level - // control over everything: it runs - // the (outer) time-stepping loop, - // the (inner) nonlinear-solver loop, - // and outputs the solution after each - // time step. + // This function has the top-level control over everything: it runs the + // (outer) time-stepping loop, the (inner) nonlinear-solver loop, and + // outputs the solution after each time step. template void SineGordonProblem::run () { make_grid_and_dofs (); - // To aknowledge the initial - // condition, we must use the - // function $u_0(x)$ to compute - // $U^0$. 
To this end, below we - // will create an object of type - // InitialValues; note - // that when we create this object - // (which is derived from the - // Function class), we - // set its internal time variable - // to $t_0$, to indicate that the - // initial condition is a function - // of space and time evaluated at - // $t=t_0$. + // To acknowledge the initial condition, we must use the function $u_0(x)$ + // to compute $U^0$. To this end, below we will create an object of type + // InitialValues; note that when we create this object (which + // is derived from the Function class), we set its internal + // time variable to $t_0$, to indicate that the initial condition is a + // function of space and time evaluated at $t=t_0$. // - // Then we produce $U^0$ by projecting - // $u_0(x)$ onto the grid using - // VectorTools::project. We - // have to use the same construct using - // hanging node constraints as in step-21: - // the VectorTools::project function - // requires a hanging node constraints - // object, but to be used we first need to - // close it: + // Then we produce $U^0$ by projecting $u_0(x)$ onto the grid using + // VectorTools::project. We have to use the same construct + // using hanging node constraints as in step-21: the VectorTools::project + // function requires a hanging node constraints object, but to be used we + // first need to close it: { ConstraintMatrix constraints; constraints.close(); @@ -766,20 +614,14 @@ namespace Step25 solution); } - // For completeness, we output the - // zeroth time step to a file just - // like any other other time step. + // For completeness, we output the zeroth time step to a file just like + // any other time step. output_results (0); - // Now we perform the time - // stepping: at every time step we - // solve the matrix equation(s) - // corresponding to the finite - // element discretization of the - // problem, and then advance our - // solution according to the time - // stepping formulas we discussed - // in the Introduction. + // Now we perform the time stepping: at every time step we solve the + // matrix equation(s) corresponding to the finite element discretization + // of the problem, and then advance our solution according to the time + // stepping formulas we discussed in the Introduction. unsigned int timestep_number = 1; for (time+=time_step; time<=final_time; time+=time_step, ++timestep_number) { @@ -790,28 +632,16 @@ namespace Step25 << "advancing to t = " << time << "." << std::endl; - // At the beginning of each - // time step we must solve the - // nonlinear equation in the - // split formulation via - // Newton's method --- - // i.e. solve for $\delta - // U^{n,l}$ then compute - // $U^{n,l+1}$ and so on. The - // stopping criterion for this - // nonlinear iteration is that - // $\|F_h(U^{n,l})\|_2 \le - // 10^{-6} - // \|F_h(U^{n,0})\|_2$. Consequently, - // we need to record the norm - // of the residual in the first - // iteration. + // At the beginning of each time step we must solve the nonlinear + // equation in the split formulation via Newton's method --- + // i.e. solve for $\delta U^{n,l}$ then compute $U^{n,l+1}$ and so + // on. The stopping criterion for this nonlinear iteration is that + // $\|F_h(U^{n,l})\|_2 \le 10^{-6} \|F_h(U^{n,0})\|_2$. Consequently, + // we need to record the norm of the residual in the first iteration. // - // At the end of each iteration, we - // output to the console how many - // linear solver iterations it took - // us.
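Spelled out in formulas, each pass of this nonlinear loop solves the linear system $F'_h(U^{n,l})\,\delta U^{n,l} = -F_h(U^{n,l})$ assembled above, updates the iterate in the usual Newton fashion, $U^{n,l+1} = U^{n,l} + \delta U^{n,l}$, and stops as soon as $\|F_h(U^{n,l})\|_2 \le 10^{-6} \|F_h(U^{n,0})\|_2$; this is only a compact restatement of the comments above.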
When the loop below is done, we - // have (an approximation of) $U^n$. + // At the end of each iteration, we output to the console how many + // linear solver iterations it took us. When the loop below is done, + // we have (an approximation of) $U^n$. double initial_rhs_norm = 0.; bool first_iteration = true; do @@ -837,14 +667,10 @@ namespace Step25 std::cout << " CG iterations per nonlinear step." << std::endl; - // Upon obtaining the solution to the - // first equation of the problem at - // $t=t_n$, we must update the - // auxiliary velocity variable - // $V^n$. However, we do not compute - // and store $V^n$ since it is not a - // quantity we use directly in the - // problem. Hence, for simplicity, we + // Upon obtaining the solution to the first equation of the problem at + // $t=t_n$, we must update the auxiliary velocity variable + // $V^n$. However, we do not compute and store $V^n$ since it is not a + // quantity we use directly in the problem. Hence, for simplicity, we // update $MV^n$ directly: Vector tmp_vector (solution.size()); laplace_matrix.vmult (tmp_vector, solution); @@ -858,22 +684,13 @@ namespace Step25 compute_nl_term (old_solution, solution, tmp_vector); M_x_velocity.add (-time_step, tmp_vector); - // Oftentimes, in particular - // for fine meshes, we must - // pick the time step to be - // quite small in order for the - // scheme to be - // stable. Therefore, there are - // a lot of time steps during - // which "nothing interesting - // happens" in the solution. To - // improve overall efficiency - // -- in particular, speed up - // the program and save disk - // space -- we only output the - // solution every - // output_timestep_skip - // time steps: + // Oftentimes, in particular for fine meshes, we must pick the time + // step to be quite small in order for the scheme to be + // stable. Therefore, there are a lot of time steps during which + // "nothing interesting happens" in the solution. To improve overall + // efficiency -- in particular, speed up the program and save disk + // space -- we only output the solution every + // output_timestep_skip time steps: if (timestep_number % output_timestep_skip == 0) output_results (timestep_number); } @@ -882,21 +699,13 @@ namespace Step25 // @sect3{The main function} -// This is the main function of the -// program. It creates an object of -// top-level class and calls its -// principal function. Also, we -// suppress some of the library output -// by setting -// deallog.depth_console -// to zero. Furthermore, if -// exceptions are thrown during the -// execution of the run method of the -// SineGordonProblem -// class, we catch and report them -// here. For more information about -// exceptions the reader should -// consult step-6. +// This is the main function of the program. It creates an object of top-level +// class and calls its principal function. Also, we suppress some of the +// library output by setting deallog.depth_console to +// zero. Furthermore, if exceptions are thrown during the execution of the run +// method of the SineGordonProblem class, we catch and report +// them here. For more information about exceptions the reader should consult +// step-6. int main () { try diff --git a/deal.II/examples/step-26/step-26.cc b/deal.II/examples/step-26/step-26.cc index 4076c6d242..dbdb784c87 100644 --- a/deal.II/examples/step-26/step-26.cc +++ b/deal.II/examples/step-26/step-26.cc @@ -11,11 +11,8 @@ // @sect3{Include files} -// The first few (many?) 
include -// files have already been used in -// the previous example, so we will -// not explain their meaning here -// again. +// The first few (many?) include files have already been used in the previous +// example, so we will not explain their meaning here again. #include #include #include @@ -40,20 +37,17 @@ #include #include -// This is new, however: in the previous -// example we got some unwanted output from -// the linear solvers. If we want to suppress -// it, we have to include this file and add a -// single line somewhere to the program (see -// the main() function below for that): +// This is new, however: in the previous example we got some unwanted output +// from the linear solvers. If we want to suppress it, we have to include this +// file and add a single line somewhere to the program (see the main() +// function below for that): #include #include #include -// The last step is as in all -// previous programs: +// The last step is as in all previous programs: namespace Step26 { using namespace dealii; @@ -162,12 +156,11 @@ namespace Step26 AssertThrow (point_list.size() > 1, ExcIO()); } - // next fit a linear model through the data - // cloud to rectify it in a local + // next fit a linear model through the data cloud to rectify it in a local // coordinate system // - // the first step is to move the center of - // mass of the points to the origin + // the first step is to move the center of mass of the points to the + // origin { const Point<3> c_o_m = std::accumulate (point_list.begin(), point_list.end(), @@ -177,17 +170,16 @@ namespace Step26 point_list[i] -= c_o_m; } - // next do a least squares fit to the - // function ax+by. this leads to the + // next do a least squares fit to the function ax+by. this leads to the // following equations: // min f(a,b) = sum_i (zi-a xi - b yi)^2 / 2 // - // f_a = sum_i (zi - a xi - b yi) xi = 0 - // f_b = sum_i (zi - a xi - b yi) yi = 0 + // f_a = sum_i (zi - a xi - b yi) xi = 0 f_b = sum_i (zi - a xi - b yi) yi + // = 0 // - // f_a = (sum_i zi xi) - (sum xi^2) a - (sum xi yi) b = 0 - // f_a = (sum_i zi yi) - (sum xi yi) a - (sum yi^2) b = 0 + // f_a = (sum_i zi xi) - (sum xi^2) a - (sum xi yi) b = 0 f_a = (sum_i zi + // yi) - (sum xi yi) a - (sum yi^2) b = 0 { double A[2][2] = {{0,0},{0,0}}; double B[2] = {0,0}; @@ -207,10 +199,8 @@ namespace Step26 const double b = (A[0][0] * B[1] - A[0][1] * B[0]) / det; - // with this information, we can rotate - // the points so that the corresponding - // least-squares fit would be the x-y - // plane + // with this information, we can rotate the points so that the + // corresponding least-squares fit would be the x-y plane const Point<2> gradient_direction = Point<2>(a,b) / std::sqrt(a*a+b*b); const Point<2> orthogonal_direction @@ -220,27 +210,21 @@ namespace Step26 for (unsigned int i=0; i xy (point_list[i][0], point_list[i][1]); const double grad_distance = xy * gradient_direction; const double orth_distance = xy * orthogonal_direction; - // we then have to stretch the points - // in the gradient direction. the - // stretch factor is defined above - // (zero if the original plane was - // already the xy plane, infinity if - // it was vertical) + // we then have to stretch the points in the gradient direction. 
the + // stretch factor is defined above (zero if the original plane was + // already the xy plane, infinity if it was vertical) const Point<2> new_xy = (grad_distance * stretch_factor * gradient_direction + orth_distance * orthogonal_direction); @@ -327,22 +311,13 @@ namespace Step26 // @sect3{The LaplaceProblem class template} - // This is again the same - // LaplaceProblem class as in the - // previous example. The only - // difference is that we have now - // declared it as a class with a - // template parameter, and the - // template parameter is of course - // the spatial dimension in which we - // would like to solve the Laplace - // equation. Of course, several of - // the member variables depend on - // this dimension as well, in - // particular the Triangulation - // class, which has to represent - // quadrilaterals or hexahedra, - // respectively. Apart from this, + // This is again the same LaplaceProblem class as in the + // previous example. The only difference is that we have now declared it as + // a class with a template parameter, and the template parameter is of + // course the spatial dimension in which we would like to solve the Laplace + // equation. Of course, several of the member variables depend on this + // dimension as well, in particular the Triangulation class, which has to + // represent quadrilaterals or hexahedra, respectively. Apart from this, // everything is as before. template class LaplaceProblem @@ -397,51 +372,35 @@ namespace Step26 // @sect3{Implementation of the LaplaceProblem class} - // Next for the implementation of the class - // template that makes use of the functions - // above. As before, we will write everything - // as templates that have a formal parameter - // dim that we assume unknown at the time - // we define the template functions. Only - // later, the compiler will find a - // declaration of LaplaceProblem@<2@> (in - // the main function, actually) and - // compile the entire class with dim - // replaced by 2, a process referred to as - // `instantiation of a template'. When doing - // so, it will also replace instances of - // RightHandSide@ by - // RightHandSide@<2@> and instantiate the - // latter class from the class template. + // Next for the implementation of the class template that makes use of the + // functions above. As before, we will write everything as templates that + // have a formal parameter dim that we assume unknown at the + // time we define the template functions. Only later, the compiler will find + // a declaration of LaplaceProblem@<2@> (in the + // main function, actually) and compile the entire class with + // dim replaced by 2, a process referred to as `instantiation + // of a template'. When doing so, it will also replace instances of + // RightHandSide@ by RightHandSide@<2@> and + // instantiate the latter class from the class template. // - // In fact, the compiler will also find a - // declaration LaplaceProblem@<3@> in - // main(). This will cause it to again go - // back to the general - // LaplaceProblem@ template, replace - // all occurrences of dim, this time by - // 3, and compile the class a second - // time. Note that the two instantiations - // LaplaceProblem@<2@> and - // LaplaceProblem@<3@> are completely - // independent classes; their only common - // feature is that they are both instantiated - // from the same general template, but they - // are not convertible into each other, for - // example, and share no code (both - // instantiations are compiled completely - // independently). 
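Concretely, the two independent instantiations the preceding comment talks about come into being through nothing more than declarations like the following sketch (the real calls appear in the main() function further down):

  LaplaceProblem<2> laplace_problem_2d;   // instantiates the template for dim = 2
  laplace_problem_2d.run ();

  LaplaceProblem<3> laplace_problem_3d;   // a second, completely separate instantiation for dim = 3
  laplace_problem_3d.run ();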
+ // In fact, the compiler will also find a declaration + // LaplaceProblem@<3@> in main(). This will cause + // it to again go back to the general LaplaceProblem@ + // template, replace all occurrences of dim, this time by 3, + // and compile the class a second time. Note that the two instantiations + // LaplaceProblem@<2@> and LaplaceProblem@<3@> are + // completely independent classes; their only common feature is that they + // are both instantiated from the same general template, but they are not + // convertible into each other, for example, and share no code (both + // instantiations are compiled completely independently). // @sect4{LaplaceProblem::LaplaceProblem} - // After this introduction, here is the - // constructor of the LaplaceProblem - // class. It specifies the desired polynomial - // degree of the finite elements and - // associates the DoFHandler to the - // triangulation just as in the previous - // example program, step-3: + // After this introduction, here is the constructor of the + // LaplaceProblem class. It specifies the desired polynomial + // degree of the finite elements and associates the DoFHandler to the + // triangulation just as in the previous example program, step-3: template LaplaceProblem::LaplaceProblem () : fe (1), @@ -451,39 +410,23 @@ namespace Step26 // @sect4{LaplaceProblem::make_grid_and_dofs} - // Grid creation is something - // inherently dimension - // dependent. However, as long as the - // domains are sufficiently similar - // in 2D or 3D, the library can - // abstract for you. In our case, we - // would like to again solve on the - // square [-1,1]x[-1,1] in 2D, or on - // the cube [-1,1]x[-1,1]x[-1,1] in - // 3D; both can be termed - // hyper_cube, so we may use the - // same function in whatever - // dimension we are. Of course, the - // functions that create a hypercube - // in two and three dimensions are - // very much different, but that is - // something you need not care - // about. Let the library handle the + // Grid creation is something inherently dimension dependent. However, as + // long as the domains are sufficiently similar in 2D or 3D, the library can + // abstract for you. In our case, we would like to again solve on the square + // [-1,1]x[-1,1] in 2D, or on the cube [-1,1]x[-1,1]x[-1,1] in 3D; both can + // be termed hyper_cube, so we may use the same function in + // whatever dimension we are. Of course, the functions that create a + // hypercube in two and three dimensions are very much different, but that + // is something you need not care about. Let the library handle the // difficult things. // - // Likewise, associating a degree of freedom - // with each vertex is something which - // certainly looks different in 2D and 3D, - // but that does not need to bother you - // either. This function therefore looks - // exactly like in the previous example, - // although it performs actions that in their - // details are quite different if dim - // happens to be 3. The only significant - // difference from a user's perspective is - // the number of cells resulting, which is - // much higher in three than in two space - // dimensions! + // Likewise, associating a degree of freedom with each vertex is something + // which certainly looks different in 2D and 3D, but that does not need to + // bother you either. This function therefore looks exactly like in the + // previous example, although it performs actions that in their details are + // quite different if dim happens to be 3. 
The only significant + // difference from a user's perspective is the number of cells resulting, + // which is much higher in three than in two space dimensions! template void LaplaceProblem::make_grid_and_dofs () { @@ -550,36 +493,20 @@ namespace Step26 // @sect4{LaplaceProblem::assemble_system} - // Unlike in the previous example, we - // would now like to use a - // non-constant right hand side - // function and non-zero boundary - // values. Both are tasks that are - // readily achieved with a only a few - // new lines of code in the - // assemblage of the matrix and right - // hand side. + // Unlike in the previous example, we would now like to use a non-constant + // right hand side function and non-zero boundary values. Both are tasks + // that are readily achieved with a only a few new lines of code in the + // assemblage of the matrix and right hand side. // - // More interesting, though, is the - // way we assemble matrix and right - // hand side vector dimension - // independently: there is simply no - // difference to the - // two-dimensional case. Since the - // important objects used in this - // function (quadrature formula, - // FEValues) depend on the dimension - // by way of a template parameter as - // well, they can take care of - // setting up properly everything for - // the dimension for which this - // function is compiled. By declaring - // all classes which might depend on - // the dimension using a template - // parameter, the library can make - // nearly all work for you and you - // don't have to care about most - // things. + // More interesting, though, is the way we assemble matrix and right hand + // side vector dimension independently: there is simply no difference to the + // two-dimensional case. Since the important objects used in this function + // (quadrature formula, FEValues) depend on the dimension by way of a + // template parameter as well, they can take care of setting up properly + // everything for the dimension for which this function is compiled. By + // declaring all classes which might depend on the dimension using a + // template parameter, the library can make nearly all work for you and you + // don't have to care about most things. template void LaplaceProblem::assemble_system () { @@ -602,13 +529,9 @@ namespace Step26 // @sect4{LaplaceProblem::solve} - // Solving the linear system of - // equations is something that looks - // almost identical in most - // programs. In particular, it is - // dimension independent, so this - // function is copied verbatim from the - // previous example. + // Solving the linear system of equations is something that looks almost + // identical in most programs. In particular, it is dimension independent, + // so this function is copied verbatim from the previous example. template void LaplaceProblem::solve () { @@ -627,31 +550,22 @@ namespace Step26 // @sect4{LaplaceProblem::output_results} - // This function also does what the - // respective one did in step-3. No changes + // This function also does what the respective one did in step-3. No changes // here for dimension independence either. // - // The only difference to the previous - // example is that we want to write output in - // GMV format, rather than for gnuplot (GMV - // is another graphics program that, contrary - // to gnuplot, shows data in nice colors, - // allows rotation of geometries with the - // mouse, and generates reasonable - // representations of 3d data; for ways to - // obtain it see the ReadMe file of - // deal.II). 
To write data in this format, we - // simply replace the + // The only difference to the previous example is that we want to write + // output in GMV format, rather than for gnuplot (GMV is another graphics + // program that, contrary to gnuplot, shows data in nice colors, allows + // rotation of geometries with the mouse, and generates reasonable + // representations of 3d data; for ways to obtain it see the ReadMe file of + // deal.II). To write data in this format, we simply replace the // data_out.write_gnuplot call by // data_out.write_gmv. // - // Since the program will run both 2d and 3d - // versions of the laplace solver, we use the - // dimension in the filename to generate - // distinct filenames for each run (in a - // better program, one would check whether - // `dim' can have other values than 2 or 3, - // but we neglect this here for the sake of + // Since the program will run both 2d and 3d versions of the laplace solver, + // we use the dimension in the filename to generate distinct filenames for + // each run (in a better program, one would check whether `dim' can have + // other values than 2 or 3, but we neglect this here for the sake of // brevity). template void LaplaceProblem::output_results () const @@ -673,11 +587,9 @@ namespace Step26 // @sect4{LaplaceProblem::run} - // This is the function which has the - // top-level control over - // everything. Apart from one line of - // additional output, it is the same - // as for the previous example. + // This is the function which has the top-level control over + // everything. Apart from one line of additional output, it is the same as + // for the previous example. template void LaplaceProblem::run () { @@ -693,78 +605,48 @@ namespace Step26 // @sect3{The main function} -// And this is the main function. It also -// looks mostly like in step-3, but if you -// look at the code below, note how we first -// create a variable of type -// LaplaceProblem@<2@> (forcing the -// compiler to compile the class template -// with dim replaced by 2) and run a -// 2d simulation, and then we do the whole -// thing over in 3d. +// And this is the main function. It also looks mostly like in step-3, but if +// you look at the code below, note how we first create a variable of type +// LaplaceProblem@<2@> (forcing the compiler to compile the class +// template with dim replaced by 2) and run a 2d +// simulation, and then we do the whole thing over in 3d. // -// In practice, this is probably not what you -// would do very frequently (you probably -// either want to solve a 2d problem, or one -// in 3d, but not both at the same -// time). However, it demonstrates the -// mechanism by which we can simply change -// which dimension we want in a single place, -// and thereby force the compiler to -// recompile the dimension independent class -// templates for the dimension we -// request. The emphasis here lies on the -// fact that we only need to change a single -// place. This makes it rather trivial to -// debug the program in 2d where computations -// are fast, and then switch a single place -// to a 3 to run the much more computing -// intensive program in 3d for `real' +// In practice, this is probably not what you would do very frequently (you +// probably either want to solve a 2d problem, or one in 3d, but not both at +// the same time). 
However, it demonstrates the mechanism by which we can +// simply change which dimension we want in a single place, and thereby force +// the compiler to recompile the dimension independent class templates for the +// dimension we request. The emphasis here lies on the fact that we only need +// to change a single place. This makes it rather trivial to debug the program +// in 2d where computations are fast, and then switch a single place to a 3 to +// run the much more computing intensive program in 3d for `real' // computations. // -// Each of the two blocks is enclosed in -// braces to make sure that the -// laplace_problem_2d variable goes out -// of scope (and releases the memory it -// holds) before we move on to allocate -// memory for the 3d case. Without the -// additional braces, the -// laplace_problem_2d variable would only -// be destroyed at the end of the function, -// i.e. after running the 3d problem, and -// would needlessly hog memory while the 3d -// run could actually use it. +// Each of the two blocks is enclosed in braces to make sure that the +// laplace_problem_2d variable goes out of scope (and releases +// the memory it holds) before we move on to allocate memory for the 3d +// case. Without the additional braces, the laplace_problem_2d +// variable would only be destroyed at the end of the function, i.e. after +// running the 3d problem, and would needlessly hog memory while the 3d run +// could actually use it. // -// Finally, the first line of the function is -// used to suppress some output. Remember -// that in the previous example, we had the -// output from the linear solvers about the -// starting residual and the number of the -// iteration where convergence was -// detected. This can be suppressed through -// the deallog.depth_console(0) call. +// Finally, the first line of the function is used to suppress some output. +// Remember that in the previous example, we had the output from the linear +// solvers about the starting residual and the number of the iteration where +// convergence was detected. This can be suppressed through the +// deallog.depth_console(0) call. // -// The rationale here is the following: the -// deallog (i.e. deal-log, not de-allog) -// variable represents a stream to which some -// parts of the library write output. It -// redirects this output to the console and -// if required to a file. The output is -// nested in a way so that each function can -// use a prefix string (separated by colons) -// for each line of output; if it calls -// another function, that may also use its -// prefix which is then printed after the one -// of the calling function. Since output from -// functions which are nested deep below is -// usually not as important as top-level -// output, you can give the deallog variable -// a maximal depth of nested output for -// output to console and file. The depth zero -// which we gave here means that no output is -// written. By changing it you can get more -// information about the innards of the -// library. +// The rationale here is the following: the deallog (i.e. deal-log, not +// de-allog) variable represents a stream to which some parts of the library +// write output. It redirects this output to the console and if required to a +// file. The output is nested in a way so that each function can use a prefix +// string (separated by colons) for each line of output; if it calls another +// function, that may also use its prefix which is then printed after the one +// of the calling function. 
Since output from functions which are nested deep +// below is usually not as important as top-level output, you can give the +// deallog variable a maximal depth of nested output for output to console and +// file. The depth zero which we gave here means that no output is written. By +// changing it you can get more information about the innards of the library. int main () { try diff --git a/deal.II/examples/step-27/step-27.cc b/deal.II/examples/step-27/step-27.cc index 585ea26c9f..ab8fd12690 100644 --- a/deal.II/examples/step-27/step-27.cc +++ b/deal.II/examples/step-27/step-27.cc @@ -11,10 +11,8 @@ // @sect3{Include files} -// The first few files have already -// been covered in previous examples -// and will thus not be further -// commented on. +// The first few files have already been covered in previous examples and will +// thus not be further commented on. #include #include #include @@ -38,30 +36,23 @@ #include #include -// These are the new files we need. The first -// one provides an alternative to the usual -// SparsityPattern class and the -// CompressedSparsityPattern class already -// discussed in step-11 and step-18. The last -// two provide hp versions of the -// DoFHandler and FEValues classes as -// described in the introduction of this -// program. +// These are the new files we need. The first one provides an alternative to +// the usual SparsityPattern class and the CompressedSparsityPattern class +// already discussed in step-11 and step-18. The last two provide hp +// versions of the DoFHandler and FEValues classes as described in the +// introduction of this program. #include #include #include -// The last set of include files are standard -// C++ headers. We need support for complex -// numbers when we compute the Fourier -// transform. +// The last set of include files are standard C++ headers. We need support for +// complex numbers when we compute the Fourier transform. #include #include #include -// Finally, this is as in previous -// programs: +// Finally, this is as in previous programs: namespace Step27 { using namespace dealii; @@ -69,32 +60,21 @@ namespace Step27 // @sect3{The main class} - // The main class of this program looks very - // much like the one already used in the - // first few tutorial programs, for example - // the one in step-6. The main difference is - // that we have merged the refine_grid and - // output_results functions into one since we - // will also want to output some of the - // quantities used in deciding how to refine - // the mesh (in particular the estimated - // smoothness of the solution). There is also - // a function that computes this estimated - // smoothness, as discussed in the - // introduction. + // The main class of this program looks very much like the one already used + // in the first few tutorial programs, for example the one in step-6. The + // main difference is that we have merged the refine_grid and output_results + // functions into one since we will also want to output some of the + // quantities used in deciding how to refine the mesh (in particular the + // estimated smoothness of the solution). There is also a function that + // computes this estimated smoothness, as discussed in the introduction. 
// - // As far as member variables are concerned, - // we use the same structure as already used - // in step-6, but instead of a regular - // DoFHandler we use an object of type - // hp::DoFHandler, and we need collections - // instead of individual finite element, - // quadrature, and face quadrature - // objects. We will fill these collections in - // the constructor of the class. The last - // variable, max_degree, - // indicates the maximal polynomial degree of - // shape functions used. + // As far as member variables are concerned, we use the same structure as + // already used in step-6, but instead of a regular DoFHandler we use an + // object of type hp::DoFHandler, and we need collections instead of + // individual finite element, quadrature, and face quadrature objects. We + // will fill these collections in the constructor of the class. The last + // variable, max_degree, indicates the maximal polynomial + // degree of shape functions used. template class LaplaceProblem { @@ -134,9 +114,8 @@ namespace Step27 // @sect3{Equation data} // - // Next, let us define the right hand side - // function for this problem. It is $x+1$ in - // 1d, $(x+1)(y+1)$ in 2d, and so on. + // Next, let us define the right hand side function for this problem. It is + // $x+1$ in 1d, $(x+1)(y+1)$ in 2d, and so on. template class RightHandSide : public Function { @@ -166,23 +145,16 @@ namespace Step27 // @sect4{LaplaceProblem::LaplaceProblem} - // The constructor of this class is fairly - // straightforward. It associates the - // hp::DoFHandler object with the - // triangulation, and then sets the maximal - // polynomial degree to 7 (in 1d and 2d) or 5 - // (in 3d and higher). We do so because using - // higher order polynomial degrees becomes - // prohibitively expensive, especially in - // higher space dimensions. + // The constructor of this class is fairly straightforward. It associates + // the hp::DoFHandler object with the triangulation, and then sets the + // maximal polynomial degree to 7 (in 1d and 2d) or 5 (in 3d and higher). We + // do so because using higher order polynomial degrees becomes prohibitively + // expensive, especially in higher space dimensions. // - // Following this, we fill the collections of - // finite element, and cell and face - // quadrature objects. We start with - // quadratic elements, and each quadrature - // formula is chosen so that it is - // appropriate for the matching finite - // element in the hp::FECollection object. + // Following this, we fill the collections of finite element, and cell and + // face quadrature objects. We start with quadratic elements, and each + // quadrature formula is chosen so that it is appropriate for the matching + // finite element in the hp::FECollection object. template LaplaceProblem::LaplaceProblem () : @@ -200,8 +172,7 @@ namespace Step27 // @sect4{LaplaceProblem::~LaplaceProblem} - // The destructor is unchanged from what we - // already did in step-6: + // The destructor is unchanged from what we already did in step-6: template LaplaceProblem::~LaplaceProblem () { @@ -211,42 +182,26 @@ namespace Step27 // @sect4{LaplaceProblem::setup_system} // - // This function is again an almost - // verbatim copy of what we already did in - // step-6. The first change is that we - // append the Dirichlet boundary conditions - // to the ConstraintMatrix object, which we - // consequently call just - // constraints instead of - // hanging_node_constraints. 
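In rough outline, "appending the Dirichlet boundary conditions to the ConstraintMatrix object" amounts to something like the following sketch; the ZeroFunction here is only a stand-in for whatever boundary values the program actually prescribes:

  constraints.clear ();
  DoFTools::make_hanging_node_constraints (dof_handler, constraints);
  VectorTools::interpolate_boundary_values (dof_handler,
                                            0,
                                            ZeroFunction<dim> (),
                                            constraints);
  constraints.close ();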
The - // second difference is that we don't - // directly build the sparsity pattern, but - // first create an intermediate object that - // we later copy into the usual - // SparsityPattern data structure, since - // this is more efficient for the problem - // with many entries per row (and different - // number of entries in different rows). In - // another slight deviation, we do not - // first build the sparsity pattern and - // then condense away constrained degrees - // of freedom, but pass the constraint - // matrix object directly to the function - // that builds the sparsity pattern. We - // disable the insertion of constrained - // entries with false as fourth - // argument in the - // DoFTools::make_sparsity_pattern - // function. All of these changes are - // explained in the introduction of this + // This function is again an almost verbatim copy of what we already did in + // step-6. The first change is that we append the Dirichlet boundary + // conditions to the ConstraintMatrix object, which we consequently call + // just constraints instead of + // hanging_node_constraints. The second difference is that we + // don't directly build the sparsity pattern, but first create an + // intermediate object that we later copy into the usual SparsityPattern + // data structure, since this is more efficient for the problem with many + // entries per row (and different number of entries in different rows). In + // another slight deviation, we do not first build the sparsity pattern and + // then condense away constrained degrees of freedom, but pass the + // constraint matrix object directly to the function that builds the + // sparsity pattern. We disable the insertion of constrained entries with + // false as fourth argument in the DoFTools::make_sparsity_pattern + // function. All of these changes are explained in the introduction of this // program. // - // The last change, maybe hidden in plain - // sight, is that the dof_handler variable - // here is an hp object -- nevertheless all - // the function calls we had before still - // work in exactly the same way as they - // always did. + // The last change, maybe hidden in plain sight, is that the dof_handler + // variable here is an hp object -- nevertheless all the function calls we + // had before still work in exactly the same way as they always did. template void LaplaceProblem::setup_system () { @@ -276,49 +231,31 @@ namespace Step27 // @sect4{LaplaceProblem::assemble_system} - // This is the function that assembles the - // global matrix and right hand side vector - // from the local contributions of each - // cell. Its main working is as has been - // described in many of the tutorial programs - // before. The significant deviations are the - // ones necessary for hp finite element - // methods. In particular, that we need to - // use a collection of FEValues object - // (implemented through the hp::FEValues - // class), and that we have to eliminate - // constrained degrees of freedom already - // when copying local contributions into - // global objects. Both of these are - // explained in detail in the introduction of - // this program. + // This is the function that assembles the global matrix and right hand side + // vector from the local contributions of each cell. Its main working is as + // has been described in many of the tutorial programs before. The + // significant deviations are the ones necessary for hp finite + // element methods. 
In particular, that we need to use a collection of + // FEValues objects (implemented through the hp::FEValues class), and that we + // have to eliminate constrained degrees of freedom already when copying + // local contributions into global objects. Both of these are explained in + // detail in the introduction of this program. // - // One other slight complication is the fact - // that because we use different polynomial - // degrees on different cells, the matrices - // and vectors holding local contributions do - // not have the same size on all cells. At - // the beginning of the loop over all cells, - // we therefore each time have to resize them - // to the correct size (given by - // dofs_per_cell). Because these - // classes are implement in such a way that - // reducing the size of a matrix or vector - // does not release the currently allocated - // memory (unless the new size is zero), the - // process of resizing at the beginning of - // the loop will only require re-allocation - // of memory during the first few - // iterations. Once we have found in a cell - // with the maximal finite element degree, no - // more re-allocations will happen because - // all subsequent reinit calls - // will only set the size to something that - // fits the currently allocated memory. This - // is important since allocating memory is - // expensive, and doing so every time we - // visit a new cell would take significant - // compute time. + // One other slight complication is the fact that because we use different + // polynomial degrees on different cells, the matrices and vectors holding + // local contributions do not have the same size on all cells. At the + // beginning of the loop over all cells, we therefore each time have to + // resize them to the correct size (given by + // dofs_per_cell). Because these classes are implemented in such + // a way that reducing the size of a matrix or vector does not release the + // currently allocated memory (unless the new size is zero), the process of + // resizing at the beginning of the loop will only require re-allocation of + // memory during the first few iterations. Once we have found a cell with + // the maximal finite element degree, no more re-allocations will happen + // because all subsequent reinit calls will only set the size + // to something that fits the currently allocated memory. This is important + // since allocating memory is expensive, and doing so every time we visit a + // new cell would take significant compute time. template void LaplaceProblem::assemble_system () { @@ -378,28 +315,20 @@ namespace Step27 system_matrix, system_rhs); } - // Now with the loop over all cells - // finished, we are done for this - // function. The steps we still had to do - // at this point in earlier tutorial - // programs, namely condensing hanging - // node constraints and applying - // Dirichlet boundary conditions, have - // been taken care of by the - // ConstraintMatrix object - // constraints on the fly. + // Now with the loop over all cells finished, we are done for this + // function. The steps we still had to do at this point in earlier + // tutorial programs, namely condensing hanging node constraints and + // applying Dirichlet boundary conditions, have been taken care of by the + // ConstraintMatrix object constraints on the fly. } // @sect4{LaplaceProblem::solve} - // The function solving the linear system is - // entirely unchanged from previous - // examples.
We simply try to reduce the - // initial residual (which equals the $l_2$ - // norm of the right hand side) by a certain - // factor: + // The function solving the linear system is entirely unchanged from + // previous examples. We simply try to reduce the initial residual (which + // equals the $l_2$ norm of the right hand side) by a certain factor: template void LaplaceProblem::solve () { @@ -420,32 +349,21 @@ namespace Step27 // @sect4{LaplaceProblem::postprocess} - // After solving the linear system, we will - // want to postprocess the solution. Here, - // all we do is to estimate the error, - // estimate the local smoothness of the - // solution as described in the introduction, - // then write graphical output, and finally - // refine the mesh in both $h$ and $p$ - // according to the indicators computed - // before. We do all this in the same - // function because we want the estimated - // error and smoothness indicators not only - // for refinement, but also include them in - // the graphical output. + // After solving the linear system, we will want to postprocess the + // solution. Here, all we do is to estimate the error, estimate the local + // smoothness of the solution as described in the introduction, then write + // graphical output, and finally refine the mesh in both $h$ and $p$ + // according to the indicators computed before. We do all this in the same + // function because we want the estimated error and smoothness indicators + // not only for refinement, but also include them in the graphical output. template void LaplaceProblem::postprocess (const unsigned int cycle) { - // Let us start with computing estimated - // error and smoothness indicators, which - // each are one number for each active cell - // of our triangulation. For the error - // indicator, we use the - // KellyErrorEstimator class as - // always. Estimating the smoothness is - // done in the respective function of this - // class; that function is discussed - // further down below: + // Let us start with computing estimated error and smoothness indicators, + // which each are one number for each active cell of our + // triangulation. For the error indicator, we use the KellyErrorEstimator + // class as always. Estimating the smoothness is done in the respective + // function of this class; that function is discussed further down below: Vector estimated_error_per_cell (triangulation.n_active_cells()); KellyErrorEstimator::estimate (dof_handler, face_quadrature_collection, @@ -457,28 +375,20 @@ namespace Step27 Vector smoothness_indicators (triangulation.n_active_cells()); estimate_smoothness (smoothness_indicators); - // Next we want to generate graphical - // output. In addition to the two estimated - // quantities derived above, we would also - // like to output the polynomial degree of - // the finite elements used on each of the - // elements on the mesh. + // Next we want to generate graphical output. In addition to the two + // estimated quantities derived above, we would also like to output the + // polynomial degree of the finite elements used on each of the elements + // on the mesh. // - // The way to do that requires that we loop - // over all cells and poll the active - // finite element index of them using - // cell-@>active_fe_index(). We - // then use the result of this operation - // and query the finite element collection - // for the finite element with that index, - // and finally determine the polynomial - // degree of that element. 
The result we - // put into a vector with one element per - // cell. The DataOut class requires this to - // be a vector of float or - // double, even though our - // values are all integers, so that it what - // we use: + // The way to do that requires that we loop over all cells and poll the + // active finite element index of them using + // cell-@>active_fe_index(). We then use the result of this + // operation and query the finite element collection for the finite + // element with that index, and finally determine the polynomial degree of + // that element. The result we put into a vector with one element per + // cell. The DataOut class requires this to be a vector of + // float or double, even though our values are + // all integers, so that is what we use: { Vector fe_degrees (triangulation.n_active_cells()); { @@ -490,20 +400,13 @@ namespace Step27 = fe_collection[cell->active_fe_index()].degree; } - // With now all data vectors available -- - // solution, estimated errors and - // smoothness indicators, and finite - // element degrees --, we create a - // DataOut object for graphical output - // and attach all data. Note that the - // DataOut class has a second template - // argument (which defaults to - // DoFHandler@, which is why we - // have never seen it in previous - // tutorial programs) that indicates the - // type of DoF handler to be used. Here, - // we have to use the hp::DoFHandler - // class: + // Now with all data vectors available -- solution, estimated errors and + // smoothness indicators, and finite element degrees --, we create a + // DataOut object for graphical output and attach all data. Note that + // the DataOut class has a second template argument (which defaults to + // DoFHandler@, which is why we have never seen it in previous + // tutorial programs) that indicates the type of DoF handler to be + // used. Here, we have to use the hp::DoFHandler class: DataOut > data_out; data_out.attach_dof_handler (dof_handler); @@ -513,11 +416,8 @@ namespace Step27 data_out.add_data_vector (fe_degrees, "fe_degree"); data_out.build_patches (); - // The final step in generating - // output is to determine a file - // name, open the file, and write - // the data into it (here, we use - // VTK format): + // The final step in generating output is to determine a file name, open + // the file, and write the data into it (here, we use VTK format): const std::string filename = "solution-" + Utilities::int_to_string (cycle, 2) + ".vtk"; @@ -525,44 +425,29 @@ namespace Step27 data_out.write_vtk (output); } - // After this, we would like to actually - // refine the mesh, in both $h$ and - // $p$. The way we are going to do this is - // as follows: first, we use the estimated - // error to flag those cells for refinement - // that have the largest error. This is - // what we have always done: + // After this, we would like to actually refine the mesh, in both $h$ and + // $p$. The way we are going to do this is as follows: first, we use the + // estimated error to flag those cells for refinement that have the + // largest error. This is what we have always done: { GridRefinement::refine_and_coarsen_fixed_number (triangulation, estimated_error_per_cell, 0.3, 0.03); - // Next we would like to figure out which - // of the cells that have been flagged - // for refinement should actually have - // $p$ increased instead of $h$ - // decreased.
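The loop that fills the fe_degrees vector mentioned above is abbreviated by the hunk context; as a companion, here is a minimal sketch of what such a loop typically looks like. The names follow the surrounding program, but the snippet is an illustration rather than the patch's exact code:

  // Hedged sketch: record the polynomial degree of the element that is
  // active on each cell, as a float so that DataOut can handle it.
  Vector<float> fe_degrees (triangulation.n_active_cells());
  {
    typename hp::DoFHandler<dim>::active_cell_iterator
      cell = dof_handler.begin_active(),
      endc = dof_handler.end();
    for (unsigned int index=0; cell!=endc; ++cell, ++index)
      fe_degrees(index) = fe_collection[cell->active_fe_index()].degree;
  }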
The strategy we choose here - // is that we look at the smoothness - // indicators of those cells that are - // flagged for refinement, and increase - // $p$ for those with a smoothness larger - // than a certain threshold. For this, we - // first have to determine the maximal - // and minimal values of the smoothness - // indicators of all flagged cells, which - // we do using a loop over all cells and - // comparing current minimal and maximal - // values. (We start with the minimal and - // maximal values of all cells, a - // range within which the minimal and - // maximal values on cells flagged for - // refinement must surely lie.) Absent - // any better strategies, we will then - // set the threshold above which will - // increase $p$ instead of reducing $h$ - // as the mean value between minimal and - // maximal smoothness indicators on cells - // flagged for refinement: + // Next we would like to figure out which of the cells that have been + // flagged for refinement should actually have $p$ increased instead of + // $h$ decreased. The strategy we choose here is that we look at the + // smoothness indicators of those cells that are flagged for refinement, + // and increase $p$ for those with a smoothness larger than a certain + // threshold. For this, we first have to determine the maximal and + // minimal values of the smoothness indicators of all flagged cells, + // which we do using a loop over all cells and comparing current minimal + // and maximal values. (We start with the minimal and maximal values of + // all cells, a range within which the minimal and maximal values + // on cells flagged for refinement must surely lie.) Absent any better + // strategies, we will then set the threshold above which we will increase + // $p$ instead of reducing $h$ as the mean value between minimal and + // maximal smoothness indicators on cells flagged for refinement: float max_smoothness = *std::min_element (smoothness_indicators.begin(), smoothness_indicators.end()), min_smoothness = *std::max_element (smoothness_indicators.begin(), @@ -582,20 +467,14 @@ namespace Step27 } const float threshold_smoothness = (max_smoothness + min_smoothness) / 2; - // With this, we can go back, loop over - // all cells again, and for those cells - // for which (i) the refinement flag is - // set, (ii) the smoothness indicator is - // larger than the threshold, and (iii) - // we still have a finite element with a - // polynomial degree higher than the - // current one in the finite element - // collection, we then increase the - // polynomial degree and in return remove - // the flag indicating that the cell - // should undergo bisection. For all - // other cells, the refinement flags - // remain untouched: + // With this, we can go back, loop over all cells again, and for those + // cells for which (i) the refinement flag is set, (ii) the smoothness + // indicator is larger than the threshold, and (iii) we still have a + // finite element with a polynomial degree higher than the current one + // in the finite element collection, we then increase the polynomial + // degree and in return remove the flag indicating that the cell should + // undergo bisection. For all other cells, the refinement flags remain + // untouched: { typename hp::DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), @@ -612,11 +491,9 @@ namespace Step27 } } - // At the end of this procedure, we then - // refine the mesh.
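The h-versus-p decision rule just described can be tried out in isolation. The following stand-alone C++ sketch uses made-up smoothness values and refinement flags, and mirrors the trick of initializing the flagged-cell extremes with the opposite extremes over all cells:

  #include <algorithm>
  #include <cstdio>

  int main ()
  {
    // Made-up per-cell smoothness indicators and refinement flags:
    const float smoothness[4] = {0.2f, 1.5f, 0.9f, 2.4f};
    const bool  flagged[4]    = {true, false, true, true};

    // Start from the extremes over *all* cells, a range that surely
    // contains the extremes over the flagged cells, then narrow it down:
    float max_s = *std::min_element (smoothness, smoothness+4);
    float min_s = *std::max_element (smoothness, smoothness+4);
    for (unsigned int i=0; i<4; ++i)
      if (flagged[i])
        {
          max_s = std::max (max_s, smoothness[i]);
          min_s = std::min (min_s, smoothness[i]);
        }
    const float threshold = (max_s + min_s) / 2;

    // Flagged cells above the threshold would get p-refinement and lose
    // their bisection flag; all other cells keep their flags.
    for (unsigned int i=0; i<4; ++i)
      std::printf ("cell %u: %s\n", i,
                   (flagged[i] && smoothness[i] > threshold)
                   ? "increase p" : (flagged[i] ? "refine in h" : "keep"));
  }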
During this process, - // children of cells undergoing bisection - // inherit their mother cell's finite - // element index: + // At the end of this procedure, we then refine the mesh. During this + // process, children of cells undergoing bisection inherit their mother + // cell's finite element index: triangulation.execute_coarsening_and_refinement (); } } @@ -624,17 +501,12 @@ namespace Step27 // @sect4{LaplaceProblem::create_coarse_grid} - // The following function is used when - // creating the initial grid. It is a - // specialization for the 2d case, i.e. a - // corresponding function needs to be - // implemented if the program is run in - // anything other then 2d. The function is - // actually stolen from step-14 and generates - // the same mesh used already there, i.e. the - // square domain with the square hole in the - // middle. The meaning of the different parts - // of this function are explained in the + // The following function is used when creating the initial grid. It is a + // specialization for the 2d case, i.e. a corresponding function needs to be + // implemented if the program is run in anything other than 2d. The function + // is actually stolen from step-14 and generates the same mesh used already + // there, i.e. the square domain with the square hole in the middle. The + // meaning of the different parts of this function is explained in the + // documentation of step-14: template <> void LaplaceProblem<2>::create_coarse_grid () @@ -713,20 +585,15 @@ namespace Step27 // @sect4{LaplaceProblem::run} - // This function implements the logic of the - // program, as did the respective function in - // most of the previous programs already, see - // for example step-6. + // This function implements the logic of the program, as did the respective + // function in most of the previous programs already, see for example + // step-6. // - // Basically, it contains the adaptive loop: - // in the first iteration create a coarse - // grid, and then set up the linear system, - // assemble it, solve, and postprocess the - // solution including mesh refinement. Then - // start over again. In the meantime, also - // output some information for those staring - // at the screen trying to figure out what - // the program does: + // Basically, it contains the adaptive loop: in the first iteration create a + // coarse grid, and then set up the linear system, assemble it, solve, and + // postprocess the solution including mesh refinement. Then start over + // again. In the meantime, also output some information for those staring at + // the screen trying to figure out what the program does: template void LaplaceProblem::run () { @@ -758,52 +625,32 @@ namespace Step27 // @sect4{LaplaceProblem::estimate_smoothness} - // This last function of significance - // implements the algorithm to estimate the - // smoothness exponent using the algorithms - // explained in detail in the - // introduction. We will therefore only - // comment on those points that are of + // This last function of significance implements the algorithm to estimate + // the smoothness exponent using the algorithms explained in detail in the + // introduction. We will therefore only comment on those points that are of // implementational importance. template void LaplaceProblem:: estimate_smoothness (Vector &smoothness_indicators) const { - // The first thing we need to do is - // to define the Fourier vectors - // ${\bf k}$ for which we want to - // compute Fourier coefficients of - // the solution on each cell.
In - // 2d, we pick those vectors ${\bf - // k}=(\pi i, \pi j)^T$ for which - // $\sqrt{i^2+j^2}\le N$, with - // $i,j$ integers and $N$ being the - // maximal polynomial degree we use - // for the finite elements in this - // program. The 3d case is handled - // analogously. 1d and dimensions - // higher than 3 are not - // implemented, and we guard our - // implementation by making sure - // that we receive an exception in - // case someone tries to compile - // the program for any of these - // dimensions. + // The first thing we need to do is to define the Fourier vectors ${\bf + // k}$ for which we want to compute Fourier coefficients of the solution + // on each cell. In 2d, we pick those vectors ${\bf k}=(\pi i, \pi j)^T$ + // for which $\sqrt{i^2+j^2}\le N$, with $i,j$ integers and $N$ being the + // maximal polynomial degree we use for the finite elements in this + // program. The 3d case is handled analogously. 1d and dimensions higher + // than 3 are not implemented, and we guard our implementation by making + // sure that we receive an exception in case someone tries to compile the + // program for any of these dimensions. // - // We exclude ${\bf k}=0$ to avoid problems - // computing $|{\bf k}|^{-mu}$ and $\ln - // |{\bf k}|$. The other vectors are stored - // in the field k_vectors. In - // addition, we store the square of the - // magnitude of each of these vectors (up - // to a factor $\pi^2$) in the - // k_vectors_magnitude array - // -- we will need that when we attempt to - // find out which of those Fourier - // coefficients corresponding to Fourier - // vectors of the same magnitude is the - // largest: + // We exclude ${\bf k}=0$ to avoid problems computing $|{\bf k}|^{-mu}$ + // and $\ln |{\bf k}|$. The other vectors are stored in the field + // k_vectors. In addition, we store the square of the + // magnitude of each of these vectors (up to a factor $\pi^2$) in the + // k_vectors_magnitude array -- we will need that when we + // attempt to find out which of those Fourier coefficients corresponding + // to Fourier vectors of the same magnitude is the largest: const unsigned int N = max_degree; std::vector > k_vectors; @@ -848,63 +695,44 @@ namespace Step27 Assert (false, ExcNotImplemented()); } - // After we have set up the Fourier - // vectors, we also store their total - // number for simplicity, and compute the - // logarithm of the magnitude of each of - // these vectors since we will need it many - // times over further down below: + // After we have set up the Fourier vectors, we also store their total + // number for simplicity, and compute the logarithm of the magnitude of + // each of these vectors since we will need it many times over further + // down below: const unsigned n_fourier_modes = k_vectors.size(); std::vector ln_k (n_fourier_modes); for (unsigned int i=0; i > > fourier_transform_matrices (fe_collection.size()); - // In order to compute them, we of - // course can't perform the Fourier - // transform analytically, but have - // to approximate it using - // quadrature. To this end, we use - // a quadrature formula that is - // obtained by iterating a 2-point - // Gauss formula as many times as - // the maximal exponent we use for - // the term $e^{i{\bf k}\cdot{\bf - // x}}$: + // In order to compute them, we of course can't perform the Fourier + // transform analytically, but have to approximate it using quadrature. 
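The bookkeeping for the Fourier vectors just described, together with the decay fit discussed a bit further below, can be illustrated by a stand-alone C++ sketch. It uses plain std:: containers instead of deal.II types, and for the purpose of the illustration the coefficients are simply taken to decay like $|{\bf k}|^{-2}$, so the fit should recover an exponent close to 2:

  #include <cmath>
  #include <cstdio>
  #include <map>
  #include <utility>
  #include <vector>

  int main ()
  {
    const unsigned int N  = 5;
    const double       pi = 3.14159265358979323846;

    // All k = (pi*i, pi*j) with sqrt(i*i+j*j) <= N, excluding k = 0:
    std::vector<std::pair<double,double> > k_vectors;
    std::vector<double>                    k_magnitude;
    for (unsigned int i=0; i<=N; ++i)
      for (unsigned int j=0; j<=N; ++j)
        if (!((i==0) && (j==0)) && (i*i + j*j <= N*N))
          {
            k_vectors.push_back (std::make_pair (pi*i, pi*j));
            k_magnitude.push_back (std::sqrt (1.*i*i + 1.*j*j));
          }

    // Keep only the largest coefficient for each magnitude |k|:
    std::map<double,double> k_to_max_U;
    for (unsigned int f=0; f<k_vectors.size(); ++f)
      {
        const double U = std::pow (k_magnitude[f], -2.);   // fabricated |U_k|
        if ((k_to_max_U.find (k_magnitude[f]) == k_to_max_U.end()) ||
            (k_to_max_U[k_magnitude[f]] < U))
          k_to_max_U[k_magnitude[f]] = U;
      }

    // Least-squares fit of ln|U_k| = ln(C) - mu * ln|k|:
    double sx=0, sy=0, sxx=0, sxy=0;
    for (std::map<double,double>::const_iterator it = k_to_max_U.begin();
         it != k_to_max_U.end(); ++it)
      {
        const double x = std::log (it->first), y = std::log (it->second);
        sx += x;  sy += y;  sxx += x*x;  sxy += x*y;
      }
    const double n  = k_to_max_U.size();
    const double mu = -(n*sxy - sx*sy) / (n*sxx - sx*sx);
    std::printf ("%u Fourier vectors, estimated mu = %g\n",
                 (unsigned int)k_vectors.size(), mu);
  }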
To + // this end, we use a quadrature formula that is obtained by iterating a + // 2-point Gauss formula as many times as the maximal exponent we use for + // the term $e^{i{\bf k}\cdot{\bf x}}$: QGauss<1> base_quadrature (2); QIterated quadrature (base_quadrature, N); - // With this, we then loop over all finite - // elements in use, reinitialize the - // respective matrix ${\cal F}$ to the - // right size, and integrate each entry of - // the matrix numerically as ${\cal - // F}_{{\bf k},j}=\sum_q e^{i{\bf k}\cdot - // {\bf x}}\varphi_j({\bf x}_q) - // w_q$, where $x_q$ - // are the quadrature points and $w_q$ are - // the quadrature weights. Note that the - // imaginary unit $i=\sqrt{-1}$ is obtained - // from the standard C++ classes using - // std::complex@(0,1). - - // Because we work on the unit cell, we can - // do all this work without a mapping from - // reference to real cell and consequently - // do not need the FEValues class. + // With this, we then loop over all finite elements in use, reinitialize + // the respective matrix ${\cal F}$ to the right size, and integrate each + // entry of the matrix numerically as ${\cal F}_{{\bf k},j}=\sum_q + // e^{i{\bf k}\cdot {\bf x}}\varphi_j({\bf x}_q) w_q$, where $x_q$ are the + // quadrature points and $w_q$ are the quadrature weights. Note that the + // imaginary unit $i=\sqrt{-1}$ is obtained from the standard C++ classes + // using std::complex@(0,1). + + // Because we work on the unit cell, we can do all this work without a + // mapping from reference to real cell and consequently do not need the + // FEValues class. for (unsigned int fe=0; fe > fourier_coefficients (n_fourier_modes); Vector local_dof_values; @@ -943,20 +768,14 @@ namespace Step27 endc = dof_handler.end(); for (unsigned int index=0; cell!=endc; ++cell, ++index) { - // Inside the loop, we first need to - // get the values of the local degrees - // of freedom (which we put into the - // local_dof_values array - // after setting it to the right size) - // and then need to compute the Fourier - // transform by multiplying this vector - // with the matrix ${\cal F}$ - // corresponding to this finite - // element. We need to write out the - // multiplication by hand because the - // objects holding the data do not have - // vmult-like functions - // declared: + // Inside the loop, we first need to get the values of the local + // degrees of freedom (which we put into the + // local_dof_values array after setting it to the right + // size) and then need to compute the Fourier transform by multiplying + // this vector with the matrix ${\cal F}$ corresponding to this finite + // element. We need to write out the multiplication by hand because + // the objects holding the data do not have vmult-like + // functions declared: local_dof_values.reinit (cell->get_fe().dofs_per_cell); cell->get_dof_values (solution, local_dof_values); @@ -971,20 +790,14 @@ namespace Step27 local_dof_values(i); } - // The next thing, as explained in the - // introduction, is that we wanted to - // only fit our exponential decay of - // Fourier coefficients to the largest - // coefficients for each possible value - // of $|{\bf k}|$. To this end, we - // create a map that for each magnitude - // $|{\bf k}|$ stores the largest $|\hat - // U_{{\bf k}}|$ found so far, i.e. 
we - // overwrite the existing value (or add - // it to the map) if no value for the - // current $|{\bf k}|$ exists yet, or if - // the current value is larger than the - // previously stored one: + // The next thing, as explained in the introduction, is that we wanted + // to only fit our exponential decay of Fourier coefficients to the + // largest coefficients for each possible value of $|{\bf k}|$. To + // this end, we create a map that for each magnitude $|{\bf k}|$ + // stores the largest $|\hat U_{{\bf k}}|$ found so far, i.e. we + // overwrite the existing value (or add it to the map) if no value for + // the current $|{\bf k}|$ exists yet, or if the current value is + // larger than the previously stored one: std::map k_to_max_U_map; for (unsigned int f=0; ftry block and catch whatever -// exceptions are thrown, thereby producing -// meaningful output if anything should go -// wrong: +// The main function is again verbatim what we had before: wrap creating and +// running an object of the main class into a try block and catch +// whatever exceptions are thrown, thereby producing meaningful output if +// anything should go wrong: int main () { try diff --git a/deal.II/examples/step-28/step-28.cc b/deal.II/examples/step-28/step-28.cc index a3c477b375..87cd9a6d74 100644 --- a/deal.II/examples/step-28/step-28.cc +++ b/deal.II/examples/step-28/step-28.cc @@ -11,10 +11,8 @@ // @sect3{Include files} -// We start with a bunch of include -// files that have already been -// explained in previous tutorial -// programs: +// We start with a bunch of include files that have already been explained in +// previous tutorial programs: #include #include #include @@ -50,40 +48,28 @@ #include -// We use the next include file to -// access block vectors which provide -// us a convenient way to manage -// solution and right hand side -// vectors of all energy groups: +// We use the next include file to access block vectors which provide us a +// convenient way to manage solution and right hand side vectors of all energy +// groups: #include -// This include file is for -// transferring solutions from one -// mesh to another different mesh. We -// use it when we are initializing -// solutions after each mesh -// iteration: +// This include file is for transferring solutions from one mesh to another +// different mesh. 
We use it when we are initializing solutions after each +// mesh iteration: #include -// When integrating functions defined -// on one mesh against shape -// functions defined on a different -// mesh, we need a function @p -// get_finest_common_cells (as -// discussed in the introduction) -// which is defined in the following -// header file: +// When integrating functions defined on one mesh against shape functions +// defined on a different mesh, we need a function @p get_finest_common_cells +// (as discussed in the introduction) which is defined in the following header +// file: #include -// Here are two more C++ standard -// headers that we use to define list -// data types as well as to fine-tune -// the output we generate: +// Here are two more C++ standard headers that we use to define list data +// types as well as to fine-tune the output we generate: #include #include -// The last step is as in all -// previous programs: +// The last step is as in all previous programs: namespace Step28 { using namespace dealii; @@ -91,66 +77,36 @@ namespace Step28 // @sect3{Material data} - // First up, we need to define a - // class that provides material data - // (including diffusion coefficients, - // removal cross sections, scattering - // cross sections, fission cross - // sections and fission spectra) to - // the main class. + // First up, we need to define a class that provides material data + // (including diffusion coefficients, removal cross sections, scattering + // cross sections, fission cross sections and fission spectra) to the main + // class. // - // The parameter to the constructor - // determines for how many energy - // groups we set up the relevant - // tables. At present, this program - // only includes data for 2 energy - // groups, but a more sophisticated - // program may be able to initialize - // the data structures for more - // groups as well, depending on how - // many energy groups are selected in - // the parameter file. + // The parameter to the constructor determines for how many energy groups we + // set up the relevant tables. At present, this program only includes data + // for 2 energy groups, but a more sophisticated program may be able to + // initialize the data structures for more groups as well, depending on how + // many energy groups are selected in the parameter file. // - // For each of the different - // coefficient types, there is one - // function that returns the value of - // this coefficient for a particular - // energy group (or combination of - // energy groups, as for the - // distribution cross section - // $\chi_g\nu\Sigma_{f,g'}$ or - // scattering cross section - // $\Sigma_{s,g'\to g}$). In addition - // to the energy group or groups, - // these coefficients depend on the - // type of fuel or control rod, as - // explained in the introduction. The - // functions therefore take an - // additional parameter, @p - // material_id, that identifies the - // particular kind of rod. Within - // this program, we use - // n_materials=8 - // different kinds of rods. + // For each of the different coefficient types, there is one function that + // returns the value of this coefficient for a particular energy group (or + // combination of energy groups, as for the distribution cross section + // $\chi_g\nu\Sigma_{f,g'}$ or scattering cross section $\Sigma_{s,g'\to + // g}$). In addition to the energy group or groups, these coefficients + // depend on the type of fuel or control rod, as explained in the + // introduction. 
The functions therefore take an additional parameter, @p + // material_id, that identifies the particular kind of rod. Within this + // program, we use n_materials=8 different kinds of rods. // - // Except for the scattering cross - // section, each of the coefficients - // therefore can be represented as an - // entry in a two-dimensional array - // of floating point values indexed - // by the energy group number as well - // as the material ID. The Table - // class template is the ideal way to - // store such data. Finally, the - // scattering coefficient depends on - // both two energy group indices and - // therefore needs to be stored in a - // three-dimensional array, for which - // we again use the Table class, - // where this time the first template - // argument (denoting the - // dimensionality of the array) of - // course needs to be three: + // Except for the scattering cross section, each of the coefficients + // therefore can be represented as an entry in a two-dimensional array of + // floating point values indexed by the energy group number as well as the + // material ID. The Table class template is the ideal way to store such + // data. Finally, the scattering coefficient depends on two energy + // group indices and therefore needs to be stored in a three-dimensional + // array, for which we again use the Table class, where this time the first + // template argument (denoting the dimensionality of the array) of course + // needs to be three: class MaterialData { public: @@ -182,25 +138,16 @@ namespace Step28 Table<2,double> chi; }; - // The constructor of the class is - // used to initialize all the - // material data arrays. It takes the - // number of energy groups as an - // argument (an throws an error if - // that value is not equal to two, - // since at presently only data for - // two energy groups is implemented; - // however, using this, the function - // remains flexible and extendible - // into the future). In the member - // initialization part at the - // beginning, it also resizes the - // arrays to their correct sizes. + // The constructor of the class is used to initialize all the material data + // arrays. It takes the number of energy groups as an argument (and throws an + // error if that value is not equal to two, since at present only data for + // two energy groups is implemented; however, using this, the function + // remains flexible and extendible into the future). In the member + // initialization part at the beginning, it also resizes the arrays to their + // correct sizes. // - // At present, material data is - // stored for 8 different types of - // material. This, as well, may - // easily be extended in the future. + // At present, material data is stored for 8 different types of + // material. This, as well, may easily be extended in the future. MaterialData::MaterialData (const unsigned int n_groups) : n_groups (n_groups), @@ -282,14 +229,10 @@ namespace Step28 } - // Next are the functions that return - // the coefficient values for given - // materials and energy groups. All - // they do is to make sure that the - // given arguments are within the - // allowed ranges, and then look the - // respective value up in the - // corresponding tables: + // Next are the functions that return the coefficient values for given + // materials and energy groups.
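The kind of group-by-material lookup described above, including the range checks that the get_* functions described next perform, can be mimicked without deal.II's Table class. The following stand-alone sketch is illustrative only; the class name, the stored coefficient, and the use of assert instead of deal.II's Assert are all assumptions:

  #include <cassert>
  #include <vector>

  // Plain C++ stand-in for a two-dimensional coefficient table indexed by
  // energy group and material id.
  class SimpleMaterialData
  {
  public:
    SimpleMaterialData (const unsigned int n_groups,
                        const unsigned int n_materials)
      : n_groups (n_groups),
        n_materials (n_materials),
        diffusion (n_groups * n_materials, 0.0)
    {}

    void set_diffusion (const unsigned int group,
                        const unsigned int material,
                        const double value)
    {
      assert (group < n_groups && material < n_materials);
      diffusion[group * n_materials + material] = value;
    }

    double get_diffusion_coefficient (const unsigned int group,
                                      const unsigned int material) const
    {
      assert (group < n_groups && material < n_materials);
      return diffusion[group * n_materials + material];
    }

  private:
    const unsigned int  n_groups, n_materials;
    std::vector<double> diffusion;   // flattened 2d table [group][material]
  };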
All they do is to make sure that the given + // arguments are within the allowed ranges, and then look the respective + // value up in the corresponding tables: double MaterialData::get_diffusion_coefficient (const unsigned int group, const unsigned int material_id) const @@ -361,15 +304,10 @@ namespace Step28 } - // The function computing the fission - // distribution cross section is - // slightly different, since it - // computes its value as the product - // of two other coefficients. We - // don't need to check arguments - // here, since this already happens - // when we call the two other - // functions involved, even though it + // The function computing the fission distribution cross section is slightly + // different, since it computes its value as the product of two other + // coefficients. We don't need to check arguments here, since this already + // happens when we call the two other functions involved, even though it // would probably not hurt either: double MaterialData::get_fission_dist_XS (const unsigned int group_1, @@ -384,128 +322,64 @@ namespace Step28 // @sect3{The EnergyGroup class} - // The first interesting class is the - // one that contains everything that - // is specific to a single energy - // group. To group things that belong - // together into individual objects, - // we declare a structure that holds - // the Triangulation and DoFHandler - // objects for the mesh used for a - // single energy group, and a number - // of other objects and member - // functions that we will discuss in - // the following sections. + // The first interesting class is the one that contains everything that is + // specific to a single energy group. To group things that belong together + // into individual objects, we declare a structure that holds the + // Triangulation and DoFHandler objects for the mesh used for a single + // energy group, and a number of other objects and member functions that we + // will discuss in the following sections. // - // The main reason for this class is - // as follows: for both the forward - // problem (with a specified right - // hand side) as well as for the - // eigenvalue problem, one typically - // solves a sequence of problems for - // a single energy group each, rather - // than the fully coupled - // problem. This becomes - // understandable once one realizes - // that the system matrix for a - // single energy group is symmetric - // and positive definite (it is - // simply a diffusion operator), - // whereas the matrix for the fully - // coupled problem is generally - // nonsymmetric and not definite. It - // is also very large and quite full - // if more than a few energy groups - // are involved. + // The main reason for this class is as follows: for both the forward + // problem (with a specified right hand side) as well as for the eigenvalue + // problem, one typically solves a sequence of problems for a single energy + // group each, rather than the fully coupled problem. This becomes + // understandable once one realizes that the system matrix for a single + // energy group is symmetric and positive definite (it is simply a diffusion + // operator), whereas the matrix for the fully coupled problem is generally + // nonsymmetric and not definite. It is also very large and quite full if + // more than a few energy groups are involved. 
// - // Let us first look at the equation - // to solve in the case of an - // external right hand side (for the time - // independent case): - // @f{eqnarray*} - // -\nabla \cdot(D_g(x) \nabla \phi_g(x)) - // + - // \Sigma_{r,g}(x)\phi_g(x) - // = - // \chi_g\sum_{g'=1}^G\nu\Sigma_{f,g'}(x)\phi_{g'}(x) - // + - // \sum_{g'\ne g}\Sigma_{s,g'\to g}(x)\phi_{g'}(x) - // + - // s_{\mathrm{ext},g}(x) - // @f} + // Let us first look at the equation to solve in the case of an external + // right hand side (for the time independent case): @f{eqnarray*} -\nabla + // \cdot(D_g(x) \nabla \phi_g(x)) + \Sigma_{r,g}(x)\phi_g(x) = + // \chi_g\sum_{g'=1}^G\nu\Sigma_{f,g'}(x)\phi_{g'}(x) + \sum_{g'\ne + // g}\Sigma_{s,g'\to g}(x)\phi_{g'}(x) + s_{\mathrm{ext},g}(x) @f} // - // We would typically solve this - // equation by moving all the terms - // on the right hand side with $g'=g$ - // to the left hand side, and solving - // for $\phi_g$. Of course, we don't - // know $\phi_{g'}$ yet, since the - // equations for those variables - // include right hand side terms - // involving $\phi_g$. What one - // typically does in such situations - // is to iterate: compute - // @f{eqnarray*} - // -\nabla \cdot(D_g(x) \nabla \phi^{(n)}_g(x)) - // &+& - // \Sigma_{r,g}(x)\phi^{(n)}_g(x) - // \\ &=& - // \chi_g\sum_{g'=1}^{g-1}\nu\Sigma_{f,g'}(x)\phi^{(n)}_{g'}(x) - // + - // \chi_g\sum_{g'=g}^G\nu\Sigma_{f,g'}(x)\phi^{(n-1)}_{g'}(x) - // + - // \sum_{g'\ne g, g'<g}\Sigma_{s,g'\to g}(x)\phi^{(n)}_{g'}(x) - // + - // \sum_{g'\ne g, g'>g}\Sigma_{s,g'\to g}(x)\phi^{(n-1)}_{g'}(x) - // + - // s_{\mathrm{ext},g}(x) + // We would typically solve this equation by moving all the terms on the + // right hand side with $g'=g$ to the left hand side, and solving for + // $\phi_g$. Of course, we don't know $\phi_{g'}$ yet, since the equations + // for those variables include right hand side terms involving + // $\phi_g$. What one typically does in such situations is to iterate: + // compute @f{eqnarray*} -\nabla \cdot(D_g(x) \nabla \phi^{(n)}_g(x)) &+& + // \Sigma_{r,g}(x)\phi^{(n)}_g(x) \\ &=& + // \chi_g\sum_{g'=1}^{g-1}\nu\Sigma_{f,g'}(x)\phi^{(n)}_{g'}(x) + + // \chi_g\sum_{g'=g}^G\nu\Sigma_{f,g'}(x)\phi^{(n-1)}_{g'}(x) + \sum_{g'\ne + // g, g'<g}\Sigma_{s,g'\to g}(x)\phi^{(n)}_{g'}(x) + \sum_{g'\ne g, g'>g}\Sigma_{s,g'\to g}(x)\phi^{(n-1)}_{g'}(x) + s_{\mathrm{ext},g}(x) // @f} // - // In other words, we solve the - // equation one by one, using values - // for $\phi_{g'}$ from the previous - // iteration $n-1$ if $g'\ge g$ and - // already computed values for - // $\phi_{g'}$ from the present - // iteration if $g' class EnergyGroup { @@ -513,43 +387,22 @@ namespace Step28 // @sect5{Public member functions} // - // The class has a good number of - // public member functions, since - // its the way it operates is - // controlled from the outside, - // and therefore all functions - // that do something significant - // need to be called from another - // class. Let's start off with - // book-keeping: the class - // obviously needs to know which - // energy group it represents, - // which material data to use, - // and from what coarse grid to - // start. The constructor takes - // this information and - // initializes the relevant - // member variables with that - // (see below). + // The class has a good number of public member functions, since the + // way it operates is controlled from the outside, and therefore all + // functions that do something significant need to be called from another + // class.
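Going back to the group-by-group iteration written out in the formula above: in spirit it is a block Gauss-Seidel sweep, where each equation is solved with already-updated values from the groups that come before it and previous-iteration values from the groups that come after it. A scalar stand-in (plain C++, made-up coefficients, not step-28's actual equations) of that idea:

  #include <cmath>
  #include <cstdio>

  int main ()
  {
    // Two coupled "group" equations A11*x1 + A12*x2 = b1, A21*x1 + A22*x2 = b2:
    const double A11 = 4, A12 = -1,
                 A21 = -2, A22 = 5,
                 b1 = 1, b2 = 3;

    double x1 = 0, x2 = 0;                           // previous-iteration values
    for (unsigned int n=0; n<50; ++n)
      {
        const double new_x1 = (b1 - A12*x2)     / A11;   // uses old x2
        const double new_x2 = (b2 - A21*new_x1) / A22;   // uses new x1
        const double change = std::fabs(new_x1-x1) + std::fabs(new_x2-x2);
        x1 = new_x1;
        x2 = new_x2;
        if (change < 1e-12)
          break;
      }
    std::printf ("x1 = %g, x2 = %g\n", x1, x2);
  }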
Let's start off with book-keeping: the class obviously needs to + // know which energy group it represents, which material data to use, and + // from what coarse grid to start. The constructor takes this information + // and initializes the relevant member variables with that (see below). // - // Then we also need functions - // that set up the linear system, - // i.e. correctly size the matrix - // and its sparsity pattern, etc, - // given a finite element object - // to use. The - // setup_linear_system - // function does that. Finally, - // for this initial block, there - // are two functions that return - // the number of active cells and - // degrees of freedom used in - // this object -- using this, we - // can make the triangulation and - // DoF handler member variables - // private, and do not have to - // grant external use to it, - // enhancing encapsulation: + // Then we also need functions that set up the linear system, + // i.e. correctly size the matrix and its sparsity pattern, etc, given a + // finite element object to use. The setup_linear_system + // function does that. Finally, for this initial block, there are two + // functions that return the number of active cells and degrees of freedom + // used in this object -- using this, we can make the triangulation and + // DoF handler member variables private, and do not have to grant external + // use to it, enhancing encapsulation: EnergyGroup (const unsigned int group, const MaterialData &material_data, const Triangulation &coarse_grid, @@ -560,64 +413,32 @@ namespace Step28 unsigned int n_active_cells () const; unsigned int n_dofs () const; - // Then there are functions that - // assemble the linear system for - // each iteration and the present - // energy group. Note that the - // matrix is independent of the - // iteration number, so only has - // to be computed once for each - // refinement cycle. The - // situation is a bit more - // involved for the right hand - // side that has to be updated in - // each inverse power iteration, - // and that is further - // complicated by the fact that - // computing it may involve - // several different meshes as - // explained in the - // introduction. To make things - // more flexible with regard to - // solving the forward or the - // eigenvalue problem, we split - // the computation of the right - // hand side into a function that - // assembles the extraneous - // source and in-group - // contributions (which we will - // call with a zero function as - // source terms for the - // eigenvalue problem) and one - // that computes contributions to - // the right hand side from - // another energy group: + // Then there are functions that assemble the linear system for each + // iteration and the present energy group. Note that the matrix is + // independent of the iteration number, so only has to be computed once + // for each refinement cycle. The situation is a bit more involved for the + // right hand side that has to be updated in each inverse power iteration, + // and that is further complicated by the fact that computing it may + // involve several different meshes as explained in the introduction. 
To + // make things more flexible with regard to solving the forward or the + // eigenvalue problem, we split the computation of the right hand side + // into a function that assembles the extraneous source and in-group + // contributions (which we will call with a zero function as source terms + // for the eigenvalue problem) and one that computes contributions to the + // right hand side from another energy group: void assemble_system_matrix (); void assemble_ingroup_rhs (const Function &extraneous_source); void assemble_cross_group_rhs (const EnergyGroup &g_prime); - // Next we need a set of - // functions that actually - // compute the solution of a - // linear system, and do - // something with it (such as - // computing the fission source - // contribution mentioned in the - // introduction, writing - // graphical information to an - // output file, computing error - // indicators, or actually - // refining the grid based on - // these criteria and thresholds - // for refinement and - // coarsening). All these - // functions will later be called - // from the driver class - // NeutronDiffusionProblem, - // or any other class you may - // want to implement to solve a - // problem involving the neutron - // flux equations: + // Next we need a set of functions that actually compute the solution of a + // linear system, and do something with it (such as computing the fission + // source contribution mentioned in the introduction, writing graphical + // information to an output file, computing error indicators, or actually + // refining the grid based on these criteria and thresholds for refinement + // and coarsening). All these functions will later be called from the + // driver class NeutronDiffusionProblem, or any other class + // you may want to implement to solve a problem involving the neutron flux + // equations: void solve (); double get_fission_source () const; @@ -632,19 +453,12 @@ namespace Step28 // @sect5{Public data members} // - // As is good practice in object - // oriented programming, we hide - // most data members by making - // them private. However, we have - // to grant the class that drives - // the process access to the - // solution vector as well as the - // solution of the previous - // iteration, since in the power - // iteration, the solution vector - // is scaled in every iteration - // by the present guess of the - // eigenvalue we are looking for: + // As is good practice in object oriented programming, we hide most data + // members by making them private. However, we have to grant the class + // that drives the process access to the solution vector as well as the + // solution of the previous iteration, since in the power iteration, the + // solution vector is scaled in every iteration by the present guess of + // the eigenvalue we are looking for: public: Vector solution; @@ -653,30 +467,17 @@ namespace Step28 // @sect5{Private data members} // - // The rest of the data members - // are private. Compared to all - // the previous tutorial - // programs, the only new data - // members are an integer storing - // which energy group this object - // represents, and a reference to - // the material data object that - // this object's constructor gets - // passed from the driver - // class. Likewise, the - // constructor gets a reference - // to the finite element object - // we are to use. + // The rest of the data members are private. 
Compared to all the previous + // tutorial programs, the only new data members are an integer storing + // which energy group this object represents, and a reference to the + // material data object that this object's constructor gets passed from + // the driver class. Likewise, the constructor gets a reference to the + // finite element object we are to use. // - // Finally, we have to apply - // boundary values to the linear - // system in each iteration, - // i.e. quite frequently. Rather - // than interpolating them every - // time, we interpolate them once - // on each new mesh and then - // store them along with all the - // other data of this class: + // Finally, we have to apply boundary values to the linear system in each + // iteration, i.e. quite frequently. Rather than interpolating them every + // time, we interpolate them once on each new mesh and then store them + // along with all the other data of this class: private: const unsigned int group; @@ -697,28 +498,15 @@ namespace Step28 // @sect5{Private member functionss} // - // There is one private member - // function in this class. It - // recursively walks over cells - // of two meshes to compute the - // cross-group right hand side - // terms. The algorithm for this - // is explained in the - // introduction to this - // program. The arguments to this - // function are a reference to an - // object representing the energy - // group against which we want to - // integrate a right hand side - // term, an iterator to a cell of - // the mesh used for the present - // energy group, an iterator to a - // corresponding cell on the - // other mesh, and the matrix - // that interpolates the degrees - // of freedom from the coarser of - // the two cells to the finer - // one: + // There is one private member function in this class. It recursively + // walks over cells of two meshes to compute the cross-group right hand + // side terms. The algorithm for this is explained in the introduction to + // this program. The arguments to this function are a reference to an + // object representing the energy group against which we want to integrate + // a right hand side term, an iterator to a cell of the mesh used for the + // present energy group, an iterator to a corresponding cell on the other + // mesh, and the matrix that interpolates the degrees of freedom from the + // coarser of the two cells to the finer one: private: void @@ -731,18 +519,11 @@ namespace Step28 // @sect4{Implementation of the EnergyGroup class} - // The first few functions of this - // class are mostly - // self-explanatory. The constructor - // only sets a few data members and - // creates a copy of the given - // triangulation as the base for the - // triangulation used for this energy - // group. The next two functions - // simply return data from private - // data members, thereby enabling us - // to make these data members - // private. + // The first few functions of this class are mostly self-explanatory. The + // constructor only sets a few data members and creates a copy of the given + // triangulation as the base for the triangulation used for this energy + // group. The next two functions simply return data from private data + // members, thereby enabling us to make these data members private. 
template EnergyGroup::EnergyGroup (const unsigned int group, const MaterialData &material_data, @@ -780,25 +561,15 @@ namespace Step28 // @sect5{EnergyGroup::setup_linear_system} // - // The first "real" function is the - // one that sets up the mesh, - // matrices, etc, on the new mesh or - // after mesh refinement. We use this - // function to initialize sparse - // system matrices, and the right - // hand side vector. If the solution - // vector has never been set before - // (as indicated by a zero size), we - // also initialize it and set it to a - // default value. We don't do that if - // it already has a non-zero size - // (i.e. this function is called - // after mesh refinement) since in - // that case we want to preserve the - // solution across mesh refinement - // (something we do in the - // EnergyGroup::refine_grid - // function). + // The first "real" function is the one that sets up the mesh, matrices, + // etc, on the new mesh or after mesh refinement. We use this function to + // initialize sparse system matrices, and the right hand side vector. If the + // solution vector has never been set before (as indicated by a zero size), + // we also initialize it and set it to a default value. We don't do that if + // it already has a non-zero size (i.e. this function is called after mesh + // refinement) since in that case we want to preserve the solution across + // mesh refinement (something we do in the + // EnergyGroup::refine_grid function). template void EnergyGroup::setup_linear_system () @@ -831,56 +602,30 @@ namespace Step28 } - // At the end of this function, we - // update the list of boundary - // nodes and their values, by first - // clearing this list and the - // re-interpolating boundary values - // (remember that this function is - // called after first setting up - // the mesh, and each time after - // mesh refinement). + // At the end of this function, we update the list of boundary nodes and + // their values, by first clearing this list and the re-interpolating + // boundary values (remember that this function is called after first + // setting up the mesh, and each time after mesh refinement). // - // To understand the code, it is - // necessary to realize that we - // create the mesh using the - // GridGenerator::subdivided_hyper_rectangle - // function (in - // NeutronDiffusionProblem::initialize_problem) - // where we set the last parameter - // to true. This means that - // boundaries of the domain are - // "colored", i.e. the four (or - // six, in 3d) sides of the domain - // are assigned different boundary - // indicators. As it turns out, the - // bottom boundary gets indicator - // zero, the top one boundary - // indicator one, and left and - // right boundaries get indicators + // To understand the code, it is necessary to realize that we create the + // mesh using the GridGenerator::subdivided_hyper_rectangle + // function (in NeutronDiffusionProblem::initialize_problem) + // where we set the last parameter to true. This means that + // boundaries of the domain are "colored", i.e. the four (or six, in 3d) + // sides of the domain are assigned different boundary indicators. As it + // turns out, the bottom boundary gets indicator zero, the top one + // boundary indicator one, and left and right boundaries get indicators // two and three, respectively. // - // In this program, we simulate - // only one, namely the top right, - // quarter of a reactor. 
That is, - // we want to interpolate boundary - // conditions only on the top and - // right boundaries, while do - // nothing on the bottom and left - // boundaries (i.e. impose natural, - // no-flux Neumann boundary - // conditions). This is most easily - // generalized to arbitrary - // dimension by saying that we want - // to interpolate on those - // boundaries with indicators 1, 3, - // ..., which we do in the - // following loop (note that calls - // to - // VectorTools::interpolate_boundary_values - // are additive, i.e. they do not - // first clear the boundary value - // map): + // In this program, we simulate only one, namely the top right, quarter of + // a reactor. That is, we want to interpolate boundary conditions only on + // the top and right boundaries, while doing nothing on the bottom and left + // boundaries (i.e. impose natural, no-flux Neumann boundary + // conditions). This is most easily generalized to arbitrary dimension by + // saying that we want to interpolate on those boundaries with indicators + // 1, 3, ..., which we do in the following loop (note that calls to + // VectorTools::interpolate_boundary_values are additive, + // i.e. they do not first clear the boundary value map): boundary_values.clear(); for (unsigned int i=0; iEnergyGroup::assemble_system_matrix} // - // Next we need functions assembling - // the system matrix and right hand - // sides. Assembling the matrix is - // straightforward given the - // equations outlined in the - // introduction as well as what we've - // seen in previous example - // programs. Note the use of - // cell->material_id() to get at - // the kind of material from which a - // cell is made up of. Note also how - // we set the order of the quadrature - // formula so that it is always - // appropriate for the finite element - // in use. + // Next we need functions assembling the system matrix and right hand + // sides. Assembling the matrix is straightforward given the equations + // outlined in the introduction as well as what we've seen in previous + // example programs. Note the use of cell->material_id() to get + // at the kind of material that a cell is made of. Note also how we + // set the order of the quadrature formula so that it is always appropriate + // for the finite element in use. // - // Finally, note that since we only - // assemble the system matrix here, - // we can't yet eliminate boundary - // values (we need the right hand - // side vector for this). We defer - // this to the EnergyGroup::solve - // function, at which point all the - // information is available. + // Finally, note that since we only assemble the system matrix here, we + // can't yet eliminate boundary values (we need the right hand side vector + // for this). We defer this to the EnergyGroup::solve function, + // at which point all the information is available. template void EnergyGroup::assemble_system_matrix () @@ -980,33 +713,18 @@ namespace Step28 // @sect5{EnergyGroup::assemble_ingroup_rhs} // - // As explained in the documentation - // of the EnergyGroup class, we - // split assembling the right hand - // side into two parts: the ingroup - // and the cross-group - // couplings. First, we need a - // function to assemble the right - // hand side of one specific group - // here, i.e. including an extraneous - // source (that we will set to zero - // for the eigenvalue problem) as - // well as the ingroup fission - // contributions.
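Returning to the boundary-value interpolation above: since the body of that loop is abbreviated here, the following is a sketch of what interpolating homogeneous Dirichlet values only on the boundary parts with indicators 1, 3, ... typically looks like in deal.II. It is an illustration assuming the names used by the surrounding program, not the patch's exact code:

  // Hedged sketch: repeated calls add to, rather than clear, the map.
  boundary_values.clear();
  for (unsigned int i=0; i<dim; ++i)
    VectorTools::interpolate_boundary_values (dof_handler,
                                              2*i+1,
                                              ZeroFunction<dim>(),
                                              boundary_values);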
(In-group - // scattering has already been - // accounted for with the definition - // of removal cross section.) The - // function's workings are pretty - // standard as far as assembling - // right hand sides go, and therefore - // does not require more comments - // except that we mention that the - // right hand side vector is set to - // zero at the beginning of the - // function -- something we are not - // going to do for the cross-group - // terms that simply add to the right - // hand side vector. + // As explained in the documentation of the EnergyGroup class, + // we split assembling the right hand side into two parts: the ingroup and + // the cross-group couplings. First, we need a function to assemble the + // right hand side of one specific group here, i.e. including an extraneous + // source (that we will set to zero for the eigenvalue problem) as well as + // the ingroup fission contributions. (In-group scattering has already been + // accounted for with the definition of removal cross section.) The + // function's workings are pretty standard as far as assembling right hand + // sides go, and therefore does not require more comments except that we + // mention that the right hand side vector is set to zero at the beginning + // of the function -- something we are not going to do for the cross-group + // terms that simply add to the right hand side vector. template void EnergyGroup::assemble_ingroup_rhs (const Function &extraneous_source) { @@ -1065,29 +783,18 @@ namespace Step28 // @sect5{EnergyGroup::assemble_cross_group_rhs} // - // The more interesting function for - // assembling the right hand side - // vector for the equation of a - // single energy group is the one - // that couples energy group $g$ and - // $g'$. As explained in the - // introduction, we first have to - // find the set of cells common to - // the meshes of the two energy - // groups. First we call - // get_finest_common_cells to - // obtain this list of pairs of - // common cells from both - // meshes. Both cells in a pair may - // not be active but at least one of - // them is. We then hand each of - // these cell pairs off to a function - // tha computes the right hand side - // terms recursively. + // The more interesting function for assembling the right hand side vector + // for the equation of a single energy group is the one that couples energy + // group $g$ and $g'$. As explained in the introduction, we first have to + // find the set of cells common to the meshes of the two energy + // groups. First we call get_finest_common_cells to obtain this + // list of pairs of common cells from both meshes. Both cells in a pair may + // not be active but at least one of them is. We then hand each of these + // cell pairs off to a function tha computes the right hand side terms + // recursively. // - // Note that ingroup coupling is - // handled already before, so we exit - // the function early if $g=g'$. + // Note that ingroup coupling is handled already before, so we exit the + // function early if $g=g'$. template void EnergyGroup::assemble_cross_group_rhs (const EnergyGroup &g_prime) { @@ -1121,49 +828,27 @@ namespace Step28 // @sect5{EnergyGroup::assemble_cross_group_rhs_recursive} // - // This is finally the function that - // handles assembling right hand side - // terms on potentially different - // meshes recursively, using the - // algorithm described in the - // introduction. 
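A sketch of how such a list of finest common cells is typically obtained and walked follows. The identifiers mirror the surrounding program, and the fragment is assumed to live inside the templated member function; it is illustrative and not taken verbatim from the patch:

  // Hedged sketch: pair up the finest cells common to both meshes and hand
  // each pair to the recursive right hand side assembly, starting with the
  // identity as the accumulated prolongation matrix.
  typedef
    std::list<std::pair<typename DoFHandler<dim>::cell_iterator,
                        typename DoFHandler<dim>::cell_iterator> >
    CellMap;

  const CellMap cell_list
    = GridTools::get_finest_common_cells (dof_handler,
                                          g_prime.dof_handler);

  for (typename CellMap::const_iterator cell_pair = cell_list.begin();
       cell_pair != cell_list.end(); ++cell_pair)
    {
      FullMatrix<double> unit_matrix (fe.dofs_per_cell);
      for (unsigned int i=0; i<unit_matrix.m(); ++i)
        unit_matrix(i,i) = 1;

      assemble_cross_group_rhs_recursive (g_prime,
                                          cell_pair->first,
                                          cell_pair->second,
                                          unit_matrix);
    }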
The function takes a - // reference to the object - // representing energy group $g'$, as - // well as iterators to corresponding - // cells in the meshes for energy - // groups $g$ and $g'$. At first, - // i.e. when this function is called - // from the one above, these two - // cells will be matching cells on - // two meshes; however, one of the - // two may be further refined, and we - // will call the function recursively - // with one of the two iterators - // replaced by one of the children of - // the original cell. + // This is finally the function that handles assembling right hand side + // terms on potentially different meshes recursively, using the algorithm + // described in the introduction. The function takes a reference to the + // object representing energy group $g'$, as well as iterators to + // corresponding cells in the meshes for energy groups $g$ and $g'$. At + // first, i.e. when this function is called from the one above, these two + // cells will be matching cells on two meshes; however, one of the two may + // be further refined, and we will call the function recursively with one of + // the two iterators replaced by one of the children of the original cell. // - // The last argument is the matrix - // product matrix $B_{c^{(k)}}^T - // \cdots B_{c'}^T B_c^T$ from the - // introduction that interpolates - // from the coarser of the two cells - // to the finer one. If the two cells - // match, then this is the identity - // matrix -- exactly what we pass to - // this function initially. + // The last argument is the matrix product matrix $B_{c^{(k)}}^T \cdots + // B_{c'}^T B_c^T$ from the introduction that interpolates from the coarser + // of the two cells to the finer one. If the two cells match, then this is + // the identity matrix -- exactly what we pass to this function initially. // - // The function has to consider two - // cases: that both of the two cells - // are not further refined, i.e. have - // no children, in which case we can - // finally assemble the right hand - // side contributions of this pair of - // cells; and that one of the two - // cells is further refined, in which - // case we have to keep recursing by - // looping over the children of the - // one cell that is not active. These - // two cases will be discussed below: + // The function has to consider two cases: that both of the two cells are + // not further refined, i.e. have no children, in which case we can finally + // assemble the right hand side contributions of this pair of cells; and + // that one of the two cells is further refined, in which case we have to + // keep recursing by looping over the children of the one cell that is not + // active. These two cases will be discussed below: template void EnergyGroup:: @@ -1172,25 +857,14 @@ namespace Step28 const typename DoFHandler::cell_iterator &cell_g_prime, const FullMatrix prolongation_matrix) { - // The first case is that both - // cells are no further refined. In - // that case, we can assemble the - // relevant terms (see the - // introduction). This involves - // assembling the mass matrix on - // the finer of the two cells (in - // fact there are two mass matrices - // with different coefficients, one - // for the fission distribution - // cross section - // $\chi_g\nu\Sigma_{f,g'}$ and one - // for the scattering cross section - // $\Sigma_{s,g'\to g}$). 
This is - // straight forward, but note how - // we determine which of the two - // cells is the finer one by - // looking at the refinement level - // of the two cells: + // The first case is that both cells are no further refined. In that case, + // we can assemble the relevant terms (see the introduction). This + // involves assembling the mass matrix on the finer of the two cells (in + // fact there are two mass matrices with different coefficients, one for + // the fission distribution cross section $\chi_g\nu\Sigma_{f,g'}$ and one + // for the scattering cross section $\Sigma_{s,g'\to g}$). This is + // straight forward, but note how we determine which of the two cells is + // the finer one by looking at the refinement level of the two cells: if (!cell_g->has_children() && !cell_g_prime->has_children()) { const QGauss quadrature_formula (fe.degree+1); @@ -1231,36 +905,16 @@ namespace Step28 fe_values.JxW(q_point)); } - // Now we have all the - // interpolation (prolongation) - // matrices as well as local - // mass matrices, so we only - // have to form the product - // @f[ - // F_i|_{K_{cc'\cdots - // c^{(k)}}} = [B_c B_{c'} - // \cdots B_{c^{(k)}} - // M_{K_{cc'\cdots - // c^{(k)}}}]^{ij} - // \phi_{g'}^j, - // @f] - // or - // @f[ - // F_i|_{K_{cc'\cdots - // c^{(k)}}} = [(B_c B_{c'} - // \cdots B_{c^{(k)}} - // M_{K_{cc'\cdots - // c^{(k)}}})^T]^{ij} - // \phi_{g'}^j, - // @f] - // depending on which of the two - // cells is the finer. We do this - // using either the matrix-vector - // product provided by the vmult - // function, or the product with the - // transpose matrix using Tvmult. - // After doing so, we transfer the - // result into the global right hand + // Now we have all the interpolation (prolongation) matrices as well + // as local mass matrices, so we only have to form the product @f[ + // F_i|_{K_{cc'\cdots c^{(k)}}} = [B_c B_{c'} \cdots B_{c^{(k)}} + // M_{K_{cc'\cdots c^{(k)}}}]^{ij} \phi_{g'}^j, @f] or @f[ + // F_i|_{K_{cc'\cdots c^{(k)}}} = [(B_c B_{c'} \cdots B_{c^{(k)}} + // M_{K_{cc'\cdots c^{(k)}}})^T]^{ij} \phi_{g'}^j, @f] depending on + // which of the two cells is the finer. We do this using either the + // matrix-vector product provided by the vmult function, + // or the product with the transpose matrix using Tvmult. + // After doing so, we transfer the result into the global right hand // side vector of energy group $g$. Vector g_prime_new_values (fe.dofs_per_cell); Vector g_prime_old_values (fe.dofs_per_cell); @@ -1294,22 +948,13 @@ namespace Step28 system_rhs(local_dof_indices[i]) += cell_rhs(i); } - // The alternative is that one of - // the two cells is further - // refined. In that case, we have - // to loop over all the children, - // multiply the existing - // interpolation (prolongation) - // product of matrices from the - // left with the interpolation from - // the present cell to its child - // (using the matrix-matrix - // multiplication function - // mmult), and then hand the - // result off to this very same - // function again, but with the - // cell that has children replaced - // by one of its children: + // The alternative is that one of the two cells is further refined. 
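The two cases of this recursion -- the leaf case just described and the descent into children described right after it -- can be caricatured in a few lines of plain C++ for a 1d element with two nodal degrees of freedom. The matrices and the refinement depth are made up; the point is only that, while descending from a coarse cell to one of its children, the child's interpolation matrix is multiplied onto the accumulated product from the left, and at a leaf that product would be used to assemble the coupling term:

  #include <cstdio>

  typedef double Matrix2[2][2];

  void multiply (const Matrix2 A, const Matrix2 B, Matrix2 out)   // out = A*B
  {
    for (int i=0; i<2; ++i)
      for (int j=0; j<2; ++j)
        out[i][j] = A[i][0]*B[0][j] + A[i][1]*B[1][j];
  }

  void recurse (const Matrix2 prolongation, const unsigned int level)
  {
    if (level == 0)                       // "both cells active": use the product
      {
        std::printf ("leaf product: [[%g %g][%g %g]]\n",
                     prolongation[0][0], prolongation[0][1],
                     prolongation[1][0], prolongation[1][1]);
        return;
      }

    // Interpolation from a 1d linear parent cell to its two children:
    const Matrix2 B_left  = {{1.0, 0.0}, {0.5, 0.5}};
    const Matrix2 B_right = {{0.5, 0.5}, {0.0, 1.0}};

    Matrix2 next;
    multiply (B_left, prolongation, next);    // multiply from the left
    recurse (next, level-1);
    multiply (B_right, prolongation, next);
    recurse (next, level-1);
  }

  int main ()
  {
    const Matrix2 identity = {{1.0, 0.0}, {0.0, 1.0}};
    recurse (identity, 2);                    // start with the identity matrix
  }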
In + // that case, we have to loop over all the children, multiply the existing + // interpolation (prolongation) product of matrices from the left with the + // interpolation from the present cell to its child (using the + // matrix-matrix multiplication function mmult), and then + // hand the result off to this very same function again, but with the cell + // that has children replaced by one of its children: else for (unsigned int child=0; child::max_children_per_cell; ++child) { @@ -1331,11 +976,8 @@ namespace Step28 // @sect5{EnergyGroup::get_fission_source} // - // In the (inverse) power iteration, - // we use the integrated fission - // source to update the - // $k$-eigenvalue. Given its - // definition, the following function + // In the (inverse) power iteration, we use the integrated fission source to + // update the $k$-eigenvalue. Given its definition, the following function // is essentially self-explanatory: template double EnergyGroup::get_fission_source () const @@ -1374,15 +1016,10 @@ namespace Step28 // @sect5{EnergyGroup::solve} // - // Next a function that solves the - // linear system assembled - // before. Things are pretty much - // standard, except that we delayed - // applying boundary values until we - // get here, since in all the - // previous functions we were still - // adding up contributions the right - // hand side vector. + // Next a function that solves the linear system assembled before. Things + // are pretty much standard, except that we delayed applying boundary values + // until we get here, since in all the previous functions we were still + // adding up contributions the right hand side vector. template void EnergyGroup::solve () @@ -1409,16 +1046,11 @@ namespace Step28 // @sect5{EnergyGroup::estimate_errors} // - // Mesh refinement is split into two - // functions. The first estimates the - // error for each cell, normalizes it - // by the magnitude of the solution, - // and returns it in the vector given - // as an argument. The calling - // function collects all error - // indicators from all energy groups, - // and computes thresholds for - // refining and coarsening cells. + // Mesh refinement is split into two functions. The first estimates the + // error for each cell, normalizes it by the magnitude of the solution, and + // returns it in the vector given as an argument. The calling function + // collects all error indicators from all energy groups, and computes + // thresholds for refining and coarsening cells. template void EnergyGroup::estimate_errors (Vector &error_indicators) const { @@ -1434,25 +1066,15 @@ namespace Step28 // @sect5{EnergyGroup::refine_grid} // - // The second part is to refine the - // grid given the error indicators - // compute in the previous function - // and error thresholds above which - // cells shall be refined or below - // which cells shall be - // coarsened. Note that we do not use - // any of the functions in - // GridRefinement here, - // but rather set refinement flags - // ourselves. + // The second part is to refine the grid given the error indicators compute + // in the previous function and error thresholds above which cells shall be + // refined or below which cells shall be coarsened. Note that we do not use + // any of the functions in GridRefinement here, but rather set + // refinement flags ourselves. // - // After setting these flags, we use - // the SolutionTransfer class to move - // the solution vector from the old - // to the new mesh. 
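For reference, the SolutionTransfer idiom that this comment refers to follows a fixed pattern documented in that class; the sketch below is illustrative only (it assumes a single Vector<double> named solution and the triangulation, dof_handler and fe members used throughout this program) and is not quoted from the patch:

    SolutionTransfer<dim> soltrans (dof_handler);

    // after the refinement/coarsening flags have been set:
    triangulation.prepare_coarsening_and_refinement ();
    soltrans.prepare_for_coarsening_and_refinement (solution);
    triangulation.execute_coarsening_and_refinement ();

    dof_handler.distribute_dofs (fe);
    Vector<double> interpolated_solution (dof_handler.n_dofs());
    soltrans.interpolate (solution, interpolated_solution);
    solution = interpolated_solution;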
The procedure - // used here is described in detail - // in the documentation of that - // class: + // After setting these flags, we use the SolutionTransfer class to move the + // solution vector from the old to the new mesh. The procedure used here is + // described in detail in the documentation of that class: template void EnergyGroup::refine_grid (const Vector &error_indicators, const double refine_threshold, @@ -1486,21 +1108,13 @@ namespace Step28 // @sect5{EnergyGroup::output_results} // - // The last function of this class - // outputs meshes and solutions after - // each mesh iteration. This has been - // shown many times before. The only - // thing worth pointing out is the - // use of the - // Utilities::int_to_string - // function to convert an integer - // into its string - // representation. The second - // argument of that function denotes - // how many digits we shall use -- if - // this value was larger than one, - // then the number would be padded by - // leading zeros. + // The last function of this class outputs meshes and solutions after each + // mesh iteration. This has been shown many times before. The only thing + // worth pointing out is the use of the + // Utilities::int_to_string function to convert an integer into + // its string representation. The second argument of that function denotes + // how many digits we shall use -- if this value was larger than one, then + // the number would be padded by leading zeros. template void EnergyGroup::output_results (const unsigned int cycle) const @@ -1539,56 +1153,29 @@ namespace Step28 // @sect3{The NeutronDiffusionProblem class template} - // This is the main class of the - // program, not because it implements - // all the functionality (in fact, - // most of it is implemented in the - // EnergyGroup class) - // but because it contains the - // driving algorithm that determines - // what to compute and when. It is - // mostly as shown in many of the - // other tutorial programs in that it - // has a public run - // function and private functions - // doing all the rest. In several - // places, we have to do something - // for all energy groups, in which - // case we will start threads for - // each group to let these things run - // in parallel if deal.II was - // configured for multithreading. - // For strategies of parallelization, - // take a look at the @ref threads module. + // This is the main class of the program, not because it implements all the + // functionality (in fact, most of it is implemented in the + // EnergyGroup class) but because it contains the driving + // algorithm that determines what to compute and when. It is mostly as shown + // in many of the other tutorial programs in that it has a public + // run function and private functions doing all the rest. In + // several places, we have to do something for all energy groups, in which + // case we will start threads for each group to let these things run in + // parallel if deal.II was configured for multithreading. For strategies of + // parallelization, take a look at the @ref threads module. // - // The biggest difference to previous - // example programs is that we also - // declare a nested class that has - // member variables for all the - // run-time parameters that can be - // passed to the program in an input - // file. 
Right now, these are the - // number of energy groups, the - // number of refinement cycles, the - // polynomial degree of the finite - // element to be used, and the - // tolerance used to determine when - // convergence of the inverse power - // iteration has occurred. In - // addition, we have a constructor of - // this class that sets all these - // values to their default values, a - // function - // declare_parameters - // that described to the - // ParameterHandler class already - // used in step-19 - // what parameters are accepted in - // the input file, and a function - // get_parameters that - // can extract the values of these - // parameters from a ParameterHandler - // object. + // The biggest difference to previous example programs is that we also + // declare a nested class that has member variables for all the run-time + // parameters that can be passed to the program in an input file. Right now, + // these are the number of energy groups, the number of refinement cycles, + // the polynomial degree of the finite element to be used, and the tolerance + // used to determine when convergence of the inverse power iteration has + // occurred. In addition, we have a constructor of this class that sets all + // these values to their default values, a function + // declare_parameters that described to the ParameterHandler + // class already used in step-19 what parameters are accepted in the input + // file, and a function get_parameters that can extract the + // values of these parameters from a ParameterHandler object. template class NeutronDiffusionProblem { @@ -1619,16 +1206,10 @@ namespace Step28 private: // @sect5{Private member functions} - // There are not that many member - // functions in this class since - // most of the functionality has - // been moved into the - // EnergyGroup class - // and is simply called from the - // run() member - // function of this class. The - // ones that remain have - // self-explanatory names: + // There are not that many member functions in this class since most of + // the functionality has been moved into the EnergyGroup + // class and is simply called from the run() member function + // of this class. The ones that remain have self-explanatory names: void initialize_problem(); void refine_grid (); @@ -1638,63 +1219,39 @@ namespace Step28 // @sect5{Private member variables} - // Next, we have a few member - // variables. In particular, - // these are (i) a reference to - // the parameter object (owned by - // the main function of this - // program, and passed to the - // constructor of this class), - // (ii) an object describing the - // material parameters for the - // number of energy groups - // requested in the input file, - // and (iii) the finite element - // to be used by all energy - // groups: + // Next, we have a few member variables. In particular, these are (i) a + // reference to the parameter object (owned by the main function of this + // program, and passed to the constructor of this class), (ii) an object + // describing the material parameters for the number of energy groups + // requested in the input file, and (iii) the finite element to be used by + // all energy groups: const Parameters ¶meters; const MaterialData material_data; FE_Q fe; - // Furthermore, we have (iv) the - // value of the computed - // eigenvalue at the present - // iteration. 
This is, in fact, - // the only part of the solution - // that is shared between all - // energy groups -- all other - // parts of the solution, such as - // neutron fluxes are particular - // to one or the other energy - // group, and are therefore - // stored in objects that - // describe a single energy + // Furthermore, we have (iv) the value of the computed eigenvalue at the + // present iteration. This is, in fact, the only part of the solution that + // is shared between all energy groups -- all other parts of the solution, + // such as neutron fluxes are particular to one or the other energy group, + // and are therefore stored in objects that describe a single energy // group: double k_eff; - // Finally, (v), we have an array - // of pointers to the energy - // group objects. The length of - // this array is, of course, - // equal to the number of energy - // groups specified in the - // parameter file. + // Finally, (v), we have an array of pointers to the energy group + // objects. The length of this array is, of course, equal to the number of + // energy groups specified in the parameter file. std::vector*> energy_groups; }; - // @sect4{Implementation of the NeutronDiffusionProblem::Parameters class} + // @sect4{Implementation of the + // NeutronDiffusionProblem::Parameters class} - // Before going on to the - // implementation of the outer class, - // we have to implement the functions - // of the parameters structure. This - // is pretty straightforward and, in - // fact, looks pretty much the same - // for all such parameters classes - // using the ParameterHandler - // capabilities. We will therefore - // not comment further on this: + // Before going on to the implementation of the outer class, we have to + // implement the functions of the parameters structure. This is pretty + // straightforward and, in fact, looks pretty much the same for all such + // parameters classes using the ParameterHandler capabilities. We will + // therefore not comment further on this: template NeutronDiffusionProblem::Parameters::Parameters () : @@ -1744,11 +1301,8 @@ namespace Step28 // @sect4{Implementation of the NeutronDiffusionProblem class} - // Now for the - // NeutronDiffusionProblem - // class. The constructor and - // destructor have nothing of much - // interest: + // Now for the NeutronDiffusionProblem class. The constructor + // and destructor have nothing of much interest: template NeutronDiffusionProblem:: NeutronDiffusionProblem (const Parameters ¶meters) @@ -1771,29 +1325,17 @@ namespace Step28 // @sect5{NeutronDiffusionProblem::initialize_problem} // - // The first function of interest is - // the one that sets up the geometry - // of the reactor core. This is - // described in more detail in the - // introduction. + // The first function of interest is the one that sets up the geometry of + // the reactor core. This is described in more detail in the introduction. // - // The first part of the function - // defines geometry data, and then - // creates a coarse mesh that has as - // many cells as there are fuel rods - // (or pin cells, for that matter) in - // that part of the reactor core that - // we simulate. 
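The mesh generation call discussed in the next lines, GridGenerator::subdivided_hyper_rectangle, has the following shape; the sketch is written for 2d with made-up subdivision counts and extents (the real values are computed from the geometry data defined further down in this function):

    Triangulation<2>          coarse_grid;
    std::vector<unsigned int> n_subdivisions (2, 10);   // placeholder: 10 cells per direction
    GridGenerator::subdivided_hyper_rectangle (coarse_grid,
                                               n_subdivisions,
                                               Point<2> (0, 0),   // lower left corner
                                               Point<2> (1, 1),   // placeholder upper right corner
                                               true);             // unique boundary indicator per side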
As mentioned when - // interpolating boundary values - // above, the last parameter to the - // GridGenerator::subdivided_hyper_rectangle - // function specifies that sides of - // the domain shall have unique - // boundary indicators that will - // later allow us to determine in a - // simple way which of the boundaries - // have Neumann and which have - // Dirichlet conditions attached to + // The first part of the function defines geometry data, and then creates a + // coarse mesh that has as many cells as there are fuel rods (or pin cells, + // for that matter) in that part of the reactor core that we simulate. As + // mentioned when interpolating boundary values above, the last parameter to + // the GridGenerator::subdivided_hyper_rectangle function + // specifies that sides of the domain shall have unique boundary indicators + // that will later allow us to determine in a simple way which of the + // boundaries have Neumann and which have Dirichlet conditions attached to // them. template void NeutronDiffusionProblem::initialize_problem() @@ -1833,42 +1375,23 @@ namespace Step28 true); - // The second part of the function - // deals with material numbers of - // pin cells of each type of - // assembly. Here, we define four - // different types of assembly, for - // which we describe the - // arrangement of fuel rods in the + // The second part of the function deals with material numbers of pin + // cells of each type of assembly. Here, we define four different types of + // assembly, for which we describe the arrangement of fuel rods in the // following tables. // - // The assemblies described here - // are taken from the benchmark - // mentioned in the introduction - // and are (in this order): - //
    - //
-      //   1. 'UX' Assembly: UO2 fuel assembly
-      //      with 24 guide tubes and a central
-      //      Moveable Fission Chamber
-      //   2. 'UA' Assembly: UO2 fuel assembly
-      //      with 24 AIC and a central
-      //      Moveable Fission Chamber
-      //   3. 'PX' Assembly: MOX fuel assembly
-      //      with 24 guide tubes and a central
-      //      Moveable Fission Chamber
-      //   4. 'R' Assembly: a reflector.
-      //
+      // The assemblies described here are taken from the benchmark mentioned in
+      // the introduction and are (in this order):
+      //   1. 'UX' Assembly: UO2 fuel assembly with 24 guide tubes and a central
+      //      Moveable Fission Chamber
+      //   2. 'UA' Assembly: UO2 fuel assembly with 24 AIC and a central
+      //      Moveable Fission Chamber
+      //   3. 'PX' Assembly: MOX fuel assembly with 24 guide tubes and a central
+      //      Moveable Fission Chamber
+      //   4. 'R' Assembly: a reflector.
// - // Note that the numbers listed - // here and taken from the - // benchmark description are, in - // good old Fortran fashion, - // one-based. We will later - // subtract one from each number - // when assigning materials to - // individual cells to convert - // things into the C-style - // zero-based indexing. + // Note that the numbers listed here and taken from the benchmark + // description are, in good old Fortran fashion, one-based. We will later + // subtract one from each number when assigning materials to individual + // cells to convert things into the C-style zero-based indexing. const unsigned int n_assemblies=4; const unsigned int assembly_materials[n_assemblies][rods_per_assembly_x][rods_per_assembly_y] @@ -1952,30 +1475,18 @@ namespace Step28 } }; - // After the description of the - // materials that make up an - // assembly, we have to specify the - // arrangement of assemblies within - // the core. We use a symmetric - // pattern that in fact only uses - // the 'UX' and 'PX' assemblies: + // After the description of the materials that make up an assembly, we + // have to specify the arrangement of assemblies within the core. We use a + // symmetric pattern that in fact only uses the 'UX' and 'PX' assemblies: const unsigned int core[assemblies_x][assemblies_y][assemblies_z] = {{{0}, {2}}, {{2}, {0}}}; - // We are now in a position to - // actually set material IDs for - // each cell. To this end, we loop - // over all cells, look at the - // location of the cell's center, - // and determine which assembly and - // fuel rod this would be in. (We - // add a few checks to see that the - // locations we compute are within - // the bounds of the arrays in - // which we have to look up - // materials.) At the end of the - // loop, we set material - // identifiers accordingly: + // We are now in a position to actually set material IDs for each cell. To + // this end, we loop over all cells, look at the location of the cell's + // center, and determine which assembly and fuel rod this would be in. (We + // add a few checks to see that the locations we compute are within the + // bounds of the arrays in which we have to look up materials.) At the end + // of the loop, we set material identifiers accordingly: for (typename Triangulation::active_cell_iterator cell = coarse_grid.begin_active(); cell!=coarse_grid.end(); @@ -2009,13 +1520,9 @@ namespace Step28 cell->set_material_id(assembly_materials[core[ax][ay][az]][cx][cy] - 1); } - // With the coarse mesh so - // initialized, we create the - // appropriate number of energy - // group objects and let them - // initialize their individual - // meshes with the coarse mesh - // generated above: + // With the coarse mesh so initialized, we create the appropriate number + // of energy group objects and let them initialize their individual meshes + // with the coarse mesh generated above: energy_groups.resize (parameters.n_groups); for (unsigned int group=0; group (group, material_data, @@ -2025,40 +1532,25 @@ namespace Step28 // @sect5{NeutronDiffusionProblem::get_total_fission_source} // - // In the eigenvalue computation, we - // need to calculate total fission - // neutron source after each power - // iteration. The total power then is - // used to renew k-effective. + // In the eigenvalue computation, we need to calculate total fission neutron + // source after each power iteration. The total power then is used to renew + // k-effective. 
// - // Since the total fission source is a sum - // over all the energy groups, and since each - // of these sums can be computed - // independently, we actually do this in - // parallel. One of the problems is that the - // function in the EnergyGroup - // class that computes the fission source - // returns a value. If we now simply spin off - // a new thread, we have to later capture the - // return value of the function run on that - // thread. The way this can be done is to use - // the return value of the - // Threads::new_thread function, which - // returns an object of type - // Threads::Thread@ if the function - // spawned returns a double. We can then later - // ask this object for the returned value - // (when doing so, the - // Threads::Thread::return_value - // function first waits for the thread to - // finish if it hasn't done so already). + // Since the total fission source is a sum over all the energy groups, and + // since each of these sums can be computed independently, we actually do + // this in parallel. One of the problems is that the function in the + // EnergyGroup class that computes the fission source returns a + // value. If we now simply spin off a new thread, we have to later capture + // the return value of the function run on that thread. The way this can be + // done is to use the return value of the Threads::new_thread function, + // which returns an object of type Threads::Thread@ if the function + // spawned returns a double. We can then later ask this object for the + // returned value (when doing so, the Threads::Thread::return_value function + // first waits for the thread to finish if it hasn't done so already). // - // The way this function then works - // is to first spawn one thread for - // each energy group we work with, - // then one-by-one collecting the - // returned values of each thread and - // return the sum. + // The way this function then works is to first spawn one thread for each + // energy group we work with, then one-by-one collecting the returned values + // of each thread and return the sum. template double NeutronDiffusionProblem::get_total_fission_source () const { @@ -2079,21 +1571,13 @@ namespace Step28 // @sect5{NeutronDiffusionProblem::refine_grid} // - // The next function lets the - // individual energy group objects - // refine their meshes. Much of this, - // again, is a task that can be done - // independently in parallel: first, - // let all the energy group objects - // calculate their error indicators - // in parallel, then compute the - // maximum error indicator over all - // energy groups and determine - // thresholds for refinement and - // coarsening of cells, and then ask - // all the energy groups to refine - // their meshes accordingly, again in - // parallel. + // The next function lets the individual energy group objects refine their + // meshes. Much of this, again, is a task that can be done independently in + // parallel: first, let all the energy group objects calculate their error + // indicators in parallel, then compute the maximum error indicator over all + // energy groups and determine thresholds for refinement and coarsening of + // cells, and then ask all the energy groups to refine their meshes + // accordingly, again in parallel. 
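The thread handling described in the comments above boils down to the following sketch; the member names are those used in the surrounding code, but the snippet is illustrative rather than a quote of the patch:

    std::vector<Threads::Thread<double> > threads;
    for (unsigned int group=0; group<parameters.n_groups; ++group)
      threads.push_back (Threads::new_thread (&EnergyGroup<dim>::get_fission_source,
                                              *energy_groups[group]));

    double fission_source = 0;
    for (unsigned int group=0; group<parameters.n_groups; ++group)
      fission_source += threads[group].return_value ();   // waits for the thread if necessary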
template void NeutronDiffusionProblem::refine_grid () { @@ -2131,16 +1615,12 @@ namespace Step28 // @sect5{NeutronDiffusionProblem::run} // - // Finally, this is the function - // where the meat is: iterate on a - // sequence of meshes, and on each of - // them do a power iteration to - // compute the eigenvalue. + // Finally, this is the function where the meat is: iterate on a sequence of + // meshes, and on each of them do a power iteration to compute the + // eigenvalue. // - // Given the description of the - // algorithm in the introduction, - // there is actually not much to - // comment on: + // Given the description of the algorithm in the introduction, there is + // actually not much to comment on: template void NeutronDiffusionProblem::run () { @@ -2239,35 +1719,20 @@ namespace Step28 // @sect3{The main() function} // -// The last thing in the program in -// the main() -// function. The structure is as in -// most other tutorial programs, with -// the only exception that we here -// handle a parameter file. To this -// end, we first look at the command -// line arguments passed to this -// function: if no input file is -// specified on the command line, -// then use "project.prm", otherwise -// take the filename given as the -// first argument on the command -// line. +// The last thing in the program in the main() function. The +// structure is as in most other tutorial programs, with the only exception +// that we here handle a parameter file. To this end, we first look at the +// command line arguments passed to this function: if no input file is +// specified on the command line, then use "project.prm", otherwise take the +// filename given as the first argument on the command line. // -// With this, we create a -// ParameterHandler object, let the -// NeutronDiffusionProblem::Parameters -// class declare all the parameters -// it wants to see in the input file -// (or, take the default values, if -// nothing is listed in the parameter -// file), then read the input file, -// ask the parameters object to -// extract the values, and finally -// hand everything off to an object -// of type -// NeutronDiffusionProblem -// for computation of the eigenvalue: +// With this, we create a ParameterHandler object, let the +// NeutronDiffusionProblem::Parameters class declare all the +// parameters it wants to see in the input file (or, take the default values, +// if nothing is listed in the parameter file), then read the input file, ask +// the parameters object to extract the values, and finally hand everything +// off to an object of type NeutronDiffusionProblem for +// computation of the eigenvalue: int main (int argc, char **argv) { try @@ -2326,4 +1791,3 @@ int main (int argc, char **argv) return 0; } - diff --git a/deal.II/examples/step-29/step-29.cc b/deal.II/examples/step-29/step-29.cc index 4e0c83615e..0e111ac510 100644 --- a/deal.II/examples/step-29/step-29.cc +++ b/deal.II/examples/step-29/step-29.cc @@ -12,8 +12,8 @@ // @sect3{Include files} -// The following header files are unchanged -// from step-7 and have been discussed before: +// The following header files are unchanged from step-7 and have been +// discussed before: #include #include @@ -42,50 +42,32 @@ #include -// This header file contains the -// necessary declarations for the -// ParameterHandler class that we -// will use to read our parameters -// from a configuration file: +// This header file contains the necessary declarations for the +// ParameterHandler class that we will use to read our parameters from 
a +// configuration file: #include -// For solving the linear system, -// we'll use the sparse -// LU-decomposition provided by -// UMFPACK (see the SparseDirectUMFPACK -// class), for which the following -// header file is needed. Note that -// in order to compile this tutorial -// program, the deal.II-library needs -// to be built with UMFPACK support, -// which can be most easily achieved -// by giving the -// --with-umfpack switch when -// configuring the library: +// For solving the linear system, we'll use the sparse LU-decomposition +// provided by UMFPACK (see the SparseDirectUMFPACK class), for which the +// following header file is needed. Note that in order to compile this +// tutorial program, the deal.II-library needs to be built with UMFPACK +// support, which can be most easily achieved by giving the +// --with-umfpack switch when configuring the library: #include -// The FESystem class allows us to -// stack several FE-objects to one -// compound, vector-valued finite -// element field. The necessary -// declarations for this class are -// provided in this header file: +// The FESystem class allows us to stack several FE-objects to one compound, +// vector-valued finite element field. The necessary declarations for this +// class are provided in this header file: #include -// Finally, include the header file -// that declares the Timer class that -// we will use to determine how much -// time each of the operations of our -// program takes: +// Finally, include the header file that declares the Timer class that we will +// use to determine how much time each of the operations of our program takes: #include -// As the last step at the beginning of this -// program, we put everything that is in this -// program into its namespace and, within it, -// make everything that is in the deal.II -// namespace globally available, without the -// need to prefix everything with -// dealii::: +// As the last step at the beginning of this program, we put everything that +// is in this program into its namespace and, within it, make everything that +// is in the deal.II namespace globally available, without the need to prefix +// everything with dealii::: namespace Step29 { using namespace dealii; @@ -93,28 +75,17 @@ namespace Step29 // @sect3{The DirichletBoundaryValues class} - // First we define a class for the - // function representing the - // Dirichlet boundary values. This - // has been done many times before - // and therefore does not need much - // explanation. + // First we define a class for the function representing the Dirichlet + // boundary values. This has been done many times before and therefore does + // not need much explanation. // - // Since there are two values $v$ and - // $w$ that need to be prescribed at - // the boundary, we have to tell the - // base class that this is a - // vector-valued function with two - // components, and the - // vector_value function - // and its cousin - // vector_value_list must - // return vectors with two entries. In - // our case the function is very - // simple, it just returns 1 for the - // real part $v$ and 0 for the - // imaginary part $w$ regardless of - // the point where it is evaluated. + // Since there are two values $v$ and $w$ that need to be prescribed at the + // boundary, we have to tell the base class that this is a vector-valued + // function with two components, and the vector_value function + // and its cousin vector_value_list must return vectors with + // two entries. 
In our case the function is very simple, it just returns 1 + // for the real part $v$ and 0 for the imaginary part $w$ regardless of the + // point where it is evaluated. template class DirichletBoundaryValues : public Function { @@ -154,18 +125,11 @@ namespace Step29 // @sect3{The ParameterReader class} - // The next class is responsible for - // preparing the ParameterHandler - // object and reading parameters from - // an input file. It includes a - // function - // declare_parameters - // that declares all the necessary - // parameters and a - // read_parameters - // function that is called from - // outside to initiate the parameter - // reading process. + // The next class is responsible for preparing the ParameterHandler object + // and reading parameters from an input file. It includes a function + // declare_parameters that declares all the necessary + // parameters and a read_parameters function that is called + // from outside to initiate the parameter reading process. class ParameterReader : public Subscriptor { public: @@ -177,8 +141,8 @@ namespace Step29 ParameterHandler &prm; }; - // The constructor stores a reference to - // the ParameterHandler object that is passed to it: + // The constructor stores a reference to the ParameterHandler object that is + // passed to it: ParameterReader::ParameterReader(ParameterHandler ¶mhandler) : prm(paramhandler) @@ -186,34 +150,21 @@ namespace Step29 // @sect4{ParameterReader::declare_parameters} - // The declare_parameters - // function declares all the - // parameters that our - // ParameterHandler object will be - // able to read from input files, - // along with their types, range - // conditions and the subsections they - // appear in. We will wrap all the - // entries that go into a section in a - // pair of braces to force the editor - // to indent them by one level, making - // it simpler to read which entries - // together form a section: + // The declare_parameters function declares all the parameters + // that our ParameterHandler object will be able to read from input files, + // along with their types, range conditions and the subsections they appear + // in. We will wrap all the entries that go into a section in a pair of + // braces to force the editor to indent them by one level, making it simpler + // to read which entries together form a section: void ParameterReader::declare_parameters() { - // Parameters for mesh and geometry - // include the number of global - // refinement steps that are applied - // to the initial coarse mesh and the - // focal distance $d$ of the - // transducer lens. For the number of - // refinement steps, we allow integer - // values in the range $[0,\infty)$, - // where the omitted second argument - // to the Patterns::Integer object - // denotes the half-open interval. - // For the focal distance any number - // greater than zero is accepted: + // Parameters for mesh and geometry include the number of global + // refinement steps that are applied to the initial coarse mesh and the + // focal distance $d$ of the transducer lens. For the number of refinement + // steps, we allow integer values in the range $[0,\infty)$, where the + // omitted second argument to the Patterns::Integer object denotes the + // half-open interval. 
For the focal distance any number greater than + // zero is accepted: prm.enter_subsection ("Mesh & geometry parameters"); { prm.declare_entry("Number of refinements", "6", @@ -228,15 +179,11 @@ namespace Step29 } prm.leave_subsection (); - // The next subsection is devoted to - // the physical parameters appearing - // in the equation, which are the - // frequency $\omega$ and wave speed - // $c$. Again, both need to lie in the - // half-open interval $[0,\infty)$ - // represented by calling the - // Patterns::Double class with only - // the left end-point as argument: + // The next subsection is devoted to the physical parameters appearing in + // the equation, which are the frequency $\omega$ and wave speed + // $c$. Again, both need to lie in the half-open interval $[0,\infty)$ + // represented by calling the Patterns::Double class with only the left + // end-point as argument: prm.enter_subsection ("Physical constants"); { prm.declare_entry("c", "1.5e5", @@ -250,70 +197,37 @@ namespace Step29 prm.leave_subsection (); - // Last but not least we would like - // to be able to change some - // properties of the output, like - // filename and format, through - // entries in the configuration - // file, which is the purpose of - // the last subsection: + // Last but not least we would like to be able to change some properties + // of the output, like filename and format, through entries in the + // configuration file, which is the purpose of the last subsection: prm.enter_subsection ("Output parameters"); { prm.declare_entry("Output file", "solution", Patterns::Anything(), "Name of the output file (without extension)"); - // Since different output formats - // may require different - // parameters for generating - // output (like for example, - // postscript output needs - // viewpoint angles, line widths, - // colors etc), it would be - // cumbersome if we had to - // declare all these parameters - // by hand for every possible - // output format supported in the - // library. Instead, each output - // format has a - // FormatFlags::declare_parameters - // function, which declares all - // the parameters specific to - // that format in an own - // subsection. The following call - // of - // DataOutInterface<1>::declare_parameters - // executes - // declare_parameters - // for all available output - // formats, so that for each - // format an own subsection will - // be created with parameters - // declared for that particular - // output format. (The actual - // value of the template - // parameter in the call, - // @<1@> above, does - // not matter here: the function - // does the same work independent - // of the dimension, but happens - // to be in a - // template-parameter-dependent - // class.) To find out what - // parameters there are for which - // output format, you can either - // consult the documentation of - // the DataOutBase class, or - // simply run this program - // without a parameter file - // present. It will then create a - // file with all declared - // parameters set to their - // default values, which can - // conveniently serve as a - // starting point for setting the - // parameters to the values you - // desire. + // Since different output formats may require different parameters for + // generating output (like for example, postscript output needs + // viewpoint angles, line widths, colors etc), it would be cumbersome if + // we had to declare all these parameters by hand for every possible + // output format supported in the library. 
Instead, each output format + // has a FormatFlags::declare_parameters function, which + // declares all the parameters specific to that format in an own + // subsection. The following call of + // DataOutInterface<1>::declare_parameters executes + // declare_parameters for all available output formats, so + // that for each format an own subsection will be created with + // parameters declared for that particular output format. (The actual + // value of the template parameter in the call, @<1@> + // above, does not matter here: the function does the same work + // independent of the dimension, but happens to be in a + // template-parameter-dependent class.) To find out what parameters + // there are for which output format, you can either consult the + // documentation of the DataOutBase class, or simply run this program + // without a parameter file present. It will then create a file with all + // declared parameters set to their default values, which can + // conveniently serve as a starting point for setting the parameters to + // the values you desire. DataOutInterface<1>::declare_parameters (prm); } prm.leave_subsection (); @@ -321,17 +235,11 @@ namespace Step29 // @sect4{ParameterReader::read_parameters} - // This is the main function in the - // ParameterReader class. It gets - // called from outside, first - // declares all the parameters, and - // then reads them from the input - // file whose filename is provided by - // the caller. After the call to this - // function is complete, the - // prm object can be - // used to retrieve the values of the - // parameters read in from the file: + // This is the main function in the ParameterReader class. It gets called + // from outside, first declares all the parameters, and then reads them from + // the input file whose filename is provided by the caller. After the call + // to this function is complete, the prm object can be used to + // retrieve the values of the parameters read in from the file: void ParameterReader::read_parameters (const std::string parameter_file) { declare_parameters(); @@ -343,74 +251,38 @@ namespace Step29 // @sect3{The ComputeIntensity class} - // As mentioned in the introduction, - // the quantity that we are really - // after is the spatial distribution - // of the intensity of the ultrasound - // wave, which corresponds to - // $|u|=\sqrt{v^2+w^2}$. Now we could - // just be content with having $v$ - // and $w$ in our output, and use a - // suitable visualization or - // postprocessing tool to derive - // $|u|$ from the solution we - // computed. However, there is also a - // way to output data derived from - // the solution in deal.II, and we - // are going to make use of this - // mechanism here. - - // So far we have always used the - // DataOut::add_data_vector function - // to add vectors containing output - // data to a DataOut object. There - // is a special version of this - // function that in addition to the - // data vector has an additional - // argument of type - // DataPostprocessor. 
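For orientation, the add_data_vector overload referred to here is used roughly as follows; this is a usage sketch with the names that appear later in this program, not a quote of the patch:

    ComputeIntensity<dim> intensities;
    DataOut<dim>          data_out;

    data_out.attach_dof_handler (dof_handler);
    data_out.add_data_vector    (solution, intensities);   // the DataPostprocessor variant
    data_out.build_patches ();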
What happens - // when this function is used for - // output is that at each point where - // output data is to be generated, - // the DataPostprocessor::compute_derived_quantities_scalar or DataPostprocessor::compute_derived_quantities_vector - // function of the specified - // DataPostprocessor object is - // invoked to compute the output - // quantities from the values, the - // gradients and the second - // derivatives of the finite element - // function represented by the data - // vector (in the case of face - // related data, normal vectors are - // available as well). Hence, this - // allows us to output any quantity - // that can locally be derived from - // the values of the solution and its - // derivatives. Of course, the - // ultrasound intensity $|u|$ is such - // a quantity and its computation - // doesn't even involve any - // derivatives of $v$ or $w$. - - // In practice, the - // DataPostprocessor class only - // provides an interface to this - // functionality, and we need to - // derive our own class from it in - // order to implement the functions - // specified by the interface. In - // the most general case one has to - // implement several member - // functions but if the output - // quantity is a single scalar then - // some of this boilerplate code - // can be handled by a more - // specialized class, - // DataPostprocessorScalar and we - // can derive from that one - // instead. This is what the - // ComputeIntensity - // class does: + // As mentioned in the introduction, the quantity that we are really after + // is the spatial distribution of the intensity of the ultrasound wave, + // which corresponds to $|u|=\sqrt{v^2+w^2}$. Now we could just be content + // with having $v$ and $w$ in our output, and use a suitable visualization + // or postprocessing tool to derive $|u|$ from the solution we + // computed. However, there is also a way to output data derived from the + // solution in deal.II, and we are going to make use of this mechanism here. + + // So far we have always used the DataOut::add_data_vector function to add + // vectors containing output data to a DataOut object. There is a special + // version of this function that in addition to the data vector has an + // additional argument of type DataPostprocessor. What happens when this + // function is used for output is that at each point where output data is to + // be generated, the DataPostprocessor::compute_derived_quantities_scalar or + // DataPostprocessor::compute_derived_quantities_vector function of the + // specified DataPostprocessor object is invoked to compute the output + // quantities from the values, the gradients and the second derivatives of + // the finite element function represented by the data vector (in the case + // of face related data, normal vectors are available as well). Hence, this + // allows us to output any quantity that can locally be derived from the + // values of the solution and its derivatives. Of course, the ultrasound + // intensity $|u|$ is such a quantity and its computation doesn't even + // involve any derivatives of $v$ or $w$. + + // In practice, the DataPostprocessor class only provides an interface to + // this functionality, and we need to derive our own class from it in order + // to implement the functions specified by the interface. 
In the most + // general case one has to implement several member functions but if the + // output quantity is a single scalar then some of this boilerplate code can + // be handled by a more specialized class, DataPostprocessorScalar and we + // can derive from that one instead. This is what the + // ComputeIntensity class does: template class ComputeIntensity : public DataPostprocessorScalar { @@ -427,38 +299,20 @@ namespace Step29 std::vector< Vector< double > > &computed_quantities) const; }; - // In the constructor, we need to - // call the constructor of the base - // class with two arguments. The - // first denotes the name by which - // the single scalar quantity - // computed by this class should be - // represented in output files. In - // our case, the postprocessor has - // $|u|$ as output, so we use - // "Intensity". + // In the constructor, we need to call the constructor of the base class + // with two arguments. The first denotes the name by which the single scalar + // quantity computed by this class should be represented in output files. In + // our case, the postprocessor has $|u|$ as output, so we use "Intensity". // - // The second argument is a set of - // flags that indicate which data is - // needed by the postprocessor in - // order to compute the output - // quantities. This can be any - // subset of update_values, - // update_gradients and - // update_hessians (and, in the case - // of face data, also - // update_normal_vectors), which are - // documented in UpdateFlags. Of - // course, computation of the - // derivatives requires additional - // resources, so only the flags for - // data that is really needed should - // be given here, just as we do when - // we use FEValues objects. In our - // case, only the function values of - // $v$ and $w$ are needed to compute - // $|u|$, so we're good with the - // update_values flag. + // The second argument is a set of flags that indicate which data is needed + // by the postprocessor in order to compute the output quantities. This can + // be any subset of update_values, update_gradients and update_hessians + // (and, in the case of face data, also update_normal_vectors), which are + // documented in UpdateFlags. Of course, computation of the derivatives + // requires additional resources, so only the flags for data that is really + // needed should be given here, just as we do when we use FEValues objects. + // In our case, only the function values of $v$ and $w$ are needed to + // compute $|u|$, so we're good with the update_values flag. template ComputeIntensity::ComputeIntensity () : @@ -467,34 +321,19 @@ namespace Step29 {} - // The actual prostprocessing happens - // in the following function. Its - // inputs are a vector representing - // values of the function (which is - // here vector-valued) representing - // the data vector given to - // DataOut::add_data_vector, - // evaluated at all evaluation points - // where we generate output, and some - // tensor objects representing - // derivatives (that we don't use - // here since $|u|$ is computed from - // just $v$ and $w$, and for which we - // assign no name to the - // corresponding function argument). - // The derived quantities are - // returned in the - // computed_quantities - // vector. Remember that this - // function may only use data for - // which the respective update flag - // is specified by - // get_needed_update_flags. 
For - // example, we may not use the - // derivatives here, since our - // implementation of - // get_needed_update_flags - // requests that only function values + // The actual prostprocessing happens in the following function. Its inputs + // are a vector representing values of the function (which is here + // vector-valued) representing the data vector given to + // DataOut::add_data_vector, evaluated at all evaluation points where we + // generate output, and some tensor objects representing derivatives (that + // we don't use here since $|u|$ is computed from just $v$ and $w$, and for + // which we assign no name to the corresponding function argument). The + // derived quantities are returned in the computed_quantities + // vector. Remember that this function may only use data for which the + // respective update flag is specified by + // get_needed_update_flags. For example, we may not use the + // derivatives here, since our implementation of + // get_needed_update_flags requests that only function values // are provided. template void @@ -510,12 +349,9 @@ namespace Step29 Assert(computed_quantities.size() == uh.size(), ExcDimensionMismatch (computed_quantities.size(), uh.size())); - // The computation itself is - // straightforward: We iterate over - // each entry in the output vector - // and compute $|u|$ from the - // corresponding values of $v$ and - // $w$: + // The computation itself is straightforward: We iterate over each entry + // in the output vector and compute $|u|$ from the corresponding values of + // $v$ and $w$: for (unsigned int i=0; iUltrasoundProblem class} - // Finally here is the main class of - // this program. It's member - // functions are very similar to the - // previous examples, in particular - // step-4, and the list of member - // variables does not contain any - // major surprises either. The - // ParameterHandler object that is - // passed to the constructor is - // stored as a reference to allow - // easy access to the parameters from - // all functions of the class. Since - // we are working with vector valued - // finite elements, the FE object we - // are using is of type FESystem. + // Finally here is the main class of this program. It's member functions + // are very similar to the previous examples, in particular step-4, and the + // list of member variables does not contain any major surprises either. + // The ParameterHandler object that is passed to the constructor is stored + // as a reference to allow easy access to the parameters from all functions + // of the class. Since we are working with vector valued finite elements, + // the FE object we are using is of type FESystem. template class UltrasoundProblem { @@ -572,14 +400,10 @@ namespace Step29 - // The constructor takes the - // ParameterHandler object and stores - // it in a reference. It also - // initializes the DoF-Handler and - // the finite element system, which - // consists of two copies of the - // scalar Q1 field, one for $v$ and - // one for $w$: + // The constructor takes the ParameterHandler object and stores it in a + // reference. It also initializes the DoF-Handler and the finite element + // system, which consists of two copies of the scalar Q1 field, one for $v$ + // and one for $w$: template UltrasoundProblem::UltrasoundProblem (ParameterHandler ¶m) : @@ -597,29 +421,21 @@ namespace Step29 // @sect4{UltrasoundProblem::make_grid} - // Here we setup the grid for our - // domain. 
As mentioned in the - // exposition, the geometry is just a - // unit square (in 2d) with the part - // of the boundary that represents - // the transducer lens replaced by a - // sector of a circle. + // Here we setup the grid for our domain. As mentioned in the exposition, + // the geometry is just a unit square (in 2d) with the part of the boundary + // that represents the transducer lens replaced by a sector of a circle. template void UltrasoundProblem::make_grid () { - // First we generate some logging - // output and start a timer so we - // can compute execution time when - // this function is done: + // First we generate some logging output and start a timer so we can + // compute execution time when this function is done: deallog << "Generating grid... "; Timer timer; timer.start (); - // Then we query the values for the - // focal distance of the transducer - // lens and the number of mesh - // refinement steps from our - // ParameterHandler object: + // Then we query the values for the focal distance of the transducer lens + // and the number of mesh refinement steps from our ParameterHandler + // object: prm.enter_subsection ("Mesh & geometry parameters"); const double focal_distance = prm.get_double("Focal distance"); @@ -627,24 +443,14 @@ namespace Step29 prm.leave_subsection (); - // Next, two points are defined for - // position and focal point of the - // transducer lens, which is the - // center of the circle whose - // segment will form the transducer - // part of the boundary. We compute - // the radius of this circle in - // such a way that the segment fits - // in the interval [0.4,0.6] on the - // x-axis. Notice that this is the - // only point in the program where - // things are slightly different in - // 2D and 3D. Even though this - // tutorial only deals with the 2D - // case, the necessary additions to - // make this program functional in - // 3D are so minimal that we opt - // for including them: + // Next, two points are defined for position and focal point of the + // transducer lens, which is the center of the circle whose segment will + // form the transducer part of the boundary. We compute the radius of this + // circle in such a way that the segment fits in the interval [0.4,0.6] on + // the x-axis. Notice that this is the only point in the program where + // things are slightly different in 2D and 3D. Even though this tutorial + // only deals with the 2D case, the necessary additions to make this + // program functional in 3D are so minimal that we opt for including them: const Point transducer = (dim == 2) ? Point (0.5, 0.0) : Point (0.5, 0.5, 0.0), @@ -657,25 +463,14 @@ namespace Step29 ((dim==2) ? 0.01 : 0.02)); - // As initial coarse grid we take a - // simple unit square with 5 - // subdivisions in each - // direction. The number of - // subdivisions is chosen so that - // the line segment $[0.4,0.6]$ - // that we want to designate as the - // transducer boundary is spanned - // by a single face. Then we step - // through all cells to find the - // faces where the transducer is to - // be located, which in fact is - // just the single edge from 0.4 to - // 0.6 on the x-axis. This is where - // we want the refinements to be - // made according to a circle - // shaped boundary, so we mark this - // edge with a different boundary - // indicator. + // As initial coarse grid we take a simple unit square with 5 subdivisions + // in each direction. 
The number of subdivisions is chosen so that the + // line segment $[0.4,0.6]$ that we want to designate as the transducer + // boundary is spanned by a single face. Then we step through all cells to + // find the faces where the transducer is to be located, which in fact is + // just the single edge from 0.4 to 0.6 on the x-axis. This is where we + // want the refinements to be made according to a circle shaped boundary, + // so we mark this edge with a different boundary indicator. GridGenerator::subdivided_hyper_cube (triangulation, 5, 0, 1); typename Triangulation::cell_iterator @@ -689,36 +484,24 @@ namespace Step29 cell->face(face)->set_boundary_indicator (1); - // For the circle part of the - // transducer lens, a hyper-ball - // object is used (which, of course, - // in 2D just represents a circle), - // with radius and center as computed - // above. By marking this object as - // static, we ensure that - // it lives until the end of the - // program and thereby longer than the - // triangulation object we will - // associated with it. We then assign - // this boundary-object to the part of - // the boundary with boundary - // indicator 1: + // For the circle part of the transducer lens, a hyper-ball object is used + // (which, of course, in 2D just represents a circle), with radius and + // center as computed above. By marking this object as + // static, we ensure that it lives until the end of the + // program and thereby longer than the triangulation object we will + // associated with it. We then assign this boundary-object to the part of + // the boundary with boundary indicator 1: static const HyperBallBoundary boundary(focal_point, radius); triangulation.set_boundary(1, boundary); - // Now global refinement is - // executed. Cells near the - // transducer location will be - // automatically refined according - // to the circle shaped boundary of - // the transducer lens: + // Now global refinement is executed. Cells near the transducer location + // will be automatically refined according to the circle shaped boundary + // of the transducer lens: triangulation.refine_global (n_refinements); - // Lastly, we generate some more - // logging output. We stop the - // timer and query the number of - // CPU seconds elapsed since the - // beginning of the function: + // Lastly, we generate some more logging output. We stop the timer and + // query the number of CPU seconds elapsed since the beginning of the + // function: timer.stop (); deallog << "done (" << timer() @@ -733,14 +516,10 @@ namespace Step29 // @sect4{UltrasoundProblem::setup_system} // - // Initialization of the system - // matrix, sparsity patterns and - // vectors are the same as in - // previous examples and therefore do - // not need further comment. As in - // the previous function, we also - // output the run time of what we do - // here: + // Initialization of the system matrix, sparsity patterns and vectors are + // the same as in previous examples and therefore do not need further + // comment. 
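The timing idiom referred to in the next sentence, and already visible in make_grid above, is simply the following (sketch; Timer and deallog come from the headers included at the top of the program):

    Timer timer;
    timer.start ();

    // ... the actual work of the function ...

    timer.stop ();
    deallog << "done ("
            << timer ()      // CPU time in seconds accumulated since start()
            << "s)"
            << std::endl;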
As in the previous function, we also output the run time of what + // we do here: template void UltrasoundProblem::setup_system () { @@ -773,10 +552,9 @@ namespace Step29 } - // @sect4{UltrasoundProblem::assemble_system} - // As before, this function takes - // care of assembling the system - // matrix and right hand side vector: + // @sect4{UltrasoundProblem::assemble_system} As before, this + // function takes care of assembling the system matrix and right hand side + // vector: template void UltrasoundProblem::assemble_system () { @@ -784,11 +562,8 @@ namespace Step29 Timer timer; timer.start (); - // First we query wavespeed and - // frequency from the - // ParameterHandler object and - // store them in local variables, - // as they will be used frequently + // First we query wavespeed and frequency from the ParameterHandler object + // and store them in local variables, as they will be used frequently // throughout this function. prm.enter_subsection ("Physical constants"); @@ -798,15 +573,10 @@ namespace Step29 prm.leave_subsection (); - // As usual, for computing - // integrals ordinary Gauss - // quadrature rule is used. Since - // our bilinear form involves - // boundary integrals on - // $\Gamma_2$, we also need a - // quadrature rule for surface - // integration on the faces, which - // are $dim-1$ dimensional: + // As usual, for computing integrals ordinary Gauss quadrature rule is + // used. Since our bilinear form involves boundary integrals on + // $\Gamma_2$, we also need a quadrature rule for surface integration on + // the faces, which are $dim-1$ dimensional: QGauss quadrature_formula(2); QGauss face_quadrature_formula(2); @@ -814,19 +584,11 @@ namespace Step29 n_face_q_points = face_quadrature_formula.size(), dofs_per_cell = fe.dofs_per_cell; - // The FEValues objects will - // evaluate the shape functions for - // us. For the part of the - // bilinear form that involves - // integration on $\Omega$, we'll - // need the values and gradients of - // the shape functions, and of - // course the quadrature weights. - // For the terms involving the - // boundary integrals, only shape - // function values and the - // quadrature weights are - // necessary. + // The FEValues objects will evaluate the shape functions for us. For the + // part of the bilinear form that involves integration on $\Omega$, we'll + // need the values and gradients of the shape functions, and of course the + // quadrature weights. For the terms involving the boundary integrals, + // only shape function values and the quadrature weights are necessary. FEValues fe_values (fe, quadrature_formula, update_values | update_gradients | update_JxW_values); @@ -834,14 +596,10 @@ namespace Step29 FEFaceValues fe_face_values (fe, face_quadrature_formula, update_values | update_JxW_values); - // As usual, the system matrix is - // assembled cell by cell, and we - // need a matrix for storing the - // local cell contributions as well - // as an index vector to transfer - // the cell contributions to the - // appropriate location in the - // global system matrix after. + // As usual, the system matrix is assembled cell by cell, and we need a + // matrix for storing the local cell contributions as well as an index + // vector to transfer the cell contributions to the appropriate location + // in the global system matrix after. 
FullMatrix cell_matrix (dofs_per_cell, dofs_per_cell); std::vector local_dof_indices (dofs_per_cell); @@ -852,13 +610,9 @@ namespace Step29 for (; cell!=endc; ++cell) { - // On each cell, we first need - // to reset the local - // contribution matrix and - // request the FEValues object - // to compute the shape - // functions for the current - // cell: + // On each cell, we first need to reset the local contribution matrix + // and request the FEValues object to compute the shape functions for + // the current cell: cell_matrix = 0; fe_values.reinit (cell); @@ -867,147 +621,52 @@ namespace Step29 for (unsigned int j=0; j::faces_per_cell; ++face) if (cell->face(face)->at_boundary() && @@ -1066,107 +702,43 @@ namespace Step29 { - // These faces will - // certainly contribute - // to the off-diagonal - // blocks of the system - // matrix, so we ask the - // FEFaceValues object to - // provide us with the - // shape function values - // on this face: + // These faces will certainly contribute to the off-diagonal + // blocks of the system matrix, so we ask the FEFaceValues + // object to provide us with the shape function values on this + // face: fe_face_values.reinit (cell, face); - // Next, we loop through - // all DoFs of the - // current cell to find - // pairs that belong to - // different components - // and both have support - // on the current face: + // Next, we loop through all DoFs of the current cell to find + // pairs that belong to different components and both have + // support on the current face: for (unsigned int i=0; iget_dof_indices (local_dof_indices); - // ...and then add the entries to - // the system matrix one by - // one: + // ...and then add the entries to the system matrix one by one: for (unsigned int i=0; iDirichletBoundaryValues - // class we defined above: + // The only thing left are the Dirichlet boundary values on $\Gamma_1$, + // which is characterized by the boundary indicator 1. The Dirichlet + // values are provided by the DirichletBoundaryValues class + // we defined above: std::map boundary_values; VectorTools::interpolate_boundary_values (dof_handler, 1, @@ -1228,35 +790,20 @@ namespace Step29 // @sect4{UltrasoundProblem::solve} - // As already mentioned in the - // introduction, the system matrix is - // neither symmetric nor definite, - // and so it is not quite obvious how - // to come up with an iterative - // solver and a preconditioner that - // do a good job on this matrix. We - // chose instead to go a different - // way and solve the linear system - // with the sparse LU decomposition - // provided by UMFPACK. This is often - // a good first choice for 2D - // problems and works reasonably well - // even for a large number of DoFs. - // The deal.II interface to UMFPACK - // is given by the - // SparseDirectUMFPACK class, which - // is very easy to use and allows us - // to solve our linear system with - // just 3 lines of code. - - // Note again that for compiling this - // example program, you need to have - // the deal.II library built with - // UMFPACK support, which can be - // achieved by providing the - // --with-umfpack switch to - // the configure script prior to - // compilation of the library. + // As already mentioned in the introduction, the system matrix is neither + // symmetric nor definite, and so it is not quite obvious how to come up + // with an iterative solver and a preconditioner that do a good job on this + // matrix. 
We chose instead to go a different way and solve the linear + // system with the sparse LU decomposition provided by UMFPACK. This is + // often a good first choice for 2D problems and works reasonably well even + // for a large number of DoFs. The deal.II interface to UMFPACK is given by + // the SparseDirectUMFPACK class, which is very easy to use and allows us to + // solve our linear system with just 3 lines of code. + + // Note again that for compiling this example program, you need to have the + // deal.II library built with UMFPACK support, which can be achieved by + // providing the --with-umfpack switch to the configure script + // prior to compilation of the library. template void UltrasoundProblem::solve () { @@ -1264,29 +811,18 @@ namespace Step29 Timer timer; timer.start (); - // The code to solve the linear - // system is short: First, we - // allocate an object of the right - // type. The following - // initialize call - // provides the matrix that we - // would like to invert to the - // SparseDirectUMFPACK object, and - // at the same time kicks off the - // LU-decomposition. Hence, this is - // also the point where most of the - // computational work in this - // program happens. + // The code to solve the linear system is short: First, we allocate an + // object of the right type. The following initialize call + // provides the matrix that we would like to invert to the + // SparseDirectUMFPACK object, and at the same time kicks off the + // LU-decomposition. Hence, this is also the point where most of the + // computational work in this program happens. SparseDirectUMFPACK A_direct; A_direct.initialize(system_matrix); - // After the decomposition, we can - // use A_direct like a - // matrix representing the inverse - // of our system matrix, so to - // compute the solution we just - // have to multiply with the right - // hand side vector: + // After the decomposition, we can use A_direct like a matrix + // representing the inverse of our system matrix, so to compute the + // solution we just have to multiply with the right hand side vector: A_direct.vmult (solution, system_rhs); timer.stop (); @@ -1300,19 +836,12 @@ namespace Step29 // @sect4{UltrasoundProblem::output_results} - // Here we output our solution $v$ - // and $w$ as well as the derived - // quantity $|u|$ in the format - // specified in the parameter - // file. Most of the work for - // deriving $|u|$ from $v$ and $w$ - // was already done in the - // implementation of the - // ComputeIntensity - // class, so that the output routine - // is rather straightforward and very - // similar to what is done in the - // previous tutorials. + // Here we output our solution $v$ and $w$ as well as the derived quantity + // $|u|$ in the format specified in the parameter file. Most of the work for + // deriving $|u|$ from $v$ and $w$ was already done in the implementation of + // the ComputeIntensity class, so that the output routine is + // rather straightforward and very similar to what is done in the previous + // tutorials. template void UltrasoundProblem::output_results () const { @@ -1320,27 +849,19 @@ namespace Step29 Timer timer; timer.start (); - // Define objects of our - // ComputeIntensity - // class and a DataOut object: + // Define objects of our ComputeIntensity class and a DataOut + // object: ComputeIntensity intensities; DataOut data_out; data_out.attach_dof_handler (dof_handler); - // Next we query the output-related - // parameters from the - // ParameterHandler. 
The - // DataOut::parse_parameters call - // acts as a counterpart to the - // DataOutInterface<1>::declare_parameters - // call in - // ParameterReader::declare_parameters. It - // collects all the output format - // related parameters from the - // ParameterHandler and sets the - // corresponding properties of the - // DataOut object accordingly. + // Next we query the output-related parameters from the ParameterHandler. + // The DataOut::parse_parameters call acts as a counterpart to the + // DataOutInterface<1>::declare_parameters call in + // ParameterReader::declare_parameters. It collects all the + // output format related parameters from the ParameterHandler and sets the + // corresponding properties of the DataOut object accordingly. prm.enter_subsection("Output parameters"); const std::string output_file = prm.get("Output file"); @@ -1348,42 +869,31 @@ namespace Step29 prm.leave_subsection (); - // Now we put together the filename from - // the base name provided by the - // ParameterHandler and the suffix which is - // provided by the DataOut class (the - // default suffix is set to the right type - // that matches the one set in the .prm - // file through parse_parameters()): + // Now we put together the filename from the base name provided by the + // ParameterHandler and the suffix which is provided by the DataOut class + // (the default suffix is set to the right type that matches the one set + // in the .prm file through parse_parameters()): const std::string filename = output_file + data_out.default_suffix(); std::ofstream output (filename.c_str()); - // The solution vectors $v$ and $w$ - // are added to the DataOut object - // in the usual way: + // The solution vectors $v$ and $w$ are added to the DataOut object in the + // usual way: std::vector solution_names; solution_names.push_back ("Re_u"); solution_names.push_back ("Im_u"); data_out.add_data_vector (solution, solution_names); - // For the intensity, we just call - // add_data_vector - // again, but this with our - // ComputeIntensity - // object as the second argument, - // which effectively adds $|u|$ to - // the output data: + // For the intensity, we just call add_data_vector again, but + // this with our ComputeIntensity object as the second + // argument, which effectively adds $|u|$ to the output data: data_out.add_data_vector (solution, intensities); - // The last steps are as before. Note - // that the actual output format is - // now determined by what is stated in - // the input file, i.e. one can change - // the output format without having to - // re-compile this program: + // The last steps are as before. Note that the actual output format is now + // determined by what is stated in the input file, i.e. one can change the + // output format without having to re-compile this program: data_out.build_patches (); data_out.write (output); @@ -1396,8 +906,7 @@ namespace Step29 - // @sect4{UltrasoundProblem::run} - // Here we simply execute our + // @sect4{UltrasoundProblem::run} Here we simply execute our // functions one after the other: template void UltrasoundProblem::run () @@ -1413,20 +922,13 @@ namespace Step29 // @sect4{The main function} -// Finally the main -// function of the program. It has the -// same structure as in almost all of -// the other tutorial programs. The -// only exception is that we define -// ParameterHandler and -// ParameterReader -// objects, and let the latter read in -// the parameter values from a -// textfile called -// step-29.prm. 
The -// values so read are then handed over -// to an instance of the -// UltrasoundProblem class: +// Finally the main function of the program. It has the same +// structure as in almost all of the other tutorial programs. The only +// exception is that we define ParameterHandler and +// ParameterReader objects, and let the latter read in the +// parameter values from a textfile called step-29.prm. The +// values so read are then handed over to an instance of the UltrasoundProblem +// class: int main () { try diff --git a/deal.II/examples/step-3/step-3.cc b/deal.II/examples/step-3/step-3.cc index f31f2ed4bf..1a1d3c122d 100644 --- a/deal.II/examples/step-3/step-3.cc +++ b/deal.II/examples/step-3/step-3.cc @@ -12,70 +12,48 @@ // @sect3{Many new include files} -// These include files are already -// known to you. They declare the -// classes which handle -// triangulations and enumeration of -// degrees of freedom: +// These include files are already known to you. They declare the classes +// which handle triangulations and enumeration of degrees of freedom: #include #include -// And this is the file in which the -// functions are declared that -// create grids: +// And this is the file in which the functions are declared that create grids: #include -// The next three files contain classes which -// are needed for loops over all cells and to -// get the information from the cell -// objects. The first two have been used -// before to get geometric information from -// cells; the last one is new and provides -// information about the degrees of freedom -// local to a cell: +// The next three files contain classes which are needed for loops over all +// cells and to get the information from the cell objects. The first two have +// been used before to get geometric information from cells; the last one is +// new and provides information about the degrees of freedom local to a cell: #include #include #include -// In this file contains the description of -// the Lagrange interpolation finite element: +// In this file contains the description of the Lagrange interpolation finite +// element: #include -// And this file is needed for the -// creation of sparsity patterns of -// sparse matrices, as shown in -// previous examples: +// And this file is needed for the creation of sparsity patterns of sparse +// matrices, as shown in previous examples: #include -// The next two file are needed for -// assembling the matrix using -// quadrature on each cell. The -// classes declared in them will be -// explained below: +// The next two file are needed for assembling the matrix using quadrature on +// each cell. The classes declared in them will be explained below: #include #include -// The following three include files -// we need for the treatment of -// boundary values: +// The following three include files we need for the treatment of boundary +// values: #include #include #include -// We're now almost to the end. The second to -// last group of include files is for the -// linear algebra which we employ to solve -// the system of equations arising from the -// finite element discretization of the -// Laplace equation. We will use vectors and -// full matrices for assembling the system of -// equations locally on each cell, and -// transfer the results into a sparse -// matrix. 
We will then use a Conjugate -// Gradient solver to solve the problem, for -// which we need a preconditioner (in this -// program, we use the identity -// preconditioner which does nothing, but we -// need to include the file anyway): +// We're now almost to the end. The second to last group of include files is +// for the linear algebra which we employ to solve the system of equations +// arising from the finite element discretization of the Laplace equation. We +// will use vectors and full matrices for assembling the system of equations +// locally on each cell, and transfer the results into a sparse matrix. We +// will then use a Conjugate Gradient solver to solve the problem, for which +// we need a preconditioner (in this program, we use the identity +// preconditioner which does nothing, but we need to include the file anyway): #include #include #include @@ -83,39 +61,27 @@ #include #include -// Finally, this is for output to a -// file and to the console: +// Finally, this is for output to a file and to the console: #include #include #include -// ...and this is to import the -// deal.II namespace into the global -// scope: +// ...and this is to import the deal.II namespace into the global scope: using namespace dealii; // @sect3{The Step3 class} -// Instead of the procedural programming of -// previous examples, we encapsulate -// everything into a class for this -// program. The class consists of functions -// which each perform certain aspects of a -// finite element program, a `main' function -// which controls what is done first and what -// is done next, and a list of member -// variables. - -// The public part of the class is rather -// short: it has a constructor and a function -// `run' that is called from the outside and -// acts as something like the `main' -// function: it coordinates which operations -// of this class shall be run in which -// order. Everything else in the class, -// i.e. all the functions that actually do -// anything, are in the private section of -// the class: +// Instead of the procedural programming of previous examples, we encapsulate +// everything into a class for this program. The class consists of functions +// which each perform certain aspects of a finite element program, a `main' +// function which controls what is done first and what is done next, and a +// list of member variables. + +// The public part of the class is rather short: it has a constructor and a +// function `run' that is called from the outside and acts as something like +// the `main' function: it coordinates which operations of this class shall be +// run in which order. Everything else in the class, i.e. all the functions +// that actually do anything, are in the private section of the class: class Step3 { public: @@ -123,12 +89,9 @@ public: void run (); - // Then there are the member functions - // that mostly do what their names - // suggest and whose have been discussed - // in the introduction already. Since - // they do not need to be called from - // outside, they are made private to this + // Then there are the member functions that mostly do what their names + // suggest and whose have been discussed in the introduction already. Since + // they do not need to be called from outside, they are made private to this // class. private: @@ -138,54 +101,37 @@ private: void solve (); void output_results () const; - // And finally we have some member - // variables. 
There are variables - // describing the triangulation - // and the global numbering of the - // degrees of freedom (we will - // specify the exact polynomial - // degree of the finite element - // in the constructor of this - // class)... + // And finally we have some member variables. There are variables describing + // the triangulation and the global numbering of the degrees of freedom (we + // will specify the exact polynomial degree of the finite element in the + // constructor of this class)... Triangulation<2> triangulation; FE_Q<2> fe; DoFHandler<2> dof_handler; - // ...variables for the sparsity - // pattern and values of the - // system matrix resulting from - // the discretization of the - // Laplace equation... + // ...variables for the sparsity pattern and values of the system matrix + // resulting from the discretization of the Laplace equation... SparsityPattern sparsity_pattern; SparseMatrix system_matrix; - // ...and variables which will - // hold the right hand side and - // solution vectors. + // ...and variables which will hold the right hand side and solution + // vectors. Vector solution; Vector system_rhs; }; // @sect4{Step3::Step3} -// Here comes the constructor. It does not -// much more than first to specify that we -// want bi-linear elements (denoted by the -// parameter to the finite element object, -// which indicates the polynomial degree), -// and to associate the dof_handler variable -// to the triangulation we use. (Note that -// the triangulation isn't set up with a mesh -// at all at the present time, but the -// DoFHandler doesn't care: it only wants to -// know which triangulation it will be -// associated with, and it only starts to -// care about an actual mesh once you try to -// distribute degree of freedom on the mesh -// using the distribute_dofs() function.) All -// the other member variables of the -// Step3 class have a default -// constructor which does all we want. +// Here comes the constructor. It does not much more than first to specify +// that we want bi-linear elements (denoted by the parameter to the finite +// element object, which indicates the polynomial degree), and to associate +// the dof_handler variable to the triangulation we use. (Note that the +// triangulation isn't set up with a mesh at all at the present time, but the +// DoFHandler doesn't care: it only wants to know which triangulation it will +// be associated with, and it only starts to care about an actual mesh once +// you try to distribute degree of freedom on the mesh using the +// distribute_dofs() function.) All the other member variables of the Step3 +// class have a default constructor which does all we want. Step3::Step3 () : fe (1), @@ -195,113 +141,75 @@ Step3::Step3 () // @sect4{Step3::make_grid} -// Now, the first thing we've got to -// do is to generate the -// triangulation on which we would -// like to do our computation and -// number each vertex with a degree -// of freedom. We have seen this in -// the previous examples before. +// Now, the first thing we've got to do is to generate the triangulation on +// which we would like to do our computation and number each vertex with a +// degree of freedom. We have seen this in the previous examples before. void Step3::make_grid () { - // First create the grid and refine - // all cells five times. Since the - // initial grid (which is the - // square [-1,1]x[-1,1]) consists - // of only one cell, the final grid - // has 32 times 32 cells, for a - // total of 1024. 
+ // First create the grid and refine all cells five times. Since the initial + // grid (which is the square [-1,1]x[-1,1]) consists of only one cell, the + // final grid has 32 times 32 cells, for a total of 1024. GridGenerator::hyper_cube (triangulation, -1, 1); triangulation.refine_global (5); - // Unsure that 1024 is the correct number? - // Let's see: n_active_cells returns the - // number of active cells: + // Unsure that 1024 is the correct number? Let's see: n_active_cells + // returns the number of active cells: std::cout << "Number of active cells: " << triangulation.n_active_cells() << std::endl; - // Here, by active we mean the cells that aren't - // refined any further. We stress the - // adjective `active', since there are more - // cells, namely the parent cells of the - // finest cells, their parents, etc, up to - // the one cell which made up the initial - // grid. Of course, on the next coarser - // level, the number of cells is one - // quarter that of the cells on the finest - // level, i.e. 256, then 64, 16, 4, and - // 1. We can get the total number of cells + // Here, by active we mean the cells that aren't refined any further. We + // stress the adjective `active', since there are more cells, namely the + // parent cells of the finest cells, their parents, etc, up to the one cell + // which made up the initial grid. Of course, on the next coarser level, the + // number of cells is one quarter that of the cells on the finest level, + // i.e. 256, then 64, 16, 4, and 1. We can get the total number of cells // like this: std::cout << "Total number of cells: " << triangulation.n_cells() << std::endl; - // Note the distinction between - // n_active_cells() and n_cells(). + // Note the distinction between n_active_cells() and n_cells(). } // @sect4{Step3::setup_system} -// Next we enumerate all the degrees of -// freedom and set up matrix and vector -// objects to hold the system -// data. Enumerating is done by using -// DoFHandler::distribute_dofs(), as we have -// seen in the step-2 example. Since we use -// the FE_Q class and have set the polynomial -// degree to 1 in the constructor, -// i.e. bilinear elements, this associates -// one degree of freedom with each -// vertex. While we're at generating output, -// let us also take a look at how many -// degrees of freedom are generated: +// Next we enumerate all the degrees of freedom and set up matrix and vector +// objects to hold the system data. Enumerating is done by using +// DoFHandler::distribute_dofs(), as we have seen in the step-2 example. Since +// we use the FE_Q class and have set the polynomial degree to 1 in the +// constructor, i.e. bilinear elements, this associates one degree of freedom +// with each vertex. While we're at generating output, let us also take a look +// at how many degrees of freedom are generated: void Step3::setup_system () { dof_handler.distribute_dofs (fe); std::cout << "Number of degrees of freedom: " << dof_handler.n_dofs() << std::endl; - // There should be one DoF for each - // vertex. Since we have a 32 times - // 32 grid, the number of DoFs - // should be 33 times 33, or 1089. - - // As we have seen in the previous example, - // we set up a sparsity pattern by first - // creating a temporary structure, tagging - // those entries that might be nonzero, and - // then copying the data over to the - // SparsityPattern object that can then be - // used by the system matrix. + // There should be one DoF for each vertex. 
Since we have a 32 times 32 + // grid, the number of DoFs should be 33 times 33, or 1089. + + // As we have seen in the previous example, we set up a sparsity pattern by + // first creating a temporary structure, tagging those entries that might be + // nonzero, and then copying the data over to the SparsityPattern object + // that can then be used by the system matrix. CompressedSparsityPattern c_sparsity(dof_handler.n_dofs()); DoFTools::make_sparsity_pattern (dof_handler, c_sparsity); sparsity_pattern.copy_from(c_sparsity); - // Note that the - // SparsityPattern object does - // not hold the values of the - // matrix, it only stores the - // places where entries are. The - // entries themselves are stored in - // objects of type SparseMatrix, of - // which our variable system_matrix - // is one. + // Note that the SparsityPattern object does not hold the values of the + // matrix, it only stores the places where entries are. The entries + // themselves are stored in objects of type SparseMatrix, of which our + // variable system_matrix is one. // - // The distinction between sparsity pattern - // and matrix was made to allow several - // matrices to use the same sparsity - // pattern. This may not seem relevant - // here, but when you consider the size - // which matrices can have, and that it may - // take some time to build the sparsity - // pattern, this becomes important in - // large-scale problems if you have to - // store several matrices in your program. + // The distinction between sparsity pattern and matrix was made to allow + // several matrices to use the same sparsity pattern. This may not seem + // relevant here, but when you consider the size which matrices can have, + // and that it may take some time to build the sparsity pattern, this + // becomes important in large-scale problems if you have to store several + // matrices in your program. system_matrix.reinit (sparsity_pattern); - // The last thing to do in this - // function is to set the sizes of - // the right hand side vector and - // the solution vector to the right - // values: + // The last thing to do in this function is to set the sizes of the right + // hand side vector and the solution vector to the right values: solution.reinit (dof_handler.n_dofs()); system_rhs.reinit (dof_handler.n_dofs()); } @@ -309,265 +217,161 @@ void Step3::setup_system () // @sect4{Step3::assemble_system} -// The next step is to compute the entries of -// the matrix and right hand side that form -// the linear system from which we compute -// the solution. This is the central function -// of each finite element program and we have -// discussed the primary steps in the -// introduction already. +// The next step is to compute the entries of the matrix and right hand side +// that form the linear system from which we compute the solution. This is the +// central function of each finite element program and we have discussed the +// primary steps in the introduction already. // -// The general approach to assemble matrices -// and vectors is to loop over all cells, and -// on each cell compute the contribution of -// that cell to the global matrix and right -// hand side by quadrature. The point to -// realize now is that we need the values of -// the shape functions at the locations of -// quadrature points on the real -// cell. However, both the finite element -// shape functions as well as the quadrature -// points are only defined on the reference -// cell. 
They are therefore of little help to -// us, and we will in fact hardly ever query -// information about finite element shape -// functions or quadrature points from these -// objects directly. +// The general approach to assemble matrices and vectors is to loop over all +// cells, and on each cell compute the contribution of that cell to the global +// matrix and right hand side by quadrature. The point to realize now is that +// we need the values of the shape functions at the locations of quadrature +// points on the real cell. However, both the finite element shape functions +// as well as the quadrature points are only defined on the reference +// cell. They are therefore of little help to us, and we will in fact hardly +// ever query information about finite element shape functions or quadrature +// points from these objects directly. // -// Rather, what is required is a way to map -// this data from the reference cell to the -// real cell. Classes that can do that are -// derived from the Mapping class, though one -// again often does not have to deal with -// them directly: many functions in the -// library can take a mapping object as -// argument, but when it is omitted they -// simply resort to the standard bilinear Q1 -// mapping. We will go this route, and not -// bother with it for the moment (we come -// back to this in step-10, step-11, and -// step-12). +// Rather, what is required is a way to map this data from the reference cell +// to the real cell. Classes that can do that are derived from the Mapping +// class, though one again often does not have to deal with them directly: +// many functions in the library can take a mapping object as argument, but +// when it is omitted they simply resort to the standard bilinear Q1 +// mapping. We will go this route, and not bother with it for the moment (we +// come back to this in step-10, step-11, and step-12). // -// So what we now have is a collection of -// three classes to deal with: finite -// element, quadrature, and mapping -// objects. That's too much, so there is one -// type of class that orchestrates -// information exchange between these three: -// the FEValues class. If given one instance -// of each three of these objects (or two, -// and an implicit linear mapping), it will -// be able to provide you with information -// about values and gradients of shape -// functions at quadrature points on a real -// cell. +// So what we now have is a collection of three classes to deal with: finite +// element, quadrature, and mapping objects. That's too much, so there is one +// type of class that orchestrates information exchange between these three: +// the FEValues class. If given one instance of each three of these objects +// (or two, and an implicit linear mapping), it will be able to provide you +// with information about values and gradients of shape functions at +// quadrature points on a real cell. // -// Using all this, we will assemble the -// linear system for this problem in the +// Using all this, we will assemble the linear system for this problem in the // following function: void Step3::assemble_system () { - // Ok, let's start: we need a quadrature - // formula for the evaluation of the - // integrals on each cell. Let's take a - // Gauss formula with two quadrature points - // in each direction, i.e. a total of four - // points since we are in 2D. This - // quadrature formula integrates - // polynomials of degrees up to three - // exactly (in 1D). 
It is easy to check - // that this is sufficient for the present - // problem: + // Ok, let's start: we need a quadrature formula for the evaluation of the + // integrals on each cell. Let's take a Gauss formula with two quadrature + // points in each direction, i.e. a total of four points since we are in + // 2D. This quadrature formula integrates polynomials of degrees up to three + // exactly (in 1D). It is easy to check that this is sufficient for the + // present problem: QGauss<2> quadrature_formula(2); - // And we initialize the object which we - // have briefly talked about above. It - // needs to be told which finite element we - // want to use, and the quadrature points - // and their weights (jointly described by - // a Quadrature object). As mentioned, we - // use the implied Q1 mapping, rather than - // specifying one ourselves - // explicitly. Finally, we have to tell it - // what we want it to compute on each cell: - // we need the values of the shape - // functions at the quadrature points (for - // the right hand side $(\varphi,f)$), their - // gradients (for the matrix entries $(\nabla - // \varphi_i, \nabla \varphi_j)$), and also the - // weights of the quadrature points and the - // determinants of the Jacobian - // transformations from the reference cell - // to the real cells. + // And we initialize the object which we have briefly talked about above. It + // needs to be told which finite element we want to use, and the quadrature + // points and their weights (jointly described by a Quadrature object). As + // mentioned, we use the implied Q1 mapping, rather than specifying one + // ourselves explicitly. Finally, we have to tell it what we want it to + // compute on each cell: we need the values of the shape functions at the + // quadrature points (for the right hand side $(\varphi,f)$), their + // gradients (for the matrix entries $(\nabla \varphi_i, \nabla + // \varphi_j)$), and also the weights of the quadrature points and the + // determinants of the Jacobian transformations from the reference cell to + // the real cells. // - // This list of what kind of information we - // actually need is given as a - // collection of flags as the third - // argument to the constructor of - // FEValues. Since these values have to - // be recomputed, or updated, every time we - // go to a new cell, all of these flags - // start with the prefix update_ and - // then indicate what it actually is that - // we want updated. The flag to give if we - // want the values of the shape functions - // computed is #update_values; for the - // gradients it is - // #update_gradients. The determinants - // of the Jacobians and the quadrature - // weights are always used together, so - // only the products (Jacobians times - // weights, or short JxW) are computed; - // since we need them, we have to list - // #update_JxW_values as well: + // This list of what kind of information we actually need is given as a + // collection of flags as the third argument to the constructor of + // FEValues. Since these values have to be recomputed, or updated, every + // time we go to a new cell, all of these flags start with the prefix + // update_ and then indicate what it actually is that we want + // updated. The flag to give if we want the values of the shape functions + // computed is #update_values; for the gradients it is + // #update_gradients. 
The determinants of the Jacobians and the quadrature + // weights are always used together, so only the products (Jacobians times + // weights, or short JxW) are computed; since we need them, we + // have to list #update_JxW_values as well: FEValues<2> fe_values (fe, quadrature_formula, update_values | update_gradients | update_JxW_values); - // The advantage of this approach is that - // we can specify what kind of information - // we actually need on each cell. It is - // easily understandable that this approach - // can significant speed up finite element - // computations, compared to approaches - // where everything, including second - // derivatives, normal vectors to cells, - // etc are computed on each cell, - // regardless whether they are needed or - // not. - - // For use further down below, we define - // two short cuts for values that will be - // used very frequently. First, an - // abbreviation for the number of degrees - // of freedom on each cell (since we are in - // 2D and degrees of freedom are associated - // with vertices only, this number is four, - // but we rather want to write the - // definition of this variable in a way - // that does not preclude us from later - // choosing a different finite element that - // has a different number of degrees of - // freedom per cell, or work in a different - // space dimension). + // The advantage of this approach is that we can specify what kind of + // information we actually need on each cell. It is easily understandable + // that this approach can significant speed up finite element computations, + // compared to approaches where everything, including second derivatives, + // normal vectors to cells, etc are computed on each cell, regardless + // whether they are needed or not. + + // For use further down below, we define two short cuts for values that will + // be used very frequently. First, an abbreviation for the number of degrees + // of freedom on each cell (since we are in 2D and degrees of freedom are + // associated with vertices only, this number is four, but we rather want to + // write the definition of this variable in a way that does not preclude us + // from later choosing a different finite element that has a different + // number of degrees of freedom per cell, or work in a different space + // dimension). // - // Secondly, we also define an abbreviation - // for the number of quadrature points - // (here that should be four). In general, - // it is a good idea to use their symbolic - // names instead of hard-coding these - // number even if you know them, since you - // may want to change the quadrature - // formula and/or finite element at some - // time; the program will just work with - // these changes, without the need to - // change anything in this function. + // Secondly, we also define an abbreviation for the number of quadrature + // points (here that should be four). In general, it is a good idea to use + // their symbolic names instead of hard-coding these number even if you know + // them, since you may want to change the quadrature formula and/or finite + // element at some time; the program will just work with these changes, + // without the need to change anything in this function. // - // The shortcuts, finally, are only defined - // to make the following loops a bit more - // readable. 
You will see them in many - // places in larger programs, and - // `dofs_per_cell' and `n_q_points' are - // more or less by convention the standard - // names for these purposes: + // The shortcuts, finally, are only defined to make the following loops a + // bit more readable. You will see them in many places in larger programs, + // and `dofs_per_cell' and `n_q_points' are more or less by convention the + // standard names for these purposes: const unsigned int dofs_per_cell = fe.dofs_per_cell; const unsigned int n_q_points = quadrature_formula.size(); - // Now, we said that we wanted to assemble - // the global matrix and vector - // cell-by-cell. We could write the results - // directly into the global matrix, but - // this is not very efficient since access - // to the elements of a sparse matrix is - // slow. Rather, we first compute the - // contribution of each cell in a small - // matrix with the degrees of freedom on - // the present cell, and only transfer them - // to the global matrix when the - // computations are finished for this - // cell. We do the same for the right hand - // side vector. So let's first allocate - // these objects (these being local - // objects, all degrees of freedom are - // coupling with all others, and we should - // use a full matrix object rather than a - // sparse one for the local operations; - // everything will be transferred to a - // global sparse matrix later on): + // Now, we said that we wanted to assemble the global matrix and vector + // cell-by-cell. We could write the results directly into the global matrix, + // but this is not very efficient since access to the elements of a sparse + // matrix is slow. Rather, we first compute the contribution of each cell in + // a small matrix with the degrees of freedom on the present cell, and only + // transfer them to the global matrix when the computations are finished for + // this cell. We do the same for the right hand side vector. So let's first + // allocate these objects (these being local objects, all degrees of freedom + // are coupling with all others, and we should use a full matrix object + // rather than a sparse one for the local operations; everything will be + // transferred to a global sparse matrix later on): FullMatrix cell_matrix (dofs_per_cell, dofs_per_cell); Vector cell_rhs (dofs_per_cell); - // When assembling the - // contributions of each cell, we - // do this with the local numbering - // of the degrees of freedom - // (i.e. the number running from - // zero through - // dofs_per_cell-1). However, when - // we transfer the result into the - // global matrix, we have to know - // the global numbers of the - // degrees of freedom. When we query - // them, we need a scratch - // (temporary) array for these - // numbers: + // When assembling the contributions of each cell, we do this with the local + // numbering of the degrees of freedom (i.e. the number running from zero + // through dofs_per_cell-1). However, when we transfer the result into the + // global matrix, we have to know the global numbers of the degrees of + // freedom. When we query them, we need a scratch (temporary) array for + // these numbers: std::vector local_dof_indices (dofs_per_cell); - // Now for the loop over all cells. We have - // seen before how this works, so this - // should be familiar including the - // conventional names for these variables: + // Now for the loop over all cells. 
We have seen before how this works, so + // this should be familiar including the conventional names for these + // variables: DoFHandler<2>::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); for (; cell!=endc; ++cell) { - // We are now sitting on one cell, and - // we would like the values and - // gradients of the shape functions be - // computed, as well as the - // determinants of the Jacobian - // matrices of the mapping between - // reference cell and true cell, at the - // quadrature points. Since all these - // values depend on the geometry of the - // cell, we have to have the FEValues - // object re-compute them on each cell: + // We are now sitting on one cell, and we would like the values and + // gradients of the shape functions be computed, as well as the + // determinants of the Jacobian matrices of the mapping between + // reference cell and true cell, at the quadrature points. Since all + // these values depend on the geometry of the cell, we have to have the + // FEValues object re-compute them on each cell: fe_values.reinit (cell); - // Next, reset the local cell's - // contributions to - // global matrix and global right hand - // side to zero, before we fill them: + // Next, reset the local cell's contributions to global matrix and + // global right hand side to zero, before we fill them: cell_matrix = 0; cell_rhs = 0; - // Then finally assemble the matrix: - // For the Laplace problem, the matrix - // on each cell is the integral over - // the gradients of shape function i - // and j. Since we do not integrate, - // but rather use quadrature, this is - // the sum over all quadrature points - // of the integrands times the - // determinant of the Jacobian matrix - // at the quadrature point times the - // weight of this quadrature point. You - // can get the gradient of shape - // function $i$ at quadrature point - // q_point by using - // fe_values.shape_grad(i,q_point); - // this gradient is a 2-dimensional - // vector (in fact it is of type - // Tensor@<1,dim@>, with here dim=2) and - // the product of two such vectors is - // the scalar product, i.e. the product - // of the two shape_grad function calls - // is the dot product. This is in turn - // multiplied by the Jacobian - // determinant and the quadrature point - // weight (that one gets together by - // the call to - // FEValues::JxW() ). Finally, this is - // repeated for all shape functions - // $i$ and $j$: + // Then finally assemble the matrix: For the Laplace problem, the matrix + // on each cell is the integral over the gradients of shape function i + // and j. Since we do not integrate, but rather use quadrature, this is + // the sum over all quadrature points of the integrands times the + // determinant of the Jacobian matrix at the quadrature point times the + // weight of this quadrature point. You can get the gradient of shape + // function $i$ at quadrature point q_point by using + // fe_values.shape_grad(i,q_point); this gradient is a + // 2-dimensional vector (in fact it is of type Tensor@<1,dim@>, with + // here dim=2) and the product of two such vectors is the scalar + // product, i.e. the product of the two shape_grad function calls is the + // dot product. This is in turn multiplied by the Jacobian determinant + // and the quadrature point weight (that one gets together by the call + // to FEValues::JxW() ). 
Finally, this is repeated for all shape + // functions $i$ and $j$: for (unsigned int i=0; iget_dof_indices (local_dof_indices); - // Then again loop over all - // shape functions i and j and - // transfer the local elements - // to the global matrix. The - // global numbers can be - // obtained using - // local_dof_indices[i]: + // Then again loop over all shape functions i and j and transfer the + // local elements to the global matrix. The global numbers can be + // obtained using local_dof_indices[i]: for (unsigned int i=0; istd::map class. + // Finally, the output object is a list of pairs of global degree of freedom + // numbers (i.e. the number of the degrees of freedom on the boundary) and + // their boundary values (which are zero here for all entries). This mapping + // of DoF numbers to boundary values is done by the std::map + // class. std::map boundary_values; VectorTools::interpolate_boundary_values (dof_handler, 0, ZeroFunction<2>(), boundary_values); - // Now that we got the list of - // boundary DoFs and their - // respective boundary values, - // let's use them to modify the - // system of equations - // accordingly. This is done by the - // following function call: + // Now that we got the list of boundary DoFs and their respective boundary + // values, let's use them to modify the system of equations + // accordingly. This is done by the following function call: MatrixTools::apply_boundary_values (boundary_values, system_matrix, solution, @@ -734,120 +475,74 @@ void Step3::assemble_system () // @sect4{Step3::solve} -// The following function simply -// solves the discretized -// equation. As the system is quite a -// large one for direct solvers such -// as Gauss elimination or LU -// decomposition, we use a Conjugate -// Gradient algorithm. You should -// remember that the number of -// variables here (only 1089) is a -// very small number for finite -// element computations, where -// 100.000 is a more usual number. -// For this number of variables, -// direct methods are no longer -// usable and you are forced to use -// methods like CG. +// The following function simply solves the discretized equation. As the +// system is quite a large one for direct solvers such as Gauss elimination or +// LU decomposition, we use a Conjugate Gradient algorithm. You should +// remember that the number of variables here (only 1089) is a very small +// number for finite element computations, where 100.000 is a more usual +// number. For this number of variables, direct methods are no longer usable +// and you are forced to use methods like CG. void Step3::solve () { - // First, we need to have an object that - // knows how to tell the CG algorithm when - // to stop. This is done by using a - // SolverControl object, and as stopping - // criterion we say: stop after a maximum - // of 1000 iterations (which is far more - // than is needed for 1089 variables; see - // the results section to find out how many - // were really used), and stop if the norm - // of the residual is below $10^{-12}$. In - // practice, the latter criterion will be - // the one which stops the iteration: + // First, we need to have an object that knows how to tell the CG algorithm + // when to stop. This is done by using a SolverControl object, and as + // stopping criterion we say: stop after a maximum of 1000 iterations (which + // is far more than is needed for 1089 variables; see the results section to + // find out how many were really used), and stop if the norm of the residual + // is below $10^{-12}$. 
In practice, the latter criterion will be the one + // which stops the iteration: SolverControl solver_control (1000, 1e-12); - // Then we need the solver itself. The - // template parameters to the SolverCG - // class are the matrix type and the type - // of the vectors, but the empty angle - // brackets indicate that we simply take - // the default arguments (which are - // SparseMatrix@ and + // Then we need the solver itself. The template parameters to the SolverCG + // class are the matrix type and the type of the vectors, but the empty + // angle brackets indicate that we simply take the default arguments (which + // are SparseMatrix@ and // Vector@): SolverCG<> solver (solver_control); - // Now solve the system of equations. The - // CG solver takes a preconditioner as its - // fourth argument. We don't feel ready to - // delve into this yet, so we tell it to - // use the identity operation as - // preconditioner: + // Now solve the system of equations. The CG solver takes a preconditioner + // as its fourth argument. We don't feel ready to delve into this yet, so we + // tell it to use the identity operation as preconditioner: solver.solve (system_matrix, solution, system_rhs, PreconditionIdentity()); - // Now that the solver has done its - // job, the solution variable - // contains the nodal values of the - // solution function. + // Now that the solver has done its job, the solution variable contains the + // nodal values of the solution function. } // @sect4{Step3::output_results} -// The last part of a typical finite -// element program is to output the -// results and maybe do some -// postprocessing (for example -// compute the maximal stress values -// at the boundary, or the average -// flux across the outflow, etc). We -// have no such postprocessing here, -// but we would like to write the -// solution to a file. +// The last part of a typical finite element program is to output the results +// and maybe do some postprocessing (for example compute the maximal stress +// values at the boundary, or the average flux across the outflow, etc). We +// have no such postprocessing here, but we would like to write the solution +// to a file. void Step3::output_results () const { - // To write the output to a file, - // we need an object which knows - // about output formats and the - // like. This is the DataOut class, - // and we need an object of that - // type: + // To write the output to a file, we need an object which knows about output + // formats and the like. This is the DataOut class, and we need an object of + // that type: DataOut<2> data_out; - // Now we have to tell it where to take the - // values from which it shall write. We - // tell it which DoFHandler object to - // use, and the solution vector (and - // the name by which the solution variable - // shall appear in the output file). If - // we had more than one vector which we - // would like to look at in the output (for - // example right hand sides, errors per - // cell, etc) we would add them as well: + // Now we have to tell it where to take the values from which it shall + // write. We tell it which DoFHandler object to use, and the solution vector + // (and the name by which the solution variable shall appear in the output + // file). 
If we had more than one vector which we would like to look at in + // the output (for example right hand sides, errors per cell, etc) we would + // add them as well: data_out.attach_dof_handler (dof_handler); data_out.add_data_vector (solution, "solution"); - // After the DataOut object knows - // which data it is to work on, we - // have to tell it to process them - // into something the back ends can - // handle. The reason is that we - // have separated the frontend - // (which knows about how to treat - // DoFHandler objects and data - // vectors) from the back end (which - // knows many different output formats) - // and use an intermediate data - // format to transfer data from the - // front- to the backend. The data - // is transformed into this - // intermediate format by the - // following function: + // After the DataOut object knows which data it is to work on, we have to + // tell it to process them into something the back ends can handle. The + // reason is that we have separated the frontend (which knows about how to + // treat DoFHandler objects and data vectors) from the back end (which knows + // many different output formats) and use an intermediate data format to + // transfer data from the front- to the backend. The data is transformed + // into this intermediate format by the following function: data_out.build_patches (); - // Now we have everything in place - // for the actual output. Just open - // a file and write the data into - // it, using GNUPLOT format (there - // are other functions which write - // their data in postscript, AVS, - // GMV, or some other format): + // Now we have everything in place for the actual output. Just open a file + // and write the data into it, using GNUPLOT format (there are other + // functions which write their data in postscript, AVS, GMV, or some other + // format): std::ofstream output ("solution.gpl"); data_out.write_gnuplot (output); } @@ -855,14 +550,11 @@ void Step3::output_results () const // @sect4{Step3::run} -// Finally, the last function of this class -// is the main function which calls all the -// other functions of the Step3 -// class. The order in which this is done -// resembles the order in which most finite -// element programs work. Since the names are -// mostly self-explanatory, there is not much -// to comment about: +// Finally, the last function of this class is the main function which calls +// all the other functions of the Step3 class. The order in which +// this is done resembles the order in which most finite element programs +// work. Since the names are mostly self-explanatory, there is not much to +// comment about: void Step3::run () { make_grid (); @@ -875,15 +567,10 @@ void Step3::run () // @sect3{The main function} -// This is the main function of the -// program. Since the concept of a -// main function is mostly a remnant -// from the pre-object era in C/C++ -// programming, it often does not -// much more than creating an object -// of the top-level class and calling -// its principle function. This is -// what is done here as well: +// This is the main function of the program. Since the concept of a main +// function is mostly a remnant from the pre-object era in C/C++ programming, +// it often does not much more than creating an object of the top-level class +// and calling its principle function. 
This is what is done here as well: int main () { Step3 laplace_problem; diff --git a/deal.II/examples/step-30/step-30.cc b/deal.II/examples/step-30/step-30.cc index 30b06b08ff..cd719b0491 100644 --- a/deal.II/examples/step-30/step-30.cc +++ b/deal.II/examples/step-30/step-30.cc @@ -10,10 +10,8 @@ /* to the file deal.II/doc/license.html for the text and */ /* further information on this license. */ -// The deal.II include files have already -// been covered in previous examples -// and will thus not be further -// commented on. +// The deal.II include files have already been covered in previous examples +// and will thus not be further commented on. #include #include #include @@ -40,18 +38,16 @@ #include #include -// The last step is as in all -// previous programs: +// The last step is as in all previous programs: namespace Step30 { using namespace dealii; // @sect3{Equation data} // - // The classes describing equation data and the - // actual assembly of individual terms are - // almost entirely copied from step-12. We will - // comment on differences. + // The classes describing equation data and the actual assembly of + // individual terms are almost entirely copied from step-12. We will comment + // on differences. template class RHS: public Function { @@ -95,27 +91,16 @@ namespace Step30 } - // The flow field is chosen to be a - // quarter circle with - // counterclockwise flow direction - // and with the origin as midpoint - // for the right half of the domain - // with positive $x$ values, whereas - // the flow simply goes to the left - // in the left part of the domain at - // a velocity that matches the one - // coming in from the right. In the - // circular part the magnitude of the - // flow velocity is proportional to - // the distance from the origin. This - // is a difference to step-12, where - // the magnitude was 1 - // evereywhere. the new definition - // leads to a linear variation of - // $\beta$ along each given face of a - // cell. On the other hand, the - // solution $u(x,y)$ is exactly the - // same as before. + // The flow field is chosen to be a quarter circle with counterclockwise + // flow direction and with the origin as midpoint for the right half of the + // domain with positive $x$ values, whereas the flow simply goes to the left + // in the left part of the domain at a velocity that matches the one coming + // in from the right. In the circular part the magnitude of the flow + // velocity is proportional to the distance from the origin. This is a + // difference to step-12, where the magnitude was 1 evereywhere. the new + // definition leads to a linear variation of $\beta$ along each given face + // of a cell. On the other hand, the solution $u(x,y)$ is exactly the same + // as before. template void Beta::value_list(const std::vector > &points, std::vector > &values) const @@ -159,12 +144,9 @@ namespace Step30 // @sect3{Class: DGTransportEquation} // - // This declaration of this - // class is utterly unaffected by our - // current changes. The only - // substantial change is that we use - // only the second assembly scheme - // described in step-12. + // This declaration of this class is utterly unaffected by our current + // changes. The only substantial change is that we use only the second + // assembly scheme described in step-12. 
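(Since the body of Beta::value_list is not reproduced in this excerpt, the following sketch shows one way the flow field described a few paragraphs above could be implemented. It matches the verbal description -- a counterclockwise rotation about the origin with magnitude proportional to the distance from the origin for positive x, and a matching straight flow to the left otherwise -- and is written with dim==2 in mind as in this program; read it as an illustration consistent with that description rather than as the program's verbatim code.)

  template <int dim>
  void Beta<dim>::value_list (const std::vector<Point<dim> > &points,
                              std::vector<Point<dim> >       &values) const
  {
    Assert (values.size() == points.size(),
            ExcDimensionMismatch (values.size(), points.size()));

    for (unsigned int i=0; i<points.size(); ++i)
      {
        if (points[i](0) > 0)
          {
            // Right half: counterclockwise rotation, beta = (-y, x),
            // whose magnitude equals the distance from the origin.
            values[i](0) = -points[i](1);
            values[i](1) =  points[i](0);
          }
        else
          {
            // Left half: straight flow to the left, chosen so that it
            // matches the circular field along the line x=0.
            values[i] = Point<dim>();
            values[i](0) = -points[i](1);
          }
      }
  }
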
template class DGTransportEquation { @@ -192,26 +174,15 @@ namespace Step30 }; - // Likewise, the constructor of the - // class as well as the functions - // assembling the terms corresponding - // to cell interiors and boundary - // faces are unchanged from - // before. The function that - // assembles face terms between cells - // also did not change because all it - // does is operate on two objects of - // type FEFaceValuesBase (which is - // the base class of both - // FEFaceValues and - // FESubfaceValues). Where these - // objects come from, i.e. how they - // are initialized, is of no concern - // to this function: it simply - // assumes that the quadrature points - // on faces or subfaces represented - // by the two objects correspond to - // the same points in physical space. + // Likewise, the constructor of the class as well as the functions + // assembling the terms corresponding to cell interiors and boundary faces + // are unchanged from before. The function that assembles face terms between + // cells also did not change because all it does is operate on two objects + // of type FEFaceValuesBase (which is the base class of both FEFaceValues + // and FESubfaceValues). Where these objects come from, i.e. how they are + // initialized, is of no concern to this function: it simply assumes that + // the quadrature points on faces or subfaces represented by the two objects + // correspond to the same points in physical space. template DGTransportEquation::DGTransportEquation () : @@ -340,15 +311,10 @@ namespace Step30 // @sect3{Class: DGMethod} // - // Even the main class of this - // program stays more or less the - // same. We omit one of the assembly - // routines and use only the second, - // more effective one of the two - // presented in step-12. However, we - // introduce a new routine - // (set_anisotropic_flags) and modify - // another one (refine_grid). + // Even the main class of this program stays more or less the same. We omit + // one of the assembly routines and use only the second, more effective one + // of the two presented in step-12. However, we introduce a new routine + // (set_anisotropic_flags) and modify another one (refine_grid). template class DGMethod { @@ -369,29 +335,23 @@ namespace Step30 Triangulation triangulation; const MappingQ1 mapping; - // Again we want to use DG elements of - // degree 1 (but this is only specified in - // the constructor). If you want to use a - // DG method of a different degree replace - // 1 in the constructor by the new degree. + // Again we want to use DG elements of degree 1 (but this is only + // specified in the constructor). If you want to use a DG method of a + // different degree replace 1 in the constructor by the new degree. const unsigned int degree; FE_DGQ fe; DoFHandler dof_handler; SparsityPattern sparsity_pattern; SparseMatrix system_matrix; - // This is new, the threshold value used in - // the evaluation of the anisotropic jump - // indicator explained in the - // introduction. Its value is set to 3.0 in - // the constructor, but it can easily be - // changed to a different value greater - // than 1. + // This is new, the threshold value used in the evaluation of the + // anisotropic jump indicator explained in the introduction. Its value is + // set to 3.0 in the constructor, but it can easily be changed to a + // different value greater than 1. const double anisotropic_threshold_ratio; - // This is a bool flag indicating whether - // anisotropic refinement shall be used or - // not. 
It is set by the constructor, which - // takes an argument of the same name. + // This is a bool flag indicating whether anisotropic refinement shall be + // used or not. It is set by the constructor, which takes an argument of + // the same name. const bool anisotropic; const QGauss quadrature; @@ -408,36 +368,17 @@ DGMethod::DGMethod (const bool anisotropic) : mapping (), - // Change here for DG - // methods of - // different degrees. + // Change here for DG methods of different degrees. degree(1), fe (degree), dof_handler (triangulation), anisotropic_threshold_ratio(3.), anisotropic(anisotropic), - // As beta is a - // linear function, - // we can choose the - // degree of the - // quadrature for - // which the - // resulting - // integration is - // correct. Thus, we - // choose to use - // degree+1 - // gauss points, - // which enables us - // to integrate - // exactly - // polynomials of - // degree - // 2*degree+1, - // enough for all the - // integrals we will - // perform in this - // program. + // As beta is a linear function, we can choose the degree of the + // quadrature for which the resulting integration is correct. Thus, we + // choose to use degree+1 Gauss points, which enables us to + // integrate exactly polynomials of degree 2*degree+1, enough + // for all the integrals we will perform in this program. quadrature (degree+1), face_quadrature (degree+1), dg () @@ -473,20 +414,14 @@ // @sect4{Function: assemble_system2} // - // We proceed with the - // assemble_system2 function that - // implements the DG discretization in its - // second version. This function is very - // similar to the assemble_system2 - // function from step-12, even the four cases - // considered for the neighbor-relations of a - // cell are the same, namely a) cell is at the - // boundary, b) there are finer neighboring - // cells, c) the neighbor is neither coarser - // nor finer and d) the neighbor is coarser. - // However, the way in which we decide upon - // which case we have are modified in the way - // described in the introduction. + // We proceed with the assemble_system2 function that + // implements the DG discretization in its second version. This function is + // very similar to the assemble_system2 function from step-12, + // even the four cases considered for the neighbor-relations of a cell are + // the same, namely a) cell is at the boundary, b) there are finer + // neighboring cells, c) the neighbor is neither coarser nor finer and d) + // the neighbor is coarser. However, the way in which we decide which + // case we have is modified in the way described in the introduction. template void DGMethod::assemble_system2 () { @@ -560,51 +495,33 @@ ExcInternalError()); typename DoFHandler::cell_iterator neighbor= cell->neighbor(face_no); - // Case b), we decide that there - // are finer cells as neighbors - // by asking the face, whether it - // has children. if so, then - // there must also be finer cells - // which are children or farther - // offsprings of our neighbor. + // Case b), we decide that there are finer cells as neighbors + // by asking the face whether it has children. If so, then + // there must also be finer cells which are children or + // farther offspring of our neighbor. if (face->has_children()) { - // We need to know, which of - // the neighbors faces points - // in the direction of our - // cell.
Using the @p - // neighbor_face_no function - // we get this information - // for both coarser and - // non-coarser neighbors. + // We need to know which of the neighbor's faces points in + // the direction of our cell. Using the @p + // neighbor_face_no function we get this information for + // both coarser and non-coarser neighbors. const unsigned int neighbor2= cell->neighbor_face_no(face_no); - // Now we loop over all - // subfaces, i.e. the - // children and possibly - // grandchildren of the - // current face. + // Now we loop over all subfaces, i.e. the children and + // possibly grandchildren of the current face. for (unsigned int subface_no=0; subface_nonumber_of_children(); ++subface_no) { - // To get the cell behind - // the current subface we - // can use the @p - // neighbor_child_on_subface - // function. it takes - // care of all the - // complicated situations - // of anisotropic - // refinement and - // non-standard faces. + // To get the cell behind the current subface we can + // use the @p neighbor_child_on_subface function. It + // takes care of all the complicated situations of + // anisotropic refinement and non-standard faces. typename DoFHandler::cell_iterator neighbor_child = cell->neighbor_child_on_subface (face_no, subface_no); Assert (!neighbor_child->has_children(), ExcInternalError()); - // The remaining part of - // this case is - // unchanged. + // The remaining part of this case is unchanged. ue_vi_matrix = 0; ui_ve_matrix = 0; ue_ve_matrix = 0; @@ -635,42 +552,25 @@ } else { - // Case c). We simply ask, - // whether the neighbor is - // coarser. If not, then it - // is neither coarser nor - // finer, since any finer - // neighbor would have been - // treated above with case - // b). Of all the cases with - // the same refinement - // situation of our cell and - // the neighbor we want to - // treat only one half, so - // that each face is - // considered only once. Thus - // we have the additional - // condition, that the cell - // with the lower index does - // the work. In the rare case - // that both cells have the - // same index, the cell with + // Case c). We simply ask whether the neighbor is + // coarser. If not, then it is neither coarser nor finer, + // since any finer neighbor would have been treated above + // with case b). Of all the cases with the same refinement + // situation of our cell and the neighbor we want to treat + // only one half, so that each face is considered only + // once. Thus we have the additional condition that the + // cell with the lower index does the work. In the rare + // case that both cells have the same index, the cell with // lower level is selected. if (!cell->neighbor_is_coarser(face_no) && (neighbor->index() > cell->index() || (neighbor->level() < cell->level() && neighbor->index() == cell->index()))) { - // Here we know, that the - // neigbor is not coarser - // so we can use the - // usual @p - // neighbor_of_neighbor - // function. However, we - // could also use the - // more general @p - // neighbor_face_no - // function. + // Here we know that the neighbor is not coarser so we + // can use the usual @p neighbor_of_neighbor + // function. However, we could also use the more + // general @p neighbor_face_no function. const unsigned int neighbor2=cell->neighbor_of_neighbor(face_no); ue_vi_matrix = 0; @@ -701,10 +601,8 @@ } } - // We do not need to consider - // case d), as those faces - // are treated 'from the - // other side within case b).
+ // We do not need to consider case d), as those faces are + // treated 'from the other side' within case b). } } } @@ -721,10 +619,8 @@ // @sect3{Solver} // - // For this simple problem we use the simple - // Richardson iteration again. The solver is - // completely unaffected by our anisotropic - // changes. + // For this simple problem we use the simple Richardson iteration again. The + // solver is completely unaffected by our anisotropic changes. template void DGMethod::solve (Vector &solution) { @@ -742,10 +638,8 @@ // @sect3{Refinement} // - // We refine the grid according to the same - // simple refinement criterion used in step-12, - // namely an approximation to the - // gradient of the solution. + // We refine the grid according to the same simple refinement criterion used + // in step-12, namely an approximation to the gradient of the solution. template void DGMethod::refine_grid () { @@ -763,45 +657,34 @@ endc = dof_handler.end(); for (unsigned int cell_no=0; cell!=endc; ++cell, ++cell_no) gradient_indicator(cell_no)*=std::pow(cell->diameter(), 1+1.0*dim/2); - // Then we use this indicator to flag the 30 - // percent of the cells with highest error - // indicator to be refined. + // Then we use this indicator to flag the 30 percent of the cells with + // highest error indicator to be refined. GridRefinement::refine_and_coarsen_fixed_number (triangulation, gradient_indicator, 0.3, 0.1); - // Now the refinement flags are set for those - // cells with a large error indicator. If - // nothing is done to change this, those - // cells will be refined isotropically. If - // the @p anisotropic flag given to this - // function is set, we now call the - // set_anisotropic_flags() function, which - // uses the jump indicator to reset some of - // the refinement flags to anisotropic - // refinement. + // Now the refinement flags are set for those cells with a large error + // indicator. If nothing is done to change this, those cells will be + // refined isotropically. If the @p anisotropic flag given to this + // function is set, we now call the set_anisotropic_flags() function, + // which uses the jump indicator to reset some of the refinement flags to + // anisotropic refinement. if (anisotropic) set_anisotropic_flags(); - // Now execute the refinement considering - // anisotropic as well as isotropic + // Now execute the refinement considering anisotropic as well as isotropic // refinement flags. triangulation.execute_coarsening_and_refinement (); } - // Once an error indicator has been evaluated - // and the cells with largerst error are - // flagged for refinement we want to loop over - // the flagged cells again to decide whether - // they need isotropic refinemnt or whether - // anisotropic refinement is more - // appropriate. This is the anisotropic jump + // Once an error indicator has been evaluated and the cells with the largest + // error are flagged for refinement, we want to loop over the flagged cells + // again to decide whether they need isotropic refinement or whether + // anisotropic refinement is more appropriate. This is the anisotropic jump // indicator explained in the introduction. template void DGMethod::set_anisotropic_flags () { - // We want to evaluate the jump over faces of - // the flagged cells, so we need some objects - // to evaluate values of the solution on - // faces. + // We want to evaluate the jump over faces of the flagged cells, so we + // need some objects to evaluate values of the solution on faces.
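Before turning to those objects, here is a compact, purely illustrative recap of the face classification that assemble_system2 above relies on; the actual assembly is replaced by comments, and case d) deliberately has no branch of its own because the coarser neighbor handles that face itself when it runs into case b).

#include <deal.II/base/geometry_info.h>
#include <deal.II/dofs/dof_handler.h>

template <int dim>
void classify_faces_sketch (const dealii::DoFHandler<dim> &dof_handler)
{
  typename dealii::DoFHandler<dim>::active_cell_iterator
    cell = dof_handler.begin_active(),
    endc = dof_handler.end();
  for (; cell != endc; ++cell)
    for (unsigned int face_no = 0;
         face_no < dealii::GeometryInfo<dim>::faces_per_cell; ++face_no)
      {
        typename dealii::DoFHandler<dim>::face_iterator face = cell->face(face_no);
        if (face->at_boundary())
          {
            // case a): assemble the boundary face terms
          }
        else if (face->has_children())
          {
            // case b): the neighbor is refined; loop over its subfaces and
            // use cell->neighbor_child_on_subface(face_no, subface_no)
          }
        else if (!cell->neighbor_is_coarser(face_no) &&
                 (cell->neighbor(face_no)->index() > cell->index() ||
                  (cell->neighbor(face_no)->level() < cell->level() &&
                   cell->neighbor(face_no)->index() == cell->index())))
          {
            // case c): neighbor of the same refinement; only one of the two
            // cells sharing the face does the work, as discussed above
          }
        // case d), a coarser neighbor, is handled from the other side
      }
}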
UpdateFlags face_update_flags = UpdateFlags(update_values | update_JxW_values); @@ -814,8 +697,7 @@ namespace Step30 endc=dof_handler.end(); for (; cell!=endc; ++cell) - // We only need to consider cells which are - // flaged for refinement. + // We only need to consider cells which are flagged for refinement. if (cell->refine_flag_set()) { Point jump; @@ -833,92 +715,50 @@ namespace Step30 std::vector u (fe_v_face.n_quadrature_points); std::vector u_neighbor (fe_v_face.n_quadrature_points); - // The four cases of different - // neighbor relations senn in - // the assembly routines are - // repeated much in the same - // way here. + // The four cases of different neighbor relations seen in + // the assembly routines are repeated much in the same way + // here. if (face->has_children()) { - // The neighbor is refined. - // First we store the - // information, which of - // the neighbor's faces - // points in the direction - // of our current - // cell. This property is - // inherited to the - // children. + // The neighbor is refined. First we store the + // information which of the neighbor's faces points in + // the direction of our current cell. This property is + // inherited to the children. unsigned int neighbor2=cell->neighbor_face_no(face_no); // Now we loop over all subfaces, for (unsigned int subface_no=0; subface_nonumber_of_children(); ++subface_no) { - // get an iterator - // pointing to the cell - // behind the present - // subface... + // get an iterator pointing to the cell behind the + // present subface... typename DoFHandler::cell_iterator neighbor_child = cell->neighbor_child_on_subface(face_no,subface_no); Assert (!neighbor_child->has_children(), ExcInternalError()); - // ... and reinit the - // respective - // FEFaceValues und - // FESubFaceValues - // objects. + // ... and reinit the respective FEFaceValues and + // FESubfaceValues objects. fe_v_subface.reinit (cell, face_no, subface_no); fe_v_face_neighbor.reinit (neighbor_child, neighbor2); // We obtain the function values fe_v_subface.get_function_values(solution2, u); fe_v_face_neighbor.get_function_values(solution2, u_neighbor); - // as well as the - // quadrature weights, - // multiplied by the - // jacobian determinant. + // as well as the quadrature weights, multiplied by + // the Jacobian determinant. const std::vector &JxW = fe_v_subface.get_JxW_values (); - // Now we loop over all - // quadrature points + // Now we loop over all quadrature points for (unsigned int x=0; xdim - // components. + // and integrate the absolute value of the jump + // of the solution, i.e. the absolute value of + // the difference between the function value + // seen from the current cell and the + // neighboring cell, respectively. We know that + // the first two faces are orthogonal to the + // first coordinate direction on the unit cell, + // the second two faces are orthogonal to the + // second coordinate direction and so on, so we + // accumulate these values into vectors with + // dim components. jump[face_no/2]+=std::fabs(u[x]-u_neighbor[x])*JxW[x]; - // We also sum up - // the scaled - // weights to - // obtain the - // measure of the - // face. + // We also sum up the scaled weights to obtain + // the measure of the face. area[face_no/2]+=JxW[x]; } } @@ -927,16 +767,11 @@ { if (!cell->neighbor_is_coarser(face_no)) { - // Our current cell and - // the neighbor have - // the same refinement - // along the face under - // consideration.
Apart - // from that, we do - // much the same as - // with one of the - // subcells in the - // above case. + // Our current cell and the neighbor have the same + // refinement along the face under + // consideration. Apart from that, we do much the + // same as with one of the subcells in the above + // case. unsigned int neighbor2=cell->neighbor_of_neighbor(face_no); fe_v_face.reinit (cell, face_no); @@ -955,26 +790,13 @@ } else //i.e. neighbor is coarser than cell { - // Now the neighbor is - // actually - // coarser. This case - // is new, in that it - // did not occur in the - // assembly - // routine. Here, we - // have to consider it, - // but this is not - // overly - // complicated. We - // simply use the @p - // neighbor_of_coarser_neighbor - // function, which - // again takes care of - // anisotropic - // refinement and - // non-standard face - // orientation by - // itself. + // Now the neighbor is actually coarser. This case + // is new, in that it did not occur in the assembly + // routine. Here, we have to consider it, but this + // is not overly complicated. We simply use the @p + // neighbor_of_coarser_neighbor function, which + // again takes care of anisotropic refinement and + // non-standard face orientation by itself. std::pair neighbor_face_subface = cell->neighbor_of_coarser_neighbor(face_no); Assert (neighbor_face_subface.first::faces_per_cell, ExcInternalError()); @@ -1001,10 +823,8 @@ } } } - // Now we analyze the size of the mean - // jumps, which we get dividing the - // jumps by the measure of the - // respective faces. + // Now we analyze the size of the mean jumps, which we get by dividing + // the jumps by the measure of the respective faces. double average_jumps[dim]; double sum_of_average_jumps=0.; for (unsigned int i=0; idim - // coordinate directions of the unit - // cell and compare the average jump - // over the faces orthogional to that - // direction with the average jumnps - // over faces orthogonal to the - // remining direction(s). If the first - // is larger than the latter by a given - // factor, we refine only along hat - // axis. Otherwise we leave the - // refinement flag unchanged, resulting + // Now we loop over the dim coordinate directions of + // the unit cell and compare the average jump over the faces + // orthogonal to that direction with the average jumps over faces + // orthogonal to the remaining direction(s). If the first is larger + // than the latter by a given factor, we refine only along that + // axis. Otherwise we leave the refinement flag unchanged, resulting // in isotropic refinement. for (unsigned int i=0; i anisotropic_threshold_ratio*(sum_of_average_jumps-average_jumps[i])) @@ -1033,10 +848,9 @@ // @sect3{The Rest} // - // The remaining part of the program is again - // unmodified. Only the creation of the - // original triangulation is changed in order - // to reproduce the new domain. + // The remaining part of the program is again unmodified. Only the creation + // of the original triangulation is changed in order to reproduce the new + // domain.
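Before moving on to the rest of the program, here is a compact sketch of the directional decision just described, for a single flagged cell. It is illustrative only: it assumes the cell already carries an isotropic refinement flag and that average_jumps[i] holds the mean jump across the faces orthogonal to unit cell direction i.

#include <deal.II/base/geometry_info.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>

template <int dim>
void flag_direction_sketch
  (const typename dealii::Triangulation<dim>::active_cell_iterator &cell,
   const double (&average_jumps)[dim],
   const double anisotropic_threshold_ratio)
{
  double sum_of_average_jumps = 0.;
  for (unsigned int i = 0; i < dim; ++i)
    sum_of_average_jumps += average_jumps[i];

  // If the jump orthogonal to direction i dominates the jumps in all other
  // directions by the given factor, cut the cell only along that axis;
  // otherwise the isotropic flag set earlier remains in effect.
  for (unsigned int i = 0; i < dim; ++i)
    if (average_jumps[i] > anisotropic_threshold_ratio *
                           (sum_of_average_jumps - average_jumps[i]))
      cell->set_refine_flag (dealii::RefinementCase<dim>::cut_axis(i));
}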
template void DGMethod::output_results (const unsigned int cycle) const { @@ -1101,10 +915,8 @@ namespace Step30 p1(0)=-1; for (unsigned int i=0; i repetitions(dim,1); repetitions[0]=2; GridGenerator::subdivided_hyper_rectangle (triangulation, @@ -1149,14 +961,12 @@ int main () using namespace dealii; using namespace Step30; - // If you want to run the program in 3D, - // simply change the following line to - // const unsigned int dim = 3;. + // If you want to run the program in 3D, simply change the following + // line to const unsigned int dim = 3;. const unsigned int dim = 2; { - // First, we perform a run with - // isotropic refinement. + // First, we perform a run with isotropic refinement. std::cout << "Performing a " << dim << "D run with isotropic refinement..." << std::endl << "------------------------------------------------" << std::endl; DGMethod dgmethod_iso(false); @@ -1164,8 +974,7 @@ int main () } { - // Now we do a second run, this time - // with anisotropic refinement. + // Now we do a second run, this time with anisotropic refinement. std::cout << std::endl << "Performing a " << dim << "D run with anisotropic refinement..." << std::endl << "--------------------------------------------------" << std::endl; @@ -1199,5 +1008,3 @@ int main () return 0; } - - diff --git a/deal.II/examples/step-31/step-31.cc b/deal.II/examples/step-31/step-31.cc index 39b6350bcf..13a6290b05 100644 --- a/deal.II/examples/step-31/step-31.cc +++ b/deal.II/examples/step-31/step-31.cc @@ -12,10 +12,8 @@ // @sect3{Include files} -// The first step, as always, is to include -// the functionality of these well-known -// deal.II library files and some C++ header -// files. +// The first step, as always, is to include the functionality of these +// well-known deal.II library files and some C++ header files. #include #include #include @@ -47,30 +45,24 @@ #include #include -// Then we need to include some header files -// that provide vector, matrix, and -// preconditioner classes that implement -// interfaces to the respective Trilinos -// classes. In particular, we will need -// interfaces to the matrix and vector -// classes based on Trilinos as well as -// Trilinos preconditioners: +// Then we need to include some header files that provide vector, matrix, and +// preconditioner classes that implement interfaces to the respective Trilinos +// classes. In particular, we will need interfaces to the matrix and vector +// classes based on Trilinos as well as Trilinos preconditioners: #include #include #include #include #include -// Finally, here are two C++ headers that -// haven't been included yet by one of the -// aforelisted header files: +// Finally, here are two C++ headers that haven't been included yet by one of +// the aforelisted header files: #include #include #include -// At the end of this top-matter, we import -// all deal.II names into the global +// At the end of this top-matter, we import all deal.II names into the global // namespace: namespace Step31 { @@ -79,62 +71,42 @@ namespace Step31 // @sect3{Equation data} - // Again, the next stage in the program is - // the definition of the equation data, that - // is, the various boundary conditions, the - // right hand sides and the initial condition - // (remember that we're about to solve a - // time-dependent system). The basic strategy - // for this definition is the same as in - // step-22. Regarding the details, though, - // there are some differences. 
- - // The first thing is that we don't set any - // non-homogenous boundary conditions on the - // velocity, since as is explained in the - // introduction we will use no-flux - // conditions - // $\mathbf{n}\cdot\mathbf{u}=0$. So what is - // left are dim-1 conditions for - // the tangential part of the normal - // component of the stress tensor, - // $\textbf{n} \cdot [p \textbf{1} - - // \eta\varepsilon(\textbf{u})]$; we assume - // homogenous values for these components, - // i.e. a natural boundary condition that - // requires no specific action (it appears as - // a zero term in the right hand side of the - // weak form). + // Again, the next stage in the program is the definition of the equation + // data, that is, the various boundary conditions, the right hand sides and + // the initial condition (remember that we're about to solve a + // time-dependent system). The basic strategy for this definition is the + // same as in step-22. Regarding the details, though, there are some + // differences. + + // The first thing is that we don't set any non-homogenous boundary + // conditions on the velocity, since as is explained in the introduction we + // will use no-flux conditions $\mathbf{n}\cdot\mathbf{u}=0$. So what is + // left are dim-1 conditions for the tangential part of the + // normal component of the stress tensor, $\textbf{n} \cdot [p \textbf{1} - + // \eta\varepsilon(\textbf{u})]$; we assume homogenous values for these + // components, i.e. a natural boundary condition that requires no specific + // action (it appears as a zero term in the right hand side of the weak + // form). // - // For the temperature T, we assume no - // thermal energy flux, i.e. $\mathbf{n} - // \cdot \kappa \nabla T=0$. This, again, is - // a boundary condition that does not require - // us to do anything in particular. + // For the temperature T, we assume no thermal energy flux, + // i.e. $\mathbf{n} \cdot \kappa \nabla T=0$. This, again, is a boundary + // condition that does not require us to do anything in particular. // - // Secondly, we have to set initial - // conditions for the temperature (no initial - // conditions are required for the velocity - // and pressure, since the Stokes equations - // for the quasi-stationary case we consider - // here have no time derivatives of the - // velocity or pressure). Here, we choose a - // very simple test case, where the initial - // temperature is zero, and all dynamics are - // driven by the temperature right hand side. + // Secondly, we have to set initial conditions for the temperature (no + // initial conditions are required for the velocity and pressure, since the + // Stokes equations for the quasi-stationary case we consider here have no + // time derivatives of the velocity or pressure). Here, we choose a very + // simple test case, where the initial temperature is zero, and all dynamics + // are driven by the temperature right hand side. // - // Thirdly, we need to define the right hand - // side of the temperature equation. We - // choose it to be constant within three - // circles (or spheres in 3d) somewhere at - // the bottom of the domain, as explained in - // the introduction, and zero outside. + // Thirdly, we need to define the right hand side of the temperature + // equation. We choose it to be constant within three circles (or spheres in + // 3d) somewhere at the bottom of the domain, as explained in the + // introduction, and zero outside. 
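A minimal sketch of such a source term is given below. The three centers and the radius are illustrative placeholders, not the values the program actually uses; the point is only the structure of the test.

#include <deal.II/base/function.h>
#include <deal.II/base/point.h>

class TemperatureSourceSketch : public dealii::Function<2>
{
public:
  // Return a unit heat source inside three small disks near the bottom of
  // the domain and zero everywhere else (2d version for brevity).
  virtual double value (const dealii::Point<2> &p,
                        const unsigned int /*component*/ = 0) const
  {
    const dealii::Point<2> centers[3] = { dealii::Point<2>(0.3, 0.1),
                                          dealii::Point<2>(0.45, 0.1),
                                          dealii::Point<2>(0.75, 0.1) };
    const double radius = 1./32;

    for (unsigned int i = 0; i < 3; ++i)
      if (p.distance (centers[i]) < radius)
        return 1.;
    return 0.;
  }
};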
// - // Finally, or maybe firstly, at the top of - // this namespace, we define the various - // material constants we need ($\eta,\kappa$, - // density $\rho$ and the thermal expansion - // coefficient $\beta$): + // Finally, or maybe firstly, at the top of this namespace, we define the + // various material constants we need ($\eta,\kappa$, density $\rho$ and the + // thermal expansion coefficient $\beta$): namespace EquationData { const double eta = 1; @@ -234,89 +206,55 @@ namespace Step31 // @sect3{Linear solvers and preconditioners} - // This section introduces some objects - // that are used for the solution of the - // linear equations of the Stokes system - // that we need to solve in each time - // step. Many of the ideas used here are - // the same as in step-20, where Schur - // complement based preconditioners and - // solvers have been introduced, with the - // actual interface taken from step-22 (in - // particular the discussion in the - // "Results" section of step-22, in which - // we introduce alternatives to the direct - // Schur complement approach). Note, - // however, that here we don't use the - // Schur complement to solve the Stokes - // equations, though an approximate Schur - // complement (the mass matrix on the - // pressure space) appears in the - // preconditioner. + // This section introduces some objects that are used for the solution of + // the linear equations of the Stokes system that we need to solve in each + // time step. Many of the ideas used here are the same as in step-20, where + // Schur complement based preconditioners and solvers have been introduced, + // with the actual interface taken from step-22 (in particular the + // discussion in the "Results" section of step-22, in which we introduce + // alternatives to the direct Schur complement approach). Note, however, + // that here we don't use the Schur complement to solve the Stokes + // equations, though an approximate Schur complement (the mass matrix on the + // pressure space) appears in the preconditioner. namespace LinearSolvers { // @sect4{The InverseMatrix class template} - // This class is an interface to - // calculate the action of an - // "inverted" matrix on a vector - // (using the vmult - // operation) in the same way as - // the corresponding class in - // step-22: when the product of an - // object of this class is - // requested, we solve a linear - // equation system with that matrix - // using the CG method, accelerated - // by a preconditioner of - // (templated) class - // Preconditioner. + // This class is an interface to calculate the action of an "inverted" + // matrix on a vector (using the vmult operation) in the same + // way as the corresponding class in step-22: when the product of an + // object of this class is requested, we solve a linear equation system + // with that matrix using the CG method, accelerated by a preconditioner + // of (templated) class Preconditioner. // - // In a minor deviation from the - // implementation of the same class in - // step-22 (and step-20), we make the - // vmult function take any - // kind of vector type (it will yield - // compiler errors, however, if the matrix - // does not allow a matrix-vector product - // with this kind of vector). + // In a minor deviation from the implementation of the same class in + // step-22 (and step-20), we make the vmult function take any + // kind of vector type (it will yield compiler errors, however, if the + // matrix does not allow a matrix-vector product with this kind of + // vector). 
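Schematically, and leaving out the exception handling that is discussed next as well as the SmartPointer bookkeeping of the real class, such an inverse-matrix wrapper could look like the following sketch (the class name and the stopping tolerance are illustrative):

#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_control.h>

template <class MatrixType, class PreconditionerType>
class InverseMatrixSketch
{
public:
  InverseMatrixSketch (const MatrixType &m, const PreconditionerType &p)
    : matrix (m), preconditioner (p) {}

  // Realize dst = matrix^{-1} src by running a CG solve; the vector type is
  // a template parameter, as described above.
  template <typename VectorType>
  void vmult (VectorType &dst, const VectorType &src) const
  {
    dealii::SolverControl control (src.size(), 1e-7 * src.l2_norm());
    dealii::SolverCG<VectorType> cg (control);
    dst = 0;
    cg.solve (matrix, dst, src, preconditioner);
  }

private:
  const MatrixType         &matrix;
  const PreconditionerType &preconditioner;
};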
// - // Secondly, we catch any exceptions that - // the solver may have thrown. The reason - // is as follows: When debugging a program - // like this one occasionally makes a - // mistake of passing an indefinite or - // non-symmetric matrix or preconditioner - // to the current class. The solver will, - // in that case, not converge and throw a - // run-time exception. If not caught here - // it will propagate up the call stack and - // may end up in main() where - // we output an error message that will say - // that the CG solver failed. The question - // then becomes: Which CG solver? The one - // that inverted the mass matrix? The one - // that inverted the top left block with - // the Laplace operator? Or a CG solver in - // one of the several other nested places - // where we use linear solvers in the - // current code? No indication about this - // is present in a run-time exception - // because it doesn't store the stack of - // calls through which we got to the place + // Secondly, we catch any exceptions that the solver may have thrown. The + // reason is as follows: When debugging a program like this one + // occasionally makes a mistake of passing an indefinite or non-symmetric + // matrix or preconditioner to the current class. The solver will, in that + // case, not converge and throw a run-time exception. If not caught here + // it will propagate up the call stack and may end up in + // main() where we output an error message that will say that + // the CG solver failed. The question then becomes: Which CG solver? The + // one that inverted the mass matrix? The one that inverted the top left + // block with the Laplace operator? Or a CG solver in one of the several + // other nested places where we use linear solvers in the current code? No + // indication about this is present in a run-time exception because it + // doesn't store the stack of calls through which we got to the place // where the exception was generated. // - // So rather than letting the exception - // propagate freely up to - // main() we realize that - // there is little that an outer function - // can do if the inner solver fails and - // rather convert the run-time exception - // into an assertion that fails and - // triggers a call to abort(), - // allowing us to trace back in a debugger - // how we got to the current place. + // So rather than letting the exception propagate freely up to + // main() we realize that there is little that an outer + // function can do if the inner solver fails and rather convert the + // run-time exception into an assertion that fails and triggers a call to + // abort(), allowing us to trace back in a debugger how we + // got to the current place. template class InverseMatrix : public Subscriptor { @@ -370,99 +308,56 @@ namespace Step31 // @sect4{Schur complement preconditioner} - // This is the implementation of the - // Schur complement preconditioner as - // described in detail in the - // introduction. As opposed to step-20 - // and step-22, we solve the block system - // all-at-once using GMRES, and use the - // Schur complement of the block - // structured matrix to build a good + // This is the implementation of the Schur complement preconditioner as + // described in detail in the introduction. As opposed to step-20 and + // step-22, we solve the block system all-at-once using GMRES, and use the + // Schur complement of the block structured matrix to build a good // preconditioner instead. 
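To anchor the discussion that follows, here is a schematic of the action $Y=P^{-1}X$ that the preconditioner will end up implementing, written with the library's plain (non-Trilinos) types and illustrative names; it is a sketch of the three steps explained below, not the program's actual implementation.

#include <deal.II/lac/block_sparse_matrix.h>
#include <deal.II/lac/block_vector.h>
#include <deal.II/lac/vector.h>

template <class PreconditionerA, class PreconditionerMp>
void block_schur_vmult_sketch (const dealii::BlockSparseMatrix<double> &stokes_matrix,
                               const PreconditionerA                   &a_preconditioner,
                               const PreconditionerMp                  &m_inverse,
                               dealii::BlockVector<double>             &dst,
                               const dealii::BlockVector<double>       &src)
{
  // Step 1: dst_0 = ~A^{-1} src_0, one application of the velocity preconditioner.
  a_preconditioner.vmult (dst.block(0), src.block(0));

  // Step 2: form -(src_1 - B dst_0). residual() computes b - M x, so flip the sign.
  dealii::Vector<double> tmp (src.block(1).size());
  stokes_matrix.block(1,0).residual (tmp, dst.block(0), src.block(1));
  tmp *= -1;

  // Step 3: apply the approximate Schur complement inverse (the pressure
  // mass matrix solve) to obtain the pressure component.
  m_inverse.vmult (dst.block(1), tmp);
}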
// - // Let's have a look at the ideal - // preconditioner matrix - // $P=\left(\begin{array}{cc} A & 0 \\ B - // & -S \end{array}\right)$ described in - // the introduction. If we apply this - // matrix in the solution of a linear - // system, convergence of an iterative - // GMRES solver will be governed by the - // matrix - // @f{eqnarray*} - // P^{-1}\left(\begin{array}{cc} A - // & B^T \\ B & 0 - // \end{array}\right) = - // \left(\begin{array}{cc} I & - // A^{-1} B^T \\ 0 & I - // \end{array}\right), - // @f} - // which indeed is very simple. A GMRES - // solver based on exact matrices would - // converge in one iteration, since all - // eigenvalues are equal (any Krylov - // method takes at most as many - // iterations as there are distinct - // eigenvalues). Such a preconditioner - // for the blocked Stokes system has been - // proposed by Silvester and Wathen - // ("Fast iterative solution of - // stabilised Stokes systems part II. - // Using general block preconditioners", - // SIAM J. Numer. Anal., 31 (1994), - // pp. 1352-1367). + // Let's have a look at the ideal preconditioner matrix + // $P=\left(\begin{array}{cc} A & 0 \\ B & -S \end{array}\right)$ + // described in the introduction. If we apply this matrix in the solution + // of a linear system, convergence of an iterative GMRES solver will be + // governed by the matrix @f{eqnarray*} P^{-1}\left(\begin{array}{cc} A & + // B^T \\ B & 0 \end{array}\right) = \left(\begin{array}{cc} I & A^{-1} + // B^T \\ 0 & I \end{array}\right), @f} which indeed is very simple. A + // GMRES solver based on exact matrices would converge in one iteration, + // since all eigenvalues are equal (any Krylov method takes at most as + // many iterations as there are distinct eigenvalues). Such a + // preconditioner for the blocked Stokes system has been proposed by + // Silvester and Wathen ("Fast iterative solution of stabilised Stokes + // systems part II. Using general block preconditioners", SIAM + // J. Numer. Anal., 31 (1994), pp. 1352-1367). // - // Replacing P by $\tilde{P}$ - // keeps that spirit alive: the product - // $P^{-1} A$ will still be close to a - // matrix with eigenvalues 1 with a - // distribution that does not depend on - // the problem size. This lets us hope to - // be able to get a number of GMRES - // iterations that is problem-size - // independent. + // Replacing P by $\tilde{P}$ keeps that spirit alive: the product + // $P^{-1} A$ will still be close to a matrix with eigenvalues 1 with a + // distribution that does not depend on the problem size. This lets us + // hope to be able to get a number of GMRES iterations that is + // problem-size independent. // - // The deal.II users who have already - // gone through the step-20 and step-22 - // tutorials can certainly imagine how - // we're going to implement this. We - // replace the exact inverse matrices in - // $P^{-1}$ by some approximate inverses - // built from the InverseMatrix class, - // and the inverse Schur complement will - // be approximated by the pressure mass - // matrix $M_p$ (weighted by $\eta^{-1}$ - // as mentioned in the introduction). As - // pointed out in the results section of - // step-22, we can replace the exact - // inverse of A by just the - // application of a preconditioner, in - // this case on a vector Laplace matrix - // as was explained in the - // introduction. 
This does increase the - // number of (outer) GMRES iterations, - // but is still significantly cheaper - // than an exact inverse, which would - // require between 20 and 35 CG - // iterations for each outer - // solver step (using the AMG - // preconditioner). + // The deal.II users who have already gone through the step-20 and step-22 + // tutorials can certainly imagine how we're going to implement this. We + // replace the exact inverse matrices in $P^{-1}$ by some approximate + // inverses built from the InverseMatrix class, and the inverse Schur + // complement will be approximated by the pressure mass matrix $M_p$ + // (weighted by $\eta^{-1}$ as mentioned in the introduction). As pointed + // out in the results section of step-22, we can replace the exact inverse + // of A by just the application of a preconditioner, in this case + // on a vector Laplace matrix as was explained in the introduction. This + // does increase the number of (outer) GMRES iterations, but is still + // significantly cheaper than an exact inverse, which would require + // between 20 and 35 CG iterations for each outer solver step + // (using the AMG preconditioner). // - // Having the above explanations in mind, - // we define a preconditioner class with - // a vmult functionality, - // which is all we need for the - // interaction with the usual solver - // functions further below in the program - // code. + // Having the above explanations in mind, we define a preconditioner class + // with a vmult functionality, which is all we need for the + // interaction with the usual solver functions further below in the + // program code. // - // First the declarations. These are - // similar to the definition of the Schur - // complement in step-20, with the - // difference that we need some more - // preconditioners in the constructor and - // that the matrices we use here are - // built upon Trilinos: + // First the declarations. These are similar to the definition of the + // Schur complement in step-20, with the difference that we need some more + // preconditioners in the constructor and that the matrices we use here + // are built upon Trilinos: template class BlockSchurPreconditioner : public Subscriptor { @@ -501,30 +396,21 @@ namespace Step31 {} - // Next is the vmult - // function. We implement the action of - // $P^{-1}$ as described above in three - // successive steps. In formulas, we want - // to compute $Y=P^{-1}X$ where $X,Y$ are - // both vectors with two block components. + // Next is the vmult function. We implement the action of + // $P^{-1}$ as described above in three successive steps. In formulas, we + // want to compute $Y=P^{-1}X$ where $X,Y$ are both vectors with two block + // components. // - // The first step multiplies the velocity - // part of the vector by a preconditioner - // of the matrix A, i.e. we compute - // $Y_0={\tilde A}^{-1}X_0$. The resulting - // velocity vector is then multiplied by - // $B$ and subtracted from the pressure, - // i.e. we want to compute $X_1-BY_0$. - // This second step only acts on the - // pressure vector and is accomplished by - // the residual function of our matrix - // classes, except that the sign is - // wrong. 
Consequently, we change the sign - // in the temporary pressure vector and - // finally multiply by the inverse pressure - // mass matrix to get the final pressure - // vector, completing our work on the - // Stokes preconditioner: + // The first step multiplies the velocity part of the vector by a + // preconditioner of the matrix A, i.e. we compute $Y_0={\tilde + // A}^{-1}X_0$. The resulting velocity vector is then multiplied by $B$ + // and subtracted from the pressure, i.e. we want to compute $X_1-BY_0$. + // This second step only acts on the pressure vector and is accomplished + // by the residual function of our matrix classes, except that the sign is + // wrong. Consequently, we change the sign in the temporary pressure + // vector and finally multiply by the inverse pressure mass matrix to get + // the final pressure vector, completing our work on the Stokes + // preconditioner: template void BlockSchurPreconditioner:: @@ -542,43 +428,27 @@ namespace Step31 // @sect3{The BoussinesqFlowProblem class template} - // The definition of the class that defines - // the top-level logic of solving the - // time-dependent Boussinesq problem is - // mainly based on the step-22 tutorial - // program. The main differences are that now - // we also have to solve for the temperature - // equation, which forces us to have a second - // DoFHandler object for the temperature - // variable as well as matrices, right hand - // sides, and solution vectors for the - // current and previous time steps. As - // mentioned in the introduction, all linear - // algebra objects are going to use wrappers - // of the corresponding Trilinos - // functionality. + // The definition of the class that defines the top-level logic of solving + // the time-dependent Boussinesq problem is mainly based on the step-22 + // tutorial program. The main differences are that now we also have to solve + // for the temperature equation, which forces us to have a second DoFHandler + // object for the temperature variable as well as matrices, right hand + // sides, and solution vectors for the current and previous time steps. As + // mentioned in the introduction, all linear algebra objects are going to + // use wrappers of the corresponding Trilinos functionality. // - // The member functions of this class are - // reminiscent of step-21, where we also used - // a staggered scheme that first solve the - // flow equations (here the Stokes equations, - // in step-21 Darcy flow) and then update - // the advected quantity (here the - // temperature, there the saturation). The - // functions that are new are mainly - // concerned with determining the time step, - // as well as the proper size of the - // artificial viscosity stabilization. + // The member functions of this class are reminiscent of step-21, where we + // also used a staggered scheme that first solve the flow equations (here + // the Stokes equations, in step-21 Darcy flow) and then update the advected + // quantity (here the temperature, there the saturation). The functions that + // are new are mainly concerned with determining the time step, as well as + // the proper size of the artificial viscosity stabilization. // - // The last three variables indicate whether - // the various matrices or preconditioners - // need to be rebuilt the next time the - // corresponding build functions are - // called. This allows us to move the - // corresponding if into the - // respective function and thereby keeping - // our main run() function clean - // and easy to read. 
+ // The last three variables indicate whether the various matrices or + // preconditioners need to be rebuilt the next time the corresponding build + // functions are called. This allows us to move the corresponding + // if into the respective function and thereby keeping our main + // run() function clean and easy to read. template class BoussinesqFlowProblem { @@ -663,23 +533,16 @@ namespace Step31 // @sect4{BoussinesqFlowProblem::BoussinesqFlowProblem} // - // The constructor of this class is an - // extension of the constructor in - // step-22. We need to add the various - // variables that concern the temperature. As - // discussed in the introduction, we are - // going to use $Q_2\times Q_1$ (Taylor-Hood) - // elements again for the Stokes part, and - // $Q_2$ elements for the - // temperature. However, by using variables - // that store the polynomial degree of the - // Stokes and temperature finite elements, it - // is easy to consistently modify the degree - // of the elements as well as all quadrature - // formulas used on them - // downstream. Moreover, we initialize the - // time stepping as well as the options for - // matrix assembly and preconditioning: + // The constructor of this class is an extension of the constructor in + // step-22. We need to add the various variables that concern the + // temperature. As discussed in the introduction, we are going to use + // $Q_2\times Q_1$ (Taylor-Hood) elements again for the Stokes part, and + // $Q_2$ elements for the temperature. However, by using variables that + // store the polynomial degree of the Stokes and temperature finite + // elements, it is easy to consistently modify the degree of the elements as + // well as all quadrature formulas used on them downstream. Moreover, we + // initialize the time stepping as well as the options for matrix assembly + // and preconditioning: template BoussinesqFlowProblem::BoussinesqFlowProblem () : @@ -706,77 +569,49 @@ namespace Step31 // @sect4{BoussinesqFlowProblem::get_maximal_velocity} - // Starting the real functionality of this - // class is a helper function that determines - // the maximum ($L_\infty$) velocity in the - // domain (at the quadrature points, in - // fact). How it works should be relatively - // obvious to all who have gotten to this - // point of the tutorial. Note that since we - // are only interested in the velocity, - // rather than using - // stokes_fe_values.get_function_values - // to get the values of the entire Stokes - // solution (velocities and pressures) we use - // stokes_fe_values[velocities].get_function_values - // to extract only the velocities part. This - // has the additional benefit that we get it - // as a Tensor<1,dim>, rather than some - // components in a Vector, allowing - // us to process it right away using the - // norm() function to get the - // magnitude of the velocity. + // Starting the real functionality of this class is a helper function that + // determines the maximum ($L_\infty$) velocity in the domain (at the + // quadrature points, in fact). How it works should be relatively obvious to + // all who have gotten to this point of the tutorial. Note that since we are + // only interested in the velocity, rather than using + // stokes_fe_values.get_function_values to get the values of + // the entire Stokes solution (velocities and pressures) we use + // stokes_fe_values[velocities].get_function_values to extract + // only the velocities part. 
This has the additional benefit that we get it + // as a Tensor<1,dim>, rather than some components in a Vector, + // allowing us to process it right away using the norm() + // function to get the magnitude of the velocity. // - // The only point worth thinking about a bit - // is how to choose the quadrature points we - // use here. Since the goal of this function - // is to find the maximal velocity over a - // domain by looking at quadrature points on - // each cell. So we should ask how we should - // best choose these quadrature points on - // each cell. To this end, recall that if we - // had a single $Q_1$ field (rather than the - // vector-valued field of higher order) then - // the maximum would be attained at a vertex - // of the mesh. In other words, we should use - // the QTrapez class that has quadrature - // points only at the vertices of cells. + // The only point worth thinking about a bit is how to choose the quadrature + // points we use here. Since the goal of this function is to find the + // maximal velocity over a domain by looking at quadrature points on each + // cell. So we should ask how we should best choose these quadrature points + // on each cell. To this end, recall that if we had a single $Q_1$ field + // (rather than the vector-valued field of higher order) then the maximum + // would be attained at a vertex of the mesh. In other words, we should use + // the QTrapez class that has quadrature points only at the vertices of + // cells. // - // For higher order shape functions, the - // situation is more complicated: the maxima - // and minima may be attained at points - // between the support points of shape - // functions (for the usual $Q_p$ elements - // the support points are the equidistant - // Lagrange interpolation points); - // furthermore, since we are looking for the - // maximum magnitude of a vector-valued - // quantity, we can even less say with - // certainty where the set of potential - // maximal points are. Nevertheless, - // intuitively if not provably, the Lagrange - // interpolation points appear to be a better - // choice than the Gauss points. + // For higher order shape functions, the situation is more complicated: the + // maxima and minima may be attained at points between the support points of + // shape functions (for the usual $Q_p$ elements the support points are the + // equidistant Lagrange interpolation points); furthermore, since we are + // looking for the maximum magnitude of a vector-valued quantity, we can + // even less say with certainty where the set of potential maximal points + // are. Nevertheless, intuitively if not provably, the Lagrange + // interpolation points appear to be a better choice than the Gauss points. // - // There are now different methods to produce - // a quadrature formula with quadrature - // points equal to the interpolation points - // of the finite element. One option would be - // to use the - // FiniteElement::get_unit_support_points() - // function, reduce the output to a unique - // set of points to avoid duplicate function - // evaluations, and create a Quadrature - // object using these points. Another option, - // chosen here, is to use the QTrapez class - // and combine it with the QIterated class - // that repeats the QTrapez formula on a - // number of sub-cells in each coordinate - // direction. 
To cover all support points, we - // need to iterate it - // stokes_degree+1 times since - // this is the polynomial degree of the - // Stokes element in use: + // There are now different methods to produce a quadrature formula with + // quadrature points equal to the interpolation points of the finite + // element. One option would be to use the + // FiniteElement::get_unit_support_points() function, reduce the output to a + // unique set of points to avoid duplicate function evaluations, and create + // a Quadrature object using these points. Another option, chosen here, is + // to use the QTrapez class and combine it with the QIterated class that + // repeats the QTrapez formula on a number of sub-cells in each coordinate + // direction. To cover all support points, we need to iterate it + // stokes_degree+1 times since this is the polynomial degree of + // the Stokes element in use: template double BoussinesqFlowProblem::get_maximal_velocity () const { @@ -811,44 +646,29 @@ namespace Step31 // @sect4{BoussinesqFlowProblem::get_extrapolated_temperature_range} - // Next a function that determines the - // minimum and maximum temperature at - // quadrature points inside $\Omega$ when - // extrapolated from the two previous time - // steps to the current one. We need this - // information in the computation of the - // artificial viscosity parameter $\nu$ as - // discussed in the introduction. + // Next a function that determines the minimum and maximum temperature at + // quadrature points inside $\Omega$ when extrapolated from the two previous + // time steps to the current one. We need this information in the + // computation of the artificial viscosity parameter $\nu$ as discussed in + // the introduction. // - // The formula for the extrapolated - // temperature is - // $\left(1+\frac{k_n}{k_{n-1}} - // \right)T^{n-1} + \frac{k_n}{k_{n-1}} - // T^{n-2}$. The way to compute it is to loop - // over all quadrature points and update the - // maximum and minimum value if the current - // value is bigger/smaller than the previous - // one. We initialize the variables that - // store the max and min before the loop over - // all quadrature points by the smallest and - // the largest number representable as a - // double. Then we know for a fact that it is - // larger/smaller than the minimum/maximum - // and that the loop over all quadrature - // points is ultimately going to update the + // The formula for the extrapolated temperature is + // $\left(1+\frac{k_n}{k_{n-1}} \right)T^{n-1} + \frac{k_n}{k_{n-1}} + // T^{n-2}$. The way to compute it is to loop over all quadrature points and + // update the maximum and minimum value if the current value is + // bigger/smaller than the previous one. We initialize the variables that + // store the max and min before the loop over all quadrature points by the + // smallest and the largest number representable as a double. Then we know + // for a fact that it is larger/smaller than the minimum/maximum and that + // the loop over all quadrature points is ultimately going to update the // initial value with the correct one. // - // The only other complication worth - // mentioning here is that in the first time - // step, $T^{k-2}$ is not yet available of - // course. In that case, we can only use - // $T^{k-1}$ which we have from the initial - // temperature. 
As quadrature points, we use - // the same choice as in the previous - // function though with the difference that - // now the number of repetitions is - // determined by the polynomial degree of the - // temperature field. + // The only other complication worth mentioning here is that in the first + // time step, $T^{k-2}$ is not yet available of course. In that case, we can + // only use $T^{k-1}$ which we have from the initial temperature. As + // quadrature points, we use the same choice as in the previous function + // though with the difference that now the number of repetitions is + // determined by the polynomial degree of the temperature field. template std::pair BoussinesqFlowProblem::get_extrapolated_temperature_range () const @@ -922,41 +742,26 @@ namespace Step31 // @sect4{BoussinesqFlowProblem::compute_viscosity} - // The last of the tool functions computes - // the artificial viscosity parameter - // $\nu|_K$ on a cell $K$ as a function of - // the extrapolated temperature, its - // gradient and Hessian (second - // derivatives), the velocity, the right - // hand side $\gamma$ all on the quadrature - // points of the current cell, and various - // other parameters as described in detail - // in the introduction. + // The last of the tool functions computes the artificial viscosity + // parameter $\nu|_K$ on a cell $K$ as a function of the extrapolated + // temperature, its gradient and Hessian (second derivatives), the velocity, + // the right hand side $\gamma$ all on the quadrature points of the current + // cell, and various other parameters as described in detail in the + // introduction. // - // There are some universal constants worth - // mentioning here. First, we need to fix - // $\beta$; we choose $\beta=0.015\cdot - // dim$, a choice discussed in detail in - // the results section of this tutorial - // program. The second is the exponent - // $\alpha$; $\alpha=1$ appears to work - // fine for the current program, even - // though some additional benefit might be - // expected from chosing $\alpha = - // 2$. Finally, there is one thing that - // requires special casing: In the first - // time step, the velocity equals zero, and - // the formula for $\nu|_K$ is not - // defined. In that case, we return - // $\nu|_K=5\cdot 10^3 \cdot h_K$, a choice - // admittedly more motivated by heuristics - // than anything else (it is in the same - // order of magnitude, however, as the - // value returned for most cells on the - // second time step). + // There are some universal constants worth mentioning here. First, we need + // to fix $\beta$; we choose $\beta=0.015\cdot dim$, a choice discussed in + // detail in the results section of this tutorial program. The second is the + // exponent $\alpha$; $\alpha=1$ appears to work fine for the current + // program, even though some additional benefit might be expected from + // chosing $\alpha = 2$. Finally, there is one thing that requires special + // casing: In the first time step, the velocity equals zero, and the formula + // for $\nu|_K$ is not defined. In that case, we return $\nu|_K=5\cdot 10^3 + // \cdot h_K$, a choice admittedly more motivated by heuristics than + // anything else (it is in the same order of magnitude, however, as the + // value returned for most cells on the second time step). 
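The constants and the special case just mentioned can be condensed into a small, purely hypothetical helper; the residual-based expression from the introduction is deliberately not reproduced here and is represented only by a placeholder return value.

template <int dim>
double viscosity_guard_sketch (const double max_velocity,    // global velocity bound
                               const double cell_diameter)   // h_K
{
  const double beta  = 0.015 * dim;  // stabilization constant discussed above
  const double alpha = 1.;           // exponent discussed above
  (void) beta;
  (void) alpha;

  // First time step: the velocity is identically zero and the formula for
  // nu|_K is undefined, so fall back to the heuristic value from the text.
  if (max_velocity == 0.)
    return 5e3 * cell_diameter;

  // Otherwise the full formula from the introduction would be evaluated
  // using beta and alpha; returning 0 here is only a placeholder.
  return 0.;
}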
// - // The rest of the function should be - // mostly obvious based on the material + // The rest of the function should be mostly obvious based on the material // discussed in the introduction: template double @@ -1023,56 +828,35 @@ namespace Step31 // @sect4{BoussinesqFlowProblem::setup_dofs} // - // This is the function that sets up the - // DoFHandler objects we have here (one for - // the Stokes part and one for the - // temperature part) as well as set to the - // right sizes the various objects required - // for the linear algebra in this - // program. Its basic operations are similar - // to what we do in step-22. + // This is the function that sets up the DoFHandler objects we have here + // (one for the Stokes part and one for the temperature part) as well as set + // to the right sizes the various objects required for the linear algebra in + // this program. Its basic operations are similar to what we do in step-22. // - // The body of the function first - // enumerates all degrees of freedom for - // the Stokes and temperature systems. For - // the Stokes part, degrees of freedom are - // then sorted to ensure that velocities - // precede pressure DoFs so that we can - // partition the Stokes matrix into a - // $2\times 2$ matrix. As a difference to - // step-22, we do not perform any - // additional DoF renumbering. In that - // program, it paid off since our solver - // was heavily dependent on ILU's, whereas - // we use AMG here which is not sensitive - // to the DoF numbering. The IC - // preconditioner for the inversion of the - // pressure mass matrix would of course - // take advantage of a Cuthill-McKee like - // renumbering, but its costs are low - // compared to the velocity portion, so the - // additional work does not pay off. + // The body of the function first enumerates all degrees of freedom for the + // Stokes and temperature systems. For the Stokes part, degrees of freedom + // are then sorted to ensure that velocities precede pressure DoFs so that + // we can partition the Stokes matrix into a $2\times 2$ matrix. As a + // difference to step-22, we do not perform any additional DoF + // renumbering. In that program, it paid off since our solver was heavily + // dependent on ILU's, whereas we use AMG here which is not sensitive to the + // DoF numbering. The IC preconditioner for the inversion of the pressure + // mass matrix would of course take advantage of a Cuthill-McKee like + // renumbering, but its costs are low compared to the velocity portion, so + // the additional work does not pay off. // - // We then proceed with the generation of the - // hanging node constraints that arise from - // adaptive grid refinement for both - // DoFHandler objects. For the velocity, we - // impose no-flux boundary conditions - // $\mathbf{u}\cdot \mathbf{n}=0$ by adding - // constraints to the object that already - // stores the hanging node constraints - // matrix. The second parameter in the - // function describes the first of the - // velocity components in the total dof - // vector, which is zero here. The variable - // no_normal_flux_boundaries - // denotes the boundary indicators for which - // to set the no flux boundary conditions; - // here, this is boundary indicator zero. + // We then proceed with the generation of the hanging node constraints that + // arise from adaptive grid refinement for both DoFHandler objects. 
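A sketch of the enumeration and renumbering step described above; n_u, n_p and n_T denote the block sizes used further down, and the exact signatures may differ slightly between library versions:

    std::vector<unsigned int> stokes_sub_blocks (dim+1, 0);
    stokes_sub_blocks[dim] = 1;                 // pressure goes into block 1

    stokes_dof_handler.distribute_dofs (stokes_fe);
    DoFRenumbering::component_wise (stokes_dof_handler, stokes_sub_blocks);

    temperature_dof_handler.distribute_dofs (temperature_fe);

    std::vector<unsigned int> stokes_dofs_per_block (2);
    DoFTools::count_dofs_per_block (stokes_dof_handler, stokes_dofs_per_block,
                                    stokes_sub_blocks);
    const unsigned int n_u = stokes_dofs_per_block[0],
                       n_p = stokes_dofs_per_block[1],
                       n_T = temperature_dof_handler.n_dofs();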
For the + // velocity, we impose no-flux boundary conditions $\mathbf{u}\cdot + // \mathbf{n}=0$ by adding constraints to the object that already stores the + // hanging node constraints matrix. The second parameter in the function + // describes the first of the velocity components in the total dof vector, + // which is zero here. The variable no_normal_flux_boundaries + // denotes the boundary indicators for which to set the no flux boundary + // conditions; here, this is boundary indicator zero. // - // After having done so, we count the number - // of degrees of freedom in the various - // blocks: + // After having done so, we count the number of degrees of freedom in the + // various blocks: template void BoussinesqFlowProblem::setup_dofs () { @@ -1122,78 +906,45 @@ namespace Step31 << std::endl << std::endl; - // The next step is to create the sparsity - // pattern for the Stokes and temperature - // system matrices as well as the - // preconditioner matrix from which we - // build the Stokes preconditioner. As in - // step-22, we choose to create the pattern - // not as in the first few tutorial - // programs, but by using the blocked - // version of CompressedSimpleSparsityPattern. - // The reason for doing this is mainly - // memory, that is, the SparsityPattern - // class would consume too much memory when - // used in three spatial dimensions as we - // intend to do for this program. + // The next step is to create the sparsity pattern for the Stokes and + // temperature system matrices as well as the preconditioner matrix from + // which we build the Stokes preconditioner. As in step-22, we choose to + // create the pattern not as in the first few tutorial programs, but by + // using the blocked version of CompressedSimpleSparsityPattern. The + // reason for doing this is mainly memory, that is, the SparsityPattern + // class would consume too much memory when used in three spatial + // dimensions as we intend to do for this program. // - // So, we first release the memory stored - // in the matrices, then set up an object - // of type - // BlockCompressedSimpleSparsityPattern - // consisting of $2\times 2$ blocks (for - // the Stokes system matrix and - // preconditioner) or - // CompressedSimpleSparsityPattern (for - // the temperature part). We then fill - // these objects with the nonzero - // pattern, taking into account that for - // the Stokes system matrix, there are no - // entries in the pressure-pressure block - // (but all velocity vector components - // couple with each other and with the - // pressure). Similarly, in the Stokes - // preconditioner matrix, only the - // diagonal blocks are nonzero, since we - // use the vector Laplacian as discussed - // in the introduction. This operator - // only couples each vector component of - // the Laplacian with itself, but not - // with the other vector - // components. (Application of the - // constraints resulting from the no-flux - // boundary conditions will couple vector - // components at the boundary again, - // however.) + // So, we first release the memory stored in the matrices, then set up an + // object of type BlockCompressedSimpleSparsityPattern consisting of + // $2\times 2$ blocks (for the Stokes system matrix and preconditioner) or + // CompressedSimpleSparsityPattern (for the temperature part). 
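The constraint setup just described might look as follows; this is a sketch, and in particular the type used for the set of boundary indicators has changed between library versions:

    stokes_constraints.clear ();
    DoFTools::make_hanging_node_constraints (stokes_dof_handler,
                                             stokes_constraints);

    std::set<unsigned char> no_normal_flux_boundaries;
    no_normal_flux_boundaries.insert (0);       // boundary indicator zero
    VectorTools::compute_no_normal_flux_constraints (stokes_dof_handler, 0,
                                                     no_normal_flux_boundaries,
                                                     stokes_constraints);
    stokes_constraints.close ();

    temperature_constraints.clear ();
    DoFTools::make_hanging_node_constraints (temperature_dof_handler,
                                             temperature_constraints);
    temperature_constraints.close ();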
We then + // fill these objects with the nonzero pattern, taking into account that + // for the Stokes system matrix, there are no entries in the + // pressure-pressure block (but all velocity vector components couple with + // each other and with the pressure). Similarly, in the Stokes + // preconditioner matrix, only the diagonal blocks are nonzero, since we + // use the vector Laplacian as discussed in the introduction. This + // operator only couples each vector component of the Laplacian with + // itself, but not with the other vector components. (Application of the + // constraints resulting from the no-flux boundary conditions will couple + // vector components at the boundary again, however.) // - // When generating the sparsity pattern, - // we directly apply the constraints from - // hanging nodes and no-flux boundary - // conditions. This approach was already - // used in step-27, but is different from - // the one in early tutorial programs - // where we first built the original - // sparsity pattern and only then added - // the entries resulting from - // constraints. The reason for doing so - // is that later during assembly we are - // going to distribute the constraints - // immediately when transferring local to - // global dofs. Consequently, there will - // be no data written at positions of - // constrained degrees of freedom, so we - // can let the - // DoFTools::make_sparsity_pattern - // function omit these entries by setting - // the last boolean flag to - // false. Once the sparsity - // pattern is ready, we can use it to - // initialize the Trilinos - // matrices. Since the Trilinos matrices - // store the sparsity pattern internally, - // there is no need to keep the sparsity - // pattern around after the - // initialization of the matrix. + // When generating the sparsity pattern, we directly apply the constraints + // from hanging nodes and no-flux boundary conditions. This approach was + // already used in step-27, but is different from the one in early + // tutorial programs where we first built the original sparsity pattern + // and only then added the entries resulting from constraints. The reason + // for doing so is that later during assembly we are going to distribute + // the constraints immediately when transferring local to global + // dofs. Consequently, there will be no data written at positions of + // constrained degrees of freedom, so we can let the + // DoFTools::make_sparsity_pattern function omit these entries by setting + // the last boolean flag to false. Once the sparsity pattern + // is ready, we can use it to initialize the Trilinos matrices. Since the + // Trilinos matrices store the sparsity pattern internally, there is no + // need to keep the sparsity pattern around after the initialization of + // the matrix. stokes_block_sizes.resize (2); stokes_block_sizes[0] = n_u; stokes_block_sizes[1] = n_p; @@ -1252,27 +1003,17 @@ namespace Step31 stokes_preconditioner_matrix.reinit (csp); } - // The creation of the temperature matrix - // (or, rather, matrices, since we - // provide a temperature mass matrix and - // a temperature stiffness matrix, that - // will be added together for time - // discretization) follows the generation - // of the Stokes matrix – except - // that it is much easier here since we - // do not need to take care of any blocks - // or coupling between components. 
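For the Stokes system matrix, the blocked sparsity pattern and the coupling information described above could be set up roughly like this (a sketch using the block sizes n_u and n_p and the constraints object from before):

    BlockCompressedSimpleSparsityPattern csp (2,2);
    csp.block(0,0).reinit (n_u, n_u);
    csp.block(0,1).reinit (n_u, n_p);
    csp.block(1,0).reinit (n_p, n_u);
    csp.block(1,1).reinit (n_p, n_p);
    csp.collect_sizes ();

    Table<2,DoFTools::Coupling> coupling (dim+1, dim+1);
    for (unsigned int c=0; c<dim+1; ++c)
      for (unsigned int d=0; d<dim+1; ++d)
        coupling[c][d] = ((c==dim) && (d==dim)   // no pressure-pressure entries
                          ? DoFTools::none
                          : DoFTools::always);

    DoFTools::make_sparsity_pattern (stokes_dof_handler, coupling, csp,
                                     stokes_constraints, false);
    stokes_matrix.reinit (csp);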
Note - // how we initialize the three - // temperature matrices: We only use the - // sparsity pattern for reinitialization - // of the first matrix, whereas we use - // the previously generated matrix for - // the two remaining reinits. The reason - // for doing so is that reinitialization - // from an already generated matrix - // allows Trilinos to reuse the sparsity - // pattern instead of generating a new - // one for each copy. This saves both + // The creation of the temperature matrix (or, rather, matrices, since we + // provide a temperature mass matrix and a temperature stiffness matrix, + // that will be added together for time discretization) follows the + // generation of the Stokes matrix – except that it is much easier + // here since we do not need to take care of any blocks or coupling + // between components. Note how we initialize the three temperature + // matrices: We only use the sparsity pattern for reinitialization of the + // first matrix, whereas we use the previously generated matrix for the + // two remaining reinits. The reason for doing so is that reinitialization + // from an already generated matrix allows Trilinos to reuse the sparsity + // pattern instead of generating a new one for each copy. This saves both // some time and memory. { temperature_mass_matrix.clear (); @@ -1288,14 +1029,10 @@ namespace Step31 temperature_stiffness_matrix.reinit (temperature_matrix); } - // Lastly, we set the vectors for the - // Stokes solutions $\mathbf u^{n-1}$ and - // $\mathbf u^{n-2}$, as well as for the - // temperatures $T^{n}$, $T^{n-1}$ and - // $T^{n-2}$ (required for time stepping) - // and all the system right hand sides to - // their correct sizes and block - // structure: + // Lastly, we set the vectors for the Stokes solutions $\mathbf u^{n-1}$ + // and $\mathbf u^{n-2}$, as well as for the temperatures $T^{n}$, + // $T^{n-1}$ and $T^{n-2}$ (required for time stepping) and all the system + // right hand sides to their correct sizes and block structure: stokes_solution.reinit (stokes_block_sizes); old_stokes_solution.reinit (stokes_block_sizes); stokes_rhs.reinit (stokes_block_sizes); @@ -1311,27 +1048,18 @@ namespace Step31 // @sect4{BoussinesqFlowProblem::assemble_stokes_preconditioner} // - // This function assembles the matrix we use - // for preconditioning the Stokes - // system. What we need are a vector Laplace - // matrix on the velocity components and a - // mass matrix weighted by $\eta^{-1}$ on the - // pressure component. We start by generating - // a quadrature object of appropriate order, - // the FEValues object that can give values - // and gradients at the quadrature points - // (together with quadrature weights). Next - // we create data structures for the cell - // matrix and the relation between local and - // global DoFs. The vectors - // grad_phi_u and - // phi_p are going to hold the - // values of the basis functions in order to - // faster build up the local matrices, as was - // already done in step-22. Before we start - // the loop over all active cells, we have to - // specify which components are pressure and - // which are velocity. + // This function assembles the matrix we use for preconditioning the Stokes + // system. What we need are a vector Laplace matrix on the velocity + // components and a mass matrix weighted by $\eta^{-1}$ on the pressure + // component. 
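The sparsity reuse for the three temperature matrices mentioned above then boils down to a few lines (a sketch; n_T is the number of temperature degrees of freedom):

    CompressedSimpleSparsityPattern csp (n_T, n_T);
    DoFTools::make_sparsity_pattern (temperature_dof_handler, csp,
                                     temperature_constraints, false);

    temperature_matrix.reinit (csp);                          // from the pattern
    temperature_mass_matrix.reinit (temperature_matrix);      // reuses its sparsity
    temperature_stiffness_matrix.reinit (temperature_matrix); // reuses its sparsity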
We start by generating a quadrature object of appropriate + // order, the FEValues object that can give values and gradients at the + // quadrature points (together with quadrature weights). Next we create data + // structures for the cell matrix and the relation between local and global + // DoFs. The vectors grad_phi_u and phi_p are + // going to hold the values of the basis functions in order to faster build + // up the local matrices, as was already done in step-22. Before we start + // the loop over all active cells, we have to specify which components are + // pressure and which are velocity. template void BoussinesqFlowProblem::assemble_stokes_preconditioner () @@ -1364,26 +1092,17 @@ namespace Step31 stokes_fe_values.reinit (cell); local_matrix = 0; - // The creation of the local matrix is - // rather simple. There are only a - // Laplace term (on the velocity) and a - // mass matrix weighted by $\eta^{-1}$ - // to be generated, so the creation of - // the local matrix is done in two - // lines. Once the local matrix is - // ready (loop over rows and columns in - // the local matrix on each quadrature - // point), we get the local DoF indices - // and write the local information into - // the global matrix. We do this as in - // step-27, i.e. we directly apply the - // constraints from hanging nodes - // locally. By doing so, we don't have - // to do that afterwards, and we don't - // also write into entries of the - // matrix that will actually be set to - // zero again later when eliminating - // constraints. + // The creation of the local matrix is rather simple. There are only a + // Laplace term (on the velocity) and a mass matrix weighted by + // $\eta^{-1}$ to be generated, so the creation of the local matrix is + // done in two lines. Once the local matrix is ready (loop over rows + // and columns in the local matrix on each quadrature point), we get + // the local DoF indices and write the local information into the + // global matrix. We do this as in step-27, i.e. we directly apply the + // constraints from hanging nodes locally. By doing so, we don't have + // to do that afterwards, and we don't also write into entries of the + // matrix that will actually be set to zero again later when + // eliminating constraints. for (unsigned int q=0; qrebuild_stokes_preconditioner - // has the value - // false). Otherwise its first - // task is to call - // assemble_stokes_preconditioner - // to generate the preconditioner matrices. + // This function generates the inner preconditioners that are going to be + // used for the Schur complement block preconditioner. Since the + // preconditioners need only to be regenerated when the matrices change, + // this function does not have to do anything in case the matrices have not + // changed (i.e., the flag rebuild_stokes_preconditioner has + // the value false). Otherwise its first task is to call + // assemble_stokes_preconditioner to generate the + // preconditioner matrices. // - // Next, we set up the preconditioner for - // the velocity-velocity matrix - // A. As explained in the - // introduction, we are going to use an - // AMG preconditioner based on a vector - // Laplace matrix $\hat{A}$ (which is - // spectrally close to the Stokes matrix - // A). Usually, the - // TrilinosWrappers::PreconditionAMG - // class can be seen as a good black-box - // preconditioner which does not need any - // special knowledge. 
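The two lines that build the local preconditioner matrix might look like the following sketch; EquationData::eta is assumed to be the viscosity constant declared earlier in the program, and grad_phi_u and phi_p are the scratch arrays mentioned above:

    for (unsigned int q=0; q<n_q_points; ++q)
      {
        for (unsigned int k=0; k<dofs_per_cell; ++k)
          {
            grad_phi_u[k] = stokes_fe_values[velocities].gradient (k,q);
            phi_p[k]      = stokes_fe_values[pressure].value (k,q);
          }

        for (unsigned int i=0; i<dofs_per_cell; ++i)
          for (unsigned int j=0; j<dofs_per_cell; ++j)
            local_matrix(i,j) += (EquationData::eta *
                                  scalar_product (grad_phi_u[i], grad_phi_u[j])
                                  +
                                  (1./EquationData::eta) *
                                  phi_p[i] * phi_p[j])
                                 * stokes_fe_values.JxW(q);
      }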
In this case, - // however, we have to be careful: since - // we build an AMG for a vector problem, - // we have to tell the preconditioner - // setup which dofs belong to which - // vector component. We do this using the - // function - // DoFTools::extract_constant_modes, a - // function that generates a set of - // dim vectors, where each one - // has ones in the respective component - // of the vector problem and zeros - // elsewhere. Hence, these are the - // constant modes on each component, - // which explains the name of the + // Next, we set up the preconditioner for the velocity-velocity matrix + // A. As explained in the introduction, we are going to use an AMG + // preconditioner based on a vector Laplace matrix $\hat{A}$ (which is + // spectrally close to the Stokes matrix A). Usually, the + // TrilinosWrappers::PreconditionAMG class can be seen as a good black-box + // preconditioner which does not need any special knowledge. In this case, + // however, we have to be careful: since we build an AMG for a vector + // problem, we have to tell the preconditioner setup which dofs belong to + // which vector component. We do this using the function + // DoFTools::extract_constant_modes, a function that generates a set of + // dim vectors, where each one has ones in the respective + // component of the vector problem and zeros elsewhere. Hence, these are the + // constant modes on each component, which explains the name of the // variable. template void @@ -1477,60 +1177,35 @@ namespace Step31 TrilinosWrappers::PreconditionAMG::AdditionalData amg_data; amg_data.constant_modes = constant_modes; - // Next, we set some more options of the - // AMG preconditioner. In particular, we - // need to tell the AMG setup that we use - // quadratic basis functions for the - // velocity matrix (this implies more - // nonzero elements in the matrix, so - // that a more rubust algorithm needs to - // be chosen internally). Moreover, we - // want to be able to control how the - // coarsening structure is build up. The - // way the Trilinos smoothed aggregation - // AMG does this is to look which matrix - // entries are of similar size as the - // diagonal entry in order to - // algebraically build a coarse-grid - // structure. By setting the parameter - // aggregation_threshold to - // 0.02, we specify that all entries that - // are more than two precent of size of - // some diagonal pivots in that row - // should form one coarse grid - // point. This parameter is rather - // ad-hoc, and some fine-tuning of it can - // influence the performance of the - // preconditioner. As a rule of thumb, - // larger values of - // aggregation_threshold - // will decrease the number of - // iterations, but increase the costs per - // iteration. A look at the Trilinos - // documentation will provide more - // information on these parameters. With - // this data set, we then initialize the - // preconditioner with the matrix we want - // it to apply to. + // Next, we set some more options of the AMG preconditioner. In + // particular, we need to tell the AMG setup that we use quadratic basis + // functions for the velocity matrix (this implies more nonzero elements + // in the matrix, so that a more rubust algorithm needs to be chosen + // internally). Moreover, we want to be able to control how the coarsening + // structure is build up. 
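The extraction of the constant modes described here could look like this sketch (the pressure component is masked out since the AMG only acts on the velocity-velocity block; the exact signature of this function has varied between library versions):

    std::vector<std::vector<bool> > constant_modes;
    std::vector<bool> velocity_components (dim+1, true);
    velocity_components[dim] = false;            // exclude the pressure
    DoFTools::extract_constant_modes (stokes_dof_handler, velocity_components,
                                      constant_modes);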
The way the Trilinos smoothed aggregation AMG + // does this is to look which matrix entries are of similar size as the + // diagonal entry in order to algebraically build a coarse-grid + // structure. By setting the parameter aggregation_threshold + // to 0.02, we specify that all entries that are more than two precent of + // size of some diagonal pivots in that row should form one coarse grid + // point. This parameter is rather ad-hoc, and some fine-tuning of it can + // influence the performance of the preconditioner. As a rule of thumb, + // larger values of aggregation_threshold will decrease the + // number of iterations, but increase the costs per iteration. A look at + // the Trilinos documentation will provide more information on these + // parameters. With this data set, we then initialize the preconditioner + // with the matrix we want it to apply to. // - // Finally, we also initialize the - // preconditioner for the inversion of - // the pressure mass matrix. This matrix - // is symmetric and well-behaved, so we - // can chose a simple preconditioner. We - // stick with an incomple Cholesky (IC) - // factorization preconditioner, which is - // designed for symmetric matrices. We - // could have also chosen an SSOR - // preconditioner with relaxation factor - // around 1.2, but IC is cheaper for our - // example. We wrap the preconditioners - // into a std_cxx1x::shared_ptr - // pointer, which makes it easier to - // recreate the preconditioner next time - // around since we do not have to care - // about destroying the previously used - // object. + // Finally, we also initialize the preconditioner for the inversion of the + // pressure mass matrix. This matrix is symmetric and well-behaved, so we + // can chose a simple preconditioner. We stick with an incomple Cholesky + // (IC) factorization preconditioner, which is designed for symmetric + // matrices. We could have also chosen an SSOR preconditioner with + // relaxation factor around 1.2, but IC is cheaper for our example. We + // wrap the preconditioners into a std_cxx1x::shared_ptr + // pointer, which makes it easier to recreate the preconditioner next time + // around since we do not have to care about destroying the previously + // used object. amg_data.elliptic = true; amg_data.higher_order_elements = true; amg_data.smoother_sweeps = 2; @@ -1551,65 +1226,43 @@ namespace Step31 // @sect4{BoussinesqFlowProblem::assemble_stokes_system} // - // The time lag scheme we use for advancing - // the coupled Stokes-temperature system - // forces us to split up the assembly (and - // the solution of linear systems) into two - // step. The first one is to create the - // Stokes system matrix and right hand - // side, and the second is to create matrix - // and right hand sides for the temperature - // dofs, which depends on the result of the + // The time lag scheme we use for advancing the coupled Stokes-temperature + // system forces us to split up the assembly (and the solution of linear + // systems) into two step. The first one is to create the Stokes system + // matrix and right hand side, and the second is to create matrix and right + // hand sides for the temperature dofs, which depends on the result of the // linear system for the velocity. // - // This function is called at the beginning - // of each time step. 
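A sketch of the pressure mass matrix preconditioner setup described above, assuming Mp_preconditioner is the shared pointer member referred to later and that block (1,1) of the preconditioner matrix holds the pressure mass matrix:

    Mp_preconditioner = std_cxx1x::shared_ptr<TrilinosWrappers::PreconditionIC>
                          (new TrilinosWrappers::PreconditionIC());
    Mp_preconditioner->initialize (stokes_preconditioner_matrix.block(1,1));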
In the first time step - // or if the mesh has changed, indicated by - // the rebuild_stokes_matrix, we - // need to assemble the Stokes matrix; on the - // other hand, if the mesh hasn't changed and - // the matrix is already available, this is - // not necessary and all we need to do is - // assemble the right hand side vector which - // changes in each time step. + // This function is called at the beginning of each time step. In the first + // time step or if the mesh has changed, indicated by the + // rebuild_stokes_matrix, we need to assemble the Stokes + // matrix; on the other hand, if the mesh hasn't changed and the matrix is + // already available, this is not necessary and all we need to do is + // assemble the right hand side vector which changes in each time step. // - // Regarding the technical details of - // implementation, not much has changed from - // step-22. We reset matrix and vector, - // create a quadrature formula on the cells, - // and then create the respective FEValues - // object. For the update flags, we require - // basis function derivatives only in case of - // a full assembly, since they are not needed - // for the right hand side; as always, - // choosing the minimal set of flags - // depending on what is currently needed - // makes the call to FEValues::reinit further - // down in the program more efficient. + // Regarding the technical details of implementation, not much has changed + // from step-22. We reset matrix and vector, create a quadrature formula on + // the cells, and then create the respective FEValues object. For the update + // flags, we require basis function derivatives only in case of a full + // assembly, since they are not needed for the right hand side; as always, + // choosing the minimal set of flags depending on what is currently needed + // makes the call to FEValues::reinit further down in the program more + // efficient. // - // There is one thing that needs to be - // commented – since we have a separate - // finite element and DoFHandler for the - // temperature, we need to generate a second - // FEValues object for the proper evaluation - // of the temperature solution. This isn't - // too complicated to realize here: just use - // the temperature structures and set an - // update flag for the basis function values - // which we need for evaluation of the - // temperature solution. The only important - // part to remember here is that the same - // quadrature formula is used for both - // FEValues objects to ensure that we get - // matching information when we loop over the - // quadrature points of the two objects. + // There is one thing that needs to be commented – since we have a + // separate finite element and DoFHandler for the temperature, we need to + // generate a second FEValues object for the proper evaluation of the + // temperature solution. This isn't too complicated to realize here: just + // use the temperature structures and set an update flag for the basis + // function values which we need for evaluation of the temperature + // solution. The only important part to remember here is that the same + // quadrature formula is used for both FEValues objects to ensure that we + // get matching information when we loop over the quadrature points of the + // two objects. // - // The declarations proceed with some - // shortcuts for array sizes, the creation - // of the local matrix and right hand side - // as well as the vector for the indices of - // the local dofs compared to the global - // system. 
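A sketch of the FEValues setup with the minimal set of update flags and the shared quadrature formula described above (the quadrature degree is an assumption):

    const QGauss<dim> quadrature_formula (stokes_degree+2);

    FEValues<dim> stokes_fe_values (stokes_fe, quadrature_formula,
                                    update_values            |
                                    update_quadrature_points |
                                    update_JxW_values        |
                                    (rebuild_stokes_matrix == true
                                     ? update_gradients
                                     : UpdateFlags(0)));

    // Same quadrature formula, so both FEValues objects visit matching points:
    FEValues<dim> temperature_fe_values (temperature_fe, quadrature_formula,
                                         update_values);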
+ // The declarations proceed with some shortcuts for array sizes, the + // creation of the local matrix and right hand side as well as the vector + // for the indices of the local dofs compared to the global system. template void BoussinesqFlowProblem::assemble_stokes_system () { @@ -1642,29 +1295,20 @@ namespace Step31 std::vector local_dof_indices (dofs_per_cell); - // Next we need a vector that will contain - // the values of the temperature solution - // at the previous time level at the - // quadrature points to assemble the source - // term in the right hand side of the - // momentum equation. Let's call this vector - // old_solution_values. + // Next we need a vector that will contain the values of the temperature + // solution at the previous time level at the quadrature points to + // assemble the source term in the right hand side of the momentum + // equation. Let's call this vector old_solution_values. // - // The set of vectors we create next hold - // the evaluations of the basis functions - // as well as their gradients and - // symmetrized gradients that will be used - // for creating the matrices. Putting these - // into their own arrays rather than asking - // the FEValues object for this information - // each time it is needed is an - // optimization to accelerate the assembly + // The set of vectors we create next hold the evaluations of the basis + // functions as well as their gradients and symmetrized gradients that + // will be used for creating the matrices. Putting these into their own + // arrays rather than asking the FEValues object for this information each + // time it is needed is an optimization to accelerate the assembly // process, see step-22 for details. // - // The last two declarations are used to - // extract the individual blocks - // (velocity, pressure, temperature) from - // the total FE system. + // The last two declarations are used to extract the individual blocks + // (velocity, pressure, temperature) from the total FE system. std::vector old_temperature_values(n_q_points); std::vector > phi_u (dofs_per_cell); @@ -1675,25 +1319,16 @@ namespace Step31 const FEValuesExtractors::Vector velocities (0); const FEValuesExtractors::Scalar pressure (dim); - // Now start the loop over all cells in - // the problem. We are working on two - // different DoFHandlers for this - // assembly routine, so we must have two - // different cell iterators for the two - // objects in use. This might seem a bit - // peculiar, since both the Stokes system - // and the temperature system use the - // same grid, but that's the only way to - // keep degrees of freedom in sync. The - // first statements within the loop are - // again all very familiar, doing the - // update of the finite element data as - // specified by the update flags, zeroing - // out the local arrays and getting the - // values of the old solution at the - // quadrature points. Then we are ready to - // loop over the quadrature points on the - // cell. + // Now start the loop over all cells in the problem. We are working on two + // different DoFHandlers for this assembly routine, so we must have two + // different cell iterators for the two objects in use. This might seem a + // bit peculiar, since both the Stokes system and the temperature system + // use the same grid, but that's the only way to keep degrees of freedom + // in sync. 
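The two synchronized iterators mentioned here can be set up and advanced together like this (a sketch):

    typename DoFHandler<dim>::active_cell_iterator
      cell = stokes_dof_handler.begin_active(),
      endc = stokes_dof_handler.end();
    typename DoFHandler<dim>::active_cell_iterator
      temperature_cell = temperature_dof_handler.begin_active();

    for (; cell!=endc; ++cell, ++temperature_cell)
      {
        stokes_fe_values.reinit (cell);
        temperature_fe_values.reinit (temperature_cell);
        // (assembly of the local matrix and right hand side goes here)
      }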
The first statements within the loop are again all very + // familiar, doing the update of the finite element data as specified by + // the update flags, zeroing out the local arrays and getting the values + // of the old solution at the quadrature points. Then we are ready to loop + // over the quadrature points on the cell. typename DoFHandler::active_cell_iterator cell = stokes_dof_handler.begin_active(), endc = stokes_dof_handler.end(); @@ -1715,29 +1350,18 @@ namespace Step31 { const double old_temperature = old_temperature_values[q]; - // Next we extract the values and - // gradients of basis functions - // relevant to the terms in the - // inner products. As shown in - // step-22 this helps accelerate - // assembly. + // Next we extract the values and gradients of basis functions + // relevant to the terms in the inner products. As shown in + // step-22 this helps accelerate assembly. // - // Once this is done, we start the - // loop over the rows and columns - // of the local matrix and feed the - // matrix with the relevant - // products. The right hand side is - // filled with the forcing term - // driven by temperature in - // direction of gravity (which is - // vertical in our example). Note - // that the right hand side term is - // always generated, whereas the - // matrix contributions are only - // updated when it is requested by - // the - // rebuild_matrices - // flag. + // Once this is done, we start the loop over the rows and columns + // of the local matrix and feed the matrix with the relevant + // products. The right hand side is filled with the forcing term + // driven by temperature in direction of gravity (which is + // vertical in our example). Note that the right hand side term + // is always generated, whereas the matrix contributions are only + // updated when it is requested by the + // rebuild_matrices flag. for (unsigned int k=0; klocal_dof_indices. - // Again, we let the ConstraintMatrix - // class do the insertion of the cell - // matrix elements to the global - // matrix, which already condenses the - // hanging node constraints. + // The last step in the loop over all cells is to enter the local + // contributions into the global matrix and vector structures to the + // positions specified in local_dof_indices. Again, we + // let the ConstraintMatrix class do the insertion of the cell matrix + // elements to the global matrix, which already condenses the hanging + // node constraints. cell->get_dof_indices (local_dof_indices); if (rebuild_stokes_matrix == true) @@ -1802,37 +1421,23 @@ namespace Step31 // @sect4{BoussinesqFlowProblem::assemble_temperature_matrix} // - // This function assembles the matrix in - // the temperature equation. The - // temperature matrix consists of two - // parts, a mass matrix and the time step - // size times a stiffness matrix given by - // a Laplace term times the amount of - // diffusion. Since the matrix depends on - // the time step size (which varies from - // one step to another), the temperature - // matrix needs to be updated every time - // step. We could simply regenerate the - // matrices in every time step, but this - // is not really efficient since mass and - // Laplace matrix do only change when we - // change the mesh. Hence, we do this - // more efficiently by generating two - // separate matrices in this function, - // one for the mass matrix and one for - // the stiffness (diffusion) matrix. 
We - // will then sum up the matrix plus the - // stiffness matrix times the time step - // size once we know the actual time step. + // This function assembles the matrix in the temperature equation. The + // temperature matrix consists of two parts, a mass matrix and the time step + // size times a stiffness matrix given by a Laplace term times the amount of + // diffusion. Since the matrix depends on the time step size (which varies + // from one step to another), the temperature matrix needs to be updated + // every time step. We could simply regenerate the matrices in every time + // step, but this is not really efficient since mass and Laplace matrix do + // only change when we change the mesh. Hence, we do this more efficiently + // by generating two separate matrices in this function, one for the mass + // matrix and one for the stiffness (diffusion) matrix. We will then sum up + // the matrix plus the stiffness matrix times the time step size once we + // know the actual time step. // - // So the details for this first step are - // very simple. In case we need to - // rebuild the matrix (i.e., the mesh has - // changed), we zero the data structures, - // get a quadrature formula and a - // FEValues object, and create local - // matrices, local dof indices and - // evaluation structures for the basis + // So the details for this first step are very simple. In case we need to + // rebuild the matrix (i.e., the mesh has changed), we zero the data + // structures, get a quadrature formula and a FEValues object, and create + // local matrices, local dof indices and evaluation structures for the basis // functions. template void BoussinesqFlowProblem::assemble_temperature_matrix () @@ -1859,20 +1464,14 @@ namespace Step31 std::vector phi_T (dofs_per_cell); std::vector > grad_phi_T (dofs_per_cell); - // Now, let's start the loop over all cells - // in the triangulation. We need to zero - // out the local matrices, update the - // finite element evaluations, and then - // loop over the rows and columns of the - // matrices on each quadrature point, where - // we then create the mass matrix and the - // stiffness matrix (Laplace terms times - // the diffusion - // EquationData::kappa. Finally, - // we let the constraints object insert - // these values into the global matrix, and - // directly condense the constraints into - // the matrix. + // Now, let's start the loop over all cells in the triangulation. We need + // to zero out the local matrices, update the finite element evaluations, + // and then loop over the rows and columns of the matrices on each + // quadrature point, where we then create the mass matrix and the + // stiffness matrix (Laplace terms times the diffusion + // EquationData::kappa. Finally, we let the constraints + // object insert these values into the global matrix, and directly + // condense the constraints into the matrix. typename DoFHandler::active_cell_iterator cell = temperature_dof_handler.begin_active(), endc = temperature_dof_handler.end(); @@ -1922,32 +1521,20 @@ namespace Step31 // @sect4{BoussinesqFlowProblem::assemble_temperature_system} // - // This function does the second part of - // the assembly work on the temperature - // matrix, the actual addition of - // pressure mass and stiffness matrix - // (where the time step size comes into - // play), as well as the creation of the - // velocity-dependent right hand - // side. 
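The innermost part of the mass/stiffness assembly just described might look like the following sketch; phi_T and grad_phi_T are the scratch arrays declared above, while local_mass_matrix and local_stiffness_matrix are assumed names for the two local matrices:

    for (unsigned int q=0; q<n_q_points; ++q)
      {
        for (unsigned int k=0; k<dofs_per_cell; ++k)
          {
            grad_phi_T[k] = temperature_fe_values.shape_grad (k,q);
            phi_T[k]      = temperature_fe_values.shape_value (k,q);
          }

        for (unsigned int i=0; i<dofs_per_cell; ++i)
          for (unsigned int j=0; j<dofs_per_cell; ++j)
            {
              local_mass_matrix(i,j)
                += (phi_T[i] * phi_T[j]
                    * temperature_fe_values.JxW(q));
              local_stiffness_matrix(i,j)
                += (EquationData::kappa
                    * grad_phi_T[i] * grad_phi_T[j]
                    * temperature_fe_values.JxW(q));
            }
      }

    cell->get_dof_indices (local_dof_indices);
    temperature_constraints.distribute_local_to_global (local_mass_matrix,
                                                        local_dof_indices,
                                                        temperature_mass_matrix);
    temperature_constraints.distribute_local_to_global (local_stiffness_matrix,
                                                        local_dof_indices,
                                                        temperature_stiffness_matrix);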
The declarations for the right - // hand side assembly in this function - // are pretty much the same as the ones - // used in the other assembly routines, - // except that we restrict ourselves to - // vectors this time. We are going to - // calculate residuals on the temperature - // system, which means that we have to - // evaluate second derivatives, specified - // by the update flag - // update_hessians. + // This function does the second part of the assembly work on the + // temperature matrix, the actual addition of pressure mass and stiffness + // matrix (where the time step size comes into play), as well as the + // creation of the velocity-dependent right hand side. The declarations for + // the right hand side assembly in this function are pretty much the same as + // the ones used in the other assembly routines, except that we restrict + // ourselves to vectors this time. We are going to calculate residuals on + // the temperature system, which means that we have to evaluate second + // derivatives, specified by the update flag update_hessians. // - // The temperature equation is coupled to the - // Stokes system by means of the fluid - // velocity. These two parts of the solution - // are associated with different DoFHandlers, - // so we again need to create a second - // FEValues object for the evaluation of the - // velocity at the quadrature points. + // The temperature equation is coupled to the Stokes system by means of the + // fluid velocity. These two parts of the solution are associated with + // different DoFHandlers, so we again need to create a second FEValues + // object for the evaluation of the velocity at the quadrature points. template void BoussinesqFlowProblem:: assemble_temperature_system (const double maximal_velocity) @@ -1986,24 +1573,15 @@ namespace Step31 std::vector local_dof_indices (dofs_per_cell); - // Next comes the declaration of vectors - // to hold the old and older solution - // values (as a notation for time levels - // n-1 and n-2, - // respectively) and gradients at - // quadrature points of the current - // cell. We also declarate an object to - // hold the temperature right hande side - // values (gamma_values), - // and we again use shortcuts for the - // temperature basis - // functions. Eventually, we need to find - // the temperature extrema and the - // diameter of the computational domain - // which will be used for the definition - // of the stabilization parameter (we got - // the maximal velocity as an input to - // this function). + // Next comes the declaration of vectors to hold the old and older + // solution values (as a notation for time levels n-1 and + // n-2, respectively) and gradients at quadrature points of the + // current cell. We also declarate an object to hold the temperature right + // hande side values (gamma_values), and we again use + // shortcuts for the temperature basis functions. Eventually, we need to + // find the temperature extrema and the diameter of the computational + // domain which will be used for the definition of the stabilization + // parameter (we got the maximal velocity as an input to this function). std::vector > old_velocity_values (n_q_points); std::vector > old_old_velocity_values (n_q_points); std::vector old_temperature_values (n_q_points); @@ -2024,28 +1602,18 @@ namespace Step31 const FEValuesExtractors::Vector velocities (0); - // Now, let's start the loop over all cells - // in the triangulation. 
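The two FEValues objects for this routine, with the update_hessians flag on the temperature side and only values on the Stokes side, could be declared roughly as follows (the quadrature degree is an assumption):

    const QGauss<dim> quadrature_formula (temperature_degree+2);

    FEValues<dim> temperature_fe_values (temperature_fe, quadrature_formula,
                                         update_values            |
                                         update_gradients         |
                                         update_hessians          |
                                         update_quadrature_points |
                                         update_JxW_values);
    FEValues<dim> stokes_fe_values (stokes_fe, quadrature_formula,
                                    update_values);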
Again, we need two - // cell iterators that walk in parallel - // through the cells of the two involved - // DoFHandler objects for the Stokes and - // temperature part. Within the loop, we - // first set the local rhs to zero, and - // then get the values and derivatives of - // the old solution functions at the - // quadrature points, since they are going - // to be needed for the definition of the - // stabilization parameters and as - // coefficients in the equation, - // respectively. Note that since the - // temperature has its own DoFHandler and - // FEValues object we get the entire - // solution at the quadrature point (which - // is the scalar temperature field only - // anyway) whereas for the Stokes part we - // restrict ourselves to extracting the - // velocity part (and ignoring the pressure - // part) by using + // Now, let's start the loop over all cells in the triangulation. Again, + // we need two cell iterators that walk in parallel through the cells of + // the two involved DoFHandler objects for the Stokes and temperature + // part. Within the loop, we first set the local rhs to zero, and then get + // the values and derivatives of the old solution functions at the + // quadrature points, since they are going to be needed for the definition + // of the stabilization parameters and as coefficients in the equation, + // respectively. Note that since the temperature has its own DoFHandler + // and FEValues object we get the entire solution at the quadrature point + // (which is the scalar temperature field only anyway) whereas for the + // Stokes part we restrict ourselves to extracting the velocity part (and + // ignoring the pressure part) by using // stokes_fe_values[velocities].get_function_values. typename DoFHandler::active_cell_iterator cell = temperature_dof_handler.begin_active(), @@ -2083,27 +1651,16 @@ namespace Step31 stokes_fe_values[velocities].get_function_values (old_stokes_solution, old_old_velocity_values); - // Next, we calculate the artificial - // viscosity for stabilization - // according to the discussion in the - // introduction using the dedicated - // function. With that at hand, we - // can get into the loop over - // quadrature points and local rhs - // vector components. The terms here - // are quite lenghty, but their - // definition follows the - // time-discrete system developed in - // the introduction of this - // program. The BDF-2 scheme needs - // one more term from the old time - // step (and involves more - // complicated factors) than the - // backward Euler scheme that is used - // for the first time step. When all - // this is done, we distribute the - // local vector into the global one - // (including hanging node + // Next, we calculate the artificial viscosity for stabilization + // according to the discussion in the introduction using the dedicated + // function. With that at hand, we can get into the loop over + // quadrature points and local rhs vector components. The terms here + // are quite lenghty, but their definition follows the time-discrete + // system developed in the introduction of this program. The BDF-2 + // scheme needs one more term from the old time step (and involves + // more complicated factors) than the backward Euler scheme that is + // used for the first time step. When all this is done, we distribute + // the local vector into the global one (including hanging node // constraints). 
const double nu = compute_viscosity (old_temperature_values, @@ -2185,56 +1742,33 @@ namespace Step31 // @sect4{BoussinesqFlowProblem::solve} // - // This function solves the linear systems - // of equations. Following the - // introduction, we start with the Stokes - // system, where we need to generate our - // block Schur preconditioner. Since all - // the relevant actions are implemented in - // the class - // BlockSchurPreconditioner, - // all we have to do is to initialize the - // class appropriately. What we need to - // pass down is an - // InverseMatrix object for - // the pressure mass matrix, which we set - // up using the respective class together - // with the IC preconditioner we already - // generated, and the AMG preconditioner - // for the velocity-velocity matrix. Note - // that both Mp_preconditioner - // and Amg_preconditioner are - // only pointers, so we use * - // to pass down the actual preconditioner - // objects. + // This function solves the linear systems of equations. Following the + // introduction, we start with the Stokes system, where we need to generate + // our block Schur preconditioner. Since all the relevant actions are + // implemented in the class BlockSchurPreconditioner, all we + // have to do is to initialize the class appropriately. What we need to pass + // down is an InverseMatrix object for the pressure mass + // matrix, which we set up using the respective class together with the IC + // preconditioner we already generated, and the AMG preconditioner for the + // velocity-velocity matrix. Note that both Mp_preconditioner + // and Amg_preconditioner are only pointers, so we use + // * to pass down the actual preconditioner objects. // - // Once the preconditioner is ready, we - // create a GMRES solver for the block - // system. Since we are working with - // Trilinos data structures, we have to set - // the respective template argument in the - // solver. GMRES needs to internally store - // temporary vectors for each iteration - // (see the discussion in the results - // section of step-22) – the more - // vectors it can use, the better it will - // generally perform. To keep memory - // demands in check, we set the number of - // vectors to 100. This means that up to - // 100 solver iterations, every temporary - // vector can be stored. If the solver - // needs to iterate more often to get the - // specified tolerance, it will work on a - // reduced set of vectors by restarting at - // every 100 iterations. + // Once the preconditioner is ready, we create a GMRES solver for the block + // system. Since we are working with Trilinos data structures, we have to + // set the respective template argument in the solver. GMRES needs to + // internally store temporary vectors for each iteration (see the discussion + // in the results section of step-22) – the more vectors it can use, + // the better it will generally perform. To keep memory demands in check, we + // set the number of vectors to 100. This means that up to 100 solver + // iterations, every temporary vector can be stored. If the solver needs to + // iterate more often to get the specified tolerance, it will work on a + // reduced set of vectors by restarting at every 100 iterations. // - // With this all set up, we solve the system - // and distribute the constraints in the - // Stokes system, i.e. hanging nodes and - // no-flux boundary condition, in order to - // have the appropriate solution values even - // at constrained dofs. 
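Put together, the Stokes solver setup described above might look like the following sketch; the InverseMatrix and BlockSchurPreconditioner helpers are the classes defined earlier in this program (here assumed to live in a LinearSolvers namespace), and the stopping tolerance is an assumption:

    const LinearSolvers::InverseMatrix<TrilinosWrappers::SparseMatrix,
                                       TrilinosWrappers::PreconditionIC>
      mp_inverse (stokes_preconditioner_matrix.block(1,1), *Mp_preconditioner);

    const LinearSolvers::BlockSchurPreconditioner<TrilinosWrappers::PreconditionAMG,
                                                  TrilinosWrappers::PreconditionIC>
      preconditioner (stokes_matrix, mp_inverse, *Amg_preconditioner);

    SolverControl solver_control (stokes_matrix.m(),
                                  1e-6 * stokes_rhs.l2_norm());
    SolverGMRES<TrilinosWrappers::BlockVector>
      gmres (solver_control,
             SolverGMRES<TrilinosWrappers::BlockVector>::AdditionalData(100));

    gmres.solve (stokes_matrix, stokes_solution, stokes_rhs, preconditioner);
    stokes_constraints.distribute (stokes_solution);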
Finally, we write the - // number of iterations to the screen. + // With this all set up, we solve the system and distribute the constraints + // in the Stokes system, i.e. hanging nodes and no-flux boundary condition, + // in order to have the appropriate solution values even at constrained + // dofs. Finally, we write the number of iterations to the screen. template void BoussinesqFlowProblem::solve () { @@ -2270,39 +1804,26 @@ namespace Step31 << std::endl; } - // Once we know the Stokes solution, we can - // determine the new time step from the - // maximal velocity. We have to do this to - // satisfy the CFL condition since - // convection terms are treated explicitly - // in the temperature equation, as - // discussed in the introduction. The exact - // form of the formula used here for the - // time step is discussed in the results + // Once we know the Stokes solution, we can determine the new time step + // from the maximal velocity. We have to do this to satisfy the CFL + // condition since convection terms are treated explicitly in the + // temperature equation, as discussed in the introduction. The exact form + // of the formula used here for the time step is discussed in the results // section of this program. // - // There is a snatch here. The formula - // contains a division by the maximum value - // of the velocity. However, at the start - // of the computation, we have a constant - // temperature field (we start with a - // constant temperature, and it will be - // non-constant only after the first time - // step during which the source - // acts). Constant temperature means that - // no buoyancy acts, and so the velocity is - // zero. Dividing by it will not likely - // lead to anything good. + // There is a snatch here. The formula contains a division by the maximum + // value of the velocity. However, at the start of the computation, we + // have a constant temperature field (we start with a constant + // temperature, and it will be non-constant only after the first time step + // during which the source acts). Constant temperature means that no + // buoyancy acts, and so the velocity is zero. Dividing by it will not + // likely lead to anything good. // - // To avoid the resulting infinite time - // step, we ask whether the maximal - // velocity is very small (in particular - // smaller than the values we encounter - // during any of the following time steps) - // and if so rather than dividing by zero - // we just divide by a small value, - // resulting in a large but finite time - // step. + // To avoid the resulting infinite time step, we ask whether the maximal + // velocity is very small (in particular smaller than the values we + // encounter during any of the following time steps) and if so rather than + // dividing by zero we just divide by a small value, resulting in a large + // but finite time step. old_time_step = time_step; const double maximal_velocity = get_maximal_velocity(); @@ -2322,39 +1843,23 @@ namespace Step31 temperature_solution = old_temperature_solution; - // Next we set up the temperature system - // and the right hand side using the - // function - // assemble_temperature_system(). - // Knowing the matrix and right hand side - // of the temperature equation, we set up - // a preconditioner and a solver. The - // temperature matrix is a mass matrix - // (with eigenvalues around one) plus a - // Laplace matrix (with eigenvalues - // between zero and $ch^{-2}$) times a - // small number proportional to the time - // step $k_n$. 
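Schematically, and with a hypothetical constant c_k standing in for the CFL-type factor whose exact form is deferred to the results section, the guarded time step update could read:

    old_time_step = time_step;
    const double maximal_velocity = get_maximal_velocity ();

    if (maximal_velocity >= 0.01)
      time_step = c_k * GridTools::minimal_cell_diameter (triangulation)
                  / maximal_velocity;
    else
      // first time step: the velocity is still zero, so divide by a small
      // number instead to obtain a large but finite time step
      time_step = c_k * GridTools::minimal_cell_diameter (triangulation)
                  / 0.01;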
Hence, the resulting - // symmetric and positive definite matrix - // has eigenvalues in the range - // $[1,1+k_nh^{-2}]$ (up to - // constants). This matrix is only - // moderately ill conditioned even for - // small mesh sizes and we get a - // reasonably good preconditioner by - // simple means, for example with an - // incomplete Cholesky decomposition - // preconditioner (IC) as we also use for - // preconditioning the pressure mass - // matrix solver. As a solver, we choose - // the conjugate gradient method CG. As - // before, we tell the solver to use - // Trilinos vectors via the template - // argument - // TrilinosWrappers::Vector. - // Finally, we solve, distribute the - // hanging node constraints and write out - // the number of iterations. + // Next we set up the temperature system and the right hand side using the + // function assemble_temperature_system(). Knowing the + // matrix and right hand side of the temperature equation, we set up a + // preconditioner and a solver. The temperature matrix is a mass matrix + // (with eigenvalues around one) plus a Laplace matrix (with eigenvalues + // between zero and $ch^{-2}$) times a small number proportional to the + // time step $k_n$. Hence, the resulting symmetric and positive definite + // matrix has eigenvalues in the range $[1,1+k_nh^{-2}]$ (up to + // constants). This matrix is only moderately ill conditioned even for + // small mesh sizes and we get a reasonably good preconditioner by simple + // means, for example with an incomplete Cholesky decomposition + // preconditioner (IC) as we also use for preconditioning the pressure + // mass matrix solver. As a solver, we choose the conjugate gradient + // method CG. As before, we tell the solver to use Trilinos vectors via + // the template argument TrilinosWrappers::Vector. Finally, + // we solve, distribute the hanging node constraints and write out the + // number of iterations. assemble_temperature_system (maximal_velocity); { @@ -2375,14 +1880,11 @@ namespace Step31 << " CG iterations for temperature." << std::endl; - // At the end of this function, we step - // through the vector and read out the - // maximum and minimum temperature value, - // which we also want to output. This - // will come in handy when determining - // the correct constant in the choice of - // time step as discuss in the results - // section of this program. + // At the end of this function, we step through the vector and read out + // the maximum and minimum temperature value, which we also want to + // output. This will come in handy when determining the correct constant + // in the choice of time step as discuss in the results section of this + // program. double min_temperature = temperature_solution(0), max_temperature = temperature_solution(0); for (unsigned int i=0; i void BoussinesqFlowProblem::output_results () const { @@ -2453,52 +1941,31 @@ namespace Step31 Vector joint_solution (joint_dof_handler.n_dofs()); - // Unfortunately, there is no - // straight-forward relation that tells - // us how to sort Stokes and temperature - // vector into the joint vector. The way - // we can get around this trouble is to - // rely on the information collected in - // the FESystem. For each dof in a cell, - // the joint finite element knows to - // which equation component (velocity - // component, pressure, or temperature) - // it belongs – that's the - // information we need! 
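The temperature solve described above reduces to a few lines (a sketch; the stopping tolerance is an assumption):

    SolverControl solver_control (temperature_matrix.m(),
                                  1e-8 * temperature_rhs.l2_norm());
    SolverCG<TrilinosWrappers::Vector> cg (solver_control);

    TrilinosWrappers::PreconditionIC preconditioner;
    preconditioner.initialize (temperature_matrix);

    cg.solve (temperature_matrix, temperature_solution,
              temperature_rhs, preconditioner);

    temperature_constraints.distribute (temperature_solution);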
So we step - // through all cells (with iterators into - // all three DoFHandlers moving in - // synch), and for each joint cell dof, - // we read out that component using the - // FiniteElement::system_to_base_index - // function (see there for a description - // of what the various parts of its - // return value contain). We also need to - // keep track whether we're on a Stokes - // dof or a temperature dof, which is - // contained in - // joint_fe.system_to_base_index(i).first.first. - // Eventually, the dof_indices data - // structures on either of the three - // systems tell us how the relation - // between global vector and local dofs - // looks like on the present cell, which - // concludes this tedious work. + // Unfortunately, there is no straight-forward relation that tells us how + // to sort Stokes and temperature vector into the joint vector. The way we + // can get around this trouble is to rely on the information collected in + // the FESystem. For each dof in a cell, the joint finite element knows to + // which equation component (velocity component, pressure, or temperature) + // it belongs – that's the information we need! So we step through + // all cells (with iterators into all three DoFHandlers moving in synch), + // and for each joint cell dof, we read out that component using the + // FiniteElement::system_to_base_index function (see there for a + // description of what the various parts of its return value contain). We + // also need to keep track whether we're on a Stokes dof or a temperature + // dof, which is contained in + // joint_fe.system_to_base_index(i).first.first. Eventually, + // the dof_indices data structures on either of the three systems tell us + // how the relation between global vector and local dofs looks like on the + // present cell, which concludes this tedious work. // - // There's one thing worth remembering - // when looking at the output: In our - // algorithm, we first solve for the - // Stokes system at time level n-1 - // in each time step and then for the - // temperature at time level n - // using the previously computed - // velocity. These are the two components - // we join for output, so these two parts - // of the output file are actually - // misaligned by one time step. Since we - // consider graphical output as only a - // qualititative means to understand a - // solution, we ignore this - // $\mathcal{O}(h)$ error. + // There's one thing worth remembering when looking at the output: In our + // algorithm, we first solve for the Stokes system at time level + // n-1 in each time step and then for the temperature at time level + // n using the previously computed velocity. These are the two + // components we join for output, so these two parts of the output file + // are actually misaligned by one time step. Since we consider graphical + // output as only a qualititative means to understand a solution, we + // ignore this $\mathcal{O}(h)$ error. { std::vector local_joint_dof_indices (joint_fe.dofs_per_cell); std::vector local_stokes_dof_indices (stokes_fe.dofs_per_cell); @@ -2539,28 +2006,17 @@ namespace Step31 } } - // Next, we proceed as we've done in - // step-22. We create solution names - // (that are going to appear in the - // visualization program for the - // individual components), and attach the - // joint dof handler to a DataOut - // object. The first dim - // components are the vector velocity, - // and then we have pressure and - // temperature. 
This information is read - // out using the - // DataComponentInterpretation helper - // class. Next, we attach the solution - // values together with the names of its - // components to the output object, and - // build patches according to the degree - // of freedom, which are (sub-) elements - // that describe the data for - // visualization programs. Finally, we - // set a file name (that includes the - // time step number) and write the vtk - // file. + // Next, we proceed as we've done in step-22. We create solution names + // (that are going to appear in the visualization program for the + // individual components), and attach the joint dof handler to a DataOut + // object. The first dim components are the vector velocity, + // and then we have pressure and temperature. This information is read out + // using the DataComponentInterpretation helper class. Next, we attach the + // solution values together with the names of its components to the output + // object, and build patches according to the degree of freedom, which are + // (sub-) elements that describe the data for visualization + // programs. Finally, we set a file name (that includes the time step + // number) and write the vtk file. std::vector joint_solution_names (dim, "velocity"); joint_solution_names.push_back ("p"); joint_solution_names.push_back ("T"); @@ -2592,57 +2048,38 @@ namespace Step31 // @sect4{BoussinesqFlowProblem::refine_mesh} // - // This function takes care of the adaptive - // mesh refinement. The three tasks this - // function performs is to first find out - // which cells to refine/coarsen, then to - // actually do the refinement and eventually - // transfer the solution vectors between the - // two different grids. The first task is - // simply achieved by using the - // well-established Kelly error estimator on - // the temperature (it is the temperature - // we're mainly interested in for this - // program, and we need to be accurate in - // regions of high temperature gradients, - // also to not have too much numerical - // diffusion). The second task is to actually - // do the remeshing. That involves only basic - // functions as well, such as the - // refine_and_coarsen_fixed_fraction - // that refines those cells with the largest - // estimated error that together make up 80 - // per cent of the error, and coarsens those - // cells with the smallest error that make up - // for a combined 10 per cent of the - // error. + // This function takes care of the adaptive mesh refinement. The three tasks + // this function performs is to first find out which cells to + // refine/coarsen, then to actually do the refinement and eventually + // transfer the solution vectors between the two different grids. The first + // task is simply achieved by using the well-established Kelly error + // estimator on the temperature (it is the temperature we're mainly + // interested in for this program, and we need to be accurate in regions of + // high temperature gradients, also to not have too much numerical + // diffusion). The second task is to actually do the remeshing. That + // involves only basic functions as well, such as the + // refine_and_coarsen_fixed_fraction that refines those cells + // with the largest estimated error that together make up 80 per cent of the + // error, and coarsens those cells with the smallest error that make up for + // a combined 10 per cent of the error. 
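  // To make the error estimation and flagging steps concrete, here is a
  // minimal sketch of the two calls alluded to above (the quadrature degree
  // and the exact form of the arguments are assumptions for the purpose of
  // illustration; the fractions 0.8 and 0.1 are the 80 and 10 per cent just
  // mentioned):
  //
  //   Vector<float> estimated_error_per_cell (triangulation.n_active_cells());
  //   KellyErrorEstimator<dim>::estimate (temperature_dof_handler,
  //                                       QGauss<dim-1>(temperature_degree+1),
  //                                       typename FunctionMap<dim>::type(),
  //                                       temperature_solution,
  //                                       estimated_error_per_cell);
  //   GridRefinement::refine_and_coarsen_fixed_fraction (triangulation,
  //                                                      estimated_error_per_cell,
  //                                                      0.8, 0.1);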
// - // If implemented like this, we would get a - // program that will not make much progress: - // Remember that we expect temperature fields - // that are nearly discontinuous (the - // diffusivity $\kappa$ is very small after - // all) and consequently we can expect that a - // freely adapted mesh will refine further - // and further into the areas of large - // gradients. This decrease in mesh size will - // then be accompanied by a decrease in time - // step, requiring an exceedingly large - // number of time steps to solve to a given - // final time. It will also lead to meshes - // that are much better at resolving - // discontinuities after several mesh - // refinement cycles than in the beginning. + // If implemented like this, we would get a program that will not make much + // progress: Remember that we expect temperature fields that are nearly + // discontinuous (the diffusivity $\kappa$ is very small after all) and + // consequently we can expect that a freely adapted mesh will refine further + // and further into the areas of large gradients. This decrease in mesh size + // will then be accompanied by a decrease in time step, requiring an + // exceedingly large number of time steps to solve to a given final time. It + // will also lead to meshes that are much better at resolving + // discontinuities after several mesh refinement cycles than in the + // beginning. // - // In particular to prevent the decrease in - // time step size and the correspondingly - // large number of time steps, we limit the - // maximal refinement depth of the mesh. To - // this end, after the refinement indicator - // has been applied to the cells, we simply - // loop over all cells on the finest level - // and unselect them from refinement if they - // would result in too high a mesh level. + // In particular to prevent the decrease in time step size and the + // correspondingly large number of time steps, we limit the maximal + // refinement depth of the mesh. To this end, after the refinement indicator + // has been applied to the cells, we simply loop over all cells on the + // finest level and unselect them from refinement if they would result in + // too high a mesh level. template void BoussinesqFlowProblem::refine_mesh (const unsigned int max_grid_level) { @@ -2663,38 +2100,23 @@ namespace Step31 cell != triangulation.end(); ++cell) cell->clear_refine_flag (); - // As part of mesh refinement we need to - // transfer the solution vectors from the - // old mesh to the new one. To this end - // we use the SolutionTransfer class and - // we have to prepare the solution - // vectors that should be transferred to - // the new grid (we will lose the old - // grid once we have done the refinement - // so the transfer has to happen - // concurrently with refinement). What we - // definetely need are the current and - // the old temperature (BDF-2 time - // stepping requires two old - // solutions). Since the SolutionTransfer - // objects only support to transfer one - // object per dof handler, we need to - // collect the two temperature solutions - // in one data structure. Moreover, we - // choose to transfer the Stokes - // solution, too, since we need the - // velocity at two previous time steps, - // of which only one is calculated on the - // fly. + // As part of mesh refinement we need to transfer the solution vectors + // from the old mesh to the new one. 
To this end we use the + // SolutionTransfer class and we have to prepare the solution vectors that + // should be transferred to the new grid (we will lose the old grid once + // we have done the refinement so the transfer has to happen concurrently + // with refinement). What we definitely need are the current and the old + // temperature (BDF-2 time stepping requires two old solutions). Since the + // SolutionTransfer objects only support transferring one object per dof + // handler, we need to collect the two temperature solutions in one data + // structure. Moreover, we choose to transfer the Stokes solution, too, + // since we need the velocity at two previous time steps, of which only + // one is calculated on the fly. // - // Consequently, we initialize two - // SolutionTransfer objects for the - // Stokes and temperature DoFHandler - // objects, by attaching them to the old - // dof handlers. With this at place, we - // can prepare the triangulation and the - // data vectors for refinement (in this - // order). + // Consequently, we initialize two SolutionTransfer objects for the Stokes + // and temperature DoFHandler objects, by attaching them to the old dof + // handlers. With this in place, we can prepare the triangulation and the + // data vectors for refinement (in this order). std::vector x_temperature (2); x_temperature[0] = temperature_solution; x_temperature[1] = old_temperature_solution; @@ -2709,30 +2131,18 @@ temperature_trans.prepare_for_coarsening_and_refinement(x_temperature); stokes_trans.prepare_for_coarsening_and_refinement(x_stokes); - // Now everything is ready, so do the - // refinement and recreate the dof - // structure on the new grid, and - // initialize the matrix structures and - // the new vectors in the - // setup_dofs - // function. Next, we actually perform - // the interpolation of the solutions - // between the grids. We create another - // copy of temporary vectors for - // temperature (now corresponding to the - // new grid), and let the interpolate - // function do the job. Then, the - // resulting array of vectors is written - // into the respective vector member - // variables. For the Stokes vector, - // everything is just the same – - // except that we do not need another - // temporary vector since we just - // interpolate a single vector. In the - // end, we have to tell the program that - // the matrices and preconditioners need - // to be regenerated, since the mesh has - // changed. + // Now everything is ready, so do the refinement and recreate the dof + // structure on the new grid, and initialize the matrix structures and the + // new vectors in the setup_dofs function. Next, we actually + // perform the interpolation of the solutions between the grids. We create + // another copy of temporary vectors for temperature (now corresponding to + // the new grid), and let the interpolate function do the job. Then, the + // resulting array of vectors is written into the respective vector member + // variables. For the Stokes vector, everything is just the same – + // except that we do not need another temporary vector since we just + // interpolate a single vector. In the end, we have to tell the program + // that the matrices and preconditioners need to be regenerated, since the + // mesh has changed.
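  // Written out, the interpolation step described in this comment will look
  // roughly as follows once the refined mesh and the newly sized vectors
  // exist (a sketch using the names introduced above; only the temperature
  // part needs the temporary pair of vectors):
  //
  //   std::vector<TrilinosWrappers::Vector> tmp (2);
  //   tmp[0].reinit (temperature_solution);
  //   tmp[1].reinit (temperature_solution);
  //   temperature_trans.interpolate (x_temperature, tmp);
  //   temperature_solution     = tmp[0];
  //   old_temperature_solution = tmp[1];
  //
  //   stokes_trans.interpolate (x_stokes, stokes_solution);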
triangulation.execute_coarsening_and_refinement (); setup_dofs (); @@ -2755,39 +2165,21 @@ namespace Step31 // @sect4{BoussinesqFlowProblem::run} // - // This function performs all the - // essential steps in the Boussinesq - // program. It starts by setting up a - // grid (depending on the spatial - // dimension, we choose some - // different level of initial - // refinement and additional adaptive - // refinement steps, and then create - // a cube in dim - // dimensions and set up the dofs for - // the first time. Since we want to - // start the time stepping already - // with an adaptively refined grid, - // we perform some pre-refinement - // steps, consisting of all assembly, - // solution and refinement, but - // without actually advancing in - // time. Rather, we use the vilified - // goto statement to - // jump out of the time loop right - // after mesh refinement to start all - // over again on the new mesh - // beginning at the - // start_time_iteration - // label. + // This function performs all the essential steps in the Boussinesq + // program. It starts by setting up a grid (depending on the spatial + // dimension, we choose some different level of initial refinement and + // additional adaptive refinement steps, and then create a cube in + // dim dimensions and set up the dofs for the first time. Since + // we want to start the time stepping already with an adaptively refined + // grid, we perform some pre-refinement steps, consisting of all assembly, + // solution and refinement, but without actually advancing in time. Rather, + // we use the vilified goto statement to jump out of the time + // loop right after mesh refinement to start all over again on the new mesh + // beginning at the start_time_iteration label. // - // Before we start, we project the - // initial values to the grid and - // obtain the first data for the - // old_temperature_solution - // vector. Then, we initialize time - // step number and time step and - // start the time loop. + // Before we start, we project the initial values to the grid and obtain the + // first data for the old_temperature_solution vector. Then, we + // initialize time step number and time step and start the time loop. template void BoussinesqFlowProblem::run () { @@ -2823,26 +2215,16 @@ start_time_iteration: << ": t=" << time << std::endl; - // The first steps in the time loop - // are all obvious – we - // assemble the Stokes system, the - // preconditioner, the temperature - // matrix (matrices and - // preconditioner do actually only - // change in case we've remeshed - // before), and then do the - // solve. Before going on - // with the next time step, we have - // to check whether we should first - // finish the pre-refinement steps or - // if we should remesh (every fifth - // time step), refining up to a level - // that is consistent with initial - // refinement and pre-refinement - // steps. Last in the loop is to - // advance the solutions, i.e. to - // copy the solutions to the next - // "older" time level. + // The first steps in the time loop are all obvious – we + // assemble the Stokes system, the preconditioner, the temperature + // matrix (matrices and preconditioner do actually only change in case + // we've remeshed before), and then do the solve. 
Before going on with + the next time step, we have to check whether we should first finish + the pre-refinement steps or if we should remesh (every fifth time + step), refining up to a level that is consistent with initial + refinement and pre-refinement steps. Last in the loop is to advance + the solutions, i.e. to copy the solutions to the next "older" time + level. assemble_stokes_system (); build_stokes_preconditioner (); assemble_temperature_matrix (); @@ -2870,8 +2252,7 @@ start_time_iteration: old_old_temperature_solution = old_temperature_solution; old_temperature_solution = temperature_solution; } - // Do all the above until we arrive at - // time 100. + // Do all the above until we arrive at time 100. while (time <= 100); } } @@ -2880,26 +2261,18 @@ start_time_iteration: // @sect3{The main function} // -// The main function looks almost the same -// as in all other programs. +// The main function looks almost the same as in all other programs. // -// There is one difference we have to be -// careful about. This program uses Trilinos -// and, typically, Trilinos is configured so -// that it can run in %parallel using -// MPI. This doesn't mean that it has -// to run in %parallel, and in fact this -// program (unlike step-32) makes no attempt -// at all to do anything in %parallel using -// MPI. Nevertheless, Trilinos wants the MPI -// system to be initialized. We do that be -// creating an object of type -// Utilities::MPI::MPI_InitFinalize that -// initializes MPI (if available) using the -// arguments given to main() (i.e., -// argc and argv) -// and de-initializes it again when the -// object goes out of scope. +// There is one difference we have to be careful about. This program uses +// Trilinos and, typically, Trilinos is configured so that it can run in +// %parallel using MPI. This doesn't mean that it has to run in +// %parallel, and in fact this program (unlike step-32) makes no attempt at +// all to do anything in %parallel using MPI. Nevertheless, Trilinos wants the +// MPI system to be initialized. We do that by creating an object of type +// Utilities::MPI::MPI_InitFinalize that initializes MPI (if available) using +// the arguments given to main() (i.e., argc and +// argv) and de-initializes it again when the object goes out of +// scope. int main (int argc, char *argv[]) { try diff --git a/deal.II/examples/step-32/step-32.cc b/deal.II/examples/step-32/step-32.cc index db3c7123d8..6b67f60f05 100644 --- a/deal.II/examples/step-32/step-32.cc +++ b/deal.II/examples/step-32/step-32.cc @@ -11,10 +11,8 @@ // @sect3{Include files} -//The first task as usual is to -// include the functionality of these -// well-known deal.II library files -// and some C++ header files. +// The first task as usual is to include the functionality of these well-known +// deal.II library files and some C++ header files.
#include #include #include @@ -70,52 +68,36 @@ #include #include -// This is the only include file that -// is new: It introduces the -// parallel::distributed::SolutionTransfer -// equivalent of the -// dealii::SolutionTransfer class to -// take a solution from on mesh to -// the next one upon mesh refinement, -// but in the case of parallel -// distributed triangulations: +// This is the only include file that is new: It introduces the +// parallel::distributed::SolutionTransfer equivalent of the +// dealii::SolutionTransfer class to take a solution from one mesh to the next +// one upon mesh refinement, but in the case of parallel distributed +// triangulations: #include -// The following classes are used in -// parallel distributed computations -// and have all already been -// introduced in step-40: +// The following classes are used in parallel distributed computations and +// have all already been introduced in step-40: #include #include #include -// The next step is like in all -// previous tutorial programs: We put -// everything into a namespace of its -// own and then import the deal.II -// classes and functions into it: +// The next step is like in all previous tutorial programs: We put everything +// into a namespace of its own and then import the deal.II classes and +// functions into it: namespace Step32 { using namespace dealii; // @sect3{Equation data} - // In the following namespace, we - // define the various pieces of - // equation data that describe the - // problem. This corresponds to the - // various aspects of making the - // problem at least slightly - // realistc and that were - // exhaustively discussed in the - // description of the testcase in - // the introduction. + // In the following namespace, we define the various pieces of equation data + // that describe the problem. This corresponds to the various aspects of + // making the problem at least slightly realistic and that were exhaustively + // discussed in the description of the testcase in the introduction. // - // We start with a few coefficients - // that have constant values (the - // comment after the value - // indicates its physical units): + // We start with a few coefficients that have constant values (the comment + // after the value indicates its physical units): namespace EquationData { const double eta = 1e21; /* Pa s */ @@ -134,15 +116,10 @@ const double T1 = 700+273; /* K */ - // The next set of definitions - // are for functions that encode - // the density as a function of - // temperature, the gravity - // vector, and the initial values - // for the temperature. Again, - // all of these (along with the - // values they compute) are - // discussed in the introduction: + // The next set of definitions are for functions that encode the density + // as a function of temperature, the gravity vector, and the initial + // values for the temperature. Again, all of these (along with the values + // they compute) are discussed in the introduction: double density (const double temperature) { return (reference_density * @@ -204,36 +181,19 @@ } - // As mentioned in the - // introduction we need to - // rescale the pressure to avoid - // the relative ill-conditioning - // of the momentum and mass - // conservation equations. The - // scaling factor is - // $\frac{\eta}{L}$ where $L$ was - // a typical length scale.
By - // experimenting it turns out - // that a good length scale is - // the diameter of plumes, which - // is around 10 km: + // As mentioned in the introduction we need to rescale the pressure to + // avoid the relative ill-conditioning of the momentum and mass + // conservation equations. The scaling factor is $\frac{\eta}{L}$ where + // $L$ was a typical length scale. By experimenting it turns out that a + // good length scale is the diameter of plumes, which is around 10 km: const double pressure_scaling = eta / 10000; - // The final number in this - // namespace is a constant that - // denotes the number of seconds - // per (average, tropical) - // year. We use this only when - // generating screen output: - // internally, all computations - // of this program happen in SI - // units (kilogram, meter, - // seconds) but writing - // geological times in seconds - // yields numbers that one can't - // relate to reality, and so we - // convert to years using the - // factor defined here: + // The final number in this namespace is a constant that denotes the + // number of seconds per (average, tropical) year. We use this only when + // generating screen output: internally, all computations of this program + // happen in SI units (kilogram, meter, seconds) but writing geological + // times in seconds yields numbers that one can't relate to reality, and + // so we convert to years using the factor defined here: const double year_in_seconds = 60*60*24*365.2425; } @@ -242,27 +202,17 @@ namespace Step32 // @sect3{Preconditioning the Stokes system} - // This namespace implements the - // preconditioner. As discussed in the - // introduction, this preconditioner - // differs in a number of key portions from - // the one used in step-31. Specifically, - // it is a right preconditioner, - // implementing the matrix - // @f{align*}\left(\begin{array}{cc}A^{-1} - // & B^T \\ 0 & S^{-1}\end{array}\right)@f} - // where the two inverse matrix operations - // are approximated by linear solvers or, - // if the right flag is given to the - // constructor of this class, by a single - // AMG V-cycle for the velocity block. The - // three code blocks of the - // vmult function implement - // the multiplications with the three - // blocks of this preconditioner matrix and - // should be self explanatory if you have - // read through step-31 or the discussion - // of compositing solvers in step-20. + // This namespace implements the preconditioner. As discussed in the + // introduction, this preconditioner differs in a number of key portions + // from the one used in step-31. Specifically, it is a right preconditioner, + // implementing the matrix @f{align*}\left(\begin{array}{cc}A^{-1} & B^T \\ + // 0 & S^{-1}\end{array}\right)@f} where the two inverse matrix operations + // are approximated by linear solvers or, if the right flag is given to the + // constructor of this class, by a single AMG V-cycle for the velocity + // block. The three code blocks of the vmult function implement + // the multiplications with the three blocks of this preconditioner matrix + // and should be self explanatory if you have read through step-31 or the + // discussion of compositing solvers in step-20. 
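  // Schematically, and ignoring the sign conventions and implementation
  // details of the class that follows, applying this preconditioner to a
  // block vector src (velocity part src.block(0), pressure part src.block(1))
  // consists of the three multiplications just mentioned. In pseudo-code,
  // with a_preconditioner and mp_preconditioner standing for whatever
  // approximations of $A^{-1}$ and $S^{-1}$ one has chosen:
  //
  //   mp_preconditioner.vmult (dst.block(1), src.block(1));  // apply S^{-1}
  //   stokes_matrix.block(0,1).vmult (tmp, dst.block(1));    // apply B^T
  //   tmp += src.block(0);
  //   a_preconditioner.vmult (dst.block(0), tmp);            // apply A^{-1}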
namespace LinearSolvers { template @@ -329,98 +279,47 @@ namespace Step32 // @sect3{Definition of assembly data structures} // - // As described in the - // introduction, we will use the - // WorkStream mechanism discussed - // in the @ref threads module to - // parallelize operations among the - // processors of a single - // machine. The WorkStream class - // requires that data is passed - // around in two kinds of data - // structures, one for scratch data - // and one to pass data from the - // assembly function to the - // function that copies local - // contributions into global - // objects. + // As described in the introduction, we will use the WorkStream mechanism + // discussed in the @ref threads module to parallelize operations among the + // processors of a single machine. The WorkStream class requires that data + // is passed around in two kinds of data structures, one for scratch data + // and one to pass data from the assembly function to the function that + // copies local contributions into global objects. // - // The following namespace (and the - // two sub-namespaces) contains a - // collection of data structures - // that serve this purpose, one - // pair for each of the four - // operations discussed in the - // introduction that we will want - // to parallelize. Each assembly - // routine gets two sets of data: a - // Scratch array that collects all - // the classes and arrays that are - // used for the calculation of the - // cell contribution, and a - // CopyData array that keeps local - // matrices and vectors which will - // be written into the global - // matrix. Whereas CopyData is a - // container for the final data - // that is written into the global - // matrices and vector (and, thus, - // absolutely necessary), the - // Scratch arrays are merely there - // for performance reasons — - // it would be much more expensive - // to set up a FEValues object on - // each cell, than creating it only - // once and updating some - // derivative data. + // The following namespace (and the two sub-namespaces) contains a + // collection of data structures that serve this purpose, one pair for each + // of the four operations discussed in the introduction that we will want to + // parallelize. Each assembly routine gets two sets of data: a Scratch array + // that collects all the classes and arrays that are used for the + // calculation of the cell contribution, and a CopyData array that keeps + // local matrices and vectors which will be written into the global + // matrix. Whereas CopyData is a container for the final data that is + // written into the global matrices and vector (and, thus, absolutely + // necessary), the Scratch arrays are merely there for performance reasons + // — it would be much more expensive to set up a FEValues object on + // each cell, than creating it only once and updating some derivative data. // - // Step-31 had four assembly - // routines: One for the - // preconditioner matrix of the - // Stokes system, one for the - // Stokes matrix and right hand - // side, one for the temperature - // matrices and one for the right - // hand side of the temperature - // equation. 
We here organize the - // scratch arrays and CopyData - // objects for each of those four - // assembly components using a - // struct environment - // (since we consider these as - // temporary objects we pass - // around, rather than classes that - // implement functionality of their - // own, though this is a more - // subjective point of view to - // distinguish between - // structs and - // classes). + // Step-31 had four assembly routines: One for the preconditioner matrix of + // the Stokes system, one for the Stokes matrix and right hand side, one for + // the temperature matrices and one for the right hand side of the + // temperature equation. We here organize the scratch arrays and CopyData + // objects for each of those four assembly components using a + // struct environment (since we consider these as temporary + // objects we pass around, rather than classes that implement functionality + // of their own, though this is a more subjective point of view to + // distinguish between structs and classes). // - // Regarding the Scratch objects, - // each struct is equipped with a - // constructor that creates an - // FEValues object for a @ref - // FiniteElement "finite element", - // a @ref Quadrature "quadrature formula", - // the @ref Mapping "mapping" that - // describes the - // interpolation of curved - // boundaries, and some @ref - // UpdateFlags "update flags". - // Moreover, we manually implement - // a copy constructor (since the - // FEValues class is not copyable - // by itself), and provide some - // additional vector fields that - // are used to hold intermediate - // data during the computation of - // local contributions. + // Regarding the Scratch objects, each struct is equipped with a constructor + // that creates an FEValues object for a @ref FiniteElement "finite + // element", a @ref Quadrature "quadrature formula", the @ref Mapping + // "mapping" that describes the interpolation of curved boundaries, and some + // @ref UpdateFlags "update flags". Moreover, we manually implement a copy + // constructor (since the FEValues class is not copyable by itself), and + // provide some additional vector fields that are used to hold intermediate + // data during the computation of local contributions. // - // Let us start with the scratch - // arrays and, specifically, the - // one used for assembly of the - // Stokes preconditioner: + // Let us start with the scratch arrays and, specifically, the one used for + // assembly of the Stokes preconditioner: namespace Assembly { namespace Scratch @@ -471,24 +370,15 @@ namespace Step32 - // The next one is the scratch object - // used for the assembly of the full - // Stokes system. Observe that we - // derive the StokesSystem scratch - // class from the StokesPreconditioner - // class above. We do this because all the - // objects that are necessary for the - // assembly of the preconditioner are - // also needed for the actual matrix - // system and right hand side, plus - // some extra data. This makes the - // program more compact. Note also that - // the assembly of the Stokes system - // and the temperature right hand side - // further down requires data from - // temperature and velocity, - // respectively, so we actually need - // two FEValues objects for those two + // The next one is the scratch object used for the assembly of the full + // Stokes system. Observe that we derive the StokesSystem scratch class + // from the StokesPreconditioner class above. 
We do this because all the + // objects that are necessary for the assembly of the preconditioner are + // also needed for the actual matrix system and right hand side, plus + // some extra data. This makes the program more compact. Note also that + // the assembly of the Stokes system and the temperature right hand side + // further down requires data from temperature and velocity, + // respectively, so we actually need two FEValues objects for those two // cases. template struct StokesSystem : public StokesPreconditioner @@ -550,12 +440,9 @@ namespace Step32 {} - // After defining the objects used in - // the assembly of the Stokes system, - // we do the same for the assembly of - // the matrices necessary for the - // temperature system. The general - // structure is very similar: + // After defining the objects used in the assembly of the Stokes system, + // we do the same for the assembly of the matrices necessary for the + // temperature system. The general structure is very similar: template struct TemperatureMatrix { @@ -601,24 +488,16 @@ namespace Step32 {} - // The final scratch object is used in - // the assembly of the right hand side - // of the temperature system. This - // object is significantly larger than - // the ones above because a lot more - // quantities enter the computation of - // the right hand side of the - // temperature equation. In particular, - // the temperature values and gradients - // of the previous two time steps need - // to be evaluated at the quadrature - // points, as well as the velocities - // and the strain rates (i.e. the - // symmetric gradients of the velocity) - // that enter the right hand side as - // friction heating terms. Despite the - // number of terms, the following - // should be rather self explanatory: + // The final scratch object is used in the assembly of the right hand + // side of the temperature system. This object is significantly larger + // than the ones above because a lot more quantities enter the + // computation of the right hand side of the temperature equation. In + // particular, the temperature values and gradients of the previous two + // time steps need to be evaluated at the quadrature points, as well as + // the velocities and the strain rates (i.e. the symmetric gradients of + // the velocity) that enter the right hand side as friction heating + // terms. Despite the number of terms, the following should be rather + // self explanatory: template struct TemperatureRHS { @@ -715,26 +594,15 @@ namespace Step32 } - // The CopyData objects are even - // simpler than the Scratch - // objects as all they have to do - // is to store the results of - // local computations until they - // can be copied into the global - // matrix or vector - // objects. These structures - // therefore only need to provide - // a constructor, a copy - // operation, and some arrays for - // local matrix, local vectors - // and the relation between local - // and global degrees of freedom - // (a.k.a. - // local_dof_indices). Again, - // we have one such structure for - // each of the four operations we - // will parallelize using the - // WorkStream class: + // The CopyData objects are even simpler than the Scratch objects as all + // they have to do is to store the results of local computations until + // they can be copied into the global matrix or vector objects. 
These + // structures therefore only need to provide a constructor, a copy + // operation, and some arrays for local matrix, local vectors and the + // relation between local and global degrees of freedom (a.k.a. + // local_dof_indices). Again, we have one such structure for + // each of the four operations we will parallelize using the WorkStream + // class: namespace CopyData { template @@ -862,34 +730,19 @@ namespace Step32 // @sect3{The BoussinesqFlowProblem class template} // - // This is the declaration of the - // main class. It is very similar - // to step-31 but there are a - // number differences we will - // comment on below. + // This is the declaration of the main class. It is very similar to step-31 + // but there are a number differences we will comment on below. // - // The top of the class is - // essentially the same as in - // step-31, listing the public - // methods and a set of private - // functions that do the heavy - // lifting. Compared to step-31 - // there are only two additions to - // this section: the function - // get_cfl_number() - // that computes the maximum CFL - // number over all cells which - // we then compute the global time - // step from, and the function - // get_entropy_variation() - // that is used in the computation - // of the entropy stabilization. It - // is akin to the - // get_extrapolated_temperature_range() - // we have used in step-31 for this - // purpose, but works on the - // entropy instead of the - // temperature instead. + // The top of the class is essentially the same as in step-31, listing the + // public methods and a set of private functions that do the heavy + // lifting. Compared to step-31 there are only two additions to this + // section: the function get_cfl_number() that computes the + // maximum CFL number over all cells which we then compute the global time + // step from, and the function get_entropy_variation() that is + // used in the computation of the entropy stabilization. It is akin to the + // get_extrapolated_temperature_range() we have used in step-31 + // for this purpose, but works on the entropy instead of the temperature + // instead. template class BoussinesqFlowProblem { @@ -933,14 +786,9 @@ namespace Step32 public: - // The first significant new - // component is the definition - // of a struct for the - // parameters according to the - // discussion in the - // introduction. This structure - // is initialized by reading - // from a parameter file during + // The first significant new component is the definition of a struct for + // the parameters according to the discussion in the introduction. This + // structure is initialized by reading from a parameter file during // construction of this object. struct Parameters { @@ -972,116 +820,49 @@ namespace Step32 private: Parameters ¶meters; - // The pcout (for - // %parallel - // std::cout) - // object is used to simplify - // writing output: each MPI - // process can use this to - // generate output as usual, - // but since each of these - // processes will (hopefully) - // produce the same output it - // will just be replicated many - // times over; with the - // ConditionalOStream class, - // only the output generated by - // one MPI process will - // actually be printed to - // screen, whereas the output - // by all the other threads - // will simply be forgotten. 
+ // The pcout (for %parallel std::cout) + // object is used to simplify writing output: each MPI process can use + // this to generate output as usual, but since each of these processes + // will (hopefully) produce the same output it will just be replicated + // many times over; with the ConditionalOStream class, only the output + // generated by one MPI process will actually be printed to screen, + // whereas the output by all the other threads will simply be forgotten. ConditionalOStream pcout; - // The following member - // variables will then again be - // similar to those in step-31 - // (and to other tutorial - // programs). As mentioned in - // the introduction, we fully - // distribute computations, so - // we will have to use the - // parallel::distributed::Triangulation - // class (see step-40) but the - // remainder of these variables - // is rather standard with two - // exceptions: + // The following member variables will then again be similar to those in + // step-31 (and to other tutorial programs). As mentioned in the + // introduction, we fully distribute computations, so we will have to use + // the parallel::distributed::Triangulation class (see step-40) but the + // remainder of these variables is rather standard with two exceptions: // - // - The mapping - // variable is used to denote a - // higher-order polynomial - // mapping. As mentioned in the - // introduction, we use this - // mapping when forming - // integrals through quadrature - // for all cells that are - // adjacent to either the inner - // or outer boundaries of our - // domain where the boundary is - // curved. + // - The mapping variable is used to denote a higher-order + // polynomial mapping. As mentioned in the introduction, we use this + // mapping when forming integrals through quadrature for all cells that + // are adjacent to either the inner or outer boundaries of our domain + // where the boundary is curved. // - // - In a bit of naming - // confusion, you will notice - // below that some of the - // variables from namespace - // TrilinosWrappers are taken - // from namespace - // TrilinosWrappers::MPI (such - // as the right hand side - // vectors) whereas others are - // not (such as the various - // matrices). For the matrices, - // we happen to use the same - // class names for %parallel - // and sequential data - // structures, i.e., all - // matrices will actually be - // considered %parallel - // below. On the other hand, - // for vectors, only those from - // namespace - // TrilinosWrappers::MPI are - // actually distributed. In - // particular, we will - // frequently have to query - // velocities and temperatures - // at arbitrary quadrature - // points; consequently, rather - // than importing ghost - // information of a vector - // whenever we need access to - // degrees of freedom that are - // relevant locally but owned - // by another processor, we - // solve linear systems in - // %parallel but then - // immediately initialize a - // vector including ghost - // entries of the solution for - // further processing. 
The - // various - // *_solution - // vectors are therefore filled - // immediately after solving - // their respective linear - // system in %parallel and will - // always contain values for - // all @ref - // GlossLocallyRelevantDof - // "locally relevant degrees of freedom"; - // the fully - // distributed vectors that we - // obtain from the solution - // process and that only ever - // contain the @ref - // GlossLocallyOwnedDof - // "locally owned degrees of freedom" - // are destroyed - // immediately after the - // solution process and after - // we have copied the relevant - // values into the member - // variable vectors. + // - In a bit of naming confusion, you will notice below that some of the + // variables from namespace TrilinosWrappers are taken from namespace + // TrilinosWrappers::MPI (such as the right hand side vectors) whereas + // others are not (such as the various matrices). For the matrices, we + // happen to use the same class names for %parallel and sequential data + // structures, i.e., all matrices will actually be considered %parallel + // below. On the other hand, for vectors, only those from namespace + // TrilinosWrappers::MPI are actually distributed. In particular, we will + // frequently have to query velocities and temperatures at arbitrary + // quadrature points; consequently, rather than importing ghost + // information of a vector whenever we need access to degrees of freedom + // that are relevant locally but owned by another processor, we solve + // linear systems in %parallel but then immediately initialize a vector + // including ghost entries of the solution for further processing. The + // various *_solution vectors are therefore filled + // immediately after solving their respective linear system in %parallel + // and will always contain values for all @ref GlossLocallyRelevantDof + // "locally relevant degrees of freedom"; the fully distributed vectors + // that we obtain from the solution process and that only ever contain the + // @ref GlossLocallyOwnedDof "locally owned degrees of freedom" are + // destroyed immediately after the solution process and after we have + // copied the relevant values into the member variable vectors. parallel::distributed::Triangulation triangulation; double global_Omega_diameter; @@ -1126,70 +907,34 @@ namespace Step32 bool rebuild_temperature_matrices; bool rebuild_temperature_preconditioner; - // The next member variable, - // computing_timer - // is used to conveniently - // account for compute time - // spent in certain "sections" - // of the code that are - // repeatedly entered. For - // example, we will enter (and - // leave) sections for Stokes - // matrix assembly and would - // like to accumulate the run - // time spent in this section - // over all time steps. Every - // so many time steps as well - // as at the end of the program - // (through the destructor of - // the TimerOutput class) we - // will then produce a nice - // summary of the times spent - // in the different sections - // into which we categorize the + // The next member variable, computing_timer is used to + // conveniently account for compute time spent in certain "sections" of + // the code that are repeatedly entered. For example, we will enter (and + // leave) sections for Stokes matrix assembly and would like to accumulate + // the run time spent in this section over all time steps. 
Every so many + // time steps as well as at the end of the program (through the destructor + // of the TimerOutput class) we will then produce a nice summary of the + // times spent in the different sections into which we categorize the // run-time of this program. TimerOutput computing_timer; - // After these member variables - // we have a number of - // auxiliary functions that - // have been broken out of the - // ones listed - // above. Specifically, there - // are first three functions - // that we call from - // setup_dofs and - // then the ones that do the - // assembling of linear - // systems: + // After these member variables we have a number of auxiliary functions + // that have been broken out of the ones listed above. Specifically, there + // are first three functions that we call from setup_dofs and + // then the ones that do the assembling of linear systems: void setup_stokes_matrix (const std::vector &stokes_partitioning); void setup_stokes_preconditioner (const std::vector &stokes_partitioning); void setup_temperature_matrices (const IndexSet &temperature_partitioning); - // Following the @ref - // MTWorkStream - // "task-based parallelization" - // paradigm, - // we split all the assembly - // routines into two parts: a - // first part that can do all - // the calculations on a - // certain cell without taking - // care of other threads, and a - // second part (which is - // writing the local data into - // the global matrices and - // vectors) which can be - // entered by only one thread - // at a time. In order to - // implement that, we provide - // functions for each of those - // two steps for all the four - // assembly routines that we - // use in this program. The - // following eight functions do - // exactly this: + // Following the @ref MTWorkStream "task-based parallelization" paradigm, + // we split all the assembly routines into two parts: a first part that + // can do all the calculations on a certain cell without taking care of + // other threads, and a second part (which is writing the local data into + // the global matrices and vectors) which can be entered by only one + // thread at a time. In order to implement that, we provide functions for + // each of those two steps for all the four assembly routines that we use + // in this program. The following eight functions do exactly this: void local_assemble_stokes_preconditioner (const typename DoFHandler::active_cell_iterator &cell, Assembly::Scratch::StokesPreconditioner &scratch, @@ -1229,14 +974,9 @@ namespace Step32 void copy_local_to_global_temperature_rhs (const Assembly::CopyData::TemperatureRHS &data); - // Finally, we forward declare - // a member class that we will - // define later on and that - // will be used to compute a - // number of quantities from - // our solution vectors that - // we'd like to put into the - // output files for + // Finally, we forward declare a member class that we will define later on + // and that will be used to compute a number of quantities from our + // solution vectors that we'd like to put into the output files for // visualization. class Postprocessor; }; @@ -1246,37 +986,21 @@ namespace Step32 // @sect4{BoussinesqFlowProblem::Parameters} // - // Here comes the definition of the - // parameters for the Stokes - // problem. 
We allow to set the end - // time for the simulation, the - // level of refinements (both - // global and adaptive, which in - // the sum specify what maximum - // level the cells are allowed to - // have), and the interval between - // refinements in the time - // stepping. + // Here comes the definition of the parameters for the Stokes problem. We + // allow to set the end time for the simulation, the level of refinements + // (both global and adaptive, which in the sum specify what maximum level + // the cells are allowed to have), and the interval between refinements in + // the time stepping. // - // Then, we let the user specify - // constants for the stabilization - // parameters (as discussed in the - // introduction), the polynomial - // degree for the Stokes velocity - // space, whether to use the - // locally conservative - // discretization based on FE_DGP - // elements for the pressure or not - // (FE_Q elements for pressure), - // and the polynomial degree for - // the temperature interpolation. + // Then, we let the user specify constants for the stabilization parameters + // (as discussed in the introduction), the polynomial degree for the Stokes + // velocity space, whether to use the locally conservative discretization + // based on FE_DGP elements for the pressure or not (FE_Q elements for + // pressure), and the polynomial degree for the temperature interpolation. // - // The constructor checks for a - // valid input file (if not, a file - // with default parameters for the - // quantities is written), and - // eventually parses the - // parameters. + // The constructor checks for a valid input file (if not, a file with + // default parameters for the quantities is written), and eventually parses + // the parameters. template BoussinesqFlowProblem::Parameters::Parameters (const std::string ¶meter_filename) : @@ -1322,11 +1046,8 @@ namespace Step32 - // Next we have a function that - // declares the parameters that we - // expect in the input file, - // together with their data types, - // default values and a + // Next we have a function that declares the parameters that we expect in + // the input file, together with their data types, default values and a // description: template void @@ -1398,14 +1119,10 @@ namespace Step32 - // And then we need a function that - // reads the contents of the - // ParameterHandler object we get - // by reading the input file and - // puts the results into variables - // that store the values of the - // parameters we have previously - // declared: + // And then we need a function that reads the contents of the + // ParameterHandler object we get by reading the input file and puts the + // results into variables that store the values of the parameters we have + // previously declared: template void BoussinesqFlowProblem::Parameters:: @@ -1443,62 +1160,29 @@ namespace Step32 // @sect4{BoussinesqFlowProblem::BoussinesqFlowProblem} // - // The constructor of the problem - // is very similar to the - // constructor in step-31. What is - // different is the %parallel - // communication: Trilinos uses a - // message passing interface (MPI) - // for data distribution. When - // entering the - // BoussinesqFlowProblem class, we - // have to decide how the - // parallization is to be done. We - // choose a rather simple strategy - // and let all processors that are - // running the program work - // together, specified by the - // communicator - // MPI_COMM_WORLD. 
Next, - // we create the output stream (as - // we already did in step-18) that - // only generates output on the - // first MPI process and is - // completely forgetful on all - // others. The implementation of - // this idea is to check the - // process number when - // pcout gets a true - // argument, and it uses the - // std::cout stream - // for output. If we are one - // processor five, for instance, - // then we will give a - // false argument to - // pcout, which means - // that the output of that - // processor will not be - // printed. With the exception of - // the mapping object (for which we - // use polynomials of degree 4) all - // but the final member variable - // are exactly the same as in - // step-31. + // The constructor of the problem is very similar to the constructor in + // step-31. What is different is the %parallel communication: Trilinos uses + // a message passing interface (MPI) for data distribution. When entering + // the BoussinesqFlowProblem class, we have to decide how the parallization + // is to be done. We choose a rather simple strategy and let all processors + // that are running the program work together, specified by the communicator + // MPI_COMM_WORLD. Next, we create the output stream (as we + // already did in step-18) that only generates output on the first MPI + // process and is completely forgetful on all others. The implementation of + // this idea is to check the process number when pcout gets a + // true argument, and it uses the std::cout stream for + // output. If we are one processor five, for instance, then we will give a + // false argument to pcout, which means that the + // output of that processor will not be printed. With the exception of the + // mapping object (for which we use polynomials of degree 4) all but the + // final member variable are exactly the same as in step-31. // - // This final object, the - // TimerOutput object, is then told - // to restrict output to the - // pcout stream - // (processor 0), and then we - // specify that we want to get a - // summary table at the end of the - // program which shows us wallclock - // times (as opposed to CPU - // times). We will manually also - // request intermediate summaries - // every so many time steps in the - // run() function - // below. + // This final object, the TimerOutput object, is then told to restrict + // output to the pcout stream (processor 0), and then we + // specify that we want to get a summary table at the end of the program + // which shows us wallclock times (as opposed to CPU times). We will + // manually also request intermediate summaries every so many time steps in + // the run() function below. template BoussinesqFlowProblem::BoussinesqFlowProblem (Parameters ¶meters_) : @@ -1546,70 +1230,35 @@ namespace Step32 // @sect4{The BoussinesqFlowProblem helper functions} - // @sect5{BoussinesqFlowProblem::get_maximal_velocity} - // Except for two small details, - // the function to compute the - // global maximum of the velocity - // is the same as in step-31. The - // first detail is actually common - // to all functions that implement - // loops over all cells in the - // triangulation: When operating in - // %parallel, each processor can - // only work on a chunk of cells - // since each processor only has a - // certain part of the entire - // triangulation. This chunk of - // cells that we want to work on is - // identified via a so-called - // subdomain_id, as we - // also did in step-18. 
All we need - // to change is hence to perform - // the cell-related operations only - // on cells that are owned by the - // current process (as opposed to - // ghost or artificial cells), - // i.e. for which the subdomain id - // equals the number of the process - // ID. Since this is a commonly - // used operation, there is a - // shortcut for this operation: we - // can ask whether the cell is - // owned by the current processor - // using - // cell-@>is_locally_owned(). + // @sect5{BoussinesqFlowProblem::get_maximal_velocity} Except for two small + // details, the function to compute the global maximum of the velocity is + // the same as in step-31. The first detail is actually common to all + // functions that implement loops over all cells in the triangulation: When + // operating in %parallel, each processor can only work on a chunk of cells + // since each processor only has a certain part of the entire + // triangulation. This chunk of cells that we want to work on is identified + // via a so-called subdomain_id, as we also did in step-18. All + // we need to change is hence to perform the cell-related operations only on + // cells that are owned by the current process (as opposed to ghost or + // artificial cells), i.e. for which the subdomain id equals the number of + // the process ID. Since this is a commonly used operation, there is a + // shortcut for this operation: we can ask whether the cell is owned by the + // current processor using cell-@>is_locally_owned(). // - // The second difference is the way - // we calculate the maximum - // value. Before, we could simply - // have a double - // variable that we checked against - // on each quadrature point for - // each cell. Now, we have to be a - // bit more careful since each - // processor only operates on a - // subset of cells. What we do is - // to first let each processor - // calculate the maximum among its - // cells, and then do a global - // communication operation - // Utilities::MPI::max - // that computes the maximum value - // among all the maximum values of - // the individual processors. MPI - // provides such a call, but it's - // even simpler to use the - // respective function in namespace - // Utilities::MPI using the MPI - // communicator object since that - // will do the right thing even if - // we work without MPI and on a - // single machine only. The call to - // Utilities::MPI::max - // needs two arguments, namely the - // local maximum (input) and the - // MPI communicator, which is - // MPI_COMM_WORLD in this example. + // The second difference is the way we calculate the maximum value. Before, + // we could simply have a double variable that we checked + // against on each quadrature point for each cell. Now, we have to be a bit + // more careful since each processor only operates on a subset of + // cells. What we do is to first let each processor calculate the maximum + // among its cells, and then do a global communication operation + // Utilities::MPI::max that computes the maximum value among + // all the maximum values of the individual processors. MPI provides such a + // call, but it's even simpler to use the respective function in namespace + // Utilities::MPI using the MPI communicator object since that will do the + // right thing even if we work without MPI and on a single machine only. The + // call to Utilities::MPI::max needs two arguments, namely the + // local maximum (input) and the MPI communicator, which is MPI_COMM_WORLD + // in this example. 
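  // The resulting pattern, a local reduction over the cells we own followed
  // by a single call to Utilities::MPI::max, is worth remembering since it
  // reappears in several functions below. Stripped of the FEValues machinery
  // (and using the cell diameter instead of the velocity purely to keep the
  // sketch short), it looks like this:
  //
  //   double local_max = 0;
  //   typename parallel::distributed::Triangulation<dim>::active_cell_iterator
  //     cell = triangulation.begin_active(),
  //     endc = triangulation.end();
  //   for (; cell!=endc; ++cell)
  //     if (cell->is_locally_owned())
  //       local_max = std::max (local_max, cell->diameter());
  //
  //   const double global_max = Utilities::MPI::max (local_max, MPI_COMM_WORLD);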
template double BoussinesqFlowProblem::get_maximal_velocity () const { @@ -1643,23 +1292,13 @@ namespace Step32 } - // @sect5{BoussinesqFlowProblem::get_cfl_number} - // The next function does something - // similar, but we now compute the - // CFL number, i.e., maximal - // velocity on a cell divided by - // the cell diameter. This number - // is necessary to determine the - // time step size, as we use a - // semi-explicit time stepping - // scheme for the temperature - // equation (see step-31 for a - // discussion). We compute it in - // the same way as above: Compute - // the local maximum over all - // locally owned cells, then - // exchange it via MPI to find the - // global maximum. + // @sect5{BoussinesqFlowProblem::get_cfl_number} The next function does + // something similar, but we now compute the CFL number, i.e., maximal + // velocity on a cell divided by the cell diameter. This number is necessary + // to determine the time step size, as we use a semi-explicit time stepping + // scheme for the temperature equation (see step-31 for a discussion). We + // compute it in the same way as above: Compute the local maximum over all + // locally owned cells, then exchange it via MPI to find the global maximum. template double BoussinesqFlowProblem::get_cfl_number () const { @@ -1696,42 +1335,25 @@ namespace Step32 } - // @sect5{BoussinesqFlowProblem::get_entropy_variation} - // Next comes the computation of - // the global entropy variation - // $\|E(T)-\bar{E}(T)\|_\infty$ - // where the entropy $E$ is defined - // as discussed in the - // introduction. This is needed for - // the evaluation of the - // stabilization in the temperature - // equation as explained in the - // introduction. The entropy - // variation is actually only - // needed if we use $\alpha=2$ as a - // power in the residual - // computation. The infinity norm - // is computed by the maxima over - // quadrature points, as usual in - // discrete computations. + // @sect5{BoussinesqFlowProblem::get_entropy_variation} Next comes the + // computation of the global entropy variation $\|E(T)-\bar{E}(T)\|_\infty$ + // where the entropy $E$ is defined as discussed in the introduction. This + // is needed for the evaluation of the stabilization in the temperature + // equation as explained in the introduction. The entropy variation is + // actually only needed if we use $\alpha=2$ as a power in the residual + // computation. The infinity norm is computed by the maxima over quadrature + // points, as usual in discrete computations. // - // In order to compute this quantity, we - // first have to find the space-average - // $\bar{E}(T)$ and then evaluate the - // maximum. However, that means that we - // would need to perform two loops. We can - // avoid the overhead by noting that - // $\|E(T)-\bar{E}(T)\|_\infty = + // In order to compute this quantity, we first have to find the + // space-average $\bar{E}(T)$ and then evaluate the maximum. However, that + // means that we would need to perform two loops. We can avoid the overhead + // by noting that $\|E(T)-\bar{E}(T)\|_\infty = // \max\big(E_{\textrm{max}}(T)-\bar{E}(T), - // \bar{E}(T)-E_{\textrm{min}}(T)\big)$, i.e., the - // maximum out of the deviation from the - // average entropy in positive and negative - // directions. The four quantities we need - // for the latter formula (maximum entropy, - // minimum entropy, average entropy, area) - // can all be evaluated in the same loop - // over all cells, so we choose this - // simpler variant. 
+ // \bar{E}(T)-E_{\textrm{min}}(T)\big)$, i.e., the maximum out of the + // deviation from the average entropy in positive and negative + // directions. The four quantities we need for the latter formula (maximum + // entropy, minimum entropy, average entropy, area) can all be evaluated in + // the same loop over all cells, so we choose this simpler variant. template double BoussinesqFlowProblem::get_entropy_variation (const double average_temperature) const @@ -1747,43 +1369,21 @@ namespace Step32 std::vector old_temperature_values(n_q_points); std::vector old_old_temperature_values(n_q_points); - // In the two functions above we - // computed the maximum of - // numbers that were all - // non-negative, so we knew that - // zero was certainly a lower - // bound. On the other hand, here - // we need to find the maximum - // deviation from the average - // value, i.e., we will need to - // know the maximal and minimal - // values of the entropy for - // which we don't a priori know - // the sign. + // In the two functions above we computed the maximum of numbers that were + // all non-negative, so we knew that zero was certainly a lower bound. On + // the other hand, here we need to find the maximum deviation from the + // average value, i.e., we will need to know the maximal and minimal + // values of the entropy for which we don't a priori know the sign. // - // To compute it, we can - // therefore start with the - // largest and smallest possible - // values we can store in a - // double precision number: The - // minimum is initialized with a - // bigger and the maximum with a - // smaller number than any one - // that is going to appear. We - // are then guaranteed that these - // numbers will be overwritten in - // the loop on the first cell or, - // if this processor does not own - // any cells, in the - // communication step at the - // latest. The following loop - // then computes the minimum and - // maximum local entropy as well - // as keeps track of the - // area/volume of the part of the - // domain we locally own and the - // integral over the entropy on - // it: + // To compute it, we can therefore start with the largest and smallest + // possible values we can store in a double precision number: The minimum + // is initialized with a bigger and the maximum with a smaller number than + // any one that is going to appear. We are then guaranteed that these + // numbers will be overwritten in the loop on the first cell or, if this + // processor does not own any cells, in the communication step at the + // latest. The following loop then computes the minimum and maximum local + // entropy as well as keeps track of the area/volume of the part of the + // domain we locally own and the integral over the entropy on it: double min_entropy = std::numeric_limits::max(), max_entropy = -std::numeric_limits::max(), area = 0, @@ -1814,30 +1414,16 @@ namespace Step32 } } - // Now we only need to exchange - // data between processors: we - // need to sum the two integrals - // (area, - // entropy_integrated), - // and get the extrema for - // maximum and minimum. We could - // do this through four different - // data exchanges, but we can it - // with two: Utilities::MPI::sum - // also exists in a variant that - // takes an array of values that - // are all to be summed up. 
And
- // we can also utilize the
- // Utilities::MPI::max function
- // by realizing that forming the
- // minimum over the minimal
- // entropies equals forming the
- // negative of the maximum over
- // the negative of the minimal
- // entropies; this maximum can
- // then be combined with forming
- // the maximum over the maximal
- // entropies.
+ // Now we only need to exchange data between processors: we need to sum
+ // the two integrals (area, entropy_integrated),
+ // and get the extrema for maximum and minimum. We could do this through
+ // four different data exchanges, but we can do it with two:
+ // Utilities::MPI::sum also exists in a variant that takes an array of
+ // values that are all to be summed up. And we can also utilize the
+ // Utilities::MPI::max function by realizing that forming the minimum over
+ // the minimal entropies equals forming the negative of the maximum over
+ // the negative of the minimal entropies; this maximum can then be
+ // combined with forming the maximum over the maximal entropies.
const double local_sums[2] = { entropy_integrated, area },
local_maxima[2] = { -min_entropy, max_entropy };
double global_sums[2], global_maxima[2];
@@ -1845,13 +1431,9 @@ namespace Step32
Utilities::MPI::sum (local_sums, MPI_COMM_WORLD, global_sums);
Utilities::MPI::max (local_maxima, MPI_COMM_WORLD, global_maxima);
- // Having computed everything
- // this way, we can then compute
- // the average entropy and find
- // the $L^\infty$ norm by taking
- // the larger of the deviation of
- // the maximum or minimum from
- // the average:
+ // Having computed everything this way, we can then compute the average
+ // entropy and find the $L^\infty$ norm by taking the larger of the
+ // deviation of the maximum or minimum from the average:
const double average_entropy = global_sums[0] / global_sums[1];
const double entropy_diff = std::max(global_maxima[1] - average_entropy, average_entropy - (-global_maxima[0]));
@@ -1860,26 +1442,17 @@ namespace Step32
- // @sect5{BoussinesqFlowProblem::get_extrapolated_temperature_range}
- // The next function computes the
- // minimal and maximal value of the
- // extrapolated temperature over
- // the entire domain. Again, this
- // is only a slightly modified
- // version of the respective
- // function in step-31. As in the
- // function above, we collect local
- // minima and maxima and then
- // compute the global extrema using
- // the same trick as above.
+ // @sect5{BoussinesqFlowProblem::get_extrapolated_temperature_range} The
+ // next function computes the minimal and maximal value of the extrapolated
+ // temperature over the entire domain. Again, this is only a slightly
+ // modified version of the respective function in step-31. As in the
+ // function above, we collect local minima and maxima and then compute the
+ // global extrema using the same trick as above.
//
- // As already discussed in step-31, the
- // function needs to distinguish between
- // the first and all following time steps
- // because it uses a higher order
- // temperature extrapolation scheme when at
- // least two previous time steps are
- // available.
+ // As already discussed in step-31, the function needs to distinguish
+ // between the first and all following time steps because it uses a higher
+ // order temperature extrapolation scheme when at least two previous time
+ // steps are available.
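For illustration, the extrapolation and MPI reduction mentioned above can be sketched as follows. This is an editorial sketch only; the variable names mirror those declared in the function below, and the real loop structure may differ in detail.

  // Higher-order extrapolation of the temperature to the current time,
  // used from the second time step onwards:
  if (timestep_number != 0)
    for (unsigned int q=0; q<n_q_points; ++q)
      {
        const double extrapolated_T
          = old_temperature_values[q] * (1. + time_step/old_time_step)
            - old_old_temperature_values[q] * time_step/old_time_step;
        min_local_temperature = std::min (min_local_temperature,
                                          extrapolated_T);
        max_local_temperature = std::max (max_local_temperature,
                                          extrapolated_T);
      }
  else
    // In the very first time step only the previous temperature field
    // exists, so we simply take its values:
    for (unsigned int q=0; q<n_q_points; ++q)
      {
        min_local_temperature = std::min (min_local_temperature,
                                          old_temperature_values[q]);
        max_local_temperature = std::max (max_local_temperature,
                                          old_temperature_values[q]);
      }

  // After the loop over all locally owned cells, a single array-valued
  // Utilities::MPI::max call yields both global extrema, because the
  // minimum equals the negative of the maximum of the negated values:
  const double local_extrema[2] = { -min_local_temperature,
                                    max_local_temperature };
  double global_extrema[2];
  Utilities::MPI::max (local_extrema, MPI_COMM_WORLD, global_extrema);
  // global minimum = -global_extrema[0], global maximum = global_extrema[1]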
template std::pair BoussinesqFlowProblem::get_extrapolated_temperature_range () const
@@ -1957,13 +1530,10 @@ namespace Step32
}
- // @sect5{BoussinesqFlowProblem::compute_viscosity}
- // The function that calculates the
- // viscosity is purely local and so needs
- // no communication at all. It is mostly
- // the same as in step-31 but with an
- // updated formulation of the viscosity if
- // $\alpha=2$ is chosen:
+ // @sect5{BoussinesqFlowProblem::compute_viscosity} The function that
+ // calculates the viscosity is purely local and so needs no communication at
+ // all. It is mostly the same as in step-31 but with an updated formulation
+ // of the viscosity if $\alpha=2$ is chosen:
template double BoussinesqFlowProblem::
@@ -2051,86 +1621,48 @@ namespace Step32
// @sect5{BoussinesqFlowProblem::project_temperature_field}
- // This function is new compared to
- // step-31. What is does is to re-implement
- // the library function
- // VectorTools::project() for
- // an MPI-based parallelization, a function
- // we used for generating an initial vector
- // for temperature based on some initial
- // function. The library function only
- // works with shared memory but doesn't
- // know how to utilize multiple machines
- // coupled through MPI to compute the
- // projected field. The details of a
- // project() function are not
- // very difficult. All we do is to use a
- // mass matrix and put the evaluation of
- // the initial value function on the right
- // hand side. The mass matrix for
- // temperature we can simply generate using
- // the respective assembly function, so all
- // we need to do here is to create the
- // right hand side and do a CG solve. The
- // assembly function does a loop over all
- // cells and evaluates the function in the
- // EquationData namespace, and
- // does this only on cells owned by the
- // respective processor. The implementation
- // of this assembly differs from the
- // assembly we do for the principal
- // assembly functions further down (which
- // include thread-based parallelization
- // with the WorkStream concept). Here we
- // chose to keep things simple (keeping in
- // mind that this function is also only
- // called once at the beginning of the
- // program, not in every time step), and
- // generating the right hand side is cheap
- // anyway so we won't even notice that this
- // part is not parallized by threads.
+ // This function is new compared to step-31. What it does is to re-implement
+ // the library function VectorTools::project() for an MPI-based
+ // parallelization, a function we used for generating an initial vector for
+ // temperature based on some initial function. The library function only
+ // works with shared memory but doesn't know how to utilize multiple
+ // machines coupled through MPI to compute the projected field. The details
+ // of a project() function are not very difficult. All we do is
+ // to use a mass matrix and put the evaluation of the initial value function
+ // on the right hand side. The mass matrix for temperature we can simply
+ // generate using the respective assembly function, so all we need to do
+ // here is to create the right hand side and do a CG solve. The assembly
+ // function does a loop over all cells and evaluates the function in the
+ // EquationData namespace, and does this only on cells owned by
+ // the respective processor.
The implementation of this assembly differs
+ // from the assembly we do for the principal assembly functions further down
+ // (which include thread-based parallelization with the WorkStream
+ // concept). Here we chose to keep things simple (keeping in mind that this
+ // function is also only called once at the beginning of the program, not in
+ // every time step), and generating the right hand side is cheap anyway so
+ // we won't even notice that this part is not parallelized by threads.
//
- // Regarding the implementation of
- // inhomogeneous Dirichlet boundary
- // conditions: Since we use the temperature
- // ConstraintMatrix, we could apply the
- // boundary conditions directly when
- // building the respective matrix and right
- // hand side. In this case, the boundary
- // conditions are inhomogeneous, which
- // makes this procedure somewhat tricky
- // since we get the matrix from some other
- // function that uses its own integration
- // and assembly loop. However, the correct
- // imposition of boundary conditions needs
- // the matrix data we work on plus the
- // right hand side simultaneously, since
- // the right hand side is created by
- // Gaussian elimination on the matrix
- // rows. In order to not introduce the
- // matrix assembly at this place, but still
- // having the matrix data available, we
- // choose to create a dummy matrix
- // matrix_for_bc that we only
- // fill with data when we need it for
- // imposing boundary conditions. These
- // positions are exactly those where we
- // have an inhomogeneous entry in the
- // ConstraintMatrix. There are only a few
- // such positions (on the boundary DoFs),
- // so it is still much cheaper to use this
- // function than to create the full matrix
- // here. To implement this, we ask the
- // constraint matrix whether the DoF under
- // consideration is inhomogeneously
- // constrained. In that case, we generate
- // the respective matrix column that we
- // need for creating the correct right hand
- // side. Note that this (manually
- // generated) matrix entry needs to be
- // exactly the entry that we would fill the
- // matrix with — otherwise, this will
- // not work.
+ // Regarding the implementation of inhomogeneous Dirichlet boundary
+ // conditions: Since we use the temperature ConstraintMatrix, we could apply
+ // the boundary conditions directly when building the respective matrix and
+ // right hand side. In this case, the boundary conditions are inhomogeneous,
+ // which makes this procedure somewhat tricky since we get the matrix from
+ // some other function that uses its own integration and assembly
+ // loop. However, the correct imposition of boundary conditions needs the
+ // matrix data we work on plus the right hand side simultaneously, since the
+ // right hand side is created by Gaussian elimination on the matrix rows. In
+ // order to not introduce the matrix assembly at this place, but still
+ // having the matrix data available, we choose to create a dummy matrix
+ // matrix_for_bc that we only fill with data when we need it
+ // for imposing boundary conditions. These positions are exactly those where
+ // we have an inhomogeneous entry in the ConstraintMatrix. There are only a
+ // few such positions (on the boundary DoFs), so it is still much cheaper to
+ // use this function than to create the full matrix here. To implement this,
+ // we ask the constraint matrix whether the DoF under consideration is
+ // inhomogeneously constrained.
In that case, we generate the respective
+ // matrix column that we need for creating the correct right hand side. Note
+ // that this (manually generated) matrix entry needs to be exactly the entry
+ // that we would fill the matrix with — otherwise, this will not work.
template void BoussinesqFlowProblem::project_temperature_field ()
{
@@ -2195,10 +1727,8 @@ namespace Step32
rhs.compress (Add);
- // Now that we have the right linear
- // system, we solve it using the CG
- // method with a simple Jacobi
- // preconditioner:
+ // Now that we have the right linear system, we solve it using the CG
+ // method with a simple Jacobi preconditioner:
SolverControl solver_control(5*rhs.size(), 1e-12*rhs.l2_norm());
SolverCG cg(solver_control);
@@ -2209,38 +1739,23 @@ namespace Step32
temperature_constraints.distribute (solution);
- // Having so computed the current
- // temperature field, let us set the
- // member variable that holds the
- // temperature nodes. Strictly speaking,
- // we really only need to set
- // old_temperature_solution
- // since the first thing we will do is to
- // compute the Stokes solution that only
- // requires the previous time step's
- // temperature field. That said, nothing
- // good can come from not initializing
- // the other vectors as well (especially
- // since it's a relatively cheap
- // operation and we only have to do it
- // once at the beginning of the program)
- // if we ever want to extend our
- // numerical method or physical model,
- // and so we initialize
+ // Having so computed the current temperature field, let us set the member
+ // variable that holds the temperature nodes. Strictly speaking, we really
+ // only need to set old_temperature_solution since the first
+ // thing we will do is to compute the Stokes solution that only requires
+ // the previous time step's temperature field. That said, nothing good can
+ // come from not initializing the other vectors as well (especially since
+ // it's a relatively cheap operation and we only have to do it once at the
+ // beginning of the program) if we ever want to extend our numerical
+ // method or physical model, and so we initialize
// temperature_solution and
- // old_old_temperature_solution
- // as well. As a sidenote, while the
- // solution vector is
- // strictly distributed (i.e. each
- // processor only stores a mutually
- // exclusive subset of elements), the
- // assignment makes sure that the vectors
- // on the left hand side (which where
- // initialized to contain ghost elements
- // as well) also get the correct ghost
- // elements. In other words, the
- // assignment here requires communication
- // between processors:
+ // old_old_temperature_solution as well. As a sidenote, while
+ // the solution vector is strictly distributed (i.e. each
+ // processor only stores a mutually exclusive subset of elements), the
+ // assignment makes sure that the vectors on the left hand side (which
+ // were initialized to contain ghost elements as well) also get the
+ // correct ghost elements. In other words, the assignment here requires
+ // communication between processors:
temperature_solution = solution;
old_temperature_solution = solution;
old_old_temperature_solution = solution;
@@ -2251,68 +1766,43 @@ namespace Step32
// @sect4{The BoussinesqFlowProblem setup functions}
- // The following three functions set up the
- // Stokes matrix, the matrix used for the
- // Stokes preconditioner, and the
- // temperature matrix.
The code is mostly - // the same as in step-31, but it has been - // broken out into three functions of their - // own for simplicity. + // The following three functions set up the Stokes matrix, the matrix used + // for the Stokes preconditioner, and the temperature matrix. The code is + // mostly the same as in step-31, but it has been broken out into three + // functions of their own for simplicity. // - // The main functional difference between - // the code here and that in step-31 is - // that the matrices we want to set up are - // distributed across multiple - // processors. Since we still want to build - // up the sparsity pattern first for - // efficiency reasons, we could continue to - // build the entire sparsity pattern - // as a - // BlockCompressedSimpleSparsityPattern, as - // we did in step-31. However, that would - // be inefficient: every processor would - // build the same sparsity pattern, but - // only initialize a small part of the - // matrix using it. It also violates the - // principle that every processor should - // only work on those cells it owns (and, - // if necessary the layer of ghost cells + // The main functional difference between the code here and that in step-31 + // is that the matrices we want to set up are distributed across multiple + // processors. Since we still want to build up the sparsity pattern first + // for efficiency reasons, we could continue to build the entire + // sparsity pattern as a BlockCompressedSimpleSparsityPattern, as we did in + // step-31. However, that would be inefficient: every processor would build + // the same sparsity pattern, but only initialize a small part of the matrix + // using it. It also violates the principle that every processor should only + // work on those cells it owns (and, if necessary the layer of ghost cells // around it). // - // Rather, we use an object of type - // TrilinosWrappers::BlockSparsityPattern, - // which is (obviously) a wrapper around a - // sparsity pattern object provided by - // Trilinos. The advantage is that the - // Trilinos sparsity pattern class can - // communicate across multiple processors: - // if this processor fills in all the - // nonzero entries that result from the - // cells it owns, and every other processor - // does so as well, then at the end after - // some MPI communication initiated by the - // compress() call, we will - // have the globally assembled sparsity - // pattern available with which the global + // Rather, we use an object of type TrilinosWrappers::BlockSparsityPattern, + // which is (obviously) a wrapper around a sparsity pattern object provided + // by Trilinos. The advantage is that the Trilinos sparsity pattern class + // can communicate across multiple processors: if this processor fills in + // all the nonzero entries that result from the cells it owns, and every + // other processor does so as well, then at the end after some MPI + // communication initiated by the compress() call, we will have + // the globally assembled sparsity pattern available with which the global // matrix can be initialized. // - // The only other change we need to make is - // to tell the - // DoFTools::make_sparsity_pattern() function - // that it is only supposed to work on a - // subset of cells, namely the ones whose - // subdomain_id equals the - // number of the current processor, and to - // ignore all other cells. 
+ // The only other change we need to make is to tell the + // DoFTools::make_sparsity_pattern() function that it is only supposed to + // work on a subset of cells, namely the ones whose + // subdomain_id equals the number of the current processor, and + // to ignore all other cells. // - // This strategy is replicated across all - // three of the following functions. + // This strategy is replicated across all three of the following functions. // - // Note that Trilinos matrices store the - // information contained in the sparsity - // patterns, so we can safely release the - // sp variable once the matrix - // has been given the sparsity structure. + // Note that Trilinos matrices store the information contained in the + // sparsity patterns, so we can safely release the sp variable + // once the matrix has been given the sparsity structure. template void BoussinesqFlowProblem:: setup_stokes_matrix (const std::vector &stokes_partitioning) @@ -2398,63 +1888,38 @@ namespace Step32 - // The remainder of the setup function - // (after splitting out the three functions - // above) mostly has to deal with the - // things we need to do for parallelization - // across processors. Because setting all - // of this up is a significant compute time - // expense of the program, we put - // everything we do here into a timer group - // so that we can get summary information - // about the fraction of time spent in this - // part of the program at its end. + // The remainder of the setup function (after splitting out the three + // functions above) mostly has to deal with the things we need to do for + // parallelization across processors. Because setting all of this up is a + // significant compute time expense of the program, we put everything we do + // here into a timer group so that we can get summary information about the + // fraction of time spent in this part of the program at its end. // - // At the top as usual we enumerate degrees - // of freedom and sort them by - // component/block, followed by writing - // their numbers to the screen from - // processor zero. The DoFHandler::distributed_dofs() function, when applied to a parallel::distributed::Triangulation object, sorts degrees of freedom in such a - // way that all degrees of freedom - // associated with subdomain zero come - // before all those associated with - // subdomain one, etc. For the Stokes - // part, this entails, however, that - // velocities and pressures become - // intermixed, but this is trivially - // solved by sorting again by blocks; it - // is worth noting that this latter - // operation leaves the relative ordering - // of all velocities and pressures alone, - // i.e. within the velocity block we will - // still have all those associated with - // subdomain zero before all velocities - // associated with subdomain one, - // etc. This is important since we store - // each of the blocks of this matrix - // distributed across all processors and - // want this to be done in such a way - // that each processor stores that part - // of the matrix that is roughly equal to - // the degrees of freedom located on - // those cells that it will actually work - // on. + // At the top as usual we enumerate degrees of freedom and sort them by + // component/block, followed by writing their numbers to the screen from + // processor zero. 
The DoFHandler::distribute_dofs() function, when applied
+ // to a parallel::distributed::Triangulation object, sorts degrees of
+ // freedom in such a way that all degrees of freedom associated with
+ // subdomain zero come before all those associated with subdomain one,
+ // etc. For the Stokes part, this entails, however, that velocities and
+ // pressures become intermixed, but this is trivially solved by sorting
+ // again by blocks; it is worth noting that this latter operation leaves the
+ // relative ordering of all velocities and pressures alone, i.e. within the
+ // velocity block we will still have all those associated with subdomain
+ // zero before all velocities associated with subdomain one, etc. This is
+ // important since we store each of the blocks of this matrix distributed
+ // across all processors and want this to be done in such a way that each
+ // processor stores that part of the matrix that is roughly equal to the
+ // degrees of freedom located on those cells that it will actually work on.
//
- // When printing the numbers of degrees of
- // freedom, note that these numbers are
- // going to be large if we use many
- // processors. Consequently, we let the
- // stream put a comma separator in between
- // every three digits. The state of the
- // stream, using the locale, is saved from
- // before to after this operation. While
- // slightly opaque, the code works because
- // the default locale (which we get using
- // the constructor call
- // std::locale("")) implies
- // printing numbers with a comma separator
- // for every third digit (i.e., thousands,
- // millions, billions).
+ // When printing the numbers of degrees of freedom, note that these numbers
+ // are going to be large if we use many processors. Consequently, we let the
+ // stream put a comma separator in between every three digits. The state of
+ // the stream, using the locale, is saved from before to after this
+ // operation. While slightly opaque, the code works because the default
+ // locale (which we get using the constructor call
+ // std::locale("")) implies printing numbers with a comma
+ // separator for every third digit (i.e., thousands, millions, billions).
template void BoussinesqFlowProblem::setup_dofs ()
{
@@ -2491,16 +1956,11 @@ namespace Step32
pcout.get_stream().imbue(s);
- // After this, we have to set up the
- // various partitioners (of type
- // IndexSet, see the
- // introduction) that describe which
- // parts of each matrix or vector will be
- // stored where, then call the functions
- // that actually set up the matrices, and
- // at the end also resize the various
- // vectors we keep around in this
- // program.
+ // After this, we have to set up the various partitioners (of type
+ // IndexSet, see the introduction) that describe which parts
+ // of each matrix or vector will be stored where, then call the functions
+ // that actually set up the matrices, and at the end also resize the
+ // various vectors we keep around in this program.
std::vector stokes_partitioning, stokes_relevant_partitioning;
IndexSet temperature_partitioning (n_T), temperature_relevant_partitioning (n_T);
IndexSet stokes_relevant_set;
@@ -2519,26 +1979,16 @@ namespace Step32
temperature_relevant_partitioning);
}
- // Following this, we can compute
- // constraints for the solution vectors,
- // including hanging node constraints and
- // homogenous and inhomogenous boundary
- // values for the Stokes and temperature
- // fields.
Note that as for everything
- // else, the constraint objects can not
- // hold all constraints on every
- // processor. Rather, each processor
- // needs to store only those that are
- // actually necessary for correctness
- // given that it only assembles linear
- // systems on cells it owns. As discussed
- // in the
- // @ref distributed_paper "this paper",
- // the set of constraints we need to know
- // about is exactly the set of
- // constraints on all locally relevant
- // degrees of freedom, so this is what we
- // use to initialize the constraint
+ // Following this, we can compute constraints for the solution vectors,
+ // including hanging node constraints and homogeneous and inhomogeneous
+ // boundary values for the Stokes and temperature fields. Note that as for
+ // everything else, the constraint objects can not hold all
+ // constraints on every processor. Rather, each processor needs to store
+ // only those that are actually necessary for correctness given that it
+ // only assembles linear systems on cells it owns. As discussed in the
+ // @ref distributed_paper "this paper", the set of constraints we need to
+ // know about is exactly the set of constraints on all locally relevant
+ // degrees of freedom, so this is what we use to initialize the constraint
// objects.
{
stokes_constraints.clear ();
@@ -2579,13 +2029,10 @@ namespace Step32
temperature_constraints.close ();
}
- // All this done, we can then initialize
- // the various matrix and vector objects
- // to their proper sizes. At the end, we
- // also record that all matrices and
- // preconditioners have to be re-computed
- // at the beginning of the next time
- // step.
+ // All this done, we can then initialize the various matrix and vector
+ // objects to their proper sizes. At the end, we also record that all
+ // matrices and preconditioners have to be re-computed at the beginning of
+ // the next time step.
setup_stokes_matrix (stokes_partitioning);
setup_stokes_preconditioner (stokes_partitioning);
setup_temperature_matrices (temperature_partitioning);
@@ -2611,54 +2058,36 @@ namespace Step32
// @sect4{The BoussinesqFlowProblem assembly functions}
//
- // Following the discussion in the
- // introduction and in the @ref threads
- // module, we split the assembly functions
- // into different parts:
+ // Following the discussion in the introduction and in the @ref threads
+ // module, we split the assembly functions into different parts:
//
- //
  • The local calculations of - // matrices and right hand sides, given a - // certain cell as input (these functions - // are named local_assemble_* - // below). The resulting function is, in - // other words, essentially the body of the - // loop over all cells in step-31. Note, - // however, that these functions store the - // result from the local calculations in - // variables of classes from the CopyData - // namespace. + //
    • The local calculations of matrices and right hand sides, given + // a certain cell as input (these functions are named + // local_assemble_* below). The resulting function is, in other + // words, essentially the body of the loop over all cells in step-31. Note, + // however, that these functions store the result from the local + // calculations in variables of classes from the CopyData namespace. // - //
    • These objects are then given to the - // second step which writes the local data - // into the global data structures (these - // functions are named - // copy_local_to_global_* - // below). These functions are pretty + //
    • These objects are then given to the second step which writes the + // local data into the global data structures (these functions are named + // copy_local_to_global_* below). These functions are pretty // trivial. // - //
    • These two subfunctions are then used - // in the respective assembly routine - // (called assemble_* below), - // where a WorkStream object is set up and - // runs over all the cells that belong to - // the processor's subdomain.
    + //
• These two subfunctions are then used in the respective assembly
+ // routine (called assemble_* below), where a WorkStream object
+ // is set up and runs over all the cells that belong to the processor's
+ // subdomain (see the schematic sketch following this list).
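Put together, the three pieces listed above combine into a call of roughly the following shape. This is a schematic, editorial sketch only: the concrete scratch and copy data objects (here simply called scratch_data and copy_data) and the exact argument lists appear further down in the program.

  // Restrict the iterator range to locally owned cells, then hand the two
  // local functions to WorkStream::run via std_cxx1x::bind:
  typedef FilteredIterator<typename DoFHandler<dim>::active_cell_iterator>
    CellFilter;

  WorkStream::run (CellFilter (IteratorFilters::LocallyOwnedCell(),
                               stokes_dof_handler.begin_active()),
                   CellFilter (IteratorFilters::LocallyOwnedCell(),
                               stokes_dof_handler.end()),
                   std_cxx1x::bind (&BoussinesqFlowProblem<dim>::
                                    local_assemble_stokes_preconditioner,
                                    this,
                                    std_cxx1x::_1,
                                    std_cxx1x::_2,
                                    std_cxx1x::_3),
                   std_cxx1x::bind (&BoussinesqFlowProblem<dim>::
                                    copy_local_to_global_stokes_preconditioner,
                                    this,
                                    std_cxx1x::_1),
                   scratch_data,
                   copy_data);

WorkStream then runs the local assembly on several threads and serializes only the copy step, as the text below explains in more detail.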
// @sect5{Stokes preconditioner assembly}
//
- // Let us start with the functions that
- // builds the Stokes preconditioner. The
- // first two of these are pretty trivial,
- // given the discussion above. Note in
- // particular that the main point in using
- // the scratch data object is that we want
- // to avoid allocating any objects on the
- // free space each time we visit a new
- // cell. As a consequence, the assembly
- // function below only has automatic local
- // variables, and everything else is
- // accessed through the scratch data
- // object, which is allocated only once
- // before we start the loop over all cells:
+ // Let us start with the functions that build the Stokes
+ // preconditioner. The first two of these are pretty trivial, given the
+ // discussion above. Note in particular that the main point in using the
+ // scratch data object is that we want to avoid allocating any objects on
+ // the free space each time we visit a new cell. As a consequence, the
+ // assembly function below only has automatic local variables, and
+ // everything else is accessed through the scratch data object, which is
+ // allocated only once before we start the loop over all cells:
template void BoussinesqFlowProblem::
@@ -2712,88 +2141,50 @@ namespace Step32
}
- // Now for the function that actually puts
- // things together, using the WorkStream
- // functions. WorkStream::run needs a
- // start and end iterator to enumerate the
- // cells it is supposed to work
- // on. Typically, one would use
- // DoFHandler::begin_active() and
- // DoFHandler::end() for that but here we
- // actually only want the subset of cells
- // that in fact are owned by the current
- // processor. This is where the
- // FilteredIterator class comes into play:
- // you give it a range of cells and it
- // provides an iterator that only iterates
- // over that subset of cells that satisfy a
- // certain predicate (a predicate is a
- // function of one argument that either
- // returns true or false). The predicate we
- // use here is
- // IteratorFilters::LocallyOwnedCell, i.e.,
- // it returns true exactly if the cell is
- // owned by the current processor. The
- // resulting iterator range is then exactly
- // what we need.
+ // Now for the function that actually puts things together, using the
+ // WorkStream functions. WorkStream::run needs a start and end iterator to
+ // enumerate the cells it is supposed to work on. Typically, one would use
+ // DoFHandler::begin_active() and DoFHandler::end() for that but here we
+ // actually only want the subset of cells that in fact are owned by the
+ // current processor. This is where the FilteredIterator class comes into
+ // play: you give it a range of cells and it provides an iterator that only
+ // iterates over that subset of cells that satisfy a certain predicate (a
+ // predicate is a function of one argument that either returns true or
+ // false). The predicate we use here is IteratorFilters::LocallyOwnedCell,
+ // i.e., it returns true exactly if the cell is owned by the current
+ // processor. The resulting iterator range is then exactly what we need.
//
- // With this obstacle out of the way, we
- // call the WorkStream::run function with
- // this set of cells, scratch and copy
- // objects, and with pointers to two
- // functions: the local assembly and
- // copy-local-to-global function.
These - // functions need to have very specific - // signatures: three arguments in the first - // and one argument in the latter case (see - // the documentation of the WorkStream::run - // function for the meaning of these - // arguments). Note how we use the - // construct std_cxx1x::bind - // to create a function object that - // satisfies this requirement. It uses - // placeholders _1, std_cxx1x::_2, - // _3 for the local assembly - // function that specify cell, scratch - // data, and copy data, as well as the - // placeholder _1 for the copy - // function that expects the data to be - // written into the global matrix. On the - // other hand, the implicit zeroth argument - // of member functions (namely the - // this pointer of the object - // on which that member function is to - // operate on) is bound to the - // this pointer of the current - // function. The WorkStream::run function, - // as a consequence, does not need to know - // anything about the object these - // functions work on. + // With this obstacle out of the way, we call the WorkStream::run function + // with this set of cells, scratch and copy objects, and with pointers to + // two functions: the local assembly and copy-local-to-global + // function. These functions need to have very specific signatures: three + // arguments in the first and one argument in the latter case (see the + // documentation of the WorkStream::run function for the meaning of these + // arguments). Note how we use the construct std_cxx1x::bind + // to create a function object that satisfies this requirement. It uses + // placeholders _1, std_cxx1x::_2, _3 for the local assembly + // function that specify cell, scratch data, and copy data, as well as the + // placeholder _1 for the copy function that expects the data + // to be written into the global matrix. On the other hand, the implicit + // zeroth argument of member functions (namely the this pointer + // of the object on which that member function is to operate on) is + // bound to the this pointer of the current + // function. The WorkStream::run function, as a consequence, does not need + // to know anything about the object these functions work on. // - // When the WorkStream is executed, it will - // create several local assembly routines - // of the first kind for several cells and - // let some available processors work on - // them. The function that needs to be - // synchronized, i.e., the write operation - // into the global matrix, however, is - // executed by only one thread at a time in - // the prescribed order. Of course, this - // only holds for the parallelization on a - // single MPI process. Different MPI - // processes will have their own WorkStream - // objects and do that work completely - // independently (and in different memory - // spaces). In a distributed calculation, - // some data will accumulate at degrees of - // freedom that are not owned by the - // respective processor. It would be - // inefficient to send data around every - // time we encounter such a dof. What - // happens instead is that the Trilinos - // sparse matrix will keep that data and - // send it to the owner at the end of - // assembly, by calling the + // When the WorkStream is executed, it will create several local assembly + // routines of the first kind for several cells and let some available + // processors work on them. 
The function that needs to be synchronized, + // i.e., the write operation into the global matrix, however, is executed by + // only one thread at a time in the prescribed order. Of course, this only + // holds for the parallelization on a single MPI process. Different MPI + // processes will have their own WorkStream objects and do that work + // completely independently (and in different memory spaces). In a + // distributed calculation, some data will accumulate at degrees of freedom + // that are not owned by the respective processor. It would be inefficient + // to send data around every time we encounter such a dof. What happens + // instead is that the Trilinos sparse matrix will keep that data and send + // it to the owner at the end of assembly, by calling the // compress() command. template void @@ -2836,15 +2227,11 @@ namespace Step32 - // The final function in this block - // initiates assembly of the Stokes - // preconditioner matrix and then in fact - // builds the Stokes preconditioner. It is - // mostly the same as in the serial - // case. The only difference to step-31 is - // that we use a Jacobi preconditioner for - // the pressure mass matrix instead of IC, - // as discussed in the introduction. + // The final function in this block initiates assembly of the Stokes + // preconditioner matrix and then in fact builds the Stokes + // preconditioner. It is mostly the same as in the serial case. The only + // difference to step-31 is that we use a Jacobi preconditioner for the + // pressure mass matrix instead of IC, as discussed in the introduction. template void BoussinesqFlowProblem::build_stokes_preconditioner () @@ -2886,25 +2273,16 @@ namespace Step32 // @sect5{Stokes system assembly} - // The next three functions implement the - // assembly of the Stokes system, again - // split up into a part performing local - // calculations, one for writing the local - // data into the global matrix and vector, - // and one for actually running the loop - // over all cells with the help of the - // WorkStream class. Note that the assembly - // of the Stokes matrix needs only to be - // done in case we have changed the - // mesh. Otherwise, just the - // (temperature-dependent) right hand side - // needs to be calculated here. Since we - // are working with distributed matrices - // and vectors, we have to call the - // respective compress() - // functions in the end of the assembly in - // order to send non-local data to the - // owner process. + // The next three functions implement the assembly of the Stokes system, + // again split up into a part performing local calculations, one for writing + // the local data into the global matrix and vector, and one for actually + // running the loop over all cells with the help of the WorkStream + // class. Note that the assembly of the Stokes matrix needs only to be done + // in case we have changed the mesh. Otherwise, just the + // (temperature-dependent) right hand side needs to be calculated + // here. Since we are working with distributed matrices and vectors, we have + // to call the respective compress() functions in the end of + // the assembly in order to send non-local data to the owner process. template void BoussinesqFlowProblem:: @@ -3053,20 +2431,14 @@ namespace Step32 // @sect5{Temperature matrix assembly} - // The task to be performed by the next - // three functions is to calculate a mass - // matrix and a Laplace matrix on the - // temperature system. 
These will be - // combined in order to yield the - // semi-implicit time stepping matrix that - // consists of the mass matrix plus a time - // step-dependent weight factor times the - // Laplace matrix. This function is again - // essentially the body of the loop over - // all cells from step-31. + // The task to be performed by the next three functions is to calculate a + // mass matrix and a Laplace matrix on the temperature system. These will be + // combined in order to yield the semi-implicit time stepping matrix that + // consists of the mass matrix plus a time step-dependent weight factor + // times the Laplace matrix. This function is again essentially the body of + // the loop over all cells from step-31. // - // The two following functions perform - // similar services as the ones above. + // The two following functions perform similar services as the ones above. template void BoussinesqFlowProblem:: local_assemble_temperature_matrix (const typename DoFHandler::active_cell_iterator &cell, @@ -3169,28 +2541,18 @@ namespace Step32 // @sect5{Temperature right hand side assembly} - // This is the last assembly function. It - // calculates the right hand side of the - // temperature system, which includes the - // convection and the stabilization - // terms. It includes a lot of evaluations - // of old solutions at the quadrature - // points (which are necessary for - // calculating the artificial viscosity of - // stabilization), but is otherwise similar - // to the other assembly functions. Notice, - // once again, how we resolve the dilemma - // of having inhomogeneous boundary - // conditions, by just making a right hand - // side at this point (compare the comments - // for the project() function - // above): We create some matrix columns - // with exactly the values that would be - // entered for the temperature stiffness - // matrix, in case we have inhomogeneously - // constrained dofs. That will account for - // the correct balance of the right hand - // side vector with the matrix system of + // This is the last assembly function. It calculates the right hand side of + // the temperature system, which includes the convection and the + // stabilization terms. It includes a lot of evaluations of old solutions at + // the quadrature points (which are necessary for calculating the artificial + // viscosity of stabilization), but is otherwise similar to the other + // assembly functions. Notice, once again, how we resolve the dilemma of + // having inhomogeneous boundary conditions, by just making a right hand + // side at this point (compare the comments for the project() + // function above): We create some matrix columns with exactly the values + // that would be entered for the temperature stiffness matrix, in case we + // have inhomogeneously constrained dofs. That will account for the correct + // balance of the right hand side vector with the matrix system of // temperature. template void BoussinesqFlowProblem:: @@ -3376,38 +2738,23 @@ namespace Step32 - // In the function that runs the WorkStream - // for actually calculating the right hand - // side, we also generate the final - // matrix. As mentioned above, it is a sum - // of the mass matrix and the Laplace - // matrix, times some time step-dependent - // weight. This weight is specified by the - // BDF-2 time integration scheme, see the - // introduction in step-31. 
What is new in
- // this tutorial program (in addition to
- // the use of MPI parallelization and the
- // WorkStream class), is that we now
- // precompute the temperature
- // preconditioner as well. The reason is
- // that the setup of the Jacobi
- // preconditioner takes a noticeable time
- // compared to the solver because we
- // usually only need between 10 and 20
- // iterations for solving the temperature
- // system (this might sound strange, as
- // Jacobi really only consists of a
- // diagonal, but in Trilinos it is derived
- // from more general framework for point
- // relaxation preconditioners which is a
- // bit inefficient). Hence, it is more
- // efficient to precompute the
- // preconditioner, even though the matrix
- // entries may slightly change because the
- // time step might change. This is not too
- // big a problem because we remesh every
- // few time steps (and regenerate the
- // preconditioner then).
+ // In the function that runs the WorkStream for actually calculating the
+ // right hand side, we also generate the final matrix. As mentioned above,
+ // it is a sum of the mass matrix and the Laplace matrix, times some time
+ // step-dependent weight. This weight is specified by the BDF-2 time
+ // integration scheme, see the introduction in step-31. What is new in this
+ // tutorial program (in addition to the use of MPI parallelization and the
+ // WorkStream class) is that we now precompute the temperature
+ // preconditioner as well. The reason is that the setup of the Jacobi
+ // preconditioner takes a noticeable time compared to the solver because we
+ // usually only need between 10 and 20 iterations for solving the
+ // temperature system (this might sound strange, as Jacobi really only
+ // consists of a diagonal, but in Trilinos it is derived from a more general
+ // framework for point relaxation preconditioners which is a bit
+ // inefficient). Hence, it is more efficient to precompute the
+ // preconditioner, even though the matrix entries may slightly change
+ // because the time step might change. This is not too big a problem because
+ // we remesh every few time steps (and regenerate the preconditioner then).
template void BoussinesqFlowProblem::assemble_temperature_system (const double maximal_velocity)
{
@@ -3434,25 +2781,16 @@ namespace Step32
rebuild_temperature_preconditioner = false;
}
- // The next part is computing the right
- // hand side vectors. To do so, we first
- // compute the average temperature $T_m$
- // that we use for evaluating the
- // artificial viscosity stabilization
- // through the residual $E(T) =
- // (T-T_m)^2$. We do this by defining the
- // midpoint between maximum and minimum
- // temperature as average temperature in
- // the definition of the entropy
- // viscosity.
An alternative would be to use the integral average, + // but the results are not very sensitive to this choice. The rest then + // only requires calling WorkStream::run again, binding the arguments to + // the local_assemble_temperature_rhs function that are the + // same in every call to the correct values: temperature_rhs = 0; const QGauss quadrature_formula(parameters.temperature_degree+2); @@ -3500,87 +2838,52 @@ namespace Step32 // @sect4{BoussinesqFlowProblem::solve} - // This function solves the linear systems - // in each time step of the Boussinesq - // problem. First, we - // work on the Stokes system and then on - // the temperature system. In essence, it - // does the same things as the respective - // function in step-31. However, there are a few - // changes here. + // This function solves the linear systems in each time step of the + // Boussinesq problem. First, we work on the Stokes system and then on the + // temperature system. In essence, it does the same things as the respective + // function in step-31. However, there are a few changes here. // - // The first change is related to the way - // we store our solution: we keep the - // vectors with locally owned degrees of - // freedom plus ghost nodes on each MPI - // node. When we enter a solver which is - // supposed to perform matrix-vector - // products with a distributed matrix, this - // is not the appropriate form, - // though. There, we will want to have the - // solution vector to be distributed in the - // same way as the matrix, i.e. without any - // ghosts. So what we do first is to - // generate a distributed vector called - // distributed_stokes_solution - // and put only the locally owned dofs into - // that, which is neatly done by the - // operator= of the Trilinos - // vector. + // The first change is related to the way we store our solution: we keep the + // vectors with locally owned degrees of freedom plus ghost nodes on each + // MPI node. When we enter a solver which is supposed to perform + // matrix-vector products with a distributed matrix, this is not the + // appropriate form, though. There, we will want to have the solution vector + // to be distributed in the same way as the matrix, i.e. without any + // ghosts. So what we do first is to generate a distributed vector called + // distributed_stokes_solution and put only the locally owned + // dofs into that, which is neatly done by the operator= of the + // Trilinos vector. // - // Next, we scale the pressure solution (or - // rather, the initial guess) for the - // solver so that it matches with the - // length scales in the matrices, as - // discussed in the introduction. We also - // immediately scale the pressure solution - // back to the correct units after the - // solution is completed. We also need to - // set the pressure values at hanging nodes - // to zero. This we also did in step-31 in - // order not to disturb the Schur - // complement by some vector entries that - // actually are irrelevant during the solve - // stage. As a difference to step-31, here - // we do it only for the locally owned - // pressure dofs. After solving for the - // Stokes solution, each processor copies - // the distributed solution back into the - // solution vector that also includes ghost - // elements. + // Next, we scale the pressure solution (or rather, the initial guess) for + // the solver so that it matches with the length scales in the matrices, as + // discussed in the introduction. 
We also immediately scale the pressure + // solution back to the correct units after the solution is completed. We + // also need to set the pressure values at hanging nodes to zero. This we + // also did in step-31 in order not to disturb the Schur complement by some + // vector entries that actually are irrelevant during the solve stage. As a + // difference to step-31, here we do it only for the locally owned pressure + // dofs. After solving for the Stokes solution, each processor copies the + // distributed solution back into the solution vector that also includes + // ghost elements. // - // The third and most obvious change is - // that we have two variants for the Stokes - // solver: A fast solver that sometimes - // breaks down, and a robust solver that is - // slower. This is what we already - // discussed in the introduction. Here is - // how we realize it: First, we perform 30 - // iterations with the fast solver based on - // the simple preconditioner based on the - // AMG V-cycle instead of an approximate - // solve (this is indicated by the + // The third and most obvious change is that we have two variants for the + // Stokes solver: A fast solver that sometimes breaks down, and a robust + // solver that is slower. This is what we already discussed in the + // introduction. Here is how we realize it: First, we perform 30 iterations + // with the fast solver based on the simple preconditioner based on the AMG + // V-cycle instead of an approximate solve (this is indicated by the // false argument to the - // LinearSolvers::BlockSchurPreconditioner - // object). If we converge, everything is - // fine. If we do not converge, the solver - // control object will throw an exception - // SolverControl::NoConvergence. Usually, - // this would abort the program because we - // don't catch them in our usual - // solve() functions. This is - // certainly not what we want to happen - // here. Rather, we want to switch to the - // strong solver and continue the solution - // process with whatever vector we got so - // far. Hence, we catch the exception with - // the C++ try/catch mechanism. We then - // simply go through the same solver - // sequence again in the catch - // clause, this time passing the @p true - // flag to the preconditioner for the - // strong solver, signaling an approximate - // CG solve. + // LinearSolvers::BlockSchurPreconditioner object). If we + // converge, everything is fine. If we do not converge, the solver control + // object will throw an exception SolverControl::NoConvergence. Usually, + // this would abort the program because we don't catch them in our usual + // solve() functions. This is certainly not what we want to + // happen here. Rather, we want to switch to the strong solver and continue + // the solution process with whatever vector we got so far. Hence, we catch + // the exception with the C++ try/catch mechanism. We then simply go through + // the same solver sequence again in the catch clause, this + // time passing the @p true flag to the preconditioner for the strong + // solver, signaling an approximate CG solve. template void BoussinesqFlowProblem::solve () { @@ -3661,39 +2964,24 @@ namespace Step32 computing_timer.exit_section(); - // Now let's turn to the temperature - // part: First, we compute the time step - // size. We found that we need smaller - // time steps for 3D than for 2D for the - // shell geometry. 
This is because the - // cells are more distorted in that case - // (it is the smallest edge length that - // determines the CFL number). Instead of - // computing the time step from maximum - // velocity and minimal mesh size as in - // step-31, we compute local CFL numbers, - // i.e., on each cell we compute the - // maximum velocity times the mesh size, - // and compute the maximum of - // them. Hence, we need to choose the - // factor in front of the time step - // slightly smaller. + // Now let's turn to the temperature part: First, we compute the time step + // size. We found that we need smaller time steps for 3D than for 2D for + // the shell geometry. This is because the cells are more distorted in + // that case (it is the smallest edge length that determines the CFL + // number). Instead of computing the time step from maximum velocity and + // minimal mesh size as in step-31, we compute local CFL numbers, i.e., on + // each cell we compute the maximum velocity times the mesh size, and + // compute the maximum of them. Hence, we need to choose the factor in + // front of the time step slightly smaller. // - // After temperature right hand side - // assembly, we solve the linear system - // for temperature (with fully - // distributed vectors without any - // ghosts), apply constraints and copy - // the vector back to one with ghosts. + // After temperature right hand side assembly, we solve the linear system + // for temperature (with fully distributed vectors without any ghosts), + // apply constraints and copy the vector back to one with ghosts. // - // In the end, we extract the temperature - // range similarly to step-31 to produce - // some output (for example in order to - // help us choose the stabilization - // constants, as discussed in the - // introduction). The only difference is - // that we need to exchange maxima over - // all processors. + // In the end, we extract the temperature range similarly to step-31 to + // produce some output (for example in order to help us choose the + // stabilization constants, as discussed in the introduction). The only + // difference is that we need to exchange maxima over all processors. computing_timer.enter_section (" Assemble temperature rhs"); { old_time_step = time_step; @@ -3765,27 +3053,17 @@ namespace Step32 // @sect4{BoussinesqFlowProblem::output_results} - // Next comes the function that generates - // the output. The quantities to output - // could be introduced manually like we did - // in step-31. An alternative is to hand - // this task over to a class PostProcessor - // that inherits from the class - // DataPostprocessor, which can be attached - // to DataOut. This allows us to output - // derived quantities from the solution, - // like the friction heating included in - // this example. It overloads the virtual - // function - // DataPostprocessor::compute_derived_quantities_vector, - // which is then internally called from - // DataOut::build_patches. We have to give - // it values of the numerical solution, its - // derivatives, normals to the cell, the - // actual evaluation points and any - // additional quantities. This follows the - // same procedure as discussed in step-29 - // and other programs. + // Next comes the function that generates the output. The quantities to + // output could be introduced manually like we did in step-31. An + // alternative is to hand this task over to a class PostProcessor that + // inherits from the class DataPostprocessor, which can be attached to + // DataOut. 
This allows us to output derived quantities from the solution, + // like the friction heating included in this example. It overloads the + // virtual function DataPostprocessor::compute_derived_quantities_vector, + // which is then internally called from DataOut::build_patches. We have to + // give it values of the numerical solution, its derivatives, normals to the + // cell, the actual evaluation points and any additional quantities. This + // follows the same procedure as discussed in step-29 and other programs. template class BoussinesqFlowProblem::Postprocessor : public DataPostprocessor { @@ -3826,17 +3104,12 @@ namespace Step32 {} - // Here we define the names for the - // variables we want to output. These are - // the actual solution values for velocity, - // pressure, and temperature, as well as - // the friction heating and to each cell - // the number of the processor that owns - // it. This allows us to visualize the - // partitioning of the domain among the - // processors. Except for the velocity, - // which is vector-valued, all other - // quantities are scalar. + // Here we define the names for the variables we want to output. These are + // the actual solution values for velocity, pressure, and temperature, as + // well as the friction heating and to each cell the number of the processor + // that owns it. This allows us to visualize the partitioning of the domain + // among the processors. Except for the velocity, which is vector-valued, + // all other quantities are scalar. template std::vector BoussinesqFlowProblem::Postprocessor::get_names() const @@ -3877,26 +3150,18 @@ namespace Step32 } - // Now we implement the function that - // computes the derived quantities. As we - // also did for the output, we rescale the - // velocity from its SI units to something - // more readable, namely cm/year. Next, the - // pressure is scaled to be between 0 and - // the maximum pressure. This makes it more - // easily comparable -- in essence making - // all pressure variables positive or - // zero. Temperature is taken as is, and - // the friction heating is computed as $2 - // \eta \varepsilon(\mathbf{u}) \cdot - // \varepsilon(\mathbf{u})$. + // Now we implement the function that computes the derived quantities. As we + // also did for the output, we rescale the velocity from its SI units to + // something more readable, namely cm/year. Next, the pressure is scaled to + // be between 0 and the maximum pressure. This makes it more easily + // comparable -- in essence making all pressure variables positive or + // zero. Temperature is taken as is, and the friction heating is computed as + // $2 \eta \varepsilon(\mathbf{u}) \cdot \varepsilon(\mathbf{u})$. // - // The quantities we output here are more - // for illustration, rather than for actual - // scientific value. We come back to this - // briefly in the results section of this - // program and explain what one may in fact - // be interested in. + // The quantities we output here are more for illustration, rather than for + // actual scientific value. We come back to this briefly in the results + // section of this program and explain what one may in fact be interested + // in. template void BoussinesqFlowProblem::Postprocessor:: @@ -3936,40 +3201,26 @@ namespace Step32 } - // The output_results() - // function does mostly what the - // corresponding one did in to step-31, in - // particular the merging data from the two - // DoFHandler objects (for the Stokes and - // the temperature parts of the problem) - // into one. 
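For a single evaluation point, the friction heating term mentioned above is just a double contraction of the symmetrized velocity gradient with itself. A small sketch; the function name and the way the viscosity is passed in are illustrative, and in the program this computation of course happens inside the loop over evaluation points.

#include <deal.II/base/tensor.h>
#include <deal.II/base/symmetric_tensor.h>

// 2 * eta * eps(u) : eps(u), with eps(u) the symmetric gradient of the
// velocity at the evaluation point.
template <int dim>
double friction_heating (const dealii::Tensor<2,dim> &grad_u,
                         const double                 eta)
{
  const dealii::SymmetricTensor<2,dim> strain_rate = dealii::symmetrize (grad_u);
  return 2. * eta * strain_rate * strain_rate;
}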
There is one minor change: we - // make sure that each processor only works - // on the subdomain it owns locally (and - // not on ghost or artificial cells) when - // building the joint solution vector. The - // same will then have to be done in - // DataOut::build_patches(), but that - // function does so automatically. + // The output_results() function does mostly what the + // corresponding one did in to step-31, in particular the merging data from + // the two DoFHandler objects (for the Stokes and the temperature parts of + // the problem) into one. There is one minor change: we make sure that each + // processor only works on the subdomain it owns locally (and not on ghost + // or artificial cells) when building the joint solution vector. The same + // will then have to be done in DataOut::build_patches(), but that function + // does so automatically. // - // What we end up with is a set of patches - // that we can write using the functions in - // DataOutBase in a variety of output - // formats. Here, we then have to pay - // attention that what each processor - // writes is really only its own part of - // the domain, i.e. we will want to write - // each processor's contribution into a - // separate file. This we do by adding an - // additional number to the filename when - // we write the solution. This is not - // really new, we did it similarly in - // step-40. Note that we write in the - // compressed format @p .vtu instead of - // plain vtk files, which saves quite some + // What we end up with is a set of patches that we can write using the + // functions in DataOutBase in a variety of output formats. Here, we then + // have to pay attention that what each processor writes is really only its + // own part of the domain, i.e. we will want to write each processor's + // contribution into a separate file. This we do by adding an additional + // number to the filename when we write the solution. This is not really + // new, we did it similarly in step-40. Note that we write in the compressed + // format @p .vtu instead of plain vtk files, which saves quite some // storage. // - // All the rest of the work is done in the - // PostProcessor class. + // All the rest of the work is done in the PostProcessor class. template void BoussinesqFlowProblem::output_results () { @@ -4058,18 +3309,12 @@ namespace Step32 data_out.write_vtu (output); - // At this point, all processors have - // written their own files to disk. We - // could visualize them individually in - // Visit or Paraview, but in reality we - // of course want to visualize the whole - // set of files at once. To this end, we - // create a master file in each of the - // formats understood by Visit - // (.visit) and Paraview - // (.pvtu) on the zeroth - // processor that describes how the - // individual files are defining the + // At this point, all processors have written their own files to disk. We + // could visualize them individually in Visit or Paraview, but in reality + // we of course want to visualize the whole set of files at once. To this + // end, we create a master file in each of the formats understood by Visit + // (.visit) and Paraview (.pvtu) on the zeroth + // processor that describes how the individual files are defining the // global data set. if (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0) { @@ -4103,44 +3348,27 @@ namespace Step32 // @sect4{BoussinesqFlowProblem::refine_mesh} - // This function isn't really new - // either. 
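The per-processor file naming just described condenses to a few lines. This is a sketch, assuming build_patches() has already been called on the DataOut object; the file name pattern is illustrative, and the .pvtu/.visit master records are written separately on processor zero.

#include <deal.II/base/utilities.h>
#include <deal.II/numerics/data_out.h>

#include <fstream>
#include <string>

// Every processor writes "solution-XXXX.YYYY.vtu", with XXXX the time
// step number and YYYY its own subdomain number.
template <int dim>
void write_local_piece (const dealii::DataOut<dim> &data_out,
                        const unsigned int          timestep_number,
                        const unsigned int          subdomain_id)
{
  const std::string filename
    = ("solution-"
       + dealii::Utilities::int_to_string (timestep_number, 4)
       + "."
       + dealii::Utilities::int_to_string (subdomain_id, 4)
       + ".vtu");

  std::ofstream output (filename.c_str());
  data_out.write_vtu (output);
}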
Since the - // setup_dofs function that we - // call in the middle has its own timer - // section, we split timing this function - // into two sections. It will also allow us - // to easily identify which of the two is - // more expensive. + // This function isn't really new either. Since the setup_dofs + // function that we call in the middle has its own timer section, we split + // timing this function into two sections. It will also allow us to easily + // identify which of the two is more expensive. // - // One thing of note, however, is that we - // only want to compute error indicators on - // the locally owned subdomain. In order to - // achieve this, we pass one additional - // argument to the - // KellyErrorEstimator::estimate - // function. Note that the vector for error - // estimates is resized to the number of - // active cells present on the current - // process, which is less than the total - // number of active cells on all processors - // (but more than the number of locally - // owned active cells); each processor only - // has a few coarse cells around the - // locally owned ones, as also explained in - // step-40. + // One thing of note, however, is that we only want to compute error + // indicators on the locally owned subdomain. In order to achieve this, we + // pass one additional argument to the KellyErrorEstimator::estimate + // function. Note that the vector for error estimates is resized to the + // number of active cells present on the current process, which is less than + // the total number of active cells on all processors (but more than the + // number of locally owned active cells); each processor only has a few + // coarse cells around the locally owned ones, as also explained in step-40. // - // The local error estimates are then - // handed to a %parallel version of - // GridRefinement (in namespace - // parallel::distributed::GridRefinement, - // see also step-40) which looks at the - // errors and finds the cells that need - // refinement by comparing the error values - // across processors. As in step-31, we - // want to limit the maximum grid level. So - // in case some cells have been marked that - // are already at the finest level, we - // simply clear the refine flags. + // The local error estimates are then handed to a %parallel version of + // GridRefinement (in namespace parallel::distributed::GridRefinement, see + // also step-40) which looks at the errors and finds the cells that need + // refinement by comparing the error values across processors. As in + // step-31, we want to limit the maximum grid level. So in case some cells + // have been marked that are already at the finest level, we simply clear + // the refine flags. template void BoussinesqFlowProblem::refine_mesh (const unsigned int max_grid_level) { @@ -4168,20 +3396,14 @@ namespace Step32 cell != triangulation.end(); ++cell) cell->clear_refine_flag (); - // With all flags marked as necessary, we - // set up the - // parallel::distributed::SolutionTransfer - // object to transfer the solutions for - // the current time level and the next - // older one. The syntax is similar to - // the non-%parallel solution transfer - // (with the exception that here a - // pointer to the vector entries is - // enough). The remainder of the function - // is concerned with setting up the data - // structures again after mesh refinement - // and restoring the solution vectors on - // the new mesh. 
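The calls hidden behind this description follow the usual prepare/execute/interpolate pattern of SolutionTransfer. The following fragment is only a sketch with illustrative names, taken as if from inside such a function; re-distributing degrees of freedom and re-initializing the destination vectors, which has to happen between the last two calls, is indicated only by a comment.

#include <deal.II/distributed/grid_refinement.h>
#include <deal.II/distributed/solution_transfer.h>

// Hand the local error indicators to the parallel GridRefinement
// variant (the refine/coarsen fractions are only an example) ...
parallel::distributed::GridRefinement::
  refine_and_coarsen_fixed_fraction (triangulation,
                                     estimated_error_per_cell,
                                     0.6, 0.2);

// ... and transfer the solutions across the refinement step:
parallel::distributed::SolutionTransfer<dim,TrilinosWrappers::MPI::Vector>
  transfer (temperature_dof_handler);

triangulation.prepare_coarsening_and_refinement ();
transfer.prepare_for_coarsening_and_refinement (x_temperature);
triangulation.execute_coarsening_and_refinement ();

// (re-distribute degrees of freedom and reinit the new vectors here)

transfer.interpolate (x_new_temperature);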
+ // With all flags marked as necessary, we set up the + // parallel::distributed::SolutionTransfer object to transfer the + // solutions for the current time level and the next older one. The syntax + // is similar to the non-%parallel solution transfer (with the exception + // that here a pointer to the vector entries is enough). The remainder of + // the function is concerned with setting up the data structures again + // after mesh refinement and restoring the solution vectors on the new + // mesh. std::vector x_temperature (2); x_temperature[0] = &temperature_solution; x_temperature[1] = &old_temperature_solution; @@ -4238,17 +3460,12 @@ namespace Step32 // @sect4{BoussinesqFlowProblem::run} - // This is the final and controlling - // function in this class. It, in fact, - // runs the entire rest of the program and - // is, once more, very similar to - // step-31. We use a different mesh now (a - // GridGenerator::hyper_shell instead of a - // simple cube geometry), and use the - // project_temperature_field() - // function instead of the library function - // VectorTools::project, the - // rest is as before. + // This is the final and controlling function in this class. It, in fact, + // runs the entire rest of the program and is, once more, very similar to + // step-31. We use a different mesh now (a GridGenerator::hyper_shell + // instead of a simple cube geometry), and use the + // project_temperature_field() function instead of the library + // function VectorTools::project, the rest is as before. template void BoussinesqFlowProblem::run () { @@ -4313,27 +3530,16 @@ start_time_iteration: (timestep_number % parameters.graphical_output_interval == 0)) output_results (); - // In order to speed up linear - // solvers, we extrapolate the - // solutions from the old time levels - // to the new one. This gives a very - // good initial guess, cutting the - // number of iterations needed in - // solvers by more than one half. We - // do not need to extrapolate in the - // last iteration, so if we reached - // the final time, we stop here. + // In order to speed up linear solvers, we extrapolate the solutions + // from the old time levels to the new one. This gives a very good + // initial guess, cutting the number of iterations needed in solvers + // by more than one half. We do not need to extrapolate in the last + // iteration, so if we reached the final time, we stop here. // - // As the last thing during a - // time step (before actually - // bumping up the number of - // the time step), we check - // whether the current time - // step number is divisible - // by 100, and if so we let - // the computing timer print - // a summary of CPU times - // spent so far. + // As the last thing during a time step (before actually bumping up + // the number of the time step), we check whether the current time + // step number is divisible by 100, and if so we let the computing + // timer print a summary of CPU times spent so far. if (time > parameters.end_time * EquationData::year_in_seconds) break; @@ -4344,7 +3550,8 @@ start_time_iteration: old_temperature_solution = temperature_solution; if (old_time_step > 0) { - //Trilinos sadd does not like ghost vectors even as input. Copy into distributed vectors for now: + //Trilinos sadd does not like ghost vectors even as input. 
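The extrapolation itself then becomes a single sadd() call on the non-ghosted copies. A sketch of the idea, with illustrative vector names (the same block would be repeated for the temperature vectors):

#include <deal.II/lac/trilinos_block_vector.h>

// u_new = u + (dt/dt_old) * (u - u_prev), written as a scaled add:
// u_new = (1 + dt/dt_old) * u - (dt/dt_old) * u_prev.
void extrapolate_stokes_solution
  (TrilinosWrappers::MPI::BlockVector       &stokes_solution,
   const TrilinosWrappers::MPI::BlockVector &previous_stokes_solution,
   const TrilinosWrappers::MPI::BlockVector &stokes_rhs,
   const double                              time_step,
   const double                              old_time_step)
{
  TrilinosWrappers::MPI::BlockVector distributed_solution (stokes_rhs);
  distributed_solution = stokes_solution;

  TrilinosWrappers::MPI::BlockVector distributed_previous_solution (stokes_rhs);
  distributed_previous_solution = previous_stokes_solution;

  distributed_solution.sadd (1. + time_step/old_time_step,
                             -time_step/old_time_step,
                             distributed_previous_solution);

  stokes_solution = distributed_solution;
}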
Copy + //into distributed vectors for now: { TrilinosWrappers::MPI::BlockVector distr_solution (stokes_rhs); distr_solution = stokes_solution; @@ -4373,11 +3580,8 @@ start_time_iteration: } while (true); - // If we are generating graphical - // output, do so also for the last - // time step unless we had just - // done so before we left the - // do-while loop + // If we are generating graphical output, do so also for the last time + // step unless we had just done so before we left the do-while loop if ((parameters.generate_graphical_output == true) && !((timestep_number-1) % parameters.graphical_output_interval == 0)) @@ -4389,23 +3593,16 @@ start_time_iteration: // @sect3{The main function} -// The main function is short as usual and -// very similar to the one in step-31. Since -// we use a parameter file which is specified -// as an argument in the command line, we -// have to read it in here and pass it on to -// the Parameters class for parsing. If no -// filename is given in the command line, we -// simply use the \step-32.prm -// file which is distributed together with -// the program. +// The main function is short as usual and very similar to the one in +// step-31. Since we use a parameter file which is specified as an argument in +// the command line, we have to read it in here and pass it on to the +// Parameters class for parsing. If no filename is given in the command line, +// we simply use the \step-32.prm file which is distributed +// together with the program. // -// Because 3d computations are simply -// very slow unless you throw a lot -// of processors at them, the program -// defaults to 2d. You can get the 3d -// version by changing the constant -// dimension below to 3. +// Because 3d computations are simply very slow unless you throw a lot of +// processors at them, the program defaults to 2d. You can get the 3d version +// by changing the constant dimension below to 3. int main (int argc, char *argv[]) { using namespace Step32; diff --git a/deal.II/examples/step-33/step-33.cc b/deal.II/examples/step-33/step-33.cc index e402227987..1f4df88bde 100644 --- a/deal.II/examples/step-33/step-33.cc +++ b/deal.II/examples/step-33/step-33.cc @@ -11,8 +11,7 @@ // @sect3{Include files} -// First a standard set of deal.II -// includes. Nothing special to comment on +// First a standard set of deal.II includes. Nothing special to comment on // here: #include #include @@ -45,27 +44,21 @@ #include #include -// Then, as mentioned in the introduction, we -// use various Trilinos packages as linear -// solvers as well as for automatic -// differentiation. These are in the +// Then, as mentioned in the introduction, we use various Trilinos packages as +// linear solvers as well as for automatic differentiation. These are in the // following include files. // -// Since deal.II provides interfaces to the -// basic Trilinos matrices, vectors, -// preconditioners and solvers, we include -// them similarly as deal.II linear algebra -// structures. +// Since deal.II provides interfaces to the basic Trilinos matrices, vectors, +// preconditioners and solvers, we include them similarly as deal.II linear +// algebra structures. 
#include #include #include #include -// Sacado is the automatic differentiation -// package within Trilinos, which is used -// to find the Jacobian for a fully -// implicit Newton iteration: +// Sacado is the automatic differentiation package within Trilinos, which is +// used to find the Jacobian for a fully implicit Newton iteration: #include @@ -75,10 +68,8 @@ #include #include -// To end this section, introduce everything -// in the dealii library into the namespace -// into which the contents of this program -// will go: +// To end this section, introduce everything in the dealii library into the +// namespace into which the contents of this program will go: namespace Step33 { using namespace dealii; @@ -86,69 +77,45 @@ namespace Step33 // @sect3{Euler equation specifics} - // Here we define the flux function for this - // particular system of conservation laws, as - // well as pretty much everything else that's - // specific to the Euler equations for gas - // dynamics, for reasons discussed in the - // introduction. We group all this into a - // structure that defines everything that has - // to do with the flux. All members of this - // structure are static, i.e. the structure - // has no actual state specified by instance - // member variables. The better way to do - // this, rather than a structure with all - // static members would be to use a namespace - // -- but namespaces can't be templatized and - // we want some of the member variables of - // the structure to depend on the space - // dimension, which we in our usual way - // introduce using a template parameter. + // Here we define the flux function for this particular system of + // conservation laws, as well as pretty much everything else that's specific + // to the Euler equations for gas dynamics, for reasons discussed in the + // introduction. We group all this into a structure that defines everything + // that has to do with the flux. All members of this structure are static, + // i.e. the structure has no actual state specified by instance member + // variables. The better way to do this, rather than a structure with all + // static members would be to use a namespace -- but namespaces can't be + // templatized and we want some of the member variables of the structure to + // depend on the space dimension, which we in our usual way introduce using + // a template parameter. template struct EulerEquations { // @sect4{Component description} - // First a few variables that - // describe the various components of our - // solution vector in a generic way. This - // includes the number of components in the - // system (Euler's equations have one entry - // for momenta in each spatial direction, - // plus the energy and density components, - // for a total of dim+2 - // components), as well as functions that - // describe the index within the solution - // vector of the first momentum component, - // the density component, and the energy - // density component. Note that all these - // %numbers depend on the space dimension; - // defining them in a generic way (rather - // than by implicit convention) makes our - // code more flexible and makes it easier - // to later extend it, for example by - // adding more components to the equations. + // First a few variables that describe the various components of our + // solution vector in a generic way. 
This includes the number of + // components in the system (Euler's equations have one entry for momenta + // in each spatial direction, plus the energy and density components, for + // a total of dim+2 components), as well as functions that + // describe the index within the solution vector of the first momentum + // component, the density component, and the energy density + // component. Note that all these %numbers depend on the space dimension; + // defining them in a generic way (rather than by implicit convention) + // makes our code more flexible and makes it easier to later extend it, + // for example by adding more components to the equations. static const unsigned int n_components = dim + 2; static const unsigned int first_momentum_component = 0; static const unsigned int density_component = dim; static const unsigned int energy_component = dim+1; - // When generating graphical - // output way down in this - // program, we need to specify - // the names of the solution - // variables as well as how the - // various components group into - // vector and scalar fields. We - // could describe this there, but - // in order to keep things that - // have to do with the Euler - // equation localized here and - // the rest of the program as - // generic as possible, we - // provide this sort of - // information in the following - // two functions: + // When generating graphical output way down in this program, we need to + // specify the names of the solution variables as well as how the various + // components group into vector and scalar fields. We could describe this + // there, but in order to keep things that have to do with the Euler + // equation localized here and the rest of the program as generic as + // possible, we provide this sort of information in the following two + // functions: static std::vector component_names () @@ -179,58 +146,36 @@ namespace Step33 // @sect4{Transformations between variables} - // Next, we define the gas - // constant. We will set it to 1.4 - // in its definition immediately - // following the declaration of - // this class (unlike integer - // variables, like the ones above, - // static const floating point - // member variables cannot be - // initialized within the class - // declaration in C++). This value - // of 1.4 is representative of a - // gas that consists of molecules - // composed of two atoms, such as - // air which consists up to small - // traces almost entirely of $N_2$ - // and $O_2$. + // Next, we define the gas constant. We will set it to 1.4 in its + // definition immediately following the declaration of this class (unlike + // integer variables, like the ones above, static const floating point + // member variables cannot be initialized within the class declaration in + // C++). This value of 1.4 is representative of a gas that consists of + // molecules composed of two atoms, such as air which consists up to small + // traces almost entirely of $N_2$ and $O_2$. static const double gas_gamma; - // In the following, we will need to - // compute the kinetic energy and the - // pressure from a vector of conserved - // variables. This we can do based on the - // energy density and the kinetic energy - // $\frac 12 \rho |\mathbf v|^2 = - // \frac{|\rho \mathbf v|^2}{2\rho}$ - // (note that the independent variables - // contain the momentum components $\rho - // v_i$, not the velocities $v_i$). + // In the following, we will need to compute the kinetic energy and the + // pressure from a vector of conserved variables. 
This we can do based on + // the energy density and the kinetic energy $\frac 12 \rho |\mathbf v|^2 + // = \frac{|\rho \mathbf v|^2}{2\rho}$ (note that the independent + // variables contain the momentum components $\rho v_i$, not the + // velocities $v_i$). // - // There is one slight problem: We will - // need to call the following functions - // with input arguments of type + // There is one slight problem: We will need to call the following + // functions with input arguments of type // std::vector@ and - // Vector@. The - // problem is that the former has an - // access operator - // operator[] whereas the - // latter, for historical reasons, has - // operator(). We wouldn't - // be able to write the function in a - // generic way if we were to use one or - // the other of these. Fortunately, we - // can use the following trick: instead - // of writing v[i] or - // v(i), we can use - // *(v.begin() + i), i.e. we - // generate an iterator that points to - // the ith element, and then - // dereference it. This works for both - // kinds of vectors -- not the prettiest - // solution, but one that works. + // Vector@. The problem is that the former has an + // access operator operator[] whereas the latter, for + // historical reasons, has operator(). We wouldn't be able to + // write the function in a generic way if we were to use one or the other + // of these. Fortunately, we can use the following trick: instead of + // writing v[i] or v(i), we can use + // *(v.begin() + i), i.e. we generate an iterator that points + // to the ith element, and then dereference it. This works + // for both kinds of vectors -- not the prettiest solution, but one that + // works. template static number @@ -259,47 +204,28 @@ namespace Step33 // @sect4{EulerEquations::compute_flux_matrix} - // We define the flux function - // $F(W)$ as one large matrix. - // Each row of this matrix - // represents a scalar - // conservation law for the - // component in that row. The - // exact form of this matrix is - // given in the - // introduction. Note that we - // know the size of the matrix: - // it has as many rows as the - // system has components, and - // dim columns; - // rather than using a FullMatrix - // object for such a matrix - // (which has a variable number - // of rows and columns and must - // therefore allocate memory on - // the heap each time such a - // matrix is created), we use a - // rectangular array of numbers - // right away. + // We define the flux function $F(W)$ as one large matrix. Each row of + // this matrix represents a scalar conservation law for the component in + // that row. The exact form of this matrix is given in the + // introduction. Note that we know the size of the matrix: it has as many + // rows as the system has components, and dim columns; rather + // than using a FullMatrix object for such a matrix (which has a variable + // number of rows and columns and must therefore allocate memory on the + // heap each time such a matrix is created), we use a rectangular array of + // numbers right away. // - // We templatize the numerical type of - // the flux function so that we may use - // the automatic differentiation type - // here. Similarly, we will call the - // function with different input vector - // data types, so we templatize on it as - // well: + // We templatize the numerical type of the flux function so that we may + // use the automatic differentiation type here. 
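Put together, a function using this iterator trick looks roughly as follows. It is written as if it were another static member of EulerEquations, so that gas_gamma and the component indices introduced above are in scope, and it condenses the kinetic energy and pressure computations into one body.

// p = (gamma - 1) * (E - |rho v|^2 / (2 rho)), accessing the elements
// of W through *(W.begin()+i) so that both std::vector and Vector
// arguments work.
template <typename number, typename InputVector>
static number
compute_pressure_sketch (const InputVector &W)
{
  number kinetic_energy = 0;
  for (unsigned int d=0; d<dim; ++d)
    kinetic_energy += *(W.begin() + first_momentum_component + d) *
                      *(W.begin() + first_momentum_component + d);
  kinetic_energy *= 1./(2 * *(W.begin() + density_component));

  return (gas_gamma - 1.0) *
         (*(W.begin() + energy_component) - kinetic_energy);
}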
Similarly, we will call + // the function with different input vector data types, so we templatize + // on it as well: template static void compute_flux_matrix (const InputVector &W, number (&flux)[n_components][dim]) { - // First compute the pressure that - // appears in the flux matrix, and - // then compute the first - // dim columns of the - // matrix that correspond to the - // momentum terms: + // First compute the pressure that appears in the flux matrix, and then + // compute the first dim columns of the matrix that + // correspond to the momentum terms: const number pressure = compute_pressure (W); for (unsigned int d=0; d static void numerical_normal_flux (const Point &normal, @@ -366,22 +283,14 @@ namespace Step33 // @sect4{EulerEquations::compute_forcing_vector} - // In the same way as describing the flux - // function $\mathbf F(\mathbf w)$, we - // also need to have a way to describe - // the right hand side forcing term. As - // mentioned in the introduction, we - // consider only gravity here, which - // leads to the specific form $\mathbf - // G(\mathbf w) = \left( - // g_1\rho, g_2\rho, g_3\rho, 0, - // \rho \mathbf g \cdot \mathbf v - // \right)^T$, shown here for - // the 3d case. More specifically, we - // will consider only $\mathbf - // g=(0,0,-1)^T$ in 3d, or $\mathbf - // g=(0,-1)^T$ in 2d. This naturally - // leads to the following function: + // In the same way as describing the flux function $\mathbf F(\mathbf w)$, + // we also need to have a way to describe the right hand side forcing + // term. As mentioned in the introduction, we consider only gravity here, + // which leads to the specific form $\mathbf G(\mathbf w) = \left( + // g_1\rho, g_2\rho, g_3\rho, 0, \rho \mathbf g \cdot \mathbf v + // \right)^T$, shown here for the 3d case. More specifically, we will + // consider only $\mathbf g=(0,0,-1)^T$ in 3d, or $\mathbf g=(0,-1)^T$ in + // 2d. This naturally leads to the following function: template static void compute_forcing_vector (const InputVector &W, @@ -408,11 +317,9 @@ namespace Step33 // @sect4{Dealing with boundary conditions} - // Another thing we have to deal with is - // boundary conditions. To this end, let - // us first define the kinds of boundary - // conditions we currently know how to - // deal with: + // Another thing we have to deal with is boundary conditions. To this end, + // let us first define the kinds of boundary conditions we currently know + // how to deal with: enum BoundaryKind { inflow_boundary, @@ -422,73 +329,43 @@ namespace Step33 }; - // The next part is to actually decide - // what to do at each kind of - // boundary. To this end, remember from - // the introduction that boundary - // conditions are specified by choosing a - // value $\mathbf w^-$ on the outside of - // a boundary given an inhomogeneity - // $\mathbf j$ and possibly the - // solution's value $\mathbf w^+$ on the - // inside. Both are then passed to the - // numerical flux $\mathbf - // H(\mathbf{w}^+, \mathbf{w}^-, - // \mathbf{n})$ to define boundary - // contributions to the bilinear form. + // The next part is to actually decide what to do at each kind of + // boundary. To this end, remember from the introduction that boundary + // conditions are specified by choosing a value $\mathbf w^-$ on the + // outside of a boundary given an inhomogeneity $\mathbf j$ and possibly + // the solution's value $\mathbf w^+$ on the inside. 
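As a concrete illustration of the gravity forcing described above, again written as if it were a static member of EulerEquations and only for a gravity vector of unit magnitude pointing in the negative last coordinate direction:

template <typename InputVector, typename number>
static void
compute_gravity_forcing (const InputVector &W,
                         number (&forcing)[n_components])
{
  const double gravity = -1.;

  // All components of G(w) vanish except for the momentum component in
  // the direction of gravity, which receives g*rho, and the energy
  // component, which receives rho * g.v = g * (rho v) in that direction.
  for (unsigned int c=0; c<n_components; ++c)
    forcing[c] = 0.;

  forcing[first_momentum_component+dim-1]
    = gravity * *(W.begin() + density_component);
  forcing[energy_component]
    = gravity * *(W.begin() + first_momentum_component + dim - 1);
}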
Both are then passed + // to the numerical flux $\mathbf H(\mathbf{w}^+, \mathbf{w}^-, + // \mathbf{n})$ to define boundary contributions to the bilinear form. // - // Boundary conditions can in some cases - // be specified for each component of the - // solution vector independently. For - // example, if component $c$ is marked - // for inflow, then $w^-_c = j_c$. If it - // is an outflow, then $w^-_c = - // w^+_c$. These two simple cases are - // handled first in the function below. + // Boundary conditions can in some cases be specified for each component + // of the solution vector independently. For example, if component $c$ is + // marked for inflow, then $w^-_c = j_c$. If it is an outflow, then $w^-_c + // = w^+_c$. These two simple cases are handled first in the function + // below. // - // There is a little snag that makes this - // function unpleasant from a C++ - // language viewpoint: The output vector - // Wminus will of course be - // modified, so it shouldn't be a - // const argument. Yet it is - // in the implementation below, and needs - // to be in order to allow the code to - // compile. The reason is that we call - // this function at a place where - // Wminus is of type - // Table@<2,Sacado::Fad::DFad@ - // @>, this being 2d table with - // indices representing the quadrature - // point and the vector component, - // respectively. We call this function - // with Wminus[q] as last - // argument; subscripting a 2d table - // yields a temporary accessor object - // representing a 1d vector, just what we - // want here. The problem is that a - // temporary accessor object can't be - // bound to a non-const reference - // argument of a function, as we would - // like here, according to the C++ 1998 - // and 2003 standards (something that - // will be fixed with the next standard - // in the form of rvalue references). We - // get away with making the output - // argument here a constant because it is - // the accessor object that's - // constant, not the table it points to: - // that one can still be written to. The - // hack is unpleasant nevertheless - // because it restricts the kind of data - // types that may be used as template - // argument to this function: a regular - // vector isn't going to do because that - // one can not be written to when marked - // const. With no good - // solution around at the moment, we'll - // go with the pragmatic, even if not - // pretty, solution shown here: + // There is a little snag that makes this function unpleasant from a C++ + // language viewpoint: The output vector Wminus will of + // course be modified, so it shouldn't be a const + // argument. Yet it is in the implementation below, and needs to be in + // order to allow the code to compile. The reason is that we call this + // function at a place where Wminus is of type + // Table@<2,Sacado::Fad::DFad@ @>, this being 2d + // table with indices representing the quadrature point and the vector + // component, respectively. We call this function with + // Wminus[q] as last argument; subscripting a 2d table yields + // a temporary accessor object representing a 1d vector, just what we want + // here. The problem is that a temporary accessor object can't be bound to + // a non-const reference argument of a function, as we would like here, + // according to the C++ 1998 and 2003 standards (something that will be + // fixed with the next standard in the form of rvalue references). 
We get + // away with making the output argument here a constant because it is the + // accessor object that's constant, not the table it points to: + // that one can still be written to. The hack is unpleasant nevertheless + // because it restricts the kind of data types that may be used as + // template argument to this function: a regular vector isn't going to do + // because that one can not be written to when marked + // const. With no good solution around at the moment, we'll + // go with the pragmatic, even if not pretty, solution shown here: template static void @@ -513,20 +390,13 @@ namespace Step33 break; } - // Prescribed pressure boundary - // conditions are a bit more - // complicated by the fact that - // even though the pressure is - // prescribed, we really are - // setting the energy component - // here, which will depend on - // velocity and pressure. So - // even though this seems like - // a Dirichlet type boundary - // condition, we get - // sensitivities of energy to - // velocity and density (unless - // these are also prescribed): + // Prescribed pressure boundary conditions are a bit more + // complicated by the fact that even though the pressure is + // prescribed, we really are setting the energy component here, + // which will depend on velocity and pressure. So even though this + // seems like a Dirichlet type boundary condition, we get + // sensitivities of energy to velocity and density (unless these are + // also prescribed): case pressure_boundary: { const typename DataVector::value_type @@ -553,14 +423,10 @@ namespace Step33 case no_penetration_boundary: { - // We prescribe the - // velocity (we are dealing with a - // particular component here so - // that the average of the - // velocities is orthogonal to the - // surface normal. This creates - // sensitivies of across the - // velocity components. + // We prescribe the velocity (we are dealing with a particular + // component here so that the average of the velocities is + // orthogonal to the surface normal. This creates sensitivies of + // across the velocity components. Sacado::Fad::DFad vdotn = 0; for (unsigned int d = 0; d < dim; d++) { @@ -579,29 +445,19 @@ namespace Step33 // @sect4{EulerEquations::compute_refinement_indicators} - // In this class, we also want to specify - // how to refine the mesh. The class - // ConservationLaw that will - // use all the information we provide - // here in the EulerEquation - // class is pretty agnostic about the - // particular conservation law it solves: - // as doesn't even really care how many - // components a solution vector - // has. Consequently, it can't know what - // a reasonable refinement indicator - // would be. On the other hand, here we - // do, or at least we can come up with a - // reasonable choice: we simply look at - // the gradient of the density, and - // compute - // $\eta_K=\log\left(1+|\nabla\rho(x_K)|\right)$, - // where $x_K$ is the center of cell $K$. + // In this class, we also want to specify how to refine the mesh. The + // class ConservationLaw that will use all the information we + // provide here in the EulerEquation class is pretty agnostic + // about the particular conservation law it solves: as doesn't even really + // care how many components a solution vector has. Consequently, it can't + // know what a reasonable refinement indicator would be. 
On the other + // hand, here we do, or at least we can come up with a reasonable choice: + // we simply look at the gradient of the density, and compute + // $\eta_K=\log\left(1+|\nabla\rho(x_K)|\right)$, where $x_K$ is the + // center of cell $K$. // - // There are certainly a number of - // equally reasonable refinement - // indicators, but this one does, and it - // is easy to compute: + // There are certainly a number of equally reasonable refinement + // indicators, but this one does, and it is easy to compute: static void compute_refinement_indicators (const DoFHandler &dof_handler, @@ -639,70 +495,43 @@ namespace Step33 // @sect4{EulerEquations::Postprocessor} - // Finally, we declare a class that - // implements a postprocessing of data - // components. The problem this class - // solves is that the variables in the - // formulation of the Euler equations we - // use are in conservative rather than - // physical form: they are momentum - // densities $\mathbf m=\rho\mathbf v$, - // density $\rho$, and energy density - // $E$. What we would like to also put - // into our output file are velocities - // $\mathbf v=\frac{\mathbf m}{\rho}$ and - // pressure $p=(\gamma-1)(E-\frac{1}{2} - // \rho |\mathbf v|^2)$. + // Finally, we declare a class that implements a postprocessing of data + // components. The problem this class solves is that the variables in the + // formulation of the Euler equations we use are in conservative rather + // than physical form: they are momentum densities $\mathbf m=\rho\mathbf + // v$, density $\rho$, and energy density $E$. What we would like to also + // put into our output file are velocities $\mathbf v=\frac{\mathbf + // m}{\rho}$ and pressure $p=(\gamma-1)(E-\frac{1}{2} \rho |\mathbf + // v|^2)$. // - // In addition, we would like to add the - // possibility to generate schlieren - // plots. Schlieren plots are a way to - // visualize shocks and other sharp - // interfaces. The word "schlieren" is a - // German word that may be translated as - // "striae" -- it may be simpler to - // explain it by an example, however: - // schlieren is what you see when you, - // for example, pour highly concentrated - // alcohol, or a transparent saline - // solution, into water; the two have the - // same color, but they have different - // refractive indices and so before they - // are fully mixed light goes through the - // mixture along bent rays that lead to - // brightness variations if you look at - // it. That's "schlieren". A similar - // effect happens in compressible flow - // because the refractive index - // depends on the pressure (and therefore - // the density) of the gas. + // In addition, we would like to add the possibility to generate schlieren + // plots. Schlieren plots are a way to visualize shocks and other sharp + // interfaces. The word "schlieren" is a German word that may be + // translated as "striae" -- it may be simpler to explain it by an + // example, however: schlieren is what you see when you, for example, pour + // highly concentrated alcohol, or a transparent saline solution, into + // water; the two have the same color, but they have different refractive + // indices and so before they are fully mixed light goes through the + // mixture along bent rays that lead to brightness variations if you look + // at it. That's "schlieren". A similar effect happens in compressible + // flow because the refractive index depends on the pressure (and + // therefore the density) of the gas. 
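A sketch of how the indicator $\eta_K$ introduced a few paragraphs above can be evaluated, using a one-point midpoint quadrature formula to obtain the density gradient at the cell center. The function name is illustrative and the vector of indicators is assumed to already have one entry per active cell.

#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/lac/vector.h>

#include <cmath>
#include <vector>

template <int dim>
void
compute_density_gradient_indicators (const dealii::DoFHandler<dim> &dof_handler,
                                     const dealii::Vector<double>  &solution,
                                     dealii::Vector<double>        &indicators)
{
  using namespace dealii;

  const QMidpoint<dim> quadrature;
  FEValues<dim> fe_values (dof_handler.get_fe(), quadrature,
                           update_gradients);

  // One gradient per quadrature point (here: one) and solution component.
  std::vector<std::vector<Tensor<1,dim> > >
    gradients (1, std::vector<Tensor<1,dim> >(EulerEquations<dim>::n_components));

  typename DoFHandler<dim>::active_cell_iterator
    cell = dof_handler.begin_active(),
    endc = dof_handler.end();
  for (unsigned int cell_no=0; cell!=endc; ++cell, ++cell_no)
    {
      fe_values.reinit (cell);
      fe_values.get_function_gradients (solution, gradients);

      indicators(cell_no)
        = std::log (1. + gradients[0][EulerEquations<dim>::density_component].norm());
    }
}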
// - // The origin of the word refers to - // two-dimensional projections of a - // three-dimensional volume (we see a 2d - // picture of the 3d fluid). In - // computational fluid dynamics, we can - // get an idea of this effect by - // considering what causes it: density - // variations. Schlieren plots are - // therefore produced by plotting - // $s=|\nabla \rho|^2$; obviously, $s$ is - // large in shocks and at other highly - // dynamic places. If so desired by the - // user (by specifying this in the input - // file), we would like to generate these - // schlieren plots in addition to the - // other derived quantities listed above. + // The origin of the word refers to two-dimensional projections of a + // three-dimensional volume (we see a 2d picture of the 3d fluid). In + // computational fluid dynamics, we can get an idea of this effect by + // considering what causes it: density variations. Schlieren plots are + // therefore produced by plotting $s=|\nabla \rho|^2$; obviously, $s$ is + // large in shocks and at other highly dynamic places. If so desired by + // the user (by specifying this in the input file), we would like to + // generate these schlieren plots in addition to the other derived + // quantities listed above. // - // The implementation of the algorithms - // to compute derived quantities from the - // ones that solve our problem, and to - // output them into data file, rests on - // the DataPostprocessor class. It has - // extensive documentation, and other - // uses of the class can also be found in - // step-29. We therefore refrain from - // extensive comments. + // The implementation of the algorithms to compute derived quantities from + // the ones that solve our problem, and to output them into data file, + // rests on the DataPostprocessor class. It has extensive documentation, + // and other uses of the class can also be found in step-29. We therefore + // refrain from extensive comments. class Postprocessor : public DataPostprocessor { public: @@ -744,22 +573,15 @@ namespace Step33 {} - // This is the only function worth commenting - // on. When generating graphical output, the - // DataOut and related classes will call this - // function on each cell, with values, - // gradients, hessians, and normal vectors - // (in case we're working on faces) at each - // quadrature point. Note that the data at - // each quadrature point is itself - // vector-valued, namely the conserved - // variables. What we're going to do here is - // to compute the quantities we're interested - // in at each quadrature point. Note that for - // this we can ignore the hessians ("dduh") - // and normal vectors; to avoid compiler - // warnings about unused variables, we - // comment out their names. + // This is the only function worth commenting on. When generating graphical + // output, the DataOut and related classes will call this function on each + // cell, with values, gradients, hessians, and normal vectors (in case we're + // working on faces) at each quadrature point. Note that the data at each + // quadrature point is itself vector-valued, namely the conserved + // variables. What we're going to do here is to compute the quantities we're + // interested in at each quadrature point. Note that for this we can ignore + // the hessians ("dduh") and normal vectors; to avoid compiler warnings + // about unused variables, we comment out their names. 
template void EulerEquations::Postprocessor:: @@ -770,21 +592,14 @@ namespace Step33 const std::vector > & /*evaluation_points*/, std::vector > &computed_quantities) const { - // At the beginning of the function, let us - // make sure that all variables have the - // correct sizes, so that we can access - // individual vector elements without - // having to wonder whether we might read - // or write invalid elements; we also check - // that the duh vector only - // contains data if we really need it (the - // system knows about this because we say - // so in the - // get_needed_update_flags() - // function below). For the inner vectors, - // we check that at least the first element - // of the outer vector has the correct - // inner size: + // At the beginning of the function, let us make sure that all variables + // have the correct sizes, so that we can access individual vector + // elements without having to wonder whether we might read or write + // invalid elements; we also check that the duh vector only + // contains data if we really need it (the system knows about this because + // we say so in the get_needed_update_flags() function + // below). For the inner vectors, we check that at least the first element + // of the outer vector has the correct inner size: const unsigned int n_quadrature_points = uh.size(); if (do_schlieren_plot == true) @@ -805,18 +620,13 @@ namespace Step33 else Assert (computed_quantities[0].size() == dim+1, ExcInternalError()); - // Then loop over all quadrature points and - // do our work there. The code should be - // pretty self-explanatory. The order of - // output variables is first - // dim velocities, then the - // pressure, and if so desired the - // schlieren plot. Note that we try to be - // generic about the order of variables in - // the input vector, using the - // first_momentum_component - // and density_component - // information: + // Then loop over all quadrature points and do our work there. The code + // should be pretty self-explanatory. The order of output variables is + // first dim velocities, then the pressure, and if so desired + // the schlieren plot. Note that we try to be generic about the order of + // variables in the input vector, using the + // first_momentum_component and + // density_component information: for (unsigned int q=0; qParameters. Of these - // classes, there are a few that - // group the parameters for - // individual groups, such as for - // solvers, mesh refinement, or - // output. Each of these classes have - // functions - // declare_parameters() - // and - // parse_parameters() - // that declare parameter subsections - // and entries in a ParameterHandler - // object, and retrieve actual - // parameter values from such an - // object, respectively. These - // classes declare all their - // parameters in subsections of the - // ParameterHandler. + // We will split the run-time parameters into a few separate structures, + // which we will all put into a namespace Parameters. Of these + // classes, there are a few that group the parameters for individual groups, + // such as for solvers, mesh refinement, or output. Each of these classes + // have functions declare_parameters() and + // parse_parameters() that declare parameter subsections and + // entries in a ParameterHandler object, and retrieve actual parameter + // values from such an object, respectively. These classes declare all their + // parameters in subsections of the ParameterHandler. 
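The declare_parameters()/parse_parameters() pair mentioned here typically follows the pattern below. This is a minimal, made-up example meant only to show the mechanics; the subsection and entry are not among the program's actual parameters.

#include <deal.II/base/parameter_handler.h>

struct ExampleSection
{
  double tolerance;

  static void declare_parameters (dealii::ParameterHandler &prm)
  {
    prm.enter_subsection ("example section");
    {
      prm.declare_entry ("tolerance", "1e-10",
                         dealii::Patterns::Double (0),
                         "linear solver tolerance");
    }
    prm.leave_subsection ();
  }

  void parse_parameters (dealii::ParameterHandler &prm)
  {
    prm.enter_subsection ("example section");
    {
      tolerance = prm.get_double ("tolerance");
    }
    prm.leave_subsection ();
  }
};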
// - // The final class of the following - // namespace combines all the - // previous classes by deriving from - // them and taking care of a few more - // entries at the top level of the - // input file, as well as a few odd - // other entries in subsections that - // are too short to warrant a - // structure by themselves. + // The final class of the following namespace combines all the previous + // classes by deriving from them and taking care of a few more entries at + // the top level of the input file, as well as a few odd other entries in + // subsections that are too short to warrant a structure by themselves. // - // It is worth pointing out one thing here: - // None of the classes below have a - // constructor that would initialize the - // various member variables. This isn't a - // problem, however, since we will read all - // variables declared in these classes from - // the input file (or indirectly: a - // ParameterHandler object will read it from - // there, and we will get the values from - // this object), and they will be initialized - // this way. In case a certain variable is - // not specified at all in the input file, - // this isn't a problem either: The - // ParameterHandler class will in this case - // simply take the default value that was - // specified when declaring an entry in the - // declare_parameters() - // functions of the classes below. + // It is worth pointing out one thing here: None of the classes below have a + // constructor that would initialize the various member variables. This + // isn't a problem, however, since we will read all variables declared in + // these classes from the input file (or indirectly: a ParameterHandler + // object will read it from there, and we will get the values from this + // object), and they will be initialized this way. In case a certain + // variable is not specified at all in the input file, this isn't a problem + // either: The ParameterHandler class will in this case simply take the + // default value that was specified when declaring an entry in the + // declare_parameters() functions of the classes below. namespace Parameters { // @sect4{Parameters::Solver} // - // The first of these classes deals - // with parameters for the linear - // inner solver. It offers - // parameters that indicate which - // solver to use (GMRES as a solver - // for general non-symmetric - // indefinite systems, or a sparse - // direct solver), the amount of - // output to be produced, as well - // as various parameters that tweak - // the thresholded incomplete LU - // decomposition (ILUT) that we use - // as a preconditioner for GMRES. + // The first of these classes deals with parameters for the linear inner + // solver. It offers parameters that indicate which solver to use (GMRES + // as a solver for general non-symmetric indefinite systems, or a sparse + // direct solver), the amount of output to be produced, as well as various + // parameters that tweak the thresholded incomplete LU decomposition + // (ILUT) that we use as a preconditioner for GMRES. 
// - // In particular, the ILUT takes - // the following parameters: - // - ilut_fill: the number of extra - // entries to add when forming the ILU + // In particular, the ILUT takes the following parameters: + // - ilut_fill: the number of extra entries to add when forming the ILU // decomposition - // - ilut_atol, ilut_rtol: When - // forming the preconditioner, for - // certain problems bad conditioning - // (or just bad luck) can cause the - // preconditioner to be very poorly - // conditioned. Hence it can help to - // add diagonal perturbations to the - // original matrix and form the - // preconditioner for this slightly - // better matrix. ATOL is an absolute - // perturbation that is added to the - // diagonal before forming the prec, - // and RTOL is a scaling factor $rtol - // \geq 1$. - // - ilut_drop: The ILUT will - // drop any values that - // have magnitude less than this value. - // This is a way to manage the amount - // of memory used by this - // preconditioner. + // - ilut_atol, ilut_rtol: When forming the preconditioner, for certain + // problems bad conditioning (or just bad luck) can cause the + // preconditioner to be very poorly conditioned. Hence it can help to + // add diagonal perturbations to the original matrix and form the + // preconditioner for this slightly better matrix. ATOL is an absolute + // perturbation that is added to the diagonal before forming the prec, + // and RTOL is a scaling factor $rtol \geq 1$. + // - ilut_drop: The ILUT will drop any values that have magnitude less + // than this value. This is a way to manage the amount of memory used + // by this preconditioner. // - // The meaning of each parameter is - // also briefly described in the - // third argument of the - // ParameterHandler::declare_entry - // call in + // The meaning of each parameter is also briefly described in the third + // argument of the ParameterHandler::declare_entry call in // declare_parameters(). struct Solver { @@ -1092,12 +847,9 @@ namespace Step33 // @sect4{Parameters::Refinement} // - // Similarly, here are a few parameters - // that determine how the mesh is to be - // refined (and if it is to be refined at - // all). For what exactly the shock - // parameters do, see the mesh refinement - // functions further down. + // Similarly, here are a few parameters that determine how the mesh is to + // be refined (and if it is to be refined at all). For what exactly the + // shock parameters do, see the mesh refinement functions further down. struct Refinement { bool do_refine; @@ -1153,23 +905,16 @@ namespace Step33 // @sect4{Parameters::Flux} // - // Next a section on flux modifications to - // make it more stable. In particular, two - // options are offered to stabilize the - // Lax-Friedrichs flux: either choose - // $\mathbf{H}(\mathbf{a},\mathbf{b},\mathbf{n}) - // = - // \frac{1}{2}(\mathbf{F}(\mathbf{a})\cdot - // \mathbf{n} + \mathbf{F}(\mathbf{b})\cdot - // \mathbf{n} + \alpha (\mathbf{a} - - // \mathbf{b}))$ where $\alpha$ is either a - // fixed number specified in the input - // file, or where $\alpha$ is a mesh - // dependent value. In the latter case, it - // is chosen as $\frac{h}{2\delta T}$ with - // $h$ the diameter of the face to which - // the flux is applied, and $\delta T$ - // the current time step. + // Next a section on flux modifications to make it more stable. 
In + // particular, two options are offered to stabilize the Lax-Friedrichs + // flux: either choose $\mathbf{H}(\mathbf{a},\mathbf{b},\mathbf{n}) = + // \frac{1}{2}(\mathbf{F}(\mathbf{a})\cdot \mathbf{n} + + // \mathbf{F}(\mathbf{b})\cdot \mathbf{n} + \alpha (\mathbf{a} - + // \mathbf{b}))$ where $\alpha$ is either a fixed number specified in the + // input file, or where $\alpha$ is a mesh dependent value. In the latter + // case, it is chosen as $\frac{h}{2\delta T}$ with $h$ the diameter of + // the face to which the flux is applied, and $\delta T$ the current time + // step. struct Flux { enum StabilizationKind { constant, mesh_dependent }; @@ -1219,13 +964,10 @@ namespace Step33 // @sect4{Parameters::Output} // - // Then a section on output parameters. We - // offer to produce Schlieren plots (the - // squared gradient of the density, a tool - // to visualize shock fronts), and a time - // interval between graphical output in - // case we don't want an output file every - // time step. + // Then a section on output parameters. We offer to produce Schlieren + // plots (the squared gradient of the density, a tool to visualize shock + // fronts), and a time interval between graphical output in case we don't + // want an output file every time step. struct Output { bool schlieren_plot; @@ -1267,85 +1009,50 @@ namespace Step33 // @sect4{Parameters::AllParameters} // - // Finally the class that brings it all - // together. It declares a number of - // parameters itself, mostly ones at the - // top level of the parameter file as well - // as several in section too small to - // warrant their own classes. It also - // contains everything that is actually - // space dimension dependent, like initial - // or boundary conditions. + // Finally the class that brings it all together. It declares a number of + // parameters itself, mostly ones at the top level of the parameter file + // as well as several in section too small to warrant their own + // classes. It also contains everything that is actually space dimension + // dependent, like initial or boundary conditions. // - // Since this class is derived from all the - // ones above, the - // declare_parameters() and - // parse_parameters() - // functions call the respective functions - // of the base classes as well. + // Since this class is derived from all the ones above, the + // declare_parameters() and parse_parameters() + // functions call the respective functions of the base classes as well. // - // Note that this class also handles the - // declaration of initial and boundary - // conditions specified in the input - // file. To this end, in both cases, - // there are entries like "w_0 value" - // which represent an expression in terms - // of $x,y,z$ that describe the initial - // or boundary condition as a formula - // that will later be parsed by the - // FunctionParser class. Similar - // expressions exist for "w_1", "w_2", - // etc, denoting the dim+2 - // conserved variables of the Euler - // system. Similarly, we allow up to - // max_n_boundaries boundary - // indicators to be used in the input - // file, and each of these boundary - // indicators can be associated with an - // inflow, outflow, or pressure boundary - // condition, with inhomogenous boundary - // conditions being specified for each - // component and each boundary indicator - // separately. + // Note that this class also handles the declaration of initial and + // boundary conditions specified in the input file. 
To this end, in both + // cases, there are entries like "w_0 value" which represent an expression + // in terms of $x,y,z$ that describe the initial or boundary condition as + // a formula that will later be parsed by the FunctionParser + // class. Similar expressions exist for "w_1", "w_2", etc, denoting the + // dim+2 conserved variables of the Euler system. Similarly, + // we allow up to max_n_boundaries boundary indicators to be + // used in the input file, and each of these boundary indicators can be + // associated with an inflow, outflow, or pressure boundary condition, + // with inhomogenous boundary conditions being specified for each + // component and each boundary indicator separately. // - // The data structure used to store the - // boundary indicators is a bit - // complicated. It is an array of - // max_n_boundaries elements - // indicating the range of boundary - // indicators that will be accepted. For - // each entry in this array, we store a - // pair of data in the - // BoundaryCondition - // structure: first, an array of size - // n_components that for - // each component of the solution vector - // indicates whether it is an inflow, - // outflow, or other kind of boundary, - // and second a FunctionParser object - // that describes all components of the - // solution vector for this boundary id - // at once. + // The data structure used to store the boundary indicators is a bit + // complicated. It is an array of max_n_boundaries elements + // indicating the range of boundary indicators that will be accepted. For + // each entry in this array, we store a pair of data in the + // BoundaryCondition structure: first, an array of size + // n_components that for each component of the solution + // vector indicates whether it is an inflow, outflow, or other kind of + // boundary, and second a FunctionParser object that describes all + // components of the solution vector for this boundary id at once. // - // The BoundaryCondition - // structure requires a constructor since - // we need to tell the function parser - // object at construction time how many - // vector components it is to - // describe. This initialization can - // therefore not wait till we actually - // set the formulas the FunctionParser + // The BoundaryCondition structure requires a constructor + // since we need to tell the function parser object at construction time + // how many vector components it is to describe. This initialization can + // therefore not wait till we actually set the formulas the FunctionParser // object represents later in // AllParameters::parse_parameters() // - // For the same reason of having to tell - // Function objects their vector size at - // construction time, we have to have a - // constructor of the - // AllParameters class that - // at least initializes the other - // FunctionParser object, i.e. the one - // describing initial conditions. + // For the same reason of having to tell Function objects their vector + // size at construction time, we have to have a constructor of the + // AllParameters class that at least initializes the other + // FunctionParser object, i.e. the one describing initial conditions. template struct AllParameters : public Solver, public Refinement, @@ -1562,25 +1269,15 @@ namespace Step33 // @sect3{Conservation law class} - // Here finally comes the class that - // actually does something with all - // the Euler equation and parameter - // specifics we've defined above. 
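// To make the "w_0 value"-style entries a bit more concrete, here is a small
// stand-alone sketch of how such expression strings are typically turned into a
// vector-valued function through the FunctionParser class. The expressions and
// the number of components below are invented for illustration and are not the
// ones any particular input file for this program would use:
#include <deal.II/base/function_parser.h>
#include <deal.II/base/point.h>
#include <map>
#include <string>
#include <vector>

void parse_example_expressions ()
{
  // a function with two components, depending on the variables x and y
  dealii::FunctionParser<2> initial_conditions (2);

  std::vector<std::string> expressions;
  expressions.push_back ("exp(-x*x-y*y)");   // what a "w_0 value" entry might hold
  expressions.push_back ("0");               // what a "w_1 value" entry might hold

  const std::map<std::string, double> constants;   // no named constants needed here

  initial_conditions.initialize ("x,y", expressions, constants);

  // the parsed expressions can now be evaluated like any other Function object
  const dealii::Point<2> p (0.5, 0.25);
  const double w0 = initial_conditions.value (p, 0);
  const double w1 = initial_conditions.value (p, 1);
  (void)w0;
  (void)w1;
}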
The - // public interface is pretty much - // the same as always (the - // constructor now takes the name of - // a file from which to read - // parameters, which is passed on the - // command line). The private - // function interface is also pretty - // similar to the usual arrangement, - // with the - // assemble_system - // function split into three parts: - // one that contains the main loop - // over all cells and that then calls - // the other two for integrals over - // cells and faces, respectively. + // Here finally comes the class that actually does something with all the + // Euler equation and parameter specifics we've defined above. The public + // interface is pretty much the same as always (the constructor now takes + // the name of a file from which to read parameters, which is passed on the + // command line). The private function interface is also pretty similar to + // the usual arrangement, with the assemble_system function + // split into three parts: one that contains the main loop over all cells + // and that then calls the other two for integrals over cells and faces, + // respectively. template class ConservationLaw { @@ -1612,31 +1309,16 @@ namespace Step33 - // The first few member variables - // are also rather standard. Note - // that we define a mapping - // object to be used throughout - // the program when assembling - // terms (we will hand it to - // every FEValues and - // FEFaceValues object); the - // mapping we use is just the - // standard $Q_1$ mapping -- - // nothing fancy, in other words - // -- but declaring one here and - // using it throughout the - // program will make it simpler - // later on to change it if that - // should become necessary. This - // is, in fact, rather pertinent: - // it is known that for - // transsonic simulations with - // the Euler equations, - // computations do not converge - // even as $h\rightarrow 0$ if - // the boundary approximation is - // not of sufficiently high - // order. + // The first few member variables are also rather standard. Note that we + // define a mapping object to be used throughout the program when + // assembling terms (we will hand it to every FEValues and FEFaceValues + // object); the mapping we use is just the standard $Q_1$ mapping -- + // nothing fancy, in other words -- but declaring one here and using it + // throughout the program will make it simpler later on to change it if + // that should become necessary. This is, in fact, rather pertinent: it is + // known that for transsonic simulations with the Euler equations, + // computations do not converge even as $h\rightarrow 0$ if the boundary + // approximation is not of sufficiently high order. 
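// As a rough sketch of the "easy to change later" remark: switching to a curved
// boundary approximation would essentially amount to replacing the MappingQ1
// member by a higher order MappingQ member (the degree 2 used below is just an
// example), while every FEValues and FEFaceValues object keeps being constructed
// from the same mapping variable:
#include <deal.II/fe/mapping_q.h>

template <int dim>
struct HigherOrderMappingDemo
{
  // a degree-2 mapping represents curved cell boundaries by quadratic patches
  const dealii::MappingQ<dim> mapping;

  HigherOrderMappingDemo ()
    :
    mapping (2)
  {}
};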
Triangulation triangulation; const MappingQ1 mapping; @@ -1646,56 +1328,31 @@ namespace Step33 const QGauss quadrature; const QGauss face_quadrature; - // Next come a number of data - // vectors that correspond to the - // solution of the previous time - // step - // (old_solution), - // the best guess of the current - // solution - // (current_solution; - // we say guess because - // the Newton iteration to - // compute it may not have - // converged yet, whereas - // old_solution - // refers to the fully converged - // final result of the previous - // time step), and a predictor - // for the solution at the next - // time step, computed by - // extrapolating the current and - // previous solution one time - // step into the future: + // Next come a number of data vectors that correspond to the solution of + // the previous time step (old_solution), the best guess of + // the current solution (current_solution; we say + // guess because the Newton iteration to compute it may not have + // converged yet, whereas old_solution refers to the fully + // converged final result of the previous time step), and a predictor for + // the solution at the next time step, computed by extrapolating the + // current and previous solution one time step into the future: Vector old_solution; Vector current_solution; Vector predictor; Vector right_hand_side; - // This final set of member variables - // (except for the object holding all - // run-time parameters at the very - // bottom and a screen output stream - // that only prints something if - // verbose output has been requested) - // deals with the inteface we have in - // this program to the Trilinos library - // that provides us with linear - // solvers. Similarly to including - // PETSc matrices in step-17, - // step-18, and step-19, all we - // need to do is to create a Trilinos - // sparse matrix instead of the - // standard deal.II class. The system - // matrix is used for the Jacobian in - // each Newton step. Since we do not - // intend to run this program in - // parallel (which wouldn't be too hard - // with Trilinos data structures, - // though), we don't have to think - // about anything else like - // distributing the degrees of freedom. + // This final set of member variables (except for the object holding all + // run-time parameters at the very bottom and a screen output stream that + // only prints something if verbose output has been requested) deals with + // the interface we have in this program to the Trilinos library that + // provides us with linear solvers. Similarly to including PETSc matrices + // in step-17, step-18, and step-19, all we need to do is to create a + // Trilinos sparse matrix instead of the standard deal.II class. The + // system matrix is used for the Jacobian in each Newton step. Since we do + // not intend to run this program in parallel (which wouldn't be too hard + // with Trilinos data structures, though), we don't have to think about + // anything else like distributing the degrees of freedom. TrilinosWrappers::SparseMatrix system_matrix; Parameters::AllParameters parameters; @@ -1705,11 +1362,8 @@ namespace Step33 // @sect4{ConservationLaw::ConservationLaw} // - // There is nothing much to say about - // the constructor.
Essentially, it reads + // the input file and fills the parameter object with the parsed values: template ConservationLaw::ConservationLaw (const char *input_filename) : @@ -1733,11 +1387,9 @@ namespace Step33 // @sect4{ConservationLaw::setup_system} // - // The following (easy) function is called - // each time the mesh is changed. All it - // does is to resize the Trilinos matrix - // according to a sparsity pattern that we - // generate as in all the previous tutorial + // The following (easy) function is called each time the mesh is + // changed. All it does is to resize the Trilinos matrix according to a + // sparsity pattern that we generate as in all the previous tutorial // programs. template void ConservationLaw::setup_system () @@ -1752,44 +1404,27 @@ namespace Step33 // @sect4{ConservationLaw::assemble_system} // - // This and the following two - // functions are the meat of this - // program: They assemble the linear - // system that results from applying - // Newton's method to the nonlinear - // system of conservation - // equations. + // This and the following two functions are the meat of this program: They + // assemble the linear system that results from applying Newton's method to + // the nonlinear system of conservation equations. // - // This first function puts all of - // the assembly pieces together in a - // routine that dispatches the - // correct piece for each cell/face. - // The actual implementation of the - // assembly on these objects is done - // in the following functions. + // This first function puts all of the assembly pieces together in a routine + // that dispatches the correct piece for each cell/face. The actual + // implementation of the assembly on these objects is done in the following + // functions. // - // At the top of the function we do the - // usual housekeeping: allocate FEValues, - // FEFaceValues, and FESubfaceValues - // objects necessary to do the integrations - // on cells, faces, and subfaces (in case - // of adjoining cells on different - // refinement levels). Note that we don't - // need all information (like values, - // gradients, or real locations of - // quadrature points) for all of these - // objects, so we only let the FEValues - // classes whatever is actually necessary - // by specifying the minimal set of - // UpdateFlags. For example, when using a - // FEFaceValues object for the neighboring - // cell we only need the shape values: - // Given a specific face, the quadrature - // points and JxW values are - // the same as for the current cells, and - // the normal vectors are known to be the - // negative of the normal vectors of the - // current cell. + // At the top of the function we do the usual housekeeping: allocate + // FEValues, FEFaceValues, and FESubfaceValues objects necessary to do the + // integrations on cells, faces, and subfaces (in case of adjoining cells on + // different refinement levels). Note that we don't need all information + // (like values, gradients, or real locations of quadrature points) for all + // of these objects, so we only let the FEValues classes whatever is + // actually necessary by specifying the minimal set of UpdateFlags. For + // example, when using a FEFaceValues object for the neighboring cell we + // only need the shape values: Given a specific face, the quadrature points + // and JxW values are the same as for the current cells, and + // the normal vectors are known to be the negative of the normal vectors of + // the current cell. 
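// The "minimal set of UpdateFlags" mentioned above is simply a bitwise
// combination of the update_* constants. The following sketch shows the kind of
// flag sets one would pass; the exact combinations used inside assemble_system()
// may differ from these illustrative ones:
#include <deal.II/fe/fe_update_flags.h>

// what a cell integral typically needs: shape values and gradients, quadrature
// point locations, and the Jacobian-times-weight factors
const dealii::UpdateFlags example_cell_update_flags
  = dealii::update_values
    | dealii::update_gradients
    | dealii::update_quadrature_points
    | dealii::update_JxW_values;

// for the neighboring cell's FEFaceValues object, shape values alone suffice,
// as argued above
const dealii::UpdateFlags example_neighbor_face_update_flags
  = dealii::update_values;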
template void ConservationLaw::assemble_system () { @@ -1819,10 +1454,9 @@ namespace Step33 FESubfaceValues fe_v_subface_neighbor (mapping, fe, face_quadrature, neighbor_face_update_flags); - // Then loop over all cells, initialize the - // FEValues object for the current cell and - // call the function that assembles the - // problem on this cell. + // Then loop over all cells, initialize the FEValues object for the + // current cell and call the function that assembles the problem on this + // cell. typename DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); @@ -1833,19 +1467,12 @@ namespace Step33 assemble_cell_term(fe_v, dof_indices); - // Then loop over all the faces of this - // cell. If a face is part of the - // external boundary, then assemble - // boundary conditions there (the fifth - // argument to - // assemble_face_terms - // indicates whether we are working on - // an external or internal face; if it - // is an external face, the fourth - // argument denoting the degrees of - // freedom indices of the neighbor is - // ignored, so we pass an empty - // vector): + // Then loop over all the faces of this cell. If a face is part of + // the external boundary, then assemble boundary conditions there (the + // fifth argument to assemble_face_terms indicates + // whether we are working on an external or internal face; if it is an + // external face, the fourth argument denoting the degrees of freedom + // indices of the neighbor is ignored, so we pass an empty vector): for (unsigned int face_no=0; face_no::faces_per_cell; ++face_no) if (cell->at_boundary(face_no)) @@ -1860,69 +1487,41 @@ namespace Step33 cell->face(face_no)->diameter()); } - // The alternative is that we are - // dealing with an internal face. There - // are two cases that we need to - // distinguish: that this is a normal - // face between two cells at the same - // refinement level, and that it is a - // face between two cells of the - // different refinement levels. + // The alternative is that we are dealing with an internal face. There + // are two cases that we need to distinguish: that this is a normal + // face between two cells at the same refinement level, and that it is + // a face between two cells of the different refinement levels. // - // In the first case, there is nothing - // we need to do: we are using a - // continuous finite element, and face - // terms do not appear in the bilinear - // form in this case. The second case - // usually does not lead to face terms - // either if we enforce hanging node - // constraints strongly (as in all - // previous tutorial programs so far - // whenever we used continuous finite - // elements -- this enforcement is done - // by the ConstraintMatrix class - // together with - // DoFTools::make_hanging_node_constraints). In - // the current program, however, we opt - // to enforce continuity weakly at - // faces between cells of different - // refinement level, for two reasons: - // (i) because we can, and more - // importantly (ii) because we would - // have to thread the automatic - // differentiation we use to compute - // the elements of the Newton matrix - // from the residual through the - // operations of the ConstraintMatrix - // class. This would be possible, but - // is not trivial, and so we choose - // this alternative approach. + // In the first case, there is nothing we need to do: we are using a + // continuous finite element, and face terms do not appear in the + // bilinear form in this case. 
The second case usually does not lead + // to face terms either if we enforce hanging node constraints + // strongly (as in all previous tutorial programs so far whenever we + // used continuous finite elements -- this enforcement is done by the + // ConstraintMatrix class together with + // DoFTools::make_hanging_node_constraints). In the current program, + // however, we opt to enforce continuity weakly at faces between cells + // of different refinement level, for two reasons: (i) because we can, + // and more importantly (ii) because we would have to thread the + // automatic differentiation we use to compute the elements of the + // Newton matrix from the residual through the operations of the + // ConstraintMatrix class. This would be possible, but is not trivial, + // and so we choose this alternative approach. // - // What needs to be decided is which - // side of an interface between two - // cells of different refinement level - // we are sitting on. + // What needs to be decided is which side of an interface between two + // cells of different refinement level we are sitting on. // - // Let's take the case where the - // neighbor is more refined first. We - // then have to loop over the children - // of the face of the current cell and - // integrate on each of them. We - // sprinkle a couple of assertions into - // the code to ensure that our - // reasoning trying to figure out which - // of the neighbor's children's faces - // coincides with a given subface of - // the current cell's faces is correct - // -- a bit of defensive programming - // never hurts. + // Let's take the case where the neighbor is more refined first. We + // then have to loop over the children of the face of the current cell + // and integrate on each of them. We sprinkle a couple of assertions + // into the code to ensure that our reasoning trying to figure out + // which of the neighbor's children's faces coincides with a given + // subface of the current cell's faces is correct -- a bit of + // defensive programming never hurts. // - // We then call the function that - // integrates over faces; since this is - // an internal face, the fifth argument - // is false, and the sixth one is - // ignored so we pass an invalid value - // again: + // We then call the function that integrates over faces; since this is + // an internal face, the fifth argument is false, and the sixth one is + // ignored so we pass an invalid value again: else { if (cell->neighbor(face_no)->has_children()) @@ -1959,18 +1558,12 @@ namespace Step33 } } - // The other possibility we have - // to care for is if the neighbor - // is coarser than the current - // cell (in particular, because - // of the usual restriction of - // only one hanging node per - // face, the neighbor must be - // exactly one level coarser than - // the current cell, something - // that we check with an - // assertion). Again, we then - // integrate over this interface: + // The other possibility we have to care for is if the neighbor + // is coarser than the current cell (in particular, because of + // the usual restriction of only one hanging node per face, the + // neighbor must be exactly one level coarser than the current + // cell, something that we check with an assertion). 
Again, we + // then integrate over this interface: else if (cell->neighbor(face_no)->level() != cell->level()) { const typename DoFHandler::cell_iterator @@ -2006,101 +1599,64 @@ namespace Step33 } } - // After all this assembling, notify the - // Trilinos matrix object that the matrix - // is done: + // After all this assembling, notify the Trilinos matrix object that the + // matrix is done: system_matrix.compress(); } // @sect4{ConservationLaw::assemble_cell_term} // - // This function assembles the cell term by - // computing the cell part of the residual, - // adding its negative to the right hand side - // vector, and adding its derivative with - // respect to the local variables to the - // Jacobian (i.e. the Newton matrix). Recall - // that the cell contributions to the - // residual read $F_i = - // \left(\frac{\mathbf{w}_{n+1} - - // \mathbf{w}_n}{\delta - // t},\mathbf{z}_i\right)_K - - // \left(\mathbf{F}(\tilde{\mathbf{w}}), - // \nabla\mathbf{z}_i\right)_K + - // h^{\eta}(\nabla \mathbf{w} , \nabla - // \mathbf{z}_i)_K - - // (\mathbf{G}(\tilde{\mathbf w}), - // \mathbf{z}_i)_K$ where $\tilde{\mathbf w}$ - // is represented by the variable - // W_theta, $\mathbf{z}_i$ is - // the $i$th test function, and the scalar - // product - // $\left(\mathbf{F}(\tilde{\mathbf{w}}), - // \nabla\mathbf{z}\right)_K$ is understood - // as $\int_K - // \sum_{c=1}^{\text{n\_components}} - // \sum_{d=1}^{\text{dim}} - // \mathbf{F}(\tilde{\mathbf{w}})_{cd} + // This function assembles the cell term by computing the cell part of the + // residual, adding its negative to the right hand side vector, and adding + // its derivative with respect to the local variables to the Jacobian + // (i.e. the Newton matrix). Recall that the cell contributions to the + // residual read $F_i = \left(\frac{\mathbf{w}_{n+1} - \mathbf{w}_n}{\delta + // t},\mathbf{z}_i\right)_K - \left(\mathbf{F}(\tilde{\mathbf{w}}), + // \nabla\mathbf{z}_i\right)_K + h^{\eta}(\nabla \mathbf{w} , \nabla + // \mathbf{z}_i)_K - (\mathbf{G}(\tilde{\mathbf w}), \mathbf{z}_i)_K$ where + // $\tilde{\mathbf w}$ is represented by the variable W_theta, + // $\mathbf{z}_i$ is the $i$th test function, and the scalar product + // $\left(\mathbf{F}(\tilde{\mathbf{w}}), \nabla\mathbf{z}\right)_K$ is + // understood as $\int_K \sum_{c=1}^{\text{n\_components}} + // \sum_{d=1}^{\text{dim}} \mathbf{F}(\tilde{\mathbf{w}})_{cd} // \frac{\partial z_c}{x_d}$. // - // At the top of this function, we do the - // usual housekeeping in terms of allocating - // some local variables that we will need - // later. In particular, we will allocate - // variables that will hold the values of the - // current solution $W_{n+1}^k$ after the - // $k$th Newton iteration (variable - // W), the previous time step's - // solution $W_{n}$ (variable - // W_old), as well as the linear - // combination $\theta W_{n+1}^k + - // (1-\theta)W_n$ that results from choosing - // different time stepping schemes (variable - // W_theta). + // At the top of this function, we do the usual housekeeping in terms of + // allocating some local variables that we will need later. In particular, + // we will allocate variables that will hold the values of the current + // solution $W_{n+1}^k$ after the $k$th Newton iteration (variable + // W), the previous time step's solution $W_{n}$ (variable + // W_old), as well as the linear combination $\theta W_{n+1}^k + // + (1-\theta)W_n$ that results from choosing different time stepping + // schemes (variable W_theta). 
// - // In addition to these, we need the - // gradients of the current variables. It is - // a bit of a shame that we have to compute - // these; we almost don't. The nice thing - // about a simple conservation law is that - // the flux doesn't generally involve any - // gradients. We do need these, however, for - // the diffusion stabilization. + // In addition to these, we need the gradients of the current variables. It + // is a bit of a shame that we have to compute these; we almost don't. The + // nice thing about a simple conservation law is that the flux doesn't + // generally involve any gradients. We do need these, however, for the + // diffusion stabilization. // - // The actual format in which we store these - // variables requires some - // explanation. First, we need values at each - // quadrature point for each of the - // EulerEquations::n_components - // components of the solution vector. This - // makes for a two-dimensional table for - // which we use deal.II's Table class (this - // is more efficient than - // std::vector@ - // @> because it only needs to - // allocate memory once, rather than once for - // each element of the outer - // vector). Similarly, the gradient is a - // three-dimensional table, which the Table - // class also supports. + // The actual format in which we store these variables requires some + // explanation. First, we need values at each quadrature point for each of + // the EulerEquations::n_components components of the solution + // vector. This makes for a two-dimensional table for which we use deal.II's + // Table class (this is more efficient than + // std::vector@ @> because it only needs to + // allocate memory once, rather than once for each element of the outer + // vector). Similarly, the gradient is a three-dimensional table, which the + // Table class also supports. // - // Secondly, we want to use automatic - // differentiation. To this end, we use the - // Sacado::Fad::DFad template for everything - // that is a computed from the variables with - // respect to which we would like to compute - // derivatives. This includes the current - // solution and gradient at the quadrature - // points (which are linear combinations of - // the degrees of freedom) as well as - // everything that is computed from them such - // as the residual, but not the previous time - // step's solution. These variables are all - // found in the first part of the function, - // along with a variable that we will use to - // store the derivatives of a single - // component of the residual: + // Secondly, we want to use automatic differentiation. To this end, we use + // the Sacado::Fad::DFad template for everything that is a computed from the + // variables with respect to which we would like to compute + // derivatives. This includes the current solution and gradient at the + // quadrature points (which are linear combinations of the degrees of + // freedom) as well as everything that is computed from them such as the + // residual, but not the previous time step's solution. These variables are + // all found in the first part of the function, along with a variable that + // we will use to store the derivatives of a single component of the + // residual: template void ConservationLaw:: @@ -2124,66 +1680,43 @@ namespace Step33 std::vector residual_derivatives (dofs_per_cell); - // Next, we have to define the independent - // variables that we will try to determine - // by solving a Newton step. 
These - // independent variables are the values of - // the local degrees of freedom which we - // extract here: + // Next, we have to define the independent variables that we will try to + // determine by solving a Newton step. These independent variables are the + // values of the local degrees of freedom which we extract here: std::vector > independent_local_dof_values(dofs_per_cell); for (unsigned int i=0; iindependent_local_dof_values[i] - // as the $i$th independent variable out of - // a total of dofs_per_cell: + // In order to mark the variables as independent, the following does the + // trick, marking independent_local_dof_values[i] as the + // $i$th independent variable out of a total of + // dofs_per_cell: for (unsigned int i=0; iW, - // W_old, - // W_theta, and - // grad_W, which we can - // compute from the local DoF values by - // using the formula $W(x_q)=\sum_i \mathbf - // W_i \Phi_i(x_q)$, where $\mathbf W_i$ is - // the $i$th entry of the (local part of - // the) solution vector, and $\Phi_i(x_q)$ - // the value of the $i$th vector-valued - // shape function evaluated at quadrature - // point $x_q$. The gradient can be + // After all these declarations, let us actually compute something. First, + // the values of W, W_old, W_theta, + // and grad_W, which we can compute from the local DoF values + // by using the formula $W(x_q)=\sum_i \mathbf W_i \Phi_i(x_q)$, where + // $\mathbf W_i$ is the $i$th entry of the (local part of the) solution + // vector, and $\Phi_i(x_q)$ the value of the $i$th vector-valued shape + // function evaluated at quadrature point $x_q$. The gradient can be // computed in a similar way. // - // Ideally, we could compute this - // information using a call into something - // like FEValues::get_function_values and - // FEValues::get_function_grads, but since - // (i) we would have to extend the FEValues - // class for this, and (ii) we don't want - // to make the entire - // old_solution vector fad - // types, only the local cell variables, we - // explicitly code the loop above. Before - // this, we add another loop that - // initializes all the fad variables to - // zero: + // Ideally, we could compute this information using a call into something + // like FEValues::get_function_values and FEValues::get_function_grads, + // but since (i) we would have to extend the FEValues class for this, and + // (ii) we don't want to make the entire old_solution vector + // fad types, only the local cell variables, we explicitly code the loop + // above. Before this, we add another loop that initializes all the fad + // variables to zero: for (unsigned int q=0; q::n_components; ++c) { @@ -2216,17 +1749,12 @@ namespace Step33 } - // Next, in order to compute the cell - // contributions, we need to evaluate - // $F(\tilde{\mathbf w})$ and - // $G(\tilde{\mathbf w})$ at all quadrature - // points. To store these, we also need to - // allocate a bit of memory. Note that we - // compute the flux matrices and right hand - // sides in terms of autodifferentiation - // variables, so that the Jacobian - // contributions can later easily be - // computed from it: + // Next, in order to compute the cell contributions, we need to evaluate + // $F(\tilde{\mathbf w})$ and $G(\tilde{\mathbf w})$ at all quadrature + // points. To store these, we also need to allocate a bit of memory. 
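// Since the whole construction hinges on forward-mode automatic differentiation,
// a stripped-down stand-in for Sacado::Fad::DFad may help make the idea
// concrete. The toy type below carries a value together with its derivatives
// with respect to a fixed number of independent variables; it supports only
// addition and multiplication (of operands with matching derivative arrays) and
// is in no way a replacement for Sacado, but after computing a residual from
// such numbers, the entry dx[k] plays the role that F_i.fastAccessDx(k)
// plays further down:
#include <cstddef>
#include <vector>

struct FadDouble
{
  double              val;
  std::vector<double> dx;      // one derivative slot per independent variable

  explicit FadDouble (const std::size_t n_independent = 0,
                      const double      value = 0.)
    : val (value), dx (n_independent, 0.)
  {}

  // the analogue of u.diff(i,n): mark this as the i-th of n independent variables
  void diff (const std::size_t i, const std::size_t n_independent)
  {
    dx.assign (n_independent, 0.);
    dx[i] = 1.;
  }
};

inline FadDouble operator+ (const FadDouble &a, const FadDouble &b)
{
  FadDouble result (a.dx.size(), a.val + b.val);
  for (std::size_t k = 0; k < result.dx.size(); ++k)
    result.dx[k] = a.dx[k] + b.dx[k];
  return result;
}

inline FadDouble operator* (const FadDouble &a, const FadDouble &b)
{
  FadDouble result (a.dx.size(), a.val * b.val);
  for (std::size_t k = 0; k < result.dx.size(); ++k)
    result.dx[k] = a.dx[k] * b.val + a.val * b.dx[k];   // product rule
  return result;
}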
Note + // that we compute the flux matrices and right hand sides in terms of + // autodifferentiation variables, so that the Jacobian contributions can + // later easily be computed from it: typedef Sacado::Fad::DFad FluxMatrix[EulerEquations::n_components][dim]; FluxMatrix *flux = new FluxMatrix[n_q_points]; @@ -2240,52 +1768,33 @@ namespace Step33 } - // We now have all of the pieces in place, - // so perform the assembly. We have an - // outer loop through the components of the - // system, and an inner loop over the - // quadrature points, where we accumulate - // contributions to the $i$th residual - // $F_i$. The general formula for this - // residual is given in the introduction - // and at the top of this function. We can, - // however, simplify it a bit taking into - // account that the $i$th (vector-valued) - // test function $\mathbf{z}_i$ has in - // reality only a single nonzero component - // (more on this topic can be found in the - // @ref vector_valued module). It will be - // represented by the variable - // component_i below. With - // this, the residual term can be - // re-written as $F_i = - // \left(\frac{(\mathbf{w}_{n+1} - + // We now have all of the pieces in place, so perform the assembly. We + // have an outer loop through the components of the system, and an inner + // loop over the quadrature points, where we accumulate contributions to + // the $i$th residual $F_i$. The general formula for this residual is + // given in the introduction and at the top of this function. We can, + // however, simplify it a bit taking into account that the $i$th + // (vector-valued) test function $\mathbf{z}_i$ has in reality only a + // single nonzero component (more on this topic can be found in the @ref + // vector_valued module). It will be represented by the variable + // component_i below. With this, the residual term can be + // re-written as $F_i = \left(\frac{(\mathbf{w}_{n+1} - // \mathbf{w}_n)_{\text{component\_i}}}{\delta - // t},(\mathbf{z}_i)_{\text{component\_i}}\right)_K$ - // $- \sum_{d=1}^{\text{dim}} - // \left(\mathbf{F} + // t},(\mathbf{z}_i)_{\text{component\_i}}\right)_K$ $- + // \sum_{d=1}^{\text{dim}} \left(\mathbf{F} // (\tilde{\mathbf{w}})_{\text{component\_i},d}, - // \frac{\partial(\mathbf{z}_i)_{\text{component\_i}}} - // {\partial x_d}\right)_K$ $+ - // \sum_{d=1}^{\text{dim}} h^{\eta} - // \left(\frac{\partial - // \mathbf{w}_{\text{component\_i}}}{\partial - // x_d} , \frac{\partial - // (\mathbf{z}_i)_{\text{component\_i}}}{\partial - // x_d} \right)_K$ - // $-(\mathbf{G}(\tilde{\mathbf{w}} - // )_{\text{component\_i}}, - // (\mathbf{z}_i)_{\text{component\_i}})_K$, - // where integrals are understood to be - // evaluated through summation over - // quadrature points. + // \frac{\partial(\mathbf{z}_i)_{\text{component\_i}}} {\partial + // x_d}\right)_K$ $+ \sum_{d=1}^{\text{dim}} h^{\eta} \left(\frac{\partial + // \mathbf{w}_{\text{component\_i}}}{\partial x_d} , \frac{\partial + // (\mathbf{z}_i)_{\text{component\_i}}}{\partial x_d} \right)_K$ + // $-(\mathbf{G}(\tilde{\mathbf{w}} )_{\text{component\_i}}, + // (\mathbf{z}_i)_{\text{component\_i}})_K$, where integrals are + // understood to be evaluated through summation over quadrature points. // - // We initialy sum all contributions of the - // residual in the positive sense, so that - // we don't need to negative the Jacobian - // entries. Then, when we sum into the - // right_hand_side vector, - // we negate this residual. 
+ // We initially sum all contributions of the residual in the positive + // sense, so that we don't need to negate the Jacobian entries. Then, + // when we sum into the right_hand_side vector, we negate + // this residual. for (unsigned int i=0; i F_i = 0; @@ -2293,10 +1802,10 @@ namespace Step33 const unsigned int component_i = fe_v.get_fe().system_to_component_index(i).first; - // The residual for each row (i) will be accumulating - // into this fad variable. At the end of the assembly - // for this row, we will query for the sensitivities - // to this variable and add them into the Jacobian. + // The residual for each row (i) will be accumulating into this fad + // variable. At the end of the assembly for this row, we will query + // for the sensitivities to this variable and add them into the + // Jacobian. for (unsigned int point=0; pointF_i.fastAccessDx(k), - // so we store the data in a - // temporary array. This information - // about the whole row of local dofs - // is then added to the Trilinos - // matrix at once (which supports the + // At the end of the loop, we have to add the sensitivities to the + // matrix and subtract the residual from the right hand side. Trilinos + // FAD data type gives us access to the derivatives using + // F_i.fastAccessDx(k), so we store the data in a + // temporary array. This information about the whole row of local dofs + // is then added to the Trilinos matrix at once (which supports the // data types we have chosen). for (unsigned int k=0; k void ConservationLaw::assemble_face_term(const unsigned int face_no, @@ -2398,16 +1897,12 @@ namespace Step33 } - // Next, we need to define the values of - // the conservative variables $\tilde - // {\mathbf W}$ on this side of the face - // ($\tilde {\mathbf W}^+$) and on the - // opposite side ($\tilde {\mathbf - // W}^-$). The former can be computed in - // exactly the same way as in the previous - // function, but note that the - // fe_v variable now is of - // type FEFaceValues or FESubfaceValues: + // Next, we need to define the values of the conservative variables + // $\tilde {\mathbf W}$ on this side of the face ($\tilde {\mathbf W}^+$) + // and on the opposite side ($\tilde {\mathbf W}^-$). The former can be + // computed in exactly the same way as in the previous function, but note + // that the fe_v variable now is of type FEFaceValues or + // FESubfaceValues: Table<2,Sacado::Fad::DFad > Wplus (n_q_points, EulerEquations::n_components), Wminus (n_q_points, EulerEquations::n_components); @@ -2424,11 +1919,9 @@ namespace Step33 fe_v.shape_value_component(i, q, component_i); } - // Computing $\tilde {\mathbf W}^-$ is a - // bit more complicated. If this is an - // internal face, we can compute it as - // above by simply using the independent - // variables from the neighbor: + // Computing $\tilde {\mathbf W}^-$ is a bit more complicated. If this is + // an internal face, we can compute it as above by simply using the + // independent variables from the neighbor: if (external_face == false) { for (unsigned int q=0; q::max_n_boundaries, @@ -2491,14 +1973,11 @@ namespace Step33 } - // Now that we have $\mathbf w^+$ and - // $\mathbf w^-$, we can go about computing - // the numerical flux function $\mathbf - // H(\mathbf w^+,\mathbf w^-, \mathbf n)$ - // for each quadrature point.
Before - // calling the function that does so, we - // also need to determine the - // Lax-Friedrich's stability parameter: + // Now that we have $\mathbf w^+$ and $\mathbf w^-$, we can go about + // computing the numerical flux function $\mathbf H(\mathbf w^+,\mathbf + // w^-, \mathbf n)$ for each quadrature point. Before calling the function + // that does so, we also need to determine the Lax-Friedrich's stability + // parameter: typedef Sacado::Fad::DFad NormalFlux[EulerEquations::n_components]; NormalFlux *normal_fluxes = new NormalFlux[n_q_points]; @@ -2522,15 +2001,11 @@ namespace Step33 Wplus[q], Wminus[q], alpha, normal_fluxes[q]); - // Now assemble the face term in exactly - // the same way as for the cell - // contributions in the previous - // function. The only difference is that if - // this is an internal face, we also have - // to take into account the sensitivies of - // the residual contributions to the - // degrees of freedom on the neighboring - // cell: + // Now assemble the face term in exactly the same way as for the cell + // contributions in the previous function. The only difference is that if + // this is an internal face, we also have to take into account the + // sensitivies of the residual contributions to the degrees of freedom on + // the neighboring cell: std::vector residual_derivatives (dofs_per_cell); for (unsigned int i=0; i std::pair @@ -2583,28 +2054,17 @@ namespace Step33 { switch (parameters.solver) { - // If the parameter file specified - // that a direct solver shall be - // used, then we'll get here. The - // process is straightforward, since - // deal.II provides a wrapper class - // to the Amesos direct solver within - // Trilinos. All we have to do is to - // create a solver control object - // (which is just a dummy object - // here, since we won't perform any - // iterations), and then create the - // direct solver object. When - // actually doing the solve, note - // that we don't pass a - // preconditioner. That wouldn't make - // much sense for a direct solver - // anyway. At the end we return the - // solver control statistics — - // which will tell that no iterations - // have been performed and that the - // final linear residual is zero, - // absent any better information that + // If the parameter file specified that a direct solver shall be used, + // then we'll get here. The process is straightforward, since deal.II + // provides a wrapper class to the Amesos direct solver within + // Trilinos. All we have to do is to create a solver control object + // (which is just a dummy object here, since we won't perform any + // iterations), and then create the direct solver object. When + // actually doing the solve, note that we don't pass a + // preconditioner. That wouldn't make much sense for a direct solver + // anyway. At the end we return the solver control statistics — + // which will tell that no iterations have been performed and that the + // final linear residual is zero, absent any better information that // may be provided here: case Parameters::Solver::direct: { @@ -2619,53 +2079,31 @@ namespace Step33 solver_control.last_value()); } - // Likewise, if we are to use an - // iterative solver, we use Aztec's - // GMRES solver. We could use the - // Trilinos wrapper classes for - // iterative solvers and - // preconditioners here as well, but - // we choose to use an Aztec solver - // directly. 
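// For readers who have not used Aztec directly before, the core of such a call
// looks roughly like the sketch below. This is written from memory against the
// AztecOO/Epetra interfaces and is not the tutorial's own code: the option
// values are placeholders, and the matrix as well as the raw pointers to the
// solution and right hand side entries are assumed to be supplied by the caller:
#include <AztecOO.h>
#include <Epetra_CrsMatrix.h>
#include <Epetra_Vector.h>

void solve_with_aztec_gmres_ilut (Epetra_CrsMatrix &matrix,
                                  double           *solution_entries,
                                  double           *rhs_entries)
{
  // wrap the existing arrays as Epetra vectors without copying ("views")
  Epetra_Vector x (View, matrix.DomainMap(), solution_entries);
  Epetra_Vector b (View, matrix.RangeMap(),  rhs_entries);

  AztecOO solver;
  solver.SetUserMatrix (&matrix);
  solver.SetLHS (&x);
  solver.SetRHS (&b);

  // GMRES with an ILU-T preconditioner built on the (here single) subdomain
  solver.SetAztecOption (AZ_solver,          AZ_gmres);
  solver.SetAztecOption (AZ_precond,         AZ_dom_decomp);
  solver.SetAztecOption (AZ_subdomain_solve, AZ_ilut);

  solver.SetAztecParam (AZ_drop,      1e-10);   // ilut_drop
  solver.SetAztecParam (AZ_ilut_fill, 2.0);     // ilut_fill
  solver.SetAztecParam (AZ_athresh,   1e-9);    // ilut_atol
  solver.SetAztecParam (AZ_rthresh,   1.0);     // ilut_rtol

  solver.Iterate (300, 1e-8);                   // max iterations, tolerance
}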
For the given problem, - // Aztec's internal preconditioner - // implementations are superior over - // the ones deal.II has wrapper - // classes to, so we use ILU-T - // preconditioning within the AztecOO - // solver and set a bunch of options - // that can be changed from the - // parameter file. + // Likewise, if we are to use an iterative solver, we use Aztec's GMRES + // solver. We could use the Trilinos wrapper classes for iterative + // solvers and preconditioners here as well, but we choose to use an + // Aztec solver directly. For the given problem, Aztec's internal + // preconditioner implementations are superior to the ones deal.II has + // wrapper classes to, so we use ILU-T preconditioning within the + // AztecOO solver and set a bunch of options that can be changed from + // the parameter file. // - // There are two more practicalities: - // Since we have built our right hand - // side and solution vector as - // deal.II Vector objects (as opposed - // to the matrix, which is a Trilinos - // object), we must hand the solvers - // Trilinos Epetra vectors. Luckily, - // they support the concept of a - // 'view', so we just send in a - // pointer to our deal.II vectors. We - // have to provide an Epetra_Map for - // the vector that sets the parallel - // distribution, which is just a - // dummy object in serial. The - // easiest way is to ask the matrix - // for its map, and we're going to be - // ready for matrix-vector products - // with it. + // There are two more practicalities: Since we have built our right hand + // side and solution vector as deal.II Vector objects (as opposed to the + // matrix, which is a Trilinos object), we must hand the solvers + // Trilinos Epetra vectors. Luckily, they support the concept of a + // 'view', so we just send in a pointer to our deal.II vectors. We have + // to provide an Epetra_Map for the vector that sets the parallel + // distribution, which is just a dummy object in serial. The easiest way + // is to ask the matrix for its map, and we're going to be ready for + // matrix-vector products with it. + // + // Secondly, the Aztec solver wants us to pass a Trilinos + // Epetra_CrsMatrix in, not the deal.II wrapper class itself. So we + // access the actual Trilinos matrix in the Trilinos wrapper class by + // the command trilinos_matrix(). Trilinos wants the matrix to be + // non-constant, so we have to manually remove the constness using a + // const_cast. case Parameters::Solver::gmres: { Epetra_Vector x(View, system_matrix.domain_partitioner(), @@ -2712,13 +2150,10 @@ namespace Step33 // @sect4{ConservationLaw::compute_refinement_indicators} - // This function is real simple: We don't - // pretend that we know here what a good - // refinement indicator would be.
Rather, we assume that the + // EulerEquation class would know about this, and so we simply + // defer to the respective function we've implemented there: template void ConservationLaw:: @@ -2734,11 +2169,9 @@ namespace Step33 // @sect4{ConservationLaw::refine_grid} - // Here, we use the refinement indicators - // computed before and refine the mesh. At - // the beginning, we loop over all cells and - // mark those that we think should be - // refined: + // Here, we use the refinement indicators computed before and refine the + // mesh. At the beginning, we loop over all cells and mark those that we + // think should be refined: template void ConservationLaw::refine_grid (const Vector &refinement_indicators) @@ -2760,19 +2193,12 @@ namespace Step33 cell->set_coarsen_flag(); } - // Then we need to transfer the - // various solution vectors from - // the old to the new grid while we - // do the refinement. The - // SolutionTransfer class is our - // friend here; it has a fairly - // extensive documentation, - // including examples, so we won't - // comment much on the following - // code. The last three lines - // simply re-set the sizes of some - // other vectors to the now correct - // size: + // Then we need to transfer the various solution vectors from the old to + // the new grid while we do the refinement. The SolutionTransfer class is + // our friend here; it has a fairly extensive documentation, including + // examples, so we won't comment much on the following code. The last + // three lines simply re-set the sizes of some other vectors to the now + // correct size: std::vector > transfer_in; std::vector > transfer_out; @@ -2815,21 +2241,15 @@ namespace Step33 // @sect4{ConservationLaw::output_results} - // This function now is rather - // straightforward. All the magic, including - // transforming data from conservative - // variables to physical ones has been - // abstracted and moved into the - // EulerEquations class so that it can be - // replaced in case we want to solve some - // other hyperbolic conservation law. + // This function now is rather straightforward. All the magic, including + // transforming data from conservative variables to physical ones has been + // abstracted and moved into the EulerEquations class so that it can be + // replaced in case we want to solve some other hyperbolic conservation law. // - // Note that the number of the output file is - // determined by keeping a counter in the - // form of a static variable that is set to - // zero the first time we come to this - // function and is incremented by one at the - // end of each invokation. + // Note that the number of the output file is determined by keeping a + // counter in the form of a static variable that is set to zero the first + // time we come to this function and is incremented by one at the end of + // each invokation. template void ConservationLaw::output_results () const { @@ -2863,20 +2283,15 @@ namespace Step33 // @sect4{ConservationLaw::run} - // This function contains the top-level logic - // of this program: initialization, the time - // loop, and the inner Newton iteration. + // This function contains the top-level logic of this program: + // initialization, the time loop, and the inner Newton iteration. // - // At the beginning, we read the mesh file - // specified by the parameter file, setup the - // DoFHandler and various vectors, and then - // interpolate the given initial conditions - // on this mesh. 
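// The SolutionTransfer pattern referred to above, reduced to a single vector
// (the tutorial transfers several at once), follows the usual
// prepare/refine/interpolate sequence. A condensed sketch, with the finite
// element passed in by the caller and refinement flags assumed to be set:
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe.h>
#include <deal.II/grid/tria.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/solution_transfer.h>

template <int dim>
void refine_and_transfer (dealii::Triangulation<dim>       &triangulation,
                          dealii::DoFHandler<dim>          &dof_handler,
                          const dealii::FiniteElement<dim> &fe,
                          dealii::Vector<double>           &solution)
{
  dealii::SolutionTransfer<dim, dealii::Vector<double> > transfer (dof_handler);

  triangulation.prepare_coarsening_and_refinement ();
  transfer.prepare_for_coarsening_and_refinement (solution);

  triangulation.execute_coarsening_and_refinement ();
  dof_handler.distribute_dofs (fe);

  dealii::Vector<double> interpolated_solution (dof_handler.n_dofs());
  transfer.interpolate (solution, interpolated_solution);
  solution = interpolated_solution;
}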
We then perform a number of - // mesh refinements, based on the initial - // conditions, to obtain a mesh that is - // already well adapted to the starting - // solution. At the end of this process, we - // output the initial solution. + // At the beginning, we read the mesh file specified by the parameter file, + // setup the DoFHandler and various vectors, and then interpolate the given + // initial conditions on this mesh. We then perform a number of mesh + // refinements, based on the initial conditions, to obtain a mesh that is + // already well adapted to the starting solution. At the end of this + // process, we output the initial solution. template void ConservationLaw::run () { @@ -2924,13 +2339,10 @@ namespace Step33 output_results (); - // We then enter into the main time - // stepping loop. At the top we simply - // output some status information so one - // can keep track of where a computation - // is, as well as the header for a table - // that indicates progress of the nonlinear - // inner iteration: + // We then enter into the main time stepping loop. At the top we simply + // output some status information so one can keep track of where a + // computation is, as well as the header for a table that indicates + // progress of the nonlinear inner iteration: Vector newton_update (dof_handler.n_dofs()); double time = 0; @@ -2951,50 +2363,29 @@ namespace Step33 std::cout << " NonLin Res Lin Iter Lin Res" << std::endl << " _____________________________________" << std::endl; - // Then comes the inner Newton - // iteration to solve the nonlinear - // problem in each time step. The way - // it works is to reset matrix and - // right hand side to zero, then - // assemble the linear system. If the - // norm of the right hand side is small - // enough, then we declare that the - // Newton iteration has - // converged. Otherwise, we solve the - // linear system, update the current - // solution with the Newton increment, - // and output convergence - // information. At the end, we check - // that the number of Newton iterations - // is not beyond a limit of 10 -- if it - // is, it appears likely that - // iterations are diverging and further - // iterations would do no good. If that - // happens, we throw an exception that - // will be caught in - // main() with status - // information being displayed before - // the program aborts. + // Then comes the inner Newton iteration to solve the nonlinear + // problem in each time step. The way it works is to reset matrix and + // right hand side to zero, then assemble the linear system. If the + // norm of the right hand side is small enough, then we declare that + // the Newton iteration has converged. Otherwise, we solve the linear + // system, update the current solution with the Newton increment, and + // output convergence information. At the end, we check that the + // number of Newton iterations is not beyond a limit of 10 -- if it + // is, it appears likely that iterations are diverging and further + // iterations would do no good. If that happens, we throw an exception + // that will be caught in main() with status information + // being displayed before the program aborts. // - // Note that the way we write the - // AssertThrow macro below is by and - // large equivalent to writing - // something like if - // (!(nonlin_iter @<= 10)) throw - // ExcMessage ("No convergence in - // nonlinear solver");. 
The only - // significant difference is that - // AssertThrow also makes sure that the - // exception being thrown carries with - // it information about the location - // (file name and line number) where it - // was generated. This is not overly - // critical here, because there is only - // a single place where this sort of - // exception can happen; however, it is - // generally a very useful tool when - // one wants to find out where an error - // occurred. + // Note that the way we write the AssertThrow macro below is by and + // large equivalent to writing something like if (!(nonlin_iter + // @<= 10)) throw ExcMessage ("No convergence in nonlinear + // solver");. The only significant difference is that + // AssertThrow also makes sure that the exception being thrown carries + // with it information about the location (file name and line number) + // where it was generated. This is not overly critical here, because + // there is only a single place where this sort of exception can + // happen; however, it is generally a very useful tool when one wants + // to find out where an error occurred. unsigned int nonlin_iter = 0; current_solution = predictor; while (true) @@ -3028,38 +2419,20 @@ namespace Step33 ExcMessage ("No convergence in nonlinear solver")); } - // We only get to this point if the - // Newton iteration has converged, so - // do various post convergence tasks - // here: + // We only get to this point if the Newton iteration has converged, so + // do various post convergence tasks here: // - // First, we update the time - // and produce graphical output - // if so desired. Then we - // update a predictor for the - // solution at the next time - // step by approximating - // $\mathbf w^{n+1}\approx - // \mathbf w^n + \delta t - // \frac{\partial \mathbf - // w}{\partial t} \approx - // \mathbf w^n + \delta t \; - // \frac{\mathbf w^n-\mathbf - // w^{n-1}}{\delta t} = 2 - // \mathbf w^n - \mathbf - // w^{n-1}$ to try and make - // adaptivity work better. The - // idea is to try and refine - // ahead of a front, rather - // than stepping into a coarse - // set of elements and smearing - // the old_solution. This - // simple time extrapolator - // does the job. With this, we - // then refine the mesh if so - // desired by the user, and - // finally continue on with the - // next time step: + // First, we update the time and produce graphical output if so + // desired. Then we update a predictor for the solution at the next + // time step by approximating $\mathbf w^{n+1}\approx \mathbf w^n + + // \delta t \frac{\partial \mathbf w}{\partial t} \approx \mathbf w^n + // + \delta t \; \frac{\mathbf w^n-\mathbf w^{n-1}}{\delta t} = 2 + // \mathbf w^n - \mathbf w^{n-1}$ to try and make adaptivity work + // better. The idea is to try and refine ahead of a front, rather + // than stepping into a coarse set of elements and smearing the + // old_solution. This simple time extrapolator does the job. With + // this, we then refine the mesh if so desired by the user, and + // finally continue on with the next time step: time += parameters.time_step; if (parameters.output_step < 0) @@ -3091,12 +2464,9 @@ namespace Step33 // @sect3{main()} -// The following ``main'' function is -// similar to previous examples and -// need not to be commented on. Note -// that the program aborts if no input -// file name is given on the command -// line. +// The following ``main'' function is similar to previous examples and need +// not to be commented on. 
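// Stripped of all deal.II and Trilinos objects, the control flow described
// above boils down to something like the following stand-alone sketch. The two
// function pointers stand in for "assemble and return the residual norm" and
// "solve the linear system and add the Newton update"; the tolerance and the
// iteration limit of 10 are the values quoted in the text:
#include <cmath>
#include <cstddef>
#include <stdexcept>
#include <vector>

void toy_time_step (std::vector<double> &current_solution,
                    std::vector<double> &old_solution,
                    std::vector<double> &predictor,
                    double (*assemble_and_residual_norm) (const std::vector<double> &),
                    void   (*solve_and_add_update)       (std::vector<double> &))
{
  // start the Newton iteration from the extrapolated guess
  current_solution = predictor;

  unsigned int nonlin_iter = 0;
  while (true)
    {
      const double res_norm = assemble_and_residual_norm (current_solution);
      if (std::fabs (res_norm) < 1e-10)
        break;                                    // converged

      solve_and_add_update (current_solution);    // one Newton step

      ++nonlin_iter;
      if (nonlin_iter > 10)
        throw std::runtime_error ("No convergence in nonlinear solver");
    }

  // predictor for the next time step: w^{n+1} ~ 2 w^n - w^{n-1}
  for (std::size_t i = 0; i < predictor.size(); ++i)
    predictor[i] = 2. * current_solution[i] - old_solution[i];

  old_solution = current_solution;
}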
Note that the program aborts if no input file name +// is given on the command line. int main (int argc, char *argv[]) { try @@ -3142,4 +2512,3 @@ int main (int argc, char *argv[]) return 0; } - diff --git a/deal.II/examples/step-34/step-34.cc b/deal.II/examples/step-34/step-34.cc index dca7cc9139..32a1a6a061 100644 --- a/deal.II/examples/step-34/step-34.cc +++ b/deal.II/examples/step-34/step-34.cc @@ -11,11 +11,9 @@ // @sect3{Include files} -// The program starts with including a bunch -// of include files that we will use in the -// various parts of the program. Most of them -// have been discussed in previous tutorials -// already: +// The program starts with including a bunch of include files that we will use +// in the various parts of the program. Most of them have been discussed in +// previous tutorials already: #include #include #include @@ -48,17 +46,14 @@ #include #include -// And here are a few C++ standard header -// files that we will need: +// And here are a few C++ standard header files that we will need: #include #include #include #include -// The last part of this preamble is to -// import everything in the dealii namespace -// into the one into which everything in this -// program will go: +// The last part of this preamble is to import everything in the dealii +// namespace into the one into which everything in this program will go: namespace Step34 { using namespace dealii; @@ -66,18 +61,12 @@ namespace Step34 // @sect3{Single and double layer operator kernels} - // First, let us define a bit of the - // boundary integral equation - // machinery. - - // The following two functions are - // the actual calculations of the - // single and double layer potential - // kernels, that is $G$ and $\nabla - // G$. They are well defined only if - // the vector $R = - // \mathbf{y}-\mathbf{x}$ is - // different from zero. + // First, let us define a bit of the boundary integral equation machinery. + + // The following two functions are the actual calculations of the single and + // double layer potential kernels, that is $G$ and $\nabla G$. They are well + // defined only if the vector $R = \mathbf{y}-\mathbf{x}$ is different from + // zero. namespace LaplaceKernel { template @@ -119,22 +108,13 @@ namespace Step34 // @sect3{The BEMProblem class} - // The structure of a boundary - // element method code is very - // similar to the structure of a - // finite element code, and so the - // member functions of this class are - // like those of most of the other - // tutorial programs. In particular, - // by now you should be familiar with - // reading parameters from an - // external file, and with the - // splitting of the different tasks - // into different modules. The same - // applies to boundary element - // methods, and we won't comment too - // much on them, except on the - // differences. + // The structure of a boundary element method code is very similar to the + // structure of a finite element code, and so the member functions of this + // class are like those of most of the other tutorial programs. In + // particular, by now you should be familiar with reading parameters from an + // external file, and with the splitting of the different tasks into + // different modules. The same applies to boundary element methods, and we + // won't comment too much on them, except on the differences. template class BEMProblem { @@ -152,273 +132,147 @@ namespace Step34 void refine_and_resize(); - // The only really different - // function that we find here is - // the assembly routine. 
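// Written out as stand-alone functions (the tutorial keeps them in the
// LaplaceKernel namespace and works with Point and Tensor objects rather than
// the plain arrays used here), the two kernels are simply the free-space
// Green's function of the Laplacian and its gradient, valid only for R != 0:
#include <cmath>

const double pi = 3.14159265358979323846;

// G(R): -log|R| / (2 pi) in 2d, 1 / (4 pi |R|) in 3d
template <int dim>
double kernel_G (const double (&R)[dim])
{
  double r2 = 0;
  for (int d = 0; d < dim; ++d)
    r2 += R[d] * R[d];
  const double r = std::sqrt (r2);

  return (dim == 2 ? -std::log (r) / (2. * pi)
                   : 1. / (4. * pi * r));
}

// grad G(R): -R / (2 pi |R|^2) in 2d, -R / (4 pi |R|^3) in 3d; contracted with
// the normal vector this yields the double layer kernel
template <int dim>
void kernel_grad_G (const double (&R)[dim], double (&grad)[dim])
{
  double r2 = 0;
  for (int d = 0; d < dim; ++d)
    r2 += R[d] * R[d];
  const double r = std::sqrt (r2);

  for (int d = 0; d < dim; ++d)
    grad[d] = (dim == 2 ? -R[d] / (2. * pi * r2)
                        : -R[d] / (4. * pi * r2 * r));
}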
We wrote - // this function in the most - // possible general way, in order - // to allow for easy - // generalization to higher order - // methods and to different - // fundamental solutions (e.g., - // Stokes or Maxwell). + // The only really different function that we find here is the assembly + // routine. We wrote this function in the most possible general way, in + // order to allow for easy generalization to higher order methods and to + // different fundamental solutions (e.g., Stokes or Maxwell). // - // The most noticeable difference - // is the fact that the final - // matrix is full, and that we - // have a nested loop inside the - // usual loop on cells that - // visits all support points of - // the degrees of freedom. - // Moreover, when the support - // point lies inside the cell - // which we are visiting, then - // the integral we perform - // becomes singular. + // The most noticeable difference is the fact that the final matrix is + // full, and that we have a nested loop inside the usual loop on cells + // that visits all support points of the degrees of freedom. Moreover, + // when the support point lies inside the cell which we are visiting, then + // the integral we perform becomes singular. // - // The practical consequence is - // that we have two sets of - // quadrature formulas, finite - // element values and temporary - // storage, one for standard - // integration and one for the - // singular integration, which - // are used where necessary. + // The practical consequence is that we have two sets of quadrature + // formulas, finite element values and temporary storage, one for standard + // integration and one for the singular integration, which are used where + // necessary. void assemble_system(); - // There are two options for the - // solution of this problem. The - // first is to use a direct - // solver, and the second is to - // use an iterative solver. We + // There are two options for the solution of this problem. The first is to + // use a direct solver, and the second is to use an iterative solver. We // opt for the second option. // - // The matrix that we assemble is - // not symmetric, and we opt to - // use the GMRES method; however - // the construction of an - // efficient preconditioner for - // boundary element methods is - // not a trivial issue. Here we - // use a non preconditioned GMRES - // solver. The options for the - // iterative solver, such as the - // tolerance, the maximum number - // of iterations, are selected + // The matrix that we assemble is not symmetric, and we opt to use the + // GMRES method; however the construction of an efficient preconditioner + // for boundary element methods is not a trivial issue. Here we use a non + // preconditioned GMRES solver. The options for the iterative solver, such + // as the tolerance, the maximum number of iterations, are selected // through the parameter file. void solve_system(); - // Once we obtained the solution, - // we compute the $L^2$ error of - // the computed potential as well - // as the $L^\infty$ error of the - // approximation of the solid - // angle. The mesh we are using - // is an approximation of a - // smooth curve, therefore the - // computed diagonal matrix of - // fraction of angles or solid - // angles $\alpha(\mathbf{x})$ - // should be constantly equal to - // $\frac 12$. In this routine we - // output the error on the - // potential and the error in the - // approximation of the computed - // angle. 
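The unpreconditioned GMRES solve chosen above amounts to very little code in deal.II. A hedged sketch, assuming the SolverControl object has already been filled from the parameter file and using the matrix and vector members declared in this class:

@code
#include <deal.II/lac/solver_gmres.h>
#include <deal.II/lac/precondition.h>

SolverGMRES<Vector<double> > solver (solver_control);
solver.solve (system_matrix, phi, system_rhs, PreconditionIdentity ());
@endcode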
Notice that the latter - // error is actually not the - // error in the computation of - // the angle, but a measure of - // how well we are approximating - // the sphere and the circle. + // Once we obtained the solution, we compute the $L^2$ error of the + // computed potential as well as the $L^\infty$ error of the approximation + // of the solid angle. The mesh we are using is an approximation of a + // smooth curve, therefore the computed diagonal matrix of fraction of + // angles or solid angles $\alpha(\mathbf{x})$ should be constantly equal + // to $\frac 12$. In this routine we output the error on the potential and + // the error in the approximation of the computed angle. Notice that the + // latter error is actually not the error in the computation of the angle, + // but a measure of how well we are approximating the sphere and the + // circle. // - // Experimenting a little with - // the computation of the angles - // gives very accurate results - // for simpler geometries. To - // verify this you can comment - // out, in the read_domain() - // method, the - // tria.set_boundary(1, boundary) - // line, and check the alpha that - // is generated by the - // program. By removing this - // call, whenever the mesh is - // refined new nodes will be - // placed along the straight - // lines that made up the coarse - // mesh, rather than be pulled - // onto the surface that we - // really want to approximate. In - // the three dimensional case, - // the coarse grid of the sphere - // is obtained starting from a - // cube, and the obtained values - // of alphas are exactly $\frac - // 12$ on the nodes of the faces, - // $\frac 34$ on the nodes of the - // edges and $\frac 78$ on the 8 - // nodes of the vertices. + // Experimenting a little with the computation of the angles gives very + // accurate results for simpler geometries. To verify this you can comment + // out, in the read_domain() method, the tria.set_boundary(1, boundary) + // line, and check the alpha that is generated by the program. By removing + // this call, whenever the mesh is refined new nodes will be placed along + // the straight lines that made up the coarse mesh, rather than be pulled + // onto the surface that we really want to approximate. In the three + // dimensional case, the coarse grid of the sphere is obtained starting + // from a cube, and the obtained values of alphas are exactly $\frac 12$ + // on the nodes of the faces, $\frac 34$ on the nodes of the edges and + // $\frac 78$ on the 8 nodes of the vertices. void compute_errors(const unsigned int cycle); - // Once we obtained a solution on - // the codimension one domain, we - // want to interpolate it to the - // rest of the space. This is - // done by performing again the - // convolution of the solution - // with the kernel in the - // compute_exterior_solution() - // function. + // Once we obtained a solution on the codimension one domain, we want to + // interpolate it to the rest of the space. This is done by performing + // again the convolution of the solution with the kernel in the + // compute_exterior_solution() function. // - // We would like to plot the - // velocity variable which is the - // gradient of the potential - // solution. The potential - // solution is only known on the - // boundary, but we use the - // convolution with the - // fundamental solution to - // interpolate it on a standard - // dim dimensional continuous - // finite element space. 
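As a quick sanity check of the cube values quoted above, take $\alpha(\mathbf x)$ to be the fraction of the full solid angle $4\pi$ that lies outside the body. The interior of the cube occupies one half, one quarter and one octant of the full solid angle at a face, edge and vertex node, respectively, so

\f[
\alpha_{\text{face}} = \frac{4\pi - \tfrac{4\pi}{2}}{4\pi} = \frac 12, \qquad
\alpha_{\text{edge}} = \frac{4\pi - \tfrac{4\pi}{4}}{4\pi} = \frac 34, \qquad
\alpha_{\text{vertex}} = \frac{4\pi - \tfrac{4\pi}{8}}{4\pi} = \frac 78.
\f]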
The plot - // of the gradient of the - // extrapolated solution will - // give us the velocity we want. + // We would like to plot the velocity variable which is the gradient of + // the potential solution. The potential solution is only known on the + // boundary, but we use the convolution with the fundamental solution to + // interpolate it on a standard dim dimensional continuous finite element + // space. The plot of the gradient of the extrapolated solution will give + // us the velocity we want. // - // In addition to the solution on - // the exterior domain, we also - // output the solution on the - // domain's boundary in the - // output_results() function, of + // In addition to the solution on the exterior domain, we also output the + // solution on the domain's boundary in the output_results() function, of // course. void compute_exterior_solution(); void output_results(const unsigned int cycle); - // To allow for dimension - // independent programming, we - // specialize this single - // function to extract the - // singular quadrature formula - // needed to integrate the - // singular kernels in the - // interior of the cells. + // To allow for dimension independent programming, we specialize this + // single function to extract the singular quadrature formula needed to + // integrate the singular kernels in the interior of the cells. const Quadrature & get_singular_quadrature( const typename DoFHandler::active_cell_iterator &cell, const unsigned int index) const; - // The usual deal.II classes can - // be used for boundary element - // methods by specifying the - // "codimension" of the - // problem. This is done by - // setting the optional second - // template arguments to - // Triangulation, FiniteElement - // and DoFHandler to the - // dimension of the embedding - // space. In our case we generate - // either 1 or 2 dimensional - // meshes embedded in 2 or 3 + // The usual deal.II classes can be used for boundary element methods by + // specifying the "codimension" of the problem. This is done by setting + // the optional second template arguments to Triangulation, FiniteElement + // and DoFHandler to the dimension of the embedding space. In our case we + // generate either 1 or 2 dimensional meshes embedded in 2 or 3 // dimensional spaces. // - // The optional argument by - // default is equal to the first - // argument, and produces the - // usual finite element classes - // that we saw in all previous + // The optional argument by default is equal to the first argument, and + // produces the usual finite element classes that we saw in all previous // examples. // - // The class is constructed in a - // way to allow for arbitrary - // order of approximation of both - // the domain (through high order - // mapping) and the finite - // element space. The order of - // the finite element space and - // of the mapping can be selected - // in the constructor of the class. + // The class is constructed in a way to allow for arbitrary order of + // approximation of both the domain (through high order mapping) and the + // finite element space. The order of the finite element space and of the + // mapping can be selected in the constructor of the class. Triangulation tria; FE_Q fe; DoFHandler dh; MappingQ mapping; - // In BEM methods, the matrix - // that is generated is - // dense. Depending on the size - // of the problem, the final - // system might be solved by - // direct LU decomposition, or by - // iterative methods. 
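Written out with the template arguments that the comment above refers to, the four member declarations of this class look roughly as follows; the second template argument is the dimension of the embedding space, the first that of the mesh itself:

@code
Triangulation<dim-1, dim> tria;      // e.g. a surface mesh living in 3d space
FE_Q<dim-1, dim>          fe;
DoFHandler<dim-1, dim>    dh;
MappingQ<dim-1, dim>      mapping;
@endcode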
In this - // example we use an - // unpreconditioned GMRES - // method. Building a - // preconditioner for BEM method - // is non trivial, and we don't - // treat this subject here. + // In BEM methods, the matrix that is generated is dense. Depending on the + // size of the problem, the final system might be solved by direct LU + // decomposition, or by iterative methods. In this example we use an + // unpreconditioned GMRES method. Building a preconditioner for BEM method + // is non trivial, and we don't treat this subject here. FullMatrix system_matrix; Vector system_rhs; - // The next two variables will - // denote the solution $\phi$ as - // well as a vector that will - // hold the values of - // $\alpha(\mathbf x)$ (the - // fraction of $\Omega$ visible - // from a point $\mathbf x$) at - // the support points of our - // shape functions. + // The next two variables will denote the solution $\phi$ as well as a + // vector that will hold the values of $\alpha(\mathbf x)$ (the fraction + // of $\Omega$ visible from a point $\mathbf x$) at the support points of + // our shape functions. Vector phi; Vector alpha; - // The convergence table is used - // to output errors in the exact - // solution and in the computed - // alphas. + // The convergence table is used to output errors in the exact solution + // and in the computed alphas. ConvergenceTable convergence_table; - // The following variables are - // the ones that we fill through - // a parameter file. The new - // objects that we use in this - // example are the - // Functions::ParsedFunction - // object and the - // QuadratureSelector object. + // The following variables are the ones that we fill through a parameter + // file. The new objects that we use in this example are the + // Functions::ParsedFunction object and the QuadratureSelector object. // - // The Functions::ParsedFunction - // class allows us to easily and - // quickly define new function - // objects via parameter files, - // with custom definitions which - // can be very complex (see the - // documentation of that class - // for all the available - // options). + // The Functions::ParsedFunction class allows us to easily and quickly + // define new function objects via parameter files, with custom + // definitions which can be very complex (see the documentation of that + // class for all the available options). // - // We will allocate the - // quadrature object using the - // QuadratureSelector class that - // allows us to generate - // quadrature formulas based on - // an identifying string and on - // the possible degree of the - // formula itself. We used this - // to allow custom selection of - // the quadrature formulas for - // the standard integration, and - // to define the order of the - // singular quadrature rule. + // We will allocate the quadrature object using the QuadratureSelector + // class that allows us to generate quadrature formulas based on an + // identifying string and on the possible degree of the formula itself. We + // used this to allow custom selection of the quadrature formulas for the + // standard integration, and to define the order of the singular + // quadrature rule. // - // We also define a couple of - // parameters which are used in - // case we wanted to extend the - // solution to the entire domain. + // We also define a couple of parameters which are used in case we wanted + // to extend the solution to the entire domain. 
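A hedged sketch of how the QuadratureSelector mentioned above turns a string and a degree, typically both read from the parameter file, into a quadrature rule; the name and order shown here are illustrative. In step-34 the resulting object is then held through a pointer to the Quadrature base class, so that the concrete rule can be chosen at run time:

@code
#include <deal.II/base/quadrature_selector.h>

// Two-dimensional cells (the surface cells of the 3d computation):
const QuadratureSelector<2> quadrature_rule ("gauss", 4);
@endcode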
Functions::ParsedFunction wind; Functions::ParsedFunction exact_solution; @@ -438,32 +292,19 @@ namespace Step34 // @sect4{BEMProblem::BEMProblem and BEMProblem::read_parameters} - // The constructor initializes the - // variuous object in much the same - // way as done in the finite element - // programs such as step-4 or - // step-6. The only new ingredient - // here is the ParsedFunction object, - // which needs, at construction time, - // the specification of the number of - // components. + // The constructor initializes the variuous object in much the same way as + // done in the finite element programs such as step-4 or step-6. The only + // new ingredient here is the ParsedFunction object, which needs, at + // construction time, the specification of the number of components. // - // For the exact solution the number - // of vector components is one, and - // no action is required since one is - // the default value for a - // ParsedFunction object. The wind, - // however, requires dim components - // to be specified. Notice that when - // declaring entries in a parameter - // file for the expression of the - // Functions::ParsedFunction, we need - // to specify the number of - // components explicitly, since the - // function - // Functions::ParsedFunction::declare_parameters - // is static, and has no knowledge of - // the number of components. + // For the exact solution the number of vector components is one, and no + // action is required since one is the default value for a ParsedFunction + // object. The wind, however, requires dim components to be + // specified. Notice that when declaring entries in a parameter file for the + // expression of the Functions::ParsedFunction, we need to specify the + // number of components explicitly, since the function + // Functions::ParsedFunction::declare_parameters is static, and has no + // knowledge of the number of components. template BEMProblem::BEMProblem(const unsigned int fe_degree, const unsigned int mapping_degree) @@ -503,61 +344,32 @@ namespace Step34 } prm.leave_subsection(); - // For both two and three - // dimensions, we set the default - // input data to be such that the - // solution is $x+y$ or - // $x+y+z$. The actually computed - // solution will have value zero at - // infinity. In this case, this - // coincide with the exact - // solution, and no additional - // corrections are needed, but you - // should be aware of the fact that - // we arbitrarily set - // $\phi_\infty$, and the exact - // solution we pass to the program - // needs to have the same value at - // infinity for the error to be - // computed correctly. + // For both two and three dimensions, we set the default input data to be + // such that the solution is $x+y$ or $x+y+z$. The actually computed + // solution will have value zero at infinity. In this case, this coincide + // with the exact solution, and no additional corrections are needed, but + // you should be aware of the fact that we arbitrarily set $\phi_\infty$, + // and the exact solution we pass to the program needs to have the same + // value at infinity for the error to be computed correctly. // - // The use of the - // Functions::ParsedFunction object - // is pretty straight forward. The - // Functions::ParsedFunction::declare_parameters - // function takes an additional - // integer argument that specifies - // the number of components of the - // given function. Its default - // value is one. 
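A small sketch of the component handling just described: the number of vector components is fixed when a Functions::ParsedFunction object is constructed (one by default), and it has to match what is later declared and parsed from the parameter file:

@code
Functions::ParsedFunction<dim> exact_solution;   // scalar: one component by default
Functions::ParsedFunction<dim> wind (dim);       // the wind needs dim components
@endcode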
When the - // corresponding - // Functions::ParsedFunction::parse_parameters - // method is called, the calling - // object has to have the same - // number of components defined - // here, otherwise an exception is - // thrown. + // The use of the Functions::ParsedFunction object is pretty straight + // forward. The Functions::ParsedFunction::declare_parameters function + // takes an additional integer argument that specifies the number of + // components of the given function. Its default value is one. When the + // corresponding Functions::ParsedFunction::parse_parameters method is + // called, the calling object has to have the same number of components + // defined here, otherwise an exception is thrown. // - // When declaring entries, we - // declare both 2 and three - // dimensional functions. However - // only the dim-dimensional one is - // ultimately parsed. This allows - // us to have only one parameter - // file for both 2 and 3 + // When declaring entries, we declare both 2 and three dimensional + // functions. However only the dim-dimensional one is ultimately + // parsed. This allows us to have only one parameter file for both 2 and 3 // dimensional problems. // - // Notice that from a mathematical - // point of view, the wind function - // on the boundary should satisfy - // the condition - // $\int_{\partial\Omega} - // \mathbf{v}\cdot \mathbf{n} d - // \Gamma = 0$, for the problem to - // have a solution. If this - // condition is not satisfied, then - // no solution can be found, and - // the solver will not converge. + // Notice that from a mathematical point of view, the wind function on the + // boundary should satisfy the condition $\int_{\partial\Omega} + // \mathbf{v}\cdot \mathbf{n} d \Gamma = 0$, for the problem to have a + // solution. If this condition is not satisfied, then no solution can be + // found, and the solver will not converge. prm.enter_subsection("Wind function 2d"); { Functions::ParsedFunction<2>::declare_parameters(prm, 2); @@ -587,23 +399,15 @@ namespace Step34 prm.leave_subsection(); - // In the solver section, we set - // all SolverControl - // parameters. The object will then - // be fed to the GMRES solver in - // the solve_system() function. + // In the solver section, we set all SolverControl parameters. The object + // will then be fed to the GMRES solver in the solve_system() function. prm.enter_subsection("Solver"); SolverControl::declare_parameters(prm); prm.leave_subsection(); - // After declaring all these - // parameters to the - // ParameterHandler object, let's - // read an input file that will - // give the parameters their - // values. We then proceed to - // extract these values from the - // ParameterHandler object: + // After declaring all these parameters to the ParameterHandler object, + // let's read an input file that will give the parameters their values. We + // then proceed to extract these values from the ParameterHandler object: prm.read_input(filename); n_cycles = prm.get_integer("Number of cycles"); @@ -639,15 +443,10 @@ namespace Step34 prm.leave_subsection(); - // Finally, here's another example - // of how to use parameter files in - // dimension independent - // programming. If we wanted to - // switch off one of the two - // simulations, we could do this by - // setting the corresponding "Run - // 2d simulation" or "Run 3d - // simulation" flag to false: + // Finally, here's another example of how to use parameter files in + // dimension independent programming. 
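For completeness, a sketch of the SolverControl round trip referred to above: declare its parameters, read the file, and let the object parse the values back. The file name here is illustrative:

@code
prm.enter_subsection ("Solver");
SolverControl::declare_parameters (prm);
prm.leave_subsection ();

prm.read_input ("parameters.prm");

prm.enter_subsection ("Solver");
solver_control.parse_parameters (prm);
prm.leave_subsection ();
@endcode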
If we wanted to switch off one of + // the two simulations, we could do this by setting the corresponding "Run + // 2d simulation" or "Run 3d simulation" flag to false: run_in_this_dimension = prm.get_bool("Run " + Utilities::int_to_string(dim) + "d simulation"); @@ -656,57 +455,32 @@ namespace Step34 // @sect4{BEMProblem::read_domain} - // A boundary element method - // triangulation is basically the - // same as a (dim-1) dimensional - // triangulation, with the difference - // that the vertices belong to a - // (dim) dimensional space. + // A boundary element method triangulation is basically the same as a + // (dim-1) dimensional triangulation, with the difference that the vertices + // belong to a (dim) dimensional space. // - // Some of the mesh formats supported - // in deal.II use by default three - // dimensional points to describe - // meshes. These are the formats - // which are compatible with the - // boundary element method - // capabilities of deal.II. In - // particular we can use either UCD - // or GMSH formats. In both cases, we - // have to be particularly careful - // with the orientation of the mesh, - // because, unlike in the standard - // finite element case, no reordering - // or compatibility check is - // performed here. All meshes are - // considered as oriented, because - // they are embedded in a higher - // dimensional space. (See the - // documentation of the GridIn and of - // the Triangulation for further - // details on orientation of cells in - // a triangulation.) In our case, the - // normals to the mesh are external - // to both the circle in 2d or the - // sphere in 3d. + // Some of the mesh formats supported in deal.II use by default three + // dimensional points to describe meshes. These are the formats which are + // compatible with the boundary element method capabilities of deal.II. In + // particular we can use either UCD or GMSH formats. In both cases, we have + // to be particularly careful with the orientation of the mesh, because, + // unlike in the standard finite element case, no reordering or + // compatibility check is performed here. All meshes are considered as + // oriented, because they are embedded in a higher dimensional space. (See + // the documentation of the GridIn and of the Triangulation for further + // details on orientation of cells in a triangulation.) In our case, the + // normals to the mesh are external to both the circle in 2d or the sphere + // in 3d. // - // The other detail that is required - // for appropriate refinement of the - // boundary element mesh, is an - // accurate description of the - // manifold that the mesh is - // approximating. We already saw this - // several times for the boundary of - // standard finite element meshes - // (for example in step-5 and - // step-6), and here the principle - // and usage is the same, except that - // the HyperBallBoundary class takes - // an additional template parameter - // that specifies the embedding space - // dimension. The function object - // still has to be static to live at - // least as long as the triangulation - // object to which it is attached. + // The other detail that is required for appropriate refinement of the + // boundary element mesh, is an accurate description of the manifold that + // the mesh is approximating. 
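A hedged sketch of what attaching such a manifold description looks like in the codimension-one setting discussed here. The boundary object is kept static so that it lives at least as long as the triangulation it is attached to:

@code
static const Point<dim> center;   // the origin
static const HyperBallBoundary<dim-1, dim> boundary (center, 1.);
tria.set_boundary (1, boundary);
@endcode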
We already saw this several times for the + // boundary of standard finite element meshes (for example in step-5 and + // step-6), and here the principle and usage is the same, except that the + // HyperBallBoundary class takes an additional template parameter that + // specifies the embedding space dimension. The function object still has to + // be static to live at least as long as the triangulation object to which + // it is attached. template void BEMProblem::read_domain() @@ -739,10 +513,8 @@ namespace Step34 // @sect4{BEMProblem::refine_and_resize} - // This function globally refines the - // mesh, distributes degrees of - // freedom, and resizes matrices and - // vectors. + // This function globally refines the mesh, distributes degrees of freedom, + // and resizes matrices and vectors. template void BEMProblem::refine_and_resize() @@ -763,24 +535,16 @@ namespace Step34 // @sect4{BEMProblem::assemble_system} - // The following is the main function - // of this program, assembling the - // matrix that corresponds to the - // boundary integral equation. + // The following is the main function of this program, assembling the matrix + // that corresponds to the boundary integral equation. template void BEMProblem::assemble_system() { - // First we initialize an FEValues - // object with the quadrature - // formula for the integration of - // the kernel in non singular - // cells. This quadrature is - // selected with the parameter - // file, and needs to be quite - // precise, since the functions we - // are integrating are not - // polynomial functions. + // First we initialize an FEValues object with the quadrature formula for + // the integration of the kernel in non singular cells. This quadrature is + // selected with the parameter file, and needs to be quite precise, since + // the functions we are integrating are not polynomial functions. FEValues fe_v(mapping, fe, *quadrature, update_values | update_cell_normal_vectors | @@ -794,46 +558,29 @@ namespace Step34 std::vector > cell_wind(n_q_points, Vector(dim) ); double normal_wind; - // Unlike in finite element - // methods, if we use a collocation - // boundary element method, then in - // each assembly loop we only - // assemble the information that - // refers to the coupling between - // one degree of freedom (the - // degree associated with support - // point $i$) and the current - // cell. This is done using a - // vector of fe.dofs_per_cell - // elements, which will then be - // distributed to the matrix in the - // global row $i$. The following - // object will hold this - // information: + // Unlike in finite element methods, if we use a collocation boundary + // element method, then in each assembly loop we only assemble the + // information that refers to the coupling between one degree of freedom + // (the degree associated with support point $i$) and the current + // cell. This is done using a vector of fe.dofs_per_cell elements, which + // will then be distributed to the matrix in the global row $i$. The + // following object will hold this information: Vector local_matrix_row_i(fe.dofs_per_cell); - // The index $i$ runs on the - // collocation points, which are - // the support points of the $i$th - // basis function, while $j$ runs - // on inner integration points. + // The index $i$ runs on the collocation points, which are the support + // points of the $i$th basis function, while $j$ runs on inner integration + // points. 
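A condensed sketch of the bookkeeping behind these two indices: the support points serve as collocation points, and inside the loops over the support point $i$ and over the cells one checks whether $i$ happens to be one of the current cell's own degrees of freedom, in which case the integral becomes singular. Here local_dof_indices is assumed to hold the cell's global dof indices:

@code
std::vector<Point<dim> > support_points (dh.n_dofs ());
DoFTools::map_dofs_to_support_points (mapping, dh, support_points);

// ... inside the loops over the support point i and over the cells:
unsigned int singular_index = numbers::invalid_unsigned_int;
for (unsigned int j=0; j<fe.dofs_per_cell; ++j)
  if (local_dof_indices[j] == i)
    singular_index = j;
const bool is_singular = (singular_index != numbers::invalid_unsigned_int);
@endcode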
- // We construct a vector - // of support points which will be - // used in the local integrations: + // We construct a vector of support points which will be used in the local + // integrations: std::vector > support_points(dh.n_dofs()); DoFTools::map_dofs_to_support_points( mapping, dh, support_points); - // After doing so, we can start the - // integration loop over all cells, - // where we first initialize the - // FEValues object and get the - // values of $\mathbf{\tilde v}$ at - // the quadrature points (this - // vector field should be constant, - // but it doesn't hurt to be more - // general): + // After doing so, we can start the integration loop over all cells, where + // we first initialize the FEValues object and get the values of + // $\mathbf{\tilde v}$ at the quadrature points (this vector field should + // be constant, but it doesn't hurt to be more general): typename DoFHandler::active_cell_iterator cell = dh.begin_active(), endc = dh.end(); @@ -847,22 +594,13 @@ namespace Step34 const std::vector > &normals = fe_v.get_cell_normal_vectors(); wind.vector_value_list(q_points, cell_wind); - // We then form the integral over - // the current cell for all - // degrees of freedom (note that - // this includes degrees of - // freedom not located on the - // current cell, a deviation from - // the usual finite element - // integrals). The integral that - // we need to perform is singular - // if one of the local degrees of - // freedom is the same as the - // support point $i$. A the - // beginning of the loop we - // therefore check wether this is - // the case, and we store which - // one is the singular index: + // We then form the integral over the current cell for all degrees of + // freedom (note that this includes degrees of freedom not located on + // the current cell, a deviation from the usual finite element + // integrals). The integral that we need to perform is singular if one + // of the local degrees of freedom is the same as the support point + // $i$. A the beginning of the loop we therefore check wether this is + // the case, and we store which one is the singular index: for (unsigned int i=0; i ones(dh.n_dofs()); ones.add(-1.); @@ -1035,8 +734,7 @@ namespace Step34 // @sect4{BEMProblem::solve_system} - // The next function simply solves - // the linear system. + // The next function simply solves the linear system. template void BEMProblem::solve_system() { @@ -1047,13 +745,9 @@ namespace Step34 // @sect4{BEMProblem::compute_errors} - // The computation of the errors is - // exactly the same in all other - // example programs, and we won't - // comment too much. Notice how the - // same methods that are used in the - // finite element methods can be used - // here. + // The computation of the errors is exactly the same in all other example + // programs, and we won't comment too much. Notice how the same methods that + // are used in the finite element methods can be used here. template void BEMProblem::compute_errors(const unsigned int cycle) { @@ -1066,16 +760,10 @@ namespace Step34 const double L2_error = difference_per_cell.l2_norm(); - // The error in the alpha vector - // can be computed directly using - // the Vector::linfty_norm() - // function, since on each node, - // the value should be $\frac - // 12$. 
All errors are then output - // and appended to our - // ConvergenceTable object for - // later computation of convergence - // rates: + // The error in the alpha vector can be computed directly using the + // Vector::linfty_norm() function, since on each node, the value should be + // $\frac 12$. All errors are then output and appended to our + // ConvergenceTable object for later computation of convergence rates: Vector difference_per_node(alpha); difference_per_node.add(-.5); @@ -1100,123 +788,69 @@ namespace Step34 } - // Singular integration requires a - // careful selection of the - // quadrature rules. In particular - // the deal.II library provides - // quadrature rules which are - // taylored for logarithmic - // singularities (QGaussLog, - // QGaussLogR), as well as for 1/R - // singularities (QGaussOneOverR). + // Singular integration requires a careful selection of the quadrature + // rules. In particular the deal.II library provides quadrature rules which + // are taylored for logarithmic singularities (QGaussLog, QGaussLogR), as + // well as for 1/R singularities (QGaussOneOverR). // - // Singular integration is typically - // obtained by constructing weighted - // quadrature formulas with singular - // weights, so that it is possible to + // Singular integration is typically obtained by constructing weighted + // quadrature formulas with singular weights, so that it is possible to // write // - // \f[ - // \int_K f(x) s(x) dx = \sum_{i=1}^N w_i f(q_i) - // \f] + // \f[ \int_K f(x) s(x) dx = \sum_{i=1}^N w_i f(q_i) \f] // - // where $s(x)$ is a given - // singularity, and the weights and - // quadrature points $w_i,q_i$ are - // carefully selected to make the - // formula above an equality for a - // certain class of functions $f(x)$. + // where $s(x)$ is a given singularity, and the weights and quadrature + // points $w_i,q_i$ are carefully selected to make the formula above an + // equality for a certain class of functions $f(x)$. // - // In all the finite element examples - // we have seen so far, the weight of - // the quadrature itself (namely, the - // function $s(x)$), was always - // constantly equal to 1. For - // singular integration, we have two - // choices: we can use the definition - // above, factoring out the - // singularity from the integrand - // (i.e., integrating $f(x)$ with the - // special quadrature rule), or we - // can ask the quadrature rule to - // "normalize" the weights $w_i$ with - // $s(q_i)$: + // In all the finite element examples we have seen so far, the weight of the + // quadrature itself (namely, the function $s(x)$), was always constantly + // equal to 1. For singular integration, we have two choices: we can use + // the definition above, factoring out the singularity from the integrand + // (i.e., integrating $f(x)$ with the special quadrature rule), or we can + // ask the quadrature rule to "normalize" the weights $w_i$ with $s(q_i)$: // - // \f[ - // \int_K f(x) s(x) dx = - // \int_K g(x) dx = \sum_{i=1}^N \frac{w_i}{s(q_i)} g(q_i) - // \f] + // \f[ \int_K f(x) s(x) dx = \int_K g(x) dx = \sum_{i=1}^N + // \frac{w_i}{s(q_i)} g(q_i) \f] // - // We use this second option, through - // the @p factor_out_singularity - // parameter of both QGaussLogR and - // QGaussOneOverR. + // We use this second option, through the @p factor_out_singularity + // parameter of both QGaussLogR and QGaussOneOverR. 
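For concreteness, a hedged sketch of how the two singular rules named above are typically built with the singular weight factored out (the last constructor argument); the quadrature order is illustrative, and the role of the remaining arguments is discussed in the following paragraphs. In 3d the singularity is identified by the index of the unit support point, in 2d by its location on the unit cell together with a scaling that depends on the cell size:

@code
// 3d: 1/R singularity sitting at unit-cell vertex 'index'
const QGaussOneOverR<2> singular_3d (4, index, true);

// 2d: logarithmic singularity, rescaled with the current cell's measure
const QGaussLogR<1> singular_2d (4, Point<1> (double (index)),
                                 1. / cell->measure (), true);
@endcode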
// - // These integrals are somewhat - // delicate, especially in two - // dimensions, due to the - // transformation from the real to - // the reference cell, where the - // variable of integration is scaled - // with the determinant of the + // These integrals are somewhat delicate, especially in two dimensions, due + // to the transformation from the real to the reference cell, where the + // variable of integration is scaled with the determinant of the // transformation. // - // In two dimensions this process - // does not result only in a factor - // appearing as a constant factor on - // the entire integral, but also on - // an additional integral altogether - // that needs to be evaluated: + // In two dimensions this process does not result only in a factor appearing + // as a constant factor on the entire integral, but also on an additional + // integral altogether that needs to be evaluated: // - // \f[ - // \int_0^1 f(x)\ln(x/\alpha) dx = - // \int_0^1 f(x)\ln(x) dx - \int_0^1 f(x) \ln(\alpha) dx. - // \f] + // \f[ \int_0^1 f(x)\ln(x/\alpha) dx = \int_0^1 f(x)\ln(x) dx - \int_0^1 + // f(x) \ln(\alpha) dx. \f] // - // This process is taken care of by - // the constructor of the QGaussLogR - // class, which adds additional - // quadrature points and weights to - // take into consideration also the - // second part of the integral. + // This process is taken care of by the constructor of the QGaussLogR class, + // which adds additional quadrature points and weights to take into + // consideration also the second part of the integral. // - // A similar reasoning should be done - // in the three dimensional case, - // since the singular quadrature is - // taylored on the inverse of the - // radius $r$ in the reference cell, - // while our singular function lives - // in real space, however in the - // three dimensional case everything - // is simpler because the singularity - // scales linearly with the - // determinant of the - // transformation. This allows us to - // build the singular two dimensional - // quadrature rules only once and, - // reuse them over all cells. + // A similar reasoning should be done in the three dimensional case, since + // the singular quadrature is taylored on the inverse of the radius $r$ in + // the reference cell, while our singular function lives in real space, + // however in the three dimensional case everything is simpler because the + // singularity scales linearly with the determinant of the + // transformation. This allows us to build the singular two dimensional + // quadrature rules only once and, reuse them over all cells. // - // In the one dimensional singular - // integration this is not possible, - // since we need to know the scaling - // parameter for the quadrature, - // which is not known a priori. Here, - // the quadrature rule itself depends - // also on the size of the current - // cell. For this reason, it is - // necessary to create a new - // quadrature for each singular - // integration. + // In the one dimensional singular integration this is not possible, since + // we need to know the scaling parameter for the quadrature, which is not + // known a priori. Here, the quadrature rule itself depends also on the size + // of the current cell. For this reason, it is necessary to create a new + // quadrature for each singular integration. 
// - // The different quadrature rules are - // built inside the - // get_singular_quadrature, which is - // specialized for dim=2 and dim=3, - // and they are retrieved inside the - // assemble_system function. The - // index given as an argument is the - // index of the unit support point - // where the singularity is located. + // The different quadrature rules are built inside the + // get_singular_quadrature, which is specialized for dim=2 and dim=3, and + // they are retrieved inside the assemble_system function. The index given + // as an argument is the index of the unit support point where the + // singularity is located. template<> const Quadrature<2> &BEMProblem<3>::get_singular_quadrature( @@ -1257,34 +891,20 @@ namespace Step34 // @sect4{BEMProblem::compute_exterior_solution} - // We'd like to also know something - // about the value of the potential - // $\phi$ in the exterior domain: - // after all our motivation to - // consider the boundary integral - // problem was that we wanted to know - // the velocity in the exterior + // We'd like to also know something about the value of the potential $\phi$ + // in the exterior domain: after all our motivation to consider the boundary + // integral problem was that we wanted to know the velocity in the exterior // domain! // - // To this end, let us assume here - // that the boundary element domain - // is contained in the box - // $[-2,2]^{\text{dim}}$, and we - // extrapolate the actual solution - // inside this box using the - // convolution with the fundamental - // solution. The formula for this is - // given in the introduction. + // To this end, let us assume here that the boundary element domain is + // contained in the box $[-2,2]^{\text{dim}}$, and we extrapolate the actual + // solution inside this box using the convolution with the fundamental + // solution. The formula for this is given in the introduction. // - // The reconstruction of the solution - // in the entire space is done on a - // continuous finite element grid of - // dimension dim. These are the usual - // ones, and we don't comment any - // further on them. At the end of the - // function, we output this exterior - // solution in, again, much the usual - // way. + // The reconstruction of the solution in the entire space is done on a + // continuous finite element grid of dimension dim. These are the usual + // ones, and we don't comment any further on them. At the end of the + // function, we output this exterior solution in, again, much the usual way. template void BEMProblem::compute_exterior_solution() { @@ -1373,11 +993,8 @@ namespace Step34 // @sect4{BEMProblem::output_results} - // Outputting the results of our - // computations is a rather - // mechanical tasks. All the - // components of this function have - // been discussed before. + // Outputting the results of our computations is a rather mechanical + // tasks. All the components of this function have been discussed before. template void BEMProblem::output_results(const unsigned int cycle) { @@ -1418,8 +1035,7 @@ namespace Step34 // @sect4{BEMProblem::run} - // This is the main function. It - // should be self explanatory in its + // This is the main function. It should be self explanatory in its // briefness: template void BEMProblem::run() @@ -1454,8 +1070,7 @@ namespace Step34 // @sect3{The main() function} -// This is the main function of this -// program. It is exactly like all previous +// This is the main function of this program. 
It is exactly like all previous // tutorial programs: int main () { diff --git a/deal.II/examples/step-35/step-35.cc b/deal.II/examples/step-35/step-35.cc index cb37555fe7..0c9b3b0a08 100644 --- a/deal.II/examples/step-35/step-35.cc +++ b/deal.II/examples/step-35/step-35.cc @@ -12,11 +12,9 @@ // @sect3{Include files} -// We start by including all the necessary -// deal.II header files and some C++ related -// ones. Each one of them has been discussed -// in previous tutorial programs, so we will -// not get into details here. +// We start by including all the necessary deal.II header files and some C++ +// related ones. Each one of them has been discussed in previous tutorial +// programs, so we will not get into details here. #include #include #include @@ -63,8 +61,7 @@ #include #include -// Finally this is as in all previous -// programs: +// Finally this is as in all previous programs: namespace Step35 { using namespace dealii; @@ -73,20 +70,13 @@ namespace Step35 // @sect3{Run time parameters} // - // Since our method has several - // parameters that can be fine-tuned - // we put them into an external file, - // so that they can be determined at - // run-time. + // Since our method has several parameters that can be fine-tuned we put + // them into an external file, so that they can be determined at run-time. // - // This includes, in particular, the - // formulation of the equation for - // the auxiliary variable $\phi$, for - // which we declare an - // enum. Next, we - // declare a class that is going to - // read and store all the parameters - // that our program needs to run. + // This includes, in particular, the formulation of the equation for the + // auxiliary variable $\phi$, for which we declare an enum. + // Next, we declare a class that is going to read and store all the + // parameters that our program needs to run. namespace RunTimeParameters { enum MethodFormulation @@ -120,11 +110,8 @@ namespace Step35 ParameterHandler prm; }; - // In the constructor of this class - // we declare all the - // parameters. The details of how - // this works have been discussed - // elsewhere, for example in + // In the constructor of this class we declare all the parameters. The + // details of how this works have been discussed elsewhere, for example in // step-19 and step-29. Data_Storage::Data_Storage() { @@ -262,27 +249,16 @@ namespace Step35 // @sect3{Equation data} - // In the next namespace, we declare - // the initial and boundary - // conditions: + // In the next namespace, we declare the initial and boundary conditions: namespace EquationData { - // As we have chosen a completely - // decoupled formulation, we will - // not take advantage of deal.II's - // capabilities to handle vector - // valued problems. We do, however, - // want to use an interface for the - // equation data that is somehow - // dimension independent. To be - // able to do that, our functions - // should be able to know on which - // spatial component we are - // currently working, and we should - // be able to have a common - // interface to do that. The - // following class is an attempt in - // that direction. + // As we have chosen a completely decoupled formulation, we will not take + // advantage of deal.II's capabilities to handle vector valued + // problems. We do, however, want to use an interface for the equation + // data that is somehow dimension independent. 
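The class declared above appears with its body elided; its idea can be sketched as follows (a sketch only, the member names in step-35 may differ in detail): a scalar Function that remembers which velocity component it currently stands for, so that one object can serve all dim components in turn.

@code
template <int dim>
class MultiComponentFunction : public Function<dim>
{
public:
  MultiComponentFunction (const double initial_time = 0.)
    : Function<dim> (1, initial_time), comp (0) {}

  // Select the spatial component this scalar function represents.
  void set_component (const unsigned int d)
  {
    Assert (d < dim, ExcIndexRange (d, 0, dim));
    comp = d;
  }

protected:
  unsigned int comp;
};
@endcode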
To be able to do that, our + // functions should be able to know on which spatial component we are + // currently working, and we should be able to have a common interface to + // do that. The following class is an attempt in that direction. template class MultiComponentFunction: public Function { @@ -309,10 +285,8 @@ namespace Step35 } - // With this class defined, we - // declare classes that describe - // the boundary conditions for - // velocity and pressure: + // With this class defined, we declare classes that describe the boundary + // conditions for velocity and pressure: template class Velocity : public MultiComponentFunction { @@ -408,14 +382,10 @@ namespace Step35 // @sect3{The NavierStokesProjection class} - // Now for the main class of the program. It - // implements the various versions of the - // projection method for Navier-Stokes - // equations. The names for all the methods - // and member variables should be - // self-explanatory, taking into account the - // implementation details given in the - // introduction. + // Now for the main class of the program. It implements the various versions + // of the projection method for Navier-Stokes equations. The names for all + // the methods and member variables should be self-explanatory, taking into + // account the implementation details given in the introduction. template class NavierStokesProjection { @@ -507,50 +477,30 @@ namespace Step35 void initialize_pressure_matrices(); - // The next few structures and functions - // are for doing various things in - // parallel. They follow the scheme laid - // out in @ref threads, using the - // WorkStream class. As explained there, - // this requires us to declare two - // structures for each of the assemblers, - // a per-task data and a scratch data - // structure. These are then handed over - // to functions that assemble local - // contributions and that copy these - // local contributions to the global - // objects. + // The next few structures and functions are for doing various things in + // parallel. They follow the scheme laid out in @ref threads, using the + // WorkStream class. As explained there, this requires us to declare two + // structures for each of the assemblers, a per-task data and a scratch + // data structure. These are then handed over to functions that assemble + // local contributions and that copy these local contributions to the + // global objects. // - // One of the things that are specific to - // this program is that we don't just - // have a single DoFHandler object that - // represents both the velocities and the - // pressure, but we use individual - // DoFHandler objects for these two kinds - // of variables. We pay for this - // optimization when we want to assemble - // terms that involve both variables, - // such as the divergence of the velocity - // and the gradient of the pressure, - // times the respective test - // functions. When doing so, we can't - // just anymore use a single FEValues - // object, but rather we need two, and - // they need to be initialized with cell - // iterators that point to the same cell - // in the triangulation but different - // DoFHandlers. + // One of the things that are specific to this program is that we don't + // just have a single DoFHandler object that represents both the + // velocities and the pressure, but we use individual DoFHandler objects + // for these two kinds of variables. 
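A hedged sketch of the "two DoFHandler, two FEValues" arrangement described above and in the lines that follow; polynomial degrees, quadrature and update flags are illustrative:

@code
FE_Q<dim>       fe_velocity (2), fe_pressure (1);
DoFHandler<dim> dof_handler_velocity (triangulation),
                dof_handler_pressure (triangulation);

QGauss<dim>   quadrature (3);
FEValues<dim> fe_val_vel  (fe_velocity, quadrature,
                           update_values | update_JxW_values),
              fe_val_pres (fe_pressure, quadrature, update_gradients);
@endcode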
We pay for this optimization when we + // want to assemble terms that involve both variables, such as the + // divergence of the velocity and the gradient of the pressure, times the + // respective test functions. When doing so, we can't just anymore use a + // single FEValues object, but rather we need two, and they need to be + // initialized with cell iterators that point to the same cell in the + // triangulation but different DoFHandlers. // - // To do this in practice, we declare a - // "synchronous" iterator -- an object - // that internally consists of several - // (in our case two) iterators, and each - // time the synchronous iteration is - // moved up one step, each of the - // iterators stored internally is moved - // up one step as well, thereby always - // staying in sync. As it so happens, - // there is a deal.II class that + // To do this in practice, we declare a "synchronous" iterator -- an + // object that internally consists of several (in our case two) iterators, + // and each time the synchronous iteration is moved up one step, each of + // the iterators stored internally is moved up one step as well, thereby + // always staying in sync. As it so happens, there is a deal.II class that // facilitates this sort of thing. typedef std_cxx1x::tuple< typename DoFHandler::active_cell_iterator, typename DoFHandler::active_cell_iterator @@ -615,10 +565,8 @@ namespace Step35 void copy_gradient_local_to_global (const InitGradPerTaskData &data); - // The same general layout also applies - // to the following classes and functions - // implementing the assembly of the - // advection term: + // The same general layout also applies to the following classes and + // functions implementing the assembly of the advection term: void assemble_advection_term(); struct AdvectionPerTaskData @@ -671,10 +619,9 @@ namespace Step35 void copy_advection_local_to_global (const AdvectionPerTaskData &data); - // The final few functions implement the - // diffusion solve as well as - // postprocessing the output, including - // computing the curl of the velocity: + // The final few functions implement the diffusion solve as well as + // postprocessing the output, including computing the curl of the + // velocity: void diffusion_component_solve (const unsigned int d); void output_results (const unsigned int step); @@ -686,14 +633,10 @@ namespace Step35 // @sect4{ NavierStokesProjection::NavierStokesProjection } - // In the constructor, we just read - // all the data from the - // Data_Storage object - // that is passed as an argument, - // verify that the data we read is - // reasonable and, finally, create - // the triangulation and load the - // initial data. + // In the constructor, we just read all the data from the + // Data_Storage object that is passed as an argument, verify + // that the data we read is reasonable and, finally, create the + // triangulation and load the initial data. template NavierStokesProjection::NavierStokesProjection(const RunTimeParameters::Data_Storage &data) : @@ -730,17 +673,13 @@ namespace Step35 } - // @sect4{ NavierStokesProjection::create_triangulation_and_dofs } + // @sect4{ + // NavierStokesProjection::create_triangulation_and_dofs } - // The method that creates the - // triangulation and refines it the - // needed number of times. After - // creating the triangulation, it - // creates the mesh dependent data, - // i.e. it distributes degrees of - // freedom and renumbers them, and - // initializes the matrices and - // vectors that we will use. 
+ // The method that creates the triangulation and refines it the needed + // number of times. After creating the triangulation, it creates the mesh + // dependent data, i.e. it distributes degrees of freedom and renumbers + // them, and initializes the matrices and vectors that we will use. template void NavierStokesProjection:: @@ -800,9 +739,7 @@ namespace Step35 // @sect4{ NavierStokesProjection::initialize } - // This method creates the constant - // matrices and loads the initial - // data + // This method creates the constant matrices and loads the initial data template void NavierStokesProjection::initialize() @@ -828,28 +765,20 @@ namespace Step35 } - // @sect4{ The NavierStokesProjection::initialize_*_matrices methods } - - // In this set of methods we initialize the - // sparsity patterns, the constraints (if - // any) and assemble the matrices that do not - // depend on the timestep - // dt. Note that for the Laplace - // and mass matrices, we can use functions in - // the library that do this. Because the - // expensive operations of this function -- - // creating the two matrices -- are entirely - // independent, we could in principle mark - // them as tasks that can be worked on in - // %parallel using the Threads::new_task - // functions. We won't do that here since - // these functions internally already are - // parallelized, and in particular because - // the current function is only called once - // per program run and so does not incur a - // cost in each time step. The necessary - // modifications would be quite - // straightforward, however. + // @sect4{ The NavierStokesProjection::initialize_*_matrices + // methods } + + // In this set of methods we initialize the sparsity patterns, the + // constraints (if any) and assemble the matrices that do not depend on the + // timestep dt. Note that for the Laplace and mass matrices, we + // can use functions in the library that do this. Because the expensive + // operations of this function -- creating the two matrices -- are entirely + // independent, we could in principle mark them as tasks that can be worked + // on in %parallel using the Threads::new_task functions. We won't do that + // here since these functions internally already are parallelized, and in + // particular because the current function is only called once per program + // run and so does not incur a cost in each time step. The necessary + // modifications would be quite straightforward, however. template void NavierStokesProjection::initialize_velocity_matrices() @@ -876,9 +805,8 @@ namespace Step35 vel_Laplace); } - // The initialization of the matrices - // that act on the pressure space is similar - // to the ones that act on the velocity space. + // The initialization of the matrices that act on the pressure space is + // similar to the ones that act on the velocity space. template void NavierStokesProjection::initialize_pressure_matrices() @@ -902,19 +830,12 @@ namespace Step35 } - // For the gradient operator, we - // start by initializing the sparsity - // pattern and compressing it. It is - // important to notice here that the - // gradient operator acts from the - // pressure space into the velocity - // space, so we have to deal with two - // different finite element - // spaces. To keep the loops - // synchronized, we use the - // typedef's that we - // have defined before, namely - // PairedIterators and + // For the gradient operator, we start by initializing the sparsity pattern + // and compressing it. 
It is important to notice here that the gradient + // operator acts from the pressure space into the velocity space, so we have + // to deal with two different finite element spaces. To keep the loops + // synchronized, we use the typedef's that we have defined + // before, namely PairedIterators and // IteratorPair. template void @@ -996,38 +917,26 @@ namespace Step35 // @sect4{ NavierStokesProjection::run } - // This is the time marching - // function, which starting at - // t_0 advances in time - // using the projection method with - // time step dt until - // T. + // This is the time marching function, which starting at t_0 + // advances in time using the projection method with time step + // dt until T. // - // Its second parameter, verbose - // indicates whether the function should - // output information what it is doing at any - // given moment: for example, it will say - // whether we are working on the diffusion, - // projection substep; updating - // preconditioners etc. Rather than - // implementing this output using code like + // Its second parameter, verbose indicates whether the function + // should output information what it is doing at any given moment: for + // example, it will say whether we are working on the diffusion, projection + // substep; updating preconditioners etc. Rather than implementing this + // output using code like // @code - // if (verbose) - // std::cout << "something"; + // if (verbose) std::cout << "something"; // @endcode - // we use the ConditionalOStream class to - // do that for us. That class takes an - // output stream and a condition that - // indicates whether the things you pass - // to it should be passed through to the - // given output stream, or should just - // be ignored. This way, above code - // simply becomes + // we use the ConditionalOStream class to do that for us. That + // class takes an output stream and a condition that indicates whether the + // things you pass to it should be passed through to the given output + // stream, or should just be ignored. This way, above code simply becomes // @code // verbose_cout << "something"; // @endcode - // and does the right thing in either - // case. + // and does the right thing in either case. template void NavierStokesProjection::run (const bool verbose, @@ -1076,26 +985,17 @@ namespace Step35 // @sect4{NavierStokesProjection::diffusion_step} - // The implementation of a diffusion - // step. Note that the expensive operation is - // the diffusion solve at the end of the - // function, which we have to do once for - // each velocity component. To accellerate - // things a bit, we allow to do this in - // %parallel, using the Threads::new_task - // function which makes sure that the - // dim solves are all taken care - // of and are scheduled to available - // processors: if your machine has more than - // one processor core and no other parts of - // this program are using resources - // currently, then the diffusion solves will - // run in %parallel. On the other hand, if - // your system has only one processor core - // then running things in %parallel would be - // inefficient (since it leads, for example, - // to cache congestion) and things will be - // executed sequentially. + // The implementation of a diffusion step. Note that the expensive operation + // is the diffusion solve at the end of the function, which we have to do + // once for each velocity component. 
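As the following lines explain, these per-component solves can be spawned as independent tasks; a sketch of what that looks like inside diffusion_step(), assuming diffusion_component_solve(d) solves for velocity component d:

@code
Threads::TaskGroup<void> tasks;
for (unsigned int d = 0; d < dim; ++d)
  tasks += Threads::new_task (&NavierStokesProjection<dim>::diffusion_component_solve,
                              *this, d);
tasks.join_all ();   // wait until all dim solves have finished
@endcode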
To accellerate things a bit, we allow + // to do this in %parallel, using the Threads::new_task function which makes + // sure that the dim solves are all taken care of and are + // scheduled to available processors: if your machine has more than one + // processor core and no other parts of this program are using resources + // currently, then the diffusion solves will run in %parallel. On the other + // hand, if your system has only one processor core then running things in + // %parallel would be inefficient (since it leads, for example, to cache + // congestion) and things will be executed sequentially. template void NavierStokesProjection::diffusion_step (const bool reinit_prec) @@ -1193,16 +1093,14 @@ namespace Step35 } - // @sect4{ The NavierStokesProjection::assemble_advection_term method and related} + // @sect4{ The NavierStokesProjection::assemble_advection_term + // method and related} - // The following few functions deal with - // assembling the advection terms, which is the part of the - // system matrix for the diffusion step that changes - // at every time step. As mentioned above, we - // will run the assembly loop over all cells - // in %parallel, using the WorkStream class - // and other facilities as described in the - // documentation module on @ref threads. + // The following few functions deal with assembling the advection terms, + // which is the part of the system matrix for the diffusion step that + // changes at every time step. As mentioned above, we will run the assembly + // loop over all cells in %parallel, using the WorkStream class and other + // facilities as described in the documentation module on @ref threads. template void NavierStokesProjection::assemble_advection_term() @@ -1319,17 +1217,10 @@ namespace Step35 // @sect4{ NavierStokesProjection::update_pressure } - // This is the pressure update step - // of the projection method. It - // implements the standard - // formulation of the method, that is - // @f[ - // p^{n+1} = p^n + \phi^{n+1}, - // @f] - // or the rotational form, which is - // @f[ - // p^{n+1} = p^n + \phi^{n+1} - \frac{1}{Re} \nabla\cdot u^{n+1}. - // @f] + // This is the pressure update step of the projection method. It implements + // the standard formulation of the method, that is @f[ p^{n+1} = p^n + + // \phi^{n+1}, @f] or the rotational form, which is @f[ p^{n+1} = p^n + + // \phi^{n+1} - \frac{1}{Re} \nabla\cdot u^{n+1}. @f] template void NavierStokesProjection::update_pressure (const bool reinit_prec) @@ -1355,38 +1246,25 @@ namespace Step35 // @sect4{ NavierStokesProjection::output_results } - // This method plots the current - // solution. The main difficulty is that we - // want to create a single output file that - // contains the data for all velocity - // components, the pressure, and also the - // vorticity of the flow. On the other hand, - // velocities and the pressure live on - // separate DoFHandler objects, and so can't - // be written to the same file using a single - // DataOut object. As a consequence, we have - // to work a bit harder to get the various - // pieces of data into a single DoFHandler - // object, and then use that to drive - // graphical output. + // This method plots the current solution. The main difficulty is that we + // want to create a single output file that contains the data for all + // velocity components, the pressure, and also the vorticity of the flow. 
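Returning briefly to the per-component diffusion solves of the diffusion step described above, a minimal sketch of how the dim solves can be handed to the task scheduler. The helper name diffusion_component_solve() is hypothetical; the actual function and its arguments may differ.

@code
Threads::TaskGroup<void> tasks;
for (unsigned int d = 0; d < dim; ++d)
  // One task per velocity component; the scheduler decides whether they
  // actually run concurrently.
  tasks += Threads::new_task (&NavierStokesProjection<dim>::diffusion_component_solve,
                              *this, d);

// Do not continue before all components have been solved for.
tasks.join_all ();
@endcode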
On + // the other hand, velocities and the pressure live on separate DoFHandler + // objects, and so can't be written to the same file using a single DataOut + // object. As a consequence, we have to work a bit harder to get the various + // pieces of data into a single DoFHandler object, and then use that to + // drive graphical output. // - // We will not elaborate on this process - // here, but rather refer to step-31 and - // step-32, where a similar procedure is used - // (and is documented) to create a joint - // DoFHandler object for all variables. + // We will not elaborate on this process here, but rather refer to step-31 + // and step-32, where a similar procedure is used (and is documented) to + // create a joint DoFHandler object for all variables. // - // Let us also note that we here compute the - // vorticity as a scalar quantity in a - // separate function, using the $L^2$ - // projection of the quantity $\text{curl} u$ - // onto the finite element space used for the - // components of the velocity. In principle, - // however, we could also have computed as a - // pointwise quantity from the velocity, and - // do so through the DataPostprocessor - // mechanism discussed in step-29 and - // step-33. + // Let us also note that we here compute the vorticity as a scalar quantity + // in a separate function, using the $L^2$ projection of the quantity + // $\text{curl} u$ onto the finite element space used for the components of + // the velocity. In principle, however, we could also have computed as a + // pointwise quantity from the velocity, and do so through the + // DataPostprocessor mechanism discussed in step-29 and step-33. template void NavierStokesProjection::output_results (const unsigned int step) { @@ -1465,20 +1343,14 @@ namespace Step35 - // Following is the helper function that - // computes the vorticity by projecting the - // term $\text{curl} u$ onto the finite - // element space used for the components of - // the velocity. The function is only called - // whenever we generate graphical output, so - // not very often, and as a consequence we - // didn't bother parallelizing it using the - // WorkStream concept as we do for the other - // assembly functions. That should not be - // overly complicated, however, if - // needed. Moreover, the implementation that - // we have here only works for 2d, so we bail - // if that is not the case. + // Following is the helper function that computes the vorticity by + // projecting the term $\text{curl} u$ onto the finite element space used + // for the components of the velocity. The function is only called whenever + // we generate graphical output, so not very often, and as a consequence we + // didn't bother parallelizing it using the WorkStream concept as we do for + // the other assembly functions. That should not be overly complicated, + // however, if needed. Moreover, the implementation that we have here only + // works for 2d, so we bail if that is not the case. 
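The bailing out mentioned at the end of the previous paragraph is typically no more than an assertion; a sketch, together with the two-dimensional formula the projection is based on:

@code
// The helper is only implemented for the two-dimensional case.
AssertThrow (dim == 2, ExcNotImplemented ());

// In 2d the vorticity is the scalar curl,
//   omega = d u_y / d x - d u_x / d y,
// and its L^2 projection amounts to solving M w = b with
//   b_i = \int_Omega (d u_y / d x - d u_x / d y) phi_i dx,
// where M is the mass matrix of the velocity component space.
@endcode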
template void NavierStokesProjection::assemble_vorticity (const bool reinit_prec) { @@ -1525,9 +1397,8 @@ namespace Step35 // @sect3{ The main function } -// The main function looks very much like in -// all the other tutorial programs, so there -// is little to comment on here: +// The main function looks very much like in all the other tutorial programs, +// so there is little to comment on here: int main() { try diff --git a/deal.II/examples/step-36/step-36.cc b/deal.II/examples/step-36/step-36.cc index 0e32932b43..c43cf81f27 100644 --- a/deal.II/examples/step-36/step-36.cc +++ b/deal.II/examples/step-36/step-36.cc @@ -11,13 +11,10 @@ // @sect3{Include files} -// As mentioned in the introduction, this -// program is essentially only a slightly -// revised version of step-4. As a -// consequence, most of the following include -// files are as used there, or at least as -// used already in previous tutorial -// programs: +// As mentioned in the introduction, this program is essentially only a +// slightly revised version of step-4. As a consequence, most of the following +// include files are as used there, or at least as used already in previous +// tutorial programs: #include #include #include @@ -38,35 +35,29 @@ #include #include -// PETSc appears here because SLEPc -// depends on this library: +// PETSc appears here because SLEPc depends on this library: #include #include -// And then we need to actually -// import the interfaces for solvers -// that SLEPc provides: +// And then we need to actually import the interfaces for solvers that SLEPc +// provides: #include // We also need some standard C++: #include #include -// Finally, as in previous programs, we -// import all the deal.II class and function -// names into the namespace into which -// everything in this program will go: +// Finally, as in previous programs, we import all the deal.II class and +// function names into the namespace into which everything in this program +// will go: namespace Step36 { using namespace dealii; // @sect3{The EigenvalueProblem class template} - // Following is the class declaration - // for the main class template. It - // looks pretty much exactly like - // what has already been shown in - // step-4: + // Following is the class declaration for the main class template. It looks + // pretty much exactly like what has already been shown in step-4: template class EigenvalueProblem { @@ -84,37 +75,24 @@ namespace Step36 FE_Q fe; DoFHandler dof_handler; - // With these exceptions: For our - // eigenvalue problem, we need - // both a stiffness matrix for - // the left hand side as well as - // a mass matrix for the right - // hand side. We also need not - // just one solution function, - // but a whole set of these for - // the eigenfunctions we want to - // compute, along with the - // corresponding eigenvalues: + // With these exceptions: For our eigenvalue problem, we need both a + // stiffness matrix for the left hand side as well as a mass matrix for + // the right hand side. 
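In other words, the two matrices are the two sides of the discrete generalized eigenvalue problem @f[ A \Phi_i = \lambda_i M \Phi_i, @f] where $A$ collects the stiffness and potential terms, $M$ is the mass matrix, and $(\lambda_i, \Phi_i)$ are the eigenpairs we are after.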
We also need not just one solution function, but a + // whole set of these for the eigenfunctions we want to compute, along + // with the corresponding eigenvalues: PETScWrappers::SparseMatrix stiffness_matrix, mass_matrix; std::vector eigenfunctions; std::vector eigenvalues; - // And then we need an object - // that will store several - // run-time parameters that we - // will specify in an input file: + // And then we need an object that will store several run-time parameters + // that we will specify in an input file: ParameterHandler parameters; - // Finally, we will have an - // object that contains - // "constraints" on our degrees - // of freedom. This could include - // hanging node constraints if we - // had adaptively refined meshes - // (which we don't have in the - // current program). Here, we - // will store the constraints for - // boundary nodes $U_i=0$. + // Finally, we will have an object that contains "constraints" on our + // degrees of freedom. This could include hanging node constraints if we + // had adaptively refined meshes (which we don't have in the current + // program). Here, we will store the constraints for boundary nodes + // $U_i=0$. ConstraintMatrix constraints; }; @@ -122,14 +100,10 @@ namespace Step36 // @sect4{EigenvalueProblem::EigenvalueProblem} - // First up, the constructor. The - // main new part is handling the - // run-time input parameters. We need - // to declare their existence first, - // and then read their values from - // the input file whose name is - // specified as an argument to this - // function: + // First up, the constructor. The main new part is handling the run-time + // input parameters. We need to declare their existence first, and then read + // their values from the input file whose name is specified as an argument + // to this function: template EigenvalueProblem::EigenvalueProblem (const std::string &prm_file) : @@ -154,30 +128,18 @@ namespace Step36 // @sect4{EigenvalueProblem::make_grid_and_dofs} - // The next function creates a mesh - // on the domain $[-1,1]^d$, refines - // it as many times as the input file - // calls for, and then attaches a - // DoFHandler to it and initializes - // the matrices and vectors to their - // correct sizes. We also build the - // constraints that correspond to the - // boundary values + // The next function creates a mesh on the domain $[-1,1]^d$, refines it as + // many times as the input file calls for, and then attaches a DoFHandler to + // it and initializes the matrices and vectors to their correct sizes. We + // also build the constraints that correspond to the boundary values // $u|_{\partial\Omega}=0$. // - // For the matrices, we use the PETSc - // wrappers. These have the ability - // to allocate memory as necessary as - // non-zero entries are added. This - // seems inefficient: we could as - // well first compute the sparsity - // pattern, initialize the matrices - // with it, and as we then insert - // entries we can be sure that we do - // not need to re-allocate memory and - // free the one used previously. One - // way to do that would be to use - // code like this: + // For the matrices, we use the PETSc wrappers. These have the ability to + // allocate memory as necessary as non-zero entries are added. This seems + // inefficient: we could as well first compute the sparsity pattern, + // initialize the matrices with it, and as we then insert entries we can be + // sure that we do not need to re-allocate memory and free the one used + // previously. 
One way to do that would be to use code like this: // @code // CompressedSimpleSparsityPattern // csp (dof_handler.n_dofs(), @@ -187,36 +149,21 @@ namespace Step36 // stiffness_matrix.reinit (csp); // mass_matrix.reinit (csp); // @endcode - // instead of the two - // reinit() calls for - // the stiffness and mass matrices - // below. + // instead of the two reinit() calls for the + // stiffness and mass matrices below. // - // This doesn't quite work, - // unfortunately. The code above may - // lead to a few entries in the - // non-zero pattern to which we only - // ever write zero entries; most - // notably, this holds true for - // off-diagonal entries for those - // rows and columns that belong to - // boundary nodes. This shouldn't be - // a problem, but for whatever - // reason, PETSc's ILU - // preconditioner, which we use to - // solve linear systems in the - // eigenvalue solver, doesn't like - // these extra entries and aborts - // with an error message. + // This doesn't quite work, unfortunately. The code above may lead to a few + // entries in the non-zero pattern to which we only ever write zero entries; + // most notably, this holds true for off-diagonal entries for those rows and + // columns that belong to boundary nodes. This shouldn't be a problem, but + // for whatever reason, PETSc's ILU preconditioner, which we use to solve + // linear systems in the eigenvalue solver, doesn't like these extra entries + // and aborts with an error message. // - // In the absence of any obvious way - // to avoid this, we simply settle - // for the second best option, which - // is have PETSc allocate memory as - // necessary. That said, since this - // is not a time critical part, this - // whole affair is of no further - // importance. + // In the absence of any obvious way to avoid this, we simply settle for the + // second best option, which is have PETSc allocate memory as + // necessary. That said, since this is not a time critical part, this whole + // affair is of no further importance. template void EigenvalueProblem::make_grid_and_dofs () { @@ -234,13 +181,9 @@ namespace Step36 dof_handler.n_dofs(), dof_handler.max_couplings_between_dofs()); - // The next step is to take care of - // the eigenspectrum. In this case, - // the outputs are eigenvalues and - // eigenfunctions, so we set the - // size of the list of - // eigenfunctions and eigenvalues - // to be as large as we asked for + // The next step is to take care of the eigenspectrum. In this case, the + // outputs are eigenvalues and eigenfunctions, so we set the size of the + // list of eigenfunctions and eigenvalues to be as large as we asked for // in the input file: eigenfunctions .resize (parameters.get_integer ("Number of eigenvalues/eigenfunctions")); @@ -253,31 +196,17 @@ namespace Step36 // @sect4{EigenvalueProblem::assemble_system} - // Here, we assemble the global - // stiffness and mass matrices from - // local contributions $A^K_{ij} = - // \int_K \nabla\varphi_i(\mathbf x) - // \cdot \nabla\varphi_j(\mathbf x) + - // V(\mathbf x)\varphi_i(\mathbf - // x)\varphi_j(\mathbf x)$ and - // $M^K_{ij} = \int_K - // \varphi_i(\mathbf - // x)\varphi_j(\mathbf x)$ - // respectively. This function should - // be immediately familiar if you've - // seen previous tutorial - // programs. The only thing new would - // be setting up an object that - // described the potential $V(\mathbf - // x)$ using the expression that we - // got from the input file. 
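As a quick sketch of what evaluating such a function object at the quadrature points looks like, in the same spirit as the coefficient in step-5 (the variable names here are illustrative and assume that update_quadrature_points was requested from the FEValues object):

@code
std::vector<double> potential_values (n_q_points);

// Ask the Function object for its values at all quadrature points of the
// current cell in a single call.
potential.value_list (fe_values.get_quadrature_points (),
                      potential_values);
@endcode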
We then - // need to evaluate this object at - // the quadrature points on each - // cell. If you've seen how to - // evaluate function objects (see, - // for example the coefficient in - // step-5), the code here will also - // look rather familiar. + // Here, we assemble the global stiffness and mass matrices from local + // contributions $A^K_{ij} = \int_K \nabla\varphi_i(\mathbf x) \cdot + // \nabla\varphi_j(\mathbf x) + V(\mathbf x)\varphi_i(\mathbf + // x)\varphi_j(\mathbf x)$ and $M^K_{ij} = \int_K \varphi_i(\mathbf + // x)\varphi_j(\mathbf x)$ respectively. This function should be immediately + // familiar if you've seen previous tutorial programs. The only thing new + // would be setting up an object that described the potential $V(\mathbf x)$ + // using the expression that we got from the input file. We then need to + // evaluate this object at the quadrature points on each cell. If you've + // seen how to evaluate function objects (see, for example the coefficient + // in step-5), the code here will also look rather familiar. template void EigenvalueProblem::assemble_system () { @@ -334,12 +263,8 @@ namespace Step36 ) * fe_values.JxW (q_point); } - // Now that we have the local - // matrix contributions, we - // transfer them into the - // global objects and take care - // of zero boundary - // constraints: + // Now that we have the local matrix contributions, we transfer them + // into the global objects and take care of zero boundary constraints: cell->get_dof_indices (local_dof_indices); constraints @@ -352,13 +277,9 @@ namespace Step36 mass_matrix); } - // At the end of the function, we - // tell PETSc that the matrices - // have now been fully assembled - // and that the sparse matrix - // representation can now be - // compressed as no more entries - // will be added: + // At the end of the function, we tell PETSc that the matrices have now + // been fully assembled and that the sparse matrix representation can now + // be compressed as no more entries will be added: stiffness_matrix.compress (); mass_matrix.compress (); } @@ -366,96 +287,55 @@ namespace Step36 // @sect4{EigenvalueProblem::solve} - // This is the key new functionality - // of the program. Now that the - // system is set up, here is a good - // time to actually solve the - // problem: As with other examples - // this is done using a "solve" - // routine. Essentially, it works as - // in other programs: you set up a - // SolverControl object that - // describes the accuracy to which we - // want to solve the linear systems, - // and then we select the kind of - // solver we want. Here we choose the - // Krylov-Schur solver of SLEPc, a - // pretty fast and robust choice for - // this kind of problem: + // This is the key new functionality of the program. Now that the system is + // set up, here is a good time to actually solve the problem: As with other + // examples this is done using a "solve" routine. Essentially, it works as + // in other programs: you set up a SolverControl object that describes the + // accuracy to which we want to solve the linear systems, and then we select + // the kind of solver we want. 
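As an aside on the transfer of local contributions mentioned in the assembly discussion above: with the constraints object in place, copying a local matrix into its global counterpart while eliminating the constrained boundary degrees of freedom is a single call per matrix, roughly as follows (the local object names are illustrative):

@code
cell->get_dof_indices (local_dof_indices);

constraints.distribute_local_to_global (cell_stiffness_matrix,
                                        local_dof_indices,
                                        stiffness_matrix);
constraints.distribute_local_to_global (cell_mass_matrix,
                                        local_dof_indices,
                                        mass_matrix);
@endcode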
Here we choose the Krylov-Schur solver of + // SLEPc, a pretty fast and robust choice for this kind of problem: template void EigenvalueProblem::solve () { - // We start here, as we normally do, - // by assigning convergence control - // we want: + // We start here, as we normally do, by assigning convergence control we + // want: SolverControl solver_control (dof_handler.n_dofs(), 1e-9); SLEPcWrappers::SolverKrylovSchur eigensolver (solver_control); - // Before we actually solve for the - // eigenfunctions and -values, we - // have to also select which set of - // eigenvalues to solve for. Lets - // select those eigenvalues and - // corresponding eigenfunctions - // with the smallest real part (in - // fact, the problem we solve here - // is symmetric and so the - // eigenvalues are purely - // real). After that, we can - // actually let SLEPc do its work: + // Before we actually solve for the eigenfunctions and -values, we have to + // also select which set of eigenvalues to solve for. Lets select those + // eigenvalues and corresponding eigenfunctions with the smallest real + // part (in fact, the problem we solve here is symmetric and so the + // eigenvalues are purely real). After that, we can actually let SLEPc do + // its work: eigensolver.set_which_eigenpairs (EPS_SMALLEST_REAL); eigensolver.solve (stiffness_matrix, mass_matrix, eigenvalues, eigenfunctions, eigenfunctions.size()); - // The output of the call above is - // a set of vectors and values. In - // eigenvalue problems, the - // eigenfunctions are only - // determined up to a constant that - // can be fixed pretty - // arbitrarily. Knowing nothing - // about the origin of the - // eigenvalue problem, SLEPc has no - // other choice than to normalize - // the eigenvectors to one in the - // $l_2$ (vector) - // norm. Unfortunately this norm - // has little to do with any norm - // we may be interested from a - // eigenfunction perspective: the - // $L_2(\Omega)$ norm, or maybe the - // $L_\infty(\Omega)$ norm. + // The output of the call above is a set of vectors and values. In + // eigenvalue problems, the eigenfunctions are only determined up to a + // constant that can be fixed pretty arbitrarily. Knowing nothing about + // the origin of the eigenvalue problem, SLEPc has no other choice than to + // normalize the eigenvectors to one in the $l_2$ (vector) + // norm. Unfortunately this norm has little to do with any norm we may be + // interested from a eigenfunction perspective: the $L_2(\Omega)$ norm, or + // maybe the $L_\infty(\Omega)$ norm. // - // Let us choose the latter and - // rescale eigenfunctions so that - // they have $\|\phi_i(\mathbf - // x)\|_{L^\infty(\Omega)}=1$ - // instead of $\|\Phi\|_{l_2}=1$ - // (where $\phi_i$ is the $i$th - // eigenfunction and - // $\Phi_i$ the corresponding - // vector of nodal values). For the - // $Q_1$ elements chosen here, we - // know that the maximum of the - // function $\phi_i(\mathbf x)$ is - // attained at one of the nodes, so - // $\max_{\mathbf x}\phi_i(\mathbf - // x)=\max_j (\Phi_i)_j$, making - // the normalization in the - // $L_\infty$ norm trivial. Note - // that this doesn't work as easily - // if we had chosen $Q_k$ elements - // with $k>1$: there, the maximum - // of a function does not - // necessarily have to be attained - // at a node, and so $\max_{\mathbf - // x}\phi_i(\mathbf x)\ge\max_j - // (\Phi_i)_j$ (although the - // equality is usually nearly - // true). 
+ // Let us choose the latter and rescale eigenfunctions so that they have + // $\|\phi_i(\mathbf x)\|_{L^\infty(\Omega)}=1$ instead of + // $\|\Phi\|_{l_2}=1$ (where $\phi_i$ is the $i$th eigenfunction + // and $\Phi_i$ the corresponding vector of nodal values). For the $Q_1$ + // elements chosen here, we know that the maximum of the function + // $\phi_i(\mathbf x)$ is attained at one of the nodes, so $\max_{\mathbf + // x}\phi_i(\mathbf x)=\max_j (\Phi_i)_j$, making the normalization in the + // $L_\infty$ norm trivial. Note that this doesn't work as easily if we + // had chosen $Q_k$ elements with $k>1$: there, the maximum of a function + // does not necessarily have to be attained at a node, and so + // $\max_{\mathbf x}\phi_i(\mathbf x)\ge\max_j (\Phi_i)_j$ (although the + // equality is usually nearly true). for (unsigned int i=0; i void EigenvalueProblem::output_results () const { @@ -486,21 +360,13 @@ namespace Step36 std::string("eigenfunction_") + Utilities::int_to_string(i)); - // The only thing worth discussing - // may be that because the potential - // is specified as a function - // expression in the input file, it - // would be nice to also have it as a - // graphical representation along - // with the eigenfunctions. The - // process to achieve this is - // relatively straightforward: we - // build an object that represents - // $V(\mathbf x)$ and then we - // interpolate this continuous - // function onto the finite element - // space. The result we also attach - // to the DataOut object for + // The only thing worth discussing may be that because the potential is + // specified as a function expression in the input file, it would be nice + // to also have it as a graphical representation along with the + // eigenfunctions. The process to achieve this is relatively + // straightforward: we build an object that represents $V(\mathbf x)$ and + // then we interpolate this continuous function onto the finite element + // space. The result we also attach to the DataOut object for // visualization. Vector projected_potential (dof_handler.n_dofs()); { @@ -521,10 +387,8 @@ namespace Step36 // @sect4{EigenvalueProblem::run} - // This is the function which has the - // top-level control over - // everything. It is almost exactly - // the same as in step-4: + // This is the function which has the top-level control over everything. It + // is almost exactly the same as in step-4: template void EigenvalueProblem::run () { @@ -555,14 +419,9 @@ int main (int argc, char **argv) try { - // Here is another difference - // from other steps: We - // initialize the SLEPc work - // space which inherently - // initializes the PETSc work - // space, then go ahead run the - // whole program. After that is - // done, we finalize the + // Here is another difference from other steps: We initialize the SLEPc + // work space which inherently initializes the PETSc work space, then go + // ahead run the whole program. After that is done, we finalize the // SLEPc-PETSc work. SlepcInitialize (&argc, &argv, 0, 0); @@ -579,10 +438,8 @@ int main (int argc, char **argv) SlepcFinalize (); } - // All the while, we are watching - // out if any exceptions should - // have been generated. If that is - // so, we panic... + // All the while, we are watching out if any exceptions should have been + // generated. If that is so, we panic... 
catch (std::exception &exc) { std::cerr << std::endl << std::endl @@ -608,10 +465,8 @@ int main (int argc, char **argv) return 1; } - // If no exceptions are thrown, - // then we tell the program to stop - // monkeying around and exit - // nicely: + // If no exceptions are thrown, then we tell the program to stop monkeying + // around and exit nicely: std::cout << std::endl << "Job done." << std::endl; diff --git a/deal.II/examples/step-37/step-37.cc b/deal.II/examples/step-37/step-37.cc index 807b710c8e..83b8ded0a7 100644 --- a/deal.II/examples/step-37/step-37.cc +++ b/deal.II/examples/step-37/step-37.cc @@ -10,8 +10,7 @@ /* to the file deal.II/doc/license.html for the text and */ /* further information on this license. */ -// First include the necessary files -// from the deal.II library. +// First include the necessary files from the deal.II library. #include #include #include @@ -43,10 +42,9 @@ #include #include -// This includes the data structures for the -// efficient implementation of matrix-free -// methods or more generic finite element -// operators with the class MatrixFree. +// This includes the data structures for the efficient implementation of +// matrix-free methods or more generic finite element operators with the class +// MatrixFree. #include #include @@ -59,41 +57,28 @@ namespace Step37 using namespace dealii; - // To be efficient, the operations - // performed in the matrix-free - // implementation require knowledge of loop - // lengths at compile time, which are given - // by the degree of the finite - // element. Hence, we collect the values of - // the two template parameters that can be - // changed at one place in the code. Of - // course, one could make the degree of the - // finite element a run-time parameter by - // compiling the computational kernels for - // all degrees that are likely (say, - // between 1 and 6) and selecting the - // appropriate kernel at run time. Here, we - // simply choose second order $Q_2$ - // elements and choose dimension 3 as - // standard. + // To be efficient, the operations performed in the matrix-free + // implementation require knowledge of loop lengths at compile time, which + // are given by the degree of the finite element. Hence, we collect the + // values of the two template parameters that can be changed at one place in + // the code. Of course, one could make the degree of the finite element a + // run-time parameter by compiling the computational kernels for all degrees + // that are likely (say, between 1 and 6) and selecting the appropriate + // kernel at run time. Here, we simply choose second order $Q_2$ elements + // and choose dimension 3 as standard. const unsigned int degree_finite_element = 2; const unsigned int dimension = 3; // @sect3{Equation data} - // We define a variable coefficient function - // for the Poisson problem. It is similar to - // the function in step-5 but we use the form - // $a(\mathbf x)=\frac{1}{0.05 + 2\|\bf - // x\|^2}$ instead of a discontinuous one. It - // is merely to demonstrate the possibilities - // of this implementation, rather than making - // much sense physically. We define the - // coefficient in the same way as functions - // in earlier tutorial programs. There is one - // new function, namely a @p value method - // with template argument @p number. + // We define a variable coefficient function for the Poisson problem. It is + // similar to the function in step-5 but we use the form $a(\mathbf + // x)=\frac{1}{0.05 + 2\|\bf x\|^2}$ instead of a discontinuous one. 
It is + // merely to demonstrate the possibilities of this implementation, rather + // than making much sense physically. We define the coefficient in the same + // way as functions in earlier tutorial programs. There is one new function, + // namely a @p value method with template argument @p number. template class Coefficient : public Function { @@ -114,114 +99,67 @@ namespace Step37 - // This is the new function mentioned - // above: Evaluate the coefficient for - // abstract type @p number: It might be - // just a usual double, but it can also be - // a somewhat more complicated type that we - // call VectorizedArray. This data type is - // essentially a short array of doubles - // whose length depends on the particular - // computer system in use. For example, - // systems based on x86-64 support the - // streaming SIMD extensions (SSE), where - // the processor's vector units can process - // two doubles (or four single-precision - // floats) by one CPU instruction. Newer - // processors with support for the - // so-called advanced vector extensions - // (AVX) with 256 bit operands can use four - // doubles and eight floats, - // respectively. Vectorization is a - // single-instruct/multiple-data (SIMD) - // concept, that is, one CPU instruction is - // used to process multiple data values at - // once. Often, finite element programs do - // not use vectorization explicitly as the - // benefits of this concept are only in - // arithmetic intensive operations. The - // bulk of typical finite element workloads - // are memory bandwidth limited (operations - // on sparse matrices and vectors) where - // the additional computational power is - // useless. + // This is the new function mentioned above: Evaluate the coefficient for + // abstract type @p number: It might be just a usual double, but it can also + // be a somewhat more complicated type that we call VectorizedArray. This + // data type is essentially a short array of doubles whose length depends on + // the particular computer system in use. For example, systems based on + // x86-64 support the streaming SIMD extensions (SSE), where the processor's + // vector units can process two doubles (or four single-precision floats) by + // one CPU instruction. Newer processors with support for the so-called + // advanced vector extensions (AVX) with 256 bit operands can use four + // doubles and eight floats, respectively. Vectorization is a + // single-instruct/multiple-data (SIMD) concept, that is, one CPU + // instruction is used to process multiple data values at once. Often, + // finite element programs do not use vectorization explicitly as the + // benefits of this concept are only in arithmetic intensive operations. The + // bulk of typical finite element workloads are memory bandwidth limited + // (operations on sparse matrices and vectors) where the additional + // computational power is useless. // - // Behind the scenes, optimized BLAS - // packages might heavily rely on - // vectorization, though. Also, optimizing - // compilers might automatically transform - // loops involving standard code into more - // efficient vectorized form. However, the - // data flow must be very regular in order - // for compilers to produce efficient - // code. For example, already the automatic - // vectorization of the prototype operation - // that benefits from vectorization, - // matrix-matrix products, fails on most - // compilers (as of writing this tutorial - // in early 2012, neither gcc-4.6 nor the - // Intel compiler v. 
12 manage to produce - // useful vectorized code for the - // FullMatrix::mmult function, and not even - // on the more simpler case where the - // matrix bounds are compile-time constants - // instead of run-time constants as in - // FullMatrix::mmult). The main reason for - // this is that the information to be - // processed at the innermost loop (that is - // where vectorization is applied) is not - // necessarily a multiple of the vector - // length, leaving parts of the resources - // unused. Moreover, the data that can - // potentially be processed together might - // not be laid out in a contiguous way in - // memory or not with the necessary - // alignment to address boundaries that are - // needed by the processor. Or the compiler - // might not be able to prove that. + // Behind the scenes, optimized BLAS packages might heavily rely on + // vectorization, though. Also, optimizing compilers might automatically + // transform loops involving standard code into more efficient vectorized + // form. However, the data flow must be very regular in order for compilers + // to produce efficient code. For example, already the automatic + // vectorization of the prototype operation that benefits from + // vectorization, matrix-matrix products, fails on most compilers (as of + // writing this tutorial in early 2012, neither gcc-4.6 nor the Intel + // compiler v. 12 manage to produce useful vectorized code for the + // FullMatrix::mmult function, and not even on the more simpler case where + // the matrix bounds are compile-time constants instead of run-time + // constants as in FullMatrix::mmult). The main reason for this is that the + // information to be processed at the innermost loop (that is where + // vectorization is applied) is not necessarily a multiple of the vector + // length, leaving parts of the resources unused. Moreover, the data that + // can potentially be processed together might not be laid out in a + // contiguous way in memory or not with the necessary alignment to address + // boundaries that are needed by the processor. Or the compiler might not be + // able to prove that. // - // In the matrix-free implementation in - // deal.II, we have therefore chosen to - // apply vectorization at the level which - // is most appropriate for finite element - // computations: The cell-wise computations - // are typically exactly the same for all - // cells (except for reading from and - // writing to vectors), and hence SIMD can - // be used to process several cells at - // once. In all what follows, you can think - // of a VectorizedArray to hold data from - // several cells. For example, we evaluate - // the coefficient shown here not on a - // simple point as usually done, but we - // hand it a - // Point > - // point, which is actually a collection of - // two points in the case of SSE2. Do not - // confuse the entries in - // VectorizedArray with the - // different coordinates of the - // point. Indeed, the data is laid out such - // that p[0] returns a - // VectorizedArray, which in turn - // contains the x-coordinate for the first - // point and the second point. You may - // access the coordinates individually - // using e.g. p[0][j], j=0,1, - // but it is recommended to define - // operations on a VectorizedArray as much - // as possible in order to make use of - // vectorized operations. 
+ // In the matrix-free implementation in deal.II, we have therefore chosen to + // apply vectorization at the level which is most appropriate for finite + // element computations: The cell-wise computations are typically exactly + // the same for all cells (except for reading from and writing to vectors), + // and hence SIMD can be used to process several cells at once. In all what + // follows, you can think of a VectorizedArray to hold data from several + // cells. For example, we evaluate the coefficient shown here not on a + // simple point as usually done, but we hand it a + // Point > point, which is actually a collection + // of two points in the case of SSE2. Do not confuse the entries in + // VectorizedArray with the different coordinates of the + // point. Indeed, the data is laid out such that p[0] returns a + // VectorizedArray, which in turn contains the x-coordinate for the + // first point and the second point. You may access the coordinates + // individually using e.g. p[0][j], j=0,1, but it is + // recommended to define operations on a VectorizedArray as much as possible + // in order to make use of vectorized operations. // - // In the function implementation, we - // assume that the number type overloads - // basic arithmetic operations, so we just - // write the code as usual. The standard - // functions @p value and value_list that - // are virtual functions contained in the - // base class are then computed from the - // templated function with double type, in - // order to avoid duplicating code. + // In the function implementation, we assume that the number type overloads + // basic arithmetic operations, so we just write the code as usual. The + // standard functions @p value and value_list that are virtual functions + // contained in the base class are then computed from the templated function + // with double type, in order to avoid duplicating code. template template number Coefficient::value (const Point &p, @@ -260,132 +198,73 @@ namespace Step37 // @sect3{Matrix-free implementation} - // The following class, called - // LaplaceOperator, - // implements the differential - // operator. For all practical - // purposes, it is a matrix, i.e., - // you can ask it for its size - // (member functions m(), - // n()) and you can apply it - // to a vector (the various - // variants of the - // vmult() - // function). The difference to a - // real matrix of course lies in - // the fact that this class doesn't - // actually store the - // elements of the matrix, - // but only knows how to compute - // the action of the operator when - // applied to a vector. - - // In this program, we want to make use of - // the data cache for finite element operator - // application that is integrated in - // deal.II. The main class that collects all - // data is called MatrixFree. It contains - // mapping information (Jacobians) and index - // relations between local and global degrees - // of freedom. It also contains constraints - // like the ones from Dirichlet boundary - // conditions (or hanging nodes, if we had - // any). Moreover, it can issue a loop over - // all cells in %parallel, where it makes - // sure that only cells are worked on that do - // not share any degree of freedom (this - // makes the loop thread-safe when writing - // into destination vectors). This is a more - // advanced strategy compared to the - // WorkStream class described in the @ref - // threads module that serializes operations - // that might not be thread-safe. 
Of course, - // to not destroy thread-safety, we have to - // be careful when writing into class-global - // structures. + // The following class, called LaplaceOperator, implements the + // differential operator. For all practical purposes, it is a matrix, i.e., + // you can ask it for its size (member functions m(), n()) and + // you can apply it to a vector (the various variants of the + // vmult() function). The difference to a real matrix of course + // lies in the fact that this class doesn't actually store the + // elements of the matrix, but only knows how to compute the action + // of the operator when applied to a vector. + + // In this program, we want to make use of the data cache for finite element + // operator application that is integrated in deal.II. The main class that + // collects all data is called MatrixFree. It contains mapping information + // (Jacobians) and index relations between local and global degrees of + // freedom. It also contains constraints like the ones from Dirichlet + // boundary conditions (or hanging nodes, if we had any). Moreover, it can + // issue a loop over all cells in %parallel, where it makes sure that only + // cells are worked on that do not share any degree of freedom (this makes + // the loop thread-safe when writing into destination vectors). This is a + // more advanced strategy compared to the WorkStream class described in the + // @ref threads module that serializes operations that might not be + // thread-safe. Of course, to not destroy thread-safety, we have to be + // careful when writing into class-global structures. // - // First comes the implementation of the - // matrix-free class. It provides some - // standard information we expect for - // matrices (like returning the dimensions of - // the matrix), it implements matrix-vector - // multiplications in several forms - // (transposed and untransposed), and it - // provides functions for initializing the - // structure with data. The class has three - // template arguments, one for the dimension - // (as many deal.II classes carry), one for the - // degree of the finite element (which we - // need to enable efficient computations - // through the FEEvaluation class), and one - // for the underlying scalar type. We want to use - // double numbers - // (i.e., double precision, 64-bit - // floating point) for the final - // matrix, but floats (single - // precision, 32-bit floating point - // numbers) for the multigrid level - // matrices (as that is only a - // preconditioner, and floats can - // be worked with twice as fast). + // First comes the implementation of the matrix-free class. It provides some + // standard information we expect for matrices (like returning the + // dimensions of the matrix), it implements matrix-vector multiplications in + // several forms (transposed and untransposed), and it provides functions + // for initializing the structure with data. The class has three template + // arguments, one for the dimension (as many deal.II classes carry), one for + // the degree of the finite element (which we need to enable efficient + // computations through the FEEvaluation class), and one for the underlying + // scalar type. We want to use double numbers (i.e., double + // precision, 64-bit floating point) for the final matrix, but floats + // (single precision, 32-bit floating point numbers) for the multigrid level + // matrices (as that is only a preconditioner, and floats can be worked with + // twice as fast). 
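Concretely, the three template parameters will later be filled in along these lines (a sketch of typical instantiations, using the two constants defined near the top of the program; the float variant is what the multigrid level matrices use):

@code
// Full accuracy for the operator representing the actual linear system:
LaplaceOperator<dimension, degree_finite_element, double> system_matrix;

// Reduced precision for a level operator of the multigrid hierarchy, where
// it only acts as a preconditioner:
LaplaceOperator<dimension, degree_finite_element, float> level_matrix;
@endcode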
// - // In this class, we store the actual MatrixFree - // object, the variable - // coefficient that is evaluated at all - // quadrature points (so that we don't have - // to recompute it during matrix-vector - // products), and a vector that contains the - // diagonal of the matrix that we need for - // the multigrid smoother. We choose to let - // the user provide the diagonal in this - // program, but we could also integrate a - // function in this class to evaluate the - // diagonal. Unfortunately, this forces us to - // define matrix entries at two places, - // once when we evaluate the product and once - // for the diagonal, but the work is still - // much less than when we compute sparse - // matrices. + // In this class, we store the actual MatrixFree object, the variable + // coefficient that is evaluated at all quadrature points (so that we don't + // have to recompute it during matrix-vector products), and a vector that + // contains the diagonal of the matrix that we need for the multigrid + // smoother. We choose to let the user provide the diagonal in this program, + // but we could also integrate a function in this class to evaluate the + // diagonal. Unfortunately, this forces us to define matrix entries at two + // places, once when we evaluate the product and once for the diagonal, but + // the work is still much less than when we compute sparse matrices. // - // As a sidenote, if we implemented - // several different operations on - // the same grid and degrees of - // freedom (like a mass matrix and - // a Laplace matrix), we would have - // to have two classes like the - // current one for each of the - // operators (maybe with a common - // base class). However, in that - // case, we would not store a - // MatrixFree object in this - // class to avoid doing the - // expensive work of pre-computing - // everything MatrixFree stores - // twice. Rather, we would keep - // this object in the main class - // and simply store a reference. + // As a sidenote, if we implemented several different operations on the same + // grid and degrees of freedom (like a mass matrix and a Laplace matrix), we + // would have to have two classes like the current one for each of the + // operators (maybe with a common base class). However, in that case, we + // would not store a MatrixFree object in this class to avoid doing the + // expensive work of pre-computing everything MatrixFree stores + // twice. Rather, we would keep this object in the main class and simply + // store a reference. // - // @note Observe how we store the values - // for the coefficient: We use a vector - // type - // AlignedVector - // > structure. One would think that - // one can use - // std::vector - // > as well, but there are some - // technicalities with vectorization: A - // certain alignment of the data with the - // memory address boundaries is required - // (essentially, a VectorizedArray of 16 - // bytes length as in SSE needs to start at - // a memory address that is divisible by - // 16). The chosen class makes sure that - // this alignment is respected, whereas - // std::vector can in general not, which - // may lead to segmentation faults at - // strange places for some systems or - // suboptimal performance for other - // systems. + // @note Observe how we store the values for the coefficient: We use a + // vector type AlignedVector > + // structure. 
One would think that one can use + // std::vector > as well, but there are + // some technicalities with vectorization: A certain alignment of the data + // with the memory address boundaries is required (essentially, a + // VectorizedArray of 16 bytes length as in SSE needs to start at a memory + // address that is divisible by 16). The chosen class makes sure that this + // alignment is respected, whereas std::vector can in general not, which may + // lead to segmentation faults at strange places for some systems or + // suboptimal performance for other systems. template class LaplaceOperator : public Subscriptor { @@ -433,13 +312,10 @@ namespace Step37 - // This is the constructor of the @p - // LaplaceOperator class. All it does is to - // subscribe to the general deal.II @p - // Subscriptor scheme that makes sure that we - // do not delete an object of this class as - // long as it used somewhere else, e.g. in a - // preconditioner. + // This is the constructor of the @p LaplaceOperator class. All it does is + // to subscribe to the general deal.II @p Subscriptor scheme that makes sure + // that we do not delete an object of this class as long as it used + // somewhere else, e.g. in a preconditioner. template LaplaceOperator::LaplaceOperator () : @@ -448,31 +324,17 @@ namespace Step37 - // The next functions return the - // number of rows and columns of - // the global matrix (i.e. the - // dimensions of the operator this - // class represents, the point of - // this tutorial program was, after - // all, that we don't actually - // store the elements of the rows - // and columns of this - // operator). Since the matrix is - // square, the returned numbers are - // the same. We get the number from - // the vector partitioner stored in - // the data field (a partitioner - // distributes elements of a vector - // onto a number of different - // machines if programs are run in - // %parallel; since this program is - // written to run on only a single - // machine, the partitioner will - // simply say that all elements of - // the vector -- or, in the current - // case, all rows and columns of a - // matrix -- are stored on the - // current machine). + // The next functions return the number of rows and columns of the global + // matrix (i.e. the dimensions of the operator this class represents, the + // point of this tutorial program was, after all, that we don't actually + // store the elements of the rows and columns of this operator). Since the + // matrix is square, the returned numbers are the same. We get the number + // from the vector partitioner stored in the data field (a partitioner + // distributes elements of a vector onto a number of different machines if + // programs are run in %parallel; since this program is written to run on + // only a single machine, the partitioner will simply say that all elements + // of the vector -- or, in the current case, all rows and columns of a + // matrix -- are stored on the current machine). template unsigned int LaplaceOperator::m () const @@ -503,72 +365,37 @@ namespace Step37 // @sect4{Initialization} - // Once we have created the multi-grid - // dof_handler and the constraints, we can - // call the reinit function for each level - // of the multi-grid routine (and the - // active cells). The main purpose of the - // reinit function is to setup the - // MatrixFree instance for the - // problem. Also, the coefficient is - // evaluated. 
For this, we need to activate - // the update flag in the AdditionalData - // field of MatrixFree that enables the - // storage of quadrature point coordinates - // in real space (by default, it only - // caches data for gradients (inverse - // transposed Jacobians) and JxW - // values). Note that if we call the reinit - // function without specifying the level - // (i.e., giving level = - // numbers::invalid_unsigned_int), - // we have told the class to loop over the - // active cells. + // Once we have created the multi-grid dof_handler and the constraints, we + // can call the reinit function for each level of the multi-grid routine + // (and the active cells). The main purpose of the reinit function is to + // setup the MatrixFree instance for the problem. Also, the + // coefficient is evaluated. For this, we need to activate the update flag + // in the AdditionalData field of MatrixFree that enables the storage of + // quadrature point coordinates in real space (by default, it only caches + // data for gradients (inverse transposed Jacobians) and JxW values). Note + // that if we call the reinit function without specifying the level (i.e., + // giving level = numbers::invalid_unsigned_int), we have told + // the class to loop over the active cells. // - // We also set one option regarding - // task parallelism. We choose to - // use the @p partition_color - // strategy, which is based on - // subdivision of cells into - // partitions where cells in - // partition $k$ (or, more - // precisely, the degrees of - // freedom on these cells) only - // interact with cells in - // partitions $k-1$, $k$, and - // $k+1$. Within each partition, - // cells are colored in such a way - // that cells with the same color - // do not share degrees of freedom - // and can, therefore, be worked on - // at the same time without - // interference. This determines a - // task dependency graph that is - // scheduled by the Intel Threading - // Building Blocks library. Another - // option would be the strategy @p - // partition_partition, which - // performs better when the grid is - // more unstructured. We could also - // manually set the size of chunks - // that form one task in the - // scheduling process by setting @p - // tasks_block_size, but the - // default strategy to let the - // function decide works well - // already. + // We also set one option regarding task parallelism. We choose to use the + // @p partition_color strategy, which is based on subdivision of cells into + // partitions where cells in partition $k$ (or, more precisely, the degrees + // of freedom on these cells) only interact with cells in partitions $k-1$, + // $k$, and $k+1$. Within each partition, cells are colored in such a way + // that cells with the same color do not share degrees of freedom and can, + // therefore, be worked on at the same time without interference. This + // determines a task dependency graph that is scheduled by the Intel + // Threading Building Blocks library. Another option would be the strategy + // @p partition_partition, which performs better when the grid is more + // unstructured. We could also manually set the size of chunks that form one + // task in the scheduling process by setting @p tasks_block_size, but the + // default strategy to let the function decide works well already. 
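Put together, the options discussed in the last two paragraphs are collected in a MatrixFree AdditionalData object before calling reinit, roughly as follows (a sketch; the reinit call itself appears in the code below):

@code
typename MatrixFree<dim,number>::AdditionalData additional_data;

// Scheduling strategy for task parallelism, as discussed above.
additional_data.tasks_parallel_scheme =
  MatrixFree<dim,number>::AdditionalData::partition_color;

// Also cache quadrature point locations in real space so that the
// coefficient can be evaluated there (gradient data and JxW values are
// stored by default).
additional_data.mapping_update_flags = (update_gradients |
                                        update_JxW_values |
                                        update_quadrature_points);
@endcode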
// - // To initialize the coefficient, - // we directly give it the - // Coefficient class defined above - // and then select the method - // coefficient_function.value - // with vectorized number (which - // the compiler can deduce from the - // point data type). The use of the - // FEEvaluation class (and its - // template arguments) will be - // explained below. + // To initialize the coefficient, we directly give it the Coefficient class + // defined above and then select the method + // coefficient_function.value with vectorized number (which the + // compiler can deduce from the point data type). The use of the + // FEEvaluation class (and its template arguments) will be explained below. template void LaplaceOperator::reinit (const MGDoFHandler &dof_handler, @@ -609,174 +436,93 @@ namespace Step37 // @sect4{Local evaluation of Laplace operator} - // Here comes the main function of this - // class, the evaluation of the - // matrix-vector product (or, in general, a - // finite element operator - // evaluation). This is done in a function - // that takes exactly four arguments, the - // MatrixFree object, the destination and - // source vectors, and a range of cells - // that are to be worked on. The method - // cell_loop in the MatrixFree - // class will internally call this function - // with some range of cells that is - // obtained by checking which cells are - // possible to work on simultaneously so - // that write operations do not cause any - // race condition. Note that the total - // range of cells as visible in this class - // is usually not equal to the number of - // (active) cells in the triangulation. In - // fact, "cell" may be the wrong term to - // begin with, since it is rather a - // collection of quadrature points from - // several cells, and the MatrixFree class - // groups the quadrature points of several - // cells into one block to enable a higher - // degree of vectorization. The number of - // such "cells" is stored in MatrixFree and - // can be queried through - // MatrixFree::get_size_info().n_macro_cells. Compared - // to the deal.II cell iterators, in this - // class all cells are laid out in a plain - // array with no direct knowledge of level - // or neighborship relations, which makes - // it possible to index the cells by - // unsigned integers. + // Here comes the main function of this class, the evaluation of the + // matrix-vector product (or, in general, a finite element operator + // evaluation). This is done in a function that takes exactly four + // arguments, the MatrixFree object, the destination and source vectors, and + // a range of cells that are to be worked on. The method + // cell_loop in the MatrixFree class will internally call this + // function with some range of cells that is obtained by checking which + // cells are possible to work on simultaneously so that write operations do + // not cause any race condition. Note that the total range of cells as + // visible in this class is usually not equal to the number of (active) + // cells in the triangulation. In fact, "cell" may be the wrong term to + // begin with, since it is rather a collection of quadrature points from + // several cells, and the MatrixFree class groups the quadrature points of + // several cells into one block to enable a higher degree of vectorization. + // The number of such "cells" is stored in MatrixFree and can be queried + // through MatrixFree::get_size_info().n_macro_cells. 
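For orientation, this is how the cell range eventually reaches the local function: the matrix-vector product hands the work to MatrixFree with a single call along the lines of (a sketch; the exact name of the local worker may differ):

@code
// MatrixFree splits the macro-cell range [0, n_macro_cells) into chunks,
// calls the given member function on each chunk, and takes care that no two
// threads write to the same entries of the destination vector.
data.cell_loop (&LaplaceOperator::local_apply, this, dst, src);
@endcode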
Compared to the + // deal.II cell iterators, in this class all cells are laid out in a plain + // array with no direct knowledge of level or neighborship relations, which + // makes it possible to index the cells by unsigned integers. // - // The implementation of the Laplace - // operator is quite simple: First, we need - // to create an object FEEvaluation that - // contains the computational kernels and - // has data fields to store temporary - // results (e.g. gradients evaluated on all - // quadrature points on a collection of a - // few cells). Note that temporary results - // do not use a lot of memory, and since we - // specify template arguments with the - // element order, the data is stored on the - // stack (without expensive memory - // allocation). Usually, one only needs to - // set two template arguments, the - // dimension as first argument and the - // degree of the finite element as - // the second argument (this is equal to - // the number of degrees of freedom per - // dimension minus one for FE_Q - // elements). However, here we also want to - // be able to use float numbers for the - // multigrid preconditioner, which is the - // last (fifth) template - // argument. Therefore, we cannot rely on - // the default template arguments and must - // also fill the third and fourth field, - // consequently. The third argument - // specifies the number of quadrature - // points per direction and has a default - // value equal to the degree of the element - // plus one. The fourth argument sets - // the number of components (one can also - // evaluate vector-valued functions in - // systems of PDEs, but the default is a - // scalar element), and finally the last - // argument sets the number type. + // The implementation of the Laplace operator is quite simple: First, we + // need to create an object FEEvaluation that contains the computational + // kernels and has data fields to store temporary results (e.g. gradients + // evaluated on all quadrature points on a collection of a few cells). Note + // that temporary results do not use a lot of memory, and since we specify + // template arguments with the element order, the data is stored on the + // stack (without expensive memory allocation). Usually, one only needs to + // set two template arguments, the dimension as first argument and the + // degree of the finite element as the second argument (this is equal to the + // number of degrees of freedom per dimension minus one for FE_Q + // elements). However, here we also want to be able to use float numbers for + // the multigrid preconditioner, which is the last (fifth) template + // argument. Therefore, we cannot rely on the default template arguments and + // must also fill the third and fourth field, consequently. The third + // argument specifies the number of quadrature points per direction and has + // a default value equal to the degree of the element plus one. The fourth + // argument sets the number of components (one can also evaluate + // vector-valued functions in systems of PDEs, but the default is a scalar + // element), and finally the last argument sets the number type. // - // Next, we loop over the given cell range and - // then we continue with the actual - // implementation: - //
    - //
  1. Tell the FEEvaluation object the - // (macro) cell we want to work on. - //
  2. Read in the values of the - // source vectors (@p read_dof_values), - // including the resolution of - // constraints. This stores - // $u_\mathrm{cell}$ as described in the - // introduction. - //
  3. Compute the unit-cell gradient - // (the evaluation of finite element - // functions). Since FEEvaluation can - // combine value computations with - // gradient computations, it uses a - // unified interface to all kinds of - // derivatives of order between zero and - // two. We only want gradients, no values - // and no second derivatives, so we set - // the function arguments to true in the - // gradient slot (second slot), and to - // false in the values slot (first slot) - // and Hessian slot (third slot). Note - // that the FEEvaluation class internally - // evaluates shape functions in an - // efficient way where one dimension is - // worked on at a time (using the tensor - // product form of shape functions and - // quadrature points as mentioned in the - // introduction). This gives complexity - // equal to $\mathcal O(d^2 (p+1)^{d+1})$ - // for polynomial degree $p$ in $d$ - // dimensions, compared to the naive - // approach with loops over all local - // degrees of freedom and quadrature - // points that is used in FEValues that - // costs $\mathcal O(d (p+1)^{2d})$. - //
  4. Next comes the application of the - // Jacobian transformation, the - // multiplication by the variable - // coefficient and the quadrature - // weight. FEEvaluation has an access - // function @p get_gradient that applies - // the Jacobian and returns the gradient - // in real space. Then, we just need to - // multiply by the (scalar) coefficient, - // and let the function @p - // submit_gradient apply the second - // Jacobian (for the test function) and - // the quadrature weight and Jacobian - // determinant (JxW). Note that the - // submitted gradient is stored in the - // same data field as where it is read - // from in @p get_gradient. Therefore, - // you need to make sure to not read from - // the same quadrature point again after - // having called @p submit_gradient on - // that particular quadrature point. In - // general, it is a good idea to copy the - // result of @p get_gradient when it is - // used more often than once. - //
  5. Next follows the summation over - // quadrature points for all test - // functions that corresponds to the - // actual integration step. For the - // Laplace operator, we just multiply by - // the gradient, so we call the integrate - // function with the respective argument - // set. If you have an equation where you - // test by both the values of the test - // functions and the gradients, both - // template arguments need to be set to - // true. Calling first the integrate - // function for values and then gradients - // in a separate call leads to wrong - // results, since the second call will - // internally overwrite the results from - // the first call. Note that there is no - // function argument for the second - // derivative for integrate step. - //
  6. Eventually, the local - // contributions in the vector - // $v_\mathrm{cell}$ as mentioned in the - // introduction need to be added into the - // result vector (and constraints are - // applied). This is done with a call to - // @p distribute_local_to_global, the - // same name as the corresponding - // function in the ConstraintMatrix (only - // that we now store the local vector in - // the FEEvaluation object, as are the - // indices between local and global - // degrees of freedom).
+  // Next, we loop over the given cell range and then we continue with the +  // actual implementation (a condensed sketch of the resulting loop follows +  // after this list):
  1. Tell the FEEvaluation object the (macro) + // cell we want to work on.
  2. Read in the values of the source vectors + // (@p read_dof_values), including the resolution of constraints. This + // stores $u_\mathrm{cell}$ as described in the introduction.
  3. Compute +  // the unit-cell gradient (the evaluation of finite element +  // functions). Since FEEvaluation can combine value computations with +  // gradient computations, it uses a unified interface to all kinds of +  // derivatives of order between zero and two. We only want gradients, no +  // values and no second derivatives, so we set the function arguments to +  // true in the gradient slot (second slot), and to false in the values slot +  // (first slot) and Hessian slot (third slot). Note that the FEEvaluation +  // class internally evaluates shape functions in an efficient way where one +  // dimension is worked on at a time (using the tensor product form of shape +  // functions and quadrature points as mentioned in the introduction). This +  // gives complexity equal to $\mathcal O(d^2 (p+1)^{d+1})$ for polynomial +  // degree $p$ in $d$ dimensions, compared to the naive approach with loops +  // over all local degrees of freedom and quadrature points that is used in +  // FEValues and costs $\mathcal O(d (p+1)^{2d})$.
  4. Next comes the +  // application of the Jacobian transformation, the multiplication by the +  // variable coefficient and the quadrature weight. FEEvaluation has an +  // access function @p get_gradient that applies the Jacobian and returns the +  // gradient in real space. Then, we just need to multiply by the (scalar) +  // coefficient, and let the function @p submit_gradient apply the second +  // Jacobian (for the test function) and the quadrature weight and Jacobian +  // determinant (JxW). Note that the submitted gradient is stored in the same +  // data field from which it is read in @p get_gradient. Therefore, you +  // need to make sure not to read from the same quadrature point again after +  // having called @p submit_gradient on that particular quadrature point. In +  // general, it is a good idea to copy the result of @p get_gradient when it +  // is used more than once.
  5. Next follows the summation over +  // quadrature points for all test functions that corresponds to the actual +  // integration step. For the Laplace operator, we just multiply by the +  // gradient, so we call the integrate function with the respective argument +  // set. If you have an equation where you test by both the values of the +  // test functions and the gradients, both arguments need to be set +  // to true. Calling the integrate function first for values and then +  // for gradients in a separate call leads to wrong results, since the second +  // call will internally overwrite the results from the first call. Note that +  // there is no function argument for second derivatives in the integrate +  // step.
  6. Eventually, the local contributions in the vector +  // $v_\mathrm{cell}$ as mentioned in the introduction need to be added into +  // the result vector (and constraints are applied). This is done with a call +  // to @p distribute_local_to_global, the same name as the corresponding +  // function in the ConstraintMatrix (only that the local vector is now stored +  // in the FEEvaluation object, as are the indices between local and global +  // degrees of freedom).
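Put together, the six steps above amount to a cell loop of roughly the following shape. This is only a condensed sketch with hypothetical names: the coefficient is passed in as an explicit table here, whereas the tutorial stores it as a member of the LaplaceOperator class and implements the loop as its operator():

template <int dim, int fe_degree, typename number>
void local_apply_laplace (const MatrixFree<dim,number>               &data,
                          Vector<number>                             &dst,
                          const Vector<number>                       &src,
                          const std::pair<unsigned int,unsigned int> &cell_range,
                          const Table<2,VectorizedArray<number> >    &coefficient)
{
  FEEvaluation<dim,fe_degree,fe_degree+1,1,number> phi (data);

  for (unsigned int cell=cell_range.first; cell<cell_range.second; ++cell)
    {
      phi.reinit (cell);                    // step 1: select the (macro) cell
      phi.read_dof_values (src);            // step 2: read u_cell, resolve constraints
      phi.evaluate (false, true, false);    // step 3: unit-cell gradients only
      for (unsigned int q=0; q<phi.n_q_points; ++q)
        phi.submit_gradient (coefficient(cell,q) * phi.get_gradient(q),
                             q);            // step 4: coefficient, Jacobian, JxW
      phi.integrate (false, true);          // step 5: test by gradients
      phi.distribute_local_to_global (dst); // step 6: add v_cell into the result
    }
}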
template void LaplaceOperator:: @@ -806,15 +552,11 @@ namespace Step37 // @sect4{vmult functions} - // Now to the @p vmult function that is - // called externally: In addition to what - // we do in a @p vmult_add function further - // down, we set the destination to zero - // first. The transposed matrix-vector is - // needed for well-defined multigrid - // preconditioner operations. Since we - // solve a Laplace problem, this is the - // same operation, and we just refer to the + // Now to the @p vmult function that is called externally: In addition to + // what we do in a @p vmult_add function further down, we set the + // destination to zero first. The transposed matrix-vector is needed for + // well-defined multigrid preconditioner operations. Since we solve a + // Laplace problem, this is the same operation, and we just refer to the // vmult operation. template void @@ -848,62 +590,38 @@ namespace Step37 - // This function implements the loop over all - // cells. This is done with the @p cell_loop - // of the MatrixFree class, which takes - // the operator() of this class with arguments - // MatrixFree, OutVector, InVector, - // cell_range. Note that we could also use a - // simple function as local operation in case - // we had constant coefficients (all we need - // then is the MatrixFree, the vectors and - // the cell range), but since the coefficient - // is stored in a variable of this class, we - // cannot use that variant here. The cell loop - // is automatically performed on several threads - // if multithreading is enabled (this class - // uses a quite elaborate algorithm to work on - // cells that do not share any degrees of - // freedom that could possibly give rise to - // race conditions, using the dynamic task - // scheduler of the Intel Threading Building - // Blocks). + // This function implements the loop over all cells. This is done with the + // @p cell_loop of the MatrixFree class, which takes the operator() of this + // class with arguments MatrixFree, OutVector, InVector, cell_range. Note + // that we could also use a simple function as local operation in case we + // had constant coefficients (all we need then is the MatrixFree, the + // vectors and the cell range), but since the coefficient is stored in a + // variable of this class, we cannot use that variant here. The cell loop is + // automatically performed on several threads if multithreading is enabled + // (this class uses a quite elaborate algorithm to work on cells that do not + // share any degrees of freedom that could possibly give rise to race + // conditions, using the dynamic task scheduler of the Intel Threading + // Building Blocks). // - // After the cell loop, we need to touch - // the constrained degrees of freedom: - // Since the assembly loop automatically - // resolves constraints (just as the - // ConstraintMatrix::distribute_local_to_global - // call does), it does not compute any - // contribution for constrained degrees of - // freedom. In other words, the entries for - // constrained DoFs remain zero after the - // first part of this function, as if the - // matrix had empty rows and columns for - // constrained degrees of freedom. On the - // other hand, iterative solvers like CG - // only work for non-singular matrices, so - // we have to modify the operation on - // constrained DoFs. 
The easiest way to do - // that is to pretend that the sub-block of - // the matrix that corresponds to - // constrained DoFs is the identity matrix, - // in which case application of the matrix - // would simply copy the elements of the - // right hand side vector into the left - // hand side. In general, however, one - // needs to make sure that the diagonal - // entries of this sub-block are of the - // same order of magnitude as the diagonal - // elements of the rest of the matrix. - // Here, the domain extent is of unit size, - // so we can simply choose unit size. If we - // had domains that are far away from unit - // size, we would need to choose a number - // that is close to the size of other - // diagonal matrix entries, so that these - // artificial eigenvalues do not change the - // eigenvalue spectrum (and make + // After the cell loop, we need to touch the constrained degrees of freedom: + // Since the assembly loop automatically resolves constraints (just as the + // ConstraintMatrix::distribute_local_to_global call does), it does not + // compute any contribution for constrained degrees of freedom. In other + // words, the entries for constrained DoFs remain zero after the first part + // of this function, as if the matrix had empty rows and columns for + // constrained degrees of freedom. On the other hand, iterative solvers like + // CG only work for non-singular matrices, so we have to modify the + // operation on constrained DoFs. The easiest way to do that is to pretend + // that the sub-block of the matrix that corresponds to constrained DoFs is + // the identity matrix, in which case application of the matrix would simply + // copy the elements of the right hand side vector into the left hand + // side. In general, however, one needs to make sure that the diagonal + // entries of this sub-block are of the same order of magnitude as the + // diagonal elements of the rest of the matrix. Here, the domain extent is + // of unit size, so we can simply choose unit size. If we had domains that + // are far away from unit size, we would need to choose a number that is + // close to the size of other diagonal matrix entries, so that these + // artificial eigenvalues do not change the eigenvalue spectrum (and make // convergence with CG more difficult). template void @@ -920,15 +638,11 @@ namespace Step37 - // The next function is used to return entries of - // the matrix. Since this class is intended - // not to store the matrix entries, it would - // make no sense to provide access to all those - // elements. However, diagonal entries are - // explicitly needed for the implementation - // of the Chebyshev smoother that we intend - // to use in the multigrid - // preconditioner. This matrix is equipped + // The next function is used to return entries of the matrix. Since this + // class is intended not to store the matrix entries, it would make no sense + // to provide access to all those elements. However, diagonal entries are + // explicitly needed for the implementation of the Chebyshev smoother that + // we intend to use in the multigrid preconditioner. This matrix is equipped // with a vector that stores the diagonal. template number @@ -942,20 +656,12 @@ namespace Step37 - // Regarding the calculation of the - // diagonal, we expect the user to - // provide a vector with the - // diagonal entries (and we will - // compute them in the code - // below). 
We only need it for the - // level matrices of multigrid, not - // the system matrix (since we only - // need these diagonals for the - // multigrid smoother). Since we - // fill only elements into - // unconstrained entries, we have - // to set constrained entries to - // one in order to avoid the same + // Regarding the calculation of the diagonal, we expect the user to provide + // a vector with the diagonal entries (and we will compute them in the code + // below). We only need it for the level matrices of multigrid, not the + // system matrix (since we only need these diagonals for the multigrid + // smoother). Since we fill only elements into unconstrained entries, we + // have to set constrained entries to one in order to avoid the same // problems as discussed above. template void @@ -975,19 +681,13 @@ namespace Step37 - // Eventually, we provide a function that - // calculates how much memory this class - // uses. We just need to sum up the memory - // consumption in the MatrixFree object and - // the memory for storing the other member - // variables. As a remark: In 3D and for - // Cartesian meshes, most memory is - // consumed for storing the vector indices - // on the local cells (corresponding to - // local_dof_indices). For general - // (non-Cartesian) meshes, the cached - // Jacobian transformation consumes most - // memory. + // Eventually, we provide a function that calculates how much memory this + // class uses. We just need to sum up the memory consumption in the + // MatrixFree object and the memory for storing the other member + // variables. As a remark: In 3D and for Cartesian meshes, most memory is + // consumed for storing the vector indices on the local cells (corresponding + // to local_dof_indices). For general (non-Cartesian) meshes, the cached + // Jacobian transformation consumes most memory. template std::size_t LaplaceOperator::memory_consumption () const @@ -1002,28 +702,19 @@ namespace Step37 // @sect3{LaplaceProblem class} - // This class is based on the one in - // step-16. However, we replaced the - // SparseMatrix class by our - // matrix-free implementation, which means - // that we can also skip the sparsity - // patterns. Notice that we define the - // LaplaceOperator class with the degree of finite - // element as template argument (the value is - // defined at the top of the file), and that - // we use float numbers for the multigrid - // level matrices. + // This class is based on the one in step-16. However, we replaced the + // SparseMatrix class by our matrix-free implementation, which means + // that we can also skip the sparsity patterns. Notice that we define the + // LaplaceOperator class with the degree of finite element as template + // argument (the value is defined at the top of the file), and that we use + // float numbers for the multigrid level matrices. // - // The class also has a member variable to - // keep track of all the time we spend on - // setting up the entire chain of data - // before we actually go about solving the - // problem. In addition, there is an output - // stream (that is disabled by default) - // that can be used to output details for - // the individual setup operations instead - // of the summary only that is printed out - // by default. + // The class also has a member variable to keep track of all the time we + // spend on setting up the entire chain of data before we actually go about + // solving the problem. 
In addition, there is an output stream (that is + // disabled by default) that can be used to output details for the + // individual setup operations instead of the summary only that is printed + // out by default. template class LaplaceProblem { @@ -1060,14 +751,11 @@ namespace Step37 - // When we initialize the finite element, we - // of course have to use the degree specified - // at the top of the file as well (otherwise, - // an exception will be thrown at some point, - // since the computational kernel defined in - // the templated LaplaceOperator class and the - // information from the finite element read - // out by MatrixFree will not match). + // When we initialize the finite element, we of course have to use the + // degree specified at the top of the file as well (otherwise, an exception + // will be thrown at some point, since the computational kernel defined in + // the templated LaplaceOperator class and the information from the finite + // element read out by MatrixFree will not match). template LaplaceProblem::LaplaceProblem () : @@ -1080,22 +768,16 @@ namespace Step37 // @sect4{LaplaceProblem::setup_system} - // This is the function of step-16 with - // relevant changes due to the LaplaceOperator - // class. We do not use adaptive grids, so we - // do not have to compute edge matrices. Thus, - // all we do is to implement Dirichlet - // boundary conditions through the - // ConstraintMatrix, set up the - // (one-dimensional) quadrature that should - // be used by the matrix-free class, and call - // the initialization functions. + // This is the function of step-16 with relevant changes due to the + // LaplaceOperator class. We do not use adaptive grids, so we do not have to + // compute edge matrices. Thus, all we do is to implement Dirichlet boundary + // conditions through the ConstraintMatrix, set up the (one-dimensional) + // quadrature that should be used by the matrix-free class, and call the + // initialization functions. // - // In the process, we output data on both - // the run time of the program as well as - // on memory consumption, where we output - // memory data in megabytes (1 million - // bytes). + // In the process, we output data on both the run time of the program as + // well as on memory consumption, where we output memory data in megabytes + // (1 million bytes). template void LaplaceProblem::setup_system () { @@ -1139,24 +821,14 @@ namespace Step37 << time() << "s/" << time.wall_time() << "s" << std::endl; time.restart(); - // Next, initialize the matrices - // for the multigrid method on - // all the levels. The function - // MGTools::make_boundary_list - // returns for each multigrid - // level which degrees of freedom - // are located on a Dirichlet - // boundary; we force these DoFs - // to have value zero by adding - // to the ConstraintMatrix object - // a zero condition by using the - // command - // ConstraintMatrix::add_line. Once - // this is done, we close the - // ConstraintMatrix on each level - // so it can be used to read out - // indices internally in the - // MatrixFree. + // Next, initialize the matrices for the multigrid method on all the + // levels. The function MGTools::make_boundary_list returns for each + // multigrid level which degrees of freedom are located on a Dirichlet + // boundary; we force these DoFs to have value zero by adding to the + // ConstraintMatrix object a zero condition by using the command + // ConstraintMatrix::add_line. 
Once this is done, we close the + // ConstraintMatrix on each level so it can be used to read out indices + // internally in the MatrixFree. const unsigned int nlevels = triangulation.n_levels(); mg_matrices.resize(0, nlevels-1); mg_constraints.resize (0, nlevels-1); @@ -1190,13 +862,10 @@ namespace Step37 // @sect4{LaplaceProblem::assemble_system} - // The assemble function is significantly - // reduced compared to step-16. All we need - // to do is to assemble the right hand - // side. That is the same as in many other - // tutorial programs. In the end, we condense - // the constraints from Dirichlet boundary - // conditions away from the right hand side. + // The assemble function is significantly reduced compared to step-16. All + // we need to do is to assemble the right hand side. That is the same as in + // many other tutorial programs. In the end, we condense the constraints + // from Dirichlet boundary conditions away from the right hand side. template void LaplaceProblem::assemble_system () { @@ -1236,16 +905,11 @@ namespace Step37 // @sect4{LaplaceProblem::assemble_multigrid} - // Here is another assemble - // function. Again, it is simpler than - // assembling matrices. We need to compute - // the diagonal of the Laplace matrices on - // the individual levels, send the final - // matrices to the LaplaceOperator class, - // and we need to compute the full matrix - // on the coarsest level (since that is - // inverted exactly in the deal.II - // multigrid implementation). + // Here is another assemble function. Again, it is simpler than assembling + // matrices. We need to compute the diagonal of the Laplace matrices on the + // individual levels, send the final matrices to the LaplaceOperator class, + // and we need to compute the full matrix on the coarsest level (since that + // is inverted exactly in the deal.II multigrid implementation). template void LaplaceProblem::assemble_multigrid () { @@ -1327,17 +991,12 @@ namespace Step37 // @sect4{LaplaceProblem::solve} - // The solution process again looks like - // step-16. We now use a Chebyshev smoother - // instead of SOR (SOR would be very - // difficult to implement because we do not - // have the matrix elements available - // explicitly, and it is difficult to make it - // work efficiently in %parallel). The - // multigrid classes provide a simple - // interface for using the Chebyshev smoother - // which is defined in a preconditioner - // class: MGSmootherPrecondition. + // The solution process again looks like step-16. We now use a Chebyshev + // smoother instead of SOR (SOR would be very difficult to implement because + // we do not have the matrix elements available explicitly, and it is + // difficult to make it work efficiently in %parallel). The multigrid + // classes provide a simple interface for using the Chebyshev smoother which + // is defined in a preconditioner class: MGSmootherPrecondition. template void LaplaceProblem::solve () { @@ -1362,25 +1021,17 @@ namespace Step37 MGSmootherPrecondition > mg_smoother(vector_memory); - // Then, we initialize the smoother with - // our level matrices and the mandatory - // additional data for the Chebyshev - // smoother. We use quite a high degree - // here (6), since matrix-vector products - // are comparably cheap and more parallel - // than the level-transfer operations. 
We - // choose to smooth out a range of $[1.2 - // \hat{\lambda}_{\max}/10,1.2 - // \hat{\lambda}_{\max}]$ in the smoother - // where $\hat{\lambda}_{\max}$ is an - // estimate of the largest eigenvalue. In - // order to compute that eigenvalue, the - // Chebyshev initializations performs a - // few steps of a CG algorithm without - // preconditioner. Since the highest - // eigenvalue is usually the easiest one - // to find and a rough estimate is enough, - // we choose 10 iterations. + // Then, we initialize the smoother with our level matrices and the + // mandatory additional data for the Chebyshev smoother. We use quite a + // high degree here (6), since matrix-vector products are comparably cheap + // and more parallel than the level-transfer operations. We choose to + // smooth out a range of $[1.2 \hat{\lambda}_{\max}/10,1.2 + // \hat{\lambda}_{\max}]$ in the smoother where $\hat{\lambda}_{\max}$ is + // an estimate of the largest eigenvalue. In order to compute that + // eigenvalue, the Chebyshev initializations performs a few steps of a CG + // algorithm without preconditioner. Since the highest eigenvalue is + // usually the easiest one to find and a rough estimate is enough, we + // choose 10 iterations. typename SMOOTHER::AdditionalData smoother_data; smoother_data.smoothing_range = 10.; smoother_data.degree = 6; @@ -1400,25 +1051,16 @@ namespace Step37 MGTransferPrebuilt > > preconditioner(mg_dof_handler, mg, mg_transfer); - // Finally, write out the memory - // consumption of the Multigrid object - // (or rather, of its most significant - // components, since there is no built-in - // function for the total multigrid - // object), then create the solver object - // and solve the system. This is very - // easy, and we didn't even see any - // difference in the solve process - // compared to step-16. The magic is all - // hidden behind the implementation of - // the LaplaceOperator::vmult - // operation. Note that we print out the - // solve time and the accumulated setup - // time through standard out, i.e., in - // any case, whereas detailed times for - // the setup operations are only printed - // in case the flag for detail_times in - // the constructor is changed. + // Finally, write out the memory consumption of the Multigrid object (or + // rather, of its most significant components, since there is no built-in + // function for the total multigrid object), then create the solver object + // and solve the system. This is very easy, and we didn't even see any + // difference in the solve process compared to step-16. The magic is all + // hidden behind the implementation of the LaplaceOperator::vmult + // operation. Note that we print out the solve time and the accumulated + // setup time through standard out, i.e., in any case, whereas detailed + // times for the setup operations are only printed in case the flag for + // detail_times in the constructor is changed. const std::size_t multigrid_memory = (mg_matrices.memory_consumption() + mg_transfer.memory_consumption() + @@ -1452,11 +1094,9 @@ namespace Step37 // @sect4{LaplaceProblem::output_results} - // Here is the data output, which is a - // simplified version of step-5. We use the - // standard VTU (= compressed VTK) output for - // each grid produced in the refinement - // process. + // Here is the data output, which is a simplified version of step-5. We use + // the standard VTU (= compressed VTK) output for each grid produced in the + // refinement process. 
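The body of such an output function is only a handful of calls; a rough sketch (with an arbitrarily chosen file naming scheme, and assuming the usual fstream/sstream headers are included) could read:

template <int dim>
void write_vtu_output (const DoFHandler<dim> &dof_handler,
                       const Vector<double>  &solution,
                       const unsigned int     cycle)
{
  DataOut<dim> data_out;
  data_out.attach_dof_handler (dof_handler);
  data_out.add_data_vector (solution, "solution");
  data_out.build_patches ();

  // One VTU file per refinement cycle.
  std::ostringstream filename;
  filename << "solution-" << cycle << ".vtu";
  std::ofstream output (filename.str().c_str());
  data_out.write_vtu (output);
}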
template void LaplaceProblem::output_results (const unsigned int cycle) const { @@ -1479,10 +1119,9 @@ namespace Step37 // @sect4{LaplaceProblem::run} - // The function that runs the program is - // very similar to the one in step-16. We - // make less refinement steps in 3D - // compared to 2D, but that's it. + // The function that runs the program is very similar to the one in + // step-16. We make less refinement steps in 3D compared to 2D, but that's + // it. template void LaplaceProblem::run () { diff --git a/deal.II/examples/step-38/step-38.cc b/deal.II/examples/step-38/step-38.cc index e473a81b27..a6a3290175 100644 --- a/deal.II/examples/step-38/step-38.cc +++ b/deal.II/examples/step-38/step-38.cc @@ -11,11 +11,9 @@ // @sect3{Include files} -// If you've read through step-4 and step-7, -// you will recognize that we have used all -// of the following include files there -// already. Consequently, we will not explain -// their meaning here again. +// If you've read through step-4 and step-7, you will recognize that we have +// used all of the following include files there already. Consequently, we +// will not explain their meaning here again. #include #include #include @@ -49,69 +47,43 @@ namespace Step38 // @sect3{The LaplaceBeltramiProblem class template} - // This class is almost exactly similar to - // the LaplaceProblem class in - // step-4. + // This class is almost exactly similar to the LaplaceProblem + // class in step-4. // The essential differences are these: // - // - The template parameter now denotes the - // dimensionality of the embedding space, - // which is no longer the same as the - // dimensionality of the domain and the - // triangulation on which we compute. We - // indicate this by calling the parameter - // @p spacedim , and introducing a constant - // @p dim equal to the dimensionality of - // the domain -- here equal to + // - The template parameter now denotes the dimensionality of the embedding + // space, which is no longer the same as the dimensionality of the domain + // and the triangulation on which we compute. We indicate this by calling + // the parameter @p spacedim , and introducing a constant @p dim equal to + // the dimensionality of the domain -- here equal to // spacedim-1. - // - All member variables that have geometric - // aspects now need to know about both - // their own dimensionality as well as that - // of the embedding space. Consequently, we - // need to specify both of their template - // parameters one for the dimension of the - // mesh @p dim, and the other for the - // dimension of the embedding space, - // @p spacedim. This is exactly what we - // did in step-34, take a look there for - // a deeper explanation. - - // - We need an object that describes which - // kind of mapping to use from the - // reference cell to the cells that the - // triangulation is composed of. The - // classes derived from the Mapping base - // class do exactly this. Throughout most - // of deal.II, if you don't do anything at - // all, the library assumes that you want - // an object of kind MappingQ1 that uses a - // (bi-, tri-)linear mapping. In many - // cases, this is quite sufficient, which - // is why the use of these objects is - // mostly optional: for example, if you - // have a polygonal two-dimensional domain - // in two-dimensional space, a bilinear - // mapping of the reference cell to the - // cells of the triangulation yields an - // exact representation of the domain. 
If - // you have a curved domain, one may want - // to use a higher order mapping for those - // cells that lie at the boundary of the - // domain -- this is what we did in - // step-11, for example. However, here we - // have a curved domain, not just a curved - // boundary, and while we can approximate - // it with bilinearly mapped cells, it is - // really only prodent to use a higher - // order mapping for all - // cells. Consequently, this class has a - // member variable of type MappingQ; we - // will choose the polynomial degree of the - // mapping equal to the polynomial degree - // of the finite element used in the - // computations to ensure optimal approximation, though this - // iso-parametricity is not required. + // - All member variables that have geometric aspects now need to know about + // both their own dimensionality as well as that of the embedding + // space. Consequently, we need to specify both of their template + // parameters one for the dimension of the mesh @p dim, and the other for + // the dimension of the embedding space, @p spacedim. This is exactly what + // we did in step-34, take a look there for a deeper explanation. + // - We need an object that describes which kind of mapping to use from the + // reference cell to the cells that the triangulation is composed of. The + // classes derived from the Mapping base class do exactly this. Throughout + // most of deal.II, if you don't do anything at all, the library assumes + // that you want an object of kind MappingQ1 that uses a (bi-, tri-)linear + // mapping. In many cases, this is quite sufficient, which is why the use + // of these objects is mostly optional: for example, if you have a + // polygonal two-dimensional domain in two-dimensional space, a bilinear + // mapping of the reference cell to the cells of the triangulation yields + // an exact representation of the domain. If you have a curved domain, one + // may want to use a higher order mapping for those cells that lie at the + // boundary of the domain -- this is what we did in step-11, for + // example. However, here we have a curved domain, not just a curved + // boundary, and while we can approximate it with bilinearly mapped cells, + // it is really only prodent to use a higher order mapping for all + // cells. Consequently, this class has a member variable of type MappingQ; + // we will choose the polynomial degree of the mapping equal to the + // polynomial degree of the finite element used in the computations to + // ensure optimal approximation, though this iso-parametricity is not + // required. template class LaplaceBeltramiProblem { @@ -144,23 +116,16 @@ namespace Step38 // @sect3{Equation data} - // Next, let us define the classes that - // describe the exact solution and the right - // hand sides of the problem. This is in - // analogy to step-4 and step-7 where we also - // defined such objects. Given the discussion - // in the introduction, the actual formulas - // should be self-explanatory. A point of - // interest may be how we define the value - // and gradient functions for the 2d and 3d - // cases separately, using explicit - // specializations of the general - // template. An alternative to doing it this - // way might have been to define the general - // template and have a switch - // statement (or a sequence of - // ifs) for each possible value - // of the spatial dimension. + // Next, let us define the classes that describe the exact solution and the + // right hand sides of the problem. 
This is in analogy to step-4 and step-7 + // where we also defined such objects. Given the discussion in the + // introduction, the actual formulas should be self-explanatory. A point of + // interest may be how we define the value and gradient functions for the 2d + // and 3d cases separately, using explicit specializations of the general + // template. An alternative to doing it this way might have been to define + // the general template and have a switch statement (or a + // sequence of ifs) for each possible value of the spatial + // dimension. template class Solution : public Function { @@ -283,13 +248,10 @@ namespace Step38 // @sect3{Implementation of the LaplaceBeltramiProblem class} - // The rest of the program is actually quite - // unspectacular if you know step-4. Our - // first step is to define the constructor, - // setting the polynomial degree of the - // finite element and mapping, and - // associating the DoF handler to the - // triangulation: + // The rest of the program is actually quite unspectacular if you know + // step-4. Our first step is to define the constructor, setting the + // polynomial degree of the finite element and mapping, and associating the + // DoF handler to the triangulation: template LaplaceBeltramiProblem:: LaplaceBeltramiProblem (const unsigned degree) @@ -302,75 +264,47 @@ namespace Step38 // @sect4{LaplaceBeltramiProblem::make_grid_and_dofs} - // The next step is to create the mesh, - // distribute degrees of freedom, and set up - // the various variables that describe the - // linear system. All of these steps are - // standard with the exception of how to - // create a mesh that describes a surface. We - // could generate a mesh for the domain we - // are interested in, generate a - // triangulation using a mesh generator, and - // read it in using the GridIn class. Or, as - // we do here, we generate the mesh using the - // facilities in the GridGenerator namespace. + // The next step is to create the mesh, distribute degrees of freedom, and + // set up the various variables that describe the linear system. All of + // these steps are standard with the exception of how to create a mesh that + // describes a surface. We could generate a mesh for the domain we are + // interested in, generate a triangulation using a mesh generator, and read + // it in using the GridIn class. Or, as we do here, we generate the mesh + // using the facilities in the GridGenerator namespace. // - // In particular, what we're going to do is - // this (enclosed between the set of braces - // below): we generate a - // spacedim dimensional mesh for - // the half disk (in 2d) or half ball (in - // 3d), using the - // GridGenerator::half_hyper_ball - // function. This function sets the boundary - // indicators of all faces on the outside of - // the boundary to zero for the ones located - // on the perimeter of the disk/ball, and one - // on the straight part that splits the full - // disk/ball into two halves. The next step - // is the main point: The - // GridTools::extract_boundary_mesh function - // creates a mesh that consists of those - // cells that are the faces of the previous - // mesh, i.e. it describes the surface - // cells of the original (volume) - // mesh. 
However, we do not want all faces: - // only those on the perimeter of the disk or - // ball which carry boundary indicator zero; - // we can select these cells using a set of - // boundary indicators that we pass to + // In particular, what we're going to do is this (enclosed between the set + // of braces below): we generate a spacedim dimensional mesh + // for the half disk (in 2d) or half ball (in 3d), using the + // GridGenerator::half_hyper_ball function. This function sets the boundary + // indicators of all faces on the outside of the boundary to zero for the + // ones located on the perimeter of the disk/ball, and one on the straight + // part that splits the full disk/ball into two halves. The next step is the + // main point: The GridTools::extract_boundary_mesh function creates a mesh + // that consists of those cells that are the faces of the previous mesh, + // i.e. it describes the surface cells of the original (volume) + // mesh. However, we do not want all faces: only those on the perimeter of + // the disk or ball which carry boundary indicator zero; we can select these + // cells using a set of boundary indicators that we pass to // GridTools::extract_boundary_mesh. // - // There is one point that needs to be - // mentioned. In order to refine a surface - // mesh appropriately if the manifold is - // curved (similarly to refining the faces - // of cells that are adjacent to a curved - // boundary), the triangulation has to have - // an object attached to it that describes - // where new vertices should be located. If - // you don't attach such a boundary object, - // they will be located halfway between - // existing vertices; this is appropriate - // if you have a domain with straight - // boundaries (e.g. a polygon) but not - // when, as here, the manifold has - // curvature. So for things to work - // properly, we need to attach a manifold - // object to our (surface) triangulation, - // in much the same way as we've already - // done in 1d for the boundary. We create - // such an object (with indefinite, - // static, lifetime) at the - // top of the function and attach it to the - // triangulation for all cells with - // boundary indicator zero that will be - // created henceforth. + // There is one point that needs to be mentioned. In order to refine a + // surface mesh appropriately if the manifold is curved (similarly to + // refining the faces of cells that are adjacent to a curved boundary), the + // triangulation has to have an object attached to it that describes where + // new vertices should be located. If you don't attach such a boundary + // object, they will be located halfway between existing vertices; this is + // appropriate if you have a domain with straight boundaries (e.g. a + // polygon) but not when, as here, the manifold has curvature. So for things + // to work properly, we need to attach a manifold object to our (surface) + // triangulation, in much the same way as we've already done in 1d for the + // boundary. We create such an object (with indefinite, static, + // lifetime) at the top of the function and attach it to the triangulation + // for all cells with boundary indicator zero that will be created + // henceforth. // - // The final step in creating the mesh is to - // refine it a number of times. The rest of - // the function is the same as in previous - // tutorial programs. + // The final step in creating the mesh is to refine it a number of + // times. The rest of the function is the same as in previous tutorial + // programs. 
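Stripped of the surrounding class, the mesh generation just described boils down to a few calls. The following is only a sketch: the helper name, the element type of the boundary-indicator set, and the number of global refinements are assumptions rather than something prescribed by the text above:

template <int dim, int spacedim>
void make_surface_mesh (Triangulation<dim,spacedim> &triangulation)
{
  // Boundary object with static lifetime describing the sphere, so that new
  // vertices created during refinement end up on the curved manifold.
  static HyperBallBoundary<dim,spacedim> surface_description;
  triangulation.set_boundary (0, surface_description);

  {
    // Volume mesh of the half ball from which the surface mesh is extracted;
    // here dim is expected to equal spacedim-1.
    Triangulation<spacedim> volume_mesh;
    GridGenerator::half_hyper_ball (volume_mesh);

    // Keep only the faces carrying boundary indicator zero, i.e. the curved
    // part of the boundary of the half ball.
    std::set<types::boundary_id> boundary_ids;
    boundary_ids.insert (0);
    GridTools::extract_boundary_mesh (volume_mesh, triangulation, boundary_ids);
  }

  triangulation.refine_global (4);
}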
template void LaplaceBeltramiProblem::make_grid_and_dofs () { @@ -412,20 +346,14 @@ namespace Step38 // @sect4{LaplaceBeltramiProblem::assemble_system} - // The following is the central function of - // this program, assembling the matrix that - // corresponds to the surface Laplacian - // (Laplace-Beltrami operator). Maybe - // surprisingly, it actually looks exactly - // the same as for the regular Laplace - // operator discussed in, for example, - // step-4. The key is that the - // FEValues::shape_gradient function does the - // magic: It returns the surface gradient - // $\nabla_K \phi_i(x_q)$ of the $i$th shape - // function at the $q$th quadrature - // point. The rest then does not need any - // changes either: + // The following is the central function of this program, assembling the + // matrix that corresponds to the surface Laplacian (Laplace-Beltrami + // operator). Maybe surprisingly, it actually looks exactly the same as for + // the regular Laplace operator discussed in, for example, step-4. The key + // is that the FEValues::shape_gradient function does the magic: It returns + // the surface gradient $\nabla_K \phi_i(x_q)$ of the $i$th shape function + // at the $q$th quadrature point. The rest then does not need any changes + // either: template void LaplaceBeltramiProblem::assemble_system () { @@ -504,9 +432,8 @@ namespace Step38 // @sect4{LaplaceBeltramiProblem::solve} - // The next function is the one that solves - // the linear system. Here, too, no changes - // are necessary: + // The next function is the one that solves the linear system. Here, too, no + // changes are necessary: template void LaplaceBeltramiProblem::solve () { @@ -525,48 +452,30 @@ namespace Step38 // @sect4{LaplaceBeltramiProblem::output_result} - // This is the function that generates - // graphical output from the solution. Most - // of it is boilerplate code, but there are - // two points worth pointing out: + // This is the function that generates graphical output from the + // solution. Most of it is boilerplate code, but there are two points worth + // pointing out: // - // - The DataOut::add_data_vector function - // can take two kinds of vectors: Either - // vectors that have one value per degree - // of freedom defined by the DoFHandler - // object previously attached via - // DataOut::attach_dof_handler; and vectors - // that have one value for each cell of the - // triangulation, for example to output - // estimated errors for each - // cell. Typically, the DataOut class knows - // to tell these two kinds of vectors - // apart: there are almost always more - // degrees of freedom than cells, so we can - // differentiate by the two kinds looking - // at the length of a vector. We could do - // the same here, but only because we got - // lucky: we use a half sphere. If we had - // used the whole sphere as domain and - // $Q_1$ elements, we would have the same - // number of cells as vertices and - // consequently the two kinds of vectors - // would have the same number of - // elements. To avoid the resulting - // confusion, we have to tell the - // DataOut::add_data_vector function which - // kind of vector we have: DoF data. This - // is what the third argument to the - // function does. - // - The DataOut::build_patches function can - // generate output that subdivides each - // cell so that visualization programs can - // resolve curved manifolds or higher - // polynomial degree shape functions - // better. 
We here subdivide each element - // in each coordinate direction as many - // times as the polynomial degree of the - // finite element in use. + // - The DataOut::add_data_vector function can take two kinds of vectors: + // Either vectors that have one value per degree of freedom defined by the + // DoFHandler object previously attached via DataOut::attach_dof_handler; + // and vectors that have one value for each cell of the triangulation, for + // example to output estimated errors for each cell. Typically, the + // DataOut class knows to tell these two kinds of vectors apart: there are + // almost always more degrees of freedom than cells, so we can + // differentiate by the two kinds looking at the length of a vector. We + // could do the same here, but only because we got lucky: we use a half + // sphere. If we had used the whole sphere as domain and $Q_1$ elements, + // we would have the same number of cells as vertices and consequently the + // two kinds of vectors would have the same number of elements. To avoid + // the resulting confusion, we have to tell the DataOut::add_data_vector + // function which kind of vector we have: DoF data. This is what the third + // argument to the function does. + // - The DataOut::build_patches function can generate output that subdivides + // each cell so that visualization programs can resolve curved manifolds + // or higher polynomial degree shape functions better. We here subdivide + // each element in each coordinate direction as many times as the + // polynomial degree of the finite element in use. template void LaplaceBeltramiProblem::output_results () const { @@ -589,17 +498,12 @@ namespace Step38 // @sect4{LaplaceBeltramiProblem::compute_error} - // This is the last piece of functionality: - // we want to compute the error in the - // numerical solution. It is a verbatim copy - // of the code previously shown and discussed - // in step-7. As mentioned in the - // introduction, the Solution - // class provides the (tangential) gradient - // of the solution. To avoid evaluating the - // error only a superconvergence points, we - // choose a quadrature rule of sufficiently - // high order. + // This is the last piece of functionality: we want to compute the error in + // the numerical solution. It is a verbatim copy of the code previously + // shown and discussed in step-7. As mentioned in the introduction, the + // Solution class provides the (tangential) gradient of the + // solution. To avoid evaluating the error only a superconvergence points, + // we choose a quadrature rule of sufficiently high order. template void LaplaceBeltramiProblem::compute_error () const { @@ -619,8 +523,8 @@ namespace Step38 // @sect4{LaplaceBeltramiProblem::run} - // The last function provides the top-level - // logic. Its contents are self-explanatory: + // The last function provides the top-level logic. Its contents are + // self-explanatory: template void LaplaceBeltramiProblem::run () { @@ -635,11 +539,9 @@ namespace Step38 // @sect3{The main() function} -// The remainder of the program is taken up -// by the main() function. It -// follows exactly the general layout first -// introduced in step-6 and used in all -// following tutorial programs: +// The remainder of the program is taken up by the main() +// function. 
It follows exactly the general layout first introduced in step-6 +// and used in all following tutorial programs: int main () { try diff --git a/deal.II/examples/step-39/step-39.cc b/deal.II/examples/step-39/step-39.cc index d1a43acaf1..08e031ff37 100644 --- a/deal.II/examples/step-39/step-39.cc +++ b/deal.II/examples/step-39/step-39.cc @@ -9,11 +9,9 @@ /* to the file deal.II/doc/license.html for the text and */ /* further information on this license. */ -// The include files for the linear -// algebra: A regular SparseMatrix, -// which in turn will include the -// necessary files for -// SparsityPattern and Vector classes. +// The include files for the linear algebra: A regular SparseMatrix, which in +// turn will include the necessary files for SparsityPattern and Vector +// classes. #include #include #include @@ -21,29 +19,24 @@ #include #include -// Include files for setting up the -// mesh +// Include files for setting up the mesh #include #include -// Include files for FiniteElement -// classes and DoFHandler. +// Include files for FiniteElement classes and DoFHandler. #include #include #include #include #include -// The include files for using the -// MeshWorker framework +// The include files for using the MeshWorker framework #include #include #include #include -// The include file for local -// integrators associated with the -// Laplacian +// The include file for local integrators associated with the Laplacian #include // Support for multigrid methods @@ -54,9 +47,8 @@ #include #include -// Finally, we take our exact -// solution from the library as well -// as quadrature and additional tools. +// Finally, we take our exact solution from the library as well as quadrature +// and additional tools. #include #include #include @@ -65,47 +57,32 @@ #include #include -// All classes of the deal.II library -// are in the namespace dealii. In -// order to save typing, we tell the -// compiler to search names in there -// as well. +// All classes of the deal.II library are in the namespace dealii. In order to +// save typing, we tell the compiler to search names in there as well. namespace Step39 { using namespace dealii; - // This is the function we use to set - // the boundary values and also the - // exact solution we compare to. + // This is the function we use to set the boundary values and also the exact + // solution we compare to. Functions::SlitSingularityFunction<2> exact_solution; // @sect3{The local integrators} - // MeshWorker separates local - // integration from the loops over - // cells and faces. Thus, we have to - // write local integration classes - // for generating matrices, the right - // hand side and the error - // estimator. - - // All these classes have the same - // three functions for integrating - // over cells, boundary faces and - // interior faces, respectively. All - // the information needed for the - // local integration is provided by - // MeshWorker::IntegrationInfo. Note - // that the signature of the functions cannot - // be changed, because it is expected - // by MeshWorker::integration_loop(). - - // The first class defining local - // integrators is responsible for - // computing cell and face - // matrices. It is used to assemble - // the global matrix as well as the - // level matrices. + // MeshWorker separates local integration from the loops over cells and + // faces. Thus, we have to write local integration classes for generating + // matrices, the right hand side and the error estimator. 
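The common shape of these integrator classes, with the template parameters written out, is sketched below; the name ExampleIntegrator is hypothetical and simply mirrors the MatrixIntegrator, RHSIntegrator and Estimator classes defined further down:

template <int dim>
class ExampleIntegrator : public Subscriptor
{
public:
  // Cell, boundary-face and interior-face contributions; the signatures are
  // fixed because they are called from MeshWorker::integration_loop().
  static void cell (MeshWorker::DoFInfo<dim> &dinfo,
                    typename MeshWorker::IntegrationInfo<dim> &info);
  static void boundary (MeshWorker::DoFInfo<dim> &dinfo,
                        typename MeshWorker::IntegrationInfo<dim> &info);
  static void face (MeshWorker::DoFInfo<dim> &dinfo1,
                    MeshWorker::DoFInfo<dim> &dinfo2,
                    typename MeshWorker::IntegrationInfo<dim> &info1,
                    typename MeshWorker::IntegrationInfo<dim> &info2);
};

// The cell term of the matrix integrator, for example, can be delegated
// entirely to the ready-made local integrators:
template <int dim>
void ExampleIntegrator<dim>::cell (MeshWorker::DoFInfo<dim> &dinfo,
                                   typename MeshWorker::IntegrationInfo<dim> &info)
{
  LocalIntegrators::Laplace::cell_matrix (dinfo.matrix(0,false).matrix,
                                          info.fe_values());
}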
+ + // All these classes have the same three functions for integrating over + // cells, boundary faces and interior faces, respectively. All the + // information needed for the local integration is provided by + // MeshWorker::IntegrationInfo. Note that the signature of the + // functions cannot be changed, because it is expected by + // MeshWorker::integration_loop(). + + // The first class defining local integrators is responsible for computing + // cell and face matrices. It is used to assemble the global matrix as well + // as the level matrices. template class MatrixIntegrator : public Subscriptor { @@ -121,23 +98,15 @@ namespace Step39 }; - // On each cell, we integrate the - // Dirichlet form. We use the library - // of ready made integrals in - // LocalIntegrators to avoid writing - // these loops ourselves. Similarly, - // we implement Nitsche boundary - // conditions and the interior - // penalty fluxes between cells. + // On each cell, we integrate the Dirichlet form. We use the library of + // ready made integrals in LocalIntegrators to avoid writing these loops + // ourselves. Similarly, we implement Nitsche boundary conditions and the + // interior penalty fluxes between cells. // - // The boundary und flux terms need a - // penalty parameter, which should be - // adjusted to the cell size and the - // polynomial degree. A safe choice - // of this parameter for constant - // coefficients can be found in - // LocalIntegrators::Laplace::compute_penalty() - // and we use this below. + // The boundary und flux terms need a penalty parameter, which should be + // adjusted to the cell size and the polynomial degree. A safe choice of + // this parameter for constant coefficients can be found in + // LocalIntegrators::Laplace::compute_penalty() and we use this below. template void MatrixIntegrator::cell( MeshWorker::DoFInfo &dinfo, @@ -158,8 +127,7 @@ namespace Step39 LocalIntegrators::Laplace::compute_penalty(dinfo, dinfo, deg, deg)); } - // Interior faces use the interior - // penalty method + // Interior faces use the interior penalty method template void MatrixIntegrator::face( MeshWorker::DoFInfo &dinfo1, @@ -175,12 +143,9 @@ namespace Step39 LocalIntegrators::Laplace::compute_penalty(dinfo1, dinfo2, deg, deg)); } - // The second local integrator builds - // the right hand side. In our - // example, the right hand side - // function is zero, such that only - // the boundary condition is set here - // in weak form. + // The second local integrator builds the right hand side. In our example, + // the right hand side function is zero, such that only the boundary + // condition is set here in weak form. template class RHSIntegrator : public Subscriptor { @@ -227,11 +192,9 @@ namespace Step39 {} - // The third local integrator is - // responsible for the contributions - // to the error estimate. This is the - // standard energy estimator due to - // Karakashian and Pascal (2003). + // The third local integrator is responsible for the contributions to the + // error estimate. This is the standard energy estimator due to Karakashian + // and Pascal (2003). template class Estimator : public Subscriptor { @@ -245,10 +208,8 @@ namespace Step39 }; - // The cell contribution is the - // Laplacian of the discrete - // solution, since the right hand - // side is zero. + // The cell contribution is the Laplacian of the discrete solution, since + // the right hand side is zero. 
template void Estimator::cell(MeshWorker::DoFInfo &dinfo, typename MeshWorker::IntegrationInfo &info) { @@ -263,12 +224,9 @@ namespace Step39 dinfo.value(0) = std::sqrt(dinfo.value(0)); } - // At the boundary, we use simply a - // weighted form of the boundary - // residual, namely the norm of the - // difference between the finite - // element solution and the correct - // boundary condition. + // At the boundary, we use simply a weighted form of the boundary residual, + // namely the norm of the difference between the finite element solution and + // the correct boundary condition. template void Estimator::boundary(MeshWorker::DoFInfo &dinfo, typename MeshWorker::IntegrationInfo &info) { @@ -289,10 +247,8 @@ namespace Step39 } - // Finally, on interior faces, the - // estimator consists of the jumps of - // the solution and its normal - // derivative, weighted appropriately. + // Finally, on interior faces, the estimator consists of the jumps of the + // solution and its normal derivative, weighted appropriately. template void Estimator::face(MeshWorker::DoFInfo &dinfo1, MeshWorker::DoFInfo &dinfo2, @@ -322,34 +278,18 @@ namespace Step39 dinfo2.value(0) = dinfo1.value(0); } - // Finally we have an integrator for - // the error. Since the energy norm - // for discontinuous Galerkin - // problems not only involves the - // difference of the gradient inside - // the cells, but also the jump terms - // across faces and at the boundary, - // we cannot just use - // VectorTools::integrate_difference(). - // Instead, we use the MeshWorker - // interface to compute the error - // ourselves. - - // There are several different ways - // to define this energy norm, but - // all of them are equivalent to each - // other uniformly with mesh size - // (some not uniformly with - // polynomial degree). Here, we - // choose - // @f[ - // \|u\|_{1,h} = \sum_{K\in \mathbb - // T_h} \|\nabla u\|_K^2 - // + \sum_{F \in F_h^i} - // 4\sigma_F\|\{\!\{ u \mathbf - // n\}\!\}\|^2_F - // + \sum_{F \in F_h^b} 2\sigma_F\|u\|^2_F - // @f] + // Finally we have an integrator for the error. Since the energy norm for + // discontinuous Galerkin problems not only involves the difference of the + // gradient inside the cells, but also the jump terms across faces and at + // the boundary, we cannot just use VectorTools::integrate_difference(). + // Instead, we use the MeshWorker interface to compute the error ourselves. + + // There are several different ways to define this energy norm, but all of + // them are equivalent to each other uniformly with mesh size (some not + // uniformly with polynomial degree). Here, we choose @f[ \|u\|_{1,h} = + // \sum_{K\in \mathbb T_h} \|\nabla u\|_K^2 + \sum_{F \in F_h^i} + // 4\sigma_F\|\{\!\{ u \mathbf n\}\!\}\|^2_F + \sum_{F \in F_h^b} + // 2\sigma_F\|u\|^2_F @f] template class ErrorIntegrator : public Subscriptor @@ -363,29 +303,18 @@ namespace Step39 typename MeshWorker::IntegrationInfo &info2); }; - // Here we have the integration on - // cells. There is currently no good - // interfce in MeshWorker that would - // allow us to access values of - // regular functions in the - // quadrature points. Thus, we have - // to create the vectors for the - // exact function's values and - // gradients inside the cell - // integrator. After that, everything - // is as before and we just add up - // the squares of the differences. 
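A sketch of what creating these vectors can look like inside the cell integrator, using the Function interface of the exact_solution object declared at the top of the program; except for exact_solution and info, the names here are illustrative only:

// Evaluate the exact solution at the quadrature points of the current cell.
const std::vector<Point<dim> > &points = info.fe_values().get_quadrature_points();
std::vector<double>          exact_values (points.size());
std::vector<Tensor<1,dim> >  exact_gradients (points.size());
exact_solution.value_list (points, exact_values);
exact_solution.gradient_list (points, exact_gradients);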
- - // Additionally to computing the error - // in the energy norm, we use the - // capability of the mesh worker to - // compute two functionals at the - // same time and compute the - // L2-error in the - // same loop. Obviously, this one - // does not have any jump terms and - // only appears in the integration on - // cells. + // Here we have the integration on cells. There is currently no good + // interfce in MeshWorker that would allow us to access values of regular + // functions in the quadrature points. Thus, we have to create the vectors + // for the exact function's values and gradients inside the cell + // integrator. After that, everything is as before and we just add up the + // squares of the differences. + + // Additionally to computing the error in the energy norm, we use the + // capability of the mesh worker to compute two functionals at the same time + // and compute the L2-error in the same loop. Obviously, + // this one does not have any jump terms and only appears in the integration + // on cells. template void ErrorIntegrator::cell( MeshWorker::DoFInfo &dinfo, @@ -472,11 +401,9 @@ namespace Step39 // @sect3{The main class} - // This class does the main job, like - // in previous examples. For a - // description of the functions - // declared here, please refer to - // the implementation below. + // This class does the main job, like in previous examples. For a + // description of the functions declared here, please refer to the + // implementation below. template class InteriorPenaltyProblem { @@ -497,65 +424,45 @@ namespace Step39 void solve (); void output_results (const unsigned int cycle) const; - // The member objects related to - // the discretization are here. + // The member objects related to the discretization are here. Triangulation triangulation; const MappingQ1 mapping; const FiniteElement &fe; MGDoFHandler mg_dof_handler; DoFHandler &dof_handler; - // Then, we have the matrices and - // vectors related to the global - // discrete system. + // Then, we have the matrices and vectors related to the global discrete + // system. SparsityPattern sparsity; SparseMatrix matrix; Vector solution; Vector right_hand_side; BlockVector estimates; - // Finally, we have a group of - // sparsity patterns and sparse - // matrices related to the - // multilevel preconditioner. - // First, we have a level matrix - // and its sparsity pattern. + // Finally, we have a group of sparsity patterns and sparse matrices + // related to the multilevel preconditioner. First, we have a level + // matrix and its sparsity pattern. MGLevelObject mg_sparsity; MGLevelObject > mg_matrix; - // When we perform multigrid with - // local smoothing on locally - // refined meshes, additional - // matrices are required; see - // Kanschat (2004). Here is the - // sparsity pattern for these - // edge matrices. We only need - // one, because the pattern of - // the up matrix is the - // transpose of that of the down - // matrix. Actually, we do not - // care too much about these - // details, since the MeshWorker - // is filling these matrices. + // When we perform multigrid with local smoothing on locally refined + // meshes, additional matrices are required; see Kanschat (2004). Here is + // the sparsity pattern for these edge matrices. We only need one, because + // the pattern of the up matrix is the transpose of that of the down + // matrix. Actually, we do not care too much about these details, since + // the MeshWorker is filling these matrices. 
MGLevelObject mg_sparsity_dg_interface; - // The flux matrix at the - // refinement edge, coupling fine - // level degrees of freedom to - // coarse level. + // The flux matrix at the refinement edge, coupling fine level degrees of + // freedom to coarse level. MGLevelObject > mg_matrix_dg_down; - // The transpose of the flux - // matrix at the refinement edge, - // coupling coarse level degrees - // of freedom to fine level. + // The transpose of the flux matrix at the refinement edge, coupling + // coarse level degrees of freedom to fine level. MGLevelObject > mg_matrix_dg_up; }; - // The constructor simply sets up the - // coarse grid and the - // DoFHandler. The FiniteElement is - // provided as a parameter to allow - // flexibility. + // The constructor simply sets up the coarse grid and the DoFHandler. The + // FiniteElement is provided as a parameter to allow flexibility. template InteriorPenaltyProblem::InteriorPenaltyProblem(const FiniteElement &fe) : @@ -569,88 +476,61 @@ namespace Step39 } - // In this function, we set up the - // dimension of the linear system and - // the sparsity patterns for the - // global matrix as well as the level - // matrices. + // In this function, we set up the dimension of the linear system and the + // sparsity patterns for the global matrix as well as the level matrices. template void InteriorPenaltyProblem::setup_system() { - // First, we use the finite element - // to distribute degrees of - // freedom over the mesh and number - // them. + // First, we use the finite element to distribute degrees of freedom over + // the mesh and number them. dof_handler.distribute_dofs(fe); unsigned int n_dofs = dof_handler.n_dofs(); - // Then, we already know the size - // of the vectors representing - // finite element functions. + // Then, we already know the size of the vectors representing finite + // element functions. solution.reinit(n_dofs); right_hand_side.reinit(n_dofs); - // Next, we set up the sparsity - // pattern for the global - // matrix. Since we do not know the - // row sizes in advance, we first - // fill a temporary - // CompressedSparsityPattern object - // and copy it to the regular - // SparsityPattern once it is - // complete. + // Next, we set up the sparsity pattern for the global matrix. Since we do + // not know the row sizes in advance, we first fill a temporary + // CompressedSparsityPattern object and copy it to the regular + // SparsityPattern once it is complete. CompressedSparsityPattern c_sparsity(n_dofs); DoFTools::make_flux_sparsity_pattern(dof_handler, c_sparsity); sparsity.copy_from(c_sparsity); matrix.reinit(sparsity); const unsigned int n_levels = triangulation.n_levels(); - // The global system is set up, now - // we attend to the level - // matrices. We resize all matrix - // objects to hold one matrix per level. + // The global system is set up, now we attend to the level matrices. We + // resize all matrix objects to hold one matrix per level. mg_matrix.resize(0, n_levels-1); mg_matrix.clear(); mg_matrix_dg_up.resize(0, n_levels-1); mg_matrix_dg_up.clear(); mg_matrix_dg_down.resize(0, n_levels-1); mg_matrix_dg_down.clear(); - // It is important to update the - // sparsity patterns after - // clear() was called for - // the level matrices, since the - // matrices lock the sparsity - // pattern through the Smartpointer - // ans Subscriptor mechanism. 
+ // It is important to update the sparsity patterns after clear() + // was called for the level matrices, since the matrices lock the sparsity + // pattern through the Smartpointer ans Subscriptor mechanism. mg_sparsity.resize(0, n_levels-1); mg_sparsity_dg_interface.resize(0, n_levels-1); - // Now all objects are prepared to - // hold one sparsity pattern or - // matrix per level. What's left is - // setting up the sparsity patterns - // on each level. + // Now all objects are prepared to hold one sparsity pattern or matrix per + // level. What's left is setting up the sparsity patterns on each level. for (unsigned int level=mg_sparsity.get_minlevel(); level<=mg_sparsity.get_maxlevel(); ++level) { - // These are roughly the same - // lines as above for the - // global matrix, now for each - // level. + // These are roughly the same lines as above for the global matrix, + // now for each level. CompressedSparsityPattern c_sparsity(mg_dof_handler.n_dofs(level)); MGTools::make_flux_sparsity_pattern(mg_dof_handler, c_sparsity, level); mg_sparsity[level].copy_from(c_sparsity); mg_matrix[level].reinit(mg_sparsity[level]); - // Additionally, we need to - // initialize the transfer - // matrices at the refinement - // edge between levels. They - // are stored at the index - // referring to the finer of - // the two indices, thus there - // is no such object on level - // 0. + // Additionally, we need to initialize the transfer matrices at the + // refinement edge between levels. They are stored at the index + // referring to the finer of the two indices, thus there is no such + // object on level 0. if (level>0) { CompressedSparsityPattern ci_sparsity; @@ -664,69 +544,45 @@ namespace Step39 } - // In this function, we assemble the - // global system matrix, where by - // global we indicate that this is - // the matrix of the discrete system - // we solve and it is covering the - // whole mesh. + // In this function, we assemble the global system matrix, where by global + // we indicate that this is the matrix of the discrete system we solve and + // it is covering the whole mesh. template void InteriorPenaltyProblem::assemble_matrix() { - // First, we need t set up the - // object providing the values we - // integrate. This object contains - // all FEValues and FEFaceValues - // objects needed and also - // maintains them automatically - // such that they always point to - // the current cell. To this end, - // we need to tell it first, where - // and what to compute. Since we - // are not doing anything fancy, we - // can rely on their standard - // choice for quadrature rules. + // First, we need t set up the object providing the values we + // integrate. This object contains all FEValues and FEFaceValues objects + // needed and also maintains them automatically such that they always + // point to the current cell. To this end, we need to tell it first, where + // and what to compute. Since we are not doing anything fancy, we can rely + // on their standard choice for quadrature rules. // - // Since their default update flags - // are minimal, we add what we need - // additionally, namely the values - // and gradients of shape functions - // on all objects (cells, boundary - // and interior faces). Afterwards, - // we are ready to initialize the - // container, which will create all - // necessary FEValuesBase objects - // for integration. 
+ // Since their default update flags are minimal, we add what we need + // additionally, namely the values and gradients of shape functions on all + // objects (cells, boundary and interior faces). Afterwards, we are ready + // to initialize the container, which will create all necessary + // FEValuesBase objects for integration. MeshWorker::IntegrationInfoBox info_box; UpdateFlags update_flags = update_values | update_gradients; info_box.add_update_flags_all(update_flags); info_box.initialize(fe, mapping); - // This is the object into which we - // integrate local data. It is - // filled by the local integration - // routines in MatrixIntegrator and - // then used by the assembler to - // distribute the information into - // the global matrix. + // This is the object into which we integrate local data. It is filled by + // the local integration routines in MatrixIntegrator and then used by the + // assembler to distribute the information into the global matrix. MeshWorker::DoFInfo dof_info(dof_handler); - // Finally, we need an object that - // assembles the local matrix into - // the global matrix. + // Finally, we need an object that assembles the local matrix into the + // global matrix. MeshWorker::Assembler::MatrixSimple > assembler; assembler.initialize(matrix); - // Now, we throw everything into a - // MeshWorker::loop(), which here - // traverses all active cells of - // the mesh, computes cell and face - // matrices and assembles them into - // the global matrix. We use the - // variable dof_handler - // here in order to use the global - // numbering of degrees of freedom. + // Now, we throw everything into a MeshWorker::loop(), which here + // traverses all active cells of the mesh, computes cell and face matrices + // and assembles them into the global matrix. We use the variable + // dof_handler here in order to use the global numbering of + // degrees of freedom. MeshWorker::integration_loop( dof_handler.begin_active(), dof_handler.end(), dof_info, info_box, @@ -737,11 +593,9 @@ namespace Step39 } - // Now, we do the same for the level - // matrices. Not too surprisingly, - // this function looks like a twin of - // the previous one. Indeed, there - // are only two minor differences. + // Now, we do the same for the level matrices. Not too surprisingly, this + // function looks like a twin of the previous one. Indeed, there are only + // two minor differences. template void InteriorPenaltyProblem::assemble_mg_matrix() @@ -753,22 +607,15 @@ namespace Step39 MeshWorker::DoFInfo dof_info(mg_dof_handler); - // Obviously, the assembler needs - // to be replaced by one filling - // level matrices. Note that it - // automatically fills the edge - // matrices as well. + // Obviously, the assembler needs to be replaced by one filling level + // matrices. Note that it automatically fills the edge matrices as well. MeshWorker::Assembler::MGMatrixSimple > assembler; assembler.initialize(mg_matrix); assembler.initialize_fluxes(mg_matrix_dg_up, mg_matrix_dg_down); - // Here is the other difference to - // the previous function: we run - // over all cells, not only the - // active ones. And we use - // mg_dof_handler, since - // we need the degrees of freedom - // on each level, not the global + // Here is the other difference to the previous function: we run over all + // cells, not only the active ones. And we use mg_dof_handler, + // since we need the degrees of freedom on each level, not the global // numbering. 
MeshWorker::integration_loop ( mg_dof_handler.begin(), mg_dof_handler.end(), @@ -780,11 +627,8 @@ namespace Step39 } - // Here we have another clone of the - // assemble function. The difference - // to assembling the system matrix - // consists in that we assemble a - // vector here. + // Here we have another clone of the assemble function. The difference to + // assembling the system matrix consists in that we assemble a vector here. template void InteriorPenaltyProblem::assemble_right_hand_side() @@ -796,16 +640,11 @@ namespace Step39 MeshWorker::DoFInfo dof_info(dof_handler); - // Since this assembler alows us to - // fill several vectors, the - // interface is a little more - // complicated as above. The - // pointers to the vectors have to - // be stored in a NamedData - // object. While this seems to - // cause two extra lines of code - // here, it actually comes handy in - // more complex applications. + // Since this assembler alows us to fill several vectors, the interface is + // a little more complicated as above. The pointers to the vectors have to + // be stored in a NamedData object. While this seems to cause two extra + // lines of code here, it actually comes handy in more complex + // applications. MeshWorker::Assembler::ResidualSimple > assembler; NamedData* > data; Vector *rhs = &right_hand_side; @@ -824,43 +663,31 @@ namespace Step39 } - // Now that we have coded all - // functions building the discrete - // linear system, it is about time - // that we actually solve it. + // Now that we have coded all functions building the discrete linear system, + // it is about time that we actually solve it. template void InteriorPenaltyProblem::solve() { - // The solver of choice is - // conjugate gradient. + // The solver of choice is conjugate gradient. SolverControl control(1000, 1.e-12); SolverCG > solver(control); - // Now we are setting up the - // components of the multilevel - // preconditioner. First, we need - // transfer between grid - // levels. The object we are using - // here generates sparse matrices - // for these transfers. + // Now we are setting up the components of the multilevel + // preconditioner. First, we need transfer between grid levels. The object + // we are using here generates sparse matrices for these transfers. MGTransferPrebuilt > mg_transfer; mg_transfer.build_matrices(mg_dof_handler); - // Then, we need an exact solver - // for the matrix on the coarsest - // level. + // Then, we need an exact solver for the matrix on the coarsest level. FullMatrix coarse_matrix; coarse_matrix.copy_from (mg_matrix[0]); MGCoarseGridHouseholder > mg_coarse; mg_coarse.initialize(coarse_matrix); - // While transfer and coarse grid - // solver are pretty much generic, - // more flexibility is offered for - // the smoother. First, we choose - // Gauss-Seidel as our smoothing - // method. + // While transfer and coarse grid solver are pretty much generic, more + // flexibility is offered for the smoother. First, we choose Gauss-Seidel + // as our smoothing method. GrowingVectorMemory > mem; typedef PreconditionSOR > RELAXATION; MGSmootherRelaxation, RELAXATION, Vector > @@ -868,45 +695,33 @@ namespace Step39 RELAXATION::AdditionalData smoother_data(1.); mg_smoother.initialize(mg_matrix, smoother_data); - // Do two smoothing steps on each - // level. + // Do two smoothing steps on each level. 
mg_smoother.set_steps(2); - // Since the SOR method is not - // symmetric, but we use conjugate - // gradient iteration below, here - // is a trick to make the - // multilevel preconditioner a - // symmetric operator even for - // nonsymmetric smoothers. + // Since the SOR method is not symmetric, but we use conjugate gradient + // iteration below, here is a trick to make the multilevel preconditioner + // a symmetric operator even for nonsymmetric smoothers. mg_smoother.set_symmetric(true); - // The smoother class optionally - // implements the variable V-cycle, - // which we do not want here. + // The smoother class optionally implements the variable V-cycle, which we + // do not want here. mg_smoother.set_variable(false); - // Finally, we must wrap our - // matrices in an object having the - // required multiplication - // functions. + // Finally, we must wrap our matrices in an object having the required + // multiplication functions. MGMatrix, Vector > mgmatrix(&mg_matrix); MGMatrix, Vector > mgdown(&mg_matrix_dg_down); MGMatrix, Vector > mgup(&mg_matrix_dg_up); - // Now, we are ready to set up the - // V-cycle operator and the - // multilevel preconditioner. + // Now, we are ready to set up the V-cycle operator and the multilevel + // preconditioner. Multigrid > mg(mg_dof_handler, mgmatrix, mg_coarse, mg_transfer, mg_smoother, mg_smoother); - // Let us not forget the edge - // matrices needed because of the - // adaptive refinement. + // Let us not forget the edge matrices needed because of the adaptive + // refinement. mg.set_edge_flux_matrices(mgdown, mgup); - // After all preparations, wrap the - // Multigrid object into another - // object, which can be used as a - // regular preconditioner, + // After all preparations, wrap the Multigrid object into another object, + // which can be used as a regular preconditioner, PreconditionMG, MGTransferPrebuilt > > preconditioner(mg_dof_handler, mg, mg_transfer); @@ -915,26 +730,19 @@ namespace Step39 } - // Another clone of the assemble - // function. The big difference to - // the previous ones is here that we - // also have an input vector. + // Another clone of the assemble function. The big difference to the + // previous ones is here that we also have an input vector. template double InteriorPenaltyProblem::estimate() { - // The results of the estimator are - // stored in a vector with one - // entry per cell. Since cells in - // deal.II are not numbered, we - // have to create our own numbering - // in order to use this vector. + // The results of the estimator are stored in a vector with one entry per + // cell. Since cells in deal.II are not numbered, we have to create our + // own numbering in order to use this vector. // - // On the other hand, somebody - // might have used the user indices - // already. So, let's be good - // citizens and save them before - // tampering with them. + // On the other hand, somebody might have used the user indices + // already. So, let's be good citizens and save them before tampering with + // them. std::vector old_user_indices; triangulation.save_user_indices(old_user_indices); @@ -949,48 +757,33 @@ namespace Step39 const unsigned int n_gauss_points = dof_handler.get_fe().tensor_degree()+1; info_box.initialize_gauss_quadrature(n_gauss_points, n_gauss_points+1, n_gauss_points); - // but now we need to notify the - // info box of the finite element - // functio we want to evaluate in - // the quadrature points. 
First, we - // create a NamedData object with - // this vector, which is the - // solution we just computed. + // but now we need to notify the info box of the finite element functio we + // want to evaluate in the quadrature points. First, we create a NamedData + // object with this vector, which is the solution we just computed. NamedData* > solution_data; solution_data.add(&solution, "solution"); - // Then, we tell the Meshworker::VectorSelector - // for cells, that we need the - // second derivatives of this - // solution (to compute the - // Laplacian). Therefore, the - // boolean arguments selecting - // function values and first - // derivatives a false, only the - // last one selecting second + // Then, we tell the Meshworker::VectorSelector for cells, that we need + // the second derivatives of this solution (to compute the + // Laplacian). Therefore, the boolean arguments selecting function values + // and first derivatives a false, only the last one selecting second // derivatives is true. info_box.cell_selector.add("solution", false, false, true); - // On interior and boundary faces, - // we need the function values and - // the first derivatives, but not - // second derivatives. + // On interior and boundary faces, we need the function values and the + // first derivatives, but not second derivatives. info_box.boundary_selector.add("solution", true, true, false); info_box.face_selector.add("solution", true, true, false); - // And we continue as before, with - // the exception that the default - // update flags are already - // adjusted to the values and - // derivatives we requested above. + // And we continue as before, with the exception that the default update + // flags are already adjusted to the values and derivatives we requested + // above. info_box.add_update_flags_boundary(update_quadrature_points); info_box.initialize(fe, mapping, solution_data); MeshWorker::DoFInfo dof_info(dof_handler); - // The assembler stores one number - // per cell, but else this is the - // same as in the computation of - // the right hand side. + // The assembler stores one number per cell, but else this is the same as + // in the computation of the right hand side. MeshWorker::Assembler::CellsAndFaces assembler; NamedData* > out_data; BlockVector *est = &estimates; @@ -1005,26 +798,20 @@ namespace Step39 &Estimator::face, assembler); - // Right before we return the - // result of the error estimate, we - // restore the old user indices. + // Right before we return the result of the error estimate, we restore the + // old user indices. triangulation.load_user_indices(old_user_indices); return estimates.block(0).l2_norm(); } - // Here we compare our finite element - // solution with the (known) exact - // solution and compute the mean - // quadratic error of the gradient - // and the function itself. This - // function is a clone of the - // estimation function right above. - - // Since we compute the error in the - // energy and the - // L2-norm, - // respectively, our block vector - // needs two blocks here. + // Here we compare our finite element solution with the (known) exact + // solution and compute the mean quadratic error of the gradient and the + // function itself. This function is a clone of the estimation function + // right above. + + // Since we compute the error in the energy and the + // L2-norm, respectively, our block vector needs two + // blocks here. 
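What "needs two blocks" means in code is, roughly, the following sketch: one entry per active cell in each block, with block 0 receiving the energy norm error and block 1 the L2 error (the variable name errors is illustrative):

BlockVector<double> errors (2);
errors.block(0).reinit (triangulation.n_active_cells());
errors.block(1).reinit (triangulation.n_active_cells());
errors.collect_sizes ();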
template void InteriorPenaltyProblem::error() @@ -1077,8 +864,7 @@ namespace Step39 template void InteriorPenaltyProblem::output_results (const unsigned int cycle) const { - // Output of the solution in - // gnuplot format. + // Output of the solution in gnuplot format. char *fn = new char[100]; sprintf(fn, "sol-%02d", cycle); @@ -1098,9 +884,7 @@ namespace Step39 data_out.write_gnuplot(gnuplot_output); } - // And finally the adaptive loop, - // more or less like in previous - // examples. + // And finally the adaptive loop, more or less like in previous examples. template void InteriorPenaltyProblem::run(unsigned int n_steps) diff --git a/deal.II/examples/step-4/step-4.cc b/deal.II/examples/step-4/step-4.cc index 38c6d6b8a6..017fddc7ea 100644 --- a/deal.II/examples/step-4/step-4.cc +++ b/deal.II/examples/step-4/step-4.cc @@ -11,11 +11,8 @@ // @sect3{Include files} -// The first few (many?) include -// files have already been used in -// the previous example, so we will -// not explain their meaning here -// again. +// The first few (many?) include files have already been used in the previous +// example, so we will not explain their meaning here again. #include #include #include @@ -40,39 +37,26 @@ #include #include -// This is new, however: in the previous -// example we got some unwanted output from -// the linear solvers. If we want to suppress -// it, we have to include this file and add a -// single line somewhere to the program (see -// the main() function below for that): +// This is new, however: in the previous example we got some unwanted output +// from the linear solvers. If we want to suppress it, we have to include this +// file and add a single line somewhere to the program (see the main() +// function below for that): #include -// The final step, as in previous -// programs, is to import all the -// deal.II class and function names -// into the global namespace: +// The final step, as in previous programs, is to import all the deal.II class +// and function names into the global namespace: using namespace dealii; // @sect3{The Step4 class template} -// This is again the same -// Step4 class as in the -// previous example. The only -// difference is that we have now -// declared it as a class with a -// template parameter, and the -// template parameter is of course -// the spatial dimension in which we -// would like to solve the Laplace -// equation. Of course, several of -// the member variables depend on -// this dimension as well, in -// particular the Triangulation -// class, which has to represent -// quadrilaterals or hexahedra, -// respectively. Apart from this, -// everything is as before. +// This is again the same Step4 class as in the previous +// example. The only difference is that we have now declared it as a class +// with a template parameter, and the template parameter is of course the +// spatial dimension in which we would like to solve the Laplace equation. Of +// course, several of the member variables depend on this dimension as well, +// in particular the Triangulation class, which has to represent +// quadrilaterals or hexahedra, respectively. Apart from this, everything is +// as before. template class Step4 { @@ -101,55 +85,34 @@ private: // @sect3{Right hand side and boundary values} -// In the following, we declare two more -// classes denoting the right hand side and -// the non-homogeneous Dirichlet boundary -// values. Both are functions of a -// dim-dimensional space variable, so we -// declare them as templates as well. 
+// In the following, we declare two more classes denoting the right hand side +// and the non-homogeneous Dirichlet boundary values. Both are functions of a +// dim-dimensional space variable, so we declare them as templates as well. // -// Each of these classes is derived from a -// common, abstract base class Function, -// which declares the common interface which -// all functions have to follow. In -// particular, concrete classes have to -// overload the value function, -// which takes a point in dim-dimensional -// space as parameters and shall return the -// value at that point as a +// Each of these classes is derived from a common, abstract base class +// Function, which declares the common interface which all functions have to +// follow. In particular, concrete classes have to overload the +// value function, which takes a point in dim-dimensional space +// as parameters and shall return the value at that point as a // double variable. // -// The value function takes a -// second argument, which we have here named -// component: This is only meant -// for vector valued functions, where you may -// want to access a certain component of the -// vector at the point -// p. However, our functions are -// scalar, so we need not worry about this -// parameter and we will not use it in the -// implementation of the functions. Inside -// the library's header files, the Function -// base class's declaration of the -// value function has a default -// value of zero for the component, so we -// will access the value -// function of the right hand side with only -// one parameter, namely the point where we -// want to evaluate the function. A value for -// the component can then simply be omitted -// for scalar functions. +// The value function takes a second argument, which we have here +// named component: This is only meant for vector valued +// functions, where you may want to access a certain component of the vector +// at the point p. However, our functions are scalar, so we need +// not worry about this parameter and we will not use it in the implementation +// of the functions. Inside the library's header files, the Function base +// class's declaration of the value function has a default value +// of zero for the component, so we will access the value +// function of the right hand side with only one parameter, namely the point +// where we want to evaluate the function. A value for the component can then +// simply be omitted for scalar functions. // -// Note that the C++ language forces -// us to declare and define a -// constructor to the following -// classes even though they are -// empty. This is due to the fact -// that the base class has no default -// constructor (i.e. one without -// arguments), even though it has a -// constructor which has default -// values for all arguments. +// Note that the C++ language forces us to declare and define a constructor to +// the following classes even though they are empty. This is due to the fact +// that the base class has no default constructor (i.e. one without +// arguments), even though it has a constructor which has default values for +// all arguments. template class RightHandSide : public Function { @@ -175,39 +138,26 @@ public: -// For this example, we choose as right hand -// side function to function $4(x^4+y^4)$ in -// 2D, or $4(x^4+y^4+z^4)$ in 3D. 
We could -// write this distinction using an -// if-statement on the space dimension, but -// here is a simple way that also allows us -// to use the same function in 1D (or in 4D, -// if you should desire to do so), by using a -// short loop. Fortunately, the compiler -// knows the size of the loop at compile time -// (remember that at the time when you define -// the template, the compiler doesn't know -// the value of dim, but when it later -// encounters a statement or declaration -// RightHandSide@<2@>, it will take the -// template, replace all occurrences of dim -// by 2 and compile the resulting function); -// in other words, at the time of compiling -// this function, the number of times the -// body will be executed is known, and the -// compiler can optimize away the overhead -// needed for the loop and the result will be -// as fast as if we had used the formulas -// above right away. +// For this example, we choose as right hand side function to function +// $4(x^4+y^4)$ in 2D, or $4(x^4+y^4+z^4)$ in 3D. We could write this +// distinction using an if-statement on the space dimension, but here is a +// simple way that also allows us to use the same function in 1D (or in 4D, if +// you should desire to do so), by using a short loop. Fortunately, the +// compiler knows the size of the loop at compile time (remember that at the +// time when you define the template, the compiler doesn't know the value of +// dim, but when it later encounters a statement or declaration +// RightHandSide@<2@>, it will take the template, replace all +// occurrences of dim by 2 and compile the resulting function); in other +// words, at the time of compiling this function, the number of times the body +// will be executed is known, and the compiler can optimize away the overhead +// needed for the loop and the result will be as fast as if we had used the +// formulas above right away. // -// The last thing to note is that a -// Point@ denotes a point in -// dim-dimensionsal space, and its individual -// components (i.e. $x$, $y$, -// ... coordinates) can be accessed using the -// () operator (in fact, the [] operator will -// work just as well) with indices starting -// at zero as usual in C and C++. +// The last thing to note is that a Point@ denotes a point +// in dim-dimensionsal space, and its individual components (i.e. $x$, $y$, +// ... coordinates) can be accessed using the () operator (in fact, the [] +// operator will work just as well) with indices starting at zero as usual in +// C and C++. template double RightHandSide::value (const Point &p, const unsigned int /*component*/) const @@ -220,13 +170,10 @@ double RightHandSide::value (const Point &p, } -// As boundary values, we choose x*x+y*y in -// 2D, and x*x+y*y+z*z in 3D. This happens to -// be equal to the square of the vector from -// the origin to the point at which we would -// like to evaluate the function, -// irrespective of the dimension. So that is -// what we return: +// As boundary values, we choose x*x+y*y in 2D, and x*x+y*y+z*z in 3D. This +// happens to be equal to the square of the vector from the origin to the +// point at which we would like to evaluate the function, irrespective of the +// dimension. 
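A sketch of such a loop, together with the corresponding boundary value function, which simply returns the squared distance from the origin; choosing Point::square() for the latter is an assumption of this sketch, any equivalent expression works:

template <int dim>
double RightHandSide<dim>::value (const Point<dim> &p,
                                  const unsigned int /*component*/) const
{
  double return_value = 0;
  for (unsigned int i=0; i<dim; ++i)          // dim is known at compile time,
    return_value += 4 * std::pow (p(i), 4);   // so the loop is optimized away
  return return_value;
}

template <int dim>
double BoundaryValues<dim>::value (const Point<dim> &p,
                                   const unsigned int /*component*/) const
{
  return p.square ();
}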
So that is what we return: template double BoundaryValues::value (const Point &p, const unsigned int /*component*/) const @@ -238,55 +185,33 @@ double BoundaryValues::value (const Point &p, // @sect3{Implementation of the Step4 class} -// Next for the implementation of the class -// template that makes use of the functions -// above. As before, we will write everything -// as templates that have a formal parameter -// dim that we assume unknown at -// the time we define the template -// functions. Only later, the compiler will -// find a declaration of -// Step4@<2@> (in the -// main function, actually) and -// compile the entire class with -// dim replaced by 2, a process -// referred to as `instantiation of a -// template'. When doing so, it will also -// replace instances of -// RightHandSide@ by -// RightHandSide@<2@> and -// instantiate the latter class from the +// Next for the implementation of the class template that makes use of the +// functions above. As before, we will write everything as templates that have +// a formal parameter dim that we assume unknown at the time we +// define the template functions. Only later, the compiler will find a +// declaration of Step4@<2@> (in the main function, +// actually) and compile the entire class with dim replaced by 2, +// a process referred to as `instantiation of a template'. When doing so, it +// will also replace instances of RightHandSide@ by +// RightHandSide@<2@> and instantiate the latter class from the // class template. // -// In fact, the compiler will also find a -// declaration -// Step4@<3@> in -// main(). This will cause it to -// again go back to the general -// Step4@ -// template, replace all occurrences of -// dim, this time by 3, and -// compile the class a second time. Note that -// the two instantiations -// Step4@<2@> and -// Step4@<3@> are -// completely independent classes; their only -// common feature is that they are both -// instantiated from the same general -// template, but they are not convertible -// into each other, for example, and share no -// code (both instantiations are compiled -// completely independently). +// In fact, the compiler will also find a declaration Step4@<3@> +// in main(). This will cause it to again go back to the general +// Step4@ template, replace all occurrences of +// dim, this time by 3, and compile the class a second time. Note +// that the two instantiations Step4@<2@> and +// Step4@<3@> are completely independent classes; their only +// common feature is that they are both instantiated from the same general +// template, but they are not convertible into each other, for example, and +// share no code (both instantiations are compiled completely independently). // @sect4{Step4::Step4} -// After this introduction, here is the -// constructor of the Step4 -// class. It specifies the desired polynomial -// degree of the finite elements and -// associates the DoFHandler to the -// triangulation just as in the previous +// After this introduction, here is the constructor of the Step4 +// class. It specifies the desired polynomial degree of the finite elements +// and associates the DoFHandler to the triangulation just as in the previous // example program, step-3: template Step4::Step4 () @@ -298,22 +223,15 @@ Step4::Step4 () // @sect4{Step4::make_grid} -// Grid creation is something inherently -// dimension dependent. However, as long as -// the domains are sufficiently similar in 2D -// or 3D, the library can abstract for -// you. 
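As the following sentences explain, the whole function then boils down to very few dimension independent lines; a sketch, in which the number of global refinement steps is only an example:

template <int dim>
void Step4<dim>::make_grid ()
{
  // The same two calls work for dim = 2 and dim = 3.
  GridGenerator::hyper_cube (triangulation, -1, 1);
  triangulation.refine_global (4);
}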
In our case, we would like to again -// solve on the square $[-1,1]\times [-1,1]$ -// in 2D, or on the cube $[-1,1] \times -// [-1,1] \times [-1,1]$ in 3D; both can be -// termed GridGenerator::hyper_cube(), so we may -// use the same function in whatever -// dimension we are. Of course, the functions -// that create a hypercube in two and three -// dimensions are very much different, but -// that is something you need not care -// about. Let the library handle the -// difficult things. +// Grid creation is something inherently dimension dependent. However, as long +// as the domains are sufficiently similar in 2D or 3D, the library can +// abstract for you. In our case, we would like to again solve on the square +// $[-1,1]\times [-1,1]$ in 2D, or on the cube $[-1,1] \times [-1,1] \times +// [-1,1]$ in 3D; both can be termed GridGenerator::hyper_cube(), so we may +// use the same function in whatever dimension we are. Of course, the +// functions that create a hypercube in two and three dimensions are very much +// different, but that is something you need not care about. Let the library +// handle the difficult things. template void Step4::make_grid () { @@ -330,15 +248,11 @@ void Step4::make_grid () // @sect4{Step4::setup_system} -// This function looks -// exactly like in the previous example, -// although it performs actions that in their -// details are quite different if -// dim happens to be 3. The only -// significant difference from a user's -// perspective is the number of cells -// resulting, which is much higher in three -// than in two space dimensions! +// This function looks exactly like in the previous example, although it +// performs actions that in their details are quite different if +// dim happens to be 3. The only significant difference from a +// user's perspective is the number of cells resulting, which is much higher +// in three than in two space dimensions! template void Step4::setup_system () { @@ -361,76 +275,45 @@ void Step4::setup_system () // @sect4{Step4::assemble_system} -// Unlike in the previous example, we -// would now like to use a -// non-constant right hand side -// function and non-zero boundary -// values. Both are tasks that are -// readily achieved with only a few -// new lines of code in the -// assemblage of the matrix and right -// hand side. +// Unlike in the previous example, we would now like to use a non-constant +// right hand side function and non-zero boundary values. Both are tasks that +// are readily achieved with only a few new lines of code in the assemblage of +// the matrix and right hand side. // -// More interesting, though, is the -// way we assemble matrix and right -// hand side vector dimension -// independently: there is simply no -// difference to the -// two-dimensional case. Since the -// important objects used in this -// function (quadrature formula, -// FEValues) depend on the dimension -// by way of a template parameter as -// well, they can take care of -// setting up properly everything for -// the dimension for which this -// function is compiled. By declaring -// all classes which might depend on -// the dimension using a template -// parameter, the library can make -// nearly all work for you and you -// don't have to care about most +// More interesting, though, is the way we assemble matrix and right hand side +// vector dimension independently: there is simply no difference to the +// two-dimensional case. 
Since the important objects used in this function +// (quadrature formula, FEValues) depend on the dimension by way of a template +// parameter as well, they can take care of setting up properly everything for +// the dimension for which this function is compiled. By declaring all classes +// which might depend on the dimension using a template parameter, the library +// can make nearly all work for you and you don't have to care about most // things. template void Step4::assemble_system () { QGauss quadrature_formula(2); - // We wanted to have a non-constant right - // hand side, so we use an object of the - // class declared above to generate the - // necessary data. Since this right hand - // side object is only used locally in the - // present function, we declare it here as - // a local variable: + // We wanted to have a non-constant right hand side, so we use an object of + // the class declared above to generate the necessary data. Since this right + // hand side object is only used locally in the present function, we declare + // it here as a local variable: const RightHandSide right_hand_side; - // Compared to the previous example, in - // order to evaluate the non-constant right - // hand side function we now also need the - // quadrature points on the cell we are - // presently on (previously, we only - // required values and gradients of the - // shape function from the - // FEValues object, as well as - // the quadrature weights, - // FEValues::JxW() ). We can tell the - // FEValues object to do for - // us by also giving it the - // #update_quadrature_points - // flag: + // Compared to the previous example, in order to evaluate the non-constant + // right hand side function we now also need the quadrature points on the + // cell we are presently on (previously, we only required values and + // gradients of the shape function from the FEValues object, as well as the + // quadrature weights, FEValues::JxW() ). We can tell the FEValues object to + // do for us by also giving it the #update_quadrature_points flag: FEValues fe_values (fe, quadrature_formula, update_values | update_gradients | update_quadrature_points | update_JxW_values); - // We then again define a few - // abbreviations. The values of these - // variables of course depend on the - // dimension which we are presently - // using. However, the FE and Quadrature - // classes do all the necessary work for - // you and you don't have to care about the - // dimension dependent parts: + // We then again define a few abbreviations. The values of these variables + // of course depend on the dimension which we are presently using. However, + // the FE and Quadrature classes do all the necessary work for you and you + // don't have to care about the dimension dependent parts: const unsigned int dofs_per_cell = fe.dofs_per_cell; const unsigned int n_q_points = quadrature_formula.size(); @@ -439,19 +322,13 @@ void Step4::assemble_system () std::vector local_dof_indices (dofs_per_cell); - // Next, we again have to loop over all - // cells and assemble local contributions. - // Note, that a cell is a quadrilateral in - // two space dimensions, but a hexahedron - // in 3D. 
In fact, the - // active_cell_iterator data - // type is something different, depending - // on the dimension we are in, but to the - // outside world they look alike and you - // will probably never see a difference - // although the classes that this typedef - // stands for are in fact completely - // unrelated: + // Next, we again have to loop over all cells and assemble local + // contributions. Note, that a cell is a quadrilateral in two space + // dimensions, but a hexahedron in 3D. In fact, the + // active_cell_iterator data type is something different, + // depending on the dimension we are in, but to the outside world they look + // alike and you will probably never see a difference although the classes + // that this typedef stands for are in fact completely unrelated: typename DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); @@ -462,28 +339,16 @@ void Step4::assemble_system () cell_matrix = 0; cell_rhs = 0; - // Now we have to assemble the - // local matrix and right hand - // side. This is done exactly - // like in the previous - // example, but now we revert - // the order of the loops - // (which we can safely do - // since they are independent - // of each other) and merge the - // loops for the local matrix - // and the local vector as far - // as possible to make - // things a bit faster. + // Now we have to assemble the local matrix and right hand side. This is + // done exactly like in the previous example, but now we revert the + // order of the loops (which we can safely do since they are independent + // of each other) and merge the loops for the local matrix and the local + // vector as far as possible to make things a bit faster. // - // Assembling the right hand side - // presents the only significant - // difference to how we did things in - // step-3: Instead of using a constant - // right hand side with value 1, we use - // the object representing the right - // hand side and evaluate it at the - // quadrature points: + // Assembling the right hand side presents the only significant + // difference to how we did things in step-3: Instead of using a + // constant right hand side with value 1, we use the object representing + // the right hand side and evaluate it at the quadrature points: for (unsigned int q_point=0; q_point::assemble_system () right_hand_side.value (fe_values.quadrature_point (q_point)) * fe_values.JxW (q_point)); } - // As a final remark to these loops: - // when we assemble the local - // contributions into - // cell_matrix(i,j), we - // have to multiply the gradients of - // shape functions $i$ and $j$ at point - // q_point and multiply it with the - // scalar weights JxW. This is what - // actually happens: - // fe_values.shape_grad(i,q_point) - // returns a dim - // dimensional vector, represented by a - // Tensor@<1,dim@> object, - // and the operator* that multiplies it - // with the result of - // fe_values.shape_grad(j,q_point) - // makes sure that the dim - // components of the two vectors are - // properly contracted, and the result - // is a scalar floating point number - // that then is multiplied with the - // weights. Internally, this operator* - // makes sure that this happens - // correctly for all dim - // components of the vectors, whether - // dim be 2, 3, or any - // other space dimension; from a user's - // perspective, this is not something - // worth bothering with, however, - // making things a lot simpler if one - // wants to write code dimension - // independently. 
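As a sketch, the quadrature point loop described in these paragraphs, written with the names introduced earlier in this function, looks roughly like this:

for (unsigned int q_point=0; q_point<n_q_points; ++q_point)
  for (unsigned int i=0; i<dofs_per_cell; ++i)
    {
      // The operator* between the two shape gradients contracts all
      // dim components and yields a scalar, whatever dim is.
      for (unsigned int j=0; j<dofs_per_cell; ++j)
        cell_matrix(i,j) += (fe_values.shape_grad (i, q_point) *
                             fe_values.shape_grad (j, q_point) *
                             fe_values.JxW (q_point));

      // The right hand side is evaluated at the quadrature point itself.
      cell_rhs(i) += (fe_values.shape_value (i, q_point) *
                      right_hand_side.value (fe_values.quadrature_point (q_point)) *
                      fe_values.JxW (q_point));
    }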
- - // With the local systems assembled, - // the transfer into the global matrix - // and right hand side is done exactly - // as before, but here we have again + // As a final remark to these loops: when we assemble the local + // contributions into cell_matrix(i,j), we have to multiply + // the gradients of shape functions $i$ and $j$ at point q_point and + // multiply it with the scalar weights JxW. This is what actually + // happens: fe_values.shape_grad(i,q_point) returns a + // dim dimensional vector, represented by a + // Tensor@<1,dim@> object, and the operator* that + // multiplies it with the result of + // fe_values.shape_grad(j,q_point) makes sure that the + // dim components of the two vectors are properly + // contracted, and the result is a scalar floating point number that + // then is multiplied with the weights. Internally, this operator* makes + // sure that this happens correctly for all dim components + // of the vectors, whether dim be 2, 3, or any other space + // dimension; from a user's perspective, this is not something worth + // bothering with, however, making things a lot simpler if one wants to + // write code dimension independently. + + // With the local systems assembled, the transfer into the global matrix + // and right hand side is done exactly as before, but here we have again // merged some loops for efficiency: cell->get_dof_indices (local_dof_indices); for (unsigned int i=0; i::assemble_system () } - // As the final step in this function, we - // wanted to have non-homogeneous boundary - // values in this example, unlike the one - // before. This is a simple task, we only - // have to replace the - // ZeroFunction used there by - // an object of the class which describes - // the boundary values we would like to use - // (i.e. the BoundaryValues - // class declared above): + // As the final step in this function, we wanted to have non-homogeneous + // boundary values in this example, unlike the one before. This is a simple + // task, we only have to replace the ZeroFunction used there by an object of + // the class which describes the boundary values we would like to use + // (i.e. the BoundaryValues class declared above): std::map boundary_values; VectorTools::interpolate_boundary_values (dof_handler, 0, @@ -571,13 +414,9 @@ void Step4::assemble_system () // @sect4{Step4::solve} -// Solving the linear system of -// equations is something that looks -// almost identical in most -// programs. In particular, it is -// dimension independent, so this -// function is copied verbatim from the -// previous example. +// Solving the linear system of equations is something that looks almost +// identical in most programs. In particular, it is dimension independent, so +// this function is copied verbatim from the previous example. template void Step4::solve () { @@ -586,11 +425,8 @@ void Step4::solve () solver.solve (system_matrix, solution, system_rhs, PreconditionIdentity()); - // We have made one addition, - // though: since we suppress output - // from the linear solvers, we have - // to print the number of - // iterations by hand. + // We have made one addition, though: since we suppress output from the + // linear solvers, we have to print the number of iterations by hand. std::cout << " " << solver_control.last_step() << " CG iterations needed to obtain convergence." << std::endl; @@ -599,31 +435,22 @@ void Step4::solve () // @sect4{Step4::output_results} -// This function also does what the -// respective one did in step-3. 
No changes +// This function also does what the respective one did in step-3. No changes // here for dimension independence either. // -// The only difference to the previous -// example is that we want to write output in -// VTK format, rather than for gnuplot. VTK -// format is currently the most widely used -// one and is supported by a number of -// visualization programs such as Visit and -// Paraview (for ways to obtain these -// programs see the ReadMe file of -// deal.II). To write data in this format, we -// simply replace the -// data_out.write_gnuplot call -// by data_out.write_vtk. +// The only difference to the previous example is that we want to write output +// in VTK format, rather than for gnuplot. VTK format is currently the most +// widely used one and is supported by a number of visualization programs such +// as Visit and Paraview (for ways to obtain these programs see the ReadMe +// file of deal.II). To write data in this format, we simply replace the +// data_out.write_gnuplot call by +// data_out.write_vtk. // -// Since the program will run both 2d and 3d -// versions of the laplace solver, we use the -// dimension in the filename to generate -// distinct filenames for each run (in a -// better program, one would check whether -// dim can have other values -// than 2 or 3, but we neglect this here for -// the sake of brevity). +// Since the program will run both 2d and 3d versions of the laplace solver, +// we use the dimension in the filename to generate distinct filenames for +// each run (in a better program, one would check whether dim can +// have other values than 2 or 3, but we neglect this here for the sake of +// brevity). template void Step4::output_results () const { @@ -644,11 +471,9 @@ void Step4::output_results () const // @sect4{Step4::run} -// This is the function which has the -// top-level control over -// everything. Apart from one line of -// additional output, it is the same -// as for the previous example. +// This is the function which has the top-level control over everything. Apart +// from one line of additional output, it is the same as for the previous +// example. template void Step4::run () { @@ -664,79 +489,48 @@ void Step4::run () // @sect3{The main function} -// And this is the main function. It also -// looks mostly like in step-3, but if you -// look at the code below, note how we first -// create a variable of type -// Step4@<2@> (forcing -// the compiler to compile the class template -// with dim replaced by -// 2) and run a 2d simulation, +// And this is the main function. It also looks mostly like in step-3, but if +// you look at the code below, note how we first create a variable of type +// Step4@<2@> (forcing the compiler to compile the class template +// with dim replaced by 2) and run a 2d simulation, // and then we do the whole thing over in 3d. // -// In practice, this is probably not what you -// would do very frequently (you probably -// either want to solve a 2d problem, or one -// in 3d, but not both at the same -// time). However, it demonstrates the -// mechanism by which we can simply change -// which dimension we want in a single place, -// and thereby force the compiler to -// recompile the dimension independent class -// templates for the dimension we -// request. The emphasis here lies on the -// fact that we only need to change a single -// place. 
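In code, the structure just described looks roughly as follows; the deallog call at the top is discussed a few paragraphs further down, and the name of the 3d variable is only illustrative:

int main ()
{
  deallog.depth_console (0);

  {
    Step4<2> laplace_problem_2d;
    laplace_problem_2d.run ();
  }

  {
    Step4<3> laplace_problem_3d;
    laplace_problem_3d.run ();
  }

  return 0;
}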
This makes it rather trivial to -// debug the program in 2d where computations -// are fast, and then switch a single place -// to a 3 to run the much more computing -// intensive program in 3d for `real' +// In practice, this is probably not what you would do very frequently (you +// probably either want to solve a 2d problem, or one in 3d, but not both at +// the same time). However, it demonstrates the mechanism by which we can +// simply change which dimension we want in a single place, and thereby force +// the compiler to recompile the dimension independent class templates for the +// dimension we request. The emphasis here lies on the fact that we only need +// to change a single place. This makes it rather trivial to debug the program +// in 2d where computations are fast, and then switch a single place to a 3 to +// run the much more computing intensive program in 3d for `real' // computations. // -// Each of the two blocks is enclosed in -// braces to make sure that the -// laplace_problem_2d variable -// goes out of scope (and releases the memory -// it holds) before we move on to allocate -// memory for the 3d case. Without the -// additional braces, the -// laplace_problem_2d variable -// would only be destroyed at the end of the -// function, i.e. after running the 3d -// problem, and would needlessly hog memory -// while the 3d run could actually use it. +// Each of the two blocks is enclosed in braces to make sure that the +// laplace_problem_2d variable goes out of scope (and releases +// the memory it holds) before we move on to allocate memory for the 3d +// case. Without the additional braces, the laplace_problem_2d +// variable would only be destroyed at the end of the function, i.e. after +// running the 3d problem, and would needlessly hog memory while the 3d run +// could actually use it. // -// Finally, the first line of the function is -// used to suppress some output. Remember -// that in the previous example, we had the -// output from the linear solvers about the -// starting residual and the number of the -// iteration where convergence was -// detected. This can be suppressed through -// the deallog.depth_console(0) -// call. +// Finally, the first line of the function is used to suppress some output. +// Remember that in the previous example, we had the output from the linear +// solvers about the starting residual and the number of the iteration where +// convergence was detected. This can be suppressed through the +// deallog.depth_console(0) call. // -// The rationale here is the following: the -// deallog (i.e. deal-log, not de-allog) -// variable represents a stream to which some -// parts of the library write output. It -// redirects this output to the console and -// if required to a file. The output is -// nested in a way so that each function can -// use a prefix string (separated by colons) -// for each line of output; if it calls -// another function, that may also use its -// prefix which is then printed after the one -// of the calling function. Since output from -// functions which are nested deep below is -// usually not as important as top-level -// output, you can give the deallog variable -// a maximal depth of nested output for -// output to console and file. The depth zero -// which we gave here means that no output is -// written. By changing it you can get more -// information about the innards of the -// library. +// The rationale here is the following: the deallog (i.e. 
deal-log, not +// de-allog) variable represents a stream to which some parts of the library +// write output. It redirects this output to the console and if required to a +// file. The output is nested in a way so that each function can use a prefix +// string (separated by colons) for each line of output; if it calls another +// function, that may also use its prefix which is then printed after the one +// of the calling function. Since output from functions which are nested deep +// below is usually not as important as top-level output, you can give the +// deallog variable a maximal depth of nested output for output to console and +// file. The depth zero which we gave here means that no output is written. By +// changing it you can get more information about the innards of the library. int main () { deallog.depth_console (0); diff --git a/deal.II/examples/step-40/step-40.cc b/deal.II/examples/step-40/step-40.cc index 89be9d6284..f39f1d6d49 100644 --- a/deal.II/examples/step-40/step-40.cc +++ b/deal.II/examples/step-40/step-40.cc @@ -13,11 +13,9 @@ // @sect3{Include files} // -// Most of the include files we need for this -// program have already been discussed in -// previous programs. In particular, all of -// the following should already be familiar -// friends: +// Most of the include files we need for this program have already been +// discussed in previous programs. In particular, all of the following should +// already be familiar friends: #include #include #include @@ -43,81 +41,49 @@ #include #include -// The following, however, will be new or be -// used in new roles. Let's walk through -// them. The first of these will provide the -// tools of the Utilities::System namespace -// that we will use to query things like the -// number of processors associated with the -// current MPI universe, or the number within -// this universe the processor this job runs -// on is: +// The following, however, will be new or be used in new roles. Let's walk +// through them. The first of these will provide the tools of the +// Utilities::System namespace that we will use to query things like the +// number of processors associated with the current MPI universe, or the +// number within this universe the processor this job runs on is: #include -// The next one provides a class, -// ConditionOStream that allows us to write -// code that would output things to a stream -// (such as std::cout on every -// processor but throws the text away on all -// but one of them. We could achieve the same -// by simply putting an if -// statement in front of each place where we -// may generate output, but this doesn't make -// the code any prettier. In addition, the -// condition whether this processor should or -// should not produce output to the screen is -// the same every time -- and consequently it -// should be simple enough to put it into the -// statements that generate output itself. +// The next one provides a class, ConditionOStream that allows us to write +// code that would output things to a stream (such as std::cout +// on every processor but throws the text away on all but one of them. We +// could achieve the same by simply putting an if statement in +// front of each place where we may generate output, but this doesn't make the +// code any prettier. In addition, the condition whether this processor should +// or should not produce output to the screen is the same every time -- and +// consequently it should be simple enough to put it into the statements that +// generate output itself. 
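For readers who have not seen this kind of conditional stream before, a minimal sketch of the idea, not taken from the tutorial itself, might look like the following; the class is spelled ConditionalOStream in the library headers, and the MPI helper calls used here are assumed to be the usual deal.II ones:

    #include <deal.II/base/conditional_ostream.h>
    #include <deal.II/base/mpi.h>
    #include <iostream>

    int main (int argc, char *argv[])
    {
      using namespace dealii;
      Utilities::MPI::MPI_InitFinalize mpi_initialization (argc, argv);

      // The second constructor argument is the condition: only the process
      // with rank 0 actually writes, every other process discards the text.
      ConditionalOStream pcout (std::cout,
                                Utilities::MPI::this_mpi_process (MPI_COMM_WORLD) == 0);
      pcout << "printed once, no matter how many MPI processes run" << std::endl;
    }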
#include -// After these preliminaries, here is where -// it becomes more interesting. As mentioned -// in the @ref distributed module, one of the -// fundamental truths of solving problems on -// large numbers of processors is that there -// is no way for any processor to store -// everything (e.g. information about all -// cells in the mesh, all degrees of freedom, -// or the values of all elements of the -// solution vector). Rather, every processor -// will own a few of each of these -// and, if necessary, may know about a -// few more, for example the ones that are -// located on cells adjacent to the ones this -// processor owns itself. We typically call -// the latter ghost cells, ghost -// nodes or ghost elements of a -// vector. The point of this discussion -// here is that we need to have a way to -// indicate which elements a particular -// processor owns or need to know of. This is -// the realm of the IndexSet class: if there -// are a total of $N$ cells, degrees of -// freedom, or vector elements, associated -// with (non-negative) integral indices -// $[0,N)$, then both the set of elements the -// current processor owns as well as the -// (possibly larger) set of indices it needs -// to know about are subsets of the set -// $[0,N)$. IndexSet is a class that stores -// subsets of this set in an efficient -// format: +// After these preliminaries, here is where it becomes more interesting. As +// mentioned in the @ref distributed module, one of the fundamental truths of +// solving problems on large numbers of processors is that there is no way for +// any processor to store everything (e.g. information about all cells in the +// mesh, all degrees of freedom, or the values of all elements of the solution +// vector). Rather, every processor will own a few of each of these +// and, if necessary, may know about a few more, for example the ones +// that are located on cells adjacent to the ones this processor owns +// itself. We typically call the latter ghost cells, ghost nodes +// or ghost elements of a vector. The point of this discussion here is +// that we need to have a way to indicate which elements a particular +// processor owns or need to know of. This is the realm of the IndexSet class: +// if there are a total of $N$ cells, degrees of freedom, or vector elements, +// associated with (non-negative) integral indices $[0,N)$, then both the set +// of elements the current processor owns as well as the (possibly larger) set +// of indices it needs to know about are subsets of the set $[0,N)$. IndexSet +// is a class that stores subsets of this set in an efficient format: #include -// The next header file is necessary for a -// single function, -// SparsityTools::distribute_sparsity_pattern. The -// role of this function will be explained -// below. +// The next header file is necessary for a single function, +// SparsityTools::distribute_sparsity_pattern. The role of this function will +// be explained below. 
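To make the IndexSet idea described above concrete, here is a tiny, self-contained sketch; the numbers are made up purely for illustration and do not come from the tutorial programs:

    #include <deal.II/base/index_set.h>
    #include <iostream>

    int main ()
    {
      // Hypothetical situation: 100 global indices, of which this process
      // owns the half-open range [20,40) and also needs to know about index 57.
      dealii::IndexSet index_set (100);
      index_set.add_range (20, 40);
      index_set.add_index (57);

      std::cout << index_set.is_element (25) << ' '   // prints 1: index 25 is in the set
                << index_set.n_elements ()  << '\n';  // prints 21: number of stored indices
    }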
#include -// The final two, new header files provide -// the class -// parallel::distributed::Triangulation that -// provides meshes distributed across a -// potentially very large number of -// processors, while the second provides the -// namespace -// parallel::distributed::GridRefinement that -// offers functions that can adaptively -// refine such distributed meshes: +// The final two, new header files provide the class +// parallel::distributed::Triangulation that provides meshes distributed +// across a potentially very large number of processors, while the second +// provides the namespace parallel::distributed::GridRefinement that offers +// functions that can adaptively refine such distributed meshes: #include #include @@ -130,47 +96,27 @@ namespace Step40 // @sect3{The LaplaceProblem class template} - // Next let's declare the main class of this - // program. Its structure is almost exactly - // that of the step-6 tutorial program. The - // only significant differences are: - // - The mpi_communicator - // variable that describes the set of - // processors we want this code to run - // on. In practice, this will be - // MPI_COMM_WORLD, i.e. all processors the - // batch scheduling system has assigned to - // this particular job. - // - The presence of the pcout - // variable of type ConditionOStream. - // - The obvious use of - // parallel::distributed::Triangulation - // instead of Triangulation. - // - The presence of two IndexSet objects - // that denote which sets of degrees of - // freedom (and associated elements of - // solution and right hand side vectors) we - // own on the current processor and which - // we need (as ghost elements) for the - // algorithms in this program to work. - // - The fact that all matrices and - // vectors are now distributed. We - // use their PETScWrapper versions - // for this since deal.II's own - // classes do not provide %parallel - // functionality. Note that as part - // of this class, we store a - // solution vector that does not - // only contain the degrees of - // freedom the current processor - // owns, but also (as ghost - // elements) all those vector - // elements that correspond to - // "locally relevant" degrees of - // freedom (i.e. all those that - // live on locally owned cells or - // the layer of ghost cells that - // surround it). + // Next let's declare the main class of this program. Its structure is + // almost exactly that of the step-6 tutorial program. The only significant + // differences are: + // - The mpi_communicator variable that + // describes the set of processors we want this code to run on. In practice, + // this will be MPI_COMM_WORLD, i.e. all processors the batch scheduling + // system has assigned to this particular job. + // - The presence of the pcout variable of type ConditionOStream. + // - The obvious use of parallel::distributed::Triangulation instead of Triangulation. + // - The presence of two IndexSet objects that denote which sets of degrees of + // freedom (and associated elements of solution and right hand side vectors) + // we own on the current processor and which we need (as ghost elements) for + // the algorithms in this program to work. + // - The fact that all matrices and vectors are now distributed. We use + // their PETScWrapper versions for this since deal.II's own classes do not + // provide %parallel functionality. 
Note that as part of this class, we
+  // store a solution vector that does not only contain the degrees of freedom
+  // the current processor owns, but also (as ghost elements) all those vector
+  // elements that correspond to "locally relevant" degrees of freedom
+  // (i.e. all those that live on locally owned cells or the layer of ghost
+  // cells that surround it).
 template class LaplaceProblem {
@@ -211,16 +157,12 @@ namespace Step40
 // @sect4{Constructors and destructors}
- // Constructors and destructors are rather
- // trivial. In addition to what we do in
- // step-6, we set the set of processors we
- // want to work on to all machines available
- // (MPI_COMM_WORLD); ask the triangulation to
- // ensure that the mesh remains smooth and
- // free to refined islands, for example; and
- // initialize the pcout variable
- // to only allow processor zero to output
- // anything:
+ // Constructors and destructors are rather trivial. In addition to what we
+ // do in step-6, we set the set of processors we want to work on to all
+ // machines available (MPI_COMM_WORLD); ask the triangulation to ensure that
+ // the mesh remains smooth and free of refined islands, for example; and
+ // initialize the pcout variable to only allow processor zero
+ // to output anything:
 template LaplaceProblem::LaplaceProblem () :
@@ -247,78 +189,47 @@ namespace Step40
 // @sect4{LaplaceProblem::setup_system}
- // The following function is, arguably, the
- // most interesting one in the entire program
- // since it goes to the heart of what
- // distinguishes %parallel step-40 from
- // sequential step-6.
+ // The following function is, arguably, the most interesting one in the
+ // entire program since it goes to the heart of what distinguishes %parallel
+ // step-40 from sequential step-6.
 //
- // At the top we do what we always do: tell
- // the DoFHandler object to distribute
- // degrees of freedom. Since the
- // triangulation we use here is distributed,
- // the DoFHandler object is smart enough to
- // recognize that on each processor it can
- // only distribute degrees of freedom on
- // cells it owns; this is followed by an
- // exchange step in which processors tell
- // each other about degrees of freedom on
- // ghost cell. The result is a DoFHandler
- // that knows about the degrees of freedom on
- // locally owned cells and ghost cells
- // (i.e. cells adjacent to locally owned
- // cells) but nothing about cells that are
- // further away, consistent with the basic
- // philosophy of distributed computing that
- // no processor can know everything.
+ // At the top we do what we always do: tell the DoFHandler object to
+ // distribute degrees of freedom. Since the triangulation we use here is
+ // distributed, the DoFHandler object is smart enough to recognize that on
+ // each processor it can only distribute degrees of freedom on cells it
+ // owns; this is followed by an exchange step in which processors tell each
+ // other about degrees of freedom on ghost cells. The result is a DoFHandler
+ // that knows about the degrees of freedom on locally owned cells and ghost
+ // cells (i.e. cells adjacent to locally owned cells) but nothing about
+ // cells that are further away, consistent with the basic philosophy of
+ // distributed computing that no processor can know everything.
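As a side note, one way to see this division of labor for oneself is to print, right after distribute_dofs(), how many degrees of freedom each process ends up owning. The following sketch is hypothetical -- the helper function and its placement are an illustration rather than part of step-40, only the deal.II calls it uses are assumed to be the real ones:

    #include <deal.II/base/conditional_ostream.h>
    #include <deal.II/base/mpi.h>
    #include <deal.II/dofs/dof_handler.h>
    #include <iostream>

    template <int dim>
    void print_dof_distribution (const dealii::DoFHandler<dim> &dof_handler,
                                 const MPI_Comm                 mpi_communicator)
    {
      using namespace dealii;

      // Every process knows the global number of DoFs, but only its own
      // share of them is "locally owned".
      ConditionalOStream pcout (std::cout,
                                Utilities::MPI::this_mpi_process (mpi_communicator) == 0);
      pcout << "Total DoFs: " << dof_handler.n_dofs () << std::endl;

      std::cout << "Rank " << Utilities::MPI::this_mpi_process (mpi_communicator)
                << " owns " << dof_handler.n_locally_owned_dofs ()
                << " DoFs" << std::endl;
    }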
template void LaplaceProblem::setup_system () { dof_handler.distribute_dofs (fe);
- // The next two lines extract some
- // informatino we will need later
- // on, namely two index sets that
- // provide information about which
- // degrees of freedom are owned by
- // the current processor (this
- // information will be used to
- // initialize solution and right
- // hand side vectors, and the
- // system matrix, indicating which
- // elements to store on the current
- // processor and which to expect to
- // be stored somewhere else); and
- // an index set that indicates
- // which degrees of freedom are
- // locally relevant (i.e. live on
- // cells that the current processor
- // owns or on the layer of ghost
- // cells around the locally owned
- // cells; we need all of these
- // degrees of freedom, for example,
- // to estimate the error on the
- // local cells).
+ // The next two lines extract some information we will need later on,
+ // namely two index sets that provide information about which degrees of
+ // freedom are owned by the current processor (this information will be
+ // used to initialize solution and right hand side vectors, and the system
+ // matrix, indicating which elements to store on the current processor and
+ // which to expect to be stored somewhere else); and an index set that
+ // indicates which degrees of freedom are locally relevant (i.e. live on
+ // cells that the current processor owns or on the layer of ghost cells
+ // around the locally owned cells; we need all of these degrees of
+ // freedom, for example, to estimate the error on the local cells).
 locally_owned_dofs = dof_handler.locally_owned_dofs (); DoFTools::extract_locally_relevant_dofs (dof_handler, locally_relevant_dofs);
- // Next, let us initialize the
- // solution and right hand side
- // vectors. As mentioned above, the
- // solution vector we seek does not
- // only store elements we own, but
- // also ghost entries; on the other
- // hand, the right hand side vector
- // only needs to have the entries
- // the current processor owns since
- // all we will ever do is write
- // into it, never read from it on
- // locally owned cells (of course
- // the linear solvers will read
- // from it, but they do not care
- // about the geometric location of
- // degrees of freedom).
+ // Next, let us initialize the solution and right hand side vectors. As
+ // mentioned above, the solution vector we seek does not only store
+ // elements we own, but also ghost entries; on the other hand, the right
+ // hand side vector only needs to have the entries the current processor
+ // owns since all we will ever do is write into it, never read from it on
+ // locally owned cells (of course the linear solvers will read from it,
+ // but they do not care about the geometric location of degrees of
+ // freedom).
 locally_relevant_solution.reinit (mpi_communicator, locally_owned_dofs, locally_relevant_dofs);
@@ -328,35 +239,24 @@ namespace Step40
 dof_handler.n_locally_owned_dofs()); system_rhs = 0;
- // The next step is to compute hanging node
- // and boundary value constraints, which we
- // combine into a single object storing all
+ // The next step is to compute hanging node and boundary value
+ // constraints, which we combine into a single object storing all
 // constraints.
 //
- // As with all other things in %parallel,
- // the mantra must be that no processor can
- // store all information about the entire
- // universe.
As a consequence, we need to - // tell the constraints object for which - // degrees of freedom it can store - // constraints and for which it may not - // expect any information to store. In our - // case, as explained in the @ref - // distributed module, the degrees of - // freedom we need to care about on each - // processor are the locally relevant ones, - // so we pass this to the - // ConstraintMatrix::reinit function. As a - // side note, if you forget to pass this - // argument, the ConstraintMatrix class - // will allocate an array with length equal - // to the largest DoF index it has seen so - // far. For processors with high MPI - // process number, this may be very large - // -- maybe on the order of billions. The - // program would then allocate more memory - // than for likely all other operations - // combined for this single array. + // As with all other things in %parallel, the mantra must be that no + // processor can store all information about the entire universe. As a + // consequence, we need to tell the constraints object for which degrees + // of freedom it can store constraints and for which it may not expect any + // information to store. In our case, as explained in the @ref distributed + // module, the degrees of freedom we need to care about on each processor + // are the locally relevant ones, so we pass this to the + // ConstraintMatrix::reinit function. As a side note, if you forget to + // pass this argument, the ConstraintMatrix class will allocate an array + // with length equal to the largest DoF index it has seen so far. For + // processors with high MPI process number, this may be very large -- + // maybe on the order of billions. The program would then allocate more + // memory than for likely all other operations combined for this single + // array. constraints.clear (); constraints.reinit (locally_relevant_dofs); DoFTools::make_hanging_node_constraints (dof_handler, constraints); @@ -366,43 +266,27 @@ namespace Step40 constraints); constraints.close (); - // The last part of this function deals - // with initializing the matrix with - // accompanying sparsity pattern. As in - // previous tutorial programs, we use the - // CompressedSimpleSparsityPattern as an - // intermediate with which we then - // initialize the PETSc matrix. To do so we - // have to tell the sparsity pattern its - // size but as above there is no way the - // resulting object will be able to store - // even a single pointer for each global - // degree of freedom; the best we can hope - // for is that it stores information about - // each locally relevant degree of freedom, - // i.e. all those that we may ever touch in - // the process of assembling the matrix - // (the @ref distributed_paper - // "distributed computing paper" has a long - // discussion why one really needs the - // locally relevant, and not the small set - // of locally active degrees of freedom in - // this context). + // The last part of this function deals with initializing the matrix with + // accompanying sparsity pattern. As in previous tutorial programs, we use + // the CompressedSimpleSparsityPattern as an intermediate with which we + // then initialize the PETSc matrix. To do so we have to tell the sparsity + // pattern its size but as above there is no way the resulting object will + // be able to store even a single pointer for each global degree of + // freedom; the best we can hope for is that it stores information about + // each locally relevant degree of freedom, i.e. 
all those that we may + // ever touch in the process of assembling the matrix (the @ref + // distributed_paper "distributed computing paper" has a long discussion + // why one really needs the locally relevant, and not the small set of + // locally active degrees of freedom in this context). // - // So we tell the sparsity pattern its size - // and what DoFs to store anything for and - // then ask DoFTools::make_sparsity_pattern - // to fill it (this function ignores all - // cells that are not locally owned, - // mimicking what we will do below in the - // assembly process). After this, we call a - // function that exchanges entries in these - // sparsity pattern between processors so - // that in the end each processor really - // knows about all the entries that will - // exist in that part of the finite element - // matrix that it will own. The final step - // is to initialize the matrix with the + // So we tell the sparsity pattern its size and what DoFs to store + // anything for and then ask DoFTools::make_sparsity_pattern to fill it + // (this function ignores all cells that are not locally owned, mimicking + // what we will do below in the assembly process). After this, we call a + // function that exchanges entries in these sparsity pattern between + // processors so that in the end each processor really knows about all the + // entries that will exist in that part of the finite element matrix that + // it will own. The final step is to initialize the matrix with the // sparsity pattern. CompressedSimpleSparsityPattern csp (dof_handler.n_dofs(), dof_handler.n_dofs(), @@ -425,45 +309,29 @@ namespace Step40 // @sect4{LaplaceProblem::assemble_system} - // The function that then assembles the - // linear system is comparatively boring, - // being almost exactly what we've seen - // before. The points to watch out for are: - // - Assembly must only loop over locally - // owned cells. There are multiple ways to - // test that; for example, we could - // compare - // a cell's subdomain_id against - // information from the triangulation - // as in cell->subdomain_id() == - // triangulation.locally_owned_subdomain(), - // or skip all cells for which - // the condition cell->is_ghost() - // || cell->is_artificial() is - // true. The simplest way, however, is - // to simply ask the cell whether it is - // owned by the local processor. - // - Copying local contributions into the - // global matrix must include distributing - // constraints and boundary values. In - // other words, we can now (as we did in - // step-6) first copy every local - // contribution into the global matrix and - // only in a later step take care of - // hanging node constraints and boundary - // values. The reason is, as discussed in - // step-17, that PETSc does not provide - // access to arbitrary elements of the - // matrix once they have been assembled - // into it -- in parts because they may - // simple no longer reside on the current - // processor but have instead been shipped - // to a different machine. - // - The way we compute the right hand side - // (given the formula stated in the - // introduction) may not be the most - // elegant but will do for a program whose - // focus lies somewhere entirely different. + // The function that then assembles the linear system is comparatively + // boring, being almost exactly what we've seen before. The points to watch + // out for are: + // - Assembly must only loop over locally owned cells. 
There
+ // are multiple ways to test that; for example, we could compare a cell's
+ // subdomain_id against information from the triangulation as in
+ // cell->subdomain_id() ==
+ // triangulation.locally_owned_subdomain(), or skip all cells for
+ // which the condition cell->is_ghost() ||
+ // cell->is_artificial() is true. The simplest way, however, is to
+ // simply ask the cell whether it is owned by the local processor.
+ // - Copying local contributions into the global matrix must include
+ // distributing constraints and boundary values. In other words, we can now
+ // (as we did in step-6) first copy every local contribution into the global
+ // matrix and only in a later step take care of hanging node constraints and
+ // boundary values. The reason is, as discussed in step-17, that PETSc does
+ // not provide access to arbitrary elements of the matrix once they have
+ // been assembled into it -- in parts because they may simply no longer
+ // reside on the current processor but have instead been shipped to a
+ // different machine.
+ // - The way we compute the right hand side (given the
+ // formula stated in the introduction) may not be the most elegant but will
+ // do for a program whose focus lies somewhere entirely different.
 template void LaplaceProblem::assemble_system () {
@@ -532,62 +400,32 @@ namespace Step40
 // @sect4{LaplaceProblem::solve}
- // Even though solving linear systems
- // on potentially tens of thousands
- // of processors is by far not a
- // trivial job, the function that
- // does this is -- at least at the
- // outside -- relatively simple. Most
- // of the parts you've seen
- // before. There are really only two
- // things worth mentioning:
- // - Solvers and preconditioners are
- // built on the deal.II wrappers of
- // PETSc functionality. It is
- // relatively well known that the
- // primary bottleneck of massively
- // %parallel linear solvers is not
- // actually the communication
- // between processors, but the fact
- // that it is difficult to produce
- // preconditioners that scale well
- // to large numbers of
- // processors. Over the second half
- // of the first decade of the 21st
- // century, it has become clear
- // that algebraic multigrid (AMG)
- // methods turn out to be extremely
- // efficient in this context, and
- // we will use one of them -- the
- // BoomerAMG implementation of the
- // Hypre package that can be
- // interfaced to through PETSc --
- // for the current program. The
- // rest of the solver itself is
- // boilerplate and has been shown
- // before. Since the linear system
- // is symmetric and positive
- // definite, we can use the CG
- // method as the outer solver.
- // - Ultimately, we want a vector
- // that stores not only the
- // elements of the solution for
- // degrees of freedom the current
- // processor owns, but also all
- // other locally relevant degrees
- // of freedom. On the other hand,
- // the solver itself needs a vector
- // that is uniquely split between
- // processors, without any
- // overlap. We therefore create a
- // vector at the beginning of this
- // function that has these
- // properties, use it to solve the
- // linear system, and only assign
- // it to the vector we want at the
- // very end. This last step ensures
- // that all ghost elements are also
- // copied as necessary.
+ // Even though solving linear systems on potentially tens of thousands of + // processors is by far not a trivial job, the function that does this is -- + // at least at the outside -- relatively simple. Most of the parts you've + // seen before. There are really only two things worth mentioning: + // - Solvers and preconditioners are built on the deal.II wrappers of PETSc + // functionality. It is relatively well known that the primary bottleneck of + // massively %parallel linear solvers is not actually the communication + // between processors, but the fact that it is difficult to produce + // preconditioners that scale well to large numbers of processors. Over the + // second half of the first decade of the 21st century, it has become clear + // that algebraic multigrid (AMG) methods turn out to be extremely efficient + // in this context, and we will use one of them -- the BoomerAMG + // implementation of the Hypre package that can be interfaced to through + // PETSc -- for the current program. The rest of the solver itself is + // boilerplate and has been shown before. Since the linear system is + // symmetric and positive definite, we can use the CG method as the outer + // solver. + // - Ultimately, we want a vector that stores not only the elements + // of the solution for degrees of freedom the current processor owns, but + // also all other locally relevant degrees of freedom. On the other hand, + // the solver itself needs a vector that is uniquely split between + // processors, without any overlap. We therefore create a vector at the + // beginning of this function that has these properties, use it to solve the + // linear system, and only assign it to the vector we want at the very + // end. This last step ensures that all ghost elements are also copied as + // necessary. template void LaplaceProblem::solve () { @@ -600,8 +438,7 @@ namespace Step40 PETScWrappers::SolverCG solver(solver_control, mpi_communicator); - // Ask for a symmetric preconditioner by - // setting the first parameter in + // Ask for a symmetric preconditioner by setting the first parameter in // AdditionalData to true. PETScWrappers::PreconditionBoomerAMG preconditioner(system_matrix, @@ -623,31 +460,18 @@ namespace Step40 // @sect4{LaplaceProblem::refine_grid} - // The function that estimates the - // error and refines the grid is - // again almost exactly like the one - // in step-6. The only difference is - // that the function that flags cells - // to be refined is now in namespace - // parallel::distributed::GridRefinement - // -- a namespace that has functions - // that can communicate between all - // involved processors and determine - // global thresholds to use in - // deciding which cells to refine and - // which to coarsen. + // The function that estimates the error and refines the grid is again + // almost exactly like the one in step-6. The only difference is that the + // function that flags cells to be refined is now in namespace + // parallel::distributed::GridRefinement -- a namespace that has functions + // that can communicate between all involved processors and determine global + // thresholds to use in deciding which cells to refine and which to coarsen. 
// - // Note that we didn't have to do - // anything special about the - // KellyErrorEstimator class: we just - // give it a vector with as many - // elements as the local - // triangulation has cells (locally - // owned cells, ghost cells, and - // artificial ones), but it only - // fills those entries that - // correspond to cells that are - // locally owned. + // Note that we didn't have to do anything special about the + // KellyErrorEstimator class: we just give it a vector with as many elements + // as the local triangulation has cells (locally owned cells, ghost cells, + // and artificial ones), but it only fills those entries that correspond to + // cells that are locally owned. template void LaplaceProblem::refine_grid () { @@ -668,71 +492,36 @@ namespace Step40 // @sect4{LaplaceProblem::output_results} - // Compared to the corresponding - // function in step-6, the one here - // is a tad more complicated. There - // are two reasons: the first one is - // that we do not just want to output - // the solution but also for each - // cell which processor owns it - // (i.e. which "subdomain" it is - // in). Secondly, as discussed at - // length in step-17 and step-18, - // generating graphical data can be a - // bottleneck in parallelizing. In - // step-18, we have moved this step - // out of the actual computation but - // shifted it into a separate program - // that later combined the output - // from various processors into a - // single file. But this doesn't - // scale: if the number of processors - // is large, this may mean that the - // step of combining data on a single - // processor later becomes the - // longest running part of the - // program, or it may produce a file - // that's so large that it can't be - // visualized any more. We here - // follow a more sensible approach, - // namely creating individual files - // for each MPI process and leaving - // it to the visualization program to - // make sense of that. + // Compared to the corresponding function in step-6, the one here is a tad + // more complicated. There are two reasons: the first one is that we do not + // just want to output the solution but also for each cell which processor + // owns it (i.e. which "subdomain" it is in). Secondly, as discussed at + // length in step-17 and step-18, generating graphical data can be a + // bottleneck in parallelizing. In step-18, we have moved this step out of + // the actual computation but shifted it into a separate program that later + // combined the output from various processors into a single file. But this + // doesn't scale: if the number of processors is large, this may mean that + // the step of combining data on a single processor later becomes the + // longest running part of the program, or it may produce a file that's so + // large that it can't be visualized any more. We here follow a more + // sensible approach, namely creating individual files for each MPI process + // and leaving it to the visualization program to make sense of that. // - // To start, the top of the function - // looks like always. In addition to - // attaching the solution vector (the - // one that has entries for all - // locally relevant, not only the - // locally owned, elements), we - // attach a data vector that stores, - // for each cell, the subdomain the - // cell belongs to. This is slightly - // tricky, because of course not - // every processor knows about every - // cell. 
The vector we attach
- // therefore has an entry for every
- // cell that the current processor
- // has in its mesh (locally owned
- // onces, ghost cells, and artificial
- // cells), but the DataOut class will
- // ignore all entries that correspond
- // to cells that are not owned by the
- // current processor. As a
- // consequence, it doesn't actually
- // matter what values we write into
- // these vector entries: we simply
- // fill the entire vector with the
- // number of the current MPI process
- // (i.e. the subdomain_id of the
- // current process); this correctly
- // sets the values we care for,
- // i.e. the entries that correspond
- // to locally owned cells, while
- // providing the wrong value for all
- // other elements -- but these are
- // then ignored anyway.
+ // To start, the top of the function looks like always. In addition to
+ // attaching the solution vector (the one that has entries for all locally
+ // relevant, not only the locally owned, elements), we attach a data vector
+ // that stores, for each cell, the subdomain the cell belongs to. This is
+ // slightly tricky, because of course not every processor knows about every
+ // cell. The vector we attach therefore has an entry for every cell that the
+ // current processor has in its mesh (locally owned ones, ghost cells, and
+ // artificial cells), but the DataOut class will ignore all entries that
+ // correspond to cells that are not owned by the current processor. As a
+ // consequence, it doesn't actually matter what values we write into these
+ // vector entries: we simply fill the entire vector with the number of the
+ // current MPI process (i.e. the subdomain_id of the current process); this
+ // correctly sets the values we care for, i.e. the entries that correspond
+ // to locally owned cells, while providing the wrong value for all other
+ // elements -- but these are then ignored anyway.
 template void LaplaceProblem::output_results (const unsigned int cycle) const {
@@ -747,22 +536,13 @@ namespace Step40
 data_out.build_patches ();
- // The next step is to write this
- // data to disk. We choose file
- // names of the form
- // solution-XX-PPPP.vtu
- // where XX indicates
- // the refinement cycle,
- // PPPP refers to the
- // processor number (enough for up
- // to 10,000 processors, though we
- // hope that nobody ever tries to
- // generate this much data -- you
- // would likely overflow all file
- // system quotas), and
- // .vtu indicates the
- // XML-based Visualization Toolkit
- // (VTK) file format.
+ // The next step is to write this data to disk. We choose file names of
+ // the form solution-XX-PPPP.vtu where XX
+ // indicates the refinement cycle, PPPP refers to the
+ // processor number (enough for up to 10,000 processors, though we hope
+ // that nobody ever tries to generate this much data -- you would likely
+ // overflow all file system quotas), and .vtu indicates the
+ // XML-based Visualization Toolkit (VTK) file format.
 const std::string filename = ("solution-" + Utilities::int_to_string (cycle, 2) + "." +
@@ -771,21 +551,13 @@ namespace Step40
 std::ofstream output ((filename + ".vtu").c_str()); data_out.write_vtu (output);
- // The last step is to write a
- // "master record" that lists for
- // the visualization program the
- // names of the various files that
- // combined represents the
- // graphical data for the entire
- // domain. The
- // DataOutBase::write_pvtu_record
- // does this, and it needs a list
- // of filenames that we create
- // first.
Note that only one
- // processor needs to generate this
- // file; we arbitrarily choose
- // processor zero to take over this
- // job.
+ // The last step is to write a "master record" that lists for the
+ // visualization program the names of the various files that combined
+ // represent the graphical data for the entire domain. The
+ // DataOutBase::write_pvtu_record does this, and it needs a list of
+ // filenames that we create first. Note that only one processor needs to
+ // generate this file; we arbitrarily choose processor zero to take over
+ // this job.
 if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0) { std::vector filenames;
@@ -807,32 +579,19 @@ namespace Step40
 // @sect4{LaplaceProblem::run}
- // The function that controls the
- // overall behavior of the program is
- // again like the one in step-6. The
- // minor difference are the use of
- // pcout instead of
- // std::cout for output
- // to the console (see also step-17)
- // and that we only generate
- // graphical output if at most 32
- // processors are involved. Without
- // this limit, it would be just too
- // easy for people carelessly running
- // this program without reading it
- // first to bring down the cluster
- // interconnect and fill any file
- // system available :-)
+ // The function that controls the overall behavior of the program is again
+ // like the one in step-6. The minor differences are the use of
+ // pcout instead of std::cout for output to the
+ // console (see also step-17) and that we only generate graphical output if
+ // at most 32 processors are involved. Without this limit, it would be just
+ // too easy for people carelessly running this program without reading it
+ // first to bring down the cluster interconnect and fill any file system
+ // available :-)
 //
- // A functional difference to step-6
- // is the use of a square domain and
- // that we start with a slightly
- // finer mesh (5 global refinement
- // cycles) -- there just isn't much
- // of a point showing a massively
- // %parallel program starting on 4
- // cells (although admittedly the
- // point is only slightly stronger
+ // A functional difference to step-6 is the use of a square domain and that
+ // we start with a slightly finer mesh (5 global refinement cycles) -- there
+ // just isn't much of a point showing a massively %parallel program starting
+ // on 4 cells (although admittedly the point is only slightly stronger
 // starting on 1024).
 template void LaplaceProblem::run () @@ -874,30 +633,18 @@ namespace Step40
// @sect4{main()}
-// The final function,
-// main(), again has the
-// same structure as in all other
-// programs, in particular
-// step-6. Like in the other programs
-// that use PETSc, we have to
-// inialize and finalize PETSc, which
-// is done using the helper object
-// MPI_InitFinalize.
+// The final function, main(), again has the same structure as in
+// all other programs, in particular step-6. Like in the other programs that
+// use PETSc, we have to initialize and finalize PETSc, which is done using the
+// helper object MPI_InitFinalize.
//
-// Note how we enclose the use the
-// use of the LaplaceProblem class in
-// a pair of braces. This makes sure
-// that all member variables of the
-// object are destroyed by the time
-// we destroy the mpi_intialization
-// object.
Not doing this will lead to
-// strange and hard to debug errors
-// when PetscFinalize
-// first deletes all PETSc vectors
-// that are still around, and the
-// destructor of the LaplaceProblem
-// class then tries to delete them
-// again.
+// Note how we enclose the use of the LaplaceProblem class in a pair
+// of braces. This makes sure that all member variables of the object are
+// destroyed by the time we destroy the mpi_initialization object. Not doing
+// this will lead to strange and hard to debug errors when
+// PetscFinalize first deletes all PETSc vectors that are still
+// around, and the destructor of the LaplaceProblem class then tries to delete
+// them again.
 int main(int argc, char *argv[]) { try
diff --git a/deal.II/examples/step-41/step-41.cc b/deal.II/examples/step-41/step-41.cc
index 00b8ebf75d..e2023bf2be 100644
--- a/deal.II/examples/step-41/step-41.cc
+++ b/deal.II/examples/step-41/step-41.cc
@@ -13,12 +13,9 @@
 // @sect3{Include files}
-// As usual, at the beginning we
-// include all the header files we
-// need in here. With the exception
-// of the various files that provide
-// interfaces to the Trilinos
-// library, there are no surprises:
+// As usual, at the beginning we include all the header files we need in
+// here. With the exception of the various files that provide interfaces to
+// the Trilinos library, there are no surprises:
 #include #include #include
@@ -57,29 +54,16 @@ namespace Step41
 // @sect3{The ObstacleProblem class template}
- // This class supplies all function
- // and variables needed to describe
- // the obstacle problem. It is
- // close to what we had to do in
- // step-4, and so relatively
- // simple. The only real new
- // components are the
- // update_solution_and_constraints
- // function that computes the
- // active set and a number of
- // variables that are necessary to
- // describe the original
- // (unconstrained) form of the
- // linear system
- // (complete_system_matrix
- // and
- // complete_system_rhs)
- // as well as the active set itself
- // and the diagonal of the mass
- // matrix $B$ used in scaling
- // Lagrange multipliers in the
- // active set formulation. The rest
- // is as in step-4:
+ // This class supplies all functions and variables needed to describe the
+ // obstacle problem. It is close to what we had to do in step-4, and so
+ // relatively simple. The only real new components are the
+ // update_solution_and_constraints function that computes the active set and
+ // a number of variables that are necessary to describe the original
+ // (unconstrained) form of the linear system
+ // (complete_system_matrix and
+ // complete_system_rhs) as well as the active set itself and
+ // the diagonal of the mass matrix $B$ used in scaling Lagrange multipliers
+ // in the active set formulation. The rest is as in step-4:
 template class ObstacleProblem {
@@ -115,25 +99,14 @@ namespace Step41
 // @sect3{Right hand side, boundary values, and the obstacle}
- // In the following, we define
- // classes that describe the right
- // hand side function, the
- // Dirichlet boundary values, and
- // the height of the obstacle as a
- // function of $\mathbf x$. In all
- // three cases, we derive these
- // classes from Function@,
- // although in the case of
- // RightHandSide and
- // Obstacle this is
- // more out of convention than
- // necessity since we never pass
- // such objects to the library.
In - // any case, the definition of the - // right hand side and boundary - // values classes is obvious given - // our choice of $f=-10$, - // $u|_{\partial\Omega}=0$: + // In the following, we define classes that describe the right hand side + // function, the Dirichlet boundary values, and the height of the obstacle + // as a function of $\mathbf x$. In all three cases, we derive these classes + // from Function@, although in the case of RightHandSide + // and Obstacle this is more out of convention than necessity + // since we never pass such objects to the library. In any case, the + // definition of the right hand side and boundary values classes is obvious + // given our choice of $f=-10$, $u|_{\partial\Omega}=0$: template class RightHandSide : public Function { @@ -176,8 +149,8 @@ namespace Step41 - // We describe the obstacle function by a cascaded - // barrier (think: stair steps): + // We describe the obstacle function by a cascaded barrier (think: stair + // steps): template class Obstacle : public Function { @@ -211,10 +184,8 @@ namespace Step41 // @sect4{ObstacleProblem::ObstacleProblem} - // To everyone who has taken a look - // at the first few tutorial - // programs, the constructor is - // completely obvious: + // To everyone who has taken a look at the first few tutorial programs, the + // constructor is completely obvious: template ObstacleProblem::ObstacleProblem () : @@ -225,11 +196,9 @@ namespace Step41 // @sect4{ObstacleProblem::make_grid} - // We solve our obstacle problem on - // the square $[-1,1]\times [-1,1]$ - // in 2D. This function therefore - // just sets up one of the simplest - // possible meshes. + // We solve our obstacle problem on the square $[-1,1]\times [-1,1]$ in + // 2D. This function therefore just sets up one of the simplest possible + // meshes. template void ObstacleProblem::make_grid () { @@ -247,15 +216,10 @@ namespace Step41 // @sect4{ObstacleProblem::setup_system} - // In this first function of note, - // we set up the degrees of freedom - // handler, resize vectors and - // matrices, and deal with the - // constraints. Initially, the - // constraints are, of course, only - // given by boundary values, so we - // interpolate them towards the top - // of the function. + // In this first function of note, we set up the degrees of freedom handler, + // resize vectors and matrices, and deal with the constraints. Initially, + // the constraints are, of course, only given by boundary values, so we + // interpolate them towards the top of the function. template void ObstacleProblem::setup_system () { @@ -287,18 +251,11 @@ namespace Step41 complete_system_rhs.reinit (dof_handler.n_dofs()); contact_force.reinit (dof_handler.n_dofs()); - // The only other thing to do - // here is to compute the factors - // in the $B$ matrix which is - // used to scale the residual. As - // discussed in the introduction, - // we'll use a little trick to - // make this mass matrix - // diagonal, and in the following - // then first compute all of this - // as a matrix and then extract - // the diagonal elements for - // later use: + // The only other thing to do here is to compute the factors in the $B$ + // matrix which is used to scale the residual. 
As discussed in the
+ // introduction, we'll use a little trick to make this mass matrix
+ // diagonal, and in the following then first compute all of this as a
+ // matrix and then extract the diagonal elements for later use:
 TrilinosWrappers::SparseMatrix mass_matrix; mass_matrix.reinit (c_sparsity); assemble_mass_matrix_diagonal (mass_matrix);
@@ -310,16 +267,10 @@ namespace Step41
 // @sect4{ObstacleProblem::assemble_system}
- // This function at once assembles
- // the system matrix and
- // right-hand-side and applied the
- // constraints (both due to the
- // active set as well as from
- // boundary values) to our
- // system. Otherwise, it is
- // functionally equivalent to the
- // corresponding function in, for
- // example, step-4.
+ // This function at once assembles the system matrix and right-hand-side and
+ // applies the constraints (both due to the active set as well as from
+ // boundary values) to our system. Otherwise, it is functionally equivalent
+ // to the corresponding function in, for example, step-4.
 template void ObstacleProblem::assemble_system () {
@@ -382,51 +333,26 @@ namespace Step41
 // @sect4{ObstacleProblem::assemble_mass_matrix_diagonal}
- // The next function is used in the
- // computation of the diagonal mass
- // matrix $B$ used to scale
- // variables in the active set
- // method. As discussed in the
- // introduction, we get the mass
- // matrix to be diagonal by
- // choosing the trapezoidal rule
- // for quadrature. Doing so we
- // don't really need the triple
- // loop over quadrature points,
- // indices $i$ and indices $j$ any
- // more and can, instead, just use
- // a double loop. The rest of the
- // function is obvious given what
- // we have discussed in many of the
- // previous tutorial programs.
+ // The next function is used in the computation of the diagonal mass matrix
+ // $B$ used to scale variables in the active set method. As discussed in the
+ // introduction, we get the mass matrix to be diagonal by choosing the
+ // trapezoidal rule for quadrature. Doing so we don't really need the triple
+ // loop over quadrature points, indices $i$ and indices $j$ any more and
+ // can, instead, just use a double loop. The rest of the function is obvious
+ // given what we have discussed in many of the previous tutorial programs.
 //
- // Note that at the time this
- // function is called, the
- // constraints object only contains
- // boundary value constraints; we
- // therefore do not have to pay
- // attention in the last
- // copy-local-to-global step to
- // preserve the values of matrix
- // entries that may later on be
- // constrained by the active set.
+ // Note that at the time this function is called, the constraints object
+ // only contains boundary value constraints; we therefore do not have to pay
+ // attention in the last copy-local-to-global step to preserve the values of
+ // matrix entries that may later on be constrained by the active set.
 //
- // Note also that the trick with
- // the trapezoidal rule only works
- // if we have in fact $Q_1$
- // elements. For higher order
- // elements, one would need to use
- // a quadrature formula that has
- // quadrature points at all the
- // support points of the finite
- // element. Constructing such a
- // quadrature formula isn't really
- // difficult, but not the point
- // here, and so we simply assert at
- // the top of the function that our
- // implicit assumption about the
- // finite element is in fact
- // satisfied.
+ // Note also that the trick with the trapezoidal rule only works if we have + // in fact $Q_1$ elements. For higher order elements, one would need to use + // a quadrature formula that has quadrature points at all the support points + // of the finite element. Constructing such a quadrature formula isn't + // really difficult, but not the point here, and so we simply assert at the + // top of the function that our implicit assumption about the finite element + // is in fact satisfied. template void ObstacleProblem:: @@ -472,43 +398,23 @@ namespace Step41 // @sect4{ObstacleProblem::update_solution_and_constraints} - // In a sense, this is the central - // function of this program. It - // updates the active set of - // constrained degrees of freedom - // as discussed in the introduction - // and computes a ConstraintMatrix - // object from it that can then be - // used to eliminate constrained - // degrees of freedom from the - // solution of the next - // iteration. At the same time we - // set the constrained degrees of - // freedom of the solution to the - // correct value, namely the height - // of the obstacle. + // In a sense, this is the central function of this program. It updates the + // active set of constrained degrees of freedom as discussed in the + // introduction and computes a ConstraintMatrix object from it that can then + // be used to eliminate constrained degrees of freedom from the solution of + // the next iteration. At the same time we set the constrained degrees of + // freedom of the solution to the correct value, namely the height of the + // obstacle. // - // Fundamentally, the function is - // rather simple: We have to loop - // over all degrees of freedom and - // check the sign of the function - // $\Lambda^k_i + c([BU^k]_i - - // G_i) = \Lambda^k_i + cB_i(U^k_i - - // [g_h]_i)$ because in our case - // $G_i = B_i[g_h]_i$. To this end, - // we use the formula given in the - // introduction by which we can - // compute the Lagrange multiplier - // as the residual of the original - // linear system (given via the - // variables - // complete_system_matrix - // and - // complete_system_rhs. - // At the top of this function, we - // compute this residual using a - // function that is part of the - // matrix classes. + // Fundamentally, the function is rather simple: We have to loop over all + // degrees of freedom and check the sign of the function $\Lambda^k_i + + // c([BU^k]_i - G_i) = \Lambda^k_i + cB_i(U^k_i - [g_h]_i)$ because in our + // case $G_i = B_i[g_h]_i$. To this end, we use the formula given in the + // introduction by which we can compute the Lagrange multiplier as the + // residual of the original linear system (given via the variables + // complete_system_matrix and complete_system_rhs. + // At the top of this function, we compute this residual using a function + // that is part of the matrix classes. template void ObstacleProblem::update_solution_and_constraints () @@ -523,62 +429,30 @@ namespace Step41 contact_force.ratio (lambda, diagonal_of_mass_matrix); contact_force *= -1; - // The next step is to reset the - // active set and constraints - // objects and to start the loop - // over all degrees of - // freedom. 
This is made slightly - // more complicated by the fact - // that we can't just loop over - // all elements of the solution - // vector since there is no way - // for us then to find out what - // location a DoF is associated - // with; however, we need this - // location to test whether the - // displacement of a DoF is - // larger or smaller than the - // height of the obstacle at this - // location. + // The next step is to reset the active set and constraints objects and to + // start the loop over all degrees of freedom. This is made slightly more + // complicated by the fact that we can't just loop over all elements of + // the solution vector since there is no way for us then to find out what + // location a DoF is associated with; however, we need this location to + // test whether the displacement of a DoF is larger or smaller than the + // height of the obstacle at this location. // - // We work around this by looping - // over all cells and DoFs - // defined on each of these - // cells. We use here that the - // displacement is described - // using a $Q_1$ function for - // which degrees of freedom are - // always located on the vertices - // of the cell; thus, we can get - // the index of each degree of - // freedom and its location by - // asking the vertex for this - // information. On the other - // hand, this clearly wouldn't - // work for higher order - // elements, and so we add an - // assertion that makes sure that - // we only deal with elements for - // which all degrees of freedom - // are located in vertices to - // avoid tripping ourselves with - // non-functional code in case - // someone wants to play with - // increasing the polynomial - // degree of the solution. + // We work around this by looping over all cells and DoFs defined on each + // of these cells. We use here that the displacement is described using a + // $Q_1$ function for which degrees of freedom are always located on the + // vertices of the cell; thus, we can get the index of each degree of + // freedom and its location by asking the vertex for this information. On + // the other hand, this clearly wouldn't work for higher order elements, + // and so we add an assertion that makes sure that we only deal with + // elements for which all degrees of freedom are located in vertices to + // avoid tripping ourselves with non-functional code in case someone wants + // to play with increasing the polynomial degree of the solution. // - // The price to pay for having to - // loop over cells rather than - // DoFs is that we may encounter - // some degrees of freedom more - // than once, namely each time we - // visit one of the cells - // adjacent to a given vertex. We - // will therefore have to keep - // track which vertices we have - // already touched and which we - // haven't so far. We do so by - // using an array of flags + // The price to pay for having to loop over cells rather than DoFs is that + // we may encounter some degrees of freedom more than once, namely each + // time we visit one of the cells adjacent to a given vertex. We will + // therefore have to keep track which vertices we have already touched and + // which we haven't so far. 
We do so by using an array of flags // dof_touched: constraints.clear(); active_set.clear (); @@ -603,56 +477,25 @@ namespace Step41 else continue; - // Now that we know that we - // haven't touched this DoF - // yet, let's get the value - // of the displacement - // function there as well - // as the value of the - // obstacle function and - // use this to decide - // whether the current DoF - // belongs to the active - // set. For that we use the - // function given above and - // in the introduction. + // Now that we know that we haven't touched this DoF yet, let's get + // the value of the displacement function there as well as the value + // of the obstacle function and use this to decide whether the + // current DoF belongs to the active set. For that we use the + // function given above and in the introduction. // - // If we decide that the - // DoF should be part of - // the active set, we add - // its index to the active - // set, introduce a - // nonhomogeneous equality - // constraint in the - // ConstraintMatrix object, - // and reset the solution - // value to the height of - // the obstacle. Finally, - // the residual of the - // non-contact part of the - // system serves as an - // additional control (the - // residual equals the - // remaining, unaccounted - // forces, and should be - // zero outside the contact - // zone), so we zero out - // the components of the - // residual vector (i.e., - // the Lagrange multiplier - // lambda) that correspond - // to the area where the - // body is in contact; at - // the end of the loop over - // all cells, the residual - // will therefore only - // consist of the residual - // in the non-contact - // zone. We output the norm - // of this residual along - // with the size of the - // active set after the - // loop. + // If we decide that the DoF should be part of the active set, we + // add its index to the active set, introduce a nonhomogeneous + // equality constraint in the ConstraintMatrix object, and reset the + // solution value to the height of the obstacle. Finally, the + // residual of the non-contact part of the system serves as an + // additional control (the residual equals the remaining, + // unaccounted forces, and should be zero outside the contact zone), + // so we zero out the components of the residual vector (i.e., the + // Lagrange multiplier lambda) that correspond to the area where the + // body is in contact; at the end of the loop over all cells, the + // residual will therefore only consist of the residual in the + // non-contact zone. We output the norm of this residual along with + // the size of the active set after the loop. const double obstacle_value = obstacle.value (cell->vertex(v)); const double solution_value = solution (dof_index); @@ -679,12 +522,9 @@ namespace Step41 << lambda.l2_norm() << std::endl; - // In a final step, we add to the - // set of constraints on DoFs we - // have so far from the active - // set those that result from - // Dirichlet boundary values, and - // close the constraints object: + // In a final step, we add to the set of constraints on DoFs we have so + // far from the active set those that result from Dirichlet boundary + // values, and close the constraints object: VectorTools::interpolate_boundary_values (dof_handler, 0, BoundaryValues(), @@ -694,23 +534,13 @@ namespace Step41 // @sect4{ObstacleProblem::solve} - // There is nothing to say really - // about the solve function. 
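The per-vertex decision spelled out in the hunks above can be condensed into the following sketch of the cell/vertex loop; lambda, penalty_parameter, diagonal_of_mass_matrix, obstacle, active_set and constraints stand in for the corresponding member variables and are assumptions where the hunks do not show them:

  std::vector<bool> dof_touched (dof_handler.n_dofs(), false);

  typename DoFHandler<dim>::active_cell_iterator
    cell = dof_handler.begin_active(),
    endc = dof_handler.end();
  for (; cell != endc; ++cell)
    for (unsigned int v = 0; v < GeometryInfo<dim>::vertices_per_cell; ++v)
      {
        // The vertex trick only works if every DoF sits on a vertex.
        Assert (dof_handler.get_fe().dofs_per_cell ==
                GeometryInfo<dim>::vertices_per_cell,
                ExcNotImplemented());

        const unsigned int dof_index = cell->vertex_dof_index (v, 0);

        // Visit every vertex DoF only once, even though it is shared
        // between adjacent cells.
        if (dof_touched[dof_index] == false)
          dof_touched[dof_index] = true;
        else
          continue;

        const double obstacle_value = obstacle.value (cell->vertex(v));
        const double solution_value = solution (dof_index);

        // Active set criterion: Lambda_i + c B_i (U_i - g_i) < 0 means this
        // DoF is (or would be) in contact with the obstacle.
        if (lambda (dof_index) +
            penalty_parameter * diagonal_of_mass_matrix (dof_index) *
              (solution_value - obstacle_value)
            < 0)
          {
            active_set.add_index (dof_index);
            constraints.add_line (dof_index);
            constraints.set_inhomogeneity (dof_index, obstacle_value);

            solution (dof_index) = obstacle_value;  // project onto the obstacle
            lambda (dof_index)   = 0;               // zero residual in the contact zone
          }
      }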
In the - // context of a Newton method, we - // are not typically interested in - // very high accuracy (why ask for - // a highly accurate solution of a - // linear problem that we know only - // gives us an approximation of the - // solution of the nonlinear - // problem), and so we use the - // ReductionControl class that - // stops iterations when either an - // absolute tolerance is reached - // (for which we choose $10^{-12}$) - // or when the residual is reduced - // by a certain factor (here, - // $10^{-3}$). + // There is nothing to say really about the solve function. In the context + // of a Newton method, we are not typically interested in very high accuracy + // (why ask for a highly accurate solution of a linear problem that we know + // only gives us an approximation of the solution of the nonlinear problem), + // and so we use the ReductionControl class that stops iterations when + // either an absolute tolerance is reached (for which we choose $10^{-12}$) + // or when the residual is reduced by a certain factor (here, $10^{-3}$). template void ObstacleProblem::solve () { @@ -735,20 +565,12 @@ namespace Step41 // @sect4{ObstacleProblem::output_results} - // We use the vtk-format for the - // output. The file contains the - // displacement and a numerical - // represenation of the active - // set. The function looks standard - // but note that we can add an - // IndexSet object to the DataOut - // object in exactly the same way - // as a regular solution vector: it - // is simply interpreted as a - // function that is either zero - // (when a degree of freedom is not - // part of the IndexSet) or one (if - // it is). + // We use the vtk-format for the output. The file contains the displacement + // and a numerical represenation of the active set. The function looks + // standard but note that we can add an IndexSet object to the DataOut + // object in exactly the same way as a regular solution vector: it is simply + // interpreted as a function that is either zero (when a degree of freedom + // is not part of the IndexSet) or one (if it is). template void ObstacleProblem::output_results (const unsigned int iteration) const { @@ -773,42 +595,22 @@ namespace Step41 // @sect4{ObstacleProblem::run} - // This is the function which has - // the top-level control over - // everything. It is not very - // long, and in fact rather - // straightforward: in every - // iteration of the active set - // method, we assemble the linear - // system, solve it, update the - // active set and project the - // solution back to the feasible - // set, and then output the - // results. The iteration is - // terminated whenever the active - // set has not changed in the - // previous iteration. + // This is the function which has the top-level control over everything. It + // is not very long, and in fact rather straightforward: in every iteration + // of the active set method, we assemble the linear system, solve it, update + // the active set and project the solution back to the feasible set, and + // then output the results. The iteration is terminated whenever the active + // set has not changed in the previous iteration. // - // The only trickier part is that - // we have to save the linear - // system (i.e., the matrix and - // right hand side) after - // assembling it in the first - // iteration. The reason is that - // this is the only step where we - // can access the linear system as - // built without any of the contact - // constraints active. 
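A minimal sketch of the solver setup described above follows; the choice of CG and of an algebraic multigrid preconditioner are assumptions, not something the hunk above fixes:

  ReductionControl reduction_control (1000,    // at most 1000 iterations
                                      1e-12,   // absolute tolerance
                                      1e-3);   // or reduce the residual by 10^-3
  SolverCG<TrilinosWrappers::Vector> solver (reduction_control);

  TrilinosWrappers::PreconditionAMG precondition;
  precondition.initialize (system_matrix);

  solver.solve (system_matrix, solution, system_rhs, precondition);

  // Make the solution satisfy the active set and boundary constraints again.
  constraints.distribute (solution);

  std::cout << "   Converged in " << reduction_control.last_step()
            << " CG iterations." << std::endl;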
We need this - // to compute the residual of the - // solution at other iterations, - // but in other iterations that - // linear system we form has the - // rows and columns that correspond - // to constrained degrees of - // freedom eliminated, and so we - // can no longer access the full - // residual of the original - // equation. + // The only trickier part is that we have to save the linear system (i.e., + // the matrix and right hand side) after assembling it in the first + // iteration. The reason is that this is the only step where we can access + // the linear system as built without any of the contact constraints + // active. We need this to compute the residual of the solution at other + // iterations, but in other iterations that linear system we form has the + // rows and columns that correspond to constrained degrees of freedom + // eliminated, and so we can no longer access the full residual of the + // original equation. template void ObstacleProblem::run () { @@ -845,13 +647,9 @@ namespace Step41 // @sect3{The main function} -// And this is the main function. It -// follows the pattern of all other -// main functions. The call to -// initialize MPI exists because the -// Trilinos library upon which we -// build our linear solvers in this -// program requires it. +// And this is the main function. It follows the pattern of all other main +// functions. The call to initialize MPI exists because the Trilinos library +// upon which we build our linear solvers in this program requires it. int main (int argc, char *argv[]) { try diff --git a/deal.II/examples/step-43/step-43.cc b/deal.II/examples/step-43/step-43.cc index d24bb00104..b39a62e39f 100644 --- a/deal.II/examples/step-43/step-43.cc +++ b/deal.II/examples/step-43/step-43.cc @@ -13,18 +13,12 @@ // @sect3{Include files} -// The first step, as always, is to -// include the functionality of a -// number of deal.II and C++ header -// files. +// The first step, as always, is to include the functionality of a number of +// deal.II and C++ header files. // -// The list includes some header -// files that provide vector, matrix, -// and preconditioner classes that -// implement interfaces to the -// respective Trilinos classes; some -// more information on these may be -// found in step-31. +// The list includes some header files that provide vector, matrix, and +// preconditioner classes that implement interfaces to the respective Trilinos +// classes; some more information on these may be found in step-31. #include #include #include @@ -67,23 +61,19 @@ #include -// At the end of this top-matter, we -// open a namespace for the current -// project into which all the -// following material will go, and -// then import all deal.II names into -// this namespace: +// At the end of this top-matter, we open a namespace for the current project +// into which all the following material will go, and then import all deal.II +// names into this namespace: namespace Step43 { using namespace dealii; - // @sect3{Pressure right hand side, pressure boundary values and saturation initial value classes} + // @sect3{Pressure right hand side, pressure boundary values and saturation + // initial value classes} - // The following part is taken - // directly from step-21 so there is - // no need to repeat the - // descriptions found there. + // The following part is taken directly from step-21 so there is no need to + // repeat the descriptions found there. 
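Put together, the iteration described above boils down to something like the following sketch; the bound on the number of iterations and the member and function names are assumptions wherever the hunks do not show them:

  template <int dim>
  void ObstacleProblem<dim>::run ()
  {
    make_grid ();
    setup_system ();

    IndexSet active_set_old (active_set);
    for (unsigned int iteration = 0; iteration <= solution.size(); ++iteration)
      {
        std::cout << "Newton iteration " << iteration << std::endl;

        assemble_system ();

        // Only the very first assembly sees the system without any contact
        // constraints, so save it for later residual computations.
        if (iteration == 0)
          {
            complete_system_matrix.copy_from (system_matrix);
            complete_system_rhs = system_rhs;
          }

        solve ();
        update_solution_and_constraints ();
        output_results (iteration);

        // Terminate as soon as the active set no longer changes.
        if (active_set == active_set_old)
          break;
        active_set_old = active_set;
      }
  }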
template class PressureRightHandSide : public Function { @@ -184,11 +174,8 @@ namespace Step43 // @sect3{Permeability models} - // In this tutorial, we still use - // the two permeability models - // previously used in step-21 so we - // again refrain from commenting in - // detail about them. + // In this tutorial, we still use the two permeability models previously + // used in step-21 so we again refrain from commenting in detail about them. namespace SingleCurvingCrack { template @@ -308,25 +295,14 @@ namespace Step43 // @sect3{Physical quantities} - // The implementations of all the - // physical quantities such as - // total mobility $\lambda_t$ and - // fractional flow of water $F$ are - // taken from step-21 so again we - // don't have do any comment about - // them. Compared to step-21 we - // have added checks that the - // saturation passed to these - // functions is in fact within the - // physically valid - // range. Furthermore, given that - // the wetting phase moves at speed - // $\mathbf u F'(S)$ it is clear - // that $F'(S)$ must be greater or - // equal to zero, so we assert that - // as well to make sure that our - // calculations to get at the - // formula for the derivative made + // The implementations of all the physical quantities such as total mobility + // $\lambda_t$ and fractional flow of water $F$ are taken from step-21 so + // again we don't have do any comment about them. Compared to step-21 we + // have added checks that the saturation passed to these functions is in + // fact within the physically valid range. Furthermore, given that the + // wetting phase moves at speed $\mathbf u F'(S)$ it is clear that $F'(S)$ + // must be greater or equal to zero, so we assert that as well to make sure + // that our calculations to get at the formula for the derivative made // sense. double mobility_inverse (const double S, const double viscosity) @@ -369,18 +345,11 @@ namespace Step43 // @sect3{Helper classes for solvers and preconditioners} - // In this first part we define a - // number of classes that we need - // in the construction of linear - // solvers and - // preconditioners. This part is - // essentially the same as that - // used in step-31. The only - // difference is that the original - // variable name stokes_matrix is - // replaced by another name - // darcy_matrix to match our - // problem. + // In this first part we define a number of classes that we need in the + // construction of linear solvers and preconditioners. This part is + // essentially the same as that used in step-31. The only difference is that + // the original variable name stokes_matrix is replaced by another name + // darcy_matrix to match our problem. namespace LinearSolvers { template @@ -487,64 +456,32 @@ namespace Step43 // @sect3{The TwoPhaseFlowProblem class} - // The definition of the class that - // defines the top-level logic of - // solving the time-dependent - // advection-dominated two-phase - // flow problem (or - // Buckley-Leverett problem - // [Buckley 1942]) is mainly based - // on tutorial programs step-21 and - // step-33, and in particular on - // step-31 where we have used - // basically the same general - // structure as done here. As in - // step-31, the key routines to - // look for in the implementation - // below are the run() - // and solve() - // functions. 
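The range checks and nonnegativity assertion referred to in the physical quantities hunk above look, in sketch form and following the quadratic relative permeability model of step-21, like this; the concrete expressions are meant as an illustration:

  double fractional_flow (const double S,
                          const double viscosity)
  {
    Assert ((S >= 0) && (S <= 1),
            ExcMessage ("Saturation is outside its physically valid range."));

    return S * S / (S * S + viscosity * (1. - S) * (1. - S));
  }


  double fractional_flow_derivative (const double S,
                                     const double viscosity)
  {
    Assert ((S >= 0) && (S <= 1),
            ExcMessage ("Saturation is outside its physically valid range."));

    const double temp = S * S + viscosity * (1. - S) * (1. - S);

    // d/dS [ S^2 / (S^2 + mu (1-S)^2) ] = 2 mu S (1-S) / (S^2 + mu (1-S)^2)^2
    const double F_prime = 2.0 * viscosity * S * (1. - S) / (temp * temp);

    // The wetting phase moves at speed u F'(S), so F'(S) must not be negative.
    Assert (F_prime >= 0, ExcInternalError());

    return F_prime;
  }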
+ // The definition of the class that defines the top-level logic of solving + // the time-dependent advection-dominated two-phase flow problem (or + // Buckley-Leverett problem [Buckley 1942]) is mainly based on tutorial + // programs step-21 and step-33, and in particular on step-31 where we have + // used basically the same general structure as done here. As in step-31, + // the key routines to look for in the implementation below are the + // run() and solve() functions. // - // The main difference to step-31 - // is that, since adaptive operator - // splitting is considered, we need - // a couple more member variables - // to hold the last two computed - // Darcy (velocity/pressure) - // solutions in addition to the - // current one (which is either - // computed directly, or - // extrapolated from the previous - // two), and we need to remember - // the last two times we computed - // the Darcy solution. We also need - // a helper function that figures - // out whether we do indeed need to - // recompute the Darcy solution. + // The main difference to step-31 is that, since adaptive operator splitting + // is considered, we need a couple more member variables to hold the last + // two computed Darcy (velocity/pressure) solutions in addition to the + // current one (which is either computed directly, or extrapolated from the + // previous two), and we need to remember the last two times we computed the + // Darcy solution. We also need a helper function that figures out whether + // we do indeed need to recompute the Darcy solution. // - // Unlike step-31, this step uses - // one more ConstraintMatrix object - // called - // darcy_preconditioner_constraints. This - // constraint object is used only - // for assembling the matrix for - // the Darcy preconditioner and - // includes hanging node constrants - // as well as Dirichlet boundary - // value constraints for the - // pressure variable. We need this - // because we are building a - // Laplace matrix for the pressure - // as an approximation of the Schur - // complement) which is only - // positive definite if boundary - // conditions are applied. + // Unlike step-31, this step uses one more ConstraintMatrix object called + // darcy_preconditioner_constraints. This constraint object is used only for + // assembling the matrix for the Darcy preconditioner and includes hanging + // node constrants as well as Dirichlet boundary value constraints for the + // pressure variable. We need this because we are building a Laplace matrix + // for the pressure as an approximation of the Schur complement) which is + // only positive definite if boundary conditions are applied. 
// - // The collection of member - // functions and variables thus - // declared in this class is then - // rather similar to those in - // step-31: + // The collection of member functions and variables thus declared in this + // class is then rather similar to those in step-31: template class TwoPhaseFlowProblem { @@ -573,10 +510,8 @@ namespace Step43 const unsigned int max_grid_level); void output_results () const; - // We follow with a number of - // helper functions that are - // used in a variety of places - // throughout the program: + // We follow with a number of helper functions that are used in a variety + // of places throughout the program: double get_max_u_F_prime () const; std::pair get_extrapolated_saturation_range () const; bool determine_whether_to_solve_for_pressure_and_velocity () const; @@ -591,14 +526,9 @@ namespace Step43 const double cell_diameter) const; - // This all is followed by the - // member variables, most of - // which are similar to the - // ones in step-31, with the - // exception of the ones that - // pertain to the macro time - // stepping for the - // velocity/pressure system: + // This all is followed by the member variables, most of which are similar + // to the ones in step-31, with the exception of the ones that pertain to + // the macro time stepping for the velocity/pressure system: Triangulation triangulation; double global_Omega_diameter; @@ -657,19 +587,11 @@ namespace Step43 bool rebuild_saturation_matrix; - // At the very end we declare a - // variable that denotes the - // material model. Compared to - // step-21, we do this here as - // a member variable since we - // will want to use it in a - // variety of places and so - // having a central place where - // such a variable is declared - // will make it simpler to - // replace one class by another - // (e.g. replace - // RandomMedium::KInverse by + // At the very end we declare a variable that denotes the material + // model. Compared to step-21, we do this here as a member variable since + // we will want to use it in a variety of places and so having a central + // place where such a variable is declared will make it simpler to replace + // one class by another (e.g. replace RandomMedium::KInverse by // SingleCurvingCrack::KInverse). const RandomMedium::KInverse k_inverse; }; @@ -677,26 +599,17 @@ namespace Step43 // @sect3{TwoPhaseFlowProblem::TwoPhaseFlowProblem} - // The constructor of this class is an - // extension of the constructors in step-21 - // and step-31. We need to add the various - // variables that concern the saturation. As - // discussed in the introduction, we are - // going to use $Q_2 \times Q_1$ - // (Taylor-Hood) elements again for the Darcy - // system, an element combination that fulfills - // the Ladyzhenskaya-Babuska-Brezzi (LBB) - // conditions - // [Brezzi and Fortin 1991, Chen 2005], and $Q_1$ - // elements for the saturation. However, by - // using variables that store the polynomial - // degree of the Darcy and temperature finite - // elements, it is easy to consistently - // modify the degree of the elements as well - // as all quadrature formulas used on them - // downstream. Moreover, we initialize the - // time stepping variables related to - // operator splitting as well as the option + // The constructor of this class is an extension of the constructors in + // step-21 and step-31. We need to add the various variables that concern + // the saturation. 
As discussed in the introduction, we are going to use + // $Q_2 \times Q_1$ (Taylor-Hood) elements again for the Darcy system, an + // element combination that fulfills the Ladyzhenskaya-Babuska-Brezzi (LBB) + // conditions [Brezzi and Fortin 1991, Chen 2005], and $Q_1$ elements for + // the saturation. However, by using variables that store the polynomial + // degree of the Darcy and temperature finite elements, it is easy to + // consistently modify the degree of the elements as well as all quadrature + // formulas used on them downstream. Moreover, we initialize the time + // stepping variables related to operator splitting as well as the option // for matrix assembly and preconditioning: template TwoPhaseFlowProblem::TwoPhaseFlowProblem (const unsigned int degree) @@ -733,70 +646,36 @@ namespace Step43 // @sect3{TwoPhaseFlowProblem::setup_dofs} - // This is the function that sets up the - // DoFHandler objects we have here (one for - // the Darcy part and one for the saturation - // part) as well as set to the right sizes - // the various objects required for the - // linear algebra in this program. Its basic - // operations are similar to what - // step-31 did. + // This is the function that sets up the DoFHandler objects we have here + // (one for the Darcy part and one for the saturation part) as well as set + // to the right sizes the various objects required for the linear algebra in + // this program. Its basic operations are similar to what step-31 did. // - // The body of the function first enumerates - // all degrees of freedom for the Darcy and - // saturation systems. For the Darcy part, - // degrees of freedom are then sorted to - // ensure that velocities precede pressure - // DoFs so that we can partition the Darcy - // matrix into a $2 \times 2$ matrix. + // The body of the function first enumerates all degrees of freedom for the + // Darcy and saturation systems. For the Darcy part, degrees of freedom are + // then sorted to ensure that velocities precede pressure DoFs so that we + // can partition the Darcy matrix into a $2 \times 2$ matrix. // - // Then, we need to incorporate - // hanging node constraints and - // Dirichlet boundary value - // constraints into - // darcy_preconditioner_constraints. - // The boundary condition - // constraints are only set on the - // pressure component since the - // Schur complement preconditioner - // that corresponds to the porous - // media flow operator in non-mixed - // form, $-\nabla \cdot [\mathbf K - // \lambda_t(S)]\nabla$, acts only - // on the pressure - // variable. Therefore, we use a - // component_mask that filters out - // the velocity component, so that - // the condensation is performed on - // pressure degrees of freedom - // only. + // Then, we need to incorporate hanging node constraints and Dirichlet + // boundary value constraints into darcy_preconditioner_constraints. The + // boundary condition constraints are only set on the pressure component + // since the Schur complement preconditioner that corresponds to the porous + // media flow operator in non-mixed form, $-\nabla \cdot [\mathbf K + // \lambda_t(S)]\nabla$, acts only on the pressure variable. Therefore, we + // use a component_mask that filters out the velocity component, so that the + // condensation is performed on pressure degrees of freedom only. // - // After having done so, we count - // the number of degrees of freedom - // in the various blocks. 
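A sketch of how such a pressure-only constraint object can be built follows, using the std::vector<bool> form of the component mask that was current at the time; boundary indicator 0 and the zero boundary function are assumptions consistent with the zero Dirichlet values mentioned further down, and darcy_dof_handler names the Darcy DoFHandler:

  darcy_preconditioner_constraints.clear ();

  // Hanging node constraints apply to all components.
  DoFTools::make_hanging_node_constraints (darcy_dof_handler,
                                           darcy_preconditioner_constraints);

  // Dirichlet values only for the pressure, which is component 'dim' of
  // the (velocity, pressure) system, so mask out the velocity components.
  std::vector<bool> pressure_mask (dim + 1, false);
  pressure_mask[dim] = true;

  VectorTools::interpolate_boundary_values (darcy_dof_handler,
                                            0,
                                            ZeroFunction<dim>(dim + 1),
                                            darcy_preconditioner_constraints,
                                            pressure_mask);

  darcy_preconditioner_constraints.close ();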
This - // information is then used to - // create the sparsity pattern for - // the Darcy and saturation system - // matrices as well as the - // preconditioner matrix from which - // we build the Darcy - // preconditioner. As in step-31, - // we choose to create the pattern - // not as in the first few tutorial - // programs, but by using the - // blocked version of - // CompressedSimpleSparsityPattern. The - // reason for doing this is mainly - // memory, that is, the - // SparsityPattern class would - // consume too much memory when - // used in three spatial dimensions - // as we intend to do for this - // program. So, for this, we follow - // the same way as step-31 did and - // we don't have to repeat - // descriptions again for the rest - // of the member function. + // After having done so, we count the number of degrees of freedom in the + // various blocks. This information is then used to create the sparsity + // pattern for the Darcy and saturation system matrices as well as the + // preconditioner matrix from which we build the Darcy preconditioner. As in + // step-31, we choose to create the pattern not as in the first few tutorial + // programs, but by using the blocked version of + // CompressedSimpleSparsityPattern. The reason for doing this is mainly + // memory, that is, the SparsityPattern class would consume too much memory + // when used in three spatial dimensions as we intend to do for this + // program. So, for this, we follow the same way as step-31 did and we don't + // have to repeat descriptions again for the rest of the member function. template void TwoPhaseFlowProblem::setup_dofs () { @@ -950,82 +829,46 @@ namespace Step43 // @sect3{Assembling matrices and preconditioners} - // The next few functions are - // devoted to setting up the - // various system and - // preconditioner matrices and - // right hand sides that we have to - // deal with in this program. + // The next few functions are devoted to setting up the various system and + // preconditioner matrices and right hand sides that we have to deal with in + // this program. // @sect4{TwoPhaseFlowProblem::assemble_darcy_preconditioner} - // This function assembles the matrix we use - // for preconditioning the Darcy system. What - // we need are a vector mass matrix weighted by - // $\left(\mathbf{K} \lambda_t\right)^{-1}$ - // on the velocity components and a mass - // matrix weighted by $\left(\mathbf{K} - // \lambda_t\right)$ on the pressure - // component. We start by generating a - // quadrature object of appropriate order, - // the FEValues object that can give values - // and gradients at the quadrature points - // (together with quadrature weights). Next - // we create data structures for the cell - // matrix and the relation between local and - // global DoFs. The vectors phi_u and - // grad_phi_p are going to hold the values of - // the basis functions in order to faster - // build up the local matrices, as was - // already done in step-22. Before we start - // the loop over all active cells, we have to - // specify which components are pressure and + // This function assembles the matrix we use for preconditioning the Darcy + // system. What we need are a vector mass matrix weighted by + // $\left(\mathbf{K} \lambda_t\right)^{-1}$ on the velocity components and a + // mass matrix weighted by $\left(\mathbf{K} \lambda_t\right)$ on the + // pressure component. 
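The memory-saving pattern construction mentioned above follows the usual blocked recipe; as a sketch, with n_u and n_p the block sizes obtained from DoFTools::count_dofs_per_block and darcy_constraints a stand-in for the constraint object used during assembly:

  BlockCompressedSimpleSparsityPattern csp (2, 2);
  csp.block(0,0).reinit (n_u, n_u);
  csp.block(0,1).reinit (n_u, n_p);
  csp.block(1,0).reinit (n_p, n_u);
  csp.block(1,1).reinit (n_p, n_p);
  csp.collect_sizes ();

  // Let the constraint object suppress entries for constrained DoFs right
  // away rather than condensing them later.
  DoFTools::make_sparsity_pattern (darcy_dof_handler, csp,
                                   darcy_constraints, false);

  darcy_matrix.reinit (csp);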
We start by generating a quadrature object of + // appropriate order, the FEValues object that can give values and gradients + // at the quadrature points (together with quadrature weights). Next we + // create data structures for the cell matrix and the relation between local + // and global DoFs. The vectors phi_u and grad_phi_p are going to hold the + // values of the basis functions in order to faster build up the local + // matrices, as was already done in step-22. Before we start the loop over + // all active cells, we have to specify which components are pressure and // which are velocity. // - // The creation of the local matrix - // is rather simple. There are only - // a term weighted by - // $\left(\mathbf{K} - // \lambda_t\right)^{-1}$ (on the - // velocity) and a Laplace matrix - // weighted by $\left(\mathbf{K} - // \lambda_t\right)$ to be - // generated, so the creation of - // the local matrix is done in - // essentially two lines. Since the - // material model functions at the - // top of this file only provide - // the inverses of the permeability - // and mobility, we have to compute - // $\mathbf K$ and $\lambda_t$ by - // hand from the given values, once + // The creation of the local matrix is rather simple. There are only a term + // weighted by $\left(\mathbf{K} \lambda_t\right)^{-1}$ (on the velocity) + // and a Laplace matrix weighted by $\left(\mathbf{K} \lambda_t\right)$ to + // be generated, so the creation of the local matrix is done in essentially + // two lines. Since the material model functions at the top of this file + // only provide the inverses of the permeability and mobility, we have to + // compute $\mathbf K$ and $\lambda_t$ by hand from the given values, once // per quadrature point. // - // Once the - // local matrix is ready (loop over - // rows and columns in the local - // matrix on each quadrature - // point), we get the local DoF - // indices and write the local - // information into the global - // matrix. We do this by directly - // applying the constraints - // (i.e. darcy_preconditioner_constraints) - // that takes care of hanging node - // and zero Dirichlet boundary - // condition constraints. By doing - // so, we don't have to do that - // afterwards, and we later don't - // have to use - // ConstraintMatrix::condense and - // MatrixTools::apply_boundary_values, - // both functions that would need - // to modify matrix and vector - // entries and so are difficult to - // write for the Trilinos classes - // where we don't immediately have - // access to individual memory - // locations. + // Once the local matrix is ready (loop over rows and columns in the local + // matrix on each quadrature point), we get the local DoF indices and write + // the local information into the global matrix. We do this by directly + // applying the constraints (i.e. darcy_preconditioner_constraints) that + // takes care of hanging node and zero Dirichlet boundary condition + // constraints. By doing so, we don't have to do that afterwards, and we + // later don't have to use ConstraintMatrix::condense and + // MatrixTools::apply_boundary_values, both functions that would need to + // modify matrix and vector entries and so are difficult to write for the + // Trilinos classes where we don't immediately have access to individual + // memory locations. 
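The "essentially two lines" mentioned above are the two weighted products inside the quadrature point loop; a sketch of that loop, with variable names as suggested by the text and otherwise assumed, could read:

  for (unsigned int q = 0; q < n_q_points; ++q)
    {
      // The material model provides K^{-1} and 1/lambda_t, so invert both
      // once per quadrature point.
      const double        inverse_mobility = mobility_inverse (old_saturation_values[q],
                                                               viscosity);
      const double        mobility         = 1.0 / inverse_mobility;
      const Tensor<2,dim> permeability     = invert (k_inverse_values[q]);

      for (unsigned int k = 0; k < dofs_per_cell; ++k)
        {
          phi_u[k]      = darcy_fe_values[velocities].value (k, q);
          grad_phi_p[k] = darcy_fe_values[pressure].gradient (k, q);
        }

      for (unsigned int i = 0; i < dofs_per_cell; ++i)
        for (unsigned int j = 0; j < dofs_per_cell; ++j)
          local_matrix(i,j) += (k_inverse_values[q] * inverse_mobility *
                                phi_u[i] * phi_u[j]
                                +
                                permeability * mobility *
                                grad_phi_p[i] * grad_phi_p[j]) *
                               darcy_fe_values.JxW(q);
    }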
template void TwoPhaseFlowProblem::assemble_darcy_preconditioner () @@ -1113,43 +956,21 @@ namespace Step43 // @sect4{TwoPhaseFlowProblem::build_darcy_preconditioner} - // After calling the above - // functions to assemble the - // preconditioner matrix, this - // function generates the inner - // preconditioners that are going - // to be used for the Schur - // complement block - // preconditioner. The - // preconditioners need to be - // regenerated at every saturation - // time step since they depend on - // the saturation $S$ that varies - // with time. + // After calling the above functions to assemble the preconditioner matrix, + // this function generates the inner preconditioners that are going to be + // used for the Schur complement block preconditioner. The preconditioners + // need to be regenerated at every saturation time step since they depend on + // the saturation $S$ that varies with time. // - // In here, we set up the - // preconditioner for the - // velocity-velocity matrix - // $\mathbf{M}^{\mathbf{u}}$ and - // the Schur complement - // $\mathbf{S}$. As explained in - // the introduction, we are going - // to use an IC preconditioner - // based on the vector matrix - // $\mathbf{M}^{\mathbf{u}}$ and - // another based on the scalar - // Laplace matrix - // $\tilde{\mathbf{S}}^p$ (which is - // spectrally close to the Schur - // complement of the Darcy - // matrix). Usually, the - // TrilinosWrappers::PreconditionIC - // class can be seen as a good - // black-box preconditioner which - // does not need any special - // knowledge of the matrix - // structure and/or the operator - // that's behind it. + // In here, we set up the preconditioner for the velocity-velocity matrix + // $\mathbf{M}^{\mathbf{u}}$ and the Schur complement $\mathbf{S}$. As + // explained in the introduction, we are going to use an IC preconditioner + // based on the vector matrix $\mathbf{M}^{\mathbf{u}}$ and another based on + // the scalar Laplace matrix $\tilde{\mathbf{S}}^p$ (which is spectrally + // close to the Schur complement of the Darcy matrix). Usually, the + // TrilinosWrappers::PreconditionIC class can be seen as a good black-box + // preconditioner which does not need any special knowledge of the matrix + // structure and/or the operator that's behind it. template void TwoPhaseFlowProblem::build_darcy_preconditioner () @@ -1169,38 +990,27 @@ namespace Step43 // @sect4{TwoPhaseFlowProblem::assemble_darcy_system} - // This is the function that assembles the - // linear system for the Darcy system. + // This is the function that assembles the linear system for the Darcy + // system. // - // Regarding the technical details of - // implementation, the procedures are similar - // to those in step-22 and step-31. We reset - // matrix and vector, create a quadrature - // formula on the cells, and then create the - // respective FEValues object. + // Regarding the technical details of implementation, the procedures are + // similar to those in step-22 and step-31. We reset matrix and vector, + // create a quadrature formula on the cells, and then create the respective + // FEValues object. // - // There is one thing that needs to be - // commented: since we have a separate - // finite element and DoFHandler for the - // saturation, we need to generate a second - // FEValues object for the proper evaluation - // of the saturation solution. 
This isn't too - // complicated to realize here: just use the - // saturation structures and set an update - // flag for the basis function values which - // we need for evaluation of the saturation - // solution. The only important part to - // remember here is that the same quadrature - // formula is used for both FEValues objects - // to ensure that we get matching information - // when we loop over the quadrature points of - // the two objects. + // There is one thing that needs to be commented: since we have a separate + // finite element and DoFHandler for the saturation, we need to generate a + // second FEValues object for the proper evaluation of the saturation + // solution. This isn't too complicated to realize here: just use the + // saturation structures and set an update flag for the basis function + // values which we need for evaluation of the saturation solution. The only + // important part to remember here is that the same quadrature formula is + // used for both FEValues objects to ensure that we get matching information + // when we loop over the quadrature points of the two objects. // - // The declarations proceed with some - // shortcuts for array sizes, the creation of - // the local matrix, right hand side as well - // as the vector for the indices of the local - // dofs compared to the global system. + // The declarations proceed with some shortcuts for array sizes, the + // creation of the local matrix, right hand side as well as the vector for + // the indices of the local dofs compared to the global system. template void TwoPhaseFlowProblem::assemble_darcy_system () { @@ -1238,33 +1048,19 @@ namespace Step43 std::vector boundary_values (n_face_q_points); std::vector > k_inverse_values (n_q_points); - // Next we need a vector that - // will contain the values of the - // saturation solution at the - // previous time level at the - // quadrature points to assemble - // the saturation dependent - // coefficients in the Darcy - // equations. + // Next we need a vector that will contain the values of the saturation + // solution at the previous time level at the quadrature points to + // assemble the saturation dependent coefficients in the Darcy equations. // - // The set of vectors we create - // next hold the evaluations of - // the basis functions as well as - // their gradients that will be - // used for creating the - // matrices. Putting these into - // their own arrays rather than - // asking the FEValues object for - // this information each time it - // is needed is an optimization - // to accelerate the assembly - // process, see step-22 for + // The set of vectors we create next hold the evaluations of the basis + // functions as well as their gradients that will be used for creating the + // matrices. Putting these into their own arrays rather than asking the + // FEValues object for this information each time it is needed is an + // optimization to accelerate the assembly process, see step-22 for // details. // - // The last two declarations are used to - // extract the individual blocks (velocity, - // pressure, saturation) from the total FE - // system. + // The last two declarations are used to extract the individual blocks + // (velocity, pressure, saturation) from the total FE system. 
std::vector old_saturation_values (n_q_points); std::vector > phi_u (dofs_per_cell); @@ -1274,72 +1070,38 @@ namespace Step43 const FEValuesExtractors::Vector velocities (0); const FEValuesExtractors::Scalar pressure (dim); - // Now start the loop over all - // cells in the problem. We are - // working on two different - // DoFHandlers for this assembly - // routine, so we must have two - // different cell iterators for - // the two objects in use. This - // might seem a bit peculiar, but - // since both the Darcy system - // and the saturation system use - // the same grid we can assume - // that the two iterators run in - // sync over the cells of the two - // DoFHandler objects. + // Now start the loop over all cells in the problem. We are working on two + // different DoFHandlers for this assembly routine, so we must have two + // different cell iterators for the two objects in use. This might seem a + // bit peculiar, but since both the Darcy system and the saturation system + // use the same grid we can assume that the two iterators run in sync over + // the cells of the two DoFHandler objects. // - // The first statements within - // the loop are again all very - // familiar, doing the update of - // the finite element data as - // specified by the update flags, - // zeroing out the local arrays - // and getting the values of the - // old solution at the quadrature - // points. At this point we also - // have to get the values of the - // saturation function of the - // previous time step at the - // quadrature points. To this - // end, we can use the - // FEValues::get_function_values - // (previously already used in - // step-9, step-14 and step-15), - // a function that takes a - // solution vector and returns a - // list of function values at the - // quadrature points of the - // present cell. In fact, it - // returns the complete - // vector-valued solution at each - // quadrature point, i.e. not - // only the saturation but also - // the velocities and pressure. + // The first statements within the loop are again all very familiar, doing + // the update of the finite element data as specified by the update flags, + // zeroing out the local arrays and getting the values of the old solution + // at the quadrature points. At this point we also have to get the values + // of the saturation function of the previous time step at the quadrature + // points. To this end, we can use the FEValues::get_function_values + // (previously already used in step-9, step-14 and step-15), a function + // that takes a solution vector and returns a list of function values at + // the quadrature points of the present cell. In fact, it returns the + // complete vector-valued solution at each quadrature point, i.e. not only + // the saturation but also the velocities and pressure. // - // Then we are ready to loop over - // the quadrature points on the - // cell to do the - // integration. The formula for - // this follows in a - // straightforward way from what - // has been discussed in the - // introduction. + // Then we are ready to loop over the quadrature points on the cell to do + // the integration. The formula for this follows in a straightforward way + // from what has been discussed in the introduction. // - // Once this is done, we start the loop over - // the rows and columns of the local matrix - // and feed the matrix with the relevant - // products. 
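A sketch of the lock-step iteration over the two DoFHandler objects just described; this works because both are built on the same Triangulation, and both FEValues objects use the same quadrature formula:

  typename DoFHandler<dim>::active_cell_iterator
    cell            = darcy_dof_handler.begin_active(),
    endc            = darcy_dof_handler.end(),
    saturation_cell = saturation_dof_handler.begin_active();

  for (; cell != endc; ++cell, ++saturation_cell)
    {
      darcy_fe_values.reinit (cell);
      saturation_fe_values.reinit (saturation_cell);

      local_matrix = 0;
      local_rhs    = 0;

      // The q-th entry of this vector is the old saturation at the q-th
      // quadrature point of the Darcy FEValues object as well, since the
      // quadrature points of the two objects coincide.
      saturation_fe_values.get_function_values (old_saturation_solution,
                                                old_saturation_values);

      // ... assemble the local Darcy matrix and right hand side here ...
    }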
+ // Once this is done, we start the loop over the rows and columns of the + // local matrix and feed the matrix with the relevant products. // - // The last step in the loop over all cells - // is to enter the local contributions into - // the global matrix and vector structures to - // the positions specified in - // local_dof_indices. Again, we let the - // ConstraintMatrix class do the insertion of - // the cell matrix elements to the global - // matrix, which already condenses the - // hanging node constraints. + // The last step in the loop over all cells is to enter the local + // contributions into the global matrix and vector structures to the + // positions specified in local_dof_indices. Again, we let the + // ConstraintMatrix class do the insertion of the cell matrix elements to + // the global matrix, which already condenses the hanging node + // constraints. typename DoFHandler::active_cell_iterator cell = darcy_dof_handler.begin_active(), endc = darcy_dof_handler.end(); @@ -1428,18 +1190,13 @@ namespace Step43 // @sect4{TwoPhaseFlowProblem::assemble_saturation_system} - // This function is to assemble the linear - // system for the saturation transport - // equation. It calls, if necessary, two - // other member functions: - // assemble_saturation_matrix() and - // assemble_saturation_rhs(). The former - // function then assembles the saturation - // matrix that only needs to be changed - // occasionally. On the other hand, the - // latter function that assembles the right - // hand side must be called at every - // saturation time step. + // This function is to assemble the linear system for the saturation + // transport equation. It calls, if necessary, two other member functions: + // assemble_saturation_matrix() and assemble_saturation_rhs(). The former + // function then assembles the saturation matrix that only needs to be + // changed occasionally. On the other hand, the latter function that + // assembles the right hand side must be called at every saturation time + // step. template void TwoPhaseFlowProblem::assemble_saturation_system () { @@ -1457,18 +1214,13 @@ namespace Step43 // @sect4{TwoPhaseFlowProblem::assemble_saturation_matrix} - // This function is easily understood since - // it only forms a simple mass matrix for the - // left hand side of the saturation linear - // system by basis functions phi_i_s and - // phi_j_s only. Finally, as usual, we enter - // the local contribution into the global - // matrix by specifying the position in - // local_dof_indices. This is done by letting - // the ConstraintMatrix class do the - // insertion of the cell matrix elements to - // the global matrix, which already condenses - // the hanging node constraints. + // This function is easily understood since it only forms a simple mass + // matrix for the left hand side of the saturation linear system by basis + // functions phi_i_s and phi_j_s only. Finally, as usual, we enter the local + // contribution into the global matrix by specifying the position in + // local_dof_indices. This is done by letting the ConstraintMatrix class do + // the insertion of the cell matrix elements to the global matrix, which + // already condenses the hanging node constraints. template void TwoPhaseFlowProblem::assemble_saturation_matrix () { @@ -1518,40 +1270,27 @@ namespace Step43 // @sect4{TwoPhaseFlowProblem::assemble_saturation_rhs} - // This function is to assemble the right - // hand side of the saturation transport - // equation. 
Before going about it, we have to - // create two FEValues objects for the Darcy - // and saturation systems respectively and, - // in addition, two FEFaceValues objects for - // the two systems because we have a - // boundary integral term in the weak form of - // saturation equation. For the FEFaceValues - // object of the saturation system, we also - // require normal vectors, which we request - // using the update_normal_vectors flag. + // This function is to assemble the right hand side of the saturation + // transport equation. Before going about it, we have to create two FEValues + // objects for the Darcy and saturation systems respectively and, in + // addition, two FEFaceValues objects for the two systems because we have a + // boundary integral term in the weak form of saturation equation. For the + // FEFaceValues object of the saturation system, we also require normal + // vectors, which we request using the update_normal_vectors flag. // - // Next, before looping over all the cells, - // we have to compute some parameters - // (e.g. global_u_infty, global_S_variation, - // and global_Omega_diameter) that the - // artificial viscosity $\nu$ needs. This is - // largely the same as was done in - // step-31, so you may see there for more + // Next, before looping over all the cells, we have to compute some + // parameters (e.g. global_u_infty, global_S_variation, and + // global_Omega_diameter) that the artificial viscosity $\nu$ needs. This is + // largely the same as was done in step-31, so you may see there for more // information. // - // The real works starts with the loop over all the - // saturation and Darcy cells to put the - // local contributions into the global - // vector. In this loop, in order to simplify - // the implementation, we split some of the - // work into two helper functions: - // assemble_saturation_rhs_cell_term and - // assemble_saturation_rhs_boundary_term. - // We note that we insert cell or boundary - // contributions into the global vector in - // the two functions rather than in this - // present function. + // The real works starts with the loop over all the saturation and Darcy + // cells to put the local contributions into the global vector. In this + // loop, in order to simplify the implementation, we split some of the work + // into two helper functions: assemble_saturation_rhs_cell_term and + // assemble_saturation_rhs_boundary_term. We note that we insert cell or + // boundary contributions into the global vector in the two functions rather + // than in this present function. template void TwoPhaseFlowProblem::assemble_saturation_rhs () { @@ -1613,22 +1352,15 @@ namespace Step43 // @sect4{TwoPhaseFlowProblem::assemble_saturation_rhs_cell_term} - // This function takes care of integrating - // the cell terms of the right hand side of - // the saturation equation, and then - // assembling it into the global right hand - // side vector. Given the discussion in the - // introduction, the form of these - // contributions is clear. The only tricky - // part is getting the artificial viscosity - // and all that is necessary to compute - // it. The first half of the function is - // devoted to this task. + // This function takes care of integrating the cell terms of the right hand + // side of the saturation equation, and then assembling it into the global + // right hand side vector. Given the discussion in the introduction, the + // form of these contributions is clear. 
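The four integration objects mentioned above can be set up as in the following sketch; the quadrature degree is an assumption:

  const QGauss<dim>   quadrature_formula (saturation_degree + 2);
  const QGauss<dim-1> face_quadrature_formula (saturation_degree + 2);

  FEValues<dim>     saturation_fe_values (saturation_fe, quadrature_formula,
                                          update_values | update_gradients |
                                          update_quadrature_points | update_JxW_values);
  FEValues<dim>     darcy_fe_values      (darcy_fe, quadrature_formula,
                                          update_values);

  // The saturation face object also needs normal vectors because the weak
  // form contains a boundary flux term.
  FEFaceValues<dim> saturation_fe_face_values (saturation_fe, face_quadrature_formula,
                                               update_values | update_normal_vectors |
                                               update_quadrature_points | update_JxW_values);
  FEFaceValues<dim> darcy_fe_face_values      (darcy_fe, face_quadrature_formula,
                                               update_values);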
The only tricky part is getting the + // artificial viscosity and all that is necessary to compute it. The first + // half of the function is devoted to this task. // - // The last part of the function is copying - // the local contributions into the global - // vector with position specified in - // local_dof_indices. + // The last part of the function is copying the local contributions into the + // global vector with position specified in local_dof_indices. template void TwoPhaseFlowProblem:: @@ -1698,17 +1430,12 @@ namespace Step43 // @sect4{TwoPhaseFlowProblem::assemble_saturation_rhs_boundary_term} - // The next function is responsible for the - // boundary integral terms in the right - // hand side form of the saturation - // equation. For these, we have to compute - // the upwinding flux on the global - // boundary faces, i.e. we impose Dirichlet - // boundary conditions weakly only on - // inflow parts of the global boundary. As - // before, this has been described in - // step-21 so we refrain from giving more - // descriptions about that. + // The next function is responsible for the boundary integral terms in the + // right hand side form of the saturation equation. For these, we have to + // compute the upwinding flux on the global boundary faces, i.e. we impose + // Dirichlet boundary conditions weakly only on inflow parts of the global + // boundary. As before, this has been described in step-21 so we refrain + // from giving more descriptions about that. template void TwoPhaseFlowProblem:: @@ -1767,28 +1494,18 @@ namespace Step43 // @sect3{TwoPhaseFlowProblem::solve} - // This function implements the operator - // splitting algorithm, i.e. in each time - // step it either re-computes the solution - // of the Darcy system or extrapolates - // velocity/pressure from previous time - // steps, then determines the size of the - // time step, and then updates the - // saturation variable. The implementation - // largely follows similar code in - // step-31. It is, next to the run() - // function, the central one in this - // program. + // This function implements the operator splitting algorithm, i.e. in each + // time step it either re-computes the solution of the Darcy system or + // extrapolates velocity/pressure from previous time steps, then determines + // the size of the time step, and then updates the saturation variable. The + // implementation largely follows similar code in step-31. It is, next to + // the run() function, the central one in this program. // - // At the beginning of the function, we ask - // whether to solve the pressure-velocity - // part by evaluating the posteriori - // criterion (see the following - // function). If necessary, we will solve - // the pressure-velocity part using the - // GMRES solver with the Schur complement - // block preconditioner as is described in - // the introduction. + // At the beginning of the function, we ask whether to solve the + // pressure-velocity part by evaluating the posteriori criterion (see the + // following function). If necessary, we will solve the pressure-velocity + // part using the GMRES solver with the Schur complement block + // preconditioner as is described in the introduction. 
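To make "weakly imposing Dirichlet values only on the inflow boundary" concrete, here is a sketch of the per-face quadrature loop; the sign convention and the variable names are assumptions consistent with the weak form of the transport equation, not a verbatim copy of the program:

  for (unsigned int q = 0; q < n_face_q_points; ++q)
    {
      // u . n < 0 marks an inflow point: there the prescribed boundary
      // saturation is the upstream value, otherwise the interior value is.
      const double u_dot_n = darcy_velocity_values_face[q] *
                             saturation_fe_face_values.normal_vector (q);
      const bool   is_inflow = (u_dot_n < 0);

      const double upstream_saturation = (is_inflow
                                          ? boundary_saturation_values[q]
                                          : old_saturation_values_face[q]);

      // Boundary part of the explicit update: subtract the upwind flux
      // F(S_upstream) (u . n) tested with each shape function.
      for (unsigned int i = 0; i < dofs_per_cell; ++i)
        local_rhs(i) -= time_step *
                        fractional_flow (upstream_saturation, viscosity) *
                        u_dot_n *
                        saturation_fe_face_values.shape_value (i, q) *
                        saturation_fe_face_values.JxW (q);
    }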
template void TwoPhaseFlowProblem::solve () { @@ -1839,38 +1556,22 @@ namespace Step43 saturation_matching_last_computed_darcy_solution = saturation_solution; } } - // On the other hand, if we have decided - // that we don't want to compute the - // solution of the Darcy system for the - // current time step, then we need to - // simply extrapolate the previous two - // Darcy solutions to the same time as we - // would have computed the - // velocity/pressure at. We do a simple - // linear extrapolation, i.e. given the - // current length $dt$ of the macro time - // step from the time when we last - // computed the Darcy solution to now - // (given by - // current_macro_time_step), - // and $DT$ the length of the last macro - // time step (given by - // old_macro_time_step), - // then we get - // $u^\ast = u_p + dt \frac{u_p-u_{pp}}{DT} - // = (1+dt/DT)u_p - dt/DT u_{pp}$, where - // $u_p$ and $u_{pp}$ are the last two - // computed Darcy solutions. We can - // implement this formula using just - // two lines of code. + // On the other hand, if we have decided that we don't want to compute the + // solution of the Darcy system for the current time step, then we need to + // simply extrapolate the previous two Darcy solutions to the same time as + // we would have computed the velocity/pressure at. We do a simple linear + // extrapolation, i.e. given the current length $dt$ of the macro time + // step from the time when we last computed the Darcy solution to now + // (given by current_macro_time_step), and $DT$ the length of + // the last macro time step (given by old_macro_time_step), + // then we get $u^\ast = u_p + dt \frac{u_p-u_{pp}}{DT} = (1+dt/DT)u_p - + // dt/DT u_{pp}$, where $u_p$ and $u_{pp}$ are the last two computed Darcy + // solutions. We can implement this formula using just two lines of code. // - // Note that the algorithm here only - // works if we have at least two - // previously computed Darcy solutions - // from which we can extrapolate to the - // current time, and this is ensured by - // requiring re-computation of the Darcy - // solution for the first 2 time steps. + // Note that the algorithm here only works if we have at least two + // previously computed Darcy solutions from which we can extrapolate to + // the current time, and this is ensured by requiring re-computation of + // the Darcy solution for the first 2 time steps. else { darcy_solution = last_computed_darcy_solution; @@ -1880,11 +1581,8 @@ namespace Step43 } - // With the so computed velocity - // vector, compute the optimal - // time step based on the CFL - // criterion discussed in the - // introduction... + // With the so computed velocity vector, compute the optimal time step + // based on the CFL criterion discussed in the introduction... { old_time_step = time_step; @@ -1900,24 +1598,14 @@ namespace Step43 - // ...and then also update the - // length of the macro time steps - // we use while we're dealing - // with time step sizes. In - // particular, this involves: (i) - // If we have just recomputed the - // Darcy solution, then the - // length of the previous macro - // time step is now fixed and the - // length of the current macro - // time step is, up to now, - // simply the length of the - // current (micro) time - // step. (ii) If we have not - // recomputed the Darcy solution, - // then the length of the current - // macro time step has just grown - // by time_step. + // ...and then also update the length of the macro time steps we use while + // we're dealing with time step sizes. 
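The "two lines of code" alluded to above amount, in sketch form, to a scaled add on the block vector holding the Darcy solution:

  // u* = (1 + dt/DT) u_p - (dt/DT) u_pp, with dt = current_macro_time_step
  // and DT = old_macro_time_step.
  darcy_solution = last_computed_darcy_solution;
  darcy_solution.sadd (1.0 + current_macro_time_step / old_macro_time_step,
                       -current_macro_time_step / old_macro_time_step,
                       second_last_computed_darcy_solution);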
In particular, this involves: (i) + // If we have just recomputed the Darcy solution, then the length of the + // previous macro time step is now fixed and the length of the current + // macro time step is, up to now, simply the length of the current (micro) + // time step. (ii) If we have not recomputed the Darcy solution, then the + // length of the current macro time step has just grown by + // time_step. if (solve_for_pressure_and_velocity == true) { old_macro_time_step = current_macro_time_step; @@ -1926,18 +1614,11 @@ namespace Step43 else current_macro_time_step += time_step; - // The last step in this function - // is to recompute the saturation - // solution based on the velocity - // field we've just - // obtained. This naturally - // happens in every time step, - // and we don't skip any of these - // computations. At the end of - // computing the saturation, we - // project back into the allowed - // interval $[0,1]$ to make sure - // our solution remains physical. + // The last step in this function is to recompute the saturation solution + // based on the velocity field we've just obtained. This naturally happens + // in every time step, and we don't skip any of these computations. At the + // end of computing the saturation, we project back into the allowed + // interval $[0,1]$ to make sure our solution remains physical. { std::cout << " Solving saturation transport equation..." << std::endl; @@ -1966,27 +1647,15 @@ namespace Step43 // @sect3{TwoPhaseFlowProblem::refine_mesh} - // The next function does the - // refinement and coarsening of the - // mesh. It does its work in three - // blocks: (i) Compute refinement - // indicators by looking at the - // gradient of a solution vector - // extrapolated linearly from the - // previous two using the - // respective sizes of the time - // step (or taking the only - // solution we have if this is the - // first time step). (ii) Flagging - // those cells for refinement and - // coarsening where the gradient is - // larger or smaller than a certain - // threshold, preserving minimal - // and maximal levels of mesh - // refinement. (iii) Transferring - // the solution from the old to the - // new mesh. None of this is - // particularly difficult. + // The next function does the refinement and coarsening of the mesh. It does + // its work in three blocks: (i) Compute refinement indicators by looking at + // the gradient of a solution vector extrapolated linearly from the previous + // two using the respective sizes of the time step (or taking the only + // solution we have if this is the first time step). (ii) Flagging those + // cells for refinement and coarsening where the gradient is larger or + // smaller than a certain threshold, preserving minimal and maximal levels + // of mesh refinement. (iii) Transferring the solution from the old to the + // new mesh. None of this is particularly difficult. template void TwoPhaseFlowProblem:: @@ -2087,9 +1756,7 @@ namespace Step43 // @sect3{TwoPhaseFlowProblem::output_results} - // This function generates - // graphical output. It is in - // essence a copy of the + // This function generates graphical output. It is in essence a copy of the // implementation in step-31. template void TwoPhaseFlowProblem::output_results () const @@ -2178,36 +1845,20 @@ namespace Step43 // @sect4{TwoPhaseFlowProblem::determine_whether_to_solve_for_pressure_and_velocity} - // This function implements the a - // posteriori criterion for - // adaptive operator splitting. 
The - // function is relatively - // straightforward given the way we - // have implemented other functions - // above and given the formula for - // the criterion derived in the - // paper. + // This function implements the a posteriori criterion for adaptive operator + // splitting. The function is relatively straightforward given the way we + // have implemented other functions above and given the formula for the + // criterion derived in the paper. // - // If one decides that one wants - // the original IMPES method in - // which the Darcy equation is - // solved in every time step, then - // this can be achieved by setting - // the threshold value - // AOS_threshold (with - // a default of $5.0$) to zero, - // thereby forcing the function to - // always return true. + // If one decides that one wants the original IMPES method in which the + // Darcy equation is solved in every time step, then this can be achieved by + // setting the threshold value AOS_threshold (with a default of + // $5.0$) to zero, thereby forcing the function to always return true. // - // Finally, note that the function - // returns true unconditionally for - // the first two time steps to - // ensure that we have always - // solved the Darcy system at least - // twice when skipping its - // solution, thereby allowing us to - // extrapolate the velocity from - // the last two solutions in + // Finally, note that the function returns true unconditionally for the + // first two time steps to ensure that we have always solved the Darcy + // system at least twice when skipping its solution, thereby allowing us to + // extrapolate the velocity from the last two solutions in // solve(). template bool @@ -2272,28 +1923,15 @@ namespace Step43 // @sect4{TwoPhaseFlowProblem::project_back_saturation} - // The next function simply makes - // sure that the saturation values - // always remain within the - // physically reasonable range of - // $[0,1]$. While the continuous - // equations guarantee that this is - // so, the discrete equations - // don't. However, if we allow the - // discrete solution to escape this - // range we get into trouble - // because terms like $F(S)$ and - // $F'(S)$ will produce - // unreasonable results - // (e.g. $F'(S)<0$ for $S<0$, which - // would imply that the wetting - // fluid phase flows against - // the direction of the bulk fluid - // velocity)). Consequently, at the - // end of each time step, we simply - // project the saturation field - // back into the physically - // reasonable region. + // The next function simply makes sure that the saturation values always + // remain within the physically reasonable range of $[0,1]$. While the + // continuous equations guarantee that this is so, the discrete equations + // don't. However, if we allow the discrete solution to escape this range we + // get into trouble because terms like $F(S)$ and $F'(S)$ will produce + // unreasonable results (e.g. $F'(S)<0$ for $S<0$, which would imply that + // the wetting fluid phase flows against the direction of the bulk + // fluid velocity)). Consequently, at the end of each time step, we simply + // project the saturation field back into the physically reasonable region. 
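The projection just described is a point-wise clamp of the saturation degrees of freedom. A self-contained sketch in plain C++ (hypothetical names; the program operates on its own vector class instead):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Clamp every saturation value back into the physically admissible
    // interval [0,1].
    void project_back_saturation(std::vector<double> &saturation)
    {
      for (std::size_t i = 0; i < saturation.size(); ++i)
        saturation[i] = std::min(1.0, std::max(0.0, saturation[i]));
    }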
template void TwoPhaseFlowProblem::project_back_saturation () @@ -2309,17 +1947,11 @@ namespace Step43 // @sect4{TwoPhaseFlowProblem::get_max_u_F_prime} // - // Another simpler helper function: - // Compute the maximum of the total - // velocity times the derivative of - // the fraction flow function, - // i.e., compute $\|\mathbf{u} - // F'(S)\|_{L_\infty(\Omega)}$. This - // term is used in both the - // computation of the time step as - // well as in normalizing the - // entropy-residual term in the - // artificial viscosity. + // Another simpler helper function: Compute the maximum of the total + // velocity times the derivative of the fraction flow function, i.e., + // compute $\|\mathbf{u} F'(S)\|_{L_\infty(\Omega)}$. This term is used in + // both the computation of the time step as well as in normalizing the + // entropy-residual term in the artificial viscosity. template double TwoPhaseFlowProblem::get_max_u_F_prime () const @@ -2370,24 +2002,14 @@ namespace Step43 // @sect4{TwoPhaseFlowProblem::get_extrapolated_saturation_range} // - // For computing the stabilization - // term, we need to know the range - // of the saturation - // variable. Unlike in step-31, - // this range is trivially bounded - // by the interval $[0,1]$ but we - // can do a bit better by looping - // over a collection of quadrature - // points and seeing what the - // values are there. If we can, - // i.e., if there are at least two - // timesteps around, we can even - // take the values extrapolated to - // the next time step. + // For computing the stabilization term, we need to know the range of the + // saturation variable. Unlike in step-31, this range is trivially bounded + // by the interval $[0,1]$ but we can do a bit better by looping over a + // collection of quadrature points and seeing what the values are there. If + // we can, i.e., if there are at least two timesteps around, we can even + // take the values extrapolated to the next time step. // - // As before, the function is taken - // with minimal modifications from - // step-31. + // As before, the function is taken with minimal modifications from step-31. template std::pair TwoPhaseFlowProblem::get_extrapolated_saturation_range () const @@ -2460,18 +2082,11 @@ namespace Step43 // @sect4{TwoPhaseFlowProblem::compute_viscosity} // - // The final tool function is used - // to compute the artificial - // viscosity on a given cell. This - // isn't particularly complicated - // if you have the formula for it - // in front of you, and looking at - // the implementation in - // step-31. The major difference to - // that tutorial program is that - // the velocity here is not simply - // $\mathbf u$ but $\mathbf u - // F'(S)$ and some of the formulas + // The final tool function is used to compute the artificial viscosity on a + // given cell. This isn't particularly complicated if you have the formula + // for it in front of you, and looking at the implementation in step-31. The + // major difference to that tutorial program is that the velocity here is + // not simply $\mathbf u$ but $\mathbf u F'(S)$ and some of the formulas // need to be adjusted accordingly. 
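Both helper quantities that feed into this viscosity, the maximum of $|\mathbf u F'(S)|$ and the saturation range, are simple reductions over quadrature-point values. A stand-alone sketch of the first one, assuming the sampled values have already been gathered (the program collects them at quadrature points, cell by cell):

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // A discrete stand-in for ||u F'(S)||_{L_infty(Omega)}: the maximum of
    // |u_q * F'(S_q)| over all sampled points q.
    double max_u_times_dF_dS(const std::vector<double> &velocity_magnitude,
                             const std::vector<double> &dF_dS)
    {
      double max_value = 0.0;
      for (std::size_t q = 0; q < velocity_magnitude.size(); ++q)
        max_value = std::max(max_value,
                             std::fabs(velocity_magnitude[q] * dF_dS[q]));
      return max_value;
    }

The compute_viscosity() definition itself follows.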
template double @@ -2531,9 +2146,7 @@ namespace Step43 const double global_scaling = c_R * porosity * (global_max_u_F_prime) * global_S_variation / std::pow(global_Omega_diameter, alpha - 2.); -// return (beta * -// (max_velocity_times_dF_dS) * -// cell_diameter); +// return (beta * (max_velocity_times_dF_dS) * cell_diameter); return (beta * (max_velocity_times_dF_dS) * @@ -2545,25 +2158,14 @@ namespace Step43 // @sect3{TwoPhaseFlowProblem::run} - // This function is, besides - // solve(), the - // primary function of this program - // as it controls the time - // iteration as well as when the - // solution is written into output - // files and when to do mesh - // refinement. + // This function is, besides solve(), the primary function of + // this program as it controls the time iteration as well as when the + // solution is written into output files and when to do mesh refinement. // - // With the exception of the - // startup code that loops back to - // the beginning of the function - // through the goto - // start_time_iteration - // label, everything should be - // relatively straightforward. In - // any case, it mimicks the - // corresponding function in - // step-31. + // With the exception of the startup code that loops back to the beginning + // of the function through the goto start_time_iteration label, + // everything should be relatively straightforward. In any case, it mimicks + // the corresponding function in step-31. template void TwoPhaseFlowProblem::run () { @@ -2632,12 +2234,9 @@ start_time_iteration: // @sect3{The main() function} // -// The main function looks almost the -// same as in all other programs. In -// particular, it is essentially the -// same as in step-31 where we also -// explain the need to initialize the -// MPI subsystem. +// The main function looks almost the same as in all other programs. In +// particular, it is essentially the same as in step-31 where we also explain +// the need to initialize the MPI subsystem. int main (int argc, char *argv[]) { try diff --git a/deal.II/examples/step-44/step-44.cc b/deal.II/examples/step-44/step-44.cc index 1f06021f57..23377a6e3b 100644 --- a/deal.II/examples/step-44/step-44.cc +++ b/deal.II/examples/step-44/step-44.cc @@ -9,11 +9,9 @@ /* to the file deal.II/doc/license.html for the text and */ /* further information on this license. */ -// We start by including all the necessary -// deal.II header files and some C++ related -// ones. They have been discussed in detail -// in previous tutorial programs, so you need -// only refer to past tutorials for details. +// We start by including all the necessary deal.II header files and some C++ +// related ones. They have been discussed in detail in previous tutorial +// programs, so you need only refer to past tutorials for details. #include #include #include @@ -55,34 +53,30 @@ #include -// We then stick everything that relates to -// this tutorial program into a namespace of -// its own, and import all the deal.II -// function and class names into it: +// We then stick everything that relates to this tutorial program into a +// namespace of its own, and import all the deal.II function and class names +// into it: namespace Step44 { using namespace dealii; // @sect3{Run-time parameters} // -// There are several parameters that can be set -// in the code so we set up a ParameterHandler -// object to read in the choices at run-time. 
+// There are several parameters that can be set in the code so we set up a +// ParameterHandler object to read in the choices at run-time. namespace Parameters { // @sect4{Finite Element system} -// As mentioned in the introduction, a different order -// interpolation should be used for the displacement -// $\mathbf{u}$ than for the pressure $\widetilde{p}$ and -// the dilatation $\widetilde{J}$. -// Choosing $\widetilde{p}$ and $\widetilde{J}$ as discontinuous (constant) -// functions at the element level leads to the -// mean-dilatation method. The discontinuous approximation -// allows $\widetilde{p}$ and $\widetilde{J}$ to be condensed out -// and a classical displacement based method is recovered. -// Here we specify the polynomial order used to -// approximate the solution. -// The quadrature order should be adjusted accordingly. + +// As mentioned in the introduction, a different order interpolation should be +// used for the displacement $\mathbf{u}$ than for the pressure +// $\widetilde{p}$ and the dilatation $\widetilde{J}$. Choosing +// $\widetilde{p}$ and $\widetilde{J}$ as discontinuous (constant) functions +// at the element level leads to the mean-dilatation method. The discontinuous +// approximation allows $\widetilde{p}$ and $\widetilde{J}$ to be condensed +// out and a classical displacement based method is recovered. Here we +// specify the polynomial order used to approximate the solution. The +// quadrature order should be adjusted accordingly. struct FESystem { unsigned int poly_degree; @@ -122,10 +116,10 @@ namespace Step44 } // @sect4{Geometry} -// Make adjustments to the problem geometry and the applied load. -// Since the problem modelled here is quite specific, the load -// scale can be altered to specific values to compare with the -// results given in the literature. + +// Make adjustments to the problem geometry and the applied load. Since the +// problem modelled here is quite specific, the load scale can be altered to +// specific values to compare with the results given in the literature. struct Geometry { unsigned int global_refinement; @@ -170,9 +164,9 @@ namespace Step44 } // @sect4{Materials} -// We also need the shear modulus $ \mu $ -// and Poisson ration $ \nu $ -// for the neo-Hookean material. + +// We also need the shear modulus $ \mu $ and Poisson ration $ \nu $ for the +// neo-Hookean material. struct Materials { double nu; @@ -211,10 +205,10 @@ namespace Step44 } // @sect4{Linear solver} -// Next, we choose both solver and preconditioner settings. -// The use of an effective preconditioner is critical to ensure -// convergence when a large nonlinear motion occurs -// within a Newton increment. + +// Next, we choose both solver and preconditioner settings. The use of an +// effective preconditioner is critical to ensure convergence when a large +// nonlinear motion occurs within a Newton increment. struct LinearSolver { std::string type_lin; @@ -271,9 +265,9 @@ namespace Step44 } // @sect4{Nonlinear solver} -// A Newton-Raphson scheme is used to -// solve the nonlinear system of governing equations. -// We now define the tolerances and the maximum number of + +// A Newton-Raphson scheme is used to solve the nonlinear system of governing +// equations. We now define the tolerances and the maximum number of // iterations for the Newton-Raphson nonlinear solver. struct NonlinearSolver { @@ -319,8 +313,8 @@ namespace Step44 } // @sect4{Time} -// Set the timestep size $ \varDelta t $ -// and the simulation end-time. 
+ +// Set the timestep size $ \varDelta t $ and the simulation end-time. struct Time { double delta_t; @@ -359,8 +353,9 @@ namespace Step44 } // @sect4{All parameters} -// Finally we consolidate all of the above structures into -// a single container that holds all of our run-time selections. + +// Finally we consolidate all of the above structures into a single container +// that holds all of our run-time selections. struct AllParameters : public FESystem, public Geometry, public Materials, @@ -408,8 +403,8 @@ namespace Step44 } // @sect3{Some standard tensors} -// Now we define some frequently used -// second and fourth-order tensors: + +// Now we define some frequently used second and fourth-order tensors: template class StandardTensors { @@ -419,14 +414,13 @@ namespace Step44 static const SymmetricTensor<2, dim> I; // $\mathbf{I} \otimes \mathbf{I}$ static const SymmetricTensor<4, dim> IxI; - // $\mathcal{S}$, note that as we only use - // this fourth-order unit tensor to operate - // on symmetric second-order tensors. - // To maintain notation consistent with Holzapfel (2001) - // we name the tensor $\mathcal{I}$ + // $\mathcal{S}$, note that as we only use this fourth-order unit tensor + // to operate on symmetric second-order tensors. To maintain notation + // consistent with Holzapfel (2001) we name the tensor $\mathcal{I}$ static const SymmetricTensor<4, dim> II; // Fourth-order deviatoric tensor such that - // $\textrm{dev} \{ \bullet \} = \{ \bullet \} - [1/\textrm{dim}][ \{ \bullet\} :\mathbf{I}]\mathbf{I}$ + // $\textrm{dev} \{ \bullet \} = \{ \bullet \} - + // [1/\textrm{dim}][ \{ \bullet\} :\mathbf{I}]\mathbf{I}$ static const SymmetricTensor<4, dim> dev_P; }; @@ -447,10 +441,10 @@ namespace Step44 StandardTensors::dev_P = deviator_tensor(); // @sect3{Time class} -// A simple class to store time data. Its -// functioning is transparent so no discussion is -// necessary. For simplicity we assume a constant -// time step size. + +// A simple class to store time data. Its functioning is transparent so no +// discussion is necessary. For simplicity we assume a constant time step +// size. 
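Stepping back briefly to the parameter structures above: they all follow the same declare-then-parse pattern around a ParameterHandler object. The following is only a much-reduced sketch of that pattern with an assumed subsection and entry name, not the actual declarations used by the program:

    #include <deal.II/base/parameter_handler.h>
    #include <string>

    using namespace dealii;

    void read_poly_degree(const std::string &input_file, unsigned int &poly_degree)
    {
      ParameterHandler prm;

      // Declare what may appear in the input file, together with defaults.
      prm.enter_subsection("Finite element system");
      prm.declare_entry("Polynomial degree", "2",
                        Patterns::Integer(1),
                        "Displacement system polynomial order");
      prm.leave_subsection();

      // Read the run-time choices and query them afterwards.
      prm.read_input(input_file);
      prm.enter_subsection("Finite element system");
      poly_degree = prm.get_integer("Polynomial degree");
      prm.leave_subsection();
    }

With that aside, the Time class announced in the comment just above follows.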
class Time { public: @@ -545,13 +539,10 @@ namespace Step44 ~Material_Compressible_Neo_Hook_Three_Field() {} - // We update the material model with - // various deformation dependent data - // based on $F$ and the pressure $\widetilde{p}$ - // and dilatation $\widetilde{J}$, - // and at the end of the - // function include a physical check for - // internal consistency: + // We update the material model with various deformation dependent data + // based on $F$ and the pressure $\widetilde{p}$ and dilatation + // $\widetilde{J}$, and at the end of the function include a physical + // check for internal consistency: void update_material_data(const Tensor<2, dim> &F, const double p_tilde_in, const double J_tilde_in) @@ -564,57 +555,43 @@ namespace Step44 Assert(det_F > 0, ExcInternalError()); } - // The second function determines the - // Kirchhoff stress $\boldsymbol{\tau} - // = \boldsymbol{\tau}_{\textrm{iso}} + - // \boldsymbol{\tau}_{\textrm{vol}}$ + // The second function determines the Kirchhoff stress $\boldsymbol{\tau} + // = \boldsymbol{\tau}_{\textrm{iso}} + \boldsymbol{\tau}_{\textrm{vol}}$ SymmetricTensor<2, dim> get_tau() { return get_tau_iso() + get_tau_vol(); } - // The fourth-order elasticity tensor - // in the spatial setting - // $\mathfrak{c}$ is calculated from - // the SEF $\Psi$ as $ J - // \mathfrak{c}_{ijkl} = F_{iA} F_{jB} - // \mathfrak{C}_{ABCD} F_{kC} F_{lD}$ - // where $ \mathfrak{C} = 4 - // \frac{\partial^2 - // \Psi(\mathbf{C})}{\partial + // The fourth-order elasticity tensor in the spatial setting + // $\mathfrak{c}$ is calculated from the SEF $\Psi$ as $ J + // \mathfrak{c}_{ijkl} = F_{iA} F_{jB} \mathfrak{C}_{ABCD} F_{kC} F_{lD}$ + // where $ \mathfrak{C} = 4 \frac{\partial^2 \Psi(\mathbf{C})}{\partial // \mathbf{C} \partial \mathbf{C}}$ SymmetricTensor<4, dim> get_Jc() const { return get_Jc_vol() + get_Jc_iso(); } - // Derivative of the volumetric free - // energy with respect to $\widetilde{J}$ return - // $\frac{\partial - // \Psi_{\text{vol}}(\widetilde{J})}{\partial - // \widetilde{J}}$ + // Derivative of the volumetric free energy with respect to + // $\widetilde{J}$ return $\frac{\partial + // \Psi_{\text{vol}}(\widetilde{J})}{\partial \widetilde{J}}$ double get_dPsi_vol_dJ() const { return (kappa / 2.0) * (J_tilde - 1.0 / J_tilde); } - // Second derivative of the volumetric - // free energy wrt $\widetilde{J}$. We - // need the following computation - // explicitly in the tangent so we make - // it public. We calculate - // $\frac{\partial^2 - // \Psi_{\textrm{vol}}(\widetilde{J})}{\partial - // \widetilde{J} \partial + // Second derivative of the volumetric free energy wrt $\widetilde{J}$. We + // need the following computation explicitly in the tangent so we make it + // public. 
We calculate $\frac{\partial^2 + // \Psi_{\textrm{vol}}(\widetilde{J})}{\partial \widetilde{J} \partial // \widetilde{J}}$ double get_d2Psi_vol_dJ2() const { return ( (kappa / 2.0) * (1.0 + 1.0 / (J_tilde * J_tilde))); } - // The next few functions return - // various data that we choose to store - // with the material: + // The next few functions return various data that we choose to store with + // the material: double get_det_F() const { return det_F; @@ -631,34 +608,26 @@ namespace Step44 } protected: - // Define constitutive model paramaters - // $\kappa$ (bulk modulus) - // and the neo-Hookean model - // parameter $c_1$: + // Define constitutive model paramaters $\kappa$ (bulk modulus) and the + // neo-Hookean model parameter $c_1$: const double kappa; const double c_1; - // Model specific data that is - // convenient to store with the - // material: + // Model specific data that is convenient to store with the material: double det_F; double p_tilde; double J_tilde; SymmetricTensor<2, dim> b_bar; - // The following functions are used - // internally in determining the result - // of some of the public functions - // above. The first one determines the - // volumetric Kirchhoff stress - // $\boldsymbol{\tau}_{\textrm{vol}}$: + // The following functions are used internally in determining the result + // of some of the public functions above. The first one determines the + // volumetric Kirchhoff stress $\boldsymbol{\tau}_{\textrm{vol}}$: SymmetricTensor<2, dim> get_tau_vol() const { return p_tilde * det_F * StandardTensors::I; } - // Next, determine the isochoric - // Kirchhoff stress + // Next, determine the isochoric Kirchhoff stress // $\boldsymbol{\tau}_{\textrm{iso}} = // \mathcal{P}:\overline{\boldsymbol{\tau}}$: SymmetricTensor<2, dim> get_tau_iso() const @@ -666,16 +635,14 @@ namespace Step44 return StandardTensors::dev_P * get_tau_bar(); } - // Then, determine the fictitious - // Kirchhoff stress + // Then, determine the fictitious Kirchhoff stress // $\overline{\boldsymbol{\tau}}$: SymmetricTensor<2, dim> get_tau_bar() const { return 2.0 * c_1 * b_bar; } - // Calculate the volumetric part of the - // tangent $J + // Calculate the volumetric part of the tangent $J // \mathfrak{c}_\textrm{vol}$: SymmetricTensor<4, dim> get_Jc_vol() const { @@ -685,8 +652,7 @@ namespace Step44 - (2.0 * StandardTensors::II) ); } - // Calculate the isochoric part of the - // tangent $J + // Calculate the isochoric part of the tangent $J // \mathfrak{c}_\textrm{iso}$: SymmetricTensor<4, dim> get_Jc_iso() const { @@ -707,10 +673,8 @@ namespace Step44 * StandardTensors::dev_P; } - // Calculate the fictitious elasticity - // tensor $\overline{\mathfrak{c}}$. - // For the material model chosen this - // is simply zero: + // Calculate the fictitious elasticity tensor $\overline{\mathfrak{c}}$. + // For the material model chosen this is simply zero: SymmetricTensor<4, dim> get_c_bar() const { return SymmetricTensor<4, dim>(); @@ -719,13 +683,12 @@ namespace Step44 // @sect3{Quadrature point history} -// As seen in step-18, the -// PointHistory class offers a method for storing data at the -// quadrature points. Here each quadrature point holds a pointer to a -// material description. Thus, different material models can be used in -// different regions of the domain. Among other data, we choose to store the -// Kirchhoff stress $\boldsymbol{\tau}$ and the tangent $J\mathfrak{c}$ for -// the quadrature points. 
+// As seen in step-18, the PointHistory class offers a method +// for storing data at the quadrature points. Here each quadrature point +// holds a pointer to a material description. Thus, different material models +// can be used in different regions of the domain. Among other data, we +// choose to store the Kirchhoff stress $\boldsymbol{\tau}$ and the tangent +// $J\mathfrak{c}$ for the quadrature points. template class PointHistory { @@ -746,16 +709,11 @@ namespace Step44 material = NULL; } - // The first function is used to create - // a material object and to initialize - // all tensors correctly: - // The second one updates the stored - // values and stresses based on the - // current deformation measure - // $\textrm{Grad}\mathbf{u}_{\textrm{n}}$, - // pressure $\widetilde{p}$ and - // dilation $\widetilde{J}$ field - // values. + // The first function is used to create a material object and to + // initialize all tensors correctly: The second one updates the stored + // values and stresses based on the current deformation measure + // $\textrm{Grad}\mathbf{u}_{\textrm{n}}$, pressure $\widetilde{p}$ and + // dilation $\widetilde{J}$ field values. void setup_lqp (const Parameters::AllParameters ¶meters) { material = new Material_Compressible_Neo_Hook_Three_Field(parameters.mu, @@ -763,35 +721,20 @@ namespace Step44 update_values(Tensor<2, dim>(), 0.0, 1.0); } - // To this end, we calculate the - // deformation gradient $\mathbf{F}$ - // from the displacement gradient - // $\textrm{Grad}\ \mathbf{u}$, i.e. - // $\mathbf{F}(\mathbf{u}) = \mathbf{I} - // + \textrm{Grad}\ \mathbf{u}$ and - // then let the material model - // associated with this quadrature - // point update itself. When computing - // the deformation gradient, we have to - // take care with which data types we - // compare the sum $\mathbf{I} + - // \textrm{Grad}\ \mathbf{u}$: Since - // $I$ has data type SymmetricTensor, - // just writing I + - // Grad_u_n would convert the - // second argument to a symmetric - // tensor, perform the sum, and then - // cast the result to a Tensor (i.e., - // the type of a possibly non-symmetric - // tensor). However, since - // Grad_u_n is - // nonsymmetric in general, the - // conversion to SymmetricTensor will - // fail. We can avoid this back and - // forth by converting $I$ to Tensor - // first, and then performing the - // addition as between non-symmetric - // tensors: + // To this end, we calculate the deformation gradient $\mathbf{F}$ from + // the displacement gradient $\textrm{Grad}\ \mathbf{u}$, i.e. + // $\mathbf{F}(\mathbf{u}) = \mathbf{I} + \textrm{Grad}\ \mathbf{u}$ and + // then let the material model associated with this quadrature point + // update itself. When computing the deformation gradient, we have to take + // care with which data types we compare the sum $\mathbf{I} + + // \textrm{Grad}\ \mathbf{u}$: Since $I$ has data type SymmetricTensor, + // just writing I + Grad_u_n would convert the second + // argument to a symmetric tensor, perform the sum, and then cast the + // result to a Tensor (i.e., the type of a possibly non-symmetric + // tensor). However, since Grad_u_n is nonsymmetric in + // general, the conversion to SymmetricTensor will fail. 
We can avoid this + // back and forth by converting $I$ to Tensor first, and then performing + // the addition as between non-symmetric tensors: void update_values (const Tensor<2, dim> &Grad_u_n, const double p_tilde, const double J_tilde) @@ -801,16 +744,12 @@ namespace Step44 Grad_u_n); material->update_material_data(F, p_tilde, J_tilde); - // The material has been updated so - // we now calculate the Kirchhoff - // stress $\mathbf{\tau}$, the - // tangent $J\mathfrak{c}$ - // and the first and second derivatives - // of the volumetric free energy. + // The material has been updated so we now calculate the Kirchhoff + // stress $\mathbf{\tau}$, the tangent $J\mathfrak{c}$ and the first and + // second derivatives of the volumetric free energy. // - // We also store the inverse of - // the deformation gradient since - // we frequently use it: + // We also store the inverse of the deformation gradient since we + // frequently use it: F_inv = invert(F); tau = material->get_tau(); Jc = material->get_Jc(); @@ -819,9 +758,8 @@ namespace Step44 } - // We offer an interface to retrieve - // certain data. Here are the - // kinematic variables: + // We offer an interface to retrieve certain data. Here are the kinematic + // variables: double get_J_tilde() const { return material->get_J_tilde(); @@ -837,10 +775,8 @@ namespace Step44 return F_inv; } - // ...and the kinetic variables. These - // are used in the material and global - // tangent matrix and residual assembly - // operations: + // ...and the kinetic variables. These are used in the material and + // global tangent matrix and residual assembly operations: double get_p_tilde() const { return material->get_p_tilde(); @@ -861,27 +797,22 @@ namespace Step44 return d2Psi_vol_dJ2; } - // and finally the tangent + // And finally the tangent: const SymmetricTensor<4, dim> &get_Jc() const { return Jc; } - // In terms of member functions, this - // class stores for the quadrature - // point it represents a copy of a - // material type in case different - // materials are used in different - // regions of the domain, as well as - // the inverse of the deformation - // gradient... + // In terms of member functions, this class stores for the quadrature + // point it represents a copy of a material type in case different + // materials are used in different regions of the domain, as well as the + // inverse of the deformation gradient... private: Material_Compressible_Neo_Hook_Three_Field *material; Tensor<2, dim> F_inv; - // ... and stress-type variables along - // with the tangent $J\mathfrak{c}$: + // ... and stress-type variables along with the tangent $J\mathfrak{c}$: SymmetricTensor<2, dim> tau; double d2Psi_vol_dJ2; double dPsi_vol_dJ; @@ -910,18 +841,12 @@ namespace Step44 private: - // In the private section of this - // class, we first forward declare a - // number of objects that are used in - // parallelizing work using the - // WorkStream object (see the @ref - // threads module for more information - // on this). + // In the private section of this class, we first forward declare a number + // of objects that are used in parallelizing work using the WorkStream + // object (see the @ref threads module for more information on this). 
// - // We declare such structures for the - // computation of tangent (stiffness) - // matrix, right hand side, static - // condensation, and for updating + // We declare such structures for the computation of tangent (stiffness) + // matrix, right hand side, static condensation, and for updating // quadrature points: struct PerTaskData_K; struct ScratchData_K; @@ -935,29 +860,23 @@ namespace Step44 struct PerTaskData_UQPH; struct ScratchData_UQPH; - // We start the collection of member - // functions with one that builds the + // We start the collection of member functions with one that builds the // grid: void make_grid(); - // Set up the finite element system to - // be solved: + // Set up the finite element system to be solved: void system_setup(); void determine_component_extractors(); - // Several functions to assemble the - // system and right hand side matrices - // using multi-threading. Each of them - // comes as a wrapper function, one - // that is executed to do the work in - // the WorkStream model on one cell, - // and one that copies the work done on - // this one cell into the global object - // that represents it: + // Several functions to assemble the system and right hand side matrices + // using multi-threading. Each of them comes as a wrapper function, one + // that is executed to do the work in the WorkStream model on one cell, + // and one that copies the work done on this one cell into the global + // object that represents it: void assemble_system_tangent(); @@ -991,15 +910,12 @@ namespace Step44 void copy_local_to_global_sc(const PerTaskData_SC &data); - // Apply Dirichlet boundary conditions on - // the displacement field + // Apply Dirichlet boundary conditions on the displacement field void make_constraints(const int &it_nr); - // Create and update the quadrature - // points. Here, no data needs to be - // copied into a global object, so the - // copy_local_to_global function is + // Create and update the quadrature points. Here, no data needs to be + // copied into a global object, so the copy_local_to_global function is // empty: void setup_qph(); @@ -1016,10 +932,8 @@ namespace Step44 copy_local_to_global_UQPH(const PerTaskData_UQPH &data) {} - // Solve for the displacement using a - // Newton-Raphson method. We break this - // function into the nonlinear loop and - // the function that solves the + // Solve for the displacement using a Newton-Raphson method. We break this + // function into the nonlinear loop and the function that solves the // linearized Newton-Raphson step: void solve_nonlinear_timestep(BlockVector &solution_delta); @@ -1027,48 +941,37 @@ namespace Step44 std::pair solve_linear_system(BlockVector &newton_update); - // Solution retrieval as well as - // post-processing and writing data to - // file: + // Solution retrieval as well as post-processing and writing data to file: BlockVector get_total_solution(const BlockVector &solution_delta) const; void output_results() const; - // Finally, some member variables that - // describe the current state: A - // collection of the parameters used to - // describe the problem setup... + // Finally, some member variables that describe the current state: A + // collection of the parameters used to describe the problem setup... Parameters::AllParameters parameters; - // ...the volume of the reference and - // current configurations... + // ...the volume of the reference and current configurations... 
double vol_reference; double vol_current; - // ...and description of the geometry on which - // the problem is solved: + // ...and description of the geometry on which the problem is solved: Triangulation triangulation; - // Also, keep track of the current time and the - // time spent evaluating certain - // functions + // Also, keep track of the current time and the time spent evaluating + // certain functions Time time; TimerOutput timer; - // A storage object for quadrature point - // information. See step-18 for more on - // this: + // A storage object for quadrature point information. See step-18 for + // more on this: std::vector > quadrature_point_history; - // A description of the finite-element - // system including the displacement - // polynomial degree, the - // degree-of-freedom handler, number of - // dof's per cell and the extractor - // objects used to retrieve information - // from the solution vectors: + // A description of the finite-element system including the displacement + // polynomial degree, the degree-of-freedom handler, number of dof's per + // cell and the extractor objects used to retrieve information from the + // solution vectors: const unsigned int degree; const FESystem fe; DoFHandler dof_handler_ref; @@ -1077,12 +980,9 @@ namespace Step44 const FEValuesExtractors::Scalar p_fe; const FEValuesExtractors::Scalar J_fe; - // Description of how the block-system is - // arranged. There are 3 blocks, the first - // contains a vector DOF $\mathbf{u}$ - // while the other two describe scalar - // DOFs, $\widetilde{p}$ and - // $\widetilde{J}$. + // Description of how the block-system is arranged. There are 3 blocks, + // the first contains a vector DOF $\mathbf{u}$ while the other two + // describe scalar DOFs, $\widetilde{p}$ and $\widetilde{J}$. static const unsigned int n_blocks = 3; static const unsigned int n_components = dim + 2; static const unsigned int first_u_component = 0; @@ -1101,30 +1001,24 @@ namespace Step44 std::vector element_indices_p; std::vector element_indices_J; - // Rules for Gauss-quadrature on both the - // cell and faces. The number of - // quadrature points on both cells and - // faces is recorded. + // Rules for Gauss-quadrature on both the cell and faces. The number of + // quadrature points on both cells and faces is recorded. const QGauss qf_cell; const QGauss qf_face; const unsigned int n_q_points; const unsigned int n_q_points_f; - // Objects that store the converged - // solution and right-hand side vectors, - // as well as the tangent matrix. There - // is a ConstraintMatrix object used to - // keep track of constraints. We make - // use of a sparsity pattern designed for - // a block system. + // Objects that store the converged solution and right-hand side vectors, + // as well as the tangent matrix. There is a ConstraintMatrix object used + // to keep track of constraints. We make use of a sparsity pattern + // designed for a block system. ConstraintMatrix constraints; BlockSparsityPattern sparsity_pattern; BlockSparseMatrix tangent_matrix; BlockVector system_rhs; BlockVector solution_n; - // Then define a number of variables to - // store norms and update norms and + // Then define a number of variables to store norms and update norms and // normalisation factors. struct Errors { @@ -1169,8 +1063,7 @@ namespace Step44 std::pair get_error_dilation(); - // Print information to screen - // in a pleasing way... + // Print information to screen in a pleasing way... 
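Relating to the norms and normalisation factors mentioned a few lines further up: the idea is simply to divide the current error measures by those recorded at the first Newton iteration so that convergence can be judged in relative terms. A minimal sketch, with the field names assumed rather than taken from the program:

    // Error measures for the Newton iteration; normalise() makes them relative
    // to the values stored for the first iteration.
    struct Errors
    {
      Errors() : norm(1.0), u(1.0), p(1.0), J(1.0) {}

      void normalise(const Errors &rhs)
      {
        if (rhs.norm != 0.0) norm /= rhs.norm;
        if (rhs.u != 0.0)    u    /= rhs.u;
        if (rhs.p != 0.0)    p    /= rhs.p;
        if (rhs.J != 0.0)    J    /= rhs.J;
      }

      double norm, u, p, J;
    };

The screen output helpers announced in the previous comment come next.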
static void print_conv_header(); @@ -1182,8 +1075,8 @@ namespace Step44 // @sect3{Implementation of the Solid class} // @sect4{Public interface} -// We initialise the Solid class using data extracted -// from the parameter file. + +// We initialise the Solid class using data extracted from the parameter file. template Solid::Solid(const std::string &input_file) : @@ -1194,28 +1087,14 @@ namespace Step44 TimerOutput::summary, TimerOutput::wall_times), degree(parameters.poly_degree), - // The Finite Element - // System is composed of - // dim continuous - // displacement DOFs, and - // discontinuous pressure - // and dilatation DOFs. In - // an attempt to satisfy - // the Babuska-Brezzi or LBB stability - // conditions (see Hughes (2000)), we - // setup a $Q_n \times - // DGPM_{n-1} \times DGPM_{n-1}$ - // system. $Q_2 \times DGPM_1 - // \times DGPM_1$ elements - // satisfy this condition, - // while $Q_1 \times DGPM_0 - // \times DGPM_0$ elements do - // not. However, it has - // been shown that the - // latter demonstrate good - // convergence - // characteristics - // nonetheless. + // The Finite Element System is composed of dim continuous displacement + // DOFs, and discontinuous pressure and dilatation DOFs. In an attempt to + // satisfy the Babuska-Brezzi or LBB stability conditions (see Hughes + // (2000)), we setup a $Q_n \times DGPM_{n-1} \times DGPM_{n-1}$ + // system. $Q_2 \times DGPM_1 \times DGPM_1$ elements satisfy this + // condition, while $Q_1 \times DGPM_0 \times DGPM_0$ elements do + // not. However, it has been shown that the latter demonstrate good + // convergence characteristics nonetheless. fe(FE_Q(parameters.poly_degree), dim, // displacement FE_DGPMonomial(parameters.poly_degree - 1), 1, // pressure FE_DGPMonomial(parameters.poly_degree - 1), 1), // dilatation @@ -1288,30 +1167,23 @@ namespace Step44 output_results(); time.increment(); - // We then declare the incremental - // solution update $\varDelta - // \mathbf{\Xi}:= \{\varDelta - // \mathbf{u},\varDelta \widetilde{p}, - // \varDelta \widetilde{J} \}$ and start - // the loop over the time domain. + // We then declare the incremental solution update $\varDelta + // \mathbf{\Xi}:= \{\varDelta \mathbf{u},\varDelta \widetilde{p}, + // \varDelta \widetilde{J} \}$ and start the loop over the time domain. // - // At the beginning, we reset the solution update - // for this time step... + // At the beginning, we reset the solution update for this time step... BlockVector solution_delta(dofs_per_block); while (time.current() < time.end()) { solution_delta = 0.0; - // ...solve the current time step and - // update total solution vector - // $\mathbf{\Xi}_{\textrm{n}} = - // \mathbf{\Xi}_{\textrm{n-1}} + + // ...solve the current time step and update total solution vector + // $\mathbf{\Xi}_{\textrm{n}} = \mathbf{\Xi}_{\textrm{n-1}} + // \varDelta \mathbf{\Xi}$... solve_nonlinear_timestep(solution_delta); solution_n += solution_delta; - // ...and plot the results before - // moving on happily to the next time + // ...and plot the results before moving on happily to the next time // step: output_results(); time.increment(); @@ -1331,8 +1203,8 @@ namespace Step44 // using TBB. Our main tool for this is the WorkStream class (see the @ref // threads module for more information). -// Firstly we deal with the tangent matrix assembly structures. -// The PerTaskData object stores local contributions. +// Firstly we deal with the tangent matrix assembly structures. The +// PerTaskData object stores local contributions. 
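The split into PerTaskData and ScratchData objects is the usual WorkStream idiom: scratch space that is expensive to set up and can be reused while working on one cell, plus a small result object that a single thread later copies into the global system. A schematic, library-free illustration of what such a pair might hold (the members shown are placeholders, not the actual ones declared below):

    #include <algorithm>
    #include <vector>

    // Result of the work on one cell: a local matrix (stored row by row) and
    // the global indices its rows and columns correspond to.
    struct PerTaskData
    {
      std::vector<double>       cell_matrix;
      std::vector<unsigned int> local_dof_indices;

      void reset()
      {
        std::fill(cell_matrix.begin(), cell_matrix.end(), 0.0);
      }
    };

    // Temporary data (shape function values, gradients, ...) that is reused
    // from cell to cell rather than re-allocated.
    struct ScratchData
    {
      std::vector<double> shape_values;
    };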
template struct Solid::PerTaskData_K { @@ -1411,11 +1283,9 @@ namespace Step44 }; -// Next, the same approach is used for the -// right-hand side assembly. -// The PerTaskData object again stores local contributions -// and the ScratchData object the shape function object -// and precomputed values vector: +// Next, the same approach is used for the right-hand side assembly. The +// PerTaskData object again stores local contributions and the ScratchData +// object the shape function object and precomputed values vector: template struct Solid::PerTaskData_RHS { @@ -1646,16 +1516,11 @@ namespace Step44 vol_current = vol_reference; std::cout << "Grid:\n\t Reference volume: " << vol_reference << std::endl; - // Since we wish to apply a Neumann BC to - // a patch on the top surface, we must - // find the cell faces in this part of - // the domain and mark them with a - // distinct boundary ID number. The - // faces we are looking for are on the +y - // surface and will get boundary ID 6 - // (zero through five are already used - // when creating the six faces of the - // cube domain): + // Since we wish to apply a Neumann BC to a patch on the top surface, we + // must find the cell faces in this part of the domain and mark them with + // a distinct boundary ID number. The faces we are looking for are on the + // +y surface and will get boundary ID 6 (zero through five are already + // used when creating the six faces of the cube domain): typename Triangulation::active_cell_iterator cell = triangulation.begin_active(), endc = triangulation.end(); for (; cell != endc; ++cell) @@ -1686,10 +1551,8 @@ namespace Step44 block_component[p_component] = p_dof; // Pressure block_component[J_component] = J_dof; // Dilatation - // The DOF handler is then initialised and we - // renumber the grid in an efficient - // manner. We also record the number of - // DOF's per block. + // The DOF handler is then initialised and we renumber the grid in an + // efficient manner. We also record the number of DOF's per block. dof_handler_ref.distribute_dofs(fe); DoFRenumbering::Cuthill_McKee(dof_handler_ref); DoFRenumbering::component_wise(dof_handler_ref, block_component); @@ -1882,9 +1745,8 @@ namespace Step44 PerTaskData_UQPH per_task_data_UQPH; ScratchData_UQPH scratch_data_UQPH(fe, qf_cell, uf_UQPH, solution_total); - // We then pass them and the one-cell update - // function to the WorkStream to be - // processed: + // We then pass them and the one-cell update function to the WorkStream to + // be processed: WorkStream::run(dof_handler_ref.begin_active(), dof_handler_ref.end(), *this, @@ -1920,12 +1782,10 @@ namespace Step44 scratch.reset(); - // We first need to find the values and - // gradients at quadrature points inside - // the current cell and then we update - // each local QP using the displacement - // gradient and total pressure and - // dilatation solution values: + // We first need to find the values and gradients at quadrature points + // inside the current cell and then we update each local QP using the + // displacement gradient and total pressure and dilatation solution + // values: scratch.fe_values_ref.reinit(cell); scratch.fe_values_ref[u_fe].get_function_gradients(scratch.solution_total, scratch.solution_grads_u_total); @@ -1964,26 +1824,17 @@ namespace Step44 print_conv_header(); - // We now perform a number of Newton - // iterations to iteratively solve the - // nonlinear problem. 
Since the problem - // is fully nonlinear and we are using a - // full Newton method, the data stored in - // the tangent matrix and right-hand side - // vector is not reusable and must be - // cleared at each Newton step. We then - // initially build the right-hand side - // vector to check for convergence (and - // store this value in the first - // iteration). The unconstrained DOFs - // of the rhs vector hold the - // out-of-balance forces. The building is - // done before assembling the system - // matrix as the latter is an expensive - // operation and we can potentially avoid - // an extra assembly process by not - // assembling the tangent matrix when - // convergence is attained. + // We now perform a number of Newton iterations to iteratively solve the + // nonlinear problem. Since the problem is fully nonlinear and we are + // using a full Newton method, the data stored in the tangent matrix and + // right-hand side vector is not reusable and must be cleared at each + // Newton step. We then initially build the right-hand side vector to + // check for convergence (and store this value in the first iteration). + // The unconstrained DOFs of the rhs vector hold the out-of-balance + // forces. The building is done before assembling the system matrix as the + // latter is an expensive operation and we can potentially avoid an extra + // assembly process by not assembling the tangent matrix when convergence + // is attained. unsigned int newton_iteration = 0; for (; newton_iteration < parameters.max_iterations_NR; ++newton_iteration) @@ -1999,9 +1850,8 @@ namespace Step44 if (newton_iteration == 0) error_residual_0 = error_residual; - // We can now determine the - // normalised residual error and - // check for solution convergence: + // We can now determine the normalised residual error and check for + // solution convergence: error_residual_norm = error_residual; error_residual_norm.normalise(error_residual_0); @@ -2014,12 +1864,9 @@ namespace Step44 break; } - // If we have decided that we want to - // continue with the iteration, we - // assemble the tangent, make and - // impose the Dirichlet constraints, - // and do the solve of the linearised - // system: + // If we have decided that we want to continue with the iteration, we + // assemble the tangent, make and impose the Dirichlet constraints, + // and do the solve of the linearised system: assemble_system_tangent(); make_constraints(newton_iteration); constraints.condense(tangent_matrix, system_rhs); @@ -2031,15 +1878,10 @@ namespace Step44 if (newton_iteration == 0) error_update_0 = error_update; - // We can now determine the - // normalised Newton update error, - // and perform the actual update of - // the solution increment for the - // current time step, update all - // quadrature point information - // pertaining to this new - // displacement and stress state and - // continue iterating: + // We can now determine the normalised Newton update error, and + // perform the actual update of the solution increment for the current + // time step, update all quadrature point information pertaining to + // this new displacement and stress state and continue iterating: error_update_norm = error_update; error_update_norm.normalise(error_update_0); @@ -2056,21 +1898,14 @@ namespace Step44 << " " << std::endl; } - // At the end, if it turns out that we - // have in fact done more iterations than - // the parameter file allowed, we raise - // an exception that can be caught in the - // main() function. 
The call - // AssertThrow(condition, - // exc_object) is in essence - // equivalent to if (!cond) throw - // exc_object; but the former form - // fills certain fields in the exception - // object that identify the location - // (filename and line number) where the - // exception was raised to make it - // simpler to identify where the problem - // happened. + // At the end, if it turns out that we have in fact done more iterations + // than the parameter file allowed, we raise an exception that can be + // caught in the main() function. The call AssertThrow(condition, + // exc_object) is in essence equivalent to if (!cond) throw + // exc_object; but the former form fills certain fields in the + // exception object that identify the location (filename and line number) + // where the exception was raised to make it simpler to identify where the + // problem happened. AssertThrow (newton_iteration <= parameters.max_iterations_NR, ExcMessage("No convergence in nonlinear solver!")); } @@ -2324,25 +2159,19 @@ namespace Step44 } } - // Now we build the local cell stiffness - // matrix. Since the global and local - // system matrices are symmetric, we can - // exploit this property by building only - // the lower half of the local matrix and - // copying the values to the upper half. - // So we only assemble half of the - // $\mathsf{\mathbf{k}}_{uu}$, - // $\mathsf{\mathbf{k}}_{\widetilde{p} \widetilde{p}} = \mathbf{0}$, - // $\mathsf{\mathbf{k}}_{\widetilde{J} \widetilde{J}}$ - // blocks, while the whole $\mathsf{\mathbf{k}}_{\widetilde{p} \widetilde{J}}$, + // Now we build the local cell stiffness matrix. Since the global and + // local system matrices are symmetric, we can exploit this property by + // building only the lower half of the local matrix and copying the values + // to the upper half. So we only assemble half of the + // $\mathsf{\mathbf{k}}_{uu}$, $\mathsf{\mathbf{k}}_{\widetilde{p} + // \widetilde{p}} = \mathbf{0}$, $\mathsf{\mathbf{k}}_{\widetilde{J} + // \widetilde{J}}$ blocks, while the whole + // $\mathsf{\mathbf{k}}_{\widetilde{p} \widetilde{J}}$, // $\mathsf{\mathbf{k}}_{\mathbf{u} \widetilde{J}} = \mathbf{0}$, - // $\mathsf{\mathbf{k}}_{\mathbf{u} \widetilde{p}}$ - // blocks are built. + // $\mathsf{\mathbf{k}}_{\mathbf{u} \widetilde{p}}$ blocks are built. // - // In doing so, we first extract some - // configuration dependent variables from - // our QPH history objects for the - // current quadrature point. + // In doing so, we first extract some configuration dependent variables + // from our QPH history objects for the current quadrature point. for (unsigned int q_point = 0; q_point < n_q_points; ++q_point) { const Tensor<2, dim> tau = lqph[q_point].get_tau(); @@ -2350,8 +2179,8 @@ namespace Step44 const double d2Psi_vol_dJ2 = lqph[q_point].get_d2Psi_vol_dJ2(); const double det_F = lqph[q_point].get_det_F(); - // Next we define some aliases to make - // the assembly process easier to follow + // Next we define some aliases to make the assembly process easier to + // follow const std::vector &N = scratch.Nx[q_point]; const std::vector > @@ -2371,11 +2200,9 @@ namespace Step44 const unsigned int j_group = fe.system_to_base_index(j).first.first; // This is the $\mathsf{\mathbf{k}}_{\mathbf{u} \mathbf{u}}$ - // contribution. It comprises a - // material contribution, and a - // geometrical stress contribution - // which is only added along the - // local matrix diagonals: + // contribution. 
It comprises a material contribution, and a + // geometrical stress contribution which is only added along + // the local matrix diagonals: if ((i_group == j_group) && (i_group == u_dof)) { data.cell_matrix(i, j) += symm_grad_Nx[i] * Jc // The material contribution: @@ -2406,8 +2233,7 @@ namespace Step44 } } - // Finally, we need to copy the lower - // half of the local matrix into the + // Finally, we need to copy the lower half of the local matrix into the // upper half: for (unsigned int i = 0; i < dofs_per_cell; ++i) for (unsigned int j = i + 1; j < dofs_per_cell; ++j) @@ -2530,11 +2356,9 @@ namespace Step44 } } - // Next we assemble the Neumann - // contribution. We first check to see it - // the cell face exists on a boundary on - // which a traction is applied and add the - // contribution if this is the case. + // Next we assemble the Neumann contribution. We first check to see it the + // cell face exists on a boundary on which a traction is applied and add + // the contribution if this is the case. for (unsigned int face = 0; face < GeometryInfo::faces_per_cell; ++face) if (cell->face(face)->at_boundary() == true @@ -2548,32 +2372,19 @@ namespace Step44 const Tensor<1, dim> &N = scratch.fe_face_values_ref.normal_vector(f_q_point); - // Using the face normal at - // this quadrature point - // we specify - // the traction in reference - // configuration. For this - // problem, a defined pressure - // is applied in the reference - // configuration. The - // direction of the applied - // traction is assumed not to - // evolve with the deformation - // of the domain. The traction - // is defined using the first - // Piola-Kirchhoff stress is - // simply - // $\mathbf{t} = \mathbf{P}\mathbf{N} - // = [p_0 \mathbf{I}] \mathbf{N} = p_0 \mathbf{N}$ - // We use the - // time variable to linearly - // ramp up the pressure load. + // Using the face normal at this quadrature point we specify the + // traction in reference configuration. For this problem, a + // defined pressure is applied in the reference configuration. + // The direction of the applied traction is assumed not to + // evolve with the deformation of the domain. The traction is + // defined using the first Piola-Kirchhoff stress is simply + // $\mathbf{t} = \mathbf{P}\mathbf{N} = [p_0 \mathbf{I}] + // \mathbf{N} = p_0 \mathbf{N}$ We use the time variable to + // linearly ramp up the pressure load. // - // Note that the contributions - // to the right hand side - // vector we compute here only - // exist in the displacement - // components of the vector. + // Note that the contributions to the right hand side vector we + // compute here only exist in the displacement components of the + // vector. static const double p0 = -4.0 / (parameters.scale * parameters.scale); @@ -2616,49 +2427,31 @@ namespace Step44 { std::cout << " CST " << std::flush; - // Since the constraints are different at - // different Newton iterations, we need - // to clear the constraints matrix and - // completely rebuild it. However, after - // the first iteration, the constraints - // remain the same and we can simply skip - // the rebuilding step if we do not clear - // it. + // Since the constraints are different at different Newton iterations, we + // need to clear the constraints matrix and completely rebuild + // it. However, after the first iteration, the constraints remain the same + // and we can simply skip the rebuilding step if we do not clear it. 
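Returning for a moment to the traction term above: the linear ramping of the load with the time variable amounts to a single scaling. A sketch under the assumption that the load grows from zero at the start of the simulation to its full value p0 at the end time (the actual magnitude and ramping rule are those set in the assembly routine above):

    // Scale a reference pressure p0 linearly from 0 at t = 0 to p0 at t = t_end.
    double ramped_pressure(const double p0, const double t, const double t_end)
    {
      return p0 * (t / t_end);
    }

The early return announced in the previous comment follows directly below.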
if (it_nr > 1) return; constraints.clear(); const bool apply_dirichlet_bc = (it_nr == 0); - // The boundary conditions for the - // indentation problem are as follows: On - // the -x, -y and -z faces (ID's 0,2,4) we - // set up a symmetry condition to allow - // only planar movement while the +x and +y - // faces (ID's 1,3) are traction free. In - // this contrived problem, part of the +z - // face (ID 5) is set to have no motion in - // the x- and y-component. Finally, as - // described earlier, the other part of the - // +z face has an the applied pressure but - // is also constrained in the x- and - // y-directions. + // The boundary conditions for the indentation problem are as follows: On + // the -x, -y and -z faces (ID's 0,2,4) we set up a symmetry condition to + // allow only planar movement while the +x and +y faces (ID's 1,3) are + // traction free. In this contrived problem, part of the +z face (ID 5) is + // set to have no motion in the x- and y-component. Finally, as described + // earlier, the other part of the +z face has an the applied pressure but + // is also constrained in the x- and y-directions. // - // In the following, we will have to tell - // the function interpolation boundary - // values which components of the - // solution vector should be constrained - // (i.e., whether it's the x-, y-, - // z-displacements or combinations - // thereof). This is done using - // ComponentMask objects (see @ref - // GlossComponentMask) which we can get - // from the finite element if we provide - // it with an extractor object for the - // component we wish to select. To this - // end we first set up such extractor - // objects and later use it when - // generating the relevant component - // masks: + // In the following, we will have to tell the function interpolation + // boundary values which components of the solution vector should be + // constrained (i.e., whether it's the x-, y-, z-displacements or + // combinations thereof). This is done using ComponentMask objects (see + // @ref GlossComponentMask) which we can get from the finite element if we + // provide it with an extractor object for the component we wish to + // select. To this end we first set up such extractor objects and later + // use it when generating the relevant component masks: const FEValuesExtractors::Scalar x_displacement(0); const FEValuesExtractors::Scalar y_displacement(1); const FEValuesExtractors::Scalar z_displacement(2); @@ -2822,8 +2615,9 @@ namespace Step44 unsigned int lin_it = 0; double lin_res = 0.0; - // In the first step of this function, we solve for the incremental displacement $d\mathbf{u}$. - // To this end, we perform static condensation to make + // In the first step of this function, we solve for the incremental + // displacement $d\mathbf{u}$. To this end, we perform static + // condensation to make // $\mathbf{\mathsf{K}}_{\textrm{con}} // = \bigl[ \mathbf{\mathsf{K}}_{uu} + \overline{\overline{\mathbf{\mathsf{K}}}}~ \bigr]$ // and put @@ -2920,15 +2714,11 @@ namespace Step44 GrowingVectorMemory > GVM; SolverCG > solver_CG(solver_control, GVM); - // We've chosen by default a SSOR - // preconditioner as it appears to - // provide the fastest solver - // convergence characteristics for this - // problem on a single-thread machine. - // However, for multicore - // computing, the Jacobi preconditioner - // which is multithreaded may converge - // quicker for larger linear systems. 
+ // We've chosen by default a SSOR preconditioner as it appears to + // provide the fastest solver convergence characteristics for this + // problem on a single-thread machine. However, for multicore + // computing, the Jacobi preconditioner which is multithreaded may + // converge quicker for larger linear systems. PreconditionSelector, Vector > preconditioner (parameters.preconditioner_type, parameters.preconditioner_relaxation); @@ -2960,8 +2750,7 @@ namespace Step44 timer.leave_subsection(); } - // Now that we have the displacement - // update, distribute the constraints + // Now that we have the displacement update, distribute the constraints // back to the Newton update: constraints.distribute(newton_update); @@ -3144,27 +2933,18 @@ namespace Step44 scratch.reset(); cell->get_dof_indices(data.local_dof_indices); - // We now extract the contribution of - // the dofs associated with the current cell - // to the global stiffness matrix. - // The discontinuous nature of the $\widetilde{p}$ - // and $\widetilde{J}$ - // interpolations mean that their is no - // coupling of the local contributions at the - // global level. This is not the case with the u dof. - // In other words, - // $\mathsf{\mathbf{k}}_{\widetilde{J} \widetilde{p}}$, - // $\mathsf{\mathbf{k}}_{\widetilde{p} \widetilde{p}}$ - // and + // We now extract the contribution of the dofs associated with the current + // cell to the global stiffness matrix. The discontinuous nature of the + // $\widetilde{p}$ and $\widetilde{J}$ interpolations mean that their is + // no coupling of the local contributions at the global level. This is not + // the case with the u dof. In other words, // $\mathsf{\mathbf{k}}_{\widetilde{J} \widetilde{p}}$, - // when extracted - // from the global stiffness matrix are the element - // contributions. - // This is not the case for - // $\mathsf{\mathbf{k}}_{\mathbf{u} \mathbf{u}}$ + // $\mathsf{\mathbf{k}}_{\widetilde{p} \widetilde{p}}$ and + // $\mathsf{\mathbf{k}}_{\widetilde{J} \widetilde{p}}$, when extracted + // from the global stiffness matrix are the element contributions. This + // is not the case for $\mathsf{\mathbf{k}}_{\mathbf{u} \mathbf{u}}$ // - // Note: a lower-case symbol is used to denote - // element stiffness matrices. + // Note: A lower-case symbol is used to denote element stiffness matrices. // Currently the matrix corresponding to // the dof associated with the current element @@ -3204,24 +2984,17 @@ namespace Step44 // $\mathsf{\mathbf{K}}_{\widetilde{p} \widetilde{J}}$ // and // $\mathsf{\mathbf{K}}_{\widetilde{J} \widetilde{p}}$ - // sub-blocks. So - // if we are to modify them, we must - // account for the data that is already - // there (i.e. simply add to it or remove - // it if necessary). Since the - // copy_local_to_global operation is a "+=" - // operation, we need to take this into - // account + // sub-blocks. So if we are to modify them, we must account for the data + // that is already there (i.e. simply add to it or remove it if + // necessary). Since the copy_local_to_global operation is a "+=" + // operation, we need to take this into account // - // For the $\mathsf{\mathbf{K}}_{uu}$ block in particular, this - // means that contributions have been added - // from the surrounding cells, so we need - // to be careful when we manipulate this - // block. 
We can't just erase the + // For the $\mathsf{\mathbf{K}}_{uu}$ block in particular, this means that + // contributions have been added from the surrounding cells, so we need to + // be careful when we manipulate this block. We can't just erase the // sub-blocks. // - // This is the strategy we will employ to - // get the sub-blocks we want: + // This is the strategy we will employ to get the sub-blocks we want: // // - $ {\mathbf{\mathsf{k}}}_{\textrm{store}}$: // Since we don't have access to $\mathsf{\mathbf{k}}_{uu}$, @@ -3272,10 +3045,8 @@ namespace Step44 element_indices_J, element_indices_J); - // To get the inverse of - // $\mathsf{\mathbf{k}}_{\widetilde{p} \widetilde{J}}$, - // we invert it - // directly. This operation is relatively + // To get the inverse of $\mathsf{\mathbf{k}}_{\widetilde{p} + // \widetilde{J}}$, we invert it directly. This operation is relatively // inexpensive since $\mathsf{\mathbf{k}}_{\widetilde{p} \widetilde{J}}$ // since block-diagonal. data.k_pJ_inv.invert(data.k_pJ); @@ -3359,22 +3130,14 @@ namespace Step44 DataOut::type_dof_data, data_component_interpretation); - // Since we are dealing with a large - // deformation problem, it would be nice - // to display the result on a displaced - // grid! The MappingQEulerian class - // linked with the DataOut class provides - // an interface through which this can be - // achieved without physically moving the - // grid points in the Triangulation - // object ourselves. We first need to - // copy the solution to a temporary - // vector and then create the Eulerian - // mapping. We also specify the - // polynomial degree to the DataOut - // object in order to produce a more - // refined output data set when higher - // order polynomials are used. + // Since we are dealing with a large deformation problem, it would be nice + // to display the result on a displaced grid! The MappingQEulerian class + // linked with the DataOut class provides an interface through which this + // can be achieved without physically moving the grid points in the + // Triangulation object ourselves. We first need to copy the solution to + // a temporary vector and then create the Eulerian mapping. We also + // specify the polynomial degree to the DataOut object in order to produce + // a more refined output data set when higher order polynomials are used. Vector soln(solution_n.size()); for (unsigned int i = 0; i < soln.size(); ++i) soln(i) = solution_n(i); diff --git a/deal.II/examples/step-45/step-45.cc b/deal.II/examples/step-45/step-45.cc index 44c679d977..6c2336fe88 100644 --- a/deal.II/examples/step-45/step-45.cc +++ b/deal.II/examples/step-45/step-45.cc @@ -12,10 +12,9 @@ // @sect3{Include files} -// The include files are already known. The -// one critical for the current program is -// the one that contains the ConstraintMatrix -// in the lac/ directory: +// The include files are already known. The one critical for the current +// program is the one that contains the ConstraintMatrix in the +// lac/ directory: #include #include @@ -49,19 +48,13 @@ namespace Step45 // @sect3{The LaplaceProblem class} - // The class LaplaceProblem is - // the main class of this problem. As - // mentioned in the introduction, it is - // fashioned after the corresponding class in - // step-3. Correspondingly, the documentation - // from that tutorial program applies here as - // well. The only new member variable is the - // constraints variables that - // will hold the constraints from the - // periodic boundary condition. 
We will - // initialize it in the - // make_periodicity_constraints() - // function which we call from + // The class LaplaceProblem is the main class of this + // problem. As mentioned in the introduction, it is fashioned after the + // corresponding class in step-3. Correspondingly, the documentation from + // that tutorial program applies here as well. The only new member variable + // is the constraints variables that will hold the constraints + // from the periodic boundary condition. We will initialize it in the + // make_periodicity_constraints() function which we call from // make_grid_and_dofs(). class LaplaceProblem { @@ -92,10 +85,8 @@ namespace Step45 // @sect3{The RightHandSide class} - // The following implements the right hand - // side function discussed in the - // introduction. Its implementation is - // obvious given what has been shown in + // The following implements the right hand side function discussed in the + // introduction. Its implementation is obvious given what has been shown in // step-4 before: class RightHandSide: public Function<2> { @@ -125,9 +116,8 @@ namespace Step45 // @sect3{Implementation of the LaplaceProblem class} - // The first part of implementing the main - // class is the constructor. It is unchanged - // from step-3 and step-4: + // The first part of implementing the main class is the constructor. It is + // unchanged from step-3 and step-4: LaplaceProblem::LaplaceProblem () : fe (1), @@ -137,24 +127,17 @@ namespace Step45 // @sect4{LaplaceProblem::make_grid_and_dofs} - // The following is the first function to be - // called in run(). It sets up - // the mesh and degrees of freedom. + // The following is the first function to be called in + // run(). It sets up the mesh and degrees of freedom. // - // We start by creating the usual square mesh - // and changing the boundary indicator on the - // parts of the boundary where we have - // Dirichlet boundary conditions (top and - // bottom, i.e. faces two and three of the - // reference cell as defined by - // GeometryInfo), so that we can distinguish - // between the parts of the boundary where - // periodic and where Dirichlet boundary - // conditions hold. We then refine the mesh a - // fixed number of times, with child faces - // inheriting the boundary indicators - // previously set on the coarse mesh from - // their parents. + // We start by creating the usual square mesh and changing the boundary + // indicator on the parts of the boundary where we have Dirichlet boundary + // conditions (top and bottom, i.e. faces two and three of the reference + // cell as defined by GeometryInfo), so that we can distinguish between the + // parts of the boundary where periodic and where Dirichlet boundary + // conditions hold. We then refine the mesh a fixed number of times, with + // child faces inheriting the boundary indicators previously set on the + // coarse mesh from their parents. 
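Since the reflowed comment above leans on the GeometryInfo face numbering ("faces two and three" being bottom and top of the reference cell), a small, self-contained snippet can be used to double-check that convention. It is purely illustrative and not part of this patch:

#include <deal.II/base/geometry_info.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>

#include <iostream>

using namespace dealii;

// Print the center of every face of a single unit-square cell. In 2d the
// output shows that faces 0/1 are the left/right (x=0, x=1) faces and
// faces 2/3 are the bottom/top (y=0, y=1) faces.
void print_face_centers ()
{
  Triangulation<2> tria;
  GridGenerator::hyper_cube (tria);

  for (unsigned int f = 0; f < GeometryInfo<2>::faces_per_cell; ++f)
    std::cout << "face " << f << ": center = "
              << tria.begin_active()->face(f)->center()
              << std::endl;
}
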
void LaplaceProblem::make_grid_and_dofs () { GridGenerator::hyper_cube (triangulation); @@ -162,9 +145,8 @@ namespace Step45 triangulation.begin_active ()->face (3)->set_boundary_indicator (1); triangulation.refine_global (5); - // The next step is to distribute the - // degrees of freedom and produce a little - // bit of graphical output: + // The next step is to distribute the degrees of freedom and produce a + // little bit of graphical output: dof_handler.distribute_dofs (fe); std::cout << "Number of active cells: " << triangulation.n_active_cells () @@ -172,31 +154,23 @@ namespace Step45 << "Degrees of freedom: " << dof_handler.n_dofs () << std::endl; - // Now it is the time for the constraints - // that come from the periodicity - // constraints. We do this in the - // following, separate function, after - // clearing any possible prior content from - // the constraints object: + // Now it is the time for the constraints that come from the periodicity + // constraints. We do this in the following, separate function, after + // clearing any possible prior content from the constraints object: constraints.clear (); make_periodicity_constraints (); - // We also incorporate the homogeneous - // Dirichlet boundary conditions on the - // upper and lower parts of the boundary - // (i.e. the ones with boundary indicator - // 1) and close the - // ConstraintMatrix object: + // We also incorporate the homogeneous Dirichlet boundary conditions on + // the upper and lower parts of the boundary (i.e. the ones with boundary + // indicator 1) and close the ConstraintMatrix object: VectorTools::interpolate_boundary_values (dof_handler, 1, ZeroFunction<2> (), constraints); constraints.close (); - // Then we create the sparsity pattern and - // the system matrix and initialize the - // solution and right-hand side - // vectors. This is again as in step-3 or - // step-6, for example: + // Then we create the sparsity pattern and the system matrix and + // initialize the solution and right-hand side vectors. This is again as + // in step-3 or step-6, for example: CompressedSparsityPattern c_sparsity_pattern (dof_handler.n_dofs(), dof_handler.n_dofs()); DoFTools::make_sparsity_pattern (dof_handler, @@ -215,49 +189,33 @@ namespace Step45 // @sect4{LaplaceProblem::make_periodicity_constraints} - // This is the function that provides the new - // material of this tutorial program. The - // general outline of the algorithm is as - // follows: we first loop over all the - // degrees of freedom on the right boundary - // and record their $y$-locations in a map - // together with their global indices. Then - // we go along the left boundary, find - // matching $y$-locations for each degree of - // freedom, and then add constraints that - // identify these matched degrees of freedom. + // This is the function that provides the new material of this tutorial + // program. The general outline of the algorithm is as follows: we first + // loop over all the degrees of freedom on the right boundary and record + // their $y$-locations in a map together with their global indices. Then we + // go along the left boundary, find matching $y$-locations for each degree + // of freedom, and then add constraints that identify these matched degrees + // of freedom. // - // In this function, we make use of the fact - // that we have a scalar element (i.e. 
the - // only valid vector component that can be - // passed to DoFAccessor::vertex_dof_index is - // zero) and that we have a $Q_1$ element for - // which all degrees of freedom live in the - // vertices of the cell. Furthermore, we have - // assumed that we are in 2d and that meshes - // were not refined adaptively — the - // latter assumption would imply that there - // may be vertices that aren't matched - // one-to-one and for which we won't be able - // to compute constraints this easily. We - // will discuss in the "outlook" part of the - // results section below other strategies to - // write the current function that can work - // in cases like this as well. + // In this function, we make use of the fact that we have a scalar element + // (i.e. the only valid vector component that can be passed to + // DoFAccessor::vertex_dof_index is zero) and that we have a $Q_1$ element + // for which all degrees of freedom live in the vertices of the + // cell. Furthermore, we have assumed that we are in 2d and that meshes were + // not refined adaptively — the latter assumption would imply that + // there may be vertices that aren't matched one-to-one and for which we + // won't be able to compute constraints this easily. We will discuss in the + // "outlook" part of the results section below other strategies to write the + // current function that can work in cases like this as well. void LaplaceProblem::make_periodicity_constraints () { - // To start with the actual implementation, - // we loop over all active cells and check - // whether the cell is located at the right - // boundary (i.e. face 1 — the one at - // the right end of the cell — is at - // the boundary). If that is so, then we - // use that for the currently used finite - // element, each degree of freedom of the - // face is located on one vertex, and store - // their $y$-coordinate along with the - // global number of this degree of freedom - // in the following map: + // To start with the actual implementation, we loop over all active cells + // and check whether the cell is located at the right boundary (i.e. face + // 1 — the one at the right end of the cell — is at the + // boundary). If that is so, then we use that for the currently used + // finite element, each degree of freedom of the face is located on one + // vertex, and store their $y$-coordinate along with the global number of + // this degree of freedom in the following map: std::map dof_locations; for (DoFHandler<2>::active_cell_iterator cell = dof_handler.begin_active (); @@ -271,79 +229,49 @@ namespace Step45 dof_locations[cell->face(1)->vertex_dof_index(1, 0)] = cell->face(1)->vertex(1)[1]; } - // Note that in the above block, we add - // vertices zero and one of the affected - // face to the map. This means that we will - // add each vertex twice, once from each of - // the two adjacent cells (unless the - // vertex is a corner of the domain). Since - // the coordinates of the vertex are the - // same both times of course, there is no - // harm: we replace one value in the map - // with itself the second time we visit an - // entry. + // Note that in the above block, we add vertices zero and one of the + // affected face to the map. This means that we will add each vertex + // twice, once from each of the two adjacent cells (unless the vertex is a + // corner of the domain). Since the coordinates of the vertex are the same + // both times of course, there is no harm: we replace one value in the map + // with itself the second time we visit an entry. 
// - // The same will be true below where we add - // the same constraint twice to the - // ConstraintMatrix — again, we will - // overwrite the constraint with itself, - // and no harm is done. - - // Now we have to find the corresponding - // degrees of freedom on the left part of - // the boundary. Therefore we loop over all - // cells again and choose the ones where - // face 0 is at the boundary: + // The same will be true below where we add the same constraint twice to + // the ConstraintMatrix — again, we will overwrite the constraint + // with itself, and no harm is done. + + // Now we have to find the corresponding degrees of freedom on the left + // part of the boundary. Therefore we loop over all cells again and choose + // the ones where face 0 is at the boundary: for (DoFHandler<2>::active_cell_iterator cell = dof_handler.begin_active (); cell != dof_handler.end (); ++cell) if (cell->at_boundary () && cell->face (0)->at_boundary ()) { - // Every degree of freedom on this - // face needs to have a corresponding - // one on the right side of the face, - // and our goal is to add a - // constraint for the one on the left - // in terms of the one on the - // right. To this end we first add a - // new line to the constraint matrix - // for this one degree of - // freedom. Then we identify it with - // the corresponding degree of - // freedom on the right part of the - // boundary by constraining the - // degree of freedom on the left with - // the one on the right times a - // weight of 1.0. + // Every degree of freedom on this face needs to have a + // corresponding one on the right side of the face, and our goal is + // to add a constraint for the one on the left in terms of the one + // on the right. To this end we first add a new line to the + // constraint matrix for this one degree of freedom. Then we + // identify it with the corresponding degree of freedom on the right + // part of the boundary by constraining the degree of freedom on the + // left with the one on the right times a weight of 1.0. // - // Consequently, we loop over the two - // vertices of each face we find and - // then loop over all the - // $y$-locations we've previously - // recorded to find which degree of - // freedom on the right boundary - // corresponds to the one we - // currently look at. Note that we - // have entered these into a map, and - // when looping over the iterators - // p of this map, - // p-@>first corresponds - // to the "key" of an entry (the - // global number of the degree of - // freedom), whereas - // p-@>second is the - // "value" (the $y$-location we have - // entered above). + // Consequently, we loop over the two vertices of each face we find + // and then loop over all the $y$-locations we've previously + // recorded to find which degree of freedom on the right boundary + // corresponds to the one we currently look at. Note that we have + // entered these into a map, and when looping over the iterators + // p of this map, p-@>first corresponds to + // the "key" of an entry (the global number of the degree of + // freedom), whereas p-@>second is the "value" (the + // $y$-location we have entered above). // - // We are quite sure here that we - // should be finding such a - // corresponding degree of - // freedom. However, sometimes stuff - // happens and so the bottom of the - // block contains an assertion that - // our assumption was indeed correct - // and that a vertex was found. 
+ // We are quite sure here that we should be finding such a + // corresponding degree of freedom. However, sometimes stuff happens + // and so the bottom of the block contains an assertion that our + // assumption was indeed correct and that a vertex was found. for (unsigned int face_vertex = 0; face_vertex<2; ++face_vertex) { constraints.add_line (cell->face(0)->vertex_dof_index (face_vertex, 0)); @@ -366,21 +294,15 @@ namespace Step45 // @sect4{LaplaceProblem::assemble_system} - // Assembling the system matrix and the - // right-hand side vector is done as in other - // tutorials before. + // Assembling the system matrix and the right-hand side vector is done as in + // other tutorials before. // - // The only difference here is that we don't - // copy elements from local contributions - // into the global matrix and later fix up - // constrained degrees of freedom, but that - // we let the ConstraintMatrix do this job in - // one swoop for us using the - // ConstraintMatrix::distribute_local_to_global - // function(). This was previously already - // demonstrated in step-16, step-22, for - // example, along with a discussion in the - // introduction of step-27. + // The only difference here is that we don't copy elements from local + // contributions into the global matrix and later fix up constrained degrees + // of freedom, but that we let the ConstraintMatrix do this job in one swoop + // for us using the ConstraintMatrix::distribute_local_to_global + // function(). This was previously already demonstrated in step-16, step-22, + // for example, along with a discussion in the introduction of step-27. void LaplaceProblem::assemble_system () { QGauss<2> quadrature_formula(2); @@ -429,14 +351,11 @@ namespace Step45 // @sect4{LaplaceProblem::solve} - // To solve the linear system of equations - // $Au=b$ we use the CG solver with an - // SSOR-preconditioner. This is, again, - // copied almost verbatim from step-6. As in - // step-6, we need to make sure that - // constrained degrees of freedom get their - // correct values after solving by calling - // the ConstraintMatrix::distribute function: + // To solve the linear system of equations $Au=b$ we use the CG solver with + // an SSOR-preconditioner. This is, again, copied almost verbatim from + // step-6. As in step-6, we need to make sure that constrained degrees of + // freedom get their correct values after solving by calling the + // ConstraintMatrix::distribute function: void LaplaceProblem::solve () { SolverControl solver_control (dof_handler.n_dofs (), 1e-12); @@ -453,9 +372,8 @@ namespace Step45 // @sect4{LaplaceProblem::output_results} - // This is another function copied from - // previous tutorial programs. It generates - // graphical output in VTK format: + // This is another function copied from previous tutorial programs. 
It + // generates graphical output in VTK format: void LaplaceProblem::output_results () { DataOut<2> data_out; @@ -473,8 +391,7 @@ namespace Step45 // @sect4{LaplaceProblem::run} - // And another function copied from previous - // programs: + // And another function copied from previous programs: void LaplaceProblem::run () { make_grid_and_dofs(); @@ -486,8 +403,8 @@ namespace Step45 // @sect3{The main function} -// And at the end we have the main function -// as usual, this time copied from step-6: +// And at the end we have the main function as usual, this time copied from +// step-6: int main () { try diff --git a/deal.II/examples/step-46/step-46.cc b/deal.II/examples/step-46/step-46.cc index 1038c97641..4acfa40208 100644 --- a/deal.II/examples/step-46/step-46.cc +++ b/deal.II/examples/step-46/step-46.cc @@ -12,12 +12,10 @@ // @sect3{Include files} -// The include files for this program are the -// same as for many others before. The only -// new one is the one that declares -// FE_Nothing as discussed in the -// introduction. The ones in the hp directory -// have already been discussed in step-27. +// The include files for this program are the same as for many others +// before. The only new one is the one that declares FE_Nothing as discussed +// in the introduction. The ones in the hp directory have already been +// discussed in step-27. #include #include @@ -62,38 +60,26 @@ namespace Step46 // @sect3{The FluidStructureProblem class template} - // This is the main class. It is, if you - // want, a combination of step-8 and step-22 - // in that it has member variables that - // either address the global problem (the - // Triangulation and hp::DoFHandler objects, - // as well as the hp::FECollection and - // various linear algebra objects) or that - // pertain to either the elasticity or Stokes - // sub-problems. The general structure of the - // class, however, is like that of most of - // the other programs implementing stationary - // problems. + // This is the main class. It is, if you want, a combination of step-8 and + // step-22 in that it has member variables that either address the global + // problem (the Triangulation and hp::DoFHandler objects, as well as the + // hp::FECollection and various linear algebra objects) or that pertain to + // either the elasticity or Stokes sub-problems. The general structure of + // the class, however, is like that of most of the other programs + // implementing stationary problems. // - // There are a few helper functions - // (cell_is_in_fluid_domain, - // cell_is_in_solid_domain) of - // self-explanatory nature (operating on the - // symbolic names for the two subdomains that - // will be used as material_ids for cells - // belonging to the subdomains, as explained - // in the introduction) and a few functions - // (make_grid, set_active_fe_indices, - // assemble_interface_terms) that have - // been broken out of other functions that - // can be found in many of the other tutorial - // programs and that will be discussed as we - // get to their implementation. 
+ // There are a few helper functions (cell_is_in_fluid_domain, + // cell_is_in_solid_domain) of self-explanatory nature (operating on + // the symbolic names for the two subdomains that will be used as + // material_ids for cells belonging to the subdomains, as explained in the + // introduction) and a few functions (make_grid, + // set_active_fe_indices, assemble_interface_terms) that have been + // broken out of other functions that can be found in many of the other + // tutorial programs and that will be discussed as we get to their + // implementation. // - // The final set of variables - // (viscosity, lambda, eta) - // describes the material properties used for - // the two physics models. + // The final set of variables (viscosity, lambda, eta) + // describes the material properties used for the two physics models. template class FluidStructureProblem { @@ -155,17 +141,12 @@ namespace Step46 // @sect3{Boundary values and right hand side} - // The following classes do as their names - // suggest. The boundary values for the - // velocity are $\mathbf u=(0, \sin(\pi - // x))^T$ in 2d and $\mathbf u=(0, 0, - // \sin(\pi x)\sin(\pi y))^T$ in 3d, - // respectively. The remaining boundary - // conditions for this problem are all - // homogenous and have been discussed in the - // introduction. The right hand side forcing - // term is zero for both the fluid and the - // solid. + // The following classes do as their names suggest. The boundary values for + // the velocity are $\mathbf u=(0, \sin(\pi x))^T$ in 2d and $\mathbf u=(0, + // 0, \sin(\pi x)\sin(\pi y))^T$ in 3d, respectively. The remaining boundary + // conditions for this problem are all homogenous and have been discussed in + // the introduction. The right hand side forcing term is zero for both the + // fluid and the solid. template class StokesBoundaryValues : public Function { @@ -253,23 +234,15 @@ namespace Step46 // @sect4{Constructors and helper functions} - // Let's now get to the implementation of the - // primary class of this program. The first - // few functions are the constructor and the - // helper functions that can be used to - // determine which part of the domain a cell - // is in. Given the discussion of these - // topics in the introduction, their - // implementation is rather obvious. In the - // constructor, note that we have to - // construct the hp::FECollection object from - // the base elements for Stokes and - // elasticity; using the - // hp::FECollection::push_back function - // assigns them spots zero and one in this - // collection, an order that we have to - // remember and use consistently in the rest - // of the program. + // Let's now get to the implementation of the primary class of this + // program. The first few functions are the constructor and the helper + // functions that can be used to determine which part of the domain a cell + // is in. Given the discussion of these topics in the introduction, their + // implementation is rather obvious. In the constructor, note that we have + // to construct the hp::FECollection object from the base elements for + // Stokes and elasticity; using the hp::FECollection::push_back function + // assigns them spots zero and one in this collection, an order that we have + // to remember and use consistently in the rest of the program. 
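The push_back ordering mentioned in the comment above is easiest to see in isolation. The following is a hedged sketch of how such a two-element hp::FECollection could be assembled in 2d with fixed polynomial degrees; the FE_Q/FE_Nothing combination follows the pattern described in the introduction, but the concrete degrees and the helper function are made up for illustration:

#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_nothing.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/hp/fe_collection.h>

using namespace dealii;

// Hypothetical helper: fill a collection so that index 0 carries the Stokes
// variables (velocities and pressure, displacements switched off) and
// index 1 carries the elasticity variables (only displacements are "real").
void build_fe_collection (hp::FECollection<2> &fe_collection)
{
  const FESystem<2> stokes_fe (FE_Q<2>(2), 2,          // velocities
                               FE_Q<2>(1), 1,          // pressure
                               FE_Nothing<2>(), 2);    // displacements (off)

  const FESystem<2> elasticity_fe (FE_Nothing<2>(), 2, // velocities (off)
                                   FE_Nothing<2>(), 1, // pressure (off)
                                   FE_Q<2>(1), 2);     // displacements

  fe_collection.push_back (stokes_fe);     // active_fe_index 0: fluid cells
  fe_collection.push_back (elasticity_fe); // active_fe_index 1: solid cells
}

Keeping this order (zero for fluid, one for solid) everywhere is exactly what the set_active_fe_indices function further down relies on.
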
template FluidStructureProblem:: FluidStructureProblem (const unsigned int stokes_degree, @@ -316,24 +289,16 @@ namespace Step46 // @sect4{Meshes and assigning subdomains} - // The next pair of functions deals with - // generating a mesh and making sure all - // flags that denote subdomains are - // correct. make_grid, as - // discussed in the introduction, generates - // an $8\times 8$ mesh (or an $8\times - // 8\times 8$ mesh in 3d) to make sure that - // each coarse mesh cell is completely within - // one of the subdomains. After generating - // this mesh, we loop over its boundary and - // set the boundary indicator to one at the - // top boundary, the only place where we set - // nonzero Dirichlet boundary - // conditions. After this, we loop again over - // all cells to set the material indicator - // — used to denote which part of the - // domain we are in, to either the fluid or - // solid indicator. + // The next pair of functions deals with generating a mesh and making sure + // all flags that denote subdomains are correct. make_grid, as + // discussed in the introduction, generates an $8\times 8$ mesh (or an + // $8\times 8\times 8$ mesh in 3d) to make sure that each coarse mesh cell + // is completely within one of the subdomains. After generating this mesh, + // we loop over its boundary and set the boundary indicator to one at the + // top boundary, the only place where we set nonzero Dirichlet boundary + // conditions. After this, we loop again over all cells to set the material + // indicator — used to denote which part of the domain we are in, to + // either the fluid or solid indicator. template void FluidStructureProblem::make_grid () @@ -366,24 +331,16 @@ namespace Step46 } - // The second part of this pair of functions - // determines which finite element to use on - // each cell. Above we have set the material - // indicator for each coarse mesh cell, and - // as mentioned in the introduction, this - // information is inherited from mother to - // child cell upon mesh refinement. + // The second part of this pair of functions determines which finite element + // to use on each cell. Above we have set the material indicator for each + // coarse mesh cell, and as mentioned in the introduction, this information + // is inherited from mother to child cell upon mesh refinement. // - // In other words, whenever we have refined - // (or created) the mesh, we can rely on the - // material indicators to be a correct - // description of which part of the domain a - // cell is in. We then use this to set the - // active FE index of the cell to the - // corresponding element of the - // hp::FECollection member variable of this - // class: zero for fluid cells, one for solid - // cells. + // In other words, whenever we have refined (or created) the mesh, we can + // rely on the material indicators to be a correct description of which part + // of the domain a cell is in. We then use this to set the active FE index + // of the cell to the corresponding element of the hp::FECollection member + // variable of this class: zero for fluid cells, one for solid cells. template void FluidStructureProblem::set_active_fe_indices () @@ -404,18 +361,13 @@ namespace Step46 // @sect4{FluidStructureProblem::setup_dofs} - // The next step is to setup the data - // structures for the linear system. 
To this - // end, we first have to set the active FE - // indices with the function immediately - // above, then distribute degrees of freedom, - // and then determine constraints on the - // linear system. The latter includes hanging - // node constraints as usual, but also the - // inhomogenous boundary values at the top - // fluid boundary, and zero boundary values - // along the perimeter of the solid - // subdomain. + // The next step is to setup the data structures for the linear system. To + // this end, we first have to set the active FE indices with the function + // immediately above, then distribute degrees of freedom, and then determine + // constraints on the linear system. The latter includes hanging node + // constraints as usual, but also the inhomogenous boundary values at the + // top fluid boundary, and zero boundary values along the perimeter of the + // solid subdomain. template void FluidStructureProblem::setup_dofs () @@ -443,12 +395,10 @@ namespace Step46 fe_collection.component_mask(displacements)); } - // There are more constraints we have to - // handle, though: we have to make sure - // that the velocity is zero at the - // interface between fluid and solid. The - // following piece of code was already - // presented in the introduction: + // There are more constraints we have to handle, though: we have to make + // sure that the velocity is zero at the interface between fluid and + // solid. The following piece of code was already presented in the + // introduction: { std::vector local_face_dof_indices (stokes_fe.dofs_per_face); for (typename hp::DoFHandler::active_cell_iterator @@ -485,11 +435,9 @@ namespace Step46 } } - // At the end of all this, we can declare - // to the constraints object that we now - // have all constraints ready to go and - // that the object can rebuild its internal - // data structures for better efficiency: + // At the end of all this, we can declare to the constraints object that + // we now have all constraints ready to go and that the object can rebuild + // its internal data structures for better efficiency: constraints.close (); std::cout << " Number of active cells: " @@ -499,11 +447,9 @@ namespace Step46 << dof_handler.n_dofs() << std::endl; - // In the rest of this function we create a - // sparsity pattern as discussed - // extensively in the introduction, and use - // it to initialize the matrix; then also - // set vectors to their correct sizes: + // In the rest of this function we create a sparsity pattern as discussed + // extensively in the introduction, and use it to initialize the matrix; + // then also set vectors to their correct sizes: { CompressedSimpleSparsityPattern csp (dof_handler.n_dofs(), dof_handler.n_dofs()); @@ -542,18 +488,13 @@ namespace Step46 // @sect4{FluidStructureProblem::assemble_system} - // Following is the central function of this - // program: the one that assembles the linear - // system. It has a long section of setting - // up auxiliary functions at the beginning: - // from creating the quadrature formulas and - // setting up the FEValues, FEFaceValues and - // FESubfaceValues objects necessary to - // integrate the cell terms as well as the - // interface terms for the case where cells - // along the interface come together at same - // size or with differing levels of - // refinement... + // Following is the central function of this program: the one that assembles + // the linear system. 
It has a long section of setting up auxiliary + // functions at the beginning: from creating the quadrature formulas and + // setting up the FEValues, FEFaceValues and FESubfaceValues objects + // necessary to integrate the cell terms as well as the interface terms for + // the case where cells along the interface come together at same size or + // with differing levels of refinement... template void FluidStructureProblem::assemble_system () { @@ -593,9 +534,8 @@ namespace Step46 common_face_quadrature, update_values); - // ...to objects that are needed to - // describe the local contributions to the - // global linear system... + // ...to objects that are needed to describe the local contributions to + // the global linear system... const unsigned int stokes_dofs_per_cell = stokes_fe.dofs_per_cell; const unsigned int elasticity_dofs_per_cell = elasticity_fe.dofs_per_cell; @@ -609,11 +549,9 @@ namespace Step46 const RightHandSide right_hand_side; - // ...to variables that allow us to extract - // certain components of the shape - // functions and cache their values rather - // than having to recompute them at every - // quadrature point: + // ...to variables that allow us to extract certain components of the + // shape functions and cache their values rather than having to recompute + // them at every quadrature point: const FEValuesExtractors::Vector velocities (0); const FEValuesExtractors::Scalar pressure (dim); const FEValuesExtractors::Vector displacements (dim+1); @@ -626,12 +564,10 @@ namespace Step46 std::vector elasticity_div_phi (elasticity_dofs_per_cell); std::vector > elasticity_phi (elasticity_dofs_per_cell); - // Then comes the main loop over all cells - // and, as in step-27, the initialization - // of the hp::FEValues object for the - // current cell and the extraction of a - // FEValues object that is appropriate for - // the current cell: + // Then comes the main loop over all cells and, as in step-27, the + // initialization of the hp::FEValues object for the current cell and the + // extraction of a FEValues object that is appropriate for the current + // cell: typename hp::DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); @@ -645,29 +581,20 @@ namespace Step46 cell->get_fe().dofs_per_cell); local_rhs.reinit (cell->get_fe().dofs_per_cell); - // With all of this done, we continue - // to assemble the cell terms for cells - // that are part of the Stokes and - // elastic regions. While we could in - // principle do this in one formula, in - // effect implementing the one bilinear - // form stated in the introduction, we - // realize that our finite element - // spaces are chosen in such a way that - // on each cell, one set of variables - // (either velocities and pressure, or - // displacements) are always zero, and - // consequently a more efficient way of - // computing local integrals is to do - // only what's necessary based on an - // if clause that tests - // which part of the domain we are in. + // With all of this done, we continue to assemble the cell terms for + // cells that are part of the Stokes and elastic regions. 
While we + // could in principle do this in one formula, in effect implementing + // the one bilinear form stated in the introduction, we realize that + // our finite element spaces are chosen in such a way that on each + // cell, one set of variables (either velocities and pressure, or + // displacements) are always zero, and consequently a more efficient + // way of computing local integrals is to do only what's necessary + // based on an if clause that tests which part of the + // domain we are in. // - // The actual computation of the local - // matrix is the same as in step-22 as - // well as that given in the @ref - // vector_valued documentation module - // for the elasticity equations: + // The actual computation of the local matrix is the same as in + // step-22 as well as that given in the @ref vector_valued + // documentation module for the elasticity equations: if (cell_is_in_fluid_domain (cell)) { const unsigned int dofs_per_cell = cell->get_fe().dofs_per_cell; @@ -724,94 +651,57 @@ namespace Step46 } } - // Once we have the contributions from - // cell integrals, we copy them into - // the global matrix (taking care of - // constraints right away, through the - // ConstraintMatrix::distribute_local_to_global - // function). Note that we have not - // written anything into the - // local_rhs variable, - // though we still need to pass it - // along since the elimination of - // nonzero boundary values requires the - // modification of local and - // consequently also global right hand - // side values: + // Once we have the contributions from cell integrals, we copy them + // into the global matrix (taking care of constraints right away, + // through the ConstraintMatrix::distribute_local_to_global + // function). Note that we have not written anything into the + // local_rhs variable, though we still need to pass it + // along since the elimination of nonzero boundary values requires the + // modification of local and consequently also global right hand side + // values: local_dof_indices.resize (cell->get_fe().dofs_per_cell); cell->get_dof_indices (local_dof_indices); constraints.distribute_local_to_global (local_matrix, local_rhs, local_dof_indices, system_matrix, system_rhs); - // The more interesting part of this - // function is where we see about face - // terms along the interface between - // the two subdomains. To this end, we - // first have to make sure that we only - // assemble them once even though a - // loop over all faces of all cells - // would encounter each part of the - // interface twice. We arbitrarily make - // the decision that we will only - // evaluate interface terms if the - // current cell is part of the solid - // subdomain and if, consequently, a - // face is not at the boundary and the - // potential neighbor behind it is part - // of the fluid domain. Let's start - // with these conditions: + // The more interesting part of this function is where we see about + // face terms along the interface between the two subdomains. To this + // end, we first have to make sure that we only assemble them once + // even though a loop over all faces of all cells would encounter each + // part of the interface twice. We arbitrarily make the decision that + // we will only evaluate interface terms if the current cell is part + // of the solid subdomain and if, consequently, a face is not at the + // boundary and the potential neighbor behind it is part of the fluid + // domain. 
Let's start with these conditions: if (cell_is_in_solid_domain (cell)) for (unsigned int f=0; f::faces_per_cell; ++f) if (cell->at_boundary(f) == false) { - // At this point we know that - // the current cell is a - // candidate for integration - // and that a neighbor behind - // face f - // exists. There are now three - // possibilities: + // At this point we know that the current cell is a candidate + // for integration and that a neighbor behind face + // f exists. There are now three possibilities: // - // - The neighbor is at the - // same refinement level and - // has no children. - // - The neighbor has children. - // - The neighbor is coarser. + // - The neighbor is at the same refinement level and has no + // children. + // - The neighbor has children. + // - The neighbor is coarser. // - // In all three cases, we are - // only interested in it if it - // is part of the fluid - // subdomain. So let us start - // with the first and simplest - // case: if the neighbor is at - // the same level, has no - // children, and is a fluid - // cell, then the two cells - // share a boundary that is - // part of the interface along - // which we want to integrate - // interface terms. All we have - // to do is initialize two - // FEFaceValues object with the - // current face and the face of - // the neighboring cell (note - // how we find out which face - // of the neighboring cell - // borders on the current cell) - // and pass things off to the - // function that evaluates the - // interface terms (the third - // through fifth arguments to - // this function provide it - // with scratch arrays). The - // result is then again copied - // into the global matrix, - // using a function that knows - // that the DoF indices of rows - // and columns of the local - // matrix result from different - // cells: + // In all three cases, we are only interested in it if it is + // part of the fluid subdomain. So let us start with the first + // and simplest case: if the neighbor is at the same level, + // has no children, and is a fluid cell, then the two cells + // share a boundary that is part of the interface along which + // we want to integrate interface terms. All we have to do is + // initialize two FEFaceValues object with the current face + // and the face of the neighboring cell (note how we find out + // which face of the neighboring cell borders on the current + // cell) and pass things off to the function that evaluates + // the interface terms (the third through fifth arguments to + // this function provide it with scratch arrays). The result + // is then again copied into the global matrix, using a + // function that knows that the DoF indices of rows and + // columns of the local matrix result from different cells: if ((cell->neighbor(f)->level() == cell->level()) && (cell->neighbor(f)->has_children() == false) @@ -833,22 +723,13 @@ namespace Step46 system_matrix); } - // The second case is if the - // neighbor has further - // children. In that case, we - // have to loop over all the - // children of the neighbor to - // see if they are part of the - // fluid subdomain. If they - // are, then we integrate over - // the common interface, which - // is a face for the neighbor - // and a subface of the current - // cell, requiring us to use an - // FEFaceValues for the - // neighbor and an - // FESubfaceValues for the - // current cell: + // The second case is if the neighbor has further children. 
In + // that case, we have to loop over all the children of the + // neighbor to see if they are part of the fluid subdomain. If + // they are, then we integrate over the common interface, + // which is a face for the neighbor and a subface of the + // current cell, requiring us to use an FEFaceValues for the + // neighbor and an FESubfaceValues for the current cell: else if ((cell->neighbor(f)->level() == cell->level()) && (cell->neighbor(f)->has_children() == true)) @@ -880,14 +761,10 @@ namespace Step46 } } - // The last option is that the - // neighbor is coarser. In that - // case we have to use an - // FESubfaceValues object for - // the neighbor and a - // FEFaceValues for the current - // cell; the rest is the same - // as before: + // The last option is that the neighbor is coarser. In that + // case we have to use an FESubfaceValues object for the + // neighbor and a FEFaceValues for the current cell; the rest + // is the same as before: else if (cell->neighbor_is_coarser(f) && cell_is_in_fluid_domain(cell->neighbor(f))) @@ -916,25 +793,17 @@ namespace Step46 - // In the function that assembles the global - // system, we passed computing interface - // terms to a separate function we discuss - // here. The key is that even though we can't - // predict the combination of FEFaceValues - // and FESubfaceValues objects, they are both - // derived from the FEFaceValuesBase class - // and consequently we don't have to care: - // the function is simply called with two - // such objects denoting the values of the - // shape functions on the quadrature points - // of the two sides of the face. We then do - // what we always do: we fill the scratch - // arrays with the values of shape functions - // and their derivatives, and then loop over - // all entries of the matrix to compute the - // local integrals. The details of the - // bilinear form we evaluate here are given - // in the introduction. + // In the function that assembles the global system, we passed computing + // interface terms to a separate function we discuss here. The key is that + // even though we can't predict the combination of FEFaceValues and + // FESubfaceValues objects, they are both derived from the FEFaceValuesBase + // class and consequently we don't have to care: the function is simply + // called with two such objects denoting the values of the shape functions + // on the quadrature points of the two sides of the face. We then do what we + // always do: we fill the scratch arrays with the values of shape functions + // and their derivatives, and then loop over all entries of the matrix to + // compute the local integrals. The details of the bilinear form we evaluate + // here are given in the introduction. template void FluidStructureProblem:: @@ -981,14 +850,10 @@ namespace Step46 // @sect4{FluidStructureProblem::solve} - // As discussed in the introduction, we use a - // rather trivial solver here: we just pass - // the linear system off to the - // SparseDirectUMFPACK direct solver (see, - // for example, step-29). The only thing we - // have to do after solving is ensure that - // hanging node and boundary value - // constraints are correct. + // As discussed in the introduction, we use a rather trivial solver here: we + // just pass the linear system off to the SparseDirectUMFPACK direct solver + // (see, for example, step-29). The only thing we have to do after solving + // is ensure that hanging node and boundary value constraints are correct. 
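To make the solver description above concrete outside the context of the full program, a minimal sketch of a direct solve followed by constraint distribution could read as follows. The free-standing function and its argument list are hypothetical; only the SparseDirectUMFPACK and ConstraintMatrix calls themselves are standard deal.II usage:

#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/sparse_direct.h>
#include <deal.II/lac/vector.h>
#include <deal.II/lac/constraint_matrix.h>

using namespace dealii;

// Hypothetical sketch: factorize the matrix, solve in place, then give
// constrained degrees of freedom (hanging nodes, boundary values) their
// correct values.
void solve_direct (const SparseMatrix<double> &system_matrix,
                   const Vector<double>       &system_rhs,
                   const ConstraintMatrix     &constraints,
                   Vector<double>             &solution)
{
  SparseDirectUMFPACK direct_solver;
  direct_solver.initialize (system_matrix); // compute the LU factorization

  solution = system_rhs;                    // solve() works in place:
  direct_solver.solve (solution);           // right hand side in, solution out

  constraints.distribute (solution);        // fix up constrained DoFs
}
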
template void FluidStructureProblem::solve () @@ -1004,15 +869,11 @@ namespace Step46 // @sect4{FluidStructureProblem::output_results} - // Generating graphical output is rather - // trivial here: all we have to do is - // identify which components of the solution - // vector belong to scalars and/or vectors - // (see, for example, step-22 for a previous - // example), and then pass it all on to the - // DataOut class (with the second template - // argument equal to hp::DoFHandler instead - // of the usual default DoFHandler): + // Generating graphical output is rather trivial here: all we have to do is + // identify which components of the solution vector belong to scalars and/or + // vectors (see, for example, step-22 for a previous example), and then pass + // it all on to the DataOut class (with the second template argument equal + // to hp::DoFHandler instead of the usual default DoFHandler): template void FluidStructureProblem:: @@ -1052,20 +913,14 @@ namespace Step46 // @sect4{FluidStructureProblem::refine_mesh} - // The next step is to refine the mesh. As - // was discussed in the introduction, this is - // a bit tricky primarily because the fluid - // and the solid subdomains use variables - // that have different physical dimensions - // and for which the absolute magnitude of - // error estimates is consequently not - // directly comparable. We will therefore - // have to scale them. At the top of the - // function, we therefore first compute error - // estimates for the different variables - // separately (using the velocities but not - // the pressure for the fluid domain, and the - // displacements in the solid domain): + // The next step is to refine the mesh. As was discussed in the + // introduction, this is a bit tricky primarily because the fluid and the + // solid subdomains use variables that have different physical dimensions + // and for which the absolute magnitude of error estimates is consequently + // not directly comparable. We will therefore have to scale them. At the top + // of the function, we therefore first compute error estimates for the + // different variables separately (using the velocities but not the pressure + // for the fluid domain, and the displacements in the solid domain): template void FluidStructureProblem::refine_mesh () @@ -1098,13 +953,10 @@ namespace Step46 elasticity_estimated_error_per_cell, fe_collection.component_mask(displacements)); - // We then normalize error estimates by - // dividing by their norm and scale the - // fluid error indicators by a factor of 4 - // as discussed in the introduction. The - // results are then added together into a - // vector that contains error indicators - // for all cells: + // We then normalize error estimates by dividing by their norm and scale + // the fluid error indicators by a factor of 4 as discussed in the + // introduction. The results are then added together into a vector that + // contains error indicators for all cells: stokes_estimated_error_per_cell *= 4. 
/ stokes_estimated_error_per_cell.l2_norm(); elasticity_estimated_error_per_cell @@ -1116,34 +968,23 @@ namespace Step46 estimated_error_per_cell += stokes_estimated_error_per_cell; estimated_error_per_cell += elasticity_estimated_error_per_cell; - // The second to last part of the function, - // before actually refining the mesh, - // involves a heuristic that we have - // already mentioned in the introduction: - // because the solution is discontinuous, - // the KellyErrorEstimator class gets all - // confused about cells that sit at the - // boundary between subdomains: it believes - // that the error is large there because - // the jump in the gradient is large, even - // though this is entirely expected and a - // feature that is in fact present in the - // exact solution as well and therefore not - // indicative of any numerical error. + // The second to last part of the function, before actually refining the + // mesh, involves a heuristic that we have already mentioned in the + // introduction: because the solution is discontinuous, the + // KellyErrorEstimator class gets all confused about cells that sit at the + // boundary between subdomains: it believes that the error is large there + // because the jump in the gradient is large, even though this is entirely + // expected and a feature that is in fact present in the exact solution as + // well and therefore not indicative of any numerical error. // - // Consequently, we set the error - // indicators to zero for all cells at the - // interface; the conditions determining - // which cells this affects are slightly - // awkward because we have to account for - // the possibility of adaptively refined - // meshes, meaning that the neighboring - // cell can be coarser than the current - // one, or could in fact be refined some - // more. The structure of these nested - // conditions is much the same as we - // encountered when assembling interface - // terms in assemble_system. + // Consequently, we set the error indicators to zero for all cells at the + // interface; the conditions determining which cells this affects are + // slightly awkward because we have to account for the possibility of + // adaptively refined meshes, meaning that the neighboring cell can be + // coarser than the current one, or could in fact be refined some + // more. The structure of these nested conditions is much the same as we + // encountered when assembling interface terms in + // assemble_system. { unsigned int cell_index = 0; for (typename hp::DoFHandler::active_cell_iterator @@ -1208,11 +1049,9 @@ namespace Step46 // @sect4{FluidStructureProblem::run} - // This is, as usual, the function that - // controls the overall flow of operation. If - // you've read through tutorial programs - // step-1 through step-6, for example, then - // you are already quite familiar with the + // This is, as usual, the function that controls the overall flow of + // operation. 
If you've read through tutorial programs step-1 through + // step-6, for example, then you are already quite familiar with the // following structure: template void FluidStructureProblem::run () @@ -1247,9 +1086,8 @@ namespace Step46 // @sect4{The main() function} -// This, final, function contains pretty much -// exactly what most of the other tutorial -// programs have: +// This, final, function contains pretty much exactly what most of the other +// tutorial programs have: int main () { try diff --git a/deal.II/examples/step-47/step-47.cc b/deal.II/examples/step-47/step-47.cc index 6d2fdb0533..25fb631934 100644 --- a/deal.II/examples/step-47/step-47.cc +++ b/deal.II/examples/step-47/step-47.cc @@ -219,9 +219,8 @@ namespace Step47 if (level_set(cell->vertex(v)) * level_set(cell->vertex(v+1)) < 0) return true; - // we get here only if all vertices - // have the same sign, which means - // that the cell is not intersected + // we get here only if all vertices have the same sign, which means that + // the cell is not intersected return false; } @@ -246,8 +245,7 @@ namespace Step47 constraints.clear (); //TODO: fix this, it currently crashes - // DoFTools::make_hanging_node_constraints (dof_handler, - // constraints); + // DoFTools::make_hanging_node_constraints (dof_handler, constraints); //TODO: component 1 must satisfy zero boundary conditions constraints.close(); @@ -325,8 +323,9 @@ namespace Step47 } else { -//TODO: verify that the order of support points equals the order of vertices of the cells, as we use below -//TODO: remove update_support_points and friends, since they aren't implemented anyway +//TODO: verify that the order of support points equals the order of vertices +//of the cells, as we use below TODO: remove update_support_points and +//friends, since they aren't implemented anyway Assert (cell->active_fe_index() == 1, ExcInternalError()); Assert (interface_intersects_cell(cell) == true, ExcInternalError()); @@ -438,15 +437,16 @@ namespace Step47 } -// To integrate the enriched elements we have to find the geometrical decomposition -// of the original element in subelements. The subelements are used to integrate -// the elements on both sides of the discontinuity. The disontinuity line is approximated -// by a piece-wise linear interpolation between the intersection of the discontinuity -// with the edges of the elements. The vector level_set_values has the values of -// the level set function at the vertices of the elements. From these values can be found -// by linear interpolation the intersections. There are three kind of decomposition that -// are considered. -// Type 1: there is not cut. Type 2: a corner of the element is cut. Type 3: two corners are cut. +// To integrate the enriched elements we have to find the geometrical +// decomposition of the original element in subelements. The subelements are +// used to integrate the elements on both sides of the discontinuity. The +// disontinuity line is approximated by a piece-wise linear interpolation +// between the intersection of the discontinuity with the edges of the +// elements. The vector level_set_values has the values of the level set +// function at the vertices of the elements. From these values can be found by +// linear interpolation the intersections. There are three kind of +// decomposition that are considered. Type 1: there is not cut. Type 2: a +// corner of the element is cut. Type 3: two corners are cut. 
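Since the decomposition logic summarized in the reflowed comment above is easy to state on its own, here is a small stand-alone sketch of the sign-pattern classification (a hypothetical helper, not code from this patch). For simplicity it assumes no vertex lies exactly on the interface:

// Hypothetical helper mirroring the classification described above. The
// input holds the sign (+1 or -1) of the level set function at the four
// vertices of a quadrilateral cell.
unsigned int decomposition_type (const int sign[4])
{
  // Type 1: all four signs agree (++++ or ----), the interface does not
  // cut the cell.
  if (sign[0] == sign[1] &&
      sign[0] == sign[2] &&
      sign[0] == sign[3])
    return 1;

  // Type 2: exactly one vertex has a sign different from the other three,
  // i.e. a single corner is cut off; the product of the four signs is then
  // negative (e.g. -+++ or +---).
  if (sign[0] * sign[1] * sign[2] * sign[3] < 0)
    return 2;

  // Type 3: two vertices lie on each side of the interface
  // (e.g. ++--, +-+-, +--+).
  return 3;
}
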
template std::pair > @@ -466,11 +466,11 @@ namespace Step47 else sign_ls[v] = 0; } - // the sign of the level set function at the 4 nodes of the elements can be positive + or negative - - // depending on the sign of the level set function we have the folloing three classes of decomposition - // type 1: ++++, ---- - // type 2: -+++, +-++, ++-+, +++-, +---, -+--, --+-, ---+ - // type 3: +--+, ++--, +-+-, -++-, --++, -+-+ + // the sign of the level set function at the 4 nodes of the elements can + // be positive + or negative - depending on the sign of the level set + // function we have the folloing three classes of decomposition type 1: + // ++++, ---- type 2: -+++, +-++, ++-+, +++-, +---, -+--, --+-, ---+ type + // 3: +--+, ++--, +-+-, -++-, --++, -+-+ if ( sign_ls[0]==sign_ls[1] & sign_ls[0]==sign_ls[2] & sign_ls[0]==sign_ls[3] ) type =1; else if ( sign_ls[0]*sign_ls[1]*sign_ls[2]*sign_ls[3] < 0 ) type = 2; @@ -497,8 +497,8 @@ namespace Step47 { const unsigned int n_q_points = plain_quadrature.size(); - // loop over all subelements for integration - // in type 2 there are 5 subelements + // loop over all subelements for integration in type 2 there are 5 + // subelements Quadrature xfem_quadrature(5*n_q_points); @@ -514,11 +514,7 @@ namespace Step47 // deal.ii local coordinates - // 2-------3 - // | | - // | | - // | | - // 0-------1 + // 2-------3 | | | | | | 0-------1 if (Pos == 0) { @@ -581,12 +577,9 @@ namespace Step47 F(1) = 0.5*( 1. + B(1) ); } - //std::cout << A << std::endl; - //std::cout << B << std::endl; - //std::cout << C << std::endl; - //std::cout << D << std::endl; - //std::cout << E << std::endl; - //std::cout << F << std::endl; + //std::cout << A << std::endl; std::cout << B << std::endl; std::cout + //<< C << std::endl; std::cout << D << std::endl; std::cout << E << + //std::endl; std::cout << F << std::endl; std::string filename = "vertices.dat"; std::ofstream output (filename.c_str()); @@ -648,22 +641,23 @@ namespace Step47 for (unsigned int subcell = 0; subcell<5; subcell++) { - //std::cout << "subcell : " << subcell << std::endl; + //std::cout << "subcell : " << subcell << std::endl; std::vector > vertices; for (unsigned int i=0; i<4; i++) { vertices.push_back( subcell_vertices[subcell_v_indices[Pos][subcell][i]] ); - //std::cout << "i : " << i << std::endl; - //std::cout << "subcell v : " << subcell_v_indices[Pos][subcell][i] << std::endl; - //std::cout << vertices[i](0) << " " << vertices[i](1) << std::endl; + //std::cout << "i : " << i << std::endl; std::cout << + //"subcell v : " << subcell_v_indices[Pos][subcell][i] << + //std::endl; std::cout << vertices[i](0) << " " << + //vertices[i](1) << std::endl; } - //std::cout << std::endl; - // create quadrature rule + //std::cout << std::endl; create quadrature rule append_quadrature( plain_quadrature, vertices, xfem_points, xfem_weights); - //initialize xfem_quadrature with quadrature points of all subelements + //initialize xfem_quadrature with quadrature points of all + //subelements xfem_quadrature.initialize(xfem_points, xfem_weights); } } @@ -672,15 +666,14 @@ namespace Step47 return std::pair >(2, xfem_quadrature); } - // Type three decomposition - // (+--+, ++--, +-+-, -++-, --++, -+-+) + // Type three decomposition (+--+, ++--, +-+-, -++-, --++, -+-+) if (type==3) { const unsigned int n_q_points = plain_quadrature.size(); - // loop over all subelements for integration - // in type 2 there are 5 subelements + // loop over all subelements for integration in type 2 there are 5 + // subelements Quadrature 
xfem_quadrature(5*n_q_points); @@ -713,9 +706,8 @@ namespace Step47 assert(0); } - //std::cout << "Pos " << Pos << std::endl; - //std::cout << A << std::endl; - //std::cout << B << std::endl; + //std::cout << "Pos " << Pos << std::endl; std::cout << A << + //std::endl; std::cout << B << std::endl; std::string filename = "vertices.dat"; std::ofstream output (filename.c_str()); output << "#vertices of xfem subcells" << std::endl; @@ -742,25 +734,26 @@ namespace Step47 {{0,4,2,5}, {4,1,5,3}} }; - //std::cout << "Pos : " << Pos << std::endl; + //std::cout << "Pos : " << Pos << std::endl; for (unsigned int subcell = 0; subcell<2; subcell++) { - //std::cout << "subcell : " << subcell << std::endl; + //std::cout << "subcell : " << subcell << std::endl; std::vector > vertices; for (unsigned int i=0; i<4; i++) { vertices.push_back( subcell_vertices[subcell_v_indices[Pos][subcell][i]] ); - //std::cout << "i : " << i << std::endl; - //std::cout << "subcell v : " << subcell_v_indices[Pos][subcell][i] << std::endl; - //std::cout << vertices[i](0) << " " << vertices[i](1) << std::endl; + //std::cout << "i : " << i << std::endl; std::cout << + //"subcell v : " << subcell_v_indices[Pos][subcell][i] << + //std::endl; std::cout << vertices[i](0) << " " << + //vertices[i](1) << std::endl; } - //std::cout << std::endl; - // create quadrature rule + //std::cout << std::endl; create quadrature rule append_quadrature( plain_quadrature, vertices, xfem_points, xfem_weights); - //initialize xfem_quadrature with quadrature points of all subelements + //initialize xfem_quadrature with quadrature points of all + //subelements xfem_quadrature.initialize(xfem_points, xfem_weights); } } @@ -779,11 +772,13 @@ namespace Step47 std::vector &xfem_weights) { - // Project integration points into sub-elements. - // This maps quadrature points from a reference element to a subelement of a reference element. - // To implement the action of this map the coordinates of the subelements have been calculated (A(0)...F(0),A(1)...F(1)) - // the coordinates of the quadrature points are given by the bi-linear map defined by the form functions - // $x^\prime_i = \sum_j v^\prime \phi_j(x^hat_i)$, where the $\phi_j$ are the shape functions of the FEQ. + // Project integration points into sub-elements. This maps quadrature + // points from a reference element to a subelement of a reference element. + // To implement the action of this map the coordinates of the subelements + // have been calculated (A(0)...F(0),A(1)...F(1)) the coordinates of the + // quadrature points are given by the bi-linear map defined by the form + // functions $x^\prime_i = \sum_j v^\prime \phi_j(x^hat_i)$, where the + // $\phi_j$ are the shape functions of the FEQ. unsigned int n_v = GeometryInfo::vertices_per_cell; @@ -806,8 +801,8 @@ namespace Step47 double xi = q_points[i](0); double eta = q_points[i](1); - // Define shape functions on reference element - // we consider a bi-linear mapping + // Define shape functions on reference element we consider a + // bi-linear mapping phi[0] = (1. - xi) * (1. - eta); phi[1] = xi * (1. - eta); phi[2] = (1. 
- xi) * eta; @@ -846,7 +841,8 @@ namespace Step47 double detJ = determinant(jacobian); xfem_weights.push_back (W[i] * detJ); - // Map integration points from reference element to subcell of reference element + // Map integration points from reference element to subcell of + // reference element Point q_prime; for (unsigned int d=0; d::vertices_per_cell; j++) diff --git a/deal.II/examples/step-48/step-48.cc b/deal.II/examples/step-48/step-48.cc index e801034fc2..cfff6c9152 100644 --- a/deal.II/examples/step-48/step-48.cc +++ b/deal.II/examples/step-48/step-48.cc @@ -11,8 +11,7 @@ /* further information on this license. */ -// The necessary files from the deal.II -// library. +// The necessary files from the deal.II library. #include #include #include @@ -34,9 +33,8 @@ #include #include -// This includes the data structures for the -// efficient implementation of matrix-free -// methods. +// This includes the data structures for the efficient implementation of +// matrix-free methods. #include #include #include @@ -50,47 +48,30 @@ namespace Step48 { using namespace dealii; - // We start by defining two global - // variables to collect all parameters - // subject to changes at one place: - // One for the dimension and one for - // the finite element degree. The - // dimension is used in the main - // function as a template argument for - // the actual classes (like in all - // other deal.II programs), whereas - // the degree of the finite element is - // more crucial, as it is passed as a - // template argument to the - // implementation of the Sine-Gordon - // operator. Therefore, it needs to be - // a compile-time constant. + // We start by defining two global variables to collect all parameters + // subject to changes at one place: One for the dimension and one for the + // finite element degree. The dimension is used in the main function as a + // template argument for the actual classes (like in all other deal.II + // programs), whereas the degree of the finite element is more crucial, as + // it is passed as a template argument to the implementation of the + // Sine-Gordon operator. Therefore, it needs to be a compile-time constant. const unsigned int dimension = 2; const unsigned int fe_degree = 4; // @sect3{SineGordonOperation} - // The SineGordonOperation class - // implements the cell-based operation that is - // needed in each time step. This nonlinear - // operation can be implemented - // straight-forwardly based on the - // MatrixFree class, in the - // same way as a linear operation would be - // treated by this implementation of the - // finite element operator application. We - // apply two template arguments to the class, - // one for the dimension and one for the - // degree of the finite element. This is a - // difference to other functions in deal.II - // where only the dimension is a template - // argument. This is necessary to provide the - // inner loops in @p FEEvaluation with - // information about loop lengths etc., which - // is essential for efficiency. On the other - // hand, it makes it more challenging to - // implement the degree as a run-time + // The SineGordonOperation class implements the cell-based + // operation that is needed in each time step. This nonlinear operation can + // be implemented straight-forwardly based on the MatrixFree + // class, in the same way as a linear operation would be treated by this + // implementation of the finite element operator application. 
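To make the point about the compile-time polynomial degree concrete, here is a reduced sketch (class and member names are hypothetical, not the program's): because the degree enters as a template parameter rather than as a run-time variable, the loop bounds of the per-cell kernel are compile-time constants the compiler can unroll.

template <int dim, int fe_degree>
class CellKernelSketch
{
public:
  // number of 1d degrees of freedom of a degree fe_degree element,
  // known at compile time (dim is kept only to mirror the class above)
  static const unsigned int n_dofs_1d = fe_degree + 1;

  void scale_local_values (double (&local_values)[n_dofs_1d]) const
  {
    for (unsigned int i = 0; i < n_dofs_1d; ++i)   // fixed-length loop
      local_values[i] *= 2.0;
  }
};

// usage: both parameters must be compile-time constants, e.g.
//   CellKernelSketch<2,4> kernel;
//   double values[5] = {};
//   kernel.scale_local_values (values);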
We apply two + // template arguments to the class, one for the dimension and one for the + // degree of the finite element. This is a difference to other functions in + // deal.II where only the dimension is a template argument. This is + // necessary to provide the inner loops in @p FEEvaluation with information + // about loop lengths etc., which is essential for efficiency. On the other + // hand, it makes it more challenging to implement the degree as a run-time // parameter. template class SineGordonOperation @@ -117,25 +98,17 @@ namespace Step48 // @sect4{SineGordonOperation::SineGordonOperation} - // This is the constructor of the - // SineGordonOperation class. It receives a - // reference to the MatrixFree holding the - // problem information and the time step size - // as input parameters. The initialization - // routine sets up the mass matrix. Since we - // use Gauss-Lobatto elements, the mass matrix - // is a diagonal matrix and can be stored as a - // vector. The computation of the mass matrix - // diagonal is simple to achieve with the data - // structures provided by FEEvaluation: Just - // loop over all (macro-) cells and integrate - // over the function that is constant one on - // all quadrature points by using the - // integrate function with @p - // true argument at the slot for - // values. Finally, we invert the diagonal - // entries since we have to multiply by the - // inverse mass matrix in each time step. + // This is the constructor of the SineGordonOperation class. It receives a + // reference to the MatrixFree holding the problem information and the time + // step size as input parameters. The initialization routine sets up the + // mass matrix. Since we use Gauss-Lobatto elements, the mass matrix is a + // diagonal matrix and can be stored as a vector. The computation of the + // mass matrix diagonal is simple to achieve with the data structures + // provided by FEEvaluation: Just loop over all (macro-) cells and integrate + // over the function that is constant one on all quadrature points by using + // the integrate function with @p true argument at the slot for + // values. Finally, we invert the diagonal entries since we have to multiply + // by the inverse mass matrix in each time step. template SineGordonOperation:: SineGordonOperation(const MatrixFree &data_in, @@ -172,52 +145,32 @@ namespace Step48 // @sect4{SineGordonOperation::local_apply} - // This operator implements the core operation - // of the program, the integration over a - // range of cells for the nonlinear operator - // of the Sine-Gordon problem. The - // implementation is based on the - // FEEvaluationGL class since we are using - // the cell-based implementation for - // Gauss-Lobatto elements. - - // The nonlinear function that we have to - // evaluate for the time stepping routine - // includes the value of the function at - // the present time @p current as well as - // the value at the previous time step @p - // old. Both values are passed to the - // operator in the collection of source - // vectors @p src, which is simply an STL - // vector of pointers to the actual - // solution vectors. This construct of - // collecting several source vectors into - // one is necessary as the cell loop in @p - // MatrixFree takes exactly one source - // and one destination vector, even if we - // happen to use many vectors like the two - // in this case. 
Note that the cell loop - // accepts any valid class for input and - // output, which does not only include - // vectors but general data types. However, - // only in case it encounters a - // parallel::distributed::Vector or - // an STL vector collecting these vectors, - // it calls functions that exchange data at - // the beginning and the end of the - // loop. In the loop over the cells, we - // first have to read in the values in the - // vectors related to the local - // values. Then, we evaluate the value and - // the gradient of the current solution - // vector and the values of the old vector - // at the quadrature points. Then, we - // combine the terms in the scheme in the - // loop over the quadrature - // points. Finally, we integrate the result - // against the test function and accumulate - // the result to the global solution vector - // @p dst. + // This operator implements the core operation of the program, the + // integration over a range of cells for the nonlinear operator of the + // Sine-Gordon problem. The implementation is based on the FEEvaluationGL + // class since we are using the cell-based implementation for Gauss-Lobatto + // elements. + + // The nonlinear function that we have to evaluate for the time stepping + // routine includes the value of the function at the present time @p current + // as well as the value at the previous time step @p old. Both values are + // passed to the operator in the collection of source vectors @p src, which + // is simply an STL vector of pointers to the actual solution vectors. This + // construct of collecting several source vectors into one is necessary as + // the cell loop in @p MatrixFree takes exactly one source and one + // destination vector, even if we happen to use many vectors like the two in + // this case. Note that the cell loop accepts any valid class for input and + // output, which does not only include vectors but general data + // types. However, only in case it encounters a + // parallel::distributed::Vector or an STL vector collecting these + // vectors, it calls functions that exchange data at the beginning and the + // end of the loop. In the loop over the cells, we first have to read in the + // values in the vectors related to the local values. Then, we evaluate the + // value and the gradient of the current solution vector and the values of + // the old vector at the quadrature points. Then, we combine the terms in + // the scheme in the loop over the quadrature points. Finally, we integrate + // the result against the test function and accumulate the result to the + // global solution vector @p dst. template void SineGordonOperation:: local_apply (const MatrixFree &data, @@ -258,20 +211,14 @@ namespace Step48 //@sect4{SineGordonOperation::apply} - // This function performs the time stepping - // routine based on the cell-local - // strategy. First the destination vector is - // set to zero, then the cell-loop is called, - // and finally the solution is multiplied by - // the inverse mass matrix. The structure of - // the cell loop is implemented in the cell - // finite element operator class. On each cell - // it applies the routine defined as the - // local_apply() method of the - // class SineGordonOperation, - // i.e., this. One could also - // provide a function with the same signature - // that is not part of a class. + // This function performs the time stepping routine based on the cell-local + // strategy. 
First the destination vector is set to zero, then the cell-loop + // is called, and finally the solution is multiplied by the inverse mass + // matrix. The structure of the cell loop is implemented in the cell finite + // element operator class. On each cell it applies the routine defined as + // the local_apply() method of the class + // SineGordonOperation, i.e., this. One could also + // provide a function with the same signature that is not part of a class. template void SineGordonOperation:: apply (parallel::distributed::Vector &dst, @@ -286,11 +233,9 @@ namespace Step48 //@sect3{Equation data} - // We define a time-dependent function that is - // used as initial value. Different solutions - // can be obtained by varying the starting - // time. This function has already been - // explained in step-25. + // We define a time-dependent function that is used as initial + // value. Different solutions can be obtained by varying the starting + // time. This function has already been explained in step-25. template class ExactSolution : public Function { @@ -322,12 +267,10 @@ namespace Step48 // @sect3{SineGordonProblem class} - // This is the main class that builds on the - // class in step-25. However, we replaced - // the SparseMatrix class by the - // MatrixFree class to store - // the geometry data. Also, we use a - // distributed triangulation in this example. + // This is the main class that builds on the class in step-25. However, we + // replaced the SparseMatrix class by the MatrixFree class to store + // the geometry data. Also, we use a distributed triangulation in this + // example. template class SineGordonProblem { @@ -367,20 +310,14 @@ namespace Step48 //@sect4{SineGordonProblem::SineGordonProblem} - // This is the constructor of the - // SineGordonProblem class. The time interval - // and time step size are defined - // here. Moreover, we use the degree of the - // finite element that we defined at the top - // of the program to initialize a FE_Q finite - // element based on Gauss-Lobatto support - // points. These points are convenient because - // in conjunction with a QGaussLobatto - // quadrature rule of the same order they give - // a diagonal mass matrix without compromising - // accuracy too much (note that the - // integration is inexact, though), see also - // the discussion in the introduction. + // This is the constructor of the SineGordonProblem class. The time interval + // and time step size are defined here. Moreover, we use the degree of the + // finite element that we defined at the top of the program to initialize a + // FE_Q finite element based on Gauss-Lobatto support points. These points + // are convenient because in conjunction with a QGaussLobatto quadrature + // rule of the same order they give a diagonal mass matrix without + // compromising accuracy too much (note that the integration is inexact, + // though), see also the discussion in the introduction. template SineGordonProblem::SineGordonProblem () : @@ -400,20 +337,15 @@ namespace Step48 //@sect4{SineGordonProblem::make_grid_and_dofs} - // As in step-25 this functions sets up a cube - // grid in dim dimensions of - // extent $[-15,15]$. We refine the mesh more - // in the center of the domain since the - // solution is concentrated there. We first - // refine all cells whose center is within a - // radius of 11, and then refine once more for - // a radius 6. 
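The two-stage refinement just described can be sketched as a small free function (a hedged illustration, assuming an existing Triangulation; the include path follows the usual deal.II layout). Calling it once with radius 11 and once with radius 6 reproduces the behavior outlined above.

#include <deal.II/grid/tria.h>

using namespace dealii;

template <int dim>
void refine_within_radius (Triangulation<dim> &triangulation,
                           const double        radius)
{
  // flag every active cell whose center lies inside the given radius ...
  typename Triangulation<dim>::active_cell_iterator
    cell = triangulation.begin_active(),
    endc = triangulation.end();
  for (; cell != endc; ++cell)
    if (cell->center().norm() < radius)
      cell->set_refine_flag ();

  // ... and refine them, keeping the mesh conforming where necessary
  triangulation.execute_coarsening_and_refinement ();
}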
This is simple ad-hoc - // refinement could be done better by adapting - // the mesh to the solution using error - // estimators during the time stepping as done - // in other example programs, and using - // parallel::distributed::SolutionTransfer to - // transfer the solution to the new mesh. + // As in step-25 this functions sets up a cube grid in dim + // dimensions of extent $[-15,15]$. We refine the mesh more in the center of + // the domain since the solution is concentrated there. We first refine all + // cells whose center is within a radius of 11, and then refine once more + // for a radius 6. This is simple ad-hoc refinement could be done better by + // adapting the mesh to the solution using error estimators during the time + // stepping as done in other example programs, and using + // parallel::distributed::SolutionTransfer to transfer the solution to the + // new mesh. template void SineGordonProblem::make_grid_and_dofs () { @@ -453,27 +385,19 @@ namespace Step48 << std::endl; - // We generate hanging node constraints for - // ensuring continuity of the solution. As in - // step-40, we need to equip the constraint - // matrix with the IndexSet of locally - // relevant degrees of freedom to avoid it to - // consume too much memory for big - // problems. Next, the MatrixFree - // for the problem is set up. Note - // that we specify the MPI communicator which - // we are going to use, and that we also want - // to use shared-memory parallelization (hence - // one would use multithreading for intra-node - // parallelism and not MPI; note that we here - // choose the standard option — if we - // wanted to disable shared memory - // parallelization, we would choose @p - // none). Finally, three solution vectors are - // initialized. MatrixFree stores the - // layout that is to be used by distributed - // vectors, so we just ask it to initialize - // the vectors. + // We generate hanging node constraints for ensuring continuity of the + // solution. As in step-40, we need to equip the constraint matrix with + // the IndexSet of locally relevant degrees of freedom to avoid it to + // consume too much memory for big problems. Next, the MatrixFree + // for the problem is set up. Note that we specify the MPI + // communicator which we are going to use, and that we also want to use + // shared-memory parallelization (hence one would use multithreading for + // intra-node parallelism and not MPI; note that we here choose the + // standard option — if we wanted to disable shared memory + // parallelization, we would choose @p none). Finally, three solution + // vectors are initialized. MatrixFree stores the layout that is to be + // used by distributed vectors, so we just ask it to initialize the + // vectors. DoFTools::extract_locally_relevant_dofs (dof_handler, locally_relevant_dofs); constraints.clear(); @@ -499,27 +423,18 @@ namespace Step48 //@sect4{SineGordonProblem::output_results} - // This function prints the norm of the - // solution and writes the solution vector to - // a file. The norm is standard (except for - // the fact that we need to be sure to only - // count norms on locally owned cells), and - // the second is similar to what we did in - // step-40. However, we first need to generate - // an appropriate vector for output: The ones - // we used during time stepping contained - // information about ghosts dofs that one - // needs write access to during the loops over - // cell. However, that is not the same as - // needed when outputting. 
So we first - // initialize a vector with locally relevant - // degrees of freedom by copying the solution - // (note how we use the function @p copy_from - // to transfer data between vectors with the - // same local range, but different layouts of - // ghosts). Then, we import the values on the - // ghost DoFs and then distribute the - // constraints (as constraints are zero in the + // This function prints the norm of the solution and writes the solution + // vector to a file. The norm is standard (except for the fact that we need + // to be sure to only count norms on locally owned cells), and the second is + // similar to what we did in step-40. However, we first need to generate an + // appropriate vector for output: The ones we used during time stepping + // contained information about ghosts dofs that one needs write access to + // during the loops over cell. However, that is not the same as needed when + // outputting. So we first initialize a vector with locally relevant degrees + // of freedom by copying the solution (note how we use the function @p + // copy_from to transfer data between vectors with the same local range, but + // different layouts of ghosts). Then, we import the values on the ghost + // DoFs and then distribute the constraints (as constraints are zero in the // vectors during loop over all cells). template void @@ -582,26 +497,18 @@ namespace Step48 // @sect4{SineGordonProblem::run} - // This function is called by the main - // function and calls the subroutines - // of the class. + // This function is called by the main function and calls the subroutines of + // the class. // - // The first step is to set up the grid and - // the cell operator. Then, the time step is - // computed from the CFL number given in the - // constructor and the finest mesh size. The - // finest mesh size is computed as the - // diameter of the last cell in the - // triangulation, which is the last cell on - // the finest level of the mesh. This is only - // possible for Cartesian meshes, otherwise, - // one needs to loop over all cells). Note - // that we need to query all the processors - // for their finest cell since the not all - // processors might hold a region where the - // mesh is at the finest level. Then, we - // readjust the time step a little to hit the - // final time exactly if necessary. + // The first step is to set up the grid and the cell operator. Then, the + // time step is computed from the CFL number given in the constructor and + // the finest mesh size. The finest mesh size is computed as the diameter of + // the last cell in the triangulation, which is the last cell on the finest + // level of the mesh. This is only possible for Cartesian meshes, otherwise, + // one needs to loop over all cells). Note that we need to query all the + // processors for their finest cell since the not all processors might hold + // a region where the mesh is at the finest level. Then, we readjust the + // time step a little to hit the final time exactly if necessary. template void SineGordonProblem::run () @@ -617,24 +524,17 @@ namespace Step48 pcout << " Time step size: " << time_step << ", finest cell: " << global_min_cell_diameter << std::endl << std::endl; - // Next the initial value is set. Since we - // have a two-step time stepping method, we - // also need a value of the solution at - // time-time_step. 
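One way to picture the readjustment mentioned above is the following small arithmetic sketch (an illustration of the idea, not necessarily the exact formula the program uses): shrink the CFL-limited step just enough that an integer number of equal steps lands exactly on the final time.

#include <cmath>

double adjusted_time_step (const double cfl_number,
                           const double finest_cell_diameter,
                           const double time,
                           const double final_time)
{
  // CFL-limited step size based on the finest cell
  const double cfl_step = cfl_number * finest_cell_diameter;

  // smallest number of equal steps that does not exceed the CFL limit
  const unsigned int n_steps =
    static_cast<unsigned int> (std::ceil ((final_time - time) / cfl_step));

  // equal steps that hit final_time exactly
  return (final_time - time) / n_steps;
}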
For accurate results, one - // would need to compute this from the time - // derivative of the solution at initial time, - // but here we ignore this difficulty and just - // set it to the initial value function at - // that artificial time. - - // We create an output of the initial - // value. Then we also need to collect - // the two starting solutions in an STL - // vector of pointers field and to set up - // an instance of the - // SineGordonOperation class - // based on the finite element degree - // specified at the top of this file. + // Next the initial value is set. Since we have a two-step time stepping + // method, we also need a value of the solution at time-time_step. For + // accurate results, one would need to compute this from the time + // derivative of the solution at initial time, but here we ignore this + // difficulty and just set it to the initial value function at that + // artificial time. + + // We create an output of the initial value. Then we also need to collect + // the two starting solutions in an STL vector of pointers field and to + // set up an instance of the SineGordonOperation class + // based on the finite element degree specified at the top of this file. VectorTools::interpolate (dof_handler, ExactSolution (1, time), solution); @@ -650,38 +550,23 @@ namespace Step48 SineGordonOperation sine_gordon_op (matrix_free_data, time_step); - // Now loop over the time steps. In each - // iteration, we shift the solution - // vectors by one and call the - // apply function of the - // SineGordonOperator . Then, we - // write the solution to a file. We clock - // the wall times for the computational - // time needed as wall as the time needed - // to create the output and report the - // numbers when the time stepping is - // finished. + // Now loop over the time steps. In each iteration, we shift the solution + // vectors by one and call the apply function of the + // SineGordonOperator . Then, we write the solution to a file. We + // clock the wall times for the computational time needed as wall as the + // time needed to create the output and report the numbers when the time + // stepping is finished. // - // Note how this shift is implemented: We - // simply call the swap method on the two - // vectors which swaps only some pointers - // without the need to copy data - // around. Obviously, this is a more - // efficient way to move data around. Let - // us see what happens in more detail: - // First, we exchange - // old_solution with - // old_old_solution, which - // means that - // old_old_solution gets - // old_solution, which is - // what we expect. Similarly, - // old_solution gets the - // content from solution in - // the next step. Afterward, - // solution holds - // old_old_solution, but - // that will be overwritten during this + // Note how this shift is implemented: We simply call the swap method on + // the two vectors which swaps only some pointers without the need to copy + // data around. Obviously, this is a more efficient way to move data + // around. Let us see what happens in more detail: First, we exchange + // old_solution with old_old_solution, which + // means that old_old_solution gets + // old_solution, which is what we expect. Similarly, + // old_solution gets the content from solution + // in the next step. Afterward, solution holds + // old_old_solution, but that will be overwritten during this // step. 
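As a minimal illustration of this pointer-swapping shift (using std::vector as a stand-in for the program's distributed vector type):

#include <vector>

void shift_time_levels (std::vector<double> &old_old_solution,
                        std::vector<double> &old_solution,
                        std::vector<double> &solution)
{
  // no entries are copied; only the internal buffers change owners
  old_old_solution.swap (old_solution);   // old_old_solution now holds the previous step
  old_solution.swap (solution);           // old_solution now holds the current step;
                                          // solution holds stale data that the next
                                          // time step overwrites anyway
}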
unsigned int timestep_number = 1; diff --git a/deal.II/examples/step-5/step-5.cc b/deal.II/examples/step-5/step-5.cc index fc5f1e582e..e01ffb37a8 100644 --- a/deal.II/examples/step-5/step-5.cc +++ b/deal.II/examples/step-5/step-5.cc @@ -11,9 +11,8 @@ // @sect3{Include files} -// Again, the first few include files -// are already known, so we won't -// comment on them: +// Again, the first few include files are already known, so we won't comment +// on them: #include #include #include @@ -35,43 +34,31 @@ #include #include -// This one is new. We want to read a -// triangulation from disk, and the -// class which does this is declared -// in the following file: +// This one is new. We want to read a triangulation from disk, and the class +// which does this is declared in the following file: #include -// We will use a circular domain, and -// the object describing the boundary -// of it comes from this file: +// We will use a circular domain, and the object describing the boundary of it +// comes from this file: #include // This is C++ ... #include -// ... and this is too: We will -// convert integers to strings using -// the C++ stringstream class -// ostringstream: +// ... and this is too: We will convert integers to strings using the C++ +// stringstream class ostringstream: #include -// Finally, this has been discussed -// in previous tutorial programs -// before: +// Finally, this has been discussed in previous tutorial programs before: using namespace dealii; // @sect3{The Step5 class template} -// The main class is mostly as in the -// previous example. The most visible -// change is that the function -// make_grid_and_dofs has been -// removed, since creating the grid -// is now done in the run -// function and the rest of its -// functionality is now in -// setup_system. Apart from this, -// everything is as before. +// The main class is mostly as in the previous example. The most visible +// change is that the function make_grid_and_dofs has been +// removed, since creating the grid is now done in the run +// function and the rest of its functionality is now in +// setup_system. Apart from this, everything is as before. template class Step5 { @@ -99,31 +86,19 @@ private: // @sect3{Nonconstant coefficients, using Assert} -// In step-4, we showed how to use -// non-constant boundary values and -// right hand side. In this example, -// we want to use a variable -// coefficient in the elliptic -// operator instead. Of course, the -// suitable object is a Function, -// as we have used for the right hand -// side and boundary values in the -// last example. We will use it -// again, but we implement another -// function value_list which -// takes a list of points and returns -// the values of the function at -// these points as a list. The reason -// why such a function is reasonable -// although we can get all the -// information from the value -// function as well will be explained -// below when assembling the matrix. +// In step-4, we showed how to use non-constant boundary values and right hand +// side. In this example, we want to use a variable coefficient in the +// elliptic operator instead. Of course, the suitable object is a +// Function, as we have used for the right hand side and boundary +// values in the last example. We will use it again, but we implement another +// function value_list which takes a list of points and returns +// the values of the function at these points as a list. 
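Such a class can be sketched as follows (a simplified, hypothetical stand-in; the program's own Coefficient class and the rationale for value_list follow below): a scalar Function that returns 20 inside a radius of 0.5 around the origin and 1 elsewhere, plus a value_list overload that fills a whole list of values in one call.

#include <deal.II/base/function.h>
#include <deal.II/base/point.h>
#include <vector>

using namespace dealii;

template <int dim>
class CoefficientSketch : public Function<dim>
{
public:
  virtual double value (const Point<dim>  &p,
                        const unsigned int component = 0) const
  {
    (void)component;                            // scalar function: component is ignored
    return (p.square() < 0.5*0.5 ? 20.0 : 1.0);
  }

  virtual void value_list (const std::vector<Point<dim> > &points,
                           std::vector<double>            &values,
                           const unsigned int              component = 0) const
  {
    // the program itself verifies values.size()==points.size() with Assert,
    // as discussed further down
    for (unsigned int i = 0; i < points.size(); ++i)
      values[i] = value (points[i], component);
  }
};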
The reason why such a +// function is reasonable although we can get all the information from the +// value function as well will be explained below when assembling +// the matrix. // -// The need to declare a seemingly -// useless default constructor exists -// here just as in the previous -// example. +// The need to declare a seemingly useless default constructor exists here +// just as in the previous example. template class Coefficient : public Function { @@ -140,18 +115,12 @@ public: -// This is the implementation of the -// coefficient function for a single -// point. We let it return 20 if the -// distance to the origin is less -// than 0.5, and 1 otherwise. As in -// the previous example, we simply -// ignore the second parameter of the -// function that is used to denote -// different components of -// vector-valued functions (we deal -// only with a scalar function here, -// after all): +// This is the implementation of the coefficient function for a single +// point. We let it return 20 if the distance to the origin is less than 0.5, +// and 1 otherwise. As in the previous example, we simply ignore the second +// parameter of the function that is used to denote different components of +// vector-valued functions (we deal only with a scalar function here, after +// all): template double Coefficient::value (const Point &p, const unsigned int /*component*/) const @@ -164,109 +133,55 @@ double Coefficient::value (const Point &p, -// And this is the function that -// returns the value of the -// coefficient at a whole list of -// points at once. Of course, we need -// to make sure that the values are -// the same as if we would ask the -// value function for each point -// individually. +// And this is the function that returns the value of the coefficient at a +// whole list of points at once. Of course, we need to make sure that the +// values are the same as if we would ask the value function for +// each point individually. // -// This method takes three -// parameters: a list of points at -// which to evaluate the function, a -// list that will hold the values at -// these points, and the vector -// component that should be zero here -// since we only have a single scalar -// function. Now, of course the size -// of the output array (values) -// must be the same as that of the -// input array (points), and we -// could simply assume that. However, -// in practice, it turns out that -// more than 90 per cent of -// programming errors are invalid -// function parameters such as -// invalid array sizes, etc, so we -// should try to make sure that the -// parameters are valid. For this, -// the Assert macro is a good means, -// since it makes sure that the -// condition which is given as first -// argument is valid, and if not -// throws an exception (its second -// argument) which will usually -// terminate the program giving -// information where the error -// occurred and what the reason -// was. This generally reduces the -// time to find programming errors -// dramatically and we have found -// assertions an invaluable means to -// program fast. +// This method takes three parameters: a list of points at which to evaluate +// the function, a list that will hold the values at these points, and the +// vector component that should be zero here since we only have a single +// scalar function. Now, of course the size of the output array +// (values) must be the same as that of the input array +// (points), and we could simply assume that. 
However, in +// practice, it turns out that more than 90 per cent of programming errors are +// invalid function parameters such as invalid array sizes, etc, so we should +// try to make sure that the parameters are valid. For this, the +// Assert macro is a good means, since it makes sure that the +// condition which is given as first argument is valid, and if not throws an +// exception (its second argument) which will usually terminate the program +// giving information where the error occurred and what the reason was. This +// generally reduces the time to find programming errors dramatically and we +// have found assertions an invaluable means to program fast. // -// On the other hand, all these -// checks (there are more than 4200 -// of them in the library at present) -// should not slow down the program -// too much if you want to do large -// computations. To this end, the -// Assert macro is only used in -// debug mode and expands to nothing -// if in optimized mode. Therefore, -// while you test your program on -// small problems and debug it, the -// assertions will tell you where the -// problems are. Once your program -// is stable, you can switch off -// debugging and the program will run -// your real computations without the -// assertions and at maximum -// speed. (In fact, it turns out the -// switching off all the checks in -// the library that prevent you from -// calling functions with the wrong -// arguments by switching to -// optimized mode, makes most -// programs run faster by about a -// factor of four. This should, -// however, not try to induce you to -// always run in optimized mode: Most -// people who have tried that soon -// realize that they introduce lots -// of errors that would have easily -// been caught had they run the -// program in debug mode while -// developing.) For those who want to -// try: The way to switch from debug -// mode to optimized mode is to go -// edit the Makefile in this -// directory. It should have a line -// debug-mode = on; simply -// replace it by debug-mode = off -// and recompile your program. The -// output of the make program -// should already indicate to you -// that the program is now compiled -// in optimized mode, and it will -// later also be linked to libraries -// that have been compiled for -// optimized mode. +// On the other hand, all these checks (there are more than 4200 of them in +// the library at present) should not slow down the program too much if you +// want to do large computations. To this end, the Assert macro +// is only used in debug mode and expands to nothing if in optimized +// mode. Therefore, while you test your program on small problems and debug +// it, the assertions will tell you where the problems are. Once your program +// is stable, you can switch off debugging and the program will run your real +// computations without the assertions and at maximum speed. (In fact, it +// turns out the switching off all the checks in the library that prevent you +// from calling functions with the wrong arguments by switching to optimized +// mode, makes most programs run faster by about a factor of four. This +// should, however, not try to induce you to always run in optimized mode: +// Most people who have tried that soon realize that they introduce lots of +// errors that would have easily been caught had they run the program in debug +// mode while developing.) For those who want to try: The way to switch from +// debug mode to optimized mode is to go edit the Makefile in this +// directory. 
It should have a line debug-mode = on; simply +// replace it by debug-mode = off and recompile your program. The +// output of the make program should already indicate to you that +// the program is now compiled in optimized mode, and it will later also be +// linked to libraries that have been compiled for optimized mode. // -// Here, as has been said above, we -// would like to make sure that the -// size of the two arrays is equal, -// and if not throw an -// exception. Comparing the sizes of -// two arrays is one of the most -// frequent checks, which is why -// there is already an exception -// class ExcDimensionMismatch -// that takes the sizes of two -// vectors and prints some output in -// case the condition is violated: +// Here, as has been said above, we would like to make sure that the size of +// the two arrays is equal, and if not throw an exception. Comparing the sizes +// of two arrays is one of the most frequent checks, which is why there is +// already an exception class ExcDimensionMismatch that takes the +// sizes of two vectors and prints some output in case the condition is +// violated: template void Coefficient::value_list (const std::vector > &points, @@ -275,72 +190,38 @@ void Coefficient::value_list (const std::vector > &points, { Assert (values.size() == points.size(), ExcDimensionMismatch (values.size(), points.size())); - // Since examples are not very good - // if they do not demonstrate their - // point, we will show how to - // trigger this exception at the - // end of the main program, and - // what output results from this - // (see the Results section of - // this example program). You will - // certainly notice that the output - // is quite well suited to quickly - // find what the problem is and - // what parameters are expected. An - // additional plus is that if the - // program is run inside a - // debugger, it will stop at the - // point where the exception is - // triggered, so you can go up the - // call stack to immediately find - // the place where the the array - // with the wrong size was set up. - - // While we're at it, we can do - // another check: the coefficient - // is a scalar, but the - // Function class also - // represents vector-valued - // function. A scalar function must - // therefore be considered as a - // vector-valued function with only - // one component, so the only valid - // component for which a user might - // ask is zero (we always count - // from zero). The following - // assertion checks this. If the - // condition in the Assert call - // is violated, an exception of - // type ExcRange will be - // triggered; that class takes the - // violating index as first - // argument, and the second and - // third arguments denote a range - // that includes the left point but - // is open at the right, i.e. here - // the interval [0,1). For integer - // arguments, this means that the - // only value in the range is the - // zero, of course. (The interval - // is half open since we also want - // to write exceptions like - // ExcRange(i,0,v.size()), - // where an index must be between - // zero but less than the size of - // an array. To save us the effort - // of writing v.size()-1 in - // many places, the range is - // defined as half-open.) + // Since examples are not very good if they do not demonstrate their point, + // we will show how to trigger this exception at the end of the main + // program, and what output results from this (see the Results + // section of this example program). 
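To show the same mechanism outside the tutorial code, here is a small hypothetical helper (not part of step-5) that guards against mismatched array sizes exactly as described:

#include <deal.II/base/exceptions.h>
#include <vector>

using namespace dealii;

void scale_values (const std::vector<double> &factors,
                   std::vector<double>       &values)
{
  // in debug mode this aborts with a readable message if the sizes differ;
  // in optimized mode the check expands to nothing
  Assert (values.size() == factors.size(),
          ExcDimensionMismatch (values.size(), factors.size()));

  for (unsigned int i = 0; i < values.size(); ++i)
    values[i] *= factors[i];
}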
You will certainly notice that the + // output is quite well suited to quickly find what the problem is and what + // parameters are expected. An additional plus is that if the program is run + // inside a debugger, it will stop at the point where the exception is + // triggered, so you can go up the call stack to immediately find the place + // where the the array with the wrong size was set up. + + // While we're at it, we can do another check: the coefficient is a scalar, + // but the Function class also represents vector-valued + // function. A scalar function must therefore be considered as a + // vector-valued function with only one component, so the only valid + // component for which a user might ask is zero (we always count from + // zero). The following assertion checks this. If the condition in the + // Assert call is violated, an exception of type + // ExcRange will be triggered; that class takes the violating + // index as first argument, and the second and third arguments denote a + // range that includes the left point but is open at the right, i.e. here + // the interval [0,1). For integer arguments, this means that the only value + // in the range is the zero, of course. (The interval is half open since we + // also want to write exceptions like ExcRange(i,0,v.size()), + // where an index must be between zero but less than the size of an + // array. To save us the effort of writing v.size()-1 in many + // places, the range is defined as half-open.) Assert (component == 0, ExcIndexRange (component, 0, 1)); - // The rest of the function is - // uneventful: we define - // n_q_points as an - // abbreviation for the number of - // points for which function values - // are requested, and then simply - // fill the output value: + // The rest of the function is uneventful: we define n_q_points + // as an abbreviation for the number of points for which function values are + // requested, and then simply fill the output value: const unsigned int n_points = points.size(); for (unsigned int i=0; i::Step5 () : // @sect4{Step5::setup_system} -// This is the function -// make_grid_and_dofs from the -// previous example, minus the -// generation of the grid. Everything -// else is unchanged: +// This is the function make_grid_and_dofs from the previous +// example, minus the generation of the grid. Everything else is unchanged: template void Step5::setup_system () { @@ -396,37 +274,21 @@ void Step5::setup_system () // @sect4{Step5::assemble_system} -// As in the previous examples, this -// function is not changed much with -// regard to its functionality, but -// there are still some optimizations -// which we will show. For this, it -// is important to note that if -// efficient solvers are used (such -// as the preconditions CG method), -// assembling the matrix and right -// hand side can take a comparable -// time, and you should think about -// using one or two optimizations at -// some places. +// As in the previous examples, this function is not changed much with regard +// to its functionality, but there are still some optimizations which we will +// show. For this, it is important to note that if efficient solvers are used +// (such as the preconditions CG method), assembling the matrix and right hand +// side can take a comparable time, and you should think about using one or +// two optimizations at some places. 
// -// What we will show here is how we -// can avoid calls to the -// shape_value, shape_grad, and -// quadrature_point functions of the -// FEValues object, and in particular -// optimize away most of the virtual -// function calls of the Function -// object. The way to do so will be -// explained in the following, while -// those parts of this function that -// are not changed with respect to -// the previous example are not -// commented on. +// What we will show here is how we can avoid calls to the shape_value, +// shape_grad, and quadrature_point functions of the FEValues object, and in +// particular optimize away most of the virtual function calls of the Function +// object. The way to do so will be explained in the following, while those +// parts of this function that are not changed with respect to the previous +// example are not commented on. // -// The first parts of the function -// are completely unchanged from -// before: +// The first parts of the function are completely unchanged from before: template void Step5::assemble_system () { @@ -444,88 +306,46 @@ void Step5::assemble_system () std::vector local_dof_indices (dofs_per_cell); - // Here is one difference: for this - // program, we will again use a - // constant right hand side - // function and zero boundary - // values, but a variable - // coefficient. We have already - // declared the class that - // represents this coefficient - // above, so we only have to - // declare a corresponding object + // Here is one difference: for this program, we will again use a constant + // right hand side function and zero boundary values, but a variable + // coefficient. We have already declared the class that represents this + // coefficient above, so we only have to declare a corresponding object // here. // - // Then, below, we will ask the - // coefficient function object - // to compute the values of the - // coefficient at all quadrature - // points on one cell at once. The - // reason for this is that, if you - // look back at how we did this in - // step-4, you will realize that we - // called the function computing - // the right hand side value inside - // nested loops over all degrees of - // freedom and over all quadrature - // points, - // i.e. dofs_per_cell*n_q_points - // times. For the coefficient that - // is used inside the matrix, this - // would actually be - // dofs_per_cell*dofs_per_cell*n_q_points. On - // the other hand, the function - // will of course return the same - // value every time it is called - // with the same quadrature point, - // independently of what shape - // function we presently treat; - // secondly, these are virtual - // function calls, so are rather - // expensive. Obviously, there are - // only n_q_point different values, - // and we shouldn't call the - // function more often than - // that. Or, even better than this, - // compute all of these values at - // once, and get away with a single + // Then, below, we will ask the coefficient function object to + // compute the values of the coefficient at all quadrature points on one + // cell at once. The reason for this is that, if you look back at how we did + // this in step-4, you will realize that we called the function computing + // the right hand side value inside nested loops over all degrees of freedom + // and over all quadrature points, i.e. dofs_per_cell*n_q_points times. For + // the coefficient that is used inside the matrix, this would actually be + // dofs_per_cell*dofs_per_cell*n_q_points. 
On the other hand, the function + // will of course return the same value every time it is called with the + // same quadrature point, independently of what shape function we presently + // treat; secondly, these are virtual function calls, so are rather + // expensive. Obviously, there are only n_q_point different values, and we + // shouldn't call the function more often than that. Or, even better than + // this, compute all of these values at once, and get away with a single // function call per cell. // - // This is exactly what we are - // going to do. For this, we need - // some space to store the values - // in. We therefore also have to - // declare an array to hold these - // values: + // This is exactly what we are going to do. For this, we need some space to + // store the values in. We therefore also have to declare an array to hold + // these values: const Coefficient coefficient; std::vector coefficient_values (n_q_points); - // Next is the typical loop over - // all cells to compute local - // contributions and then to - // transfer them into the global - // matrix and vector. + // Next is the typical loop over all cells to compute local contributions + // and then to transfer them into the global matrix and vector. // - // The only two things in which - // this loop differs from step-4 is - // that we want to compute the - // value of the coefficient in all - // quadrature points on the present - // cell at the beginning, and then - // use it in the computation of the - // local contributions. This is - // what we do in the call to - // coefficient.value_list in - // the fourth line of the loop. + // The only two things in which this loop differs from step-4 is that we + // want to compute the value of the coefficient in all quadrature points on + // the present cell at the beginning, and then use it in the computation of + // the local contributions. This is what we do in the call to + // coefficient.value_list in the fourth line of the loop. // - // The second change is how we make - // use of this coefficient in - // computing the cell matrix - // contributions. This is in the - // obvious way, and not worth more - // comments. For the right hand - // side, we use a constant value - // again. + // The second change is how we make use of this coefficient in computing the + // cell matrix contributions. This is in the obvious way, and not worth more + // comments. For the right hand side, we use a constant value again. typename DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); @@ -566,8 +386,7 @@ void Step5::assemble_system () } } - // With the matrix so built, we use - // zero boundary values again: + // With the matrix so built, we use zero boundary values again: std::map boundary_values; VectorTools::interpolate_boundary_values (dof_handler, 0, @@ -582,55 +401,31 @@ void Step5::assemble_system () // @sect4{Step5::solve} -// The solution process again looks -// mostly like in the previous -// examples. However, we will now use -// a preconditioned conjugate -// gradient algorithm. It is not very -// difficult to make this change. In -// fact, the only thing we have to -// alter is that we need an object -// which will act as a -// preconditioner. We will use SSOR -// (symmetric successive -// overrelaxation), with a relaxation -// factor of 1.2. 
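The corresponding solver setup can be sketched as follows (assuming a SparseMatrix<double> system_matrix and Vector<double> objects solution and system_rhs as in the surrounding program; the iteration limit and tolerance are illustrative, and the role of PreconditionSSOR is explained in the paragraph that continues below):

#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/precondition.h>

// ... inside the solve() routine, roughly:
SolverControl      solver_control (1000, 1e-12); // at most 1000 iterations, tolerance 1e-12
SolverCG<>         solver (solver_control);

PreconditionSSOR<> preconditioner;
preconditioner.initialize (system_matrix, 1.2);  // relaxation factor 1.2

solver.solve (system_matrix, solution, system_rhs,
              preconditioner);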
For this purpose, -// the SparseMatrix class has a -// function which does one SSOR step, -// and we need to package the address -// of this function together with the -// matrix on which it should act -// (which is the matrix to be -// inverted) and the relaxation -// factor into one object. The -// PreconditionSSOR class does -// this for us. (PreconditionSSOR -// class takes a template argument -// denoting the matrix type it is -// supposed to work on. The default -// value is SparseMatrix@, -// which is exactly what we need -// here, so we simply stick with the -// default and do not specify -// anything in the angle brackets.) +// The solution process again looks mostly like in the previous +// examples. However, we will now use a preconditioned conjugate gradient +// algorithm. It is not very difficult to make this change. In fact, the only +// thing we have to alter is that we need an object which will act as a +// preconditioner. We will use SSOR (symmetric successive overrelaxation), +// with a relaxation factor of 1.2. For this purpose, the +// SparseMatrix class has a function which does one SSOR step, +// and we need to package the address of this function together with the +// matrix on which it should act (which is the matrix to be inverted) and the +// relaxation factor into one object. The PreconditionSSOR class +// does this for us. (PreconditionSSOR class takes a template +// argument denoting the matrix type it is supposed to work on. The default +// value is SparseMatrix@, which is exactly what we need +// here, so we simply stick with the default and do not specify anything in +// the angle brackets.) // -// Note that for the present case, -// SSOR doesn't really perform much -// better than most other -// preconditioners (though better -// than no preconditioning at all). A -// brief comparison of different -// preconditioners is presented in -// the Results section of the next -// tutorial program, step-6. +// Note that for the present case, SSOR doesn't really perform much better +// than most other preconditioners (though better than no preconditioning at +// all). A brief comparison of different preconditioners is presented in the +// Results section of the next tutorial program, step-6. // -// With this, the rest of the -// function is trivial: instead of -// the PreconditionIdentity -// object we have created before, we -// now use the preconditioner we have -// declared, and the CG solver will -// do the rest for us: +// With this, the rest of the function is trivial: instead of the +// PreconditionIdentity object we have created before, we now use +// the preconditioner we have declared, and the CG solver will do the rest for +// us: template void Step5::solve () { @@ -651,13 +446,9 @@ void Step5::solve () // @sect4{Step5::output_results and setting output flags} -// Writing output to a file is mostly -// the same as for the previous -// example, but here we will show how -// to modify some output options and -// how to construct a different -// filename for each refinement -// cycle. +// Writing output to a file is mostly the same as for the previous example, +// but here we will show how to modify some output options and how to +// construct a different filename for each refinement cycle. 
template void Step5::output_results (const unsigned int cycle) const { @@ -668,126 +459,69 @@ void Step5::output_results (const unsigned int cycle) const data_out.build_patches (); - // For this example, we would like - // to write the output directly to - // a file in Encapsulated - // Postscript (EPS) format. The - // library supports this, but - // things may be a bit more - // difficult sometimes, since EPS - // is a printing format, unlike - // most other supported formats - // which serve as input for - // graphical tools. Therefore, you - // can't scale or rotate the image - // after it has been written to - // disk, and you have to decide - // about the viewpoint or the - // scaling in advance. + // For this example, we would like to write the output directly to a file in + // Encapsulated Postscript (EPS) format. The library supports this, but + // things may be a bit more difficult sometimes, since EPS is a printing + // format, unlike most other supported formats which serve as input for + // graphical tools. Therefore, you can't scale or rotate the image after it + // has been written to disk, and you have to decide about the viewpoint or + // the scaling in advance. // - // The defaults in the library are - // usually quite reasonable, and - // regarding viewpoint and scaling - // they coincide with the defaults - // of Gnuplot. However, since this - // is a tutorial, we will - // demonstrate how to change - // them. For this, we first have to - // generate an object describing - // the flags for EPS output - // (similar flag classes exist for - // all supported output formats): + // The defaults in the library are usually quite reasonable, and regarding + // viewpoint and scaling they coincide with the defaults of + // Gnuplot. However, since this is a tutorial, we will demonstrate how to + // change them. For this, we first have to generate an object describing the + // flags for EPS output (similar flag classes exist for all supported output + // formats): DataOutBase::EpsFlags eps_flags; - // They are initialized with the - // default values, so we only have - // to change those that we don't - // like. For example, we would like - // to scale the z-axis differently - // (stretch each data point in - // z-direction by a factor of four): + // They are initialized with the default values, so we only have to change + // those that we don't like. For example, we would like to scale the z-axis + // differently (stretch each data point in z-direction by a factor of four): eps_flags.z_scaling = 4; - // Then we would also like to alter - // the viewpoint from which we look - // at the solution surface. The - // default is at an angle of 60 - // degrees down from the vertical - // axis, and 30 degrees rotated - // against it in mathematical - // positive sense. We raise our - // viewpoint a bit and look more - // along the y-axis: + // Then we would also like to alter the viewpoint from which we look at the + // solution surface. The default is at an angle of 60 degrees down from the + // vertical axis, and 30 degrees rotated against it in mathematical positive + // sense. We raise our viewpoint a bit and look more along the y-axis: eps_flags.azimut_angle = 40; eps_flags.turn_angle = 10; - // That shall suffice. There are - // more flags, for example whether - // to draw the mesh lines, which - // data vectors to use for - // colorization of the interior of - // the cells, and so on. 
You may - // want to take a look at the - // documentation of the EpsFlags - // structure to get an overview of - // what is possible. + // That shall suffice. There are more flags, for example whether to draw the + // mesh lines, which data vectors to use for colorization of the interior of + // the cells, and so on. You may want to take a look at the documentation of + // the EpsFlags structure to get an overview of what is possible. // - // The only thing still to be done, - // is to tell the output object to - // use these flags: + // The only thing still to be done, is to tell the output object to use + // these flags: data_out.set_flags (eps_flags); - // The above way to modify flags - // requires recompilation each time - // we would like to use different - // flags. This is inconvenient, and - // we will see more advanced ways - // in step-19 where the output - // flags are determined at run time - // using an input file (step-19 - // doesn't show many other things; - // you should feel free to read - // over it even if you haven't done - // step-6 to step-18 yet). - - // Finally, we need the filename to - // which the results are to be - // written. We would like to have - // it of the form - // solution-N.eps, where N is - // the number of the refinement - // cycle. Thus, we have to convert - // an integer to a part of a - // string; this can be done using - // the sprintf function, but in - // C++ there is a more elegant way: - // write everything into a special - // stream (just like writing into a - // file or to the screen) and - // retrieve what you wrote as a - // string. This applies the usual - // conversions from integer to - // strings, and one could as well - // use stream modifiers such as - // setw, setprecision, and - // so on. In C++, you can do this - // by using the so-called stringstream - // classes: + // The above way to modify flags requires recompilation each time we would + // like to use different flags. This is inconvenient, and we will see more + // advanced ways in step-19 where the output flags are determined at run + // time using an input file (step-19 doesn't show many other things; you + // should feel free to read over it even if you haven't done step-6 to + // step-18 yet). + + // Finally, we need the filename to which the results are to be written. We + // would like to have it of the form solution-N.eps, where N is + // the number of the refinement cycle. Thus, we have to convert an integer + // to a part of a string; this can be done using the sprintf + // function, but in C++ there is a more elegant way: write everything into a + // special stream (just like writing into a file or to the screen) and + // retrieve what you wrote as a string. This applies the usual conversions + // from integer to strings, and one could as well use stream modifiers such + // as setw, setprecision, and so on. In C++, you + // can do this by using the so-called stringstream classes: std::ostringstream filename; - // In order to now actually - // generate a filename, we fill the - // stringstream variable with the - // base of the filename, then the - // number part, and finally the - // suffix indicating the file type: + // In order to now actually generate a filename, we fill the stringstream + // variable with the base of the filename, then the number part, and finally + // the suffix indicating the file type: filename << "solution-" << cycle << ".eps"; - // We can get whatever we wrote to the - // stream using the str() function. 
The - // result is a string which we have to - // convert to a char* using the c_str() - // function. Use that as filename for the - // output stream and then write the data to - // the file: + // We can get whatever we wrote to the stream using the str() + // function. The result is a string which we have to convert to a char* + // using the c_str() function. Use that as filename for the + // output stream and then write the data to the file: std::ofstream output (filename.str().c_str()); data_out.write_eps (output); @@ -797,120 +531,64 @@ void Step5::output_results (const unsigned int cycle) const // @sect4{Step5::run} -// The second to last thing in this -// program is the definition of the -// run() function. In contrast to -// the previous programs, we will -// compute on a sequence of meshes -// that after each iteration is -// globall refined. The function -// therefore consists of a loop over -// 6 cycles. In each cycle, we first -// print the cycle number, and then -// have to decide what to do with the -// mesh. If this is not the first -// cycle, we simply refine the -// existing mesh once -// globally. Before running through -// these cycles, however, +// The second to last thing in this program is the definition of the +// run() function. In contrast to the previous programs, we will +// compute on a sequence of meshes that after each iteration is globall +// refined. The function therefore consists of a loop over 6 cycles. In each +// cycle, we first print the cycle number, and then have to decide what to do +// with the mesh. If this is not the first cycle, we simply refine the +// existing mesh once globally. Before running through these cycles, however, // we have to generate a mesh: -// In previous examples, we have -// already used some of the functions -// from the -// GridGenerator -// class. Here we would like to read -// a grid from a file where the cells -// are stored and which may originate -// from someone else, or may be the -// product of a mesh generator tool. +// In previous examples, we have already used some of the functions from the +// GridGenerator class. Here we would like to read a grid from a +// file where the cells are stored and which may originate from someone else, +// or may be the product of a mesh generator tool. // -// In order to read a grid from a -// file, we generate an object of -// data type GridIn and associate the -// triangulation to it (i.e. we tell -// it to fill our triangulation -// object when we ask it to read the -// file). Then we open the respective -// file and initialize the -// triangulation with the data in the -// file: +// In order to read a grid from a file, we generate an object of data type +// GridIn and associate the triangulation to it (i.e. we tell it to fill our +// triangulation object when we ask it to read the file). Then we open the +// respective file and initialize the triangulation with the data in the file: template void Step5::run () { GridIn grid_in; grid_in.attach_triangulation (triangulation); std::ifstream input_file("circle-grid.inp"); - // We would now like to read the - // file. However, the input file is - // only for a two-dimensional - // triangulation, while this - // function is a template for - // arbitrary dimension. Since this - // is only a demonstration program, - // we will not use different input - // files for the different - // dimensions, but rather kill the - // whole program if we are not in - // 2D: + // We would now like to read the file. 
However, the input file is only for a + // two-dimensional triangulation, while this function is a template for + // arbitrary dimension. Since this is only a demonstration program, we will + // not use different input files for the different dimensions, but rather + // kill the whole program if we are not in 2D: Assert (dim==2, ExcInternalError()); - // ExcInternalError is a globally - // defined exception, which may be - // thrown whenever something is - // terribly wrong. Usually, one - // would like to use more specific - // exceptions, and particular in - // this case one would of course - // try to do something else if - // dim is not equal to - // two, e.g. create a grid using - // library functions. Aborting a - // program is usually not a good - // idea and assertions should - // really only be used for - // exceptional cases which should - // not occur, but might due to - // stupidity of the programmer, - // user, or someone else. The - // situation above is not a very - // clever use of Assert, but again: - // this is a tutorial and it might - // be worth to show what not to do, - // after all. - - // So if we got past the assertion, - // we know that dim==2, and we can - // now actually read the grid. It - // is in UCD (unstructured cell - // data) format (but the ending of - // the UCD-file is - // inp), as supported - // as input format by the AVS - // Explorer (a visualization - // program), for example: + // ExcInternalError is a globally defined exception, which may be thrown + // whenever something is terribly wrong. Usually, one would like to use more + // specific exceptions, and particular in this case one would of course try + // to do something else if dim is not equal to two, e.g. create + // a grid using library functions. Aborting a program is usually not a good + // idea and assertions should really only be used for exceptional cases + // which should not occur, but might due to stupidity of the programmer, + // user, or someone else. The situation above is not a very clever use of + // Assert, but again: this is a tutorial and it might be worth to show what + // not to do, after all. + + // So if we got past the assertion, we know that dim==2, and we can now + // actually read the grid. It is in UCD (unstructured cell data) format (but + // the ending of the UCD-file is inp), as + // supported as input format by the AVS Explorer (a visualization program), + // for example: grid_in.read_ucd (input_file); - // If you like to use another input - // format, you have to use an other - // grid_in.read_xxx - // function. (See the documentation - // of the GridIn class - // to find out what input formats - // are presently supported.) - - // The grid in the file describes a - // circle. Therefore we have to use - // a boundary object which tells - // the triangulation where to put - // new points on the boundary when - // the grid is refined. This works - // in the same way as in the first - // example. Note that the - // HyperBallBoundary constructor - // takes two parameters, the center - // of the ball and the radius, but - // that their default (the origin - // and 1.0) are the ones which we - // would like to use here. + // If you like to use another input format, you have to use an other + // grid_in.read_xxx function. (See the documentation of the + // GridIn class to find out what input formats are presently + // supported.) + + // The grid in the file describes a circle. 
Therefore we have to use a + // boundary object which tells the triangulation where to put new points on + // the boundary when the grid is refined. This works in the same way as in + // the first example. Note that the HyperBallBoundary constructor takes two + // parameters, the center of the ball and the radius, but that their default + // (the origin and 1.0) are the ones which we would like to use here. static const HyperBallBoundary boundary; triangulation.set_boundary (0, boundary); @@ -921,11 +599,8 @@ void Step5::run () if (cycle != 0) triangulation.refine_global (1); - // Now that we have a mesh for - // sure, we write some output - // and do all the things that - // we have already seen in the - // previous examples. + // Now that we have a mesh for sure, we write some output and do all the + // things that we have already seen in the previous examples. std::cout << " Number of active cells: " << triangulation.n_active_cells() << std::endl @@ -943,10 +618,8 @@ void Step5::run () // @sect3{The main function} -// The main function looks mostly -// like the one in the previous -// example, so we won't comment on it -// further: +// The main function looks mostly like the one in the previous example, so we +// won't comment on it further: int main () { deallog.depth_console (0); @@ -954,28 +627,16 @@ int main () Step5<2> laplace_problem_2d; laplace_problem_2d.run (); - // Finally, we have promised to - // trigger an exception in the - // Coefficient class through - // the Assert macro we have - // introduced there. For this, we - // have to call its value_list - // function with two arrays of - // different size (the number in - // parentheses behind the - // declaration of the object). We - // have commented out these lines - // in order to allow the program to - // exit gracefully in normal - // situations (we use the program - // in day-to-day testing of changes - // to the library as well), so you - // will only get the exception by - // un-commenting the following - // lines. Take a look at the - // Results section of the program - // to see what happens when the - // code is actually run: + // Finally, we have promised to trigger an exception in the + // Coefficient class through the Assert macro we + // have introduced there. For this, we have to call its + // value_list function with two arrays of different size (the + // number in parentheses behind the declaration of the object). We have + // commented out these lines in order to allow the program to exit + // gracefully in normal situations (we use the program in day-to-day testing + // of changes to the library as well), so you will only get the exception by + // un-commenting the following lines. Take a look at the Results section of + // the program to see what happens when the code is actually run: /* Coefficient<2> coefficient; std::vector > points (2); diff --git a/deal.II/examples/step-6/step-6.cc b/deal.II/examples/step-6/step-6.cc index 84226f616d..07ff4acd78 100644 --- a/deal.II/examples/step-6/step-6.cc +++ b/deal.II/examples/step-6/step-6.cc @@ -11,10 +11,8 @@ // @sect3{Include files} -// The first few files have already -// been covered in previous examples -// and will thus not be further -// commented on. +// The first few files have already been covered in previous examples and will +// thus not be further commented on. 
#include #include #include @@ -40,85 +38,52 @@ #include #include -// From the following include file we -// will import the declaration of -// H1-conforming finite element shape -// functions. This family of finite -// elements is called FE_Q, and -// was used in all examples before -// already to define the usual bi- or -// tri-linear elements, but we will -// now use it for bi-quadratic -// elements: +// From the following include file we will import the declaration of +// H1-conforming finite element shape functions. This family of finite +// elements is called FE_Q, and was used in all examples before +// already to define the usual bi- or tri-linear elements, but we will now use +// it for bi-quadratic elements: #include -// We will not read the grid from a -// file as in the previous example, -// but generate it using a function -// of the library. However, we will -// want to write out the locally -// refined grids (just the grid, not -// the solution) in each step, so we -// need the following include file -// instead of grid_in.h: +// We will not read the grid from a file as in the previous example, but +// generate it using a function of the library. However, we will want to write +// out the locally refined grids (just the grid, not the solution) in each +// step, so we need the following include file instead of +// grid_in.h: #include -// When using locally refined grids, we will -// get so-called hanging -// nodes. However, the standard finite -// element methods assumes that the discrete -// solution spaces be continuous, so we need -// to make sure that the degrees of freedom -// on hanging nodes conform to some -// constraints such that the global solution -// is continuous. We are also going to store -// the boundary conditions in this -// object. The following file contains a -// class which is used to handle these -// constraints: +// When using locally refined grids, we will get so-called hanging +// nodes. However, the standard finite element methods assumes that the +// discrete solution spaces be continuous, so we need to make sure that the +// degrees of freedom on hanging nodes conform to some constraints such that +// the global solution is continuous. We are also going to store the boundary +// conditions in this object. The following file contains a class which is +// used to handle these constraints: #include -// In order to refine our grids -// locally, we need a function from -// the library that decides which -// cells to flag for refinement or -// coarsening based on the error -// indicators we have computed. This -// function is defined here: +// In order to refine our grids locally, we need a function from the library +// that decides which cells to flag for refinement or coarsening based on the +// error indicators we have computed. This function is defined here: #include -// Finally, we need a simple way to -// actually compute the refinement -// indicators based on some error -// estimat. While in general, -// adaptivity is very -// problem-specific, the error -// indicator in the following file -// often yields quite nicely adapted -// grids for a wide class of -// problems. +// Finally, we need a simple way to actually compute the refinement indicators +// based on some error estimat. While in general, adaptivity is very +// problem-specific, the error indicator in the following file often yields +// quite nicely adapted grids for a wide class of problems. 
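// (To anticipate how the class declared in the following header will be used
// in refine_grid() further down: one asks it for one error indicator per
// active cell, roughly as in the sketch below. The face quadrature formula
// and the empty map of Neumann boundary functions that appear as arguments
// are explained in detail later on; dof_handler, triangulation and solution
// are the member variables of the Step6 class.
//
//   Vector<float> estimated_error_per_cell (triangulation.n_active_cells());
//   KellyErrorEstimator<dim>::estimate (dof_handler,
//                                       QGauss<dim-1>(3),
//                                       typename FunctionMap<dim>::type(),
//                                       solution,
//                                       estimated_error_per_cell);
//
// More on each of these arguments below.)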
#include -// Finally, this is as in previous -// programs: +// Finally, this is as in previous programs: using namespace dealii; // @sect3{The Step6 class template} -// The main class is again almost -// unchanged. Two additions, however, -// are made: we have added the -// refine_grid function, which is -// used to adaptively refine the grid -// (instead of the global refinement -// in the previous examples), and a -// variable which will hold the -// constraints. In addition, we -// have added a destructor to the -// class for reasons that will become -// clear when we discuss its -// implementation. +// The main class is again almost unchanged. Two additions, however, are made: +// we have added the refine_grid function, which is used to +// adaptively refine the grid (instead of the global refinement in the +// previous examples), and a variable which will hold the constraints. In +// addition, we have added a destructor to the class for reasons that will +// become clear when we discuss its implementation. template class Step6 { @@ -140,12 +105,9 @@ private: DoFHandler dof_handler; FE_Q fe; - // This is the new variable in - // the main class. We need an - // object which holds a list of - // constraints to hold the - // hanging nodes and the - // boundary conditions. + // This is the new variable in the main class. We need an object which holds + // a list of constraints to hold the hanging nodes and the boundary + // conditions. ConstraintMatrix constraints; SparsityPattern sparsity_pattern; @@ -158,9 +120,8 @@ private: // @sect3{Nonconstant coefficients} -// The implementation of nonconstant -// coefficients is copied verbatim -// from step-5: +// The implementation of nonconstant coefficients is copied verbatim from +// step-5: template class Coefficient : public Function @@ -217,15 +178,10 @@ void Coefficient::value_list (const std::vector > &points, // @sect4{Step6::Step6} -// The constructor of this class is -// mostly the same as before, but -// this time we want to use the -// quadratic element. To do so, we -// only have to replace the -// constructor argument (which was -// 1 in all previous examples) by -// the desired polynomial degree -// (here 2): +// The constructor of this class is mostly the same as before, but this time +// we want to use the quadratic element. To do so, we only have to replace the +// constructor argument (which was 1 in all previous examples) by +// the desired polynomial degree (here 2): template Step6::Step6 () : @@ -236,129 +192,65 @@ Step6::Step6 () // @sect4{Step6::~Step6} -// Here comes the added destructor of -// the class. The reason why we want -// to add it is a subtle change in -// the order of data elements in the -// class as compared to all previous -// examples: the dof_handler -// object was defined before and not -// after the fe object. Of course -// we could have left this order -// unchanged, but we would like to -// show what happens if the order is -// reversed since this produces a -// rather nasty side-effect and -// results in an error which is -// difficult to track down if one -// does not know what happens. +// Here comes the added destructor of the class. The reason why we want to add +// it is a subtle change in the order of data elements in the class as +// compared to all previous examples: the dof_handler object was +// defined before and not after the fe object. 
Of course we could +// have left this order unchanged, but we would like to show what happens if +// the order is reversed since this produces a rather nasty side-effect and +// results in an error which is difficult to track down if one does not know +// what happens. // -// Basically what happens is the -// following: when we distribute the -// degrees of freedom using the -// function call -// dof_handler.distribute_dofs(), -// the dof_handler also stores a -// pointer to the finite element in -// use. Since this pointer is used -// every now and then until either -// the degrees of freedom are -// re-distributed using another -// finite element object or until the -// dof_handler object is -// destroyed, it would be unwise if -// we would allow the finite element -// object to be deleted before the -// dof_handler object. To -// disallow this, the DoF handler -// increases a counter inside the -// finite element object which counts -// how many objects use that finite -// element (this is what the -// Subscriptor/SmartPointer -// class pair is used for, in case -// you want something like this for -// your own programs; see step-7 for -// a more complete discussion -// of this topic). The finite -// element object will refuse its -// destruction if that counter is -// larger than zero, since then some -// other objects might rely on the -// persistence of the finite element -// object. An exception will then be -// thrown and the program will -// usually abort upon the attempt to -// destroy the finite element. +// Basically what happens is the following: when we distribute the degrees of +// freedom using the function call dof_handler.distribute_dofs(), +// the dof_handler also stores a pointer to the finite element in +// use. Since this pointer is used every now and then until either the degrees +// of freedom are re-distributed using another finite element object or until +// the dof_handler object is destroyed, it would be unwise if we +// would allow the finite element object to be deleted before the +// dof_handler object. To disallow this, the DoF handler +// increases a counter inside the finite element object which counts how many +// objects use that finite element (this is what the +// Subscriptor/SmartPointer class pair is used for, +// in case you want something like this for your own programs; see step-7 for +// a more complete discussion of this topic). The finite element object will +// refuse its destruction if that counter is larger than zero, since then some +// other objects might rely on the persistence of the finite element +// object. An exception will then be thrown and the program will usually abort +// upon the attempt to destroy the finite element. // -// To be fair, such exceptions about -// still used objects are not -// particularly popular among -// programmers using deal.II, since -// they only tell us that something -// is wrong, namely that some other -// object is still using the object -// that is presently being -// destructed, but most of the time -// not who this user is. It is -// therefore often rather -// time-consuming to find out where -// the problem exactly is, although -// it is then usually straightforward -// to remedy the situation. 
However, -// we believe that the effort to find -// invalid references to objects that -// do no longer exist is less if the -// problem is detected once the -// reference becomes invalid, rather -// than when non-existent objects are -// actually accessed again, since -// then usually only invalid data is -// accessed, but no error is -// immediately raised. +// To be fair, such exceptions about still used objects are not particularly +// popular among programmers using deal.II, since they only tell us that +// something is wrong, namely that some other object is still using the object +// that is presently being destructed, but most of the time not who this user +// is. It is therefore often rather time-consuming to find out where the +// problem exactly is, although it is then usually straightforward to remedy +// the situation. However, we believe that the effort to find invalid +// references to objects that do no longer exist is less if the problem is +// detected once the reference becomes invalid, rather than when non-existent +// objects are actually accessed again, since then usually only invalid data +// is accessed, but no error is immediately raised. // -// Coming back to the present -// situation, if we did not write -// this destructor, the compiler will -// generate code that triggers -// exactly the behavior sketched -// above. The reason is that member -// variables of the -// Step6 class are -// destructed bottom-up (i.e. in -// reverse order of their declaration -// in the class), as always in -// C++. Thus, the finite element -// object will be destructed before -// the DoF handler object, since its -// declaration is below the one of -// the DoF handler. This triggers the -// situation above, and an exception -// will be raised when the fe -// object is destructed. What needs -// to be done is to tell the -// dof_handler object to release -// its lock to the finite element. Of -// course, the dof_handler will -// only release its lock if it really -// does not need the finite element -// any more, i.e. when all finite -// element related data is deleted -// from it. For this purpose, the -// DoFHandler class has a -// function clear which deletes -// all degrees of freedom, and -// releases its lock to the finite -// element. After this, you can -// safely destruct the finite element -// object since its internal counter -// is then zero. +// Coming back to the present situation, if we did not write this destructor, +// the compiler will generate code that triggers exactly the behavior sketched +// above. The reason is that member variables of the Step6 class +// are destructed bottom-up (i.e. in reverse order of their declaration in the +// class), as always in C++. Thus, the finite element object will be +// destructed before the DoF handler object, since its declaration is below +// the one of the DoF handler. This triggers the situation above, and an +// exception will be raised when the fe object is +// destructed. What needs to be done is to tell the dof_handler +// object to release its lock to the finite element. Of course, the +// dof_handler will only release its lock if it really does not +// need the finite element any more, i.e. when all finite element related data +// is deleted from it. For this purpose, the DoFHandler class has +// a function clear which deletes all degrees of freedom, and +// releases its lock to the finite element. After this, you can safely +// destruct the finite element object since its internal counter is then zero. 
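// (In code, the cure described above is short: the destructor defined just
// below presumably needs to do nothing more than release the lock by clearing
// the DoF handler, along the lines of this sketch:
//
//   template <int dim>
//   Step6<dim>::~Step6 ()
//   {
//     dof_handler.clear ();
//   }
//
// That single call deletes all degrees of freedom and thereby lets the finite
// element object be destroyed later on without complaint.)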
//
-// For completeness, we add the
-// output of the exception that would
-// have been triggered without this
-// destructor, to the end of the
-// results section of this example.
+// For completeness, we add the output of the exception that would have been
+// triggered without this destructor, to the end of the results section of
+// this example.
 template
 Step6::~Step6 ()
 {
@@ -368,42 +260,24 @@ Step6::~Step6 ()

 // @sect4{Step6::setup_system}

-// The next function is setting up
-// all the variables that describe
-// the linear finite element problem,
-// such as the DoF handler, the
-// matrices, and vectors. The
-// difference to what we did in
-// step-5 is only that we now also
-// have to take care of handing node
-// constraints. These constraints are
-// handled almost transparently by
-// the library, i.e. you only need to
-// know that they exist and how to
-// get them, but you do not have to
-// know how they are formed or what
-// exactly is done with them.
+// The next function is setting up all the variables that describe the linear
+// finite element problem, such as the DoF handler, the matrices, and
+// vectors. The difference to what we did in step-5 is only that we now also
+// have to take care of hanging node constraints. These constraints are
+// handled almost transparently by the library, i.e. you only need to know
+// that they exist and how to get them, but you do not have to know how they
+// are formed or what exactly is done with them.
 //
-// At the beginning of the function,
-// you find all the things that are
-// the same as in step-5: setting up
-// the degrees of freedom (this time
-// we have quadratic elements, but
-// there is no difference from a user
-// code perspective to the linear --
-// or cubic, for that matter --
-// case), generating the sparsity
-// pattern, and initializing the
-// solution and right hand side
-// vectors. Note that the sparsity
-// pattern will have significantly
-// more entries per row now, since
-// there are now 9 degrees of freedom
-// per cell, not only four, that can
-// couple with each other. The
-// dof_Handler.max_couplings_between_dofs()
-// call will take care of this,
-// however:
+// At the beginning of the function, you find all the things that are the same
+// as in step-5: setting up the degrees of freedom (this time we have
+// quadratic elements, but there is no difference from a user code perspective
+// to the linear -- or cubic, for that matter -- case), generating the
+// sparsity pattern, and initializing the solution and right hand side
+// vectors. Note that the sparsity pattern will have significantly more
+// entries per row now, since there are now 9 degrees of freedom per cell, not
+// only four, that can couple with each other. The
+// dof_handler.max_couplings_between_dofs() call will take care
+// of this, however:
 template
 void Step6::setup_system ()
 {
@@ -413,154 +287,98 @@ void Step6::setup_system ()

   system_rhs.reinit (dof_handler.n_dofs());

-  // After setting up all the degrees
-  // of freedoms, here are now the
-  // differences compared to step-5,
-  // all of which are related to
-  // constraints associated with the
-  // hanging nodes. In the class
-  // desclaration, we have already
-  // allocated space for an object
-  // constraints
-  // that will hold a list of these
-  // constraints (they form a matrix,
-  // which is reflected in the name
-  // of the class, but that is
-  // immaterial for the moment). Now
-  // we have to fill this
-  // object.
This is done using the
-  // following function calls (the
-  // first clears the contents of the
-  // object that may still be left
-  // over from computations on the
-  // previous mesh before the last
-  // adaptive refinement):
+  // After setting up all the degrees of freedom, here are now the
+  // differences compared to step-5, all of which are related to constraints
+  // associated with the hanging nodes. In the class declaration, we have
+  // already allocated space for an object constraints that will
+  // hold a list of these constraints (they form a matrix, which is reflected
+  // in the name of the class, but that is immaterial for the moment). Now we
+  // have to fill this object. This is done using the following function calls
+  // (the first clears the contents of the object that may still be left over
+  // from computations on the previous mesh before the last adaptive
+  // refinement):
   constraints.clear ();
   DoFTools::make_hanging_node_constraints (dof_handler,
                                            constraints);

-  // Now we are ready to interpolate the
-  // ZeroFunction to our boundary with
-  // indicator 0 (the whole boundary) and
-  // store the resulting constraints in our
-  // constraints object. Note
-  // that we do not to apply the boundary
-  // conditions after assembly, like we did
-  // in earlier steps. As almost all the
-  // stuff, the interpolation of boundary
-  // values works also for higher order
-  // elements without the need to change your
-  // code for that. We note that for proper
-  // results, it is important that the
-  // elimination of boundary nodes from the
-  // system of equations happens *after* the
-  // elimination of hanging nodes. For that
-  // reason we are filling the boundary
-  // values into the ContraintMatrix after
-  // the hanging node constraints.
+  // Now we are ready to interpolate the ZeroFunction to our boundary with
+  // indicator 0 (the whole boundary) and store the resulting constraints in
+  // our constraints object. Note that we do not need to apply the
+  // boundary conditions after assembly, like we did in earlier steps. As with
+  // almost everything else, the interpolation of boundary values works also
+  // for higher order elements without the need to change your code for that.
+  // We note that for proper results, it is important that the elimination of
+  // boundary nodes from the system of equations happens *after* the
+  // elimination of hanging nodes. For that reason we are filling the boundary
+  // values into the ConstraintMatrix after the hanging node constraints.
   VectorTools::interpolate_boundary_values (dof_handler,
                                             0,
                                             ZeroFunction(),
                                             constraints);

-  // The next step is closing
-  // this object. After
-  // all constraints have been added,
-  // they need to be sorted and
-  // rearranged to perform some
-  // actions more efficiently. This
-  // postprocessing is done using the
-  // close() function, after which
-  // no further constraints may be
+  // The next step is closing this object. After all constraints
+  // have been added, they need to be sorted and rearranged to perform some
+  // actions more efficiently. This postprocessing is done using the
+  // close() function, after which no further constraints may be
   // added any more:
   constraints.close ();

-  // Now we first build our compressed
-  // sparsity pattern like we did in the
-  // previous examples. Nevertheless, we do
-  // not copy it to the final sparsity
-  // pattern immediately. Note that we call
-  // a variant of make_sparsity_pattern that
-  // takes the ConstraintMatrix as the third
-  // argument.
We are letting the routine - // know, the we will never write into the - // locations given by - // constraints by setting the - // argument - // keep_constrained_dofs to - // false. If we were to condense the - // constraints after assembling, we would - // have to pass true instead. + // Now we first build our compressed sparsity pattern like we did in the + // previous examples. Nevertheless, we do not copy it to the final sparsity + // pattern immediately. Note that we call a variant of + // make_sparsity_pattern that takes the ConstraintMatrix as the third + // argument. We are letting the routine know, the we will never write into + // the locations given by constraints by setting the argument + // keep_constrained_dofs to false. If we were to condense the + // constraints after assembling, we would have to pass true + // instead. CompressedSparsityPattern c_sparsity(dof_handler.n_dofs()); DoFTools::make_sparsity_pattern(dof_handler, c_sparsity, constraints, false /*keep_constrained_dofs*/); - // Now all non-zero entries of the - // matrix are known (i.e. those - // from regularly assembling the - // matrix and those that were - // introduced by eliminating - // constraints). We can thus copy - // our intermediate object to - // the sparsity pattern: + // Now all non-zero entries of the matrix are known (i.e. those from + // regularly assembling the matrix and those that were introduced by + // eliminating constraints). We can thus copy our intermediate object to the + // sparsity pattern: sparsity_pattern.copy_from(c_sparsity); - // Finally, the so-constructed - // sparsity pattern serves as the - // basis on top of which we will - // create the sparse matrix: + // Finally, the so-constructed sparsity pattern serves as the basis on top + // of which we will create the sparse matrix: system_matrix.reinit (sparsity_pattern); } // @sect4{Step6::assemble_system} -// Next, we have to assemble the -// matrix again. There are two code -// changes compared to step-5: +// Next, we have to assemble the matrix again. There are two code changes +// compared to step-5: // -// First, we have to use a higher-order -// quadrature formula to account for the -// higher polynomial degree in the finite -// element shape functions. This is easy to -// change: the constructor of the -// QGauss class takes the number -// of quadrature points in each space -// direction. Previously, we had two points -// for bilinear elements. Now we should use -// three points for biquadratic elements. +// First, we have to use a higher-order quadrature formula to account for the +// higher polynomial degree in the finite element shape functions. This is +// easy to change: the constructor of the QGauss class takes the +// number of quadrature points in each space direction. Previously, we had two +// points for bilinear elements. Now we should use three points for +// biquadratic elements. // -// Second, to copy the local matrix and -// vector on each cell into the global -// system, we are no longer using a -// hand-written loop. Instead, we use -// ConstraintMatrix::distribute_local_to_global -// that internally executes this loop and -// eliminates all the constraints at the same -// time. +// Second, to copy the local matrix and vector on each cell into the global +// system, we are no longer using a hand-written loop. Instead, we use +// ConstraintMatrix::distribute_local_to_global that internally +// executes this loop and eliminates all the constraints at the same time. 
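// (For comparison, the hand-written copy loop that
// ConstraintMatrix::distribute_local_to_global replaces looked roughly like
// the following in the earlier tutorial programs -- a sketch in the style of
// step-4 and step-5, not code that appears in this program:
//
//   for (unsigned int i=0; i<dofs_per_cell; ++i)
//     {
//       for (unsigned int j=0; j<dofs_per_cell; ++j)
//         system_matrix.add (local_dof_indices[i],
//                            local_dof_indices[j],
//                            cell_matrix(i,j));
//
//       system_rhs(local_dof_indices[i]) += cell_rhs(i);
//     }
//
// The new function performs the same transfer but resolves the hanging node
// and boundary constraints at the same time, which is why no separate
// condensation or boundary value elimination step is needed afterwards.)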
// -// The rest of the code that forms the local -// contributions remains unchanged. It is -// worth noting, however, that under the hood -// several things are different than -// before. First, the variables -// dofs_per_cell and -// n_q_points now are 9 each, -// where they were 4 before. Introducing such -// variables as abbreviations is a good -// strategy to make code work with different -// elements without having to change too much -// code. Secondly, the fe_values -// object of course needs to do other things -// as well, since the shape functions are now -// quadratic, rather than linear, in each -// coordinate variable. Again, however, this -// is something that is completely -// transparent to user code and nothing that -// you have to worry about. +// The rest of the code that forms the local contributions remains +// unchanged. It is worth noting, however, that under the hood several things +// are different than before. First, the variables dofs_per_cell +// and n_q_points now are 9 each, where they were 4 +// before. Introducing such variables as abbreviations is a good strategy to +// make code work with different elements without having to change too much +// code. Secondly, the fe_values object of course needs to do +// other things as well, since the shape functions are now quadratic, rather +// than linear, in each coordinate variable. Again, however, this is something +// that is completely transparent to user code and nothing that you have to +// worry about. template void Step6::assemble_system () { @@ -609,47 +427,34 @@ void Step6::assemble_system () } cell->get_dof_indices (local_dof_indices); - // transfer the contributions from @p cell_matrix and @cell_rhs into the global objects. + // transfer the contributions from @p cell_matrix and @cell_rhs into the + // global objects. constraints.distribute_local_to_global(cell_matrix, cell_rhs, local_dof_indices, system_matrix, system_rhs); } - // Now we are done assembling the linear - // system. The constrained nodes are still - // in the linear system (there is a one on - // the diagonal of the matrix and all other - // entries for this line are set to zero) - // but the computed values are invalid. We - // compute the correct values for these - // nodes at the end of the - // solve function. + // Now we are done assembling the linear system. The constrained nodes are + // still in the linear system (there is a one on the diagonal of the matrix + // and all other entries for this line are set to zero) but the computed + // values are invalid. We compute the correct values for these nodes at the + // end of the solve function. } // @sect4{Step6::solve} -// We continue with gradual improvements. The -// function that solves the linear system -// again uses the SSOR preconditioner, and is -// again unchanged except that we have to -// incorporate hanging node constraints. As -// mentioned above, the degrees of freedom -// from the ConstraintMatrix corresponding to -// hanging node constraints and boundary -// values have been removed from the linear -// system by giving the rows and columns of -// the matrix a special treatment. This way, -// the values for these degrees of freedom -// have wrong, but well-defined values after -// solving the linear system. What we then -// have to do is to use the constraints to -// assign to them the values that they should -// have. 
This process, called -// distributing constraints, -// computes the values of constrained nodes -// from the values of the unconstrained ones, -// and requires only a single additional -// function call that you find at the end of -// this function: +// We continue with gradual improvements. The function that solves the linear +// system again uses the SSOR preconditioner, and is again unchanged except +// that we have to incorporate hanging node constraints. As mentioned above, +// the degrees of freedom from the ConstraintMatrix corresponding to hanging +// node constraints and boundary values have been removed from the linear +// system by giving the rows and columns of the matrix a special +// treatment. This way, the values for these degrees of freedom have wrong, +// but well-defined values after solving the linear system. What we then have +// to do is to use the constraints to assign to them the values that they +// should have. This process, called distributing constraints, +// computes the values of constrained nodes from the values of the +// unconstrained ones, and requires only a single additional function call +// that you find at the end of this function: template void Step6::solve () @@ -669,119 +474,59 @@ void Step6::solve () // @sect4{Step6::refine_grid} -// Instead of global refinement, we -// now use a slightly more elaborate -// scheme. We will use the -// KellyErrorEstimator class -// which implements an error -// estimator for the Laplace -// equation; it can in principle -// handle variable coefficients, but -// we will not use these advanced -// features, but rather use its most -// simple form since we are not -// interested in quantitative results -// but only in a quick way to -// generate locally refined grids. +// Instead of global refinement, we now use a slightly more elaborate +// scheme. We will use the KellyErrorEstimator class which +// implements an error estimator for the Laplace equation; it can in principle +// handle variable coefficients, but we will not use these advanced features, +// but rather use its most simple form since we are not interested in +// quantitative results but only in a quick way to generate locally refined +// grids. // -// Although the error estimator -// derived by Kelly et al. was -// originally developed for the Laplace -// equation, we have found that it is -// also well suited to quickly -// generate locally refined grids for -// a wide class of -// problems. Basically, it looks at -// the jumps of the gradients of the -// solution over the faces of cells -// (which is a measure for the second -// derivatives) and scales it by the -// size of the cell. It is therefore -// a measure for the local smoothness -// of the solution at the place of -// each cell and it is thus -// understandable that it yields -// reasonable grids also for -// hyperbolic transport problems or -// the wave equation as well, -// although these grids are certainly -// suboptimal compared to approaches -// specially tailored to the -// problem. This error estimator may -// therefore be understood as a quick -// way to test an adaptive program. +// Although the error estimator derived by Kelly et al. was originally +// developed for the Laplace equation, we have found that it is also well +// suited to quickly generate locally refined grids for a wide class of +// problems. 
Basically, it looks at the jumps of the gradients of the solution +// over the faces of cells (which is a measure for the second derivatives) and +// scales it by the size of the cell. It is therefore a measure for the local +// smoothness of the solution at the place of each cell and it is thus +// understandable that it yields reasonable grids also for hyperbolic +// transport problems or the wave equation as well, although these grids are +// certainly suboptimal compared to approaches specially tailored to the +// problem. This error estimator may therefore be understood as a quick way to +// test an adaptive program. // -// The way the estimator works is to -// take a DoFHandler object -// describing the degrees of freedom -// and a vector of values for each -// degree of freedom as input and -// compute a single indicator value -// for each active cell of the -// triangulation (i.e. one value for -// each of the -// triangulation.n_active_cells() -// cells). To do so, it needs two -// additional pieces of information: -// a quadrature formula on the faces -// (i.e. quadrature formula on -// dim-1 dimensional objects. We -// use a 3-point Gauss rule again, a -// pick that is consistent and -// appropriate with the choice -// bi-quadratic finite element shape -// functions in this program. -// (What constitutes a suitable -// quadrature rule here of course -// depends on knowledge of the way -// the error estimator evaluates -// the solution field. As said -// above, the jump of the gradient -// is integrated over each face, -// which would be a quadratic -// function on each face for the -// quadratic elements in use in -// this example. In fact, however, -// it is the square of the jump of -// the gradient, as explained in -// the documentation of that class, -// and that is a quartic function, -// for which a 3 point Gauss -// formula is sufficient since it -// integrates polynomials up to -// order 5 exactly.) +// The way the estimator works is to take a DoFHandler object +// describing the degrees of freedom and a vector of values for each degree of +// freedom as input and compute a single indicator value for each active cell +// of the triangulation (i.e. one value for each of the +// triangulation.n_active_cells() cells). To do so, it needs two +// additional pieces of information: a quadrature formula on the faces +// (i.e. quadrature formula on dim-1 dimensional objects. We use +// a 3-point Gauss rule again, a pick that is consistent and appropriate with +// the choice bi-quadratic finite element shape functions in this program. +// (What constitutes a suitable quadrature rule here of course depends on +// knowledge of the way the error estimator evaluates the solution field. As +// said above, the jump of the gradient is integrated over each face, which +// would be a quadratic function on each face for the quadratic elements in +// use in this example. In fact, however, it is the square of the jump of the +// gradient, as explained in the documentation of that class, and that is a +// quartic function, for which a 3 point Gauss formula is sufficient since it +// integrates polynomials up to order 5 exactly.) // -// Secondly, the function wants a -// list of boundaries where we have -// imposed Neumann value, and the -// corresponding Neumann values. 
This -// information is represented by an -// object of type -// FunctionMap::type that is -// essentially a map from boundary -// indicators to function objects -// describing Neumann boundary values -// (in the present example program, -// we do not use Neumann boundary -// values, so this map is empty, and -// in fact constructed using the -// default constructor of the map in -// the place where the function call -// expects the respective function -// argument). +// Secondly, the function wants a list of boundaries where we have imposed +// Neumann value, and the corresponding Neumann values. This information is +// represented by an object of type FunctionMap::type that is +// essentially a map from boundary indicators to function objects describing +// Neumann boundary values (in the present example program, we do not use +// Neumann boundary values, so this map is empty, and in fact constructed +// using the default constructor of the map in the place where the function +// call expects the respective function argument). // -// The output, as mentioned is a -// vector of values for all -// cells. While it may make sense to -// compute the *value* of a degree of -// freedom very accurately, it is -// usually not helpful to compute the -// *error indicator* corresponding to -// a cell particularly accurately. We -// therefore typically use a vector -// of floats instead of a vector of -// doubles to represent error -// indicators. +// The output, as mentioned is a vector of values for all cells. While it may +// make sense to compute the *value* of a degree of freedom very accurately, +// it is usually not helpful to compute the *error indicator* corresponding to +// a cell particularly accurately. We therefore typically use a vector of +// floats instead of a vector of doubles to represent error indicators. template void Step6::refine_grid () { @@ -793,137 +538,74 @@ void Step6::refine_grid () solution, estimated_error_per_cell); - // The above function returned one - // error indicator value for each - // cell in the - // estimated_error_per_cell - // array. Refinement is now done as - // follows: refine those 30 per - // cent of the cells with the - // highest error values, and - // coarsen the 3 per cent of cells - // with the lowest values. + // The above function returned one error indicator value for each cell in + // the estimated_error_per_cell array. Refinement is now done + // as follows: refine those 30 per cent of the cells with the highest error + // values, and coarsen the 3 per cent of cells with the lowest values. // - // One can easily verify that if - // the second number were zero, - // this would approximately result - // in a doubling of cells in each - // step in two space dimensions, - // since for each of the 30 per - // cent of cells, four new would be - // replaced, while the remaining 70 - // per cent of cells remain - // untouched. In practice, some - // more cells are usually produced - // since it is disallowed that a - // cell is refined twice while the - // neighbor cell is not refined; in - // that case, the neighbor cell - // would be refined as well. + // One can easily verify that if the second number were zero, this would + // approximately result in a doubling of cells in each step in two space + // dimensions, since for each of the 30 per cent of cells, four new would be + // replaced, while the remaining 70 per cent of cells remain untouched. 
In + // practice, some more cells are usually produced since it is disallowed + // that a cell is refined twice while the neighbor cell is not refined; in + // that case, the neighbor cell would be refined as well. // - // In many applications, the number - // of cells to be coarsened would - // be set to something larger than - // only three per cent. A non-zero - // value is useful especially if - // for some reason the initial - // (coarse) grid is already rather - // refined. In that case, it might - // be necessary to refine it in - // some regions, while coarsening - // in some other regions is - // useful. In our case here, the - // initial grid is very coarse, so - // coarsening is only necessary in - // a few regions where - // over-refinement may have taken - // place. Thus a small, non-zero - // value is appropriate here. + // In many applications, the number of cells to be coarsened would be set to + // something larger than only three per cent. A non-zero value is useful + // especially if for some reason the initial (coarse) grid is already rather + // refined. In that case, it might be necessary to refine it in some + // regions, while coarsening in some other regions is useful. In our case + // here, the initial grid is very coarse, so coarsening is only necessary in + // a few regions where over-refinement may have taken place. Thus a small, + // non-zero value is appropriate here. // - // The following function now takes - // these refinement indicators and - // flags some cells of the - // triangulation for refinement or - // coarsening using the method - // described above. It is from a - // class that implements - // several different algorithms to - // refine a triangulation based on - // cell-wise error indicators. + // The following function now takes these refinement indicators and flags + // some cells of the triangulation for refinement or coarsening using the + // method described above. It is from a class that implements several + // different algorithms to refine a triangulation based on cell-wise error + // indicators. GridRefinement::refine_and_coarsen_fixed_number (triangulation, estimated_error_per_cell, 0.3, 0.03); - // After the previous function has - // exited, some cells are flagged - // for refinement, and some other - // for coarsening. The refinement - // or coarsening itself is not - // performed by now, however, since - // there are cases where further - // modifications of these flags is - // useful. Here, we don't want to - // do any such thing, so we can - // tell the triangulation to - // perform the actions for which - // the cells are flagged: + // After the previous function has exited, some cells are flagged for + // refinement, and some other for coarsening. The refinement or coarsening + // itself is not performed by now, however, since there are cases where + // further modifications of these flags is useful. Here, we don't want to do + // any such thing, so we can tell the triangulation to perform the actions + // for which the cells are flagged: triangulation.execute_coarsening_and_refinement (); } // @sect4{Step6::output_results} -// At the end of computations on each -// grid, and just before we continue -// the next cycle with mesh -// refinement, we want to output the -// results from this cycle. +// At the end of computations on each grid, and just before we continue the +// next cycle with mesh refinement, we want to output the results from this +// cycle. 
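// As an aside before we turn to output, it is worth seeing how little code
// the adaptive strategy described in the previous section actually requires.
// Gathered in one place, and with the template arguments written out, the
// whole refine_grid() function essentially consists of the calls already
// shown piecewise above -- estimate, flag, execute:
//
//   template <int dim>
//   void Step6<dim>::refine_grid ()
//   {
//     Vector<float> estimated_error_per_cell (triangulation.n_active_cells());
//
//     KellyErrorEstimator<dim>::estimate (dof_handler,
//                                         QGauss<dim-1>(3),
//                                         typename FunctionMap<dim>::type(),
//                                         solution,
//                                         estimated_error_per_cell);
//
//     GridRefinement::refine_and_coarsen_fixed_number (triangulation,
//                                                      estimated_error_per_cell,
//                                                      0.3, 0.03);
//
//     triangulation.execute_coarsening_and_refinement ();
//   }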
// -// In the present program, we will -// not write the solution (except for -// in the last step, see the next -// function), but only the meshes -// that we generated, as a -// two-dimensional Encapsulated -// Postscript (EPS) file. +// In the present program, we will not write the solution (except for in the +// last step, see the next function), but only the meshes that we generated, +// as a two-dimensional Encapsulated Postscript (EPS) file. // -// We have already seen in step-1 how -// this can be achieved. The only -// thing we have to change is the -// generation of the file name, since -// it should contain the number of -// the present refinement cycle -// provided to this function as an -// argument. The most general way is -// to use the std::stringstream class -// as shown in step-5, but here's a -// little hack that makes it simpler -// if we know that we have less than -// 10 iterations: assume that the -// %numbers `0' through `9' are -// represented consecutively in the -// character set used on your machine -// (this is in fact the case in all -// known character sets), then -// '0'+cycle gives the character -// corresponding to the present cycle -// number. Of course, this will only -// work if the number of cycles is -// actually less than 10, and rather -// than waiting for the disaster to -// happen, we safeguard our little -// hack with an explicit assertion at -// the beginning of the function. If -// this assertion is triggered, -// i.e. when cycle is larger than -// or equal to 10, an exception of -// type ExcNotImplemented is -// raised, indicating that some -// functionality is not implemented -// for this case (the functionality -// that is missing, of course, is the -// generation of file names for that -// case): +// We have already seen in step-1 how this can be achieved. The only thing we +// have to change is the generation of the file name, since it should contain +// the number of the present refinement cycle provided to this function as an +// argument. The most general way is to use the std::stringstream class as +// shown in step-5, but here's a little hack that makes it simpler if we know +// that we have less than 10 iterations: assume that the %numbers `0' through +// `9' are represented consecutively in the character set used on your machine +// (this is in fact the case in all known character sets), then '0'+cycle +// gives the character corresponding to the present cycle number. Of course, +// this will only work if the number of cycles is actually less than 10, and +// rather than waiting for the disaster to happen, we safeguard our little +// hack with an explicit assertion at the beginning of the function. If this +// assertion is triggered, i.e. when cycle is larger than or +// equal to 10, an exception of type ExcNotImplemented is raised, +// indicating that some functionality is not implemented for this case (the +// functionality that is missing, of course, is the generation of file names +// for that case): template void Step6::output_results (const unsigned int cycle) const { @@ -943,55 +625,30 @@ void Step6::output_results (const unsigned int cycle) const // @sect4{Step6::run} -// The final function before -// main() is again the main -// driver of the class, run(). 
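As an aside, the two file-naming strategies discussed above — the single-digit character trick guarded by an assertion, and the more general string-stream approach from step-5 — can be compared side by side in this sketch (assuming <sstream> is included; general_name is a hypothetical variable):

    Assert (cycle < 10, ExcNotImplemented());

    // The quick hack: only valid for single-digit cycle numbers.
    std::string filename = "grid-";
    filename += ('0' + cycle);
    filename += ".eps";

    // The general alternative, valid for any number of cycles:
    std::stringstream general_name;
    general_name << "grid-" << cycle << ".eps";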
It -// is similar to the one of step-5, -// except that we generate a file in -// the program again instead of -// reading it from disk, in that we -// adaptively instead of globally -// refine the mesh, and that we -// output the solution on the final -// mesh in the present function. +// The final function before main() is again the main driver of +// the class, run(). It is similar to the one of step-5, except +// that we generate a file in the program again instead of reading it from +// disk, in that we adaptively instead of globally refine the mesh, and that +// we output the solution on the final mesh in the present function. // -// The first block in the main loop -// of the function deals with mesh -// generation. If this is the first -// cycle of the program, instead of -// reading the grid from a file on -// disk as in the previous example, -// we now again create it using a -// library function. The domain is -// again a circle, which is why we -// have to provide a suitable -// boundary object as well. We place -// the center of the circle at the -// origin and have the radius be one -// (these are the two hidden -// arguments to the function, which -// have default values). +// The first block in the main loop of the function deals with mesh +// generation. If this is the first cycle of the program, instead of reading +// the grid from a file on disk as in the previous example, we now again +// create it using a library function. The domain is again a circle, which is +// why we have to provide a suitable boundary object as well. We place the +// center of the circle at the origin and have the radius be one (these are +// the two hidden arguments to the function, which have default values). // -// You will notice by looking at the -// coarse grid that it is of inferior -// quality than the one which we read -// from the file in the previous -// example: the cells are less -// equally formed. However, using the -// library function this program -// works in any space dimension, -// which was not the case before. +// You will notice by looking at the coarse grid that it is of inferior +// quality than the one which we read from the file in the previous example: +// the cells are less equally formed. However, using the library function this +// program works in any space dimension, which was not the case before. // -// In case we find that this is not -// the first cycle, we want to refine -// the grid. Unlike the global -// refinement employed in the last -// example program, we now use the -// adaptive procedure described -// above. +// In case we find that this is not the first cycle, we want to refine the +// grid. Unlike the global refinement employed in the last example program, we +// now use the adaptive procedure described above. // -// The rest of the loop looks as -// before: +// The rest of the loop looks as before: template void Step6::run () { @@ -1027,18 +684,11 @@ void Step6::run () output_results (cycle); } - // After we have finished computing - // the solution on the finest mesh, - // and writing all the grids to - // disk, we want to also write the - // actual solution on this final - // mesh to a file. As already done - // in one of the previous examples, - // we use the EPS format for - // output, and to obtain a - // reasonable view on the solution, - // we rescale the z-axis by a - // factor of four. 
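A condensed sketch of the mesh-generation logic just described might read as follows; it assumes the triangulation member and the refine_grid() function shown above, and omits the setup, solve, and output calls that the real loop also performs in every cycle:

    for (unsigned int cycle=0; cycle<8; ++cycle)
      {
        if (cycle == 0)
          {
            // Coarse grid: a circle of radius 1 around the origin, plus a
            // boundary object describing its curved boundary.
            GridGenerator::hyper_ball (triangulation);

            static const HyperBallBoundary<dim> boundary;
            triangulation.set_boundary (0, boundary);

            triangulation.refine_global (1);
          }
        else
          refine_grid ();   // the adaptive procedure described above
      }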
+ // After we have finished computing the solution on the finest mesh, and + // writing all the grids to disk, we want to also write the actual solution + // on this final mesh to a file. As already done in one of the previous + // examples, we use the EPS format for output, and to obtain a reasonable + // view on the solution, we rescale the z-axis by a factor of four. DataOutBase::EpsFlags eps_flags; eps_flags.z_scaling = 4; @@ -1056,52 +706,29 @@ void Step6::run () // @sect3{The main function} -// The main function is unaltered in -// its functionality from the -// previous example, but we have -// taken a step of additional -// caution. Sometimes, something goes -// wrong (such as insufficient disk -// space upon writing an output file, -// not enough memory when trying to -// allocate a vector or a matrix, or -// if we can't read from or write to -// a file for whatever reason), and -// in these cases the library will -// throw exceptions. Since these are -// run-time problems, not programming -// errors that can be fixed once and -// for all, this kind of exceptions -// is not switched off in optimized -// mode, in contrast to the -// Assert macro which we have -// used to test against programming -// errors. If uncaught, these -// exceptions propagate the call tree -// up to the main function, and -// if they are not caught there -// either, the program is aborted. In -// many cases, like if there is not -// enough memory or disk space, we -// can't do anything but we can at -// least print some text trying to -// explain the reason why the program -// failed. A way to do so is shown in -// the following. It is certainly -// useful to write any larger program -// in this way, and you can do so by -// more or less copying this function -// except for the try block that -// actually encodes the functionality -// particular to the present -// application. +// The main function is unaltered in its functionality from the previous +// example, but we have taken a step of additional caution. Sometimes, +// something goes wrong (such as insufficient disk space upon writing an +// output file, not enough memory when trying to allocate a vector or a +// matrix, or if we can't read from or write to a file for whatever reason), +// and in these cases the library will throw exceptions. Since these are +// run-time problems, not programming errors that can be fixed once and for +// all, this kind of exceptions is not switched off in optimized mode, in +// contrast to the Assert macro which we have used to test +// against programming errors. If uncaught, these exceptions propagate the +// call tree up to the main function, and if they are not caught +// there either, the program is aborted. In many cases, like if there is not +// enough memory or disk space, we can't do anything but we can at least print +// some text trying to explain the reason why the program failed. A way to do +// so is shown in the following. It is certainly useful to write any larger +// program in this way, and you can do so by more or less copying this +// function except for the try block that actually encodes the +// functionality particular to the present application. int main () { - // The general idea behind the - // layout of this function is as - // follows: let's try to run the - // program as we did before... + // The general idea behind the layout of this function is as follows: let's + // try to run the program as we did before... 
try { deallog.depth_console (0); @@ -1109,41 +736,25 @@ int main () Step6<2> laplace_problem_2d; laplace_problem_2d.run (); } - // ...and if this should fail, try - // to gather as much information as - // possible. Specifically, if the - // exception that was thrown is an - // object of a class that is - // derived from the C++ standard - // class exception, then we can - // use the what member function - // to get a string which describes - // the reason why the exception was + // ...and if this should fail, try to gather as much information as + // possible. Specifically, if the exception that was thrown is an object of + // a class that is derived from the C++ standard class + // exception, then we can use the what member + // function to get a string which describes the reason why the exception was // thrown. // - // The deal.II exception classes - // are all derived from the - // standard class, and in - // particular, the exc.what() - // function will return - // approximately the same string as - // would be generated if the - // exception was thrown using the - // Assert macro. You have seen - // the output of such an exception - // in the previous example, and you - // then know that it contains the - // file and line number of where - // the exception occured, and some - // other information. This is also - // what the following statements - // would print. + // The deal.II exception classes are all derived from the standard class, + // and in particular, the exc.what() function will return + // approximately the same string as would be generated if the exception was + // thrown using the Assert macro. You have seen the output of + // such an exception in the previous example, and you then know that it + // contains the file and line number of where the exception occured, and + // some other information. This is also what the following statements would + // print. // - // Apart from this, there isn't - // much that we can do except - // exiting the program with an - // error code (this is what the - // return 1; does): + // Apart from this, there isn't much that we can do except exiting the + // program with an error code (this is what the return 1; + // does): catch (std::exception &exc) { std::cerr << std::endl << std::endl @@ -1157,13 +768,9 @@ int main () return 1; } - // If the exception that was thrown - // somewhere was not an object of a - // class derived from the standard - // exception class, then we - // can't do anything at all. We - // then simply print an error - // message and exit. + // If the exception that was thrown somewhere was not an object of a class + // derived from the standard exception class, then we can't do + // anything at all. We then simply print an error message and exit. catch (...) { std::cerr << std::endl << std::endl @@ -1176,14 +783,9 @@ int main () return 1; } - // If we got to this point, there - // was no exception which - // propagated up to the main - // function (there may have been - // exceptions, but they were caught - // somewhere in the program or the - // library). Therefore, the program - // performed as was expected and we - // can return without error. + // If we got to this point, there was no exception which propagated up to + // the main function (there may have been exceptions, but they were caught + // somewhere in the program or the library). Therefore, the program + // performed as was expected and we can return without error. 
return 0; } diff --git a/deal.II/examples/step-7/step-7.cc b/deal.II/examples/step-7/step-7.cc index abe9a0dd07..62f5ea4fda 100644 --- a/deal.II/examples/step-7/step-7.cc +++ b/deal.II/examples/step-7/step-7.cc @@ -11,10 +11,8 @@ // @sect3{Include files} -// These first include files have all -// been treated in previous examples, -// so we won't explain what is in -// them again. +// These first include files have all been treated in previous examples, so we +// won't explain what is in them again. #include #include #include @@ -38,90 +36,60 @@ #include #include -// In this example, we will not use the -// numeration scheme which is used per -// default by the DoFHandler class, but -// will renumber them using the Cuthill-McKee -// algorithm. As has already been explained -// in step-2, the necessary functions are -// declared in the following file: +// In this example, we will not use the numeration scheme which is used per +// default by the DoFHandler class, but will renumber them using the +// Cuthill-McKee algorithm. As has already been explained in step-2, the +// necessary functions are declared in the following file: #include -// Then we will show a little trick -// how we can make sure that objects -// are not deleted while they are -// still in use. For this purpose, -// deal.II has the SmartPointer -// helper class, which is declared in -// this file: +// Then we will show a little trick how we can make sure that objects are not +// deleted while they are still in use. For this purpose, deal.II has the +// SmartPointer helper class, which is declared in this file: #include -// Next, we will want to use the function -// VectorTools::integrate_difference() -// mentioned in the introduction, and we are -// going to use a ConvergenceTable that -// collects all important data during a run -// and prints it at the end as a table. These -// comes from the following two files: +// Next, we will want to use the function VectorTools::integrate_difference() +// mentioned in the introduction, and we are going to use a ConvergenceTable +// that collects all important data during a run and prints it at the end as a +// table. These comes from the following two files: #include #include -// And finally, we need to use the -// FEFaceValues class, which is -// declared in the same file as the -// FEValues class: +// And finally, we need to use the FEFaceValues class, which is declared in +// the same file as the FEValues class: #include -// We need one more include from standard -// C++, which is necessary when we try to -// find out the actual type behind a pointer -// to a base class. We will explain this in -// slightly more detail below. The other two -// include files are obvious then: +// We need one more include from standard C++, which is necessary when we try +// to find out the actual type behind a pointer to a base class. We will +// explain this in slightly more detail below. 
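What "finding out the actual type behind a pointer to a base class" boils down to is the following standard C++ mechanism, shown here as a generic sketch unrelated to any deal.II class (it only needs the <typeinfo> header included above):

    #include <typeinfo>
    #include <iostream>

    struct Base           { virtual ~Base () {} };
    struct Derived : Base {};

    int main ()
    {
      Base *p = new Derived;
      // Applied to the dereferenced pointer, typeid reports the dynamic type:
      std::cout << typeid(*p).name() << std::endl;
      delete p;
      return 0;
    }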
The other two include files are +// obvious then: #include #include #include -// The last step before we go on with the -// actual implementation is to open a -// namespace Step7 into which we -// will put everything, as discussed at the -// end of the introduction, and to import the -// members of namespace dealii -// into it: +// The last step before we go on with the actual implementation is to open a +// namespace Step7 into which we will put everything, as +// discussed at the end of the introduction, and to import the members of +// namespace dealii into it: namespace Step7 { using namespace dealii; // @sect3{Equation data} - // Before implementing the classes that - // actually solve something, we first declare - // and define some function classes that - // represent right hand side and solution - // classes. Since we want to compare the - // numerically obtained solution to the exact - // continuous one, we need a function object - // that represents the continuous - // solution. On the other hand, we need the - // right hand side function, and that one of - // course shares some characteristics with - // the solution. In order to reduce - // dependencies which arise if we have to - // change something in both classes at the - // same time, we move the common - // characteristics of both functions into a - // base class. + // Before implementing the classes that actually solve something, we first + // declare and define some function classes that represent right hand side + // and solution classes. Since we want to compare the numerically obtained + // solution to the exact continuous one, we need a function object that + // represents the continuous solution. On the other hand, we need the right + // hand side function, and that one of course shares some characteristics + // with the solution. In order to reduce dependencies which arise if we have + // to change something in both classes at the same time, we move the common + // characteristics of both functions into a base class. // - // The common characteristics for solution - // (as explained in the introduction, we - // choose a sum of three exponentials) and - // right hand side, are these: the number of - // exponentials, their centers, and their - // half width. We declare them in the - // following class. Since the number of - // exponentials is a constant scalar integral - // quantity, C++ allows its definition - // (i.e. assigning a value) right at the - // place of declaration (i.e. where we - // declare that such a variable exists). + // The common characteristics for solution (as explained in the + // introduction, we choose a sum of three exponentials) and right hand side, + // are these: the number of exponentials, their centers, and their half + // width. We declare them in the following class. Since the number of + // exponentials is a constant scalar integral quantity, C++ allows its + // definition (i.e. assigning a value) right at the place of declaration + // (i.e. where we declare that such a variable exists). template class SolutionBase { @@ -132,39 +100,24 @@ namespace Step7 }; - // The variables which denote the - // centers and the width of the - // exponentials have just been - // declared, now we still need to - // assign values to them. Here, we - // can show another small piece of - // template sorcery, namely how we - // can assign different values to - // these variables depending on the - // dimension. 
We will only use the 2d - // case in the program, but we show - // the 1d case for exposition of a - // useful technique. + // The variables which denote the centers and the width of the exponentials + // have just been declared, now we still need to assign values to + // them. Here, we can show another small piece of template sorcery, namely + // how we can assign different values to these variables depending on the + // dimension. We will only use the 2d case in the program, but we show the + // 1d case for exposition of a useful technique. // - // First we assign values to the centers for - // the 1d case, where we place the centers - // equidistantly at -1/3, 0, and 1/3. The - // template <> header for this definition - // indicates an explicit specialization. This - // means, that the variable belongs to a - // template, but that instead of providing - // the compiler with a template from which it - // can specialize a concrete variable by - // substituting dim with some concrete - // value, we provide a specialization - // ourselves, in this case for dim=1. If - // the compiler then sees a reference to this - // variable in a place where the template - // argument equals one, it knows that it - // doesn't have to generate the variable from - // a template by substituting dim, but - // can immediately use the following - // definition: + // First we assign values to the centers for the 1d case, where we place the + // centers equidistantly at -1/3, 0, and 1/3. The template + // <> header for this definition indicates an explicit + // specialization. This means, that the variable belongs to a template, but + // that instead of providing the compiler with a template from which it can + // specialize a concrete variable by substituting dim with some + // concrete value, we provide a specialization ourselves, in this case for + // dim=1. If the compiler then sees a reference to this + // variable in a place where the template argument equals one, it knows that + // it doesn't have to generate the variable from a template by substituting + // dim, but can immediately use the following definition: template <> const Point<1> SolutionBase<1>::source_centers[SolutionBase<1>::n_source_centers] @@ -173,9 +126,8 @@ namespace Step7 Point<1>(+1.0 / 3.0) }; - // Likewise, we can provide an explicit - // specialization for dim=2. We place the - // centers for the 2d case as follows: + // Likewise, we can provide an explicit specialization for + // dim=2. We place the centers for the 2d case as follows: template <> const Point<2> SolutionBase<2>::source_centers[SolutionBase<2>::n_source_centers] @@ -184,56 +136,36 @@ namespace Step7 Point<2>(+0.5, -0.5) }; - // There remains to assign a value to the - // half-width of the exponentials. We would - // like to use the same value for all - // dimensions. In this case, we simply - // provide the compiler with a template from - // which it can generate a concrete - // instantiation by substituting dim with - // a concrete value: + // There remains to assign a value to the half-width of the exponentials. We + // would like to use the same value for all dimensions. In this case, we + // simply provide the compiler with a template from which it can generate a + // concrete instantiation by substituting dim with a concrete + // value: template const double SolutionBase::width = 1./3.; - // After declaring and defining the - // characteristics of solution and - // right hand side, we can declare - // the classes representing these - // two. 
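Stripped of the deal.II specifics, the pattern used here — a static member of a class template with per-dimension explicit specializations plus one generic template definition — might look like this self-contained sketch with made-up names:

    template <int dim>
    struct Data
    {
      static const unsigned int n_points = 3;     // integral constant, definable in-class
      static const double       points[n_points];
      static const double       width;
    };

    // Explicit specializations: used whenever dim equals 1 or 2, respectively.
    template <>
    const double Data<1>::points[Data<1>::n_points] = { -1./3., 0., +1./3. };

    template <>
    const double Data<2>::points[Data<2>::n_points] = { -0.5, 0., +0.5 };

    // The width is the same in all dimensions, so a single template definition
    // suffices; the compiler instantiates it for whatever dim is actually used:
    template <int dim>
    const double Data<dim>::width = 1./3.;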
They both represent - // continuous functions, so they are - // derived from the Function<dim> - // base class, and they also inherit - // the characteristics defined in the - // SolutionBase class. + // After declaring and defining the characteristics of solution and right + // hand side, we can declare the classes representing these two. They both + // represent continuous functions, so they are derived from the + // Function<dim> base class, and they also inherit the characteristics + // defined in the SolutionBase class. // - // The actual classes are declared in the - // following. Note that in order to compute - // the error of the numerical solution - // against the continuous one in the L2 and - // H1 norms, we have to provide value and - // gradient of the exact solution. This is - // more than we have done in previous - // examples, where all we provided was the - // value at one or a list of - // points. Fortunately, the Function - // class also has virtual functions for the - // gradient, so we can simply overload the - // respective virtual member functions in the - // Function base class. Note that the - // gradient of a function in dim space - // dimensions is a vector of size dim, - // i.e. a tensor of rank 1 and dimension - // dim. As for so many other things, the - // library provides a suitable class for - // this. + // The actual classes are declared in the following. Note that in order to + // compute the error of the numerical solution against the continuous one in + // the L2 and H1 norms, we have to provide value and gradient of the exact + // solution. This is more than we have done in previous examples, where all + // we provided was the value at one or a list of points. Fortunately, the + // Function class also has virtual functions for the gradient, so we can + // simply overload the respective virtual member functions in the Function + // base class. Note that the gradient of a function in dim + // space dimensions is a vector of size dim, i.e. a tensor of + // rank 1 and dimension dim. As for so many other things, the + // library provides a suitable class for this. // - // Just as in previous examples, we - // are forced by the C++ language - // specification to declare a - // seemingly useless default - // constructor. + // Just as in previous examples, we are forced by the C++ language + // specification to declare a seemingly useless default constructor. template class Solution : public Function, protected SolutionBase @@ -249,31 +181,20 @@ namespace Step7 }; - // The actual definition of the values and - // gradients of the exact solution class is - // according to their mathematical definition - // and does not need much explanation. + // The actual definition of the values and gradients of the exact solution + // class is according to their mathematical definition and does not need + // much explanation. // - // The only thing that is worth - // mentioning is that if we access - // elements of a base class that is - // template dependent (in this case - // the elements of - // SolutionBase<dim>), then the - // C++ language forces us to write - // this->n_source_centers (for - // example). Note that the this-> - // qualification is not necessary if - // the base class is not template - // dependent, and also that the gcc - // compilers prior to version 3.4 don't - // enforce this requirement of the - // C++ standard. 
The reason why this - // is necessary is complicated; some - // books on C++ may explain it, so if - // you are interested you can look it - // up under the phrase two-stage - // (name) lookup. + // The only thing that is worth mentioning is that if we access elements of + // a base class that is template dependent (in this case the elements of + // SolutionBase<dim>), then the C++ language forces us to write + // this->n_source_centers (for example). Note that the + // this-> qualification is not necessary if the base class + // is not template dependent, and also that the gcc compilers prior to + // version 3.4 don't enforce this requirement of the C++ standard. The + // reason why this is necessary is complicated; some books on C++ may + // explain it, so if you are interested you can look it up under the phrase + // two-stage (name) lookup. template double Solution::value (const Point &p, const unsigned int) const @@ -290,44 +211,30 @@ namespace Step7 } - // Likewise, this is the computation of the - // gradient of the solution. In order to - // accumulate the gradient from the - // contributions of the exponentials, we - // allocate an object return_value that - // denotes the mathematical quantity of a - // tensor of rank 1 and dimension - // dim. Its default constructor sets it - // to the vector containing only zeroes, so - // we need not explicitly care for its + // Likewise, this is the computation of the gradient of the solution. In + // order to accumulate the gradient from the contributions of the + // exponentials, we allocate an object return_value that + // denotes the mathematical quantity of a tensor of rank 1 and + // dimension dim. Its default constructor sets it to the vector + // containing only zeroes, so we need not explicitly care for its // initialization. // - // Note that we could as well have taken the - // type of the object to be Point<dim> - // instead of Tensor<1,dim>. Tensors of - // rank 1 and points are almost exchangeable, - // and have only very slightly different - // mathematical meanings. In fact, the - // Point<dim> class is derived from the - // Tensor<1,dim> class, which makes up - // for their mutual exchange ability. Their - // main difference is in what they logically - // mean: points are points in space, such as - // the location at which we want to evaluate - // a function (see the type of the first - // argument of this function for example). On - // the other hand, tensors of rank 1 share - // the same transformation properties, for - // example that they need to be rotated in a - // certain way when we change the coordinate - // system; however, they do not share the - // same connotation that points have and are - // only objects in a more abstract space than - // the one spanned by the coordinate - // directions. (In fact, gradients live in - // `reciprocal' space, since the dimension of - // their components is not that of a length, - // but one over length). + // Note that we could as well have taken the type of the object to be + // Point<dim> instead of Tensor<1,dim>. Tensors of rank 1 and + // points are almost exchangeable, and have only very slightly different + // mathematical meanings. In fact, the Point<dim> class is derived + // from the Tensor<1,dim> class, which makes up for their mutual + // exchange ability. 
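The two-stage lookup rule mentioned above can be demonstrated independently of deal.II in a few lines (a generic sketch; all names are made up):

    template <int dim>
    struct Base
    {
      int n;                  // a member of a template-dependent base class
    };

    template <int dim>
    struct Derived : Base<dim>
    {
      int get () const
      {
        // return n;          // error: 'n' is not looked up in the dependent base
        return this->n;       // correct: 'this->' defers lookup to instantiation time
      }
    };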
Their main difference is in what they logically mean: + // points are points in space, such as the location at which we want to + // evaluate a function (see the type of the first argument of this function + // for example). On the other hand, tensors of rank 1 share the same + // transformation properties, for example that they need to be rotated in a + // certain way when we change the coordinate system; however, they do not + // share the same connotation that points have and are only objects in a + // more abstract space than the one spanned by the coordinate + // directions. (In fact, gradients live in `reciprocal' space, since the + // dimension of their components is not that of a length, but one over + // length). template Tensor<1,dim> Solution::gradient (const Point &p, const unsigned int) const @@ -338,12 +245,9 @@ namespace Step7 { const Point x_minus_xi = p - this->source_centers[i]; - // For the gradient, note that - // its direction is along - // (x-x_i), so we add up - // multiples of this distance - // vector, where the factor is - // given by the exponentials. + // For the gradient, note that its direction is along (x-x_i), so we + // add up multiples of this distance vector, where the factor is given + // by the exponentials. return_value += (-2 / (this->width * this->width) * std::exp(-x_minus_xi.square() / (this->width * this->width)) * @@ -355,18 +259,12 @@ namespace Step7 - // Besides the function that - // represents the exact solution, we - // also need a function which we can - // use as right hand side when - // assembling the linear system of - // discretized equations. This is - // accomplished using the following - // class and the following definition - // of its function. Note that here we - // only need the value of the - // function, not its gradients or - // higher derivatives. + // Besides the function that represents the exact solution, we also need a + // function which we can use as right hand side when assembling the linear + // system of discretized equations. This is accomplished using the following + // class and the following definition of its function. Note that here we + // only need the value of the function, not its gradients or higher + // derivatives. template class RightHandSide : public Function, protected SolutionBase @@ -379,10 +277,8 @@ namespace Step7 }; - // The value of the right hand side - // is given by the negative Laplacian - // of the solution plus the solution - // itself, since we wanted to solve + // The value of the right hand side is given by the negative Laplacian of + // the solution plus the solution itself, since we wanted to solve // Helmholtz's equation: template double RightHandSide::value (const Point &p, @@ -393,15 +289,13 @@ namespace Step7 { const Point x_minus_xi = p - this->source_centers[i]; - // The first contribution is - // the Laplacian: + // The first contribution is the Laplacian: return_value += ((2*dim - 4*x_minus_xi.square()/ (this->width * this->width)) / (this->width * this->width) * std::exp(-x_minus_xi.square() / (this->width * this->width))); - // And the second is the - // solution itself: + // And the second is the solution itself: return_value += std::exp(-x_minus_xi.square() / (this->width * this->width)); } @@ -412,31 +306,21 @@ namespace Step7 // @sect3{The Helmholtz solver class} - // Then we need the class that does all the - // work. Except for its name, its interface - // is mostly the same as in previous - // examples. + // Then we need the class that does all the work. 
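For completeness, the formula that RightHandSide::value above implements can be written out. Assuming, as before, that the exact solution is the sum of exponentials exp(-|x-x_i|^2/w^2), the right hand side reads, in LaTeX notation,

    -\Delta u + u
      = \sum_i \left[ \frac{2\,d - 4\,|x-x_i|^2/w^2}{w^2} + 1 \right]
        e^{-|x-x_i|^2/w^2},
    \qquad d = \texttt{dim},

which is exactly the sum of the "Laplacian" and "solution itself" contributions accumulated in the loop above.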
Except for its name, its + // interface is mostly the same as in previous examples. // - // One of the differences is that we will use - // this class in several modes: for different - // finite elements, as well as for adaptive - // and global refinement. The decision - // whether global or adaptive refinement - // shall be used is communicated to the - // constructor of this class through an - // enumeration type declared at the top of - // the class. The constructor then takes a - // finite element object and the refinement - // mode as arguments. + // One of the differences is that we will use this class in several modes: + // for different finite elements, as well as for adaptive and global + // refinement. The decision whether global or adaptive refinement shall be + // used is communicated to the constructor of this class through an + // enumeration type declared at the top of the class. The constructor then + // takes a finite element object and the refinement mode as arguments. // - // The rest of the member functions are as - // before except for the process_solution - // function: After the solution has been - // computed, we perform some analysis on it, - // such as computing the error in various - // norms. To enable some output, it requires - // the number of the refinement cycle, and - // consequently gets it as an argument. + // The rest of the member functions are as before except for the + // process_solution function: After the solution has been + // computed, we perform some analysis on it, such as computing the error in + // various norms. To enable some output, it requires the number of the + // refinement cycle, and consequently gets it as an argument. template class HelmholtzProblem { @@ -460,192 +344,88 @@ namespace Step7 void refine_grid (); void process_solution (const unsigned int cycle); - // Now for the data elements of - // this class. Among the variables - // that we have already used in - // previous examples, only the - // finite element object differs: - // The finite elements which the - // objects of this class operate - // on are passed to the - // constructor of this class. It - // has to store a pointer to the - // finite element for the member - // functions to use. Now, for the - // present class there is no big - // deal in that, but since we - // want to show techniques rather - // than solutions in these - // programs, we will here point - // out a problem that often - // occurs -- and of course the - // right solution as well. + // Now for the data elements of this class. Among the variables that we + // have already used in previous examples, only the finite element object + // differs: The finite elements which the objects of this class operate on + // are passed to the constructor of this class. It has to store a pointer + // to the finite element for the member functions to use. Now, for the + // present class there is no big deal in that, but since we want to show + // techniques rather than solutions in these programs, we will here point + // out a problem that often occurs -- and of course the right solution as + // well. // - // Consider the following - // situation that occurs in all - // the example programs: we have - // a triangulation object, and we - // have a finite element object, - // and we also have an object of - // type DoFHandler that uses - // both of the first two. 
These - // three objects all have a - // lifetime that is rather long - // compared to most other - // objects: they are basically - // set at the beginning of the - // program or an outer loop, and - // they are destroyed at the very - // end. The question is: can we - // guarantee that the two objects - // which the DoFHandler uses, - // live at least as long as they - // are in use? This means that - // the DoFHandler must have some - // kind of lock on the - // destruction of the other - // objects, and it can only - // release this lock once it has - // cleared all active references - // to these objects. We have seen - // what happens if we violate - // this order of destruction in - // the previous example program: - // an exception is thrown that - // terminates the program in - // order to notify the programmer - // of this potentially dangerous - // state where an object is - // pointed to that no longer - // persists. + // Consider the following situation that occurs in all the example + // programs: we have a triangulation object, and we have a finite element + // object, and we also have an object of type DoFHandler that uses both of + // the first two. These three objects all have a lifetime that is rather + // long compared to most other objects: they are basically set at the + // beginning of the program or an outer loop, and they are destroyed at + // the very end. The question is: can we guarantee that the two objects + // which the DoFHandler uses, live at least as long as they are in use? + // This means that the DoFHandler must have some kind of lock on the + // destruction of the other objects, and it can only release this lock + // once it has cleared all active references to these objects. We have + // seen what happens if we violate this order of destruction in the + // previous example program: an exception is thrown that terminates the + // program in order to notify the programmer of this potentially dangerous + // state where an object is pointed to that no longer persists. // - // We will show here how the - // library managed to find out - // that there are still active - // references to an - // object. Basically, the method - // is along the following line: - // all objects that are subject - // to such potentially dangerous - // pointers are derived from a - // class called - // Subscriptor. For example, - // the Triangulation, - // DoFHandler, and a base - // class of the FiniteElement - // class are derived from - // Subscriptor. This latter - // class does not offer much - // functionality, but it has a - // built-in counter which we can - // subscribe to, thus the name of - // the class. Whenever we - // initialize a pointer to that - // object, we can increase its use - // counter, and when we move away - // our pointer or do not need it - // any more, we decrease the - // counter again. This way, we - // can always check how many - // objects still use that - // object. + // We will show here how the library managed to find out that there are + // still active references to an object. Basically, the method is along + // the following line: all objects that are subject to such potentially + // dangerous pointers are derived from a class called Subscriptor. For + // example, the Triangulation, DoFHandler, and a base class of the + // FiniteElement class are derived from Subscriptor. This latter class + // does not offer much functionality, but it has a built-in counter which + // we can subscribe to, thus the name of the class. 
Whenever we initialize + // a pointer to that object, we can increase its use counter, and when we + // move away our pointer or do not need it any more, we decrease the + // counter again. This way, we can always check how many objects still use + // that object. // - // On the other hand, if an object of a - // class that is derived from the - // Subscriptor class is destroyed, it - // also has to call the destructor of the - // Subscriptor class. In this - // destructor, there - // will then be a check whether the - // counter is really zero. If - // yes, then there are no active - // references to this object any - // more, and we can safely - // destroy it. If the counter is - // non-zero, however, then the - // destruction would result in - // stale and thus potentially - // dangerous pointers, and we - // rather throw an exception to - // alert the programmer that this - // is doing something dangerous - // and the program better be - // fixed. + // On the other hand, if an object of a class that is derived from the + // Subscriptor class is destroyed, it also has to call the destructor of + // the Subscriptor class. In this destructor, there will then be a check + // whether the counter is really zero. If yes, then there are no active + // references to this object any more, and we can safely destroy it. If + // the counter is non-zero, however, then the destruction would result in + // stale and thus potentially dangerous pointers, and we rather throw an + // exception to alert the programmer that this is doing something + // dangerous and the program better be fixed. // - // While this certainly all - // sounds very well, it has some - // problems in terms of - // usability: what happens if I - // forget to increase the counter - // when I let a pointer point to - // such an object? And what - // happens if I forget to - // decrease it again? Note that - // this may lead to extremely - // difficult to find bugs, since - // the place where we have - // forgotten something may be - // far away from the place - // where the check for zeroness - // of the counter upon - // destruction actually - // fails. This kind of bug is - // rather annoying and usually very - // hard to fix. + // While this certainly all sounds very well, it has some problems in + // terms of usability: what happens if I forget to increase the counter + // when I let a pointer point to such an object? And what happens if I + // forget to decrease it again? Note that this may lead to extremely + // difficult to find bugs, since the place where we have forgotten + // something may be far away from the place where the check for zeroness + // of the counter upon destruction actually fails. This kind of bug is + // rather annoying and usually very hard to fix. // - // The solution to this problem - // is to again use some C++ - // trickery: we create a class - // that acts just like a pointer, - // i.e. can be dereferenced, can - // be assigned to and from other - // pointers, and so on. This can - // be done by overloading the - // several dereferencing - // operators of that - // class. Within the - // constructors, destructors, and - // assignment operators of that - // class, we can however also - // manage increasing or - // decreasing the use counters of - // the objects we point - // to. 
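In other words, using such a pointer class might look like the following sketch (assuming the FE_Q and SmartPointer headers are included; FE_Q is, via FiniteElement, derived from Subscriptor):

    FE_Q<2>                               fe (1);
    SmartPointer<const FiniteElement<2> > fe_ptr (&fe);   // increases fe's use counter

    // fe_ptr behaves like an ordinary pointer:
    const unsigned int dofs_per_cell = fe_ptr->dofs_per_cell;

    // When fe_ptr goes out of scope, the counter is decreased again; destroying
    // 'fe' while fe_ptr still exists would instead trigger the exception
    // described above.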
Objects of that class - // therefore can be used just - // like ordinary pointers to - // objects, but they also serve - // to change the use counters of - // those objects without the need - // for the programmer to do so - // herself. The class that - // actually does all this is - // called SmartPointer and - // takes as template parameter - // the data type of the object - // which it shall point to. The - // latter type may be any class, - // as long as it is derived from - // the Subscriptor class. + // The solution to this problem is to again use some C++ trickery: we + // create a class that acts just like a pointer, i.e. can be dereferenced, + // can be assigned to and from other pointers, and so on. This can be done + // by overloading the several dereferencing operators of that + // class. Within the constructors, destructors, and assignment operators + // of that class, we can however also manage increasing or decreasing the + // use counters of the objects we point to. Objects of that class + // therefore can be used just like ordinary pointers to objects, but they + // also serve to change the use counters of those objects without the need + // for the programmer to do so herself. The class that actually does all + // this is called SmartPointer and takes as template parameter the data + // type of the object which it shall point to. The latter type may be any + // class, as long as it is derived from the Subscriptor class. // - // In the present example program, we - // want to protect the finite element - // object from the situation that for - // some reason the finite element pointed - // to is destroyed while still in use. We - // therefore use a SmartPointer to - // the finite element object; since the - // finite element object is actually - // never changed in our computations, we - // pass a const FiniteElement<dim> as - // template argument to the - // SmartPointer class. Note that the - // pointer so declared is assigned at - // construction time of the solve object, - // and destroyed upon destruction, so the - // lock on the destruction of the finite - // element object extends throughout the - // lifetime of this HelmholtzProblem - // object. + // In the present example program, we want to protect the finite element + // object from the situation that for some reason the finite element + // pointed to is destroyed while still in use. We therefore use a + // SmartPointer to the finite element object; since the finite element + // object is actually never changed in our computations, we pass a const + // FiniteElement<dim> as template argument to the SmartPointer + // class. Note that the pointer so declared is assigned at construction + // time of the solve object, and destroyed upon destruction, so the lock + // on the destruction of the finite element object extends throughout the + // lifetime of this HelmholtzProblem object. Triangulation triangulation; DoFHandler dof_handler; @@ -659,31 +439,18 @@ namespace Step7 Vector solution; Vector system_rhs; - // The second to last variable - // stores the refinement mode - // passed to the - // constructor. Since it is only - // set in the constructor, we can - // declare this variable - // constant, to avoid that - // someone sets it involuntarily - // (e.g. in an `if'-statement - // where == was written as = by - // chance). + // The second to last variable stores the refinement mode passed to the + // constructor. 
Since it is only set in the constructor, we can declare + // this variable constant, to avoid that someone sets it involuntarily + // (e.g. in an `if'-statement where == was written as = by chance). const RefinementMode refinement_mode; - // For each refinement level some data - // (like the number of cells, or the L2 - // error of the numerical solution) will - // be generated and later printed. The - // TableHandler can be used to - // collect all this data and to output it - // at the end of the run as a table in a - // simple text or in LaTeX - // format. Here we don't only use the - // TableHandler but we use the - // derived class ConvergenceTable - // that additionally evaluates rates of + // For each refinement level some data (like the number of cells, or the + // L2 error of the numerical solution) will be generated and later + // printed. The TableHandler can be used to collect all this data and to + // output it at the end of the run as a table in a simple text or in LaTeX + // format. Here we don't only use the TableHandler but we use the derived + // class ConvergenceTable that additionally evaluates rates of // convergence: ConvergenceTable convergence_table; }; @@ -693,12 +460,9 @@ namespace Step7 // @sect4{HelmholtzProblem::HelmholtzProblem} - // In the constructor of this class, - // we only set the variables passed - // as arguments, and associate the - // DoF handler object with the - // triangulation (which is empty at - // present, however). + // In the constructor of this class, we only set the variables passed as + // arguments, and associate the DoF handler object with the triangulation + // (which is empty at present, however). template HelmholtzProblem::HelmholtzProblem (const FiniteElement &fe, const RefinementMode refinement_mode) : @@ -720,53 +484,33 @@ namespace Step7 // @sect4{HelmholtzProblem::setup_system} - // The following function sets up the - // degrees of freedom, sizes of - // matrices and vectors, etc. Most of - // its functionality has been showed - // in previous examples, the only - // difference being the renumbering - // step immediately after first - // distributing degrees of freedom. + // The following function sets up the degrees of freedom, sizes of matrices + // and vectors, etc. Most of its functionality has been showed in previous + // examples, the only difference being the renumbering step immediately + // after first distributing degrees of freedom. // - // Renumbering the degrees of - // freedom is not overly difficult, - // as long as you use one of the - // algorithms included in the - // library. It requires only a single - // line of code. Some more information - // on this can be found in step-2. + // Renumbering the degrees of freedom is not overly difficult, as long as + // you use one of the algorithms included in the library. It requires only a + // single line of code. Some more information on this can be found in + // step-2. // - // Note, however, that when you - // renumber the degrees of freedom, - // you must do so immediately after - // distributing them, since such - // things as hanging nodes, the - // sparsity pattern etc. depend on - // the absolute numbers which are + // Note, however, that when you renumber the degrees of freedom, you must do + // so immediately after distributing them, since such things as hanging + // nodes, the sparsity pattern etc. depend on the absolute numbers which are // altered by renumbering. 
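The "single line of code" referred to above is the Cuthill-McKee call from the header included earlier; a sketch of the order of operations at the top of setup_system(), using the members of this class, is:

    dof_handler.distribute_dofs (*fe);             // first distribute the DoFs...
    DoFRenumbering::Cuthill_McKee (dof_handler);   // ...then renumber them immediately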
// - // The reason why we introduce renumbering - // here is that it is a relatively cheap - // operation but often has a beneficial - // effect: While the CG iteration itself is - // independent of the actual ordering of - // degrees of freedom, we will use SSOR as a - // preconditioner. SSOR goes through all - // degrees of freedom and does some - // operations that depend on what happened - // before; the SSOR operation is therefore - // not independent of the numbering of - // degrees of freedom, and it is known that - // its performance improves by using - // renumbering techniques. A little - // experiment shows that indeed, for example, - // the number of CG iterations for the fifth - // refinement cycle of adaptive refinement - // with the Q1 program used here is 40 - // without, but 36 with renumbering. Similar - // savings can generally be observed for all - // the computations in this program. + // The reason why we introduce renumbering here is that it is a relatively + // cheap operation but often has a beneficial effect: While the CG iteration + // itself is independent of the actual ordering of degrees of freedom, we + // will use SSOR as a preconditioner. SSOR goes through all degrees of + // freedom and does some operations that depend on what happened before; the + // SSOR operation is therefore not independent of the numbering of degrees + // of freedom, and it is known that its performance improves by using + // renumbering techniques. A little experiment shows that indeed, for + // example, the number of CG iterations for the fifth refinement cycle of + // adaptive refinement with the Q1 program used here is 40 without, but 36 + // with renumbering. Similar savings can generally be observed for all the + // computations in this program. template void HelmholtzProblem::setup_system () { @@ -794,24 +538,16 @@ namespace Step7 // @sect4{HelmholtzProblem::assemble_system} - // Assembling the system of equations - // for the problem at hand is mostly - // as for the example programs - // before. However, some things have - // changed anyway, so we comment on - // this function fairly extensively. + // Assembling the system of equations for the problem at hand is mostly as + // for the example programs before. However, some things have changed + // anyway, so we comment on this function fairly extensively. // - // At the top of the function you will find - // the usual assortment of variable - // declarations. Compared to previous - // programs, of importance is only that we - // expect to solve problems also with - // bi-quadratic elements and therefore have - // to use sufficiently accurate quadrature - // formula. In addition, we need to compute - // integrals over faces, i.e. dim-1 - // dimensional objects. The declaration of a - // face quadrature formula is then + // At the top of the function you will find the usual assortment of variable + // declarations. Compared to previous programs, of importance is only that + // we expect to solve problems also with bi-quadratic elements and therefore + // have to use sufficiently accurate quadrature formula. In addition, we + // need to compute integrals over faces, i.e. dim-1 dimensional + // objects. 
The declaration of a face quadrature formula is then // straightforward: template void HelmholtzProblem::assemble_system () @@ -829,52 +565,28 @@ namespace Step7 std::vector local_dof_indices (dofs_per_cell); - // Then we need objects which can - // evaluate the values, gradients, - // etc of the shape functions at - // the quadrature points. While it - // seems that it should be feasible - // to do it with one object for - // both domain and face integrals, - // there is a subtle difference - // since the weights in the domain - // integrals include the measure of - // the cell in the domain, while - // the face integral quadrature - // requires the measure of the face - // in a lower-dimensional - // manifold. Internally these two - // classes are rooted in a common - // base class which does most of - // the work and offers the same - // interface to both domain and - // interface integrals. + // Then we need objects which can evaluate the values, gradients, etc of + // the shape functions at the quadrature points. While it seems that it + // should be feasible to do it with one object for both domain and face + // integrals, there is a subtle difference since the weights in the domain + // integrals include the measure of the cell in the domain, while the face + // integral quadrature requires the measure of the face in a + // lower-dimensional manifold. Internally these two classes are rooted in + // a common base class which does most of the work and offers the same + // interface to both domain and interface integrals. // - // For the domain integrals in the - // bilinear form for Helmholtz's - // equation, we need to compute the - // values and gradients, as well as - // the weights at the quadrature - // points. Furthermore, we need the - // quadrature points on the real - // cell (rather than on the unit - // cell) to evaluate the right hand - // side function. The object we use - // to get at this information is - // the FEValues class discussed - // previously. + // For the domain integrals in the bilinear form for Helmholtz's equation, + // we need to compute the values and gradients, as well as the weights at + // the quadrature points. Furthermore, we need the quadrature points on + // the real cell (rather than on the unit cell) to evaluate the right hand + // side function. The object we use to get at this information is the + // FEValues class discussed previously. // - // For the face integrals, we only - // need the values of the shape - // functions, as well as the - // weights. We also need the normal - // vectors and quadrature points on - // the real cell since we want to - // determine the Neumann values - // from the exact solution object - // (see below). The class that gives - // us this information is called - // FEFaceValues: + // For the face integrals, we only need the values of the shape functions, + // as well as the weights. We also need the normal vectors and quadrature + // points on the real cell since we want to determine the Neumann values + // from the exact solution object (see below). 
The class that gives us + // this information is called FEFaceValues: FEValues fe_values (*fe, quadrature_formula, update_values | update_gradients | update_quadrature_points | update_JxW_values); @@ -883,46 +595,28 @@ namespace Step7 update_values | update_quadrature_points | update_normal_vectors | update_JxW_values); - // Then we need some objects - // already known from previous - // examples: An object denoting the - // right hand side function, its - // values at the quadrature points - // on a cell, the cell matrix and - // right hand side, and the indices - // of the degrees of freedom on a - // cell. + // Then we need some objects already known from previous examples: An + // object denoting the right hand side function, its values at the + // quadrature points on a cell, the cell matrix and right hand side, and + // the indices of the degrees of freedom on a cell. // - // Note that the operations we will do with - // the right hand side object are only - // querying data, never changing the - // object. We can therefore declare it - // const: + // Note that the operations we will do with the right hand side object are + // only querying data, never changing the object. We can therefore declare + // it const: const RightHandSide right_hand_side; std::vector rhs_values (n_q_points); - // Finally we define an object - // denoting the exact solution - // function. We will use it to - // compute the Neumann values at - // the boundary from it. Usually, - // one would of course do so using - // a separate object, in particular - // since the exact solution is generally - // unknown while the Neumann values - // are prescribed. We will, - // however, be a little bit lazy - // and use what we already have in - // information. Real-life programs - // would to go other ways here, of - // course. + // Finally we define an object denoting the exact solution function. We + // will use it to compute the Neumann values at the boundary from + // it. Usually, one would of course do so using a separate object, in + // particular since the exact solution is generally unknown while the + // Neumann values are prescribed. We will, however, be a little bit lazy + // and use what we already have in information. Real-life programs would + // to go other ways here, of course. const Solution exact_solution; - // Now for the main loop over all - // cells. This is mostly unchanged - // from previous examples, so we - // only comment on the things that - // have changed. + // Now for the main loop over all cells. This is mostly unchanged from + // previous examples, so we only comment on the things that have changed. typename DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); @@ -940,12 +634,8 @@ namespace Step7 for (unsigned int i=0; i1, - // which is the value that we - // have assigned to that - // portions of the boundary - // composing Gamma2 in the - // run() function further - // below. (The - // default value of boundary - // indicators is 0, so faces - // can only have an indicator - // equal to 1 if we have - // explicitly set it.) + // Then there is that second term on the right hand side, the contour + // integral. First we have to find out whether the intersection of the + // faces of this cell with the boundary part Gamma2 is nonzero. 
To + // this end, we loop over all faces and check whether its boundary + // indicator equals 1, which is the value that we have + // assigned to that portions of the boundary composing Gamma2 in the + // run() function further below. (The default value of + // boundary indicators is 0, so faces can only have an + // indicator equal to 1 if we have explicitly set it.) for (unsigned int face=0; face::faces_per_cell; ++face) if (cell->face(face)->at_boundary() && (cell->face(face)->boundary_indicator() == 1)) { - // If we came into here, - // then we have found an - // external face - // belonging to - // Gamma2. Next, we have - // to compute the values - // of the shape functions - // and the other - // quantities which we - // will need for the - // computation of the - // contour integral. This - // is done using the - // reinit function - // which we already know - // from the FEValue - // class: + // If we came into here, then we have found an external face + // belonging to Gamma2. Next, we have to compute the values of + // the shape functions and the other quantities which we will + // need for the computation of the contour integral. This is + // done using the reinit function which we already + // know from the FEValue class: fe_face_values.reinit (cell, face); - // And we can then - // perform the - // integration by using a - // loop over all - // quadrature points. + // And we can then perform the integration by using a loop over + // all quadrature points. // - // On each quadrature point, we - // first compute the value of the - // normal derivative. We do so - // using the gradient of the - // exact solution and the normal - // vector to the face at the - // present quadrature point - // obtained from the - // fe_face_values - // object. This is then used to - // compute the additional - // contribution of this face to - // the right hand side: + // On each quadrature point, we first compute the value of the + // normal derivative. We do so using the gradient of the exact + // solution and the normal vector to the face at the present + // quadrature point obtained from the + // fe_face_values object. This is then used to + // compute the additional contribution of this face to the right + // hand side: for (unsigned int q_point=0; q_pointget_dof_indices (local_dof_indices); for (unsigned int i=0; iinterpolate_boundary_values) - // does not represent the whole - // boundary any more. Rather, it is - // that portion of the boundary - // which we have not assigned - // another indicator (see - // below). The degrees of freedom - // at the boundary that do not - // belong to Gamma1 are therefore - // excluded from the interpolation - // of boundary values, just as - // we want. + // We note, however that now the boundary indicator for which we + // interpolate boundary values (denoted by the second parameter to + // interpolate_boundary_values) does not represent the whole + // boundary any more. Rather, it is that portion of the boundary which we + // have not assigned another indicator (see below). The degrees of freedom + // at the boundary that do not belong to Gamma1 are therefore excluded + // from the interpolation of boundary values, just as we want. 
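// The calls that this paragraph refers to are not among the lines shown in
// this hunk. For orientation, in this program they take roughly the following
// shape (a sketch only, assuming the usual step-7 member objects dof_handler,
// system_matrix, solution and system_rhs, and the Solution class defined near
// the top of the file; the second argument is the boundary indicator of
// Gamma1):

  std::map<unsigned int,double> boundary_values;
  VectorTools::interpolate_boundary_values (dof_handler,
                                            0,
                                            Solution<dim>(),
                                            boundary_values);
  MatrixTools::apply_boundary_values (boundary_values,
                                      system_matrix,
                                      solution,
                                      system_rhs);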
hanging_node_constraints.condense (system_matrix); hanging_node_constraints.condense (system_rhs); @@ -1093,8 +736,7 @@ namespace Step7 // @sect4{HelmholtzProblem::solve} - // Solving the system of equations is - // done in the same way as before: + // Solving the system of equations is done in the same way as before: template void HelmholtzProblem::solve () { @@ -1113,66 +755,35 @@ namespace Step7 // @sect4{HelmholtzProblem::refine_grid} - // Now for the function doing grid - // refinement. Depending on the - // refinement mode passed to the - // constructor, we do global or - // adaptive refinement. + // Now for the function doing grid refinement. Depending on the refinement + // mode passed to the constructor, we do global or adaptive refinement. // - // Global refinement is simple, - // so there is - // not much to comment on. - // In case of adaptive - // refinement, we use the same - // functions and classes as in - // the previous example - // program. Note that one - // could treat Neumann - // boundaries differently than - // Dirichlet boundaries, and - // one should in fact do so - // here since we have Neumann - // boundary conditions on part - // of the boundaries, but - // since we don't have a - // function here that - // describes the Neumann - // values (we only construct - // these values from the exact - // solution when assembling - // the matrix), we omit this - // detail even though they would - // not be hard to add. + // Global refinement is simple, so there is not much to comment on. In case + // of adaptive refinement, we use the same functions and classes as in the + // previous example program. Note that one could treat Neumann boundaries + // differently than Dirichlet boundaries, and one should in fact do so here + // since we have Neumann boundary conditions on part of the boundaries, but + // since we don't have a function here that describes the Neumann values (we + // only construct these values from the exact solution when assembling the + // matrix), we omit this detail even though they would not be hard to add. // - // At the end of the switch, we have a - // default case that looks slightly strange: - // an Assert statement with a false - // condition. Since the Assert macro - // raises an error whenever the condition is - // false, this means that whenever we hit - // this statement the program will be - // aborted. This in intentional: Right now we - // have only implemented two refinement - // strategies (global and adaptive), but - // someone might want to add a third strategy - // (for example adaptivity with a different - // refinement criterion) and add a third - // member to the enumeration that determines - // the refinement mode. If it weren't for the - // default case of the switch statement, this - // function would simply run to its end - // without doing anything. This is most - // likely not what was intended. One of the - // defensive programming techniques that you - // will find all over the deal.II library is - // therefore to always have default cases - // that abort, to make sure that values not - // considered when listing the cases in the - // switch statement are eventually caught, - // and forcing programmers to add code to - // handle them. We will use this same - // technique in other places further down as - // well. + // At the end of the switch, we have a default case that looks slightly + // strange: an Assert statement with a false + // condition. 
Since the Assert macro raises an error whenever + // the condition is false, this means that whenever we hit this statement + // the program will be aborted. This in intentional: Right now we have only + // implemented two refinement strategies (global and adaptive), but someone + // might want to add a third strategy (for example adaptivity with a + // different refinement criterion) and add a third member to the enumeration + // that determines the refinement mode. If it weren't for the default case + // of the switch statement, this function would simply run to its end + // without doing anything. This is most likely not what was intended. One of + // the defensive programming techniques that you will find all over the + // deal.II library is therefore to always have default cases that abort, to + // make sure that values not considered when listing the cases in the switch + // statement are eventually caught, and forcing programmers to add code to + // handle them. We will use this same technique in other places further down + // as well. template void HelmholtzProblem::refine_grid () { @@ -1214,58 +825,34 @@ namespace Step7 // @sect4{HelmholtzProblem::process_solution} - // Finally we want to process the solution - // after it has been computed. For this, we - // integrate the error in various norms, and - // we generate tables that will later be used - // to display the convergence against the - // continuous solution in a nice format. + // Finally we want to process the solution after it has been computed. For + // this, we integrate the error in various norms, and we generate tables + // that will later be used to display the convergence against the continuous + // solution in a nice format. template void HelmholtzProblem::process_solution (const unsigned int cycle) { - // Our first task is to compute - // error norms. In order to integrate - // the difference between computed - // numerical solution and the - // continuous solution (described - // by the Solution class - // defined at the top of this - // file), we first need a vector - // that will hold the norm of the - // error on each cell. Since - // accuracy with 16 digits is not - // so important for these - // quantities, we save some memory - // by using float instead of + // Our first task is to compute error norms. In order to integrate the + // difference between computed numerical solution and the continuous + // solution (described by the Solution class defined at the top of this + // file), we first need a vector that will hold the norm of the error on + // each cell. Since accuracy with 16 digits is not so important for these + // quantities, we save some memory by using float instead of // double values. // - // The next step is to use a function - // from the library which computes the - // error in the L2 norm on each cell. - // We have to pass it the DoF handler - // object, the vector holding the - // nodal values of the numerical - // solution, the continuous - // solution as a function object, - // the vector into which it shall - // place the norm of the error on - // each cell, a quadrature rule by - // which this norm shall be - // computed, and the type of norm - // to be used. Here, we use a Gauss - // formula with three points in - // each space direction, and - // compute the L2 norm. + // The next step is to use a function from the library which computes the + // error in the L2 norm on each cell. 
We have to pass it the DoF handler + // object, the vector holding the nodal values of the numerical solution, + // the continuous solution as a function object, the vector into which it + // shall place the norm of the error on each cell, a quadrature rule by + // which this norm shall be computed, and the type of norm to be + // used. Here, we use a Gauss formula with three points in each space + // direction, and compute the L2 norm. // - // Finally, we want to get the - // global L2 norm. This can of - // course be obtained by summing - // the squares of the norms on each - // cell, and taking the square root - // of that value. This is - // equivalent to taking the l2 - // (lower case l) norm of the - // vector of norms on each cell: + // Finally, we want to get the global L2 norm. This can of course be + // obtained by summing the squares of the norms on each cell, and taking + // the square root of that value. This is equivalent to taking the l2 + // (lower case l) norm of the vector of norms on each cell: Vector difference_per_cell (triangulation.n_active_cells()); VectorTools::integrate_difference (dof_handler, solution, @@ -1275,11 +862,9 @@ namespace Step7 VectorTools::L2_norm); const double L2_error = difference_per_cell.l2_norm(); - // By same procedure we get the H1 - // semi-norm. We re-use the - // difference_per_cell vector since it - // is no longer used after computing the - // L2_error variable above. + // By same procedure we get the H1 semi-norm. We re-use the + // difference_per_cell vector since it is no longer used + // after computing the L2_error variable above. VectorTools::integrate_difference (dof_handler, solution, Solution(), @@ -1288,38 +873,22 @@ namespace Step7 VectorTools::H1_seminorm); const double H1_error = difference_per_cell.l2_norm(); - // Finally, we compute the maximum - // norm. Of course, we can't - // actually compute the true maximum, - // but only the maximum at the - // quadrature points. Since this - // depends quite sensitively on the - // quadrature rule being used, and - // since we would like to avoid - // false results due to - // super-convergence effects at - // some points, we use a special - // quadrature rule that is obtained - // by iterating the trapezoidal - // rule five times in each space - // direction. Note that the - // constructor of the QIterated - // class takes a one-dimensional - // quadrature rule and a number - // that tells it how often it shall - // use this rule in each space - // direction. + // Finally, we compute the maximum norm. Of course, we can't actually + // compute the true maximum, but only the maximum at the quadrature + // points. Since this depends quite sensitively on the quadrature rule + // being used, and since we would like to avoid false results due to + // super-convergence effects at some points, we use a special quadrature + // rule that is obtained by iterating the trapezoidal rule five times in + // each space direction. Note that the constructor of the QIterated class + // takes a one-dimensional quadrature rule and a number that tells it how + // often it shall use this rule in each space direction. // - // Using this special quadrature rule, we - // can then try to find the maximal error - // on each cell. Finally, we compute the - // global L infinity error from the L - // infinite errors on each cell. 
Instead of - // summing squares, we now have to take the - // maximum value over all cell-wise - // entries, an operation that is - // conveniently done using the - // Vector::linfty() function: + // Using this special quadrature rule, we can then try to find the maximal + // error on each cell. Finally, we compute the global L infinity error + // from the L infinite errors on each cell. Instead of summing squares, we + // now have to take the maximum value over all cell-wise entries, an + // operation that is conveniently done using the Vector::linfty() + // function: const QTrapez<1> q_trapez; const QIterated q_iterated (q_trapez, 5); VectorTools::integrate_difference (dof_handler, @@ -1330,19 +899,12 @@ namespace Step7 VectorTools::Linfty_norm); const double Linfty_error = difference_per_cell.linfty_norm(); - // After all these errors have been - // computed, we finally write some - // output. In addition, we add the - // important data to the - // TableHandler by specifying - // the key of the column and the value. - // Note that it is not necessary to - // define column keys beforehand -- it is - // sufficient to just add values, - // and columns will be - // introduced into the table in the - // order values are added the - // first time. + // After all these errors have been computed, we finally write some + // output. In addition, we add the important data to the TableHandler by + // specifying the key of the column and the value. Note that it is not + // necessary to define column keys beforehand -- it is sufficient to just + // add values, and columns will be introduced into the table in the order + // values are added the first time. const unsigned int n_active_cells=triangulation.n_active_cells(); const unsigned int n_dofs=dof_handler.n_dofs(); @@ -1366,75 +928,41 @@ namespace Step7 // @sect4{HelmholtzProblem::run} - // As in previous example programs, - // the run function controls the - // flow of execution. The basic - // layout is as in previous examples: - // an outer loop over successively - // refined grids, and in this loop - // first problem setup, assembling - // the linear system, solution, and + // As in previous example programs, the run function controls + // the flow of execution. The basic layout is as in previous examples: an + // outer loop over successively refined grids, and in this loop first + // problem setup, assembling the linear system, solution, and // post-processing. // - // The first task in the main loop is - // creation and refinement of - // grids. This is as in previous - // examples, with the only difference - // that we want to have part of the - // boundary marked as Neumann type, - // rather than Dirichlet. + // The first task in the main loop is creation and refinement of grids. This + // is as in previous examples, with the only difference that we want to have + // part of the boundary marked as Neumann type, rather than Dirichlet. // - // For this, we will use the - // following convention: Faces - // belonging to Gamma1 will have the - // boundary indicator 0 - // (which is the default, so we don't - // have to set it explicitely), and - // faces belonging to Gamma2 will use - // 1 as boundary - // indicator. To set these values, - // we loop over all cells, then over - // all faces of a given cell, check - // whether it is part of the boundary - // that we want to denote by Gamma2, - // and if so set its boundary - // indicator to 1. For - // the present program, we consider - // the left and bottom boundaries as - // Gamma2. 
We determine whether a - // face is part of that boundary by - // asking whether the x or y - // coordinates (i.e. vector - // components 0 and 1) of the - // midpoint of a face equals -1, up - // to some small wiggle room that we - // have to give since it is instable - // to compare floating point numbers - // that are subject to round off in + // For this, we will use the following convention: Faces belonging to Gamma1 + // will have the boundary indicator 0 (which is the default, so + // we don't have to set it explicitely), and faces belonging to Gamma2 will + // use 1 as boundary indicator. To set these values, we loop + // over all cells, then over all faces of a given cell, check whether it is + // part of the boundary that we want to denote by Gamma2, and if so set its + // boundary indicator to 1. For the present program, we + // consider the left and bottom boundaries as Gamma2. We determine whether a + // face is part of that boundary by asking whether the x or y coordinates + // (i.e. vector components 0 and 1) of the midpoint of a face equals -1, up + // to some small wiggle room that we have to give since it is instable to + // compare floating point numbers that are subject to round off in // intermediate computations. // - // It is worth noting that we have to - // loop over all cells here, not only - // the active ones. The reason is - // that upon refinement, newly - // created faces inherit the boundary - // indicator of their parent face. If - // we now only set the boundary - // indicator for active faces, - // coarsen some cells and refine them - // later on, they will again have the - // boundary indicator of the parent - // cell which we have not modified, - // instead of the one we - // intended. Consequently, we have to - // change the boundary indicators of - // faces of all cells on Gamma2, - // whether they are active or not. - // Alternatively, we could of course - // have done this job on the coarsest - // mesh (i.e. before the first - // refinement step) and refined the - // mesh only after that. + // It is worth noting that we have to loop over all cells here, not only the + // active ones. The reason is that upon refinement, newly created faces + // inherit the boundary indicator of their parent face. If we now only set + // the boundary indicator for active faces, coarsen some cells and refine + // them later on, they will again have the boundary indicator of the parent + // cell which we have not modified, instead of the one we + // intended. Consequently, we have to change the boundary indicators of + // faces of all cells on Gamma2, whether they are active or not. + // Alternatively, we could of course have done this job on the coarsest mesh + // (i.e. before the first refinement step) and refined the mesh only after + // that. template void HelmholtzProblem::run () { @@ -1461,49 +989,33 @@ namespace Step7 refine_grid (); - // The next steps are already - // known from previous - // examples. This is mostly the - // basic set-up of every finite - // element program: + // The next steps are already known from previous examples. This is + // mostly the basic set-up of every finite element program: setup_system (); assemble_system (); solve (); - // The last step in this chain - // of function calls is usually - // the evaluation of the computed - // solution for the quantities - // one is interested in. This - // is done in the following - // function. 
Since the function - // generates output that indicates - // the number of the present - // refinement step, we pass this - // number as an argument. + // The last step in this chain of function calls is usually the + // evaluation of the computed solution for the quantities one is + // interested in. This is done in the following function. Since the + // function generates output that indicates the number of the present + // refinement step, we pass this number as an argument. process_solution (cycle); } // @sect5{Output of graphical data} - // After the last iteration we output the - // solution on the finest grid. This is - // done using the following sequence of - // statements which we have already - // discussed in previous examples. The - // first step is to generate a suitable - // filename (called gmv_filename here, - // since we want to output data in GMV - // format; we add the prefix to distinguish - // the filename from that used for other - // output files further down below). Here, - // we augment the name by the mesh - // refinement algorithm, and as above we - // make sure that we abort the program if - // another refinement method is added and - // not handled by the following switch - // statement: + // After the last iteration we output the solution on the finest + // grid. This is done using the following sequence of statements which we + // have already discussed in previous examples. The first step is to + // generate a suitable filename (called gmv_filename here, + // since we want to output data in GMV format; we add the prefix to + // distinguish the filename from that used for other output files further + // down below). Here, we augment the name by the mesh refinement + // algorithm, and as above we make sure that we abort the program if + // another refinement method is added and not handled by the following + // switch statement: std::string gmv_filename; switch (refinement_mode) { @@ -1517,25 +1029,17 @@ namespace Step7 Assert (false, ExcNotImplemented()); } - // We augment the filename by a postfix - // denoting the finite element which we - // have used in the computation. To this - // end, the finite element base class - // stores the maximal polynomial degree of - // shape functions in each coordinate - // variable as a variable degree, and - // we use for the switch statement (note - // that the polynomial degree of bilinear - // shape functions is really 2, since they - // contain the term x*y; however, the - // polynomial degree in each coordinate - // variable is still only 1). We again use - // the same defensive programming technique - // to safeguard against the case that the - // polynomial degree has an unexpected - // value, using the Assert (false, - // ExcNotImplemented()) idiom in the - // default branch of the switch statement: + // We augment the filename by a postfix denoting the finite element which + // we have used in the computation. To this end, the finite element base + // class stores the maximal polynomial degree of shape functions in each + // coordinate variable as a variable degree, and we use for + // the switch statement (note that the polynomial degree of bilinear shape + // functions is really 2, since they contain the term x*y; + // however, the polynomial degree in each coordinate variable is still + // only 1). 
We again use the same defensive programming technique to + // safeguard against the case that the polynomial degree has an unexpected + // value, using the Assert (false, ExcNotImplemented()) idiom + // in the default branch of the switch statement: switch (fe->degree) { case 1: @@ -1549,11 +1053,9 @@ namespace Step7 Assert (false, ExcNotImplemented()); } - // Once we have the base name for the - // output file, we add an extension - // appropriate for GMV output, open a file, - // and add the solution vector to the - // object that will do the actual output: + // Once we have the base name for the output file, we add an extension + // appropriate for GMV output, open a file, and add the solution vector to + // the object that will do the actual output: gmv_filename += ".gmv"; std::ofstream output (gmv_filename.c_str()); @@ -1561,92 +1063,52 @@ namespace Step7 data_out.attach_dof_handler (dof_handler); data_out.add_data_vector (solution, "solution"); - // Now building the intermediate - // format as before is the next - // step. We introduce one more - // feature of deal.II here. The - // background is the following: in - // some of the runs of this - // function, we have used - // biquadratic finite - // elements. However, since almost - // all output formats only support - // bilinear data, the data is - // written only bilinear, and - // information is consequently lost. - // Of course, we can't - // change the format in which - // graphic programs accept their - // inputs, but we can write the - // data differently such that we - // more closely resemble the - // information available in the - // quadratic approximation. We can, - // for example, write each cell as - // four sub-cells with bilinear data - // each, such that we have nine - // data points for each cell in the - // triangulation. The graphic - // programs will, of course, - // display this data still only - // bilinear, but at least we have - // given some more of the - // information we have. + // Now building the intermediate format as before is the next step. We + // introduce one more feature of deal.II here. The background is the + // following: in some of the runs of this function, we have used + // biquadratic finite elements. However, since almost all output formats + // only support bilinear data, the data is written only bilinear, and + // information is consequently lost. Of course, we can't change the + // format in which graphic programs accept their inputs, but we can write + // the data differently such that we more closely resemble the information + // available in the quadratic approximation. We can, for example, write + // each cell as four sub-cells with bilinear data each, such that we have + // nine data points for each cell in the triangulation. The graphic + // programs will, of course, display this data still only bilinear, but at + // least we have given some more of the information we have. // - // In order to allow writing more - // than one sub-cell per actual - // cell, the build_patches - // function accepts a parameter - // (the default is 1, which is - // why you haven't seen this - // parameter in previous - // examples). This parameter - // denotes into how many sub-cells - // per space direction each cell - // shall be subdivided for - // output. For example, if you give - // 2, this leads to 4 cells in - // 2D and 8 cells in 3D. For - // quadratic elements, two - // sub-cells per space direction is - // obviously the right choice, so - // this is what we choose. 
In - // general, for elements of - // polynomial order q, we use - // q subdivisions, and the - // order of the elements is - // determined in the same way as - // above. + // In order to allow writing more than one sub-cell per actual cell, the + // build_patches function accepts a parameter (the default is + // 1, which is why you haven't seen this parameter in + // previous examples). This parameter denotes into how many sub-cells per + // space direction each cell shall be subdivided for output. For example, + // if you give 2, this leads to 4 cells in 2D and 8 cells in + // 3D. For quadratic elements, two sub-cells per space direction is + // obviously the right choice, so this is what we choose. In general, for + // elements of polynomial order q, we use q + // subdivisions, and the order of the elements is determined in the same + // way as above. // - // With the intermediate format - // so generated, we can then actually - // write the graphical output in GMV - // format: + // With the intermediate format so generated, we can then actually write + // the graphical output in GMV format: data_out.build_patches (fe->degree); data_out.write_gmv (output); // @sect5{Output of convergence tables} - // After graphical output, we would also - // like to generate tables from the error - // computations we have done in - // process_solution. There, we have - // filled a table object with the number of - // cells for each refinement step as well - // as the errors in different norms. - - // For a nicer textual output of this data, - // one may want to set the precision with - // which the values will be written upon - // output. We use 3 digits for this, which - // is usually sufficient for error - // norms. By default, data is written in - // fixed point notation. However, for - // columns one would like to see in - // scientific notation another function - // call sets the scientific_flag to - // true, leading to floating point - // representation of numbers. + // After graphical output, we would also like to generate tables from the + // error computations we have done in + // process_solution. There, we have filled a table object + // with the number of cells for each refinement step as well as the errors + // in different norms. + + // For a nicer textual output of this data, one may want to set the + // precision with which the values will be written upon output. We use 3 + // digits for this, which is usually sufficient for error norms. By + // default, data is written in fixed point notation. However, for columns + // one would like to see in scientific notation another function call sets + // the scientific_flag to true, leading to + // floating point representation of numbers. convergence_table.set_precision("L2", 3); convergence_table.set_precision("H1", 3); convergence_table.set_precision("Linfty", 3); @@ -1655,52 +1117,37 @@ namespace Step7 convergence_table.set_scientific("H1", true); convergence_table.set_scientific("Linfty", true); - // For the output of a table into a LaTeX - // file, the default captions of the - // columns are the keys given as argument - // to the add_value functions. To have - // TeX captions that differ from the - // default ones you can specify them by the - // following function calls. - // Note, that `\\' is reduced to - // `\' by the compiler such that the - // real TeX caption is, e.g., - // `$L^\infty$-error'. 
+ // For the output of a table into a LaTeX file, the default captions of + // the columns are the keys given as argument to the + // add_value functions. To have TeX captions that differ from + // the default ones you can specify them by the following function calls. + // Note, that `\\' is reduced to `\' by the compiler such that the real + // TeX caption is, e.g., `$L^\infty$-error'. convergence_table.set_tex_caption("cells", "\\# cells"); convergence_table.set_tex_caption("dofs", "\\# dofs"); convergence_table.set_tex_caption("L2", "$L^2$-error"); convergence_table.set_tex_caption("H1", "$H^1$-error"); convergence_table.set_tex_caption("Linfty", "$L^\\infty$-error"); - // Finally, the default LaTeX format for - // each column of the table is `c' - // (centered). To specify a different - // (e.g. `right') one, the following + // Finally, the default LaTeX format for each column of the table is `c' + // (centered). To specify a different (e.g. `right') one, the following // function may be used: convergence_table.set_tex_format("cells", "r"); convergence_table.set_tex_format("dofs", "r"); - // After this, we can finally write the - // table to the standard output stream - // std::cout (after one extra empty - // line, to make things look - // prettier). Note, that the output in text - // format is quite simple and that - // captions may not be printed directly - // above the specific columns. + // After this, we can finally write the table to the standard output + // stream std::cout (after one extra empty line, to make + // things look prettier). Note, that the output in text format is quite + // simple and that captions may not be printed directly above the specific + // columns. std::cout << std::endl; convergence_table.write_text(std::cout); - // The table can also be written - // into a LaTeX file. The (nicely) - // formatted table can be viewed at - // after calling `latex filename' - // and e.g. `xdvi filename', where - // filename is the name of the file - // to which we will write output - // now. We construct the file name - // in the same way as before, but - // with a different prefix "error": + // The table can also be written into a LaTeX file. The (nicely) + // formatted table can be viewed at after calling `latex filename' and + // e.g. `xdvi filename', where filename is the name of the file to which + // we will write output now. We construct the file name in the same way as + // before, but with a different prefix "error": std::string error_filename = "error"; switch (refinement_mode) { @@ -1734,92 +1181,53 @@ namespace Step7 // @sect5{Further table manipulations} - // In case of global refinement, it - // might be of interest to also - // output the convergence - // rates. This may be done by the - // functionality the - // ConvergenceTable offers over - // the regular - // TableHandler. However, we do - // it only for global refinement, - // since for adaptive refinement - // the determination of something - // like an order of convergence is - // somewhat more involved. While we - // are at it, we also show a few - // other things that can be done - // with tables. + // In case of global refinement, it might be of interest to also output + // the convergence rates. This may be done by the functionality the + // ConvergenceTable offers over the regular TableHandler. However, we do + // it only for global refinement, since for adaptive refinement the + // determination of something like an order of convergence is somewhat + // more involved. 
While we are at it, we also show a few other things that + // can be done with tables. if (refinement_mode==global_refinement) { - // The first thing is that one - // can group individual columns - // together to form so-called - // super columns. Essentially, - // the columns remain the same, - // but the ones that were - // grouped together will get a - // caption running across all - // columns in a group. For - // example, let's merge the - // "cycle" and "cells" columns - // into a super column named "n + // The first thing is that one can group individual columns together + // to form so-called super columns. Essentially, the columns remain + // the same, but the ones that were grouped together will get a + // caption running across all columns in a group. For example, let's + // merge the "cycle" and "cells" columns into a super column named "n // cells": convergence_table.add_column_to_supercolumn("cycle", "n cells"); convergence_table.add_column_to_supercolumn("cells", "n cells"); - // Next, it isn't necessary to - // always output all columns, - // or in the order in which - // they were originally added - // during the run. Selecting - // and re-ordering the columns - // works as follows (note that - // this includes super - // columns): + // Next, it isn't necessary to always output all columns, or in the + // order in which they were originally added during the run. + // Selecting and re-ordering the columns works as follows (note that + // this includes super columns): std::vector new_order; new_order.push_back("n cells"); new_order.push_back("H1"); new_order.push_back("L2"); convergence_table.set_column_order (new_order); - // For everything that happened - // to the ConvergenceTable - // until this point, it would - // have been sufficient to use - // a simple - // TableHandler. Indeed, the - // ConvergenceTable is - // derived from the - // TableHandler but it offers - // the additional functionality - // of automatically evaluating - // convergence rates. For - // example, here is how we can - // let the table compute - // reduction and convergence - // rates (convergence rates are - // the binary logarithm of the - // reduction rate): + // For everything that happened to the ConvergenceTable until this + // point, it would have been sufficient to use a simple + // TableHandler. Indeed, the ConvergenceTable is derived from the + // TableHandler but it offers the additional functionality of + // automatically evaluating convergence rates. For example, here is + // how we can let the table compute reduction and convergence rates + // (convergence rates are the binary logarithm of the reduction rate): convergence_table .evaluate_convergence_rates("L2", ConvergenceTable::reduction_rate); convergence_table .evaluate_convergence_rates("L2", ConvergenceTable::reduction_rate_log2); convergence_table .evaluate_convergence_rates("H1", ConvergenceTable::reduction_rate_log2); - // Each of these - // function calls produces an - // additional column that is - // merged with the original - // column (in our example the - // `L2' and the `H1' column) to - // a supercolumn. - - // Finally, we want to write - // this convergence chart - // again, first to the screen - // and then, in LaTeX format, - // to disk. The filename is + // Each of these function calls produces an additional column that is + // merged with the original column (in our example the `L2' and the + // `H1' column) to a supercolumn. 
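// The remark that convergence rates are the binary logarithm of the reduction
// rate can be spelled out in code. A minimal sketch, not part of this
// program, assuming only <cmath> (std::log2 is avoided since the tutorials
// predate C++11):

  double convergence_rate (const double error_prev, const double error_cur)
  {
    const double reduction = error_prev / error_cur;  // what reduction_rate reports
    return std::log (reduction) / std::log (2.0);     // what reduction_rate_log2 reports
  }

// For Q1 elements under global refinement one expects this rate to approach 2
// for the L2 error and 1 for the H1 seminorm error.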
+ + // Finally, we want to write this convergence chart again, first to + // the screen and then, in LaTeX format, to disk. The filename is // again constructed as above. std::cout << std::endl; convergence_table.write_text(std::cout); @@ -1854,34 +1262,22 @@ namespace Step7 } } - // The final step before going to - // main() is then to close the - // namespace Step7 into which - // we have put everything we needed for - // this program: + // The final step before going to main() is then to close the + // namespace Step7 into which we have put everything we needed + // for this program: } // @sect3{Main function} -// The main function is mostly as -// before. The only difference is -// that we solve three times, once -// for Q1 and adaptive refinement, -// once for Q1 elements and global -// refinement, and once for Q2 -// elements and global refinement. +// The main function is mostly as before. The only difference is that we solve +// three times, once for Q1 and adaptive refinement, once for Q1 elements and +// global refinement, and once for Q2 elements and global refinement. // -// Since we instantiate several -// template classes below for two -// space dimensions, we make this -// more generic by declaring a -// constant at the beginning of the -// function denoting the number of -// space dimensions. If you want to -// run the program in 1d or 2d, you -// will then only have to change this -// one instance, rather than all uses -// below: +// Since we instantiate several template classes below for two space +// dimensions, we make this more generic by declaring a constant at the +// beginning of the function denoting the number of space dimensions. If you +// want to run the program in 1d or 2d, you will then only have to change this +// one instance, rather than all uses below: int main () { const unsigned int dim = 2; @@ -1893,23 +1289,13 @@ int main () deallog.depth_console (0); - // Now for the three calls to - // the main class. Each call is - // blocked into curly braces in - // order to destroy the - // respective objects (i.e. the - // finite element and the - // HelmholtzProblem object) - // at the end of the block and - // before we go to the next - // run. This avoids conflicts - // with variable names, and - // also makes sure that memory - // is released immediately - // after one of the three runs - // has finished, and not only - // at the end of the try - // block. + // Now for the three calls to the main class. Each call is blocked into + // curly braces in order to destroy the respective objects (i.e. the + // finite element and the HelmholtzProblem object) at the end of the + // block and before we go to the next run. This avoids conflicts with + // variable names, and also makes sure that memory is released + // immediately after one of the three runs has finished, and not only at + // the end of the try block. { std::cout << "Solving with Q1 elements, adaptive refinement" << std::endl << "=============================================" << std::endl @@ -1981,22 +1367,13 @@ int main () } -// What comes here is basically just -// an annoyance that you can ignore -// if you are not working on an AIX -// system: on this system, static -// member variables are not -// instantiated automatically when -// their enclosing class is -// instantiated. This leads to linker -// errors if these variables are not -// explicitly instantiated. 
As said, -// this is, strictly C++ standards -// speaking, not necessary, but it -// doesn't hurt either on other -// systems, and since it is necessary -// to get things running on AIX, why -// not do it: +// What comes here is basically just an annoyance that you can ignore if you +// are not working on an AIX system: on this system, static member variables +// are not instantiated automatically when their enclosing class is +// instantiated. This leads to linker errors if these variables are not +// explicitly instantiated. As said, this is, strictly C++ standards speaking, +// not necessary, but it doesn't hurt either on other systems, and since it is +// necessary to get things running on AIX, why not do it: namespace Step7 { template const double SolutionBase<2>::width; diff --git a/deal.II/examples/step-8/step-8.cc b/deal.II/examples/step-8/step-8.cc index ae1da7c04d..a2b3446048 100644 --- a/deal.II/examples/step-8/step-8.cc +++ b/deal.II/examples/step-8/step-8.cc @@ -11,9 +11,8 @@ // @sect3{Include files} -// As usual, the first few include -// files are already known, so we -// will not comment on them further. +// As usual, the first few include files are already known, so we will not +// comment on them further. #include #include #include @@ -38,54 +37,38 @@ #include #include -// In this example, we need -// vector-valued finite elements. The -// support for these can be found in -// the following include file: +// In this example, we need vector-valued finite elements. The support for +// these can be found in the following include file: #include -// We will compose the vector-valued -// finite elements from regular Q1 -// elements which can be found here, -// as usual: +// We will compose the vector-valued finite elements from regular Q1 elements +// which can be found here, as usual: #include // This again is C++: #include #include -// The last step is as in previous -// programs. In particular, just like in -// step-7, we pack everything that's specific -// to this program into a namespace of its -// own. +// The last step is as in previous programs. In particular, just like in +// step-7, we pack everything that's specific to this program into a namespace +// of its own. namespace Step8 { using namespace dealii; // @sect3{The ElasticProblem class template} - // The main class is, except for its - // name, almost unchanged with - // respect to the step-6 example. + // The main class is, except for its name, almost unchanged with respect to + // the step-6 example. // - // The only change is the use of a - // different class for the fe - // variable: Instead of a concrete - // finite element class such as - // FE_Q, we now use a more - // generic one, FESystem. In - // fact, FESystem is not really a - // finite element itself in that it - // does not implement shape functions - // of its own. Rather, it is a class - // that can be used to stack several - // other elements together to form - // one vector-valued finite - // element. In our case, we will - // compose the vector-valued element - // of FE_Q(1) objects, as shown - // below in the constructor of this - // class. + // The only change is the use of a different class for the fe + // variable: Instead of a concrete finite element class such as + // FE_Q, we now use a more generic one, + // FESystem. In fact, FESystem is not really a + // finite element itself in that it does not implement shape functions of + // its own. 
Rather, it is a class that can be used to stack several other + // elements together to form one vector-valued finite element. In our case, + // we will compose the vector-valued element of FE_Q(1) + // objects, as shown below in the constructor of this class. template class ElasticProblem { @@ -118,56 +101,34 @@ namespace Step8 // @sect3{Right hand side values} - // Before going over to the - // implementation of the main class, - // we declare and define the class - // which describes the right hand - // side. This time, the right hand - // side is vector-valued, as is the - // solution, so we will describe the - // changes required for this in some - // more detail. + // Before going over to the implementation of the main class, we declare and + // define the class which describes the right hand side. This time, the + // right hand side is vector-valued, as is the solution, so we will describe + // the changes required for this in some more detail. // - // The first thing is that - // vector-valued functions have to - // have a constructor, since they - // need to pass down to the base - // class of how many components the - // function consists. The default - // value in the constructor of the - // base class is one (i.e.: a scalar - // function), which is why we did not - // need not define a constructor for - // the scalar function used in - // previous programs. + // The first thing is that vector-valued functions have to have a + // constructor, since they need to pass down to the base class of how many + // components the function consists. The default value in the constructor of + // the base class is one (i.e.: a scalar function), which is why we did not + // need not define a constructor for the scalar function used in previous + // programs. template class RightHandSide : public Function { public: RightHandSide (); - // The next change is that we - // want a replacement for the - // value function of the - // previous examples. There, a - // second parameter component - // was given, which denoted which - // component was requested. Here, - // we implement a function that - // returns the whole vector of - // values at the given place at - // once, in the second argument - // of the function. The obvious - // name for such a replacement + // The next change is that we want a replacement for the + // value function of the previous examples. There, a second + // parameter component was given, which denoted which + // component was requested. Here, we implement a function that returns the + // whole vector of values at the given place at once, in the second + // argument of the function. The obvious name for such a replacement // function is vector_value. // - // Secondly, in analogy to the - // value_list function, there - // is a function - // vector_value_list, which - // returns the values of the - // vector-valued function at - // several points at once: + // Secondly, in analogy to the value_list function, there is + // a function vector_value_list, which returns the values of + // the vector-valued function at several points at once: virtual void vector_value (const Point &p, Vector &values) const; @@ -176,28 +137,17 @@ namespace Step8 }; - // This is the constructor of the - // right hand side class. As said - // above, it only passes down to the - // base class the number of - // components, which is dim in - // the present case (one force - // component in each of the dim - // space directions). + // This is the constructor of the right hand side class. 
As said above, it + // only passes down to the base class the number of components, which is + // dim in the present case (one force component in each of the + // dim space directions). // - // Some people would have moved the - // definition of such a short - // function right into the class - // declaration. We do not do that, as - // a matter of style: the deal.II - // style guides require that class - // declarations contain only - // declarations, and that definitions - // are always to be found - // outside. This is, obviously, as - // much as matter of taste as - // indentation, but we try to be - // consistent in this direction. + // Some people would have moved the definition of such a short function + // right into the class declaration. We do not do that, as a matter of + // style: the deal.II style guides require that class declarations contain + // only declarations, and that definitions are always to be found + // outside. This is, obviously, as much as matter of taste as indentation, + // but we try to be consistent in this direction. template RightHandSide::RightHandSide () : @@ -205,49 +155,27 @@ namespace Step8 {} - // Next the function that returns - // the whole vector of values at the - // point p at once. + // Next the function that returns the whole vector of values at the point + // p at once. // - // To prevent cases where the return - // vector has not previously been set - // to the right size we test for this - // case and otherwise throw an - // exception at the beginning of the - // function. Note that enforcing that - // output arguments already have the - // correct size is a convention in - // deal.II, and enforced almost - // everywhere. The reason is that we - // would otherwise have to check at - // the beginning of the function and - // possibly change the size of the - // output vector. This is expensive, - // and would almost always be - // unnecessary (the first call to the - // function would set the vector to - // the right size, and subsequent - // calls would only have to do - // redundant checks). In addition, - // checking and possibly resizing the - // vector is an operation that can - // not be removed if we can't rely on - // the assumption that the vector - // already has the correct size; this - // is in contract to the Assert - // call that is completely removed if - // the program is compiled in - // optimized mode. + // To prevent cases where the return vector has not previously been set to + // the right size we test for this case and otherwise throw an exception at + // the beginning of the function. Note that enforcing that output arguments + // already have the correct size is a convention in deal.II, and enforced + // almost everywhere. The reason is that we would otherwise have to check at + // the beginning of the function and possibly change the size of the output + // vector. This is expensive, and would almost always be unnecessary (the + // first call to the function would set the vector to the right size, and + // subsequent calls would only have to do redundant checks). In addition, + // checking and possibly resizing the vector is an operation that can not be + // removed if we can't rely on the assumption that the vector already has + // the correct size; this is in contract to the Assert call + // that is completely removed if the program is compiled in optimized mode. 
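// Seen from the calling side, the convention just described means that the
// caller sizes the output argument before the call, and the Assert merely
// verifies this in debug mode. A minimal sketch, assuming it sits inside a
// function template over dim:

  RightHandSide<dim> rhs;
  const Point<dim>   p;                 // some evaluation point (the origin here)
  Vector<double>     rhs_value (dim);   // sized by the caller, as the convention requires
  rhs.vector_value (p, rhs_value);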
// - // Likewise, if by some accident - // someone tried to compile and run - // the program in only one space - // dimension (in which the elastic - // equations do not make much sense - // since they reduce to the ordinary - // Laplace equation), we terminate - // the program in the second - // assertion. The program will work + // Likewise, if by some accident someone tried to compile and run the + // program in only one space dimension (in which the elastic equations do + // not make much sense since they reduce to the ordinary Laplace equation), + // we terminate the program in the second assertion. The program will work // just fine in 3d, however. template inline @@ -258,43 +186,30 @@ namespace Step8 ExcDimensionMismatch (values.size(), dim)); Assert (dim >= 2, ExcNotImplemented()); - // The rest of the function - // implements computing force - // values. We will use a constant - // (unit) force in x-direction - // located in two little circles - // (or spheres, in 3d) around - // points (0.5,0) and (-0.5,0), and - // y-force in an area around the - // origin; in 3d, the z-component - // of these centers is zero as - // well. + // The rest of the function implements computing force values. We will use + // a constant (unit) force in x-direction located in two little circles + // (or spheres, in 3d) around points (0.5,0) and (-0.5,0), and y-force in + // an area around the origin; in 3d, the z-component of these centers is + // zero as well. // - // For this, let us first define - // two objects that denote the - // centers of these areas. Note - // that upon construction of the - // Point objects, all - // components are set to zero. + // For this, let us first define two objects that denote the centers of + // these areas. Note that upon construction of the Point + // objects, all components are set to zero. Point point_1, point_2; point_1(0) = 0.5; point_2(0) = -0.5; - // If now the point p is in a - // circle (sphere) of radius 0.2 - // around one of these points, then - // set the force in x-direction to - // one, otherwise to zero: + // If now the point p is in a circle (sphere) of radius 0.2 + // around one of these points, then set the force in x-direction to one, + // otherwise to zero: if (((p-point_1).square() < 0.2*0.2) || ((p-point_2).square() < 0.2*0.2)) values(0) = 1; else values(0) = 0; - // Likewise, if p is in the - // vicinity of the origin, then set - // the y-force to 1, otherwise to - // zero: + // Likewise, if p is in the vicinity of the origin, then set + // the y-force to 1, otherwise to zero: if (p.square() < 0.2*0.2) values(1) = 1; else @@ -303,19 +218,12 @@ namespace Step8 - // Now, this is the function of the - // right hand side class that returns - // the values at several points at - // once. The function starts out with - // checking that the number of input - // and output arguments is equal (the - // sizes of the individual output - // vectors will be checked in the - // function that we call further down - // below). Next, we define an - // abbreviation for the number of - // points which we shall work on, to - // make some things simpler below. + // Now, this is the function of the right hand side class that returns the + // values at several points at once. The function starts out with checking + // that the number of input and output arguments is equal (the sizes of the + // individual output vectors will be checked in the function that we call + // further down below). 
Next, we define an abbreviation for the number of + // points which we shall work on, to make some things simpler below. template void RightHandSide::vector_value_list (const std::vector > &points, std::vector > &value_list) const @@ -325,78 +233,39 @@ namespace Step8 const unsigned int n_points = points.size(); - // Finally we treat each of the - // points. In one of the previous - // examples, we have explained why - // the - // value_list/vector_value_list - // function had been introduced: to - // prevent us from calling virtual - // functions too frequently. On the - // other hand, we now need to - // implement the same function - // twice, which can lead to - // confusion if one function is - // changed but the other is - // not. + // Finally we treat each of the points. In one of the previous examples, + // we have explained why the + // value_list/vector_value_list function had + // been introduced: to prevent us from calling virtual functions too + // frequently. On the other hand, we now need to implement the same + // function twice, which can lead to confusion if one function is changed + // but the other is not. // - // We can prevent this situation by - // calling - // RightHandSide::vector_value - // on each point in the input - // list. Note that by giving the - // full name of the function, - // including the class name, we - // instruct the compiler to - // explicitly call this function, - // and not to use the virtual - // function call mechanism that - // would be used if we had just - // called vector_value. This is - // important, since the compiler - // generally can't make any - // assumptions which function is - // called when using virtual - // functions, and it therefore - // can't inline the called function - // into the site of the call. On - // the contrary, here we give the - // fully qualified name, which - // bypasses the virtual function - // call, and consequently the - // compiler knows exactly which - // function is called and will - // inline above function into the - // present location. (Note that we - // have declared the - // vector_value function above - // inline, though modern - // compilers are also able to - // inline functions even if they - // have not been declared as - // inline). + // We can prevent this situation by calling + // RightHandSide::vector_value on each point in the input + // list. Note that by giving the full name of the function, including the + // class name, we instruct the compiler to explicitly call this function, + // and not to use the virtual function call mechanism that would be used + // if we had just called vector_value. This is important, + // since the compiler generally can't make any assumptions which function + // is called when using virtual functions, and it therefore can't inline + // the called function into the site of the call. On the contrary, here we + // give the fully qualified name, which bypasses the virtual function + // call, and consequently the compiler knows exactly which function is + // called and will inline above function into the present location. (Note + // that we have declared the vector_value function above + // inline, though modern compilers are also able to inline + // functions even if they have not been declared as inline). // - // It is worth noting why we go to - // such length explaining what we - // do. 
Using this construct, we - // manage to avoid any - // inconsistency: if we want to - // change the right hand side - // function, it would be difficult - // to always remember that we - // always have to change two - // functions in the same way. Using - // this forwarding mechanism, we - // only have to change a single - // place (the vector_value - // function), and the second place - // (the vector_value_list - // function) will always be - // consistent with it. At the same - // time, using virtual function - // call bypassing, the code is no - // less efficient than if we had - // written it twice in the first + // It is worth noting why we go to such length explaining what we + // do. Using this construct, we manage to avoid any inconsistency: if we + // want to change the right hand side function, it would be difficult to + // always remember that we always have to change two functions in the same + // way. Using this forwarding mechanism, we only have to change a single + // place (the vector_value function), and the second place + // (the vector_value_list function) will always be consistent + // with it. At the same time, using virtual function call bypassing, the + // code is no less efficient than if we had written it twice in the first // place: for (unsigned int p=0; p::vector_value (points[p], @@ -409,27 +278,16 @@ namespace Step8 // @sect4{ElasticProblem::ElasticProblem} - // Following is the constructor of - // the main class. As said before, we - // would like to construct a - // vector-valued finite element that - // is composed of several scalar - // finite elements (i.e., we want to - // build the vector-valued element so - // that each of its vector components - // consists of the shape functions of - // a scalar element). Of course, the - // number of scalar finite elements we - // would like to stack together - // equals the number of components - // the solution function has, which - // is dim since we consider - // displacement in each space - // direction. The FESystem class - // can handle this: we pass it the - // finite element of which we would - // like to compose the system of, and - // how often it shall be repeated: + // Following is the constructor of the main class. As said before, we would + // like to construct a vector-valued finite element that is composed of + // several scalar finite elements (i.e., we want to build the vector-valued + // element so that each of its vector components consists of the shape + // functions of a scalar element). Of course, the number of scalar finite + // elements we would like to stack together equals the number of components + // the solution function has, which is dim since we consider + // displacement in each space direction. The FESystem class can + // handle this: we pass it the finite element of which we would like to + // compose the system of, and how often it shall be repeated: template ElasticProblem::ElasticProblem () @@ -437,21 +295,16 @@ namespace Step8 dof_handler (triangulation), fe (FE_Q(1), dim) {} - // In fact, the FESystem class - // has several more constructors - // which can perform more complex - // operations than just stacking - // together several scalar finite - // elements of the same type into - // one; we will get to know these - // possibilities in later examples. 
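For concreteness, here is a sketch (not part of the program) of what one of those more general constructors looks like next to the one used above; the mixed element is purely illustrative:

    FESystem<dim> fe (FE_Q<dim>(1), dim);        // dim copies of a scalar Q1 element,
                                                 // as in this program

    FESystem<dim> fe_mixed (FE_Q<dim>(2), dim,   // e.g. dim quadratic components ...
                            FE_DGQ<dim>(1), 1);  // ... plus one discontinuous linear one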
+ // In fact, the FESystem class has several more constructors + // which can perform more complex operations than just stacking together + // several scalar finite elements of the same type into one; we will get to + // know these possibilities in later examples. // @sect4{ElasticProblem::~ElasticProblem} - // The destructor, on the other hand, - // is exactly as in step-6: + // The destructor, on the other hand, is exactly as in step-6: template ElasticProblem::~ElasticProblem () { @@ -461,27 +314,16 @@ namespace Step8 // @sect4{ElasticProblem::setup_system} - // Setting up the system of equations - // is identitical to the function - // used in the step-6 example. The - // DoFHandler class and all other - // classes used here are fully aware - // that the finite element we want to - // use is vector-valued, and take - // care of the vector-valuedness of - // the finite element themselves. (In - // fact, they do not, but this does - // not need to bother you: since they - // only need to know how many degrees - // of freedom there are per vertex, - // line and cell, and they do not ask - // what they represent, i.e. whether - // the finite element under - // consideration is vector-valued or - // whether it is, for example, a - // scalar Hermite element with - // several degrees of freedom on each - // vertex). + // Setting up the system of equations is identitical to the function used in + // the step-6 example. The DoFHandler class and all other + // classes used here are fully aware that the finite element we want to use + // is vector-valued, and take care of the vector-valuedness of the finite + // element themselves. (In fact, they do not, but this does not need to + // bother you: since they only need to know how many degrees of freedom + // there are per vertex, line and cell, and they do not ask what they + // represent, i.e. whether the finite element under consideration is + // vector-valued or whether it is, for example, a scalar Hermite element + // with several degrees of freedom on each vertex). template void ElasticProblem::setup_system () { @@ -508,35 +350,21 @@ namespace Step8 // @sect4{ElasticProblem::assemble_system} - // The big changes in this program - // are in the creation of matrix and - // right hand side, since they are - // problem-dependent. We will go - // through that process step-by-step, - // since it is a bit more complicated - // than in previous examples. + // The big changes in this program are in the creation of matrix and right + // hand side, since they are problem-dependent. We will go through that + // process step-by-step, since it is a bit more complicated than in previous + // examples. // - // The first parts of this function - // are the same as before, however: - // setting up a suitable quadrature - // formula, initializing an - // FEValues object for the - // (vector-valued) finite element we - // use as well as the quadrature - // object, and declaring a number of - // auxiliary arrays. In addition, we - // declare the ever same two - // abbreviations: n_q_points and - // dofs_per_cell. The number of - // degrees of freedom per cell we now - // obviously ask from the composed - // finite element rather than from - // the underlying scalar Q1 - // element. 
Here, it is dim times - // the number of degrees of freedom - // per cell of the Q1 element, though - // this is not explicit knowledge we - // need to care about: + // The first parts of this function are the same as before, however: setting + // up a suitable quadrature formula, initializing an FEValues + // object for the (vector-valued) finite element we use as well as the + // quadrature object, and declaring a number of auxiliary arrays. In + // addition, we declare the ever same two abbreviations: + // n_q_points and dofs_per_cell. The number of + // degrees of freedom per cell we now obviously ask from the composed finite + // element rather than from the underlying scalar Q1 element. Here, it is + // dim times the number of degrees of freedom per cell of the + // Q1 element, though this is not explicit knowledge we need to care about: template void ElasticProblem::assemble_system () { @@ -554,49 +382,33 @@ namespace Step8 std::vector local_dof_indices (dofs_per_cell); - // As was shown in previous - // examples as well, we need a - // place where to store the values - // of the coefficients at all the - // quadrature points on a cell. In - // the present situation, we have - // two coefficients, lambda and mu. + // As was shown in previous examples as well, we need a place where to + // store the values of the coefficients at all the quadrature points on a + // cell. In the present situation, we have two coefficients, lambda and + // mu. std::vector lambda_values (n_q_points); std::vector mu_values (n_q_points); - // Well, we could as well have - // omitted the above two arrays - // since we will use constant - // coefficients for both lambda and - // mu, which can be declared like - // this. They both represent - // functions always returning the - // constant value 1.0. Although we - // could omit the respective - // factors in the assemblage of the - // matrix, we use them here for - // purpose of demonstration. + // Well, we could as well have omitted the above two arrays since we will + // use constant coefficients for both lambda and mu, which can be declared + // like this. They both represent functions always returning the constant + // value 1.0. Although we could omit the respective factors in the + // assemblage of the matrix, we use them here for purpose of + // demonstration. ConstantFunction lambda(1.), mu(1.); - // Then again, we need to have the - // same for the right hand - // side. This is exactly as before - // in previous examples. However, - // we now have a vector-valued - // right hand side, which is why - // the data type of the - // rhs_values array is - // changed. We initialize it by - // n_q_points elements, each of - // which is a Vector@ - // with dim elements. + // Then again, we need to have the same for the right hand side. This is + // exactly as before in previous examples. However, we now have a + // vector-valued right hand side, which is why the data type of the + // rhs_values array is changed. We initialize it by + // n_q_points elements, each of which is a + // Vector@ with dim elements. 
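The hunk above elides the setup that the first paragraph of this function's comment refers to; written out with its template arguments it is roughly the following (a sketch in the style of step-6, using the two-point Gauss formula mentioned further down):

    QGauss<dim>   quadrature_formula (2);
    FEValues<dim> fe_values (fe, quadrature_formula,
                             update_values            | update_gradients |
                             update_quadrature_points | update_JxW_values);

    const unsigned int dofs_per_cell = fe.dofs_per_cell;   // dim times that of FE_Q(1)
    const unsigned int n_q_points    = quadrature_formula.size ();

    FullMatrix<double> cell_matrix (dofs_per_cell, dofs_per_cell);
    Vector<double>     cell_rhs (dofs_per_cell);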
RightHandSide right_hand_side; std::vector > rhs_values (n_q_points, Vector(dim)); - // Now we can begin with the loop - // over all cells: + // Now we can begin with the loop over all cells: typename DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); for (; cell!=endc; ++cell) @@ -606,62 +418,35 @@ namespace Step8 fe_values.reinit (cell); - // Next we get the values of - // the coefficients at the - // quadrature points. Likewise - // for the right hand side: + // Next we get the values of the coefficients at the quadrature + // points. Likewise for the right hand side: lambda.value_list (fe_values.get_quadrature_points(), lambda_values); mu.value_list (fe_values.get_quadrature_points(), mu_values); right_hand_side.vector_value_list (fe_values.get_quadrature_points(), rhs_values); - // Then assemble the entries of - // the local stiffness matrix - // and right hand side - // vector. This follows almost - // one-to-one the pattern - // described in the - // introduction of this - // example. One of the few - // comments in place is that we - // can compute the number - // comp(i), i.e. the index - // of the only nonzero vector - // component of shape function - // i using the - // fe.system_to_component_index(i).first - // function call below. + // Then assemble the entries of the local stiffness matrix and right + // hand side vector. This follows almost one-to-one the pattern + // described in the introduction of this example. One of the few + // comments in place is that we can compute the number + // comp(i), i.e. the index of the only nonzero vector + // component of shape function i using the + // fe.system_to_component_index(i).first function call + // below. // - // (By accessing the - // first variable of - // the return value of the - // system_to_component_index - // function, you might - // already have guessed - // that there is more in - // it. In fact, the - // function returns a - // std::pair@, of - // which the first element - // is comp(i) and the - // second is the value - // base(i) also noted - // in the introduction, i.e. - // the index - // of this shape function - // within all the shape - // functions that are nonzero - // in this component, - // i.e. base(i) in the - // diction of the - // introduction. This is not a - // number that we are usually - // interested in, however.) + // (By accessing the first variable of the return value + // of the system_to_component_index function, you might + // already have guessed that there is more in it. In fact, the + // function returns a std::pair@, of which the first element is comp(i) + // and the second is the value base(i) also noted in the + // introduction, i.e. the index of this shape function within all the + // shape functions that are nonzero in this component, + // i.e. base(i) in the diction of the introduction. This + // is not a number that we are usually interested in, however.) // - // With this knowledge, we can - // assemble the local matrix + // With this knowledge, we can assemble the local matrix // contributions: for (unsigned int i=0; ishape_grad(i,q_point) - // returns the - // gradient of - // the only - // nonzero - // component of - // the i-th shape - // function at - // quadrature - // point - // q_point. 
The - // component - // comp(i) of - // the gradient, - // which is the - // derivative of - // this only - // nonzero vector - // component of - // the i-th shape - // function with - // respect to the - // comp(i)th - // coordinate is - // accessed by - // the appended + // The first term is (lambda d_i u_i, d_j v_j) + (mu d_i + // u_j, d_j v_i). Note that + // shape_grad(i,q_point) returns the + // gradient of the only nonzero component of the i-th + // shape function at quadrature point q_point. The + // component comp(i) of the gradient, which + // is the derivative of this only nonzero vector + // component of the i-th shape function with respect to + // the comp(i)th coordinate is accessed by the appended // brackets. ( (fe_values.shape_grad(i,q_point)[component_i] * @@ -720,44 +482,16 @@ namespace Step8 fe_values.shape_grad(j,q_point)[component_i] * mu_values[q_point]) + - // The second term is - // (mu nabla u_i, nabla v_j). - // We need not - // access a - // specific - // component of - // the - // gradient, - // since we - // only have to - // compute the - // scalar - // product of - // the two - // gradients, - // of which an - // overloaded - // version of - // the - // operator* - // takes care, - // as in - // previous - // examples. + // The second term is (mu nabla u_i, nabla v_j). We + // need not access a specific component of the + // gradient, since we only have to compute the scalar + // product of the two gradients, of which an + // overloaded version of the operator* takes care, as + // in previous examples. // - // Note that by - // using the ?: - // operator, we - // only do this - // if comp(i) - // equals - // comp(j), - // otherwise a - // zero is - // added (which - // will be - // optimized - // away by the + // Note that by using the ?: operator, we only do this + // if comp(i) equals comp(j), otherwise a zero is + // added (which will be optimized away by the // compiler). ((component_i == component_j) ? (fe_values.shape_grad(i,q_point) * @@ -771,9 +505,7 @@ namespace Step8 } } - // Assembling the right hand - // side is also just as - // discussed in the + // Assembling the right hand side is also just as discussed in the // introduction: for (unsigned int i=0; iget_dof_indices (local_dof_indices); for (unsigned int i=0; iZeroFunction - // constructor accepts a parameter - // that tells it that it shall - // represent a vector valued, - // constant zero function with that - // many components. By default, - // this parameter is equal to one, - // in which case the - // ZeroFunction object would - // represent a scalar - // function. Since the solution - // vector has dim components, - // we need to pass dim as - // number of components to the zero - // function as well. + // The interpolation of the boundary values needs a small modification: + // since the solution function is vector-valued, so need to be the + // boundary values. The ZeroFunction constructor accepts a + // parameter that tells it that it shall represent a vector valued, + // constant zero function with that many components. By default, this + // parameter is equal to one, in which case the ZeroFunction + // object would represent a scalar function. Since the solution vector has + // dim components, we need to pass dim as number + // of components to the zero function as well. 
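In sketch form, with the template arguments written out, the calls this paragraph describes look as follows; the map's key type is the one the library used around the time of this commit, and the apply_boundary_values call is the usual step-6-style companion that this hunk does not show:

    std::map<unsigned int,double> boundary_values;
    VectorTools::interpolate_boundary_values (dof_handler,
                                              0,                        // boundary indicator
                                              ZeroFunction<dim>(dim),   // dim zero components
                                              boundary_values);
    MatrixTools::apply_boundary_values (boundary_values,
                                        system_matrix, solution, system_rhs);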
std::map boundary_values; VectorTools::interpolate_boundary_values (dof_handler, 0, @@ -849,14 +563,10 @@ namespace Step8 // @sect4{ElasticProblem::solve} - // The solver does not care about - // where the system of equations - // comes, as long as it stays - // positive definite and symmetric - // (which are the requirements for - // the use of the CG solver), which - // the system indeed is. Therefore, - // we need not change anything. + // The solver does not care about where the system of equations comes from, + // as long as it stays positive definite and symmetric (which are the + // requirements for the use of the CG solver), which the system indeed + // is. Therefore, we need not change anything. template void ElasticProblem::solve () { @@ -875,30 +585,16 @@ namespace Step8 // @sect4{ElasticProblem::refine_grid} - // The function that does the - // refinement of the grid is the same - // as in the step-6 example. The - // quadrature formula is adapted to - // the linear elements again. Note - // that the error estimator by - // default adds up the estimated - // obtained from all components of - // the finite element solution, i.e., - // it uses the displacement in all - // directions with the same - // weight. If we would like the grid - // to be adapted to the - // x-displacement only, we could pass - // the function an additional - // parameter which tells it to do so - // and do not consider the - // displacements in all other - // directions for the error - // indicators. However, for the - // current problem, it seems - // appropriate to consider all - // displacement components with equal - // weight. + // The function that does the refinement of the grid is the same as in the + // step-6 example. The quadrature formula is adapted to the linear elements + // again. Note that the error estimator by default adds up the estimated + // error obtained from all components of the finite element solution, i.e., it + // uses the displacement in all directions with the same weight. If we would + // like the grid to be adapted to the x-displacement only, we could pass the + // function an additional parameter which tells it to do so and do not + // consider the displacements in all other directions for the error + // indicators. However, for the current problem, it seems appropriate to + // consider all displacement components with equal weight. template void ElasticProblem::refine_grid () { @@ -921,15 +617,11 @@ namespace Step8 // @sect4{ElasticProblem::output_results} - // The output happens mostly as has - // been shown in previous examples - // already. The only difference is - // that the solution function is - // vector valued. The DataOut - // class takes care of this - // automatically, but we have to give - // each component of the solution - // vector a different name. + // The output happens mostly as has been shown in previous examples + // already. The only difference is that the solution function is vector + // valued. The DataOut class takes care of this automatically, + // but we have to give each component of the solution vector a different + // name. template void ElasticProblem::output_results (const unsigned int cycle) const { @@ -945,46 +637,26 @@ namespace Step8 - // As said above, we need a - // different name for each - // component of the solution - // function. To pass one name for - // each component, a vector of - // strings is used.
Since the - // number of components is the same - // as the number of dimensions we - // are working in, the following + // As said above, we need a different name for each component of the + // solution function. To pass one name for each component, a vector of + // strings is used. Since the number of components is the same as the + // number of dimensions we are working in, the following // switch statement is used. // - // We note that some graphics - // programs have restriction as to - // what characters are allowed in - // the names of variables. The - // library therefore supports only - // the minimal subset of these - // characters that is supported by - // all programs. Basically, these - // are letters, numbers, - // underscores, and some other - // characters, but in particular no - // whitespace and minus/hyphen. The - // library will throw an exception - // otherwise, at least if in debug - // mode. + // We note that some graphics programs have restriction as to what + // characters are allowed in the names of variables. The library therefore + // supports only the minimal subset of these characters that is supported + // by all programs. Basically, these are letters, numbers, underscores, + // and some other characters, but in particular no whitespace and + // minus/hyphen. The library will throw an exception otherwise, at least + // if in debug mode. // - // After listing the 1d, 2d, and 3d - // case, it is good style to let - // the program die if we run upon a - // case which we did not - // consider. Remember that the - // Assert macro generates an - // exception if the condition in - // the first parameter is not - // satisfied. Of course, the - // condition false can never be - // satisfied, so the program will - // always abort whenever it gets to - // the default statement: + // After listing the 1d, 2d, and 3d case, it is good style to let the + // program die if we run upon a case which we did not consider. Remember + // that the Assert macro generates an exception if the + // condition in the first parameter is not satisfied. Of course, the + // condition false can never be satisfied, so the program + // will always abort whenever it gets to the default statement: std::vector solution_names; switch (dim) { @@ -1004,24 +676,14 @@ namespace Step8 Assert (false, ExcNotImplemented()); } - // After setting up the names for - // the different components of the - // solution vector, we can add the - // solution vector to the list of - // data vectors scheduled for - // output. Note that the following - // function takes a vector of - // strings as second argument, - // whereas the one which we have - // used in all previous examples - // accepted a string there. In - // fact, the latter function is - // only a shortcut for the function - // which we call here: it puts the - // single string that is passed to - // it into a vector of strings with - // only one element and forwards - // that to the other function. + // After setting up the names for the different components of the solution + // vector, we can add the solution vector to the list of data vectors + // scheduled for output. Note that the following function takes a vector + // of strings as second argument, whereas the one which we have used in + // all previous examples accepted a string there. 
In fact, the latter + // function is only a shortcut for the function which we call here: it + // puts the single string that is passed to it into a vector of strings + // with only one element and forwards that to the other function. data_out.add_data_vector (solution, solution_names); data_out.build_patches (); data_out.write_vtk (output); @@ -1031,82 +693,45 @@ namespace Step8 // @sect4{ElasticProblem::run} - // The run function does the same - // things as in step-6, for - // example. This time, we use the - // square [-1,1]^d as domain, and we - // refine it twice globally before - // starting the first iteration. + // The run function does the same things as in step-6, for + // example. This time, we use the square [-1,1]^d as domain, and we refine + // it twice globally before starting the first iteration. // - // The reason is the following: we - // use the Gauss quadrature - // formula with two points in each - // direction for integration of the - // right hand side; that means that - // there are four quadrature points - // on each cell (in 2D). If we only - // refine the initial grid once - // globally, then there will be only - // four quadrature points in each - // direction on the domain. However, - // the right hand side function was - // chosen to be rather localized and - // in that case all quadrature points - // lie outside the support of the - // right hand side function. The - // right hand side vector will then - // contain only zeroes and the - // solution of the system of - // equations is the zero vector, - // i.e. a finite element function - // that it zero everywhere. We should - // not be surprised about such things - // happening, since we have chosen an - // initial grid that is totally - // unsuitable for the problem at - // hand. + // The reason is the following: we use the Gauss quadrature + // formula with two points in each direction for integration of the right + // hand side; that means that there are four quadrature points on each cell + // (in 2D). If we only refine the initial grid once globally, then there + // will be only four quadrature points in each direction on the + // domain. However, the right hand side function was chosen to be rather + // localized and in that case all quadrature points lie outside the support + // of the right hand side function. The right hand side vector will then + // contain only zeroes and the solution of the system of equations is the + // zero vector, i.e. a finite element function that it zero everywhere. We + // should not be surprised about such things happening, since we have chosen + // an initial grid that is totally unsuitable for the problem at hand. // - // The unfortunate thing is that if - // the discrete solution is constant, - // then the error indicators computed - // by the KellyErrorEstimator - // class are zero for each cell as - // well, and the call to - // refine_and_coarsen_fixed_number - // on the triangulation object - // will not flag any cells for - // refinement (why should it if the - // indicated error is zero for each - // cell?). The grid in the next - // iteration will therefore consist - // of four cells only as well, and - // the same problem occurs again. 
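Concretely, the mesh that the run() comment around this point talks about is built along these lines; a sketch of the two calls in question ([-1,1]^d, refined twice globally):

    GridGenerator::hyper_cube (triangulation, -1, 1);  // the square/cube [-1,1]^dim
    triangulation.refine_global (2);                    // 16 cells in 2d, so the quadrature
                                                        // points can see the localized
                                                        // right hand side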
+ // The unfortunate thing is that if the discrete solution is constant, then + // the error indicators computed by the KellyErrorEstimator + // class are zero for each cell as well, and the call to + // refine_and_coarsen_fixed_number on the + // triangulation object will not flag any cells for refinement + // (why should it if the indicated error is zero for each cell?). The grid + // in the next iteration will therefore consist of four cells only as well, + // and the same problem occurs again. // - // The conclusion needs to be: while - // of course we will not choose the - // initial grid to be well-suited for - // the accurate solution of the - // problem, we must at least choose - // it such that it has the chance to - // capture the most striking features - // of the solution. In this case, it - // needs to be able to see the right - // hand side. Thus, we refine twice - // globally. (Note that the - // refine_global function is not - // part of the GridRefinement - // class in which - // refine_and_coarsen_fixed_number - // is declared, for example. The - // reason is first that it is not an - // algorithm that computed refinement - // flags from indicators, but more - // importantly that it actually - // performs the refinement, in - // contrast to the functions in - // GridRefinement that only flag - // cells without actually refining - // the grid.) + // The conclusion needs to be: while of course we will not choose the + // initial grid to be well-suited for the accurate solution of the problem, + // we must at least choose it such that it has the chance to capture the + // most striking features of the solution. In this case, it needs to be able + // to see the right hand side. Thus, we refine twice globally. (Note that + // the refine_global function is not part of the + // GridRefinement class in which + // refine_and_coarsen_fixed_number is declared, for + // example. The reason is first that it is not an algorithm that computed + // refinement flags from indicators, but more importantly that it actually + // performs the refinement, in contrast to the functions in + // GridRefinement that only flag cells without actually + // refining the grid.) template void ElasticProblem::run () { @@ -1141,12 +766,9 @@ namespace Step8 // @sect3{The main function} -// After closing the Step8 -// namespace in the last line above, the -// following is the main function of the -// program and is again exactly like in -// step-6 (apart from the changed class -// names, of course). +// After closing the Step8 namespace in the last line above, the +// following is the main function of the program and is again exactly like in +// step-6 (apart from the changed class names, of course). int main () { try diff --git a/deal.II/examples/step-9/step-9.cc b/deal.II/examples/step-9/step-9.cc index ba43fa9e49..e986a435e2 100644 --- a/deal.II/examples/step-9/step-9.cc +++ b/deal.II/examples/step-9/step-9.cc @@ -9,10 +9,8 @@ /* to the file deal.II/doc/license.html for the text and */ /* further information on this license. */ -// Just as in previous examples, we -// have to include several files of -// which the meaning has already been -// discussed: +// Just as in previous examples, we have to include several files of which the +// meaning has already been discussed: #include #include #include @@ -38,54 +36,39 @@ #include #include -// The following two files provide classes -// and information for multi-threaded -// programs. 
In the first one, the classes -// and functions are declared which we need -// to start new threads and to wait for -// threads to return (i.e. the -// Thread class and the -// new_thread functions). The -// second file has a class -// MultithreadInfo (and a global -// object multithread_info of -// that type) which can be used to query the -// number of processors in your system, which -// is often useful when deciding how many -// threads to start in parallel. +// The following two files provide classes and information for multi-threaded +// programs. In the first one, the classes and functions are declared which we +// need to start new threads and to wait for threads to return (i.e. the +// Thread class and the new_thread functions). The +// second file has a class MultithreadInfo (and a global object +// multithread_info of that type) which can be used to query the +// number of processors in your system, which is often useful when deciding +// how many threads to start in parallel. #include #include -// The next new include file declares -// a base class TensorFunction -// not unlike the Function class, -// but with the difference that the -// return value is tensor-valued -// rather than scalar of -// vector-valued. +// The next new include file declares a base class TensorFunction +// not unlike the Function class, but with the difference that +// the return value is tensor-valued rather than scalar of vector-valued. #include #include -// This is C++, as we want to write -// some output to disk: +// This is C++, as we want to write some output to disk: #include #include -// The last step is as in previous -// programs: +// The last step is as in previous programs: namespace Step9 { using namespace dealii; // @sect3{AdvectionProblem class declaration} - // Following we declare the main - // class of this program. It is very - // much alike the main classes of - // previous examples, so we again - // only comment on the differences. + // Following we declare the main class of this program. It is very much + // alike the main classes of previous examples, so we again only comment on + // the differences. template class AdvectionProblem { @@ -96,51 +79,29 @@ namespace Step9 private: void setup_system (); - // The next function will be used - // to assemble the - // matrix. However, unlike in the - // previous examples, the - // function will not do the work - // itself, but rather it will - // split the range of active - // cells into several chunks and - // then call the following - // function on each of these - // chunks. The rationale is that - // matrix assembly can be - // parallelized quite well, as - // the computation of the local - // contributions on each cell is - // entirely independent of other - // cells, and we only have to - // synchronize when we add the - // contribution of a cell to the - // global matrix. The second - // function, doing the actual - // work, accepts two parameters - // which denote the first cell on - // which it shall operate, and - // the one past the last. + // The next function will be used to assemble the matrix. However, unlike + // in the previous examples, the function will not do the work itself, but + // rather it will split the range of active cells into several chunks and + // then call the following function on each of these chunks. 
The rationale + // is that matrix assembly can be parallelized quite well, as the + // computation of the local contributions on each cell is entirely + // independent of other cells, and we only have to synchronize when we add + // the contribution of a cell to the global matrix. The second function, + // doing the actual work, accepts two parameters which denote the first + // cell on which it shall operate, and the one past the last. // - // The strategy for parallelization we - // choose here is one of the - // possibilities mentioned in detail in - // the @ref threads module in the - // documentation. While it is a - // straightforward way to distribute the - // work for assembling the system onto - // multiple processor cores. As mentioned - // in the module, there are other, and - // possibly better suited, ways to + // The strategy for parallelization we choose here is one of the + // possibilities mentioned in detail in the @ref threads module in the + // documentation. While it is a straightforward way to distribute the work + // for assembling the system onto multiple processor cores. As mentioned + // in the module, there are other, and possibly better suited, ways to // achieve the same goal. void assemble_system (); void assemble_system_interval (const typename DoFHandler::active_cell_iterator &begin, const typename DoFHandler::active_cell_iterator &end); - // The following functions again - // are as in previous examples, - // as are the subsequent - // variables. + // The following functions again are as in previous examples, as are the + // subsequent variables. void solve (); void refine_grid (); void output_results (const unsigned int cycle) const; @@ -158,41 +119,21 @@ namespace Step9 Vector solution; Vector system_rhs; - // When assembling the matrix in - // parallel, we have to - // synchronize when several - // threads attempt to write the - // local contributions of a cell - // to the global matrix at the - // same time. This is done using - // a Mutex, which is an - // object that can be owned by - // only one thread at a time. If - // a thread wants to write to the - // matrix, it has to acquire this - // lock (if it is presently owned - // by another thread, then it has - // to wait), then write to the - // matrix and finally release the - // lock. Note that if the library - // was not compiled to support - // multithreading (which you have - // to specify at the time you - // call the ./configure - // script in the top-level - // directory), then a dummy the - // actual data type of the - // typedef - // Threads::ThreadMutex is a - // class that provides all the - // functions needed for a mutex, - // but does nothing when they are - // called; this is reasonable, of - // course, since if only one - // thread is running at a time, - // there is no need to - // synchronize with other - // threads. + // When assembling the matrix in parallel, we have to synchronize when + // several threads attempt to write the local contributions of a cell to + // the global matrix at the same time. This is done using a + // Mutex, which is an object that can be owned by only one + // thread at a time. If a thread wants to write to the matrix, it has to + // acquire this lock (if it is presently owned by another thread, then it + // has to wait), then write to the matrix and finally release the + // lock. 
Note that if the library was not compiled to support + // multithreading (which you have to specify at the time you call the + // ./configure script in the top-level directory), then the + // actual data type of the typedef Threads::ThreadMutex is a + // dummy class that provides all the + // functions needed for a mutex, but does nothing when they are called; + // this is reasonable, of course, since if only one thread is running at a + // time, there is no need to synchronize with other threads. Threads::ThreadMutex assembler_lock; }; @@ -200,47 +141,26 @@ namespace Step9 // @sect3{Equation data declaration} - // Next we declare a class that - // describes the advection - // field. This, of course, is a - // vector field with as many compents - // as there are space dimensions. One - // could now use a class derived from - // the Function base class, as we - // have done for boundary values and - // coefficients in previous examples, - // but there is another possibility - // in the library, namely a base - // class that describes tensor valued - // functions. In contrast to the - // usual Function objects, we - // provide the compiler with - // knowledge on the size of the - // objects of the return type. This - // enables the compiler to generate - // efficient code, which is not so - // simple for usual vector-valued - // functions where memory has to be - // allocated on the heap (thus, the - // Function::vector_value - // function has to be given the - // address of an object into which - // the result is to be written, in - // order to avoid copying and memory - // allocation and deallocation on the - // heap). In addition to the known - // size, it is possible not only to - // return vectors, but also tensors - // of higher rank; however, this is - // not very often requested by - // applications, to be honest... + // Next we declare a class that describes the advection field. This, of + // course, is a vector field with as many components as there are space + // dimensions. One could now use a class derived from the + // Function base class, as we have done for boundary values and + // coefficients in previous examples, but there is another possibility in + // the library, namely a base class that describes tensor valued + // functions. In contrast to the usual Function objects, we + // provide the compiler with knowledge on the size of the objects of the + // return type. This enables the compiler to generate efficient code, which + // is not so simple for usual vector-valued functions where memory has to be + // allocated on the heap (thus, the Function::vector_value + // function has to be given the address of an object into which the result + // is to be written, in order to avoid copying and memory allocation and + // deallocation on the heap). In addition to the known size, it is possible + // not only to return vectors, but also tensors of higher rank; however, + // this is not very often requested by applications, to be honest...
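The efficiency argument just made is easiest to see side by side. In the following sketch, advection_field and some_function are hypothetical objects of the two kinds being compared:

    // TensorFunction: the result has a size known at compile time and is
    // returned by value, so no heap allocation is involved:
    Tensor<1,dim> beta = advection_field.value (p);

    // Function: the result size is only known at run time, so the caller must
    // provide a correctly sized vector into which the function writes:
    Vector<double> beta_vec (dim);
    some_function.vector_value (p, beta_vec);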
// - // The interface of the - // TensorFunction class is - // relatively close to that of the - // Function class, so there is - // probably no need to comment in - // detail the following declaration: + // The interface of the TensorFunction class is relatively + // close to that of the Function class, so there is probably no + // need to comment in detail the following declaration: template class AdvectionField : public TensorFunction<1,dim> { @@ -252,88 +172,48 @@ namespace Step9 virtual void value_list (const std::vector > &points, std::vector > &values) const; - // In previous examples, we have - // used assertions that throw - // exceptions in several - // places. However, we have never - // seen how such exceptions are - // declared. This can be done as - // follows: + // In previous examples, we have used assertions that throw exceptions in + // several places. However, we have never seen how such exceptions are + // declared. This can be done as follows: DeclException2 (ExcDimensionMismatch, unsigned int, unsigned int, << "The vector has size " << arg1 << " but should have " << arg2 << " elements."); - // The syntax may look a little - // strange, but is - // reasonable. The format is - // basically as follows: use the - // name of one of the macros - // DeclExceptionN, where - // N denotes the number of - // additional parameters which - // the exception object shall - // take. In this case, as we want - // to throw the exception when - // the sizes of two vectors - // differ, we need two arguments, - // so we use - // DeclException2. The first - // parameter then describes the - // name of the exception, while - // the following declare the data - // types of the parameters. The - // last argument is a sequence of - // output directives that will be - // piped into the std::cerr - // object, thus the strange - // format with the leading @<@< - // operator and the like. Note - // that we can access the - // parameters which are passed to - // the exception upon - // construction (i.e. within the - // Assert call) by using the - // names arg1 through - // argN, where N is the - // number of arguments as defined - // by the use of the respective - // macro DeclExceptionN. + // The syntax may look a little strange, but is reasonable. The format is + // basically as follows: use the name of one of the macros + // DeclExceptionN, where N denotes the number of + // additional parameters which the exception object shall take. In this + // case, as we want to throw the exception when the sizes of two vectors + // differ, we need two arguments, so we use + // DeclException2. The first parameter then describes the + // name of the exception, while the following declare the data types of + // the parameters. The last argument is a sequence of output directives + // that will be piped into the std::cerr object, thus the + // strange format with the leading @<@< operator and the + // like. Note that we can access the parameters which are passed to the + // exception upon construction (i.e. within the Assert call) + // by using the names arg1 through argN, where + // N is the number of arguments as defined by the use of the + // respective macro DeclExceptionN. // - // To learn how the preprocessor - // expands this macro into actual - // code, please refer to the - // documentation of the exception - // classes in the base - // library. 
Suffice it to say - // that by this macro call, the - // respective exception class is - // declared, which also has error - // output functions already - // implemented. + // To learn how the preprocessor expands this macro into actual code, + // please refer to the documentation of the exception classes in the base + // library. Suffice it to say that by this macro call, the respective + // exception class is declared, which also has error output functions + // already implemented. }; - // The following two functions - // implement the interface described - // above. The first simply implements - // the function as described in the - // introduction, while the second - // uses the same trick to avoid - // calling a virtual function as has - // already been introduced in the - // previous example program. Note the - // check for the right sizes of the - // arguments in the second function, - // which should always be present in - // such functions; it is our - // experience that many if not most - // programming errors result from - // incorrectly initialized arrays, - // incompatible parameters to - // functions and the like; using - // assertion as in this case can - // eliminate many of these problems. + // The following two functions implement the interface described above. The + // first simply implements the function as described in the introduction, + // while the second uses the same trick to avoid calling a virtual function + // as has already been introduced in the previous example program. Note the + // check for the right sizes of the arguments in the second function, which + // should always be present in such functions; it is our experience that + // many if not most programming errors result from incorrectly initialized + // arrays, incompatible parameters to functions and the like; using + // assertion as in this case can eliminate many of these problems. template Tensor<1,dim> AdvectionField::value (const Point &p) const @@ -363,25 +243,15 @@ namespace Step9 - // Besides the advection field, we - // need two functions describing the - // source terms (right hand side) - // and the boundary values. First for - // the right hand side, which follows - // the same pattern as in previous - // examples. As described in the - // introduction, the source is a - // constant function in the vicinity - // of a source point, which we denote - // by the constant static variable - // center_point. We set the - // values of this center using the - // same template tricks as we have - // shown in the step-7 example - // program. The rest is simple and - // has been shown previously, - // including the way to avoid virtual - // function calls in the + // Besides the advection field, we need two functions describing the source + // terms (right hand side) and the boundary values. First for + // the right hand side, which follows the same pattern as in previous + // examples. As described in the introduction, the source is a constant + // function in the vicinity of a source point, which we denote by the + // constant static variable center_point. We set the values of + // this center using the same template tricks as we have shown in the step-7 + // example program. The rest is simple and has been shown previously, + // including the way to avoid virtual function calls in the // value_list function. template class RightHandSide : public Function @@ -412,26 +282,16 @@ namespace Step9 - // The only new thing here is that we - // check for the value of the - // component parameter. 
As this - // is a scalar function, it is - // obvious that it only makes sense - // if the desired component has the - // index zero, so we assert that this - // is indeed the - // case. ExcIndexRange is a - // global predefined exception - // (probably the one most often used, - // we therefore made it global - // instead of local to some class), - // that takes three parameters: the - // index that is outside the allowed - // range, the first element of the - // valid range and the one past the - // last (i.e. again the half-open - // interval so often used in the C++ - // standard library): + // The only new thing here is that we check for the value of the + // component parameter. As this is a scalar function, it is + // obvious that it only makes sense if the desired component has the index + // zero, so we assert that this is indeed the + // case. ExcIndexRange is a global predefined exception + // (probably the one most often used, we therefore made it global instead of + // local to some class), that takes three parameters: the index that is + // outside the allowed range, the first element of the valid range and the + // one past the last (i.e. again the half-open interval so often used in the + // C++ standard library): template double RightHandSide::value (const Point &p, @@ -461,10 +321,8 @@ namespace Step9 - // Finally for the boundary values, - // which is just another class - // derived from the Function base - // class: + // Finally for the boundary values, which is just another class derived from + // the Function base class: template class BoundaryValues : public Function { @@ -512,105 +370,54 @@ namespace Step9 // @sect3{GradientEstimation class declaration} - // Now, finally, here comes the class - // that will compute the difference - // approximation of the gradient on - // each cell and weighs that with a - // power of the mesh size, as - // described in the introduction. - // This class is a simple version of - // the DerivativeApproximation - // class in the library, that uses - // similar techniques to obtain - // finite difference approximations - // of the gradient of a finite - // element field, or if higher + // Now, finally, here comes the class that will compute the difference + // approximation of the gradient on each cell and weighs that with a power + // of the mesh size, as described in the introduction. This class is a + // simple version of the DerivativeApproximation class in the + // library, that uses similar techniques to obtain finite difference + // approximations of the gradient of a finite element field, or if higher // derivatives. // - // The - // class has one public static - // function estimate that is - // called to compute a vector of - // error indicators, and one private - // function that does the actual work - // on an interval of all active - // cells. The latter is called by the - // first one in order to be able to - // do the computations in parallel if - // your computer has more than one - // processor. While the first - // function accepts as parameter a - // vector into which the error - // indicator is written for each - // cell. This vector is passed on to - // the second function that actually - // computes the error indicators on - // some cells, and the respective - // elements of the vector are - // written. 
By the way, we made it - // somewhat of a convention to use - // vectors of floats for error - // indicators rather than the common - // vectors of doubles, as the - // additional accuracy is not - // necessary for estimated values. + // The class has one public static function estimate that is + // called to compute a vector of error indicators, and one private function + // that does the actual work on an interval of all active cells. The latter + // is called by the first one in order to be able to do the computations in + // parallel if your computer has more than one processor. While the first + // function accepts as parameter a vector into which the error indicator is + // written for each cell. This vector is passed on to the second function + // that actually computes the error indicators on some cells, and the + // respective elements of the vector are written. By the way, we made it + // somewhat of a convention to use vectors of floats for error indicators + // rather than the common vectors of doubles, as the additional accuracy is + // not necessary for estimated values. // - // In addition to these two - // functions, the class declares to - // exceptions which are raised when a - // cell has no neighbors in each of - // the space directions (in which - // case the matrix described in the - // introduction would be singular and - // can't be inverted), while the - // other one is used in the more - // common case of invalid parameters - // to a function, namely a vector of + // In addition to these two functions, the class declares to exceptions + // which are raised when a cell has no neighbors in each of the space + // directions (in which case the matrix described in the introduction would + // be singular and can't be inverted), while the other one is used in the + // more common case of invalid parameters to a function, namely a vector of // wrong size. // - // Two annotations to this class are - // still in order: the first is that - // the class has no non-static member - // functions or variables, so this is - // not really a class, but rather - // serves the purpose of a - // namespace in C++. The reason - // that we chose a class over a - // namespace is that this way we can - // declare functions that are - // private, i.e. visible to the - // outside world but not - // callable. This can be done with - // namespaces as well, if one - // declares some functions in header - // files in the namespace and - // implements these and other - // functions in the implementation - // file. The functions not declared - // in the header file are still in - // the namespace but are not callable - // from outside. However, as we have - // only one file here, it is not - // possible to hide functions in the - // present case. + // Two annotations to this class are still in order: the first is that the + // class has no non-static member functions or variables, so this is not + // really a class, but rather serves the purpose of a namespace + // in C++. The reason that we chose a class over a namespace is that this + // way we can declare functions that are private, i.e. visible to the + // outside world but not callable. This can be done with namespaces as well, + // if one declares some functions in header files in the namespace and + // implements these and other functions in the implementation file. The + // functions not declared in the header file are still in the namespace but + // are not callable from outside. 
However, as we have only one file here, it + // is not possible to hide functions in the present case. // - // The second is that the dimension - // template parameter is attached to - // the function rather than to the - // class itself. This way, you don't - // have to specify the template - // parameter yourself as in most - // other cases, but the compiler can - // figure its value out itself from - // the dimension of the DoF handler - // object that one passes as first - // argument. + // The second is that the dimension template parameter is attached to the + // function rather than to the class itself. This way, you don't have to + // specify the template parameter yourself as in most other cases, but the + // compiler can figure its value out itself from the dimension of the DoF + // handler object that one passes as first argument. // - // Finally note that the - // IndexInterval typedef is - // introduced as a convenient - // abbreviation for an otherwise - // lengthy type name. + // Finally note that the IndexInterval typedef is introduced as + // a convenient abbreviation for an otherwise lengthy type name. class GradientEstimation { public: @@ -640,13 +447,9 @@ namespace Step9 // @sect3{AdvectionProblem class implementation} - // Now for the implementation of the - // main class. Constructor, - // destructor and the function - // setup_system follow the same - // pattern that was used previously, - // so we need not comment on these - // three function: + // Now for the implementation of the main class. Constructor, destructor and + // the function setup_system follow the same pattern that was + // used previously, so we need not comment on these three function: template AdvectionProblem::AdvectionProblem () : dof_handler (triangulation), @@ -690,172 +493,91 @@ namespace Step9 - // In the following function, the - // matrix and right hand side are - // assembled. As stated in the - // documentation of the main class - // above, it does not do this itself, - // but rather delegates to the - // function following next, by - // splitting up the range of cells - // into chunks of approximately the - // same size and assembling on each - // of these chunks in parallel. + // In the following function, the matrix and right hand side are + // assembled. As stated in the documentation of the main class above, it + // does not do this itself, but rather delegates to the function following + // next, by splitting up the range of cells into chunks of approximately the + // same size and assembling on each of these chunks in parallel. template void AdvectionProblem::assemble_system () { - // First, we want to find out how - // many threads shall assemble the - // matrix in parallel. A reasonable - // choice would be that each - // processor in your system - // processes one chunk of cells; if - // we were to use this information, - // we could use the value of the - // global variable - // multithread_info.n_cpus, - // which is determined at start-up - // time of your program - // automatically. (Note that if the - // library was not configured for - // multi-threading, then the number - // of CPUs is set to one.) However, - // sometimes there might be reasons - // to use another value. For - // example, you might want to use - // less processors than there are - // in your system in order not to - // use too many computational - // ressources. 
On the other hand, - // if there are several jobs - // running on a computer and you - // want to get a higher percentage - // of CPU time, it might be worth - // to start more threads than there - // are CPUs, as most operating - // systems assign roughly the same - // CPU ressources to all threads - // presently running. For this - // reason, the MultithreadInfo - // class contains a read-write - // variable n_default_threads - // which is set to n_cpus by - // default, but can be set to - // another value. This variable is - // also queried by functions inside - // the library to determine how - // many threads they shall create. + // First, we want to find out how many threads shall assemble the matrix + // in parallel. A reasonable choice would be that each processor in your + // system processes one chunk of cells; if we were to use this + // information, we could use the value of the global variable + // multithread_info.n_cpus, which is determined at start-up + // time of your program automatically. (Note that if the library was not + // configured for multi-threading, then the number of CPUs is set to one.) + // However, sometimes there might be reasons to use another value. For + // example, you might want to use fewer processors than there are in your + // system in order not to use too many computational resources. On the + // other hand, if there are several jobs running on a computer and you + // want to get a higher percentage of CPU time, it might be worthwhile to start + // more threads than there are CPUs, as most operating systems assign + // roughly the same CPU resources to all threads presently running. For + // this reason, the MultithreadInfo class contains a + // read-write variable n_default_threads which is set to + // n_cpus by default, but can be set to another value. This + // variable is also queried by functions inside the library to determine + // how many threads they shall create. const unsigned int n_threads = multithread_info.n_default_threads; - // It is worth noting, however, that this - // setup determines the load distribution - // onto processor in a static way: it does - // not take into account that some other - // part of our program may also be running - // something in parallel at the same time - // as we get here (this is not the case in - // the current program, but may easily be - // the case in more complex - // applications). A discussion of how to - // deal with this case can be found in the - // @ref threads module. + // It is worth noting, however, that this setup determines the load + // distribution onto processors in a static way: it does not take into + // account that some other part of our program may also be running + // something in parallel at the same time as we get here (this is not the + // case in the current program, but may easily be the case in more complex + // applications). A discussion of how to deal with this case can be found + // in the @ref threads module. // - // Next, we need an object which is - // capable of keeping track of the - // threads we created, and allows - // us to wait until they all have - // finished (to join them in - // the language of threads). The - // Threads::ThreadGroup class - // does this, which is basically - // just a container for objects of - // type Threads::Thread that - // represent a single thread; - // Threads::Thread is what the - // Threads::new_thread function below will - // return when we start a new - // thread.
+ // Next, we need an object which is capable of keeping track of the + // threads we created, and allows us to wait until they all have finished + // (to join them in the language of threads). The + // Threads::ThreadGroup class does this, which is basically just a + // container for objects of type Threads::Thread that represent a single + // thread; Threads::Thread is what the Threads::new_thread function below + // will return when we start a new thread. // - // Note that both Threads::ThreadGroup - // and Threads::Thread have a template - // argument that represents the - // return type of the function - // being called on a separate - // thread. Since most of the - // functions that we will call on - // different threads have return - // type void, the template - // argument has a default value - // void, so that in that case - // it can be omitted. (However, you - // still need to write the angle - // brackets, even if they are - // empty.) + // Note that both Threads::ThreadGroup and Threads::Thread have a template + // argument that represents the return type of the function being called + // on a separate thread. Since most of the functions that we will call on + // different threads have return type void, the template + // argument has a default value void, so that in that case it + // can be omitted. (However, you still need to write the angle brackets, + // even if they are empty.) // - // If you did not configure for - // multi-threading, then the - // new_thread function that is - // supposed to start a new thread - // in parallel only executes the - // function which should be run in - // parallel, waits for it to return - // (i.e. the function is executed - // sequentially), and puts the - // return value into the Thread - // object. Likewise, the function - // join that is supposed to - // wait for all spawned threads to - // return, returns immediately, as - // there can't be any threads running. + // If you did not configure for multi-threading, then the + // new_thread function that is supposed to start a new thread + // in parallel only executes the function which should be run in parallel, + // waits for it to return (i.e. the function is executed sequentially), + // and puts the return value into the Thread + // object. Likewise, the function join that is supposed to + // wait for all spawned threads to return, returns immediately, as there + // can't be any threads running. Threads::ThreadGroup<> threads; - // Now we have to split the range - // of cells into chunks of - // approximately the same - // size. Each thread will then - // assemble the local contributions - // of the cells within its chunk - // and transfer these contributions - // to the global matrix. As - // splitting a range of cells is a - // rather common task when using - // multi-threading, there is a - // function in the Threads - // namespace that does exactly - // this. In fact, it does this not - // only for a range of cell - // iterators, but for iterators in - // general, so you could use it for - // std::vector::iterator or + // Now we have to split the range of cells into chunks of approximately + // the same size. Each thread will then assemble the local contributions + // of the cells within its chunk and transfer these contributions to the + // global matrix. As splitting a range of cells is a rather common task + // when using multi-threading, there is a function in the + // Threads namespace that does exactly this. 
In fact, it does + // this not only for a range of cell iterators, but for iterators in + // general, so you could use it for std::vector::iterator or // usual pointers as well. // - // The function returns a vector of - // pairs of iterators, where the - // first denotes the first cell of - // each chunk, while the second - // denotes the one past the last - // (this half-open interval is the - // usual convention in the C++ - // standard library, so we keep to - // it). Note that we have to - // specify the actual data type of - // the iterators in angle brackets - // to the function. This is - // necessary, since it is a - // template function which takes - // the data type of the iterators - // as template argument; in the - // present case, however, the data - // types of the two first - // parameters differ - // (begin_active returns an - // active_iterator, while - // end returns a - // raw_iterator), and in this - // case the C++ language requires - // us to specify the template type - // explicitely. For brevity, we - // first typedef this data type to - // an alias. + // The function returns a vector of pairs of iterators, where the first + // denotes the first cell of each chunk, while the second denotes the one + // past the last (this half-open interval is the usual convention in the + // C++ standard library, so we keep to it). Note that we have to specify + // the actual data type of the iterators in angle brackets to the + // function. This is necessary, since it is a template function which + // takes the data type of the iterators as template argument; in the + // present case, however, the data types of the two first parameters + // differ (begin_active returns an + // active_iterator, while end returns a + // raw_iterator), and in this case the C++ language requires + // us to specify the template type explicitely. For brevity, we first + // typedef this data type to an alias. typedef typename DoFHandler::active_cell_iterator active_cell_iterator; std::vector > thread_ranges @@ -863,150 +585,88 @@ namespace Step9 dof_handler.end (), n_threads); - // Finally, for each of the chunks - // of iterators we have computed, - // start one thread (or if not in - // multi-thread mode: execute - // assembly on these chunks - // sequentially). This is done - // using the following sequence of + // Finally, for each of the chunks of iterators we have computed, start + // one thread (or if not in multi-thread mode: execute assembly on these + // chunks sequentially). This is done using the following sequence of // function calls: for (unsigned int thread=0; thread::assemble_system_interval, *this, thread_ranges[thread].first, thread_ranges[thread].second); - // The reasons and internal - // workings of these functions can - // be found in the report on the - // subject of multi-threading, - // which is available online as - // well. Suffice it to say that we - // create a new thread that calls - // the assemble_system_interval - // function on the present object - // (the this pointer), with the - // arguments following in the - // second set of parentheses passed - // as parameters. The Threads::new_thread - // function returns an object of - // type Threads::Thread, which - // we put into the threads - // container. 
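// The same create-and-join pattern can be written with nothing but the
// C++ standard library; the following sketch (invented names, std::thread
// and std::mutex instead of the deal.II Threads classes used here) splits
// a range of items into chunks, runs one thread per chunk, guards the one
// shared write with a mutex, and finally joins all threads.
#include <algorithm>
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

void toy_parallel_sum (const std::vector<double> &items,
                       const unsigned int         n_threads,
                       double                    &result)
{
  std::mutex               result_lock;
  std::vector<std::thread> threads;

  // split the index range [0,items.size()) into chunks (assumes n_threads > 0)
  const std::size_t chunk_size = (items.size() + n_threads - 1) / n_threads;
  for (unsigned int t = 0; t < n_threads; ++t)
    threads.emplace_back ([&items, &result, &result_lock, chunk_size, t]()
    {
      const std::size_t begin = t * chunk_size;
      const std::size_t end   = std::min (items.size(), begin + chunk_size);

      double local_sum = 0;                       // purely thread-local work...
      for (std::size_t i = begin; i < end; ++i)
        local_sum += items[i];

      std::lock_guard<std::mutex> guard (result_lock);
      result += local_sum;                        // ...and one synchronized write
    });

  for (std::thread &thread : threads)             // the analogue of join_all()
    thread.join ();
}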
If a thread exits, - // the return value of the function - // being called is put into a place - // such that the thread objects can - // access it using their - // return_value function; since - // the function we call doesn't - // have a return value, this does - // not apply here. Note that you - // can copy around thread objects - // freely, and that of course they - // will still represent the same - // thread. - - // When all the threads are - // running, the only thing we have - // to do is wait for them to - // finish. This is necessary of - // course, as we can't proceed with - // our tasks before the matrix and - // right hand side are - // assemblesd. Waiting for all the - // threads to finish can be done - // using the joint_all function - // in the ThreadGroup - // container, which just calls - // join on each of the thread + // The reasons and internal workings of these functions can be found in + // the report on the subject of multi-threading, which is available online + // as well. Suffice it to say that we create a new thread that calls the + // assemble_system_interval function on the present object + // (the this pointer), with the arguments following in the + // second set of parentheses passed as parameters. The Threads::new_thread + // function returns an object of type Threads::Thread, which we put into + // the threads container. If a thread exits, the return value + // of the function being called is put into a place such that the thread + // objects can access it using their return_value function; + // since the function we call doesn't have a return value, this does not + // apply here. Note that you can copy around thread objects freely, and + // that of course they will still represent the same thread. + + // When all the threads are running, the only thing we have to do is wait + // for them to finish. This is necessary of course, as we can't proceed + // with our tasks before the matrix and right hand side are + // assembled. Waiting for all the threads to finish can be done using the + // join_all function in the ThreadGroup + // container, which just calls join on each of the thread // objects it stores. // - // Again, if the library was not - // configured to use - // multi-threading, then no threads - // can run in parallel and the - // function returns immediately. + // Again, if the library was not configured to use multi-threading, then + // no threads can run in parallel and the function returns immediately. threads.join_all (); - // After the matrix has been - // assembled in parallel, we stil - // have to eliminate hanging node - // constraints. This is something - // that can't be done on each of - // the threads separately, so we - // have to do it now. + // After the matrix has been assembled in parallel, we still have to + // eliminate hanging node constraints. This is something that can't be + // done on each of the threads separately, so we have to do it now. hanging_node_constraints.condense (system_matrix); hanging_node_constraints.condense (system_rhs); - // Note also, that unlike in - // previous examples, there are no - // boundary conditions to be - // applied to the system of - // equations. This, of course, is - // due to the fact that we have - // included them into the weak - // formulation of the problem. + // Note also that, unlike in previous examples, there are no boundary + // conditions to be applied to the system of equations.
This, of course, + // is due to the fact that we have included them into the weak formulation + // of the problem. } - // Now, this is the function that - // does the actual work. It is not - // very different from the - // assemble_system functions of - // previous example programs, so we - // will again only comment on the - // differences. The mathematical - // stuff follows closely what we have - // said in the introduction. + // Now, this is the function that does the actual work. It is not very + // different from the assemble_system functions of previous + // example programs, so we will again only comment on the differences. The + // mathematical stuff follows closely what we have said in the introduction. template void AdvectionProblem:: assemble_system_interval (const typename DoFHandler::active_cell_iterator &begin, const typename DoFHandler::active_cell_iterator &end) { - // First of all, we will need some - // objects that describe boundary - // values, right hand side function - // and the advection field. As we - // will only perform actions on - // these objects that do not change - // them, we declare them as - // constant, which can enable the - // compiler in some cases to - // perform additional - // optimizations. + // First of all, we will need some objects that describe boundary values, + // right hand side function and the advection field. As we will only + // perform actions on these objects that do not change them, we declare + // them as constant, which can enable the compiler in some cases to + // perform additional optimizations. const AdvectionField advection_field; const RightHandSide right_hand_side; const BoundaryValues boundary_values; - // Next we need quadrature formula - // for the cell terms, but also for - // the integral over the inflow - // boundary, which will be a face - // integral. As we use bilinear - // elements, Gauss formulae with - // two points in each space + // Next we need quadrature formula for the cell terms, but also for the + // integral over the inflow boundary, which will be a face integral. As we + // use bilinear elements, Gauss formulae with two points in each space // direction are sufficient. QGauss quadrature_formula(2); QGauss face_quadrature_formula(2); - // Finally, we need objects of type - // FEValues and - // FEFaceValues. For the cell - // terms we need the values and - // gradients of the shape - // functions, the quadrature points - // in order to determine the source - // density and the advection field - // at a given point, and the - // weights of the quadrature points - // times the determinant of the - // Jacobian at these points. In - // contrast, for the boundary - // integrals, we don't need the - // gradients, but rather the normal - // vectors to the cells. + // Finally, we need objects of type FEValues and + // FEFaceValues. For the cell terms we need the values and + // gradients of the shape functions, the quadrature points in order to + // determine the source density and the advection field at a given point, + // and the weights of the quadrature points times the determinant of the + // Jacobian at these points. In contrast, for the boundary integrals, we + // don't need the gradients, but rather the normal vectors to the cells. 
FEValues fe_values (fe, quadrature_formula, update_values | update_gradients | update_quadrature_points | update_JxW_values); @@ -1014,68 +674,51 @@ namespace Step9 update_values | update_quadrature_points | update_JxW_values | update_normal_vectors); - // Then we define some - // abbreviations to avoid - // unnecessarily long lines: + // Then we define some abbreviations to avoid unnecessarily long lines: const unsigned int dofs_per_cell = fe.dofs_per_cell; const unsigned int n_q_points = quadrature_formula.size(); const unsigned int n_face_q_points = face_quadrature_formula.size(); - // We declare cell matrix and cell - // right hand side... + // We declare cell matrix and cell right hand side... FullMatrix cell_matrix (dofs_per_cell, dofs_per_cell); Vector cell_rhs (dofs_per_cell); - // ... an array to hold the global - // indices of the degrees of - // freedom of the cell on which we - // are presently working... + // ... an array to hold the global indices of the degrees of freedom of + // the cell on which we are presently working... std::vector local_dof_indices (dofs_per_cell); - // ... and array in which the - // values of right hand side, - // advection direction, and - // boundary values will be stored, - // for cell and face integrals - // respectively: + // ... and array in which the values of right hand side, advection + // direction, and boundary values will be stored, for cell and face + // integrals respectively: std::vector rhs_values (n_q_points); std::vector > advection_directions (n_q_points); std::vector face_boundary_values (n_face_q_points); std::vector > face_advection_directions (n_face_q_points); - // Then we start the main loop over - // the cells: + // Then we start the main loop over the cells: typename DoFHandler::active_cell_iterator cell; for (cell=begin; cell!=end; ++cell) { - // First clear old contents of - // the cell contributions... + // First clear old contents of the cell contributions... cell_matrix = 0; cell_rhs = 0; - // ... then initialize - // the FEValues object... + // ... then initialize the FEValues object... fe_values.reinit (cell); - // ... obtain the values of - // right hand side and - // advection directions at the - // quadrature points... + // ... obtain the values of right hand side and advection directions + // at the quadrature points... advection_field.value_list (fe_values.get_quadrature_points(), advection_directions); right_hand_side.value_list (fe_values.get_quadrature_points(), rhs_values); - // ... set the value of the - // streamline diffusion - // parameter as described in - // the introduction... + // ... set the value of the streamline diffusion parameter as + // described in the introduction... const double delta = 0.1 * cell->diameter (); - // ... and assemble the local - // contributions to the system - // matrix and right hand side - // as also discussed above: + // ... and assemble the local contributions to the system matrix and + // right hand side as also discussed above: for (unsigned int q_point=0; q_pointinflow part of the - // boundary, but to find out - // whether a certain part of a - // face of the present cell is - // part of the inflow boundary, - // we have to have information - // on the exact location of the - // quadrature points and on the - // direction of flow at this - // point; we obtain this - // information using the - // FEFaceValues object and only - // decide within the main loop - // whether a quadrature point - // is on the inflow boundary. 
+ // Besides the cell terms which we have build up now, the bilinear + // form of the present problem also contains terms on the boundary of + // the domain. Therefore, we have to check whether any of the faces of + // this cell are on the boundary of the domain, and if so assemble the + // contributions of this face as well. Of course, the bilinear form + // only contains contributions from the inflow part of + // the boundary, but to find out whether a certain part of a face of + // the present cell is part of the inflow boundary, we have to have + // information on the exact location of the quadrature points and on + // the direction of flow at this point; we obtain this information + // using the FEFaceValues object and only decide within the main loop + // whether a quadrature point is on the inflow boundary. for (unsigned int face=0; face::faces_per_cell; ++face) if (cell->face(face)->at_boundary()) { - // Ok, this face of the - // present cell is on the - // boundary of the - // domain. Just as for - // the usual FEValues - // object which we have - // used in previous - // examples and also - // above, we have to - // reinitialize the - // FEFaceValues object - // for the present face: + // Ok, this face of the present cell is on the boundary of the + // domain. Just as for the usual FEValues object which we have + // used in previous examples and also above, we have to + // reinitialize the FEFaceValues object for the present face: fe_face_values.reinit (cell, face); - // For the quadrature - // points at hand, we ask - // for the values of the - // inflow function and - // for the direction of - // flow: + // For the quadrature points at hand, we ask for the values of + // the inflow function and for the direction of flow: boundary_values.value_list (fe_face_values.get_quadrature_points(), face_boundary_values); advection_field.value_list (fe_face_values.get_quadrature_points(), face_advection_directions); - // Now loop over all - // quadrature points and - // see whether it is on - // the inflow or outflow - // part of the - // boundary. This is - // determined by a test - // whether the advection - // direction points - // inwards or outwards of - // the domain (note that - // the normal vector - // points outwards of the - // cell, and since the - // cell is at the - // boundary, the normal - // vector points outward - // of the domain, so if - // the advection - // direction points into - // the domain, its scalar - // product with the - // normal vector must be - // negative): + // Now loop over all quadrature points and see whether it is on + // the inflow or outflow part of the boundary. This is + // determined by a test whether the advection direction points + // inwards or outwards of the domain (note that the normal + // vector points outwards of the cell, and since the cell is at + // the boundary, the normal vector points outward of the domain, + // so if the advection direction points into the domain, its + // scalar product with the normal vector must be negative): for (unsigned int q_point=0; q_pointget_dof_indices (local_dof_indices); - // Up until now we have not - // taken care of the fact that - // this function might run more - // than once in parallel, as - // the operations above only - // work on variables that are - // local to this function, or - // if they are global (such as - // the information on the grid, - // the DoF handler, or the DoF - // numbers) they are only - // read. Thus, the different - // threads do not disturb each - // other. 
+ // Up until now we have not taken care of the fact that this function + // might run more than once in parallel, as the operations above only + // work on variables that are local to this function, or if they are + // global (such as the information on the grid, the DoF handler, or + // the DoF numbers) they are only read. Thus, the different threads do + // not disturb each other. // - // On the other hand, we would - // now like to write the local - // contributions to the global - // system of equations into the - // global objects. This needs - // some kind of - // synchronisation, as if we - // would not take care of the - // fact that multiple threads - // write into the matrix at the - // same time, we might be - // surprised that one threads - // reads data from the matrix - // that another thread is - // presently overwriting, or - // similar things. Thus, to - // make sure that only one - // thread operates on these - // objects at a time, we have - // to lock it. This is done - // using a Mutex, which is - // short for mutually - // exclusive: a thread that - // wants to write to the global - // objects acquires this lock, - // but has to wait if it is - // presently owned by another - // thread. If it has acquired - // the lock, it can be sure - // that no other thread is - // presently writing to the - // matrix, and can do so - // freely. When finished, we - // release the lock again so as - // to allow other threads to - // acquire it and write to the + // On the other hand, we would now like to write the local + // contributions to the global system of equations into the global + // objects. This needs some kind of synchronisation, as if we would + // not take care of the fact that multiple threads write into the + // matrix at the same time, we might be surprised that one threads + // reads data from the matrix that another thread is presently + // overwriting, or similar things. Thus, to make sure that only one + // thread operates on these objects at a time, we have to lock + // it. This is done using a Mutex, which is short for + // mutually exclusive: a thread that wants to write to + // the global objects acquires this lock, but has to wait if it is + // presently owned by another thread. If it has acquired the lock, it + // can be sure that no other thread is presently writing to the + // matrix, and can do so freely. When finished, we release the lock + // again so as to allow other threads to acquire it and write to the // matrix. assembler_lock.acquire (); for (unsigned int i=0; ilock and release - // functions are no-ops, - // i.e. they return without - // doing anything. + // 1. If the library was not configured for multi-threading, then + // there can't be parallel threads and there is no need to + // synchronize. Thus, the lock and release + // functions are no-ops, i.e. they return without doing anything. // - // 2. In order to work - // properly, it is essential - // that all threads try to - // acquire the same lock. This, - // of course, can not be - // achieved if the lock is a - // local variable, as then each - // thread would acquire its own - // lock. Therefore, the lock - // variable is a member - // variable of the class; since - // all threads execute member - // functions of the same - // object, they have the same - // this pointer and - // therefore also operate on - // the same lock. + // 2. In order to work properly, it is essential that all threads try + // to acquire the same lock. 
This, of course, can not be achieved if + // the lock is a local variable, as then each thread would acquire its + // own lock. Therefore, the lock variable is a member variable of the + // class; since all threads execute member functions of the same + // object, they have the same this pointer and therefore + // also operate on the same lock. }; } - // Following is the function that - // solves the linear system of - // equations. As the system is no - // more symmetric positive definite - // as in all the previous examples, - // we can't use the Conjugate - // Gradients method anymore. Rather, - // we use a solver that is tailored - // to nonsymmetric systems like the - // one at hand, the BiCGStab - // method. As preconditioner, we use - // the Jacobi method. + // Following is the function that solves the linear system of equations. As + // the system is no more symmetric positive definite as in all the previous + // examples, we can't use the Conjugate Gradients method anymore. Rather, we + // use a solver that is tailored to nonsymmetric systems like the one at + // hand, the BiCGStab method. As preconditioner, we use the Jacobi method. template void AdvectionProblem::solve () { @@ -1357,16 +883,11 @@ namespace Step9 } - // The following function refines the - // grid according to the quantity - // described in the introduction. The - // respective computations are made - // in the class - // GradientEstimation. The only - // difference to previous examples is - // that we refine a little more - // aggressively (0.5 instead of 0.3 - // of the number of cells). + // The following function refines the grid according to the quantity + // described in the introduction. The respective computations are made in + // the class GradientEstimation. The only difference to + // previous examples is that we refine a little more aggressively (0.5 + // instead of 0.3 of the number of cells). template void AdvectionProblem::refine_grid () { @@ -1385,8 +906,7 @@ namespace Step9 - // Writing output to disk is done in - // the same way as in the previous + // Writing output to disk is done in the same way as in the previous // examples... template void AdvectionProblem::output_results (const unsigned int cycle) const @@ -1403,8 +923,7 @@ namespace Step9 } - // ... as is the main loop (setup -- - // solve -- refine) + // ... as is the main loop (setup -- solve -- refine) template void AdvectionProblem::run () { @@ -1451,10 +970,8 @@ namespace Step9 // @sect3{GradientEstimation class implementation} - // Now for the implementation of the - // GradientEstimation class. The - // first function does not much - // except for delegating work to the + // Now for the implementation of the GradientEstimation + // class. The first function does not much except for delegating work to the // other function: template void @@ -1462,69 +979,40 @@ namespace Step9 const Vector &solution, Vector &error_per_cell) { - // Before starting with the work, - // we check that the vector into - // which the results are written, - // has the right size. It is a - // common error that such - // parameters have the wrong size, - // but the resulting damage by not - // catching these errors are very - // subtle as they are usually - // corruption of data somewhere in - // memory. Often, the problems - // emerging from this are not - // reproducible, and we found that - // it is well worth the effort to + // Before starting with the work, we check that the vector into which the + // results are written, has the right size. 
It is a common error that such + // parameters have the wrong size, but the resulting damage by not + // catching these errors are very subtle as they are usually corruption of + // data somewhere in memory. Often, the problems emerging from this are + // not reproducible, and we found that it is well worth the effort to // check for such things. Assert (error_per_cell.size() == dof_handler.get_tria().n_active_cells(), ExcInvalidVectorLength (error_per_cell.size(), dof_handler.get_tria().n_active_cells())); - // Next, we subdivide the range of - // cells into chunks of equal - // size. Just as we have used the - // function - // Threads::split_range when - // assembling above, there is a - // function that computes intervals - // of roughly equal size from a - // larger interval. This is used - // here: + // Next, we subdivide the range of cells into chunks of equal size. Just + // as we have used the function Threads::split_range when + // assembling above, there is a function that computes intervals of + // roughly equal size from a larger interval. This is used here: const unsigned int n_threads = multithread_info.n_default_threads; std::vector index_intervals = Threads::split_interval (0, dof_handler.get_tria().n_active_cells(), n_threads); - // In the same way as before, we use a - // Threads::ThreadGroup object - // to collect the descriptor objects of - // different threads. Note that as the - // function called is not a member - // function, but rather a static function, - // we need not (and can not) pass a - // this pointer to the - // new_thread function in this - // case. + // In the same way as before, we use a Threads::ThreadGroup + // object to collect the descriptor objects of different threads. Note + // that as the function called is not a member function, but rather a + // static function, we need not (and can not) pass a this + // pointer to the new_thread function in this case. // - // Taking pointers to templated - // functions seems to be - // notoriously difficult for many - // compilers (since there are - // several functions with the same - // name -- just as with overloaded - // functions). It therefore happens - // quite frequently that we can't - // directly insert taking the - // address of a function in the - // call to encapsulate for one - // or the other compiler, but have - // to take a temporary variable for - // that purpose. Here, in this - // case, Compaq's cxx compiler - // choked on the code so we use - // this workaround with the - // function pointer: + // Taking pointers to templated functions seems to be notoriously + // difficult for many compilers (since there are several functions with + // the same name -- just as with overloaded functions). It therefore + // happens quite frequently that we can't directly insert taking the + // address of a function in the call to encapsulate for one + // or the other compiler, but have to take a temporary variable for that + // purpose. 
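// A toy illustration of that workaround (invented names, independent of the
// estimate_interval function itself): instead of taking the address of a
// template function instantiation directly inside a call, first bind it to
// an explicitly typed function pointer variable and pass that variable on.
template <typename T> T toy_double_it (const T t) { return t + t; }

int toy_call_through_pointer (int (*f)(const int), const int x) { return f (x); }

int toy_pointer_workaround ()
{
  int (*fp) (const int) = &toy_double_it<int>;    // the temporary pointer variable
  return toy_call_through_pointer (fp, 21);       // rather than passing &toy_double_it<int> directly
}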
Here, in this case, Compaq's cxx compiler choked + // on the code so we use this workaround with the function pointer: Threads::ThreadGroup<> threads; void (*estimate_interval_ptr) (const DoFHandler &, const Vector &, @@ -1536,55 +1024,31 @@ namespace Step9 dof_handler, solution, index_intervals[i], error_per_cell); - // Ok, now the threads are at work, - // and we only have to wait for - // them to finish their work: + // Ok, now the threads are at work, and we only have to wait for them to + // finish their work: threads.join_all (); - // Note that if the value of the - // variable - // multithread_info.n_default_threads - // was one, or if the library was - // not configured to use threads, - // then the sequence of commands - // above reduced to a complicated - // way to simply call the - // estimate_interval function - // with the whole range of cells to - // work on. However, using the way - // above, we are able to write the - // program such that it makes no - // difference whether we presently - // work with multiple threads or in - // single-threaded mode, thus - // eliminating the need to write - // code included in conditional - // preprocessor sections. + // Note that if the value of the variable + // multithread_info.n_default_threads was one, or if the + // library was not configured to use threads, then the sequence of + // commands above reduced to a complicated way to simply call the + // estimate_interval function with the whole range of cells + // to work on. However, using the way above, we are able to write the + // program such that it makes no difference whether we presently work with + // multiple threads or in single-threaded mode, thus eliminating the need + // to write code included in conditional preprocessor sections. } - // Following now the function that - // actually computes the finite - // difference approximation to the - // gradient. The general outline of - // the function is to loop over all - // the cells in the range of - // iterators designated by the third - // argument, and on each cell first - // compute the list of active - // neighbors of the present cell and - // then compute the quantities - // described in the introduction for - // each of the neighbors. The reason - // for this order is that it is not a - // one-liner to find a given neighbor - // with locally refined meshes. In - // principle, an optimized - // implementation would find - // neighbors and the quantities - // depending on them in one step, - // rather than first building a list - // of neighbors and in a second step - // their contributions. + // Following now the function that actually computes the finite difference + // approximation to the gradient. The general outline of the function is to + // loop over all the cells in the range of iterators designated by the third + // argument, and on each cell first compute the list of active neighbors of + // the present cell and then compute the quantities described in the + // introduction for each of the neighbors. The reason for this order is that + // it is not a one-liner to find a given neighbor with locally refined + // meshes. In principle, an optimized implementation would find neighbors + // and the quantities depending on them in one step, rather than first + // building a list of neighbors and in a second step their contributions. 
// // Now for the details: template @@ -1594,51 +1058,31 @@ namespace Step9 const IndexInterval &index_interval, Vector &error_per_cell) { - // First we need a way to extract - // the values of the given finite - // element function at the center - // of the cells. As usual with - // values of finite element - // functions, we use an object of - // type FEValues, and we use - // (or mis-use in this case) the - // midpoint quadrature rule to get - // at the values at the - // center. Note that the - // FEValues object only needs - // to compute the values at the - // centers, and the location of the - // quadrature points in real space - // in order to get at the vectors + // First we need a way to extract the values of the given finite element + // function at the center of the cells. As usual with values of finite + // element functions, we use an object of type FEValues, and + // we use (or mis-use in this case) the midpoint quadrature rule to get at + // the values at the center. Note that the FEValues object + // only needs to compute the values at the centers, and the location of + // the quadrature points in real space in order to get at the vectors // y. QMidpoint midpoint_rule; FEValues fe_midpoint_value (dof_handler.get_fe(), midpoint_rule, update_values | update_quadrature_points); - // Then we need space foe the - // tensor Y, which is the sum - // of outer products of the - // y-vectors. + // Then we need space foe the tensor Y, which is the sum of + // outer products of the y-vectors. Tensor<2,dim> Y; - // Then define iterators into the - // cells and into the output - // vector, which are to be looped - // over by the present instance of - // this function. We get start and - // end iterators over cells by - // setting them to the first active - // cell and advancing them using - // the given start and end - // index. Note that we can use the - // advance function of the - // standard C++ library, but that - // we have to cast the distance by - // which the iterator is to be - // moved forward to a signed - // quantity in order to avoid - // warnings by the compiler. + // Then define iterators into the cells and into the output vector, which + // are to be looped over by the present instance of this function. We get + // start and end iterators over cells by setting them to the first active + // cell and advancing them using the given start and end index. Note that + // we can use the advance function of the standard C++ + // library, but that we have to cast the distance by which the iterator is + // to be moved forward to a signed quantity in order to avoid warnings by + // the compiler. typename DoFHandler::active_cell_iterator cell, endc; cell = dof_handler.begin_active(); @@ -1647,316 +1091,157 @@ namespace Step9 endc = dof_handler.begin_active(); advance (endc, static_cast(index_interval.second)); - // Getting an iterator into the - // output array is simpler. We - // don't need an end iterator, as - // we always move this iterator - // forward by one element for each - // cell we are on, but stop the - // loop when we hit the end cell, - // so we need not have an end - // element for this iterator. + // Getting an iterator into the output array is simpler. We don't need an + // end iterator, as we always move this iterator forward by one element + // for each cell we are on, but stop the loop when we hit the end cell, so + // we need not have an end element for this iterator. 
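// What the loop below computes on each cell can be sketched in isolation.
// The following dimension-2 toy version (plain arrays and invented names,
// not the deal.II Tensor and Point classes) assembles Y as the sum of outer
// products of the normalized direction vectors y_k to the neighbor centers
// together with the vector of y_k times the approximate directional
// derivatives, and then recovers an approximate gradient by applying the
// inverse of Y.
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

std::array<double,2>
toy_recovered_gradient (const std::array<double,2>               &cell_center,
                        const double                              cell_value,
                        const std::vector<std::array<double,2> > &neighbor_centers,
                        const std::vector<double>                &neighbor_values)
{
  double Y[2][2] = {{0, 0}, {0, 0}};
  double rhs[2]  = {0, 0};

  for (std::size_t k = 0; k < neighbor_centers.size(); ++k)
    {
      double y[2] = { neighbor_centers[k][0] - cell_center[0],
                      neighbor_centers[k][1] - cell_center[1] };
      const double distance = std::sqrt (y[0]*y[0] + y[1]*y[1]);
      y[0] /= distance;
      y[1] /= distance;

      for (unsigned int i = 0; i < 2; ++i)        // Y += y y^T
        for (unsigned int j = 0; j < 2; ++j)
          Y[i][j] += y[i] * y[j];

      const double directional_derivative         // (u_k - u_0) / |x_k - x_0|
        = (neighbor_values[k] - cell_value) / distance;
      rhs[0] += y[0] * directional_derivative;
      rhs[1] += y[1] * directional_derivative;
    }

  // Apply the inverse of the 2x2 matrix Y; Y is singular if the y_k do not
  // span the plane, which corresponds to the insufficient-neighbors case
  // discussed further down.
  const double det = Y[0][0]*Y[1][1] - Y[0][1]*Y[1][0];
  const std::array<double,2> gradient =
    {{ ( Y[1][1]*rhs[0] - Y[0][1]*rhs[1]) / det,
       (-Y[1][0]*rhs[0] + Y[0][0]*rhs[1]) / det }};
  return gradient;
}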
Vector<float>::iterator error_on_this_cell = error_per_cell.begin() + index_interval.first; - // Then we allocate a vector to - // hold iterators to all active - // neighbors of a cell. We reserve - // the maximal number of active - // neighbors in order to avoid - // later reallocations. Note how - // this maximal number of active + // Then we allocate a vector to hold iterators to all active neighbors of + // a cell. We reserve the maximal number of active neighbors in order to + // avoid later reallocations. Note how this maximal number of active // neighbors is computed here. std::vector<typename DoFHandler<dim>::active_cell_iterator> active_neighbors; active_neighbors.reserve (GeometryInfo<dim>::faces_per_cell * GeometryInfo<dim>::max_children_per_face); - // Well then, after all these - // preliminaries, lets start the - // computations: + // Well then, after all these preliminaries, let's start the computations: for (; cell!=endc; ++cell, ++error_on_this_cell) { - // First initialize the - // FEValues object, as well - // as the Y tensor: + // First initialize the FEValues object, as well as the + // Y tensor: fe_midpoint_value.reinit (cell); Y.clear (); - // Then allocate the vector - // that will be the sum over - // the y-vectors times the - // approximate directional - // derivative: + // Then allocate the vector that will be the sum over the y-vectors + // times the approximate directional derivative: Tensor<1,dim> projected_gradient; - // Now before going on first - // compute a list of all active - // neighbors of the present - // cell. We do so by first - // looping over all faces and - // see whether the neighbor - // there is active, which would - // be the case if it is on the - // same level as the present - // cell or one level coarser - // (note that a neighbor can - // only be once coarser than - // the present cell, as we only - // allow a maximal difference - // of one refinement over a - // face in - // deal.II). Alternatively, the - // neighbor could be on the - // same level and be further - // refined; then we have to - // find which of its children - // are next to the present cell - // and select these (note that - // if a child of of neighbor of - // an active cell that is next - // to this active cell, needs - // necessarily be active - // itself, due to the - // one-refinement rule cited - // above). + // Now before going on first compute a list of all active neighbors of + // the present cell. We do so by first looping over all faces and seeing + // whether the neighbor there is active, which would be the case if it + // is on the same level as the present cell or one level coarser (note + // that a neighbor can only be once coarser than the present cell, as + // we only allow a maximal difference of one refinement over a face in + // deal.II). Alternatively, the neighbor could be on the same level + // and be further refined; then we have to find which of its children + // are next to the present cell and select these (note that a child of a + // neighbor of an active cell that is next to this active cell necessarily + // has to be active itself, due to the one-refinement rule + // cited above). // - // Things are slightly - // different in one space - // dimension, as there the - // one-refinement rule does not - // exist: neighboring active - // cells may differ in as many - // refinement levels as they - // like. In this case, the - // computation becomes a little - // more difficult, but we will - // explain this below.
+ // Things are slightly different in one space dimension, as there the + // one-refinement rule does not exist: neighboring active cells may + // differ in as many refinement levels as they like. In this case, the + // computation becomes a little more difficult, but we will explain + // this below. // - // Before starting the loop - // over all neighbors of the - // present cell, we have to - // clear the array storing the - // iterators to the active + // Before starting the loop over all neighbors of the present cell, we + // have to clear the array storing the iterators to the active // neighbors, of course. active_neighbors.clear (); for (unsigned int face_no=0; face_no::faces_per_cell; ++face_no) if (! cell->at_boundary(face_no)) { - // First define an - // abbreviation for the - // iterator to the face - // and the neighbor + // First define an abbreviation for the iterator to the face and + // the neighbor const typename DoFHandler::face_iterator face = cell->face(face_no); const typename DoFHandler::cell_iterator neighbor = cell->neighbor(face_no); - // Then check whether the - // neighbor is active. If - // it is, then it is on - // the same level or one - // level coarser (if we - // are not in 1D), and we - // are interested in it - // in any case. + // Then check whether the neighbor is active. If it is, then it + // is on the same level or one level coarser (if we are not in + // 1D), and we are interested in it in any case. if (neighbor->active()) active_neighbors.push_back (neighbor); else { - // If the neighbor is - // not active, then - // check its - // children. + // If the neighbor is not active, then check its children. if (dim == 1) { - // To find the - // child of the - // neighbor which - // bounds to the - // present cell, - // successively - // go to its - // right child if - // we are left of - // the present - // cell (n==0), - // or go to the - // left child if - // we are on the - // right (n==1), - // until we find - // an active - // cell. + // To find the child of the neighbor which bounds to the + // present cell, successively go to its right child if + // we are left of the present cell (n==0), or go to the + // left child if we are on the right (n==1), until we + // find an active cell. typename DoFHandler::cell_iterator neighbor_child = neighbor; while (neighbor_child->has_children()) neighbor_child = neighbor_child->child (face_no==0 ? 1 : 0); - // As this used - // some - // non-trivial - // geometrical - // intuition, we - // might want to - // check whether - // we did it - // right, - // i.e. check - // whether the - // neighbor of - // the cell we - // found is - // indeed the - // cell we are - // presently - // working - // on. Checks - // like this are - // often useful - // and have - // frequently - // uncovered - // errors both in - // algorithms - // like the line - // above (where - // it is simple - // to - // involuntarily - // exchange - // n==1 for - // n==0 or - // the like) and - // in the library - // (the - // assumptions - // underlying the - // algorithm - // above could - // either be - // wrong, wrongly - // documented, or - // are violated - // due to an - // error in the - // library). One - // could in - // principle - // remove such - // checks after - // the program - // works for some - // time, but it - // might be a - // good things to - // leave it in - // anyway to - // check for - // changes in the - // library or in - // the algorithm - // above. 
+ // As this used some non-trivial geometrical intuition, + // we might want to check whether we did it right, + // i.e. check whether the neighbor of the cell we found + // is indeed the cell we are presently working + // on. Checks like this are often useful and have + // frequently uncovered errors both in algorithms like + // the line above (where it is simple to involuntarily + // exchange n==1 for n==0 or + // the like) and in the library (the assumptions + // underlying the algorithm above could either be wrong, + // wrongly documented, or are violated due to an error + // in the library). One could in principle remove such + // checks after the program works for some time, but it + // might be a good things to leave it in anyway to check + // for changes in the library or in the algorithm above. // - // Note that if - // this check - // fails, then - // this is - // certainly an - // error that is - // irrecoverable - // and probably - // qualifies as - // an internal - // error. We - // therefore use - // a predefined - // exception - // class to throw - // here. + // Note that if this check fails, then this is certainly + // an error that is irrecoverable and probably qualifies + // as an internal error. We therefore use a predefined + // exception class to throw here. Assert (neighbor_child->neighbor(face_no==0 ? 1 : 0)==cell, ExcInternalError()); - // If the check - // succeeded, we - // push the - // active - // neighbor we - // just found to - // the stack we - // keep: + // If the check succeeded, we push the active neighbor + // we just found to the stack we keep: active_neighbors.push_back (neighbor_child); } else - // If we are not in - // 1d, we collect - // all neighbor - // children - // `behind' the - // subfaces of the - // current face + // If we are not in 1d, we collect all neighbor children + // `behind' the subfaces of the current face for (unsigned int subface_no=0; subface_non_children(); ++subface_no) active_neighbors.push_back ( cell->neighbor_child_on_subface(face_no, subface_no)); }; }; - // OK, now that we have all the - // neighbors, lets start the - // computation on each of - // them. First we do some - // preliminaries: find out - // about the center of the - // present cell and the - // solution at this point. The - // latter is obtained as a - // vector of function values at - // the quadrature points, of - // which there are only one, of - // course. Likewise, the - // position of the center is - // the position of the first - // (and only) quadrature point - // in real space. + // OK, now that we have all the neighbors, lets start the computation + // on each of them. First we do some preliminaries: find out about the + // center of the present cell and the solution at this point. The + // latter is obtained as a vector of function values at the quadrature + // points, of which there are only one, of course. Likewise, the + // position of the center is the position of the first (and only) + // quadrature point in real space. const Point this_center = fe_midpoint_value.quadrature_point(0); std::vector this_midpoint_value(1); fe_midpoint_value.get_function_values (solution, this_midpoint_value); - // Now loop over all active neighbors - // and collect the data we - // need. Allocate a vector just like - // this_midpoint_value which we - // will use to store the value of the - // solution in the midpoint of the - // neighbor cell. 
We allocate it here - // already, since that way we don't - // have to allocate memory repeatedly - // in each iteration of this inner loop - // (memory allocation is a rather + // Now loop over all active neighbors and collect the data we + // need. Allocate a vector just like this_midpoint_value + // which we will use to store the value of the solution in the + // midpoint of the neighbor cell. We allocate it here already, since + // that way we don't have to allocate memory repeatedly in each + // iteration of this inner loop (memory allocation is a rather // expensive operation): std::vector neighbor_midpoint_value(1); typename std::vector::active_cell_iterator>::const_iterator neighbor_ptr = active_neighbors.begin(); for (; neighbor_ptr!=active_neighbors.end(); ++neighbor_ptr) { - // First define an - // abbreviation for the - // iterator to the active + // First define an abbreviation for the iterator to the active // neighbor cell: const typename DoFHandler::active_cell_iterator neighbor = *neighbor_ptr; - // Then get the center of - // the neighbor cell and - // the value of the finite - // element function - // thereon. Note that for - // this information we - // have to reinitialize the - // FEValues object for + // Then get the center of the neighbor cell and the value of the + // finite element function thereon. Note that for this information + // we have to reinitialize the FEValues object for // the neighbor cell. fe_midpoint_value.reinit (neighbor); const Point neighbor_center = fe_midpoint_value.quadrature_point(0); @@ -1964,98 +1249,54 @@ namespace Step9 fe_midpoint_value.get_function_values (solution, neighbor_midpoint_value); - // Compute the vector y - // connecting the centers - // of the two cells. Note - // that as opposed to the - // introduction, we denote - // by y the normalized - // difference vector, as - // this is the quantity - // used everywhere in the - // computations. + // Compute the vector y connecting the centers of the + // two cells. Note that as opposed to the introduction, we denote + // by y the normalized difference vector, as this is + // the quantity used everywhere in the computations. Point y = neighbor_center - this_center; const double distance = std::sqrt(y.square()); y /= distance; - // Then add up the - // contribution of this - // cell to the Y matrix... + // Then add up the contribution of this cell to the Y matrix... for (unsigned int i=0; iy - // which span the whole space, - // otherwise we would not have - // all components of the - // gradient. This is indicated - // by the invertability of the - // matrix. + // If now, after collecting all the information from the neighbors, we + // can determine an approximation of the gradient for the present + // cell, then we need to have passed over vectors y which + // span the whole space, otherwise we would not have all components of + // the gradient. This is indicated by the invertability of the matrix. // - // If the matrix should not be - // invertible, this means that - // the present cell had an - // insufficient number of - // active neighbors. In - // contrast to all previous - // cases, where we raised - // exceptions, this is, - // however, not a programming - // error: it is a runtime error - // that can happen in optimized - // mode even if it ran well in - // debug mode, so it is - // reasonable to try to catch - // this error also in optimized - // mode. 
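// The distinction just described can be sketched with plain C++ means
// (standard assert() and an exception, rather than the deal.II Assert and
// AssertThrow macros used in the tutorial): the first check is compiled
// away in optimized builds and aborts in debug builds, while the second
// one remains active in optimized builds and throws, so that the caller
// has a chance to react, for example by refining the grid globally.
#include <cassert>
#include <stdexcept>

double toy_safe_divide (const double numerator, const double denominator)
{
  assert (denominator != 0);       // debug-only programming check, gone with -DNDEBUG

  if (denominator == 0)            // run-time check that survives optimization
    throw std::runtime_error ("insufficient data: denominator is zero");

  return numerator / denominator;
}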
For this case, there - // is the AssertThrow - // macro: it checks the - // condition like the - // Assert macro, but not - // only in debug mode; it then - // outputs an error message, - // but instead of terminating - // the program as in the case - // of the Assert macro, the - // exception is thrown using - // the throw command of - // C++. This way, one has the - // possibility to catch this - // error and take reasonable - // counter actions. One such - // measure would be to refine - // the grid globally, as the - // case of insufficient - // directions can not occur if - // every cell of the initial - // grid has been refined at + // If the matrix should not be invertible, this means that the present + // cell had an insufficient number of active neighbors. In contrast to + // all previous cases, where we raised exceptions, this is, however, + // not a programming error: it is a runtime error that can happen in + // optimized mode even if it ran well in debug mode, so it is + // reasonable to try to catch this error also in optimized mode. For + // this case, there is the AssertThrow macro: it checks + // the condition like the Assert macro, but not only in + // debug mode; it then outputs an error message, but instead of + // terminating the program as in the case of the Assert + // macro, the exception is thrown using the throw command + // of C++. This way, one has the possibility to catch this error and + // take reasonable counter actions. One such measure would be to + // refine the grid globally, as the case of insufficient directions + // can not occur if every cell of the initial grid has been refined at // least once. AssertThrow (determinant(Y) != 0, ExcInsufficientDirections()); - // If, on the other hand the - // matrix is invertible, then - // invert it, multiply the - // other quantity with it and - // compute the estimated error - // using this quantity and the - // right powers of the mesh - // width: + // If, on the other hand the matrix is invertible, then invert it, + // multiply the other quantity with it and compute the estimated error + // using this quantity and the right powers of the mesh width: const Tensor<2,dim> Y_inverse = invert(Y); Point gradient; @@ -2071,11 +1312,9 @@ namespace Step9 // @sect3{Main function} -// The main function is exactly -// like in previous examples, with -// the only difference in the name of -// the main class that actually does -// the computation. +// The main function is exactly like in previous examples, with +// the only difference in the name of the main class that actually does the +// computation. int main () { try