From: Bruno Turcksin
Date: Sat, 18 Apr 2015 22:19:12 +0000 (-0500)
Subject: Add sections in the documentation of step-17 and reorder functions to be more like...
X-Git-Tag: v8.3.0-rc1~246^2~1
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=16101c4489bb49217dbe87676e85af1faa695312;p=dealii.git

Add sections in the documentation of step-17 and reorder functions to be more like step-8.
---

diff --git a/examples/step-17/step-17.cc b/examples/step-17/step-17.cc
index d09698d831..6fc53cc4d8 100644
--- a/examples/step-17/step-17.cc
+++ b/examples/step-17/step-17.cc
@@ -17,6 +17,7 @@
  * Author: Wolfgang Bangerth, University of Texas at Austin, 2000, 2004
  */

+// @sect3{Include files}
 // First the usual assortment of header files we have already used in previous
 // example programs:
@@ -47,7 +48,7 @@
 // And here come the things that we need particularly for this example program
 // and that weren't in step-8. First, we replace the standard output
 // std::cout by a new stream pcout which is used in
-// %parallel computations for generating output only on one of the MPI
+// parallel computations for generating output only on one of the MPI
 // processes.
 #include
 // We are going to query the number of processes and the number of the present
@@ -91,13 +92,15 @@ namespace Step17
 {
   using namespace dealii;

-  // Now, here comes the declaration of the main class and of various other
-  // things below it. As mentioned in the introduction, almost all of this has
-  // been copied verbatim from step-8, so we only comment on the few things
-  // that are different. There is one (cosmetic) change in that we let
-  // solve return a value, namely the number of iterations it
-  // took to converge, so that we can output this to the screen at the
-  // appropriate place. In addition, we introduce a stream-like variable
+  // @sect3{The ElasticProblem class}
+
+  // Here comes the declaration of the main class. As mentioned in the
+  // introduction, almost all of this has been copied verbatim from step-8,
+  // so we only comment on the few differences between the two tutorials.
+  // There is one (cosmetic) change in that we let solve return
+  // a value, namely the number of iterations it took to converge, so that
+  // we can output this to the screen at the appropriate place. In addition,
+  // we introduce a stream-like variable
   // pcout, explained below:
   template class ElasticProblem
@@ -114,7 +117,7 @@ namespace Step17
     void refine_grid ();
     void output_results (const unsigned int cycle) const;

-    // The first variable is basically only for convenience: in %parallel
+    // The first variable is basically only for convenience: in parallel
     // program, if each process outputs status information, then there quickly
     // is a lot of clutter. Rather, we would want to only have one process
     // output everything once, for example the one with process number
@@ -141,8 +144,8 @@ namespace Step17
     // In step-8, this would have been the place where we would have declared
     // the member variables for the sparsity pattern, the system matrix, right
     // hand, and solution vector. We change these declarations to use
-    // %parallel PETSc objects instead (note that the fact that we use the
-    // %parallel versions is denoted the fact that we use the classes from the
+    // parallel PETSc objects instead (note that the fact that we use the
+    // parallel versions is denoted by the fact that we use the classes from the
     // PETScWrappers::MPI namespace; sequential versions of these
     // classes are in the PETScWrappers namespace, i.e. without
     // the MPI part). Note also that we do not use a separate
@@ -157,10 +160,10 @@ namespace Step17
     // the MPI communicator over which we are supposed to distribute our
     // computations. Note that if this is a sequential job without support by
     // MPI, then PETSc provides some dummy type for MPI_Comm, so
-    // we do not have to care here whether the job is really a %parallel one:
+    // we do not have to care here whether the job is really a parallel one:
     MPI_Comm mpi_communicator;

-    // Then we have two variables that tell us where in the %parallel world we
+    // Then we have two variables that tell us where in the parallel world we
     // are. The first of the following variables, n_mpi_processes
     // tells us how many MPI processes there exist in total, while the second
     // one, this_mpi_process, indicates which is the number of
@@ -174,7 +177,9 @@ namespace Step17
   };

-  // The following is again taken from step-8 without change:
+  // @sect3{Right hand side values}
+
+  // The following is taken from step-8 without change:
   template class RightHandSide : public Function
   {
@@ -237,7 +242,12 @@ namespace Step17
   }

-  // The first step in the actual implementation of things is the constructor
+
+  // @sect3{The ElasticProblem class implementation}
+
+  // @sect4{ElasticProblem::ElasticProblem}
+
+  // The first step in the actual implementation is the constructor
   // of the main class. Apart from initializing the same member variables that
   // we already had in step-8, we here initialize the MPI communicator
   // variable we shall use with the global MPI communicator linking all
@@ -265,6 +275,9 @@ namespace Step17

+  // @sect4{ElasticProblem::~ElasticProblem}
+
+  // The destructor is exactly as in step-8.
   template ElasticProblem::~ElasticProblem ()
   {
@@ -272,13 +285,15 @@ namespace Step17
   }

-  // The second step is the function in which we set up the various variables
-  // for the global linear system to be solved.
+  // @sect4{ElasticProblem::setup_system}
+
+  // Next, we implement the function in which we set up the various
+  // variables for the global linear system to be solved.
   template void ElasticProblem::setup_system ()
   {
     // Before we even start out setting up the system, there is one thing to
-    // do for a %parallel program: we need to assign cells to each of the
+    // do for a parallel program: we need to assign cells to each of the
     // processes. We do this by splitting (partitioning) the mesh
     // cells into as many chunks (subdomains) as there are
     // processes in this MPI job (if this is a sequential job, then there is
@@ -313,7 +328,7 @@ namespace Step17
                                          this_mpi_process);

     // Then we initialize the system matrix, solution, and right hand side
-    // vectors. Since they all need to work in %parallel, we have to pass them
+    // vectors. Since they all need to work in parallel, we have to pass them
     // an MPI communication object, as well as their global sizes (both
     // dimensions are equal to the number of degrees of freedom), and also how
     // many rows out of this global size are to be stored locally
@@ -347,9 +362,12 @@ namespace Step17
   }

-  // The third step is to actually assemble the matrix and right hand side of
+
+  // @sect4{ElasticProblem::assemble_system}
+
+  // We now assemble the matrix and right hand side of
   // the problem. There are some things worth mentioning before we go into
-  // detail. First, we will be assembling the system in %parallel, i.e. each
+  // detail. First, we will be assembling the system in parallel, i.e. each
   // process will be responsible for assembling on cells that belong to this
   // particular processor. Note that the degrees of freedom are split in a way
   // such that all DoFs in the interior of cells and between cells belonging
@@ -457,7 +475,6 @@ namespace Step17
         for (unsigned int q_point=0; q_point
@@ -567,7 +586,7 @@ namespace Step17
     // First, we have to set up a convergence monitor, and assign it the
     // accuracy to which we would like to solve the linear system. Next, an
     // actual solver object using PETSc's CG solver which also works with
-    // %parallel (distributed) vectors and matrices. And finally a
+    // parallel (distributed) vectors and matrices. And finally a
     // preconditioner; we choose to use a block Jacobi preconditioner which
     // works by computing an incomplete LU decomposition on each block
     // (i.e. the chunk of matrix that is stored on each MPI process). That
@@ -616,9 +635,115 @@ namespace Step17
   }

+  // @sect4{ElasticProblem::refine_grid}
+
+  // Using some kind of refinement indicator, the mesh can be refined. The problem
+  // is basically the same as with distributing hanging node constraints: in order to
+  // compute the error indicator, we need access to all elements of the
+  // solution vector. We then compute the indicators for the cells that belong
+  // to the present process, but then we need to distribute the refinement
+  // indicators into a distributed vector so that all processes have the
+  // values of the refinement indicator for all cells. But then, in order for
+  // each process to refine its copy of the mesh, they need to have access to
+  // all refinement indicators locally, so they have to copy the global vector
+  // back into a local one. That's a little convoluted but, thinking about it,
+  // quite straightforward nevertheless. So here's how we do it:
+  template
+  void ElasticProblem::refine_grid ()
+  {
+    // So, first part: get a local copy of the distributed solution
+    // vector. This is necessary since the error estimator needs to get at the
+    // value of neighboring cells even if they do not belong to the subdomain
+    // associated with the present MPI process:
+    const PETScWrappers::Vector localized_solution (solution);
+
+    // Second part: set up a vector of error indicators for all cells and let
+    // the Kelly class compute refinement indicators for all cells belonging
+    // to the present subdomain/process. Note that the last argument of the
+    // call indicates which subdomain we are interested in. The three
+    // arguments before it are various other default arguments that one
+    // usually doesn't need (and doesn't state values for, but rather uses the
+    // defaults), but which we have to state here explicitly since we want to
+    // modify the value of a following argument (i.e. the one indicating the
+    // subdomain):
+    Vector local_error_per_cell (triangulation.n_active_cells());
+    KellyErrorEstimator::estimate (dof_handler,
+                                   QGauss(2),
+                                   typename FunctionMap::type(),
+                                   localized_solution,
+                                   local_error_per_cell,
+                                   ComponentMask(),
+                                   0,
+                                   MultithreadInfo::n_threads(),
+                                   this_mpi_process);
+
+    // Now all processes have computed error indicators for their own cells
+    // and stored them in the respective elements of the
+    // local_error_per_cell vector. The elements of this vector
+    // for cells not on the present process are zero. However, since all
+    // processes have a copy of the entire triangulation and need to
+    // keep these copies in sync, they need the values of refinement
+    // indicators for all cells of the triangulation. Thus, we need to
+    // distribute our results. We do this by creating a distributed vector
+    // where each process has its share, and sets the elements it has
+    // computed. We will then later generate a local sequential copy of this
+    // distributed vector to allow each process to access all elements of this
+    // vector.
+    //
+    // So in the first step, we need to set up a parallel vector. For
+    // simplicity, every process will own a chunk with as many elements as
+    // this process owns cells, so that the first chunk of elements is stored
+    // with process zero, the next chunk with process one, and so on. It is
+    // important to remark, however, that these elements are not necessarily
+    // the ones we will write to. This is so, since the order in which cells
+    // are arranged, i.e. the order in which the elements of the vector
+    // correspond to cells, is not ordered according to the subdomain these
+    // cells belong to. In other words, if on this process we compute
+    // indicators for cells of a certain subdomain, we may write the results
+    // to more or less random elements of the distributed vector that do not
+    // necessarily lie within the chunk of vector we own on the present
+    // process. They will subsequently have to be copied into another
+    // process's memory space then, an operation that PETSc does for us when
+    // we call the compress function. This inefficiency could be
+    // avoided with some more code, but we refrain from it since it is not a
+    // major factor in the program's total runtime.
+    //
+    // So here's how we do it: count how many cells belong to this process,
+    // set up a distributed vector with that many elements to be stored
+    // locally, and copy over the elements we computed locally, then compress
+    // the result. In fact, we really only copy the elements that are nonzero,
+    // so we may miss a few that we computed to zero, but this won't hurt
+    // since the original values of the vector are zero anyway.
+    const unsigned int n_local_cells
+      = GridTools::count_cells_with_subdomain_association (triangulation,
+                                                           this_mpi_process);
+    PETScWrappers::MPI::Vector
+    distributed_all_errors (mpi_communicator,
+                            triangulation.n_active_cells(),
+                            n_local_cells);
+
+    for (unsigned int i=0; i localized_all_errors (distributed_all_errors);
+
+    // ...which we can then subsequently use to finally refine the grid:
+    GridRefinement::refine_and_coarsen_fixed_number (triangulation,
+                                                     localized_all_errors,
+                                                     0.3, 0.03);
+    triangulation.execute_coarsening_and_refinement ();
+  }
+
+
+  // @sect4{ElasticProblem::output_results}
-  // Step five is to output the results we computed in this iteration. This is
-  // actually the same as done in step-8 before, with two small
+  // This is actually the same as done in step-8 before, with two small
   // differences. First, all processes call this function, but not all of them
   // need to do the work associated with generating output. In fact, they
   // shouldn't, since we would try to write to the same file multiple times at
@@ -741,111 +866,7 @@ namespace Step17
   }
-
-  // The sixth step is to take the solution just computed, and evaluate some
-  // kind of refinement indicator to refine the mesh. The problem is basically
-  // the same as with distributing hanging node constraints: in order to
-  // compute the error indicator, we need access to all elements of the
-  // solution vector. We then compute the indicators for the cells that belong
-  // to the present process, but then we need to distribute the refinement
-  // indicators into a distributed vector so that all processes have the
-  // values of the refinement indicator for all cells. But then, in order for
-  // each process to refine its copy of the mesh, they need to have access to
-  // all refinement indicators locally, so they have to copy the global vector
-  // back into a local one. That's a little convoluted, but thinking about it
-  // quite straightforward nevertheless. So here's how we do it:
-  template
-  void ElasticProblem::refine_grid ()
-  {
-    // So, first part: get a local copy of the distributed solution
-    // vector. This is necessary since the error estimator needs to get at the
-    // value of neighboring cells even if they do not belong to the subdomain
-    // associated with the present MPI process:
-    const PETScWrappers::Vector localized_solution (solution);
-
-    // Second part: set up a vector of error indicators for all cells and let
-    // the Kelly class compute refinement indicators for all cells belonging
-    // to the present subdomain/process. Note that the last argument of the
-    // call indicates which subdomain we are interested in. The three
-    // arguments before it are various other default arguments that one
-    // usually doesn't need (and doesn't state values for, but rather uses the
-    // defaults), but which we have to state here explicitly since we want to
-    // modify the value of a following argument (i.e. the one indicating the
-    // subdomain):
-    Vector local_error_per_cell (triangulation.n_active_cells());
-    KellyErrorEstimator::estimate (dof_handler,
-                                   QGauss(2),
-                                   typename FunctionMap::type(),
-                                   localized_solution,
-                                   local_error_per_cell,
-                                   ComponentMask(),
-                                   0,
-                                   MultithreadInfo::n_threads(),
-                                   this_mpi_process);
-
-    // Now all processes have computed error indicators for their own cells
-    // and stored them in the respective elements of the
-    // local_error_per_cell vector. The elements of this vector
-    // for cells not on the present process are zero. However, since all
-    // processes have a copy of a copy of the entire triangulation and need to
-    // keep these copies in sync, they need the values of refinement
-    // indicators for all cells of the triangulation. Thus, we need to
-    // distribute our results. We do this by creating a distributed vector
-    // where each process has its share, and sets the elements it has
-    // computed. We will then later generate a local sequential copy of this
-    // distributed vector to allow each process to access all elements of this
-    // vector.
-    //
-    // So in the first step, we need to set up a %parallel vector. For
-    // simplicity, every process will own a chunk with as many elements as
-    // this process owns cells, so that the first chunk of elements is stored
-    // with process zero, the next chunk with process one, and so on. It is
-    // important to remark, however, that these elements are not necessarily
-    // the ones we will write to. This is so, since the order in which cells
-    // are arranged, i.e. the order in which the elements of the vector
-    // correspond to cells, is not ordered according to the subdomain these
-    // cells belong to. In other words, if on this process we compute
-    // indicators for cells of a certain subdomain, we may write the results
-    // to more or less random elements if the distributed vector, that do not
-    // necessarily lie within the chunk of vector we own on the present
-    // process. They will subsequently have to be copied into another
-    // process's memory space then, an operation that PETSc does for us when
-    // we call the compress function. This inefficiency could be
-    // avoided with some more code, but we refrain from it since it is not a
-    // major factor in the program's total runtime.
-    //
-    // So here's how we do it: count how many cells belong to this process,
-    // set up a distributed vector with that many elements to be stored
-    // locally, and copy over the elements we computed locally, then compress
-    // the result. In fact, we really only copy the elements that are nonzero,
-    // so we may miss a few that we computed to zero, but this won't hurt
-    // since the original values of the vector is zero anyway.
-    const unsigned int n_local_cells
-      = GridTools::count_cells_with_subdomain_association (triangulation,
-                                                           this_mpi_process);
-    PETScWrappers::MPI::Vector
-    distributed_all_errors (mpi_communicator,
-                            triangulation.n_active_cells(),
-                            n_local_cells);
-
-    for (unsigned int i=0; i localized_all_errors (distributed_all_errors);
-
-    // ...which we can the subsequently use to finally refine the grid:
-    GridRefinement::refine_and_coarsen_fixed_number (triangulation,
-                                                     localized_all_errors,
-                                                     0.3, 0.03);
-    triangulation.execute_coarsening_and_refinement ();
-  }
-
+  // @sect4{ElasticProblem::run}
   // Lastly, here is the driver function. It is almost unchanged from step-8,
@@ -897,7 +918,9 @@ namespace Step17
 }

-// So that's it, almost. main() works the same way as most of the
+// @sect3{The main function}
+
+// The main() function works the same way as most of the
// main functions in the other example programs, i.e. it delegates work to the
// run function of a master object, and only wraps everything
// into some code to catch exceptions:
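
The comments in the patch above repeatedly refer to the stream-like variable pcout, which generates output on only one of the MPI processes. As a reading aid — this is not part of the commit itself — here is a minimal, self-contained sketch of that idiom using deal.II's ConditionalOStream and the Utilities::MPI helpers. The program is an illustrative assumption (step-17 keeps these objects as class members rather than in main()), and printing from process 0 is simply the conventional choice the tutorial comments describe.

#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/mpi.h>

#include <iostream>

int main (int argc, char **argv)
{
  using namespace dealii;

  // Start MPI, and shut it down again when this object goes out of scope.
  Utilities::MPI::MPI_InitFinalize mpi_initialization (argc, argv);

  const MPI_Comm     mpi_communicator = MPI_COMM_WORLD;
  const unsigned int n_mpi_processes
    = Utilities::MPI::n_mpi_processes (mpi_communicator);
  const unsigned int this_mpi_process
    = Utilities::MPI::this_mpi_process (mpi_communicator);

  // The stream is "active" only on process 0; on every other process,
  // anything sent to it is silently discarded.
  ConditionalOStream pcout (std::cout, this_mpi_process == 0);

  pcout << "Running with " << n_mpi_processes << " MPI process(es)"
        << std::endl;

  return 0;
}

Every process executes the same << statements; only the process for which the second constructor argument evaluates to true actually forwards them to std::cout, which is what lets the rest of the program be written as if there were a single output stream.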
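
The setup_system() comments above also describe the partitioning step: every process stores the whole triangulation, the cells are split into one subdomain per process, and each parallel PETSc vector is told how many of its rows are stored locally. The sketch below illustrates that bookkeeping under stated assumptions: the free-standing helper function partition_and_size and its parameter list are invented for illustration (step-17 does this inside its member functions and member variables), it presumes dof_handler.distribute_dofs() has already been called, and it uses the deal.II 8.x-era PETSc wrapper header that this version of step-17 relies on, so it needs a deal.II build configured with PETSc.

#include <deal.II/base/mpi.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/tria.h>
#include <deal.II/lac/petsc_parallel_vector.h>

template <int dim>
void partition_and_size (dealii::Triangulation<dim>         &triangulation,
                         dealii::DoFHandler<dim>            &dof_handler,
                         const MPI_Comm                      mpi_communicator,
                         const unsigned int                  n_mpi_processes,
                         const unsigned int                  this_mpi_process,
                         dealii::PETScWrappers::MPI::Vector &solution)
{
  using namespace dealii;

  // Split the cells of the (globally replicated) mesh into as many
  // subdomains as there are processes in this MPI job.
  GridTools::partition_triangulation (n_mpi_processes, triangulation);

  // Renumber degrees of freedom so that all DoFs owned by one subdomain
  // form a contiguous range of indices.
  DoFRenumbering::subdomain_wise (dof_handler);

  // Count how many of the global degrees of freedom are associated with
  // the subdomain of the present process...
  const unsigned int n_local_dofs
    = DoFTools::count_dofs_with_subdomain_association (dof_handler,
                                                       this_mpi_process);

  // ...and size a parallel PETSc vector accordingly: it needs the
  // communicator, its global size, and the number of locally stored rows.
  solution.reinit (mpi_communicator, dof_handler.n_dofs(), n_local_dofs);
}

The design point the patch's comments make is visible here as well: only the PETSc vectors and matrices are truly distributed, while the mesh and DoF handler are replicated on every process — which is exactly why the refine_grid() code in the diff has to localize the distributed error vector again before refining.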