From bc2a0c488ef83fb029e2ed1fd3a112141a37a6a1 Mon Sep 17 00:00:00 2001 From: Wolfgang Bangerth Date: Thu, 7 Jan 2016 14:07:03 -0600 Subject: [PATCH] More work on the documentation. --- doc/news/changes.h | 7 + examples/step-17/doc/intro.dox | 6 +- examples/step-17/doc/results.dox | 66 ++++-- examples/step-17/step-17.cc | 344 ++++++++++++++++++------------- 4 files changed, 253 insertions(+), 170 deletions(-) diff --git a/doc/news/changes.h b/doc/news/changes.h index 7e93d38360..2e1af662cb 100644 --- a/doc/news/changes.h +++ b/doc/news/changes.h @@ -206,6 +206,13 @@ inconvenience this causes. (Jean-Paul Pelteret, 2016/01/08) +
  • New: The documentation of step-17 has been completely rewritten, + and many aspects of how one has to think when writing parallel programs + have been much better documented now. +
    + (Wolfgang Bangerth, 2016/01/07) +
• New: constrained_linear_operator() and constrained_right_hand_side() provide a generic mechanism of applying constraints to a LinearOperator. A detailed explanation with example code is given in the @ref constraints diff --git a/examples/step-17/doc/intro.dox b/examples/step-17/doc/intro.dox index 4b97145dc9..8a0d51c52f 100644 --- a/examples/step-17/doc/intro.dox +++ b/examples/step-17/doc/intro.dox @@ -94,7 +94,11 @@ For larger problems, having to store the entire mesh on every processor will clearly yield a bottleneck. Splitting up the mesh is slightly, though not much more complicated (from a user perspective, though it is much more complicated under the hood) to achieve and -we will show how to do this in step-40 and some other programs. +we will show how to do this in step-40 and some other programs. There are +numerous occasions where, in the course of discussing how a function of this +program works, we will comment on the fact that it will not scale to large +problems and why not. All of these issues will be addressed in step-18 and +in particular step-40, which scales to very large numbers of processes. Philosophically, the way MPI operates is as follows. You typically run a program via @code mpirun -np 32 ./step-17 @endcode -to run the step-17 executable with 32 processors. +to run the step-17 executable with 32 processors. -The command line above is the appropriate way of starting the program -on a multicore machine when using MPI for parallelization. On the -other hand, most clusters are shared resources and have some kind of -scheduling system that distributes jobs onto available processors. All -of these scheduling systems have their own calling syntax - on my system, I have to use the command -qsub with a whole host of options to run a job in parallel - so -that the exact command line syntax varies. +(If you work on a cluster, +there is typically a step in between where you need to set up a job script and +submit it to a scheduler, which then executes the script whenever it +can allocate 32 unused processors for your job. How to write such job +scripts differs from cluster to cluster, and you should consult the documentation +of your cluster to see how to do this. On my system, I have to use the command +qsub with a whole host of options to run a job in parallel.) Whether directly or through a scheduler, if you run this program on 8 processors, you should get output like the following: @@ -106,7 +106,7 @@ release mode has been enabled by running make release, and with the generation of graphical output switched off for the reasons stated in the program comments above. (@dealiiVideoLectureSeeAlso{18}) -The biggest 2d computations we did had roughly 7.1 +The biggest 2d computations I did had roughly 7.1 million unknowns, and were done on 32 processes. It took about 40 minutes. Not surprisingly, the limiting factor for how far one can go is how much memory one has, since every process has to hold the entire mesh and DoFHandler objects, @@ -120,9 +120,16 @@ entries locally or not. Here is some output generated in the 12th cycle of the program, i.e. with roughly 300,000 unknowns:
[output figures omitted]
@@ -134,8 +141,16 @@ separate scalar fields. What may be more interesting, though, is to look at the mesh and partition at this step:
[figures of the mesh and its partition omitted]
Again, the mesh (left) shows the same refinement pattern as seen @@ -212,12 +227,20 @@ Cycle 6: The last step, going up to 1.5 million unknowns, takes about 55 minutes with 16 processes on 8 dual-processor machines (of the kind available in 2003). The graphical output generated by -this job is rather large (cycle 5 already prints around 82 MB of GMV data), so +this job is rather large (cycle 5 already prints around 82 MB of data), so we content ourselves with showing output from cycle 4:
[figures of the partition and the x-displacement omitted]
The left picture shows the partitioning of the cube into 16 processes, whereas the right one shows the x-displacement along two cutplanes through the cube.

    Possibilities for extensions

The program keeps a complete copy of the Triangulation and DoFHandler objects -on every processor. That's obviously the bottleneck for as far as -parallelization goes. Internally, within deal.II, parallelizing the data -structures used in hierarchic and unstructured triangulations is a very hard +on every processor. It also creates complete copies of the solution vector, +and it creates output on only one processor. All of this is obviously +the bottleneck as far as parallelization is concerned. + +Internally, within deal.II, parallelizing the data +structures used in hierarchic and unstructured triangulations is a hard problem, and it took us a few more years to make this happen. The step-40 tutorial program and the @ref distributed documentation module talk about how to do these steps and what it takes from an application perspective. An diff --git a/examples/step-17/step-17.cc b/examples/step-17/step-17.cc index 401d92b518..ebdebd1e23 100644 --- a/examples/step-17/step-17.cc +++ b/examples/step-17/step-17.cc @@ -730,35 +730,52 @@ namespace Step17 // @sect4{ElasticProblem::refine_grid} - // Using some kind of refinement indicator, the mesh can be refined. The problem - // is basically the same as with distributing hanging node constraints: in order to - // compute the error indicator, we need access to all elements of the - // solution vector. We then compute the indicators for the cells that belong - // to the present process, but then we need to distribute the refinement - // indicators into a distributed vector so that all processes have the - // values of the refinement indicator for all cells. But then, in order for - // each process to refine its copy of the mesh, they need to have access to - // all refinement indicators locally, so they have to copy the global vector - // back into a local one. That's a little convoluted, but thinking about it - // quite straightforward nevertheless. So here's how we do it: + // Using some kind of refinement indicator, the mesh can be + // refined. The problem is basically the same as with distributing + // hanging node constraints: in order to compute the error indicator + // (even if we were just interested in the indicator on the cells + // the current process owns), we need access to more elements of the + // solution vector than just those the current processor stores. To + // make this happen, we do essentially what we did in + // solve() already, namely get a complete copy + // of the solution vector onto every process, and use that to + // compute the indicators. This is, in itself, expensive as explained above, and it + // is particularly unnecessary since we had just created and then + // destroyed such a vector in solve(), but efficiency + // is not the point of this program and so let us opt for a design + // in which every function is as self-contained as possible. + // + // Once we have such a "localized" vector that contains all + // elements of the solution vector, we can compute the indicators + // for the cells that belong to the present process. In fact, we + // could of course compute all refinement indicators since + // our Triangulation and DoFHandler objects store information about + // all cells, and since we have a complete copy of the solution + // vector. But in the interest of showing how to operate in + // %parallel, let us demonstrate how one would operate if one were + // to only compute some error indicators and then exchange + // the remaining ones with the other processes.
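// (Editorial aside, not part of the original patch: it is worth putting a
// number on "expensive". A localized copy of the solution vector stores
// every unknown on every process; for the roughly 7.1 million unknowns
// quoted in the results section above, that amounts to
//   7.1e6 entries x 8 bytes/double ~ 57 MB
// per process for this one helper vector alone, re-created -- via what
// amounts to an all-gather communication -- every time a function
// localizes the distributed vector in this way.)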
(Ultimately, each + // process needs a complete set of refinement indicators because + // every process needs to refine its mesh, and needs to refine it + // in exactly the same way as all of the other processes.) + // + // So, to do all of this, we need to: + // - First, get a local copy of the distributed solution vector. + // - Second, create a vector to store the refinement indicators. + // - Third, let the KellyErrorEstimator compute refinement + // indicators for all cells belonging to the present + // subdomain/process. The last argument of the call indicates + // which subdomain we are interested in. The three arguments + // before it are various other default arguments that one usually + // doesn't need (and doesn't state values for, but rather uses the + // defaults), but which we have to state here explicitly since we + // want to modify the value of a following argument (i.e. the one + // indicating the subdomain). template <int dim> void ElasticProblem<dim>::refine_grid () { - // So, first part: get a local copy of the distributed solution - // vector. This is necessary since the error estimator needs to get at the - // value of neighboring cells even if they do not belong to the subdomain - // associated with the present MPI process: const Vector<double> localized_solution (solution); - // Second part: set up a vector of error indicators for all cells and let - // the Kelly class compute refinement indicators for all cells belonging - // to the present subdomain/process. Note that the last argument of the - // call indicates which subdomain we are interested in. The three - // arguments before it are various other default arguments that one - // usually doesn't need (and doesn't state values for, but rather uses the - // defaults), but which we have to state here explicitly since we want to - // modify the value of a following argument (i.e. the one indicating the - // subdomain): Vector<float> local_error_per_cell (triangulation.n_active_cells()); KellyErrorEstimator<dim>::estimate (dof_handler, QGauss<dim-1>(2), @@ -770,43 +787,49 @@ namespace Step17 MultithreadInfo::n_threads(), this_mpi_process); - // Now all processes have computed error indicators for their own cells - // and stored them in the respective elements of the - // local_error_per_cell vector. The elements of this vector - // for cells not on the present process are zero. However, since all - // processes have a copy of a copy of the entire triangulation and need to - // keep these copies in sync, they need the values of refinement - // indicators for all cells of the triangulation. Thus, we need to - // distribute our results. We do this by creating a distributed vector - // where each process has its share, and sets the elements it has - // computed. We will then later generate a local sequential copy of this - // distributed vector to allow each process to access all elements of this - // vector. + // Now all processes have computed error indicators for their own + // cells and stored them in the respective elements of the + // local_error_per_cell vector. The elements of this + // vector for cells not owned by the present process are + // zero. However, since all processes have a copy of the entire + // triangulation and need to keep these copies in sync, they need + // the values of refinement indicators for all cells of the + // triangulation. Thus, we need to distribute our results. We do + // this by creating a distributed vector where each process has + // its share, and sets the elements it has computed.
Consequently, + // when you view this vector as one that lives across all + // processes, then every element of this vector has been set + // once. We can then assign this parallel vector to a local, + // non-parallel vector on each process, making all error + // indicators available on every process. // // So in the first step, we need to set up a parallel vector. For - // simplicity, every process will own a chunk with as many elements as - // this process owns cells, so that the first chunk of elements is stored - // with process zero, the next chunk with process one, and so on. It is - // important to remark, however, that these elements are not necessarily - // the ones we will write to. This is so, since the order in which cells - // are arranged, i.e. the order in which the elements of the vector - // correspond to cells, is not ordered according to the subdomain these - // cells belong to. In other words, if on this process we compute - // indicators for cells of a certain subdomain, we may write the results - // to more or less random elements if the distributed vector, that do not - // necessarily lie within the chunk of vector we own on the present - // process. They will subsequently have to be copied into another - // process's memory space then, an operation that PETSc does for us when - // we call the compress function. This inefficiency could be - // avoided with some more code, but we refrain from it since it is not a - // major factor in the program's total runtime. + // simplicity, every process will own a chunk with as many + // elements as this process owns cells, so that the first chunk of + // elements is stored with process zero, the next chunk with + // process one, and so on. It is important to remark, however, + // that these elements are not necessarily the ones we will write + // to. This is so, since the order in which cells are arranged, + // i.e., the order in which the elements of the vector correspond + // to cells, is not ordered according to the subdomain these cells + // belong to. In other words, if on this process we compute + // indicators for cells of a certain subdomain, we may write the + // results to more or less random elements of the distributed + // vector; in particular, they may not necessarily lie within the + // chunk of vector we own on the present process. They will + // subsequently have to be copied into another process's memory + // space, an operation that PETSc does for us when we call the + // compress() function. This inefficiency could be + // avoided with some more code, but we refrain from it since it is + // not a major factor in the program's total runtime. // - // So here's how we do it: count how many cells belong to this process, - // set up a distributed vector with that many elements to be stored - // locally, and copy over the elements we computed locally, then compress - // the result. In fact, we really only copy the elements that are nonzero, - // so we may miss a few that we computed to zero, but this won't hurt - // since the original values of the vector is zero anyway. + // So here's how we do it: count how many cells belong to this + // process, set up a distributed vector with that many elements to + // be stored locally, and copy over the elements we computed + // locally, then compress the result. In fact, we really only copy + // the elements that are nonzero, so we may miss a few that we + // computed to zero, but this won't hurt since the original values + // of the vector is zero anyway. 
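// (Editorial aside, not part of the original patch: a small, made-up
// example may help picture the mismatch described above. Suppose the
// triangulation has 100 active cells and we run on 4 processes owning
// 25 cells each. Then process 1 owns elements 25..49 of the distributed
// vector created below, but the cells it owns may be, say, the cells with
// active cell indices 3, 17, 42, ... in the triangulation's ordering. It
// therefore writes its error indicators into elements 3, 17, 42, ...,
// most of which are stored on other processes and are shipped there by
// the compress() call.)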
const unsigned int n_local_cells = GridTools::count_cells_with_subdomain_association (triangulation, this_mpi_process); @@ -821,12 +844,15 @@ namespace Step17 distributed_all_errors.compress (VectorOperation::insert); - // So now we have this distributed vector out there that contains the - // refinement indicators for all cells. To use it, we need to obtain a - // local copy... + // So now we have this distributed vector that contains the + // refinement indicators for all cells. To use it, we need to + // obtain a local copy and then use it to mark cells for + // refinement or coarsening, and actually do the refinement and + // coarsening. It is important to recognize that every + // process does this to its own copy of the triangulation, and + // does it in exactly the same way. const Vector<float> localized_all_errors (distributed_all_errors); - // ...which we can the subsequently use to finally refine the grid: GridRefinement::refine_and_coarsen_fixed_number (triangulation, localized_all_errors, 0.3, 0.03); @@ -836,76 +862,95 @@ namespace Step17 // @sect4{ElasticProblem::output_results} - // This is actually the same as done in step-8 before, with two small - // differences. First, all processes call this function, but not all of them - // need to do the work associated with generating output. In fact, they - // shouldn't, since we would try to write to the same file multiple times at - // once. So we let only the first job do this, and all the other ones idle - // around during this time (or start their work for the next iteration, or - // simply yield their CPUs to other jobs that happen to run at the same - // time). The second thing is that we not only output the solution vector, - // but also a vector that indicates which subdomain each cell belongs - // to. This will make for some nice pictures of partitioned domains. + // The final function of significant interest is the one that + // creates graphical output. This works the same way as in step-8, + // with two small differences. Before discussing these, let us state + // the general philosophy by which this function will work: we intend for all + // of the data to be generated on a single process, and subsequently + // written to a file. This is, as already discussed in many other parts of + // this program, not something that will scale. Previously, we + // had argued that we will get into trouble with triangulations, + // DoFHandlers, and copies of the solution vector where every + // process has to store all of the data, and that there will come to + // be a point where each process simply doesn't have enough memory + // to store that much data. Here, the situation is different: it's + // not only the memory, but also the run time that's a problem. If + // one process is responsible for processing all of the data + // while all of the other processes do nothing, then this one + // function will eventually come to dominate the overall run time of + // the program. In particular, the time this function takes is + // going to be proportional to the overall size of the problem + // (counted in the number of cells, or the number of degrees of + // freedom), independent of the number of processes we throw at it. + // + // Such situations need to be avoided, and we will show in step-18 + // and step-40 how to address this issue. For the current problem, + // the solution is to have each process generate output data only + // for its own local cells, and write them to separate files, one + // file per process. This is how step-18 operates.
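// (Editorial aside, not part of the original patch: as a rough sketch of
// the "one file per process" strategy just mentioned -- the object and
// variable names are the ones used in this program, but the file naming
// scheme is made up for illustration -- every process would execute
// something like
// @code
//   std::ostringstream filename;
//   filename << "solution-" << cycle
//            << "." << this_mpi_process << ".vtk";
//   std::ofstream output (filename.str().c_str());
//   data_out.write_vtk (output);
// @endcode
// without the "if (this_mpi_process == 0)" guard used below, so that no
// two processes ever write to the same file.)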
Alternatively, + // one could simply leave everything in a set of independent files + // and let the visualization software read all of them (possibly + // also using multiple processors) and create a single visualization + // out of all of them; this is the path step-40, step-32, and all + // other parallel programs developed later on take. + // + // More specifically for the current function, all processes call + // this function, but not all of them need to do the work associated + // with generating output. In fact, they shouldn't, since we would + // try to write to the same file multiple times at once. So we let + // only the first process do this, and all the other ones idle + // around during this time (or start their work for the next + // iteration, or simply yield their CPUs to other jobs that happen + // to run at the same time). The second thing is that we not only + // output the solution vector, but also a vector that indicates + // which subdomain each cell belongs to. This will make for some + // nice pictures of partitioned domains. // - // In practice, the present implementation of the output function is a major - // bottleneck of this program, since generating graphical output is - // expensive and doing so only on one process does, of course, not scale if - // we significantly increase the number of processes. In effect, this - // function will consume most of the run-time if you go to very large - // numbers of unknowns and processes, and real applications should limit the - // number of times they generate output through this function. + // To implement this, process zero needs a complete set of solution + // components in a local vector. Just as with the previous function, + // the efficient way to do this would be to re-use the vector + // already created in the solve() function, but to keep + // things more self-contained, we simply re-create one here from the + // distributed solution vector. // - // The solution to this is to have each process generate output data only - // for it's own local cells, and write them to separate files, one file per - // process. This would distribute the work of generating the output to all - // processes equally. In a second step, separate from running this program, - // we would then take all the output files for a given cycle and merge these - // parts into one single output file. This has to be done sequentially, but - // can be done on a different machine, and should be relatively - // cheap. However, the necessary functionality for this is not yet - // implemented in the library, and since we are too close to the next - // release, we do not want to do such major destabilizing changes any - // more. This has been fixed in the meantime, though, and a better way to do - // things is explained in the step-18 example program. + // An important thing to realize is that we do this localization + // operation on all processes, not only the one that actually needs + // the data. This can't be avoided, however, with the communication + // model of MPI: MPI does not have a way to query data on another + // process, both sides have to initiate a communication at the same + // time. So even though most of the processes do not need the + // localized solution, we have to place the statement converting the + // distributed into a localized vector so that all processes execute + // it. + // + // (Part of this work could in fact be avoided. What we do is + // send the local parts of all processes to all other processes. 
What we + // would really need to do is to initiate an operation on all processes + // where each process simply sends its local chunk of data to process + // zero, since this is the only one that actually needs it, i.e., we need + // something like a gather operation. PETSc can do this, but for + // simplicity's sake we don't attempt to make use of this here. We don't, + // since what we do is not very expensive in the grand scheme of things: + // it is one vector communication among all processes , which has to be + // compared to the number of communications we have to do when solving the + // linear system, setting up the block-ILU for the preconditioner, and + // other operations.) template void ElasticProblem::output_results (const unsigned int cycle) const { - // One point to realize is that when we want to generate output on process - // zero only, we need to have access to all elements of the solution - // vector. So we need to get a local copy of the distributed vector, which - // is in fact simple: const Vector localized_solution (solution); - // The thing to notice, however, is that we do this localization operation - // on all processes, not only the one that actually needs the data. This - // can't be avoided, however, with the communication model of MPI: MPI - // does not have a way to query data on another process, both sides have - // to initiate a communication at the same time. So even though most of - // the processes do not need the localized solution, we have to place the - // call here so that all processes execute it. - // - // (In reality, part of this work can in fact be avoided. What we do is - // send the local parts of all processes to all other processes. What we - // would really need to do is to initiate an operation on all processes - // where each process simply sends its local chunk of data to process - // zero, since this is the only one that actually needs it, i.e. we need - // something like a gather operation. PETSc can do this, but for - // simplicity's sake we don't attempt to make use of this here. We don't, - // since what we do is not very expensive in the grand scheme of things: - // it is one vector communication among all processes , which has to be - // compared to the number of communications we have to do when solving the - // linear system, setting up the block-ILU for the preconditioner, and - // other operations.) - - // This being done, process zero goes ahead with setting up the output - // file as in step-8, and attaching the (localized) solution vector to the - // output object:. (The code to generate the output file name is stolen - // and slightly modified from step-5, since we expect that we can do a - // number of cycles greater than 10, which is the maximum of what the code - // in step-8 could handle.) + + // This being done, process zero goes ahead with setting up the + // output file as in step-8, and attaching the (localized) + // solution vector to the output object. (The code to generate the + // output file name is stolen and slightly modified from step-5, + // since we expect that we can do a number of cycles greater than + // 10, which is the maximum of what the code in step-8 could + // handle.) 
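// (Editorial aside, not part of the original patch: stripped down to bare
// MPI, the "gather to process zero" operation alluded to a few paragraphs
// above looks roughly like the following. The variable names here are
// invented for illustration, and mpi_communicator stands for the program's
// MPI communicator object; the receive buffer and the counts/displacements
// arrays only need to be sized correctly on process zero.
// @code
//   std::vector<double> local_part;     // locally owned entries, filled by each process
//   std::vector<int>    counts, displs; // per-process sizes and offsets, set up on process 0
//   std::vector<double> gathered;       // receive buffer, sized on process 0
//   MPI_Gatherv (local_part.data(),
//                static_cast<int>(local_part.size()), MPI_DOUBLE,
//                gathered.data(), counts.data(), displs.data(), MPI_DOUBLE,
//                0, mpi_communicator);
// @endcode
// PETSc and deal.II provide higher-level interfaces for the same kind of
// operation; the point is only that a gather sends data to one root
// process instead of duplicating it everywhere.)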
if (this_mpi_process == 0) { std::ostringstream filename; - filename << "solution-" << cycle << ".gmv"; + filename << "solution-" << cycle << ".vtk"; std::ofstream output (filename.str().c_str()); @@ -933,40 +978,40 @@ namespace Step17 data_out.add_data_vector (localized_solution, solution_names); - // The only thing we do here additionally is that we also output one - // value per cell indicating which subdomain (i.e. MPI process) it - // belongs to. This requires some conversion work, since the data the - // library provides us with is not the one the output class expects, - // but this is not difficult. First, set up a vector of integers, one - // per cell, that is then filled by the number of subdomain each cell - // is in: + // The only other thing we do here is that we also output one + // value per cell indicating which subdomain (i.e., MPI + // process) it belongs to. This requires some conversion work, + // since the data the library provides us with is not the one + // the output class expects, but this is not difficult. First, + // set up a vector of integers, one per cell, that is then + // filled by the subdomain id of each cell. + // + // The elements of this vector are then converted to a + // floating point vector in a second step, and this vector is + // added to the DataOut object, which then goes off creating + // output in VTK format: std::vector partition_int (triangulation.n_active_cells()); GridTools::get_subdomain_association (triangulation, partition_int); - // Then convert this integer vector into a floating point vector just - // as the output functions want to see: const Vector partitioning(partition_int.begin(), partition_int.end()); - // And finally add this vector as well: data_out.add_data_vector (partitioning, "partitioning"); - // This all being done, generate the intermediate format and write it - // out in GMV output format: data_out.build_patches (); - data_out.write_gmv (output); + data_out.write_vtk (output); } } // @sect4{ElasticProblem::run} - // Lastly, here is the driver function. It is almost completely unchanged from step-8, - // with the exception that we replace std::cout by the - // pcout stream. Apart from this, the only other cosmetic - // change is that we output how many degrees of freedom there are per - // process, and how many iterations it took for the linear solver to - // converge: + // Lastly, here is the driver function. It is almost completely + // unchanged from step-8, with the exception that we replace + // std::cout by the pcout stream. 
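// (Editorial aside, not part of the original patch: pcout is, in this
// program, a ConditionalOStream object whose declaration is not part of
// this diff. Such an object is typically set up as
// @code
//   ConditionalOStream pcout (std::cout,
//                             this_mpi_process == 0);
// @endcode
// that is, a stream that forwards everything it receives to std::cout,
// but only on process zero. This way, statements of the form
// "pcout << ..." can be written exactly like "std::cout << ..." without
// every process printing its own copy of the output.)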
Apart + // from this, the only other cosmetic change is that we output how + // many degrees of freedom there are per process, and how many + // iterations it took for the linear solver to converge: template void ElasticProblem::run () { @@ -1012,10 +1057,10 @@ namespace Step17 // @sect3{The main function} -// The main() works the same way as most of the -// main functions in the other example programs, i.e., it delegates work to the -// run function of a master object, and only wraps everything -// into some code to catch exceptions: +// The main() works the same way as most of the main +// functions in the other example programs, i.e., it delegates work to +// the run function of a master object, and only wraps +// everything into some code to catch exceptions: int main (int argc, char **argv) { try @@ -1023,9 +1068,10 @@ int main (int argc, char **argv) using namespace dealii; using namespace Step17; - // Here is the only real difference: MPI and PETSc both require that we initialize - // these libraries at the beginning of the program, and un-initialize them at the - // end. The class MPI_InitFinalize takes care of all of that. + // Here is the only real difference: MPI and PETSc both require + // that we initialize these libraries at the beginning of the + // program, and un-initialize them at the end. The class + // MPI_InitFinalize takes care of all of that. Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1); ElasticProblem<2> elastic_problem; -- 2.39.5