From 28a28be0fce8af3e7c5dfb1a35c2b1705c4dd91b Mon Sep 17 00:00:00 2001
From: Wolfgang Bangerth
Date: Fri, 8 Jan 2016 17:37:00 -0600
Subject: [PATCH] Address all but one point in the review of the previous commits.

---
 examples/step-17/doc/results.dox | 13 ++++++-------
 examples/step-17/step-17.cc      | 12 ++++++------
 2 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/examples/step-17/doc/results.dox b/examples/step-17/doc/results.dox
index b8166f7124..b08b160e54 100644
--- a/examples/step-17/doc/results.dox
+++ b/examples/step-17/doc/results.dox
@@ -11,10 +11,10 @@ most basic way to run MPI programs is using a command line like
 @endcode
 to run the step-17 executable with 32 processors.
 
-(If you work on a cluster,
-there is typically a step in between where you need to set up a job script,
-submit the script to a scheduler that then executes the script whenever it
-can allocate 32 unused processors for your job. How to write such job
+(If you work on a cluster, then there is typically a step in between where you
+need to set up a job script and submit the script to a scheduler. The scheduler
+will execute the script whenever it can allocate 32 unused processors for your
+job. How to write such job
 scripts differs from cluster to cluster, and you should find the documentation
 of your cluster to see how to do this. On my system, I have to use the command
 qsub with a whole host of options to run a job in parallel.)
@@ -180,7 +180,7 @@ though.
 
 
 Finally, here are some results for a 3d simulation. You can repeat these by
-first changing
+changing
 @code
   ElasticProblem<2> elastic_problem;
 @endcode
@@ -188,8 +188,7 @@ to
 @code
   ElasticProblem<3> elastic_problem;
 @endcode
-in the main function, and then in the Makefile, change the reference to the 2d
-libraries to their 3d counterparts. If you then run the program in parallel,
+in the main function. If you then run the program in parallel,
 you get something similar to this (this is for a job with 16 processes):
 @code
 Cycle 0:
diff --git a/examples/step-17/step-17.cc b/examples/step-17/step-17.cc
index ebdebd1e23..70f148db49 100644
--- a/examples/step-17/step-17.cc
+++ b/examples/step-17/step-17.cc
@@ -329,10 +329,10 @@ namespace Step17
     //
     // The final step of this initial setup is that we get ourselves a
     // variable that indicates how many degrees of freedom the current
-    // process is responsible for. (This will, in general, be less than
-    // fe.dofs_per_cell times the number of cells the
-    // current process owns because some degrees of freedom live on
-    // interfaces between subdomains, and are consequently only owned by
+    // process is responsible for. (Note that a degree of freedom is not
+    // necessarily owned by the process that owns a cell just because
+    // the degree of freedom lives on this cell: some degrees of freedom
+    // live on interfaces between subdomains, and are consequently only owned by
     // one of the processes adjacent to this interface.)
     //
     // Before we move on, let us recall a fact already discussed in the
@@ -345,7 +345,7 @@ namespace Step17
     // degrees of freedom that live on each cell, whether it is one that
     // the current process owns or not. This can not scale to large
     // problems because eventually just storing on every process the
-    // entire mesh and everything that is associated with it, will
+    // entire mesh, and everything that is associated with it, will
     // become infeasible if the problem is large enough. On the other
     // hand, if we split the triangulation into parts so that every
     // process stores only those cells it "owns" but nothing else (or,
@@ -429,7 +429,7 @@ namespace Step17
     // the local contributions into the global matrix or right hand side
     // vector, we have to transfer these entries to the process that
     // owns these elements. Fortunately, we don't have to do this by
-    // hand, PETSc does all this for us by caching these elements
+    // hand: PETSc does all this for us by caching these elements
     // locally, and sending them to the other processes as necessary
     // when we call the compress() functions on the matrix
     // and vector at the end of this function.
-- 
2.39.5
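
The rewritten comment in the first step-17.cc hunk is about the number of degrees of
freedom each process is responsible for. Below is a minimal, self-contained sketch of
how that number can be obtained with the deal.II calls step-17 itself relies on
(GridTools::partition_triangulation and DoFTools::count_dofs_with_subdomain_association).
The mesh setup, the 2d/Q1 element, and the refinement level are only illustrative, and
the snippet assumes a deal.II build with MPI (and METIS for the partitioning call); it is
a sketch of the idea, not the tutorial's own code.
@code
#include <deal.II/base/mpi.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/tria.h>

#include <iostream>

int main(int argc, char **argv)
{
  using namespace dealii;

  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
  const unsigned int n_mpi_processes =
    Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);
  const unsigned int this_mpi_process =
    Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);

  // As in step-17, every process keeps a copy of the whole triangulation;
  // partitioning only marks which process "owns" which cell.
  Triangulation<2> triangulation;
  GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(4);
  GridTools::partition_triangulation(n_mpi_processes, triangulation);

  FE_Q<2>       fe(1);
  DoFHandler<2> dof_handler(triangulation);
  dof_handler.distribute_dofs(fe);

  // The quantity the revised comment describes: how many degrees of freedom
  // the current process is responsible for. This is not fe.dofs_per_cell
  // times the number of owned cells, because a degree of freedom on an
  // interface between subdomains is owned by only one of the adjacent
  // processes.
  const unsigned int n_local_dofs =
    DoFTools::count_dofs_with_subdomain_association(dof_handler,
                                                    this_mpi_process);

  std::cout << "Process " << this_mpi_process << " owns " << n_local_dofs
            << " of " << dof_handler.n_dofs() << " degrees of freedom."
            << std::endl;
}
@endcode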
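
The last hunk concerns how local contributions end up in the global PETSc objects. The
following rough sketch shows that assemble-then-compress pattern; the function names
copy_local_to_global and finish_assembly are invented for illustration, the header paths
follow recent deal.II releases, and the element-by-element add is just one way to write
the copy step. Only the compress() calls at the end move cached entries between processes.
@code
#include <deal.II/base/types.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/vector.h>
#include <deal.II/lac/vector_operation.h>

#include <vector>

using namespace dealii;

// Called once per cell after the cell matrix and right hand side have been
// computed. Some of the rows indexed here may be owned by another process;
// PETSc simply caches such entries locally for the time being.
void copy_local_to_global(
  const FullMatrix<double>                   &cell_matrix,
  const Vector<double>                       &cell_rhs,
  const std::vector<types::global_dof_index> &local_dof_indices,
  PETScWrappers::MPI::SparseMatrix           &system_matrix,
  PETScWrappers::MPI::Vector                 &system_rhs)
{
  for (unsigned int i = 0; i < local_dof_indices.size(); ++i)
    {
      for (unsigned int j = 0; j < local_dof_indices.size(); ++j)
        system_matrix.add(local_dof_indices[i],
                          local_dof_indices[j],
                          cell_matrix(i, j));
      system_rhs(local_dof_indices[i]) += cell_rhs(i);
    }
}

// Called once at the end of assembly. Only here are the cached entries
// shipped to the processes that actually own the corresponding rows.
void finish_assembly(PETScWrappers::MPI::SparseMatrix &system_matrix,
                     PETScWrappers::MPI::Vector       &system_rhs)
{
  system_matrix.compress(VectorOperation::add);
  system_rhs.compress(VectorOperation::add);
}
@endcode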