From: Wolfgang Bangerth
Date: Mon, 11 Jan 2016 01:00:25 +0000 (-0600)
Subject: Address comments in the second round of reviews.
X-Git-Tag: v8.4.0-rc2~101^2
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=c7cde339129f2ddc9e23794de4484d235483e2f0;p=dealii.git

Address comments in the second round of reviews.
---

diff --git a/examples/step-17/doc/intro.dox b/examples/step-17/doc/intro.dox
index 8a0d51c52f..6e0156bde2 100644
--- a/examples/step-17/doc/intro.dox
+++ b/examples/step-17/doc/intro.dox
@@ -98,7 +98,7 @@ we will show how to do this in step-40 and some other programs. There are
 numerous occasions where, in the course of discussing how a function of this
 program works, we will comment on the fact that it will not scale to large
 problems and why not. All of these issues will be addressed in step-18 and
-in particular step-40, which scales to very large number of processes.
+in particular step-40, which scales to very large numbers of processes.
 
 Philosophically, the way MPI operates is as follows. You typically run a
 program via
diff --git a/examples/step-17/doc/results.dox b/examples/step-17/doc/results.dox
index b08b160e54..4da8e77bd8 100644
--- a/examples/step-17/doc/results.dox
+++ b/examples/step-17/doc/results.dox
@@ -9,7 +9,7 @@ most basic way to run MPI programs is using a command line like
 @code
   mpirun -np 32 ./step-17
 @endcode
-to run the step-17 executable with 32 processors. 
+to run the step-17 executable with 32 processors.
 (If you work on a cluster, then there is typically a step in between where
 you need to set up a job script and submit the script to a scheduler. The
 scheduler
diff --git a/examples/step-17/step-17.cc b/examples/step-17/step-17.cc
index 0c80ccc2eb..4bc646b6d3 100644
--- a/examples/step-17/step-17.cc
+++ b/examples/step-17/step-17.cc
@@ -124,7 +124,7 @@ namespace Step17
     // Then we have two variables that tell us where in the parallel
     // world we are. The first of the following variables,
-    // n_mpi_processes tells us how many MPI processes
+    // n_mpi_processes, tells us how many MPI processes
     // there exist in total, while the second one,
     // this_mpi_process, indicates which is the number of
     // the present process within this space of processes (in MPI
@@ -304,7 +304,7 @@
     // GridTools::partition_triangulation() function that does this at a
     // much higher level of programming.
     //
-    // @note As mentioned in the introduction, we can avoid this manual
+    // @note As mentioned in the introduction, we could avoid this manual
     // partitioning step if we used the parallel::shared::Triangulation
     // class for the triangulation object instead (as we do in step-18).
     // That class does, in essence, everything a regular triangulation
@@ -327,7 +327,7 @@
     // The final step of this initial setup is that we get ourselves a
     // variable that indicates how many degrees of freedom the current
     // process is responsible for. (Note that a degree of freedom is not
-    // necessarily owned by the process that owns a cells just because
+    // necessarily owned by the process that owns a cell just because
     // the degree of freedom lives on this cell: some degrees of freedom
     // live on interfaces between subdomains, and are consequently only owned by
     // one of the processes adjacent to this interface.)
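
The step-17.cc hunk at @@ -124 documents the n_mpi_processes and
this_mpi_process variables. As a minimal, hypothetical stand-alone sketch
(not part of this patch), the same two numbers can be obtained directly
from the deal.II MPI utilities that step-17 wraps:
@code
#include <deal.II/base/mpi.h>

#include <iostream>

int main(int argc, char *argv[])
{
  // Initialize MPI; it is finalized automatically when this object
  // goes out of scope at the end of main().
  dealii::Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv);

  const MPI_Comm mpi_communicator = MPI_COMM_WORLD;

  // How many MPI processes exist in total (e.g., 32 when started as
  // "mpirun -np 32 ./step-17")...
  const unsigned int n_mpi_processes =
    dealii::Utilities::MPI::n_mpi_processes(mpi_communicator);

  // ...and the number of the present process within this space of
  // processes (the MPI rank, here 0..31).
  const unsigned int this_mpi_process =
    dealii::Utilities::MPI::this_mpi_process(mpi_communicator);

  std::cout << "Process " << this_mpi_process << " of "
            << n_mpi_processes << std::endl;
}
@endcode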
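
Similarly, the hunks at @@ -304 and @@ -327 refer to the manual partitioning
step and to counting the degrees of freedom the current process is
responsible for. The following is a sketch only, assuming a deal.II build
configured with a graph partitioner such as METIS; the mesh and finite
element are placeholders, while GridTools::partition_triangulation() and
DoFTools::count_dofs_with_subdomain_association() are the functions the
surrounding step-17 code uses for exactly this purpose:
@code
#include <deal.II/base/mpi.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/tria.h>

using namespace dealii;

int main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv);

  const MPI_Comm     mpi_communicator = MPI_COMM_WORLD;
  const unsigned int n_mpi_processes =
    Utilities::MPI::n_mpi_processes(mpi_communicator);
  const unsigned int this_mpi_process =
    Utilities::MPI::this_mpi_process(mpi_communicator);

  // A small placeholder mesh, replicated on every process.
  Triangulation<2> triangulation;
  GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(4);

  // Assign every cell a subdomain id in [0, n_mpi_processes). This is
  // the manual step that parallel::shared::Triangulation would do for
  // us automatically, as the @note in the hunk above points out.
  GridTools::partition_triangulation(n_mpi_processes, triangulation);

  // Distribute degrees of freedom and count those associated with the
  // current subdomain. As the comment in the @@ -327 hunk notes, a
  // degree of freedom on an interface between subdomains is owned by
  // only one of the adjacent processes.
  FE_Q<2>       fe(1);
  DoFHandler<2> dof_handler(triangulation);
  dof_handler.distribute_dofs(fe);

  const unsigned int n_local_dofs =
    DoFTools::count_dofs_with_subdomain_association(dof_handler,
                                                    this_mpi_process);
  (void)n_local_dofs; // e.g., used to size the local part of vectors
}
@endcode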