From: Wolfgang Bangerth Date: Mon, 28 Dec 2015 21:42:21 +0000 (-0600) Subject: Document some common MPI terms and link to them. X-Git-Tag: v8.4.0-rc2~125^2 X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=refs%2Fpull%2F2021%2Fhead;p=dealii.git Document some common MPI terms and link to them. --- diff --git a/doc/doxygen/headers/glossary.h b/doc/doxygen/headers/glossary.h index 599d8a9613..fd2adb68dd 100644 --- a/doc/doxygen/headers/glossary.h +++ b/doc/doxygen/headers/glossary.h @@ -1159,6 +1159,82 @@ * * * + *
@anchor GlossMPICommunicator MPI Communicator
+ *
+ * In the language of the Message Passing Interface (MPI), a communicator
+ * can be thought of as a mail system that allows sending messages to
+ * other members of the mail system. Within each communicator, each
+ * @ref GlossMPIProcess "process" has a
+ * @ref GlossMPIRank "rank" (the equivalent of a house number) that
+ * allows one to identify senders and receivers of messages. It is not
+ * possible to send messages via a communicator to receivers that are
+ * not part of this communicator/mail service.
+ *
+ * When starting a parallel program via a command line call such as
+ * @code
+ * mpirun -np 32 ./step-17
+ * @endcode
+ * (or the equivalent used by the batch submission system on your
+ * cluster), the MPI system starts 32 copies of the step-17 executable.
+ * Each of these has access to the MPI_COMM_WORLD communicator
+ * that then consists of all 32 processes, each with its own rank. A subset
+ * of processes within this MPI universe can later agree to create other
+ * communicators that allow communication among only those processes.
+ *
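+ * For illustration, here is a minimal sketch of this last point, using the
+ * standard MPI_Comm_split call (plain MPI, not a deal.II facility; the
+ * variable names are purely illustrative): the processes of
+ * MPI_COMM_WORLD agree to form two smaller communicators, within each of
+ * which the participating processes are renumbered starting from zero.
+ * @code
+ * #include <mpi.h>
+ * #include <cstdio>
+ *
+ * int main(int argc, char *argv[])
+ * {
+ *   MPI_Init(&argc, &argv);
+ *
+ *   int world_rank, world_size;
+ *   MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
+ *   MPI_Comm_size(MPI_COMM_WORLD, &world_size);
+ *
+ *   // Split the global "mail system" in two: even-ranked and odd-ranked
+ *   // processes each obtain their own communicator, within which they
+ *   // are renumbered starting from zero.
+ *   MPI_Comm sub_communicator;
+ *   const int color = world_rank % 2;
+ *   MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &sub_communicator);
+ *
+ *   int sub_rank;
+ *   MPI_Comm_rank(sub_communicator, &sub_rank);
+ *   std::printf("world rank %d of %d has rank %d in sub-communicator %d\n",
+ *               world_rank, world_size, sub_rank, color);
+ *
+ *   MPI_Comm_free(&sub_communicator);
+ *   MPI_Finalize();
+ * }
+ * @endcode
+ * Messages sent through such a sub-communicator can only reach the
+ * processes that are part of it.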
+ * + * + *
@anchor GlossMPIProcess MPI Process
+ *
+ * When running parallel jobs on distributed-memory machines, one
+ * almost always uses MPI. There, a command line call such as
+ * @code
+ * mpirun -np 32 ./step-17
+ * @endcode
+ * (or the equivalent used by the batch submission system on your
+ * cluster) starts 32 copies of the step-17 executable. Some of these may actually
+ * run on the same machine, but in general they will be running on different
+ * machines that do not have direct access to each other's memory space.
+ *
+ * In the language of the Message Passing Interface (MPI), each of these
+ * copies of the same executable running on (possibly different) machines
+ * is called a process. The collection of all processes running in
+ * parallel is called the "MPI Universe" and is identified by the
+ * @ref GlossMPICommunicator "MPI communicator" MPI_COMM_WORLD.
+ *
+ * Each process has immediate access only to the objects in its own
+ * memory space. A process cannot read from or write into the memory
+ * of other processes. As a consequence, the only way by which
+ * processes can communicate is by sending each other messages. That
+ * said (and as explained in the introduction to step-17), one
+ * typically calls higher-level MPI functions in which all processes
+ * that are part of a communicator participate. An example would
+ * be computing the sum over a set of integers where each process
+ * provides one term of the sum.
+ *
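+ * A minimal sketch of such a collective sum, written here with the plain
+ * MPI_Allreduce call rather than a deal.II wrapper (variable names are
+ * illustrative only): each process owns one term in its private memory,
+ * and the message exchange needed to form the total is hidden inside the
+ * collective call.
+ * @code
+ * #include <mpi.h>
+ * #include <cstdio>
+ *
+ * int main(int argc, char *argv[])
+ * {
+ *   MPI_Init(&argc, &argv);
+ *
+ *   int rank;
+ *   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
+ *
+ *   // Each process stores one term of the sum in its private memory...
+ *   const int my_term = rank + 1;
+ *
+ *   // ...and only a message exchange can combine the terms. MPI_Allreduce
+ *   // performs that exchange and leaves the result on every process.
+ *   int total = 0;
+ *   MPI_Allreduce(&my_term, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
+ *
+ *   if (rank == 0)
+ *     std::printf("sum over all processes: %d\n", total);
+ *
+ *   MPI_Finalize();
+ * }
+ * @endcode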
+ * + * + *
@anchor GlossMPIRank MPI Rank
+ *
+ * In the language of the Message Passing Interface (MPI), the rank
+ * of an @ref GlossMPIProcess "MPI process" is the number this process
+ * carries within the set MPI_COMM_WORLD of all processes
+ * currently running as one parallel job. More precisely, it is the
+ * number within an @ref GlossMPICommunicator "MPI communicator" that
+ * groups together a subset of all processes within one parallel job
+ * (where MPI_COMM_WORLD simply denotes the complete
+ * set of processes).
+ *
+ * Within each communicator, each process has a unique rank, distinct from
+ * all other processes' ranks, that allows
+ * identifying one recipient or sender in MPI communication calls. Each
+ * process, running on one processor, can inquire about its own rank
+ * within a communicator by calling Utilities::MPI::this_mpi_process().
+ * The total number of processes participating in a communicator (i.e.,
+ * the size of the communicator) can be obtained by calling
+ * Utilities::MPI::n_mpi_processes().
+ *
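+ * For illustration, a minimal program (assuming deal.II has been
+ * configured with MPI) in which every process reports its own rank and
+ * the size of MPI_COMM_WORLD could look like this:
+ * @code
+ * #include <deal.II/base/mpi.h>
+ * #include <iostream>
+ *
+ * int main(int argc, char *argv[])
+ * {
+ *   // Initializes MPI on construction and finalizes it on destruction;
+ *   // the last argument limits each MPI process to a single thread.
+ *   dealii::Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
+ *
+ *   // The rank is unique within the communicator and lies in the
+ *   // half-open range [0, number of processes).
+ *   const unsigned int my_rank =
+ *     dealii::Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);
+ *   const unsigned int n_ranks =
+ *     dealii::Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);
+ *
+ *   std::cout << "I am rank " << my_rank << " of " << n_ranks << std::endl;
+ * }
+ * @endcode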
+ * * *
@anchor mg_paper %Multigrid paper
*
The "multigrid paper" is a paper by B. Janssen and G. Kanschat, titled @@ -1299,8 +1375,8 @@ * While in principle this property * can be used in any way application programs deem useful (it is simply an * integer associated with each cell that can indicate whatever you want), at - * least for programs that run in %parallel it usually denotes the MPI rank - * (or number) of the processor that "owns" this cell. + * least for programs that run in %parallel it usually denotes the + * @ref GlossMPIRank "MPI rank" of the processor that "owns" this cell. * * For programs that are parallelized based on MPI but where each processor * stores the entire triangulation (as in, for example, step-17 and step-18, diff --git a/examples/step-17/doc/intro.dox b/examples/step-17/doc/intro.dox index 68acf8b55b..4b97145dc9 100644 --- a/examples/step-17/doc/intro.dox +++ b/examples/step-17/doc/intro.dox @@ -103,11 +103,13 @@ program via @endcode which means to run it on (say) 32 processors. (If you are on a cluster system, you typically need to schedule the program to run whenever 32 processors -become available; this is typically described in the documentation of your +become available; this will be described in the documentation of your cluster. But under the hood, whenever those processors become available, the same call as above will generally be executed.) What this does is that the MPI system will start 32 copies of the step-17 -executable. This may happen on different machines that can't even read +executable. (The MPI term for each of these running executables is that you +have 32 @ref GlossMPIProcess "MPI processes".) +This may happen on different machines that can't even read from each others' memory spaces, or it may happen on the same machine, but the end result is the same: each of these 32 copies will run with some memory allocated to it by the operating system, and it will not directly diff --git a/include/deal.II/base/mpi.h b/include/deal.II/base/mpi.h index 72cc8a3940..0783b6be24 100644 --- a/include/deal.II/base/mpi.h +++ b/include/deal.II/base/mpi.h @@ -65,13 +65,15 @@ namespace Utilities { /** * Return the number of MPI processes there exist in the given - * communicator object. If this is a sequential job, it returns 1. + * @ref GlossMPICommunicator "communicator" object. If this is + * a sequential job, it returns 1. */ unsigned int n_mpi_processes (const MPI_Comm &mpi_communicator); /** - * Return the number of the present MPI process in the space of processes - * described by the given communicator. This will be a unique value for + * Return the @ref GlossMPIRank "rank of the present MPI process" + * in the space of processes described by the given + * @ref GlossMPICommunicator "communicator". This will be a unique value for * each process between zero and (less than) the number of all processes * (given by get_n_mpi_processes()). */ @@ -83,8 +85,8 @@ namespace Utilities * processors. To do that, the other processors need to know who to expect * messages from. This function computes this information. * - * @param mpi_comm A communicator that describes the processors that are - * going to communicate with each other. + * @param mpi_comm A @ref GlossMPICommunicator "communicator" that describes + * the processors that are going to communicate with each other. * * @param destinations The list of processors the current process wants to * send information to. This list need not be sorted in any way. 
If it @@ -101,7 +103,8 @@ namespace Utilities const std::vector<unsigned int> &destinations); /** - * Given a communicator, generate a new communicator that contains the + * Given a @ref GlossMPICommunicator "communicator", generate a new + * communicator that contains the * same set of processors but that has a different, unique identifier. * * This functionality can be used to ensure that different objects, such @@ -115,7 +118,8 @@ namespace Utilities /** * Return the sum over all processors of the value @p t. This function is - * collective over all processors given in the communicator. If deal.II is + * collective over all processors given in the @ref GlossMPICommunicator "communicator". + * If deal.II is * not configured for use of MPI, this function simply returns the value * of @p t. This function corresponds to the MPI_Allreduce * function, i.e. all processors receive the result of this operation. @@ -198,7 +202,8 @@ namespace Utilities /** * Return the maximum over all processors of the value @p t. This function - * is collective over all processors given in the communicator. If deal.II + * is collective over all processors given in the + * @ref GlossMPICommunicator "communicator". If deal.II * is not configured for use of MPI, this function simply returns the * value of @p t. This function corresponds to the * MPI_Allreduce function, i.e. all processors receive the @@ -248,7 +253,8 @@ namespace Utilities /** * Return the minimum over all processors of the value @p t. This function - * is collective over all processors given in the communicator. If deal.II + * is collective over all processors given in the + * @ref GlossMPICommunicator "communicator". If deal.II * is not configured for use of MPI, this function simply returns the * value of @p t. This function corresponds to the * MPI_Allreduce function, i.e. all processors receive the @@ -314,8 +320,9 @@ namespace Utilities /** * Returns sum, average, minimum, maximum, processor id of minimum and - * maximum as a collective operation of on the given MPI communicator @p - * mpi_communicator . Each processor's value is given in @p my_value and + * maximum as a collective operation on the given MPI + * @ref GlossMPICommunicator "communicator" @p mpi_communicator. + * Each processor's value is given in @p my_value and * the result will be returned. The result is available on all machines. * * @note Sometimes, not all processors need a result and in that case one @@ -337,7 +344,7 @@ namespace Utilities * control the number of threads used in each MPI task. * * If deal.II is configured with PETSc, the library will also be - * initialized in the beginning and destructed at the end automatically + * initialized in the beginning and destroyed at the end automatically * (internally by calling PetscInitialize() and PetscFinalize()). * * If a program uses MPI one would typically just create an object of this class at the beginning of the program's main() function.
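For illustration, a small program combining the utilities documented above might look as follows. This is only a sketch, assuming a deal.II build with MPI enabled, and it uses Utilities::MPI::min_max_avg() together with Utilities::MPI::duplicate_communicator(); the variable names are illustrative.
@code
#include <deal.II/base/mpi.h>
#include <iostream>

int main(int argc, char *argv[])
{
  using namespace dealii;

  // Initializes MPI (and PETSc, if configured) and finalizes both when
  // this object goes out of scope at the end of main().
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  const unsigned int rank = Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);

  // Collective statistics over one value per process; the result is
  // available on all processes.
  const Utilities::MPI::MinMaxAvg stats =
    Utilities::MPI::min_max_avg(static_cast<double>(rank), MPI_COMM_WORLD);

  // A communicator with the same processes but a distinct identifier, so
  // that, for example, a library's messages cannot be confused with the
  // application's. The duplicate is freed once it is no longer needed.
  MPI_Comm duplicate = Utilities::MPI::duplicate_communicator(MPI_COMM_WORLD);
  MPI_Comm_free(&duplicate);

  if (rank == 0)
    std::cout << "max rank: " << stats.max
              << ", average rank: " << stats.avg << std::endl;
}
@endcode
The duplicated communicator serves exactly the purpose described above: it lets different objects or libraries exchange messages without interfering with communication on the original communicator.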