From: Wolfgang Bangerth Date: Sun, 7 Jan 2024 18:31:22 +0000 (-0700) Subject: Document what a collective operation actually is. X-Git-Tag: relicensing~168^2 X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=653653c15c643ad978bee3597b3dffb3ab490ec1;p=dealii.git Document what a collective operation actually is. --- diff --git a/doc/doxygen/headers/glossary.h b/doc/doxygen/headers/glossary.h index 62767ab8f3..cde731c39a 100644 --- a/doc/doxygen/headers/glossary.h +++ b/doc/doxygen/headers/glossary.h @@ -499,6 +499,48 @@ * * * + *
<dt class="glossary">@anchor GlossCollectiveOperation <b>Collective operation</b></dt>
+ * <dd>
+ * When running programs in parallel using MPI, a <i>collective
+ * operation</i> is one in which all processes on an
+ * @ref GlossMPICommunicator "MPI communicator"
+ * have to participate. At its core, the concept of collective operations
+ * rests on the mental model that MPI traditionally uses, in which
+ * processes communicate by sending messages to each other. In this model,
+ * nothing happens if one process sends a message but the receiving process
+ * does not expect or respond to it, or if one process needs access to a
+ * piece of data stored elsewhere but the storing process does not send it.
+ *
+ * Collective operations, then, are operations that all processes must call
+ * at the same time in order for any of them to complete. An obvious
+ * example is a call to `MPI_Allreduce` with the `MPI_SUM` operation, in
+ * which every process provides a number and the numbers are then summed
+ * over all processes. If, in a program running with four processes, only
+ * three processes call `MPI_Allreduce`, the program will hang until the
+ * fourth process eventually also gets to the place where this function is
+ * called. If the fourth process never calls that function, for example
+ * because it calls a different MPI function in the belief that the other
+ * processes have called that function as well, a "deadlock" results: Every
+ * process is now waiting for something to happen that cannot happen.
+ *
+ * Many functions in deal.II are "collective operations" because internally
+ * they call MPI functions that are collective. For some, this is obvious,
+ * such as when you call
+ * parallel::distributed::Triangulation::execute_coarsening_and_refinement()
+ * in step-40, given that this function refines a mesh that is stored in a
+ * distributed way across all processes of a parallel computation. For
+ * other functions, the name is a hint that they are collective, as in
+ * GridTools::distributed_compute_point_locations() or
+ * GridTools::build_global_description_tree(), where the "distributed"
+ * and "global" parts of the names are an indication; the latter function
+ * also takes an explicit MPI communicator as argument. For yet other
+ * functions, it is perhaps less obvious that they are collective
+ * operations. GridTools::volume() is an example; it is collective because
+ * each process computes the volume of the cells it locally owns, and these
+ * contributions then have to be added up.
+ *
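+ * As a minimal sketch in plain MPI (deliberately not using any deal.II
+ * wrappers, and with error checking omitted), the following program shows
+ * such a collective sum; every process on the communicator has to reach
+ * the `MPI_Allreduce` call for any of them to get past it:
+ * @code
+ * #include <mpi.h>
+ *
+ * #include <cstdio>
+ *
+ * int main(int argc, char **argv)
+ * {
+ *   MPI_Init(&argc, &argv);
+ *
+ *   int rank = 0;
+ *   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
+ *
+ *   // Every process contributes one number...
+ *   double local_value = rank;
+ *   double global_sum  = 0.;
+ *
+ *   // ...and MPI_Allreduce with the MPI_SUM operation adds them up. The
+ *   // call is collective: if even one process on the communicator never
+ *   // reaches it, all other processes block here forever (a deadlock).
+ *   MPI_Allreduce(&local_value, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
+ *                 MPI_COMM_WORLD);
+ *
+ *   if (rank == 0)
+ *     std::printf("Sum over all ranks: %g\n", global_sum);
+ *
+ *   MPI_Finalize();
+ * }
+ * @endcode
+ * Within deal.II, the same reduction is available as Utilities::MPI::sum(),
+ * which is collective in exactly the same sense.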
+ * </dd>
+ *
 *
<dt class="glossary">@anchor GlossColorization <b>Colorization</b></dt>
* <dd>
Colorization is the process of marking certain parts of a * Triangulation with different labels. The use of the word color diff --git a/include/deal.II/base/mpi.h b/include/deal.II/base/mpi.h index 5cdb0d2621..bc0aab7339 100644 --- a/include/deal.II/base/mpi.h +++ b/include/deal.II/base/mpi.h @@ -988,7 +988,7 @@ namespace Utilities /** * Return sum, average, minimum, maximum, processor id of minimum and - * maximum as a collective operation of on the given MPI + * maximum as a @ref GlossCollectiveOperation "collective operation" on the given MPI * @ref GlossMPICommunicator "communicator" * @p mpi_communicator. Each processor's value is given in @p my_value and * the result will be returned. The result is available on all machines. @@ -1005,7 +1005,7 @@ namespace Utilities /** * Same as above but returning the sum, average, minimum, maximum, - * process id of minimum and maximum as a collective operation on the + * process id of minimum and maximum as a @ref GlossCollectiveOperation "collective operation" on the * given MPI * @ref GlossMPICommunicator "communicator" * @p mpi_communicator for each entry of the vector. @@ -1021,7 +1021,7 @@ namespace Utilities /** * Same as above but returning the sum, average, minimum, maximum, - * process id of minimum and maximum as a collective operation on the + * process id of minimum and maximum as a @ref GlossCollectiveOperation "collective operation" on the * given MPI * @ref GlossMPICommunicator "communicator" * @p mpi_communicator for each entry of the ArrayView. @@ -1525,7 +1525,7 @@ namespace Utilities * owned indices, these indices will be treated correctly and the rank of * this process is returned for those entries. * - * @note This is a collective operation: all processes within the given + * @note This is a @ref GlossCollectiveOperation "collective operation": all processes within the given * communicator have to call this function. Since this function does not * use MPI_Alltoall or MPI_Allgather, but instead uses non-blocking * point-to-point communication instead, and only a single non-blocking * @@ -1551,7 +1551,7 @@ namespace Utilities * Compute the union of the input vectors @p vec of all processes in the * MPI communicator @p comm. * - * @note This is a collective operation. The result will available on all + * @note This is a @ref GlossCollectiveOperation "collective operation". The result will be available on all * processes. */ template diff --git a/include/deal.II/distributed/tria.h b/include/deal.II/distributed/tria.h index a478dfb4f2..96856f4a42 100644 --- a/include/deal.II/distributed/tria.h +++ b/include/deal.II/distributed/tria.h @@ -579,7 +579,7 @@ namespace parallel memory_consumption_p4est() const; /** - * A collective operation that produces a sequence of output files with + * A @ref GlossCollectiveOperation "collective operation" that produces a sequence of output files with * the given file base name that contain the mesh in VTK format. * * More than anything else, this function is useful for debugging the diff --git a/include/deal.II/grid/grid_tools.h b/include/deal.II/grid/grid_tools.h index 66aad79dcb..04ff6ee300 100644 --- a/include/deal.II/grid/grid_tools.h +++ b/include/deal.II/grid/grid_tools.h @@ -681,7 +681,7 @@ namespace GridTools * In a serial execution the first three elements of the tuple are the same * as in GridTools::compute_point_locations . * - * Note: this function is a collective operation. + * Note: this function is a @ref GlossCollectiveOperation "collective operation".
* * @note The actual return type of this function, i.e., the type referenced * above as @p return_type, is @@ -2813,7 +2813,7 @@ namespace GridTools const MPI_Comm mpi_communicator); /** - * In this collective operation each process provides a vector + * In this @ref GlossCollectiveOperation "collective operation" each process provides a vector * of bounding boxes and a communicator. * All these vectors are gathered on each of the processes, * organized in a search tree, and then returned. @@ -2842,7 +2842,7 @@ namespace GridTools * the second being the rank of the process for which at least some * of the locally owned cells overlap with the bounding box. * - * @note This function is a collective operation. + * @note This function is a @ref GlossCollectiveOperation "collective operation". */ template RTree, unsigned int>> diff --git a/include/deal.II/grid/grid_tools_cache.h b/include/deal.II/grid/grid_tools_cache.h index 9a25521f58..3cef49f18b 100644 --- a/include/deal.II/grid/grid_tools_cache.h +++ b/include/deal.II/grid/grid_tools_cache.h @@ -205,7 +205,7 @@ namespace GridTools * Constructing or updating the rtree requires a call to * GridTools::build_global_description_tree(), which exchanges * bounding boxes between all processes using - * Utilities::MPI::all_gather(), a collective operation. + * Utilities::MPI::all_gather(), a @ref GlossCollectiveOperation "collective operation". * Therefore this function must be called by all processes * at the same time. * diff --git a/include/deal.II/grid/grid_tools_geometry.h b/include/deal.II/grid/grid_tools_geometry.h index 5c7879503a..8fd7263aa8 100644 --- a/include/deal.II/grid/grid_tools_geometry.h +++ b/include/deal.II/grid/grid_tools_geometry.h @@ -70,7 +70,7 @@ namespace GridTools * * This function also works for objects of type * parallel::distributed::Triangulation, in which case the function is a - * collective operation. + * @ref GlossCollectiveOperation "collective operation". * * @param tria The triangulation. * @return The dim-dimensional measure of the domain described by the @@ -96,7 +96,7 @@ namespace GridTools * * This function also works for objects of type * parallel::distributed::Triangulation, in which case the function is a - * collective operation. + * @ref GlossCollectiveOperation "collective operation". * * @param tria The triangulation. * @param mapping The Mapping which computes the Jacobians used to diff --git a/include/deal.II/lac/la_parallel_vector.h b/include/deal.II/lac/la_parallel_vector.h index 688c422686..8b39a745e9 100644 --- a/include/deal.II/lac/la_parallel_vector.h +++ b/include/deal.II/lac/la_parallel_vector.h @@ -1154,7 +1154,7 @@ namespace LinearAlgebra OutputIterator values_begin) const; /** * Return whether the vector contains only elements with value zero. - * This is a collective operation. This function is expensive, because + * This is a @ref GlossCollectiveOperation "collective operation". This function is expensive, because * potentially all elements have to be checked. */ bool diff --git a/include/deal.II/lac/petsc_sparse_matrix.h b/include/deal.II/lac/petsc_sparse_matrix.h index fea15af48b..cac676de2e 100644 --- a/include/deal.II/lac/petsc_sparse_matrix.h +++ b/include/deal.II/lac/petsc_sparse_matrix.h @@ -384,7 +384,7 @@ namespace PETScWrappers /** * It is not safe to elide additions of zeros to individual elements * of this matrix. 
The reason is that additions to the matrix may - * trigger collective operations synchronizing buffers on multiple + * trigger @ref GlossCollectiveOperation "collective operations" synchronizing buffers on multiple * processes. If an addition is elided on one process, this may lead * to other processes hanging in an infinite waiting loop. */ diff --git a/include/deal.II/lac/petsc_vector.h b/include/deal.II/lac/petsc_vector.h index dd383953c0..c76a7c0435 100644 --- a/include/deal.II/lac/petsc_vector.h +++ b/include/deal.II/lac/petsc_vector.h @@ -380,7 +380,7 @@ namespace PETScWrappers * @copydoc PETScWrappers::VectorBase::all_zero() * * @note This function overloads the one in the base class to make this - * a collective operation. + * a @ref GlossCollectiveOperation "collective operation". */ bool all_zero() const; diff --git a/include/deal.II/lac/petsc_vector_base.h b/include/deal.II/lac/petsc_vector_base.h index 1a25e0260f..1e7253ba69 100644 --- a/include/deal.II/lac/petsc_vector_base.h +++ b/include/deal.II/lac/petsc_vector_base.h @@ -625,7 +625,7 @@ namespace PETScWrappers /** * Return whether the vector contains only elements with value zero. This - * is a collective operation. This function is expensive, because + * is a @ref GlossCollectiveOperation "collective operation". This function is expensive, because * potentially all elements have to be checked. */ bool diff --git a/include/deal.II/lac/trilinos_block_sparse_matrix.h b/include/deal.II/lac/trilinos_block_sparse_matrix.h index 1ddbecb8ba..32df581377 100644 --- a/include/deal.II/lac/trilinos_block_sparse_matrix.h +++ b/include/deal.II/lac/trilinos_block_sparse_matrix.h @@ -207,7 +207,7 @@ namespace TrilinosWrappers * internal arrays, in order to be able to relay global indices into the * matrix to indices into the subobjects. You *must* call this function * each time after you have changed the size of the sub-objects. Note that - * this is a collective operation, i.e., it needs to be called on all MPI + * this is a @ref GlossCollectiveOperation "collective operation", i.e., it needs to be called on all MPI * processes. This command internally calls the method * compress(), so you don't need to call that function in case * you use collect_sizes(). diff --git a/include/deal.II/lac/trilinos_sparse_matrix.h b/include/deal.II/lac/trilinos_sparse_matrix.h index dfd90b522f..bf3f293206 100644 --- a/include/deal.II/lac/trilinos_sparse_matrix.h +++ b/include/deal.II/lac/trilinos_sparse_matrix.h @@ -666,7 +666,7 @@ namespace TrilinosWrappers * holds the sparsity_pattern structure because each processor sets its * rows. * - * This is a collective operation that needs to be called on all + * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all * processors in order to avoid a dead lock. */ template @@ -677,7 +677,7 @@ namespace TrilinosWrappers * This function reinitializes the Trilinos sparse matrix from a (possibly * distributed) Trilinos sparsity pattern. * - * This is a collective operation that needs to be called on all + * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all * processors in order to avoid a dead lock. * * If you want to write to the matrix from several threads and use MPI, @@ -693,7 +693,7 @@ namespace TrilinosWrappers * matrix. The values are not copied, but you can use copy_from() for * this. 
* - * This is a collective operation that needs to be called on all + * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all * processors in order to avoid a dead lock. */ void @@ -709,7 +709,7 @@ namespace TrilinosWrappers * sparsity structure of the input matrix should be used or the matrix * entries should be copied, too. * - * This is a collective operation that needs to be called on all + * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all * processors in order to avoid a deadlock. * * @note If a different sparsity pattern is given in the last argument @@ -821,7 +821,7 @@ namespace TrilinosWrappers * processor just sets the elements in the sparsity pattern that belong to * its rows. * - * This is a collective operation that needs to be called on all + * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all * processors in order to avoid a dead lock. */ template @@ -840,7 +840,7 @@ namespace TrilinosWrappers * only implemented for input sparsity patterns of type * DynamicSparsityPattern. * - * This is a collective operation that needs to be called on all + * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all * processors in order to avoid a dead lock. */ template @@ -865,7 +865,7 @@ namespace TrilinosWrappers * sparsity structure of the input matrix should be used or the matrix * entries should be copied, too. * - * This is a collective operation that needs to be called on all + * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all * processors in order to avoid a dead lock. */ template @@ -887,7 +887,7 @@ namespace TrilinosWrappers * sparsity structure of the input matrix should be used or the matrix * entries should be copied, too. * - * This is a collective operation that needs to be called on all + * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all * processors in order to avoid a dead lock. */ template @@ -1004,7 +1004,7 @@ namespace TrilinosWrappers * Release all memory and return to a state just like after having called * the default constructor. * - * This is a collective operation that needs to be called on all + * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all * processors in order to avoid a dead lock. */ void @@ -1030,7 +1030,7 @@ namespace TrilinosWrappers * * In both cases, this function compresses the data structures and allows * the resulting matrix to be used in all other operations like - * matrix-vector products. This is a collective operation, i.e., it needs to + * matrix-vector products. This is a @ref GlossCollectiveOperation "collective operation", i.e., it needs to * be run on all processors when used in %parallel. * * See diff --git a/include/deal.II/lac/trilinos_sparsity_pattern.h b/include/deal.II/lac/trilinos_sparsity_pattern.h index 0b219eec3a..d745d4491f 100644 --- a/include/deal.II/lac/trilinos_sparsity_pattern.h +++ b/include/deal.II/lac/trilinos_sparsity_pattern.h @@ -398,7 +398,7 @@ namespace TrilinosWrappers * Release all memory and return to a state just like after having called * the default constructor. * - * This is a collective operation that needs to be called on all + * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all * processors in order to avoid a dead lock. 
*/ void @@ -410,7 +410,7 @@ namespace TrilinosWrappers * actually generating a (Trilinos-based) matrix. This function also * exchanges non-local data that might have accumulated during the * addition of new elements. This function must therefore be called once - * the structure is fixed. This is a collective operation, i.e., it needs + * the structure is fixed. This is a @ref GlossCollectiveOperation "collective operation", i.e., it needs * to be run on all processors when used in parallel. */ void diff --git a/include/deal.II/lac/trilinos_vector.h b/include/deal.II/lac/trilinos_vector.h index fdf33ee866..440dacc539 100644 --- a/include/deal.II/lac/trilinos_vector.h +++ b/include/deal.II/lac/trilinos_vector.h @@ -921,7 +921,7 @@ namespace TrilinosWrappers /** * Return whether the vector contains only elements with value zero. This - * is a collective operation. This function is expensive, because + * is a @ref GlossCollectiveOperation "collective operation". This function is expensive, because * potentially all elements have to be checked. */ bool diff --git a/include/deal.II/lac/vector.h b/include/deal.II/lac/vector.h index 36b9fefc2b..92edce0202 100644 --- a/include/deal.II/lac/vector.h +++ b/include/deal.II/lac/vector.h @@ -228,7 +228,7 @@ public: * the same time. This means that unless you use a split MPI communicator * then it is not normally possible for only one or a subset of processes * to obtain a copy of a parallel vector while the other jobs do something - * else. In other words, calling this function is a 'collective operation' + * else. In other words, calling this function is a @ref GlossCollectiveOperation "collective operation" * that needs to be executed by all MPI processes that jointly share @p v. */ explicit Vector(const TrilinosWrappers::MPI::Vector &v); @@ -433,7 +433,7 @@ public: * the same time. This means that unless you use a split MPI communicator * then it is not normally possible for only one or a subset of processes * to obtain a copy of a parallel vector while the other jobs do something - * else. In other words, calling this function is a 'collective operation' + * else. In other words, calling this function is a @ref GlossCollectiveOperation "collective operation" * that needs to be executed by all MPI processes that jointly share @p v. */ Vector &