* </dd>
*
*
+ * <dt class="glossary">@anchor GlossCollectiveOperation <b>Collective operation</b></dt>
+ * <dd>
+ * When running programs in parallel using MPI, a <em>collective
+ * operation</em> is one in which all processes on an
+ * @ref GlossMPICommunicator "MPI communicator"
+ * have to participate. At its core, the concept of collective operations
+ * rests on the mental model that MPI traditionally uses, namely that
+ * processes communicate by sending messages to each other; in this model,
+ * nothing happens if one process sends a message but the receiving process
+ * does not expect or respond to it, or if one process needs access to a
+ * piece of data stored elsewhere, but the storing process does not send it.
+ *
+ * Collective operations are then operations that need to be called on all
+ * processes at the same time to execute. An obvious example is calling the
+ * `MPI_Allreduce` function with the `MPI_SUM` operation, in which every
+ * process provides a number that is then summed over all processes. If in
+ * a program running with 4 processes only three processes call
+ * `MPI_Allreduce`, the program will hang until the fourth
+ * process eventually also gets to the place where this function is called.
+ * If the fourth process never calls that function, for example because it
+ * calls another MPI function in the belief that the other processes called
+ * that function as well, a "deadlock" results: Every process is now waiting
+ * for something to happen that cannot happen.
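+ *
+ * As a minimal illustration (a plain MPI sketch, not code taken from the
+ * library), consider the following complete program: every process must
+ * reach the `MPI_Allreduce` call before any of them can get past it.
+ * @code
+ * #include <mpi.h>
+ *
+ * int main(int argc, char **argv)
+ * {
+ *   MPI_Init(&argc, &argv);
+ *
+ *   int rank = 0;
+ *   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
+ *
+ *   // Every process contributes one number; the sum is returned on all
+ *   // processes. The call is collective: it only completes once every
+ *   // process of MPI_COMM_WORLD has called it.
+ *   const double local_value = rank + 1.;
+ *   double       global_sum  = 0.;
+ *   MPI_Allreduce(&local_value, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
+ *                 MPI_COMM_WORLD);
+ *
+ *   // If the call above were guarded by, say, 'if (rank != 3)', then in a
+ *   // run with 4 processes the first three processes would wait forever
+ *   // for the fourth: a deadlock.
+ *
+ *   MPI_Finalize();
+ *   return 0;
+ * }
+ * @endcode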
+ *
+ * Many functions in deal.II are "collective operations" because internally
+ * they call MPI functions that are collective. For some, this is obvious,
+ * such as when you call
+ * parallel::distributed::Triangulation::execute_coarsening_and_refinement()
+ * in step-40, given that this function refines a mesh that is stored
+ * in parallel across all processes. In some other
+ * cases, the name of the function is a hint that a function is collective,
+ * such as in
+ * GridTools::distributed_compute_point_locations() or
+ * GridTools::build_global_description_tree(), where the "distributed"
+ * and "global" components of the names are an indication; the latter
+ * function also takes an explicit MPI communicator as argument. For yet
+ * other functions, it is perhaps not as obvious that they are
+ * collective operations. GridTools::volume() is an example; it is
+ * collective because each process computes the volume of those cells it
+ * locally owns, and these contributions then have to be added up.
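+ *
+ * The pattern underlying such a function is typically that each process
+ * first computes a local result from the cells it locally owns, and a
+ * collective reduction then combines the per-process contributions. A
+ * sketch of this pattern (not the actual implementation of
+ * GridTools::volume(), and assuming `triangulation` is a
+ * parallel::distributed::Triangulation) could look as follows:
+ * @code
+ * double local_volume = 0;
+ * for (const auto &cell : triangulation.active_cell_iterators())
+ *   if (cell->is_locally_owned())
+ *     local_volume += cell->measure();
+ *
+ * // The sum below is the collective step: every process of the
+ * // communicator has to reach it, or the program hangs.
+ * const double global_volume
+ *   = Utilities::MPI::sum(local_volume, triangulation.get_communicator());
+ * @endcode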
+ * </dd>
+ *
+ *
* <dt class="glossary">@anchor GlossColorization <b>Colorization</b></dt>
* <dd><em>Colorization</em> is the process of marking certain parts of a
* Triangulation with different labels. The use of the word <em>color</em>
/**
* Return sum, average, minimum, maximum, processor id of minimum and
- * maximum as a collective operation of on the given MPI
+ * maximum as a @ref GlossCollectiveOperation "collective operation" on the given MPI
* @ref GlossMPICommunicator "communicator"
* @p mpi_communicator. Each processor's value is given in @p my_value and
* the result will be returned. The result is available on all machines.
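+ *
+ * A usage sketch (assuming this is the overload of
+ * Utilities::MPI::min_max_avg() that takes a single number; the local
+ * variable names below are made up for illustration):
+ * @code
+ * const double my_value = some_locally_computed_number;
+ * const Utilities::MPI::MinMaxAvg stats
+ *   = Utilities::MPI::min_max_avg(my_value, mpi_communicator);
+ *
+ * // stats.sum, stats.avg, stats.min, stats.max as well as stats.min_index
+ * // and stats.max_index now have the same values on all processes.
+ * @endcode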
/**
* Same as above but returning the sum, average, minimum, maximum,
- * process id of minimum and maximum as a collective operation on the
+ * process id of minimum and maximum as a @ref GlossCollectiveOperation "collective operation" on the
* given MPI
* @ref GlossMPICommunicator "communicator"
* @p mpi_communicator for each entry of the vector.
/**
* Same as above but returning the sum, average, minimum, maximum,
- * process id of minimum and maximum as a collective operation on the
+ * process id of minimum and maximum as a @ref GlossCollectiveOperation "collective operation" on the
* given MPI
* @ref GlossMPICommunicator "communicator"
* @p mpi_communicator for each entry of the ArrayView.
* owned indices, these indices will be treated correctly and the rank of
* this process is returned for those entries.
*
- * @note This is a collective operation: all processes within the given
+ * @note This is a @ref GlossCollectiveOperation "collective operation": all processes within the given
* communicator have to call this function. Since this function does not
* use MPI_Alltoall or MPI_Allgather, but instead uses non-blocking
 * point-to-point communication, and only a single non-blocking
* Compute the union of the input vectors @p vec of all processes in the
* MPI communicator @p comm.
*
- * @note This is a collective operation. The result will available on all
+ * @note This is a @ref GlossCollectiveOperation "collective operation". The result will be available on all
* processes.
*/
template <typename T>
memory_consumption_p4est() const;
/**
- * A collective operation that produces a sequence of output files with
+ * A @ref GlossCollectiveOperation "collective operation" that produces a sequence of output files with
* the given file base name that contain the mesh in VTK format.
*
* More than anything else, this function is useful for debugging the
* In a serial execution the first three elements of the tuple are the same
* as in GridTools::compute_point_locations .
*
- * Note: this function is a collective operation.
+ * @note This function is a @ref GlossCollectiveOperation "collective operation".
*
* @note The actual return type of this function, i.e., the type referenced
* above as @p return_type, is
const MPI_Comm mpi_communicator);
/**
- * In this collective operation each process provides a vector
+ * In this @ref GlossCollectiveOperation "collective operation" each process provides a vector
* of bounding boxes and a communicator.
* All these vectors are gathered on each of the processes,
* organized in a search tree, and then returned.
* the second being the rank of the process for which at least some
* of the locally owned cells overlap with the bounding box.
*
- * @note This function is a collective operation.
+ * @note This function is a @ref GlossCollectiveOperation "collective operation".
*/
template <int spacedim>
RTree<std::pair<BoundingBox<spacedim>, unsigned int>>
* Constructing or updating the rtree requires a call to
* GridTools::build_global_description_tree(), which exchanges
* bounding boxes between all processes using
- * Utilities::MPI::all_gather(), a collective operation.
+ * Utilities::MPI::all_gather(), a @ref GlossCollectiveOperation "collective operation".
* Therefore this function must be called by all processes
* at the same time.
*
*
* This function also works for objects of type
* parallel::distributed::Triangulation, in which case the function is a
- * collective operation.
+ * @ref GlossCollectiveOperation "collective operation".
*
* @param tria The triangulation.
* @return The dim-dimensional measure of the domain described by the
*
* This function also works for objects of type
* parallel::distributed::Triangulation, in which case the function is a
- * collective operation.
+ * @ref GlossCollectiveOperation "collective operation".
*
* @param tria The triangulation.
* @param mapping The Mapping which computes the Jacobians used to
OutputIterator values_begin) const;
/**
* Return whether the vector contains only elements with value zero.
- * This is a collective operation. This function is expensive, because
+ * This is a @ref GlossCollectiveOperation "collective operation". This function is expensive, because
* potentially all elements have to be checked.
*/
bool
/**
* It is not safe to elide additions of zeros to individual elements
* of this matrix. The reason is that additions to the matrix may
- * trigger collective operations synchronizing buffers on multiple
+ * trigger @ref GlossCollectiveOperation "collective operations" synchronizing buffers on multiple
* processes. If an addition is elided on one process, this may lead
* to other processes hanging in an infinite waiting loop.
*/
* @copydoc PETScWrappers::VectorBase::all_zero()
*
* @note This function overloads the one in the base class to make this
- * a collective operation.
+ * a @ref GlossCollectiveOperation "collective operation".
*/
bool
all_zero() const;
/**
* Return whether the vector contains only elements with value zero. This
- * is a collective operation. This function is expensive, because
+ * is a @ref GlossCollectiveOperation "collective operation". This function is expensive, because
* potentially all elements have to be checked.
*/
bool
* internal arrays, in order to be able to relay global indices into the
* matrix to indices into the subobjects. You *must* call this function
* each time after you have changed the size of the sub-objects. Note that
- * this is a collective operation, i.e., it needs to be called on all MPI
+ * this is a @ref GlossCollectiveOperation "collective operation", i.e., it needs to be called on all MPI
* processes. This command internally calls the method
* <tt>compress()</tt>, so you don't need to call that function in case
* you use <tt>collect_sizes()</tt>.
* holds the sparsity_pattern structure because each processor sets its
* rows.
*
- * This is a collective operation that needs to be called on all
+ * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all
 * processors in order to avoid a deadlock.
*/
template <typename SparsityPatternType>
* This function reinitializes the Trilinos sparse matrix from a (possibly
* distributed) Trilinos sparsity pattern.
*
- * This is a collective operation that needs to be called on all
+ * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all
 * processors in order to avoid a deadlock.
*
* If you want to write to the matrix from several threads and use MPI,
* matrix. The values are not copied, but you can use copy_from() for
* this.
*
- * This is a collective operation that needs to be called on all
+ * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all
 * processors in order to avoid a deadlock.
*/
void
* sparsity structure of the input matrix should be used or the matrix
* entries should be copied, too.
*
- * This is a collective operation that needs to be called on all
+ * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all
* processors in order to avoid a deadlock.
*
* @note If a different sparsity pattern is given in the last argument
* processor just sets the elements in the sparsity pattern that belong to
* its rows.
*
- * This is a collective operation that needs to be called on all
+ * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all
 * processors in order to avoid a deadlock.
*/
template <typename SparsityPatternType>
* only implemented for input sparsity patterns of type
* DynamicSparsityPattern.
*
- * This is a collective operation that needs to be called on all
+ * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all
 * processors in order to avoid a deadlock.
*/
template <typename SparsityPatternType>
* sparsity structure of the input matrix should be used or the matrix
* entries should be copied, too.
*
- * This is a collective operation that needs to be called on all
+ * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all
 * processors in order to avoid a deadlock.
*/
template <typename number>
* sparsity structure of the input matrix should be used or the matrix
* entries should be copied, too.
*
- * This is a collective operation that needs to be called on all
+ * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all
 * processors in order to avoid a deadlock.
*/
template <typename number>
* Release all memory and return to a state just like after having called
* the default constructor.
*
- * This is a collective operation that needs to be called on all
+ * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all
 * processors in order to avoid a deadlock.
*/
void
*
* In both cases, this function compresses the data structures and allows
* the resulting matrix to be used in all other operations like
- * matrix-vector products. This is a collective operation, i.e., it needs to
+ * matrix-vector products. This is a @ref GlossCollectiveOperation "collective operation", i.e., it needs to
* be run on all processors when used in %parallel.
*
* See
* Release all memory and return to a state just like after having called
* the default constructor.
*
- * This is a collective operation that needs to be called on all
+ * This is a @ref GlossCollectiveOperation "collective operation" that needs to be called on all
 * processors in order to avoid a deadlock.
*/
void
* actually generating a (Trilinos-based) matrix. This function also
* exchanges non-local data that might have accumulated during the
* addition of new elements. This function must therefore be called once
- * the structure is fixed. This is a collective operation, i.e., it needs
+ * the structure is fixed. This is a @ref GlossCollectiveOperation "collective operation", i.e., it needs
* to be run on all processors when used in parallel.
*/
void
/**
* Return whether the vector contains only elements with value zero. This
- * is a collective operation. This function is expensive, because
+ * is a @ref GlossCollectiveOperation "collective operation". This function is expensive, because
* potentially all elements have to be checked.
*/
bool
* the same time. This means that unless you use a split MPI communicator
* then it is not normally possible for only one or a subset of processes
* to obtain a copy of a parallel vector while the other jobs do something
- * else. In other words, calling this function is a 'collective operation'
+ * else. In other words, calling this function is a @ref GlossCollectiveOperation "collective operation"
* that needs to be executed by all MPI processes that jointly share @p v.
*/
explicit Vector(const TrilinosWrappers::MPI::Vector &v);
* the same time. This means that unless you use a split MPI communicator
* then it is not normally possible for only one or a subset of processes
* to obtain a copy of a parallel vector while the other jobs do something
- * else. In other words, calling this function is a 'collective operation'
+ * else. In other words, calling this function is a @ref GlossCollectiveOperation "collective operation"
* that needs to be executed by all MPI processes that jointly share @p v.
*/
Vector<Number> &