* fact, any linear data structure) among processors using MPI.
*
* The partitioner stores the global vector size and the locally owned
- * range as a half-open interval [@p lower, @p upper). Furthermore, it
- * includes a structure for the point-to-point communication patterns. It
- * allows the inclusion of ghost indices (i.e. indices that a current
- * processor needs to have access to, but are owned by another process)
- * through an IndexSet. In addition, it also stores the other processors'
- * ghost indices belonging to the current processor (see import_targets()),
- * which are the indices where other processors might require information
- * from. In a sense, these import indices form the dual of the ghost
- * indices. This information is gathered once when constructing the
- * partitioner, which obviates subsequent global communication steps when
- * exchanging data.
+ * range as a half-open interval [@p lower, @p upper) on each process.
+ * Furthermore, it includes a structure for the point-to-point communication
+ * patterns. It allows the inclusion of ghost indices (i.e., indices that
+ * the current processor needs to have access to but that are owned by
+ * another process) through an IndexSet. In addition, it also stores the
+ * other processors' ghost indices belonging to the current processor (see
+ * import_targets()), i.e., the locally owned indices for which other
+ * processors might require information. In a sense, these import indices
+ * form the dual of the ghost indices. This information is gathered once
+ * when constructing the partitioner, which obviates subsequent global
+ * communication steps when exchanging data.
*
* The figure below gives an example of index space $[0,74)$ being split
- * into four processes.
+ * into four parts that are each owned by one MPI process:
* @image html partitioner.png
- * Here process 0 will import 5 DoFs from process 1 (first pair of import
- * targets), which corresponds to the first 3 elements of its import
- * indices. Whereas process 2 will import 3 DoFs from process 0, which
- * corresponds to the first two elements of its import indices.
+ * The first row (above the thick black line) shows which process owns which
+ * elements. Below it, the next four lines indicate which elements of
+ * the overall array each processor wants to know about -- this is
+ * generally a superset of the locally owned elements, with the difference
+ * being what are called "ghost elements".
+ *
+ * To understand the remaining pieces of the figure (and this class),
+ * remember that in MPI, you can't just ask another process for data.
+ * (That's not quite true: There are mechanisms in newer MPI standards
+ * for this, but as a general rule it's true.) Rather, if you need
+ * information, you need to send another process a message; the other
+ * process needs to expect the message and respond as appropriate with
+ * a message of its own. In practice, it is therefore easier and faster
+ * if each process *already* knows what it will be asked for and, at
+ * the appropriate time, just sends that data. The remaining lines of
+ * information set up this kind of scheme.
+ *
+ * To this end, note that process 0 will want to know about five
+ * elements it does not own itself: those with indices 20, 21 (owned
+ * by process 1); and 40, 41, 43 (owned by process 2). Similar information
+ * can be obtained from the following lines. To satisfy this need for
+ * knowledge, it would therefore be quite useful if process 1 stored
+ * that, at the appropriate time, it will have to send elements 20, 21
+ * to process 0. Similarly, if you go through lines 2-4 below the thick
+ * black line, you will see that process 0 should know that it will need
+ * to send elements 1, 2, 13, 18, 19 to process 1; 18, 19 to process 2;
+ * and elements 1, 2, 13 to process 3. These are called "import indices"
+ * because other processes will want to import them. Instead of storing
+ * these indices one by one, it is often useful to store them as
+ * half-open intervals, and so the import indices listed above form the
+ * following collection of intervals: `[1,3)`, `[13,14)`, `[18,20)`,
+ * `[18,20)`, `[1,3)`, `[13,14)`. This is how the import indices are shown
+ * in the figure above. We now only have to know which of these
+ * half-open intervals are to be sent to whom. This is done in the line
+ * above it where we list the "import targets" (i.e., the target
+ * processes for an import operation): Process 1 will receive 5
+ * elements (comprising the first three half-open intervals),
+ * process 2 will receive 2 elements (the fourth half-open
+ * interval), and process 3 will receive 3 elements (the remaining
+ * two half-open intervals). This information is encoded as the
+ * pairs `{1,5}`, `{2,2}`, `{3,3}` that form the import targets.
+ * Similar considerations can be made for what processes 1, 2,
+ * and 3 will have to send out.
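+ *
+ * In code form, and purely for illustration (this is not how the class
+ * stores these data internally), the import information of process 0
+ * discussed above could be written down as follows:
+ * @code
+ * // Half-open intervals of locally owned indices that process 0 will
+ * // have to send out:
+ * //   to process 1: [1,3), [13,14), [18,20)  (5 elements)
+ * //   to process 2: [18,20)                  (2 elements)
+ * //   to process 3: [1,3), [13,14)           (3 elements)
+ * std::vector<std::pair<unsigned int, unsigned int>> import_indices =
+ *   {{1, 3}, {13, 14}, {18, 20}, {18, 20}, {1, 3}, {13, 14}};
+ *
+ * // The import targets: one pair (rank, number of elements) per
+ * // process that will import data from process 0:
+ * std::vector<std::pair<unsigned int, unsigned int>> import_targets =
+ *   {{1, 5}, {2, 2}, {3, 3}};
+ * @endcode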
+ *
+ * Finally, when receiving information, it is useful to know how
+ * many elements each process will receive and from whom, since one
+ * can then pre-allocate receive buffers of the right size. This is
+ * listed in the last line under "ghost targets": Process 0 will receive
+ * two elements from process 1 (namely those with indices 20, 21),
+ * and three from process 2 (namely those with indices 40, 41, 43).
+ * This is encoded as pairs `{1,2}` and `{2,3}`. Again, similar
+ * considerations can be made for what processes 1, 2, and 3
+ * should expect, and what is then shown in their respective columns.
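+ *
+ * In the same illustrative code form, the ghost targets of process 0
+ * would read:
+ * @code
+ * // One pair (rank, number of ghost elements received from that
+ * // rank) per owning process:
+ * std::vector<std::pair<unsigned int, unsigned int>> ghost_targets =
+ *   {{1, 2}, {2, 3}};
+ * @endcode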
+ *
+ * The main purpose of this class is to set up these data structures
+ * knowing only which process owns which elements, and which
+ * additional ghost elements each process needs to know about.
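+ *
+ * In code, the setup for process 0 of the figure above might look as
+ * follows. (This is only a sketch: it assumes, for concreteness, that
+ * process 0 owns the contiguous index range `[0,20)`, which is
+ * consistent with the element ownership discussed above.)
+ * @code
+ * // The elements of the global index space [0,74) owned by process 0:
+ * IndexSet locally_owned(74);
+ * locally_owned.add_range(0, 20);
+ *
+ * // The ghost indices process 0 wants access to, owned by processes
+ * // 1 and 2:
+ * IndexSet ghosts(74);
+ * ghosts.add_range(20, 22);  // indices 20 and 21
+ * ghosts.add_index(40);
+ * ghosts.add_index(41);
+ * ghosts.add_index(43);
+ *
+ * // From this information alone, the partitioner derives the import
+ * // indices, import targets, and ghost targets discussed above:
+ * Utilities::MPI::Partitioner partitioner(locally_owned,
+ *                                         ghosts,
+ *                                         MPI_COMM_WORLD);
+ * @endcode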
+ *
+ *
+ * <h4>Local and global numbering</h4>
*
* The partitioner includes a mechanism for converting global to local and
* local to global indices. Internally, this class stores vector elements
- * consecutively in [@p local_size, @p local_size + @p n_ghost_indices).
+ * using the following convention: the local range is associated with
+ * local numbers [0, @p local_size), and the ghost indices are mapped
+ * consecutively to [@p local_size, @p local_size + @p n_ghost_indices).
* The ghost indices are sorted according to their global index.
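+ *
+ * For example, for the process 0 partitioner sketched above -- 20
+ * locally owned elements plus the ghost indices 20, 21, 40, 41, 43,
+ * sorted by global index -- this convention implies:
+ * @code
+ * partitioner.global_to_local(13);  // returns 13: locally owned
+ * partitioner.global_to_local(40);  // returns 22: the third ghost
+ *                                   // index, stored behind the 20
+ *                                   // locally owned elements
+ * partitioner.local_to_global(24);  // returns 43: the last ghost index
+ * @endcode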
*
+ *
* <h4>Parallel data exchange</h4>
*
- * This class handles the main ghost data exchange of
- * LinearAlgebra::distributed::Vector through four functions:
+ * This class also handles the ghost data exchange for partitioned
+ * arrays of objects -- i.e., if a separate class stores the
+ * locally owned elements on each process, then this class
+ * facilitates the import of those elements that are locally
+ * needed as ghosts but stored elsewhere. An example of where this class
+ * is used is the LinearAlgebra::distributed::Vector class.
+ *
+ * The data exchange happens through four functions:
* <ul>
* <li> export_to_ghosted_array_start() is used for initiating an export
* operation that sends data from the locally owned data field, passed as
- * an array, to a ghost data array according to the ghost indices stored
- * in the present class. This call starts non-blocking MPI communication
+ * an array, to the ghost data arrays of other processes (according to the
+ * ghost indices stored in the present class).
+ * This call starts non-blocking MPI send communication
* routines, but does not wait for the routines to finish. Thus, the user
* may not write into the respective positions of the underlying arrays as
* the data might still be needed by MPI.
* present class. A VectorOperation::values flag can be passed to decide
* on how to combine the data in the ghost field with the data at the
* owner, since both relate to the same data entry. In assembly, this is
- * usually and add-into operation. This call starts non-blocking MPI
- * communication routines, but does not wait for the routines to
- * finish. Thus, the user may not write into the respective positions of
- * the underlying arrays as the data might still be needed by MPI.
+ * usually an add-to operation where contributions from all processes to
+ * a locally owned element need to be added up. This call starts
+ * non-blocking MPI communication routines, but does not wait for the
+ * routines to finish. Thus, the user may not write into the respective
+ * positions of the underlying arrays as the data might still be needed by
+ * MPI.
* <li> import_from_ghosted_array_finish() finalizes the MPI data exchange
* started in import_from_ghosted_array_start() and signals that the data
* in the arrays may be used for further processing or modified as
*
* The MPI communication routines are point-to-point communication patterns.
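+ *
+ * As a sketch of how these functions work together, the following
+ * code updates the ghost values of an array of doubles. (The argument
+ * lists are simplified; in particular, ArrayView conversions and
+ * template arguments are glossed over, and the names of the local
+ * buffers are made up for this example. See the documentation of the
+ * individual functions for the exact interfaces.)
+ * @code
+ * // Storage for the locally owned values, the ghost values, and a
+ * // temporary buffer used to pack the data that needs to be sent:
+ * std::vector<double>      locally_owned(partitioner.local_size());
+ * std::vector<double>      ghosts(partitioner.n_ghost_indices());
+ * std::vector<double>      temp(partitioner.n_import_indices());
+ * std::vector<MPI_Request> requests;
+ *
+ * // Start the non-blocking exchange on communication channel 0. None
+ * // of the three arrays may be written to until the exchange is done:
+ * partitioner.export_to_ghosted_array_start<double>(
+ *   0,
+ *   make_array_view(locally_owned),
+ *   make_array_view(temp),
+ *   make_array_view(ghosts),
+ *   requests);
+ *
+ * // ...do unrelated work here to hide the communication latency...
+ *
+ * // Wait for the exchange to finish. Afterwards, 'ghosts' contains
+ * // the current values of the ghost entries:
+ * partitioner.export_to_ghosted_array_finish<double>(
+ *   make_array_view(ghosts), requests);
+ * @endcode
+ * Importing data from the ghost entries back to their owners (e.g.,
+ * during assembly) works analogously via import_from_ghosted_array_start()
+ * and import_from_ghosted_array_finish().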
*
+ *
* <h4>Sending only selected ghost data</h4>
*
* This partitioner class operates on a fixed set of ghost indices and
- * must always be compatible with the ghost indices inside a vector. In
- * some cases, one only wants to send some of the ghost indices present in
- * a vector around, but without creating a copy of the vector with a
+ * must always be compatible with the ghost indices inside the array whose
+ * partitioning it represents. In some cases,
+ * one only wants to send around some of the ghost indices present in
+ * a vector, but without creating a copy of the vector with a
* suitable index set - think e.g. of local time stepping where different
* regions of a vector might be exchanged at different stages of a time
* step slice. This class supports that case by the following model: A
{
public:
/**
- * Empty Constructor.
+ * Default constructor.
*/
Partitioner();
* Constructor with index set arguments. This constructor creates a
- * distributed layout based on a given communicators, an IndexSet
+ * distributed layout based on a given communicator, an IndexSet
* describing the locally owned range and another one for describing
- * ghost indices that are owned by other processors, but we need to have
- * read or write access to.
+ * ghost indices that are owned by other processors, but that we need to
+ * have read or write access to.
*/
Partitioner(const IndexSet &locally_owned_indices,
const IndexSet &ghost_indices_in,
const MPI_Comm communicator_in);
/**
- * Reinitialize the communication pattern. The first argument @p
- * vector_space_vector_index_set is the index set associated to a
- * VectorSpaceVector object. The second argument @p
- * read_write_vector_index_set is the index set associated to a
+ * Reinitialize the communication pattern. The first argument
+ * `vector_space_vector_index_set` is the index set associated with a
+ * VectorSpaceVector object. The second argument
+ * `read_write_vector_index_set` is the index set associated with a
* ReadWriteVector object.
*/
virtual void
* the given global index is neither locally owned nor a ghost, an
* exception is thrown.
*
- * Note that the local index for locally owned indices is between 0 and
- * local_size()-1, and the local index for ghosts is between
- * local_size() and local_size()+n_ghost_indices()-1.
+ * Note that the returned local index for locally owned indices
+ * will be between 0 and
+ * `local_size()-1`, and the local index for ghosts is between
+ * `local_size()` and `local_size()+n_ghost_indices()-1`.
*/
unsigned int
global_to_local(const types::global_dof_index global_index) const;
/**
* Return the global index corresponding to the given local index.
*
- * Note that the local index for locally owned indices is between 0 and
- * local_size()-1, and the local index for ghosts is between
- * local_size() and local_size()+n_ghost_indices()-1.
+ * Note that the local index for locally owned indices must be between 0
+ * and `local_size()-1`, and the local index for ghosts must be between
+ * `local_size()` and `local_size()+n_ghost_indices()-1`.
*/
types::global_dof_index
local_to_global(const unsigned int local_index) const;
* for all the processors that data is obtained from, i.e., locally owned
* indices that are ghosts on other processors.
*
- * @note the returned vector only contains those processor id's for which
+ * @note The returned vector only contains those processor id's for which
* the second entry is non-zero.
*/
const std::vector<std::pair<unsigned int, unsigned int>> &
import_targets() const;
/**
- * Check whether the given partitioner is compatible with the
- * partitioner used for this vector. Two partitioners are compatible if
- * they have the same local size and the same ghost indices. They do not
- * necessarily need to be the same data field. This is a local operation
+ * Check whether the given partitioner is compatible with the current
+ * partitioner. Two partitioners are compatible if
+ * they have the same local sizes and the same ghost indices. They do not
+ * necessarily need to correspond to the same data that is stored based
+ * on these partitioner objects. This is a local operation
* only, i.e., if only some processors decide that the partitioning is
* not compatible, only these processors will return @p false, whereas
* the other processors will return @p true.
/**
* Check whether the given partitioner is compatible with the
- * partitioner used for this vector. Two partitioners are compatible if
+ * current partitioner. Two partitioners are compatible if
* they have the same local size and the same ghost indices. They do not
- * necessarily need to be the same data field. As opposed to
+ * necessarily need to correspond to the same data that is stored based
+ * on these partitioner objects. As opposed to
* is_compatible(), this method checks for compatibility among all
* processors and the method only returns @p true if the partitioner is
- * the same on all processors.
+ * the same on all processors. In other words, it does a global
+ * "and" operation over the results returned by is_compatible() on all
+ * involved processes.
*
* This method performs global communication, so make sure to use it
* only in a context where all processors call it the same number of
/**
* Return whether ghost indices have been explicitly added as a @p
- * ghost_indices argument. Only true if a reinit call or constructor
+ * ghost_indices argument. Only true if a reinit() call or constructor
* provided that argument.
*/
bool
#ifdef DEAL_II_WITH_MPI
/**
- * Start the exports of the data in a locally owned array to the range
- * described by the ghost indices of this class.
+ * Start the export of the data in a locally owned array to the
+ * range described by the ghost indices of this class.
*
* @param communication_channel Sets an offset to the MPI_Isend and
* MPI_Irecv calls that avoids interference with other ongoing
std::vector<MPI_Request> & requests) const;
/**
- * Finish the exports of the data in a locally owned array to the range
- * described by the ghost indices of this class.
+ * Finish the export of the data in a locally owned array to the
+ * range described by the ghost indices of this class.
*
- * @param ghost_array The array that will receive the exported data
- * started in the @p export_to_ghosted_array_start(). This must be the
+ * @param ghost_array The array that will receive the exported data of
+ * the operation started by export_to_ghosted_array_start(). This must be the