* from the data that needs to be communicated. The goal is to reuse the
* same communication pattern for different containers. This is similar to
* the way SparseMatrix and SparsityPattern work.
+ *
+ * Conceptually, this class operates under the assumption that data
+ * are stored in one linearly indexed array of which every MPI process
+ * stores some elements (or possibly none). In practice, of course, it does
+ * not matter whether the elements are stored in contiguous arrays; the
+ * point is simply that a single integer index can be used to access a
+ * specific element. The elements of this logical array are (or at least
+ * may be) stored on different processes in a parallel MPI universe.
+ *
+ * In this world view, every process has a set of elements, and their
+ * indices in the array form the "locally owned indices". Next, every
+ * process will, as part of executing an algorithm, require access to some
+ * of the elements stored elsewhere; we call the indices of these elements
+ * the "ghost indices", in analogy to how vectors and triangulations
+ * partition vector elements and mesh cells into locally
+ * owned ones and ghost ones (along, of course, with the elements and cells
+ * stored on other processes that the current process simply does not care
+ * about and consequently need not know anything about).
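+ *
+ * As a concrete illustration (with made-up numbers, not anything this
+ * class prescribes), think of a logical array of ten elements
+ * distributed across two MPI processes, where process 0 owns the first
+ * six elements and needs read access to two of the elements owned by
+ * process 1. On process 0, the two index sets might then be built as
+ * follows:
+ * @code
+ * // Size of the logical, globally indexed array:
+ * const types::global_dof_index n_global_elements = 10;
+ *
+ * // Indices [0,6) are stored here ("locally owned indices"):
+ * IndexSet locally_owned_indices(n_global_elements);
+ * locally_owned_indices.add_range(0, 6);
+ *
+ * // Indices 6 and 8 are owned by process 1 but needed here
+ * // ("ghost indices"):
+ * IndexSet ghost_indices(n_global_elements);
+ * ghost_indices.add_index(6);
+ * ghost_indices.add_index(8);
+ * @endcode
+ * Process 1 would, correspondingly, mark the range [6,10) as locally
+ * owned and list whichever of the first six elements it needs to read
+ * among its ghost indices.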
+ *
+ * The point of this class (and its implementations in derived classes) is
+ * to set up communication infrastructure whereby one process can inquire
+ * efficiently about the "ghost elements" stored on other processes, and
+ * to send those locally owned elements to those other processes that
+ * require knowledge of their value because they list these elements among
+ * their respective "ghost indices".
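+ *
+ * Since this class is abstract, actual use happens through one of the
+ * derived classes. As a sketch (using, purely for illustration, the
+ * derived Utilities::MPI::NoncontiguousPartitioner class and the index
+ * sets from the example above), setting up the communication pattern
+ * then amounts to a single reinit() call:
+ * @code
+ * Utilities::MPI::NoncontiguousPartitioner partitioner;
+ * partitioner.reinit(locally_owned_indices, // what this process stores
+ *                    ghost_indices,         // what it needs to import
+ *                    MPI_COMM_WORLD);
+ * @endcode
+ * How the values of the ghost elements are subsequently exchanged is
+ * the business of the derived classes and not part of this interface.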
*/
class CommunicationPatternBase
{
virtual ~CommunicationPatternBase() = default;
/**
- * Reinitialize the communication pattern. The first argument
- * `vector_space_vector_index_set` is the index set associated to a
- * VectorSpaceVector object. The second argument
- * `read_write_vector_index_set` is the index set associated to a
- * ReadWriteVector object.
+ * Reinitialize the communication pattern.
+ *
+ * @param[in] locally_owned_indices The set of indices of elements
+ * in the array mentioned in the class documentation that are
+ * stored on the current process.
+ * @param[in] ghost_indices The set of indices of elements in the
+ * array mentioned in the class documentation that the current
+ * process will need to be able to import.
+ * @param[in] communicator The MPI communicator used to describe the
+ * entire set of processes that participate in storing and accessing
+ * elements of the array.
*/
virtual void
- reinit(const IndexSet &vector_space_vector_index_set,
- const IndexSet &read_write_vector_index_set,
+ reinit(const IndexSet &locally_owned_indices,
+ const IndexSet &ghost_indices,
const MPI_Comm &communicator) = 0;
/**
{
/**
* A flexible Partitioner class, which does not impose restrictions
- * regarding the order of the underlying index sets.
+ * regarding the order of the underlying index sets. In other words,
+ * this class implements the interface of the
+ * Utilities::MPI::CommunicationPatternBase base class with no
+ * assumption that every process stores a contiguous part of the
+ * array of objects; rather, the locally owned indices can be an
+ * arbitrary subset of all indices of elements of the array to which
+ * they refer.
+ *
+ * If you want to store only contiguous parts of these arrays on
+ * each process, take a look at Utilities::MPI::Partitioner.
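+ *
+ * As a (purely illustrative) example of such a non-contiguous
+ * partition, one process could own all even indices and another all
+ * odd indices of an array with ten elements:
+ * @code
+ * // On the process that owns the even indices:
+ * IndexSet locally_owned_indices(10);
+ * for (types::global_dof_index i = 0; i < 10; i += 2)
+ *   locally_owned_indices.add_index(i);
+ *
+ * // Two of the odd indices, owned by the other process, are needed
+ * // here as ghost indices (an arbitrary choice for this example):
+ * IndexSet ghost_indices(10);
+ * ghost_indices.add_index(1);
+ * ghost_indices.add_index(3);
+ *
+ * Utilities::MPI::NoncontiguousPartitioner partitioner;
+ * partitioner.reinit(locally_owned_indices, ghost_indices, MPI_COMM_WORLD);
+ * @endcode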
*/
class NoncontiguousPartitioner
: public Utilities::MPI::CommunicationPatternBase
const MPI_Comm &
get_mpi_communicator() const override;
- /**
- * Initialize the inner data structures.
- */
void
- reinit(const IndexSet &indexset_locally_owned,
- const IndexSet &indexset_ghost,
+ reinit(const IndexSet &locally_owned_indices,
+ const IndexSet &ghost_indices,
const MPI_Comm &communicator) override;
/**
- * Initialize the inner data structures.
+ * Initialize the inner data structures using explicit lists of
+ * indices. See the documentation of the other reinit() function for
+ * what the function does.
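+ *
+ * As a brief sketch (with made-up index values), the same information
+ * as in the IndexSet-based overload can thus be passed as plain
+ * vectors of global indices:
+ * @code
+ * const std::vector<types::global_dof_index> locally_owned_indices = {0, 2, 4};
+ * const std::vector<types::global_dof_index> ghost_indices         = {1, 5};
+ *
+ * Utilities::MPI::NoncontiguousPartitioner partitioner;
+ * partitioner.reinit(locally_owned_indices, ghost_indices, MPI_COMM_WORLD);
+ * @endcode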
*/
void
- reinit(const std::vector<types::global_dof_index> &indices_locally_owned,
- const std::vector<types::global_dof_index> &indices_ghost,
+ reinit(const std::vector<types::global_dof_index> &locally_owned_indices,
+ const std::vector<types::global_dof_index> &ghost_indices,
const MPI_Comm & communicator);
private:
* fact, any linear data structure) among processors using MPI.
*
* The partitioner stores the global vector size and the locally owned
- * range as a half-open interval [@p lower, @p upper) on each process.
+ * range as a half-open interval [@p lower, @p upper) on each process. In
+ * other words, it assumes that every process stores a contiguous subset
+ * of the array mentioned in the documentation of the base class
+ * Utilities::MPI::CommunicationPatternBase. (If you want to store
+ * non-contiguous parts of these arrays on each process, take a look
+ * at Utilities::MPI::NoncontiguousPartitioner.)
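+ *
+ * As a (hypothetical) illustration of such a contiguous partition, a
+ * global array of 100 elements could be split evenly between two
+ * processes, so that process 0 owns the half-open range [0,50) and
+ * process 1 the range [50,100):
+ * @code
+ * const unsigned int my_rank =
+ *   Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);
+ *
+ * // Each process marks its contiguous range [lower, upper) as owned:
+ * IndexSet locally_owned_indices(100);
+ * locally_owned_indices.add_range(my_rank * 50, (my_rank + 1) * 50);
+ * @endcode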
+ *
* Furthermore, it includes a structure for the point-to-point communication
* patterns. It allows the inclusion of ghost indices (i.e., indices that the
* current processor needs to have access to but that are owned by another