* @deprecated As of deal.II version 9.2, we do not populate a vector with
* the index sets of all processors by default any more due to a possibly
* large memory footprint on many processors. As a consequence, this
- * function needs to call `Utilities::all_gather(comm, locally_owned_dofs())`
- * upon the first invocation, including global communication. Use
- * `Utilities::all_gather(comm, dof_handler.locally_owned_dofs())` instead if
- * using up to a few thousands of MPI ranks or some variant involving local
- * communication with more processors.
+ * function needs to call `Utilities::MPI::all_gather(comm,
+ * locally_owned_dofs())` upon the first invocation, including global
+ * communication. Use `Utilities::MPI::all_gather(comm,
+ * dof_handler.locally_owned_dofs())` instead if using up to a few thousand
+ * MPI ranks, or some variant involving local communication when running on
+ * more processors.
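+ *
+ * A minimal sketch of the suggested replacement, assuming `dof_handler`
+ * is a fully set up DoFHandler and `comm` is its MPI communicator:
+ * @code
+ * // Gather every rank's locally owned IndexSet; entry i of the result
+ * // holds the DoFs owned by MPI rank i.
+ * const std::vector<IndexSet> owned_dofs_per_proc =
+ *   Utilities::MPI::all_gather(comm, dof_handler.locally_owned_dofs());
+ * @endcode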
*/
DEAL_II_DEPRECATED const std::vector<IndexSet> &
locally_owned_dofs_per_processor() const;
* @deprecated As of deal.II version 9.2, we do not populate a vector with
* the numbers of dofs of all processors by default any more due to a
* possibly large memory footprint on many processors. As a consequence,
- * this function needs to call `Utilities::all_gather(comm,
+ * this function needs to call `Utilities::MPI::all_gather(comm,
* n_locally_owned_dofs())` upon the first invocation, including global
- * communication. Use `Utilities::all_gather(comm,
+ * communication. Use `Utilities::MPI::all_gather(comm,
* dof_handler.n_locally_owned_dofs())` instead if using up to a few thousand
* MPI ranks, or some variant involving local communication when running on
* more processors.
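+ *
+ * A minimal sketch of the suggested replacement, again assuming
+ * `dof_handler` and its communicator `comm` are set up elsewhere:
+ * @code
+ * // Gather every rank's number of locally owned DoFs; entry i of the
+ * // result holds the count for MPI rank i.
+ * const std::vector<types::global_dof_index> n_owned_dofs_per_proc =
+ *   Utilities::MPI::all_gather(comm, dof_handler.n_locally_owned_dofs());
+ * @endcode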
* @deprecated As of deal.II version 9.2, we do not populate a vector with
* the index sets of all processors by default any more due to a possibly
* large memory footprint on many processors. As a consequence, this
- * function needs to call `Utilities::all_gather(comm,
+ * function needs to call `Utilities::MPI::all_gather(comm,
* locally_owned_mg_dofs(level))` upon the first invocation, including global
- * communication. Use `Utilities::all_gather(comm,
+ * communication. Use `Utilities::MPI::all_gather(comm,
* dof_handler.locally_owned_mg_dofs(level))` instead if using up to a few
* thousand MPI ranks, or some variant involving local communication when
* running on more processors.
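+ *
+ * A minimal sketch of the suggested replacement for one multigrid level,
+ * assuming `dof_handler` has distributed level DoFs and `level` is a
+ * valid level index:
+ * @code
+ * // Gather the IndexSet of level DoFs each rank owns on `level`.
+ * const std::vector<IndexSet> owned_mg_dofs_per_proc =
+ *   Utilities::MPI::all_gather(comm,
+ *                              dof_handler.locally_owned_mg_dofs(level));
+ * @endcode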
* operation and will return @p true only if all processors are consistent.
*
* Please supply the owned DoFs per processor as returned by
- * DoFHandler::locally_owned_dofs_per_processor() as @p locally_owned_dofs
- * and the result of DoFTools::extract_locally_active_dofs() as
- * @p locally_active_dofs. The
- * former is used to determine ownership of the specific DoF, while the latter
- * is used as the set of rows that need to be checked.
+ * Utilities::MPI::all_gather(MPI_Comm, DoFHandler::locally_owned_dofs()) as
+ * @p locally_owned_dofs and the result of
+ * DoFTools::extract_locally_active_dofs() as
+ * @p locally_active_dofs. The former is used to determine ownership of the
+ * specific DoF, while the latter is used as the set of rows that need to be
+ * checked.
*
* If @p verbose is set to @p true, additional debug information is written
* to std::cout.
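+ *
+ * A usage sketch, assuming this documents
+ * AffineConstraints::is_consistent_in_parallel() and that `constraints`,
+ * `dof_handler`, and `comm` already exist:
+ * @code
+ * // Owned DoFs per rank, gathered as recommended above.
+ * const std::vector<IndexSet> locally_owned_dofs =
+ *   Utilities::MPI::all_gather(comm, dof_handler.locally_owned_dofs());
+ * // Locally active DoFs: the rows to be checked on this rank.
+ * IndexSet locally_active_dofs;
+ * DoFTools::extract_locally_active_dofs(dof_handler, locally_active_dofs);
+ * const bool consistent = constraints.is_consistent_in_parallel(
+ *   locally_owned_dofs, locally_active_dofs, comm);
+ * @endcode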
* Communicate rows in a dynamic sparsity pattern over MPI, similar to the
* one above but using a vector `rows_per_cpu` containing the number of
* rows per CPU for determining ownership. This is typically the value
- * returned by DoFHandler::n_locally_owned_dofs_per_processor -- given that
- * the construction of the input to this function involves all-to-all
- * communication, it is typically slower than the function above for more
- * than a thousand of processes (and quick enough also for small sizes).
+ * returned by Utilities::MPI::all_gather(MPI_Comm,
+ * DoFHandler::n_locally_owned_dofs()) -- given that the construction of the
+ * input to this function involves all-to-all communication, it is typically
+ * slower than the function above for more than a thousand processes (and
+ * quick enough for small sizes as well).
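+ *
+ * A sketch of how `rows_per_cpu` could be built, assuming row ownership
+ * follows the locally owned DoFs of a `dof_handler` (the gather itself
+ * is the all-to-all communication mentioned above):
+ * @code
+ * // One entry per MPI rank: the number of rows (DoFs) that rank owns.
+ * const std::vector<types::global_dof_index> rows_per_cpu =
+ *   Utilities::MPI::all_gather(comm, dof_handler.n_locally_owned_dofs());
+ * @endcode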
*/
void
distribute_sparsity_pattern(