From: Jean-Paul Pelteret
Date: Fri, 26 Jan 2018 07:29:28 +0000 (+0100)
Subject: Amend comment related to making a local copy of Trilinos MPI vector.
X-Git-Tag: v9.0.0-rc1~517^2
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=5e7c61f9e109ac906d77e1964f2c34e52d86d118;p=dealii.git

Amend comment related to making a local copy of Trilinos MPI vector.
---

diff --git a/include/deal.II/lac/vector.h b/include/deal.II/lac/vector.h
index d777b56c48..02352d3ca0 100644
--- a/include/deal.II/lac/vector.h
+++ b/include/deal.II/lac/vector.h
@@ -196,8 +196,8 @@ public:
    * the same time. This means that unless you use a split MPI communicator
    * then it is not normally possible for only one or a subset of processes
    * to obtain a copy of a parallel vector while the other jobs do something
-   * else. This call will therefore result in a copy of the vector on all
-   * processors that share @p v.
+   * else. In other words, calling this function is a 'collective operation'
+   * that needs to be executed by all MPI processes that jointly share @p v.
    */
   explicit Vector (const TrilinosWrappers::MPI::Vector &v);
 #endif
@@ -386,8 +386,8 @@ public:
    * the same time. This means that unless you use a split MPI communicator
    * then it is not normally possible for only one or a subset of processes
    * to obtain a copy of a parallel vector while the other jobs do something
-   * else. This call will therefore result in a copy of the vector on all
-   * processors that share @p v.
+   * else. In other words, calling this function is a 'collective operation'
+   * that needs to be executed by all MPI processes that jointly share @p v.
    */
   Vector &
   operator= (const TrilinosWrappers::MPI::Vector &v);
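
Note (illustration only, not part of the patch): the behaviour the amended comment describes can be sketched as below. The vector size, the block partitioning, and the program structure are assumptions made purely for illustration; the only call taken from the documented interface is the copy of the Trilinos MPI vector into a dealii::Vector<double>, which every process sharing the vector must execute.

#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/lac/vector.h>

int main(int argc, char **argv)
{
  dealii::Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
  const MPI_Comm comm = MPI_COMM_WORLD;

  const unsigned int n_procs  = dealii::Utilities::MPI::n_mpi_processes(comm);
  const unsigned int my_rank  = dealii::Utilities::MPI::this_mpi_process(comm);
  const unsigned int n_global = 100; // assumed global size, for illustration only

  // Simple block partitioning of [0, n_global) across all processes (assumed setup).
  dealii::IndexSet locally_owned(n_global);
  const unsigned int chunk = n_global / n_procs;
  const unsigned int begin = my_rank * chunk;
  const unsigned int end   = (my_rank == n_procs - 1) ? n_global : begin + chunk;
  locally_owned.add_range(begin, end);

  // A distributed Trilinos vector jointly shared by all processes in 'comm'.
  dealii::TrilinosWrappers::MPI::Vector v(locally_owned, comm);

  // Collective operation: every MPI process that shares 'v' must execute this
  // copy; if only a subset calls it, the program deadlocks in the underlying
  // data exchange.
  dealii::Vector<double> local_copy(v);

  // Each process now holds all n_global entries of 'v' in 'local_copy'.
  return 0;
}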