From bfd58dfeee8c8b0262e218f11009ce415c1da7e7 Mon Sep 17 00:00:00 2001
From: Wolfgang Bangerth
Date: Thu, 25 Sep 2014 14:56:28 -0500
Subject: [PATCH] Augment documentation motivated by Martin's post on the
 mailing list.

---
 include/deal.II/lac/trilinos_vector.h | 179 ++++++++++++++++++++++++++++++---
 1 file changed, 174 insertions(+), 5 deletions(-)

diff --git a/include/deal.II/lac/trilinos_vector.h b/include/deal.II/lac/trilinos_vector.h
index d130582772..1f55445147 100644
--- a/include/deal.II/lac/trilinos_vector.h
+++ b/include/deal.II/lac/trilinos_vector.h
@@ -132,17 +132,19 @@ namespace TrilinosWrappers
      * In contrast to read access, Trilinos (and the respective deal.II
      * wrapper classes) allow to write (or add) to individual elements of
      * vectors, even if they are stored on a different process. You can do
-     * this writing, for example, vec(i)=d or vec(i)+=d,
+     * this by writing into or adding to individual elements, using the
+     * syntax vec(i)=d or vec(i)+=d,
      * or similar operations. There is one catch, however, that may lead to
      * very confusing error messages: Trilinos requires application programs
-     * to call the compress() function when they switch from adding, to
-     * elements to writing to elements. The reasoning is that all processes
+     * to call the compress() function when they switch from performing a
+     * set of operations that add to elements, to performing a set of
+     * operations that write to elements. The reasoning is that all processes
      * might accumulate addition operations to elements, even if multiple
      * processes write to the same elements. By the time we call compress()
      * the next time, all these additions are executed. However, if one
      * process adds to an element, and another overwrites to it, the order
      * of execution would yield non-deterministic behavior if we don't make
-     * sure that a synchronisation with compress() happens in between.
+     * sure that a synchronization with compress() happens in between.
      *
      * In order to make sure these calls to compress() happen at the
      * appropriate time, the deal.II wrappers keep a state variable that
@@ -186,6 +188,87 @@ namespace TrilinosWrappers
      * operations at the same time, for example by placing zero additions if
      * necessary.
      *
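+     *
+     * As a minimal sketch (assuming <code>vec</code> is a vector of this
+     * class and <code>i</code>, <code>j</code> are valid global indices),
+     * this workflow might look as follows:
+     * @code
+     *   // accumulate contributions, possibly into elements owned by
+     *   // other processors:
+     *   vec(i) += 1.;
+     *   vec(j) += 2.;
+     *   // make all outstanding additions take effect before switching
+     *   // from adding to writing:
+     *   vec.compress (VectorOperation::add);
+     *
+     *   // now one may overwrite elements instead:
+     *   vec(i) = 3.;
+     *   vec.compress (VectorOperation::insert);
+     * @endcode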
+     *
+     * <h3>Ghost elements of vectors</h3>
+     *
+     * Parallel vectors come in two kinds: without and with ghost elements.
+     * Vectors without ghost elements uniquely partition the vector elements
+     * between processors: each vector entry has exactly one processor that
+     * owns it. For such vectors, you can read those elements that are owned
+     * by the processor you are currently on, and you can write into any
+     * element whether you own it or not: if you don't own it, the value
+     * written or added to a vector element will be shipped to the processor
+     * that owns this vector element the next time you call compress(), as
+     * described above.
+     *
+     * What we call a 'ghosted' vector is simply a view of the parallel
+     * vector in which the element distributions overlap. The 'ghosted'
+     * Trilinos vector in itself has no idea of which entries are ghosted
+     * and which are locally owned. In particular, there is no notion of
+     * an 'owner' of a vector element in the way we have it in the
+     * non-ghosted view.
+     *
+     * This explains why we do not allow writing into ghosted vectors on the
+     * Trilinos side: Who would be responsible for taking care of the
+     * duplicated entries, given that there is no information about locally
+     * owned indices? In other words, since a processor doesn't know which
+     * other processors own an element, whom would it send a value to if one
+     * were to write to it? The only possibility would be to send this
+     * information to all other processors, but that is clearly not
+     * practical. Thus, we only allow reading from ghosted vectors, which
+     * however we do very often.
+     *
+     * So how do you fill a ghosted vector if you can't write to it? This
+     * only happens through assignment from a non-ghosted vector. It can go
+     * both ways: a non-ghosted vector can be assigned to a ghosted vector,
+     * and a ghosted vector can be assigned to a non-ghosted one; the latter
+     * typically only requires taking out the locally owned part, as ghosted
+     * vectors most often store a superset of the elements of non-ghosted
+     * ones. In general, such an assignment sends data around, and what
+     * exactly happens depends on the different views of the two vectors.
+     * Trilinos also allows you to get subvectors out of a big vector that
+     * way.
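+     *
+     * The following sketch shows this workflow; the names @p dof_handler
+     * and @p mpi_communicator are assumed to have been set up in the
+     * usual way:
+     * @code
+     *   IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs ();
+     *   IndexSet locally_relevant_dofs;
+     *   DoFTools::extract_locally_relevant_dofs (dof_handler,
+     *                                            locally_relevant_dofs);
+     *
+     *   // a non-ghosted vector we can write into:
+     *   TrilinosWrappers::MPI::Vector distributed (locally_owned_dofs,
+     *                                              mpi_communicator);
+     *   // ...assemble into 'distributed', then call compress()...
+     *
+     *   // a ghosted vector we can only read from:
+     *   TrilinosWrappers::MPI::Vector ghosted (locally_owned_dofs,
+     *                                          locally_relevant_dofs,
+     *                                          mpi_communicator);
+     *   // the assignment imports the values of the ghost elements:
+     *   ghosted = distributed;
+     * @endcode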
+     *
+     *
      * <h3>Thread safety of Trilinos vectors</h3>
      *
      * When writing into Trilinos vectors from several threads in shared
@@ -340,6 +423,11 @@ namespace TrilinosWrappers
      * distribute the individual components among the MPI processors. Since
      * it also includes information about the size of the vector, this is
      * all we need to generate a parallel vector.
+     *
+     * Depending on whether the @p parallel_partitioning argument uniquely
+     * subdivides elements among processors or not, the resulting vector
+     * may or may not have ghost elements. See the general documentation of
+     * this class for more information.
      */
     explicit Vector (const Epetra_Map &parallel_partitioning);

@@ -348,6 +436,11 @@ namespace TrilinosWrappers
      * vector of this class does not necessarily need to be distributed
      * among processes, the user needs to supply us with an Epetra_Map that
      * sets the partitioning details.
+     *
+     * Depending on whether the @p parallel_partitioning argument uniquely
+     * subdivides elements among processors or not, the resulting vector
+     * may or may not have ghost elements. See the general documentation of
+     * this class for more information.
      */
     Vector (const Epetra_Map &parallel_partitioning,
             const VectorBase &v);

@@ -355,6 +448,11 @@ namespace TrilinosWrappers
     /**
      * Reinitialize from a deal.II vector. The Epetra_Map specifies the
      * %parallel partitioning.
+     *
+     * Depending on whether the @p parallel_partitioner argument uniquely
+     * subdivides elements among processors or not, the resulting vector
+     * may or may not have ghost elements. See the general documentation of
+     * this class for more information.
      */
     template <typename number>
     void reinit (const Epetra_Map &parallel_partitioner,

@@ -363,6 +461,11 @@ namespace TrilinosWrappers
     /**
      * Reinit functionality. This function destroys the old vector content
      * and generates a new one based on the input map.
+     *
+     * Depending on whether the @p parallel_partitioning argument uniquely
+     * subdivides elements among processors or not, the resulting vector
+     * may or may not have ghost elements. See the general documentation of
+     * this class for more information.
      */
     void reinit (const Epetra_Map &parallel_partitioning,
                  const bool        fast = false);

@@ -370,6 +473,11 @@ namespace TrilinosWrappers
     /**
      * Copy-constructor from deal.II vectors. Sets the dimension to that of
      * the given vector, and copies all elements.
+     *
+     * Depending on whether the @p parallel_partitioning argument uniquely
+     * subdivides elements among processors or not, the resulting vector
+     * may or may not have ghost elements. See the general documentation of
+     * this class for more information.
      */
     template <typename number>
     Vector (const Epetra_Map &parallel_partitioning,

@@ -384,12 +492,31 @@ namespace TrilinosWrappers
      * individual components among the MPI processors. Since it also
      * includes information about the size of the vector, this is all we
      * need to generate a %parallel vector.
+     *
+     * Depending on whether the @p parallel_partitioning argument uniquely
+     * subdivides elements among processors or not, the resulting vector
+     * may or may not have ghost elements. See the general documentation of
+     * this class for more information.
      */
     explicit Vector (const IndexSet &parallel_partitioning,
                      const MPI_Comm &communicator = MPI_COMM_WORLD);

     /**
      * Creates a ghosted parallel vector.
+     *
+     * The resulting vector is always ghosted: in addition to the locally
+     * owned elements described by @p local, each processor also stores the
+     * ghost elements described by @p ghost. See the general documentation
+     * of this class for more information.
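+     *
+     * A minimal sketch of using this constructor (assuming the index sets
+     * @p locally_owned_dofs and @p locally_relevant_dofs have already been
+     * set up):
+     * @code
+     *   TrilinosWrappers::MPI::Vector v (locally_owned_dofs,
+     *                                    locally_relevant_dofs,
+     *                                    mpi_communicator);
+     * @endcode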
      */
     Vector (const IndexSet &local,
             const IndexSet &ghost,
             const MPI_Comm &communicator = MPI_COMM_WORLD);

@@ -400,6 +527,11 @@ namespace TrilinosWrappers
      * vector of this class does not necessarily need to be distributed
      * among processes, the user needs to supply us with an IndexSet and an
      * MPI communicator that set the partitioning details.
+     *
+     * Depending on whether the @p parallel_partitioning argument uniquely
+     * subdivides elements among processors or not, the resulting vector
+     * may or may not have ghost elements. See the general documentation of
+     * this class for more information.
      */
     Vector (const IndexSet   &parallel_partitioning,
             const VectorBase &v,
             const MPI_Comm   &communicator = MPI_COMM_WORLD);

@@ -408,6 +540,11 @@ namespace TrilinosWrappers
      * Copy-constructor from deal.II vectors. Sets the dimension to that of
      * the given vector, and copies all the elements.
+     *
+     * Depending on whether the @p parallel_partitioning argument uniquely
+     * subdivides elements among processors or not, the resulting vector
+     * may or may not have ghost elements. See the general documentation of
+     * this class for more information.
      */
     template <typename number>
     Vector (const IndexSet &parallel_partitioning,

@@ -419,6 +556,11 @@ namespace TrilinosWrappers
      * and generates a new one based on the input partitioning. The flag
      * fast determines whether the vector should be filled with
      * zero (false) or left untouched (true).
+     *
+     * Depending on whether the @p parallel_partitioning argument uniquely
+     * subdivides elements among processors or not, the resulting vector
+     * may or may not have ghost elements. See the general documentation of
+     * this class for more information.
      */
     void reinit (const IndexSet &parallel_partitioning,
                  const MPI_Comm &communicator = MPI_COMM_WORLD,

@@ -441,6 +583,11 @@ namespace TrilinosWrappers
      * alternative storage scheme for ghost elements that allows multiple
      * threads to write into the vector (for the other reinit methods, only
      * one thread is allowed to write into the ghost entries at a time).
+     *
+     * The resulting vector is always ghosted: in addition to the locally
+     * owned elements described by @p locally_owned_entries, each processor
+     * also stores the ghost elements described by @p ghost_entries. See the
+     * general documentation of this class for more information.
      */
     void reinit (const IndexSet &locally_owned_entries,
                  const IndexSet &ghost_entries,

@@ -596,6 +743,9 @@ namespace TrilinosWrappers
      * vector. If the map is not localized, i.e., if there are some elements
      * that are not present on all processes, only the global size of the map
      * will be taken and a localized map will be generated internally.
+     * In other words, which elements of the @p partitioning argument are
+     * set is in fact ignored; the only thing that matters is the size of
+     * the index space described by this argument.
      */
     explicit Vector (const Epetra_Map &partitioning);

@@ -604,7 +754,9 @@ namespace TrilinosWrappers
      * vector. If the index set is not localized, i.e., if there are some
      * elements that are not present on all processes, only the global size of
      * the index set will be taken and a localized version will be generated
-     * internally.
+     * internally. In other words, which elements of the @p partitioning
+     * argument are set is in fact ignored; the only thing that matters is
+     * the size of the index space described by this argument.
      */
     explicit Vector (const IndexSet &partitioning,
                      const MPI_Comm &communicator = MPI_COMM_WORLD);

@@ -637,6 +789,19 @@ namespace TrilinosWrappers
      * that has been initialized with the same communicator. The variable
      * fast determines whether the vector should be filled with zero
      * or left untouched.
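+     *
+     * As a sketch (assuming @p map is an Epetra_Map that distributes 100
+     * elements across the processes):
+     * @code
+     *   TrilinosWrappers::Vector v;
+     *   v.reinit (map);
+     *   // every process now stores all 100 elements of 'v', regardless
+     *   // of the distribution described by 'map'
+     * @endcode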
+     *
+     * Which elements of the @p input_map argument are set is in fact
+     * ignored; the only thing that matters is the size of the index space
+     * described by this argument.
      */
     void reinit (const Epetra_Map &input_map,
                  const bool        fast = false);

@@ -649,6 +814,10 @@ namespace TrilinosWrappers
      * that has been initialized with the same communicator. The variable
      * fast determines whether the vector should be filled with zero
      * (false) or left untouched (true).
+     *
+     * Which elements of the @p input_map argument are set is in fact
+     * ignored; the only thing that matters is the size of the index space
+     * described by this argument.
      */
     void reinit (const IndexSet &input_map,
                  const MPI_Comm &communicator = MPI_COMM_WORLD,
-- 
2.39.5