From: Wolfgang Bangerth
Date: Wed, 6 Jan 2010 16:12:57 +0000 (+0000)
Subject: Take over patch 20101 from the distributed grids branch.
X-Git-Tag: v8.0.0~6672
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=96c767e4b67b1e6134c52558762d3dc7294da61a;p=dealii.git

Take over patch 20101 from the distributed grids branch.

git-svn-id: https://svn.dealii.org/trunk@20318 0785d39b-7218-0410-832d-ea1e28bc413d
---

diff --git a/deal.II/doc/doxygen/headers/glossary.h b/deal.II/doc/doxygen/headers/glossary.h
index e52f080af4..42d8511f4c 100644
--- a/deal.II/doc/doxygen/headers/glossary.h
+++ b/deal.II/doc/doxygen/headers/glossary.h
@@ -65,6 +65,64 @@
  * FiniteElement
  *
  *
+ * <dt class="glossary">@anchor GlossCompress Compressing distributed
+ * vectors and matrices</dt>
+ *
+ * <dd>
+ * For parallel computations, deal.II uses the vector and matrix
+ * classes defined in the PETScWrappers and TrilinosWrappers
+ * namespaces. When running programs in parallel using MPI, these
+ * classes only store a certain number of rows or elements on the
+ * current processor, whereas the rest of the vector or matrix is
+ * stored on the other processors that belong to our MPI
+ * universe. This presents a certain problem when assembling linear
+ * systems: we add elements to the matrix and right hand side vectors
+ * that may or may not be stored locally. Sometimes, we may also want
+ * to just set an element, not add to it.
+ *
+ * Both PETSc and Trilinos allow adding to or setting elements that
+ * are not locally stored. In that case, they write the value that we
+ * want to store or add into a cache, and we need to call one of the
+ * functions TrilinosWrappers::VectorBase::compress(),
+ * TrilinosWrappers::SparseMatrix::compress(),
+ * PETScWrappers::VectorBase::compress() or
+ * PETScWrappers::MatrixBase::compress(), which will then ship the
+ * values in the cache to the MPI process that owns the element to
+ * which they are supposed to be added or written. Because the MPI
+ * model only allows communication to be initiated from the sender
+ * side (i.e., in particular, it is not a remote procedure call),
+ * these functions are collective, i.e. they need to be called by all
+ * processors.
+ *
+ * There is one snag, however: both PETSc and Trilinos need to know
+ * whether the operation that these <code>compress()</code> functions
+ * invoke applies to adding elements or setting them. Usually, you
+ * will have written or added elements to the vector or matrix before
+ * (and after <code>compress()</code> was last called), and in this
+ * case the wrapper object knows that the global communication
+ * operation is either an add or a set operation since it keeps track
+ * of this sort of thing. However, there are cases where this isn't
+ * so: for example, if you are working on a coarse grid and there are
+ * more processors than coarse grid cells; in that case, some
+ * processors will not assemble anything, and when they come to the
+ * point where they call <code>compress()</code> on the system matrix
+ * and right hand side, these objects are still in their pristine
+ * state. In a case like this the wrapper object doesn't know whether
+ * it is supposed to do a global exchange for add or set operations,
+ * and in the worst case you end up with a deadlock (because those
+ * processors that did assembly operations want to communicate, while
+ * those that didn't assemble anything do not want to communicate).
+ *
+ * There are two ways out of a situation like this:
+ * - You tell the object what kind of operation the upcoming
+ *   <code>compress()</code> call is intended for. The
+ *   TrilinosWrappers::VectorBase::compress() function can take an
+ *   additional argument for this purpose.
+ * - You do a fake addition or set operation on the object in question.
+ *
+ * Some of the objects are also indifferent and can figure out what to
+ * do without being told. The TrilinosWrappers::SparseMatrix class can
+ * do that, for example.
+ * </dd>
+ *
+ *
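+ * As a rough illustration of the pattern described above (the names
+ * system_matrix, system_rhs, i, j and value are placeholders for
+ * objects and indices that a concrete assembly loop would provide),
+ * each processor adds whatever it has computed locally and all
+ * processors then call the compress() functions together:
+ * @code
+ *   // 'system_matrix' and 'system_rhs' are assumed to be fully
+ *   // initialized TrilinosWrappers::SparseMatrix and
+ *   // TrilinosWrappers::MPI::Vector objects. Entries that belong to
+ *   // rows owned by another processor are written into a cache:
+ *   system_matrix.add (i, j, value);
+ *   system_rhs(i) += value;
+ *
+ *   // Collective exchange of all cached entries. Every processor in
+ *   // the MPI universe must reach these calls, even one that did not
+ *   // add anything above:
+ *   system_matrix.compress ();
+ *   system_rhs.compress ();
+ * @endcode
+ * If a processor may end up never touching system_rhs at all, one of
+ * the two remedies listed above (the compress() variant that takes an
+ * argument, or a fake add or set operation) ensures that all
+ * processors agree on whether the exchange adds or sets elements.
+ *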
  * <dt class="glossary">@anchor GlossDistorted Distorted cells</dt>
* *
  * <dd> A distorted cell is a cell for which the mapping from