* described in the general documentation. This function works on the
* classes that are used to wrap PETSc objects.
*
- * Note that this function is not very efficient: it needs to alternatingly
- * read and write into the matrix, a situation that PETSc does not handle
- * too well. In addition, we only get rid of rows corresponding to boundary
- * nodes, but the corresponding case of deleting the respective columns
- * (i.e. if @p eliminate_columns is @p true) is not presently implemented,
- * and probably will never because it is too expensive without direct access
- * to the PETSc data structures. (This leads to the situation where the
- * action indicated by the default value of the last argument is actually
- * not implemented; that argument has <code>true</code> as its default value
- * to stay consistent with the other functions of same name in this class.)
- * A third reason against this function is that it doesn't handle the case
- * where the matrix is distributed across an MPI system.
+ * <b>Important:</b> This function is not very efficient: it needs
+ * to alternate between reading from and writing into the matrix, a
+ * situation that PETSc does not handle well. In addition, we only
+ * get rid of rows corresponding to boundary nodes, but the
+ * corresponding case of deleting the respective columns (i.e. if
+ * @p eliminate_columns is @p true) is not presently implemented,
+ * and probably never will be because it is too expensive without
+ * direct access to the PETSc data structures. (This leads to the
+ * situation where the action indicated by the default value of the
+ * last argument is actually not implemented; that argument has
+ * <code>true</code> as its default value to stay consistent with
+ * the other functions of the same name in this class.)
*
* This function is used in step-17 and step-18.
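+ *
+ * As a rough sketch of typical usage (the names
+ * <code>dof_handler</code>, <code>system_matrix</code>,
+ * <code>solution</code> and <code>system_rhs</code> are
+ * placeholders for whatever objects your own program uses), the
+ * call sequence could look like this:
+ * @code
+ *   std::map<types::global_dof_index,double> boundary_values;
+ *   VectorTools::interpolate_boundary_values (dof_handler,
+ *                                             0,
+ *                                             ZeroFunction<dim>(),
+ *                                             boundary_values);
+ *   // pass 'false' as last argument since eliminating columns is
+ *   // not implemented for these matrix classes anyway
+ *   MatrixTools::apply_boundary_values (boundary_values,
+ *                                       system_matrix,
+ *                                       solution,
+ *                                       system_rhs,
+ *                                       false);
+ * @endcode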
*/
const bool eliminate_columns = true);
/**
- * Same function, but for parallel PETSc matrices.
+ * Same function as above, but for parallel PETSc matrices.
+ *
+ * @note If the matrix is stored in parallel across multiple
+ * processors using MPI, this function only touches rows that are
+ * locally stored and simply ignores all other rows. In other words,
+ * each processor is responsible for its own rows, and the @p
+ * boundary_values argument needs to contain all locally owned rows
+ * of the matrix that you want to have treated. (But it can also
+ * contain entries for degrees of freedom not owned locally; these
+ * will simply be ignored.) Further, in the context of parallel
+ * computations, you will get into trouble if you treat a row while
+ * other processors still have pending writes or additions into the
+ * same row. In other words, if another processor still wants to add
+ * something to an element of a row and you call this function to
+ * zero out the row, then the next call to compress() may add the
+ * remote value to the zero you just created. Consequently, you
+ * will want to call compress() after you have made the last
+ * modifications to a matrix and before starting to clear rows.
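+ *
+ * As a sketch of this ordering (object names are placeholders for
+ * whatever your program uses; the exact form of the compress()
+ * call may differ between deal.II versions):
+ * @code
+ *   // ... assemble the linear system, adding local contributions
+ *   // to system_matrix and system_rhs ...
+ *
+ *   // communicate all pending additions before touching any rows
+ *   system_matrix.compress (VectorOperation::add);
+ *   system_rhs.compress (VectorOperation::add);
+ *
+ *   // now each processor eliminates its locally stored boundary rows
+ *   MatrixTools::apply_boundary_values (boundary_values,
+ *                                       system_matrix,
+ *                                       solution,
+ *                                       system_rhs,
+ *                                       false);
+ * @endcode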
*/
void
apply_boundary_values (const std::map<types::global_dof_index,double> &boundary_values,
const bool eliminate_columns = true);
/**
- * Same function, but for parallel PETSc matrices. Note that this function
- * only operates on the local range of the parallel matrix, i.e. it only
- * eliminates rows corresponding to degrees of freedom for which the row is
- * stored on the present processor. All other boundary nodes are ignored,
- * and it doesn't matter whether they are present in the first argument to
- * this function or not. A consequence of this, however, is that this
- * function has to be called from all processors that participate in sharing
- * the contents of the given matrices and vectors. It is also implied that
- * the local range for all objects passed to this function is the same.
+ * Same function as above, but for parallel PETSc matrices and a
+ * non-parallel vector.
*/
void
apply_boundary_values (const std::map<types::global_dof_index,double> &boundary_values,
* described in the general documentation. This function works on the
* classes that are used to wrap Trilinos objects.
*
- * Note that this function is not very efficient: it needs to alternatingly
- * read and write into the matrix, a situation that Trilinos does not handle
- * too well. In addition, we only get rid of rows corresponding to boundary
- * nodes, but the corresponding case of deleting the respective columns
- * (i.e. if @p eliminate_columns is @p true) is not presently implemented,
- * and probably will never because it is too expensive without direct access
- * to the Trilinos data structures. (This leads to the situation where the
- * action indicated by the default value of the last argument is actually
- * not implemented; that argument has <code>true</code> as its default value
- * to stay consistent with the other functions of same name in this class.)
- * A third reason against this function is that it doesn't handle the case
- * where the matrix is distributed across an MPI system.
+ * <b>Important:</b> This function is not very efficient: it needs
+ * to alternate between reading from and writing into the matrix, a
+ * situation that Trilinos does not handle well. In addition, we
+ * only get rid of rows corresponding to boundary nodes, but the
+ * corresponding case of deleting the respective columns (i.e. if
+ * @p eliminate_columns is @p true) is not presently implemented,
+ * and probably never will be because it is too expensive without
+ * direct access to the Trilinos data structures. (This leads to
+ * the situation where the action indicated by the default value of
+ * the last argument is actually not implemented; that argument has
+ * <code>true</code> as its default value to stay consistent with
+ * the other functions of the same name in this class.)
*/
void
apply_boundary_values (const std::map<types::global_dof_index,double> &boundary_values,
const bool eliminate_columns = true);
/**
- * Apply Dirichlet boundary conditions to the system matrix and vectors as
- * described in the general documentation. This function works on the
- * classes that are used to wrap Trilinos objects.
+ * Same as above, but for parallel matrices and vectors.
*
- * Note that this function is not very efficient: it needs to alternatingly
- * read and write into the matrix, a situation that Trilinos does not handle
- * too well. In addition, we only get rid of rows corresponding to boundary
- * nodes, but the corresponding case of deleting the respective columns
- * (i.e. if @p eliminate_columns is @p true) is not presently implemented,
- * and probably will never because it is too expensive without direct access
- * to the Trilinos data structures. (This leads to the situation where the
- * action indicated by the default value of the last argument is actually
- * not implemented; that argument has <code>true</code> as its default value
- * to stay consistent with the other functions of same name in this class.)
- * This function does work on MPI vector types.
+ * @note If the matrix is stored in parallel across multiple
+ * processors using MPI, this function only touches rows that are
+ * locally stored and simply ignores all other rows. In other words,
+ * each processor is responsible for its own rows, and the @p
+ * boundary_values argument needs to contain all locally owned rows
+ * of the matrix that you want to have treated. (But it can also
+ * contain entries for degrees of freedom not owned locally; these
+ * will simply be ignored.) Further, in the context of parallel
+ * computations, you will get into trouble if you treat a row while
+ * other processors still have pending writes or additions into the
+ * same row. In other words, if another processor still wants to add
+ * something to an element of a row and you call this function to
+ * zero out the row, then the next call to compress() may add the
+ * remote value to the zero you just created. Consequently, you
+ * will want to call compress() after you have made the last
+ * modifications to a matrix and before starting to clear rows.
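+ *
+ * A sketch of the required ordering (object names are placeholders
+ * for whatever your program uses; the exact form of the compress()
+ * call may differ between deal.II versions):
+ * @code
+ *   // ... assemble, adding local contributions ...
+ *   system_matrix.compress (VectorOperation::add);
+ *   system_rhs.compress (VectorOperation::add);
+ *   // only now clear the locally stored boundary rows; pass
+ *   // 'false' since eliminating columns is not implemented anyway
+ *   MatrixTools::apply_boundary_values (boundary_values,
+ *                                       system_matrix,
+ *                                       solution,
+ *                                       system_rhs,
+ *                                       false);
+ * @endcode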
*/
void
apply_boundary_values (const std::map<types::global_dof_index,double> &boundary_values,