From: Wolfgang Bangerth
Date: Mon, 9 Mar 2015 19:48:33 +0000 (-0500)
Subject: Document changes in semantics to apply_boundary_values.
X-Git-Tag: v8.3.0-rc1~380^2~3
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=557916ce65696c603848646043723ab6475d7bf4;p=dealii.git

Document changes in semantics to apply_boundary_values.
---

diff --git a/doc/news/changes.h b/doc/news/changes.h
index 47cad75662..3e672fb1ba 100644
--- a/doc/news/changes.h
+++ b/doc/news/changes.h
@@ -43,6 +43,10 @@ inconvenience this causes.
   but this is neither efficient nor safe. You will now have to do this
   yourself after assembling a matrix and before clearing rows.
+  The changes to the function above also affect the
+  MatrixTools::apply_boundary_values() variants that operate on Trilinos
+  matrices.
+
   (Wolfgang Bangerth, 2015/03/09)
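The news entry above amounts to a call-order requirement in user code: compress() must now be called explicitly after the last assembly operation and before apply_boundary_values() starts clearing rows. Below is a minimal sketch of that order, modeled loosely on step-17; the function and variable names are placeholders, not part of this patch, and the header names are those of the deal.II tree this commit targets. The same ordering applies to the Trilinos variants mentioned in the entry.

// Sketch only: apply_dirichlet, system_matrix, etc. are illustrative names.
#include <deal.II/lac/petsc_parallel_sparse_matrix.h>
#include <deal.II/lac/petsc_parallel_vector.h>
#include <deal.II/numerics/matrix_tools.h>

#include <map>

using namespace dealii;

void apply_dirichlet (const std::map<types::global_dof_index, double> &boundary_values,
                      PETScWrappers::MPI::SparseMatrix &system_matrix,
                      PETScWrappers::MPI::Vector       &solution,
                      PETScWrappers::MPI::Vector       &system_rhs)
{
  // apply_boundary_values() used to call compress() on the matrix itself;
  // after this change the caller must finish all pending additions first ...
  system_matrix.compress (VectorOperation::add);
  system_rhs.compress (VectorOperation::add);

  // ... and only then clear the boundary rows, as done in step-17/step-18.
  MatrixTools::apply_boundary_values (boundary_values,
                                      system_matrix,
                                      solution,
                                      system_rhs,
                                      false);
}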
diff --git a/include/deal.II/numerics/matrix_tools.h b/include/deal.II/numerics/matrix_tools.h
index 7dfc46796e..a4b8ed59a9 100644
--- a/include/deal.II/numerics/matrix_tools.h
+++ b/include/deal.II/numerics/matrix_tools.h
@@ -819,18 +819,18 @@ namespace MatrixTools
    * described in the general documentation. This function works on the
    * classes that are used to wrap PETSc objects.
    *
-   * Note that this function is not very efficient: it needs to alternatingly
-   * read and write into the matrix, a situation that PETSc does not handle
-   * too well. In addition, we only get rid of rows corresponding to boundary
-   * nodes, but the corresponding case of deleting the respective columns
-   * (i.e. if @p eliminate_columns is @p true) is not presently implemented,
-   * and probably will never because it is too expensive without direct access
-   * to the PETSc data structures. (This leads to the situation where the
-   * action indicated by the default value of the last argument is actually
-   * not implemented; that argument has true as its default value
-   * to stay consistent with the other functions of same name in this class.)
-   * A third reason against this function is that it doesn't handle the case
-   * where the matrix is distributed across an MPI system.
+   * Important: This function is not very efficient: it needs
+   * to alternatingly read and write into the matrix, a situation that
+   * PETSc does not handle well. In addition, we only get rid of
+   * rows corresponding to boundary nodes, but the corresponding case
+   * of deleting the respective columns (i.e. if @p eliminate_columns
+   * is @p true) is not presently implemented, and probably will never
+   * because it is too expensive without direct access to the PETSc
+   * data structures. (This leads to the situation where the action
+   * indicated by the default value of the last argument is actually
+   * not implemented; that argument has true as its
+   * default value to stay consistent with the other functions of same
+   * name in this class.)
    *
    * This function is used in step-17 and step-18.
    */
@@ -842,7 +842,24 @@ namespace MatrixTools
                          const bool        eliminate_columns = true);
 
   /**
-   * Same function, but for parallel PETSc matrices.
+   * Same function as above, but for parallel PETSc matrices.
+   *
+   * @note If the matrix is stored in parallel across multiple
+   * processors using MPI, this function only touches rows that are
+   * locally stored and simply ignores all other rows. In other words,
+   * each processor is responsible for its own rows, and the @p
+   * boundary_values argument needs to contain all locally owned rows
+   * of the matrix that you want to have treated. (But it can also
+   * contain entries for degrees of freedom not owned locally; these
+   * will simply be ignored.) Further, in the context of parallel
+   * computations, you will get into trouble if you treat a row while
+   * other processors still have pending writes or additions into the
+   * same row. In other words, if another processor still wants to add
+   * something to an element of a row and you call this function to
+   * zero out the row, then the next time you call compress() may add
+   * the remote value to the zero you just created. Consequently, you
+   * will want to call compress() after you made the last
+   * modifications to a matrix and before starting to clear rows.
    */
   void
   apply_boundary_values (const std::map &boundary_values,
@@ -852,15 +869,7 @@ namespace MatrixTools
                          const bool        eliminate_columns = true);
 
   /**
-   * Same function, but for parallel PETSc matrices. Note that this function
-   * only operates on the local range of the parallel matrix, i.e. it only
-   * eliminates rows corresponding to degrees of freedom for which the row is
-   * stored on the present processor. All other boundary nodes are ignored,
-   * and it doesn't matter whether they are present in the first argument to
-   * this function or not. A consequence of this, however, is that this
-   * function has to be called from all processors that participate in sharing
-   * the contents of the given matrices and vectors. It is also implied that
-   * the local range for all objects passed to this function is the same.
+   * Same function, but for parallel PETSc matrices and a non-parallel vector.
    */
   void
   apply_boundary_values (const std::map &boundary_values,
@@ -887,18 +896,18 @@ namespace MatrixTools
    * described in the general documentation. This function works on the
    * classes that are used to wrap Trilinos objects.
    *
-   * Note that this function is not very efficient: it needs to alternatingly
-   * read and write into the matrix, a situation that Trilinos does not handle
-   * too well. In addition, we only get rid of rows corresponding to boundary
-   * nodes, but the corresponding case of deleting the respective columns
-   * (i.e. if @p eliminate_columns is @p true) is not presently implemented,
-   * and probably will never because it is too expensive without direct access
-   * to the Trilinos data structures. (This leads to the situation where the
-   * action indicated by the default value of the last argument is actually
-   * not implemented; that argument has true as its default value
-   * to stay consistent with the other functions of same name in this class.)
-   * A third reason against this function is that it doesn't handle the case
-   * where the matrix is distributed across an MPI system.
+   * Important: This function is not very efficient: it needs
+   * to alternatingly read and write into the matrix, a situation that
+   * Trilinos does not handle well. In addition, we only get rid of
+   * rows corresponding to boundary nodes, but the corresponding case
+   * of deleting the respective columns (i.e. if @p eliminate_columns
+   * is @p true) is not presently implemented, and probably will never
+   * because it is too expensive without direct access to the Trilinos
+   * data structures. (This leads to the situation where the action
+   * indicated by the default value of the last argument is actually
+   * not implemented; that argument has true as its
+   * default value to stay consistent with the other functions of same
+   * name in this class.)
    */
   void
   apply_boundary_values (const std::map &boundary_values,
@@ -919,21 +928,24 @@ namespace MatrixTools
                          const bool        eliminate_columns = true);
 
   /**
-   * Apply Dirichlet boundary conditions to the system matrix and vectors as
-   * described in the general documentation. This function works on the
-   * classes that are used to wrap Trilinos objects.
+   * Same as above, but for parallel matrices and vectors.
    *
-   * Note that this function is not very efficient: it needs to alternatingly
-   * read and write into the matrix, a situation that Trilinos does not handle
-   * too well. In addition, we only get rid of rows corresponding to boundary
-   * nodes, but the corresponding case of deleting the respective columns
-   * (i.e. if @p eliminate_columns is @p true) is not presently implemented,
-   * and probably will never because it is too expensive without direct access
-   * to the Trilinos data structures. (This leads to the situation where the
-   * action indicated by the default value of the last argument is actually
-   * not implemented; that argument has true as its default value
-   * to stay consistent with the other functions of same name in this class.)
-   * This function does work on MPI vector types.
+   * @note If the matrix is stored in parallel across multiple
+   * processors using MPI, this function only touches rows that are
+   * locally stored and simply ignores all other rows. In other words,
+   * each processor is responsible for its own rows, and the @p
+   * boundary_values argument needs to contain all locally owned rows
+   * of the matrix that you want to have treated. (But it can also
+   * contain entries for degrees of freedom not owned locally; these
+   * will simply be ignored.) Further, in the context of parallel
+   * computations, you will get into trouble if you treat a row while
+   * other processors still have pending writes or additions into the
+   * same row. In other words, if another processor still wants to add
+   * something to an element of a row and you call this function to
+   * zero out the row, then the next time you call compress() may add
+   * the remote value to the zero you just created. Consequently, you
+   * will want to call compress() after you made the last
+   * modifications to a matrix and before starting to clear rows.
    */
   void
   apply_boundary_values (const std::map &boundary_values,
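The @note added for the parallel PETSc and Trilinos variants describes the same requirement from the caller's side: each process supplies the boundary values for (at least) its locally owned rows, all processes finish their pending additions with compress(), and only then are rows cleared. Below is a sketch under those assumptions using the Trilinos wrappers; the DoFHandler, boundary id, and function names are placeholders and not part of this patch.

// Sketch only: constrain_boundary, dof_handler, and the boundary id 0
// are illustrative assumptions, not taken from the patch.
#include <deal.II/base/function.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/numerics/matrix_tools.h>
#include <deal.II/numerics/vector_tools.h>

#include <map>

using namespace dealii;

template <int dim>
void constrain_boundary (const DoFHandler<dim>          &dof_handler,
                         TrilinosWrappers::SparseMatrix &system_matrix,
                         TrilinosWrappers::MPI::Vector  &solution,
                         TrilinosWrappers::MPI::Vector  &system_rhs)
{
  // Each process may build this map from all DoFs it knows about; entries
  // for rows it does not locally own are simply ignored by
  // apply_boundary_values().
  std::map<types::global_dof_index, double> boundary_values;
  VectorTools::interpolate_boundary_values (dof_handler,
                                            0,                  // boundary id, placeholder
                                            ZeroFunction<dim>(),
                                            boundary_values);

  // All processes must be done adding into the matrix and right hand side
  // before any row is cleared; otherwise a later compress() could re-add
  // remote contributions to rows that were just zeroed.
  system_matrix.compress (VectorOperation::add);
  system_rhs.compress (VectorOperation::add);

  // Each process now clears only the boundary rows it locally stores.
  MatrixTools::apply_boundary_values (boundary_values,
                                      system_matrix,
                                      solution,
                                      system_rhs,
                                      false); // eliminate_columns is not implemented here
}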