From: Wolfgang Bangerth
Date: Mon, 4 Apr 2005 18:40:20 +0000 (+0000)
Subject: Make the default value of preset_nonzero_locations = true.
X-Git-Tag: v8.0.0~14203
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=4ff1dda208ff5dfa1c9644ae549bbc46b9aa2d99;p=dealii.git

Make the default value of preset_nonzero_locations = true.

git-svn-id: https://svn.dealii.org/trunk@10344 0785d39b-7218-0410-832d-ea1e28bc413d
---

diff --git a/deal.II/doc/news/changes.html b/deal.II/doc/news/changes.html
index 66c1ef43cb..f8b50394a9 100644
--- a/deal.II/doc/news/changes.html
+++ b/deal.II/doc/news/changes.html
@@ -230,6 +230,19 @@ inconvenience this causes.
 
 lac
 
+
+  Fixed: The PETScWrappers::MPI::SparseMatrix class had
+  functions that allow taking a pre-existing sparsity pattern
+  as the basis for entry allocation. These functions had an
+  option to pre-set these entries in the underlying data
+  structures, but that option was disabled by default because
+  of unresolved questions about its effectiveness. This has
+  now been fixed: the code properly initializes these
+  elements, which makes the resulting matrix much faster to
+  use.
+
+  (WB, 2005/04/04)
 
   New: The ProductSparseMatrix implements the product of two
   rectangular sparse matrices with

diff --git a/deal.II/lac/include/lac/petsc_parallel_sparse_matrix.h b/deal.II/lac/include/lac/petsc_parallel_sparse_matrix.h
index c71dde22e0..d2cfa586ef 100644
--- a/deal.II/lac/include/lac/petsc_parallel_sparse_matrix.h
+++ b/deal.II/lac/include/lac/petsc_parallel_sparse_matrix.h
@@ -261,29 +261,13 @@ namespace PETScWrappers
         * significantly more efficient to
         * get memory allocation right from
         * the start.
-        *
-        * Despite the fact that it would
-        * seem to be an obvious win, setting
-        * the @p preset_nonzero_locations
-        * flag to @p true doesn't seem to
-        * accelerate program. Rather on the
-        * contrary, it seems to be able to
-        * slow down entire programs
-        * somewhat. This is suprising, since
-        * we can use efficient function
-        * calls into PETSc that allow to
-        * create multiple entries at once;
-        * nevertheless, given the fact that
-        * it is inefficient, the respective
-        * flag has a default value equal to
-        * @p false.
         */
        SparseMatrix (const MPI_Comm                  &communicator,
                      const CompressedSparsityPattern &sparsity_pattern,
                      const std::vector<unsigned int> &local_rows_per_process,
                      const std::vector<unsigned int> &local_columns_per_process,
                      const unsigned int               this_process,
-                     const bool                       preset_nonzero_locations = false);
+                     const bool                       preset_nonzero_locations = true);
 
        /**
         * This operator assigns a scalar to
@@ -368,29 +352,13 @@ namespace PETScWrappers
         * significantly more efficient to
         * get memory allocation right from
         * the start.
-        *
-        * Despite the fact that it would
-        * seem to be an obvious win, setting
-        * the @p preset_nonzero_locations
-        * flag to @p true doesn't seem to
-        * accelerate program. Rather on the
-        * contrary, it seems to be able to
-        * slow down entire programs
-        * somewhat. This is suprising, since
-        * we can use efficient function
-        * calls into PETSc that allow to
-        * create multiple entries at once;
-        * nevertheless, given the fact that
-        * it is inefficient, the respective
-        * flag has a default value equal to
-        * @p false.
         */
        void reinit (const MPI_Comm                  &communicator,
                     const CompressedSparsityPattern &sparsity_pattern,
                     const std::vector<unsigned int> &local_rows_per_process,
                     const std::vector<unsigned int> &local_columns_per_process,
                     const unsigned int               this_process,
-                    const bool                       preset_nonzero_locations = false);
+                    const bool                       preset_nonzero_locations = true);
 
        /**
         * Return a reference to the MPI
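
As a usage illustration, here is a minimal C++ sketch of how the constructor
touched by this commit might be called after the change. It is not part of the
commit: the function name fill_example, the choice of entry (0,0), and the
include paths are assumptions inferred from the file paths in the diff; only
the constructor signature and the new default value are taken from the commit
itself.

// Minimal sketch, not part of this commit. Include paths are inferred
// from the tree layout visible in the diff; all names below other than
// the wrapper classes themselves are hypothetical placeholders.
#include <lac/compressed_sparsity_pattern.h>
#include <lac/petsc_parallel_sparse_matrix.h>

#include <vector>

void fill_example (const MPI_Comm                  &communicator,
                   const CompressedSparsityPattern &sparsity_pattern,
                   const std::vector<unsigned int> &local_rows_per_process,
                   const std::vector<unsigned int> &local_columns_per_process,
                   const unsigned int               this_process)
{
  // preset_nonzero_locations now defaults to true, so this constructor
  // pre-sets every entry listed in sparsity_pattern in PETSc's data
  // structures. Passing `false` as a sixth argument restores the old
  // behavior of only sizing the allocation.
  PETScWrappers::MPI::SparseMatrix matrix (communicator,
                                           sparsity_pattern,
                                           local_rows_per_process,
                                           local_columns_per_process,
                                           this_process);

  // Writes to locations contained in the sparsity pattern now hit
  // pre-allocated entries instead of forcing PETSc to allocate memory
  // on the fly (assuming entry (0,0) is stored on this process and is
  // part of the pattern).
  matrix.add (0, 0, 1.0);

  // Finalize the matrix, exchanging any entries written to rows owned
  // by other processes.
  matrix.compress ();
}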