From: Bruno Turcksin
Date: Sun, 26 Apr 2015 21:59:58 +0000 (-0500)
Subject: Deprecate PETScWrappers::Vector and PETScWrappers::BlockVector and update the tutorials.
X-Git-Tag: v8.3.0-rc1~196^2~8
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=60d0571e616629b502c0fd9eff4a6b87c39690dd;p=dealii.git

Deprecate PETScWrappers::Vector and PETScWrappers::BlockVector and update the tutorials.
---

diff --git a/examples/step-17/step-17.cc b/examples/step-17/step-17.cc
index 5469fa5c86..84014df530 100644
--- a/examples/step-17/step-17.cc
+++ b/examples/step-17/step-17.cc
@@ -148,9 +148,9 @@ namespace Step17
     // parallel versions is denoted by the fact that we use the classes from the
     // PETScWrappers::MPI namespace; sequential versions of these
     // classes are in the PETScWrappers namespace, i.e. without
-    // the MPI part). Note also that we do not use a separate
-    // sparsity pattern, since PETSc manages that as part of its matrix data
-    // structures.
+    // the MPI part; be aware that these classes are deprecated).
+    // Note also that we do not use a separate sparsity pattern, since PETSc
+    // manages that as part of its matrix data structures.
     PETScWrappers::MPI::SparseMatrix system_matrix;

     PETScWrappers::MPI::Vector solution;
@@ -615,8 +615,8 @@ namespace Step17
     // another node in a simple way if we should need it, what we do here is
     // to get a copy of the distributed vector where we keep all elements
     // locally. This is simple, since the deal.II wrappers have a conversion
-    // constructor for the non-MPI vector class:
-    PETScWrappers::Vector localized_solution (solution);
+    // constructor for the deal.II vector class:
+    Vector<double> localized_solution (solution);

     // Then we distribute hanging node constraints on this local copy, i.e. we
     // compute the values of all constrained nodes:
@@ -781,7 +781,7 @@ namespace Step17
     // zero only, we need to have access to all elements of the solution
     // vector. So we need to get a local copy of the distributed vector, which
     // is in fact simple:
-    const PETScWrappers::Vector localized_solution (solution);
+    const Vector<double> localized_solution (solution);
     // The thing to notice, however, is that we do this localization operation
     // on all processes, not only the one that actually needs the data. This
     // can't be avoided, however, with the communication model of MPI: MPI
@@ -868,6 +868,111 @@ namespace Step17

   // @sect4{ElasticProblem::run}

+  // The sixth step is to take the solution just computed, and evaluate some
+  // kind of refinement indicator to refine the mesh. The problem is basically
+  // the same as with distributing hanging node constraints: in order to
+  // compute the error indicator, we need access to all elements of the
+  // solution vector. We then compute the indicators for the cells that belong
+  // to the present process, but then we need to distribute the refinement
+  // indicators into a distributed vector so that all processes have the
+  // values of the refinement indicator for all cells. But then, in order for
+  // each process to refine its copy of the mesh, it needs to have access to
+  // all refinement indicators locally, so it has to copy the global vector
+  // back into a local one. That's a little convoluted but, thinking about it,
+  // quite straightforward nevertheless. So here's how we do it:
+  template <int dim>
+  void ElasticProblem<dim>::refine_grid ()
+  {
+    // So, first part: get a local copy of the distributed solution
+    // vector. This is necessary since the error estimator needs to get at the
+    // value of neighboring cells even if they do not belong to the subdomain
+    // associated with the present MPI process:
+    const Vector<double> localized_solution (solution);
+
+    // Second part: set up a vector of error indicators for all cells and let
+    // the Kelly class compute refinement indicators for all cells belonging
+    // to the present subdomain/process. Note that the last argument of the
+    // call indicates which subdomain we are interested in. The three
+    // arguments before it are various other default arguments that one
+    // usually doesn't need (and doesn't state values for, but rather uses the
+    // defaults), but which we have to state here explicitly since we want to
+    // modify the value of a following argument (i.e. the one indicating the
+    // subdomain):
+    Vector<float> local_error_per_cell (triangulation.n_active_cells());
+    KellyErrorEstimator<dim>::estimate (dof_handler,
+                                        QGauss<dim-1>(2),
+                                        typename FunctionMap<dim>::type(),
+                                        localized_solution,
+                                        local_error_per_cell,
+                                        ComponentMask(),
+                                        0,
+                                        MultithreadInfo::n_threads(),
+                                        this_mpi_process);
+
+    // Now all processes have computed error indicators for their own cells
+    // and stored them in the respective elements of the
+    // local_error_per_cell vector. The elements of this vector
+    // for cells not on the present process are zero. However, since all
+    // processes have a copy of the entire triangulation and need to
+    // keep these copies in sync, they need the values of refinement
+    // indicators for all cells of the triangulation. Thus, we need to
+    // distribute our results. We do this by creating a distributed vector
+    // where each process has its share, and sets the elements it has
+    // computed. We will then later generate a local sequential copy of this
+    // distributed vector to allow each process to access all elements of this
+    // vector.
+    //
+    // So in the first step, we need to set up a %parallel vector. For
+    // simplicity, every process will own a chunk with as many elements as
+    // this process owns cells, so that the first chunk of elements is stored
+    // with process zero, the next chunk with process one, and so on. It is
+    // important to remark, however, that these elements are not necessarily
+    // the ones we will write to. This is so, since the order in which cells
+    // are arranged, i.e. the order in which the elements of the vector
+    // correspond to cells, is not ordered according to the subdomain these
+    // cells belong to. In other words, if on this process we compute
+    // indicators for cells of a certain subdomain, we may write the results
+    // to more or less random elements of the distributed vector that do not
+    // necessarily lie within the chunk of vector we own on the present
+    // process. They will subsequently have to be copied into another
+    // process's memory space, an operation that PETSc does for us when
+    // we call the compress function. This inefficiency could be
+    // avoided with some more code, but we refrain from it since it is not a
+    // major factor in the program's total runtime.
+    //
+    // So here's how we do it: count how many cells belong to this process,
+    // set up a distributed vector with that many elements to be stored
+    // locally, and copy over the elements we computed locally, then compress
+    // the result. In fact, we really only copy the elements that are nonzero,
+    // so we may miss a few that we computed to zero, but this won't hurt
+    // since the original values of the vector are zero anyway.
+    const unsigned int n_local_cells
+      = GridTools::count_cells_with_subdomain_association (triangulation,
+                                                           this_mpi_process);
+    PETScWrappers::MPI::Vector
+    distributed_all_errors (mpi_communicator,
+                            triangulation.n_active_cells(),
+                            n_local_cells);
+
+    for (unsigned int i=0; i<local_error_per_cell.size(); ++i)
+      if (local_error_per_cell(i) != 0)
+        distributed_all_errors(i) = local_error_per_cell(i);
+    distributed_all_errors.compress (VectorOperation::insert);
+
+    // So now we have this distributed vector that contains the refinement
+    // indicators for all cells. To use it, we need to obtain a local copy...
+    const Vector<float> localized_all_errors (distributed_all_errors);
+
+    // ...which we can then subsequently use to finally refine the grid:
+    GridRefinement::refine_and_coarsen_fixed_number (triangulation,
+                                                     localized_all_errors,
+                                                     0.3, 0.03);
+    triangulation.execute_coarsening_and_refinement ();
+  }
+
+
   // Lastly, here is the driver function. It is almost unchanged from step-8,
   // with the exception that we replace std::cout by the
diff --git a/examples/step-18/step-18.cc b/examples/step-18/step-18.cc
index d059460033..ea1906eac4 100644
--- a/examples/step-18/step-18.cc
+++ b/examples/step-18/step-18.cc
@@ -495,7 +495,7 @@ namespace Step18
     PETScWrappers::MPI::Vector system_rhs;

-    PETScWrappers::Vector incremental_displacement;
+    Vector<double> incremental_displacement;

     // The next block of variables is then related to the time dependent
     // nature of the problem: they denote the length of the time interval
diff --git a/examples/step-40/step-40.cc b/examples/step-40/step-40.cc
index 7f17aea51b..ba64b936c8 100644
--- a/examples/step-40/step-40.cc
+++ b/examples/step-40/step-40.cc
@@ -168,9 +168,9 @@ namespace Step40
     ConstraintMatrix constraints;

-    LA::MPI::SparseMatrix system_matrix;
-    LA::MPI::Vector locally_relevant_solution;
-    LA::MPI::Vector system_rhs;
+    LA::MPI::SparseMatrix system_matrix;
+    LA::MPI::Vector locally_relevant_solution;
+    LA::MPI::Vector system_rhs;

     ConditionalOStream pcout;
     TimerOutput computing_timer;
diff --git a/include/deal.II/lac/petsc_block_vector.h b/include/deal.II/lac/petsc_block_vector.h
index 2fc9a57f97..38080427dd 100644
--- a/include/deal.II/lac/petsc_block_vector.h
+++ b/include/deal.II/lac/petsc_block_vector.h
@@ -43,6 +43,8 @@ namespace PETScWrappers
    * interface, this class handles the actual allocation of vectors and
    * provides functions that are specific to the underlying vector type.
    *
+   * This class is deprecated; use PETScWrappers::MPI::BlockVector instead.
+   *
    * @ingroup Vectors
    *
    * @see
@@ -85,13 +87,13 @@ namespace PETScWrappers
      * of different sizes.
      */
     explicit BlockVector (const unsigned int num_blocks = 0,
-                          const size_type block_size = 0);
+                          const size_type block_size = 0) DEAL_II_DEPRECATED;

     /**
      * Copy-Constructor. Dimension set to that of V, all components are copied
      * from V
      */
-    BlockVector (const BlockVector &V);
+    BlockVector (const BlockVector &V) DEAL_II_DEPRECATED;

     /**
      * Copy-constructor: copy the values from a PETSc wrapper parallel block
@@ -103,13 +105,13 @@ namespace PETScWrappers
      * It is not sufficient if only one processor tries to copy the elements
      * from the other processors over to its own process space.
      */
-    explicit BlockVector (const MPI::BlockVector &v);
+    explicit BlockVector (const MPI::BlockVector &v) DEAL_II_DEPRECATED;

     /**
      * Constructor. Set the number of blocks to n.size() and
      * initialize each block with n[i] zero elements.
      */
-    BlockVector (const std::vector<size_type> &n);
+    BlockVector (const std::vector<size_type> &n) DEAL_II_DEPRECATED;

     /**
      * Constructor. Set the number of blocks to n.size(). Initialize
@@ -122,7 +124,7 @@ namespace PETScWrappers
     template <typename InputIterator>
     BlockVector (const std::vector<size_type> &n,
                  const InputIterator first,
-                 const InputIterator end);
+                 const InputIterator end) DEAL_II_DEPRECATED;

     /**
      * Destructor. Clears memory
diff --git a/include/deal.II/lac/petsc_vector.h b/include/deal.II/lac/petsc_vector.h
index befd569c0f..5e953c2810 100644
--- a/include/deal.II/lac/petsc_vector.h
+++ b/include/deal.II/lac/petsc_vector.h
@@ -45,6 +45,8 @@ namespace PETScWrappers
    * virtual functions). Only the functions creating a vector of specific type
    * differ, and are implemented in this particular class.
    *
+   * This class is deprecated; use PETScWrappers::MPI::Vector instead.
+   *
    * @ingroup Vectors
    * @author Wolfgang Bangerth, 2004
    */
@@ -71,7 +73,7 @@ namespace PETScWrappers
     /**
      * Default constructor. Initialize the vector as empty.
      */
-    Vector ();
+    Vector () DEAL_II_DEPRECATED;

     /**
      * Constructor. Set dimension to @p n and initialize all elements with
@@ -83,14 +85,14 @@ namespace PETScWrappers
      * v=Vector@<number@>(0);, i.e. the vector is replaced by one of
      * length zero.
      */
-    explicit Vector (const size_type n);
+    explicit Vector (const size_type n) DEAL_II_DEPRECATED;

     /**
      * Copy-constructor from deal.II vectors. Sets the dimension to that of
      * the given vector, and copies all elements.
      */
     template <typename number>
-    explicit Vector (const dealii::Vector<number> &v);
+    explicit Vector (const dealii::Vector<number> &v) DEAL_II_DEPRECATED;

     /**
      * Construct it from an existing PETSc Vector of type Vec. Note: this does
@@ -98,12 +100,12 @@ namespace PETScWrappers
      * the vector is not used twice at the same time or destroyed while in
      * use. This class does not destroy the PETSc object. Handle with care!
      */
-    explicit Vector (const Vec &v);
+    explicit Vector (const Vec &v) DEAL_II_DEPRECATED;

     /**
      * Copy-constructor: copy the values from a PETSc wrapper vector class.
      */
-    Vector (const Vector &v);
+    Vector (const Vector &v) DEAL_II_DEPRECATED;

     /**
      * Copy-constructor: copy the values from a PETSc wrapper parallel vector
@@ -114,7 +116,7 @@ namespace PETScWrappers
      * It is not sufficient if only one processor tries to copy the elements
      * from the other processors over to its own process space.
      */
-    explicit Vector (const MPI::Vector &v);
+    explicit Vector (const MPI::Vector &v) DEAL_II_DEPRECATED;

     /**
      * Copy the given vector. Resize the present vector if necessary.
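
For readers updating their own application codes after this change, the following stand-alone snippet sketches the pattern the patched tutorials use: assemble into a distributed PETScWrappers::MPI::Vector and, wherever a complete local copy is needed (hanging-node constraints, error estimation, output), convert it into a plain deal.II Vector<double> instead of constructing a deprecated PETScWrappers::Vector. The snippet is illustrative only and not part of the commit; the vector sizes, the variable names, and the deal.II 8.3-era header paths are assumptions.

// Illustrative sketch, not part of the commit: localize a distributed
// PETScWrappers::MPI::Vector into a deal.II Vector<double> rather than
// into the deprecated PETScWrappers::Vector.
#include <deal.II/base/mpi.h>
#include <deal.II/lac/petsc_parallel_vector.h>
#include <deal.II/lac/vector.h>

#include <iostream>

int main (int argc, char **argv)
{
  using namespace dealii;

  // Initialize MPI (and PETSc) for the lifetime of this object.
  Utilities::MPI::MPI_InitFinalize mpi_initialization (argc, argv, 1);

  const MPI_Comm     mpi_communicator = MPI_COMM_WORLD;
  const unsigned int n_processes      = Utilities::MPI::n_mpi_processes (mpi_communicator);
  const unsigned int this_process     = Utilities::MPI::this_mpi_process (mpi_communicator);

  // A small distributed vector; each process owns a contiguous chunk of
  // 'local_size' elements, just like the error-indicator vector in step-17.
  const unsigned int local_size  = 8;
  const unsigned int global_size = local_size * n_processes;
  PETScWrappers::MPI::Vector distributed (mpi_communicator, global_size, local_size);

  // Write into the locally owned range, then exchange data between processes.
  for (unsigned int i = local_size * this_process; i < local_size * (this_process + 1); ++i)
    distributed(i) = i;
  distributed.compress (VectorOperation::insert);

  // Deprecated:  PETScWrappers::Vector localized (distributed);
  // Instead, use the conversion constructor of the deal.II vector class,
  // executed on all processes, so that every process gets a full local copy:
  Vector<double> localized (distributed);

  if (this_process == 0)
    std::cout << "l2 norm of localized copy: " << localized.l2_norm() << std::endl;

  return 0;
}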