From 7c87d8c230079f981458b88662a820465a8ddf9e Mon Sep 17 00:00:00 2001
From: Denis Davydov
Date: Sun, 8 Apr 2018 22:32:06 +0200
Subject: [PATCH] doc: mention Kelly estimator in step-37 extensions

---
 examples/step-37/doc/results.dox | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/examples/step-37/doc/results.dox b/examples/step-37/doc/results.dox
index 5a7789d33c..262eaebd77 100644
--- a/examples/step-37/doc/results.dox
+++ b/examples/step-37/doc/results.dox
@@ -402,6 +402,37 @@ processors in MPI, which limits scalability by a factor of around two to five.

Possibilities for extensions


Kelly error estimator

As mentioned above, the code is ready for locally adaptive h-refinement.
For the Poisson equation one can employ the Kelly error indicator,
implemented in the KellyErrorEstimator class. However, one needs to be
careful with the ghost indices of parallel vectors: in order to evaluate
the jump terms of the error indicator, each MPI process needs access to
all locally relevant DoFs, whereas the MatrixFree::initialize_dof_vector()
function initializes the vector with only some of them. The ghost indices
made available in the vector are a tight set of only those indices that
are touched in the cell integrals (including constraint resolution). This
choice is made for performance reasons, because sending all locally
relevant degrees of freedom would be too expensive compared to the
matrix-vector product. Consequently, the solution vector as-is is not
suitable for the KellyErrorEstimator class. The trick is to change the
ghost part of the partition, for example using a temporary vector and
LinearAlgebra::distributed::Vector::copy_locally_owned_data_from()
as shown below.

@code
IndexSet locally_relevant_set;
DoFTools::extract_locally_relevant_dofs(dof_handler,
                                        locally_relevant_set);
// Save the locally owned data in a temporary vector ...
LinearAlgebra::distributed::Vector<double> copy_vec(solution);
// ... re-initialize the solution vector with all locally relevant DoFs
// as ghost entries ...
solution.reinit(locally_owned_dofs, locally_relevant_set, mpi_communicator);
// ... and copy the owned data back before filling the ghost entries.
solution.copy_locally_owned_data_from(copy_vec);
constraints.distribute(solution);
solution.update_ghost_values();
@endcode
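
Once the ghost entries cover all locally relevant DoFs, the re-ghosted
solution vector can be handed to KellyErrorEstimator in the usual way.
The following is only a minimal sketch, not part of the patch: it assumes
the usual step-37 objects triangulation, fe, and dof_handler are in scope,
and the refinement/coarsening fractions are merely illustrative.

@code
// Requires <deal.II/numerics/error_estimator.h> and
// <deal.II/distributed/grid_refinement.h>.
Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
KellyErrorEstimator<dim>::estimate(
  dof_handler,
  QGauss<dim - 1>(fe.degree + 1),
  std::map<types::boundary_id, const Function<dim> *>(), // no Neumann data
  solution,
  estimated_error_per_cell);

// Mark cells for adaptive refinement; 0.3/0.03 are illustrative fractions.
parallel::distributed::GridRefinement::refine_and_coarsen_fixed_number(
  triangulation, estimated_error_per_cell, 0.3, 0.03);
triangulation.execute_coarsening_and_refinement();
@endcode

After refinement, the MatrixFree object and all vectors have to be set up
again (as in the tutorial's setup_system()), which also restores the tight
ghost set that is optimal for matrix-vector products.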

Shared-memory parallelization

This program is parallelized with MPI only. As an alternative, the MatrixFree
-- 
2.39.5
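
The trailing context line above belongs to the tutorial's discussion of
shared-memory parallelization: besides MPI, the cell loops of MatrixFree
can use its built-in task-based parallelization. As a minimal sketch, not
taken from the patch, the switch lives in MatrixFree::AdditionalData
(deal.II must be configured with threading support); the chosen scheme
below is only one of the available options.

@code
// Passed to MatrixFree::reinit() during setup (see setup_system() in the
// tutorial); tasks_parallel_scheme = none corresponds to the MPI-only case.
typename MatrixFree<dim, double>::AdditionalData additional_data;
additional_data.tasks_parallel_scheme =
  MatrixFree<dim, double>::AdditionalData::partition_color;
// 0 lets MatrixFree choose the number of cells per task block.
additional_data.tasks_block_size = 0;
@endcode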