// manifold indicator, a manifold that produces straight edges is
// implied. (Manifold indicators are a slightly complicated topic; if
// you're confused about what exactly is happening here, you may want to
- // look at the @ref GlossManifoldIndicator "glossary entry on this
- // topic".) Since the default chosen by GridGenerator::hyper_shell is
- // reasonable we leave things alone.
+ // look at the
+ // @ref GlossManifoldIndicator "glossary entry on this topic".)
+ // Since the default chosen by GridGenerator::hyper_shell is reasonable
+ // we leave things alone.
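+ //
+ // (Purely as an illustration, and with a made-up center and made-up
+ // radii rather than the values used in this program, attaching a
+ // curved manifold by hand would look like this:
+ // @code
+ //   Triangulation<2> triangulation;
+ //   GridGenerator::hyper_shell(triangulation,
+ //                              Point<2>(), // center, illustrative
+ //                              0.5,        // inner radius, illustrative
+ //                              1.0);       // outer radius, illustrative
+ //   triangulation.set_all_manifold_ids(0);
+ //   triangulation.set_manifold(0, SphericalManifold<2>());
+ // @endcode
+ // Again, none of this is necessary here.)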
//
// In order to demonstrate how to write a loop over all cells, we will
// refine the grid in five steps towards the inner circle of the domain:
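+ // As a sketch only, with an illustrative inner radius of 0.5 rather
+ // than the value actually used below, such a refinement loop could
+ // look like this:
+ // @code
+ //   for (unsigned int step = 0; step < 5; ++step)
+ //     {
+ //       for (auto &cell : triangulation.active_cell_iterators())
+ //         for (const auto v : cell->vertex_indices())
+ //           if (std::fabs(cell->vertex(v).norm() - 0.5) < 1e-10)
+ //             {
+ //               cell->set_refine_flag();
+ //               break;
+ //             }
+ //       triangulation.execute_coarsening_and_refinement();
+ //     }
+ // @endcode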
will be a few points where we have to limit loops over all cells to
those that are locally owned, or where we need to distinguish between
vectors that store only locally owned elements and those that store
-everything that is locally relevant (see @ref GlossLocallyRelevantDof
-"this glossary entry"), but by and large the amount of heavy lifting
-necessary to make the program run in %parallel is well hidden in the
-libraries upon which this program builds. In any case, we will comment
-on these locations as we get to them in the program code.
+everything that is locally relevant (see
+@ref GlossLocallyRelevantDof "this glossary entry"), but by and large the
+amount of heavy lifting necessary to make the program run in %parallel is
+well hidden in the libraries upon which this program builds. In any case,
+we will comment on these locations as we get to them in the program code.
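+
+As a minimal sketch (assuming a distributed DoFHandler object called
+dof_handler; all names here are for illustration only, and the
+return-value form of DoFTools::extract_locally_relevant_dofs() requires a
+recent version of deal.II), the two patterns just mentioned look like
+this:
+@code
+  for (const auto &cell : dof_handler.active_cell_iterators())
+    if (cell->is_locally_owned())
+      {
+        // work on cells owned by this MPI rank only
+      }
+
+  const IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
+  const IndexSet locally_relevant_dofs =
+    DoFTools::extract_locally_relevant_dofs(dof_handler);
+
+  // a vector storing only the locally owned elements:
+  LinearAlgebra::distributed::Vector<double> owned_vector(
+    locally_owned_dofs, MPI_COMM_WORLD);
+
+  // a vector that additionally provides read access to the ghosted,
+  // locally relevant elements:
+  LinearAlgebra::distributed::Vector<double> relevant_vector(
+    locally_owned_dofs, locally_relevant_dofs, MPI_COMM_WORLD);
+@endcode
+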
<h3> Parallelization within individual nodes of a cluster </h3>
The deal.II parallel vector class, LinearAlgebra::distributed::Vector, holds
the processor-local part of the solution as well as data fields for ghosted
DoFs, i.e. DoFs that are owned by a remote processor but accessed by cells
-that are owned by the present processor. In the @ref GlossLocallyActiveDof
-"glossary" these degrees of freedom are referred to as locally active degrees
-of freedom. The function MatrixFree::initialize_dof_vector() provides a method
-that sets this design. Note that hanging nodes can relate to additional
-ghosted degrees of freedom that must be included in the distributed vector but
-are not part of the locally active DoFs in the sense of the @ref
-GlossLocallyActiveDof "glossary". Moreover, the distributed vector holds the
-MPI metadata for DoFs that are owned locally but needed by other
+that are owned by the present processor. In the
+@ref GlossLocallyActiveDof "glossary" these degrees of freedom are referred
+to as locally active degrees of freedom. The function
+MatrixFree::initialize_dof_vector() sets up a vector with this layout.
+Note that hanging nodes can give rise to additional ghosted degrees of
+freedom that must be included in the distributed vector but are not part of
+the locally active DoFs in the sense of the
+@ref GlossLocallyActiveDof "glossary". Moreover, the distributed vector
+holds the MPI metadata for DoFs that are owned locally but needed by other
processors. A benefit of the design of this vector class is the way ghosted
entries are accessed. In the storage scheme of the vector, the data array
extends beyond the processor-local part of the solution with further vector
-entries available for the ghosted degrees of freedom. This gives a contiguous
-index range for all locally active degrees of freedom. (Note that the index
-range depends on the exact configuration of the mesh.) Since matrix-free
-operations can be thought of doing linear algebra that is performance
-critical, and performance-critical code cannot waste time on doing MPI-global
-to MPI-local index translations, the availability of an index spaces local to
-one MPI rank is fundamental. The way things are accessed here is a direct
-array access. This is provided through
+entries available for the ghosted degrees of freedom. This gives a
+contiguous index range for all locally active degrees of freedom. (Note
+that the index range depends on the exact configuration of the mesh.) Since
+matrix-free operations can be thought of as doing linear algebra that is
+performance critical, and performance-critical code cannot waste time on
+doing MPI-global to MPI-local index translations, the availability of an
+index space local to one MPI rank is fundamental. The way things are
+accessed here is direct array access. This is provided through
LinearAlgebra::distributed::Vector::local_element(), but it is actually rarely
needed because all of this happens internally in FEEvaluation.
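+
+As a short sketch (here matrix_free stands for an already set up MatrixFree
+object, and the loop body is for illustration only):
+@code
+  LinearAlgebra::distributed::Vector<double> vec;
+  matrix_free.initialize_dof_vector(vec);
+
+  // direct array access into the MPI-rank-local index range; the ghost
+  // entries are stored contiguously behind the locally owned ones
+  for (unsigned int i = 0; i < vec.locally_owned_size(); ++i)
+    vec.local_element(i) = 0.;
+@endcode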
// appearing on locally owned cells (plus those referenced via hanging node
// constraints) are necessary. However, in deal.II we often set all the
// degrees of freedom on ghosted elements as ghosted vector entries, called
- // the @ref GlossLocallyRelevantDof "locally relevant DoFs described in the
- // glossary". In that case, the MPI-local index of a ghosted vector entry
- // can in general be different in the two possible ghost sets, despite
- // referring to the same global index. To avoid problems, FEEvaluation
- // checks that the partitioning of the vector used for the matrix-vector
- // product does indeed match with the partitioning of the indices in
- // MatrixFree by a check called
+ // the
+ // @ref GlossLocallyRelevantDof "locally relevant DoFs described in the glossary".
+ // In that case, the MPI-local index of a ghosted vector entry can in
+ // general be different in the two possible ghost sets, despite referring
+ // to the same global index. To avoid problems, FEEvaluation checks that
+ // the partitioning of the vector used for the matrix-vector product does
+ // indeed match the partitioning of the indices in MatrixFree by calling
// LinearAlgebra::distributed::Vector::partitioners_are_compatible. To
// facilitate things, the MatrixFreeOperators::Base class includes a
// mechanism to fit the ghost set to the correct layout. This happens in the