we care about is that it stores those elements locally that we need.
</ul>
-There are a number of other concepts relevant to distributing the mesh to a
-number of processors; you may want to take a look at the @ref distributed
-module and step-40 before trying to understand this program. The rest of the
-program is almost completely agnostic about the fact that we don't store all
-objects completely locally. There will be a few points where we have to limit
-loops over all cells to those that are locally owned, or where we need to
-distinguish between vectors that store only locally owned elements and those
-that store everything that is locally relevant (see @ref
-GlossLocallyRelevantDof "this glossary entry"), and there will be a few places
-where we have to
-explicitly call MPI functions, but by and large the amount of heavy lifting
-necessary to make the program run in %parallel is well hidden in the libraries
-upon which this program builds. In any case, we will comment on these
-locations as we get to them in the program code.
+There are a number of other concepts relevant to distributing the mesh
+among multiple processors; you may want to take a look at the @ref
+distributed module and step-40 before trying to understand this
+program. The rest of the program is almost completely agnostic about
+the fact that we don't store all objects locally. There will be a few
+points where we have to limit loops over all cells to those that are
+locally owned, or where we need to distinguish between vectors that
+store only locally owned elements and those that store everything
+that is locally relevant (see @ref GlossLocallyRelevantDof
+"this glossary entry"), but by and large the amount of heavy lifting
+necessary to make the program run in %parallel is well hidden in the
+libraries upon which this program builds. In any case, we will comment
+on these locations as we get to them in the program code.
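+
+As a rough illustration of these two points, a loop restricted to the
+locally owned cells and the distinction between a vector that stores
+only locally owned elements and one that also stores the locally
+relevant (ghost) elements could look like the following sketch. The
+variable names are made up for illustration and do not necessarily
+match those used in the program:
+@code
+  // Assemble only on the cells this processor owns:
+  for (const auto &cell : dof_handler.active_cell_iterators())
+    if (cell->is_locally_owned())
+      {
+        // ...compute local contributions and copy them into the
+        // distributed matrix and right hand side objects...
+      }
+
+  // A vector that stores only the locally owned elements (used when
+  // solving linear systems) versus one that also stores the locally
+  // relevant ghost elements (used wherever we need read access to
+  // elements owned by neighboring processors):
+  const IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
+  IndexSet locally_relevant_dofs;
+  DoFTools::extract_locally_relevant_dofs(dof_handler,
+                                          locally_relevant_dofs);
+
+  TrilinosWrappers::MPI::Vector distributed_solution(locally_owned_dofs,
+                                                     MPI_COMM_WORLD);
+  TrilinosWrappers::MPI::Vector relevant_solution(locally_owned_dofs,
+                                                  locally_relevant_dofs,
+                                                  MPI_COMM_WORLD);
+
+  // ...solve into distributed_solution, then copy it into the ghosted
+  // vector; the assignment also imports the ghost elements:
+  relevant_solution = distributed_solution;
+@endcode
+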
<h3> Parallelization within individual nodes of a cluster </h3>
single multicore machine, then it may make sense to use threads.
-<h3> Implementation details </h3>
-
-One detail worth discussing before we show the actual implementation
-is how to deal with distributed vs. localized vectors. When we build
-the linear systems, the various matrices and right hand side vectors
-are always built distributed, i.e. each processor builds contributions
-for a subset of cells and adds them to the global matrix objects;
-Trilinos then makes sure that the right entries end up on the right
-processors so that each processor stores a part of these distributed
-objects.
-
-When we then come around to solving linear systems, the solution
-vectors will also be distributed. On the other hand, in later steps,
-processors will need to be able to access random elements for various
-tasks. For example, processor zero needs to have access to all
-elements in order to produce output; all processors need to have
-access to temperature and Stokes solution data on the cells they own
-to build right hand side vectors for the next time step; and all
-processors need to access data on the cells they own as well as their
-neighbors (whether these neighbors are owned by the same process or
-another one) in order to compute the jump of the gradient across cell
-interfaces in the Kelly error estimator when computing refinement
-information.
-
-In other words, for some operations we will have to exchange
-information between processors. We could try to be smart and really
-only exchange that data that we really need. This would probably be
-important if we were to run this program on thousands of processors,
-as then storing <i>all</i> elements of a solution vector may become
-impractical. On the other hand, deal.II currently has a number of
-other bottlenecks that make sure that thousands of processors is not
-possible anyway. Consequently, here we opt for the simplest choice:
-solve linear systems using distributed vectors, and immediately
-afterwards each processor requests a localized copy of the solution
-vector containing all elements that we will use henceforth for all
-operations. The distributed vector is no longer necessary at this
-point and is, in fact, deallocated again immediately.
-
+@todo Talk about the input file.
<h3> The testcase </h3>