From: bangerth
Date: Mon, 26 Sep 2011 19:25:25 +0000 (+0000)
Subject: Remove now outdated sections.
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=3187294dc33b6d87cc90874283f8361fc3972119;p=dealii-svn.git

Remove now outdated sections.

git-svn-id: https://svn.dealii.org/trunk@24437 0785d39b-7218-0410-832d-ea1e28bc413d
---

diff --git a/deal.II/examples/step-32/doc/intro.dox b/deal.II/examples/step-32/doc/intro.dox
index e48d95a3ef..4adc763805 100644
--- a/deal.II/examples/step-32/doc/intro.dox
+++ b/deal.II/examples/step-32/doc/intro.dox
@@ -589,20 +589,19 @@ are part of Trilinos' Epetra library of basic linear algebra and tool classes:
 we care about is that it stores those elements locally that we need.
 
 
-There are a number of other concepts relevant to distributing the mesh to a
-number of processors; you may want to take a look at the @ref distributed
-module and step-40 before trying to understand this program. The rest of the
-program is almost completely agnostic about the fact that we don't store all
-objects completely locally. There will be a few points where we have to limit
-loops over all cells to those that are locally owned, or where we need to
-distinguish between vectors that store only locally owned elements and those
-that store everything that is locally relevant (see @ref
-GlossLocallyRelevantDof "this glossary entry"), and there will be a few places
-where we have to
-explicitly call MPI functions, but by and large the amount of heavy lifting
-necessary to make the program run in %parallel is well hidden in the libraries
-upon which this program builds. In any case, we will comment on these
-locations as we get to them in the program code.
+There are a number of other concepts relevant to distributing the mesh
+to a number of processors; you may want to take a look at the @ref
+distributed module and step-40 before trying to understand this
+program. The rest of the program is almost completely agnostic about
+the fact that we don't store all objects completely locally. There
+will be a few points where we have to limit loops over all cells to
+those that are locally owned, or where we need to distinguish between
+vectors that store only locally owned elements and those that store
+everything that is locally relevant (see @ref GlossLocallyRelevantDof
+"this glossary entry"), but by and large the amount of heavy lifting
+necessary to make the program run in %parallel is well hidden in the
+libraries upon which this program builds. In any case, we will comment
+on these locations as we get to them in the program code.

 <h3> Parallelization within individual nodes of a cluster </h3>

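To make the paragraph added in the hunk above concrete: a minimal sketch, not taken from the patch or from step-32 itself, of what "limiting loops to locally owned cells" and "vectors with only locally owned vs. all locally relevant elements" look like in code. The names `dof_handler` and `mpi_communicator` are assumed to exist in the surrounding program, and the exact constructor signatures of the Trilinos wrapper classes may differ between deal.II versions.

@code
#include <deal.II/base/index_set.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/trilinos_vector.h>

template <int dim>
void example(const dealii::DoFHandler<dim> &dof_handler,
             MPI_Comm                       mpi_communicator)
{
  using namespace dealii;

  // Degrees of freedom this processor owns, and those it needs to read
  // (owned plus ghost entries on neighboring cells):
  const IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
  IndexSet       locally_relevant_dofs;
  DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

  // A vector that stores only the locally owned elements; this is what
  // assembly and the linear solver write into:
  TrilinosWrappers::MPI::Vector distributed_solution(locally_owned_dofs,
                                                     mpi_communicator);

  // A vector that additionally stores the locally relevant (ghost)
  // elements, used wherever we read the solution on owned cells and on
  // their neighbors:
  TrilinosWrappers::MPI::Vector relevant_solution(locally_owned_dofs,
                                                  locally_relevant_dofs,
                                                  mpi_communicator);
  relevant_solution = distributed_solution; // imports ghost values

  // Loops over cells are limited to those this processor owns:
  typename DoFHandler<dim>::active_cell_iterator
    cell = dof_handler.begin_active(),
    endc = dof_handler.end();
  for (; cell != endc; ++cell)
    if (cell->is_locally_owned())
      {
        // ...assemble local contributions and add them to the global,
        // distributed matrix and right hand side...
      }
}
@endcode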
@@ -694,44 +693,7 @@ for this machine.
 On the other hand, if you want to run this program on a single
 multicore machine, then it may make sense to use threads.
-
-<h3> Implementation details </h3>
-
-One detail worth discussing before we show the actual implementation
-is how to deal with distributed vs. localized vectors. When we build
-the linear systems, the various matrices and right hand side vectors
-are always built distributed, i.e. each processor builds contributions
-for a subset of cells and adds them to the global matrix objects;
-Trilinos then makes sure that the right entries end up on the right
-processors so that each processor stores a part of these distributed
-objects.
-
-When we then come around to solving linear systems, the solution
-vectors will also be distributed. On the other hand, in later steps,
-processors will need to be able to access random elements for various
-tasks. For example, processor zero needs to have access to all
-elements in order to produce output; all processors need to have
-access to temperature and Stokes solution data on the cells they own
-to build right hand side vectors for the next time step; and all
-processors need to access data on the cells they own as well as their
-neighbors (whether these neighbors are owned by the same process or
-another one) in order to compute the jump of the gradient across cell
-interfaces in the Kelly error estimator when computing refinement
-information.
-
-In other words, for some operations we will have to exchange
-information between processors. We could try to be smart and really
-only exchange that data that we really need. This would probably be
-important if we were to run this program on thousands of processors,
-as then storing all elements of a solution vector may become
-impractical. On the other hand, deal.II currently has a number of
-other bottlenecks that make sure that thousands of processors is not
-possible anyway. Consequently, here we opt for the simplest choice:
-solve linear systems using distributed vectors, and immediately
-afterwards each processor requests a localized copy of the solution
-vector containing all elements that we will use henceforth for all
-operations. The distributed vector is no longer necessary at this
-point and is, in fact, deallocated again immediately.
-
+@todo talk about the input file

 <h3> The testcase </h3>
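The removed "Implementation details" section described solving on distributed vectors and then giving every processor a fully localized copy of the solution, which is precisely the approach this commit marks as outdated. The pattern the surviving text points to (as in step-40) is to keep solution vectors distributed and only import the locally relevant ghost elements after each solve. A rough sketch of that pattern follows; it is not part of the patch, and class and function names follow the current Trilinos wrappers and may differ from the 2011 sources.

@code
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/trilinos_precondition.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/trilinos_vector.h>

using namespace dealii;

// Solve A x = b on fully distributed vectors, then copy the result into a
// ghosted vector that also stores the locally relevant elements, so that
// output routines and the Kelly error estimator can read data on
// neighboring cells without any processor holding the complete vector.
void solve_and_update_ghosts(
  const TrilinosWrappers::SparseMatrix &system_matrix,
  const TrilinosWrappers::MPI::Vector  &system_rhs,
  TrilinosWrappers::MPI::Vector        &distributed_solution,
  TrilinosWrappers::MPI::Vector        &relevant_solution)
{
  SolverControl solver_control(1000, 1e-12 * system_rhs.l2_norm());
  SolverCG<TrilinosWrappers::MPI::Vector> cg(solver_control);

  TrilinosWrappers::PreconditionAMG preconditioner;
  preconditioner.initialize(system_matrix);

  // The solver only ever touches locally owned elements.
  cg.solve(system_matrix, distributed_solution, system_rhs, preconditioner);

  // One collective exchange imports the ghost values; no processor
  // requests a localized copy of the whole vector.
  relevant_solution = distributed_solution;
}
@endcode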