From f030ee356eb406a8cfe7257eae006f483d656957 Mon Sep 17 00:00:00 2001
From: Wolfgang Bangerth
Date: Fri, 14 Aug 2009 00:15:24 +0000
Subject: [PATCH] Fix one TODO.

git-svn-id: https://svn.dealii.org/trunk@19260 0785d39b-7218-0410-832d-ea1e28bc413d
---
 deal.II/examples/step-32/doc/intro.dox | 38 ++++++++++++++++++++++++--
 1 file changed, 35 insertions(+), 3 deletions(-)

diff --git a/deal.II/examples/step-32/doc/intro.dox b/deal.II/examples/step-32/doc/intro.dox
index daffa7b47a..d06d0edbd5 100644
--- a/deal.II/examples/step-32/doc/intro.dox
+++ b/deal.II/examples/step-32/doc/intro.dox
@@ -279,9 +279,41 @@ process in use.

Implementation details

-TODO: Wolfgang
-Mention that we immediately localize all solution vectors after
-solving.
+One detail worth discussing before we show the actual implementation
+is how to deal with distributed vs. localized vectors. When we build
+the linear systems, the various matrices and right hand side vectors
+are always built distributed, i.e., each processor builds the
+contributions for a subset of cells and adds them to the global
+matrix objects; Trilinos then makes sure that the right entries end
+up on the right processors, so that each processor stores a part of
+these distributed objects.
+
+When we then come around to solving these linear systems, the
+solution vectors will also be distributed. On the other hand, in
+later steps, processors need to be able to access arbitrary
+individual elements of these vectors for various tasks. For example,
+processor zero needs access to all elements in order to produce
+output; all processors need access to temperature and Stokes solution
+data on the cells they own in order to build the right hand side
+vectors for the next time step; and all processors need access to
+data on the cells they own as well as on their neighbors (whether
+these neighbors are owned by the same process or another one) in
+order to compute the jump of the gradient across cell interfaces in
+the Kelly error estimator when computing refinement information.
+
+In other words, for some operations we will have to exchange
+information between processors. We could try to be smart and exchange
+only the data each processor actually needs. This would be important
+if we were to run this program on thousands of processors, since then
+storing all elements of a solution vector on every processor may
+become impractical. On the other hand, deal.II currently has a number
+of other bottlenecks that prevent us from scaling to thousands of
+processors anyway. Consequently, we opt for the simplest choice here:
+solve the linear systems using distributed vectors, and immediately
+afterwards have each processor request a localized copy of the
+solution vector that contains all elements; this copy is then used
+for all subsequent operations. The distributed vector is no longer
+necessary at this point and is, in fact, deallocated again right away.

The testcase

-- 
2.39.5
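
For illustration, the "solve, then immediately localize" pattern that the new
paragraphs describe looks roughly like the following sketch. It is not taken
verbatim from step-32: the wrapper class names are those of the deal.II
Trilinos wrappers of that era, while the helper function, solver and
preconditioner choices, and include paths are assumptions made for this
example.

@code
// Sketch only: solve a linear system into a distributed Trilinos
// vector, then let every processor pull a fully localized copy and
// let the distributed vector go away. Headers, the helper's name and
// signature, and the CG/AMG choices are illustrative assumptions.
#include <lac/solver_control.h>
#include <lac/trilinos_sparse_matrix.h>
#include <lac/trilinos_vector.h>
#include <lac/trilinos_solver.h>
#include <lac/trilinos_precondition.h>

using namespace dealii;

void solve_and_localize (const TrilinosWrappers::SparseMatrix &matrix,
                         const TrilinosWrappers::MPI::Vector  &rhs,
                         TrilinosWrappers::Vector             &localized_solution)
{
  // The solution of the linear system is distributed across
  // processors in the same way as the matrix and right hand side;
  // copy-constructing from the right hand side gives it the same
  // parallel partitioning.
  TrilinosWrappers::MPI::Vector distributed_solution (rhs);
  distributed_solution = 0;

  SolverControl                     solver_control (matrix.m(),
                                                    1e-12 * rhs.l2_norm());
  TrilinosWrappers::SolverCG        cg (solver_control);
  TrilinosWrappers::PreconditionAMG preconditioner;
  preconditioner.initialize (matrix);

  cg.solve (matrix, distributed_solution, rhs, preconditioner);

  // Immediately request a localized copy: after this assignment,
  // every processor stores all elements of the solution and can
  // access them at random for output, right hand side assembly, or
  // error estimation.
  localized_solution = distributed_solution;

  // The distributed vector is no longer needed; it is deallocated
  // when it goes out of scope here.
}
@endcode

In the program itself this pattern would be applied to both the Stokes and the
temperature solves, so that all later per-cell work only ever touches the
localized copies.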