From: Wolfgang Bangerth
Date: Tue, 11 Aug 2009 18:05:59 +0000 (+0000)
Subject: A couple of minor modifications to the output.
X-Git-Tag: v8.0.0~7345
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=b176c81abdf2f95405cef1954a55a967bbcdda61;p=dealii.git

A couple of minor modifications to the output.

git-svn-id: https://svn.dealii.org/trunk@19223 0785d39b-7218-0410-832d-ea1e28bc413d
---

diff --git a/deal.II/examples/step-32/step-32.cc b/deal.II/examples/step-32/step-32.cc
index 01a02ffe03..6521e2339c 100644
--- a/deal.II/examples/step-32/step-32.cc
+++ b/deal.II/examples/step-32/step-32.cc
@@ -65,8 +65,9 @@
   // This is the only include file that is
   // new: We use Trilinos for defining the
   // %parallel partitioning of the matrices
-  // and vectors, and an Epetra_Map is the
-  // Trilinos data structure for the
+  // and vectors, and as explained in the
+  // introduction, an Epetra_Map
+  // is the Trilinos data structure for the
   // definition of which part of a
   // distributed vector is stored locally.
 #include <Epetra_Map.h>
@@ -192,22 +193,23 @@ namespace EquationData
 
   // In comparison to step-31, we did one
   // change in the linear algebra of the
-  // problem: We exchange the InverseMatrix
-  // that previously held the approximation
-  // of the Schur complement by a
-  // preconditioner only (we will choose
-  // ILU in the application code
-  // below). This is the same trick we
-  // already did for the velocity block -
-  // the idea of this is that the outer
-  // iterations will eventually also make
-  // the inner approximation for the Schur
-  // complement good. If the preconditioner
-  // we're using is good enough, there will
-  // be no increase in the (outer)
-  // iteration count. All we need to do for
-  // implementing this change here is to
-  // give the respective variable in the
+  // problem: We exchange the
+  // InverseMatrix that
+  // previously held the approximation of the
+  // Schur complement by a preconditioner
+  // only (we will choose ILU in the
+  // application code below). This is the
+  // same trick we already did for the
+  // velocity block - the idea of this is
+  // that the outer iterations will
+  // eventually also make the inner
+  // approximation for the Schur complement
+  // good. If the preconditioner we're using
+  // is good enough, there will be no
+  // increase in the (outer) iteration
+  // count. All we need to do for
+  // implementing this change here is to give
+  // the respective variable in the
   // BlockSchurPreconditioner class another
   // name.
 namespace LinearSolvers
@@ -283,7 +285,7 @@ namespace LinearSolvers
   // merely there for performance reasons
   // — it would be much more
   // expensive to set up a FEValues object
-  // on each cell, then creating it only
+  // on each cell, than creating it only
   // once and updating some derivative
   // data.
   //
@@ -305,8 +307,9 @@ namespace LinearSolvers
   // that create an FEValues object for a
   // @ref FiniteElement "finite element", a
   // @ref Quadrature "quadrature formula"
-  // and some @ref UpdateFlags "update
-  // flags". Moreover, we manually
+  // and some
+  // @ref UpdateFlags "update flags".
+  // Moreover, we manually
   // implement a copy constructor (since
   // the FEValues class is not copyable by
   // itself), and provide some additional
@@ -690,8 +693,8 @@ namespace Assembly
   // This is the declaration of the main
   // class. It is very similar to
   // step-31. Following the @ref
-  // MTWorkStream "task-based
-  // parallilization", we split all the
+  // MTWorkStream "task-based parallelization"
+  // paradigm, we split all the
   // assembly routines into two parts: a
   // first part that can do all the
   // calculations on a certain cell without
@@ -705,12 +708,52 @@ namespace Assembly
   // the four assembly routines that we use
   // in this program.
   //
-  // Moreover, we include an MPI
-  // communicator and a so-called
-  // Epetra_Map that are needed for
-  // communication and data exchange if the
-  // Trilinos matrices and vectors are
-  // distributed over several processors.
+  // Moreover, we include an MPI communicator
+  // and an Epetra_Map (see the introduction)
+  // that are needed for communication and
+  // data exchange if the Trilinos matrices
+  // and vectors are distributed over several
+  // processors. Finally, the
+  // pcout (for %parallel
+  // std::cout) object is
+  // used to simplify writing output: each
+  // MPI process can use this to generate
+  // output as usual, but since each of these
+  // processes will produce the same output
+  // it will just be replicated many times
+  // over; with the ConditionalOStream class,
+  // only the output generated by one task
+  // will actually be printed to screen,
+  // whereas the output by all the other
+  // threads will simply be forgotten.
+  //
+  // In a bit of naming confusion, you will
+  // notice below that some of the variables
+  // from namespace TrilinosWrappers are
+  // taken from namespace
+  // TrilinosWrappers::MPI (such as the right
+  // hand side vectors) whereas others are
+  // not (such as the various matrices). For
+  // the matrices, we happen to use the same
+  // class names for parallel and sequential
+  // data structures, i.e. all matrices will
+  // actually be considered parallel
+  // below. On the other hand, for vectors,
+  // only those from namespace
+  // TrilinosWrappers::MPI are actually
+  // distributed. In particular, we will
+  // frequently have to query velocities and
+  // temperatures at arbitrary quadrature
+  // points; consequently, rather than
+  // "localizing" a vector whenever we need a
+  // localized vector, we solve linear
+  // systems in parallel but then immediately
+  // localize the solution for further
+  // processing. The various
+  // *_solution vectors are
+  // therefore filled immediately after
+  // solving their respective linear system
+  // in parallel.
 template <int dim>
 class BoussinesqFlowProblem
 {
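
For readers unfamiliar with Trilinos, the following standalone sketch illustrates what the Epetra_Map mentioned in the comments above describes: which entries of a globally distributed vector are stored on the local process. This program is not part of step-32; the problem size n_dofs is a made-up value, and the even distribution produced by this Epetra_Map constructor is just one possible partitioning.

#include <Epetra_Map.h>
#include <Epetra_MpiComm.h>

#include <mpi.h>

#include <iostream>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  // Wrap the MPI communicator in its Epetra equivalent.
  Epetra_MpiComm communicator(MPI_COMM_WORLD);

  // Hypothetical global problem size, for illustration only.
  const int n_dofs = 1000;

  // Let Trilinos split the n_dofs global indices (roughly) evenly among all
  // processes; the map records which of these indices this process owns.
  Epetra_Map partitioning(n_dofs, /*index_base=*/0, communicator);

  std::cout << "Process " << communicator.MyPID() << " owns "
            << partitioning.NumMyElements() << " of "
            << partitioning.NumGlobalElements() << " vector entries."
            << std::endl;

  MPI_Finalize();
  return 0;
}

As the comments in the last hunk say, a map of this kind is what tells the distributed Trilinos matrices and vectors which rows live on which process.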
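The behavior of the pcout object described above can likewise be shown in isolation. The sketch below is not code from step-32: the include path follows the current deal.II directory layout (the tree at the time of this commit used the shorter <base/conditional_ostream.h>), and tying the condition to MPI_Comm_rank is simply one way of selecting the single process that is allowed to print.

#include <deal.II/base/conditional_ostream.h>

#include <mpi.h>

#include <iostream>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Every process may stream to pcout, but only the process for which the
  // condition is true (here: rank 0) forwards the text to std::cout; on all
  // other processes the output is silently discarded.
  dealii::ConditionalOStream pcout(std::cout, rank == 0);

  pcout << "This line is printed exactly once, not once per MPI process."
        << std::endl;

  MPI_Finalize();
  return 0;
}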