// This is the only include file that is
// new: We use Trilinos for defining the
// %parallel partitioning of the matrices
- // and vectors, and an Epetra_Map is the
- // Trilinos data structure for the
+ // and vectors, and as explained in the
+ // introduction, an <code>Epetra_Map</code>
+ // is the Trilinos data structure for the
// definition of which part of a
// distributed vector is stored locally.
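+ //
+ // As a rough illustration (the names, the
+ // global size, and the way the
+ // Epetra_Comm object is obtained here are
+ // only meant as an example; the actual
+ // maps in this program are of course
+ // created later, once the degrees of
+ // freedom have been distributed), such a
+ // map could be built and then used to
+ // size a distributed vector as follows:
+ // @code
+ //   const Epetra_Comm &communicator
+ //     = Utilities::Trilinos::comm_world ();
+ //   Epetra_Map partitioning (1000, 0,
+ //                            communicator);
+ //   TrilinosWrappers::MPI::Vector
+ //     distributed_vector (partitioning);
+ // @endcode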
#include <Epetra_Map.h>
// In comparison to step-31, we did one
// change in the linear algebra of the
- // problem: We exchange the InverseMatrix
- // that previously held the approximation
- // of the Schur complement by a
- // preconditioner only (we will choose
- // ILU in the application code
- // below). This is the same trick we
- // already did for the velocity block -
- // the idea of this is that the outer
- // iterations will eventually also make
- // the inner approximation for the Schur
- // complement good. If the preconditioner
- // we're using is good enough, there will
- // be no increase in the (outer)
- // iteration count. All we need to do for
- // implementing this change here is to
- // give the respective variable in the
+ // problem: We replace the
+ // <code>InverseMatrix</code> object that
+ // previously held the approximation of
+ // the Schur complement with just a
+ // preconditioner (we will choose ILU in
+ // the application code below). This is
+ // the same trick we already used for the
+ // velocity block: the idea is that the
+ // outer iterations will eventually also
+ // make the inner approximation for the
+ // Schur complement good. If the
+ // preconditioner we're using is good
+ // enough, there will be no increase in
+ // the (outer) iteration count. All we
+ // need to do to implement this change
+ // here is to give the respective
+ // variable in the
// BlockSchurPreconditioner class another
// name.
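+ //
+ // In outline (the member names shown here
+ // are only illustrations, not necessarily
+ // the ones used below), the class now
+ // stores, among other things, references
+ // to the two preconditioners and applies
+ // them in its <code>vmult()</code>
+ // function:
+ // @code
+ //   template <class PreconditionerA, class PreconditionerMp>
+ //   class BlockSchurPreconditioner : public Subscriptor
+ //   {
+ //   public:
+ //     void vmult (TrilinosWrappers::MPI::BlockVector       &dst,
+ //                 const TrilinosWrappers::MPI::BlockVector &src) const;
+ //
+ //   private:
+ //     const PreconditionerA  &a_preconditioner;
+ //     const PreconditionerMp &mp_preconditioner; // previously an InverseMatrix object
+ //   };
+ // @endcode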
namespace LinearSolvers
// merely there for performance reasons
// — it would be much more
// expensive to set up a FEValues object
- // on each cell, then creating it only
- // once and updating some derivative
- // data.
+ // on each cell, than to create it only
+ // once and update some derivative
+ // data.
//
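+ // A minimal sketch of such a structure
+ // holding an FEValues object that is
+ // created once and then reused on every
+ // cell (all names here are placeholders,
+ // not the ones used in the actual code
+ // below) could look like this:
+ // @code
+ //   template <int dim>
+ //   struct ScratchData
+ //   {
+ //     ScratchData (const FiniteElement<dim> &fe,
+ //                  const Quadrature<dim>    &quadrature,
+ //                  const UpdateFlags         update_flags)
+ //       : fe_values (fe, quadrature, update_flags)
+ //     {}
+ //
+ //     // FEValues is not copyable, so the copy
+ //     // constructor rebuilds it from the same
+ //     // ingredients:
+ //     ScratchData (const ScratchData &scratch)
+ //       : fe_values (scratch.fe_values.get_fe (),
+ //                    scratch.fe_values.get_quadrature (),
+ //                    scratch.fe_values.get_update_flags ())
+ //     {}
+ //
+ //     FEValues<dim> fe_values;
+ //   };
+ // @endcode
+ //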
// that create an FEValues object for a
// @ref FiniteElement "finite element", a
// @ref Quadrature "quadrature formula"
- // and some @ref UpdateFlags "update
- // flags". Moreover, we manually
+ // and some
+ // @ref UpdateFlags "update flags".
+ // Moreover, we manually
// implement a copy constructor (since
// the FEValues class is not copyable by
// itself), and provide some additional
// This is the declaration of the main
// class. It is very similar to
// step-31. Following the @ref
- // MTWorkStream "task-based
- // parallilization", we split all the
+ // MTWorkStream "task-based parallelization"
+ // paradigm, we split all the
// assembly routines into two parts: a
// first part that can do all the
// calculations on a certain cell without
// the four assembly routines that we use
// in this program.
//
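+ // As a rough sketch of this pattern (the
+ // function and structure names here are
+ // placeholders, not necessarily the ones
+ // used below), each assembly then takes
+ // the form
+ // @code
+ //   WorkStream::run (dof_handler.begin_active (),
+ //                    dof_handler.end (),
+ //                    *this,
+ //                    &BoussinesqFlowProblem<dim>::local_assemble_system,
+ //                    &BoussinesqFlowProblem<dim>::copy_local_to_global,
+ //                    ScratchData<dim> (fe, quadrature, update_flags),
+ //                    CopyData<dim> ());
+ // @endcode
+ // where the worker function computes the
+ // contributions of a single cell (and may
+ // run on several threads at once), while
+ // the copier transfers these
+ // contributions into the global objects
+ // one cell at a time.
+ //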
- // Moreover, we include an MPI
- // communicator and a so-called
- // Epetra_Map that are needed for
- // communication and data exchange if the
- // Trilinos matrices and vectors are
- // distributed over several processors.
+ // Moreover, we include an MPI communicator
+ // and an Epetra_Map (see the introduction)
+ // that are needed for communication and
+ // data exchange if the Trilinos matrices
+ // and vectors are distributed over several
+ // processors. Finally, the
+ // <code>pcout</code> (for <i>%parallel
+ // <code>std::cout</code></i>) object is
+ // used to simplify writing output: each
+ // MPI process can use this to generate
+ // output as usual, but since each of these
+ // processes will produce the same output
+ // it will just be replicated many times
+ // over; with the ConditionalOStream class,
+ // only the output generated by one MPI
+ // process will actually be printed to
+ // screen, whereas the output by all the
+ // other processes will simply be forgotten.
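+ //
+ // As an illustration (exactly which
+ // helper function one uses to find out
+ // the rank of the calling process is an
+ // assumption here), such an object can be
+ // set up and used like this:
+ // @code
+ //   ConditionalOStream pcout
+ //     (std::cout,
+ //      Utilities::Trilinos::get_this_mpi_process (trilinos_communicator) == 0);
+ //   pcout << "Number of active cells: "
+ //         << triangulation.n_active_cells ()
+ //         << std::endl;
+ // @endcode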
+ //
+ // In a bit of naming confusion, you will
+ // notice below that some of the variables
+ // from namespace TrilinosWrappers are
+ // taken from namespace
+ // TrilinosWrappers::MPI (such as the right
+ // hand side vectors) whereas others are
+ // not (such as the various matrices). For
+ // the matrices, we happen to use the same
+ // class names for parallel and sequential
+ // data structures, i.e. all matrices will
+ // actually be considered parallel
+ // below. On the other hand, for vectors,
+ // only those from namespace
+ // TrilinosWrappers::MPI are actually
+ // distributed. In particular, we will
+ // frequently have to query velocities and
+ // temperatures at arbitrary quadrature
+ // points; consequently, rather than
+ // "localizing" a vector whenever we need a
+ // localized vector, we solve linear
+ // systems in parallel but then immediately
+ // localize the solution for further
+ // processing. The various
+ // <code>*_solution</code> vectors are
+ // therefore filled immediately after
+ // solving their respective linear system
+ // in parallel.
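+ //
+ // Schematically (with hypothetical member
+ // names), the distinction between the two
+ // kinds of objects thus looks as follows:
+ // @code
+ //   TrilinosWrappers::BlockSparseMatrix stokes_matrix;   // distributed matrix
+ //   TrilinosWrappers::MPI::BlockVector  stokes_rhs;      // distributed right hand side
+ //   TrilinosWrappers::BlockVector       stokes_solution; // localized solution copy
+ // @endcode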
template <int dim>
class BoussinesqFlowProblem
{