From: Wolfgang Bangerth
Date: Tue, 11 Aug 2009 15:45:30 +0000 (+0000)
Subject: Write the part of the introduction that deals with MPI and Trilinos.
X-Git-Tag: v8.0.0~7349
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=34b8c7bc7f7afd10987c2acee67031e00d7936b1;p=dealii.git

Write the part of the introduction that deals with MPI and Trilinos.

git-svn-id: https://svn.dealii.org/trunk@19219 0785d39b-7218-0410-832d-ea1e28bc413d
---

diff --git a/deal.II/examples/step-32/doc/intro.dox b/deal.II/examples/step-32/doc/intro.dox
index acaa3cb3b7..f80e6c7394 100644
--- a/deal.II/examples/step-32/doc/intro.dox
+++ b/deal.II/examples/step-32/doc/intro.dox
@@ -14,3 +14,115 @@ California Institute of Technology.

Introduction

This program does pretty much exactly what @ref step_31 "step-31" already
does: it solves the Boussinesq equations that describe the motion of a fluid
whose temperature is not in equilibrium. As such, all the equations we have
described in @ref step_31 "step-31" still hold: we solve the same partial
differential equation, using the same finite element scheme, the same time
stepping algorithm, and the same stabilization method for the temperature
advection-diffusion equation. As a consequence, you may first want to
understand that program before you work on the current one.

The difference between @ref step_31 "step-31" and the current program is that
here we want to do things in %parallel, using both the availability of many
machines in a cluster (with parallelization based on MPI) and of many
processor cores within a single machine (with parallelization based on
threads). This program's main job is therefore to discuss the changes that are
necessary to utilize these %parallel compute resources.
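For reference, the Boussinesq system from @ref step_31 "step-31" that we
solve again here consists of the Stokes equations for the fluid velocity
@f$\mathbf u@f$ and pressure @f$p@f$, coupled to an advection-diffusion
equation for the temperature @f$T@f$; in the notation of
@ref step_31 "step-31" (see its introduction for the meaning of the
individual coefficients) it reads
@f{eqnarray*}
  -\nabla \cdot (2 \eta \varepsilon (\mathbf u)) + \nabla p &=& -\rho \; \beta \; T \; \mathbf g,
  \\
  \nabla \cdot \mathbf u &=& 0,
  \\
  \frac{\partial T}{\partial t} + \mathbf u \cdot \nabla T
  - \nabla \cdot \kappa \nabla T &=& \gamma.
@f}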

Parallelization on clusters

Parallelization of scientific codes across multiple machines in a cluster of
computers is almost always done using the Message Passing Interface
(MPI). This program is no exception to that, and it follows the general
approach of the @ref step_17 "step-17" and @ref step_18 "step-18" programs in
this regard.

MPI is a rather awkward interface to program with, and so we usually try not
to use it directly but through an interface layer that abstracts most of the
MPI operations into a friendlier interface. In the two programs mentioned
above, this was achieved by using the PETSc library, which provides support
for %parallel linear algebra in a way that almost completely hides the MPI
layer underneath it. PETSc is powerful, providing a large number of functions
that deal with matrices, vectors, iterative solvers, and preconditioners,
along with many other things, most of which run quite well in %parallel. It
is, however, a few years old already, written in C, and generally not quite
as easy to use as some other libraries. As a consequence, deal.II also has
interfaces to Trilinos, a library similar to PETSc in its aims and with much
of the same functionality. Trilinos, however, is a project that is several
years younger, is written in C++, and is developed by people who generally
place a significant emphasis on software design. We have already used
Trilinos in @ref step_31 "step-31", and will do so again here, with the
difference that this time we will use its %parallel capabilities.

deal.II's Trilinos interfaces encapsulate pretty much everything Trilinos
provides in wrapper classes (in namespace TrilinosWrappers) that make the
Trilinos matrix, vector, solver, and preconditioner classes look very much
like deal.II's own implementations of this functionality. However, as opposed
to deal.II's classes, they can be used in %parallel if we give them the
necessary information. As a consequence, there are two Trilinos classes that
we have to deal with directly (rather than through wrappers), both of which
are part of Trilinos' Epetra library of basic linear algebra and tool
classes: the Epetra_Comm class, which encapsulates an MPI communicator, i.e.
the set of processors that work together on a %parallel job, and the
Epetra_Map class, which describes which elements of a vector (or which rows
of a matrix) are stored on which of these processors.

The only other things specific to programming using MPI that we will use in
this program are the following facilities deal.II provides: the functions in
namespace Utilities::System that let us query how many processes take part in
the %parallel job and which one the current process is, and the
ConditionalOStream class that makes sure screen output is only produced once,
on the first processor, rather than being duplicated by every process.

The rest of the program is almost completely agnostic about the fact that we
don't store all objects completely locally. There will be a few points where
we cannot use certain programming techniques (though without making explicit
reference to MPI or parallelization), or where we need access to all
elements of a vector and therefore need to localize its elements
(i.e. create a vector that has all its elements stored on the current
machine), but we will comment on these locations as we get to them in the
program code.
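To make these abstractions a bit more concrete, the following sketch shows
how the pieces mentioned above might fit together: an Epetra_Comm (here an
Epetra_MpiComm built on top of MPI_COMM_WORLD), an Epetra_Map that assigns
vector elements to processors, a distributed TrilinosWrappers::MPI::Vector
built on that map, and a ConditionalOStream that only prints on the first
process. This is not code taken from step-32 itself but a minimal
illustration; the 1000-element vector and the variable names are made up for
the example, and the include paths and exact constructor signatures
correspond to the deal.II and Trilinos versions of the time and may differ in
other versions.
@code
#include <base/conditional_ostream.h>
#include <base/utilities.h>
#include <lac/trilinos_vector.h>

#include <Epetra_MpiComm.h>
#include <Epetra_Map.h>

#include <mpi.h>
#include <iostream>

using namespace dealii;

int main (int argc, char **argv)
{
  MPI_Init (&argc, &argv);
  {
    // An Epetra_Comm object describing the set of processors we run on:
    Epetra_MpiComm communicator (MPI_COMM_WORLD);

    // An Epetra_Map that distributes 1000 vector elements more or less
    // evenly among these processors (the second argument is the index base):
    Epetra_Map partitioning (1000, 0, communicator);

    // A distributed vector in which each processor only stores the elements
    // assigned to it by the map above:
    TrilinosWrappers::MPI::Vector distributed_vector (partitioning);
    distributed_vector = 1.;

    // Output that is produced only once, on processor zero, rather than
    // once per processor:
    ConditionalOStream pcout
      (std::cout,
       Utilities::System::get_this_mpi_process (MPI_COMM_WORLD) == 0);
    pcout << "Elements stored locally on processor 0: "
          << partitioning.NumMyElements () << std::endl;
    pcout << "l2 norm of the distributed vector: "
          << distributed_vector.l2_norm () << std::endl;
  }
  MPI_Finalize ();
  return 0;
}
@endcode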

Parallelization within individual nodes of a cluster