From: Wolfgang Bangerth
Date: Wed, 6 Jul 2016 16:10:37 +0000 (-0500)
Subject: Better connect step-55 throughout the tutorials.
X-Git-Tag: v8.5.0-rc1~924^2~1
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=d58a240a75508a21fa99f89a12b49566328861ba;p=dealii.git

Better connect step-55 throughout the tutorials.
---

diff --git a/doc/doxygen/headers/distributed.h b/doc/doxygen/headers/distributed.h
index bc550c28e9..2ffb254f2d 100644
--- a/doc/doxygen/headers/distributed.h
+++ b/doc/doxygen/headers/distributed.h
@@ -1,6 +1,6 @@
 // ---------------------------------------------------------------------
 //
-// Copyright (C) 2009 - 2015 by the deal.II authors
+// Copyright (C) 2009 - 2016 by the deal.II authors
 //
 // This file is part of the deal.II library.
 //
@@ -75,7 +75,8 @@
  * concise definition of many of the terms that are used here and in other
  * places of the library related to distributed computing. The step-40
  * tutorial program shows an application of the classes and methods of this
- * namespace to the Laplace equation, while step-32 extends the step-31
+ * namespace to the Laplace equation, while step-55 does so for a vector-valued problem.
+ * step-32 extends the step-31
  * program to massively parallel computations and thereby explains the use of
  * the topic discussed here to more complicated applications.
  *
@@ -296,7 +297,7 @@
  * owned by another processor.
  *
  * You can copy between vectors with and without ghost
- * elements (you can see this in step-40 and step-32) using operator=.
+ * elements (you can see this in step-40, step-55, and step-32) using operator=.
  *
  *
  * <h4>Sparsity patterns</h4>
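
The hunk above touches the passage about copying between vectors with and without ghost elements via operator=. As a minimal sketch of what that looks like with the Trilinos wrappers (not code from the patched files; the function and variable names are illustrative, and a DoFHandler plus an MPI communicator are assumed to exist):

    #include <deal.II/base/index_set.h>
    #include <deal.II/dofs/dof_handler.h>
    #include <deal.II/dofs/dof_tools.h>
    #include <deal.II/lac/trilinos_vector.h>

    template <int dim>
    void make_ghosted_copy(const dealii::DoFHandler<dim> &dof_handler,
                           const MPI_Comm                 mpi_communicator)
    {
      using namespace dealii;

      // Locally owned DoFs form a disjoint partition across processors; the
      // locally relevant set additionally contains the DoFs on ghost cells.
      const IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
      IndexSet       locally_relevant_dofs;
      DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

      // A vector without ghost elements: this is what one assembles into and
      // what the linear solver updates.
      TrilinosWrappers::MPI::Vector solution(locally_owned_dofs, mpi_communicator);

      // A vector with ghost elements: provides read access to DoF values on
      // ghost cells, e.g. for postprocessing or graphical output.
      TrilinosWrappers::MPI::Vector ghosted_solution(locally_owned_dofs,
                                                     locally_relevant_dofs,
                                                     mpi_communicator);

      // operator= copies between the two kinds of vectors; this triggers the
      // MPI communication needed to fill the ghost entries.
      ghosted_solution = solution;
    }

The non-ghosted vector is the one the solver works on, while the ghosted vector is only read from; one does not write into its ghost entries.
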
@@ -362,7 +363,7 @@
  * DoFTools::extract_locally_active_dofs() . This is also affordable since the
  * set of locally relevant degrees of freedom is only marginally larger than
  * the set of locally active degrees of freedom. We choose this strategy in
- * both step-32 and step-40.
+ * step-32, step-40, and step-55.
  *
  *
  * <h4>Postprocessing</h4>
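
The strategy referred to in the hunk above, namely building the sparsity pattern from the locally relevant index set, looks roughly as follows with the Trilinos wrappers. This is a sketch in the spirit of the setup functions of step-32, not code from the tutorials themselves; the DoFHandler, constraints object, communicator, and names are assumed for illustration:

    #include <deal.II/base/index_set.h>
    #include <deal.II/base/mpi.h>
    #include <deal.II/dofs/dof_handler.h>
    #include <deal.II/dofs/dof_tools.h>
    #include <deal.II/lac/constraint_matrix.h>
    #include <deal.II/lac/trilinos_sparse_matrix.h>
    #include <deal.II/lac/trilinos_sparsity_pattern.h>

    template <int dim>
    void setup_matrix(const dealii::DoFHandler<dim>           &dof_handler,
                      const dealii::ConstraintMatrix          &constraints,
                      const MPI_Comm                           mpi_communicator,
                      dealii::TrilinosWrappers::SparseMatrix  &system_matrix)
    {
      using namespace dealii;

      const IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
      IndexSet       locally_relevant_dofs;
      DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

      // Rows are distributed by the locally owned set, but each processor may
      // also write into rows of the locally relevant set during assembly;
      // compress() later ships those entries to their owning processors.
      TrilinosWrappers::SparsityPattern sparsity_pattern;
      sparsity_pattern.reinit(locally_owned_dofs,
                              locally_owned_dofs,
                              locally_relevant_dofs,
                              mpi_communicator);
      DoFTools::make_sparsity_pattern(dof_handler,
                                      sparsity_pattern,
                                      constraints,
                                      /*keep_constrained_dofs=*/false,
                                      Utilities::MPI::this_mpi_process(mpi_communicator));
      sparsity_pattern.compress();

      system_matrix.reinit(sparsity_pattern);
    }
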
diff --git a/examples/step-32/doc/builds-on b/examples/step-32/doc/builds-on
index 63490605a0..d2c21e4319 100644
--- a/examples/step-32/doc/builds-on
+++ b/examples/step-32/doc/builds-on
@@ -1 +1 @@
-step-31 step-17 step-40
+step-31 step-55
diff --git a/examples/step-32/doc/intro.dox b/examples/step-32/doc/intro.dox
index 8c178c6509..099a9bcc9e 100644
--- a/examples/step-32/doc/intro.dox
+++ b/examples/step-32/doc/intro.dox
@@ -49,7 +49,9 @@ processor cores within a single machine (with parallelization based on
 threads). This program's main job is therefore to introduce the changes that
 are necessary to utilize the availability of these %parallel compute
 resources. In this regard, it builds on the step-40 program that first
-introduces the necessary classes for much of the %parallel functionality.
+introduces the necessary classes for much of the %parallel
+functionality, and on step-55 that shows how this is done for a
+vector-valued problem.
 
 In addition to these changes, we also use a slightly different
 preconditioner, and we will have to make a number of changes that have
@@ -650,9 +652,11 @@ code). Consequently, we need to parallelize it.
 Parallelization of scientific codes across multiple machines in a cluster of
 computers is almost always done using the Message Passing Interface (MPI).
 This program is no exception to that, and it follows the general spirit
-of step-17 and step-18 programs in this though in practice it borrows more
+of the step-17 and step-18 programs in this though in practice it borrows more
 from step-40 in which we first introduced the classes and strategies we use
-when we want to completely distribute all computations: including, for
+when we want to completely distribute all computations, and
+step-55 that shows how to do that for
+@ref vector_valued "vector-valued problems": including, for
 example, splitting the mesh up into a number of parts so that each processor
 only stores its own share plus some ghost cells, and using strategies where no
 processor potentially has enough memory to hold the entries of the combined
@@ -660,8 +664,8 @@ solution vector locally. The goal is to run this code on hundreds or maybe even
 thousands of processors, at reasonable scalability.
 
 @note Even though it has a larger number, step-40 comes logically before the
-current program. You will probably want to look at step-40 before you try to
-understand the what we do here.
+current program. The same is true for step-55. You will probably want
+to look at these programs before you try to understand what we do here.
 
 MPI is a rather awkward interface to program with. It is a semi-object
 oriented set of functions, and while one uses it to send data around a
@@ -671,7 +675,7 @@ objects rather than deducing the data type automatically through overloading
 or templates. We've already seen in step-17 and step-18 how to avoid almost
 all of MPI by putting all the communication necessary into either the deal.II
 library or, in those programs, into PETSc. We'll do something similar here:
-like in step-40, deal.II and the underlying p4est library are responsible for
+like in step-40 and step-55, deal.II and the underlying p4est library are responsible for
 all the communication necessary for distributing the mesh, and we will let
 the Trilinos library (along with the wrappers in namespace TrilinosWrappers)
 deal with parallelizing the linear algebra components. We have already used
@@ -730,7 +734,7 @@ are part of Trilinos' Epetra library of basic linear algebra and tool classes:
 
 There are a number of other concepts relevant to distributing the mesh to a
 number of processors; you may want to take a look at the @ref
-distributed module and step-40 before trying to understand this
+distributed module and step-40 or step-55 before trying to understand this
 program. The rest of the program is almost completely agnostic about
 the fact that we don't store all objects completely locally. There
 will be a few points where we have to limit loops over all cells to
@@ -1052,10 +1056,10 @@ the following quantities:
 \left\{
 \begin{array}{ll}
-\frac{4}{3}\pi G \rho \|\mathbf x\| \frac{\mathbf x}{\|\mathbf x\|}
- & \text{for} \ \|\mathbf x\|
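
Since the hunks above point readers at step-40 and step-55 for the basics of fully distributed computations, the following is a minimal, self-contained sketch of the common starting point of all three programs: a p4est-backed parallel::distributed::Triangulation in which every MPI rank stores only its own cells plus a layer of ghost cells. The mesh and refinement level are arbitrary illustrations, not code taken from any of the tutorials:

    #include <deal.II/base/mpi.h>
    #include <deal.II/distributed/tria.h>
    #include <deal.II/grid/grid_generator.h>

    #include <iostream>

    int main(int argc, char *argv[])
    {
      using namespace dealii;

      // Initialize MPI (and, through it, p4est); limit each rank to one thread.
      Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

      // The triangulation is distributed: no rank stores the whole mesh.
      parallel::distributed::Triangulation<2> triangulation(MPI_COMM_WORLD);
      GridGenerator::hyper_cube(triangulation);
      triangulation.refine_global(5);

      // Count the cells this rank actually owns; ghost and artificial cells
      // are skipped, just as the assembly loops in step-40/55/32 skip them.
      unsigned int n_locally_owned = 0;
      for (const auto &cell : triangulation.active_cell_iterators())
        if (cell->is_locally_owned())
          ++n_locally_owned;

      std::cout << "Rank " << Utilities::MPI::this_mpi_process(MPI_COMM_WORLD)
                << " owns " << n_locally_owned << " of "
                << triangulation.n_global_active_cells() << " active cells."
                << std::endl;
    }
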
We have already used @@ -730,7 +734,7 @@ are part of Trilinos' Epetra library of basic linear algebra and tool classes: There are a number of other concepts relevant to distributing the mesh to a number of processors; you may want to take a look at the @ref -distributed module and step-40 before trying to understand this +distributed module and step-40 or step-55 before trying to understand this program. The rest of the program is almost completely agnostic about the fact that we don't store all objects completely locally. There will be a few points where we have to limit loops over all cells to @@ -1052,10 +1056,10 @@ the following quantities: \left\{ \begin{array}{ll} -\frac{4}{3}\pi G \rho \|\mathbf x\| \frac{\mathbf x}{\|\mathbf x\|} - & \text{for} \ \|\mathbf x\|