example, splitting the mesh up into a number of parts so that each processor
only stores its own share plus some ghost cells, and using strategies that work
even if no single processor has enough memory to hold the entries of the combined
-solution vector locally.
+solution vector locally. The goal is to run this code on hundreds or maybe
+even thousands of processors, at reasonable scalability.
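+
+To illustrate what this looks like in practice, here is a minimal sketch (not
+code taken from this program) of how one could create such a distributed mesh
+and a vector of which each processor only stores its own elements plus the
+ghost entries it needs, using for example the Trilinos-based parallel vector
+class. The function <code>make_distributed_vector</code> is made up for this
+illustration; it assumes that deal.II was configured with MPI, p4est, and
+Trilinos support and that MPI has already been initialized, and the exact
+function signatures may differ slightly between deal.II versions:
+@code
+#include <deal.II/base/index_set.h>
+#include <deal.II/distributed/tria.h>
+#include <deal.II/dofs/dof_handler.h>
+#include <deal.II/dofs/dof_tools.h>
+#include <deal.II/fe/fe.h>
+#include <deal.II/lac/trilinos_vector.h>
+
+using namespace dealii;
+
+template <int dim>
+void make_distributed_vector(const FiniteElement<dim> &fe)
+{
+  // A mesh whose cells are split across all processors in MPI_COMM_WORLD;
+  // each processor stores its own cells plus one layer of ghost cells:
+  parallel::distributed::Triangulation<dim> triangulation(MPI_COMM_WORLD);
+  // ... create and refine the mesh here ...
+
+  DoFHandler<dim> dof_handler(triangulation);
+  dof_handler.distribute_dofs(fe);
+
+  // The degrees of freedom this processor owns, and the ones it also
+  // needs to read (its own plus ghost entries):
+  const IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
+  IndexSet       locally_relevant_dofs;
+  DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);
+
+  // A vector of which each processor only stores its own share plus the
+  // ghost entries it needs; no processor holds the combined vector:
+  TrilinosWrappers::MPI::Vector solution(locally_owned_dofs,
+                                         locally_relevant_dofs,
+                                         MPI_COMM_WORLD);
+}
+@endcode
+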
MPI is a rather awkward interface to program with. It is a semi-object-oriented
set of functions, and while one uses it to send data around a
All of these operations essentially look as follows: we need to loop over all
cells for which <code>cell-@>subdomain_id()</code> equals the index our
machine has within the communicator object used for all communication
-(i.e. essentially <code>MPI_COMM_WORLD</code>, as explained above), on each
-cell we need to assemble the local contributions to the global matrix or
+(i.e. <code>MPI_COMM_WORLD</code>, as explained above). The test we are
+actually going to use for this, and which expresses this condition in a more
+readable way, is <code>cell-@>is_locally_owned()</code>. On each
+such cell we need to assemble the local contributions to the global matrix or
vector, and then we have to copy each cell's contribution into the global
matrix or vector. Note that the first part of this (the loop) defines a range
of iterators on which something has to happen. The second part, assembly of
local contributions, is something that takes the majority of CPU time in this
-sequence of steps, is a typical example of things that can be done in
+sequence of steps, and is a typical example of things that can be done in
%parallel: each cell's contribution is entirely independent of all other cells'
contributions. The third part, copying into the global matrix, must not happen
in %parallel since we are modifying one object and so several threads can not
<code>BoussinesqFlowProblem::local_assemble_stokes_preconditioner</code>,
<code>BoussinesqFlowProblem::local_assemble_temperature_matrix</code>, and
<code>BoussinesqFlowProblem::local_assemble_temperature_rhs</code> functions in
- the code below. These four functions can all have several instances of each
- running in %parallel.
+ the code below. These four functions can all have several instances
+ running in %parallel at the same time.
- %Functions that copy the result of the previous ones into the global object
and that run sequentially to avoid race conditions. These are the
The underlying technology for WorkStream identifies "tasks" that need to be
worked on (e.g. assembling local contributions on a cell) and schedules
these tasks automatically to available processors. WorkStream creates these
-tasks automatically, by splitting the iterator range into suitable chunks,
-but as outlined in @ref threads, one can also create tasks explicitly. We
-use this in one place in the program, namely where we set up the Stokes
-system and preconditioner matrices as well as the temperature matrix. These
-are independent operations that, if enough processors are available, can be
-worked on in parallel (if not enough processors are available -- because the
-system has only one, or because the others are working on something else for
-us -- then these tasks will be worked on sequentially). Consequently,
-<code>BoussinesqFlowProblem::setup_dofs</code> creates tasks for the three
-calls to <code>BoussinesqFlowProblem::setup_stokes_matrix</code>,
-<code>BoussinesqFlowProblem::setup_stokes_preconditioner</code>, and
-<code>BoussinesqFlowProblem::setup_temperature_matrices</code> that are then
-scheduled to available resources. There is one problem with this, however:
-if we have more than one MPI process running in parallel, then all of these
-processes need to communicate in a certain order and that requires that the
-various <code>setup_*</code> functions can't run in parallel. To make
-things worse, even if there is only a single MPI process, we still
-have to make a few calls to the MPI runtime (for example to set up
-communicators for the various matrices and vectors; all of these
-communicators contain only a single processor, but we still have to
-call MPI functions for this). Unfortunately, most MPI implementations
-are not thread-safe, and we can't call MPI functions from multiple
-threads at once. For these setup operations, we will therefore only be
-able to make use of multiple processor cores within the same machine
-if we are not running under the control of MPI -- a condition we can
-check using the Utilities::System::job_supports_mpi() function.
+tasks automatically, by splitting the iterator range into suitable chunks.
+
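+To make this pattern concrete, here is a minimal sketch of how such an
+assembly loop could be written using the WorkStream class, restricted to the
+locally owned cells. The <code>ScratchData</code> and <code>CopyData</code>
+structures and the function <code>assemble_system</code> are placeholders
+invented for this illustration (the actual program uses the
+<code>local_assemble_*</code> functions listed above together with
+corresponding copy functions), and the sketch uses C++11 lambdas rather than
+the exact style of the program:
+@code
+#include <deal.II/base/work_stream.h>
+#include <deal.II/dofs/dof_handler.h>
+#include <deal.II/grid/filtered_iterator.h>
+
+using namespace dealii;
+
+// Hypothetical per-thread scratch objects and per-cell result buffer:
+struct ScratchData { /* e.g. FEValues objects */ };
+struct CopyData    { /* e.g. local matrix, local rhs, dof indices */ };
+
+template <int dim>
+void assemble_system(const DoFHandler<dim> &dof_handler)
+{
+  using CellFilter =
+    FilteredIterator<typename DoFHandler<dim>::active_cell_iterator>;
+
+  WorkStream::run(
+    // Loop only over the cells this MPI process owns:
+    CellFilter(IteratorFilters::LocallyOwnedCell(), dof_handler.begin_active()),
+    CellFilter(IteratorFilters::LocallyOwnedCell(), dof_handler.end()),
+    // Worker: several instances of this may run at the same time:
+    [](const CellFilter &cell, ScratchData &scratch, CopyData &copy)
+    {
+      // ... assemble this cell's local matrix and right hand side
+      //     contributions into 'copy' ...
+    },
+    // Copier: WorkStream runs this sequentially, so copying into the
+    // global matrix and vector creates no race conditions:
+    [](const CopyData &copy)
+    {
+      // ... copy the local contributions into the global objects ...
+    },
+    ScratchData(),
+    CopyData());
+}
+@endcode
+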
+@note Using multiple threads within each MPI process only makes sense if you
+have fewer MPI processes running on each node of your cluster than there are
+processor cores on that machine. Otherwise, MPI will already keep your
+processors busy and you won't get any additional speedup from using
+threads. For example, if your cluster nodes have 8 cores, as they often do at
+the time of writing this, and if your batch scheduler puts 8 MPI processes on
+each node, then using threads doesn't make the program any
+faster. Consequently, you probably want to use the
+<code>--disable-threads</code> flag when configuring your deal.II installation
+for this machine. On the other hand, if you want to run this program on a
+single multicore machine, then it may make sense to use threads.
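+
+If recompiling the library is not an option and your version of deal.II
+provides the MultithreadInfo::set_thread_limit() function, you can
+alternatively limit the number of threads each process creates at run
+time. A minimal sketch, assuming such a version:
+@code
+#include <deal.II/base/multithread_info.h>
+
+int main()
+{
+  // Allow each MPI process at most one worker thread, for example because
+  // the batch scheduler already places one MPI process on every core:
+  dealii::MultithreadInfo::set_thread_limit(1);
+
+  // ... initialize MPI and run the program as usual ...
+  return 0;
+}
+@endcode
+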
<h3> Implementation details </h3>