From: bangerth
Date: Mon, 26 Sep 2011 02:54:51 +0000 (+0000)
Subject: Go through the multithreading section.
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=dc24f2c2f38e16c9d057331b87c4186e80ca6e91;p=dealii-svn.git

Go through the multithreading section.

git-svn-id: https://svn.dealii.org/trunk@24415 0785d39b-7218-0410-832d-ea1e28bc413d
---

diff --git a/deal.II/examples/step-32/doc/intro.dox b/deal.II/examples/step-32/doc/intro.dox
index 80f866e419..e48d95a3ef 100644
--- a/deal.II/examples/step-32/doc/intro.dox
+++ b/deal.II/examples/step-32/doc/intro.dox
@@ -519,7 +519,8 @@ when we want to completely distribute all computations: including, for example,
 splitting the mesh up into a number of parts so that each processor only
 stores its own share plus some ghost cells, and using strategies where no
 processor potentially has enough memory to hold the entries of the combined
-solution vector locally.
+solution vector locally. The goal is to run this code on hundreds or maybe
+even thousands of processors, at reasonable scalability.
 
 MPI is a rather awkward interface to program with. It is a semi-object
 oriented set of functions, and while one uses it to send data around a
@@ -623,13 +624,15 @@ preconditioner, and assembly of the right hand side of the temperature
 system. All of these operations essentially look as follows: we need to
 loop over all cells for which cell-@>subdomain_id() equals the
 index our machine has within the communicator object used for all communication
-(i.e. essentially MPI_COMM_WORLD, as explained above), on each
-cell we need to assemble the local contributions to the global matrix or
+(i.e. MPI_COMM_WORLD, as explained above). The test we are
+actually going to use for this, and which describes in a concise way why we
+test this condition, is cell-@>is_locally_owned(). On each
+such cell we need to assemble the local contributions to the global matrix or
 vector, and then we have to copy each cell's contribution into the global
 matrix or vector. Note that the first part of this (the loop) defines a range
 of iterators on which something has to happen. The second part, assembly of
 local contributions is something that takes the majority of CPU time in this
-sequence of steps, is a typical example of things that can be done in
+sequence of steps, and is a typical example of things that can be done in
 %parallel: each cell's contribution is entirely independent of all other cells'
 contributions. The third part, copying into the global matrix, must not happen
 in %parallel since we are modifying one object and so several threads can not
@@ -659,8 +662,8 @@ and per-cell data. Suffice it to mention that we need the following:
   BoussinesqFlowProblem::local_assemble_stokes_preconditioner,
   BoussinesqFlowProblem::local_assemble_temperature_matrix, and
   BoussinesqFlowProblem::local_assemble_temperature_rhs functions in
-  the code below. These four functions can all have several instances of each
-  running in %parallel.
+  the code below. These four functions can all have several instances
+  running in %parallel at the same time.
 
 - %Functions that copy the result of the previous ones into the global
   object and that run sequentially to avoid race conditions. These are the
@@ -676,32 +679,19 @@ their structure should be clear from the discussion in @ref threads.
 The underlying technology for WorkStream identifies "tasks" that need to be
 worked on (e.g. assembling local contributions on a cell) and schedules these
 tasks automatically to available processors. WorkStream creates these
-tasks automatically, by splitting the iterator range into suitable chunks,
-but as outlined in @ref threads, one can also create tasks explicitly. We
-use this in one place in the program, namely where we set up the Stokes
-system and preconditioner matrices as well as the temperature matrix. These
-are independent operations that, if enough processors are available, can be
-worked on in parallel (if not enough processors are available -- because the
-system has only one, or because the others are working on something else for
-us -- then these tasks will be worked on sequentially). Consequently,
-BoussinesqFlowProblem::setup_dofs creates tasks for the three
-calls to BoussinesqFlowProblem::setup_stokes_matrix,
-BoussinesqFlowProblem::setup_stokes_preconditioner, and
-BoussinesqFlowProblem::setup_temperature_matrices that are then
-scheduled to available resources. There is one problem with this, however:
-if we have more than one MPI process running in parallel, then all of these
-processes need to communicate in a certain order and that requires that the
-various setup_* functions can't run in parallel. To make
-things worse, even if there is only a single MPI process, we still
-have to make a few calls to the MPI runtime (for example to set up
-communicators for the various matrices and vectors; all of these
-communicators contain only a single processor, but we still have to
-call MPI functions for this). Unfortunately, most MPI implementations
-are not thread-safe, and we can't call MPI functions from multiple
-threads at once. For these setup operations, we will therefore only be
-able to make use of multiple processor cores within the same machine
-if we are not running under the control of MPI -- a condition we can
-check using the Utilities::System::job_supports_mpi() function.
+tasks automatically, by splitting the iterator range into suitable chunks.
+
+@note Using multiple threads within each MPI process only makes sense if you
+have fewer MPI processes running on each node of your cluster than there are
+processor cores on this machine. Otherwise, MPI will already keep your
+processors busy and you won't get any additional speedup from using
+threads. For example, if your cluster nodes have 8 cores as they often have at
+the time of writing this, and if your batch scheduler puts 8 MPI processes on
+each node, then using threads doesn't make the program any
+faster. Consequently, you probably want to use the
+--disable-threads flag when configuring your deal.II installation
+for this machine. On the other hand, if you want to run this program on a
+single multicore machine, then it may make sense to use threads.
 
Implementation details
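
The second hunk above replaces the explicit subdomain-id comparison with
cell->is_locally_owned(). For a parallel distributed triangulation the two
tests express the same condition on active cells. The fragment below spells
that equivalence out; it is only an illustrative sketch, and the
triangulation and dof_handler arguments stand in for the corresponding
step-32 member variables.

#include <deal.II/base/exceptions.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>

// Illustrative only: walk all active cells and confirm that the concise
// ownership test agrees with the verbose comparison against the subdomain
// id this MPI process owns within the triangulation's communicator.
template <int dim>
void check_ownership_test
  (const dealii::parallel::distributed::Triangulation<dim> &triangulation,
   const dealii::DoFHandler<dim>                           &dof_handler)
{
  typename dealii::DoFHandler<dim>::active_cell_iterator
    cell = dof_handler.begin_active(),
    endc = dof_handler.end();
  for (; cell != endc; ++cell)
    {
      // The verbose test described in the text above ...
      const bool owned_verbose =
        (cell->subdomain_id() == triangulation.locally_owned_subdomain());

      // ... and the concise shorthand the documentation now uses.
      AssertThrow (owned_verbose == cell->is_locally_owned(),
                   dealii::ExcInternalError());
    }
}

Only on such locally owned cells does the program assemble local
contributions; ghost and artificial cells are skipped.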
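
The last hunk describes the division of labor that WorkStream::run
implements: it splits the iterator range into chunks, runs the local-assembly
worker on several chunks in parallel, and calls the copy function
sequentially so that writes into the global object need no locking. The
sketch below shows that pattern with made-up ScratchData/CopyData structures
and placeholder assembly code; it is not the actual step-32 implementation,
which uses its own scratch- and copy-data classes and member functions such
as local_assemble_temperature_rhs.

#include <deal.II/base/work_stream.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/full_matrix.h>

#include <vector>

// Hypothetical per-cell result: what one worker invocation produces and
// what the copier later writes into the global objects.
struct CopyData
{
  bool                                         cell_is_owned = false;
  dealii::FullMatrix<double>                   local_matrix;
  std::vector<dealii::types::global_dof_index> local_dof_indices;
};

// Hypothetical scratch space, reused from cell to cell by each thread.
struct ScratchData
{};

template <int dim>
void sketch_assembly (const dealii::DoFHandler<dim> &dof_handler)
{
  using Iterator = typename dealii::DoFHandler<dim>::active_cell_iterator;

  // Worker: several instances may run at the same time, each on its own
  // chunk of the iterator range that WorkStream has split off.
  auto worker = [](const Iterator &cell, ScratchData &, CopyData &data)
  {
    data.cell_is_owned = cell->is_locally_owned();
    if (data.cell_is_owned == false)
      return;                               // not our share of the mesh

    const unsigned int dofs_per_cell = cell->get_fe().dofs_per_cell;
    data.local_matrix.reinit (dofs_per_cell, dofs_per_cell);
    data.local_dof_indices.resize (dofs_per_cell);
    cell->get_dof_indices (data.local_dof_indices);
    // ... fill data.local_matrix with this cell's contribution ...
  };

  // Copier: WorkStream runs this sequentially, so writing into the global
  // matrix needs no locking.
  auto copier = [](const CopyData &data)
  {
    if (data.cell_is_owned == false)
      return;
    // ... add data.local_matrix into the global matrix at the rows and
    //     columns listed in data.local_dof_indices ...
  };

  dealii::WorkStream::run (dof_handler.begin_active(),
                           dof_handler.end(),
                           worker,
                           copier,
                           ScratchData(),
                           CopyData());
}

In the program itself the iterator range is usually restricted to locally
owned cells up front (for example with a FilteredIterator), so the ownership
test need not appear inside the worker; the sketch keeps it there only for
brevity.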