From 4dcbde6d42e21be2f714ad503322686f8204302e Mon Sep 17 00:00:00 2001
From: Wolfgang Bangerth
Date: Thu, 24 Dec 2015 09:13:15 -0600
Subject: [PATCH] Add another paragraph.

---
 examples/step-17/doc/intro.dox | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/examples/step-17/doc/intro.dox b/examples/step-17/doc/intro.dox
index 519289269d..cbc93e113f 100644
--- a/examples/step-17/doc/intro.dox
+++ b/examples/step-17/doc/intro.dox
@@ -128,8 +128,29 @@ receiving individual messages, but uses higher level operations. For
 example, in the program we will use function calls that take a number
 from each processor, add them all up, and return the sum to all
 processors. Internally, this is implemented using individual messages,
-but to the user this is transparent. In reality, even this is too
-low a level and the program below will not contain any direct
+but to the user this is transparent. We call such operations collectives
+because all processors participate in them. Collectives allow us
+to write programs where not every copy of the executable is doing something
+completely different (this would be incredibly difficult to program) but
+where in essence all copies are doing the same thing (though on different
+data) for themselves, running through the same blocks of code; then they
+communicate data through collectives; and then go back to doing something
+for themselves again, running through the same blocks of code. This is
+the key to being able to write %parallel programs, and it is what allows
+programs to run on any number of processors, since we do not have to
+write different code for each of the participating
+processors.
+
+(This is not to say that programs are never written in ways where
+different processors run through different blocks of code in their
+copy of the executable. Programs internally also often communicate
+in other ways than through collectives. But in practice, %parallel
+finite element codes almost always follow the scheme where every copy
+of the program runs through the same blocks of code at the same time,
+interspersed by phases where all processors communicate with each other.)
+
+In reality, even the level of calling MPI collective functions is still
+too low. Rather, the program below will not contain any direct
 calls to MPI at all, but only deal.II functions that hide this
 communication from users of deal.II. This has the advantage that
 you don't have to learn the details of MPI and its rather intricate
-- 
2.39.5
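
The collective the hunk describes (take a number from each processor, add
them all up, and return the sum to all processors) is what MPI calls an
all-reduce. As an illustration, here is a minimal sketch in raw MPI using
the standard MPI_Allreduce call; it is not code from the patch, and the
variable names are made up for this example:

@code
#include <mpi.h>
#include <iostream>

int main(int argc, char *argv[])
{
  MPI_Init(&argc, &argv);

  int rank, n_processes;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &n_processes);

  // Every copy of the program computes a value of its own...
  double local_value = rank + 1.0;

  // ...and then all copies participate in a collective that adds the
  // values up and returns the sum to every processor. Internally this
  // is implemented via individual messages, but that is invisible here.
  double global_sum = 0.;
  MPI_Allreduce(&local_value, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                MPI_COMM_WORLD);

  std::cout << "Process " << rank << " of " << n_processes
            << ": sum = " << global_sum << std::endl;

  MPI_Finalize();
}
@endcode

Every copy of the program reaches the MPI_Allreduce call at the same point
in the same block of code, which is exactly the pattern the new paragraph
describes: identical code paths on different data, interspersed with
collective communication phases.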
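
The last paragraph of the hunk promises that the program will contain no
direct MPI calls at all, only deal.II functions that hide the
communication. As a sketch of what that could look like, assuming the
Utilities::MPI::sum and MPI_InitFinalize helpers from
<deal.II/base/mpi.h> (an illustration under those assumptions, not code
from step-17 itself):

@code
#include <deal.II/base/mpi.h>
#include <iostream>

int main(int argc, char *argv[])
{
  // This object initializes MPI on construction and finalizes it on
  // destruction, so no explicit MPI_Init/MPI_Finalize calls appear.
  dealii::Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv);

  const unsigned int rank =
    dealii::Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);

  // The same sum-over-all-processors collective as before, but with
  // the MPI_Allreduce call hidden behind a deal.II function:
  const double local_value = rank + 1.0;
  const double global_sum =
    dealii::Utilities::MPI::sum(local_value, MPI_COMM_WORLD);

  std::cout << "sum = " << global_sum << std::endl;
}
@endcode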