From: Wolfgang Bangerth
Date: Wed, 12 Aug 2009 03:46:23 +0000 (+0000)
Subject: Minor changes to documentation.
X-Git-Tag: v8.0.0~7336
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=d87fc692a9a1fa547872cb927bfd0dbe3c7e89e3;p=dealii.git

Minor changes to documentation.

git-svn-id: https://svn.dealii.org/trunk@19232 0785d39b-7218-0410-832d-ea1e28bc413d
---

diff --git a/deal.II/examples/step-32/step-32.cc b/deal.II/examples/step-32/step-32.cc
index 20dd78f762..037c3a2ed1 100644
--- a/deal.II/examples/step-32/step-32.cc
+++ b/deal.II/examples/step-32/step-32.cc
@@ -1,6 +1,6 @@
 /* $Id$ */
 /* Author: Martin Kronbichler, Uppsala University,
-             Wolfgang Bangerth, Texas A&M University 2007, 2008, 2009 */
+            Wolfgang Bangerth, Texas A&M University 2007, 2008, 2009 */
 /*                                                                */
 /*    Copyright (C) 2008, 2009 by the deal.II authors             */
 /*                                                                */
@@ -264,30 +264,40 @@ namespace LinearSolvers
 
 
         // @sect3{Definition of assembly data structures}
-        //
-        // This is a collection of data
-        // structures that we use for assembly in
-        // %parallel. The concept of this
-        // task-based parallelization is
-        // described in detail @ref MTWorkStream
-        // "here". Each assembly routine gets two
-        // sets of data: a Scratch array that
-        // collects all the classes and arrays
-        // that are used for the calculation of
-        // the cell contribution, and a CopyData
-        // array that keeps local matrices and
-        // vectors which will be written into the
+        //
+        // As described in the introduction, we will
+        // use the WorkStream mechanism discussed in
+        // the @ref threads module to parallelize
+        // operations among the processors of a
+        // single machine. The WorkStream class
+        // requires that data is passed around in two
+        // kinds of data structures, one for scratch
+        // data and one to pass data from the
+        // assembly function to the function that
+        // copies local contributions into global
+        // objects.
+        //
+        // The following namespace (and the two
+        // sub-namespaces) contains a collection of
+        // data structures that serve this purpose,
+        // one pair for each of the four operations
+        // discussed in the introduction that we will
+        // want to parallelize. Each
+        // assembly routine gets two sets of data: a
+        // Scratch array that collects all the
+        // classes and arrays that are used for the
+        // calculation of the cell contribution, and
+        // a CopyData array that keeps local matrices
+        // and vectors which will be written into the
         // global matrix. Whereas CopyData is a
         // container for the final data that is
         // written into the global matrices and
-        // vector (and, thus, absolutely
-        // necessary), the Scratch arrays are
-        // merely there for performance reasons
-        // — it would be much more
-        // expensive to set up a FEValues object
-        // on each cell, than creating it only
-        // once and updating some derivative
-        // data.
+        // vector (and, thus, absolutely necessary),
+        // the Scratch arrays are merely there for
+        // performance reasons — it would be
+        // much more expensive to set up a FEValues
+        // object on each cell, than creating it only
+        // once and updating some derivative data.
         //
         // Using the program in step-31, we have
        // four assembly routines. One for the
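
The comment text added by this patch describes deal.II's WorkStream pattern: an
expensive, reusable Scratch object (holding, for instance, an FEValues object)
is handed to a worker function that computes one cell's contribution, and a
small CopyData object carries that contribution to a copier function which
alone writes into the global objects. The following stand-alone C++ sketch
illustrates that division of labor. It is not part of the commit: the names
Scratch, CopyData, local_assemble and copy_local_to_global are illustrative
only, and the loop below runs sequentially where WorkStream would dispatch the
worker calls to several threads and serialize only the copy step.

#include <iostream>
#include <vector>

struct Scratch                 // reusable workspace, analogous to FEValues
{
  std::vector<double> quadrature_values;
};

struct CopyData                // per-cell result destined for the global object
{
  unsigned int cell_index;
  double       local_contribution;
};

// The "worker": computes one cell's contribution, using (and overwriting)
// the scratch space rather than allocating it anew on every cell.
void local_assemble(const unsigned int cell, Scratch &scratch, CopyData &copy)
{
  scratch.quadrature_values.assign(4, 0.25 * (cell + 1)); // fake quadrature data
  copy.cell_index         = cell;
  copy.local_contribution = 0;
  for (const double v : scratch.quadrature_values)
    copy.local_contribution += v;
}

// The "copier": the only function that touches the global object, so it is
// the only part that must run serialized when the workers run in parallel.
void copy_local_to_global(const CopyData &copy, std::vector<double> &global)
{
  global[copy.cell_index] += copy.local_contribution;
}

int main()
{
  const unsigned int  n_cells = 8;
  std::vector<double> global(n_cells, 0.);

  Scratch  scratch;            // created once, reused for every cell
  CopyData copy;
  for (unsigned int cell = 0; cell < n_cells; ++cell)
    {
      local_assemble(cell, scratch, copy);
      copy_local_to_global(copy, global);
    }

  for (const double g : global)
    std::cout << g << '\n';
}

Because only the copier touches the global object, no locking is needed around
the per-cell assembly work itself; that is exactly the property the
Scratch/CopyData split gives WorkStream.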