href="http://www.mcs.anl.gov/petsc/" target="_top">PETSc</a> library. And
since PETSc allows one to distribute matrices and vectors across several computers
within an MPI network, the resulting code will even be able to solve the
-problem in parallel. If you don't know what PETSc is, then this would be a
+problem in %parallel. If you don't know what PETSc is, then this would be a
good time to take a quick glimpse at their homepage.
As a prerequisite of this program, you need to have PETSc installed, and if
-you want to run in parallel on a cluster, you also need <a
+you want to run in %parallel on a cluster, you also need <a
href="http://www-users.cs.umn.edu/~karypis/metis/index.html"
target="_top">METIS</a> to partition meshes. The installation of deal.II
together with these two additional libraries is described in the <a
While the sequential PETSc wrapper classes do not have any advantage over
their deal.II counterparts, the main point of using PETSc is that it can run
-in parallel. We will make use of this by partitioning the domain into as many
+in %parallel. We will make use of this by partitioning the domain into as many
blocks (``subdomains'') as there are processes in the MPI network. At the same
time, PETSc provides dummy MPI stubs that allow one to run the same program on a
single machine if so desired, without any changes.
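The partitioning itself comes down to a single call into the library. The
following is only a sketch, not code taken verbatim from the program below;
in particular, the variable <code>triangulation</code> and the way the
number of processes is obtained here are assumptions of the illustration:
@code
int n_mpi_processes = 1;
MPI_Comm_size (MPI_COMM_WORLD, &n_mpi_processes);

// Split the mesh into one chunk ("subdomain") per process; afterwards,
// each cell reports its owner through cell->subdomain_id().
GridTools::partition_triangulation (n_mpi_processes, triangulation);
@endcode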
// step-8. First, we replace the
// standard output <code>std::cout</code> by a
// new stream <code>pcout</code> which is used
- // in parallel computations for
+ // in %parallel computations for
// generating output only on one of
// the MPI processes.
#include <base/conditional_ostream.h>
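    // To illustrate how such a stream behaves (the names used here are
    // assumptions of this sketch, not necessarily the ones appearing
    // later): it is constructed from an underlying stream plus a flag
    // saying whether the present process may print, and is then used
    // exactly like <code>std::cout</code>:
    //
    //   ConditionalOStream pcout (std::cout, this_mpi_process == 0);
    //   pcout << "Number of active cells: "
    //         << triangulation.n_active_cells()
    //         << std::endl;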
void output_results (const unsigned int cycle) const;
// The first variable is basically only
- // for convenience: in parallel program,
+   // for convenience: in a %parallel program,
// if each process outputs status
// information, then there quickly is a
// lot of clutter. Rather, we would want
// member variables for the sparsity
// pattern, the system matrix, right
    // hand side, and solution vector. We change
- // these declarations to use parallel
+ // these declarations to use %parallel
// PETSc objects instead (note that the
- // fact that we use the parallel versions
+ // fact that we use the %parallel versions
    // is denoted by the fact that we use the
// classes from the
// <code>PETScWrappers::MPI</code> namespace;
// then PETSc provides some dummy type
// for <code>MPI_Comm</code>, so we do not have to
// care here whether the job is really a
- // parallel one:
+ // %parallel one:
MPI_Comm mpi_communicator;
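    // For illustration, the distributed objects that the comment above
    // refers to would be declared along the following lines (a sketch;
    // the actual declarations are not shown in this excerpt):
    PETScWrappers::MPI::SparseMatrix system_matrix;
    PETScWrappers::MPI::Vector       solution;
    PETScWrappers::MPI::Vector       system_rhs;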
// Then we have two variables that tell
- // us where in the parallel world we
+ // us where in the %parallel world we
// are. The first of the following
    // variables, <code>n_mpi_processes</code>, tells
// us how many MPI processes there exist
{
// Before we even start out setting up the
// system, there is one thing to do for a
- // parallel program: we need to assign
+ // %parallel program: we need to assign
// cells to each of the processes. We do
// this by splitting (<code>partitioning</code>) the
// mesh cells into as many chunks
// Then we initialize the system matrix,
// solution, and right hand side
// vectors. Since they all need to work in
- // parallel, we have to pass them an MPI
+ // %parallel, we have to pass them an MPI
// communication object, as well as their
// global sizes (both dimensions are equal
// to the number of degrees of freedom),
// problem. There are some things worth
// mentioning before we go into
// detail. First, we will be assembling the
- // system in parallel, i.e. each process will
+ // system in %parallel, i.e. each process will
// be responsible for assembling on cells
// that belong to this particular
// processor. Note that the degrees of
// The fourth step is to solve the linear
// system, with its distributed matrix and
// vector objects. Fortunately, PETSc offers
- // a variety of sequential and parallel
+ // a variety of sequential and %parallel
// solvers, for which we have written
// wrappers that have almost the same
// interface as is used for the deal.II
// which we would like to solve the linear
// system. Next, an actual solver object
// using PETSc's CG solver which also works
- // with parallel (distributed) vectors and
+ // with %parallel (distributed) vectors and
// matrices. And finally a preconditioner;
// we choose to use a block Jacobi
// preconditioner which works by computing
// vector.
//
// So in the first step, we need to set up
- // a parallel vector. For simplicity, every
+ // a %parallel vector. For simplicity, every
// process will own a chunk with as many
// elements as this process owns cells, so
// that the first chunk of elements is
different directions: first, it solves the quasistatic but time-dependent
elasticity problem for large deformations with a Lagrangian mesh movement
approach. Second, it shows some more techniques for solving such problems
-using parallel processing with PETSc's linear algebra. In addition to this, we
+using %parallel processing with PETSc's linear algebra. In addition to this, we
show how to work around the main bottleneck of @ref step_17 "step-17", namely that we
generated graphical output from only one process, and that this scaled very
badly with larger numbers of processes and on large problems. Finally, a good
finite elements, then its symmetric gradient $\varepsilon(\Delta\mathbf{u}^n)$ is
in general still a function that is not easy to describe. In particular, it is
not a piecewise constant function, and on general meshes (with cells that are
-not rectangles parallel to the coordinate axes) or with non-constant
+not rectangles %parallel to the coordinate axes) or with non-constant
stress-strain tensors $C$ it is not even a bi- or trilinear function. Thus, it
is a priori not clear how to store $\sigma^n$ in a computer program.
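One way out of this difficulty, sketched here only to fix ideas, is to not
store $\sigma^n$ as a finite element field at all, but only at the
quadrature points at which it is actually needed when assembling the linear
system. A per-quadrature-point history structure could then look like the
following (the name <code>PointHistory</code> and its single member are
illustrative, not a definitive design):
@code
template <int dim>
struct PointHistory
{
  SymmetricTensor<2,dim> old_stress;
};
@endcode
One such object would then be kept for every quadrature point of every cell
and updated after each time step.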
<h3>Parallel graphical output</h3>
-In the @ref step_17 "step-17" example program, the main bottleneck for parallel computations
+In the @ref step_17 "step-17" example program, the main bottleneck for %parallel computations
was that only the first processor generated output for the entire domain.
Since generating graphical output is expensive, this did not scale well when
large numbers of processors were involved. However, no viable ways around this
Unlike the previous one, this function is not really interesting, since it
does what similar functions have done in all previous tutorial programs --
 solving the linear system with the CG method, using an incomplete LU
- decomposition as a preconditioner (in the parallel case, it uses an ILU of
+ decomposition as a preconditioner (in the %parallel case, it uses an ILU of
each processor's block separately). It is virtually unchanged
from @ref step_17 "step-17".
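In code, the core of this function comes down to roughly the following
sketch (the variable names and the tolerance are assumptions of this
illustration); a block Jacobi preconditioner such as
<code>PETScWrappers::PreconditionBlockJacobi</code> gives exactly the
behavior described above, since PETSc by default applies an incomplete LU
decomposition within each block:
@code
SolverControl           solver_control (system_rhs.size(),
                                        1e-8 * system_rhs.l2_norm());
PETScWrappers::SolverCG cg (solver_control, mpi_communicator);

PETScWrappers::PreconditionBlockJacobi preconditioner (system_matrix);
cg.solve (system_matrix, solution, system_rhs, preconditioner);
@endcode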
// iterators looping over all cells. We will
// use this when selecting only those cells
// for output that are owned by the present
- // process in a parallel program:
+ // process in a %parallel program:
#include <grid/filtered_iterator.h>
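    // As a sketch of how these filtered iterators will be used (the
    // function and variable names here are purely illustrative, and we
    // assume the usual DoFHandler declarations from the rest of the
    // program): loop only over the cells owned by the present process.
template <int dim>
void sketch_loop_over_own_cells (const DoFHandler<dim> &dof_handler,
                                 const unsigned int     this_mpi_process)
{
  FilteredIterator<typename DoFHandler<dim>::active_cell_iterator>
    cell (IteratorFilters::SubdomainEqualTo (this_mpi_process),
          dof_handler.begin_active()),
    endc (IteratorFilters::SubdomainEqualTo (this_mpi_process),
          dof_handler.end());

  for (; cell != endc; ++cell)
    {
      // ...generate graphical output only for this locally owned cell...
    }
}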
// This is then simply C++ again:
// and the solution vector. Since we
// anticipate solving big problems, we
// use the same types as in step-17,
- // i.e. distributed parallel matrices
+ // i.e. distributed %parallel matrices
// and vectors built on top of the
// PETSc library. Conveniently, they
// can also be used when running on
// only a single machine, in which case
// this machine happens to be the only
- // one in our parallel universe.
+ // one in our %parallel universe.
//
    // However, in contrast to step-17,
// we do not store the solution vector
unsigned int timestep_no;
// Then a few variables that have to do
- // with parallel processing: first, a
+ // with %parallel processing: first, a
// variable denoting the MPI
// communicator we use, and then two
// numbers telling us how many
// freedom whether they will be owned by
// the processor we are on or another one
// (in case this program is run in
- // parallel via MPI). This of course is
+ // %parallel via MPI). This of course is
// not optimal -- it limits the size of
// the problems we can solve, since
// storing the entire sparsity pattern
// the solution vector is a local
// one, unlike the right hand
// side that is a distributed
- // parallel one and therefore
+ // %parallel one and therefore
// needs to know the MPI
// communicator over which it is
// supposed to transmit messages:
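    // (What follows at this point in the actual program is not part of
    // this excerpt. As a sketch, and with names such as
    // <code>incremental_displacement</code> and <code>n_local_dofs</code>
    // being assumptions of the illustration, the two initializations
    // would look roughly like this:)
    incremental_displacement.reinit (dof_handler.n_dofs());
    system_rhs.reinit (mpi_communicator,
                       dof_handler.n_dofs(),
                       n_local_dofs);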
// solution vector compatible
// with the matrix and right hand
// side (i.e. here a distributed
- // parallel vector, rather than
+ // %parallel vector, rather than
// the sequential vector we use
// in this program) in order to
// preset the entries of the
// Then set up a global vector into which
// we merge the local indicators from
- // each of the parallel processes:
+ // each of the %parallel processes:
const unsigned int n_local_cells
= GridTools::count_cells_with_subdomain_association (triangulation,
this_mpi_process);
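    // (A sketch of the vector that the comment above announces; the name
    // <code>distributed_all_errors</code> is an assumption of this
    // illustration. It has one entry per active cell globally, of which
    // the present process owns <code>n_local_cells</code>.)
    PETScWrappers::MPI::Vector
      distributed_all_errors (mpi_communicator,
                              triangulation.n_active_cells(),
                              n_local_cells);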
const int ierr
= MatSetValues (matrix, 1, &petsc_i, n_columns, col_index_ptr,
col_value_ptr, ADD_VALUES);
- AssertThrow (ierr == 0, ExcPETScError(ierr));
+ Assert (ierr == 0, ExcPETScError(ierr));
}
*/
template <typename SparsityType>
SparseMatrix (const SparsityType &sparsity_pattern,
- const bool preset_nonzero_locations = false);
+ const bool preset_nonzero_locations = true);
/**
* This operator assigns a scalar to
*/
template <typename SparsityType>
void reinit (const SparsityType &sparsity_pattern,
- const bool preset_nonzero_locations = false);
+ const bool preset_nonzero_locations = true);
/**
* Return a reference to the MPI