From 1b3660ec51cd85521319d9c1dfdd94cbb36828e9 Mon Sep 17 00:00:00 2001 From: wolf Date: Tue, 6 Apr 2004 15:53:23 +0000 Subject: [PATCH] Add intro section. git-svn-id: https://svn.dealii.org/trunk@8980 0785d39b-7218-0410-832d-ea1e28bc413d --- .../step-17.data/intro.html | 57 +++++++++++++++++++ 1 file changed, 57 insertions(+) diff --git a/deal.II/doc/tutorial/chapter-2.step-by-step/step-17.data/intro.html b/deal.II/doc/tutorial/chapter-2.step-by-step/step-17.data/intro.html index 1518606e43..5a921a0a88 100644 --- a/deal.II/doc/tutorial/chapter-2.step-by-step/step-17.data/intro.html +++ b/deal.II/doc/tutorial/chapter-2.step-by-step/step-17.data/intro.html @@ -1,3 +1,60 @@

Introduction

+

+This program does not introduce any new mathematical ideas; in fact, it
+performs exactly the same computations that step-8
+already does, but it does so in a different manner: instead of using deal.II's
+own linear algebra classes, we build everything on top of classes deal.II
+provides that wrap around the linear algebra implementation of the PETSc library. And
+since PETSc allows matrices and vectors to be distributed across several
+computers within an MPI network, the resulting code will even be able to solve
+the problem in parallel. If you don't know what PETSc is, then this would be a
+good time to take a quick glimpse at their homepage.

+ +

+As a prerequisite for this program, you need to have PETSc installed; if
+you want to run in parallel on a cluster, you also need METIS to partition
+meshes. The installation of deal.II together with these two additional
+libraries is described in the README file.

+ +

+Now, for the details: as mentioned, the program does not compute anything new,
+so the use of finite element classes etc. is exactly the same as before. The
+difference from previous programs is that we have replaced almost all uses of
+the classes Vector and SparseMatrix by their
+near-equivalents PETScWrappers::Vector and
+PETScWrappers::SparseMatrix (for sequential vectors and matrices,
+i.e. objects for which all elements are stored locally on one machine), and by
+PETScWrappers::MPI::Vector and
+PETScWrappers::MPI::SparseMatrix for versions of these classes
+where only a part of the matrix or vector is stored on each machine within an
+MPI network. These classes provide an interface that is very similar to that
+of the deal.II linear algebra classes, but instead of implementing the
+functionality themselves, they simply forward calls to the corresponding PETSc
+functions. The wrappers are therefore only used to give PETSc a more modern,
+object-oriented interface, and to make the use of PETSc and deal.II objects as
+interchangeable as possible.
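+
+To give a rough idea of how little changes at the user level, here is a
+minimal sketch of using the sequential wrapper classes in place of
+Vector and SparseMatrix. It is not an excerpt from step-17; the header
+paths, constructor arguments and the toy problem sizes are assumptions that
+may differ between deal.II versions:
+
+// Sketch only, not part of step-17. Header paths and constructor
+// signatures are assumptions and may vary between versions.
+#include <lac/petsc_vector.h>
+#include <lac/petsc_sparse_matrix.h>
+#include <petsc.h>
+
+int main (int argc, char **argv)
+{
+  PetscInitialize (&argc, &argv, 0, 0);   // PETSc must be set up first
+  {
+    // Sequential objects: every element is stored on the local machine.
+    PETScWrappers::Vector       rhs (100);             // vector of size 100
+    PETScWrappers::SparseMatrix matrix (100, 100, 5);  // 100x100, ~5 nonzeros per row
+
+    // The interface mirrors that of Vector and SparseMatrix; each call is
+    // forwarded to the corresponding PETSc function.
+    rhs (0) = 1.;
+    matrix.add (0, 0, 2.);
+    matrix.compress ();   // PETSc requires finalizing assembly explicitly
+  }
+  PetscFinalize ();       // wrapper objects must be gone before this call
+}
+
+The one visible difference at this level is the explicit compress() call,
+which PETSc needs to finalize matrix (and vector) assembly before the object
+can be used elsewhere.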

+ +

+While the sequential PETSc wrapper classes do not have any advantage over
+their deal.II counterparts, the main point of using PETSc is that it can run
+in parallel. We will make use of this by partitioning the domain into as many
+blocks (``subdomains'') as there are processes in the MPI network. At the same
+time, PETSc provides dummy MPI stubs that allow the same program to be run on
+a single machine if so desired, without any changes.
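+
+As a rough illustration of what this partitioning looks like in code, the
+following sketch splits a mesh into one subdomain per MPI process. It is not
+taken from step-17 itself, and the header paths and exact function signatures
+are assumptions that may differ between deal.II versions:
+
+// Sketch only: one subdomain per MPI process.
+#include <grid/tria.h>
+#include <grid/grid_generator.h>
+#include <grid/grid_tools.h>
+#include <mpi.h>
+
+int main (int argc, char **argv)
+{
+  MPI_Init (&argc, &argv);
+
+  int n_mpi_processes;
+  MPI_Comm_size (MPI_COMM_WORLD, &n_mpi_processes);   // number of processes
+
+  Triangulation<2> triangulation;
+  GridGenerator::hyper_cube (triangulation);
+  triangulation.refine_global (4);
+
+  // Assign each cell to one of n_mpi_processes subdomains (this uses METIS);
+  // cells can later be queried for their subdomain via cell->subdomain_id().
+  GridTools::partition_triangulation (n_mpi_processes, triangulation);
+
+  MPI_Finalize ();
+}
+
+Each cell then carries a subdomain id, so that every process can restrict its
+assembly work to the cells of its own subdomain.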

+ +

+The techniques this program demonstrates are: how to use the PETSc wrapper
+classes; how to parallelize operations for jobs running on an MPI network; and
+how to partition the domain into subdomains to parallelize the work. Since
+all this can only be demonstrated using actual code, let us go straight to the
+code without much further ado.

-- 2.39.5