From 058c39901d0e7e5741194e321a184f67212e4039 Mon Sep 17 00:00:00 2001
From: wolf
Date: Mon, 8 Mar 2004 22:32:42 +0000
Subject: [PATCH] Mention METIS.

git-svn-id: https://svn.dealii.org/trunk@8684 0785d39b-7218-0410-832d-ea1e28bc413d
---
 deal.II/doc/readme.html | 58 +++++++++++++++++++++++++++++------------
 1 file changed, 42 insertions(+), 16 deletions(-)

diff --git a/deal.II/doc/readme.html b/deal.II/doc/readme.html
index 8f742b61ce..145ebb9296 100644
--- a/deal.II/doc/readme.html
+++ b/deal.II/doc/readme.html
@@ -670,22 +670,23 @@
  • deal.II can interface to the PETSc library. PETSc's - main strength is to provide lots of functions for linear algebra, among - several other things. It comes with implementations of a variety of - linear solvers, as well as various different sparse and dense matrix and - vector formats, for which deal.II has wrapper classes - that provide almost the same interfaces as the built-in - deal.II linear algebra classes. The main advantage of - PETSc is the variety of its algorithms, and that all of them can work in - a parallel setting, i.e. matrices and vectors can be distributed across - a cluster of computers that communicate via MPI. This makes PETSc the - choice of linear algrebra implementations for very large problems that - do not fit into a single computer's memory or have run-times that are - too long for a single computer. At present, our interfaces are only - wrappers for vector and matrix formats, as well as to their solver and - preconditioner classes. However, we plan to extend this to a tighter - coupling in the future. + href="http://www.mcs.anl.gov/petsc/" target="_top">PETSc + library. PETSc's main strength is to provide lots of functions for + linear algebra, among several other things. It comes with + implementations of a variety of linear solvers, as well as various + different sparse and dense matrix and vector formats, for which + deal.II has wrapper classes that provide almost the + same interfaces as the built-in deal.II linear + algebra classes. The main advantage of PETSc is the variety of its + algorithms, and that all of them can work in a parallel setting, + i.e. matrices and vectors can be distributed across a cluster of + computers that communicate via MPI. This makes PETSc the choice of + linear algebra implementations for very large problems that do not fit + into a single computer's memory or have run-times that are too long for + a single computer. At present, our interfaces are only wrappers for + vector and matrix formats, as well as to their solver and preconditioner + classes. However, we plan to extend this to a tighter coupling in the + future.

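      As a minimal sketch of what these wrapper classes look like in use
      (assuming the PETScWrappers class names and old-style include paths,
      both of which may differ between deal.II versions), a linear system can
      be solved through the PETSc-based solver and preconditioner classes with
      the same calling sequence as for the built-in classes:

        #include <lac/solver_control.h>
        #include <lac/petsc_sparse_matrix.h>
        #include <lac/petsc_vector.h>
        #include <lac/petsc_solver.h>
        #include <lac/petsc_precondition.h>

        // Solve A x = b through the PETSc wrapper classes; the calling
        // sequence mirrors that of deal.II's own SparseMatrix and Vector.
        void solve_with_petsc (const PETScWrappers::SparseMatrix &system_matrix,
                               PETScWrappers::Vector             &solution,
                               const PETScWrappers::Vector       &system_rhs)
        {
          SolverControl                     solver_control (1000, 1e-12);
          PETScWrappers::SolverCG           cg (solver_control);
          PETScWrappers::PreconditionJacobi preconditioner (system_matrix);

          cg.solve (system_matrix, solution, system_rhs, preconditioner);
        }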
    @@ -715,6 +716,31 @@ fixed in 2.2.0. Versions prior to 2.1.6 are likely not to work, but have not been tested.

    + + +
  • +

      + In order to generate partitionings of triangulations, we have functions + that call the METIS library. METIS is a library that provides + various methods to partition graphs, which we use to define which cell + belongs to which part of a triangulation. The main point of using METIS + is to generate partitions so that the interfaces between cell blocks are + as small as possible. This data can, in turn, be used to distribute + degrees of freedom onto different processors when using PETSc in + parallel mode. +

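      A minimal sketch of such a partitioning call follows; the
      GridTools::partition_triangulation() name and the include paths are
      assumptions that may differ between deal.II versions, but the pattern is
      the same: METIS assigns each cell a partition number, which is stored as
      the cell's subdomain id.

        #include <grid/tria.h>
        #include <grid/grid_generator.h>
        #include <grid/grid_tools.h>

        #include <vector>

        // Partition a refined unit square into four pieces; METIS tries to
        // keep the interfaces between the pieces as small as possible.
        void partition_example ()
        {
          Triangulation<2> triangulation;
          GridGenerator::hyper_cube (triangulation);
          triangulation.refine_global (4);

          GridTools::partition_triangulation (4, triangulation);

          // Each cell now carries the number of the partition it belongs to;
          // count how many cells ended up in each one:
          std::vector<unsigned int> cells_per_partition (4, 0);
          for (Triangulation<2>::active_cell_iterator cell = triangulation.begin_active();
               cell != triangulation.end(); ++cell)
            ++cells_per_partition[cell->subdomain_id()];
        }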
    + +

      + As with PETSc, the use of METIS is optional. If you wish to use it, you + can do so by having a METIS installation around at the time of calling + ./configure. You can let ./configure know + about this either by setting a METIS_DIR environment + variable denoting the path to the METIS library, or by passing this + information to ./configure using the + --with-metis flag. We have tested our interface code with + METIS version 4.0.1, and newer versions should presumably work as well. +

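      For concreteness, the two ways of announcing a METIS installation to
      ./configure sketched above could look like this (the exact form of the
      --with-metis argument is an assumption; ./configure --help has the
      authoritative spelling):

        # either point configure at METIS through an environment variable ...
        METIS_DIR=/path/to/metis ./configure

        # ... or pass the path explicitly (flag syntax assumed)
        ./configure --with-metis=/path/to/metis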
  • -- 2.39.5