From a56967eba14234e304d5fe110a74ee9a746c6745 Mon Sep 17 00:00:00 2001
From: David Wells
Date: Sun, 29 Jun 2025 17:40:32 -0400
Subject: [PATCH] step-17: make this step work without METIS.

---
 examples/step-17/doc/intro.dox |  9 +++++----
 examples/step-17/step-17.cc    | 26 ++++++++++++++++----------
 2 files changed, 21 insertions(+), 14 deletions(-)

diff --git a/examples/step-17/doc/intro.dox b/examples/step-17/doc/intro.dox
index d69038fece..7ea6cdb8fc 100644
--- a/examples/step-17/doc/intro.dox
+++ b/examples/step-17/doc/intro.dox
@@ -14,11 +14,12 @@ solving the problem in %parallel. If you don't know what PETSc is, then this
 would be a good time to take a quick glimpse at their homepage.
 
 As a prerequisite of this program, you need to have PETSc installed, and if
-you want to run in %parallel on a cluster, you also need METIS to partition meshes. The installation of deal.II
-together with these two additional libraries is described in the README file.
+target="_top">METIS to partition meshes. If METIS is not available, then
+this program will use a less efficient mesh partitioner that is built into deal.II.
+The installation of deal.II together with these two additional libraries is
+described in the README file.
 
 Now, for the details: as mentioned, the program does not compute anything new,
 so the use of finite element classes, etc., is exactly the same as before. The
diff --git a/examples/step-17/step-17.cc b/examples/step-17/step-17.cc
index ea4fbe1262..6f87209067 100644
--- a/examples/step-17/step-17.cc
+++ b/examples/step-17/step-17.cc
@@ -251,16 +251,18 @@ namespace Step17
   // for the global linear system to be solved needs to be implemented.
   //
   // However, before we proceed with this, there is one thing to do for a
-  // parallel program: we need to determine which MPI process is
-  // responsible for each of the cells. Splitting cells among
-  // processes, commonly called "partitioning the mesh", is done by
-  // assigning a @ref GlossSubdomainId "subdomain id" to each cell. We
-  // do so by calling into the METIS library that does this in a very
-  // efficient way, trying to minimize the number of nodes on the
-  // interfaces between subdomains. Rather than trying to call METIS
-  // directly, we do this by calling the
-  // GridTools::partition_triangulation() function that does this at a
-  // much higher level of programming.
+  // parallel program: we need to determine which MPI process is responsible for
+  // each of the cells. Splitting cells among processes, commonly called
+  // "partitioning the mesh", is done by assigning a @ref GlossSubdomainId
+  // "subdomain id" to each cell. In this case, we partition the mesh either via
+  // the METIS library or, if deal.II was not set up with support for METIS, via
+  // the built-in zorder partitioner. In general, external packages like METIS
+  // and Zoltan do a better job (meaning that they minimize the number of
+  // elements on subdomain interfaces) than the simple zorder partitioner
+  // implemented in GridTools::partition_triangulation_zorder(), but they have
+  // the disadvantage of requiring an external dependency. Rather than directly
+  // using METIS' API, we use it via GridTools::partition_triangulation(), which
+  // does this at a much higher level of programming.
   //
   // @note As mentioned in the introduction, we could avoid this manual
   // partitioning step if we used the parallel::shared::Triangulation
@@ -316,7 +318,11 @@ namespace Step17
   template <int dim>
   void ElasticProblem<dim>::setup_system()
   {
+#ifdef DEAL_II_WITH_METIS
     GridTools::partition_triangulation(n_mpi_processes, triangulation);
+#else
+    GridTools::partition_triangulation_zorder(n_mpi_processes, triangulation);
+#endif
 
     dof_handler.distribute_dofs(fe);
     DoFRenumbering::subdomain_wise(dof_handler);
-- 
2.39.5
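
For anyone who wants to try the fallback outside the tutorial, here is a minimal, self-contained sketch of the same logic the patch adds to setup_system(): partition a mesh with GridTools::partition_triangulation() when deal.II was configured with METIS, and with GridTools::partition_triangulation_zorder() otherwise. The concrete mesh (a refined hyper_cube), the n_partitions constant, and the printed per-subdomain cell counts are illustrative stand-ins, not part of step-17; only the two GridTools calls and the DEAL_II_WITH_METIS check come from the patch itself.

  #include <deal.II/base/config.h>
  #include <deal.II/grid/grid_generator.h>
  #include <deal.II/grid/grid_tools.h>
  #include <deal.II/grid/tria.h>

  #include <iostream>
  #include <vector>

  int main()
  {
    // Stand-in for the member variable `triangulation` in step-17:
    // a small serial mesh that we can partition without running MPI.
    dealii::Triangulation<2> triangulation;
    dealii::GridGenerator::hyper_cube(triangulation);
    triangulation.refine_global(3);

    // Stand-in for n_mpi_processes in the tutorial.
    const unsigned int n_partitions = 4;

    // The same preprocessor switch as in the patch: use METIS when it is
    // available, otherwise fall back to the built-in z-order partitioner.
  #ifdef DEAL_II_WITH_METIS
    dealii::GridTools::partition_triangulation(n_partitions, triangulation);
  #else
    dealii::GridTools::partition_triangulation_zorder(n_partitions, triangulation);
  #endif

    // Either call assigns each active cell a subdomain id in
    // [0, n_partitions); count how many cells landed in each subdomain.
    std::vector<unsigned int> cells_per_subdomain(n_partitions, 0);
    for (const auto &cell : triangulation.active_cell_iterators())
      ++cells_per_subdomain[cell->subdomain_id()];

    for (unsigned int s = 0; s < n_partitions; ++s)
      std::cout << "subdomain " << s << ": " << cells_per_subdomain[s]
                << " cells" << std::endl;
  }

With the z-order branch the partition is still roughly balanced in cell count, but it will usually place more cells on subdomain interfaces than METIS would, which is exactly the trade-off the updated comment in the patch describes.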