From: Wolfgang Bangerth
Date: Thu, 11 Jan 2018 01:56:59 +0000 (-0700)
Subject: Document what DoFRenumbering::hierarchical() does for parallel::shared::Triangulation.
X-Git-Tag: v9.0.0-rc1~535^2~1
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=c5cfe48c34928055544e9818f5cd85f16f8461b3;p=dealii.git

Document what DoFRenumbering::hierarchical() does for parallel::shared::Triangulation.
---

diff --git a/include/deal.II/dofs/dof_renumbering.h b/include/deal.II/dofs/dof_renumbering.h
index 21ad83437f..19584c0ac4 100644
--- a/include/deal.II/dofs/dof_renumbering.h
+++ b/include/deal.II/dofs/dof_renumbering.h
@@ -764,7 +764,7 @@ namespace DoFRenumbering
    * the mesh has undergone. On the other hand, the z-order
    * of cells is independent of the mesh's history, and so yields a
    * predictable DoF numbering.
-   * - For meshes described by parallel::distributed::Triangulation,
+   * - For meshes based on parallel::distributed::Triangulation,
    *   the @ref GlossLocallyOwnedCell "locally owned cells" of
    *   each MPI process are contiguous in Z order. That means that
    *   numbering degrees of freedom by visiting cells in Z order yields
@@ -777,7 +777,43 @@ namespace DoFRenumbering
    *   cell with indices that will be the same regardless of how many
    *   processes the mesh is split up between.
    *
-   * This function generates an ordering that is independent of the previous
+   * For meshes based on parallel::shared::Triangulation, the situation is
+   * more complex. Here, the set of locally owned cells is determined by
+   * a partitioning algorithm (selected by passing an object of type
+   * parallel::shared::Triangulation::Settings to the constructor of the
+   * triangulation), and in general these partitioning algorithms may
+   * assign cells to @ref GlossSubdomainId "subdomains" based on
+   * decisions that may have nothing to do with the Z order. (Though it
+   * is possible to choose these settings in such a way that the
+   * partitioning does use the Z order.) As a consequence, the cells of one
+   * subdomain are not contiguous in Z order, and if one renumbered degrees
+   * of freedom based on the Z order of cells, one would generally end up
+   * with DoF indices that, on each processor, do not form a contiguous range.
+   * This is often inconvenient (for example, because PETSc cannot store
+   * vectors and matrices for which the locally owned set of indices
+   * is not contiguous), and consequently this function uses the following
+   * algorithm for parallel::shared::Triangulation objects:
+   * - It determines how many degrees of freedom each processor owns.
+   *   This number is invariant under renumbering, and consequently we can
+   *   use the number of DoFs each processor owns at the beginning of the
+   *   current function. Let us call this number $n_P$ for processor $P$.
+   * - It determines for each processor a contiguous range of new
+   *   DoF indices $[b_P,e_P)$ such that $e_P-b_P=n_P$, $b_0=0$, and
+   *   $b_P=e_{P-1}$.
+   * - It traverses the locally owned cells in Z order and
+   *   renumbers the locally owned degrees of freedom on these cells
+   *   so that the new numbers fit within the interval $[b_P,e_P)$.
+   *   In other words, the locally owned degrees of freedom on each
+   *   processor are sorted according to the Z order of the locally
+   *   owned cells they are on, but this property may not hold globally,
+   *   across processors. This is because the partitioning algorithm may
+   *   have decided that, for example, processor 0 owns a cell that comes
+   *   later in Z order than one of the cells assigned to processor 1.
+   *   On the other hand, the algorithm described above assigns the
+   *   degrees of freedom on this cell indices that are smaller than
+   *   all of the indices owned by processor 1.
+   *
+   * @note This function generates an ordering that is independent of the previous
    * numbering of degrees of freedom. In other words, any information that may
    * have been produced by a previous call to a renumbering function is
    * ignored.
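
To make the second step of the algorithm above concrete: the begin index b_P
of each process's range is simply an exclusive prefix sum of the per-process
DoF counts n_P. The following is a minimal sketch of that one step in plain
MPI; it is illustrative only, not the deal.II implementation, and the helper
name first_new_dof_index is made up for this example.

  #include <mpi.h>

  // Hypothetical helper (for illustration; not a deal.II function):
  // compute b_P, the first new DoF index of the calling process, from
  // n_P, the number of DoFs it owns. e_P then equals b_P + n_P, so the
  // per-process ranges [b_P, e_P) stack back to back with b_0 = 0.
  unsigned long long
  first_new_dof_index(const unsigned long long n_locally_owned_dofs,
                      const MPI_Comm           communicator)
  {
    unsigned long long b_P = 0;

    // Exclusive prefix sum: process P receives the sum of n_Q over all
    // processes Q of rank less than P, which is exactly b_P.
    MPI_Exscan(&n_locally_owned_dofs, &b_P, 1,
               MPI_UNSIGNED_LONG_LONG, MPI_SUM, communicator);

    // MPI_Exscan leaves the receive buffer undefined on rank 0, where
    // b_0 = 0 by definition.
    int rank;
    MPI_Comm_rank(communicator, &rank);
    if (rank == 0)
      b_P = 0;

    return b_P;
  }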
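A short usage sketch of the documented function itself, written against
deal.II's public API as of the 9.x series (the mesh, refinement level, and
element degree are arbitrary choices for the example):

  #include <deal.II/base/mpi.h>
  #include <deal.II/distributed/shared_tria.h>
  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/dofs/dof_renumbering.h>
  #include <deal.II/fe/fe_q.h>
  #include <deal.II/grid/grid_generator.h>

  #include <cassert>

  using namespace dealii;

  int main(int argc, char *argv[])
  {
    Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

    // A shared triangulation; the default settings let the partitioner
    // assign cells to subdomains without regard to Z order.
    parallel::shared::Triangulation<2> triangulation(MPI_COMM_WORLD);
    GridGenerator::hyper_cube(triangulation);
    triangulation.refine_global(4);

    FE_Q<2>       fe(1);
    DoFHandler<2> dof_handler(triangulation);
    dof_handler.distribute_dofs(fe);

    // Renumber hierarchically. Per the documentation added in this
    // commit, each process ends up with a contiguous new index range
    // [b_P, e_P), ordered by the Z order of its locally owned cells.
    DoFRenumbering::hierarchical(dof_handler);

    // The locally owned index set is a single contiguous range, which
    // is what PETSc-based vectors and matrices require.
    assert(dof_handler.locally_owned_dofs().is_contiguous());
  }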