<br>
-<i>This program has evolved from a version originally written by Guido
-Kanschat in 2003. It has undergone significant revisions by Bärbel
-Janssen, Guido Kanschat and Wolfgang Bangerth in 2009 and 2010 to demonstrate
-multigrid algorithms on adaptively refined meshes.
-</i>
-
-
<a name="Intro"></a>
<h1>Introduction</h1>
This example shows the basic usage of the multilevel functions in
-deal.II. It solves the same problem as used in step-6,
+deal.II. It solves almost the same problem as used in step-6,
but demonstrating the things one has to provide when using multigrid
as a preconditioner. In particular, this requires that we define a
hierarchy of levels, provide transfer operators from one level to the
inside the library.
<li>The solver on the coarsest level; here, we use MGCoarseGridHouseholder.
<li>The smoother on all other levels, which in our case will be the
- MGSmootherRelaxation class using SOR as the underlying method
-<li>And MGMatrix, a class having a special level multiplication, i.e. we
+ mg::SmootherRelaxation class using SOR as the underlying method
+<li>And mg::Matrix, a class having a special level multiplication, i.e. we
basically store one matrix per grid level and allow multiplication
with it.
</ul>
V-cycle, which is in turn used by the preconditioner PreconditionMG,
ready for plug-in into a linear solver of the LAC library.
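+
+To make this more concrete, here is a condensed sketch of how these
+pieces are typically wired together in deal.II (compare the
+<code>solve()</code> function further down; the exact constructor
+signatures depend on the deal.II version, so take this as an
+illustration rather than as copy-and-paste code):
+@code
+// Transfer operators between the level meshes, built from the level
+// degrees of freedom of the DoFHandler.
+MGTransferPrebuilt<Vector<double> > mg_transfer(hanging_node_constraints,
+                                                mg_constrained_dofs);
+mg_transfer.build_matrices(dof_handler);
+
+// Direct solver on the coarsest level.
+FullMatrix<double> coarse_matrix;
+coarse_matrix.copy_from(mg_matrices[0]);
+MGCoarseGridHouseholder<> coarse_grid_solver;
+coarse_grid_solver.initialize(coarse_matrix);
+
+// SOR smoother on all other levels.
+mg::SmootherRelaxation<PreconditionSOR<SparseMatrix<double> >, Vector<double> >
+  mg_smoother;
+mg_smoother.initialize(mg_matrices);
+mg_smoother.set_steps(2);
+
+// Wrap the level matrices and combine everything into a V-cycle, which
+// then acts as a preconditioner through PreconditionMG. (The interface
+// matrices at refinement edges are attached to the Multigrid object via
+// set_edge_matrices(); see the solve() function for the details.)
+mg::Matrix<Vector<double> > mg_matrix(mg_matrices);
+Multigrid<Vector<double> > mg(dof_handler, mg_matrix, coarse_grid_solver,
+                              mg_transfer, mg_smoother, mg_smoother);
+PreconditionMG<dim, Vector<double>, MGTransferPrebuilt<Vector<double> > >
+  preconditioner(dof_handler, mg, mg_transfer);
+@endcode
+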
-The multilevel method in deal.II follows in many respects the outlines
-of the various publications by James Bramble, Joseph Pasciak and
-Jinchao Xu (i.e. the "BPX" framework). In order to understand many of
-the options, a rough familiarity with their work is quite helpful.
-
-However, in comparison to this framework, the implementation in
-deal.II has to take into account the fact that we want to solve linear
-systems on adaptively refined meshes. This leads to the complication
-that it isn't quite as clear any more what exactly a "level" in a
-multilevel hierarchy of a mesh is. The following image shows what we
-consider to be a "level":
+The multigrid method implemented here for adaptively refined meshes
+follows the outline in the @ref mg_paper "Multigrid paper by Janssen and Kanschat",
+which describes the underlying implementation in
+deal.II and also introduces a lot of the nomenclature. First, we have
+to distinguish between level meshes, namely cells that have the same
+refinement distance from the coarse mesh, and the leaf mesh consisting
+of active cells of the hierarchy (in older work we refer to this as
+the global mesh, but this term is overused). Most importantly, the
+leaf mesh is not identical with the level mesh on the finest level.
+The following image shows what we consider to be a "level mesh":
<p align="center">
- @image html "hanging_nodes.png" ""
+ @image html "multigrid.png" ""
</p>
-In other words, the fine level in this mesh consists only of the
+The fine level in this mesh consists only of the
degrees of freedom that are defined on the refined cells, but does not
extend to that part of the domain that is not refined. While this
guarantees that the overall effort grows as ${\cal O}(N)$ as necessary
for optimal multigrid complexity, it leads to problems when defining
-where to smooth and what boundary conditions to pose for the operators
+where to smoothen and what boundary conditions to pose for the operators
defined on individual levels if the level boundary is not an external
-boundary. These questions are discussed in detail in the
-@ref mg_paper "Multigrid paper by Janssen and Kanschat" that describes
-the implementation in deal.II.
-
-
+boundary. These questions are discussed in detail in the article cited above.
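+
+In the program below, the information about which degrees of freedom
+sit on a level boundary or on a refinement edge is collected in an
+MGConstrainedDoFs object. A minimal sketch, using the interface of the
+deal.II version this program was written for and assuming that the
+Dirichlet boundary carries the default boundary indicator zero:
+@code
+// inside a member function of the LaplaceProblem<dim> class:
+MGConstrainedDoFs mg_constrained_dofs;
+ZeroFunction<dim> homogeneous_dirichlet_bc;
+typename FunctionMap<dim>::type dirichlet_boundary;
+dirichlet_boundary[0] = &homogeneous_dirichlet_bc;
+mg_constrained_dofs.initialize(dof_handler, dirichlet_boundary);
+@endcode
+The level matrices assembled later are then fixed up by asking this
+object which indices are level boundary or refinement edge indices.
+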
<h3>The testcase</h3>
-The problem we solve here is exactly the same as in
-step-6, the only difference being the solver we use
-here. You may want to look there for a definition of what we solve,
-right hand side and boundary conditions. Obviously, the program would
-also work if we changed the geometry and other pieces of data that
-defines this particular problem.
-
-The things that are new are all those parts that concern the
-multigrid. In particular, this includes the following members of the
-main class:
-- <code>LaplaceProblem::mg_dof_handler</code>
-- <code>LaplaceProblem::mg_sparsity</code>
-- <code>LaplaceProblem::mg_matrices</code>
-- <code>LaplaceProblem::mg_interface_matrices_up</code>
-- <code>LaplaceProblem::assemble_multigrid ()</code>
-- <code>LaplaceProblem::solve ()</code>
-Take a look at these functions.
+The problem we solve here is similar to step-6, with two main
+differences: first, we use a multigrid preconditioner, obviously. Second,
+we change the discontinuity of the coefficient such that the local
+assembler does not look more complicated than necessary.
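+To be concrete: as in step-6 we solve
+$-\nabla \cdot (a(\mathbf x) \nabla u) = 1$ with homogeneous Dirichlet
+boundary values, but now, as can be seen in the local integrator
+further down, with a coefficient $a(\mathbf x)$ that is one in the left
+half plane ($x<0$) and one tenth in the right half plane ($x>0$).
+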
<h1>Results</h1>
-The output that this program generates is, of course, the same as that
-of step-6, so you may see there for more results. On the
-other hand, since no tutorial program is a good one unless it has at
-least one colorful picture, here is, again, the solution:
+On the finest mesh, the solution looks like this:
<p align="center">
<img src="http://www.dealii.org/images/steps/developer/step-16.solution.png" alt="">
</p>
-When run, the output of this program is
+More importantly, we would like to see if the multigrid method really
+improved the solver performance. Therefore, here is the textual
+output:
+
<pre>
-Cycle 0:
- Number of active cells: 20
- Number of degrees of freedom: 25 (by level: 8, 25)
- 7 CG iterations needed to obtain convergence.
-Cycle 1:
- Number of active cells: 44
- Number of degrees of freedom: 57 (by level: 8, 25, 48)
- 8 CG iterations needed to obtain convergence.
-Cycle 2:
- Number of active cells: 92
- Number of degrees of freedom: 117 (by level: 8, 25, 80, 60)
- 9 CG iterations needed to obtain convergence.
-Cycle 3:
- Number of active cells: 188
- Number of degrees of freedom: 221 (by level: 8, 25, 80, 200)
- 12 CG iterations needed to obtain convergence.
-Cycle 4:
- Number of active cells: 416
- Number of degrees of freedom: 485 (by level: 8, 25, 89, 288, 280)
- 13 CG iterations needed to obtain convergence.
-Cycle 5:
- Number of active cells: 800
- Number of degrees of freedom: 925 (by level: 8, 25, 89, 288, 784, 132)
- 14 CG iterations needed to obtain convergence.
-Cycle 6:
- Number of active cells: 1628
- Number of degrees of freedom: 1865 (by level: 8, 25, 89, 304, 1000, 1164, 72)
- 14 CG iterations needed to obtain convergence.
-Cycle 7:
- Number of active cells: 3194
- Number of degrees of freedom: 3603 (by level: 8, 25, 89, 328, 1032, 2200, 1392)
- 16 CG iterations needed to obtain convergence.
+DEAL::Cycle 0
+DEAL:: Number of active cells: 20
+DEAL:: Number of degrees of freedom: 25 (by level: 8, 25)
+DEAL:cg::Starting value 0.510691
+DEAL:cg::Convergence step 6 value 4.59193e-14
+DEAL::Cycle 1
+DEAL:: Number of active cells: 41
+DEAL:: Number of degrees of freedom: 52 (by level: 8, 25, 41)
+DEAL:cg::Starting value 0.455356
+DEAL:cg::Convergence step 8 value 3.09682e-13
+DEAL::Cycle 2
+DEAL:: Number of active cells: 80
+DEAL:: Number of degrees of freedom: 100 (by level: 8, 25, 61, 52)
+DEAL:cg::Starting value 0.394469
+DEAL:cg::Convergence step 9 value 1.96993e-13
+DEAL::Cycle 3
+DEAL:: Number of active cells: 161
+DEAL:: Number of degrees of freedom: 190 (by level: 8, 25, 77, 160)
+DEAL:cg::Starting value 0.322156
+DEAL:cg::Convergence step 9 value 2.94418e-13
+DEAL::Cycle 4
+DEAL:: Number of active cells: 311
+DEAL:: Number of degrees of freedom: 364 (by level: 8, 25, 86, 227, 174)
+DEAL:cg::Starting value 0.279667
+DEAL:cg::Convergence step 10 value 3.45746e-13
+DEAL::Cycle 5
+DEAL:: Number of active cells: 593
+DEAL:: Number of degrees of freedom: 667 (by level: 8, 25, 89, 231, 490, 96)
+DEAL:cg::Starting value 0.215917
+DEAL:cg::Convergence step 10 value 1.03758e-13
+DEAL::Cycle 6
+DEAL:: Number of active cells: 1127
+DEAL:: Number of degrees of freedom: 1251 (by level: 8, 25, 89, 274, 760, 417, 178)
+DEAL:cg::Starting value 0.185906
+DEAL:cg::Convergence step 10 value 3.40351e-13
+DEAL::Cycle 7
+DEAL:: Number of active cells: 2144
+DEAL:: Number of degrees of freedom: 2359 (by level: 8, 25, 89, 308, 779, 1262, 817)
+DEAL:cg::Starting value 0.141519
+DEAL:cg::Convergence step 10 value 5.74965e-13
</pre>
-That's not perfect — we would have hoped for a constant number
-of iterations rather than one that increases as we get more and more
-degrees of freedom — but it is also not far away. The reason for
-this is easy enough to understand, however: since we have a strongly
-varying coefficient, the operators that we assembly by quadrature on
-the lower levels become worse and worse approximations of the operator
-on the finest level. Consequently, even if we had perfect solvers on
-the coarser levels, they would not be good preconditioners on the
-finest level. This theory is easily tested by comparing results when
-we use a constant coefficient: in that case, the number of iterations
-remains constant at 9 after the first three or four refinement steps.
-We can also compare what this program produces with how @ref step_5
-"step-5" performed. To solve the same problem as in step-5, the only
-two changes that are necessary are (i) to replace the body of the
-function <code>LaplaceProblem::refine_grid</code> by a call to
-<code>triangulation.refine_global(1)</code>, and (ii) to use the same
-SolverControl object and tolerance as in step-5 — the rest of the
-program remains unchanged. In that case, here is how the solvers used
-in step-5 and the multigrid solver used in the current program
-compare:
-<table align="center">
-<tr><th>cells</th><th>step-5</th><th>step-16</th></tr>
-<tr><td>20</td> <td>13</td> <td>6</td> </tr>
-<tr><td>80</td> <td>17</td> <td>7</td> </tr>
-<tr><td>320</td> <td>29</td> <td>9</td> </tr>
-<tr><td>1280</td> <td>51</td> <td>10</td> </tr>
-<tr><td>5120</td> <td>94</td> <td>11</td> </tr>
-<tr><td>20480</td><td>180</td><td>13</td></tr>
-</table>
-This isn't only fewer iterations than in step-5 (each of which
-is, however, much more expensive) but more importantly, the number of
-iterations also grows much more slowly under mesh refinement (again,
-it would be almost constant if the coefficient was constant rather
-than strongly varying as chosen here). This justifies the common
-observation that, whenever possible, multigrid methods should be used
-for second order problems.
+That's almost perfect multigrid performance: 12 orders of magnitude in
+10 iteration steps, and almost independent of the mesh size. That's
+obviously in part due to the simple nature of the problem solved, but
+it shows the power of multigrid methods.
<h3> Possible extensions </h3>
+We encourage you to switch on timing output by calling the function
+LogStream::log_execution_time() of the deallog object and compare to
+step-6. You will see that the multigrid method has quite an overhead
+on coarse meshes, but that it always beats other methods on fine
+meshes because of its optimal complexity.
+
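+A minimal sketch of what this amounts to, assuming the overall
+structure of main() is otherwise left unchanged:
+@code
+// at the beginning of main():
+deallog.log_execution_time(true);
+@endcode
+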
A close inspection of this program's performance shows that it is mostly
dominated by matrix-vector operations. step-37 shows one way
how this can be avoided by working with matrix-free methods.
* the top level of the deal.II distribution.
*
* ---------------------------------------------------------------------
-
*
* Authors: Guido Kanschat, University of Heidelberg, 2003
* Baerbel Janssen, University of Heidelberg, 2010
*/
-// As discussed in the introduction, most of this program is copied almost
-// verbatim from step-6, which itself is only a slight modification of
-// step-5. Consequently, a significant part of this program is not new if
-// you've read all the material up to step-6, and we won't comment on that
-// part of the functionality that is unchanged. Rather, we will focus on those
-// aspects of the program that have to do with the multigrid functionality
-// which forms the new aspect of this tutorial program.
-
// @sect3{Include files}
// Again, the first few include files are already known, so we won't comment
#include <deal.II/multigrid/mg_smoother.h>
#include <deal.II/multigrid/mg_matrix.h>
-// Finally we include the MeshWorker framework. Since we have to build
+// Finally we include the MeshWorker framework. This framework, through
+// its functions loop() and integration_loop(), automates loops over
+// cells and the assembly of data into vectors, matrices, etc. It obeys
+// constraints automatically. Since we have to build
// several matrices and have to be aware of several sets of
-// constraints, we do not program loops over cells ourselves, but
-// rather leave the actual logic to MeshWorker::loop().
+// constraints, this will save us a lot of headache.
#include <deal.II/meshworker/dof_info.h>
#include <deal.II/meshworker/integration_info.h>
#include <deal.II/meshworker/simple.h>
#include <fstream>
#include <sstream>
-// Lazy as we are, we avoid typing namespace names
-
using namespace dealii;
-using namespace LocalIntegrators;
namespace Step16
{
- // @{sect3}{The integrator on each cell}
+ // @sect3{The integrator on each cell}
- // MeshWorker::integration_loop() expects a class that provides
+ // The MeshWorker::integration_loop() expects a class that provides
// functions for integration on cells and boundary and interior
// faces. This is done by the following class. In the constructor,
// we tell the loop that cell integrals should be computed (the
// 'true'), but integrals should not be computed on boundary and
- // interior faces (the two 'false').
+ // interior faces (the two 'false'). Accordingly, we only need a
+ // cell function, but none for the faces.
template <int dim>
- class LaplaceMatrix : public MeshWorker::LocalIntegrator<dim>
+ class LaplaceIntegrator : public MeshWorker::LocalIntegrator<dim>
{
-public:
-LaplaceMatrix();
-virtual void cell(MeshWorker::DoFInfo<dim>& dinfo, MeshWorker::IntegrationInfo<dim>& info) const;
-};
+ public:
+ LaplaceIntegrator();
+ virtual void cell(MeshWorker::DoFInfo<dim> &dinfo, MeshWorker::IntegrationInfo<dim> &info) const;
+ };
-template <int dim>
-LaplaceMatrix<dim>::LaplaceMatrix()
- :
- MeshWorker::LocalIntegrator<dim>(true, false, false)
-{}
+ template <int dim>
+ LaplaceIntegrator<dim>::LaplaceIntegrator()
+ :
+ MeshWorker::LocalIntegrator<dim>(true, false, false)
+ {}
-template <int dim>
-void LaplaceMatrix<dim>::cell(MeshWorker::DoFInfo<dim>& dinfo, MeshWorker::IntegrationInfo<dim>& info) const
-{
- AssertDimension (dinfo.n_matrices(), 1);
-const double coefficient = (dinfo.cell->center()(0) > 0.)
- ? .1 : 1.;
-
- Laplace::cell_matrix(dinfo.matrix(0,false).matrix, info.fe_values(0), coefficient);
+ // Next the actual integrator on each cell. We solve a Poisson problem with a
+ // coefficient of one tenth in the right half plane and one in the left
+ // half plane.
+
+ // The MeshWorker::LocalResults base class of MeshWorker::DoFInfo
+ // contains objects that can be filled in this local integrator. How many
+ // such objects there are is determined inside the MeshWorker framework by
+ // the assembler class. Here, we test for instance that one matrix is
+ // required (MeshWorker::LocalResults::n_matrices()). The matrices are accessed
+ // through MeshWorker::LocalResults::matrix(), which takes the number of the
+ // matrix as its first argument. The second argument is only used
+ // for integrals over faces, where there are two matrices for each
+ // test function set. In such a case, a second matrix with indicator
+ // 'true' would exist with the same index.
+
+ // MeshWorker::IntegrationInfo provides one or several FEValues
+ // objects, which below are used by
+ // LocalIntegrators::Laplace::cell_matrix() or
+ // LocalIntegrators::L2::L2(). Since we are assembling only a single
+ // PDE, there is also only one of these objects with index zero.
+
+ // In addition, we note that this integrator serves to compute the
+ // matrices for the multilevel preconditioner as well as the matrix
+ // and the right hand side for the global system. Since the
+ // assembler for a system requires an additional vector, this is
+ // indicated by MeshWorker::LocalResults::n_vectors() returning a nonzero
+ // value. Accordingly we fill a right hand side vector at the end of
+ // this function. LocalResults can deal with several BlockVector
+ // objects, but since we are again in the simplest case here,
+ // we enter the information into block zero of vector zero.
+ template <int dim>
+ void LaplaceIntegrator<dim>::cell(MeshWorker::DoFInfo<dim> &dinfo, MeshWorker::IntegrationInfo<dim> &info) const
+ {
+ AssertDimension (dinfo.n_matrices(), 1);
+ const double coefficient = (dinfo.cell->center()(0) > 0.)
+ ? .1 : 1.;
- if (dinfo.n_vectors() > 0)
- {
-std::vector<double> rhs(info.fe_values(0).n_quadrature_points, 1.);
- L2::L2(dinfo.vector(0).block(0), info.fe_values(0), rhs);
-}
-}
+ LocalIntegrators::Laplace::cell_matrix(dinfo.matrix(0,false).matrix, info.fe_values(0), coefficient);
+
+ if (dinfo.n_vectors() > 0)
+ {
+ std::vector<double> rhs(info.fe_values(0).n_quadrature_points, 1.);
+ LocalIntegrators::L2::L2(dinfo.vector(0).block(0), info.fe_values(0), rhs);
+ }
+ }
// @sect3{The <code>LaplaceProblem</code> class template}
const unsigned int degree;
- // The following four objects are the only additional member variables,
- // compared to step-6. The first three represent the operators that act
- // on individual levels of the multilevel hierarchy, rather than on the
- // finest mesh as do the objects above while the last object stores
- // information about the boundary indices on each level and information
- // about indices lying on a refinement edge between two different
- // refinement levels.
- //
- // To facilitate having objects on each level of a multilevel hierarchy,
- // deal.II has the MGLevelObject class template that provides storage for
- // objects on each level. What we need here are matrices on each level,
- // which implies that we also need sparsity patterns on each level. As
- // outlined in the @ref mg_paper, the operators (matrices) that we need
- // are actually twofold: one on the interior of each level, and one at the
- // interface between each level and that part of the domain where the mesh
- // is coarser. In fact, we will need the latter in two versions: for the
- // direction from coarse to fine mesh and from fine to
- // coarse. Fortunately, however, we here have a self-adjoint problem for
- // which one of these is the transpose of the other, and so we only have
- // to build one; we choose the one from coarse to fine.
+ // The following members are the essential data structures for the
+ // multigrid method. The first two represent the sparsity patterns
+ // and the matrices on individual levels of the multilevel
+ // hierarchy, very much like the objects for the global mesh above.
+
+ // Then we have two new matrices only needed for multigrid
+ // methods with local smoothing on adaptive meshes. They convey
+ // data between the interior part of the refined region and the
+ // refinement edge, as outlined in detail in @ref mg_paper.
+
+ // The last object stores information about the boundary indices
+ // on each level and information about indices lying on a
+ // refinement edge between two different refinement levels. It
+ // thus serves a similar purpose as ConstraintMatrix, but on each
+ // level.
MGLevelObject<SparsityPattern> mg_sparsity_patterns;
MGLevelObject<SparseMatrix<double> > mg_matrices;
MGLevelObject<SparseMatrix<double> > mg_interface_in;
// @sect3{The <code>LaplaceProblem</code> class implementation}
- // @sect4{LaplaceProblem::LaplaceProblem}
-
- // The constructor is left mostly unchanged. We take the polynomial degree
- // of the finite elements to be used as a constructor argument and store it
- // in a member variable.
- //
- // By convention, all adaptively refined triangulations in deal.II never
+ // Just one short remark about the constructor of the Triangulation:
+ // by convention, all adaptively refined triangulations in deal.II never
// change by more than one level across a face between cells. For our
// multigrid algorithms, however, we need a slightly stricter guarantee,
// namely that the mesh also does not change by more than refinement level
// @sect4{LaplaceProblem::setup_system}
- // The following function extends what the corresponding one in step-6
- // did, with the exception of also distributing the degrees of freedom on
- // each level of the mesh which is needed for the multigrid hierarchy.
- // We then output the number of degrees (globally and on each level) and
- // reset the solution and right hand side vectors to the correct size.
- // As in other programs where we use a sequence of meshes that become
- // finer and finer (starting with step-5), the code as is here sets
- // the solution vector to zero, rather than trying to interpolate the
- // solution obtained on the previous mesh to the current one. This
- // follows from the empirical observation that interpolating the solution
- // and using it as a starting guess does not significant decrease the number
- // of linear CG iterations one has to perform to solve a linear system.
- // (The situation is very different for nonlinear problems, where using
- // a previous time step's solution, or a solution from a coarser grid,
- // typically drastically makes the solution of a nonlinear system cheaper.)
+ // In addition to just distributing the degrees of freedom in
+ // the DoFHandler, we do the same on each level. Then, we follow the
+ // same procedure as before to set up the system on the leaf mesh.
template <int dim>
void LaplaceProblem<dim>::setup_system ()
{
dof_handler.distribute_dofs (fe);
dof_handler.distribute_mg_dofs (fe);
- std::cout << " Number of degrees of freedom: "
- << dof_handler.n_dofs()
- << " (by level: ";
+ deallog << " Number of degrees of freedom: "
+ << dof_handler.n_dofs()
+ << " (by level: ";
for (unsigned int level=0; level<triangulation.n_levels(); ++level)
- std::cout << dof_handler.n_dofs(level)
- << (level == triangulation.n_levels()-1
- ? ")" : ", ");
- std::cout << std::endl;
+ deallog << dof_handler.n_dofs(level)
+ << (level == triangulation.n_levels()-1
+ ? ")" : ", ");
+ deallog << std::endl;
sparsity_pattern.reinit (dof_handler.n_dofs(),
dof_handler.n_dofs(),
solution.reinit (dof_handler.n_dofs());
system_rhs.reinit (dof_handler.n_dofs());
- // But it starts to be a wee bit different here, although this still
- // doesn't have anything to do with multigrid methods. step-6 took care of
- // boundary values and hanging nodes in a separate step after assembling
- // the global matrix from local contributions. This works, but the same
- // can be done in a slightly simpler way if we already take care of these
- // constraints at the time of copying local contributions into the global
- // matrix. To this end, we here do not just compute the constraints due to
- // hanging nodes, but also due to zero boundary conditions. We will use
- // this set of constraints later on to help us copy local contributions
- // correctly into the global linear system right away, without the need
- // for a later clean-up stage:
constraints.clear ();
hanging_node_constraints.clear ();
DoFTools::make_hanging_node_constraints (dof_handler, hanging_node_constraints);
// @sect4{LaplaceProblem::assemble_system}
// The following function assembles the linear system on the finest level of
- // the mesh. It is almost exactly the same as in step-6.
+ // the mesh. Since we want to reuse the code here for the level
+ // assembling below, we use the local integrator class
+ // LaplaceIntegrator and leave the loops to the MeshWorker
+ // framework. Thus, this function first sets up the objects
+ // necessary for this framework, namely
+ // <ol>
+ // <li>a MeshWorker::IntegrationInfoBox, which will provide all the required
+ // data in quadrature points on the cell. This object can be seen as
+ // an extension of FEValues, providing a lot more useful
+ // information,</li>
+ // <li>a MeshWorker::DoFInfo object, which on the one hand extends the
+ // functionality of cell iterators, but also provides space for
+ // return values in its base class LocalResults,</li>
+ // <li>an assembler, in this case for the whole system. The term
+ // 'simple' here refers to the fact that the global system does not
+ // have a block structure,</li>
+ // <li>the local integrator, which implements the actual forms.</li>
+ // </ol>
//
- // This latter trick is something that only found its way into deal.II over
- // time and wasn't used in the initial version of this tutorial
- // program. There is, however, a discussion of this function in the
- // introduction of step-27.
+ // After the loop has combined all of these into a matrix and a
+ // right hand side, there is one thing left to do: the assemblers
+ // leave matrix rows and columns of constrained degrees of freedom
+ // untouched. Therefore, we put a one on the diagonal to make the
+ // whole system well posed. The value one, or any other fixed value,
+ // has the advantage that its effect on the spectrum of the matrix is
+ // easily understood. Since the corresponding eigenvectors form an
+ // invariant subspace, the value chosen does not affect the
+ // convergence of Krylov space solvers.
template <int dim>
void LaplaceProblem<dim>::assemble_system ()
{
MeshWorker::Assembler::SystemSimple<SparseMatrix<double>, Vector<double> > assembler;
assembler.initialize(constraints);
assembler.initialize(system_matrix, system_rhs);
-
- LaplaceMatrix<dim> matrix_integrator;
+
+ LaplaceIntegrator<dim> matrix_integrator;
MeshWorker::integration_loop<dim, dim> (
- dof_handler.begin_active(), dof_handler.end(),
- dof_info, info_box, matrix_integrator, assembler);
+ dof_handler.begin_active(), dof_handler.end(),
+ dof_info, info_box, matrix_integrator, assembler);
- for(unsigned int i=0; i<dof_handler.n_dofs(); ++i)
- if(constraints.is_constrained(i))
- system_matrix.set(i,i,1.);
+ for (unsigned int i=0; i<dof_handler.n_dofs(); ++i)
+ if (constraints.is_constrained(i))
+ system_matrix.set(i, i, 1.);
}
// that define the multigrid method on each level of the mesh. The
// integration core is the same as above, but the loop below will go over
// all existing cells instead of just the active ones, and the results must
- // be entered into the correct matrix. Note also that since we only do
- // multilevel preconditioning, no right-hand side needs to be assembled
- // here.
- //
- // Before we go there, however, we have to take care of a significant amount
- // of book keeping:
+ // be entered into the correct level matrices. Fortunately,
+ // MeshWorker hides most of that from us, and thus the difference
+ // between this function and the previous one lies only in the setup of
+ // the assembler and the different iterators in the loop.
+ // Also, fixing up the matrices at the end is a little more complicated.
template <int dim>
void LaplaceProblem<dim>::assemble_multigrid ()
{
assembler.initialize(mg_matrices);
assembler.initialize_interfaces(mg_interface_in, mg_interface_out);
- LaplaceMatrix<dim> matrix_integrator;
+ LaplaceIntegrator<dim> matrix_integrator;
MeshWorker::integration_loop<dim, dim> (
- dof_handler.begin_mg(), dof_handler.end_mg(),
- dof_info, info_box, matrix_integrator, assembler);
+ dof_handler.begin_mg(), dof_handler.end_mg(),
+ dof_info, info_box, matrix_integrator, assembler);
const unsigned int nlevels = triangulation.n_levels();
- for (unsigned int level=0;level<nlevels;++level)
- {
- for(unsigned int i=0; i<dof_handler.n_dofs(level); ++i)
- if(mg_matrices[level].diag_element(i)==0)
- mg_matrices[level].set(i,i,1.);
- }
+ for (unsigned int level=0; level<nlevels; ++level)
+ {
+ for (unsigned int i=0; i<dof_handler.n_dofs(level); ++i)
+ if (mg_constrained_dofs.is_boundary_index(level,i) ||
+ mg_constrained_dofs.at_refinement_edge(level,i))
+ mg_matrices[level].set(i, i, 1.);
+ }
}
// With all this together, we can finally get about solving the linear
// system in the usual way:
SolverControl solver_control (1000, 1e-12);
- SolverCG<> cg (solver_control);
+ SolverCG<> solver (solver_control);
solution = 0;
- cg.solve (system_matrix, solution, system_rhs,
- preconditioner);
+ solver.solve (system_matrix, solution, system_rhs,
+ preconditioner);
constraints.distribute (solution);
-
- std::cout << " " << solver_control.last_step()
- << " CG iterations needed to obtain convergence."
- << std::endl;
}
{
for (unsigned int cycle=0; cycle<8; ++cycle)
{
- std::cout << "Cycle " << cycle << ':' << std::endl;
+ deallog << "Cycle " << cycle << std::endl;
if (cycle == 0)
{
}
else
refine_grid ();
-// triangulation.refine_global (1);
-
- std::cout << " Number of active cells: "
- << triangulation.n_active_cells()
- << std::endl;
+ deallog << " Number of active cells: "
+ << triangulation.n_active_cells()
+ << std::endl;
setup_system ();
{
using namespace Step16;
- deallog.depth_console (0);
-
LaplaceProblem<2> laplace_problem(1);
laplace_problem.run ();
}