From 8c0369d4afa3af8fa3ab854873c4fc16843c60e1 Mon Sep 17 00:00:00 2001
From: kronbichler
Date: Thu, 21 Feb 2008 10:33:45 +0000
Subject: [PATCH] Possible extensions: Discussion about other solvers, some
 code, some tests...

git-svn-id: https://svn.dealii.org/trunk@15751 0785d39b-7218-0410-832d-ea1e28bc413d
---
 deal.II/examples/step-31/doc/results.dox | 361 +++++++++++++++++++++--
 deal.II/examples/step-31/step-31.cc      | 115 ++++----
 2 files changed, 405 insertions(+), 71 deletions(-)

diff --git a/deal.II/examples/step-31/doc/results.dox b/deal.II/examples/step-31/doc/results.dox
index a407b79ad9..b6ca10b66d 100644
--- a/deal.II/examples/step-31/doc/results.dox
+++ b/deal.II/examples/step-31/doc/results.dox
@@ -61,15 +61,17 @@ Refinement cycle 6
    Computing preconditioner...
    Solving...
    11 outer CG Schur complement iterations for pressure
 @endcode
+
 Note that the number of (outer) iterations does not increase as we refine the
 mesh.
 
-In the images below, we show the grids for the first six steps in the program.
-Note how the grid is refined in regions where the solution rapidly changes: On
-the upper boundary, we have Dirichlet boundary conditions that are -1 in the
+In the images below, we show the grids for the first six refinement steps in the
+program.
+Observe how the grid is refined in regions where the solution rapidly changes:
+On the upper boundary, we have Dirichlet boundary conditions that are -1 in the
 left half of the line and 1 in the right one, so there is an aprupt change at
 x=0. Likewise, there are changes from Dirichlet to Neumann data in the two upper
-corners, so the solution changes there as well.
+corners, so there is need for refinement there as well.
 
@@ -164,7 +166,7 @@ the mesh. Compute times for each iteration alone: seconds, seconds, 1 minute,
 5 minutes, 29 minutes, 3h12, 21h39
 
-Pictures
+The grids generated during the solution look as follows:
@@ -207,6 +209,8 @@ Pictures
+Let's have a look at the solution as well. + @image html step-31.3d.solution.png

Sparsity pattern

@@ -234,17 +238,19 @@ not slow down the program.
 
 @image html step-31.2d.sparsity-nor.png
 
 It is clearly visible that the dofs are spread over the almost the whole matrix.
-This makes the preconditioning by ILU unefficient: ILU generates a Gaussian
-elimination (LU decomposition) without fill-in, which means that more fill-ins
-will result in a worse approximation to the full decomposition.
+This makes the preconditioning by ILU inefficient: ILU generates a Gaussian
+elimination (LU decomposition) without fill-in elements, which means that the
+more fill-in entries are left out, the worse the approximation to the full
+decomposition becomes.
 
-In this program, we have thus chosen a better renumbering of components. A
-renumbering with Cuthill_McKee yields the following output.
+In this program, we have thus chosen a more advanced renumbering of components.
+The renumbering with Cuthill_McKee and grouping the components into velocity
+and pressure yields the following output; a small code snippet showing how this
+is done in the program follows at the end of this section.
 
 @image html step-31.2d.sparsity-ren.png
 
-It is clearly visible that the situation is much better now. Most of the
-elements are now concentrated around the diagonal for the (0,0) block in the
+It is apparent that the situation has improved a lot. Most of the
+elements are now concentrated around the diagonal of the (0,0) block in the
 matrix. Similar effects are also visible for the other blocks. In this case,
 the ILU decomposition will be much closer to the full LU decomposition, which
 improves the quality of the preconditioner. It is also worthwile to note that
@@ -252,12 +258,333 @@ the sparse direct solver UMFPACK does some internal renumbering of the
 equations that leads to a similar pattern as the one we got from Cuthill_McKee.
 
 Finally, we want to have a closer
-look at a sparsity pattern in 3D. We now show only the (0,0) block of the
+look at a sparsity pattern in 3D. We show only the (0,0) block of the
 matrix, again after one adaptive refinement. Apart from the fact that the matrix
-has more rows and columns, it is also visible that there are many more entries
+size has increased, it is also visible that there are many more entries
-considerable amount of fill-in elements. This illustrates why UMFPACK is not a
-good choice in 3D - there will be need for many new entries that eventually
-won't fit into the physical memory (RAM).
+in the matrix. Moreover, even for the optimized renumbering, there will be a
+considerable amount of fill-in elements. This illustrates why UMFPACK is not a
+good choice in 3D - a complete decomposition needs so many new entries that
+they eventually won't fit into the physical memory (RAM).
 
 @image html step-31.3d.sparsity_uu-ren.png
+
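+For reference, the renumbering described above and the sparsity plots shown in
+this section can be reproduced directly from the program. The following is a
+minimal sketch based on the calls already present in step-31.cc; it assumes the
+dof_handler and sparsity_pattern objects of this program, and the file name is
+arbitrary:
+@code
+  // Renumber with Cuthill-McKee first, then group the unknowns into a
+  // velocity block and a pressure block.
+  DoFRenumbering::Cuthill_McKee (dof_handler);
+  std::vector<unsigned int> block_component (dim+1, 0);
+  block_component[dim] = 1;
+  DoFRenumbering::component_wise (dof_handler, block_component);
+
+  // ... build the block sparsity pattern as before, then write the (0,0)
+  // block to a file in gnuplot format, to be plotted with
+  // "plot 'sparsity_pattern.gpl'".
+  std::ofstream out ("sparsity_pattern.gpl");
+  sparsity_pattern.block(0,0).print_gnuplot (out);
+@endcode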

+<h3>Possible Extensions</h3>

+ +

+<h4>Improved linear solver</h4>

+
+We have seen in the section on computational results that the number of outer
+iterations does not depend on the mesh size, which is optimal in the sense of
+scalability. This is, however, only partly true, since we did not look at the
+number of iterations in the inner preconditioner. Of course, this is
+unproblematic in the 2D case where we precondition with a direct solver and the
+vmult operation of the inverse matrix structure will converge in a single CG
+step, but this changes in 3D where we need to apply the ILU preconditioner.
+There, the number of required steps roughly doubles each time the element size
+is halved, so the work per outer iteration grows with the size of the system.
+For the 3D results obtained above, each vmult operation involves approximately
+14, 23, 36, 59, 72, 101, etc. inner CG iterations (these counts can be obtained
+as sketched in the snippet below). On the other hand, the number of iterations
+for applying the inverse pressure mass matrix is always about 10-11.
+To summarize, most work is spent on solving with the same matrix inverse over
+and over again. It is a natural question to ask whether we can do any better
+and avoid inverting the same matrix several times.
+
+The answer is, of course, that we can do this in a few other (and mostly
+better) ways.
+
+The first way would be to choose a preconditioner that makes CG for the
+(0,0) matrix converge in a mesh-independent number of iterations, say around
+10 to 20. We have seen such a candidate in step-16: multigrid.
+
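+The inner iteration counts quoted above are not printed by the program as it
+stands; they can be obtained, for example, by querying the SolverControl object
+inside InverseMatrix::vmult after each solve. The following is only a sketch:
+it assumes the member names matrix and preconditioner and the stopping
+criterion used by the InverseMatrix class of this program; only the output
+statement at the end is the actual addition.
+@code
+template <class Matrix, class Preconditioner>
+void InverseMatrix<Matrix,Preconditioner>::vmult (Vector<double>       &dst,
+                                                  const Vector<double> &src) const
+{
+  SolverControl solver_control (src.size(), 1e-6*src.l2_norm());
+  SolverCG<>    cg (solver_control);
+
+  dst = 0;
+  cg.solve (*matrix, dst, src, preconditioner);
+
+  // Report how many inner CG iterations this single application of the
+  // approximate inverse needed.
+  std::cout << "      " << solver_control.last_step()
+            << " inner CG iterations" << std::endl;
+}
+@endcode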

+<h4>Block Schur complement preconditioner</h4>

+But even in this situation we would still need to solve the same linear system
+repeatedly. The question is how this can be avoided. If we stay
+with the calculation of the Schur complement, there is no other possibility.
+The alternative is to attack the block system all at once and use an approximate
+Schur complement as an efficient preconditioner. The basic idea is as
+follows: If we find a block preconditioner $\mathcal P$ such that the matrix
+@f{eqnarray*}
+  \mathcal P^{-1}\left(\begin{array}{cc}
+    A & B^T \\ B & 0
+  \end{array}\right)
+@f}
+is simple, then an iterative solver with that preconditioner will converge in a
+few iterations. Using the Schur complement $S = B A^{-1} B^T$ and the block
+preconditioner
+@f{eqnarray*}
+  \mathcal P =
+  \left(\begin{array}{cc}
+    A & 0 \\ B & -S
+  \end{array}\right),
+@f}
+one finds that
+@f{eqnarray*}
+  \mathcal P^{-1}\left(\begin{array}{cc}
+    A & B^T \\ B & 0
+  \end{array}\right)
+  =
+  \left(\begin{array}{cc}
+    A^{-1} & 0 \\ S^{-1} B A^{-1} & -S^{-1}
+  \end{array}\right)\cdot \left(\begin{array}{cc}
+    A & B^T \\ B & 0
+  \end{array}\right)
+  =
+  \left(\begin{array}{cc}
+    I & A^{-1} B^T \\ 0 & I
+  \end{array}\right),
+@f}
+see also the paper by Silvester and Wathen referenced in the introduction. In
+this case, a Krylov-based iterative method will converge in two steps, since
+this matrix has only the single eigenvalue one and its minimal polynomial is of
+degree two. However, it is not possible to use CG since the block matrix is not
+positive definite (it has indeed both positive and negative eigenvalues,
+i.e. it is indefinite). Instead one has to choose e.g. the iterative solver
+GMRES, which is more expensive per iteration (more inner products to be
+calculated), but is applicable also to indefinite (and non-symmetric) matrices.
+
+Since $\mathcal P$ is only used as a preconditioner, we can also use
+approximations to the Schur complement $S$ and the matrix $A$. This is what an
+improved solver for the Stokes system is going to look like: The Schur
+complement will be approximated by the pressure mass matrix $M_p$, and we use a
+preconditioner for $A$ (without an InverseMatrix class around it) to approximate
+$A^{-1}$. These savings are, however, partly offset by the fact that we need to
+perform the solver iterations on the full block system instead of the smaller
+pressure space.
+
+An implementation of such a solver could look like this:
+
+First the class for the block Schur complement preconditioner, which implements
+a vmult operation for block vectors.
+@code
+template <class Preconditioner>
+class BlockSchurPreconditioner : public Subscriptor
+{
+  public:
+    BlockSchurPreconditioner (const BlockSparseMatrix<double>          &S,
+                              const InverseMatrix<SparseMatrix<double>,
+                                                  PreconditionSSOR<> > &Mpinv,
+                              const Preconditioner                     &Apreconditioner);
+
+    void vmult (BlockVector<double>       &dst,
+                const BlockVector<double> &src) const;
+
+  private:
+    const SmartPointer<const BlockSparseMatrix<double> > system_matrix;
+    const SmartPointer<const InverseMatrix<SparseMatrix<double>,
+                                           PreconditionSSOR<> > > m_inverse;
+    const Preconditioner &a_preconditioner;
+
+    mutable BlockVector<double> tmp;
+
+};
+
+template <class Preconditioner>
+BlockSchurPreconditioner<Preconditioner>::BlockSchurPreconditioner(
+  const BlockSparseMatrix<double>                                &S,
+  const InverseMatrix<SparseMatrix<double>,PreconditionSSOR<> >  &Mpinv,
+  const Preconditioner                                           &Apreconditioner)
+  :
+  system_matrix    (&S),
+  m_inverse        (&Mpinv),
+  a_preconditioner (Apreconditioner),
+  tmp              (2)
+{
+  // We have to initialize the BlockVector<double>
+  // tmp to the correct sizes in the respective blocks
+  tmp.block(0).reinit(S.block(0,0).m());
+  tmp.block(1).reinit(S.block(1,1).m());
+  tmp.collect_sizes();
+}
+
+  // Now the interesting function, the multiplication of
+  // the preconditioner with a BlockVector.
+template <class Preconditioner>
+void BlockSchurPreconditioner<Preconditioner>::vmult (
+  BlockVector<double>       &dst,
+  const BlockVector<double> &src) const
+{
+  // This is going to be a left preconditioner for the block system,
+  // to be used with the GMRES method.
+  // Form u_new = A^{-1} u (approximately, by one application of the
+  // preconditioner of A)
+  a_preconditioner.vmult (dst.block(0), src.block(0));
+  // Form tmp.block(1) = - B u_new + p (SparseMatrix::residual
+  // does precisely this)
+  system_matrix->block(1,0).residual(tmp.block(1),
+                                     dst.block(0), src.block(1));
+  // Change sign in tmp.block(1)
+  tmp.block(1) *= -1;
+  // Multiply by the approximate Schur complement preconditioner,
+  // i.e. the inverse of the pressure mass matrix
+  m_inverse->vmult (dst.block(1), tmp.block(1));
+}
+@endcode
+
+The actual solver call can be realized as
+
+@code
+  SparseMatrix<double> pressure_mass_matrix;
+  pressure_mass_matrix.reinit(sparsity_pattern.block(1,1));
+  pressure_mass_matrix.copy_from(system_matrix.block(1,1));
+  system_matrix.block(1,1) = 0;
+
+  PreconditionSSOR<> pmass_preconditioner;
+  pmass_preconditioner.initialize (pressure_mass_matrix, 1.2);
+
+  InverseMatrix<SparseMatrix<double>,PreconditionSSOR<> >
+    m_inverse (pressure_mass_matrix, pmass_preconditioner);
+
+  BlockSchurPreconditioner<typename InnerPreconditioner<dim>::type>
+    preconditioner (system_matrix, m_inverse, *A_preconditioner);
+
+  SolverControl solver_control (system_matrix.m(),
+                                1e-6*system_rhs.l2_norm());
+
+  SolverGMRES<BlockVector<double> > gmres(solver_control);
+
+  gmres.solve(system_matrix, alternative_solution, system_rhs,
+              preconditioner);
+
+  hanging_node_constraints.distribute (alternative_solution);
+
+  std::cout << "  "
+            << solver_control.last_step()
+            << " block GMRES iterations";
+@endcode
+
+Obviously, one needs to add the include file for the GMRES solver,
+<lac/solver_gmres.h>, in order to make this run. We call the solver with a
+BlockVector template in order to enable
+GMRES to operate on block vectors and matrices.
+Note also that we need to zero the (1,1) block of the system
+matrix (we saved the pressure mass matrix there, which is not part of the
+problem) after we have copied the information to another matrix.
+
+Using the Timer class, we collected some statistics that compare the runtime of
+the block solver with the one used in the problem implementation above. Besides
+solving the two systems we also check whether the two solutions are close to
+each other, i.e. we calculate the infinity norm of the vector difference.
+
+Let's first see the results in 2D:
+@code
+Refinement cycle 0
+   Number of active cells: 64
+   Number of degrees of freedom: 679 (594+85) [0.025624 s]
+   Assembling... [0.052874 s]
+   Computing preconditioner... [0.016394 s]
+   Solving...
+      Schur complement: 11 outer CG iterations for p [0.028722 s]
+      Block Schur preconditioner: 11 GMRES iterations [0.027471 s]
+   max difference l_infty in the two solution vectors: 6.20426e-06
+
+Refinement cycle 1
+   Number of active cells: 160
+   Number of degrees of freedom: 1683 (1482+201) [0.07247 s]
+   Assembling... [0.129607 s]
+   Computing preconditioner... [0.045196 s]
+   Solving...
+      Schur complement: 11 outer CG iterations for p [0.074961 s]
+      Block Schur preconditioner: 12 GMRES iterations [0.080309 s]
+   max difference l_infty in the two solution vectors: 1.48144e-06
+
+Refinement cycle 2
+   Number of active cells: 376
+   Number of degrees of freedom: 3813 (3370+443) [0.168264 s]
+   Assembling... [0.298043 s]
+   Computing preconditioner... [0.122656 s]
+   Solving...
+ Schur complement: 11 outer CG iterations for p [0.19247 s] + Block Schur preconditioner: 12 GMRES iterations [0.211514 s] + max difference l_infty in the two solution vectors: 4.30711e-06 + +Refinement cycle 3 + Number of active cells: 880 + Number of degrees of freedom: 8723 (7722+1001) [0.390052 s] + Assembling... [0.694365 s] + Computing preconditioner... [0.315401 s] + Solving... + Schur complement: 11 outer CG iterations for p [0.54384 s] + Block Schur preconditioner: 12 GMRES iterations [0.588273 s] + max difference l_infty in the two solution vectors: 1.03471e-05 + +Refinement cycle 4 + Number of active cells: 2008 + Number of degrees of freedom: 19383 (17186+2197) [0.874274 s] + Assembling... [1.58498 s] + Computing preconditioner... [0.813296 s] + Solving... + Schur complement: 11 outer CG iterations for p [1.22453 s] + Block Schur preconditioner: 13 GMRES iterations [1.45711 s] + max difference l_infty in the two solution vectors: 4.53216e-05 + +Refinement cycle 5 + Number of active cells: 4288 + Number of degrees of freedom: 40855 (36250+4605) [1.85269 s] + Assembling... [3.36536 s] + Computing preconditioner... [1.93873 s] + Solving... + Schur complement: 11 outer CG iterations for p [3.39201 s] + Block Schur preconditioner: 13 GMRES iterations [3.77069 s] + max difference l_infty in the two solution vectors: 0.000291127 + +Refinement cycle 6 + Number of active cells: 8896 + Number of degrees of freedom: 83885 (74474+9411) [3.85563 s] + Assembling... [6.9564 s] + Computing preconditioner... [4.8595 s] + Solving... + Schur complement: 11 outer CG iterations for p [7.69326 s] + Block Schur preconditioner: 13 GMRES iterations [8.62034 s] + max difference l_infty in the two solution vectors: 0.000245764 +@endcode +We see that the runtime for solution using the block Schur complement +preconditioner is higher than the one with the Schur complement directly. The +reason is simple: We used a direct solve as preconditioner here - so there won't +be any gain by avoiding the inner iterations (indeed, we see that slightly more +iterations are needed). + +The picture changes of course in 3D: + +@code +Refinement cycle 0 + Number of active cells: 32 + Number of degrees of freedom: 1356 (1275+81) [0.344441 s] + Assembling... [1.38754 s] + Computing preconditioner... [1.29145 s] + Solving... + Schur complement: 13 outer CG iterations for p [0.609626 s] + Block Schur preconditioner: 24 GMRES iterations [0.107059 s] + max difference l_infty in the two solution vectors: 1.60165e-05 + +Refinement cycle 1 + Number of active cells: 144 + Number of degrees of freedom: 5088 (4827+261) [2.21296 s] + Assembling... [6.28298 s] + Computing preconditioner... [7.78238 s] + Solving... + Schur complement: 14 outer CG iterations for p [5.98275 s] + Block Schur preconditioner: 44 GMRES iterations [0.979435 s] + max difference l_infty in the two solution vectors: 2.00892e-05 + +Refinement cycle 2 + Number of active cells: 704 + Number of degrees of freedom: 22406 (21351+1055) [12.2284 s] + Assembling... [30.8002 s] + Computing preconditioner... [42.131 s] + Solving... + Schur complement: 14 outer CG iterations for p [45.1115 s] + Block Schur preconditioner: 83 GMRES iterations [9.04946 s] + max difference l_infty in the two solution vectors: 3.06288e-05 + +Refinement cycle 3 + Number of active cells: 3168 + Number of degrees of freedom: 93176 (89043+4133) [54.9067 s] + Assembling... [137.506 s] + Computing preconditioner... [192.36 s] + Solving... 
+      Schur complement: 15 outer CG iterations for p [371.85 s]
+      Block Schur preconditioner: 204 GMRES iterations [110.316 s]
+   max difference l_infty in the two solution vectors: 8.92999e-05
+@endcode
+
+Here, the block preconditioned solver is clearly superior to the Schur
+complement solver, even though the advantage becomes smaller as the mesh is
+refined. The reason for this decreasing advantage is that the mass matrix is
+still inverted iteratively, so that it has to be inverted more and more often
+as the number of block GMRES iterations increases. A way to avoid these inner
+iterations altogether is sketched below.
+
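+If one wants to reduce the cost of these repeated inner solves, one option is
+not to invert the pressure mass matrix exactly inside the preconditioner, but
+to apply only its SSOR preconditioner (or a fixed, small number of CG
+iterations). GMRES then typically needs a few more outer iterations, but each
+application of the preconditioner becomes much cheaper. The following sketch is
+only an illustration of this idea: it assumes a hypothetical member
+mp_preconditioner that stores the SSOR preconditioner of the pressure mass
+matrix in place of the InverseMatrix object m_inverse; the rest of the class
+would have to be adapted accordingly.
+@code
+template <class Preconditioner>
+void BlockSchurPreconditioner<Preconditioner>::vmult (
+  BlockVector<double>       &dst,
+  const BlockVector<double> &src) const
+{
+  // Approximately invert the velocity block as before.
+  a_preconditioner.vmult (dst.block(0), src.block(0));
+
+  // Form tmp.block(1) = - B u_new + p, then change its sign.
+  system_matrix->block(1,0).residual (tmp.block(1),
+                                      dst.block(0), src.block(1));
+  tmp.block(1) *= -1;
+
+  // One SSOR sweep on the pressure mass matrix instead of a converged
+  // inner CG solve.
+  mp_preconditioner.vmult (dst.block(1), tmp.block(1));
+}
+@endcode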

+<h4>Not using block matrices and vectors</h4>

+Another possibility that can be taken into account is to not set up a block +system, but rather solve the system of velocity and pressure all-at once. The +alternatives are direct solve with UMFPACK (2D) or GMRES with ILU +preconditioning (3D). diff --git a/deal.II/examples/step-31/step-31.cc b/deal.II/examples/step-31/step-31.cc index 05641c5a3b..b486221bc7 100644 --- a/deal.II/examples/step-31/step-31.cc +++ b/deal.II/examples/step-31/step-31.cc @@ -508,21 +508,21 @@ void InverseMatrix::vmult (Vector &dst, // InverseMatrix // now contains the second template // parameter from preconditioning as above, - // which effects the SmartPointer@ + // which effects the SmartPointer // object m_inverse as well. template class SchurComplement : public Subscriptor { public: SchurComplement (const BlockSparseMatrix &A, - const InverseMatrix,Preconditioner> &Minv); + const InverseMatrix, Preconditioner> &Minv); void vmult (Vector &dst, const Vector &src) const; private: const SmartPointer > system_matrix; - const SmartPointer,Preconditioner> > m_inverse; + const SmartPointer, Preconditioner> > m_inverse; mutable Vector tmp1, tmp2; }; @@ -572,7 +572,7 @@ void SchurComplement::vmult (Vector &dst, // in a way that the approximation of the // PDE solution remains well-behaved (problems // arise if grids are too unstructered), - // see the discussion of + // see the documentation of // Triangulation::MeshSmoothing // for details. template @@ -598,7 +598,7 @@ void StokesProblem::setup_dofs () // Release preconditioner from // previous steps since it // will definitely not be needed - // any more after this point + // any more after this point. A_preconditioner.reset (); dof_handler.distribute_dofs (fe); @@ -610,10 +610,10 @@ void StokesProblem::setup_dofs () // of velocity and pressure shall be as in // step-20. This is done in two steps. First, // all dofs are renumbered by - // DoFRenumbering::Cuthill_McKee@, + // DoFRenumbering::Cuthill_McKee, // and then we renumber once again by // components. Since - // DoFRenumbering::component_wise@ + // DoFRenumbering::component_wise // does not touch the renumbering within // the individual blocks, the basic // renumbering from Cuthill-McKee remains. @@ -638,7 +638,7 @@ void StokesProblem::setup_dofs () // the dof handler. hanging_node_constraints.clear (); DoFTools::make_hanging_node_constraints (dof_handler, - hanging_node_constraints); + hanging_node_constraints); hanging_node_constraints.close (); // In analogy to step-20, we count @@ -647,9 +647,9 @@ void StokesProblem::setup_dofs () // there, but we want to operate on // the block structure we used already for // the renumbering: The function - // DoFTools::count_dofs_per_block@ + // DoFTools::count_dofs_per_block // does the same as - // DoFTools::count_dofs_per_component@, + // DoFTools::count_dofs_per_component, // but now grouped as velocity and // pressure block via block_component. std::vector dofs_per_block (2); @@ -665,8 +665,10 @@ void StokesProblem::setup_dofs () << " (" << n_u << '+' << n_p << ')' << std::endl; - // Clear the system matrix prior to - // generating the entries. + // Release the memory previously attached + // to the system matrix and untie it + // from the old sparsity pattern prior to + // generating the current data structure. system_matrix.clear (); // The next task is to allocate a @@ -675,20 +677,20 @@ void StokesProblem::setup_dofs () // this in the same way as in step-20, // though, there is a major reason // not to do so. 
In 3D, the function - // max_couplings_between_dofs@ + // DoFTools::max_couplings_between_dofs // yields a very large number for the // coupling between the individual dofs, // so that the memory initially provided - // in the reinit of - // the matrix is far too much - so + // for the creation of the sparsity pattern + // of the matrix is far too much - so // much actually that it won't even fit // into the physical memory of most // systems already for moderately-sized 3D - // problems. See also the discussing in + // problems. See also the discussion in // step-18. // Instead, we use a temporary object of // the class - // BlockCompressedSparsityPattern, + // BlockCompressedSparsityPattern, // which is a block version of the // compressed sparsity patterns from // step-11 and step-18. All this is done @@ -713,6 +715,9 @@ void StokesProblem::setup_dofs () sparsity_pattern.copy_from (csp); } + std::ofstream out ("sparsity_pattern.gpl"); + sparsity_pattern.block(0,0).print_gnuplot(out); + // Finally, the system matrix, // solution and right hand side are // created from the block @@ -785,13 +790,13 @@ void StokesProblem::assemble_system () const RightHandSide right_hand_side; std::vector > rhs_values (n_q_points, - Vector(dim+1)); + Vector(dim+1)); // This starts the loop over all // cells. With the abbreviations // extract_u etc. - // introduced above, it is very - // clear what is going on. + // introduced above, it is + // evident what is going on. typename DoFHandler::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end(); @@ -819,7 +824,7 @@ void StokesProblem::assemble_system () const double phi_j_p = extract_p (fe_values, j, q); - // Note how we write the + // Note the way we write the // contributions // phi_i_p * phi_j_p , // yielding a pressure mass matrix, @@ -830,8 +835,8 @@ void StokesProblem::assemble_system () // They won't be mixed up, since // phi_i_p * phi_j_p // is only non-zero when all the - // other terms vanish and the other - // way around. + // other terms vanish (and the other + // way around). local_matrix(i,j) += (scalar_product(phi_i_grads_u, phi_j_grads_u) - div_phi_i_u * phi_j_p - phi_i_p * div_phi_j_u @@ -847,7 +852,7 @@ void StokesProblem::assemble_system () } // Here we add the contributions from - // Neumann (pressure) boundary conditions. + // Neumann (pressure) boundary conditions // at faces on the domain boundary that // have the boundary flag "0", i.e. those // that are not subject to Dirichlet @@ -920,8 +925,8 @@ void StokesProblem::assemble_system () // To this end, we use a // component_mask that // filters away the pressure - // componenent, so that the condensation - // is performed only on + // component, so that the condensation + // is performed on // velocity dofs. hanging_node_constraints.condense (system_matrix); hanging_node_constraints.condense (system_rhs); @@ -931,15 +936,15 @@ void StokesProblem::assemble_system () std::vector component_mask (dim+1, true); component_mask[dim] = false; VectorTools::interpolate_boundary_values (dof_handler, - 1, - BoundaryValues(), - boundary_values, - component_mask); + 1, + BoundaryValues(), + boundary_values, + component_mask); MatrixTools::apply_boundary_values (boundary_values, - system_matrix, - solution, - system_rhs); + system_matrix, + solution, + system_rhs); } // Before we're going to solve @@ -994,7 +999,7 @@ void StokesProblem::solve () // This is as in step-20. 
We generate // the right hand side - // B A^{-1} F Ð G for the + // B A^{-1} F Ð G for the // Schur complement and an object // that represents the respective // linear operation B A^{-1} B^T, @@ -1018,7 +1023,7 @@ void StokesProblem::solve () SolverCG<> cg (solver_control); // Now to the preconditioner to the - // Schur complement. As derived in the + // Schur complement. As explained in the // introduction, the preconditioning // is done by a mass matrix in the // pressure variable. @@ -1037,12 +1042,12 @@ void StokesProblem::solve () // In this case, we have to invert // the pressure mass matrix. As it // already turned out in earlier tutorial - // program, the inversion of a mass + // programs, the inversion of a mass // matrix is a rather cheap and // straight-forward operation (compared // to, e.g., a Laplace matrix). The CG - // method with simple preconditioning - // with SSOR converges in 10-20 steps, + // method with SSOR preconditioning + // converges in 10-20 steps, // independently on the mesh size. // This is precisely what we do here: // We choose an SSOR preconditioner @@ -1050,7 +1055,8 @@ void StokesProblem::solve () // to the InverseMatrix object via // the corresponding template parameter. // A CG solver is then called within - // the vmult operation. + // the vmult operation of the inverse + // matrix. PreconditionSSOR<> preconditioner; preconditioner.initialize (system_matrix.block(1,1), 1.2); @@ -1074,8 +1080,8 @@ void StokesProblem::solve () // After this first solution step, // the hanging node constraints have // to be distributed to the solution - - // that a consistent pressure field - // is achieved. + // in order to achieve a consistent + // pressure field. hanging_node_constraints.distribute (solution); std::cout << " " @@ -1087,10 +1093,10 @@ void StokesProblem::solve () // As in step-20, we finally need to // solve for the velocity equation - // with the solution of the pressure - // equation at hand. We do not perform - // any direct solution of a linear - // system, but only need to + // where we plug in the the solution + // to the pressure equation. This involves + // only objects we already know - so + // we simply // multiply p by B^T, subtract the // right hand side and multiply // by the inverse of A. @@ -1125,7 +1131,7 @@ void StokesProblem::solve () // and which scalars, we need to // add that information as well - // achieved by the - // DataComponentInterpretation@ + // DataComponentInterpretation // class. // The rest of the function is // then the same as in step-20. @@ -1237,16 +1243,17 @@ void StokesProblem::run () // the example description above for // details. for (typename Triangulation::active_cell_iterator - cell = triangulation.begin_active(); + cell = triangulation.begin_active(); cell != triangulation.end(); ++cell) for (unsigned int f=0; f::faces_per_cell; ++f) if (cell->face(f)->center()[dim-1] == 0) - { - cell->face(f)->set_boundary_indicator(1); - - /*for (unsigned int e=0; e::lines_per_face; ++e) - cell->face(f)->line(e)->set_boundary_indicator (1);*/ - } + { + cell->face(f)->set_boundary_indicator(1); + /* + for (unsigned int e=0; e::lines_per_face; ++e) + cell->face(f)->line(e)->set_boundary_indicator (1); + */ + } // We employ an initial refinement before @@ -1296,7 +1303,7 @@ int main () { deallog.depth_console (0); - StokesProblem<2> flow_problem(1); + StokesProblem<3> flow_problem(1); flow_problem.run (); } catch (std::exception &exc) -- 2.39.5