From: Toby D. Young
Date: Mon, 13 Jul 2009 10:06:21 +0000 (+0000)
Subject: Skeleton introduction to step-36
X-Git-Tag: v8.0.0~7496
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=48b955e77635c206b452c71d398a6c4fb2c865f1;p=dealii.git

Skeleton introduction to step-36

git-svn-id: https://svn.dealii.org/trunk@19063 0785d39b-7218-0410-832d-ea1e28bc413d
---

diff --git a/deal.II/examples/step-36/doc/intro.dox b/deal.II/examples/step-36/doc/intro.dox
index b2e4e82c2b..5f665d5ac3 100644
--- a/deal.II/examples/step-36/doc/intro.dox
+++ b/deal.II/examples/step-36/doc/intro.dox
@@ -3,62 +3,107 @@
 This program was contributed by Toby D. Young and Wolfgang Bangerth.
+

+<h3>Preamble</h3>
+
-
-<h3>Introduction</h3>
-
+The problem we want to solve in this example is an eigenspectrum
+problem which no longer contains a single solution, but a set of
+solutions for the various eigenfunctions we want to compute. The
+problem of finding all eigenvalues (and eigenfunctions) of a
+generalized eigenvalue problem is a formidable challenge; however,
+most of the time we are really only interested in a small subset of
+these values (functions). Fortunately, the interface to the SLEPc
+library allows us to select which portion of the eigenspectrum, and
+how many solutions, we want to solve for.

-Everything here is built on top of classes provided by deal.II that
-wrap around the linear algebra implementation of the SLEPc
-library; which wraps itself around the PETSc library.
-Start with the wave equation of Schr\"odinger:
+
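+As a foretaste of how compact this selection is in practice, the
+following lines (taken verbatim from the solve() function of the
+program below) are all it takes to ask for the eigenpairs with the
+smallest real parts:
+@code
+  eigensolver.set_which_eigenpairs (EPS_SMALLEST_REAL);
+
+  eigensolver.solve (stiffness_matrix, mass_matrix,
+                     eigenvalues, eigenfunctions,
+                     eigenfunctions.size());
+@endcode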

+<h3>Introduction</h3>

+
+Start with a differential operator equation
 @f[
-  ( \frac{1}{2} \partial_x \partial_y + V(x,y) ) \Phi
-  =
-  \varepsilon \Psi \quad,
+  L({\mathbf x}) \Psi = \varepsilon \Psi \quad,
 @f]
-Then transform using the finite element ansatz
+where $L$ is a differential operator. In the usual finite element way
+we multiply both sides with a test function and then replace $\Psi$
+and the test function by discrete shape functions:
 @f[
-  \phi = \tilde{\phi} \quad,
+  \sum_j (\phi_i, L\phi_j) \tilde{\psi}_j =
+  \varepsilon \sum_j (\phi_i, \phi_j) \tilde{\psi}_j \quad,
 @f]
-where $\tilde{\phi_i}$ is known as the reduced wavefunction of the
-state $i$.
+where the $\tilde{\psi}_j$, the reduced wavefunctions, are the
+expansion coefficients of the approximation
+@f[
+  \Psi \approx \sum_j \phi_j \tilde{\psi}_j \quad.
+@f]
+An immediate consequence of this is that we want to solve the
+generalized eigenvalue problem
+@f[
+  A \tilde{\Psi} = \varepsilon M \tilde{\Psi} \quad,
+@f]
+where $A$ is the stiffness matrix arising from the differential
+operator $L$, $M$ is the mass matrix, and
+$\tilde{\Psi}=(\tilde{\psi}_j)_j$ is the vector of expansion
+coefficients. The solution of this eigenvalue problem is an
+eigenspectrum $\varepsilon_\ell$, with associated eigenvectors
+$\tilde{\Psi}_\ell$.
+
+To solve an actual physical problem we next define the differential
+operator $L$ in a way that mimics the wave equation of Schr\"odinger:
+@f[
+  L({\mathbf x}) = K({\mathbf x}) + V({\mathbf x}) \quad,
+@f]
+where $K({\mathbf x}) = -\Delta$ is the kinetic term (with physical
+constants set to one) and $V({\mathbf x})$ is a potential that we do
+not fix here: it will be a run-time parameter of this program, read
+from an input file as a formula. In the parlance of Schr\"odinger
+eigenvalue problems, the solutions sought are the wavefunctions
+$\Psi_\ell$; in finite elements, the solutions computed are the
+coefficient vectors $\tilde{\Psi}_\ell$, known as the reduced (or
+tilde-basis) wavefunctions.
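+Concretely, with this choice of $L$ the matrices assembled
+cell-by-cell in the program below have the global entries
+@f[
+  A_{ij} = \int_\Omega \nabla\phi_i({\mathbf x}) \cdot
+           \nabla\phi_j({\mathbf x})
+           + V({\mathbf x}) \phi_i({\mathbf x}) \phi_j({\mathbf x}) \quad,
+  \qquad
+  M_{ij} = \int_\Omega \phi_i({\mathbf x}) \phi_j({\mathbf x}) \quad,
+@f]
+i.e. exactly the sums of the local contributions $A^K_{ij}$ and
+$M^K_{ij}$ that the assemble_system() function below computes.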

 <h3>Implementation details</h3>

 The program below is essentially just a slightly modified version of
 @ref step_4 "step-4". The things that are different are the
 following:

-- The main class (now named <code>EigenvalueProblem</code>) now no
-  longer has a single solution vector, but a whole set of vectors for
-  the various eigenfunctions we want to compute.
-
-- We use PETSc matrices and vectors as in @ref step_17 "step-17" and
-  @ref step_18 "step-18" since that is what the SLEPc eigenvalue
-  solvers require.
-
-- The function <code>EigenvalueProblem::solve</code> is entirely
-  different from anything seen so far in the tutorial, as it does not
-  just solve a linear system but actually solves the eigenvalue problem.
-  It is built on the SLEPc library, and more immediately on the deal.II
-  SLEPc wrappers in the class SLEPcWrappers::SolverKrylovSchur.
-
-- We use the ParameterHandler class to describe a few input parameters,
-  such as the exact form of the potential $V(\mathbf x)$, the number of
-  global refinement steps of the mesh, or the number of eigenvalues
-  we want to solve for. We could go much further with this but
-  stop at making only a few of the things that one could select at
-  run time actual input file parameters. In order to see what could be
-  done in this regard, take a look at @ref step_29 "step-29",
-  @ref step_33 "step-33", and in particular @ref step_19 "step-19".
-
-- We use the FunctionParser class to make the potential $V(\mathbf x)$
-  a run-time parameter that can be specified in the input file as
-  a formula.
+The rest of the program follows in a pretty straightforward way from
+@ref step_4 "step-4".

diff --git a/deal.II/examples/step-36/step-36.cc b/deal.II/examples/step-36/step-36.cc
index ec69dfd77f..4bc85de96a 100644
--- a/deal.II/examples/step-36/step-36.cc
+++ b/deal.II/examples/step-36/step-36.cc
@@ -25,8 +25,6 @@
 #include
 #include
 #include
-#include
-#include
 #include
 #include
 #include
@@ -39,32 +37,37 @@
 #include
 #include
 #include
+#include

    // PETSc appears here because SLEPc
    // depends on this library:
 #include
 #include

-   // And then we need to actually import
-   // the interfaces for solvers that
-   // SLEPc provides:
+   // And then we need to actually
+   // import the interfaces for solvers
+   // that SLEPc provides:
 #include

    // We also need some standard C++:
 #include
 #include

-   // Finally, as in previous programs, we
-   // import all the deal.II class and function
-   // names into the global namespace:
+   // Finally, as in previous programs,
+   // we import all the deal.II class
+   // and function names into the global
+   // namespace:
 using namespace dealii;

-   // @sect3{The <code>EigenvalueProblem</code> class template}
+   // @sect3{The
+   // <code>EigenvalueProblem</code>
+   // class template}

-   // Following is the class declaration for the
-   // main class template. It looks pretty much
-   // exactly like what has already been shown
-   // in step-4:
+   // Following is the class declaration
+   // for the main class template. It
+   // looks pretty much exactly like
+   // what has already been shown in
+   // step-4:
 template <int dim>
 class EigenvalueProblem
 {
@@ -83,44 +86,53 @@ class EigenvalueProblem
   DoFHandler<dim> dof_handler;

    // With these exceptions: For our
-   // eigenvalue problem, we need both a
-   // stiffness matrix for the left hand
-   // side as well as a mass matrix for the
-   // right hand side.
-   // We also need not just
-   // one solution function, but a whole set
-   // of those for the eigenfunctions we
-   // want to compute, along with the
+   // eigenvalue problem, we need
+   // both a stiffness matrix for
+   // the left hand side as well as
+   // a mass matrix for the right
+   // hand side. We also need not
+   // just one solution function,
+   // but a whole set of those for
+   // the eigenfunctions we want to
+   // compute, along with the
    // corresponding eigenvectors:
  PETScWrappers::SparseMatrix stiffness_matrix, mass_matrix;

  std::vector<PETScWrappers::Vector> eigenfunctions;
  std::vector<double>                eigenvalues;

-   // And then we need an object that will
-   // store several run-time parameters that
-   // we will specify in an input file:
+   // And then we need an object
+   // that will store several
+   // run-time parameters that we
+   // will specify in an input file:
  ParameterHandler parameters;

-   // Finally, we will have an object that
-   // contains "constraints" on our degrees
-   // of freedom. This could include hanging
-   // node constraints if we had adaptively
-   // refined meshes (which we don't have in
-   // the current program). Here, we will
-   // store the constraints for boundary
-   // nodes $U_i=0$.
+   // Finally, we will have an
+   // object that contains
+   // "constraints" on our degrees
+   // of freedom. This could include
+   // hanging node constraints if we
+   // had adaptively refined meshes
+   // (which we don't have in the
+   // current program). Here, we
+   // will store the constraints for
+   // boundary nodes $U_i=0$.
  ConstraintMatrix constraints;
};

-   // @sect3{Implementation of the <code>EigenvalueProblem</code> class}
+   // @sect3{Implementation of the
+   // <code>EigenvalueProblem</code>
+   // class}

    // @sect4{EigenvalueProblem::EigenvalueProblem}

-   // First up, the constructor. The main, new
-   // part is handling the run-time input
-   // parameters. We need to declare their
-   // existence first, and then read their
-   // values from the input file whose name is
-   // specified as an argument to this function:
+   // First up, the constructor. The
+   // main, new part is handling the
+   // run-time input parameters. We need
+   // to declare their existence first,
+   // and then read their values from
+   // the input file whose name is
+   // specified as an argument to this
+   // function:
 template <int dim>
 EigenvalueProblem<dim>::EigenvalueProblem (const std::string &prm_file)
                :
@@ -145,26 +157,30 @@ EigenvalueProblem<dim>::EigenvalueProblem (const std::string &prm_file)

    // @sect4{EigenvalueProblem::make_grid_and_dofs}

-   // The next function creates a mesh on the
-   // domain $[-1,1]^d$, refines it as many
-   // times as the input file calls for, and
-   // then attaches a DoFHandler to it and
-   // initializes the matrices and vectors to
-   // their correct sizes. We also build the
+   // The next function creates a mesh
+   // on the domain $[-1,1]^d$, refines
+   // it as many times as the input file
+   // calls for, and then attaches a
+   // DoFHandler to it and initializes
+   // the matrices and vectors to their
+   // correct sizes. We also build the
    // constraints that correspond to the
-   // boundary values $u|_{\partial\Omega}=0$.
+   // boundary values
+   // $u|_{\partial\Omega}=0$.
    //
    // For the matrices, we use the PETSc
-   // wrappers. These have the ability to
-   // allocate memory as necessary as non-zero
-   // entries are added.
-   // This seems inefficient:
-   // we could as well first compute the
-   // sparsity pattern, initialize the matrices
-   // with it, and as we then insert entries we
-   // can be sure that we do not need to
-   // re-allocate memory and free the one used
-   // previously. One way to do that would be to
-   // use code like this:
+   // wrappers. These have the ability
+   // to allocate memory as necessary as
+   // non-zero entries are added. This
+   // seems inefficient: we could as
+   // well first compute the sparsity
+   // pattern, initialize the matrices
+   // with it, and as we then insert
+   // entries we can be sure that we do
+   // not need to re-allocate memory and
+   // free the one used previously. One
+   // way to do that would be to use
+   // code like this:
    // @code
    //   CompressedSimpleSparsityPattern
    //      csp (dof_handler.n_dofs(),
    //           dof_handler.n_dofs());
    //   DoFTools::make_sparsity_pattern (dof_handler, csp);
    //   csp.compress ();
    //   stiffness_matrix.reinit (csp);
    //   mass_matrix.reinit (csp);
    // @endcode
-   // instead of the two <code>reinit()</code>
-   // calls for the stiffness and mass matrices
-   // below.
+   // instead of the two
+   // <code>reinit()</code> calls for
+   // the stiffness and mass matrices
+   // below.
    //
    // This doesn't quite work,
-   // unfortunately. The code above may lead to
-   // a few entries in the non-zero pattern to
-   // which we only ever write zero entries;
-   // most notably, this holds true for
-   // off-diagonal entries for those rows and
-   // columns that belong to boundary
-   // nodes. This shouldn't be a problem, but
-   // for whatever reason, PETSc's ILU
-   // preconditioner, which we use to solve
-   // linear systems in the eigenvalue solver,
-   // doesn't like these extra entries and
-   // aborts with an error message.
+   // unfortunately. The code above may
+   // lead to a few entries in the
+   // non-zero pattern to which we only
+   // ever write zero entries; most
+   // notably, this holds true for
+   // off-diagonal entries for those
+   // rows and columns that belong to
+   // boundary nodes. This shouldn't be
+   // a problem, but for whatever
+   // reason, PETSc's ILU
+   // preconditioner, which we use to
+   // solve linear systems in the
+   // eigenvalue solver, doesn't like
+   // these extra entries and aborts
+   // with an error message.
    //
-   // Absent any obvious way to avoid this, we
-   // simply settle for the second best option,
-   // which is have PETSc allocate memory as
-   // necessary. That said, since this is not a
-   // time critical part, this whole affair is
-   // of no further importance.
+   // In the absence of any obvious way
+   // to avoid this, we simply settle
+   // for the second best option, which
+   // is to have PETSc allocate memory
+   // as necessary. That said, since
+   // this is not a time critical part,
+   // this whole affair is of no further
+   // importance.
 template <int dim>
 void EigenvalueProblem<dim>::make_grid_and_dofs ()
 {
@@ -215,12 +237,14 @@ void EigenvalueProblem<dim>::make_grid_and_dofs ()
                          dof_handler.n_dofs(),
                          dof_handler.max_couplings_between_dofs());

-   // The next step is to take care of the
-   // eigenspectrum. In this case, the outputs
-   // are eigenfunctions and eigenvalues, so
-   // we set the size of the list of
-   // eigenfunctions and eigenvalues to be as
-   // large as asked for in the input file:
+   // The next step is to take care of
+   // the eigenspectrum.
+   // In this case,
+   // the outputs are eigenvalues and
+   // eigenfunctions, so we set the
+   // size of the list of
+   // eigenfunctions and eigenvalues
+   // to be as large as we asked for
+   // in the input file:
  eigenfunctions
  .resize (parameters.get_integer ("Number of eigenvalues/eigenfunctions"));
  for (unsigned int i=0; i<eigenfunctions.size (); ++i)
    eigenfunctions[i].reinit (dof_handler.n_dofs ());

  eigenvalues.resize (eigenfunctions.size ());

@@ … @@ void EigenvalueProblem<dim>::make_grid_and_dofs ()

    // @sect4{EigenvalueProblem::assemble_system}

-   // Here, we assemble the global stiffness and
-   // mass matrices from local contributions
-   // $A^K_{ij} = \int_K \nabla\varphi_i(\mathbf
-   // x) \cdot \nabla\varphi_j(\mathbf x) +
+   // Here, we assemble the global
+   // stiffness and mass matrices from
+   // local contributions $A^K_{ij} =
+   // \int_K \nabla\varphi_i(\mathbf x)
+   // \cdot \nabla\varphi_j(\mathbf x) +
    // V(\mathbf x)\varphi_i(\mathbf
-   // x)\varphi_j(\mathbf x)$. The function
-   // should be immediately familiar if you've
-   // seen previous tutorial programs. The only
-   // thing new would be setting up an object
-   // that described the potential $V(\mathbf
-   // x)$ using the expression that we got from
-   // the input file. We then need to evaluate
-   // this object at the quadrature points on
-   // each cell. If you've seen how to evaluate
-   // function objects (see, for example the
-   // coefficient in step-5), the code here will
-   // also look rather familiar.
+   // x)\varphi_j(\mathbf x)$. The
+   // function should be immediately
+   // familiar if you've seen previous
+   // tutorial programs. The only thing
+   // new would be setting up an object
+   // that describes the potential
+   // $V(\mathbf x)$ using the
+   // expression that we got from the
+   // input file. We then need to
+   // evaluate this object at the
+   // quadrature points on each cell. If
+   // you've seen how to evaluate
+   // function objects (see, for example
+   // the coefficient in step-5), the
+   // code here will also look rather
+   // familiar.
 template <int dim>
 void EigenvalueProblem<dim>::assemble_system ()
 {
@@ -305,10 +334,12 @@ void EigenvalueProblem<dim>::assemble_system ()
                   ) * fe_values.JxW (q_point);
           }

-   // Now that we have the local matrix
-   // contributions, we transfer them into
-   // the global objects and take care of
-   // zero boundary constraints:
+   // Now that we have the local
+   // matrix contributions, we
+   // transfer them into the
+   // global objects and take care
+   // of zero boundary
+   // constraints:
      cell->get_dof_indices (local_dof_indices);

      constraints
@@ -321,12 +352,13 @@ void EigenvalueProblem<dim>::assemble_system ()
                mass_matrix);
   }

-   // At the end of the function, we tell
-   // PETSc that the matrices have now been
-   // fully assembled and that the sparse
-   // matrix representation can now be
-   // compressed as no more entries will be
-   // added:
+   // At the end of the function, we
+   // tell PETSc that the matrices
+   // have now been fully assembled
+   // and that the sparse matrix
+   // representation can now be
+   // compressed as no more entries
+   // will be added:
  stiffness_matrix.compress();
  mass_matrix.compress();
}

    // @sect4{EigenvalueProblem::solve}

-   // This is the key new functionality of the
-   // program. Now that the system is set up,
-   // here is a good time to actually solve the
-   // problem: As with other examples this is
-   // done using a "solve" routine. Essentially,
-   // it works as in other programs: you set up
-   // a SolverControl object that describes the
-   // accuracy to which we want to solve the
-   // linear systems, and then we select the
-   // kind of solver we want.
-   // Here we choose the
-   // Krylov-Schur solver of SLEPc, a pretty
-   // fast and robust choice for this kind of
-   // problem:
-
+   // This is the key new functionality
+   // of the program. Now that the
+   // system is set up, here is a good
+   // time to actually solve the
+   // problem: As with other examples
+   // this is done using a "solve"
+   // routine. Essentially, it works as
+   // in other programs: you set up a
+   // SolverControl object that
+   // describes the accuracy to which we
+   // want to solve the linear systems,
+   // and then we select the kind of
+   // solver we want. Here we choose the
+   // Krylov-Schur solver of SLEPc, a
+   // pretty fast and robust choice for
+   // this kind of problem:
 template <int dim>
 void EigenvalueProblem<dim>::solve ()
 {
-  SolverControl solver_control (dof_handler.n_dofs(), 1e-9);
+   // We start here, as we normally do,
+   // by setting the convergence control
+   // we want:
+  SolverControl solver_control (dof_handler.n_dofs(), 1e-9);
  SLEPcWrappers::SolverKrylovSchur eigensolver (solver_control);

    // Before we actually solve for the
-   // eigenfunctions and -values, we have to
-   // also select which set of eigenvalues to
-   // solve for. Lets select those eigenvalues
-   // and corresponding eigenfunctions with
-   // the smallest real part (in fact, the
-   // problem we solve here is symmetric and
-   // so the eigenvalues are purely
-   // real). After that, we can actually let
-   // SLEPc do its work:
+   // eigenfunctions and -values, we
+   // have to also select which set of
+   // eigenvalues to solve for. Let's
+   // select those eigenvalues and
+   // corresponding eigenfunctions
+   // with the smallest real part (in
+   // fact, the problem we solve here
+   // is symmetric and so the
+   // eigenvalues are purely
+   // real). After that, we can
+   // actually let SLEPc do its work:
  eigensolver.set_which_eigenpairs (EPS_SMALLEST_REAL);

  eigensolver.solve (stiffness_matrix, mass_matrix,
                     eigenvalues, eigenfunctions,
                     eigenfunctions.size());

-   // The output of the call above is a set of
-   // vectors and values. In eigenvalue
-   // problems, the eigenfunctions are only
-   // determined up to a constant that can be
-   // fixed pretty arbitrarily. Knowing
-   // nothing about the origin of the
-   // eigenvalue problem, SLEPc has no other
-   // choice than to normalize the
-   // eigenvectors to one in the $l_2$
-   // (vector) norm. Unfortunately this norm
-   // has little to do with any norm we may be
-   // interested from a eigenfunction
-   // perspective: the $L_2(\Omega)$ norm, or
-   // maybe the $L_\infty(\Omega)$ norm.
+   // The output of the call above is
+   // a set of vectors and values. In
+   // eigenvalue problems, the
+   // eigenfunctions are only
+   // determined up to a constant that
+   // can be fixed pretty
+   // arbitrarily. Knowing nothing
+   // about the origin of the
+   // eigenvalue problem, SLEPc has no
+   // other choice than to normalize
+   // the eigenvectors to one in the
+   // $l_2$ (vector)
+   // norm. Unfortunately this norm
+   // has little to do with any norm
+   // we may be interested in from an
+   // eigenfunction perspective: the
+   // $L_2(\Omega)$ norm, or maybe the
+   // $L_\infty(\Omega)$ norm.
    //
-   // Let us choose the latter and rescale
-   // eigenfunctions so that they have
-   // $\|\phi_i(\mathbf
-   // x)\|_{L^\infty(\Omega)}=1$ instead of
-   // $\|\Phi\|_{l_2}=1$ (where $\phi_i$ is
-   // the $i$th eigenfunction and
-   // $\Phi_i$ the corresponding vector of
-   // nodal values).
-   // For the $Q_1$ elements
-   // chosen here, we know that the maximum of
-   // the function $\phi_i(\mathbf x)$ is
+   // Let us choose the latter and
+   // rescale eigenfunctions so that
+   // they have $\|\phi_i(\mathbf
+   // x)\|_{L^\infty(\Omega)}=1$
+   // instead of $\|\Phi\|_{l_2}=1$
+   // (where $\phi_i$ is the $i$th
+   // eigenfunction and
+   // $\Phi_i$ the corresponding
+   // vector of nodal values). For the
+   // $Q_1$ elements chosen here, we
+   // know that the maximum of the
+   // function $\phi_i(\mathbf x)$ is
    // attained at one of the nodes, so
    // $\max_{\mathbf x}\phi_i(\mathbf
    // x)=\max_j (\Phi_i)_j$, making
-   // the normalization in the $L_\infty$ norm
-   // trivial. Note that this doesn't work as
-   // easily if we had chosen $Q_k$ elements
-   // with $k>1$: there, the maximum of a
-   // function does not necessarily have to be
-   // attained at a node, and so
-   // $\max_{\mathbf x}\phi_i(\mathbf
-   // x)\ge\max_j (\Phi_i)_j$ (although the
-   // equality is usually nearly true).
+   // the normalization in the
+   // $L_\infty$ norm trivial. Note
+   // that this doesn't work as easily
+   // if we had chosen $Q_k$ elements
+   // with $k>1$: there, the maximum
+   // of a function does not
+   // necessarily have to be attained
+   // at a node, and so $\max_{\mathbf
+   // x}\phi_i(\mathbf x)\ge\max_j
+   // (\Phi_i)_j$ (although the
+   // equality is usually nearly
+   // true).
  for (unsigned int i=0; i<eigenfunctions.size(); ++i)
    eigenfunctions[i] /= eigenfunctions[i].linfty_norm ();

@@ … @@ void EigenvalueProblem<dim>::solve ()

    // @sect4{EigenvalueProblem::output_results}

-   // This is the last significant function of
-   // this program. It uses the DataOut class to
-   // generate graphical output from the
-   // eigenfunctions for later visualization. It
-   // works as in many of the other tutorial
-   // programs.
+   // This is the last significant
+   // function of this program. It uses
+   // the DataOut class to generate
+   // graphical output from the
+   // eigenfunctions for later
+   // visualization. It works as in many
+   // of the other tutorial programs.
    //
-   // The only thing worth discussing may be
-   // that because the potential is specified as
-   // a function expression in the input file,
-   // it would be nice to also have it as a
-   // graphical representation along with the
-   // eigenfunctions. The process to achieve
-   // this is relatively straightforward: we
-   // build an object that represents $V(\mathbf
-   // x)$ and then we interpolate this
-   // continuous function onto the finite
-   // element space. The result we also attach
-   // to the DataOut object for visualization.
+   // The only thing worth discussing
+   // may be that because the potential
+   // is specified as a function
+   // expression in the input file, it
+   // would be nice to also have it as a
+   // graphical representation along
+   // with the eigenfunctions. The
+   // process to achieve this is
+   // relatively straightforward: we
+   // build an object that represents
+   // $V(\mathbf x)$ and then we
+   // interpolate this continuous
+   // function onto the finite element
+   // space. The result we also attach
+   // to the DataOut object for
+   // visualization.
    //
-   // The whole collection of functions is then
-   // output as a single VTK file.
+   // The whole collection of functions
+   // is then output as a single VTK
+   // file.
 template <int dim>
 void EigenvalueProblem<dim>::output_results () const
 {
@@ -470,8 +524,9 @@ void EigenvalueProblem<dim>::output_results () const

    // @sect4{EigenvalueProblem::run}

    // This is the function which has the
-   // top-level control over everything. It is
-   // almost exactly the same as in step-4:
+   // top-level control over
+   // everything. It is almost exactly
+   // the same as in step-4:
 template <int dim>
 void EigenvalueProblem<dim>::run ()
 {
@@ -502,11 +557,13 @@ int main (int argc, char **argv)

  try
    {
-   // Here is another difference from
-   // other steps: We initialize the SLEPc
-   // work space which inherently
-   // initializes the PETSc work space,
-   // run the whole program, ...
+   // Here is another difference
+   // from other steps: We
+   // initialize the SLEPc work
+   // space, which inherently
+   // initializes the PETSc work
+   // space, run the whole
+   // program, ...
      SlepcInitialize (&argc,&argv,0,0);

      {
@@ -516,14 +573,16 @@ int main (int argc, char **argv)
        problem.run ();
      }

-   // ...and then unitialize the SLEPc
-   // work space when the job is done:
+   // ...and then uninitialize the
+   // SLEPc work space when the
+   // job is done:
      SlepcFinalize ();
    }

-   // All the while, we are watching out if
-   // any exceptions should have been
-   // generated. If that is so, we panic...
+   // All the while, we are watching
+   // out for any exceptions that may
+   // have been generated. If so, we
+   // panic...
  catch (std::exception &exc)
    {
      std::cerr << std::endl << std::endl
@@ -549,9 +608,10 @@ int main (int argc, char **argv)
      return 1;
    }

-   // ...or show that we are happy:
+   // ...or show that we are happy by
+   // exiting nicely:
  std::cout << std::endl
-           << "Done."
+           << "Job done."
            << std::endl;

  return 0;
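
To actually run the program, one still needs an input file for the
ParameterHandler object used above. A minimal sketch of such a file
could look like the following; only the entry name
"Number of eigenvalues/eigenfunctions" appears verbatim in the code of
this patch, while the other two entry names are assumptions modeled on
the run-time parameters (the potential and the number of global mesh
refinement steps) that the old introduction text mentioned:

@code
# step-36.prm -- hypothetical example input file
set Global mesh refinement steps         = 5    # entry name assumed
set Number of eigenvalues/eigenfunctions = 5
set Potential                            = 0    # formula for V(x); entry name assumed
@endcode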