<i>This program was contributed by Toby D. Young and Wolfgang
Bangerth. </i>
-<a name="Preamble"></a>
+<a name="Preamble"></a>
<h1>Preamble</h1>
The problem we want to solve in this example is an eigenspectrum
-problem. Eigenvalue problems appear in a wide context of problems, for example
-in the computation of electromagnetic standing waves in cavities, vibration
-modes of drum membranes, or oscillations of lakes and estuaries. One of the
-most enigmatic applications is probably the computation of stationary or
-quasi-static wave functions in quantum mechanics. The latter application is
-what we would like to investigate here, though the general techniques outlined
-in this program are of course equally applicable to the other applications
-above.
+problem. Eigenvalue problems appear in a wide context of problems, for
+example in the computation of electromagnetic standing waves in
+cavities, vibration modes of drum membranes, or oscillations of lakes
+and estuaries. One of the most enigmatic applications is probably the
+computation of stationary or quasi-static wave functions in quantum
+mechanics. The latter application is what we would like to investigate
+here, though the general techniques outlined in this program are of
+course equally applicable to the other applications above.
Eigenspectrum problems have the general form
@f{align*}
- L \Psi &= \varepsilon \Psi \qquad &&\text{in}\ \Omega,
+ L \Psi &= \varepsilon \Psi \qquad &&\text{in}\ \Omega\quad,
\\
- \Psi &= 0 &&\text{on}\ \partial\Omega,
+ \Psi &= 0 &&\text{on}\ \partial\Omega\quad,
@f}
where the Dirichlet boundary condition on $\Psi=\Psi(\mathbf x)$ could also be
replaced by Neumann or Robin conditions; $L$ is an operator that generally
also contains differential operators.
-Under suitable conditions, above equations have a set of solutions
-$\Psi_\ell,\varepsilon_\ell$, $\ell\in {\cal I}$, where $\cal I$ can be a finite or
-infinite set. In either case, let us note that there is no longer just a
-single solution, but a set of solutions (the various eigenfunctions and
-corresponding eigenvalues) that we want to compute. The problem of finding all
-eigenvalues (eigenfunctions) of such eigenvalue problems is a formidable
-challange; however, most of the time we are really only interested in a small
-subset of these values (functions). Fortunately, the interface to the SLEPc
-library that we will use for this tutorial program allows us to select which
-portion of the eigenspectrum and how many solutions we want to solve for.
-
-In this program, the eigenspectrum solvers we use are classes
-provided by deal.II that wrap around the linear algebra implementation of the
-<a href="http://www.grycap.upv.es/slepc/" target="_top">SLEPc</a> library;
-SLEPc itself builds on the <a
-href="http://www.mcs.anl.gov/petsc/" target="_top">PETSc</a> library for
-linear algebra contents.
-
-
-<a name="Introduction"></a>
+Under suitable conditions, the above equations have a set of solutions
+$\Psi_\ell,\varepsilon_\ell$, $\ell\in {\cal I}$, where $\cal I$ can
+be a finite or infinite set. In either case, let us note that there is
+no longer just a single solution, but a set of solutions (the various
+eigenfunctions and corresponding eigenvalues) that we want to
+compute. The problem of numerically finding all eigenvalues
+(eigenfunctions) of such eigenvalue problems is a formidable
+challenge. In fact, if the set $\cal I$ is infinite, the challenge is
+insurmountable. Most of the time, however, we are really only
+interested in a small subset of these values (functions); and
+fortunately, the interface to the SLEPc library that we will use for
+this tutorial program allows us to select which portion of the
+eigenspectrum and how many solutions we want to solve for.
+
+In this program, the eigenspectrum solvers we use are classes provided
+by deal.II that wrap around the linear algebra implementation of the
+<a href="http://www.grycap.upv.es/slepc/" target="_top">SLEPc</a>
+library; SLEPc itself builds on the <a
+href="http://www.mcs.anl.gov/petsc/" target="_top">PETSc</a> library
+for linear algebra contents.
+
+<a name="Intro"></a>
<h1>Introduction</h1>
-The basic equation of stationary quantum mechanics is the Schrödinger
-equation. The Copenhagen interpretation of quantum mechanics posits that the
-motion of particles in an external potential $V(\mathbf x)$ is governed a wave
-function $\Psi(\mathbf x)$ that satisfies this Schrödinger
-equation of the (non-dimensionalized) form
-@f{align*}
- [-\Delta + V(\mathbf x)] \Psi(\mathbf x) &= \varepsilon \Psi(\mathbf x)
- \qquad &&\text{in}\ \Omega,
- \\
- \Psi &= 0 &&\text{on}\ \partial\Omega.
-@f}
-As a consequence of this eigenvalue problem, this particle can only exist in a
-certain number of eigenstates that correspond to the energy eigenvalues
-$\varepsilon_\ell$ admitted as solutions of this equation, and if a particle has
-energy $\varepsilon_\ell$ then the probability of finding it at location $\mathbf
-x$ is proportional to $|\Psi_\ell(\mathbf x)|^2$ where $\Psi_\ell$ is the
-eigenfunction that corresponds to this eigenvalue.
-
-In order to numerically find solutions to this equation, i.e. a set of pairs
-of eigenvalue/eigenfunction, we use the usual finite element approach of
-multiplying the equation from the left with testfunctions, integrating by
-parts, and searching for solutions in finite dimensional spaces by
-approximating $\Psi(\mathbf x)\approx\Psi_h(\mathbf x)=\sum_{j}\phi_j(\mathbf
-x)\tilde\psi_j$, where $\tilde\psi$ is a vector of expansion coefficients. We
-then immediately arrive at the following equation that discretizes the
-continuous eigenvalue problem:
-@f[
- \sum_j [(\nabla\phi_i, \nabla\phi_j)+(V(\mathbf x)\phi_i,\phi_j)]
- \tilde{\psi}_j =
- \varepsilon_h \sum_j (\phi_i, \phi_j) \tilde{\psi}_j.
-@f]
-In matrix and vector notation, this equation then reads:
-@f[
- A \tilde{\Psi} = \varepsilon_h M \tilde{\Psi} \quad,
-@f]
-where $A$ is the stiffness matrix arising from the differential
-operator $L$, and $M$ is the mass matrix. The solution to the
-eigenvalue problem is an eigenspectrum $\varepsilon_{h,\ell}$, with
-associated eigenfunctions $\tilde{\Psi}_\ell=\sum_j \phi_j\tilde{\psi}_j$.
-
-It is this form of the eigenvalue problem that involves both matrices $A$ and
-$M$ that we will solve in the current tutorial program. We will want to solve
-it for the lowermost few eigenvalue/eigenfunction pairs.
-
+The basic equation of stationary quantum mechanics is the
+Schrödinger equation, which models the motion of particles in an
+external potential $V(\mathbf x)$ as if it were governed by a wave
+function $\Psi(\mathbf x)$ that satisfies a relation of the
+(non-dimensionalized) form
+@f{align*}
+ [-\Delta + V(\mathbf x)] \Psi(\mathbf x) &= \varepsilon \Psi(\mathbf x)
+ \qquad &&\text{in}\ \Omega\quad,
+ \\
+ \Psi &= 0 &&\text{on}\ \partial\Omega\quad.
+@f}
+As a consequence, a particle can only exist in a certain number of
+eigenstates that correspond to the energy eigenvalues
+$\varepsilon_\ell$ admitted as solutions of this equation. The
+Copenhagen interpretation of quantum mechanics posits that, if a
+particle has energy $\varepsilon_\ell$ then the probability of finding
+it at location $\mathbf x$ is proportional to $|\Psi_\ell(\mathbf
+x)|^2$ where $\Psi_\ell$ is the eigenfunction that corresponds to this
+eigenvalue.
+
+In order to numerically find solutions to this equation, i.e. a set of
+pairs of eigenvalues/eigenfunctions, we use the usual finite element
+approach of multiplying the equation from the left with test functions,
+integrating by parts, and searching for solutions in finite
+dimensional spaces by approximating $\Psi(\mathbf
+x)\approx\Psi_h(\mathbf x)=\sum_{j}\phi_j(\mathbf x)\tilde\psi_j$,
+where $\tilde\psi$ is a vector of expansion coefficients. We then
+immediately arrive at the following equation that discretizes the
+continuous eigenvalue problem:
+@f[
+ \sum_j [(\nabla\phi_i, \nabla\phi_j)+(V(\mathbf x)\phi_i,\phi_j)]
+ \tilde{\psi}_j = \varepsilon_h \sum_j (\phi_i, \phi_j) \tilde{\psi}_j\quad.
+@f]
+In matrix and vector notation, this equation then reads:
+@f[
+ A \tilde{\Psi} = \varepsilon_h M \tilde{\Psi} \quad,
+@f]
+where $A$ is the stiffness matrix arising from the differential
+operator $L$, and $M$ is the mass matrix. The solution to the
+eigenvalue problem is an eigenspectrum $\varepsilon_{h,\ell}$, with
+associated eigenfunctions $\Psi_\ell=\sum_j \phi_j\tilde{\psi}_j$.
+
+It is this form of the eigenvalue problem that involves both matrices
+$A$ and $M$ that we will solve in the current tutorial program. We
+will want to solve it for the lowermost few eigenvalue/eigenfunction
+pairs.
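+
+As a rough sketch only (it assumes that the deal.II wrapper class
+SLEPcWrappers::SolverKrylovSchur and its set_which_eigenpairs()
+function are available in the library version you are using, and it
+reuses the matrix and vector objects declared in the program below),
+solving this generalized problem for the lowermost few eigenpairs
+could look like this:
+@code
+// Sketch, not the program's literal solve() function: select the
+// low end of the spectrum and compute as many eigenpairs as we have
+// allocated vectors for.
+SolverControl                    solver_control (dof_handler.n_dofs(), 1e-9);
+SLEPcWrappers::SolverKrylovSchur eigensolver (solver_control);
+eigensolver.set_which_eigenpairs (EPS_SMALLEST_REAL);
+eigensolver.solve (stiffness_matrix, mass_matrix,
+                   eigenvalues, eigenfunctions,
+                   eigenfunctions.size());
+@endcode
+The last argument of the solve call determines how many
+eigenvalue/eigenfunction pairs are computed.
+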
<h3>Implementation details</h3>
Eigenvalue 3 : 19.8027
Eigenvalue 4 : 24.837
-Job done.
-@endcode
-These eigenvalues are exactly the ones that correspond to pairs $(m,n)=(1,1)$,
-$(1,2)$ and $(2,1)$, $(2,2)$, and $(3,1)$. A visualization of the
-corresponding eigenfunctions would look like this:
+Job done.
+@endcode
+These eigenvalues are exactly the ones that correspond to pairs
+$(m,n)=(1,1)$, $(1,2)$ and $(2,1)$, $(2,2)$, and $(3,1)$. A
+visualization of the corresponding eigenfunctions would look like
+this:
<TABLE WIDTH="100%">
<tr>
</tr>
</table>
-
-
<h2>Possibilities for extensions</h2>
It is always worth playing a few games in the playground! So here goes
<ul>
-<li> The potential used above (called the <i>infinite well</i> because it is a
-flat potential surrounded by infinitely high walls, see below) is interesting
-because it allows for analytically known solutions. Apart from that, it is
-rather boring, however. That said, it is trivial to play around with the
-potential by just setting it to something different in the input file. For
-example, let us assume that we wanted to work with the following potential in
+<li> The potential used above (called the <i>infinite well</i> because
+it is a flat potential surrounded by infinitely high walls) is
+interesting because it allows for analytically known solutions. Apart
+from that, it is rather boring, however. That said, it is trivial to
+play around with the potential by just setting it to something
+different in the input file. For example, let us assume that we wanted
+to work with the following potential in
2d:
@f[
V(x,y) = \left\{
\begin{array}{ll}
-100 & \text{if}\ \sqrt{x^2+y^2}<\frac 34 \ \text{and}
- \ xy>0,
+ \ xy>0
\\
-5 & \text{if}\ \sqrt{x^2+y^2}<\frac 34 \ \text{and}
- \ xy\le 0,
+ \ xy\le 0
\\
- 0 & \text{otherwise.}
- \end{array} \right.
+ 0 & \text{otherwise}
+ \end{array} \right.\quad.
@f]
In other words, the potential is -100 in two sectors of a circle of radius
0.75, -5 in the other two sectors, and zero outside the circle. We can achieve
</tr>
</table>
-
-<li> In our derivation of the problem we have assumed that the particle is
-confined to a domain $\Omega$ and that at the boundary of this domain its
-probability $|\Psi|^2$ of being is zero. This is equivalent to solving the
-eigenvalue problem on all of ${\mathbb R}^d$ and assuming that
-the energy potential is finite only inside a region $\Omega$ and infinite
-outside. It is relatively easy to show that $|\Psi(\mathbf x)|^2$ at all
-locations $\mathbf x$ where $V(\mathbf x)=\infty$. So the question is what
-happens if our potential is not of this form, i.e. there is no bounded domain
-outside of which the potential is infinite? In that case, it may be worth to
-just consider a very large domain at the boundary of which $V(\mathbf x)$ is
-at least very large, if not infinite. Play around with a few cases like this
-and explore how spectrum and eigenfunction change as we make the computational
-region larger and larger.
+<li> In our derivation of the problem we have assumed that the
+particle is confined to a domain $\Omega$ and that at the boundary of
+this domain its probability $|\Psi|^2$ of being present is zero. This
+is equivalent to solving the eigenvalue problem on all of
+${\mathbb R}^d$ and assuming that the energy potential is finite only
+inside a region $\Omega$ and infinite outside. It is relatively easy
+to show that $|\Psi(\mathbf x)|^2=0$ at all locations $\mathbf x$
+where $V(\mathbf x)=\infty$. So the question is what happens if our
+potential is not of this form, i.e. if there is no bounded domain
+outside of which the potential is infinite? In that case, it may be
+worth just considering a very large domain at the boundary of which
+$V(\mathbf x)$ is at least very large, if not infinite. Play around
+with a few cases like this and explore how the spectrum and
+eigenfunctions change as we make the computational region larger and
+larger.
<li> What happens if we investigate the simple harmonic oscillator
problem $V(\mathbf x)=c|\mathbf x|^2$? This potential is exactly of the form
<li> What happens if the particle in the box has <i>internal</i>
degrees of freedom? For example, if the particle were a spin-$1/2$
-particle?</li>
+particle? In that case, we may want to start solving a vector-valued
+problem instead.</li>
</ul>
// a mass matrix for the right
// hand side. We also need not
// just one solution function,
- // but a whole set of those for
+ // but a whole set of these for
// the eigenfunctions we want to
// compute, along with the
- // corresponding eigenvectors:
+ // corresponding eigenvalues:
PETScWrappers::SparseMatrix stiffness_matrix, mass_matrix;
std::vector<PETScWrappers::Vector> eigenfunctions;
std::vector<double> eigenvalues;
// @sect4{EigenvalueProblem::EigenvalueProblem}
// First up, the constructor. The
- // main, new part is handling the
+ // main new part is handling the
// run-time input parameters. We need
// to declare their existence first,
// and then read their values from
// \int_K \nabla\varphi_i(\mathbf x)
// \cdot \nabla\varphi_j(\mathbf x) +
// V(\mathbf x)\varphi_i(\mathbf
- // x)\varphi_j(\mathbf x)$. The
- // function should be immediately
- // familiar if you've seen previous
- // tutorial programs. The only thing
- // new would be setting up an object
- // that described the potential
- // $V(\mathbf x)$ using the
- // expression that we got from the
- // input file. We then need to
- // evaluate this object at the
- // quadrature points on each cell. If
- // you've seen how to evaluate
- // function objects (see, for example
- // the coefficient in step-5), the
- // code here will also look rather
- // familiar.
+ // x)\varphi_j(\mathbf x)$ and
+ // $M^K_{ij} = \int_K
+ // \varphi_i(\mathbf
+ // x)\varphi_j(\mathbf x)$
+ // respectively. This function should
+ // be immediately familiar if you've
+ // seen previous tutorial
+ // programs. The only thing new would
+ // be setting up an object that
+ // described the potential $V(\mathbf
+ // x)$ using the expression that we
+ // got from the input file. We then
+ // need to evaluate this object at
+ // the quadrature points on each
+ // cell. If you've seen how to
+ // evaluate function objects (see,
+ // for example the coefficient in
+ // step-5), the code here will also
+ // look rather familiar.
template <int dim>
void EigenvalueProblem<dim>::assemble_system ()
{
// visualization. It works as in many
// of the other tutorial programs.
//
+ // The whole collection of functions
+ // is then output as a single VTK
+ // file.
+template <int dim>
+void EigenvalueProblem<dim>::output_results () const
+{
+ DataOut<dim> data_out;
+
+ data_out.attach_dof_handler (dof_handler);
+
+ for (unsigned int i=0; i<eigenfunctions.size(); ++i)
+ data_out.add_data_vector (eigenfunctions[i],
+ std::string("eigenfunction_") +
+ Utilities::int_to_string(i));
+
// The only thing worth discussing
// may be that because the potential
// is specified as a function
// space. The result we also attach
// to the DataOut object for
// visualization.
- //
- // The whole collection of functions
- // is then output as a single VTK
- // file.
-template <int dim>
-void EigenvalueProblem<dim>::output_results () const
-{
- DataOut<dim> data_out;
-
- data_out.attach_dof_handler (dof_handler);
-
- for (unsigned int i=0; i<eigenfunctions.size(); ++i)
- data_out.add_data_vector (eigenfunctions[i],
- std::string("eigenfunction_") +
- Utilities::int_to_string(i));
-
- // How does this work?
Vector<double> projected_potential (dof_handler.n_dofs());
{
FunctionParser<dim> potential;
return 1;
}
- // ...or show that we are happy by
- // exiting nicely:
+ // If no exceptions are thrown,
+ // then we can tell the program to
+ // stop monkeying around and exit
+ // nicely:
std::cout << std::endl
<< "Job done."
<< std::endl;