From 42a4dbb9c29ab7f99dd96224048591481afcc01e Mon Sep 17 00:00:00 2001 From: Wolfgang Bangerth Date: Sun, 26 Apr 2020 15:50:45 -0600 Subject: [PATCH] Write the introduction of step-58. --- examples/step-58/doc/intro.dox | 877 +++++++++++++++++++++++++++++++++ 1 file changed, 877 insertions(+) create mode 100644 examples/step-58/doc/intro.dox diff --git a/examples/step-58/doc/intro.dox b/examples/step-58/doc/intro.dox new file mode 100644 index 0000000000..9e53bae1fe --- /dev/null +++ b/examples/step-58/doc/intro.dox @@ -0,0 +1,877 @@ +
This program was contributed by Wolfgang Bangerth (Colorado State
University) and Yong-Yong Cai (Beijing
Computational Science Research Center, CSRC) and is the result of the
first author's time as a visitor at CSRC.

This material is based upon work partially supported by National Science
Foundation grants OCI-1148116, OAC-1835673, DMS-1821210, and EAR-1925595;
and by the Computational Infrastructure in
Geodynamics initiative (CIG), through the National Science Foundation under
Award No. EAR-1550901 and The University of California-Davis.

<h1>Introduction</h1>

The Nonlinear Schrödinger Equation (NLSE) for a function $\psi=\psi(\mathbf
x,t)$ and a potential $V=V(\mathbf x)$ is a model often used in
quantum mechanics and nonlinear optics. If one measures in appropriate
quantities (so that $\hbar=1$), then it reads as follows:
@f{align*}{
  - i \frac{\partial \psi}{\partial t}
  - \frac 12 \Delta \psi
  + V \psi
  + \kappa |\psi|^2 \psi
  &= 0
  \qquad\qquad
  &
  \text{in}\; \Omega\times (0,T),
  \\
  \psi(\mathbf x,0) &= \psi_0(\mathbf x)
  &
  \text{in}\; \Omega,
  \\
  \psi(\mathbf x,t) &= 0
  &
  \text{on}\; \partial\Omega\times (0,T).
@f}
If there is no potential, i.e., $V(\mathbf x)=0$, then it can be used
to describe the propagation of light in optical fibers. If $V(\mathbf
x)\neq 0$, the equation is also sometimes called the Gross-Pitaevskii
equation and can be used to model the time-dependent behavior of
Bose-Einstein condensates.

For this particular tutorial program, the physical interpretation of
the equation is not of much concern to us. Rather, we want to use it
as a model that allows us to explain two aspects:
- It is a complex-valued equation for $\psi \in H^1(\Omega,{\mathbb
  C})$. We have previously seen complex-valued equations in step-29,
  but there we opted to split the equations into real and imaginary
  parts and consequently ended up solving a system of two real-valued
  equations. In contrast, the goal here is to show how to solve
  problems in which we keep everything as complex numbers.
- The equation is a nice model problem to explain how operator
  splitting methods work. This is because it has terms with
  fundamentally different character: on the one hand, $- \frac 12
  \Delta \psi$ is a regular spatial operator in the way we have seen
  many times before; on the other hand, $\kappa |\psi(\mathbf x,t)|^2
  \psi$ has no spatial or temporal derivatives, i.e., it is a purely
  local operator. It turns out that we have efficient methods for each
  of these terms (in particular, we have analytic solutions for the
  latter), and that we may be better off treating these terms
  differently and separately. We will explain this in more detail
  below.

<h3>A note about the character of the equations</h3>

At first glance, the equations appear to be parabolic and similar to
the heat equation (see step-26) as there is only a single time
derivative and two spatial derivatives. But this is misleading.
Indeed, that this is not the correct interpretation is
more easily seen if we assume for a moment that the potential $V=0$
and $\kappa=0$. Then we have the equation
@f{align*}{
  - i \frac{\partial \psi}{\partial t}
  - \frac 12 \Delta \psi
  &= 0.
@f}
If we separate the solution into real and imaginary parts, $\psi=v+iw$,
with $v=\textrm{Re}\;\psi,\; w=\textrm{Im}\;\psi$,
then we can split the one equation into its real and imaginary parts
in the same way as we did in step-29:
@f{align*}{
  \frac{\partial w}{\partial t}
  - \frac 12 \Delta v
  &= 0,
  \\
  -\frac{\partial v}{\partial t}
  - \frac 12 \Delta w
  &= 0.
@f}
Not surprisingly, the factor $i$ in front of the time derivative
couples the real and imaginary parts of the equation. If we want to
understand this equation further, take the time derivative of one of
the equations, say
@f{align*}{
  \frac{\partial^2 w}{\partial t^2}
  - \frac 12 \Delta \frac{\partial v}{\partial t}
  &= 0,
@f}
(where we have assumed that, at least in some formal sense, we can
commute the spatial and temporal derivatives), and then insert the
other equation into it:
@f{align*}{
  \frac{\partial^2 w}{\partial t^2}
  + \frac 14 \Delta^2 w
  &= 0.
@f}
This equation is hyperbolic and similar in character to the wave
equation. (This will also be obvious if you look at the video
in the "Results" section of this program.) We could
have arrived at the same equation for $v$ as well.
Consequently, it is better to think of the NLSE as a hyperbolic,
wave-propagation equation than as a diffusion
equation such as the heat equation. (You may wonder whether it is
correct that the operator $\Delta^2$ appears with a positive sign
whereas in the wave equation, $\Delta$ has a negative sign. This is
indeed correct: After multiplying by a test function and integrating
by parts, we want to come out with a positive (semi-)definite
form. So, from $-\Delta u$ we obtain $+(\nabla v,\nabla u)$. Likewise,
after integrating by parts twice, we obtain from $+\Delta^2 u$ the
form $+(\Delta v,\Delta u)$. In both cases we get the desired positive
sign.)

The real NLSE, of course, also has the terms $V\psi$ and
$\kappa|\psi|^2\psi$. However, these are of lower order in the spatial
derivatives, and while they are obviously important, they do not
change the character of the equation.

In any case, the purpose of this discussion is to figure out
what time stepping scheme might be appropriate for the equation. The
conclusion is that, for this equation of hyperbolic character, we need to
choose a time step that satisfies a CFL-type condition. If we were to
use an explicit method (which we will not), we would have to investigate
the eigenvalues of the matrix that corresponds to the spatial
operator. If you followed the discussions of the video lectures
(@dealiiVideoLectureSeeAlso{26,27,28}),
then you will remember that the pattern is that one needs to make sure
that $k^s \propto h^t$ where $k$ is the time step, $h$ the mesh width,
and $s,t$ are the orders of temporal and spatial derivatives.
Whether you take the original equation ($s=1,t=2$) or the reformulation
for only the real or imaginary part ($s=2,t=4$), the outcome is that we
would need to choose $k \propto h^2$ if we were to use an explicit time
stepping method.
This is not feasible for the same reasons as in step-26 for
the heat equation: It would yield impractically small time steps
for even only modestly refined meshes. Rather, we have to use an
implicit time stepping method and can then choose a more balanced
$k \propto h$. Indeed, we will use the implicit Crank-Nicolson
method, as we have already done in step-23 for the regular
wave equation.
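To illustrate what choosing $k \propto h$ could look like in practice,
here is a minimal, self-contained sketch. The mesh and the
proportionality constant used here are arbitrary choices made only for
this illustration; they are not values taken from the actual program:
@code
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/tria.h>

int main()
{
  // Create some mesh; in a real program this would of course be the
  // mesh on which the NLSE is discretized.
  dealii::Triangulation<2> triangulation;
  dealii::GridGenerator::hyper_cube(triangulation, -1., 1.);
  triangulation.refine_global(5);

  // For an implicit scheme such as Crank-Nicolson, a time step
  // proportional to the mesh width (rather than its square) is the
  // appropriate choice. The factor 0.5 is an arbitrary constant.
  const double h         = dealii::GridTools::minimal_cell_diameter(triangulation);
  const double time_step = 0.5 * h;

  (void)time_step; // a real program would now run its time loop
}
@endcode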

<h3>The general idea of operator splitting</h3>

+ +@dealiiVideoLecture{30.25} + +If one thought of the NLSE as an ordinary differential equation in +which the right hand side happens to have spatial derivatives, i.e., +write it as +@f{align*}{ + \frac{d\psi}{dt} + &= + i\frac 12 \Delta \psi + -i V \psi + -i\kappa |\psi|^2 \psi, + \qquad\qquad + & + \text{for}\; t \in (0,T), + \\ + \psi(0) &= \psi_0, +@f} +one may be tempted to "formally solve" it by integrating both sides +over a time interval $[t_{n},t_{n+1}]$ and obtain +@f{align*}{ + \psi(t_{n+1}) + &= + \psi(t_n) + + + \int_{t_n}^{t_{n+1}} + \left( + i\frac 12 \Delta \psi(t) + -i V \psi(t) + -i\kappa |\psi(t)|^2 \psi(t) + \right) + \; + dt. +@f} +Of course, it's not that simple: the $\psi(t)$ in the integrand is +still changing over time in accordance with the differential equation, +so we cannot just evaluate the integral (or approximate it easily via +quadrature) because we don't know $\psi(t)$. +But we can write this with separate contributions as +follows, and this will allow us to deal with different terms separately: +@f{align*}{ + \psi(t_{n+1}) + &= + \psi(t_n) + + + \int_{t_n}^{t_{n+1}} + \left( + i\frac 12 \Delta \psi(t) + \right) + \; + dt + + + \int_{t_n}^{t_{n+1}} + \left( + -i V \psi(t) + \right) + \; + dt + + + \int_{t_n}^{t_{n+1}} + \left( + -i\kappa |\psi(t)|^2 \,\psi(t) + \right) + \; + dt. +@f} +The way this equation can now be read is as follows: For each time interval +$[t_{n},t_{n+1}]$, the change $\psi(t_{n+1})-\psi(t_{n})$ in the +solution consists of three contributions: +- The contribution of the Laplace operator. +- The contribution of the potential $V$. +- The contribution of the "phase" term $-i\kappa |\psi(t)|^2\,\psi(t)$. + +Operator splitting is now an approximation technique that +allows us to treat each of these contributions separately. (If we +want: In practice, we will treat the first two together, and the last +one separate. But that is a detail, conceptually we could treat all of +them differently.) To this end, let us introduce three separate "solutions": +@f{align*}{ + \psi^{(1)}(t_{n+1}) + &= + \psi(t_n) + + + \int_{t_n}^{t_{n+1}} + \left( + i\frac 12 \Delta \psi^{(1)}(t) + \right) + \; + dt, +\\ + \psi^{(2)}(t_{n+1}) + &= + \psi(t_n) + + + \int_{t_n}^{t_{n+1}} + \left( + -i V \psi^{(2)}(t) + \right) + \; + dt, +\\ + \psi^{(3)}(t_{n+1}) + &= + \psi(t_n) + + + \int_{t_n}^{t_{n+1}} + \left( + -i\kappa |\psi^{(3)}(t)|^2 \,\psi^{(3)}(t) + \right) + \; + dt. +@f} + +These three "solutions" can be thought of as satisfying the following +differential equations: +@f{align*}{ + \frac{d\psi^{(1)}}{dt} + &= + i\frac 12 \Delta \psi^{(1)}, + \qquad + & + \text{for}\; t \in (t_n,t_{n+1}), + \qquad\qquad\text{with initial condition}\; + \psi^{(1)}(t_n) &= \psi(t_n), +\\ + \frac{d\psi^{(2)}}{dt} + &= + -i V \psi^{(2)}, + & + \text{for}\; t \in (t_n,t_{n+1}), + \qquad\qquad\text{with initial condition}\; + \psi^{(2)}(t_n) &= \psi(t_n), +\\ + \frac{d\psi^{(3)}}{dt} + &= + -i\kappa |\psi^{(3)}|^2 \,\psi^{(3)}, + & + \text{for}\; t \in (t_n,t_{n+1}), + \qquad\qquad\text{with initial condition}\; + \psi^{(3)}(t_n) &= \psi(t_n). +@f} +In other words, they are all trajectories $\psi^{(k)}$ that start at +$\psi(t_n)$ and integrate up the effects of exactly one of the three +terms. The increments resulting from each of these terms over our time +interval are then $I^{(1)}=\psi^{(1)}(t_{n+1})-\psi(t_n)$, +$I^{(2)}=\psi^{(2)}(t_{n+1})-\psi(t_n)$, and +$I^{(3)}=\psi^{(3)}(t_{n+1})-\psi(t_n)$. + +It is now reasonable to assume (this is an approximation!) 
that the
change due to all three of the effects in question is well approximated
by the sum of the three separate increments:
@f{align*}{
 \psi(t_{n+1})-\psi(t_n)
 \approx
 I^{(1)} + I^{(2)} + I^{(3)}.
@f}
This intuition is indeed correct, though the approximation is not
exact: the difference between the exact left hand side and the term
$I^{(1)}+I^{(2)}+I^{(3)}$ (i.e., the difference between the exact increment
for the exact solution $\psi(t)$ when moving from $t_n$ to $t_{n+1}$,
and the increment composed of the three parts on the right hand side),
is proportional to $\Delta t=t_{n+1}-t_{n}$. In other words, this
approach introduces an error of size ${\cal O}(\Delta t)$. Nothing we
have done so far has discretized anything in time or space, so the
overall error is going to be ${\cal O}(\Delta t)$ plus whatever
error we commit when approximating the integrals (the temporal
discretization error) plus whatever error we commit when approximating
the spatial dependencies of $\psi$ (the spatial error).

Before we continue with the discussion of operator splitting, let us
briefly ask why one would even want to go this way. The answer is
simple: For some of the separate equations for the $\psi^{(k)}$, we
may have ways to solve them more efficiently than if we throw
everything together and try to solve it at once. For example, and
particularly pertinent in the current case: The equation for
$\psi^{(3)}$, i.e.,
@f{align*}{
  \frac{d\psi^{(3)}}{dt}
  &=
  -i\kappa |\psi^{(3)}|^2 \,\psi^{(3)},
  \qquad\qquad
  &
  \text{for}\; t \in (t_n,t_{n+1}),
  \qquad\qquad\text{with initial condition}\;
  \psi^{(3)}(t_n) &= \psi(t_n),
@f}
or equivalently,
@f{align*}{
  \psi^{(3)}(t_{n+1})
  &=
  \psi(t_n)
  +
  \int_{t_n}^{t_{n+1}}
  \left(
  -i\kappa |\psi^{(3)}(t)|^2 \,\psi^{(3)}(t)
  \right)
  \;
  dt,
@f}
can be solved exactly: the equation is solved by
@f{align*}{
  \psi^{(3)}(t) = e^{-i\kappa|\psi(t_n)|^2 (t-t_{n})} \psi(t_n).
@f}
This is easy to see if (i) you plug this solution into the
differential equation, and (ii) realize that the magnitude
$|\psi^{(3)}|$ is constant, i.e., the term $|\psi(t_n)|^2$ in the
exponent is in fact equal to $|\psi^{(3)}(t)|^2$. In other words, the
solution of the ODE for $\psi^{(3)}(t)$ only changes its phase,
but the magnitude of the complex-valued function $\psi^{(3)}(t)$
remains constant. This makes computing $I^{(3)}$ particularly convenient:
we don't actually need to solve any ODE; we can write the solution
down by hand. Using the operator splitting approach, none of the
methods to compute $I^{(1)},I^{(2)}$ therefore has to deal with the nonlinear
term and all of the associated unpleasantries: we can get away with
solving only linear problems, as long as we allow ourselves the
luxury of using an operator splitting approach.

Secondly, one often uses operator splitting if the different physical
effects described by the different terms have different time
scales. Imagine, for example, a case where we really did have some
sort of diffusion equation. Diffusion acts slowly, but if $\kappa$ is
large, then the "phase rotation" by the term $-i\kappa
|\psi^{(3)}(t)|^2 \,\psi^{(3)}(t)$ acts quickly. If we treated
everything together, this would imply having to take rather small time
steps.
But with operator splitting, we can take large time steps +$\Delta t=t_{n+1}-t_{n}$ for the diffusion, and (assuming we didn't +have an analytic solution) use an ODE solver with many small time +steps to integrate the "phase rotation" equation for $\psi^{(3)}$ from +$t_n$ to $t_{n+1}$. In other words, operator splitting allows us to +decouple slow and fast time scales and treat them differently, with +methods adjusted to each case. + + +
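Because the exact solution above acts on $\psi$ point by point, applying
it to a finite element solution amounts to nothing more than multiplying
every (complex-valued) coefficient of the solution vector by a complex
exponential, once the solution is stored as such a vector (as we will do
below). The following is a minimal sketch of this idea; the function and
variable names are made up for this illustration and are not taken from
the actual program:
@code
#include <deal.II/lac/vector.h>

#include <complex>

// Apply the exact solution of  d(psi)/dt = -i kappa |psi|^2 psi  over a
// time interval of length dt to every entry of a coefficient vector:
// each entry is rotated in phase while its magnitude remains unchanged.
void do_phase_rotation(dealii::Vector<std::complex<double>> &psi,
                       const double                          kappa,
                       const double                          dt)
{
  const std::complex<double> i_unit(0., 1.);
  for (auto &value : psi)
    value *= std::exp(-i_unit * kappa * std::norm(value) * dt);
}
@endcode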

<h3>Operator splitting: the "Lie splitting" approach</h3>

While the method above allows us to compute the three contributions
$I^{(k)}$ in parallel, if we want, the method can be made slightly
more accurate and easier to implement if we don't let the trajectories
for the $\psi^{(k)}$ all start at $\psi(t_n)$, but instead let the
trajectory for $\psi^{(2)}$ start at the end point of the
trajectory for $\psi^{(1)}$, namely $\psi^{(1)}(t_{n+1})$; similarly,
we will let the trajectory for $\psi^{(3)}$ start at the end point
of the trajectory for $\psi^{(2)}$, namely $\psi^{(2)}(t_{n+1})$. This
method is then called "Lie splitting" and has the same order of error
as the method above, i.e., the splitting error is ${\cal O}(\Delta
t)$.

This variation of operator splitting can be written as
follows (carefully compare the initial conditions to the ones above):
@f{align*}{
  \frac{d\psi^{(1)}}{dt}
  &=
  i\frac 12 \Delta \psi^{(1)},
  \qquad
  &
  \text{for}\; t \in (t_n,t_{n+1}),
  \qquad\qquad\text{with initial condition}\;
  \psi^{(1)}(t_n) &= \psi(t_n),
\\
  \frac{d\psi^{(2)}}{dt}
  &=
  -i V \psi^{(2)},
  &
  \text{for}\; t \in (t_n,t_{n+1}),
  \qquad\qquad\text{with initial condition}\;
  \psi^{(2)}(t_n) &= \psi^{(1)}(t_{n+1}),
\\
  \frac{d\psi^{(3)}}{dt}
  &=
  -i\kappa |\psi^{(3)}|^2 \,\psi^{(3)},
  &
  \text{for}\; t \in (t_n,t_{n+1}),
  \qquad\qquad\text{with initial condition}\;
  \psi^{(3)}(t_n) &= \psi^{(2)}(t_{n+1}).
@f}
(Obviously, while the formulas above imply that we should solve these
problems in this particular order, it is equally valid to first solve
for trajectory 3, then 2, then 1, or any other permutation.)

The integrated forms of these equations are then
@f{align*}{
  \psi^{(1)}(t_{n+1})
  &=
  \psi(t_n)
  +
  \int_{t_n}^{t_{n+1}}
  \left(
  i\frac 12 \Delta \psi^{(1)}(t)
  \right)
  \;
  dt,
\\
  \psi^{(2)}(t_{n+1})
  &=
  \psi^{(1)}(t_{n+1})
  +
  \int_{t_n}^{t_{n+1}}
  \left(
  -i V \psi^{(2)}(t)
  \right)
  \;
  dt,
\\
  \psi^{(3)}(t_{n+1})
  &=
  \psi^{(2)}(t_{n+1})
  +
  \int_{t_n}^{t_{n+1}}
  \left(
  -i\kappa |\psi^{(3)}(t)|^2 \,\psi^{(3)}(t)
  \right)
  \;
  dt.
@f}
From a practical perspective, this has the advantage that we need
to keep around fewer solution vectors: Once $\psi^{(1)}(t_{n+1})$ has been
computed, we don't need $\psi(t_n)$ any more; once $\psi^{(2)}(t_{n+1})$
has been computed, we don't need $\psi^{(1)}(t_{n+1})$ any more. And once
$\psi^{(3)}(t_{n+1})$ has been computed, we can just call it
$\psi(t_{n+1})$ because, if you insert the first into the second, and
then into the third equation, you see that the right hand side of
$\psi^{(3)}(t_{n+1})$ now contains the contributions of all three physical
effects:
@f{align*}{
  \psi^{(3)}(t_{n+1})
  &=
  \psi(t_n)
  +
  \int_{t_n}^{t_{n+1}}
  \left(
  i\frac 12 \Delta \psi^{(1)}(t)
  \right)
  \;
  dt
  +
  \int_{t_n}^{t_{n+1}}
  \left(
  -i V \psi^{(2)}(t)
  \right)
  \;
  dt
  +
  \int_{t_n}^{t_{n+1}}
  \left(
  -i\kappa |\psi^{(3)}(t)|^2 \,\psi^{(3)}(t)
  \right)
  \;
  dt.
@f}
(Compare this again with the "exact" computation of $\psi(t_{n+1})$:
It only differs in how we approximate $\psi(t)$ in each of the three integrals.)
In other words, Lie splitting is a lot simpler to implement than the
original method outlined above because data handling is so much
simpler.
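In code, this data handling advantage simply means that a single solution
vector can be updated in place by the three sub-steps in turn. The
following sketch only illustrates that structure; the three sub-solvers
are passed in as placeholders and are not code from the actual program:
@code
#include <deal.II/lac/vector.h>

#include <complex>
#include <functional>

using ComplexVector = dealii::Vector<std::complex<double>>;

// One Lie splitting step from t_n to t_{n+1}. The three `advance_*`
// arguments stand for solvers of the three sub-problems (Laplace part,
// potential part, phase rotation part). The point is that each sub-step
// reads `psi` as its initial condition and overwrites it with the end
// point of its own trajectory, so no second vector is needed.
void do_one_lie_splitting_step(
  ComplexVector &psi,
  const double   dt,
  const std::function<void(ComplexVector &, double)> &advance_laplace_part,
  const std::function<void(ComplexVector &, double)> &advance_potential_part,
  const std::function<void(ComplexVector &, double)> &advance_phase_part)
{
  advance_laplace_part(psi, dt);   // psi now holds psi^(1)(t_{n+1})
  advance_potential_part(psi, dt); // psi now holds psi^(2)(t_{n+1})
  advance_phase_part(psi, dt);     // psi now holds psi^(3)(t_{n+1}) = psi(t_{n+1})
}
@endcode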

<h3>Operator splitting: the "Strang splitting" approach</h3>

As mentioned above, Lie splitting is only ${\cal O}(\Delta t)$
accurate. This is acceptable if we use a first order time
discretization, for example the explicit or implicit Euler
method, to solve the differential equations for $\psi^{(k)}$. This is
because these time integration methods introduce an error proportional
to $\Delta t$ themselves, and so the splitting error is proportional
to an error that we would introduce anyway, and does not diminish the
overall convergence order.

But we typically want to use something higher order -- say, a
Crank-Nicolson
or
BDF2
method -- since these are often not more expensive than a
simple Euler method. It would be a shame if we were to use a time
stepping method that is ${\cal O}(\Delta t^2)$, but then lose the
accuracy again through the operator splitting.

This is where the Strang
splitting method comes in. It is easier to explain if we consider only
two parts, and so let us combine the effects of the Laplace operator
and of the potential into one, and the phase rotation into a second
effect. (Indeed, this is what we will do in the code since solving the
equation with the Laplace operator with or without the potential costs
the same -- so we merge these two steps.) The Lie splitting method
from above would then do the following: It computes solutions of the
following two ODEs,
@f{align*}{
  \frac{d\psi^{(1)}}{dt}
  &=
  i\frac 12 \Delta \psi^{(1)} -i V \psi^{(1)},
  \qquad
  &
  \text{for}\; t \in (t_n,t_{n+1}),
  \qquad\qquad\text{with initial condition}\;
  \psi^{(1)}(t_n) &= \psi(t_n),
\\
  \frac{d\psi^{(2)}}{dt}
  &=
  -i\kappa |\psi^{(2)}|^2 \,\psi^{(2)},
  &
  \text{for}\; t \in (t_n,t_{n+1}),
  \qquad\qquad\text{with initial condition}\;
  \psi^{(2)}(t_n) &= \psi^{(1)}(t_{n+1}),
@f}
and then uses the approximation $\psi(t_{n+1}) \approx
\psi^{(2)}(t_{n+1})$. In other words, we first make one full time step
for physical effect one, then one full time step for physical effect
two. The solution at the end of the time step is simply the sum of the
increments due to each of these physical effects separately.

In contrast,
Gil Strang
(one of the titans of numerical analysis starting in the mid-20th
century) figured out that it is more accurate to first do
one half-step for one physical effect, then a full time step for the
other physical effect, and then another half step for the first. Which
one is which does not matter, but because it is so simple to do the
phase rotation, we will use this effect for the half steps and then
only need to do one spatial solve with the Laplace operator plus
potential. This operator splitting method is now ${\cal O}(\Delta
t^2)$ accurate. Written in formulas, this yields the following
sequence of steps:
@f{align*}{
  \frac{d\psi^{(1)}}{dt}
  &=
  -i\kappa |\psi^{(1)}|^2 \,\psi^{(1)},
  &&
  \text{for}\; t \in (t_n,t_n+\tfrac 12\Delta t),
  \qquad\qquad&\text{with initial condition}\;
  \psi^{(1)}(t_n) &= \psi(t_n),
\\
  \frac{d\psi^{(2)}}{dt}
  &=
  i\frac 12 \Delta \psi^{(2)} -i V \psi^{(2)},
  \qquad
  &&
  \text{for}\; t \in (t_n,t_{n+1}),
  \qquad\qquad&\text{with initial condition}\;
  \psi^{(2)}(t_n) &= \psi^{(1)}(t_n+\tfrac 12\Delta t),
\\
  \frac{d\psi^{(3)}}{dt}
  &=
  -i\kappa |\psi^{(3)}|^2 \,\psi^{(3)},
  &&
  \text{for}\; t \in (t_n+\tfrac 12\Delta t,t_{n+1}),
  \qquad\qquad&\text{with initial condition}\;
  \psi^{(3)}(t_n+\tfrac 12\Delta t) &= \psi^{(2)}(t_{n+1}).
@f}
As before, the first and third step can be computed exactly for this
particular equation, yielding
@f{align*}{
  \psi^{(1)}(t_n+\tfrac 12\Delta t) &= e^{-i\kappa|\psi(t_n)|^2 \tfrac
  12\Delta t} \; \psi(t_n),
  \\
  \psi^{(3)}(t_{n+1}) &= e^{-i\kappa|\psi^{(2)}(t_{n+1})|^2 \tfrac
  12\Delta t} \; \psi^{(2)}(t_{n+1}).
@f}

This is then how we are going to implement things in this program:
In each time step, we execute three steps, namely
- Update the solution value at each node by analytically integrating
  the phase rotation equation by one half time step;
- Solve the space-time equation that corresponds to the full step
  for $\psi^{(2)}$, namely
  $-i\frac{\partial\psi^{(2)}}{\partial t}
  -
  \frac 12 \Delta \psi^{(2)} + V \psi^{(2)} = 0$,
  with initial conditions equal to the solution of the first half step
  above;
- Update the solution value at each node by analytically integrating
  the phase rotation equation by another half time step.

This structure will be reflected in an obvious way in the main time
loop of the program.
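Schematically, the main time loop therefore consists of a half step, a
full step, and another half step. The sketch below only illustrates this
structure: the two helper functions are declared as placeholders, and
their names are descriptive labels made up for this illustration rather
than necessarily the names used in the actual program:
@code
#include <deal.II/lac/vector.h>

#include <complex>

using ComplexVector = dealii::Vector<std::complex<double>>;

// Placeholders for the two kinds of sub-steps discussed above: the exact,
// pointwise phase rotation, and the Crank-Nicolson solve for the Laplace
// plus potential part.
void do_half_phase_step(ComplexVector &psi, double half_time_step);
void do_full_spatial_step(ComplexVector &psi, double time_step);

// One possible shape of the main time loop for Strang splitting:
void run_time_loop(ComplexVector &psi,
                   const double   end_time,
                   const double   time_step)
{
  for (double time = 0; time < end_time; time += time_step)
    {
      do_half_phase_step(psi, time_step / 2);  // first half step of phase rotation
      do_full_spatial_step(psi, time_step);    // full step for Laplace + potential
      do_half_phase_step(psi, time_step / 2);  // second half step of phase rotation
    }
}
@endcode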

<h3>Time discretization</h3>

From the discussion above, it should have become clear that the only
partial differential equation we have to solve in each time step is
@f{align*}{
  -i\frac{\partial\psi^{(2)}}{\partial t}
  -
  \frac 12 \Delta \psi^{(2)} + V \psi^{(2)} = 0.
@f}
This equation is linear. Furthermore, we only have to solve it from
$t_n$ to $t_{n+1}$, i.e., for exactly one time step.

To do this, we will apply the second order accurate Crank-Nicolson
scheme that we have already used in some of the other time dependent
codes (specifically: step-23 and step-26). It reads as follows:
@f{align*}{
  -i\frac{\psi^{(n,2)}-\psi^{(n,1)}}{k_{n+1}}
  -
  \frac 12 \Delta \left[\frac 12
  \left(\psi^{(n,2)}+\psi^{(n,1)}\right)\right]
  +
  V \left[\frac 12 \left(\psi^{(n,2)}+\psi^{(n,1)}\right)\right] = 0.
@f}
Here, the "previous" solution $\psi^{(n,1)}$ (or the "initial
condition" for this part of the time step) is the output of the
first phase rotation half-step; the output of the current step will
be denoted by $\psi^{(n,2)}$. $k_{n+1}=t_{n+1}-t_n$ is
the length of the time step. (One could argue whether $\psi^{(n,1)}$
and $\psi^{(n,2)}$ live at time step $n$ or $n+1$ and what their upper
indices should be. This is a philosophical discussion without practical
impact, and one might think of $\psi^{(n,1)}$ as something like
$\psi^{(n+\tfrac 13)}$, and $\psi^{(n,2)}$ as
$\psi^{(n+\tfrac 23)}$ if that helps clarify things -- though, again,
$n+\frac 13$ is not to be understood as "one third time step after
$t_n$" but more like "we've already done one third of the work necessary
for time step $n+1$".)

If we multiply the whole equation with $k_{n+1}$ and sort terms with
the unknown $\psi^{(n,2)}$ to the left and those with the known
$\psi^{(n,1)}$ to the right, then we obtain the following (spatial)
partial differential equation that needs to be solved in each time
step:
@f{align*}{
  -i\psi^{(n,2)}
  -
  \frac 14 k_{n+1} \Delta \psi^{(n,2)}
  +
  \frac 12 k_{n+1} V \psi^{(n,2)}
  =
  -i\psi^{(n,1)}
  +
  \frac 14 k_{n+1} \Delta \psi^{(n,1)}
  -
  \frac 12 k_{n+1} V \psi^{(n,1)}.
@f}
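For completeness, the intermediate step in this last manipulation looks
as follows: Multiplying the Crank-Nicolson equation by $k_{n+1}$ first gives
@f{align*}{
  -i\left(\psi^{(n,2)}-\psi^{(n,1)}\right)
  -
  \frac 14 k_{n+1} \Delta \left(\psi^{(n,2)}+\psi^{(n,1)}\right)
  +
  \frac 12 k_{n+1} V \left(\psi^{(n,2)}+\psi^{(n,1)}\right) = 0,
@f}
and sorting the terms that contain the unknown $\psi^{(n,2)}$ to the left
and those with the known $\psi^{(n,1)}$ to the right then results in the
equation stated above.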

<h3>Spatial discretization and dealing with complex variables</h3>

As mentioned above, the previous tutorial program dealing with
complex-valued solutions (namely, step-29) separated real and imaginary
parts of the solution. It thus reduced everything to real
arithmetic. In contrast, we here want to keep things
complex-valued.

The first part of this is that we need to define the discretized
solution as $\psi_h^n(\mathbf x)=\sum_j \Psi^n_j \varphi_j(\mathbf
x) \approx \psi(\mathbf x,t_n)$ where the $\varphi_j$ are the usual shape functions (which are
real valued) but the expansion coefficients $\Psi^n_j$ at time step
$n$ are now complex-valued. This is easily done in deal.II: We just
have to use Vector<std::complex<double>> instead of Vector<double> to
store these coefficients.

Of more interest is how to build and solve the linear
system. Obviously, this will only be necessary for the second step of
the Strang splitting discussed above, with the time discretization of
the previous subsection. We obtain the fully discrete version through
straightforward substitution of $\psi^n$ by $\psi^n_h$ and
multiplication by a test function:
@f{align*}{
  -iM\Psi^{(n,2)}
  +
  \frac 14 k_{n+1} A \Psi^{(n,2)}
  +
  \frac 12 k_{n+1} W \Psi^{(n,2)}
  =
  -iM\Psi^{(n,1)}
  -
  \frac 14 k_{n+1} A \Psi^{(n,1)}
  -
  \frac 12 k_{n+1} W \Psi^{(n,1)},
@f}
or written in a more compact way:
@f{align*}{
  \left[
  -iM
  +
  \frac 14 k_{n+1} A
  +
  \frac 12 k_{n+1} W
  \right] \Psi^{(n,2)}
  =
  \left[
  -iM
  -
  \frac 14 k_{n+1} A
  -
  \frac 12 k_{n+1} W
  \right] \Psi^{(n,1)}.
@f}
Here, the matrices are defined in their obvious ways:
@f{align*}{
  M_{ij} &= (\varphi_i,\varphi_j), \\
  A_{ij} &= (\nabla\varphi_i,\nabla\varphi_j), \\
  W_{ij} &= (\varphi_i,V \varphi_j).
@f}
Note that all matrices individually are in fact symmetric,
real-valued, and at least positive semidefinite, though the same is
obviously not true for
the system matrix $C = -iM + \frac 14 k_{n+1} A + \frac 12 k_{n+1} W$
and the corresponding matrix
$R = -iM - \frac 14 k_{n+1} A - \frac 12 k_{n+1} W$
on the right hand side.
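To make the complex-valued aspect concrete, here is a minimal sketch of
how the two matrices $C$ and $R$ could be assembled into complex-valued
sparse matrices. It is only an illustration of the idea, not code from
the actual program: the function signature, the use of a generic
Function<dim> object for the potential $V$, and the omission of
constraints and boundary conditions are simplifying assumptions made for
this sketch.
@code
#include <deal.II/base/function.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/sparse_matrix.h>

#include <complex>
#include <vector>

// Assemble C = -i M + (k/4) A + (k/2) W  and  R = -i M - (k/4) A - (k/2) W
// in one sweep over the cells. `potential` evaluates V(x).
template <int dim>
void assemble_matrices(const dealii::DoFHandler<dim>              &dof_handler,
                       const dealii::Function<dim>                &potential,
                       const double                                time_step,
                       dealii::SparseMatrix<std::complex<double>> &C,
                       dealii::SparseMatrix<std::complex<double>> &R)
{
  const dealii::QGauss<dim> quadrature(dof_handler.get_fe().degree + 1);
  dealii::FEValues<dim>     fe_values(dof_handler.get_fe(),
                                      quadrature,
                                      dealii::update_values |
                                        dealii::update_gradients |
                                        dealii::update_quadrature_points |
                                        dealii::update_JxW_values);

  const unsigned int dofs_per_cell = dof_handler.get_fe().n_dofs_per_cell();
  dealii::FullMatrix<std::complex<double>> cell_C(dofs_per_cell, dofs_per_cell);
  dealii::FullMatrix<std::complex<double>> cell_R(dofs_per_cell, dofs_per_cell);
  std::vector<dealii::types::global_dof_index> dof_indices(dofs_per_cell);

  const std::complex<double> imag(0., 1.);

  for (const auto &cell : dof_handler.active_cell_iterators())
    {
      fe_values.reinit(cell);
      cell_C = 0;
      cell_R = 0;

      for (unsigned int q = 0; q < quadrature.size(); ++q)
        {
          // Evaluate the potential at the current quadrature point:
          const double V = potential.value(fe_values.quadrature_point(q));

          for (unsigned int i = 0; i < dofs_per_cell; ++i)
            for (unsigned int j = 0; j < dofs_per_cell; ++j)
              {
                // Local contributions to -iM, A, and W:
                const std::complex<double> m =
                  -imag * fe_values.shape_value(i, q) * fe_values.shape_value(j, q);
                const double a =
                  fe_values.shape_grad(i, q) * fe_values.shape_grad(j, q);
                const double w =
                  V * fe_values.shape_value(i, q) * fe_values.shape_value(j, q);

                cell_C(i, j) +=
                  (m + 0.25 * time_step * a + 0.5 * time_step * w) * fe_values.JxW(q);
                cell_R(i, j) +=
                  (m - 0.25 * time_step * a - 0.5 * time_step * w) * fe_values.JxW(q);
              }
        }

      cell->get_dof_indices(dof_indices);
      C.add(dof_indices, cell_C);
      R.add(dof_indices, cell_R);
    }
}
@endcode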

<h3>Linear solvers</h3>

@dealiiVideoLecture{34}

The only remaining important question about the solution procedure is
how to solve the complex-valued linear system
@f{align*}{
  C \Psi^{(n,2)}
  =
  R \Psi^{(n,1)},
@f}
with the matrix $C = -iM + \frac 14 k_{n+1} A + \frac 12 k_{n+1}
W$ and a right hand side that is easily computed as the product of
the known matrix $R$ and the previous part-step's solution.
As usual, this comes down to the question of what properties the
matrix $C$ has. If it is symmetric and positive definite, then we can
for example use the Conjugate Gradient method.

Unfortunately, the matrix's only useful property is that it is complex
symmetric, i.e., $C_{ij}=C_{ji}$, as is easy to see by recalling that
$M,A,W$ are all symmetric. It is not, however,
Hermitian,
which would require that $C_{ij}=\bar C_{ji}$ where the bar indicates complex
conjugation.

Complex symmetry can be exploited in iterative solvers, as a quick
literature search indicates. We will here not try to become too
sophisticated (and indeed leave this to the Possibilities for extensions section below) and
instead simply go with the good old standby for problems without
properties: A direct solver. That's not optimal, especially for large
problems, but it shall suffice for the purposes of a tutorial program.
Fortunately, the SparseDirectUMFPACK class allows solving complex-valued
problems.
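In practice, using the direct solver then takes only a few lines, as in
the following sketch. It assumes that the complex-valued matrices and
the solution vector have already been set up elsewhere; whether one
recomputes the factorization in every time step or re-uses it (possible
here as long as the time step size, and therefore $C$, does not change)
is a design decision this sketch does not address:
@code
#include <deal.II/lac/sparse_direct.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/vector.h>

#include <complex>

// Solve  C * psi = R * psi  where `psi` on entry holds the previous
// part-step's solution and on exit holds the solution of the spatial step.
void solve_spatial_step(const dealii::SparseMatrix<std::complex<double>> &C,
                        const dealii::SparseMatrix<std::complex<double>> &R,
                        dealii::Vector<std::complex<double>>             &psi)
{
  dealii::Vector<std::complex<double>> rhs(psi.size());
  R.vmult(rhs, psi); // right hand side = R times the previous solution

  dealii::SparseDirectUMFPACK direct_solver;
  direct_solver.initialize(C);   // compute the sparse LU factorization of C
  direct_solver.vmult(psi, rhs); // apply C^{-1} to the right hand side
}
@endcode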

<h3>Definition of the test case</h3>

Initial conditions for the NLSE are typically chosen to represent
particular physical situations. This is beyond the scope of this
program, but suffice it to say that these initial conditions are
(i) often superpositions of the wave functions of particles located
at different points, and that (ii) because $|\psi(\mathbf x,t)|^2$
corresponds to a particle density function, the integral
@f[
  N(t) = \int_\Omega |\psi(\mathbf x,t)|^2
@f]
corresponds to the number of particles in the system. (Clearly, if
one were to be physically correct, $N(t)$ had better be a constant if
the system is closed, or $\frac{dN}{dt}<0$ if one has absorbing
boundary conditions.) The important point is that one should choose
initial conditions so that
@f[
  N(0) = \int_\Omega |\psi_0(\mathbf x)|^2
@f]
makes sense.

What we will use here, primarily because it makes for good graphics,
is the following:
@f[
  \psi_0(\mathbf x) = \sqrt{\sum_{k=1}^4 \alpha_k e^{-\frac{r_k^2}{R^2}}},
@f]
where $r_k = |\mathbf x-\mathbf x_k|$ is the distance from the (fixed)
locations $\mathbf x_k$, and
$\alpha_k$ are chosen so that each of the Gaussians that we are
adding up adds an integer number of particles to $N(0)$. We achieve
this by making sure that
@f[
  \int_\Omega \alpha_k e^{-\frac{r_k^2}{R^2}}
@f]
is a positive integer. In other words, we need to choose $\alpha_k$
as an integer multiple of
@f[
  \left(\int_\Omega e^{-\frac{r_k^2}{R^2}}\right)^{-1}
  =
  \left(R^d\sqrt{\pi^d}\right)^{-1},
@f]
assuming for the moment that $\Omega={\mathbb R}^d$ -- which is
of course not the case, but we'll ignore the small difference in
integral.

Thus, we choose $\alpha_k=\left(R^d\sqrt{\pi^d}\right)^{-1}$ for all $k$, and
$R=0.1$. This $R$ is small enough that the difference between the
exact (infinite) integral and the integral over $\Omega$ should not be
too concerning.
We choose the four points $\mathbf x_k$ as $(\pm 0.3, 0), (0, \pm
0.3)$ -- also far enough away from the boundary of $\Omega$ to keep
ourselves on the safe side.

For simplicity, we pose the problem on the square $[-1,1]^2$. For
boundary conditions, we will use time-independent Neumann conditions of the
form
@f[
  \nabla\psi(\mathbf x,t)\cdot \mathbf n=0 \qquad\qquad \forall \mathbf x\in\partial\Omega.
@f]
This is not a realistic choice of boundary conditions but sufficient
for what we want to demonstrate here. We will comment further on this
in the Possibilities for extensions section below.

Finally, we choose $\kappa=1$, and the potential as
@f[
  V(\mathbf x)
  =
  \begin{cases} 0 & \text{if}\; |\mathbf x|<0.7
  \\
  1000 & \text{otherwise}.
  \end{cases}
@f]
Using a large potential makes sure that the wave function $\psi$ remains
small outside the circle of radius 0.7. All of the Gaussians that make
up the initial conditions are within this circle, and the solution will
mostly oscillate within it, with a small amount of energy radiating into
the outside. The use of a large potential also makes sure that the nonphysical
boundary condition does not have too large an effect.
-- 
2.39.5