From: Wolfgang Bangerth Date: Thu, 15 Jun 2023 22:19:27 +0000 (-0600) Subject: Some updates to the step-19 documentation. X-Git-Tag: v9.5.0-rc1~103^2~1 X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=7a7cfffd1fbb31b46d33c3bb1ce03aa2acc77394;p=dealii.git Some updates to the step-19 documentation. --- diff --git a/examples/step-19/doc/intro.dox b/examples/step-19/doc/intro.dox index a0067bd1af..58d33cbed7 100644 --- a/examples/step-19/doc/intro.dox +++ b/examples/step-19/doc/intro.dox @@ -23,7 +23,7 @@ On the other hand, sometimes one wants to solve problems in which it is useful to track individual objects ("particles") and how their positions evolve. If this simply leads to a set of ordinary differential equations, for example if you want to track the positions of the planets in the solar system over -time, then deal.II is clearly not your right tool. On the other hand, if +time, then deal.II is clearly not the right tool. On the other hand, if this evolution is due to the interaction with the solution of partial differential equation, or if having a mesh to determine which particles interact with others (such as in the @@ -109,14 +109,12 @@ Second, in principle we would have to model the charge density via @f[ \rho(\mathbf x) = \sum_i e\delta(\mathbf x-\mathbf x_i). @f] - -@note The issue now is that in reality, a cathode ray tube in an old television yields a current of somewhere around a few milli-Amperes. In the much higher energy beams of particle accelerators, the current may only be a few nano-Ampere. But an Ampere is $6\times 10^{18}$ electrons flowing per second. Now, as you will see in the results section, we really only simulate -a few microseconds ($10^{-5}$ seconds), but that still results in very very +a few microseconds ($10^{-6}$ seconds), but that still results in very very large numbers of electrons -- far more than we can hope to simulate with a program as small as the current one. As a consequence, let us presume that each particle represents $N$ electrons. Then the particle @@ -125,7 +123,8 @@ solve are @f[ (Nm) {\ddot {\mathbf x}}_i = (Ne)\mathbf E, @f] -which is of course exactly the same as above. On the other hand, the charge +which is of course exactly the same as above after dividing both sides by $N$. +On the other hand, the charge density for these "clumps" of electrons is given by @f[ \rho(\mathbf x) = \sum_i (Ne)\delta(\mathbf x-\mathbf x_i). @@ -137,7 +136,7 @@ there are just not enough electrons to actually affect the overall electric field. But realism is not our goal here.) -@note One may wonder why the equation for the electric field (or, rather, +As a final thought about the model, one may wonder why the equation for the electric field (or, rather, the electric potential) has no time derivative whereas the equations for the electron positions do. In essence, this is a modeling assumption: We assume that the particles move so slowly that at any given time the @@ -155,7 +154,7 @@ the movement of the electrons.
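To make the discussion above a bit more concrete, here is a minimal sketch of how the right hand side $\sum_i (Ne)\delta(\mathbf x-\mathbf x_i)$ could be assembled against the finite element test functions if the particles are stored in a Particles::ParticleHandler object. This is only an illustration under assumptions, not the code actually used in step-19; in particular, the names `n_electrons_per_particle` and `electron_charge` are illustrative constants rather than symbols defined by deal.II or by this program.
```
// A minimal sketch -- not the actual step-19 implementation -- of assembling
// the point-charge right hand side sum_i (Ne) delta(x - x_i), assuming the
// particles live in a Particles::ParticleHandler. The constants
// `n_electrons_per_particle` and `electron_charge` are illustrative only.
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe.h>
#include <deal.II/lac/vector.h>
#include <deal.II/particles/particle_handler.h>

using namespace dealii;

template <int dim>
void assemble_point_charge_rhs(const DoFHandler<dim>           &dof_handler,
                               Particles::ParticleHandler<dim> &particle_handler,
                               const double                     n_electrons_per_particle,
                               const double                     electron_charge,
                               Vector<double>                  &system_rhs)
{
  const FiniteElement<dim> &fe = dof_handler.get_fe();

  Vector<double>                       cell_rhs(fe.n_dofs_per_cell());
  std::vector<types::global_dof_index> dof_indices(fe.n_dofs_per_cell());

  for (const auto &cell : dof_handler.active_cell_iterators())
    {
      cell_rhs = 0;

      // Each particle in this cell represents N electrons and therefore
      // contributes (Ne) * phi_j(x_i) to the local right hand side: the
      // delta function tested against the shape functions, evaluated at the
      // particle's location on the reference cell.
      for (const auto &particle : particle_handler.particles_in_cell(cell))
        for (unsigned int j = 0; j < fe.n_dofs_per_cell(); ++j)
          cell_rhs(j) += n_electrons_per_particle * electron_charge *
                         fe.shape_value(j, particle.get_reference_location());

      cell->get_dof_indices(dof_indices);
      for (unsigned int j = 0; j < fe.n_dofs_per_cell(); ++j)
        system_rhs(dof_indices[j]) += cell_rhs(j);
    }
}
```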

Time discretization

-The equations outlined above form a set of coupled differential equations. +The equations outlined above then form a set of coupled differential equations. Let us bring them all together in one place again to make that clear: @f{align*}{ -\epsilon_0 \Delta V &= \sum_i e\delta(\mathbf x-\mathbf x_i) @@ -184,8 +183,16 @@ step: \\ \frac{{\mathbf x}_i^{(n)}-{\mathbf x}_i^{(n-1)}}{\Delta t} &= {\mathbf v}_i^{(n)}. @f} +This scheme can be understood in the framework of operator splitting methods (specifically, +the "Lie splitting" method) wherein a coupled system is solved by updating one +variable at a time, using either the old values of other variables (e.g., using +$\mathbf x_i^{(n-1)}$ in the first equation) or the values of variables that have +already been updated in a previous sub-step (e.g., using $V^{(n)}$ in the second +equation). There are of course many better ways to do a time discretization (for -example the simple [leapfrog scheme](https://en.wikipedia.org/wiki/Leapfrog_integration)) +example the simple [leapfrog scheme](https://en.wikipedia.org/wiki/Leapfrog_integration) +when updating the velocity, or more general Strang splitting methods for the coupled +system) but this isn't the point of the tutorial program, and so we will be content with what we have here. (We will comment on a piece of this puzzle in the possibilities for extensions section of this program, diff --git a/examples/step-19/doc/results.dox b/examples/step-19/doc/results.dox index 448f15495f..9f02129e57 100644 --- a/examples/step-19/doc/results.dox +++ b/examples/step-19/doc/results.dox @@ -3,7 +3,7 @@ When this program is run, it produces output that looks as follows: ``` Timestep 1 - Field degrees of freedom: 4989 + Field degrees of freedom: 4989 Total number of particles in simulation: 20 Number of particles lost this time step: 0 @@ -162,6 +162,12 @@ electrode.
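Coming back to the time discretization described in the intro.dox hunk above, the following is a minimal sketch of what a single splitting step could look like in code. It is not one of the functions defined in step-19: it assumes that the potential $V^{(n)}$ has already been solved for using the old particle positions, that the evaluation of the electric field is handed in as a callback, and that each particle stores its velocity in the first `dim` entries of its properties array.
```
// A minimal sketch of one "Lie splitting" step as described above -- not a
// function actually defined in step-19. Assumptions: V^{(n)} has already
// been computed from the old particle positions, `electric_field` is a
// caller-provided callback returning E = -grad V at a point, and each
// particle stores its velocity in the first `dim` entries of its properties.
#include <deal.II/base/array_view.h>
#include <deal.II/base/point.h>
#include <deal.II/base/tensor.h>
#include <deal.II/particles/particle_handler.h>

#include <functional>

using namespace dealii;

template <int dim>
void advance_particles_one_step(
  Particles::ParticleHandler<dim>                         &particle_handler,
  const std::function<Tensor<1, dim>(const Point<dim> &)> &electric_field,
  const double                                             dt,
  const double                                             charge_to_mass_ratio)
{
  for (auto &particle : particle_handler)
    {
      const Tensor<1, dim> E = electric_field(particle.get_location());

      // v_i^{(n)} = v_i^{(n-1)} + dt * (e/m) E^{(n)}
      ArrayView<double> properties = particle.get_properties();
      Tensor<1, dim>    velocity;
      for (unsigned int d = 0; d < dim; ++d)
        velocity[d] = properties[d] + dt * charge_to_mass_ratio * E[d];

      // x_i^{(n)} = x_i^{(n-1)} + dt * v_i^{(n)}: the position update already
      // uses the velocity computed in the previous sub-step.
      particle.set_location(particle.get_location() + dt * velocity);

      for (unsigned int d = 0; d < dim; ++d)
        properties[d] = velocity[d];
    }

  // Re-associate particles with cells; particles that have left the domain
  // are removed in the process.
  particle_handler.sort_particles_into_subdomains_and_cells();
}
```
The important point here is the ordering: the velocity is updated first, and the position update then already uses the new velocity -- exactly the Lie splitting structure of the two difference equations shown above.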

More statistics about electrons

+At the end of the day, we are rarely interested in the *solution* of an equation, +but in numbers that can be *extracted* from it -- in other words, we want to +*postprocess* the solution (see the results section of step-4 for an extensive +discussion of this concept). Here, what one would likely be most interested in +assessing are some statistics about the particles. + The program already computes the fraction of the electrons that leave the domain through the hole in the anode. But there are other quantities one might be interested in. For example, the average velocity of these particles. It would @@ -182,13 +188,14 @@ regardless of how many time steps lie between each such point. The problem we are considering in this program is a coupled, multiphysics problem. But the way we solve it is by first computing the (electric) potential -field and then update the particle locations. This is what is called an +field and then updating first the particle velocity and finally the +particle locations. This is what is called an "operator-splitting method", a concept we will investigate in more detail in step-58. While it is awkward to think of a way to solve this problem that does not involve -splitting the problem into a PDE piece and a particles piece, one -*can* (and probably should!) think of a better way to update the particle +splitting the problem into a PDE piece and a particles piece, one *can* +(and probably should!) think of a better way to update the particle locations. Specifically, the equations we use to update the particle location are @f{align*}{ \\ \frac{{\mathbf x}_i^{(n)}-{\mathbf x}_i^{(n-1)}}{\Delta t} &= {\mathbf v}_i^{(n)}. @f} -This corresponds to a simple forward-Euler time discretization -- a method of -first order accuracy in the time step size $\Delta t$ that we know we should -avoid because we can do better. Rather, one might want to consider a scheme such +This corresponds to a Lie splitting where we first update one variable +(the velocity) and then use the already-updated value of that variable +to update the other (the position). It is well understood that the Lie +splitting incurs a first-order error in the time step size (i.e., +the error introduced by the splitting is ${\cal O}(\Delta t)$) that we know we should +avoid because we can do better. Independently, of course, we incur an error +of the same order a second time because we use an Euler-like scheme for each of +the two updates (we replace the time derivative by a simple finite difference +quotient containing the old and new value, and we evaluate the right hand side +only at the end time of the interval, at $t_n$), and a better scheme would also +use a better way to do this kind of update. + +Better strategies would replace the Lie splitting by something like a +[Strang splitting](https://en.wikipedia.org/wiki/Strang_splitting), and the +update of the particle position and velocity by a scheme such as the [leapfrog scheme](https://en.wikipedia.org/wiki/Leapfrog_integration) or more generally @@ -217,8 +236,9 @@ used here) but 100,000? If we needed a substantially finer mesh? In those cases, one would want to run the program not just on a single processor, but in fact on as many as one has available. This requires parallelization -both the PDE solution as well as over particles. In practice, while there +of both the PDE solution as well as over particles. 
In practice, while there are substantial challenges to making this efficient and scale well, these -challenges are all addressed in deal.II itself. For example, step-40 shows +challenges are all addressed in deal.II itself, and have been demonstrated +on simulations running on 10,000 or more cores. For example, step-40 shows how to parallelize the finite element part, and step-70 shows how one can then also parallelize the particles part.
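As a concrete illustration of the kind of scheme alluded to in the "possibilities for extensions" discussion above, a leapfrog version of the particle update could, for example, read
@f{align*}{
  {\mathbf v}_i^{(n+1/2)} &= {\mathbf v}_i^{(n-1/2)} + \Delta t\, \frac{e}{m} {\mathbf E}\left({\mathbf x}_i^{(n)}\right),
  \\
  {\mathbf x}_i^{(n+1)} &= {\mathbf x}_i^{(n)} + \Delta t\, {\mathbf v}_i^{(n+1/2)},
@f}
where the velocities now live at the half steps $t_{n\pm 1/2}$. Because both updates are then centered differences, the particle update by itself becomes second order accurate in $\Delta t$; how to combine such an update with the field solve without losing that accuracy is exactly where a Strang-type splitting would come in.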