an array which can be used to store any arbitrary number of properties
associated with the particles. Consequently, users can build their own
particle solver and attribute the desired properties to the particles (e.g. mass, charge
- diameter, temperature, etc.). In the present tutorial, this is used to
- store the value of the fluid velocity and the process id to which the particles
- belong.
+diameter, temperature, etc.). In the present tutorial, this is used to
+store the value of the fluid velocity and the process id to which the particles
+belong.
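+
+As a brief sketch of how this looks in code, the number of properties is given
+to the Particles::ParticleHandler at construction time, and the properties of
+each particle are then accessible as a writable ArrayView. In the following
+(incomplete) snippet, the `triangulation`, `mapping`, and `this_mpi_process`
+objects are assumed to be defined elsewhere:
+@code
+// Each particle stores the dim components of the fluid velocity at its
+// position, plus the id of the process that owns it.
+const unsigned int n_properties_per_particle = dim + 1;
+
+Particles::ParticleHandler<dim> particle_handler(triangulation,
+                                                 mapping,
+                                                 n_properties_per_particle);
+
+for (auto &particle : particle_handler)
+  {
+    ArrayView<double> properties = particle.get_properties();
+    for (unsigned int d = 0; d < dim; ++d)
+      properties[d] = 0.; // fluid velocity, filled during the simulation
+    properties[dim] = this_mpi_process; // owning process id
+  }
+@endcode
+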
<h3>Challenges related to distributed particle simulations</h3>
-Although the present step is not computationnaly intensive, simulations that
-include particles can be computationnaly demanding and require parallelization.
+Although the present step is not computationally intensive, simulations that
+include many particles can be computationally demanding and require parallelization.
The present step showcases the distributed parallel capabilities of deal.II for particles.
In general, there are three main challenges
-that arise in parallel distributed simulations that include particle:
+that specifically arise in parallel distributed simulations that include particles:
- Generating the particles on the distributed triangulation;
- Exchanging the particles that leave local domains between the processors;
- Load balancing the simulation so that every processor has a similar computational load.
-
-Generating the particles is not straightforward since the processor to which they belong
-must first be identified before the cell in which they are located is found.
-Deal.II provides numerous capabilities to generate particles through the Particles::Generator namespace.
-Some of these particle generator generate particles on the locally own subdomain. For example,
-Particles::Generators::regular_reference_locations uses a regular reference location within each cell of
-the subdomain or Particles::Generators::probabilistic_locations uses
-a probability density function to generate the particles.
-
-In some situations, such as the present step, particles must be generated at specific locations
-on cells that are owned only by a subset of the processors. In most of these situations,
-the insertion of the particle is done for a very limited number of time-steps and, consequently,
-does not constitute a large portion of the computational cost. For these occasions, deal.II provides
-convenient Particles::Generators that can globally insert the particles even if they are not located
-on the subdomain from which they are created. The generators first locate on which subdomain the particles
-are situated, identify is which cell they are located and exchange the necessary information amongst the processors
-to ensure that the particle is generated with the right properties. Consequently, this type of particle generation
-is communication intensive. The Particles::Generators::dof_support_points and the Particles::Generators::quadrature_points
-generate particles using a, possible non-matching, triangulation and the points of an associated dof_handler or quadrature respectively.
-Furthermore, the Particles::ParticleHandler class provides the Particles::ParticleHandler::insert_global_particles function
-which enables the global insertion of particles from a vector of points and a global vector of bounding boxes. In the present step,
-we use the Particles::Generators::quadrature_points to insert the particle in the shape of a circle.
-
+These challenges and their solutions in deal.II have been discussed in more detail in
+Gassmöller et al. (2018), but we will summarize them below.
+
+<h4>Parallel particle generation</h4>
+
+Generating distributed particles in a scalable way is not straightforward since
+the processor to which they belong must first be identified before the cell in
+which they are located is found. deal.II provides numerous capabilities to
+generate particles through the Particles::Generators namespace. Some of these
+particle generators create particles only on the locally owned subdomain. For
+example, Particles::Generators::regular_reference_locations() creates particles
+at the same reference locations within each cell of the local subdomain and
+Particles::Generators::probabilistic_locations() uses a globally defined probability
+density function to determine how many and where to generate particles locally.
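+
+As an illustration, a minimal sketch that places one particle at the center of
+every locally owned cell, assuming the same `triangulation`, `mapping`, and
+`particle_handler` objects as in the earlier snippet:
+@code
+// QMidpoint has a single quadrature point at the cell center; its point
+// serves here as the reference location of one particle per cell.
+const std::vector<Point<dim>> reference_locations =
+  QMidpoint<dim>().get_points();
+
+Particles::Generators::regular_reference_locations(triangulation,
+                                                   reference_locations,
+                                                   particle_handler,
+                                                   mapping);
+@endcode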
+
+In other situations, such as the present step, particles must be generated at
+specific locations on cells that may be owned only by a subset of the processors.
+In most of these situations, the insertion of the particles is done for a very
+limited number of time-steps and, consequently, does not constitute a large
+portion of the computational cost. For these occasions, deal.II provides
+convenient functions in the Particles::Generators namespace that can globally
+insert the particles even if
+they are not located on the subdomain from which they are created. The
+generators first determine on which subdomain the particles are situated, identify
+in which cell they are located, and exchange the necessary information amongst
+the processors to ensure that the particle is generated with the right
+properties. Consequently, this type of particle generation can be communication
+intensive. The Particles::Generators::dof_support_points() and
+Particles::Generators::quadrature_points() functions generate particles using a
+triangulation and the points of an associated DoFHandler or quadrature rule,
+respectively. The triangulation that is used to generate the particles can be
+the same triangulation as used for the background mesh, in which case these
+functions are very similar to the
+Particles::Generators::regular_reference_locations() function described in the
+previous paragraph. However, the triangulation used to generate particles can
+also be different (non-matching) from the triangulation of the background grid,
+which is useful to generate particles in particular shapes (as in this
+example), or to transfer information between two different computational grids
+(as in step-70). Furthermore, the Particles::ParticleHandler class provides the
+Particles::ParticleHandler::insert_global_particles() function, which enables the
+global insertion of particles from a vector of arbitrary points and a global
+vector of bounding boxes. In the present step, we use the
+Particles::Generators::quadrature_points on a non-matching triangulation to
+insert particles in the shape of a disk.
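+
+A condensed sketch of this kind of global insertion follows; the `center`,
+`radius`, `background_triangulation`, and `mpi_communicator` objects as well
+as the number of refinements are assumptions of this sketch rather than the
+exact choices of this step:
+@code
+// A separate triangulation that covers only the disk in which particles
+// should be created. It does not need to match the background mesh.
+parallel::distributed::Triangulation<dim> particle_triangulation(
+  mpi_communicator);
+GridGenerator::hyper_ball(particle_triangulation, center, radius);
+particle_triangulation.refine_global(4);
+
+// Gather the bounding boxes of the locally owned cells of the background
+// mesh on all processes, so that every process knows to whom it has to
+// send the particles it creates.
+const auto my_bounding_box = GridTools::compute_mesh_predicate_bounding_box(
+  background_triangulation, IteratorFilters::LocallyOwnedCell());
+const auto global_bounding_boxes =
+  Utilities::MPI::all_gather(mpi_communicator, my_bounding_box);
+
+// Insert one particle at the midpoint of each cell of the particle
+// triangulation into the particle handler attached to the background mesh.
+Particles::Generators::quadrature_points(particle_triangulation,
+                                         QMidpoint<dim>(),
+                                         global_bounding_boxes,
+                                         particle_handler);
+@endcode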
+
+<h4>Particle exchange</h4>
+
+As particles move around in parallel distributed computations they may leave
+the locally owned subdomain and need to be transferred to their new owner
+processes. This situation can arise in two very different ways: First, if the
+previous owning process knows the new owner of the particles that were lost
+(for example because the particles moved into ghost cells of a distributed
+triangulation) then the transfer can be handled efficiently as a point-to-point
+communication between each process and the new owners. This transfer happens
+automatically whenever particles are sorted into their new cells. Secondly,
+the previous owner may not know to which process the particle has moved. In
+this case the particle is discarded by default, as a global search for the
+owner can be expensive. Step-19 shows how such a discarded particle can still
+be collected, interpreted, and potentially reinserted by the user. In the
+present example we prevent the second case by imposing a CFL criterion on the
+time step to ensure particles will at most move into the ghost layer of the
+local process and can therefore be sent to neighboring processes automatically.
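+
+In code, this exchange is hidden in a single call that has to be made after
+the particle locations have changed. A minimal sketch of an explicit Euler
+update, assuming the velocity is stored in the first dim entries of the
+particle properties as above and `dt` is a time step that satisfies the CFL
+criterion:
+@code
+for (auto &particle : particle_handler)
+  {
+    const ArrayView<double> properties = particle.get_properties();
+    Point<dim> location = particle.get_location();
+    for (unsigned int d = 0; d < dim; ++d)
+      location[d] += dt * properties[d];
+    particle.set_location(location);
+  }
+
+// Re-establish which cell (and process) owns each particle. Particles that
+// moved into the ghost layer are transferred to their new owners here;
+// particles whose new owner cannot be determined are discarded by default.
+particle_handler.sort_particles_into_subdomains_and_cells();
+@endcode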
+
+<h4>Balancing mesh and particle load</h4>
+
+The last challenge that arises in parallel distributed computations using
+particles is to balance the computational load between work that is done on the
+grid, for example solving the finite-element problem, and the work that is done
+on the particles, for example advecting the particles or computing the forces
+between particles or between particles and grid. By default, for example in
+step-40, deal.II distributes the background mesh as evenly as possible between
+the available processes, that is, it balances the number of cells on each
+process. However, if some cells own many more particles than other cells, or if
+the particles of one cell are much more computationally expensive than the
+particles in other cells, then this problem no longer scales efficiently (for a
+discussion of what we consider "scalable" programs, see
+@ref GlossParallelScaling "this glossary entry"). Thus, we have to apply a form of
+"load balancing", which means we estimate the computational load that is
+associated with each cell and its particles. Repartitioning the mesh then
+accounts for this combined computational load instead of simply balancing the
+number of cells.
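+
+In deal.II this can be done by connecting a function to a repartitioning
+signal of the triangulation (called `cell_weight` in older releases and
+`weight` in more recent ones). A sketch using the older signal name, in which
+the returned number is the estimated cost of a cell relative to other cells
+and the two constants are assumptions that would need to be tuned for a real
+application:
+@code
+background_triangulation.signals.cell_weight.connect(
+  [&](const typename parallel::distributed::Triangulation<dim>::cell_iterator
+        &cell,
+      const typename parallel::distributed::Triangulation<dim>::CellStatus
+      /*status*/) -> unsigned int {
+    // Fixed cost of the finite-element work on a cell, plus a cost for
+    // every particle the cell contains. Only the ratio of the two
+    // constants influences the partitioning.
+    const unsigned int base_weight     = 1000;
+    const unsigned int particle_weight = 10000;
+    return base_weight +
+           particle_weight * particle_handler.n_particles_in_cell(cell);
+  });
+@endcode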
+
+In this section we have only discussed the challenges that are specific to
+particles in distributed computations. Parallel challenges that particles share
+with finite-element solutions (parallel output, data transfer during mesh
+refinement) can be addressed with the solutions already developed for
+finite-element problems in other examples.
<h3>The testcase</h3>
v &=& \frac{\partial\Psi}{\partial x} = 2 \cos(\pi x) \sin(\pi x) \sin^2 (\pi y) \cos \left( \pi \frac{t}{T} \right)
@f}
-
+This example uses the testcase to produce two models that handle the particles
+slightly differently. The first model prescribes the exact analytical velocity
+solution as the velocity for each particle. Therefore in this model there is no
+error in the velocity assigned to the particles, and any deviation of particle
+positions from the analytical position at a given time results from the error
+in solving the equation of motion for the particle. In the second model the
+analytical velocity field is first interpolated to a finite-element vector
+space (to simulate the case that the velocity was obtained from solving a
+finite-element problem). This finite-element "solution" is then evaluated at
+the locations of the particles to solve their equation of motion. The
+difference between the two cases makes it possible to assess whether the chosen
+finite-element space is sufficiently accurate to advect the particles with the
+optimal convergence rate of the chosen particle advection scheme, a question
+that is important in practice to determine the accuracy of the combined
+algorithm (see e.g. Gassmöller et al., 2019).
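+
+To illustrate the second model, the following sketch interpolates an
+analytical `velocity_function` onto a vector-valued finite-element field and
+then evaluates that field at the reference location of each particle. The
+`velocity_dof_handler`, `velocity_field`, and `velocity_function` names are
+assumptions of this sketch, and `velocity_field` is assumed to be a vector
+that also stores the locally relevant degrees of freedom:
+@code
+VectorTools::interpolate(mapping,
+                         velocity_dof_handler,
+                         velocity_function,
+                         velocity_field);
+
+const FiniteElement<dim> &fe = velocity_dof_handler.get_fe();
+Vector<double>            local_dof_values(fe.n_dofs_per_cell());
+
+for (const auto &cell : triangulation.active_cell_iterators())
+  if (cell->is_locally_owned())
+    {
+      // Read the velocity degrees of freedom of this cell through a
+      // DoFHandler iterator that points to the same cell.
+      const typename DoFHandler<dim>::active_cell_iterator dof_cell(
+        *cell, &velocity_dof_handler);
+      dof_cell->get_dof_values(velocity_field, local_dof_values);
+
+      // All particles in this cell share the same local degrees of
+      // freedom; only their reference locations differ.
+      for (auto &particle : particle_handler.particles_in_cell(cell))
+        {
+          const Point<dim> reference_location =
+            particle.get_reference_location();
+
+          Tensor<1, dim> particle_velocity;
+          for (unsigned int j = 0; j < fe.n_dofs_per_cell(); ++j)
+            particle_velocity[fe.system_to_component_index(j).first] +=
+              fe.shape_value(j, reference_location) * local_dof_values[j];
+
+          // ... use particle_velocity to update the particle location.
+        }
+    }
+@endcode
+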
<li> Blais, Bruno, et al. (2019) "Experimental Methods in Chemical Engineering: Discrete
Element Method—DEM." The Canadian Journal of Chemical Engineering 97.7 : 1964-1973.
+<li>Gassmöller, Rene, et al. (2019). "Evaluating the accuracy of hybrid finite
+    element/particle-in-cell methods for modelling incompressible Stokes flow."
+    Geophysical Journal International 219.3 : 1915-1938.
<li>Gassmöller, Rene, et al. (2018). "Flexible and Scalable Particle‐in‐Cell Methods With
- Adaptive Mesh Refinement for Geodynamic Computations." Geochemistry, Geophysics,
- Geosystems 19.9 : 3596-3604.
+ Adaptive Mesh Refinement for Geodynamic Computations." Geochemistry, Geophysics,
+ Geosystems 19.9 : 3596-3604.
<li>Blais, Bruno, et al. (2013) "Dealing with more than two materials in the FVCF–ENIP method."
European Journal of Mechanics-B/Fluids 42 1-9.