From: Martin Kronbichler Date: Mon, 30 Dec 2019 21:16:21 +0000 (+0100) Subject: Add results section X-Git-Tag: v9.2.0-rc1~338^2~15 X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=194d6f3435d83b8fb6856e115d78dade26b0de93;p=dealii.git Add results section --- diff --git a/examples/step-67/doc/results.dox b/examples/step-67/doc/results.dox index 207317d6ce..7a8f6d9289 100644 --- a/examples/step-67/doc/results.dox +++ b/examples/step-67/doc/results.dox @@ -1,4 +1,765 @@

Results


Program output

Running the program with the default settings on a machine with 40 processes
produces the following output:
@code
Running with 40 MPI processes
Vectorization over 8 doubles = 512 bits (AVX512), VECTORIZATION_LEVEL=3
Number of degrees of freedom: 147,456 ( = 4 [vars] x 1,024 [cells] x 36 [dofs/cell/var] )
Time step size: 0.00689325, minimal h: 0.3125, initial transport scaling: 0.102759

Time: 0 , dt: 0.0069 , error rho: 2.76e-07 , rho * u: 1.259e-06 , energy: 2.987e-06
Time: 1.01 , dt: 0.0069 , error rho: 1.37e-06 , rho * u: 2.252e-06 , energy: 4.153e-06
Time: 2.01 , dt: 0.0069 , error rho: 1.561e-06 , rho * u: 2.43e-06 , energy: 4.493e-06
Time: 3.01 , dt: 0.0069 , error rho: 1.714e-06 , rho * u: 2.591e-06 , energy: 4.762e-06
Time: 4.01 , dt: 0.0069 , error rho: 1.843e-06 , rho * u: 2.625e-06 , energy: 4.985e-06
Time: 5.01 , dt: 0.0069 , error rho: 1.496e-06 , rho * u: 1.961e-06 , energy: 4.142e-06
Time: 6 , dt: 0.0083 , error rho: 1.007e-06 , rho * u: 7.119e-07 , energy: 2.972e-06
Time: 7 , dt: 0.0095 , error rho: 9.096e-07 , rho * u: 3.786e-07 , energy: 2.626e-06
Time: 8 , dt: 0.0096 , error rho: 8.439e-07 , rho * u: 3.338e-07 , energy: 2.43e-06
Time: 9 , dt: 0.0096 , error rho: 7.822e-07 , rho * u: 2.984e-07 , energy: 2.248e-06
Time: 10 , dt: 0.0096 , error rho: 7.231e-07 , rho * u: 2.666e-07 , energy: 2.074e-06

+-------------------------------------------+------------------+------------+------------------+
| Total wallclock time elapsed | 2.249s 30 | 2.249s | 2.249s 8 |
| | | |
| Section | no. calls | min time rank | avg time | max time rank |
+-------------------------------------------+------------------+------------+------------------+
| compute errors | 11 | 0.008066s 13 | 0.00952s | 0.01041s 20 |
| compute transport speed | 258 | 0.01012s 13 | 0.05392s | 0.08574s 25 |
| output | 11 | 0.9597s 13 | 0.9613s | 0.9623s 6 |
| rk time step total | 1283 | 0.9827s 25 | 1.015s | 1.06s 13 |
| rk_stage - integrals L_h | 6415 | 0.8803s 26 | 0.9198s | 0.9619s 14 |
| rk_stage - inv mass + vec upd | 6415 | 0.05677s 15 | 0.06487s | 0.07597s 13 |
+-------------------------------------------+------------------+------------+------------------+
@endcode

The program output shows that all errors are small. This is due to the fact
that we use a relatively fine mesh of $32^2$ cells with polynomials of degree
5 for a solution that is smooth. An interesting pattern shows up in the time
step size: whereas it is 0.0069 up to time 5, it increases to 0.0096 for later
times. The step size increases once the vortex, whose local velocity comes on
top of the speed of sound (and thus implies faster signal propagation), leaves
the computational domain between times 5 and 6.5. Our time step formula
recognizes this, and in the last part of the simulation only the acoustic
limit is detected.

The summary of wall clock times shows that 1283 time steps have been performed
in 1.02 seconds (looking at the average time among all MPI processes), while
the output of 11 files has taken an additional 0.96 seconds. Broken down per
time step and into the five Runge--Kutta stages, the compute time per
evaluation is 0.16 milliseconds. This high performance is typical of
matrix-free evaluators and a reason why explicit time integration is very
competitive against implicit solvers, especially for large-scale simulations.
The breakdown of
computational times at the end of the program run shows that the evaluation of
the integrals in $\mathcal L_h$ contributes around 0.92 seconds and the
application of the inverse mass matrix 0.06 seconds. Furthermore, the
estimation of the transport speed for the time step size computation adds
another 0.05 seconds of compute time.

If we use three more levels of global refinement and 9.4 million DoFs in total,
the final statistics are as follows (for the modified Lax--Friedrichs flux,
$p=5$, and the same system of 40 cores of dual-socket Intel Xeon Gold 6230):
@code
+-------------------------------------------+------------------+------------+------------------+
| Total wallclock time elapsed | 244.9s 12 | 244.9s | 244.9s 34 |
| | | |
| Section | no. calls | min time rank | avg time | max time rank |
+-------------------------------------------+------------------+------------+------------------+
| compute errors | 11 | 0.4239s 12 | 0.4318s | 0.4408s 9 |
| compute transport speed | 2053 | 3.962s 12 | 6.727s | 10.12s 7 |
| output | 11 | 30.35s 12 | 30.36s | 30.37s 9 |
| rk time step total | 10258 | 201.7s 7 | 205.1s | 207.8s 12 |
| rk_stage - integrals L_h | 51290 | 121.3s 6 | 126.6s | 136.3s 16 |
| rk_stage - inv mass + vec upd | 51290 | 66.19s 16 | 77.52s | 81.84s 10 |
+-------------------------------------------+------------------+------------+------------------+
@endcode

Per time step, the solver now takes 0.02 seconds, about 25 times as long as
for the small problem with 147k unknowns. Given that the problem involves 64
times as many unknowns, the increase in computing time is not
surprising. Since we also do 8 times as many time steps, the compute time
should in theory increase by a factor of 512. The actual increase is 205 s /
1.02 s = 202, because the small problem size cannot fully utilize the 40
cores due to communication overhead. This becomes clear if we look into the
details of the operations done per time step. The evaluation of the
differential operator $\mathcal L_h$ with nearest-neighbor communication goes
from 0.92 seconds to 127 seconds, i.e., it increases by a factor of 138. On
the other hand, the cost for the application of the inverse mass matrix and
the vector updates, which do not need to communicate between the MPI processes
at all, has increased by a factor of 1195. The increase is more than the
theoretical factor of 512 because the operation is limited by the bandwidth
from main memory (RAM) for the larger size, while for the smaller size all
vectors fit into the caches of the CPU. The numbers show that the mass matrix
evaluation and vector update part consume almost 40% of the time spent in the
Runge--Kutta stages, despite using a low-storage Runge--Kutta integrator and
merging of vector operations, and despite the over-integration used for the
$\mathcal L_h$ operator. For simpler differential operators and more expensive
time integrators, the proportion spent in the mass matrix and vector update
part can also reach 70%. If we compute a throughput number in terms of DoFs
processed per second and Runge--Kutta stage, we obtain
@f[
\text{throughput} = \frac{n_\mathrm{time\,steps} \, n_\mathrm{stages} \,
n_\mathrm{dofs}}{t_\mathrm{compute}} =
\frac{10258 \cdot 5 \cdot 9.4\,\text{MDoFs}}{205\,\mathrm{s}} =
2360\,\text{MDoFs/s}.
@f]
This throughput number is very high, given that simply copying one vector to
another one runs at around 10,000 MDoFs/s.
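To make this throughput metric concrete, here is a small standalone helper
(not part of step-67; names are made up for illustration) that evaluates the
formula above with the numbers from the timer output:
@code
#include <cstddef>
#include <iostream>

// Hypothetical helper, not part of step-67: throughput in million degrees of
// freedom processed per second and Runge-Kutta stage.
double throughput_mdofs_per_s(const std::size_t  n_time_steps,
                              const unsigned int n_stages,
                              const std::size_t  n_dofs,
                              const double       t_compute_seconds)
{
  return static_cast<double>(n_time_steps) * n_stages * n_dofs /
         t_compute_seconds / 1e6;
}

int main()
{
  // Numbers of the 9.4 MDoF run above: 10258 time steps, 5 stages per step,
  // 9,437,184 unknowns, 205.1 seconds in "rk time step total".
  std::cout << throughput_mdofs_per_s(10258, 5, 9437184, 205.1)
            << " MDoFs/s" << std::endl; // prints approximately 2360
}
@endcode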
If we go to the next-larger size with 37.7 million DoFs, the overall
simulation time is 2196 seconds, with 1978 seconds spent in the time
stepping. The increase in run time is a factor of 9.3 for the $\mathcal L_h$
operator (1179 versus 127 seconds) and a factor of 10.3 for the inverse mass
matrix and vector updates (797 versus 77.5 seconds). The reason for this
non-optimal increase in run time can be traced back to cache effects on the
given hardware (with 40 MB of L2 cache and 55 MB of L3 cache): While not all
of the relevant data fits into caches for 9.4 million DoFs (one vector takes
75 MB and we have three vectors plus some additional data in MatrixFree),
there is capacity for almost half of it nonetheless. Given that modern caches
are more sophisticated than the naive least-recently-used strategy (where we
would have little re-use as the data is used in a streaming-like fashion), we
can assume that a sizeable fraction of data can indeed be delivered from
caches for the 9.4 million DoFs case. For the larger case, even with optimal
caching less than 10 percent of data would fit into caches, with an associated
loss in performance.
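As a rough cross-check of this cache argument with the numbers quoted above
(40 + 55 MB of combined L2 and L3 cache), the three solution vectors alone
occupy
@f[
3 \times 9.4 \cdot 10^6 \times 8\,\mathrm{B} \approx 225\,\mathrm{MB}
\qquad \text{and} \qquad
3 \times 37.7 \cdot 10^6 \times 8\,\mathrm{B} \approx 905\,\mathrm{MB},
@f]
respectively, so roughly 40 percent of the vector data can reside in cache for
the 9.4 million DoF case, but only around a tenth for the larger case, even
before accounting for the additional data held by MatrixFree.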

Convergence rates for the analytical test case

For the modified Lax--Friedrichs flux and measuring the error in the momentum
variable, we obtain the following convergence table (the rates are very
similar for the density and energy variables):

<table align="center">
  <tr><th>&nbsp;</th><th colspan="3">p=2</th><th colspan="3">p=3</th><th colspan="3">p=5</th></tr>
  <tr><th>n_cells</th><th>n_dofs</th><th>error mom</th><th>rate</th><th>n_dofs</th><th>error mom</th><th>rate</th><th>n_dofs</th><th>error mom</th><th>rate</th></tr>
  <tr><td>16</td><td></td><td></td><td></td><td></td><td></td><td></td><td>2,304</td><td>1.373e-01</td><td></td></tr>
  <tr><td>64</td><td></td><td></td><td></td><td>4,096</td><td>9.130e-02</td><td></td><td>9,216</td><td>8.899e-03</td><td>3.94</td></tr>
  <tr><td>256</td><td>9,216</td><td>5.577e-02</td><td></td><td>16,384</td><td>7.381e-03</td><td>3.64</td><td>36,864</td><td>2.082e-04</td><td>5.42</td></tr>
  <tr><td>1024</td><td>36,864</td><td>4.724e-03</td><td>3.56</td><td>65,536</td><td>3.072e-04</td><td>4.59</td><td>147,456</td><td>2.625e-06</td><td>6.31</td></tr>
  <tr><td>4096</td><td>147,456</td><td>6.205e-04</td><td>2.92</td><td>262,144</td><td>1.880e-05</td><td>4.03</td><td>589,824</td><td>3.268e-08</td><td>6.33</td></tr>
  <tr><td>16,384</td><td>589,824</td><td>8.279e-05</td><td>2.91</td><td>1,048,576</td><td>1.224e-06</td><td>3.94</td><td>2,359,296</td><td>9.252e-10</td><td>5.14</td></tr>
  <tr><td>65,536</td><td>2,359,296</td><td>1.105e-05</td><td>2.91</td><td>4,194,304</td><td>7.871e-08</td><td>3.96</td><td>9,437,184</td><td>1.369e-10</td><td>2.77</td></tr>
  <tr><td>262,144</td><td>9,437,184</td><td>1.615e-06</td><td>2.77</td><td>16,777,216</td><td>4.961e-09</td><td>3.99</td><td>37,748,736</td><td>7.091e-11</td><td>0.95</td></tr>
</table>
If we switch to the Harten-Lax-van Leer flux, the results are as follows:

<table align="center">
  <tr><th>&nbsp;</th><th colspan="3">p=2</th><th colspan="3">p=3</th><th colspan="3">p=5</th></tr>
  <tr><th>n_cells</th><th>n_dofs</th><th>error mom</th><th>rate</th><th>n_dofs</th><th>error mom</th><th>rate</th><th>n_dofs</th><th>error mom</th><th>rate</th></tr>
  <tr><td>16</td><td></td><td></td><td></td><td></td><td></td><td></td><td>2,304</td><td>1.339e-01</td><td></td></tr>
  <tr><td>64</td><td></td><td></td><td></td><td>4,096</td><td>9.037e-02</td><td></td><td>9,216</td><td>8.849e-03</td><td>3.92</td></tr>
  <tr><td>256</td><td>9,216</td><td>4.204e-02</td><td></td><td>16,384</td><td>9.143e-03</td><td>3.31</td><td>36,864</td><td>2.501e-04</td><td>5.14</td></tr>
  <tr><td>1024</td><td>36,864</td><td>4.913e-03</td><td>3.09</td><td>65,536</td><td>3.257e-04</td><td>4.81</td><td>147,456</td><td>3.260e-06</td><td>6.26</td></tr>
  <tr><td>4096</td><td>147,456</td><td>7.862e-04</td><td>2.64</td><td>262,144</td><td>1.588e-05</td><td>4.36</td><td>589,824</td><td>2.953e-08</td><td>6.79</td></tr>
  <tr><td>16,384</td><td>589,824</td><td>1.137e-04</td><td>2.79</td><td>1,048,576</td><td>9.400e-07</td><td>4.08</td><td>2,359,296</td><td>4.286e-10</td><td>6.11</td></tr>
  <tr><td>65,536</td><td>2,359,296</td><td>1.476e-05</td><td>2.95</td><td>4,194,304</td><td>5.799e-08</td><td>4.02</td><td>9,437,184</td><td>2.789e-11</td><td>3.94</td></tr>
  <tr><td>262,144</td><td>9,437,184</td><td>2.038e-06</td><td>2.86</td><td>16,777,216</td><td>3.609e-09</td><td>4.01</td><td>37,748,736</td><td>5.730e-11</td><td>-1.04</td></tr>
</table>
The tables show that we get optimal $\mathcal O\left(h^{p+1}\right)$
convergence rates for both numerical fluxes. The errors are slightly smaller
for the Lax--Friedrichs flux for $p=2$, but the picture is reversed for
$p=3$. For $p=5$, we reach the roundoff accuracy of $10^{-11}$ with both
fluxes on the finest grids. Also note that the errors are absolute with a
domain length of $10^2$, so relative errors are below $10^{-12}$. The HLL flux
is somewhat better for the highest degree, which is due to a slight inaccuracy
of the Lax--Friedrichs flux: it essentially sets a Dirichlet condition on the
solution that leaves the domain, which results in a small artificial
reflection. Apart from that, we see that the influence of the numerical flux
is minor, as the polynomial part inside elements is the main driver of the
accuracy. The limited influence of the flux also has consequences when trying
to approach more challenging setups with this higher-order DG scheme: Taking,
for example, the parameters and grid of step-33, we get oscillations (which in
turn make the density negative and the solution explode) with both fluxes once
the high-mass part comes near the boundary, as opposed to the low-order finite
volume case ($p=0$). Thus, any case that leads to shocks in the solution
necessitates some form of limiting or artificial dissipation. For another
alternative, see the step-69 tutorial program.
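For reference, the rates in these tables are the usual estimated orders of
convergence between two consecutive uniformly refined meshes, i.e., $\log_2$
of the ratio of successive errors since $h$ is halved in each step. A minimal
standalone sketch of this computation, applied to the $p=5$ column of the
first table:
@code
#include <cmath>
#include <iostream>
#include <vector>

int main()
{
  // Momentum errors of the p=5 / modified Lax-Friedrichs column above,
  // one entry per uniform refinement (the mesh size h is halved each time).
  const std::vector<double> errors = {1.373e-01, 8.899e-03, 2.082e-04,
                                      2.625e-06, 3.268e-08, 9.252e-10};

  for (unsigned int i = 1; i < errors.size(); ++i)
    {
      // rate = log2(e_coarse / e_fine) for the refinement h -> h/2
      const double rate = std::log2(errors[i - 1] / errors[i]);
      std::cout << "refinement " << i << ": rate = " << rate << std::endl;
    }
}
@endcode
The first two rates printed, 3.94 and 5.42, match the corresponding table
entries.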

Results for flow in channel around cylinder in 2D

For the test case of the flow around a cylinder in a channel, we need to
change the first code line to `constexpr unsigned int testcase = 1;` (see the
sketch of the relevant compile-time constants below). This test case starts
with a background field of constant velocity at Ma=0.31 and constant density
around an obstacle in the form of a cylinder. Since we impose a
no-penetration condition on the cylinder walls, the flow has to rearrange,
which creates a big sound wave. The following pictures show the pressure at
times 0.1, 0.25, 0.5, and 1.0 (top left to bottom right) for the 2D case with
5 levels of global refinement. We clearly see the discontinuity that
propagates slowly in the upstream direction and more quickly in the downstream
direction in the first snapshot at time 0.1. At time 0.25, the sound wave has
reached the top and bottom walls and has been reflected back to the
interior. From the different positions of the waves reflected from the lower
and upper walls we can see the slight asymmetry of the Schäfer-Turek test case
represented by GridGenerator::channel_with_cylinder(), with somewhat more
space above the cylinder compared to below. At later times, the picture is
more chaotic with many sound waves all over the place.

(Pictures: pressure field at times 0.1, 0.25, 0.5, and 1.0, top left to
bottom right.)
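As mentioned at the beginning of this section, the run is configured through
compile-time constants at the top of the program. A minimal sketch for this 2D
channel case with 5 levels of global refinement (only `testcase` is named
explicitly above; the other constant names are assumptions based on the
quantities discussed in this section, so check the actual source of step-67):
@code
// Assumed compile-time switches at the top of step-67 (names other than
// 'testcase' are guesses for illustration):
constexpr unsigned int testcase             = 1; // 0: vortex, 1: channel
constexpr unsigned int dimension            = 2;
constexpr unsigned int n_global_refinements = 5;
constexpr unsigned int fe_degree            = 5;
constexpr unsigned int n_q_points_1d        = fe_degree + 2; // over-integration
@endcode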
The next picture shows an elevation plot of the pressure at time 1.0 looking
from the channel inlet towards the outlet -- here, we can see the large number
of reflections. In the figure, two types of waves are visible. The
larger-amplitude waves correspond to various reflections that happened as the
initial discontinuity hit the walls, whereas the small-amplitude waves with a
size similar to that of the elements correspond to numerical artifacts. They
have their origin in the finite resolution of the scheme and appear as the
discontinuity travels through elements with high-order polynomials. This
effect can be cured by increasing the resolution. Apart from this effect, the
rich wave structure is the result of the transport accuracy of the high-order
DG method.

(Picture: elevation plot of the pressure at time 1.0, viewed from the inlet
towards the outlet.)

With 2 levels of global refinement, the mesh and its partitioning on 40 MPI
processes look as follows:

(Picture: mesh and partitioning onto 40 MPI processes for 2 levels of global
refinement.)

When we run the code with 4 levels of global refinement on 40 cores, we get
the following output:
@code
Running with 40 MPI processes
Vectorization over 8 doubles = 512 bits (AVX512), VECTORIZATION_LEVEL=3
Number of degrees of freedom: 3,686,400 ( = 4 [vars] x 25,600 [cells] x 36 [dofs/cell/var] )
Time step size: 7.39876e-05, minimal h: 0.001875, initial transport scaling: 0.00110294

Time: 0 , dt: 7.4e-05 , norm rho: 4.17e-16 , rho * u: 1.629e-16 , energy: 1.381e-15
Time: 0.05 , dt: 6.3e-05 , norm rho: 0.02075 , rho * u: 0.03801 , energy: 0.08772
Time: 0.1 , dt: 5.9e-05 , norm rho: 0.02211 , rho * u: 0.04515 , energy: 0.08953
Time: 0.15 , dt: 5.7e-05 , norm rho: 0.02261 , rho * u: 0.04592 , energy: 0.08967
Time: 0.2 , dt: 5.8e-05 , norm rho: 0.02058 , rho * u: 0.04361 , energy: 0.08222
Time: 0.25 , dt: 5.9e-05 , norm rho: 0.01695 , rho * u: 0.04203 , energy: 0.06873
Time: 0.3 , dt: 5.9e-05 , norm rho: 0.01653 , rho * u: 0.0401 , energy: 0.06604
Time: 0.35 , dt: 5.7e-05 , norm rho: 0.01774 , rho * u: 0.04264 , energy: 0.0706

...

Time: 1.95 , dt: 5.8e-05 , norm rho: 0.01488 , rho * u: 0.03923 , energy: 0.05185
Time: 2 , dt: 5.7e-05 , norm rho: 0.01432 , rho * u: 0.03969 , energy: 0.04889

+-------------------------------------------+------------------+------------+------------------+
| Total wallclock time elapsed | 273.6s 13 | 273.6s | 273.6s 0 |
| | | |
| Section | no. calls | min time rank | avg time | max time rank |
+-------------------------------------------+------------------+------------+------------------+
| compute errors | 41 | 0.01112s 35 | 0.0672s | 0.1337s 0 |
| compute transport speed | 6914 | 5.422s 35 | 15.96s | 29.99s 1 |
| output | 41 | 37.24s 35 | 37.3s | 37.37s 0 |
| rk time step total | 34564 | 205.4s 1 | 219.5s | 230.1s 35 |
| rk_stage - integrals L_h | 172820 | 153.6s 1 | 164.9s | 175.6s 27 |
| rk_stage - inv mass + vec upd | 172820 | 47.13s 13 | 53.09s | 64.05s 33 |
+-------------------------------------------+------------------+------------+------------------+
@endcode

We note that the norms we print for the various quantities are the deviations
$\rho'$, $(\rho u)'$, and $E'$ against the background field, which is the
initial condition. The distribution of run time is overall similar to the
previous test case. The only slight difference is the larger proportion of
time spent in $\mathcal L_h$ as compared to the inverse mass matrix and vector
updates. This is because the geometry is deformed and the matrix-free
framework needs to load additional arrays for the geometry from memory that
are compressed in the affine mesh case.
Increasing the number of global refinements to 5, the output becomes:
@code
Running with 40 MPI processes
Vectorization over 8 doubles = 512 bits (AVX512), VECTORIZATION_LEVEL=3
Number of degrees of freedom: 14,745,600 ( = 4 [vars] x 102,400 [cells] x 36 [dofs/cell/var] )

...

+-------------------------------------------+------------------+------------+------------------+
| Total wallclock time elapsed | 2693s 32 | 2693s | 2693s 23 |
| | | |
| Section | no. calls | min time rank | avg time | max time rank |
+-------------------------------------------+------------------+------------+------------------+
| compute errors | 41 | 0.04537s 32 | 0.173s | 0.3489s 0 |
| compute transport speed | 13858 | 40.75s 32 | 85.99s | 149.8s 0 |
| output | 41 | 153.8s 32 | 153.9s | 154.1s 0 |
| rk time step total | 69284 | 2386s 0 | 2450s | 2496s 32 |
| rk_stage - integrals L_h | 346420 | 1365s 32 | 1574s | 1718s 19 |
| rk_stage - inv mass + vec upd | 346420 | 722.5s 10 | 870.7s | 1125s 32 |
+-------------------------------------------+------------------+------------+------------------+
@endcode

The effect on performance is similar to the analytical test case: in theory,
computation times should increase by a factor of 8, but we actually see an
increase by a factor of 11 for the time steps (219.5 seconds versus 2450
seconds). This can again be traced back to caches, with the small case mostly
fitting in caches. An interesting effect, typical of programs with a mix of
local communication (the integrals of $\mathcal L_h$) and global communication
(the computation of the transport speed) with some load imbalance, can be
observed by looking at the MPI ranks that measure the minimal and maximal
times of the different phases. Rank 0 reports the fastest throughput for the
"rk time step total" part. At the same time, it appears to be slowest for the
"compute transport speed" part, almost a factor of 2 slower than the
average. Since the latter involves global communication, we can attribute the
slowness in this part to the fact that the local Runge--Kutta stages have
advanced more quickly on this rank and it needs to wait until the other
processors catch up. At this point, one can wonder about the reason for this
imbalance: The number of cells is almost the same on all MPI processes due to
the default weights. However, the matrix-free framework is faster on affine
and Cartesian cells located towards the outlet of the channel, to which the
lower MPI ranks are assigned. On the other hand, rank 32, which reports the
highest run time for the Runge--Kutta stages, owns the curved cells near the
cylinder, for which no data compression is possible. To improve throughput, we
could assign different weights to different cell types (a sketch of this idea
is given below), or even measure the run time for a few time steps and then
rebalance.

The throughput per Runge--Kutta stage can be computed to 2085 MDoFs/s for the
14.7 million DoFs test case over 346k Runge--Kutta stages, slightly slower
than the Cartesian mesh throughput of 2360 MDoFs/s reported above.
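A sketch of the first idea, assigning a larger partitioning weight to the
curved cells near the cylinder, could connect to the cell weighting signal of
parallel::distributed::Triangulation (called `cell_weight` in deal.II versions
of that era) before repartitioning. The weight values and the way a "curved"
cell is detected below (via a manifold id that is assumed to be set on the
cylinder) are illustrative assumptions that would need to be calibrated
against measured timings; they are not settings taken from step-67:
@code
// Sketch only: make ranks that own curved cells receive fewer cells overall.
triangulation.signals.cell_weight.connect(
  [](const typename parallel::distributed::Triangulation<dim>::cell_iterator &cell,
     const typename parallel::distributed::Triangulation<dim>::CellStatus /*status*/)
    -> unsigned int {
    // crude proxy for "curved cell": a face attached to the assumed
    // cylinder manifold (manifold id 1 is an illustrative guess)
    for (unsigned int f = 0; f < GeometryInfo<dim>::faces_per_cell; ++f)
      if (cell->face(f)->manifold_id() == 1)
        return 2000;
    return 1000;
  });
triangulation.repartition();
@endcode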
Finally, if we add one additional refinement, we record the following output:
@code
Running with 40 MPI processes
Vectorization over 8 doubles = 512 bits (AVX512), VECTORIZATION_LEVEL=3
Number of degrees of freedom: 58,982,400 ( = 4 [vars] x 409,600 [cells] x 36 [dofs/cell/var] )

...

Time: 1.95 , dt: 1.4e-05 , norm rho: 0.01488 , rho * u: 0.03923 , energy: 0.05183
Time: 2 , dt: 1.4e-05 , norm rho: 0.01431 , rho * u: 0.03969 , energy: 0.04887

+-------------------------------------------+------------------+------------+------------------+
| Total wallclock time elapsed | 2.166e+04s 26 | 2.166e+04s | 2.166e+04s 24 |
| | | |
| Section | no. calls | min time rank | avg time | max time rank |
+-------------------------------------------+------------------+------------+------------------+
| compute errors | 41 | 0.1758s 30 | 0.672s | 1.376s 1 |
| compute transport speed | 27748 | 321.3s 34 | 678.8s | 1202s 1 |
| output | 41 | 616.3s 32 | 616.4s | 616.4s 34 |
| rk time step total | 138733 | 1.983e+04s 1 | 2.036e+04s | 2.072e+04s 34 |
| rk_stage - integrals L_h | 693665 | 1.052e+04s 32 | 1.248e+04s | 1.387e+04s 19 |
| rk_stage - inv mass + vec upd | 693665 | 6404s 10 | 7868s | 1.018e+04s 32 |
+-------------------------------------------+------------------+------------+------------------+
@endcode

The "rk time step total" part corresponds to a throughput of 2010 MDoFs/s. The
overall run time to perform 139k time steps is 20k seconds (5.7 hours), or 7
time steps per second. More throughput can be achieved by adding more cores to
the computation.

Results for flow in channel around cylinder in 3D

Switching the channel test case to 3D with 3 global refinements, the output is
@code
Running with 40 MPI processes
Vectorization over 8 doubles = 512 bits (AVX512), VECTORIZATION_LEVEL=3
Number of degrees of freedom: 221,184,000 ( = 5 [vars] x 204,800 [cells] x 216 [dofs/cell/var] )

...

Time: 1.95 , dt: 0.00011 , norm rho: 0.01131 , rho * u: 0.03056 , energy: 0.04091
Time: 2 , dt: 0.00011 , norm rho: 0.0119 , rho * u: 0.03142 , energy: 0.04425

+-------------------------------------------+------------------+------------+------------------+
| Total wallclock time elapsed | 1.734e+04s 4 | 1.734e+04s | 1.734e+04s 38 |
| | | |
| Section | no. calls | min time rank | avg time | max time rank |
+-------------------------------------------+------------------+------------+------------------+
| compute errors | 41 | 0.6551s 34 | 3.216s | 7.281s 0 |
| compute transport speed | 3546 | 160s 34 | 393.2s | 776.9s 0 |
| output | 41 | 1350s 34 | 1353s | 1357s 0 |
| rk time step total | 17723 | 1.519e+04s 0 | 1.558e+04s | 1.582e+04s 34 |
| rk_stage - integrals L_h | 88615 | 1.005e+04s 32 | 1.126e+04s | 1.23e+04s 11 |
| rk_stage - inv mass + vec upd | 88615 | 3056s 11 | 4322s | 5759s 32 |
+-------------------------------------------+------------------+------------+------------------+
@endcode

The physics are similar to the 2D case, with a slight motion in the z
direction due to the gravitational force. The throughput per Runge--Kutta
stage in this case is
@f[
\text{throughput} = \frac{n_\mathrm{time\,steps} \, n_\mathrm{stages} \,
n_\mathrm{dofs}}{t_\mathrm{compute}} =
\frac{17723 \cdot 5 \cdot 221.2\,\text{M}}{15580\,\mathrm{s}} =
1258\,\text{MDoFs/s}.
@f]

The throughput is lower than in 2D because the computation of the
$\mathcal L_h$ term is more expensive. This is due to over-integration with
`degree+2` points and the larger fraction of face integrals (worse
volume-to-surface ratio) with more expensive flux computations. If we only
consider the inverse mass matrix and vector update part, we record a
throughput of 4857 MDoFs/s for the 2D case of the isentropic vortex with 37.7
million unknowns, whereas the 3D case runs at 4535 MDoFs/s. The performance is
similar because both cases are in fact limited by the memory bandwidth.

If we go to four levels of global refinement, we need to increase the number
of processes to fit everything in memory, as the computation needs around 350
GB of RAM in this case. Also, the time it takes to complete 35k time steps
becomes more tolerable by adding additional resources. We therefore use 6
nodes with 40 cores each, resulting in a computation with 240 MPI processes:
@code
Running with 240 MPI processes
Vectorization over 8 doubles = 512 bits (AVX512), VECTORIZATION_LEVEL=3
Number of degrees of freedom: 1,769,472,000 ( = 5 [vars] x 1,638,400 [cells] x 216 [dofs/cell/var] )

...

Time: 1.95 , dt: 5.6e-05 , norm rho: 0.01129 , rho * u: 0.0306 , energy: 0.04086
Time: 2 , dt: 5.6e-05 , norm rho: 0.01189 , rho * u: 0.03145 , energy: 0.04417

+-------------------------------------------+------------------+------------+------------------+
| Total wallclock time elapsed | 5.396e+04s 151 | 5.396e+04s | 5.396e+04s 0 |
| | | |
| Section | no. calls | min time rank | avg time | max time rank |
+-------------------------------------------+------------------+------------+------------------+
| compute errors | 41 | 2.632s 178 | 7.221s | 16.56s 0 |
| compute transport speed | 7072 | 714s 193 | 1553s | 3351s 0 |
| output | 35350 | 8065s 176 | 8070s | 8079s 0 |
| rk time step total | 35350 | 4.25e+04s 0 | 4.43e+04s | 4.515e+04s 193 |
| rk_stage - integrals L_h | 176750 | 2.936e+04s 134 | 3.222e+04s | 3.67e+04s 99 |
| rk_stage - inv mass + vec upd | 176750 | 7004s 99 | 1.207e+04s | 1.55e+04s 132 |
+-------------------------------------------+------------------+------------+------------------+
@endcode

Possibilities for extensions

The code presented here straightforwardly extends to adaptive meshes, given
appropriate indicators for setting the refinement flags. Large-scale
adaptivity of a similar solver in the context of the acoustic wave equation
has been achieved by the exwave project. However, in the present context the
effects of adaptivity are often limited to early times and regions close to
the origin of the sound waves, as the waves eventually reflect and
diffract. This leads to steep gradients all over the place, similar to
turbulent flow.

Another topic that we did not discuss in the results section is a comparison
of different time integration schemes. The program provides four variants of
low-storage Runge--Kutta integrators that each have slightly different
accuracy and stability behavior. Among the schemes implemented here, the
higher-order ones provide additional accuracy but come with slightly lower
efficiency in terms of step size per stage before they violate the CFL
condition. An interesting extension would be to compare the low-storage
variants proposed here with standard Runge--Kutta integrators, or to use
vector operations that are run separately from the mass matrix operation and
compare performance (a sketch of how one would switch between the schemes is
given below).
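Switching between the four schemes amounts to changing the scheme selection
near the top of the program. The enumeration and constant shown here are
recollections of how step-67 names them and should be checked against the
actual source; the point of the sketch is merely that a comparison requires
recompiling and rerunning for each choice:
@code
// Assumed declaration in step-67 (verify names in the actual program):
enum LowStorageRungeKuttaScheme
{
  stage_3_order_3,
  stage_5_order_4,
  stage_7_order_4,
  stage_9_order_5,
};

// Pick one of the four low-storage schemes for the next run:
constexpr LowStorageRungeKuttaScheme lsrk_scheme = stage_5_order_4;
@endcode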

More advanced numerical flux functions and skew-symmetric formulations

As mentioned in the introduction, the modified Lax--Friedrichs flux and the
HLL flux employed in this program are only two variants of a large body of
numerical fluxes available in the literature on the Euler equations. One
example is the HLLC (Harten-Lax-van Leer-Contact) flux, which adds the effect
of rarefaction waves missing in the HLL flux; another is the Roe flux. As also
mentioned in the introduction, the effect of numerical fluxes on high-order DG
schemes is debatable.

A related improvement to increase the stability of the solver is to also
consider the spatial integral terms. A shortcoming of the rather naive
implementation used above is the fact that the energy conservation of the
original Euler equations (in the absence of shocks) only holds up to a
discretization error. If the solution is under-resolved, the discretization
error can give rise to an increase in the numerical energy and eventually
render the discretization unstable. This is because of the inexact numerical
integration of the terms in the Euler equations, which both contain rational
nonlinearities and higher-degree content from curved cells. A way out of this
dilemma is offered by so-called skew-symmetric formulations, see e.g. Gassner
(2013) for a simple variant. Skew symmetry means that switching the role of
the solution $\mathbf{w}$ and the test functions $\mathbf{v}$ in the weak form
produces the exact negative of the original quantity, apart from some boundary
terms. In the discrete setting, the challenge is to keep this skew symmetry
also when the integrals are only computed approximately (in the continuous
case, skew symmetry is a consequence of integration by parts). Skew-symmetric
numerical schemes balance spatial derivatives in the conservative form
$(\nabla \mathbf v, \mathbf{F}(\mathbf w))_{K}$ with contributions in the
convective form $(\mathbf v, \tilde{\mathbf{F}}(\mathbf w)\nabla
\mathbf{w})_{K}$ for some $\tilde{\mathbf{F}}$. The precise terms depend on
the equation and the integration formula, and can in some cases be understood
in terms of special skew-symmetric finite difference schemes.
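To make the notion of skew symmetry concrete in the simplest possible setting
(this is only an illustration, not the formulation one would use for the Euler
equations), consider linear advection with a constant velocity $\mathbf c$ and
the cell-wise form obtained by averaging the conservative and convective
variants of the advection term,
@f[
a_K(u, v) = \frac{1}{2}\left[ -\left(\nabla v, \mathbf{c}\, u\right)_K
          + \left(v, \mathbf{c} \cdot \nabla u\right)_K \right].
@f]
Exchanging the roles of $u$ and $v$ gives $a_K(v,u) = -a_K(u,v)$ exactly,
because the two volume integrals merely swap places up to the sign, and this
remains true for any quadrature formula applied consistently to both
terms. Choosing $v = u$ then shows $a_K(u,u) = 0$: the volume terms cannot
produce energy, independently of integration errors, so the energy balance is
governed by the boundary (flux) terms alone. Skew-symmetric formulations for
the Euler equations aim at the same mechanism with the nonlinear terms
$\mathbf{F}(\mathbf w)$ and $\tilde{\mathbf{F}}(\mathbf w)$ mentioned above.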

Equipping the code for supersonic calculations

As mentioned in the introduction, the solution to the Euler equations develops
shocks as the Mach number increases, which require additional mechanisms to
stabilize the scheme, e.g. in the form of limiters. The main challenge besides
actually implementing the limiter or artificial viscosity approach would be to
load-balance the computations, as the additional computations involved for
limiting the oscillations in troubled cells would make them heavier than the
plain DG cells without limiting. Furthermore, additional numerical fluxes that
better cope with the discontinuities would also be an option.

One ingredient that is also necessary for supersonic flows is an appropriate
set of boundary conditions. As opposed to the subsonic outflow boundaries
discussed in the introduction and implemented in the program, all
characteristics are outgoing for supersonic outflow boundaries, so we do not
want to prescribe any external data,
@f[
\mathbf{w}^+ = \mathbf{w}^- = \begin{pmatrix} \rho^-\\
(\rho \mathbf u)^- \\ E^-\end{pmatrix} \quad
 \text{(Neumann)}.
@f]

In the code, we would simply add the additional statement
@code
else if (supersonic_outflow_boundaries.find(boundary_id) !=
         supersonic_outflow_boundaries.end())
  {
    w_p        = w_m;
    at_outflow = true;
  }
@endcode
in the `local_apply_boundary_face()` function.
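The set `supersonic_outflow_boundaries` used in this snippet does not exist in
step-67 yet; it would have to be added as a member of the operator class
together with a setter, analogous to how the program registers its other
boundary types. A possible sketch (the member and function names here are
assumptions for illustration):
@code
// Sketch only: store supersonic outflow boundary ids in the operator class,
// mirroring the existing treatment of inflow, wall, and subsonic outflow
// boundaries.
std::set<types::boundary_id> supersonic_outflow_boundaries;

void set_supersonic_outflow_boundary(const types::boundary_id boundary_id)
{
  // One could additionally assert that the id has not already been claimed
  // by one of the other boundary types.
  supersonic_outflow_boundaries.insert(boundary_id);
}
@endcode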

Extension to the linearized Euler equations

When the interest in an Euler solution is mostly the propagation of sound
waves, it often makes sense to linearize the Euler equations around a
background state, i.e., a given density, velocity and energy (or pressure)
field, and only compute the change against these fields. This is the setting
of the wide field of aeroacoustics. Even though the resolution requirements
are sometimes considerably reduced, the implementation gets somewhat more
complicated as the linearization gives rise to additional terms. From a code
perspective, the operator evaluation would also need to be equipped with the
state to linearize against. This information can be provided either by
analytical functions (that are evaluated at the positions of the quadrature
points) or by a vector similar to the solution. Based on that vector, we would
create an additional FEEvaluation object to read from it and provide the
values of the field at quadrature points, as sketched at the end of this
subsection. If the background velocity is zero and the density is constant,
the linearized Euler equations further simplify to the acoustic wave equation.

A challenge in the context of sound propagation is often the definition of
boundary conditions, as the computational domain needs to be of finite size,
whereas the actual simulation often spans an infinite (or at least much
larger) physical domain. Conventional Dirichlet or Neumann boundary conditions
give rise to reflections of the sound waves that eventually propagate back to
the region of interest and spoil the solution. Therefore, various variants of
non-reflecting boundary conditions or sponge layers, often in the form of
perfectly matched layers -- where the solution is damped without reflection --
are common.
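A minimal sketch of reading such a background state inside the cell loop,
assuming it is stored in a vector `background` with the same layout as the
solution vector: the FEEvaluation setup follows the pattern of the cell
integrals of step-67, but the loop skeleton and names here are illustrative
rather than copied from the program.
@code
// Sketch only: evaluate both the perturbation and the background state at
// the quadrature points of each cell batch.
FEEvaluation<dim, degree, n_points_1d, dim + 2, Number> phi(data);
FEEvaluation<dim, degree, n_points_1d, dim + 2, Number> phi_background(data);

for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
  {
    phi.reinit(cell);
    phi.gather_evaluate(src, true, false); // values only
    phi_background.reinit(cell);
    phi_background.gather_evaluate(background, true, false);

    for (unsigned int q = 0; q < phi.n_q_points; ++q)
      {
        const auto w_q  = phi.get_value(q);            // perturbation state
        const auto w0_q = phi_background.get_value(q); // state to linearize about
        // ... compute the linearized flux from w_q and w0_q, then
        //     submit values/gradients as in the nonlinear operator ...
      }
    // integrate and write back as in the nonlinear operator
  }
@endcode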

Extension to the compressible Navier-Stokes equation

The solver presented in this tutorial program can also be extended to the
compressible Navier--Stokes equations by adding viscous terms, as described in
@cite FehnWallKronbichler2019. To keep as much of the performance obtained
here as possible despite the additional cost of the elliptic terms, e.g. when
discretized via an interior penalty method, one can switch the basis from
FE_DGQ to FE_DGQHermite as in the step-59 tutorial program.
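The switch of the polynomial basis itself is a small change where the finite
element is constructed; a sketch (the construction of the vector-valued
element via FESystem is shown only roughly here, so compare with how step-67
actually builds its element):
@code
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/fe_system.h>

// Nodal Lagrange basis as used by step-67 (roughly):
const FESystem<dim> fe_nodal(FE_DGQ<dim>(fe_degree), dim + 2);

// Hermite-like basis of step-59: values and first derivatives on a face are
// determined by only a small subset of the cell's coefficients, which reduces
// the data access in the face integrals of interior-penalty viscous terms.
const FESystem<dim> fe_hermite(FE_DGQHermite<dim>(fe_degree), dim + 2);
@endcode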