From: kronbichler
Date: Wed, 9 Sep 2009 14:38:49 +0000 (+0000)
Subject: Gone through the text and fixed some small things.
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=56d0e9e516a4d3b660f18db96c25dda60f4b8f9f;p=dealii-svn.git

Gone through the text and fixed some small things.

git-svn-id: https://svn.dealii.org/trunk@19420 0785d39b-7218-0410-832d-ea1e28bc413d
---

diff --git a/deal.II/examples/step-37/doc/intro.dox b/deal.II/examples/step-37/doc/intro.dox
index bf8a2eb2f5..0dc7a4d166 100644
--- a/deal.II/examples/step-37/doc/intro.dox
+++ b/deal.II/examples/step-37/doc/intro.dox
@@ -64,23 +64,23 @@ operations, a multiplication and an addition. If our matrix has one billion
entries, a matrix-vector product consists of two billion floating point
operations, 2 GFLOP. One core of a processor of 2009's standard (Intel's
'Nehalem' processor, 3 GHz) has a peak performance of about 12 billion
-operations per second, 12 GFLOPS. Now we might be tempted to hope for
+operations per second, 12 GFLOP/s. Now we might be tempted to hope for
getting a matrix-vector product done in about one sixth of a second on such
a machine. Remember, though, that we have to get 12 GB of data through the
processor in order to form the matrix-vector product. Looking again at which
hardware is available in 2009, we will hardly get more than 10 GB/s of data
read. This means that the matrix-vector product will take more than one
-second to complete, giving a rate of 1.7 GFLOPS at the best. This is quite
-far away from the theoretical peak performance of 12 GLOPS.
+second to complete, giving a rate of 1.7 GFLOP/s at best. This is quite
+far away from the theoretical peak performance of 12 GFLOP/s.

What makes things worse is that today's processors have multiple cores, and
multiple cores have to compete for memory bandwidth. Imagine we have 8 cores
-available with a theoretical peak performance of 96 GFLOPs. However, they
+available with a theoretical peak performance of 96 GFLOP/s. However, they
can only get about 35 GB/s of data. For our matrix-vector product, we would
-get a performance of about 6 GFLOPS, which is at nightmarish 6 per cent of
-the processor's peak performance. In practice, things are likely even
-worse since, for example, a substantial amount of
-vector entries needs to be read, too.
+get a performance of about 6 GFLOP/s, which is a nightmarish 6 per cent of
+the system's peak performance. In practice, things are likely even worse
+since, for example, a substantial number of vector entries needs to be
+fetched from main memory, too.

Things won't get better in the future, rather worse: Memory bandwidth will
most likely continue to grow more slowly than the number of cores
@@ -211,9 +211,9 @@ forming a product of the form
@f{eqnarray*}
A_\mathrm{cell}\cdot x_\mathrm{cell} = B D B^T \cdot x_\mathrm{cell},
@f}
-one should never form the matrix-matrix products, but rather by
-multiplying with the vector from right to left. To put this into code, we can
-write:
+one should never form the matrix-matrix products, but rather multiply
+with the vector from right to left so that only matrix-vector products are
+formed. To put this into code, we can write:
@code
...
for (; cell!=endc; ++cell)
...
@endcode
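To see why the right-to-left order pays off, consider a minimal
stand-alone sketch of the product $A_\mathrm{cell}\cdot x_\mathrm{cell} =
B(D(B^T x_\mathrm{cell}))$; this only illustrates the evaluation order
discussed in the hunk above, with invented names (cell_product, B, D), and
is neither part of the patch nor of deal.II:
@code
#include <vector>

                                 // Sketch: apply A_cell = B D B^T to a
                                 // vector using only matrix-vector
                                 // products. B is an n x m matrix stored
                                 // row-wise, D a diagonal m x m matrix
                                 // stored as its m diagonal entries.
std::vector<double>
cell_product (const std::vector<std::vector<double> > &B,
              const std::vector<double>               &D,
              const std::vector<double>               &x)
{
  const unsigned int n = B.size();
  const unsigned int m = D.size();

                                 // temp = B^T * x, costing O(n*m)
  std::vector<double> temp (m, 0.);
  for (unsigned int q=0; q<m; ++q)
    for (unsigned int i=0; i<n; ++i)
      temp[q] += B[i][q] * x[i];

                                 // temp = D * temp, a diagonal scaling
                                 // costing only O(m)
  for (unsigned int q=0; q<m; ++q)
    temp[q] *= D[q];

                                 // result = B * temp, again O(n*m)
  std::vector<double> result (n, 0.);
  for (unsigned int i=0; i<n; ++i)
    for (unsigned int q=0; q<m; ++q)
      result[i] += B[i][q] * temp[q];

  return result;
}
@endcode
Forming the matrix B D B^T first would instead cost on the order of n*n*m
operations per cell, which is exactly what the text advises against.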
@@ -250,36 +250,37 @@ write:
@endcode

This improves the situation a lot and reduces the complexity of the product
-considerably. In fact, all the remainder is just to make a slightly more
-clever use of data in order to gain some extra speed. It does not change the
-code structure, though.
+from something like $\mathcal{O}(\mathrm{dofs\_per\_cell}^3)$ to
+$\mathcal{O}(\mathrm{dofs\_per\_cell}^2)$. In fact, all the remainder is
+just to make a slightly more clever use of data in order to gain some extra
+speed. It does not change the code structure, though.

This removed the three nested loops in the calculation of the local matrix
-(note that the loop over d is a not really a loop, rather two or
-three operations). What happens is as follows: We first transform the vector
-of values on the local dofs to a vector of gradients on the quadrature
+(here the loop over d is not really a loop, rather two or three
+operations). What happens is as follows: We first transform the vector of
+values on the local dofs to a vector of gradients on the quadrature
points. In the second loop, we multiply these gradients by the integration
weight. The third loop applies the second gradient (in transposed form), so
that we get back to a vector of (Laplacian) values on the cell dofs.

If one would implement the code above, one would soon realize that the
-operations done by the call fe_values.reinit(cell) takes about
+operations done by the call fe_values.reinit(cell) take about
as much time as the other steps together (at least if the mesh is
unstructured; deal.II can recognize that the gradients are often unchanged
-on structured meshes and does nothing in such a case). That is certainly not
-ideal and we would like to do smarter than this. What the reinit function
-actually does is to calculate the gradient in real space by transforming the
-gradient on the reference cell using the Jacobian of the transformation from
-real to unit cell. This is done for each local basis function and for each
-quadrature point. The Jacobian does not depend on the basis function, but it
-is different on different quadrature points in general. The trick is now to
-first apply the operation that leads us to temp_vector only
-with the gradient on the reference cell. That transforms the vector of
-values on the local dofs to a vector of gradients on the quadrature
-points. There, we first apply the Jacobian that we factored out from the
-gradient, then we apply the weights of the quadrature, and we apply with the
-transposed Jacobian for preparing the third loop which agains uses the
-gradients on the unit cell.
+on structured meshes). That is certainly not ideal and we would like to be
+smarter than this. What the reinit function does is to calculate the
+gradient in real space by transforming the gradient on the reference cell
+using the Jacobian of the transformation from real to unit cell. This is
+done for each basis function on the cell and for each quadrature point. The
+Jacobian does not depend on the basis function, but it is in general
+different at different quadrature points. The trick is now to factor out the
+Jacobian transformation and first apply the operation that leads us to
+temp_vector only with the gradient on the reference cell. That
+transforms the vector of values on the local dofs to a vector of gradients
+on the quadrature points. There, we first apply the Jacobian that we
+factored out from the gradient, then we apply the weights of the quadrature,
+and we apply the transposed Jacobian to prepare for the third loop, which
+again uses the gradients on the unit cell.
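To make the factoring idea more tangible, here is a small stand-alone
sketch of the work done on a single quadrature point in 2D; the names
(transform_gradient, jacobian_inverse, JxW, ref_gradient) are made up for
illustration and this is not the deal.II implementation:
@code
#include <array>

typedef std::array<std::array<double,2>,2> Tensor2;
typedef std::array<double,2>               Grad;

                                 // Per-quadrature-point operation: apply
                                 // the (inverse) Jacobian that was
                                 // factored out of the real-space
                                 // gradient, multiply by the quadrature
                                 // weight times the Jacobian determinant,
                                 // and apply the transposed Jacobian so
                                 // that the third loop can again work
                                 // with unit-cell gradients.
Grad
transform_gradient (const Tensor2 &jacobian_inverse,
                    const double   JxW,
                    const Grad    &ref_gradient)
{
  Grad temp;
  for (unsigned int d=0; d<2; ++d)
    temp[d] = JxW * (jacobian_inverse[d][0] * ref_gradient[0] +
                     jacobian_inverse[d][1] * ref_gradient[1]);

  Grad result;
  for (unsigned int d=0; d<2; ++d)
    result[d] = (jacobian_inverse[0][d] * temp[0] +
                 jacobian_inverse[1][d] * temp[1]);
  return result;
}
@endcode
Since the Jacobian is the same for all basis functions on a given
quadrature point, this transformation needs to be applied only once per
quadrature point rather than once per basis function and quadrature point,
which is where the savings over fe_values.reinit(cell) come from.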
Let's see how we can implement this:
@code
@@ -374,8 +375,8 @@ in 2D, a bit more in 3D).
The above is, in essence, what happens in the code below and if you have
difficulties in understanding the implementation, you should try to first
-understand what happens in the code above. There are a few more points done
-there that make the implementation even more efficient, namely:
+understand what happens in the code above. In the actual implementation,
+a few more optimizations are applied to make it even more efficient, namely:

An additional gain with this implementation is that we do not have to build
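The "code below" that this hunk refers to is not part of the diff. As a
rough orientation only, the overall matrix-free product has the shape of
the following stand-alone sketch, in which all types and helpers (Cell,
apply_cell_kernel, local_dof_indices) are invented and not the deal.II
classes:
@code
#include <vector>

                                 // Invented illustration of a matrix-free
                                 // product y = A*x: no global matrix is
                                 // ever built; each cell applies its local
                                 // operation on the fly.
struct Cell
{
  std::vector<unsigned int> local_dof_indices;
};

                                 // Stand-in for the three-loop cell kernel
                                 // discussed above (values -> gradients ->
                                 // weighted gradients -> values); here
                                 // simply the identity.
std::vector<double>
apply_cell_kernel (const Cell &, const std::vector<double> &x_cell)
{
  return x_cell;
}

std::vector<double>
vmult (const std::vector<Cell> &cells, const std::vector<double> &x)
{
  std::vector<double> y (x.size(), 0.);
  for (unsigned int c=0; c<cells.size(); ++c)
    {
                                 // gather local vector values
      std::vector<double> x_cell;
      for (unsigned int i=0; i<cells[c].local_dof_indices.size(); ++i)
        x_cell.push_back (x[cells[c].local_dof_indices[i]]);

                                 // apply the local operation
      const std::vector<double> y_cell = apply_cell_kernel (cells[c], x_cell);

                                 // scatter-add into the global result
      for (unsigned int i=0; i<y_cell.size(); ++i)
        y[cells[c].local_dof_indices[i]] += y_cell[i];
    }
  return y;
}
@endcode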