From: Wolfgang Bangerth
Date: Wed, 14 Aug 2019 23:26:10 +0000 (-0600)
Subject: Avoid a couple FP subtractions.
X-Git-Tag: v9.2.0-rc1~1221^2
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=b0edf302ac768feded941239848e97818eb0ae61;p=dealii.git

Avoid a couple FP subtractions.

By noting that the existing code performs dim subtractions of terms that
are each a product of two values, we can reorder things so that we first
accumulate the products (which is a dot product) and then subtract the
result. This should allow for some vectorization. The performance gain is
almost certainly negligible, but the change makes the code marginally
easier to read.

The reason the indices involved here allow for this is that
'jacobian_pushed_forward_grads[i]' happens to be a Tensor<3,dim> and
'shape_gradients[k][i]' is a Tensor<1,dim>. The types are such that their
product is in fact equivalent to the summation over the last index that
was written out explicitly before.
---

diff --git a/include/deal.II/fe/fe_poly.templates.h b/include/deal.II/fe/fe_poly.templates.h
index 2d0938665e..867fa10c73 100644
--- a/include/deal.II/fe/fe_poly.templates.h
+++ b/include/deal.II/fe/fe_poly.templates.h
@@ -268,10 +268,9 @@ FE_Poly::fill_fe_values(
       for (unsigned int k = 0; k < this->dofs_per_cell; ++k)
         for (unsigned int i = 0; i < quadrature.size(); ++i)
-          for (unsigned int j = 0; j < spacedim; ++j)
-            output_data.shape_hessians[k][i] -=
-              mapping_data.jacobian_pushed_forward_grads[i][j] *
-              output_data.shape_gradients[k][i][j];
+          output_data.shape_hessians[k][i] -=
+            output_data.shape_gradients[k][i] *
+            mapping_data.jacobian_pushed_forward_grads[i];
     }

   if (flags & update_3rd_derivatives &&
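
For illustration, here is a minimal standalone sketch (not part of the patch)
of why the single contraction is equivalent to the explicit loop. It assumes
only deal.II's Tensor class and its operator*, which contracts the last index
of the first operand with the first index of the second; the variable names
'grad', 'pf_grad', 'hessian_loop', and 'hessian_contracted' are made up for
this example and do not appear in the library.

#include <deal.II/base/tensor.h>

int main()
{
  constexpr int dim = 3;

  // Stand-ins for output_data.shape_gradients[k][i] and
  // mapping_data.jacobian_pushed_forward_grads[i], respectively.
  dealii::Tensor<1, dim> grad;
  dealii::Tensor<3, dim> pf_grad;

  // ...fill 'grad' and 'pf_grad' with some values...

  // Old form: subtract one rank-2 term per component j.
  dealii::Tensor<2, dim> hessian_loop;
  for (int j = 0; j < dim; ++j)
    hessian_loop -= pf_grad[j] * grad[j];

  // New form: a single contraction. operator* contracts the (only) index of
  // the rank-1 tensor with the first index of the rank-3 tensor, yielding a
  // rank-2 tensor, i.e. the same sum over j as above.
  dealii::Tensor<2, dim> hessian_contracted;
  hessian_contracted -= grad * pf_grad;

  // Up to floating-point rounding (the summation order may differ),
  // hessian_loop and hessian_contracted agree.
  return 0;
}

The contracted form hands the whole sum to one operator* call, which is what
gives the compiler a chance to vectorize it, as noted in the commit message.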