This fixes a bug uncovered by our recent base::Tensor() cleanup: the
intermediate tensor type used for the velocity gradient in the w()
function erroneously casts the result to a Tensor<2, dim, double>,
which, for example with automatic differentiation, strips the
underlying AD type:
/srv/temp/testsuite-d8F1wK0J/dealii/include/deal.II/base/tensor.h:1316:13: error: cannot convert 'const value_type' (aka 'const Sacado::Fad::Exp::GeneralFad<Sacado::Fad::Exp::DynamicStorage<double>>') to 'value_type' (aka 'double') without a conversion operator
: values{{value_type(initializer[indices])...}}
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Passes all kinematics/physics_functions tests in the testsuite
Fixes #16517
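
To illustrate the mechanism, here is a minimal sketch (not part of the
patch; the ADNumber typedef and the choice of dim are placeholder
assumptions):

#include <deal.II/base/tensor.h>
#include <Sacado.hpp>

using ADNumber = Sacado::Fad::DFad<double>;

int main()
{
  constexpr int dim = 3;
  dealii::Tensor<2, dim, ADNumber> grad_v_ad; // e.g., the result of l(F, dF_dt)

  // Tensor<2, dim> defaults its Number template parameter to double, so
  // the following would invoke Tensor's element-wise converting
  // constructor and attempt an ADNumber -> double conversion, which is
  // exactly the error quoted above:
  //
  //   const dealii::Tensor<2, dim> grad_v = grad_v_ad;
  //
  // 'auto' instead deduces Tensor<2, dim, ADNumber> and preserves the
  // derivative information:
  const auto grad_v = grad_v_ad;
  (void)grad_v;
}

The relevant hunk in w():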
{
// This could be implemented as w = l-d, but that would mean computing "l"
// a second time.
- const Tensor<2, dim> grad_v = l(F, dF_dt);
+ const auto grad_v = l(F, dF_dt);
return internal::NumberType<Number>::value(0.5) *
(grad_v - transpose(grad_v));
}
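
As a quick sanity check (a sketch in the spirit of the
kinematics/physics_functions tests, not one of them; the header paths
and the AD type chosen here are assumptions), w() can now be
instantiated directly with a Sacado number type:

#include <deal.II/base/tensor.h>
#include <deal.II/differentiation/ad/sacado_product_types.h>
#include <deal.II/physics/elasticity/kinematics.h>
#include <Sacado.hpp>

using ADNumber = Sacado::Fad::DFad<double>;

int main()
{
  constexpr int dim = 3;

  dealii::Tensor<2, dim, ADNumber> F, dF_dt;
  for (unsigned int i = 0; i < dim; ++i)
    F[i][i] = 1.0;   // deformation gradient: start from the identity
  dF_dt[0][1] = 1.0; // an arbitrary rate of change

  // Before the fix this failed to compile for Sacado's expression-based
  // types (see the error above); now the AD type is carried through the
  // intermediate velocity gradient:
  const dealii::Tensor<2, dim, ADNumber> spin =
    dealii::Physics::Elasticity::Kinematics::w(F, dF_dt);
  (void)spin;
}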