- Implemented a new method add_and_dot for all vector classes (deal.II's own
vectors, block vectors, and the PETSc and Trilinos vectors); see the usage
sketch after this list.
- Specialized implementation for deal.II's own vectors that performs add_and_dot
in a single sweep over the data, via a new AddAndDot struct that is passed to
the accumulate function. This also extends the other structs for inner products
and norms by one (unused) argument.
- New tests for this feature.
- Added a new do_vectorized operation to all accumulation operations that uses
VectorizedArray<double/float> arithmetic for the sums. This reduces the number
of instructions and thus increases performance when memory transfer is not the
limiting factor. The vectorized path is called from the regular part of the
accumulate function, which now sits in its own method 'accumulate_regular'
(the part of the length divisible by 32). Unfortunately, this cannot be
realized with the recently introduced OPENMP_SIMD pragma because that leads to
non-reproducible results when the vector memory starts at different memory
locations. VectorizedArray is only supported on some platforms, so the
vectorized path may not be available everywhere; however, good code is
generated unconditionally (except on old machines from around 2004-2008, where
unaligned loads are slow even from aligned memory, but those are no longer
relevant).
- To implement vectorized pow operations, I also added a std::pow function for
VectorizedArray.
- The memory for parallel::distributed::Vector is now allocated on 64-byte
boundaries, just like the memory of Vector<Number>.
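
A minimal usage sketch (illustration only, not part of the patch; the helper
function residual_update_sketch is hypothetical): the combined call returns the
same value as a separate add() followed by an inner product, but sweeps over
the updated vector only once.

    #include <deal.II/lac/vector.h>

    // Equivalent to: x.add(factor, v); return x * w;  -- but with a single
    // pass over x, v, and w, which matters when the operation is memory bound.
    double residual_update_sketch (dealii::Vector<double>       &x,
                                   const dealii::Vector<double> &v,
                                   const dealii::Vector<double> &w,
                                   const double                  factor)
    {
      return x.add_and_dot(factor, v, w);
    }

The solver changes below use exactly this pattern, e.g.
res = std::sqrt(r.add_and_dot(-alpha, v, r)), to fold the residual update and
the norm computation into one pass.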
<h3>Specific improvements</h3>
<ol>
+ <li> New: The vector classes in deal.II (including the Trilinos and PETSc
+ wrappers) now have a new method x.add_and_dot(factor,v,w) which performs
+ x.add(factor,v) and a subsequent inner product of x with another vector
+ w. This operation occurs in some iterative solvers; combining the two steps
+ reduces memory transfer and thus increases performance.
+ <br>
+ (Martin Kronbichler, 2014/10/27)
+ </li>
+
+ <li> Improved: Inner products and norms on deal.II's own vector classes now
+ use vectorization through VectorizedArray if available.
+ <br>
+ (Martin Kronbichler, 2014/10/27)
+ </li>
+
<li> Changed: PETSc and Trilinos vectors with ghost entries can now be reset to zero
using = 0.0;
<br>
+ /**
+ * Raises the given number @p x to the power @p p for a vectorized data
+ * field. The result is returned as vectorized array in the form
+ * <tt>{pow(x[0],p), pow(x[1],p), ..., pow(x[n_array_elements-1],p)}</tt>.
+ *
+ * @relates VectorizedArray
+ */
+ template <typename Number>
+ inline
+ ::dealii::VectorizedArray<Number>
+ pow (const ::dealii::VectorizedArray<Number> &x,
+ const Number p)
+ {
+ Number values[::dealii::VectorizedArray<Number>::n_array_elements];
+ for (unsigned int i=0; i<dealii::VectorizedArray<Number>::n_array_elements; ++i)
+ values[i] = std::pow(x[i], p);
+ ::dealii::VectorizedArray<Number> out;
+ out.load(&values[0]);
+ return out;
+ }
+
+
+
/**
* Computes the absolute value (modulus) of a vectorized data field. The
* result is returned as vectorized array in the form <tt>{abs(x[0]),
*/
real_type linfty_norm () const;
+ /**
+ * Performs a combined operation of a vector addition and a subsequent
+ * inner product, returning the value of the inner product. In other words,
+ * the result of this function is the same as if the user called
+ * @code
+ * this->add(a, V);
+ * return_value = *this * W;
+ * @endcode
+ *
+ * The reason this function exists is that this operation involves less
+ * memory transfer than calling the two functions separately on deal.II's
+ * vector classes (Vector<Number> and parallel::distributed::Vector<double>).
+ * This method only needs to load three vectors, @p this, @p V, @p W,
+ * whereas calling the separate methods means loading the calling vector @p this
+ * twice. Since most vector operations are memory transfer limited, this
+ * reduces the time by 25\% (or 50\% if @p W equals @p this).
+ */
+ value_type add_and_dot (const value_type a,
+ const BlockVectorBase &V,
+ const BlockVectorBase &W);
+
/**
* Returns true if the given global index is
* in the local range of this processor.
+template <class VectorType>
+typename BlockVectorBase<VectorType>::value_type
+BlockVectorBase<VectorType>::
+add_and_dot (const typename BlockVectorBase<VectorType>::value_type a,
+ const BlockVectorBase<VectorType> &V,
+ const BlockVectorBase<VectorType> &W)
+{
+ AssertDimension (n_blocks(), V.n_blocks());
+ AssertDimension (n_blocks(), W.n_blocks());
+
+ value_type sum = 0.;
+ for (size_type i=0; i<n_blocks(); ++i)
+ sum += components[i].add_and_dot(a, V.components[i], W.components[i]);
+
+ return sum;
+}
+
+
+
template <class VectorType>
BlockVectorBase<VectorType> &
BlockVectorBase<VectorType>::operator += (const BlockVectorBase<VectorType> &v)
*/
real_type linfty_norm () const;
+ /**
+ * Performs a combined operation of a vector addition and a subsequent
+ * inner product, returning the value of the inner product. In other
+ * words, the result of this function is the same as if the user called
+ * @code
+ * this->add(a, V);
+ * return_value = *this * W;
+ * @endcode
+ *
+ * The reason this function exists is that this operation involves less
+ * memory transfer than calling the two functions separately. This
+ * method only needs to load three vectors, @p this, @p V, @p W, whereas
+ * calling the separate methods means loading the calling vector @p this
+ * twice. Since most vector operations are memory transfer limited, this
+ * reduces the time by 25\% (or 50\% if @p W equals @p this).
+ */
+ Number add_and_dot (const Number a,
+ const BlockVector<Number> &V,
+ const BlockVector<Number> &W);
+
/**
* Scale each element of the vector by the given factor.
*
template <typename Number>
inline
- void BlockVector<Number>::swap (BlockVector<Number> &v)
+ Number
+ BlockVector<Number>::add_and_dot (const Number a,
+ const BlockVector<Number> &V,
+ const BlockVector<Number> &W)
+ {
+ Number local_result = Number();
+ for (unsigned int i=0; i<this->n_blocks(); ++i)
+ local_result += this->block(i).add_and_dot_local(a, V.block(i), W.block(i));
+
+ if (this->block(0).partitioner->n_mpi_processes() > 1)
+ return Utilities::MPI::sum (local_result,
+ this->block(0).partitioner->get_communicator());
+ else
+ return local_result;
+ }
+
+
+
+ template <typename Number>
+ inline
+ void
+ BlockVector<Number>::swap (BlockVector<Number> &v)
{
Assert (this->n_blocks() == v.n_blocks(),
ExcDimensionMismatch(this->n_blocks(), v.n_blocks()));
*/
real_type linfty_norm () const;
+ /**
+ * Performs a combined operation of a vector addition and a subsequent
+ * inner product, returning the value of the inner product. In other
+ * words, the result of this function is the same as if the user called
+ * @code
+ * this->add(a, V);
+ * return_value = *this * W;
+ * @endcode
+ *
+ * The reason this function exists is that this operation involves less
+ * memory transfer than calling the two functions separately. This
+ * method only needs to load three vectors, @p this, @p V, @p W, whereas
+ * calling the separate methods means loading the calling vector @p this
+ * twice. Since most vector operations are memory transfer limited, this
+ * reduces the time by 25\% (or 50\% if @p W equals @p this).
+ */
+ Number add_and_dot (const Number a,
+ const Vector<Number> &V,
+ const Vector<Number> &W);
+
/**
* Returns the global size of the vector, equal to the sum of the number
* of locally owned indices among all the processors.
*/
real_type linfty_norm_local () const;
+ /**
+ * Local part of the addition followed by an inner product of two
+ * vectors.
+ */
+ Number add_and_dot_local (const Number a,
+ const Vector<Number> &V,
+ const Vector<Number> &W);
+
/**
* Shared pointer to store the parallel partitioning information. This
* information can be shared between several vectors that have the same
inline
Vector<Number>::~Vector ()
{
- if (val != 0)
- delete[] val;
- val = 0;
+ resize_val(0);
if (import_data != 0)
delete[] import_data;
+ template <typename Number>
+ inline
+ Number
+ Vector<Number>::add_and_dot_local(const Number a,
+ const Vector<Number> &V,
+ const Vector<Number> &W)
+ {
+ // on some processors, the size might be zero, which is not allowed by
+ // the dealii::Vector class. Therefore, insert a check here
+ return (partitioner->local_size()>0 ?
+ vector_view.add_and_dot(a, V.vector_view, W.vector_view)
+ : Number());
+ }
+
+
+
+ template <typename Number>
+ inline
+ Number
+ Vector<Number>::add_and_dot (const Number a,
+ const Vector<Number> &V,
+ const Vector<Number> &W)
+ {
+ Number local_result = add_and_dot_local(a, V, W);
+ if (partitioner->n_mpi_processes() > 1)
+ return Utilities::MPI::sum (local_result,
+ partitioner->get_communicator());
+ else
+ return local_result;
+ }
+
+
+
template <typename Number>
inline
typename Vector<Number>::size_type
// ---------------------------------------------------------------------
//
-// Copyright (C) 2011 - 2013 by the deal.II authors
+// Copyright (C) 2011 - 2014 by the deal.II authors
//
// This file is part of the deal.II library.
//
#include <deal.II/lac/petsc_parallel_vector.h>
#include <deal.II/lac/trilinos_vector.h>
+#include <mm_malloc.h>
+
DEAL_II_NAMESPACE_OPEN
Assert (((allocated_size > 0 && val != 0) ||
val == 0), ExcInternalError());
if (val != 0)
- delete [] val;
- val = new Number[new_alloc_size];
+ _mm_free(val);
+ val = static_cast<Number *>(_mm_malloc (sizeof(Number)*new_alloc_size, 64));
allocated_size = new_alloc_size;
}
else if (new_alloc_size == 0)
{
if (val != 0)
- delete [] val;
+ _mm_free(val);
val = 0;
allocated_size = 0;
}
*/
real_type linfty_norm () const;
+ /**
+ * Performs a combined operation of a vector addition and a subsequent
+ * inner product, returning the value of the inner product. In other
+ * words, the result of this function is the same as if the user called
+ * @code
+ * this->add(a, V);
+ * return_value = *this * W;
+ * @endcode
+ *
+ * The reason this function exists is for compatibility with deal.II's own
+ * vector classes which can implement this functionality with less memory
+ * transfer. However, for PETSc vectors such a combined operation is not
+ * natively supported and thus the cost is completely equivalent to
+ * calling the two methods separately.
+ */
+ PetscScalar add_and_dot (const PetscScalar a,
+ const VectorBase &V,
+ const VectorBase &W);
+
/**
* Normalize vector by dividing by the $l_2$-norm of the
* vector. Return the vector norm before normalization.
if (std::fabs(alpha) > 1.e10)
return true;
- r.add(-alpha, v);
+ res = std::sqrt(r.add_and_dot(-alpha, v, r));
// check for early success, see the lac/bicgstab_early testcase as to
// why this is necessary
- if (this->control().check(step, r.l2_norm()) == SolverControl::success)
+ if (this->control().check(step, res) == SolverControl::success)
{
Vx->add(alpha, y);
print_vectors(step, *Vx, r, y);
rhobar = t*r;
omega = rhobar/(t*t);
Vx->add(alpha, y, omega, z);
- r.add(-omega, t);
if (additional_data.exact_residual)
- res = criterion(A, *Vx, *Vb);
+ {
+ r.add(-omega, t);
+ res = criterion(A, *Vx, *Vb);
+ }
else
- res = r.l2_norm();
+ res = std::sqrt(r.add_and_dot(-omega, t, r));
state = this->control().check(step, res);
print_vectors(step, *Vx, r, y);
Assert(alpha != 0., ExcDivideByZero());
alpha = gh/alpha;
- g.add(alpha,h);
x.add(alpha,d);
- res = g.l2_norm();
+ res = std::sqrt(g.add_and_dot(alpha, h, g));
print_vectors(it, x, g, d);
Vector<double> &h,
bool &re_orthogonalize)
{
+ Assert(dim > 0, ExcInternalError());
const unsigned int inner_iteration = dim - 1;
// need initial norm for detection of re-orthogonalization, see below
norm_vv_start = vv.l2_norm();
// Orthogonalization
- for (unsigned int i=0 ; i<dim ; ++i)
- {
- h(i) = vv * orthogonal_vectors[i];
- vv.add(-h(i), orthogonal_vectors[i]);
- };
+ h(0) = vv * orthogonal_vectors[0];
+ for (unsigned int i=1 ; i<dim ; ++i)
+ h(i) = vv.add_and_dot(-h(i-1), orthogonal_vectors[i-1], orthogonal_vectors[i]);
+ double norm_vv = std::sqrt(vv.add_and_dot(-h(dim-1), orthogonal_vectors[dim-1], vv));
// Re-orthogonalization if loss of orthogonality detected. For the test, use
// a strategy discussed in C. T. Kelley, Iterative Methods for Linear and
// previous vectors, which indicates loss of precision.
if (re_orthogonalize == false && inner_iteration % 5 == 4)
{
- const double norm_vv = vv.l2_norm();
if (norm_vv > 10. * norm_vv_start *
std::sqrt(std::numeric_limits<typename VECTOR::value_type>::epsilon()))
return norm_vv;
}
if (re_orthogonalize == true)
- for (unsigned int i=0 ; i<dim ; ++i)
- {
- double htmp = vv * orthogonal_vectors[i];
- h(i) += htmp;
- vv.add(-htmp, orthogonal_vectors[i]);
- }
-
- return vv.l2_norm();
+ {
+ double htmp = vv * orthogonal_vectors[0];
+ h(0) += htmp;
+ for (unsigned int i=1 ; i<dim ; ++i)
+ {
+ htmp = vv.add_and_dot(-htmp, orthogonal_vectors[i-1], orthogonal_vectors[i]);
+ h(i) += htmp;
+ }
+ norm_vv = std::sqrt(vv.add_and_dot(-htmp, orthogonal_vectors[dim-1], vv));
+ }
+
+ return norm_vv;
}
A.vmult(*aux, z[j]);
// Gram-Schmidt
- for (unsigned int i=0; i<=j; ++i)
- {
- H(i,j) = *aux * v[i];
- aux->add(-H(i,j), v[i]);
- }
- H(j+1,j) = a = aux->l2_norm();
+ H(0,j) = *aux * v[0];
+ for (unsigned int i=1; i<=j; ++i)
+ H(i,j) = aux->add_and_dot(-H(i-1,j), v[i-1], v[i]);
+ H(j+1,j) = a = std::sqrt(aux->add_and_dot(-H(j,j), v[j], *aux));
// Compute projected solution
*/
real_type linfty_norm () const;
+ /**
+ * Performs a combined operation of a vector addition and a subsequent
+ * inner product, returning the value of the inner product. In other
+ * words, the result of this function is the same as if the user called
+ * @code
+ * this->add(a, V);
+ * return_value = *this * W;
+ * @endcode
+ *
+ * The reason this function exists is for compatibility with deal.II's own
+ * vector classes which can implement this functionality with less memory
+ * transfer. However, for Trilinos vectors such a combined operation is
+ * not natively supported and thus the cost is completely equivalent to
+ * calling the two methods separately.
+ */
+ TrilinosScalar add_and_dot (const TrilinosScalar a,
+ const VectorBase &V,
+ const VectorBase &W);
+
/**
* Return whether the vector contains only elements with value
* zero. This is a collective operation. This function is expensive, because
+ inline
+ TrilinosScalar
+ VectorBase::add_and_dot (const TrilinosScalar a,
+ const VectorBase &V,
+ const VectorBase &W)
+ {
+ this->add(a, V);
+ return *this * W;
+ }
+
+
+
// inline also scalar products, vector
// additions etc. since they are all
// representable by a single Trilinos
* Maximum absolute value of the elements.
*/
real_type linfty_norm () const;
+
+ /**
+ * Performs a combined operation of a vector addition and a subsequent
+ * inner product, returning the value of the inner product. In other words,
+ * the result of this function is the same as if the user called
+ * @code
+ * this->add(a, V);
+ * return_value = *this * W;
+ * @endcode
+ *
+ * The reason this function exists is that this operation involves less
+ * memory transfer than calling the two functions separately. This method
+ * only needs to load three vectors, @p this, @p V, @p W, whereas calling
+ * the separate methods means loading the calling vector @p this twice. Since
+ * most vector operations are memory transfer limited, this reduces the time
+ * by 25\% (or 50\% if @p W equals @p this).
+ *
+ * @dealiiOperationIsMultithreaded The algorithm uses pairwise
+ * summation with the same order of summation in every run, which gives
+ * fully repeatable results from one run to another.
+ */
+ Number add_and_dot (const Number a,
+ const Vector<Number> &V,
+ const Vector<Number> &W);
+
//@}
#include <deal.II/base/parallel.h>
#include <deal.II/base/thread_management.h>
#include <deal.II/base/multithread_info.h>
+#include <deal.II/base/vectorization.h>
#include <deal.II/lac/vector.h>
#include <deal.II/lac/block_vector.h>
namespace Vector
{
// All sums over all the vector entries (l2-norm, inner product, etc.) are
- // performed with the same code, using a templated operation defined here
+ // performed with the same code, using a templated operation defined
+ // here. There are always two versions defined: a standard one that covers
+ // most cases, and a vectorized one that is only used when both number
+ // types are equal and are either float or double.
template <typename Number, typename Number2>
- struct InnerProd
+ struct Dot
{
Number
- operator() (const Number *&X, const Number2 *&Y, const Number &) const
+ operator() (const Number *&X, const Number2 *&Y, const Number &, Number *&) const
{
return *X++ * Number(numbers::NumberTraits<Number2>::conjugate(*Y++));
}
+
+ VectorizedArray<Number>
+ do_vectorized(const Number *&X, const Number *&Y, const Number &, Number *&) const
+ {
+ VectorizedArray<Number> x, y;
+ x.load(X);
+ y.load(Y);
+ X += VectorizedArray<Number>::n_array_elements;
+ Y += VectorizedArray<Number>::n_array_elements;
+ return x * y;
+ }
};
template <typename Number, typename RealType>
struct Norm2
{
RealType
- operator() (const Number *&X, const Number *&, const RealType &) const
+ operator() (const Number *&X, const Number *&, const RealType &, Number *&) const
{
return numbers::NumberTraits<Number>::abs_square(*X++);
}
+
+ VectorizedArray<Number>
+ do_vectorized(const Number *&X, const Number *&, const Number &, Number *&) const
+ {
+ VectorizedArray<Number> x;
+ x.load(X);
+ X += VectorizedArray<Number>::n_array_elements;
+ return x * x;
+ }
};
template <typename Number, typename RealType>
struct Norm1
{
RealType
- operator() (const Number *&X, const Number *&, const RealType &) const
+ operator() (const Number *&X, const Number *&, const RealType &, Number *&) const
{
return numbers::NumberTraits<Number>::abs(*X++);
}
+
+ VectorizedArray<Number>
+ do_vectorized(const Number *&X, const Number *&, const Number &, Number *&) const
+ {
+ VectorizedArray<Number> x;
+ x.load(X);
+ X += VectorizedArray<Number>::n_array_elements;
+ return std::abs(x);
+ }
};
template <typename Number, typename RealType>
struct NormP
{
RealType
- operator() (const Number *&X, const Number *&, const RealType &p) const
+ operator() (const Number *&X, const Number *&, const RealType &p, Number *&) const
{
return std::pow(numbers::NumberTraits<Number>::abs(*X++), p);
}
+
+ VectorizedArray<Number>
+ do_vectorized(const Number *&X, const Number *&, const Number &p, Number *&) const
+ {
+ VectorizedArray<Number> x;
+ x.load(X);
+ X += VectorizedArray<Number>::n_array_elements;
+ return std::pow(std::abs(x),p);
+ }
};
template <typename Number>
struct MeanValue
{
Number
- operator() (const Number *&X, const Number *&, const Number &) const
+ operator() (const Number *&X, const Number *&, const Number &, Number *&) const
{
return *X++;
}
+
+ VectorizedArray<Number>
+ do_vectorized(const Number *&X, const Number *&, const Number &, Number *&) const
+ {
+ VectorizedArray<Number> x;
+ x.load(X);
+ X += VectorizedArray<Number>::n_array_elements;
+ return x;
+ }
};
+ template <typename Number>
+ struct AddAndDot
+ {
+ Number
+ operator() (const Number *&V, const Number *&W, const Number &a,
+ Number *&X) const
+ {
+ *X += a * (*V++);
+ return *X++ * Number(numbers::NumberTraits<Number>::conjugate(*W++));
+ }
+
+ VectorizedArray<Number>
+ do_vectorized(const Number *&V, const Number *&W, const Number &a,
+ Number *&X) const
+ {
+ VectorizedArray<Number> x, w, v;
+ x.load(X);
+ v.load(V);
+ x += a * v;
+ x.store(X);
+ // may only load from W after storing in X because the pointers might
+ // point to the same memory
+ w.load(W);
+ X += VectorizedArray<Number>::n_array_elements;
+ V += VectorizedArray<Number>::n_array_elements;
+ W += VectorizedArray<Number>::n_array_elements;
+ return x * w;
+ }
+ };
+
+
+
+ // this is the inner working routine for the accumulation loops
+ // below. This is the standard case where the loop bounds are known. We
+ // pulled this function out of the regular accumulate routine because we
+ // might do this thing vectorized (see specialized function below)
+ template <typename Operation, typename Number, typename Number2,
+ typename ResultType, typename size_type>
+ void
+ accumulate_regular(const Operation &op,
+ const Number *&X,
+ const Number2 *&Y,
+ const ResultType power,
+ const size_type n_chunks,
+ Number *&Z,
+ ResultType (&outer_results)[128],
+ internal::bool2type<false>,
+ const unsigned int start_chunk=0)
+ {
+ AssertIndexRange(start_chunk, n_chunks+1);
+ for (size_type i=start_chunk; i<n_chunks; ++i)
+ {
+ ResultType r0 = op(X, Y, power, Z);
+ ResultType r1 = op(X, Y, power, Z);
+ ResultType r2 = op(X, Y, power, Z);
+ ResultType r3 = op(X, Y, power, Z);
+ for (size_type j=1; j<8; ++j)
+ {
+ r0 += op(X, Y, power, Z);
+ r1 += op(X, Y, power, Z);
+ r2 += op(X, Y, power, Z);
+ r3 += op(X, Y, power, Z);
+ }
+ r0 += r1;
+ r2 += r3;
+ outer_results[i] = r0 + r2;
+ }
+ }
+
+
+
+ // this is the inner working routine for the accumulation loops
+ // below. This is the specialized case where the loop bounds are known and
+ // where we can vectorize. In that case, we call the 'do_vectorized'
+ // routine of the operation, which processes several entries at once,
+ // instead of the regular one.
+ template <typename Operation, typename Number, typename size_type>
+ void
+ accumulate_regular(const Operation &op,
+ const Number *&X,
+ const Number *&Y,
+ const Number power,
+ const size_type n_chunks,
+ Number *&Z,
+ Number (&outer_results)[128],
+ internal::bool2type<true>)
+ {
+ for (size_type i=0; i<n_chunks/VectorizedArray<Number>::n_array_elements; ++i)
+ {
+ VectorizedArray<Number> r0 = op.do_vectorized(X, Y, power, Z);
+ VectorizedArray<Number> r1 = op.do_vectorized(X, Y, power, Z);
+ VectorizedArray<Number> r2 = op.do_vectorized(X, Y, power, Z);
+ VectorizedArray<Number> r3 = op.do_vectorized(X, Y, power, Z);
+ for (size_type j=1; j<8; ++j)
+ {
+ r0 += op.do_vectorized(X, Y, power, Z);
+ r1 += op.do_vectorized(X, Y, power, Z);
+ r2 += op.do_vectorized(X, Y, power, Z);
+ r3 += op.do_vectorized(X, Y, power, Z);
+ }
+ r0 += r1;
+ r2 += r3;
+ r0 += r2;
+ r0.store(&outer_results[i*VectorizedArray<Number>::n_array_elements]);
+ }
+
+ // If we are treating a case where the vector length is not divisible by
+ // the vectorization length, we need the other routine for the cleanup work.
+ if (n_chunks % VectorizedArray<Number>::n_array_elements != 0)
+ accumulate_regular(op, X, Y, power, n_chunks, Z, outer_results,
+ internal::bool2type<false>(),
+ (n_chunks/VectorizedArray<Number>::n_array_elements)
+ * VectorizedArray<Number>::n_array_elements);
+ }
+
+
+
// this is the main working loop for all vector sums using the templated
// operation above. it accumulates the sums using a block-wise summation
// algorithm with post-update. this blocked algorithm has been proposed in
const Number2 *Y,
const ResultType power,
const size_type vec_size,
+ Number *Z,
ResultType &result,
const int depth = -1)
{
const size_type remainder = vec_size % 32;
Assert (remainder == 0 || n_chunks < 128, ExcInternalError());
- for (size_type i=0; i<n_chunks; ++i)
- {
- ResultType r0 = op(X, Y, power);
- for (size_type j=1; j<8; ++j)
- r0 += op(X, Y, power);
- ResultType r1 = op(X, Y, power);
- for (size_type j=1; j<8; ++j)
- r1 += op(X, Y, power);
- r0 += r1;
- r1 = op(X, Y, power);
- for (size_type j=1; j<8; ++j)
- r1 += op(X, Y, power);
- ResultType r2 = op(X, Y, power);
- for (size_type j=1; j<8; ++j)
- r2 += op(X, Y, power);
- r1 += r2;
- r0 += r1;
- outer_results[i] = r0;
- }
-
- // now work on the remainder, i.e., the last
- // up to 32 values. Use switch statement with
- // fall-through to work on these values.
+ // Select between the regular version and vectorized version based
+ // on the number types we are given. To choose the vectorized
+ // version often enough, the size of all tasks but the last one must
+ // be divisible by the vectorization length.
+ accumulate_regular(op, X, Y, power, n_chunks, Z, outer_results,
+ internal::bool2type<(types_are_equal<Number,Number2>::value &&
+ (types_are_equal<Number,double>::value ||
+ types_are_equal<Number,float>::value))>());
+
+ // now work on the remainder, i.e., the last up to 32 values. Use
+ // switch statement with fall-through to work on these values.
if (remainder > 0)
{
const size_type inner_chunks = remainder / 8;
switch (inner_chunks)
{
case 3:
- r2 = op(X, Y, power);
+ r2 = op(X, Y, power, Z);
for (size_type j=1; j<8; ++j)
- r2 += op(X, Y, power);
+ r2 += op(X, Y, power, Z);
// no break
case 2:
- r1 = op(X, Y, power);
+ r1 = op(X, Y, power, Z);
for (size_type j=1; j<8; ++j)
- r1 += op(X, Y, power);
+ r1 += op(X, Y, power, Z);
r1 += r2;
// no break
case 1:
- r2 = op(X, Y, power);
+ r2 = op(X, Y, power, Z);
for (size_type j=1; j<8; ++j)
- r2 += op(X, Y, power);
+ r2 += op(X, Y, power, Z);
// no break
default:
for (size_type j=0; j<remainder_inner; ++j)
- r0 += op(X, Y, power);
+ r0 += op(X, Y, power, Z);
r0 += r2;
r0 += r1;
outer_results[n_chunks] = r0;
Threads::TaskGroup<> task_group;
task_group += Threads::new_task(&accumulate<Operation,Number,Number2,
ResultType,size_type>,
- op, X, Y, power, new_size, r0, next_depth);
+ op, X, Y, power, new_size, Z, r0, next_depth);
task_group += Threads::new_task(&accumulate<Operation,Number,Number2,
ResultType,size_type>,
op, X+new_size, Y+new_size, power,
- new_size, r1, next_depth);
+ new_size, Z+new_size, r1, next_depth);
task_group += Threads::new_task(&accumulate<Operation,Number,Number2,
ResultType,size_type>,
op, X+2*new_size, Y+2*new_size, power,
- new_size, r2, next_depth);
+ new_size, Z+2*new_size, r2, next_depth);
task_group += Threads::new_task(&accumulate<Operation,Number,Number2,
ResultType,size_type>,
op, X+3*new_size, Y+3*new_size, power,
- vec_size-3*new_size, r3, next_depth);
+ vec_size-3*new_size, Z+3*new_size, r3,
+ next_depth);
task_group.join_all();
r0 += r1;
r2 += r3;
#endif
else
{
- // split vector into four pieces and work on
- // the pieces recursively. Make pieces (except last)
- // divisible by 1024.
+ // split vector into four pieces and work on the pieces
+ // recursively. Make pieces (except last) divisible by 1024.
const size_type new_size = (vec_size / 4096) * 1024;
ResultType r0, r1, r2, r3;
- accumulate (op, X, Y, power, new_size, r0);
- accumulate (op, X+new_size, Y+new_size, power, new_size, r1);
- accumulate (op, X+2*new_size, Y+2*new_size, power, new_size, r2);
- accumulate (op, X+3*new_size, Y+3*new_size, power, vec_size-3*new_size, r3);
+ accumulate (op, X, Y, power, new_size, Z, r0);
+ accumulate (op, X+new_size, Y+new_size, power, new_size, Z+new_size, r1);
+ accumulate (op, X+2*new_size, Y+2*new_size, power, new_size, Z+2*new_size, r2);
+ accumulate (op, X+3*new_size, Y+3*new_size, power, vec_size-3*new_size,
+ Z+3*new_size, r3);
r0 += r1;
r2 += r3;
result = r0 + r2;
ExcDimensionMismatch(vec_size, v.size()));
Number sum;
- internal::Vector::accumulate (internal::Vector::InnerProd<Number,Number2>(),
- val, v.val, Number(), vec_size, sum);
+ internal::Vector::accumulate (internal::Vector::Dot<Number,Number2>(),
+ val, v.val, Number(), vec_size, val, sum);
Assert(numbers::is_finite(sum), ExcNumberNotFinite());
return sum;
}
+
template <typename Number>
typename Vector<Number>::real_type
Vector<Number>::norm_sqr () const
real_type sum;
internal::Vector::accumulate (internal::Vector::Norm2<Number,real_type>(),
- val, val, real_type(), vec_size, sum);
+ val, val, real_type(), vec_size, val, sum);
Assert(numbers::is_finite(sum), ExcNumberNotFinite());
}
+
template <typename Number>
Number Vector<Number>::mean_value () const
{
Number sum;
internal::Vector::accumulate (internal::Vector::MeanValue<Number>(),
- val, val, Number(), vec_size, sum);
+ val, val, Number(), vec_size, val, sum);
return sum / real_type(size());
}
real_type sum;
internal::Vector::accumulate (internal::Vector::Norm1<Number,real_type>(),
- val, val, real_type(), vec_size, sum);
+ val, val, real_type(), vec_size, val, sum);
return sum;
}
real_type norm_square;
internal::Vector::accumulate (internal::Vector::Norm2<Number,real_type>(),
- val, val, real_type(), vec_size, norm_square);
+ val, val, real_type(), vec_size, val, norm_square);
if (numbers::is_finite(norm_square) &&
norm_square >= std::numeric_limits<real_type>::min())
return std::sqrt(norm_square);
real_type sum;
internal::Vector::accumulate (internal::Vector::NormP<Number,real_type>(),
- val, val, p, vec_size, sum);
+ val, val, p, vec_size, val, sum);
if (numbers::is_finite(sum) && sum >= std::numeric_limits<real_type>::min())
return std::pow(sum, static_cast<real_type>(1./p));
}
+
+template <typename Number>
+Number
+Vector<Number>::add_and_dot (const Number a,
+ const Vector<Number> &V,
+ const Vector<Number> &W)
+{
+ Assert (vec_size!=0, ExcEmptyObject());
+ AssertDimension (vec_size, V.size());
+ AssertDimension (vec_size, W.size());
+
+ Number sum;
+ internal::Vector::accumulate (internal::Vector::AddAndDot<Number>(),
+ V.val, W.val, a, vec_size, val, sum);
+ Assert(numbers::is_finite(sum), ExcNumberNotFinite());
+
+ return sum;
+}
+
+
+
template <typename Number>
Vector<Number> &Vector<Number>::operator += (const Vector<Number> &v)
{
}
+
template <typename Number>
Vector<Number> &Vector<Number>::operator -= (const Vector<Number> &v)
{
}
+
template <typename Number>
void Vector<Number>::add (const Number v)
{
}
+
template <typename Number>
void Vector<Number>::add (const Vector<Number> &v)
{
}
+
template <typename Number>
void Vector<Number>::add (const Number a, const Vector<Number> &v,
const Number b, const Vector<Number> &w)
+ PetscScalar
+ VectorBase::add_and_dot (const PetscScalar a,
+ const VectorBase &V,
+ const VectorBase &W)
+ {
+ this->add(a, V);
+ return *this * W;
+ }
+
+
+
void
VectorBase::compress (::dealii::VectorOperation::values)
{
void
VectorBase::add (const PetscScalar a,
- const VectorBase &v)
+ const VectorBase &v)
{
Assert (!has_ghost_elements(), ExcGhostsPresent());
Assert (numbers::is_finite(a), ExcNumberNotFinite());
for (unsigned int i=0; i<n; ++i)
matrix.diag_element(i) = (i+1);
- SolverControl control(1000, 1e2*std::numeric_limits<number>::epsilon());
+ SolverControl control(1000, 1e3*std::numeric_limits<number>::epsilon());
typename SolverGMRES<Vector<number> >::AdditionalData data;
data.max_n_tmp_vectors = 202;
deallog.push("double");
test<double>();
deallog.pop();
- deallog.threshold_double(1.e-4);
+ deallog.threshold_double(2.e-4);
deallog.push("float");
test<float>();
deallog.pop();
+++ /dev/null
-
-DEAL:double:GMRES::Starting value 14.14
-DEAL:double:GMRES::Convergence step 109 value 0
-DEAL:float:GMRES::Starting value 14.14
-DEAL:float:GMRES::Convergence step 68 value 0
+++ /dev/null
-
-DEAL:double:GMRES::Starting value 14.14
-DEAL:double:GMRES::Convergence step 109 value 0
-DEAL:float:GMRES::Starting value 14.14
-DEAL:float:GMRES::Convergence step 69 value 0
+++ /dev/null
-
-DEAL:double:GMRES::Starting value 14.14
-DEAL:double:GMRES::Convergence step 109 value 0
-DEAL:float:GMRES::Starting value 14.14
-DEAL:float:GMRES::Convergence step 68 value 0
--- /dev/null
+
+DEAL:double:GMRES::Starting value 14.14
+DEAL:double:GMRES::Convergence step 105 value 0
+DEAL:float:GMRES::Starting value 14.14
+DEAL:float:GMRES::Convergence step 59 value 0
deallog.push("double");
test<double>();
deallog.pop();
- deallog.threshold_double(1.e-4);
+ deallog.threshold_double(2.e-4);
deallog.push("float");
test<float>();
deallog.pop();
+++ /dev/null
-
-DEAL:double:GMRES::Starting value 14.1
-DEAL:double:GMRES::Convergence step 105 value 0
-DEAL:float:GMRES::Starting value 14.1
-DEAL:float:GMRES::Convergence step 59 value 0.000114
+++ /dev/null
-
-DEAL:double:GMRES::Starting value 14.1
-DEAL:double:GMRES::Convergence step 105 value 0
-DEAL:float:GMRES::Starting value 14.1
-DEAL:float:GMRES::Convergence step 59 value 0.000114
DEAL:double:GMRES::Starting value 14.1
DEAL:double:GMRES::Convergence step 105 value 0
DEAL:float:GMRES::Starting value 14.1
-DEAL:float:GMRES::Convergence step 59 value 0.000110
+DEAL:float:GMRES::Convergence step 59 value 0
--- /dev/null
+// ---------------------------------------------------------------------
+//
+// Copyright (C) 2014 - 2014 by the deal.II authors
+//
+// This file is part of the deal.II library.
+//
+// The deal.II library is free software; you can use it, redistribute
+// it, and/or modify it under the terms of the GNU Lesser General
+// Public License as published by the Free Software Foundation; either
+// version 2.1 of the License, or (at your option) any later version.
+// The full text of the license can be found in the file LICENSE at
+// the top level of the deal.II distribution.
+//
+// ---------------------------------------------------------------------
+
+
+// check that the inner product gives exactly the same result (including
+// roundoff) for various vectors. Due to vectorization and its alignment
+// requirements, we also test that the result does not depend on the alignment
+// of any of the vectors, by choosing several start locations of a vector
+// relative to a data array allocated as usual.
+
+#include "../tests.h"
+#include <deal.II/base/logstream.h>
+#include <deal.II/lac/vector.h>
+#include <deal.II/lac/vector_view.h>
+#include <cmath>
+#include <fstream>
+#include <iomanip>
+
+
+
+
+template <typename number>
+void check_norms ()
+{
+ for (unsigned int test=0; test<5; ++test)
+ {
+ const unsigned int size = Testing::rand() % 20000;
+ Vector<number> larger1 (size+8), larger2(size+8), in1(size), in2(size);
+ for (unsigned int i=0; i<size; ++i)
+ {
+ in1(i) = static_cast<number>(Testing::rand())/static_cast<number>(RAND_MAX);
+ in2(i) = static_cast<number>(Testing::rand())/static_cast<number>(RAND_MAX);
+ }
+
+ const number inner_product = in1 * in2;
+ deallog << "Vector difference: ";
+ for (unsigned int shift1=0; shift1<8; ++shift1)
+ for (unsigned int shift2=0; shift2<8; ++shift2)
+ {
+ VectorView<number> v1 (size, larger1.begin()+shift1);
+ VectorView<number> v2 (size, larger2.begin()+shift2);
+ for (unsigned int i=0; i<size; ++i)
+ {
+ v1(i) = in1(i);
+ v2(i) = in2(i);
+ }
+
+ const number result = v1 * v2;
+ deallog << static_cast<double>(std::abs(result-inner_product)) << " ";
+ }
+ deallog << std::endl;
+ }
+}
+
+
+int main()
+{
+ std::ofstream logfile("output");
+ deallog << std::setprecision(2);
+ deallog.attach(logfile);
+ deallog.depth_console(0);
+ deallog.threshold_double(1e-50); // exact equality required!
+
+ check_norms<float>();
+ check_norms<double>();
+ check_norms<long double>();
+ check_norms<std::complex<double> >();
+}
+
+
--- /dev/null
+
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+DEAL::Vector difference: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
--- /dev/null
+// ---------------------------------------------------------------------
+//
+// Copyright (C) 2012 - 2013 by the deal.II authors
+//
+// This file is part of the deal.II library.
+//
+// The deal.II library is free software; you can use it, redistribute
+// it, and/or modify it under the terms of the GNU Lesser General
+// Public License as published by the Free Software Foundation; either
+// version 2.1 of the License, or (at your option) any later version.
+// The full text of the license can be found in the file LICENSE at
+// the top level of the deal.II distribution.
+//
+// ---------------------------------------------------------------------
+
+
+// check that Vector::add_and_dot works correctly
+
+#include "../tests.h"
+#include <deal.II/base/logstream.h>
+#include <deal.II/lac/vector.h>
+#include <cmath>
+#include <fstream>
+#include <iomanip>
+
+
+
+
+template <typename number>
+void check ()
+{
+ for (unsigned int test=0; test<5; ++test)
+ {
+ const unsigned int size = 17 + test*1101;
+ Vector<number> v1 (size), v2(size), v3(size), check(size);
+ for (unsigned int i=0; i<size; ++i)
+ {
+ v1(i) = 0.1 + 0.005 * i;
+ v2(i) = -5.2 + 0.18 * i;
+ v3(i) = 3.14159 + 2.7183/(1.+i);
+ }
+ check = v1;
+ const number factor = 0.01432;
+
+ v1.add(factor, v2);
+ const number prod = v1 * v3;
+ const number prod_check = check.add_and_dot(factor, v2, v3);
+ if (test == 0 && types_are_equal<number,double>::value)
+ {
+ deallog << "Vector add reference: ";
+ v1.print(deallog);
+ deallog << "Vector check reference: ";
+ check.print(deallog);
+ }
+
+ deallog << "Add and dot should be " << prod/static_cast<number>(size)
+ << ", is " << prod_check/static_cast<number>(size)
+ << std::endl;
+ }
+}
+
+
+int main()
+{
+ std::ofstream logfile("output");
+ deallog << std::fixed;
+ deallog << std::setprecision(2);
+ deallog.attach(logfile);
+ deallog.depth_console(0);
+ deallog.threshold_double(1.e-10);
+
+ check<float>();
+ check<double>();
+ check<long double>();
+ check<std::complex<double> >();
+ deallog << "OK" << std::endl;
+}
+
+
--- /dev/null
+
+DEAL::Add and dot should be 0.30, is 0.30
+DEAL::Add and dot should be 13.40, is 13.40
+DEAL::Add and dot should be 26.50, is 26.50
+DEAL::Add and dot should be 39.61, is 39.61
+DEAL::Add and dot should be 52.71, is 52.71
+DEAL::Vector add reference: 0.03 0.03 0.04 0.05 0.06 0.06 0.07 0.08 0.09 0.09 0.10 0.11 0.12 0.12 0.13 0.14 0.15
+DEAL::Vector check reference: 0.03 0.03 0.04 0.05 0.06 0.06 0.07 0.08 0.09 0.09 0.10 0.11 0.12 0.12 0.13 0.14 0.15
+DEAL::Add and dot should be 0.30, is 0.30
+DEAL::Add and dot should be 13.40, is 13.40
+DEAL::Add and dot should be 26.50, is 26.50
+DEAL::Add and dot should be 39.61, is 39.61
+DEAL::Add and dot should be 52.71, is 52.71
+DEAL::Add and dot should be 0.30, is 0.30
+DEAL::Add and dot should be 13.40, is 13.40
+DEAL::Add and dot should be 26.50, is 26.50
+DEAL::Add and dot should be 39.61, is 39.61
+DEAL::Add and dot should be 52.71, is 52.71
+DEAL::Add and dot should be (0.30,0.00), is (0.30,0.00)
+DEAL::Add and dot should be (13.40,0.00), is (13.40,0.00)
+DEAL::Add and dot should be (26.50,0.00), is (26.50,0.00)
+DEAL::Add and dot should be (39.61,0.00), is (39.61,0.00)
+DEAL::Add and dot should be (52.71,0.00), is (52.71,0.00)
+DEAL::OK