From: Martin Kronbichler
Date: Thu, 2 Jul 2015 15:47:55 +0000 (+0200)
Subject: Introduce vectorized transposition for array-to-structure conversions
X-Git-Tag: v8.3.0-rc1~46^2
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=a779cef7200ace3351761a55697417c8e7b245c5;p=dealii.git

Introduce vectorized transposition for array-to-structure conversions
---

diff --git a/doc/news/changes.h b/doc/news/changes.h
index 4645d16c87..c3c3dc557d 100644
--- a/doc/news/changes.h
+++ b/doc/news/changes.h
@@ -385,10 +385,10 @@ inconvenience this causes.
  -
  1. New: Added the class Functions::Polynomial for representation of polynomials.
+ 2. New: Added the class Functions::Polynomial for representation of polynomials.
     The new class is derived from the Function class.
-    (Angel Rodriguez, 2015/07/01)
+    (Angel Rodriguez, 2015/07/01)

  3. New: deal.II now supports compilation in C++14 mode, which may be
@@ -540,7 +540,6 @@ inconvenience this causes.
      -
    1. New: Utilities::replace_in_string().
       (Timo Heister, 2015/07/05)
@@ -559,7 +558,15 @@ inconvenience this causes.
       (Guido Kanschat, 2015/07/04)

    2.
-
+
+   3. New: VectorizedArray now provides two methods
+      vectorized_load_and_transpose() and vectorized_transpose_and_store() that
+      perform vectorized reads or writes and convert from array-of-struct into
+      struct-of-array or the other way around.
+
+      (Martin Kronbichler, 2015/07/02)
+
+   4.
+
    5. New: GridGenerator::cheese() for a mesh with many holes;
       GridGenerator::simplex() for simplices in 2 and 3 dimensions;
       GridGenerator::hyper_cross() for crosses in 2 and 3 dimensions.
@@ -581,7 +588,7 @@ inconvenience this causes.
       (Angel Rodriguez, 2015/06/29)

    6.
-
+
    7. Fixed: The function numbers::is_finite() produced incorrect results when called with a NaN number (specifically, it produces an uncatchable floating point exception when called with a signaling NaN). This was clearly not diff --git a/include/deal.II/base/vectorization.h b/include/deal.II/base/vectorization.h index dd36d3e310..7f0d01eab7 100644 --- a/include/deal.II/base/vectorization.h +++ b/include/deal.II/base/vectorization.h @@ -1,6 +1,6 @@ // --------------------------------------------------------------------- // -// Copyright (C) 2011 - 2014 by the deal.II authors +// Copyright (C) 2011 - 2015 by the deal.II authors // // This file is part of the deal.II library. // @@ -27,7 +27,7 @@ // according to the following scheme // #ifdef __AVX512F__ // #define DEAL_II_COMPILER_VECTORIZATION_LEVEL 3 -// #ifdef __AVX__ +// #elif defined (__AVX__) // #define DEAL_II_COMPILER_VECTORIZATION_LEVEL 2 // #elif defined (__SSE2__) // #define DEAL_II_COMPILER_VECTORIZATION_LEVEL 1 @@ -36,7 +36,7 @@ // #endif // In addition to checking the flags __AVX__ and __SSE2__, a CMake test, // 'check_01_cpu_features.cmake', ensures that these feature are not only -// present but also working properly. +// present in the compilation unit but also working properly. #if DEAL_II_COMPILER_VECTORIZATION_LEVEL >= 2 // AVX, AVX-512 #include @@ -45,12 +45,13 @@ #endif - // forward declarations -namespace dealii -{ - template class VectorizedArray; -} +DEAL_II_NAMESPACE_OPEN + +template class VectorizedArray; + +DEAL_II_NAMESPACE_CLOSE + namespace std { template ::dealii::VectorizedArray @@ -64,7 +65,349 @@ namespace std } -DEAL_II_NAMESPACE_OPEN + +DEAL_II_NAMESPACE_OPEN + +/** + * This generic class defines a unified interface to a vectorized data type. + * For general template arguments, this class simply corresponds to the + * template argument. For example, VectorizedArray is nothing + * else but a wrapper around long double with exactly one data field + * of type long double and overloaded arithmetic operations. This + * means that VectorizedArray has a similar layout + * as ComplicatedType, provided that ComplicatedType defines basic arithmetic + * operations. For floats and doubles, an array of numbers are packed + * together, though. The number of elements packed together depend on the + * computer system and compiler flags that are used for compilation of + * deal.II. The fundamental idea of these packed data types is to use one + * single CPU instruction to perform arithmetic operations on the whole array + * using the processor's vector units. Most computer systems by 2010 standards + * will use an array of two doubles and four floats, respectively (this + * corresponds to the SSE/SSE2 data sets) when compiling deal.II on 64-bit + * operating systems. On Intel Sandy Bridge processors and newer or AMD + * Bulldozer processors and newer, four doubles and eight floats are used when + * deal.II is configured e.g. using gcc with --with-cpu=native or --with- + * cpu=corei7-avx. On compilations with AVX-512 support, eight doubles and + * sixteen floats are used. + * + * This behavior of this class is made similar to the basic data types double + * and float. The definition of a vectorized array does not initialize the + * data field but rather leaves it undefined, as is the case for double and + * float. However, when calling something like VectorizedArray a = + * VectorizedArray(), it sets all numbers in this field to zero. 
In + * other words, this class is a plain old data (POD) type which has an + * equivalent C representation and can e.g. be safely copied with std::memcpy. + * This POD layout is also necessary for ensuring correct alignment of data + * with address boundaries when collected in a vector (i.e., when the first + * element in a vector is properly aligned, all subsequent elements will be + * correctly aligned, too). + * + * Note that for proper functioning of this class, certain data alignment + * rules must be respected. This is because the computer expects the starting + * address of a VectorizedArray field at specific addresses in memory + * (usually, the address of the vectorized array should be a multiple of the + * length of the array in bytes). Otherwise, a segmentation fault or a severe + * loss of performance might occur. When creating a single data field on the + * stack like VectorizedArray a = VectorizedArray(), + * the compiler will take care of data alignment automatically. However, when + * allocating a long vector of VectorizedArray data, one needs to + * respect these rules. Use the class AlignedVector or data containers based + * on AlignedVector (such as Table) for this purpose. It is a class very + * similar to std::vector otherwise but always makes sure that data is + * correctly aligned. + * + * @author Katharina Kormann, Martin Kronbichler, 2010, 2011 + */ +template +class VectorizedArray +{ +public: + /** + * This gives the number of vectors collected in this class. + */ + static const unsigned int n_array_elements = 1; + + // POD means that there should be no user-defined constructors, destructors + // and copy functions (the standard is somewhat relaxed in C++2011, though). + + /** + * This function assigns a scalar to this class. + */ + VectorizedArray & + operator = (const Number scalar) + { + data = scalar; + return *this; + } + + /** + * Access operator (only valid with component 0) + */ + Number & + operator [] (const unsigned int comp) + { + (void)comp; + AssertIndexRange (comp, 1); + return data; + } + + /** + * Constant access operator (only valid with component 0) + */ + const Number & + operator [] (const unsigned int comp) const + { + (void)comp; + AssertIndexRange (comp, 1); + return data; + } + + /** + * Addition + */ + VectorizedArray & + operator += (const VectorizedArray &vec) + { + data+=vec.data; + return *this; + } + + /** + * Subtraction + */ + VectorizedArray & + operator -= (const VectorizedArray &vec) + { + data-=vec.data; + return *this; + } + + /** + * Multiplication + */ + VectorizedArray & + operator *= (const VectorizedArray &vec) + { + data*=vec.data; + return *this; + } + + /** + * Division + */ + VectorizedArray & + operator /= (const VectorizedArray &vec) + { + data/=vec.data; + return *this; + } + + /** + * Loads @p n_array_elements from memory into the calling class, starting at + * the given address. The memory need not be aligned by the amount of bytes + * in the vectorized array, as opposed to casting a double address to + * VectorizedArray*. + */ + void load (const Number *ptr) + { + data = *ptr; + } + + /** + * Writes the content of the calling class into memory in form of @p + * n_array_elements to the given address. The memory need not be aligned by + * the amount of bytes in the vectorized array, as opposed to casting a + * double address to VectorizedArray*. + */ + void store (Number *ptr) const + { + *ptr = data; + } + + /** + * Actual data field. Since this class represents a POD data type, it is + * declared public. 
+ */ + Number data; + +private: + /** + * Returns the square root of this field. Not for use in user code. Use + * sqrt(x) instead. + */ + VectorizedArray + get_sqrt () const + { + VectorizedArray res; + res.data = std::sqrt(data); + return res; + } + + /** + * Returns the absolute value of this field. Not for use in user code. Use + * abs(x) instead. + */ + VectorizedArray + get_abs () const + { + VectorizedArray res; + res.data = std::fabs(data); + return res; + } + + /** + * Returns the component-wise maximum of this field and another one. Not for + * use in user code. Use max(x,y) instead. + */ + VectorizedArray + get_max (const VectorizedArray &other) const + { + VectorizedArray res; + res.data = std::max (data, other.data); + return res; + } + + /** + * Returns the component-wise minimum of this field and another one. Not for + * use in user code. Use min(x,y) instead. + */ + VectorizedArray + get_min (const VectorizedArray &other) const + { + VectorizedArray res; + res.data = std::min (data, other.data); + return res; + } + + /** + * Make a few functions friends. + */ + template friend VectorizedArray + std::sqrt (const VectorizedArray &); + template friend VectorizedArray + std::abs (const VectorizedArray &); + template friend VectorizedArray + std::max (const VectorizedArray &, const VectorizedArray &); + template friend VectorizedArray + std::min (const VectorizedArray &, const VectorizedArray &); +}; + + + +/** + * Create a vectorized array that sets all entries in the array to the given + * scalar. + * + * @relates VectorizedArray + */ +template +inline +VectorizedArray +make_vectorized_array (const Number &u) +{ + VectorizedArray result; + result = u; + return result; +} + + + +/** + * This method loads VectorizedArray::n_array_elements data streams from the + * given array @p in. The offsets to the input array are given by the array @p + * offsets. From each stream, n_entries are read. The data is then transposed + * and stored it into an array of VectorizedArray type. The output array @p + * out is expected to be an array of size @p n_entries. This method + * operates on plain arrays, so no checks for valid data access are made. It is + * the user's responsibility to ensure that the given arrays are valid + * according to the access layout below. + * + * This operation corresponds to a transformation of an array-of-struct + * (input) into a struct-of-array (output) according to the following formula: + * + * @code + * for (unsigned int i=0; i::n_array_elements; ++v) + * out[i][v] = in[offsets[v]+i]; + * @endcode + * + * A more optimized version of this code will be used for supported types. + * + * This is the inverse operation to vectorized_transpose_and_store(). + * + * @relates VectorizedArray + */ +template +inline +void +vectorized_load_and_transpose(const unsigned int n_entries, + const Number *in, + const unsigned int *offsets, + VectorizedArray *out) +{ + for (unsigned int i=0; i::n_array_elements; ++v) + out[i][v] = in[offsets[v]+i]; +} + + + +/** + * This method stores the vectorized arrays in transposed form into the given + * output array @p out with the given offsets @p offsets. This operation + * corresponds to a transformation of a struct-of-array (input) into an + * array-of-struct (output). This method operates on plain array, so no checks + * for valid data access are made. It is the user's responsibility to ensure + * that the given arrays are valid according to the access layout below. + * + * This method assumes that the specified offsets do not overlap. 
Otherwise, + * the behavior is undefined in the vectorized case. It is the user's + * responsibility to make sure that the access does not overlap and avoid + * undefined behavior. + * + * The argument @p add_into selects where the entries should only be written + * into the output arrays or the result should be added into the exisiting + * entries in the output. For add_into == false, the following + * code is assumed: + * + * @code + * for (unsigned int i=0; i::n_array_elements; ++v) + * out[offsets[v]+i] = in[i][v]; + * @endcode + * + * For add_into == true, the code implements the following action: + * @code + * for (unsigned int i=0; i::n_array_elements; ++v) + * out[offsets[v]+i] += in[i][v]; + * @endcode + * + * A more optimized version of this code will be used for supported types. + * + * This is the inverse operation to vectorized_load_and_transpose(). + * + * @relates VectorizedArray + */ +template +inline +void +vectorized_transpose_and_store(const bool add_into, + const unsigned int n_entries, + const VectorizedArray *in, + const unsigned int *offsets, + Number *out) +{ + if (add_into) + for (unsigned int i=0; i::n_array_elements; ++v) + out[offsets[v]+i] += in[i][v]; + else + for (unsigned int i=0; i::n_array_elements; ++v) + out[offsets[v]+i] = in[i][v]; +} + // for safety, also check that __AVX512F__ is defined in case the user manually @@ -680,6 +1023,122 @@ private: +/** + * Specialization for double and AVX. + */ +template <> +inline +void +vectorized_load_and_transpose(const unsigned int n_entries, + const double *in, + const unsigned int *offsets, + VectorizedArray *out) +{ + const unsigned int n_chunks = n_entries/4, remainder = n_entries%4; + for (unsigned int i=0; i 0 && n_chunks > 0) + { + // simple re-load all data in the last slot + const unsigned int final_pos = n_chunks*4-4+remainder; + Assert(final_pos+4 == n_entries, ExcInternalError()); + __m256d u0 = _mm256_loadu_pd(in+final_pos+offsets[0]); + __m256d u1 = _mm256_loadu_pd(in+final_pos+offsets[1]); + __m256d u2 = _mm256_loadu_pd(in+final_pos+offsets[2]); + __m256d u3 = _mm256_loadu_pd(in+final_pos+offsets[3]); + __m256d t0 = _mm256_permute2f128_pd (u0, u2, 0x20); + __m256d t1 = _mm256_permute2f128_pd (u1, u3, 0x20); + __m256d t2 = _mm256_permute2f128_pd (u0, u2, 0x31); + __m256d t3 = _mm256_permute2f128_pd (u1, u3, 0x31); + out[final_pos+0].data = _mm256_unpacklo_pd (t0, t1); + out[final_pos+1].data = _mm256_unpackhi_pd (t0, t1); + out[final_pos+2].data = _mm256_unpacklo_pd (t2, t3); + out[final_pos+3].data = _mm256_unpackhi_pd (t2, t3); + } + else if (remainder > 0) + for (unsigned int i=0; i +inline +void +vectorized_transpose_and_store(const bool add_into, + const unsigned int n_entries, + const VectorizedArray *in, + const unsigned int *offsets, + double *out) +{ + const unsigned int n_chunks = n_entries/4; + for (unsigned int i=0; i= 1 && defined(__SSE2__) - - - /** - * Specialization for double and SSE2. + * Specialization for double and AVX. */ template <> -class VectorizedArray +inline +void +vectorized_load_and_transpose(const unsigned int n_entries, + const float *in, + const unsigned int *offsets, + VectorizedArray *out) { -public: - /** - * This gives the number of vectors collected in this class. - */ - static const unsigned int n_array_elements = 2; - - /** - * This function can be used to set all data fields to a given scalar. - */ - VectorizedArray & - operator = (const double x) - { - data = _mm_set1_pd(x); - return *this; - } - - /** - * Access operator. 
- */ - double & - operator [] (const unsigned int comp) - { - AssertIndexRange (comp, 2); - return *(reinterpret_cast(&data)+comp); - } - - /** - * Constant access operator. - */ - const double & - operator [] (const unsigned int comp) const - { - AssertIndexRange (comp, 2); - return *(reinterpret_cast(&data)+comp); - } - - /** - * Addition. - */ - VectorizedArray & - operator += (const VectorizedArray &vec) - { -#ifdef DEAL_II_COMPILER_USE_VECTOR_ARITHMETICS - data += vec.data; -#else - data = _mm_add_pd(data,vec.data); -#endif - return *this; - } - - /** - * Subtraction. - */ - VectorizedArray & - operator -= (const VectorizedArray &vec) - { -#ifdef DEAL_II_COMPILER_USE_VECTOR_ARITHMETICS - data -= vec.data; -#else - data = _mm_sub_pd(data,vec.data); -#endif - return *this; - } - /** - * Multiplication. - */ - VectorizedArray & - operator *= (const VectorizedArray &vec) - { -#ifdef DEAL_II_COMPILER_USE_VECTOR_ARITHMETICS - data *= vec.data; -#else - data = _mm_mul_pd(data,vec.data); -#endif - return *this; - } - - /** - * Division. - */ - VectorizedArray & - operator /= (const VectorizedArray &vec) - { -#ifdef DEAL_II_COMPILER_USE_VECTOR_ARITHMETICS - data /= vec.data; -#else - data = _mm_div_pd(data,vec.data); -#endif - return *this; - } - - /** - * Loads @p n_array_elements from memory into the calling class, starting at - * the given address. The memory need not be aligned by 16 bytes, as opposed - * to casting a double address to VectorizedArray*. - */ - void load (const double *ptr) - { - data = _mm_loadu_pd (ptr); - } - - /** - * Writes the content of the calling class into memory in form of @p - * n_array_elements to the given address. The memory need not be aligned by - * 16 bytes, as opposed to casting a double address to - * VectorizedArray*. - */ - void store (double *ptr) const - { - _mm_storeu_pd (ptr, data); - } - - /** - * Actual data field. Since this class represents a POD data type, it - * remains public. 
- */ - __m128d data; + const unsigned int n_chunks = n_entries/4, remainder = n_entries%4; + for (unsigned int i=0; i 0 && n_chunks > 0) + { + // simple re-load all data in the last slot + const unsigned int final_pos = n_chunks*4-4+remainder; + Assert(final_pos+4 == n_entries, ExcInternalError()); + __m128 u0 = _mm_loadu_ps(in+final_pos+offsets[0]); + __m128 u1 = _mm_loadu_ps(in+final_pos+offsets[1]); + __m128 u2 = _mm_loadu_ps(in+final_pos+offsets[2]); + __m128 u3 = _mm_loadu_ps(in+final_pos+offsets[3]); + __m128 u4 = _mm_loadu_ps(in+final_pos+offsets[4]); + __m128 u5 = _mm_loadu_ps(in+final_pos+offsets[5]); + __m128 u6 = _mm_loadu_ps(in+final_pos+offsets[6]); + __m128 u7 = _mm_loadu_ps(in+final_pos+offsets[7]); + __m256 t0 = __m256(), t1 = __m256(), t2 = __m256(), t3 = __m256(); + t0 = _mm256_insertf128_ps (t0, u0, 0); + t0 = _mm256_insertf128_ps (t0, u4, 1); + t1 = _mm256_insertf128_ps (t1, u1, 0); + t1 = _mm256_insertf128_ps (t1, u5, 1); + t2 = _mm256_insertf128_ps (t2, u2, 0); + t2 = _mm256_insertf128_ps (t2, u6, 1); + t3 = _mm256_insertf128_ps (t3, u3, 0); + t3 = _mm256_insertf128_ps (t3, u7, 1); + __m256 v0 = _mm256_shuffle_ps (t0, t1, 0x44); + __m256 v1 = _mm256_shuffle_ps (t0, t1, 0xee); + __m256 v2 = _mm256_shuffle_ps (t2, t3, 0x44); + __m256 v3 = _mm256_shuffle_ps (t2, t3, 0xee); + out[final_pos+0].data = _mm256_shuffle_ps (v0, v2, 0x88); + out[final_pos+1].data = _mm256_shuffle_ps (v0, v2, 0xdd); + out[final_pos+2].data = _mm256_shuffle_ps (v1, v3, 0x88); + out[final_pos+3].data = _mm256_shuffle_ps (v1, v3, 0xdd); + } + else if (remainder > 0) + for (unsigned int i=0; i +inline +void +vectorized_transpose_and_store(const bool add_into, + const unsigned int n_entries, + const VectorizedArray *in, + const unsigned int *offsets, + float *out) +{ + const unsigned int n_chunks = n_entries/4; + for (unsigned int i=0; i friend VectorizedArray - std::sqrt (const VectorizedArray &); - template friend VectorizedArray - std::abs (const VectorizedArray &); - template friend VectorizedArray - std::max (const VectorizedArray &, const VectorizedArray &); - template friend VectorizedArray - std::min (const VectorizedArray &, const VectorizedArray &); -}; +// for safety, also check that __SSE2__ is defined in case the user manually +// set some conflicting compile flags which prevent compilation +#elif DEAL_II_COMPILER_VECTORIZATION_LEVEL >= 1 && defined(__SSE2__) /** - * Specialization for float and SSE2. + * Specialization for double and SSE2. */ template <> -class VectorizedArray +class VectorizedArray { public: /** * This gives the number of vectors collected in this class. */ - static const unsigned int n_array_elements = 4; + static const unsigned int n_array_elements = 2; /** * This function can be used to set all data fields to a given scalar. */ - VectorizedArray & - operator = (const float x) + operator = (const double x) { - data = _mm_set1_ps(x); + data = _mm_set1_pd(x); return *this; } /** * Access operator. */ - float & + double & operator [] (const unsigned int comp) { - AssertIndexRange (comp, 4); - return *(reinterpret_cast(&data)+comp); + AssertIndexRange (comp, 2); + return *(reinterpret_cast(&data)+comp); } /** * Constant access operator. 
*/ - const float & + const double & operator [] (const unsigned int comp) const { - AssertIndexRange (comp, 4); - return *(reinterpret_cast(&data)+comp); + AssertIndexRange (comp, 2); + return *(reinterpret_cast(&data)+comp); } /** @@ -1130,7 +1555,7 @@ public: #ifdef DEAL_II_COMPILER_USE_VECTOR_ARITHMETICS data += vec.data; #else - data = _mm_add_ps(data,vec.data); + data = _mm_add_pd(data,vec.data); #endif return *this; } @@ -1144,7 +1569,7 @@ public: #ifdef DEAL_II_COMPILER_USE_VECTOR_ARITHMETICS data -= vec.data; #else - data = _mm_sub_ps(data,vec.data); + data = _mm_sub_pd(data,vec.data); #endif return *this; } @@ -1157,7 +1582,7 @@ public: #ifdef DEAL_II_COMPILER_USE_VECTOR_ARITHMETICS data *= vec.data; #else - data = _mm_mul_ps(data,vec.data); + data = _mm_mul_pd(data,vec.data); #endif return *this; } @@ -1171,7 +1596,7 @@ public: #ifdef DEAL_II_COMPILER_USE_VECTOR_ARITHMETICS data /= vec.data; #else - data = _mm_div_ps(data,vec.data); + data = _mm_div_pd(data,vec.data); #endif return *this; } @@ -1179,29 +1604,29 @@ public: /** * Loads @p n_array_elements from memory into the calling class, starting at * the given address. The memory need not be aligned by 16 bytes, as opposed - * to casting a float address to VectorizedArray*. + * to casting a double address to VectorizedArray*. */ - void load (const float *ptr) + void load (const double *ptr) { - data = _mm_loadu_ps (ptr); + data = _mm_loadu_pd (ptr); } /** * Writes the content of the calling class into memory in form of @p * n_array_elements to the given address. The memory need not be aligned by - * 16 bytes, as opposed to casting a float address to - * VectorizedArray*. + * 16 bytes, as opposed to casting a double address to + * VectorizedArray*. */ - void store (float *ptr) const + void store (double *ptr) const { - _mm_storeu_ps (ptr, data); + _mm_storeu_pd (ptr, data); } /** * Actual data field. Since this class represents a POD data type, it * remains public. */ - __m128 data; + __m128d data; private: /** @@ -1212,7 +1637,7 @@ private: get_sqrt () const { VectorizedArray res; - res.data = _mm_sqrt_ps(data); + res.data = _mm_sqrt_pd(data); return res; } @@ -1223,12 +1648,13 @@ private: VectorizedArray get_abs () const { - // to compute the absolute value, perform bitwise andnot with -0. This - // will leave all value and exponent bits unchanged but force the sign - // value to +. - __m128 mask = _mm_set1_ps (-0.f); + // to compute the absolute value, perform + // bitwise andnot with -0. This will leave all + // value and exponent bits unchanged but force + // the sign value to +. + __m128d mask = _mm_set1_pd (-0.); VectorizedArray res; - res.data = _mm_andnot_ps(mask, data); + res.data = _mm_andnot_pd(mask, data); return res; } @@ -1240,7 +1666,7 @@ private: get_max (const VectorizedArray &other) const { VectorizedArray res; - res.data = _mm_max_ps (data, other.data); + res.data = _mm_max_pd (data, other.data); return res; } @@ -1252,7 +1678,7 @@ private: get_min (const VectorizedArray &other) const { VectorizedArray res; - res.data = _mm_min_ps (data, other.data); + res.data = _mm_min_pd (data, other.data); return res; } @@ -1270,170 +1696,206 @@ private: }; -#endif // if DEAL_II_COMPILER_VECTORIZATION_LEVEL > 0 + +/** + * Specialization for double and SSE2. 
+ */ +template <> +inline +void vectorized_load_and_transpose(const unsigned int n_entries, + const double *in, + const unsigned int *offsets, + VectorizedArray *out) +{ + const unsigned int n_chunks = n_entries/2, remainder = n_entries%2; + for (unsigned int i=0; i 0) + for (unsigned int i=0; i is nothing - * else but a wrapper around long double with exactly one data field - * of type long double and overloaded arithmetic operations. This - * means that VectorizedArray has a similar layout - * as ComplicatedType, provided that ComplicatedType defines basic arithmetic - * operations. For floats and doubles, an array of numbers are packed - * together, though. The number of elements packed together depend on the - * computer system and compiler flags that are used for compilation of - * deal.II. The fundamental idea of these packed data types is to use one - * single CPU instruction to perform arithmetic operations on the whole array - * using the processor's vector units. Most computer systems by 2010 standards - * will use an array of two doubles and four floats, respectively (this - * corresponds to the SSE/SSE2 data sets) when compiling deal.II on 64-bit - * operating systems. On Intel Sandy Bridge processors and newer or AMD - * Bulldozer processors and newer, four doubles and eight floats are used when - * deal.II is configured e.g. using gcc with --with-cpu=native or --with- - * cpu=corei7-avx. On compilations with AVX-512 support, eight doubles and - * sixteen floats are used. - * - * This behavior of this class is made similar to the basic data types double - * and float. The definition of a vectorized array does not initialize the - * data field but rather leaves it undefined, as is the case for double and - * float. However, when calling something like VectorizedArray a = - * VectorizedArray(), it sets all numbers in this field to zero. In - * other words, this class is a plain old data (POD) type which has an - * equivalent C representation and can e.g. be safely copied with std::memcpy. - * This POD layout is also necessary for ensuring correct alignment of data - * with address boundaries when collected in a vector (i.e., when the first - * element in a vector is properly aligned, all subsequent elements will be - * correctly aligned, too). - * - * Note that for proper functioning of this class, certain data alignment - * rules must be respected. This is because the computer expects the starting - * address of a VectorizedArray field at specific addresses in memory - * (usually, the address of the vectorized array should be a multiple of the - * length of the array in bytes). Otherwise, a segmentation fault or a severe - * loss of performance might occur. When creating a single data field on the - * stack like VectorizedArray a = VectorizedArray(), - * the compiler will take care of data alignment automatically. However, when - * allocating a long vector of VectorizedArray data, one needs to - * respect these rules. Use the class AlignedVector or data containers based - * on AlignedVector (such as Table) for this purpose. It is a class very - * similar to std::vector otherwise but always makes sure that data is - * correctly aligned. - * - * @author Katharina Kormann, Martin Kronbichler, 2010, 2011 + * Specialization for double and AVX. 
*/ -template -class VectorizedArray +template <> +inline +void +vectorized_transpose_and_store(const bool add_into, + const unsigned int n_entries, + const VectorizedArray *in, + const unsigned int *offsets, + double *out) +{ + const unsigned int n_chunks = n_entries/2; + if (add_into) + { + for (unsigned int i=0; i +class VectorizedArray { public: /** * This gives the number of vectors collected in this class. */ - static const unsigned int n_array_elements = 1; - - // POD means that there should be no user-defined constructors, destructors - // and copy functions (the standard is somewhat relaxed in C++2011, though). + static const unsigned int n_array_elements = 4; /** - * This function assigns a scalar to this class. + * This function can be used to set all data fields to a given scalar. */ + VectorizedArray & - operator = (const Number scalar) + operator = (const float x) { - data = scalar; + data = _mm_set1_ps(x); return *this; } /** - * Access operator (only valid with component 0) + * Access operator. */ - Number & + float & operator [] (const unsigned int comp) { - (void)comp; - AssertIndexRange (comp, 1); - return data; + AssertIndexRange (comp, 4); + return *(reinterpret_cast(&data)+comp); } /** - * Constant access operator (only valid with component 0) + * Constant access operator. */ - const Number & + const float & operator [] (const unsigned int comp) const { - (void)comp; - AssertIndexRange (comp, 1); - return data; + AssertIndexRange (comp, 4); + return *(reinterpret_cast(&data)+comp); } /** - * Addition + * Addition. */ VectorizedArray & - operator += (const VectorizedArray &vec) + operator += (const VectorizedArray &vec) { - data+=vec.data; +#ifdef DEAL_II_COMPILER_USE_VECTOR_ARITHMETICS + data += vec.data; +#else + data = _mm_add_ps(data,vec.data); +#endif return *this; } /** - * Subtraction + * Subtraction. */ VectorizedArray & - operator -= (const VectorizedArray &vec) + operator -= (const VectorizedArray &vec) { - data-=vec.data; +#ifdef DEAL_II_COMPILER_USE_VECTOR_ARITHMETICS + data -= vec.data; +#else + data = _mm_sub_ps(data,vec.data); +#endif return *this; } - /** - * Multiplication + * Multiplication. */ VectorizedArray & - operator *= (const VectorizedArray &vec) + operator *= (const VectorizedArray &vec) { - data*=vec.data; +#ifdef DEAL_II_COMPILER_USE_VECTOR_ARITHMETICS + data *= vec.data; +#else + data = _mm_mul_ps(data,vec.data); +#endif return *this; } /** - * Division + * Division. */ VectorizedArray & - operator /= (const VectorizedArray &vec) + operator /= (const VectorizedArray &vec) { - data/=vec.data; +#ifdef DEAL_II_COMPILER_USE_VECTOR_ARITHMETICS + data /= vec.data; +#else + data = _mm_div_ps(data,vec.data); +#endif return *this; } /** * Loads @p n_array_elements from memory into the calling class, starting at - * the given address. The memory need not be aligned by the amount of bytes - * in the vectorized array, as opposed to casting a double address to - * VectorizedArray*. + * the given address. The memory need not be aligned by 16 bytes, as opposed + * to casting a float address to VectorizedArray*. */ - void load (const Number *ptr) + void load (const float *ptr) { - data = *ptr; + data = _mm_loadu_ps (ptr); } /** * Writes the content of the calling class into memory in form of @p * n_array_elements to the given address. The memory need not be aligned by - * the amount of bytes in the vectorized array, as opposed to casting a - * double address to VectorizedArray*. + * 16 bytes, as opposed to casting a float address to + * VectorizedArray*. 
*/ - void store (Number *ptr) const + void store (float *ptr) const { - *ptr = data; + _mm_storeu_ps (ptr, data); } /** - * Actual data field. Since this class represents a POD data type, it is - * declared public. + * Actual data field. Since this class represents a POD data type, it + * remains public. */ - Number data; + __m128 data; private: /** @@ -1444,7 +1906,7 @@ private: get_sqrt () const { VectorizedArray res; - res.data = std::sqrt(data); + res.data = _mm_sqrt_ps(data); return res; } @@ -1455,8 +1917,12 @@ private: VectorizedArray get_abs () const { + // to compute the absolute value, perform bitwise andnot with -0. This + // will leave all value and exponent bits unchanged but force the sign + // value to +. + __m128 mask = _mm_set1_ps (-0.f); VectorizedArray res; - res.data = std::fabs(data); + res.data = _mm_andnot_ps(mask, data); return res; } @@ -1468,7 +1934,7 @@ private: get_max (const VectorizedArray &other) const { VectorizedArray res; - res.data = std::max (data, other.data); + res.data = _mm_max_ps (data, other.data); return res; } @@ -1480,7 +1946,7 @@ private: get_min (const VectorizedArray &other) const { VectorizedArray res; - res.data = std::min (data, other.data); + res.data = _mm_min_ps (data, other.data); return res; } @@ -1497,22 +1963,126 @@ private: std::min (const VectorizedArray &, const VectorizedArray &); }; + + /** - * Create a vectorized array that sets all entries in the array to the given - * scalar. - * - * @relates VectorizedArray + * Specialization for float and SSE2. */ -template +template <> inline -VectorizedArray -make_vectorized_array (const Number &u) +void vectorized_load_and_transpose(const unsigned int n_entries, + const float *in, + const unsigned int *offsets, + VectorizedArray *out) { - VectorizedArray result; - result = u; - return result; + const unsigned int n_chunks = n_entries/4, remainder = n_entries%4; + for (unsigned int i=0; i 0 && n_chunks > 0) + { + // simple re-load all data in the last slot + const unsigned int final_pos = n_chunks*4-4+remainder; + Assert(final_pos+4 == n_entries, ExcInternalError()); + __m128 u0 = _mm_loadu_ps(in+final_pos+offsets[0]); + __m128 u1 = _mm_loadu_ps(in+final_pos+offsets[1]); + __m128 u2 = _mm_loadu_ps(in+final_pos+offsets[2]); + __m128 u3 = _mm_loadu_ps(in+final_pos+offsets[3]); + __m128 v0 = _mm_shuffle_ps (u0, u1, 0x44); + __m128 v1 = _mm_shuffle_ps (u0, u1, 0xee); + __m128 v2 = _mm_shuffle_ps (u2, u3, 0x44); + __m128 v3 = _mm_shuffle_ps (u2, u3, 0xee); + out[final_pos+0].data = _mm_shuffle_ps (v0, v2, 0x88); + out[final_pos+1].data = _mm_shuffle_ps (v0, v2, 0xdd); + out[final_pos+2].data = _mm_shuffle_ps (v1, v3, 0x88); + out[final_pos+3].data = _mm_shuffle_ps (v1, v3, 0xdd); + } + else if (remainder > 0) + for (unsigned int i=0; i +inline +void +vectorized_transpose_and_store(const bool add_into, + const unsigned int n_entries, + const VectorizedArray *in, + const unsigned int *offsets, + float *out) +{ + const unsigned int n_chunks = n_entries/4; + for (unsigned int i=0; i 0 + + /** * Addition of two vectorized arrays with operator +. 
* @@ -1874,7 +2444,6 @@ operator - (const VectorizedArray &u) } - DEAL_II_NAMESPACE_CLOSE diff --git a/tests/base/vectorization_05.cc b/tests/base/vectorization_05.cc new file mode 100644 index 0000000000..8c84ea36f2 --- /dev/null +++ b/tests/base/vectorization_05.cc @@ -0,0 +1,119 @@ +// --------------------------------------------------------------------- +// +// Copyright (C) 2015 - 2015 by the deal.II authors +// +// This file is part of the deal.II library. +// +// The deal.II library is free software; you can use it, redistribute +// it, and/or modify it under the terms of the GNU Lesser General +// Public License as published by the Free Software Foundation; either +// version 2.1 of the License, or (at your option) any later version. +// The full text of the license can be found in the file LICENSE at +// the top level of the deal.II distribution. +// +// --------------------------------------------------------------------- + + +// test transpose operations of vectorized array using the array+offset method +// (otherwise the same as vectorization_05) + +#include "../tests.h" +#include +#include + +#include + + +template +void test () +{ + // since the number of array elements is system dependent, it is not a good + // idea to print them to an output file. Instead, check the values manually + const unsigned int n_vectors = VectorizedArray::n_array_elements; + VectorizedArray arr[n_numbers]; + Number other[n_vectors*n_numbers]; + unsigned int offsets[n_vectors]; + for (unsigned int v=0; v 0) + for (unsigned int i=0; i 0) + for (unsigned int i=0; i 0) + for (unsigned int i=0; i (); + test (); + test (); + deallog.pop(); + deallog.push("float"); + test (); + test (); + test (); + deallog.pop(); + + // test long double: in that case, the default + // path of VectorizedArray is taken no matter + // what was done for double or float + deallog.push("long double"); + test (); + deallog.pop(); +} diff --git a/tests/base/vectorization_05.output b/tests/base/vectorization_05.output new file mode 100644 index 0000000000..a345a3ff17 --- /dev/null +++ b/tests/base/vectorization_05.output @@ -0,0 +1,22 @@ + +DEAL:double::load_and_transpose at n=1: #errors: 0 +DEAL:double::transpose_and_store ( add) at n=1: #errors: 0 +DEAL:double::transpose_and_store (noadd) at n=1: #errors: 0 +DEAL:double::load_and_transpose at n=9: #errors: 0 +DEAL:double::transpose_and_store ( add) at n=9: #errors: 0 +DEAL:double::transpose_and_store (noadd) at n=9: #errors: 0 +DEAL:double::load_and_transpose at n=32: #errors: 0 +DEAL:double::transpose_and_store ( add) at n=32: #errors: 0 +DEAL:double::transpose_and_store (noadd) at n=32: #errors: 0 +DEAL:float::load_and_transpose at n=1: #errors: 0 +DEAL:float::transpose_and_store ( add) at n=1: #errors: 0 +DEAL:float::transpose_and_store (noadd) at n=1: #errors: 0 +DEAL:float::load_and_transpose at n=9: #errors: 0 +DEAL:float::transpose_and_store ( add) at n=9: #errors: 0 +DEAL:float::transpose_and_store (noadd) at n=9: #errors: 0 +DEAL:float::load_and_transpose at n=32: #errors: 0 +DEAL:float::transpose_and_store ( add) at n=32: #errors: 0 +DEAL:float::transpose_and_store (noadd) at n=32: #errors: 0 +DEAL:long double::load_and_transpose at n=4: #errors: 0 +DEAL:long double::transpose_and_store ( add) at n=4: #errors: 0 +DEAL:long double::transpose_and_store (noadd) at n=4: #errors: 0
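
As a usage illustration of the two new free functions, the following is a
minimal sketch based on the loop semantics documented in vectorization.h
above (out[i][v] = in[offsets[v]+i] and its inverse). It is not part of the
commit: the array sizes, offsets, and values are made up for the example, and
the storage is sized for at most eight lanes per the documented double-precision
widths up to AVX-512.

@code
#include <deal.II/base/vectorization.h>

#include <iostream>

using namespace dealii;

int main ()
{
  const unsigned int n_entries = 4;                  // values read per lane (illustrative)
  const unsigned int n_lanes   =
    VectorizedArray<double>::n_array_elements;       // SIMD width for double on this build

  // Array-of-struct storage: lane v owns the contiguous slice
  // [offsets[v], offsets[v]+n_entries). Sized for up to 8 lanes (double).
  double       plain[8 * n_entries];
  unsigned int offsets[8];
  for (unsigned int v = 0; v < n_lanes; ++v)
    {
      offsets[v] = v * n_entries;
      for (unsigned int i = 0; i < n_entries; ++i)
        plain[offsets[v] + i] = v + 10. * i;
    }

  // Gather into struct-of-array form: packed[i][v] == plain[offsets[v]+i].
  VectorizedArray<double> packed[n_entries];
  vectorized_load_and_transpose (n_entries, plain, offsets, packed);

  // Work on all lanes of each entry with a single vectorized operation.
  for (unsigned int i = 0; i < n_entries; ++i)
    packed[i] *= make_vectorized_array (2.);

  // Scatter back: add_into == false overwrites the output entries,
  // add_into == true would accumulate into them instead.
  vectorized_transpose_and_store (false, n_entries, packed, offsets, plain);

  for (unsigned int i = 0; i < n_entries; ++i)
    std::cout << "entry " << i << ", lane 0: " << plain[offsets[0] + i]
              << std::endl;
}
@endcode

On targets without SSE2/AVX support the generic scalar loops shown earlier are
used for both functions, while the specializations in this patch replace them
with unpack/shuffle intrinsics; the calling code above is the same in either
case.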