From f4d553177a1c30f4d8b9d1c9371664a371e8c599 Mon Sep 17 00:00:00 2001
From: Martin Kronbichler
Date: Fri, 5 Apr 2013 10:57:55 +0000
Subject: [PATCH] Since native processor variants are not enabled in a default
 deal.II build, mention vectorization and how to enable the latest instruction
 set in the examples that explicitly use it.

git-svn-id: https://svn.dealii.org/trunk@29201 0785d39b-7218-0410-832d-ea1e28bc413d
---
 deal.II/examples/step-37/doc/intro.dox | 66 ++++++++++++++++++++++++--
 deal.II/examples/step-37/step-37.cc    | 54 +++------------------
 deal.II/examples/step-48/doc/intro.dox | 17 ++++---
 deal.II/examples/step-48/step-48.cc    |  8 ++--
 4 files changed, 83 insertions(+), 62 deletions(-)

diff --git a/deal.II/examples/step-37/doc/intro.dox b/deal.II/examples/step-37/doc/intro.dox
index 19e8a07e9f..08add8cb9c 100644
--- a/deal.II/examples/step-37/doc/intro.dox
+++ b/deal.II/examples/step-37/doc/intro.dox
@@ -410,8 +410,66 @@ thereby decreases efficiency (for more detail see e.g. Y. Saad,
 Iterative Methods for Sparse Linear Systems, SIAM, 2nd edition, 2003,
 chapters 11 & 12).
 
-The implementation into the multigrid framework is then straightforward. The multigrid implementation in this
-program is based on an earlier version of step-16 that demonstrated multigrid
-on uniformly refined grids.
-
+The implementation into the multigrid framework is then straightforward. The
+multigrid implementation in this program is based on a simplified version of
+step-16 that disregards adaptivity.
+
+

+<h3>Using CPU-dependent instructions (vectorization)</h3>

+
+The computational kernels for evaluation in FEEvaluation are written in a way
+to optimally use computational resources. Indeed, they operate not on double
+data types, but on something we call VectorizedArray (check e.g. the return
+type of FEEvaluationBase::get_value, which is a VectorizedArray for a scalar
+element and a Tensor of VectorizedArray for a vector finite element).
+VectorizedArray is a short array of doubles or floats whose length depends on
+the particular computer system in use. For example, systems based on x86-64
+support the streaming SIMD extensions (SSE), where the processor's vector
+units can process two doubles (or four single-precision floats) with one CPU
+instruction. Newer processors with support for the so-called advanced vector
+extensions (AVX) with 256-bit operands can process four doubles and eight
+floats, respectively. Vectorization is a single-instruction/multiple-data
+(SIMD) concept, that is, one CPU instruction is used to process multiple data
+values at once. Often, finite element programs do not use vectorization
+explicitly, as the benefits of this concept lie only in arithmetically
+intensive operations. The bulk of a typical finite element workload is memory
+bandwidth limited (operations on sparse matrices and vectors), where the
+additional computational power is useless.
+
+Behind the scenes, optimized BLAS packages might heavily rely on
+vectorization, though. Also, optimizing compilers might automatically
+transform loops involving standard code into more efficient vectorized form.
+However, the data flow must be very regular in order for compilers to produce
+efficient code. For example, the automatic vectorization of the prototype
+operation that benefits from vectorization, matrix-matrix products, fails on
+most compilers (as of writing this tutorial in early 2012, neither gcc-4.6
+nor the Intel compiler v. 12 manages to produce useful vectorized code for
+the FullMatrix::mmult function, not even in the simpler case where the matrix
+bounds are compile-time constants instead of run-time constants as in
+FullMatrix::mmult). The main reason for this is that the information to be
+processed at the innermost loop (that is where vectorization is applied) is
+not necessarily a multiple of the vector length, leaving parts of the
+resources unused. Moreover, the data that can potentially be processed
+together might not be laid out in a contiguous way in memory or not with the
+necessary alignment to address boundaries that are needed by the processor.
+Or the compiler might not be able to prove that data arrays do not overlap
+when loading several elements at once.
+
+In the matrix-free implementation in deal.II, we have therefore chosen to
+apply vectorization at the level which is most appropriate for finite element
+computations: the cell-wise computations are typically exactly the same for
+all cells (except for reading from and writing to vectors), and hence SIMD
+can be used to process several cells at once. In all that follows, you can
+think of a VectorizedArray as holding data from several cells. Remember that
+it is not related to the spatial dimension and the number of elements e.g. in
+a Tensor or Point.
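+
+As a small illustration of this concept (only a sketch, assuming a system
+with SSE2 so that a VectorizedArray<double> holds two entries, i.e., data
+from two cells at once), the data type behaves like a short array on which
+all arithmetic acts entry-wise:
+
+@code
+#include <deal.II/base/point.h>
+#include <deal.II/base/vectorization.h>
+
+using namespace dealii;
+
+int main ()
+{
+  // One VectorizedArray holds one value per cell in the current batch:
+  // two values with SSE2, four with AVX.
+  VectorizedArray<double> a, b;
+  a = 1.;       // broadcast: all entries get the value 1.0
+  b[0] = 2.;    // value belonging to the first cell of the batch
+  b[1] = 3.;    // value belonging to the second cell of the batch
+
+  // One addition processes the data of all cells in the batch at once.
+  const VectorizedArray<double> c = a + b;
+
+  // A vectorized point bundles the coordinates of several points, again
+  // one per cell: p[0] is a VectorizedArray holding all x-coordinates.
+  Point<2,VectorizedArray<double> > p;
+  p[0] = c;
+  p[1] = a;
+
+  return 0;
+}
+@endcode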
+
+Note that vectorization depends on the CPU that deal.II is run on. In order
+to generate the fastest kernels of FEEvaluation for your computer, you should
+compile deal.II with the so-called native processor variant. When using the
+gcc compiler, it can be enabled by setting the variable CMAKE_CXX_FLAGS to
+"-march=native" in the cmake build settings (on the command line, specify
+-DCMAKE_CXX_FLAGS="-march=native"; see the deal.II README for more
+information). Similar options exist for other compilers.
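+
+To illustrate how the same code can serve both the scalar and the vectorized
+case, consider the following sketch of a small function templated on the
+number type (the function name and the formula are made up for this
+illustration and are not the actual coefficient of this tutorial program):
+the very same code can be called with a plain double as well as with a
+VectorizedArray<double>, in which case it evaluates the expression in the
+points of several cells at once.
+
+@code
+#include <deal.II/base/point.h>
+#include <deal.II/base/vectorization.h>
+
+using namespace dealii;
+
+// Works for number=double and for number=VectorizedArray<double> alike,
+// because VectorizedArray provides the usual arithmetic operators.
+template <int dim, typename number>
+number coefficient_value (const Point<dim,number> &p)
+{
+  return 1. / (1. + p.square());
+}
+
+int main ()
+{
+  // Scalar evaluation at a single point ...
+  const Point<2> x (0.5, 0.5);
+  const double   value_scalar = coefficient_value (x);
+
+  // ... and vectorized evaluation at one point per cell of a batch.
+  Point<2,VectorizedArray<double> > xv;
+  xv[0] = 0.5;   // x-coordinates of all points in the batch
+  xv[1] = 0.5;   // y-coordinates of all points in the batch
+  const VectorizedArray<double> value_batch = coefficient_value (xv);
+
+  (void) value_scalar;
+  (void) value_batch;
+  return 0;
+}
+@endcode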
diff --git a/deal.II/examples/step-37/step-37.cc b/deal.II/examples/step-37/step-37.cc
index 59dc99a04b..453e0ae72b 100644
--- a/deal.II/examples/step-37/step-37.cc
+++ b/deal.II/examples/step-37/step-37.cc
@@ -98,54 +98,14 @@ namespace Step37
   // This is the new function mentioned above: Evaluate the coefficient for
-  // abstract type @p number: It might be just a usual double, but it can also
+  // abstract type @p number. It might be just a usual double, but it can also
   // be a somewhat more complicated type that we call VectorizedArray. This
-  // data type is essentially a short array of doubles whose length depends on
-  // the particular computer system in use. For example, systems based on
-  // x86-64 support the streaming SIMD extensions (SSE), where the processor's
-  // vector units can process two doubles (or four single-precision floats) by
-  // one CPU instruction. Newer processors with support for the so-called
-  // advanced vector extensions (AVX) with 256 bit operands can use four
-  // doubles and eight floats, respectively. Vectorization is a
-  // single-instruction/multiple-data (SIMD) concept, that is, one CPU
-  // instruction is used to process multiple data values at once. Often,
-  // finite element programs do not use vectorization explicitly as the
-  // benefits of this concept are only in arithmetic intensive operations. The
-  // bulk of typical finite element workloads are memory bandwidth limited
-  // (operations on sparse matrices and vectors) where the additional
-  // computational power is useless.
-  //
-  // Behind the scenes, optimized BLAS packages might heavily rely on
-  // vectorization, though. Also, optimizing compilers might automatically
-  // transform loops involving standard code into more efficient vectorized
-  // form. However, the data flow must be very regular in order for compilers
-  // to produce efficient code. For example, already the automatic
-  // vectorization of the prototype operation that benefits from
-  // vectorization, matrix-matrix products, fails on most compilers (as of
-  // writing this tutorial in early 2012, neither gcc-4.6 nor the Intel
-  // compiler v. 12 manage to produce useful vectorized code for the
-  // FullMatrix::mmult function, and not even on the simpler case where
-  // the matrix bounds are compile-time constants instead of run-time
-  // constants as in FullMatrix::mmult). The main reason for this is that the
-  // information to be processed at the innermost loop (that is where
-  // vectorization is applied) is not necessarily a multiple of the vector
-  // length, leaving parts of the resources unused. Moreover, the data that
-  // can potentially be processed together might not be laid out in a
-  // contiguous way in memory or not with the necessary alignment to address
-  // boundaries that are needed by the processor. Or the compiler might not be
-  // able to prove that.
-  //
-  // In the matrix-free implementation in deal.II, we have therefore chosen to
-  // apply vectorization at the level which is most appropriate for finite
-  // element computations: The cell-wise computations are typically exactly
-  // the same for all cells (except for reading from and writing to vectors),
-  // and hence SIMD can be used to process several cells at once. In all what
-  // follows, you can think of a VectorizedArray to hold data from several
-  // cells. For example, we evaluate the coefficient shown here not on a
-  // simple point as usually done, but we hand it a
-  // Point<dim,VectorizedArray<double> > point, which is actually a collection
-  // of two points in the case of SSE2. Do not confuse the entries in
-  // VectorizedArray with the different coordinates of the
+  // data type is essentially a short array of doubles as discussed in the
+  // introduction that holds data from several cells. For example, we evaluate
+  // the coefficient shown here not on a simple point as usually done, but we
+  // hand it a Point<dim,VectorizedArray<double> > point, which is actually a
+  // collection of two points in the case of SSE2. Do not confuse the entries
+  // in VectorizedArray with the different coordinates of the
   // point. Indeed, the data is laid out such that p[0] returns a
   // VectorizedArray, which in turn contains the x-coordinate for the
   // first point and the second point. You may access the coordinates
diff --git a/deal.II/examples/step-48/doc/intro.dox b/deal.II/examples/step-48/doc/intro.dox
index 04d9ee7e4e..59b9fab473 100644
--- a/deal.II/examples/step-48/doc/intro.dox
+++ b/deal.II/examples/step-48/doc/intro.dox
@@ -133,13 +133,16 @@ interaction.

 <h3>Parallelization</h3>

-The MatrixFree class comes with the option to be
-parallelized on three levels: MPI parallelization on clusters of
-distributed nodes, thread parallelization scheduled by the Threading
-Building Blocks library, and finally with a vectorization by clustering of two
-(or more) cells into a SIMD data type for the operator application. We
-have already seen how shared-memory parallelism can be exploited in
-step-37. Here, we demonstrate MPI parallelization.
+The MatrixFree class comes with the option to be parallelized on three levels:
+MPI parallelization on clusters of distributed nodes, thread parallelization
+scheduled by the Threading Building Blocks library, and finally with a
+vectorization by clustering of two (or more) cells into a SIMD data type for
+the operator application. As we have already discussed in step-37, you will
+get the best performance by using an instruction set specific to your system,
+e.g. with the cmake option
+-DCMAKE_CXX_FLAGS="-march=native". Shared memory (thread)
+parallelization was also exploited in step-37. Here, we demonstrate MPI
+parallelization.
 
 To facilitate parallelism with distributed memory (MPI), we use a special
 vector type parallel::distributed::Vector that holds the
diff --git a/deal.II/examples/step-48/step-48.cc b/deal.II/examples/step-48/step-48.cc
index 3b1d4d5acd..e5e71e7d20 100644
--- a/deal.II/examples/step-48/step-48.cc
+++ b/deal.II/examples/step-48/step-48.cc
@@ -467,11 +467,11 @@ namespace Step48
     data_out.build_patches ();
 
     const std::string filename =
-      "solution-" + Utilities::int_to_string (timestep_number, 3) +
-      "." + Utilities::int_to_string (Utilities::MPI::
-                                      this_mpi_process(MPI_COMM_WORLD),4);
+      "solution-" + Utilities::int_to_string (timestep_number, 3);
 
-    std::ofstream output ((filename + ".vtu").c_str());
+    std::ofstream output ((filename +
+                           "." + Utilities::int_to_string (Utilities::MPI::
+                                 this_mpi_process(MPI_COMM_WORLD),4) + ".vtu").c_str());
     data_out.write_vtu (output);
 
     if (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
-- 
2.39.5
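The change in the last hunk makes the VTU output go to one file per time step
and per MPI rank. As a rough sketch of the resulting naming scheme (using
plain C++ in place of deal.II's Utilities::int_to_string, whose second
argument gives the number of digits to pad to), the file written by MPI
process 3 in time step 1 would be named as follows:

@code
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

// Zero-padded file name: "solution-" + time step (3 digits) + "." +
// MPI rank (4 digits) + ".vtu".
std::string output_name (const unsigned int timestep, const unsigned int rank)
{
  std::ostringstream name;
  name << "solution-" << std::setw(3) << std::setfill('0') << timestep
       << '.' << std::setw(4) << rank << ".vtu";
  return name.str();
}

int main ()
{
  std::cout << output_name (1, 3) << std::endl;  // prints solution-001.0003.vtu
  return 0;
}
@endcode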