From: maier

Use -DDEAL_II_SETUP_DEFAULT_COMPILER_FLAGS=OFF and set all
necessary compiler flags by hand via
-
-
- CMAKE_CXX_FLAGS - used during all builds
- DEAL_II_CXX_FLAGS_DEBUG - additional flags for the debug library
- DEAL_II_CXX_FLAGS_RELEASE - additional flags for the release library
-
+
+CMAKE_CXX_FLAGS - used during all builds
+DEAL_II_CXX_FLAGS_DEBUG - additional flags for the debug library
+DEAL_II_CXX_FLAGS_RELEASE - additional flags for the release library
+
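For illustration, a fully manual configuration could then look like this
(the flag values below are placeholders, not recommendations):

    $ cmake -DDEAL_II_SETUP_DEFAULT_COMPILER_FLAGS=OFF \
            -DCMAKE_CXX_FLAGS="-Wall" \
            -DDEAL_II_CXX_FLAGS_DEBUG="-g -O0" \
            -DDEAL_II_CXX_FLAGS_RELEASE="-O3" \
            ../deal.II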
After that, try to compile the library with minimal external
dependencies (-DDEAL_II_ALLOW_AUTODETECTION=OFF; for further
information see the deal.II CMake documentation).
@@ -53,23 +52,21 @@
-
- cmake/setup_compiler_flags.cmake
- cmake/setup_compiler_flags_gnu.cmake
- cmake/setup_compiler_flags_icc.cmake
-
+
+cmake/setup_compiler_flags.cmake
+cmake/setup_compiler_flags_gnu.cmake
+cmake/setup_compiler_flags_icc.cmake
+
Patches are highly welcome! See here
for information on how to get in contact with us.
-
- cmake/checks/check_for_compiler_bugs.cmake
- cmake/checks/check_for_compiler_features.cmake
- cmake/checks/check_for_cxx_features.cmake
- include/deal.II/base/config.h.in
-
+
+cmake/checks/check_01_for_compiler_features.cmake
+cmake/checks/check_01_for_cxx_features.cmake
+cmake/checks/check_03_for_compiler_bugs.cmake
+include/deal.II/base/config.h.in
+
to see how compiler specific checks are done.
-
- cmake/checks/check_for_compiler_bugs.cmake
- cmake/checks/check_for_compiler_features.cmake
- cmake/checks/check_for_cxx_features.cmake
- cmake/checks/check_for_system_features.cmake
- include/deal.II/base/config.h.in
-
+
+cmake/checks/check_for_compiler_bugs.cmake
+cmake/checks/check_for_compiler_features.cmake
+cmake/checks/check_for_cxx_features.cmake
+cmake/checks/check_for_system_features.cmake
+include/deal.II/base/config.h.in
+
+to see how platform and compiler specific checks are done.
cmake
with something like:
-
-
- cmake -DCMAKE_TOOLCHAIN_FILE=<...>/Toolchain.sample
-       -DDEAL_II_FORCE_BUNDLED_BOOST=ON
-       -DDEAL_II_ALLOW_AUTODETECTION=OFF
-       ../deal.II
-
+
+$ cmake -DCMAKE_TOOLCHAIN_FILE=<...>/Toolchain.sample \
+        -DDEAL_II_FORCE_BUNDLED_BOOST=ON \
+        -DDEAL_II_ALLOW_AUTODETECTION=OFF \
+        ../deal.II
+
where
CMAKE_TOOLCHAIN_FILE
points to the toolchain file.
The remaining configuration can be adjusted at will; see the documentation.
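For reference, a toolchain file is a small CMake script that pre-sets the
cross environment. A minimal sketch (all paths are placeholders for your
actual cross toolchain; the Toolchain.sample file referenced above gives a
complete example) could look like:

    SET(CMAKE_SYSTEM_NAME Linux)
    SET(CMAKE_C_COMPILER   /path/to/cross/bin/target-gcc)
    SET(CMAKE_CXX_COMPILER /path/to/cross/bin/target-g++)
    SET(CMAKE_FIND_ROOT_PATH /path/to/cross/sysroot)
    SET(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
    SET(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
    SET(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)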
diff --git a/deal.II/doc/developers/testsuite.html b/deal.II/doc/developers/testsuite.html
index a77a74b014..2cbdfad8d8 100644
--- a/deal.II/doc/developers/testsuite.html
+++ b/deal.II/doc/developers/testsuite.html
@@ -81,18 +81,17 @@
If you're impatient, use the following commands:
-
-
- $ mkdir new_directory
- $ cd new_directory
- $ svn checkout https://svn.dealii.org/trunk .
- $ mkdir build
- $ cd build
- $ cmake ../deal.II
- $ make -j16
- $ make -j16 setup_tests
- $ ctest -j16
-
+
+$ mkdir new_directory
+$ cd new_directory
+$ svn checkout https://svn.dealii.org/trunk .
+$ mkdir build
+$ cd build
+$ cmake ../deal.II
+$ make -j16
+$ make -j16 setup_tests
+$ ctest -j16
+
The exact meaning of all of these commands will be explained in much
greater detail below.
@@ -111,19 +110,17 @@
To download the testsuite, check it out from the subversion repository,
along with deal.II. To this end, go to an empty directory where you want
to test deal.II and do this:
-
-
- $ svn checkout https://svn.dealii.org/trunk .
-
+
+$ svn checkout https://svn.dealii.org/trunk .
+
(The period at the end puts everything from under
trunk/
into the current directory, rather than creating a
new trunk/
directory.) You will then have
two folders:
-
-
- ./deal.II
- ./tests
-
+
+./deal.II
+./tests
+
@@ -132,10 +129,9 @@
(../tests
). If your test directory is at a different
location, you have to provide a hint during configuration by specifying
TEST_DIR
:
-
-
- $ cmake -DTEST_DIR=<...>
-
+
+$ cmake -DTEST_DIR=<...>
+
@@ -145,10 +141,9 @@
To enable the testsuite, configure and build deal.II in a build
directory as normal (installation is not necessary). After that you can
set up the testsuite via the "setup_tests" target:
-
-
- $ make setup_tests
-
+
+$ make setup_tests
+
This will set up all tests supported by the current configuration. The
testsuite can now be run in the current build directory as described
below.
@@ -156,39 +151,37 @@
Setup can be fine-tuned using the following commands:
-
-
- $ make regen_tests  - reruns configure stage in every testsuite subproject
- $ make clean_tests  - runs the 'clean' target in every testsuite subproject
- $ make prune_tests  - removes all testsuite subprojects
-
+
+$ make regen_tests  - reruns configure stage in every testsuite subproject
+$ make clean_tests  - runs the 'clean' target in every testsuite subproject
+$ make prune_tests  - removes all testsuite subprojects
+
In addition, when setting up the testsuite, the following environment
variables can be used to override default behavior when
calling make setup_tests
:
-
-
- TEST_DIFF
-   - The diff tool and command line to use for comparison. If numdiff is
-     available it defaults to "numdiff -a 1e-6 -q", otherwise plain diff
-     is used.
-
- TEST_TIME_LIMIT
-   - The time limit (in seconds) a single test is allowed to take. Defaults
-     to 180 seconds
-
- TEST_PICKUP_REGEX
-   - A regular expression to select only a subset of tests during setup.
-     An empty string is interpreted as a catchall (this is the default).
-
- TEST_OVERRIDE_LOCATION
-   - If TEST_OVERRIDE_LOCATION is set, a comparison file category/test.output
-     will be substituted by ${TEST_OVERRIDE_LOCATION}/category/test.output if
-     the latter exists.
-
+
+TEST_DIFF
+  - The diff tool and command line to use for comparison. If numdiff is
+    available it defaults to "numdiff -a 1e-6 -q", otherwise plain diff
+    is used.
+
+TEST_TIME_LIMIT
+  - The time limit (in seconds) a single test is allowed to take. Defaults
+    to 180 seconds
+
+TEST_PICKUP_REGEX
+  - A regular expression to select only a subset of tests during setup.
+    An empty string is interpreted as a catchall (this is the default).
+
+TEST_OVERRIDE_LOCATION
+  - If TEST_OVERRIDE_LOCATION is set, a comparison file category/test.output
+    will be substituted by ${TEST_OVERRIDE_LOCATION}/category/test.output if
+    the latter exists.
+
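For example (hypothetical values, assuming a POSIX shell), the following
would restrict setup to the base tests and tighten the time limit:

    $ TEST_PICKUP_REGEX="^base/" TEST_TIME_LIMIT=60 make setup_tests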
@@ -196,10 +189,9 @@
The testsuite can now be run in the build directory via
-
-
- $ ctest [-j N]
-
+
+$ ctest [-j N]
+
Here,
N
is the number of concurrent tests that should be
run, in the same way as you can say make -jN
. The testsuite
is huge and will need around 12h on current computers
@@ -210,11 +202,10 @@
If you only want to run a subset of tests
matching a regular expression, or if you want to exclude tests matching
a regular expression, you can use
-
-
- $ ctest [-j N] -R '<positive regular expression>'
- $ ctest [-j N] -E '<negative regular expression>'
-
+
+$ ctest [-j N] -R '<positive regular expression>'
+$ ctest [-j N] -E '<negative regular expression>'
+
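For instance (an illustrative combination of both options), the following
would select all base tests except the threading ones:

    $ ctest -j4 -R "base/" -E "thread"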
@@ -236,69 +227,67 @@
A typical output of a ctest
invocation looks like:
-
-
- $ ctest -j4 -R "base/thread_validity"
- Test project /tmp/trunk/build
-       Start 747: base/thread_validity_01.debug
-       Start 748: base/thread_validity_01.release
-       Start 775: base/thread_validity_05.debug
-       Start 776: base/thread_validity_05.release
-  1/24 Test #776: base/thread_validity_05.release ...   Passed    1.89 sec
-  2/24 Test #748: base/thread_validity_01.release ...   Passed    1.89 sec
-       Start 839: base/thread_validity_03.debug
-       Start 840: base/thread_validity_03.release
-  3/24 Test #747: base/thread_validity_01.debug .....   Passed    2.68 sec
- [...]
-       Start 1077: base/thread_validity_08.debug
-       Start 1078: base/thread_validity_08.release
- 16/24 Test #1078: base/thread_validity_08.release ...***Failed    2.86 sec
- 18/24 Test #1077: base/thread_validity_08.debug .....***Failed    3.97 sec
- [...]
-
- 92% tests passed, 2 tests failed out of 24
-
- Total Test time (real) =  20.43 sec
-
- The following tests FAILED:
-     1077 - base/thread_validity_08.debug (Failed)
-     1078 - base/thread_validity_08.release (Failed)
- Errors while running CTest
-
+
+$ ctest -j4 -R "base/thread_validity"
+Test project /tmp/trunk/build
+      Start 747: base/thread_validity_01.debug
+      Start 748: base/thread_validity_01.release
+      Start 775: base/thread_validity_05.debug
+      Start 776: base/thread_validity_05.release
+ 1/24 Test #776: base/thread_validity_05.release ...   Passed    1.89 sec
+ 2/24 Test #748: base/thread_validity_01.release ...   Passed    1.89 sec
+      Start 839: base/thread_validity_03.debug
+      Start 840: base/thread_validity_03.release
+ 3/24 Test #747: base/thread_validity_01.debug .....   Passed    2.68 sec
+[...]
+      Start 1077: base/thread_validity_08.debug
+      Start 1078: base/thread_validity_08.release
+16/24 Test #1078: base/thread_validity_08.release ...***Failed    2.86 sec
+18/24 Test #1077: base/thread_validity_08.debug .....***Failed    3.97 sec
+[...]
+
+92% tests passed, 2 tests failed out of 24
+
+Total Test time (real) =  20.43 sec
+
+The following tests FAILED:
+    1077 - base/thread_validity_08.debug (Failed)
+    1078 - base/thread_validity_08.release (Failed)
+Errors while running CTest
+
If a test failed (like
base/thread_validity_08.debug
in the above
example output), you might want to find out what exactly went wrong. To
this end, you can search
through Testing/Temporary/LastTest.log
for the exact output
of the test, or you can rerun this one test, specifying -V
to select verbose output of tests:
-
-
- $ ctest -V -R "base/thread_validity_08.debug"
- [...]
- test 1077
-     Start 1077: base/thread_validity_08.debug
-
- 1077: Test command: [...]
- 1077: Test timeout computed to be: 600
- 1077: Test base/thread_validity_08.debug: RUN
- 1077: =============================== OUTPUT BEGIN ===============================
- 1077: Built target thread_validity_08.debug
- 1077: Generating thread_validity_08.debug/output
- 1077: terminate called without an active exception
- 1077: /bin/sh: line 1: 18030 Aborted [...]/thread_validity_08.debug
- 1077: base/thread_validity_08.debug: BUILD successful.
- 1077: base/thread_validity_08.debug: RUN failed. Output:
- 1077: DEAL::OK.
- 1077: gmake[3]: *** [thread_validity_08.debug/output] Error 1
- 1077: gmake[2]: *** [CMakeFiles/thread_validity_08.debug.diff.dir/all] Error 2
- 1077: gmake[1]: *** [CMakeFiles/thread_validity_08.debug.diff.dir/rule] Error 2
- 1077: gmake: *** [thread_validity_08.debug.diff] Error 2
- 1077:
- 1077:
- 1077: base/thread_validity_08.debug: ****** RUN failed *******
- 1077:
- 1077: =============================== OUTPUT END ===============================
-
+
+$ ctest -V -R "base/thread_validity_08.debug"
+[...]
+test 1077
+    Start 1077: base/thread_validity_08.debug
+
+1077: Test command: [...]
+1077: Test timeout computed to be: 600
+1077: Test base/thread_validity_08.debug: RUN
+1077: =============================== OUTPUT BEGIN ===============================
+1077: Built target thread_validity_08.debug
+1077: Generating thread_validity_08.debug/output
+1077: terminate called without an active exception
+1077: /bin/sh: line 1: 18030 Aborted [...]/thread_validity_08.debug
+1077: base/thread_validity_08.debug: BUILD successful.
+1077: base/thread_validity_08.debug: RUN failed. Output:
+1077: DEAL::OK.
+1077: gmake[3]: *** [thread_validity_08.debug/output] Error 1
+1077: gmake[2]: *** [CMakeFiles/thread_validity_08.debug.diff.dir/all] Error 2
+1077: gmake[1]: *** [CMakeFiles/thread_validity_08.debug.diff.dir/rule] Error 2
+1077: gmake: *** [thread_validity_08.debug.diff] Error 2
+1077:
+1077:
+1077: base/thread_validity_08.debug: ****** RUN failed *******
+1077:
+1077: =============================== OUTPUT END ===============================
+
So this specific test aborted in the
RUN
stage.
@@ -306,25 +295,23 @@
The general output for a successful test <test>
in
category <category>
for build type
<build>
is
-
-
- xx: Test <category>/<test>.<build>: PASSED
- xx: =============================== OUTPUT BEGIN ===============================
- xx: [...]
- xx: <category>/<test>.<build>: PASSED.
- xx: =============================== OUTPUT END ===============================
-
+
+xx: Test <category>/<test>.<build>: PASSED
+xx: =============================== OUTPUT BEGIN ===============================
+xx: [...]
+xx: <category>/<test>.<build>: PASSED.
+xx: =============================== OUTPUT END ===============================
+
And for a test that fails in stage
<stage>
:
-
-
- xx: Test <category>/<test>.<build>: <stage>
- xx: =============================== OUTPUT BEGIN ===============================
- xx: [...]
- xx: <category>/<test>.<build>: <stage> failed. [...]
- xx:
- xx: <category>/<test>.<build>: ****** <stage> failed *******
- xx: =============================== OUTPUT END ===============================
-
+
+xx: Test <category>/<test>.<build>: <stage>
+xx: =============================== OUTPUT BEGIN ===============================
+xx: [...]
+xx: <category>/<test>.<build>: <stage> failed. [...]
+xx:
+xx: <category>/<test>.<build>: ****** <stage> failed *******
+xx: =============================== OUTPUT END ===============================
+
Here,
<stage>
indicates the stage in which the
test failed:
A test usually consists of a source file and an output file for
comparison (under the testsuite directory tests
):
-
-
- category/test.cc
- category/test.output
-
+
+category/test.cc
+category/test.output
+
category
will be one of the existing subdirectories
under tests/
, e.g., lac/
, base/
,
or mpi/
. Historically, we have grouped tests into the
@@ -446,30 +432,27 @@
The comparison file can actually be named in a more complex way than
just category/test.output
:
-
-
- category/test.[with_<feature>=<on|off>.]*[mpirun=<x>.][expect=<y>.][binary.][<debug|release>.]output
-
+
+category/test.[with_<feature>=<on|off>.]*[mpirun=<x>.][expect=<y>.][binary.][<debug|release>.]output
+
Normally, a test will be set up so that it runs twice, once in debug and
once in release configuration. If a specific test can only be run in
debug or release configuration but not in both, it is possible to
restrict the setup by prepending
.debug
or .release
directly before
.output
, e.g.:
-
-
- category/test.debug.output
-
+
+category/test.debug.output
+
This way, the test will only be set up to build and run against the
debug library. If a test should run in both configurations but, for
some reason, produces different output (e.g., because it triggers an
assertion in debug mode), then you can just provide two different
output files:
-
-
- category/test.debug.output
- category/test.release.output
-
+
+category/test.debug.output
+category/test.release.output
+
@@ -478,24 +461,21 @@
In a similar vein as for build configurations, it is possible to
restrict tests to specific feature configurations, e.g.:
-
-
- category/test.with_umfpack=on.output, or
- category/test.with_zlib=off.output
-
+
+category/test.with_umfpack=on.output, or
+category/test.with_zlib=off.output
+
These tests will only be set up if the specified feature was configured.
It is possible to provide different output files for disabled/enabled
features, e.g.
-
-
- category/test.with_64bit_indices=on.output
- category/test.with_64bit_indices=off.output
-
+
+category/test.with_64bit_indices=on.output
+category/test.with_64bit_indices=off.output
+
It is also possible to declare multiple constraints in sequence, e.g.
-
-
- category/test.with_umfpack=on.with_zlib=on.output
-
+
+category/test.with_umfpack=on.with_zlib=on.output
+
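Putting the pieces of the naming scheme together, a single (hypothetical)
comparison file name can combine several qualifiers at once:

    category/test.with_umfpack=on.mpirun=2.debug.output

This test would only be set up if UMFPACK is configured; it would run on
2 MPI processes and be built against the debug library only.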
Note: The tests in some subdirectories of tests/
are
@@ -516,10 +496,9 @@
If a test should be run with MPI in parallel, the number of MPI
processes N
with which a program needs to be run for
comparison with a given output file is specified as follows:
-
-
- category/test.mpirun=N.output
-
+
+category/test.mpirun=N.output
+
It is quite typical for an MPI-enabled test to have multiple output
files for different numbers of MPI processes.
@@ -529,10 +508,9 @@
If a test produces binary output, add binary to the
output file name to indicate this:
-
-
- category/test.binary.output
-
+
+category/test.binary.output
+
The testsuite ensures that a diff tool suitable for comparing binary
output files is used instead of the default diff tool, which (as in the
case of
numdiff
) might be unable to compare binary files.
@@ -548,10 +526,9 @@
If (for some reason) the test should succeed when ending at a specific
test stage other than PASSED, you can specify it via
expect=<stage>
, e.g.:
-
-
- category/test.expect=run.output
-
+
+category/test.expect=run.output
+
@@ -572,8 +549,7 @@
For the testcase, we usually start from one of the existing tests, copy
and modify it to where it does what we'd like to test. Alternatively,
you can also start from a template like this:
-
-
+// ---------------------------------------------------------------------
// $Id$
//
@@ -590,7 +566,6 @@
//
// ---------------------------------------------------------------------
-
// a short (a few lines) description of what the program does

#include "../tests.h"
@@ -599,7 +574,6 @@
// all include files you need here
-
int main ()
{
  std::ofstream logfile("output");
@@ -612,7 +586,7 @@ int main ()
  return 0;
}
-
+
This code opens an output file output
in the current working
directory and then writes all output you generate to it, through the
@@ -640,22 +614,19 @@ int main ()
In order to run your new test, copy it to an appropriate category and
create an empty comparison file for it:
-
-
- category/my_new_test.cc
- category/my_new_test.output
-
+
+category/my_new_test.cc
+category/my_new_test.output
+
Now, rerun
-
-
- $ make setup_tests
-
+
+$ make setup_tests
+
so that your new test is picked up. After that it is possible to invoke
it with
-
-
- $ ctest -V -R "category/my_new_test"
-
+
+$ ctest -V -R "category/my_new_test"
+
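As a quick recap (using the hypothetical name my_new_test from above),
the whole cycle up to this point is:

    $ touch category/my_new_test.output     # create the empty comparison file
    $ make setup_tests                      # pick up the new test
    $ ctest -V -R "category/my_new_test"    # build and run it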
@@ -672,15 +643,13 @@ int main ()
The next step is to copy and rename this output file to the source
directory and replace the original comparison file with it:
-
-
- category/my_new_test.output
-
+
+category/my_new_test.output
+
At this point running the test again should be successful:
-
-
- $ ctest -V -R "category/my_new_test"
-
+
+$ ctest -V -R "category/my_new_test"
+
@@ -692,12 +661,11 @@ int main ()
interested in getting new tests. If you have subversion write access
already, you can add the new test and the expected output file:
-
-
- svn add category/my_new_test.cc
- svn add category/my_new_test.output
- svn commit -m "New test"
-
+
+svn add category/my_new_test.cc
+svn add category/my_new_test.output
+svn commit -m "New test"
+
If you don't have subversion write access, talk to us in the discussion
group; writing testcases is a worthy and laudable task, and we would
like to encourage it by giving people the opportunity to
@@ -714,14 +682,13 @@ int main ()
folder under
CMakeLists.txt
file into it containing
-
-
- CMAKE_MINIMUM_REQUIRED(VERSION 2.8.8)
- INCLUDE(${DEAL_II_SOURCE_DIR}/cmake/setup_testsubproject.cmake)
- PROJECT(testsuite CXX)
- INCLUDE(${DEAL_II_TARGET_CONFIG})
- DEAL_II_PICKUP_TESTS()
-
+
+CMAKE_MINIMUM_REQUIRED(VERSION 2.8.8)
+INCLUDE(${DEAL_II_SOURCE_DIR}/cmake/setup_testsubproject.cmake)
+PROJECT(testsuite CXX)
+INCLUDE(${DEAL_II_TARGET_CONFIG})
+DEAL_II_PICKUP_TESTS()
+
@@ -733,10 +700,9 @@ int main ()
CDash instance (http://cdash.kyomu.43-1.org/index.php?project=deal.II),
just invoke ctest within a build directory (or designated build
directory) with the
-S
option pointing to the
run_testsuite.cmake
script:
-
-
- $ ctest [...] -V -S ../tests/run_testsuite.cmake
-
+
+$ ctest [...] -V -S ../tests/run_testsuite.cmake
+
The script will run configure, build and ctest and submit the results
to the CDash server. It does not matter whether the configure, build or
ctest stages were run before that. Also in script mode, you can
@@ -751,60 +717,58 @@ int main ()
Note: The following variables can be set via
-
-
- ctest -D<variable>=<value> [...]
-
+
+ctest -D<variable>=<value> [...]
+
to control the behaviour of the
run_testsuite.cmake
script:
-
-
- CTEST_SOURCE_DIRECTORY
-   - The source directory of deal.II (usually ending in "[...]/deal.II"
-     (equivalent to https://svn.dealii.org/trunk/deal.II)
-     Note: This is _not_ the test directory ending in "[...]/tests"
-   - If unspecified, "../deal.II" and "../../$ relative to the location
-     of this script is used. If this is not a source directory, an error
-     is thrown.
-
- CTEST_BINARY_DIRECTORY
-   - The designated build directory (already configured, empty, or non
-     existent - see the information about TRACKs what will happen)
-   - If unspecified the current directory is used. If the current
-     directory is equal to CTEST_SOURCE_DIRECTORY or the "tests"
-     directory, an error is thrown.
-
- CTEST_CMAKE_GENERATOR
-   - The CMake Generator to use (e.g. "Unix Makefiles", or "Ninja", see
-     $ man cmake)
-   - If unspecified the current generator of a configured build directory
-     will be used, otherwise "Unix Makefiles".
-
- TRACK
-   - The track the test should be submitted to. Defaults to "Experimental".
-     Possible values are:
-
-     "Experimental"     - all tests that are not specifically "build" or
-                          "regression" tests should go into this track
-
-     "Build Tests"      - Build tests that configure and build in a
-                          clean directory and run the build tests
-                          "build_tests/*"
-
-     "Nightly"          - Reserved for nightly regression tests for
-                          build bots on various architectures
-
-     "Regression Tests" - Reserved for the regression tester
-
- CONFIG_FILE
-   - A configuration file (see docs/development/Config.sample)
-     that will be used during the configuration stage (invokes
-     $ cmake -C ${CONFIG_FILE}). This only has an effect if
-     CTEST_BINARY_DIRECTORY is empty.
-
- MAKEOPTS
-   - Additional options that will be passed directly to make (or ninja).
-
+
+CTEST_SOURCE_DIRECTORY
+  - The source directory of deal.II (usually ending in "[...]/deal.II"
+    (equivalent to https://svn.dealii.org/trunk/deal.II)
+    Note: This is _not_ the test directory ending in "[...]/tests"
+  - If unspecified, "../deal.II" and "../../$ relative to the location
+    of this script is used. If this is not a source directory, an error
+    is thrown.
+
+CTEST_BINARY_DIRECTORY
+  - The designated build directory (already configured, empty, or non
+    existent - see the information about TRACKs what will happen)
+  - If unspecified the current directory is used. If the current
+    directory is equal to CTEST_SOURCE_DIRECTORY or the "tests"
+    directory, an error is thrown.
+
+CTEST_CMAKE_GENERATOR
+  - The CMake Generator to use (e.g. "Unix Makefiles", or "Ninja", see
+    $ man cmake)
+  - If unspecified the current generator of a configured build directory
+    will be used, otherwise "Unix Makefiles".
+
+TRACK
+  - The track the test should be submitted to. Defaults to "Experimental".
+    Possible values are:
+
+    "Experimental"     - all tests that are not specifically "build" or
+                         "regression" tests should go into this track
+
+    "Build Tests"      - Build tests that configure and build in a
+                         clean directory and run the build tests
+                         "build_tests/*"
+
+    "Nightly"          - Reserved for nightly regression tests for
+                         build bots on various architectures
+
+    "Regression Tests" - Reserved for the regression tester
+
+CONFIG_FILE
+  - A configuration file (see docs/development/Config.sample)
+    that will be used during the configuration stage (invokes
+    $ cmake -C ${CONFIG_FILE}). This only has an effect if
+    CTEST_BINARY_DIRECTORY is empty.
+
+MAKEOPTS
+  - Additional options that will be passed directly to make (or ninja).
+
Furthermore, the variables described above can also be set and will be
handed automatically down to
cmake
.
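For example (illustrative values only), a submission using the Ninja
generator on the default experimental track could be started with:

    $ ctest -DCTEST_CMAKE_GENERATOR="Ninja" -DTRACK="Experimental" \
            -V -S ../tests/run_testsuite.cmake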
@@ -826,13 +790,12 @@ int main ()
test suite page (http://www.dealii.org/testsuite.html) to
participate. Assuming you checked out deal.II into the directory
deal.II
, running it is as simple as:
-
-
- cd deal.II
- mkdir build
- cd build
- ctest -j4 -S ../cmake/scripts/run_buildtest.cmake
-
+
+cd deal.II
+mkdir build
+cd build
+ctest -j4 -S ../cmake/scripts/run_buildtest.cmake
+
@@ -848,10 +811,9 @@ int main ()
version control. If you want to specify a build configuration for cmake
use a configuration file to preseed the cache as explained above:
-
-
- $ ctest -DCONFIG_FILE="[...]/Config.sample" [...]
-
+
+$ ctest -DCONFIG_FILE="[...]/Config.sample" [...]
+