From 2e40380fd88dafcbda1e47c6d3c4c414e733716b Mon Sep 17 00:00:00 2001
From: Wolfgang Bangerth
- The deal.II testsuite consists of two parts, the
- build tests and the
- regression tests. While the build tests
- just check if the
+
+ The deal.II testsuite consists of two parts:
+ build tests and the
+ regression testsuite. While the build tests
+ are used to check that the
library can be compiled on different systems and with different (versions
of) compilers, the regression tests are actually run and their output
- compared with previously stored. These two testsuites are
+ compared with previously stored output files to verify that what
+ worked yesterday still works today. These two testsuites are
described below.
- deal.II has a testsuite that, at the time this article is written
- (mid-2013), has some 2,900 small programs (growing by roughly one per
- day) that we run every time we make a change to make sure that no
- existing functionality is broken. The expected output is also stored in
+
+ deal.II has a testsuite that has, at the time this article is written
+ (mid-2013), some 2,900 small programs (growing by roughly one per day)
+ that we run every time we make a change to make sure that no existing
+ functionality is broken. The expected output for every test is stored in
our subversion archive, and when you run a test you are notified if a
- test fails. These days, every time we add a significant piece of
- functionality, we add at least one new test to the testsuite, and we
- also do so if we fix a bug, in both cases to make sure that future
- changes do not break what we have just checked in. In addition, some
- machines run the tests every night and send the results back home; this
- is then converted into
+ test produces different output. These days, every time we add a
+ significant piece of functionality, we add at least one new test to the
+ testsuite, and we also do so if we fix a bug, in both cases to make sure
+ that future changes do not break what we have just checked in. Machines
+ running the tests send their results
+ back home, and these are then converted into
a webpage showing the status of our regression tests.
+ The deal.II Testsuite
-
-
-
-
- Quick instructions
+
+ If you're impatient, use the following commands:
+
+ $ mkdir new_directory
+ $ cd new_directory
+ $ svn checkout https://svn.dealii.org/trunk .
+ $ mkdir build
+ $ cd build
+ $ cmake ../deal.II
+ $ make -j16
+ $ make -j16 setup_test
+ $ ctest -j16
+
+ The exact meaning of all of these commands will be explained in much
+ greater detail below.
+
Here, some text is missing
+ In order to run it, you need to download and set up the testsuite
+ first. The following paragraphs detail how to do that.
+
-
- In order to run the testsuite you have to download it first. The
- easiest way is to directly check out the testsuite along with deal.II
- from the subversion repository. Go to an empty directory where you
+ To download the testsuite, check it out from the subversion repository,
+ along with deal.II. To this end, go to an empty directory where you
  want to test deal.II and do this:
        $ svn checkout https://svn.dealii.org/trunk .

- (Do not forget the dot "." at the end.) This should leave you with
+ (The period at the end puts everything from under
+ trunk/ into the current directory, rather than creating a
+ new trunk/ directory.) You will then have
  two folders:
@@ -92,15 +122,6 @@
-
- Note: If you want to check out the testsuite separately, you
- can do so with
-
-
-        $ svn checkout https://svn.dealii.org/trunk/tests
-
-
-
Note: CMake will pick up any testsuite that is located in a
tests
folder next to the source directory
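+ For example, with the layout from the quick instructions above (the
+ directory names are only illustrative), this means:
+
+        new_directory/deal.II    - the library sources
+        new_directory/tests      - the testsuite, picked up automatically
+        new_directory/build      - the build directory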
@@ -114,7 +135,7 @@
To enable the testsuite, configure and build deal.II in a build
@@ -124,13 +145,13 @@
$ make setup_test
- This will set up all tests supported by the current configuration
- (and not otherwise disabled due to TEST_PICKUP_REGEX).
- Now, the testsuite can be run in the _build directory_ via the
- ctest command (as will be explained in the next
- section).
+ This will set up all tests supported by the current configuration.
+ The testsuite can now be run in the current build directory as
+ described below.
+ Setup can be fine-tuned using the following commands:
        $ make clean_test    - runs the 'clean' target in every testsuite subproject

@@ -139,34 +160,29 @@
- The testsuite uses the following CMake variables:
+ In addition, when setting up the testsuite, the following environment
+ variables can be used to override default behavior when
+ calling make setup_test:
    TEST_PICKUP_REGEX
-     - A regular expression to filter tests. If this is a nonempty string
-       only tests that match the regular expression will be set up. An empty
-       string is interpreted as a catchall.
+     - A regular expression to select only a subset of tests during setup.
+       An empty string is interpreted as a catchall (this is the default).

    TEST_DIFF
-     - the diff tool and command line to use for comparison. If numdiff is
+     - The diff tool and command line to use for comparison. If numdiff is
        available it defaults to "numdiff -a 1e-6 -q", otherwise plain
        diff is used.

    TEST_TIME_LIMIT
-     - The time limit (in seconds) a single test is allowed to run. Defaults
+     - The time limit (in seconds) a single test is allowed to take. Defaults
        to 180 seconds
-
- These options can be set as environment variables prior to the call to the
- setup_test target:
-
-
-        $ TEST_PICKUP_REGEX="^build_tests/" TEST_TIME_LIMIT="120" make setup_test
-
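+ For example, to set up only a subset of tests with a lower time limit
+ (the regular expression is just an illustration), you could say:
+
+        $ TEST_PICKUP_REGEX="^base/" TEST_TIME_LIMIT="120" make setup_test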
Note: Specifying these options via environment variables is
- volatile, i.e. if $ make setup_test is invoked a second
+ volatile, i.e. if make setup_test is invoked a second
time without the variables set in environment, the option will be
reset to the default value. If you want to set these options
permanently, set them via cmake as CMake variable in the build
@@ -175,50 +191,34 @@
$ cmake -DTEST_PICKUP_REGEX="<regular expression>" .
- Please also note that a variable set via cmake always _overrides_ one
- set via environment. If you wish to reset such a variable again,
- undefine it in the cache:
-
-
-        $ cmake -UTEST_PICKUP_REGEX .
-
+ A variable set via cmake always overrides one
+ set via environment.
-
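+ If you want such a cached variable to revert to its default again, you
+ can undefine it in the CMake cache, e.g.:
+
+        $ cmake -UTEST_PICKUP_REGEX .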
- Now, the testsuite can be run in the _build directory_ via
-
-
-        $ ctest [-j x]
-
- where x is the number of concurrent tests that should be run. The
- testsuite is huge (!) and will need around 12h on current computer
- running single threaded. If you only want to run a subset of tests
- matching a regular expression, you can use
-
-        $ ctest [-j x] -R '<regular expression>'
+ The testsuite can now be run in the build directory via
+
+        $ ctest [-j N]
+
+ Here, N is the number of concurrent tests that should be
+ run, in the same way as you can say make -jN. The testsuite
+ is huge and will need around 12h on current computers
+ running single threaded.
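+ For example, to run the whole testsuite with eight concurrent tests
+ (an arbitrary, illustrative choice):
+
+        $ ctest -j8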
- Note: You can also invoke ctest under
- BUILD_DIR/tests or any subdirectory under
- BUILD_DIR/tests. This will only invoke the tests that
- are located under the subdirectory.
-
- To get verbose output of tests (which is otherwise just logged into
- Testing/Temporary/LastTest.log) specify
+ If you only want to run a subset of tests
+ matching a regular expression, or if you want to exclude tests matching
+ a regular expression, you can use
-        $ ctest -V [...]
+        $ ctest [-j N] -R '<positive regular expression>'
+        $ ctest [-j N] -E '<negative regular expression>'

- Alternatively, if you're just interested in verbose output of failing
- tests, --output-on-failure.
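+ For example (an illustrative invocation using the existing
+ base/ category), the following runs all tests under
+ base/ except those whose names contain "mpi", with four
+ tests running concurrently:
+
+        $ ctest -j4 -R "base/" -E "mpi"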
@@ -229,20 +229,12 @@
install the
numdiff tool that compares
stored and newly created output files based on floating point
- tolerances. To use it, simply export it via the PATH
+ tolerances. To use it, simply export where the numdiff
+ executable can be found via the PATH
  environment variable so that it can be found during
  make setup_test.
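+ For example (the installation path is of course hypothetical):
+
+        $ export PATH=/path/to/numdiff/bin:$PATH
+        $ make setup_test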
- In a similar vein, there is also an option to disable tests matching a
- regular expression:
-
-
-        $ ctest -E '<regular expression>' [...]
-
-
-
base/thread_validity_08.debug
in above
- example output), you might want to find out what exactly went wrong.
- So, invoke ctest
to just run the above test with verbose
- output:
+ example output), you might want to find out what exactly went wrong. To
+ this end, you can search
+ through Testing/Temporary/LastTest.log for the exact output
+ of the test, or you can rerun this one test, specifying -V
+ to select verbose output of tests:
$ ctest -V -R "base/thread_validity_08.debug" @@ -355,13 +349,19 @@+PASSED
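+ Alternatively, to find the relevant section in the log file directly
+ (the grep invocation is just one way of doing this):
+
+        $ grep -A 10 "thread_validity_08" Testing/Temporary/LastTest.log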
@@ -355,13 +349,19 @@
+
      PASSED: the test ran successfully
-
+ Typically, tests fail because the output has changed, and you will see
+ this in the DIFF phase of the test.
+
-Testsuite development
+Testsuite development
-Here, some text is missing
+
+ The following outlines what you need to know if you want to understand
+ how the testsuite actually works, for example because you may want to
+ add tests along with the functionality you are currently developing.
+
@@ -374,25 +374,38 @@
        category/test.cc
-        category/test[...].output
-
+        category/test.output
+
+ category will be one of the existing subdirectories
+ under tests/, e.g., lac/, base/,
+ or mpi/. Historically, we have grouped tests into the
+ directories base/, lac/, deal.II/ depending on their
+ functionality, and bits/ if they were small unit tests, but
+ in practice we have not always followed this rigidly. There are also
+ more specialized directories trilinos/, petsc/,
+ serialization/, mpi/ etc., whose meaning is more obvious.
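+ As an illustration (the test name here is hypothetical), a test for
+ vector operations might consist of the two files
+
+        tests/lac/vector_operations_01.cc
+        tests/lac/vector_operations_01.output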
test.cc
must be a regular executable (i.e. having an
int main()
routine). It will be compiled, linked and
run. The executable should not output anything to cout
(at least under normal circumstances, i.e. no error condition),
instead the executable should output to a file output
- under the current working directory.
+ in the current working directory. In practice, we rarely write the
+ source files completely from scratch, but we find an existing test that
+ already does something similar and copy/modify it to fit our needs.
- In detail, for a regular test the following 3 stages will be run:
+ For a normal test, ctest will typically run the following 3
+ stages:
        BUILD: The build stage generates an executable in
        BUILD_DIR/tests/<category>/<test>.
-        RUN: The run stages invokes the executable that
-        generates an output file
+        RUN: The run stage then invokes the executable in the
+        directory where it is located. By convention, each test puts its
+        output into a file simply called output, which will
+        then be located in
+        BUILD_DIR/tests/<category>/<test>/output.
If the run fails (e.g. because the program aborts with an error
code) the file output
is renamed to
@@ -408,29 +421,21 @@
failing_diff
.
-
- The full file signature for a comparison file is
+ A comparison file can actually be named in a more complex way than
+ just category/test.output:

        category/test.[with_<feature>=<on|off>.]*[mpirun=<x>.][<debug|release>.]output

- which is explained in detail below.
-
-
-
- Normally, a test will be set up for debug and release configuration
- (if deal.II was configured with combined DebugRelease
- build type) or for the available build configuration (if deal.II was
- configured either with Debug
or with
- Release
only build type).
+ Normally, a test will be set up so that it runs twice, once in debug and
+ once in release configuration.
  If a specific test can only be run in debug or release configurations but
  not in both it is possible to restrict the setup by prepending
  .debug or .release directly before
@@ -439,22 +444,21 @@
category/test.debug.output
- This way, test will only be set up to build and run against the debug
- library.
-
- Note: It is possible to provide both configuration types at the
- same time:
+ This way, the test will only be set up to build and run against the debug
+ library. If a test should run in both configurations but, for some
+ reason, produces different output (e.g., because it triggers an
+ assertion in debug mode), then you can just provide two different output
+ files:

-        category/test.debug.output
-        category/test.release.output
+        category/test.debug.output
+        category/test.release.output

- This will set up two separate tests, one for the debug configuration that
- will be tested against test.debug.output, and similarly one for release.
+
+
-
  In a similar vein as for build configurations, it is possible to
  restrict tests to specific feature configurations, e.g.:

@@ -463,67 +467,65 @@
        category/test.with_umfpack=on.output, or
        category/test.with_zlib=off.output

- These tests will only be set up if the specified feature was configured
- accordingly.
-
-
-
- Note: It is possible to provide different output files for disabled/enabled
+ These tests will only be set up if the specified feature was configured.
+ It is possible to provide different output files for disabled/enabled
  features, e.g.
        category/test.with_64bit_indices=on.output
        category/test.with_64bit_indices=off.output
-
-
- Note: It is possible to declare multiple constraints subsequently, e.g.
+ It is also possible to combine multiple constraints, e.g.
category/test.with_umfpack=on.with_zlib=on.output
- Note: Quite a number of test categories are already guarded so
- that the contained tests will only be set up if the feature is
- enabled. In this case a feature constraint in the output file name is
- redundant and should be avoided. (Folders with guards are
+ Note: The tests in some subdirectories of tests/ are
+ automatically run only if some feature is enabled. In this case a
+ feature constraint encoded in the output file name is
+ redundant and should be avoided. In particular, this holds for
+ subdirectories
  distributed_grids, lapack,
  metis, petsc, slepc,
- trilinos, umfpack, gla,
- mpi)
+ trilinos, umfpack, gla, and
+ mpi.
- If a test should be run with mpirun in parallel, specify the number x of
- simultaneous processes in the following way:
+ If a test should be run with MPI in parallel, the number of MPI
+ processes N with which a program needs to be run for
+ comparison with a given output file is specified as follows:
-        category/test.mpirun=x.output
+        category/test.mpirun=N.output

+ It is quite typical for an MPI-enabled test to have multiple output
+ files for different numbers of MPI processes.
-
- Note: It is possible to provide multiple output files for different mpirun
- values.
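+ For example (file names following the pattern above), a test that
+ should be run with 1 and with 4 MPI processes would provide:
+
+        category/test.mpirun=1.output
+        category/test.mpirun=4.output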
- As mentioned above, we add a new test every
- time we add new functionality to the library or fix a bug. If you
- want to contribute code to the library, you should do this
- as well. Here's how: you need a testcase,
- a subdirectory with the same name as the test, and a file with the
- expected output.
+ We typically add one or more new tests every
+ time we add new functionality to the library or fix a bug. If you
+ want to contribute code to the library, you should do this
+ as well. Here's how: you need a testcase and a file with the
+ expected output.
- For the testcase, we usually start from a template like this:
-
+ For the testcase, we usually start from one of the existing tests, copy
+ and modify it to where it does what we'd like to test. Alternatively,
+ you can also start from a template like this:
+
+// ---------------------------------------------------------------------
 // $Id$
 //

@@ -544,8 +546,8 @@
 // a short (a few lines) description of what the program does

 #include "../tests.h"
-#include
-#include
+#include <iostream>
+#include <fstream>

 // all include files you need here

@@ -564,12 +566,12 @@ int main ()
 }

- You open an output file output in the current working
- directory and then write all output you generate to it, through the
+ This code opens an output file output in the current working
+ directory and then writes all output you generate to it, through the
  deallog stream. The deallog stream works like any other
  std::ostream except that it does a few more things behind the scenes
  that are helpful in this context. In the above
- case, we only write a zero to the output file. Most tests actually
+ case, we only write a zero to the output file. Most tests of course
  write computed data to the output file to make sure that whatever we
  compute is what we got when the test was first written.

@@ -580,16 +582,8 @@ int main ()
  have traditionally been into the
,lac/
,deal.II/
,fe/
,hp/
, ormultigrid/
directories, depending on - where the classes that are tested are located. - - -- We have started to create more atomic tests which - are usually very small and test only a single aspect of the - library, often only a single function. These tests go into the -
@@ -618,15 +612,13 @@ int main ()bits/
directory and often have names that are - composed of the name of the class being tested and a two-digit - number, e.g.,dof_tools_11
. There are + where the classes that are tested are located. More atomic tests often go + intobits/
. There are also directories for PETSc and Trilinos wrapper functionality.If you run your new test executable this way, the test should compile - and run successfully but fail in the diff stage (due to the empty + and run successfully but fail in the diff stage (because of the empty comparison file). You will get an output file -
BUILD_DIR/category/my_new_test/output
that should be - used to compare all future runs with. If the test is relatively - simple, it is often a good idea to look at the output and make sure - that the output is actually what you had expected. However, if you do - complex operations, this may sometimes be impossible, and in this - case we are quite happy with any reasonable output file just to make - sure that future invokations of the test yield the same results. +BUILD_DIR/category/my_new_test/output
. Take a look at it to + make sure that the output is what you had expected. (For complex tests, + it may sometimes be impossible to say whether the output is correct, and + in this case we sometimes just take it to make + sure that future invokations of the test yield the same results.)@@ -643,6 +635,7 @@ int main ()
@@ -666,7 +659,7 @@ int main ()
-
  Explain how to use run_testsuite.cmake in all imaginable
@@ -676,14 +669,10 @@ int main ()
-
- Update this section
-
+- With our build tests, we check if deal.II can be compiled on + Build tests are used to check that deal.II can be compiled on different systems and with different compilers as well as different configuration options. Results are collected in a database and can be accessed
++ What this does is to create a temporary directory, compile and build + deal.II in it, and for good measure also build the tutorial + programs. The fourth of the commands above then sends the resulting + status files to a daemon that presents this information on the website + linked to above. +
+
The build_test
script supports the following options:
@@ -723,7 +720,7 @@ int main ()

  An example configuration file can be found here. Options can be passed
  either via
- environment
+ environment variables
        export CONFIGFILE=MyConfiguration.conf

@@ -742,10 +739,13 @@ int main ()
  periodically, but not immediately after a mail has been received).
+
+
+Dedicated build tests
- There is a detailed example for dedicated build tests on the wiki.
-- 2.39.5