From d19af27a915866e77866a10b375492b1e56591d5 Mon Sep 17 00:00:00 2001
From: maier
Date: Sat, 5 Oct 2013 22:37:33 +0000
Subject: [PATCH] Some more documentation

git-svn-id: https://svn.dealii.org/branches/branch_port_the_testsuite@31151 0785d39b-7218-0410-832d-ea1e28bc413d
---
 deal.II/doc/developers/testsuite.html | 617 +++++++++++++++-----------
 tests/README                          | 114 +----
 2 files changed, 359 insertions(+), 372 deletions(-)

diff --git a/deal.II/doc/developers/testsuite.html b/deal.II/doc/developers/testsuite.html
index 7af6885db8..bec99858ed 100644
--- a/deal.II/doc/developers/testsuite.html
+++ b/deal.II/doc/developers/testsuite.html
@@ -17,15 +17,32 @@

The deal.II Testsuite

-
-    TODO: This is no longer up to date
-
-    The deal.II testsuite consists of two parts, the
-    build tests and the regression tests. While the build tests
-    just check if the library can be compiled on different systems and
-    with different (versions of) compilers, the regression tests are
-    actually run and their output compared with previously stored. These
-    two testsuites are described below.
-

+
+    The deal.II testsuite consists of two parts, the
+    build tests and the regression tests. While the
+    build tests just check whether the library can be compiled on
+    different systems and with different (versions of) compilers, the
+    regression tests are actually run and their output is compared with
+    previously stored reference output. These two testsuites are
+    described below.
+
+    deal.II has a testsuite that, at the time this article is written
+    (mid-2013), has some 2,900 small programs (growing by roughly one per
+    day) that we run every time we make a change to make sure that no
+    existing functionality is broken. The expected output is also stored
+    in our subversion archive, and when you run a test you are notified
+    if a test fails. These days, every time we add a significant piece of
+    functionality, we add at least one new test to the testsuite, and we
+    also do so if we fix a bug, in both cases to make sure that future
+    changes do not break what we have just checked in. In addition, some
+    machines run the tests every night and send the results back home;
+    this is then converted into a webpage showing the status of our
+    regression tests.
+

    @@ -36,16 +53,24 @@

  • Run the testsuite
-      ◦ Interpreting the output
+      ◦ How to interpret the output
+  • Testsuite development
+      ◦ General layout
+      ◦ Comparison file
+      ◦ Adding new tests
+  • Submit test results
  • The build tests
-  • The regression tests
  • Set up the testsuite

    +

    Here, some text is missing

    +

    Download the testsuite

    @@ -219,7 +244,7 @@

    -

    Interpreting the output

    +

    How to interpret the output

    A typical output of a ctest invocation looks like:

    @@ -256,157 +281,234 @@

    example output), you might want to find out what exactly went wrong. So,
    invoke ctest to just run the above test with verbose output:

    -
    -      $ ctest -V -R "base/thread_validity_08.debug"
    -      [...]
    -      test 1077
    -          Start 1077: base/thread_validity_08.debug
    +      
     
    -      1077: Test command: [...]
    -      1077: Test timeout computed to be: 600
    -      1077: Test base/thread_validity_08.debug: RUN
    -      1077: ===============================   OUTPUT BEGIN  ===============================
    -      1077: Built target thread_validity_08.debug
    -      1077: Generating thread_validity_08.debug/output
    -      1077: terminate called without an active exception
    -      1077: /bin/sh: line 1: 18030 Aborted [...]/thread_validity_08.debug
    -      1077: base/thread_validity_08.debug: BUILD successful.
    -      1077: base/thread_validity_08.debug: RUN failed. Output:
    -      1077: DEAL::OK.
    -      1077: gmake[3]: *** [thread_validity_08.debug/output] Error 1
    -      1077: gmake[2]: *** [CMakeFiles/thread_validity_08.debug.diff.dir/all] Error 2
    -      1077: gmake[1]: *** [CMakeFiles/thread_validity_08.debug.diff.dir/rule] Error 2
    -      1077: gmake: *** [thread_validity_08.debug.diff] Error 2
    -      1077:
    -      1077:
    -      1077: base/thread_validity_08.debug: ******    RUN failed    *******
    -      1077:
    -      1077: ===============================    OUTPUT END   ===============================
    -    
+
+      $ ctest -V -R "base/thread_validity_08.debug"
+      [...]
+      test 1077
+          Start 1077: base/thread_validity_08.debug
+
+      1077: Test command: [...]
+      1077: Test timeout computed to be: 600
+      1077: Test base/thread_validity_08.debug: RUN
+      1077: ===============================   OUTPUT BEGIN  ===============================
+      1077: Built target thread_validity_08.debug
+      1077: Generating thread_validity_08.debug/output
+      1077: terminate called without an active exception
+      1077: /bin/sh: line 1: 18030 Aborted [...]/thread_validity_08.debug
+      1077: base/thread_validity_08.debug: BUILD successful.
+      1077: base/thread_validity_08.debug: RUN failed. Output:
+      1077: DEAL::OK.
+      1077: gmake[3]: *** [thread_validity_08.debug/output] Error 1
+      1077: gmake[2]: *** [CMakeFiles/thread_validity_08.debug.diff.dir/all] Error 2
+      1077: gmake[1]: *** [CMakeFiles/thread_validity_08.debug.diff.dir/rule] Error 2
+      1077: gmake: *** [thread_validity_08.debug.diff] Error 2
+      1077:
+      1077:
+      1077: base/thread_validity_08.debug: ******    RUN failed    *******
+      1077:
+      1077: ===============================    OUTPUT END   ===============================
    So this specific test aborted in the RUN stage. +
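    As an aside, a few plain CTest options (standard ctest flags, not
    specific to deal.II) are frequently useful when working with the
    testsuite:

      $ ctest -j4                 # run up to four tests in parallel
      $ ctest -R "base/"          # run only the tests matching a regular expression
      $ ctest -V -R "base/"       # the same, with verbose output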

    +

    + The general output for a successful test <test> in + category <category> for build type + <build> is +

     
    +    xx: Test <category>/<test>.<build>: PASSED
    +    xx: ===============================   OUTPUT BEGIN  ===============================
    +    xx: [...]
    +    xx: <category>/<test>.<build>: PASSED.
    +    xx: ===============================    OUTPUT END   ===============================
    +      
    + And for a test that fails in stage <stage>: +
     
    +    xx: Test <category>/<test>.<build>: <stage>
    +    xx: ===============================   OUTPUT BEGIN  ===============================
    +    xx: [...]
    +    xx: <category>/<test>.<build>: <stage> failed. [...]
    +    xx:
    +    xx: <category>/<test>.<build>: ******    <stage> failed    *******
    +    xx: ===============================    OUTPUT END   ===============================
    +      
+    Here, <stage> indicates the stage in which the test failed:
+    BUILD if compiling or linking the test did not succeed,
+    RUN if the compiled executable aborted or returned a nonzero
+    exit code, and DIFF if the generated output did not match the
+    stored comparison file.

    The build tests

    + +

    Testsuite development

    -

    - With our build tests, we check if deal.II can be compiled on - different systems and with different compilers as well as - different configuration options. Results are collected in a - database and can be accessed online.

    +

    Here, some text is missing

    -

    Running the build test suite is simple and we encourage deal.II - users with configurations not found on the test suite page to - participate. Assuming you checked out deal.II into the directory - dealtest, running it is as simple as: + + + +

    General layout

    + +

    + A test usually consists of a source file and an output file for + comparison (under the testsuite directory tests):

     
    -    cd dealtest
    -    svn update
    -    ./contrib/utilities/build_test
    -    mail build-tests@dealii.org < *.log
    -  ( rm *.log )
    +    category/test.cc
    +    category/test[...].output
           
+    test.cc must be a regular executable (i.e. it must have an
+    int main() routine). It will be compiled, linked and run. The
+    executable should not output anything to cout (at least under
+    normal circumstances, i.e. when no error occurs); instead, the
+    executable should write its output to a file output in the
+    current working directory.

    +

+    In detail, the following 3 stages will be run for a regular test:
+    BUILD (the test executable is compiled and linked), RUN (the
+    executable is invoked and writes the file output), and DIFF
+    (the generated output is compared with the stored comparison file).

    - The build_test script supports the following options: -

    +    

    - SOURCEDIR - the source directory to use (otherwise the current directory is used) - CONFIGFILE - A cmake configuration file for the build test - LOGDIR - directory for the log file - LOGFILE - the logfile to use, defaults to - $LOGDIR/$BRANCH.$CONFIGFILE..log + +

    Comparison file

    - CMAKE - the cmake executable to use - SVN - svn info command to use, defaults to - svn info $(SOURCEDIR) - TMPDIR - defaults to "/tmp" - CLEAN_TMPDIR - defaults to "true" - RUN_EXAMPLES - defaults to "true" -
    - An example configuration file can be found here. Options can be passed either via - environment +

    + The full file signature for a comparison file is

     
    -    export CONFIGFILE=MyConfiguration.conf
    -    ./contrib/utilities/build_test
    +    category/test.[with_<feature>=<on|off>.]*[mpirun=<x>.][<debug|release>.]output
           
-    or directly on the command line:
+    which is explained in detail below.
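    As a concrete, purely hypothetical illustration of this signature, a
    comparison file named

      category/my_test.with_umfpack=on.mpirun=4.debug.output

    would cause the (made-up) test my_test to be set up only if UMFPACK
    support is enabled, to be run on 4 MPI processes, and to be compared
    in the debug configuration only. Each of these elements is explained
    in the sections that follow.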

    + +

    Restrict tests for build configurations

    +

+    Normally, a test will be set up for both the debug and the release
+    configuration (if deal.II was configured with the combined
+    DebugRelease build type) or for the single available build
+    configuration (if deal.II was configured with either the Debug or the
+    Release build type only).
+    If a specific test can only be run in the debug or the release
+    configuration but not in both, it is possible to restrict the setup
+    by prepending .debug or .release directly before
+    .output, e.g.:

     
    -    ./contrib/utilities/build_test CONFIGFILE=myConfiguration.conf
    +    category/test.debug.output
           
+    This way, the test will only be set up to build and run against the
+    debug library.

    - A status indicator should appear on the build test website after some - time (results are collected and processed by a program that is run - periodically, but not immediately after a mail has been received). -

    + Note: It is possible to provide both configuration types at the + same time: +
     
    -    

    Dedicated build tests

+    category/test.debug.output
+    category/test.release.output
+
+    This will set up two separate tests, one for the debug configuration
+    (tested against test.debug.output) and, similarly, one for the
+    release configuration.

    Restrict tests for feature configurations

-    There is a detailed example for dedicated build tests on the wiki.

+    In a similar vein as for build configurations, it is possible to
+    restrict tests to specific feature configurations, e.g.:
     
    -    
    -    

    The regression tests

    + category/test.with_umfpack=on.output, or + category/test.with_zlib=off.output +
    + These tests will only be set up if the specified feature was configured + accordingly. +

    - deal.II has a testsuite that, at the time this article is written - (mid-2013), has some 2,900 small programs (growing by roughly one per - day) that we run every time we make a change to make sure that no - existing functionality is broken. The expected output is also stored in - our subversion archive, and when you run a test you are notified if a - test fails. These days, every time we add a significant piece of - functionality, we add at least one new test to the testsuite, and we - also do so if we fix a bug, in both cases to make sure that future - changes do not break what we have just checked in. In addition, some - machines run the tests every night and send the results back home; this - is then converted into - a webpage showing the status of our regression tests. + Note: It is possible to provide different output files for disabled/enabled + features, e.g. +

    +
    +    category/test.with_64bit_indices=on.output
    +    category/test.with_64bit_indices=off.output
    +      

    +

+    Note: It is possible to combine several feature constraints, e.g.

     
    +    category/test.with_umfpack=on.with_zlib=on.output
    +      
    +

    - If you develop parts of deal.II, want to add something, or fix a bug - in it, we encourage you to use our testsuite. This page documents - some aspects of it. + Note: Quite a number of test categories are already guarded so + that the contained tests will only be set up if the feature is + enabled. In this case a feature constraint in the output file name is + redundant and should be avoided. (Folders with guards are + distributed_grids, lapack, + metis, petsc, slepc, + trilinos, umfpack, gla, + mpi)
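    As a hypothetical illustration (the test name and feature key are made
    up, following the with_<feature> pattern above): a test placed in the
    guarded mpi directory is only set up when MPI support is enabled
    anyway, so

      mpi/my_test.cc
      mpi/my_test.output

    suffices, and a name like mpi/my_test.with_mpi=on.output would merely
    repeat the guard.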

    +

    Run mpi tests with mpirun

    +

    + If a test should be run with mpirun in parallel, specify the number x of + simultaneous processes in the following way: +

     
    -
    -    

    Running it

    - - + category/test.mpirun=x.output +
    +

    +

    + Note: It is possible to provide multiple output files for different mpirun + values. +
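    For instance (with a made-up test name), a test whose output
    legitimately depends on the number of processes could provide

      category/my_test.mpirun=2.output
      category/my_test.mpirun=10.output

    which, analogously to the debug/release case above, should result in
    one test being set up for each of the two process counts.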

    Adding new tests

    @@ -421,8 +523,7 @@

    The testcase

    For the testcase, we usually start from a template like this: -

    -
    +    
     
     // ---------------------------------------------------------------------
     // $Id$
     //
    @@ -451,7 +552,7 @@
     
     int main ()
     {
    -  std::ofstream logfile("my_new_test/output");
    +  std::ofstream logfile("output");
       deallog.attach(logfile);
       deallog.depth_console(0);
     
    @@ -463,17 +564,14 @@ int main ()
     }
         
    -

    You open an output file in a directory with the same - name as your test, and then write - all output you generate to it, - through the deallog stream. The deallog - stream works like any - other std::ostream except that it does a few more - things behind the scenes that are helpful in this context. In - above case, we only write a zero to the output - file. Most tests actually write computed data to the output file - to make sure that whatever we compute is what we got when the - test was first written. +

    You open an output file output in the current working + directory and then write all output you generate to it, through the + deallog stream. The deallog stream works like + any other std::ostream except that it does a few more + things behind the scenes that are helpful in this context. In above + case, we only write a zero to the output file. Most tests actually + write computed data to the output file to make sure that whatever we + compute is what we got when the test was first written.

    @@ -495,168 +593,163 @@ int main () directories for PETSc and Trilinos wrapper functionality.

    -

    A directory with the same name as the test

    - -

    You have to create a subdirectory - with the same name as your test to hold the output from the test. - -

    One convenient way to create this subdirectory with the correct - properties is to use svn copy. -

    -
    -    svn copy existing_test_directory my_new_test
    -    
    +

    An expected output

    - Once you have done this, you can try to run -

    -
    -      make my_new_test/output
    -    
    - This should compile, link, and run your test. Running your test - should generate the desired output file. -

    - + In order to run your new test, copy it to an appropriate category and + create an empty comparison file for it: +
     
    +    category/my_new_test.cc
    +    category/my_new_test.output
    +      
    + Now, rerun +
     
    -    

    An expected output

    + $ make setup_test +
    + so that your new test is picked up. After that it is possible to + invoke it with +
     
    -    

    - If you run your new test executable, you will get an output file - mytestname/output that should be used to compare all future - runs with. If the test - is relatively simple, it is often a good idea to look at the - output and make sure that the output is actually what you had - expected. However, if you do complex operations, this may - sometimes be impossible, and in this case we are quite happy with - any reasonable output file just to make sure that future - invokations of the test yield the same results. + $ ctest -V -R "category/my_new_test" +

-    The next step is to copy this output file to the place where the
-    scripts can find it when they compare with newer runs. For this, you first
-    have to understand how correct results are verified. It works in the
-    following way: for each test, we have subdirectories
-    testname/cmp where we store the expected results in a file
-    testname/cmp/generic. If you create a new test, you should
-    therefore create this directory, and copy the output of your program,
-    testname/output to testname/cmp/generic.
+    If you run your new test executable this way, the test should compile
+    and run successfully but fail in the diff stage (due to the empty
+    comparison file). You will get an output file
+    BUILD_DIR/category/my_new_test/output that should be
+    used to compare all future runs with. If the test is relatively
+    simple, it is often a good idea to look at the output and make sure
+    that the output is actually what you had expected. However, if you do
+    complex operations, this may sometimes be impossible, and in this
+    case we are quite happy with any reasonable output file just to make
+    sure that future invocations of the test yield the same results.

    - Why generic? The reason is that sometimes test results - differ slightly from platform to platform, for example because numerical - roundoff is different due to different floating point implementations on - different CPUs. What this means is that sometimes a single stored output is - not enough to verify that a test functioned properly: if you happen to be - on a platform different from the one on which the generic output was - created, your test will always fail even though it produces almost exactly - the same output. + The next step is to copy and rename this output file to the source + directory and replace the original comparison file with it: +

    +
    +    category/my_new_test.output
    +      
    + At this point running the test again should be successful: +
    +
    +    $ ctest -V -R "category/my_new_test"
    +      
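    To summarize the steps above in one place, a complete session for
    adding a new test might look like the following sketch. The paths are
    assumptions based on the description above (source checkout in
    dealtest, build directory BUILD_DIR, made-up test and category names):

      # in the source checkout: add the test source and an empty comparison file
      $ cp my_new_test.cc dealtest/tests/category/
      $ touch dealtest/tests/category/my_new_test.output

      # in the build directory: pick up the new test and run it
      # (the diff stage is expected to fail against the empty file)
      $ make setup_test
      $ ctest -V -R "category/my_new_test"

      # install the generated output as the expected output and re-run
      $ cp BUILD_DIR/category/my_new_test/output \
           dealtest/tests/category/my_new_test.output
      $ ctest -V -R "category/my_new_test"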

    +

    Checking in

    +

    - To avoid this, what the makefiles do is to first check whether an output - file is stored for this test and your particular configuration (platform - and compiler). If this isn't the case, it goes through a hierarchy of files - with related configurations, and only if none of them does it take the - generic output file. It then compares the output of your test run with the - first file it found in this process. To make things a bit clearer, if you - are, for example, on a i686-pc-linux-gnu box and use - gcc4.0 as your compiler, then the following files will be - sought (in this order): -

    +      Tests are a way to make sure everything keeps working. If they
    +      aren't automated, they are no good. We are therefore very
    +      interested in getting new tests. If you have subversion write access
    +      already, you can add the new test and the expected output
    +      file:
    +      
     
    -testname/cmp/i686-pc-linux-gnu+gcc4.0
    -testname/cmp/i686-pc-linux-gnu+gcc3.4
    -testname/cmp/i686-pc-linux-gnu+gcc3.3
-testname/cmp/generic
-
-    (This list is generated by the tests/hierarchy.pl script.)
-    Your output will then be compared with the first one that is actually
-    found. The virtue of this is that we don't have to store the output files
-    from all possible platforms (this would amount to gigabytes of data), but
-    that we only have store an output file for gcc4.0 if it differs from that
-    of gcc3.4, and for gcc3.4 if it differs from gcc3.3. If all of them are the
-    same, we would only have the generic output file.
+
+      svn add category/my_new_test.cc
+      svn add category/my_new_test.output
+      svn commit -m "New test"
    + If you don't have subversion write access, talk to us in the + discussion group; writing testcases is a worthy and laudable task, + and we would like to encourage it by giving people the opportunity to + contribute!

    -

    - Most of the time, you will be able to generate output files only - for your own platform and compiler, and that's alright: someone - else will create the output files for other platforms - eventually. You only have to copy your output file to - testname/cmp/generic. + + + +

    Submit test results

    + +

    + Explain how to use run_testsuite.cmake in all imaginable + ways...

    -

    - At this point you can run -

     
    -      make my_new_test/OK
    -    
    - which should compare the present output with what you have just - copied into the compare directory. This should, of course, - succeed, since the two files should be identical. + + +

    The build tests

    + +

    + Update this section

    - On the other hand, if you realize that an existing test fails on your - system, but that the differences (as shown when running with - verbose=on, see above) are only marginal and around the 6th or - 8th digit, then you should check in your output file for the platform you - work on. For this, you could copy testname/output to - testname/cmp/myplatform+compiler, but your life can be easier - if you simply type -

    +      With our build tests, we check if deal.II can be compiled on
    +      different systems and with different compilers as well as
    +      different configuration options. Results are collected in a
    +      database and can be accessed online.

    - make my_new_test/ref -

    - which takes your output and copies it to the right place automatically. +

    Running the build test suite is simple and we encourage deal.II + users with configurations not found on the test suite page to + participate. Assuming you checked out deal.II into the directory + dealtest, running it is as simple as: +

    +
    +    cd dealtest
    +    svn update
    +    ./contrib/utilities/build_test
    +    mail build-tests@dealii.org < *.log
    +  ( rm *.log )
    +      

    +

    + The build_test script supports the following options: +

    +
    +    SOURCEDIR     - the source directory to use (otherwise the current directory is used)
    +    CONFIGFILE    - A cmake configuration file for the build test
    +    LOGDIR        - directory for the log file
    +    LOGFILE       - the logfile to use, defaults to
    +                        $LOGDIR/$BRANCH.$CONFIGFILE..log
     
    +    CMAKE         - the cmake executable to use
    +    SVN           - svn info command to use, defaults to
    +                        svn info $(SOURCEDIR)
    +    TMPDIR        - defaults to "/tmp"
    +    CLEAN_TMPDIR  - defaults to "true"
    +    RUN_EXAMPLES  - defaults to "true"
    +      
    + An example configuration file can be found here. Options can be passed either via + environment +
     
    +    export CONFIGFILE=MyConfiguration.conf
    +    ./contrib/utilities/build_test
    +      
    + or directly on the command line: +
     
    -    

    Checking in

    + ./contrib/utilities/build_test CONFIGFILE=myConfiguration.conf +
    +

    - Tests are a way to make sure everything keeps working. If they - aren't automated, they are no good. We are therefore very - interested in getting new tests. If you have subversion write access - already, you can add the new test and the expected output - file: -

    -
    -      svn add bits/my_new_test.cc
    -      svn add bits/my_new_test
    -      svn add bits/my_new_test/cmp
    -      svn add bits/my_new_test/cmp/generic
    -      svn commit -m "New test" bits/my_new_test*
    -    
    - In addition, you should do the following in order to avoid that the files - generated while running the testsuite show up in the output of svn - status commands: -
    -
    -      svn propset svn:ignore "obj.*
    -        exe
    -        output
    -        status
    -        OK" bits/my_new_test
    -      svn commit -m "Ignore generated files." bits/my_new_test
    -    
    - Note that the list of files given in quotes to the propset command extends - over several lines. + A status indicator should appear on the build test website after some + time (results are collected and processed by a program that is run + periodically, but not immediately after a mail has been received).

    +

    Dedicated build tests

    +

    - If you don't have subversion write access, talk to us in the discussion group; - writing testcases is a worthy and laudable task, and we would - like to encourage it by giving people the opportunity to - contribute! + There is a detailed example for dedicated build tests on the wiki.

    +
    The deal.II Authors

diff --git a/tests/README b/tests/README
index a6e7c16a39..dfc75f80e9 100644
--- a/tests/README
+++ b/tests/README
@@ -1,111 +1,5 @@
-DEAL.II TESTSUITE README
-========================
+Further information:
-
-TODO: Merge into testsuite.html
-
-
-General test layout
-===================
-
-A test usually consists of a source file and an output file for
-comparison (under SOURCE_DIR/tests):
-
-  category/test.cc
-  category/test.output
-
-test.cc must be a regular executable (i.e. having an int main() routine).
-It will be compiled, linked and run. The executable should not output
-anything to cout (at least under normal circumstances, i.e. no error
-condition), instead the executable should output to a file "output" under
-the current working directory.
-
-As a last stage the generated output during the run stage will be compared
-to category/test.output.
-
-The full file signature for a comparison file is
-
-  category/test.[with_<feature>=<on|off>.]*[mpirun=<x>.][<debug|release>.]output
-
-which is explained in detail below.
-
-
-Restrict tests for build configurations
----------------------------------------
-
-Normally, a test will be set up for debug and release configuration (if
-deal.II was configured with combined DebugRelease build type) or for the
-available build configuration (if deal.II was configured either with Debug
-or with Release only build type).
-
-If a specific test can only be run in debug or release configurations but
-not in both it is possible to restrict the setup by prepeding ".debug" or
-".release" directly before ".output", e.g.:
-
-  category/test.debug.output
-
-This way, test will only be set up to build and run against the debug
-library.
-
-Note: It is possible to provide both configuration types at the same time:
-
-  category/test.debug.output
-  category/test.release.output
-
-This will set up two seperate tests, one for the debug configuration that
-will be tested against test.debug.output, and similarly one for release.
-
-
-Restrict tests for feature configurations
------------------------------------------
-
-In a similar vain as for build configurations, it is possible to restrict
-tests to specific feature configurations, e.g.:
-
-  category/test.with_umfpack=on.output, or
-  category/test.with_zlib=off.output
-
-These tests will only be set up if the specified feature was configured
-accordingly.
-
-Note: It is possible to provide different output files for disabled/enabled
-features, e.g.
-
-  category/test.with_64bit_indices=on.output
-  category/test.with_64bit_indices=off.output
-
-Note: It is possible to declare multiple constraints subsequently, e.g.
-
-  category/test.with_umfpack=on.with_zlib=on.output
-
-Note: Quite a number of test categories are already guarded so that the
-contained tests will only be set up if the feature is enabled. In this case
-a feature constraint in the output file name is redundant and should be
-avoided. (Folder with guards are distributed_grids, lapack, metis, petsc,
-slepc, trilinos, umfpack, gla, mpi)
-
-
-Run mpi tests with mpirun
--------------------------
-
-If a test should be run with mpirun in parallel, specify the number x of
-simultaneous processes in the following way:
-
-  category/test.mpirun=x.output
-
-Note: It is possible to provide multiple output files for different mpirun
-values.
-
-
-TODO: Write and document the following
-
- - How a normal run looks like.
-
- - Intermediate files generated under BUILD_DIR.
-   - BUILD_DIR/tests/category/test[.mpirun=x].
-     - output our failing_output
-     - diff or failing_diff
+
+  The testsuite documentation is located at
+  ../deal.II/doc/developers/testsuite.html.
+  Alternatively, have a look at http://www.dealii.org/
-- 
2.39.5
- - output our failing_output - diff or failing_diff + The testsuite documentation is located at + ../deal.II/doc/developers/testsuite.html. + Alternatively, have a look at http://www.dealii.org/ -- 2.39.5