From: bangerth
Date: Fri, 7 Mar 2008 16:29:35 +0000 (+0000)
Subject: Point out the difference between compute time and instruction count.
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=b4250505f869c63e41fad885259020b76d2a2222;p=dealii-svn.git

Point out the difference between compute time and instruction count.

git-svn-id: https://svn.dealii.org/trunk@15877 0785d39b-7218-0410-832d-ea1e28bc413d
---

diff --git a/deal.II/examples/step-22/doc/intro.dox b/deal.II/examples/step-22/doc/intro.dox
index 990feb0360..20cc99423d 100644
--- a/deal.II/examples/step-22/doc/intro.dox
+++ b/deal.II/examples/step-22/doc/intro.dox
@@ -611,22 +611,29 @@ the SparseILU class was very inefficient and has been replaced by one that
 is about 10 times faster. Small improvements were applied here and
 there.
 
-A profile of where the program spends it time in refinement cycles
+A profile of how many CPU instructions are spent at the various
+different places in the program during refinement cycles
 zero through three in 3d is shown here:
 
 @image html step-22.profile-3.png
 
 As can be seen, at this refinement level approximately half of the
-time is spent on matrix assembly and sparse ILU computation (left
+instruction count is spent on matrix assembly and sparse ILU computation (left
 half), one third on the actual solver (the SparseILU::vmult calls in
-the center right), and the rest on other things. For higher refinement
-levels, the greenesh boxes at the center right representing the solver
-as well as the blue box at the top right representing the reordering
-algorithm are going to grow at the expense of the other parts of the
-program, since they don't scale linearly. The fact that at this
-moderate refinement level (3168 cells and 93176 degrees of freedom)
-matrix assembly requires about half the compute time may therefore not
-be of such importance.
+the center right), and the rest on other things. Since floating point
+operations such as those in the SparseILU::vmult calls typically take much
+longer than many of the logical operations and table lookups in matrix
+assembly, the fraction of the run time taken up by matrix assembly is
+actually significantly less than half the total, as will become
+apparent in the comparison we make in the results section.
+
+For higher refinement levels, the greenish boxes at the center right
+representing the solver as well as the blue box at the top right
+representing the reordering algorithm are going to grow at the expense
+of the other parts of the program, since they don't scale
+linearly. The fact that at this moderate refinement level (3168 cells
+and 93176 degrees of freedom) matrix assembly requires about half the
+instructions may therefore not be of such importance.
 
 As a final point, and as a point of reference, the following picture
 also shows how the profile looked at an early stage of optimizing this
@@ -636,6 +643,6 @@ program:
 
 As mentioned above, the runtime of this version was about twice as
 long as for the first profile, with the SparseILU decomposition taking
-up about 30% of the run time, and operations on the ill-suited
+up about 30% of the instruction count, and operations on the ill-suited
 CompressedSparsityPattern about 10%. Both these bottlenecks have since
 been completely removed.
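
The new text argues that an instruction-count profile overstates the cost of matrix assembly relative to the floating-point-heavy solver. As a back-of-the-envelope illustration of that argument, with made-up per-instruction costs rather than anything measured for step-22: take the roughly 50% / 33% / 17% instruction split reported above and assume assembly instructions retire in about 1 cycle on average, the solver's floating-point and memory-bound instructions in about 4 cycles, and the remainder in about 2 cycles.

% Hypothetical cycle counts, chosen only to illustrate how an instruction
% share translates into a smaller run-time share; N is the total number of
% instructions executed.
\begin{align*}
  c_{\text{assembly}} &= 0.50\,N \cdot 1 = 0.50\,N,\\
  c_{\text{solver}}   &= 0.33\,N \cdot 4 = 1.32\,N,\\
  c_{\text{other}}    &= 0.17\,N \cdot 2 = 0.34\,N,\\
  \frac{c_{\text{assembly}}}{c_{\text{assembly}} + c_{\text{solver}} + c_{\text{other}}}
    &= \frac{0.50\,N}{2.16\,N} \approx 23\%.
\end{align*}

Under these assumed costs, assembly's 50% share of the instructions corresponds to only about a quarter of the cycles, which is the qualitative point the revised paragraph makes; the actual ratio for step-22 is whatever the wall-clock comparison in the results section shows.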