// @sect3{Include files}
// We start with a bunch of include files that have already been explained in
-// previous tutorial programs:
+// previous tutorial programs. One new one is <code>timer.h</code>: this is
+// the first example program that uses the Timer class. A Timer keeps track
+// of both the elapsed wall clock time (that is, the amount of time that a
+// clock mounted on the wall would measure) and the elapsed CPU time (the
+// amount of time that the current process uses on the CPUs). We will use a
+// Timer below to measure how much CPU time each grid refinement cycle
+// takes; a short sketch of the pattern follows the include files.
#include <deal.II/base/timer.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
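+// As a minimal sketch of that pattern (the function
+// <code>do_expensive_work()</code> is a hypothetical stand-in for the real
+// computation):
+// @code
+//   Timer timer;                 // constructing a Timer also starts it
+//   do_expensive_work();
+//   timer.stop();
+//   std::cout << "CPU time:  " << timer.cpu_time()  << " s" << std::endl;
+//   std::cout << "Wall time: " << timer.wall_time() << " s" << std::endl;
+// @endcode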
double k_eff_old = k_eff;
- Timer timer;
- timer.start ();
-
for (unsigned int cycle=0; cycle<parameters.n_refinement_cycles; ++cycle)
{
+ // We will measure the CPU time that each cycle takes below. The
+ // constructor for Timer calls Timer::start(), so once we create a
+ // timer we can query it for information. Since we use a thread pool
+ // to assemble the system matrices, the CPU time we measure (if we run
+ // with more than one thread) will be larger than the wall time.
+ Timer timer;
+
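+      // (As an aside, a hypothetical way to see the difference between the
+      // two clocks would be to print both once the timer has been stopped:
+      // @code
+      //   timer.stop();
+      //   std::cout << "CPU time:  " << timer.cpu_time()  << " s, "
+      //             << "wall time: " << timer.wall_time() << " s" << std::endl;
+      // @endcode
+      // With multithreaded assembly, the first number will typically be the
+      // larger of the two.)
+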
std::cout << "Cycle " << cycle << ':' << std::endl;
if (cycle == 0)
for (unsigned int group=0; group<parameters.n_groups; ++group)
energy_groups[group]->output_results (cycle);
+      // Print out information about the simulation as well as the elapsed
+      // CPU time. Note that Timer::cpu_time() can be called without first
+      // calling Timer::stop(): it then returns the CPU time elapsed up to
+      // the point of the call.
std::cout << std::endl;
std::cout << " Cycle=" << cycle
<< ", n_dofs=" << energy_groups[0]->n_dofs() + energy_groups[1]->n_dofs()
// compute execution time when this function is done:
deallog << "Generating grid... ";
Timer timer;
- timer.start ();
// Then we query the values for the focal distance of the transducer lens
// and the number of mesh refinement steps from our ParameterHandler
{
deallog << "Setting up system... ";
Timer timer;
- timer.start();
dof_handler.distribute_dofs (fe);
{
deallog << "Assembling system matrix... ";
Timer timer;
- timer.start ();
// First we query wavespeed and frequency from the ParameterHandler object
// and store them in local variables, as they will be used frequently
{
deallog << "Solving linear system... ";
Timer timer;
- timer.start ();
// The code to solve the linear system is short: First, we allocate an
// object of the right type. The following <code>initialize</code> call
{
deallog << "Generating output... ";
Timer timer;
- timer.start ();
// Define objects of our <code>ComputeIntensity</code> class and a DataOut
// object:
void LaplaceProblem<dim>::setup_system ()
{
Timer time;
- time.start ();
setup_time = 0;
system_matrix.clear();
* as well as the total time elapsed over all laps. Here is an example:
*
* @code
- * Timer timer;
- * timer.start();
+ * Timer timer; // creating a timer also starts it
*
* // do some complicated computations here
* // ...
* Alternatively, you can also restart the timer instead of resetting it. The
* times between successive calls to start() and stop() (i.e., the laps) will
* then be accumulated. The usage of this class is also explained in the
- * step-28, step-29 and step-30 tutorial programs.
+ * step-28 tutorial program.
+ *
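+ * As a minimal sketch of accumulating laps (the work functions here are
+ * hypothetical):
+ * @code
+ * Timer timer;     // starts the first lap
+ * first_task();
+ * timer.stop();
+ *
+ * timer.start();   // restart rather than reset: the laps accumulate
+ * second_task();
+ * timer.stop();
+ * // cpu_time() and wall_time() now report the sum of both laps
+ * @endcode
+ *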
+ * @note The TimerOutput class (together with TimerOutput::Scope) provides a
+ * convenient way to time multiple named sections and summarize the output.
*
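+ * For example, a minimal sketch (the section name "assembly" is arbitrary):
+ * @code
+ * TimerOutput timer_output(std::cout,
+ *                          TimerOutput::summary,
+ *                          TimerOutput::wall_times);
+ * {
+ *   TimerOutput::Scope scope(timer_output, "assembly");
+ *   // ... do the work that should be timed ...
+ * }   // leaving the scope stops the timer for this section
+ * @endcode
+ *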
* @note Implementation of this class is system dependent. In particular, CPU
* times are accumulated from summing across all threads and will usually
* TimerOutput::wall_times);
* @endcode
* Here, <code>pcout</code> is an object of type ConditionalOStream that makes
- * sure that we only generate output on a single processor. See the step-32
- * and step-40 tutorial programs for this kind of usage of this class.
+ * sure that we only generate output on a single processor. See the step-32,
+ * step-40, and step-42 tutorial programs for this kind of usage of this class.
*
* @ingroup utilities
* @author M. Kronbichler, 2009.