<code>bsub</code> with a whole host of options to run a job in parallel, so that the exact command line syntax varies from system to system.
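As a rough illustration, invocations on many systems look something like the following sketch (the executable name <code>step-17</code> and the scheduler options are placeholders; consult your local documentation for the actual syntax):
@code
# Start the program on 8 processors with a plain MPI installation:
mpirun -np 8 ./step-17

# Or submit it to a batch scheduler, here LSF's bsub:
bsub -n 8 -o run.log mpirun ./step-17
@endcode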
If you have found out how to run a job on your system, you should get output like this for a job on 8 processors, and with a few more refinement cycles than in the code above (these results were generated in 2004 with older versions of deal.II and a version of METIS that generated different partitionings; consequently, the numbers you get today are slightly different):
@code
Cycle 0:
Number of active cells: 64
...
@endcode
Most of the output, including the detailed information for the last step, is omitted here, but you get the idea. All this is if the debug flag in the Makefile was changed to "off", i.e. "optimized", and the generation of graphical output was switched off for the reasons stated in
the program comments above.
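In the example Makefiles that shipped with deal.II at the time, switching to optimized mode amounted to flipping a single variable; a minimal sketch, assuming the pre-CMake Makefile layout (the variable name may differ in your version):
@code
# Build in optimized rather than debug mode; remember to
# recompile everything after changing this setting:
debug-mode = off
@endcode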
The biggest 2d computations we did had roughly 7.1 million unknowns, and were done on 32 processes. It took about 40 minutes.
Not surprisingly, the limiting factor for how far one can go is how much memory
one has, since every process has to hold the entire mesh and DoFHandler objects,
although matrices and vectors are split up. For the 7.1M computation, the memory
consumption was about 600 bytes per unknown, which is not bad, but one has to
consider that this is for every unknown, whether we store the matrix and vector entries locally or not (600 bytes times 7.1 million unknowns comes to roughly 4.3 GB).
The last step, going up to 1.5 million unknowns, took about 55 minutes with 16 processes on 8 dual-processor machines (of the kind available in 2003). The
graphical output generated by
this job is rather large (cycle 5 already produces around 82 MB of GMV data), so we content ourselves with showing output from cycle 4: