@endcode
to run the step-17 executable with 32 processors.
-(If you work on a cluster,
-there is typically a step in between where you need to set up a job script,
-submit the script to a scheduler that then executes the script whenever it
-can allocate 32 unused processors for your job. How to write such job
+(If you work on a cluster, then there is typically a step in between where you
+need to set up a job script and submit the script to a scheduler. The scheduler
+will execute the script whenever it can allocate 32 unused processors for your
+job. How to write such job
scripts differs from cluster to cluster, and you should find the documentation
of your cluster to see how to do this. On my system, I have to use the command
<code>qsub</code> with a whole host of options to run a job in parallel.)
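For illustration, a minimal job script for a PBS/Torque-style scheduler might
look roughly like the following; the specific directives, resource names, and
MPI launch command are assumptions here and will almost certainly need to be
adapted to your cluster:
@code
#!/bin/bash
### Request 2 nodes with 16 cores each (32 processors total) for at most
### one hour. These are PBS/Torque-style directives and only serve as an
### illustration -- consult your cluster's documentation for the real ones.
#PBS -l nodes=2:ppn=16
#PBS -l walltime=01:00:00

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR

# Start 32 MPI processes running the step-17 executable.
mpirun -np 32 ./step-17
@endcode
One would then submit such a script with something like
<code>qsub job.sh</code> and wait for the scheduler to start it.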
Finally, here are some results for a 3d simulation. You can repeat these by
-first changing
+changing
@code
ElasticProblem<2> elastic_problem;
@endcode
to
@code
ElasticProblem<3> elastic_problem;
@endcode
-in the main function, and then in the Makefile, change the reference to the 2d
-libraries to their 3d counterparts. If you then run the program in parallel,
+in the main function. If you then run the program in parallel,
you get something similar to this (this is for a job with 16 processes):
@code
Cycle 0:
//
// The final step of this initial setup is that we get ourselves a
// variable that indicates how many degrees of freedom the current
- // process is responsible for. (This will, in general, be less than
- // <code>fe.dofs_per_cell</code> times the number of cells the
- // current process owns because some degrees of freedom live on
- // interfaces between subdomains, and are consequently only owned by
+ // process is responsible for. (Note that a degree of freedom is not
+ // necessarily owned by the process that owns a cell just because
+ // the degree of freedom lives on this cell: some degrees of freedom
+ // live on interfaces between subdomains, and are consequently only owned by
// one of the processes adjacent to this interface.)
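//
// As a purely illustrative sketch (not necessarily the exact code this
// program uses), such a count can be obtained from the subdomain ids as
// follows; <code>this_mpi_process</code> is assumed to hold the rank of
// the current MPI process:
const types::global_dof_index n_local_dofs =
  DoFTools::count_dofs_with_subdomain_association(dof_handler,
                                                  this_mpi_process);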
//
// Before we move on, let us recall a fact already discussed in the
// degrees of freedom that live on each cell, whether it is one that
// the current process owns or not. This cannot scale to large
// problems because eventually just storing on every process the
- // entire mesh and everything that is associated with it, will
+ // entire mesh, and everything that is associated with it, will
// become infeasible if the problem is large enough. On the other
// hand, if we split the triangulation into parts so that every
// process stores only those cells it "owns" but nothing else (or,
// the local contributions into the global matrix or right hand side
// vector, we have to transfer these entries to the process that
// owns these elements. Fortunately, we don't have to do this by
- // hand, PETSc does all this for us by caching these elements
+ // hand: PETSc does all this for us by caching these elements
// locally, and sending them to the other processes as necessary
// when we call the <code>compress()</code> functions on the matrix
// and vector at the end of this function.
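//
// A sketch of this pattern (purely illustrative: <code>dofs_per_cell</code>,
// <code>cell_matrix</code>, <code>cell_rhs</code>, and
// <code>local_dof_indices</code> are assumed to hold the current cell's
// local contributions and global indices; the <code>compress()</code>
// signature with a VectorOperation argument is the one required by more
// recent deal.II versions):
for (unsigned int i = 0; i < dofs_per_cell; ++i)
  {
    for (unsigned int j = 0; j < dofs_per_cell; ++j)
      system_matrix.add(local_dof_indices[i],
                        local_dof_indices[j],
                        cell_matrix(i, j));
    system_rhs(local_dof_indices[i]) += cell_rhs(i);
  }
// ...and, once all cells have been visited:
system_matrix.compress(VectorOperation::add);
system_rhs.compress(VectorOperation::add);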