endif()
#
- # Petsc has to be configured with the same MPI configuration as
+ # PETSc has to be configured with the same MPI configuration as
# deal.II.
#
# petscconf.h should export PETSC_HAVE_MPIUNI 1 in case mpi support is
endif()
#
- # Petsc has to be configured with the same number of bits for indices as
+ # PETSc has to be configured with the same number of bits for indices as
# deal.II.
#
# petscconf.h should export PETSC_WITH_64BIT_INDICES 1 in case 64bits
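
The two checks above guard against a configuration mismatch between PETSc and deal.II. As a rough, hedged illustration of what they protect against, the following C++ sketch restates both conditions as compile-time checks against the macros exported by petscconf.h and deal.II's config.h. The macro names PETSC_HAVE_MPIUNI and PETSC_WITH_64BIT_INDICES are taken from the comments above; DEAL_II_WITH_MPI and DEAL_II_WITH_64BIT_INDICES are assumed to be the corresponding deal.II configuration macros, and the real checks live in the CMake configuration, not in a header like this.

#include <deal.II/base/config.h> // assumed to define DEAL_II_WITH_MPI and DEAL_II_WITH_64BIT_INDICES
#include <petscconf.h>           // exports PETSC_HAVE_MPIUNI / PETSC_WITH_64BIT_INDICES as described above

// PETSc built against its serial MPI stub (MPIUNI) cannot be combined
// with an MPI-enabled deal.II:
#if defined(PETSC_HAVE_MPIUNI) && defined(DEAL_II_WITH_MPI)
#  error "PETSc was configured without MPI, but deal.II was configured with MPI."
#endif

// Both libraries must use the same width for their index types:
#if defined(PETSC_WITH_64BIT_INDICES) != defined(DEAL_II_WITH_64BIT_INDICES)
#  error "PETSc and deal.II disagree on whether indices are 64 bit."
#endif
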
- The program has been developed for solving problems on single-node
multicore machines. With a little effort, the program could be
extended to a large-scale computing environment through the use of
- Petsc or Trilinos, using a similar technique to that demonstrated in
+ PETSc or Trilinos, using a similar technique to that demonstrated in
step-40. This would mostly involve changes to the setup, assembly,
<code>PointHistory</code> and linear solver routines.
- As this program assumes quasi-static equilibrium, extensions to
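
As a rough idea of what the step-40 style parallelization alluded to in the first bullet point looks like, here is a minimal, hypothetical sketch: each MPI process owns a contiguous range of entries of a PETSc-backed parallel vector. All sizes and names are illustrative, the header path assumes a recent deal.II release, and a real program would derive the index sets from a distributed DoFHandler rather than partitioning them by hand.

#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/lac/petsc_vector.h>

int main(int argc, char **argv)
{
  using namespace dealii;

  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  const MPI_Comm     mpi_communicator = MPI_COMM_WORLD;
  const unsigned int n_processes = Utilities::MPI::n_mpi_processes(mpi_communicator);
  const unsigned int this_process = Utilities::MPI::this_mpi_process(mpi_communicator);

  // Hand-partition 100 global entries into contiguous per-process ranges
  // (a real program would use DoFHandler::locally_owned_dofs() instead).
  const unsigned int n = 100;
  IndexSet locally_owned(n);
  locally_owned.add_range(this_process * n / n_processes,
                          (this_process + 1) * n / n_processes);

  // A PETSc vector distributed according to the index set above.
  PETScWrappers::MPI::Vector solution(locally_owned, mpi_communicator);
  solution = 1.0;

  return 0;
}
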
// in this case a) again takes up a whole lot of memory on the heap,
// and b) is totally dumb since its content would simply be the
// sequence 0,1,2,3,...,n. the best of all worlds would probably be a
- // function in Petsc that would take a pointer to an array of
+ // function in PETSc that would take a pointer to an array of
// PetscScalar values and simply copy n elements verbatim into the
// vector...
for (size_type i = 0; i < v.size(); ++i)
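
The "copy n elements verbatim" operation wished for in the comment above can, for entries that are locally available, be approximated with plain PETSc calls by writing through the raw array of the destination vector. A hedged sketch follows; the helper name is made up, and the caller must ensure that n does not exceed the local size of dst.

#include <petscvec.h>
#include <algorithm>

// Hypothetical helper, not part of PETSc or deal.II: copy n values from
// 'src' verbatim into the locally stored part of 'dst' via VecGetArray().
PetscErrorCode copy_n_into_vec(Vec dst, const PetscScalar *src, PetscInt n)
{
  PetscScalar   *dst_array = nullptr;
  PetscErrorCode ierr      = VecGetArray(dst, &dst_array);
  if (ierr != 0)
    return ierr;

  std::copy(src, src + n, dst_array);

  return VecRestoreArray(dst, &dst_array);
}
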
// below we use type-traits from matrix-free/type_traits.h
// access to generic const vectors that have operator ().
- // FIXME: this is wrong for Trilinos/Petsc MPI vectors
+ // FIXME: this is wrong for Trilinos/PETSc MPI vectors
// where we should first do Partitioner::local_to_global()
template <
typename VectorType,
// access to generic non-const vectors that have operator ().
- // FIXME: this is wrong for Trilinos/Petsc MPI vectors
+ // FIXME: this is wrong for Trilinos/PETSc MPI vectors
// where we should first do Partitioner::local_to_global()
template <
typename VectorType,
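
To make the index translation that both FIXME comments above refer to concrete, here is a hedged sketch. The helper name read_local_entry is made up for illustration; the point is only that, for a Trilinos/PETSc MPI vector, a process-local index would first have to be mapped to the corresponding global index via Utilities::MPI::Partitioner::local_to_global() before the vector's operator() can be used.

#include <deal.II/base/partitioner.h>

// Hypothetical helper (not deal.II API): read one locally owned entry of a
// distributed vector, translating the local index to the global index that
// operator() of a Trilinos/PETSc MPI vector expects.
template <typename VectorType>
typename VectorType::value_type
read_local_entry(const VectorType                          &vec,
                 const dealii::Utilities::MPI::Partitioner &partitioner,
                 const unsigned int                         local_index)
{
  const dealii::types::global_dof_index global_index =
    partitioner.local_to_global(local_index);
  return vec(global_index);
}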