--- /dev/null
+New: The step-64 tutorial demonstrates how to use the CUDAWrappers::MatrixFree
+framework (possibly with MPI) and discusses the peculiarities of using CUDA
+inside deal.II in general.
+<br>
+(Daniel Arndt, Bruno Turcksin, 2019/05/10)
<h1>Results</h1>
+Since the main purpose of this tutorial is to demonstrate how to use the
+CUDAWrappers::MatrixFree interface, we just show the expected output here:
+@code
+Cycle 0
+ Number of active cells: 8
+ Number of degrees of freedom: 343
+ Solved in 27 iterations.
+ solution norm: 0.0205439
+
+Cycle 1
+ Number of active cells: 64
+ Number of degrees of freedom: 2197
+ Solved in 60 iterations.
+ solution norm: 0.0205269
+
+Cycle 2
+ Number of active cells: 512
+ Number of degrees of freedom: 15625
+ Solved in 114 iterations.
+ solution norm: 0.0205261
+
+Cycle 3
+ Number of active cells: 4096
+ Number of degrees of freedom: 117649
+ Solved in 227 iterations.
+ solution norm: 0.0205261
+@endcode
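+
+Since the solver is only preconditioned by the identity, the number of CG
+iterations roughly doubles with each refinement cycle, as one would expect
+for an unpreconditioned operator whose condition number grows like the
+inverse of the mesh size squared. The following is only a sketch of what
+such a solve step may look like; the matrix and vector names are
+illustrative, not necessarily those used in the program:
+@code
+// Solve with CG on device vectors, preconditioned by the identity.
+// Assumes deal.II/lac/solver_cg.h, deal.II/lac/solver_control.h, and
+// deal.II/lac/precondition.h have been included.
+PreconditionIdentity preconditioner;
+
+SolverControl solver_control(system_rhs_dev.size(),
+                             1e-12 * system_rhs_dev.l2_norm());
+SolverCG<LinearAlgebra::distributed::Vector<double, MemorySpace::CUDA>> cg(
+  solver_control);
+cg.solve(system_matrix_dev, solution_dev, system_rhs_dev, preconditioner);
+@endcode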
+
<a name="extensions"></a>
<h3> Possible extensions </h3>
// that always uses GPU memory storage but doesn't work with MPI. It might
// be worth noticing that the communication between different MPI processes
// can be improved if the MPI implementation is CUDA-aware and the configure
- // flag DEAL_II_WITH_CUDA_AWARE_MPI is enabled.
+ // flag DEAL_II_MPI_WITH_CUDA_SUPPORT is enabled.
//
// Here, we also have a finite element vector with CPU storage such that we
// can view and display the solution as usual.
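 // As a sketch only (the member names below are illustrative, not
 // necessarily those used in this program), such a pair of vectors can be
 // declared by selecting the memory space through the second template
 // argument of LinearAlgebra::distributed::Vector:
 //
 //   LinearAlgebra::distributed::Vector<double, MemorySpace::CUDA>
 //     solution_dev;
 //   LinearAlgebra::distributed::Vector<double, MemorySpace::Host>
 //     ghost_solution_host;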
// precondition by the identity, to focus just on the peculiarities of the
// CUDAWrappers::MatrixFree framework. Of course, in a real application the
// choice of a suitable preconditioner is crucial but we have at least the
- // same restructions as in step-37 since matrix entries are computed on the
+ // same restrictions as in step-37 since matrix entries are computed on the
// fly and not stored.
template <int dim, int fe_degree>
void HelmholtzProblem<dim, fe_degree>::solve()