From d53dbaf8cdd25c1f51f88a5d5fc617c27fd88d1b Mon Sep 17 00:00:00 2001
From: Bruno Turcksin
Date: Wed, 8 May 2019 21:56:27 -0400
Subject: [PATCH] Fix a few typos

---
 examples/step-64/doc/intro.dox | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/examples/step-64/doc/intro.dox b/examples/step-64/doc/intro.dox
index 85e3121ad3..4642403a96 100644
--- a/examples/step-64/doc/intro.dox
+++ b/examples/step-64/doc/intro.dox
@@ -9,7 +9,7 @@ This program was contributed by Bruno Turcksin, Daniel Arndt, Oak Ridge National
 
 This example shows how to implement a matrix-free method on the GPU using CUDA
 for the Helmhotz equation with variable coefficients on a hypercube. The linear
-system willbe solved using CG and MPI.
+system will be solved using CG and MPI.
 
 In the last few years, heterogeneous computing in general and GPUs in particular
 have gained a lot of popularity. This is because GPUs offer better computing
@@ -19,13 +19,13 @@ interesting to be able to efficiently run a simulation along side a machine
 learning code.
 
 While we have tried for the interface of the matrix-free classes for the CPU and
-the GPU to be a close as possible, there are a few differences. When using
-matrix-free on GPU, one must write some CUDA codes. However, the amount is
-fairly small and the use of CUDA is limited to a few keyword
+the GPU to be as close as possible, there are a few differences. When using
+matrix-free on GPU, one must write some CUDA code. However, the amount is
+fairly small and the use of CUDA is limited to a few keywords.

 <h3>The test case</h3>

-In this example, we consider the Poisson problem @f{eqnarray*} - \nabla \cdot
+In this example, we consider the Helmholtz problem @f{eqnarray*} - \nabla \cdot
 \nabla u + a(\mathbf r) u &=&1,\\ u &=& 0 \quad \text{on} \partial \Omega @f}
 where $a(\mathbf x)$ is a variable coefficient.
 
@@ -41,7 +41,7 @@ now on). A normal calculation on the device can be divided in three separte
 steps:
 1) the data is moved from the host to the device
 2) computation is done on the device
-3) the result is move from the device to the host
+3) the result is moved back from the device to the host
 The data movements can either done manually by the user or done automatically
 using UVM (Unified Virtual Memory). In deal.II, only the first method is
 supported. While it means an extra burden for the user, it allows a better
@@ -50,7 +50,7 @@ important kernels on the host instead of the device.
 
 The data movement in deal.II is done using LinearAlgebra::ReadWriteVector. These
 vectors can be seen as buffers on
-the host that are used to either store data from the device or to seen the
+the host that are used to either store data from the device or to send data to
 the device. There are two types of vectors
 that can be used on the device:
 LinearAlgebra::CUDAWrappers::Vector, which is similar to the more common
@@ -58,7 +58,7 @@ Vector, and LinearAlgebra::distributed::Vector where we
 have specified which memory space to use. The default value if the memory space
 is not specified is MemorySpace::Host.
 
-Next, we show how to move data to/from
+Next, we show how to move data to/from the device using
 LinearAlgebra::CUDAWrappers::Vector:
 
 unsigned int size = 10;
--
2.39.5
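
Note for reviewers: the last hunk ends just where the intro's host-device
transfer example begins ("unsigned int size = 10;"). For reference, a minimal
sketch of the transfer pattern that sentence introduces, using
LinearAlgebra::ReadWriteVector as the host-side staging buffer and
LinearAlgebra::CUDAWrappers::Vector as the device vector, could look like the
code below. The includes, the fill loop, and the use of
VectorOperation::insert are illustrative assumptions, not lines taken from
intro.dox, and this requires a deal.II build configured with CUDA support:

#include <deal.II/lac/cuda_vector.h>
#include <deal.II/lac/read_write_vector.h>

using namespace dealii;

int main()
{
  const unsigned int size = 10;

  // Host-side buffer used to stage data for the device (illustrative fill).
  LinearAlgebra::ReadWriteVector<double> rw_vector(size);
  for (unsigned int i = 0; i < size; ++i)
    rw_vector[i] = i;

  // Vector living in device (GPU) memory.
  LinearAlgebra::CUDAWrappers::Vector<double> vector_dev(size);

  // Step 1: move the data from the host to the device.
  vector_dev.import(rw_vector, VectorOperation::insert);

  // Step 2: computations on the device would happen here.

  // Step 3: move the result back from the device to the host.
  rw_vector.import(vector_dev, VectorOperation::insert);

  return 0;
}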