* </dd>
*
*
+ * <dt class="glossary">@anchor GlossDevice <b>Device</b></dt>
+ *
+ * <dd> We commonly refer to GPUs as "devices" in deal.II. This terminology is
+ * inherited from Kokkos and CUDA, which motivated its use. Occasionally, we
+ * also call data corresponding to MemorySpace::Default "device data" (even
+ * though it is allocated in CPU memory if Kokkos was configured without a GPU
+ * backend) to distinguish it from data in MemorySpace::Host.
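+ *
+ * As a sketch of how this terminology appears in user code (assuming a
+ * deal.II build with a GPU-enabled Kokkos backend; the vector class below
+ * is just one example of a class templated on the memory space):
+ * @code
+ *   // Data lives in MemorySpace::Default, i.e., on the device if a GPU
+ *   // backend is enabled, otherwise in ordinary CPU memory:
+ *   LinearAlgebra::distributed::Vector<double, MemorySpace::Default>
+ *     device_vector;
+ *
+ *   // Data always lives in CPU memory:
+ *   LinearAlgebra::distributed::Vector<double, MemorySpace::Host>
+ *     host_vector;
+ * @endcode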
+ * </dd>
+ *
+ *
* <dt class="glossary">@anchor GlossDimension <b>Dimensions `dim` and `spacedim`</b></dt>
*
* <dd> Many classes and functions in deal.II have two template parameters,
};
/**
- * Allocate @p n_elements on the device.
+ * Allocate @p n_elements on the @ref GlossDevice "device".
*/
template <typename T>
inline void
}
/**
- * Free memory on the device.
+ * Free memory on the @ref GlossDevice "device".
*/
template <typename T>
inline void
}
/**
- * Allocator to be used for `std::unique_ptr` pointing to device memory.
+ * Allocator to be used for `std::unique_ptr` pointing to @ref GlossDevice "device" memory.
*/
template <typename Number>
Number *
}
/**
- * Deleter to be used for `std::unique_ptr` pointing to device memory.
+ * Deleter to be used for `std::unique_ptr` pointing to @ref GlossDevice "device" memory.
*/
template <typename Number>
void
}
/**
- * Copy the device ArrayView @p in to the host ArrayView @p out.
+ * Copy the @ref GlossDevice "device" ArrayView @p in to the host ArrayView @p out.
*/
template <typename T>
inline void
}
/**
- * Copy the host ArrayView @p in to the device ArrayView @p out.
+ * Copy the host ArrayView @p in to the @ref GlossDevice "device" ArrayView @p out.
*/
template <typename T>
inline void
}
/**
- * Copy the elements in @p vector_host to the device in @p pointer_dev. The
- * memory needs to be allocate on the device before this function is called.
+ * Copy the elements in @p vector_host to the @ref GlossDevice "device" in @p pointer_dev. The
+ * memory needs to be allocated on the @ref GlossDevice "device" before this function is called.
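+ *
+ * A minimal usage sketch (the helper names Utilities::CUDA::malloc(),
+ * Utilities::CUDA::copy_to_dev(), and Utilities::CUDA::free() are assumed
+ * here and may need to be adapted):
+ * @code
+ *   const unsigned int  n_elements = 100;
+ *   std::vector<double> vector_host(n_elements, 1.);
+ *
+ *   double *pointer_dev = nullptr;
+ *   // Allocate device memory first, then copy the host data into it:
+ *   Utilities::CUDA::malloc(pointer_dev, n_elements);
+ *   Utilities::CUDA::copy_to_dev(vector_host, pointer_dev);
+ *
+ *   // ... use pointer_dev in device code ...
+ *
+ *   Utilities::CUDA::free(pointer_dev);
+ * @endcode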
*/
template <typename T>
inline void
namespace MemorySpace
{
/**
- * Structure which stores data on the host or the device depending on the
+ * Structure which stores data on the host or the @ref GlossDevice "device" depending on the
* template parameter @p MemorySpace. Valid choices are MemorySpace::Host,
* MemorySpace::Default, and MemorySpace::CUDA (if CUDA was enabled in
* deal.II). The data is copied into the structure which then owns the data
/**
* Copy the class member values to @p begin.
- * If the data is on the device it is moved to the host.
+ * If the data is on the @ref GlossDevice "device" it is moved to the host.
*/
void
copy_to(T *begin, const std::size_t n_elements);
Kokkos::View<T *, Kokkos::HostSpace> values_host_buffer;
/**
- * Kokkos View owning the data on the device (unless @p values_sm_ptr is used).
+ * Kokkos View owning the data on the @ref GlossDevice "device" (unless @p values_sm_ptr is used).
*/
Kokkos::View<T *, typename MemorySpace::kokkos_space> values;
* template is selected if number is not a complex data type, this
* function simply returns the given number.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
static constexpr DEAL_II_HOST_DEVICE const number &
conjugate(const number &x);
* general template is chosen for types not equal to std::complex, this
* function simply returns the square of the given number.
*
- * @note If the template type can be used in device code, the same holds true
+ * @note If the template type can be used in @ref GlossDevice "device" code, the same holds true
* for this function.
*/
static constexpr DEAL_II_HOST_DEVICE real_type
private:
/**
* Initialize import_indices_plain_dev from import_indices_data. This
- * function is only used when using device-aware MPI.
+ * function is only used when using @ref GlossDevice "device"-aware MPI.
*/
void
initialize_import_indices_plain_dev() const;
* The set of (local) indices that we are importing during compress(),
* i.e., others' ghosts that belong to the local range. The data stored is
* the same as in import_indices_data but the data is expanded in plain
- * arrays. This variable is only used when using device-aware MPI.
+ * arrays. This variable is only used when using @ref GlossDevice "device"-aware MPI.
*/
// The variable is mutable to enable lazy initialization in
// export_to_ghosted_array_start().
* Standard constructor. Creates an object that corresponds to the origin,
* i.e., all coordinates are set to zero.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
DEAL_II_HOST_DEVICE
Point();
* <tt>dim!=1</tt> as it would leave some components of the point
* coordinates uninitialized.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
explicit DEAL_II_HOST_DEVICE
Point(const Number x);
* coordinates uninitialized (if dim>2) or would not use some arguments (if
* dim<2).
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
DEAL_II_HOST_DEVICE
Point(const Number x, const Number y);
* point coordinates uninitialized (if dim>3) or would not use some
* arguments (if dim<3).
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
DEAL_II_HOST_DEVICE
Point(const Number x, const Number y, const Number z);
* that is zero in all coordinates except for a single 1 in the <tt>i</tt>th
* coordinate.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
static DEAL_II_HOST_DEVICE Point<dim, Number>
unit_vector(const unsigned int i);
/**
* Read access to the <tt>index</tt>th coordinate.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
DEAL_II_HOST_DEVICE Number
operator()(const unsigned int index) const;
/**
* Read and write access to the <tt>index</tt>th coordinate.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
DEAL_II_HOST_DEVICE Number &
operator()(const unsigned int index);
/**
* Add an offset given as Tensor<1,dim,Number> to a point.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
DEAL_II_HOST_DEVICE Point<dim, Number>
operator+(const Tensor<1, dim, Number> &) const;
* origin) and, consequently, the result is returned as a Tensor@<1,dim@>
* rather than as a Point@<dim@>.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
DEAL_II_HOST_DEVICE Tensor<1, dim, Number>
operator-(const Point<dim, Number> &) const;
* documentation of this class, the result is then naturally returned as a
* Point@<dim@> object rather than as a Tensor@<1,dim@>.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
DEAL_II_HOST_DEVICE Point<dim, Number>
operator-(const Tensor<1, dim, Number> &) const;
/**
* The opposite vector.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
DEAL_II_HOST_DEVICE Point<dim, Number>
operator-() const;
/**
* Multiply the current point by a factor.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*
* @relatesalso EnableIfScalar
*/
/**
* Divide the current point by a factor.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
template <typename OtherNumber>
DEAL_II_HOST_DEVICE Point<
/**
* Return the scalar product of the vectors representing two points.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
DEAL_II_HOST_DEVICE Number
operator*(const Tensor<1, dim, Number> &p) const;
* Tensor<rank,dim,Number>::norm_square() which returns the square of the
* Frobenius norm.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
DEAL_II_HOST_DEVICE typename numbers::NumberTraits<Number>::real_type
square() const;
* <tt>p</tt>, i.e. the $l_2$ norm of the difference between the
* vectors representing the two points.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
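+ *
+ * For example, the following sketch (the loop bound @p n_points is a
+ * placeholder; a Kokkos execution space is assumed to be available)
+ * evaluates distances inside device code:
+ * @code
+ *   Kokkos::parallel_for(
+ *     n_points, KOKKOS_LAMBDA(const int i) {
+ *       const Point<3, double> p(1. * i, 2., 3.);
+ *       const Point<3, double> origin;
+ *       const double d = p.distance(origin); // runs in device code
+ *       (void)d;
+ *     });
+ * @endcode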
*/
DEAL_II_HOST_DEVICE typename numbers::NumberTraits<Number>::real_type
distance(const Point<dim, Number> &p) const;
* Return the squared Euclidean distance of <tt>this</tt> point to the point
* <tt>p</tt>.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
DEAL_II_HOST_DEVICE typename numbers::NumberTraits<Number>::real_type
distance_square(const Point<dim, Number> &p) const;
/**
* Global operator scaling a point vector by a scalar.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*
* @relates Point
*/
/**
* Constructor. Set to zero.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
constexpr DEAL_II_HOST_DEVICE
Tensor();
* obviously requires that the @p OtherNumber type is convertible to @p
* Number.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
template <typename OtherNumber>
constexpr DEAL_II_HOST_DEVICE
/**
* Constructor, where the data is copied from a C-style array.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
template <typename OtherNumber>
constexpr DEAL_II_HOST_DEVICE
* This is the non-const conversion operator that returns a writable
* reference.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
constexpr DEAL_II_HOST_DEVICE
operator Number &();
*
* This is the const conversion operator that returns a read-only reference.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
constexpr DEAL_II_HOST_DEVICE operator const Number &() const;
* obviously requires that the @p OtherNumber type is convertible to @p
* Number.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
template <typename OtherNumber>
constexpr DEAL_II_HOST_DEVICE Tensor &
* copy constructor for Sacado::Rad::ADvar types automatically.
* See https://github.com/dealii/dealii/pull/5865.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
constexpr DEAL_II_HOST_DEVICE Tensor &
operator=(const Tensor<0, dim, Number> &rhs);
* This operator assigns a scalar to a tensor. This obviously requires
* that the @p OtherNumber type is convertible to @p Number.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
template <typename OtherNumber>
constexpr DEAL_II_HOST_DEVICE Tensor &
/**
* Add another scalar.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
template <typename OtherNumber>
constexpr DEAL_II_HOST_DEVICE Tensor &
/**
* Subtract another scalar.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
template <typename OtherNumber>
constexpr DEAL_II_HOST_DEVICE Tensor &
/**
* Multiply the scalar with a <tt>factor</tt>.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
template <typename OtherNumber>
constexpr DEAL_II_HOST_DEVICE Tensor &
/**
* Divide the scalar by <tt>factor</tt>.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
template <typename OtherNumber>
constexpr DEAL_II_HOST_DEVICE Tensor &
/**
* Tensor with inverted entries.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
constexpr DEAL_II_HOST_DEVICE Tensor
operator-() const;
* Return the square of the Frobenius-norm of a tensor, i.e. the sum of the
* absolute squares of all entries.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
constexpr DEAL_II_HOST_DEVICE real_type
norm_square() const;
/**
* Constructor. Initialize all entries to zero.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
constexpr DEAL_II_HOST_DEVICE_ALWAYS_INLINE
Tensor();
/**
* A constructor where the data is copied from a C-style array.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
constexpr DEAL_II_HOST_DEVICE explicit Tensor(const array_type &initializer);
* either equal to @p Number, or is convertible to @p Number.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
template <typename ElementType, typename MemorySpace>
constexpr DEAL_II_HOST_DEVICE explicit Tensor(
* obviously requires that the @p OtherNumber type is convertible to @p
* Number.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
template <typename OtherNumber>
constexpr DEAL_II_HOST_DEVICE
/**
* Read-Write access operator.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
constexpr DEAL_II_HOST_DEVICE value_type &
operator[](const unsigned int i);
/**
* Read-only access operator.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
constexpr DEAL_II_HOST_DEVICE const value_type &
operator[](const unsigned int i) const;
* This obviously requires that the @p OtherNumber type is convertible to @p
* Number.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
template <typename OtherNumber>
constexpr DEAL_II_HOST_DEVICE Tensor &
/**
* Add another tensor.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
template <typename OtherNumber>
constexpr DEAL_II_HOST_DEVICE Tensor &
/**
* Subtract another tensor.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
template <typename OtherNumber>
constexpr DEAL_II_HOST_DEVICE Tensor &
* Scale the tensor by <tt>factor</tt>, i.e. multiply all components by
* <tt>factor</tt>.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
template <typename OtherNumber>
constexpr DEAL_II_HOST_DEVICE Tensor &
/**
* Scale the vector by <tt>1/factor</tt>.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
template <typename OtherNumber>
constexpr DEAL_II_HOST_DEVICE Tensor &
/**
* Unary minus operator. Negate all entries of a tensor.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
constexpr DEAL_II_HOST_DEVICE Tensor
operator-() const;
* the absolute squares of all entries. For the present case of rank-1
* tensors, this equals the usual <tt>l<sub>2</sub></tt> norm of the vector.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
DEAL_II_HOST_DEVICE
typename numbers::NumberTraits<Number>::real_type
* Return the square of the Frobenius-norm of a tensor, i.e. the sum of the
* absolute squares of all entries.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
constexpr DEAL_II_HOST_DEVICE
typename numbers::NumberTraits<Number>::real_type
* This constructor is for internal use. It provides a way
* to create constexpr constructors for Tensor<rank, dim, Number>
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*/
template <typename ArrayLike, std::size_t... Indices>
constexpr DEAL_II_HOST_DEVICE
* This function unwraps the underlying @p Number stored in the Tensor and
* multiplies @p object with it.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*
* @relatesalso Tensor
*/
* This function unwraps the underlying @p Number stored in the Tensor and
* multiplies @p object with it.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*
* @relatesalso Tensor
*/
* OtherNumber that are stored within the Tensor and multiplies them. It
* returns an unwrapped number of product type.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*
* @relatesalso Tensor
*/
/**
* Division of a tensor of rank 0 by a scalar number.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*
* @relatesalso Tensor
*/
/**
* Add two tensors of rank 0.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*
* @relatesalso Tensor
*/
/**
* Subtract two tensors of rank 0.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*
* @relatesalso Tensor
*/
* number, a complex floating point number, etc.) is allowed, see the
* documentation of EnableIfScalar for details.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*
* @relatesalso Tensor
*/
* number, a complex floating point number, etc.) is allowed, see the
* documentation of EnableIfScalar for details.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*
* @relatesalso Tensor
*/
* discussion on operator*() above for more information about template
* arguments and the return type.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*
* @relatesalso Tensor
*/
*
* @tparam rank The rank of both tensors.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*
* @relatesalso Tensor
*/
*
* @tparam rank The rank of both tensors.
*
- * @note This function can also be used in device code.
+ * @note This function can also be used in @ref GlossDevice "device" code.
*
* @relatesalso Tensor
*/
SmartPointer<const SparseMatrix<Number>> matrix_pointer;
/**
- * Pointer to the values (on the device) of the computed preconditioning
+ * Pointer to the values (on the @ref GlossDevice "device") of the computed preconditioning
* matrix.
*/
std::unique_ptr<Number[], void (*)(Number *)> P_val_dev;
/**
- * Pointer to the row pointer (on the device) of the sparse matrix this
+ * Pointer to the row pointer (on the @ref GlossDevice "device") of the sparse matrix this
* object was initialized with. Guarded by matrix_pointer.
*/
const int *P_row_ptr_dev;
/**
- * Pointer to the column indices (on the device) of the sparse matrix this
+ * Pointer to the column indices (on the @ref GlossDevice "device") of the sparse matrix this
* object was initialized with. Guarded by matrix_pointer.
*/
const int *P_column_index_dev;
/**
- * Pointer to the value (on the device) for a temporary (helper) vector
+ * Pointer to the value (on the @ref GlossDevice "device") for a temporary (helper) vector
* used in vmult().
*/
std::unique_ptr<Number[], void (*)(Number *)> tmp_dev;
/**
- * Pointer to an internal buffer (on the device) that is used for
+ * Pointer to an internal buffer (on the @ref GlossDevice "device") that is used for
* computing the decomposition.
*/
std::unique_ptr<void, void (*)(void *)> buffer_dev;
SmartPointer<const SparseMatrix<Number>> matrix_pointer;
/**
- * Pointer to the values (on the device) of the computed preconditioning
+ * Pointer to the values (on the @ref GlossDevice "device") of the computed preconditioning
* matrix.
*/
std::unique_ptr<Number[], void (*)(Number *)> P_val_dev;
/**
- * Pointer to the row pointer (on the device) of the sparse matrix this
+ * Pointer to the row pointer (on the @ref GlossDevice "device") of the sparse matrix this
* object was initialized with. Guarded by matrix_pointer.
*/
const int *P_row_ptr_dev;
/**
- * Pointer to the column indices (on the device) of the sparse matrix this
+ * Pointer to the column indices (on the @ref GlossDevice "device") of the sparse matrix this
* object was initialized with. Guarded by matrix_pointer.
*/
const int *P_column_index_dev;
/**
- * Pointer to the value (on the device) for a temporary (helper) vector
+ * Pointer to the value (on the @ref GlossDevice "device") for a temporary (helper) vector
* used in vmult().
*/
std::unique_ptr<Number[], void (*)(Number *)> tmp_dev;
/**
- * Pointer to an internal buffer (on the device) that is used for
+ * Pointer to an internal buffer (on the @ref GlossDevice "device") that is used for
* computing the decomposition.
*/
std::unique_ptr<void, void (*)(void *)> buffer_dev;
/**
* Set the solver type. Possibilities are:
* <ul>
- * <li> "Cholesky" which performs a Cholesky decomposition on the device
+ * <li> "Cholesky" which performs a Cholesky decomposition on the @ref GlossDevice "device"
* </li>
* <li> "LU_dense" which converts the sparse matrix to a dense
* matrix and uses LU factorization </li>
/**
* Constructor. Takes a Utilities::CUDA::Handle and a sparse matrix on the
- * host. The sparse matrix on the host is copied on the device and the
+ * host. The sparse matrix on the host is copied to the @ref GlossDevice "device" and the
* elements are reordered according to the format supported by cuSPARSE.
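+ *
+ * A usage sketch (the handle and matrix names are illustrative, and the
+ * host matrix is assumed to have been assembled elsewhere):
+ * @code
+ *   Utilities::CUDA::Handle cuda_handle;
+ *   dealii::SparseMatrix<double> matrix_host;
+ *   // ... set up the sparsity pattern and assemble matrix_host ...
+ *   CUDAWrappers::SparseMatrix<double> matrix_dev(cuda_handle, matrix_host);
+ * @endcode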
*/
SparseMatrix(Utilities::CUDA::Handle & handle,
/**
* Reinitialize the sparse matrix. The sparse matrix on the host is copied
- * to the device and the elementes are reordered according to the format
+ * to the @ref GlossDevice "device" and the elements are reordered according to the format
* supported by cuSPARSE.
*/
void
int n_cols;
/**
- * Pointer to the values (on the device) of the sparse matrix.
+ * Pointer to the values (on the @ref GlossDevice "device") of the sparse matrix.
*/
std::unique_ptr<Number[], void (*)(Number *)> val_dev;
/**
- * Pointer to the column indices (on the device) of the sparse matrix.
+ * Pointer to the column indices (on the @ref GlossDevice "device") of the sparse matrix.
*/
std::unique_ptr<int[], void (*)(int *)> column_index_dev;
/**
- * Pointer to the row pointer (on the device) of the sparse matrix.
+ * Pointer to the row pointer (on the @ref GlossDevice "device") of the sparse matrix.
*/
std::unique_ptr<int[], void (*)(int *)> row_ptr_dev;
* necessary. Since an MPI communication may be performed, import needs to
* be called on all the processors.
*
- * @note By default, the GPU device id is chosen in a round-robin fashion
- * according to the local MPI rank id. To choose a different device, Kokkos
+ * @note By default, the GPU @ref GlossDevice "device" id is chosen in a round-robin fashion
+ * according to the local MPI rank id. To choose a different @ref GlossDevice "device", Kokkos
* has to be initialized explicitly, providing the respective device id.
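+ *
+ * For instance, each MPI rank could be bound to a device along the
+ * following lines (a sketch only; @p my_rank and @p n_devices_per_node are
+ * placeholders, and the InitializationSettings interface requires a
+ * sufficiently recent Kokkos version):
+ * @code
+ *   Kokkos::InitializationSettings settings;
+ *   settings.set_device_id(my_rank % n_devices_per_node);
+ *   Kokkos::initialize(settings);
+ * @endcode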
*
* Must follow a call to the @p compress_start function.
*
* When the MemorySpace is Default and MPI is not GPU-aware, data changed
- * on the device after the call to compress_start will be lost.
+ * on the @ref GlossDevice "device" after the call to compress_start will be lost.
*/
void
compress_finish(VectorOperation::values operation);
* improve performance.
*
* @note If the MemorySpace is Default, the data in the ReadWriteVector will
- * be moved to the device.
+ * be moved to the @ref GlossDevice "device".
*/
virtual void
import(const LinearAlgebra::ReadWriteVector<Number> &V,
* It holds that end() - begin() == locally_owned_size().
*
* @note For the Default memory space, the iterator might point to memory
- * on the device.
+ * on the @ref GlossDevice "device".
*/
iterator
begin();
* of the vector.
*
* @note For the Default memory space, the iterator might point to memory
- * on the device.
+ * on the @ref GlossDevice "device".
*/
const_iterator
begin() const;
* of locally owned entries.
*
* @note For the Default memory space, the iterator might point to memory
- * on the device.
+ * on the @ref GlossDevice "device".
*/
iterator
end();
* the array of the locally owned entries.
*
* @note For the Default memory space, the iterator might point to memory
- * on the device.
+ * on the @ref GlossDevice "device".
*/
const_iterator
end() const;
* Return the pointer to the underlying raw array.
*
* @note For the Default memory space, the pointer might point to memory
- * on the device.
+ * on the @ref GlossDevice "device".
*/
Number *
get_values() const;
*
* In case Kokkos was configured with GPU support, this class performs its
* actions on the GPU. In particular, there is no need for manually
- * synchronizing memory between host and device.
+ * synchronizing memory between host and @ref GlossDevice "device".
*
* @ingroup TrilinosWrappers
* @ingroup Vectors
/**
- * Copy @p data from the device to the device. @p update_flags should be
+ * Copy @p data from the @ref GlossDevice "device" to the host. @p update_flags should be
* identical to the one used in MatrixFree::AdditionalData.
*
* @relates CUDAWrappers::MatrixFree
/**
- * Allocate an array to the device and copy @p array_host to the device.
+ * Allocate an array on the @ref GlossDevice "device" and copy @p array_host to the @ref GlossDevice "device".
*/
template <typename Number1>
void