From: Daniel Arndt
Date: Thu, 9 Jul 2020 02:44:23 +0000 (-0400)
Subject: Remove lac/parallel_vector.h
X-Git-Tag: v9.3.0-rc1~1306^2~1
X-Git-Url: https://gitweb.dealii.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=fc3caf19da5afa9d9324fbc09f8fdea2271e60d1;p=dealii.git

Remove lac/parallel_vector.h
---

diff --git a/include/deal.II/lac/parallel_vector.h b/include/deal.II/lac/parallel_vector.h
deleted file mode 100644
index a06cc00af6..0000000000
--- a/include/deal.II/lac/parallel_vector.h
+++ /dev/null
@@ -1,155 +0,0 @@
// ---------------------------------------------------------------------
//
// Copyright (C) 2011 - 2018 by the deal.II authors
//
// This file is part of the deal.II library.
//
// The deal.II library is free software; you can use it, redistribute
// it, and/or modify it under the terms of the GNU Lesser General
// Public License as published by the Free Software Foundation; either
// version 2.1 of the License, or (at your option) any later version.
// The full text of the license can be found in the file LICENSE.md at
// the top level directory of deal.II.
//
// ---------------------------------------------------------------------

#ifndef dealii_parallel_vector_h
#define dealii_parallel_vector_h

#include <deal.II/base/config.h>

#include <deal.II/lac/la_parallel_vector.h>

DEAL_II_WARNING(
  "This file is deprecated. Use <deal.II/lac/la_parallel_vector.h> and "
  "LinearAlgebra::distributed::Vector instead.")


DEAL_II_NAMESPACE_OPEN


namespace parallel
{
  namespace distributed
  {
    /*! @addtogroup Vectors
     *@{
     */


    /**
     * Implementation of a parallel vector class. The design of this class
     * is similar to the standard ::dealii::Vector class in deal.II, with
     * the exception that storage is distributed with MPI.
     *
     * The vector is designed for the following scheme of parallel
     * partitioning:
     *
     * - The indices held by individual processes (locally owned part) in
     *   the MPI parallelization form a contiguous range
     *   [my_first_index, my_last_index).
     * - Ghost indices residing at arbitrary positions on other processes
     *   are allowed. It is in general more efficient if ghost indices are
     *   clustered, since they are stored as a set of intervals. The
     *   communication pattern of the ghost indices is determined when
     *   calling the function reinit(locally_owned, ghost_indices,
     *   communicator) and is retained until the partitioning is changed.
     *   This allows for efficient parallel communication of indices: in
     *   particular, the communication pattern is stored once rather than
     *   recomputed for every communication. For more information on ghost
     *   vectors, see also the
     *   @ref GlossGhostedVector "glossary entry on vectors with ghost elements".
     * - Besides the usual global access operator(), it is also possible to
     *   access vector entries in the local index space with the function
     *   @p local_element(). Locally owned indices are placed first,
     *   [0, local_size()), and then all ghost indices follow after them
     *   contiguously, [local_size(), local_size()+n_ghost_entries()).
     *   A sketch of this layout follows this list.
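     *
     * As a minimal sketch of this partitioning (an editorial illustration,
     * not part of the original header; the concrete index sets, the
     * two-rank layout, and the variable names are made up):
     *
     * @code
     * #include <deal.II/base/index_set.h>
     * #include <deal.II/base/mpi.h>
     * #include <deal.II/lac/la_parallel_vector.h>
     *
     * // Assume exactly two MPI ranks, each owning 50 of 100 entries.
     * const unsigned int my_rank =
     *   Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);
     *
     * // Locally owned range: a contiguous block of the global indices.
     * IndexSet locally_owned(100);
     * locally_owned.add_range(50 * my_rank, 50 * (my_rank + 1));
     *
     * // Ghost indices may lie anywhere in the other rank's range.
     * IndexSet ghost_indices(100);
     * ghost_indices.add_index(99 - 99 * my_rank); // 99 on rank 0, 0 on rank 1
     *
     * LinearAlgebra::distributed::Vector<double> v;
     * v.reinit(locally_owned, ghost_indices, MPI_COMM_WORLD);
     *
     * // The same owned entry, accessed globally and in local index space:
     * v(50 * my_rank)    = 1.; // global index
     * v.local_element(0) = 1.; // local index: owned entries come first
     * @endcode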
     *
     * Functions related to parallel functionality:
     *
     * - The function compress() goes through the data associated with
     *   ghost indices and communicates it to the owner process, which can
     *   then add it to the correct position. This can be used e.g. after
     *   having run an assembly routine involving ghosts that fill this
     *   vector. Note that the @p insert mode of @p compress() does not set
     *   the elements included in ghost entries but simply discards them,
     *   assuming that the owning processor has set them to the desired
     *   value already (see also the
     *   @ref GlossCompress "glossary entry on compress").
     * - The update_ghost_values() function imports the data from the
     *   owning processor to the ghost indices in order to provide read
     *   access to the data associated with ghosts.
     * - It is possible to split the above functions into two phases, where
     *   the first initiates the communication and the second one finishes
     *   it. These functions can be used to overlap communication with
     *   computations in other parts of the code; the sketch after this
     *   list shows the pattern.
     * - Of course, reduction operations (like norms) make use of
     *   collective all-to-all MPI communications.
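     *
     * A minimal sketch of these operations (again an editorial
     * illustration, not from the original header; it assumes a ghosted
     * vector v set up as in the previous sketch):
     *
     * @code
     * const types::global_dof_index ghost_index = 99 - 99 * my_rank;
     *
     * // Accumulate into a ghost entry during assembly, then ship the
     * // contribution to its owner, which adds it in place:
     * v(ghost_index) += 1.;
     * v.compress(VectorOperation::add);
     *
     * // Conversely, fetch the owners' values so ghosts become readable:
     * v.update_ghost_values();
     * const double g = v(ghost_index);
     *
     * // Split variant, overlapping communication with computation:
     * v.update_ghost_values_start();
     * // ... work that does not touch ghost entries ...
     * v.update_ghost_values_finish();
     * @endcode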
     * - After creation and whenever zero_out_ghosts() is called (or
     *   operator= (0.)), the vector only allows writing into ghost
     *   elements, not reading from them.
     * - After a call to update_ghost_values(), the vector does not allow
     *   writing into ghost elements but only reading from them. This is to
     *   avoid undesired ghost data artifacts when calling compress() after
     *   modifying some vector entries. The current state of the ghost
     *   entries (read mode or write mode) can be queried by the method
     *   has_ghost_elements(), which returns true exactly when ghost
     *   elements have been updated and false otherwise, irrespective of
     *   the actual number of ghost entries in the vector layout (for that
     *   information, use n_ghost_entries() instead). The sketch below
     *   illustrates the two states.
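     *
     * A small sketch of this state machine (editorial, not from the
     * original header; v is again a ghosted vector as above):
     *
     * @code
     * v.zero_out_ghosts();     // write mode: ghosts writable, not readable
     * // v.has_ghost_elements() == false here
     *
     * v.update_ghost_values(); // read mode: ghosts readable, not writable
     * // v.has_ghost_elements() == true here
     *
     * if (v.has_ghost_elements()) // query the state before touching ghosts
     *   v.zero_out_ghosts();      // back to write mode, e.g. for assembly
     * @endcode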
     * <h4>Limitations regarding the vector size</h4>
     *
     * This vector class is based on two different number types for
     * indexing. The so-called global index type encodes the overall size
     * of the vector; its type is types::global_dof_index. The largest
     * possible value is 2^32-1, or approximately 4 billion, in case 64-bit
     * integers are disabled at configuration of deal.II (the default
     * case), or 2^64-1, or approximately 10^19, if 64-bit integers are
     * enabled (see the glossary entry on
     * @ref GlobalDoFIndex
     * for further information).
     *
     * The second relevant index type is the local index used within one
     * MPI rank. As opposed to the global index, the implementation assumes
     * 32-bit unsigned integers unconditionally. In other words, to
     * actually use a vector with more than four billion entries, you need
     * to use MPI with more than one rank (which in general is a safe
     * assumption, since four billion entries consume at least 16 GB of
     * memory for floats or 32 GB of memory for doubles) and enable 64-bit
     * indices. If more than four billion local elements are present, the
     * implementation tries to detect this, which triggers an exception and
     * aborts the code. Note, however, that detecting this overflow is
     * tricky and the detection mechanism might fail in some circumstances.
     * Therefore, it is strongly recommended not to rely on this class to
     * automatically detect the unsupported case.
     *
     * @deprecated Use LinearAlgebra::distributed::Vector instead.
     */
    template <typename Number>
    using Vector DEAL_II_DEPRECATED =
      LinearAlgebra::distributed::Vector<Number>;

    /*@}*/
  } // namespace distributed
} // namespace parallel

DEAL_II_NAMESPACE_CLOSE

#endif
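
For code that still includes the removed header, migration is mechanical;
a sketch (variable names are arbitrary, and the one-to-one replacement
follows directly from the alias the header provided):

    // before this commit:
    #include <deal.II/lac/parallel_vector.h>
    parallel::distributed::Vector<double> v_old;

    // after this commit:
    #include <deal.II/lac/la_parallel_vector.h>
    LinearAlgebra::distributed::Vector<double> v_new;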