* @note The use of shared memory between MPI processes requires
* that the detected MPI installation supports the necessary operations.
* This is the case for MPI 3.0 and higher.
+ *
+ * @note This function is not cheap. It needs to create sub-communicators
+ * of the provided @p communicator object, which is generally an expensive
+ * operation. Likewise, the generation of shared memory spaces is not
+ * a cheap operation. As a consequence, this function primarily makes
+ * sense when the goal is to share large read-only data tables among
+ * processes; examples are data tables that are loaded at start-up
+ * time and then used over the course of the run time of the program.
+ * In such cases, the start-up cost of running this function can be
+ * amortized over time, and the potential memory savings from not having to
+ * store the table on each process may be substantial on machines with
+ * large core counts, where many MPI processes run on the same node.
+ *
+ * @note This function only makes sense if the data type `T` is
+ * "self-contained", i.e., all if its information is stored in its
+ * member variables, and if none of the member variables are pointers
+ * to other parts of the memory. This is because if a type `T` does
+ * have pointers to other parts of memory, then moving `T` into
+ * a shared memory space does not result in the other processes having
+ * access to data that the object points to with its member variable
+ * pointers: These continue to live only on one process, and are
+ * typically in memory areas not accessible to the other processes.
+ * As a consequence, the usual use case for this function is to share
+ * arrays of simple objects such as `double`s or `int`s.
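+ *
+ * As a minimal sketch of the intended use (the array size and the way the
+ * data is filled in are purely illustrative), sharing a large array of
+ * `double`s among all processes might look like this:
+ * @code
+ *   AlignedVector<double> table;
+ *   if (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
+ *     {
+ *       table.resize(1000000);
+ *       // ...fill the table on the root process only...
+ *     }
+ *
+ *   // Collectively replicate the root's data; the second argument
+ *   // designates the root process that holds the data.
+ *   table.replicate_across_communicator(MPI_COMM_WORLD, 0);
+ *
+ *   // From here on, every process can read the table, but processes
+ *   // running on the same machine share a single copy of the data.
+ * @endcode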
*/
void
replicate_across_communicator(const MPI_Comm & communicator,
/**
* Pointer to actual data array.
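+ *
+ * Note that the deleter of this pointer is a std::function rather than a
+ * plain function pointer: once replicate_across_communicator() has moved
+ * the data into a shared memory window, releasing the memory also requires
+ * freeing the MPI window and the associated communicator, so the deleter
+ * has to carry state. A capturing lambda wrapped in a std::function can do
+ * that; a plain function pointer cannot.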
*/
- std::unique_ptr<T[], void (*)(T *)> elements;
+ std::unique_ptr<T[], std::function<void(T *)>> elements;
/**
* Pointer to one past the last valid value.
// Create communicators for each group of processes that can use
// shared memory areas. Within each of these groups, we don't care about
// which rank each of the old processes gets except that we would like to
- // make sure that the (global) root process will be have rank=0 within
+ // make sure that the (global) root process will have rank=0 within
// its own sub-communicator. We can do that through the third argument of
// MPI_Comm_split_type (the "key") which is an integer meant to indicate the
// order of processes within the split communicators, and we will set it to
// There are different ways of creating a "shmem_roots_communicator".
// The conceptually easiest way is to create an MPI_Group that only
// includes the shmem roots and then create a communicator from this
- // by way of MPI_Comm_create or MPI_Comm_create_group. The problem
+ // via MPI_Comm_create or MPI_Comm_create_group. The problem
// with this is that we would have to exchange among all processes
// which ones are shmem roots and which are not. This is awkward.
//
// anything among themselves with the communicator so created.
//
// Using MPI_Comm_split has the additional benefit that, just as above,
- // we can choose where each rank will end up in the shmem roots communicator.
+ // we can choose where each rank will end up in shmem_roots_communicator.
// We would again like to set key=0 for the original root_process -- in other
// words, we would then know that among the shmem roots, the original root has
// rank=0. But, just like above, this doesn't appear to be working, so we put
- // the origin root at the *end* of the shmem roots communicator.
+ // the original root at the *end* of shmem_roots_communicator.
//
// In any case, this makes it easy to next determine which process among the
// shmem roots is the one who initiates the broadcast operation mentioned
- // above
+ // above.
MPI_Comm shmem_roots_communicator;
{
const int key =
// Now let the original root_process broadcast the current object to all
// shmem roots. We know that the last rank is the original root process that
- // has all of the data
+ // has all of the data.
if (is_shmem_root)
- Utilities::MPI::broadcast(
- shmem_roots_communicator,
- *this,
- Utilities::MPI::n_mpi_processes(shmem_roots_communicator) - 1);
+ {
+ if (std::is_trivial<T>::value)
+ {
+ // The data is "trivial", i.e., we can copy things directly without
+ // having to go through the serialization/deserialization machinery of
+ // Utilities::MPI::broadcast.
+ //
+ // In that case, first tell all of the other shmem roots how many
+ // elements we will have to deal with, and let them resize their
+ // (non-shared) arrays.
+ const size_type new_size = Utilities::MPI::broadcast(
+ shmem_roots_communicator,
+ size(),
+ Utilities::MPI::n_mpi_processes(shmem_roots_communicator) - 1);
+ if (Utilities::MPI::this_mpi_process(shmem_roots_communicator) !=
+ Utilities::MPI::n_mpi_processes(shmem_roots_communicator) - 1)
+ resize(new_size);
+
+ // Then directly copy from the root process into these buffers
+ int ierr = MPI_Bcast(
+ elements.get(),
+ sizeof(T) * new_size,
+ MPI_CHAR,
+ Utilities::MPI::n_mpi_processes(shmem_roots_communicator) - 1,
+ shmem_roots_communicator);
+ AssertThrowMPI(ierr);
+ }
+ else
+ {
+ // The objects to be sent around are not "trivial", and so we have
+ // to go through the serialization/deserialization machinery. On all
+ // but the sending process, overwrite the current state with the
+ // vector just broadcast.
+ if (Utilities::MPI::this_mpi_process(shmem_roots_communicator) ==
+ Utilities::MPI::n_mpi_processes(shmem_roots_communicator) - 1)
+ Utilities::MPI::broadcast(
+ shmem_roots_communicator,
+ *this,
+ Utilities::MPI::n_mpi_processes(shmem_roots_communicator) - 1);
+ else
+ *this = Utilities::MPI::broadcast(
+ shmem_roots_communicator,
+ *this,
+ Utilities::MPI::n_mpi_processes(shmem_roots_communicator) - 1);
+ }
+ }
// We no longer need the shmem roots communicator, so get rid of it
{
const int ierr = MPI_Comm_free(&shmem_roots_communicator);
AssertThrowMPI(ierr);
}
+
+
// **** Step 3 ****
// At this point, all shmem groups have one shmem root process that has
- // a copy of the data. Let each of these shmem roots broadcast the
- // data to the other processes in their shmem group. As mentioned above,
- // we know that the shmem roots is the last rank in their respective
+ // a copy of the data. This is the point where each shmem group should
+ // establish a shmem area to put the data into. As mentioned above,
+ // we know that the shmem roots are the last rank in their respective
// shmem_group_communicator.
- *this = Utilities::MPI::broadcast(
+ //
+ // The process for all of this works as follows: While all processes in
+ // the shmem group participate in the generation of the shmem memory window,
+ // only the shmem root actually allocates any memory -- the rest just
+ // allocate zero bytes of their own. We allocate space for exactly
+ // size() elements (computed on the shmem_root that already has the data)
+ // and add however many bytes are necessary so that we know that we can align
+ // things to 64-byte boundaries. The worst case happens if the memory system
+ // gives us a pointer to an address one byte past a desired alignment
+ // boundary, and in that case aligning the memory will require us to waste the
+ // first (align_by-1) bytes. So we have to ask for
+ // size() * sizeof(T) + align_by - 1
+ // bytes.
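+ // (As a concrete illustration: with align_by = 64, if the allocation
+ // happens to start one byte past a 64-byte boundary, we have to skip the
+ // first 63 = align_by - 1 bytes before reaching a properly aligned
+ // address for the first element.)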
+ //
+ // Before MPI 4.0, there was no way to specify that we want memory aligned to
+ // a certain number of bytes. This is going to come back to bite us further
+ // down below when we try to get a properly aligned pointer to our memory
+ // region, see the commentary there. Starting with MPI 4.0, one can set a
+ // flag in an MPI_Info structure that requests a desired alignment, so we do
+ // this for forward compatibility; MPI implementations ignore flags they don't
+ // know anything about, and so setting this flag is also backward compatible
+ // with older MPI versions.
+ //
+ // There is one final piece we can already take care of here. At the beginning
+ // of all of this, only the shmem_root knows how many elements there are in
+ // the array. But at the end of it, all processes of course need to know. We
+ // could put this information somewhere into the shmem area, along with the
+ // other data, but that seems clumsy. It turns out that when calling
+ // MPI_Win_allocate_shared, we are asked for the value of a parameter called
+ // 'disp_unit' whose meaning is difficult to determine from the MPI
+ // documentation, and that we do not actually need. So we "abuse" it a bit: On
+ // the shmem root, we put the array size into it. Later on, the remaining
+ // processes can query the shmem root's value of 'disp_unit', and so will be
+ // able to learn about the array size that way.
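+ //
+ // (To illustrate with made-up numbers: for an array of 1000 elements, the
+ // shmem root passes disp_unit=1000 to MPI_Win_allocate_shared below,
+ // whereas all other processes pass disp_unit=1; in step 4, querying the
+ // shmem root's disp_unit then recovers the array size of 1000.)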
+ MPI_Win shmem_window;
+ void * base_ptr;
+ const MPI_Aint align_by = 64;
+ const MPI_Aint alloc_size = Utilities::MPI::broadcast(
shmem_group_communicator,
- *this,
+ (size() * sizeof(T) + align_by - 1),
Utilities::MPI::n_mpi_processes(shmem_group_communicator) - 1);
- // We now also no longer need the shmem group communicators, so get rid of
- // them
{
- const int ierr = MPI_Comm_free(&shmem_group_communicator);
+ const int disp_unit = (is_shmem_root ? size() : 1);
+
+ int ierr;
+
+ MPI_Info mpi_info;
+ ierr = MPI_Info_create(&mpi_info);
+ AssertThrowMPI(ierr);
+ ierr = MPI_Info_set(mpi_info,
+ "mpi_minimum_memory_alignment",
+ std::to_string(align_by).c_str());
+ AssertThrowMPI(ierr);
+ ierr = MPI_Win_allocate_shared((is_shmem_root ? alloc_size : 0),
+ disp_unit,
+ mpi_info,
+ shmem_group_communicator,
+ &base_ptr,
+ &shmem_window);
+ AssertThrowMPI(ierr);
+
+ ierr = MPI_Info_free(&mpi_info);
AssertThrowMPI(ierr);
}
+
+ // **** Step 4 ****
+ // The next step is to teach all non-shmem root processes what the pointer to
+ // the array created by the shmem root is. MPI has a nifty way for this
+ // given that only a single process actually allocated memory in the window:
+ // When calling MPI_Win_shared_query, the MPI documentation says that
+ // "When rank is MPI_PROC_NULL, the pointer, disp_unit, and size returned are
+ // the pointer, disp_unit, and size of the memory segment belonging the lowest
+ // rank that specified size > 0. If all processes in the group attached to the
+ // window specified size = 0, then the call returns size = 0 and a baseptr as
+ // if MPI_ALLOC_MEM was called with size = 0."
+ //
+ // This will allow us to obtain the pointer to the shmem root's memory area,
+ // which is the only one we care about. (None of the other processes have
+ // even allocated any memory.) But this will also retrieve the shmem root's
+ // disp_unit, which in step 3 above we have abused to pass along the number of
+ // elements in the array.
+ //
+ // We don't need to do this on the shmem root process: This process has
+ // already gotten its base_ptr correctly set above, and we can determine the
+ // array size by just calling size().
+ unsigned int array_size =
+ (is_shmem_root ? size() : numbers::invalid_unsigned_int);
+ if (is_shmem_root == false)
+ {
+ int disp_unit;
+ MPI_Aint alloc_size; // not actually used
+ const int ierr = MPI_Win_shared_query(
+ shmem_window, MPI_PROC_NULL, &alloc_size, &disp_unit, &base_ptr);
+ AssertThrowMPI(ierr);
+
+ // Make sure we actually got a pointer, and also unpack the array size as
+ // discussed above.
+ Assert(base_ptr != nullptr, ExcInternalError());
+
+ array_size = disp_unit;
+ }
+
+
+ // **** Step 5 ****
+ // Now that all processes know the address of the space that is visible to
+ // everyone, we need to figure out whether it is properly aligned and if not,
+ // find the next aligned address.
+ //
+ // std::align does that, but it also modifies its last two arguments. The
+ // documentation of that function at
+ // https://en.cppreference.com/w/cpp/memory/align is not entirely clear, but I
+ // *think* that the following should do given that we do not use base_ptr and
+ // available_space any further after the call to std::align.
+ std::size_t available_space = alloc_size;
+ void * base_ptr_backup = base_ptr;
+ T * aligned_shmem_pointer = static_cast<T *>(
+ std::align(align_by, array_size * sizeof(T), base_ptr, available_space));
+ Assert(aligned_shmem_pointer != nullptr, ExcInternalError());
+
+ // There is one problem to guard against. It is *conceivable* that the base_ptr
+ // we have previously obtained from MPI_Win_shared_query is mapped so
+ // awkwardly into the different MPI processes' memory spaces that it is
+ // aligned in one memory space, but not another. In that case, different
+ // processes would align base_ptr differently, and adjust available_space
+ // differently. We can check that by making sure that the max (or min) over
+ // all processes is equal to every process's value. If that's not the case,
+ // then the whole idea of aligning above is wrong and we need to rethink what
+ // it means to align data in a shared memory space.
+ //
+ // One might be tempted to think that this is not how MPI implementations
+ // actually arrange things. Alas, when developing this functionality in 2021,
+ // this is really how at least OpenMPI ends up doing things. (This is with an
+ // OpenMPI implementation of MPI 3.1, so it does not support the flag we set
+ // in the MPI_Info structure above when allocating the memory window.) Indeed,
+ // when running this code on three processes, one ends up with base_ptr values
+ // of
+ // base_ptr=0x7f0842f02108
+ // base_ptr=0x7fc0a47881d0
+ // base_ptr=0x7f64872db108
+ // which, most annoyingly, are aligned to 8 and 16 byte boundaries -- so there
+ // is no common offset std::align could find that leads to a 64-byte
+ // aligned memory address in all three memory spaces. That's a tremendous
+ // nuisance and there is really nothing we can do about this other than just
+ // fall back on the (unaligned) base_ptr in that case.
+ if (Utilities::MPI::min(available_space, shmem_group_communicator) !=
+ Utilities::MPI::max(available_space, shmem_group_communicator))
+ aligned_shmem_pointer = static_cast<T *>(base_ptr_backup);
+
+
+ // **** Step 6 ****
+ // If this is the shmem root process, we need to copy the data into the
+ // shared memory space.
+ if (is_shmem_root)
+ {
+ if (std::is_trivial<T>::value == true)
+ std::memcpy(aligned_shmem_pointer, elements.get(), sizeof(T) * size());
+ else
+ for (std::size_t i = 0; i < size(); ++i)
+ new (&aligned_shmem_pointer[i]) T(std::move(elements[i]));
+ }
+
+ // **** Step 7 ****
+ // Finally, we need to set the pointers of this object to what we just
+ // learned. This also releases all memory that may have been in use
+ // previously.
+ //
+ // The part that is a bit tricky is how to write the deleter of this
+ // shared memory object. When we want to get rid of it, we need to
+ // also release the MPI_Win object along with the shmem_group_communicator
+ // object. That's because as long as we use the shared memory, we still need
+ // to hold on to the MPI_Win object, and the MPI_Win object is based on the
+ // communicator. (The former is definitely true, the latter is not quite clear
+ // from the MPI documentation, but seems reasonable.) So we need to have a
+ // deleter for the pointer that ensures that upon release of the memory, we
+ // not only call the destructor of these memory elements (but only once, on
+ // the shmem root!) but also destroy the MPI_Win and the communicator. All of
+ // that is encapsulated in the following call where the deleter makes copies
+ // of the arguments in the lambda capture.
+ elements =
+ decltype(elements)(aligned_shmem_pointer,
+ [is_shmem_root,
+ array_size,
+ aligned_shmem_pointer,
+ shmem_group_communicator,
+ shmem_window](T *) mutable {
+ if (is_shmem_root)
+ for (unsigned int i = 0; i < array_size; ++i)
+ aligned_shmem_pointer[i].~T();
+
+ int ierr;
+ ierr = MPI_Win_free(&shmem_window);
+ AssertThrowMPI(ierr);
+
+ ierr = MPI_Comm_free(&shmem_group_communicator);
+ AssertThrowMPI(ierr);
+ });
+
+ // We then also have to set the other two pointers that define the state of
+ // the current object. Note that the new buffer size is exactly as large as
+ // necessary, i.e., can store size() elements, regardless of the number of
+ // allocated elements in the original objects.
+ used_elements_end = elements.get() + array_size;
+ allocated_elements_end = used_elements_end;
+
// **** Consistency check ****
// At this point, each process should have a copy of the data.
// Verify this in some sort of roundabout way
inline void
AlignedVector<T>::save(Archive &ar, const unsigned int) const
{
- size_type vec_size(size());
+ size_type vec_size = size();
ar & vec_size;
if (vec_size > 0)
ar &boost::serialization::make_array(elements.get(), vec_size);