* than replicating data once for each MPI process. This results in
* large memory savings if the data is large on today's machines that
* can easily house several dozen MPI processes per shared memory
- * space.
+ * space. This use case is outlined in the TableBase class documentation,
+ * since the current function is called from
+ * TableBase::replicate_across_communicator(). Indeed, the primary rationale
+ * for this function is to enable sharing data tables based on TableBase
+ * across MPI processes.
*
* This function does not imply a model of keeping data on different processes
* in sync, as parallel::distributed::Vector and other vector classes do where
* @note The use of the related class InterpolatedUniformGridData is
* discussed in step-53.
*
+ *
+ * <h3>Dealing with large data sets</h3>
+ *
+ * This class is often used to interpolate data provided by fairly
+ * large data tables that are expensive to read from disk, and that take
+ * a large amount of memory when replicated on every process of parallel
+ * (MPI) programs.
+ *
+ * The Table class can help with amortizing this cost by using
+ * shared memory to store the data only as often as necessary -- see the
+ * documentation of the TableBase class. Once one has obtained such a
+ * Table object that uses shared memory, one then needs to ensure that
+ * the current class does not *copy* the table into its own member
+ * variable. Rather, one has to use the *move* constructor of this
+ * class to take over ownership of the table and its shared memory
+ * space. This can be achieved using the following extension of the
+ * code snippet shown in the documentation of the TableBase class:
+ * @code
+ * const unsigned int N=..., M=...; // table sizes, assumed known
+ * Table<2,double> data_table;
+ * const unsigned int root_rank = 0;
+ *
+ * if (Utilities::MPI::this_mpi_process(mpi_communicator) == root_rank)
+ * {
+ * data_table.reinit (N,M);
+ *
+ * std::ifstream input_file ("data_file.dat");
+ * ...; // read the data from the file
+ * }
+ *
+ * // Now distribute to all processes
+ * data_table.replicate_across_communicator (mpi_communicator, root_rank);
+ *
+ * // Set up the x- and y-coordinates of the points stored in the
+ * // data table
+ * std::array<std::vector<double>, 2> coordinate_values;
+ * ...; // do what needs to be done
+ *
+ * // And finally set up the interpolation object. The calls
+ * // to std::move() make sure that the tables are moved into
+ * // the memory space of the InterpolatedTensorProductGridData
+ * // object:
+ * InterpolatedTensorProductGridData<2>
+ * interpolation_function (std::move(coordinate_values),
+ * std::move(data_table));
+ * @endcode
+ *
+ *
* @ingroup functions
*/
template <int dim>
*
* @note The use of this class is discussed in step-53.
*
+ *
+ * <h3>Dealing with large data sets</h3>
+ *
+ * This class supports the same facilities for dealing with large data sets
+ * as the InterpolatedTensorProductGridData class. See there for more
+ * information and example codes.
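+ *
+ * As a brief sketch of what this looks like for the current class
+ * (the grid extents, subdivision counts, and table sizes below are
+ * assumptions of this example): once a Table object has been
+ * replicated across the communicator as discussed there, it can be
+ * moved into an object of the current type rather than copied:
+ * @code
+ *   // A table with 100x200 data points, already replicated across
+ *   // the MPI communicator as shown in the documentation of
+ *   // InterpolatedTensorProductGridData:
+ *   Table<2,double> data_table;
+ *   ...;
+ *
+ *   // The extent of the uniform grid underlying the table, and the
+ *   // number of subintervals in each coordinate direction (one less
+ *   // than the number of data points in that direction):
+ *   std::array<std::pair<double,double>, 2> interval_endpoints =
+ *     {{{0., 1.}, {0., 2.}}};
+ *   std::array<unsigned int, 2> n_subintervals = {{99, 199}};
+ *
+ *   // Move the table into the interpolation object so that the
+ *   // (possibly shared) data is not copied:
+ *   InterpolatedUniformGridData<2>
+ *     interpolation_function (std::move(interval_endpoints),
+ *                             std::move(n_subintervals),
+ *                             std::move(data_table));
+ * @endcode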
+ *
+ *
* @ingroup functions
*/
template <int dim>
* large core counts on which many MPI processes run on the same machine.
*
* @note This function only makes sense if the data type `T` is
- * "self-contained", i.e., all if its information is stored in its
+ * "self-contained", i.e., all of its information is stored in its
* member variables, and if none of the member variables are pointers
* to other parts of the memory. This is because if a type `T` does
* have pointers to other parts of memory, then moving `T` into