From 8a6c10026a556ea85efaf77f54a82c97fffe8495 Mon Sep 17 00:00:00 2001
From: Timo Heister
Date: Wed, 11 May 2022 16:39:40 -0400
Subject: [PATCH] step-40: update IO section

Update the documentation on how we generate output using MPI I/O (we
originally wrote one vtu per process).

Remove the limit of 32 MPI ranks, because it is confusing according to
my students. I think this was put in place to avoid generating 1000+
output files, which is no longer the case (we use 8 here). The output
is a few MB in total, so I don't see a reason to limit things.
---
 examples/step-40/step-40.cc | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/examples/step-40/step-40.cc b/examples/step-40/step-40.cc
index de9d4c487e..c97ea27c02 100644
--- a/examples/step-40/step-40.cc
+++ b/examples/step-40/step-40.cc
@@ -545,12 +545,17 @@ namespace Step40
   // bottleneck in parallelizing. In step-18, we have moved this step out of
   // the actual computation but shifted it into a separate program that later
   // combined the output from various processors into a single file. But this
-  // doesn't scale: if the number of processors is large, this may mean that
-  // the step of combining data on a single processor later becomes the
-  // longest running part of the program, or it may produce a file that's so
-  // large that it can't be visualized any more. We here follow a more
-  // sensible approach, namely creating individual files for each MPI process
-  // and leaving it to the visualization program to make sense of that.
+  // doesn't scale for several reasons: First, creating a single file per
+  // processor will overwhelm the filesystem when the number of processors
+  // is large. Second, the step of combining all output files later can become
+  // the longest running part of the program, or it may produce a file that's
+  // so large that it can't be visualized easily any more.
+  //
+  // We here follow a more sophisticated approach that uses
+  // high-performance, parallel I/O routines via MPI I/O to write to
+  // a small, fixed number of visualization files (here 8). We also
+  // generate a .pvtu record referencing these .vtu files, which can
+  // be opened directly in visualization tools like ParaView and VisIt.
   //
   // To start, the top of the function looks like it usually does. In addition
   // to attaching the solution vector (the one that has entries for all locally
@@ -581,7 +586,7 @@ namespace Step40
 
     data_out.build_patches();
 
-    // The next step is to write this data to disk. We write up to 8 VTU files
+    // The final step is to write this data to disk. We write up to 8 VTU files
     // in parallel with the help of MPI-IO. Additionally a PVTU record is
     // generated, which groups the written VTU files.
     data_out.write_vtu_with_pvtu_record(
@@ -595,11 +600,7 @@ namespace Step40
   // The function that controls the overall behavior of the program is again
   // like the one in step-6. The minor difference are the use of
   // pcout instead of std::cout for output to the
-  // console (see also step-17) and that we only generate graphical output if
-  // at most 32 processors are involved. Without this limit, it would be just
-  // too easy for people carelessly running this program without reading it
-  // first to bring down the cluster interconnect and fill any file system
-  // available :-)
+  // console (see also step-17).
   //
   // A functional difference to step-6 is the use of a square domain and that
   // we start with a slightly finer mesh (5 global refinement cycles) -- there
@@ -641,11 +642,10 @@ namespace Step40
         assemble_system();
         solve();
 
-        if (Utilities::MPI::n_mpi_processes(mpi_communicator) <= 32)
-          {
-            TimerOutput::Scope t(computing_timer, "output");
-            output_results(cycle);
-          }
+        {
+          TimerOutput::Scope t(computing_timer, "output");
+          output_results(cycle);
+        }
 
         computing_timer.print_summary();
         computing_timer.reset();
-- 
2.39.5
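
Note on the first hunk: the diff context above cuts off the argument list of
data_out.write_vtu_with_pvtu_record(). The following is a minimal sketch of how
such a call is typically completed, assuming deal.II's
DataOutInterface::write_vtu_with_pvtu_record() interface; the write_output()
helper, directory, filename stem, and digit count are illustrative, and the
final argument is the n_groups parameter that caps the number of .vtu files
(the "8" mentioned in the updated comment):

  #include <deal.II/base/mpi.h>
  #include <deal.II/numerics/data_out.h>

  template <int dim>
  void write_output(dealii::DataOut<dim> &data_out,
                    const MPI_Comm        mpi_communicator,
                    const unsigned int    cycle)
  {
    // Each rank contributes to at most 8 .vtu files, written collectively
    // via MPI I/O; in addition, a single .pvtu record referencing them is
    // written, which ParaView and VisIt can open directly.
    data_out.write_vtu_with_pvtu_record("./",
                                        "solution",
                                        cycle,
                                        mpi_communicator,
                                        /*n_digits_for_counter=*/2,
                                        /*n_groups=*/8);
  }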
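
Similarly, the "pcout instead of std::cout" mentioned in the run()
documentation refers to a ConditionalOStream member. A minimal sketch of the
usual setup, assuming deal.II's ConditionalOStream and Utilities::MPI
interfaces; the class skeleton here is illustrative, not the tutorial's actual
class:

  #include <deal.II/base/conditional_ostream.h>
  #include <deal.II/base/mpi.h>

  #include <iostream>

  class LaplaceProblem
  {
  public:
    LaplaceProblem()
      : mpi_communicator(MPI_COMM_WORLD)
      // Only rank 0 actually writes; all other ranks discard their output,
      // so a statement like pcout << "Cycle " << cycle << std::endl;
      // appears exactly once on the console.
      , pcout(std::cout,
              dealii::Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
    {}

  private:
    MPI_Comm                   mpi_communicator;
    dealii::ConditionalOStream pcout;
  };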