#
# Even though [[deprecated]] is a C++14 feature, we have to check
-# wether we can actually use the [[deprecated]] attribute in all
+# whether we can actually use the [[deprecated]] attribute in all
# cases we care about; some of the following are C++17 features.
#
CHECK_CXX_SOURCE_COMPILES(
endif()
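#
# For illustration, a sketch of the kind of probe such a check compiles
# (hypothetical, not the exact source used here); the first two usages
# are C++14, the last two require C++17:
#
#   [[deprecated]] int old_function();    // functions    (C++14)
#   class [[deprecated]] OldClass {};     // classes      (C++14)
#   enum Numbers { two [[deprecated]] };  // enumerators  (C++17)
#   namespace [[deprecated]] old_ns {}    // namespaces   (C++17)
#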
#
- # If _n_cpu or _n_threads are larger than zero we have to accomodate
+ # If _n_cpu or _n_threads are larger than zero we have to accommodate
# the fact that multiple output files specifying different mpirun or
- # threads count are present. In order to accomodate this we create a
+ # thread counts are present. In order to accommodate this we create a
# runtime subdirectory "mpirun_M-threads_N" to the test.
#
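# For example, following this scheme a test invoked with _n_cpu = 4 and
# _n_threads = 2 ends up in the subdirectory "mpirun_4-threads_2".
#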
# Note that we could do this unconditionally for every test but for
#
# CGAL requires C++17 and an externally configured Boost; otherwise the
# call to find_package(CGAL) will fail. Guard the call to FIND_PACKAGE to
-# fail cracefully:
+# fail gracefully:
#
if(DEAL_II_HAVE_CXX17 AND NOT FEATURE_BOOST_BUNDLED_CONFIGURED)
set(CGAL_DO_NOT_WARN_ABOUT_CMAKE_BUILD_TYPE ON)
endif()
endif()
- # Make sure we dont' pass Boost::Boost over to deal.II.
+ # Make sure we don't pass Boost::Boost over to deal.II.
list(FILTER CGAL_LIBRARIES EXCLUDE REGEX "::")
endif()
else()
#
# Determine the stage a test reached: Possible values are
# CONFIGURE - the test started with a special configure stage and failed during configure
-# BUILD - the test reached the build stage and a compilation error occured
+# BUILD - the test reached the build stage and a compilation error occurred
# RUN - the test reached the run stage but the run terminated with an error
# DIFF - the test reached the diff stage but output differed
# PASSED - the test passed all stages
* automatically inside FEFaceEvaluation::gather_evaluate() and
 * FEFaceEvaluation::integrate_scatter(). It might seem inefficient to make
 * this decision for every integration task, but in the end this is a single `if`
- * statement (conditional jump) that is easily predicable for a modern CPU as
+ * statement (conditional jump) that is easily predictable for a modern CPU as
 * the decision is always the same inside an integration loop. (The price is
 * somewhat increased compile times, because the compiler needs to generate
 * code for all paths.)
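 *
 * Schematically (a sketch with hypothetical names, not the actual
 * implementation), the pattern is:
 * @code
 * for (unsigned int q = 0; q < n_q_points; ++q)
 *   if (use_fast_path) // loop-invariant: the same outcome for every q
 *     evaluate_fast(q);
 *   else
 *     evaluate_general(q);
 * @endcode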
Curve Loop(3) = {39, 52, 53, 54, 55, 56, -10, -42};
Plane Surface(2) = {3};
-// Surface of bottem left mesh
+// Surface of bottom left mesh
Curve Loop(4) = {42, 7, 8, 9, 38};
Curve Loop(5) = {49, 50, 51, 48};
Plane Surface(3) = {4, 5};
"cell_type": "markdown",
"metadata": {},
"source": [
- "This is a replica of [step-49](https://www.dealii.org/current/doxygen/deal.II/step_49.html) C++ turorial program. However, here we will use the deal.II Python interface to achieve the same. \n",
+ "This is a replica of [step-49](https://www.dealii.org/current/doxygen/deal.II/step_49.html) C++ tutorial program. However, here we will use the deal.II Python interface to achieve the same. \n",
"\n",
"Not all of the material is replicated since some parts of the original C++ tutorial are only relevant when using C++. Furthermore, in contrast to the C++ program, here we will take advantage of Jupyter notebook format and do coding and visualization in place.\n",
"\n",
"cell_type": "markdown",
"metadata": {},
"source": [
- "This is a replica of [step-53](https://www.dealii.org/current/doxygen/deal.II/step_53.html) C++ turorial program. However, here we will use the deal.II Python interface to implement the functionality of the original tutorial. \n",
+ "This is a replica of [step-53](https://www.dealii.org/current/doxygen/deal.II/step_53.html) C++ tutorial program. However, here we will use the deal.II Python interface to implement the functionality of the original tutorial. \n",
"\n",
"Not all of the material is replicated since some parts of the original C++ tutorial are only relevant when using C++. Therefore, it is recommended that you first go through the original C++ tutorial to see all the details covered there.\n",
"\n",
</table>
The pictures confirm that the normal to mesh projection approach leads to grids that remain evenly spaced
-throughtout the refinement steps. At the same time, these meshes represent rather well the original geometry even in the bottom region
+throughout the refinement steps. At the same time, these meshes represent the original geometry rather well, even in the bottom region
of the bulb, which is not recovered well when employing the directional projector or the normal projector.
void setup_embedded_dofs();
// The only unconventional function we have here is the `setup_coupling()`
- // method, used to generate the sparsity patter for the coupling matrix $C$.
+ // method, used to generate the sparsity pattern for the coupling matrix $C$.
void setup_coupling();
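// As a rough sketch of the idea (hypothetical, simplified argument list;
// deal.II's NonMatching tools provide the actual interface), the body of
// `setup_coupling()` boils down to:
//
//   DynamicSparsityPattern dsp(space_dh.n_dofs(), embedded_dh.n_dofs());
//   NonMatching::create_coupling_sparsity_pattern(
//     space_dh, embedded_dh, quad, dsp, constraints);
//   coupling_sparsity.copy_from(dsp);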
// @sect4{The `Rho` class implementation}
- // This class is used to define the mass density. As we have explaine before,
+ // This class is used to define the mass density. As we have explained before,
// a phononic superlattice cavity is formed by two
// [Distributed Reflector](https://en.wikipedia.org/wiki/Band_gap)
// mirrors and a $\lambda/2$ cavity, where $\lambda$ is the acoustic
// The last thing to note is that since our problem is non-symmetric, we must
// use an appropriate Krylov subspace method. We choose here to
// use GMRES since it offers the guarantee of residual reduction in each
- // iteration. The major disavantage of GMRES is that, for each iteration,
+ // iteration. The major disadvantage of GMRES is that, for each iteration,
// the number of stored temporary vectors increases by one, and one also needs
// to compute a scalar product with all previously stored vectors. This is
// rather expensive. This requirement is relaxed by using the restarted GMRES
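//
// As a minimal sketch of bounding the restart length (the value 50 is
// arbitrary, and the AdditionalData field is named max_n_tmp_vectors in
// older deal.II releases and max_basis_size in newer ones):
//
//   SolverControl control(1000, 1e-10);
//   SolverGMRES<Vector<double>>::AdditionalData data;
//   data.max_basis_size = 50; // restart after at most 50 basis vectors
//   SolverGMRES<Vector<double>> solver(control, data);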
is consistent with the Euler equations along a discontinuity -- and
approximate Riemann solvers, which violate some physical properties and rely
on other mechanisms to render the scheme accurate overall. Approximate Riemann
-solvers have the advantage of beging cheaper to compute. Most flux functions
+solvers have the advantage of being cheaper to compute. Most flux functions
have their origin in the finite volume community; finite volume methods are
similar to DG methods with polynomial degree 0 within the cells (called
volumes). As the
volume integral of the Euler operator $\mathbf{F}$ would disappear for
// - A copy data: a struct that contains all the local assembly
// contributions, in this case <code>CopyData<dim>()</code>.
// - A copy data routine: in this case it is
- // <code>copy_local_to_global()</code> in charge of actually coping these
+ // <code>copy_local_to_global()</code> in charge of actually copying these
// local contributions into the global objects (matrices and/or vectors)
//
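// Schematically (a sketch; `local_assemble_cell` and the constructor
// arguments are hypothetical, the copier is the routine named above):
//
//   WorkStream::run(dof_handler.begin_active(),
//                   dof_handler.end(),
//                   local_assemble_cell,   // the worker
//                   copy_local_to_global,  // the copy-data routine
//                   ScratchData<dim>(fe),  // per-thread scratch storage
//                   CopyData<dim>());      // local contributions
//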
// Most of the following lines are spent in the definition of the worker
* vector of simplices, each one identified by an array of deal.II Points. All
* the simplices together are a subdivision of the intersection. If cells are
* non-affine, a geometrical error is introduced. If the
- * measure of one of the simplices is below a certain treshold which defaults
+ * measure of one of the simplices is below a certain threshold which defaults
* to 1e-9, then it is discarded. In case the two cells are disjoint, an empty
 * vector is returned.
*
* @param cell1 Iterator to the second cell.
* @param mapping0 Mapping for the first cell.
* @param mapping1 Mapping for the second cell.
- * @param tol Treshold to decide whether or not a simplex is included.
+ * @param tol Threshold to decide whether or not a simplex is included.
 * @return Vector of arrays, where each array identifies a simplex by its vertices.
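 *
 * A hypothetical usage sketch (the function name and call form are
 * assumptions, not taken from these docs):
 * @code
 * const auto simplices = compute_intersection_of_cells<2, 2, 2>(
 *   cell0, cell1, mapping0, mapping1);
 * if (simplices.empty())
 *   {
 *     // the two cells are disjoint
 *   }
 * @endcode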
*/
template <int dim0, int dim1, int spacedim>
* the values associated with a key are also symbolic then the returned
* result may still be symbolic in nature. The terminal result of using the
* input substitution map, @p symbol_values, is then guaranteed to be
- * rendered by a single substitition of the returned dependency-resolved
+ * rendered by a single substitution of the returned dependency-resolved
* map.
*
* Example:
const DoFHandler<dim, spacedim> &dof_handler;
/**
- * Data transfered by cell_data_transfer.
+ * Data transferred by cell_data_transfer.
*/
std::vector<Vector<Number>> data_to_transfer;
/**
* A restriction operation similar to the above one. However, the indices
* of the blocks can be chosen arbitrarily. If the indices of cells are
- * given, the ouput is the same as of the above function. However, one
+ * given, the output is the same as that of the above function. However, one
* can also provide, e.g., indices that are also part of a halo around
* a cell to implement element-block based overlapping Schwarz methods.
*
}
const FloatingPointComparator<Number> comparator(
- eps, false /*use relativ torlerance*/, mask);
+ eps, false /*use relative tolerance*/, mask);
if (comparator(M_0, M_1))
return true;
# ifdef DEAL_II_TRILINOS_WITH_BELOS
/**
 * Wrapper around the iterative solver package from the Belos
- * packge
+ * package
* (https://docs.trilinos.org/latest-release/packages/belos/doc/html/index.html),
* targeting deal.II data structures.
*/
}
// now put the tensor into data
- // note we padd with zeros because VTK format always wants to
+ // note we pad with zeros because VTK format always wants to
// see a 3x3 tensor, regardless of dimension
for (unsigned int i = 0; i < 3; ++i)
for (unsigned int j = 0; j < 3; ++j)
// make sure that there is no rounding error for 0.0 and 1.0, since there
// are multiple asserts in the library checking for equality without
- // tolorances
+ // tolerances
for (auto &i : this->quadrature_points)
if (std::abs(i[0] - 0.0) < 1e-12)
i[0] = 0.0;
parallel::distributed::experimental::
FieldTransfer<2, LinearAlgebra::distributed::Vector<double>>
field_transfer(dof_handler);
- // Assgin FE_Q to all the cells
+ // Assign FE_Q to all the cells
for (auto cell : dof_handler.active_cell_iterators())
{
if (cell->is_locally_owned())
parallel::distributed::experimental::
FieldTransfer<2, LinearAlgebra::distributed::Vector<double>>
field_transfer(dof_handler);
- // Assgin FE_Q to all the cells
+ // Assign FE_Q to all the cells
for (auto cell : dof_handler.active_cell_iterators())
{
if (cell->is_locally_owned())
// ---------------------------------------------------------------------
-// Test TensorProductMatrixSymmetricSum for zero (constrained) rows and colums.
+// Test TensorProductMatrixSymmetricSum for zero (constrained) rows and columns.
// We consider a single cell with DBC applied to face 2*(dim-1).
#include <deal.II/dofs/dof_handler.h>
// experiencing was that the mesh I was testing on was too coarse for
// a larger number of processors. This test case shows that as
// well. For 4 processors the code produces output without error
-// for both the 12 repitions and the 2 repetitions. For 6 and 12
+// for both the 12 repetitions and the 2 repetitions. For 6 and 12
// processors only the 12 repetition case produces the proper
// output. Fortunately, it does show that, as long as the mesh is
// adequately refined, DataOutFaces produces the output for each