* CompressedSimpleSparsityPattern indicates that it should store too few rows
* of the matrix, the program will either abort when you attempt to write into
* matrix entries that do not exist or the matrix class will silently allocate
- * more memory to accomodate them. As a consequence, it is useful to err on
+ * more memory to accommodate them. As a consequence, it is useful to err on
* the side of caution when indicating which constraints to store and use the
* result of DoFTools::extract_locally_relevant_dofs() rather than
 * DoFTools::extract_locally_active_dofs(). This is also affordable since the
* (Note that function names and exact calling sequences may change
* over time, but the general principle remains the same.) I.e., if
* the given condition is violated, then the file and line in which
- * the exception occured as well as the condition itself and the call
+ * the exception occurred as well as the condition itself and the call
* sequence of the exception object is passed to the
* deal_II_exceptions::internals::issue_error_assert_1()
* function. Additionally an object of the form given by <tt>exc</tt>
*
* <dt class="glossary">@anchor GlossMaterialId <b>Material id</b></dt>
* <dd>Each cell of a triangulation has associated with it a property called
- * "material id". It is commonly used in problems with heterogenous
+ * "material id". It is commonly used in problems with heterogeneous
* coefficients to identify which part of the domain a cell is in and,
* consequently, which value the coefficient should have on this particular
* cell. The material id is inherited from mother to child cell upon mesh
*
* In addition, deal.II can read an intermediate graphics format using the
* DataOutReader. This format is used as an intermediate step between data
- * associated wiht a simulation and is written by the DataOutBase class (or
+ * associated with a simulation and is written by the DataOutBase class (or
* through the more derived classes described in the \ref output module). The
* DataOutReader class reads this data back in, and it can then be converted
* to any of a number of data formats supported by visualization programs.
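 *
 * A sketch of this two-step process (file names are placeholders, and the
 * exact calling sequence may differ between versions):
 * @code
 *   // in the simulation: write the intermediate format
 *   std::ofstream intermediate_out ("solution.d2");
 *   data_out.write_deal_II_intermediate (intermediate_out);
 *
 *   // later: read it back and convert to a graphics format
 *   std::ifstream intermediate_in ("solution.d2");
 *   DataOutReader<dim> reader;
 *   reader.read (intermediate_in);
 *   std::ofstream vtk_out ("solution.vtk");
 *   reader.write_vtk (vtk_out);
 * @endcode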
Past-the-end iterators may also be used to compare an iterator with
the before-the-start value, when running backwards. There is no
-distiction between the iterators pointing past the two ends of a
+distinction between the iterators pointing past the two ends of a
vector.
-Cells are stored based on a hierachical structure of levels, therefore the
+Cells are stored based on a hierarchical structure of levels, therefore the
above mentioned structure is useful. Faces however are not organized in
levels, and accessors for objects of lower dimensionality do not have a
<code>present_level</code> member variable.
* bit more creative. The way chosen is to introduce a function
* <code>new_task</code> that takes as arguments the function to call as well
* as the arguments to the call. The <code>new_task</code> function is
- * overloaded to accomodate starting tasks with functions that take no, one,
+ * overloaded to accommodate starting tasks with functions that take no, one,
* two, and up to 9 arguments. In deal.II, these functions live in the Threads
* namespace. Consequently, the actual code for what we try to do above looks
* like this:
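 * (A hedged sketch of such a call; <code>my_function</code> and
 * <code>argument</code> are placeholders for whatever is to be run on
 * the new task:)
 * @code
 *   Threads::Task<double> task = Threads::new_task (&my_function, argument);
 *   const double result = task.return_value();
 * @endcode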
*
* Some of the classes, like DerivativeApproximation, KellyErrorEstimator and
* SolutionTransfer, act on solutions already obtained, and compute derived
- * quantities in the first two cases, or help transfering a set of vectors
+ * quantities in the first two cases, or help transferring a set of vectors
* from one mesh to another.
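 *
 * For example, KellyErrorEstimator is typically invoked along these lines
 * (a sketch only; the quadrature degree is arbitrary and the empty
 * function map indicates that no Neumann boundary conditions are
 * prescribed):
 * @code
 *   Vector<float> estimated_error_per_cell (triangulation.n_active_cells());
 *   KellyErrorEstimator<dim>::estimate (dof_handler,
 *                                       QGauss<dim-1>(2),
 *                                       typename FunctionMap<dim>::type(),
 *                                       solution,
 *                                       estimated_error_per_cell);
 * @endcode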
*
* The remaining classes MatrixCreator, MatrixTools, and VectorTools provide
for the FiniteElement, but need to set <code>update_once=0</code>
and <code>update_each=update_jacobians</code> for the Mapping object.
-To accomodate this structure, at the time a FEValues object is constructed,
+To accommodate this structure, at the time a FEValues object is constructed,
it asks both the FiniteElement and the Mapping object it uses the following:
<ol>
<li> Are any additional values required in order to compute the
<tr valign="top">
<td><a href="../../doxygen/deal.II/step_15.html">Step-15</a></td>
<td> 1d problems, nonlinear solvers,
- transfering a solution across mesh refinement.
+ transferring a solution across mesh refinement.
</td></tr>
<tr valign="top">
<a href="../../doxygen/deal.II/step_40.html">Step-40</a>,
<a href="../../doxygen/deal.II/step_43.html">Step-43</a>
</td>
- <td> Transfering solutions across mesh refinement
+ <td> Transferring solutions across mesh refinement
</td>
</tr>
<p>
Note: For compilation of <acronym>ARPACK</acronym> we emphasise
- adding the compiler flag <code>-fPIC</code>. This is a definate
+ adding the compiler flag <code>-fPIC</code>. This is a definite
requirement if we are compiling <acronym>deal.II</acronym> with
shared libraries (which is the default). If we had preferred to be
compiling <acronym>deal.II</acronym> without shared libraries,
<li> Finally we add the flag <code>-fPIC</code> to the
three compiler flag variables <code>OPTF</code>,
<code>OPTC</code>, and <code>OPTL</code> which follow
- imediately after the above. Without adding this last flag it
+ immediately after the above. Without adding this last flag it
will not be possible to link <acronym>deal.II</acronym> with
<acronym>MUMPS</acronym> as a shared library.
</li>
<p>
Note: Throughout this description of the compilation process
of <acronym>MUMPS</acronym> we have emphasised adding the
- compiler flag <code>-fPIC</code>. This is a definate requirement
+ compiler flag <code>-fPIC</code>. This is a definite requirement
if we are compiling <acronym>deal.II</acronym> with shared
libraries (which is the default). If we had preferred to be
compiling <acronym>deal.II</acronym> without shared libraries,
<li> <p>
New: <code class="class">SolverControl</code> has an interface
- to <code class="class">ParameterHandler</code>, definining and
+ to <code class="class">ParameterHandler</code>, defining and
reading parameters from a file automatically.
<br>
(GK 2000/05/24)
<li> <p>
New: There is now a class <code class="class">MappingC1</code>
- that implements a continously differentiable C<sup>1</sup>
+ that implements a continuously differentiable C<sup>1</sup>
mapping of the boundary by using a cubic mapping with
continuous derivatives at cell boundaries. The class presently
only implements something for 2d and 1d (where it does nothing).
<li> <p>
New: The interface for sparse decompositions has been abstracted, and
there is now a Modified Incomplete Cholesky (MIC) decomposition in
- addition to the Imcomplete LU (ILU) decomposition.
+ addition to the Incomplete LU (ILU) decomposition.
<br>
(Stephen Kolaroff 2002/11/27)
</p>
New: Checked in new <code class="class">GridGenerator</code>
member function <code class="member">half_hyper_ball</code>,
derived from member <code class="member">hyper_ball</code>.
- The intial mesh contains four elements. This mesh will work with
+ The initial mesh contains four elements. This mesh will work with
the boundary class <code class="class">HalfHyperBallBoundary</code>.
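      A sketch of its use (center and radius follow the usual
      <code class="class">GridGenerator</code> conventions):
      <pre>
        Triangulation<2> tria;
        GridGenerator::half_hyper_ball (tria, Point<2>(), 1.);
      </pre>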
<br>
(Brian Carnes 2002/12/16)
New: Checked in new class <code class="class">FE_Q_Hierarchical</code>
derived from class <code class="class">FiniteElement</code>.
This element is analogous to <code class="class">FE_Q</code>, but
- makes use of hierachical shape functions, based on the
+ makes use of hierarchical shape functions, based on the
<code class="class">Polynomials::Hierarchical</code> class.
For <code>degree>1</code>, the non-nodal basis functions are "bubble"
functions, which are not Lagrange polynomials. Therefore, the usual
<li> <p> Fixed: The <code
class="class">SolverMinRes</code> class had a nasty bug where we were
- inadvertantly copying vectors; this could also have led to a memory
+ inadvertently copying vectors; this could also have led to a memory
corruption bug. This is now fixed.
<br>
(WB 2004/02/26)
<li> <p>
Changed: Lower dimensional objects have been removed from the
- hierachical structure of levels in <code
+ hierarchical structure of levels in <code
class="class">TriaLevel</code>. Faces, i.e. lines in 2D and
lines and quads in 3D, have no associated level
anymore. Therefore, the level argument of some iterator
<li> <p>
Changed: The internal numbering of faces, lines and vertices
has been reimplemented. Now the numbering scheme uses a
- lexicographic ordering (with x running fastest) whereever
+ lexicographic ordering (with x running fastest) wherever
possible. For more information on the new numbering scheme, see
the <a
href="http://ganymed.iwr.uni-heidelberg.de/pipermail/dealii/2005/000827.html">announcement</a>
lines of cells is in some cases a bit more complicated. The same applies,
for example,
to the extraction of the information which child of a neighbor is behind
- a given subface. However, this infomation is supplied by various
+ a given subface. However, this information is supplied by various
functions in <code class="class">GeometryInfo</code>. As a rule of thumb:
- if you want to use non-standard meshes, all occurances of
+ if you want to use non-standard meshes, all occurrences of
<tt>face_orientation</tt> have to be supplemented by <tt>face_flip</tt>
and <tt>face_rotation</tt>.
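    For example (a sketch using the standard accessor functions, with
    <tt>f</tt> a face number):
    <pre>
      const bool orientation = cell->face_orientation(f);
      const bool flip        = cell->face_flip(f);
      const bool rotation    = cell->face_rotation(f);
    </pre>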
<br> In order to reduce the impact of possible bugs, the grid is still given to
</p>
<li> <p>Changed: The version number of the deal.II intermediate format written by
- DataOutBase::write_deal_II_intermediate has been increased to 3 to accomodate the fact that
+ DataOutBase::write_deal_II_intermediate has been increased to 3 to accommodate the fact that
we now support writing vector-valued data to output files in at least some output formats.
 (Previously, vector-valued data was written as a collection of scalar fields.) Since
we can only read files written in intermediate format that has the same number as the
	New: A new tutorial program, step-34, was added that demonstrates
	the codimension one functionality recently added to the
	library. In this tutorial we show the use of
- bondary element methods on piecewise constant functions defined over
+ boundary element methods on piecewise constant functions defined over
a surface, and we solve the irrotational flow problem, or exterior
Neumann Laplace problem.
<br>
<li>
<p>
Removed: The interface to PETSc has been simplified to better handle
- incremental changes in PETSc versions and accomodate changes in
+ incremental changes in PETSc versions and accommodate changes in
functionality between versions. As a part of this process, the
deal.II configure script no longer handles PETSc versions
<2.3.0. Attempting to configure deal.II with PETSc versions that are
if the compiler used supports C++ 1x, we now selectively import elements of the
compiler's namespace std into namespace std_cxx1x as well. This may lead to
incompatibilities if you are already using elements of the C++ 1x
-standard by refering to them through the std_cxx1x namespace and these elements
+standard by referring to them through the std_cxx1x namespace and these elements
are not on the list of selectively imported ones.
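For example, code such as
<pre>
  std_cxx1x::shared_ptr<Quadrature<dim> > q_ptr (new QGauss<dim>(2));
</pre>
keeps working, since <code>shared_ptr</code> is among the selectively
imported elements; the snippet is only an illustration.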
<br>
(Wolfgang Bangerth, 2011/05/29)
<li> Improved: The <code>PETScWrappers::SolverXXX</code> classes were
previously restricted to using default solver options for the KSP. It is now
possible to override those by using PETSc command-line options
-<code>-ksp_*</code>; giving greater flexability in controling PETSc
-solvers. (See class documentation).
+<code>-ksp_*</code>, giving greater flexibility in controlling PETSc
+solvers. (See the class documentation.)
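For example (standard PETSc option names; the program name is only an
illustration):
<pre>
  ./myprog -ksp_type gmres -ksp_rtol 1.e-8 -ksp_monitor
</pre>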
<br>
(Vijay S. Mahadevan, 2011/12/22)
actually execute it.
<P>
-<B>Acknowledgments</B> Present development and maintainance of deal.II is a
+<B>Acknowledgments</B> Present development and maintenance of deal.II is a
joint effort of several people at the University of Heidelberg, the University
of Minneapolis, and elsewhere. The author acknowledges the support by the
German Research Association (DFG) through the Graduiertenkolleg and the SFB
SRC="img9.gif"
ALT="$K=\sigma_K(\hat K)$">
in real space, whereas on cells <I>K</I> at the
-boundary, i.e.
+boundary, i.e.
<!-- MATH: $\partial K\cap\Gamma\neq\emptyset$ -->
<IMG
WIDTH="93" HEIGHT="34" ALIGN="MIDDLE" BORDER="0"
SRC="img12.gif"
ALT="$K\in T_h$">
be a cell of the
-triangulation <I>T</I><SUB><I>h</I></SUB> with
+triangulation <I>T</I><SUB><I>h</I></SUB> with
<!-- MATH: $K=\sigma_K(\hat K)$ -->
<IMG
WIDTH="96" HEIGHT="41" ALIGN="MIDDLE" BORDER="0"
SRC="img15.gif"
ALT="$i=0,\ldots,(p+1)^d-1$">,
we define a
-<I>Q</I><SUB><I>p</I></SUB>-mapping
+<I>Q</I><SUB><I>p</I></SUB>-mapping
<!-- MATH: $\sigma\in [Q_p]^d$ -->
<IMG
WIDTH="75" HEIGHT="38" ALIGN="MIDDLE" BORDER="0"
WIDTH="174" HEIGHT="37" ALIGN="MIDDLE" BORDER="0"
SRC="img22.gif"
ALT="$i=0,\ldots,(p+1)^2-1$">
-for degrees
+for degrees
<!-- MATH: $p=1,\ldots,4$ -->
<IMG
WIDTH="96" HEIGHT="30" ALIGN="MIDDLE" BORDER="0"
WIDTH="126" HEIGHT="37" ALIGN="MIDDLE" BORDER="0"
SRC="img4.gif"
ALT="$0\leq i<(p+1)^2$">,
-for degrees
+for degrees
<!-- MATH: $p=1,\ldots,4$ -->
<IMG
WIDTH="96" HEIGHT="30" ALIGN="MIDDLE" BORDER="0"
follows: first the corners, then the points on the edges and finally
the inner support points, see also Figure
<A HREF="index.html#fig:unit-mapping-points">2</A>. Thus the first 4<I>p</I> points are placed
-on the boundary
+on the boundary
<!-- MATH: $\partial\hat K$ -->
<IMG
WIDTH="31" HEIGHT="21" ALIGN="BOTTOM" BORDER="0"
<BR CLEAR="ALL">
<P></P>
According to (<A HREF="index.html#eq:point-mappings">2</A>) these points are mapped to the
-mapping support points <I>p</I><SUB><I>k</I></SUB>,
+mapping support points <I>p</I><SUB><I>k</I></SUB>,
<!-- MATH: $k=0,\ldots,4p-1$ -->
<IMG
WIDTH="135" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
</DIV>
<BR CLEAR="ALL">
<P></P>
-While the support points <I>p</I><SUB><I>k</I></SUB>,
+While the support points <I>p</I><SUB><I>k</I></SUB>,
<!-- MATH: $k=0,\ldots,4p-1$ -->
<IMG
WIDTH="135" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img8.gif"
ALT="$\hat K$">.
Discrete boundary conditions are imposed that are given by the
-coordinates of the mapping support points <I>p</I><SUB><I>k</I></SUB>,
+coordinates of the mapping support points <I>p</I><SUB><I>k</I></SUB>,
<!-- MATH: $k=0,\ldots,4p-1$ -->
<IMG
WIDTH="135" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
WIDTH="15" HEIGHT="16" ALIGN="BOTTOM" BORDER="0"
SRC="img1.gif"
ALT="$\sigma $">.
-Here, the discrete boundary function
+Here, the discrete boundary function
<!-- MATH: $g\in [Q_p]^2$ -->
<IMG
WIDTH="73" HEIGHT="37" ALIGN="MIDDLE" BORDER="0"
ALT="$\phi_i$">
the corresponding Lagrangian interpolation basis
function. We recall that the numbering of the mapping support points
-involves
+involves
<!-- MATH: $p_k\in\partial K$ -->
<IMG
WIDTH="69" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
SRC="img36.gif"
ALT="$p_k\in\partial K$">
-for
+for
<!-- MATH: $k=0,\ldots, 4p-1$ -->
<IMG
WIDTH="135" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
</TABLE>
</DIV>
<BR CLEAR="ALL"><P></P>
-with the matrices
+with the matrices
<!-- MATH: $S_{ij}\in\mathbb R^{(p-1)^2\times(p-1)^2}$ -->
<IMG
WIDTH="155" HEIGHT="42" ALIGN="MIDDLE" BORDER="0"
SRC="img42.gif"
ALT="$S_{ij}\in\mathbb R^{(p-1)^2\times(p-1)^2}$">
-and
+and
<!-- MATH: $T_{ik}\in\mathbb R^{(p-1)^2\times 4p}$ -->
<IMG
WIDTH="128" HEIGHT="42" ALIGN="MIDDLE" BORDER="0"
<BR CLEAR="ALL"><P></P>
of the linear combination (<A HREF="index.html#eq:linear-combination-laplace">8</A>), that
represents the dependency of the <I>j</I>th inner mapping support point
-<I>p</I><SUB>4<I>p</I>+<I>j</I></SUB> on the support points <I>p</I><SUB><I>k</I></SUB>,
+<I>p</I><SUB>4<I>p</I>+<I>j</I></SUB> on the support points <I>p</I><SUB><I>k</I></SUB>,
<!-- MATH: $k=0,\ldots,4p-1$ -->
<IMG
WIDTH="135" HEIGHT="32" ALIGN="MIDDLE" BORDER="0"
<I>MappingQ::compute_support_points_laplace</I>, first the 4<I>p</I> points on the boundary of the cell are computed (by calling
<I>MappingQ::add_line_support_points</I>), then by calling
<I>MappingQ::apply_laplace_vector</I> the remaining (<I>p</I>-1)<SUP>2</SUP> inner mapping supports points are computed, where
-<I>MappingQ::apply_laplace_vector</I> just performes the linear
+<I>MappingQ::apply_laplace_vector</I> just performs the linear
combination given in (<A HREF="index.html#eq:linear-combination-laplace">8</A>).
<BR><HR>
</FONT>
<P>
We have shown how multi-threading is supported in <TT>deal.II</TT> and how it
-can be used in several examples occuring in common finite element programs. It
+can be used in several examples occurring in common finite element programs. It
was demonstrated that implementing a usable C++ interface poses several
difficulties, both with regard to user friendliness and to program
correctness. In order to overcome these difficulties, first the more simple
boundaries of the triangulation. In the interior, all lines are still
represented by linear functions, resulting in additional computations
only on cells at the boundary. Higher order mappings are therefore
-usually not noticably slower than lower order ones, because the
+usually not noticeably slower than lower order ones, because the
additional computations are only performed on a small subset of all
cells.
What is surprising here is that the exact value is 1.59491554..., and that
-it is obviously suprisingly complicated to compute the solution even to
+it is obviously surprisingly complicated to compute the solution even to
only one per cent accuracy, although the solution is smooth (in fact
infinitely often differentiable). This smoothness is shown in the
graphical output generated by the program, here coarse grid and the
Besides evaluating the values of the solution at a certain point, the
program also offers the possibility to evaluate the x-derivatives at a
certain point, and also to tailor mesh refinement for this. To let the
-program compute these quantities, simply replace the two occurences of
+program compute these quantities, simply replace the two occurrences of
<code>PointValueEvaluation</code> in the main function by
<code>PointXDerivativeEvaluation</code>, and let the program run:
@code
// domains, boundary values, and
// right hand sides do not fit
// together any more, and starts
- // loosing the overview over the
+  // losing the overview of the
// whole structure. Encapsulating
// everything belonging to a certain
// test case into a structure of its
  // Only in the first step does the starting solution have to be
  // initialized. Also, the solution vector will be
// resized in the <code>refine_grid</code> function, while the
- // vector is transfered to the new mesh.
+ // vector is transferred to the new mesh.
if (first_step)
{
functionality themselves, they simply pass on to their corresponding PETSc
functions. The wrappers are therefore only used to give PETSc a more modern,
object oriented interface, and to make the use of PETSc and deal.II objects as
-interchangable as possible.
+interchangeable as possible.
// should have computed an
// incremental displacement update
// so that the material in its new
- // configuration accomodates for
+   // configuration accommodates
// the difference between the
// external body and boundary
// forces applied during this time
// of course be ideal if all couplings were
// in the lower or upper triangular part of a
// matrix, since then solving the linear
- // system would amoung to only forward or
+   // system would amount to only forward or
// backward substitution. This is of course
// unachievable for symmetric sparsity
// patterns, but in some special situations
As mentioned in the introduction, the Schur complement solver used here is not
the best one conceivable (nor is it intended to be a particularly good
-one). Better ones can be found in the literture and can be built using the
+one). Better ones can be found in the literature and can be built using the
same block matrix techniques that were introduced here. We pick up on this
theme again in step-22, where we first build a Schur complement solver for the
Stokes equation as we did here, and then in the <a
reservoir $\Omega$ under the assumption that the movement of fluids is
dominated by viscous effects; i.e. we neglect the effects of gravity,
compressibility, and capillary pressure. Porosity will be considered
-to be constant. We will denote variables refering to either of the two
+to be constant. We will denote variables referring to either of the two
phases using subscripts $w$ and $o$, short for water and oil. The
derivation of the equations holds for other pairs of fluids as well,
however.
scale linearly with the number of degrees of freedom: renumbering of degrees
of freedom (which is ${\cal O}(N \log N)$), and the linear solver (which is
${\cal O}(N^{4/3})$). As for the first, while reordering degrees of freedom
-may not scale linearly, it is an indispensible part of the overall algorithm
+may not scale linearly, it is an indispensable part of the overall algorithm
as it greatly improves the quality of the sparse ILU, easily making up for
the time spent on computing the renumbering; graphs and timings to
demonstrate this are shown in the documentation of the DoFRenumbering
required iterations.
BiCGStab, on the other hand, won't get slower when many iterations are needed
-(one iteration uses only results from one preceeding step and
+(one iteration uses only results from one preceding step and
not all the steps as GMRES). Besides the fact that BiCGStab is more expensive per
step since two matrix-vector products are needed (compared to one for
CG or GMRES), there is one main reason which makes BiCGStab not appropriate for
be a starting point for some geophysical flow problems, such as the
movement of magma under places where continental plates drift apart (for
example mid-ocean ridges). Of course, in such places, the geometry is more
-complicated than the examples shown above, but it is not hard to accomodate
+complicated than the examples shown above, but it is not hard to account
for that.
For example, by using the following modification of the boundary values
// for the sparse direct solver UMFPACK:
#include <deal.II/lac/sparse_direct.h>
- // This includes the libary for the
+ // This includes the library for the
// incomplete LU factorization that will
// be used as a preconditioner in 3D:
#include <deal.II/lac/sparse_ilu.h>
// makes the algorithm to build the
// sparsity pattern be quadratic in the
// number of degrees of freedom. This
- // doesn't become noticable until we get
+ // doesn't become noticeable until we get
// well into the range of several 100,000
// degrees of freedom, but eventually
// dominates the setup of the linear
@f]
With the second variable, one then transforms the forward problem into
-two seperate equations:
+two separate equations:
@f{eqnarray*}
\bar{p}_{t} - v & = & 0 \\
\Delta\bar{p} - \frac{1}{c_0^2}\,v_{t} & = & f
Rather facetiously, the sine-Gordon equation's moniker is a pun on the
so-called Klein-Gordon equation, which is a relativistic version of
-the Schrödinger equation for particles with non-zero mass. The resemblence is not just
+the Schrödinger equation for particles with non-zero mass. The resemblance is not just
superficial: the sine-Gordon equation has been shown to model some
unified-field phenomena such as interaction of subatomic particles
(see, e.g., Perring & Skyrme in Nuclear Physics <b>31</b>) and the
// program. It creates an object of
  // the top-level class and calls its
// principal function. Also, we
- // supress some of the library output
+ // suppress some of the library output
// by setting
// <code>deallog.depth_console</code>
// to zero. Furthermore, if
// $\Sigma_{s,g'\to g}$). This is
  // straightforward, but note how
// we determine which of the two
- // cells is ther finer one by
+ // cells is the finer one by
// looking at the refinement level
// of the two cells:
if (!cell_g->has_children() && !cell_g_prime->has_children())
wrong: the mesh size is simply not small enough to resolve the
solution's waves accurately, and you can see this in plots of the
solution. Consequently, this is one of the cases where adaptivity is
-indispensible if you don't just want to throw a bigger (presumably
+indispensable if you don't just want to throw a bigger (presumably
%parallel) machine at the problem.
// old mesh to the new one. To this end
// we use the SolutionTransfer class and
// we have to prepare the solution
- // vectors that should be transfered to
+ // vectors that should be transferred to
// the new grid (we will lose the old
// grid once we have done the refinement
// so the transfer has to happen
// precompute the temperature
// preconditioner as well. The reason is
// that the setup of the Jacobi
- // preconditioner takes a noticable time
+ // preconditioner takes a noticeable time
// compared to the solver because we
// usually only need between 10 and 20
// iterations for solving the temperature
independent of almost all elements of the solution vector, and
consequently their derivatives are zero; however, trying to compute
these zeros can easily take 90% or more of the compute time of the
-entire program in an experiment inadvertantly made by a student a few
+entire program, as shown in an experiment inadvertently made by a student a few
years after this program was first written.
@image html step-33.slide_adapt.gif
-The adaptivity follows and preceeds the flow pattern, based on the heuristic
+The adaptivity follows and precedes the flow pattern, based on the heuristic
refinement scheme discussed above.
// entries at the top level of the
// input file, as well as a few odd
// other entries in subsections that
- // are too short to warrent a
+ // are too short to warrant a
// structure by themselves.
//
// It is worth pointing out one thing here:
// does not result only in a factor
// appearing as a constant factor on
// the entire integral, but also on
- // an additional integral alltogether
+ // an additional integral altogether
// that needs to be evaluated:
//
// \f[
// these extra entries and aborts
// with an error message.
//
- // In the absense of any obvious way
+ // In the absence of any obvious way
// to avoid this, we simply settle
// for the second best option, which
  // is to have PETSc allocate memory as
<code>DoFHandler@<dim@></code>. Again, the compiler can't
compile this function until it knows for which dimension. If you call
this function for a specific dimension as above, the compiler will
-take the template, replace all occurences of dim by the dimension for
+take the template, replace all occurrences of dim by the dimension for
which it was called, and compile it. If you call the function several
times for different dimensions, it will compile it several times, each
time calling the right <code>make_grid</code> function and reserving the right
// larger or smaller than a certain
// threshold, preserving minimal
// and maximal levels of mesh
- // refinement. (iii) Transfering
+ // refinement. (iii) Transferring
// the solution from the old to the
// new mesh. None of this is
// particularly difficult.
0,
@f}
for all test functions $\mathbf a, q, \mathbf b$.
-Note that $Y$ is only a subspace of the spaces listed above to accomodate for
+Note that $Y$ is only a subspace of the spaces listed above to accommodate
the various Dirichlet boundary conditions.
This sort of coupling is of course possible by simply having two Triangulation
// argument) which will usually
// terminate the program giving
// information where the error
- // occured and what the reason
+ // occurred and what the reason
// was. This generally reduces the
// time to find programming errors
// dramatically and we have found
// dofs_per_cell*dofs_per_cell*n_q_points. On
// the other hand, the function
// will of course return the same
- // value everytime it is called
+ // value every time it is called
// with the same quadrature point,
// independently of what shape
// function we presently treat;
* Operator class performing Newton's iteration with standard step
* size control and adaptive matrix generation.
*
- * This class performes a Newton iteration up to convergence
+ * This class performs a Newton iteration up to convergence
* determined by #control. If after an update the norm of the residual
* has become larger, then step size control is activated and the
* update is subsequently divided by two until the residual actually
* @code
* system_matrix.print_formatted(pout);
* @endcode
- * is <em>not</em> possible. Instead use the is_active() funtion for a
+ * is <em>not</em> possible. Instead use the is_active() function for a
* work-around:
*
* @code
*
* Default is the
* Gnuplot-default of 30.
- * An exemple of a
+ * An example of a
* Gnuplot-default of 0 is
* the following:
*
* make sure that the TECHOME environment variable points to the
* Tecplot installation directory, and that the files
* \$TECHOME/include/TECIO.h and \$TECHOME/lib/tecio.a are readable.
- * If these files are not availabe (or in the case of 1D) this
+ * If these files are not available (or in the case of 1D) this
* function will simply call write_tecplot() and thus larger ASCII
* data files will be produced rather than more efficient Tecplot
* binary files.
* Function derived from the base class
 * which allows one to pass information like
* the line and name of the file where the
- * exception occured as well as user
+ * exception occurred as well as user
* information.
*
* This function is mainly used
* call itself.
*
*
- * Support for time dependant functions can be found in the base
+ * Support for time dependent functions can be found in the base
* class FunctionTime.
*
* @note if the functions you are dealing with have sizes which
whose tangent is (A).
atan2(A,B): Arc-tangent of A/B. The two main differences from atan() are
that it will return the right angle depending on the signs of
- A and B (atan() can only return values betwen -pi/2 and pi/2),
+ A and B (atan() can only return values between -pi/2 and pi/2),
           and that return values of pi/2 and -pi/2 are possible.
atanh(A) : Same as atan() but for hyperbolic tangent.
ceil(A) : Ceiling of A. Returns the smallest integer greater than A.
case RefinementCase<3>::cut_xz:
// careful, this is slightly
// different from xy and yz due to
- // differnt internal numbering of
+ // different internal numbering of
// children!
point[0]*=2.0;
point[2]*=2.0;
case RefinementCase<3>::cut_xz:
// careful, this is slightly
// different from xy and yz due to
- // differnt internal numbering of
+ // different internal numbering of
// children!
if (child_index/2==1)
point[0]+=1.0;
*
* This functionality was
* introduced to produce more
- * reproducable floating point
+ * reproducible floating point
* output for regression
 * tests. The rationale is that
* an exact output value is much
* <li> The argument is a standard C++ data type, namely,
* <tt>bool</tt>, <tt>float</tt>, <tt>double</tt> or any of the
 * integer types. In that case, memory_consumption() simply returns
- * <tt>sizeof</tt> of its argument. The libary also provides an
+ * <tt>sizeof</tt> of its argument. The library also provides an
* estimate for the amount of memory occupied by a
* <tt>std::string</tt> this way.
*
* integer. If bounds are given
* to the constructor, then the
* integer given also needs to be
- * withing the interval specified
+ * within the interval specified
* by these bounds. Note that
* unlike common convention in
* the C++ standard library, both
* <tt>double</tt>. If bounds are
* given to the constructor, then
* the integer given also needs
- * to be withing the interval
+ * to be within the interval
* specified by these
* bounds. Note that unlike
* common convention in the C++
* ...
* @endcode
* You can use several sources of input successively. Entries which are changed more than
- * once will be overwritten everytime they are used.
+ * once will be overwritten every time they are used.
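 *
 * A minimal sketch of using two sources in succession (file and entry
 * names are placeholders):
 * @code
 *   ParameterHandler prm;
 *   prm.declare_entry ("Output file", "solution.vtk", Patterns::Anything());
 *   prm.read_input ("first.prm");
 *   prm.read_input ("second.prm");  // overwrites entries also set in first.prm
 * @endcode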
*
* You should not try to declare entries using declare_entry() and
* enter_subsection() with as yet unknown subsection names after
* left by an <tt>END</tt> or <tt>end</tt>
* statement, a value for a
* non-declared entry was given
- * or teh entry value did not
+ * or the entry value did not
* match the regular
* expression. <tt>true</tt> otherwise.
*
// single this file out from tensor.h, since we want to derive
// Point<dim,Number> from Tensor<1,dim,Number>. However, the point class will
// not need all the tensor stuff, so we don't want the whole tensor package to
-// be included everytime we use a point.
+// be included every time we use a point.
#include <deal.II/base/config.h>
* <h3>Construction and destruction</h3>
*
* Objects of this class can either be default constructed or by providing an
- * "exemplar", i.e. an object of type T so that everytime we need to create
+ * "exemplar", i.e. an object of type T so that every time we need to create
* a T on a thread that doesn't already have such an object, it is copied from
* the exemplar.
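 *
 * A minimal sketch (the stored type is only an illustration):
 * @code
 *   std::vector<double> exemplar (100, 42.);
 *   Threads::ThreadLocalStorage<std::vector<double> > storage (exemplar);
 *
 *   // on each thread, the first call to get() copies the exemplar
 *   std::vector<double> &thread_local_copy = storage.get();
 * @endcode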
*
* long as the thread
* executes. This means that even
* if all Threads::Thread objects
- * that refered to this
+ * that referred to this
* descriptor (through a
* std::shared_ptr) have gone out
* of scope, we must still hold
* ordering in the deal.II mesh. As
* assembly is done in the deal.II
* cell ordering, this flag is
- * required to get reproducable
+ * required to get reproducible
* behaviour after snapshot/resume.
*/
enum Settings
*
* These classes are similar to the DoFLevel classes. We here store information
* that is associated with faces, rather than cells, as this information is independent of
- * the hierachical structure of cells, which are organized in levels. In 2D we store
+ * the hierarchical structure of cells, which are organized in levels. In 2D we store
* information on degrees of freedom located on lines whereas in 3D we store information on
 * degrees of freedom located on quads and lines. In 1D we do nothing, as the faces of
 * lines are vertices which are treated separately.
};
/**
- * Store the indices of degrees of freedom on faces in 3D, which are quads, additionaly also on lines.
+ * Store the indices of degrees of freedom on faces in 3D, which are quads, additionally also on lines.
*
* @author Tobias Leicht, 2006
*/
* listed in the given set. The
* reason that a @p map rather
* than a @p set is used is the
- * same as descibed in the
+ * same as described in the
* section on the
* @p make_boundary_sparsity_pattern
* function.
* objects, i.e. on lines for 2D and on quads and lines for 3D are
 * treated similarly to those on cells. However, these geometrical
* objects, which are called faces as a generalisation, are not
- * organised in a hierachical structure of levels. Therefore, the
- * degrees of freedom located on these objects are stored in seperate
+ * organised in a hierarchical structure of levels. Therefore, the
+ * degrees of freedom located on these objects are stored in separate
* classes, namely the <tt>DoFFaces</tt> classes.
*
*
* `zoom in' mesh). In one such example the bandwidth was increased by
* about 50 per cent.
*
- * In most other cases, the bandwith is reduced significantly. The reduction
+ * In most other cases, the bandwidth is reduced significantly. The reduction
* is the better the less structured the grid is. With one grid where the
* cells were refined according to a random driven algorithm, the bandwidth
* was reduced by a factor of six.
*
* <h5>Computing the correct basis from "raw" basis functions</h5>
*
- * First, aready the basis of the shape function space may be
+ * First, even the basis of the shape function space may be
* difficult to implement for arbitrary order and dimension. On the
* other hand, if the @ref GlossNodes "node values" are given, then
* the duality relation between node functionals and basis functions
/**
* The numbering of the degrees
- * of freedom in continous finite
+ * of freedom in continuous finite
* elements is hierarchic,
* i.e. in such a way that we
* first number the vertex dofs,
//@}
/**
* The numbering of the degrees
- * of freedom in continous finite
+ * of freedom in continuous finite
* elements is hierarchic,
* i.e. in such a way that we
* first number the vertex dofs,
* @f[
* \mathbf u(\mathbf x) = J(\mathbf{\hat x})\mathbf{\hat u}(\mathbf{\hat x}).
* @f]
- * In physics, this is usually refered to as the contravariant
+ * In physics, this is usually referred to as the contravariant
* transformation. Mathematically, it is the push forward of a
* vector field.
*
/**
* Vertex numbers can be
* written onto the
- * vertices. This is controled
+ * vertices. This is controlled
* by the following
* flag. Default is @p false.
*/
/**
* Indices of the cells to be
- * processed withing the
+ * processed within the
* present sheet. If a cell
* is being processed
* presently, it is taken
* finer level faces to their
* corresponding surface mesh
* cells, for example to
- * accomodate different geometry
+ * accommodate different geometry
* descriptions in the case of
* curved boundaries.
*/
* are similar to the TriaLevel classes. As cells are organised in a hierarchical
* structure of levels, each triangulation consists of several such TriaLevels. However the
* faces of a triangulation, lower dimensional objects like lines in 2D or lines and quads
- * in 3D, do not have to be based on such a hierachical structure. In fact we have to
+ * in 3D, do not have to be based on such a hierarchical structure. In fact we have to
* organise them in only one object if we want to enable anisotropic refinement. Therefore
* the TriaFaces classes store the information belonging to the faces of a
- * triangualtion seperately from the TriaLevel classes.
+ * triangulation separately from the TriaLevel classes.
*
* This general template is only provided to enable a specialization below.
*
* as template arguments, you can write your own versions here to add
* more functionality.
*
- * Furthermore, the iterators decribed here satisfy the requirement of
+ * Furthermore, the iterators described here satisfy the requirement of
* input and bidirectional iterators as stated by the C++ standard and
* the STL documentation. It is therefore possible to use the
* functions from the algorithm section of the C++ standard,
*
* Past-the-end iterators may also be used to compare an iterator with the
* <i>before-the-start</i> value, when running backwards. There is no
- * distiction between the iterators pointing past the two ends of a vector.
+ * distinction between the iterators pointing past the two ends of a vector.
*
 * Defining only one value to be past-the-end and making all other values
* invalid provides a second track of security: if we should have forgotten
/**
- * Reserve enough space to accomodate
+ * Reserve enough space to accommodate
* @p total_cells cells on this level.
* Since there are no @p used flags on this
* level, you have to give the total number
* of cells, not only the number of newly
- * to accomodate ones, like in the
+ * to accommodate ones, like in the
* <tt>TriaLevel<N>::reserve_space</tt>
* functions, with <tt>N>0</tt>.
*
/**
* Assert that enough space
* is allocated to
- * accomodate
+ * accommodate
* <code>new_objs_in_pairs</code>
* new objects, stored in
* pairs, plus
/**
* For hexahedrons the data of TriaObjects needs to be extended, as we can obtain faces
* (quads) in non-standard-orientation, therefore we declare a class TriaObjectsHex, which
- * additionaly contains a bool-vector of the face-orientations.
+ * additionally contains a bool-vector of the face-orientations.
*/
class TriaObjectsHex : public TriaObjects<TriaObject<3> >
/**
* Assert that enough space is
- * allocated to accomodate
+ * allocated to accommodate
* <code>new_objs</code> new objects.
* This function does not only call
* <code>vector::reserve()</code>, but
/**
* For quadrilaterals in 3D the data of TriaObjects needs to be extended, as we
* can obtain faces (quads) with lines in non-standard-orientation, therefore we
- * declare a class TriaObjectsQuad3D, which additionaly contains a bool-vector
+ * declare a class TriaObjectsQuad3D, which additionally contains a bool-vector
* of the line-orientations.
*/
/**
* Assert that enough space
* is allocated to
- * accomodate
+ * accommodate
* <code>new_quads_in_pairs</code>
* new quads, stored in
* pairs, plus
*
* These classes are similar to the internal::hp::DoFLevel classes. We here store
* information that is associated with faces, rather than cells, as this information is
- * independent of the hierachical structure of cells, which are organized in levels. In 2D
+ * independent of the hierarchical structure of cells, which are organized in levels. In 2D
* we store information on degrees of freedom located on lines whereas in 3D we store
 * information on degrees of freedom located on quads and lines. In 1D we do nothing, as
 * the faces of lines are vertices which are treated separately.
* listed in the given set. The
* reason that a @p map rather
* than a @p set is used is the
- * same as descibed in the
+ * same as described in the
* section on the
* @p make_boundary_sparsity_pattern
* function.
* Return whether the vector contains only
* elements with value zero. This function
* is mainly for internal consistency
- * check and should seldomly be used when
+ * checks and should seldom be used when
 * not in debug mode since it takes quite
* some time.
*/
* denoted by pairs of column indices
* and values, to a line of
* constraints. This function is
- * equivalent to calling the preceeding
+ * equivalent to calling the preceding
* function several times, but is
* faster.
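 *
 * A hedged usage sketch (line and column indices as well as weights are
 * placeholders):
 * @code
 *   std::vector<std::pair<unsigned int,double> > col_val_pairs;
 *   col_val_pairs.push_back (std::make_pair (42u, 0.5));
 *   col_val_pairs.push_back (std::make_pair (43u, 0.5));
 *   constraints.add_entries (21, col_val_pairs);
 * @endcode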
*/
*
* @note The hanging nodes are
* completely eliminated from the
- * linear system refering to
+ * linear system referring to
* <tt>condensed</tt>. Therefore, the
* dimension of <tt>condensed</tt> is
* the dimension of
DeclException1 (ExcPETScError,
int,
<< "An error with error number " << arg1
- << " occured while calling a PETSc function");
+ << " occurred while calling a PETSc function");
/**
* An error of a Trilinos function was
DeclException1 (ExcTrilinosError,
int,
<< "An error with error number " << arg1
- << " occured while calling a Trilinos function");
+ << " occurred while calling a Trilinos function");
//@}
}
* value zero. This function is
* mainly for internal
* consistency checks and should
- * seldomly be used when not in
+ * seldom be used when not in
 * debug mode since it takes quite
* some time.
*/
* and ExcBlockIndexMismatch is
* thrown, if the global index
* does not point into the
- * block refered to by #row and
+ * block referred to by #row and
* #column.
*
* @todo
* and ExcBlockIndexMismatch is
* thrown, if the global index
* does not point into the
- * block refered to by #row and
+ * block referred to by #row and
* #column.
*
* @todo
* ExcBlockIndexMismatch is
* thrown, if the global index
* does not point into the
- * block refered to by #row and
+ * block referred to by #row and
* #column.
*
* @todo
* Return whether the vector contains only
* elements with value zero. This function
* is mainly for internal consistency
- * checks and should seldomly be used when
+ * checks and should seldom be used when
 * not in debug mode since it takes quite
* some time.
*/
DeclException1 (ExcPETScError,
int,
<< "An error with error number " << arg1
- << " occured while calling a PETSc function");
+ << " occurred while calling a PETSc function");
/**
* Exception
*/
DeclException1 (ExcPETScError,
int,
<< "An error with error number " << arg1
- << " occured while calling a PETSc function");
+ << " occurred while calling a PETSc function");
protected:
DeclException1 (ExcPETScError,
int,
<< "An error with error number " << arg1
- << " occured while calling a PETSc function");
+ << " occurred while calling a PETSc function");
/**
* Exception
*/
* only elements with value zero. This
* function is mainly for internal
* consistency checks and should
- * seldomly be used when not in debug
+ * seldom be used when not in debug
 * mode since it takes quite some time.
*/
bool all_zero () const;
* <li>If the length of the
* vector is zero, then the
* relaxation method will be
- * exectued from first to
+ * executed from first to
* last block.
* <li> If the length is one,
* then the inner vector must
* being used (and can be more). To
* avoid doing this, the fairly
* standard calling sequence
- * excecuted here is used:
+ * executed here is used:
* Initialise; Set up matrices for
* solving; Actually solve the
* system; Gather the solution(s);
DeclException1 (ExcSLEPcError,
int,
<< " An error with error number " << arg1
- << " occured while calling a SLEPc function");
+ << " occurred while calling a SLEPc function");
DeclException2 (ExcSLEPcEigenvectorConvergenceMismatchError,
int, int,
SolverControl &cntrl;
/**
- * Memory for auxilliary vectors.
+ * Memory for auxiliary vectors.
*/
VectorMemory<VECTOR> &memory;
};
bool decomposed;
/**
- * The default strenghtening
+ * The default strengthening
* value, returned by
* get_strengthen_diagonal().
*/
* that we now have two programs that communicate via pipes. The
* forked copy of the program then actually replaces itself by a
* program called <tt>detached_ma27</tt>, that is started in its place
- * through the <tt>execv</tt> system call. Now everytime you call one of
+ * through the <tt>execv</tt> system call. Now every time you call one of
* the functions of this class, it relays the data to the other
* program and lets it execute the respective function. The results
- * are then transfered back. Since the MA27 functions are only called
+ * are then transferred back. Since the MA27 functions are only called
* in the detached program, they will now no longer interfere with the
* respective calls to other functions with different data, so no
* synchronisation is necessary any more.
*
* Finally, the @p default_reserve allocates extra space at the end
* of the data array. This space is used whenever a row must be
- * enlarged. Since @p std::vector doubles the capacity everytime it
+ * enlarged. Since @p std::vector doubles the capacity every time it
* must increase it, this value should allow for all the growth needed.
*
* Suggested settings: @p default_row_length should be the length of
* structure is to hold. It is
* assumed that this number is
* sufficiently large to
- * accomodate both the elements
+ * accommodate both the elements
* in <tt>original</tt> as well
* as the new off-diagonal
* elements created by this
* value zero. This function is
* mainly for internal
* consistency checks and should
- * seldomly be used when not in
+ * seldom be used when not in
 * debug mode since it takes quite
* some time.
*/
DeclException1 (ExcTrilinosError,
int,
<< "An error with error number " << arg1
- << " occured while calling a Trilinos function");
+ << " occurred while calling a Trilinos function");
protected:
DeclException1 (ExcTrilinosError,
int,
<< "An error with error number " << arg1
- << " occured while calling a Trilinos function");
+ << " occurred while calling a Trilinos function");
private:
DeclException1 (ExcTrilinosError,
int,
<< "An error with error number " << arg1
- << " occured while calling a Trilinos function");
+ << " occurred while calling a Trilinos function");
/**
* Exception
DeclException1 (ExcTrilinosError,
int,
<< "An error with error number " << arg1
- << " occured while calling a Trilinos function");
+ << " occurred while calling a Trilinos function");
/**
* Exception
DeclException1 (ExcTrilinosError,
int,
<< "An error with error number " << arg1
- << " occured while calling a Trilinos function");
+ << " occurred while calling a Trilinos function");
/**
* Exception
* value zero. This function is
* mainly for internal
* consistency checks and should
- * seldomly be used when not in
+ * seldom be used when not in
 * debug mode since it takes quite
* some time.
*/
DeclException1 (ExcTrilinosError,
int,
<< "An error with error number " << arg1
- << " occured while calling a Trilinos function");
+ << " occurred while calling a Trilinos function");
/**
* Exception
* Return whether the vector contains only
* elements with value zero. This function
* is mainly for internal consistency
- * checks and should seldomly be used when
+ * checks and should seldom be used when
 * not in debug mode since it takes quite
* some time.
*/
* reinit function. You can, however,
* resize the view that you have of the
* original object. Notice that it is
- * your own responsability to ensure that
+ * your own responsibility to ensure that
* the memory you are pointing to is big
* enough.
*
* true). In each of the two,
* each block should have
* equal size and be large
- * enough to accomodate all
+ * enough to accommodate all
* user indices set in the
* cells and faces covered by
* the loop it is used
* FEVALUES object is fixed in the constructor and only used to
* initialize the pointers in #fevalv.
*
- * Additionally, this function containes space to store the values of
+ * Additionally, this function contains space to store the values of
* finite element functions stored in #global_data in the
* quadrature points. These vectors are initialized automatically on
* each cell or face. In order to avoid initializing unused vectors,
* @note This function caches
* the index associated with a
* name. Therefore, it must be
- * called everytime after the
+ * called every time after the
* NamedData object has changed.
*/
template <class DATA>
*
* There is one matrix for
* couplings in a cell and one
- * for the couplings occuring in
+ * for the couplings occurring in
* fluxes.
*/
template <int dim, class SparsityPattern, int spacedim>
* postprocessor is going to be
* used. In that case, the names and
* vector declarations are going to
- * be aquired from the postprocessor.
+ * be acquired from the postprocessor.
*/
DataEntryBase (const DataPostprocessor<DH::space_dimension> *data_postprocessor);
// found
std::vector<bool> point_flags(np, false);
- // Set this to true untill all
+ // Set this to true until all
// points have been classified
bool left_over = true;
/**
* All cell data (the dof indices and
* the dof values)
- * should be accessable from each cell.
+ * should be accessible from each cell.
 * As each cell has only one
 * @p user_pointer, multiple pointers to the
 * data need to be packed into a structure.
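 *
 * A hedged sketch of such a packing (the structure and its members are
 * placeholders):
 * @code
 *   struct CellData
 *   {
 *     std::vector<unsigned int> dof_indices;
 *     std::vector<double>       dof_values;
 *   };
 *
 *   CellData *data = new CellData();
 *   cell->set_user_pointer (data);
 * @endcode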
* equally well use
* <tt>bind2nd(mem_fun1(&X::unary_function), arg)</tt>
* which lets the @p do_loop
- * function call teh given function with
+ * function call the given function with
* the specified parameter. Note that you
* need to bind the second parameter since
* the first one implicitly contains
* number correction is done,
* but before grid adaption, so
* the cell number on this grid
- * is not noticably influenced
+ * is not noticeably influenced
* by the cells flagged
* additionally on the previous
* grid.
* the respective @p wake_up function can
* rebuild it. You should therefore call
* this function from your overloaded
- * version, preferrably at the end so
+ * version, preferably at the end so
* that your function can use the
 * triangulation as long as you need it.
*/
//TODO: Move documentation of functions to the functions!
/**
- * Provide a namespace which offers some operations on vectors. Amoung
+ * Provide a namespace which offers some operations on vectors. Among
* these are assembling of standard vectors, integration of the
* difference of a finite element solution and a continuous function,
* interpolations and projections of continuous functions to the
case reduction_rate_log2:
rate_key+="red.rate.log2";
Assert(columns.count(rate_key)==0, ExcRateColumnAlreadyExists(rate_key));
- // no value availble for the
+ // no value available for the
// first row
add_value(rate_key, std::string("-"));
for (unsigned int i=1; i<n; ++i)
if (!flags.bicubic_patch)
{
- // aproximate normal
+ // approximate normal
// vectors in patch
std::vector<Point<3> > nrml;
// only if smooth triangles are used
# elif defined(__MACH__) && defined(__APPLE__)
// This is only tested on a dual G5 2.5GHz running MacOSX 10.3.6
// and on an Intel Mac Book Pro.
-// If it doesnt work please contact the mailinglist.
+// If it doesn't work, please contact the mailing list.
unsigned int MultithreadInfo::get_n_cpus()
{
int mib[2];
double sum = 0;
for (unsigned int i=0; i<size(); ++i)
sum += weights[i];
- // we cant guarantee the sum of weights
+ // we cannot guarantee the sum of weights
// to be exactly one, but it should be
// near that.
Assert ((sum>0.999999) && (sum<1.000001), ExcInternalError());
double sum = 0;
for (unsigned int i=0; i<size(); ++i)
sum += weights[i];
- // we cant guarantee the sum of weights
+ // we cannot guarantee the sum of weights
// to be exactly one, but it should be
// near that.
Assert ((sum>0.999999) && (sum<1.000001), ExcInternalError());
// already for the isotropic
// case. Additionally, we have three
// different refinement cases, resulting in
- // <tt>4 + 2 + 2 = 8</tt> differnt subfaces
+ // <tt>4 + 2 + 2 = 8</tt> different subfaces
// for each face.
const unsigned int total_subfaces_per_face=8;
// this assumption is not
// justified and needs to be
// fixed some time. fortunately,
- // ommitting it for now does no
+ // omitting it for now does no
// harm since the matrix will cry
// foul if its requirements are
// not satisfied
// * Create global_dof_indexsets by
- // transfering our own owned_dofs to
+ // transferring our own owned_dofs to
// every other machine.
const unsigned int n_cpus = Utilities::System::
get_n_mpi_processes (tr->get_communicator());
{
// delete pointer and set it
// to zero to avoid
- // inadvertant use
+ // inadvertent use
delete differences[i];
differences[i] = 0;
};
FESystem<dim,spacedim>::InternalData::~InternalData()
{
// delete pointers and set them to
- // zero to avoid inadvertant use
+ // zero to avoid inadvertent use
for (unsigned int i=0; i<base_fe_datas.size(); ++i)
if (base_fe_datas[i])
{
{
// neglect size of data stored in
// @p{base_elements} due to some
- // problems with teh
+ // problems with the
  // compiler. should be negligible
// after all, considering the size
// of the data of the subelements
if (pos2 != pos1)
name.erase(pos1, pos2-pos1+1);
}
- // Replace all occurences of "^dim"
+ // Replace all occurrences of "^dim"
// by "^d" to be handled by the
// next loop
for (unsigned int pos = name.find("^dim");
pos = name.find("^dim"))
name.erase(pos+2, 2);
- // Replace all occurences of "^d"
+ // Replace all occurrences of "^d"
// by using the actual dimension
for (unsigned int pos = name.find("^d");
pos < name.size();
const MappingType mapping_type) const
{
AssertDimension (input.size(), output.size());
- // The data object may be jsut a
+ // The data object may be just a
// MappingQ1::InternalData, so we
// have to test for this first.
const typename MappingQ1<dim,spacedim>::InternalData *q1_data =
const MappingType mapping_type) const
{
AssertDimension (input.size(), output.size());
- // The data object may be jsut a
+ // The data object may be just a
// MappingQ1::InternalData, so we
// have to test for this first.
const typename MappingQ1<dim,spacedim>::InternalData *q1_data =
const MappingType mapping_type) const
{
AssertDimension (input.size(), output.size());
- // The data object may be jsut a
+ // The data object may be just a
// MappingQ1::InternalData, so we
// have to test for this first.
const typename MappingQ1<dim,spacedim>::InternalData *q1_data =
Triangulation<2>& tria,
const Point<2>&, const double, const double)
{
- // Inspite of receiving geometrical
+ // In spite of receiving geometrical
// data, we do this only based on
// topology.
Assert (perm_num != numbers::invalid_unsigned_int,
ExcGridOrientError("No node having 3 incoming edges found in curent hex."));
- // So use the apropriate
+ // So use the appropriate
// rotation to get the new
// cube
unsigned int temp[8];
namespace
{
-// helper function to aquire the number of levels within a grid
+// helper function to acquire the number of levels within a grid
template <class GridClass>
unsigned int
get_n_levels (const GridClass &grid)
// in memory, we won't find
// them later on, so we have
// to create new ones instead
- // and replace all occurances
+ // and replace all occurrences
// of the old ones with those
// new ones. As this is kind
// of ugly, we hope we don't
// similar, the actual work
// strongly depends on the actual
// refinement case. therefore, we
- // use seperate blocks of code for
+ // use separate blocks of code for
// each of these cases, which
// hopefully increases the
  // readability to some extent.
// coordinate direction (0
// for faces 0 and 1, 1 for
// faces 2 and 3, 2 for faces
- // 4 and 5) and substract the
+ // 4 and 5) and subtract the
// correct boundary value of
// the face (0 for faces 0,
// 2, and 4; 1 for faces 1, 3
// are further refined along
// the face, otherwise
// something went wrong in the
- // contruction of neighbor
+ // construction of neighbor
// pointers. then only allow
// coarsening if this neighbor
// will be coarsened as well
// larger dimensions
std::vector<unsigned short int> usage_count (max_vertex_index+1, 0);
// touch a vertex's usage count
- // everytime we find an adjacent
+ // every time we find an adjacent
// element
for (cell=begin(); cell!=endc; ++cell)
for (unsigned vertex=0; vertex<GeometryInfo<dim>::vertices_per_cell; ++vertex)
// now extract which
// refine case would
// be necessary to
- // achive the same
+ // achieve the same
// face
// refinement. set
// the intersection
// 1/ do not coarsen a cell if
// 'most of the neighbors' will be
// refined after the step. This is
- // to prevent occurence of
+ // to prevent occurrence of
// unrefined islands.
// 2/ eliminate refined islands in the
// interior and at the boundary. since
// do not coarsen a cell if 'most of
// the neighbors' will be refined after
// the step. This is to prevent the
- // occurence of unrefined islands.
+ // occurrence of unrefined islands.
// If patch_level_1 is set, this will
// be automatically fulfilled.
if (smooth_grid & do_not_produce_unrefined_islands &&
// Weimar
// we first eliminate points based
- // on the maximum and minumum of
+ // on the maximum and minimum of
// the corner coordinates, then
// transform to the unit cell, and
// check there.
// we assume here, that only four faces
// meet at the boundary; this assumption
// is not justified and needs to be
- // fixed some time. fortunately, ommitting
+ // fixed some time. fortunately, omitting
// it for now does no harm since the
// matrix will cry foul if its requirements
// are not satisfied
//
// we need to make this object static, since
// we want to return the data stored in it
- // and therefore need a liftime which is
+ // and therefore need a lifetime which is
// longer than the execution time of this
// function
static std::string description;
continue;
// need to delete all the columns in the
- // matrix that are on the boundary. to achive
+ // matrix that are on the boundary. to achieve
// this, create an array as long as there are
// matrix columns, and find which columns we
// need to filter away.
continue;
// need to delete all the columns in the
- // matrix that are on the boundary. to achive
+ // matrix that are on the boundary. to achieve
// this, create an array as long as there are
// matrix columns, and find which columns we
// need to filter away.
* postprocessor is going to be
* used. In that case, the names and
* vector declarations are going to
- * be aquired from the postprocessor.
+ * be acquired from the postprocessor.
*/
DataEntry (const VectorType *data,
const DataPostprocessor<DH::space_dimension> *data_postprocessor);