)
#
-# Are all dependencies fullfilled?
+# Are all dependencies fulfilled?
#
IF(NOT DEAL_II_WITH_UMFPACK)
Motivation for project
----------------------
-This code was made to simulate the evolution of global-scale topography on planetary bodies. Specifically, it is designed to compute the rates of topography relaxation on the dwarf planet Ceres. The NASA Dawn mission, in orbit around Ceres since March, 2015, has produced a high resolution shape model of its surface. As on other planets including the Earth, topography on Ceres is subject to decay over time due to processes such as viscous flow and brittle failure. Because the efficiency of these processes is dependent on the material properties of the body at depth, simulating the decay of topography and comparing it to the observed shape model permits insights into Ceres' internal stucture.
+This code was made to simulate the evolution of global-scale topography on planetary bodies. Specifically, it is designed to compute the rates of topography relaxation on the dwarf planet Ceres. The NASA Dawn mission, in orbit around Ceres since March 2015, has produced a high-resolution shape model of its surface. As on other planets including the Earth, topography on Ceres is subject to decay over time due to processes such as viscous flow and brittle failure. Because the efficiency of these processes is dependent on the material properties of the body at depth, simulating the decay of topography and comparing it to the observed shape model permits insights into Ceres' internal structure.
-Some previous applications of this basic idea- using topography to constrain internal structure- may be found in the following references:
+Some previous applications of this basic idea - using topography to constrain internal structure - may be found in the following references:
----------------------------
* src/ceres.cc Main code
-* support_code/config_in.h Reads config file and intializes system parameters
+* support_code/config_in.h Reads config file and initializes system parameters
-* support_code/ellipsoid_fit.h Finds best-fit ellipse for surface and internal density boundaries. Also uses deal.II
+* support_code/ellipsoid_fit.h Finds best-fit ellipsoid for surface and internal density boundaries. Also uses deal.II
* support_code/ellipsoid_grav.h Analytically computes self gravity of layered ellipsoids structure
* support_code/local_math.h Defines some constants for convenience
ENDIF()
#
-# Are all dependencies fullfilled?
+# Are all dependencies fulfilled?
#
IF( NOT DEAL_II_WITH_MPI OR
NOT DEAL_II_WITH_P4EST OR
{}
virtual void vector_value(const Point<dim> &p,
- Vector<double> &valuess) const override;
+ Vector<double> &values) const override;
};
template <int dim>
#include <deal.II/lac/trilinos_solver.h>
-// The functions class contains all the defintions of the functions we
+// The functions class contains all the definitions of the functions we
// will use, i.e. the right hand side function, the boundary conditions
// and the test functions.
#include "Functions.cc"
// Here is the main class for the Local Discontinuous Galerkin method
-// applied to Poisson's equation, we won't explain much of the
-// the class and method declarations, but dive deeper into describing the
+// applied to Poisson's equation, we won't explain much of
+// the class and method declarations, but dive deeper into describing the
-// functions when they are defined. The only thing I will menion
+// functions when they are defined. The only thing I will mention
// about the class declaration is that this is where I labeled
// the different types of boundaries using enums.
template <int dim>
// @sect4{Class constructor and destructor}
-// The constructor and destructor for this class is very much like the
-// like those for step-40. The difference being that we'll be passing
+// The constructor and destructor for this class are very much
+// like those for step-40. The difference is that we'll be passing
-// in an integer, <code>degree</code>, which tells us the maxiumum order
+// in an integer, <code>degree</code>, which tells us the maximum order
// of the polynomial to use as well as <code>n_refine</code> which is the
// global number of times we refine our mesh. The other main differences
// are that we use a FESystem object for our choice of basis
// FE_DGQ<dim>(degree), 1)
// </code>
//
-// which tells us that the basis functions contain discontinous polynomials
+// which tells us that the basis functions contain discontinuous polynomials
// of order <code>degree</code> in each of the <code>dim</code> dimensions
// for the vector field. For the scalar unknown we
// use a discontinuous polynomial of the order <code>degree</code>.
// as well as its gradient, just like the mixed finite element method.
// However, unlike the mixed method, the LDG method uses discontinuous
// polynomials to approximate both variables.
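// For orientation, a minimal sketch of such an element (the variable name
// and the concrete dimension are assumptions for illustration, not
// necessarily those of the program) would be:
//
//   #include <deal.II/fe/fe_dgq.h>
//   #include <deal.II/fe/fe_system.h>
//
//   const unsigned int degree = 1;
//   dealii::FESystem<2> fe(dealii::FE_DGQ<2>(degree), 2,  // vector field q
//                          dealii::FE_DGQ<2>(degree), 1); // scalar unknown u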
-// The other difference bewteen our constructor and that of step-40 is that
+// The other difference between our constructor and that of step-40 is that
-// we all instantiate our linear solver in the constructor definition.
+// we also instantiate our linear solver in the constructor definition.
template <int dim>
LDGPoissonProblem<dim>::
// the domain. This was just to show that
// the LDG method is working with local
// refinement and discussions on building
- // more realistic refinement stategies are
+ // more realistic refinement strategies are
- // discussed elsewhere in the deal.ii
+ // found elsewhere in the deal.ii
// documentation.
for (; cell != endc; ++cell)
// type, i.e. Dirichlet or Neumann,
// we loop over all the cells in the mesh and then over
// all the faces of each cell. We then have to figure out
- // which faces are on the bounadry and set all faces
+ // which faces are on the boundary and set all faces
// on the boundary to have
// <code>boundary_id</code> to be <code>Dirichlet</code>.
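  // In compact form, the loop just described might look like this
  // (a sketch assuming the enum value from the class declaration is
  // named <code>Dirichlet</code>):
  //
  //   for (const auto &cell : triangulation.active_cell_iterators())
  //     for (const auto &face : cell->face_iterators())
  //       if (face->at_boundary())
  //         face->set_boundary_id(Dirichlet);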
// We remark that one could easily set more complicated
// with a distributed triangulation!
dof_handler.distribute_dofs(fe);
- // We now renumber the dofs so that the vector of unkonwn dofs
+ // We now renumber the dofs so that the vector of unknown dofs
// that we are solving for, <code>locally_relevant_solution</code>,
// corresponds to a vector of the form,
//
// matrix and vectors that we will write to.
const IndexSet &locally_owned_dofs = dof_handler.locally_owned_dofs();
- // In additon to the locally owned dofs, we also need the the locally
- // relevant dofs. These are the dofs that have read access to and we
- // need in order to do computations on our processor, but, that
+ // In addition to the locally owned dofs, we also need the locally
+ // relevant dofs. These are the dofs that we have read access to and
+ // need in order to do computations on our processor, but that
// we do not have the ability to write to.
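  // A minimal sketch of the call this refers to (the exact signature
  // varies between deal.II versions):
  //
  //   IndexSet locally_relevant_dofs;
  //   DoFTools::extract_locally_relevant_dofs(dof_handler,
  //                                           locally_relevant_dofs);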
// Just like step-40 we create a dynamic sparsity pattern
// and distribute it to the processors. Notice how we do not have to
- // explictly mention that we are using a FESystem for system of
+ // explicitly mention that we are using a FESystem for a system of
// variables instead of a FE_DGQ for a scalar variable
- // or that we are using a discributed DoFHandler. All these specifics
+ // or that we are using a distributed DoFHandler. All these specifics
// are taken care of under the hood by the deal.ii library.
// for evaluating the basis functions
// on one side of an element face as well as another FEFaceValues object,
// <code>fe_neighbor_face_values</code>, for evaluating the basis functions
- // on the opposite side of the face, i.e. on the neighoring element's face.
+ // on the opposite side of the face, i.e. on the neighboring element's face.
// In addition, we also introduce a FESubfaceValues object,
// <code>fe_subface_values</code>, that
// will be used for dealing with faces that have multiple refinement
FullMatrix<double> ve_ue_matrix(dofs_per_cell, dofs_per_cell);
// As explained in the section on the LDG method we take our test
// function to be v and multiply it on the left side of our differential
- // equation that is on u and peform integration by parts as explained in the
+ // equation that is on u and perform integration by parts as explained in the
// introduction. Using this notation for test and solution function,
// the matrices below will then stand for:
//
// At this point we know that this cell and the neighbor
// of this cell are on the same refinement level and
// the work to assemble the interior flux matrices
- // is very much the same as before. Infact it is
+ // is very much the same as before. In fact it is
// much simpler since we do not have to loop through the
// subfaces. However, we have to check that we do
// not compute the same contribution twice. This would
- // Now that have looped over all the faces for this
+ // Now that we have looped over all the faces for this
- // cell and computed as well as disributed the local
+ // cell and computed as well as distributed the local
// flux matrices to the <code>system_matrix</code>, we
// can finally distribute the cell's <code>local_matrix</code>
// and <code>local_vector</code> contribution to the
// the faces on the boundary of the domain contribute
// to the <code>local_matrix</code>
// and <code>system_rhs</code>. We could distribute
- // the local contributions for each component seperately,
+ // the local contributions for each component separately,
// but writing to the distributed sparse matrix and vector
- // is expensive and want to to minimize the number of times
+ // is expensive and we want to minimize the number of times
// we do so.
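  // A sketch of the single combined call this alludes to, assuming the
  // constraints object is an AffineConstraints named
  // <code>constraints</code> (an assumption for illustration):
  //
  //   constraints.distribute_local_to_global(local_matrix,
  //                                          local_vector,
  //                                          local_dof_indices,
  //                                          system_matrix,
  //                                          system_rhs);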
// Here we have the function that builds the <code>local_matrix</code>
// contribution
// and local right hand side vector, <code>local_vector</code>
-// for the Dirichlet boundary condtions.
+// for the Dirichlet boundary conditions.
template<int dim>
void
LDGPoissonProblem<dim>::
// @sect4{assemble_Neumann_boundary_terms}
// Here we have the function that builds the <code>local_matrix</code>
-// and <code>local_vector</code> for the Neumann boundary condtions.
+// and <code>local_vector</code> for the Neumann boundary conditions.
template<int dim>
void
LDGPoissonProblem<dim>::
}
// We also compute the contribution for the flux for
- // $\widehat{q}$ on the Neumann bounary which is the
+ // $\widehat{q}$ on the Neumann boundary which is the
// Neumann boundary condition and enters the right
// hand side vector as
//
// As mentioned earlier I used a direct solver to solve
// the linear system of equations resulting from the LDG
// method applied to the Poisson equation. One could also
-// use a iterative sovler, however, we then need to use
-// a preconditoner and that was something I did not wanted
+// use an iterative solver; however, we then need to use
+// a preconditioner, and that was something I did not want
// to get into. For information on preconditioners
// for the LDG Method see this
// <a href="http://epubs.siam.org/doi/abs/10.1137/S1064827502410657">
-// paper</a>. The uses of a direct sovler here is
+// paper</a>. The use of a direct solver here is
// somewhat of a limitation. The built-in distributed
// direct solver in Trilinos reduces everything to one
// processor, solves the system and then distributes
// everything back out to the other processors. However,
-// by linking to more advanced direct sovlers through
+// by linking to more advanced direct solvers through
// Trilinos one can accomplish fully distributed computations
// and not much about the following function calls will
// change.
TrilinosWrappers::MPI::Vector
completely_distributed_solution(system_rhs);
- // Now we can preform the solve on the completeley distributed
+ // Now we can perform the solve on the completely distributed
// right hand side vector, system matrix and the completely
// distributed solution.
solver.solve(system_matrix,
}
// @sect4{output_results}
-// This function deals with the writing of the reuslts in parallel
+// This function deals with the writing of the results in parallel
// to disk. It is almost exactly the same as
-// in step-40 and we wont go into it. It is noteworthy
+// in step-40 and we won't go into it. It is noteworthy
// that in step-40 the output is only the scalar solution,
-// while in our situation, we are outputing both the scalar
+// while in our situation, we are outputting both the scalar
// solution as well as the vector field solution. The only
// difference between this function and the one in step-40
// is in the <code>solution_names</code> vector where we have to add
needed to write the LDG method from scratch. I thought it
would be helpful for others to have access to
this example that goes through writing a discontinuous Galerkin method from
-scatch and also shows how to do it in a distributed setting using the
+scratch and also shows how to do it in a distributed setting using the
<a href="https://www.trilinos.org">Trilinos</a> library. This example may also
be of interest to users that wish to use the LDG method, as the method is
distinctly different from the
f(\textbf{x}) && \text{in} \ \Omega, \label{eq:Primary} \\
\textbf{q}
\; &= \;
- -\nabla u && \text{in} \ \Omega, \label{eq:Auxillary} \\
+ -\nabla u && \text{in} \ \Omega, \label{eq:Auxiliary} \\
\textbf{q} \cdot \textbf{n}
\; &= \; g_{N}(\textbf{x}) && \text{on} \ \partial \Omega_{N},\\
-u &= g_{D}(\textbf{x}) && \mbox{on}\ \partial \Omega_{D}.
+u &= g_{D}(\textbf{x}) && \text{on}\ \partial \Omega_{D}.
with $\tilde{\sigma}$ being a positive constant. There are other choices of
-penalty values $\sigma$, but the one above produces in appoximations to solutions
+penalty values $\sigma$, but the one above produces approximations to solutions
that are the most accurate, see this
<a href="http://epubs.siam.org/doi/abs/10.1137/S0036142900371003">
reference</a> for more info.
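For concreteness, one common face-wise choice of this form (a plausible
sketch consistent with the text above, not quoted from the code) is
@f{align*}{
\sigma \; = \; \frac{\tilde{\sigma}}{\min\left(h_{e}, h_{e'}\right)},
@f}
where $h_{e}$ and $h_{e'}$ are the diameters of the two cells sharing the face.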
/*************************************************************/
-// formating
+// formatting
template <int dim>
void ElastoplasticTorsion<dim>::format_convergence_tables()
}
/***************************************************************************************/
- /* the coeffcients W, W' and G defining the problem.
+ /* the coefficients W, W' and G defining the problem.
Min_u \int W(|Du|^2) dx
}
if (!done)
{
- std::cerr << ", max. no. of iterations reached wiht steplength= "<< alpha
+ std::cerr << ", max. no. of iterations reached with steplength= "<< alpha
<< ", fcn value= "<< phi_alpha<<std::endl;
return false;
}
double ptime=0.0;
timer.start ();
- // initalize mesh for the selected domain
+ // initialize mesh for the selected domain
init_mesh();
// setup FE space
{
// Values from previous state
// These were the values that were used in the assembly,
- // so we must use them in the update step to be consistant.
+ // so we must use them in the update step to be consistent.
// Need to compute these before we overwrite epsilon_c_t1
const double m_s = get_m_s();
const double beta = get_beta(dt);
// Provides the muscle direction at the point @p pt
// in the real geometry (one that has undergone the
// transformation given by the profile() function)
- // and subequent grid rescaling.
+ // and subsequent grid rescaling.
// The directions are given by the gradient of the
// transformation function (i.e. the fibres are
// orientated by the curvature of the muscle).
We assume that there exists a single muscle fibre family orientated axially.
The orientation of the underlying muscle fibres is, however, not parallel,
but rather follows the curvature of the macroscopic anatomy.
-The longitudinal profile of the muscle is generated using a trignometric
+The longitudinal profile of the muscle is generated using a trigonometric
function, as opposed to being extracted from medical images.
The benefit to doing so is that the geometry can be (parametrically) created
in `deal.II` itself and the associated microstructural orientation can be
written to disk over the past hour, the amount of internet traffic
that has gone through the machine in the last hour, and similar pieces
of pretty much random information. As a consequence, the seed is then
-pretty much guaranteed to be different from program invokation to
-program invokation, and consequently we will get different random
+pretty much guaranteed to be different from program invocation to
+program invocation, and consequently we will get different random
number sequences every time. The output file is tagged with a string
representation of this random seed, so that it is safe to run the same
program multiple times at the same time in the same directory, with
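The mechanism just described can be sketched as follows (a minimal
illustration with made-up names, not the program's actual code):

    #include <fstream>
    #include <random>
    #include <string>

    int main()
    {
      std::random_device entropy;          // OS-provided entropy source
      const unsigned int seed = entropy(); // differs between invocations
      std::mt19937 rng(seed);

      // Tag the output file name with the seed so that simultaneous runs
      // in the same directory write to different files.
      std::ofstream output("output-" + std::to_string(seed) + ".txt");
      output << rng() << '\n';
    }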
// Next, since this particular program allows for the use of
// multiple threads, the helper CopyData structures
// are defined. There are two kinds of these, one is used
- // for the copying cell-wise contributions to the corresponging
+ // for the copying cell-wise contributions to the corresponding
// node-associated data structures...
template <int dim>
struct NodeAssemblyCopyData
// Similarly, two ScratchData classes are defined.
// One for the assembly part, where we need
// FEValues, FEFaceValues, Quadrature and storage
- // for the basis fuctions...
+ // for the basis functions...
template <int dim>
struct NodeAssemblyScratchData
{
// First, the function that copies local cell contributions to the corresponding nodal
// matrices and vectors is defined. It places the values obtained from local cell integration
- // into the correct place in a matrix/vector corresponging to a specific node.
+ // into the correct place in a matrix/vector corresponding to a specific node.
template <int dim>
void MultipointMixedDarcyProblem<dim>::copy_cell_to_node(const DataStructures::NodeAssemblyCopyData<dim> &copy_data)
{
std::vector<Vector<double>>& computed_quantities) const {
const unsigned int n_quadrature_points = inputs.solution_values.size();
- /*--- Check the correctness of all data structres ---*/
+ /*--- Check the correctness of all data structures ---*/
Assert(inputs.solution_gradients.size() == n_quadrature_points, ExcInternalError());
Assert(computed_quantities.size() == n_quadrature_points, ExcInternalError());
// @sect{ <code>NavierStokesProjectionOperator::NavierStokesProjectionOperator</code> }
- // The following class sets effecively the weak formulation of the problems for the different stages
+ // The following class effectively sets the weak formulation of the problems for the different stages
// and for both velocity and pressure.
- // The template parameters are the dimnesion of the problem, the polynomial degree for the pressure,
+ // The template parameters are the dimension of the problem, the polynomial degree for the pressure,
// the polynomial degree for the velocity, the number of quadrature points for integrals for the pressure step,
}
- // Put together all the previous steps for porjection of pressure gradient. Here we loop only over cells
+ // Put together all the previous steps for projection of pressure gradient. Here we loop only over cells
//
template<int dim, int fe_degree_p, int fe_degree_v, int n_q_points_1d_p, int n_q_points_1d_v, typename Vec>
void NavierStokesProjectionOperator<dim, fe_degree_p, fe_degree_v, n_q_points_1d_p, n_q_points_1d_v, Vec>::
}
- // Put together all previous steps. This is the overriden function that effectively performs the
+ // Put together all previous steps. This is the overridden function that effectively performs the
// matrix-vector multiplication.
//
template<int dim, int fe_degree_p, int fe_degree_v, int n_q_points_1d_p, int n_q_points_1d_v, typename Vec>
Tensor<1, dim, VectorizedArray<Number>> tmp;
for(unsigned int d = 0; d < dim; ++d)
- tmp[d] = make_vectorized_array<Number>(1.0); /*--- We build the usal vector of ones that we will use as dof value ---*/
+ tmp[d] = make_vectorized_array<Number>(1.0); /*--- We build the usual vector of ones that we will use as dof value ---*/
/*--- Now we loop over faces ---*/
for(unsigned int face = face_range.first; face < face_range.second; ++face) {
void NavierStokesProjection<dim>::diffusion_step() {
TimerOutput::Scope t(time_table, "Diffusion step");
- /*--- We first speicify that we want to deal with velocity dof_handler (index 0, since it is the first one
+ /*--- We first specify that we want to deal with the velocity dof_handler (index 0, since it is the first one
in the 'dof_handlers' vector) ---*/
const std::vector<unsigned int> tmp = {0};
navier_stokes_matrix.initialize(matrix_free_storage, tmp, tmp);
u_star = u_extr;
}
- /*--- Build the linear solver; in this case we specifiy the maximum number of iterations and residual ---*/
+ /*--- Build the linear solver; in this case we specify the maximum number of iterations and residual ---*/
SolverControl solver_control(max_its, eps*rhs_u.l2_norm());
SolverGMRES<LinearAlgebra::distributed::Vector<double>> gmres(solver_control);
// The following function is used in determining the maximal nodal difference
- // between old and current velocity value in order to see if we have reched steady-state.
+ // between old and current velocity value in order to see if we have reached steady-state.
//
template<int dim>
double NavierStokesProjection<dim>::get_maximal_difference_velocity() {
double local_lift = 0.0;
/*--- We need to perform a unique loop because the whole stress tensor takes into account contributions of
- velocity and pressure obviously. However, the two dof_handlers are different, so we neede to create an ad-hoc
+ velocity and pressure obviously. However, the two dof_handlers are different, so we need to create an ad-hoc
iterator for the pressure that we update manually. It is guaranteed that the cells are visited in the same order
(see the documentation) ---*/
auto tmp_cell = dof_handler_pressure.begin_active();
}
triangulation.prepare_coarsening_and_refinement();
- /*--- Now we prepare the object for transfering, basically saving the old quantities using SolutionTransfer.
+ /*--- Now we prepare the object for transferring, basically saving the old quantities using SolutionTransfer.
Since the 'prepare_for_coarsening_and_refinement' method can be called only once, but we have two vectors
for dof_handler_velocity, we need to put them in an auxiliary vector. ---*/
std::vector<const LinearAlgebra::distributed::Vector<double>*> velocities;
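  /*--- A sketch of how this auxiliary vector is then used; the names u_n,
        u_n_minus_1 and solution_transfer_velocity are assumptions for
        illustration, not necessarily those of the program:

        velocities.push_back(&u_n);
        velocities.push_back(&u_n_minus_1);
        solution_transfer_velocity.prepare_for_coarsening_and_refinement(velocities);
  ---*/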
ENDIF()
-# Are all dependencies fullfilled?
+# Are all dependencies fulfilled?
IF(NOT DEAL_II_WITH_MPI OR
NOT DEAL_II_WITH_TRILINOS OR
NOT DEAL_II_TRILINOS_WITH_SACADO)
Vector<double> sum_solid_vol_fraction_vertex(vertex_handler_ref.n_dofs());
// We need to create a new FE space with a dim dof per node to
- // be able to ouput data on nodes in vector form
+ // be able to output data on nodes in vector form
FESystem<dim> fe_vertex_vec(FE_Q<dim>(1),dim);
DoFHandler<dim> vertex_vec_handler_ref(triangulation);
vertex_vec_handler_ref.distribute_dofs(fe_vertex_vec);
double total_vol_reference = 0.0;
std::vector<Point<dim+1>> solution_vertices(tracked_vertices_IN.size());
- //Auxiliar variables needed for mpi processing
+ //Auxiliary variables needed for mpi processing
Tensor<1,dim> sum_reaction_mpi;
Tensor<1,dim> sum_reaction_pressure_mpi;
Tensor<1,dim> sum_reaction_extra_mpi;
std::cout << "Assembly method: Residual and linearisation computed using AD." << std::endl;
// Sacado Rad-Fad is not thread-safe, so disable threading.
- // Parallisation using MPI would be possible though.
+ // Parallelisation using MPI would be possible though.
Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv,
1);
# Set the name of the project and target:
SET(TARGET "viscoelastic_strip_with_hole")
-# Declare all source files the targest consists of:
+# Declare all source files the target consists of:
SET(TARGET_SRC
${TARGET}.cc
)
ENDIF()
#
-# Are all dependencies fullfilled?
+# Are all dependencies fulfilled?
#
IF(NOT DEAL_II_WITH_CXX11 OR
NOT DEAL_II_WITH_MPI OR
tria_2d_not_flat);
// Attach a manifold to the curved boundary and refine
- // Note: We can only guarentee that the vertices sit on
+ // Note: We can only guarantee that the vertices sit on
// the curve, so we must test with their position instead
// of the cell centre.
const Point<2> centre_2d (0,0);
}
}
Assert(vol_current > 0.0, ExcInternalError());
- // Sum across all porcessors
+ // Sum across all processors
dil_L2_error = Utilities::MPI::sum(dil_L2_error,mpi_communicator);
vol_reference = Utilities::MPI::sum(vol_reference,mpi_communicator);
vol_current = Utilities::MPI::sum(vol_current,mpi_communicator);
It will pick up the code gallery and create joint documentation for the
tutorial and the code gallery.
-### Maintainance of contributed codes
+### Maintenance of contributed codes
The examples in the code-gallery are periodically adjusted so that they
maintain compatibility with recent versions of deal.II. This means
- /** @brief This function warps points on a cyclindrical mesh by cosine wave along the central axis.
+ /** @brief This function warps points on a cylindrical mesh by a cosine wave along the central axis.
* We use this function to generate the "sinusoid" mesh, which is the surface of revolution
* bounded by the cosine wave.
* @tparam spacedim This is the dimension of the embedding space, which is where the input point lives
* @param p This is the input point to be translated.
- * @return The return as a tranlated point in the same dimensional space. This is the new point on the mesh.
+ * @return The translated point in the same dimensional space. This is the new point on the mesh.
*/
template<int spacedim>
Point<spacedim> transform_function(const Point<spacedim>&p)
// because we are explicitly referencing the x, y, and z coordinates
Assert(spacedim == 3, ExcNotImplemented());
- // Retruns a point where the x-coordinate is unchanged but the y and z coordinates are adjusted
+ // Returns a point where the x-coordinate is unchanged but the y and z coordinates are adjusted
// by a cos wave of period 20, amplitude .5, and vertical shift 1
return Point<spacedim>(p(0), p(1)*(1 + .5*std::cos((3.14159/10)*p(0))), p(2)*(1 + .5*std::cos((3.14159/10)*p(0))));
}
double time;
- /// @brief The amount time is increased each iteration/ the denominator of the discretized time derivative
+ /// @brief The amount by which time is increased each iteration; the denominator of the discretized time derivative
double time_step;
- /// @brief Counts the number of iterations that have ellapsed
+ /// @brief Counts the number of iterations that have elapsed
unsigned int timestep_number;
/// @brief Used to compute the time_step: time_step = 1/timestep_denominator
unsigned int timestep_denominator;
, end_time(end_time)
{}
- /** @brief Distrubutes the finite element vectors to each DoF, creates the system matrix, solution, old_solution, and system_rhs vectors,
+ /** @brief Distributes the finite element vectors to each DoF, creates the system matrix, solution, old_solution, and system_rhs vectors,
* and outputs the number of DoF's to the console.
* @tparam dim The dimension of the manifold
* @tparam spacedim The dimension of the ambient space
setup_system();
- // Counts total time ellapsed
+ // Counts total time elapsed
time = 0.0;
// Counts number of iterations
timestep_number = 0;
const FEValuesExtractors::Scalar u(0);
const FEValuesExtractors::Scalar v(1);
- // Loops over the cells to create the system matrix. We do this only once becase the timestep is constant
+ // Loops over the cells to create the system matrix. We do this only once because the timestep is constant
for(const auto &cell : dof_handler.active_cell_iterators()){
cell_matrix = 0;
cell_rhs = 0;
{
using namespace SwiftHohenbergSolver;
- // An array of mesh types. We itterate over this to allow for longer runs without having to stop the code
+ // An array of mesh types. We iterate over this to allow for longer runs without having to stop the code
MeshType mesh_types[5] = {HYPERCUBE, CYLINDER, SPHERE, TORUS, SINUSOID};
- // An array of initial condition types. We itterate this as well, for the same reason
+ // An array of initial condition types. We iterate this as well, for the same reason
InitialConditionType ic_types[3] = {HOTSPOT, PSUEDORANDOM, RANDOM};
// Controls how long the code runs
try{
// Switch statement that determines what template parameters are used by the solver object. Template parameters must be known at compile time, so we cannot
- // pass this as a varible unfortunately. In each case, we create a filename string (named appropriately for the particular case), output to the console what
+ // pass this as a variable unfortunately. In each case, we create a filename string (named appropriately for the particular case), output to the console what
// we are running, create the solver object, and call run(). Note that for the cylinder, sphere, and sinusoid we decrease the refinement number by 1. This keeps
// the number of dofs used in these cases comparable to the number of dofs on the 2D hypercube (otherwise the number of dofs is much larger). For the torus, we
// decrease the refinement number by 2.
}
catch (std::exception &exc)
{
- std::cout << "An error occured" << std::endl;
+ std::cout << "An error occurred" << std::endl;
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
}
catch (...)
{
- std::cout << "Error occured, made it past first catch" << std::endl;
+ std::cout << "Error occurred, made it past first catch" << std::endl;
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
reasonably well behaved for small values of $r$ and $g_1$, but there
are interesting behaviors that occur when $g_1$ is smaller or larger
than $r$ in magnitude, so this allows us room to vary $g_1$ and
-explore these behavior. Additionally, we choose $r = 0.3$ because this matches the parameters used by Gurevich in [1]. We chose our parameters to match so that we could compare the output of our program to the results presented in [1], which was useful for validating that our code was functioning properly during the developement process. To summarize, this code solves:
+explore these behaviors. Additionally, we choose $r = 0.3$ because this matches the parameters used by Gurevich in [1]. We chose our parameters to match so that we could compare the output of our program to the results presented in [1], which was useful for validating that our code was functioning properly during the development process. To summarize, this code solves:
@f{align*}{
\frac{\partial u}{\partial t} = 0.3u - (1 + \Delta)^2 u + g_1 u^2 - u^3
const types::boundary_id interface_boundary_id;
Adapter<dim, CouplingParamters> adapter;
- // The time-step size delta_t is the acutual time-step size used for all
+ // The time-step size delta_t is the actual time-step size used for all
// computations. The preCICE time-step size is obtained by preCICE in order to
// ensure a synchronization at all coupling time steps. The solver time
// step-size is the desired time-step size of our individual solver. In more
ENDIF()
#
-# Are all dependencies fullfilled?
+# Are all dependencies fulfilled?
#
IF( NOT DEAL_II_WITH_MPI OR
NOT DEAL_II_WITH_P4EST OR
Vector<double> &rhs) const
{
// Assemble right hand side of the dual problem when the quantity of interest is
- // a nonlinear functinoal. In this case, the QoI should be linearized which depends
+ // a nonlinear functional. In this case, the QoI should be linearized; the linearization depends
// on the solution of the primal problem.
- // The extracter of the linearized QoI functional is the gradient of the the original
+ // The extractor of the linearized QoI functional is the gradient of the original
// QoI functional with the primal solution values.
AssertThrow (dim >= 2, ExcNotImplemented());
ElastoPlasticProblem<dim>::refine_grid ()
{
// ---------------------------------------------------------------
- // Make a field variable for history varibales to be able to
+ // Make a field variable for history variables to be able to
// transfer the data to the quadrature points of the new mesh
FE_DGQ<dim> history_fe (1);
DoFHandler<dim> history_dof_handler (triangulation);
information)]
set maximum relative error [set a criterion value for
- perfoming the mesh adaptivity]
+ performing the mesh adaptivity]
set output directory [determine a directory to save
the output results]
## Documentation
-In this example, we solve a simple transient nonlinear heat transfer equation. The nonlinearity is due to the temperature dependence of the thermal conductivity. Two main aspects covered by this example are (a) it develops the residual and the jacobian using automatic differentiation and (b) solves the nonlinear equations using TRILINOS NOX. The actual code contains the comments which will explain how these aspects are executed. Here, we give the full derivation and set up the equations. We also provide explanations to some of the functions important for this applicaiton.
+In this example, we solve a simple transient nonlinear heat transfer equation. The nonlinearity is due to the temperature dependence of the thermal conductivity. Two main aspects covered by this example are (a) it develops the residual and the Jacobian using automatic differentiation and (b) solves the nonlinear equations using Trilinos NOX. The actual code contains comments which explain how these aspects are executed. Here, we give the full derivation and set up the equations. We also provide explanations of some of the functions important for this application.
### Strong form
### Results
-The results are essentially the time evolution of the temperature throughout the domain. The first of the pictures below shows the temperature distribution at the final step, i.e. at time $t=5$. This should be very similar to the figure at the bottom on the page [here](https://www.mathworks.com/help/pde/ug/heat-transfer-problem-with-temperature-dependent-properties.html). We also plot the time evolution of the temperature at a point close to the right edge of the domain indicated by the small magenta dot (close to $(0.49, 0.12)$) in the second of the pictures below. This is also simlar to the second figure at the [bottom of this page](https://www.mathworks.com/help/pde/ug/heat-transfer-problem-with-temperature-dependent-properties.html). There could be minor differences due to the choice of the point. Further, note that, we have plotted in the second of the pictures below the temperature as a function of time steps instead of time. Since the $\Delta t$ chosen is 0.1, 50 steps maps to $t=5$ as in the link.
+The results are essentially the time evolution of the temperature throughout the domain. The first of the pictures below shows the temperature distribution at the final step, i.e. at time $t=5$. This should be very similar to the figure at the bottom of the page [here](https://www.mathworks.com/help/pde/ug/heat-transfer-problem-with-temperature-dependent-properties.html). We also plot the time evolution of the temperature at a point close to the right edge of the domain indicated by the small magenta dot (close to $(0.49, 0.12)$) in the second of the pictures below. This is also similar to the second figure at the [bottom of this page](https://www.mathworks.com/help/pde/ug/heat-transfer-problem-with-temperature-dependent-properties.html). There could be minor differences due to the choice of the point. Further, note that we have plotted in the second of the pictures below the temperature as a function of time steps instead of time. Since the $\Delta t$ chosen is 0.1, 50 steps correspond to $t=5$ as in the link.
![image](./doc/Images/contour.png)
public:
Initialcondition(): Function<2>(1)
{}
- // Returns the intitial values.
+ // Returns the initial values.
virtual double value(const Point<2> &p,
const unsigned int component =0) const override;
};
fe_values[t].get_function_values(converged_solution, consol);
fe_values[t].get_function_gradients(converged_solution, consol_grad);
/**
- * residual_ad is defined and initalized in its symbolic form.
+ * residual_ad is defined and initialized in its symbolic form.
*/
std::vector<ADNumberType> residual_ad(n_dependent_variables,
ADNumberType(0.0));
double Initialcondition::value(const Point<2> & /*p*/, const unsigned int /*comp*/) const
{
/**
- * In the current case, we asume that the initial conditions are zero everywhere.
+ * In the current case, we assume that the initial conditions are zero everywhere.
*/
return 0.0;
}
// This function packs a linear buffer with data so that the buffer
// may be sent to another processor via MPI. The buffer is cast to
// a type we can work with. The first element of the buffer is the
-// size of the buffer. Then we iterate over soltuion vector u and
+// size of the buffer. Then we iterate over solution vector u and
// fill the buffer with our solution data. Finally we tell XBraid
// how much data we wrote.
int
return 0;
}
-// This function unpacks a buffer that was recieved from a different
+// This function unpacks a buffer that was received from a different
// processor via MPI. The size of the buffer is read from the first
// element, then we iterate over the size of the buffer and fill
// the values of solution vector u with the data in the buffer.
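// As a standalone sketch of the buffer convention these two comments
// describe (illustrative only; the real XBraid callbacks receive
// different argument types):
//
//   void pack(double *buffer, const std::vector<double> &u)
//   {
//     buffer[0] = u.size();                    // first element: size
//     for (std::size_t i = 0; i < u.size(); ++i)
//       buffer[i + 1] = u[i];                  // then the solution data
//   }
//
//   std::vector<double> unpack(const double *buffer)
//   {
//     const std::size_t size = static_cast<std::size_t>(buffer[0]);
//     std::vector<double> u(size);
//     for (std::size_t i = 0; i < size; ++i)
//       u[i] = buffer[i + 1];
//     return u;
//   }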
// This struct contains all data that changes with time. For now
// this is just the solution data. When doing AMR this should
-// probably include the triangulization, the sparsity patter,
+// probably include the triangulation, the sparsity pattern,
// constraints, etc.
/**
* \brief Struct that contains the deal.ii vector.
s_pout_basename = "pout" ;
s_pout_init = true ;
}
- // if MPI not initialized, we cant open the file so return cout
+ // if MPI not initialized, we can't open the file so return cout
if ( ! flag_i || flag_f)
{
return std::cout; // MPI hasn't been started yet, or has ended....
c(u;u, v) = \int_{\Omega} (u \cdot \nabla u) \cdot v d\Omega
@f}
-Substracting $m(u^n, v) + \Delta{t}a((u^n, p^n), (v, q))$ from both sides of the equation,
+Subtracting $m(u^n, v) + \Delta{t}a((u^n, p^n), (v, q))$ from both sides of the equation,
we have the incremental form:
@f{eqnarray*}
m(\Delta{u}, v) + \Delta{t}\cdot a((\Delta{u}, \Delta{p}), (v, q)) = -\Delta{t}\, a((u^n, p^n), (v, q)) - \Delta{t}\, c(u^n;u^n, v)
\right)
@f}
-#### Grad-Div stablization ####
+#### Grad-Div stabilization ####
Similar to step-57, we add $\gamma B^T M_p^{-1} B$ to the upper left block of the system. This is a
term that is consistent, i.e., the corresponding operators applied to the exact solution would
@f}
where $\tilde{A} = A + \gamma B^T M_p^{-1} B$.
-A detailed explanation of the Grad-Div stablization can be found in [1].
+A detailed explanation of the Grad-Div stabilization can be found in [1].
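In block form (a sketch for orientation, consistent with the notation above
rather than copied from the code), the stabilized incremental system reads
@f{eqnarray*}
\left(\begin{array}{cc} \tilde{A} & B^T \\ B & 0 \end{array}\right)
\left(\begin{array}{c} \Delta{u} \\ \Delta{p} \end{array}\right)
=
\left(\begin{array}{c} F \\ 0 \end{array}\right),
@f}
so that a block preconditioner only needs good approximations of
$\tilde{A}^{-1}$ and of the Schur complement $B \tilde{A}^{-1} B^T$.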
#### Block preconditioner ####
-Solving time-dependent incompressible Navier-Stokes problem in parallel with Grad-Div stablization using IMEX scheme.
+Solving the time-dependent incompressible Navier-Stokes problem in parallel with Grad-Div stabilization, using an IMEX scheme.
// refinement:
GridGenerator::flatten_triangulation(middle, tmp2);
- // Left domain is requred in 3d only.
+ // Left domain is required in 3d only.
if (compute_in_2d)
{
GridGenerator::merge_triangulations(tmp2, right, tria);
// The system equation is written in the incremental form, and we treat
// the convection term explicitly. Therefore the system equation is linear
// and symmetric, which does not need to be solved with Newton's iteration.
- // The system is further stablized and preconditioned with Grad-Div method,
+ // The system is further stabilized and preconditioned with the Grad-Div method,
// where GMRES solver is used as the outer solver.
template <int dim>
class InsIMEX
bool apply_nonzero_constraints = (time.get_timestep() == 1);
// We have to assemble the LHS for the initial two time steps:
// once using nonzero_constraints, once using zero_constraints,
- // as well as the steps imediately after mesh refinement.
+ // as well as the steps immediately after mesh refinement.
bool assemble_system = (time.get_timestep() < 3 || refined);
refined = false;
assemble(apply_nonzero_constraints, assemble_system);
//////////////////////////////////////
cK = 1.0;
cE = 1.0;
- sharpness_integer=10; //this will be multipled by min_h
+ sharpness_integer=10; //this will be multiplied by min_h
//TRANSPORT_TIME_INTEGRATION=FORWARD_EULER;
TRANSPORT_TIME_INTEGRATION=SSP33;
//ALGORITHM = "MPP_u1";
//////////////////////////////////////
cK = 1.0; // compression constant
cE = 1.0; // entropy viscosity constant
- sharpness_integer=1; //this will be multipled by min_h
+ sharpness_integer=1; //this will be multiplied by min_h
//TRANSPORT_TIME_INTEGRATION=FORWARD_EULER;
TRANSPORT_TIME_INTEGRATION=SSP33;
//ALGORITHM = "MPP_u1";