uses the symmetric gradient. This differs from the convention in the fluids
community by a factor of two since the fact that $\textrm{div}\; \textbf{u}=0$
implies that $-\textrm{div}\; \varepsilon(\textbf{u}) = -\frac 12 \Delta
-\textbf{u}$. The equations above are therefore equivalent to
+\textbf{u}$. The equations above are therefore equivalent to
@f{eqnarray*}
-\frac 12 \Delta\textbf{u} + \nabla p &=& \textbf{f},
\\
  -\textrm{div}\; \textbf{u} &=& 0.
@f}
The weak form of the equations is obtained by writing them in vector
form as
@f{eqnarray*}
- \left(
- {-\textrm{div}\; \varepsilon(\textbf{u}) + \nabla p}
- \atop
- {-\textrm{div}\; \textbf{u}}
- \right)
+ \begin{pmatrix}
+ {-\textrm{div}\; \varepsilon(\textbf{u}) + \nabla p}
+ \\
+ {-\textrm{div}\; \textbf{u}}
+ \end{pmatrix}
=
- \left(
+ \begin{pmatrix}
{\textbf{f}}
- \atop
+ \\
0
- \right),
+ \end{pmatrix},
@f}
forming the dot product from the left with a vector-valued test
-function $\phi = \left({\textbf v \atop q}\right)$ and integrating
+function $\phi = \begin{pmatrix}\textbf v \\ q\end{pmatrix}$ and integrating
over the domain $\Omega$, yielding the following set of equations:
@f{eqnarray*}
  (\mathrm v,
   -\textrm{div}\; \varepsilon(\textbf{u}) + \nabla p)_{\Omega}
  -
  (\mathrm q, \textrm{div}\; \textbf{u})_{\Omega}
  =
  (\textbf{v}, \textbf{f})_\Omega,
@f}
-which has to hold for all test functions $\phi = \left({\textbf v
-\atop q}\right)$.
+which has to hold for all test functions $\phi = \begin{pmatrix}\textbf v
+\\ q\end{pmatrix}$.
In practice, one wants to impose as little regularity on the pressure
variable as possible; consequently, we integrate by parts the second term,
using that the test function $\textbf{v}$ vanishes on the boundary so that
no boundary term appears:
@f{eqnarray*}
  (\mathrm v, -\textrm{div}\; \varepsilon(\textbf{u}))_{\Omega}
  -
  (\textrm{div}\; \mathrm v, p)_{\Omega}
  -
  (\mathrm q, \textrm{div}\; \textbf{u})_{\Omega}
  =
  (\textbf{v}, \textbf{f})_\Omega.
@f}
\frac{\partial^2 u}{\partial t^2}
-
\Delta u &=& f
- \qquad
+ \qquad
\textrm{in}\ \Omega\times [0,T],
\\
u(x,t) &=& g
- \qquad
+ \qquad
\textrm{on}\ \partial\Omega\times [0,T],
\\
u(x,0) &=& u_0(x)
- \qquad
+ \qquad
\textrm{in}\ \Omega,
\\
\frac{\partial u(x,0)}{\partial t} &=& u_1(x)
- \qquad
+ \qquad
\textrm{in}\ \Omega.
@f}
Note that since this is an equation with second-order time
derivatives, we need to pose two initial conditions, one for the value
and one for the time derivative of the solution. There has been a
long-standing debate in the numerical analysis community
over whether a discretization of time dependent equations should
involve first discretizing the time variable leading to a stationary
PDE at each time step that is then solved using standard finite
-element techniques (this is called the Rothe method), or whether
+element techniques (this is called the Rothe method), or whether
one should first discretize the spatial variables, leading to a large
system of ordinary differential equations that can then be handled by
one of the usual ODE solvers (this is called the method of lines).
With the Rothe method, we obtain a stationary PDE at
each time step that we may choose to discretize independently of the
mesh used for the previous time step; this approach is not without
perils and difficulties, but at least is a sensible and well-defined
-procedure.
+procedure.
For all these reasons, for the present program, we choose to use the
Rothe method for discretization, i.e. we first discretize in time and
then in space. To this end, we introduce the velocity
$v=\frac{\partial u}{\partial t}$ as a second variable and rewrite the
wave equation as the first-order (in time) system
@f{eqnarray*}
  \frac{\partial u}{\partial t}
  -
v
&=& 0
- \qquad
+ \qquad
\textrm{in}\ \Omega\times [0,T],
\\
\frac{\partial v}{\partial t}
-
\Delta u &=& f
- \qquad
+ \qquad
\textrm{in}\ \Omega\times [0,T],
\\
u(x,t) &=& g
- \qquad
+ \qquad
\textrm{on}\ \partial\Omega\times [0,T],
\\
u(x,0) &=& u_0(x)
- \qquad
+ \qquad
\textrm{in}\ \Omega,
\\
v(x,0) &=& u_1(x)
- \qquad
+ \qquad
\textrm{in}\ \Omega.
@f}
The advantage of this formulation is that it now only contains first
time derivatives of $u$ and $v$. As a consequence, we also need to
impose boundary conditions on the velocity, namely $v=\frac{\partial
g}{\partial t}$ on the boundary. It turns out in numerical examples that this
is actually necessary: without doing so the solution doesn't look particularly
wrong, but the Crank-Nicolson scheme does not conserve energy if one doesn't
-enforce these boundary conditions.
+enforce these boundary conditions.
With this formulation, let us introduce the following time
discretization where a superscript $n$ indicates the number of a time
-step and $k=t_n-t_{n-1}$ is the length of the present time step:
+step and $k=t_n-t_{n-1}$ is the length of the present time step:
\f{eqnarray*}
- \frac{u^n - u^{n-1}}{k}
+ \frac{u^n - u^{n-1}}{k}
- \left[\theta v^n + (1-\theta) v^{n-1}\right] &=& 0,
\\
- \frac{v^n - v^{n-1}}{k}
+ \frac{v^n - v^{n-1}}{k}
- \Delta\left[\theta u^n + (1-\theta) u^{n-1}\right]
&=& \theta f^n + (1-\theta) f^{n-1}.
\f}
If we set $\theta=0$, for example, the first equation would reduce to
$\frac{u^n - u^{n-1}}{k} - v^{n-1} = 0$, which is well-known as the
forward or explicit Euler method. On the other hand, if we set
-$\theta=1$, then we would get
+$\theta=1$, then we would get
$\frac{u^n - u^{n-1}}{k} - v^n = 0$, which corresponds to the
backward or implicit Euler method. Both these methods are first order
accurate methods. They are simple to implement, but they are not
really very accurate.
The third case would be to choose $\theta=\frac 12$. The first of the
-equations above would then read $\frac{u^n - u^{n-1}}{k}
+equations above would then read $\frac{u^n - u^{n-1}}{k}
- \frac 12 \left[v^n + v^{n-1}\right] = 0$. This method is known as
the Crank-Nicolson method and has the advantage that it is second
order accurate. In addition, it has the nice property that it
conserves the energy of the solution, a point we will come back to
below. The two equations of the scheme can be decoupled by inserting
the second one into the first and rearranging terms. We then get
\f{eqnarray*}
\left[ 1-k^2\theta^2\Delta \right] u^n &=&
- \left[ 1+k^2\theta(1-\theta)\Delta\right] u^{n-1} + k v^{n-1}
+ \left[ 1+k^2\theta(1-\theta)\Delta\right] u^{n-1} + k v^{n-1}
+ k^2\theta\left[\theta f^n + (1-\theta) f^{n-1}\right],\\
v^n &=& v^{n-1} + k\Delta\left[ \theta u^n + (1-\theta) u^{n-1}\right]
+ k\left[\theta f^n + (1-\theta) f^{n-1}\right].
\right],
\\
(v^n,\varphi)
- &=&
+ &=&
(v^{n-1},\varphi)
- -
- k\left[ \theta (\nabla u^n,\nabla\varphi) +
+ -
+ k\left[ \theta (\nabla u^n,\nabla\varphi) +
(1-\theta) (\nabla u^{n-1},\nabla \varphi)\right]
+ k
  \left[
    \theta (f^n,\varphi) + (1-\theta) (f^{n-1},\varphi)
  \right].
\f}
If we plug these expansions into the above equations and test with the
test functions from the present mesh, we get the following linear
-system:
+system:
\f{eqnarray*}
(M^n + k^2\theta^2 A^n)U^n &=&
M^{n,n-1}U^{n-1} - k^2\theta(1-\theta) A^{n,n-1}U^{n-1}
  + k M^{n,n-1}V^{n-1}
  + k^2\theta
  \left[
    \theta F^n + (1-\theta) F^{n-1}
  \right],
\\
M^nV^n
- &=&
+ &=&
M^{n,n-1}V^{n-1}
- -
+ -
k\left[ \theta A^n U^n +
(1-\theta) A^{n,n-1} U^{n-1}\right]
  + k
  \left[
    \theta F^n + (1-\theta) F^{n-1}
  \right]
\f}
where
@f{eqnarray*}
- M^n_{ij} &=& (\phi_i^n, \phi_j^n),
+ M^n_{ij} &=& (\phi_i^n, \phi_j^n),
\\
- A^n_{ij} &=& (\nabla\phi_i^n, \nabla\phi_j^n),
+ A^n_{ij} &=& (\nabla\phi_i^n, \nabla\phi_j^n),
\\
- M^{n,n-1}_{ij} &=& (\phi_i^n, \phi_j^{n-1}),
+ M^{n,n-1}_{ij} &=& (\phi_i^n, \phi_j^{n-1}),
\\
A^{n,n-1}_{ij} &=& (\nabla\phi_i^n, \nabla\phi_j^{n-1}),
  \\
  F^n_{i} &=& (f^n, \phi_i^n),
  \\
  F^{n-1}_{i} &=& (f^{n-1}, \phi_i^n).
@f}
Under these conditions (i.e. a mesh that doesn't change), one can optimize the
solution procedure a bit by basically eliminating the solution of the second
linear system. We will discuss this in the introduction of the @ref step_25
-"step-25" program.
+"step-25" program.
<h3>Energy conservation</h3>
For the wave equation, the energy
@f[
  E(t) = \frac 12 \int_\Omega \left(\frac{\partial u}{\partial
  t}\right)^2 + (\nabla u)^2 \; dx
@f]
is a conserved quantity, i.e. one that doesn't change with time. We
-will compute this quantity after each time
+will compute this quantity after each time
step. It is straightforward to see that if we replace $u$ by its finite
element approximation, and $\frac{\partial u}{\partial t}$ by the finite
element approximation of the velocity $v$, then the discrete energy reads
@f[
  E(t_n) = \frac 12 \left<V^n, M^n V^n\right>
  +
  \frac 12 \left<U^n, A^n U^n\right>.
@f]
As we will see in the results section, the Crank-Nicolson scheme does indeed
conserve the energy, whereas neither the forward nor the backward Euler scheme
-do.
+do.
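In the program, this discrete energy can be evaluated after each time
step with a single statement. A sketch, assuming
<code>solution_u</code> and <code>solution_v</code> hold $U^n$ and
$V^n$ and the matrix objects are as above (the names are illustrative):
@code
// E(t_n) = 1/2 <V,MV> + 1/2 <U,AU>
const double energy =
  0.5 * mass_matrix.matrix_norm_square (solution_v) +
  0.5 * laplace_matrix.matrix_norm_square (solution_u);
@endcode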
<h3>Who are Courant, Friedrichs, and Lewy?</h3>
\\
u_1 &=& 0,
\\
-   g &=& \left\{{\sin (4\pi t) \atop 0}
-   \qquad {\textrm{for}\ t\le \frac 12, x=-1, -\frac 13<y<\frac 13
-   \atop \textrm{otherwise}}
+   g &=& \left\{\begin{matrix}\sin (4\pi t)
+   \qquad \textrm{for}\ t\le \frac 12, x=-1, -\frac 13<y<\frac 13
+   \\
+   0 \qquad \textrm{otherwise}\end{matrix}
    \right.
@f}
This corresponds to a membrane initially at rest and clamped all around, where
someone is waving a part of the clamped boundary once up and down, thereby
-shooting a wave into the domain.
+shooting a wave into the domain.
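Time-dependent boundary values of this kind can be implemented as a
Function object that evaluates the current time. The following is only
a sketch of what such a class might look like in 2d (the class name is
hypothetical; the check <code>p[0] < 0</code> suffices to select the
$x=-1$ part of the boundary since the function is only evaluated at
boundary points):
@code
template <int dim>
class BoundaryValuesU : public Function<dim>
{
public:
  virtual double value (const Point<dim> &p,
                        const unsigned int /*component*/ = 0) const
  {
    // wave the part of the boundary with x=-1, -1/3<y<1/3 up and
    // down until t=1/2, then keep it clamped at zero:
    if ((this->get_time() <= 0.5) &&
        (p[0] < 0) && (p[1] > -1./3) && (p[1] < 1./3))
      return std::sin (4 * numbers::PI * this->get_time());
    else
      return 0;
  }
};
@endcode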
with deal.II. It solves the Laplace equation and so builds only on the first
few tutorial programs, in particular on step-4 for dimension
independent programming and step-6 for adaptive mesh
-refinement.
+refinement.
The $hp$ finite element method was proposed in the early 1980s by
Babuska and Guo as an alternative to either
corners, or where coefficients are discontinuous; consequently, the
approximation can not be improved in these areas by increasing the polynomial
degree $p$ but only by refining the mesh, i.e. by reducing the mesh size
-$h$. These differing means to reduce the
+$h$. These differing means to reduce the
error have led to the notion of $hp$ finite elements, where the approximating
finite element spaces are adapted to have a high polynomial degree $p$
wherever the solution is sufficiently smooth, while the mesh width $h$ is
tasks are already well supported by functionality provided by the
deal.II libraries, and that we will only have to provide the logic of
what the program should do, not exactly how all this is going to
-happen.
+happen.
In deal.II, the $hp$ functionality is largely packaged into
the hp namespace. This namespace provides classes that handle
It may be worth giving a slightly larger perspective at the end of
this first part of the introduction. $hp$ functionality has been
implemented in a number of different finite element packages (see, for
-example, the list of references cited in the @ref hp_paper "hp paper").
+example, the list of references cited in the @ref hp_paper "hp paper").
However, by and large, most of these packages have implemented it only
for (i) the 2d case, and/or (ii) the discontinuous Galerkin
method. The latter is a significant simplification because
the resulting complexity. In particular, it handles computing the
constraints (similar to hanging node constraints) of elements of
different degree meeting at a face or edge. The many algorithmic and
-data structure techniques necessary for this are described in the
+data structure techniques necessary for this are described in the
@ref hp_paper "hp paper" for those interested in such detail.
We hope that providing such a general implementation will help explore
should not even use the same quadrature object for all cells, but rather
higher order quadrature formulas for cells where we use higher order finite
elements. Similarly, we may want to use higher order mappings on such cells as
-well.
+well.
To facilitate these considerations, deal.II has a class hp::FEValues that does
what we need in the current context. The difference is that instead of a
@code
hp::FEValues<dim> hp_fe_values (mapping_collection,
fe_collection,
- quadrature_collection,
+ quadrature_collection,
update_values | update_gradients |
update_q_points | update_JxW_values);
    hp_fe_values.reinit (cell,
                         cell->active_fe_index(),
                         cell->active_fe_index(),
                         cell->active_fe_index());
-
+
const FEValues<dim> &fe_values = hp_fe_values.get_present_fe_values ();
... // assemble local contributions and copy them into global object
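
    // (Sketch, not part of the original snippet: the collection
    // objects used above might be filled with one finite element and
    // one matching quadrature rule per polynomial degree, e.g.
    //
    //   hp::FECollection<dim> fe_collection;
    //   hp::QCollection<dim>  quadrature_collection;
    //   for (unsigned int degree=1; degree<=5; ++degree)
    //     {
    //       fe_collection.push_back (FE_Q<dim>(degree));
    //       quadrature_collection.push_back (QGauss<dim>(degree+1));
    //     }
    // )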
which are the cells where the error is largest, and then refine them. In many
of the other tutorial programs, we use the KellyErrorEstimator class to get an
indication of the size of the error on a cell, although we also discuss more
-complicated strategies in some programs, most importantly in step-14.
+complicated strategies in some programs, most importantly in step-14.
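A typical invocation, as used in several of the tutorial programs,
looks like the following sketch (the face quadrature degree is
illustrative, and the empty function map indicates that no Neumann
boundary conditions are prescribed):
@code
Vector<float> estimated_error_per_cell (triangulation.n_active_cells());
KellyErrorEstimator<dim>::estimate (dof_handler,
                                    QGauss<dim-1>(3),
                                    typename FunctionMap<dim>::type(),
                                    solution,
                                    estimated_error_per_cell);
@endcode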
In any case, as long as the decision is only "refine this cell" or "do not
refine this cell", the actual refinement step is not particularly
in 3d, etc., and $k_x,k_y,k_z=0,\pi,2\pi,3\pi,\ldots$. If we re-compose $\hat u$
from $\hat U$ using the formula
@f[
- \hat u(\hat{\bf x})
+ \hat u(\hat{\bf x})
= \frac 1{(2\pi)^{d/2}} \sum_{\bf k} e^{-i {\bf k}\cdot \hat{\bf x}} \hat U_{\bf k},
@f]
then it becomes clear that we can write the $H^s$ norm of $\hat u$ as
@f[
  \|\hat u\|^2_{H^s(\hat K)}
  =
  \int_{\hat K}
  \left|
    \frac 1{(2\pi)^{d/2}}
    \sum_{\bf k} |{\bf k}|^s e^{-i{\bf k}\cdot \hat{\bf x}} \hat U_{\bf k}
\right|^2 \; d\hat{\bf x}
=
- \sum_{\bf k}
+ \sum_{\bf k}
|{\bf k}|^{2s}
|\hat U_{\bf k}|^2.
@f]
$2\pi r\; dr$. Consequently, it is no longer $|{\bf k}|^{2s}|\hat
U_{\bf k}|^2$ that has to decay as ${\cal O}(|{\bf k}|^{-1-\epsilon})$, but
it is in fact $|{\bf k}|^{2s}|\hat U_{\bf k}|^2 |{\bf k}|^{d-1}$. A
-comparison of exponents yields the result.)
+comparison of exponents yields the result.)
We can turn this around: Assume we are given a function $\hat u$ of unknown
smoothness. Let us compute its Fourier coefficients $\hat U_{\bf k}$
and see how fast they decay. Using the expansion of $\hat u$ in terms
of the shape functions, $\hat u(\hat{\bf x})=\sum_i u_i \hat\varphi_i(\hat{\bf x})$,
we see that we can compute the coefficient $\hat U_{\bf k}$ as
@f[
\hat U_{\bf k}
- = \frac 1{(2\pi)^{d/2}}
+ = \frac 1{(2\pi)^{d/2}}
\sum_{i=0}^{\textrm{\tiny dofs per cell}}
\left[\int_{\hat K} e^{i {\bf k}\cdot \hat{\bf x}} \hat \varphi_i(\hat{\bf x})
d\hat{\bf x} \right] u_i,
@f]
with the matrix
@f[
{\cal F}_{{\bf k},j}
- = \frac 1{(2\pi)^{d/2}}
+ = \frac 1{(2\pi)^{d/2}}
\int_{\hat K} e^{i {\bf k}\cdot \hat{\bf x}} \hat \varphi_j(\hat{\bf x}) d\hat{\bf x}.
@f]
This matrix is easily computed for a given number of shape functions
and Fourier modes. Given the coefficients $\hat U_{\bf k}$, the decay
exponent $\mu$ can then be estimated with a least-squares fit
like this:
@f[
\min_{\alpha,\mu}
- Q(\alpha,\mu) =
+ Q(\alpha,\mu) =
\frac 12 \sum_{{\bf k}, |{\bf k}|\le N}
\left( \ln |\hat U_{\bf k}| - \ln (\alpha |{\bf k}|^{-\mu})\right)^2.
@f]
Using the usual facts about logarithms, we see that this yields the
-problem
+problem
@f[
\min_{\beta,\mu}
- Q(\beta,\mu) =
+ Q(\beta,\mu) =
\frac 12 \sum_{{\bf k}, |{\bf k}|\le N}
\left( \ln |\hat U_{\bf k}| - \beta + \mu \ln |{\bf k}|\right)^2,
@f]
@f[
\left(\begin{array}{cc}
\sum_{{\bf k}, |{\bf k}|\le N} 1 &
- \sum_{{\bf k}, |{\bf k}|\le N} \ln |{\bf k}|
+ \sum_{{\bf k}, |{\bf k}|\le N} \ln |{\bf k}|
\\
\sum_{{\bf k}, |{\bf k}|\le N} \ln |{\bf k}| &
- \sum_{{\bf k}, |{\bf k}|\le N} (\ln |{\bf k}|)^2
+ \sum_{{\bf k}, |{\bf k}|\le N} (\ln |{\bf k}|)^2
\end{array}\right)
\left(\begin{array}{c}
  \beta \\ -\mu
  \end{array}\right)
  =
\left(\begin{array}{c}
\sum_{{\bf k}, |{\bf k}|\le N} \ln |\hat U_{{\bf k}}|
\\
- \sum_{{\bf k}, |{\bf k}|\le N} \ln |\hat U_{{\bf k}}| \ln |{\bf k}|
+ \sum_{{\bf k}, |{\bf k}|\le N} \ln |\hat U_{{\bf k}}| \ln |{\bf k}|
\end{array}\right)
@f]
This linear system is readily inverted to yield
@f[
- \beta =
+ \beta =
\frac 1{\left(\sum_{{\bf k}, |{\bf k}|\le N} 1\right)
\left(\sum_{{\bf k}, |{\bf k}|\le N} (\ln |{\bf k}|)^2\right)
-\left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |{\bf k}|\right)^2}
  \left(
  \left(\sum_{{\bf k}, |{\bf k}|\le N} (\ln |{\bf k}|)^2\right)
  \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |\hat U_{{\bf k}}|\right)
  -
  \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |{\bf k}|\right)
  \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |\hat U_{{\bf k}}| \ln |{\bf k}|\right)
  \right)
@f]
and
@f[
- \mu =
+ \mu =
\frac 1{\left(\sum_{{\bf k}, |{\bf k}|\le N} 1\right)
\left(\sum_{{\bf k}, |{\bf k}|\le N} (\ln |{\bf k}|)^2\right)
  -\left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |{\bf k}|\right)^2}
  \left(
  \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |{\bf k}|\right)
  \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |\hat U_{{\bf k}}|\right)
  -
  \left(\sum_{{\bf k}, |{\bf k}|\le N} 1\right)
  \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |\hat U_{{\bf k}}| \ln |{\bf k}|\right)
  \right)
@f]
regularity, in order to keep numerical efforts low. Consequently, instead of
using the formula
@f[
- \mu =
+ \mu =
\frac 1{\left(\sum_{{\bf k}, |{\bf k}|\le N} 1\right)
\left(\sum_{{\bf k}, |{\bf k}|\le N} (\ln |{\bf k}|)^2\right)
  -\left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |{\bf k}|\right)^2}
  \left(
  \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |{\bf k}|\right)
  \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |\hat U_{{\bf k}}|\right)
  -
  \left(\sum_{{\bf k}, |{\bf k}|\le N} 1\right)
  \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |\hat U_{{\bf k}}| \ln |{\bf k}|\right)
  \right)
@f]
as is, we only keep, for each frequency magnitude, the largest
coefficient among all $\hat U_{{\bf k}'}$ with $|{\bf k}'|=|{\bf k}|$,
i.e. we replace every sum by
@f[
\sum_{{\bf k}, |{\bf k}|\le N}
\longrightarrow
- \sum_{{{\bf k}, |{\bf k}|\le N} \atop {|\hat U_{{\bf k}}| \ge |\hat U_{{\bf k}'}|
- \ \textrm{for all}\ {\bf k}'\ \textrm{with}\ |{\bf k}'|=|{\bf k}|}}
+ \sum_{\footnotesize \begin{matrix}{{\bf k}, |{\bf k}|\le N} \\ {|\hat U_{{\bf k}}| \ge |\hat U_{{\bf k}'}|
+ \ \textrm{for all}\ {\bf k}'\ \textrm{with}\ |{\bf k}'|=|{\bf k}|}\end{matrix}}
@f]
This is the form we will implement in the program.
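For illustration, the small least-squares problem can be solved
directly using the formulas above. The following C++ sketch assumes
that vectors <code>ln_k</code> and <code>ln_U</code> hold
$\ln |{\bf k}|$ and $\ln |\hat U_{\bf k}|$ for the retained
coefficients (the names are hypothetical, not the program's actual
variables):
@code
double sum_1 = 0, sum_lnk = 0, sum_lnk2 = 0,
       sum_lnU = 0, sum_lnk_lnU = 0;
for (unsigned int i=0; i<ln_k.size(); ++i)
  {
    sum_1       += 1;
    sum_lnk     += ln_k[i];
    sum_lnk2    += ln_k[i] * ln_k[i];
    sum_lnU     += ln_U[i];
    sum_lnk_lnU += ln_k[i] * ln_U[i];
  }
// solve the 2x2 normal equations for the decay exponent mu:
const double det = sum_1 * sum_lnk2 - sum_lnk * sum_lnk;
const double mu  = (sum_lnk * sum_lnU - sum_1 * sum_lnk_lnU) / det;
@endcode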
reference cell <i>to the Fourier frequencies on the real cell $|\vec
k|h$</i>, where $h$ is the norm of the transformation operator (i.e. something
like the diameter of the cell). In other words, we would have to minimize the
-sum of squares of the terms
+sum of squares of the terms
@f[
  \ln |\hat U_{{\bf k}}| - \beta + \mu \ln (|{\bf k}|h)
@f]
instead. However, using fundamental properties of the logarithm, this is
-simply equivalent to minimizing
+simply equivalent to minimizing
@f[
\ln |\hat U_{{\bf k}}| - (\beta - \mu \ln h) + \mu \ln (|{\bf k}|).
@f]
rows. At the same time, there are areas where we use low polynomial
degree and therefore matrix rows with relatively few nonzero
entries. Consequently, allocating the sparsity pattern for these matrices is a
-challenge.
+challenge.
Most programs built on deal.II use the DoFTools::make_sparsity_pattern
function to allocate the sparsity pattern of a matrix, and later add a few
over-allocate memory. Typical code doing this is shown in the documentation of
the CompressedSparsityPattern class. This solution is slower than directly
building a SparsityPattern object, but only uses as much memory as is really
-necessary.
+necessary.
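Spelled out, this intermediate-format approach looks something like the
following sketch (the <code>constraints</code> object is the
ConstraintMatrix discussed below):
@code
CompressedSparsityPattern csp (dof_handler.n_dofs());
DoFTools::make_sparsity_pattern (dof_handler, csp);
constraints.condense (csp);
sparsity_pattern.copy_from (csp);
system_matrix.reinit (sparsity_pattern);
@endcode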
As it now turns out, the storage format used in the
CompressedSparsityPattern class is not very good for matrices with
ConstraintMatrix that stores the constraints, and then "condensing" away these
degrees of freedom first from the sparsity pattern. The same scheme is used
for the matrix and right hand side, which are also both first built then
-condensed.
+condensed.
Dealing with sparsity patterns and matrices in this way turns out to be
inefficient because the effort necessary is at least ${\cal O}(N \log N)$ in
course only have algorithms that are linear in the number of unknowns. The
solution to this problem is to use the ConstraintMatrix object already at the
time of creating the sparsity pattern, or while copying local contributions
-into the global matrix object (or global vector).
+into the global matrix object (or global vector).
So, instead of the code snippet (taken from step-6)
@code
for (unsigned int i=0; i<dofs_per_cell; ++i)
  {
    for (unsigned int j=0; j<dofs_per_cell; ++j)
system_matrix.add (local_dof_indices[i],
local_dof_indices[j],
- cell_matrix(i,j));
+ cell_matrix(i,j));
system_rhs(local_dof_indices[i]) += cell_rhs(i);
}
}
@endcode
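one would hand the local contributions to the ConstraintMatrix, which
resolves constrained degrees of freedom on the fly while copying into
the global objects. A sketch, reusing the names of the snippet above
(with <code>constraints</code> assumed to be the ConstraintMatrix):
@code
constraints.distribute_local_to_global (cell_matrix, cell_rhs,
                                        local_dof_indices,
                                        system_matrix, system_rhs);
@endcode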
A preprint that mostly matches the final version of the paper is
available <a target="_top"
-href="http://iamcs.tamu.edu/file_dl.php?type=preprint&preprint_id=19">here</a>.
+href="http://iamcs.tamu.edu/file_dl.php?type=preprint&preprint_id=19">here</a>.
</i>
<br>
nuclear reactor, neutrons are speeding around at different energies, get
absorbed or scattered, or start a new fission
event. If viewed at long enough length scales, the movement of neutrons can be
-considered a diffusion process.
+considered a diffusion process.
A mathematical description of this would group neutrons into energy bins, and
consider the balance equations for the neutron fluxes in each of these
groups. The balance is affected by the following processes:
<ul>
<li> Diffusion $\nabla \cdot(D_g(x) \nabla \phi_g(x,t))$. Here, $D_g$ is the
- (spatially variable) diffusion coefficient.
+ (spatially variable) diffusion coefficient.
<li> Absorption $\Sigma_{r,g}(x)\phi_g(x,t)$ (note the
negative sign). The coefficient $\Sigma_{r,g}$ is called the <i>removal
cross section</i>.
-<li> Nuclear fission $\chi_g\sum_{g'=1}^G\nu\Sigma_{f,g'}(x)\phi_{g'}(x,t)$.
+<li> Nuclear fission $\chi_g\sum_{g'=1}^G\nu\Sigma_{f,g'}(x)\phi_{g'}(x,t)$.
The production of neutrons of energy $g$ is
proportional to the flux of neutrons of energy $g'$ times the
probability $\Sigma_{f,g'}$ that neutrons of energy $g'$ cause a fission
$\chi_g$ the <i>fission spectrum</i>. We will denote the term
$\chi_g\nu\Sigma_{f,g'}$ as the <i>fission distribution cross
section</i> in the program.
-<li> Scattering $\sum_{g'\ne g}\Sigma_{s,g'\to g}(x)\phi_{g'}(x,t)$
+<li> Scattering $\sum_{g'\ne g}\Sigma_{s,g'\to g}(x)\phi_{g'}(x,t)$
of neutrons of energy $g'$ producing neutrons
of energy $g$. $\Sigma_{s,g'\to g}$ is called the <i>scattering cross
section</i>. The case of elastic, in-group scattering $g'=g$ exists, too, but
continuous spectrum of neutron energies into many energy groups, often up to
100. However, if neutron energy spectra are known well enough for some type of
reactor (for example Pressurized Water Reactors, PWR), it is possible to obtain
-satisfactory results with only 2 energy groups.
+satisfactory results with only 2 energy groups.
In this tutorial program, we provide the structure to
compute with as many energy groups as desired. However, to keep computing
times moderate, we only consider two energy groups here,
i.e. $g=1,2$. We do, however, consider a realistic situation by assuming that
the coefficients are not constant, but rather depend on the materials that are
assembled into reactor fuel assemblies in rather complicated ways (see
-below).
+below).
<h3>The eigenvalue problem</h3>
neutrons created in fission re-enter the fission cycle. Nevertheless, control
rods in nuclear reactors absorbing neutrons -- and therefore reducing
$k_{\mathrm{eff}}$ -- are designed in such a way that they are all the way in
-the reactor in at most 2 seconds.
+the reactor in at most 2 seconds.
One therefore has on the order of 10-60 seconds to regulate the nuclear reaction
if $k_{\mathrm{eff}}$ should be larger than one for some time, as indicated by
<ol>
<li> Initialize $\phi_g$ and $k_{\mathrm{eff}}$ with $\phi_g^{(0)}$
- and $k_{\mathrm{eff}}^{(0)}$ and let $n=1$.
+ and $k_{\mathrm{eff}}^{(0)}$ and let $n=1$.
<li> Define the so-called <i>fission source</i> by
@f{eqnarray*}
  s_f^{(n-1)}(x)
  =
  \frac{1}{k_{\mathrm{eff}}^{(n-1)}}
  \sum_{g'=1}^G
  \nu\Sigma_{f,g'}(x)
  \phi_{g'}^{(n-1)}(x).
@f}
<li> Then solve for all group fluxes $\phi_g^{(n)}$ using
@f{eqnarray*}
-\nabla \cdot D_g\nabla \phi_g^{(n)}
+
\Sigma_{r,g}\phi_g^{(n)}
- =
+ =
\chi_g s_f^{(n-1)}
- +
+ +
\sum_{g'< g} \Sigma_{s,g'\to g} \phi_{g'}^{(n)}
+
  \sum_{g'> g}\Sigma_{s,g'\to g}\phi_{g'}^{(n-1)}.
@f}
When adapting the meshes, we refine those cells $K$ of triangulation
${\cal T}_g$ for which the error indicator $\eta_{g,K}$ satisfies
@f{eqnarray*}
  \frac{\eta_{g,K}}{\|\phi_g\|_\infty}
>
\alpha_1
- \displaystyle{\max_{{1\le g\le G \atop K\in {\cal T}_g}}
+ \displaystyle{\max_{\footnotesize \begin{matrix}1\le g\le G \\ K\in {\cal T}_g\end{matrix}}
\frac{\eta_{g,K}}{\|\phi_g\|_\infty}}
@f}
and coarsen the cells where
\frac{\eta_{g,K}}{\|\phi_g\|_\infty}
<
\alpha_2
- \displaystyle{\max_{{1\le g\le G \choose K\in {\cal T}_g}}
+ \displaystyle{\max_{\footnotesize \begin{matrix}1\le g\le G \\ K\in {\cal T}_g\end{matrix}}
\frac{\eta_{g,K}}{\|\phi_g\|_\infty}}.
@f}
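In code, this flagging strategy might look as follows. The sketch
assumes that <code>error_indicators</code> holds the scaled indicators
$\eta_{g,K}/\|\phi_g\|_\infty$ for the current group and
<code>max_indicator</code> the maximum over all groups and cells (all
names are illustrative):
@code
const double threshold_refine  = alpha_1 * max_indicator;
const double threshold_coarsen = alpha_2 * max_indicator;

unsigned int index = 0;
for (typename Triangulation<dim>::active_cell_iterator
       cell = triangulation.begin_active();
     cell != triangulation.end(); ++cell, ++index)
  if (error_indicators(index) > threshold_refine)
    cell->set_refine_flag ();
  else if (error_indicators(index) < threshold_coarsen)
    cell->set_coarsen_flag ();
@endcode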
We chose $\alpha_1=0.3$ and $\alpha_2=0.01$ in the code. Note that this will,
F_i = \int_\Omega f(x) \varphi_g^i(x) \phi_{g'}(x) \ dx,
@f}
where $f(x)$ is one of the coefficient functions $\Sigma_{s,g'\to g}$ or
-$\nu\chi_g\Sigma_{f,g'}$ used in the right hand side
+$\nu\chi_g\Sigma_{f,g'}$ used in the right hand side
of the eigenvalue equation. The difficulty now is that $\phi_{g'}$ is defined on
the mesh for energy group $g'$, i.e. it can be expanded as
$\phi_{g'}(x)=\sum_j\phi_{g'}^j \varphi_{g'}^j(x)$, with basis functions
$\varphi_{g'}^j(x)$ defined on mesh $g'$. The contribution to the right hand
side can therefore be written as
@f{eqnarray*}
- F_i = \sum_j \left\{\int_\Omega f(x) \varphi_g^i(x) \varphi_{g'}^j(x)
- \ dx \right\} \phi_{g'}^j ,
+ F_i = \sum_j \left\{\int_\Omega f(x) \varphi_g^i(x) \varphi_{g'}^j(x)
+ \ dx \right\} \phi_{g'}^j ,
@f}
On the other hand, the test functions $\varphi_g^i(x)$ are defined on mesh
$g$. This means that we can't just split the integral over $\Omega$ into integrals
With this, we can write the above integral as follows:
@f{eqnarray*}
- F_i
- =
+ F_i
+ =
\sum_{K \in {\cal T}_g \cap {\cal T}_{g'}}
- \sum_j \left\{\int_K f(x) \varphi_g^i(x) \varphi_{g'}^j(x)
+ \sum_j \left\{\int_K f(x) \varphi_g^i(x) \varphi_{g'}^j(x)
\ dx \right\} \phi_{g'}^j.
@f}
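The set of cells ${\cal T}_g \cap {\cal T}_{g'}$ over which this sum
runs can be obtained with GridTools::get_finest_common_cells, which
returns pairs of matching cell iterators into the two meshes. A sketch
(the DoFHandler names are illustrative):
@code
typedef
  std::list<std::pair<typename DoFHandler<dim>::cell_iterator,
                      typename DoFHandler<dim>::cell_iterator> >
  CellList;

const CellList cell_list
  = GridTools::get_finest_common_cells (dof_handler_g,
                                        dof_handler_gprime);
@endcode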
In the code, we
@f{eqnarray*}
F_i|_K
&=&
- \left\{ \int_K f(x) \varphi_g^i(x) \varphi_{g'}^j(x)
- \ dx \right\} \phi_{g'}^j
+ \left\{ \int_K f(x) \varphi_g^i(x) \varphi_{g'}^j(x)
+ \ dx \right\} \phi_{g'}^j
\\
&=&
\left\{
\sum_{0\le c<2^{\texttt{dim}}}
- B_c^{il} \int_{K_c} f(x) \varphi_{g'}^l(x) \varphi_{g'}^j(x)
+ B_c^{il} \int_{K_c} f(x) \varphi_{g'}^l(x) \varphi_{g'}^j(x)
\ dx \right\} \phi_{g'}^j.
@f}
In matrix notation, this can be written as
@f{eqnarray*}
  F|_K = \sum_{0\le c<2^{\texttt{dim}}} B_c M_{K_c} \Phi_{g'},
@f}
where $M_{K_c}^{lj}=\int_{K_c} f(x) \varphi_{g'}^l(x) \varphi_{g'}^j(x)$ is
the weighted mass matrix on child $c$ of cell $K$.
-
+
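The matrices $B_c$ need not be computed by hand: deal.II stores them as
the embedding (prolongation) matrices of each finite element, so that
they can be obtained as in the following sketch (<code>fe</code>
denotes the finite element in use and <code>c</code> the child number):
@code
const FullMatrix<double> &B_c = fe.get_prolongation_matrix (c);
@endcode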
The next question is what happens if a child $K_c$ of $K$ is not
active. Then, we have to apply the process recursively, i.e. we have to
interpolate the basis functions $\varphi_g^i$ onto child $K_c$ of $K$, then
@f{eqnarray*}
F_i|_K
&=&
- \left\{ \int_K f(x) \varphi_g^i(x) \varphi_{g'}^j(x)
- \ dx \right\} \phi_{g'}^j
+ \left\{ \int_K f(x) \varphi_g^i(x) \varphi_{g'}^j(x)
+ \ dx \right\} \phi_{g'}^j
\\
&=&
\left\{
\sum_{0\le c<2^{\texttt{dim}}}
- \int_{K_c} f(x) \varphi_g^i(x) B_c^{jl} \varphi_{g}^l(x)
+ \int_{K_c} f(x) \varphi_g^i(x) B_c^{jl} \varphi_{g}^l(x)
\ dx \right\} \phi_{g'}^j.
@f}
In matrix notation, this expression now reads as
@f{eqnarray*}
  F|_K = \sum_{0\le c<2^{\texttt{dim}}} M_{K_c} B_c^T \Phi_{g'},
@f}
etc. In other words, the process works in exactly the same way as before,
except that we have to take the transpose of the prolongation matrices and
- need to multiply it to the mass matrix from the other side.
+ need to multiply the mass matrix by it from the other side.
</ol>