* using a convolution kernel with compact support of radius epsilon.
*
* Given two non-matching triangulations, representing the domains $\Omega^0$
- * and $\Omega^1$, both embedded in $R^d$, and two finite element spaces
- * $V^0(\Omega^0) = \text{span}\{v_i\}_{i=0}^n$ and $V^1(\Omega^1) =
+ * and $\Omega^1$, both embedded in $\mathbb{R}^d$, and two finite element
+ * spaces $V^0(\Omega^0) = \text{span}\{v_i\}_{i=0}^n$ and $V^1(\Omega^1) =
* \text{span}\{w_\alpha\}_{\alpha=0}^m$, compute the sparsity pattern that
* would be necessary to assemble the matrix
*
* \f[
* M_{i\alpha} \dealcoloneq \int_{\Omega^0} \int_{\Omega^1}
* v_i(x) \, K^\epsilon(x-y) \, w_\alpha(y) \, dx \, dy
* \f]
*
* where $V^0(\Omega^0)$ is the finite element space associated with the
- * `dh0` passed to this function (or part of it, if specified in
- * `comps0`), while $V^1(\Omega^1)$ is the finite element space associated
- * with the `dh1` passed to this function (or part of it, if specified
- * in `comps1`), and $K^\epsilon$ is a function with compact support included
- * in a ball of radius $\epsilon$, derived from CutOffFunctionBase.
- *
- * The `comps0` and `comps1` masks are assumed to be ordered in
- * the same way: the first component of `comps0` will couple with the
- * first component of `comps1`, the second with the second, and so
- * on. If one of the two masks has more non-zero than the other, then the
- * excess components will be ignored.
+ * @p dh0 passed to this function (or part of it, if specified in
+ * @p comps0), while $V^1(\Omega^1)$ is the finite element space associated
+ * with the @p dh1 passed to this function (or part of it, if specified
+ * in @p comps1), and $K^\epsilon$ is a function derived from
+ * CutOffFunctionBase with compact support included in a ball of radius
+ * $\epsilon$.
+ *
+ * The @p comps0 and @p comps1 masks are assumed to be ordered in
+ * the same way: the first component of @p comps0 will couple with the
+ * first component of @p comps1, the second with the second, and so
+ * on. If one of the two masks has more active components than the other, then
+ * the excess components will be ignored.
*
* For both spaces, it is possible to specify a custom Mapping, which
* defaults to StaticMappingQ1 for both.
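As a concrete illustration of the interface described above, here is a minimal, self-contained sketch of building such a sparsity pattern for two overlapping square grids. The argument order of the call follows the one used in the test further down; the surrounding setup (grid sizes, polynomial degree, epsilon, quadrature order, and the function name coupling_sparsity_sketch) is illustrative only, not taken from the patch.

#include <deal.II/base/quadrature_lib.h>

#include <deal.II/dofs/dof_handler.h>

#include <deal.II/fe/fe_q.h>

#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools_cache.h>
#include <deal.II/grid/tria.h>

#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/sparsity_pattern.h>

#include <deal.II/non_matching/coupling.h>

using namespace dealii;

void
coupling_sparsity_sketch()
{
  // An ambient grid and a smaller, non-matching embedded grid.
  Triangulation<2> tria0;
  Triangulation<2> tria1;
  GridGenerator::hyper_cube(tria0, -1., 1.);
  GridGenerator::hyper_cube(tria1, -0.4, 0.4);
  tria0.refine_global(4);
  tria1.refine_global(3);

  FE_Q<2>       fe0(1);
  FE_Q<2>       fe1(1);
  DoFHandler<2> dh0(tria0);
  DoFHandler<2> dh1(tria1);
  dh0.distribute_dofs(fe0);
  dh1.distribute_dofs(fe1);

  // The caches expose, among other things, the rtrees of cell bounding
  // boxes that the coupling functions query.
  GridTools::Cache<2> cache0(tria0);
  GridTools::Cache<2> cache1(tria1);

  const double              epsilon = 0.1; // radius of the kernel support
  const QGauss<2>           quad1(2);      // quadrature on the dh1 side
  AffineConstraints<double> constraints0;
  constraints0.close();

  DynamicSparsityPattern dsp(dh0.n_dofs(), dh1.n_dofs());
  NonMatching::create_coupling_sparsity_pattern(
    epsilon, cache0, cache1, dh0, dh1, quad1, dsp, constraints0);

  SparsityPattern sparsity;
  sparsity.copy_from(dsp);
}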
* using a convolution kernel with compact support.
*
* Given two non-matching triangulations, representing the domains
- * $\Omega^0$ and $\Omega^1$, both embedded in $R^d$, and two finite element
- * spaces $V^0(\Omega^0) = \text{span}\{v_i\}_{i=0}^n$ and $V^1(\Omega^1) =
- * \text{span}\{w_\alpha\}_{\alpha=0}^m$, compute the matrix
+ * $\Omega^0$ and $\Omega^1$, both embedded in $\mathbb{R}^d$, and two finite
+ * element spaces $V^0(\Omega^0) = \text{span}\{v_i\}_{i=0}^n$ and
+ * $V^1(\Omega^1) = \text{span}\{w_\alpha\}_{\alpha=0}^m$, compute the matrix
*
* \f[
* M_{i\alpha} \dealcoloneq \int_{\Omega^0} \int_{\Omega^1}
* v_i(x) \, K^\epsilon(x-y) \, w_\alpha(y) \, dx \, dy
* \f]
*
* where $V^0(\Omega^0)$ is the finite element space associated with the
- * `dh0` passed to this function (or part of it, if specified in
- * `comps0`), while $V^1(\Omega^1)$ is the finite element space associated
- * with the `dh1` passed to this function (or part of it, if specified
- * in `comps1`), and $K^\epsilon$ is a function with compact support included,
- * in a ball of radius $\epsilon$, derived from CutOffFunctionBase.
+ * @p dh0 passed to this function (or part of it, if specified in
+ * @p comps0), while $V^1(\Omega^1)$ is the finite element space associated
+ * with the @p dh1 passed to this function (or part of it, if specified
+ * in @p comps1), and $K^\epsilon$ is a function derived from
+ * CutOffFunctionBase with compact support included in a ball of radius
+ * $\epsilon$.
*
* The corresponding sparsity patterns can be computed by calling the
* create_coupling_sparsity_pattern() function.
*
- * The `comps0` and `comps1` masks are assumed to be ordered in
- * the same way: the first component of `comps0` will couple with the
- * first component of `comps1`, the second with the second, and so
- * on. If one of the two masks has more non-zero than the other, then the
- * excess components will be ignored.
+ * The @p comps0 and @p comps1 masks are assumed to be ordered in
+ * the same way: the first component of @p comps0 will couple with the
+ * first component of @p comps1, the second with the second, and so
+ * on. If one of the two masks has more active components than the other, then
+ * the excess components will be ignored.
*
* For both spaces, it is possible to specify a custom Mapping, which
* defaults to StaticMappingQ1 for both.
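To complement the documentation above, the following sketch shows how the matrix $M$ could be assembled in user code. It is not taken from the patch: the helper name assemble_coupling is hypothetical, the kernel is an arbitrary choice among the classes derived from CutOffFunctionBase, the header locations are assumed, and the exact argument order of create_coupling_mass_matrix is an assumption that should be checked against the declaration in non_matching/coupling.h.

#include <deal.II/base/function_lib.h> // cut-off functions (header assumed)
#include <deal.II/base/quadrature_lib.h>

#include <deal.II/dofs/dof_handler.h>

#include <deal.II/grid/grid_tools_cache.h>

#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/sparse_matrix.h>

#include <deal.II/non_matching/coupling.h>

using namespace dealii;

template <int dim0, int dim1, int spacedim>
void
assemble_coupling(const GridTools::Cache<dim0, spacedim> &cache0,
                  const GridTools::Cache<dim1, spacedim> &cache1,
                  const DoFHandler<dim0, spacedim> &      dh0,
                  const DoFHandler<dim1, spacedim> &      dh1,
                  const AffineConstraints<double> &       constraints0,
                  SparseMatrix<double> &                  coupling)
{
  // Any class derived from CutOffFunctionBase can play the role of
  // K^epsilon; a C^1 cut-off function is used here as an example.
  Functions::CutOffFunctionC1<spacedim> kernel;

  const double       epsilon = 0.1;
  const QGauss<dim0> quad0(2);
  const QGauss<dim1> quad1(2);

  // Assumed argument order: kernel, epsilon, the two caches, the two
  // DoFHandlers, the two quadratures, the matrix, and the constraints
  // acting on the dh0 side.
  NonMatching::create_coupling_mass_matrix(kernel,
                                           epsilon,
                                           cache0,
                                           cache1,
                                           dh0,
                                           dh1,
                                           quad0,
                                           quad1,
                                           coupling,
                                           constraints0);
}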
Assert(!zero_is_distributed || !one_is_distributed, ExcNotImplemented());
// If we can loop on both, we decide where to make the outer loop according
- // to the size of triangulation. The reasoning is the following:
- // - Access to the tree: log(N)
- // - We compute intersection for each of the outer loop cells (M)
+ // to the size of the triangulation. The reasoning is the following:
+ // - cost for accessing the tree: log(N)
+ // - cost for computing the intersection for each of the outer loop cells: M
// Total cost (besides the setup) is: M log(N)
- // If we can, make sure M is the smallest number
+ // If we can, make sure M is the smaller number of the two.
const bool outer_loop_on_zero =
(zero_is_distributed && !one_is_distributed) ||
(dh1.get_triangulation().n_active_cells() >
if (outer_loop_on_zero)
{
- std::cout << "Looping on zero." << std::endl;
-
Assert(one_is_distributed == false, ExcInternalError());
const auto &tree1 = cache1.get_cell_bounding_boxes_rtree();
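The cost argument in the comment above becomes more tangible with a small sketch of the inner query. This is not literal code from the patch; the names box0 and cell1 are illustrative, but the pattern of enlarging a cell's bounding box by epsilon and querying the other grid's rtree is what produces the M log(N) estimate.

namespace bgi = boost::geometry::index;

for (const auto &cell0 : dh0.active_cell_iterators())
  {
    // Grow the bounding box of the current cell by the kernel radius, so
    // that every cell of the other grid within distance epsilon becomes a
    // candidate for coupling.
    BoundingBox<spacedim> box0 = cell0->bounding_box();
    box0.extend(epsilon);

    // Each rtree query costs O(log N); it is issued once per outer-loop
    // cell, hence the total M log(N).
    for (auto it = tree1.qbegin(bgi::intersects(box0)); it != tree1.qend();
         ++it)
      {
        const auto &cell1 = it->second; // a cell of the other triangulation
        // ... couple the degrees of freedom of cell0 and cell1 ...
      }
  }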
Assert(!zero_is_distributed || !one_is_distributed, ExcNotImplemented());
// If we can loop on both, we decide where to make the outer loop according
- // to the size of triangulation. The reasoning is the following:
- // - Access to the tree: log(N)
- // - We compute intersection for each of the outer loop cells (M)
+ // to the size of the triangulation. The reasoning is the following:
+ // - cost for accessing the tree: log(N)
+ // - cost for computing the intersection for each of the outer loop cells: M
// Total cost (besides the setup) is: M log(N)
- // If we can, make sure M is the smallest number
+ // If we can, make sure M is the smaller number of the two.
const bool outer_loop_on_zero =
(zero_is_distributed && !one_is_distributed) ||
(dh1.get_triangulation().n_active_cells() >
fe1.dofs_per_cell);
// Global to local indices
- auto p = internal::compute_components_coupling(comps0, comps1, fe0, fe1);
+ const auto p =
+ internal::compute_components_coupling(comps0, comps1, fe0, fe1);
const auto &gtl0 = p.first;
const auto &gtl1 = p.second;
using namespace dealii;
// Test that a coupling matrix can be constructed for each pair of dimension and
-// immersed dimension, and check that constants are projected correctly.
-//
-// Even when locally refined grids are used.
+// immersed dimension, and check that constants are projected correctly,
+// even when locally refined grids are used.
template <int dim0, int dim1, int spacedim>
void
{
DynamicSparsityPattern dsp(dh0.n_dofs(), dh1.n_dofs());
NonMatching::create_coupling_sparsity_pattern(
- epsilon, cache0, cache1, dh0, dh1, dsp, constraints0);
+ epsilon, cache0, cache1, dh0, dh1, quad1, dsp, constraints0);
sparsity.copy_from(dsp);
}
SparseMatrix<double> coupling(sparsity);
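Beyond the excerpt, the test presumably fills coupling via the mass-matrix function and then verifies that constants are projected correctly. One simple consistency check along those lines is sketched below as a fragment to place inside the test body; it is hypothetical, and it assumes Lagrange elements (shape functions summing to one), a kernel that integrates to one, and an immersed domain whose $\epsilon$-neighborhood stays inside the ambient domain, in which case $1^T M 1$ approximates the measure of $\Omega^1$.

// Hypothetical check, not part of the excerpt: with shape functions that
// sum to one, 1^T M 1 = \int_{\Omega^0}\int_{\Omega^1} K^\epsilon(x-y) dx dy,
// which should be close to |\Omega^1| under the assumptions stated above.
Vector<double> ones0(dh0.n_dofs());
Vector<double> ones1(dh1.n_dofs());
ones0 = 1.;
ones1 = 1.;

Vector<double> tmp(dh0.n_dofs());
coupling.vmult(tmp, ones1);     // tmp = M * 1
const double sum = ones0 * tmp; // 1^T M 1
deallog << "1^T M 1 = " << sum << std::endl;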