// $Id$
// Version: $Name$
//
-// Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003 by the deal.II authors
+// Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004 by the deal.II authors
//
// This file is subject to QPL and may not be distributed
// without copyright and license information. Please refer
class ConstraintMatrix;
-
+//TODO: Move documentation of functions to the functions!
+//TODO: (Re)move the basic course on Sobolev spaces
/**
- * Provide a class which offers some operations on vectors. Amoung these are
- * assemblage of standard vectors, integration of the difference of a
- * finite element solution and a continuous function,
- * interpolations and projections of continuous functions to the finite
- * element space and other operations.
+ * Provide a class which offers some operations on vectors. Among
+ * these are assembling of standard vectors, integration of the
+ * difference of a finite element solution and a continuous function,
+ * interpolations and projections of continuous functions to the
+ * finite element space and other operations.
*
- * There exist two versions of almost each function. One with a
- * @ref{Mapping} argument and one without. If a code uses a mapping
- * different from @ref{MappingQ1} the functions @em{with} mapping
- * argument should be used. Code that uses only @ref{MappingQ1} may
- * also use the functions @em{without} @ref{Mapping} argument. Each of
- * these latter functions create a @ref{MappingQ1} object and just
- * call the respective functions with that object as mapping
- * argument. The functions without @ref{Mapping} argument still exist
- * to ensure backward compatibility. Nevertheless it is advised to
- * change the user's codes to store a specific @ref{Mapping} object
- * and to use the functions that take this @p{Mapping} object as
- * argument. This gives the possibility to easily extend the user
- * codes to work also on mappings of higher degree, this just by
- * exchanging @ref{MappingQ1} by, for example, a @ref{MappingQ} or
- * another @ref{Mapping} object of interest.
+ * @note There exist two versions of almost every function: one with
+ * a Mapping argument and one without. If a code uses a mapping
+ * different from MappingQ1, the functions <b>with</b> mapping
+ * argument should be used. Code that uses only MappingQ1 may also
+ * use the functions without Mapping argument. Each of these latter
+ * functions creates a MappingQ1 object and just calls the respective
+ * function with that object as mapping argument. The functions
+ * without Mapping argument still exist to ensure backward
+ * compatibility. Nevertheless it is advised to change user codes to
+ * store a specific Mapping object and to use the functions that take
+ * this Mapping object as argument. This makes it possible to extend
+ * the user codes to mappings of higher degree simply by exchanging
+ * MappingQ1 by, for example, a MappingQ or another Mapping object of
+ * interest.
*
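+ * For example (a sketch; all names are placeholders), interpolating a
+ * function <tt>f</tt> with a cubic boundary mapping instead of
+ * MappingQ1 might read:
+ * @code
+ * MappingQ<dim> mapping (3);
+ * Vector<double> interpolated (dof_handler.n_dofs());
+ * VectorTools::interpolate (mapping, dof_handler, f, interpolated);
+ * @endcode
+ *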
- * @sect3{Description of operations}
+ * @section VectorTools1 Description of operations
*
* This collection of methods offers the following operations:
- * @begin{itemize}
- * @item Interpolation: assign each degree of freedom in the vector to be
+ * <ul>
+ * <li> Interpolation: assign each degree of freedom in the vector to be
* the value of the function given as argument. This is identical to
* saying that the resulting finite element function (which is
* isomorphic to the output vector) has exact function values in all
* given function may be, taking into account that a virtual function has
* to be called.
*
- * @item Projection: compute the $L_2$-projection of the given function onto
+ * <li> Projection: compute the <i>L<sup>2</sup></i>-projection of the given function onto
* the finite element space. This is done through the solution of the
* linear system of equations $M v = f$ where $M$ is the mass matrix
* $m_{ij} = \int_\Omega \phi_i(x) \phi_j(x) dx$ and
*
* In order to get proper results, it may be necessary to treat
* boundary conditions right. Below are listed some cases where this
- * may be needed. If needed, this is done by $L_2$-projection of
+ * may be needed. If needed, this is done by <i>L<sup>2</sup></i>-projection of
* the trace of the given function onto the finite element space
* restricted to the boundary of the domain, then taking this
* information and using it to eliminate the boundary nodes from the
* mass matrix of the whole domain, using the
- * @ref{MatrixTools}@p{::apply_boundary_values} function. The
- * projection of the trace of the function to the boundary is done
- * with the @ref{VectorTools}@p{::project_boundary_values} (see
- * below) function, which is called with a map of boundary functions
- * (@ref{FunctioMap}@p{::FunctionMap}) in which all boundary
- * indicators from zero to 254 (255 is used for other purposes, see
- * the @ref{Triangulation} class documentation) point to the
- * function to be projected. The projection to the boundary takes
- * place using a second quadrature formula on the boundary given to
- * the @p{project} function. The first quadrature formula is used to
- * compute the right hand side and for numerical quadrature of the
- * mass matrix.
+ * MatrixTools::apply_boundary_values() function. The projection of
+ * the trace of the function to the boundary is done with the
+ * VectorTools::project_boundary_values() function (see below),
+ * which is called with a map of boundary functions (FunctionMap) in
+ * which all boundary indicators from zero to 254 (255 is used for
+ * other purposes, see the Triangulation class documentation) point
+ * to the function to be projected. The projection to the boundary
+ * takes place using a second quadrature formula on the boundary
+ * given to the project() function. The first quadrature formula is
+ * used to compute the right hand side and for numerical quadrature
+ * of the mass matrix.
*
- * The projection of the boundary values first, then eliminating them from
- * the global system of equations is not needed usually. It may be necessary
- * if you want to enforce special restrictions on the boundary values of the
- * projected function, for example in time dependent problems: you may want
- * to project the initial values but need consistency with the boundary
- * values for later times. Since the latter are projected onto the boundary
- * in each time step, it is necessary that we also project the boundary
- * values of the initial values, before projecting them to the whole domain.
+ * The projection of the boundary values first, then eliminating
+ * them from the global system of equations is usually not
+ * needed. It may be necessary if you want to enforce special
+ * restrictions on the boundary values of the projected function,
+ * for example in time dependent problems: you may want to project
+ * the initial values but need consistency with the boundary values
+ * for later times. Since the latter are projected onto the boundary
+ * in each time step, it is necessary that we also project the
+ * boundary values of the initial values, before projecting them to
+ * the whole domain.
*
- * Obviously, the results of the two schemes for projection are different.
- * Usually, when projecting to the boundary first, the $L_2$-norm of the
- * difference between original function and projection over the whole domain
- * will be larger (factors of five have been observed) while the $L_2$-norm
- * of the error integrated over the boundary should of course be less. The
- * reverse should also hold if no projection to the boundary is performed.
+ * Obviously, the results of the two schemes for projection are
+ * different. Usually, when projecting to the boundary first, the
+ * <i>L<sup>2</sup></i>-norm of the difference between original
+ * function and projection over the whole domain will be larger
+ * (factors of five have been observed) while the
+ * <i>L<sup>2</sup></i>-norm of the error integrated over the
+ * boundary should of course be less. The reverse should also hold
+ * if no projection to the boundary is performed.
*
- * The selection whether the projection to the boundary first is needed is
- * done with the @p{project_to_boundary_first} flag passed to the function.
- * If @p{false} is given, the additional quadrature formula for faces is
- * ignored.
+ * The selection whether the projection to the boundary first is
+ * needed is done with the <tt>project_to_boundary_first</tt> flag
+ * passed to the function. If <tt>false</tt> is given, the additional
+ * quadrature formula for faces is ignored.
*
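+ * As a sketch (all names are placeholders; the quadrature rule has to
+ * be chosen to match the finite element), a projection call might
+ * read:
+ * @code
+ * ConstraintMatrix constraints;
+ * constraints.close ();
+ * VectorTools::project (dof_handler, constraints, QGauss<dim>(3),
+ *                       initial_values, projection);
+ * @endcode
+ *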
* You should be aware of the fact that if no projection to the boundary
* is requested, a function with zero boundary values may not have zero
* The @p{project_boundary_values} function acts similar to the
* @p{interpolate_boundary_values} function, apart from the fact that it does
* not get the nodal values of boundary nodes by interpolation but rather
- * through the $L_2$-projection of the trace of the function to the boundary.
+ * through the <i>L<sup>2</sup></i>-projection of the trace of the function to the boundary.
*
* The projection takes place on all boundary parts with boundary
* indicators listed in the map (@ref{FunctionMap}@p{::FunctionMap})
* use of the wrong quadrature formula may show a significantly wrong result
* and care should be taken to choose the right formula.
*
- * The $H_1$ seminorm is the $L_2$ norm of the gradient of the
- * difference. The square of the full $H_1$ norm is the sum of the
- * square of seminorm and the square of the $L_2$ norm.
+ * The <i>H<sup>1</sup></i> seminorm is the <i>L<sup>2</sup></i>
+ * norm of the gradient of the difference. The square of the full
+ * <i>H<sup>1</sup></i> norm is the sum of the square of seminorm
+ * and the square of the <i>L<sup>2</sup></i> norm.
*
- * To get the @em{global} $L_1$ error, you have to sum up the
+ * To get the global <i>L<sup>1</sup></i> error, you have to sum up the
* entries in @p{difference}, e.g. using
- * @ref{Vector}@p{<double>::l1_norm} function. For the global $L_2$
+ * @ref{Vector}@p{<double>::l1_norm} function. For the global <i>L<sup>2</sup></i>
* difference, you have to sum up the squares of the entries and
* take the root of the sum, e.g. using
* @ref{Vector}@p{<double>::l2_norm}. These two operations
* To get the $L_\infty$ norm, take the maximum of the vector elements, e.g.
* using the @ref{Vector}@p{<double>::linfty_norm} function.
*
- * For the global $H_1$ norm and seminorm, the same rule applies as for the
- * $L_2$ norm: compute the $l_2$ norm of the cell error vector.
+ * For the global <i>H<sup>1</sup></i> norm and seminorm, the same rule applies as for the
+ * <i>L<sup>2</sup></i> norm: compute the <i>l<sub>2</sub></i> norm of the cell error vector.
- * @end{itemize}
+ * </ul>
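+ *
+ * For example, the global <i>L<sup>2</sup></i> error of a computed
+ * solution could be obtained by a sketch like the following (all
+ * names and the quadrature rule are placeholders):
+ * @code
+ * Vector<float> error_per_cell (triangulation.n_active_cells());
+ * VectorTools::integrate_difference (dof_handler, solution, exact,
+ *                                    error_per_cell, QGauss<dim>(3),
+ *                                    VectorTools::L2_norm);
+ * const double L2_error = error_per_cell.l2_norm();
+ * @endcode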
*
* All functions use the finite element given to the @ref{DoFHandler} object the last
/**
* Denote which norm/integral is
* to be computed by the
- * @p{integrate_difference}
+ * integrate_difference()
* function of this class. The
* following possibilities are
* implemented:
- * @begin{itemize}
- * @item @p{mean}: the function
- * or difference of functions
- * is integrated on each cell.
- * @item @p{L1_norm}: the
- * absolute value of the
- * function is integrated.
- * @item @p{L2_norm}: the square
- * of the function is integrated
- * and the the square root of the
- * result is computed on each
- * cell.
- * @item @p{Lp_norm}: the
- * absolute value to the pth
- * power is integrated and the
- * pth root is computed on each
- * cell. The exponent @p{p} is
- * the last parameter of the
- * function.
- * @item @p{Linfty_norm}: the
- * maximum absolute value of the
- * function.
- * @item @p{H1_seminorm}: the
- * square of the function
- * gradient is integrated on
- * each cell; afterwards the
- * root is taken of this
- * value.
- * @item @p{W1p_seminorm}: this
- * is the @p{Lp_norm} of the
- * gradient.
- * @item @p{H1_norm}: the square
- * of the function plus the
- * square of the function
- * gradient is integrated on
- * each cell; afterwards the
- * root is taken of
- * this. I.e. the square of
- * this norm is the square of
- * the @p{L2_norm} plus the
- * square of the
- * @p{H1_seminorm}.
- * @end{itemize}
- * @item @p{W1p_norm}: like
- * @p{H1_norm}, but for
- * @p{Lp_norm} instead of
- * @p{L2_norm}
*/
enum NormType {
+ /**
+ * The function or
+ * difference of functions
+ * is integrated on each
+ * cell.
+ */
mean,
+ /**
+ * The absolute value of
+ * the function is
+ * integrated.
+ */
L1_norm,
+ /**
+ * The square of the
+ * function is integrated
+ * and the square root
+ * of the result is
+ * computed on each cell.
+ */
L2_norm,
+ /**
+ * The maximum absolute
+ * value of the function.
+ */
Linfty_norm,
+ /**
+ * #L2_norm of the gradient.
+ */
H1_seminorm,
+ /**
+ * The square of this norm
+ * is the square of the
+ * #L2_norm plus the square
+ * of the #H1_seminorm.
+ */
H1_norm,
+ /**
+ * The absolute value to
+ * the <i>p</i>th power is
+ * integrated and the <i>p</i>th
+ * root is computed on each
+ * cell. The exponent
+ * <i>p</i> is the last
+ * parameter of the
+ * function.
+ */
Lp_norm,
+ /**
+ * #Lp_norm of the gradient.
+ */
W1p_seminorm,
+ /**
+ * Same as #H1_norm, but
+ * for <i>L<sup>p</sup></i>.
+ */
W1p_norm
};
/**
* Calls the @p{interpolate}
- * function, see above, with
- * @p{mapping=MappingQ1<dim>()}.
+ * function above with
+ * <tt>mapping=MappingQ1@<dim@>()</tt>.
*/
template <int dim, class VECTOR>
static void interpolate (const DoFHandler<dim> &dof,
/**
* Calls the @p{project}
* function, see above, with
- * @p{mapping=MappingQ1<dim>()}.
+ * <tt>mapping=MappingQ1@<dim@>()</tt>.
*/
template <int dim>
static void project (const DoFHandler<dim> &dof,
/**
* Calls the @p{create_right_hand_side}
* function, see above, with
- * @p{mapping=MappingQ1<dim>()}.
+ * <tt>mapping=MappingQ1@<dim@>()</tt>.
*/
template <int dim>
static void create_right_hand_side (const DoFHandler<dim> &dof,
* Calls the
* @p{create_boundary_right_hand_side}
* function, see above, with
- * @p{mapping=MappingQ1<dim>()}.
+ * <tt>mapping=MappingQ1@<dim@>()</tt>.
*/
template <int dim>
static void create_boundary_right_hand_side (const DoFHandler<dim> &dof,
* Calls the other
* @p{interpolate_boundary_values}
* function, see above, with
- * @p{mapping=MappingQ1<dim>()}.
+ * <tt>mapping=MappingQ1@<dim@>()</tt>.
*/
template <int dim>
static void interpolate_boundary_values (const DoFHandler<dim> &dof,
* Calls the other
* @p{interpolate_boundary_values}
* function, see above, with
- * @p{mapping=MappingQ1<dim>()}.
+ * <tt>mapping=MappingQ1@<dim@>()</tt>.
*/
template <int dim>
static void interpolate_boundary_values (const DoFHandler<dim> &dof,
/**
* Calls the @p{project_boundary_values}
* function, see above, with
- * @p{mapping=MappingQ1<dim>()}.
+ * <tt>mapping=MappingQ1@<dim@>()</tt>.
*/
template <int dim>
static void project_boundary_values (const DoFHandler<dim> &dof,
/**
* Calls the @p{integrate_difference}
* function, see above, with
- * @p{mapping=MappingQ1<dim>()}.
+ * <tt>mapping=MappingQ1@<dim@>()</tt>.
*/
template <int dim, class InVector, class OutVector>
static void integrate_difference (const DoFHandler<dim> &dof,
/**
* Calls the @p{compute_mean_value}
* function, see above, with
- * @p{mapping=MappingQ1<dim>()}.
+ * <tt>mapping=MappingQ1@<dim@>()</tt>.
*/
template <int dim, class InVector>
static double compute_mean_value (const DoFHandler<dim> &dof,
// $Id$
// Version: $Name$
//
-// Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003 by the deal.II authors
+// Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004 by the deal.II authors
//
// This file is subject to QPL and may not be distributed
// without copyright and license information. Please refer
* of other classes.
*
* This is for the use of
- * @p{const_iterator}s.
+ * const_iterator.
*/
template <typename number>
struct Types<number,true>
* Type of the vector
* underlying the block vector
* used in
- * @p{const_iterators}. There,
+ * const_iterator. There,
* the vector must be
* constant.
*/
/**
* Type of the block vector
* used in
- * @p{const_iterator}s. There,
+ * const_iterator. There,
* the block vector must be
* constant.
*/
* class for block vectors. Since
* we do not want to have two
* classes for non-const
- * @p{iterator}s and
- * @p{const_iterator}s, we take a
+ * iterator and
+ * const_iterator, we take a
* second template argument which
* denotes whether the vector we
* point into is a constant object
* the element pointed to.
*
* Depending on the value of
- * the @p{constness} template
+ * the <tt>constness</tt> template
* argument of this class,
* the first argument of this
* constructor is either a
- * @p{const} or non-@p{const}
+ * const or non-const
* reference.
*/
Iterator (BlockVectorType &parent,
/**
* Dereferencing operator. If
* the template argument
- * @p{constness} is @p{true},
+ * <tt>constness</tt> is <tt>true</tt>,
* then no writing to the
* result is possible, making
- * this a @p{const_iterator}.
+ * this a const_iterator.
*/
reference operator * () const;
/**
* Dereferencing operator. If
* the template argument
- * @p{constness} is @p{true},
+ * <tt>constness</tt> is <tt>true</tt>,
* then no writing to the
* result is possible, making
- * this a @p{const_iterator}.
+ * this a const_iterator.
*/
pointer operator -> () const;
reference operator [] (const difference_type d) const;
/**
- * Prefix @p{++} operator:
- * @p{++i}. This operator
+ * Prefix increment operator. This operator
* advances the iterator to
* the next element and
* returns a reference to
- * @p{*this}.
+ * <tt>*this</tt>.
*/
Iterator & operator ++ ();
/**
- * Postfix @p{++} operator:
- * @p{i++}. This operator
+ * Postfix increment
+ * operator. This operator
* advances the iterator to
* the next element and
* returns a copy of the old
Iterator operator ++ (int);
/**
- * Prefix @p{--} operator:
- * @p{--i}. This operator
+ * Prefix decrement operator. This operator
* retracts the iterator to
* the previous element and
* returns a reference to
- * @p{*this}.
+ * <tt>*this</tt>.
*/
Iterator & operator -- ();
/**
- * Postfix @p{--} operator:
- * @p{i--}. This operator
+ * Postfix decrement
+ * operator. This operator
* retracts the iterator to
* the previous element and
* returns a copy of the old
Iterator operator - (const difference_type &d) const;
/**
- * Move the iterator @p{d}
+ * Move the iterator <tt>d</tt>
* elements forward at once,
* and return the result.
*/
Iterator & operator += (const difference_type &d);
/**
- * Move the iterator @p{d}
+ * Move the iterator <tt>d</tt>
* elements backward at once,
* and return the result.
*/
* vector object to which
* this iterator
* points. Depending on the
- * value of the @p{constness}
+ * value of the <tt>constness</tt>
* template argument of this
- * class, this is a @p{const}
- * or non-@p{const} pointer.
+ * class, this is a <tt>const</tt>
+ * or non-<tt>const</tt> pointer.
*/
BlockVectorType *parent;
* A vector composed of several blocks each representing a vector of
* its own.
*
- * The @p{BlockVector} is a collection of normal LAC-@ref{Vector}s. Each of
+ * The BlockVector is a collection of normal LAC Vector objects. Each of
* the vectors inside can have a different size. The special case of a
* block vector with constant block size is supported by constructor
- * and @p{reinit} functions.
+ * and reinit() functions.
*
- * The functionality of @p{BlockVector} includes everything a
- * @p{Vector} can do, plus the access to a single @p{Vector} inside
- * the @p{BlockVector} by @p{block(i)}. It also has a complete random
- * access iterator, just as the LAC-@ref{Vector} class or the standard
- * C++ library template @p{std::vector}. Therefore, all algorithms
- * working on iterators also work with objects of this class.
+ * The functionality of BlockVector includes everything a Vector can
+ * do, plus the access to a single Vector inside the BlockVector by
+ * block(i). It also has a complete random access iterator, just as
+ * the LAC Vector class or the standard C++ library template
+ * <tt>std::vector</tt>. Therefore, all algorithms working on
+ * iterators also work with objects of this class.
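+ *
+ * For instance, all elements of a block vector can be summed with an
+ * STL algorithm (a sketch; requires the <tt>numeric</tt> header):
+ * @code
+ * BlockVector<double> v (3, 10);   // three blocks of ten elements each
+ * const double sum = std::accumulate (v.begin(), v.end(), 0.);
+ * @endcode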
*
*
- * @sect3{Accessing individual blocks, and resizing vectors}
+ * @section BlockVectorAccess Accessing individual blocks, and resizing vectors
*
* Apart from using this object as a whole, you can use each block
- * separately as a @ref{Vector}, using the @p{block} function. There
+ * separately as a Vector, using the block() function. There
* is a single caveat: if you have changed the size of one of several
- * block, you must call the function @ref{collect_sizes} of the block
+ * blocks, you must call the function collect_sizes() of the block
* vector to update its internal structures.
*
- * Warning: If you change the sizes of single blocks without calling
- * @ref{collect_sizes}, results may be unpredictable. The debug
- * version does not check consistency here for performance reasons!
- *
- * @sect3{On template instantiations}
+ * @attention If you change the sizes of single blocks without
+ * calling collect_sizes(), results may be unpredictable. The
+ * debug version does not check consistency here for performance
+ * reasons!
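+ *
+ * A sketch (sizes are arbitrary):
+ * @code
+ * BlockVector<double> v (2, 10);   // two blocks of ten elements each
+ * v.block(0).reinit (15);          // resize the first block ...
+ * v.collect_sizes ();              // ... and update internal structures
+ * @endcode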
*
- * Member functions of this class are either implemented in this file
- * or in a file of the same name with suffix ``.templates.h''. For the
- * most common combinations of the template parameters, instantiations
- * of this class are provided in a file with suffix ``.cc'' in the
- * ``source'' directory. If you need an instantiation that is not
- * listed there, you have to include this file along with the
- * corresponding ``.templates.h'' file and instantiate the respective
- * class yourself.
+ * @ref Instantiations: some (<tt>@<float@> @<double@></tt>)
*
* @author Wolfgang Bangerth, Guido Kanschat, 1999, 2000, 2001, 2002
*/
/*
* Declare standard types used in
* all containers. These types
- * parallel those in the @p{C++}
- * standard libraries
- * @p{vector<...>} class. This
- * includes iterator types.
+ * parallel those in the
+ * <tt>C++</tt> standard
+ * libraries
+ * <tt>std::vector<...></tt>
+ * class. This includes iterator
+ * types.
*/
typedef Number value_type;
typedef value_type *pointer;
* any arguments, it generates
* an object with no
* blocks. Given one argument,
- * it initializes @p{num_blocks}
+ * it initializes <tt>num_blocks</tt>
* blocks, but these blocks have
* size zero. The third variant
* finally initializes all
* blocks to the same size
- * @p{block_size}.
+ * <tt>block_size</tt>.
*
* Confer the other constructor
* further down if you intend to
// * Copy constructor taking a BlockVector of
// * another data type. This will fail if
// * there is no conversion path from
-// * @p{OtherNumber} to @p{Number}. Note that
+// * <tt>OtherNumber</tt> to <tt>Number</tt>. Note that
// * you may lose accuracy when copying
// * to a BlockVector with data elements with
// * less accuracy.
/**
* Constructor. Set the number of
- * blocks to @p{n.size()} and
+ * blocks to <tt>n.size()</tt> and
* initialize each block with
- * @p{n[i]} zero elements.
+ * <tt>n[i]</tt> zero elements.
*/
BlockVector (const std::vector<unsigned int> &n);
/**
* Constructor. Set the number of
* blocks to
- * @p{n.size()}. Initialize the
+ * <tt>n.size()</tt>. Initialize the
* vector with the elements
* pointed to by the range of
* iterators given as second and
* constructor is in complete
* analogy to the respective
* constructor of the
- * @p{std::vector} class, but the
+ * <tt>std::vector</tt> class, but the
* first argument is needed in
* order to know how to subdivide
* the block vector into
/**
* Reinitialize the BlockVector to
- * contain @p{num_blocks} blocks of
- * size @p{block_size} each.
+ * contain <tt>num_blocks</tt> blocks of
+ * size <tt>block_size</tt> each.
*
- * If @p{fast==false}, the vector
+ * If <tt>fast==false</tt>, the vector
* is filled with zeros.
*/
void reinit (const unsigned int num_blocks,
/**
* Reinitialize the BlockVector
* such that it contains
- * @p{N.size()} blocks. Each
+ * <tt>N.size()</tt> blocks. Each
* Block is reinitialized to
- * dimension @p{N[i]}.
+ * dimension <tt>N[i]</tt>.
*
* If the number of blocks is the
* same as before this function
* was called, all vectors remain
- * the same and @p{reinit} is
+ * the same and reinit() is
* called for each vector. While
* reinitializing a usual vector
* can consume a lot of time,
* has a potential to slow down a
* program considerably.
*
- * If @p{fast==false}, the vector
+ * If <tt>fast==false</tt>, the vector
* is filled with zeros.
*
* Note that you must call this
- * (or the other @p{reinit}
+ * (or the other reinit()
* functions) function, rather
- * than calling the @p{reinit}
+ * than calling the reinit()
* functions of an individual
* block, to allow the block
* vector to update its caches of
* vector sizes. If you call
- * @p{reinit} of one of the
+ * reinit() of one of the
* blocks, then subsequent
* actions of this object may
* yield unpredictable results
/**
* Change the dimension to that
- * of the vector @p{V}. The same
+ * of the vector <tt>V</tt>. The same
* applies as for the other
- * @p{reinit} function.
+ * reinit() function.
*
- * The elements of @p{V} are not
+ * The elements of <tt>V</tt> are not
* copied, i.e. this function is
- * the same as calling @p{reinit
- * (V.size(), fast)}.
+ * the same as calling <tt>reinit
+ * (V.size(), fast)</tt>.
*
* Note that you must call this
- * (or the other @p{reinit}
+ * (or the other reinit()
* functions) function, rather
- * than calling the @p{reinit}
+ * than calling the reinit()
* functions of an individual
* block, to allow the block
* vector to update its caches of
* vector sizes. If you call
- * @p{reinit} of one of the
+ * reinit() of one of the
* blocks, then subsequent
* actions of this object may
* yield unpredictable results
/**
* Set all entries to zero. Equivalent to
- * @p{v = 0}, but more obvious and faster.
+ * <tt>v = 0</tt>, but more obvious and faster.
* Note that this function does not change
* the size of the vector, unlike the
- * STL's @p{vector<>::clear} function.
+ * STL's <tt>vector<>::clear</tt> function.
*/
void clear ();
/**
* Swap the contents of this
* vector and the other vector
- * @p{v}. One could do this
+ * <tt>v</tt>. One could do this
* operation with a temporary
* variable and copying over the
* data elements, but this
* exchanged, too.
*
* This function is analogous to
- * the @p{swap} function of all C++
+ * the swap() function of all C++
* standard containers. Also,
* there is a global function
- * @p{swap(u,v)} that simply calls
- * @p{u.swap(v)}, again in analogy
+ * swap(u,v) that simply calls
+ * <tt>u.swap(v)</tt>, again in analogy
* to standard functions.
*/
void swap (BlockVector<Number> &v);
//@{
/**
* Addition operator.
- * Fast equivalent to @p{U.add(1, V)}.
+ * Fast equivalent to <tt>U.add(1, V)</tt>.
*/
BlockVector<Number> &
operator += (const BlockVector<Number> &V);
/**
* Subtraction operator.
- * Fast equivalent to @p{U.add(-1, V)}.
+ * Fast equivalent to <tt>U.add(-1, V)</tt>.
*/
BlockVector<Number> &
operator -= (const BlockVector<Number> &V);
/**
* $U(0-DIM)+=s$.
- * Addition of @p{s} to all components. Note
- * that @p{s} is a scalar and not a vector.
+ * Addition of <tt>s</tt> to all components. Note
+ * that <tt>s</tt> is a scalar and not a vector.
*/
void add (const Number s);
/**
* U+=V.
* Simple vector addition, equal to the
- * @p{operator +=}.
+ * <tt>operator +=</tt>.
*/
void add (const BlockVector<Number>& V);
* This function is deprecated
* and will be removed in a
* future version. Use
- * @p{operator *=} and
- * @p{operator /=} instead.
+ * <tt>operator *=</tt> and
+ * <tt>operator /=</tt> instead.
*/
void scale (const Number factor);
/**
* Multiply each element of this
* vector by the corresponding
- * element of @p{v}.
+ * element of <tt>v</tt>.
*/
template<typename Number2>
void scale (const BlockVector<Number2>& v);
/**
* The number of blocks. This
* number is redundant to
- * @p{components.size()} and stored
+ * <tt>components.size()</tt> and stored
* here for convenience.
*/
unsigned int num_blocks;
*/
/**
- * Global function @p{swap} which overloads the default implementation
+ * Global function which overloads the default implementation
* of the C++ standard library which uses a temporary object. The
* function simply exchanges the data of the two vectors.
*
+ * @relates BlockVector
* @author Wolfgang Bangerth, 2000
*/
template <typename Number>
// $Id$
// Version: $Name$
//
-// Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003 by the deal.II authors
+// Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004 by the deal.II authors
//
// This file is subject to QPL and may not be distributed
// without copyright and license information. Please refer
*
* Implementation of a classical rectangular scheme of numbers. The
* data type of the entries is provided in the template argument
- * @p{number}. The interface is quite fat and in fact has grown every
+ * <tt>number</tt>. The interface is quite fat and in fact has grown every
* time a new feature was needed. So, a lot of functions are provided.
*
* Since the instantiation of this template is quite an effort,
* standard versions are precompiled into the library. These include
- * all combinations of @p{float} and @p{double} for matrices and
+ * all combinations of <tt>float</tt> and <tt>double</tt> for matrices and
* vectors. If you need more data types, the implementation of
- * non-inline functions is in @p{fullmatrix.templates.h}. Driver files
+ * non-inline functions is in <tt>fullmatrix.templates.h</tt>. Driver files
* are in the source tree.
*
* Internal calculations are usually done with the accuracy of the
* vector argument to functions. If there is no argument with a number
* type, the matrix number type is used.
*
- *
- * @sect2{On template instantiations}
- *
- * Member functions of this class are either implemented in this file
- * or in a file of the same name with suffix ``.templates.h''. For the
- * most common combinations of the template parameters, instantiations
- * of this class are provided in a file with suffix ``.cc'' in the
- * ``source'' directory. If you need an instantiation that is not
- * listed there, you have to include this file along with the
- * corresponding ``.templates.h'' file and instantiate the respective
- * class yourself.
+ * @ref Instantiations: some (<tt>@<float@> @<double@></tt>)
*
* @author Guido Kanschat, Franz-Theo Suttmeier, Wolfgang Bangerth, 1993-2001
*/
*/
bool operator == (const const_iterator&) const;
/**
- * Inverse of @p{==}.
+ * Inverse of <tt>==</tt>.
*/
bool operator != (const const_iterator&) const;
/**
* Constructor. Initialize the
* matrix as a square matrix with
- * dimension @p{n}.
+ * dimension <tt>n</tt>.
*
* In order to avoid the implicit
* conversion of integers and
* other types to a matrix, this
* constructor is declared
- * @p{explicit}.
+ * <tt>explicit</tt>.
*
* By default, no memory is
* allocated.
* by value rather than by
* reference. Unfortunately, we
* can't mark this copy
- * constructor @p{explicit},
+ * constructor <tt>explicit</tt>,
* since that prevents the use of
* this class in containers, such
- * as @p{std::vector}. The
+ * as <tt>std::vector</tt>. The
* responsibility to check
* performance of programs must
* therefore remain with the
* matrix classes. This
* assignment operator uses
* iterators of the class
- * @p{MATRIX}. Therefore, sparse
+ * MATRIX. Therefore, sparse
* matrices are possible sources.
*/
template <class MATRIX>
* Fill rectangular block.
*
* A rectangular block of the
- * matrix @p{src} is copied into
- * @p{this}. The upper left
+ * matrix <tt>src</tt> is copied into
+ * <tt>this</tt>. The upper left
* corner of the block being
* copied is
- * @p{(src_offset_i,src_offset_j)}.
+ * <tt>(src_offset_i,src_offset_j)</tt>.
* The upper left corner of the
* copied block is
- * @p{(dst_offset_i,dst_offset_j)}.
+ * <tt>(dst_offset_i,dst_offset_j)</tt>.
* The size of the rectangular
* block being copied is the
* maximum size possible,
* determined either by the size
- * of @p{this} or @p{src}.
+ * of <tt>this</tt> or <tt>src</tt>.
*/
template<typename number2>
void fill (const FullMatrix<number2> &src,
* Fill with permutation of
* another matrix.
*
- * The matrix @p{src} is copied
+ * The matrix <tt>src</tt> is copied
* into the target. The two
- * permutation @p{p_r} and
- * @p{p_c} operate in a way, such
- * that @p{result(i,j) =
- * src(p_r[i], p_c[j])}.
+ * permutations <tt>p_r</tt> and
+ * <tt>p_c</tt> operate in a way, such
+ * that <tt>result(i,j) =
+ * src(p_r[i], p_c[j])</tt>.
*
* The vectors may also be a
* selection from a larger set of
* integers, if the matrix
- * @p{src} is bigger. It is also
+ * <tt>src</tt> is bigger. It is also
* possible to duplicate rows or
* columns by this method.
*/
/**
* STL-like iterator with the
- * first entry of row @p{r}.
+ * first entry of row <tt>r</tt>.
*/
const_iterator begin (const unsigned int r) const;
/**
- * Final iterator of row @p{r}.
+ * Final iterator of row <tt>r</tt>.
*/
const_iterator end (const unsigned int r) const;
/**
* Weighted addition. The matrix
- * @p{s*B} is added to @p{this}.
+ * <tt>s*B</tt> is added to <tt>this</tt>.
*
* $A += sB$
*/
/**
* Weighted addition of the
- * transpose of @p{B} to @p{this}.
+ * transpose of <tt>B</tt> to <tt>this</tt>.
*
* $A += s B^T$
*/
* Matrix-matrix-multiplication.
*
* The optional parameter
- * @p{adding} determines, whether the
- * result is stored in @p{C} or added
- * to @p{C}.
+ * <tt>adding</tt> determines whether the
+ * result is stored in <tt>C</tt> or added
+ * to <tt>C</tt>.
*
* if (adding)
* $C += A*B$
* if (!adding)
* $C = A*B$
*
- * Assumes that @p{A} and @p{B} have
- * compatible sizes and that @p{C}
+ * Assumes that <tt>A</tt> and <tt>B</tt> have
+ * compatible sizes and that <tt>C</tt>
* already has the right size.
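+ *
+ * A sketch of its use (the matrices are assumed to be filled
+ * elsewhere):
+ * @code
+ * FullMatrix<double> A (3,4), B (4,5), C (3,5);
+ * A.mmult (C, B);                 // computes C = A*B
+ * @endcode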
*/
template<typename number2>
/**
* Matrix-matrix-multiplication using
- * transpose of @p{this}.
+ * transpose of <tt>this</tt>.
*
* The optional parameter
- * @p{adding} determines, whether the
- * result is stored in @p{C} or added
- * to @p{C}.
+ * <tt>adding</tt> determines whether the
+ * result is stored in <tt>C</tt> or added
+ * to <tt>C</tt>.
*
* if (adding)
* $C += A^T*B$
* if (!adding)
* $C = A^T*B$
*
- * Assumes that @p{A} and @p{B} have
- * compatible sizes and that @p{C}
+ * Assumes that <tt>A</tt> and <tt>B</tt> have
+ * compatible sizes and that <tt>C</tt>
* already has the right size.
*/
template<typename number2>
* Matrix-vector-multiplication.
*
* The optional parameter
- * @p{adding} determines, whether the
- * result is stored in @p{w} or added
- * to @p{w}.
+ * <tt>adding</tt> determines whether the
+ * result is stored in <tt>w</tt> or added
+ * to <tt>w</tt>.
*
* if (adding)
* $w += A*v$
/**
* Transpose
* matrix-vector-multiplication.
- * See @p{vmult} above.
+ * See vmult() above.
*/
template<typename number2>
void Tvmult (Vector<number2> &w,
/**
* Return the square of the norm
- * of the vector @p{v} with respect
- * to the norm induced by this
- * matrix,
- * i.e. $\left(v,Mv\right)$. This
- * is useful, e.g. in the finite
+ * of the vector <tt>v</tt> with
+ * respect to the norm induced by
+ * this matrix,
+ * i.e. <i>(v,Mv)</i>. This is
+ * useful, e.g. in the finite
* element context, where the
- * $L_2$ norm of a function
- * equals the matrix norm with
- * respect to the mass matrix of
- * the vector representing the
- * nodal values of the finite
- * element function.
+ * <i>L<sup>2</sup></i> norm of a
+ * function equals the matrix
+ * norm with respect to the mass
+ * matrix of the vector
+ * representing the nodal values
+ * of the finite element
+ * function.
*
* Obviously, the matrix needs to
- * be square for this operation.
+ * be quadratic for this operation.
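+ *
+ * A sketch of its use (with <tt>M</tt> a mass matrix and <tt>u</tt>
+ * the vector of nodal values of a finite element function):
+ * @code
+ * const double norm_sqr = M.matrix_norm_square (u);   // (u, M u)
+ * @endcode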
*/
template<typename number2>
number2 matrix_norm_square (const Vector<number2> &v) const;
/**
* Build the matrix scalar product
- * @p{u^T M v}. This function is mostly
+ * <tt>u^T M v</tt>. This function is mostly
* useful when building the cellwise
* scalar product of two functions in
* the finite element context.
* transpose, $A = \frac 12(A+A^T)$.
*
* Obviously the matrix must be
- * square for this operation.
+ * quadratic for this operation.
*/
void symmetrize ();
* the indefinite case.
*
* The numerical effort to invert
- * an @p{n x n} matrix is of the
- * order @p{n**3}.
+ * an <tt>n x n</tt> matrix is of the
+ * order <tt>n**3</tt>.
*/
void gauss_jordan ();
* higher dimensions the
* numerical work explodes.
* Obviously, the matrix needs to
- * be square for this function.
+ * be quadratic for this function.
*/
double determinant () const;
/**
* Assign the inverse of the
* given matrix to
- * @p{*this}. This function is
- * hardcoded for square matrices
+ * <tt>*this</tt>. This function is
+ * hardcoded for quadratic matrices
* of dimension one to four,
* since the amount of code
* needed grows quickly. For
* Apply the Jacobi
* preconditioner, which
* multiplies every element of
- * the @p{src} vector by the
+ * the <tt>src</tt> vector by the
* inverse of the respective
* diagonal element and
* multiplies the result with the
- * damping factor @p{omega}.
+ * damping factor <tt>omega</tt>.
*/
template <typename somenumber>
void precondition_Jacobi (Vector<somenumber> &dst,
void diagadd (const number s);
/**
- * $w=b-A*v$.
+ * <i>w=b-A*v</i>.
* Residual calculation, returns
- * the $l_2$-norm $|w|$.
+ * the <i>l<sub>2</sub></i>-norm |<i>w</i>|.
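+ *
+ * A sketch of its use (names are placeholders):
+ * @code
+ * Vector<double> w (b.size());
+ * const double norm = A.residual (w, v, b);  // w = b - A*v
+ * @endcode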
*/
template<typename number2, typename number3>
double residual (Vector<number2> &w,
*
* If the matrix has more columns
* than rows, this function only
- * operates on the left square
+ * operates on the left quadratic
* submatrix. If there are more
- * rows, the upper square part of
- * the matrix is considered.
+ * rows, the upper quadratic part
+ * of the matrix is considered.
*
* Note that this function does
* not fit into this class at
* more. Conversely, if these
* functions have a meaning on
* this object, then the
- * @p{forward} function has no
+ * forward() function has no
* meaning. This bifacial
* property of this class is
* probably a design mistake and
* may once go away by separating
- * the @p{forward} and
- * @p{backward} functions into a
- * class of their own.
+ * the forward() and backward()
+ * functions into a class of
+ * their own.
*/
template<typename number2>
void forward (Vector<number2> &dst,
* Backward elimination of upper
* triangle.
*
- * @ref forward
+ * See forward().
*/
template<typename number2>
void backward (Vector<number2> &dst,
*
* The parameters allow for a
* flexible setting of the output
- * format: @p{precision} and
- * @p{scientific} are used to
+ * format: <tt>precision</tt> and
+ * <tt>scientific</tt> are used to
* determine the number format,
- * where @p{scientific} = @p{false}
+ * where <tt>scientific</tt> = <tt>false</tt>
* means fixed point notation. A
- * zero entry for @p{width} makes
+ * zero entry for <tt>width</tt> makes
* the function compute a width,
* but it may be changed to a
* positive value, if output is
* readable output, even
* integers.
*
- * This function
- * may produce @em{large} amounts of
+ * @attention This function
+ * may produce <b>large</b> amounts of
* output if applied to a large matrix!
*/
void print_formatted (std::ostream &out,
// $Id$
// Version: $Name$
//
-// Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003 by the deal.II authors
+// Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004 by the deal.II authors
//
// This file is subject to QPL and may not be distributed
// without copyright and license information. Please refer
* preconditioner. Since this is
* the identity, this function is
* the same as
- * @ref{vmult}.
+ * vmult().
*/
template<class VECTOR>
void Tvmult (VECTOR&, const VECTOR&) const;
* preconditioner, adding. Since this is
* the identity, this function is
* the same as
- * @ref{vmult}.
+ * vmult_add().
*/
template<class VECTOR>
void Tvmult_add (VECTOR&, const VECTOR&) const;
* classes.
*
* It seems that all builtin preconditioners have a relaxation
- * parameter, so please use @p{PreconditionRelaxation} for these.
+ * parameter, so please use PreconditionRelaxation for these.
+ *
+ * @section PrecUMU Usage
*
- * @sect3{Use}
* You will usually not want to create a named object of this type,
* although that is possible. The most common use is like this:
- * @begin{verbatim}
+ * @code
* SolverGMRES<SparseMatrix<double>,
* Vector<double> > gmres(control,memory,500);
*
* gmres.solve (matrix, solution, right_hand_side,
* PreconditionUseMatrix<SparseMatrix<double>,Vector<double> >
* (matrix,&SparseMatrix<double>::template precondition_Jacobi));
- * @end{verbatim}
+ * @endcode
* This creates an unnamed object to be passed as the fourth parameter to
- * the solver function of the @p{SolverGMRES} class. It assumes that the
- * @p{SparseMatrix} class has a function @p{precondition_Jacobi} taking two
- * vectors (source and destination) as parameters. (Actually, there is no
+ * the solver function of the SolverGMRES class. It assumes that the
+ * SparseMatrix class has a function <tt>precondition_Jacobi</tt> taking two
+ * vectors (source and destination) as parameters (actually, there is no
* function like that, the existing function takes a third parameter,
* denoting the relaxation parameter; this example is therefore only meant to
- * illustrate the general idea.)
+ * illustrate the general idea).
*
* Note that due to the default template parameters, the above example
* could be written shorter as follows:
- * @begin{verbatim}
+ * @code
* ...
* gmres.solve (matrix, solution, right_hand_side,
* PreconditionUseMatrix<>
* (matrix,&SparseMatrix<double>::template precondition_Jacobi));
- * @end{verbatim}
+ * @endcode
*
* @author Guido Kanschat, Wolfgang Bangerth, 1999
*/
/**
- * Jacobi preconditioner using matrix built-in function. The MATRIX
- * class used is required to have a function
- * @p{precondition_Jacobi(VECTOR&, const VECTOR&, double}
+ * Jacobi preconditioner using matrix built-in function. The
+ * <tt>MATRIX</tt> class used is required to have a function
+ * <tt>precondition_Jacobi(VECTOR&, const VECTOR&, double)</tt>.
+ *
+ * @section PrecJU Usage
*
- * @sect2{Usage example}
- * @begin{itemize}
+ * @code
* // Declare related objects
*
* SparseMatrix<double> A;
* precondition.initialize (A, .6);
*
* solver.solve (A, x, b, precondition);
- * @end{itemize}
+ * @endcode
*
* @author Guido Kanschat, 2000
*/
* preconditioner. Since this is
* a symmetric preconditioner,
* this function is the same as
- * @ref{vmult}.
+ * vmult().
*/
template<class VECTOR>
void Tvmult (VECTOR&, const VECTOR&) const;
/**
* SOR preconditioner using matrix built-in function. The MATRIX
* class used is required to have functions
- * @p{precondition_SOR(VECTOR&, const VECTOR&, double)} and
- * @p{precondition_TSOR(VECTOR&, const VECTOR&, double)}.
+ * <tt>precondition_SOR(VECTOR&, const VECTOR&, double)</tt> and
+ * <tt>precondition_TSOR(VECTOR&, const VECTOR&, double)</tt>.
*
*
- * @sect2{Usage example}
- * @begin{itemize}
+ * @section PrecSORU Usage
+ * @code
* // Declare related objects
*
* SparseMatrix<double> A;
* precondition.initialize (A, .6);
*
* solver.solve (A, x, b, precondition);
- * @end{itemize}
+ * @endcode
*
* @author Guido Kanschat, 2000
*/
/**
- * SSOR preconditioner using matrix built-in function. The MATRIX
- * class used is required to have a function
- * @p{precondition_SSOR(VECTOR&, const VECTOR&, double}
+ * SSOR preconditioner using matrix built-in function. The
+ * <tt>MATRIX</tt> class used is required to have a function
+ * <tt>precondition_SSOR(VECTOR&, const VECTOR&, double)</tt>.
*
*
- * @sect2{Usage example}
- * @begin{itemize}
+ * @section PrecSSORU Usage
+ * @code
* // Declare related objects
*
* SparseMatrix<double> A;
* precondition.initialize (A, .6);
*
* solver.solve (A, x, b, precondition);
- * @end{itemize}
+ * @endcode
*
* @author Guido Kanschat, 2000
*/
* preconditioner. Since this is
* a symmetric preconditioner,
* this function is the same as
- * @ref{vmult}.
+ * vmult().
*/
template<class VECTOR>
void Tvmult (VECTOR&, const VECTOR&) const;
/**
- * Permuted SOR preconditioner using matrix built-in function. The MATRIX
- * class used is required to have functions
- * @p{PSOR(VECTOR&, const VECTOR&, double)} and
- * @p{TPSOR(VECTOR&, const VECTOR&, double)}.
+ * Permuted SOR preconditioner using matrix built-in function. The
+ * <tt>MATRIX</tt> class used is required to have functions
+ * <tt>PSOR(VECTOR&, const VECTOR&, double)</tt> and
+ * <tt>TPSOR(VECTOR&, const VECTOR&, double)</tt>.
*
*
- * @sect2{Usage example}
- * @begin{itemize}
+ * @section PrecPSORU Usage
+ * @code
* // Declare related objects
*
* SparseMatrix<double> A;
* precondition.initialize (A, permutation, inverse_permutation, .6);
*
* solver.solve (A, x, b, precondition);
- * @end{itemize}
+ * @endcode
*
* @author Guido Kanschat, 2003
*/
* inverse of the matrix. Naturally, this solver needs another
* preconditioning method.
*
- * Usually, the use of @p{ReductionControl} is preferred over the use of
- * the basic @p{SolverControl} in defining this solver.
+ * Usually, the use of ReductionControl is preferred over the use of
+ * the basic SolverControl in defining this solver.
*
- * @sect2{Usage example}
+ * @section PrecItU Usage
*
- * Krylov space methods like @ref{SolverCG} or @ref{SolverBicgstab}
+ * Krylov space methods like SolverCG or SolverBicgstab
* become inefficient if the solution is needed down to machine
* accuracy. This is due to the fact that round-off errors spoil the
* orthogonality of the vector sequences. Therefore, a nested
* iteration of two methods is proposed: The outer method is
- * @ref{SolverRichardson}, since it is robust with respect to round-of
+ * SolverRichardson, since it is robust with respect to round-of
* errors. The inner loop is an appropriate Krylov space method, since
* it is fast.
*
- * @begin{itemize}
+ * @code
* // Declare related objects
*
* SparseMatrix<double> A;
* SolverRichardson<Vector<double> > outer_iteration;
*
* outer_iteration.solve (A, x, b, precondition);
- * @end{itemize}
+ * @endcode
*
* Each time we call the inner loop, reduction of the residual by a
- * factor @p{1.e-2} is attempted. Since the right hand side vector of
+ * factor <tt>1.e-2</tt> is attempted. Since the right hand side vector of
* the inner iteration is the residual of the outer loop, the relative
* errors are far from machine accuracy, even if the errors of the
* outer loop are in the range of machine accuracy.
* with the matrix-vector product $PA$. It needs an auxiliary vector for that.
*
* By this time, this is considered a temporary object to be plugged
- * into eigenvalue solvers. Therefore, no @p{SmartPointer} is used for
- * @p{A} and @p{P}.
+ * into eigenvalue solvers. Therefore, no SmartPointer is used for
+ * <tt>A</tt> and <tt>P</tt>.
*
* @author Guido Kanschat, 2000
*/
// $Id$
// Version: $Name$
//
-// Copyright (C) 2001, 2002, 2003 by the deal.II authors
+// Copyright (C) 2001, 2002, 2003, 2004 by the deal.II authors
//
// This file is subject to QPL and may not be distributed
// without copyright and license information. Please refer
/**
* Matrix with shifted diagonal values.
*
- * Given a matrix p{A}, this class implements a matrix-vector product with
- * @p{A+\sigma I}, where sigma is a provided shift parameter.
+ * Given a matrix <tt>A</tt>, this class implements a matrix-vector
+ * product with <i>A+s I</i>, where <i>s</i> is a provided shift
+ * parameter.
*
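+ * A sketch of its use (<tt>A</tt>, <tt>v</tt> and <tt>w</tt> are
+ * assumed to be initialized elsewhere with matching sizes):
+ * @code
+ * ShiftedMatrix<FullMatrix<double> > shifted (A, 2.5);
+ * shifted.vmult (w, v);           // computes w = (A + 2.5 I) v
+ * @endcode
+ *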
* @author Guido Kanschat, 2000, 2001
*/
{
public:
/**
- * Constructor.
- * Provide the base matrix and a shift parameter.
+ * Constructor. Provide the base
+ * matrix and a shift parameter.
*/
ShiftedMatrix (const MATRIX& A, const double sigma);
/**
* Matrix with shifted diagonal values with respect to a certain scalar product.
*
- * Given a matrix @p{A}, this class implements a matrix-vector product
- * with @p{A+\sigma M}, where sigma is a provided shift parameter and
- * @p{M} is the matrix representing the identity
+ * Given a matrix <tt>A</tt>, this class implements a matrix-vector product
+ * with <i>A+s M</i>, where <i>s</i> is a provided shift parameter and
+ * <tt>M</tt> is the matrix representing the identity.
*
* @author Guido Kanschat, 2001
*/
// $Id$
// Version: $Name$
//
-// Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003
+// Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004
// by the deal.II authors and Stephen "Cheffo" Kolaroff
//
// This file is subject to QPL and may not be distributed
* into another sparse matrix.
*
* The decomposition is stored as a sparse matrix which is why this
- * class is derived from the @p{SparseMatrix}. Since it is not a matrix in
- * the usual sense, the derivation is @p{protected} rather than @p{public}.
+ * class is derived from SparseMatrix. Since it is not a matrix in
+ * the usual sense, the derivation is <tt>protected</tt> rather than <tt>public</tt>.
* @sect3{Fill-in}
*
* The sparse LU decompositions are frequently used with additional
* fill-in, i.e. the sparsity structure of the decomposition is denser
- * than that of the matrix to be decomposed. The @p{initialize}
+ * than that of the matrix to be decomposed. The initialize()
* function of this class allows this fill-in as long as all entries
* present in the original matrix are present in the decomposition
* also, i.e. the sparsity pattern of the decomposition is a superset
* of the sparsity pattern in the original matrix.
*
* Such fill-in can be accomplished by various ways, one of which is a
- * copy-constructor of the @p{SparsityPattern} class which allows the addition
+ * copy-constructor of the SparsityPattern class which allows the addition
* of side-diagonals to a given sparsity structure.
*
* @sect3{Unified use of preconditioners}
*
* An object of this class can be used in the same form as all
- * @ref{PreconditionBlock} preconditioners:
- * @begin{verbatim}
+ * PreconditionBlock preconditioners:
+ * @code
* SparseLUImplementation<double> lu;
* lu.initialize(matrix, SparseLUImplementation<double>::AdditionalData(...));
*
* somesolver.solve (A, x, f, lu);
- * @end{verbatim}
+ * @endcode
*
- * Through the @p{AdditionalData} object it is possible to specify
+ * Through the AdditionalData object it is possible to specify
* additional parameters of the LU decomposition.
*
* 1/ The matrix diagonals can be strengthened by adding
- * @p{strengthen_diagonal} times the sum of the absolute row entries
+ * <tt>strengthen_diagonal</tt> times the sum of the absolute row entries
* of each row to the respective diagonal entries. By default no
* strengthening is performed.
*
- * 2/ By default, each @p{initialize} function call creates its own
- * sparsity. For that, it copies the sparsity of @p{matrix} and adds a
+ * 2/ By default, each initialize() function call creates its own
+ * sparsity. For that, it copies the sparsity of <tt>matrix</tt> and adds a
* specific number of extra off diagonal entries specified by
- * @p{extra_off_diagonals}.
+ * <tt>extra_off_diagonals</tt>.
*
- * 3/ By setting @p{use_previous_sparsity=true} the sparsity is not
- * recreated but the sparsity of the previous @p{initialize} call is
+ * 3/ By setting <tt>use_previous_sparsity=true</tt> the sparsity is not
+ * recreated but the sparsity of the previous initialize() call is
* reused (recycled). This might be useful when several linear
* problems on the same sparsity need to be solved, as for example
* several Newton iteration steps on the same triangulation. The
- * default is @p{false}.
+ * default is <tt>false</tt>.
*
* 4/ It is possible to give a user defined sparsity to
- * @p{use_this_sparsity}. Then, no sparsity is created but
- * @p[*use_this_sparsity} is used to store the decomposed matrix. For
+ * <tt>use_this_sparsity</tt>. Then, no sparsity is created but
+ * <tt>*use_this_sparsity</tt> is used to store the decomposed matrix. For
* restrictions on the sparsity, see the section `Fill-in' above.
*
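+ * For instance, requesting extra fill-in could look like this (a
+ * sketch; <tt>SparseLUImplementation</tt> again stands for a concrete
+ * decomposition class):
+ * @code
+ * SparseLUImplementation<double>::AdditionalData data;
+ * data.extra_off_diagonals = 2;
+ * lu.initialize (matrix, data);
+ * @endcode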
*
* @sect2{State management}
*
- * The state management simply requires the @p{initialize} function to
+ * The state management simply requires the initialize() function to
* be called before the object is used as preconditioner.
*
* Obsolete:
* decomposition itself has been built, and to introduce some
* optimization of common "sparse idioms", this class introduces a
* simple state management. A SparseLUdecomposition instance is
- * considered @p{not decomposed} if the decompose method has not yet
- * been invoked since the last time the underlying @ref{SparseMatrix}
+ * considered not decomposed if the decompose method has not yet
+ * been invoked since the last time the underlying SparseMatrix
* had changed. The underlying sparse matrix is considered changed
* when one of this class reinit methods, constructors or destructors
- * are invoked. The @p{not decomposed} state is indicated by a false
- * value returned by @p{is_decomposed} method. It is illegal to apply
- * this decomposition (@p{vmult} method) in not decomposed state; in
- * this case, the @p{vmult} method throws an @p{ExcInvalidState}
+ * are invoked. The not-decomposed state is indicated by a false
+ * value returned by the is_decomposed() method. It is illegal to apply
+ * this decomposition (the vmult() method) in the not-decomposed state;
+ * in this case, the vmult() method throws an <tt>ExcInvalidState</tt>
* exception. This object turns into the decomposed state immediately
- * after its @p{decompose} method is invoked. The @p{decomposed}
- * state is indicated by true value returned by @p{is_decomposed}
- * method. It is legal to apply this decomposition (@p{vmult} method) in
+ * after its decompose() method is invoked. The decomposed
+ * state is indicated by a true value returned by the is_decomposed()
+ * method. It is legal to apply this decomposition (the vmult() method)
* in the decomposed state.
*
* @sect2{Particular implementations}
*
- * It is enough to override the @p{initialize} and @p{vmult} methods to
+ * It is enough to override the initialize() and vmult() methods to
* implement particular LU decompositions, like the true LU, or the
* Cholesky decomposition. Additionally, if that decomposition needs
* fine tuned diagonal strengthening on a per row basis, it may override the
- * @p{get_strengthen_diagonal} method. You should invoke the non-abstract
+ * get_strengthen_diagonal() method. You should invoke the non-abstract
* base class method to employ the state management. Implementations
* may choose a more restrictive
- * state; but they must conform to the @p{is_decomposed} method
+ * state; but they must conform to the is_decomposed() method
* specification above.
*
- * If an exception is thrown by method other than @p{vmult}, this
+ * If an exception is thrown by a method other than vmult(), this
* object may be left in an inconsistent state.
*
* @author Stephen "Cheffo" Kolaroff, 2002, based on SparseILU implementation by Wolfgang Bangerth; unified interface: Ralf Hartmann, 2003
/**
* Constructor. Does nothing.
*
- * Call the @p{initialize}
+ * Call the initialize()
* function before using this
* object as preconditioner
- * (@p{vmult}).
+ * (vmult()).
*/
SparseLUDecomposition ();
const SparsityPattern *use_this_sparsity=0);
/**
- * @p{strengthen_diag} times
+ * <tt>strengthen_diag</tt> times
* the sum of absolute row
* entries is added to the
* diagonal entries.
/**
* By default, the
- * @p{initialize(matrix,
- * data)} function creates
+ * <tt>initialize(matrix,
+ * data)</tt> function creates
* its own sparsity. This
* sparsity has the same
- * @p{SparsityPattern} as
- * @p{matrix} with some extra
+ * SparsityPattern as
+ * <tt>matrix</tt> with some extra
* off diagonals the number
* of which is specified by
- * @p{extra_off_diagonals}.
+ * <tt>extra_off_diagonals</tt>.
*
* The user can give a
- * @p{SparsityPattern} to
- * @p{use_this_sparsity}. Then
+ * SparsityPattern to
+ * <tt>use_this_sparsity</tt>. Then
* this sparsity is used and
* the
- * @p{extra_off_diagonals}
+ * <tt>extra_off_diagonals</tt>
* argument is ignored.
*/
unsigned int extra_off_diagonals;
/**
* If this flag is true the
- * @p{initialize} function uses
+ * initialize() function uses
* the same sparsity that was
* used during the previous
- * @p{initialize} call.
+ * initialize() call.
*
* This might be useful when
* several linear problems on
/**
* When a
- * @ref{SparsityPattern} is
+ * SparsityPattern is
* given to this argument,
- * the @p{initialize}
+ * the initialize()
* function calls
- * @p{reinit(*use_this_sparsity)}
+ * <tt>reinit(*use_this_sparsity)</tt>
* causing this sparsity to
* be used.
*
* Note that the sparsity
* structures of
- * @p{*use_this_sparsity} and
+ * <tt>*use_this_sparsity</tt> and
* the matrix passed to the
* initialize function need
* not be equal, but that the
* parameters, see the class
* documentation and the
* documentation of the
- * @p{SparseLUDecomposition::AdditionalData}
+ * SparseLUDecomposition::AdditionalData
* class.
*
* According to the
- * @p{parameters}, this function
+ * <tt>parameters</tt>, this function
* creates a new SparsityPattern
* or keeps the previous sparsity
* or takes the sparsity given by
- * the user to @p{data}. Then,
+ * the user to <tt>data</tt>. Then,
* this function performs the LU
* decomposition.
*
/**
* Exception. Indicates violation
- * of a @p{state rule}.
+ * of a state rule.
*/
DeclException0 (ExcInvalidState);
* the sum of absolute values of
* its elements, determines the
* strengthening factor (through
- * @p{get_strengthen_diagonal})
+ * get_strengthen_diagonal())
* sf and multiplies the diagonal
- * entry with @p{sf+1}.
+ * entry with <tt>sf+1</tt>.
*/
virtual void strengthen_diagonal_impl ();
* In the decomposition phase,
* computes a strengthening
* factor for the diagonal entry
- * in row @p{row} with sum of
+ * in row <tt>row</tt> with sum of
* absolute values of its
- * elements @p{rowsum}.<br> Note:
+ * elements <tt>rowsum</tt>.<br> Note:
* The default implementation in
- * @ref{SparseLUDecomposition}
+ * SparseLUDecomposition
* returns
- * @p{strengthen_diagonal}'s
+ * <tt>strengthen_diagonal</tt>'s
* value.
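+ *
+ * A possible override, purely for illustration:
+ * @code
+ * template <typename number>
+ * number
+ * MyDecomposition<number>::get_strengthen_diagonal (const number rowsum,
+ *                                                   const unsigned int) const
+ * {
+ *   // strengthen rows with a large absolute row sum more strongly
+ *   return (rowsum > 1. ? 0.1 : 0.);
+ * }
+ * @endcode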
*/
virtual number get_strengthen_diagonal(const number rowsum, const unsigned int row) const;
/**
* State flag. If not in
- * @em{decomposed} state, it is
- * unlegal to apply the
+ * decomposed state, it is
+ * illegal to apply the
* decomposition. This flag is
- * cleared when the underlaying
+ * cleared when the underlying
- * @ref{SparseMatrix}
- * @ref{SparsityPattern} is
+ * SparseMatrix
+ * SparsityPattern is
* changed, and set by
- * @p{decompose}.
+ * decompose().
*/
bool decomposed;
/**
- * The default strenghtening
+ * The default strengthening
* value, returned by
- * @p{get_strengthen_diagonal}.
+ * get_strengthen_diagonal().
*/
double strengthen_diagonal;
/**
* For every row in the
* underlying
- * @ref{SparsityPattern}, this
+ * SparsityPattern, this
* array contains a pointer
* to the row's first
- * afterdiagonal entry. Becomes
+ * after-diagonal entry. Becomes
* available after invocation of
- * @p{decompose}.
+ * decompose().
*/
std::vector<const unsigned int*> prebuilt_lower_bound;
private:
/**
* Fills the
- * @ref{prebuilt_lower_bound}
+ * #prebuilt_lower_bound
* array.
*/
void prebuild_lower_bound ();
/**
* In general this pointer is
* zero except for the case that
- * no @p{SparsityPattern} is
+ * no SparsityPattern is
* given to this class. Then, a
- * @p{SparsityPattern} is created
+ * SparsityPattern is created
* and is passed down to the
- * @p{SparseMatrix} base class.
+ * SparseMatrix base class.
*
* Nevertheless, the
- * @p{SparseLUDecomposition}
+ * SparseLUDecomposition
* needs to keep ownership of
* this sparsity. It keeps this
* pointer to it enabling it to
// $Id$
// Version: $Name$
//
-// Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003 by the deal.II authors
+// Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004 by the deal.II authors
//
// This file is subject to QPL and may not be distributed
// without copyright and license information. Please refer
/**
* This class provides an interface to the sparse direct solver MA27
* from the Harwell Subroutine Library. MA27 is a direct solver
- * specialized for sparse symmetric indefinite systems of linear equations and
- * uses a modified form of Gauss elimination. It is included in the
- * Harwell Subroutine Library (see
- * @url{http://www.cse.clrc.ac.uk/Activity/HSL}) and is written in
- * Fortran. The present class only transforms the data stored in
- * @ref{SparseMatrix} objects into the form which is required by the
- * functions resembling MA27, calls these Fortran functions, and
- * interprets some of the returned values indicating error codes,
- * etc. It also manages allocation of the right amount of temporary
- * storage required by these functions.
+ * specialized for sparse symmetric indefinite systems of linear
+ * equations and uses a modified form of Gauss elimination. It is
+ * included in the <a
+ * href="http://www.cse.clrc.ac.uk/Activity/HSL">Harwell Subroutine
+ * Library</a> and is written in Fortran. The present class only
+ * transforms the data stored in SparseMatrix objects into the
+ * form which is required by the functions resembling MA27, calls
+ * these Fortran functions, and interprets some of the returned values
+ * indicating error codes, etc. It also manages allocation of the
+ * right amount of temporary storage required by these functions.
*
* For a description of the steps necessary for the installation of
* HSL subroutines, read the section on external libraries in the
- * deal.II ReadMe file.
+ * <tt>deal.II</tt> ReadMe file.
*
- * @sect3{Interface and Method}
+ * @section SPDMA1 Interface and Method
*
- * For the meaning of the three functions @p{initialize},
- * @p{factorize}, and @p{solve}, as well as for the method used in
- * MA27, please see the documentation of these functions, which can be
- * obtained from @url{http://www.cse.clrc.ac.uk/Activity/HSL}. In
- * practice, one will most often call the second @p{solve} function,
- * which solves the linear system for a given right hand sidem but one
- * can as well call the three functions separately if, for example,
- * one would like to solve the same matrix for several right hand side
- * vectors; the MA27 solver can do this efficiently, as it computes a
- * decomposition of the matrix, so that subsequent solves only amount
- * to a forward-backward substitution which is significantly less
- * costly than the decomposition process.
+ * For the meaning of the three functions initialize(), factorize(),
+ * and solve(), as well as for the method used in MA27, please see the
+ * <a href="http://www.cse.clrc.ac.uk/Activity/HSL">documentation</a>
+ * of these functions. In practice, you will most often call the
+ * second solve() function, which solves the linear system for a
+ * given right hand side, but one can as well call the three functions
+ * separately if, for example, one would like to solve the same matrix
+ * for several right hand side vectors; the MA27 solver can do this
+ * efficiently, as it computes a decomposition of the matrix, so that
+ * subsequent solves only amount to a forward-backward substitution
+ * which is significantly less costly than the decomposition process.
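+ *
+ * A sketch of solving the same matrix for several right hand sides
+ * (assuming initialize() takes the sparsity pattern and the
+ * one-argument solve() overwrites the right hand side with the
+ * solution; names are placeholders):
+ * @code
+ * SparseDirectMA27 solver;
+ * solver.initialize (matrix.get_sparsity_pattern());  // analyze structure
+ * solver.factorize  (matrix);                         // decompose once
+ * for (unsigned int i=0; i<n_rhs; ++i)
+ *   solver.solve (rhs_and_solution[i]);  // forward-backward substitution
+ * @endcode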
*
*
- * @sect3{Parameters to the constructor}
+ * @section SPDMA2 Parameters to the constructor
*
* The constructor of this class takes several arguments. The meaning
* is the following: the MA27 functions require the user to allocate
* and pass a certain amount of memory for temporary variables or for
* data to be passed to subsequent functions. The sizes of these
- * arrays are denoted by the variables @p{LIW1}, @p{LIW2}, and @p{LA},
- * where @p{LIW1} denotes the size of the @p{IW} array in the call to
- * @p{MA27A}, while @p{LIW2} is the array size in the call to
- * @p{MA27B}. The documentation of the MA27 functions gives ways to
+ * arrays are denoted by the variables <tt>LIW1</tt>, <tt>LIW2</tt>, and <tt>LA</tt>,
+ * where <tt>LIW1</tt> denotes the size of the <tt>IW</tt> array in the call to
+ * <tt>MA27A</tt>, while <tt>LIW2</tt> is the array size in the call to
+ * <tt>MA27B</tt>. The documentation of the MA27 functions gives ways to
* obtain estimates for their values, e.g. by evaluating values
* returned by functions called before. However, the documentation
- * only states that the values have to be @em{at least as large} as
+ * only states that the values have to be <b>at least</b> as large as
* the estimates, a hint that is not very useful oftentimes (in my
* humble opinion, the lack of dynamic memory allocation mechanism is
* a good reason not to program in Fortran 77...).
*
* In our experience, it is often necessary to go beyond the proposed
- * values (most often for @p{LA}, but also for @p{LIW1}). The first
+ * values (most often for <tt>LA</tt>, but also for <tt>LIW1</tt>). The first
* three parameters of the constructor denote by which factor the
* initial estimates shall be increased. The default values are 1.2
- * (the documentation recommends this value, 1, and 1.5, values which
+ * (the documentation recommends this value), 1, and 1.5, values which
- * have often worked for us. Note that the value of @p{LIW} is only
+ * have often worked for us. Note that the value of <tt>LIW</tt> is only
* changed in the second call if the recommended value times
- * @p{LIW_factor_2} is larger than the array size already is from the
- * call to @p{MA27A}; otherwise, @p{LIW_factor_2} is ignored.
+ * <tt>LIW_factor_2</tt> is larger than the array size resulting from the
+ * call to <tt>MA27A</tt>; otherwise, <tt>LIW_factor_2</tt> is ignored.
*
* If the values thus constructed fail to work, we try to restart the
* called function with larger values until the calls succeed. The
* factor we shall increase the array sizes. If the increment factors
* are less than or equal to one, then we only try to call the
* respective calls to the functions once and abort by throwing an
- * error. Note that the @p{MA27C} function writes out an error message
- * if the value of @p{LA} is too small and gives an indication to
+ * error. Note that the <tt>MA27C</tt> function writes out an error message
+ * if the value of <tt>LA</tt> is too small and gives an indication to
* which size it should be increased. However, most often the
* indicated value is far too small and can not be relied upon.
*
*
- * @sect3{Note on parallelization}
+ * @section SPDMA3 Note on parallelization
*
- * @sect4{Synchronisation}
+ * @subsection SPDMA4 Synchronisation
*
* Due to the use of global variables through COMMON blocks, the calls
* to the sparse direct solver routines are not multithreading-safe,
* in different parts of your program, and may not want to use a
* global variable for locking, this class has a lock as static member
* variable, which may be accessed using the
- * @p{get_synchronisation_lock} function. Note however, that this
- * class does not perform the synchronisation for you within its
- * member functions. The reason is that you will usually want to
- * synchronise over the calls to @p{initialize} and @p{factorize},
- * since there should probably not be a call to one of these function
- * with another matrix between the calls for one matrix. (The author
- * does not really know whether this is true, but it is probably safe
- * to assume that.) Since such cross-function synchronisation can only
- * be performed from outside, it is left to the user of this class to
- * do so.
+ * get_synchronisation_lock() function. Note however, that this class
+ * does not perform the synchronisation for you within its member
+ * functions. The reason is that you will usually want to synchronise
+ * over the calls to initialize() and factorize(), since there should
+ * probably not be a call to one of these functions with another matrix
+ * between the calls for one matrix. (The author does not really know
+ * whether this is true, but it is probably safe to assume that.)
+ * Since such cross-function synchronisation can only be performed
+ * from outside, it is left to the user of this class to do so.
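+ *
+ * A sketch of such user-side synchronisation (assuming the
+ * acquire/release interface of Threads::ThreadMutex):
+ * @code
+ * Threads::ThreadMutex &lock = solver.get_synchronisation_lock ();
+ * lock.acquire ();
+ * solver.initialize (matrix.get_sparsity_pattern());
+ * solver.factorize  (matrix);
+ * lock.release ();
+ * @endcode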
*
- * @sect4{Detached mode}
+ * @subsection SPDMA5 Detached mode
*
- * As an alternative, you can call the function @p{set_detached_mode}
+ * As an alternative, you can call the function set_detached_mode()
* right after calling the constructor. This lets the program fork, so
* that we now have two programs that communicate via pipes. The
* forked copy of the program then actually replaces itself by a
- * program called @p{detached_ma27}, that is started in its place
- * through the @p{execv} system call. Now everytime you call one of
+ * program called <tt>detached_ma27</tt>, that is started in its place
+ * through the <tt>execv</tt> system call. Now every time you call one of
* the functions of this class, it relays the data to the other
* program and lets it execute the respective function. The results
- * are then transfered back. Since the MA27 functions are only called
+ * are then transferred back. Since the MA27 functions are only called
* of a factor).
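+ *
+ * In code, the detached mode is requested like this (sketch):
+ * @code
+ * SparseDirectMA27 solver;
+ * solver.set_detached_mode ();  // must precede all other calls
+ * // all subsequent operations are executed by detached_ma27
+ * @endcode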
*
* Since no more synchronisation is necessary, the
- * @p{get_synchronisation_lock} returns a reference to a member
+ * get_synchronisation_lock() function returns a reference to a member
* variable when the detached mode is set. Thus, you need not change
* your program: you can still acquire and release the lock as before,
* it will only have no effect now, since different objects of this
* you probably wanted.
*
*
- * @sect4{Internals of the detached mode}
+ * @subsection SPDMA6 Internals of the detached mode
*
* The program that actually runs the detached solver is called
- * @p{detached_ma27}, and will show up under this name in the process
+ * <tt>detached_ma27</tt>, and will show up under this name in the process
* list. It communicates with the main program through a pipe.
*
* Since the solver and the main program are two separated processes,
* just not get any new jobs, but will happily wait until the end of
* times. For this reason, the detached solver has a second thread
* running in parallel that simply checks in regular intervals whether
- * the main program is still alive, using the @p{ps} program. If this
+ * the main program is still alive, using the <tt>ps</tt> program. If this
* is no longer the case, the detached solver exits as well.
*
- * Since the intervals between two such checks are a couple of second,
+ * Since the intervals between two such checks are a couple of seconds,
* is).
*
* This function must not be
- * called after @p{initialize}
- * (or the two-argument @p{solve}
+ * called after initialize()
+ * (or the two-argument solve()
- * function has been called. If
+ * function) has been called. If
* it is to be called, then only
* right after construction of
*
* If the initialization step has
* not been performed yet, then
- * the @p{initialize} function is
+ * the initialize() function is
* called at the beginning of
* this function.
*/
* happened before, strange
* things will happen. Note that
* we can't actually call the
- * @p{factorize} function from
+ * factorize() function from
* here if it has not yet been
* called, since we have no
* access to the actual matrix.
/**
* Store whether
- * @p{set_detached_mode} has been
+ * set_detached_mode() has been
* called.
*/
bool detached_mode;
std::vector<double> A;
/**
- * Length of the @p{A} array.
+ * Length of the <tt>A</tt> array.
*/
unsigned int LA;
mutable Threads::ThreadMutex non_static_synchronisation_lock;
/**
- * Fill the @p{A} array from the
+ * Fill the <tt>A</tt> array from the
* symmetric part of the given
* matrix.
*/
/**
* This class provides an interface to the sparse direct solver MA47
* from the Harwell Subroutine Library. MA47 is a direct solver
- * specialized for sparse symmetric indefinite systems of linear equations and
- * uses a frontal elimination method. It is included in the Harwell
- * Subroutine Library (see
- * @url{http://www.cse.clrc.ac.uk/Activity/HSL}) and is written in
- * Fortran. The present class only transforms the data stored in
- * @ref{SparseMatrix} objects into the form which is required by the
- * functions resembling MA47, calls these Fortran functions, and
- * interprets some of the returned values indicating error codes,
- * etc. It also manages allocation of the right amount of temporary
- * storage required by these functions.
+ * specialized for sparse symmetric indefinite systems of linear
+ * equations and uses a frontal elimination method. It is included in
+ * the <a href="http://www.cse.clrc.ac.uk/Activity/HSL">Harwell
+ * Subroutine Library</a> and is written in Fortran. The present class
+ * only transforms the data stored in SparseMatrix objects into
+ * the form which is required by the functions resembling MA47, calls
+ * these Fortran functions, and interprets some of the returned values
+ * indicating error codes, etc. It also manages allocation of the
+ * right amount of temporary storage required by these functions.
*
*
- * @sect3{Interface and Method}
+ * @section SPDMA47a Interface and Method
*
- * For the meaning of the three functions @p{initialize},
- * @p{factorize}, and @p{solve}, as well as for the method used in
- * MA47, please see the documentation of these functions, which can be
- * obtained from @url{http://www.cse.clrc.ac.uk/Activity/HSL}. In
- * practice, one will most often call the second @p{solve} function,
- * which solves the linear system for a given right hand sidem but one
- * can as well call the three functions separately if, for example,
- * one would like to solve the same matrix for several right hand side
- * vectors; the MA47 solver can do this efficiently, as it computes a
- * decomposition of the matrix, so that subsequent solves only amount
- * to a forward-backward substitution which is significantly less
- * costly than the decomposition process.
+ * For the meaning of the three functions initialize(), factorize(),
+ * and solve(), as well as for the method used in MA47, please see the
+ * <a href="http://www.cse.clrc.ac.uk/Activity/HSL">documentation</a>
+ * of these functions. In practice, one will most often call the
+ * second solve() function, which solves the linear system for a given
+ * right hand side, but one can as well call the three functions
+ * separately if, for example, one would like to solve the same matrix
+ * for several right hand side vectors; the MA47 solver can do this
+ * efficiently, as it computes a decomposition of the matrix, so that
+ * subsequent solves only amount to a forward-backward substitution
+ * which is significantly less costly than the decomposition process.
*
*
- * @sect3{Parameters to the constructor}
+ * @section SPDMA47b Parameters to the constructor
*
* The constructor of this class takes several arguments. Their
* meaning is equivalent to those of the constructor of the
- * @ref{SparseDirectMA27} class; see there for more information.
+ * SparseDirectMA27 class; see there for more information.
*
*
- * @sect3{Note on parallelization}
+ * @section SPDMA47c Note on parallelization
*
* Due to the use of global variables through COMMON blocks, the calls
- * to the sparse direct solver routines is not multithreading-capable,
+ * to the sparse direct solver routines are not multithreading-capable,
* in different parts of your program, and may not want to use a
* global variable for locking, this class has a lock as static member
* variable, which may be accessed using the
- * @p{get_synchronisation_lock} function. Note however, that this
- * class does not perform the synchronisation for you within its
- * member functions. The reason is that you will usually want to
- * synchronise over the calls to @p{initialize} and @p{factorize},
- * since there should probably not be a call to one of these function
- * with another matrix between the calls for one matrix. (The author
- * does not really know whether this is true, but it is probably safe
- * to assume that.) Since such cross-function synchronisation can only
- * be performed from outside, it is left to the user of this class to
- * do so.
+ * get_synchronisation_lock() function. Note however, that this class
+ * does not perform the synchronisation for you within its member
+ * functions. The reason is that you will usually want to synchronise
+ * over the calls to initialize() and factorize(), since there should
+ * probably not be a call to one of these functions with another matrix
+ * between the calls for one matrix. (The author does not really know
+ * whether this is true, but it is probably safe to assume that.)
+ * Since such cross-function synchronisation can only be performed
+ * from outside, it is left to the user of this class to do so.
*
* A detached mode as for MA27 has not yet been implemented for this
* class.
*
* This function already calls
* the initialization function
- * @p{MA47ID} to set up some
+ * <tt>MA47ID</tt> to set up some
* values.
*/
SparseDirectMA47 (const double LIW_factor_1 = 1.4,
*
* If the initialization step has
* not been performed yet, then
- * the @p{initialize} function is
+ * the initialize() function is
* called at the beginning of
* this function.
*/
* happened before, strange
* things will happen. Note that
* we can't actually call the
- * @p{factorize} function from
+ * factorize() function from
* here if it has not yet been
* called, since we have no
* access to the actual matrix.
unsigned int n_nonzero_elements;
/**
- * Control values set by @p{MA47ID}.
+ * Control values set by <tt>MA47ID</tt>.
*/
double CNTL[2];
unsigned int ICNTL[7];
std::vector<double> A;
/**
- * Length of the @p{A} array.
+ * Length of the <tt>A</tt> array.
*/
unsigned int LA;
static Threads::ThreadMutex synchronisation_lock;
/**
- * Fill the @p{A} array from the
+ * Fill the <tt>A</tt> array from the
* symmetric part of the given
* matrix.
*/
void fill_A (const SparseMatrix<double> &matrix);
/**
- * Call the @p{ma47id} function
+ * Call the <tt>ma47id</tt> function
* with the given args.
*/
void call_ma47id (double *CNTL,
unsigned int *ICNTL);
/**
- * Call the @p{ma47ad} function
+ * Call the <tt>ma47ad</tt> function
* with the given args.
*/
void call_ma47ad (const unsigned int *n_rows,
int *INFO);
/**
- * Call the @p{ma47bd} function
+ * Call the <tt>ma47bd</tt> function
* with the given args.
*/
void call_ma47bd (const unsigned int *n_rows,
int *INFO);
/**
- * Call the @p{ma47bd} function
+ * Call the <tt>ma47bd</tt> function
* with the given args.
*/
void call_ma47cd (const unsigned int *n_rows,
// $Id$
// Version: $Name$
//
-// Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003 by the deal.II authors
+// Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004 by the deal.II authors
//
// This file is subject to QPL and may not be distributed
// without copyright and license information. Please refer
/**
* Sparse matrix.
*
- *
- * @sect2{On template instantiations}
- *
- * Member functions of this class are either implemented in this file
- * or in a file of the same name with suffix ``.templates.h''. For the
- * most common combinations of the template parameters, instantiations
- * of this class are provided in a file with suffix ``.cc'' in the
- * ``source'' directory. If you need an instantiation that is not
- * listed there, you have to include this file along with the
- * corresponding ``.templates.h'' file and instantiate the respective
- * class yourself.
- *
+ * Some template instantiations of this class are provided; see
+ * @ref Instantiations in the manual.
+
* @author several, 1994-2003
*/
template <typename number>
*/
bool operator == (const const_iterator&) const;
/**
- * Inverse of @p{==}.
+ * Inverse of <tt>==</tt>.
*/
bool operator != (const const_iterator&) const;
*
* You have to initialize
* the matrix before usage with
- * @p{reinit(SparsityPattern)}.
+ * reinit(const SparsityPattern&).
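+ *
+ * A typical sequence might look like this (sketch):
+ * @code
+ * SparsityPattern sparsity (m, n, max_per_row);
+ * // ... add entries, then compress:
+ * sparsity.compress ();
+ *
+ * SparseMatrix<double> matrix;
+ * matrix.reinit (sparsity);
+ * @endcode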
*/
SparseMatrix ();
* only allowed to be called if the matrix
* to be copied is empty. This is for the
* same reason as for the
- * @p{SparsityPattern}, see there for the
+ * SparsityPattern, see there for the
* details.
*
* If you really want to copy a whole
* matrix, you can do so by using the
- * @p{copy_from} function.
+ * copy_from() function.
*/
SparseMatrix (const SparseMatrix &);
* represent the sparsity pattern
* of this matrix. You can change
* the sparsity pattern later on
- * by calling the @p{reinit}
- * function.
+ * by calling the reinit(const
+ * SparsityPattern&) function.
*
* You have to make sure that the
* lifetime of the sparsity
* structure is at least as long
* as that of this matrix or as
- * long as @p{reinit} is not
+ * long as reinit(const
+ * SparsityPattern&) is not
* called with a new sparsity
- * structure.
+ * pattern.
*
* The constructor is marked
* explicit so as to disallow
* Reinitialize the object but
* keep to the sparsity pattern
* previously used. This may be
- * necessary if you @p{reinit}'d
- * the sparsity structure and
+ * necessary if the sparsity
+ * structure has changed and you
* want to update the size of the
* matrix.
*
* lifetime of the sparsity
* structure is at least as long
* as that of this matrix or as
- * long as @p{reinit} is not called
- * with a new sparsity structure.
+ * long as reinit(const
+ * SparsityPattern &) is not
+ * called with a new sparsity
+ * structure.
*
* The elements of the matrix are
* set to zero by this function.
* Return whether the object is
* empty. It is empty if either
* both dimensions are zero or no
- * @p{SparsityPattern} is
+ * SparsityPattern is
* associated.
*/
bool empty () const;
* matrix.
*
- * Note, that this function does
+ * Note that this function does
- * (in contrary to the
- * @p{n_nonzero_elements}) NOT
+ * (in contrast to
+ * n_nonzero_elements()) not
* count all entries of the
* sparsity pattern but only the
* ones that are nonzero.
unsigned int n_actually_nonzero_elements () const;
/**
- * Set the element @p{(i,j)} to @p{value}.
- * Throws an error if the entry does
- * not exist. Still, it is allowed to store
- * zero values in non-existent fields.
+ * Set the element (<i>i,j</i>)
+ * to <tt>value</tt>. Throws an
+ * error if the entry does not
+ * exist. Still, it is allowed to
+ * store zero values in
+ * non-existent fields.
*/
void set (const unsigned int i, const unsigned int j,
const number value);
SparseMatrix & operator /= (const number factor);
/**
- * Add @p{value} to the element
- * @p{(i,j)}. Throws an error if
- * the entry does not
+ * Add <tt>value</tt> to the
+ * element (<i>i,j</i>). Throws
+ * an error if the entry does not
* exist. Still, it is allowed to
* store zero values in
* non-existent fields.
* in the
* symmetrization. Symmetrization
* of the sparsity pattern can be
- * obtain by the
- * @ref{SparsityPattern}@p{::symmetrize}
- * function.
+ * obtained by
+ * SparsityPattern::symmetrize().
*/
void symmetrize ();
* cheaper. Since this operation
- * is notheless not for free, we
+ * is nonetheless not for free, we
* do not make it available
- * through @p{operator =}, since
- * this may lead to unwanted
- * usage, e.g. in copy arguments
- * to functions, which should
- * really be arguments by
+ * through <tt>operator =</tt>,
+ * since this may lead to
+ * unwanted usage, e.g. in copy
+ * arguments to functions, which
+ * should really be arguments by
* reference.
*
* The source matrix may be a matrix
* data type of this matrix.
*
* The function returns a reference to
- * @p{this}.
+ * <tt>*this</tt>.
*/
template <typename somenumber>
SparseMatrix<number> &
/**
- * This function is complete
+ * This function is completely
* analogous to the
- * @ref{SparsityPattern}@p{::copy_from}
+ * SparsityPattern::copy_from()
* function in that it allows to
* initialize a whole matrix in
* one step. See there for more
* cited function is that the
* objects which the inner
* iterator points to need to be
- * of type @p{std::pair<unsigned int, value},
- * where @p{value}
- * needs to be convertible to the
- * element type of this class, as
- * specified by the @p{number}
- * template argument.
+ * of type <tt>std::pair<unsigned
+ * int, value></tt>, where
+ * <tt>value</tt> needs to be
+ * convertible to the element
+ * type of this class, as
+ * specified by the
+ * <tt>number</tt> template
+ * argument.
*
* Previous content of the matrix
* is overwritten. Note that the
void copy_from (const FullMatrix<somenumber> &matrix);
/**
- * Add @p{matrix} scaled by
- * @p{factor} to this matrix. The
- * function throws an error if
- * the sparsity patterns of the
- * two involved matrices do not
- * point to the same object,
- * since in this case the
+ * Add <tt>matrix</tt> scaled by
+ * <tt>factor</tt> to this
+ * matrix. The function throws an
+ * error if the sparsity patterns
+ * of the two involved matrices
+ * do not point to the same
+ * object, since in this case the
* operation is cheaper.
*
* The source matrix may be a matrix
/**
* Return the value of the entry
- * (i,j). This may be an
+ * (<i>i,j</i>). This may be an
* expensive operation and you
* should always take care where
* to call this function. In
* that returns zero instead (for
* entries that are not in the
* sparsity pattern of the
- * matrix), use the @p{el}
+ * matrix), use the el()
* function.
+ *
+ * @deprecated Consider using
+ * const_iterator instead, since
+ * it is tailored better to a
+ * sparse matrix structure.
*/
number operator () (const unsigned int i,
const unsigned int j) const;
/**
* This function is mostly like
- * @p{operator()} in that it
+ * operator()() in that it
* returns the value of the
- * matrix entry @p{(i,j)}. The only
- * difference is that if this
- * entry does not exist in the
- * sparsity pattern, then instead
- * of raising an exception, zero
- * is returned. While this may be
+ * matrix entry (<i>i,j</i>). The
+ * only difference is that if
+ * this entry does not exist in
+ * the sparsity pattern, then
+ * instead of raising an
+ * exception, zero is
+ * returned. While this may be
* convenient in some cases, note
* that it is simple to write
* algorithms that are slow
* compared to an optimal
* solution, since the sparsity
* of the matrix is not used.
+ *
+ * @deprecated Consider using
+ * const_iterator instead, since
+ * it is tailored better to a
+ * sparse matrix structure.
*/
number el (const unsigned int i,
const unsigned int j) const;
/**
- * Return the main diagonal element in
- * the @p{i}th row. This function throws an
- * error if the matrix is not square.
+ * Return the main diagonal
+ * element in the <i>i</i>th
+ * row. This function throws an
+ * error if the matrix is not
+ * quadratic (see
+ * SparsityPattern::optimize_diagonal()).
*
* This function is considerably
- * faster than the @p{operator()},
- * since for square matrices, the
- * diagonal entry is always the
+ * faster than the operator()(),
+ * since for quadratic matrices, the
+ * diagonal entry may be the
* first to be stored in each row
* and access therefore does not
* involve searching for the
/**
* Access to values in internal
* mode. Returns the value of
- * the @p{index}th entry in
- * @p{row}. Here, @p{index} refers to
- * the internal representation of
- * the matrix, not the column. Be
- * sure to understand what you are
- * doing here.
+ * the <tt>index</tt>th entry in
+ * <tt>row</tt>. Here,
+ * <tt>index</tt> refers to the
+ * internal representation of the
+ * matrix, not the column. Be
+ * sure to understand what you
+ * are doing here.
+ *
+ * @deprecated Use const_iterator
+ * instead!
*/
number raw_entry (const unsigned int row,
const unsigned int index) const;
/**
+ * @internal
+ * @deprecated Use const_iterator
+ * instead!
+ *
* This is for hackers. Get
- * access to the @p{i}th element of
+ * access to the <i>i</i>th element of
* this matrix. The elements are
* stored in a consecutive way,
- * refer to the @p{SparsityPattern}
+ * refer to the SparsityPattern
* class for more details.
*
* You should use this interface
number global_entry (const unsigned int i) const;
/**
+ * @internal
+ * @deprecated Use const_iterator
+ * instead!
+ *
* Same as above, but with write
* access. You certainly know
- * what you do?
+ * what you are doing?
/**
* Matrix-vector multiplication:
- * let $dst = M*src$ with $M$
- * being this matrix.
+ * let <i>dst = M*src</i> with
+ * <i>M</i> being this matrix.
*/
template <typename somenumber>
void vmult (Vector<somenumber> &dst,
/**
* Matrix-vector multiplication:
- * let $dst = M^T*src$ with $M$
- * being this matrix. This
- * function does the same as
- * @p{vmult} but takes the
- * transposed matrix.
+ * let <i>dst = M<sup>T</sup>*src</i> with
+ * <i>M</i> being this
+ * matrix. This function does the
+ * same as vmult() but takes
+ * the transposed matrix.
*/
template <typename somenumber>
void Tvmult (Vector<somenumber> &dst,
/**
* Adding Matrix-vector
- * multiplication. Add $M*src$ on
- * $dst$ with $M$ being this
+ * multiplication. Add
+ * <i>M*src</i> on <i>dst</i>
+ * with <i>M</i> being this
* matrix.
*/
template <typename somenumber>
/**
* Adding Matrix-vector
- * multiplication. Add $M^T*src$
- * to $dst$ with $M$ being this
- * matrix. This function does the
- * same as @p{vmult_add} but takes
- * the transposed matrix.
+ * multiplication. Add
+ * <i>M<sup>T</sup>*src</i> to
+ * <i>dst</i> with <i>M</i> being
+ * this matrix. This function
+ * does the same as vmult_add()
+ * but takes the transposed
+ * matrix.
*/
template <typename somenumber>
void Tvmult_add (Vector<somenumber> &dst,
* element function.
*
* Obviously, the matrix needs to
- * be square for this operation.
+ * be quadratic for this operation.
*/
template <typename somenumber>
somenumber matrix_norm_square (const Vector<somenumber> &v) const;
/**
* Compute the residual of an
- * equation @p{Mx=b}, where the
- * residual is defined to be
- * @p{r=b-Mx} with @p{x} typically
- * being an approximate of the
- * true solution of the
- * equation. Write the residual
- * into @p{dst}. The l2 norm of
+ * equation <i>Mx=b</i>, where
+ * the residual is defined to be
+ * <i>r=b-Mx</i>. Write the
+ * residual into
+ * <tt>dst</tt>. The
+ * <i>l<sub>2</sub></i> norm of
* the residual vector is
* returned.
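+ *
+ * For example (sketch; <tt>r</tt>, <tt>x</tt> and <tt>b</tt> are
+ * vectors of suitable size):
+ * @code
+ * const double norm = matrix.residual (r, x, b);  // r = b - M x
+ * @endcode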
*/
* Apply the Jacobi
* preconditioner, which
* multiplies every element of
- * the @p{src} vector by the
+ * the <tt>src</tt> vector by the
* inverse of the respective
* diagonal element and
* multiplies the result with the
- * damping factor @p{omega}.
+ * relaxation factor <tt>omega</tt>.
*/
template <typename somenumber>
void precondition_Jacobi (Vector<somenumber> &dst,
/**
* Apply SSOR preconditioning to
- * @p{src}.
+ * <tt>src</tt>.
*/
template <typename somenumber>
void precondition_SSOR (Vector<somenumber> &dst,
const number om = 1.) const;
/**
- * Apply SOR preconditioning matrix to @p{src}.
- * The result of this method is
- * $dst = (om D - L)^{-1} src$.
+ * Apply SOR preconditioning
+ * matrix to <tt>src</tt>.
*/
template <typename somenumber>
void precondition_SOR (Vector<somenumber> &dst,
const number om = 1.) const;
/**
- * Apply transpose SOR preconditioning matrix to @p{src}.
- * The result of this method is
- * $dst = (om D - U)^{-1} src$.
+ * Apply transpose SOR
+ * preconditioning matrix to
+ * <tt>src</tt>.
*/
template <typename somenumber>
void precondition_TSOR (Vector<somenumber> &dst,
* in-place. Apply the
* preconditioner matrix without
* copying to a second vector.
- * @p{omega} is the relaxation
+ * <tt>omega</tt> is the relaxation
* parameter.
*/
template <typename somenumber>
const number omega = 1.) const;
/**
- * Perform an SOR preconditioning in-place.
- * The result is $v = (\omega D - L)^{-1} v$.
- * @p{omega} is the damping parameter.
+ * Perform an SOR preconditioning
+ * in-place. <tt>omega</tt> is
+ * the relaxation parameter.
*/
template <typename somenumber>
void SOR (Vector<somenumber> &v,
const number om = 1.) const;
/**
- * Perform a transpose SOR preconditioning in-place.
- * The result is $v = (\omega D - L)^{-1} v$.
- * @p{omega} is the damping parameter.
+ * Perform a transpose SOR
+ * preconditioning in-place.
+ * <tt>omega</tt> is the
+ * relaxation parameter.
*/
template <typename somenumber>
void TSOR (Vector<somenumber> &v,
*
* The standard SOR method is
* applied in the order
- * prescribed by @p{permutation},
+ * prescribed by <tt>permutation</tt>,
* that is, first the row
- * @p{permutation[0]}, then
- * @p{permutation[1]} and so
+ * <tt>permutation[0]</tt>, then
+ * <tt>permutation[1]</tt> and so
* on. For efficiency reasons,
* the permutation as well as its
* inverse are required.
*
- * @p{omega} is the relaxation
- * parameter.
+ * <tt>omega</tt> is the
+ * relaxation parameter.
*/
template <typename somenumber>
void PSOR (Vector<somenumber> &v,
*
* The transposed SOR method is
* applied in the order
- * prescribed by @p{permutation},
- * that is, first the row
- * @p{permutation[m()-1]}, then
- * @p{permutation[m()-2]} and so
- * on. For efficiency reasons,
- * the permutation as well as its
- * inverse are required.
+ * prescribed by
+ * <tt>permutation</tt>, that is,
+ * first the row
+ * <tt>permutation[m()-1]</tt>,
+ * then
+ * <tt>permutation[m()-2]</tt>
+ * and so on. For efficiency
+ * reasons, the permutation as
+ * well as its inverse are
+ * required.
*
- * @p{omega} is the relaxation
- * parameter.
+ * <tt>omega</tt> is the
+ * relaxation parameter.
*/
template <typename somenumber>
void TPSOR (Vector<somenumber> &v,
const number om = 1.) const;
/**
- * Do one SOR step on @p{v}.
+ * Do one SOR step on <tt>v</tt>.
* Performs a direct SOR step
- * with right hand side @p{b}.
+ * with right hand side
+ * <tt>b</tt>.
*/
template <typename somenumber>
void SOR_step (Vector<somenumber> &v,
/**
* Do one adjoint SOR step on
- * @p{v}. Performs a direct TSOR
- * step with right hand side @p{b}.
+ * <tt>v</tt>. Performs a direct
+ * TSOR step with right hand side
+ * <tt>b</tt>.
*/
template <typename somenumber>
void TSOR_step (Vector<somenumber> &v,
const number om = 1.) const;
/**
- * Do one adjoint SSOR step on
- * @p{v}. Performs a direct SSOR
- * step with right hand side @p{b}
- * by performing TSOR after SOR.
+ * Do one SSOR step on
+ * <tt>v</tt>. Performs a direct
+ * SSOR step with right hand side
+ * <tt>b</tt> by performing TSOR
+ * after SOR.
*/
template <typename somenumber>
void SSOR_step (Vector<somenumber> &v,
* pattern of this matrix.
*
* Though the return value is
- * declared @p{const}, you should
- * be aware that it may change if
- * you call any nonconstant
- * function of objects which
- * operate on it.
+ * declared <tt>const</tt>, you
+ * should be aware that it may
+ * change if you call any
+ * nonconstant function of
+ * objects which operate on it.
*/
const SparsityPattern & get_sparsity_pattern () const;
/**
* STL-like iterator with the
- * first entry of row @p{r}.
+ * first entry of row <tt>r</tt>.
*/
const_iterator begin (const unsigned int r) const;
/**
- * Final iterator of row @p{r}.
+ * Final iterator of row
+ * <tt>r</tt>.
*/
const_iterator end (const unsigned int r) const;
/**
* Print the matrix to the given
* stream, using the format
- * @p{(line,col) value}, i.e. one
- * nonzero entry of the matrix
- * per line.
+ * <tt>(line,col) value</tt>,
+ * i.e. one nonzero entry of the
+ * matrix per line.
*/
void print (std::ostream &out) const;
*
* The parameters allow for a
* flexible setting of the output
- * format: @p{precision} and
- * @p{scientific} are used to
- * determine the number format,
- * where @p{scientific} = @p{false}
- * means fixed point notation. A
- * zero entry for @p{width} makes
- * the function compute a width,
- * but it may be changed to a
+ * format: <tt>precision</tt> and
+ * <tt>scientific</tt> are used
+ * to determine the number
+ * format, where <tt>scientific =
+ * false</tt> means fixed point
+ * notation. A zero entry for
+ * <tt>width</tt> makes the
+ * function compute a width, but
+ * it may be changed to a
* positive value, if output is
* crude.
*
* readable output, even
* integers.
*
- * This function
- * may produce @em{large} amounts of
- * output if applied to a large matrix!
+ * @attention This function may
+ * produce <b>large</b> amounts
+ * of output if applied to a
+ * large matrix!
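+ *
+ * For example (sketch; the remaining parameters keep their default
+ * values):
+ * @code
+ * matrix.print_formatted (std::cout, 3, false);  // fixed point, 3 digits
+ * @endcode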
*/
void print_formatted (std::ostream &out,
const unsigned int precision = 3,
/**
* Read data that has previously
- * been written by
- * @p{block_write} en block from
- * a file. This is done using the
- * inverse operations to the
- * above function, so it is
- * reasonably fast because the
+ * been written by block_write()
+ * from a file. This is done
+ * using the inverse operations
+ * to the above function, so it
+ * is reasonably fast because the
* bitstream is not interpreted
* except for a few numbers up
* front.
* contents are lost. Note,
* however, that no checks are
* performed whether new data and
- * the underlying
- * @ref{SparsityPattern} object
- * fit together. It is your
- * responsibility to make sure
- * that the sparsity pattern and
- * the data to be read match.
+ * the underlying SparsityPattern
+ * object fit together. It is
+ * your responsibility to make
+ * sure that the sparsity pattern
+ * and the data to be read match.
*
* A primitive form of error
* checking is performed which
/**
* Determine an estimate for the
* memory consumption (in bytes)
- * of this object.
+ * of this object. See
+ * MemoryConsumption.
*/
unsigned int memory_consumption () const;
* matrix. In order to guarantee
* that it is not deleted while
* still in use, we subscribe to
- * it using the @p{SmartPointer}
+ * it using the SmartPointer
* class.
*/
SmartPointer<const SparsityPattern> cols;
number *val;
/**
- * Allocated size of
- * @p{val}. This can be larger
- * than the actually used part if
- * the size of the matrix was
- * reduced somewhen in the past
- * by associating a sparsity
- * pattern with a smaller size to
- * this object, using the
- * @p{reinit} function.
+ * Allocated size of #val. This
+ * can be larger than the
+ * actually used part if the size
+ * of the matrix was reduced
+ * somewhen in the past by
+ * associating a sparsity pattern
+ * with a smaller size to this
+ * object, using the reinit()
+ * function.
*/
unsigned int max_len;
/**
- * Version of @p{vmult} which only
+ * Version of vmult() which only
* performs its actions on the
* region defined by
- * @p{[begin_row,end_row)}. This
- * function is called by @p{vmult}
+ * <tt>[begin_row,end_row)</tt>. This
+ * function is called by vmult()
* in the case of enabled
* multithreading.
*/
/**
* Version of
- * @p{matrix_norm_square} which
+ * matrix_norm_square() which
* only performs its actions on
* the region defined by
- * @p{[begin_row,end_row)}. This
+ * <tt>[begin_row,end_row)</tt>. This
* function is called by
- * @p{matrix_norm_square} in the
+ * matrix_norm_square() in the
* case of enabled
* multithreading.
*/
/**
* Version of
- * @p{matrix_scalar_product} which
+ * matrix_scalar_product() which
* only performs its actions on
* the region defined by
- * @p{[begin_row,end_row)}. This
+ * <tt>[begin_row,end_row)</tt>. This
* function is called by
- * @p{matrix_scalar_product} in the
+ * matrix_scalar_product() in the
* case of enabled
* multithreading.
*/
somenumber *partial_sum) const;
/**
- * Version of @p{residual} which
+ * Version of residual() which
* only performs its actions on
* the region defined by
- * @p{[begin_row,end_row)} (these
- * numbers are the components of
- * @p{interval}). This function is
- * called by @p{residual} in the
- * case of enabled
- * multithreading.
+ * <tt>[begin_row,end_row)</tt>
+ * (these numbers are the
+ * components of
+ * <tt>interval</tt>). This
+ * function is called by
+ * residual() in the case of
+ * enabled multithreading.
*/
template <typename somenumber>
void threaded_residual (Vector<somenumber> &dst,
Assert (i<m(), ExcInvalidIndex1(i));
// Use that the first element in each
- // row of a square matrix is the main
+ // row of a quadratic matrix is the main
// diagonal
return val[cols->rowstart[i]];
}
Assert (i<m(), ExcInvalidIndex1(i));
// Use that the first element in each
- // row of a square matrix is the main
+ // row of a quadratic matrix is the main
// diagonal
return val[cols->rowstart[i]];
}
// $Id$
// Version: $Name$
//
-// Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003 by the deal.II authors
+// Copyright (C) 1998, 1999, 2000, 2001, 2002, 2003, 2004 by the deal.II authors
//
// This file is subject to QPL and may not be distributed
// without copyright and license information. Please refer
* Structure representing the sparsity pattern of a sparse matrix.
*
* The following picture will illustrate the relation between the
- * @p{SparsityPattern} an the @p{SparseMatrix}.
+ * SparsityPattern and the SparseMatrix.
*
- * @begin{verbatim}
+ * @verbatim
* SparsityPattern: \
- * @begin{verbatim}
+ * @verbatim
* For row = 0
*
- * it exists: (0| 3) = colnums[0]
+ * there are: (0| 3) = colnums[0]
* (0| 2) = colnums[1]
* (0| 9) = colnums[2]
* (0|17) = colnums[3]
*
* For row = 1
*
- * it exists: (1| 1) = colnums[4]
+ * there are: (1| 1) = colnums[4]
* (1| 4) = colnums[5]
* ....
*
* /
- * @end{verbatim}
+ * @endverbatim
*
- * If you want to get the @p{3} you need to get its position in the
- * table above and its value by returning the value of the element on
- * which the pointer shows, using @p{*val}. For example @p{val[8]=3}. Its
- * position is @p{colnums[8]} so @p{row=2}. In other words, if you want to get
- * the element @p{a_{24}} you know that @p{row=2}. To get the element, a
- * search of @p{4} form @p{colnums[rowstart[2]]} to @p{colnums[rowstart[3]]} is
- * needed. Then @p{a_{24}=val[number of the found element] = 3}.
+ * If you want to get the <tt>3</tt> you need to get its position in
+ * the table above and its value by returning the value of the element
+ * to which the pointer points, using <tt>*val</tt>. For example
+ * <tt>val[8]=3</tt>. Its position is <tt>colnums[8]</tt> so
+ * <tt>row=2</tt>. In other words, if you want to get the element
+ * <i>a<sub>24</sub></i> you know that <tt>row=2</tt>. To get the
+ * element, a search of <tt>4</tt> from <tt>colnums[rowstart[2]]</tt>
+ * to <tt>colnums[rowstart[3]]</tt> is needed. Then
+ * <i>a<sub>24</sub></i>=<tt>val[number of the found element] =
+ * 3</tt>.
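+ *
+ * The search just described might be coded as follows (a sketch in
+ * terms of the internal arrays named above):
+ * @code
+ * // find a_24: scan row 2 of colnums for column number 4
+ * unsigned int index = SparsityPattern::invalid_entry;
+ * for (unsigned int k=rowstart[2]; k<rowstart[3]; ++k)
+ *   if (colnums[k] == 4)
+ *     {
+ *       index = k;
+ *       break;
+ *     }
+ * // if found, a_24 == val[index]
+ * @endcode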
*
*
* @author Wolfgang Bangerth and others
/**
* Define a value which is used
* to indicate that a certain
- * value in the @p{colnums} array
+ * value in the #colnums array
* is unused, i.e. does not
* represent a certain column
* number index.
* Indices with this invalid
* value are used to insert new
* entries to the sparsity
- * pattern using the @p{add} member
+ * pattern using the add() member
* function, and are removed when
- * calling @p{compress}.
+ * calling compress().
*
* You should not assume that the
* variable declared here has a
* member variables in other
* classes. You can make the
* structure usable by calling
- * the @p{reinit} function.
+ * the reinit() function.
*/
SparsityPattern ();
/**
- * Copy constructor. This constructor is
- * only allowed to be called if the matrix
- * structure to be copied is empty. This is
- * so in order to prevent involuntary
- * copies of objects for temporaries, which
- * can use large amounts of computing time.
- * However, copy constructors are needed
- * if yo want to use the STL data types
- * on classes like this, e.g. to write
- * such statements like
- * @p{v.push_back (SparsityPattern());},
- * with @p{v} a vector of @p{SparsityPattern}
- * objects.
+ * Copy constructor. This
+ * constructor is only allowed to
+ * be called if the matrix
+ * structure to be copied is
+ * empty. This is so in order to
+ * prevent involuntary copies of
+ * objects for temporaries, which
+ * can use large amounts of
+ * computing time. However, copy
+ * constructors are needed if you
+ * want to use the STL data types
+ * on classes like this, e.g. to
+ * write such statements like
+ * <tt>v.push_back
+ * (SparsityPattern());</tt>,
+ * with <tt>v</tt> a vector of
+ * SparsityPattern objects.
*
- * Usually, it is sufficient to use the
- * explicit keyword to disallow unwanted
- * temporaries, but for the STL vectors,
- * this does not work. Since copying a
- * structure like this is not useful
- * anyway because multiple matrices can
- * use the same sparsity structure, copies
- * are only allowed for empty objects, as
- * described above.
+ * Usually, it is sufficient to
+ * use the explicit keyword to
+ * disallow unwanted temporaries,
+ * but for the STL vectors, this
+ * does not work. Since copying a
+ * structure like this is not
+ * useful anyway because multiple
+ * matrices can use the same
+ * sparsity structure, copies are
+ * only allowed for empty
+ * objects, as described above.
*/
SparsityPattern (const SparsityPattern &);
/**
- * Initialize a rectangular matrix with
- * @p{m} rows and @p{n} columns.
- * The matrix may contain at most @p{max_per_row}
- * nonzero entries per row.
+ * Initialize a rectangular
+ * matrix with <tt>m</tt> rows
+ * and <tt>n</tt> columns. The
+ * matrix may contain at most
+ * <tt>max_per_row</tt> nonzero
+ * entries per row.
*
* If the matrix is quadratic,
* then the last parameter
/**
* Initialize a rectangular
- * matrix with @p{m} rows and @p{n}
- * columns. The maximal number
- * of nonzero entries for each
- * row is given by the
- * @p{row_lengths} array.
+ * matrix with <tt>m</tt> rows
+ * and <tt>n</tt> columns. The
+ * maximal number of nonzero
+ * entries for each row is given
+ * by the <tt>row_lengths</tt>
+ * array.
*/
SparsityPattern (const unsigned int m,
const unsigned int n,
/**
* Initialize a quadratic matrix of dimension
- * @p{n} with at most @p{max_per_row}
+ * <tt>n</tt> with at most <tt>max_per_row</tt>
* nonzero entries per row.
*
* This constructor automatically
const unsigned int max_per_row);
/**
- * Initialize a quadratic
- * matrix with @p{m} rows and @p{m}
- * columns. The maximal number
- * of nonzero entries for each
- * row is given by the
- * @p{row_lengths} array.
+ * Initialize a quadratic matrix
+ * with <tt>m</tt> rows and
+ * <tt>m</tt> columns. The
+ * maximal number of nonzero
+ * entries for each row is given
+ * by the <tt>row_lengths</tt>
+ * array.
*/
SparsityPattern (const unsigned int m,
const std::vector<unsigned int> &row_lengths,
* or other incomplete decompositions.
- * Therefore, additional to the original
+ * Therefore, in addition to the original
* entry structure, space for
- * @p{extra_off_diagonals}
+ * <tt>extra_off_diagonals</tt>
* side-diagonals is provided on both
* sides of the main diagonal.
*
- * @p{max_per_row} is the maximum number of
- * nonzero elements per row which this
- * structure is to hold. It is assumed
- * that this number is sufficiently large
- * to accomodate both the elements in
- * @p{original} as well as the new
- * off-diagonal elements created by this
- * constructor. You will usually want to
- * give the same number as you gave for
- * @p{original} plus the number of side
- * diagonals times two. You may however
- * give a larger value if you wish to add
- * further nonzero entries for the
- * decomposition based on other criteria
- * than their being on side-diagonals.
+ * <tt>max_per_row</tt> is the
+ * maximum number of nonzero
+ * elements per row which this
+ * structure is to hold. It is
+ * assumed that this number is
+ * sufficiently large to
+ * accommodate both the elements
+ * in <tt>original</tt> as well
+ * as the new off-diagonal
+ * elements created by this
+ * constructor. You will usually
+ * want to give the same number
+ * as you gave for
+ * <tt>original</tt> plus the
+ * number of side diagonals times
+ * two. You may however give a
+ * larger value if you wish to
+ * add further nonzero entries
+ * for the decomposition based on
+ * other criteria than their
+ * being on side-diagonals.
*
- * This function requires that @p{original}
- * refer to a quadratic matrix structure.
- * It shall be compressed. The matrix
- * structure is not compressed
- * after this function finishes.
+ * This function requires that
+ * <tt>original</tt> refers to a
+ * quadratic matrix structure.
+ * It must be compressed. The
+ * matrix structure is not
+ * compressed after this function
+ * finishes.
*/
SparsityPattern (const SparsityPattern &original,
const unsigned int max_per_row,
/**
* Reallocate memory and set up data
* structures for a new matrix with
- * @p{m} rows and @p{n} columns,
- * with at most @p{max_per_row}
+ * <tt>m</tt> rows and <tt>n</tt> columns,
+ * with at most <tt>max_per_row</tt>
* nonzero entries per row.
*
* This function simply maps its
* operations to the other
- * @p{reinit} function.
+ * <tt>reinit</tt> function.
*/
void reinit (const unsigned int m,
const unsigned int n,
/**
* Reallocate memory for a matrix
- * of size @p{m \times n}. The
+ * of size <tt>m x n</tt>. The
* number of entries for each row
* is taken from the array
- * @p{row_lengths} which has to
+ * <tt>row_lengths</tt> which has to
- * give this number of each row
+ * give this number for each row
- * @p{i=1...m}.
+ * <tt>i=1...m</tt>.
*
- * If @p{m*n==0} all memory is freed,
+ * If <tt>m*n==0</tt> all memory is freed,
* resulting in a total reinitialization
* of the object. If it is nonzero, new
* memory is only allocated if the new
- * The memory which is no more
+ * The memory which is no longer
* needed is released.
*
- * @p{SparseMatrix} objects require the
- * @p{SparsityPattern} objects they are
+ * SparseMatrix objects require the
+ * SparsityPattern objects they are
* initialized with to be compressed, to
* reduce memory requirements.
*/
/**
* This function can be used as a
- * replacement for @ref{reinit},
- * subsequent calls to @ref{add}
- * and a final call to
- * @ref{close} if you know
- * exactly in advance the entries
- * that will form the matrix
- * sparsity pattern.
+ * replacement for reinit(),
+ * subsequent calls to add() and
+ * a final call to close() if you
+ * know exactly in advance the
+ * entries that will form the
+ * matrix sparsity pattern.
*
* The first two parameters
* determine the size of the
* be equal to
- * @ref{n_rows}. These iterators
+ * <tt>n_rows</tt>. These iterators
* may be iterators of
- * @p{std::vector},
- * @p{std::list}, pointers into a
+ * <tt>std::vector</tt>,
+ * <tt>std::list</tt>, pointers into a
* C-style array, or any other
* iterator satisfying the
* requirements of a forward
* iterator. The objects pointed
* to by these iterators
* (i.e. what we get after
- * applying @p{operator*} or
- * @p{operator->} to one of these
+ * applying <tt>operator*</tt> or
+ * <tt>operator-></tt> to one of these
* iterators) must be a container
* itself that provides functions
- * @p{begin} and @p{end}
+ * <tt>begin</tt> and <tt>end</tt>
* designating a range of
* iterators that describe the
* contents of one
* following example code, which
* may be used to fill a sparsity
* pattern:
- * @begin{verbatim}
+ * @code
* std::vector<std::vector<unsigned int> > column_indices (n_rows);
* for (unsigned int row=0; row<n_rows; ++row)
* // generate necessary columns in this row
* sparsity.copy_from (n_rows, n_cols,
* column_indices.begin(),
* column_indices.end());
- * @end{verbatim}
+ * @endcode
*
* Note that this example works
* since the iterators
* dereferenced yield containers
- * with functions @p{begin} and
- * @p{end} (namely
- * @p{std::vector}s), and the
+ * with functions <tt>begin</tt> and
+ * <tt>end</tt> (namely
+ * <tt>std::vector</tt>s), and the
* inner iterators dereferenced
* yield unsigned integers as
* column indices. Note that we
* could have replaced each of
- * the two @p{std::vector}
- * occurrences by @p{std::list},
+ * the two <tt>std::vector</tt>
+ * occurrences by <tt>std::list</tt>,
* and the inner one by
- * @p{std::set} as well.
+ * <tt>std::set</tt> as well.
*
* Another example would be as
* follows, where we initialize a
* whole matrix, not only a
* sparsity pattern:
- * @begin{verbatim}
+ * @code
* std::vector<std::map<unsigned int,double> > entries (n_rows);
* for (unsigned int row=0; row<n_rows; ++row)
* // generate necessary pairs of columns
* matrix.reinit (sparsity);
* matrix.copy_from (column_indices.begin(),
* column_indices.end());
- * @end{verbatim}
+ * @endcode
*
* This example works because
* dereferencing iterators of the
* unsigned integers and a value,
* the first of which we take as
* column index. As previously,
- * the outer @p{std::vector}
+ * the outer <tt>std::vector</tt>
* could be replaced by
- * @p{std::list}, and the inner
- * @p{std::map<unsigned int,double>}
+ * <tt>std::list</tt>, and the inner
+ * <tt>std::map<unsigned int,double></tt>
* could be replaced by
- * @p{std::vector<std::pair<unsigned int,double> >},
+ * <tt>std::vector<std::pair<unsigned int,double> ></tt>,
* or a list or set of such
* pairs, as they all return
* iterators that point to such
/**
* Copy data from an object of
* type
- * @ref{CompressedSparsityPattern}.
+ * CompressedSparsityPattern.
* Previous content of this
* object is lost, and the
* sparsity pattern is in
/**
* Return the index of the matrix
- * element with row number @p{i}
- * and column number @p{j}. If
+ * element with row number <tt>i</tt>
+ * and column number <tt>j</tt>. If
* the matrix element is not a
* nonzero one, return
- * @p{SparsityPattern::invalid_entry}.
+ * SparsityPattern::invalid_entry.
*
- * This function is usually called
- * by the @p{operator()} of the
- * @p{SparseMatrix}. It shall only be
- * called for compressed sparsity
- * patterns, since in this case
- * searching whether the entry
- * exists can be done quite fast
- * with a binary sort algorithm
- * because the column numbers are
- * sorted.
+ * This function is usually
+ * called by the
+ * SparseMatrix::operator()(). It
+ * may only be called for
+ * compressed sparsity patterns,
+ * since in this case searching
+ * whether the entry exists can
+ * be done quite fast with a
+ * binary sort algorithm because
+ * the column numbers are sorted.
*
- * If @p{m} is the number of
- * entries in @p{row}, then the
+ * If <tt>m</tt> is the number of
+ * entries in <tt>row</tt>, then the
* complexity of this function is
- * @p{log(m)} if the sparsity
+ * <i>log(m)</i> if the sparsity
* pattern is compressed.
+ *
+ * @deprecated Use
+ * SparseMatrix::const_iterator instead.
*/
unsigned int operator() (const unsigned int i,
const unsigned int j) const;
/**
* This is the inverse operation
- * to @p{operator()}: given a
+ * to operator()(): given a
* global index, find out row and
* column of the matrix entry to
 * which it belongs. The returned
 * value is the pair of row and
 * column index. This function
 * may only be
 * called if the sparsity pattern
* is closed. The global index
* must then be between zero and
- * @p{n_nonzero_elements}.
+ * n_nonzero_elements().
*
- * If @p{N} is the number of
+ * If <tt>N</tt> is the number of
* rows of this matrix, then the
* complexity of this function is
- * @p{log(N)}.
+ * <i>log(N)</i>.
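+ *
+ * Since this is the inverse of
+ * operator()(), the following sketch
+ * holds for an entry <tt>(i,j)</tt> that
+ * exists in a compressed pattern
+ * <tt>sparsity</tt>:
+ * @code
+ * const std::pair<unsigned int, unsigned int> position
+ *   = sparsity.matrix_position (sparsity (i, j));
+ * // now position.first==i and position.second==j
+ * @endcode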
*/
std::pair<unsigned int, unsigned int>
matrix_position (const unsigned int global_index) const;
/**
* Print the sparsity of the matrix
- * in a format that @p{gnuplot} understands
+ * in a format that <tt>gnuplot</tt> understands
* and which can be used to plot the
* sparsity pattern in a graphical
* way. The format consists of pairs
- * @p{i j} of nonzero elements, each
+ * <tt>i j</tt> of nonzero elements, each
* representing one entry of this
* matrix, one per line of the output
 * file. Indices are counted from
 * zero on, as usual. Since
 * sparsity patterns are printed
 * in the same
 * way as matrices are displayed, we
* print the negative of the column
* index, which means that the
- * @p{(0,0)} element is in the top left
+ * <tt>(0,0)</tt> element is in the top left
* rather than in the bottom left
* corner.
*
* Print the sparsity pattern in
* gnuplot by setting the data style
 * to dots or points and using the
- * @p{plot} command.
+ * <tt>plot</tt> command.
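+ *
+ * A minimal usage sketch (the file
+ * name is only for illustration):
+ * @code
+ * std::ofstream out ("sparsity.gpl");
+ * sparsity.print_gnuplot (out);
+ * @endcode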
*/
void print_gnuplot (std::ostream &out) const;
/**
* Access to column number field.
* Return the column number of
- * the @p{index}th entry in
- * @p{row}. Note that the if
+ * the <tt>index</tt>th entry in
+ * <tt>row</tt>. Note that if
* diagonal elements are
* optimized, the first element
* in each row is the diagonal
* element,
- * i.e. @p{column_number(row,0)==row}.
+ * i.e. <tt>column_number(row,0)==row</tt>.
*
* If the sparsity pattern is
* already compressed, then
* (except for the diagonal
* element), the entries are
* sorted by columns,
- * i.e. @p{column_number(row,i)}
- * @p{<} @p{column_number(row,i+1)}.
+ * i.e. <tt>column_number(row,i)</tt>
+ * <tt><</tt> <tt>column_number(row,i+1)</tt>.
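+ *
+ * As a sketch, all entries of one row
+ * can be printed using this function
+ * together with row_length():
+ * @code
+ * for (unsigned int i=0; i<sparsity.row_length (row); ++i)
+ *   std::cout << sparsity.column_number (row, i)
+ *             << std::endl;
+ * @endcode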
*/
unsigned int column_number (const unsigned int row,
const unsigned int index) const;
* and @p{column_number} instead.
*
* Though the return value is declared
- * @p{const}, you should be aware that it
+ * <tt>const</tt>, you should be aware that it
* may change if you call any nonconstant
* function of objects which operate on
* it.
const unsigned int * get_rowstart_indices () const;
/**
+ * @deprecated Use row_length()
+ * and column_number() instead.
+ *
* This is kind of an expert mode: get
* access to the colnums array, but
* readonly.
*
- * Use of this function is highly
- * deprecated. Use @p{row_length}
- * and @p{column_number} instead.
- *
* Though the return value is declared
- * @p{const}, you should be aware that it
+ * <tt>const</tt>, you should be aware that it
* may change if you call any nonconstant
* function of objects which operate on
* it.
/**
* Read data that has previously
- * been written by
- * @p{block_write} en block from
- * a file. This is done using the
- * inverse operations to the
- * above function, so it is
- * reasonably fast because the
+ * been written by block_write()
+ * from a file. This is done
+ * using the inverse operations
+ * to the above function, so it
+ * is reasonably fast because the
* bitstream is not interpreted
* except for a few numbers up
* front.
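 *
+ * A minimal sketch of writing a pattern
+ * to a file and reading it back (the
+ * file name is only for illustration):
+ * @code
+ * std::ofstream out ("pattern.dat");
+ * sparsity.block_write (out);
+ * out.close ();
+ *
+ * std::ifstream in ("pattern.dat");
+ * sparsity.block_read (in);
+ * @endcode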
/**
* Determine an estimate for the
* memory consumption (in bytes)
- * of this object.
+ * of this object. See
+ * MemoryConsumption.
*/
unsigned int memory_consumption () const;
private:
/**
* Maximum number of rows that can
- * be stored in the @p{row_start} array.
+ * be stored in the #rowstart array.
* Since reallocation of that array
* only happens if the present one is
* too small, but never when the size
* of this matrix structure shrinks,
- * @p{max_dim} might be larger than
- * @p{rows} and in this case @p{row_start}
+ * #max_dim might be larger than
+ * #rows and in this case #rowstart
* has more elements than are used.
*/
unsigned int max_dim;
/**
* Size of the actually allocated array
- * @p{colnums}. Here, the same applies as
- * for the @p{rowstart} array, i.e. it
+ * #colnums. Here, the same applies as
+ * for the #rowstart array, i.e. it
* may be larger than the actually used
* part of the array.
*/
/**
* Maximum number of elements per
* row. This is set to the value
- * given to the @p{reinit} function
+ * given to the reinit() function
* (or to the constructor), or to
* the maximum row length
* computed from the vectors in
 * case the more flexible
 * constructors or reinit
* versions are called. Its value
 * is more or less meaningless
- * after @p{compress()} has been
+ * after compress() has been
* called.
*/
unsigned int max_row_length;
/**
- * Array which hold for each row which
- * is the first element in @p{colnums}
- * belonging to that row. Note that
- * the size of the array is one larger
- * than the number of rows, because
- * the last element is used for
- * @p{row=rows}, i.e. the row past the
- * last used one. The value of
- * @p{rowstart[rows]} equals the index
- * of the element past the end in
- * @p{colnums}; this way, we are able to
- * write loops like
- * @p{for (i=rowstart[k]; i<rowstart[k+1]; ++i)}
+ * Array which holds, for each
+ * row, the index of the first
+ * element in #colnums belonging
+ * to that row. Note that the
+ * size of the
+ * array is one larger than the
+ * number of rows, because the
+ * last element is used for
+ * <tt>row</tt>=#rows, i.e. the
+ * row past the last used
+ * one. The value of
+ * #rowstart[#rows] equals the
+ * index of the element past the
+ * end in #colnums; this way, we
+ * are able to write loops like
+ * <tt>for (i=rowstart[k];
+ * i<rowstart[k+1]; ++i)</tt>
* also for the last row.
*
* Note that the actual size of the
* allocated memory may be larger than
* the region that is used. The actual
* number of elements that was allocated
- * is stored in @p{max_dim}.
+ * is stored in #max_dim.
*/
unsigned int *rowstart;
/**
- * Array of column numbers. In this array,
- * we store for each non-zero element its
- * column number. The column numbers for
- * the elements in row @p{r} are stored
- * within the index range
- * @p{rowstart[r]...rowstart[r+1]}. Therefore
- * to find out whether a given element
- * @p{(r,c)} exists, we have to check
- * whether the column number @p{c} exists in
- * the abovementioned range within this
- * array. If it exists, say at position
- * @p{p} within this array, the value of
- * the respective element in the sparse
- * matrix will also be at position @p{p}
- * of the values array of that class.
+ * Array of column numbers. In
+ * this array, we store for each
+ * non-zero element its column
+ * number. The column numbers for
+ * the elements in row <i>r</i>
+ * are stored within the index
+ * range
+ * #rowstart[<i>r</i>]...#rowstart[<i>r+1</i>]. Therefore
+ * to find out whether a given
+ * element (<i>r,c</i>) exists,
+ * we have to check whether the
+ * column number <i>c</i> exists
+ * in the abovementioned range
+ * within this array. If it
+ * exists, say at position
+ * <i>p</i> within this array,
+ * the value of the respective
+ * element in the sparse matrix
+ * will also be at position
+ * <i>p</i> of the values array
+ * of that class.
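+ *
+ * In code, the lookup just described
+ * amounts to a sketch like the
+ * following (a plain linear search for
+ * illustration; the class itself uses
+ * a faster binary search on compressed
+ * patterns and treats optimized
+ * diagonal entries separately):
+ * @code
+ * unsigned int position = invalid_entry;
+ * for (unsigned int p=rowstart[r]; p<rowstart[r+1]; ++p)
+ *   if (colnums[p] == c)
+ *     position = p;
+ * @endcode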
*
* At the beginning, all elements
* of this array are set to
unsigned int *colnums;
/**
- * Store whether the @p{compress} function
- * was called for this object.
+ * Store whether the compress()
+ * function was called for this
+ * object.
*/
bool compressed;
/**
* Optimized replacement for
- * @p{std::lower_bound} for
+ * <tt>std::lower_bound</tt> for
* searching within the range of
* column indices. Slashes
* execution time by
* Helper function to get the
* column index from a
* dereferenced iterator in the
- * @ref{copy_from} function, if
+ * copy_from() function, if
* the inner iterator type points
* to plain unsigned integers.
*/
* Helper function to get the
* column index from a
* dereferenced iterator in the
- * @ref{copy_from} function, if
+ * copy_from() function, if
* the inner iterator type points
* to pairs of unsigned integers
* and some other value.
* for certain types of
* containers that make the first
* element of the pair constant
- * (such as @p{std::map}).
+ * (such as <tt>std::map</tt>).
*/
template <typename value>
unsigned int
/*@}*/
/*---------------------- Inline functions -----------------------------------*/
+/// @if NoDoc
inline
const unsigned int *
compress ();
}
+/// @endif
#endif