void compute_cholesky_factorization ();
/**
- * Estimate the reciprocal of the condition number $1/k(\mathbf A)$ in $L_1$ norm ($1/(||\mathbf A||_1 ||\mathbf A^{-1}||_1)$)
+ * Estimate the reciprocal of the condition number $1/k(\mathbf A)$ in $L_1$ norm ($1/(||\mathbf A||_1 \, ||\mathbf A^{-1}||_1)$)
* of a symmetric positive definite matrix using Cholesky factorization. This function can only
* be called if the matrix is already factorized.
*
/**
* Compute eigenvalues and eigenvectors of a real symmetric matrix. Only
- * eigenvalues in the interval $(lower_bound, upper_bound]$ are computed with
- * the absolute tolerance abs_accuracy. An approximate eigenvalue is
+ * eigenvalues in the interval $(\rm{lower\_bound}, \rm{upper\_bound}]$ are computed with
+ * the absolute tolerance $\rm{abs\_accuracy}$. An approximate eigenvalue is
* accepted as converged when it is determined to lie in an interval $[a,b]$
- * of width less than or equal to $abs_accuracy + eps * max(|a|,|b|)$, where
- * $eps$ is the machine precision. If $abs_accuracy$ is less than or equal to
- * zero, then $eps*|t|$ will be used in its place, where $|t|$ is the 1-norm of
+ * of width less than or equal to $\rm{abs\_accuracy} + eps * \rm{max}(|a|,|b|)$, where
+ * $eps$ is the machine precision. If $\rm{abs\_accuracy}$ is less than or equal to
+ * zero, then $eps\,|\mathbf{T}|_1$ will be used in its place, where $|\mathbf{T}|_1$ is the 1-norm of
* the tridiagonal matrix obtained by reducing $\mathbf A$ to tridiagonal form.
- * Eigenvalues will be computed most accurately when $abs_accuracy$ is set to
+ * Eigenvalues will be computed most accurately when $\rm{abs\_accuracy}$ is set to
* twice the underflow threshold, not zero. After this routine has been
- * called, all eigenvalues in $(lower_bound, upper_bound]$ will be stored in
+ * called, all eigenvalues in $(\rm{lower\_bound}, \rm{upper\_bound}]$ will be stored in
* eigenvalues and the corresponding eigenvectors will be stored in the
* columns of eigenvectors, whose dimension is set accordingly.
*
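For orientation, a minimal usage sketch of the interval-based computation described above. The member name compute_eigenvalues_symmetric() and its argument order are assumed here (they do not appear in this excerpt); only the interval and tolerance semantics are taken from the documentation.

#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/lapack_full_matrix.h>
#include <deal.II/lac/vector.h>

void eigenvalues_in_interval(dealii::LAPACKFullMatrix<double> &A)
{
  // A is assumed to be symmetric and already filled.
  dealii::Vector<double>     eigenvalues;
  dealii::FullMatrix<double> eigenvectors;

  // Request all eigenvalues in the half-open interval (0, 10] with an
  // absolute tolerance of 1e-12; bounds and tolerance are example values.
  A.compute_eigenvalues_symmetric(/*lower_bound=*/0.,
                                  /*upper_bound=*/10.,
                                  /*abs_accuracy=*/1e-12,
                                  eigenvalues,
                                  eigenvectors);

  // eigenvalues now holds the computed eigenvalues; the corresponding
  // eigenvectors are stored in the columns of eigenvectors.
}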
/**
* Compute generalized eigenvalues and eigenvectors of a real generalized
* symmetric eigenproblem of the form
- * - itype = 1: $\mathbf A \cdot \mathbf x=\lambda \mathbf B \cdot x$
+ * - itype = 1: $\mathbf A \cdot \mathbf x=\lambda \mathbf B \cdot \mathbf x$
* - itype = 2: $\mathbf A \cdot \mathbf B \cdot \mathbf x=\lambda \mathbf x$
* - itype = 3: $\mathbf B \cdot \mathbf A \cdot \mathbf x=\lambda \mathbf x$
+ *
* where $\mathbf A$ is this matrix. $\mathbf A$
* and $\mathbf B$ are assumed to be symmetric, and $\mathbf B$ has to be positive definite.
- * Only eigenvalues in the interval $(lower_bound, upper_bound]$ are computed
- * with the absolute tolerance $abs_accuracy$. An approximate eigenvalue is
+ * Only eigenvalues in the interval $(\rm{lower\_bound}, \rm{upper\_bound}]$ are computed
+ * with the absolute tolerance $\rm{abs\_accuracy}$. An approximate eigenvalue is
* accepted as converged when it is determined to lie in an interval $[a,b]$
- * of width less than or equal to $abs_accuracy + eps * max( |a|,|b| )$, where
- * $eps$ is the machine precision. If $abs_accuracy$ is less than or equal to
- * zero, then $eps*|t|$ will be used in its place, where $|t|$ is the 1-norm of
+ * of width less than or equal to $\rm{abs\_accuracy} + eps * \rm{max}( |a|,|b| )$, where
+ * $eps$ is the machine precision. If $\rm{abs\_accuracy}$ is less than or equal to
+ * zero, then $eps \, |\mathbf{T}|_1$ will be used in its place, where $|\mathbf{T}|_1$ is the 1-norm of
* the tridiagonal matrix obtained by reducing $\mathbf A$ to tridiagonal form.
- * Eigenvalues will be computed most accurately when $abs_accuracy$ is set to
+ * Eigenvalues will be computed most accurately when $\rm{abs\_accuracy}$ is set to
* twice the underflow threshold, not zero. After this routine has been
- * called, all eigenvalues in $(lower_bound, upper_bound]$ will be stored in
+ * called, all eigenvalues in $(\rm{lower\_bound}, \rm{upper\_bound}]$ will be stored in
* eigenvalues and the corresponding eigenvectors will be stored in
* eigenvectors, whose dimension is set accordingly.
*
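Analogously, a hedged sketch for the generalized problem. The member name compute_generalized_eigenvalues_symmetric(), the std::vector<Vector<double>> output type, and the trailing itype argument are assumptions, not quoted from this excerpt.

#include <deal.II/lac/lapack_full_matrix.h>
#include <deal.II/lac/vector.h>

#include <vector>

void generalized_eigenvalues(dealii::LAPACKFullMatrix<double> &A,
                             dealii::LAPACKFullMatrix<double> &B)
{
  // A symmetric, B symmetric positive definite; both already filled.
  dealii::Vector<double>              eigenvalues;
  std::vector<dealii::Vector<double>> eigenvectors;

  // itype = 1: A x = lambda B x, eigenvalues restricted to (0, 1].
  A.compute_generalized_eigenvalues_symmetric(B,
                                              /*lower_bound=*/0.,
                                              /*upper_bound=*/1.,
                                              /*abs_accuracy=*/1e-12,
                                              eigenvalues,
                                              eigenvectors,
                                              /*itype=*/1);
}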
* The parameter @p threshold determines when a singular value should
* be considered zero. It is the ratio of the smallest to the largest
* nonzero singular value $s_{max}$. Thus, the inverses of all
- * singular values less than $s_{max}/threshold$ will
+ * singular values less than $s_{max}/\rm{threshold}$ will
* be set to zero.
*/
void compute_inverse_svd (const double threshold = 0.);
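A brief sketch of how the declaration above might be used. That the object afterwards applies the (pseudo)inverse through vmult() is an assumption about the surrounding class, not something stated in this excerpt.

#include <deal.II/lac/lapack_full_matrix.h>
#include <deal.II/lac/vector.h>

void apply_pseudoinverse(dealii::LAPACKFullMatrix<double> &A,
                         const dealii::Vector<double>     &b,
                         dealii::Vector<double>           &x)
{
  // Build the inverse via the SVD, treating sufficiently small singular
  // values as zero according to the threshold semantics documented above.
  A.compute_inverse_svd(/*threshold=*/1e-10);

  // Assumption: after the call the object applies the (pseudo)inverse,
  // so this computes x = A^+ b.
  A.vmult(x, b);
}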
*
*
* If it is necessary to copy complete matrices with an identical block-cyclic distribution,
- * use copy_to(ScaLAPACKMatrix<NumberType> &dest) with only one argument to avoid communication.
+ * use ScaLAPACKMatrix<NumberType>::copy_to(ScaLAPACKMatrix<NumberType> &dest)
+ * with only one argument to avoid communication.
*
* The underlying process grids of the matrices @p A and @p B must have been built
* with the same MPI communicator.
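A sketch of the communication-free one-argument copy. The ProcessGrid and ScaLAPACKMatrix constructor arguments used here (matrix size, block size) are illustrative assumptions.

#include <deal.II/base/process_grid.h>
#include <deal.II/lac/scalapack.h>

#include <memory>

void copy_without_communication()
{
  // Assumes MPI has already been initialized by the caller.
  const unsigned int size = 1000, block_size = 32;

  const auto grid = std::make_shared<dealii::Utilities::MPI::ProcessGrid>(
    MPI_COMM_WORLD, size, size, block_size, block_size);

  dealii::ScaLAPACKMatrix<double> A(size, grid, block_size);
  dealii::ScaLAPACKMatrix<double> B(size, grid, block_size);
  // ... fill A ...

  // A and B share the same process grid and block-cyclic distribution, so
  // the one-argument overload copies without any MPI communication.
  A.copy_to(B);
}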
/**
* Matrix-addition:
- * $\mathbf{A} = \mathbf{A} + b \mathbf{B}$
+ * $\mathbf{A} = \mathbf{A} + b\, \mathbf{B}$
*
* The matrices $\mathbf{A}$ and $\mathbf{B}$ must have the same process grid.
*
/**
* Matrix-addition:
- * $\mathbf{A} = \mathbf{A} + b \mathbf{B}^T$
+ * $\mathbf{A} = \mathbf{A} + b\, \mathbf{B}^T$
*
* The matrices $\mathbf{A}$ and $\mathbf{B}$ must have the same process grid.
*
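A short sketch of the two additions above. The member names add() and Tadd() with the scalar as the first argument are assumptions (the excerpt only gives the formulas).

#include <deal.II/lac/scalapack.h>

void scaled_additions(dealii::ScaLAPACKMatrix<double>       &A,
                      const dealii::ScaLAPACKMatrix<double> &B)
{
  // A and B must share the same process grid; for the transposed variant
  // the dimensions of B^T have to match those of A.
  A.add (0.5, B);   // A = A + 0.5 * B    (member name assumed)
  A.Tadd(0.5, B);   // A = A + 0.5 * B^T  (member name assumed)
}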
/**
* Computing selected eigenvalues and, optionally, the eigenvectors of the real symmetric
- * matrix $A \in \mathbb{R}^{M \times M}$.
+ * matrix $\mathbf{A} \in \mathbb{R}^{M \times M}$.
*
* The eigenvalues/eigenvectors are selected by prescribing a range of indices @p index_limits.
*
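A sketch of selecting eigenpairs by index. The member name eigenpairs_symmetric_by_index(), its std::pair argument, and its return of the selected eigenvalues are assumptions; only the index-based selection itself is stated above.

#include <deal.II/lac/scalapack.h>

#include <utility>
#include <vector>

void smallest_eigenpairs(dealii::ScaLAPACKMatrix<double> &A)
{
  // A is assumed to be symmetric and already filled.
  // Request the eigenpairs with indices 0..4, i.e. the five smallest
  // eigenvalues, together with the corresponding eigenvectors.
  const std::vector<double> eigenvalues =
    A.eigenpairs_symmetric_by_index(std::make_pair(0u, 4u),
                                    /*compute_eigenvectors=*/true);
  (void)eigenvalues;
}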
/**
* Computing the singular value decomposition (SVD) of a
- * matrix $A \in \mathbb{R}^{M \times N}$, optionally computing the left and/or right
- * singular vectors. The SVD is written as $A = U * \Sigma * V^T$
- * with $\Sigma \in \mathbb{R}^{M \times N}$ as a diagonal matrix,
- * $U \in \mathbb{R}^{M \times M}$ and $U \in \mathbb{R}^{M \times M}$
- * as orthogonal matrices. The diagonal elements of $\Sigma$
- * are the singular values of $A$ and the columns of $U$ and $V$ are the
+ * matrix $\mathbf{A} \in \mathbb{R}^{M \times N}$, optionally computing the left and/or right
+ * singular vectors. The SVD is written as $\mathbf{A} = \mathbf{U} \cdot \mathbf{\Sigma} \cdot \mathbf{V}^T$
+ * with $\mathbf{\Sigma} \in \mathbb{R}^{M \times N}$ as a diagonal matrix,
+ * $\mathbf{U} \in \mathbb{R}^{M \times M}$ and $\mathbf{V} \in \mathbb{R}^{N \times N}$
+ * as orthogonal matrices. The diagonal elements of $\mathbf{\Sigma}$
+ * are the singular values of $\mathbf{A}$ and the columns of $\mathbf{U}$ and $\mathbf{V}$ are the
* corresponding left and right singular vectors, respectively. The
* singular values are returned in decreasing order and only the first $\min(M,N)$
- * columns of $U$ and rows of VT = $V^T$ are computed.
+ * columns of $\mathbf{U}$ and rows of $\mathbf{V}^T$ are computed.
*
* Upon return the content of the matrix is unusable.
- * The matrix A must have identical block cyclic distribution for the rows and column.
+ * The matrix $\mathbf{A}$ must have identical block cyclic distribution for the rows and columns.
*
- * If left singular vectors are required matrices $A$ and $U$
+ * If left singular vectors are required, the matrices $\mathbf{A}$ and $\mathbf{U}$
* have to be constructed with the same process grid and block cyclic distribution.
- * If right singular vectors are required matrices $A$ and $V^T$
+ * If right singular vectors are required, the matrices $\mathbf{A}$ and $\mathbf{V}^T$
* have to be constructed with the same process grid and block cyclic distribution.
*
* To avoid computing the left and/or right singular vectors the function accepts <code>nullptr</code>
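A sketch of the SVD call. The member name compute_SVD(), its pointer arguments for U and V^T, and its return of the singular values are assumptions based on the nullptr remark above; the grid argument and sizes are hypothetical.

#include <deal.II/base/process_grid.h>
#include <deal.II/lac/scalapack.h>

#include <memory>
#include <vector>

void singular_value_decomposition(
  const std::shared_ptr<const dealii::Utilities::MPI::ProcessGrid> &grid)
{
  const unsigned int M = 2000, N = 1000, block_size = 32;

  dealii::ScaLAPACKMatrix<double> A (M, N, grid, block_size, block_size);
  dealii::ScaLAPACKMatrix<double> U (M, M, grid, block_size, block_size);
  dealii::ScaLAPACKMatrix<double> VT(N, N, grid, block_size, block_size);
  // ... fill A ...

  // Singular values are returned in decreasing order; pass nullptr for U
  // and/or VT if the corresponding singular vectors are not needed.
  const std::vector<double> singular_values = A.compute_SVD(&U, &VT);
  (void)singular_values;
}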
/**
* Solving overdetermined or underdetermined real linear
- * systems involving matrix $A \in \mathbb{R}^{M \times N}$, or its transpose $A^T$,
- * using a QR or LQ factorization of $A$ for $N_{\rm RHS}$ RHS vectors in the columns of matrix $B$
+ * systems involving matrix $\mathbf{A} \in \mathbb{R}^{M \times N}$, or its transpose $\mathbf{A}^T$,
+ * using a QR or LQ factorization of $\mathbf{A}$ for $N_{\rm RHS}$ RHS vectors in the columns of matrix $\mathbf{B}$.
*
- * It is assumed that $A$ has full rank: $rank(A) = \min(M,N)$.
+ * It is assumed that $\mathbf{A}$ has full rank: $\rm{rank}(\mathbf{A}) = \min(M,N)$.
*
* The following options are supported:
* -# If(!transpose) and $M \geq N$: least squares solution of overdetermined system
- * $\min \Vert B - A*X\Vert$.\n
- * Upon exit the rows $0$ to $N-1$ of $B$ contain the least square solution vectors. The residual sum of squares
+ * $\min \Vert \mathbf{B} - \mathbf{A}\cdot \mathbf{X}\Vert$.\n
+ * Upon exit the rows $0$ to $N-1$ of $\mathbf{B}$ contain the least squares solution vectors. The residual sum of squares
* for each column is given by the sum of squares of elements $N$ to $M-1$ in that column.
*
* -# If(!transpose) and $M < N$: find minimum norm solutions of underdetermined systems
- * $A * X = B$.\n
- * Upon exit the columns of $B$ contain the minimum norm solution vectors.
+ * $\mathbf{A} \cdot \mathbf{X} = \mathbf{B}$.\n
+ * Upon exit the columns of $\mathbf{B}$ contain the minimum norm solution vectors.
*
* -# If(transpose) and $M \geq N$: find minimum norm solutions of underdetermined system
- * $ A^\top X = B$.\n
- * Upon exit the columns of $B$ contain the minimum norm solution vectors.
+ * $ \mathbf{A}^\top \cdot \mathbf{X} = \mathbf{B}$.\n
+ * Upon exit the columns of $\mathbf{B}$ contain the minimum norm solution vectors.
*
* -# If(transpose) and $M < N$: least squares solution of overdetermined system
- * $\min \Vert B - A^\top X\Vert$.\n
+ * $\min \Vert \mathbf{B} - \mathbf{A}^\top \cdot \mathbf{X}\Vert$.\n
* Upon exit the rows $0$ to $M-1$ contain the least squares solution vectors. The residual sum of squares
* for each column is given by the sum of squares of elements $M$ to $N-1$ in that column.
*
- * If(!tranpose) then $B \in \mathbb{R}^{M \times N_{\rm RHS}}$,
- * otherwise $B \in \mathbb{R}^{N \times N_{\rm RHS}}}$.
- * The matrices $A$ and $B$ must have an identical block cyclic distribution for rows and columns.
+ * If(!transpose) then $\mathbf{B} \in \mathbb{R}^{M \times N_{\rm RHS}}$,
+ * otherwise $\mathbf{B} \in \mathbb{R}^{N \times N_{\rm RHS}}$.
+ * The matrices $\mathbf{A}$ and $\mathbf{B}$ must have an identical block cyclic distribution for rows and columns.
*/
void least_squares(ScaLAPACKMatrix<NumberType> &B,
const bool transpose=false);
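A sketch of the overdetermined case, i.e. the first item in the list above. The sizes, the grid argument, and the block size are illustrative assumptions; the least_squares() signature is the one declared above.

#include <deal.II/base/process_grid.h>
#include <deal.II/lac/scalapack.h>

#include <memory>

void solve_overdetermined(
  const std::shared_ptr<const dealii::Utilities::MPI::ProcessGrid> &grid)
{
  const unsigned int M = 2000, N = 500, n_rhs = 3, block_size = 32;

  dealii::ScaLAPACKMatrix<double> A(M, N,     grid, block_size, block_size);
  dealii::ScaLAPACKMatrix<double> B(M, n_rhs, grid, block_size, block_size);
  // ... fill A (full rank) and B ...

  // !transpose and M >= N: minimize ||B - A X|| for every column of B.
  A.least_squares(B, /*transpose=*/false);

  // Rows 0..N-1 of B now hold the least squares solutions; the squares of
  // the entries in rows N..M-1 of each column sum to that column's residual.
}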
* Cholesky factorization (see l1_norm()).
*
* @note An alternative is to compute the inverse of the matrix
- * explicitly and manually construct $k_1 = ||A||_1 ||A^{-1}||_1$.
+ * explicitly and manually construct $k_1 = ||\mathbf{A}||_1 \, ||\mathbf{A}^{-1}||_1$.
*/
NumberType reciprocal_condition_number(const NumberType a_norm) const;
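Putting these pieces together, a minimal sketch of the intended condition-number workflow, using only the members referenced above (l1_norm(), compute_cholesky_factorization(), reciprocal_condition_number()). That the 1-norm has to be taken before the factorization overwrites the matrix entries is an assumption drawn from the note about l1_norm().

#include <deal.II/lac/lapack_full_matrix.h>

double estimate_condition_number(dealii::LAPACKFullMatrix<double> &A)
{
  // A is assumed to be symmetric positive definite and already filled.
  // Take the 1-norm first: the factorization replaces the matrix entries.
  const double a_norm = A.l1_norm();

  A.compute_cholesky_factorization();

  // Reciprocal of k_1(A) = ||A||_1 ||A^{-1}||_1, estimated from the factor.
  const double rcond = A.reciprocal_condition_number(a_norm);
  return 1. / rcond;
}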