f08khf computes the singular value decomposition (SVD) of a real $m\times n$ matrix $A$, where $m\ge n$, and optionally computes the left and/or right singular vectors. f08khf implements the preconditioned Jacobi SVD of Drmač and Veselić (2008a) and Drmač and Veselić (2008b). It is the expert driver routine that calls f08kjf after certain preconditioning. In most cases f08kbf or f08kdf, employing fast scaled rotations and de Rijk's pivoting strategy, is sufficient to obtain the SVD of a real matrix. These routines are much simpler to use and also handle the case $m<n$.
The routine may be called by the names f08khf, nagf_lapackeig_dgejsv or its LAPACK name dgejsv.
3 Description
The SVD is written as
$$A=U\Sigma {V}^{\mathrm{T}}\text{,}$$
where $\Sigma $ is an $m\times n$ matrix which is zero except for its $n$ diagonal elements, $U$ is an $m\times m$ orthogonal matrix, and $V$ is an $n\times n$ orthogonal matrix. The diagonal elements of $\Sigma $ are the singular values of $A$ in descending order of magnitude. The columns of $U$ and $V$ are the left and the right singular vectors of $A$, respectively.
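As a concrete illustration of this decomposition, the following sketch checks $A=U\Sigma {V}^{\mathrm{T}}$ for a small tall matrix, using NumPy's general-purpose SVD as a stand-in for the routine (the data and variable names here are illustrative only):

```python
import numpy as np

# Build a random 6x4 matrix and compute its full SVD (stand-in for f08khf).
rng = np.random.default_rng(0)
m, n = 6, 4
A = rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(A, full_matrices=True)  # U: m x m, Vt: n x n

# Embed the n singular values in the m x n matrix Sigma.
Sigma = np.zeros((m, n))
Sigma[:n, :n] = np.diag(s)

assert np.allclose(A, U @ Sigma @ Vt)   # A = U Sigma V^T
assert np.all(np.diff(s) <= 0)          # singular values in descending order
```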
4 References
Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia https://www.netlib.org/lapack/lug
Drmač Z and Veselić K (2008a) New fast and accurate Jacobi SVD Algorithm I SIAM J. Matrix Anal. Appl. 29(4)
Drmač Z and Veselić K (2008b) New fast and accurate Jacobi SVD Algorithm II SIAM J. Matrix Anal. Appl. 29(4)
Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
5 Arguments
1: $\mathbf{joba}$ – Character(1) Input
On entry: specifies the form of pivoting for the $QR$ factorization stage; whether an estimate of the condition number of the scaled matrix is required; and the form of rank reduction that is performed.
${\mathbf{joba}}=\text{'C'}$
The initial $QR$ factorization of the input matrix is performed with column pivoting; no estimate of condition number is computed; and, the rank is reduced by only the underflowed part of the triangular factor $R$. This option works well (high relative accuracy) if $A$ can be written in the form $A=BD$, with well-conditioned $B$ and arbitrary diagonal matrix $D$. The accuracy cannot be spoiled by column scaling. The accuracy of the computed output depends on the condition of $B$, and the procedure attempts to achieve the best theoretical accuracy.
${\mathbf{joba}}=\text{'E'}$
Computation as with ${\mathbf{joba}}=\text{'C'}$ with an additional estimate of the condition number of $B$. It provides a realistic error bound.
${\mathbf{joba}}=\text{'F'}$
The initial $QR$ factorization of the input matrix is performed with full row and column pivoting; no estimate of condition number is computed; and, the rank is reduced by only the underflowed part of the triangular factor $R$. If $A={D}_{1}\times C\times {D}_{2}$ with ill-conditioned diagonal scalings ${D}_{1}$, ${D}_{2}$, and well-conditioned matrix $C$, this option gives higher accuracy than the ${\mathbf{joba}}=\text{'C'}$ option. If the structure of the input matrix is not known, and relative accuracy is desirable, then this option is advisable.
${\mathbf{joba}}=\text{'G'}$
Computation as with ${\mathbf{joba}}=\text{'F'}$ with an additional estimate of the condition number of $B$, where $A=DB$ (i.e., $B=C\times {D}_{2}$). If $A$ has heavily weighted rows, then using this condition number gives too pessimistic an error bound.
${\mathbf{joba}}=\text{'A'}$
Computation as with ${\mathbf{joba}}=\text{'C'}$ except in the treatment of rank reduction. In this case, small singular values are to be considered as noise and, if found, the matrix is treated as numerically rank deficient. The computed SVD, $A=U\Sigma {V}^{\mathrm{T}}$, is such that the relative residual norm (when comparing against $A$) is of the order $\mathit{O}\left(m\right)\times \epsilon $, where $\epsilon $ is machine precision. This gives the procedure licence to discard (set to zero) all singular values below ${\mathbf{n}}\times \epsilon \times \Vert A\Vert $.
${\mathbf{joba}}=\text{'R'}$
Similar to ${\mathbf{joba}}=\text{'A'}$. The rank revealing property of the initial $QR$ factorization is used to reveal (using the upper triangular factor) a gap, ${\sigma}_{r+1}<\epsilon {\sigma}_{r}$, in which case the numerical rank is declared to be $r$. The SVD is computed with absolute error bounds, but more accurately than with ${\mathbf{joba}}=\text{'A'}$.
Constraint:
${\mathbf{joba}}=\text{'C'}$, $\text{'E'}$, $\text{'F'}$, $\text{'G'}$, $\text{'A'}$ or $\text{'R'}$.
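The rank-reduction rule used when ${\mathbf{joba}}=\text{'A'}$ can be sketched as below; the helper name `truncated_rank` and the use of NumPy's SVD are illustrative assumptions, not the routine's interface:

```python
import numpy as np

# Sketch of the joba = 'A' rule: singular values below n * eps * ||A||
# are regarded as noise and discarded (set to zero).
def truncated_rank(A):
    s = np.linalg.svd(A, compute_uv=False)
    tol = A.shape[1] * np.finfo(A.dtype).eps * np.linalg.norm(A, 2)
    return int(np.sum(s > tol))

# A 4x3 matrix whose third column equals the sum of the first two is
# numerically rank deficient: its smallest computed singular value falls
# below the noise threshold.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 9.0],
              [7.0, 8.0, 15.0],
              [1.0, 1.0, 2.0]])
print(truncated_rank(A))   # numerical rank 2
```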
2: $\mathbf{jobu}$ – Character(1) Input
On entry: specifies options for computing the left singular vectors $U$.
${\mathbf{jobu}}=\text{'U'}$
The first $n$ left singular vectors (columns of $U$) are computed and returned in the array u.
${\mathbf{jobu}}=\text{'F'}$
All $m$ left singular vectors are computed and returned in the array u.
${\mathbf{jobu}}=\text{'W'}$
No left singular vectors are computed, but the array u (with ${\mathbf{ldu}}\ge {\mathbf{m}}$ and second dimension at least n) is available as workspace for computing the right singular vectors. See the description of u.
${\mathbf{jobu}}=\text{'N'}$
No left singular vectors are computed. ${\mathbf{u}}$ is not referenced when ${\mathbf{jobv}}=\text{'W'}$ or $\text{'N'}$.
Constraint:
${\mathbf{jobu}}=\text{'U'}$, $\text{'F'}$, $\text{'W'}$ or $\text{'N'}$.
3: $\mathbf{jobv}$ – Character(1) Input
On entry: specifies options for computing the right singular vectors $V$.
${\mathbf{jobv}}=\text{'V'}$
The $n$ right singular vectors (columns of $V$) are computed and returned in the array v; Jacobi rotations are not explicitly accumulated.
${\mathbf{jobv}}=\text{'J'}$
The $n$ right singular vectors (columns of $V$) are computed and returned in the array v, but they are computed as the product of Jacobi rotations. This option is allowed only if ${\mathbf{jobu}}=\text{'U'}$ or $\text{'F'}$, i.e., in computing the full SVD.
This is equivalent to multiplying the input matrix, on the right, by the matrix $V$.
${\mathbf{jobv}}=\text{'W'}$
No right singular vectors are computed, but the array v (with ${\mathbf{ldv}}\ge {\mathbf{n}}$ and second dimension at least n) is available as workspace for computing the left singular vectors. See the description of v.
${\mathbf{jobv}}=\text{'N'}$
No right singular vectors are computed. ${\mathbf{v}}$ is not referenced when ${\mathbf{jobu}}=\text{'W'}$ or $\text{'N'}$ or ${\mathbf{jobt}}=\text{'N'}$ or ${\mathbf{m}}\ne {\mathbf{n}}$.
Constraints:
${\mathbf{jobv}}=\text{'V'}$, $\text{'J'}$, $\text{'W'}$ or $\text{'N'}$;
if ${\mathbf{jobu}}=\text{'W'}$ or $\text{'N'}$, ${\mathbf{jobv}}\ne \text{'J'}$.
4: $\mathbf{jobr}$ – Character(1) Input
On entry: specifies the conditions under which columns of $A$ are to be set to zero. This effectively specifies a lower limit on the range of singular values; any singular values below this limit are (through column zeroing) set to zero. If $A\ne 0$ is scaled so that the largest column (in the Euclidean norm) of $cA$ is equal to the square root of the overflow threshold, then jobr allows the routine to kill columns of $A$ whose norm in $cA$ is less than $\sqrt{\mathit{sfmin}}$ (for ${\mathbf{jobr}}=\text{'R'}$), or less than $\mathit{sfmin}/\epsilon $ (otherwise). $\mathit{sfmin}$ is the safe range parameter, as returned by routine x02amf.
${\mathbf{jobr}}=\text{'N'}$
Only set to zero those columns of $A$ for which the norm of the corresponding column of $cA$ is less than $\mathit{sfmin}/\epsilon $, that is, those columns that are effectively zero (to machine precision) anyway. If the condition number of $A$ is greater than the overflow threshold $\lambda $, where $\lambda $ is the value returned by x02alf, you are recommended to use routine f08kjf.
${\mathbf{jobr}}=\text{'R'}$
Set to zero those columns of $A$ for which the norm of the corresponding column of $cA$ is less than $\sqrt{\mathit{sfmin}}$. This approximately restricts the range of $\sigma \left(cA\right)$ to $[\sqrt{\mathit{sfmin}},\sqrt{\lambda}]$.
For computing the singular values in the full range from the safe minimum up to the overflow threshold use f08kjf.
Suggested value:
${\mathbf{jobr}}=\text{'R'}$.
Constraint:
${\mathbf{jobr}}=\text{'N'}$ or $\text{'R'}$.
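The thresholds above can be made concrete with the IEEE double-precision machine parameters; the sketch below uses NumPy's `finfo` in place of x02amf and x02alf, so the variable names are assumptions:

```python
import numpy as np

fi = np.finfo(np.float64)
eps = fi.eps          # machine precision
sfmin = fi.tiny       # safe minimum (role of x02amf)
lam = fi.max          # overflow threshold (role of x02alf)

thresh_n = sfmin / eps     # jobr = 'N': columns of cA below this norm are zeroed
thresh_r = np.sqrt(sfmin)  # jobr = 'R': restricted-range threshold

# The 'R' threshold is far larger, giving the restricted singular value
# range [sqrt(sfmin), sqrt(lam)] quoted above.
assert thresh_n < thresh_r < np.sqrt(lam)
```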
5: $\mathbf{jobt}$ – Character(1) Input
On entry: specifies, in the case $n=m$, whether the routine is permitted to use the transpose of $A$ for improved efficiency. If the matrix is square, then the procedure may use ${A}^{\mathrm{T}}$ if it seems to be better with respect to convergence. If the matrix is not square, jobt is ignored. The decision is based on two values of entropy over the adjoint orbit of ${A}^{\mathrm{T}}A$. See the descriptions of ${\mathbf{work}}\left(6\right)$ and ${\mathbf{work}}\left(7\right)$.
${\mathbf{jobt}}=\text{'T'}$
If $n=m$, perform an entropy test and, if the test indicates possibly faster convergence of the Jacobi process when using ${A}^{\mathrm{T}}$, then form the transpose ${A}^{\mathrm{T}}$. If $A$ is replaced with ${A}^{\mathrm{T}}$, then the row pivoting is included automatically.
${\mathbf{jobt}}=\text{'N'}$
No entropy test is performed and the matrix is not transposed.
The option ${\mathbf{jobt}}=\text{'T'}$ can be used to compute only the singular values, or the full SVD ($U$, $\Sigma $ and $V$). In the case where only one set of singular vectors ($U$ or $V$) is required, the caller must still provide both u and v, as one of the matrices is used as workspace if the matrix $A$ is transposed. See the descriptions of u and v.
Constraint:
${\mathbf{jobt}}=\text{'T'}$ or $\text{'N'}$.
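The entropy quantities behind the ${\mathbf{jobt}}=\text{'T'}$ test (returned in ${\mathbf{work}}\left(6\right)$ and ${\mathbf{work}}\left(7\right)$) can be sketched as follows; the function name and test matrix are illustrative:

```python
import numpy as np

# Shannon entropy of diag(G)/trace(G), G symmetric positive semidefinite,
# taken as a point in the probability simplex (0 log 0 = 0 convention).
def diag_entropy(G):
    p = np.diag(G) / np.trace(G)
    p = p[p > 0]
    return -float(np.sum(p * np.log(p)))

# A square matrix with heavily weighted rows: diag(A^T A) is flat (high
# entropy) while diag(A A^T) is dominated by one entry (low entropy).
A = np.diag([1.0, 1e3, 1e6]) @ np.ones((3, 3))
h_ata = diag_entropy(A.T @ A)   # corresponds to work(6)
h_aat = diag_entropy(A @ A.T)   # corresponds to work(7)
assert h_ata > h_aat
```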
6: $\mathbf{jobp}$ – Character(1) Input
On entry: specifies whether the routine should be allowed to introduce structured perturbations to drown denormalized numbers. For details see Drmač and Veselić (2008a) and Drmač and Veselić (2008b). For the sake of simplicity, these perturbations are included only when the full SVD or only the singular values are requested.
${\mathbf{jobp}}=\text{'P'}$
Introduce perturbation if $A$ is found to be very badly scaled (introducing denormalized numbers).
${\mathbf{jobp}}=\text{'N'}$
Do not perturb.
Constraint:
${\mathbf{jobp}}=\text{'P'}$ or $\text{'N'}$.
7: $\mathbf{m}$ – Integer Input
On entry: $m$, the number of rows of the matrix $A$.
Constraint:
${\mathbf{m}}\ge 0$.
8: $\mathbf{n}$ – Integer Input
On entry: $n$, the number of columns of the matrix $A$.
Constraint:
${\mathbf{m}}\ge {\mathbf{n}}\ge 0$.
9: $\mathbf{a}({\mathbf{lda}},*)$ – Real (Kind=nag_wp) array Input/Output
Note: the second dimension of the array a must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}(1,{\mathbf{n}})$.
11: $\mathbf{sva}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Output
On exit: the, possibly scaled, singular values of $A$.
The singular values of $A$ are
${\sigma}_{\mathit{i}}=\alpha \times {\mathbf{sva}}\left(\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,n$, where $\alpha ={\mathbf{work}}\left(1\right)/{\mathbf{work}}\left(2\right)$. Normally $\alpha =1$ and no scaling is required to obtain the singular values. However, if the largest singular value of $A$ overflows or if small singular values have been saved from underflow by scaling the input matrix $A$, then $\alpha \ne 1$.
If ${\mathbf{jobr}}=\text{'R'}$, then some of the singular values may be returned as exact zeros because they are below the numerical rank threshold or are denormalized numbers.
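Recovering the singular values from sva and the scaling factor can be sketched as below; the numbers are invented, and in the common case $\alpha =1$ no rescaling is needed:

```python
import numpy as np

# Assumed outputs: work(1)/work(2) give the scaling factor alpha, and sva
# holds the (possibly scaled) singular values. This mimics the documented
# relationship; it is not a call to f08khf.
work = np.array([2.0, 1.0])
sva = np.array([3.0, 1.5, 0.25])

alpha = work[0] / work[1]
sigma = alpha * sva          # sigma_i = alpha * sva(i)
assert np.allclose(sigma, [6.0, 3.0, 0.5])
```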
12: $\mathbf{u}({\mathbf{ldu}},*)$ – Real (Kind=nag_wp) array Output
Note: the second dimension of the array u must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}(1,{\mathbf{m}})$ if ${\mathbf{jobu}}=\text{'F'}$, $\mathrm{max}\phantom{\rule{0.125em}{0ex}}(1,{\mathbf{n}})$ if ${\mathbf{jobu}}=\text{'U'}$ or $\text{'W'}$, and at least $1$ otherwise.
On exit: if ${\mathbf{jobu}}=\text{'U'}$, u contains the $m\times n$ matrix of left singular vectors.
If ${\mathbf{jobu}}=\text{'F'}$, u contains the $m\times m$ matrix of left singular vectors, including an orthonormal basis of the orthogonal complement of Range($A$).
u is not referenced when ${\mathbf{jobu}}=\text{'W'}$ or $\text{'N'}$ and one of the following is satisfied:
${\mathbf{jobv}}=\text{'W'}$ or $\text{'N'}$, or
${\mathbf{n}}=1$, or
$A$ is the zero matrix.
13: $\mathbf{ldu}$ – Integer Input
On entry: the first dimension of the array u as declared in the (sub)program from which f08khf is called.
Constraints:
if ${\mathbf{jobu}}=\text{'U'}$, $\text{'F'}$ or $\text{'W'}$, ${\mathbf{ldu}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}(1,{\mathbf{m}})$;
otherwise ${\mathbf{ldu}}\ge 1$.
14: $\mathbf{v}({\mathbf{ldv}},*)$ – Real (Kind=nag_wp) array Output
Note: the second dimension of the array v must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}(1,{\mathbf{n}})$ if ${\mathbf{jobv}}=\text{'V'}$, $\text{'J'}$ or $\text{'W'}$, and at least $1$ otherwise.
On exit: if ${\mathbf{jobv}}=\text{'V'}$ or $\text{'J'}$, v contains the $n\times n$ matrix of right singular vectors.
v is not referenced when ${\mathbf{jobv}}=\text{'W'}$ or $\text{'N'}$ and one of the following is satisfied:
${\mathbf{jobu}}=\text{'U'}$ or $\text{'F'}$ and ${\mathbf{jobt}}=\text{'T'}$, or
${\mathbf{n}}=1$, or
$A$ is the zero matrix.
15: $\mathbf{ldv}$ – Integer Input
On entry: the first dimension of the array v as declared in the (sub)program from which f08khf is called.
Constraints:
if ${\mathbf{jobv}}=\text{'V'}$, $\text{'J'}$ or $\text{'W'}$, ${\mathbf{ldv}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}(1,{\mathbf{n}})$;
otherwise ${\mathbf{ldv}}\ge 1$.
16: $\mathbf{work}\left({\mathbf{lwork}}\right)$ – Real (Kind=nag_wp) array Workspace
On exit: contains information about the completed job.
${\mathbf{work}}\left(1\right)$
$\alpha ={\mathbf{work}}\left(1\right)/{\mathbf{work}}\left(2\right)$ is the scaling factor such that
${\sigma}_{\mathit{i}}=\alpha \times {\mathbf{sva}}\left(\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,n$ are the computed singular values of $A$. (See the description of ${\mathbf{sva}}$.)
${\mathbf{work}}\left(2\right)$
See the description of ${\mathbf{work}}\left(1\right)$.
${\mathbf{work}}\left(3\right)$
sconda, an estimate for the condition number of column equilibrated $A$ (if ${\mathbf{joba}}=\text{'E'}$ or $\text{'G'}$). sconda is an estimate of $\sqrt{\left({\Vert {\left({R}^{\mathrm{T}}R\right)}^{-1}\Vert}_{1}\right)}$. It is computed using f07fgf. It satisfies ${n}^{-\frac{1}{4}}\times \mathit{sconda}\le {\Vert {R}^{-1}\Vert}_{2}\le {n}^{\frac{1}{4}}\times \mathit{sconda}$ where $R$ is the triangular factor from the $QR$ factorization of $A$. However, if $R$ is truncated and the numerical rank is determined to be strictly smaller than $n$, sconda is returned as $-1$, thus indicating that the smallest singular values might be lost.
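The quoted bounds relating sconda to ${\Vert {R}^{-1}\Vert}_{2}$ can be checked numerically; the sketch below forms $R$ with NumPy's QR on a column-equilibrated matrix, as an illustration rather than a reproduction of f07fgf:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((8, n))
A = A / np.linalg.norm(A, axis=0)     # equilibrate columns to unit 2-norm

R = np.linalg.qr(A, mode='r')         # triangular factor of the QR factorization
G_inv = np.linalg.inv(R.T @ R)        # (R^T R)^{-1}
sconda = np.sqrt(np.linalg.norm(G_inv, 1))
rinv_2 = np.linalg.norm(np.linalg.inv(R), 2)

# n^{-1/4} * sconda <= ||R^{-1}||_2 <= n^{1/4} * sconda
assert n ** -0.25 * sconda <= rinv_2 <= n ** 0.25 * sconda
```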
If full SVD is needed, and you are familiar with the details of the method, the following two condition numbers are useful for the analysis of the algorithm.
${\mathbf{work}}\left(4\right)$
An estimate of the scaled condition number of the triangular factor in the first $QR$ factorization.
${\mathbf{work}}\left(5\right)$
An estimate of the scaled condition number of the triangular factor in the second $QR$ factorization.
The following two parameters are computed if ${\mathbf{jobt}}=\text{'T'}$.
${\mathbf{work}}\left(6\right)$
The entropy of ${A}^{\mathrm{T}}A$: this is the Shannon entropy of $\mathrm{diag}\left({A}^{\mathrm{T}}A\right)/\mathrm{trace}\left({A}^{\mathrm{T}}A\right)$ taken as a point in the probability simplex.
${\mathbf{work}}\left(7\right)$
The entropy of $A{A}^{\mathrm{T}}$.
17: $\mathbf{lwork}$ – Integer Input
On entry: the dimension of the array work as declared in the (sub)program from which f08khf is called.
If ${\mathbf{jobu}}=\text{'N'}$ or $\text{'W'}$ and ${\mathbf{jobv}}=\text{'N'}$ or $\text{'W'}$
if ${\mathbf{joba}}\ne \text{'E'}$ or $\text{'G'}$
the minimal requirement is ${\mathbf{lwork}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}(2{\mathbf{m}}+{\mathbf{n}},4{\mathbf{n}}+1,7)$;
for optimal performance the requirement is ${\mathbf{lwork}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}(2{\mathbf{m}}+{\mathbf{n}},3{\mathbf{n}}+({\mathbf{n}}+1)\times \mathit{nb},7)$, where $\mathit{nb}$ is the block size used by f08aef and f08bff. Assuming $\mathit{nb}=256$ is safe; a smaller value (e.g., $\mathit{nb}=128$) should still give acceptable performance.
if ${\mathbf{joba}}=\text{'E'}$ or $\text{'G'}$
the minimal requirement is ${\mathbf{lwork}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}(2{\mathbf{m}}+{\mathbf{n}},{\mathbf{n}}({\mathbf{n}}+4),7)$;
for optimal performance the requirement is ${\mathbf{lwork}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}(2{\mathbf{m}}+{\mathbf{n}},3{\mathbf{n}}+({\mathbf{n}}+1)\times \mathit{nb},{\mathbf{n}}({\mathbf{n}}+4),7)$.
If ${\mathbf{jobu}}\ne \text{'U'}$ or $\text{'F'}$ and ${\mathbf{jobv}}=\text{'V'}$ or $\text{'J'}$
the minimal requirement is ${\mathbf{lwork}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}(2{\mathbf{m}}+{\mathbf{n}},4{\mathbf{n}}+1,7)$;
for optimal performance the requirement is ${\mathbf{lwork}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}(2{\mathbf{m}}+{\mathbf{n}},{\mathbf{n}}(3+\mathit{nb}),7)$, where $\mathit{nb}$ is described above.
If ${\mathbf{jobu}}=\text{'U'}$ or $\text{'F'}$ and ${\mathbf{jobv}}\ne \text{'V'}$ or $\text{'J'}$
the minimal requirement is ${\mathbf{lwork}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}(2{\mathbf{m}}+{\mathbf{n}},4{\mathbf{n}}+1,7)$;
for optimal performance the requirement is ${\mathbf{lwork}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}(2{\mathbf{m}}+{\mathbf{n}},{\mathbf{n}}(3+\mathit{nb}),7)$, where $\mathit{nb}$ is described above.
If ${\mathbf{jobu}}=\text{'U'}$ or $\text{'F'}$ and ${\mathbf{jobv}}=\text{'V'}$
If ${\mathbf{jobu}}=\text{'U'}$ or $\text{'F'}$ and ${\mathbf{jobv}}=\text{'J'}$
the minimal requirement is ${\mathbf{lwork}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}({\mathbf{n}}({\mathbf{n}}+4),2{\mathbf{m}}+{\mathbf{n}},{\mathbf{n}}({\mathbf{n}}+2)+6,7)$;
for better performance ${\mathbf{lwork}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}(2{\mathbf{m}}+{\mathbf{n}},{\mathbf{n}}({\mathbf{n}}+3+\mathit{nb}),7)$, where $\mathit{nb}$ is described above.
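The minimal workspace formulas above can be collected into a small helper; the function and its case analysis are an illustrative sketch of the documented formulas, not a workspace query provided by the routine (the ${\mathbf{jobu}}=\text{'U'}$ or $\text{'F'}$ with ${\mathbf{jobv}}=\text{'V'}$ case is omitted):

```python
# Minimal lwork per the documented cases; jobu/jobv/joba are single characters.
def min_lwork(m, n, jobu, jobv, joba):
    if jobu in ('N', 'W') and jobv in ('N', 'W'):
        if joba in ('E', 'G'):
            return max(2 * m + n, n * (n + 4), 7)
        return max(2 * m + n, 4 * n + 1, 7)
    if jobu in ('U', 'F') and jobv == 'J':
        return max(n * (n + 4), 2 * m + n, n * (n + 2) + 6, 7)
    # only one set of singular vectors requested
    return max(2 * m + n, 4 * n + 1, 7)

assert min_lwork(10, 4, 'N', 'N', 'C') == 24   # max(24, 17, 7)
assert min_lwork(10, 4, 'U', 'J', 'C') == 32   # max(32, 24, 30, 7)
```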
On exit: contains information about the completed job.
${\mathbf{iwork}}\left(1\right)$
The numerical rank of $A$ determined after the initial $QR$ factorization with pivoting. See the descriptions of joba and jobr.
${\mathbf{iwork}}\left(2\right)$
The number of computed nonzero singular values.
${\mathbf{iwork}}\left(3\right)$
If nonzero, a warning: if ${\mathbf{iwork}}\left(3\right)=1$, then some of the column norms of $A$ were denormalized (tiny) numbers and the requested high accuracy is not warranted by the data.
19: $\mathbf{info}$ – Integer Output
On exit: ${\mathbf{info}}=0$ unless the routine detects an error (see Section 6).
6 Error Indicators and Warnings
${\mathbf{info}}<0$
If ${\mathbf{info}}=-i$, argument $i$ had an illegal value. An explanatory message is output, and execution of the program is terminated.
${\mathbf{info}}>0$
f08khf did not converge in the allowed number of iterations ($30$). The computed values might be inaccurate.
7 Accuracy
The computed SVD is nearly the exact SVD for a nearby matrix $(A+E)$, where
$${\Vert E\Vert}_{2}=\mathit{O}\left(\epsilon \right){\Vert A\Vert}_{2}\text{,}$$
and $\epsilon $ is the machine precision. In addition, the computed singular vectors are nearly orthogonal to working precision. See Section 4.9 of Anderson et al. (1999) for further details.
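This accuracy statement can be illustrated numerically (with NumPy's SVD standing in for the routine; the tolerance constant is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Backward error E = U Sigma V^T - A should be O(eps) * ||A||_2.
eps = np.finfo(float).eps
E = U @ np.diag(s) @ Vt - A
assert np.linalg.norm(E, 2) <= 100 * eps * np.linalg.norm(A, 2)

# Singular vectors are orthonormal to working precision.
assert np.allclose(U.T @ U, np.eye(4))
assert np.allclose(Vt @ Vt.T, np.eye(4))
```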
8 Parallelism and Performance
Background information to multithreading can be found in the Multithreading documentation.
f08khf is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
f08khf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
9 Further Comments
f08khf implements a preconditioned Jacobi SVD algorithm. It uses f08aef, f08ahf and f08bff as preprocessors and preconditioners. Optionally, an additional row pivoting can be used as a preprocessor, which in some cases results in much higher accuracy. An example is a matrix $A$ with the structure $A={D}_{1}C{D}_{2}$, where ${D}_{1}$, ${D}_{2}$ are arbitrarily ill-conditioned diagonal matrices and $C$ is a well-conditioned matrix. In that case, complete pivoting in the first $QR$ factorization provides accuracy dependent on the condition number of $C$, and independent of ${D}_{1}$, ${D}_{2}$. Such higher accuracy is not completely understood theoretically, but it works well in practice. Further, if $A$ can be written as $A=BD$, with well-conditioned $B$ and some diagonal $D$, then the high accuracy is guaranteed, both theoretically and in software, independently of $D$.
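The structure $A={D}_{1}C{D}_{2}$ discussed above is easy to construct; the sketch below builds such a matrix with a well-conditioned core and grossly ill-conditioned diagonal scalings (the particular scalings are illustrative):

```python
import numpy as np

n = 4
C = np.eye(n) + 0.1 * np.ones((n, n))        # well-conditioned core
D1 = np.diag(10.0 ** (4 * np.arange(n)))     # ill-conditioned row scaling
D2 = np.diag(10.0 ** (-4.0 * np.arange(n)))  # ill-conditioned column scaling
A = D1 @ C @ D2

# A itself is extremely ill-conditioned, yet a preconditioned Jacobi SVD
# can deliver accuracy governed by cond(C), not cond(A).
assert np.linalg.cond(C) < 10
assert np.linalg.cond(A) > 1e10
```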
10 Example
This example finds the singular values and left and right singular vectors of the $6\times 4$ matrix