NAG FL Interface
F01 (Matop)
Matrix Operations, Including Inversion
Chapter Introduction
1
Scope of the Chapter
This chapter provides facilities for four types of problem:
(i) matrix inversion;
(ii) matrix factorizations;
(iii) matrix arithmetic and manipulation;
(iv) matrix functions.
See Sections 2.1, 2.2, 2.3 and 2.4 where these problems are discussed.
2
Background to the Problems
2.1
Matrix Inversion
(i) Nonsingular square matrices of order $n$.
If $A$, a square matrix of order $n$, is nonsingular (has rank $n$), then its inverse $X$ exists and satisfies the equations $AX = XA = I$ (the identity or unit matrix).
It is worth noting that if $AX - I = R$, so that $R$ is the ‘residual’ matrix, then a bound on the relative error is given by $\|R\|$, i.e.,
$\dfrac{\|X - A^{-1}\|}{\|A^{-1}\|} \le \|R\|.$
(ii) General real rectangular matrices.
A real matrix $A$ has no inverse if it is square ($m = n$) and singular (has rank $< n$), or if it is of shape ($m \times n$) with $m \ne n$, but there is a Generalized or Pseudo-inverse $A^+$ which satisfies the equations
$AA^+A = A, \quad A^+AA^+ = A^+, \quad (AA^+)^T = AA^+, \quad (A^+A)^T = A^+A$
(which of course are also satisfied by the inverse $A^{-1}$ of $A$ if $A$ is square and nonsingular).
(a) if $m \ge n$ and $A$ has rank $n$ then $A$ can be factorized using a $QR$ factorization, given by
$A = Q \begin{pmatrix} R \\ 0 \end{pmatrix},$
where $Q$ is an $m \times m$ orthogonal matrix and $R$ is an $n \times n$, nonsingular, upper triangular matrix. The pseudo-inverse of $A$ is then given by
$A^+ = R^{-1}\tilde{Q}^T,$
where $\tilde{Q}$ consists of the first $n$ columns of $Q$.
(b) if $m \le n$ and $A$ has rank $m$ then $A$ can be factorized using an $RQ$ factorization, given by
$A = (R \quad 0)\, Q,$
where $Q$ is an $n \times n$ orthogonal matrix and $R$ is an $m \times m$, nonsingular, upper triangular matrix. The pseudo-inverse of $A$ is then given by
$A^+ = \tilde{Q}^T R^{-1},$
where $\tilde{Q}$ consists of the first $m$ rows of $Q$.
(c) if $m \ge n$ but $A$ has rank $r < n$ then $A$ can be factorized using a $QR$ factorization, with column interchanges, as
$A = Q \begin{pmatrix} R \\ 0 \end{pmatrix} P^T,$
where $Q$ is an $m \times m$ orthogonal matrix, $R$ is an $r \times n$ upper trapezoidal matrix and $P$ is an $n \times n$ permutation matrix. The pseudo-inverse of $A$ is then given by
$A^+ = P R^T (R R^T)^{-1} \tilde{Q}^T,$
where $\tilde{Q}$ consists of the first $r$ columns of $Q$.
(d) if $A$ has rank $r \le \min(m, n)$ then $A$ can be factorized as the singular value decomposition
$A = U \Sigma V^T,$
where $U$ is an $m \times m$ orthogonal matrix, $V$ is an $n \times n$ orthogonal matrix and $\Sigma$ is an $m \times n$ diagonal matrix with non-negative diagonal elements $\sigma_1, \sigma_2, \ldots, \sigma_{\min(m,n)}$. The first $\min(m,n)$ columns of $U$ and $V$ are the left- and right-hand singular vectors of $A$ respectively and the diagonal elements of $\Sigma$ are the singular values of $A$. $\Sigma$ may be chosen so that
$\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\min(m,n)} \ge 0,$
and in this case, if $A$ has rank $r$ then
$\sigma_r > 0 \quad \text{and} \quad \sigma_{r+1} = \cdots = \sigma_{\min(m,n)} = 0.$
If $\tilde{U}$ and $\tilde{V}$ consist of the first $r$ columns of $U$ and $V$ respectively and $\tilde{\Sigma}$ is an $r \times r$ diagonal matrix with diagonal elements $\sigma_1, \ldots, \sigma_r$ then $A$ is given by
$A = \tilde{U}\tilde{\Sigma}\tilde{V}^T$
and the pseudo-inverse of $A$ is given by
$A^+ = \tilde{V}\tilde{\Sigma}^{-1}\tilde{U}^T.$
Notice that
$A^T A = V (\Sigma^T \Sigma) V^T,$
which is the classical eigenvalue (spectral) factorization of $A^T A$.
(e) if $A$ is complex then the above relationships are still true if we use ‘unitary’ in place of ‘orthogonal’ and conjugate transpose in place of transpose. For example, the singular value decomposition of $A$ is
$A = U \Sigma V^H,$
where $U$ and $V$ are unitary, $V^H$ is the conjugate transpose of $V$ and $\Sigma$ is as in (ii)(d) above.
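The SVD route to the pseudo-inverse in (ii)(d) can be sketched in Python with NumPy (an illustration of the mathematics only, not a call to the NAG routines themselves):

```python
import numpy as np

# A rank-deficient 4x3 matrix: the third column is the sum of the first two.
A = np.array([[1., 2., 3.],
              [4., 5., 9.],
              [7., 8., 15.],
              [1., 0., 1.]])

U, sigma, Vt = np.linalg.svd(A)              # A = U diag(sigma) V^T
tol = max(A.shape) * np.finfo(float).eps * sigma[0]
r = int(np.sum(sigma > tol))                 # numerical rank (2 here)

# Keep the first r singular triplets; invert only the nonzero singular values.
A_pinv = (Vt[:r].T / sigma[:r]) @ U[:, :r].T
```

The result agrees with `np.linalg.pinv` and satisfies the Moore–Penrose equations above.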
2.2
Matrix Factorizations
The routines in this section perform matrix factorizations which are required for the solution of systems of linear equations with various special structures. A few routines which perform associated computations are also included.
Other routines for matrix factorizations are to be found in
Chapters F07,
F08 and
F11.
This section also contains a few routines associated with eigenvalue problems (see
Chapter F02). (Historical note: this section used to contain many more such routines, but they have now been superseded by routines in
Chapter F08.)
Finally, this section contains routines for computing non-negative matrix factorizations, which are used for dimensional reduction and classification in data analysis. Given an $m \times n$ rectangular matrix, $A$, with non-negative elements, a non-negative matrix factorization of $A$ is an approximate factorization of $A$ into the product of an $m \times k$ non-negative matrix $W$ and a $k \times n$ non-negative matrix $H$, so that $A \approx WH$. Typically $k$ is chosen so that $k \ll \min(m,n)$. The matrices $W$ and $H$ are then computed to minimize $\|A - WH\|_F$. The factorization is not unique.
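A minimal sketch of one standard way to compute such a factorization, the Lee–Seung multiplicative updates (chosen here for brevity; not necessarily the algorithm used by the NAG routines), in Python with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 8, 6, 2
A = rng.random((m, n))            # non-negative data matrix

W = rng.random((m, k))            # random non-negative starting factors
H = rng.random((k, n))

eps = 1e-12                       # guard against division by zero
for _ in range(500):
    # Multiplicative updates keep W and H non-negative and
    # do not increase ||A - WH||_F at any step.
    H *= (W.T @ A) / (W.T @ W @ H + eps)
    W *= (A @ H.T) / (W @ H @ H.T + eps)

residual = np.linalg.norm(A - W @ H)
```

Because the updates only rescale entries by non-negative factors, non-negativity of $W$ and $H$ is preserved automatically.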
2.3
Matrix Arithmetic and Manipulation
The intention of routines in this section (f01c, f01d, f01v, and f01z) is to cater for some of the commonly occurring operations in matrix manipulation, i.e., transposing a matrix or adding part of one matrix to another, and for conversion between different storage formats, such as conversion between rectangular band matrix storage and packed band matrix storage. For vector, matrix-vector or matrix-matrix operations refer to
Chapters F06 and
F16.
2.4
Matrix Functions
Given a square matrix $A$, the matrix function $f(A)$ is a matrix with the same dimensions as $A$ which provides a generalization of the scalar function $f$.
If $A$ has a full set of eigenvectors $V$ then $A$ can be factorized as
$A = V D V^{-1},$
where $D$ is the diagonal matrix whose diagonal elements, $d_i$, are the eigenvalues of $A$. $f(A)$ is given by
$f(A) = V f(D) V^{-1},$
where $f(D)$ is the diagonal matrix whose $i$th diagonal element is $f(d_i)$.
In general, $A$ may not have a full set of eigenvectors. The matrix function can then be defined via a Cauchy integral. For $A \in \mathbb{C}^{n \times n}$,
$f(A) = \frac{1}{2\pi i} \int_\Gamma f(z) (zI - A)^{-1} \, dz,$
where $\Gamma$ is a closed contour surrounding the eigenvalues of $A$, and $f$ is analytic within $\Gamma$.
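For a diagonalizable matrix the eigenvector route above can be carried out directly; a Python/NumPy sketch (illustrative only; it assumes $A$ has a full set of eigenvectors, and the NAG routines use more robust algorithms):

```python
import numpy as np

def fun_via_eig(A, f):
    """Evaluate f(A) = V f(D) V^{-1}, valid when A is diagonalizable."""
    d, V = np.linalg.eig(A)                 # A = V diag(d) V^{-1}
    return V @ np.diag(f(d)) @ np.linalg.inv(V)

# Symmetric matrices always have a full set of eigenvectors.
A = np.array([[2., 1.],
              [1., 2.]])
expA = fun_via_eig(A, np.exp).real          # matrix exponential e^A
```

As a sanity check, $e^A e^{-A} = I$ since $A$ and $-A$ commute.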
Some matrix functions are defined implicitly. A matrix logarithm is a solution $X$ to the equation
$e^X = A.$
In general, $X$ is not unique, but if $A$ has no eigenvalues on the closed negative real line then a unique principal logarithm exists whose eigenvalues have imaginary part between $-\pi$ and $\pi$. Similarly, a matrix square root is a solution $X$ to the equation
$X^2 = A.$
If $A$ has no eigenvalues on the closed negative real line then a unique principal square root exists with eigenvalues in the right half-plane. If $A$ has a vanishing eigenvalue then $\log(A)$ cannot be computed. If the vanishing eigenvalue is defective (its algebraic multiplicity exceeds its geometric multiplicity, or equivalently it occurs in a Jordan block of size greater than $1$) then the square root cannot be computed. If the vanishing eigenvalue is semisimple (its algebraic and geometric multiplicities are equal, or equivalently it occurs only in Jordan blocks of size $1$) then a square root can be computed.
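For a symmetric positive definite matrix, whose eigenvalues lie off the closed negative real line, the principal square root can be sketched via the eigendecomposition in Python with NumPy (illustrative; general matrices need Schur-based methods):

```python
import numpy as np

A = np.array([[5., 2.],
              [2., 5.]])           # symmetric positive definite: eigenvalues 3 and 7

d, V = np.linalg.eigh(A)           # d > 0, V orthonormal
X = V @ np.diag(np.sqrt(d)) @ V.T  # principal square root

# X has eigenvalues sqrt(3) and sqrt(7), both in the right half-plane,
# and satisfies X @ X == A.
```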
Algorithms for computing matrix functions are usually tailored to a specific function. Currently,
Chapter F01 contains routines for calculating the exponential, logarithm, sine, cosine, sinh, cosh, square root and the general real power of both real and complex matrices. In addition, there are routines to compute a general function of real symmetric and complex Hermitian matrices and a general function of general real and complex matrices.
The Fréchet derivative of a matrix function $f(A)$ in the direction of the matrix $E$ is the linear function mapping $E$ to $L(A,E)$ such that
$f(A+E) = f(A) + L(A,E) + o(\|E\|).$
The Fréchet derivative measures the first-order effect on $f(A)$ of perturbations in $A$.
Chapter F01 contains routines for calculating the Fréchet derivative of the exponential, logarithm and real powers of both real and complex matrices.
The condition number of a matrix function is a measure of its sensitivity to perturbations in the data. The absolute condition number measures these perturbations in an absolute sense and is defined by
$\mathrm{cond}_{\mathrm{abs}}(f,A) = \lim_{\epsilon \to 0}\, \sup_{\|E\| \le \epsilon} \frac{\|f(A+E) - f(A)\|}{\epsilon}.$
The relative condition number, which is usually of more interest, measures these perturbations in a relative sense and is defined by
$\mathrm{cond}_{\mathrm{rel}}(f,A) = \mathrm{cond}_{\mathrm{abs}}(f,A)\, \frac{\|A\|}{\|f(A)\|}.$
The absolute and relative condition numbers can be expressed in terms of the norm of the Fréchet derivative by
$\mathrm{cond}_{\mathrm{abs}}(f,A) = \max_{E \ne 0} \frac{\|L(A,E)\|}{\|E\|}, \qquad \mathrm{cond}_{\mathrm{rel}}(f,A) = \frac{\|A\|}{\|f(A)\|} \max_{E \ne 0} \frac{\|L(A,E)\|}{\|E\|}.$
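The definition can be checked concretely for $f(A) = A^2$, whose Fréchet derivative is known in closed form, $L(A,E) = AE + EA$; a Python/NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
E = rng.standard_normal((3, 3))

def f(M):
    return M @ M                   # f(A) = A^2

L = A @ E + E @ A                  # exact Frechet derivative of A -> A^2

t = 1e-6
diff = (f(A + t * E) - f(A)) / t   # finite-difference approximation to L(A, E)
# For this quadratic f, diff - L equals t * E @ E exactly, so the
# approximation error is O(t), matching the o(||E||) term in the definition.
```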
Chapter F01 contains routines for calculating the condition number of the matrix exponential, logarithm, sine, cosine, sinh, cosh, square root and the general real power of both real and complex matrices. It also contains routines for estimating the condition number of a general function of a real or complex matrix.
3
Recommendations on Choice and Use of Available Routines
3.1
Matrix Inversion
Note: before using any routine for matrix inversion, consider carefully whether it is needed.
Although the solution of a set of linear equations $Ax = b$ can be written as $x = A^{-1}b$, the solution should never be computed by first inverting $A$ and then computing $A^{-1}b$; the routines in Chapters F04 or F07 should always be used to solve such sets of equations directly; they are faster in execution, and numerically more stable and accurate. Similar remarks apply to the solution of least squares problems, which again should be solved by using the routines in Chapters F04 and F08 rather than by computing a pseudo-inverse.
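The advice above can be seen with a quick Python/NumPy comparison (illustrative; NumPy's `solve` plays the role of the direct solvers in Chapters F04/F07):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x_solve = np.linalg.solve(A, b)    # preferred: direct solve (LU with pivoting)
x_inv = np.linalg.inv(A) @ b       # discouraged: explicit inverse, then multiply

# Both give a solution, but the direct solve does roughly a third of the
# arithmetic and its residual ||Ax - b|| is generally at least as small.
residual = np.linalg.norm(A @ x_solve - b)
```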
(a) Nonsingular square matrices of order $n$
This chapter describes techniques for inverting a general real matrix $A$ and matrices which are positive definite (have all eigenvalues positive) and are either real and symmetric or complex and Hermitian. It is wasteful and uneconomical not to use the appropriate routine when a matrix is known to have one of these special forms. A general routine must be used when the matrix is not known to be positive definite. In most routines, the inverse is computed by solving the linear equations $Ax_i = e_i$, for $i = 1, 2, \ldots, n$, where $e_i$ is the $i$th column of the identity matrix.
Routines are given for calculating the approximate inverse, that is solving the linear equations just once, and also for obtaining the accurate inverse by successive iterative corrections of this first approximation. The latter, of course, are more costly in terms of time and storage, since each correction involves the solution of $n$ sets of linear equations and since the original $A$ and its $LU$ decomposition must be stored together with the first and successively corrected approximations to the inverse. In practice, the storage requirements for the ‘corrected’ inverse routines are about double those of the ‘approximate’ inverse routines, though the extra computer time is not prohibitive since the same matrix and the same $LU$ decomposition is used in every linear equation solution.
Despite the extra work of the ‘corrected’ inverse routines, they are superior to the ‘approximate’ inverse routines. A correction provides a means of estimating the number of accurate figures in the inverse or the number of ‘meaningful’ figures relating to the degree of uncertainty in the coefficients of the matrix.
The residual matrix $R = AX - I$, where $X$ is a computed inverse of $A$, conveys useful information. Firstly $\|R\|$ is a bound on the relative error in $X$ and secondly $\|R\| < \frac{1}{2}$ guarantees the convergence of the iterative process in the ‘corrected’ inverse routines.
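One classical correction scheme of this kind is the Newton–Schulz iteration $X_{k+1} = X_k(2I - AX_k)$, which converges quadratically when $\|AX_0 - I\| < 1$; a Python/NumPy sketch of the idea (illustrative, not the exact scheme used by the NAG routines):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((n, n)) + 5 * np.eye(n)   # comfortably nonsingular

# Start from a deliberately perturbed approximate inverse X0.
X = np.linalg.inv(A) + 1e-3 * rng.standard_normal((n, n))
I = np.eye(n)

for _ in range(10):
    R = A @ X - I        # residual matrix; ||R|| < 1 guarantees convergence here
    X = X - X @ R        # X_{k+1} = X_k (2I - A X_k): the residual is squared each step

final_residual = np.linalg.norm(A @ X - I)
```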
The decision trees for inversion show which routines in
Chapter F04 and
Chapter F07 should be used for the inversion of other special types of matrices not treated in the chapter.
(b) General real rectangular matrices
For real matrices f08aef and f01qjf return the $QR$ and $RQ$ factorizations of $A$ respectively and f08bff returns the $QR$ factorization with column interchanges. The corresponding complex routines are f08asf, f01rjf and f08btf respectively. Routines are also provided to form the orthogonal matrices and transform by the orthogonal matrices following the use of the above routines. f01qgf and f01rgf form the $RQ$ factorization of an upper trapezoidal matrix for the real and complex cases respectively.
f01blf uses the $QR$ factorization as described in Section 2.1(ii)(a) and is the only routine that explicitly returns a pseudo-inverse. If $m \ge n$ then the routine will calculate the pseudo-inverse $A^+$ of the matrix $A$. If $m < n$ then the $n \times m$ matrix $A^T$ should be used. The routine will calculate the pseudo-inverse $Z = (A^T)^+$ of $A^T$ and the required pseudo-inverse will be $A^+ = Z^T$. The routine also attempts to calculate the rank, $r$, of the matrix given a tolerance to decide when elements can be regarded as zero. However, should this routine fail due to an incorrect determination of the rank, the singular value decomposition method (described below) should be used.
f08kbf and f08kpf compute the singular value decomposition as described in Section 2 for real and complex matrices respectively. If $A$ has rank $r < \min(m, n)$ then the $\min(m, n) - r$ smallest singular values will be negligible and the pseudo-inverse of $A$ can be obtained as $A^+ = \tilde{V}\tilde{\Sigma}^{-1}\tilde{U}^T$ as described in Section 2. If the rank of $A$ is not known in advance it can be estimated from the singular values (see Section 2.4 in the F04 Chapter Introduction).
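Estimating the rank from the singular values amounts to counting those above a tolerance; a Python/NumPy sketch (the tolerance shown is a common heuristic, not the one mandated by the Library):

```python
import numpy as np

def numerical_rank(A, tol=None):
    """Count singular values above a tolerance to estimate rank(A)."""
    sigma = np.linalg.svd(A, compute_uv=False)
    if tol is None:
        # Common heuristic: scale machine epsilon by the matrix
        # dimension and the largest singular value.
        tol = max(A.shape) * np.finfo(A.dtype).eps * sigma[0]
    return int(np.sum(sigma > tol))

# A rank-1 matrix plus a tiny perturbation still has numerical rank 1.
A = np.outer([1., 2., 3.], [4., 5., 6.]) + 1e-15
```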
In the real case with $m \ge n$, f08aef followed by f02wuf provide details of the $QR$ factorization or the singular value decomposition depending on whether or not $A$ is of full rank and for some problems provide an attractive alternative to f08kbf.
For large sparse matrices, leading terms in the singular value decomposition can be computed using routines from
Chapter F12.
3.2
Matrix Factorizations
Most of these routines serve a special purpose required for the solution of sets of simultaneous linear equations or the eigenvalue problem. For further details, you should consult
Sections 3 or
4 in the
F02 Chapter Introduction or
Sections 3 or
4 in the
F04 Chapter Introduction.
f01brf and f01bsf are provided for factorizing general real sparse matrices. A more recent algorithm for the same problem is available through f11mef. For factorizing real symmetric positive definite sparse matrices, see f11jaf. These routines should only be used when $A$ is not banded and when the total number of nonzero elements is less than 10% of the total number of elements. In all other cases, either the band routines or the general routines should be used.
f01mdf and
f01mef compute the Cheng–Higham modified Cholesky factorization of a real symmetric matrix and the positive definite perturbed input matrix from the factors.
The routines
f01saf (for dense matrices) and
f01sbf (sparse matrices, using a reverse communication interface) are provided for computing non-negative matrix factorizations.
3.3
Matrix Arithmetic and Manipulation
The routines in the f01c section are designed for the general handling of $m \times n$ matrices. Emphasis has been placed on flexibility in the argument specifications and on avoiding, where possible, the use of internally declared arrays. They are, therefore, suited for use with large matrices of variable row and column dimensions. Routines are included for the addition and subtraction of sub-matrices of larger matrices, as well as the standard manipulations of full matrices. Those routines involving matrix multiplication may use additional-precision arithmetic for the accumulation of inner products. See also
Chapter F06.
The routines in the f01d section perform arithmetic operations on triangular matrices.
The routines in the f01v (LAPACK) and f01z section are designed to allow conversion between full storage format and one of the packed storage schemes required by some of the routines in
Chapters F02,
F04,
F06,
F07 and
F08.
3.3.1
NAG Names and LAPACK Names
Routines with NAG names beginning f01v may be called either by their NAG names or by their LAPACK names. When using the NAG Library, the double precision form of the LAPACK name must be used (beginning with D- or Z-).
References to
Chapter F01 routines in the manual normally include the LAPACK double precision names, for example,
f01vef.
The LAPACK routine names follow a simple scheme (which is similar to that used for the BLAS in
Chapter F06). Most names have the structure XYYTZZ, where the components have the following meanings:
– the initial letter, X, indicates the data type (real or complex) and precision:
- S – real, single precision (in Fortran, 4 byte length REAL)
- D – real, double precision (in Fortran, 8 byte length REAL)
- C – complex, single precision (in Fortran, 8 byte length COMPLEX)
- Z – complex, double precision (in Fortran, 16 byte length COMPLEX)
– the fourth letter, T, indicates that the routine is performing a storage scheme transformation (conversion)
– the letters YY indicate the original storage scheme used to store a triangular part of the matrix $A$, while the letters ZZ indicate the target storage scheme of the conversion (YY cannot equal ZZ since this would do nothing):
- TF – Rectangular Full Packed Format (RFP)
- TP – Packed Format
- TR – Full Format
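The full-to-packed conversion (TR to TP) stores the upper triangle column by column in a one-dimensional array of $n(n+1)/2$ elements; a Python/NumPy sketch of the layout (LAPACK's 'U' packed order, for illustration only):

```python
import numpy as np

def full_to_packed_upper(A):
    """Pack the upper triangle of A column by column (LAPACK 'U' packed order)."""
    n = A.shape[0]
    return np.concatenate([A[: j + 1, j] for j in range(n)])

A = np.array([[1., 2., 4.],
              [0., 3., 5.],
              [0., 0., 6.]])
ap = full_to_packed_upper(A)       # n(n+1)/2 = 6 elements: [1, 2, 3, 4, 5, 6]
```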
3.4
Matrix Functions
f01ecf and
f01fcf compute the matrix exponential, $e^A$, of a real and complex square matrix $A$ respectively. If estimates of the condition number of the matrix exponential are required then
f01jgf and
f01kgf should be used. If Fréchet derivatives are required then
f01jhf and
f01khf should be used.
f01edf and
f01fdf compute the matrix exponential, $e^A$, of a real symmetric and complex Hermitian matrix respectively. If the matrix is real symmetric, or complex Hermitian, then it is recommended that
f01edf, or
f01fdf be used as they are more efficient and, in general, more accurate than
f01ecf and
f01fcf.
f01ejf and
f01fjf compute the principal matrix logarithm, $\log(A)$, of a real and complex square matrix $A$ respectively. If estimates of the condition number of the matrix logarithm are required then
f01jjf and
f01kjf should be used. If Fréchet derivatives are required then
f01jkf and
f01kkf should be used.
f01ekf and
f01fkf compute the matrix exponential, sine, cosine, sinh or cosh of a real and complex square matrix $A$ respectively. If the matrix exponential is required then it is recommended that
f01ecf or
f01fcf be used as they are, in general, more accurate than
f01ekf and
f01fkf. If estimates of the condition number of the matrix function are required then
f01jaf and
f01kaf should be used.
f01elf and
f01emf compute the matrix function, $f(A)$, of a real square matrix.
f01flf and
f01fmf compute the matrix function of a complex square matrix. The derivatives of $f$ are required for these computations.
f01elf and
f01flf use numerical differentiation to obtain the derivatives of $f$.
f01emf and
f01fmf use derivatives you have supplied. If estimates of the condition number are required but you are not supplying derivatives then
f01jbf and
f01kbf should be used.
If estimates of the condition number of the matrix function are required and you are supplying derivatives of $f$ then
f01jcf and
f01kcf should be used.
If the matrix $A$ is real symmetric or complex Hermitian then it is recommended that, to compute the matrix function $f(A)$,
f01eff and
f01fff are used respectively as they are more efficient and, in general, more accurate than
f01elf,
f01emf,
f01flf and
f01fmf.
f01gaf and
f01haf compute the matrix function $e^{tA}B$ for explicitly stored dense real and complex matrices $A$ and $B$ respectively while
f01gbf and
f01hbf compute the same using reverse communication. In the latter case, control is returned to you. You should calculate any required matrix-matrix products and then call the routine again. See
Section 7 in How to Use the NAG Library for further information.
f01enf and
f01fnf compute the principal square root $A^{1/2}$ of a real and complex square matrix $A$ respectively. If $A$ is complex and upper triangular then
f01fpf should be used. If $A$ is real and upper quasi-triangular then
f01epf should be used. If estimates of the condition number of the matrix square root are required then
f01jdf and
f01kdf should be used.
f01eqf and
f01fqf compute the matrix power $A^p$, where $p \in \mathbb{R}$, of real and complex matrices respectively. If estimates of the condition number of the matrix power are required then
f01jef and
f01kef should be used. If Fréchet derivatives are required then
f01jff and
f01kff should be used.
4
Decision Trees
The decision trees show the routines in this chapter and in
Chapter F04,
Chapter F07 and
Chapter F08 that should be used for inverting matrices of various types. They also show which routine should be used to calculate various matrix functions.
(i) Matrix Inversion:
Tree 1
Is $A$ an $n \times n$ matrix of rank $n$?
    yes: Is $A$ a real matrix?
        yes: see Tree 2
        no: see Tree 3
    no: see Tree 4
Tree 2: Inverse of a real n by n matrix of full rank
Tree 3: Inverse of a complex n by n matrix of full rank
Tree 4: Pseudo-inverses
Note 1: the inverse of a band matrix $A$ does not, in general, have the same shape as $A$, and no routines are provided specifically for finding such an inverse. The matrix must either be treated as a full matrix or the equations $AX = B$ must be solved, where $B$ has been initialized to the identity matrix $I$. In the latter case, see the decision trees in Section 4 in the F04 Chapter Introduction.
Note 2: by ‘guaranteed accuracy’ we mean that the accuracy of the inverse is improved by the use of the iterative refinement technique using additional precision.
(ii)
Matrix Factorizations: see the decision trees in Section 4 in the
F02 and
F04 Chapter Introductions.
(iii) Matrix Arithmetic and Manipulation: not appropriate.
(iv) Matrix Functions:
Tree 5: Matrix functions of an n by n real matrix
Is $e^{tA}B$ required?
    yes: Is $A$ stored in dense format?
        yes: f01gaf
        no: f01gbf
    no: Is $A$ real symmetric?
        yes: Is $e^A$ required?
            yes: f01edf
            no: f01eff
        no: Is $\cos(A)$ or $\cosh(A)$ or $\sin(A)$ or $\sinh(A)$ required?
            yes: Is the condition number of the matrix function required?
                yes: f01jaf
                no: f01ekf
            no: Is $\log(A)$ required?
                yes: Is the condition number of the matrix logarithm required?
                    yes: f01jjf
                    no: Is the Fréchet derivative of the matrix logarithm required?
                        yes: f01jkf
                        no: f01ejf
                no: Is $e^A$ required?
                    yes: Is the condition number of the matrix exponential required?
                        yes: f01jgf
                        no: Is the Fréchet derivative of the matrix exponential required?
                            yes: f01jhf
                            no: f01ecf
                    no: Is $A^{1/2}$ required?
                        yes: Is the condition number of the matrix square root required?
                            yes: f01jdf
                            no: Is the matrix upper quasi-triangular?
                                yes: f01epf
                                no: f01enf
                        no: Is $A^p$ required?
                            yes: Is the condition number of the matrix power required?
                                yes: f01jef
                                no: Is the Fréchet derivative of the matrix power required?
                                    yes: f01jff
                                    no: f01eqf
                            no: $f(A)$ will be computed. Will derivatives of $f$ be supplied by the user?
                                yes: Is the condition number of the matrix function required?
                                    yes: f01jcf
                                    no: f01emf
                                no: Is the condition number of the matrix function required?
                                    yes: f01jbf
                                    no: f01elf
Tree 6: Matrix functions of an n by n complex matrix
Is $e^{tA}B$ required?
    yes: Is $A$ stored in dense format?
        yes: f01haf
        no: f01hbf
    no: Is $A$ complex Hermitian?
        yes: Is $e^A$ required?
            yes: f01fdf
            no: f01fff
        no: Is $\cos(A)$ or $\cosh(A)$ or $\sin(A)$ or $\sinh(A)$ required?
            yes: Is the condition number of the matrix function required?
                yes: f01kaf
                no: f01fkf
            no: Is $\log(A)$ required?
                yes: Is the condition number of the matrix logarithm required?
                    yes: f01kjf
                    no: Is the Fréchet derivative of the matrix logarithm required?
                        yes: f01kkf
                        no: f01fjf
                no: Is $e^A$ required?
                    yes: Is the condition number of the matrix exponential required?
                        yes: f01kgf
                        no: Is the Fréchet derivative of the matrix exponential required?
                            yes: f01khf
                            no: f01fcf
                    no: Is $A^{1/2}$ required?
                        yes: Is the condition number of the matrix square root required?
                            yes: f01kdf
                            no: Is the matrix upper triangular?
                                yes: f01fpf
                                no: f01fnf
                        no: Is $A^p$ required?
                            yes: Is the condition number of the matrix power required?
                                yes: f01kef
                                no: Is the Fréchet derivative of the matrix power required?
                                    yes: f01kff
                                    no: f01fqf
                            no: $f(A)$ will be computed. Will derivatives of $f$ be supplied by the user?
                                yes: Is the condition number of the matrix function required?
                                    yes: f01kcf
                                    no: f01fmf
                                no: Is the condition number of the matrix function required?
                                    yes: f01kbf
                                    no: f01flf
5
Functionality Index
Action of the matrix exponential on a complex matrix: f01haf
Action of the matrix exponential on a complex matrix (reverse communication): f01hbf
Action of the matrix exponential on a real matrix: f01gaf
Action of the matrix exponential on a real matrix (reverse communication): f01gbf
real symmetric positive definite matrix,
Matrix Arithmetic and Manipulation,
    matrix storage conversion,
        full to packed triangular storage,
        full to Rectangular Full Packed storage,
        packed band rectangular storage, special provision for diagonal,
        packed triangular to full storage,
        packed triangular to Rectangular Full Packed storage,
        packed triangular square storage, special provision for diagonal,
        Rectangular Full Packed to full storage,
        Rectangular Full Packed to packed triangular storage,
complex Hermitian matrix,
complex matrix,
    condition number for a matrix exponential: f01kgf
    condition number for a matrix exponential, logarithm, sine, cosine, sinh or cosh: f01kaf
    condition number for a matrix function, using numerical differentiation: f01kbf
    condition number for a matrix function, using user-supplied derivatives: f01kcf
    condition number for a matrix logarithm: f01kjf
    condition number for a matrix power: f01kef
    condition number for the matrix square root, logarithm, sine, cosine, sinh or cosh: f01kdf
    matrix exponential, sine, cosine, sinh or cosh: f01fkf
    matrix function, using numerical differentiation: f01flf
    matrix function, using user-supplied derivatives: f01fmf
real matrix,
    condition number for a matrix exponential: f01jgf
    condition number for a matrix function, using numerical differentiation: f01jbf
    condition number for a matrix function, using user-supplied derivatives: f01jcf
    condition number for a matrix logarithm: f01jjf
    condition number for a matrix power: f01jef
    condition number for the matrix exponential, logarithm, sine, cosine, sinh or cosh: f01jaf
    condition number for the matrix square root, logarithm, sine, cosine, sinh or cosh: f01jdf
    matrix exponential, sine, cosine, sinh or cosh: f01ekf
    matrix function, using numerical differentiation: f01elf
    matrix function, using user-supplied derivatives: f01emf
real symmetric matrix,
complex matrix, form unitary matrix: f01rkf
complex upper trapezoidal matrix,
eigenproblem $Ax = \lambda Bx$, $A$, $B$ banded,
    reduction to standard symmetric problem: f01bvf
modified Cholesky factorization, form positive definite perturbed input matrix: f01mef
modified Cholesky factorization of a real symmetric matrix: f01mdf
non-negative matrix factorization: f01saf
non-negative matrix factorization, reverse communication: f01sbf
real almost block-diagonal matrix,
real band symmetric positive definite matrix,
    variable bandwidth, $LDL^T$ factorization: f01mcf
real sparse matrix,
    $LU$ factorization, known sparsity pattern: f01bsf
real upper trapezoidal matrix,
6
Auxiliary Routines Associated with Library Routine Arguments
None.
7
Withdrawn or Deprecated Routines
None.
8
References
Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
Higham N J (2008) Functions of Matrices: Theory and Computation SIAM, Philadelphia, PA, USA
Wilkinson J H (1965) The Algebraic Eigenvalue Problem Oxford University Press, Oxford
Wilkinson J H (1977) Some recent advances in numerical linear algebra The State of the Art in Numerical Analysis (ed D A H Jacobs) Academic Press
Wilkinson J H and Reinsch C (1971) Handbook for Automatic Computation II, Linear Algebra Springer–Verlag