NAG FL Interface
F01 (Matop)
Matrix Operations, Including Inversion


1 Scope of the Chapter

This chapter provides facilities for four types of problem:
  (i) matrix inversion;
  (ii) matrix factorizations;
  (iii) matrix arithmetic and manipulation;
  (iv) matrix functions.
See Sections 2.1, 2.2, 2.3 and 2.4, where these problems are discussed.

2 Background to the Problems

2.1 Matrix Inversion

  (i) Nonsingular square matrices of order n.
    If A, a square matrix of order n, is nonsingular (has rank n), then its inverse X exists and satisfies the equations AX = XA = I (the identity or unit matrix).
    It is worth noting that if AX − I = R, so that R is the ‘residual’ matrix, then a bound on the relative error is given by ‖R‖, i.e.,
      ‖X − A⁻¹‖ / ‖A⁻¹‖ ≤ ‖R‖.
  (ii) General real rectangular matrices.
    A real matrix A has no inverse if it is square (n×n) and singular (has rank < n), or if it is of shape (m×n) with m ≠ n, but there is a Generalized or Pseudo-inverse A⁺ which satisfies the equations
      AA⁺A = A,  A⁺AA⁺ = A⁺,  (AA⁺)ᵀ = AA⁺,  (A⁺A)ᵀ = A⁺A
    (which of course are also satisfied by the inverse X of A if A is square and nonsingular).
    (a) if m ≥ n and rank(A) = n, then A can be factorized using a QR factorization, given by
        A = Q ( R )
              ( 0 ) ,
      where Q is an m×m orthogonal matrix and R is an n×n, nonsingular, upper triangular matrix. The pseudo-inverse of A is then given by
        A⁺ = R⁻¹Q~ᵀ,
      where Q~ consists of the first n columns of Q.
    (b) if m ≤ n and rank(A) = m, then A can be factorized using an RQ factorization, given by
        A = ( R  0 ) Qᵀ,
      where Q is an n×n orthogonal matrix and R is an m×m, nonsingular, upper triangular matrix. The pseudo-inverse of A is then given by
        A⁺ = Q~R⁻¹,
      where Q~ consists of the first m columns of Q.
    (c) if m ≥ n and rank(A) = r ≤ n, then A can be factorized using a QR factorization, with column interchanges, as
        A = Q ( R ) Pᵀ,
              ( 0 )
      where Q is an m×m orthogonal matrix, R is an r×n upper trapezoidal matrix and P is an n×n permutation matrix. The pseudo-inverse of A is then given by
        A⁺ = PRᵀ(RRᵀ)⁻¹Q~ᵀ,
      where Q~ consists of the first r columns of Q.
    (d) if rank(A) = r ≤ k = min(m,n), then A can be factorized as the singular value decomposition
        A = UΣVᵀ,
      where U is an m×m orthogonal matrix, V is an n×n orthogonal matrix and Σ is an m×n diagonal matrix with non-negative diagonal elements σi. The first k columns of U and V are the left- and right-hand singular vectors of A respectively and the k diagonal elements of Σ are the singular values of A. Σ may be chosen so that
        σ1 ≥ σ2 ≥ … ≥ σk ≥ 0
      and in this case, if rank(A) = r, then
        σ1 ≥ σ2 ≥ … ≥ σr > 0,  σr+1 = … = σk = 0.
      If U~ and V~ consist of the first r columns of U and V respectively and Σ~ is an r×r diagonal matrix with diagonal elements σ1, σ2, …, σr, then A is given by
        A = U~Σ~V~ᵀ
      and the pseudo-inverse of A is given by
        A⁺ = V~Σ~⁻¹U~ᵀ.
      Notice that
        AᵀA = V(ΣᵀΣ)Vᵀ,
      which is the classical eigenvalue (spectral) factorization of AᵀA.
    (e) if A is complex, then the above relationships are still true if we use ‘unitary’ in place of ‘orthogonal’ and conjugate transpose in place of transpose. For example, the singular value decomposition of A is
        A = UΣVᴴ,
      where U and V are unitary, Vᴴ is the conjugate transpose of V and Σ is as in (ii)(d) above.
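As an illustrative sketch (using NumPy as a stand-in for the Library routines), the pseudo-inverse can be assembled from the thin singular value decomposition exactly as in (ii)(d) above; the variable names are, of course, not part of the Library:

```python
import numpy as np

# Sketch: build A+ = V~ Sigma~^-1 U~^T from the SVD of a full-rank 5x3 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))                     # m x n with m >= n

U, s, Vt = np.linalg.svd(A, full_matrices=False)    # thin SVD: A = U diag(s) Vt
r = int(np.sum(s > s[0] * 1e-12))                   # numerical rank from the singular values
A_pinv = Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T   # the pseudo-inverse A+

# A+ satisfies the first two Moore-Penrose conditions stated above
assert np.allclose(A @ A_pinv @ A, A)
assert np.allclose(A_pinv @ A @ A_pinv, A_pinv)
```

The same construction works for rank-deficient A, provided the negligible singular values are excluded via the rank test.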

2.2 Matrix Factorizations

The routines in this section perform matrix factorizations which are required for the solution of systems of linear equations with various special structures. A few routines which perform associated computations are also included.
Other routines for matrix factorizations are to be found in Chapters F07, F08 and F11.
This section also contains a few routines associated with eigenvalue problems (see Chapter F02). (Historical note: this section used to contain many more such routines, but they have now been superseded by routines in Chapter F08.)
Finally, this section contains routines for computing non-negative matrix factorizations, which are used for dimensional reduction and classification in data analysis. Given a rectangular m×n matrix, A, with non-negative elements, a non-negative matrix factorization of A is an approximate factorization of A into the product of an m×k non-negative matrix W and a k×n non-negative matrix H, so that A ≈ WH. Typically k is chosen so that k ≪ min(m,n). The matrices W and H are then computed to minimize ‖A − WH‖F. The factorization is not unique.
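A minimal sketch of the problem being solved (not of the algorithm used by the Library routines) is the classical Lee–Seung multiplicative update, which keeps W and H non-negative while reducing ‖A − WH‖F:

```python
import numpy as np

# Sketch of non-negative matrix factorization A ~= W H by multiplicative
# updates; the NAG routines use their own algorithms, so this is only an
# illustration of the optimization problem.
rng = np.random.default_rng(1)
m, n, k = 8, 6, 3
A = rng.random((m, k)) @ rng.random((k, n))   # exactly rank-k, non-negative

W = rng.random((m, k))
H = rng.random((k, n))
eps = 1e-12                                   # guard against division by zero
for _ in range(500):
    H *= (W.T @ A) / (W.T @ W @ H + eps)      # ratios are non-negative, so
    W *= (A @ H.T) / (W @ H @ H.T + eps)      # W and H stay non-negative

residual = np.linalg.norm(A - W @ H, 'fro') / np.linalg.norm(A, 'fro')
```

Because the factorization is not unique, different starting values of W and H generally give different (but comparably good) factors.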

2.3 Matrix Arithmetic and Manipulation

The intention of routines in this section (f01c, f01d, f01v and f01z) is to cater for some of the commonly occurring operations in matrix manipulation, i.e., transposing a matrix or adding part of one matrix to another, and for conversion between different storage formats, such as conversion between rectangular band matrix storage and packed band matrix storage. For vector, matrix-vector or matrix-matrix operations refer to Chapters F06 and F16.

2.4 Matrix Functions

Given a square matrix A, the matrix function f(A) is a matrix with the same dimensions as A which provides a generalization of the scalar function f.
If A has a full set of eigenvectors V, then A can be factorized as
  A = VDV⁻¹,
where D is the diagonal matrix whose diagonal elements, di, are the eigenvalues of A. f(A) is given by
  f(A) = Vf(D)V⁻¹,
where f(D) is the diagonal matrix whose ith diagonal element is f(di).
In general, A may not have a full set of eigenvectors. The matrix function can then be defined via a Cauchy integral. For an n×n matrix A,
  f(A) = (1/(2πi)) ∫Γ f(z)(zI − A)⁻¹ dz,
where Γ is a closed contour surrounding the eigenvalues of A, and f is analytic within Γ.
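The diagonalization formula above can be sketched numerically (NumPy standing in for the Library routines); here f is the square root and A is symmetric positive definite, so the eigendecomposition is real and V⁻¹ = Vᵀ:

```python
import numpy as np

# Sketch of f(A) = V f(D) V^-1 via the eigendecomposition, for f(z) = sqrt(z).
rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)               # symmetric positive definite: all d_i > 0

d, V = np.linalg.eigh(A)                  # A = V diag(d) V^T, V orthogonal
sqrtA = V @ np.diag(np.sqrt(d)) @ V.T     # f(A) = V f(D) V^-1 (here V^-1 = V^T)
```

For non-normal A this approach can be inaccurate when V is ill-conditioned, which is one reason the Library routines use other algorithms.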
Some matrix functions are defined implicitly. A matrix logarithm is a solution X to the equation
  e^X = A.
In general, X is not unique, but if A has no eigenvalues on the closed negative real line, then a unique principal logarithm exists whose eigenvalues have imaginary parts between −π and π. Similarly, a matrix square root is a solution X to the equation
  X² = A.
If A has no eigenvalues on the closed negative real line, then a unique principal square root exists with eigenvalues in the right half-plane.
If A has a vanishing eigenvalue, then log(A) cannot be computed. If the vanishing eigenvalue is defective (its algebraic multiplicity exceeds its geometric multiplicity, or equivalently, it occurs in a Jordan block of size greater than 1), then the square root cannot be computed. If the vanishing eigenvalue is semisimple (its algebraic and geometric multiplicities are equal, or equivalently, it occurs only in Jordan blocks of size 1), then a square root can be computed.
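These implicit definitions can be checked directly; in this hedged illustration SciPy's logm and sqrtm stand in for the Library routines, and the test matrix has eigenvalues 4 and 9, so the principal logarithm and square root both exist:

```python
import numpy as np
from scipy.linalg import expm, logm, sqrtm

# Sketch: X = log(A) solves e^X = A, and S = A^(1/2) solves S^2 = A,
# for an A with no eigenvalues on the closed negative real line.
A = np.array([[4.0, 1.0],
              [0.0, 9.0]])    # upper triangular, eigenvalues 4 and 9

X = logm(A)                   # principal matrix logarithm
S = sqrtm(A)                  # principal matrix square root
```

Both results are real here, because A is real with no eigenvalues on the closed negative real line.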
Algorithms for computing matrix functions are usually tailored to a specific function. Currently, Chapter F01 contains routines for calculating the exponential, logarithm, sine, cosine, sinh, cosh, square root and the general real power of both real and complex matrices. In addition, there are routines to compute a general function of real symmetric and complex Hermitian matrices and a general function of general real and complex matrices.
The Fréchet derivative of a matrix function f(A) in the direction of the matrix E is the linear function mapping E to Lf(A,E) such that
  f(A + E) − f(A) − Lf(A,E) = o(‖E‖).
The Fréchet derivative measures the first-order effect on f(A) of perturbations in A. Chapter F01 contains routines for calculating the Fréchet derivative of the exponential, logarithm and real powers of both real and complex matrices.
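The defining first-order expansion can be verified against a finite difference; here SciPy's expm_frechet stands in for the Library's Fréchet derivative routines for f = exp:

```python
import numpy as np
from scipy.linalg import expm, expm_frechet

# Sketch: L_exp(A,E) satisfies exp(A + tE) - exp(A) ~= t L_exp(A,E) for small t.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
E = rng.standard_normal((4, 4))

expA, L = expm_frechet(A, E)               # exp(A) and L_exp(A, E) in one call
t = 1e-6
fd = (expm(A + t * E) - expm(A)) / t       # forward-difference approximation
```

The difference between fd and L is O(t), consistent with the o(‖E‖) remainder in the definition.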
The condition number of a matrix function is a measure of its sensitivity to perturbations in the data. The absolute condition number measures these perturbations in an absolute sense and is defined by
  cond_abs(f,A) = lim_{ε→0} sup_{‖E‖≤ε} ‖f(A+E) − f(A)‖ / ε.
The relative condition number, which is usually of more interest, measures these perturbations in a relative sense and is defined by
  cond_rel(f,A) = cond_abs(f,A) ‖A‖ / ‖f(A)‖.
The absolute and relative condition numbers can be expressed in terms of the norm of the Fréchet derivative by
  cond_abs(f,A) = max_{E≠0} ‖L(A,E)‖ / ‖E‖,
  cond_rel(f,A) = (‖A‖ / ‖f(A)‖) max_{E≠0} ‖L(A,E)‖ / ‖E‖.
Chapter F01 contains routines for calculating the condition number of the matrix exponential, logarithm, sine, cosine, sinh, cosh, square root and the general real power of both real and complex matrices. It also contains routines for estimating the condition number of a general function of a real or complex matrix.
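The meaning of cond_rel can be illustrated for f = exp (with SciPy's expm_cond standing in for the Library estimators): to first order, the relative change in exp(A) is at most cond_rel times the relative perturbation in A:

```python
import numpy as np
from scipy.linalg import expm, expm_cond

# Sketch: relative change in exp(A) <= cond_rel(exp, A) * relative perturbation
# in A, to first order (Frobenius norm throughout).
rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
kappa = expm_cond(A)                       # cond_rel(exp, A) in the Frobenius norm

E = rng.standard_normal((4, 4))
t = 1e-7                                   # small perturbation t*E
rel_change = (np.linalg.norm(expm(A + t * E) - expm(A), 'fro')
              / np.linalg.norm(expm(A), 'fro'))
rel_pert = t * np.linalg.norm(E, 'fro') / np.linalg.norm(A, 'fro')
```

The bound is attained only for a worst-case direction E, so for a random E the observed ratio is usually well below kappa.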

3 Recommendations on Choice and Use of Available Routines

3.1 Matrix Inversion

Note: before using any routine for matrix inversion, consider carefully whether it is needed.
Although the solution of a set of linear equations Ax=b can be written as x=A-1b, the solution should never be computed by first inverting A and then computing A-1b; the routines in Chapters F04 or F07 should always be used to solve such sets of equations directly; they are faster in execution, and numerically more stable and accurate. Similar remarks apply to the solution of least squares problems which again should be solved by using the routines in Chapters F04 and F08 rather than by computing a pseudo-inverse.
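The advice above can be sketched numerically (NumPy standing in for the Library routines): solving Ax = b directly gives the same answer as forming A⁻¹b here, but without the extra cost and the loss of stability that explicit inversion incurs on ill-conditioned problems:

```python
import numpy as np

# Sketch: solve Ax = b directly rather than forming the inverse first.
rng = np.random.default_rng(5)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)

x_solve = np.linalg.solve(A, b)    # LU factorization + triangular solves (preferred)
x_inv = np.linalg.inv(A) @ b       # explicit inverse: avoid in practice
```

For well-conditioned A the two agree closely; the direct solve remains the more accurate and cheaper route as the conditioning worsens.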
  (a) Nonsingular square matrices of order n
    This chapter describes techniques for inverting a general real matrix A and matrices which are positive definite (have all eigenvalues positive) and are either real and symmetric or complex and Hermitian. It is wasteful and uneconomical not to use the appropriate routine when a matrix is known to have one of these special forms. A general routine must be used when the matrix is not known to be positive definite. In most routines, the inverse is computed by solving the linear equations Axi=ei, for i=1,2,,n, where ei is the ith column of the identity matrix.
    Routines are given for calculating the approximate inverse, that is solving the linear equations just once, and also for obtaining the accurate inverse by successive iterative corrections of this first approximation. The latter, of course, are more costly in terms of time and storage, since each correction involves the solution of n sets of linear equations and since the original A and its LU decomposition must be stored together with the first and successively corrected approximations to the inverse. In practice, the storage requirements for the ‘corrected’ inverse routines are about double those of the ‘approximate’ inverse routines, though the extra computer time is not prohibitive since the same matrix and the same LU decomposition are used in every linear equation solution.
    Despite the extra work of the ‘corrected’ inverse routines, they are superior to the ‘approximate’ inverse routines. A correction provides a means of estimating the number of accurate figures in the inverse or the number of ‘meaningful’ figures relating to the degree of uncertainty in the coefficients of the matrix.
    The residual matrix R = AX − I, where X is a computed inverse of A, conveys useful information. Firstly, ‖R‖ is a bound on the relative error in X and, secondly, ‖R‖ < 1/2 guarantees the convergence of the iterative process in the ‘corrected’ inverse routines.
    The decision trees for inversion show which routines in Chapter F04 and Chapter F07 should be used for the inversion of other special types of matrices not treated in the chapter.
  (b) General real rectangular matrices
    For real matrices f08aef and f01qjf return QR and RQ factorizations of A respectively and f08bff returns the QR factorization with column interchanges. The corresponding complex routines are f08asf, f01rjf and f08btf respectively. Routines are also provided to form the orthogonal matrices and transform by the orthogonal matrices following the use of the above routines. f01qgf and f01rgf form the RQ factorization of an upper trapezoidal matrix for the real and complex cases respectively.
    f01blf uses the QR factorization as described in Section 2.1(ii)(a) and is the only routine that explicitly returns a pseudo-inverse. If m ≥ n, then the routine will calculate the pseudo-inverse A⁺ of the matrix A. If m < n, then the n×m matrix Aᵀ should be used. The routine will calculate the pseudo-inverse Z = (Aᵀ)⁺ = (A⁺)ᵀ of Aᵀ and the required pseudo-inverse will be Zᵀ. The routine also attempts to calculate the rank, r, of the matrix given a tolerance to decide when elements can be regarded as zero. However, should this routine fail due to an incorrect determination of the rank, the singular value decomposition method (described below) should be used.
    f08kbf and f08kpf compute the singular value decomposition as described in Section 2 for real and complex matrices respectively. If A has rank r ≤ k = min(m,n), then the k − r smallest singular values will be negligible and the pseudo-inverse of A can be obtained as A⁺ = V~Σ~⁻¹U~ᵀ as described in Section 2. If the rank of A is not known in advance, it can be estimated from the singular values (see Section 2.4 in the F04 Chapter Introduction). In the real case with m ≥ n, f08aef followed by f02wuf provides details of the QR factorization or the singular value decomposition depending on whether or not A is of full rank, and for some problems this provides an attractive alternative to f08kbf. For large sparse matrices, leading terms in the singular value decomposition can be computed using routines from Chapter F12.
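Estimating the rank from the singular values can be sketched as follows (NumPy standing in for f08kbf; the tolerance here is purely illustrative): for a rank-deficient A the trailing singular values are negligible relative to σ1, and a tolerance separates them:

```python
import numpy as np

# Sketch: numerical rank of a deliberately rank-deficient matrix from its
# singular values, using an illustrative tolerance relative to sigma_1.
rng = np.random.default_rng(6)
m, n, r_true = 7, 5, 2
A = rng.standard_normal((m, r_true)) @ rng.standard_normal((r_true, n))  # rank 2

s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
tol = 1e-8 * s[0]                        # illustrative cutoff (problem-dependent)
r_est = int(np.sum(s > tol))
```

In practice the cutoff should reflect the uncertainty in the data, as discussed in Section 2.4 of the F04 Chapter Introduction.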

3.2 Matrix Factorizations

Most of these routines serve a special purpose required for the solution of sets of simultaneous linear equations or the eigenvalue problem. For further details, you should consult Sections 3 or 4 in the F02 Chapter Introduction or Sections 3 or 4 in the F04 Chapter Introduction.
f01brf and f01bsf are provided for factorizing general real sparse matrices. A more recent algorithm for the same problem is available through f11mef. For factorizing real symmetric positive definite sparse matrices, see f11jaf. These routines should only be used when A is not banded and when the total number of nonzero elements is less than 10% of the total number of elements. In all other cases, either the band routines or the general routines should be used.
f01mdf and f01mef compute the Cheng–Higham modified Cholesky factorization of a real symmetric matrix and the positive definite perturbed input matrix from the factors.
The routines f01saf (for dense matrices) and f01sbf (sparse matrices, using a reverse communication interface) are provided for computing non-negative matrix factorizations.

3.3 Matrix Arithmetic and Manipulation

The routines in the f01c section are designed for the general handling of m×n matrices. Emphasis has been placed on flexibility in the argument specifications and on avoiding, where possible, the use of internally declared arrays. They are, therefore, suited for use with large matrices of variable row and column dimensions. Routines are included for the addition and subtraction of sub-matrices of larger matrices, as well as the standard manipulations of full matrices. Those routines involving matrix multiplication may use additional-precision arithmetic for the accumulation of inner products. See also Chapter F06.
The routines in the f01d section perform arithmetic operations on triangular matrices.
The routines in the f01v (LAPACK) and f01z section are designed to allow conversion between full storage format and one of the packed storage schemes required by some of the routines in Chapters F02, F04, F06, F07 and F08.

3.3.1 NAG Names and LAPACK Names

Routines with NAG name beginning f01v may be called either by their NAG names or by their LAPACK names. When using the NAG Library, the double precision form of the LAPACK name must be used (beginning with D- or Z-).
References to Chapter F01 routines in the manual normally include the LAPACK double precision names, for example, f01vef.
The LAPACK routine names follow a simple scheme (which is similar to that used for the BLAS in Chapter F06). Most names have the structure XYYTZZ, where the components have the following meanings:
– the initial letter, X, indicates the data type (real or complex) and precision: D for double precision real, Z for double precision complex;
– the fourth letter, T, indicates that the routine is performing a storage scheme transformation (conversion);
– the letters YY indicate the original storage scheme used to store a triangular part of the matrix A, while the letters ZZ indicate the target storage scheme of the conversion (YY cannot equal ZZ since this would do nothing): TR for full storage, TP for packed storage and TF for Rectangular Full Packed (RFP) storage.

3.4 Matrix Functions

f01ecf and f01fcf compute the matrix exponential, eA, of a real and complex square matrix A respectively. If estimates of the condition number of the matrix exponential are required then f01jgf and f01kgf should be used. If Fréchet derivatives are required then f01jhf and f01khf should be used.
f01edf and f01fdf compute the matrix exponential, e^A, of a real symmetric and complex Hermitian matrix respectively. If the matrix is real symmetric or complex Hermitian, then it is recommended that f01edf or f01fdf, as appropriate, be used, as they are more efficient and, in general, more accurate than f01ecf and f01fcf.
f01ejf and f01fjf compute the principal matrix logarithm, log(A), of a real and complex square matrix A respectively. If estimates of the condition number of the matrix logarithm are required then f01jjf and f01kjf should be used. If Fréchet derivatives are required then f01jkf and f01kkf should be used.
f01ekf and f01fkf compute the matrix exponential, sine, cosine, sinh or cosh of a real and complex square matrix A respectively. If the matrix exponential is required then it is recommended that f01ecf or f01fcf be used as they are, in general, more accurate than f01ekf and f01fkf. If estimates of the condition number of the matrix function are required then f01jaf and f01kaf should be used.
f01elf and f01emf compute the matrix function, f(A), of a real square matrix. f01flf and f01fmf compute the matrix function of a complex square matrix. The derivatives of f are required for these computations. f01elf and f01flf use numerical differentiation to obtain the derivatives of f. f01emf and f01fmf use derivatives you have supplied. If estimates of the condition number are required but you are not supplying derivatives then f01jbf and f01kbf should be used. If estimates of the condition number of the matrix function are required and you are supplying derivatives of f then f01jcf and f01kcf should be used.
If the matrix A is real symmetric or complex Hermitian then it is recommended that to compute the matrix function, f(A), f01eff and f01fff are used respectively as they are more efficient and, in general, more accurate than f01elf, f01emf, f01flf and f01fmf.
f01gaf and f01haf compute the matrix function etAB for explicitly stored dense real and complex matrices A and B respectively while f01gbf and f01hbf compute the same using reverse communication. In the latter case, control is returned to you. You should calculate any required matrix-matrix products and then call the routine again. See Section 7 in How to Use the NAG Library for further information.
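The action e^{tA}B can be computed without forming e^{tA} explicitly, in the spirit of f01gaf; in this hedged sketch SciPy's expm_multiply stands in for the Library routine:

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import expm_multiply

# Sketch: compute F = e^{tA} B from matrix products with A, without ever
# forming the dense exponential e^{tA}.
rng = np.random.default_rng(7)
A = rng.standard_normal((6, 6))
B = rng.standard_normal((6, 2))
t = 0.5

F = expm_multiply(t * A, B)    # action of the matrix exponential on B
```

Avoiding the explicit exponential is what makes this approach attractive when A is large and sparse and B has few columns.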
f01enf and f01fnf compute the principal square root A1/2 of a real and complex square matrix A respectively. If A is complex and upper triangular then f01fpf should be used. If A is real and upper quasi-triangular then f01epf should be used. If estimates of the condition number of the matrix square root are required then f01jdf and f01kdf should be used.
f01eqf and f01fqf compute the matrix power A^p, where p is real, of real and complex matrices respectively. If estimates of the condition number of the matrix power are required then f01jef and f01kef should be used. If Fréchet derivatives are required then f01jff and f01kff should be used.

4 Decision Trees

The decision trees show the routines in this chapter and in Chapter F04, Chapter F07 and Chapter F08 that should be used for inverting matrices of various types. They also show which routine should be used to calculate various matrix functions.
(i) Matrix Inversion:

Tree 1

Is A an n×n matrix of rank n?
  yes: Is A a real matrix?
    yes: see Tree 2
    no: see Tree 3
  no: see Tree 4

Tree 2: Inverse of a real n by n matrix of full rank

Is A a band matrix?
  yes: See Note 1.
  no: Is A symmetric?
    yes: Is A positive definite?
      yes: Do you want guaranteed accuracy? (See Note 2)
        yes: f01abf
        no: Is one triangle of A stored as a linear array?
          yes: f07gdf and f07gjf
          no: f01adf or f07fdf and f07fjf
      no: Is one triangle of A stored as a linear array?
        yes: f07pdf and f07pjf
        no: f07mdf and f07mjf
    no: Is A triangular?
      yes: Is A stored as a linear array?
        yes: f07ujf
        no: f07tjf
      no: Do you want guaranteed accuracy? (See Note 2)
        yes: f07abf
        no: f07adf and f07ajf

Tree 3: Inverse of a complex n by n matrix of full rank

Is A a band matrix?
  yes: See Note 1.
  no: Is A Hermitian?
    yes: Is A positive definite?
      yes: Is one triangle of A stored as a linear array?
        yes: f07grf and f07gwf
        no: f07frf and f07fwf
      no: Is one triangle of A stored as a linear array?
        yes: f07prf and f07pwf
        no: f07mrf and f07mwf
    no: Is A symmetric?
      yes: Is one triangle of A stored as a linear array?
        yes: f07qrf and f07qwf
        no: f07nrf and f07nwf
      no: Is A triangular?
        yes: Is A stored as a linear array?
          yes: f07uwf
          no: f07twf
        no: f07anf or f07arf and f07awf

Tree 4: Pseudo-inverses

Is A a complex matrix?
  yes: Is A of full rank?
    yes: Is A an m×n matrix with m < n?
      yes: f01rjf and f01rkf
      no: f08asf and f08auf or f08atf
    no: f08kpf
  no: Is A of full rank?
    yes: Is A an m×n matrix with m < n?
      yes: f01qjf and f01qkf
      no: f08aef and f08agf or f08aff
    no: Is A an m×n matrix with m < n?
      yes: f08kbf
      no: Is reliability more important than efficiency?
        yes: f08kbf
        no: f01blf
Note 1: the inverse of a band matrix A does not, in general, have the same shape as A, and no routines are provided specifically for finding such an inverse. The matrix must either be treated as a full matrix or the equations AX=B must be solved, where B has been initialized to the identity matrix I. In the latter case, see the decision trees in Section 4 in the F04 Chapter Introduction.
Note 2: by ‘guaranteed accuracy’ we mean that the accuracy of the inverse is improved by the use of the iterative refinement technique using additional precision.
(ii) Matrix Factorizations: see the decision trees in Section 4 in the F02 and F04 Chapter Introductions.
(iii) Matrix Arithmetic and Manipulation: not appropriate.
(iv) Matrix Functions:

Tree 5: Matrix functions f(A) of an n by n real matrix A

Is e^{tA}B required?
  yes: Is A stored in dense format?
    yes: f01gaf
    no: f01gbf
  no: Is A real symmetric?
    yes: Is e^A required?
      yes: f01edf
      no: f01eff
    no: Is cos(A) or cosh(A) or sin(A) or sinh(A) required?
      yes: Is the condition number of the matrix function required?
        yes: f01jaf
        no: f01ekf
      no: Is log(A) required?
        yes: Is the condition number of the matrix logarithm required?
          yes: f01jjf
          no: Is the Fréchet derivative of the matrix logarithm required?
            yes: f01jkf
            no: f01ejf
        no: Is exp(A) required?
          yes: Is the condition number of the matrix exponential required?
            yes: f01jgf
            no: Is the Fréchet derivative of the matrix exponential required?
              yes: f01jhf
              no: f01ecf
          no: Is A^(1/2) required?
            yes: Is the condition number of the matrix square root required?
              yes: f01jdf
              no: Is the matrix upper quasi-triangular?
                yes: f01epf
                no: f01enf
            no: Is A^p required?
              yes: Is the condition number of the matrix power required?
                yes: f01jef
                no: Is the Fréchet derivative of the matrix power required?
                  yes: f01jff
                  no: f01eqf
              no: f(A) will be computed. Will derivatives of f be supplied by the user?
                yes: Is the condition number of the matrix function required?
                  yes: f01jcf
                  no: f01emf
                no: Is the condition number of the matrix function required?
                  yes: f01jbf
                  no: f01elf

Tree 6: Matrix functions f(A) of an n by n complex matrix A

Is e^{tA}B required?
  yes: Is A stored in dense format?
    yes: f01haf
    no: f01hbf
  no: Is A complex Hermitian?
    yes: Is e^A required?
      yes: f01fdf
      no: f01fff
    no: Is cos(A) or cosh(A) or sin(A) or sinh(A) required?
      yes: Is the condition number of the matrix function required?
        yes: f01kaf
        no: f01fkf
      no: Is log(A) required?
        yes: Is the condition number of the matrix logarithm required?
          yes: f01kjf
          no: Is the Fréchet derivative of the matrix logarithm required?
            yes: f01kkf
            no: f01fjf
        no: Is exp(A) required?
          yes: Is the condition number of the matrix exponential required?
            yes: f01kgf
            no: Is the Fréchet derivative of the matrix exponential required?
              yes: f01khf
              no: f01fcf
          no: Is A^(1/2) required?
            yes: Is the condition number of the matrix square root required?
              yes: f01kdf
              no: Is the matrix upper triangular?
                yes: f01fpf
                no: f01fnf
            no: Is A^p required?
              yes: Is the condition number of the matrix power required?
                yes: f01kef
                no: Is the Fréchet derivative of the matrix power required?
                  yes: f01kff
                  no: f01fqf
              no: f(A) will be computed. Will derivatives of f be supplied by the user?
                yes: Is the condition number of the matrix function required?
                  yes: f01kcf
                  no: f01fmf
                no: Is the condition number of the matrix function required?
                  yes: f01kbf
                  no: f01flf

5 Functionality Index

Action of the matrix exponential on a complex matrix   f01haf
Action of the matrix exponential on a complex matrix (reverse communication)   f01hbf
Action of the matrix exponential on a real matrix   f01gaf
Action of the matrix exponential on a real matrix (reverse communication)   f01gbf
Inversion (also see Chapter F07),  
real m×n matrix,  
pseudo-inverse   f01blf
real symmetric positive definite matrix,  
accurate inverse   f01abf
approximate inverse   f01adf
Matrix Arithmetic and Manipulation,  
matrix addition,  
complex matrices   f01cwf
real matrices   f01ctf
matrix multiplication,  
rectangular matrices,  
update,  
real matrices   f01ckf
triangular matrices,  
in-place,  
complex matrices   f01duf
real matrices   f01dgf
update,  
complex matrices   f01dtf
real matrices   f01dff
matrix storage conversion,  
full to packed triangular storage,  
complex matrices   f01vbf
real matrices   f01vaf
full to Rectangular Full Packed storage,  
complex matrix   f01vff
real matrix   f01vef
packed band ↔ rectangular storage, special provision for diagonal,  
complex matrices   f01zdf
real matrices   f01zcf
packed triangular to full storage,  
complex matrices   f01vdf
real matrices   f01vcf
packed triangular to Rectangular Full Packed storage,  
complex matrices   f01vkf
real matrices   f01vjf
packed triangular ↔ square storage, special provision for diagonal,  
complex matrices   f01zbf
real matrices   f01zaf
Rectangular Full Packed to full storage,  
complex matrices   f01vhf
real matrices   f01vgf
Rectangular Full Packed to packed triangular storage,  
complex matrices   f01vmf
real matrices   f01vlf
matrix subtraction,  
real matrices   f01ctf
matrix transpose   f01crf
Matrix function,  
complex Hermitian n×n matrix,  
matrix exponential   f01fdf
matrix function   f01fff
complex n×n matrix,  
condition number for a matrix exponential   f01kgf
condition number for a matrix exponential, logarithm, sine, cosine, sinh or cosh   f01kaf
condition number for a matrix function, using numerical differentiation   f01kbf
condition number for a matrix function, using user-supplied derivatives   f01kcf
condition number for a matrix logarithm   f01kjf
condition number for a matrix power   f01kef
condition number for the matrix square root, logarithm, sine, cosine, sinh or cosh   f01kdf
Fréchet derivative  
matrix exponential   f01khf
matrix logarithm   f01kkf
matrix power   f01kff
general power  
matrix   f01fqf
matrix exponential   f01fcf
matrix exponential, sine, cosine, sinh or cosh   f01fkf
matrix function, using numerical differentiation   f01flf
matrix function, using user-supplied derivatives   f01fmf
matrix logarithm   f01fjf
matrix square root   f01fnf
upper triangular  
matrix square root   f01fpf
real n×n matrix,  
condition number for a matrix exponential   f01jgf
condition number for a matrix function, using numerical differentiation   f01jbf
condition number for a matrix function, using user-supplied derivatives   f01jcf
condition number for a matrix logarithm   f01jjf
condition number for a matrix power   f01jef
condition number for the matrix exponential, logarithm, sine, cosine, sinh or cosh   f01jaf
condition number for the matrix square root, logarithm, sine, cosine, sinh or cosh   f01jdf
Fréchet derivative  
matrix exponential   f01jhf
matrix logarithm   f01jkf
matrix power   f01jff
general power  
matrix   f01eqf
matrix exponential   f01ecf
matrix exponential, sine, cosine, sinh or cosh   f01ekf
matrix function, using numerical differentiation   f01elf
matrix function, using user-supplied derivatives   f01emf
matrix logarithm   f01ejf
matrix square root   f01enf
upper quasi-triangular  
matrix square root   f01epf
real symmetric n×n matrix,  
matrix exponential   f01edf
matrix function   f01eff
Matrix Transformations,  
complex m×n(mn) matrix,  
RQ factorization   f01rjf
complex matrix, form unitary matrix   f01rkf
complex upper trapezoidal matrix,  
RQ factorization   f01rgf
eigenproblem Ax = λBx, A, B banded,  
reduction to standard symmetric problem   f01bvf
modified Cholesky factorization, form positive definite perturbed input matrix   f01mef
modified Cholesky factorization of a real symmetric matrix   f01mdf
non-negative matrix factorization   f01saf
non-negative matrix factorization, reverse communication   f01sbf
real almost block-diagonal matrix,  
LU factorization   f01lhf
real band symmetric positive definite matrix,  
ULDLᵀUᵀ factorization   f01buf
variable bandwidth, LDLᵀ factorization   f01mcf
real m×n(mn) matrix,  
RQ factorization   f01qjf
real matrix,  
form orthogonal matrix   f01qkf
real sparse matrix,  
factorization   f01brf
factorization, known sparsity pattern   f01bsf
real upper trapezoidal matrix,  
RQ factorization   f01qgf
tridiagonal matrix,  
LU factorization   f01lef

6 Auxiliary Routines Associated with Library Routine Arguments

None.

7 Withdrawn or Deprecated Routines

None.

8 References

Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
Higham N J (2008) Functions of Matrices: Theory and Computation SIAM, Philadelphia, PA, USA
Wilkinson J H (1965) The Algebraic Eigenvalue Problem Oxford University Press, Oxford
Wilkinson J H (1977) Some recent advances in numerical linear algebra The State of the Art in Numerical Analysis (ed D A H Jacobs) Academic Press
Wilkinson J H and Reinsch C (1971) Handbook for Automatic Computation II, Linear Algebra Springer–Verlag