# NAG Library Chapter Introduction

## 1 Scope of the Chapter

This chapter provides facilities for four types of problem:
(i) Matrix Inversion
(ii) Matrix Factorizations
(iii) Matrix Arithmetic and Manipulation
(iv) Matrix Functions
See Sections 2.1, 2.2, 2.3 and 2.4 where these problems are discussed.

## 2 Background to the Problems

### 2.1 Matrix Inversion

(i) Nonsingular square matrices of order $n$.
If $A$, a square matrix of order $n$, is nonsingular (has rank $n$), then its inverse $X$ exists and satisfies the equations $AX=XA=I$ (the identity or unit matrix).
It is worth noting that if $AX-I=R$, so that $R$ is the ‘residual’ matrix, then a bound on the relative error is given by $‖R‖$, i.e.,
 $\frac{\left\|X-A^{-1}\right\|}{\left\|A^{-1}\right\|}\le \left\|R\right\|.$
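This bound is easy to check numerically. The following NumPy sketch is purely illustrative (it does not call any Library routine): it perturbs a computed inverse and verifies that the residual norm bounds the relative error.

```python
# Illustrative check of the bound ||X - inv(A)|| / ||inv(A)|| <= ||R||, R = AX - I.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

X = np.linalg.inv(A)                                 # reference inverse
X_approx = X + 1e-8 * rng.standard_normal((5, 5))    # a slightly perturbed 'computed' inverse
R = A @ X_approx - np.eye(5)                         # residual matrix R = AX - I

rel_err = np.linalg.norm(X_approx - X) / np.linalg.norm(X)
print(rel_err <= np.linalg.norm(R))                  # True: Frobenius norm is submultiplicative
```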
(ii) General real rectangular matrices.
A real matrix $A$ has no inverse if it is square ($n$ by $n$) and singular (has rank less than $n$), or if it is of shape ($m$ by $n$) with $m\ne n$, but there is a Generalized or Pseudo-inverse ${A}^{+}$ which satisfies the equations
 $AA^{+}A=A,\quad A^{+}AA^{+}=A^{+},\quad \left(AA^{+}\right)^{\mathrm{T}}=AA^{+},\quad \left(A^{+}A\right)^{\mathrm{T}}=A^{+}A$
(which of course are also satisfied by the inverse $X$ of $A$ if $A$ is square and nonsingular).
(a) if $m\ge n$ and $\mathrm{rank}\left(A\right)=n$ then $A$ can be factorized using a $QR$ factorization, given by
 $A=Q\begin{pmatrix}R\\ 0\end{pmatrix},$
where $Q$ is an $m$ by $m$ orthogonal matrix and $R$ is an $n$ by $n$, nonsingular, upper triangular matrix. The pseudo-inverse of $A$ is then given by
 $A^{+}=R^{-1}\tilde{Q}^{\mathrm{T}},$
where $\stackrel{~}{Q}$ consists of the first $n$ columns of $Q$.
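For illustration, a short NumPy sketch of case (a) (not a NAG routine; numpy.linalg.qr stands in for the Library's $QR$ factorization) forms the pseudo-inverse of a full-column-rank matrix:

```python
# Pseudo-inverse of a full-column-rank A (m >= n) via A+ = R^{-1} Qtilde^T.
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 4
A = rng.standard_normal((m, n))                 # full column rank with probability 1

Q, R = np.linalg.qr(A, mode='reduced')          # Q is m x n (the first n columns of the full Q)
A_pinv = np.linalg.solve(R, Q.T)                # apply R^{-1} without forming it explicitly
print(np.allclose(A_pinv, np.linalg.pinv(A)))   # agrees with the SVD-based pseudo-inverse
```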
(b) if $m\le n$ and $\mathrm{rank}\left(A\right)=m$ then $A$ can be factorized using an RQ factorization, given by
 $A=\begin{pmatrix}R&0\end{pmatrix}Q^{\mathrm{T}},$
where $Q$ is an $n$ by $n$ orthogonal matrix and $R$ is an $m$ by $m$, nonsingular, upper triangular matrix. The pseudo-inverse of $A$ is then given by
 $A^{+}=\tilde{Q}R^{-1},$
where $\stackrel{~}{Q}$ consists of the first $m$ columns of $Q$.
(c) if $m\ge n$ and $\mathrm{rank}\left(A\right)=r\le n$ then $A$ can be factorized using a $QR$ factorization, with column interchanges, as
 $A=Q\begin{pmatrix}R\\ 0\end{pmatrix}P^{\mathrm{T}},$
where $Q$ is an $m$ by $m$ orthogonal matrix, $R$ is an $r$ by $n$ upper trapezoidal matrix and $P$ is an $n$ by $n$ permutation matrix. The pseudo-inverse of $A$ is then given by
 $A^{+}=PR^{\mathrm{T}}\left(RR^{\mathrm{T}}\right)^{-1}\tilde{Q}^{\mathrm{T}},$
where $\stackrel{~}{Q}$ consists of the first $r$ columns of $Q$.
(d) if $\mathrm{rank}\left(A\right)=r\le k=\mathrm{min}\left(m,n\right)$, then $A$ can be factorized as the singular value decomposition
 $A=U\Sigma V^{\mathrm{T}},$
where $U$ is an $m$ by $m$ orthogonal matrix, $V$ is an $n$ by $n$ orthogonal matrix and $\Sigma$ is an $m$ by $n$ diagonal matrix with non-negative diagonal elements ${\sigma }_{i}$. The first $k$ columns of $U$ and $V$ are the left- and right-hand singular vectors of $A$ respectively and the $k$ diagonal elements of $\Sigma$ are the singular values of $A$. $\Sigma$ may be chosen so that
 $\sigma_1\ge \sigma_2\ge \cdots \ge \sigma_k\ge 0$
and in this case if $\mathrm{rank}\left(A\right)=r$ then
 $\sigma_1\ge \sigma_2\ge \cdots \ge \sigma_r>0,\quad \sigma_{r+1}=\cdots =\sigma_k=0.$
If $\stackrel{~}{U}$ and $\stackrel{~}{V}$ consist of the first $r$ columns of $U$ and $V$ respectively and $\stackrel{~}{\Sigma }$ is an $r$ by $r$ diagonal matrix with diagonal elements ${\sigma }_{1},{\sigma }_{2},\dots ,{\sigma }_{r}$ then $A$ is given by
 $A=\tilde{U}\tilde{\Sigma}\tilde{V}^{\mathrm{T}}$
and the pseudo-inverse of $A$ is given by
 $A^{+}=\tilde{V}\tilde{\Sigma}^{-1}\tilde{U}^{\mathrm{T}}.$
Notice that
 $A^{\mathrm{T}}A=V\Sigma^{\mathrm{T}}\Sigma V^{\mathrm{T}},$
which is the classical eigenvalue (spectral) factorization of ${A}^{\mathrm{T}}A$.
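A NumPy sketch of case (d) (again illustrative only, not a Library routine) builds the pseudo-inverse of a rank-deficient matrix from its truncated singular value decomposition:

```python
# Pseudo-inverse A+ = Vtilde Sigmatilde^{-1} Utilde^T, dropping negligible singular values.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))   # 6 x 4 matrix of rank 2

U, s, Vt = np.linalg.svd(A)                 # s holds sigma_1 >= sigma_2 >= ... >= 0
r = int(np.sum(s > s[0] * 1e-12))           # numerical rank: treat tiny sigmas as zero
A_pinv = Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T
print(r, np.allclose(A_pinv, np.linalg.pinv(A)))   # prints: 2 True
```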
(e) if $A$ is complex then the above relationships are still true if we use ‘unitary’ in place of ‘orthogonal’ and conjugate transpose in place of transpose. For example, the singular value decomposition of $A$ is
 $A=U\Sigma V^{\mathrm{H}},$
where $U$ and $V$ are unitary, ${V}^{\mathrm{H}}$ is the conjugate transpose of $V$, and $\Sigma$ is as in (ii)(d) above.

### 2.2 Matrix Factorizations

The routines in this section perform matrix factorizations which are required for the solution of systems of linear equations with various special structures. A few routines which perform associated computations are also included.
Other routines for matrix factorizations are to be found in Chapters F07, F08 and F11.
This section also contains a few routines associated with eigenvalue problems (see Chapter F02). (Historical note: this section used to contain many more such routines, but they have now been superseded by routines in Chapter F08.)

### 2.3 Matrix Arithmetic and Manipulation

The intention of routines in this section (sub-chapters F01C, F01V and F01Z) is to cater for some of the commonly occurring operations in matrix manipulation, i.e., transposing a matrix or adding part of one matrix to another, and for conversion between different storage formats, such as conversion between rectangular band matrix storage and packed band matrix storage. For vector, matrix-vector or matrix-matrix operations refer to Chapters F06 and F16.

### 2.4 Matrix Functions

Given a square matrix $A$, the matrix function $f\left(A\right)$ is a matrix with the same dimensions as $A$ which provides a generalization of the scalar function $f$.
If $A$ has a full set of eigenvectors $V$ then $A$ can be factorized as
 $A=VDV^{-1},$
where $D$ is the diagonal matrix whose diagonal elements, ${d}_{i}$, are the eigenvalues of $A$. $f\left(A\right)$ is given by
 $f\left(A\right)=Vf\left(D\right)V^{-1},$
where $f\left(D\right)$ is the diagonal matrix whose $i$th diagonal element is $f\left({d}_{i}\right)$.
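For a diagonalizable matrix this definition can be applied directly. The following NumPy/SciPy sketch is illustrative (it is not how the Library routines work internally) and evaluates $f\left(A\right)$ with $f=\mathrm{exp}$ via the eigendecomposition:

```python
# f(A) = V f(D) V^{-1} for a matrix with a full set of eigenvectors, here f = exp.
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [0.5, 1.0]])                  # distinct eigenvalues 0 and 2
d, V = np.linalg.eig(A)                     # A = V diag(d) V^{-1}
fA = (V * np.exp(d)) @ np.linalg.inv(V)     # columns of V scaled by f(d_i), then V^{-1}
print(np.allclose(fA.real, expm(A)))        # matches a dedicated matrix exponential
```

Note that this formula is numerically reliable only when the eigenvector matrix $V$ is well conditioned, which is one reason dedicated algorithms are used in practice.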
In general, $A$ may not have a full set of eigenvectors. The matrix function can then be defined via a Cauchy integral. For $A\in {ℂ}^{n×n}$,
 $f\left(A\right)=\frac{1}{2\pi i}\int_{\Gamma}f\left(z\right)\left(zI-A\right)^{-1}dz,$
where $\Gamma$ is a closed contour surrounding the eigenvalues of $A$, and $f$ is analytic within $\Gamma$.
Some matrix functions are defined implicitly. A matrix logarithm is a solution $X$ to the equation
 $e^{X}=A.$
In general $X$ is not unique, but if $A$ has no eigenvalues on the closed negative real line then a unique principal logarithm exists whose eigenvalues have imaginary part between $-\pi$ and $\pi$. Similarly, a matrix square root is a solution $X$ to the equation
 $X^{2}=A.$
If $A$ has no eigenvalues on the closed negative real line then a unique principal square root exists with eigenvalues in the right half-plane. If $A$ has a vanishing eigenvalue then $\mathrm{log}\left(A\right)$ cannot be computed. If the vanishing eigenvalue is defective (its algebraic multiplicity exceeds its geometric multiplicity, or equivalently it occurs in a Jordan block of size greater than $1$) then the square root cannot be computed. If the vanishing eigenvalue is semisimple (its algebraic and geometric multiplicities are equal, or equivalently it occurs only in Jordan blocks of size $1$) then a square root can be computed.
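By way of illustration, SciPy provides principal square roots and logarithms under the same restriction on the spectrum; the sketch below is not a NAG routine, but it demonstrates the defining equations:

```python
# Principal square root and logarithm of a matrix with no eigenvalues on
# the closed negative real line.
import numpy as np
from scipy.linalg import sqrtm, logm, expm

A = np.array([[4.0, 1.0],
              [0.0, 9.0]])                  # eigenvalues 4 and 9: principal branches exist
X = sqrtm(A)                                # principal square root: X @ X = A
L = logm(A)                                 # principal logarithm: expm(L) = A
print(np.allclose(X @ X, A), np.allclose(expm(L), A))
```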
Algorithms for computing matrix functions are usually tailored to a specific function. Currently Chapter F01 contains routines for calculating the exponential, logarithm, sine, cosine, sinh, cosh, square root and general real power of both real and complex matrices. In addition there are routines to compute a general function of real symmetric and complex Hermitian matrices and a general function of general real and complex matrices.
The Fréchet derivative of a matrix function $f\left(A\right)$ in the direction of the matrix $E$ is the linear function mapping $E$ to ${L}_{f}\left(A,E\right)$ such that
 $f\left(A+E\right)-f\left(A\right)-L_{f}\left(A,E\right)=o\left(\left\|E\right\|\right).$
The Fréchet derivative measures the first-order effect on $f\left(A\right)$ of perturbations in $A$. Chapter F01 contains routines for calculating the Fréchet derivative of the exponential, logarithm and real powers of both real and complex matrices.
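As an illustration of this definition, SciPy exposes the Fréchet derivative of the matrix exponential; the sketch below (not a NAG routine) also performs a first-order finite-difference check:

```python
# L_exp(A, E): Frechet derivative of the matrix exponential in the direction E.
import numpy as np
from scipy.linalg import expm, expm_frechet

rng = np.random.default_rng(3)
A = 0.5 * rng.standard_normal((4, 4))
E = rng.standard_normal((4, 4))

expA, L = expm_frechet(A, E)                # returns exp(A) and L_exp(A, E)

t = 1e-5                                    # first-order check: exp(A+tE) - exp(A) ~ t L
diff = (expm(A + t * E) - expA) / t
print(np.allclose(diff, L, atol=1e-3))
```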
The condition number of a matrix function is a measure of its sensitivity to perturbations in the data. The absolute condition number measures these perturbations in an absolute sense, and is defined by
 $\mathrm{cond}_{\mathrm{abs}}\left(f,A\right)≔\lim_{\varepsilon\to 0}\,\sup_{\left\|E\right\|\le \varepsilon}\frac{\left\|f\left(A+E\right)-f\left(A\right)\right\|}{\varepsilon}.$
The relative condition number, which is usually of more interest, measures these perturbations in a relative sense, and is defined by
 $\mathrm{cond}_{\mathrm{rel}}\left(f,A\right)=\frac{\mathrm{cond}_{\mathrm{abs}}\left(f,A\right)\left\|A\right\|}{\left\|f\left(A\right)\right\|}.$
The absolute and relative condition numbers can be expressed in terms of the norm of the Fréchet derivative by
 $\mathrm{cond}_{\mathrm{abs}}\left(f,A\right)=\max_{E\ne 0}\frac{\left\|L\left(A,E\right)\right\|}{\left\|E\right\|},$
 $\mathrm{cond}_{\mathrm{rel}}\left(f,A\right)=\frac{\left\|A\right\|}{\left\|f\left(A\right)\right\|}\max_{E\ne 0}\frac{\left\|L\left(A,E\right)\right\|}{\left\|E\right\|}.$
Chapter F01 contains routines for calculating the condition number of the matrix exponential, logarithm, sine, cosine, sinh, cosh, square root and general real power of both real and complex matrices. It also contains routines for estimating the condition number of a general function of a real or complex matrix.
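As a numerical illustration of these formulas (using SciPy rather than the Library), the relative condition number of the matrix exponential can be compared with a sampled lower bound built from Fréchet derivatives in random directions:

```python
# cond_rel(exp, A) versus a sampled lower bound on max_E ||L(A,E)|| / ||E||.
import numpy as np
from scipy.linalg import expm, expm_frechet, expm_cond

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))

kappa = expm_cond(A)                        # relative condition number (Frobenius norm)

best = 0.0
for _ in range(200):                        # random directions give a lower bound on the max
    E = rng.standard_normal((3, 3))
    _, L = expm_frechet(A, E)
    best = max(best, np.linalg.norm(L) / np.linalg.norm(E))

lower = best * np.linalg.norm(A) / np.linalg.norm(expm(A))
print(lower <= kappa * (1.0 + 1e-8))        # True: sampling cannot exceed the true maximum
```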

## 3 Recommendations on Choice and Use of Available Routines

### 3.1 Matrix Inversion

Note: before using any routine for matrix inversion, consider carefully whether it is really needed.
Although the solution of a set of linear equations $Ax=b$ can be written as $x={A}^{-1}b$, the solution should never be computed by first inverting $A$ and then computing ${A}^{-1}b$; the routines in Chapters F04 or F07 should always be used to solve such sets of equations directly; they are faster in execution, and numerically more stable and accurate. Similar remarks apply to the solution of least squares problems which again should be solved by using the routines in Chapters F04 and F08 rather than by computing a pseudo-inverse.
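The point is easy to demonstrate. In the NumPy sketch below (illustrative only, not a Library routine), a direct solve typically gives a smaller residual than forming the inverse first:

```python
# Solve Ax = b directly instead of computing x = inv(A) @ b.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((200, 200))
b = rng.standard_normal(200)

x_solve = np.linalg.solve(A, b)             # LU-based direct solve (preferred)
x_inv = np.linalg.inv(A) @ b                # inverts first: slower, typically less accurate

print(np.linalg.norm(A @ x_solve - b), np.linalg.norm(A @ x_inv - b))
```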
(a) Nonsingular square matrices of order $n$.

This chapter describes techniques for inverting a general real matrix $A$ and matrices which are positive definite (have all eigenvalues positive) and are either real and symmetric or complex and Hermitian. It is wasteful and uneconomical not to use the appropriate routine when a matrix is known to have one of these special forms. A general routine must be used when the matrix is not known to be positive definite. In most routines the inverse is computed by solving the linear equations $A{x}_{i}={e}_{i}$, for $i=1,2,\dots ,n$, where ${e}_{i}$ is the $i$th column of the identity matrix.

Routines are given for calculating the approximate inverse, that is solving the linear equations just once, and also for obtaining the accurate inverse by successive iterative corrections of this first approximation. The latter, of course, are more costly in terms of time and storage, since each correction involves the solution of $n$ sets of linear equations and since the original $A$ and its $LU$ decomposition must be stored together with the first and successively corrected approximations to the inverse. In practice the storage requirements for the ‘corrected’ inverse routines are about double those of the ‘approximate’ inverse routines, though the extra computer time is not prohibitive since the same matrix and the same $LU$ decomposition is used in every linear equation solution. Despite the extra work of the ‘corrected’ inverse routines they are superior to the ‘approximate’ inverse routines. A correction provides a means of estimating the number of accurate figures in the inverse or the number of ‘meaningful’ figures relating to the degree of uncertainty in the coefficients of the matrix.

The residual matrix $R=AX-I$, where $X$ is a computed inverse of $A$, conveys useful information. Firstly $‖R‖$ is a bound on the relative error in $X$ and secondly $‖R‖<\frac{1}{2}$ guarantees the convergence of the iterative process in the ‘corrected’ inverse routines.

The decision trees for inversion show which routines in Chapter F04 and Chapter F07 should be used for the inversion of other special types of matrices not treated in this chapter.

(b) General real rectangular matrices.

For real matrices f08aef (dgeqrf) and f01qjf return $QR$ and $RQ$ factorizations of $A$ respectively and f08bff (dgeqp3) returns the $QR$ factorization with column interchanges. The corresponding complex routines are f08asf (zgeqrf), f01rjf and f08btf (zgeqp3) respectively. Routines are also provided to form the orthogonal matrices and to transform by the orthogonal matrices following the use of the above routines. f01qgf and f01rgf form the $RQ$ factorization of an upper trapezoidal matrix for the real and complex cases respectively.

f01blf uses the $QR$ factorization as described in Section 2.1(ii)(a) and is the only routine that explicitly returns a pseudo-inverse. If $m\ge n$, then the routine will calculate the pseudo-inverse ${A}^{+}$ of the matrix $A$. If $m<n$, then the $n$ by $m$ matrix ${A}^{\mathrm{T}}$ should be used. The routine will calculate the pseudo-inverse $Z={\left({A}^{\mathrm{T}}\right)}^{+}={\left({A}^{+}\right)}^{\mathrm{T}}$ of ${A}^{\mathrm{T}}$ and the required pseudo-inverse will be ${Z}^{\mathrm{T}}$. The routine also attempts to calculate the rank, $r$, of the matrix given a tolerance to decide when elements can be regarded as zero. However, should this routine fail due to an incorrect determination of the rank, the singular value decomposition method (described below) should be used.

f08kbf (dgesvd) and f08kpf (zgesvd) compute the singular value decomposition as described in Section 2 for real and complex matrices respectively. If $A$ has rank $r\le k=\mathrm{min}\left(m,n\right)$ then the $k-r$ smallest singular values will be negligible and the pseudo-inverse of $A$ can be obtained as ${A}^{+}=\tilde{V}\tilde{\Sigma}^{-1}\tilde{U}^{\mathrm{T}}$ as described in Section 2. If the rank of $A$ is not known in advance it can be estimated from the singular values (see Section 2.4 in the F04 Chapter Introduction). In the real case with $m\ge n$, f08aef (dgeqrf) followed by f02wuf provides details of the $QR$ factorization or the singular value decomposition depending on whether or not $A$ is of full rank, and for some problems this is an attractive alternative to f08kbf (dgesvd). For large sparse matrices, leading terms in the singular value decomposition can be computed using routines from Chapter F12.

### 3.2 Matrix Factorizations

Each of these routines serves a special purpose required for the solution of sets of simultaneous linear equations or the eigenvalue problem. For further details you should consult Sections 3 or 4 in the F02 Chapter Introduction or Sections 3 or 4 in the F04 Chapter Introduction.
f01brf and f01bsf are provided for factorizing general real sparse matrices. A more recent algorithm for the same problem is available through f11mef. For factorizing real symmetric positive definite sparse matrices, see f11jaf. These routines should be used only when $A$ is not banded and when the total number of nonzero elements is less than 10% of the total number of elements. In all other cases either the band routines or the general routines should be used.

### 3.3 Matrix Arithmetic and Manipulation

The routines in the F01C section are designed for the general handling of $m$ by $n$ matrices. Emphasis has been placed on flexibility in the argument specifications and on avoiding, where possible, the use of internally declared arrays. They are therefore suited for use with large matrices of variable row and column dimensions. Routines are included for the addition and subtraction of sub-matrices of larger matrices, as well as the standard manipulations of full matrices. Those routines involving matrix multiplication may use additional-precision arithmetic for the accumulation of inner products. See also Chapter F06.
The routines in the F01V (LAPACK) and F01Z sections are designed to allow conversion between full storage format and one of the packed storage schemes required by some of the routines in Chapters F02, F04, F06, F07 and F08.

#### 3.3.1 NAG Names and LAPACK Names

Routines with NAG names beginning F01V may be called either by their NAG names or by their LAPACK names. When using the NAG Library, the double precision form of the LAPACK name must be used (beginning with D- or Z-).
References to Chapter F01 routines in the manual normally include the LAPACK double precision names, for example, f01vef (dtrttf).
The LAPACK routine names follow a simple scheme (which is similar to that used for the BLAS in Chapter F06). Most names have the structure XYYTZZ, where the components have the following meanings:
– the initial letter, X, indicates the data type (real or complex) and precision:
• S – real, single precision (in Fortran, 4 byte length REAL)
• D – real, double precision (in Fortran, 8 byte length REAL)
• C – complex, single precision (in Fortran, 8 byte length COMPLEX)
• Z – complex, double precision (in Fortran, 16 byte length COMPLEX)
– the fourth letter, T, indicates that the routine is performing a storage scheme transformation (conversion)
– the letters YY indicate the original storage scheme used to store a triangular part of the matrix $A$, while the letters ZZ indicate the target storage scheme of the conversion (YY cannot equal ZZ since this would do nothing):
• TF – Rectangular Full Packed Format (RFP)
• TP – Packed Format
• TR – Full Format
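For example, f01vlf (dtfttp) converts a real (D) triangular matrix from Rectangular Full Packed format (TF) to packed format (TP), while f01vdf (ztpttr) converts a complex (Z) triangular matrix from packed format (TP) to full format (TR).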

### 3.4 Matrix Functions

f01ecf and f01fcf compute the matrix exponential, ${e}^{A}$, of a real and complex square matrix $A$ respectively. If estimates of the condition number of the matrix exponential are required then f01jgf and f01kgf should be used. If Fréchet derivatives are required then f01jhf and f01khf should be used.
f01edf and f01fdf compute the matrix exponential, ${e}^{A}$, of a real symmetric and complex Hermitian matrix respectively. If the matrix is real symmetric or complex Hermitian then it is recommended that f01edf or f01fdf respectively be used, as they are more efficient and, in general, more accurate than f01ecf and f01fcf.
f01ejf and f01fjf compute the principal matrix logarithm, $\mathrm{log}\left(A\right)$, of a real and complex square matrix $A$ respectively. If estimates of the condition number of the matrix logarithm are required then f01jjf and f01kjf should be used. If Fréchet derivatives are required then f01jkf and f01kkf should be used.
f01ekf and f01fkf compute the matrix exponential, sine, cosine, sinh or cosh of a real and complex square matrix $A$ respectively. If the matrix exponential is required then it is recommended that f01ecf or f01fcf be used as they are, in general, more accurate than f01ekf and f01fkf. If estimates of the condition number of the matrix function are required then f01jaf and f01kaf should be used.
f01elf and f01emf compute the matrix function, $f\left(A\right)$, of a real square matrix. f01flf and f01fmf compute the matrix function of a complex square matrix. The derivatives of $f$ are required for these computations. f01elf and f01flf use numerical differentiation to obtain the derivatives of $f$. f01emf and f01fmf use derivatives you have supplied. If estimates of the condition number are required but you are not supplying derivatives then f01jbf and f01kbf should be used. If estimates of the condition number of the matrix function are required and you are supplying derivatives of $f$, then f01jcf and f01kcf should be used.
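SciPy offers a general-purpose analogue (a Schur–Parlett style evaluation, not the NAG algorithm) that illustrates computing $f\left(A\right)$ from a scalar callable:

```python
# General matrix function from a scalar function, here f = exp as a check.
import numpy as np
from scipy.linalg import funm, expm

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])                  # distinct eigenvalues keep the recurrence stable
fA = funm(A, np.exp)                        # evaluate f(A) for a user-supplied scalar f
print(np.allclose(fA, expm(A)))
```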
If the matrix $A$ is real symmetric or complex Hermitian then it is recommended that f01eff or f01fff respectively be used to compute the matrix function $f\left(A\right)$, as they are more efficient and, in general, more accurate than f01elf, f01emf, f01flf and f01fmf.
f01gaf and f01haf compute the matrix function ${e}^{tA}B$ for explicitly stored dense real and complex matrices $A$ and $B$ respectively while f01gbf and f01hbf compute the same using reverse communication. In the latter case, control is returned to you. You should calculate any required matrix-matrix products and then call the routine again. See Section 3.3.3 in How to Use the NAG Library and its Documentation for further information.
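The action ${e}^{tA}B$ can be formed without ever computing ${e}^{tA}$ explicitly. A SciPy sketch of the same idea (illustrative; it is not the reverse-communication interface described above):

```python
# e^{tA} B without forming the matrix exponential explicitly.
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import expm_multiply

rng = np.random.default_rng(6)
A = 0.1 * rng.standard_normal((50, 50))
B = rng.standard_normal((50, 3))
t = 1.0

Y = expm_multiply(t * A, B)                 # action of the exponential on B
print(np.allclose(Y, expm(t * A) @ B))      # agrees with the explicit product
```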
f01enf and f01fnf compute the principal square root ${A}^{1/2}$ of a real and complex square matrix $A$ respectively. If $A$ is complex and upper triangular then f01fpf should be used. If $A$ is real and upper quasi-triangular then f01epf should be used. If estimates of the condition number of the matrix square root are required then f01jdf and f01kdf should be used.
f01eqf and f01fqf compute the matrix power ${A}^{p}$, where $p\in ℝ$, of real and complex matrices respectively. If estimates of the condition number of the matrix power are required then f01jef and f01kef should be used. If Fréchet derivatives are required then f01jff and f01kff should be used.
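For illustration, SciPy offers a general real matrix power (not a NAG routine); the sketch below checks the power law ${A}^{0.3}{A}^{0.7}=A$ for a matrix with positive eigenvalues:

```python
# General real power A^p of a matrix, here with p = 0.3 and p = 0.7.
import numpy as np
from scipy.linalg import fractional_matrix_power

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])                  # symmetric positive definite
P3 = fractional_matrix_power(A, 0.3)
P7 = fractional_matrix_power(A, 0.7)
print(np.allclose(P3 @ P7, A))              # powers of the same matrix commute
```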

## 4 Decision Trees

The decision trees show the routines in this chapter and in Chapter F04, Chapter F07 and Chapter F08 that should be used for inverting matrices of various types. They also show which routine should be used to calculate various matrix functions.
(i) Matrix Inversion:

### Tree 1

Is $A$ an $n$ by $n$ matrix of rank $n$?
- yes: Is $A$ a real matrix?
  - yes: see Tree 2
  - no: see Tree 3
- no: see Tree 4

### Tree 2: Inverse of a real n by n matrix of full rank

Is $A$ a band matrix?
- yes: See Note 1.
- no: Is $A$ symmetric?
  - yes: Is $A$ positive definite?
    - yes: Do you want guaranteed accuracy? (See Note 2)
      - yes: f01abf
      - no: Is one triangle of $A$ stored as a linear array?
        - yes: f07gdf and f07gjf
        - no: f01adf or f07fdf and f07fjf
    - no: Is one triangle of $A$ stored as a linear array?
      - yes: f07pdf and f07pjf
      - no: f07mdf and f07mjf
  - no: Is $A$ triangular?
    - yes: Is $A$ stored as a linear array?
      - yes: f07ujf
      - no: f07tjf
    - no: Do you want guaranteed accuracy? (See Note 2)
      - yes: f07abf
      - no: f07adf and f07ajf

### Tree 3: Inverse of a complex n by n matrix of full rank

Is $A$ a band matrix?
- yes: See Note 1.
- no: Is $A$ Hermitian?
  - yes: Is $A$ positive definite?
    - yes: Is one triangle of $A$ stored as a linear array?
      - yes: f07grf and f07gwf
      - no: f07frf and f07fwf
    - no: Is one triangle of $A$ stored as a linear array?
      - yes: f07prf and f07pwf
      - no: f07mrf and f07mwf
  - no: Is $A$ symmetric?
    - yes: Is one triangle of $A$ stored as a linear array?
      - yes: f07qrf and f07qwf
      - no: f07nrf and f07nwf
    - no: Is $A$ triangular?
      - yes: Is $A$ stored as a linear array?
        - yes: f07uwf
        - no: f07twf
      - no: f07anf or f07arf and f07awf

### Tree 4: Pseudo-inverses

Is $A$ a complex matrix?
- yes: Is $A$ of full rank?
  - yes: Is $A$ an $m$ by $n$ matrix with $m<n$?
    - yes: f01rjf and f01rkf
    - no: f08asf and f08auf or f08atf
  - no: f08kpf
- no: Is $A$ of full rank?
  - yes: Is $A$ an $m$ by $n$ matrix with $m<n$?
    - yes: f01qjf and f01qkf
    - no: f08aef and f08agf or f08aff
  - no: Is $A$ an $m$ by $n$ matrix with $m<n$?
    - yes: f08kbf
    - no: Is reliability more important than efficiency?
      - yes: f08kbf
      - no: f01blf
Note 1: the inverse of a band matrix $A$ does not in general have the same shape as $A$, and no routines are provided specifically for finding such an inverse. The matrix must either be treated as a full matrix, or the equations $AX=B$ must be solved, where $B$ has been initialized to the identity matrix $I$. In the latter case, see the decision trees in Section 4 in the F04 Chapter Introduction.
Note 2: by ‘guaranteed accuracy’ we mean that the accuracy of the inverse is improved by use of the iterative refinement technique using additional precision.
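Note 1 can be illustrated directly: solve $AX=I$ with a banded solver rather than invert. A SciPy sketch (illustrative only, not a Library routine):

```python
# 'Inverse' of a tridiagonal matrix by solving A X = I with a banded solver.
import numpy as np
from scipy.linalg import solve_banded

n = 6
ab = np.zeros((3, n))                       # banded storage: (super, main, sub) diagonals
ab[0, 1:] = -1.0                            # superdiagonal
ab[1, :] = 4.0                              # main diagonal
ab[2, :-1] = -1.0                           # subdiagonal

X = solve_banded((1, 1), ab, np.eye(n))     # column i of X solves A x_i = e_i

A = np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
print(np.allclose(A @ X, np.eye(n)))        # X is the dense inverse of the band matrix
```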
(ii) Matrix Factorizations: see the decision trees in Section 4 in the F02 and F04 Chapter Introductions.
(iii) Matrix Arithmetic and Manipulation: not appropriate.
(iv) Matrix Functions:

### Tree 5: Matrix functions $f\left(A\right)$ of an n by n real matrix $A$

Is ${e}^{tA}B$ required?
- yes: Is $A$ stored in dense format?
  - yes: f01gaf
  - no: f01gbf
- no: Is $A$ real symmetric?
  - yes: Is ${e}^{A}$ required?
    - yes: f01edf
    - no: f01eff
  - no: Is $\mathrm{cos}\left(A\right)$ or $\mathrm{cosh}\left(A\right)$ or $\mathrm{sin}\left(A\right)$ or $\mathrm{sinh}\left(A\right)$ required?
    - yes: Is the condition number of the matrix function required?
      - yes: f01jaf
      - no: f01ekf
    - no: Is $\mathrm{log}\left(A\right)$ required?
      - yes: Is the condition number of the matrix logarithm required?
        - yes: f01jjf
        - no: Is the Fréchet derivative of the matrix logarithm required?
          - yes: f01jkf
          - no: f01ejf
      - no: Is $\mathrm{exp}\left(A\right)$ required?
        - yes: Is the condition number of the matrix exponential required?
          - yes: f01jgf
          - no: Is the Fréchet derivative of the matrix exponential required?
            - yes: f01jhf
            - no: f01ecf
        - no: Is ${A}^{1/2}$ required?
          - yes: Is the condition number of the matrix square root required?
            - yes: f01jdf
            - no: Is the matrix upper quasi-triangular?
              - yes: f01epf
              - no: f01enf
          - no: Is ${A}^{p}$ required?
            - yes: Is the condition number of the matrix power required?
              - yes: f01jef
              - no: Is the Fréchet derivative of the matrix power required?
                - yes: f01jff
                - no: f01eqf
            - no: $f\left(A\right)$ will be computed. Will derivatives of $f$ be supplied by the user?
              - yes: Is the condition number of the matrix function required?
                - yes: f01jcf
                - no: f01emf
              - no: Is the condition number of the matrix function required?
                - yes: f01jbf
                - no: f01elf

### Tree 6: Matrix functions $f\left(A\right)$ of an n by n complex matrix $A$

Is ${e}^{tA}B$ required?
- yes: Is $A$ stored in dense format?
  - yes: f01haf
  - no: f01hbf
- no: Is $A$ complex Hermitian?
  - yes: Is ${e}^{A}$ required?
    - yes: f01fdf
    - no: f01fff
  - no: Is $\mathrm{cos}\left(A\right)$ or $\mathrm{cosh}\left(A\right)$ or $\mathrm{sin}\left(A\right)$ or $\mathrm{sinh}\left(A\right)$ required?
    - yes: Is the condition number of the matrix function required?
      - yes: f01kaf
      - no: f01fkf
    - no: Is $\mathrm{log}\left(A\right)$ required?
      - yes: Is the condition number of the matrix logarithm required?
        - yes: f01kjf
        - no: Is the Fréchet derivative of the matrix logarithm required?
          - yes: f01kkf
          - no: f01fjf
      - no: Is $\mathrm{exp}\left(A\right)$ required?
        - yes: Is the condition number of the matrix exponential required?
          - yes: f01kgf
          - no: Is the Fréchet derivative of the matrix exponential required?
            - yes: f01khf
            - no: f01fcf
        - no: Is ${A}^{1/2}$ required?
          - yes: Is the condition number of the matrix square root required?
            - yes: f01kdf
            - no: Is the matrix upper triangular?
              - yes: f01fpf
              - no: f01fnf
          - no: Is ${A}^{p}$ required?
            - yes: Is the condition number of the matrix power required?
              - yes: f01kef
              - no: Is the Fréchet derivative of the matrix power required?
                - yes: f01kff
                - no: f01fqf
            - no: $f\left(A\right)$ will be computed. Will derivatives of $f$ be supplied by the user?
              - yes: Is the condition number of the matrix function required?
                - yes: f01kcf
                - no: f01fmf
              - no: Is the condition number of the matrix function required?
                - yes: f01kbf
                - no: f01flf

## 5 Functionality Index

 Action of the matrix exponential on a complex matrix f01haf
 Action of the matrix exponential on a complex matrix (reverse communication) f01hbf
 Action of the matrix exponential on a real matrix f01gaf
 Action of the matrix exponential on a real matrix (reverse communication) f01gbf
 Inversion (also see Chapter F07),
 real m by n matrix,
 pseudo-inverse f01blf
 real symmetric positive definite matrix,
 accurate inverse f01abf
 Matrix Arithmetic and Manipulation,
 complex matrices f01cwf
 real matrices f01ctf
 matrix multiplication f01ckf
 matrix storage conversion,
 full to packed triangular storage,
 complex matrices f01vbf (ztrttp)
 real matrices f01vaf (dtrttp)
 full to Rectangular Full Packed storage,
 complex matrix f01vff (ztrttf)
 real matrix f01vef (dtrttf)
 packed band  ↔  rectangular storage, special provision for diagonal
 complex matrices f01zdf
 real matrices f01zcf
 packed triangular to full storage,
 complex matrices f01vdf (ztpttr)
 real matrices f01vcf (dtpttr)
 packed triangular to Rectangular Full Packed storage,
 complex matrices f01vkf (ztpttf)
 real matrices f01vjf (dtpttf)
 packed triangular  ↔  square storage, special provision for diagonal
 complex matrices f01zbf
 real matrices f01zaf
 Rectangular Full Packed to full storage,
 complex matrices f01vhf (ztfttr)
 real matrices f01vgf (dtfttr)
 Rectangular Full Packed to packed triangular storage,
 complex matrices f01vmf (ztfttp)
 real matrices f01vlf (dtfttp)
 matrix subtraction,
 complex matrices f01cwf
 real matrices f01ctf
 matrix transpose f01crf
 Matrix function,
 complex Hermitian n by n matrix,
 matrix exponential f01fdf
 matrix function f01fff
 complex n by n matrix,
 condition number for a matrix exponential f01kgf
 condition number for a matrix exponential, logarithm, sine, cosine, sinh or cosh f01kaf
 condition number for a matrix function, using numerical differentiation f01kbf
 condition number for a matrix function, using user-supplied derivatives f01kcf
 condition number for a matrix logarithm f01kjf
 condition number for a matrix power f01kef
 condition number for the matrix square root, logarithm, sine, cosine, sinh or cosh f01kdf
 Fréchet derivative
 matrix exponential f01khf
 matrix logarithm f01kkf
 matrix power f01kff
 general power
 matrix f01fqf
 matrix exponential f01fcf
 matrix exponential, sine, cosine, sinh or cosh f01fkf
 matrix function, using numerical differentiation f01flf
 matrix function, using user-supplied derivatives f01fmf
 matrix logarithm f01fjf
 matrix square root f01fnf
 upper triangular
 matrix square root f01fpf
 real n by n matrix,
 condition number for a matrix exponential f01jgf
 condition number for a matrix function, using numerical differentiation f01jbf
 condition number for a matrix function, using user-supplied derivatives f01jcf
 condition number for a matrix logarithm f01jjf
 condition number for a matrix power f01jef
 condition number for the matrix exponential, logarithm, sine, cosine, sinh or cosh f01jaf
 condition number for the matrix square root, logarithm, sine, cosine, sinh or cosh f01jdf
 Fréchet derivative
 matrix exponential f01jhf
 matrix logarithm f01jkf
 matrix power f01jff
 general power
 matrix f01eqf
 matrix exponential f01ecf
 matrix exponential, sine, cosine, sinh or cosh f01ekf
 matrix function, using numerical differentiation f01elf
 matrix function, using user-supplied derivatives f01emf
 matrix logarithm f01ejf
 matrix square root f01enf
 upper quasi-triangular
 matrix square root f01epf
 real symmetric n by n matrix,
 matrix exponential f01edf
 matrix function f01eff
 Matrix Transformations,
 complex matrix, form unitary matrix f01rkf
 complex m by n (m ≤ n) matrix,
 RQ factorization f01rjf
 complex upper trapezoidal matrix,
 RQ factorization f01rgf
 eigenproblem Ax = λBx, A, B banded,
 reduction to standard symmetric problem f01bvf
 real almost block-diagonal matrix,
 LU factorization f01lhf
 real band symmetric positive definite matrix,
 $ULDL^{\mathrm{T}}U^{\mathrm{T}}$ factorization f01buf
 variable bandwidth, $LDL^{\mathrm{T}}$ factorization f01mcf
 real matrix,
 form orthogonal matrix f01qkf
 real m by n (m ≤ n) matrix,
 RQ factorization f01qjf
 real sparse matrix,
 factorization f01brf
 factorization, known sparsity pattern f01bsf
 real upper trapezoidal matrix,
 RQ factorization f01qgf
 tridiagonal matrix,
 LU factorization f01lef

## 6 Auxiliary Routines Associated with Library Routine Arguments

None.

## 7 Routines Withdrawn or Scheduled for Withdrawal

The following lists all those routines that have been withdrawn since Mark 19 of the Library or are scheduled for withdrawal at one of the next two marks.
| Withdrawn Routine | Mark of Withdrawal | Replacement Routine(s) |
|---|---|---|
| f01maf | 19 | f11jaf |