
F — linear algebra

The F Chapters of the Library are concerned with linear algebra and cover a large area. This general introduction is intended to help you decide which particular F Chapter is relevant to your problem. The following Chapters are currently available:

- Chapter F01 – matrix operations, including inversion
- Chapter F02 – eigenvalues and eigenvectors
- Chapter F03 – determinants
- Chapter F04 – simultaneous linear equations
- Chapter F05 – orthogonalization
- Chapter F07 – linear equations (LAPACK)
- Chapter F08 – least squares and eigenvalue problems (LAPACK)
- Chapter F11 – large scale linear systems
- Chapter F12 – large scale eigenproblems
- Chapter F16 – further linear algebra support routines

The principal problem areas addressed by the above Chapters are

- Systems of linear equations
- Linear least squares problems
- Eigenvalue and singular value problems

The solution of these problems usually involves several matrix operations, such as a matrix factorization followed by the solution of the factorized form, and the functions for these operations themselves utilize lower-level support functions. You will not normally need to be concerned with these support functions.

NAG has been involved in a project, called LAPACK (see Anderson et al. (1999)), to develop a linear algebra package for modern high-performance computers, and the functions developed within that project are
incorporated into the Library as Chapters F07 and F08. It should be emphasized that, while the LAPACK project has been concerned with high-performance computers, the functions do not compromise efficiency on conventional machines.

Chapters F11 and F12 contain functions for solving large scale problems, but a few earlier functions are still located in Chapters F01, F02 and F04.

For background information on numerical algorithms for the solution of linear algebra problems see Golub and Van Loan (1996).
For the three main problem areas listed above you generally have
the choice of selecting a single function to solve the problem, a so-called Black Box function, or selecting more than one function to solve the problem, such as a factorization function followed by a solve function, so-called General Purpose functions. The following sections indicate which chapters are relevant to particular problem areas.

The Black Box functions for solving linear equations of the form

$$Ax=b\text{\hspace{1em} and \hspace{1em}}AX=B\text{,}$$

where $A$ is an $n$ by $n$ real or complex nonsingular matrix, are to be found in Chapters F04 and F07. Such equations can also be solved by selecting a general purpose factorization function from Chapter F01 or Chapter F03 and combining it with a solve function in Chapter F04, or by selecting a factorization and a solve function from Chapter F07. For large sparse problems, functions from Chapter F11 should be used. In addition there are functions to estimate condition numbers in Chapters F04 and F07, and functions to give error estimates in Chapters F02, F04 and F07.

There are functions to cater for a variety of types of matrix, including general, symmetric or Hermitian, symmetric or Hermitian positive definite, banded, skyline and sparse matrices.

In order to select the appropriate function, you are recommended to consult the F04 Chapter Introduction in the first instance, although the decision trees will often in fact point to a function in Chapters F07 or F11.
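The factorize-then-solve pattern described above can be sketched in a few lines. This is an illustration only: NumPy's `solve` stands in for a Black Box solver of the kind found in Chapters F04 and F07 (internally it performs an LU factorization followed by triangular solves), and `cond` plays the role of the condition-estimation functions mentioned above.

```python
import numpy as np

# A small nonsingular system Ax = b, solved by a "Black Box" call
# that factorizes A and then solves the factorized form.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)              # LU factorization + solve
residual = np.linalg.norm(A @ x - b)   # should be near machine precision

# Condition number indicates how much error the data can amplify.
cond = np.linalg.cond(A)
```

A large condition number warns that small perturbations in $A$ or $b$ may produce large changes in $x$, which is why the Library also provides error-estimation functions.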

The Black Box functions for solving linear least squares problems of the form

$$\underset{x}{\mathrm{minimize}}\phantom{\rule{0.25em}{0ex}}{r}^{\mathrm{T}}r\text{,\hspace{1em}where\hspace{1em}}r=b-Ax\text{,}$$

where $A$ is an $m$ by $n$, possibly rank deficient, matrix, are to be found in Chapters F04 and F08. Such problems can also be solved by selecting one or more general purpose factorization functions from Chapters F02 or F08 and combining them with a solve function in Chapter F04, which also contains a function to compute covariance matrices, or Chapter F08. Linear least squares problems can also be solved by functions in the statistical Chapter G02.

In order to select the appropriate function, you are recommended to consult the F04 Chapter Introduction in the first instance, but if you have additional statistical requirements you may prefer to consult Regression in the G02 Chapter Introduction.

Chapter F08 also contains functions for solving linear equality constrained least squares problems, and the general Gauss–Markov linear model problem. Chapter E04 contains a function to solve general linearly constrained linear least squares problems.
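A minimal sketch of the unconstrained least squares problem above, using NumPy's `lstsq` as a stand-in for a Black Box least squares solver (the NAG functions themselves live in Chapters F04 and F08). At the minimizer the residual $r = b - Ax$ satisfies the normal-equations condition $A^{\mathrm{T}}r = 0$, which the example checks.

```python
import numpy as np

# Overdetermined system: 4 observations, 2 unknowns (a straight-line fit).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([0.1, 0.9, 2.1, 2.9])

# Minimize ||b - Ax||_2; also returns the numerical rank of A.
x, res, rank, sv = np.linalg.lstsq(A, b, rcond=None)
r = b - A @ x               # residual vector r = b - Ax
```

For rank-deficient $A$ the same call still returns a solution (the minimum-norm one), which is why rank-revealing factorizations underpin these solvers.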

The Black Box functions for solving standard matrix eigenvalue problems of the form

$$Ax=\lambda x\text{,}$$

where $A$ is an $n$ by $n$ real or complex matrix, and generalized matrix eigenvalue problems of the form

$$Ax=\lambda Bx\text{\hspace{1em}and\hspace{1em}}ABx=\lambda x\text{,}$$

where $B$ is also an $n$ by $n$ matrix, are to be found in Chapters F02, F08 and F12. These eigenvalue problems can also be solved by combinations of General Purpose functions (mostly in Chapter F08, but a few in Chapter F02).

There are functions to cater for various types of matrices, including general, symmetric or Hermitian, and banded and sparse matrices.

Similarly, the Black Box functions for finding singular values and/or singular vectors of an $m$ by $n$ real or complex matrix $A$ are to be found in Chapters F02 and F08, and such problems may also be solved by functions from Chapter F12, and by combining functions from Chapter F08.

In order to select the appropriate function, you are recommended to consult Chapters F02 and F08 in the first instance.
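The standard eigenvalue and singular value problems above can be illustrated briefly. NumPy here is only a stand-in for the Black Box functions of Chapters F02 and F08; note how a symmetric matrix gets a dedicated solver (`eigh`), mirroring the Library's separate functions for symmetric/Hermitian matrices.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])    # symmetric, so use the symmetric solver

# Standard problem Ax = lambda x: eigenvalues returned in ascending order.
lam, V = np.linalg.eigh(A)

# Singular value decomposition A = U diag(s) V^T:
# singular values returned in descending order.
U, s, Vt = np.linalg.svd(A)
```

For this symmetric positive definite $A$ the singular values coincide with the eigenvalues (reordered); in general the singular values of $A$ are the square roots of the eigenvalues of $A^{\mathrm{T}}A$.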

Functions for matrix inversion are to be found in
Chapters F01 and F07.
You are recommended to consult Chapter F01 in the first instance, although the decision tree will often in fact point to a function in Chapter F07.
It should be noted that you are strongly encouraged not to use matrix inversion functions for the solution of linear equations, since these can be solved more efficiently and accurately using functions directed specifically at such problems. Indeed, many problems that superficially appear to require matrix inversion can be posed as the solution of a system of linear equations, and this is almost invariably preferable.
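The advice above can be made concrete with a small comparison, again using NumPy as an illustrative stand-in: computing $x = A^{-1}b$ via an explicit inverse does roughly three times the work of a factorize-and-solve and is generally less accurate, even though both routes agree on a well-conditioned problem like this one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Diagonally dominant, hence well conditioned and nonsingular.
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

x_solve = np.linalg.solve(A, b)    # preferred: factorize, then solve
x_inv = np.linalg.inv(A) @ b       # discouraged: forms A^{-1} explicitly

gap = np.linalg.norm(x_solve - x_inv)
```

On ill-conditioned systems the gap between the two routes grows, and the explicit inverse is the one that suffers.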

Functions to compute determinants of matrices are to be found in Chapter F03. You are recommended to consult Chapter F03 in the first instance.
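As an aside on determinants, a common numerical idiom (shown here with NumPy, not a NAG call) is to compute the sign and logarithm of the determinant from a factorization, since the raw determinant of a large matrix easily overflows or underflows floating-point range.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# slogdet returns (sign, log|det|), computed via an LU factorization.
sign, logdet = np.linalg.slogdet(A)
det = sign * np.exp(logdet)        # here det(A) = 4*3 - 1*1 = 11
```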

Functions for various sorts of matrix operation are to be found in Chapter F01, including matrix transposition, addition and multiplication, and conversion between different matrix representation storage formats.

Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) *LAPACK Users' Guide* (3rd Edition) SIAM, Philadelphia

Blackford L S, Demmel J, Dongarra J J, Duff I S, Hammarling S, Henry G, Heroux M, Kaufman L, Lumsdaine A, Petitet A, Pozo R, Remington K and Whaley R C (2002) An updated set of Basic Linear Algebra Subprograms (BLAS) *ACM Trans. Math. Software* **28** 135–151

Dodson D S, Grimes R G and Lewis J G (1991) Sparse extensions to the Fortran basic linear algebra subprograms *ACM Trans. Math. Software* **17** 253–263

Dongarra J J, Du Croz J J, Duff I S and Hammarling S (1990) A set of Level 3 basic linear algebra subprograms *ACM Trans. Math. Software* **16** 1–28

Dongarra J J, Du Croz J J, Hammarling S and Hanson R J (1988) An extended set of FORTRAN basic linear algebra subprograms *ACM Trans. Math. Software* **14** 1–32

Golub G H and Van Loan C F (1996) *Matrix Computations* (3rd Edition) Johns Hopkins University Press, Baltimore

Lawson C L, Hanson R J, Kincaid D R and Krogh F T (1979) Basic linear algebra subprograms for Fortran usage *ACM Trans. Math. Software* **5** 308–325

© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2015