NAG CL Interface
F04 (Linsys)
Simultaneous Linear Equations

1 Scope of the Chapter

This chapter is concerned with the solution of the matrix equation AX=B, where B may be a single vector or a matrix of multiple right-hand sides. The matrix A may be real, complex, symmetric, Hermitian, positive definite or banded. It may also be rectangular, in which case a least squares solution is obtained.
Much of the functionality of this chapter has been superseded by functions from Chapters F07 and F08 (LAPACK routines) as those chapters have grown and have included driver and expert driver functions.
For a general introduction to sparse systems of equations, see the F11 Chapter Introduction, which provides functions for large sparse systems.

2 Background to the Problems

A set of linear equations may be written in the form
Ax=b  
where the known matrix A, with real or complex coefficients, is of size m×n (m rows and n columns), the known right-hand vector b has m components (m rows and one column), and the required solution vector x has n components (n rows and one column). There may also be p vectors bᵢ, for i=1,2,…,p, on the right-hand side and the equations may then be written as
AX=B,  
the required matrix X having as its p columns the solutions of Axᵢ=bᵢ, for i=1,2,…,p. Some functions deal with the latter case, but for clarity only the case p=1 is discussed here.
The most common problem, the determination of the unique solution of Ax=b, occurs when m=n and A is not singular, that is rank(A)=n. This is discussed in Section 2.1 below. The next most common problem, discussed in Section 2.2 below, is the determination of the least squares solution of Ax≃b required when m>n and rank(A)=n, i.e., the determination of the vector x which minimizes the norm of the residual vector r=b−Ax. All other cases are rank deficient, and they are treated in Section 2.3.

2.1 Unique Solution of Ax=b

Most functions in this chapter solve this particular problem. The computation starts with the triangular decomposition A=PLU, where L and U are respectively lower and upper triangular matrices and P is a permutation matrix, chosen so as to ensure that the decomposition is numerically stable. The solution is then obtained by solving in succession the simpler equations
Ly = Pᵀb   and   Ux = y,
the first by forward-substitution and the second by back-substitution.
If A is real symmetric and positive definite, U=Lᵀ, while if A is complex Hermitian and positive definite, U=Lᴴ; in both these cases P is the identity matrix (i.e., no permutations are necessary). In all other cases either U or L has unit diagonal elements.
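As an illustration of the two triangular solves, the following sketch performs the forward- and back-substitution steps for a dense system, assuming the factorization A=PLU has already been computed and is supplied as a packed LU array together with the row permutation. The function name, storage convention and argument list are illustrative only, not the interface of any particular NAG function.

    /* Solve A x = b given a precomputed factorization A = P L U.
     * lu:  n-by-n array (row-major) holding L strictly below the diagonal
     *      (unit diagonal implied) and U on and above the diagonal.
     * piv: row permutation, so that entry piv[i] of b corresponds to row i
     *      of the permuted system.
     * Illustrative sketch only; real codes should also guard against
     * zero (or tiny) diagonal elements of U.
     */
    #include <stddef.h>

    void plu_solve(size_t n, const double lu[], const size_t piv[],
                   const double b[], double x[])
    {
        /* Forward-substitution: solve L y = P^T b (y overwrites x). */
        for (size_t i = 0; i < n; ++i) {
            double s = b[piv[i]];
            for (size_t j = 0; j < i; ++j)
                s -= lu[i*n + j] * x[j];
            x[i] = s;                      /* L has unit diagonal */
        }
        /* Back-substitution: solve U x = y. */
        for (size_t i = n; i-- > 0; ) {
            double s = x[i];
            for (size_t j = i + 1; j < n; ++j)
                s -= lu[i*n + j] * x[j];
            x[i] = s / lu[i*n + i];
        }
    }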
Due to rounding errors the computed ‘solution’ x₀, say, is only an approximation to the true solution x. This approximation will sometimes be satisfactory, agreeing with x to several figures, but if the problem is ill-conditioned then x and x₀ may have few or even no figures in common, and at this stage there is no means of estimating the ‘accuracy’ of x₀.
It must be emphasized that the ‘true’ solution x may not be meaningful, that is, correct to all figures quoted, if the elements of A and b are known with certainty only to, say, p figures, where p is less than full precision.
One approach to assessing the accuracy of the solution is to compute or estimate the condition number of A, defined as
κ(A) = ‖A‖ · ‖A⁻¹‖.
Roughly speaking, errors or uncertainties in A or b may be amplified in the solution by a factor κ(A). Thus, for example, if the data in A and b are only accurate to 5 digits and κ(A)≈10³, then the solution cannot be guaranteed to have more than 2 correct digits. If κ(A)≈10⁵, the solution may have no meaningful digits.
To be more precise, suppose that
Ax=b  and  (A+δA)(x+δx)=b+δb.  
Here δA and δb represent perturbations to the matrices A and b which cause a perturbation δx in the solution. We can define measures of the relative sizes of the perturbations in A, b and x as
ρA = ‖δA‖/‖A‖,   ρb = ‖δb‖/‖b‖   and   ρx = ‖δx‖/‖x‖,   respectively.
Then
ρx ≤ (κ(A) / (1 − κ(A)ρA)) (ρA + ρb)
provided that κ(A)ρA < 1. Often κ(A)ρA ≪ 1, and then the bound effectively simplifies to
ρx ≤ κ(A)(ρA + ρb).
Hence, if we know κ(A), ρA and ρb, we can compute a bound on the relative errors in the solution. Note that ρA, ρb and ρx are defined in terms of the norms of A, b and x. If A, b or x contains elements of widely differing magnitude, then ρA, ρb and ρx will be dominated by the errors in the larger elements, and ρx will give no information about the relative accuracy of smaller elements of x.
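Once κ(A), ρA and ρb are known or estimated, the bound is trivial to evaluate. A minimal sketch in C (the function name is illustrative, not a NAG routine):

    #include <math.h>

    /* Bound on the relative error rho_x, given kappa(A) and the relative
     * perturbations rho_A and rho_b.  Returns INFINITY when the bound does
     * not apply, i.e., when kappa*rho_A >= 1. */
    double relative_error_bound(double kappa, double rho_a, double rho_b)
    {
        if (kappa * rho_a >= 1.0)
            return INFINITY;
        return kappa / (1.0 - kappa * rho_a) * (rho_a + rho_b);
    }

For example, with κ(A)=10³ and ρA=ρb=10⁻⁵ (data correct to about five figures) the bound is roughly 2×10⁻², consistent with the earlier remark that such a solution cannot be guaranteed to have more than about 2 correct digits.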
Another way to obtain useful information about the accuracy of a solution is to solve two sets of equations, one with the given coefficients, which are assumed to be known with certainty to p figures, and one with the coefficients rounded to (p-1) figures, and to count the number of figures to which the two solutions agree. In ill-conditioned problems this can be surprisingly small and even zero.
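The agreement between two such computed solutions can be quantified crudely, element by element, by the number of significant figures they have in common. A small illustrative helper (not a NAG routine) might look like this:

    #include <float.h>
    #include <math.h>

    /* Rough count of the number of significant decimal figures to which
     * two computed values agree (0 means no agreement). */
    int figures_in_common(double x1, double x2)
    {
        double scale = fmax(fabs(x1), fabs(x2));
        if (scale == 0.0)
            return DBL_DIG;            /* both exactly zero */
        double rel = fabs(x1 - x2) / scale;
        if (rel == 0.0)
            return DBL_DIG;
        int figs = (int)floor(-log10(rel));
        return figs > 0 ? figs : 0;
    }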

2.2 The Least Squares Solution of Ax≃b, m>n, rank(A)=n

The least squares solution is the vector x̂ which minimizes the sum of the squares of the residuals,
S = (b − Ax̂)ᵀ(b − Ax̂) = ‖b − Ax̂‖₂².
The solution is obtained in two steps.
  (a) Householder transformations are used to reduce A to ‘simpler form’ via the equation QA=R, where R has the appearance
          ( R̂ )
          ( 0 )
      with R̂ a nonsingular upper triangular n×n matrix and 0 a zero matrix of size (m−n)×n. Similar operations convert b to Qb=c, where
          c = ( c₁ )
              ( c₂ )
      with c₁ having n rows and c₂ having (m−n) rows.
  (b) The required least squares solution is obtained by back-substitution in the equation
          R̂x̂ = c₁.
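A useful by-product of this approach is that the minimum residual sum of squares is available without forming the residual explicitly: since Q is orthogonal, for any x
‖b − Ax‖₂² = ‖c₁ − R̂x‖₂² + ‖c₂‖₂²,
so the minimum value of S, attained at x = x̂, is simply ‖c₂‖₂².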
Again, due to rounding errors, the computed x̂₀ is only an approximation to the required x̂.

2.3 Rank-deficient Cases

If, in the least squares problem just discussed, rank(A)<n, then a least squares solution exists but it is not unique. In this situation it is usual to ask for the least squares solution ‘of minimal length’, i.e., the vector x which minimizes ‖x‖₂ among all those x for which ‖b − Ax‖₂ is a minimum.
This can be computed from the Singular Value Decomposition (SVD) of A, in which A is factorized as
A = QDPᵀ
where Q is an m×n matrix with orthonormal columns, P is an n×n orthogonal matrix and D is an n×n diagonal matrix. The diagonal elements of D are called the ‘singular values’ of A; they are non-negative and can be arranged in decreasing order of magnitude:
d₁ ≥ d₂ ≥ … ≥ dₙ ≥ 0.
The columns of Q and P are called respectively the left and right singular vectors of A. If the singular values dᵣ₊₁,…,dₙ are zero or negligible, but dᵣ is not negligible, then the rank of A is taken to be r (see also Section 2.4) and the minimal length least squares solution of Ax≃b is given by
x̂ = PD†Qᵀb
where D† is the diagonal matrix with diagonal elements d₁⁻¹, d₂⁻¹, …, dᵣ⁻¹, 0, …, 0.
The SVD may also be used to find solutions to the homogeneous system of equations Ax=0, where A is m×n. Such solutions exist if and only if rank(A)<n, and are given by
x = ∑_{i=r+1}^{n} αᵢpᵢ
where the αᵢ are arbitrary numbers and the pᵢ are the columns of P which correspond to negligible elements of D.
The general solution to the rank-deficient least squares problem is given by x̂ + x, where x̂ is the minimal length least squares solution and x is any solution of the homogeneous system of equations Ax=0.
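As a sketch of how the minimal length solution is applied in practice, the following C fragment assumes the factors Q (m×n, row-major), the singular values d and P (n×n, row-major) have already been obtained from some SVD routine, and screens out negligible singular values with a caller-supplied tolerance. All names and conventions are illustrative rather than those of a NAG function.

    #include <stddef.h>

    /* Minimal length least squares solution  x = P D^+ Q^T b,
     * given the factorization A = Q D P^T (Q is m-by-n with orthonormal
     * columns, d[] holds the singular values, P is n-by-n orthogonal).
     * Singular values at or below tol are treated as zero.
     * work[] must have room for n doubles.  Illustrative sketch only. */
    void svd_min_norm_solve(size_t m, size_t n,
                            const double Q[], const double d[],
                            const double P[], const double b[],
                            double tol, double x[], double work[])
    {
        /* work = D^+ Q^T b : for each retained singular value d[j],
         * work[j] = (column j of Q, dotted with b) / d[j].           */
        for (size_t j = 0; j < n; ++j) {
            if (d[j] > tol) {
                double s = 0.0;
                for (size_t i = 0; i < m; ++i)
                    s += Q[i*n + j] * b[i];
                work[j] = s / d[j];
            } else {
                work[j] = 0.0;         /* negligible: contributes nothing */
            }
        }
        /* x = P * work. */
        for (size_t i = 0; i < n; ++i) {
            double s = 0.0;
            for (size_t j = 0; j < n; ++j)
                s += P[i*n + j] * work[j];
            x[i] = s;
        }
    }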

2.4 The Rank of a Matrix

In theory the rank is r if n-r elements of the diagonal matrix D of the singular value decomposition are exactly zero. In practice, due to rounding and/or experimental errors, some of these elements have very small values which usually can and should be treated as zero.
For example, the following 5×8 matrix has rank 3 in exact arithmetic:
( 22  14  −1  −3   9   9   2   4 )
( 10   7  13  −2   8   1  −6   5 )
(  2  10  −1  13   1  −7   6   0 )
(  3   0 −11  −2  −2   5   5  −2 )
(  7   8   3   4   4  −1   1   2 )
On a computer with 7 decimal digits of precision the computed singular values were
3.5×10¹,   2.0×10¹,   2.0×10¹,   1.3×10⁻⁶,   5.5×10⁻⁷
and the rank would be correctly taken to be 3.
It is not, however, always certain that small computed singular values are really zero. With the 7×7 Hilbert matrix, for example, where aᵢⱼ = 1/(i+j−1), the singular values are
1.7,   2.7×10⁻¹,   2.1×10⁻²,   1.0×10⁻³,   2.9×10⁻⁵,   4.9×10⁻⁷,   3.5×10⁻⁹.
Here there is no clear cut-off between small (i.e., negligible) singular values and larger ones. In fact, in exact arithmetic, the matrix is known to have full rank and none of its singular values is zero. On a computer with 7 decimal digits of precision, the matrix is effectively singular, but should its rank be taken to be 6, or 5, or 4?
It is, therefore, impossible to give an infallible rule, but generally the rank can be taken to be the number of singular values which are neither zero nor very small compared with other singular values. For example, if there is a sharp decrease in singular values from numbers of order unity to numbers of order 10⁻⁷, then the latter will almost certainly be zero in a machine in which 7 significant decimal figures is the maximum accuracy. Similarly, for a least squares problem in which the data are known to about four significant figures and the largest singular value is of order unity, a singular value of order 10⁻⁴ or less should almost certainly be regarded as zero.
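A rule of this kind is simple to apply once the singular values are available. The sketch below (illustrative, not a NAG interface) counts the singular values exceeding a relative threshold; the choice of rel_tol encodes the assumption about machine precision or data accuracy discussed above.

    #include <stddef.h>

    /* Numerical rank: the number of singular values d[0] >= d[1] >= ...
     * that exceed rel_tol * d[0].  rel_tol might be a small multiple of
     * the machine precision, or the relative accuracy of the data
     * (e.g., 1.0e-4 for data known to about four figures). */
    size_t numerical_rank(size_t n, const double d[], double rel_tol)
    {
        if (n == 0 || d[0] <= 0.0)
            return 0;
        size_t r = 0;
        while (r < n && d[r] > rel_tol * d[0])
            ++r;
        return r;
    }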
It should be emphasized that rank determination and least squares solutions can be sensitive to the scaling of the matrix. If at all possible the units of measurement should be chosen so that the elements of the matrix have data errors of approximately equal magnitude.

2.5 Generalized Linear Least Squares Problems

The simple type of linear least squares problem described in Section 2.2 can be generalized in various ways.
  1. Linear least squares problems with equality constraints:
     find x to minimize S = ‖c − Ax‖₂²   subject to   Bx = d,
     where A is m×n and B is p×n, with p ≤ n ≤ m+p. The equations Bx=d may be regarded as a set of equality constraints on the problem of minimizing S. Alternatively the problem may be regarded as solving an overdetermined system of equations
         ( A )        ( c )
         ( B ) x  =   ( d ),
     where some of the equations (those involving B) are to be solved exactly, and the others (those involving A) are to be solved in a least squares sense. The problem has a unique solution on the assumptions that B has full row rank p and the stacked matrix ( A; B ) has full column rank n. (For linear least squares problems with inequality constraints, refer to Chapter E04.)
  2. General Gauss–Markov linear model problems:
     minimize ‖y‖₂   subject to   d = Ax + By,
     where A is m×n and B is m×p, with n ≤ m ≤ n+p. When B=I, the problem reduces to an ordinary linear least squares problem. When B is square and nonsingular, it is equivalent to a weighted linear least squares problem:
     find x to minimize ‖B⁻¹(d − Ax)‖₂.
     The problem has a unique solution on the assumptions that A has full column rank n, and the matrix (A,B) has full row rank m.

2.6 Calculating the Inverse of a Matrix

The functions in this chapter can also be used to calculate the inverse of a square matrix A by solving the equation
AX=I  
where I is the identity matrix. However, solving the equations AX=B by calculation of the inverse of the coefficient matrix A, i.e., by X = A⁻¹B, is definitely not recommended.
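One reason, besides accuracy, is cost. For a dense n×n matrix A and p right-hand sides, approximate floating-point operation counts for a standard LU-based approach are roughly
(2/3)n³ + 2pn²   (factorize A once, then solve for each right-hand side)
versus
2n³ + 2pn²   (form A⁻¹ explicitly, then form A⁻¹B),
so forming the inverse roughly triples the O(n³) work; moreover, the solution obtained via A⁻¹ is in general no more accurate, and often less accurate, than the solution obtained directly from the factorization.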
Similar remarks apply to the calculation of the pseudo-inverse of a singular or rectangular matrix.

2.7 Estimating the 1-norm of a Matrix

The 1-norm of a matrix A is defined to be:
‖A‖₁ = max_{1≤j≤n} ∑_{i=1}^{m} |aᵢⱼ|   (1)
Typically it is useful to calculate the condition number of a matrix with respect to the solution of linear equations, or inversion. The larger the condition number, the less accuracy can be expected from a numerical computation. A condition number for the solution of linear equations is ‖A‖·‖A⁻¹‖. Since computing this exactly can be relatively expensive, it often suffices to estimate the norm of each matrix.
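Computing ‖A‖₁ itself is cheap and exact; it is ‖A⁻¹‖ that is expensive to obtain directly, which is why an estimator is used for that factor. For reference, a direct evaluation of equation (1) for a row-major array might look as follows (illustrative code, not a NAG interface):

    #include <math.h>
    #include <stddef.h>

    /* 1-norm of an m-by-n matrix stored row-major: the maximum over the
     * columns of the sum of absolute values of the column's elements. */
    double one_norm(size_t m, size_t n, const double a[])
    {
        double norm = 0.0;
        for (size_t j = 0; j < n; ++j) {
            double colsum = 0.0;
            for (size_t i = 0; i < m; ++i)
                colsum += fabs(a[i*n + j]);
            if (colsum > norm)
                norm = colsum;
        }
        return norm;
    }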

3 Recommendations on Choice and Use of Available Functions

See also Section 3 in the F07 Chapter Introduction for recommendations on the choice of available functions from that chapter.

3.1 Black Box and General Purpose Functions

Most of the functions in this chapter are categorised either as Black Box functions or general purpose functions.
Black Box functions solve the equations Axᵢ=bᵢ, for i=1,2,…,p, in a single call with the matrix A and the right-hand sides, bᵢ, being supplied as data. These are the simplest functions to use and are suitable when all the right-hand sides are known in advance and do not occupy too much storage.
General purpose functions, in general, require a previous call to a function in Chapters F01 or F07 to factorize the matrix A. This factorization can then be used repeatedly to solve the equations for one or more right-hand sides which may be generated in the course of the computation. The Black Box functions simply call a factorization function and then a general purpose function to solve the equations.

3.2 Systems of Linear Equations

Most of the functions in this chapter solve linear equations Ax=b when A is n×n and a unique solution is expected (see Section 2.1). The matrix A may be ‘general’ real or complex, or may have special structure or properties, e.g., it may be banded, tridiagonal, almost block-diagonal, sparse, symmetric, Hermitian, positive definite (or various combinations of these). For some of the combinations see Chapter F07. f04mcc (which needs to be preceded by a call to f01mcc) can be used for the solution of variable band-width (skyline) positive definite systems.
It must be emphasized that it is a waste of computer time and space to use an inappropriate function, for example one for the complex case when the equations are real. It is also unsatisfactory to use the special functions for a positive definite matrix if this property is not known in advance.
A number of the Black Box functions in this chapter return estimates of the condition number and the forward error, along with the solution of the equations. For those functions that do not return a condition estimate, two functions are provided – f04ydc for real matrices, f04zdc for complex matrices – which can return a cheap but reliable estimate of ‖A⁻¹‖₁, and hence an estimate of the condition number κ(A) (see Section 2.1). These functions can also be used in conjunction with most of the linear equation solving functions in Chapter F11: further advice is given in the function documents.
Other functions for solving linear equation systems, computing inverse matrices, and estimating condition numbers can be found in Chapter F07, which contains LAPACK software.

3.3 Linear Least Squares Problems

The majority of the functions for solving linear least squares problems are to be found in Chapter F08.
Functions for solving linear least squares problems using the QR factorization or the SVD can be found in Chapters F01, F02 and F08. When m ≥ n and a unique solution is expected, the QR factorization can be used, otherwise the QR factorization with pivoting, or the SVD, should be used. For m ≫ n, the SVD is not significantly more expensive than the QR factorization. See Chapter F08 for further discussion.
Problems with linear equality constraints can be solved by f08zac (for real data) or by f08znc (for complex data), provided that the problems are of full rank. Problems with linear inequality constraints can be solved by e04ncc in Chapter E04.
General Gauss–Markov linear model problems, as formulated in Section 2.5, can be solved by f08zbc (for real data) or by f08zpc (for complex data).

3.4 Sparse Matrix Functions

Functions specifically for sparse matrices are appropriate only when the number of nonzero elements is very small, less than, say, 10% of the n² elements of A, and the matrix does not have a relatively small bandwidth.
Chapter F11 contains functions for both the direct and iterative solution of sparse linear systems.

4 Decision Trees

The name of the function (if any) that should be used to factorize the matrix A is given in brackets after the name of the function for solving the equations.

Tree 1: Black Box functions for unique solution of Ax=b (Real matrix)

Is A symmetric?
    yes: Is A positive definite?
        yes: Is A a band matrix?
            yes: Is A tridiagonal?
                yes: f07jac, f07jbc or f04bgc (see Note 1)
                no:  f07hac, f07hbc or f04bfc (see Note 1)
            no:  Is one triangle of A stored as a linear array?
                yes: f07gac, f07gbc or f04bec (see Note 1)
                no:  f07fac, f07fbc or f04bdc (see Note 1)
        no:  Is one triangle of A stored as a linear array?
            yes: f07pac, f07pbc or f04bjc (see Note 1)
            no:  f07mac, f07mbc or f04bhc (see Note 1)
    no:  Is A a band matrix?
        yes: Is A tridiagonal?
            yes: f07cac, f07cbc or f04bcc (see Note 1)
            no:  f07bac, f07bbc or f04bbc (see Note 1)
        no:  f07aac, f07abc or f04bac

Tree 2: Black Box functions for unique solution of Ax=b (Complex matrix)

Is A Hermitian?
    yes: Is A positive definite?
        yes: Is A a band matrix?
            yes: Is A tridiagonal?
                yes: f07jnc, f07jpc or f04cgc (see Note 1)
                no:  f07hnc, f07hpc or f04cfc (see Note 1)
            no:  Is one triangle of A stored as a linear array?
                yes: f07gnc, f07gpc or f04cec (see Note 1)
                no:  f07fnc, f07fpc or f04cdc (see Note 1)
        no:  Is one triangle of A stored as a linear array?
            yes: f07pnc, f07ppc or f04cjc (see Note 1)
            no:  f07mnc, f07mpc or f04chc (see Note 1)
    no:  Is A symmetric?
        yes: Is one triangle of A stored as a linear array?
            yes: f07qnc, f07qpc or f04djc (see Note 1)
            no:  f07nnc, f07npc or f04dhc (see Note 1)
        no:  Is A a band matrix?
            yes: Is A tridiagonal?
                yes: f07cnc, f07cpc or f04ccc (see Note 1)
                no:  f07bnc, f07bpc or f04cbc (see Note 1)
            no:  f07anc, f07apc or f04cac (see Note 1)

Tree 3: General purpose functions for unique solution of Ax=b (Real matrix)

Is A a sparse matrix and not banded?
    yes: Chapter F11
    no:  Is A symmetric?
        yes: Is A positive definite?
            yes: Is A a band matrix?
                yes: Is A tridiagonal?
                    yes: f07jec (f07jdc)
                    no:  Variable band width?
                        yes: f04mcc (f01mcc)
                        no:  f07hec (f07hdc)
                no:  Is one triangle of A stored as a linear array?
                    yes: f07gec (f07gdc)
                    no:  f07fec (f07fdc)
            no:  Is one triangle of A stored as a linear array?
                yes: f07pec (f07pdc)
                no:  f07mec (f07mdc)
        no:  Is A triangular?
            yes: Is A a band matrix?
                yes: f07vec
                no:  Is A stored as a linear array?
                    yes: f07uec
                    no:  f07tec
            no:  Is A a band matrix?
                yes: Is A tridiagonal?
                    yes: f07cec (f07cdc)
                    no:  f07bec (f07bdc)
                no:  f07aec (f07adc)

Tree 4: General purpose functions for unique solution of Ax=b (Complex matrix)

Is A a sparse matrix and not banded?
    yes: Chapter F11
    no:  Is A Hermitian?
        yes: Is A positive definite?
            yes: Is A a band matrix?
                yes: Is A tridiagonal?
                    yes: f07jsc (f07jrc)
                    no:  f07hsc (f07hrc)
                no:  Is one triangle of A stored as a linear array?
                    yes: f07gsc (f07grc)
                    no:  f07fsc (f07frc)
            no:  Is one triangle of A stored as a linear array?
                yes: f07psc (f07prc)
                no:  f07msc (f07mrc)
        no:  Is A symmetric?
            yes: Is one triangle of A stored as a linear array?
                yes: f07qsc (f07qrc)
                no:  f07nsc (f07nrc)
            no:  Is A triangular?
                yes: Is A a band matrix?
                    yes: f07vsc
                    no:  Is A stored as a linear array?
                        yes: f07usc
                        no:  f07tsc
                no:  Is A a band matrix?
                    yes: Is A tridiagonal?
                        yes: f07csc (f07crc)
                        no:  f07bsc (f07brc)
                    no:  f07asc (f07arc)
Note 1: also returns an estimate of the condition number and the forward error.

5 Functionality Index

Black Box functions, Ax=b,  
complex general band matrix   f04cbc
complex general matrix   f04cac
complex general tridiagonal matrix   f04ccc
complex Hermitian matrix,  
packed matrix format   f04cjc
standard matrix format   f04chc
complex Hermitian positive definite band matrix   f04cfc
complex Hermitian positive definite matrix,  
packed matrix format   f04cec
standard matrix format   f04cdc
complex Hermitian positive definite tridiagonal matrix   f04cgc
complex symmetric matrix,  
packed matrix format   f04djc
standard matrix format   f04dhc
real general band matrix   f04bbc
real general matrix,  
multiple right-hand sides, standard precision   f04bac
real general tridiagonal matrix   f04bcc
real symmetric matrix,  
packed matrix format   f04bjc
standard matrix format   f04bhc
real symmetric positive definite band matrix   f04bfc
real symmetric positive definite matrix,  
multiple right-hand sides, standard precision   f04bdc
packed matrix format   f04bec
real symmetric positive definite tridiagonal matrix   f04bgc
General Purpose functions, Ax=b,  
real band symmetric positive definite matrix, variable bandwidth   f04mcc
Service Functions,  
complex rectangular matrix,  
norm and condition number estimation   f04zdc
real rectangular matrix,  
norm and condition number estimation   f04ydc

6 Auxiliary Functions Associated with Library Function Arguments

None.

7 Withdrawn or Deprecated Functions

The following lists all those functions that have been withdrawn since Mark 23 of the Library or are in the Library but deprecated.
Function Status Replacement Function(s)
f04adc Withdrawn at Mark 25 f04cac
f04agc Withdrawn at Mark 25 f07fec
f04ajc Withdrawn at Mark 25 f07aec
f04akc Withdrawn at Mark 25 f07asc
f04arc Withdrawn at Mark 25 f04bac
f04awc Withdrawn at Mark 25 f07fsc
