This chapter provides functions for the solution of large sparse systems of simultaneous linear equations. These include iterative methods for real nonsymmetric and symmetric, complex non-Hermitian and Hermitian linear systems and direct methods for general real linear systems. Further direct methods are currently available in Chapters F01 and F04.
2 Background to the Problems
This section is only a brief introduction to the solution of sparse linear systems. For a more detailed discussion see for example Duff et al. (1986) and Demmel et al. (1999) for direct methods, or Barrett et al. (1994) for iterative methods.
2.1 Sparse Matrices and Their Storage
A matrix may be described as sparse if the number of zero elements is sufficiently large that it is worthwhile using algorithms which avoid computations involving zero elements.
If $A$ is sparse, and the chosen algorithm requires the matrix coefficients to be stored, a significant saving in storage can often be made by storing only the nonzero elements. A number of different formats may be used to represent sparse matrices economically. These differ according to the amount of storage required, the amount of indirect addressing required for fundamental operations such as matrix-vector products, and their suitability for vector and/or parallel architectures. For a survey of some of these storage formats see Barrett et al. (1994).
Some of the functions in this chapter have been designed to be independent of the matrix storage format. This allows you to choose your own preferred format, or to avoid storing the matrix altogether. Other functions are the so-called Black Boxes, which are easier to use, but are based on fixed storage formats. Three fixed storage formats for sparse matrices are currently used. These are known as coordinate storage (CS) format, symmetric coordinate storage (SCS) format and compressed column storage (CCS) format.
2.1.1 Coordinate storage (CS) format
This storage format represents a sparse matrix $A$, with nnz nonzero elements, in terms of three one-dimensional arrays – a double or Complex array a and two Integer arrays irow and icol. These arrays are all of dimension at least nnz. a contains the nonzero elements themselves, while irow and icol store the corresponding row and column indices respectively.
Each nonzero element $a_{ij}$ of $A$ is thus stored as a value in a, with its row index $i$ and column index $j$ held in the corresponding positions of irow and icol; the sketch below shows this layout for a small illustrative matrix.
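For illustration, the following C fragment stores a small matrix in CS format and prints its entries. The matrix is invented for this sketch; it is not an example from the Library documents.

```c
#include <stdio.h>

/* CS (coordinate storage) representation of the illustrative 4 x 4 matrix
 *
 *     ( 2  0  0  1 )
 *     ( 0  3  0  0 )
 *     ( 0  4  5  0 )
 *     ( 0  0  0  6 )
 *
 * Nonzeros are listed by increasing row index, and by increasing column
 * index within each row, as some functions require.  Indices are 1-based. */
int main(void)
{
    double a[]    = { 2.0, 1.0, 3.0, 4.0, 5.0, 6.0 };
    int    irow[] = { 1,   1,   2,   3,   3,   4   };
    int    icol[] = { 1,   4,   2,   2,   3,   4   };
    int    nnz    = 6;

    for (int k = 0; k < nnz; k++)
        printf("A(%d,%d) = %g\n", irow[k], icol[k], a[k]);
    return 0;
}
```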
Notes
(i) The general format specifies no ordering of the array elements, but some functions may impose a specific ordering. For example, the nonzero elements may be required to be ordered by increasing row index and by increasing column index within each row, as in the example above. Utility functions are provided to order the elements appropriately (see Section 2.2).
(ii) With this storage format it is possible to enter duplicate elements. These may be interpreted in various ways (e.g., raising an error, ignoring all but the first entry, all but the last, or summing).
2.1.2 Symmetric coordinate storage (SCS) format
This storage format is suitable for symmetric and Hermitian matrices, and is identical to the CS format described in Section 2.1.1, except that only the nonzero elements of the lower triangle are stored. Thus each nonzero $a_{ij}$ with $i \geq j$ is stored in a, with the indices $i$ and $j$ held in irow and icol; a small illustrative example is sketched below.
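An analogous sketch for SCS format, again on an invented symmetric matrix, stores only the lower triangle:

```c
#include <stdio.h>

/* SCS (symmetric coordinate storage) representation of the illustrative
 * symmetric 4 x 4 matrix
 *
 *     ( 4  1  0  0 )
 *     ( 1  5  2  0 )
 *     ( 0  2  6  0 )
 *     ( 0  0  0  7 )
 *
 * Only the lower triangle (i >= j) is stored; indices are 1-based. */
int main(void)
{
    double a[]    = { 4.0, 1.0, 5.0, 2.0, 6.0, 7.0 };
    int    irow[] = { 1,   2,   2,   3,   3,   4   };
    int    icol[] = { 1,   1,   2,   2,   3,   4   };
    int    nnz    = 6;

    for (int k = 0; k < nnz; k++)
        printf("A(%d,%d) = %g\n", irow[k], icol[k], a[k]);
    return 0;
}
```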
2.1.3 Compressed column storage (CCS) format
This storage format also uses three one-dimensional arrays – a double or Complex array a and two Integer arrays irowix and icolzp. The arrays a and irowix are of dimension at least nnz, while icolzp is of dimension at least $n+1$, where $n$ is the number of columns. a contains the nonzero elements, going down the first column, then the second and so on, while irowix records the row index for each entry in a. icolzp records the index into a which starts each new column; its last entry is equal to $nnz+1$. An empty column (one filled with zeros, that is) is signalled by an index that is the same as that of the next non-empty column, or equal to $nnz+1$ if all subsequent columns are empty. A small illustrative example, together with a matrix-vector product that traverses it, is sketched below.
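The following sketch stores an invented matrix in CCS format and computes $y = Ax$ one column at a time; note that the last entry of icolzp is $nnz+1 = 7$ in the 1-based convention used above.

```c
#include <stdio.h>

/* CCS representation of the illustrative 4 x 4 matrix
 *
 *     ( 2  0  0  1 )
 *     ( 0  3  0  0 )
 *     ( 0  4  5  0 )
 *     ( 0  0  0  6 )
 *
 * a      : nonzeros, column by column
 * irowix : 1-based row index of each entry of a
 * icolzp : position in a at which each column starts; icolzp[n] = nnz+1 = 7 */
int main(void)
{
    int    n        = 4;
    double a[]      = { 2.0, 3.0, 4.0, 5.0, 1.0, 6.0 };
    int    irowix[] = { 1,   2,   3,   3,   1,   4   };
    int    icolzp[] = { 1,   2,   4,   5,   7 };

    double x[] = { 1.0, 1.0, 1.0, 1.0 };
    double y[] = { 0.0, 0.0, 0.0, 0.0 };

    /* y = A*x, accumulating one column of A at a time */
    for (int j = 0; j < n; j++)
        for (int k = icolzp[j] - 1; k < icolzp[j + 1] - 1; k++)
            y[irowix[k] - 1] += a[k] * x[j];

    for (int i = 0; i < n; i++)
        printf("y[%d] = %g\n", i + 1, y[i]);
    return 0;
}
```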
The example in Section 10 in f11zac shows how to convert a CS representation to a CCS representation. The example in Section 10 in f11zcc shows how to convert back and forth between CS and CCS for rectangular matrices.
2.2 Direct Methods
Direct methods for the solution of the linear algebraic system
$Ax = b$  (1)
aim to determine the solution vector $x$ in a fixed number of arithmetic operations, which is determined a priori by the number of unknowns. For example, an $LU$ factorization of $A$ followed by forward and backward substitution is a direct method for (1).
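As a small illustration of the substitution stages (a hypothetical dense example, not Library code), the following solves $Ax = b$ given a factorization $A = LU$ with unit lower triangular $L$:

```c
#include <stdio.h>

#define N 3

/* Solve A*x = b given A = L*U, with L unit lower triangular and U upper
 * triangular, both stored as dense N x N arrays. */
static void lu_solve(double L[N][N], double U[N][N],
                     const double b[N], double x[N])
{
    double y[N];

    /* Forward substitution: L*y = b */
    for (int i = 0; i < N; i++) {
        y[i] = b[i];
        for (int j = 0; j < i; j++)
            y[i] -= L[i][j] * y[j];
    }

    /* Backward substitution: U*x = y */
    for (int i = N - 1; i >= 0; i--) {
        x[i] = y[i];
        for (int j = i + 1; j < N; j++)
            x[i] -= U[i][j] * x[j];
        x[i] /= U[i][i];
    }
}

int main(void)
{
    /* L and U chosen so that A = L*U = ( 2 1 0 ; 4 5 3 ; 0 2 5 ) */
    double L[N][N] = { { 1, 0, 0 }, { 2, 1, 0 }, { 0, 2.0/3.0, 1 } };
    double U[N][N] = { { 2, 1, 0 }, { 0, 3, 3 }, { 0, 0, 3 } };
    double b[N]    = { 3, 12, 7 }, x[N];

    lu_solve(L, U, b, x);
    printf("x = (%g, %g, %g)\n", x[0], x[1], x[2]);  /* (1, 1, 1) */
    return 0;
}
```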
If the matrix $A$ is sparse it is possible to design direct methods which exploit the sparsity pattern and are therefore much more computationally efficient than the algorithms in Chapter F07, which in general take no account of sparsity. However, if the matrix is very large and sparse, then iterative methods with an appropriate preconditioner (see Section 2.3) may be more efficient still.
This chapter provides a direct $LU$ factorization method for sparse real systems. This method is based on special coding for supernodes, broadly defined as groups of consecutive columns with the same nonzero structure, which enables the use of dense BLAS kernels. The algorithms contained here come from the SuperLU software suite (see Demmel et al. (1999)). An important requirement of sparse factorization is keeping the factors as sparse as possible. It is well known that certain column orderings can produce much sparser factorizations than the normal left-to-right ordering. It is well worth the effort, then, to find such column orderings, since they reduce the storage requirements of the factors, the time taken to compute them, and the time taken to solve the linear system. The row reorderings demanded by partial pivoting, which is required to keep the factorization stable, can further complicate the choice of the column ordering, but quite good and fast algorithms have been developed to make possible a fairly reliable computation of an appropriate column ordering for any sparsity pattern. We provide one such algorithm (known in the literature as COLAMD) through one function in the suite. As in the dense case, functions are provided to compute the $LU$ factorization with partial row pivoting for numerical stability, solve (1) by performing the forward and backward substitutions for multiple right-hand side vectors, refine the solution, minimize the backward error and estimate the forward error of the solutions, compute norms, estimate condition numbers and perform diagnostics of the factorization. It is also possible to construct explicitly, column by column, the dense inverse of the matrix by solving equation (1) for right-hand sides corresponding to columns of the identity matrix. Blocks of dense columns can be handled at one time and then stored in some chosen sparse format, as system memory allows. For more details see Section 3.4.
It is also possible to compute a direct solution using the iterative-method functions in this chapter, by choosing arguments which force an incomplete factorization to become a complete one (see Section 3.4). This approach is available for sparse real nonsymmetric, complex non-Hermitian, real symmetric positive definite and complex Hermitian positive definite systems. Further direct methods may be found in Chapters F01, F04 and F07.
2.3 Iterative Methods
In contrast to the direct methods discussed in Section 2.2, iterative methods for (1) approach the solution through a sequence of approximations until some user-specified termination criterion is met or until some predefined maximum number of iterations has been reached. The number of iterations required for convergence is not generally known in advance, as it depends on the accuracy required and on the matrix $A$ – its sparsity pattern, conditioning and eigenvalue spectrum.
Faster convergence can often be achieved using a preconditioner (see Golub and Van Loan (1996) and Barrett et al. (1994)). A preconditioner maps the original system of equations onto a different system
$\bar{A}\bar{x} = \bar{b}$  (2)
which hopefully exhibits better convergence characteristics. For example, the condition number of the matrix $\bar{A}$ may be better than that of $A$, or it may have eigenvalues of greater multiplicity.
An unsuitable preconditioner, or no preconditioning at all, may result in a very slow rate of convergence or in a lack of convergence. However, preconditioning involves a trade-off between the reduction in the number of iterations required for convergence and the additional computational cost per iteration. Setting up a preconditioner may also involve non-negligible overheads. The application of preconditioners to real nonsymmetric, complex non-Hermitian, real symmetric and complex Hermitian systems of equations is further considered in Sections 2.4 and 2.5.
2.4 Iterative Methods for Real Nonsymmetric and Complex Non-Hermitian Linear Systems
Many of the most effective iterative methods for the solution of (1) lie in the class of non-stationary Krylov subspace methods (see Barrett et al. (1994)). For real nonsymmetric and complex non-Hermitian matrices this class includes: the restarted generalized minimum residual method RGMRES (see Saad and Schultz (1986)), the conjugate gradient squared method CGS (see Sonneveld (1989)), the bi-conjugate gradient stabilized method Bi-CGSTAB($\ell$) (see Van der Vorst (1989) and Sleijpen and Fokkema (1993)), and the transpose-free quasi-minimal residual method TFQMR (see Freund and Nachtigal (1991) and Freund (1993)).
Here we just give a brief overview of these algorithms as implemented in this chapter.
For full details see the function documents for f11bdc and f11brc.
RGMRES is based on the Arnoldi method, which explicitly generates an orthogonal basis for the Krylov subspace $K_k(A, r_0) = \mathrm{span}\{r_0, Ar_0, A^2r_0, \ldots, A^{k-1}r_0\}$, where $r_0 = b - Ax_0$ is the initial residual. The solution is then expanded onto the orthogonal basis so as to minimize the residual norm. For real nonsymmetric and complex non-Hermitian matrices the generation of the basis requires a ‘long’ recurrence relation, resulting in prohibitive computational and storage costs. RGMRES limits these costs by restarting the Arnoldi process from the latest available residual every $m$ iterations. The value of $m$ is chosen in advance and is fixed throughout the computation. Unfortunately, an optimum value of $m$ cannot easily be predicted.
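A minimal sketch of the underlying Arnoldi process, using modified Gram–Schmidt orthogonalization on an invented matrix, is shown below; all names are illustrative rather than Library identifiers:

```c
#include <math.h>
#include <stdio.h>

#define N 4   /* problem size */
#define M 3   /* Krylov subspace dimension */

/* y = A*x for a small fixed illustrative matrix */
static void matvec(const double *x, double *y)
{
    static const double A[N][N] = {
        { 4, 1, 0, 0 }, { 1, 4, 1, 0 }, { 0, 1, 4, 1 }, { 0, 0, 1, 4 } };
    for (int i = 0; i < N; i++) {
        y[i] = 0.0;
        for (int j = 0; j < N; j++) y[i] += A[i][j] * x[j];
    }
}

static double dot(const double *u, const double *v)
{
    double s = 0.0;
    for (int i = 0; i < N; i++) s += u[i] * v[i];
    return s;
}

int main(void)
{
    double V[M + 1][N];   /* orthonormal Krylov basis vectors      */
    double H[M + 1][M];   /* upper Hessenberg projection of A      */
    double r0[N] = { 1, 0, 0, 1 };   /* initial residual (illustrative) */

    double beta = sqrt(dot(r0, r0));
    for (int i = 0; i < N; i++) V[0][i] = r0[i] / beta;

    for (int j = 0; j < M; j++) {
        double w[N];
        matvec(V[j], w);
        /* modified Gram-Schmidt: orthogonalize w against V[0..j] */
        for (int i = 0; i <= j; i++) {
            H[i][j] = dot(w, V[i]);
            for (int k = 0; k < N; k++) w[k] -= H[i][j] * V[i][k];
        }
        H[j + 1][j] = sqrt(dot(w, w));
        for (int k = 0; k < N; k++) V[j + 1][k] = w[k] / H[j + 1][j];
    }
    printf("H(2,1) = %g\n", H[1][0]);  /* one entry, as a smoke test */
    return 0;
}
```

RGMRES would then minimize the residual norm over this basis and restart after $m$ steps.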
CGS is a development of the bi-conjugate gradient method in which the nonsymmetric Lanczos method is applied to reduce the coefficient matrix to tridiagonal form: two bi-orthogonal sequences of vectors are generated, starting from the initial residual $r_0$ and from the shadow residual $\hat{r}_0$ corresponding to the arbitrary problem $A^T\hat{x} = \hat{b}$, where $\hat{b}$ is chosen so that $\hat{r}_0 = r_0$. In the course of the iteration, the residual and shadow residual $r_i = P_i(A)r_0$ and $\hat{r}_i = P_i(A^T)\hat{r}_0$ are generated, where $P_i$ is a polynomial of order $i$, and bi-orthogonality is exploited by computing the vector product $\rho_i = (\hat{r}_i, r_i) = (\hat{r}_0, P_i^2(A)r_0)$. Applying the ‘contraction’ operator $P_i(A)$ twice, the iteration coefficients can still be recovered without advancing the solution of the shadow problem, which is of no interest. The CGS method often provides fast convergence; however, there is no reason why the contraction operator should also reduce the once reduced vector $P_i(A)r_0$: this can lead to highly irregular convergence.
Bi-CGSTAB($\ell$) is similar to the CGS method. However, instead of generating the sequence $\{P_i^2(A)r_0\}$, it generates the sequence $\{Q_i(A)P_i(A)r_0\}$, where the $Q_i$ are polynomials chosen to minimize the residual after the application of the contraction operator $P_i(A)$. Two main steps can be identified for each iteration: an OR (Orthogonal Residuals) step where a basis of order $\ell$ is generated by a Bi-CG iteration, and an MR (Minimum Residuals) step where the residual is minimized over the basis generated, by a method similar to GMRES. For $\ell = 1$, the method corresponds to the Bi-CGSTAB method of Van der Vorst (1989). For $\ell > 1$, more information about complex eigenvalues of the iteration matrix can be taken into account, and this may lead to improved convergence and robustness. However, as $\ell$ increases, numerical instabilities may arise.
The transpose-free quasi-minimal residual method (TFQMR) (see Freund and Nachtigal (1991) and Freund (1993)) is conceptually derived from the CGS method. The residual is minimized over the space of the residual vectors generated by the CGS iterations under the simplifying assumption that residuals are almost orthogonal. In practice, this is not the case but theoretical analysis has proved the validity of the method. This has the effect of remedying the rather irregular convergence behaviour with wild oscillations in the residual norm that can degrade the numerical performance and robustness of the CGS method. In general, the TFQMR method can be expected to converge at least as fast as the CGS method, in terms of number of iterations, although each iteration involves a higher operation count. When the CGS method exhibits irregular convergence, the TFQMR method can produce much smoother, almost monotonic convergence curves. However, the close relationship between the CGS and TFQMR method implies that the overall speed of convergence is similar for both methods. In some cases, the TFQMR method may converge faster than the CGS method.
Faster convergence can usually be achieved by using a preconditioner. A left preconditioner $M^{-1}$ can be used by the RGMRES, CGS and TFQMR methods, such that $\bar{A} = M^{-1}A \approx I_n$ in (2), where $I_n$ is the identity matrix of order $n$; a right preconditioner $M^{-1}$ can be used by the Bi-CGSTAB($\ell$) method, such that $\bar{A} = AM^{-1} \approx I_n$. These are formal definitions, used only in the design of the algorithms; in practice, only the means to compute the matrix-vector products $v = Au$ and $v = A^Tu$ (the latter only being required when an estimate of $\|A\|_1$ or $\|A\|_\infty$ is computed internally), and to solve the preconditioning equations $Mv = u$ are required, that is, explicit information about $M$, or its inverse $M^{-1}$, is not required at any stage.
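In code, these formal definitions amount to no more than composing the two user-supplied operations, as in this hypothetical sketch (all names invented for illustration):

```c
/* Hypothetical building blocks: v = A*u, and solve M*v = u. */
extern void my_matvec(const double *u, double *v, int n);
extern void my_precond_solve(const double *u, double *v, int n);

/* Left preconditioning: v = M^{-1} (A u), as used by RGMRES, CGS, TFQMR. */
void left_precond_matvec(const double *u, double *v, double *work, int n)
{
    my_matvec(u, work, n);           /* work = A u          */
    my_precond_solve(work, v, n);    /* v    = M^{-1} work  */
}

/* Right preconditioning: v = A (M^{-1} u), as used by Bi-CGSTAB(l). */
void right_precond_matvec(const double *u, double *v, double *work, int n)
{
    my_precond_solve(u, work, n);    /* work = M^{-1} u     */
    my_matvec(work, v, n);           /* v    = A work       */
}
```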
Preconditioning matrices $M$ are typically based on incomplete factorizations (see Meijerink and Van der Vorst (1981)), or on the approximate inverses occurring in stationary iterative methods (see Young (1971)). A common example is the incomplete $LU$ factorization
$A = PLDUQ + R$,
where $L$ is lower triangular with unit diagonal elements, $D$ is diagonal, $U$ is upper triangular with unit diagonal elements, $P$ and $Q$ are permutation matrices, and $R$ is a remainder matrix; the preconditioner is taken as $M = PLDUQ$. A zero-fill incomplete $LU$ factorization is one for which the matrix $M$ has the same pattern of nonzero entries as $A$. This is obtained by discarding any fill elements (nonzero elements of $M$ arising during the factorization in locations where $A$ has zero elements). Allowing some of these fill elements to be kept rather than discarded generally increases the accuracy of the factorization at the expense of some loss of sparsity. For further details see Barrett et al. (1994).
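The zero-fill idea can be illustrated on a dense working copy of a small invented matrix: elimination updates are applied only where $A$ itself is nonzero, so fill is never created. This is a toy ILU(0) sketch without pivoting or permutations, not the algorithm of f11dac:

```c
#include <stdio.h>
#include <stdbool.h>

#define N 4

/* Zero-fill incomplete LU, ILU(0): a dense working copy of A is factorized,
 * but elimination updates are applied only at positions where A itself is
 * nonzero.  On exit the strictly lower part of a holds L (unit diagonal
 * implied) and the upper part holds U. */
static void ilu0(double a[N][N], bool pattern[N][N])
{
    for (int k = 0; k < N - 1; k++)
        for (int i = k + 1; i < N; i++) {
            if (!pattern[i][k]) continue;   /* multiplier only on pattern   */
            a[i][k] /= a[k][k];
            for (int j = k + 1; j < N; j++)
                if (pattern[i][j])          /* fill off the pattern is lost */
                    a[i][j] -= a[i][k] * a[k][j];
        }
}

int main(void)
{
    double a[N][N] = {
        { 4, 1, 0, 0 },
        { 1, 4, 0, 1 },
        { 0, 0, 4, 1 },
        { 0, 1, 1, 4 } };
    bool pattern[N][N];

    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            pattern[i][j] = (a[i][j] != 0.0);

    ilu0(a, pattern);
    printf("L(2,1) = %g, U(2,2) = %g\n", a[1][0], a[1][1]);
    return 0;
}
```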
2.5 Iterative Methods for Real Symmetric and Complex Hermitian Linear Systems
Three of the best known iterative methods applicable to real symmetric and complex Hermitian linear systems are the conjugate gradient (CG) method (see Hestenes and Stiefel (1952) and Golub and Van Loan (1996)) and Lanczos-type methods based on SYMMLQ and MINRES (see Paige and Saunders (1975)).
The description of these methods given below is for the real symmetric cases. The generalization to complex Hermitian matrices is straightforward.
For the CG method the matrix $A$ should ideally be positive definite. The application of CG to indefinite matrices may lead to failure, or to lack of convergence. The SYMMLQ and MINRES methods are suitable for both positive definite and indefinite symmetric matrices. They are more robust than CG, but less efficient when $A$ is positive definite.
The methods start from the residual $r_0 = b - Ax_0$, where $x_0$ is an initial estimate for the solution (often $x_0 = 0$), and generate an orthogonal basis for the Krylov subspace $K_k(A, r_0)$, for $k = 1, 2, \ldots$, by means of three-term recurrence relations (see Golub and Van Loan (1996)). A sequence of symmetric tridiagonal matrices $T_k$ is also generated. Here and in the following, the index $k$ denotes the iteration count. The resulting symmetric tridiagonal systems of equations are usually more easily solved than the original problem. A sequence of solution iterates $x_k$ is thus generated such that the sequence of the norms of the residuals $\|r_k\|$ converges to a required tolerance. Note that, in general, the convergence is not monotonic.
In exact arithmetic, after $n$ iterations, this process is equivalent to an orthogonal reduction of $A$ to symmetric tridiagonal form, $T_n = Q^T A Q$; the solution $x_n$ would thus achieve exact convergence. In finite-precision arithmetic, cancellation and round-off errors accumulate causing loss of orthogonality. These methods must therefore be viewed as genuinely iterative methods, able to converge to a solution within a prescribed tolerance.
The orthogonal basis is not formed explicitly in either method. The basic difference between the methods lies in the way the resulting symmetric tridiagonal systems of equations are solved: the CG method is equivalent to carrying out an $LDL^T$ (Cholesky) factorization, whereas the Lanczos method (SYMMLQ) uses an $LQ$ factorization. The MINRES method, on the other hand, minimizes the residual in the 2-norm.
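For orientation, here is a textbook conjugate gradient iteration (Hestenes and Stiefel (1952)) applied to a small invented symmetric positive definite matrix; it is a plain unpreconditioned sketch, not the Library's implementation:

```c
#include <math.h>
#include <stdio.h>

#define N 4

static const double A[N][N] = {
    { 4, 1, 0, 0 }, { 1, 4, 1, 0 }, { 0, 1, 4, 1 }, { 0, 0, 1, 4 } };

static void matvec(const double *x, double *y)
{
    for (int i = 0; i < N; i++) {
        y[i] = 0.0;
        for (int j = 0; j < N; j++) y[i] += A[i][j] * x[j];
    }
}

static double dot(const double *u, const double *v)
{
    double s = 0.0;
    for (int i = 0; i < N; i++) s += u[i] * v[i];
    return s;
}

int main(void)
{
    double b[N] = { 1, 2, 2, 1 };
    double x[N] = { 0, 0, 0, 0 };        /* x0 = 0, so r0 = b */
    double r[N], p[N], Ap[N];

    for (int i = 0; i < N; i++) r[i] = p[i] = b[i];
    double rr = dot(r, r);

    for (int it = 0; it < N && sqrt(rr) > 1e-12; it++) {
        matvec(p, Ap);
        double alpha = rr / dot(p, Ap);             /* step length      */
        for (int i = 0; i < N; i++) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        double rr_new = dot(r, r);
        double beta = rr_new / rr;                  /* direction update */
        for (int i = 0; i < N; i++) p[i] = r[i] + beta * p[i];
        rr = rr_new;
    }
    printf("x = (%g, %g, %g, %g)\n", x[0], x[1], x[2], x[3]);
    return 0;
}
```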
A preconditioner for these methods must be symmetric and positive definite, i.e., representable by $M = EE^T$, where $E$ is nonsingular, and such that $\bar{A} = E^{-1}AE^{-T} \approx I_n$ in (2), where $I_n$ is the identity matrix of order $n$. These are formal definitions, used only in the design of the algorithms; in practice, only the means to compute the matrix-vector products $v = Au$ and to solve the preconditioning equations $Mv = u$ are required.
Preconditioning matrices $M$ are typically based on incomplete factorizations (see Meijerink and Van der Vorst (1977)), or on the approximate inverses occurring in stationary iterative methods (see Young (1971)). A common example is the incomplete Cholesky factorization
$A = PLDL^TP^T + R$,
where $P$ is a permutation matrix, $L$ is lower triangular with unit diagonal elements, $D$ is diagonal and $R$ is a remainder matrix; the preconditioner is taken as $M = PLDL^TP^T$. A zero-fill incomplete Cholesky factorization is one for which the matrix $M$ has the same pattern of nonzero entries as $A$. This is obtained by discarding any fill elements (nonzero elements of $M$ arising during the factorization in locations where $A$ has zero elements). Allowing some of these fill elements to be kept rather than discarded generally increases the accuracy of the factorization at the expense of some loss of sparsity. For further details see Barrett et al. (1994).
3 Recommendations on Choice and Use of Available Functions
3.1 Types of Function Available
The direct method functions available in this chapter largely follow the LAPACK scheme, in that four different functions separately handle the tasks of factorizing, solving, refining, and estimating the condition number. See Section 3.4.
The iterative method functions available in this chapter divide essentially into three types: basic functions, utility functions and Black Box functions.
Basic functions are grouped in suites of three, and implement the underlying iterative method. Each suite comprises a setup function, a solver, and a function to return additional information. The solver function is independent of the matrix storage format (indeed the matrix need not be stored at all) and of the type of preconditioner. It uses reverse communication (see Section 7 in How to Use the NAG Library for further information), i.e., it returns repeatedly to the calling program with the argument irevcm set to specified values which require the calling program to carry out a specific task (either to compute a matrix-vector product or to solve the preconditioning equation), to signal the completion of the computation, or to allow the calling program to monitor the solution; a skeleton of such a driver loop is sketched after the list below. Reverse communication has the following advantages.
(i) Maximum flexibility in the representation and storage of sparse matrices. All matrix operations are performed outside the solver function, thereby avoiding the need for a complicated interface with enough flexibility to cope with all types of storage schemes and sparsity patterns. This also applies to preconditioners.
(ii) Enhanced user interaction: you can closely monitor the solution, and tidy or immediate termination can be requested. This is useful, for example, when alternative termination criteria are to be employed, or in case of failure of the external functions used to perform matrix operations.
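The driver loop referred to above has the following general shape. The solver name, helper routines and task codes below are placeholders invented for this sketch; the actual irevcm values and calling sequences are defined in the individual function documents:

```c
/* Sketch of a reverse-communication driver loop.  All names and task
 * codes here are hypothetical; consult the function document (e.g.,
 * f11bec) for the actual irevcm protocol. */
enum { TASK_DONE = 0, TASK_MATVEC = 1, TASK_PRECOND = 2, TASK_MONITOR = 3 };

extern void rc_solver(int *irevcm, double *u, double *v, int n); /* hypothetical  */
extern void my_matvec(const double *u, double *v, int n);        /* v = A*u       */
extern void my_precond(const double *u, double *v, int n);       /* solve M*v = u */
extern void my_monitor(const double *u, int n);                  /* inspect iterate */

void drive(double *u, double *v, int n)
{
    int irevcm = 0;                       /* 0 requests initialization   */
    do {
        rc_solver(&irevcm, u, v, n);      /* returns with a task request */
        switch (irevcm) {
        case TASK_MATVEC:  my_matvec(u, v, n);  break;
        case TASK_PRECOND: my_precond(u, v, n); break;
        case TASK_MONITOR: my_monitor(u, n);    break;
        }
    } while (irevcm != TASK_DONE);
}
```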
At present there are suites of basic functions for real symmetric and nonsymmetric systems, and for complex Hermitian and non-Hermitian systems.
Utility functions perform such tasks as initializing the preconditioning matrix $M$, solving linear systems involving $M$, or computing matrix-vector products, for particular preconditioners and matrix storage formats. Used in combination, basic functions and utility functions therefore provide iterative methods with a considerable degree of flexibility, allowing you to select from different termination criteria, monitor the approximate solution, and compute various diagnostic parameters. The tasks of computing the matrix-vector products and dealing with the preconditioner are removed from you, but at the expense of sacrificing some flexibility in the choice of preconditioner and matrix storage format.
Black Box functions call basic and utility functions in order to provide easy-to-use functions for particular preconditioners and sparse matrix storage formats. They are much less flexible than the basic functions, but do not use reverse communication, and may be suitable in many simple cases.
The structure of this chapter has been designed to cater for as many types of application as possible. If a Black Box function exists which is suitable for a given application you are recommended to use it. If you then decide you need some additional flexibility it is easy to achieve this by using basic and utility functions which reproduce the algorithm used in the Black Box, but allow more access to algorithmic control parameters and monitoring. If you wish to use a preconditioner or storage format for which no utility functions are provided, you must call basic functions, and provide your own utility functions.
3.2 Iterative Methods for Real Nonsymmetric and Complex Non-Hermitian Linear Systems
The suite of basic functions f11bdc, f11bec and f11bfc implements either RGMRES, CGS, Bi-CGSTAB($\ell$), or TFQMR, for the iterative solution of the real sparse nonsymmetric linear system $Ax = b$. These functions allow a choice of termination criteria and the norms used in them, allow monitoring of the approximate solution, and can return estimates of the norm of $A$ and the largest singular value of the preconditioned matrix $\bar{A}$.
In general, it is not possible to recommend one of these methods (RGMRES, CGS, Bi-CGSTAB($\ell$), or TFQMR) in preference to another. RGMRES is popular, but requires the most storage, and can easily stagnate when the size $m$ of the orthogonal basis is too small, or when the preconditioner is not good enough. CGS can be the fastest method, but the computed residuals can exhibit instability which may greatly affect the convergence and quality of the solution. Bi-CGSTAB($\ell$) seems robust and reliable, but it can be slower than the other methods. TFQMR can be viewed as a more robust variant of the CGS method: it shares the CGS method's speed but avoids the CGS fluctuations in the residual, which may give rise to instability. Some further discussion of the relative merits of these methods can be found in Barrett et al. (1994).
The utility functions provided for real nonsymmetric matrices use the coordinate storage (CS) format described in Section 2.1.1. f11dac computes a preconditioning matrix based on incomplete $LU$ factorization, and f11dbc solves linear systems involving the preconditioner generated by f11dac. The amount of fill-in occurring in the incomplete factorization can be controlled by specifying either the level of fill, or the drop tolerance. Partial or complete pivoting may optionally be employed, and the factorization can be modified to preserve row-sums.
f11dfc is a generalization of f11dac. It computes incomplete $LU$ factorizations on a set of (possibly overlapping) block diagonal matrices, using a prescribed block structure, to provide a block Jacobi or additive Schwarz preconditioner. To solve the linear system defined by the preconditioner generated by f11dfc, a sequence of calls to f11dbc (one for each block) would be required.
f11ddc is similar to f11dbc, but solves linear systems involving the preconditioner corresponding to symmetric successive-over-relaxation (SSOR). The value of the relaxation parameter $\omega$ must currently be supplied by you. Automatic procedures for choosing $\omega$ will be included in the chapter at a future mark.
f11dkc applies the iterated Jacobi method to a symmetric or nonsymmetric system of linear equations and can be used as a preconditioner. However, the domain of validity of the Jacobi method is rather restricted; you should read the function document for f11dkc before using it.
f11xac computes matrix-vector products for real nonsymmetric matrices stored in ordered CS format. An additional utility function, f11zac, orders the nonzero elements of a real sparse nonsymmetric matrix stored in general CS format. The same function can be used to convert a matrix from CS format to CCS format. For more general rectangular matrices, the utility function f11zcc should be used.
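A matrix-vector product in CS format reduces to a single pass over the nonzero elements, as the following hypothetical sketch (not the implementation of f11xac) shows:

```c
/* y = A*x for a matrix held in coordinate storage (CS) format.
 * a[k] holds the nonzero A(irow[k], icol[k]); indices are 1-based. */
void cs_matvec(int n, int nnz, const double a[],
               const int irow[], const int icol[],
               const double x[], double y[])
{
    for (int i = 0; i < n; i++)
        y[i] = 0.0;
    for (int k = 0; k < nnz; k++)
        y[irow[k] - 1] += a[k] * x[icol[k] - 1];
}
```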
The Black Box function f11dcc makes calls to f11bdc, f11bec, f11bfc, f11dbc and f11xac, to solve a real sparse nonsymmetric linear system, represented in CS format, using RGMRES, CGS, Bi-CGSTAB($\ell$), or TFQMR, with incomplete $LU$ preconditioning. f11dec is similar, but has options for no preconditioning, Jacobi preconditioning or SSOR preconditioning. f11dgc is also similar to f11dcc, but uses block Jacobi or additive Schwarz preconditioning.
For complex non-Hermitian sparse matrices there is an equivalent suite of functions. f11brc, f11bsc and f11btc are the basic functions which implement the same methods used for real nonsymmetric systems, namely RGMRES, CGS, Bi-CGSTAB($\ell$) and TFQMR, for the solution of complex sparse non-Hermitian linear systems. f11dnc and f11dpc are the complex equivalents of f11dac and f11dbc, respectively, providing facilities for implementing ILU preconditioning. f11drc and f11dtc implement complex versions of the SSOR and block Jacobi (or additive Schwarz) preconditioners, respectively. f11dxc implements a complex version of the iterated Jacobi preconditioner. Utility functions f11xnc and f11znc are provided for computing matrix-vector products and sorting the elements of complex sparse non-Hermitian matrices, respectively. Finally, the Black Box functions f11dqc, f11dsc and f11duc are complex equivalents of f11dcc, f11dec and f11dgc, respectively.
3.3 Iterative Methods for Real Symmetric and Complex Hermitian Linear Systems
The suite of basic functions f11gdc, f11gec and f11gfc implements either the conjugate gradient (CG) method, or a Lanczos method based on SYMMLQ, for the iterative solution of the real sparse symmetric linear system $Ax = b$. If $A$ is known to be positive definite the CG method should be chosen; the Lanczos method is more robust but less efficient for positive definite matrices. These functions allow a choice of termination criteria and the norms used in them, allow monitoring of the approximate solution, and can return estimates of the norm of $A$ and the largest singular value of the preconditioned matrix $\bar{A}$.
The utility functions provided for real symmetric matrices use the symmetric coordinate storage (SCS) format described in Section 2.1.2. f11jac computes a preconditioning matrix based on incomplete Cholesky factorization, and f11jbc solves linear systems involving the preconditioner generated by f11jac. The amount of fill-in occurring in the incomplete factorization can be controlled by specifying either the level of fill, or the drop tolerance. Diagonal Markowitz pivoting may optionally be employed, and the factorization can be modified to preserve row-sums. Additionally, the utility function f11yec can be used to discover a row and column permutation that reduces the bandwidth of $A$.
f11jdc is similar to f11jbc, but solves linear systems involving the preconditioner corresponding to symmetric successive-over-relaxation (SSOR). The value of the relaxation parameter $\omega$ must currently be supplied by you. Automatic procedures for choosing $\omega$ will be included in the chapter at a future mark.
f11dkc applies the iterated Jacobi method to a symmetric or nonsymmetric system of linear equations and can be used as a preconditioner. However, the domain of validity of the Jacobi method is rather restricted; you should read the function document for f11dkc before using it.
f11xec computes matrix-vector products for real symmetric matrices stored in ordered SCS format. An additional utility function f11zbc orders the nonzero elements of a real sparse symmetric matrix stored in general SCS format.
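The symmetric format halves the storage, but each off-diagonal entry then contributes twice to a matrix-vector product, as in this hypothetical sketch (not f11xec itself):

```c
/* y = A*x for a symmetric matrix held in SCS format (lower triangle only).
 * Each off-diagonal entry contributes both A(i,j)*x[j] to y[i] and
 * A(j,i)*x[i] to y[j].  Indices are 1-based. */
void scs_matvec(int n, int nnz, const double a[],
                const int irow[], const int icol[],
                const double x[], double y[])
{
    for (int i = 0; i < n; i++)
        y[i] = 0.0;
    for (int k = 0; k < nnz; k++) {
        int i = irow[k] - 1, j = icol[k] - 1;
        y[i] += a[k] * x[j];
        if (i != j)
            y[j] += a[k] * x[i];   /* mirrored upper-triangle contribution */
    }
}
```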
The Black Box function f11jcc makes calls to f11gdc, f11gec, f11gfc, f11jbc and f11xec, to solve a real sparse symmetric linear system, represented in SCS format, using a conjugate gradient or Lanczos method, with incomplete Cholesky preconditioning. f11jec is similar, but has options for no preconditioning, Jacobi preconditioning or SSOR preconditioning.
For complex Hermitian sparse matrices there is an equivalent suite of functions. f11grc, f11gsc and f11gtc are the basic functions which implement the same methods used for real symmetric systems, namely CG and SYMMLQ, for the solution of complex sparse Hermitian linear systems. f11jnc and f11jpc are the complex equivalents of f11jac and f11jbc, respectively, providing facilities for implementing incomplete Cholesky preconditioning. f11jrc implements a complex version of the SSOR preconditioner. f11dxc implements a complex version of the iterated Jacobi preconditioner. Utility functions f11xsc and f11zpc are provided for computing matrix-vector products and sorting the elements of complex sparse Hermitian matrices, respectively. Finally, the Black Box functions f11jqc and f11jsc provide easy-to-use implementations of the CG and SYMMLQ methods for complex Hermitian linear systems.
3.4 Direct Methods
The suite of functions f11mdc, f11mec, f11mfc, f11mgc, f11mhc, f11mkc, f11mlc and f11mmc implements the COLAMD/SuperLU direct real sparse solver and associated utilities. You are expected first to call f11mdc to compute a suitable column permutation for the subsequent $LU$ factorization by f11mec. f11mfc then solves the system of equations. A solution can be further refined by f11mhc, which also minimizes the backward error and estimates a bound for the forward error in the solution. Diagnostics are provided by f11mgc, which computes an estimate of the condition number of the matrix using the factorization output by f11mec, and f11mmc, which computes the reciprocal pivot growth (a numerical stability measure) of the factorization. The two utility functions f11mkc, which computes matrix-matrix products in the particular storage scheme demanded by the suite (CCS format), and f11mlc, which computes quantities relating to norms of a matrix in that storage scheme, complete the suite.
Another way of computing a direct solution is to choose specific arguments for the indirect solvers. For example, function f11dbc solves a linear system involving the incomplete $LU$ preconditioning matrix
$M = PLDUQ$
generated by f11dac, where $A = M + R$, $P$ and $Q$ are permutation matrices, $L$ is lower triangular with unit diagonal elements, $U$ is upper triangular with unit diagonal elements, $D$ is diagonal and $R$ is a remainder matrix.
If $A$ is nonsingular, a call to f11dac with lfill $< 0$ and dtol $= 0.0$ results in a zero remainder matrix $R$ and a complete factorization. A subsequent call to f11dbc will therefore result in a direct method for real sparse nonsymmetric systems.
If $A$ is known to be symmetric positive definite, f11jac and f11jbc may similarly be used to give a direct solution. For further details see Section 9.4 in f11jac.
Complex non-Hermitian systems can be solved directly in the same way using f11dnc and f11dpc, while for complex Hermitian systems f11jnc and f11jpc may be used.
Some other functions specifically designed for the direct solution of sparse linear systems can currently be found in Chapters F01, F04 and F07; these include, in particular, functions for the direct solution of symmetric positive definite systems.
6 Auxiliary Functions Associated with Library Function Arguments
None.
7 Withdrawn or Deprecated Functions
None.
8 References
Barrett R, Berry M, Chan T F, Demmel J, Donato J, Dongarra J, Eijkhout V, Pozo R, Romine C and Van der Vorst H (1994) Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods SIAM, Philadelphia
Demmel J W, Eisenstat S C, Gilbert J R, Li X S and Liu J W H (1999) A supernodal approach to sparse partial pivoting SIAM J. Matrix Anal. Appl. 20 720–755
Duff I S, Erisman A M and Reid J K (1986) Direct Methods for Sparse Matrices Oxford University Press, London
Freund R W (1993) A transpose-free quasi-minimal residual algorithm for non-Hermitian linear systems SIAM J. Sci. Comput. 14 470–482
Freund R W and Nachtigal N (1991) QMR: a Quasi-Minimal Residual Method for Non-Hermitian Linear Systems Numer. Math. 60 315–339
Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
Hestenes M and Stiefel E (1952) Methods of conjugate gradients for solving linear systems J. Res. Nat. Bur. Stand. 49 409–436
Meijerink J and Van der Vorst H (1977) An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix Math. Comput. 31 148–162
Meijerink J and Van der Vorst H (1981) Guidelines for the usage of incomplete decompositions in solving sets of linear equations as they occur in practical problems J. Comput. Phys. 44 134–155
Paige C C and Saunders M A (1975) Solution of sparse indefinite systems of linear equations SIAM J. Numer. Anal. 12 617–629
Saad Y and Schultz M (1986) GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems SIAM J. Sci. Statist. Comput. 7 856–869
Sleijpen G L G and Fokkema D R (1993) BiCGSTAB($\ell$) for linear equations involving matrices with complex spectrum ETNA 1 11–32
Sonneveld P (1989) CGS, a fast Lanczos-type solver for nonsymmetric linear systems SIAM J. Sci. Statist. Comput. 10 36–52
Van der Vorst H (1989) Bi-CGSTAB, a fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems SIAM J. Sci. Statist. Comput. 13 631–644
Young D (1971) Iterative Solution of Large Linear Systems Academic Press, New York