NAG CL Interface
F10 (Rnla)
Randomized Numerical Linear Algebra

1 Scope of the Chapter

This chapter covers linear algebra functions that make use of random projections to reduce problem dimension. This area is referred to as RNLA (Randomized Numerical Linear Algebra). The functions can be split into two categories: building block functions, intended as components of RNLA algorithms that you construct yourself, and higher-level functions, such as matrix factorization, that implement complete RNLA algorithms.
It is envisaged that users of the higher-level functions, such as matrix factorization, will have some background in linear algebra. A common use case is that you have tried solving your problem using a deterministic linear algebra function, e.g., an LAPACK function from Chapter F08, and need a function that is more computationally efficient.
Users of the building block functions would be expected to have some familiarity with the RNLA literature, i.e., a higher level of expertise.

2 Background to the Problems

2.1 Building Blocks

RNLA algorithms use random projections in order to reduce problem dimension. If A is an m by n matrix, the random projection of A is either,
Y=AΩ  
where A is a real m by n matrix and Ω is an n by k matrix, or
Y=ΩA  
where Ω is a k by m matrix.
Random projections can be classified as slow or fast. Slow random projections have the same computational cost as a standard matrix multiplication, i.e., O(mnk) operations. Fast random projections have a lower computational cost, typically O(mn log k) operations. This is achieved by use of structured transformations such as the Discrete Cosine Transform (DCT).
Random projections can be real-valued (e.g., a randomized DCT) or complex-valued (e.g., a randomized Fast Fourier Transform (FFT)).
Currently the chapter contains one random projection function that is fast and real-valued, f10dac.
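To make the shapes and the cost concrete, the following small C program is an illustrative sketch of a slow, real-valued random projection: it forms Y = AΩ by an explicit triple loop, which is the O(mnk) cost referred to above. It is not a model of f10dac, which uses a structured DCT-based transform to achieve the lower O(mn log k) cost; the matrix sizes, the use of rand() and all variable names are purely illustrative.

#include <stdio.h>
#include <stdlib.h>

/* Slow random projection Y = A*Omega of an m by n matrix A down to
   m by k columns, using an unstructured random Omega.  The cost is
   O(mnk); a fast projection such as the DCT-based one in f10dac
   replaces this dense multiplication with a structured transform.   */
int main(void)
{
    enum { M = 6, N = 5, K = 2 };            /* illustrative sizes   */
    double A[M * N], Omega[N * K], Y[M * K]; /* row-major storage    */
    int i, j, l;

    srand(1);                                /* illustrative seeding */
    for (i = 0; i < M * N; i++) A[i] = (double)rand() / RAND_MAX - 0.5;
    for (i = 0; i < N * K; i++) Omega[i] = (double)rand() / RAND_MAX - 0.5;

    /* Y = A * Omega : the projected matrix has only K columns */
    for (i = 0; i < M; i++)
        for (j = 0; j < K; j++) {
            double sum = 0.0;
            for (l = 0; l < N; l++)
                sum += A[i * N + l] * Omega[l * K + j];
            Y[i * K + j] = sum;
        }

    for (i = 0; i < M; i++) {
        for (j = 0; j < K; j++) printf(" %10.6f", Y[i * K + j]);
        printf("\n");
    }
    return 0;
}

The pre-multiplication form Y = ΩA is analogous, with Ω a k by m matrix and Y a k by n matrix.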

2.2 Matrix Factorization

f10cac is provided for factorizing a general rectangular m by n matrix A.
The Singular Value Decomposition (SVD) is written as
A = UΣV^T,
where Σ is an m by n matrix which is zero except for its min(m,n) diagonal elements, U is an m by m orthogonal matrix, and V is an n by n orthogonal matrix. The diagonal elements of Σ are the singular values of A; they are real and non-negative, and are returned in descending order. The first min(m,n) columns of U and V are the left and right singular vectors of A, respectively.
If the numerical rank of A is r < min(m,n) then Σ has r nonzero elements, and only r columns of U and V are well-defined. In this case we can reduce Σ to an r by r matrix, U to an m by r matrix, and V^T to an r by n matrix.
RNLA algorithms can be used to compute the SVD more efficiently than deterministic alternatives such as f08kbc. Two scenarios where randomized SVD outperforms deterministic SVD are: (i) low-rank problems, where randomized SVD can obtain machine-precision results in less time than deterministic SVD; and (ii) problems where only a low-rank approximation of the matrix is needed.
One strategy used by some RNLA algorithms is to obtain, in the m > n case, the rows of the matrix A that span the projection space of A^T; this is known as row extraction.
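As a minimal sketch of how a random projection reduces the size of the deterministic SVD, the C program below follows a generic scheme from the RNLA literature: sketch A with a random test matrix, orthonormalize the sketch, then take the SVD of the small projected matrix. It is not the row-extraction algorithm used by f10cac, and it assumes that CBLAS and LAPACKE headers and libraries are available (the header names and link flags depend on your BLAS/LAPACK installation); all sizes and names are illustrative, with the sketch size k = r + 5 following the oversampling advice of Section 3.2.

#include <stdio.h>
#include <stdlib.h>
#include <cblas.h>
#include <lapacke.h>

/* Generic randomized SVD sketch (not the f10cac algorithm): project A
   down to K columns, orthonormalize the projection, and take the SVD
   of the small K by N matrix B = Q^T A.  The leading R singular
   values of B approximate those of A.                                */
int main(void)
{
    enum { M = 100, N = 80, R = 5, K = R + 5 };  /* rank R, sketch K  */
    double *U0 = malloc(M * R * sizeof(double)); /* factors of a      */
    double *V0 = malloc(R * N * sizeof(double)); /* rank-R test matrix */
    double *A = malloc(M * N * sizeof(double));
    double *Omega = malloc(N * K * sizeof(double)); /* random test matrix */
    double *Y = malloc(M * K * sizeof(double));     /* sketch Y = A*Omega */
    double *B = malloc(K * N * sizeof(double));     /* small matrix Q^T*A */
    double tau[K], s[K], superb[K];
    int i;

    srand(7);
    for (i = 0; i < M * R; i++) U0[i] = (double)rand() / RAND_MAX - 0.5;
    for (i = 0; i < R * N; i++) V0[i] = (double)rand() / RAND_MAX - 0.5;
    for (i = 0; i < N * K; i++) Omega[i] = (double)rand() / RAND_MAX - 0.5;

    /* A = U0*V0 has numerical rank R */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, M, N, R,
                1.0, U0, R, V0, N, 0.0, A, N);

    /* Sketch: Y = A*Omega (M by K) */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, M, K, N,
                1.0, A, N, Omega, K, 0.0, Y, K);

    /* Orthonormalize the sketch: Y <- Q from its QR factorization */
    LAPACKE_dgeqrf(LAPACK_ROW_MAJOR, M, K, Y, K, tau);
    LAPACKE_dorgqr(LAPACK_ROW_MAJOR, M, K, K, Y, K, tau);

    /* Small problem: B = Q^T * A (K by N), then its singular values */
    cblas_dgemm(CblasRowMajor, CblasTrans, CblasNoTrans, K, N, M,
                1.0, Y, K, A, N, 0.0, B, N);
    LAPACKE_dgesvd(LAPACK_ROW_MAJOR, 'N', 'N', K, N, B, N, s,
                   NULL, 1, NULL, 1, superb);

    /* The first R values approximate the singular values of A;
       the remaining K-R values are close to zero.                    */
    for (i = 0; i < K; i++) printf("sigma[%d] = %12.6e\n", i, s[i]);

    free(U0); free(V0); free(A); free(Omega); free(Y); free(B);
    return 0;
}

The deterministic SVD in the final step is applied to a K by N matrix rather than the original M by N matrix, which is where the computational saving comes from when the numerical rank is low.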

3 Recommendations on Choice and Use of Available Functions

Building blocks,
   random projection with DCT (double)   f10dac
Matrix factorization,
   SVD via row extraction (double)   f10cac

3.1 Initializing seed for random number generators

The functions in this chapter contain an input argument called state. This argument holds the state of the base random number generator used to generate the random projection. When state is initialized by a call to g05kfc, the random numbers are repeatable; when it is initialized by a call to g05kgc, they are non-repeatable. Further details on random number generators in the NAG Library can be found in Chapter G05.
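A minimal, hedged sketch of the repeatable case is shown below. It follows the usual NAG CL Interface pattern of querying the required length of state before allocating it, but the exact argument list and query convention should be checked against the g05kfc and g05kgc function documents; the seed value, the generator choice (Nag_Basic) and the variable names here are assumptions for illustration only.

#include <stdio.h>
#include <nag.h>

int main(void)
{
    /* Illustrative only: check the g05kfc document for the exact
       argument list and the state-length query convention.           */
    Integer lseed = 1, seed[1] = {1762543};   /* assumed seed value   */
    Integer lstate = 0, *state = 0;
    NagError fail;

    INIT_FAIL(fail);

    /* First call (assumed query convention): obtain the required
       length of the state array for the chosen base generator.       */
    lstate = -1;
    g05kfc(Nag_Basic, 0, seed, lseed, state, &lstate, &fail);

    state = NAG_ALLOC(lstate, Integer);

    /* Initialize the repeatable generator; passing this state to an
       F10 function (e.g., f10dac or f10cac) then gives repeatable
       results.  For non-repeatable results, initialize with g05kgc
       instead.                                                        */
    g05kfc(Nag_Basic, 0, seed, lseed, state, &lstate, &fail);
    if (fail.code != NE_NOERROR)
        printf("Error from g05kfc: %s\n", fail.message);

    NAG_FREE(state);
    return 0;
}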

3.2 Choosing an appropriate value for the size of the random projection

The functions in this chapter contain an input argument k that determines the size of the random projection. For example, if the random projection is done by post-multiplication, k is the number of columns in Y=AΩ.
Increasing k increases both the accuracy and the computational cost of RNLA functions. For example, if the numerical rank of a matrix, A, is r < min(m,n), it is possible to obtain an SVD that is accurate to within machine precision by setting k slightly larger than r (e.g., k = r + 5). For problems where a low-rank approximation to A is sufficient, k can be set to the required rank.

4 Auxiliary Functions Associated with Library Function Arguments

None.

5 Withdrawn or Deprecated Functions

None.

6 References

None.