NAG FL Interface
F10 (Rnla) Chapter Introduction
Randomized Numerical Linear Algebra
1 Scope of the Chapter
This chapter covers linear algebra routines that make use of random projections to reduce problem dimension. This area is referred to as RNLA (Randomized Numerical Linear Algebra). The routines can be split into the following categories:
- building blocks that are intended to be used as components in RNLA algorithms written by you;
- RNLA algorithms for matrix factorization.
It is envisaged that users of the higher-level routines, such as matrix factorization, will have some background in linear algebra. A common use case would be that you have tried solving your problem using a deterministic linear algebra routine, e.g., an LAPACK routine from Chapter F08, and are in need of a routine that is more computationally efficient.
Users of the building block routines would be expected to have some familiarity with the RNLA literature, i.e., a higher level of expertise.
2 Background to the Problems
2.1 Building Blocks
RNLA algorithms use random projections in order to reduce problem dimension. If $A$ is an $m \times n$ matrix, the random projection of $A$ is either
$A\Omega$,
where $\Omega$ is a real $n \times k$ matrix and $A\Omega$ is an $m \times k$ matrix, or
$\Omega A$,
where $\Omega$ is a $k \times m$ matrix.
Random projections can be classified as slow or fast. Slow random projections have the same computational cost as a standard matrix-matrix multiplication, i.e., $O(mnk)$ operations. Fast random projections have a lower computational cost, typically $O(mn\log k)$ operations; this is achieved by using structured transformations such as the Discrete Cosine Transform (DCT).
Random projections can be real-valued (e.g., a randomized DCT) or complex-valued (e.g., a randomized Fast Fourier Transform (FFT)).
Currently the chapter contains one random projection routine that is fast and real-valued,
f10daf.
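As a concrete illustration of the two kinds of projection, the following NumPy/SciPy sketch forms a slow Gaussian projection and a fast DCT-based projection of the same matrix. This is not the NAG interface; the matrix sizes, variable names and rescaling are assumptions made for this example.

    # Illustrative sketch only: slow vs. fast random projection of an m-by-n
    # matrix A down to k columns.
    import numpy as np
    from scipy.fft import dct

    rng = np.random.default_rng(0)
    m, n, k = 500, 200, 20
    A = rng.standard_normal((m, n))

    # Slow projection: dense Gaussian test matrix, cost O(m*n*k).
    Omega = rng.standard_normal((n, k))
    Y_slow = A @ Omega                      # m-by-k sample matrix

    # Fast projection in the spirit of a subsampled randomized DCT: random
    # column signs, an orthonormal DCT along each row (cost O(m*n*log n)),
    # then k sampled columns, rescaled.
    signs = rng.choice([-1.0, 1.0], size=n)
    cols = rng.choice(n, size=k, replace=False)
    Y_fast = dct(A * signs, norm="ortho", axis=1)[:, cols] * np.sqrt(n / k)

Both Y_slow and Y_fast are $m \times k$ matrices whose columns, with high probability, capture the dominant range of $A$; the fast variant never forms or applies a dense $n \times k$ test matrix.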
2.2 Matrix Factorization
f10caf is provided for factorizing a general rectangular $m \times n$ matrix $A$.
The Singular Value Decomposition (SVD) is written as
$A = U \Sigma V^{T}$,
where $\Sigma$ is an $m \times n$ matrix which is zero except for its $\min(m,n)$ diagonal elements, $U$ is an $m \times m$ orthogonal matrix, and $V$ is an $n \times n$ orthogonal matrix. The diagonal elements of $\Sigma$ are the singular values of $A$; they are real and non-negative, and are returned in descending order. The first $\min(m,n)$ columns of $U$ and $V$ are the left and right singular vectors of $A$, respectively.
If the numerical rank of $A$ is $r$, then $\Sigma$ has $r$ nonzero elements, and only $r$ columns of $U$ and $V$ are well-defined. In this case we can reduce $\Sigma$ to an $r \times r$ matrix, $U$ to an $m \times r$ matrix, and $V$ to an $n \times r$ matrix.
RNLA algorithms can be used to compute the SVD more efficiently than deterministic alternatives such as f08kbf. Two scenarios where randomized SVD outperforms deterministic SVD are: (i) low-rank problems, where randomized SVD can obtain results accurate to machine precision in less time than deterministic SVD; and (ii) problems where only a low-rank approximation of the matrix is needed.
One strategy used by some RNLA algorithms is to obtain, in the $A\Omega$ case, the rows of the matrix $A$ that span the projection space of $A\Omega$; this is known as row extraction.
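To give a feel for how such algorithms are structured, the sketch below implements the basic Gaussian range-finder form of randomized SVD in NumPy. It is not the f10caf interface and does not use row extraction; the function name and argument conventions are assumptions made for this example.

    # Illustrative sketch only: randomized SVD via a Gaussian range finder.
    import numpy as np

    def randomized_svd_sketch(A, k, seed=None):
        """Approximate the k leading singular triplets of A."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        Y = A @ rng.standard_normal((n, k))   # random projection: sample the range of A
        Q, _ = np.linalg.qr(Y)                # orthonormal basis of the sample, m-by-k
        B = Q.T @ A                           # small k-by-n matrix
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        return Q @ Ub, s, Vt                  # approximate U (m-by-k), singular values, V^T

The only operations involving the full matrix are the two multiplications with $A$; everything else works on $k$-dimensional quantities, which is where the saving over a deterministic SVD such as f08kbf comes from.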
3 Recommendations on Choice and Use of Available Routines
random projection with DCT (real)    f10daf
SVD via row extraction (real)        f10caf
3.1 Initializing the seed for random number generators
The routines in this chapter contain an input argument called state. This argument is used to generate random variables. When state is initialized by a call to g05kff the random numbers are repeatable. When state is initialized by a call to g05kgf the random numbers are non-repeatable. Further details on random number generators in the NAG Library can be found in Chapter G05.
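By way of analogy only (this uses NumPy seeding, not the g05kff/g05kgf interfaces), repeatable and non-repeatable streams differ as follows:

    import numpy as np

    repeatable = np.random.default_rng(seed=42)   # same stream on every run
    non_repeatable = np.random.default_rng()      # seeded from system entropy
    print(repeatable.standard_normal(3))          # identical across runs
    print(non_repeatable.standard_normal(3))      # differs from run to run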
3.2 Choosing an appropriate value for the size of the random projection
The routines in this chapter contain an input argument, k, that determines the size of the random projection. For example, if the random projection is done by post-multiplication ($A\Omega$), k is the number of columns in $\Omega$.
Increasing k increases both the accuracy and the computational cost of the RNLA routines. For example, if the numerical rank of a matrix, $A$, is $r$, it is possible to obtain an SVD that is accurate to within machine precision by setting k slightly bigger than $r$. For problems where a low-rank approximation to $A$ is sufficient, k can be set to the required rank.
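A small experiment illustrating this trade-off, using scikit-learn's randomized_svd rather than the NAG routine; the test matrix and sizes are assumptions made for this example.

    # Accuracy of a randomized SVD as the requested rank k passes the numerical
    # rank r = 10 of the test matrix.
    import numpy as np
    from sklearn.utils.extmath import randomized_svd

    rng = np.random.default_rng(0)
    A = rng.standard_normal((300, 10)) @ rng.standard_normal((10, 150))  # rank 10

    for k in (5, 10, 15, 20):
        U, s, Vt = randomized_svd(A, n_components=k, random_state=0)
        err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
        print(f"k = {k:2d}  relative error = {err:.1e}")

Once k reaches the numerical rank the error drops to around machine precision, and larger values of k only add computational cost. Note that scikit-learn additionally oversamples internally (its n_oversamples argument), which plays the same role as choosing k slightly larger than $r$.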
4 Auxiliary Routines Associated with Library Routine Arguments
None.
5 Withdrawn or Deprecated Routines
None.
6 References
None.