NAG Toolbox: nag_eigen_withdraw_real_band_geneig (f02sd)

 Contents

    1  Purpose
    2  Syntax
    3  Description
    4  References
    5  Parameters
    6  Error Indicators and Warnings
    7  Accuracy
    8  Further Comments
    9  Example

Purpose

nag_eigen_real_band_geneig (f02sd) finds the eigenvector corresponding to a given real eigenvalue for the generalized problem Ax=λBx, or for the standard problem Ax=λx, where A and B are real band matrices.
Note: this function is scheduled to be withdrawn; please see f02sd in Advice on Replacement Calls for Withdrawn/Superseded Routines.

Syntax

[a, b, vec, d, ifail] = f02sd(ma1, mb1, a, b, sym, relep, rmu, d, 'n', n)
[a, b, vec, d, ifail] = nag_eigen_withdraw_real_band_geneig(ma1, mb1, a, b, sym, relep, rmu, d, 'n', n)

Description

Given an approximation μ to a real eigenvalue λ of the generalized eigenproblem Ax=λBx, nag_eigen_real_band_geneig (f02sd) attempts to compute the corresponding eigenvector by inverse iteration.
nag_eigen_real_band_geneig (f02sd) first computes lower and upper triangular factors, L and U, of A - μB, using Gaussian elimination with interchanges, and then solves the equation Ux = e, where e = (1,1,…,1)^T – this is the first half iteration.
There are then three possible courses of action, depending on the input value of d(1).
1. d(1) = 0.
This setting should be used if λ is an ill-conditioned eigenvalue (provided the matrix elements do not vary widely in order of magnitude). In this case it is essential to accept only a vector found after one half iteration, and μ must be a very good approximation to λ. If acceptable growth is achieved in the solution of Ux = e, then the normalized x is accepted as the eigenvector. If not, columns of an orthogonal matrix are tried in turn in place of e. If none of these give acceptable growth, the function fails, indicating that μ was not a sufficiently good approximation to λ.
2. d(1) > 0.
This setting should be used if μ is moderately close to an eigenvalue which is not ill-conditioned (provided the matrix elements do not differ widely in order of magnitude). If acceptable growth is achieved in the solution of Ux = e, the normalized x is accepted as the eigenvector. If not, inverse iteration is performed. Up to 30 iterations are allowed to achieve a vector and a correction to μ which together give acceptably small residuals.
3. d(1) < 0.
This setting should be used if the elements of A and B vary widely in order of magnitude. Inverse iteration is performed, but a different convergence criterion is used.
See Algorithmic Details for further details.
Note that the bandwidth of the matrix A must not be less than the bandwidth of B. If this is not so, either A must be filled out with zeros, or matrices A and B may be reversed and 1/μ supplied as an approximation to the eigenvalue 1/λ. Also it is assumed that A and B each have the same number of subdiagonals as superdiagonals. If this is not so, they must be filled out with zeros. If A and B are both symmetric, only the upper triangles need be supplied.
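As an aside, the following is a minimal sketch of the role-reversal advice above (it also applies to the ifail = 2 exit described later): assuming the band arrays a and b, the band counts ma1 and mb1, sym, relep and a nonzero approximation rmu have already been set up as described under Parameters, the reversed problem Bx = (1/λ)Ax can be solved and the eigenvalue of the original problem recovered from the reciprocal of the improved approximation.
 % Sketch only: reverse the roles of A and B when A has the smaller bandwidth
 % (requires mb1 >= ma1 and rmu ~= 0; variable names as in the Syntax above).
 d = zeros(30, 1);
 d(1) = 1;                                   % assume a well-conditioned eigenvalue
 [b, a, vec, d, ifail] = f02sd(mb1, ma1, b, a, sym, relep, 1/rmu, d);
 lambda = 1/(1/rmu + d(30));                 % improved eigenvalue of A*x = lambda*B*x
 % vec is the corresponding eigenvector of the original problem.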

References

Peters G and Wilkinson J H (1979) Inverse iteration, ill-conditioned equations and Newton's method SIAM Rev. 21 339–360
Wilkinson J H (1965) The Algebraic Eigenvalue Problem Oxford University Press, Oxford
Wilkinson J H (1972) Inverse iteration in theory and practice Symposia Mathematica Volume X 361–379 Istituto Nazionale di Alta Matematica, Monograf, Bologna
Wilkinson J H (1974) Notes on inverse iteration and ill-conditioned eigensystems Acta Univ. Carolin. Math. Phys. 1–2 173–177
Wilkinson J H (1979) Kronecker's canonical form and the QZ algorithm Linear Algebra Appl. 28 285–303

Parameters

Compulsory Input Parameters

1:     ma1 – int64/int32/nag_int scalar
The value mA+1, where mA is the number of nonzero lines on each side of the diagonal of A. Thus the total bandwidth of A is 2mA+1.
Constraint: 1 ≤ ma1 ≤ n.
2:     mb1 – int64/int32/nag_int scalar
If mb1 ≤ 0, B is assumed to be the unit matrix. Otherwise mb1 must specify the value mB+1, where mB is the number of nonzero lines on each side of the diagonal of B. Thus the total bandwidth of B is 2mB+1.
Constraint: mb1 ≤ ma1.
3:     a(lda,n) – double array
lda, the first dimension of the array, must satisfy the constraint lda ≥ 2×ma1-1.
The n by n band matrix A. The mA subdiagonals must be stored in the first mA rows of the array; the diagonal in the (mA+1)th row; and the mA superdiagonals in rows mA+2 to 2mA+1. Each row of the matrix must be stored in the corresponding column of the array. For example, if n=6 and mA=2 the storage scheme is:
  *    *    a31  a42  a53  a64
  *    a21  a32  a43  a54  a65
  a11  a22  a33  a44  a55  a66
  a12  a23  a34  a45  a56  *
  a13  a24  a35  a46  *    *
Elements of the array marked * need not be set. The following code assigns the matrix elements within the band to the correct elements of the array:
 for j=1:n
   for i=max(1,j-ma1+1):min(n,j+ma1-1)
     % row i of the matrix is stored in column i of the band array a
     a(j-i+ma1,i) = matrix(i,j);
   end
 end
If sym=true (i.e., both A and B are symmetric), only the lower triangle of A need be stored in the first ma1 rows of the array (see the sketch after this parameter list).
4:     b(ldb,n) – double array
ldb, the first dimension of the array, must satisfy the constraint
  • if sym=false, ldb ≥ 2×mb1-1;
  • if sym=true, ldb ≥ mb1.
If mb1>0, b must contain the n by n band matrix B, stored in the same way as A. If sym=true, only the lower triangle of B need be stored in the first mb1 rows of the array.
If mb1 ≤ 0, the array is not used.
5:     sym – logical scalar
If sym=true, both A and B are assumed to be symmetric and only their upper triangles need be stored. Otherwise sym must be set to false.
6:     relep – double scalar
The relative error of the coefficients of the given matrices A and B. If the value of relep is less than the machine precision, the machine precision is used instead.
7:     rmu – double scalar
μ, an approximation to the eigenvalue for which the corresponding eigenvector is required.
8:     d(30) – double array
d(1) must be set to indicate the type of problem (see Description):
d(1) > 0.0
Indicates a well-conditioned eigenvalue.
d(1) = 0.0
Indicates an ill-conditioned eigenvalue.
d(1) < 0.0
Indicates that the matrices have elements varying widely in order of magnitude.
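The following sketch illustrates the symmetric storage described under a and b above; the full symmetric matrix matsym is a hypothetical name introduced only for this illustration. Only the first ma1 rows of the band array are filled (the array must still have at least 2×ma1-1 rows, see the constraint on lda); because the matrix is symmetric, this is equivalent to supplying one triangle of the band.
 % Sketch only: band storage for sym = true, from a hypothetical full
 % symmetric matrix 'matsym' of order n.
 a = zeros(2*ma1-1, n);
 for i = 1:n
   for j = max(1, i-ma1+1):i            % band elements matsym(i,j) with j <= i
     a(j-i+ma1, i) = matsym(i, j);      % column i of 'a' holds row i of the matrix
   end
 end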

Optional Input Parameters

1:     n – int64/int32/nag_int scalar
Default: the second dimension of the arrays a, b. (An error is raised if these dimensions are not equal.)
n, the order of the matrices A and B.
Constraint: n ≥ 1.

Output Parameters

1:     a(lda,n) – double array
Details of the factorization of A - λ̄B, where λ̄ is an estimate of the eigenvalue.
2:     b(ldb,n) – double array
Elements in the top-left corner, and in the bottom right corner if sym=false, are set to zero; otherwise the array is unchanged.
3:     vec(n) – double array
The eigenvector, normalized so that the largest element is unity, corresponding to the improved eigenvalue rmu + d(30).
4:     d(30) – double array
If d(1) ≠ 0.0 on entry, the successive corrections to μ are given in d(i), for i = 1,2,…,k, where k+1 is the total number of iterations performed. The final correction is also given in the last position, d(30), of the array. The remaining elements of d are set to zero.
If d(1) = 0.0 on entry, no corrections to μ are computed and d(i) is set to 0.0, for i = 1,2,…,30. Thus in all three cases the best available approximation to the eigenvalue is rmu + d(30).
5:     ifail – int64/int32/nag_int scalar
ifail=0 unless the function detects an error (see Error Indicators and Warnings).

Error Indicators and Warnings

Errors or warnings detected by the function:
   ifail=1
On entry, n < 1,
or ma1 < 1,
or ma1 > n,
or lda < 2×ma1-1,
or ldb < mb1 when sym = true,
or ldb < 2×mb1-1 when sym = false (ldb is not checked if mb1 ≤ 0).
   ifail=2
On entry, ma1 < mb1. Either fill out a with zeros, or reverse the roles of a and b, and replace rmu by its reciprocal, i.e., solve Bx = (1/λ)Ax.
   ifail=3
On entry, lwork < 2×n when d(1) = 0.0,
or lwork < n×(ma1+1) when d(1) ≠ 0.0.
   ifail=4
A is null. If B is nonsingular, all the eigenvalues are zero and any set of n orthogonal vectors forms the eigensolution.
   ifail=5
B is null. If A is nonsingular, all the eigenvalues are infinite, and the columns of the unit matrix are eigenvectors.
   ifail=6
On entry, A and B are both null. The eigensolution is arbitrary.
   ifail=7
d(1) ≠ 0.0 on entry and convergence is not achieved in 30 iterations. Either the eigenvalue is ill-conditioned or rmu is a poor approximation to the eigenvalue. See Algorithmic Details.
   ifail=8
d(1) = 0.0 on entry and no eigenvector has been found after min(n,5) back-substitutions. rmu is not a sufficiently good approximation to the eigenvalue.
   ifail=9
d(1) < 0.0 on entry and rmu is too inaccurate for the solution to converge.
   ifail=-99
An unexpected error has been triggered by this routine. Please contact NAG.
   ifail=-399
Your licence key may have expired or may not have been installed correctly.
   ifail=-999
Dynamic memory allocation failed.

Accuracy

The eigensolution is exact for some problem
(A+E)x = μ(B+F)x,
where ‖E‖ and ‖F‖ are of the order of η(‖A‖ + |μ|‖B‖), where η is the value used for relep.
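As an informal illustration (not part of the specification above), the bound can be checked a posteriori by forming the residual with the full matrices; the sketch below uses the data and the printed results from the Example section at the end of this document, so the residual is limited by the four-decimal rounding of those results.
 % Sketch only: a-posteriori residual check with the Example data.
 Afull = diag([1 2 3 4 5]) + diag([-1 -1 -1 -1],-1) + diag([1 1 1 1],1) + diag([2 2 2],2);
 Bfull = diag([5 4 3 2 1]) + diag([1 2 2 1],-1) + diag([1 2 2 1],1);
 lam = -12.3394;                                     % rmu + d(30) from the Example
 x   = [-0.0377; 0.2606; -0.5560; 0.6598; -0.4315];  % computed eigenvector (normalized)
 relres = norm(Afull*x - lam*(Bfull*x)) / ((norm(Afull) + abs(lam)*norm(Bfull))*norm(x));
 fprintf('relative residual = %g\n', relres);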

Further Comments

Timing

The time taken by nag_eigen_real_band_geneig (f02sd) is approximately proportional to n(2mA+1)^2 for the factorization, and to n(2mA+1) for each iteration.

Storage

The storage of the matrices A and B is designed for efficiency on a paged machine.
nag_eigen_real_band_geneig (f02sd) will work with full matrices but it will do so inefficiently, particularly in respect of storage requirements.

Algorithmic Details

Inverse iteration is performed according to the rule
(A - μB) y_(r+1) = B x_r,
x_(r+1) = (1/α_(r+1)) y_(r+1),
where α_(r+1) is the element of y_(r+1) of largest magnitude.
Thus:
(A - μB) x_(r+1) = (1/α_(r+1)) B x_r.
Hence the residual corresponding to x_(r+1) is very small if α_(r+1) is very large (see Peters and Wilkinson (1979)). The first half iteration, Uy_1 = e, corresponds to taking L⁻¹PBx_0 = e.
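For illustration only, the following is a dense-matrix sketch of this iteration; it is not the banded, LU-with-interchanges implementation used inside the function, and the function name, the fixed starting vector and the fixed iteration count are assumptions made for the sketch.
function [x, lam] = invit_sketch(A, B, mu, maxit)
% Plain inverse iteration for the generalized problem, full matrices A and B.
  n = size(A, 1);
  x = ones(n, 1);                        % starting vector (assumption for this sketch)
  [~, k] = max(abs(x));
  x = x / x(k);                          % normalize: largest element is +1
  lam = mu;
  for r = 1:maxit
    y = (A - mu*B) \ (B*x);              % solve (A - mu*B) y = B x
    beta = y(k);                         % element of y where x has its +1 entry
    lam = mu + 1/beta;                   % improved eigenvalue approximation
    [~, k] = max(abs(y));
    x = y / y(k);                        % renormalize: largest element is +1
  end
end
A call such as [x, lam] = invit_sketch(A, B, mu, 5) returns the last iterate and the corresponding improved eigenvalue; the real function additionally monitors growth and residuals, and records the corrections in d, as described below.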
If μ is a very accurate eigenvalue, then there should always be an initial vector x_0 such that one half iteration gives a small residual and thus a good eigenvector. If the eigenvalue is ill-conditioned, then second and subsequent iterated vectors may not be even remotely close to an eigenvector of a neighbouring problem (see pages 374–376 of Wilkinson (1972) and Wilkinson (1974)). In this case it is essential to accept only a vector obtained after one half iteration.
However, for well-conditioned eigenvalues, there is no loss in performing more than one iteration (see page 376 of Wilkinson (1972)), and indeed it will be necessary to iterate if μ is not such a good approximation to the eigenvalue. When the iteration has converged, y_(r+1) will be some multiple of x_r, say y_(r+1) = β_(r+1) x_r.
Therefore
(A - μB) β_(r+1) x_r = B x_r,
giving
(A - (μ + 1/β_(r+1)) B) x_r = 0.
Thus μ + 1/β_(r+1) is a better approximation to the eigenvalue. β_(r+1) is obtained as the element of y_(r+1) which corresponds to the element of largest magnitude, +1, in x_r. The function terminates when ‖(A - (μ + 1/β_r) B) x_r‖ is of the order of the machine precision relative to ‖A‖ + |μ|‖B‖.
If the elements of A and B vary widely in order of magnitude, then ‖A‖ and ‖B‖ are excessively large and a different convergence test is required. The function terminates when the difference between successive corrections to μ is small relative to μ.
In practice one does not necessarily know if the given problem is well-conditioned or ill-conditioned. In order to provide some information on the condition of the eigenvalue or the accuracy of μ in the event of failure, successive values of 1/β_r are stored in the vector d when d(1) is nonzero on input. If these values appear to be converging steadily, then it is likely that μ was a poor approximation to the eigenvalue and it is worth trying again with rmu + d(30) as the initial approximation. If the values in d vary considerably in magnitude, then the eigenvalue is ill-conditioned.
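A sketch of the restart suggested above, with hypothetical variable names: a0 and b0 are assumed to be untouched copies of the band arrays, since the function overwrites a and b with factorization details.
 % Sketch only: retry from the improved approximation after ifail = 7.
 if ifail == 7
   rmu2 = rmu + d(30);                   % best available approximation so far
   d2   = zeros(30, 1);
   d2(1) = 1;                            % still treating the eigenvalue as well-conditioned
   [a, b, vec, d2, ifail] = f02sd(ma1, mb1, a0, b0, sym, relep, rmu2, d2);
 end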
A discussion of the significance of the singularity of A and/or B is given in relation to the QZ algorithm in Wilkinson (1979).

Example

Given the generalized eigenproblem Ax=λBx where
A = [ 1  1  2  0  0
     -1  2  1  2  0
      0 -1  3  1  2
      0  0 -1  4  1
      0  0  0 -1  5 ]
and
B = [ 5  1  0  0  0
      1  4  2  0  0
      0  2  3  2  0
      0  0  2  2  1
      0  0  0  1  1 ]
find the eigenvector corresponding to the approximate eigenvalue -12.33.
Although B is symmetric, A is not, so sym must be set to false and all the elements of B in the band must be supplied to the function. A (as written above) has 1 subdiagonal and 2 superdiagonals, so ma1 must be set to 3 and A filled out with an additional subdiagonal of zeros. In the program below, each row of the matrices is stored in the corresponding column of the band arrays a and b.
function f02sd_example


fprintf('f02sd example results\n\n');

% A and B in non-symmetric banded format:
% A has 1 subdiagonal and 2 superdiagonals, B has 1 sub/super-diagonal;
% a row of zeros is added to A so it has as many subdiagonals as superdiagonals.
a = [0, 0, 0, 0, 0;
     0,-1,-1,-1,-1;
     1, 2, 3, 4, 5;
     1, 1, 1, 1, 0;
     2, 2, 2, 0, 0];
b = [0, 1, 2, 2, 1;
     5, 4, 3, 2, 1;
     1, 2, 2, 1, 0];

ma1 = int64(2+1);
mb1 = int64(1+1);
sym = false;

% Find eigenvalue closest to rmu and associated eigenvector.
rmu   = -12.33;
relep = 0;
d     = zeros(30, 1);
% Assume the eigenvalue is well-conditioned
d(1)  = 1;

[a, b, vec, d, ifail] = f02sd( ...
                               ma1, mb1, a, b, sym, relep, rmu, d);

eval = rmu + d(30);
disp('Corrected eigenvalue');
disp(eval);
disp('Eigenvector')
disp(vec/norm(vec));


f02sd example results

Corrected eigenvalue
  -12.3394

Eigenvector
   -0.0377
    0.2606
   -0.5560
    0.6598
   -0.4315



© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2015