The input is a set of lagged autocovariance matrices $C_0, C_1, C_2, \ldots$. These will generally be sample values such as are obtained from a multivariate time series using nag_tsa_multi_corrmat_cross (g13dm).
The main calculation is the recursive determination of the coefficients in the finite lag (forward) prediction equation

$$x_t = \Phi_{l,1} x_{t-1} + \Phi_{l,2} x_{t-2} + \cdots + \Phi_{l,l} x_{t-l} + e_{l,t}$$

and the associated backward prediction equation

$$x_{t-l-1} = \Psi_{l,1} x_{t-l} + \Psi_{l,2} x_{t-l+1} + \cdots + \Psi_{l,l} x_{t-1} + f_{l,t},$$

together with the covariance matrices $D_l$ of $e_{l,t}$ and $D_l^b$ of $f_{l,t}$.
The recursive cycle, by which the order of the prediction equation is extended from $l$ to $l+1$, is to calculate

$$\Delta_{l+1} = C_{l+1}^T - \Phi_{l,1} C_l^T - \cdots - \Phi_{l,l} C_1^T \quad (1)$$

then

$$\Phi_{l+1,l+1} = \Delta_{l+1} \left(D_l^b\right)^{-1}, \qquad \Psi_{l+1,l+1} = \Delta_{l+1}^T D_l^{-1},$$

$$D_{l+1} = D_l - \Phi_{l+1,l+1} \Delta_{l+1}^T, \qquad D_{l+1}^b = D_l^b - \Psi_{l+1,l+1} \Delta_{l+1}, \quad (2)$$

$$\Phi_{l+1,j} = \Phi_{l,j} - \Phi_{l+1,l+1} \Psi_{l,l+1-j}, \qquad \Psi_{l+1,j} = \Psi_{l,j} - \Psi_{l+1,l+1} \Phi_{l,l+1-j}, \quad j = 1, 2, \ldots, l. \quad (3)$$

The cycle is initialized by taking (for $l = 0$)

$$D_0 = D_0^b = C_0.$$

In the step from $l = 0$ to $l = 1$, the above equations contain redundant terms and simplify: (1) becomes $\Delta_1 = C_1^T$, and neither (2) nor (3) is needed.
Quantities useful in assessing the effectiveness of the prediction equation are the generalized variance ratios

$$v_l = \det(D_l)/\det(C_0), \quad l = 1, 2, \ldots,$$

and the multiple squared partial autocorrelations

$$p_l^2 = 1 - v_l/v_{l-1}, \quad \text{with } v_0 = 1.$$
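Given the error covariance matrices $D_1, \ldots, D_L$, these two quantities are cheap to form; a minimal NumPy sketch (`variance_ratios` is a hypothetical helper name, not part of the NAG interface):

```python
import numpy as np

def variance_ratios(C0, Dseq):
    """Generalized variance ratios v_l = det(D_l)/det(C_0) and multiple
    squared partial autocorrelations p_l^2 = 1 - v_l/v_{l-1}, with v_0 = 1.
    Dseq is the list of error covariance matrices [D_1, ..., D_L]."""
    detC0 = np.linalg.det(C0)
    v = [np.linalg.det(D) / detC0 for D in Dseq]
    vprev = [1.0] + v[:-1]                       # v_0 = 1
    p2 = [1.0 - vl / vp for vl, vp in zip(v, vprev)]
    return v, p2
```

For example, with $C_0 = I_2$ and $D_1 = 0.5 I_2$, $D_2 = 0.25 I_2$, this gives $v = (0.25, 0.0625)$ and $p^2 = (0.75, 0.75)$.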
Reference: Whittle P (1963) On the fitting of multivariate autoregressions and the approximate canonical factorization of a spectral density matrix. Biometrika 50 129–134.
The conditioning of the problem depends on the prediction error variance ratios. Very small values of these may indicate loss of accuracy in the computations.
If sample autocorrelation matrices are used as input, then the output will be relevant to the original series scaled by their standard deviations. If these autocorrelation matrices are produced by nag_tsa_multi_corrmat_cross (g13dm), you must replace the diagonal elements of $C_0$ (otherwise used to hold the series variances) by $1$.
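The diagonal replacement is a one-line fix before the call; sketched here in NumPy with hypothetical values (the off-diagonal entries stand in for sample correlations, the diagonal for the variances g13dm stores there):

```python
import numpy as np

# Hypothetical lag-0 matrix as laid out by g13dm when correlations are
# requested: off-diagonal entries are correlations, the diagonal holds
# the series variances and must be reset to 1 before calling g13db.
c0 = np.array([[0.0109, -0.65],
               [-0.65,   0.0570]])

np.fill_diagonal(c0, 1.0)  # replace the stored variances by 1
```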
function g13db_example
  fprintf('g13db example results\n\n');

  % Lagged sample autocovariance matrices: c0 holds C_0, c(:,:,l) holds C_l
  c0 = [0.0109, -0.0077917, 0.0013004, 0.0012654;
        -0.0077917, 0.05704, 0.002418, 0.014409;
        0.0013004, 0.002418, 0.04396, -0.021421;
        0.0012654, 0.014409, -0.021421, 0.072289];
  c(:,:,1) = [0.0045889, 0.0004651, -0.00013275, 0.0077531;
              -0.0024419, -0.011667, -0.021956, -0.0045803;
              0.001108, -0.0080479, 0.013621, -0.0085868;
              -0.00050614, 0.014045, -0.0010087, 0.012269];
  c(:,:,2) = [0.0018652, -0.0064389, 0.0088307, -0.0024808;
              -0.011865, 0.0072367, -0.019802, 0.0059069;
              -0.0080307, 0.014306, 0.014546, 0.01351;
              -0.0021791, -0.029528, -0.015887, 0.00088308];
  c(:,:,3) = [-8.055e-005, -0.0037759, 0.0075463, -0.0042276;
              0.0041447, -0.0037987, 0.0019332, -0.017564;
              -0.010582, 0.0067733, 0.0069832, 0.0061747;
              0.0041352, -0.016013, 0.017043, -0.013412];
  c(:,:,4) = [0.00076079, -0.0010134, 0.01187, -0.0041651;
              0.0036014, -0.0036375, -0.025571, 0.0050218;
              -0.013924, 0.011718, -0.0059088, 0.0059297;
              0.010739, -0.014571, 0.013816, -0.012588];
  c(:,:,5) = [-0.00064365, -0.0044556, 0.0051334, 0.00071587;
              0.0063617, 0.00015217, 0.002727, -0.0022261;
              -0.0085855, 0.0014468, -0.0028698, 0.0044384;
              0.0068339, -0.002179, 0.013759, 0.00028217];

  nl = int64(5);    % number of lagged autocovariance matrices supplied
  nk = int64(3);    % number of lags required in the output
  ns = size(c0,1);  % number of series

  % Multivariate partial autocorrelations and prediction equations
  [p, v0, v, d, db, w, wb, nvp, ifail] = ...
    g13db(c0, c, nl, nk);

  fprintf('Number of valid parameters = %10d\n\n', nvp);

  fprintf('Multivariate partial autocorrelations\n');
  for j = 1:5:nk
    fprintf('%12.5f', p(j:min(j+4,nk)));
    fprintf('\n');
  end

  fprintf('\nZero lag predictor error variance determinant\n');
  fprintf('followed by error variance ratios\n');
  fprintf('%12.5f\n', v0);
  for j = 1:5:nk
    fprintf('%12.5f', v(j:min(j+4,nk)));
    fprintf('\n');
  end

  fprintf('\nPrediction error variances\n');
  for k = 1:nk
    fprintf('\nLag = %4d\n', k);
    disp(d(1:ns,1:ns,k));
  end

  fprintf('\nLast backward prediction error variances\n\n');
  fprintf('Lag = %4d\n', nvp);
  disp(db(1:ns,1:ns));

  fprintf('\nPrediction coefficients\n');
  for k = 1:nk
    fprintf('\nLag = %4d\n', k);
    disp(w(1:ns,1:ns,k));
  end

  fprintf('\nBackward prediction coefficients\n');
  for k = 1:nk
    fprintf('\nLag = %4d\n', k);
    disp(wb(1:ns,1:ns,k));
  end