NAG Toolbox: nag_correg_lars (g02ma)
Purpose
nag_correg_lars (g02ma) performs Least Angle Regression (LARS), forward stagewise linear regression or Least Absolute Shrinkage and Selection Operator (LASSO).
Syntax
[b, fitsum, ifail] = g02ma(mtype, m, d, y, 'pred', pred, 'prey', prey, 'n', n, 'isx', isx, 'mnstep', mnstep, 'ropt', ropt)
[b, fitsum, ifail] = nag_correg_lars(mtype, m, d, y, 'pred', pred, 'prey', prey, 'n', n, 'isx', isx, 'mnstep', mnstep, 'ropt', ropt)
Description
nag_correg_lars (g02ma) implements the LARS algorithm of
Efron et al. (2004) as well as the modifications needed to perform forward stagewise linear regression and fit LASSO and positive LASSO models.
Given a vector of $n$ observed values, $y = \{ y_i : i = 1, 2, \ldots, n \}$, and an $n \times p$ design matrix $X$, where the $j$th column of $X$, denoted $x_j$, is a vector of length $n$ representing the $j$th independent variable $x_j$, standardized such that $\sum_{i=1}^{n} x_{ij} = 0$ and $\sum_{i=1}^{n} x_{ij}^2 = 1$, and a set of model parameters $\beta$ to be estimated from the observed values, the LARS algorithm can be summarised as:
1. Set $k = 0$ and all coefficients to zero, that is $\beta = 0$.
2. Find the variable most correlated with $y$, say $x_{j_1}$. Add $x_{j_1}$ to the 'most correlated' set $A$. If $p = 1$, go to 8.
3. Take the largest possible step in the direction of $x_{j_1}$ (i.e., increase the magnitude of $\beta_{j_1}$) until some other variable, say $x_{j_2}$, has the same correlation with the current residual, $y - x_{j_1} \beta_{j_1}$.
4. Increment $k$ and add $x_{j_k}$ to $A$.
5. If $|A| = p$, go to 8.
6. Proceed in the 'least angle direction', that is, the direction which is equiangular between all variables in $A$, altering the magnitude of the parameter estimates of those variables in $A$, until the $k$th variable, $x_{j_k}$, has the same correlation with the current residual.
7. Go to 4.
8. Let $K = k$.
As well as being a model selection process in its own right, with a small number of modifications the LARS algorithm can be used to fit the LASSO model of Tibshirani (1996), a positive LASSO model, where the independent variables enter the model in their defined direction (i.e., $\beta_{kj} \geq 0$), forward stagewise linear regression (Hastie et al. (2001)) and forward selection (Weisberg (1985)). Details of the required modifications in each of these cases are given in Efron et al. (2004).
The LASSO model of Tibshirani (1996) is given by
$$\min_{\alpha, \beta_k \in \mathbb{R}^p} \; \left\| y - \alpha - X \beta_k \right\|^2 \quad \text{subject to} \quad \left\| \beta_k \right\|_1 \leq t_k$$
for all values of $t_k$, where $\alpha = \bar{y} = n^{-1} \sum_{i=1}^{n} y_i$. The positive LASSO model is the same as the standard LASSO model, given above, with the added constraint that
$$\beta_{kj} \geq 0, \quad j = 1, 2, \ldots, p.$$
Unlike the standard LARS algorithm, when fitting either of the LASSO models, variables can be dropped from, as well as added to, the set $A$. Therefore the total number of steps $K$ is no longer bounded by $p$.
Forward stagewise linear regression is an iterative procedure of the form:
1. Initialize $k = 0$ and the vector of residuals $r_0 = y$.
2. For each $j = 1, 2, \ldots, p$ calculate $c_j = x_j^{\mathrm{T}} r_k$. The value $c_j$ is therefore proportional to the correlation between the $j$th independent variable and the vector of previous residual values, $r_k$.
3. Calculate $j_k = \arg\max_j |c_j|$, the value of $j$ with the largest absolute value of $c_j$.
4. If $|c_{j_k}| < \epsilon$, go to 7.
5. Update the residual values, with
$$r_{k+1} = r_k - \delta \, \mathrm{sign}(c_{j_k}) \, x_{j_k},$$
where $\delta$ is a small constant and $\mathrm{sign}(c_{j_k}) = -1$ when $c_{j_k} < 0$ and $1$ otherwise.
6. Increment $k$ and go to 2.
7. Set $K = k$.
If the largest possible step were to be taken, that is $\delta = |c_{j_k}|$, then forward stagewise linear regression reverts to the standard forward selection method as implemented in nag_correg_linregm_fit_onestep (g02ee).
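The stagewise iteration above is simple to prototype directly. The following is a minimal MATLAB sketch of it, assuming a matrix x whose columns have already been standardized as described at the start of this section and a mean-centred response y; the step size delta and tolerance tol are illustrative values only, not the defaults used by nag_correg_lars (g02ma).
% Minimal forward stagewise sketch (illustrative only; the supported route
% is to call g02ma with mtype set to the forward stagewise option).
delta = 0.01;                        % small step size (assumed value)
tol   = 1.0e-8;                      % stopping tolerance on max |c_j|
beta  = zeros(size(x,2),1);          % all coefficients start at zero
r     = y;                           % initial residual vector
while true
  c = x' * r;                        % c_j proportional to corr(x_j, r_k)
  [cmax, jk] = max(abs(c));          % variable most correlated with residual
  if cmax < tol, break; end
  step     = delta * sign(c(jk));
  beta(jk) = beta(jk) + step;        % nudge the chosen coefficient
  r        = r - step * x(:,jk);     % update the residuals
end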
The LARS procedure results in $K$ models, one for each step of the fitting process. In order to aid in choosing which is the most suitable, Efron et al. (2004) introduced a $C_p$-type statistic given by
$$C_p^{(k)} = \frac{\left\| y - X \beta_k \right\|^2}{\sigma^2} - n + 2 \nu_k,$$
where $\nu_k$ is the approximate degrees of freedom for the $k$th step and
$$\sigma^2 = \frac{\left\| y - X \beta_K \right\|^2}{n - \nu_K}.$$
One way of choosing a model is therefore to take the one with the smallest value of $C_p^{(k)}$.
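In terms of the arrays returned by nag_correg_lars (g02ma), this selection rule can be applied directly to fitsum. A short sketch, using the same indexing and call form as the example program at the end of this document (where the fourth row of fitsum holds the $C_p^{(k)}$ values):
[b, fitsum, ifail] = g02ma(int64(1), d, y);   % LARS fit, as in the example program
K          = size(b,2) - 2;                   % number of steps actually taken
[~, kbest] = min(fitsum(4,1:K));              % step with the smallest Cp value
beta_best  = b(:,kbest);                      % parameter estimates at that step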
References
Efron B, Hastie T, Johnstone I and Tibshirani R (2004) Least Angle Regression The Annals of Statistics (Volume 32) 2 407–499
Hastie T, Tibshirani R and Friedman J (2001) The Elements of Statistical Learning: Data Mining, Inference and Prediction Springer (New York)
Tibshirani R (1996) Regression Shrinkage and Selection via the Lasso Journal of the Royal Statistical Society, Series B (Methodological) (Volume 58) 1 267–288
Weisberg S (1985) Applied Linear Regression Wiley
Parameters
Compulsory Input Parameters
- 1: mtype – int64int32nag_int scalar
Indicates the type of model to fit. (A short usage sketch follows this list of compulsory parameters.)
- mtype = 1: LARS is performed.
- mtype = 2: Forward linear stagewise regression is performed.
- mtype = 3: LASSO model is fit.
- mtype = 4: A positive LASSO model is fit.
Constraint: mtype = 1, 2, 3 or 4.
- 2: m – int64int32nag_int scalar
m, the total number of independent variables when all variables are included.
Constraint: m ≥ 1.
- 3: d – double array
The first dimension of the array d must be at least n.
The second dimension of the array d must be at least m.
d, the data, which along with pred and isx, defines the design matrix $X$. The $i$th observation for the $j$th variable must be supplied in d(i,j), for $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$.
- 4: y – double array
y, the observations on the dependent variable, with the $i$th observation supplied in y(i), for $i = 1, 2, \ldots, n$.
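As noted under mtype, switching between the supported model types only requires changing that argument. A small sketch, following the call form used in the example program at the end of this document and assuming the mtype coding listed above:
% Fit the same data with each model type (assumed coding: 1 = LARS,
% 2 = forward stagewise, 3 = LASSO, 4 = positive LASSO).
for mt = 1:4
  [b, fitsum, ifail] = g02ma(int64(mt), d, y);
  fprintf('mtype = %d: %d steps taken\n', mt, size(b,2)-2);
end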
Optional Input Parameters
- 1: pred – int64int32nag_int scalar
Default: pred = 3
Indicates the type of data preprocessing to perform on the independent variables supplied in d to comply with the standardized form of the design matrix. (A plain MATLAB sketch of the default preprocessing follows this list of optional parameters.)
- pred = 0: No preprocessing is performed.
- pred = 1: Each of the independent variables, $x_j$, for $j = 1, 2, \ldots, p$, is mean centred prior to fitting the model. The means of the independent variables, $\bar{x}_j$, are returned in b.
- pred = 2: Each independent variable is normalized, with the $j$th variable scaled by $1 / \sqrt{x_j^{\mathrm{T}} x_j}$. The scaling factor used by variable $j$ is returned in b.
- pred = 3: As pred = 1 and pred = 2, all of the independent variables are mean centred prior to being normalized.
Constraint: pred = 0, 1, 2 or 3.
- 2: prey – int64int32nag_int scalar
Default: prey = 1
Indicates the type of data preprocessing to perform on the dependent variable supplied in y.
- prey = 0: No preprocessing is performed; this is equivalent to setting $\alpha = 0$.
- prey = 1: The dependent variable, y, is mean centred prior to fitting the model, so $\alpha = \bar{y}$. This is equivalent to fitting a non-penalized intercept to the model, and the degrees of freedom etc. are adjusted accordingly. The value of $\alpha$ used is returned in fitsum(1,nstep+1).
Constraint: prey = 0 or 1.
- 3: n – int64int32nag_int scalar
Default: the dimension of the array y and the first dimension of the array d. (An error is raised if these dimensions are not equal.)
n, the number of observations.
Constraint: n ≥ 1.
- 4: isx – int64int32nag_int array
Indicates which independent variables from d will be included in the design matrix, $X$.
If isx is not supplied, all variables are included in the design matrix.
Otherwise isx(j) must be set as follows, for $j = 1, 2, \ldots, m$:
- isx(j) = 1: to indicate that the $j$th variable, as supplied in d, is included in the design matrix;
- isx(j) = 0: to indicate that the $j$th variable, as supplied in d, is not included in the design matrix;
and $p$, the number of variables in the design matrix, is the number of values of isx set to 1.
Constraint: isx(j) = 0 or 1 and at least one value of isx(j) ≠ 0, for $j = 1, 2, \ldots, m$.
- 5: mnstep – int64int32nag_int scalar
Default:
- if mtype = 1, mnstep = m;
- otherwise mnstep = 200 × m.
The maximum number of steps to carry out in the model fitting process.
If mtype = 1, i.e., a LARS is being performed, the maximum number of steps the algorithm will take is min(p, n) if prey = 0, otherwise min(p, n − 1).
If mtype = 2, i.e., a forward linear stagewise regression is being performed, the maximum number of steps the algorithm will take is likely to be several orders of magnitude more and is no longer bound by p or n.
If mtype = 3 or 4, i.e., a LASSO or positive LASSO model is being fit, the maximum number of steps the algorithm will take lies somewhere between that of the LARS and forward linear stagewise regression, and again it is no longer bound by p or n.
Constraint: mnstep ≥ 1.
- 6: ropt – double array
Optional parameters to control various aspects of the LARS algorithm.
The default value will be used for ropt(i) if the supplied array is too short to include it. The default value will also be used if an invalid value is supplied for a particular argument; in that case the invalid value is ignored and the default used for that argument.
- ropt(1): the minimum step size that will be taken. The default is a small multiple of the machine precision returned by nag_machine_precision (x02aj).
- ropt(2): general tolerance, used amongst other things, for comparing correlations.
- ropt(3): controls whether the parameter estimates are rescaled before being returned. This argument has no effect when pred = 0 or 1. The default is for the parameter estimates to be rescaled.
- ropt(4): controls whether the model is assumed to contain an intercept during the model fitting process and when calculating the degrees of freedom. This has no effect on the amount of preprocessing performed on y. The default is to treat the model as having an intercept when prey = 1 and as not having an intercept when prey = 0.
- ropt(5): as implemented, the LARS algorithm can either work directly with y and $X$, or it can work with the cross-product matrices, $X^{\mathrm{T}} y$ and $X^{\mathrm{T}} X$. In most cases it is more efficient to work with the cross-product matrices. This flag gives you direct control over which method is used; however, the default choice, which depends on the size of the problem, will usually be the best one.
Constraints:
- ropt(1) and ropt(2) must be greater than the machine precision;
- ropt(3), ropt(4) and ropt(5) are flags and must each be set to one of their two permitted values.
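The default preprocessing of the independent variables described under pred corresponds to the usual mean centring followed by scaling to unit Euclidean norm. A plain MATLAB sketch of that transformation, applied to a data matrix d purely for illustration (the routine performs the equivalent operation internally):
xm = mean(d,1);                          % column means of the variables
dc = d - repmat(xm, size(d,1), 1);       % mean-centred variables
s  = sqrt(sum(dc.^2, 1));                % scaling factor for each column
ds = dc ./ repmat(s, size(d,1), 1);      % standardized: sum(ds(:,j).^2) = 1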
Output Parameters
- 1: b – double array
The second dimension of the array b will be nstep + 2.
Note: nstep is equal to $K$, the actual number of steps carried out in the model fitting process. See Description for further information.
b contains the parameter estimates, with b(j,k) = $\beta_{kj}$, the parameter estimate for the $j$th variable, $j = 1, 2, \ldots, p$, at the $k$th step of the model fitting process, $k = 1, 2, \ldots, \mathrm{nstep}$.
By default, when pred = 2 or 3 the parameter estimates are rescaled prior to being returned. If the parameter estimates are required on the normalized scale, then this can be overridden via ropt.
The values held in the remaining part of b depend on the type of preprocessing performed: the final two columns of b hold, for $j = 1, 2, \ldots, p$, the quantities used in that preprocessing (the variable means when mean centring was requested and the scaling factors when normalization was requested). (See the indexing sketch following these output parameters.)
The number of parameter estimates, $p$, is the number of columns in d when isx is not supplied, i.e., $p = m$. Otherwise $p$ is the number of nonzero values in isx.
- 2: fitsum – double array
The second dimension of the array fitsum will be nstep + 1.
Note: nstep is equal to $K$, the actual number of steps carried out in the model fitting process. See Description for further information.
Summaries of the model fitting process. When $k = 1, 2, \ldots, \mathrm{nstep}$:
- fitsum(1,k) holds $\sum_{j=1}^{p} |\beta_{kj}|$, the sum of the absolute values of the parameter estimates for the $k$th step of the model fitting process. If pred = 2 or 3, the scaled parameter estimates are used in the summation.
- fitsum(2,k) holds $\mathrm{RSS}_k$, the residual sums of squares for the $k$th step, where $\mathrm{RSS}_k = \| y - X \beta_k \|^2$.
- fitsum(3,k) holds $\nu_k$, the approximate degrees of freedom for the $k$th step.
- fitsum(4,k) holds $C_p^{(k)}$, a $C_p$-type statistic for the $k$th step, where $C_p^{(k)} = \mathrm{RSS}_k / \sigma^2 - n + 2 \nu_k$.
- fitsum(5,k) holds $\hat{C}_k$, the correlation between the residual at step $k - 1$ and the most correlated variable not yet in the active set $A$, where the residual at step 0 is $y$.
- fitsum(6,k) holds $\gamma_k$, the step size used at step $k$.
In addition:
- fitsum(1,nstep+1) holds $\alpha$, with $\alpha = \bar{y}$ if prey = 1 and $\alpha = 0$ otherwise.
- fitsum(2,nstep+1) holds $\mathrm{RSS}_0$, the residual sums of squares for the null model, where $\mathrm{RSS}_0 = y^{\mathrm{T}} y$ when prey = 0 and $\mathrm{RSS}_0 = (y - \bar{y})^{\mathrm{T}} (y - \bar{y})$ otherwise.
- fitsum(3,nstep+1) holds $\nu_0$, the degrees of freedom for the null model, where $\nu_0 = 0$ if prey = 0 and $\nu_0 = 1$ otherwise.
- fitsum(4,nstep+1) holds $C_p^{(0)}$, a $C_p$-type statistic for the null model, where $C_p^{(0)} = \mathrm{RSS}_0 / \sigma^2 - n + 2 \nu_0$.
- fitsum(5,nstep+1) holds $\sigma^2$, where $\sigma^2 = \mathrm{RSS}_K / (n - \nu_K)$ and $K = \mathrm{nstep}$.
Although the $C_p$ statistics described above are returned when the fitting process fails to finish within mnstep steps, they may not be meaningful due to the estimate $\sigma^2$ not being based on the saturated model.
- 3: ifail – int64int32nag_int scalar
ifail = 0 unless the function detects an error (see Error Indicators and Warnings).
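The layout described above can be summarised in a short indexing sketch, consistent with the example program at the end of this document (the first nstep columns of b hold the coefficient path and column nstep + 1 of fitsum holds the null-model and summary quantities):
[b, fitsum, ifail] = g02ma(int64(1), d, y);
K      = size(b,2) - 2;      % nstep, the number of steps actually taken
ip     = size(b,1);          % p, the number of parameter estimates
betaK  = b(1:ip, K);         % parameter estimates at the final step
alpha  = fitsum(1, K+1);     % alpha (ybar when y is mean centred)
sigma2 = fitsum(5, K+1);     % sigma^2 used in the Cp-type statistic
cp     = fitsum(4, 1:K);     % Cp-type statistic for each step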
Error Indicators and Warnings
Note: nag_correg_lars (g02ma) may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the function:
Cases prefixed with W are classified as warnings and
do not generate an error of type NAG:error_n. See nag_issue_warnings.
- Constraint: mtype = 1, 2, 3 or 4.
- Constraint: pred = 0, 1, 2 or 3.
- Constraint: prey = 0 or 1.
- Constraint: n ≥ 1.
- Constraint: m ≥ 1.
- Constraint: isx(j) = 0 or 1, for all j.
- On entry, all values of isx are zero.
Constraint: at least one value of isx must be nonzero.
- Constraint: mnstep ≥ 1.
- W: Fitting process did not finish in mnstep steps. Try increasing the size of mnstep. All output is returned as documented, up to step mnstep, however, $\sigma^2$ and the $C_p$ statistics may not be meaningful.
- W: $\sigma^2$ is approximately zero and hence the $C_p$-type criterion cannot be calculated. All other output is returned as documented.
- W: $\nu_K = n$, therefore $\sigma^2$ has been set to a large value. Output is returned as documented.
- W: Degenerate model, no variables added and nstep = 0. Output is returned as documented.
- Constraint: the array ropt must contain no more than five elements.
- An unexpected error has been triggered by this routine. Please contact NAG.
- Your licence key may have expired or may not have been installed correctly.
- Dynamic memory allocation failed.
Accuracy
Not applicable.
Further Comments
nag_correg_lars (g02ma) returns the parameter estimates at various points along the solution path of a LARS, LASSO or stagewise regression analysis. If the solution is required at a different set of points, for example when performing cross-validation, then
nag_correg_lars_param (g02mc) can be used.
For datasets with a large number of observations, $n$, it may be impractical to store the full $X$ matrix in memory in one go. In such instances the cross-product matrices $X^{\mathrm{T}} X$ and $X^{\mathrm{T}} y$ can be calculated, using, for example, multiple calls to nag_correg_ssqmat (g02bu) and nag_correg_ssqmat_combine (g02bz), and nag_correg_lars_xtx (g02mb) called to perform the analysis.
The amount of workspace used by nag_correg_lars (g02ma) depends on whether the cross-product matrices are being used internally (as controlled by ropt). When the cross-product matrices are used, the real workspace allocated internally grows with the number of variables, whereas working with $X$ and $y$ directly requires real workspace that also grows with the number of observations; in both cases an additional amount of integer storage is used. If a forward linear stagewise analysis is performed then additional real storage is required.
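As an illustration of the cross-product route, the required matrices can also be accumulated over blocks of rows in plain MATLAB before the analysis is carried out with nag_correg_lars_xtx (g02mb). The sketch below uses an in-memory d and y purely for demonstration (in practice each block would typically be read from file or a database) and an arbitrary block size:
blk = 5;                                 % arbitrary block size, for illustration
[n, p] = size(d);
dtd = zeros(p,p);  dty = zeros(p,1);  yty = 0;
for i0 = 1:blk:n
  i1   = min(i0+blk-1, n);
  dblk = d(i0:i1, :);                    % block of rows of the data matrix
  yblk = y(i0:i1);
  dtd  = dtd + dblk' * dblk;             % accumulate D'D
  dty  = dty + dblk' * yblk;             % accumulate D'y
  yty  = yty + yblk' * yblk;             % accumulate y'y
end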
Example
This example performs a LARS on a simulated dataset with 20 observations and 6 independent variables.
function g02ma_example
fprintf('g02ma example results\n\n');
mtype = int64(1);                        % perform LARS
d = [10.28 1.77 9.69 15.58 8.23 10.44;
9.08 8.99 11.53 6.57 15.89 12.58;
17.98 13.10 1.04 10.45 10.12 16.68;
14.82 13.79 12.23 7.00 8.14 7.79;
17.53 9.41 6.24 3.75 13.12 17.08;
7.78 10.38 9.83 2.58 10.13 4.25;
11.95 21.71 8.83 11.00 12.59 10.52;
14.60 10.09 -2.70 9.89 14.67 6.49;
3.63 9.07 12.59 14.09 9.06 8.19;
6.35 9.79 9.40 12.79 8.38 16.79;
4.66 3.55 16.82 13.83 21.39 13.88;
8.32 14.04 17.17 7.93 7.39 -1.09;
10.86 13.68 5.75 10.44 10.36 10.06;
4.76 4.92 17.83 2.90 7.58 11.97;
5.05 10.41 9.89 9.04 7.90 13.12;
5.41 9.32 5.27 15.53 5.06 19.84;
9.77 2.37 9.54 20.23 9.33 8.82;
14.28 4.34 14.23 14.95 18.16 11.03;
10.17 6.80 3.17 8.57 16.07 15.93;
5.39 2.67 6.37 13.56 10.68 7.35];
y = [-46.47; -35.80; -129.22; -42.44; -73.51;
-26.61; -63.90; -76.73; -32.64; -83.29;
-16.31; -5.82; -47.75; 18.38; -54.71;
-55.62; -45.28; -22.76; -104.32; -55.94];
% save the current warning state and request that W-classified exits are
% reported as warnings rather than errors
warn_state = nag_issue_warnings();
nag_issue_warnings(true);
[b,fitsum,ifail] = g02ma(mtype,d,y);
nag_issue_warnings(warn_state);
ip = size(b,1);                          % p, the number of parameter estimates
K  = size(b,2) - 2;                      % nstep, the number of steps taken
fprintf(' Step %s Parameter Estimate\n ',repmat(' ',1,max(ip-2,0)*5));
fprintf(repmat('-',1,5+ip*10));
fprintf('\n');
for k = 1:K
fprintf(' %3d',k);
for j = 1:ip
fprintf(' %9.3f',b(j,k));
end
fprintf('\n');
end
fprintf('\n');
fprintf(' alpha: %9.3f\n', fitsum(1,K+1));
fprintf('\n');
fprintf(' Step Sum RSS df Cp Ck Step Size\n ');
fprintf(repmat('-',1,64));
fprintf('\n');
for k = 1:K
fprintf(' %3d %9.3f %9.3f %6.0f %9.3f %9.3f %9.3f\n', ...
k,fitsum(1,k),fitsum(2,k),fitsum(3,k), ...
fitsum(4,k),fitsum(5,k),fitsum(6,k));
end
fprintf('\n');
fprintf(' sigma^2: %9.3f\n', fitsum(5,K+1));
fig1 = figure;
xpos = transpose(repmat(fitsum(1,1:K),ip,1));
ypos = transpose(b(1:ip,1:K));
xpos = [zeros(1,ip);xpos];
ypos = [zeros(1,ip);ypos];
xmin = min(min(xpos));
xmax = max(max(xpos));
ymin = min(min(ypos));
ymax = max(max(ypos));
ext = 1 + [-0.1 0.1];
xrng = [min(xmin*ext),max(xmax*ext)];
yrng = [min(ymin*ext),max(ymax*ext)];
xpos = [xpos;xrng(2)*ones(1,ip)];
ypos = [ypos;ypos(end,:)];
plot(xpos,ypos);
xlim(xrng);
ylim(yrng);
title({'{\bf g02ma Example Plot}'; ...
'Estimates for LAR model fitted to simulated dataset'});
xlabel('{\bf \Sigma_j |\beta_{kj} |}');
ylabel('{\bf Parameter Estimates (\beta_{kj})}');
label = [repmat('\beta_{k',ip,1) num2str(transpose(linspace(1,ip,ip))) ...
repmat('}',ip,1)];
h = legend(label,'Location','SouthOutside','Orientation','Horizontal');
set(h,'FontSize',get(h,'FontSize')*0.8);
g02ma example results
Step Parameter Estimate
-----------------------------------------------------------------
1 0.000 0.000 3.125 0.000 0.000 0.000
2 0.000 0.000 3.792 0.000 0.000 -0.713
3 -0.446 0.000 3.998 0.000 0.000 -1.151
4 -0.628 -0.295 4.098 0.000 0.000 -1.466
5 -1.060 -1.056 4.110 -0.864 0.000 -1.948
6 -1.073 -1.132 4.118 -0.935 -0.059 -1.981
alpha: -50.037
Step Sum RSS df Cp Ck Step Size
----------------------------------------------------------------
1 72.446 8929.855 2 13.355 123.227 72.446
2 103.385 6404.701 3 7.054 50.781 24.841
3 126.243 5258.247 4 5.286 30.836 16.225
4 145.277 4657.051 5 5.309 19.319 11.587
5 198.223 3959.401 6 5.016 12.266 24.520
6 203.529 3954.571 7 7.000 0.910 2.198
sigma^2: 304.198
This example plot shows the regression coefficients ($\beta_{kj}$) plotted against the scaled absolute sum of the parameter estimates ($\sum_j |\beta_{kj}|$).
© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2015