naginterfaces.library.correg.ridge_opt

naginterfaces.library.correg.ridge_opt(x, isx, y, h, opt, niter, tol, orig, optloo, tau=0.0)

ridge_opt calculates a ridge regression, optimizing the ridge parameter according to one of four prediction error criteria.

For full information please refer to the NAG Library document for g02ka

https://support.nag.com/numeric/nl/nagdoc_30.3/flhtml/g02/g02kaf.html
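For orientation, a minimal usage sketch follows. It is not part of the NAG document: the data, the starting value h = 0.5, the GCV choice opt = 1 and the other settings are placeholder values, and the result-field names simply mirror the Returns section below.

    import numpy as np

    from naginterfaces.library import correg

    # Placeholder data: n = 6 observations on m = 3 candidate variables.
    x = np.array([
        [1.0, 0.0, 0.5],
        [2.0, 1.0, 0.2],
        [3.0, 0.0, 0.9],
        [4.0, 1.0, 0.1],
        [5.0, 0.0, 0.7],
        [6.0, 1.0, 0.3],
    ])
    y = [4.0, 7.1, 10.2, 13.0, 16.1, 19.0]
    isx = [1, 1, 1]   # include all three variables in the model

    sol = correg.ridge_opt(
        x, isx, y,
        h=0.5,        # starting value for the ridge parameter
        opt=1,        # optimize h by generalized cross-validation (GCV)
        niter=25,     # allow at most 25 optimization iterations
        tol=1.e-4,    # halt when consecutive values of h lie within tol
        orig=1,       # report estimates for the original data
        optloo=2,     # also compute the LOOCV prediction error
    )
    print('Optimized ridge parameter:', sol.h)
    print('Intercept and coefficients:', sol.b)
    print('GCV, UEV, FPE, BIC, LOOCV:', sol.perr)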

Parameters
x : float, array-like, shape (n, m)

The values of independent variables in the data matrix X.

isx : int, array-like, shape (m)

Indicates which of the m independent variables are included in the model.

If isx[j-1] = 1, the jth variable in x will be included in the model.

If isx[j-1] = 0, variable j is excluded.

y : float, array-like, shape (n)

The values of the dependent variable y.

h : float

An initial value for the ridge regression parameter h; used as a starting point for the optimization.

opt : int

The measure of prediction error used to optimize the ridge regression parameter h. The value of opt must be set equal to one of:

opt = 1: generalized cross-validation (GCV);

opt = 2: unbiased estimate of variance (UEV);

opt = 3: future prediction error (FPE);

opt = 4: Bayesian information criterion (BIC).

niter : int

The maximum number of iterations allowed to optimize the ridge regression parameter h.

tol : float

Iterations of the ridge regression parameter h will halt when consecutive values of h lie within tol.

orig : int

If orig = 1, the parameter estimates b are calculated for the original data; otherwise orig = 2 and the parameter estimates b̃ are calculated for the standardized data.

optloo : int

If optloo = 2, the leave-one-out cross-validation estimate of prediction error is calculated; otherwise no such estimate is calculated and optloo = 1.

tau : float, optional

Singular values less than tau of the SVD of the data matrix X will be set equal to zero.

Returns
h : float

h is the optimized value of the ridge regression parameter.

niter : int

The number of iterations used to optimize the ridge regression parameter h within tol.

nep : float

The number of effective parameters, γ, in the model.

b : float, ndarray, shape (ip+1)

Contains the intercept and parameter estimates for the fitted ridge regression model in the order indicated by isx. The first element of b contains the estimate for the intercept; b[j] contains the parameter estimate for the jth independent variable in the model, for j = 1, 2, …, ip.

vif : float, ndarray, shape (ip)

The variance inflation factors in the order indicated by isx. For the jth independent variable in the model, vif[j-1] is the value of vⱼ, for j = 1, 2, …, ip.

res : float, ndarray, shape (n)

res[i-1] is the value of the ith residual for the fitted ridge regression model, for i = 1, 2, …, n.

rss : float

The sum of squares of residual values.

df : int

The degrees of freedom for the residual sum of squares rss.

perr : float, ndarray, shape (5)

The first four elements contain, in this order, the measures of prediction error: GCV, UEV, FPE and BIC.

If optloo = 2, perr[4] is the LOOCV estimate of prediction error; otherwise perr[4] is not referenced.

Raises
NagValueError
(errno 1)

On entry, m = ⟨value⟩.

Constraint: m ≥ 1.

(errno 1)

On entry, n = ⟨value⟩.

Constraint: n > 1.

(errno 1)

On entry, opt = ⟨value⟩.

Constraint: opt = 1, 2, 3 or 4.

(errno 1)

On entry, h = ⟨value⟩.

Constraint: h > 0.0.

(errno 1)

On entry, orig = ⟨value⟩.

Constraint: orig = 1 or 2.

(errno 1)

On entry, niter = ⟨value⟩.

Constraint: niter ≥ 1.

(errno 1)

On entry, tol = ⟨value⟩.

Constraint: tol > 0.0.

(errno 1)

On entry, optloo = ⟨value⟩.

Constraint: optloo = 1 or 2.

(errno 2)

On entry, ip = ⟨value⟩ and m = ⟨value⟩.

Constraint: 1 ≤ ip ≤ m.

(errno 2)

On entry, ip = ⟨value⟩; n = ⟨value⟩.

Constraint: ip < n.

(errno 2)

On entry, isx[⟨value⟩] = ⟨value⟩.

Constraint: isx[j-1] = 0 or 1.

(errno 3)

On entry, tau = ⟨value⟩.

Constraint: tau ≥ 0.0.

(errno 4)

SVD failed to converge.

Warns
NagAlgorithmicWarning
(errno 5)

Maximum number of iterations used.
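In the Python interface these conditions surface as exceptions and warnings. The sketch below is illustrative only: it assumes the documented exception classes in naginterfaces.base.utils, and the arguments passed to ridge_opt are placeholders.

    import warnings

    from naginterfaces.base import utils
    from naginterfaces.library import correg

    def safe_ridge(x, isx, y):
        """Call ridge_opt, trapping bad inputs and iteration warnings."""
        try:
            with warnings.catch_warnings():
                # Escalate 'maximum number of iterations used' to an error:
                warnings.simplefilter('error', utils.NagAlgorithmicWarning)
                return correg.ridge_opt(
                    x, isx, y, h=0.5, opt=1, niter=25, tol=1.e-4,
                    orig=1, optloo=1,
                )
        except utils.NagValueError as exc:
            print('Invalid input:', exc)
        except utils.NagAlgorithmicWarning as exc:
            print('h did not converge within niter iterations:', exc)
        return None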

Notes

A linear model has the form:

y = c + Xβ + ε,

where

y is an n×1 matrix of values of a dependent variable;

c is a scalar intercept term;

X is an n×m matrix of values of independent variables;

β is an m×1 matrix of unknown values of parameters;

ε is an n×1 matrix of unknown random errors such that the variance of ε is σ²I.

Let X̃ be the mean-centred X and ỹ the mean-centred y. Furthermore, X̃ is scaled such that the diagonal elements of the cross product matrix X̃ᵀX̃ are one. The linear model now takes the form:

ỹ = X̃β̃ + ε.

Ridge regression estimates the parameters β̃ in a penalised least squares sense by finding the b̃ that minimizes

‖X̃b̃ − ỹ‖² + h‖b̃‖²,  h > 0,

where ‖·‖ denotes the ℓ₂-norm and h is a scalar regularization or ridge parameter. For a given value of h, the parameter estimates b̃ are found by evaluating

b̃ = (X̃ᵀX̃ + hI)⁻¹X̃ᵀỹ.

Note that if h = 0 the ridge regression solution is equivalent to the ordinary least squares solution.

Rather than calculate the inverse of (X̃ᵀX̃ + hI) directly, ridge_opt uses the singular value decomposition (SVD) of X̃. After decomposing X̃ into UDVᵀ, where U and V are orthogonal matrices and D is a diagonal matrix, the parameter estimates become

b̃ = V(DᵀD + hI)⁻¹DᵀUᵀỹ.

A consequence of introducing the ridge parameter is that the effective number of parameters, γ, in the model is given by the sum of diagonal elements of

DᵀD(DᵀD + hI)⁻¹;

see Moody (1992) for details.
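The following NumPy sketch restates the two formulas above, the estimates b̃ and the effective number of parameters γ. It is an illustration of the algebra only, not of the library's internal implementation, and all names are local to the example.

    import numpy as np

    def ridge_via_svd(x, y, h):
        """Illustrate b~ = V (D'D + hI)^{-1} D'U'y~ and
        gamma = sum_i d_i^2 / (d_i^2 + h)."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)

        # Mean-centre y; mean-centre and scale x so diag(x~'x~) = 1.
        yt = y - y.mean()
        xc = x - x.mean(axis=0)
        xt = xc / np.sqrt((xc ** 2).sum(axis=0))

        # Thin SVD: x~ = U diag(d) V'.
        u, d, vt = np.linalg.svd(xt, full_matrices=False)

        # Parameter estimates for the standardized data.
        b_tilde = vt.T @ (d / (d ** 2 + h) * (u.T @ yt))

        # Effective number of parameters.
        gamma = np.sum(d ** 2 / (d ** 2 + h))
        return b_tilde, gamma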

Any multi-collinearity in the design matrix X may be highlighted by calculating the variance inflation factors for the fitted model. The jth variance inflation factor, vⱼ, is a scaled version of the multiple correlation coefficient between independent variable j and the other independent variables, Rⱼ, and is given by

vⱼ = 1/(1 − Rⱼ²),  j = 1, 2, …, m.

The m variance inflation factors are calculated as the diagonal elements of the matrix:

(X̃ᵀX̃ + hI)⁻¹X̃ᵀX̃(X̃ᵀX̃ + hI)⁻¹,

which, using the SVD of X̃, is equivalent to the diagonal elements of the matrix:

V(DᵀD + hI)⁻¹DᵀD(DᵀD + hI)⁻¹Vᵀ.
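Continuing the same illustrative sketch, the variance inflation factors follow as the diagonal of the last matrix above:

    import numpy as np

    def ridge_vif(x, h):
        """Diagonal of V (D'D + hI)^{-1} D'D (D'D + hI)^{-1} V'."""
        x = np.asarray(x, dtype=float)
        xc = x - x.mean(axis=0)
        xt = xc / np.sqrt((xc ** 2).sum(axis=0))
        _, d, vt = np.linalg.svd(xt, full_matrices=False)
        w = d ** 2 / (d ** 2 + h) ** 2   # diagonal of the middle factor
        return (vt.T ** 2) @ w           # row j gives v_j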

Although parameter estimates b̃ are calculated by using X̃, it is usual to report the parameter estimates b associated with X. These are calculated from b̃, and the means and scalings of X. Optionally, either b̃ or b may be calculated.

The method can adopt one of four criteria to minimize while calculating a suitable value for h:

  1. Generalized cross-validation (GCV):

     ns/(n − γ)²;

  2. Unbiased estimate of variance (UEV):

     s/(n − γ);

  3. Future prediction error (FPE):

     (1/n)(s + 2γs/(n − γ));

  4. Bayesian information criterion (BIC):

     (1/n)(s + log(n)γs/(n − γ));

where s is the sum of squares of residuals. However, the function returns all four of the above prediction errors regardless of the one selected to minimize the ridge parameter, h. Furthermore, the function will optionally return the leave-one-out cross-validation (LOOCV) prediction error.
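As a check on the formulas, the four criteria are simple functions of the residual sum of squares s, the sample size n and the effective number of parameters γ. The sketch below evaluates them directly (the function itself returns these values in perr):

    from math import log

    def prediction_errors(s, n, gamma):
        """GCV, UEV, FPE and BIC as given in the Notes."""
        gcv = n * s / (n - gamma) ** 2
        uev = s / (n - gamma)
        fpe = (s + 2.0 * gamma * s / (n - gamma)) / n
        bic = (s + log(n) * gamma * s / (n - gamma)) / n
        return gcv, uev, fpe, bic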

References

Hastie, T., Tibshirani, R. and Friedman, J. (2003) The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer Series in Statistics.

Moody, J. E. (1992) The effective number of parameters: an analysis of generalisation and regularisation in nonlinear learning systems. In: Neural Information Processing Systems (eds J. E. Moody, S. J. Hanson and R. P. Lippmann), 4, 847–854. Morgan Kaufmann, San Mateo, CA.