NAG Toolbox: nag_rand_kfold_xyw (g05pv)
Purpose
nag_rand_kfold_xyw (g05pv) generates training and validation datasets suitable for use in cross-validation or jack-knifing.
Syntax
[nt, state, sx, sy, sw, errbuf, ifail] = g05pv(k, fold, x, state, 'n', n, 'm', m, 'sordx', sordx, 'y', y, 'w', w, 'sordsx', sordsx)
[nt, state, sx, sy, sw, errbuf, ifail] = nag_rand_kfold_xyw(k, fold, x, state, 'n', n, 'm', m, 'sordx', sordx, 'y', y, 'w', w, 'sordsx', sordsx)
Description
Let X denote an n by m matrix of n observations on m variables, and let y and w each denote a vector of length n. For example, X might represent a matrix of independent variables, y the dependent variable and w the associated weights in a weighted regression.
nag_rand_kfold_xyw (g05pv) generates a series of training datasets, denoted by the matrix, vector, vector triplet (Xt, yt, wt) of nt observations, and validation datasets, denoted (Xv, yv, wv), with nv observations. These training and validation datasets are generated as follows.
Each of the original n observations is randomly assigned to one of k equally sized groups or folds. For the ith sample the validation dataset consists of those observations in group i and the training dataset consists of all those observations not in group i. Therefore at most k samples can be generated.
If n is not divisible by k then the observations are assigned to groups as evenly as possible, therefore any group will be at most one observation larger or smaller than any other group.
When using k = n the resulting datasets are suitable for leave-one-out cross-validation, or the training dataset on its own for jack-knifing. When using k < n the resulting datasets are suitable for k-fold cross-validation. Datasets suitable for reversed cross-validation can be obtained by switching the training and validation datasets, i.e., use the ith group as the training dataset and the rest of the data as the validation dataset.
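The grouping scheme described above (random assignment to k near-equal folds; the ith fold forms the validation set and the remainder the training set) can be sketched in language-neutral form. The following Python sketch illustrates the semantics only and does not call the NAG routine; `kfold_indices` is a hypothetical helper name, not part of the toolbox:

```python
import random

def kfold_indices(n, k, seed=None):
    """Randomly assign n observations to k near-equal folds and return,
    for each fold, a (training, validation) pair of index lists."""
    rng = random.Random(seed)
    # Give each observation a fold label as evenly as possible: when k
    # does not divide n, fold sizes differ by at most one observation.
    labels = [i % k for i in range(n)]
    rng.shuffle(labels)
    splits = []
    for fold in range(k):
        validation = [i for i in range(n) if labels[i] == fold]
        training = [i for i in range(n) if labels[i] != fold]
        splits.append((training, validation))
    return splits

# With n = 10 and k = 3 the fold sizes are 4, 3 and 3.
splits = kfold_indices(10, 3, seed=42)
sizes = sorted(len(validation) for _, validation in splits)
```

Swapping each (training, validation) pair gives the reversed cross-validation datasets mentioned above.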
One of the initialization functions
nag_rand_init_repeat (g05kf) (for a repeatable sequence if computed sequentially) or
nag_rand_init_nonrepeat (g05kg) (for a non-repeatable sequence) must be called prior to the first call to
nag_rand_kfold_xyw (g05pv).
References
None.
Parameters
Compulsory Input Parameters
- 1: k – int64int32nag_int scalar
k, the number of folds.
Constraint: 2 ≤ k ≤ n.
- 2: fold – int64int32nag_int scalar
The number of the fold to return as the validation dataset.
On the first call to nag_rand_kfold_xyw (g05pv), fold should be set to 1 and then incremented by one at each subsequent call until all k sets of training and validation datasets have been produced. See Further Comments for more details on how a different calling sequence can be used.
Constraint: 1 ≤ fold ≤ k.
- 3: x(:,:) – double array
The first dimension, dx, of the array x must satisfy
- if sordx = 2, dx = m;
- otherwise dx = n.
The second dimension of the array x must be at least m if sordx = 1 and at least n if sordx = 2.
The way the data is stored in x is defined by sordx.
If sordx = 1, x(i,j) contains the ith observation for the jth variable, for i = 1,2,…,n and j = 1,2,…,m.
If sordx = 2, x(j,i) contains the ith observation for the jth variable, for i = 1,2,…,n and j = 1,2,…,m.
If fold = 1, x must hold X, the original dataset; otherwise x must hold the array returned in sx by the last call to nag_rand_kfold_xyw (g05pv).
- 4: state(:) – int64int32nag_int array
Note: the actual argument supplied must be the array state supplied to the initialization routines nag_rand_init_repeat (g05kf) or nag_rand_init_nonrepeat (g05kg).
Contains information on the selected base generator and its current state.
Optional Input Parameters
- 1: n – int64int32nag_int scalar
Default:
- if sordx = 2, the second dimension of x;
- otherwise the first dimension of x.
n, the number of observations.
Constraint: n ≥ 1.
- 2: m – int64int32nag_int scalar
Default:
- if sordx = 2, the first dimension of x;
- otherwise the second dimension of x.
m, the number of variables.
Constraint: m ≥ 1.
- 3: sordx – int64int32nag_int scalar
Default: 1
Determines how variables are stored in x.
Constraint: sordx = 1 or 2.
- 4: y(:) – double array
Optionally, y, the values of y for the original dataset. If fold ≠ 1, y must hold the vector returned in sy by the last call to nag_rand_kfold_xyw (g05pv).
- 5: w(:) – double array
Optionally, w, the values of w for the original dataset. If fold ≠ 1, w must hold the vector returned in sw by the last call to nag_rand_kfold_xyw (g05pv).
- 6: sordsx – int64int32nag_int scalar
Default: sordx
Determines how variables are stored in sx.
Constraint: sordsx = 1 or 2.
Output Parameters
- 1: nt – int64int32nag_int scalar
nt, the number of observations in the training dataset.
- 2: state(:) – int64int32nag_int array
Contains updated information on the state of the generator.
- 3: sx(:,:) – double array
The first dimension, dsx, of the array sx will be
- if sordsx = 1, n;
- if sordsx = 2, m.
The second dimension of the array sx will be m if sordsx = 1 and n otherwise.
The way the data is stored in sx is defined by sordsx.
If sordsx = 1, sx(i,j) contains the ith observation for the jth variable, for i = 1,2,…,n and j = 1,2,…,m.
If sordsx = 2, sx(j,i) contains the ith observation for the jth variable, for i = 1,2,…,n and j = 1,2,…,m.
sx holds the values of X for the training and validation datasets, with the training data Xt held in observations 1 to nt and the validation data Xv in observations nt+1 to n.
- 4: sy(:) – double array
If y is supplied then sy holds the values of y for the training and validation datasets, with yt held in elements 1 to nt and yv in elements nt+1 to n.
- 5: sw(:) – double array
If w is supplied then sw holds the values of w for the training and validation datasets, with wt held in elements 1 to nt and wv in elements nt+1 to n.
- 6: errbuf – string (length at least 200)
- 7: ifail – int64int32nag_int scalar
ifail = 0 unless the function detects an error (see Error Indicators and Warnings).
Error Indicators and Warnings
Note: nag_rand_kfold_xyw (g05pv) may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the function:
Cases prefixed with W are classified as warnings and
do not generate an error of type NAG:error_n. See nag_issue_warnings.
- Constraint: 2 ≤ k ≤ n.
- Constraint: 1 ≤ fold ≤ k.
- Constraint: n ≥ 1.
- Constraint: m ≥ 1.
- Constraint: sordx = 1 or 2.
- W More than 50% of the data did not move when the data was shuffled.
- Constraint: if sordx = 1, the first dimension of x must be n.
- Constraint: if sordx = 2, the first dimension of x must be m.
- On entry, state vector has been corrupted or not initialized.
- Constraint: sordsx = 1 or 2.
-
An unexpected error has been triggered by this routine. Please
contact
NAG.
-
Your licence key may have expired or may not have been installed correctly.
-
Dynamic memory allocation failed.
Accuracy
Not applicable.
Further Comments
nag_rand_kfold_xyw (g05pv) will be computationally more efficient if each observation in x is contiguous, that is sordx = 2.
Because of the way nag_rand_kfold_xyw (g05pv) stores the data you should usually generate the training and validation datasets in order, i.e., set fold = 1 on the first call and increment it by one at each subsequent call. However, there are times when a different calling sequence would be beneficial, for example, when performing different cross-validation analyses on different threads. This is possible, as long as the following is borne in mind:
- nag_rand_kfold_xyw (g05pv) must be called with fold = 1 first.
- Other than the first set, you can obtain the training and validation datasets in any order, but for a given x you can only obtain each once.
For example, if k = 5 and you have three threads, you would call nag_rand_kfold_xyw (g05pv) once with fold = 1. You would then copy the x returned onto each thread and generate the remaining four sets of data by splitting them between the threads, for example, the first thread running with fold = 2 and 3, the second with fold = 4 and the third with fold = 5.
Example
This example uses nag_rand_kfold_xyw (g05pv) to facilitate 5-fold cross-validation.
A set of simulated data is split into five training and validation datasets.
nag_correg_glm_binomial (g02gb) is used to fit a logistic regression model to each training dataset and then nag_correg_glm_predict (g02gp) is used to predict the response for the observations in the validation dataset.
The counts of true and false positives and negatives, along with the sensitivity and specificity, are then reported.
Open in the MATLAB editor:
g05pv_example
function g05pv_example
fprintf('g05pv example results\n\n');
link = 'G';
mean = 'M';
errfn = 'B';
vfobs = false;
x = [ 0.0 -0.1 0.0 1.0; 0.4 -1.1 1.0 1.0; -0.5 0.2 1.0 0.0;
0.6 1.1 1.0 0.0; -0.3 -1.0 1.0 1.0; 2.8 -1.8 0.0 1.0;
0.4 -0.7 0.0 1.0; -0.4 -0.3 1.0 0.0; 0.5 -2.6 0.0 0.0;
-1.6 -0.3 1.0 1.0; 0.4 0.6 1.0 0.0; -1.6 0.0 1.0 1.0;
0.0 0.4 1.0 1.0; -0.1 0.7 1.0 1.0; -0.2 1.8 1.0 1.0;
-0.9 0.7 1.0 1.0; -1.1 -0.5 1.0 1.0; -0.1 -2.2 1.0 1.0;
-1.8 -0.5 1.0 1.0; -0.8 -0.9 0.0 1.0; 1.9 -0.1 1.0 1.0;
0.3 1.4 1.0 1.0; 0.4 -1.2 1.0 0.0; 2.2 1.8 1.0 0.0;
1.4 -0.4 0.0 1.0; 0.4 2.4 1.0 1.0; -0.6 1.1 1.0 1.0;
1.4 -0.6 1.0 1.0; -0.1 -0.1 0.0 0.0; -0.6 -0.4 0.0 0.0;
0.6 -0.2 1.0 1.0; -1.8 -0.3 1.0 1.0; -0.3 1.6 1.0 1.0;
-0.6 0.8 0.0 1.0; 0.3 -0.5 0.0 0.0; 1.6 1.4 1.0 1.0;
-1.1 0.6 1.0 1.0; -0.3 0.6 1.0 1.0; -0.6 0.1 1.0 1.0;
1.0 0.6 1.0 1.0];
y = [0;1;0;0;0;0;1;1;1;0;0;1;1;0;0;0;0;1;1;1;
1;0;1;1;1;0;0;1;0;0;1;1;0;0;1;0;0;0;0;1];
t = ones(size(x,1),1);
isx = int64(ones(size(x,2),1));
ip = int64(sum(isx) + (upper(mean(1:1)) == 'M'));
seed = int64(42321);
genid = int64(6);
subid = int64(0);
[state,ifail] = g05kf( ...
genid,subid,seed);
k = int64(5);
warn_state = nag_issue_warnings();
nag_issue_warnings(true);
tn = 0;
fn = 0;
fp = 0;
tp = 0;
for i = 1:k
fold = int64(i);
[nt,state,x,y,t,ifail] = g05pv( ...
k,fold,x,state,'y',y,'w',t);
if (ifail~=0 & ifail~=61)
break
end
[~,~,b,~,~,cov,~,ifail] = g02gb( ...
link,mean,x,isx,ip,y,t,'n',nt);
if (ifail~=0 & ifail < 6)
break
end
[~,~,pred,~,ifail] = g02gp( ...
errfn,link,mean,x(nt+1:end,:),isx,b, ...
cov,vfobs,'t',t(nt+1:end));
if (ifail~=0)
break
end
obs_val = ceil(y(nt+1:end) + 0.5);
pred_val = (pred >= 0.5) + 1;
count = zeros(2,2);
for j = 1:size(pred_val,1)
count(pred_val(j),obs_val(j)) = count(pred_val(j),obs_val(j)) + 1;
end
tn = tn + count(1,1);
fn = fn + count(1,2);
fp = fp + count(2,1);
tp = tp + count(2,2);
end
nag_issue_warnings(warn_state);
np = tp + fn;
nn = fp + tn;
fprintf(' Observed\n');
fprintf(' --------------------------\n');
fprintf(' Predicted | Negative Positive Total\n');
fprintf(' --------------------------------------\n');
fprintf(' Negative | %5d %5d %5d\n', tn, fn, tn + fn);
fprintf(' Positive | %5d %5d %5d\n', fp, tp, fp + tp);
fprintf(' Total | %5d %5d %5d\n', nn, np, nn + np);
fprintf('\n');
if (np~=0)
fprintf(' True Positive Rate (Sensitivity): %4.2f\n', tp / np);
else
fprintf(' True Positive Rate (Sensitivity): No positives in data\n');
end
if (nn~=0)
fprintf(' True Negative Rate (Specificity): %4.2f\n', tn / nn);
else
fprintf(' True Negative Rate (Specificity): No negatives in data\n');
end
g05pv example results
Observed
--------------------------
Predicted | Negative Positive Total
--------------------------------------
Negative | 18 8 26
Positive | 4 10 14
Total | 22 18 40
True Positive Rate (Sensitivity): 0.56
True Negative Rate (Specificity): 0.82
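The sensitivity and specificity printed in the results follow directly from the pooled confusion matrix. As a minimal sketch of that final calculation (shown in Python for brevity, with the counts copied from the table above; this is not part of the toolbox example):

```python
# Pooled confusion-matrix counts from the cross-validation results above.
tn, fn, fp, tp = 18, 8, 4, 10

sensitivity = tp / (tp + fn)   # true positive rate: tp over all observed positives
specificity = tn / (tn + fp)   # true negative rate: tn over all observed negatives

print('Sensitivity: %4.2f' % sensitivity)
print('Specificity: %4.2f' % specificity)
```

The `%4.2f` format matches the fprintf format used in the MATLAB example.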
© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2015