naginterfaces.library.mip.best_subset_given_size

naginterfaces.library.mip.best_subset_given_size(mincr, m, ip, nbest, f, mincnt, gamma, acc, data=None, io_manager=None)[source]

Given a set of m features and a scoring mechanism for any subset of those features, best_subset_given_size selects the nbest best subsets of size p using a direct communication branch and bound algorithm.

For full information please refer to the NAG Library document for h05ab

https://support.nag.com/numeric/nl/nagdoc_30.2/flhtml/h/h05abf.html

Parameters
mincr : int

Flag indicating whether the scoring function f is increasing or decreasing.

If mincr = 1, f is increasing, i.e., f(S_j) ≤ f(S_i), and the subsets with the largest score will be selected.

If mincr = 0, f is decreasing, i.e., f(S_j) ≥ f(S_i), and the subsets with the smallest score will be selected.

In both cases the inequality must hold for all S_j ⊆ S_i ⊆ Ω.

m : int

m = |Ω|, the number of features in the full feature set.

ip : int

p, the number of features in the subset of interest, i.e., the size of the best subsets required.

nbest : int

nb, the maximum number of best subsets required. The actual number of subsets returned is given by la on final exit. If la < nb on final exit then errno = 42 is returned.

f : callable score = f(m, drop, z, a, data=None)

must evaluate the scoring function f.

Parameters
m : int

m = |Ω|, the number of features in the full feature set.

drop : int

Flag indicating whether the intermediate subsets should be constructed by dropping features from the full set (drop = 1) or adding features to the empty set (drop = 0). See score for additional details.

z : int, ndarray

z[i-1], for i = 1, 2, …, contains the list of features which, along with those specified in a, define the subsets whose score is required. See score for additional details.

a : int, ndarray

a[i-1], for i = 1, 2, …, contains the list of features which, along with those specified in z, define the subsets whose score is required. See score for additional details.

data : arbitrary, optional, modifiable in place

User-communication data for callback functions.

Returns
score : float, array-like

The value f(S_i), for i = 1, 2, …, the score associated with the ith subset. S_i is constructed as follows:

If drop = 1, S_i is constructed by dropping the features specified in the leading elements of z, together with the single feature given in a[i-1], from the full set of features, Ω. The subset will, therefore, contain m - q - 1 features, where q is the number of features taken from z.

If drop = 0, S_i is constructed by adding the features specified in the leading elements of z, together with the single feature specified in a[i-1], to the empty set, ∅. The subset will, therefore, contain q + 1 features, where q is the number of features taken from z.

In both cases the individual features are referenced by the integers 1 to m, with 1 indicating the first feature, 2 the second, etc., for some arbitrary ordering of the features, chosen by you prior to calling best_subset_given_size.

For example, 1 might refer to the first variable in a particular set of data, 2 the second, etc.

If a is empty, the score for a single subset should be returned.

This subset is constructed by adding or removing only those features specified in the leading elements of z.

If z is also empty, this subset will either be Ω (when drop = 1) or ∅ (when drop = 0).
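As a concrete illustration of the construction above, the following is a minimal sketch of a scoring callback, assuming a hypothetical scoring mechanism in which a subset's score is the sum of fixed per-feature weights (such a score is increasing, so mincr = 1 would apply):

```python
# Hypothetical per-feature weights for m = 5 features. The weights, and
# scoring by weight sum itself, are illustrative assumptions only; a real
# application would typically score subsets via some model-fit criterion.
WEIGHTS = [0.1, 0.4, 0.2, 0.8, 0.3]

def f(m, drop, z, a, data=None):
    """Score the subsets defined by drop, z and a (features are 1-based)."""
    def score_of(extra):
        if drop == 1:
            # Drop the features in z, plus one extra feature, from {1, ..., m}.
            subset = set(range(1, m + 1)) - set(z) - extra
        else:
            # Add the features in z, plus one extra feature, to the empty set.
            subset = set(z) | extra
        return sum(WEIGHTS[i - 1] for i in subset)

    if len(a) == 0:
        # No varying feature: score the single subset defined by z alone.
        return [score_of(set())]
    return [score_of({ai}) for ai in a]
```

For example, with drop = 0, z = [1, 3] and a = [2, 4] the callback returns the scores of the two subsets {1, 2, 3} and {1, 3, 4}.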

mincnt : int

k, the minimum number of times the effect of each feature, x_i, must have been observed before f(S \ {x_i}) is estimated from f(S) as opposed to being calculated directly.

If k = 0 then f(S \ {x_i}) is never estimated.

If mincnt < 0 then k is set to m.

gamma : float

γ, the scaling factor used when estimating scores. If gamma < 0 then γ = 1 is used.

acc : float, array-like, shape (2)

A measure of the accuracy of the scoring function, f.

Letting e_i = ε_1 |f(S_i)| + ε_2, then when confirming whether the scoring function is strictly increasing or decreasing (as described in mincr), or when assessing whether a node defined by subset S_i can be trimmed, any scores in the range f(S_i) ± e_i are treated as being numerically equivalent.

If 0 ≤ acc[0] < 1 then ε_1 = acc[0], otherwise ε_1 = 0.

If acc[1] ≥ 0 then ε_2 = acc[1], otherwise ε_2 = 0.

In most situations setting both ε_1 and ε_2 to zero should be sufficient.

Using a nonzero value, when one is not required, can significantly increase the number of subsets that need to be evaluated.
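The equivalence test described above amounts to the following check, sketched here with illustrative names (eps1 and eps2 play the roles of acc[0] and acc[1]):

```python
def numerically_equivalent(f_i, f_j, eps1=0.0, eps2=0.0):
    """Treat score f_j as equivalent to f_i when it lies in the range
    f_i +/- e_i, where e_i = eps1 * abs(f_i) + eps2."""
    e_i = eps1 * abs(f_i) + eps2
    return abs(f_i - f_j) <= e_i
```

With eps1 = eps2 = 0 only exactly equal scores compare as equivalent, which matches the recommended default of setting both elements of acc to zero.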

data : arbitrary, optional

User-communication data for callback functions.

io_manager : FileObjManager, optional

Manager for I/O in this routine.

Returns
la : int

The number of best subsets returned.

bscore : float, ndarray

Holds the scores for the la best subsets returned in bz.

bz : int, ndarray

The jth best subset is constructed by dropping the features specified in bz[i-1, j-1], for i = 1, 2, …, m - p and j = 1, 2, …, la, from the set of all features, Ω. The score for the jth best subset is given in bscore[j-1].

Raises
NagValueError
(errno )

On entry, mincr = ⟨value⟩.

Constraint: mincr = 0 or 1.

(errno )

On entry, m = ⟨value⟩.

Constraint: m ≥ 2.

(errno )

On entry, ip = ⟨value⟩ and m = ⟨value⟩.

Constraint: 1 ≤ ip ≤ m.

(errno )

On entry, nbest = ⟨value⟩.

Constraint: nbest ≥ 1.

(errno )

On exit from f, score[⟨value⟩] = ⟨value⟩, which is inconsistent with the score for the parent node. The score for the parent node is ⟨value⟩.

Warns
NagAlgorithmicWarning
(errno )

On entry, nbest = ⟨value⟩.

But only ⟨value⟩ best subsets could be calculated.

NagCallbackTerminateWarning
(errno )

A nonzero value has been returned: ⟨value⟩.

Notes

Given Ω = {x_i : i ∈ ℤ, 1 ≤ i ≤ m}, a set of m unique features, and a scoring mechanism f(S) defined for all S ⊆ Ω, best_subset_given_size is designed to find S_o ⊆ Ω, an optimal subset of size p, i.e., |S_o| = p, where |S_o| denotes the cardinality of S_o, the number of elements in the set.

The definition of the optimal subset depends on the properties of the scoring mechanism. If

    f(S_j) ≤ f(S_i), for all S_j ⊆ S_i ⊆ Ω

then the optimal subset is defined as one of the solutions to

    maximize f(S) subject to S ⊆ Ω and |S| = p

else if

    f(S_j) ≥ f(S_i), for all S_j ⊆ S_i ⊆ Ω

then the optimal subset is defined as one of the solutions to

    minimize f(S) subject to S ⊆ Ω and |S| = p

If neither of these properties holds then best_subset_given_size cannot be used.

As well as returning the optimal subset, S_o, best_subset_given_size can return the nb best solutions of size p. If S_o(j) denotes the jth best subset, for j = 1, 2, …, nb, then the jth best subset is defined as one of the solutions to either

    maximize f(S) subject to S ⊆ Ω, |S| = p and S ∉ {S_o(1), …, S_o(j-1)}

or

    minimize f(S) subject to S ⊆ Ω, |S| = p and S ∉ {S_o(1), …, S_o(j-1)}

depending on the properties of f.

The solutions are found using a branch and bound method, where each node of the tree is a subset of Ω. Assuming that the scoring function is increasing, a particular node, defined by subset S_i, can be trimmed from the tree if f(S_i) is lower than the nbth highest score we have observed so far for a subset of size p, i.e., our current best guess of the score for the nbth best subset. In addition, because f is increasing, we can also drop all nodes defined by any subset S_j where S_j ⊆ S_i, thus avoiding the need to enumerate the whole tree. Similar short cuts can be taken if the scoring function is decreasing. A full description of this branch and bound algorithm can be found in Ridout (1988).

Rather than calculate the score at a given node of the tree, best_subset_given_size utilizes the fast branch and bound algorithm of Somol et al. (2004) and attempts to estimate the score where possible. For each feature, x_i, two values are stored: a count c_i and μ_i, an estimate of the contribution of that feature. An initial value of zero is used for both c_i and μ_i. At any stage of the algorithm where both f(S) and f(S \ {x_i}) have been calculated (as opposed to estimated), the estimated contribution of the feature x_i is updated to

    μ_i = (c_i μ_i + (f(S) - f(S \ {x_i}))) / (c_i + 1)

and c_i is incremented by 1; therefore, at each stage μ_i is the mean contribution of x_i observed so far and c_i is the number of observations used to calculate that mean.

As long as c_i ≥ k, for the user-supplied constant k, then rather than calculating f(S \ {x_i}) this function estimates it, using f(S) - γ μ_i (or the estimated score minus γ μ_i if f(S) has itself been estimated), where γ is a user-supplied scaling factor. An estimated score is never used to trim a node or returned as the optimal score.

Setting k = 0 in this function will cause the algorithm to always calculate the scores, returning to the branch and bound algorithm of Ridout (1988). In most cases it is preferable to use the fast branch and bound algorithm, by setting k > 0, unless the score function is iterative in nature, i.e., f(S) must have been calculated before f(S \ {x_i}) can be calculated.
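The bookkeeping described in the preceding paragraphs can be sketched in a few lines of plain Python; the names mu_i, c_i and gamma are illustrative, and this is not the library's internal code:

```python
def update_contribution(mu_i, c_i, f_S, f_S_minus_xi):
    """Fold one directly calculated contribution of feature x_i into the
    running mean mu_i, and increment the observation count c_i."""
    mu_i = (c_i * mu_i + (f_S - f_S_minus_xi)) / (c_i + 1)
    return mu_i, c_i + 1

def estimate_score(f_S, mu_i, gamma=1.0):
    """Estimate the score of S with feature x_i removed, once the count
    for x_i has reached the mincnt threshold."""
    return f_S - gamma * mu_i

mu, c = 0.0, 0
mu, c = update_contribution(mu, c, f_S=1.2, f_S_minus_xi=1.0)  # contribution 0.2
mu, c = update_contribution(mu, c, f_S=0.9, f_S_minus_xi=0.5)  # contribution 0.4
# mu is now the mean of the two observed contributions (0.3), and c == 2.
```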

best_subset_given_size is a direct communication version of best_subset_given_size_revcomm().
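To fix ideas, the quantity the routine computes (for an increasing score, the nbest highest-scoring subsets of size p) can be reproduced by brute force in standard-library Python; the weight-sum score is again an illustrative assumption, and the point of the branch and bound algorithm is to reach the same answer without enumerating every subset:

```python
from itertools import combinations

def best_subsets_brute_force(m, p, nbest, score):
    """Return the nbest size-p subsets of {1, ..., m} with the largest
    scores, by exhaustive enumeration."""
    all_subsets = (set(s) for s in combinations(range(1, m + 1), p))
    return sorted(all_subsets, key=score, reverse=True)[:nbest]

weights = [0.1, 0.4, 0.2, 0.8, 0.3]
best = best_subsets_brute_force(5, 2, 3, lambda s: sum(weights[i - 1] for i in s))
# best[0] is {2, 4}, the pair with the largest weight sum (0.4 + 0.8).
```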

References

Narendra, P M and Fukunaga, K, 1977, A branch and bound algorithm for feature subset selection, IEEE Transactions on Computers (9), 917–922

Ridout, M S, 1988, Algorithm AS 233: An improved branch and bound algorithm for feature subset selection, Journal of the Royal Statistical Society, Series C (Applied Statistics), 37(1), 139–147

Somol, P, Pudil, P and Kittler, J, 2004, Fast branch and bound algorithms for optimal feature selection, IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(7), 900–912