NAG Toolbox: nag_opt_bounds_quasi_deriv_easy (e04ky)
Purpose
nag_opt_bounds_quasi_deriv_easy (e04ky) is an easy-to-use quasi-Newton algorithm for finding a minimum of a function F(x1,x2,…,xn), subject to fixed upper and lower bounds on the independent variables x1,x2,…,xn, when first derivatives of F are available.
It is intended for functions which are continuous and which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).
Syntax
[bl, bu, x, f, g, iw, w, user, ifail] = e04ky(ibound, funct2, bl, bu, x, 'n', n, 'liw', liw, 'lw', lw, 'user', user)
[bl, bu, x, f, g, iw, w, user, ifail] = nag_opt_bounds_quasi_deriv_easy(ibound, funct2, bl, bu, x, 'n', n, 'liw', liw, 'lw', lw, 'user', user)
Description
nag_opt_bounds_quasi_deriv_easy (e04ky) is applicable to problems of the form:
minimize F(x1,x2,…,xn) subject to lj ≤ xj ≤ uj, j = 1,2,…,n,
when first derivatives are available.
Special provision is made for problems which actually have no bounds on the xj, problems which have only non-negativity bounds, and problems in which l1 = l2 = … = ln and u1 = u2 = … = un. You must supply a function to calculate the values of F(x) and its first derivatives at any point x.
From a starting point you supply, there is generated, on the basis of estimates of the curvature of F, a sequence of feasible points which is intended to converge to a local minimum of the constrained function. An attempt is made to verify that the final point is a minimum.
A typical iteration starts at the current point x where nz (say) variables are free from both their bounds. The projected gradient vector gz, whose elements are the derivatives of F(x) with respect to the free variables, is known. A unit lower triangular matrix L and a diagonal matrix D (both of dimension nz), such that L D Lᵀ is a positive definite approximation of the matrix of second derivatives with respect to the free variables (i.e., the projected Hessian), are also held. The equations L D Lᵀ pz = −gz are solved to give a search direction pz, which is expanded to an n-vector p by an insertion of appropriate zero elements. Then α is found such that F(x + αp) is approximately a minimum (subject to the fixed bounds) with respect to α; x is replaced by x + αp, and the matrices L and D are updated so as to be consistent with the change produced in the gradient by the step αp. If any variable actually reaches a bound during the search along p, it is fixed and nz is reduced for the next iteration.
There are two sets of convergence criteria – a weaker and a stronger. Whenever the weaker criteria are satisfied, the Lagrange multipliers are estimated for all the active constraints. If any Lagrange multiplier estimate is significantly negative, then one of the variables associated with a negative Lagrange multiplier estimate is released from its bound and the next search direction is computed in the extended subspace (i.e., nz is increased). Otherwise minimization continues in the current subspace provided that this is practicable. When it is not, or when the stronger convergence criteria are already satisfied, then, if one or more Lagrange multiplier estimates are close to zero, a slight perturbation is made in the values of the corresponding variables in turn until a lower function value is obtained. The normal algorithm is then resumed from the perturbed point.
If a saddle point is suspected, a local search is carried out with a view to moving away from the saddle point. A local search is also performed when a point is found which is thought to be a constrained minimum.
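To make the search-direction computation concrete, the following short sketch (illustrative only; the 3-variable factors are invented and this is not NAG library code) solves the equations described above for pz given the unit lower triangular factor L, the diagonal factor D and the projected gradient gz:
% Illustrative sketch: a search direction from the L*D*L' factorization of the
% projected Hessian, for an invented 3-variable free subspace.
L  = [1 0 0; 0.5 1 0; -0.2 0.3 1];   % unit lower triangular factor
D  = diag([2; 1.5; 4]);              % positive diagonal factor
gz = [0.3; -1.2; 0.7];               % projected gradient at the current point
pz = -((L*D*L') \ gz);               % solve L*D*L'*pz = -gz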
References
Gill P E and Murray W (1976) Minimization subject to bounds on the variables NPL Report NAC 72 National Physical Laboratory
Parameters
Compulsory Input Parameters
- 1: ibound – int64int32nag_int scalar
-
Indicates whether the facility for dealing with bounds of special forms is to be used. It must be set to one of the following values (see the sketch following this parameter list):
- ibound = 0: if you are supplying all the lj and uj individually.
- ibound = 1: if there are no bounds on any xj.
- ibound = 2: if all the bounds are of the form 0 ≤ xj.
- ibound = 3: if l1 = l2 = … = ln and u1 = u2 = … = un.
Constraint: 0 ≤ ibound ≤ 3.
- 2: funct2 – function handle or string containing name of m-file
-
You must supply funct2 to calculate the values of the function F(x) and its first derivatives ∂F/∂xj at any point x. It should be tested separately before being used in conjunction with nag_opt_bounds_quasi_deriv_easy (e04ky) (see the E04 Chapter Introduction).
[fc, gc, user] = funct2(n, xc, user)
Input Parameters
- 1: n – int64int32nag_int scalar
-
The number of variables.
- 2: xc(n) – double array
-
The point x at which the function and derivatives are required.
- 3: user – Any MATLAB object
funct2 is called from nag_opt_bounds_quasi_deriv_easy (e04ky) with the object supplied to nag_opt_bounds_quasi_deriv_easy (e04ky).
Output Parameters
- 1: fc – double scalar
-
The value of the function F at the current point xc.
- 2: gc(n) – double array
-
gc(j) must be set to the value of the first derivative ∂F/∂xj at the point xc, for j = 1,2,…,n.
- 3: user – Any MATLAB object
- 3: bl(n) – double array
-
The lower bounds lj.
If ibound is set to 0, you must set bl(j) to lj, for j = 1,2,…,n. (If a lower bound is not specified for a particular xj, the corresponding bl(j) should be set to −10⁶.)
If ibound is set to 3, you must set bl(1) to l1; nag_opt_bounds_quasi_deriv_easy (e04ky) will then set the remaining elements of bl equal to bl(1).
- 4: bu(n) – double array
-
The upper bounds uj.
If ibound is set to 0, you must set bu(j) to uj, for j = 1,2,…,n. (If an upper bound is not specified for a particular xj, the corresponding bu(j) should be set to 10⁶.)
If ibound is set to 3, you must set bu(1) to u1; nag_opt_bounds_quasi_deriv_easy (e04ky) will then set the remaining elements of bu equal to bu(1).
- 5: x(n) – double array
-
x(j) must be set to a guess at the jth component of the position of the minimum, for j = 1,2,…,n. The function checks the gradient at the starting point, and is more likely to detect any error in your programming if the initial x(j) are nonzero and mutually distinct.
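The following sketch (array sizes and bound values are illustrative assumptions, not taken from the library documentation) shows how ibound, bl and bu fit together for a hypothetical four-variable problem, as referred to above:
% Illustrative sketch: ibound = 0, with all bounds supplied individually;
% +/-1e6 marks a variable (x3) that is effectively unbounded.
ibound = int64(0);
bl = [1; -2; -1e6; 1];
bu = [3;  0;  1e6; 3];
x  = [2.1; -1.3; 0.4; 1.7];   % nonzero, mutually distinct starting guess

% Alternative sketch: ibound = 3, identical bounds -5 <= xj <= 5 on every
% variable; only bl(1) and bu(1) need be set, e04ky fills in the rest.
% ibound = int64(3);
% bl = zeros(4,1);  bl(1) = -5;
% bu = zeros(4,1);  bu(1) =  5;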
Optional Input Parameters
- 1: n – int64int32nag_int scalar
-
Default: the dimension of the arrays bl, bu, x. (An error is raised if these dimensions are not equal.)
The number of independent variables.
Constraint: n ≥ 1.
- 2: liw – int64int32nag_int scalar
-
Default: the dimension of the array iw.
Constraint: liw ≥ n + 2.
- 3: lw – int64int32nag_int scalar
-
Default: the dimension of the array w.
Constraint: lw ≥ max(n×(n−1)/2 + 10×n, 11).
- 4: user – Any MATLAB object
user is not used by
nag_opt_bounds_quasi_deriv_easy (e04ky), but is passed to
funct2. Note that for large objects it may be more efficient to use a global variable which is accessible from the m-files than to use
user.
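As a hedged illustration (the field name scale and the simple objective below are hypothetical, not part of the interface), problem data can be threaded through to funct2 via user instead of a global variable:
% Illustrative sketch: passing problem data to funct2 through 'user'.
user = struct('scale', 2.5);                          % hypothetical data
[bl, bu, x, f, g, iw, w, user, ifail] = ...
    e04ky(ibound, @funct2, bl, bu, x, 'user', user);

function [fc, gc, user] = funct2(n, xc, user)
  % 'user' arrives here unchanged from the call above.
  fc = user.scale*sum(xc.^2);
  gc = 2*user.scale*xc;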
Output Parameters
- 1: bl(n) – double array
-
The lower bounds actually used by nag_opt_bounds_quasi_deriv_easy (e04ky).
- 2: bu(n) – double array
-
The upper bounds actually used by nag_opt_bounds_quasi_deriv_easy (e04ky).
- 3: x(n) – double array
-
The lowest point found during the calculations. Thus, if ifail = 0 on exit, x(j) is the jth component of the position of the minimum.
- 4: f – double scalar
-
The value of F(x) corresponding to the final point stored in x.
- 5: g(n) – double array
-
The value of ∂F/∂xj corresponding to the final point stored in x, for j = 1,2,…,n; the value of g(j) for variables not on a bound should normally be close to zero.
- 6: iw(liw) – int64int32nag_int array
-
If ifail = 0, 3 or 5, the first n elements of iw contain information about which variables are currently on their bounds and which are free (see also the sketch following this parameter list). Specifically, if x(i) is:
– fixed on its upper bound, iw(i) is −1;
– fixed on its lower bound, iw(i) is −2;
– effectively a constant (i.e., li = ui), iw(i) is −3;
– free, iw(i) gives its position in the sequence of free variables.
In addition, iw(n+1) contains the number of free variables (i.e., nz). The rest of the array is used as workspace.
- 7: w(lw) – double array
-
If ifail = 0, 3 or 5, w(i) contains the ith element of the projected gradient vector gz, for i = 1,2,…,n. In addition, w(n+1) contains an estimate of the condition number of the projected Hessian matrix (i.e., k). The rest of the array is used as workspace.
- 8: user – Any MATLAB object
- 9: ifail – int64int32nag_int scalar
ifail = 0 unless the function detects an error (see Error Indicators and Warnings).
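The sketch below (illustrative only; the variable names are assumptions) shows one way to read the bound/free information and the projected gradient out of iw and w after a call that exits with ifail = 0, 3 or 5, as mentioned in the iw and w descriptions above:
% Illustrative sketch: interpreting iw and w on exit.
n      = numel(x);
nfree  = iw(n+1);                    % number of variables free from their bounds
onlow  = find(iw(1:n) == -2);        % indices fixed on a lower bound
onhigh = find(iw(1:n) == -1);        % indices fixed on an upper bound
gz     = w(1:n);                     % projected gradient at the final point
condH  = w(n+1);                     % condition number estimate of projected Hessian
fprintf('%d free variables, %d on bounds, cond. estimate %g\n', ...
        nfree, numel(onlow) + numel(onhigh), condH);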
Error Indicators and Warnings
Note: nag_opt_bounds_quasi_deriv_easy (e04ky) may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the function:
Cases prefixed with W are classified as warnings and
do not generate an error of type NAG:error_n. See nag_issue_warnings.
- ifail = 1
On entry, n < 1,
or ibound < 0,
or ibound > 3,
or ibound = 0 and bl(j) > bu(j) for some j,
or ibound = 3 and bl(1) > bu(1),
or liw < n + 2,
or lw < max(n×(n−1)/2 + 10×n, 11).
- ifail = 2
There have been a large number of function evaluations, yet the algorithm does not seem to be converging. The calculations can be restarted from the final point held in x. The error may also indicate that F(x) has no minimum.
- W ifail = 3
The conditions for a minimum have not all been met but a lower point could not be found and the algorithm has failed.
- ifail = 4
An overflow has occurred during the computation. This is an unlikely failure, but if it occurs you should restart at the latest point given in x.
- W ifail = 5
- W ifail = 6
- W ifail = 7
- W ifail = 8
There is some doubt about whether the point x found by nag_opt_bounds_quasi_deriv_easy (e04ky) is a minimum. The degree of confidence in the result decreases as ifail increases. Thus, when ifail = 5 it is probable that the final x gives a good estimate of the position of a minimum, but when ifail = 8 it is very unlikely that the function has found a minimum.
- ifail = 9
In the search for a minimum, the modulus of one of the variables has become very large (about 10⁶). This indicates that there is a mistake in funct2, that your problem has no finite solution, or that the problem needs rescaling (see Further Comments).
- ifail = 10
It is very likely that you have made an error in forming the gradient.
- ifail = −99
An unexpected error has been triggered by this routine. Please contact NAG.
- ifail = −399
Your licence key may have expired or may not have been installed correctly.
- ifail = −999
Dynamic memory allocation failed.
If you are dissatisfied with the result (e.g., because ifail = 5, 6, 7 or 8), it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure. If persistent trouble occurs it may be advisable to try nag_opt_bounds_mod_deriv_easy (e04kz).
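As a hedged illustration of acting on these exits (the restart strategy below is just one possibility, not something prescribed by the library), warning-class failures can be returned in ifail rather than raised as errors:
% Illustrative sketch: enable warnings and inspect ifail after the call.
nag_issue_warnings(true);            % W-class exits become warnings, not errors
[bl, bu, x, f, g, iw, w, user, ifail] = e04ky(ibound, @funct2, bl, bu, x);
if ifail == 2
  % Not yet converged: restart once from the best point found so far.
  [bl, bu, x, f, g, iw, w, user, ifail] = e04ky(ibound, @funct2, bl, bu, x);
elseif ifail >= 5 && ifail <= 8
  warning('Minimum is in doubt (ifail = %d); try a different starting point.', ifail);
end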
Accuracy
A successful exit (ifail = 0) is made from nag_opt_bounds_quasi_deriv_easy (e04ky) when (B1, B2 and B3) or B4 hold, and the local search confirms a minimum, where
- B1 ≡ α(k) × ‖p(k)‖ < (xtol + √ε) × (1 + ‖x(k)‖)
- B2 ≡ |F(k) − F(k−1)| < (xtol² + ε) × (1 + |F(k)|)
- B3 ≡ ‖gz(k)‖ < (ε^(1/3) + xtol) × (1 + |F(k)|)
- B4 ≡ ‖gz(k)‖ < 0.01 × √ε.
(Quantities with superscript (k) are the values at the kth iteration of the quantities mentioned in Description, xtol is an accuracy tolerance set internally by the function, ε is the machine precision and ‖.‖ denotes the Euclidean norm. The vector gz is returned in the array w.)
If ifail = 0, then the vector in x on exit, xsol, is almost certainly an estimate of the position of the minimum, xtrue, to the accuracy specified by xtol.
If ifail = 3 or ifail = 5, xsol may still be a good estimate of xtrue, but the following checks should be made. Let k denote an estimate of the condition number of the projected Hessian matrix at xsol. (The value of k is returned in w(n+1).) If
(i) the sequence F(x(1)), F(x(2)), F(x(3)), … converges to F(xsol) at a superlinear or a fast linear rate,
(ii) ‖gz(xsol)‖² < 10ε, and
(iii) k < 1/‖gz(xsol)‖,
then it is almost certain that xsol is a close approximation to the position of a minimum. When (ii) is true, then usually F(xsol) is a close approximation to F(xtrue).
When a successful exit is made then, for a computer with a mantissa of t decimals, one would expect to get about t/2 decimals accuracy in x, and about t−1 decimals accuracy in F, provided the problem is reasonably well scaled.
Further Comments
The number of iterations required depends on the number of variables, the behaviour of F(x) and the distance of the starting point from the solution. The number of operations performed in an iteration of nag_opt_bounds_quasi_deriv_easy (e04ky) is roughly proportional to n². In addition, each iteration makes at least one call of funct2. So, unless F(x) and the gradient vector can be evaluated very quickly, the run time will be dominated by the time spent in funct2.
Ideally the problem should be scaled so that at the solution the value of F(x) and the corresponding values of x1, x2, …, xn are each in the range (−1, +1), and so that at points a unit distance away from the solution, F is approximately a unit value greater than at the minimum. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that nag_opt_bounds_quasi_deriv_easy (e04ky) will take less computer time.
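As a hedged sketch of one way to apply such scaling (the scale factors, the wrapper scaled_funct2 and the assumption of positive scales are all illustrative, not part of the library interface), the minimization can be carried out in scaled variables y = x./s and mapped back afterwards:
% Illustrative sketch: minimize in scaled variables y = x./s, where s holds
% rough positive guesses of the magnitude of each variable at the solution.
s  = [10; 0.1; 1; 100];
yl = bl./s;  yu = bu./s;  y0 = x0./s;     % scaled bounds and starting point x0
[yl, yu, y, f, g, iw, w, user, ifail] = ...
    e04ky(ibound, @(n, yc, user) scaled_funct2(n, yc, user, s), yl, yu, y0);
x = y.*s;                                 % recover the unscaled solution

function [fc, gc, user] = scaled_funct2(n, yc, user, s)
  % Evaluate the original funct2 at x = y.*s and rescale the gradient (chain rule).
  [fc, gc, user] = funct2(n, yc.*s, user);
  gc = gc.*s;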
Example
A program to minimize
F = (x1 + 10x2)² + 5(x3 − x4)² + (x2 − 2x3)⁴ + 10(x1 − x4)⁴
subject to
1 ≤ x1 ≤ 3, −2 ≤ x2 ≤ 0, 1 ≤ x4 ≤ 3,
starting from the initial guess (3, −1, 0, 1).
function e04ky_example
fprintf('e04ky example results\n\n');
% Bounds supplied individually (ibound = 0); +/-1000000 marks the
% effectively unbounded variable x3.
ibound = int64(0);
bl = [1; -2; -1000000; 1];
bu = [3; 0; 1000000; 3];
% Initial guess at the position of the minimum.
x = [3; -1; 0; 1];
[bl, bu, x, f, g, iw, w, user, ifail] = ...
  e04ky(ibound, @funct2, bl, bu, x);
fprintf('Minimum point, x = %7.3f %7.3f %7.3f %7.3f\n',x);
fprintf('At found minimum, f = %7.3f\n',f);
fprintf(' g = %7.3f %7.3f %7.3f %7.3f\n',g);

function [fc, gc, user] = funct2(n, xc, user)
% Objective F(x) = (x1+10*x2)^2 + 5*(x3-x4)^2 + (x2-2*x3)^4 + 10*(x1-x4)^4
% and its first derivatives.
gc = zeros(n, 1);
a = xc(1) + 10*xc(2);
b = xc(3) - xc(4);
c = xc(2) - 2*xc(3);
d = xc(1) - xc(4);
fc = a^2 + 5*b^2 + c^4 + 10*d^4;
gc(1) = 2*a + 40*d^3;
gc(2) = 20*a + 4*c^3;
gc(3) = 10*b - 8*c^3;
gc(4) = -10*b - 40*d^3;
e04ky example results
Minimum point, x = 1.000 -0.085 0.409 1.000
At found minimum, f = 2.434
g = 0.295 0.000 -0.000 5.907
© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2015