NAG Library Routine Document
e04kyf (bounds_quasi_deriv_easy)
1
Purpose
e04kyf is an easy-to-use quasi-Newton algorithm for finding a minimum of a function $F\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)$, subject to fixed upper and lower bounds on the independent variables ${x}_{1},{x}_{2},\dots ,{x}_{n}$, when first derivatives of $F$ are available.
It is intended for functions which are continuous and which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).
2
Specification
Fortran Interface
Subroutine e04kyf ( 
n, ibound, funct2, bl, bu, x, f, g, iw, liw, w, lw, iuser, ruser, ifail) 
Integer, Intent (In) :: n, ibound, liw, lw
Integer, Intent (Inout) :: iuser(*), ifail
Integer, Intent (Out) :: iw(liw)
Real (Kind=nag_wp), Intent (Inout) :: bl(n), bu(n), x(n), ruser(*)
Real (Kind=nag_wp), Intent (Out) :: f, g(n), w(lw)
External :: funct2

C Header Interface
#include <nagmk26.h>
void 
e04kyf_ (const Integer *n, const Integer *ibound, void (NAG_CALL *funct2)(const Integer *n, const double xc[], double *fc, double gc[], Integer iuser[], double ruser[]), double bl[], double bu[], double x[], double *f, double g[], Integer iw[], const Integer *liw, double w[], const Integer *lw, Integer iuser[], double ruser[], Integer *ifail) 

3
Description
e04kyf is applicable to problems of the form:
$\text{Minimize }F\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)\text{ subject to }{l}_{j}\le {x}_{j}\le {u}_{j},\quad j=1,2,\dots ,n,$
when first derivatives are available.
Special provision is made for problems which actually have no bounds on the ${x}_{j}$, problems which have only non-negativity bounds, and problems in which ${l}_{1}={l}_{2}=\dots ={l}_{n}$ and ${u}_{1}={u}_{2}=\dots ={u}_{n}$. You must supply a subroutine to calculate the values of $F\left(x\right)$ and its first derivatives at any point $x$.
From a starting point that you supply, a sequence of feasible points is generated, on the basis of estimates of the curvature of $F\left(x\right)$, which is intended to converge to a local minimum of the constrained function. An attempt is made to verify that the final point is a minimum.
A typical iteration starts at the current point
$x$ where
${n}_{z}$ (say) variables are free from both their bounds. The projected gradient vector
${g}_{z}$, whose elements are the derivatives of
$F\left(x\right)$ with respect to the free variables, is known. A unit lower triangular matrix
$L$ and a diagonal matrix
$D$ (both of dimension
${n}_{z}$), such that
$LD{L}^{\mathrm{T}}$ is a positive definite approximation of the matrix of second derivatives with respect to the free variables (i.e., the projected Hessian) are also held. The equations
$LD{L}^{\mathrm{T}}{p}_{z}=-{g}_{z}$
are solved to give a search direction
${p}_{z}$, which is expanded to an
$n$-vector
$p$ by the insertion of appropriate zero elements. Then
$\alpha $ is found such that
$F\left(x+\alpha p\right)$ is approximately a minimum (subject to the fixed bounds) with respect to
$\alpha $;
$x$ is replaced by
$x+\alpha p$, and the matrices
$L$ and
$D$ are updated so as to be consistent with the change produced in the gradient by the step
$\alpha p$. If any variable actually reaches a bound during the search along
$p$, it is fixed and
${n}_{z}$ is reduced for the next iteration.
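The solve step of the iteration above can be sketched in C: with the unit lower triangular factor $L$ (strict lower part stored by rows) and the diagonal $D$, the direction ${p}_{z}$ satisfying $LD{L}^{\mathrm{T}}{p}_{z}=-{g}_{z}$ follows from a forward substitution, a diagonal scaling and a back substitution. The helper name and storage scheme here are hypothetical illustrations, not the routine's internals.

```c
/* Solve L*D*L^T * p = -g for p, where L is unit lower triangular with its
   strict lower part stored row by row (element (i,j), j < i, at index
   i*(i-1)/2 + j) and D is a diagonal matrix stored as a vector. */
static void solve_ldlt(int nz, const double L[], const double D[],
                       const double g[], double p[])
{
    int i, j, k = 0;
    /* Forward substitution: L*y = -g (y overwrites p) */
    for (i = 0; i < nz; i++) {
        double s = -g[i];
        for (j = 0; j < i; j++)
            s -= L[k++] * p[j];
        p[i] = s;
    }
    /* Diagonal solve: D*z = y */
    for (i = 0; i < nz; i++)
        p[i] /= D[i];
    /* Back substitution: L^T * p = z */
    for (i = nz - 1; i >= 0; i--) {
        double s = p[i];
        for (j = i + 1; j < nz; j++)
            s -= L[j*(j-1)/2 + i] * p[j];
        p[i] = s;
    }
}
```

Because $LD{L}^{\mathrm{T}}$ is positive definite, the diagonal entries of $D$ are positive and the three triangular solves are well defined.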
There are two sets of convergence criteria – a weaker and a stronger. Whenever the weaker criteria are satisfied, the Lagrange multipliers are estimated for all the active constraints. If any Lagrange multiplier estimate is significantly negative, then one of the variables associated with a negative Lagrange multiplier estimate is released from its bound and the next search direction is computed in the extended subspace (i.e., ${n}_{z}$ is increased). Otherwise minimization continues in the current subspace provided that this is practicable. When it is not, or when the stronger convergence criteria are already satisfied, then, if one or more Lagrange multiplier estimates are close to zero, a slight perturbation is made in the values of the corresponding variables in turn until a lower function value is obtained. The normal algorithm is then resumed from the perturbed point.
If a saddle point is suspected, a local search is carried out with a view to moving away from the saddle point. A local search is also performed when a point is found which is thought to be a constrained minimum.
4
References
Gill P E and Murray W (1976) Minimization subject to bounds on the variables NPL Report NAC 72 National Physical Laboratory
5
Arguments
 1: $\mathbf{n}$ – Integer Input

On entry: the number $n$ of independent variables.
Constraint:
${\mathbf{n}}\ge 1$.
 2: $\mathbf{ibound}$ – Integer Input

On entry: indicates whether the facility for dealing with bounds of special forms is to be used. It must be set to one of the following values:
 ${\mathbf{ibound}}=0$
 If you are supplying all the ${l}_{j}$ and ${u}_{j}$ individually.
 ${\mathbf{ibound}}=1$
 If there are no bounds on any ${x}_{j}$.
 ${\mathbf{ibound}}=2$
 If all the bounds are of the form $0\le {x}_{j}$.
 ${\mathbf{ibound}}=3$
 If ${l}_{1}={l}_{2}=\cdots ={l}_{n}$ and ${u}_{1}={u}_{2}=\cdots ={u}_{n}$.
Constraint:
$0\le {\mathbf{ibound}}\le 3$.
 3: $\mathbf{funct2}$ – Subroutine, supplied by the user. External Procedure

You must supply
funct2 to calculate the values of the function
$F\left(x\right)$ and its first derivative
$\frac{\partial F}{\partial {x}_{j}}$ at any point
$x$. It should be tested separately before being used in conjunction with
e04kyf (see the
E04 Chapter Introduction).
The specification of
funct2 is:
Fortran Interface
Integer, Intent (In) :: n
Integer, Intent (Inout) :: iuser(*)
Real (Kind=nag_wp), Intent (In) :: xc(n)
Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
Real (Kind=nag_wp), Intent (Out) :: fc, gc(n)

C Header Interface
#include <nagmk26.h>
void 
funct2 (const Integer *n, const double xc[], double *fc, double gc[], Integer iuser[], double ruser[]) 

 1: $\mathbf{n}$ – Integer Input

On entry: the number $n$ of variables.
 2: $\mathbf{xc}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input

On entry: the point $x$ at which the function and derivatives are required.
 3: $\mathbf{fc}$ – Real (Kind=nag_wp) Output

On exit: the value of the function $F$ at the current point $x$.
 4: $\mathbf{gc}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Output

On exit: ${\mathbf{gc}}\left(\mathit{j}\right)$ must be set to the value of the first derivative $\frac{\partial F}{\partial {x}_{\mathit{j}}}$ at the point $x$, for $\mathit{j}=1,2,\dots ,n$.
 5: $\mathbf{iuser}\left(*\right)$ – Integer array User Workspace
 6: $\mathbf{ruser}\left(*\right)$ – Real (Kind=nag_wp) array User Workspace

funct2 is called with the arguments
iuser and
ruser as supplied to
e04kyf. You should use the arrays
iuser and
ruser to supply information to
funct2.
funct2 must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which
e04kyf is called. Arguments denoted as
Input must
not be changed by this procedure.
Note: funct2 should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by
e04kyf. If your code inadvertently
does return any NaNs or infinities,
e04kyf is likely to produce unexpected results.
 4: $\mathbf{bl}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input/Output

On entry: the lower bounds
${l}_{j}$.
If
ibound is set to
$0$, you must set
${\mathbf{bl}}\left(\mathit{j}\right)$ to
${l}_{\mathit{j}}$, for
$\mathit{j}=1,2,\dots ,n$. (If a lower bound is not specified for a particular
${x}_{\mathit{j}}$, the corresponding
${\mathbf{bl}}\left(\mathit{j}\right)$ should be set to
$-{10}^{6}$.)
If
ibound is set to
$3$, you must set
${\mathbf{bl}}\left(1\right)$ to
${l}_{1}$;
e04kyf will then set the remaining elements of
bl equal to
${\mathbf{bl}}\left(1\right)$.
On exit: the lower bounds actually used by e04kyf.
 5: $\mathbf{bu}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input/Output

On entry: the upper bounds
${u}_{j}$.
If
ibound is set to
$0$, you must set
${\mathbf{bu}}\left(\mathit{j}\right)$ to
${u}_{\mathit{j}}$, for
$\mathit{j}=1,2,\dots ,n$. (If an upper bound is not specified for a particular
${x}_{j}$, the corresponding
${\mathbf{bu}}\left(j\right)$ should be set to
${10}^{6}$.)
If
ibound is set to
$3$, you must set
${\mathbf{bu}}\left(1\right)$ to
${u}_{1}$;
e04kyf will then set the remaining elements of
bu equal to
${\mathbf{bu}}\left(1\right)$.
On exit: the upper bounds actually used by e04kyf.
 6: $\mathbf{x}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input/Output

On entry: ${\mathbf{x}}\left(\mathit{j}\right)$ must be set to a guess at the $\mathit{j}$th component of the position of the minimum, for $\mathit{j}=1,2,\dots ,n$. The routine checks the gradient at the starting point, and is more likely to detect any error in your programming if the initial ${\mathbf{x}}\left(j\right)$ are nonzero and mutually distinct.
On exit: the lowest point found during the calculations. Thus, if ${\mathbf{ifail}}={\mathbf{0}}$ on exit, ${\mathbf{x}}\left(j\right)$ is the $j$th component of the position of the minimum.
 7: $\mathbf{f}$ – Real (Kind=nag_wp) Output

On exit: the value of
$F\left(x\right)$ corresponding to the final point stored in
x.
 8: $\mathbf{g}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Output

On exit: the value of
$\frac{\partial F}{\partial {x}_{\mathit{j}}}$ corresponding to the final point stored in
x, for
$\mathit{j}=1,2,\dots ,n$; the value of
${\mathbf{g}}\left(j\right)$ for variables not on a bound should normally be close to zero.
 9: $\mathbf{iw}\left({\mathbf{liw}}\right)$ – Integer array Output

On exit: if
${\mathbf{ifail}}={\mathbf{0}}$,
${\mathbf{3}}$ or
${\mathbf{5}}$, the first
n elements of
iw contain information about which variables are currently on their bounds and which are free. Specifically, if
${x}_{i}$ is:
– 
fixed on its upper bound, ${\mathbf{iw}}\left(i\right)$ is $-1$; 
– 
fixed on its lower bound, ${\mathbf{iw}}\left(i\right)$ is $-2$; 
– 
effectively a constant (i.e., ${l}_{j}={u}_{j}$), ${\mathbf{iw}}\left(i\right)$ is $-3$; 
– 
free, ${\mathbf{iw}}\left(i\right)$ gives its position in the sequence of free variables. 
In addition, ${\mathbf{iw}}\left({\mathbf{n}}+1\right)$ contains the number of free variables (i.e., ${n}_{z}$). The rest of the array is used as workspace.
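A small C helper can make these exit codes readable when inspecting results. The helper name is hypothetical; note that Fortran's ${\mathbf{iw}}\left(i\right)$ corresponds to the 0-based `iw[i-1]` in C.

```c
/* Map an exit value of iw(i) to a human-readable status for x_i,
   following the convention described above: negative codes mark fixed
   variables, positive values give the position among the free variables. */
static const char *bound_status(long iw_i)
{
    switch (iw_i) {
        case -1: return "fixed on upper bound";
        case -2: return "fixed on lower bound";
        case -3: return "effectively constant (l_j = u_j)";
        default: return "free";
    }
}
```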
 10: $\mathbf{liw}$ – Integer Input

On entry: the dimension of the array
iw as declared in the (sub)program from which
e04kyf is called.
Constraint:
${\mathbf{liw}}\ge {\mathbf{n}}+2$.
 11: $\mathbf{w}\left({\mathbf{lw}}\right)$ – Real (Kind=nag_wp) array Output

On exit: if ${\mathbf{ifail}}={\mathbf{0}}$, ${\mathbf{3}}$ or ${\mathbf{5}}$,
${\mathbf{w}}\left(\mathit{i}\right)$ contains the $\mathit{i}$th element of the projected gradient vector ${g}_{z}$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$. In addition, ${\mathbf{w}}\left({\mathbf{n}}+1\right)$ contains an estimate of the condition number of the projected Hessian matrix (i.e., $k$). The rest of the array is used as workspace.
 12: $\mathbf{lw}$ – Integer Input

On entry: the dimension of the array
w as declared in the (sub)program from which
e04kyf is called.
Constraint:
${\mathbf{lw}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(10\times {\mathbf{n}}+{\mathbf{n}}\times \left({\mathbf{n}}-1\right)/2,11\right)$.
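The two workspace constraints (on liw above and on lw here) can be computed directly when sizing the arrays; a small C sketch with hypothetical helper names:

```c
/* Minimum workspace sizes for e04kyf, from the stated constraints:
   liw >= n + 2 and lw >= max(10*n + n*(n-1)/2, 11). */
static int liw_needed(int n)
{
    return n + 2;
}

static int lw_needed(int n)
{
    int lw = 10*n + n*(n - 1)/2;   /* n*(n-1)/2 is always an integer */
    return lw > 11 ? lw : 11;
}
```

The `max(..., 11)` floor matters only for very small problems; for $n\ge 1$ with $10\times n<11$, i.e., $n=1$, the floor of 11 applies.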
 13: $\mathbf{iuser}\left(*\right)$ – Integer array User Workspace
 14: $\mathbf{ruser}\left(*\right)$ – Real (Kind=nag_wp) array User Workspace

iuser and
ruser are not used by
e04kyf, but are passed directly to
funct2 and may be used to pass information to this routine.
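A common pattern is to pass model constants through ruser and, say, an evaluation counter through iuser. A C sketch for a hypothetical objective $F\left(x\right)=a{x}_{1}^{2}+b{x}_{2}^{2}$ (the coefficients, counter and `Integer` typedef are all illustrative assumptions):

```c
typedef long Integer;  /* stands in for the NAG Integer type */

/* Hypothetical objective F(x) = a*x1^2 + b*x2^2, with a = ruser[0] and
   b = ruser[1] supplied by the caller; iuser[0] counts evaluations. */
static void funct2(const Integer *n, const double xc[], double *fc,
                   double gc[], Integer iuser[], double ruser[])
{
    double a = ruser[0], b = ruser[1];
    (void)n;
    iuser[0] += 1;                       /* evaluation counter */
    *fc = a*xc[0]*xc[0] + b*xc[1]*xc[1];
    gc[0] = 2.0*a*xc[0];
    gc[1] = 2.0*b*xc[1];
}
```

The same iuser and ruser arrays given to e04kyf arrive unchanged at every call of funct2, so no global variables are needed.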
 15: $\mathbf{ifail}$ – Integer Input/Output

On entry:
ifail must be set to
$0$, $1$ or $-1$. If you are unfamiliar with this argument you should refer to
Section 3.4 in How to Use the NAG Library and its Documentation for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value
$1$ or $-1$ is recommended. If the output of error messages is undesirable, then the value
$1$ is recommended. Otherwise, because for this routine the values of the output arguments may be useful even if
${\mathbf{ifail}}\ne {\mathbf{0}}$ on exit, the recommended value is
$-1$.
When the value $-1$ or $1$ is used it is essential to test the value of ifail on exit.
On exit:
${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see
Section 6).
6
Error Indicators and Warnings
If on entry
${\mathbf{ifail}}=0$ or
$-1$, explanatory error messages are output on the current error message unit (as defined by
x04aaf).
Note: e04kyf may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the routine:
 ${\mathbf{ifail}}=1$

On entry,  ${\mathbf{n}}<1$, 
or  ${\mathbf{ibound}}<0$, 
or  ${\mathbf{ibound}}>3$, 
or  ${\mathbf{ibound}}=0$ and ${\mathbf{bl}}\left(j\right)>{\mathbf{bu}}\left(j\right)$ for some $j$, 
or  ${\mathbf{ibound}}=3$ and ${\mathbf{bl}}\left(1\right)>{\mathbf{bu}}\left(1\right)$, 
or  ${\mathbf{liw}}<{\mathbf{n}}+2$, 
or  ${\mathbf{lw}}<\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(11,10\times {\mathbf{n}}+{\mathbf{n}}\times \left({\mathbf{n}}-1\right)/2\right)$. 
 ${\mathbf{ifail}}=2$

There have been
$100\times n$ function evaluations, yet the algorithm does not seem to be converging. The calculations can be restarted from the final point held in
x. The error may also indicate that
$F\left(x\right)$ has no minimum.
 ${\mathbf{ifail}}=3$
The conditions for a minimum have not all been met but a lower point could not be found and the algorithm has failed.
 ${\mathbf{ifail}}=4$
An overflow has occurred during the computation. This is an unlikely failure, but if it occurs you should restart at the latest point given in
x.
 ${\mathbf{ifail}}=5$
 ${\mathbf{ifail}}=6$
 ${\mathbf{ifail}}=7$
 ${\mathbf{ifail}}=8$

There is some doubt about whether the point
$x$ found by
e04kyf is a minimum. The degree of confidence in the result decreases as
ifail increases. Thus, when
${\mathbf{ifail}}={\mathbf{5}}$ it is probable that the final
$x$ gives a good estimate of the position of a minimum, but when
${\mathbf{ifail}}={\mathbf{8}}$ it is very unlikely that the routine has found a minimum.
 ${\mathbf{ifail}}=9$

In the search for a minimum, the modulus of one of the variables has become very large
$\left(\sim {10}^{6}\right)$. This indicates that there is a mistake in
funct2, that your problem has no finite solution, or that the problem needs rescaling (see
Section 9).
 ${\mathbf{ifail}}=10$

It is very likely that you have made an error in forming the gradient.
 ${\mathbf{ifail}}=-99$
An unexpected error has been triggered by this routine. Please
contact
NAG.
See
Section 3.9 in How to Use the NAG Library and its Documentation for further information.
 ${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See
Section 3.8 in How to Use the NAG Library and its Documentation for further information.
 ${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See
Section 3.7 in How to Use the NAG Library and its Documentation for further information.
If you are dissatisfied with the result (e.g., because
${\mathbf{ifail}}={\mathbf{5}}$,
${\mathbf{6}}$,
${\mathbf{7}}$ or
${\mathbf{8}}$), it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure. If persistent trouble occurs it may be advisable to try
e04kzf.
7
Accuracy
A successful exit (
${\mathbf{ifail}}={\mathbf{0}}$) is made from
e04kyf when (B1, B2 and B3) or B4 hold, and the local search confirms a minimum, where
 $\mathrm{B1}\equiv {\alpha}^{\left(k\right)}\times \Vert {p}^{\left(k\right)}\Vert <\left({x}_{\mathit{tol}}+\sqrt{\epsilon}\right)\times \left(1.0+\Vert {x}^{\left(k\right)}\Vert \right)$
 $\mathrm{B2}\equiv \left|{F}^{\left(k\right)}-{F}^{\left(k-1\right)}\right|<\left({x}_{\mathit{tol}}^{2}+\epsilon \right)\times \left(1.0+\left|{F}^{\left(k\right)}\right|\right)$
 $\mathrm{B3}\equiv \Vert {g}_{z}^{\left(k\right)}\Vert <\left({\epsilon}^{1/3}+{x}_{\mathit{tol}}\right)\times \left(1.0+\left|{F}^{\left(k\right)}\right|\right)$
 $\mathrm{B4}\equiv \Vert {g}_{z}^{\left(k\right)}\Vert <0.01\times \sqrt{\epsilon}$.
(Quantities with superscript
$k$ are the values at the
$k$th iteration of the quantities mentioned in
Section 3,
${x}_{\mathit{tol}}=100\sqrt{\epsilon}$,
$\epsilon $ is the
machine precision and
$\Vert .\Vert $ denotes the Euclidean norm. The vector
${g}_{z}$ is returned in the array
w.)
If
${\mathbf{ifail}}={\mathbf{0}}$, then the vector in
x on exit,
${x}_{\mathrm{sol}}$, is almost certainly an estimate of the position of the minimum,
${x}_{\mathrm{true}}$, to the accuracy specified by
${x}_{\mathit{tol}}$.
If
${\mathbf{ifail}}={\mathbf{3}}$ or
${\mathbf{5}}$,
${x}_{\mathrm{sol}}$ may still be a good estimate of
${x}_{\mathrm{true}}$, but the following checks should be made. Let
$k$ denote an estimate of the condition number of the projected Hessian matrix at
${x}_{\mathrm{sol}}$. (The value of
$k$ is returned in
${\mathbf{w}}\left({\mathbf{n}}+1\right)$). If
(i) 
the sequence $\left\{F\left({x}^{\left(k\right)}\right)\right\}$ converges to $F\left({x}_{\mathrm{sol}}\right)$ at a superlinear or a fast linear rate, 
(ii) 
${\Vert {g}_{z}\left({x}_{\mathrm{sol}}\right)\Vert}^{2}<10.0\times \epsilon $ and 
(iii) 
$k<1.0/\Vert {g}_{z}\left({x}_{\mathrm{sol}}\right)\Vert $, 
then it is almost certain that
${x}_{\mathrm{sol}}$ is a close approximation to the position of a minimum. When (ii) is true, then usually
$F\left({x}_{\mathrm{sol}}\right)$ is a close approximation to
$F\left({x}_{\mathrm{true}}\right)$.
When a successful exit is made then, for a computer with a mantissa of $t$ decimals, one would expect to get about $t/2-1$ decimals accuracy in $x$, and about $t-1$ decimals accuracy in $F$, provided the problem is reasonably well scaled.
8
Parallelism and Performance
e04kyf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the
X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the
Users' Note for your implementation for any additional implementation-specific information.
9
Further Comments
The number of iterations required depends on the number of variables, the behaviour of
$F\left(x\right)$ and the distance of the starting point from the solution. The number of operations performed in an iteration of
e04kyf is roughly proportional to
${n}^{2}$. In addition, each iteration makes at least one call of
funct2. So, unless
$F\left(x\right)$ and the gradient vector can be evaluated very quickly, the run time will be dominated by the time spent in
funct2.
Ideally the problem should be scaled so that at the solution the value of $F\left(x\right)$ and the corresponding values of ${x}_{1},{x}_{2},\dots ,{x}_{n}$ are each in the range $\left(-1,+1\right)$, and so that at points a unit distance away from the solution, $F$ is approximately a unit value greater than at the minimum. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that e04kyf will take less computer time.
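One way to follow this advice without touching the underlying model is to wrap funct2 so that the minimization works in scaled variables ${y}_{j}={x}_{j}/{s}_{j}$; by the chain rule, $\partial F/\partial {y}_{j}={s}_{j}\,\partial F/\partial {x}_{j}$. In the C sketch below the scale factors and the objective are illustrative guesses, not anything prescribed by the routine:

```c
typedef long Integer;  /* stands in for the NAG Integer type */

static const double scale[2] = {1.0e3, 1.0e-2};  /* illustrative guesses */

/* Badly scaled hypothetical objective: minimum near x = (2000, 0.03). */
static void funct2_unscaled(const Integer *n, const double xc[], double *fc,
                            double gc[], Integer iuser[], double ruser[])
{
    (void)n; (void)iuser; (void)ruser;
    *fc = (xc[0] - 2000.0)*(xc[0] - 2000.0)
        + (xc[1] - 0.03)*(xc[1] - 0.03);
    gc[0] = 2.0*(xc[0] - 2000.0);
    gc[1] = 2.0*(xc[1] - 0.03);
}

/* Wrapper to be passed as funct2: works in y, where x_j = s_j * y_j,
   so the solution components become of order one. */
static void funct2_scaled(const Integer *n, const double yc[], double *fc,
                          double gc[], Integer iuser[], double ruser[])
{
    double xc[2];
    Integer j;
    for (j = 0; j < *n; j++) xc[j] = scale[j] * yc[j];
    funct2_unscaled(n, xc, fc, gc, iuser, ruser);
    for (j = 0; j < *n; j++) gc[j] *= scale[j];   /* chain rule */
}
```

The bounds bl and bu must, of course, be divided by the same scale factors, and the solution returned in x must be multiplied by them to recover the original variables.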
10
Example
A program to minimize
subject to
starting from the initial guess
$\left(3,-1,0,1\right)$.
10.1
Program Text
Program Text (e04kyfe.f90)
10.2
Program Data
None.
10.3
Program Results
Program Results (e04kyfe.r)